Research doc.
= Key References =

[[File:7f202c26c5bbb5d5166c9cc8d185ccbd.jpeg]]

''Trevor Paglen, A Study of Invisible Images, Megalith, pigment print, 82 1/2 × 72 inch (2017)''
[http://www.metropictures.com/exhibitions/trevor-paglen4/selected-works?view=slider#4]

[[File:Schermafbeelding 2018-01-10 om 22.52.33.png]]

''Sterling Crispin, Data-Masks, Stages of evolution for Chronos (Greco) (2013–2015)''
[http://www.sterlingcrispin.com/data-masks.html]

''Memo Akten, Learning to see: Hello, World! (2017)''
[https://vimeo.com/213295825]
= Experiments =

[[File:Schermafbeelding 2018-01-11 om 01.10.18.png]]

''The above image comes from Stanford's CS 231N course, taught by Andrej Karpathy and Justin Johnson.''
= Insights from Experimentation =

[[File:Schermafbeelding 2017-11-30 om 16.23.57.png]]

[[File:Screenshot from 2017-12-14 13-28-52.png]][[File:Screenshot from 2017-12-14 15-39-21.png]][[File:Screenshot from 2017-12-14 12-32-17.png]][[File:Screenshot from 2017-12-14 12-50-31.png]][[File:Screenshot from 2017-12-14 13-10-31.png]]
= Realised work =

'Making Meaning of the Sun'
lightbox, 57.5 × 65.5 cm, wood, glass, print, LED

'Making Meaning of the Sun'
video, 5:00, loop, fragments:

[[File:Volcano.jpg]]

[[File:Nematode.jpg]]

[[File:Orange.jpg]]
= Final Conclusions =

During the project ‘Making Meaning of the Sun’ I explored for the first time the field of algorithms, artificial intelligence and convolutional neural networks. I committed myself to the question of whether machine perception can expand the field of photography. The contemporary revolution in photography calls for new reflection and debate. Photography has gone beyond its existing framework and perception, and the idea of ‘seeing machines’ helps me to understand what it has become. Machine perception will shape our viewing practices as well as our image-making devices. Human or nonhuman, this double point of view will expand our understanding of our vision of life, and thereby expand the definition of photography. In this posthumanist world the established ways of seeing will change and go beyond our human limitations. "We see the world, not as it is, but as we are" – Talmud.

I think the artistic research paid off in the insights I acquired. More than ever, I am finding new ways of approaching photography, and that will have positive consequences for my practice. But looking back at the project, I still only dimly understood the possibilities of the future of photography. Zylinska introduced me to the absurd idea of a photographic practice in a nonhuman culture "after the human". I think I am still a small thinker, and not critical enough towards the questions I posed. Now I think the potential is there, but I let myself down in the experimental phase when I look up to the work of Memo Akten: a deep neural network opening its eyes for the first time, not pre-trained, starting off completely blank, just trying to understand what it sees. You really see the network constantly learning. This kind of technical approach was not visible to me, because I was too focused on one thing.

In closing, I think this is a very exciting subject that I want to explore further in practice and theory.
= Bibliography =

Trevor Paglen. 'Invisible Images (Your Pictures Are Looking at You)'. The New Inquiry (2016). Online. Internet. 08.12.2016. Available: [https://thenewinquiry.com/invisible-images-your-pictures-are-looking-at-you/]

Trevor Paglen. 'A Study of Invisible Images'. Press release, Metro Pictures. Online. Internet. Available: [http://www.metropictures.com/exhibitions/trevor-paglen4/press-release]

Blaise Agüera y Arcas. 'Art in the age of machine intelligence'. Medium (2016). Online. Internet. 23 February 2016. Available.

Adit Deshpande. 'A Beginner's Guide To Understanding Convolutional Neural Networks'. Online. Internet. Available: [https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/]

Memo Akten. 'Learning to see' (2017). Online. Internet. Available: [http://www.memo.tv/learning-to-see/]

Aysegul. 'visualizing what ConvNets learn with camera'. GitHub, 30.09.2015. Online. Internet. Available: [https://github.com/Aysegul/torch-visbox]
Latest revision as of 00:39, 16 January 2018
Foreword/Introduction
First, let me introduce myself: I am Sanne Schilder, a fourth-year photography student. Like a novel, I wanted photography to be an extension of myself, collecting images of the times and places where I was tempted to press the shutter. Over the course of my studies, those thoughts disappeared completely.
My urgency lies within the image. Ever since I was little, I was fascinated by how the image mediated between the world and myself. It was an entrance to making the world imaginable. It did not take long before I wanted to harvest my own images of the world, and in order to collect and appropriate the image, photography was the medium I held on to. At the beginning I made snapshots for my own satisfaction, but that satisfaction partly disappeared over time. I missed the craft and the possibilities of exploring the material, so I experimented with photosensitive films and papers, chemicals and webcams. Still, I was looking for a different kind of knowledge. Photography, as we have traditionally known it, has undergone a transition. The further I came to realize that, the more I wanted to dissociate myself from its tradition. The deluge of images, the saturation, has prompted me to ask whether it still makes sense to photograph within the existing framework. Everyone with a camera these days can make pictures without knowing about the complex processes.
So, as an image-maker in the digital age, I think about new ways of seeing. When the world changes, the image changes with it. Today, in the age of smartphones, Google Earth, satellites and CCTV, image practices have become all-pervasive. The definition of photography expands, opening new possibilities. Fred Ritchin once remarked: “Photography, as we have known it, is both ending and enlarging, with an evolving medium hidden inside it as in a Trojan horse, camouflaged, for the moment, as if it were nearly identical: its doppelganger, only better.” Without question, the photographic landscape and our image-making devices will change, and they will play a fundamental role in many basic elements of our lives. This development makes me very curious, and that is why I do not abandon my practice. I haven’t seen anything yet.
Abstract
Humans are not the only ones who perceive the world; as a topic, I turned my attention to machine vision. This research offers ways of thinking about photography that may enlarge our capacity to explore and perhaps even improve image-making. The development of the medium has always been closely linked to the technological capabilities of a culture. Now that we have entered the digital age, something unthinkable has happened: images have become disconnected from human action and human vision. I attempt to see from their perspective; how do machines perceive the world, and can they be a legitimate voice in the discourse of photography? No doubt this innovation will affect visual culture and transform society. Through this project I explore in what ways machine perception can expand the field of photography, open our view of external reality, and achieve aesthetic integration.
Central Question
In what way can machine perception expand the field of photography?
Relevance of the Topic
The contemporary revolution in photography offers opportunities that exceed our wildest expectations. The medium is embedded in our everyday life on many different levels: from automated license plate recognition systems, Google Earth, Instagram and drone media to the advent of infinite image storage. It comprises many different kinds of technologies, imaging devices and practices. The photographic landscape has ultimately transformed society and affected visual culture. “Photography can therefore be described as a technology of life: it not only represents life but also shapes and regulates it–while also documenting or even envisioning its demise.” (Zylinska) To my mind, photography as it was once understood has gone beyond its existing framework and perception. The definition extends to help us see what photography has become. Photographs are no longer positioned as discrete objects in the traditional way. Billions of images are added daily, and this saturation has prompted me to ask whether it still makes sense to photograph within the existing framework. Everyone these days has cameras and image-processing software at his or her fingertips; knowledge of craftsmanship is no longer necessary to produce an image quality that was once only possible with years of practice and training in equipment. I attempt to find a new perspective through exploring this deluge of images; I had to look critically again. The developments have gone so far that over the last decade or so, something radical has happened. In this posthumanist world, the human is no longer automatically the subject who sees. The shift has barely been noticed: an invisible landscape of images produced by machines for other machines to see. Trevor Paglen introduced me to the idea of photography as "seeing machines" and to questions such as: How do we see the world with machines? What happens if we think about photography in terms of imaging systems instead of images? How can we think about images made by machines for other machines? What are the implications of a world in which photography is both ubiquitous and, curiously, largely invisible? Paglen proposed a simple definition that has far-reaching consequences: seeing machines. (Paglen)
“Now objects perceive me,” the painter Paul Klee wrote in his notebook, according to Paul Virilio in The Vision Machine. The French theorist explains that we are on the verge of synthetic vision, the automation of perception: “a machine that would be capable not only of recognising the contours of shapes, but also of completely interpreting the visual field.” (Virilio 1994) Historically, the performance of machines has gone beyond expectations, and in the field of perception, the process of ordering sensory information and turning it into concepts, this understanding has become a reality. Through deep learning, or A.I., computers and devices are able to do some of the things that brains do. Inspired by natural evolution, artificial neural networks use a process called evolutionary algorithms to generate variations of patterns, composed in several layers and unrecognizable to humans. This set of patterns might look absurd to humans, but for machine vision they are the most realistic representation of a certain thing. The system singles out the best “performing” ones: the images it classifies with a high probability. What came completely unexpectedly is that machine perception is connected with machine creativity. Can machines turn a concept into something out there in the world? Blaise Agüera y Arcas, a software engineer, software architect and designer, explains this connection: “I think Michelangelo had a penetrating insight into this dual relationship between perception and creativity. This is a famous quote of his: ‘Every block of stone has a statue inside of it, and the job of the sculptor is to discover it.’ So I think that what Michelangelo was getting at is that we create by perceiving, and that perception itself is an act of imagination and is the stuff of creativity.” (Arcas) If you think from this point of view, that perception and creativity are intimately connected, then any perceiving creature is able to create, and creativity is by no means uniquely human.
In closing, I think there is a new frontier, and it will challenge the established ways of seeing. Embracing machine perception will introduce us to a “New Vision”. This double point of view will expand our understanding, just as the invention of photography did. I wonder: if photography became this life-shaping medium, what will its nearly identical replica achieve? The development of photography has, from its origin, been a process of increasing awareness of the concept of knowledge, supporting our visual capacities where they were inadequate. “Embracing nonhuman vision as both a concept and a mode of being in the world will allow humans to see beyond the humanist limitations of their current philosophies and worldviews, to unsee themselves in their godlike positioning of both everywhere and nowhere, and to become reanchored and reattached again.” (Zylinska 15) As an artist I find it very exciting to adopt this “New Vision”. As machine intelligence develops, can it be a legitimate voice in the discourse of my practice? In which aspects will it engage with external reality and aesthetic integration? In some ways this is hard to imagine from today’s point of view, but I think it can be a new entrance to making the world imaginable.
Hypothesis
Approaching ‘seeing machines’ will introduce me to a new field of algorithms. What appeals to me is that they rely on photographic technologies, so in the end I still work with images. By focussing on aspects of machine-to-machine communication, I hope to find life in photography again and to stimulate new approaches. In the ideal scenario, seeing machines become a tool that expands our minds about how we see the world: escaping my personal point of view to gain new insights. This understanding of ‘seeing machines’ has to take into account that it is an extension of the photographic medium; the consequence is that I have to acquaint myself with the history of machine image production. Through this artistic research I commit myself to questions that call for reflection and debate. By adopting this “New Vision” I wanted to make work that is only perceivable by machines: photographs that are not optimized to be seen by humans. The world they have shaped looks nothing like the world we thought we lived in.
Research Approach
‘Seeing Machines’ was the working title of my artistic research. I came across the concept a year ago in an interview with Ola Lanko. The term is much more all-encompassing than ‘photography’, which leaves little to the imagination, although I could not exactly explain what it would contain, or whether it should be accepted as photography. So, to reinvent the medium for myself, I searched for literature and projects within this domain.
Paglen introduced me to the idea of ‘seeing machines’, and from this question-based text I was confident enough to commit myself to the question of whether machine perception can expand the field of photography. Making the content of machine image algorithms visible leads back to Paglen as well, in his recent exhibition ‘A Study of Invisible Images’. It reveals the hidden spectrum of machine-to-machine communication and provides photographs in answer to the questions he poses. The study takes reality as a starting point and makes a slice of the process visible. But his research approach, collaborating with software developers and computer scientists as artist-in-residence at Stanford University, will not be the same as mine. I am very curious about these rich topics of exploration, and I want to undergo the process myself in order to explore my own relation to artificial intelligence.
To break with the traditional use of photography, I experiment with new methods to give my own twist to its direction. The path I went down is heavily influenced by the new methods that recent technologies provide. I was not familiar with artificial intelligence, but I was motivated to master the technical difficulties in an attempt to understand what photography can become. These insights could be very valuable for my development as an artist if I manage to understand the historical consciousness. Through this research I present my insights, and that should be done transparently and with a critical approach.
Key References
Trevor Paglen, A Study of Invisible Images, Megalith, pigment print, 82 1/2 × 72 inch (2017) [1]
Sterling Crispin, Data-Masks, Stages of evolution for Chronos (Greco)(2013-2015) [2]
Memo Akten, Learning to see: Hello, World! (2017) [3]
Literature
My perception of photography isn’t what it once was. Naturally, I didn’t see the complexity of the image; the image is rich in many different ways. Literature opened my eyes. A book that still dominates thinking about photography is Susan Sontag’s On Photography. My awareness of the medium developed from naivety into a kind of abhorrence. Indeed, it actually influenced my relation to photography: I no longer wanted to see myself as a photographer. Vilém Flusser likewise confronted me with the inevitable fact that the image is no longer accepted as an automatically depicted world. All the layers that adhere to the image are unavoidable. […] ‘Maar bloed kruipt waar het niet gaan kan’, a Dutch expression meaning that one’s true passion always resurfaces. I want to explore this visual language and use the complexity and symbols of the image. The ambition is to create meaningful content, a tool for my own vision and knowledge of life. The current time asks us to consider new perspectives, and through Fred Ritchin, Trevor Paglen, Andreas Broeckmann and Joanna Zylinska I became aware of this. What photography has become appears alien to me.
“What photography evolves into is, to a significant degree, up to those interested in abetting its transformation; the possibilities for change are freshly palpable. The stakes are momentous: our outlooks on life, both perceptual and conceptual, are sure to be deeply affected. What looms before us finally is not simply a question of media but one that, when answered, will help determine, to a degree greater than we now think, our own uncertain fates.” (Ritchin 185)
Maybe these complex networked media are part of a larger structure which goes beyond the frame of a device, beyond the frame of photography. But to come back to my perception of photography: I had never asked myself before how the medium should work. That was a big revelation for me in practice. I think Paul Virilio was right: a ‘synthetic vision’ will be seen as “competition” in the visual field. What will be the effects, the theoretical and practical consequences, for our own ‘vision of the world’? With this still only dimly understood, I started the project ‘Making Meaning of the Sun’.
Experiments
There was one simple thing I first wanted to discover: how do computers see the world? I approached this in a technical way. The motive was an image, entirely abstract to the human eye, produced by machine vision. Artificial neural networks use a process called evolutionary algorithms to generate variations of patterns. This set of patterns might look absurd to humans, but for machine vision they are the most realistic representation of a certain thing. The system singles out the best “performing” ones: the images it classifies with a high probability, as I mentioned before. The goal is to let computers understand human vision. I wanted to explore this field and had to master the technical part, no matter what. This would be my first experience with AI.
The image in question comes from Stanford's CS 231N course, taught by Andrej Karpathy and Justin Johnson.
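The selection loop described above, generate variations, score them, keep the best performers, can be sketched in a few lines. This is a toy illustration, not actual course code: `classify` is a hypothetical stand-in that scores how "sun-like" a small greyscale image is, where a real system would use a trained convnet's class probability.

```python
import random

# Toy stand-in for a trained classifier: it scores how "sun-like" an
# 8x8 greyscale image is (bright centre, dark edges). A real system
# would use a trained convnet's class probability here instead.
def classify(img):
    centre = sum(img[y][x] for y in range(2, 6) for x in range(2, 6))
    edge = sum(img[y][x] for y in range(8) for x in range(8)) - centre
    return centre / 16 - edge / 48

def mutate(img, rng):
    # Copy the image and nudge one random pixel, clamped to [0, 1].
    new = [row[:] for row in img]
    y, x = rng.randrange(8), rng.randrange(8)
    new[y][x] = min(1.0, max(0.0, new[y][x] + rng.uniform(-0.3, 0.3)))
    return new

def evolve(generations=200, population=20, seed=0):
    rng = random.Random(seed)
    pool = [[[rng.random() for _ in range(8)] for _ in range(8)]
            for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=classify, reverse=True)   # single out the best performers
        keep = pool[:population // 2]
        pool = keep + [mutate(rng.choice(keep), rng)
                       for _ in range(population - len(keep))]
    return max(pool, key=classify)

best = evolve()
print(f"best score: {classify(best):.3f}")
```

The evolved images end up looking like bright blobs on dark noise, meaningful to the scoring function, abstract to us, which is exactly the effect the patterns above have on human viewers.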
“Instead, I would propose approaching “images made by and for machines” as objects of human perception and interpretation. The notion of the “image,” like that of the “machine,” is most usefully understood as a companion concept to the human. In the same way as the machine is a designation employed by a subject in order to address an aspect of the apparatus that constructs both subject and machine, in the same way an image is predicated on a human subject for whom the “image” is a particular form in which the world reveals itself.” (Broeckmann 127)
[...]
Insights from Experimentation
To gain insight into how computers approach the image, I worked within the field of algorithms and machine learning. I had to find myself a tool: code on GitHub that visualizes the activations produced on each layer of a trained convnet as it processes an image or video. I was not familiar with this, so with some help from other students I managed to run the script with Torch7, Python and OpenCV.
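What such a visualiser does per layer can be sketched without any framework: convolve the input with a filter, then rescale the activation map to grey values for display. In the real tool the filters are the trained weights of each layer of the convnet; the edge filter below is hand-made purely for illustration.

```python
# Sketch of one layer's activation view: convolve an image with a filter,
# then rescale the activation map to 0-255 grey values for display.
# The visualisation tool does this with the trained filters of each layer.
EDGE_FILTER = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]   # hand-made vertical-edge filter, not a learned one

def convolve(img, kernel):
    h, w, k = len(img), len(img[0]), len(kernel)
    return [[sum(kernel[i][j] * img[y + i][x + j]
                 for i in range(k) for j in range(k))
             for x in range(w - k + 1)]
            for y in range(h - k + 1)]

def to_grey(act):
    # Linearly map the activation range onto displayable grey values.
    flat = [v for row in act for v in row]
    lo, hi = min(flat), max(flat)
    scale = 255.0 / (hi - lo) if hi != lo else 0.0
    return [[int((v - lo) * scale) for v in row] for row in act]

# A tiny image with one vertical bright edge; the activation lights up on it.
image = [[0, 0, 0, 10, 10],
         [0, 0, 0, 10, 10],
         [0, 0, 0, 10, 10],
         [0, 0, 0, 10, 10]]
print(to_grey(convolve(image, EDGE_FILTER)))  # prints [[0, 255, 255], [0, 255, 255]]
```

Stacking one such map per filter, for every layer, gives the grids of ghostly part-images the script renders on screen.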
The first result was a small convnet (convolutional neural network) that was fed an image of a bee as input. The output was, unexpectedly, text. This gave me my first insight into how the convnet singles out the best-performing candidates. It ranked the probabilities from cabbage butterfly up to the top result, bee, in 0.23880815505981 seconds. The range of possible answers depends on the dataset the convnet is pre-trained on; I dived into the documentation of the script and found a list of 845 objects and animals. A characteristic of this system is that it will always serve up an answer. This could be very valuable to the project, and it was.
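The ranking behaviour described above, where every input always gets an answer ordered by probability, comes down to a softmax over the network's raw output scores. A minimal sketch with invented scores: the labels are borrowed from ones mentioned in this document, but the numbers are made up, and the real list held 845 labels rather than four.

```python
import math

def softmax(scores):
    """Turn raw network outputs into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores ("logits") for a few of the pre-trained labels.
logits = {"bee": 4.1, "cabbage butterfly": 2.3, "orange": 0.4, "nematode": -1.0}

labels = list(logits)
probs = softmax(list(logits.values()))
ranking = sorted(zip(labels, probs), key=lambda lp: lp[1], reverse=True)

# The system always serves an answer: the top of the ranking,
# however low its absolute probability might be.
for label, p in ranking:
    print(f"{label}: {p:.3f}")
```

Because the probabilities always sum to 1 and something always sits at the top, the system can never refuse to answer. That is why it cheerfully identifies the sun as ‘orange’ or ‘nematode’.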
Artistic/Design Principles
The project has to be realised without the help of the traditional devices of photography (yes!), to escape my personal perception. As an image-maker I want to explore my own relation to A.I. So the criterion of the design is that the images are produced by deep learning.
Artistic/Design Proposal
By adopting a ‘New Vision’, the purpose was to make photographs for machines. A textbook example is ‘Data Masks’ by Sterling Crispin, created by reverse engineering facial recognition and detection algorithms. The face-detection masks are the result of layer upon layer of images that the system classifies as ‘human’. Pattern recognition is the essential part of my proposal.
Realised work
‘Making Meaning of the Sun’ presents images produced by machine-vision algorithms that analyse and identify images of the sun. Composed in several layers, they reveal the production of machine-made images, which are not optimized to be seen by humans, as well as obscure identifications by algorithms, ranging from ‘orange’ to ‘nematode’. The project explores in what ways machine perception can expand the field of photography and open our view of external reality and aesthetic integration.
I am not interested in the banal visuality of the sun. The sun is the object to which humans have given the most meaning. We can never face the sun directly, so our knowledge of it is based on images. How does a machine relate to the sun? It opens up a new view, comparable to a child learning to understand the world.
'Making Meaning of the Sun' lightbox, 57,5 x 65,5 cm, wood, glass, print, LED
'Making Meaning of the Sun' video, 5:00, loop, fragments:
Final Conclusions
During the project ‘Making Meaning of the Sun’ I explored for the first time the field of algorithms, artificial intelligence and convolutional neural networks. I engaged with the question of whether machine perception can expand the field of photography. The contemporary revolution in photography calls for new reflection and debate. Photography has gone beyond its existing framework and perception, and the idea of ‘seeing machines’ helps me to understand what it has become. Machine perception will shape our viewing practices as well as our image-making devices. Human or nonhuman, this double point of view will expand our understanding of our vision of life, and with it the definition of photography. In this posthumanist world the established ways of seeing will change and go beyond our human limitations. "We see the world, not as it is, but as we are" – Talmud. I think the artistic research paid off in the insights I acquired. More than ever before, I am finding new ways of approaching photography, which will have positive consequences for my practice.

But looking back at the project, I still only dimly understand the possibilities of the future of photography. Zylinska introduced me to the absurd idea of photographic practice in a nonhuman culture ‘after the human’. I think I am still a small thinker and not critical enough towards the questions I posed. Now I think the potential is there, but I let myself down in the experimental phase when I compare my work to that of Memo Akten: a deep neural network opening its eyes for the first time, not pre-trained, starting completely blank, just trying to understand what it sees. You really see the network constantly learning. These kinds of technical approaches were not visible to me, because I was too focused on one thing. In closing, I think this is a very exciting subject that I want to explore further, in practice and in theory.
Bibliography
Ritchin, Fred. After Photography. Norton, 2010.
Zylinska, Joanna. Nonhuman Photography. London: The MIT Press, 2017.
Flusser, Vilém. Een filosofie van de fotografie [Towards a Philosophy of Photography]. Trans. Marc Geerards. Utrecht: Uitgeverij IJzer, 2007.
Sontag, Susan. Over fotografie [On Photography]. Trans. Henny Scheepmaker. Amsterdam: De Bezige Bij, 2015.
Broeckmann, Andreas. Machine Art in the Twentieth Century. London: The MIT Press, 2016.
Virilio, Paul. The Vision Machine. Trans. Julie Rose. London: British Film Institute, 1994.
-
Trevor Paglen. ‘Is Photography Over?’ Fotomuseum Winterthur. (2014): Online. Internet. 03.03.2014. Available [4]
Trevor Paglen. ‘Seeing Machines.’ Fotomuseum Winterthur. (2014): Online. Internet. 13.03.2014. Available [5]
Trevor Paglen. ‘Scripts.’ Fotomuseum Winterthur. (2014): Online. Internet. 24.03.2014. Available [6]
Trevor Paglen. ‘Invisible Images (Your Pictures Are Looking at You).’ The New Inquiry. (2016): Online. Internet. 08.12.2016. Available [7]
http://www.metropictures.com/exhibitions/trevor-paglen4/press-release
Blaise Agüera y Arcas. ‘Art in the Age of Machine Intelligence.’ Medium. (2016): Online. Internet. 23 February 2016. Available [8]
Blaise Agüera y Arcas. ‘How Computers Are Learning to Be Creative.’ TED@BCG Paris: Online. Internet. Available [9]
Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. 'Understanding Neural Networks Through Deep Visualization' (2015): Online. Internet. 8.06.2015. Available
Adit Deshpande. 'A Beginner's Guide To Understanding Convolutional Neural Networks' (2016): Online. Internet. 20.07.2016. Available. [11]
Memo Akten. ‘Learning to See.’ (2017): Online. Internet. Available. [12]
Aysegul. ‘Visualizing what ConvNets learn with camera.’ GitHub. (2015): Online. Internet. 30.09.2015. Available. [13]