Seeing Machines
Once upon a time, the US army decided to develop a computer system for detecting camouflaged tanks. They built a neural network — a kind of artificial brain — and trained it on hundreds of photos of tanks hidden among trees, and hundreds of photos of trees without any tanks, until it could tell the difference between the two types of pictures. And they saved another few hundred images which the network hadn’t seen, in order to test it. When they showed it the second set of images, it performed perfectly: correctly separating the pictures with tanks in them from the ones without tanks. So the researchers sent in their network — and the army sent it straight back, claiming it was useless.
Upon further investigation, it turned out that the soldiers taking the photos had only had a tank to camouflage for a couple of days, when the weather had been great. After the tank was returned, the weather changed, and all the photos without a tank in them were taken under cloudy skies. As a result, the network had not learned to discriminate between tanks, but between weather conditions: it was very good at deciding if the weather in the photograph was sunny or overcast, but not much else. The moral of the story is that machines are great at learning; it’s just very hard to know what it is that they’ve learned. – James Bridle
James Bridle – Activations [1][2]
Hito Steyerl – How Not to Be Seen: A Fucking Didactic Educational .MOV File [3]
Sterling Crispin – Data Masks [4]
Trevor Paglen – A Study of Invisible Images
Adam Harvey – HyperFace [5]
TEXT
Paul Virilio – The Vision Machine (1994) [6]
Trevor Paglen – Is Photography Over? [7]
Computer vision
A glimpse of A.I.
Histogram of Oriented Gradients (HOG)
Deformable Parts Model (DPM)
Convolutional Neural Network (CNN)
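To get a quick feel for the first of these, HOG features can be computed in a few lines. A minimal sketch using scikit-image, with a hypothetical input file sun.jpg and illustrative parameter values:

```python
# Sketch: Histogram of Oriented Gradients (HOG) features for one image,
# using scikit-image; the file name and parameters are illustrative.
from skimage import io, color, img_as_ubyte
from skimage.exposure import rescale_intensity
from skimage.feature import hog

img = color.rgb2gray(io.imread("sun.jpg"))   # hypothetical input image

features, hog_image = hog(
    img,
    orientations=9,               # gradient-direction bins per cell
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    visualize=True,               # also return a picture of the gradients
)
io.imsave("hog.png", img_as_ubyte(rescale_intensity(hog_image, out_range=(0.0, 1.0))))
print(features.shape)             # one long descriptor vector
```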
EXP. CONVNET
Minor - 'Making Meaning of the Sun'
‘Making Meaning of the Sun’ shows images produced by machine-vision algorithms as they analyse and identify pictures of the sun. Composed in several layers, the works reveal the production of machine-made images, which are not optimized to be seen by humans, as well as the obscure identifications made by the algorithms, ranging from ‘orange’ to ‘nematode’. The project explores in what ways machine perception can expand the field of photography and open up our view of external reality and aesthetic integration.
I’m not interested in the banal visuality of the sun. The sun is the object to which humans have given the most meaning. We can never face the sun directly, so our knowledge of it is based on images. How does a machine relate to the sun? It opens up a new view, like a child learning to understand the world.
'Making Meaning of the Sun', lightbox, 57.5 × 65.5 cm, wood, glass, print, LED
'Making Meaning of the Sun' video, 5:00, loop, fragments:
Graduating in 4 days
Posthuman Photography
Technology intersects with the physical space around us.
For this workshop, Graduating in 4 Days, there were no rules. No copyright. I used my own footage from the project 'Making Meaning of the Sun' and experiments made before that. The text I used to explain the project was from Fred Ritchin's After Photography:
“Photography, as we have known it, is both ending and enlarging, with an evolving medium hidden inside it as in a Trojan horse, camouflaged, for the moment, as if it were nearly identical: its doppelganger, only better.”
For the presentation I'm thinking about several experiments that also come off the screen.
(!) Exp.: let the work interact with the audience through their own machines, to make it tangible... understandable... a glimpse of
(> Photoshop)
How can I process that in the experiments? (iPhone/Snapchat.) A different layer to communicate, like the stripes and lines on the exhibition floor in James Bridle's Activations.
Technical problems with the foregoing convnet, torch-visbox: after a month of trying, I gave up on it and went looking for alternatives, of which I found several. The first is with OpenFrameworks; it is not working that well, but it will be enough to start with. In order not to be dependent on the computers at school, I found a cloud service for machine learning, which will hopefully save a lot of time. The only thing that still works is the text output from torch-visbox, shown in the image below; maybe I can still use this.
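That kind of text output can also be reproduced outside torch-visbox. A minimal sketch, assuming torchvision ≥ 0.13 and a hypothetical input file sun.jpg; it is a stand-in I wrote for this note, not the torch-visbox code itself:

```python
# Sketch: print the top-5 ImageNet labels for one image, similar in
# spirit to the torch-visbox text output (not the original tool).
import torch
from torchvision import models
from torchvision.models import ResNet18_Weights
from PIL import Image

weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()            # resize, crop, normalize

img = Image.open("sun.jpg").convert("RGB")   # hypothetical input image
batch = preprocess(img).unsqueeze(0)         # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][idx]}: {p:.3f}")
```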
See inside a convnet, from Gene Kogan. [10][11] For now only a live feed.
What I was afraid of is that running this script is very heavy on the laptop. Maybe I will switch to the cloud today. I had some fun insights with Snapchat filters on obscure images.
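In the meantime, the 'see inside' view can be prototyped without the OpenFrameworks build by hooking an intermediate layer of a pretrained network. A rough sketch under the same assumptions as above (torchvision, a hypothetical sun.jpg); it stands in for Gene Kogan's viewer rather than reproducing it, and handles a single image rather than a live feed:

```python
# Sketch: save the feature maps of one convolutional layer as an image
# grid, a rough stand-in for a convnet viewer (not Gene Kogan's code).
import torch
from torchvision import models
from torchvision.models import VGG16_Weights
from torchvision.utils import save_image
from PIL import Image

weights = VGG16_Weights.DEFAULT
model = models.vgg16(weights=weights).eval()
preprocess = weights.transforms()

activations = {}
def hook(module, inputs, output):
    activations["conv"] = output.detach()

# Hook an early conv layer; the index 5 is an arbitrary choice here.
model.features[5].register_forward_hook(hook)

img = Image.open("sun.jpg").convert("RGB")   # hypothetical input image
with torch.no_grad():
    model(preprocess(img).unsqueeze(0))

maps = activations["conv"][0].unsqueeze(1)   # (channels, 1, H, W)
save_image(maps, "activations.png", nrow=16, normalize=True)
```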
I think I will start by printing obscure images, nodes, and different materials, to bring the convolutional neural network into the physical space ('the digital affects the physical space').
Looking into the convnet predictor (...)
THOUGHTS ON A.I.
“we see the world not as it is, but as we are” – The Talmud
“Embracing nonhuman vision as both a concept and a mode of being in the world will allow humans to see beyond the humanist limitations of their current philosophies and worldviews, to unsee themselves in their godlike positioning of both everywhere and nowhere, and to become reanchored and reattached again” (Zylinska 15). As an artist I find it very exciting to adopt this “New Vision”. As machine intelligence develops, can it become a legitimate voice in the discourse of my practice? In what ways will it engage with external reality and aesthetic integration? It is hard to imagine from today's point of view, but I think this can be a new entrance for making the world imaginable.
CONVNET – a convolutional neural network, seen as a DARKROOM
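For reference, this is roughly what such a network looks like in code: a minimal, hypothetical sketch of a small convnet, not the network used in the project.

```python
# Sketch: a minimal convolutional neural network, just to make the
# 'darkroom' concrete; layer sizes are illustrative.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn 16 filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):                 # x: (batch, 3, 224, 224)
        x = self.features(x)              # -> (batch, 32, 56, 56)
        return self.classifier(x.flatten(1))

logits = TinyConvNet()(torch.randn(1, 3, 224, 224))
print(logits.shape)                       # torch.Size([1, 10])
```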
“The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.” – Kevin Kelly [52]