Seeing Machines
Once upon a time, the US army decided to develop a computer system for detecting camouflaged tanks. They built a neural network — a kind of artificial brain — and trained it on hundreds of photos of tanks hidden among trees, and hundreds of photos of trees without any tanks, until it could tell the difference between the two types of pictures. And they saved another few hundred images which the network hadn’t seen, in order to test it. When they showed it the second set of images, it performed perfectly: correctly separating the pictures with tanks in them from the ones without tanks. So the researchers sent in their network — and the army sent it straight back, claiming it was useless.
Upon further investigation, it turned out that the soldiers taking the photos had only had a tank to camouflage for a couple of days, when the weather had been great. After the tank was returned, the weather changed, and all the photos without a tank in them were taken under cloudy skies. As a result, the network had not learned to discriminate between tanks, but between weather conditions: it was very good at deciding if the weather in the photograph was sunny or overcast, but not much else. The moral of the story is that machines are great at learning; it’s just very hard to know what it is that they’ve learned. – James Bridle
James Bridle – Activations [1][2]
Hito Steyerl – How Not to Be Seen: A Fucking Didactic Educational .MOV File [3]
Sterling Crispin – Data Masks [4]
Trevor Paglen – A Study of Invisible Images
Adam Harvey – HyperFace [5]
TEXT
Paul Virilio – The Vision Machine (1994) [6]
Trevor Paglen – Is Photography Over? [7]
Computer vision
A glimpse of A.I.
Histogram of Oriented Gradients (HOG) – see the sketch after this list
Deformable Parts Model (DPM)
Convolutional Neural Network (CNN)
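To make HOG concrete, here is a minimal sketch using scikit-image; the input file name and all parameter values are placeholders, not taken from the project. HOG divides the image into small cells, builds a histogram of gradient directions per cell, and normalises the histograms over blocks of cells, yielding one long feature vector that detectors such as the Deformable Parts Model build on.

```python
# Sketch: HOG features with scikit-image. "sun.jpg" is a placeholder file.
from skimage import color, io
from skimage.feature import hog
import matplotlib.pyplot as plt

image = color.rgb2gray(io.imread("sun.jpg"))

# features: one long vector of block-normalised gradient histograms.
# hog_image: a rendering of the dominant gradient direction per cell.
features, hog_image = hog(
    image,
    orientations=9,          # direction bins per cell histogram
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    visualize=True,
)

print(features.shape)        # the descriptor a classifier would consume
plt.imshow(hog_image, cmap="gray")
plt.show()
```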
EXP. CONVNET
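As a point of reference for the experiment, a minimal convolutional network in PyTorch; this is not the torch-visbox code, and the layer sizes are arbitrary. It shows the pattern every convnet shares: stacked convolution and pooling layers that shrink the image while deepening the features, followed by a linear classifier.

```python
# Sketch of a tiny convnet in PyTorch (illustrative sizes only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyConvNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)            # halves height and width
        self.fc = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))   # 64x64 -> 32x32
        x = self.pool(F.relu(self.conv2(x)))   # 32x32 -> 16x16
        x = x.flatten(1)                       # keep the batch dimension
        return self.fc(x)

net = TinyConvNet()
out = net(torch.randn(1, 3, 64, 64))           # one fake 64x64 RGB image
print(out.shape)                               # torch.Size([1, 2])
```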
Minor – 'Making Meaning of the Sun'
‘Making Meaning of the Sun’ reveals images produced by machine-vision algorithms as they analyse and identify images of the sun. Composed in several layers, the works expose the production of machine-made images, which are not optimized to be seen by humans, as well as the obscure labels the algorithms assign, ranging from ‘orange’ to ‘nematode’. The project explores how machine perception can expand the field of photography, open up our view of external reality, and be integrated aesthetically.
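As an illustration of where such labels can come from (this is not the project's actual pipeline, and the file names below are placeholders): a pretrained ImageNet classifier from torchvision assigns its most confident labels to an image, and since the network was never trained on suns, the guesses can be as strange as ‘nematode’.

```python
# Sketch: labelling an image with a pretrained ImageNet classifier.
# "sun.jpg" and "imagenet_classes.txt" (one class name per line) are
# placeholder files, not part of the original project.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(pretrained=True)
model.eval()

image = preprocess(Image.open("sun.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)[0]

# Print the five labels the network is most confident about.
labels = [line.strip() for line in open("imagenet_classes.txt")]
values, indices = probs.topk(5)
for p, i in zip(values.tolist(), indices.tolist()):
    print(f"{labels[i]}: {p:.3f}")
```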
I’m not interested in the banal visuality of the sun. For humans, the sun is the most meaning-giving object, yet we can never face it directly, so our knowledge of it is based on images. How does a machine relate to the sun? It opens up a new view, like a child learning to understand the world.
'Making Meaning of the Sun' lightbox, 57.5 × 65.5 cm, wood, glass, print, LED
'Making Meaning of the Sun' video, 5:00, loop, fragments:
Graduating in 4 days
Posthuman Photography
[File:Schermafbeelding 2018-03-27 om 14.49.02.png]
Technical problems with the aforementioned convnet, torch-visbox: after a month of trying, I gave up. I went looking for alternatives and found several possibilities. The first is OpenFrameworks; it does not work that well, but it will be enough to start with. In order not to be dependent on the computers at school, I also found a cloud service for machine learning, which will hopefully save a lot of time.