Once upon a time, the US army decided to develop a computer system for detecting camouflaged tanks. They built a neural network — a kind of artificial brain — and trained it on hundreds of photos of tanks hidden among trees, and hundreds of photos of trees without any tanks, until it could tell the difference between the two types of pictures. And they saved another few hundred images which the network hadn’t seen, in order to test it. When they showed it the second set of images, it performed perfectly: correctly separating the pictures with tanks in them from the ones without tanks. So the researchers sent in their network — and the army sent it straight back, claiming it was useless.
Upon further investigation, it turned out that the soldiers taking the photos had only had a tank to camouflage for a couple of days, when the weather had been great. After the tank was returned, the weather changed, and all the photos without a tank in them were taken under cloudy skies. As a result, the network had not learned to discriminate between tanks, but between weather conditions: it was very good at deciding if the weather in the photograph was sunny or overcast, but not much else. The moral of the story is that machines are great at learning; it’s just very hard to know what it is that they’ve learned. – James Bridle
Hito Steyerl – How Not to Be Seen: A Fucking Didactic Educational .MOV File
Sterling Crispin – Data Masks
Trevor Paglen – A Study of Invisible Images
Adam Harvey – HyperFace
Paul Virilio – The Vision Machine (1994)
Trevor Paglen – Is Photography Over?
A glimpse of A.I.: Histogram of Oriented Gradients (HOG), Deformable Parts Model (DPM), Convolutional Neural Network (CNN)
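To give a feel for the oldest of these techniques, here is a minimal, pure-Python sketch of the core HOG idea: gradient orientations inside one cell are binned into a 9-bin histogram weighted by gradient magnitude. Real detectors add block normalisation and sliding windows on top; the function name and the toy 8×8 cell are my own illustration, not from any particular library.

```python
import math

def hog_cell(image):
    """9-bin histogram of oriented gradients for one cell
    (a 2-D list of grey values). Interior pixels only;
    unsigned orientations, 20 degrees per bin."""
    bins = [0.0] * 9
    h, w = len(image), len(image[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]   # horizontal gradient
            gy = image[y + 1][x] - image[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned angle
            bins[int(ang // 20) % 9] += mag
    return bins

# A hard vertical edge: all gradient energy lands in the 0-degree bin.
cell = [[0, 0, 0, 0, 255, 255, 255, 255] for _ in range(8)]
print(hog_cell(cell))
```

The histogram, not the pixels, is what the detector "sees": the same descriptor summarises a tank, a tree, or a sunny sky.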
Minor - 'Making Meaning of the Sun'
‘Making Meaning of the Sun’ reveals images produced by machine-vision algorithms used to analyse and identify images of the sun. Composed in several layers, they expose the production of machine-made images, which are not optimized to be seen by humans, as well as obscure identifications by algorithms that range from ‘orange’ to ‘nematode’. The project explores how machine perception can expand the field of photography and open our view of external reality and aesthetic integration.
I’m not interested in the banal visuality of the sun. The sun is the object to which humans have given the most meaning. We can never face the sun directly, so our knowledge of it is based on images. How does a machine relate to the sun? It opens up a new view, comparable to a child learning to understand the world.
'Making Meaning of the Sun' lightbox, 57,5 x 65,5 cm, wood, glass, print, LED
'Making Meaning of the Sun' video, 5:00, loop, fragments:
Graduating in 4 days
Technology crosses with the physical space around us.
For this workshop, Graduating in 4 Days, there were no rules. No copyright. I used my own footage from the project 'Making Meaning of the Sun' and experiments before that. The text I used to explain the project was from Fred Ritchin's After Photography.
“Photography, as we have known it, is both ending and enlarging, with an evolving medium hidden inside it as in a Trojan horse, camouflaged, for the moment, as if it were nearly identical: its doppelganger, only better.”
For the presentation I'm thinking about several experiments that also come off the screen.
(!) Exp: let the work interact with the audience through their own machines, make it tangible.. understandable.. a glimpse of (> Photoshop)
How can I process that in the experiments? (iPhone/Snapchat). A different layer, like the stripes and lines on the floor of the exhibition from James Bridle's Activations, to communicate.
Technical problems with the foregoing convnet, torch-visbox. After a month of trying, I gave it up. I went looking for new opportunities and found several possibilities. The first is with openFrameworks; it's not working that well, but it will be enough to start. In order not to be dependent on the computers at school, I found a cloud service for machine learning, which will hopefully save a lot of time. The only thing that still works is the text output from torch-visbox, shown in the image below; maybe I can still use this.
What I was afraid of: running this script is very heavy for the laptop. Maybe I will switch to the cloud today. I had some fun insights with Snapchat filters on obscure images.
Looking into a convnet predictor (...)
“we see the world not as it is, but as we are” - The Talmud.
I AM concerned with the aggressive overdevelopment of A.I.
I WANT to raise critical awareness of the crucial issues of our age.
A.I in relation to myself.
Finding life in photography again. [ stimulating new approaches ] Photography as a life-shaping medium > A.I.
“Embracing nonhuman vision as both a concept and a mode of being in the world will allow humans to see beyond the humanist limitations of their current philosophies and worldviews, to unsee themselves in their godlike positioning of both everywhere and nowhere, and to become reanchored and reattached again” (Zylinska 15). As an artist I think it's very exciting to adopt this “New Vision”. As machine intelligence develops, can it be a legitimate voice in the discourse of my practice? In which aspects will it engage with external reality and aesthetic integration? It's hard to imagine from today's point of view, but I think this can be a new entrance to make the world imaginable.
- A convolutional neural network, seen as a DARKROOM. [ Appealing is that they rely on photographic technologies ]
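To make the darkroom analogy concrete, here is a minimal sketch of the single operation every convnet layer repeats: sliding a small kernel over an image and recording its response. The function name, the edge kernel, and the toy frame are my own illustration, not taken from any framework.

```python
def convolve2d(image, kernel):
    """Slide a kernel over a grey-scale image (2-D lists of
    numbers) and return the map of responses at every valid
    position -- the basic operation of a convolutional layer."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(image) - kh + 1):
        row = []
        for x in range(len(image[0]) - kw + 1):
            acc = 0
            for j in range(kh):
                for i in range(kw):
                    acc += image[y + j][x + i] * kernel[j][i]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel "develops" only part of the frame,
# the way a darkroom exposure brings out part of the negative.
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]
frame = [[0, 0, 10, 10],
         [0, 0, 10, 10],
         [0, 0, 10, 10]]
print(convolve2d(frame, edge_kernel))  # → [[30, 30]]
```

In a trained network the kernels are learned, not hand-made, and there are hundreds of them per layer; but each one is still, in this sense, an exposure.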
stare back into its own mind
“The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.” – Kevin Kelly
Machine-to-machine communication: why should it be accepted as photography? ( besides that I am a photographer ) (X) > 4 statements: create the future, what does it look like visually, answer the question.
Can this 'New Vision' see differently/more/in other ways than the human eye? In what way can A.I. expand the field of photography?
The digital affects the physical spaces around us // The digital age has profoundly changed the way we produce, share and use information. No lens-based input? exp. <3D OBJECT
which expands our minds about how we envision the world:
By adopting this “New Vision” I wanted to make my work perceivable only by machines: photographs that are not optimized to be seen by humans. The world they've shaped looks nothing like the world we thought we lived in.
darkroom light experiments in CNN
The image rarely corresponds with reality.
EXP MILLING MACHINE (IMAGE)
HOUGH LINE TRANSFORM, SURF/SIFT
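As a rough sketch of how the Hough line transform "sees" lines: every edge point votes for all (rho, theta) line parameters that pass through it, and the accumulator bin with the most votes wins. This toy version (my own illustration, not an optimised implementation) recovers a horizontal line from ten points.

```python
import math

def hough_strongest_line(points, n_theta=180):
    """Vote every point into an (rho, theta-index) accumulator
    and return the best-supported line. theta index t maps to
    an angle of t * 180 / n_theta degrees."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return max(acc, key=acc.get)

# Ten points on the horizontal line y = 3:
pts = [(x, 3) for x in range(10)]
rho, t = hough_strongest_line(pts)
print(rho, t)  # rho = 3, theta index near 90 (~90 degrees)
```

The machine never looks at the line itself; it looks at where the votes pile up, which is the same logic of statistical accumulation the tank anecdote warns about.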