Seeing Machines

Once upon a time, the US army decided to develop a computer system for detecting camouflaged tanks. They built a neural network — a kind of artificial brain — and trained it on hundreds of photos of tanks hidden among trees, and hundreds of photos of trees without any tanks, until it could tell the difference between the two types of pictures. And they saved another few hundred images which the network hadn’t seen, in order to test it. When they showed it the second set of images, it performed perfectly: correctly separating the pictures with tanks in them from the ones without tanks. So the researchers sent in their network — and the army sent it straight back, claiming it was useless.

Upon further investigation, it turned out that the soldiers taking the photos had only had a tank to camouflage for a couple of days, when the weather had been great. After the tank was returned, the weather changed, and all the photos without a tank in them were taken under cloudy skies. As a result, the network had not learned to discriminate between tanks, but between weather conditions: it was very good at deciding if the weather in the photograph was sunny or overcast, but not much else. The moral of the story is that machines are great at learning; it’s just very hard to know what it is that they’ve learned. – James Bridle

James Bridle – Activations [1][2]

6 bridle failing-5731.jpg


Schermafbeelding 2018-03-27 om 14.21.44.png

Hito Steyerl – How Not to Be Seen: A Fucking Didactic Educational .MOV File [3]

Sterling Crispin – Data Masks [4]

Trevor Paglen – A Study of Invisible Images

Scarf detections 960.jpg

Adam Harvey – HyperFace [5]


TEXT

Paul Virilio – The Vision Machine, 1994 [6]

Trevor Paglen – Is Photography Over? [7]


Computer vision

{HOG} Schermafbeelding 2017-11-21 om 15.49.59.png

{CNN} Schermafbeelding 2017-11-22 om 12.53.56.png Slow motion5x5 filters.gif


{–––}Schermafbeelding 2018-03-07 om 13.36.38.png

{Deepfish} Schermafbeelding 2018-03-20 om 15.34.51.png Schermafbeelding 2018-03-20 om 15.38.02.png Schermafbeelding 2018-03-20 om 15.36.59.png


A glimpse of A.I.: Histogram of Oriented Gradients (HOG), Deformable Parts Model (DPM), Convolutional Neural Network (CNN)

Schermafbeelding 2018-03-27 om 14.10.02.png Schermafbeelding 2018-03-27 om 14.15.49.png

ImageNet [8] Paperspace [9]
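As a minimal illustration of the first of these techniques, the sketch below runs OpenCV's built-in HOG person detector over an image. The filename and parameters are placeholders for illustration, not part of the project.

 # Minimal sketch: HOG person detection with OpenCV's built-in
 # detector (a linear SVM over HOG features). "sample.jpg" is a
 # placeholder filename.
 import cv2
 
 img = cv2.imread("sample.jpg")
 
 hog = cv2.HOGDescriptor()
 hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
 
 # Slide a detection window over the image at multiple scales.
 boxes, weights = hog.detectMultiScale(img, winStride=(8, 8))
 
 for (x, y, w, h) in boxes:
     cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
 
 cv2.imwrite("detections.jpg", img)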

EXP. CONVNET

Minor - 'Making Meaning of the Sun'

Screenshot from 2017-12-14 13-28-52.png Screenshot from 2017-12-14 15-39-21.png Screenshot from 2017-12-14 12-32-17.png Screenshot from 2017-12-14 12-50-31.png Screenshot from 2017-12-14 13-10-31.png


‘Making Meaning of the Sun’ presents images produced by machine vision algorithms as they analyse and identify pictures of the sun. Composed in several layers, the works reveal the production of machine-made images, which are not optimized to be seen by humans, as well as the obscure identifications returned by the algorithms, which range from ‘orange’ to ‘nematode’. The project explores in what way machine perception can expand the field of photography, open our view of external reality, and engage with aesthetic integration.

I’m not interested in the banal visuality of the sun. The sun is the most meaning-giving object for humans. We could never face the sun, so our knowledge is based on images. How does a machine relate to the sun? It opens up a new view, like a child learning to understand the world.
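To make these identifications concrete, here is a minimal sketch of how a pretrained ImageNet classifier produces labels for a sun image, assuming torchvision as a stand-in for the project's own convnet setup; "sun.jpg" is a placeholder filename.

 # Minimal sketch: top-5 ImageNet labels for a picture of the sun.
 # torchvision stands in for the project's own convnet setup.
 import torch
 from PIL import Image
 from torchvision import models
 
 weights = models.ResNet18_Weights.IMAGENET1K_V1
 model = models.resnet18(weights=weights).eval()
 
 preprocess = weights.transforms()  # resize, crop, normalize
 x = preprocess(Image.open("sun.jpg").convert("RGB")).unsqueeze(0)
 
 with torch.no_grad():
     probs = model(x).softmax(dim=1)[0]
 
 # For a subject like the sun, the top labels can be as obscure
 # as 'orange' or 'nematode'.
 top = probs.topk(5)
 for p, i in zip(top.values, top.indices):
     print(f"{weights.meta['categories'][i]}: {p:.3f}")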


Orange.jpg


'Making Meaning of the Sun' lightbox, 57,5 x 65,5 cm, wood, glass, print, LED

'Making Meaning of the Sun' video, 5:00, loop, fragments:


Graduating in 4 days

Posthuman Photography

Schermafbeelding 2018-03-27 om 14.48.48.png Schermafbeelding 2018-03-27 om 14.48.40.png Schermafbeelding 2018-02-07 om 15.20.57.png

Schermafbeelding 2018-03-27 om 14.49.02.png Schermafbeelding 2018-03-27 om 14.53.25.png


Technology crosses with the physical space around us.

For this workshop, Graduating in 4 days, there were no rules. No copyright. I used my own footage from the project 'Making Meaning of the Sun' and experiments before that. The text I used to explain the project was from Fred Ritchin's After Photography.

“Photography, as we have known it, is both ending and enlarging, with an evolving medium hidden inside it as in a Trojan horse, camouflaged, for the moment, as if it were nearly identical: its doppelganger, only better.”


Schermafbeelding 2018-02-07 om 16.39.34.png Schermafbeelding 2018-02-07 om 00.51.54.png Schermafbeelding 2018-03-27 om 15.21.01.png


For the presentation I'm thinking about several experiments that also come off the screen.


(!) Exp: have the work interact with the audience through their own machines, make it tangible.. understandable.. a glimpse from (> Photoshop)

How can I process that in the experiments? (iPhone/Snapchat.) A different layer to communicate, like the stripes and lines on the floor of the exhibition of James Bridle's Activations.


Technical problems with the foregoing convnet, torch-visbox: after a month of trying, I gave it up. I went looking for new options and found several possibilities. The first is with openFrameworks; it is not working that well, but it will be enough to start. In order not to be dependent on the computers at school, I found a cloud service for machine learning, which will hopefully save a lot of time. The only thing that still works is the text output from torch-visbox, shown in the image below; maybe I can still use this.

Schermafbeelding 2017-11-30 om 16.23.57.png


See inside a convnet, from Gene Kogan [10][11]. For now only a live feed.

What I was afraid of is that running this script is very heavy for the laptop; maybe I will switch to the cloud. Today I had some fun insights, with Snapchat filters on obscure images.
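Kogan's viewer is built in openFrameworks; as a rough Python equivalent of what it shows, the sketch below tiles the activations of one convolutional layer over a live webcam feed. The pretrained model and the layer choice are assumptions for illustration, not his actual code.

 # Rough sketch of "seeing inside a convnet": display one conv
 # layer's activations on a live feed. Not Gene Kogan's code; the
 # model and layer are arbitrary choices.
 import cv2
 import torch
 from torchvision import models
 
 model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
 
 acts = {}
 model.layer1.register_forward_hook(
     lambda mod, inp, out: acts.update(maps=out.detach()))
 
 cap = cv2.VideoCapture(0)  # live feed
 while True:
     ok, frame = cap.read()
     if not ok:
         break
     x = cv2.resize(frame, (224, 224))
     x = torch.from_numpy(x).permute(2, 0, 1).float().unsqueeze(0) / 255.0
     with torch.no_grad():
         model(x)
     # Tile the first 16 of 64 feature maps (each 56x56) into a grid.
     m = acts["maps"][0, :16]
     m = (m - m.min()) / (m.max() - m.min() + 1e-8)
     grid = m.reshape(4, 4, 56, 56).permute(0, 2, 1, 3).reshape(224, 224)
     cv2.imshow("activations", grid.numpy())
     if cv2.waitKey(1) & 0xFF == ord("q"):
         break
 cap.release()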

Schermafbeelding 2018-03-27 om 15.55.16.png Schermafbeelding 2018-03-27 om 16.12.18.png IMG 4534.jpg


Schermafbeelding 2018-03-27 om 16.25.08.png Schermafbeelding 2018-03-27 om 16.25.22.png


Looking into the convnet predictor (...)


Schermafbeelding 2018-04-16 om 12.19.21.png Schermafbeelding 2018-04-16 om 14.01.04.png

THOUGHTS A.I

“We see the world not as it is, but as we are” – The Talmud.

I AM concerned with the aggressive overdevelopment of A.I.

I WANT to raise critical awareness of the crucial issues of our age.

A.I. in relation to myself.

Finding life in photography again. [ stimulating new approaches ] Photography as a life-shaping medium > A.I.


“Embracing nonhuman vision as both a concept and a mode of being in the world will allow humans to see beyond the humanist limitations of their current philosophies and worldviews, to unsee themselves in their godlike positioning of both everywhere and nowhere, and to become reanchored and reattached again” (Zylinska 15). As an artist I find it very exciting to adopt this “New Vision”. As machine intelligence develops, can it be a legitimate voice in the discourse of my practice? In what ways will it engage with external reality and aesthetic integration? It is hard to imagine from today's point of view, but I think this can be a new entrance to make the world imaginable.


    • A convolutional neural network, seen as a DARKROOM. [ What appeals is that it relies on photographic technologies ]

staring back into its own mind

“The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.” – Kevin Kelly [52]


Machine-to-machine communication. Why should it be accepted as photography? ( besides that I am a photographer ) (X) > 4 statements: creating the future, what does it look like visually; answering the question.

Can this 'New Vision' see differently/more/other than the human eye? In what way can A.I. expand the field of photography?


The digital affects physical spaces // The digital age has profoundly changed the way we produce, share and use information. No lens-based input? exp. <3D OBJECT

which expands our minds about how we envision the world:

By adopting this “New Vision” I wanted to make my work perceivable only by machines: photographs that are not optimized to be seen by humans. The world they've shaped looks nothing like the world we thought we lived in.

Making

darkroom light experiments in CNN


The image rarely corresponds with reality.

LIGHT

Schermafbeelding 2018-04-09 om 15.54.28.png

Schermafbeelding 2018-04-16 om 14.03.35.png

Schermafbeelding 2018-04-24 om 23.49.21.png

Jaya Pelupessy

FIRE

Schermafbeelding 2018-04-25 om 13.39.52.png





objects

Schermafbeelding 2018-04-11 om 13.09.27.png

Schermafbeelding 2018-04-16 om 14.19.24.png Schermafbeelding 2018-02-07 om 00.28.38.png

Schermafbeelding 2018-04-16 om 14.38.51.png Schermafbeelding 2018-04-16 om 14.23.18.pngSchermafbeelding 2018-04-16 om 14.25.26.png

acrylic / Mimaki print / foam

Clément Valla

EXP MILLING MACHINE (IMAGE)

LIGHT/GRID

IMG 4930.JPG LigEEEEht.jpg

SUN

IMG 4928.JPG

HOUGH LINE TRANSFORM SURF/SIFT
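For reference, a minimal sketch of the probabilistic Hough line transform with OpenCV; the filenames and thresholds are placeholder assumptions for illustration.

 # Minimal sketch: probabilistic Hough line transform with OpenCV.
 # "grid.jpg" is a placeholder filename.
 import cv2
 import numpy as np
 
 img = cv2.imread("grid.jpg")
 gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
 
 # Hough works on a binary edge map, so detect edges first.
 edges = cv2.Canny(gray, 50, 150)
 
 # Detect line segments (rho resolution 1 px, theta resolution 1 degree).
 lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                         minLineLength=30, maxLineGap=10)
 
 if lines is not None:
     for x1, y1, x2, y2 in lines[:, 0]:
         cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
 
 cv2.imwrite("lines.jpg", img)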