INTRO WEEK ASSIGNMENT

VISITING THE BOYMANS MUSEUM ON SEPTEMBER 6TH 2015





- - -



INITIAL BRAINSTORM SESSION
1. Scanning an art piece by describing it with words.
2. Measuring colors and converting the measurements to different heights (a minimal sketch of this mapping follows this list).
3. Scanning an object with multiple people at the same time, working together in order to gain more information at once (e.g. by making use of reflective or colorful green-screen coats).
4. Taking a picture every minute and showing these in a see-through drawer. Your own movement creates the 3D effect and the water creates a dreamy, holographic effect.
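
The second idea, mapping measured colors to heights, could look roughly like the sketch below. It is only an illustration of the mapping, not something we built during the week: the sampled RGB values and the linear brightness-to-height conversion are assumptions.

<syntaxhighlight lang="python">
# Hypothetical sketch of idea 2: convert sampled colors to heights.
# Assumes the colors arrive as (r, g, b) tuples in the 0-255 range.

def color_to_height(rgb, max_height_mm=100.0):
    """Map the perceived brightness of an RGB sample to a height in millimetres."""
    r, g, b = rgb
    # Standard luminance weights; brighter colors become taller.
    brightness = 0.299 * r + 0.587 * g + 0.114 * b
    return (brightness / 255.0) * max_height_mm

samples = [(200, 30, 30), (30, 30, 200), (240, 240, 240)]
print([round(color_to_height(c), 1) for c in samples])
</syntaxhighlight>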





STEALING FROM THE MUSEUM
After we decided what to do, we had to figure out how to actually do it.
How would we be able to get the information? How would we receive the input?
I used Rhino in combination with Grasshopper, Firefly and other plug-ins to make all the connections.

1. We connected our Kinect camera to our Grasshopper script. The script was set up so that we could trace a person's right hand: we recorded the position of the right hand every half second and visualized the recorded points. Eventually we were able to draw in 3D in open air (the first sketch after this list shows the sampling idea).
2. The second input method we used was eye tracking. We recorded our own iris while we traced the object with our eye. Afterwards, using Adobe After Effects, we tracked a specific spot on the eyeball in the recorded video (the second sketch after this list shows a comparable approach in code).
3. The third input method was an audio description. We used the voice-recognition feature of an iPhone to turn the spoken description into text. The phone was connected wirelessly to our Grasshopper script so we could match all the words in real time against a database of words related to certain colors.
E.g. when somebody said the word "red" during the description, that word would get picked out because it matches an entry in the database (the third sketch after this list shows this kind of matching).
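
A minimal sketch of the sampling loop behind the first method. The Kinect/Firefly hand-tracking input is replaced by a placeholder function, get_right_hand_position(), which is an assumption and not part of our actual Grasshopper definition; the sketch only shows the idea of collecting a point every half second and keeping the accumulated points as a 3D trace.

<syntaxhighlight lang="python">
# Hypothetical sketch of the hand-trace recording (method 1).
# get_right_hand_position() stands in for the Kinect/Firefly input we used
# inside Grasshopper; here it just returns made-up coordinates.
import time
import random

def get_right_hand_position():
    """Placeholder for the tracked right-hand position, in millimetres."""
    return (random.uniform(0, 500), random.uniform(0, 500), random.uniform(0, 2000))

def record_trace(duration_s=10.0, interval_s=0.5):
    """Sample the hand position every half second and return the points of the trace."""
    points = []
    end = time.time() + duration_s
    while time.time() < end:
        points.append(get_right_hand_position())
        time.sleep(interval_s)
    return points  # consecutive points can be joined into a polyline

trace = record_trace(duration_s=2.0)
print(len(trace), "points recorded")
</syntaxhighlight>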
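The eye-tracking step in the second method was done with the point tracker in Adobe After Effects, which is a GUI workflow rather than code. As a rough stand-in, the sketch below shows the same idea, following one chosen spot on the eyeball from frame to frame, using OpenCV's pyramidal Lucas-Kanade tracker instead; the video file name and the initial point are assumptions.

<syntaxhighlight lang="python">
# Stand-in sketch for method 2: follow one chosen spot on the eyeball across
# video frames. We actually used the point tracker in Adobe After Effects;
# this uses OpenCV's Lucas-Kanade optical flow instead. The file name
# "eye_recording.mp4" and the start point are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("eye_recording.mp4")
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

# Initial pixel position of the spot we want to follow (picked by hand).
point = np.array([[[320.0, 240.0]]], dtype=np.float32)
trace = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Estimate where the spot moved between the previous and current frame.
    new_point, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, point, None)
    if status[0][0] == 1:
        trace.append(tuple(new_point[0][0]))
        point = new_point
    prev_gray = gray

cap.release()
print(len(trace), "tracked positions")
</syntaxhighlight>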
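The color-word matching in the third method boils down to looking up each transcribed word in a small database. The sketch below shows only that matching step; the example transcript and the color database are assumptions, and the wireless iPhone-to-Grasshopper connection is left out.

<syntaxhighlight lang="python">
# Hypothetical sketch of the color-word matching (method 3).
# The transcript and the color database are made-up examples; the live
# speech-to-text feed from the iPhone is left out here.

COLOR_WORDS = {
    "red": (255, 0, 0),
    "blood": (136, 8, 8),
    "sky": (135, 206, 235),
    "blue": (0, 0, 255),
    "gold": (255, 215, 0),
}

def match_color_words(transcript):
    """Return every word in the transcript that maps to a color in the database."""
    hits = []
    for word in transcript.lower().split():
        cleaned = word.strip(".,!?\"'")
        if cleaned in COLOR_WORDS:
            hits.append((cleaned, COLOR_WORDS[cleaned]))
    return hits

print(match_color_words("A red figure against a pale blue sky."))
</syntaxhighlight>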

In the end we visualized all this information in a very static way, using layers of transparent sheets.
The conclusion was that when somebody looked through the layers of the object, they could see through the eyes of the person who had looked at the art piece in the museum.