Stealing From The Museum/thecriminals

== THE TEAM ==

'''Criminal one:''' Remy Konings
<br>
'''Criminal two:''' Denise Nedermeijer
<br>
'''Criminal three:''' Joeke van der Veen
<br>
'''AND Criminal four:''' Lizet van der Knaap
<br>
<br>
== INSPIRATION ==

[http://cuppetellimendoza.com/nervous-structure-field/ NOTIONAL FIELD]

The motion of the projected lines is governed by a simulation that makes them behave like soft ropes; this motion is influenced by a viewer's movements, as interpreted by a computer that surveys the scene through a video camera.

[http://anf.nu/bfa-x-schwarm-vl/ BRUTE FORCE]

A software study in preparation for the Brute Force Method, which generated and auto-published images to Tumblr.

==Update 7 september==
<br>
We have two scanning methods that provide us with data. We want to track both the hand movements and the eye movements of the viewer.
<br>
1. Words (describing a 2D/3D object using natural language)
<br>
2. Gestures (making use of the automatic gestures made during a conversation)
<br>
<br>
- We want to use describing an object as our scanning method. By taking the words a viewer uses to describe an object and matching them against a database of words that represent different shapes, we can recreate a new copy of the "old" object. Each person has a different view and may consider a different part of the object to be essential. The words they ultimately choose will define the new abstract object.
<br>
<br>
- The other tactic we would like to combine with this is the use of gestures, captured with a Kinect sensor. The Kinect detects our natural hand gestures while we describe the object. These translate into the randomness of the painting: where to place the new shapes. The gestures guide the new object.
<br>
<br>
The possibility that everybody can describe their own essence of an object makes the new object more concentrated. It is a combination of shapes that describes the essence of the object solely from the viewer's perspective.
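The word-matching step can be sketched minimally as a lookup from descriptive words to shapes. The vocabulary and shape names below are invented placeholders, not an actual database:

```python
# Hypothetical sketch: match description words against a small shape vocabulary.
# SHAPE_WORDS and its entries are made up for illustration.
SHAPE_WORDS = {
    "round": "circle",
    "curved": "arc",
    "pointy": "triangle",
    "tall": "vertical bar",
    "flat": "horizontal bar",
}

def shapes_from_description(description):
    """Return the shapes evoked by the words of a description, in order."""
    words = description.lower().split()
    return [SHAPE_WORDS[w] for w in words if w in SHAPE_WORDS]

print(shapes_from_description("a tall pointy sculpture with a round base"))
# → ['vertical bar', 'triangle', 'circle']
```

Each viewer's choice of words then yields a different list of shapes, which is exactly what makes every recreated object different.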
==Update 8 september==

We want to use the spectator as a data source by tracking their eye motions and body motions while they describe an art piece. We want to compare the lines that appear while tracking these motions.
It is a visualization of the different ways someone sees an art piece, which we can compare with the actual art piece.
By tracking the left hand, the viewer draws the art piece digitally. The Kinect camera captures the movement, and from this data we can create a 3D line.


<gallery widths=200px heights=200px perrow=4 caption="Pictures of the Process">
File:Jep1.jpg
File:Jep2.jpg
File:Jep3.jpg
File:Jep4.jpg
File:Jep5.jpg
File:Jep6.jpg
</gallery>
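The hand-to-line step can be sketched as follows, assuming the Kinect pipeline already delivers one (x, y, z) left-hand position per frame; the sample coordinates are made up, and the moving-average smoothing is an added assumption to tame sensor jitter, not part of the original setup:

```python
# Sketch under assumptions: `points` is a list of (x, y, z) hand positions,
# one per captured frame. We smooth them into a 3D polyline with a simple
# moving average over the last `window` frames.
def smooth_polyline(points, window=3):
    """Return a smoothed copy of a 3D polyline (same number of points)."""
    smoothed = []
    for i in range(len(points)):
        lo = max(0, i - window + 1)          # clamp at the first frame
        chunk = points[lo:i + 1]
        n = len(chunk)
        smoothed.append(tuple(sum(p[k] for p in chunk) / n for k in range(3)))
    return smoothed

frames = [(0.0, 0.0, 2.0), (0.1, 0.0, 2.0), (0.2, 0.1, 2.1), (0.3, 0.1, 2.1)]
print(smooth_polyline(frames))
```

The smoothed point list is the "3D line" that later gets flattened into the drawing.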
+ | |||
==Update 9 september==

We collected the data at the Boijmans Museum. We brought Denise's laptop and the Kinect camera into the museum and set them up in front of our chosen art piece.
We decided to collect data from 4 different people, so that we can also compare the differences in experience per person.

[[File:museum.jpg|800px]]
[[File:Boijmans Tracking 2.JPG|340px]]
+ | |||
==Update 10 september==

We decided what to do with all the collected data. We want to align all the different data we collected in front of the original art piece, in order to show the differences in experience.
We are going to make 4 x 4 different layers (four layers for each of the four participants). The first, bottom layer is the original art piece; the second layer is the drawing made from the movement tracking; the third layer is the drawing made from the eye-movement tracking; and the fourth layer is a drawing made from people's descriptions. All these images were printed separately on transparent sheets, so that you can look through the images and actually see the difference between the layers.


Results of the data tracking:

[[File:overview.jpg|800px]]
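Looking through the stacked transparent prints can be mimicked digitally. A minimal sketch, assuming each layer is a grayscale image stored as a list of pixel rows (the tiny 2x2 "images" below are placeholders, not the real scans):

```python
# Illustrative sketch: stack equally sized grayscale layers by averaging
# their pixel values, which roughly mimics light passing through the
# transparent prints. All pixel data here is invented.
def stack_layers(layers):
    """Average a list of equally sized grayscale images (lists of rows)."""
    n = len(layers)
    height, width = len(layers[0]), len(layers[0][0])
    return [[sum(layer[r][c] for layer in layers) // n
             for c in range(width)]
            for r in range(height)]

original = [[255, 255], [255, 0]]    # bottom layer: the art piece
movement = [[255, 0], [255, 255]]    # layer 2: movement tracking
eyes     = [[0, 255], [255, 255]]    # layer 3: eye-movement tracking
words    = [[255, 255], [0, 255]]    # layer 4: description drawing
print(stack_layers([original, movement, eyes, words]))
```

Wherever a layer differs from the others, the averaged pixel ends up in between, so the differences between the four experiences stay visible in the composite.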
+ | |||
==Update 11 september==

Results at the exhibition:

[[File:result03.jpg|800px]]

[[File:result02.jpg|800px]]

[[File:result01.jpg|800px]]
Latest revision as of 15:40, 17 September 2015