UNRVL9-Project1

From DigitalCraft_Wiki

==Tuesday 13/10/2020==

===<span style="color:#FD00FF; background: linear-gradient(to right, red, orange, yellow, green, blue);">References</span>===
 
— https://en.wikipedia.org/wiki/Jeremiah_Denton </br>
— https://learn.adafruit.com/chatty-light-up-cpx-mask </br>
— https://learn.adafruit.com/anatomical-3d-printed-beating-heart-with-makecode </br>
— https://learn.adafruit.com/sound-activated-shark-mask </br>
— https://fabacademy.org/archives/2013/students/brimer.liron/21~index.html </br>
=== <span style="color:#FD00FF; background: linear-gradient(to right, red, orange, yellow, green, blue);">Alternative Greetings</span>===

<span style="font-family: Courier">We talked about the scale of greetings from intimate to formal and how to represent that.

Greetings can be awkward if you don't have the same expectations. A conversation about the greeting habits of your new lover's family can be useful to avoid very awkward semi-hand-shake-kiss-hug situations. How will greetings evolve after Corona? Will they fade? Will we greet more extremely because we missed it so much? Hugging as much as possible to compensate for the previous touchless times. Or maybe we'll act more aggressively and head bump instead of kiss.
</span>
  
'''ZOOM GREETINGS'''
</br>
[[File:Greeting1.gif|400px]]
</br>
<span style="font-size: 13px;">''Wiping the camera to show respect. </br>How many times you wipe it and what you wipe it with indicate levels </br>of respect. You can customize your wipe pattern like you can </br>a secret handshake with a friend.''</span>

[[File:Greeting2.gif|400px]]
</br>
<span style="font-size: 13px;">''Bowing your camera indicates respect because you can see </br>if someone took the effort to put on pants to meet you.''</span>
  
  
'''PERSONAL GREETINGS'''
</br>
[[File:Greeting3.gif|400px]]
</br>
[[File:Greeting4.gif|400px]]
</br>
<span style="font-size: 13px;">''Using the eyes and eyebrows because you can't touch others or </br>see their mouth. An eyebrow raise could indicate a smile or a hello; </br>the number of times raised indicates the scale </br>from intimate to formal.''</span>

[[File:Greeting5.gif|400px]]
</br>
<span style="font-size: 13px;">''The ultimate sign of respect could be to close your eyes and raise </br>your eyebrows to show you really trust someone, like for a grandparent.''</span>


'''involving the CIRCUIT PLAYGROUND'''
  
[[File:Circuitplayground.jpg|150px]]
</br>

How can we convey a scale of intimacy while greeting another person by using color, speed, pattern, or the tonal value of sound?

The digital version of our wipe-your-camera idea would be wiping the circuit instead of your camera. It would light up in warmer or colder tones to visualize a different level of intimacy. A blue wave would be a formal greeting, a handshake. A red wave would be a tight hug.
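As a sketch of how that warm-to-cold mapping could work, here is a minimal Python version; the function name and the 0.0–1.0 intimacy range are our illustration, not the project's actual code:

```python
def intimacy_to_color(level):
    """Map an intimacy level in [0.0, 1.0] to an (r, g, b) tuple.

    0.0 gives pure blue (formal, a handshake), 1.0 gives pure red
    (a tight hug); values in between blend linearly.
    """
    level = max(0.0, min(1.0, level))   # clamp out-of-range input
    red = int(255 * level)              # warmer as intimacy rises
    blue = int(255 * (1.0 - level))     # colder as intimacy falls
    return (red, 0, blue)
```

On the board itself this tuple could then be pushed to all NeoPixels at once, for example with `cp.pixels.fill(...)` in CircuitPython.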
===<span style="color:#FD00FF; background: linear-gradient(to right, red, orange, yellow, green, blue);">Questions</span>===

'''How can we show non-verbal types of conversation without being present?'''
:1. What is beyond the frame? Can we experiment with the viewport of our camera?
::''You only see one square cut out of someone's environment when you're zooming. A glimpse of your surroundings is shown when you switch places. This is nice.''
::''Could we use a servo motor? Processing?''
:2. What is the energy in the room?
:3. How can we get a better sense of spatial location?
:4. What is the output?
:5. Can we transfer sight into something more abstract?
==Wednesday 14/10/2020==

===<span style="color:#11FF00; background: linear-gradient(to right, red, orange, yellow, green, blue);">References</span>===
— https://www.pretotyping.org/ </br>
— https://lauren-mccarthy.com/us </br>

===<span style="color:#11FF00; background: linear-gradient(to right, red, orange, yellow, green, blue);">Thoughts</span>===

<span style="font-family: Courier">On Wednesday we talked about the different types of greeting, then eventually transitioned to talking about body language and how you can't read body language through a video screen. We decided to use motion detection under the table to show the movement. That data would relate to emotions: shaking your foot violently indicates impatience, and other things along that line. These movements would be translated into colors varying from warm to cool depending on what the movement indicated. We would also use Processing to create a shape that changes from soft to sharp depending on the mood the sensor is conveying.</span>


[[File:under.gif]]
</br>
''As an experiment we filmed parts of ourselves that are not visible during a Zoom call.''

===<span style="color:#11FF00; background: linear-gradient(to right, red, orange, yellow, green, blue);">Question</span>===
====How are we going to detect body language?====
:1. Different wearables attached to different body parts (foot, hand, head?) can give us input.
==Thursday 15/10/2020==

===<span style="color:#00FFDB; background: linear-gradient(to right, red, orange, yellow, green, blue);">References and sources</span>===
— https://en.wikipedia.org/wiki/Synesthesia </br>
— http://soundbible.com/tags-water.html </br>
— https://community.troikatronix.com/topic/6744/solved-syphon-virtual-webcam-and-zoom/6 </br>

===<span style="color:#00FFDB; background: linear-gradient(to right, red, orange, yellow, green, blue);">Updating our concept</span>===

<span style="font-family: Courier">We refined our idea. Our concept will focus on the level of attention in the context of a meeting. It will mainly be useful for a meeting where one person speaks and several others listen. The tool we're making helps the speaker feel more confident about presenting by making it possible to detect the body language of interested listeners. It helps you notice if someone is fading away during a presentation. For listeners, it's a tool that helps you stay aware of your own interest during a reading or workshop.</span>
===<span style="color:#00FFDB; background: linear-gradient(to right, red, orange, yellow, green, blue);">Work Marathon</span>===

====The Drop (3D-model)====
<span style="color:#8200FF; font-family: Caslon; font-size: 15px">Anna Beirinckx & Yingzhou Chen</span>
</br><gallery mode="slideshow">
File:3d1.jpg
File:3d2.jpg
File:3d3.jpg
File:3d4.jpg
File:3d5.jpg
File:3d6.jpg
File:3d7.jpg
File:3d8.jpg
File:3d9.jpg
</gallery>
</br>
====Coding====
<span style="color:#8200FF; font-family: Caslon; font-size: 15px">Mats Cornegoor & Cato Speltincx (Python)</span>
</br>

[[File:python.png|400px|baseline]] [[File:Sensor1.jpg|300px|baseline]] [[File:Sensor2.jpg|300px|baseline]]
</br>

For the technical interactive part of the project we tinkered around with different types of sensors (apart from the touch sensors that are already on the Circuit Playground board). We wanted to find out whether the intensity of the grip could also be detected by some form of pressure sensor. Our range of sensor materials consisted of "resistive foam" and adhesive copper. Applying pressure to either the resistive foam or the copper-paper pressure sensor changes the resistance and gives a corresponding input value.

During prototyping, Cato found out that only the capacitive touch sensors would provide enough data, because of the sensitivity of the touch inputs. The pads register a positive value when held with normal force, whereas the resistive materials we tinkered with required an unreasonable amount of force.

The code on the Circuit Playground communicates with Processing through USB serial. It detects the grip of the user on the device by counting how many points are being touched with a proper grip. This numeric value is then visually translated by Processing into a level of attention by altering the height of the wave.
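A minimal sketch of that counting step, assuming the capacitive pads have already been read into a list of booleans (the function name and the 0.0–1.0 output range are our illustration, not the actual project code):

```python
def grip_level(touched_pads):
    """Return the fraction of capacitive touch pads currently held.

    touched_pads: one boolean per pad on the board. The resulting
    0.0-1.0 value is what would be written to USB serial for
    Processing to turn into a wave height.
    """
    if not touched_pads:
        return 0.0
    return sum(touched_pads) / len(touched_pads)

# Three of six pads held with a proper grip:
level = grip_level([True, True, True, False, False, False])  # 0.5
```

On the board itself this would run in a loop, reading the touch pads (`cp.touch_A1` … `cp.touch_A7` in CircuitPython) and `print()`-ing the value so Processing can pick it up from the serial port.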
</br>

<span style="color:#8200FF; font-family: Caslon; font-size: 15px">Giovanni Zanella (Processing)</span>
</br>

The level of attention is visualized by a wave going up and down your screen. When you're holding the device tightly, the wave will not cover your face: you are present and visible. If you hold on for some time you will be rewarded and see a sun. When you're fading away and losing grip of the device, your screen fills with water. You're drowning and disappearing. If this disinterest remains, you'll get disconnected from the Zoom room.
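The state logic above could be sketched like this (in Python rather than Processing's Java); the 0.5 grip threshold and 30-second timer are invented for illustration and are not taken from the actual sketch:

```python
def wave_state(grip, seconds_in_state):
    """Pick the on-screen state from the current grip level.

    grip: attention value from the device, 0.0 (let go) to 1.0 (tight).
    seconds_in_state: how long the user has stayed on this side
    of the threshold.
    """
    if grip < 0.5:
        # the wave rises over the face; sustained disinterest drops you
        return "disconnected" if seconds_in_state > 30 else "drowning"
    # a tight grip keeps the face clear; holding on earns the sun
    return "sun" if seconds_in_state > 30 else "visible"
```

In the real sketch the wave height would be animated continuously from the serial value rather than switched between discrete states.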
 
</br>

<gallery mode="slideshow">
File:Sequence1.png|''Giovanni paying no attention''
File:Sequence2.png|''Giovanni paying little attention''
File:Sequence3.png|''Giovanni paying some attention''
File:Sequence4.png|''Giovanni paying attention''
File:Sequence5.png|''Giovanni paying great attention''
</gallery>
 
====Visualisation====
<span style="color:#8200FF; font-family: Caslon; font-size: 15px">Yingzhou Chen, Soo Seng & Ana Tobin</span>
<gallery mode="slideshow">
File:wom-poster1.jpg|''Poster''
File:Video01.png|''watch the explainer video: https://www.youtube.com/watch?v=roBvgKq0usg''
File:wom-video02.png|''watch the explainer video: https://www.youtube.com/watch?v=roBvgKq0usg''
File:wom-video03.png|''watch the explainer video: https://www.youtube.com/watch?v=roBvgKq0usg''
File:wom-video04.png|''watch the explainer video: https://www.youtube.com/watch?v=roBvgKq0usg''
File:wom-video05.png|''watch the explainer video: https://www.youtube.com/watch?v=roBvgKq0usg''
File:wom-video06.png|''watch the explainer video: https://www.youtube.com/watch?v=roBvgKq0usg''
File:wom-video07.png|''watch the explainer video: https://www.youtube.com/watch?v=roBvgKq0usg''
File:wom-video08.png|''watch the explainer video: https://www.youtube.com/watch?v=roBvgKq0usg''
File:wom-video09.png|''watch the explainer video: https://www.youtube.com/watch?v=roBvgKq0usg''
File:wom-video10.png|''watch the explainer video: https://www.youtube.com/watch?v=roBvgKq0usg''
File:wom-video11.png|''watch the explainer video: https://www.youtube.com/watch?v=roBvgKq0usg''
</gallery>
  
==Friday 16/10/2020==

===<span style="color:#DDC0F5; background: linear-gradient(to right, red, orange, yellow, green, blue);">Presentation</span>===
<gallery mode="slideshow">
File:presentation01.jpg
File:presentation02.jpg
File:presentation03.jpg
File:presentation04.jpg
File:presentation05.jpg
File:presentation06.jpg
File:presentation07.jpg
File:presentation08.jpg
File:presentation09.jpg
File:presentation10.jpg
File:presentation11.jpg
File:presentation12.jpg
File:Livedemonstration1.png|''watch the live demonstration: https://www.youtube.com/watch?v=Xf_Iw74pYOk''
File:presentation13.jpg
</gallery>

Latest revision as of 08:44, 18 October 2020
