
Philip Ghering
0888689
Communicatie & Multimedia Design
http://www.philipghering.nl


Digital Craft minor 2017: How to be Human

Week 1 & 2 - On the body

My body part was the brain. These are the resources I used:

Inside the mind
Brain computer interface
Proof that Darwin was right
10 ways monkeys are more like us than we think
Your 21 senses
Proprioceptors
10 lesser known but important human senses
Khan Academy lesson on Somatosensation
How many senses do you have?
3d model of a spine

I became aware that we can actually sense a lot more than our five classic senses (hearing, tasting, touching, smelling and seeing) would allow. Mainstream science has long settled on the idea that humans have only these five senses, and the reason for that is unclear to me. I became fascinated by proprioception, for example: the sense that tells you where your arms and legs are relative to the rest of your body. Even in absolute darkness you can still tell whether you've lifted your arms, thanks to receptors in your muscles. You can also sense when you're hungry, when you need sleep or rest, and when you have to stop exercising. None of this would be possible if we really had only the traditional five senses. Even touch alone requires many different types of receptors: very hard pressure turns into pain, and so does heat.

I wanted to go in a direction that explores these mysteries around our senses: the limitations and possibilities they hold.



'I agree'

Given the time we had left, we decided it would be best to team up completely and do one project together, instead of each making something separately with at least one thing in common. 'I agree' was the result.


Experimentation and concepting

Slit-scan was our first prototype for taking pictures with the webcam. Using a program called Processing, we quite literally 'abused' someone's body by deforming its image:

[Screenshots #1–#11 of the slit-scan experiments]
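
For reference, here is a minimal slit-scan sketch. Our prototype was written in Processing; this Python/OpenCV version of the same idea is an illustration only (it assumes a webcam at index 0 and builds the image one vertical slice at a time):

 import cv2
 import numpy as np

 cap = cv2.VideoCapture(0)
 ok, frame = cap.read()
 if not ok:
     raise RuntimeError("no webcam frame")

 h, w = frame.shape[:2]
 canvas = np.zeros_like(frame)
 col = 0

 while True:
     ok, frame = cap.read()
     if not ok:
         break
     # Copy the centre column of the live frame into the next canvas column,
     # so the image is built up one vertical slice at a time.
     canvas[:, col] = frame[:, w // 2]
     col = (col + 1) % w
     cv2.imshow("slit-scan", canvas)
     if cv2.waitKey(1) & 0xFF == ord("q"):
         break

 cap.release()
 cv2.destroyAllWindows()

Anything that moves in front of the camera while the slices accumulate gets smeared across the canvas, which is exactly the deformation in the screenshots above.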


The eventual result of our research is a sequence where the pixels come together to form the last picture taken:

[Images 1–4: before-and-after frames of the pixel-reassembly sequence]


We became interested in further 'abusing' and deforming the body digitally, so we looked into how this actually happens online. The most interesting angle we found was that our pictures and other content get abused by the services we use. That is where the concept originated.


'I agree' consists of a webcam attached to a tall upright pole, with a spotlight shining in the background. A welcoming female voice beckons people to come closer and asks whether they agree to see what's next. If they agree, she tells them to press the button. Pressing the button makes the webcam take your picture and spread it across multiple screens throughout the room.
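
A rough sketch of the piece's capture loop, reconstructed in Python/OpenCV purely as an illustration (the keyboard key standing in for the physical button, and the file name 'latest_agreed.jpg', are my assumptions, not the actual implementation):

 import cv2

 cap = cv2.VideoCapture(0)

 while True:
     ok, frame = cap.read()
     if not ok:
         break
     cv2.imshow("I agree", frame)
     key = cv2.waitKey(1) & 0xFF
     if key == ord("b"):  # stand-in for the physical 'I agree' button
         # The screens around the room watch this file and show the newest capture.
         cv2.imwrite("latest_agreed.jpg", frame)
     elif key == ord("q"):
         break

 cap.release()
 cv2.destroyAllWindows()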


Poster


Before you are able to use 'I agree', you have to agree to the terms and conditions.

Have you read the terms of use? No. Have you accepted them? Yes. You have probably never thought about the possible consequences of signing a contract you have not read. Yet millions of people do it every day, by pressing 'I Agree' at the end of every pop-up screen of terms and conditions they come across.

Thank you for letting us confiscate your personal freedom. By blindly accepting the terms and conditions, you have officially lost control over your rights regarding the service you are using. From this moment, your personal data is under the copyright of 'I agree', or of any other service you agreed to use.

Congratulations, you have now exposed yourself for the whole world (wide web) to see. Your identity is stored on the web forever; your online ID will always be accessible to everyone's eyes.

Concept

We live in a digital era in which it is nearly impossible to stay anonymous. The digital world contains so many resources that are directly available, and the fact that access takes so little time is very convenient. When a service has a large number of users, it creates the feeling that the service is safe to use, and makes individuals trust it blindly. The desire to start using a popular service as fast as possible, so typical of today's zeitgeist, results in skipping the service's terms and conditions. Besides, what harm can pressing 'I agree' actually do? Somewhere in those conditions you will be protected, one would think. Somewhere in the back of your mind you know of a few bad scenarios that happened to a few unknown people, but given how many people use these services, surely it would never happen to you. Yet should you really expose so much of your body, your identity, your brain, to the eyes of the whole world? The digital network and the amount of (personal) data are growing to extents we cannot even imagine, and precisely because it is so unimaginable, our awareness of the digital footprint we leave behind for eternity fades away.

With 'I agree' we want to raise awareness of this phenomenon of blindly accepting services' terms and conditions, and of its possible consequences.

Spoken text by the voice

'Hi there, how are you?'

'Don’t be shy, come closer!'

'I enjoy meeting new people. Would you like to be friends?'

'We can share an experience. But first I’d like to show you something.'

'Do you agree to see what’s next?'

'Stand in front of the light and press the button to start.'

[Image: watching 'Her' (2013), an inspiration for the voice]


Behind the scenes

[Images: writing code, playing with pixels, experimentation, pretty results]


People that agreed



[Image: people that agreed]

Week 3 & 4 - Sensors and sensitivity

Individual research

I first started doing research into different types of sensors and into what a sensor really is: when is something a sensor, and when is it not?

Article on sweating hands, its meaning, and an experiment
Emotional sweating across the body: Comparing 16 different skin conductance measurement locations
The truth meter

These articles explain what sweaty hands mean and how you can measure them. That gave me the idea to make an 'excitement meter' that could serve as a new comparison tool for IMDb, for example: let people watch films with these sensors on their fingers, to show objectively how exciting a film actually is.
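
As a sketch of what such a meter would compute, assuming a stream of skin-conductance samples (the numbers here are invented; the idea never got reliable hardware behind it):

 import numpy as np

 samples = np.array([402, 405, 410, 480, 520, 515, 470, 430, 415, 408], float)

 baseline = samples[:3].mean()                                  # resting level
 smoothed = np.convolve(samples, np.ones(3) / 3, mode="valid")  # moving average
 excitement = (smoothed - baseline).clip(min=0) / baseline      # relative rise

 print(excitement.round(2))  # peaks where conductance rises above rest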


[Image: first sweat-sensor experiment]

My first experiment trying to make a sweat sensor. This failed radically: I couldn't find a way to get the sensor to give reliable readings.


I also thought I could push myself a bit further creatively. So I started doing a little more research into something I've found interesting ever since my high-school physics teacher first explained the basics: gyroscopes!

Gyro sensors - How they work and what's ahead

OK, actually building one yourself is super complicated and requires quite a lot of special materials, especially because I want to be able to connect other electronics to it. And using a 3-axis gyro breakout sensor is something I've done a couple of times in the past, so there's no challenge in that. So I started looking at videos for inspiration.

Da Vinci machines
Water sound waves
Magnetic Field Visualizer - How To See Invisible Magnetic Lines - 3D DIY
CYMATICS: Science Vs. Music - Nigel Stanford

This really is an awesome video clip. Visualizing sound attracts me a lot because it blurs the boundaries between our senses: all of a sudden you can not only hear sound but also see it, in the form of ripples, patterns, bubbles and splashes. From here on I started learning how to build a speaker circuit.

How to Build a Speaker Circuit with Adjustable Volume
A simple 1 watt audio amplifier
LM386 Audio Amplifier Circuit
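
What the amplifier circuit mainly needs for the cymatics experiments is a steady test tone. A small Python generator that writes a sine wave to a WAV file you can play through the speaker (the 110 Hz frequency and 5-second length are arbitrary choices):

 import math
 import struct
 import wave

 RATE, FREQ, SECONDS = 44100, 110.0, 5

 frames = bytearray()
 for i in range(RATE * SECONDS):
     # 16-bit sine sample at 80% of full scale
     frames += struct.pack("<h", int(32767 * 0.8 * math.sin(2 * math.pi * FREQ * i / RATE)))

 with wave.open("tone.wav", "w") as f:
     f.setnchannels(1)  # mono
     f.setsampwidth(2)  # 16-bit
     f.setframerate(RATE)
     f.writeframes(frames)

Different frequencies give different patterns, so it is worth generating a few tones and comparing them.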




With this last setup I managed to make a couple of nice patterns.


Combining ideas

When I teamed up with Kenah, we decided to use his idea, the body-voltage meter, as an input for my sound visualizer. After some research into body voltage, we could not find much more than people being afraid of radiation and of static electricity building up in their bodies. Another way to sort of measure body voltage is capacitive sensing. This is not the same thing, but it allows you to measure proximity and/or touch: when you move your hand through the electric field generated by the sensor, your body acts as the other plate of a capacitor, changing the capacitance, which you can measure. That number can be converted to a distance, but it requires lots of calibration and filtering of the measured values.
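
A sketch of that filtering step, assuming an Arduino streams one raw reading per line over serial (the port name, baseline and filter constant are assumptions you would tune during calibration):

 import serial  # pyserial

 port = serial.Serial("/dev/ttyUSB0", 9600)

 BASELINE = 120.0  # reading with no hand nearby, found by calibration
 ALPHA = 0.1       # low-pass strength: smaller = smoother but slower

 filtered = BASELINE
 while True:
     raw = float(port.readline().decode().strip())
     # An exponential moving average damps the jitter in the raw values.
     filtered = ALPHA * raw + (1 - ALPHA) * filtered
     # A hand nearby raises the capacitance and thus the reading; the excess
     # over the baseline is a rough proximity value.
     print(round(max(filtered - BASELINE, 0.0), 1))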

[Image: theremin experiment]

Because we found the capacitive sensor to be an unstable way to sense proximity, we attempted to build a theremin.

Our theremin, which did not function at all.

After this attempt, and a talk with Jeanine, who built one last year, we decided not to pursue the theremin, as it is very difficult to get it as well calibrated and reliable as we had in mind. We did more research into capacitive sensing and found out it is used in every touchscreen, trackpad, Magic Mouse and many other touch-based products. Nobody we spoke to had any idea about this and found touchscreens rather magical. So we sketched out some ideas of what we could do with it.

[Image: first capacitive-sensing test]

We found the first idea to be the most fruitful, because it demonstrates the capacitive sensor in the clearest way. So I made a little test setup in combination with Processing:

This is when we realised that if you make it square, it behaves more like a trackpad than like a beam. The circles get bigger when you (almost) touch a sensor pin.

Therefore the idea changed into a super low-resolution trackpad that demonstrates how this principle works. The LEDs show where your hand is by lighting up.
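
In schematic form, the mapping from pad readings to LEDs looks like this (the readings are fabricated; in the real setup they come from the capacitive pins):

 readings = [
     [3, 5, 2],
     [4, 42, 9],  # a hand hovering above the middle pad
     [2, 6, 3],
 ]

 THRESHOLD = 20  # below this, treat the pad as untouched

 for row in readings:
     print("".join("#" if value > THRESHOLD else "." for value in row))

 # Prints a 3x3 'LED grid' with only the middle LED on:
 # ...
 # .#.
 # ...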

Final product



Week 5 & 6 - Mind (of) the machine

Reconstructed Memories Workshop

[Image: 'Zomerlandschap' by B.C. Koekkoek]

When I was younger I visited the Boijmans museum; the painting I found most fascinating was 'Zomerlandschap' ('Summer Landscape') by B.C. Koekkoek. The canvas is pretty big and extremely detailed, but the only image I could find of this piece is greatly reduced in quality by the Boijmans museum. So I wanted to get this picture in high resolution.

With this image enhancer it would be possible to greatly increase the resolution of any picture you can find (online). That would make life much harder for the many websites that deliberately decrease image resolution, for example to prevent people from bypassing the need to pay for high-resolution images.

Sadly, the algorithm only works on pictures of faces. This is because the model is trained on faces and pretty much only knows what faces look like. In this case the computer has no clue what to enhance, as there are no trees anywhere in its training data.
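
As far as I understand the mechanics, the enhancer is a network trained to map downscaled crops back to their originals, so it can only 'imagine' the kind of content it was trained on. A schematic PyTorch sketch of that training loop (random tensors stand in for a folder of face crops; this is an illustration, not the actual tool):

 import torch
 import torch.nn as nn

 # Toy 4x upscaler: one convolution plus a pixel shuffle.
 model = nn.Sequential(
     nn.Conv2d(3, 3 * 16, kernel_size=3, padding=1),
     nn.PixelShuffle(4),  # rearranges channels into a 4x larger image
 )
 opt = torch.optim.Adam(model.parameters(), lr=1e-3)
 loss_fn = nn.MSELoss()

 for step in range(100):
     high = torch.rand(8, 3, 64, 64)  # stand-in for 'ground truth' face crops
     low = nn.functional.interpolate(high, scale_factor=0.25)  # degraded input
     opt.zero_grad()
     loss = loss_fn(model(low), high)  # learn to reconstruct the original
     loss.backward()
     opt.step()

Swap the random tensors for licence-plate crops and the same loop would learn to 'de-blur' licence plates instead, which is exactly the point below.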

It does mean you can feed the algorithm anything, as long as the images in the dataset are sufficiently alike. You could, for example, feed it lots of licence plates first, so you can re-enhance the blurred licence plates in Google Street View. By the same logic, blurred faces can be de-blurred as well, which can cause major privacy issues for everyone visible in these pictures.

On the other hand, big opportunities arise for the police: private surveillance footage is often of poor quality, making it hard to get a good view of someone committing a crime. They could use the picture enhancer, NCIS-style, to make the person easier to recognize.

Of course, with these sorts of technologies there are upsides and downsides. The purpose, the user, and the way it is used determine whether we, the crowd, feel good or bad about the technology.

Another interesting aspect of this technique is that the computer starts to have some sort of imagination. When a human looks at a blurred face, he can imagine what that person looks like in real life, because we know what faces look like: based on all the faces you've seen, you fill in the gaps, imagining the rest of the pixels. This is what the computer does too. Based on all the faces the algorithm has been fed, it can make an educated guess about what a new, blurred face would look like in higher resolution. For me, the idea that computers can have imagination is fascinating. Because then, can computers have creativity? Can they also empathise with humans?

This imagining can also lead to problems, though. It remains a guess; the computer can simply have imagined wrong, just like we sometimes do. But because it is the computer saying it, people tend to trust the answer more. It is a pretty large shift, from dealing with computers that compute and give us mathematical answers to computers that imagine and give us guessed answers. In the NCIS example, I believe seeing a wrong face of the person you're searching for could be more problematic than seeing a blurred one: if you're looking for someone who resembles the computer's guess, you can overlook the actual person because he does not resemble that image enough. I believe these are the problems we will have to deal with when these technologies emerge on a bigger scale and are used for more different kinds of tasks.


Study of [...] Assignment

For this assignment I chose to download the Instagram profile of a woman who is super consistent in the content she uploads. I had seen this system work on a dataset with large differences between pictures, but that gave back pretty much only super abstract images.


A project that explores issues surrounding artificial intelligence

I started off trying to understand what Artificial Intelligence, Machine Learning and Deep Learning are. This video helped a lot with that: Deep Learning Demystified
The basic idea is that deep learning is really good at finding patterns. Because it stores the probability that something will follow the same pattern, it can make predictions and combine certain inputs. So, what APIs and such are available for me to use? Here's a really complete and useful list:

50+ Useful Machine Learning & Prediction APIs
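
Before picking an API, the pattern-finding idea itself fits in a few lines. A minimal example using scikit-learn's bundled digit images (not one of the listed APIs, just an illustration of 'find patterns, then predict'):

 from sklearn.datasets import load_digits
 from sklearn.model_selection import train_test_split
 from sklearn.neural_network import MLPClassifier

 digits = load_digits()
 X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)

 model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
 model.fit(X_train, y_train)         # find patterns in the pixel values
 print(model.score(X_test, y_test))  # predict digits it has never seen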


So what products on the market right now already use this technique? Another great video:

10 Machine Learning based Products

Especially the AlphaGo project has gotten a lot of attention in the media, mainly because people were astonished that a computer running this software was able to beat the world champion Go player. Go is a game with a practically countless number of possible moves, which is why it relies more on intuition than on pure logic. Chess is more logic-based, because you can actually calculate moves ahead to determine the best option. This software is built on top of TensorFlow, a really nice framework that allows pretty much anyone to build artificial-intelligence systems for their own purposes.

Getting started with Tensorflow
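
In the spirit of that tutorial (though not taken from it), a minimal TensorFlow/Keras model: load the MNIST digits, stack a few layers, train, and evaluate:

 import tensorflow as tf

 (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
 x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to 0..1

 model = tf.keras.Sequential([
     tf.keras.layers.Flatten(input_shape=(28, 28)),
     tf.keras.layers.Dense(128, activation="relu"),
     tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
 ])
 model.compile(optimizer="adam",
               loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])
 model.fit(x_train, y_train, epochs=1)
 print(model.evaluate(x_test, y_test))  # [loss, accuracy] on unseen digits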


This allowed me to test lots and lots of different things people have tried and posted on GitHub. Many of them are built around TensorFlow; some others use Torch as the main computational framework. (CU)Torch lets people with an Nvidia CUDA-capable GPU train and sample models super quickly; TensorFlow does not make this as easy. Here's a list of nice GitHub pages:

Example implementation of the DeepStack algorithm for no-limit Leduc poker
Train your own image generator
Multi-layer Recurrent Neural Networks (LSTM, RNN) for word-level language models in Python using TensorFlow.
Music Generator Demo by @Sirajology on Youtube
This repository contains code in Torch 7 for text classification from character-level using convolutional networks.
Deep learning driven jazz generation using Keras & Theano!
Character-level language modelling
A Neural Algorithm of Artistic Style

Of course some worked a lot better than others. The character-level text generators, for example, did not give me readable output. Because they are character-based, they do not actually store a file containing all available words; they try to find patterns in the sequence of characters and build words out of that. It works reasonably well considering the system does not understand the concept of words at all, but it requires an incredibly large text dataset, and it still often returns non-existent words. The word-level model did surprisingly well: even though the sentences are of course complete gibberish, you can kind of understand the bigger picture of the text you feed it. I tested it with Geert Wilders' speeches from the last year, and sometimes it actually samples some right-wing statements.
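
Why character-level models produce non-words becomes clear even in a crude Markov-chain version: it only learns which character tends to follow which. A toy sketch (the 'speeches.txt' corpus file is an assumption; any large text works):

 import random
 from collections import defaultdict

 text = open("speeches.txt").read()

 ORDER = 3  # characters of context
 table = defaultdict(list)
 for i in range(len(text) - ORDER):
     table[text[i:i + ORDER]].append(text[i + ORDER])

 state = text[:ORDER]
 out = state
 for _ in range(300):
     if state not in table:
         break
     nxt = random.choice(table[state])
     out += nxt
     state = state[1:] + nxt  # slide the context window
 print(out)  # plausible-looking sequences, full of invented words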

The Deep Jazz GitHub actually makes some decent piano music out of the jazz song it was fed. When I fed it Welcome to the Jungle by Guns N' Roses, it kind of did the same thing. So I'm not really sure what is happening, but it certainly does not understand genre yet. I believe it is made to sound good only when you play the MIDI file on a piano. But overall, a nice start.

The artistic-style neural network also does a really good job of creating nice pictures. It works with two input images: one is the style, the other the actual input. It transfers the style onto the input picture, creating a new image. The style does not have to be a certain style of art; you can also make a picture of a skyline go from day to night, for example.
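
The trick that makes this possible, as the paper describes it, is that 'style' is captured as correlations between a network's feature maps (a Gram matrix), separately from the content. A schematic of just that computation, with a random tensor standing in for real network features:

 import torch

 features = torch.rand(64, 32, 32)     # 64 feature maps of 32x32
 flat = features.reshape(64, -1)       # one row per feature map
 gram = flat @ flat.T / flat.shape[1]  # how strongly each pair co-activates

 print(gram.shape)  # 64x64: the 'style fingerprint' the style loss tries to match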



For me, the interesting thing was that all of a sudden my computer did something creative. It had imagination and creativity. I found a nice article on intuition vs logic: http://artificial-intuition.com/intuition.html. It states: "Intuition is theory-free. It does not require a high-level logical model. This neatly solves a bootstrapping problem of Artificial Intelligence. You cannot create high-level models until you already have intelligence." This is why the AlphaGo system is genius. It is built to be general-purpose: you can feed it anything and it will find a way to deal with it. It will find patterns and, when there is a feedback loop, learn from its mistakes. The whole training process is intuitive trial and error.
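
That trial-and-error loop fits in miniature in a few lines: an epsilon-greedy agent that starts out guessing and, through the feedback of rewards, drifts toward the action that pays off (the payoff numbers are invented):

 import random

 true_payoffs = [0.2, 0.5, 0.8]  # hidden from the agent
 estimates, counts = [0.0] * 3, [0] * 3

 for step in range(1000):
     if random.random() < 0.1:  # explore: try something at random
         action = random.randrange(3)
     else:                      # exploit: use what it has learned so far
         action = estimates.index(max(estimates))
     reward = 1 if random.random() < true_payoffs[action] else 0
     counts[action] += 1
     # Feedback loop: nudge the estimate toward the observed reward.
     estimates[action] += (reward - estimates[action]) / counts[action]

 print([round(e, 2) for e in estimates])  # ends up near the true payoffs

Scaled up massively, that same learn-from-feedback process is the kind of 'intuition' behind a system like AlphaGo.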

