Philip Ghering
0888689
Communicatie & Multimedia Design
http://www.philipghering.nl
Digital Craft minor 2017: How to be Human
Week 1 & 2 - On the body
My body part was the brain. These are the resources I used:
Inside the mind
Brain computer interface
Proof that Darwin was right
10 ways monkeys are more like us than we think
Your 21 senses
Proprioceptors
10 lesser known but important human senses
Khan Academy lesson on Somatosensation
How many senses do you have?
3d model of a spine
I became aware of the fact that we can actually sense a lot more than our five senses (hearing, tasting, touching, smelling and seeing) would logically allow. The conservative scientific world has adopted the idea that humans have only these five senses; the reason for that is unclear to me. I became fascinated by proprioception, for example: the way you know where your arms and legs are relative to the rest of your body. Imagine being in absolute darkness; you can still tell whether you've lifted your arms thanks to receptors in your muscles. You can also sense when you're hungry or in need of sleep or rest, and you can tell when you have to stop exercising. These are all things we could not know if we really had only the traditional five senses. There are also many different types of receptors needed to register all the kinds of touch: very hard pressure turns into pain, and so does heat.
I wanted to go in a direction that explores these mysteries around our sensing, and the limitations and possibilities it holds.
'I agree'
Because of the limited time we decided it would be best to team up completely and do one project together, instead of each making something separately with at least one thing in common. 'I agree' was the result of that.
Experimentation and concepting
Slit-scan was the first prototype for taking pictures with the webcam. Using a program called 'Processing' we 'abused' someone's body by digitally deforming it:
The eventual result of our research is a sequence where the pixels come together to form the last picture taken:
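Roughly, the slit-scan technique works like this (a minimal sketch in Python with OpenCV rather than the original Processing code; the camera index and slice width are assumptions): every frame, one thin vertical slice of the webcam image is appended to a growing canvas, so anything that moves in front of the camera gets smeared and deformed over time.

# A minimal slit-scan sketch in Python/OpenCV (not the original Processing code).
# Each frame, one thin vertical slice of the webcam image is appended to a canvas,
# so movement in front of the camera gets smeared and deformed over time.
import cv2
import numpy as np

SLICE_WIDTH = 2            # assumed slice width in pixels
MAX_SLICES = 640 // SLICE_WIDTH
cap = cv2.VideoCapture(0)  # assumed camera index

slices = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mid = frame.shape[1] // 2
    slices.append(frame[:, mid:mid + SLICE_WIDTH].copy())
    slices = slices[-MAX_SLICES:]          # keep roughly the last 640 pixels
    cv2.imshow('slitscan', np.hstack(slices))
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to stop
        break

cap.release()
cv2.destroyAllWindows()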
We became interested in further 'abusing' and deforming the body in a digital way, so we thought about how this actually happens online. The most interesting thing we could think of was the fact that our pictures and other content get abused by the services we use. That is where the concept originated.
'I agree' consists of a webcam attached to a tall upright pole, with a spotlight shining in the background. A welcoming female voice beckons people to come closer and asks if they agree to see what's next. If they agree, she tells them to press the button. Pressing this button makes the webcam take your picture and spread it to multiple screens throughout the room.
Before you are able to use 'I agree', you have to agree to the terms and conditions.
Have you read the terms of use? No. Have you accepted them? Yes. You have probably never thought about the possible consequences of signing a contract you have not read, although millions of people do it every day by pressing 'I Agree' at the end of every pop-up screen of terms and conditions they come across.
Thank you for letting us confiscate your personal freedom. By blindly accepting the terms and conditions, you have officially lost control over your rights regarding the service you are using. From this moment, your personal data are under the copyright of I agree, or any other service you agreed to use.
Congratulations, you have now exposed yourself for the whole world (wide web) to see. Your identity is now forever stored on the web. Your online ID will forever be accessible to the eyes of everyone.
Concept
We live in a digital era in which it is nearly impossible to stay anonymous. The digital world contains so many resources that are directly available, and the fact that access takes so little time is very convenient. When a service has a large number of users, it creates the feeling that the service can be used safely and makes an individual trust it blindly. The desire to start using a popular service as fast as possible, in today's zeitgeist, results in skipping the service's terms and conditions. Besides, what harm can pressing 'I agree' actually cause? Somewhere in these conditions you will be protected, one would think. Somewhere in the back of your mind you know a few negative scenarios that have happened to a few unknown people, but with so many people using these services it would surely never happen to you. But should you really expose so much of your body, your identity, your brain, to the eyes of the whole world? The digital network and the amount of (personal) data are growing to extents we cannot even imagine, and because of this unimaginable scale the awareness of the digital footprint we leave behind for eternity fades away.
With the project 'I agree' we want to raise awareness of this phenomenon of blindly accepting the terms and conditions of these services, and of its possible consequences.
Spoken text by the voice
'Hi there, how are you?'
'Don’t be shy, come closer!'
'I enjoy meeting new people. Would you like to be friends?'
'We can share an experience. But first I’d like to show you something.'
'Do you agree to see what’s next?'
'Stand in front of the light and press the button to start.'
Behind the scenes
People that agreed
Final exhibition setup
We added the option to have us delete your photo for €5. You can do this through the left screen in the right picture: a simple form with a payment system. This proposes an alternative to the current situation of free services. We think it would be better if these services offered users a paid version that guarantees privacy.
Week 3 & 4 - Sensors and sensitivity
Individual research
I first started doing research into different types of sensors and what a sensor really is: when is something a sensor and when isn't it?
Article on sweating hands, its meaning and an experiment
Emotional sweating across the body: Comparing 16 different skin conductance measurement locations
The truth meter
These articles explain what sweaty hands mean and how you can measure them. So I had the idea to make an 'excitement meter' that could be a new comparison tool for IMDb, for example: let people watch films with these sensors on their fingers to show in an objective manner how exciting the film actually was.
My first experiment trying to make a sweat sensor. This failed completely because I couldn't find a way to get the sensor to give reliable readings.
I also thought I could push myself a bit further creatively, so I started doing a little more research into something I've found interesting ever since my physics teacher in high school explained the basics: gyroscopes!
Gyro sensors - How they work and what's ahead
OK, actually building one yourself is super complicated and requires quite a lot of special materials, especially because I want to be able to connect other electronics to it. And using a 3-axis gyro breakout sensor is something I've done a couple of times in the past, so there's no challenge in that. So I started looking at videos for inspiration.
Da Vinci machines
Water sound waves
Magnetic Field Visualizer - How To See Invisible Magnetic Lines - 3D DIY
CYMATICS: Science Vs. Music - Nigel Stanford
This really is an awesome video clip. Visualizing sound attracts me a lot because it blurs the lines between our senses. All of a sudden you can not only hear sound but also see it, in the form of ripples, patterns, bubbles and splashes. From here on I started learning how to build a speaker circuit.
How to Build a Speaker Circuit with Adjustable Volume
A simple 1 watt audio amplifier
LM386 Audio Amplifier Circuit
With this last setup I managed to make a couple of nice patterns.
Combining ideas
When I teamed up with Kenah we decided to use his idea, the body voltage meter, as the input for my sound visualizer. After some research into body voltage meters we could not find much more than people being afraid of radiation and of static electricity building up in their bodies. Another way to more or less measure body voltage is capacitive sensing. It is not the same, but it allows you to measure proximity and/or touch. It works by moving your hand through the electric field generated by the sensor: your body becomes the other plate of a capacitor, changing the capacitance, which you can measure. This number can be converted to a distance, but that requires a lot of calibration and filtering of the measured values.
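As a sketch of what that filtering and calibration could look like on the computer side (assuming the microcontroller streams one raw capacitive reading per line over serial; the port name, baud rate and calibration constants below are made up):

# Sketch of smoothing and calibrating raw capacitive-sensor readings.
# Assumes a microcontroller streams one raw value per line over serial;
# the port name, baud rate and calibration constants are assumptions.
import serial

PORT = '/dev/ttyUSB0'   # assumed serial port
BAUD = 9600
ALPHA = 0.1             # smoothing factor for the exponential moving average
BASELINE = 200.0        # raw reading with no hand nearby (needs calibration)
TOUCH = 4000.0          # raw reading when touching the sensor (needs calibration)

ser = serial.Serial(PORT, BAUD, timeout=1)
smoothed = BASELINE

while True:
    line = ser.readline().decode('ascii', errors='ignore').strip()
    if not line:
        continue
    try:
        raw = float(line)
    except ValueError:
        continue
    # exponential moving average to suppress the jittery raw values
    smoothed = ALPHA * raw + (1 - ALPHA) * smoothed
    # map the smoothed value to a rough 0..1 proximity estimate
    proximity = (smoothed - BASELINE) / (TOUCH - BASELINE)
    proximity = max(0.0, min(1.0, proximity))
    print(f'raw={raw:.0f} smoothed={smoothed:.0f} proximity={proximity:.2f}')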
Because we found the capacitive sensor to be an unstable way to sense proximity, we attempted to build a theremin.
Our theremin, which did not function at all.
After this attempt, and a talk with Jeanine, who built one last year, we decided not to do the theremin, as it is very difficult to get it as well calibrated and reliable as we had in mind. We did more research into capacitive sensing and found out it is used in every touchscreen, trackpad, Magic Mouse and many other touch-based products. Nobody we spoke to had any idea about this and found touchscreens rather magical. Therefore we sketched out some ideas of what we could do with this.
We found the first idea to be the most fruitful because it demonstrates the capacitive sensor in the clearest way. So I made a little test setup in combination with Processing:
This is when we realised it is more like a trackpad when you make it square, rather than a beam. The circles get bigger when you (almost) touch a sensor pin.
Therefore the idea changed into a super low-resolution trackpad that demonstrates how this principle works. The LEDs show where your hand is by lighting up.
Final product
These are WS2814 LEDs, which have special drivers inside that make it super easy to address each individual LED; only one data line is needed. The LEDs are grouped to match the sensors: sensor 1 activates LED 1, sensor 2 activates LEDs 2 & 3, and so on. The middle sensor activates LEDs 6, 7, 10 and 11.
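Just to make the grouping explicit, here is the sensor-to-LED mapping as a small Python sketch (on the actual installation this logic runs on the microcontroller that drives the LEDs; the middle sensor being number 5 and the 0.3 threshold are assumptions):

# Sketch of the sensor-to-LED grouping only; the real build runs this on the
# microcontroller driving the addressable LEDs over a single data line.
LED_GROUPS = {
    1: [1],                # sensor 1 -> LED 1
    2: [2, 3],             # sensor 2 -> LEDs 2 & 3
    5: [6, 7, 10, 11],     # the middle sensor (assumed here to be sensor 5)
    # ... the remaining sensors are mapped the same way
}

THRESHOLD = 0.3            # assumed proximity level above which a sensor counts as active

def leds_to_light(proximities):
    """proximities: dict mapping sensor number -> proximity estimate between 0 and 1."""
    lit = set()
    for sensor, value in proximities.items():
        if value > THRESHOLD and sensor in LED_GROUPS:
            lit.update(LED_GROUPS[sensor])
    return sorted(lit)

# example: a hand hovering over sensor 2 while pressing the middle sensor
print(leds_to_light({1: 0.1, 2: 0.5, 5: 0.9}))   # -> [2, 3, 6, 7, 10, 11]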
Week 5 & 6 - Mind (of) the machine
Reconstructed Memories Workshop
When I was younger I visited the Boijmans museum; the painting I found most fascinating was 'Zomerlandschap' by B.C. Koekkoek. The canvas is pretty big and extremely detailed. The only image I could find of this piece has been greatly reduced in quality by the Boijmans museum. Therefore I wanted to get this picture in high resolution.
With this image enhancer it would be possible to greatly increase the resolution of any picture you can find (online). That would make things much harder for the many websites that deliberately decrease image resolution, for example to prevent people from bypassing the need to pay for high-resolution images.
Sadly, the algorithm only works for pictures that only contain faces. This is because the model is trained on faces and pretty much only knows what faces look like. So in this case, the computer has no clue what to enhance as there is no reference to trees anywhere in the training data.
This does mean you can feed the algorithm anything, as long as the images in the dataset are alike enough. You could, for example, feed it lots of licence plates first so you can re-enhance the blurred licence plates on Google Street View. By the same logic, all blurred faces can be de-blurred as well, which can cause major privacy issues for everyone visible in these pictures.
On the other hand, big opportunities come up for the police, because private surveillance footage is often of such poor quality that it is hard to get a good view of someone committing a crime. They could use the picture enhancer, NCIS-style, to make it easier to recognize the person.
Of course, with these sorts of technologies there are upsides and downsides. The purpose, the user and the way it is used determine whether we, the crowd, feel good or bad about the use of this technology.
Another interesting aspect of this technique is that the computer starts to have some sort of imagination. When a human looks at a blurred face, he can imagine what that person looks like in real life, because we know what faces look like. So based on all the faces you've seen, you can fill in the gaps for this new one: you imagine the rest of the pixels. This is what the computer does too. Based on all the faces the algorithm has been fed, it can make an educated guess about what a new, blurred face will look like in a higher resolution. For me, the idea that computers can have imagination is fascinating. Because then, can computers have creativity? Can they also empathise with humans?
This imagining can also lead to problems though. It remains a guess; the computer can simply have imagined wrong, just like we sometimes do. But because it is the computer saying it, people tend to trust the answer more. It is a pretty large shift from dealing with computers that compute and give us mathematical answers to computers that imagine and give us guessed answers. In the NCIS example, I believe it could be more problematic to see a wrong face of the person you're searching for than a blurred one: if you're really looking for someone who looks like the computer's guess, you can potentially overlook the actual person because he does not resemble that image enough. I believe these are the problems we are going to have to deal with when these technologies emerge on a bigger scale and are used for more different kinds of tasks.
A project that explores issues surrounding artificial intelligence
I started off trying to understand what Artificial Intelligence, Machine Learning and Deep Learning is. This video helped a lot with that:
Deep Learning Demystified
The basic idea is that deep learning is really good at finding patterns. And because it stores the probability that something follows the same pattern, it can make predictions and combine different inputs. So, what APIs and such are available for me to use? Here's a really complete and useful list:
50+ Useful Machine Learning & Prediction APIs
So what products on the market right now already use this technique? Another great video:
10 Machine Learning based Products
Especially the AlphaGo project has gotten a lot of attention in the media, mainly because people were astonished that a computer running this software was able to beat the world champion Go player. Go is a game with a practically countless number of possible moves, which is why the game is based more on intuition than on pure logic. Chess is more logic-based, because you can calculate moves far ahead to determine the best option. This software is built on top of TensorFlow, a super nice framework that allows pretty much anyone to build artificial intelligence systems for their own purposes.
Getting started with Tensorflow
This allowed me to test lots and lots of different things people have tried out and posted on GitHub. Many of them are built around TensorFlow; some others use Torch as the main computational framework. (CU)Torch allows people with an Nvidia CUDA GPU to be super quick at training and sampling models; TensorFlow does not allow this (as easily). Here's a list of nice GitHub pages:
Example implementation of the DeepStack algorithm for no-limit Leduc poker
Train your own image generator
Multi-layer Recurrent Neural Networks (LSTM, RNN) for word-level language models in Python using TensorFlow.
Music Generator Demo by @Sirajology on Youtube
This repository contains code in Torch 7 for text classification from character-level using convolutional networks.
Deep learning driven jazz generation using Keras & Theano!
Character-level language modelling
A Neural Algorithm of Artistic Style
Of course some worked a lot better than others. For example, the character-level text generators did not give me readable output. Because these are character-based, they do not actually store a file that contains all available words; they try to find patterns in the sequence of characters and make words out of that. It works reasonably well considering the system does not understand the concept of words at all, but it requires an enormous dataset of text, and it still often returns non-existent words. The word-level model did surprisingly well. Even though the sentences are complete gibberish, of course, you can kind of understand the bigger picture of the text you feed it. I've tested it with Geert Wilders' speeches of the last year, and sometimes it actually samples some right-wing statements.
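What all these generators share is the sampling step: the network predicts a probability for every possible next character or word, and one is drawn at random, usually with a 'temperature' knob that trades safe output for surprising output. A small illustration with numpy (the vocabulary and probabilities here are made up, not actual model output):

# Sketch of the sampling step shared by the character- and word-level generators:
# draw the next token from the predicted probabilities, with a 'temperature'
# that trades predictability for surprise. Vocabulary and probabilities are made up.
import numpy as np

def sample(probs, temperature=1.0):
    probs = np.asarray(probs, dtype=np.float64)
    # re-weight: low temperature -> safe/greedy picks, high temperature -> more random
    logits = np.log(probs + 1e-12) / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return np.random.choice(len(weights), p=weights)

vocab = ['de', 'het', 'een', 'nederland', 'grens']   # made-up word-level vocabulary
next_word_probs = [0.4, 0.25, 0.2, 0.1, 0.05]        # made-up model output

for t in (0.5, 1.0, 1.5):
    picks = [vocab[sample(next_word_probs, t)] for _ in range(8)]
    print(t, picks)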
The Deep Jazz GitHub project actually made some decent piano music from the jazz song it was fed. When I fed it Welcome to the Jungle by Guns N' Roses, it kind of did the same thing, so I'm not really sure what is happening, but it certainly does not understand genre yet. I believe it is made to only sound good when you play the MIDI file on a piano. But overall a nice start.
The artistic-style neural network also does a really good job creating some nice pictures. It works with two input pictures: one is the style and the other is the actual input. It transfers the style onto the input picture, creating a new image. The style does not have to be a certain style of art; you can also make a picture of a skyline go from day to night, for example.
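The idea behind 'A Neural Algorithm of Artistic Style' is that the style of an image is captured by the correlations between the feature maps of a convolutional network (Gram matrices), while the content is the feature maps themselves; the generated image is then optimised to match both at once. A rough numpy sketch of those two loss terms (the feature maps below are random placeholders; in the real repo they come out of a pretrained VGG network):

# Rough sketch of the two loss terms behind neural style transfer:
# content loss compares feature maps directly, style loss compares their
# Gram matrices (feature correlations). The feature maps below are random
# placeholders; in the real implementation they come from a pretrained CNN (VGG).
import numpy as np

def gram_matrix(features):
    # features: (channels, height, width) feature maps from one network layer
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def content_loss(gen_feat, content_feat):
    return np.mean((gen_feat - content_feat) ** 2)

def style_loss(gen_feat, style_feat):
    return np.mean((gram_matrix(gen_feat) - gram_matrix(style_feat)) ** 2)

rng = np.random.default_rng(0)
content_feat = rng.standard_normal((64, 32, 32))   # placeholder "content" features
style_feat = rng.standard_normal((64, 32, 32))     # placeholder "style" features
gen_feat = rng.standard_normal((64, 32, 32))       # features of the generated image

alpha, beta = 1.0, 1000.0   # content vs style weighting; illustrative values only
total = alpha * content_loss(gen_feat, content_feat) + beta * style_loss(gen_feat, style_feat)
print(total)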
For me the interesting thing was that all of a sudden my computer did something creative. It had imagination and creativity. I found a nice article on intuition vs logic: http://artificial-intuition.com/intuition.html. This article states: "Intuition is theory-free. It does not require a high-level logical model. This neatly solves a bootstrapping problem of Artificial Intelligence. You cannot create high-level models until you already have intelligence." This is why the AlphaGo system (Sonnet) is genius. It is built to be general purpose: you can feed it anything and it will find a way to deal with it. It will find patterns and, when there is a feedback loop, learn from its mistakes. This whole training process is intuitive trial and error.
To display the 'creativeness' computers can adopt using this technology, we decided to generate poems from popular song lyrics. We posted these on http://hellopoetry.com. The nice thing about this website is that you need to do a small application before you can activate your profile, by submitting one of your poems. It seemed like someone reads this and then allows you in or not. We got in!
To display all the creative things the computer could do, I made a website that shows three creative sectors side by side: http://philipghering.nl/creatibot/.
Week 7 - Not Mica but Donghua project
I did not participate in the Mica project because I joined a CMD trip to Shanghai to do a project with students from the faculty of Art & Design of Donghua University. The students I worked with were majoring in Graphic Design and New Media.
The project was for Somersby, a cider brand owned by Carlsberg. They want their cider to be the summer drink of 2018. Somersby is all about the magic moments: bringing friends and family together, making people happy, et cetera. It's about the small moments, like "when you wake up in the morning and realise it is the weekend."
We came up with a concept based on the symbolic meaning of an apple in China. The Chinese word for apple (píngguǒ) sounds like píng'ān, which means safety and peace, so an apple stands for safety and prosperity. We created a present to give away, explaining that you wish that person safety, prosperity and happiness by sharing a Somersby cider together. The present contains a bottle opener and an invitation to have a cider somewhere. The bottle opener is important because the target group (20-25 year olds) generally does not own one, since there is not really a drinking culture in China, especially not outside bars or clubs.
What is my Craft?
What is your craft? (define your discipline, method or approach)
Technologist: start with a technique or phenomenon, then experiment and refine the concept whilst prototyping.
What are the tools and media of your craft?
Code, electronics, wood, 3D printing, light (pixels) and sound.
What are the borders of this practice? (what new media technologies have arisen / what is its future of the field)
New technologies? Where to start? The future will become more about going back to analogue media, or the illusion of direct manipulation. I think people have lost touch with the products they use and are longing for a better understanding, to regain a feeling of control. Knowing what to do when something breaks, and roughly how something works, is a place I want to go back to.
Connect to a historical discourse and give concrete examples of contemporary practitioners.
The global accessibility of the internet. Awesome practitioners are Jonas Vorwerk, Jen Lewin, Tea Uglow and Wu Jeuhui.
What is the position of your practice in relation to newer technologies?
What I like to do is communicate ideas and points of view using technology. I like to prototype; going from an idea to something tangible in a short time is the happiest part of the design process for me. I am always curious about new technologies and possibilities. I want to try them, just to have tried them out.
From a tinkering perspective it is always nice to have different stuff to fiddle around with. But from a consumer point of view I rarely see examples of products using new technologies that actually solve problems and make life easier. For example, the 'smart home' movement shows very well how hard big companies are trying to find new ways to 'help' consumers. As far as I know, the wake-up light has never been proven to actually be beneficial to health. How hard is it to touch a light switch when you arrive? And what do I do when all of a sudden this stuff does not respond?
So. My position is that I like to play around with new stuff whilst being critical. I want to make products or installations that explore unusual interactions, haptic/tactility and the everydayness of things.
What theme will you explore for your Q14 project?
What interests me now is that when devices give no room for skill or adjustment, the end result can be the same but the user ends up less satisfied. By adding steps or customization possibilities, users can mess up the end result or become experts in using that device. That gives satisfaction.
By not making something easy to use and understand, in the long run satisfaction will be higher. Of course this does not apply to everything.
I want to find out to what extent this is true. In which cases does it hold and in which does it not? How can you turn the ease of services like Spotify into something more satisfying? How can you bring the feeling of owning music into services like Spotify? How can you create (the illusion of) control in products that contain new technologies? What would opening a digital folder look like when it is inside physical storage? What everydayness do I want to amplify? What do I want to take away?
The first prototype I am going to make is a hacked Senseo coffee machine. I am going to add a few turn knobs with which you can adjust the water pressure, the amount of water and the temperature. By doing this I want to find out what it is about having more control over the outcome of this product that makes people happy or annoyed. This is applicable to many more types of everyday products. Take a light switch, for example: I would like to build a test box with different switches that may or may not exist yet. There are digital ways to set timers for lamps; I would like to build those in an analogue way too.
This also goes further into haptics and tactile interfaces. I started this study in the sensors & sensitivity project by making a 3D trackpad. The surface was elastic textile, which felt nice. The interesting thing for me was how people reacted to it at first contact. I noticed many people are very cautious about pressing into the surface; people stroked the surface gently. I guess we did not use the right cues in the design of the box to visually tell people they can press it all the way down without breaking it. People are probably used to touch responsiveness due to the wide use of touchscreens: everything that can be touched gets associated with how conventional touchscreens work.
I am going to build further on these insights and this research in Q14.
Q14 - Only touch
My first ideas about my theme for Q14 are about what makes people satisfied and happy with the product they are using. Not making something easy to use isn't the only way to research this. I started reading the book The Paradox of Choice by B. Schwartz. A central theme in this book is the difference between people who tend to be maximizers and those who are satisficers. The basic question is: how can it be that depression rates in western countries are on the rise, while the amount of luxury most people in western countries experience is tremendous? He basically states this: if the ability to choose enables you to get a better car, house, job, vacation, or coffeemaker, but the process of choice makes you feel worse about what you've chosen, you really haven't gained anything from the opportunity to choose.
So what do philosophers across history say about happiness and desire?
Seek happiness by limiting desires, rather than satisfying them. - J.S. Mill
Happiness is found not in seeking more, but in the capacity to enjoy less. - Socrates
A wise man is content with his lot, without wishing for what he has not. - Seneca
If you are depressed, you are living in the past. If you are anxious, you are living in the future. If you are at peace, you are living in the present. - Lao Tzu
Happiness is like a butterfly: the more you chase it, the more it will elude you. But if you turn your attention to other things, it will come and sit softly on your shoulder. - H.D. Thoreau
I think no further explanation is needed to express the overall theme of these quotes.
Another theme that has always had my interest is mindfulness. Not necessarily meditation or Tai Chi, but really just being in this moment, at this time only. People tend to always be busy with people and places they are not with at that moment. This is not necessarily a bad thing; it becomes a bad thing when you are never really experiencing your direct environment. Due to the smart techniques used in many digital devices and services, people get lured into a reciprocity cycle. This is a widely used way to get people to return to platforms, increasing their attention time. It also destroys your focus, attention span and overall efficiency. In my opinion people are not really aware that they 1) are being heavily persuaded to use services more often and for longer, and 2) are being drained of focus, because your attention span decreases heavily when you are distracted by notifications too much.