Difference between revisions of "User:Noemiino/year4"
Revision as of 21:17, 2 October 2018
Info
NOEMI BIRO
0919394@hr.nl
Graphic Design
Introduction
As a graphic designer I am trained to look at small visual details and make adjustments to them. I am interested in these details not just from a human perspective, but also in how new technologies enable new ways of exploration. I want to include technology in my work as much as possible, in the form of interactions, new layers or research experiments.
I look at digital craft and see the opportunity to work with my combined interests, analog + digital in one hybrid, and that excites me. I am curious about how technology enables machines to recognize, think, design (?) and how humans can create the conditions for these interactions to happen. In my opinion, wherever technology is used, interaction is already created; what is left for the designer is to make sure the conditions form the framework in which it happens.
Project 1_Critical Making exercise
Reimagine an existing technology or platform using the provided sets of cards. In the first lesson of this project, our group used the cards provided by Shailoh to choose a theme, method and presentation technique. Picking the cards at random gave us a combination that spelled: 'Make an object designed for a tree to use Youtube Comments and use the format of a company and a business model to present your idea'.
Group project realised by Sjoerd Legue, Tom Schouw, Emiel Gilijamse, Manou, Iris and Noemi Biro
RESEARCH
We used these randomly picked cards to set up our project and brainstorm about a potential theme. Almost immediately several ideas popped up, the one becoming the base of our concept being the writings of Peter Wohlleben, a German forester interested in the scientific side of how trees communicate with each other. His book, The Hidden Life of Trees, became our main source of information. What Wohlleben researches is how trees trade nutrients and minerals so that their fellow trees, mostly family, can survive: for example, older, bigger trees send nutrients down to the smaller trees closer to the surface, which have less ability to generate energy through photosynthesis.
Examples
Trees use their root network to send and receive nutrients, but not entirely by themselves: the communication system, also known as the 'Wood Wide Web', is a symbiosis of the trees' root network and the mycelium networks that grow between the roots, connecting one tree to another. This mycelium network, known as a mycorrhizal network, is responsible for the successful communication between trees.
Other scientists are also working on this subject. Suzanne Simard from Canada researches the communicative networks in forests as well: she is mapping the communication taking place within natural forests, proving the nurturing abilities of trees as they work together to create a sustainable living environment. It is a network where the so-called 'mother trees' take extra care of their offspring, but also of other species, by sharing their nutrition with those in need.
Artists, scientists and designers are also intrigued by this phenomenon. For example, Barbara Mazzolai from the University of Pisa has had her work published in the book 'Biomimicry for Designers' by Veronika Kapsali. She developed a robot inspired by the communicative abilities of trees, mimicking their movements in the search for nutrition in the soil.
Bios pot
The idea of presenting this project within a business model introduced us to the company Bios, a California-based company which produces a biodegradable urn that can become a tree after being planted in the soil. We wanted to embed this concept in our project. Their promotional video could provide us with interesting material for our own video, and besides the material, we were inspired by the application that was part of their product.
CONCEPT
Building on the research about the Wood Wide Web and the ability of trees to communicate with each other, we wanted to make the tree able to communicate with us, using the same principle of sending different kinds of nutrition depending on what the tree wants to communicate, such as 'danger' or 'help me out'. We wanted to let the trees talk to us through a digital interface logged into the root network of the tree.
At first, we wanted to give the tree a voice by giving it the ability to post likes using its own Youtube account, where in- and output would take place in the same root network. But is a tree able to receive information the way we can? And if so, what would it do with it? We wanted to stay close to the scientific evidence on talking trees and decided to focus on the other application within the field of human-tree communication.
Another ambition of our team was to present a consumer product, taking on the role of a company trying to sell it on the global market. After researching other products concerning plants and trees, we found the biodegradable Bios urn: an urn used as a pot to plant a tree or plant, which can later be buried in the ground. This product inspired the physical part of our project. We wanted to construct a smart vase with the technical ability to sense the chemical secretions from the roots and convert these into a positive or negative output. The input would also take place through the built-in sensors in the vase, using a wireless internet connection.
PROCESS
Project 2 _ Cybernetic Prosthetics
In small groups, you will present a cluster of self-directed works as a prototype of a new relationship between a biological organism and a machine, relating to our explorations on reimagining technology in the posthuman age. The prototypes should be materialized in 3D form and simulate interactive feedback loops that generate emergent forms.
Group project realised by Sjoerd Legue, Tom Schouw, Emiel Gilijamse and Noemi Biro
INSPIRATION
We started this project with a big brainstorm around the different human senses. Looking at research and recent publications, the Guardian article << No hugging: are we living through a crisis of touch? >> raised the question of touch in the current state of our society. What we found intriguing about this sense is how it is becoming more and more confined to the technological interfaces of our daily life. It is becoming a taboo to touch a stranger, but it is considered normal to walk around the streets holding our idle phone. Institutions are also putting regulations on what is considered appropriate contact between professionals and patients. For example, where the reaction to bad news used to be a hug, nowadays it is more likely to be a pat on the shoulder than such a large area of contact with each other.
Based on the above-mentioned article we started to search for more scientific research and projects around touch and technological surfaces to gain insight into how we treat our closest gadgets.
A recent article from Forbes magazine, << This AI Can Recognize Anger, Awe, Desire, Fear, Hate, Grief, Love ... By How You Touch Your Phone >>, covers the research of Alicia Heraz from the Brain Mining Lab in Montreal, who trained an algorithm to recognize human emotional patterns from the way we touch our phones. In her research paper << Recognition of Emotions Conveyed by Touch Through Force-Sensitive Screens >>, Heraz reaches the conclusion:
- "Emotions modulate our touches on force-sensitive screens, and humans have a natural ability to recognize other people’s emotions by watching prerecorded videos of their expressive touches. Machines can learn the same emotion recognition ability and do better than humans if they are allowed to continue learning on new data. It is possible to enable force-sensitive screens to recognize users’ emotions and share this emotional insight with users, increasing users’ emotional awareness and allowing researchers to design better technologies for well-being."
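To illustrate the kind of pipeline this research describes, here is a minimal, hypothetical sketch of our own, not Heraz's actual model: each touch is reduced to a few force features and matched to the nearest emotion centroid. The feature set and the centroid values are invented for illustration only.

```python
# Hypothetical sketch: classifying a touch by simple force features.
# The centroids below are invented example values, not real training data.

def touch_features(samples):
    """Reduce a series of (time, force) samples to (duration, peak, mean)."""
    times = [t for t, _ in samples]
    forces = [f for _, f in samples]
    duration = max(times) - min(times)
    return (duration, max(forces), sum(forces) / len(forces))

# Invented centroids: (duration in s, peak force, mean force) per emotion.
CENTROIDS = {
    "anger": (0.2, 0.9, 0.7),   # short, hard presses
    "love":  (1.5, 0.4, 0.3),   # long, gentle strokes
    "fear":  (0.1, 0.3, 0.2),   # brief, light taps
}

def classify(samples):
    """Return the emotion whose centroid is closest to the touch features."""
    feats = touch_features(samples)
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(feats, centroid))
    return min(CENTROIDS, key=lambda e: dist(CENTROIDS[e]))

# A short, forceful jab lands closest to the 'anger' centroid.
jab = [(0.0, 0.8), (0.1, 0.95), (0.2, 0.85)]
print(classify(jab))  # -> anger
```

A real system would learn such centroids (or a richer model) from labeled touch recordings, but the shape of the task is the same: force over time in, emotion label out.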
We looked at current artificial intelligence models trained on the senses and recognized the pattern Heraz also mentions: there is not enough focus on touch. Most emotional processing focuses on facial expressions through computer vision. There is an interesting contradiction in how private we are about somebody touching our face, while the same body part has become public domain through security cameras and shared pictures.
Researching further into the state of current artificial intelligence on the market and in our surroundings, this quote from the documentary << More Human than Human >> captured our attention:
- " We need to make it as human as possible "
Looking into the future of AI technology, the documentary imagines a world where, in order for human and machine to coexist, they need to evolve together under the values of compassion and equality. We humans are receptive to our surroundings through touch, so we started to imagine how we could introduce AI into this circle as a first step towards equality. Even though the project is about extending AI on an emotional level, we also recognized this attempt as a humanity-saving mission: once AI is capable of autonomous thought and can collect all the information on the internet, our superiority as a species is called into question, and many specialists even argue that it will be overthrown. That is why it is essential to think of this new relationship in terms of equality and to feed our empathetic information into the machines, so they can function under the same ethical codes as we do.
OBJECTIVE
From this premise, we first started to think of a first AID kit for robots, from which they could learn about the gestures we use towards each other to express different emotions. We saw the best manifestation of this kit as an ever-growing database which, by traveling around the world, could categorize not only the emotion deduced from the touch but also the cultural background linked to the geographical location.
For the first prototype, our objective was to realize a working interface where we could make the process of gathering data feel natural and give real-time feedback to the contributor.
MATERIAL RESEARCH
We decided to focus on the human head as the base for our data collection because, on the one hand, it is an intimate surface for touch, with an assumption of truthful connection; on the other hand, the nervous system of the face can serve as the basis for the visual circuit reacting to the touch.
The first idea was to buy a mannequin head and cast it ourselves in a softer, more skin-like material with a memory-foam feel. Searching the internet and stores for a base for the cast was already taking so much of our time that we decided on the alternative of finding a mannequin head already made of the right material. We found such a head in the makeup industry, where they are used for practicing makeup and eyelash extensions.
Once we had the head, we could start experimenting with the circuits to be used on it, not only as conductors of touch but also as the visual center points of the project.
First, based on the nervous system, we divided the face into forehead and cheeks as separate mapping sites. Then, with white paper, we looked at the curves of the face and what the optimal shape looks like when folded out flat. From a rough outline we worked toward a smooth one, and then used an offset operation to generate concentric lines inside the shape.
To get to the final circuit shapes, we placed the connection points for the crocodile clips at the back of the head and underneath the ears. With 5 touchpoints on the forehead and 3 on each side of the face, the design followed the original concentric sketch but added open endings, in the form of dots, to the face.
Not only the design of the circuits was challenging, but also the choice of material. For the first technical prototype, in which we used a 3x3 grid to test the capacitive sensor, we used copper tape. Although copper would have been the best material in terms of conductivity and its adhesive surface, the price of copper sheets big enough for our designs exceeded our budget, and using copper tape would have meant assembling the circuit from multiple parts. The alternative materials were gold ($$$$), aluminum ($) and graphite ($). Luckily Tom had two cans of graphite spray; we tried it on paper and it worked. We tested it with an LED - it blinks, by the way.
We cut the designs out of a matte white foil with a plotter and then sprayed them with the graphite spray. After reading up on how to make graphite a more efficient conductor, we tried the tip of rubbing the surface with a cloth or cotton buds. The result was a shiny metallic surface that added even more character to the visuals of the mannequin head.
TECHNICAL COMPONENTS
Adafruit MPR121 12-Key Capacitive Touch Sensor
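The MPR121 reports its 12 electrodes as a 12-bit touch mask, which comfortably covers our 11 touchpoints (5 forehead + 3 per cheek). As a hedged sketch of the decoding step, assuming a hypothetical channel-to-region mapping (on the hardware itself, the mask is read via Adafruit's MPR121 library; the channel assignment below is invented for illustration):

```python
# Hypothetical mapping of MPR121 electrode channels to face regions.
# The actual channel wiring on our head may differ; this is a sketch.
REGION = {i: "forehead" for i in range(5)}               # channels 0-4
REGION.update({i: "left cheek" for i in range(5, 8)})    # channels 5-7
REGION.update({i: "right cheek" for i in range(8, 11)})  # channels 8-10

def touched_regions(mask):
    """Decode a 12-bit MPR121 touch mask into the face regions being touched."""
    return sorted({REGION[ch] for ch in range(12)
                   if (mask >> ch) & 1 and ch in REGION})

# Example: channels 1 and 6 touched -> forehead and left cheek.
print(touched_regions((1 << 1) | (1 << 6)))  # -> ['forehead', 'left cheek']
```

This decoded region list is what the rest of the installation would react to, for example by lighting the circuit segment under the touched area.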
EXHIBITION
It has been scientifically proven that humans have the ability to recognize and communicate emotions through expressive touch. New research has shown that, through machine learning, force-sensitive screens are also able to recognize the user's emotions.
From this starting point, we created Midas.
Midas is designed to harvest a global database of human emotions expressed through touch. By giving up our unique emotional movements, machines can gain emotional intelligence, leading to an equal communication platform.
By adding touch as an emotional receptor to Artificial Intelligence, we upload our unspoken ethical code into this new lifeform. This action is the starting point for a compassionate cohabitation.