User:Noemiino/year3


Info

NOEMI BIRO 

0919394@hr.nl

Graphic Design

Plotters

Just some research into plotters

An intro to using plotters with a pen: [1]
A good description of printing and saving file types for HQ prints: [2]
A video of the plotter in use: [3]
A plotter-and-sound mutual performance: [4]
Processing and plotting: [5]
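
The links above lost their URLs here, but as a hedged sketch of what plotting boils down to in practice: pen plotters are often driven with HPGL, a tiny text format of pen-up and pen-down moves. The square, the coordinates, and the file name below are my own illustrative choices, not taken from any of the links.

# Minimal sketch of generating an HPGL file for a pen plotter:
# IN initializes, SP1 selects pen 1, PU moves with the pen up,
# PD draws with the pen down. Coordinates are in plotter units.

square = [(0, 0), (4000, 0), (4000, 4000), (0, 4000), (0, 0)]

commands = ["IN;", "SP1;"]
x0, y0 = square[0]
commands.append(f"PU{x0},{y0};")      # travel to the start point
for x, y in square[1:]:
    commands.append(f"PD{x},{y};")    # draw each edge of the square
commands.append("PU;SP0;")            # lift the pen and put it back

with open("square.hpgl", "w") as f:
    f.write("".join(commands))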

BALANCE - An artefact for/on/or about the body

Body & balance

I was interested in balance in the body. I recently started doing yoga, and during balancing exercises I noticed that there is always one side of the body I feel more balanced on. For example, I could balance longer on my left leg than on my right leg.

Balance.jpg

Mind & balance

I started to consider how other parts of my body are in balance in relation to each other. For example, I am right-handed, so there is a really big imbalance between how much I use my right hand and my left hand. I read about Leonardo da Vinci [6] using both of his hands in perfect sync. This also meant that he was balancing out his right and left brain hemisphere usage, switching between rational and creative thinking.

Hand & balance

ambidextrous = disastrous?
I started from the basic motions of writing, coloring, and tracing. As input I used letters found in magazines and books around me, and I reproduced each found image with my right and my left hand.


Machine & balance

I was interested in how a machine would further distort the images I created, so I fed them to the embroidery machine. I was curious whether the machine would be able to correct the small defects that, for example, my left-hand-drawn image had due to shakiness and instability. As for the material, I chose to transpose the images onto socks, because they can be worn on either foot and function the same way: a very ambidextrous material for the body.

Machine & creation

I also found it interesting how the embroidery machine can use different patterns to stitch the image it is given. I tried different patterns out on the N because I was not at all satisfied with the big green blob it first came out as.

Balance-types-n5.jpg

(in)human factors - STRETCHING - sensitivity training

teammates: Stephanie, Chiara

First stretches

Stretching of the human body - The limit of the human body is sometimes obvious, but sometimes it can be played with. We accept that some people can bend more than us, but how do we know for sure what the limit is for every individual? In this world of enhancement we took footage of 4 yoga poses and exaggerated the stretch. At what point has the person reached their real maximum, and what is fictional?

yoga pose 1 - https://youtu.be/WgFzPLByGYE
giphy.gif

yoga pose 2 - https://youtu.be/k123sUIBNk0
giphy.gif

yoga pose 3 - https://youtu.be/guqI5-AuKsI
giphy.gif

yoga pose 4 - https://youtu.be/CegI04qY9Ts
giphy.gif


Final stretch

For the final video we edited the footage in an even more exaggerated way, making the distance even bigger between the human body with its limits and the animation.

yoga sequence black & white negative - https://www.youtube.com/watch?v=oOdf-jYyiho
Screen Shot 2017-09-22 at 15.53.29.png


A small "booklet" exploring the Mind [of] the Machine

teammate: Emiel

I found the theme of Artificial Intelligence very interesting because it challenges the superiority of the human. The algorithm that was the graduation project of Boris Smeenk and Arthur Boer was the first AI object I encountered. Although I do not understand what the algorithm does or how it runs behind the program, I found it interesting that the teaching process is very similar to the way we learn. By feeding the program various images of an object, the computer analyzes its details and starts to distinguish it.

The example "Teaching machines to draw" also brings up another question that interested me. If the machine is trying to learn about something, in the example below about cats, and you feed it a different image (a chair), the computer becomes creative.

Catchair.jpg

Fun as this is, there is danger in it too, like the Snapchat beauty filter that learned from the images it was fed what a beauty standard for males is. Because there was not a big variety of users, the resulting beauty standard resembled the most popular class using the application.

Flowers

To test the algorithm ourselves, we first chose a database of flowers, which contained 8000 images of flowers centrally positioned in similar thumbnails. Because the images were curated by Oxford University, we could be sure it was a pure dataset, without any stray image interfering with the computer correctly learning about flowers. For me it was really surprising to see the way the images developed in the process, pixel by pixel getting clearer and closer to reality.
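
I don't have the code of the algorithm itself, but a minimal sketch of the kind of dataset preparation it needs could look like this: every photo resized into a uniform small thumbnail. The folder names and the 64 × 64 size are my assumptions.

# Sketch (assumed folders and size): normalize a folder of flower photos
# into uniform 64x64 RGB thumbnails, the kind of clean input training needs.
import os
from PIL import Image

SRC, DST, SIZE = "flowers_raw", "flowers_64", (64, 64)
os.makedirs(DST, exist_ok=True)

for name in os.listdir(SRC):
    if not name.lower().endswith((".jpg", ".jpeg", ".png")):
        continue  # skip anything that is not an image
    img = Image.open(os.path.join(SRC, name)).convert("RGB")
    img = img.resize(SIZE, Image.LANCZOS)  # smooth downscale
    img.save(os.path.join(DST, os.path.splitext(name)[0] + ".png"))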
(better quality on YouTube)
Artificial-flower.jpg Artificial-flower2.jpg Oxford 64.png Bloem2.jpg
The results are very convincing. The algorithm managed to combine the shapes and textures of the flowers into a few thumbnail images. The meshed collage is also a creation of the algorithm, made when it meshes all the thumbnails into one big canvas.
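
How exactly the algorithm meshes its thumbnails I can't say, but the collage itself is easy to sketch: paste small tiles onto one canvas in a grid. The grid size and folder name below are assumed.

# Sketch (assumed folder and grid): paste 8x8 = 64 thumbnails of 64x64 px
# onto one 512x512 canvas, similar to the meshed collage above.
import os
from PIL import Image

COLS = ROWS = 8
TILE = 64
canvas = Image.new("RGB", (COLS * TILE, ROWS * TILE))

files = sorted(os.listdir("flowers_64"))[: COLS * ROWS]
for i, name in enumerate(files):
    tile = Image.open(os.path.join("flowers_64", name))
    x, y = (i % COLS) * TILE, (i // COLS) * TILE  # grid position
    canvas.paste(tile, (x, y))

canvas.save("mesh.png")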

Microscopic

Because the first dataset proved how close a machine can come to reality when given a clean dataset, we wanted to run another experiment with a polluted set. We took the latest 489 images from Instagram with the hashtag #microscopic. In this data collection we found everything we expected: abstract shapes, x-rays, microbiology, but also mirror selfies and memes. We didn't make a selection but fed all the images to the machine. Because the dataset was not as extensive as the first one, the program re-ran the images several times and as a result produced beautiful abstract shapes. What was interesting in choosing microscopic images is that for us humans it is also hard to define what fits into this category, because it is a less defined concept. Thus what the machine gave us can be considered a good representation of what microscopic is. (?)

Microscopic 64.png Microscopic.jpg
The generated image is very beautiful in its abstraction. It is interesting to notice how the thumbnails appear a bit pixelated, showing the layered way the algorithm constructs the image.

Booklet

The concept for the design of the booklet came from the way we were looking at the two datasets. We were analyzing their differences and similarities, and we decided to present them in one booklet instead of two. The stitch binding keeps them separate, as does color, with yellow dominant for the flowers and blue predominant in the microscopic images, yet they are still bound together by the book cover. For the cover of the booklet we chose to vectorize two of the thumbnail pictures, to play more with this idea of a machine-produced image getting image-traced by a program run on a machine and then producing a new image with a new technique: the embroidery machine.
Patterns.png Flower-ai-emb.jpg

Foto 3 - kopie (2).JPG Embroidery01.JPG

The booklet asks the user to go through images that were also fed to the algorithm before ending up at the results. This way the user gets a glimpse into the variety of pictures the algorithm had to process and can reflect on whether the created images seem real in his or her opinion or not.

Booklet01.JPG Booklet02.JPG Booklet-ai.Gif
Booklet-ai.JPG Booklet05.JPG Booklet06.JPG

What is my craft?

In my craft I use machines, algorithms, and technologies as input for my design. I want to create design in collaboration with a machine: feeding it my design and analyzing the output, then feeding that output to a different machine. What is the output then? I am interested in what stance a graphic designer can take when her work is influenced by machines, and what stance she can take when the outcome is influenced by humans. Can the two influences come together in one project while there is still a graphic design function to be fulfilled, or is it then curating? Duality is an important word for me in my creation. I want to explore the fusion of human and inhuman factors in striving for an outcome. Where can there be an intersection or an interaction? In my craft I want to create situations, experiences, and objects that either were created through collaboration between machine and human or are only final when they are in interaction. I don't want to design finished products. I want to design space for change within a framework defined by me. You be the change through touch, voice, or motion, or let the machine be the change through algorithms, randomness, and programs.

How to be human in the near future of Q10?

From the projects I worked on in Q9 I was fascinated by the way a machine can create a new image from what it is fed. While I was preparing the images to upload to the wiki, I had a gif whose quality I had to reduce so much to fit the 2 MB restriction that it became a 100 × 56 image. This gif is in the end no more than 400 KB and has specific color settings, which is why the bigger I set its resolution here on screen, the more interesting it becomes. The pixels become visible and a new layer of the image emerges.
Artificial-flower3.gif (shown at 100px, 200px, 800px, 1000px, and 2000px)
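
The enlargements above are just the browser scaling the gif; to make the hard pixel edges deliberate, a small sketch like this one could upscale with nearest-neighbour resampling (the file names are assumed):

# Sketch: blow a 100x56 frame up to 2000px wide with NEAREST resampling,
# so every source pixel stays a crisp square instead of being blurred.
from PIL import Image

img = Image.open("artificial-flower3.gif").convert("RGB")  # first frame
scale = 2000 // img.width
big = img.resize((img.width * scale, img.height * scale), Image.NEAREST)
big.save("artificial-flower3-big.png")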

Other examples

In the examples above the original image kind of gets disassembled, and every small pixel has a visual quality of its own. I am curious about the restrictions file formats have on deconstructing an image down to one pixel, and whether it is possible to change the pixels of an image. I want to try making these zoomed-in pixels sharp: generate sharp vector symbols based on the pixels and then reconstruct the image. I want to see this manifest in print, but also on computer screens of different sizes and resolutions.
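
A first sketch of that idea, under my own assumptions about file names, could read every pixel and write it back out as a sharp vector square in an SVG, which stays crisp at any print size or screen resolution:

# Sketch: reconstruct an image as an SVG with one sharp vector square
# per pixel; vector shapes keep their edges at any size or resolution.
from PIL import Image

img = Image.open("artificial-flower3.gif").convert("RGB")
w, h = img.size

rects = []
for y in range(h):
    for x in range(w):
        r, g, b = img.getpixel((x, y))
        rects.append(f'<rect x="{x}" y="{y}" width="1" height="1" '
                     f'fill="rgb({r},{g},{b})"/>')

svg = (f'<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 {w} {h}">'
       + "".join(rects) + "</svg>")
with open("artificial-flower3.svg", "w") as f:
    f.write(svg)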
I would like to create a world of modified pixels that can be experienced, so I want to work with the interaction of a viewer and make the pixels change accordingly.
Pixel-example1.jpg Pixel-example2.jpg Pixel-example3.jpg Pixel-example4.jpg Pixel-example5.jpg Pixel-example6.jpg Pixel-example7.jpg
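
As a very first, hedged sketch of such an interaction (the library choice, file name, and the white "paint" are all my assumptions): show the image enlarged and let the viewer change the pixel under the mouse.

# Sketch: show an image at 10x zoom and let the mouse "change" pixels
# by painting the cell under the cursor white while the button is held.
import pygame

SCALE = 10
pygame.init()
img = pygame.image.load("artificial-flower3.gif")  # small source image
w, h = img.get_size()
screen = pygame.display.set_mode((w * SCALE, h * SCALE))

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    if pygame.mouse.get_pressed()[0]:              # left button held down
        mx, my = pygame.mouse.get_pos()
        img.set_at((mx // SCALE, my // SCALE), (255, 255, 255))
    big = pygame.transform.scale(img, (w * SCALE, h * SCALE))  # blocky zoom
    screen.blit(big, (0, 0))
    pygame.display.flip()

pygame.quit()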