
Final Installation code

23 Jan

The final code can be downloaded from this link:

https://drive.google.com/folderview?id=0B0MgAekvsXNaQkVJa1E5Vkxxems&usp=sharing


Testing Installation 2

20 Jan

After testing the project in Weymouth House I decided I wanted to gather some more personal opinions, so I set up the project on a screen at home and asked my friends to play around with it and tell me what they thought.

The most interesting response came from one of my more shy friends. Initially she didn’t like being on camera and hid her face when I asked for her opinion, but that changed once she worked out how the process operated: whenever her face was detected and could be seen, the screen would blur. From then on she was much more herself, no longer trying to hide away, and she began moving on and off screen, playing with the piece and exploring its capabilities. Of all the interactions this was the most memorable, as it links directly back to the performance theories I researched towards the beginning of the project, and therefore shows that the project meets the direct action required in the brief.

Other small notes came from different friends. One thought the blur wasn’t strong enough, as some detail was still noticeable. This was an easy iteration: after testing some values in the code and discussing it with others, I ended up doubling the strength of the blur filter from its initial setting. The other note was that people wanted the video feed to be full screen, for a more immersive feel with nothing else in the way. Again, this was easy to implement with this simple piece of code:

boolean sketchFullScreen() {
  return true;
}
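For the stronger blur, the change is a single value in the blur helper. Assuming the initial strength of 6 from the code in the earlier “Creating Installation 3” post, doubling it looks like this:

```java
void blur() {
  // doubled from the original strength of 6 after feedback
  filter(BLUR, 12);
}
```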

Testing Installation 1

15 Jan

Today I began testing my installation in the Weymouth House foyer. This was achieved using an external USB webcam and my laptop, plugged into the wall-mounted screens via an HDMI cable. Initially I decided to use the monitor positioned facing into the foyer space, as I thought this area would provide the most traffic and therefore the highest chance of people viewing the installation. However, after some testing I realised this wasn’t the best location for the project to be displayed, so I moved my setup around the corner, behind the original screen, and used the monitors pointing towards the entrance of the foyer. The USB camera was helpful as I was able to place it above the screen in use so that it captured video at the eye level of the viewers, which was the optimal position.

After setting up the installation everything worked as planned, and surprisingly no new issues were discovered. A problem I mentioned in a previous blog was that it was unclear, until the testing process, whether the distance between the camera and faces was going to be an issue. Luckily, after some self-testing and passer-by interactions, it was clear that faces even in the background of the video capture were still being picked up, so thankfully no issue arose.

I received a mixture of reactions from people within the space. As I chose the location facing the entrance, I was able to judge individual reactions very easily. Many people would glance at the display and shy away, knowing they were being observed and recorded; then, as they came onto screen and were tracked so that the display blurred, it was noticeable that they stopped shying away from the screens as they normally would. Many variations of this behaviour were noted, which was very positive. However, a downfall of the location, I believe, was that people in a rush sometimes didn’t pay much attention to the screen and therefore walked by without any reaction. This is understandable and can’t be changed without forcing attention to the screen by asking people what they thought of the installation.

Overall I feel that the testing was a success and that no immediate changes were needed. The only thing I want to do differently is to ask for some people’s opinions, as this is something I didn’t engage in within the space. To do this I plan on setting up the installation at home and letting friends and housemates interact with it to see what they have to say.

Below are some photos and a video to show the final installation within the space.

No face detected, no blur applied


Face detected, blur applied


Creating Installation 3

12 Jan

After working with the face-tracking code and the blur feature with mouse tracking, I have now been working on bringing these two ideas together. Using the face-tracking code and the blur feature from my earlier mouse-tracking implementation, I was able to finalise the project into a working installation.

Using the blur function I was able to remove the box tracking the face and place the blur in its place. Subsequently this made the screen blur when a face was detected on screen, and show no blur when no face was detected. The overall process of this implementation wasn’t too challenging. I found the whole process very interesting, from understanding the code through to manipulating what happens on the screen.

Below I shall be posting a video of the process working and the code used to create the effect.

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

// initiate video and OpenCV
Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  // scale the video down to make it run smoother
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  video.start();
}

void draw() {
  // scale everything back up to fit the window
  scale(2);
  opencv.loadImage(video);

  image(video, 0, 0);
  video.loadPixels();

  // create an array of detected faces
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  // draw an invisible rectangle over each face and blur the screen
  noFill();
  noStroke();
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    blur();
  }
}

void blur() {
  filter(BLUR, 6);
}

void captureEvent(Capture c) {
  c.read();
}

Creating the Installation 2

31 Dec

Today I installed the OpenCV library in Processing and began to look into the face-tracking code needed for my installation project.

I began playing around with the basic tracking code on my own and with friends to understand how it worked and what I could and couldn’t track.

This was important as it gave me an understanding of how my installation will work once complete. Playing with the code, I found that a face could be tracked from a decent distance away from the camera. This was very important for the project, as people experiencing the installation wouldn’t be as close to the camera as usual, so knowing that wouldn’t be a problem was good.

Other things I noticed were that the face can easily be blocked by objects such as hands, and, interestingly, that large hats sometimes confuse the installation so that a face isn’t detected. However, this is very unlikely, so it shouldn’t be a huge problem in my case.

Below is a short video showing the tracking working.

Creating the Installation 1

28 Dec

So my idea is to track movement/faces and to apply a blur effect once a face moves onto the screen, with the screen staying in focus when nothing is detected.

First of all I wanted to test my idea using mouse tracking and if statements: if mouse movement was detected, a blur would be applied to the screen, and if not, the blur would not be applied. Below are two screen grabs of this process in effect.

Mouse detected, blur applied


No mouse detected, no blur


Testing out this idea using simple if statements allowed me to get an understanding of how the process would affect the individual being watched and the outcome on screen. Next I shall be looking into the face-tracking code and applying the same ideas to that rather than mouse tracking.
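The mouse-tracking prototype described above can be sketched roughly as follows. This is a minimal reconstruction rather than my exact code: it assumes mouse movement is detected by comparing the current mouse position (mouseX, mouseY) with the previous frame’s (pmouseX, pmouseY), and reuses the same blur strength as the later face-tracking sketch.

```java
import processing.video.*;

Capture video;

void setup() {
  size(640, 480);
  video = new Capture(this, 640/2, 480/2);
  video.start();
}

void draw() {
  // scale the half-size capture back up to fill the window
  scale(2);
  image(video, 0, 0);

  // if the mouse moved since the last frame, blur the whole screen
  if (mouseX != pmouseX || mouseY != pmouseY) {
    filter(BLUR, 6);
  }
}

void captureEvent(Capture c) {
  c.read();
}
```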

Michel Foucault and the Panopticon

18 Dec

Michel Foucault was one of many who understood that we as individuals react differently in different situations. A technique that Foucault cited often was the architectural design of the ‘Panopticon’ by Jeremy Bentham, intended for prisons, insane asylums, schools, hospitals and factories. The aim was to regulate people’s behaviour via observation rather than violent methods (Mason).

The regulation was achieved because prisoners were under the impression that they were always being monitored. The structure allowed the guards to see into each of the cells from a high central tower while remaining unseen by the prisoners. This idea of constant observation was used as a method of controlling the inmates.

This idea ties tightly to the question of where we draw the line between surveillance/security and our freedom. This is an important topic, as surveillance technology is improving and becoming more widespread throughout urban spaces.

The point is that passers-by are under a form of constant observation, and knowing this, they change their behaviour.

Panopticon


Mason, Moya. ‘Foucault And His Panopticon – Power, Knowledge, Jeremy Bentham, Surveillance, Smart Mobs, Protests, Cooperation, Philosopher’. Moyak.com. N.p., 2014. Web. 18 Dec. 2014.

The Hawthorne Effect

15 Dec

The Hawthorne Effect is an old psychological theory which shows that individuals are likely to change the way they behave if they know they are being watched. The theory originated in a business environment, in a study analysing whether the level of light would affect individuals’ productivity. The results were very inconclusive: an increase in productivity was noticed, but it didn’t correlate with either high or low light. What was interesting, however, was that when the observers left, a noticeable drop in productivity was seen. So, while the initial criteria proved nothing, the researchers were able to conclude that productivity would increase when individuals were being observed (Shuttleworth, 2015).

This theory can be linked to my Processing idea, in which I want to blur the scene when a face is tracked and leave it unblurred when there isn’t one. The idea may show that individuals will act as they normally would once they can see that the camera isn’t able to capture their faces.

Shuttleworth, M. (2015). Hawthorne Effect – Observation Bias. [online] Explorable.com. Available at: https://explorable.com/hawthorne-effect [Accessed 15 Dec. 2014].

Processing – Final Idea

8 Dec

After using Processing for some time now and beginning to understand the fundamentals, I have become interested in camera interactions and the manipulations you can perform based on what the camera is able to interpret. I have been experimenting with, and looking at, examples from the OpenCV library.

In the media currently, privacy and anonymity are a huge deal, so I wanted my project to have a simple idea based around these two concepts. With this in mind, my final idea was to use the OpenCV library to track faces on the screen and then manipulate the screen to blur whenever a face was present and to stay in focus when one wasn’t.

The aim of this is to get passers-by to think more about how they are watched and how many times they are captured on camera throughout a single day. It’s overwhelming just how many CCTV cameras are in operation throughout the UK. The idea is to get passers-by to understand this, consider whether they agree or disagree with it, and notice whether it alters their actions in any way.

I shall be looking into further detail about the ideas surrounding privacy and anonymity with relevant theories and current media forms.

The Iterative Design Process

20 Nov

Throughout this unit my work will focus on design as an iterative process. This is the idea that as designers we must evaluate and test our projects constantly throughout the design process, making changes to ensure the project meets the intended requirements of the initial brief. The use of this process is intended to ensure that when a project reaches its final release, it conveys the message it was designed for.

Throughout lectures we have been looking at a few examples of iterative design processes, one of them being the ‘Waterfall Model’. This method takes a linear, step-by-step format; this early style of iteration was successful in that it was very easy to implement, thanks to its simplicity. However, the method doesn’t work as well as intended in practice, because within an actual design process clients are likely not to know exactly what they require at the start of the brief, and things often have to change. The waterfall process doesn’t typically allow changes halfway through, so the designer then has to start the whole process all over again.

Waterfall Model


Understanding the downfalls of the waterfall model, a much more suitable method is the cyclical process. This uses the same steps but in a much more flexible way, allowing analysis within each of the steps carried out. The method lets the designer build on previous iterations of the project, completing set goals to improve it overall. The idea is that the process is repeated continually until an effective working product is produced.

Cyclical Process
