Archive | January, 2015

Final Installation code

23 Jan

The final code can be downloaded from this link:

https://drive.google.com/folderview?id=0B0MgAekvsXNaQkVJa1E5Vkxxems&usp=sharing


Testing Installation 2

20 Jan

After testing the project in Weymouth House, I decided I wanted to gather some more personal opinions, so I set up the project on a screen at home and asked my friends to play around with it and tell me what they thought.

The most interesting response I received was from one of my shyer friends. Initially she didn't like being on camera and hid her face when I asked for her opinion, but that changed once she figured out how the process worked: whenever her face was detected and could be seen, the screen would blur. After that she was much more herself and stopped trying to hide away; she began moving on and off screen, playing with the installation and exploring its capabilities. Of all the interactions this was the most memorable, as it links directly back to the performance theories I researched towards the beginning of the project, and therefore shows that the project meets the direct action required in the brief.

Other smaller notes came from different friends. One thought the blur wasn't strong enough, as some detail was still noticeable. This was an easy iteration: after testing some values within the code and discussing it with others, I ended up doubling the blur filter from its initial strength. The other note was that people wanted the video feed to be full screen, to give a more immersive feel with nothing else in the way. Again this was easy to implement by just using this simple piece of code:

// run the sketch full screen (Processing 2)
boolean sketchFullScreen() {
  return true;
}
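The blur change was just as small. Assuming the initial value of 6 used in the Creating Installation 3 post below, doubling the strength only means changing the argument passed to the filter call:

void blur() {
  // doubled from the original value of 6
  filter(BLUR, 12);
}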

Testing Installation 1

15 Jan

Today I began testing my installation in the Weymouth House foyer. This was achieved by using an external USB webcam and my laptop, plugged into the wall-mounted screens via an HDMI cable. Initially I decided to use the monitor positioned facing into the foyer space, as I thought this area would provide the most traffic and therefore the highest chance of people viewing the installation. However, after some testing I realised this wasn't the best location for my project to be displayed, so I moved my setup around the corner, behind the original screen, and used the monitors pointing towards the entrance of the foyer. The USB camera was helpful as I was able to place it above the screen in use so that it captured video at the viewers' eye level, which was the optimal position.

After setting up the installation everything was working as planned and, surprisingly, no new issues were discovered. A problem I mentioned in a previous blog post was that it was unclear whether the distance between the camera and faces would be an issue until the testing process. Luckily, after some self-testing and passer-by interactions, it was clear that faces were still being picked up even in the background of the video capture, so thankfully no issue arose.

I received a mixture of reactions from people within the space. As I chose the location facing the entrance, I was able to judge individual reactions very easily. Many people would glance at the display and shy away, knowing they were being observed and recorded; then, as they came onto the screen and were tracked so that the screen blurred, it was noticeable that they no longer shied away from the screens as before. Many variations of this behaviour were noted, which was very positive. However, a downfall of the location, I believe, was that people in a rush sometimes didn't pay much attention to the screen and therefore walked by without any reaction. This is understandable and can't be changed without forcing attention onto the screen by asking people what they thought of the installation.

Overall I feel that the testing was a success and that no immediate changes were needed. The only thing I want to do differently is to ask some people for their opinions, as this is something I didn't engage in within the space. To do this I plan on setting up the installation at home, letting friends and housemates interact with it and seeing what they have to say.

Below are some photos and a video to show the final installation within the space.

No face detected, no blur applied

No face detected, no blur applied

Face detected, blur applied

Face detected, blur applied

Creating Installation 3

12 Jan

After working with the face-tracking code and the blur feature with mouse tracking, I have now been working on bringing these two ideas together. Using the code from face tracking and the blur feature I implemented in my earlier mouse-tracking experiment, I was able to finalise the project into a working installation.

Using the blur function I was able to remove the box tracking the face and place the blur in its place. This made the screen blur when a face was detected and show no blur when no face was detected. The overall process of this implementation wasn't too challenging, and I found the whole thing very interesting, from understanding the code to being able to manipulate what happens on the screen.

Below is a video of the process working, along with the code used to create the effect.

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

// initiate video capture and OpenCV
Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  // scale the video down to make it run smoother
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  video.start();
}

void draw() {
  // scale everything back up to fit the window
  scale(2);
  opencv.loadImage(video);

  image(video, 0, 0);

  // detect faces and store them in an array
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  // no tracking box is drawn; instead the whole frame
  // is blurred when at least one face is detected
  if (faces.length > 0) {
    blur();
  }
}

void blur() {
  filter(BLUR, 6);
}

void captureEvent(Capture c) {
  c.read();
}