iat 320 || body interface

Friday, April 13, 2007

Week 13 - The Sophomore Album & Final Act

Here is a list of our six sensors and what each did with the revised mapping in [ ]:

- Copper coin glove + coin circuits: Binary coded to change playlists ---> [Removed the glove]
- Photo cell attached to glove: Swap songs ---> [Paint with Jitter and change playlists according to frequency of painting]
- Right elbow conductive foam + copper tape: Right channel volume ---> ["Scratching"]
- Left elbow conductive foam + copper tape: Left channel volume ---> [Cross-fading between two songs]
- Left armpit conductive foam + copper tape: Playback speed ---> [Frequency modulation effect]
- Right armpit conductive foam + copper tape: Echo (the effect was not giving us the feedback we wanted with our loops, so we disconnected that sensor reading.) ---> [Master Volume]



I changed how I processed the loops and created loop pairs so that there would be a matching beat loop and rhythm loop. We also cut the copper coin glove sensor because John managed to code an alternate method of changing playlists. By "painting" with the photo cell, the patch takes readings and counts how many times the user painted in ten seconds. Painting 1-2 times would select playlist 1, 3-4 would trigger playlist 2, and 5-6 would initiate playlist 3.
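The paint-count-to-playlist mapping above could be sketched like this (a Python illustration only; the actual logic lived in John's Max patch, and the function name is made up):

```python
def pick_playlist(paint_count):
    """Map the number of paint strokes counted in a 10-second
    window to a playlist number, as described in the post."""
    if 1 <= paint_count <= 2:
        return 1
    if 3 <= paint_count <= 4:
        return 2
    if 5 <= paint_count <= 6:
        return 3
    return None  # out of range: keep the current playlist
```

A dispatch like this also makes clear why the scheme was hard to remember live: the performer has to keep the count boundaries in their head while painting.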

It was hard to remember these combinations, and in the presentation the class did not get to see the painted result: we were using two laptops to run the patches, with netsend/netreceive passing the paint data to the painting laptop, and that connection got dropped. Gordon pointed out that it would have been easier for the user to have the coins and change the playlist. I completely agree, but as I mentioned in the previous post, having those coins on the body was an easy way out, and we wanted to try another approach that worked "beyond arm's reach".

If I had to redo this project from the ground up, I would improve the harness design. Each of us had little knowledge of sewing and didn't know the basics of tailoring something so custom. For a first try it was great, but for a second round I would use different elbow and armpit strap material. After the first presentation, my elbow joints were red and blotchy because of the abrasive material.

Another improvement I would look at is the durability of the foam. Initially, we doubled up on the foam, but found that it was really uncomfortable. So we went with one layer for the rest of the design and never thought about durability until the first presentation when I noticed that our readings were weakening. By the second presentation, the sensors were on their last performance. Like many other groups that used conductive foam, the foam just wore out due to all the bending.

Perhaps I would move away from foam and use accelerometers like the Nintendo Wii controller and we would add a bit more spice to how the GoGoDJ might be used. It would also have been nice to have worked with the RFID tags that we initially wanted to use to change our playlists.

Overall, my experience with GoGoDJ was fantastic. It opened me to DJ culture and music even more, working with cheap sensors that were so sensitive in control was amazing, and being able to revise our idea was a great bonus.

Week 12 - The First Album

We completed our sensor the night before the first presentation and agreed to meet up a few hours before class the next day.

Here is a list of our six sensors and what each did:

- Copper coin glove + coin circuits: Binary coded to change playlists
- Photo cell attached to glove: Swap songs
- Right elbow conductive foam + copper tape: Right channel volume
- Left elbow conductive foam + copper tape: Left channel volume
- Left armpit conductive foam + copper tape: Playback speed
- Right armpit conductive foam + copper tape: Echo (the effect was not giving us the feedback we wanted with our loops, so we disconnected that sensor reading.)

GoGoDJ Gallery

1230 hrs:

I arrived on campus and met up with John and Sherry. John's patch had been working with pseudo sensor readings, and this was our first test with all the sensors and the patch together. We were confident in our design and implementation, so we were comfortable testing this late. In all honesty though, performing initial tests on the day of the presentation should never be done.

I gave John the sound loops I edited to use in the patch. Their sound quality was top notch outside of Max.

1300 hrs:

After a few tweaks, John initialized all the loops and the patch was up and reading the sensors. We now had to make adjustments to the numbers coming in so that they would properly scale to the patch.
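The adjustment we were making is essentially a linear remap of raw readings into the range the patch expects. A Python sketch of the idea (the real scaling happened inside the Max patch, and the ranges here are made-up examples):

```python
def scale(value, in_min, in_max, out_min=0.0, out_max=1.0):
    """Linearly rescale a raw sensor reading into the range
    the patch expects, clamping noisy out-of-range readings."""
    value = max(in_min, min(in_max, value))
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)
```

Calibration then amounts to finding each sensor's real `in_min`/`in_max` by squeezing it through its full range, which is what we spent this hour doing.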

The loops sounded fine on the laptop speakers in the mezzanine. In hindsight, relying on those speakers was an obvious lapse in quality control.

1430 hrs:

After the tweaks, and John's bouncing around to other groups to help them with their patches, I decided it would be best to don the harness and put the sensors through usage tests. We went through each sensor again and calibrated the numbers a couple more times. Things were looking good.

1600 hrs:

Our right armpit sensor and its accompanying echo effect were not giving us a desired result, so we decided to cut that demo from the presentation and work with five sensors instead of six.

Still wearing the harness, I quickly put on the rest of my disguise and noted that the harness was still quite comfortable; I could have taken it off at any time and slipped it back on again, unlike Cody Church, who had his sensors taped and wrapped onto his body.

We packed our stuff and moved to the room to set up the coin circuits that we needed to change playlists.

1830 hrs:



The patch was being run on Windows XP through Apple's Boot Camp software on my MacBook Pro. Because the drivers and XP are raw, without any third-party software to give easier control over external display settings, getting the projector to talk to the notebook was difficult. We couldn't figure out how to mirror the desktop, so after a few minutes the projector connection would be severed, our sound would cut off, and John would have to reconnect before I could perform again.

I don't understand why the AV system has to make the audio inputs not work if the projector has no video coming in. The audio should work by itself and the video should work by itself. Audio Only, Video Only, Audio & Video. Simple.

We were reluctant to act on the suggestion of putting the coins on our bodies to change playlists because of a video Greg showed us, in which a guy basically made a one-man band by putting touch sensors all over his body and playing himself like an instrument. Greg noted that it ran along the lines of cheesiness, and we agreed. Putting the coins around the room gave touching another object some meaning. Also, as the one wearing the harness, I pretended that I had coins on me to touch and it just felt silly.

So we were challenged for the next week to fix our sound quality, find another way to change playlists, and, if we could, remove the gloves to help make the sensor more transparent.

Thursday, March 22, 2007

Week 11 - Wii Loop Machine



Wii Loop Machine Demo on Vimeo

HOLY CROW! I woke up this morning, was going through my usual gadget and video game blogs, and discovered this gem on Joystiq. He can lay down the bass beat, then add layers, pitch-shift the beat, and do a bunch of other sound manipulations using his Max patch, which is a downloadable application. The whole patch the guy did is pretty much what we're trying to achieve!

Here's the link to his blog: The Amazing Rolo


Wednesday, March 21, 2007

Week 10 - Scratching and Mixing

I was looking for inspiration for the performance side of our project. I tried listening to various DJs, but it was hard to visualize what they were doing during their sessions. So I went to YouTube and searched for DJ videos. I came across DJ Qbert, who I had heard of, but never really appreciated how much of a genius he is until I watched his videos.



This video showcases his scratching ability. Using his turntable as an instrument, he is able to put out great scratches, whereas bad scratches usually end up sounding like noise. On top of what he produces, DJ Qbert also has great showmanship. We hope that our project achieves enough embodiment that we can concentrate on our performance.

Here is another video of DJ Qbert drumming with a record. It is a pre-recorded drum beat, but he scratches it in a way that he is able to create new beats.



DJ Qbert is a great example of what scratching is, but our patch involves more mixing than scratching. Our piezo sensor is supposed to "simulate" scratching, but I'm having second thoughts about using it after researching more about scratch techniques.

Mixing is when DJs have two sound sources and blend between them to create a new sound. On The Beat radio station, during rush hour, there is a show called the "5 O'Clock Traffic Jam" where DJ Flipout mixes through a whole playlist of songs. The mixing takes place mostly in the transitions between songs. Good DJs do this so seamlessly that listeners don't notice when songs are switched, and it sounds like one continuous track of music. When listeners can tell that songs have been transitioned, it's because the entire rhythm got thrown off.

Our output is going to simulate this type of mixing. Obviously it won't be flawless, but we're excited to see what we can create.
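The blend between two sources is usually done as an equal-power crossfade, so the overall loudness stays roughly constant through the transition. A hedged Python sketch of the per-sample math (our patch did this in Max, not Python):

```python
import math

def crossfade(sample_a, sample_b, position):
    """Equal-power crossfade between two sources.
    position 0.0 = all A, 1.0 = all B; gains follow a
    cosine/sine curve so combined power stays constant."""
    gain_a = math.cos(position * math.pi / 2)
    gain_b = math.sin(position * math.pi / 2)
    return sample_a * gain_a + sample_b * gain_b
```

A plain linear fade (`1 - position` / `position`) dips in loudness at the midpoint, which is exactly the kind of artifact that makes listeners notice the transition.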


Thursday, March 1, 2007

Week 8 - Final Assignment: GoGoDJ is G2G

Round 3.

I have teamed up with John Pang and Sherry Lai for the final. They were impressed with my DJ concept, and we've decided to take it to the next step and bring the concept to reality. The scenario we want to work around is that of a dance club/discotheque. Some clubs have multiple rooms that play different genres of music: there could be a Hip Hop room, and a room for House Beats or Reggae.

If there were sensors that detected which room the DJ was in, the DJ would then "pick up" that room's playlist and could mix using on-body controls. Sensors on the DJ's body would control song selection and the various manipulation techniques. The DJ could bounce between rooms or multiple DJs could be switching rooms.
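The room-pickup idea boils down to a lookup from a detected room ID to that room's playlist. A minimal sketch, with entirely made-up room IDs and the genres from the scenario above:

```python
# Hypothetical room -> playlist table for the club scenario.
ROOM_PLAYLISTS = {
    "room_a": "Hip Hop",
    "room_b": "House Beats",
    "room_c": "Reggae",
}

def playlist_for_room(room_id, current=None):
    """When the DJ's room sensor fires, 'pick up' that room's
    playlist; in an unrecognized room, keep the current one."""
    return ROOM_PLAYLISTS.get(room_id, current)
```

With multiple DJs, each would carry their own `current` state, so two DJs swapping rooms would simply swap lookups.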

Week 7 - Assignment 2: DJ2Go

Round 2.

Nothing really changed on the sensor. I just soldered extension wires to the existing ones, and soldered the ends of the wires to the input pins for the sensor-to-Arduino connection. Whatever tweaks were made were done in my Max patch.

The first patch basically faded between two sound sources. For the second one, the main mechanic was still fading between two sound sources, but I added two more for a total of four sources the user could manipulate. I tried to dynamically map the sounds so that they would change after passing a few parameters, but I couldn't get that done in time. Instead, I just put in a metro that would change the sound source every 10 seconds.
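The metro fallback is just a timer cycling through the four sources. A Python sketch of that mechanic (the real version was a [metro] object banging a counter in Max; the class name is my own):

```python
import itertools

class SourceRotator:
    """Cycle through sound sources in order, the way a
    [metro 10000] driving a counter would in the Max patch."""
    def __init__(self, sources):
        self._cycle = itertools.cycle(sources)
        self.current = next(self._cycle)

    def tick(self):
        # Called by a 10-second timer (the metro's bang).
        self.current = next(self._cycle)
        return self.current
```

The dynamic mapping I couldn't finish would replace `tick()`'s fixed schedule with a switch driven by the sensor parameters themselves.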

This week, I further developed the DJ concept I had with the BT Arduino. I thought about how DJs are usually confined to their mixing booth and aren't really able to interact with their audience on the dance floor or in other rooms. They can wave their hands and try to get the audience to follow, but they can't mix and do that at the same time.

So for the second assignment, I tried to solve that by simulating DJ controls and LP manipulation by mapping that onto the sensor. For the most part, I almost forgot that I was using the glove as I was walking around and trying to "dance" to the beat. As Ihde puts it, it became transparent for me because I could concentrate on trying to impersonate a DJ rather than focus on controlling the sensor.

To someone who's watching this performance, it is obvious that I am using a glove sensor. So the next step would be to hide the sensor in clothing around another area where pressure occurs.

The transparency of the sensor faded away when I noticed that not all of my sound sources were playing. I was only getting two sources and the patch wasn't changing the sounds up. Hopefully, this concept could be developed further.

Friday, February 9, 2007

Week 5 - The Presentations: Aftermath

Last week most of the teams for Daniel's section were able to present except for my team and two others. From what I can remember about the sensors presented last week, many of them involved an interpretation of an action. Grabbing one's chest during a heart attack, hugging, flexing biceps, and lifting heavy objects. Performing these actions would then result in a sound that reflected that action. Heart beat stopping, heart beat increasing, grunting, and different levels of strain.

Sherry and John's sensor really displayed a good body expression that embodied the sensor. I really liked how they translated the swimming motion: the sound of water splashing when her arm was down changed into calming music when her arm was up in the air. I also liked how they brought the arm's full range of motion into play. Compared to that, the sensor we made only looked at the one range of motion that the index finger and thumb could do together.

Perhaps for future reference, we should keep the rest of the hand and fingers in mind, much like Matt and Kurtis' sensor, where they found common ground for all their signs: the satan, the fist, and the gun. If we were to expand on what we have created so far, it would be nice to see two gloves in action, with ten sensors for the finger joints. We could map different instruments and beats to the fingers, and then, when activated, the original pinching motion could control up to four sounds.

After seeing the different sensors and going through the reading about Embodiment, Hermeneutics, Alterity, and Background, the gears in the idea factory are starting to turn for me. Hopefully next week can help refine what we can accomplish for the next assignment.
