Thursday, 4 July 2019

We’ve been playing primarily with two of the three prototypes that Ryan designed. These are rough-draft programs that use a Kinect sensor, surround-sound speakers, and a menu of sounds that correspond to physical gestures performed in the space.

One of the prototypes is called the “Drum Kit.” It has a set of gestures: lifting your arm, pulling back your elbow, even punching out in front of you. Each gesture is linked to a specific sound cue using machine learning, so performing the movement triggers its sound. It’s like being the conductor of an orchestra, using your whole body to bring in all the different instruments.
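To make that concrete, here’s a minimal sketch of the gesture-to-sound idea, not Ryan’s actual code: a toy nearest-neighbour classifier (about the simplest machine-learning approach there is) over made-up joint-position features, with hypothetical sound file names.

```python
# A toy sketch of the "Drum Kit" mapping, not the real system.
# Assumes each pose is summarised as a small feature vector of
# joint positions (all values here are hypothetical).
import math

# Hypothetical training examples: one stored pose per sound cue.
GESTURE_EXAMPLES = {
    "snare.wav": [0.9, 0.1, 0.0],  # arm lifted
    "tom.wav":   [0.2, 0.8, 0.1],  # elbow pulled back
    "kick.wav":  [0.1, 0.2, 0.9],  # punch out in front
}

def classify(features):
    """Return the sound cue whose stored pose is nearest to `features`."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(GESTURE_EXAMPLES, key=lambda cue: dist(GESTURE_EXAMPLES[cue], features))

# A frame resembling the punch triggers the kick sound.
print(classify([0.15, 0.25, 0.85]))  # -> kick.wav
```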

The other prototype is the “Drone.” This one is sensitive to the performer’s spatial relationship to the Kinect sensor: the closer you are, the purer the sound; the farther away, the more reverb is added. The pitch also changes with how high or low the performer holds their hands.
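For a sense of how simple the underlying mappings can be, here’s a hedged sketch of the “Drone” behaviour, again not the real implementation: hypothetical distance and hand-height ranges mapped to a reverb wet/dry mix and a pitch in hertz.

```python
# A sketch of the "Drone" mappings; all ranges and constants are assumptions.
def drone_params(distance_m, hand_height, min_d=0.5, max_d=4.0):
    """Map performer position to (reverb_mix, pitch_hz).

    Closer to the sensor -> drier (purer) sound; farther away -> wetter.
    Higher hands -> higher pitch. hand_height is normalised to 0..1.
    """
    # Normalise distance into 0..1 and clamp to the assumed sensor range.
    t = max(0.0, min(1.0, (distance_m - min_d) / (max_d - min_d)))
    reverb_mix = t  # 0 = dry at the sensor, 1 = fully wet far away
    # Exponential curve: up to two octaves above A2 (110 Hz) at full reach.
    pitch_hz = 110.0 * 2 ** (hand_height * 2)
    return reverb_mix, pitch_hz

print(drone_params(1.0, 0.75))  # -> (~0.14, ~311.1)
```

The exponential pitch curve is a design choice rather than a requirement; a linear map would work too, but octave-based scaling tends to sound more musical as the hands rise.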

We’re having a great time experimenting with the technology, but we’re also challenged by how to make meaning out of the interaction. It’s one thing to stand still and lift a hand; it’s another to have a connected, narrative reason for raising that hand and for the specific sound it produces. There are flickers of success, and we’re hoping to keep teasing apart where it works and where it doesn’t. The moments when it lines up feel surprising and strangely powerful: human and technology responding to each other instead of one controlling the other.

- Sarah Rose