Gesturally Controlled Granular Synthesiser in VR

In this video, Brad demonstrates an early implementation of our VR DSP framework, built on Unity3D’s multi-threaded DOTS architecture.

Brad demonstrating a simple granular synthesiser interface in VR

Interface

For this example, we’re using an interface design very similar to a conventional 2D granular synth you would control with a mouse or hardware sliders and faders. Robert Henke’s Granulator is a good example of this common approach to granular synth control. While it doesn’t necessarily push the boundaries of interface design, it allowed us to more easily isolate the impact of replacing a mouse with VR controllers on the user’s capacity for real-time expression.

Interaction

Mappings

  • Grain Density was controlled by a twisting action of the two hands
  • Pitch was mapped to a rolling gesture of the hands
  • Playhead Centre Position was mapped directly to the centre position of the two hands along the x-axis
  • Finally, the Playhead Randomise Range could be increased and decreased by moving the hands apart and bringing them together, respectively (sketched in code after this list).
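
For anyone curious how these mappings might look in Unity, here’s a minimal sketch. The GranularSynth component and its parameter fields are hypothetical placeholders standing in for our framework’s actual API, and the gesture maths is simplified for illustration:

```csharp
using UnityEngine;

// Hypothetical stand-in for the synth's parameter surface.
public class GranularSynth : MonoBehaviour
{
    public float grainDensity;           // normalised 0..1
    public float pitch;                  // playback rate multiplier
    public float playheadCentre;         // normalised 0..1 across the clip
    public float playheadRandomiseRange; // hand separation in metres
}

public class TwoHandGranularMapping : MonoBehaviour
{
    public Transform leftHand;
    public Transform rightHand;
    public GranularSynth synth;

    void Update()
    {
        Vector3 axis = (rightHand.position - leftHand.position).normalized;

        // Grain Density: relative twist of the hands about the axis between them.
        float twist = Vector3.SignedAngle(leftHand.up, rightHand.up, axis);
        synth.grainDensity = Mathf.Abs(twist) / 180f;

        // Pitch: average roll of the hands about their own forward axes.
        float roll = (Vector3.SignedAngle(Vector3.up, leftHand.up, leftHand.forward)
                    + Vector3.SignedAngle(Vector3.up, rightHand.up, rightHand.forward)) * 0.5f;
        synth.pitch = Mathf.Pow(2f, roll / 180f); // +/- one octave

        // Playhead Centre Position: hand midpoint along the x-axis,
        // assuming a 1 m wide playable region centred on the origin.
        float midX = (leftHand.position.x + rightHand.position.x) * 0.5f;
        synth.playheadCentre = Mathf.Clamp01(midX + 0.5f);

        // Playhead Randomise Range: distance between the hands.
        synth.playheadRandomiseRange = Vector3.Distance(leftHand.position, rightHand.position);
    }
}
```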

Level of Expression

Unsurprisingly (but of course, anecdotally), using two Oculus Rift S hand-held controllers allowed us to vastly improve the level of live sonic expression. When I say sonic expression, I’m ultimately referring to the synth’s range of sonic outputs and the precision of control over them. The gestural/spatial mappings were simple to execute and produced the expected results, which made them feel intuitive.

The visual representation of the audio file made navigating the audio clip simple, while a small amount of experimentation and play was required to become familiar with controlling the other parameters, which were attached to grain pitch and density.

Thoughts

This VR implementation of a traditional granular synthesiser interface demonstrates a level of real-time control that is not typically afforded by a mouse or other commonly used hardware controllers. It effectively turns what was previously a clunky offline sound-generation method into a capable musical instrument that can actually be performed.

We’ve since set this experiment aside; it was a quick example intended to demonstrate something familiar before we share the more abstracted examples we’ve produced.

Finally, in the video Brad also mentions that the audio system has been developed using Unity3D’s multi-threading paradigm, DOTS. I’ll save the details for a future post, but I’d just like to say how excited we are to share the impact that unlocking so much CPU power has on the ability to generate ridiculous numbers of grains every second inside an immersive environment.
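
To give a flavour of what that can look like (this is an illustrative sketch, not our actual job structure, and the grain and windowing details are assumptions), a Burst-compiled IJobParallelFor can render every grain for a buffer across all available worker threads:

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

// Illustrative sketch: each parallel iteration renders one Hann-windowed
// grain into its own slice of a flat output buffer.
[BurstCompile]
struct GrainJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<float> sourceSamples;   // decoded audio clip
    [ReadOnly] public NativeArray<int> grainStartIndices; // one start point per grain
    public int grainLength;

    // Grain i owns the slice [i * grainLength, (i + 1) * grainLength),
    // so the disjoint parallel writes are safe.
    [NativeDisableParallelForRestriction]
    public NativeArray<float> output;

    public void Execute(int i)
    {
        int start = grainStartIndices[i];
        for (int s = 0; s < grainLength; s++)
        {
            // Hann window keeps the grain edges click-free.
            float window = 0.5f * (1f - math.cos(2f * math.PI * s / (grainLength - 1)));
            output[i * grainLength + s] = sourceSamples[start + s] * window;
        }
    }
}

// Scheduling fans the grains out across worker threads, e.g.:
//   var handle = job.Schedule(grainCount, 64);
//   handle.Complete();
```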
