With all of the motion-control and VR peripherals released in the past few years, I’ve become more interested in figuring out how development works for a specific piece of hardware. I recently got the chance to find out, in this case with the now-obsolete Razer Hydra: two handheld motion controllers and a central base that tracks their position. I decided to use them for a prototype in Unity, since I wanted an engine I was already familiar with, and because a plugin for the Hydra was available.

[Image: The Razer Hydra setup, with two handheld controllers and a central tracking base.]

Developing with the plugin was surprisingly easy: all that’s required is placing an instance of the “SixenseInput” prefab into the Unity scene. After that, all of the input, position, and rotation information for the controllers is accessed through static members of the attached “SixenseInput” script component. The left and right controllers are differentiated through the two-item container “SixenseInput.Controllers”, with Controllers[0] being the left and Controllers[1] the right. Through this, a controller’s position can be read as a Vector3 and its rotation as a Quaternion:

[Image: A function I call from Update(), getting the current position and rotation of the Hydra controllers.]
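A minimal sketch of that function, assuming the plugin’s Controller objects expose the position and rotation values described above; the hand Transforms and positionScale are names of my own, not part of the plugin:

```csharp
using UnityEngine;

public class HydraHands : MonoBehaviour
{
    public Transform leftHand;            // in-game hand objects (my own names,
    public Transform rightHand;           // not part of the plugin)
    public float positionScale = 0.001f;  // scales the raw base-relative units into world space

    void Update()
    {
        UpdateHands();
    }

    // Mirror each Hydra controller onto its hand Transform.
    void UpdateHands()
    {
        SixenseInput.Controller left  = SixenseInput.Controllers[0];
        SixenseInput.Controller right = SixenseInput.Controllers[1];

        if (left != null)
        {
            leftHand.localPosition = left.Position * positionScale;
            leftHand.localRotation = left.Rotation;
        }
        if (right != null)
        {
            rightHand.localPosition = right.Position * positionScale;
            rightHand.localRotation = right.Rotation;
        }
    }
}
```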

For the prototype I was making, I wanted the player to feel like they could pick up objects using “psychic powers”, so first I needed to calculate where each of the player’s controllers was pointing. Raycasting was the natural way to handle this: each hand’s position serves as the starting point of a ray and the “handPosition.forward” Vector3 as its direction, capped at an arbitrary maximum length.

[Image: The function called for each hand from UpdateHands(), casting a ray in order to place the hitPoint object at its end.]
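A sketch of that function; maxRayLength is the arbitrary cap mentioned above, and hitPoint is the marker object placed at the ray’s end:

```csharp
using UnityEngine;

public class HandAim : MonoBehaviour
{
    public float maxRayLength = 20f; // the arbitrary cap on how far a hand can reach

    // Cast a ray along the hand's forward vector and park the hitPoint
    // marker wherever the ray ends.
    void GetHitPosition(Transform handPosition, Transform hitPoint)
    {
        Ray ray = new Ray(handPosition.position, handPosition.forward);
        RaycastHit hit;

        if (Physics.Raycast(ray, out hit, maxRayLength))
        {
            hitPoint.position = hit.point;                  // stop at the first thing hit
        }
        else
        {
            hitPoint.position = ray.GetPoint(maxRayLength); // nothing hit: use the full length
        }
    }
}
```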

When the trigger is held on either controller, that hand is grabbing an object: the object lerps its position and rotation toward the end point of the raycast described above, which lets the player move it around with their hand and creates the feeling that they are actually manipulating it with their mind. To start a grab, a spherecast is performed when the OnTriggerDown event fires for either controller, searching for an object with the “Grabbable” script component.

[Image: The code for grabbing an object. A spherecast is used to mitigate the Hydras’ issues with precise aiming.]
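A sketch of both halves of that logic; the OnTriggerDown wiring is omitted, and sphereRadius, grabSpeed, and the grabbed field are illustrative names of my own:

```csharp
using UnityEngine;

public class PsychicGrab : MonoBehaviour
{
    public float sphereRadius = 0.25f; // fat cast to forgive slightly-off aim
    public float maxRayLength = 20f;
    public float grabSpeed = 10f;
    private Grabbable grabbed;

    // Called on OnTriggerDown: spherecast from the hand and latch onto
    // the first object carrying a Grabbable component.
    void InitGrab(Transform handPosition)
    {
        RaycastHit hit;
        if (Physics.SphereCast(handPosition.position, sphereRadius,
                               handPosition.forward, out hit, maxRayLength))
        {
            Grabbable target = hit.collider.GetComponent<Grabbable>();
            if (target != null)
                grabbed = target;
        }
    }

    // While the trigger is held: ease the grabbed object toward the
    // raycast's end point each frame so it follows the player's hand.
    void MoveGrabbed(Transform hitPoint)
    {
        if (grabbed == null) return;

        grabbed.transform.position = Vector3.Lerp(
            grabbed.transform.position, hitPoint.position, Time.deltaTime * grabSpeed);
        grabbed.transform.rotation = Quaternion.Slerp(
            grabbed.transform.rotation, hitPoint.rotation, Time.deltaTime * grabSpeed);
    }
}
```

Easing with Lerp and Slerp each frame, rather than snapping the object to the hit point, is what gives the movement its weighty, telekinetic feel.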

The resulting prototype is actually fairly robust, and I feel it goes a long way toward proving out the experience I initially set out to create. What astonishes me, though, is how simple the Sixense team has made their library to use. It took some time up front to learn how everything was set up, but after that I found development for the Hydras to be very intuitive, and I was able to focus on the logistics of implementing my prototype.

[Video: My prototype showing off hand tracking and rotation, as well as manipulating objects with the Hydras.]

The Sixense team has implemented their library to work with the Source engine and Unreal Engine 4 as well, and has expressed interest in adding support for CryEngine. With the Hydra deprecated, the company has moved on to its next iteration of hardware, the STEM, which is supposed to be backwards-compatible with the Hydra version of the Sixense library. I’m glad I took the time to learn how to use it, especially since that knowledge should still be valuable when Sixense’s next product comes out.
