Tuesday 15th November 2016
The virtual reality rulebook is still being written, but plenty of self-taught experts are quick to identify what the most important element of an immersive experience is.
For some, immersion is dependent on performance, with even the slightest drop in framerate throwing people out of their virtual world (and potentially causing motion sickness). For others, it’s the audio that draws players most into the world, or perhaps a consistent visual style. For Yuka Kojima, CEO of US-based VR tech firm Fove, it all comes down to the controls.
“Input defines what you can and can’t do, so it is critically important to virtual reality and its level of immersion,” she tells GamesIndustry.biz. “Good and active input gets out of the way and feels natural and intentional. To achieve true presence, seamless stress-free input is essential.”
When the current wave of virtual reality platforms first emerged, led by the crowdfunded pioneer Oculus Rift, input was generally limited to traditional devices such as the gamepad. For many VR titles, this is a sufficient way for players to interact with the game. Core gamers in particular will be so familiar with the controller that they will often be able to follow button prompts and perform complex actions without needing to look at the device in their hands – much as they do when playing games on a standard TV or monitor.
However, Kojima says that while this familiarity is an advantage for developers targeting a core audience, the use of gamepads in isolation offers only “limited utility in serious VR”.
“By only using a gamepad, you force one of two things to occur,” she explains. “Either you get a slow user experience and a parallax error [where what the player expects based on their input and what they actually see is slightly different, and therefore jarring], or you must be overly reliant on head-tracking, which in our opinion is also slow, tedious and potentially induces motion sickness.”
VR leaders have since branched out into motion controls, most notably with HTC Vive’s wand-like controllers, the revived PlayStation Move and Oculus’ Touch devices. These have opened up a world of possibilities, enabling developers to map realistic gestures to in-game actions: players no longer wiggle an analogue stick to aim a gun or swing a sword, they physically move their arms as they would in real life.
But again, while Kojima praises these devices, the former SCE Japan producer believes these are just a step towards truly immersive controls.
“Motion controllers are an essential part of VR,” she says, “but they don’t quite go far enough yet. They are excellent for many games but for anything too complicated, they can be frustrating.”
Over the last few years, scattered reports have emerged of companies experimenting with even more advanced methods of inputting game commands. Finger-tracking devices and haptic gloves are among the many innovative new technologies currently in development as firms try to bring one-to-one movements into virtual reality.
Kojima believes these devices, when perfected and released commercially, will become an “essential component” of virtual reality, but adds that they are still just another step on the road to the ultimate VR solution. In fact, she says no single system of input will be able to accomplish this.
“The full potential of VR requires a number of different technologies working in unison,” she says. “Current systems provide an interpretation of touch and control, but VR at its best will be more like what happens in the movie Avatar. Of course, this technology is still a ways off, but most of it is closer than most people think.”
Fove’s own contribution to this ongoing quest for truly immersive VR input is eye-tracking. The headset is able to follow not only where the user’s eyes are directed within the built-in display, but even what they are focused on. The theory is that this will enable VR games to adapt to what the player is paying attention to, even optimising rendering by generating only the focal point in high detail.
The applications range from adding realistic depth of field to enabling new, more natural mechanics. Complicated user interfaces, for example, could be navigated by simply gazing at the required option, while teleportation could be based on where players look rather than where they point or guide a virtual marker. Eye-tracking can also make virtual characters far more engaging by having them meet players’ eyes.
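The two ideas above – rendering only the focal point in full detail, and selecting UI options by gaze – can be sketched in a few lines. This is a hypothetical illustration, not Fove's implementation: the coordinate convention, radii and dwell threshold are all invented for the example.

```python
import math

def detail_level(gaze, point, fovea_radius=0.1, mid_radius=0.3):
    """Return a render-quality tier for `point` given the tracked gaze position.
    Both arguments are (x, y) tuples in normalized screen coordinates [0, 1].
    The radii are arbitrary example thresholds."""
    dist = math.dist(gaze, point)
    if dist <= fovea_radius:
        return "full"      # gaze focus: render at native detail
    if dist <= mid_radius:
        return "medium"    # near periphery: reduced detail
    return "low"           # far periphery: coarsest rendering

def gaze_select(gaze, option_pos, dwell_time, radius=0.05, threshold=0.8):
    """True once the gaze has dwelt within `radius` of a UI option
    for at least `threshold` seconds (a common dwell-to-select pattern)."""
    return math.dist(gaze, option_pos) <= radius and dwell_time >= threshold

# A point near the gaze gets full detail; the far corner gets the low tier.
print(detail_level((0.5, 0.5), (0.52, 0.5)))  # "full"
print(detail_level((0.5, 0.5), (0.0, 0.0)))   # "low"
```

Real foveated renderers work on GPU shading rates rather than per-point checks, but the principle – detail falls off with distance from the gaze point – is the same.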
“Rather than just communicating with other players in a chat box, we want them to be able to make eye contact, interact and laugh with one another,” says Kojima. “Virtual reality is an amazing tool to be able to bring characters to life, and we want to take that one step further.”
But, the CEO stresses, eye-tracking is designed to enhance established control methods rather than replace them.
“Eye tracking is an additive layer for input,” she says. “It can make a gamepad fast and accurate, and can free up your hands to do other things, or be lazy while you navigate menus. Eye-tracking is a quantum leap beyond neck-based gaze input.
“Our biggest technical challenge is, of course, getting eye-tracking right for everyone. This includes users of every race, with glasses and without, kids and elderly users. Using advanced machine learning and analytics, we are close to achieving this.”
And much like motion controls and finger-tracking, eye-tracking is not the end goal for virtual reality inputs. Eventually, Fove and other VR firms will expand beyond the eyes to encompass the entire expression and perhaps even emotion of the user. The future of VR is taking every reaction into account, according to Kojima.
“We want to combine our technology with face-tracking to bring people completely into VR as the avatar of their choosing,” she concludes. “This will enable VR communication to rival that of the real world and bring so many more possibilities to this already fantastic environment.
“VR is undergoing rapid change and the possibilities are almost limitless, but right now we are bound by current inputs. Mainstream input systems will capture an optimally functional set as every additional feature has costs associated with it. Naturally, as these costs decrease this set will expand, but right now we need to spend resources to ensure the quality of content.”