Innovation: Eye-gaze tracking for computer interaction

Mice, and now touchscreens, have become part of our daily interaction with computers. But what about people who lack the ability to use a mouse or touchscreen, or situations where these devices would be impractical or outright dangerous?

Many researchers have explored eye-gaze tracking as a potential control mechanism. These trackers have become sophisticated and small enough to feature in devices such as smartphones and tablets. But on their own, they may not offer the precision and speed needed to perform complex computing tasks.

Now, a team of researchers at the Department of Engineering has developed a computer control interface that uses a combination of eye-gaze tracking and other inputs. The team’s research was published in a paper, ‘Multimodal Intelligent Eye-Gaze Tracking System’, in the International Journal of Human-Computer Interaction.

Dr Pradipta Biswas, Senior Research Associate in the Department’s Engineering Design Centre, and his fellow researchers made two major enhancements to a standalone gaze-tracking system. First, software interprets factors such as the velocity, acceleration and bearing of the gaze to predict the user’s intended target. Second, an additional mode of input is employed, such as a joystick.
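To make the prediction step concrete, here is a minimal sketch, in Python, of how recent gaze movement might be scored against candidate targets using velocity and bearing. The function, weights and thresholds below are illustrative assumptions, not the team’s published algorithm.

import math

def predict_target(gaze_samples, targets):
    """Guess which on-screen target the gaze is heading towards.

    gaze_samples: list of (x, y, t) tuples, oldest first.
    targets: list of (x, y) target centres.
    """
    (x0, y0, t0), (x1, y1, t1) = gaze_samples[-2], gaze_samples[-1]
    dt = (t1 - t0) or 1e-6                       # guard against a zero interval
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt      # velocity components
    speed = math.hypot(vx, vy)
    bearing = math.atan2(vy, vx)                 # direction of gaze travel

    def score(target):
        tx, ty = target
        dist = math.hypot(tx - x1, ty - y1)
        angle = math.atan2(ty - y1, tx - x1)
        # Wrap the angular difference into [-pi, pi]; a small value means
        # the gaze is moving towards this target.
        misalignment = abs(math.atan2(math.sin(angle - bearing),
                                      math.cos(angle - bearing)))
        if speed < 1.0:          # nearly still: treat as a fixation
            return -dist         # and fall back to plain proximity
        return -(misalignment * 300 + dist)   # 300 is an arbitrary trade-off

    return max(targets, key=score)

# A gaze sweeping up-right should favour the target along that path.
targets = [(100, 100), (400, 300), (700, 120)]
samples = [(50, 50, 0.00), (120, 80, 0.05)]
print(predict_target(samples, targets))   # -> (400, 300)

Blending direction of travel with distance in this way is what lets a predictive system settle on a likely target before the gaze has actually landed on it.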

“We hope that our eye-gaze tracking system can be used as an assistive technology for people with severe mobility impairment,” Pradipta said. “We are also exploring the potential applications in military aviation and automotive environments where operators’ hands are engaged with controlling an aircraft or vehicle.”

One challenge in designing such a system is how the user confirms a selection once their gaze reaches the intended target. On a typical personal computer, this is accomplished with a click of the mouse; on a phone or tablet, with a tap on the screen.

Basic eye-gaze tracking systems often use a signal such as a blink to indicate this choice. However, blinking is often not ideal. In combat situations, for example, pilots’ eyes might dry out, preventing them from blinking at the right moment.

Pradipta’s team experimented with several ways to solve the selection problem, including manipulating joystick axes, enlarging predicted targets, and using a spoken keyword such as ‘fire’ to indicate a target.
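The target-enlargement idea, for instance, can be sketched in a few lines: the target the predictor currently favours is granted a larger activation radius, so less pointing precision is needed to settle on it. The names and radii below are illustrative assumptions, not the paper’s implementation.

# Hypothetical sketch of "enlarging predicted targets": the predicted
# target gets a larger activation radius than the others.
BASE_RADIUS = 20      # activation radius in pixels for ordinary targets
ENLARGED_RADIUS = 45  # larger radius granted to the predicted target

def hit_test(gaze_point, targets, predicted):
    """Return the first target whose (possibly enlarged) activation
    circle contains the gaze point, or None."""
    gx, gy = gaze_point
    for tx, ty in targets:
        radius = ENLARGED_RADIUS if (tx, ty) == predicted else BASE_RADIUS
        if (gx - tx) ** 2 + (gy - ty) ** 2 <= radius ** 2:
            return (tx, ty)
    return None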

Unsurprisingly, they found that a mouse remains the fastest and least cognitively stressful way to select a target, possibly helped by the fact that most computer users are already comfortable with the technique. But a multimodal approach combining eye-gaze tracking, predictive modelling and a joystick can almost match a mouse in terms of accuracy and cognitive load.

Further, for computer novices who are given sufficient training in the system, the intelligent multimodal approach can even be faster than a mouse.

The hope is that these findings will lead to systems that perform as well as a mouse, or even better. “I am very excited for the prospects of this research,” Pradipta said. “When clicking a mouse isn’t possible for everyone, we need something else that’s just as good.”

– TechCrunch, April 24, 2015
