GaVe: A webcam-based gaze vending interface using one-point calibration
DOI:
https://doi.org/10.16910/jemr.16.1.2
Keywords:
human-computer interaction, gaze interaction, touchless, gaze, eye movement, eye tracking, usability, dwell time
Abstract
Gaze input, i.e., information input via the eyes of users, represents a promising method for contact-free interaction in human-machine systems. In this paper, we present the GazeVending interface (GaVe), which lets users control actions on a display with their eyes. The interface works with a regular webcam, available on most of today's laptops, and requires only a short one-point calibration before use. GaVe is designed in a hierarchical structure, presenting broad item clusters to users first and subsequently guiding them through a second selection round, which allows the presentation of a large number of items. Cluster/item selection in GaVe is based on dwell time, i.e., the duration for which users look at a given cluster/item. A user study (N=22) was conducted to identify optimal dwell-time thresholds and comfortable human-to-display distances. Users' perception of the system, as well as error rates and task completion times, were recorded. We found that all participants quickly understood how to interact with the interface and showed good performance, selecting a target item within a group of 12 items in 6.76 seconds on average. We provide design guidelines for GaVe and discuss the potential of the system.
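The dwell-time selection described in the abstract can be illustrated with a minimal sketch: an item is selected once the gaze rests on it continuously for a threshold duration. The function name, the sample format, and the threshold value below are illustrative assumptions, not the paper's actual implementation.

```python
DWELL_THRESHOLD_S = 1.0  # assumed dwell-time threshold in seconds


def dwell_select(gaze_samples, threshold=DWELL_THRESHOLD_S):
    """Return the first item id fixated continuously for `threshold`
    seconds, or None. `gaze_samples` is an iterable of
    (timestamp_seconds, item_id) pairs; item_id is None when the gaze
    falls outside every selectable region."""
    current_item = None
    dwell_start = None
    for t, item in gaze_samples:
        if item is None or item != current_item:
            current_item = item      # gaze moved: restart the dwell timer
            dwell_start = t
        elif dwell_start is not None and t - dwell_start >= threshold:
            return item              # dwell threshold reached: select
    return None


# Example: 30 Hz samples; the gaze settles on item "B" long enough to select it.
samples = [(i / 30, "A") for i in range(10)] + \
          [(10 / 30 + i / 30, "B") for i in range(40)]
print(dwell_select(samples))  # -> B
```

Any glance away from the current item resets the timer, which is why short fixations (as on item "A" above) never trigger a selection.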
Published
2023-01-25
Section
Articles
License
Copyright (c) 2023 Zhe Zeng, Sai Liu, Hao Cheng, Hailong Liu, Yang Li, Yu Feng, Felix Wilhelm Siebert
![Creative Commons License](http://i.creativecommons.org/l/by/4.0/88x31.png)
This work is licensed under a Creative Commons Attribution 4.0 International License.
How to Cite
GaVe: A webcam-based gaze vending interface using one-point calibration. (2023). Journal of Eye Movement Research, 16(1). https://doi.org/10.16910/jemr.16.1.2