Detecting performance difficulty of learners in colonoscopy: Evidence from eye-tracking
Abstract
Eye-tracking can help decode the intricate control mechanisms underlying human performance. In healthcare, physicians-in-training require extensive practice to improve their clinical skills. When trainees encounter difficulty during practice, they need feedback from experts to improve their performance; such personal feedback is time-consuming and subject to bias. In this study, we tracked the eye movements of trainees during simulated colonoscopy. We applied deep learning algorithms to eye-tracking metrics to detect moments of navigation lost (MNL), a signature sign of performance difficulty during colonoscopy. Basic human eye-gaze and pupil characteristics were learned and verified by deep convolutional generative adversarial networks (DCGANs); the generated data were fed to Long Short-Term Memory (LSTM) networks under three different data-feeding strategies to classify MNLs across the entire colonoscopic procedure. Outputs from the deep learning models were compared to expert judgments of MNLs based on colonoscopic videos. The best classification outcome was achieved when human eye data were augmented with 1,000 synthesized eye-data sequences, yielding optimal accuracy (90%), sensitivity (90%), and specificity (88%). This study lays an important foundation for our work on developing a self-adaptive education system for training healthcare skills using simulation.
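The abstract reports accuracy, sensitivity, and specificity for MNL classification, and a data-feeding strategy that augments real eye-tracking data with GAN-synthesized sequences. A minimal sketch of both ideas is below; all function names and the toy labels are hypothetical illustrations, not taken from the paper.

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity for binary MNL labels.

    y_true: expert-judged labels (1 = MNL segment, 0 = normal navigation).
    y_pred: model-predicted labels for the same segments.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)          # MNLs caught
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)  # normals kept
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)      # false alarms
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)      # missed MNLs
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)  # fraction of true MNLs detected
    specificity = tn / (tn + fp)  # fraction of non-MNL segments correctly passed
    return accuracy, sensitivity, specificity


def mix_training_set(real_seqs, real_labels, synth_seqs, synth_labels, n_synth):
    """Feeding strategy sketch: real eye-tracking sequences augmented
    with the first n_synth GAN-synthesized sequences (e.g. n_synth=1000)."""
    return real_seqs + synth_seqs[:n_synth], real_labels + synth_labels[:n_synth]


# Toy example: six procedure segments, expert labels vs. model predictions.
expert = [1, 1, 0, 0, 1, 0]
model = [1, 0, 0, 0, 1, 1]
acc, sens, spec = classification_metrics(expert, model)
```

In this toy run the model catches two of three expert-labeled MNLs (sensitivity 2/3) and raises one false alarm (specificity 2/3); the paper's reported best configuration corresponds to much higher values of these same metrics.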
License
Copyright (c) 2021 Xin Liu, Bin Zheng, Xiaoqin Duan, Wenjing He, Yuandong Li, Jinyu Zhao, Chen Zhao, Lin Wang
This work is licensed under a Creative Commons Attribution 4.0 International License.