Understanding How Low Vision People Read Using Eye Tracking

R. Wang, L. Zeng, et al.

This research by Ru Wang, Linxiu Zeng, Xinyong Zhang, Sanbrita Mondal, and Yuhang Zhao delves into the reading experiences of low vision individuals. By employing an improved calibration interface with commercial eye trackers, the study uncovers unique gaze patterns and the challenges faced by low vision readers, paving the way for innovative gaze-based technologies.

Introduction
Reading is crucial for accessing information, but it presents significant challenges for individuals with low vision, that is, visual impairments that cannot be corrected by standard treatments. Low vision encompasses a range of conditions (central or peripheral vision loss, night blindness, blurry vision) that affect reading ability in different ways. Most low vision individuals rely on their residual vision together with aids such as handheld magnifiers, computer screen magnifiers, enlarged fonts, and contrast adjustments, yet reading remains slow and difficult: previous research indicates that low vision individuals read approximately three times slower than sighted individuals. Eye tracking offers a potential solution by revealing fine-grained gaze behaviors, enabling more targeted assistive technologies. Commercial eye trackers, increasingly integrated into everyday devices, can capture such data; however, their standard calibration methods may not suit low vision individuals, whose visual abilities and eye characteristics vary widely. This study explored the feasibility of using commercial eye trackers to collect high-quality gaze data from low vision individuals and analyzed their unique reading gaze behaviors to inform the design of gaze-based assistive technology. The research addressed four questions: 1) Can reliable eye gaze data be collected from low vision individuals using commercial eye trackers? 2) How do low vision individuals' gaze behaviors differ from those of sighted individuals during reading? 3) How do different visual conditions affect low vision individuals' gaze behaviors? 4) How do different screen magnification modes affect low vision individuals' gaze patterns?
Literature Review
Existing research highlights the challenges low vision individuals face in reading. Reading performance varies greatly across visual conditions: for example, individuals with blurry vision show a strong relationship between reading time and word length due to a reduced visual span, while those with macular degeneration (central vision loss) read significantly more slowly than those with comparable acuity but intact central vision. Screen magnifiers, while helpful, present usability issues; users may lose context due to the limited field of view, and manipulating the magnifier increases cognitive load. These issues contribute to low vision individuals reading significantly more slowly than sighted individuals. Eye tracking offers valuable insight into reading behaviors. Commercial eye trackers estimate gaze points from the pupil, iris, and corneal reflections, and require per-user calibration to account for individual differences. Although these methods are accurate for sighted individuals, they overlook the needs of low vision individuals, whose visual acuity and eye characteristics vary. Some research in optometry and vision science has investigated low vision individuals' gaze patterns using specialized, expensive eye trackers, but HCI research in this area remains limited. Prior work has explored eye tracking for gaze-based low vision assistance in virtual reality or with specially designed interfaces; the feasibility and details of using readily available commercial eye trackers, however, have not been fully explored. This paper bridges that gap, focusing on off-the-shelf eye trackers.
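The paper does not detail the trackers' internal algorithms, but a common approach in the eye-tracking literature is to fit a per-user regression from eye features to screen coordinates during calibration. Below is a minimal sketch of such a mapping, assuming a second-order polynomial over a pupil-glint vector; the function names and feature choice are illustrative, not Tobii's implementation.

```python
import numpy as np

def poly_features(v):
    """Second-order polynomial expansion of a pupil-glint vector (x, y)."""
    x, y = v
    return np.array([1.0, x, y, x * y, x * x, y * y])

def fit_calibration(pupil_glint_vectors, screen_points):
    """Least-squares fit of a polynomial gaze mapping.

    pupil_glint_vectors: (n, 2) eye-feature vectors observed while the
    user fixates known calibration targets.
    screen_points: (n, 2) on-screen target positions in pixels.
    Returns a (6, 2) coefficient matrix mapping features to screen x, y.
    """
    A = np.array([poly_features(v) for v in pupil_glint_vectors])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_points), rcond=None)
    return coeffs

def estimate_gaze(coeffs, pupil_glint_vector):
    """Map a new eye-feature vector to an estimated on-screen gaze point."""
    return poly_features(pupil_glint_vector) @ coeffs
```

Seen this way, the study's adjustable calibration targets change what a low vision user can reliably fixate during the fitting step, not the fitting itself.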
Methodology
This study aimed to collect high-quality gaze data from low vision individuals using commercial eye trackers and to investigate their unique gaze patterns during reading. Twenty low vision and twenty sighted participants were recruited. Low vision participants ranged in age from 19 to 86 (M = 58.3, SD = 22.1), seven of whom were legally blind; sighted participants ranged from 21 to 51 (M = 31.1, SD = 9.5). A Tobii Pro Fusion eye tracker (120 Hz) was used. The study employed an improved calibration interface with adjustable target size (36 px to 256 px) to accommodate low vision participants' varying acuity, and a dominant-eye-based data collection method to address the inconsistent gaze behavior between the two eyes observed in some low vision participants. A web-based reading interface was developed using React and Flask, providing a regular reading mode (adjustable font size, weight, and color) and two magnification modes (lens magnifier and full-screen magnifier). Visual acuity and visual field were assessed for low vision participants using ETDRS charts and a custom visual field test interface.

The study involved four phases: 1) an initial interview and visual acuity test; 2) gaze calibration and validation; 3) gaze collection during reading tasks (six passages from the CLEAR corpus, with counterbalanced magnification mode order); and 4) an exit interview and visual field test.

Data analysis combined quantitative and qualitative methods. Quantitative analysis focused on gaze recognition accuracy, data loss, fixation number and duration, saccade number and length, revisitation rate, line switching behavior, and smooth pursuit. Statistical methods included the Aligned Rank Transform (ART) for nonparametric factorial ANOVA and Pearson's correlation test. Qualitative data from interviews were analyzed using open coding and thematic analysis.
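The paper does not publish its analysis code, but fixation and saccade metrics like those listed above are commonly derived with a dispersion-threshold (I-DT) detector. The sketch below shows one such detector for 120 Hz samples; the pixel and duration thresholds are illustrative assumptions, not the authors' values.

```python
import numpy as np

def _dispersion(wx, wy):
    """Sum of x-range and y-range (pixels) for a window of gaze samples."""
    return (max(wx) - min(wx)) + (max(wy) - min(wy))

def detect_fixations(xs, ys, hz=120, max_dispersion=35.0, min_duration=0.08):
    """Dispersion-threshold (I-DT) fixation detection.

    xs, ys: gaze sample coordinates in pixels, one sample per frame.
    Returns (start_idx, end_idx, centroid_x, centroid_y) per fixation,
    so fixation count, duration (samples / hz), and saccades fall out directly.
    """
    min_len = max(1, int(min_duration * hz))
    fixations, start, n = [], 0, len(xs)
    while start + min_len <= n:
        end = start + min_len
        if _dispersion(xs[start:end], ys[start:end]) <= max_dispersion:
            # Grow the window until dispersion exceeds the threshold.
            while end < n and _dispersion(xs[start:end + 1], ys[start:end + 1]) <= max_dispersion:
                end += 1
            fixations.append((start, end,
                              float(np.mean(xs[start:end])),
                              float(np.mean(ys[start:end]))))
            start = end
        else:
            start += 1
    return fixations

def saccade_lengths(fixations):
    """Euclidean distances (pixels) between consecutive fixation centroids."""
    return [float(np.hypot(b[2] - a[2], b[3] - a[3]))
            for a, b in zip(fixations, fixations[1:])]
```

Forward versus regressive saccades, revisitations, and line switches can then be classified by comparing consecutive fixation centroids against the rendered text layout.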
Key Findings
The findings demonstrate the potential of commercial eye trackers for collecting high-quality gaze data from low vision individuals. The improved calibration interface yielded gaze recognition accuracy and data loss comparable between low vision and sighted participants, and the alignment between audio-recorded reading and gaze data validated the collected data. Low vision participants exhibited unique gaze patterns during reading: more but shorter fixations, and more but shorter forward saccades, than sighted controls, indicating lower reading efficiency with less information processed per fixation. Low vision participants also experienced greater difficulty in line switching, requiring more line searches. Visual acuity and visual field significantly affected gaze behavior: participants with lower acuity and a more limited field showed more but shorter fixations and shorter forward saccades, suggesting reduced information processing capacity. Magnification mode significantly influenced reading time. The regular mode (with adjustable font size) was faster than both the lens and full-screen magnifiers. The lens magnifier produced shorter fixation durations, more regressive saccades, and a higher revisitation rate, reflecting increased difficulty in tracking and maintaining reading position. A wider lens magnifier window correlated positively with forward saccade length and negatively with reading time, suggesting that a wider window may improve perceptual span and reading speed; taller windows, however, were not always preferred, as some participants found them distracting. Qualitative data revealed challenges of hand-eye coordination with screen magnifiers and a need for better support during line switching (e.g., highlighting lines or labeling line indices).
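To make the window-width correlation concrete, this is how such a relationship could be tested with Pearson's r (the statistic the study reports using); the per-participant values below are hypothetical placeholders, not the study's data.

```python
from scipy.stats import pearsonr

# Hypothetical per-participant values, for illustration only.
lens_window_width_px = [400, 520, 610, 700, 820, 900]      # chosen lens widths
fwd_saccade_len_px = [38.0, 45.0, 44.0, 57.0, 60.0, 66.0]  # mean forward saccade length

r, p = pearsonr(lens_window_width_px, fwd_saccade_len_px)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # a positive r would match the reported trend
```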
Discussion
This study demonstrates the feasibility of using readily available commercial eye trackers for low vision research. The improved calibration interface and dominant-eye data collection method yielded data quality comparable to that obtained from sighted participants. The findings reveal unique gaze patterns that reflect specific challenges faced by low vision readers, particularly in information processing, line switching, and interaction with screen magnifiers. These findings have significant implications for the design of gaze-based assistive technology. The observed line-switching challenges call for systems that provide real-time support for line tracking and line switching, such as line highlighting or dynamic line spacing adjustments. The difficulty with word recognition suggests systems that can detect such difficulty and respond, for example by reading the word aloud. The challenges associated with screen magnifiers point to hands-free, gaze-controlled magnifiers and context-aware magnification that adjusts window size to the user's needs. This research goes beyond previous work, which often relied on reading performance measures, by providing direct evidence of gaze behaviors at the word and sentence levels for a deeper understanding of these challenges.
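As one possible realization of the line-switching support discussed here, the sketch below keeps a stable highlighted line under noisy gaze input by adding a hysteresis margin around the current line's bounding box. This is a design sketch under stated assumptions, not a system from the paper; all names and thresholds are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Line:
    index: int     # line number within the passage
    top: float     # bounding-box top edge, screen pixels
    bottom: float  # bounding-box bottom edge, screen pixels

class LineHighlighter:
    """Maps noisy gaze y-coordinates to a stable 'current line' to highlight."""

    def __init__(self, lines: List[Line], hysteresis_px: float = 10.0):
        self.lines = lines
        self.hysteresis = hysteresis_px
        self.current: Optional[Line] = None

    def update(self, gaze_y: float) -> Optional[int]:
        # Stay on the current line while gaze remains within its padded box,
        # so the highlight does not flicker at line boundaries.
        if self.current is not None and (
            self.current.top - self.hysteresis
            <= gaze_y
            <= self.current.bottom + self.hysteresis
        ):
            return self.current.index
        # Otherwise switch to whichever line contains the gaze point.
        for line in self.lines:
            if line.top <= gaze_y <= line.bottom:
                self.current = line
                return line.index
        # Gaze between lines or off-text: keep the last highlighted line.
        return self.current.index if self.current is not None else None
```

On each new gaze sample, a renderer would call update() and draw the highlight for the returned line index; the same mechanism could drive dynamic line spacing or a gaze-following lens magnifier.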
Conclusion
This research provides the first detailed investigation of gaze data collection for low vision individuals using commercial eye trackers and a thorough exploration of their reading challenges. The study confirmed the feasibility of using commercial eye trackers, identified unique gaze patterns among low vision readers, and provided valuable design implications for gaze-based assistive technologies. Future research should focus on specific low vision subgroups, recruit age-matched control groups, and explore gaze behaviors during silent reading to further enhance the understanding and support of low vision individuals.
Limitations
The study's limitations include the heterogeneity of visual conditions among low vision participants, a potential age difference between the low vision and sighted groups, and the use of reading aloud, which may not fully reflect silent reading. Further research with larger, more homogeneous participant groups and age-matched controls, along with investigation of silent reading behavior, is needed to improve the generalizability of the findings.