Analyzing Eye Movement For Better Map Design


Map design and research into the visual perception of spatial data have long employed methods that monitor viewers' eye movements. Studies have measured, for instance, how long and on which regions of a map viewers' eyes fixate. Combined with standard methods such as questionnaires, eye movement monitoring can benefit the design of interactive maps, since fixation durations and locations give insight into cognitive perception.[1] Experiments recording eye movements across different types of visual displays have shown which forms of presentation let different viewers perceive correct information more easily, quickly, and consistently.[2] Researchers have also developed quantitative methods for revealing how temporal and regional gaze patterns relate to cognitive understanding. One method uses Space-Time-Cubes, which slice gaze data by time and region so that spatiotemporal analyses, including Space-Time-Paths, can be mapped and examined, providing greater insight than traditional static maps that simply record where someone looks.[3]
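As a rough illustration of how raw gaze samples become the fixation durations and regions discussed above, the sketch below implements a simple dispersion-threshold (I-DT style) fixation detector. The data format, function name, and threshold values are assumptions for illustration, not taken from the studies cited:

```python
# Minimal dispersion-threshold (I-DT style) fixation detection sketch.
# Assumes gaze samples are (x, y, t) tuples in screen pixels and seconds;
# the thresholds below are illustrative, not calibrated values.

def detect_fixations(samples, max_dispersion=25.0, min_duration=0.1):
    """Group consecutive gaze samples into fixations.

    A window of samples counts as a fixation when its bounding-box
    dispersion (width + height) stays under max_dispersion pixels
    for at least min_duration seconds.
    """
    fixations = []
    i = 0
    while i < len(samples):
        j = i + 1
        # Grow the window while its dispersion stays within the threshold.
        while j <= len(samples):
            xs = [s[0] for s in samples[i:j]]
            ys = [s[1] for s in samples[i:j]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        j -= 1  # last window size that was still compact
        duration = samples[j - 1][2] - samples[i][2]
        if j - i >= 2 and duration >= min_duration:
            # Record the centroid (region) and dwell time of the fixation.
            xs = [s[0] for s in samples[i:j]]
            ys = [s[1] for s in samples[i:j]]
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), duration))
            i = j
        else:
            i += 1
    return fixations
```

Feeding such fixation centroids and durations into a density surface is essentially how the gaze heat maps shown below are built.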

Heat maps showing the gaze patterns on two different map interfaces. Source: Çöltekin et al., 2009.


In addition to map design and comprehension, other research has extended eye movement analysis to tasks such as driving, where safety often depends on visual and cognitive coordination. Researchers have developed models, such as the visual-motor coordination model, that track how eye movements change in relation to vehicle movements. The aim is to understand how motor movements relate to drivers' cognitive perception, including how quickly perception leads to driving actions (e.g., taking evasive maneuvers). Eye movements can be geocoded, while GPS records precise car positions, so that each minor change in a car's positioning can be matched to changes in eye movement.[4] For older drivers, differences between car and eye movements are more pronounced: research has shown that right turns and turns within roundabouts give older drivers greater difficulty in translating perception into effective maneuvering.[5]
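Matching geocoded eye movements against a GPS track amounts to a temporal join. The following is a minimal sketch of that step, assuming a time-sorted track of (t, lat, lon) tuples and gaze timestamps on the same clock; the helper name and data shapes are hypothetical, not from the cited study:

```python
# Sketch of geocoding gaze events against a GPS track by nearest timestamp.
# Assumes the track is time-sorted and both streams share one clock.
import bisect

def geocode_gaze(gps_track, gaze_times):
    """Attach the nearest GPS position to each gaze timestamp."""
    times = [p[0] for p in gps_track]
    located = []
    for t in gaze_times:
        i = bisect.bisect_left(times, t)
        # Compare the neighbours on either side and keep the closer one.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(gps_track)]
        best = min(candidates, key=lambda j: abs(times[j] - t))
        located.append((t,) + gps_track[best][1:])
    return located
```

In practice, eye trackers sample far faster than consumer GPS units, so interpolating positions between fixes, rather than snapping to the nearest one, may be preferable.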

Fixation duration of a driver navigating a roundabout. Source: Sun, Xia, Falkmer, & Lee, 2016.


Studies of how various viewers perceive information have used statistical techniques such as analysis of variance (ANOVA); increasingly, however, the major challenge has become a "big data" one: visually displaying and making sense of large spatiotemporal eye movement datasets so that larger patterns can be assessed. The field of visual analytics for large datasets has been leading the way in creating new techniques for mathematical and visual analysis, incorporating statistical and machine learning approaches.[6] As eye monitoring and GIS are applied together in new and growing areas, big data techniques will be a leading way in which spatial cognition is understood, both visually and analytically, in the coming years.
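To make the ANOVA step concrete, here is a minimal hand-rolled one-way ANOVA F statistic over hypothetical groups of fixation durations. Real analyses would normally use a statistics package; the function name and any data passed to it are illustrative:

```python
# Minimal one-way ANOVA sketch: compares mean values (e.g., fixation
# durations) across viewer groups. Input is a list of lists of numbers.

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over lists of values."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: variation of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: variation of values inside each group.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F indicates that between-group differences dominate within-group noise; the corresponding p-value would come from the F distribution with (k − 1, n − k) degrees of freedom.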


[1] For more on how digital mapping can utilize measured eye movements to improve maps, see:  Çöltekin, Arzu, Benedikt Heil, Simone Garlandini, and Sara Irina Fabrikant. 2009. “Evaluating the Effectiveness of Interactive Map Interface Designs: A Case Study Integrating Usability Metrics with Eye-Movement Analysis.” Cartography and Geographic Information Science 36 (1): 5–17.

[2] For more on how eye movement measurements can improve map labeling quality, see:  Franke, Conrad, and Jürgen Schweikart. 2017. “Mental Representation of Landmarks on Maps: Investigating Cartographic Visualization Methods with Eye Tracking Technology.” Spatial Cognition & Computation 17 (1–2): 20–38. doi:10.1080/13875868.2016.1219912.

[3] For more on Space-Time-Cube approaches, see:  Li, Xia, Arzu Çöltekin, and Menno-Jan Kraak. 2010. “Visual Exploration of Eye Movement Data Using the Space-Time-Cube.” In Geographic Information Science, edited by Sara Irina Fabrikant, Tumasch Reichenbacher, Marc van Kreveld, and Christoph Schlieder, 6292:295–309. Berlin, Heidelberg: Springer Berlin Heidelberg.

[4] For more on vehicle and eye monitoring measurements, see:  Sun, Qian (Chayn), Jianhong (Cecilia) Xia, Nandakumaran Nadarajah, Torbjörn Falkmer, Jonathan Foster, and Hoe Lee. 2016. “Assessing Drivers’ Visual-Motor Coordination Using Eye Tracking, GNSS and GIS: A Spatial Turn in Driving Psychology.” Journal of Spatial Science 61 (2): 299–316. doi:10.1080/14498596.2016.1149116.

[5] For more on how older drivers have been studied using eye movement and car movement technologies, see: Sun, Qian (Chayn), Jianhong (Cecilia) Xia, Torbjörn Falkmer, and Hoe Lee. 2016. “Investigating the Spatial Pattern of Older Drivers’ Eye Fixation Behaviour and Associations with Their Visual Capacity.”

[6] For more on utilizing big data and visual analytic techniques to gain insight into eye movement, see: Blascheck, Tanja, Michael Burch, Michael Raschke, and Daniel Weiskopf. 2015. “Challenges and Perspectives in Big Eye-Movement Data Visual Analytics.” In 2015 Big Data Visual Analytics (BDVA), 1–8. IEEE. doi:10.1109/BDVA.2015.7314288.


