<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.10.3.2</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Eye-tracking Analysis of Interactive 3D Geovisualization</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Herman</surname>
						<given-names>Lukas</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Popelka</surname>
						<given-names>Stanislav</given-names>
					</name>
					<xref ref-type="aff" rid="aff2">2</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Hejlova</surname>
						<given-names>Vendula</given-names>
					</name>
					<xref ref-type="aff" rid="aff2">2</xref>
				</contrib>					
        <aff id="aff1">
		<institution>Masaryk University</institution>, <country>Czech Republic</country>
        </aff>
		<aff id="aff2">
		<institution>Palack&#xFD; University</institution>, <country>Czech Republic</country>
        </aff>
		</contrib-group>
     
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>31</day>  
		<month>5</month>
        <year>2017</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2017</year>
	</pub-date>
      <volume>10</volume>
      <issue>3</issue>
	  <elocation-id>10.16910/jemr.10.3.2</elocation-id>
	<permissions> 
	<copyright-year>2017</copyright-year>
	<copyright-holder>Herman, Popelka and Hejlova</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>This paper describes a new tool for collecting eye-tracking data and analysing them with the use of interactive 3D models. The tool makes analysing interactive 3D models easier than the time-consuming, frame-by-frame investigation of captured screen recordings with superimposed scanpaths. The main function of this tool, called 3DgazeR, is to calculate 3D coordinates (X, Y, Z coordinates of the 3D scene) for individual points of view. These 3D coordinates can be calculated from the position and orientation of a virtual camera and the 2D coordinates of the gaze upon the screen. The functionality of 3DgazeR is introduced in a case study example using Digital Elevation Models as stimuli. The purpose of the case study was to verify the functionality of the tool and discover the most suitable visualization methods for geographic 3D models. Five selected methods are presented in the results section of the paper. Most of the output was created in a Geographic Information System. 3DgazeR works with the SMI eye-tracker and the low-cost EyeTribe tracker connected with the open source application OGAMA, and can compute 3D coordinates from both raw data and fixations.</p>
      </abstract>
      <kwd-group>
        <kwd>eye-tracking</kwd>
        <kwd>3D visualization</kwd>
        <kwd>3D model</kwd>
        <kwd>cartography</kwd>
        <kwd>Geographic Information System</kwd>
        <kwd>3D analysis tool</kwd>
      </kwd-group>
    </article-meta>
  </front>
  
  <body>

    <sec id="S1">
      <title>Introduction</title>
      <p>The introduction summarizes the state of the art in 3D
cartography eye-tracking research, followed by a presentation
of previous attempts to record eye-tracking data over
interactive 3D models. In the methods section, 3DgazeR and
its implementation are described. The results contain five
selected data visualization methods applied in the example
of a simple case study. The paper ends with a summary
of the advantages and limitations of 3DgazeR.</p>

      <sec id="S1a">
        <title>3D Geovisualization</title>
        
		<p>Bleisch (
          <xref ref-type="bibr" rid="R47">1</xref>
          ) defines 3D geovisualization
as a generic term used for a range of 3D visualizations
representing the real world, parts of the real world, or other
data with a spatial reference. With the advent of virtual
globes such as Google Earth, or perhaps even earlier with
the notion of a digital earth (
          <xref ref-type="bibr" rid="R48">2</xref>
          ), they have become
increasingly popular, and many people already know 3D
geovisualizations even though they may not refer to them as
such. Most 3D geovisualizations are digital elevation
models draped with ortho or satellite imagery and relatively
detailed 3D city models (
          <xref ref-type="bibr" rid="R47">1</xref>
          ). These perspective views are
often referred to as <italic>3D maps</italic>. An overview of the usability
and usefulness of 3D geovisualizations was presented by
&#xC7;&#xF6;ltekin, Lokka (
          <xref ref-type="bibr" rid="R49">3</xref>
          ). The authors categorized the results from
existing empirical studies according to visualization type,
task type, and user type.</p>
                
         <p>3D geovisualization is not limited to the depiction of
terrain where the Z axis represents elevation. The
development of a phenomenon in time is often displayed, for
example, with the aid of a so-called Space-Time-Cube
(STC). H&#xE4;gerstrand (
          <xref ref-type="bibr" rid="R50">4</xref>
          ) proposed a framework for time
geography to study social interaction and the movement of
individuals in space and time. The STC is a visual
representation of this framework where the cube&#x2019;s horizontal
plane represents space, and the 3D vertical axis represents
time (
          <xref ref-type="bibr" rid="R51">5</xref>
          ). With a Space-Time-Cube, any spatio-temporal
data can be displayed. That data can be, for example,
information recorded by GPS devices, statistics with
location and time components, or data acquired with
eye-tracking technology (
          <xref ref-type="bibr" rid="R52">6</xref>
          ).</p>
        
        <p>3D maps and visualizations can generally be divided
into two categories: static and interactive. Static
visualizations are essentially perspective views (images) of any 3D
scene. In interactive 3D visualizations, the user can control
and manipulate the scene. The disadvantages of static 3D
maps are mainly overlapping objects in the 3D scene and
the distortion of distant objects. Inexperienced users could
have problems with scene manipulation using a mouse (
          <xref ref-type="bibr" rid="R53">7</xref>
          ).</p>
		  
        <p>Most of the cases referred to as 3D geovisualization are
not true 3D, but pseudo 3D (or 2.5D &#x2013; each X and Y
coordinate corresponds to exactly one Z value). According
to Kraak (
          <xref ref-type="bibr" rid="R54">8</xref>
          ), true 3D can be used in those cases where
special equipment achieves realistic 3D projection (i.e. 3D
LCD displays, holograms, stereoscopic images, anaglyphs
or physical models).</p>

        <p>Haeberling (
          <xref ref-type="bibr" rid="R55">9</xref>
          ) notes that there is almost no
cartographic theory or principles for creating 3D maps. In his
dissertation, G&#xF3;ralski (
          <xref ref-type="bibr" rid="R56">10</xref>
          ) also argues that solid
knowledge of 3D cartography is still missing. A similar
view can be found in other studies (
          <xref ref-type="bibr" rid="R53 R57 R58 R59">7, 11-13</xref>
          ). The authors
report that there is still very little known about how and in
which cases 3D visualization can be effectively used.
Performing an appropriate assessment of the usability of 3D
maps is necessary.</p>
      </sec>
	  
      <sec id="S1b">
        <title>Usability methods for 3D geovisualization (3D maps)</title>
		
        <p>Due to the massive increase in map production in
recent years, it is important to focus on map usability
research. Maps can be modified and optimized to better
serve users based on the results of this research.</p>

        <p>One of the first works dealing with map usability
research was published by Petchenik (
          <xref ref-type="bibr" rid="R60">14</xref>
          ). In her work
"Cognition in Cartography", she states that for the successful
transfer of information between the map creator and map
reader, it is necessary for the reader to understand the map
in the same way as the map creator. The challenge of
cognitive cartography is understanding how users read various
map elements and how the meanings of those elements vary
between different users.</p>

        <p>The primary direction of cognitive cartography
research leads to studies of how maps are perceived, with the
aim of increasing their efficiency and adapting their design
to the needs of a specific group of users. The International
Cartographic Association (ICA) has two commissions devoted
to map users, the appraisal of map effectiveness, and map
optimization &#x2013; the Commission on Use and User Issues
(
<ext-link ext-link-type="uri" xlink:href="http://use.icaci.org/" xlink:show="new">http://use.icaci.org/</ext-link>
) and the Commission on Cognitive
Visualization (
<ext-link ext-link-type="uri" xlink:href="http://cogvis.icaci.org/" xlink:show="new">http://cogvis.icaci.org/</ext-link>
). User aspects are
examined in respect of the different purposes of maps
(for example Stan&#x11B;k, Friedmannov&#xE1; (
          <xref ref-type="bibr" rid="R61">15</xref>
          ) or Kub&#xED;&#x10D;ek,
&#x160;a&#x161;inka (
          <xref ref-type="bibr" rid="R62">16</xref>
          )).</p>
		  
        <p>Haeberling (
          <xref ref-type="bibr" rid="R63">17</xref>
          ) evaluated the design variables
employed in 3D maps (camera angle and distance, the
direction of light, sky settings and the amount of haze).
Petrovi&#x10D; and Ma&#x161;era (
          <xref ref-type="bibr" rid="R64">18</xref>
          ) used a questionnaire to
determine user preferences between 2D and 3D maps.
Participants of their study had to decide which type of map they
would use to solve four tasks: measuring distances,
comparing elevation, determining the direction of north, and
evaluating the direction of tilt. Results of the study of
Petrovi&#x10D; and Ma&#x161;era (
          <xref ref-type="bibr" rid="R64">18</xref>
          ) showed that 3D maps are better
for estimating elevation and orientation than their 2D
equivalents, but 3D maps may cause potential problems
for distance measuring.</p>

        <p>Savage, Wiebe (
          <xref ref-type="bibr" rid="R65">19</xref>
          ) tried to answer the question
whether using 3D perspective views has an advantage over
using traditional 2D topographic maps. Participants were
randomly divided into two groups and asked to solve
spatial tasks with a 2D or a 3D map. The results of the study
showed no advantage in using 3D maps for tasks that
involved estimating elevation. Additionally, in tasks where
it was not necessary to determine an object&#x2019;s elevation
(e.g. measuring distances), the 3D variant was not as good.</p>

        <p>User testing of 3D interactive virtual environments is
relatively scarce. One of the few articles describing such
an environment is presented by Wilkening and Fabrikant
(
          <xref ref-type="bibr" rid="R66">20</xref>
          ). Using the Google Earth application, they monitored
the proportion of applied movement types &#x2013; zoom, pan,
tilt, and rotation. Bleisch, Burkhard (
          <xref ref-type="bibr" rid="R67">21</xref>
          ) assessed the 3D
visualization of abstract numeric data. Although speed and
accuracy were measured, no information about navigation
in 3D space was recorded in this study. Lokka and
&#xC7;&#xF6;ltekin (
          <xref ref-type="bibr" rid="R68">22</xref>
          ) investigated memory capacity in the context
of navigating a path in a virtual 3D environment. They
observed the differences between age groups.</p>

        <p>Previous studies (
          <xref ref-type="bibr" rid="R66 R69 R70">20, 23, 24</xref>
          ) indicate that there are
considerable differences between individuals in how they read
maps, especially in the strategies and procedures used to
determine an answer to a question. Eye-tracking facilitates
the study of such map reading strategies.</p>
      </sec>
	  
      <sec id="S1c">
        <title>Eye-tracking in Cartography</title>
		
        <p>Although eye-tracking to study maps was first used in
the late 1950s, it has seen increased use over the last ten to
fifteen years. Probably the first eye-tracking study for
evaluating cartographic products was the study of Enoch
(
          <xref ref-type="bibr" rid="R71">25</xref>
		  ), who used simple maps drawn on a background
of aerial images as stimuli. Steinke (
          <xref ref-type="bibr" rid="R72">26</xref>
		  ) presented one of the
first published summaries about the application of
eye-tracking in cartography. He compiled the results of former
research and highlighted the importance of distinguishing
between the perceptions of user groups of different age or
education.</p>

        <p>Today, several departments in Europe and the USA
conduct eye-tracking research in cartography (
          <xref ref-type="bibr" rid="R73">27</xref>
		  ). In
Olomouc, Czech Republic, eye-tracking has been used to
study the output of landscape visibility analyses (
          <xref ref-type="bibr" rid="R74">28</xref>
		  ) and
to investigate cartographic principles (
          <xref ref-type="bibr" rid="R75">29</xref>
		  ). In Zurich,
Switzerland, Fabrikant, Rebich-Hespanha (
          <xref ref-type="bibr" rid="R76">30</xref>
		  ) evaluated a
series of maps expressing the evolution of phenomenon over
time and weather maps (
          <xref ref-type="bibr" rid="R77">31</xref>
		  ). &#xC7;&#xF6;ltekin from the same
university analyzed users&#x2019; visual analytics strategies (
          <xref ref-type="bibr" rid="R78">32</xref>
		  ). In
Ghent, Belgium, paper and digital topographic maps were
compared (
          <xref ref-type="bibr" rid="R69">33</xref>
          ) and differences in attentive behavior
between novice and expert map users were analyzed (
          <xref ref-type="bibr" rid="R80">34</xref>
          ).
Ooms, &#xC7;&#xF6;ltekin (
          <xref ref-type="bibr" rid="R81">35</xref>
          ) proposed a methodology for
combining eye-tracking with user logging to reference
eye-movement data to geographic objects. This approach is
similar to ours, but a dynamic map is used instead of a 3D
model.</p>
      </sec>
	  
      <sec id="S1d">
        <title>Eye-tracking to assess 3D visualization</title>
		
        <p>The issue of 3D visualization on maps has so far only
been addressed marginally. At Texas State University,
Fuhrmann, Komogortsev (
          <xref ref-type="bibr" rid="R82">36</xref>
          ) evaluated the
differences in how a traditional topographic map and its 3D
holographic equivalent were perceived. Participants were
asked to suggest an optimal route. Analysis of the
eye-tracking metrics showed that the holographic map was
the better option.</p>

        <p>One of the first and more complex studies dealing with
eye-tracking and the evaluation of 3D maps is the study by
Putto, Kettunen (
          <xref ref-type="bibr" rid="R83">37</xref>
          ). In this study, the impact of three
types of terrain visualization was evaluated while participants
were required to solve four tasks (visual search, area selection,
and route planning). The shortest average length of
fixation was observed for the shaded relief, indicating that this
method is the easiest for users.</p>

        <p>Eye-tracking for evaluating 3D visualization in
cartography is widely used at Palack&#xFD; University in Olomouc,
Czech Republic, with studies examining the differences in
how 3D relief maps are perceived (
          <xref ref-type="bibr" rid="R84">38</xref>
          ), 3D maps of cities
(
          <xref ref-type="bibr" rid="R85">39</xref>
          ), a 3D model of an extinct village (
          <xref ref-type="bibr" rid="R86">40</xref>
          ), and tourist
maps with hill-shading (
          <xref ref-type="bibr" rid="R87">41</xref>
          ) being produced there. These
studies showed that it is not possible to generalize the
results and state that 3D is more effective than 2D or vice
versa. The effectiveness of a visualization depends on the exact
type of stimuli and also on the task.</p>

        <p>In all these studies static images were used as stimuli.
Nevertheless, the main advantage of 3D models is being
able to manipulate them (pan, zoom, rotate). An analysis
of eye-tracking data measured on interactive stimuli is
costly, as eye-trackers produce video material with
overlaid gaze-cursors and any classification of fixations
requires extensive manual effort (
          <xref ref-type="bibr" rid="R88">42</xref>
          ). Eye-tracking studies
dealing with interactive 3D stimuli typically comprise a
time-consuming frame-by-frame analysis of captured
screen recordings with superimposed scanpaths. One of
the few available gaze visualization techniques for 3D
contexts is the representation of fixations and saccades as 3D
scanpaths (
          <xref ref-type="bibr" rid="R89">43</xref>
          ). A challenge with 3D stimuli is mapping
fixations onto the correct geometrical model of the
stimulus (
          <xref ref-type="bibr" rid="R90">44</xref>
          ).</p>
		  
        <p>Several attempts exist to analyze eye-tracking data
recorded during work with interactive 3D stimuli. Probably
the most extensive work has been done by Stellmach, who
developed a tool called SWEETER &#x2013; a gaze analysis tool
adapted to the Tobii eye-tracker system and the XNA
Framework. SWEETER offers a coherent framework for loading
3D scenes and corresponding gaze data logs, as well as
deploying adapted gaze visualization techniques (
          <xref ref-type="bibr" rid="R91">45</xref>
          ).</p>
		  
        <p>Another method for visualizing the gaze data of
dynamic stimuli was developed by Ramloll, Trepagnier (
          <xref ref-type="bibr" rid="R92">46</xref>
          ).
It is especially useful for 3D objects on retail sites allowing
shoppers to examine products as interactive,
non-stereoscopic 3D objects on 2D displays. In this approach, each
gaze position and fixation point is mapped to a 3D object&#x2019;s
relevant polygon. A 3D object is then flattened and
overlaid with the appropriate gaze visualizations. The
advantage of this flattening is that the output can be
reproduced on a 2D static medium (i.e. paper).</p>

        <p>Both approaches used a remote eye-tracker to record
data. Pfeiffer (
          <xref ref-type="bibr" rid="R88">42</xref>
          ) used a head-mounted eye-tracking
system by Arrington Research. This study extended recent
approaches of combining eye-tracking with motion capture,
including holistic estimations of the 3D point of regard. In
addition, he presented a refined version of 3D attention
volumes for representing and visualizing attention in 3D
space.</p>

        <p>Duchowski, Medlin (
          <xref ref-type="bibr" rid="R93">47</xref>
          ) developed an algorithm for
binocular eye-tracking in virtual reality, which is capable
of calculating the three-dimensional virtual coordinates of
the viewer&#x2019;s gaze.</p>

        <p>A head-mounted eye-tracker from the SMI was used in
the study of Baldauf, Fr&#xF6;hlich (
          <xref ref-type="bibr" rid="R94">48</xref>
          ), who developed the
application KIBITZER &#x2013; a wearable gaze-sensitive system to
explore urban surroundings. The eye-tracker is connected
via a smartphone and the user&#x2019;s eye-gaze is analyzed to
scan the visible surroundings for georeferenced digital
information. The user is informed about points of interest in
his or her current gaze direction.</p>

        <p>SMI glasses were also involved in the work of Paletta,
Santner (
          <xref ref-type="bibr" rid="R95">49</xref>
          ), who used them in combination with
Microsoft Kinect. A 3D model of the environment was
acquired with Microsoft Kinect and gaze positions captured
by the SMI glasses were mapped onto the 3D model.</p>

        <p>Unfortunately, all the presented approaches work with
specific types of device and are not generally available to
the public. For this reason, we decided to develop our own
application called 3DgazeR (3D Gaze Recorder).
3DgazeR can place recorded raw data and fixations into
the 3D model&#x2019;s coordinate system. The application works
primarily with geographical 3D models (DEM &#x2013; Digital
Elevation Models in our pilot study). The majority of the
case study results are visualized in the open source
Geographic Information System QGIS. The application works
with data from an SMI RED 250 device and a low-cost
EyeTribe eye-tracker. The EyeTribe is connected to the
open source application OGAMA. Many different
eye-trackers can be connected to OGAMA, and our tool will
then work with their data.</p>
      </sec>
    </sec>
	
    <sec id="S2">
      <title>Methods</title>
	  
      <p>We designed and implemented our own experimental
application, 3DgazeR, due to the unavailability of tools
allowing eye-tracking while using interactive 3D stimuli.
The main function of this instrument is to calculate the 3D
coordinates (X, Y, Z coordinates of the 3D scene) for
individual points of view. These 3D coordinates can be
calculated from the values of the position and orientation of
a virtual camera and the 2D coordinates of the gaze on the
screen. The 2D screen coordinates are obtained from the
eye-tracking system, and the position and orientation of the
virtual camera are recorded with the 3DgazeR tool (see
Figure 1).</p>

<fig id="fig01" fig-type="figure" position="float">
					<label>Fig. 1</label>
					<caption>
						<p>Schema of 3DgazeR modules.</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-10-03-b-figure-01.png"/>
				</fig>

      <p>3DgazeR incorporates a modular design. The three modules are:</p>
	  
	  <p>&#x2022; <bold>Data acquisition module</bold></p>
	  <p>&#x2022; <bold>Connecting module</bold> to combine the virtual camera
data and eye-tracking system data</p>
	  <p>&#x2022; <bold>Calculating module</bold> to calculate 3D coordinates</p>	  
      
        <p>The modular design reduces computational complexity
for data acquisition. Data for gaze position and virtual
camera position and orientation are recorded
independently. Combining the data and calculating 3D
coordinates is done in the post-processing phase. Splitting the
modules for combining data and calculating 3D
coordinates allows information from different eye-tracking
systems (SMI RED, EyeTribe) and various types of data (raw
data, fixations) to be processed.</p>

      <p>All three modules constituting 3DgazeR use only open
web technologies: HTML (HyperText Markup Language),
PHP (Hypertext Preprocessor), JavaScript, jQuery, and the
JavaScript library X3DOM for rendering 3D graphics. The
X3DOM library was chosen because of its broad support in
commonly used web browsers, its accessible documentation,
and the availability of software to create stimuli.
X3DOM uses the X3D (eXtensible 3D) structure format and
is built on HTML5, JavaScript, and WebGL. The current
implementation of X3DOM uses a so-called fallback model
that renders 3D scenes through an InstantReality plug-in,
a Flash11 plug-in, or WebGL, so no specific plug-in is
needed to run X3DOM. X3DOM is free for both
non-commercial and commercial use (
        <xref ref-type="bibr" rid="R96">50</xref>
        ). Common JavaScript events,
such as <italic>onclick</italic> on 3D objects, are supported in X3DOM. A
runtime API is also available and provides a proxy object
for reading and modifying runtime parameters
programmatically. The API functions serve for interactive navigation,
resetting views, or changing navigation modes. X3D data
can be stored in an HTML file or as part of external files;
the two are combined via an inline element.
Particular X3D elements can be clearly distinguished through
their DEF attribute, which is a unique identifier. Other
principles and advantages of X3DOM are described in (
        <xref ref-type="bibr" rid="R96 R97 R98 R99">50-53</xref>
        ).</p>
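<p>Because the intersected scene element is later reported via its DEF identifier, DEF values can serve as keys when mapping gaze records back to scene objects. The following is a minimal illustrative sketch (a simple scan of X3D markup, not part of the actual 3DgazeR code):</p>

```javascript
// Illustrative sketch (not actual 3DgazeR code): collect the DEF
// identifiers from a fragment of X3D markup. DEF values are unique
// within a scene, so they can key a lookup table that maps gaze
// intersections back to scene objects.
function collectDefIds(x3dMarkup) {
  const ids = [];
  const re = /DEF="([^"]+)"/g;
  let match;
  while ((match = re.exec(x3dMarkup)) !== null) {
    ids.push(match[1]);
  }
  return ids;
}
```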
		
      <sec id="S2a">
        <title>Data acquisition module</title>
		
        <p>The data acquisition module is used to collect primary
data. Its main component is a window containing the 3D
model used as a stimulus. This 3D scene can be navigated
or otherwise manipulated. The rendering of virtual content
inside a graphics pipeline is the orthographic or
perspective projection of 3D geometry onto a 2D plane. The
parameters for this projection are usually defined by some
form of virtual camera. Only the main parameters, the position
and orientation of the virtual camera, are recorded in the
proposed solution. The position and orientation of the
virtual camera are recorded every 50 milliseconds (a recording
frequency of 20 Hz). The recording is performed using
functions from the X3DOM runtime API and JavaScript in
general. The recorded positions and orientations of the virtual
camera are sent every two seconds to a server and stored
in a CSV (Comma Separated Values) file by a PHP script.
The 3D scene loading time must be stored for
subsequent combination with the eye-tracking data; similarly,
the termination time of the 3D scene is stored. The interface is
designed as a full-screen 3D scene while input for answers
is provided on the following screen (after the 3D scene).</p>
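<p>The acquisition loop described above can be sketched as follows; <italic>samplePose</italic> and <italic>upload</italic> are placeholders standing in for the actual X3DOM runtime API call and the PHP endpoint, which are not specified here:</p>

```javascript
// Sketch of the acquisition loop: the camera pose is buffered on every
// tick (driven at 50 ms, i.e. 20 Hz, in the browser), and the buffer
// is flushed to the server every two seconds. samplePose() and
// upload() are placeholders for the X3DOM runtime API call and the
// PHP script that appends the batch to a CSV file.
function makeCameraRecorder(samplePose, upload, flushMs = 2000) {
  let buffer = [];
  let lastFlush = 0;
  return {
    tick(now) {
      // Record the current position and orientation of the virtual camera.
      buffer.push({ t: now, pose: samplePose() });
      if (now - lastFlush >= flushMs) {
        upload(buffer); // e.g. POST the batch to the server
        buffer = [];
        lastFlush = now;
      }
    },
  };
}
```

<p>In the browser, <italic>tick</italic> would be driven by a 50 ms timer, for example <monospace>setInterval(() =&gt; recorder.tick(performance.now()), 50)</monospace>.</p>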
      </sec>
	  
      <sec id="S2b">
        <title>Connecting module</title>
		
        <p>The connecting module combines two partial CSV files
based on timestamps. The first step is joining trimmed data
(from the eye-tracker and from the movement of the virtual
environment) by markers of the beginning and end of the
depiction of a 3D scene. The beginning in both records is
designated as time 0. Each record from the eye-tracker is
then assigned to the nearest previous recorded position of
the virtual camera (by timestamp), which is the simplest
method of joining temporal data and was straightforward to
implement. The maximum time deviation (inaccuracy) is
16.67 ms.</p>
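<p>This joining rule can be sketched as follows (illustrative field names, not the actual 3DgazeR CSV columns): both streams are rebased so that their first record lies at time 0, and each gaze sample receives the latest camera record whose timestamp does not exceed its own:</p>

```javascript
// Sketch of the connecting module's joining rule (illustrative field
// names): rebase both streams to start at time 0, then assign each
// gaze sample the nearest previous camera record by timestamp.
function rebase(records) {
  const t0 = records[0].t;
  return records.map(r => Object.assign({}, r, { t: r.t - t0 }));
}

function joinGazeToCamera(gazeRecords, cameraRecords) {
  const gaze = rebase(gazeRecords);
  const camera = rebase(cameraRecords);
  let i = 0; // index of the current camera record
  return gaze.map(g => {
    // Advance while the next camera record is still not later than the gaze sample.
    while (i + 1 !== camera.length) {
      if (camera[i + 1].t > g.t) break;
      i += 1;
    }
    return { t: g.t, x: g.x, y: g.y, camera: camera[i] };
  });
}
```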

<fig id="fig02" fig-type="figure" position="float">
					<label>Fig. 2</label>
					<caption>
						<p>Examples of eye-tracking data (left) and virtual camera movement data (right), and a schema of their connection.</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-10-03-b-figure-02.png"/>
				</fig>

        <p>Four variants of the connecting module were created &#x2013;
for SMI RED 250 and EyeTribe, and for raw data and
fixations. The entire connecting module is implemented in
JavaScript.</p>
      </sec>
	  
      <sec id="S2c">
        <title>Calculating module</title>
		
        <p>The calculating module comprises a similar window
and 3D model to those used in the data acquisition module. The same
screen resolution must be used as during the acquisition of
data. For every record, the intersection of the viewing ray
with the displayed 3D model is calculated. A 3D scene is
depicted with a virtual camera&#x2019;s input position and
orientation. The X3DOM runtime API function <italic>getViewingRay</italic>
and screen coordinates as input data are used for this
calculation. Setting and calculating the virtual camera&#x2019;s
parameters is automated using a FOR loop. The result is
a table containing timestamps, 3D scene coordinates (X,
Y, Z), the DEF element the ray intersects with, and
optionally, a normal vector to this intersection. If the user is not
looking at any particular 3D object, this fact is also
recorded, including whether the user is looking beyond the
dimensions of the monitor. This function is based on the ray
casting method (see Figure 3) and can be divided into three
steps:</p>

        <p>&#x2022; calculation of the direction of the viewing ray from
the virtual camera position, orientation and screen
coordinates (function <italic>calcViewRay</italic>);</p>
        <p>&#x2022; ray casting to the scene;</p>
        <p>&#x2022; finding the intersection with the closest object
(function <italic>hitPnt</italic>).</p>

        <p>For more information about ray casting see Hughes,
Van Dam (
          <xref ref-type="bibr" rid="R100">54</xref>
          ).</p>
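<p>The three steps can be illustrated with a self-contained sketch in which the scene is simplified to a set of horizontal planes; 3DgazeR itself delegates the ray construction and intersection to the X3DOM runtime API, so the functions below are an illustration, not the actual implementation:</p>

```javascript
// Step 1: build a viewing ray from an origin and a (normalized) direction.
function makeRay(origin, direction) {
  const len = Math.hypot(direction[0], direction[1], direction[2]);
  return { origin, dir: direction.map(c => c / len) };
}

// Steps 2 and 3: cast the ray into a simplified scene (horizontal
// planes z = const, each tagged with a DEF-like identifier) and keep
// the closest intersection in front of the camera. Returns null when
// the gaze misses every object, which 3DgazeR also records.
function castRay(ray, planes) {
  let best = null;
  for (const plane of planes) {
    const dz = ray.dir[2];
    if (1e-12 > Math.abs(dz)) continue; // ray parallel to the plane
    const t = (plane.z - ray.origin[2]) / dz;
    if (0 >= t) continue; // intersection behind the camera
    if (best === null || best.t > t) {
      best = {
        t,
        def: plane.def,
        point: [
          ray.origin[0] + t * ray.dir[0],
          ray.origin[1] + t * ray.dir[1],
          plane.z,
        ],
      };
    }
  }
  return best;
}
```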
		  
<fig id="fig03" fig-type="figure" position="float">
					<label>Fig. 3</label>
					<caption>
						<p>Principle of ray casting method for 3D scene coordinates calculation.</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-10-03-b-figure-03.png"/>
				</fig>		  
		  
        <p>For additional processing, analysis, and visualization
of the calculated data, GIS software is used: primarily the
open source program QGIS, although ArcGIS with 3D Analyst
and ArcScene (a 3D viewing application) can also be used.
We worked with QGIS version 2.12 with several
additional plug-ins, the most important being the Qgis2threejs
plug-in, which creates 3D models and exports terrain data,
map canvas images, and overlaid vector data to a web
browser supporting WebGL.</p>
      </sec>
    </sec>
	
    <sec id="S3">
      <title>Pilot study</title>
	  
      <p>Our pilot experiment was designed as exploratory
research. The primary goal of this experiment was to test the
possibilities of 3DgazeR in evaluating different methods
of visualization and analyzing eye-tracking data acquired
with an interactive 3D model.</p>

      <sec id="S3a">
        <title>Apparatus, tasks and stimuli</title>
		
        <p>For the testing, we chose a low-cost EyeTribe device.
Currently, the EyeTribe tracker is the most inexpensive
commercial eye-tracker in the world at a price of $99 (
<ext-link ext-link-type="uri" xlink:href="https://theeyetribe.com" xlink:show="new">https://theeyetribe.com</ext-link>).
Popelka, Stachon (
          <xref ref-type="bibr" rid="R101">55</xref>
          ) compared
the precision of the EyeTribe and the professional device
SMI RED 250. The results of the comparison show that the
EyeTribe tracker can be a valuable tool for cartographic
research. The eye-tracker was connected to OGAMA
software (
          <xref ref-type="bibr" rid="R102">56</xref>
          ). The device operated at a frequency of 30 Hz.
The EyeTribe also works at a frequency of 60 Hz; however,
saving information about camera orientation caused
problems at frequencies higher than 20 Hz. Some computer
setups were not able to store camera data correctly when
the frequency was higher than 20 Hz: the stored file was
shorter than the real recording because some rows were
omitted.</p>
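<p>The dropped-row problem can be detected with a simple completeness check, comparing the number of stored rows against the count expected from the recording duration and frequency; the 5% tolerance below is an illustrative value, not one used in the study:</p>

```javascript
// Sketch of a sanity check for dropped rows in a recorded log: at a
// nominal sampling frequency, a recording of a given duration should
// contain roughly duration * frequency rows. The tolerance (default
// 5%) is an illustrative threshold, not a value from the study.
function checkCompleteness(rowCount, durationSeconds, frequencyHz, tolerance = 0.05) {
  const expected = durationSeconds * frequencyHz;
  const missingRatio = (expected - rowCount) / expected;
  return { expected, missingRatio, ok: tolerance >= missingRatio };
}
```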

        <p>Two versions of the test were created &#x2013; variant A and
variant B. Each variant included eight tasks over almost
the same 3D models (differing only in the texture used).
The 3D models in variant A had no transparency, and the
terrain was covered with a hypsometric color scale (from
green to brown). The same hypsometric scale covered four
3D models in variant B, but transparency was set at 30%.
The second half of the models in variant B had no
transparency, but the terrain was covered with satellite images
from Landsat 8. The order of the tasks differed between the
two variants. A comparison of variant A and variant B for the
same task is shown in Figure 4. Four types of tasks were used:</p>

        <p>&#x2022; Which object has the highest elevation? (Variant A &#x2013;
tasks 1, 5; Variant B &#x2013; tasks 1, 2)</p>
        <p>&#x2022; Find the highest peak. (Variant A &#x2013; tasks 2, 6; Variant
B &#x2013; tasks 3, 4)</p>
        <p>&#x2022; Which elements are visible from the given position?
(Variant A &#x2013; tasks 3, 7; Variant B &#x2013; tasks 5, 6)</p>
        <p>&#x2022; From which positions a given object is visible?
(Variant A &#x2013; tasks 4, 8; Variant B &#x2013; tasks 7, 8)</p>


        <p>The first two tasks had only one correct answer, while
the other two had one or more correct answers.</p>

<fig id="fig04" fig-type="figure" position="float">
					<label>Fig. 4</label>
					<caption>
						<p>An example of stimuli from variant A – terrain covered with a hypsometric scale (left) and variant B – terrain covered with a satellite image (right).</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-10-03-b-figure-04.png"/>
				</fig>
      </sec>
	  
      <sec id="S3b">
        <title>Design and participants</title>
		
        <p>Before the pilot test, we decided that 20 participants
would be tested on both variants, with an interval of at least
three days between the two testing sessions. Participants were
thus recorded on both variants without being influenced by a
learning effect when performing the second variant of the
test.</p>

        <p>Half of the participants were students of the
Department of Geoinformatics with cartographic knowledge;
the other half were cartographic novices. Half of the
participants were men and half were women. The age range
was 18&#x2013;32 years.</p>

        <p>Screen resolution was 1600 x 900, and the sampling
frequency was set to 30 Hz. Each participant was seated at an
appropriate distance from the monitor, and the eye-tracking
device was calibrated with 16 points. Only calibration results
of Perfect or Good (on the scale used in OGAMA) were
accepted. An external keyboard, controlled by a researcher,
was connected to the laptop to start and end the tasks (F2 to
start and F3 to end). The participant performed the test using
only a common PC mouse.</p>

        <p>The experiment began with calibrating the device in
the OGAMA environment. After that, participants filled in
their ID and other personal information such as age, sex,
etc. The experiment was captured as a screen recording.</p>

        <p>The experiment was prepared as individual HTML pages
containing questions, tasks, 3D models, and input screens for
answers. The names of the CSV files in which the virtual
camera movement was recorded matched the task names in the
eye-tracking experiment subsequently created in OGAMA,
which allowed the two data sources to be combined correctly
in the connecting module.</p>

        <p>As recording began, a page with initial information
about the experiment appeared. The experiment ran in
Google Chrome in full-screen mode and is available (in
Czech) at 
<ext-link ext-link-type="uri" xlink:href="http://eyetracking.upol.cz/3d/" xlink:show="new">http://eyetracking.upol.cz/3d/</ext-link>. 
Each task was limited to 60 seconds, and the whole experiment
lasted approximately 10 to 15 minutes. A longer total
experiment time may affect user performance: evidence from
previous experiments showed that when a recording exceeds
approximately 20 minutes, participants become tired and
lose concentration.</p>

        <p>Care was taken with the correct starting time for tasks.
A screen with a target symbol appeared after the 3D model
had loaded. The participant switched to the task by pressing
the F2 key. This keypress was recorded by OGAMA and used
by 3DgazeR to divide the recording according to task. After
that, the participant could manipulate the 3D model to try to
discover the correct answer. The participant then pressed F3,
and a screen with a selection of answers appeared.</p>
      </sec>
	  
      <sec id="S3c">
        <title>Recording, data processing and validation</title>
		
        <p>It is necessary to store the data for each task separately
and, if needed, to check or manually modify them (e.g. to
delete unnecessary lines at the end of a recording). The data
are then processed in the connecting module, where data from
the eye-tracking device are combined with the virtual camera
movement. The output is then sent to the calculation module,
which must be switched to full-screen mode; the calculation
must take place at the same screen resolution as the testing.
The output should then be modified for import into GIS
software and visualized. For example, the format of the time
column had to be modified into the form required to
subsequently create an animation.</p>
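        <p>The connecting step can be illustrated as a nearest-timestamp join between the gaze log and the camera log. This is only a hedged sketch of the principle (the function and the sample data are hypothetical, not the actual 3DgazeR code):</p>
<preformat>
```python
import bisect

def join_nearest(gaze_times, cam_times, cam_poses):
    """For each gaze timestamp, pick the camera pose whose
    timestamp is closest (cam_times must be sorted ascending)."""
    joined = []
    for t in gaze_times:
        i = bisect.bisect_left(cam_times, t)
        # compare neighbours i-1 and i, clamped to the valid range
        best = min(
            (j for j in (i - 1, i) if 0 <= j < len(cam_times)),
            key=lambda j: abs(cam_times[j] - t),
        )
        joined.append((t, cam_poses[best]))
    return joined

cam_t = [0, 100, 200]
poses = ["poseA", "poseB", "poseC"]
print(join_nearest([10, 140, 260], cam_t, poses))
# [(10, 'poseA'), (140, 'poseB'), (260, 'poseC')]
```
</preformat>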

        <p>These adjusted data can be imported into QGIS. CSV data
are loaded and displayed here using the Create a Layer from
a Delimited Text File dialog. The retrieved data can be stored
in GML (Geography Markup Language) or Shapefile as point
layers. After the export and re-render of this new layer above
the 3D model, it is possible that some data may have the
wrong elevation (see Figure 5). This distortion occurs when
the 3D model is rotated while eyes are simultaneously
focused on a specific place, or when the model is rotated, and
eyes follow it with smooth pursuit. To remove these distortions
and fit the eye-tracking data exactly onto the model, the
Point Sampling Tool plug-in for QGIS was used.</p>
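        <p>The correction performed by the Point Sampling Tool amounts to replacing each point&#x2019;s elevation with the DEM value beneath it. A minimal sketch of this idea (grid origin, cell size, and elevation values are hypothetical):</p>
<preformat>
```python
def sample_dem(dem, x, y, origin=(0.0, 0.0), cell=25.0):
    """Return the elevation of the DEM cell under (x, y);
    dem is a row-major grid of elevations."""
    col = int((x - origin[0]) // cell)
    row = int((y - origin[1]) // cell)
    return dem[row][col]

dem = [[100, 110],
       [120, 130]]
# a point at x=30, y=10 falls into column 1, row 0
print(sample_dem(dem, 30.0, 10.0))  # 110
```
</preformat>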

<fig id="fig05" fig-type="figure" position="float">
					<label>Fig. 5</label>
					<caption>
						<p>Raw data displayed as a layer in GIS software (green points – calculated 3D gaze data; red – points with incorrect elevation).</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-10-03-b-figure-05.png"/>
				</fig>
      </sec>
	  
      <sec id="S3d">
        <title>Evaluation of the data validity</title>
		
        <p>To evaluate the validity of 3DgazeR output,
we created a short animation of a 3D model with one
red sphere in the middle. The diameter of the sphere was
approximately 1/12 of the 3D model width. In the
beginning, the sphere was located in the middle of the screen.
After five seconds, the camera changed its position (the
movement took two seconds), and the sphere moved to the
upper left side of the screen. The camera stayed there for six
seconds and then moved again, so the sphere was displayed in
the next corner of the screen. This process was repeated for
all four corners of the screen. The validation study is available at
<ext-link ext-link-type="uri" xlink:href="http://eyetracking.upol.cz/3d/" xlink:show="new">http://eyetracking.upol.cz/3d/</ext-link>.</p>

        <p>The task of the participant was to look at the sphere at
all times. The validation was performed on five participants.
Recorded data were processed in the connecting and
calculation modules of 3DgazeR. To evaluate the data
validity, we analyzed how many data samples
were assigned to the sphere. Average values for all five
participants are displayed in Figure 6. Each bar in the
graph represents one camera position (or movement). The
blue color corresponds to the data samples where the gaze
coordinates were assigned to the sphere; the red color is
used when the gaze was recorded outside the sphere. It is
evident that inaccuracies were observed for the first
position of the sphere because it took participants some time
to find it. A similar problem was found when the
first movement appeared. Later, the percentage of samples
recorded outside the sphere is minimal. In total, the average
share of samples recorded outside the sphere is 3.79%.
These results showed that the tool works correctly and that
the inaccuracies are caused by the inability of the respondents
to keep their eyes focused on the sphere, which was verified
by watching the video recordings in OGAMA.</p>
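        <p>The share of samples outside the target can be computed directly from the calculated 3D gaze coordinates. An illustrative sketch (sphere center, radius, and sample values are hypothetical):</p>
<preformat>
```python
import math

def share_outside_sphere(samples, center, radius):
    """Fraction of 3D gaze samples falling outside a target sphere."""
    outside = sum(1 for p in samples if math.dist(p, center) > radius)
    return outside / len(samples)

center, r = (0.0, 0.0, 0.0), 1.0
samples = [(0.1, 0.0, 0.0), (0.5, 0.5, 0.5),
           (2.0, 0.0, 0.0), (0.0, 0.9, 0.0)]
print(share_outside_sphere(samples, center, r))  # 0.25
```
</preformat>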

<fig id="fig06" fig-type="figure" position="float">
					<label>Fig. 6</label>
					<caption>
						<p>Evaluation of the data validity. Red color corresponds to the data samples where gaze was not recorded on the target sphere.</p>
					</caption>
					<graphic id="graph06" xlink:href="jemr-10-03-b-figure-06.png"/>
				</fig>
      </sec>
    </sec>
	
    <sec id="S4">
      <title>Results</title>
	  
      <p>Visualization techniques allow researchers to analyze
different levels and aspects of recorded eye tracking data
in an exploratory and qualitative way. Visualization
techniques help to analyze the spatio-temporal aspect of eye
tracking data and the complex relationships it contains
(
        <xref ref-type="bibr" rid="R90">44</xref>
        ). We decided to use both fixations and raw data
for visualization. 3D alternatives to the usual methods of
eye-tracking data visualization were created, and other
methods suitable for visualizing 3D eye-tracking data were
explored. The following visualization methods were tested:</p>

      <p>&#x2022; 3D raw data</p>
      <p>&#x2022; 3D scanpath (fixations and saccades)</p>
      <p>&#x2022; 3D attention map</p>
      <p>&#x2022; Animation</p>
      <p>&#x2022; Graph of Z coordinate variation over time.</p>

      <sec id="S4a">
        <title>3D raw data</title>
		
        <p>First, we tried to visualize raw data as simple points
placed on a 3D surface. This method is very simple, but its
main disadvantage is poor legibility, mainly in areas with a
high density of points. The size, color, and transparency of
symbols can be set in the GIS software used. With this type
of visualization, data from
different groups of participants can be compared, as shown
in Figure 7. Raw data displayed as points were used as input
for creating other types of visualizations. Figure 7 shows the
3D visualization of raw data created in QGIS. Visualization
of a large number of points in the 3D scene in a web browser
through Three.js is hardware demanding. Thus,
visualization of raw data is more effective in ArcScene.</p>

<fig id="fig07" fig-type="figure" position="float">
					<label>Fig. 7</label>
					<caption>
						<p>Comparison of 3D raw data (red points – females, blue points – males) for variant B, task 6.</p>
					</caption>
					<graphic id="graph07" xlink:href="jemr-10-03-b-figure-07.png"/>
				</fig>
      </sec>
	  
      <sec id="S4b">
        <title>3D scanpath</title>
		
        <p>The usual approach for depicting eye-tracking data is
scanpath visualization superimposed on a stimulus
representation. Scanpaths show the eye-movement trajectory by
drawing connected lines (saccades) between subsequent
fixation positions. A traditional spherical representation of
fixations was chosen, but (
          <xref ref-type="bibr" rid="R91">45</xref>
          ) also demonstrated other
types of representation: cones can be used to represent
fixations, or viewpoints and view directions for camera paths.</p>

        <p>The size of each sphere was determined by the length
of the corresponding fixation. Fixations were detected in the
OGAMA environment using the I-DT algorithm, with
thresholds set to a maximum dispersion of 30 px and a
minimum of three samples per fixation. Transparency of 30%
was set because of overlaps. In the next step, we created 3D
saccades linking the fixations. The PointConnector plug-in
in QGIS was used for this purpose.</p>
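        <p>The principle of I-DT dispersion-based fixation detection can be sketched as follows. This is a simplified illustration (dispersion measured as the sum of horizontal and vertical extents), not the OGAMA implementation:</p>
<preformat>
```python
def idt_fixations(points, max_dispersion=30, min_samples=3):
    """I-DT sketch: grow a window while the dispersion
    (max_x - min_x) + (max_y - min_y) stays within the threshold;
    windows of at least `min_samples` samples become fixations."""
    fixations, start = [], 0
    for end in range(1, len(points) + 1):
        xs = [p[0] for p in points[start:end]]
        ys = [p[1] for p in points[start:end]]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            if end - 1 - start >= min_samples:
                fixations.append((start, end - 1))  # samples [start, end-1)
            start = end - 1
    if len(points) - start >= min_samples:
        fixations.append((start, len(points)))
    return fixations

pts = [(0, 0), (1, 1), (2, 0), (100, 100), (101, 101)]
print(idt_fixations(pts))  # [(0, 3)] -> samples 0..2 form one fixation
```
</preformat>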

        <p>This visualization method is quite clear. It provides an
overview of the duration of individual fixations, their
position, and relation to each other. It tells where the
participant&#x2019;s gaze lingered and where it stayed only briefly. Lines
indicate if a participant skipped between remote locations
and back or if the observation of the stimulus was smooth.
The scanpath from one participant solving variant A, task
4 is shown in Figure 8. From the length of fixations, it is
evident that the participant observed locations near
spherical bodies defining the target points crucial for task
solving. His gaze shifted progressively from target to target,
whereby the red target attracted the most attention.</p>

<fig id="fig08" fig-type="figure" position="float">
					<label>Fig. 8</label>
					<caption>
						<p>Scanpath (3D fixations and saccades) of one user for variant A, task 4. Interactive version is available at <ext-link ext-link-type="uri" xlink:href="http://eyetracking.upol.cz/3d/" xlink:show="new">http://eyetracking.upol.cz/3d/</ext-link>.</p>
					</caption>
					<graphic id="graph08" xlink:href="jemr-10-03-b-figure-08.png"/>
				</fig>
      </sec>
	  
      <sec id="S4c">
        <title>3D Attention Map</title>
		
        <p>Visual gaze analysis in three-dimensional virtual
environments still lacks the methods and techniques for
aggregating attentional representations. Stellmach, Nacke (
          <xref ref-type="bibr" rid="R91">45</xref>
          )
introduced three types of attention maps suitable for 3D
stimuli &#x2013; projected, object-based, and surface-based
attention maps. In Digital Elevation Models, the use of
projected attention maps is the most appropriate. Object-based
attention maps, which are relatively similar to the concept
of Areas of Interest, can also be used for eye-tracking
analysis of interactive 3D models with 3DgazeR. In this case,
stimuli must contain predetermined components (objects)
with unique identifiers (attribute DEF in X3DOM library).</p>

        <p>Projected attention maps can be created in the ArcScene
environment or in QGIS using the Heatmap plug-in.
Heatmap calculates the density of features (in our case
fixations) in a neighborhood around those features;
conceptually, a smoothly curved surface is fitted over each
point. The important factors for creating a heatmap are grid
cell size and search radius. We used a cell size of 25 m
(about one thousandth of the terrain model size) and the
default search radius (see Figure 9).</p>
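        <p>The heatmap calculation can be sketched as a kernel density estimate on a regular grid. The kernel shape, grid size, and coordinates below are illustrative only, not the exact QGIS implementation:</p>
<preformat>
```python
import math

def heatmap(points, cell=25.0, radius=50.0, size=(4, 4)):
    """Kernel density on a regular grid: each fixation adds a
    quartic kernel within `radius` of its location (a simplified
    analogue of the QGIS Heatmap plug-in)."""
    rows, cols = size
    grid = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # cell centre coordinates
            cx, cy = (c + 0.5) * cell, (r + 0.5) * cell
            for px, py in points:
                d = math.hypot(px - cx, py - cy)
                if d < radius:
                    grid[r][c] += (1 - (d / radius) ** 2) ** 2
    return grid

g = heatmap([(60.0, 60.0)])
peak = max((v, r, c) for r, row in enumerate(g) for c, v in enumerate(row))
print(peak[1], peak[2])  # 2 2 -> density peaks in the cell under the fixation
```
</preformat>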

<fig id="fig09" fig-type="figure" position="float">
					<label>Fig. 9</label>
					<caption>
						<p>Comparison of 3D attention maps from cartographers (left) and non-cartographers (right) for variant B, task 6. Interactive versions are available at <ext-link ext-link-type="uri" xlink:href="http://eyetracking.upol.cz/3d/" xlink:show="new">http://eyetracking.upol.cz/3d/</ext-link>.</p>
					</caption>
					<graphic id="graph09" xlink:href="jemr-10-03-b-figure-09.png"/>
				</fig>

        <p>The advantage of projected attention maps is their
clarity for visualization of a large amount of data. In a
Geographic Information System, the exact color scheme of the
attention map can be defined (with minimum and
maximum values).</p>

        <p>An interesting result was obtained from task 6, variant
B. Figure 9 compares the resultant attention maps from
participants with cartographic knowledge with those from
the general public. For cartographers, the most important
part of the terrain was around the blue cube. Participants
without cartographic knowledge focused on other objects
in the terrain. An interpretation of this behavior could be
that the cartographers were consistent with the task and
looked at the blue cube from different areas. By contrast,
novices used the opposite approach and investigated which
objects were visible from the blue cube&#x2019;s position.</p>
      </sec>
	  
      <sec id="S4d">
        <title>Animation</title>
		
        <p>A suitable tool for evaluating user strategies is
animation. Creating an animation with a 3D model is not
possible in QGIS software, so we used ArcScene (with the
function Create Time Animation) for this purpose. The model
can also be rotated during the animation, providing
interactivity from data acquisition through to final analysis.
Animations can be used to study fixations of individuals
or to compare several users. Animations can be exported
from ArcScene as video files (e.g. AVI), but they then
lose their interactivity. AVI files exported from ArcScene
are available at 
<ext-link ext-link-type="uri" xlink:href="http://eyetracking.upol.cz/3d/" xlink:show="new">http://eyetracking.upol.cz/3d/</ext-link>. A similar
method to animation is taking screenshots, which can
alternatively be used in the qualitative (manual) analysis
of critical task-solving moments, such as the end of a task
or entering an answer.</p>
      </sec>
	  
      <sec id="S4e">
        <title>Graph</title>
		
        <p>When analyzing 3D eye-tracking data, it would be
appropriate to concentrate on analyzing the Z coordinate
(height). From the data recorded with 3DgazeR, the Z
coordinate&#x2019;s changes over time can be displayed, so the
elevations the participants looked at in the model during
the test can be investigated. Data from the program
ArcScene were exported into a DBF table and then
analyzed in OpenOffice Calc. A scatter plot with data points
connected by lines should be used here. A graph for one
participant (see Figure 10) or multiple participants can be
created. A graph of Z coordinate raw data values was
created in this case.</p>
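        <p>Deriving, for example, the moment at which the highest elevation was observed is a simple reduction over the (time, Z) series. A sketch with hypothetical values:</p>
<preformat>
```python
def z_profile_peak(samples):
    """From (time_s, z_m) gaze samples, return the time and value
    of the highest observed elevation."""
    t, z = max(samples, key=lambda s: s[1])
    return t, z

series = [(25.0, 410.0), (26.0, 655.0), (27.0, 892.0), (28.0, 640.0)]
print(z_profile_peak(series))  # (27.0, 892.0)
```
</preformat>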

        <p>It is apparent from this graph when participants
looked at higher ground or lowlands. In Figure 10, we can see
how the participant initially fluctuated between elevations of
the observed locations and focused on the highest point around
the 27<sup>th</sup> second of the task. In general, we conclude
that this participant studied the entire terrain quite carefully
and looked at a variety of low to very high elevations.</p>

<fig id="fig10" fig-type="figure" position="float">
					<label>Fig. 10</label>
					<caption>
						<p>Graph of observed elevations during task (variant A, task 4, participant no. 20).</p>
					</caption>
					<graphic id="graph10" xlink:href="jemr-10-03-b-figure-10.png"/>
				</fig>
      </sec>
    </sec>
	
    <sec id="S5">
      <title>Discussion</title>
	  
      <p>We developed our own testing tool, 3DgazeR, because
none of the software tools found through our literature review
was freely available: they worked only with specific devices
or had proprietary licenses and were not free or open source
software. 3DgazeR is freely available to interested parties
under a BSD license to fill this gap. An English version of
3DgazeR is available at
<ext-link ext-link-type="uri" xlink:href="http://eyetracking.upol.cz/3d/" xlink:show="new">http://eyetracking.upol.cz/3d/</ext-link>. 
Furthermore, 3DgazeR has several significant advantages:</p>

      <p>&#x2022; It permits evaluation of different types of 3D stimuli
because the X3DOM library is very flexible &#x2013; for an
overview of various 3D models displayed through
X3DOM see (
        <xref ref-type="bibr" rid="R97 R98 R99">51-53</xref>
        )</p>
      <p>&#x2022; It is based on open web technologies and thus an
inexpensive solution, and does not need special
software installed or plug-ins on the client or server
sides</p>
      <p>&#x2022; It combines open JavaScript libraries and PHP, and so
may be easily extended or modified</p>
      <p>&#x2022; It writes data into a CSV file, allowing easy analysis
under various commercial, freeware, and open source
programs.</p>

      <p>3DgazeR also demonstrates general approaches in
creating eye-tracking analyses of interactive 3D
visualizations. Some limitations of this testing tool, however, were
identified during the pilot test:</p>

      <p>&#x2022; A higher recording frequency of virtual camera
position and orientation in the data acquisition module
would allow greater precision during analysis</p>
      <p>&#x2022; Some of the calculated 3D gaze data (points) are not
correctly placed on a surface. This distortion happens
when the 3D model is rotated while eyes are
simultaneously focused on a specific place, or when the
model is rotated, and eyes track with a smooth motion.
A higher frequency in recording virtual camera position
and orientation can solve this problem</p>
      <p>&#x2022; Data processing is time-consuming and involves
manual effort. Automating this process and
developing tools to speed up data analysis and
visualization would greatly enhance its productivity.</p>

      <p>Future development of 3DgazeR should aim at
overcoming these limitations. Other possible extensions to our
methodology and the 3DgazeR tool have been identified:</p>

      <p>&#x2022; We want to modify 3DgazeR to support other types of
3D models (e.g. 3D models of buildings, machines, or
similar objects), and focus mainly on the design and
testing of such procedures to create 3D models
comprising individual parts marked with unique
identifiers (as mentioned above &#x2013; with a DEF attribute).
Such 3D models also allow us to create object-based
attention maps. The first trials in this direction are
already underway; they involve simple 3D models
that are predominantly created manually. This is
time-consuming and requires knowledge of XML
(eXtensible Markup Language) structure and X3D
format. We would like to simplify and automate this
process as much as possible in the future.</p>
      <p>&#x2022; We would like to increase the frequency of records of
position and orientation of the virtual camera,
especially during its movement. On the other hand,
when it is no user interaction (virtual camera position
is not changed at this time), it would be suitable to
decrease the frequency to reduce the size of created
CSV file. The ideal solution would be the recording
with adaptive frequency, depending on whether the
virtual camera is moving or not.</p>
      <p>&#x2022; We also want to improve the connecting module to
use more accurate method for joining data of the
movement of the virtual camera with data from the
eye-tracking system.</p>
      <p>&#x2022; We tested primarily open source software (QGIS,
OpenOffice Calc) for visualization of the results.
Creation of 3D animation was not possible in QGIS,
so commercial software ArcScene was used for this
purpose. ArcScene is also more effective for visualization
of raw data. We want to test the possibilities of advanced
statistical analysis in an open source program, e.g. R.</p>

      <p>3DgazeR enables each participant&#x2019;s strategy (e.g. Fig.
8 and Fig. 10) to be studied, pairs of participants compared,
and group strategies (e.g. Fig. 7 and Fig. 9) analyzed. In the
future, once the above adjustments and additions have
been included, we want to use 3DgazeR for complex
analysis of user interaction in virtual space and to compare
3D eye-tracking data with user interaction recordings introduced
by Herman and Stacho&#x148; (
        <xref ref-type="bibr" rid="R70">24</xref>
		). We would like to extend the
results of existing studies, e.g. (
        <xref ref-type="bibr" rid="R91">45</xref>
        ), in this manner.</p>
    </sec>
	
    <sec id="S6">
      <title>Conclusion</title>
	  
      <p>We created an experimental tool called 3DgazeR to
record eye-tracking data for interactive 3D visualizations.
3DgazeR is freely available to interested parties under
a BSD license. The main function of 3DgazeR is to
calculate 3D coordinates (X, Y, Z coordinates in the 3D scene)
for individual gaze points. These 3D coordinates can be
calculated from the values of the position and orientation
of a virtual camera and the 2D coordinates of the gaze upon
the screen. 3DgazeR works with both the SMI eye-tracker
and the low-cost EyeTribe tracker and can compute 3D
coordinates from raw data and fixations. The functionality of
3DgazeR was tested in a pilot study using terrain
models (DEMs) as stimuli. The purpose of this test was to
verify the functionality of the tool and discover suitable
methods of visualizing and analyzing recorded data. Five
visualization methods were proposed and evaluated: 3D
raw data, 3D scanpath (fixations and saccades), 3D
attention map (heat map), animation, and a graph of Z
coordinate variation over time.</p>
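      <p>The core of this calculation can be illustrated by intersecting the gaze ray with a horizontal plane, a flat stand-in for the terrain surface. 3DgazeR itself intersects the ray with the actual 3D model, and deriving the ray direction from the camera orientation and the 2D screen gaze is omitted here; the function below is only a hedged sketch:</p>
<preformat>
```python
def gaze_point_3d(cam_pos, gaze_dir, ground_z=0.0):
    """Intersect the gaze ray (camera position + direction) with
    a horizontal plane z = ground_z, a flat stand-in for the DEM."""
    if gaze_dir[2] == 0:
        return None  # ray parallel to the ground plane
    t = (ground_z - cam_pos[2]) / gaze_dir[2]
    if t < 0:
        return None  # intersection behind the camera
    return tuple(c + t * d for c, d in zip(cam_pos, gaze_dir))

# camera 100 m above the origin, gaze directed 45 degrees ahead and down
print(gaze_point_3d((0.0, 0.0, 100.0), (0.0, 1.0, -1.0)))
# (0.0, 100.0, 0.0)
```
</preformat>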
    </sec>
	
    <sec id="S7">
      <title>Acknowledgements</title>
	  
      <p>A special thank you to Lucie Bartosova, who
performed the testing and did a lot of work preparing data for
analysis. Lukas Herman is supported by Grant No.
MUNI/M/0846/2015, &#x201C;Influence of cartographic
visualization methods on the success of solving practical and
educational spatial tasks&#x201D; and Grant No. MUNI/A/
1419/2016, &#x201C;Integrated research on environmental
changes in the landscape sphere of Earth II&#x201D;, both awarded
by Masaryk University, Czech Republic. Stanislav
Popelka is supported by the Operational Program
Education for Competitiveness &#x2013; European Social Fund (project
CZ.1.07/2.3.00/20.0170 of the Ministry of Education,
Youth and Sports of the Czech Republic).</p>
    </sec>
  </body>
  <back>
<ref-list>
<ref id="R47"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Bleisch</surname> <given-names>S</given-names></string-name>, <string-name><surname>Burkhard</surname> <given-names>J</given-names></string-name> and <string-name><surname>Nebiker</surname> <given-names>S</given-names></string-name></person-group>. (<year>2009</year>) <article-title>Efficient Integration of data graphics into virtual 3D Environments.</article-title> <source>Proceedings of 24th International Cartography Conference</source>.</mixed-citation></ref>
<ref id="R48"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Gore</surname>, <given-names>A.</given-names></string-name></person-group> (<year>1998</year>). <article-title>The digital earth: understanding our planet in the 21st century.</article-title> <source>Australian surveyor</source>, <volume>43</volume>(<issue>2</issue>), <fpage>89</fpage>–<lpage>91</lpage>. doi: <pub-id pub-id-type="doi">10.1080/00050348.1998.10558728</pub-id></mixed-citation></ref>
<ref id="R49"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Çöltekin</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Lokka</surname>, <given-names>I.</given-names></string-name>, &amp; <string-name><surname>Zahner</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2016</year>). <article-title>On the Usability and Usefulness of 3D (Geo) Visualizations-A Focus on Virtual Reality Environments.</article-title> <source>The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences</source>, <volume>XLI</volume>(<supplement>B2</supplement>), <fpage>387</fpage>–<lpage>392</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.5194/isprs-archives-xli-b2-387-2016</pub-id> <pub-id pub-id-type="doi">10.5194/isprsarchives-XLI-B2-387-2016</pub-id><issn>1682-1750</issn></mixed-citation></ref>
<ref id="R50"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Hägerstraand</surname>, <given-names>T.</given-names></string-name></person-group> (<year>1970</year>). <article-title>What about people in regional science?</article-title> <source>Papers in Regional Science</source>, <volume>24</volume>(<issue>1</issue>), <fpage>7</fpage>–<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1111/j.1435-5597.1970.tb01464.x</pub-id><issn>1056-8190</issn></mixed-citation></ref>
<ref id="R51"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kveladze</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Kraak</surname>, <given-names>M.-J.</given-names></string-name>, &amp; <string-name><surname>van Elzakker</surname>, <given-names>C. P.</given-names></string-name></person-group> (<year>2013</year>). <article-title>A methodological framework for researching the usability of the space-time cube.</article-title> <source>The Cartographic Journal</source>, <volume>50</volume>(<issue>3</issue>), <fpage>201</fpage>–<lpage>210</lpage>. <pub-id pub-id-type="doi">10.1179/1743277413Y.0000000061</pub-id><issn>0008-7041</issn></mixed-citation></ref>
<ref id="R52"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Li</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Çöltekin</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Kraak</surname>, <given-names>M.-J.</given-names></string-name></person-group> (<year>2010</year>). <chapter-title>Visual exploration of eye movement data using the space-time-cube.</chapter-title> <source>Geographic Information Science</source>, <publisher-name>Springer</publisher-name>, <fpage>295</fpage>-<lpage>309</lpage>.</mixed-citation></ref>
<ref id="R53"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Wood</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Kirschenbauer</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Döllner</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Lopes</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Bodum</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2005</year>). <chapter-title>Using 3D in visualization</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>J.</given-names> <surname>Dykes</surname></string-name> (<role>Ed.</role>),</person-group> <source>Exploring Geovisualization</source> (pp. <fpage>295</fpage>–<lpage>312</lpage>). <publisher-name>Elsevier</publisher-name>. <pub-id pub-id-type="doi">10.1016/B978-008044531-1/50432-2</pub-id></mixed-citation></ref>
<ref id="R54"><mixed-citation publication-type="book" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Kraak</surname>, <given-names>M.</given-names></string-name></person-group> (<year>1988</year>) <source>Computer-assisted cartographical 3D imaging techniques</source>. <publisher-name>Delft University Press, 175</publisher-name>.</mixed-citation></ref>
<ref id="R55"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Haeberling</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2002</year>). <article-title>3D Map Presentation–A Systematic Evaluation of Important Graphic Aspects</article-title>. <source>Proceedings of ICA Mountain Cartography Workshop "Mount Hood"</source>, <fpage>1</fpage>-<lpage>11</lpage>.</mixed-citation></ref>
<ref id="R56"><mixed-citation publication-type="thesis" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Góralski</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Three-dimensional interactive maps : theory and practice.</article-title> <comment>Unpublished Ph.D. thesis. University of Glamorgan</comment>.</mixed-citation></ref>
<ref id="R57"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Ellis</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name><surname>Dix</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2006</year>). <article-title>An explorative analysis of user evaluation studies in information visualisation.</article-title> <source>Proceedings of the 2006 AVI workshop</source>, <fpage>1</fpage>-<lpage>7</lpage>. doi: <pub-id pub-id-type="doi">10.1145/1168149.1168152</pub-id></mixed-citation></ref>
<ref id="R58"><mixed-citation publication-type="book" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>MacEachren</surname>, <given-names>A. M.</given-names></string-name></person-group> (<year>2004</year>). <source>How maps work: representation, visualization, and design</source>. <publisher-name>Guilford Press, 513</publisher-name>.</mixed-citation></ref>
<ref id="R59"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Slocum</surname>, <given-names>T. A.</given-names></string-name>, <string-name><surname>Blok</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Jiang</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Koussoulakou</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Montello</surname>, <given-names>D. R.</given-names></string-name>, <string-name><surname>Fuhrmann</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Hedley</surname>, <given-names>N. R.</given-names></string-name></person-group> (<year>2001</year>). <article-title>Cognitive and usability issues in geovisualization.</article-title> <source>Cartography and Geographic Information Science</source>, <volume>28</volume>(<issue>1</issue>), <fpage>61</fpage>–<lpage>75</lpage>. <pub-id pub-id-type="doi">10.1559/152304001782173998</pub-id><issn>1523-0406</issn></mixed-citation></ref>
<ref id="R60"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Petchenik</surname>, <given-names>B. B.</given-names></string-name></person-group> (<year>1977</year>). <article-title>Cognition in cartography. Cartographica</article-title>. <source>The International Journal for Geographic Information and Geovisualization</source>, <volume>14</volume>(<issue>1</issue>), <fpage>117</fpage>–<lpage>128</lpage>. <pub-id pub-id-type="doi">10.3138/97R4-84N4-4226-0P24</pub-id></mixed-citation></ref>
<ref id="R61"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Staněk</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Friedmannová</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Kubíček</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Konečný</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Selected issues of cartographic communication optimization for emergency centers.</article-title> <source>International Journal of Digital Earth</source>, <volume>3</volume>(<issue>4</issue>), <fpage>316</fpage>–<lpage>339</lpage>. <pub-id pub-id-type="doi">10.1080/17538947.2010.484511</pub-id><issn>1753-8947</issn></mixed-citation></ref>
<ref id="R62"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kubíček</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Šašinka</surname>, <given-names>Č.</given-names></string-name>, <string-name><surname>Stachoň</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Štěrba</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Apeltauer</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Urbánek</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Cartographic Design and Usability of Visual Variables for Linear Features.</article-title> <source>The Cartographic Journal</source>, <volume>54</volume>(<issue>1</issue>), <fpage>91</fpage>–<lpage>102</lpage>. <pub-id pub-id-type="doi">10.1080/00087041.2016.1168141</pub-id><issn>0008-7041</issn></mixed-citation></ref>
<ref id="R63"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Haeberling</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2003</year>). <source>Topografische 3D-Karten-Thesen für kartografische Gestaltungsgrundsätze</source>. <publisher-name>ETH Zürich</publisher-name>; <pub-id pub-id-type="doi">10.3929/ethz-a-004709715</pub-id></mixed-citation></ref>
<ref id="R64"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Petrovič</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Mašera</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2004</year>). <article-title>Analysis of user’s response on 3D cartographic presentations</article-title>. <source>Proceedings of 7th meeting of the ICA Commission on Mountain Cartography</source>, <fpage>1</fpage>-<lpage>10</lpage>.</mixed-citation></ref>
<ref id="R65"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Savage</surname>, <given-names>D. M.</given-names></string-name>, <string-name><surname>Wiebe</surname>, <given-names>E. N.</given-names></string-name>, &amp; <string-name><surname>Devine</surname>, <given-names>H. A.</given-names></string-name></person-group> (<year>2004</year>). <article-title>Performance of 2d versus 3d topographic representations for different task types.</article-title> <source>Proceedings of the Human Factors and Ergonomics Society Annual Meeting</source>, <volume>48</volume>(<issue>16</issue>), <fpage>1793</fpage>–<lpage>1797</lpage>. <pub-id pub-id-type="doi">10.1177/154193120404801601</pub-id><issn>1071-1813</issn></mixed-citation></ref>
<ref id="R66"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Wilkening</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Fabrikant</surname>, <given-names>S. I.</given-names></string-name></person-group> (<year>2013</year>). <article-title>How users interact with a 3D geo-browser under time pressure.</article-title> <source>Cartography and Geographic Information Science</source>, <volume>40</volume>(<issue>1</issue>), <fpage>40</fpage>–<lpage>52</lpage>. <pub-id pub-id-type="doi">10.1080/15230406.2013.762140</pub-id><issn>1523-0406</issn></mixed-citation></ref>
<ref id="R67"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Bleisch</surname> <given-names>S</given-names></string-name>, <string-name><surname>Burkhard</surname> <given-names>J</given-names></string-name> and <string-name><surname>Nebiker</surname> <given-names>S</given-names></string-name></person-group>. (<year>2009</year>) <article-title>Efficient Integration of data graphics into virtual 3D Environ-ments.</article-title> <source>Proceedings of 24th International Cartography Conference</source>.</mixed-citation></ref>
<ref id="R68"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Lokka</surname>, <given-names>I.</given-names></string-name>, &amp; <string-name><surname>Çöltekin</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Simulating navigation with virtual 3D geovisualizations–A focus on memory related factors.</article-title> <source>The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences</source>, <volume>XLI</volume>(<supplement>B2</supplement>), <fpage>671</fpage>–<lpage>673</lpage>. <pub-id pub-id-type="doi">10.5194/isprsarchives-XLI-B2-671-2016</pub-id><issn>1682-1750</issn></mixed-citation></ref>
<ref id="R69"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Špriňarová</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Juřík</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Šašinka</surname>, <given-names>Č.</given-names></string-name>, <string-name><surname>Herman</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Štěrba</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Stachoň</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Chmelík</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Kozlíková</surname>, <given-names>B.</given-names></string-name></person-group> (<year>2015</year>). <chapter-title>Human-Computer Interaction in Real-3D and Pseudo-3D Cartographic Visualization: A Comparative Study</chapter-title>. <source>Cartography-Maps Connecting the World</source>, <publisher-name>Springer</publisher-name>, <fpage>59</fpage>-<lpage>73</lpage>. doi: <pub-id pub-id-type="doi">10.1007/978-3-319-17738-0_5</pub-id></mixed-citation></ref>
<ref id="R70"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Herman</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Stachoň</surname>, <given-names>Z.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Comparison of User Performance with Interactive and Static 3D Visualization–Pilot Study.</article-title> <source>The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences</source>, <volume>XLI</volume>(<supplement>B2</supplement>), <fpage>655</fpage>–<lpage>661</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.5194/isprs-archives-XLI-B2-655-2016</pub-id> <pub-id pub-id-type="doi">10.5194/isprsarchives-XLI-B2-655-2016</pub-id><issn>1682-1750</issn></mixed-citation></ref>
<ref id="R71"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Enoch</surname>, <given-names>J. M.</given-names></string-name></person-group> (<year>1959</year>). <article-title>Effect of the size of a complex display upon visual search.</article-title> <source>JOSA</source>, <volume>49</volume>(<issue>3</issue>), <fpage>280</fpage>–<lpage>286</lpage>. <pub-id pub-id-type="doi">10.1364/JOSA.49.000280</pub-id><pub-id pub-id-type="pmid">13631562</pub-id><issn>0030-3941</issn></mixed-citation></ref>
<ref id="R72"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Steinke</surname>, <given-names>T. R.</given-names></string-name></person-group> (<year>1987</year>). <article-title>Eye movement studies in cartography and related fields.</article-title>. <source>Cartographica. The International Journal for Geographic Information and Geo-visualization</source>, <volume>24</volume>(<issue>2</issue>), <fpage>40</fpage>–<lpage>73</lpage>. <pub-id pub-id-type="doi">10.3138/J166-635U-7R56-X2L1</pub-id></mixed-citation></ref>
<ref id="R73"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Wang</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Yuan</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Ye</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Zheng</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Visualizing the Intellectual Structure of Eye Movement Research in Cartography.</article-title> <source>ISPRS International Journal of Geo-Information</source>, <volume>5</volume>(<issue>10</issue>), <fpage>168</fpage>. <pub-id pub-id-type="doi">10.3390/ijgi5100168</pub-id><issn>2220-9964</issn></mixed-citation></ref>
<ref id="R74"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Popelka</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Brychtova</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Svobodova</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Brus</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Dolezal</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Advanced visibility analyses and visibility evaluation using eye-tracking.</article-title> <source>Proceedings of 21st International Conference on Geoinformatics</source>, <fpage>1</fpage>-<lpage>6</lpage>. doi: <pub-id pub-id-type="doi">10.1109/Geoinformatics.2013.6626176</pub-id></mixed-citation></ref>
<ref id="R75"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Brychtova</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Popelka</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Dobesova</surname>, <given-names>Z.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Eye - tracking methods for investigation of cartographic principles. 12th International Multidisciplinary Scientific Geoconference</article-title>. <source>SGEM</source>, <volume>II</volume>, <fpage>1041</fpage>–<lpage>1048</lpage>. <pub-id pub-id-type="doi">10.5593/sgem2012/s09.v2016</pub-id></mixed-citation></ref>
<ref id="R76"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Fabrikant</surname>, <given-names>S. I.</given-names></string-name>, <string-name><surname>Rebich-Hespanha</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Andrienko</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Andrienko</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name><surname>Montello</surname>, <given-names>D. R.</given-names></string-name></person-group> (<year>2008</year>). <article-title>Novel method to measure inference affordance in static small-multiple map displays representing dynamic processes.</article-title> <source>The Cartographic Journal</source>, <volume>45</volume>(<issue>3</issue>), <fpage>201</fpage>–<lpage>215</lpage>. <pub-id pub-id-type="doi">10.1179/000870408X311396</pub-id><issn>0008-7041</issn></mixed-citation></ref>
<ref id="R77"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Fabrikant</surname>, <given-names>S. I.</given-names></string-name>, <string-name><surname>Hespanha</surname>, <given-names>S. R.</given-names></string-name>, &amp; <string-name><surname>Hegarty</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Cognitively inspired and perceptually salient graphic displays for efficient spatial inference making.</article-title> <source>Annals of the Association of American Geographers</source>, <volume>100</volume>(<issue>1</issue>), <fpage>13</fpage>–<lpage>29</lpage>. <pub-id pub-id-type="doi">10.1080/00045600903362378</pub-id><issn>0004-5608</issn></mixed-citation></ref>
<ref id="R78"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Çöltekin</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Fabrikant</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Lacayo</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Explor-ing the efficiency of users’ visual analytics strategies based on sequence analysis of eye movement recordings.</article-title> <source>International Journal of Geographical Information Science</source>, <volume>24</volume>(<issue>10</issue>), <fpage>1559</fpage>–<lpage>1575</lpage>. <pub-id pub-id-type="doi">10.1080/13658816.2010.511718</pub-id><issn>1365-8816</issn></mixed-citation></ref>
<ref id="R79"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Incoul</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Ooms</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>De Maeyer</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2015</year>). <chapter-title>Comparing paper and digital topographic maps using eye tracking.</chapter-title> <source>Modern Trends in Cartography</source>, <publisher-name>Springer</publisher-name>, <fpage>339</fpage>-<lpage>356</lpage>. doi: <pub-id pub-id-type="doi">10.1007/978-3-319-07926-4_26</pub-id></mixed-citation></ref>
<ref id="R80"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Ooms</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>De Maeyer</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Fack</surname>, <given-names>V.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Study of the attentive behavior of novice and expert map users using eye tracking.</article-title> <source>Cartography and Geographic Information Science</source>, <volume>41</volume>(<issue>1</issue>), <fpage>37</fpage>–<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1080/15230406.2013.860255</pub-id><issn>1523-0406</issn></mixed-citation></ref>
<ref id="R81"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Ooms</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Çöltekin</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>De Maeyer</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Dupont</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Fabrikant</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Incoul</surname>, <given-names>A.</given-names></string-name>, <etal>. . .</etal> <string-name><surname>Van der Haegen</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Combining user logging with eye tracking for interactive and dynamic applications.</article-title> <source>Behavior Research Methods</source>, <volume>47</volume>(<issue>4</issue>), <fpage>977</fpage>–<lpage>993</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-014-0542-3</pub-id><pub-id pub-id-type="pmid">25490980</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="R82"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Fuhrmann</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Komogortsev</surname>, <given-names>O.</given-names></string-name>, &amp; <string-name><surname>Tamir</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Investigating Hologram‐Based Route Planning.</article-title> <source>Transactions in GIS</source>, <volume>13</volume>(<supplement>s1</supplement>), <fpage>177</fpage>–<lpage>196</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-9671.2009.01158.x</pub-id><issn>1361-1682</issn></mixed-citation></ref>
<ref id="R83"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Putto</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Kettunen</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Torniainen</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Krause</surname>, <given-names>C. M.</given-names></string-name>, &amp; <string-name><surname>Tiina Sarjakoski</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Effects of cartographic elevation visualizations and map-reading tasks on eye movements.</article-title> <source>The Cartographic Journal</source>, <volume>51</volume>(<issue>3</issue>), <fpage>225</fpage>–<lpage>236</lpage>. <pub-id pub-id-type="doi">10.1179/1743277414Y.0000000087</pub-id><issn>0008-7041</issn></mixed-citation></ref>
<ref id="R84"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Popelka</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Brychtova</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Eye-tracking Study on Different Perception of 2D and 3D Terrain Visual-isation.</article-title> <source>The Cartographic Journal</source>, <volume>50</volume>(<issue>3</issue>), <fpage>240</fpage>–<lpage>246</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1179/1743277413y.0000000058</pub-id> <pub-id pub-id-type="doi">10.1179/1743277413Y.0000000058</pub-id><issn>0008-7041</issn></mixed-citation></ref>
<ref id="R85"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Dolezalova</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Popelka</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Evaluation of the user strategy on 2D and 3D city maps based on novel scanpath comparison method and graph visualization.</article-title> <source>The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences</source>, <fpage>637</fpage>-<lpage>640</lpage>. <pub-id pub-id-type="doi">10.5194/isprsarchives-XLI-B2-637-2016</pub-id><issn>1682-1750</issn></mixed-citation></ref>
<ref id="R86"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Popelka</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Dedkova</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Extinct village 3D visualization and its evaluation with eye-movement recording.</article-title> <source>Lecture Notes in Computer Science</source>, <volume>8579</volume>, <fpage>786</fpage>–<lpage>795</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-09144-0_54</pub-id><issn>0302-9743</issn></mixed-citation></ref>
<ref id="R87"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Popelka</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2014</year>). <article-title>The role of hill-shading in tourist maps.</article-title> <source>CEUR Workshop Proceedings</source>, <fpage>17</fpage>-<lpage>21</lpage></mixed-citation></ref>
<ref id="R88"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Pfeiffer</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Measuring and visualizing attention in space with 3d attention volumes.</article-title> <source>Proceedings of the Symposium on Eye Tracking Research and Applications</source>. ACM, <fpage>29</fpage>-<lpage>36</lpage>. doi: <pub-id pub-id-type="doi">10.1145/2168556.2168560</pub-id></mixed-citation></ref>
<ref id="R89"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Stellmach</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Nacke</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Dachselt</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2010</year>a). <article-title>3d attentional maps: aggregated gaze visualizations in three-dimensional virtual environments.</article-title> <source>Proceedings of the international conference on advanced visual interfaces</source>, ACM, <fpage>345</fpage>-<lpage>348</lpage>. <pub-id pub-id-type="doi">10.1145/1842993.1843058</pub-id></mixed-citation></ref>
<ref id="R90"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Blascheck</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Kurzhals</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Raschke</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Burch</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Weiskopf</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Ertl</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2014</year>). <article-title>State-of-the-art of visualization for eye tracking data.</article-title> <source>Proceedings of EuroVis</source>. doi: <pub-id pub-id-type="doi">10.2312/eurovisstar.20141173</pub-id></mixed-citation></ref>
<ref id="R91"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Stellmach</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Nacke</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Dachselt</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2010</year>b). <article-title>Ad-vanced gaze visualizations for three-dimensional virtual environments.</article-title> <source>Proceedings of the 2010 symposium on eye-tracking research &amp; Applications</source>, <publisher-name>ACM</publisher-name>, <fpage>109</fpage>-<lpage>112</lpage>.</mixed-citation></ref>
<ref id="R92"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Ramloll</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Trepagnier</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Sebrechts</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Beedasy</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2004</year>). <article-title>Gaze data visualization tools: opportunities and challenges.</article-title> <source>Proceedings of Eighth International Conference on Information Visualisation</source>, <fpage>173</fpage>-<lpage>180</lpage>. doi: <pub-id pub-id-type="doi">10.1109/IV.2004.1320141</pub-id></mixed-citation></ref>
<ref id="R93"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Duchowski</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Medlin</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Cournia</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Murphy</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Gramopadhye</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Nair</surname>, <given-names>S.</given-names></string-name>, <etal>. . .</etal> <string-name><surname>Melloy</surname>, <given-names>B.</given-names></string-name></person-group> (<year>2002</year>). <article-title>3-D eye movement analysis.</article-title> <source>Behavior Research Methods, Instruments, &amp; Computers</source>, <volume>34</volume>(<issue>4</issue>), <fpage>573</fpage>–<lpage>591</lpage>. <pub-id pub-id-type="doi">10.3758/BF03195486</pub-id><pub-id pub-id-type="pmid">12564561</pub-id><issn>0743-3808</issn></mixed-citation></ref>
<ref id="R94"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Baldauf</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Fröhlich</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Hutter</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2010</year>) <article-title>KIBITZ-ER: a wearable system for eye-gaze-based mobile ur-ban exploration.</article-title> <source>Proceedings of the 1st Augmented Human International Conference</source>. ACM, <fpage>9</fpage>-<lpage>13</lpage>. doi: <pub-id pub-id-type="doi">10.1145/1785455.1785464</pub-id></mixed-citation></ref>
<ref id="R95"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Paletta</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Santner</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Fritz</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Mayer</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Schrammel</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2013</year>). <article-title>3D attention: measurement of visual saliency using eye tracking glasses</article-title>. <source>CHI'13 Extended Abstracts on Human Factors in Computing Systems</source>. <publisher-name>ACM</publisher-name>, <fpage>199</fpage>-<lpage>204</lpage>. doi: <pub-id pub-id-type="doi">10.1145/2468356.2468393</pub-id></mixed-citation></ref>
<ref id="R96"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Behr</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Eschler</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Jung</surname>, <given-names>Y.</given-names></string-name>, &amp; <string-name><surname>Zöllner</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2009</year>). <article-title>X3DOM: a DOM-based HTML5/X3D integration model</article-title>. <source>Proceedings of the 14th International Conference on 3D Web Technology</source>, <publisher-name>ACM</publisher-name>, <fpage>127</fpage>-<lpage>135</lpage>. doi: <pub-id pub-id-type="doi">10.1145/1559764.1559784</pub-id></mixed-citation></ref>
<ref id="R97"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Behr</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Jung</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Keil</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Drevensek</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Zoellner</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Eschler</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Fellner</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2010</year>). <article-title>A scalable architecture for the HTML5/X3D integration model X3DOM</article-title>.   <source>Proceedings of the 15th International Conference on Web 3D Technology</source>, <publisher-name>ACM</publisher-name>. doi: <pub-id pub-id-type="doi">10.1145/1836049.1836077</pub-id></mixed-citation></ref>
<ref id="R98"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Herman</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Reznik</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2015</year>). <article-title>3D web visualization of environmental information-integration of heterogeneous data sources when providing navigation and interaction.</article-title> <source>The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences</source>, <volume>XL-3</volume>(<supplement>W3</supplement>), <fpage>479</fpage>–<lpage>485</lpage>. <pub-id pub-id-type="doi">10.5194/isprsarchives-XL-3-W3-479-2015</pub-id><issn>1682-1750</issn></mixed-citation></ref>
<ref id="R99"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Herman</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Russnák</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2016</year>). <article-title>X3DOM: Open Web Platform for Presenting 3D Geographical Data and E-learning.</article-title> <source>Proceedings of 23rd Central European Conference</source>, <fpage>31</fpage>-<lpage>40</lpage>.</mixed-citation></ref>
<ref id="R100"><mixed-citation publication-type="book" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Hughes</surname>, <given-names>J. F.</given-names></string-name>, <string-name><surname>Van Dam</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Foley</surname>, <given-names>J. D.</given-names></string-name>, &amp; <string-name><surname>Feiner</surname>, <given-names>S. K.</given-names></string-name></person-group> (<year>2014</year>). <source>Computer graphics: principles and practice.</source> <publisher-name>Pearson Education, 1264</publisher-name>.</mixed-citation></ref>
<ref id="R101"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Popelka</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Stachoň</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Šašinka</surname>, <given-names>Č.</given-names></string-name>, &amp; <string-name><surname>Doležalová</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2016</year>). <article-title>EyeTribe Tracker Data Accuracy Evaluation and Its Interconnection with Hypothesis Software for Cartographic Purposes.</article-title> <source>Computational Intelligence and Neuroscience</source>, <volume>2016</volume>, <fpage>1</fpage>-<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1155/2016/9172506</pub-id><pub-id pub-id-type="pmid">27087805</pub-id><issn>1687-5265</issn></mixed-citation></ref>
<ref id="R102"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Vosskühler</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Nordmeier</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Kuchinke</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Jacobs</surname>, <given-names>A. M.</given-names></string-name></person-group> (<year>2008</year>). <article-title>OGAMA (Open Gaze and Mouse Analyzer): Open-source software designed to analyze eye and mouse movements in slideshow study designs.</article-title> <source>Behavior Research Methods</source>, <volume>40</volume>(<issue>4</issue>), <fpage>1150</fpage>–<lpage>1162</lpage>. <pub-id pub-id-type="doi">10.3758/BRM.40.4.1150</pub-id><pub-id pub-id-type="pmid">19001407</pub-id><issn>1554-351X</issn></mixed-citation></ref>
</ref-list>
</back>
</article>
