<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
     <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta><article-id pub-id-type="doi">10.16910/jemr.10.2.3</article-id>
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Using Coefficient <italic>&#x039A;</italic> to Distinguish Ambient/Focal Visual Attention During Cartographic Tasks</article-title>
      </title-group>
         <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Krejtz</surname>
						<given-names>Krzysztof </given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Çöltekin</surname>
						<given-names>Arzu</given-names>
					</name>
					<xref ref-type="aff" rid="aff2">2</xref>
				</contrib>
					<contrib contrib-type="author">
					<name>
						<surname>Duchowski</surname>
						<given-names>Andrew</given-names>
					</name>
					<xref ref-type="aff" rid="aff3">3</xref>
				</contrib>
					<contrib contrib-type="author">
					<name>
						<surname>Niedzielska</surname>
						<given-names>Anna</given-names>
					</name>
					<xref ref-type="aff" rid="aff4">4</xref>
				</contrib>
				
				
        <aff id="aff1">
		<institution>SWPS University of Social Sciences and Humanities,
Warsaw</institution>, <country>Poland</country>
        </aff>
		 <aff id="aff2">
		<institution>Department of Geography, University of Zürich</institution>, <country>Switzerland</country>
        </aff>
		 <aff id="aff3">
		<institution>Clemson University</institution>, <country>USA</country>
        </aff>
		 <aff id="aff4">
		<institution>National Information Processing Institute, Warsaw</institution>, <country>Poland</country>
        </aff>
		</contrib-group>
     
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>3</day>  
		<month>4</month>
        <year>2017</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2017</year>
	</pub-date>
      <volume>10</volume>
      <issue>2</issue> 
	  <elocation-id>10.16910/jemr.10.2.3</elocation-id>
	
	<permissions> 
	<copyright-year>2017</copyright-year>
	<copyright-holder>Krejtz et al.</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>We demonstrate the use of the ambient/focal coefficient <italic>&#x039A;</italic> for studying the dynamics of visual
behavior when performing cartographic tasks. Participants viewed a cartographic map and
satellite image of Barcelona while performing a number of map-related tasks. Cartographic
maps can be viewed as summary representations of reality, while satellite images are typically
more veridical, and contain considerably more information. Our analysis of traditional eye
movement metrics suggests that the satellite representation facilitates longer fixation durations,
requiring greater scrutiny of the map. The cartographic map affords greater peripheral scanning, as evidenced by larger saccade amplitudes. Evaluation of <italic>&#x039A;</italic> elucidates task dependence
of ambient/focal attention dynamics when working with geographic visualizations: localization progresses from ambient to focal attention; route planning fluctuates in an ambient-focal-ambient pattern characteristic of the three stages of route end point localization, route following, and route confirmation.</p>
      </abstract>
      <kwd-group>
        <kwd>ambient/focal attention</kwd>
        <kwd>coefficient <italic>&#x039A;</italic></kwd>
        <kwd> cartography</kwd>
        <kwd>route planning</kwd>
        <kwd>visual search</kwd>
      
      </kwd-group>
    </article-meta>
  </front>
  
  
  <body>
  <sec id="s1">
  <title>Introduction</title>
  <p>Following construction of a map, via, e.g., selection, designation, classification, etc. (see Keates [<xref ref-type="bibr" rid="b29">29</xref>]), cartographers are interested in evaluating its use, including objective
analysis of the user’s visual and/or cognitive engagement.
Beyond measurement of a map’s intrinsic or visual complexity, which often relies on image-based measures related
to saliency, clutter, or entropy (e.g., see Fairbairn [<xref ref-type="bibr" rid="b15">15</xref>];
Schnur, Bektas, Salahi, and Çöltekin [<xref ref-type="bibr" rid="b54">54</xref>]; Brychtová, Çöltekin, and Pászto [<xref ref-type="bibr" rid="b3">3</xref>]), it is important to find ways to measure perceived complexity, so that maps are well-suited to
task types and to target user groups [<xref ref-type="bibr" rid="b56">56</xref>]. Šterba et al. suggest
aiming psychological analyses at detecting mechanisms and
cognitive processes evoked during various tasks performed
on maps, or cartographic products in general. Toward this end, we use K. Krejtz, Duchowski, Krejtz, Szarkowska, and
Kopacz’s [<xref ref-type="bibr" rid="b37">37</xref>] <italic>&#x039A;</italic> coefficient as a gaze metric to distinguish
ambient and focal attention when performing cartographic
tasks. In particular, our goal is to compare and contrast the
use of a cartographically designed abstract map with its corresponding satellite image.</p>
  <p>Due to the coupling between attention and saccades, saccade duration and amplitude are thought to reflect attentional selection and thus the spatial extent of parafoveal
processing—peripheral scene degradation tends to curtail saccadic amplitudes [<xref ref-type="bibr" rid="b4">4</xref>]. Ambient attention is typically characterized by relatively short fixations followed by long saccades.
Conversely, focal attention is described by long fixations followed by short saccades [<xref ref-type="bibr" rid="b59">59</xref>]. The <italic>&#x039A;</italic> coefficient captures the temporal
relation between standardized (z-score) fixation duration and
subsequent saccade amplitude. <italic>&#x039A;</italic> &#x003E; 0 indicates focal viewing while <italic>&#x039A;</italic> &#x003C; 0 suggests ambient viewing. Fluctuating between focal and ambient modes, <italic>&#x039A;</italic> could indicate changes
in cognitive load corresponding to stimulus or task complexity, while, becoming more focal over time, <italic>&#x039A;</italic> could indicate
conclusion of visual search, and, for example, boredom, or
the culmination of a decision.</p>
	<p>In real-time applications, the <italic>&#x039A;</italic> coefficient can potentially act as a contextual cue which could be exploited by software
such as recommender systems, e.g., by not interrupting the
user when in ambient search mode, or oscillating between
ambient and focal search. Gaze-based recommender systems
are designed to respond with information contingent on the
viewer’s gaze, e.g., in geographic contexts, when directed to
a particular location in physical or virtual space (such as on
a map). Geographic gaze-based recommender systems have
been referred to as location-aware (e.g., mobile) eye tracking
systems [<xref ref-type="bibr" rid="b34">34</xref>]. For the system
to provide an appropriate response, the system must identify
the viewer’s desire for information through analysis of their
gaze behavior. Generally, this is accomplished via computation of an interest metric [<xref ref-type="bibr" rid="b55 b46 b20">55, 46, 20</xref>]. Recent approaches characterize interest or boredom via Support
Vector Machines [<xref ref-type="bibr" rid="b32">32</xref>] or
Area Of Interest (AOI) revisitation [<xref ref-type="bibr" rid="b31">31</xref>].</p>	
<p>We demonstrate the utility of <italic>&#x039A;</italic> by comparing visual
search behavior over two different geographic representations (a cartographic map and a satellite image) of the city
of Barcelona alongside traditional eye movement metrics.</p>
	</sec>	
	 
	<sec id="s2">
  <title>Background</title>
  <p>Among the many operations involved in map construction
(see Keates [<xref ref-type="bibr" rid="b29">29</xref>]), an important aspect of map design is
the control of the level of detail through generalization operations, e.g., via simplification [<xref ref-type="bibr" rid="b61">61</xref>].
Cartographers remove unwanted objects to deal with complexity, thus implicitly or explicitly acknowledge that visual
clutter is undesirable, as it can negatively affect visual search
[<xref ref-type="bibr" rid="b50 b63">50, 63</xref>]. In general, the utility of
a map depends on the amount of represented data: as visual
density increases, so does information load, decreasing the
map’s usability [<xref ref-type="bibr" rid="b56">56</xref>].</p>
  <p>A map’s utility can in part be evaluated by considering
visual search performance. Visual search is a fundamental
function all sighted beings execute on a daily basis. We
plan our paths at a glance to avoid danger or to find food
and shelter. In other words, visual search is important for
our survival. It is also a commonly tested task in the attention literature as visual search may be facilitated (or interrupted) by salient objects in our (central or peripheral) visual
field [<xref ref-type="bibr" rid="b50">50</xref>]. Understanding human strategies in visual search is interesting to
many professional groups such as psychologists, vision researchers, educators, designers, and advertisers.</p>
	<p>Visual search is also a fundamental (low-level) cartographic task [<xref ref-type="bibr" rid="b2">2</xref>]. Another term
that the cartographic literature uses for visual search is localization, in which a viewer is asked to find an object of interest on
a map. This is a principal map use task, because regardless
of the map type or the final goal of the map reader, an object
or point of interest must be found before it can be studied
further. Geographic task taxonomies widely acknowledge
this task among the most basic and common [<xref ref-type="bibr" rid="b35 b5 b42">35, 5, 42</xref>].</p>	
<p>The deployment of visual attention as well as its response
to changing conditions is often linked to our cognitive state.
In eye movement studies, overt visual attention is typically
associated with the viewer’s point of gaze [<xref ref-type="bibr" rid="b18">18</xref>]. Since eye tracking devices can effectively capture
only the central (foveal) gaze point, attempts to model visual
behavior that is triggered by the global complexity of the visual stimuli, or signals received from peripheral vision, are
studied only to a limited extent [<xref ref-type="bibr" rid="b41 b49">41, 49</xref>].</p>
<p>Map complexity has been a topic of interest in cartography for many decades [<xref ref-type="bibr" rid="b14 b39 b16 b54">14, 39, 16, 54</xref>]. However, there have
only been a few attempts to study map complexity using
eye movements. Castner and Eastman [<xref ref-type="bibr" rid="b6 b7">6, 7</xref>] distinguished between focal and ambient processing (even though
they did not use these terms), and emphasized the importance of this distinction in their studies. They utilized fixation duration as an indicator of “depth of cognitive processing” and interfixation distance as an indicator of “extent of
peripheral processing”. They observed a correlation between
what they termed imageability and perceived complexity, and
concluded that eye movements are useful in assessing the
“holistic properties of maps”.</p>
<p>Our work is conceptually similar to Castner and Eastman’s [<xref ref-type="bibr" rid="b7">7</xref>] in that we also distinguish between focal and
ambient attention during cartographic tasks. However, while
they used traditional eye movement metrics, e.g., derived
from fixations (see Jacob and Karn [<xref ref-type="bibr" rid="b26">26</xref>]), we evaluate <italic>&#x039A;</italic>
for its applicability to analysis of cartographic tasks. Because
task is known to influence eye movements [<xref ref-type="bibr" rid="b64">64</xref>],
especially their dynamics [<xref ref-type="bibr" rid="b42">42</xref>], we test <italic>&#x039A;</italic> under three different cartographic tasks, namely Localization,
Point Of Interest, and Route Planning. These can be thought
of as instances of Locate, Identify, and a combination of Associate and Correlate, respectively, using the cartographic
task taxonomy found in Šterba et al. [<xref ref-type="bibr" rid="b56">56</xref>] (see also Knapp
[<xref ref-type="bibr" rid="b35">35</xref>] and Wehrend and Lewis [<xref ref-type="bibr" rid="b60">60</xref>]).</p>
<p>Eye tracking experiments have investigated map-based
wayfinding, suggesting that route planning (and route
choice) are followed by a phase of transformation
and encoding—see Kiefer, Giannopoulos, Raubal, and
Duchowski [<xref ref-type="bibr" rid="b33">33</xref>] for a review. In this paper we consider
route planning as one of a number of map-related tasks and
demonstrate how <italic>&#x039A;</italic> corresponds to different phases of route planning. Of six typical map-based visual tasks, namely free
exploration, visual search, polygon comparison, line following, focused search, and route planning, Kiefer, Giannopoulos, Duchowski, and Raubal [<xref ref-type="bibr" rid="b30">30</xref>] found route planning and
focused search to be the most cognitively demanding (as indicated by mean difference of pupil diameter with respect to
the free exploration task, considered as baseline). Their work
is possibly the most similar to our application of coefficient
<italic>&#x039A;</italic> to analysis of cartographic tasks.</p>
	</sec>	


	<sec id="s3">
  <title>Attentional Dynamics</title>
  <p>The <italic>&#x039A;</italic> coefficient is derived by subtracting the standardized (z-score) fixation duration from the standardized amplitude of the subsequent saccade [<xref ref-type="bibr" rid="b37">37</xref>], reproduced here for convenience in <xref ref-type="fig" rid="eq01">Equation 1</xref>:</p>
  
  
  <fig id="eq01" fig-type="equation" position="anchor">
					<graphic id="equation01" xlink:href="jemr-10-02-c-equation-01.png"/>
				</fig>
  
  
  <p>where µ<sub>d</sub>, µ<sub>a</sub> are the mean fixation duration and saccade amplitude, respectively, and σ<sub>d</sub>, σ<sub>a</sub> are the fixation duration and
saccade amplitude standard deviations, respectively, computed over all n fixations and hence n <italic>&#x039A;</italic><sub>i</sub> coefficients.</p>
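<p>As a concrete illustration, the per-fixation computation of <italic>&#x039A;</italic> can be sketched as follows (a minimal sketch assuming NumPy; function and variable names are illustrative, not taken from the authors' code):</p>

```python
import numpy as np

def ambient_focal_k(fix_durations, sacc_amplitudes):
    """Per-fixation ambient/focal coefficients.

    K_i = z(d_i) - z(a_{i+1}): the z-scored duration of fixation i
    minus the z-scored amplitude of the saccade that follows it.
    """
    d = np.asarray(fix_durations, dtype=float)    # fixation durations (ms)
    a = np.asarray(sacc_amplitudes, dtype=float)  # subsequent saccade amplitudes (deg)
    assert len(d) == len(a), "one outgoing saccade per fixation"
    return (d - d.mean()) / d.std() - (a - a.mean()) / a.std()

# Long fixation + short following saccade -> K > 0 (focal);
# short fixation + long following saccade -> K < 0 (ambient).
k = ambient_focal_k([120, 450, 90, 600], [8.0, 1.5, 9.5, 0.8])
```

<p>By construction the n coefficients have zero mean, so positive and negative excursions are relative to each participant's own viewing behavior.</p>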
	<p>Similar combinations of fixation duration and saccadic
amplitude have been proposed for the analysis of static and
dynamic scene viewing [<xref ref-type="bibr" rid="b59 b36">59, 36</xref>]. Specifically, short fixation durations combined with long saccades
are characteristic of ambient processing, while longer fixation durations followed by shorter saccades are indicative of
focal processing [<xref ref-type="bibr" rid="b58">58</xref>]. The pattern of visual attention attributed to the two
ambient/focal modes of information acquisition [<xref ref-type="bibr" rid="b57">57</xref>] has been variably referred to as orienting and evaluating [<xref ref-type="bibr" rid="b24">24</xref>], noticing and examining [<xref ref-type="bibr" rid="b62">62</xref>], exploring and inspecting [<xref ref-type="bibr" rid="b59">59</xref>],
skimming and scrutinizing [<xref ref-type="bibr" rid="b38">38</xref>], or
exploring and exploiting [<xref ref-type="bibr" rid="b45">45</xref>].</p>	
<p>The interplay between focal and ambient visual information processing changes dynamically. Shorter fixations followed by longer saccades appear to characterize early stages
of scene perception. Once a target has been identified, longer
fixations ensue and are followed by shorter saccades [<xref ref-type="bibr" rid="b25">25</xref>].</p>
<p>Using Velichkovsky et al.’s [<xref ref-type="bibr" rid="b59">59</xref>] terms of exploration
and inspection, inspection may be comprised of decision and
confirmation [<xref ref-type="bibr" rid="b28">28</xref>]. Pannasch, Helmert,
Roth, Herbold, and Walter [<xref ref-type="bibr" rid="b44">44</xref>] showed a systematic increase in fixation durations and a decrease in saccadic amplitudes over the time course of scene perception. In their work,
fixation durations and saccadic amplitudes were considered
as two independent data streams. We combine both into a
single dynamic stream explicitly capturing the interplay of
ambient and focal modes of visual attention.</p>
<p>Holmqvist et al. [<xref ref-type="bibr" rid="b23">23</xref>] review several means of operationalization of ambient/focal viewing: thresholding on the
ratio of fixation duration to saccade amplitude, and computation of a saccade/fixation ratio. Neither of these approaches,
however, explicitly considers the dynamics of how the saccade/fixation ratio changes over time.</p>
<p>Our approach allows for both a clear distinction between ambient
(<italic>&#x039A;</italic> &#x003C; 0) and focal (<italic>&#x039A;</italic> &#x003E; 0) eye movements and an analysis of their continuous
dynamics. There is, however, an implicit ambiguity at <italic>&#x039A;</italic> = 0,
which reflects effective equivalence between fixation duration and saccadic amplitude, relative to their z-scores, i.e.,
each is equal to its mean. The implicit ambiguity arises
since <italic>&#x039A;</italic> = 0 is neither focal nor ambient; however, its occurrence is rather rare (e.g., in the present study only 0.86% of
the data fell within the <italic>&#x039A;</italic> &#x2208; [−0.01, 0.01] range).</p>
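<p>The sign-based distinction, together with a near-zero band, can be operationalized as a simple labeling rule (a sketch; the band width mirrors the [−0.01, 0.01] range noted above, and the rule itself is illustrative rather than the authors' procedure):</p>

```python
def classify_k(k_values, band=0.01):
    """Label each K value as focal, ambient, or ambiguous.

    Values within [-band, +band] are treated as near-zero,
    mirroring the range reported in the text.
    """
    labels = []
    for k in k_values:
        if abs(k) <= band:
            labels.append("ambiguous")
        elif k > 0:
            labels.append("focal")
        else:
            labels.append("ambient")
    return labels

labels = classify_k([-1.2, 0.005, 0.8])  # -> ['ambient', 'ambiguous', 'focal']
```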


<p>In studying cartographic visual tasks, a measure of ambient/focal attention could indicate perceived complexity, task
difficulty, or cognitive load. It is important to remember that
various factors can contribute to understanding cartographic
complexity, e.g., spatial abilities vary strongly among people
[<xref ref-type="bibr" rid="b1 b21">1, 21</xref>], expertise [<xref ref-type="bibr" rid="b8">8</xref>] and familiarity can change people’s strategies [<xref ref-type="bibr" rid="b19">19</xref>]. Furthermore, the map’s content and design also affects performance [<xref ref-type="bibr" rid="b9">9</xref>].</p>
	</sec>	




	<sec id="s4">
  <title>Methodology</title>
  <p>To evaluate the effectiveness of a cartographic representation compared to its satellite rendering using coefficient <italic>&#x039A;</italic>,
we performed an experimental eye-tracking study of cartographic tasks. Three tasks were carried out by participants on
a city view, displayed either as cartographic map or satellite
rendering. Our hypotheses follow.</p>

<p>1. If the satellite view can be assumed to be more cognitively demanding, it should require more attentive visual
search. As such, we predict longer task completion
times, longer fixation durations, and shorter saccades
during inspection of this type of image compared to
its cartographic representation. Moreover, we predict
that the ambient/focal <italic>&#x039A;</italic> coefficient will show more focal eye movements on the satellite image than on the
cartographic map.</p>
<p>2. The route planning task elicits a different pattern of eye
movement dynamics than the localization task. We assume the pattern of ambient/focal viewing reflects the
different stages required to complete route planning:
localization of the route start and end points (ambient viewing), traversing the route (focal viewing), and
confirmation of the route (ambient viewing).</p>
	
	<sec id="s4a">
  <title>Overview and Experimental Design</title>
   <p>The experiment used a 3×2 mixed design, including cartographic task (Localization vs. Point Of Interest (POI) vs.
Route Planning) as a within-subjects factor and visualization (cartographic map vs. satellite rendering) as a between-subjects factor. We also controlled for spatial working memory capacity (SWMC) of each individual (see below).</p>
  <p>The three cartographic tasks involved two types of visual
search (Localization of a stated map landmark followed by
search of a nearby POI) followed by Route Planning. Participants were asked to find locations on the map when viewing
a city representation as either cartographic map or satellite
image (Google’s cartographic or satellite rendering,
respectively, see <xref ref-type="fig" rid="fig01">Figure 1</xref> and below for technical details).</p>
	<p>Note that we excluded the POI task from the statistical analyses.
The reason for this was the task’s simplicity. The task was
to find a Point Of Interest close to the target of the first Localization task. Due to the POI’s proximity to the initial target, localization of the secondary POI was subsumed by the
first task, making the distinction between stated hypotheses
effectively meaningless.</p>	

<fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1</label>
					<caption>
						<p>Experimental settings and (Barcelona) stimuli: satellite image and cartographic map. (a) Apparatus and experimental setting. (b) Satellite image. (c) Cartographic map.</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-10-02-c-figure-01.png"/>
				</fig>

	</sec>	
	<sec id="s4b">
  <title>Participants</title>
   <p>Sixty-three (N = 63) university students took part in the
study, with 7 excluded due to technical and procedural problems (e.g., poor calibration). The final sample included 56
participants (20 M, 36 F, ages M = 25.43, SD = 3.94).
Calibration scores for the sample were as follows: vertical
M = 0.54&#x00B0; and horizontal M = 0.49&#x00B0;. All participants took
part in the experiment after signing a consent form.</p>

	</sec>	
	<sec id="s4c">
  <title>Apparatus</title>
   <p>All stimuli were presented on a computer monitor (1680×
1050 resolution, 22&#x2033; LCD, 60 Hz refresh rate) connected to a
standard PC laptop computer. Eye movements were recorded
at 250 Hz with an SMI RED 250 eye tracking system. Stimulus presentation was controlled by SMI’s Experiment Center
software. SMI’s BeGaze software was used for fixation and
saccade detection with a velocity-based event detection algorithm. The algorithm first detects saccades, then fixations.
The minimum duration of saccades was set to 22 ms, with
peak velocity threshold of 40&#x00B0;/s, and minimum fixation duration set to 50 ms. There is no consensus on a minimum duration for fixation identification. For example, Velichkovsky et al.
[<xref ref-type="bibr" rid="b59">59</xref>] consider a minimum fixation duration of 20 ms. However, other researchers consider fixations with larger minima,
e.g., 100-200 ms [<xref ref-type="bibr" rid="b52 b43">52, 43</xref>]. In the present paper, we applied a minimum fixation duration of 80 ms for the analyses (i.e., ignoring fixations with very short durations in range [50, 80] ms).</p>
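<p>The duration-based filtering described above can be sketched as a one-line filter (the list-of-durations format is illustrative, not SMI's export schema):</p>

```python
def filter_fixations(durations_ms, min_duration_ms=80):
    """Drop fixations shorter than the analysis minimum.

    The detector emits fixations down to 50 ms; analyses here keep
    only those of at least 80 ms (durations in milliseconds).
    """
    return [d for d in durations_ms if d >= min_duration_ms]

kept = filter_fixations([55, 80, 240, 62, 310])  # -> [80, 240, 310]
```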
  
	</sec>	
		<sec id="s4d">
  <title>Research Materials
</title>
  <p>Background questionnaire. A short online survey, implemented with the open-source LimeSurvey platform [<xref ref-type="bibr" rid="b53">53</xref>], included questions about
demographics and familiarity with Barcelona as well as Google
Maps.</p>
  <p>Spatial Working Memory Capacity. A Spatial Working Memory Capacity (SWMC) measure was adopted from
the Berlin Test of Intelligence [<xref ref-type="bibr" rid="b27">27</xref>], following Dajlido [<xref ref-type="bibr" rid="b11">11</xref>]. We used two tasks for spatial working memory capacity measurement. Both were presented to participants in paper-and-pencil form. Example test
boards are presented in <xref ref-type="fig" rid="fig02">Figure 2</xref>. We followed the test procedure and its timings provided by Dajlido [<xref ref-type="bibr" rid="b11">11</xref>].</p>
	<p>In the first task, participants were asked to memorize, in
30 seconds, a path connecting 10 objects (buildings) presented on a board, see Figure 2(a). The buildings were shown
on a background resembling streets. Afterwards, participants
were presented with an answer sheet with only the buildings
shown. Their task was to reproduce, with a pencil, the original path. The task was scored by the number of correctly reproduced connections in the path, resulting in a final score
ranging from 0 to 10.</p>	
<fig id="fig02" fig-type="figure" position="float">
					<label>Figure 2</label>
					<caption>
						<p>Examples of spatial working memory capacity tasks. (a) Spatial memory task 1. (b) Spatial memory task 2.</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-10-02-c-figure-02.png"/>
				</fig>



<p>In the second task, participants were asked to remember
the position of objects (buildings) presented on the test board,
see Figure 2(b). The task involved memory of each object’s position as well as the spatial relations between them. The time
limit of this task was 45 seconds. During the test phase the
task was to enter numbers assigned to each building in the
proper empty spots on the answer sheet. All correct entries
were summed for the final score ranging in [0, 12].</p>
<p>In order to obtain a final indicator of spatial working memory capacity, proportional scores from both tasks were averaged. They were then normalized to obtain a single score
of spatial working memory capacity ranging between 0 and
1. The normalized score was used in subsequent statistical
analyses as a covariate.</p>
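<p>The combination of the two task scores into a single [0, 1] indicator can be sketched as follows (a minimal sketch of the averaging-then-normalizing step described above; the function name is hypothetical):</p>

```python
def swmc_score(path_score, objects_score, path_max=10, objects_max=12):
    """Single spatial working memory capacity score in [0, 1].

    Each task's raw score is converted to a proportion of its
    maximum (10 and 12 points, respectively) and the two
    proportions are averaged.
    """
    return (path_score / path_max + objects_score / objects_max) / 2

score = swmc_score(7, 9)  # (0.7 + 0.75) / 2 = 0.725
```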
<p>Experimental stimuli. The map stimuli (screen shots
of the cartographic map and the satellite image) were created using Google Maps&#x2122; JavaScript API v3<xref ref-type="fn" rid="FN1">1</xref>, an Application Programming Interface (API) made publicly (and commercially) available by Google, Inc. The API allows stylized
rendering of a map through specification of JavaScript parameters. Maps were rendered (see Figure 1) by disabling visual user interface controls for navigation, scale, rotate, pan,
and zoom, and limiting the number of Points Of Interest to
two. Specifically, the Barcelona map (using Google’s latitude/longitude coordinates: 41.375384, 2.141004) displayed
only the “park” and “sports_complex” POIs at zoom level
17 with the transit layer turned on. The maps were rendered
to 1280×1024 resolution, then screen-captured and cropped
to the same dimensions. The 1280×1024 images were fit
vertically and centered on the 1680×1050 display, leaving
grey margins on either side of the stimulus, see Figure 1(a).
Centering the stimuli horizontally reduced the likelihood of
eye movements made to distant horizontal screen locations,
where eye tracking accuracy is lowest [<xref ref-type="bibr" rid="b40">40</xref>].</p>
<p>Note that because Google Maps manipulates which POIs
are visible at discrete magnification (zoom) levels [<xref ref-type="bibr" rid="b10">10</xref>], it was impossible to control the selection of specific POIs at any given zoom level. Google Maps lacks application transparency and does not provide a means of determining which subset of existing POIs Google chooses to
display. However, we controlled for this factor by fixing the
zoom level to 17 in all cases.</p>
	</sec>	
			<sec id="s4e">
  <title>Procedure</title>
 
	<p>Prior to the experiment, participants filled in an online
background questionnaire on a laboratory computer. The
tests for spatial memory were presented before or after the
main experimental procedure to avoid order effects between
their scores and the main part of the experiment.</p>	
<p>In the main part of the experiment, participants were randomly assigned to either the cartographic map (N = 29) or the satellite image (N = 25) condition. Following this, the eye tracking system was calibrated for each individual. Participants were instructed to view a roving calibration dot which moved to
successive screen coordinates covering the viewport extents.
Following calibration, participants carried out the localization task (after having located the start point).</p>
<p>For Barcelona (see Figure 1), participants were given the
scenario shown in <xref ref-type="table" rid="t01">Table 1</xref>. The first cartographic tasks were
localization (visual search); the last included both localization and route planning. For brevity, we refer to the cartographic tasks as Localization and Route Planning. Participants
were asked to visually dwell on a search target for 3 seconds to indicate successful localization.</p>


<table-wrap id="t01" position="float">
					<label>Table 1</label>
					<caption>
						<p>Procedures for Barcelona stimulus</p>
					</caption>
<table >
  <tr>
    <th colspan="3">You have rented an apartment in Barcelona, located at the intersection of Carrer de Vilardell and Carrer d’Hostafrancs de Sió (first localization, or visual search task). Using either of the map or sat views, complete the following tasks:</th>
  </tr>
  <tr>
    <td >1.</td>
    <td >[Localization.] Locate the apartment (street intersection) and fixate it for 3 seconds</td>
    <td ></td>
  </tr>
  <tr>
    <td >2.</td>
    <td >[POI.] Locate the name of the closest metro station and fixate it for 3 seconds.</td>
    <td ></td>
  </tr>
  <tr>
    <td colspan="3">You plan to go to the gym every morning at the Pavelló de l’Espanya Industrial sports complex (hint: large grey building with a domed roof abutting the large park of the same name). Using either of the map or sat views, complete the following tasks:</td>
  </tr>
  <tr>
    <td >3.</td>
    <td > [Route Planning]. Plan a route that you’re likely to take to this complex from your apartment every morning:</td>
    <td ></td>
  </tr>
  <tr>
    <td ></td>
    <td >a.</td>
    <td ></td>
  </tr>
  <tr>
    <td ></td>
    <td >b.</td>
    <td >Using the mouse, click on the apartment location.</td>
  </tr>
  <tr>
    <td ></td>
    <td >c.</td>
    <td >Locate the sports complex, and, using the mouse, indicate the path you would take.</td>
  </tr>
  <tr>
    <td ></td>
    <td >d.</td>
    <td >Using the mouse, click on the sports complex.</td>
  </tr>
  <tr>
    <td ></td>
    <td ></td>
    <td >Press the space bar when done.</td>
  </tr>
  <tr>
    <td ></td>
    <td ></td>
    <td ></td>
  </tr>
</table>

					
					</table-wrap>

<p>Because some of the street names might have sounded foreign to participants (making them difficult to remember), a card
with the names was made available beside the display for reference. This was pointed out to participants immediately following
calibration and prior to viewing of the stimulus. Participants
were asked to view the stimulus (cartographic or satellite representation) as they would normally, and to balance speed
and accuracy when performing the visual search.</p>
<p>All cartographic tasks were given in the same order, as they
were designed to follow a logical scenario. Both localization
and route planning tasks were realistically achievable. To preserve their natural characteristics, the cartographic tasks were
self-paced and their completion time was not limited. This
facilitated the use of the task completion time in analyses of
performance.</p>
	</sec>	
		<sec id="s4f">
  <title>Dependent Measures
</title>
 <p>Results were analyzed in terms of task performance (effectiveness and efficiency) and eye movement characteristics
and their dynamics. In particular, the following dependent
variables were examined:</p>	
	<p>1. Task completion time (ms). We treated completion
time (efficiency) as a main indicator of performance
since all participants were able to complete all of the
tasks successfully (effectiveness).</p>	
<p>2. Fixation duration (ms). A classical measure in eye-tracking research, average fixation duration is often treated as an indicator of cognitive resource management in visual information processing during scene viewing, e.g., see Henderson and Pierce [<xref ref-type="bibr" rid="b22">22</xref>], Rayner, Smith, Malcolm, and Henderson [<xref ref-type="bibr" rid="b48">48</xref>].</p>
<p>3. Saccade amplitude (deg). A classical measure of global vs. local visual information processing. Long saccade amplitudes are associated with a global visual scanning mode, while short amplitudes indicate local search, e.g., Pannasch et al. [<xref ref-type="bibr" rid="b44">44</xref>], Unema et al. [<xref ref-type="bibr" rid="b58">58</xref>], Mills et al. [<xref ref-type="bibr" rid="b42">42</xref>].</p>
<p>4. Ambient/Focal <italic>&#x039A;</italic> Coefficient. Derived by subtracting the standardized amplitude of the subsequent saccade from the standardized fixation duration, as expressed by (1), coefficient <italic>&#x039A;</italic> was calculated for each participant. Negative values of <italic>&#x039A;</italic> indicate relatively ambient viewing while positive values indicate relatively focal viewing. The higher the absolute value, the higher the ambient/focal magnitude [<xref ref-type="bibr" rid="b37">37</xref>].</p>
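As a concrete illustration, coefficient <italic>&#x039A;</italic> can be sketched in a few lines of Python, following Krejtz et al. [<xref ref-type="bibr" rid="b37">37</xref>]: each fixation's standardized duration minus the standardized amplitude of the saccade that follows it. The gaze values below are invented, purely for demonstration:

```python
from statistics import mean, stdev

def kappa(fix_dur_ms, next_sacc_amp_deg):
    """Coefficient K per Krejtz et al. [37]: for fixation i, K_i is the
    z-scored fixation duration minus the z-scored amplitude of the
    subsequent saccade. K_i < 0 suggests ambient viewing; K_i > 0 focal."""
    zd = [(d - mean(fix_dur_ms)) / stdev(fix_dur_ms) for d in fix_dur_ms]
    za = [(a - mean(next_sacc_amp_deg)) / stdev(next_sacc_amp_deg)
          for a in next_sacc_amp_deg]
    return [d - a for d, a in zip(zd, za)]

# Invented sequence: short fixations with long saccades early (ambient),
# then a long fixation with a short saccade (focal).
k = kappa([200, 250, 600], [8.0, 5.0, 1.5])
```

Standardization here uses each participant's own means and standard deviations, so <italic>&#x039A;</italic> values are comparable within, not across, participants.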
<p>For the statistical analyses of ambient/focal attention dynamics, task completion time was used as a within-subjects
independent measure, where we divided each experimental
task completion time into five equal periods for each participant. The five temporal periods were thus relative to the
task duration, in other words, normalized with respect to
task completion time, making them proportionately equivalent between tasks (see analysis of <italic>&#x039A;</italic> below).</p>
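The relative time binning described above can be sketched as follows (a minimal illustration; timestamps are assumed to be in milliseconds from task onset, with five periods as in the analysis):

```python
def assign_period(t_ms, completion_ms, n_periods=5):
    """Map an event timestamp (ms from task onset) to one of n_periods
    equal bins, normalized by the participant's own completion time so
    that bins are proportionately equivalent between tasks."""
    if not 0 <= t_ms <= completion_ms:
        raise ValueError("timestamp outside the task window")
    # the final instant (t_ms == completion_ms) falls into the last bin
    return min(int(n_periods * t_ms / completion_ms), n_periods - 1)

# A hypothetical 60-second task: onset, 20%, 50%, and end of task.
periods = [assign_period(t, 60000) for t in (0, 12000, 30000, 59999, 60000)]
```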
	</sec>	
</sec>	
	
	
	<sec id="s5">
  <title>Results</title>
 <p>To verify the hypotheses we used Analysis of Covariance (ANCOVA) with spatial working memory capacity as a covariate, followed by pairwise comparisons with Tukey HSD correction when effects reached statistical significance. All statistical computations were performed using the R statistical language [<xref ref-type="bibr" rid="b47">47</xref>].</p>	
	<sec id="s5a">
  <title>Familiarity with Barcelona and Google Maps</title>
 <p>Familiarity with the city of Barcelona was evaluated with a question about the number of visits. The percentage of participants who had visited the city at least once in their lives was 24%. None of the participants indicated having visited Barcelona more than three times. One may conclude that overall familiarity with Barcelona was low.</p>	
 <p>Google Maps was popular among participants, with only 7.41% claiming they had never used the service. <xref ref-type="table" rid="t02">Table 2</xref> presents the detailed distribution of answers to the question “How often do you use Google Maps?” The observed performance (task duration) and process (visual attention) measures thus mainly represent the attention and performance of experienced Google Maps users. Our findings are therefore most relevant when the location is new to the map user (e.g., as one would study a destination on a map prior to travel) but the user is already familiar with the service.</p>	
<table-wrap id="t02" position="float">
					<label>Table 2</label>
					<caption>
						<p>Responses to questionnaire question: “How often do you use Google Maps?” (N = 56)</p>
					</caption>
				<table >
  <tr>
    <th >Response </th>
    <th >Percent of responses</th>
  </tr>
  <tr>
    <td >Every day</td>
    <td >7.41%</td>
  </tr>
  <tr>
    <td >2-3 times a week</td>
    <td >33.33%</td>
  </tr>
  <tr>
    <td >Once a week</td>
    <td >12.96%</td>
  </tr>
  <tr>
    <td >2-3 times a month</td>
    <td >20.37%</td>
  </tr>
  <tr>
    <td >Once a month</td>
    <td >18.52%</td>
  </tr>
  <tr>
    <td >Never</td>
    <td >7.41%</td>
  </tr>
</table>
				
					</table-wrap>

	</sec>	
	<sec id="s5b">
  <title>Cartographic Task Performance
</title>
 <p>All subjects successfully completed the tasks. To gauge task performance we analyzed task duration using a 2×2 ANCOVA with task duration as the dependent variable. The between-subjects predictor was the visualization (cartographic map vs. satellite image). The within-subjects predictor was the cartographic task (localization vs. route planning). Spatial working memory capacity was treated as a covariate.</p>	
 <p>As expected, analysis revealed a main effect of visualization type, F(1, 53) = 13.05, p &#x003C; 0.001, η<sup>2</sup> = 0.11. Both tasks took longer to complete with the satellite image (M = 100842.03 ms, SE = 598.92) than with the cartographic map (M = 62342.85 ms, SE = 466.57).</p>	
 
 <p>Analysis also showed a statistically significant main effect of the covariate (spatial working memory capacity), F(1, 53) = 5.41, p &#x003C; 0.05, η<sup>2</sup> = 0.03. We performed a linear regression with task duration as the dependent variable and spatial working memory capacity as the predictor. Results showed that the slope of the regression line is significantly negative, β = −51328, SE = 21883, t(54) = 2.35, p &#x003C; 0.05. Results imply, not surprisingly, that the higher the spatial working memory capacity, the faster the completion time for both localization and route planning tasks.</p>
 <p>No other main or interaction effects were statistically significant (p>0.1).</p> 
	</sec>	
		<sec id="s5c">
  <title>Fixation Duration
</title>
 <p>Task performance results indicated that the cartographic map afforded faster task completion. Analysis of process measures (i.e., eye movements) can help reveal whether the task has an impact on performance. If route planning is the more cognitively demanding task, as suggested by Kiefer et al.’s [<xref ref-type="bibr" rid="b30">30</xref>] findings of increased cognitive load, then longer fixation durations would be expected during this task if, according to Just and Carpenter [<xref ref-type="bibr" rid="b28">28</xref>], they correspond to the duration of cognitive processing of fixated material. If the complexity of the visualizations has no impact, then similar task-dependent differences should be observed with both visualizations.</p>	
 <p>To test these predictions we performed a 2×2 ANCOVA with average fixation duration as the dependent variable. The between-subjects predictor was the visualization (cartographic map vs. satellite image). The within-subjects predictor was the cartographic task (localization vs. route planning). Spatial working memory capacity was treated as a covariate. Analysis showed a statistically significant interaction effect between visualization (cartographic map vs. satellite image) and task (localization vs. route planning), F(1, 53) = 5.87, p &#x003C; 0.02, η<sup>2</sup> = 0.03, see <xref ref-type="fig" rid="fig03">Figure 3</xref>. Pairwise comparisons with visualization as moderator showed that, on the cartographic map, participants produced significantly longer fixations (M = 360.36 ms, SE = 3.30) while completing route planning than while performing the localization task (M = 320.39 ms, SE = 1.44), t(52) = 2.24, p &#x003C; 0.03.</p>	



<fig id="fig03" fig-type="figure" position="float">
					<label>Figure 3</label>
					<caption>
						<p>Interaction effect of task and visualization on average fixation duration. Whiskers represent ±1 SE (standard error). Significant differences (p &#x003C; 0.05) are marked with an asterisk (*).</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-10-02-c-figure-03.png"/>
				</fig>




 
 <p>However, on the satellite image, the difference in average fixation duration was not statistically significant between the localization task (M = 354.32 ms, SE = 2.07)
and the route planning task (M = 349.02 ms, SE = 2.22),
t(52) = 1.16, p > 0.1. No other main or interaction effects
were statistically significant.</p>
 
	</sec>	
		<sec id="s5d">
  <title>Saccade Amplitude
</title>
 <p>Because saccade amplitude and direction are thought to reflect attentional selection and the spatial extent of parafoveal processing (peripheral scene degradation tends to curtail saccadic amplitudes [<xref ref-type="bibr" rid="b4">4</xref>]), larger saccade amplitudes are expected in tasks that require greater parafoveal processing, provided the stimuli do not in some way degrade peripheral scene perception.</p>	
 <p>To evaluate the effect of cartographic visualization and task on saccade amplitude, a 2×2 ANCOVA was performed with saccade amplitude (in visual degrees) as the dependent measure. The between-subjects predictor was the visualization (cartographic map vs. satellite image). The within-subjects predictor was the cartographic task (localization vs. route planning). Spatial working memory capacity was the covariate.</p>	
 
 <p>Analysis revealed that the interaction of task and visualization was marginally significant, F(1, 53) = 3.38, p = 0.072, η<sup>2</sup> = 0.01, see <xref ref-type="fig" rid="fig04">Figure 4</xref>.</p>
 <p>Pairwise comparisons with visualization as moderator showed that saccade amplitude is marginally greater during route planning (M = 4.65°, SE = 0.06) than during localization (M = 4.20°, SE = 0.04) on the cartographic map, t(52) = 1.78, p = 0.081. On the satellite image, the difference in saccade amplitude between the route planning task (M = 4.67°, SE = 0.06) and the localization task (M = 4.38°, SE = 0.04) was not statistically significant, t(53) = 0.85, p &#x003E; 0.1.</p> 




<fig id="fig04" fig-type="figure" position="float">
					<label>Figure 4</label>
					<caption>
						<p>Marginally significant interaction effect of task and
visualization on average saccade amplitude. Whiskers represent ±1SE (standard error).</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-10-02-c-figure-04.png"/>
				</fig>



<p>It is worth mentioning that the interaction effect of task and spatial working memory capacity reached marginal significance, F(1, 53) = 3.48, p = 0.068, η<sup>2</sup> = 0.01.</p>
	</sec>	
		<sec id="s5e">
  <title>Ambient/Focal Viewing
</title>
 <p>Preliminary analysis of the <italic>&#x039A;</italic> ambient/focal coefficient as a dependent measure used a similar design as for the fixation duration and saccade amplitude analyses, namely a 2×2 ANCOVA with visualization as a between-subjects factor and task as a within-subjects factor. Spatial working memory capacity was treated as a covariate.</p>	
 <p>Analysis revealed a statistically significant main effect of visualization, F(1, 53) = 8.24, p &#x003C; 0.01, η<sup>2</sup> = 0.09, with the satellite image eliciting more focal eye movements (M = 0.09, SE = 0.06) than the cartographic map, which elicited significantly more ambient eye movements (M = −0.12, SE = 0.04).</p>	
 
 <p>Similar to the analysis of saccade amplitude, the interaction effect of task and spatial working memory capacity reached marginal significance, F(1, 53) = 3.53, p = 0.066, η<sup>2</sup> = 0.02.</p>
 <p>To delve deeper into the differences in the dynamic patterns of ambient/focal fluctuation between the localization and route planning tasks, two analyses of covariance were conducted, one for each of the cartographic and satellite visualizations. Both analyses followed the same 2×5 design with task and time sequence (5 periods) as within-subjects fixed factors. Spatial working memory capacity was treated as a covariate.</p> 
  <p>Analysis of the satellite image revealed a significant main effect of time period, F(3.28, 81.94) = 15.14, p &#x003C; 0.001, η<sup>2</sup> = 0.13. In line with previous literature [<xref ref-type="bibr" rid="b59">59</xref>], pairwise comparisons showed that, regardless of task, attention changes from ambient to focal over time, see <xref ref-type="fig" rid="fig05">Figure 5</xref>(a): 1st period (M = −0.22, SE = 0.04), 2nd period (M = 0.03, SE = 0.04), 3rd period (M = 0.15, SE = 0.04), 4th period (M = 0.15, SE = 0.04), and 5th period (M = 0.25, SE = 0.04). The difference between the 1st time period and all others was statistically significant, T1:T2 t(100) = 3.63, p &#x003C; 0.01, T1:T3 t(100) = 5.23, p &#x003C; 0.001, T1:T4 t(100) = 5.90, p &#x003C; 0.001, and T1:T5 t(100) = 7.33, p &#x003C; 0.001. The difference between T2 and T5 (t(100) = 3.69, p &#x003C; 0.01) also reached significance.</p>	
 <p>Analysis of the cartographic map revealed similar effects, namely a significant main effect of time period,
F(3.28, 88.55) = 14.00, p&#x003C; 0.001, η<sup>2</sup> = 0.13. Descriptive
statistics also showed a similar progression from ambient
to focal attention in the time course of both tasks, see <xref ref-type="fig" rid="fig05">Figure 5</xref>(b): 1st period (M = −0.44, SE = 0.05), 2nd period (M = −0.16, SE = 0.05), 3rd period (M = −0.11, SE =
0.06), 4th period (M = 0.02, SE = 0.05), and 5th period
(M = 0.22, SE = 0.05). The difference between the first time
period and the subsequent periods was statistically significant, T1:T3 t(112.89) = 3.29, p &#x003C; 0.01, T1:T4 t(112.89) = 5.70, p &#x003C; 0.001, T1:T5 t(112.89) = 6.72, p &#x003C; 0.001, T2:T4 t(112.89) = 3.37, p &#x003C; 0.01, T2:T5 t(112.89) = 4.39, p &#x003C; 0.001, and T3:T5 t(112.89) = 3.42, p &#x003C; 0.01. Interestingly, a significant interaction effect between task and time period was found, F(3.16, 85.25) = 5.24, p &#x003C; 0.01, η<sup>2</sup> = 0.07, see <xref ref-type="fig" rid="fig05">Figure 5</xref>(b). The following pairwise comparisons with time period as moderator showed that in the 2nd time period attention is significantly more ambient during the localization task (M = −0.23, SE = 0.06) than during route planning (M = −0.06, SE = 0.08), t(125.11) = 2.04, p &#x003C; 0.05. A similar marginally significant difference was found in the 3rd time period (M = −0.18, SE = 0.07 for localization and M = −0.02, SE = 0.08 for route planning), t(125.11) = 1.96, p = 0.053. However, the pattern reverses in the last time period, with attention becoming significantly more ambient during route planning (M = 0.05, SE = 0.09) than during localization (M = 0.32, SE = 0.06), t(125.11) = 3.01, p &#x003C; 0.01.</p>	


<fig id="fig05" fig-type="figure" position="float">
					<label>Figure 5</label>
					<caption>
						<p>(a) Coefficient <italic>&#x039A;</italic> for task and time sequence on the satellite image. (b) Coefficient <italic>&#x039A;</italic> for task and time sequence on the cartographic map.</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-10-02-c-figure-05.png"/>
				</fig>



 
 <p>Finally, the interaction effect of task and spatial working memory capacity reached significance for the cartographic map, F(1, 27) = 6.67, p &#x003C; 0.01, η<sup>2</sup> = 0.04. Follow-up linear regression analyses with coefficient <italic>&#x039A;</italic> as the dependent variable and spatial working memory capacity as the predictor revealed a significant negative slope for the localization task, β = −0.82, SE = 0.30, t(27) = 2.69, p &#x003C; 0.02, while for route planning the slope was positive but not significant, β = 0.36, SE = 0.36, t(27) = 0.99, p &#x003E; 0.1, n.s. Results suggest that higher spatial working memory capacity led to more ambient attention on the cartographic map, but only during localization and not during route planning. Presumably, working memory serves as a facilitator of visual search, which dominates localization on a cartographic map. During route planning, perhaps more complex cognitive resources are involved beyond spatial working memory capacity. Further research is needed to investigate which cognitive resources are required for controlling the dynamics of visual attention during different tasks.</p>
 
	</sec>	
	</sec>	
	
		<sec id="s6">
  <title>General Discussion</title>
 <p>Analysis of performance measures (efficiency and effectiveness, or speed and accuracy) shows that the cartographic representation affords faster task completion than the satellite representation, regardless of task. Because everyone managed to complete all tasks, a ceiling effect precludes discussion of the effect of task or cartographic product on accuracy.
Of the two main tasks considered, route planning tended to
be performed faster than localization, perhaps because both
route endpoints had already been identified. The satellite image typically includes greater detail, and can be more cognitively demanding, than its cartographic counterpart. Results
suggest a link between completion time and pattern of visual
attention. Analysis of process measures (eye movements)
provides further insights, yielding possible effects of cartographic product on cognitive requirements.</p>	
 <p>Analysis of fixation durations shows that task type has an impact, but only when using the cartographic map. This appears
to agree with Mills et al.’s [<xref ref-type="bibr" rid="b42">42</xref>] observations of task influence. Results are also in line with the eye-mind assumption posited by Just and Carpenter [<xref ref-type="bibr" rid="b28">28</xref>], who pointed out
that fixation duration corresponds to the duration of cognitive
processing of fixated material. Salthouse and Ellis [<xref ref-type="bibr" rid="b51">51</xref>]
also classically described a series of experiments showing
that fixation duration is prolonged when participants are instructed to process visual information. The interaction effect
of task and cartographic product on fixation duration suggests that the cognitive requirements of the satellite image
override those of the task, i.e., the complexity of the satellite
image obscures the effect of task.</p>	
 
 <p>Although route planning appeared to be performed faster
(at a statistical tendency level), fixation durations show that
route planning may have been more demanding than localization, as observed when using the cartographic map. This
would be in agreement with Kiefer et al.’s [<xref ref-type="bibr" rid="b30">30</xref>] finding
of increased cognitive load associated with route planning.
Decreased saccade amplitude in route planning compared to localization also suggests greater cognitive load, insofar as
decreased amplitude suggests greater focal viewing, as also
indicated by <italic>&#x039A;</italic>.</p>
 <p>Extending traditional fixation duration metrics, coefficient
<italic>&#x039A;</italic> fosters understanding of the dynamics of eye movements
as revealed in differing patterns between both tasks. The localization task produced a fairly common dynamical pattern,
with eye movements initially ambient, becoming more focal
over time. When close to locating the target on the map,
eye movements become more focal with the ratio between
saccade amplitude and fixation duration leaning towards the
latter. The route planning task, however, yielded a more
complex dynamical pattern, but only over the cartographic
map. Starting in ambient mode when locating the start and
end points of the route, eye movements become focal when
following the route visually, then finally turn more ambient
during route confirmation. The use of <italic>&#x039A;</italic> showed the three
stages of route planning: route end point localization, route
following, and route confirmation. However, the complexity
of the satellite image again obscures this progression.</p> 
  <p>Just and Carpenter [<xref ref-type="bibr" rid="b28">28</xref>] noted that eye fixation data
make it possible to distinguish the three stages of visual
search performance, although their analysis relied on the relation between fixation duration and angular disparity. While
qualitatively effective, the relation provided no easy way of
combining fixation duration and disparity into a useful quantity with which to distinguish the cognitive stages. The difference in cartographic product notwithstanding, our <italic>&#x039A;</italic> metric [<xref ref-type="bibr" rid="b37">37</xref>] illustrates when these inter-stage
transitions may occur. When <italic>&#x039A;</italic> &#x003C; 0, relatively short fixations
are followed by relatively long saccades, suggesting ambient
processing during visual search. When <italic>&#x039A;</italic> &#x003E; 0, relatively long fixations are followed by relatively short saccades, suggesting focal processing during decision-making. Subsequent
gaze transitions may indicate confirmation, as noted by Just
and Carpenter [<xref ref-type="bibr" rid="b28">28</xref>].</p>	

	</sec>	
	
	<sec id="s7">
  <title>Conclusions</title>
 <p>We presented a demonstration of how traditional gaze
metrics can be augmented by analysis of dynamic attention
with coefficient <italic>&#x039A;</italic> to study differences in cartographic tasks
while examining the utility of cartographic maps versus their
satellite image counterparts.</p>	
 <p>We showed how traditional gaze metrics of fixation durations and saccade amplitudes help explain differences in
performance observed during cartographic tasks. Specifically, we observed performance and gaze behavior differences among participants as they worked with satellite and
cartographic representations. Performance results regarding
the cartographic map suggest a nuanced outcome corroborating earlier work, suggesting impoverished performance
using satellite images [<xref ref-type="bibr" rid="b12 b17 b13">12, 17, 13</xref>].</p>	
 
 <p>The benefits of cartographic maps were explained to a certain extent by fixation durations and saccadic amplitudes. On
average, fixations were shorter on cartographic maps than on
satellite images, likely facilitating faster cognitive processing, assuming Just and Carpenter’s [<xref ref-type="bibr" rid="b28">28</xref>] eye-mind assumption. Fixation durations and saccade amplitudes were also
able to indicate task differences, suggesting route planning
as the more demanding task due to the significantly longer
fixation durations and (marginally) larger saccade amplitudes
employed compared to the localization (search) task.</p>
 <p>Beyond traditional eye movement metrics, which describe
visual behavior over the duration of the task (in the aggregate
or mean), coefficient <italic>&#x039A;</italic> showed how the tasks differed over
the course of their execution. The localization task elicited
a fairly common dynamical pattern with gaze initially ambient, becoming more focal over time. The route planning task
on the cartographic map, however, yielded a more complex
pattern potentially resembling Just and Carpenter’s [<xref ref-type="bibr" rid="b28">28</xref>]
search → decide → confirm progression.</p> 
 
	</sec>	
	
	
	 <sec id="s8" sec-type="COI-statement">
      <title>Acknowledgements</title>
      <p>We would like to thank Mr. Janusz Arabski, an undergraduate student of the University of Social Sciences and Humanities in Warsaw, Poland, for his help in conducting the study.</p>
<p>The authors declare that there is
no conflict of interest regarding the publication of this paper.	</p>
	
	</sec>	
	</body>	



	 
  <back>
<ref-list>
<ref id="b1"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Allen</surname>, <given-names>G. L.</given-names></string-name></person-group> (<year>1999</year>). <chapter-title>Spatial Abilities, Cognitive Maps, and Wayfinding: Bases for Individual Differences in Spatial Cognition and Behavior</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>R. G.</given-names> <surname>Golledge</surname></string-name> (<role>Ed.</role>),</person-group> <source>Wayfinding Behavior: Cognitive Mapping and Other Spatial Processes</source> (pp. <fpage>46</fpage>–<lpage>80</lpage>). <publisher-name>The Johns Hopkins University Press</publisher-name>.</mixed-citation></ref>
<ref id="b2"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Boér</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Çöltekin</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Clarke</surname>, <given-names>K. C.</given-names></string-name></person-group> (<year>2013</year>). <article-title>An Evaluation of Web-based Geovisualizations for Different Levels of Abstraction and Realism—What do users predict?</article-title> In <source>Proceedings of the International Cartographic Conference</source> (pp. <fpage>209</fpage>–<lpage>220</lpage>). <conf-loc>Dresden, Germany</conf-loc>.</mixed-citation></ref>
<ref id="b3"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Brychtová</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Çöltekin</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Pászto</surname>, <given-names>V.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Do the visual complexity algorithms match the generalization process in geographical displays?</article-title> ISPRS International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 375–378. <pub-id pub-id-type="doi">10.5194/isprs-archives-XLI-B2-375-2016</pub-id></mixed-citation></ref>
<ref id="b4"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Cajar</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Schneeweiß</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Engbert</surname>, <given-names>R.</given-names></string-name>, &amp; <string-name><surname>Laubrock</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Coupling of attention and saccades when viewing scenes with central and peripheral degradation.</article-title> <source>Journal of Vision (Charlottesville, Va.)</source>, <volume>16</volume>(<issue>2</issue>), <fpage>8</fpage>. <pub-id pub-id-type="doi">10.1167/16.2.8</pub-id><pub-id pub-id-type="pmid">27271524</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="b5"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Carter</surname>, <given-names>J. R.</given-names></string-name></person-group> (<year>2005</year>). <article-title>The many dimensions of map use.</article-title> In <source>Proceedings of the International Cartographic Conference</source>.</mixed-citation></ref>
<ref id="b6"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Castner</surname>, <given-names>H. W.</given-names></string-name>, &amp; <string-name><surname>Eastman</surname>, <given-names>R. J.</given-names></string-name></person-group> (<year>1984</year>). <article-title>Eye-Movement Parameters and Perceived Map Complexity–I.</article-title> <source>American Cartographer</source>, <volume>11</volume>(<issue>2</issue>), <fpage>107</fpage>–<lpage>117</lpage>. <pub-id pub-id-type="doi">10.1559/152304084783914768</pub-id><issn>0094-1689</issn></mixed-citation></ref>
<ref id="b7"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Castner</surname>, <given-names>H. W.</given-names></string-name>, &amp; <string-name><surname>Eastman</surname>, <given-names>R. J.</given-names></string-name></person-group> (<year>1985</year>). <article-title>Eye-Movement Parameters and Perceived Map Complexity–II.</article-title> <source>American Cartographer</source>, <volume>12</volume>(<issue>1</issue>), <fpage>29</fpage>–<lpage>40</lpage>. <pub-id pub-id-type="doi">10.1559/152304085783914712</pub-id><issn>0094-1689</issn></mixed-citation></ref>
<ref id="b8"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Çöltekin</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Fabrikant</surname>, <given-names>S. I.</given-names></string-name>, &amp; <string-name><surname>Lacayo</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Exploring the efficiency of users’ visual analytics strategies based on sequence analysis of eye movement recordings.</article-title> <source>International Journal of Geographical Information Science</source>, <volume>24</volume>(<issue>10</issue>), <fpage>1559</fpage>–<lpage>1575</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1080/13658816.2010.511718</pub-id> <pub-id pub-id-type="doi">10.1080/13658816.2010.511718</pub-id><issn>1365-8816</issn></mixed-citation></ref>
<ref id="b9"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Çöltekin</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Heil</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Garlandini</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Fabrikant</surname>, <given-names>S. I.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Evaluating the Effectiveness of Interactive Map Interface Designs: A Case Study Integrating Usability Metrics with Eye-Movement Analysis.</article-title> <source>Cartography and Geographic Information Science</source>, <volume>36</volume>(<issue>1</issue>), <fpage>5</fpage>–<lpage>17</lpage>. <pub-id pub-id-type="doi">10.1559/152304009787340197</pub-id><issn>1523-0406</issn></mixed-citation></ref>
<ref id="b10"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Dähn</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Cap</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Application Transparency: How and Why are Providers Manipulating Our Information?</article-title> <source>IEEE Computer</source>, <volume>47</volume>(<issue>2</issue>), <fpage>56</fpage>–<lpage>61</lpage>. <pub-id pub-id-type="doi">10.1109/MC.2013.187</pub-id></mixed-citation></ref>
<ref id="b11"><mixed-citation publication-type="thesis" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Dajlido</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2013</year>). Przetwarzanie Materiału Realnego lub Abstrakcyjnego: Konstrukcja Testów w Ramach Nowego Wymiaru Pomiaru Kompetencji Poznawczych (Unpublished master’s thesis). University of Social Sciences and Humanities, Warsaw, Poland.</mixed-citation></ref>
<ref id="b12"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Dillemuth</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2005</year>). <article-title>Map Design Evaluation for Mobile Display.</article-title> <source>Cartography and Geographic Information Science</source>, <volume>32</volume>(<issue>4</issue>), <fpage>285</fpage>–<lpage>301</lpage>. <pub-id pub-id-type="doi">10.1559/152304005775194773</pub-id><issn>1523-0406</issn></mixed-citation></ref>
<ref id="b13"><mixed-citation publication-type="unknown" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Dong</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Liao</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Roth</surname>, <given-names>R. E.</given-names></string-name>, &amp; <string-name><surname>Wang</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2014</year>, February). <article-title>Eye Tracking to Explore the Potential of Enhanced Imagery Basemaps in Web Mapping.</article-title> The Cartographic Journal. doi: <pub-id pub-id-type="doi">10.1179/1743277413Y.0000000071</pub-id></mixed-citation></ref>
<ref id="b14"><mixed-citation publication-type="thesis" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Eastman</surname>, <given-names>J. R.</given-names></string-name></person-group> (<year>1977</year>). Map Complexity: An Information Approach (Unpublished doctoral dissertation). Queen’s University, Kingston, ON, Canada.</mixed-citation></ref>
<ref id="b15"><mixed-citation publication-type="unknown" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Fairbairn</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2006</year>a). <article-title>Measuring Map Complexity.</article-title> The Cartographic Journal, 43(3), 224–238. Retrieved from http://dx.doi.org/<pub-id pub-id-type="doi">10.1179/000870406X169883</pub-id> doi:</mixed-citation></ref>
<ref id="b16"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Fairbairn</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2006</year>b). <article-title>Measuring Map Complexity.</article-title> <source>The Cartographic Journal</source>, <volume>43</volume>(<issue>3</issue>), <fpage>224</fpage>–<lpage>238</lpage>. <pub-id pub-id-type="doi">10.1179/000870406X169883</pub-id><issn>0008-7041</issn></mixed-citation></ref>
<ref id="b17"><mixed-citation publication-type="thesis" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Francelet</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2014</year>). Realism and Individual Differences in Route-Learning (Unpublished master’s thesis). University of Zürich.</mixed-citation></ref>
<ref id="b18"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Goldberg</surname>, <given-names>J. H.</given-names></string-name>, &amp; <string-name><surname>Kotval</surname>, <given-names>X. P.</given-names></string-name></person-group> (<year>1999</year>). <article-title>Computer Interface Evaluation Using Eye Movements: Methods and Constructs.</article-title> <source>International Journal of Industrial Ergonomics</source>, <volume>24</volume>(<issue>6</issue>), <fpage>631</fpage>–<lpage>645</lpage>. <pub-id pub-id-type="doi">10.1016/S0169-8141(98)00068-7</pub-id><issn>0169-8141</issn></mixed-citation></ref>
<ref id="b19"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Golledge</surname>, <given-names>R. G.</given-names></string-name>, <string-name><surname>Dougherty</surname>, <given-names>V.</given-names></string-name>, &amp; <string-name><surname>Bell</surname>, <given-names>S.</given-names></string-name></person-group> (<year>1995</year>). <article-title>Acquiring Spatial Knowledge: Survey Versus Route-Based Knowledge in Unfamiliar Environments.</article-title> <source>Annals of the Association of American Geographers</source>, <volume>85</volume>(<issue>1</issue>), <fpage>134</fpage>–<lpage>158</lpage>. <pub-id pub-id-type="doi">10.1080/00045602409356894</pub-id><issn>0004-5608</issn></mixed-citation></ref>
<ref id="b20"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Hammer</surname>, <given-names>J. H.</given-names></string-name>, <string-name><surname>Maurus</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Beyerer</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2013</year>). <chapter-title>Realtime 3D Gaze Analysis in Mobile Applications</chapter-title>. In <source>Proceedings of the 2013 conference on eye tracking south africa</source> (pp. <fpage>75</fpage>–<lpage>78</lpage>). <publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>; Retrieved from <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/2509315.2509333">http://doi.acm.org/10.1145/2509315.2509333</ext-link>, <pub-id pub-id-type="doi">10.1145/2509315.2509333</pub-id></mixed-citation></ref>
<ref id="b21"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Hegarty</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Waller</surname>, <given-names>D. A.</given-names></string-name></person-group> (<year>2005</year>). <chapter-title>Individual Differences in Spatial Abilities</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>P.</given-names> <surname>Shah</surname></string-name> &amp; <string-name><given-names>A.</given-names> <surname>Miyake</surname></string-name> (<role>Eds.</role>),</person-group> <source>The cambridge handbook of visuopatial thinking</source> (pp. <fpage>121</fpage>–<lpage>169</lpage>). <publisher-name>Cambridge University Press</publisher-name>. <pub-id pub-id-type="doi">10.1017/CBO9780511610448.005</pub-id></mixed-citation></ref>
<ref id="b22"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Henderson</surname>, <given-names>J. M.</given-names></string-name>, &amp; <string-name><surname>Pierce</surname>, <given-names>G. L.</given-names></string-name></person-group> (<year>2008</year>). Eye movements during scene viewing: Evidence for mixed control of fixation durations. Psychonomic Bulletin &amp;Review, 15(3), 566–573. Retrieved from http://dx.doi.org/<pub-id pub-id-type="doi">10.3758/PBR.15.3.566</pub-id> doi:</mixed-citation></ref>
<ref id="b23"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Andersson</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Dewhurst</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Van de Weijer</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2011</year>). <source>Eye Tracking: A Comprehensive Guide to Methods and Measures</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="b24"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Ingle</surname>, <given-names>D.</given-names></string-name></person-group> (<year>1967</year>). <article-title>Two visual mechanisms underlying the behavior of fish.</article-title> <source>Psychologische Forschung</source>, <volume>31</volume>(<issue>1</issue>), <fpage>44</fpage>–<lpage>51</lpage>. <pub-id pub-id-type="doi">10.1007/BF00422385</pub-id><pub-id pub-id-type="pmid">5605116</pub-id><issn>0033-3026</issn></mixed-citation></ref>
<ref id="b25"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Irwin</surname>, <given-names>D. E.</given-names></string-name>, &amp; <string-name><surname>Zelinsky</surname>, <given-names>G. J.</given-names></string-name></person-group> (<year>2002</year>). <article-title>Eye movements and scene perception: Memory for things observed.</article-title> <source>Perception &amp; Psychophysics</source>, <volume>64</volume>(<issue>6</issue>), <fpage>882</fpage>–<lpage>895</lpage>. <pub-id pub-id-type="doi">10.3758/BF03196793</pub-id><pub-id pub-id-type="pmid">12269296</pub-id><issn>0031-5117</issn></mixed-citation></ref>
<ref id="b26"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Jacob</surname>, <given-names>R. J. K.</given-names></string-name>, &amp; <string-name><surname>Karn</surname>, <given-names>K. S.</given-names></string-name></person-group> (<year>2003</year>). <chapter-title>Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the Promises</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>J.</given-names> <surname>Hyönä</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Radach</surname></string-name>, &amp; <string-name><given-names>H.</given-names> <surname>Deubel</surname></string-name> (<role>Eds.</role>),</person-group> <source>The Mind’s Eye: Cognitive and Applied Aspects of Eye Movement Research</source> (pp. <fpage>573</fpage>–<lpage>605</lpage>). <publisher-loc>Amsterdam, The Netherlands</publisher-loc>: <publisher-name>Elsevier Science</publisher-name>. <pub-id pub-id-type="doi">10.1016/B978-044451020-4/50031-1</pub-id></mixed-citation></ref>
<ref id="b27"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Jäger</surname>, <given-names>A. O.</given-names></string-name>, <string-name><surname>Süß</surname>, <given-names>H. M.</given-names></string-name>, &amp; <string-name><surname>Beauducel</surname>, <given-names>A.</given-names></string-name></person-group> (<year>1997</year>). <source>Berlin test of intelligence</source>. <publisher-loc>Göttingen</publisher-loc>: <publisher-name>Hogrefe</publisher-name>.</mixed-citation></ref>
<ref id="b28"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Just</surname>, <given-names>M. A.</given-names></string-name>, &amp; <string-name><surname>Carpenter</surname>, <given-names>P. A.</given-names></string-name></person-group> (<year>1976</year>). <article-title>Eye Fixations and Cognitive Processes.</article-title> <source>Cognitive Psychology</source>, <volume>8</volume>(<issue>4</issue>), <fpage>441</fpage>–<lpage>480</lpage>. <pub-id pub-id-type="doi">10.1016/0010-0285(76)90015-3</pub-id><issn>0010-0285</issn></mixed-citation></ref>
<ref id="b29"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Keates</surname>, <given-names>J. S.</given-names></string-name></person-group> (<year>1982</year>). <source>Understanding maps. Burnt Mill</source>. <publisher-loc>Harlow, Essex, UK</publisher-loc>: <publisher-name>Longman Group Limited</publisher-name>.</mixed-citation></ref>
<ref id="b30"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Kiefer</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Giannopoulos</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Duchowski</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Raubal</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Measuring Cognitive Load for Map Tasks through Pupil Diameter.</article-title> In <source>Proceedings of the Ninth International Conference on Geographic Information Science (GIScience 2016)</source>. <publisher-name>Springer International Publishing.</publisher-name> <pub-id pub-id-type="doi">10.1007/978-3-319-45738-3_21</pub-id></mixed-citation></ref>
<ref id="b31"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kiefer</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Giannopoulos</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Kremer</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Schlieder</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Raubal</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2014</year>). <chapter-title>Starting to get bored: An outdoor eye tracking study of tourists exploring a city panorama</chapter-title>. In <source>Proceedings of the 2014 Symposium on Eye Tracking Research and Applications</source> (pp. <fpage>315</fpage>–<lpage>318</lpage>). <publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>; Retrieved from <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/2578153.2578216">http://doi.acm.org/10.1145/2578153.2578216</ext-link>, <pub-id pub-id-type="doi">10.1145/2578153.2578216</pub-id></mixed-citation></ref>
<ref id="b32"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kiefer</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Giannopoulos</surname>, <given-names>I.</given-names></string-name>, &amp; <string-name><surname>Raubal</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2013</year>). <chapter-title>Using eye movements to recognize activities on cartographic maps</chapter-title>. In <source>Proceedings of the 21st ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems</source> (pp. <fpage>488</fpage>–<lpage>491</lpage>). <publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>; Retrieved from <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/2525314.2525467">http://doi.acm.org/10.1145/2525314.2525467</ext-link>, <pub-id pub-id-type="doi">10.1145/2525314.2525467</pub-id></mixed-citation></ref>
<ref id="b33"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kiefer</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Giannopoulos</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Raubal</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Duchowski</surname>, <given-names>A. T.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Eye Tracking for Spatial Research: Cognition, Computation, Challenges.</article-title> <source>Spatial Cognition and Computation</source>, <volume>17</volume>(<issue>1–2</issue>), <fpage>1</fpage>–<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1080/13875868.2016.1254634</pub-id><issn>1387-5868</issn></mixed-citation></ref>
<ref id="b34"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kiefer</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Straub</surname>, <given-names>F.</given-names></string-name>, &amp; <string-name><surname>Raubal</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2012</year>). <chapter-title>Towards location-aware mobile eye tracking</chapter-title>. In <source>Proceedings of the 2012 Symposium on Eye Tracking Research and Applications</source> (pp. <fpage>313</fpage>–<lpage>316</lpage>). <publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>; Retrieved from <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/2168556.2168624">http://doi.acm.org/10.1145/2168556.2168624</ext-link>, <pub-id pub-id-type="doi">10.1145/2168556.2168624</pub-id></mixed-citation></ref>
<ref id="b35"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Knapp</surname>, <given-names>L.</given-names></string-name></person-group> (<year>1995</year>). <chapter-title>A Task Analysis Approach to the Visualization of Geographic Data</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>T. L.</given-names> <surname>Nygeres</surname></string-name>, <string-name><given-names>D. M.</given-names> <surname>Mark</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Laurini</surname></string-name>, &amp; <string-name><given-names>M. J.</given-names> <surname>Egenhofer</surname></string-name> (<role>Eds.</role>),</person-group> <source>Cognitive aspects of human computer interaction For geographic information systems</source> (pp. <fpage>355</fpage>–<lpage>371</lpage>). <publisher-name>Kluwer Academic Publishers</publisher-name>. <pub-id pub-id-type="doi">10.1007/978-94-011-0103-5_25</pub-id></mixed-citation></ref>
<ref id="b36"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Krejtz</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Szarkowska</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Krejtz</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Walczak</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Duchowski</surname>, <given-names>A. T.</given-names></string-name></person-group> (<conf-date>2012, March 28-30</conf-date>). <article-title>Audio Description as an Aural Guide of Children’s Visual Attention: Evidence from an Eye-Tracking Study.</article-title> In <source>Proceedings of the 2012 Symposium on Eye Tracking Research and Applications</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>. <pub-id pub-id-type="doi">10.1145/2168556.2168572</pub-id></mixed-citation></ref>
<ref id="b37"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Krejtz</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Duchowski</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Krejtz</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Szarkowska</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Kopacz</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2016</year>). Discerning Ambient/Focal Attention with Coefficient K. Transactions on Applied Perception, 13(3).</mixed-citation></ref>
<ref id="b38"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Lohmeyer</surname>, <given-names>Q.</given-names></string-name>, &amp; <string-name><surname>Meboldt</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2015</year>). <article-title>How We Understand Engineering Drawings: An Eye Tracking Study Investigating Skimming and Scrutinizing Sequences.</article-title> In <source>Proceedings of the International Conference on Engineering Design</source>. <conf-loc>Milan, Italy</conf-loc>.</mixed-citation></ref>
<ref id="b39"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>MacEachren</surname>, <given-names>A. M.</given-names></string-name></person-group> (<year>1982</year>). <article-title>Map Complexity: Comparison and Measurement.</article-title> <source>American Cartographer</source>, <volume>9</volume>(<issue>1</issue>), <fpage>31</fpage>–<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1559/152304082783948286</pub-id><issn>0094-1689</issn></mixed-citation></ref>
<ref id="b40"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Mantiuk</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2017</year>). <chapter-title>Accuracy of High-End and Self-build Eye-Tracking Systems</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>S.</given-names> <surname>Kobayashi</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Piegat</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Pejas</surname></string-name>, <string-name><given-names>I.</given-names> <surname>El Fray</surname></string-name>, &amp; <string-name><given-names>J.</given-names> <surname>Kacprzyk</surname></string-name> (<role>Eds.</role>),</person-group> <source>´ Hard and Soft Computing for Artificial Intelligence, Multimedia and Security</source> (<edition>1st ed.</edition>, pp. <fpage>216</fpage>–<lpage>227</lpage>). <publisher-name>Springer International Publishing</publisher-name>. <pub-id pub-id-type="doi">10.1007/978-3-319-48429-7_20</pub-id></mixed-citation></ref>
<ref id="b41"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>McConkie</surname>, <given-names>G. W.</given-names></string-name>, &amp; <string-name><surname>Rayner</surname>, <given-names>K.</given-names></string-name></person-group> (<year>1975</year>). <article-title>The Span of the Effective Stimulus During a Fixation in Reading.</article-title> <source>Perception &amp; Psychophysics</source>, <volume>17</volume>(<issue>6</issue>), <fpage>578</fpage>–<lpage>586</lpage>. <pub-id pub-id-type="doi">10.3758/BF03203972</pub-id><issn>0031-5117</issn></mixed-citation></ref>
<ref id="b42"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Mills</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Hollingworth</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Van der Stigchel</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Hoffman</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Dodd</surname>, <given-names>M. D.</given-names></string-name></person-group> (<year>2011</year>). <article-title>Examining the influence of task set on eye movements and fixations.</article-title> <source>Journal of Vision (Charlottesville, Va.)</source>, <volume>11</volume>(<issue>8</issue>), <fpage>17</fpage>. <pub-id pub-id-type="doi">10.1167/11.8.17</pub-id><pub-id pub-id-type="pmid">21799023</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="b43"><mixed-citation publication-type="unknown" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2010</year>). <article-title>An adaptive algorithm for fixation, saccade, and glissade detection in eyetracking data.</article-title> Behaviour Research Methods, 42(1), 188–204. Retrieved from http://dx.doi.org/<pub-id pub-id-type="doi">10.3758/BRM.42.1.188</pub-id> doi:</mixed-citation></ref>
<ref id="b44"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Pannasch</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Helmert</surname>, <given-names>J. R.</given-names></string-name>, <string-name><surname>Roth</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Herbold</surname>, <given-names>A.-K.</given-names></string-name>, &amp; <string-name><surname>Walter</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2008</year>). <article-title>Visual Fixation Durations and Saccade Amplitudes: Shifting Relationship in a Variety of Conditions.</article-title> <source>Journal of Eye Movement Research</source>, <volume>2</volume>(<issue>2</issue>), <fpage>1</fpage>–<lpage>19</lpage>.<issn>1995-8692</issn></mixed-citation></ref>
<ref id="b45"><mixed-citation publication-type="thesis" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Peysakhovich</surname>, <given-names>V.</given-names></string-name></person-group> (<year>2016</year>). Study of pupil diameter and eye movements to enhance flight safety (Unpublished doctoral dissertation). Université de Toulouse, Toulouse, France.</mixed-citation></ref>
<ref id="b46"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Qvarfordt</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Zhai</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2005</year>). <article-title>Conversing with the user based on eye-gaze patterns.</article-title> In <source>Proceedings of the SIGCHI Conference on Human Factors in Computing Systems</source> (pp. <fpage>221</fpage>–<lpage>230</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>. <pub-id pub-id-type="doi">10.1145/1054972.1055004</pub-id></mixed-citation></ref>
<ref id="b47"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><collab>R Development Core Team</collab></person-group>. (<year>2011</year>). R: A Language and Environment for Statistical Computing [Computer software manual]. Vienna, Austria. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://www.R-project.org/">http://www.R-project.org/</ext-link> (ISBN 3-900051-07-0)</mixed-citation></ref>
<ref id="b48"><mixed-citation publication-type="unknown" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Rayner</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Smith</surname>, <given-names>T. J.</given-names></string-name>, <string-name><surname>Malcolm</surname>, <given-names>G. L.</given-names></string-name>, &amp; <string-name><surname>Henderson</surname>, <given-names>J. M.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Eye Movements and Visual Encoding During Scene Perception.</article-title> Psychological Science, 20(1), 6–10. Retrieved from http://doi.org/<pub-id pub-id-type="doi">10.1111/j.1467-9280.2008.02243.x</pub-id> doi:</mixed-citation></ref>
<ref id="b49"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Reingold</surname>, <given-names>E. M.</given-names></string-name>, <string-name><surname>Charness</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Pomplun</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Stampe</surname>, <given-names>D. M.</given-names></string-name></person-group> (<year>2001</year>). <article-title>Visual span in expert chess players: Evidence from eye movements.</article-title> <source>Psychological Science</source>, <volume>12</volume>(<issue>1</issue>), <fpage>48</fpage>–<lpage>55</lpage>. <pub-id pub-id-type="doi">10.1111/1467-9280.00309</pub-id><pub-id pub-id-type="pmid">11294228</pub-id><issn>0956-7976</issn></mixed-citation></ref>
<ref id="b50"><mixed-citation publication-type="unknown" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Rosenholtz</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Huang</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Raj</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Balas</surname>, <given-names>B. J.</given-names></string-name>, &amp; <string-name><surname>Ilie</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2012</year>, January). <article-title>A summary statistic representation in peripheral vision explains visual search.</article-title> Journal of Vision, 12(4). doi: <pub-id pub-id-type="doi" specific-use="author">10.1167/12.4.14</pub-id> Rosenholtz, R., Li, Y., &amp; Nakano, L. (2007, August). Measuring visual clutter. Journal of Vision, 7(2), 1–22. doi:<pub-id pub-id-type="doi">10.1167/7.2.17</pub-id> <pub-id pub-id-type="doi">10.1167/12.4.14</pub-id></mixed-citation></ref>
<ref id="b51"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Salthouse</surname>, <given-names>T. A.</given-names></string-name>, &amp; <string-name><surname>Ellis</surname>, <given-names>C. L.</given-names></string-name></person-group> (<year>1980</year>). <article-title>Determinants of eye-fixation duration.</article-title> <source>The American Journal of Psychology</source>, <volume>93</volume>(<issue>2</issue>), <fpage>207</fpage>–<lpage>234</lpage>. <pub-id pub-id-type="doi">10.2307/1422228</pub-id><pub-id pub-id-type="pmid">7406068</pub-id><issn>0002-9556</issn></mixed-citation></ref>
<ref id="b52"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Salvucci</surname>, <given-names>D. D.</given-names></string-name>, &amp; <string-name><surname>Goldberg</surname>, <given-names>J. H.</given-names></string-name></person-group> (<year>2000</year>). <chapter-title>Identifying Fixations and Saccades in Eye-tracking Protocols</chapter-title>. In <source>Proceedings of the 2000 Symposium on Eye Tracking Research &amp; Applications</source> (pp. <fpage>71</fpage>–<lpage>78</lpage>). <publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>; Retrieved from <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/355017.355028">http://doi.acm.org/10.1145/355017.355028</ext-link>, <pub-id pub-id-type="doi">10.1145/355017.355028</pub-id></mixed-citation></ref>
<ref id="b53"><mixed-citation publication-type="web-page" specific-use="unparsed">Schmitz, C., &amp; LimeSurvey Project Team. (<year>2012</year>). LimeSurvey: An Open Source Survey Tool [Computer software manual]. Hamburg, Germany. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://www.limesurvey.org">http://www.limesurvey.org</ext-link></mixed-citation></ref>
<ref id="b54"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Schnur</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Bektas</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Salahi</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Çöltekin</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2010</year>). <article-title>A Comparison of Measured and Perceived Visual Complexity for Dynamic Web Maps.</article-title> In <source>Proceedings of the Sixth International Conference on Geographic Information Science (GIScience 2010)</source>. <publisher-name>Springer International Publishing.</publisher-name> Retrieved from https://doi.org/<pub-id pub-id-type="doi">10.5167/uzh-38771</pub-id> doi:</mixed-citation></ref>
<ref id="b55"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Starker</surname>, <given-names>I.</given-names></string-name>, &amp; <string-name><surname>Bolt</surname>, <given-names>R. A.</given-names></string-name></person-group> (<year>1990</year>). <article-title>A gaze-responsive selfdisclosing display.</article-title> In <source>Proceedings of the SIGCHI Conference on Human Factors in Computing Systems</source> (pp.<fpage>3</fpage>–<lpage>10</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>.</mixed-citation></ref>
<ref id="b56"><mixed-citation publication-type="book" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Šterba</surname>, <given-names>Z.</given-names></string-name></person-group>, Šašinka, ˇ C., Stacho ˇ n, Z., Štampach, R., &amp; Morong, K. (<year>2015</year>). Selected Issues of Experimental Testing in Cartography. Brno, Czech Republic: Masaryk University Press. doi: <pub-id pub-id-type="doi">10.5817/CZ.MUNI.M210-7893-2015</pub-id></mixed-citation></ref>
<ref id="b57"><mixed-citation publication-type="unknown" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Trevarthen</surname>, <given-names>C. B.</given-names></string-name></person-group> (<year>1968</year>). <article-title>Two mechanisms of vision in primates.</article-title> Psychologische Forschung, 31(4), 299–337. doi: <pub-id pub-id-type="doi">10.1007/BF00422717</pub-id></mixed-citation></ref>
<ref id="b58"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Unema</surname>, <given-names>P. J. A.</given-names></string-name>, <string-name><surname>Pannasch</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Joos</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Velichkovsky</surname>, <given-names>B. M.</given-names></string-name></person-group> (<year>2005</year>). Time course of information processing during scene perception: The relationship between saccade amplitude and fixation duration. Visual Cognition, 12(3), 473–494. doi: <pub-id pub-id-type="doi">10.1080/13506280444000409</pub-id></mixed-citation></ref>
<ref id="b59"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Velichkovsky</surname>, <given-names>B. M.</given-names></string-name>, <string-name><surname>Joos</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Helmert</surname>, <given-names>J. R.</given-names></string-name>, &amp; <string-name><surname>Pannasch</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2005</year>). <article-title>Two Visual Systems and their Eye Movements: Evidence from Static and Dynamic Scene Perception.</article-title> In <source>CogSci 2005: Proceedings of the XXVII Conference of the Cognitive Science Society</source> (pp. <fpage>2283</fpage>–<lpage>2288</lpage>). <conf-loc>Stresa, Italy</conf-loc>.</mixed-citation></ref>
<ref id="b60"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Wehrend</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Lewis</surname>, <given-names>C.</given-names></string-name></person-group> (<year>1990</year>). <chapter-title>A Problem-oriented Classification of Visualization Techniques</chapter-title>. In <source>Proceedings of the 1st Conference on Visualization ’90</source> (pp. <fpage>139</fpage>–<lpage>143</lpage>). <publisher-loc>Los Alamitos, CA</publisher-loc>: <publisher-name>IEEE Computer Society Press</publisher-name>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://dl.acm.org/citation.cfm?id=949531.949553">http://dl.acm.org/citation.cfm?id=949531.949553</ext-link> doi: <pub-id pub-id-type="doi">10.1109/VISUAL.1990.146375</pub-id></mixed-citation></ref>
<ref id="b61"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Weibel</surname>, <given-names>R.</given-names></string-name>, &amp; <string-name><surname>Brassel</surname>, <given-names>K. E.</given-names></string-name></person-group> (<year>2006</year>). <chapter-title>Map Generalization—what a difference two decades make</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>P.</given-names> <surname>Fisher</surname></string-name> (<role>Ed.</role>),</person-group> <source>Classics from IJGIS: Twenty years of the International Journal of Geographical Information Science and Systems</source> (pp. <fpage>59</fpage>–<lpage>65</lpage>).</mixed-citation></ref>
<ref id="b62"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Weiskrantz</surname>, <given-names>L.</given-names></string-name></person-group> (<year>1972</year>). <article-title>Behavioural analysis of the monkey’s visual nervous system.</article-title> <source>Proceedings of the Royal Society of London</source>, <volume>182</volume>(<issue>1069</issue>), <fpage>427</fpage>–<lpage>455</lpage>. <pub-id pub-id-type="doi">10.1098/rspb.1972.0087</pub-id><pub-id pub-id-type="pmid">4404807</pub-id><issn>0370-1662</issn></mixed-citation></ref>
<ref id="b63"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Wolfe</surname>, <given-names>J. M.</given-names></string-name>, <string-name><surname>Alvarez</surname>, <given-names>G. A.</given-names></string-name>, <string-name><surname>Rosenholtz</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Kuzmova</surname>, <given-names>Y. I.</given-names></string-name>, &amp; <string-name><surname>Sherman</surname>, <given-names>A. M.</given-names></string-name></person-group> (<year>2011</year>). <article-title>Visual search for arbitrary objects in real scenes.</article-title> <source>Attention, Perception &amp; Psychophysics</source>, <volume>73</volume>(<issue>6</issue>), <fpage>1650</fpage>–<lpage>1671</lpage>. <pub-id pub-id-type="doi">10.3758/s13414-011-0153-3</pub-id><pub-id pub-id-type="pmid">21671156</pub-id><issn>1943-3921</issn></mixed-citation></ref>
<ref id="b64"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Yarbus</surname>, <given-names>A. L.</given-names></string-name></person-group> (<year>1967</year>). <source>Eye Movements and Vision</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Plenum Press</publisher-name>. <pub-id pub-id-type="doi">10.1007/978-1-4899-5379-7</pub-id></mixed-citation></ref>
</ref-list>
<fn-group>
  <fn id="FN1">
  <p>https://developers.google.com/maps/documentation/javascript/, last accessed May 2015.
Google Maps also provides an alternative API (v2) for producing static
images; however, it lacks the richness of features
offered by the JavaScript API, which is why the latter was chosen.</p>
  </fn>
  </fn-group>

</back>
</article>
