<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
     <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta><article-id pub-id-type="doi">10.16910/jemr.10.1.1</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Topology for gaze analyses - Raw data segmentation</article-title>
      </title-group>
         <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Hein</surname>
						<given-names>Oliver</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Zangemeister</surname>
						<given-names>Wolfgang</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				
        <aff id="aff1">
		<institution>Neurological University Clinic Hamburg UKE</institution>, <country>Germany</country>
        </aff>
		</contrib-group>
     
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>13</day>  
		<month>3</month>
        <year>2017</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2017</year>
	</pub-date>
      <volume>10</volume>
      <issue>1</issue> 
	  <elocation-id>10.16910/jemr.10.1.1</elocation-id>
	
	<permissions> 
	<copyright-year>2017</copyright-year>
	<copyright-holder>Hein et al.</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>Recent years have witnessed a remarkable growth in the way mathematics, informatics, and computer science can process data. In disciplines such as machine learning,
pattern recognition, computer vision, computational neurology, molecular biology,
information retrieval, etc., many new methods have been developed to cope with the
ever-increasing amount and complexity of the data. These new methods offer interesting possibilities for processing, classifying, and interpreting eye-tracking data. The present paper exemplifies the application of topological arguments to improve the evaluation of eye-tracking data. The task of classifying raw eye-tracking data into saccades and fixations is discussed using a single, simple, and intuitive argument, described as coherence of spacetime, and the hierarchical ordering of the fixations into dwells is shown. The method, identification by topological characteristics (ITop), is parameter-free and requires no pre- or post-processing of the raw data. The general and robust topological argument is easy to extend to complex settings of higher visual tasks, making it possible to identify visual strategies.</p>
      </abstract>
      <kwd-group>
        <kwd>gaze trajectory</kwd>
        <kwd>event detection</kwd>
        <kwd>topological data analysis (TDA)</kwd>
        <kwd>clustering</kwd>
        <kwd>parameter-free classification</kwd>
        <kwd>visual strategy</kwd>
		<kwd>global scanpath</kwd>
		<kwd>local scanpath</kwd>
		
      </kwd-group>
    </article-meta>
  </front>
  <body>
  
  





<sec id="s1">
<title>Introduction</title>
<p>Gaze trajectories can tell us many interesting
things about human nature, including attention, memory,
consciousness, etc., with important applications
[<xref ref-type="bibr" rid="b55 b36 b168 b131">55, 36, 168, 131</xref>] as
well as facilitating the diagnosis and helping to understand
the mechanisms of diseases [<xref ref-type="bibr" rid="b94 b111 b28">94, 111, 28</xref>]. Normally, viewing behavior is studied
with simple paradigms to keep the complexity of natural
viewing situations as low as possible, e.g., in a
search paradigm, a person looks at a computer screen
with a simple static geometric configuration under well
defined optical constraints, i.e., constant illumination,
head immobilized by a chin rest or bite bar, no distractors,
etc.</p>
<p>The task of analyzing, classifying, and interpreting
gaze trajectories for realistic situations proves to be much more difficult because of the many different factors
influencing the steering of the eyes. The usual
scientific approach is to break down real world complexity
into easy to define and control partial modules,
and then to try to reassemble reality from these simple
modules. This has also been done for gaze trajectories.
The task of analyzing the gaze trajectory data can
roughly be split into two subtasks: the low level description
of the noisy raw data that are produced from
the gaze tracker, and the high level description of the
data in combination with the viewing task and the cognitive
processes. The first subtask could be regarded
as the mathematical modeling of high-frequency time series,
given that modern gaze trackers can sample
eye position and orientation at 2000 Hz or even more
[<xref ref-type="bibr" rid="b4">4</xref>].</p>
<p>The careful choice of the data model and data representation
is the basis for all of the following analyses.
Only a model capable of incorporating the many subtleties
of the gaze trajectory is able to support the complex
questions which appear in the context of modeling
the looking task in relation to the assumed cognitive
processes<xref ref-type="fn" rid="FN1">1</xref>.<fn id="FN1"><label>1</label><p>Of course, a more complex model is harder to implement and interpret. There is a constant trade-off between data load, explanatory potential, and model complexity.</p></fn></p>
</sec>	


<sec id="s2">
<title>Splitting trajectory data into events</title>
<p>In this section a general outline of splitting raw
eye-tracking data into meaningful events is given. At
present, the most important segmentation of the data is
the dichotomous splitting into fixations and saccades.
Although this is a long standing approach, up to now
no definite algorithm for the splitting exists. The reasons
are discussed.</p>
	<sec id="s2a">
	<title>The basic oculomotor events</title>
	<p>The eyes scan the surroundings in a sequential manner, since the eyes, seen as a mechanical system, are limited to sequential movements. It has to be remarked that, in many respects, this is not true for the extraction and processing of visual information within the brain, which can proceed in parallel [<xref ref-type="bibr" rid="b157 b160">157, 160</xref>]. It is well
known that a detailed analysis can only be done for
a very small part of the visual scene, approximately
1 up to 5 degrees of visual angle [<xref ref-type="bibr" rid="b27 b37">27, 37</xref>]. This is the part of the scene which
is projected onto the fovea, the region of the retina with
the highest concentration of cone cells. To capture the
whole scene, the eyes have to switch swiftly to other
regions within the scene, which is done via saccades,
i.e., very fast movements [<xref ref-type="bibr" rid="b47 b88">47, 88</xref>].
In fact, saccades are operationally defined by velocity, acceleration, and amplitude criteria. Saccades exhibit a characteristic relationship between amplitude, duration, and peak velocity that is relatively stable across subjects [<xref ref-type="bibr" rid="b95">95</xref>]. Quantitatively this relationship is expressed in the main sequence [<xref ref-type="bibr" rid="b11 b10 b9 b20">11, 10, 9, 20</xref>]. Speed is crucial because the brain has to integrate many parts of the whole scene into one consistent and stable internal representation of the surrounding world, and because the observer has decreased visual sensitivity while the eyes are moving fast, a phenomenon called saccadic suppression [<xref ref-type="bibr" rid="b103 b95">103, 95</xref>].
Information gathering works by swiftly scanning the
scene and minimizing the timespan of decreased sensitivity.
This fact makes a bipartition of the gaze trajectory
data desirable.</p>
	<p>The gaze trajectory is broken down into two general
subsegments, fixations and saccades. Saccades allow
the gaze to change between parts of the scene, while
fixations are intended for analyzing parts of the scene.
Saccades are the segments of the trajectory where the eyes move fast in a preprogrammed, directed manner, whereas during a fixation the eyes move slowly and in a random-like fashion [<xref ref-type="bibr" rid="b129">129</xref>]. The two modes of movement occur alternately and exclusively. Fixations may then be defined as the parts
between the saccades or vice-versa. This is a sensible
and convenient assumption, but also a major simplification.
It is well known that fixations can contain
microsaccades as subitems [<xref ref-type="bibr" rid="b100 b129 b42">100, 129, 42</xref>], mixing the two assumed
modes of movement.</p>
	<p>These two different movement characteristics can be
operationalized. The bipartite classification of gaze
points in saccade points and fixation points is normally
achieved through a combination of space and
time characteristics, i.e., for a fixation, the dispersion of
the gaze points on the display combined with the duration
of a cluster of gaze points in time; for a saccade, it is
the velocity, acceleration, and amplitude of the movement.
The exact determination of the parameters and
the algorithmic implementation has a long history and
many parameterizations exist<xref ref-type="fn" rid="FN2">2</xref>.</p>
	<p>The classification of eye movements into fixations
and saccades is by no means straightforward. One always
has to bear in mind that the dichotomic splitting
of the data follows our desire for simple and parsimonious models; it is not Nature’s design<xref ref-type="fn" rid="FN3">3</xref>. It has to be
noted that the eye has a much broader repertoire of
movements [<xref ref-type="bibr" rid="b97">97</xref>].
“Patterns” of eye movements other than fixations and
saccades occur in real data, e.g., vestibular and vergence
eye movements, dynamic over-/undershooting,
microsaccades, drift, tremor, etc. This becomes even
more complex when viewing dynamic scenes as opposed
to still images [<xref ref-type="bibr" rid="b28">28</xref>]. Because of
the moving content, the eyes have to follow the in-focus part of the scene. The concept of a fixation as
being localized in a small subregion of a still image is
no longer valid and has to be replaced by the concept of
smooth pursuit [<xref ref-type="bibr" rid="b18">18</xref>]. As of now the most important event types
are fixations, saccades and smooth pursuit. More recently
post-saccadic oscillations (PSO) have come into
focus [<xref ref-type="bibr" rid="b116 b3">116, 3</xref>]. Zemblys,
Niehorster, Komogortsev, and Holmqvist [<xref ref-type="bibr" rid="b186">186</xref>] estimate that 15-25 event types have, as of now, been described in the psychological and neurological eye-movement literature.</p>
<p>As is common for biological systems, all movements
exhibit a normal physiological variability [<xref ref-type="bibr" rid="b146 b166">146, 166</xref>]. Different application
regimes also show different characteristics, e.g.,
normal reading is different from reading a drifting text
[<xref ref-type="bibr" rid="b165">165</xref>], as is now common when reading, or even browsing, texts on mobile devices (swiping
the text). Furthermore, gaze tracking data can be interrupted by blinks, during which the eye may still be moving consistently. Though coupled to eye movements [<xref ref-type="bibr" rid="b67">67</xref>], blinks are considered noise.</p>
<p>Even if all possible events were known and clearly
defined, the algorithmic processing would introduce a
bias into the results. There are many reasons for this. One reason lies in the different sensitivities to noise and filter effects [<xref ref-type="bibr" rid="b68 b159">68, 159</xref>]; e.g., numerical differentiation is a notoriously ill-behaved operation that amplifies noise. Furthermore, the filters used for preprocessing also call for parameters and introduce a bias into the data.</p>
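<p>The noise amplification of numerical differentiation can be made concrete: two-point differencing of independent position noise with standard deviation &#963; at sampling rate f<sub>s</sub> yields velocity noise with standard deviation &#963;&#183;f<sub>s</sub>&#183;&#8730;2, so even small position noise can swamp slow eye movements. A minimal illustration (the sampling rate and noise level are illustrative assumptions, not values from the text):</p>

```python
import random
import statistics

random.seed(0)
fs = 1000.0   # sampling rate in Hz (illustrative)
sigma = 0.05  # position measurement noise in deg (illustrative)

# A perfectly stationary eye: true position is constant,
# so the recorded signal is pure measurement noise.
pos = [random.gauss(0.0, sigma) for _ in range(10000)]

# Two-point difference approximation to velocity (deg/s).
vel = [(pos[i + 1] - pos[i]) * fs for i in range(len(pos) - 1)]

pos_sd = statistics.stdev(pos)
vel_sd = statistics.stdev(vel)
# Differencing amplifies the noise by roughly fs * sqrt(2) ~ 1414x here.
print(pos_sd, vel_sd, vel_sd / pos_sd)
```

<p>With these numbers a stationary eye already shows apparent velocity noise of roughly 70 deg/s, well above a typical 30 deg/s saccade threshold, which is why low-pass filtering before differentiation is common and why those filters in turn introduce their own bias.</p>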
	</sec>
	<sec id="s2b">
	<title>Higher level use for oculomotor events</title>
	<p>Another motivation for the development of more
and more sophisticated algorithms is the growing –
one might say exploding – applicability of eye tracking
devices. In the past eye tracking was restricted to
scientific uses and the tasks people were performing
were relatively low in complexity, e.g., a simple search
task. Nowadays, with the increase of performance in
eye-tracking hardware and computing power, the tasks
under investigation have become more and more complex,
producing a wealth of data.</p>
	<p>Recent years especially have shown a growing interest
in the investigation of complex dynamic settings. In
these settings the viewing subject is no longer looking
at a static image from a (head-)fixed position. In the
extreme, the subject moves freely and interacts with the environment, e.g., playing table tennis or driving a car [<xref ref-type="bibr" rid="b90 b89 b91 b92">90, 89, 91, 92</xref>]. Driven
by industrial applications such as market research, dynamic
scenes are playing a more and more important
role. These can be watching TV and movies [<xref ref-type="bibr" rid="b52 b22 b34">52, 22, 34</xref>], video clips [<xref ref-type="bibr" rid="b26 b15 b161">26, 15, 161</xref>] or interactively playing
a video game [<xref ref-type="bibr" rid="b121 b152">121, 152</xref>]. Another application is
the assessment of the driving ability in diseases like
glaucoma [<xref ref-type="bibr" rid="b28">28</xref>] or Parkinson’s Disease
[<xref ref-type="bibr" rid="b23">23</xref>], where patients view hazardous
situations in a car driving context. The system calibration
can be automated, allowing the collection of
data for many subjects. As an example, the eye movements
of 5,638 subjects have successfully been recorded
while they viewed digitized images of paintings from
the National Gallery collection in the course of the millennium
exhibition [<xref ref-type="bibr" rid="b181 b182 b180">181, 182, 180</xref>]. It is
apparent that such data sets cannot be evaluated manually.
A recent application is online tracking of eye
movements for integration in gaze contingent applications,
e.g., driving assistance, virtual reality, gaming,
etc. Here the online tracking produces a continuous stream of highly noisy data; the system has to extract the relevant events in real time and infer the users’ intent in order to adjust to their needs.</p>
	<p>These more complex settings and large sample sizes
are not only a challenge for the hard- and software, but
also require a rethinking of the concepts being used to
interpret the data, especially when it comes to the theoretical
possibility of inferring people’s intent from their
eye movements [<xref ref-type="bibr" rid="b59 b21 b54">59, 21, 54</xref>].</p>
	<p>In summary, the analysis of eye tracking data can be
organized in a hierarchy spanning different scales, going
from low level segmentation ascending to higher
levels, relevant for the physiological and psychological
interpretation. Topmost is the comparison and analysis
of different eye movement patterns within and between
groups of people, as is relevant for the inference
of underlying physiological and cognitive processes,
which forms the basis for important eye tracking applications,
see <xref ref-type="fig" rid="fig23">Table 1</xref>. Highlighted in light gray background
is the first level aggregation into basic events.
Highlighted in dark gray is the second level aggregation
for higher use, i.e., sets of sequential fixations in a
confined part of the viewing area [<xref ref-type="bibr" rid="b135">135</xref>]<xref ref-type="fn" rid="FN4">4</xref>.</p>



<fig id="fig23" fig-type="figure" position="float">
					<label>Table 1</label>
					<caption>
						<p>Functional overview</p>
					</caption>
					

					
					
					<graphic id="graph23" xlink:href="jemr-10-01-a-figure-23.png"/>
				</fig>






	</sec>
	<sec id="s2c">
	<title>The problem of defining a fixation</title>
	<p>For most areas of inquiry this level of information
in the raw data is not necessary. It is sufficient to reduce
the gaze-points into oculomotor events, i.e., into
the fixations and saccades forming the scanpath. Here scanpath<xref ref-type="fn" rid="FN6">6</xref> means any higher-level, time-ordered representation of the raw data, which form the physical gaze trajectory. The fixations can further be attributed to regions
of interest (RoI), each RoI representing a larger
part of the scene with interesting content for the viewing
subject.</p>
	<p>While intuitively easy to grasp, it is by no means obvious
how to explicitly define these concepts and make
them available for numerical calculations [<xref ref-type="bibr" rid="b3">3</xref>]. Very often only basic saccade and fixation
identification algorithms are part of the eye-tracking
system at delivery [<xref ref-type="bibr" rid="b156">156</xref>], leaving the higher splitting
up to the user. This is desirable in the academic setting,
but not in the industrial setting, where time efficient
analysis has to be conducted, e.g., in marketing research
[<xref ref-type="bibr" rid="b127">127</xref>] or in usability evaluation [<xref ref-type="bibr" rid="b51">51</xref>].
Most commercial implementations incorporate dispersion
threshold methods, e.g., ASL [<xref ref-type="bibr" rid="b7">7</xref>] or velocity
threshold methods, e.g., seeingmachines [<xref ref-type="bibr" rid="b141">141</xref>]; Olsen
[<xref ref-type="bibr" rid="b117">117</xref>]; Tobii [<xref ref-type="bibr" rid="b158">158</xref>]. Some offer the user flexibility in
choosing the thresholds, while others mask the complexity
from the user by assuming a sort of lowest common
denominator for the thresholds in different application
domains, although it is known that parameters
can vary between different tasks, e.g., the mean fixation duration amounts to 225 ms in silent reading, 275 ms in visual search, and 400 ms in hand-eye coordination
[<xref ref-type="bibr" rid="b125">125</xref>]. To account for these variations, some
implementations have 10 parameters to adjust [<xref ref-type="bibr" rid="b126">126</xref>], requiring a good understanding of the
theory of gaze trajectories.</p>
	<p>It is well known that the parametrization of the algorithm
can substantially affect the results, but there is
no rule for which algorithm and which parametrization to employ in a given experimental setting [<xref ref-type="bibr" rid="b149 b116 b177">149, 116, 177</xref>]. A comparison of the different algorithms and
the bias which can result under different parameterizations
is given in Shic et al. [<xref ref-type="bibr" rid="b145">145</xref>]; Špakov [<xref ref-type="bibr" rid="b176">176</xref>];
Andersson et al. [<xref ref-type="bibr" rid="b3">3</xref>]. For instance, post-saccadic oscillations
(PSOs), i.e., wobbling over/under-shootings,
are usually not explicitly mentioned, but form a normal
part of eye movements. The PSOs are attributed
to fixations or saccades, influencing the overall statistics
of the measurement [<xref ref-type="bibr" rid="b116 b3">116, 3</xref>]. The algorithms to implement
the classification are therefore different and researchers
aim to improve and extend the algorithms constantly
[<xref ref-type="bibr" rid="b166 b176 b83 b108 b96 b175 b177 b165 b31 b3 b63 b186">166, 176, 83, 108, 96, 175, 177, 165, 31, 3, 63, 186</xref>].</p>
	<p>Many researchers agree that a normative definition
and protocol is desirable but at present far from becoming
reality [<xref ref-type="bibr" rid="b77 b81 b116 b3">77, 81, 116, 3</xref>]. As Karsh and Breitenbach [<xref ref-type="bibr" rid="b77">77</xref>] rightly stated:</p>
	<p>The problem of defining a fixation is
one that perhaps deserves more recognition
than it had in the past. Generally speaking,
the more complex the system the more
complex the task of definition will be. ...
Once these needs are recognized and implemented,
comparison between studies take
on considerably more meaning.</p>
	</sec>
	<sec id="s2d">
	<title>Topological approach to the problem</title>
	<p>Up until now, no single algorithm has been able
to cover all the various aspects in eye tracking data
[<xref ref-type="bibr" rid="b3">3</xref>]. The aim here is to show that
there exists a strikingly simple argument for demarcating
the different components of the gaze trajectory
in a normative way. From well-known approaches a
data representation is derived, which forms the basis
for a consistent analysis scheme to cover the basic aggregation
steps, see gray parts of <xref ref-type="fig" rid="fig23">Table 1</xref>. The argument
for the segmentation is a topological one and is by its
very nature global and scale-invariant. It is the mathematical
formulation that a fixation is a coherent part
in space and time. The meaning of “coherent in space
and time” will be clarified in the next sections. The argument
needs no thresholds or calibration and is independent
of any experimental setting or paradigm. The
delineation of the gaze trajectory is unambiguously reproducible.</p>
	</sec>
</sec>

<sec id="s3">
<title>Overview of existing approaches</title>
<p>This section presents an overview of different approaches
to event detection. From these, a common
argument is isolated, the coherence of sample data in
space and time, which in turn forms the basis for the
new algorithm.</p>
	<sec id="s3a">
	<title>Taxonomy of algorithms</title>
	<p>At present, we see a wide variety of different methods
being used to extract the main oculomotor events
from raw eye tracking data [<xref ref-type="bibr" rid="b64">64</xref>].
Each approach to the data highlights at least one prominent
and distinguishing feature of the main oculomotor
events in the trajectory data and makes use of specialized
algorithms to filter/detect these features against
the noisy background. Noise is to be understood as being
the part of the measurement which is not relevant for the investigation, e.g., microsaccades can be considered noise in one study, but be of central interest in another setting. In its narrow sense, noise is the random
part inherent in any measurement. There is a common
logic to all these approaches, from which a data representation
and global topological argument can be derived.
To better understand the topological approach,
algorithms currently in use are systematized in a taxonomy.
The taxonomy was first introduced in Salvucci
and Goldberg [<xref ref-type="bibr" rid="b134">134</xref>]. This classification has often been
repeated and adapted in the literature [<xref ref-type="bibr" rid="b81 b80 b83 b136 b3">81, 80, 83, 136, 3</xref>]. Here, as in
Salvucci and Goldberg [<xref ref-type="bibr" rid="b134">134</xref>], the classification is based
on the role of time and space as well as the algorithms used to evaluate the raw data. Broadly speaking, there are two different approaches to the data, which differ in complexity.</p>
	<p>The algorithmically simplest approach is based on
thresholds for saccades and fixations. In the case of
saccades these are thresholds for velocity (I-VT: identification
by velocity threshold), acceleration, and even
jerk, very often calculated as the discrete numerical
space-time n-point difference approximations to the
continuous differentials. For example, a saccade is detected whenever the eye’s angular velocity exceeds 30 deg/s [<xref ref-type="bibr" rid="b122 b150 b44 b48 b120">122, 150, 44, 48, 120</xref>]. These algorithms are called “saccade
pickers” [<xref ref-type="bibr" rid="b76">76</xref>].</p>
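<p>As a minimal illustration of such a “saccade picker” (this is a hedged sketch of the generic I-VT idea, not the ITop method proposed in this paper), gaze samples can be labeled by thresholding a two-point difference approximation of the angular velocity; the 30 deg/s threshold and the 500 Hz sampling rate below are illustrative assumptions:</p>

```python
import numpy as np

def ivt_classify(x, y, fs=500.0, vel_threshold=30.0):
    """Minimal I-VT sketch: label each gaze sample as 'saccade' or
    'fixation' by thresholding point-to-point angular velocity.

    x, y          -- gaze position in degrees of visual angle
    fs            -- sampling rate in Hz (illustrative: 500 Hz)
    vel_threshold -- angular velocity threshold in deg/s (commonly 30)
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # two-point difference approximation to the continuous derivative
    vel = np.hypot(np.diff(x), np.diff(y)) * fs  # deg/s
    labels = np.where(vel > vel_threshold, "saccade", "fixation")
    # repeat the first label so the output matches the number of samples
    return np.concatenate([labels[:1], labels])
```

<p>Every sample whose instantaneous velocity exceeds the threshold is marked saccadic and everything else is treated as fixation, which is exactly the dichotomous simplification discussed above; noise, PSOs, and smooth pursuit all end up in one of the two bins.</p>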
	<p>The second group targets the space dispersion (I-DT:
identification by dispersion (position-variance) threshold)
or space-time dispersion (I-DDT: identification by
dispersion and duration thresholds), i.e., when a consecutive
series of gaze points occur near each other in
display space, they are considered part of a fixation.
For example, in a reading context, a fixation lasts between 200
and 300 msec and a saccade spans approximately seven
character spaces [<xref ref-type="bibr" rid="b125">125</xref>]. Gaze points consistent
with this are aggregated and assumed to form a single
fixation. These algorithms are called “fixation pickers”.
Most algorithms use simple thresholds to cluster
data into saccades and fixations, which in practice need
to be optimized. A fixed parameter approach may perform
well on a specific record but is very often too
imprecise and error-prone when applied to different
records <xref ref-type="fn" rid="FN7">7</xref>. In order to improve results, researchers adapt the threshold in a dynamic way [<xref ref-type="bibr" rid="b41 b116">41, 116</xref>], or combine
criteria, e.g., a saccade is detected when the angular
velocity is higher than 30 deg/s, the angular acceleration
exceeds 8000 deg/s<sup>2</sup>, the deflection in eye position is at least 0.1 deg, and a minimum duration of 4 ms is exceeded [<xref ref-type="bibr" rid="b154 b46 b147 b148">154, 46, 147, 148</xref>].
Note that dispersion thresholds can be inversely defined for saccades, i.e., in relation to a fixation, a saccade is over-dispersed: it has a minimum jumping distance. This is essential when delineating microsaccades from saccades.</p>
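<p>A matching “fixation picker” can be sketched in the same hedged spirit: a minimal dispersion-and-duration (I-DDT) classifier that grows a window while the dispersion stays small and keeps only windows long enough to count as fixations. The 1 deg dispersion and 100 ms duration thresholds are illustrative assumptions that, as the text stresses, would need tuning per domain:</p>

```python
import numpy as np

def iddt_fixations(x, y, fs=500.0, max_disp=1.0, min_dur=0.1):
    """Minimal I-DDT sketch: grow a window while the dispersion
    (range in x plus range in y, in deg) stays within max_disp;
    windows shorter than min_dur seconds are discarded.
    Returns a list of (start_index, end_index) fixation spans.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    min_len = int(min_dur * fs)
    fixations, start, n = [], 0, len(x)
    while start < n:
        end = start
        while end + 1 < n:
            xs, ys = x[start:end + 2], y[start:end + 2]
            # stop growing as soon as the next sample breaks the dispersion
            if (xs.max() - xs.min()) + (ys.max() - ys.min()) > max_disp:
                break
            end += 1
        if end - start + 1 >= min_len:
            fixations.append((start, end))
        start = end + 1
    return fixations
```

<p>Samples outside the returned spans are implicitly “everything else” (saccades, PSOs, noise), illustrating why fixation pickers and saccade pickers can disagree on the same record.</p>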
	<p>Parameters are often chosen subject to individual
judgment or even rather arbitrarily [<xref ref-type="bibr" rid="b70">70</xref>]. Even
after using more criteria, human post-processing is required
[<xref ref-type="bibr" rid="b177">177</xref>], and means to reduce the human
interaction are being sought [<xref ref-type="bibr" rid="b32">32</xref>].</p>
	<p>A higher sampling rate of the eye-tracker will give
better approximations of velocity and acceleration, but
the devices are more expensive and impose stronger restrictions on the tested subjects, e.g., a chin rest, etc.
It is remarkable that functional relationships like the
main sequence [<xref ref-type="bibr" rid="b11">11</xref>] are rarely employed,
considering that they give good guidance for setting
parameter thresholds [<xref ref-type="bibr" rid="b68">68</xref>]; a recent
exception is Liston et al. [<xref ref-type="bibr" rid="b96">96</xref>].</p>
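<p>One way the main sequence could guide thresholding, as suggested above, is as a plausibility check on candidate saccades. The following is a hedged sketch assuming an exponential main-sequence model V<sub>peak</sub> = V<sub>max</sub>(1 &#8722; e<sup>&#8722;A/C</sup>) with illustrative constants (V<sub>max</sub> = 500 deg/s, C = 15 deg); the constants and the tolerance factor are assumptions, not fitted values from the cited works:</p>

```python
import math

def main_sequence_peak_velocity(amplitude, v_max=500.0, c=15.0):
    """Expected peak velocity (deg/s) for a saccade of the given
    amplitude (deg), using an exponential main-sequence model.
    v_max and c are illustrative constants, not fitted values."""
    return v_max * (1.0 - math.exp(-amplitude / c))

def plausible_saccade(amplitude, peak_velocity, tolerance=2.0):
    """Accept a candidate saccade only if its measured peak velocity
    lies within a factor `tolerance` of the main-sequence prediction.
    Implausibly slow or fast candidates are likely noise or artifacts."""
    expected = main_sequence_peak_velocity(amplitude)
    return expected / tolerance <= peak_velocity <= expected * tolerance
```

<p>A detector could run such a check after thresholding, rejecting, e.g., a 10 deg candidate whose peak velocity is only 20 deg/s as a likely drift or tracking artifact rather than a true saccade.</p>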
	<p>All these approaches are purely operational, call for
experience, and are driven by technical as well as programming
restrictions. More complex algorithms are
of course harder to code and often suffer from performance
issues. The simple velocity and dispersion based classifiers are exemplified in <xref ref-type="table" rid="t01">Table 2</xref> (the cited works contain explicit expositions of the algorithms).</p>


<table-wrap id="t01" position="float">
					<label>Table 2</label>
				
<table >
  <tr>
    <th colspan="2">saccade pickers</th>
  </tr>
  <tr>
    <td >d/dt velocity threshold I-VT</td>
    <td >fix (Stampe, 1993[<xref ref-type="bibr" rid="b150">150</xref>]) </td>
  </tr>
  <tr>
    <td ></td>
    <td >adaptive (Nyström and Holmqvist, 2010[<xref ref-type="bibr" rid="b116">116</xref>])</td>
  </tr>
  <tr>
    <td >d<sup>2</sup>/dt<sup>2</sup> acceleration threshold I-AT</td>
    <td >fix (Behrens and Weiss, 1992[<xref ref-type="bibr" rid="b13">13</xref>]; Behrens, MacKeben, and Schröder-Preikschat, 2010[<xref ref-type="bibr" rid="b12">12</xref>])</td>
  </tr>
  <tr>
    <td >d<sup>3</sup>/dt<sup>3</sup> jerk threshold I-JT</td>
    <td >fix (Wyatt, 1998[<xref ref-type="bibr" rid="b183">183</xref>]), (Matsuoka and Harato, 1983[<xref ref-type="bibr" rid="b104">104</xref>], in Japanese)</td>
  </tr>
  <tr>
    <td colspan="2">fixation pickers</td>
  </tr>
  <tr>
    <td >dispersion threshold I-DT</td>
    <td >fix (Mason, 1976 [<xref ref-type="bibr" rid="b102">102</xref>]; Kliegl and Olson, 1981 [<xref ref-type="bibr" rid="b79">79</xref>])</td>
  </tr>
  <tr>
    <td >dispersion and duration thresholds I-DDT</td>
    <td >fix (Widdel, 1984[<xref ref-type="bibr" rid="b179">179</xref>]; Nodine, Kundel, Toto, and Krupinski, 1992[<xref ref-type="bibr" rid="b112">112</xref>]; Manor and Gordon, 2003[<xref ref-type="bibr" rid="b99">99</xref>]; Krassanakis, Filippakopoulou, and Nakos, 2014[<xref ref-type="bibr" rid="b85">85</xref>])</td>
  </tr>
</table>
	</table-wrap>

	<p>A considerable advantage of these approaches is that
thresholds are easy to understand, interpret, and implement.
The values for thresholds depend on the research domain, e.g., the space-time dispersion values in I-DDT differ between reading and visual search, and typical fixation durations are likewise domain-specific [<xref ref-type="bibr" rid="b125">125</xref>]. Hand-tuning based on heuristics is often required to get good results.</p>




	</sec>
	<sec id="s3b">
	<title>Range of advanced methods</title>
	<p>The more sophisticated algorithms use ramified versions
of the basic velocity/dispersion features taken
from signal processing, statistics, Kalman filtering,
Bayesian state estimation, clustering, pattern classifier
algorithms, and machine learning.</p>
	<p>These are taken from other disciplines like</p>
	<p>Signal processing
– Finite impulse response filter [<xref ref-type="bibr" rid="b159">159</xref>]
– Cumulative sum (CUSUM) [<xref ref-type="bibr" rid="b118 b158 b58">118, 158, 58</xref>]</p>
	<p>Statistics
– F-test and correlation [<xref ref-type="bibr" rid="b173 b174 b172">173, 174, 172</xref>]
– Gap-statistics [<xref ref-type="bibr" rid="b108">108</xref>]</p>
	<p>Stochastic processes and time series analysis
– Auto-regressive processes and wavelet analysis
[<xref ref-type="bibr" rid="b35">35</xref>]</p>
	<p>Bayesian approaches
– Hidden Markov model [<xref ref-type="bibr" rid="b133 b130">133, 130</xref>]
– Kalman filter [<xref ref-type="bibr" rid="b137 b84">137, 84</xref>]
– Bayesian mixture model [<xref ref-type="bibr" rid="b153 b78">153, 78</xref>]
– Particle filter [<xref ref-type="bibr" rid="b31">31</xref>]</p>
	<p>Data clustering
– k-means clustering [<xref ref-type="bibr" rid="b124">124</xref>]
– Projection clustering [<xref ref-type="bibr" rid="b163">163</xref>]
– Mean shift clustering [<xref ref-type="bibr" rid="b135">135</xref>]
– Mean shift clustering and entropy [<xref ref-type="bibr" rid="b171">171</xref>]
– Two-means clustering [<xref ref-type="bibr" rid="b63">63</xref>]</p>
	<p>Machine learning
– Random forest classifier [<xref ref-type="bibr" rid="b186">186</xref>]
– Neural networks [<xref ref-type="bibr" rid="b66 b1">66, 1</xref>]</p>
	<p>Graph theory
– Minimum spanning tree [<xref ref-type="bibr" rid="b50 b128">50, 128</xref>]</p>
	<p>Fuzzy-set methods
– [<xref ref-type="bibr" rid="b6 b30">6, 30</xref>]</p>
	<p>Shape features<xref ref-type="fn" rid="FN8">8</xref>
– Single feature (simple) [<xref ref-type="bibr" rid="b74">74</xref>], [<xref ref-type="bibr" rid="b15">15</xref>]
– Multiple features (complex) [<xref ref-type="bibr" rid="b175">175</xref>]
– Mathematical morphology [<xref ref-type="bibr" rid="b98">98</xref>]</p>
	<p>Speech recognition
– Mel-frequency cepstral analysis [<xref ref-type="bibr" rid="b29">29</xref>]</p>
	<p>Template matching
– Velocity-Duration template [<xref ref-type="bibr" rid="b96">96</xref>]</p>
	<p>Dynamic system analysis
– Time-delay reconstruction [<xref ref-type="bibr" rid="b142 b143">142, 143</xref>]</p>
	<p>At present, threshold-based methods are the common
standard. Probabilistic methods are promising candidates
inasmuch as they offer the possibility of implementing
an online learning algorithm that adjusts to changing
viewing behavior. Very recent candidates for event
classification are neural networks [<xref ref-type="bibr" rid="b66 b1">66, 1</xref>], random forests
[<xref ref-type="bibr" rid="b186">186</xref>], or machine learning in general
[<xref ref-type="bibr" rid="b185">185</xref>].</p>
	</sec>
</sec>	

<sec id="s4">
<title>Topological data analysis</title>
<p>A relatively recent field of data analysis is topological
data analysis (TDA). In this section, a topological approach
to the data is given. To this end, the notion of
different spaces, projections, and metrics for the trajectory
is introduced. The idea of trajectory spacetime coherence
is given a precise meaning in topological terms,
i.e., “no holes in trajectory spacetime”, a strikingly simple
topological argument for the separation of the sample
data. An intuition and first use for the argument is
given by the visual assessment of the trajectory spacetime,
showing the coarse/fine (global/local) structure
of a scanpath.</p>
	<sec id="s4a">
	<title>Configuration in physical space</title>
	<p>The crucial aspect for partitioning the data is the representation
of space and time. Space is here understood
as the three-dimensional physical space, called
world space, which contains as objects the viewer, items
viewed, and tracking equipment. Essentially, the
viewer’s head and eyes have position (location) and
orientation, together called pose, in world space. In the
case of the eyes, very often only the direction is determined.
The starting point for analysis is the set of raw
data from the gaze tracker. The logging of continuous
movement of head and eyes consists of the discretely
sampled position and orientation of head and eyes
in three-dimensional space at equidistant moments in
time during the timespan of the experiment.</p>
	<p>If it were the intention only to detect fixations or saccades,
it would be sufficient to analyze the movement
of the eyes in head space. In the context of, e.g., cognitive
studies, the position and orientation of head and eyes
are not of interest in themselves; of interest are the visual field,
the objects within the visual field, and the distribution
of allocated attention within the viewer’s internal representation
of the visual field, “the objects looked at”.
Because of this, the motion of the visual field in world
space will be modeled.</p>
	<p>The visual field encompasses the part of the environment
which is in principle accessible for gathering optical
information. It is well known in visual optics that
the way of light from an object onto the retina is a multistage
process which depends on the optical conditions
in world space as well as the geometry and refractive
power of the different parts of the individual eye [<xref ref-type="bibr" rid="b5 b107">5, 107</xref>]. Taken together,
this is a complex setting to analyze.</p>
	<p>In order to cope with the complexity, several assumptions
and simplifications have to be made in the
course of modeling. The visual field is not directly accessible
to the eye tracker. The eye tracker can only
measure related signals. These signals are linked by
calibration to the point of regard. E.g., in video based
head-eye tracking, camera(s) take pictures of the head and eyes of a subject. The individual images are processed
to identify predefined external features of the
head and the eyes, e.g., the corners of the mouth and
the eyes, the pupil, and glints from light emitting
diodes on the light reflecting surfaces of the eyes. From
the relative position of these features in image space(s)
and the calibration, the gaze <xref ref-type="fn" rid="FN9">9</xref> can be determined.</p>
	<p>The visual field for one eye is approximated as a
right circular cone of one sheet with the gaze-ray as
its axis, the center of the entrance pupil as its apex,
and with a varying aperture, neglecting any asymmetry
of the visual field. For foveated objects the cone
angle of a bundle of rays that come to a focus is very
small, approximately 0.5 degrees. In the limit of 0.0 degrees
only a ray remains, which is convenient for calculations.
One calculates the point of intersection of the
gaze-ray (starting from the center of the entrance pupil)
with an object in world space, and not the projection of
the content of the gaze cone onto the retina. Very often
one does not work with the gaze-rays of the two
eyes separately but instead with only one of the two
(the dominant eye); alternatively, the two gaze-rays are
combined into a single gaze-ray, i.e., a mean gaze-ray
known as “cyclops view” [<xref ref-type="bibr" rid="b39">39</xref>]. In addition, very often the head is fixed to prevent
head movements at the cost of a somewhat nonphysiological
setting.</p>
	<p>To describe the geometric and topological approach
to the data in detail, we will choose the situation where
a subject is looking at a screen presenting a visual task
(which is a common experimental setting). The point
of regard (PoR) is the location toward which the eyes
are pointed at a moment in time, i.e., the point of intersection
of the (mean) gaze-ray with the screen. Please
note that the topological method can work just as well
in a three-dimensional setting, e.g., navigating in outdoor
scenes. The 3D case is of recent interest for orientation
in real and virtual space. For the sake of clarity
of explanation, we will now discuss a typical two-dimensional
setting.</p>
	</sec>
	<sec id="s4b">
	<title>Coherence in space and time</title>
	<p>The rationale behind the intended clustering is that
trajectory points which have a certain coherence in
space and time should be grouped together. The question
is how to define and express spacetime coherence
for trajectory points. The argument starts with the
continuous gaze trajectory tr. The gaze trajectory consists
of the time-ordered points of intersection Pts of the
mean gaze-ray with the screen, i.e., screen space &#x03A3;, within
the timespan ts of the experiment. In mathematical abstraction:</p>
<fig id="eq01" fig-type="equation" position="anchor">
					<graphic id="equation01" xlink:href="jemr-10-01-a-equation-01.png"/>
				</fig>







	<p>The terminology and notation are not mathematical
pedantry. In the following, different spaces will be introduced,
and it is essential not to lose track of one’s current
conceptual location. It is important to note that the
unparametrized Ps form a multiset because the gaze-ray
can visit the same screen point at many time points
(within a fixation and recurrently). Contrary to screen
points, a time point, representing an instant or moment
in the flow of time, can be visited or passed only once.
In practical terms we only have a finite number of discrete
data, i.e., the protocol pr of the sampled tr. The pr
results from a discretization of continuous space and
time. The screen consists of a finite number of square
pixels, all with equal side length &#x2206;x = &#x2206;y = constant,
the constituting discrete elements of screen space &#x03A3;&#x0027; =
&#x007B;P<sub>x,y</sub> : x &#x2208; &#x007B;0, 1, ..., 1023&#x007D;, y &#x2208; &#x007B;0, 1, ..., 767&#x007D;&#x007D;, here XGA
resolution is assumed, and the tracker takes pictures at
moments in time with a constant sampling rate (time points or moments) <italic>ts&#x0027;</italic> = &#x007B;<italic>M</italic><sub>i</sub> : <italic>i</italic> &#x2208; &#x007B;0, 1, ..., N&#x2212;1&#x007D;&#x007D;,
therefore <italic>pr = &#x007B;P<sub>M0</sub> , P<sub>M1</sub> , P<sub>M2</sub> ,..., P<sub>Mn</sub> </italic>&#x007D;. Time is considered
to be an ordering parameter, and because of the
constant sampling rate, only the time index is noted: pr =
(P<sub>0</sub>, P<sub>1</sub>, P<sub>2</sub>, ..., P<sub>n</sub>) with the ordering parameter <italic>i</italic> &#x2208; &#x2115;<sub>0</sub>.
It is important to note that the points of intersection
alone do not carry any time information. If we want to
convey the information about time ordering, we must
label points, i.e., show the index. Graphically, we can
also show a polyline with the line segments oriented, i.e.,
showing an arrowhead, see <xref ref-type="fig" rid="fig01">Figure 1</xref>.</p>
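As a tiny illustration of the discretization (a Python sketch; the sample values are invented), the protocol pr is just the time-ordered tuple of sampled points of regard: the same screen point may appear several times, while each time index occurs exactly once.

```python
# Toy protocol pr: the list index i plays the role of the time point,
# since the sampling rate is constant. Values are invented XGA pixels.
pr = [(512, 384), (512, 384), (513, 385), (100, 200), (512, 384)]

# The unparametrized points form a multiset: recurrences collapse ...
unique_points = set(pr)

# ... while the time indices are all distinct by construction.
time_indices = list(range(len(pr)))
```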

<fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1</label>
					<caption>
						<p>Trajectory in screen space</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-10-01-a-figure-01.png"/>
				</fig>



	<p>The crucial step for the following is to take a different
view of the subject: the combinatorial
view. In analogy to spatial dispersion algorithms, the spatial distance of two points is taken, but this time not
only for consecutive points in time but for all possible 2-
point combinations over time. This could be regarded
as taking the maximal window size in the dispersion
algorithms. This way one obtains the time-indexed matrix
D of all combinatorial 2-point distances for the trajectory
space. D serves as the basis for further evaluation.
The representation as a time indexed matrix
of combinatorial 2-point distances makes the trajectory
independent of Euclidean motions because distances
are the invariants of Euclidean geometry. The property
of being independent of Euclidean motions is especially
desirable when comparing scanpaths [<xref ref-type="bibr" rid="b71">71</xref>]. At first sight this approach
may seem to resemble a superfluous brute force
dispersion approach. The advantage of such an approach
will be clear from the subsequent sections.</p>
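The construction of D can be sketched in a few lines (Python/NumPy; the toy protocol is invented for illustration):

```python
import numpy as np

def distance_matrix(pr):
    """Time-indexed matrix D of all combinatorial 2-point distances:
    D[i, j] = ||P_i - P_j|| for samples ordered by time index."""
    pr = np.asarray(pr, dtype=float)
    diff = pr[:, None, :] - pr[None, :, :]    # all pairwise differences
    return np.sqrt((diff ** 2).sum(axis=-1))

# Toy protocol: two spatially confined groups of samples plus a return.
pr = [(10, 10), (11, 10), (10, 11), (200, 150), (201, 151), (10, 10)]
D = distance_matrix(pr)
# D is symmetric with a zero diagonal; D[0, 5] = 0 records the
# recurrence (the gaze returns to the first screen point).
```

Because D contains only pairwise distances, it is unchanged under rotations and translations of the trajectory, which is the Euclidean-motion invariance noted above.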
	<p>First, we can make the spatio-temporal relationship
of the P<sub>i</sub> directly visible with an imaging technique. To
this end, we convert, for all time-ordered pairs of trajectory
points (P<sub>i</sub>, P<sub>j</sub>), the screen space distance values
d<sub>i,j</sub> into gray values of a picture, img(D), of size &#x007C;pr&#x007C; &#x00D7; &#x007C;pr&#x007C;.
E.g., when the gaze tracker takes 633 samples, one obtains
an image measuring 633 by 633 pixels<xref ref-type="fn" rid="FN10">10</xref>.</p>
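A minimal sketch of the gray-value conversion (the linear scaling to 8-bit values is our assumption; any monotone mapping would do):

```python
import numpy as np

def img_of_D(D):
    """Map the distance matrix to an 8-bit grayscale image: zero
    distance -> black (0), maximal distance -> white (255)."""
    D = np.asarray(D, dtype=float)
    if D.max() == 0:
        return np.zeros(D.shape, dtype=np.uint8)
    return np.rint(255.0 * D / D.max()).astype(np.uint8)

# A 3-sample toy matrix: two coherent samples and one far jump.
D = np.array([[0.0, 1.0, 10.0],
              [1.0, 0.0, 9.0],
              [10.0, 9.0, 0.0]])
img = img_of_D(D)
```

With this convention, fixations (short within-cluster distances) appear as dark squares along the diagonal and wide saccadic jumps as bright off-diagonal blocks, as described next.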
	<p>At first glance, <xref ref-type="fig" rid="fig02">Figure 2</xref> should seem suggestive. For the
visual system of the human observer, the square block
structure of img(D) along the diagonal is easy to identify.
The squares along the diagonal represent the fixations.
Because fixations are spatially confined, their sample
distances are short and their gray level is near black.
The duration of a fixation is the diagonal (side) length
of the square. The first off-diagonal rectangles represent
the saccades between successive fixations. Spatially
wider saccadic jumps are brighter and shorter
jumps are darker. The building blocks form a hierarchy.
First level squares are the fixations, second level
squares are clusters of fixations, and so on, see fig. 3 (a).
The hierarchy of squares along the diagonal is the visual
representation for the trajectory (screen)spacetime
coherence over different time spans, i.e., the scaling
property in time. The scale runs from the base-scale, set
by the sampling rate of the tracker, into its first physiological
scale, i.e., the time-scale in a single fixation,
showing, e.g., tremor, drift, and microsaccades, into the
time-scale of several fixations within a dwell, viewing
interesting regions, and finally into the time-scale of
shifts in interest, changing the viewing behavior.</p>

<fig id="fig02" fig-type="figure" position="float">
					<label>Figure 2</label>
					<caption>
						<p>Image of time indexed matrix of 2-point combinatorial distances img(D)</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-10-01-a-figure-02.png"/>
				</fig>
	</sec>
	<sec id="s4c">
	<title>Visual assessment of trajectory spacetime</title>
	<p>The higher level splitting of the viewing behavior
in space and time is a much debated subject
[<xref ref-type="bibr" rid="b170">170</xref>]. The
rationale comes under various names in different contexts. At its base, there is a dichotomy in terms of
global/local [<xref ref-type="bibr" rid="b57 b106 b56">57, 106, 56</xref>], coarse/fine
[<xref ref-type="bibr" rid="b119 b49">119, 49</xref>], ambient/focal [<xref ref-type="bibr" rid="b62">62</xref>], where/what [<xref ref-type="bibr" rid="b144">144</xref>], examining/noticing [<xref ref-type="bibr" rid="b178">178</xref>],
which is backed by anatomical findings, i.e., the concept
of a ventral and dorsal pathway for visual information
processing [<xref ref-type="bibr" rid="b162 b144">162, 144</xref>].</p>
	<p>If this dichotomous splitting is correct, it would be
sensible to find a corresponding splitting in the output
of visual processing, i.e., in the spatio-temporal
pattern of fixations and saccades. Here, the visual
assessment of the spacetime representation
will prove helpful. As an example, in <xref ref-type="fig" rid="fig03">Figure 3</xref>,
three scanpaths from the publicly available database
DOVES [<xref ref-type="bibr" rid="b167">167</xref>] are shown. DOVES contains the scanpaths of
29 human observers as they viewed 101 natural images [<xref ref-type="bibr" rid="b169">169</xref>]. Studying
human viewing behavior while viewing pictures and
images is a common subject in vision research. Since
the seminal work of Buswell [<xref ref-type="bibr" rid="b24">24</xref>], one often repeated
general statement is that people tend to make spatially
widely scattered short fixations early, transitioning to
periods of spatially more confined, longer fixations as
viewing time increases [<xref ref-type="bibr" rid="b8">8</xref>].
This behavior is exhibited in fig. 3 (b). Here, observer
CMG2 looks at stimulus img01019. Visible are
three major second level blocks. The classical interpretation
would be that the second block, with its more
variable structure, reflects the global examining phase,
while the following more homogeneous block reflects
the noticing phase. The first block at the beginning
represents the well-known central fixation bias in scene
viewing [<xref ref-type="bibr" rid="b155 b17">155, 17</xref>].</p>

<fig id="fig03" fig-type="figure" position="float">
					<label>Figure 3</label>
					<caption>
						<p>Hierarchy of sample clusters, first level are fixations, second level are clusters of fixations, rectangles
of the first off-diagonal represent saccades</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-10-01-a-figure-03.png"/>
				</fig>



	<p>Interestingly, the database also contains good examples
of the inverse behavior, e.g., observer ABT2 looking
at image img00077, see fig. 3 (c). Here the spatiotemporal
pattern could be interpreted as: first the central
fixation bias, second a local noticing, and only then
a global scanning. This behavior is not uncommon, as
Follet, Le Meur, and Baccino [<xref ref-type="bibr" rid="b45">45</xref>] have noted.</p>
	<p>These are only two examples from the database DOVES, which contains approximately 3000 scanpaths.
The visual inspection makes it possible to get a quick
overview of the spatio-temporal patterns for many
scanpaths and to get an intuitive understanding of
prevailing pattern classes. Scanning DOVES visually
shows that a significant portion of the scanpaths exhibits
a spatio-temporal pattern which does not fit into the
classical coarse-fine structure, e.g., subject KW2 looking
at img00031 in fig. 3 (d). Of course, the examples
are cursory and it is not our intention at this stage to
discuss image scanning behavior. The purpose of the
examples is twofold: firstly, to show that by a visual
assessment of img(D)s, one can reach a good intuitive
understanding of spatio-temporal patterns and regularities
in scanpaths. The human visual system is an
excellent pattern detector, a resource for investigations
that should be utilized, notwithstanding the fact that
a statistical examination of the data and the statistical
test of hypotheses must confirm “seen” patterns. The
search for simple scanpath patterns is a common task
for many research questions [<xref ref-type="bibr" rid="b105">105</xref>].</p>
	<p>Secondly, to show that the time course of the scanpaths is an
important factor, especially when discussed in the context
of top-down strategies versus bottom-up saliency.
A good quantitative model should replicate the empirically
observed spatio-temporal pattern classes, reflecting
the order of transitions between different scanning
regimes and their internal substructure. The whole pattern
shows global statistics as well as sub-statistics in
the different regimes. When modeling scanpaths, very
often scanpath data are aggregated into simple feature
vectors containing summary statistics as features,
i.e., mean number of fixations, mean fixation duration,
mean saccadic amplitude, etc. A model is considered
good if it can replicate the empirical summary statistics.
This neglects any time course and hierarchy in the
patterns.</p>
	<p>The next step will be to exploit the representation as
a time indexed matrix of all combinatorial 2-point distances
as a precise instrument of trajectory segmentation
and interpretation.</p>
	</sec>
	<sec id="s4d">
	<title>Homology for spacetime coherence</title>
	<p>At this stage, the human visual system has still been
serving as pattern detector. The goal is to extract the interesting
part of the information about the hierarchical
spatio-temporal configuration of fixations, clusters of
fixations and returns from the distance representation,
and to do so on an automated basis, without any user
defined parametrization, in a robust way. The question
is how to express and implement this coherence
algorithmically. The task will be accomplished in three
steps. <xref ref-type="fig" rid="fig20">Step 1</xref> (see also <xref ref-type="fig" rid="fig04">Figure 4</xref>).</p>


<fig id="fig20" fig-type="figure" position="float">
					<label>Step 1</label>
					
					<graphic id="graph20" xlink:href="jemr-10-01-a-figure-20.png"/>
				</fig>
				
				<fig id="fig04" fig-type="figure" position="float">
					<label>Figure 04</label>
					<caption>
						<p>Surface plot of time indexed matrix of combinatorial 2-point distances</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-10-01-a-figure-04.png"/>
				</fig>

	<p>Clearly visible in the surface plot representation are
rectangular columns with a small on-top variation. The
small variation within blocks is considered noise. In the
image view it could be regarded as a kind of texture.
For a better intuitive understanding of the topological
approach, consider the 3D surface plot as a kind of
landscape which is progressively flooded. Coherent
are those parts of the landscape which lie below a certain sea
level and form a lake-like area without internal islands.
Deciding whether a point lies below or above sea level amounts
to filtering the level values according to a threshold. This is done
in the next step, <xref ref-type="fig" rid="fig21">Step 2</xref>.</p>
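The "flooding" is nothing more than a sublevel-set filter on D; a minimal sketch (the toy landscape values are invented):

```python
import numpy as np

def flood(D, sea_level):
    """Sublevel-set filter ft(D): True marks entries at or below the
    sea level (the flooded, coherent part of the landscape)."""
    return np.asarray(D) <= sea_level

# Toy landscape: a 2x2 coherent block of short distances and one
# distant sample.
landscape = np.array([[0, 1, 5],
                      [1, 0, 4],
                      [5, 4, 0]])
low  = flood(landscape, 1)   # only the first block is under water
high = flood(landscape, 5)   # everything is under water
```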


<fig id="fig21" fig-type="figure" position="float">
					<label>Step 2</label>
					
					<graphic id="graph21" xlink:href="jemr-10-01-a-figure-21.png"/>
				</fig>


	<p>Notice the punctuated block structure in the image
representation img(ft(D)), see <xref ref-type="fig" rid="fig05">Figure 5</xref>. While the overall
square block structure along the diagonal and the
off-diagonal rectangle block structure are still visible, the
holes represent the incoherence or noise. The incoherence
is eliminated by closing the holes, i.e., raising
the threshold (<xref ref-type="fig" rid="fig22">Step 3</xref>).</p>
<fig id="fig05" fig-type="figure" position="float">
					<label>Figure 5</label>
					<caption>
						<p>Filtered time indexed matrix of combinatorial
2-point distances. Magnification shows small components.</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-10-01-a-figure-05.png"/>
				</fig>
				<fig id="fig22" fig-type="figure" position="float">
					<label>Step 3</label>
					
					<graphic id="graph22" xlink:href="jemr-10-01-a-figure-22.png"/>
				</fig>



	<p>The coherent white part along the diagonal in the image
representation is the partition of the data that we
have been seeking.</p>
	<p>It should be stated explicitly that the parameter t<sub>c</sub> for
separation is not preassigned. The criterion for separation
is the coherent structure/pattern of trajectory
spacetime. The distance threshold is increased until coherence
is reached. This is done individually for every
trajectory. The pattern is global for the trajectory
and does not depend on local specifics. It is important
to note that a more detailed analysis within each block
will separate the noise into physiological noise (tremor,
drift, micro saccades, etc.) and instrument noise. In the
supplementary document this approach can be interactively
investigated.</p>
	<p>All this is easy to understand for human intuition,
but needs a formal mathematical theory along with
an algorithm and efficient computer implementation.
Generally speaking, there exist three methods to tackle
the problem. The first is the obvious way, i.e., a human
observer varies the “sea level”. Human evaluation, especially
of noisy data, is common practice in eye tracking
data analysis [<xref ref-type="bibr" rid="b132">132</xref>]. The second way is using a simple “brute force”
image analysis algorithm. The third, more elegant, way
is to use algebraic topology in the form of homology.
Homology tells us about the connectivity and number
of holes in a space, in our representation the “islands
and lakes” created while flooding the space. Counting
the number of connected components and the number
of holes is calculating the first two Betti numbers, <italic>&#x03B2;<sub>0</sub></italic>
and <italic>&#x03B2;<sub>1</sub></italic>, which is a fairly simple topological characteristic.
The detailed description of the theory can be found
in any good book on algebraic topology, e.g., Munkres
[<xref ref-type="bibr" rid="b109">109</xref>], Hatcher [<xref ref-type="bibr" rid="b60">60</xref>], or Kaczynski, Mischaikow, and
Mrozek [<xref ref-type="bibr" rid="b75">75</xref>]. At first sight, a formal theory might
seem daunting, but the important fact is that a simple,
almost trivial topological argument “no holes in trajectory
spacetime” is sufficient to unambiguously determine
sample clusters on different scales. The very nature
of an event and a cluster of events is its “coherence”
in space and time. Time comes with an order
(consecutive) and space comes with a topology (vicinity,
nearness).</p>
	<p>What we have obtained is the adjacency matrix
A = [a<sub>i,j</sub>] of graph theory for our gaze trajectory.
The side length of a square around the diagonal is
proportional to the duration of fixation (the time scale
is fixed by the sampling rate of the gaze tracker).
The rectangles in the upper and lower triangular
matrix represent a return (recurrence). The length
of each block contains the time information, i.e., the
duration of a cluster. Separating the blocks results in the sequence of fixations and their durations as well
as the duration of intermediate gaps. Suppressing
the time information in the matrix, i.e., shrinking the
squares along the diagonal to one point entries, one
arrives at the classical scanpath string representation
of ABCDEC in the form of a matrix, see <xref ref-type="fig" rid="fig06">Figure 6</xref>.</p>
	<p>The off-diagonal elements are the coupling, i.e.,
recurrence of the fixations. The same argument for the
second level squares yields the dwells, i.e., one obtains
(ABC)<sup>1</sup>(DE)<sup>2</sup>C<sup>1</sup> (superscripts number the dwells).</p>
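A toy sketch of how the string representation can be obtained once fixation centroids are known (the radius-based matching rule is our own simplification, not the authors' method):

```python
def scanpath_string(centroids, radius):
    """Collapse a time-ordered sequence of fixation centroids into the
    classical scanpath string: a revisit within `radius` of an earlier
    fixation reuses its letter."""
    letters, labels = [], []
    for cx, cy in centroids:
        for k, (px, py) in enumerate(letters):
            if ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5 <= radius:
                labels.append(chr(ord('A') + k))   # recurrence
                break
        else:
            letters.append((cx, cy))               # new fixation target
            labels.append(chr(ord('A') + len(letters) - 1))
    return ''.join(labels)

# Six fixations, the last one returning to the third location.
fixations = [(0, 0), (100, 0), (200, 0), (300, 0), (400, 0), (200, 0)]
string = scanpath_string(fixations, radius=10)   # -> "ABCDEC"
```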

<fig id="fig06" fig-type="figure" position="float">
					<label>Figure 6</label>
					<caption>
						<p>Matrix representation for scanpath</p>
					</caption>
					<graphic id="graph06" xlink:href="jemr-10-01-a-figure-06.png"/>
				</fig>


	<p>To summarize: for trajectory separation, three computational
steps are needed. A distance representation
for the gaze trajectory in the form of a time-indexed matrix
of all combinatorial 2-point distances is calculated. To
separate the matrix into subparts, a sliding threshold t
is set, which is the sought diameter of a fixation. The
threshold t is increased from 0 in steps and the number
of connected parts, <italic>&#x03B2;<sub>0</sub></italic>, and holes, <italic>&#x03B2;<sub>1</sub></italic>, is traced. As soon
as the square blocks along the diagonal form a simply
connected area without holes, the minimum threshold
t<sub>c</sub> for the segmentation into fixations has been found.
Further raising the threshold yields the dwells.</p>
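The three steps can be sketched in pure Python/NumPy. The connectivity conventions (8-connected foreground, 4-connected background) and the toy one-dimensional protocol are our assumptions, and the stopping rule is simplified; production code would use an image-processing library's labeling routines.

```python
import numpy as np
from collections import deque

def count_components(mask, nbrs):
    """Count connected components of True pixels via flood fill."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros(mask.shape, dtype=bool)
    rows, cols = mask.shape
    n = 0
    for y in range(rows):
        for x in range(cols):
            if mask[y, x] and not seen[y, x]:
                n += 1
                seen[y, x] = True
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in nbrs:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return n

N8 = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def betti(mask):
    """beta_0 = 8-connected foreground components; beta_1 = holes,
    i.e., 4-connected background components not reaching the border."""
    mask = np.asarray(mask, dtype=bool)
    b0 = count_components(mask, N8)
    bg = np.pad(~mask, 1, constant_values=True)  # border background merges
    b1 = count_components(bg, N4) - 1            # minus the border component
    return b0, b1

# Toy 1-D protocol: one noisy "fixation" of five samples.
pts = np.array([1.0, 0.0, 1.0, 2.0, 1.0])
D = np.abs(pts[:, None] - pts[None, :])
trace = [(t, betti(D <= t)) for t in (0.0, 1.0, 2.0)]
# At t = 1 the diagonal block is punctured (beta_1 > 0); at t = 2 the
# holes close and the block is coherent, so t_c = 2 for this toy data.
```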
	</sec>
	<sec id="s4e">
	<title>Abstract spacetime clustering</title>
	<p>So far, the segmentation process for the gaze trajectory
in screen space has been discussed, but the method
can be made much more far-reaching. In order to do so,
the meaning and interpretation of space will be generalized.</p>
	<p>Up to now the concept of space has been the physical
space and its Euclidean modeling, specifically its Euclidean
metric. The crucial point is that the eyes, seen
as a mechanical system, are moving in physical space,
but the driving physiological and psychological processes
are working in “physiological and psychological
spaces”. An example of a physiological space is the
color space and a much more complex space is the social
space of humans when interacting, say, at a cocktail
party. In this space the items or “points” are interlocutors,
and the eyes are switching between these points
with motivations such as signaling interest in the interlocutor’s small talk, which is a gesture of politeness
and does not have the primary goal of gathering visual
information. Gathering information means looking at the
face to gauge the mood, etc. What counts is not the
physical distance between the interlocutors, but rather
some sort of social communication-distance. Relevant
are the “content” of the scene and the “strategy” of the
observer while interacting, which in turn is reflected in
the saccade-and-fixate pattern. Physical space-distance
is not a restricted resource for the eyes. The eyes can
move effortlessly from one point to any other point in
physical space.</p>
	<p>As an example of the approach, try for yourself the
following search paradigm, see <xref ref-type="fig" rid="fig07">Figure 7</xref>. In the collage of
colored shapes, all but two colored shapes occur three
times; one colored shape occurs twice and another colored
shape occurs four times: which two are they? Admittedly,
searching for numerosity is hard! Nevertheless,
numerosity is a good example for an abstract feature,
not tied to a primary sensory input. You can track
and visualize your own search strategy in the supplementary
interactive document.</p>
<fig id="fig07" fig-type="figure" position="float">
					<label>Figure 7</label>
					<caption>
						<p>Search plus path</p>
					</caption>
					<graphic id="graph07" xlink:href="jemr-10-01-a-figure-07.png"/>
				</fig>



	<p>At the beginning, many trajectories have fixations
on a color. This derives from the fact that humans
can identify color blobs very easily in their visual field.
Thus, the first “search channel” is very often color.<xref ref-type="fn" rid="FN11">11</xref>
The second channel is an easily detectable “geometry”.
While the distinct color blobs are far apart in terms of
geometric Euclidean distance, they are near in color space,
i.e., the red disk (0,9) is near, actually identical,
in color to the red disks (5,3) and (5,11). The same holds
true for the “geometry channel”, e.g., the motives with
a circular boundary. It is likely that most subjects will
start out with a random search strategy, which after a
while will be abandoned in favor of a systematic, row-by-row
search strategy.</p>
	<p>The qualitative approach to the analysis of geometric stimuli
is taken in “Gestalt psychology”. A more recent
and formal approach is taken in structural information theory and algorithmic information theory,
which can be made quantitative. Using specialized
metrics differentiates the channels in the search strategy
in a metric way and helps to classify viewers. It
is helpful to change the terminology and to say that
the eyes are moving in “feature space”. This space has
different dimensions like color, shape, etc., which form
subspaces. The feature space is a topological space. For
ease of use it could be modeled as a metric space and
the path is encoded in feature distance. Of course, the
metric has to be adapted for special purposes. A simple
example is the distance in color space. Simple is certainly
relative, considering the long way from the
first color theories of the 19<sup>th</sup> century to the elaborated
color spaces, like hue-based spaces, used in printing
and computer imaging. This development has by no
means come to an end. A (much) more complex example
is the distance in social interaction.</p>
	<p>Nevertheless, the starting point is always the basic
notion of a metrizable “neighborhood or nearness” relation
in the form of a metric. The metric is the crucial
starting point to emphasize different aspects in the trajectory.
Let us start with the metric on a space X. The
general mathematical notion of a metric is a function (<xref ref-type="fig" rid="eq02">Equation 2</xref>)</p>
<fig id="eq02" fig-type="equation" position="anchor">
					<graphic id="equation02" xlink:href="jemr-10-01-a-equation-02.png"/>
				</fig>

<p>satisfying for all <italic>x, y, z &#x2208; X</italic> the conditions</p>
	<p>Positiveness: d(x, y) &#x2265; 0, with equality only for x = y
Symmetry: d(x, y) = d(y, x)
Triangle inequality: d(x, y) &#x2264; d(x, z) + d(z, y)</p>
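The axioms can be checked mechanically on a finite point set. A small sketch follows; the discrete metric used here is the textbook structureless example, which we assume is what Equation 3 shows (the equation itself is only available as an image):

```python
from itertools import product

def is_metric(points, d, tol=1e-9):
    """Brute-force check of the metric axioms on a finite point set."""
    for x, y, z in product(points, repeat=3):
        if d(x, y) < -tol:
            return False                        # positiveness
        if (abs(d(x, y)) <= tol) != (x == y):
            return False                        # d(x, y) = 0 iff x = y
        if abs(d(x, y) - d(y, x)) > tol:
            return False                        # symmetry
        if d(x, y) > d(x, z) + d(z, y) + tol:
            return False                        # triangle inequality
    return True

# The "bare skeleton" that preassigns no structure: every pair of
# distinct points is at distance 1 (the discrete metric).
def discrete(x, y):
    return 0 if x == y else 1
```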
	<p>This definition is only the bare skeleton of a metric. By
itself it does not preassign any structure in the data, as
is shown in the example:(<xref ref-type="fig" rid="eq03">Equation 3</xref>)</p>
<fig id="eq03" fig-type="equation" position="anchor">
					<graphic id="equation03" xlink:href="jemr-10-01-a-equation-03.png"/>
				</fig>



	<p>A more complex metric gives a much richer structure,
emphasizing interesting aspects in the data. In RGB
color space, the distance between two colors C<sub>1</sub>(R, G, B)
and C<sub>2</sub>(R, G, B) is simply (<xref ref-type="fig" rid="eq04">Equation 4</xref>):</p>
<fig id="eq04" fig-type="equation" position="anchor">
					<graphic id="equation04" xlink:href="jemr-10-01-a-equation-04.png"/>
				</fig>
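A direct sketch of this Euclidean RGB distance (the color triples are invented for illustration):

```python
import math

def rgb_distance(c1, c2):
    """Euclidean distance between two RGB triples, as in Equation 4."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

red      = (255, 0, 0)
dark_red = (128, 0, 0)
blue     = (0, 0, 255)
# In this metric, red is far closer to dark red than to blue, even
# though all three are equally just "points" in the tracker's
# geometric view of the stimulus.
```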



	



	<p>A different example is reading. Here it would be appropriate
to work within text space. For the understanding
of reading patterns, not only the physical spacing
of characters but also the semantic distance is important.
The semantic distance measures the difficulty of understanding words in a reading context. In the flow
of reading, words can be physically close together, but
if a word does not fit into the context or is not known to
the reader, the reader will have difficulty processing
the word, and a regression is most likely. Understanding
a text requires coherence of word semantics
as well as coherence with the narrative in which the words occur. The
reader is traveling in general feature spaces, and coherence
is maintained or broken.</p>
	<p>Along these lines, more complex spaces can be constructed
and analyzed. Clustering the data in feature space directly
reveals the process-related time ordering without the
intermediate separation of the data into fixations and
saccades and the subsequent assignment of areas of interest.
The process pattern works directly on the items of interest.
To cite Stark and Ellis [<xref ref-type="bibr" rid="b151">151</xref>]: &#x201C;Sensory elements are semantic subfeatures
of scenes or pictures being observed
and motor elements are saccades that represent
the syntactical structural or topological
organization of the scene.&#x201D;</p>
	<p>The ITop algorithm is essentially meant for stimuli-space
based analyses. The idea of directly connecting
stimulus information and eye-tracking data is also proposed
in [<xref ref-type="bibr" rid="b3">3</xref>].</p>
	</sec>
</sec>	


<sec id="s5">
<title>Results for fixation identification</title>
<p>To show the algorithm’s potential for level one eye-tracking
data segmentation, a basic comparison with a
state-of-the-art algorithm is given. An in-depth evaluation
together with a MATLAB&#x00AE; reference implementation
will be provided in a follow-up article.</p>
<p>Current research has raised the awareness that algorithms
commonly in use, especially when used “out
of the box”, markedly differ in their results and an
overall standard is lacking [<xref ref-type="bibr" rid="b3">3</xref>].
This situation escalates with each new algorithm proposed.
The topological approach introduced herein is
no exception. To make results comparable as much as
possible a common reference set together with computed
results, e.g., number and duration of events,
event detected at samples, would be preferable. In
a recent article, [<xref ref-type="bibr" rid="b63">63</xref>] introduced a
new algorithm, identification by two-means clustering
(I2MC), together with an open source reference implementation
as well as ten datasets to show the performance
of their approach. The I2MC algorithm is evaluated
against seven state-of-the-art event detection algorithms
and is reported to be the most robust to high
noise and data loss levels, which makes it suitable for
eye-tracking research with infants, school children, and
certain patient groups. To assess performance and
comparability, identification by topological arguments
(ITop) is checked against I2MC. The data are
taken from www.github.com/royhessels/I2MC. The
datasets comprise two participants, each participant
having five trials, resulting in ten datasets overall. Both
eyes are tracked. I2MC makes use of the data from both
eyes for fixation detection, whereas ITop classifies solely
on the basis of the left-eye data series. I2MC uses an
interpolation algorithm for gap filling; ITop works without
gap filling. <xref ref-type="fig" rid="fig08">Figure 8</xref> shows the classification results for the ten
datasets under the ITop and I2MC algorithms.</p>

<fig id="fig08" fig-type="figure" position="float">
					<label>Figure 8</label>
					<caption>
						<p>Performance of ITop and I2MC on ten
datasets. The y-axis is in participant.trial, the x-axis
is in samples. ITop fixation periods are in yellow and
I2MC fixation periods are in orange. Dark blue is the
gap between detected fixations or periods of data loss.</p>
					</caption>
					<graphic id="graph08" xlink:href="jemr-10-01-a-figure-08.png"/>
				</fig>


<p>At some positions the ITop signal is split into two
peaks, e.g., 1.3 (at samples 360–382 and 533–542) and
2.5 (at samples 1155–1165). This is not an error but a finer
view of the data, as discussed in the following examples.
The two approaches are in good agreement.
Whenever I2MC detects a fixation, ITop does too. ITop
detects two additional fixations, one for 2.2 (at samples
1048–1049) and one for 2.3 (at samples 17–19). A closer
look at the scatter plot as well as the position plot reveals
two very close fixations, see (<xref ref-type="fig" rid="fig09">Figure 9</xref>, <xref ref-type="fig" rid="fig10">Figure 10</xref>) and
(<xref ref-type="fig" rid="fig11">Figure 11</xref>, <xref ref-type="fig" rid="fig12">Figure 12</xref>).</p>	
<p>Although no data interpolation is done, ITop can
identify a shift in the direct neighborhood of data loss.
This is shown for 2.1 at samples 242–246, see <xref ref-type="fig" rid="fig13">Figure 13</xref>.</p>
<p>At some positions the gap between fixations is split,
e.g., for 1.3 at samples 360–382. This is a finer view
of the data. As discussed, a saccade very often shows
a complex stopping signal [<xref ref-type="bibr" rid="b65">65</xref>]; post-saccadic oscillations are a
prominent example [<xref ref-type="bibr" rid="b116">116</xref>]. The
term complex is meant in contrast to abrupt stopping.
It does not necessarily mean a post-saccadic oscillation
(PSO). A PSO is only one example of a named event
with a more complicated “braking” pattern. This is reflected
in the splitting of the signal. The position plot
for 1.3 at samples 360–382 shows such a complex behavior,
see <xref ref-type="fig" rid="fig14">Figure 14</xref>.</p>
<p>The splitting according to braking can be much finer
but is still detected by ITop. An example is 1.3 at samples
533–543. Here, a very small shift in the mean of the
y-position signal occurs shortly after stopping, showing
the high sensitivity of ITop, see <xref ref-type="fig" rid="fig15">Figure 15</xref>.</p>
<p>It must further be noted that the saccades according
to ITop are longer (spatially wider) than under I2MC.
As an example, dataset 2.3 at samples 499–515 is shown
in detail. I2MC detects a gap between two fixations at
samples 502–507, see <xref ref-type="fig" rid="fig16">Figure 16</xref>.</p>
<p>ITop detects the gap at the same location, but at samples
499–515; the detected gap is therefore approximately twice
as long, see <xref ref-type="fig" rid="fig17">Figure 17</xref>. The position plot shows a jag in the y-signal, which
could potentially mislead an algorithm, see <xref ref-type="fig" rid="fig18">Figure 18</xref>. ITop also indicates other changes in the data series,
like stationarity, e.g., the double peaked signal for
dataset 2.5 at samples 1155–1165 indicates the onset of
a drift in a fixation, see <xref ref-type="fig" rid="fig19">Figure 19</xref>.</p>
<p>Notwithstanding that I2MC and ITop are in good
overall agreement, they also show differences on a finer
scale. If one takes into consideration the large number
of algorithms and different approaches for event
detection, it must be clear that the overall results can
be markedly different. This can only be mitigated by
defining events in an unambiguous and definite way
and by comparing algorithms on the basis of standard
data on a sample-by-sample level.</p>
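One standard way to quantify such sample-by-sample agreement is a chance-corrected score such as Cohen's kappa over per-sample event labels. The following sketch uses made-up label series purely for illustration; it is not the comparison procedure applied above:

```python
def cohens_kappa(labels_a, labels_b):
    """Chance-corrected sample-by-sample agreement between two
    binary event classifications (1 = fixation, 0 = other)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_a1 = sum(labels_a) / n          # proportion of fixation samples, rater A
    p_b1 = sum(labels_b) / n          # proportion of fixation samples, rater B
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)  # chance agreement
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical per-sample labels from two event detectors:
itop = [1, 1, 1, 0, 0, 1, 1, 1, 0, 0]
i2mc = [1, 1, 1, 0, 0, 1, 1, 0, 0, 0]
print(round(cohens_kappa(itop, i2mc), 3))  # 0.8
```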

				
<fig id="fig09" fig-type="figure" position="float">
					<label>Figure 9</label>
					<caption>
						<p>Scatter plot for dataset 2.2 at sample 1048 (red
square at sample 1048) shows two clusters very close to
each other.</p>
					</caption>
					<graphic id="graph09" xlink:href="jemr-10-01-a-figure-09.png"/>
				</fig>
				
				<fig id="fig10" fig-type="figure" position="float">
					<label>Figure 10</label>
					<caption>
						<p>Position plot for dataset 2.2 at sample 1048
(red line at sample 1048) shows a small jump in the
mean. The small jump is detected in spite of significant
noise.</p>
					</caption>
					<graphic id="graph10" xlink:href="jemr-10-01-a-figure-10.png"/>
				</fig>
				
				<fig id="fig11" fig-type="figure" position="float">
					<label>Figure 11</label>
					<caption>
						<p>Scatter plot for dataset 2.3 at samples 17–19
(red square at sample 18) shows two clusters.</p>
					</caption>
					<graphic id="graph11" xlink:href="jemr-10-01-a-figure-11.png"/>
				</fig>
				
				<fig id="fig12" fig-type="figure" position="float">
					<label>Figure 12</label>
					<caption>
						<p>Position plot for dataset 2.3 at samples 17–19
(red line at sample 18) shows a small jump in the mean.</p>
					</caption>
					<graphic id="graph12" xlink:href="jemr-10-01-a-figure-12.png"/>
				</fig>
				
		
				
				<fig id="fig13" fig-type="figure" position="float">
					<label>Figure 13</label>
					<caption>
						<p>Position plot for dataset 2.1 at samples 242–
246 (red line at sample 242) shows a small jump in the
mean after a period of data loss.</p>
					</caption>
					<graphic id="graph13" xlink:href="jemr-10-01-a-figure-13.png"/>
				</fig>
				
				
				
				
				
				<fig id="fig14" fig-type="figure" position="float">
					<label>Figure 14</label>
					<caption>
						<p>Position plot for dataset 1.3 between sample
360 (green line) and sample 382 (red line) showing a
complex transit between two fixations.</p>
					</caption>
					<graphic id="graph14" xlink:href="jemr-10-01-a-figure-14.png"/>
				</fig>
				
				
				
				
				<fig id="fig15" fig-type="figure" position="float">
					<label>Figure 15</label>
					<caption>
						<p>Position plot for dataset 1.3 between sample
533 (green line) and sample 543 (red line) showing a
small jump in the mean of the y-position after stopping.
The jump occurs at the red line.</p>
					</caption>
					<graphic id="graph15" xlink:href="jemr-10-01-a-figure-15.png"/>
				</fig>
				
				
				
				<fig id="fig16" fig-type="figure" position="float">
					<label>Figure 16</label>
					<caption>
						<p>Scatter plot for dataset 2.3 between sample
502 (green square) and sample 507 (red square).</p>
					</caption>
					<graphic id="graph16" xlink:href="jemr-10-01-a-figure-16.png"/>
				</fig>
				
				
				
				<fig id="fig17" fig-type="figure" position="float">
					<label>Figure 17</label>
					<caption>
						<p>Scatter plot for dataset 2.3 between sample
499 (green square) and sample 515 (red square).</p>
					</caption>
					<graphic id="graph17" xlink:href="jemr-10-01-a-figure-17.png"/>
				</fig>
				
				
				
				
				<fig id="fig18" fig-type="figure" position="float">
					<label>Figure 18</label>
					<caption>
						<p>Position plot for dataset 2.3 between sample
499 (green line) and sample 515 (red line). A jag occurs
at sample 504, potentially misleading algorithms.</p>
					</caption>
					<graphic id="graph18" xlink:href="jemr-10-01-a-figure-18.png"/>
				</fig>
				
				
	<fig id="fig19" fig-type="figure" position="float">
					<label>Figure 19</label>
					<caption>
						<p>Position plot for dataset 2.5 shows a drift
beginning at sample 1155 (red line).</p>
					</caption>
					<graphic id="graph19" xlink:href="jemr-10-01-a-figure-19.png"/>
				</fig>
				


</sec>	

<sec id="s6">
<title>Discussion</title>
<p>A general overview of the algorithms currently in
use for event detection in eye-tracking data is given,
showing that there is no standard for event detection,
even in the case of the most basic events such as fixations
and saccades.</p>
<p>A topological approach to event detection in raw
eye-tracking data is introduced, ITop. The detection
is based on the topological abstraction of coherence in
space and time of the sample points. The idea of trajectory
spacetime coherence is given a precise meaning
in topological terms, i.e., “no holes in trajectory spacetime”,
a strikingly simple topological argument for the
separation of the sample data. The topological argument
is a kind of common rationale for most of the algorithms
currently in use. The basis for the topological
approach is the representation of raw eye-tracking data
in the form of a time indexed matrix of combinatorial
2-point distances. This representation makes the coherence
of sample data in space and time easily accessible.
The time-ordered representation of combinatorial 2-point
distances makes the gaze trajectory independent of
Euclidean motions, which is a desired property when
comparing scanpaths, since distances are the invariants
of Euclidean geometry.</p>
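The invariance of the 2-point distance matrix under Euclidean motions can be verified numerically. The sketch below (illustrative names; a planar rotation plus translation stands in for a general Euclidean motion) shows that the matrix is unchanged:

```python
import math

def distance_matrix(traj):
    """Time-indexed matrix of pairwise 2-point distances of a trajectory."""
    n = len(traj)
    return [[math.dist(traj[i], traj[j]) for j in range(n)] for i in range(n)]

def euclidean_motion(traj, angle, shift):
    """Apply a rotation by `angle` followed by a translation `shift`."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + shift[0], s * x + c * y + shift[1])
            for x, y in traj]

traj = [(0.0, 0.0), (3.0, 1.0), (3.5, 1.2), (1.0, 4.0)]
moved = euclidean_motion(traj, angle=0.7, shift=(10.0, -5.0))

d1, d2 = distance_matrix(traj), distance_matrix(moved)
same = all(abs(d1[i][j] - d2[i][j]) < 1e-9
           for i in range(len(traj)) for j in range(len(traj)))
print(same)  # True: distances are the invariants of Euclidean geometry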
<p>For visualization, the matrix is displayed as a
grayscale image to show the spatio-temporal ordering
and coherence of the gaze-points in display space.</p>	
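A minimal sketch of this visualization step, assuming a simple linear mapping of distances to 8-bit gray values (the helper name is hypothetical):

```python
import math

def to_grayscale(matrix):
    """Map a distance matrix to 8-bit gray values (0 = black for zero
    distance, 255 = white for the maximum distance in the matrix)."""
    flat = [v for row in matrix for v in row]
    d_max = max(flat) or 1.0          # avoid division by zero
    return [[round(255 * v / d_max) for v in row] for row in matrix]

# Two tight clusters of gaze samples separated by a large jump:
traj = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
dist = [[math.dist(p, q) for q in traj] for p in traj]
img = to_grayscale(dist)
# The diagonal is 0 (black); within-fixation blocks stay dark, while the
# jump between clusters appears as bright off-diagonal blocks.
print(img[0][0], img[0][1] < img[0][2])  # 0 True
```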
<p>For the human visual system the interesting parts are
easy to detect, e.g., fixations, dwells, etc. The visual
assessment of spatio-temporal coherence is discussed
and exemplified in the context of coarse-fine (globallocal)
scanpath characteristics. It is argued that the visual
assessment of the trajectory spacetime is helpful
to identify general patterns in viewing behavior and to
develop an intuitive understanding thereof.</p>
<p>To separate fixations and higher level clusters of fixations
out of eye-tracking data, the common argument
of spatio-temporal coherence, implicitly used in existing
algorithms, is converted into an explicit topological
argument, i.e., “no holes in trajectory spacetime”. The
method encompasses the well-known criteria that are
partially expressed as thresholds for velocity, acceleration,
amplitude, duration, etc. Tracking the number of
connected parts and holes while varying the scale allows
the partitioning of the distance matrix into the
classical scanpath oculomotor events, i.e., segments of
fixations and saccades. The segments are identified by
their spatio-temporal coherence by means of simple homology,
a classical tool of algebraic topology.
No preprocessing of the data is needed, i.e., no
gap filling, filtering, or smoothing, preserving the
data “as is”. This approach makes it possible to identify
the single events without any predefined parameters.
Postprocessing of the detected events, such as merging
nearby fixations or removing physiologically implausibly
short fixations and saccades, is not needed.</p>
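The flavor of tracking connected parts while varying the scale can be conveyed by a deliberately simplified sketch: counting single-linkage connected components of the samples at a spatial scale &#x03B5;. This is an illustration of the idea only, not the ITop algorithm itself, which works on the full distance matrix with homology:

```python
import math

def connected_components(traj, eps):
    """Number of connected parts when samples closer than `eps`
    are glued together (union-find over all sample pairs)."""
    n = len(traj)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(traj[i], traj[j]) < eps:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

# Two tight clusters of samples (fixations) joined by a large jump (saccade):
traj = [(0, 0), (0.1, 0.0), (0.0, 0.1), (5, 5), (5.1, 5.0), (5.0, 5.1)]
print(connected_components(traj, eps=0.5))   # 2 parts: the two fixations
print(connected_components(traj, eps=10.0))  # 1 part: everything merges
```

Varying &#x03B5; and watching the component count change is the one-dimensional shadow of the scale-tracking argument described above.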
<p>The topological segmentation is introduced in the familiar
setting of Euclidean space and its well known
metric. The advantage of this approach is that it
can be easily expanded to general spaces like color
spaces, shape spaces, etc., allowing the analysis of complex
patterns in higher human activities. The ITop algorithm
is essentially meant for stimuli-space based
analysis.</p>
	<p>In order to facilitate intuitive understanding, the
article is accompanied by a supplementary interactive
document.</p>
<p>ITop is considered a fourth approach to eye-tracking
data, in addition to the well-known threshold-based
approaches and the newer probabilistic and
machine learning methods. An expanded comparison,
analysis, and classification of the ITop detection
patterns, together with an open source MATLAB&#x00AE;
reference implementation, will be provided in future
work.</p>
</sec>	

<sec id="s7">
<title>Acknowledgement</title>
<p>We thank the anonymous reviewers, whose comments and
suggestions on earlier drafts helped to improve
and clarify this manuscript. The provision of important
references and preprints is also greatly appreciated.</p>
</sec>	

</body>

<back>
<ref-list>
<ref id="b1"><mixed-citation publication-type="unknown" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Anantrasirichai</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Gilchrist</surname>, <given-names>I. D.</given-names></string-name>, and <string-name><surname>Bull</surname>, <given-names>D. R.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Fixation identification for low-sample-rate mobile eye trackers.</article-title> In 2016 ieee international conference on image processing (icip) (p. 3126-3130). doi: <pub-id pub-id-type="doi">10.1109/ICIP.2016.7532935</pub-id></mixed-citation></ref>
<ref id="b2"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Anderson</surname>, <given-names>N. C.</given-names></string-name>, <string-name><surname>Bischof</surname>, <given-names>W. F.</given-names></string-name>, <string-name><surname>Laidlaw</surname>, <given-names>K. E. W.</given-names></string-name>, <string-name><surname>Risko</surname>, <given-names>E. F.</given-names></string-name>, and <string-name><surname>Kingstone</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Recurrence quantification analysis of eye movements.</article-title> <source>Behavior Research Methods</source>, <volume>45</volume>(<issue>3</issue>), <fpage>842</fpage>–<lpage>856</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-012-0299-5</pub-id><pub-id pub-id-type="pmid">23344735</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b3"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Andersson</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Larsson</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Stridh</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2016</year>). One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms. Behavior Research Methods, 1–22. doi: <pub-id pub-id-type="doi">10.3758/s13428-016-0738-9</pub-id></mixed-citation></ref>
<ref id="b4"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Andersson</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Sampling frequency and eye-tracking measures: How speed affects durations, latencies, and more.</article-title> <source>Journal of Eye Movement Research</source>, <volume>3</volume>(<issue>3</issue>), <fpage>1</fpage>–<lpage>12</lpage>.<issn>1995-8692</issn></mixed-citation></ref>
<ref id="b5"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Artal</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Optics of the eye and its impact in vision: A tutorial.</article-title> <source>Advances in Optics and Photonics</source>, <volume>6</volume>(<issue>3</issue>), <fpage>340</fpage>–<lpage>367</lpage>. <pub-id pub-id-type="doi">10.1364/AOP.6.000340</pub-id><issn>1943-8206</issn></mixed-citation></ref>
<ref id="b6"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Arzi</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Magnin</surname>, <given-names>M.</given-names></string-name></person-group> (<year>1989</year>). <article-title>A fuzzy set theoretical approach to automatic analysis of nystagmic eye movements.</article-title> <source>IEEE Transactions on Biomedical Engineering</source>, <volume>36</volume>(<issue>9</issue>), <fpage>954</fpage>–<lpage>963</lpage>. <pub-id pub-id-type="doi">10.1109/10.35304</pub-id><pub-id pub-id-type="pmid">2506123</pub-id><issn>0018-9294</issn></mixed-citation></ref>
<ref id="b7"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><collab>ASL</collab></person-group>. (<year>2007</year>). Eye tracker system manual asl eyetrac 6 eyenal analysis software [Computer software manual].</mixed-citation></ref>
<ref id="b8"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Babcock</surname>, <given-names>J. S.</given-names></string-name>, <string-name><surname>Lipps</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Pelz</surname>, <given-names>J. B.</given-names></string-name></person-group> (<year>2002</year>). How people look at pictures before, during, and after scene capture: Buswell revisited. In Proc. SPIE 4662, Human Vision and Electronic Imaging VII (Vol. 4662, p. 34-47). doi: <pub-id pub-id-type="doi">10.1117/12.469552</pub-id></mixed-citation></ref>
<ref id="b9"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Bahill</surname>, <given-names>A. T.</given-names></string-name>, and <string-name><surname>Kallman</surname>, <given-names>J. S.</given-names></string-name></person-group> (<year>1983</year>). <article-title>Predicting final eye position halfway through a saccade.</article-title> <source>IEEE Transactions on Biomedical Engineering</source>, <volume>30</volume>(<issue>12</issue>), <fpage>781</fpage>–<lpage>786</lpage>. <pub-id pub-id-type="doi">10.1109/TBME.1983.325078</pub-id><pub-id pub-id-type="pmid">6662536</pub-id><issn>0018-9294</issn></mixed-citation></ref>
<ref id="b10"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Bahill</surname>, <given-names>A. T.</given-names></string-name>, <string-name><surname>Brockenbrough</surname>, <given-names>A.</given-names></string-name>, and <string-name><surname>Troost</surname>, <given-names>B. T.</given-names></string-name></person-group> (<year>1981</year>). <article-title>Variability and development of a normative data base for saccadic eye movements.</article-title> <source>Investigative Ophthalmology and Visual Science</source>, <volume>21</volume>(<issue>1 Pt 1</issue>), <fpage>116</fpage>–<lpage>125</lpage>.<pub-id pub-id-type="pmid">7251295</pub-id><issn>0146-0404</issn></mixed-citation></ref>
<ref id="b11"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Bahill</surname>, <given-names>A. T.</given-names></string-name>, <string-name><surname>Clark</surname>, <given-names>M. R.</given-names></string-name>, and <string-name><surname>Stark</surname>, <given-names>L.</given-names></string-name></person-group> (<year>1975</year>). <article-title>The main sequence, a tool for studying human eye movements.</article-title> <source>Mathematical Biosciences</source>, <volume>24</volume>(<issue>3-4</issue>), <fpage>191</fpage>–<lpage>204</lpage>. <pub-id pub-id-type="doi">10.1016/0025-5564(75)90075-9</pub-id><issn>0025-5564</issn></mixed-citation></ref>
<ref id="b12"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Behrens</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Mackeben</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Schröder-Preikschat</surname>, <given-names>W.</given-names></string-name></person-group> (<year>2010</year>). <article-title>An improved algorithm for automatic detection of saccades in eye movement data and for calculating saccade parameters.</article-title> <source>Behavior Research Methods</source>, <volume>42</volume>(<issue>3</issue>), <fpage>701</fpage>–<lpage>708</lpage>. <pub-id pub-id-type="doi">10.3758/BRM.42.3.701</pub-id><pub-id pub-id-type="pmid">20805592</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b13"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Behrens</surname>, <given-names>F.</given-names></string-name>, and <string-name><surname>Weiss</surname>, <given-names>L.-R.</given-names></string-name></person-group> (<year>1992</year>). <article-title>An algorithm separating saccadic from nonsaccadic eye movements automatically by use of the acceleration signal.</article-title> <source>Vision Research</source>, <volume>32</volume>(<issue>5</issue>), <fpage>889</fpage>–<lpage>893</lpage>. <pub-id pub-id-type="doi">10.1016/0042-6989(92)90031-D</pub-id><pub-id pub-id-type="pmid">1604857</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b14"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Bennett</surname>, <given-names>A.</given-names></string-name>, and <string-name><surname>Rabbetts</surname>, <given-names>R. B.</given-names></string-name></person-group> (<year>2007</year>). <source>Clinical visual optics</source> (<edition>4th ed.</edition>). <publisher-name>Butterworth Heinemann Elsevier</publisher-name>.</mixed-citation></ref>
<ref id="b15"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Berg</surname>, <given-names>D. J.</given-names></string-name>, <string-name><surname>Boehnke</surname>, <given-names>S. E.</given-names></string-name>, <string-name><surname>Marino</surname>, <given-names>R. A.</given-names></string-name>, <string-name><surname>Munoz</surname>, <given-names>D. P.</given-names></string-name>, and <string-name><surname>Itti</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Free viewing of dynamic stimuli by humans and monkeys.</article-title> <source>Journal of Vision (Charlottesville, Va.)</source>, <volume>9</volume>(<issue>5</issue>), <fpage>1</fpage>–<lpage>15</lpage>. <pub-id pub-id-type="doi">10.1167/9.5.19</pub-id><pub-id pub-id-type="pmid">19757897</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="b16"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Bezdek</surname>, <given-names>J.</given-names></string-name>, and <string-name><surname>Hathaway</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2002</year>). Vat: a tool for visual assessment of (cluster) tendency. In Neural networks, 2002. ijcnn ’02. proceedings of the 2002 international joint conference on (pp. 2225–2230).</mixed-citation></ref>
<ref id="b17"><mixed-citation publication-type="web-page" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Bindemann</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Scene and screen center bias early eye movements in scene viewing.</article-title> Vision Research, 50(23), 2577 - 2587. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://www.sciencedirect.com/science/article/pii/S0042698910004025">http://www.sciencedirect.com/science/article/pii/S0042698910004025</ext-link> (Vision Research Reviews) doi: http://dx.doi.org/<pub-id pub-id-type="doi">10.1016/j.visres.2010.08.016</pub-id></mixed-citation></ref>
<ref id="b18"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Blackmon</surname>, <given-names>T. T.</given-names></string-name>, <string-name><surname>Ho</surname>, <given-names>Y. F.</given-names></string-name>, <string-name><surname>Chernyak</surname>, <given-names>D. A.</given-names></string-name>, <string-name><surname>Azzariti</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Stark</surname>, <given-names>L. W.</given-names></string-name></person-group> (<year>1999</year>). Dynamic scanpaths: Eye movement analysis methods. In Is andt/spieconference on human vision and electronic imaging ivspie vol. 3644.</mixed-citation></ref>
<ref id="b19"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Blignaut</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Fixation identification: The optimum threshold for a dispersion algorithm.</article-title> <source>Attention, Perception and Psychophysics</source>, <volume>71</volume>(<issue>4</issue>), <fpage>881</fpage>–<lpage>895</lpage>. <pub-id pub-id-type="doi">10.3758/APP.71.4.881</pub-id><pub-id pub-id-type="pmid">19429966</pub-id><issn>1943-3921</issn></mixed-citation></ref>
<ref id="b20"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Bollen</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Bax</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>van Dijk</surname>, <given-names>J. G.</given-names></string-name>, <string-name><surname>Koning</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Bos</surname>, <given-names>J. E.</given-names></string-name>, <string-name><surname>Kramer</surname>, <given-names>C. G. S.</given-names></string-name>, and <string-name><surname>van der Velde</surname>, <given-names>E. A.</given-names></string-name></person-group> (<year>1993</year>). <article-title>Variability of the main sequence.</article-title> <source>Investigative Ophthalmology and Visual Science</source>, <volume>34</volume>(<issue>13</issue>), <fpage>3700</fpage>–<lpage>3704</lpage>.<pub-id pub-id-type="pmid">8258530</pub-id><issn>0146-0404</issn></mixed-citation></ref>
<ref id="b21"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Borji</surname>, <given-names>A.</given-names></string-name>, and <string-name><surname>Itti</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Defending Yarbus: Eye movements reveal observers’ task.</article-title> <source>Journal of Vision (Charlottesville, Va.)</source>, <volume>14</volume>(<issue>3</issue>), <fpage>29</fpage>. <pub-id pub-id-type="doi">10.1167/14.3.29</pub-id><pub-id pub-id-type="pmid">24665092</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="b22"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Brasel</surname>, <given-names>S. A.</given-names></string-name>, and <string-name><surname>Gips</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2008</year>). <article-title>Points of view: Where do we look when we watch TV?</article-title> <source>Perception</source>, <volume>37</volume>(<issue>12</issue>), <fpage>1890</fpage>–<lpage>1894</lpage>. <pub-id pub-id-type="doi">10.1068/p6253</pub-id><pub-id pub-id-type="pmid">19227380</pub-id><issn>0301-0066</issn></mixed-citation></ref>
<ref id="b23"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Buhmann</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Maintz</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Hierling</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Vettorazzi</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Moll</surname>, <given-names>C. K.</given-names></string-name>, <string-name><surname>Engel</surname>, <given-names>A. K.</given-names></string-name>, <etal>. . .</etal> <string-name><surname>Zangemeister</surname>, <given-names>W. H.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Effect of subthalamic nucleus deep brain stimulation on driving in Parkinson disease.</article-title> <source>Neurology</source>, <volume>82</volume>(<issue>1</issue>), <fpage>32</fpage>–<lpage>40</lpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://www.neurology.org/content/early/2013/12/18/01.wnl.0000438223.17976.fb.abstract">http://www.neurology.org/content/early/2013/12/18/01.wnl.0000438223.17976.fb.abstract</ext-link>. <pub-id pub-id-type="doi">10.1212/01.wnl.0000438223.17976.fb</pub-id><pub-id pub-id-type="pmid">24353336</pub-id><issn>0028-3878</issn></mixed-citation></ref>
<ref id="b24"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Buswell</surname>, <given-names>G. T.</given-names></string-name></person-group> (<year>1935</year>). <source>How people look at pictures: A study of the psychology of perception in art</source>. <publisher-name>The University of Chicago Press</publisher-name>.</mixed-citation></ref>
<ref id="b25"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Camilli</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Nacchia</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Terenzi</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Di Nocera</surname>, <given-names>F.</given-names></string-name></person-group> (<year>2008</year>). <article-title>ASTEF: A simple tool for examining fixations.</article-title> <source>Behavior Research Methods</source>, <volume>40</volume>(<issue>2</issue>), <fpage>373</fpage>–<lpage>382</lpage>. <pub-id pub-id-type="doi">10.3758/BRM.40.2.373</pub-id><pub-id pub-id-type="pmid">18522045</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b26"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Carmi</surname>, <given-names>R.</given-names></string-name>, and <string-name><surname>Itti</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2006</year>). <article-title>The role of memory in guiding attention during natural vision.</article-title> <source>Journal of Vision (Charlottesville, Va.)</source>, <volume>6</volume>(<issue>9</issue>), <fpage>898</fpage>–<lpage>914</lpage>. <pub-id pub-id-type="doi">10.1167/6.9.4</pub-id><pub-id pub-id-type="pmid">17083283</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="b27"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Carpenter</surname>, <given-names>R. H. S.</given-names></string-name></person-group> (<year>1988</year>). <source>Movements of the eyes</source> (<edition>2nd ed.</edition>). <publisher-name>Pion</publisher-name>.</mixed-citation></ref>
<ref id="b28"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Crabb</surname>, <given-names>D. P.</given-names></string-name>, <string-name><surname>Smith</surname>, <given-names>N. D.</given-names></string-name>, <string-name><surname>Rauscher</surname>, <given-names>F. G.</given-names></string-name>, <string-name><surname>Chisholm</surname>, <given-names>C. M.</given-names></string-name>, <string-name><surname>Barbur</surname>, <given-names>J. L.</given-names></string-name>, <string-name><surname>Edgar</surname>, <given-names>D. F.</given-names></string-name>, and <string-name><surname>Garway-Heath</surname>, <given-names>D. F.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Exploring eye movements in patients with glaucoma when viewing a driving scene.</article-title> <source>PLoS One</source>, <volume>5</volume>(<issue>3</issue>), <fpage>e9710</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0009710</pub-id><pub-id pub-id-type="pmid">20300522</pub-id><issn>1932-6203</issn></mixed-citation></ref>
<ref id="b29"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Cuong</surname>, <given-names>N. V.</given-names></string-name>, <string-name><surname>Dinh</surname>, <given-names>V.</given-names></string-name>, and <string-name><surname>Ho</surname>, <given-names>L. S. T.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Mel-frequency cepstral coefficients for eye movement identification.</article-title> In <source>2012 IEEE 24th International Conference on Tools with Artificial Intelligence</source> (pp. <fpage>253</fpage>–<lpage>260</lpage>). <pub-id pub-id-type="doi">10.1109/ICTAI.2012.42</pub-id></mixed-citation></ref>
<ref id="b30"><mixed-citation publication-type="book-chapter" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Czabanski</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Pander</surname>, <given-names>T.</given-names></string-name>, and <string-name><surname>Przybyla</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2014</year>). <chapter-title>Fuzzy approach to saccades detection in optokinetic nystagmus.</chapter-title> In A. Gruca, T. Czachórski, and S. Kozielski (Eds.), <source>Man-machine interactions 3</source> (Vol. 242, pp. 231–238). Springer International Publishing. <pub-id pub-id-type="doi">10.1007/978-3-319-02309-0_24</pub-id></mixed-citation></ref>
<ref id="b31"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Daye</surname>, <given-names>P. M.</given-names></string-name>, and <string-name><surname>Optican</surname>, <given-names>L. M.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Saccade detection using a particle filter.</article-title> <source>Journal of Neuroscience Methods</source>, <volume>235</volume>, <fpage>157</fpage>–<lpage>168</lpage>. <pub-id pub-id-type="doi">10.1016/j.jneumeth.2014.06.020</pub-id><pub-id pub-id-type="pmid">25043508</pub-id><issn>0165-0270</issn></mixed-citation></ref>
<ref id="b32"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>de Bruin</surname>, <given-names>J. A.</given-names></string-name>, <string-name><surname>Malan</surname>, <given-names>K. M.</given-names></string-name>, and <string-name><surname>Eloff</surname>, <given-names>J. H. P.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Saccade deviation indicators for automated eye tracking analysis.</article-title> In <source>Proceedings of the 2013 conference on eye tracking south africa</source> (p. <fpage>47</fpage>-<lpage>54</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>. <pub-id pub-id-type="doi">10.1145/2509315.2509324</pub-id></mixed-citation></ref>
<ref id="b33"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Dorr</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, and <string-name><surname>Barth</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Space-variant spatio-temporal filtering of video for gaze visualization and perceptual learning.</article-title> In <source>ETRA '10: Proceedings of the 2010 Symposium on Eye-Tracking Research &amp; Applications</source> (pp. <fpage>307</fpage>–<lpage>314</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>. <pub-id pub-id-type="doi">10.1145/1743666.1743737</pub-id></mixed-citation></ref>
<ref id="b34"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Dorr</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Vig</surname>, <given-names>E.</given-names></string-name>, and <string-name><surname>Barth</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Eye movement prediction and variability on natural video data sets.</article-title> <source>Visual Cognition</source>, <volume>20</volume>(<issue>4-5</issue>), <fpage>495</fpage>–<lpage>514</lpage>. <pub-id pub-id-type="doi">10.1080/13506285.2012.667456</pub-id><pub-id pub-id-type="pmid">22844203</pub-id><issn>1350-6285</issn></mixed-citation></ref>
<ref id="b35"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Duchowski</surname>, <given-names>A. T.</given-names></string-name></person-group> (<year>1998</year>, March). <chapter-title>3D wavelet analysis of eye movements</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>H. H.</given-names> <surname>Szu</surname></string-name> (<role>Ed.</role>),</person-group> <source>Proc. SPIE 3391, Wavelet Applications V, 435</source>.</mixed-citation></ref>
<ref id="b36"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Duchowski</surname>, <given-names>A. T.</given-names></string-name></person-group> (<year>2002</year>). <article-title>A breadth-first survey of eye-tracking applications.</article-title> <source>Behavior Research Methods, Instruments, and Computers</source>, <volume>34</volume>(<issue>4</issue>), <fpage>455</fpage>–<lpage>470</lpage>. <pub-id pub-id-type="doi">10.3758/BF03195475</pub-id><pub-id pub-id-type="pmid">12564550</pub-id><issn>0743-3808</issn></mixed-citation></ref>
<ref id="b37"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Duchowski</surname>, <given-names>A. T.</given-names></string-name></person-group> (<year>2007</year>). <source>Eye tracking methodology</source> (<edition>2nd ed.</edition>). <publisher-name>Springer</publisher-name>.</mixed-citation></ref>
<ref id="b38"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Eckmann</surname>, <given-names>J.-P.</given-names></string-name>, <string-name><surname>Kamphorst</surname>, <given-names>S. O.</given-names></string-name>, and <string-name><surname>Ruelle</surname>, <given-names>D.</given-names></string-name></person-group> (<year>1987</year>). <article-title>Recurrence plots of dynamical systems.</article-title> <source>Europhysics Letters</source>, <volume>4</volume>(<issue>9</issue>), <fpage>973</fpage>–<lpage>977</lpage>. <pub-id pub-id-type="doi">10.1209/0295-5075/4/9/004</pub-id><issn>0295-5075</issn></mixed-citation></ref>
<ref id="b39"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Elbaum</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Wagner</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Botzer</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Cyclopean vs. dominant eye in gaze-interface-tracking.</article-title> <source>Journal of Eye Movement Research</source>, <volume>10</volume>, <fpage>•••</fpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="https://bop.unibe.ch/index.php/JEMR/article/view/2961">https://bop.unibe.ch/index.php/JEMR/article/view/2961</ext-link><issn>1995-8692</issn></mixed-citation></ref>
<ref id="b40"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Engbert</surname>, <given-names>R.</given-names></string-name>, and <string-name><surname>Kliegl</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2003</year>). <article-title>Microsaccades uncover the orientation of covert attention.</article-title> <source>Vision Research</source>, <volume>43</volume>(<issue>9</issue>), <fpage>1035</fpage>–<lpage>1045</lpage>. <pub-id pub-id-type="doi">10.1016/S0042-6989(03)00084-1</pub-id><pub-id pub-id-type="pmid">12676246</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b41"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Engbert</surname>, <given-names>R.</given-names></string-name>, and <string-name><surname>Mergenthaler</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2006</year>). <article-title>Microsaccades are triggered by low retinal image slip.</article-title> <source>Proceedings of the National Academy of Sciences of the United States</source>, <volume>103</volume>(<issue>18</issue>), <fpage>7192</fpage>–<lpage>7197</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0509557103</pub-id><pub-id pub-id-type="pmid">16632611</pub-id><issn>0027-8424</issn></mixed-citation></ref>
<ref id="b42"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Engbert</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Mergenthaler</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Sinn</surname>, <given-names>P.</given-names></string-name>, and <string-name><surname>Pikovsky</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2011</year>). <article-title>An integrated model of fixational eye movements and microsaccades.</article-title> <source>Proceedings of the National Academy of Sciences of the United States of America</source>, <volume>108</volume>(<issue>39</issue>), <fpage>E765</fpage>–<lpage>E770</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1102730108</pub-id><pub-id pub-id-type="pmid">21873243</pub-id><issn>0027-8424</issn></mixed-citation></ref>
<ref id="b43"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Farnand</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Vaidyanathan</surname>, <given-names>P.</given-names></string-name>, and <string-name><surname>Pelz</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Recurrence metrics for assessing eye movements in perceptual experiments.</article-title> <source>Journal of Eye Movement Research</source>, <volume>9</volume>(<issue>4</issue>). <pub-id pub-id-type="doi">10.16910/jemr.9.4.1</pub-id><issn>1995-8692</issn></mixed-citation></ref>
<ref id="b44"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Fischer</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Biscaldi</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Otto</surname>, <given-names>P.</given-names></string-name></person-group> (<year>1993</year>). <article-title>Saccadic eye movements of dyslexic adult subjects.</article-title> <source>Neuropsychologia</source>, <volume>31</volume>(<issue>9</issue>), <fpage>887</fpage>–<lpage>906</lpage>. <pub-id pub-id-type="doi">10.1016/0028-3932(93)90146-Q</pub-id><pub-id pub-id-type="pmid">8232847</pub-id><issn>0028-3932</issn></mixed-citation></ref>
<ref id="b45"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Follet</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Le Meur</surname>, <given-names>O.</given-names></string-name>, and <string-name><surname>Baccino</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2011</year>). <article-title>New insights into ambient and focal visual fixations using an automatic classification algorithm.</article-title> <source>i-Perception</source>, <volume>2</volume>(<issue>6</issue>). <pub-id pub-id-type="doi">10.1068/i0414</pub-id></mixed-citation></ref>
<ref id="b46"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Frey</surname>, <given-names>H.-P.</given-names></string-name>, <string-name><surname>Honey</surname>, <given-names>C.</given-names></string-name>, and <string-name><surname>König</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2008</year>). <article-title>What’s color got to do with it? The influence of color on visual attention in different categories.</article-title> <source>Journal of Vision (Charlottesville, Va.)</source>, <volume>8</volume>(<issue>14</issue>), <fpage>1</fpage>–<lpage>17</lpage>. <pub-id pub-id-type="doi">10.1167/8.14.6</pub-id><pub-id pub-id-type="pmid">19146307</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="b47"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Gilchrist</surname>, <given-names>I. D.</given-names></string-name></person-group> (<year>2011</year>). <chapter-title>Saccades</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>S. P.</given-names> <surname>Liversedge</surname></string-name>, <string-name><given-names>I. D.</given-names> <surname>Gilchrist</surname></string-name>, and <string-name><given-names>S.</given-names> <surname>Everling</surname></string-name> (<role>Eds.</role>),</person-group> <source>The Oxford handbook of eye movements</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="b48"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Gitelman</surname>, <given-names>D. R.</given-names></string-name></person-group> (<year>2002</year>). <article-title>ILAB: A program for postexperimental eye movement analysis.</article-title> <source>Behavior Research Methods, Instruments, and Computers</source>, <volume>34</volume>(<issue>4</issue>), <fpage>605</fpage>–<lpage>612</lpage>. <pub-id pub-id-type="doi">10.3758/BF03195488</pub-id><pub-id pub-id-type="pmid">12564563</pub-id><issn>0743-3808</issn></mixed-citation></ref>
<ref id="b49"><mixed-citation publication-type="unknown" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Godwin</surname>, <given-names>H. J.</given-names></string-name>, <string-name><surname>Reichle</surname>, <given-names>E. D.</given-names></string-name>, and <string-name><surname>Menneer</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Coarse-to-fine eye movement behavior during visual search.</article-title> <source>Psychonomic Bulletin and Review</source>, <volume>21</volume>(<issue>5</issue>), <fpage>1244</fpage>–<lpage>1249</lpage>. <pub-id pub-id-type="doi">10.3758/s13423-014-0613-6</pub-id></mixed-citation></ref>
<ref id="b50"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Goldberg</surname>, <given-names>J. H.</given-names></string-name>, and <string-name><surname>Schryver</surname>, <given-names>J. C.</given-names></string-name></person-group> (<year>1995</year>). <article-title>Eyegaze-contingent control of the computer interface: Methodology and example for zoom detection.</article-title> <source>Behavior Research Methods</source>, <volume>27</volume>(<issue>3</issue>), <fpage>338</fpage>–<lpage>350</lpage>. <pub-id pub-id-type="doi">10.3758/BF03200428</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b51"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Goldberg</surname>, <given-names>J. H.</given-names></string-name>, and <string-name><surname>Wichansky</surname>, <given-names>A. M.</given-names></string-name></person-group> (<year>2003</year>). <chapter-title>Eye tracking in usability evaluation: A practitioner’s guide</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>J.</given-names> <surname>Hyönä</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Radach</surname></string-name>, and <string-name><given-names>H.</given-names> <surname>Deubel</surname></string-name> (<role>Eds.</role>),</person-group> <source>The mind’s eye: Cognitive and applied aspects of eye movement research</source> (pp. <fpage>493</fpage>–<lpage>516</lpage>). <publisher-name>North-Holland</publisher-name>. <pub-id pub-id-type="doi">10.1016/B978-044451020-4/50027-X</pub-id></mixed-citation></ref>
<ref id="b52"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Goldstein</surname>, <given-names>R. B.</given-names></string-name>, <string-name><surname>Woods</surname>, <given-names>R. L.</given-names></string-name>, and <string-name><surname>Peli</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2007</year>). <article-title>Where people look when watching movies: Do all viewers look at the same place?</article-title> <source>Computers in Biology and Medicine</source>, <volume>37</volume>(<issue>7</issue>), <fpage>957</fpage>–<lpage>964</lpage>. <pub-id pub-id-type="doi">10.1016/j.compbiomed.2006.08.018</pub-id><pub-id pub-id-type="pmid">17010963</pub-id><issn>0010-4825</issn></mixed-citation></ref>
<ref id="b53"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Green</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2002</year>). <chapter-title>Where do drivers look while driving (and for how long)?</chapter-title> In <person-group person-group-type="editor"><string-name><given-names>R.</given-names> <surname>Dewar</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Olson</surname></string-name> (<role>Eds.</role>),</person-group> <source>Human factors in traffic safety</source> (<edition>2nd ed.</edition>, pp. <fpage>57</fpage>–<lpage>82</lpage>). <publisher-name>Lawyers and Judges</publisher-name>.</mixed-citation></ref>
<ref id="b54"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Greene</surname>, <given-names>M. R.</given-names></string-name>, <string-name><surname>Liu</surname>, <given-names>T.</given-names></string-name>, and <string-name><surname>Wolfe</surname>, <given-names>J. M.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Reconsidering Yarbus: A failure to predict observers’ task from eye movement patterns.</article-title> <source>Vision Research</source>, <volume>62</volume>, <fpage>1</fpage>–<lpage>8</lpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://www.sciencedirect.com/science/article/pii/S0042698912000922">http://www.sciencedirect.com/science/article/pii/S0042698912000922</ext-link>. <pub-id pub-id-type="doi">10.1016/j.visres.2012.03.019</pub-id><pub-id pub-id-type="pmid">22487718</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b55"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Groner</surname>, <given-names>R.</given-names></string-name>, and <string-name><surname>Groner</surname>, <given-names>M.</given-names></string-name></person-group> (<year>1982</year>). <chapter-title>Towards a hypothetico-deductive theory of cognitive activity</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>R.</given-names> <surname>Groner</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Fraisse</surname></string-name> (<role>Eds.</role>),</person-group> <source>Cognition and eye movements</source> (pp. <fpage>100</fpage>–<lpage>121</lpage>). Retrieved from <ext-link ext-link-type="uri" xlink:href="https://www.researchgate.net/publication/312424385_Groner_R_Groner_M_1982_Towards_a_hypothetico-deductive_theory_of_cognitive_activity_In_R_Groner_P_Fraisse_Eds_Cognition_and_eye_movements_Amsterdam_North_Holland">https://www.researchgate.net/publication/312424385_Groner_R_Groner_M_1982_Towards_a_hypothetico-deductive_theory_of_cognitive_activity_In_R_Groner_P_Fraisse_Eds_Cognition_and_eye_movements_Amsterdam_North_Holland</ext-link></mixed-citation></ref>
<ref id="b56"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Groner</surname>, <given-names>R.</given-names></string-name>, and <string-name><surname>Groner</surname>, <given-names>M. T.</given-names></string-name></person-group> (<year>1989</year>). <article-title>Attention and eye movement control: An overview.</article-title> <source>European Archives of Psychiatry and Neurological Sciences</source>, <volume>239</volume>(<issue>1</issue>), <fpage>9</fpage>–<lpage>16</lpage>. <pub-id pub-id-type="doi">10.1007/BF01739737</pub-id></mixed-citation></ref>
<ref id="b57"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Groner</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Walder</surname>, <given-names>F.</given-names></string-name>, and <string-name><surname>Groner</surname>, <given-names>M.</given-names></string-name></person-group> (<year>1984</year>). <chapter-title>Looking at faces: Local and global aspects of scanpaths</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>A. G.</given-names> <surname>Gale</surname></string-name> and <string-name><given-names>F.</given-names> <surname>Johnson</surname></string-name> (<role>Eds.</role>),</person-group> <source>Theoretical and applied aspects of eye movement research: Selected/edited proceedings of the second European conference on eye movements</source> (<volume>Vol. 22</volume>, pp. <fpage>523</fpage>–<lpage>533</lpage>). <publisher-name>North-Holland</publisher-name>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://www.sciencedirect.com/science/article/pii/S0166411508618749">http://www.sciencedirect.com/science/article/pii/S0166411508618749</ext-link>. <pub-id pub-id-type="doi">10.1016/S0166-4115(08)61874-9</pub-id></mixed-citation></ref>
<ref id="b58"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Gustafsson</surname>, <given-names>F.</given-names></string-name></person-group> (<year>2000</year>). <source>Adaptive filtering and change detection</source>. <publisher-name>Wiley</publisher-name>.</mixed-citation></ref>
<ref id="b59"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Haji-Abolhassani</surname>, <given-names>A.</given-names></string-name>, and <string-name><surname>Clark</surname>, <given-names>J. J.</given-names></string-name></person-group> (<year>2014</year>). <article-title>An inverse Yarbus process: Predicting observers’ task from eye movement patterns.</article-title> <source>Vision Research</source>, <volume>103</volume>, <fpage>127</fpage>–<lpage>142</lpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://www.sciencedirect.com/science/article/pii/S0042698914002004">http://www.sciencedirect.com/science/article/pii/S0042698914002004</ext-link>. <pub-id pub-id-type="doi">10.1016/j.visres.2014.08.014</pub-id><pub-id pub-id-type="pmid">25175112</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b60"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Hatcher</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2002</year>). <source>Algebraic topology</source>. <publisher-name>Cambridge University Press</publisher-name>.</mixed-citation></ref>
<ref id="b61"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Havens</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Bezdek</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Keller</surname>, <given-names>J.</given-names></string-name>, and <string-name><surname>Popescu</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2008</year>, December). <article-title>Dunn's cluster validity index as a contrast measure of VAT images.</article-title> In <source>2008 19th International Conference on Pattern Recognition (ICPR 2008)</source> (pp. <fpage>1</fpage>–<lpage>4</lpage>).</mixed-citation></ref>
<ref id="b62"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Helo</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Rämä</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Pannasch</surname>, <given-names>S.</given-names></string-name>, and <string-name><surname>Meary</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Eye movement patterns and visual attention during scene viewing in 3- to 12-month-olds.</article-title> <source>Visual Neuroscience</source>, <volume>33</volume>, <fpage>E014</fpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="https://www.cambridge.org/core/article/div-class-title-eye-movement-patterns-and-visual-attention-during-scene-viewing-in-3-to-12-month-olds-div/625327B87FEED9F7771B929A94926C53">https://www.cambridge.org/core/article/div-class-title-eye-movement-patterns-and-visual-attention-during-scene-viewing-in-3-to-12-month-olds-div/625327B87FEED9F7771B929A94926C53</ext-link>. <pub-id pub-id-type="doi">10.1017/S0952523816000110</pub-id><pub-id pub-id-type="pmid">28359348</pub-id><issn>0952-5238</issn></mixed-citation></ref>
<ref id="b63"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Hessels</surname>, <given-names>R. S.</given-names></string-name>, <string-name><surname>Niehorster</surname>, <given-names>D. C.</given-names></string-name>, <string-name><surname>Kemner</surname>, <given-names>C.</given-names></string-name>, and <string-name><surname>Hooge</surname>, <given-names>I. T. C.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Noise-robust fixation detection in eye movement data: Identification by two-means clustering (I2MC).</article-title> <source>Behavior Research Methods</source>, 1–22. <pub-id pub-id-type="doi">10.3758/s13428-016-0822-1</pub-id></mixed-citation></ref>
<ref id="b64"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Anderson</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Dewhurst</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, and <string-name><surname>van de Weijer</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2011</year>). <source>Eye tracking</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="b65"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Hooge</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Cornelissen</surname>, <given-names>T.</given-names></string-name>, and <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2015</year>). <article-title>The art of braking: Post saccadic oscillations in the eye tracker signal decrease with increasing saccade size.</article-title> <source>Vision Research</source>, <volume>112</volume>, <fpage>55</fpage>–<lpage>67</lpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://www.sciencedirect.com/science/article/pii/S0042698915001170">http://www.sciencedirect.com/science/article/pii/S0042698915001170</ext-link>. <pub-id pub-id-type="doi">10.1016/j.visres.2015.03.015</pub-id><pub-id pub-id-type="pmid">25982715</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b66"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Hoppe</surname>, <given-names>S.</given-names></string-name>, and <string-name><surname>Bulling</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2016</year>). End-to-end eye movement detection using convolutional neural networks. arXiv e-prints. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/1609.02452">http://arxiv.org/abs/1609.02452</ext-link></mixed-citation></ref>
<ref id="b67"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Horn</surname>, <given-names>A. K.</given-names></string-name>, and <string-name><surname>Adamczyk</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2012</year>). Chapter 9 – reticular formation: Eye movements, gaze and blinks. In J. K. M. Paxinos (Ed.), The human nervous system (third edition) (Third Edition ed., p. 328 - 366). San Diego: Academic Press.</mixed-citation></ref>
<ref id="b68"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Inchingolo</surname>, <given-names>P.</given-names></string-name>, and <string-name><surname>Spanio</surname>, <given-names>M.</given-names></string-name></person-group> (<year>1985</year>). <article-title>On the identification and analysis of saccadic eye movements-a quantitative study of the processing procedures. Biomedical Engineering</article-title>. <source>IEEE Transactions on</source>, <volume>32</volume>(<issue>9</issue>), <fpage>683</fpage>–<lpage>695</lpage>.<pub-id pub-id-type="pmid">4054932</pub-id></mixed-citation></ref>
<ref id="b69"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Inhoff</surname>, <given-names>A. W.</given-names></string-name>, <string-name><surname>Seymour</surname>, <given-names>B. A.</given-names></string-name>, <string-name><surname>Schad</surname>, <given-names>D.</given-names></string-name>, and <string-name><surname>Greenberg</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2010</year>). <article-title>The size and direction of saccadic curvatures during reading.</article-title> <source>Vision Research</source>, <volume>50</volume>(<issue>12</issue>), <fpage>1117</fpage>–<lpage>1130</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2010.03.025</pub-id><pub-id pub-id-type="pmid">20381515</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b70"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Itti</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2005</year>). <article-title>Quantifying the contribution of low-level saliency to human eye movements in dynamic scenes.</article-title> <source>Visual Cognition</source>, <volume>12</volume>(<issue>6</issue>), <fpage>1093</fpage>–<lpage>1123</lpage>. <pub-id pub-id-type="doi">10.1080/13506280444000661</pub-id><issn>1350-6285</issn></mixed-citation></ref>
<ref id="b71"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, and <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2010</year>). <chapter-title>A vector-based, multidimensional scanpath similarity measure</chapter-title>. In <source>Etra ’10: Proceedings of the 2010 symposium on eye-tracking research and#38; applications</source> (pp. <fpage>211</fpage>–<lpage>218</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM. doi</publisher-name>; <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/1743666.1743718">http://doi.acm.org/10.1145/1743666.1743718</ext-link> <pub-id pub-id-type="doi">10.1145/1743666.1743718</pub-id></mixed-citation></ref>
<ref id="b72"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Junejo</surname>, <given-names>I. N.</given-names></string-name>, <string-name><surname>Dexter</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Laptev</surname>, <given-names>I.</given-names></string-name>, and <string-name><surname>Pérez</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2011</year>). <article-title>View-independent action recognition from temporal self-similarities.</article-title> <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>, <volume>33</volume>(<issue>1</issue>), <fpage>172</fpage>–<lpage>185</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2010.68</pub-id><pub-id pub-id-type="pmid">21088326</pub-id><issn>0162-8828</issn></mixed-citation></ref>
<ref id="b73"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Just</surname>, <given-names>M. A.</given-names></string-name>, and <string-name><surname>Carpenter</surname>, <given-names>P. A.</given-names></string-name></person-group> (<year>1980</year>). <article-title>A theory of reading: From eye fixations to comprehension.</article-title> <source>Psychological Review</source>, <volume>87</volume>(<issue>4</issue>), <fpage>329</fpage>–<lpage>354</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.87.4.329</pub-id><pub-id pub-id-type="pmid">7413885</pub-id><issn>0033-295X</issn></mixed-citation></ref>
<ref id="b74"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Jüttner</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Wolf</surname>, <given-names>W.</given-names></string-name></person-group> (<year>1992</year>). <article-title>Occurrence of human express saccades depends on stimulus uncertainty and stimulus sequence.</article-title> <source>Experimental Brain Research</source>, <volume>89</volume>(<issue>3</issue>), <fpage>678</fpage>–<lpage>681</lpage>. <pub-id pub-id-type="doi">10.1007/BF00229892</pub-id><pub-id pub-id-type="pmid">1644130</pub-id><issn>0014-4819</issn></mixed-citation></ref>
<ref id="b75"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Kaczynski</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Mischaikow</surname>, <given-names>K.</given-names></string-name>, and <string-name><surname>Mrozek</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2010</year>). Computational homology (S. S. Antmann, J. E. Marsden, and S. L., Eds.) (No. 157). Springer.</mixed-citation></ref>
<ref id="b76"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Karn</surname>, <given-names>K. S.</given-names></string-name></person-group> (<year>2000</year>, November). “saccade pickers” vs. “fixation pickers”: The effect of eye tracking instrumentation on research. In Proceedings eye tracking research and applications symposium 2000 (pp. 87–88).</mixed-citation></ref>
<ref id="b77"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Karsh</surname>, <given-names>R.</given-names></string-name>, and <string-name><surname>Breitenbach</surname>, <given-names>F. W.</given-names></string-name></person-group> (<year>1983</year>). <chapter-title>Looking at looking: The amorphous fixation measure</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>R.</given-names> <surname>Groner</surname></string-name> (<role>Ed.</role>),</person-group> <source>Eye movements and psychological functions: International views</source> (pp. <fpage>53</fpage>–<lpage>64</lpage>). <publisher-loc>Hillsdale</publisher-loc>: <publisher-name>Lawrence Erlbaum</publisher-name>.</mixed-citation></ref>
<ref id="b78"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Kasneci</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Kasneci</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Kübler</surname>, <given-names>T. C.</given-names></string-name>, and <string-name><surname>Rosenstiel</surname>, <given-names>W.</given-names></string-name></person-group> (<year>2014</year>). <article-title>The applicability of probabilistic methods to the online recognition of fixations and saccades in dynamic scenes.</article-title> In <source>Proceedings of the symposium on eye tracking research and applications</source> (pp. <fpage>323</fpage>–<lpage>326</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>. <pub-id pub-id-type="doi">10.1145/2578153.2578213</pub-id></mixed-citation></ref>
<ref id="b79"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kliegl</surname>, <given-names>R.</given-names></string-name>, and <string-name><surname>Olson</surname>, <given-names>R. K.</given-names></string-name></person-group> (<year>1981</year>). <article-title>Reduction and calibration of eye monitor data.</article-title> <source>Behavior Research Methods and Instrumentation</source>, <volume>13</volume>(<issue>2</issue>), <fpage>107</fpage>–<lpage>111</lpage>. <pub-id pub-id-type="doi">10.3758/BF03207917</pub-id><issn>0005-7878</issn></mixed-citation></ref>
<ref id="b80"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Koh</surname>, <given-names>D. H.</given-names></string-name>, <string-name><surname>Gowda</surname>, <given-names>S. M.</given-names></string-name>, and <string-name><surname>Komogortsev</surname>, <given-names>O. V.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Real time eye movement identification protocol.</article-title> In <source>Extended abstracts of the 28th international conference on human factors in computing systems</source> (pp. <fpage>3499</fpage>–<lpage>3504</lpage>).</mixed-citation></ref>
<ref id="b81"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Komogortsev</surname>, <given-names>O. V.</given-names></string-name>, <string-name><surname>Gobert</surname>, <given-names>D. V.</given-names></string-name>, <string-name><surname>Jayarathna</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Koh</surname>, <given-names>D. H.</given-names></string-name>, and <string-name><surname>Gowda</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Standardization of automated analyses of oculomotor fixation and saccadic behaviors.</article-title> <source>IEEE Transactions on Biomedical Engineering</source>, <volume>57</volume>(<issue>11</issue>), <fpage>2635</fpage>–<lpage>2645</lpage>. <pub-id pub-id-type="doi">10.1109/TBME.2010.2057429</pub-id><pub-id pub-id-type="pmid">20667803</pub-id><issn>0018-9294</issn></mixed-citation></ref>
<ref id="b82"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Komogortsev</surname>, <given-names>O. V.</given-names></string-name>, <string-name><surname>Jayarathna</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Koh</surname>, <given-names>D. H.</given-names></string-name>, and <string-name><surname>Gowda</surname>, <given-names>S. M.</given-names></string-name></person-group> (<year>2009</year>). <source>Qualitative and quantitative scoring and evaluation of the eye movement classification algorithms (Tech. Rep.)</source>. <publisher-name>Texas State Unversity San Marcos Department of Computer Science</publisher-name>.</mixed-citation></ref>
<ref id="b83"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Komogortsev</surname>, <given-names>O. V.</given-names></string-name>, and <string-name><surname>Karpov</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2013/13</year>). <article-title>Automated classification and scoring of smooth pursuit eye movements in the presence of fixations and saccades.</article-title> <source>Behavior Research Methods</source>, <volume>45</volume>(<issue>1</issue>), <fpage>203</fpage>–<lpage>215</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-012-0234-9</pub-id><pub-id pub-id-type="pmid">22806708</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b84"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Komogortsev</surname>, <given-names>O. V.</given-names></string-name>, and <string-name><surname>Khan</surname>, <given-names>J. I.</given-names></string-name></person-group> (<year>2007</year>). <article-title>Kalman filtering in the design of eye-gaze-guided computer interfaces.</article-title> In Human-computer interaction. hci intelligent multimodal interaction environments: 12<sup>th</sup> international conference, hci international 2007, beijing. <pub-id pub-id-type="doi">10.1007/978-3-540-73110-8_74</pub-id></mixed-citation></ref>
<ref id="b85"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Krassanakis</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Filippakopoulou</surname>, <given-names>V.</given-names></string-name>, and <string-name><surname>Nakos</surname>, <given-names>B.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Eyemmv toolbox: An eye movement postanalysis tool based on a two-step spatial dispersion threshold for fixation identification.</article-title> <source>Journal of Eye Movement Research</source>, <volume>7</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>10</lpage>.<issn>1995-8692</issn></mixed-citation></ref>
<ref id="b86"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Krauzlis</surname>, <given-names>R. J.</given-names></string-name>, and <string-name><surname>Miles</surname>, <given-names>F. A.</given-names></string-name></person-group> (<year>1996</year>). <article-title>Release of fixation for pursuit and saccades in humans: Evidence for shared inputs acting on different neural substrates.</article-title> <source>Journal of Neurophysiology</source>, <volume>76</volume>(<issue>5</issue>), <fpage>2822</fpage>–<lpage>2833</lpage>. <pub-id pub-id-type="doi">10.1152/jn.1996.76.5.2822</pub-id><pub-id pub-id-type="pmid">8930235</pub-id><issn>0022-3077</issn></mixed-citation></ref>
<ref id="b87"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Kumar</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Klingner</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Puranik</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Winograd</surname>, <given-names>T.</given-names></string-name>, and <string-name><surname>Paepcke</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2008</year>). <article-title>Improving the accuracy of gaze input for interaction.</article-title> In Etra ’08: Proceedings of the 2008 symposium on eye tracking research and applications (pp. 65–68). ACM. <pub-id pub-id-type="doi">10.1145/1344471.1344488</pub-id></mixed-citation></ref>
<ref id="b88"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Land</surname>, <given-names>M. F.</given-names></string-name></person-group> (<year>2011</year>). <chapter-title>Oculomotor behavior in vertebrates and invertebrates</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>S. P.</given-names> <surname>Liversedge</surname></string-name>, <string-name><given-names>I. D.</given-names> <surname>Gilchrist</surname></string-name>, and <string-name><given-names>S.</given-names> <surname>Everling</surname></string-name> (<role>Eds.</role>),</person-group> <source>The oxford handbook of eye movements</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="b89"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Land</surname>, <given-names>M. F.</given-names></string-name>, and <string-name><surname>Furneaux</surname>, <given-names>S.</given-names></string-name></person-group> (<year>1997</year>). <article-title>The knowledge base of the oculomotor system.</article-title> <source>Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences</source>, <volume>352</volume>(<issue>1358</issue>), <fpage>1231</fpage>–<lpage>1239</lpage>. <pub-id pub-id-type="doi">10.1098/rstb.1997.0105</pub-id><pub-id pub-id-type="pmid">9304689</pub-id><issn>0962-8436</issn></mixed-citation></ref>
<ref id="b90"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Land</surname>, <given-names>M. F.</given-names></string-name>, and <string-name><surname>Lee</surname>, <given-names>D. N.</given-names></string-name></person-group> (<year>1994</year>). <article-title>Where we look when we steer.</article-title> <source>Nature</source>, <volume>369</volume>(<issue>6483</issue>), <fpage>742</fpage>–<lpage>744</lpage>. <pub-id pub-id-type="doi">10.1038/369742a0</pub-id><pub-id pub-id-type="pmid">8008066</pub-id><issn>0028-0836</issn></mixed-citation></ref>
<ref id="b91"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Land</surname>, <given-names>M. F.</given-names></string-name>, and <string-name><surname>Tatler</surname>, <given-names>B. W.</given-names></string-name></person-group> (<year>2009</year>). <source>Looking and acting</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="b92"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Lappi</surname>, <given-names>O.</given-names></string-name>, and <string-name><surname>Lehtonen</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Eye-movements in real curve driving: Pursuit-like optokinesis in vehicle frame of reference, stability in an allocentric reference coordinate system.</article-title> <source>Journal of Eye Movement Research</source>, <volume>6</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>13</lpage>.<issn>1995-8692</issn></mixed-citation></ref>
<ref id="b93"><mixed-citation publication-type="thesis" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Larsson</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2002</year>). Automatic visual behavior analysis (Unpublished master’s thesis). Control and Communication Department of electrical engineering Linköping University, Sweden.</mixed-citation></ref>
<ref id="b94"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Leigh</surname>, <given-names>R. J.</given-names></string-name>, and <string-name><surname>Kennard</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2004</year>, Mar). <article-title>Using saccades as a research tool in the clinical neurosciences.</article-title> Brain, 127(Pt 3), 460–477. Retrieved from http://dx.doi.org/<pub-id pub-id-type="doi">10.1093/brain/awh035</pub-id> doi:</mixed-citation></ref>
<ref id="b95"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Leigh</surname>, <given-names>R. J.</given-names></string-name>, and <string-name><surname>Zee</surname>, <given-names>D. S.</given-names></string-name></person-group> (<year>2006</year>). <source>The neurology of eye movements</source> (<edition>4th ed.</edition>). <publisher-loc>Oxford</publisher-loc>.</mixed-citation></ref>
<ref id="b96"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Liston</surname>, <given-names>D. B.</given-names></string-name>, <string-name><surname>Krukowski</surname>, <given-names>A. E.</given-names></string-name>, and <string-name><surname>Stone</surname>, <given-names>L. S.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Saccade detection during smooth tracking.</article-title> <source>Displays</source>.<issn>0141-9382</issn></mixed-citation></ref>
<ref id="b97"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="editor"><string-name><surname>Liversedge</surname>, <given-names>S. P.</given-names></string-name>, <string-name><surname>Gilchrist</surname>, <given-names>I. D.</given-names></string-name>, and <string-name><surname>Everling</surname>, <given-names>S.</given-names></string-name> (<role>Eds.</role>)</person-group>. (<year>2011</year>). <source>The oxford handbook of eye movements</source>. <publisher-name>Oxford University Press</publisher-name>. <pub-id pub-id-type="doi">10.1093/oxfordhb/9780199539789.001.0001</pub-id></mixed-citation></ref>
<ref id="b98"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Longbotham</surname>, <given-names>H. G.</given-names></string-name>, <string-name><surname>Engelken</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Rea</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Shelton</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Rouzky</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Chababe</surname>, <given-names>A.</given-names></string-name>, and <string-name><surname>Harris</surname>, <given-names>J.</given-names></string-name></person-group> (<year>1994</year>). <article-title>Nonlinear approaches for separation of slow and fast phase nystagmus signals.</article-title> <source>Biomedical Sciences Instrumentation</source>, <volume>30</volume>, <fpage>99</fpage>–<lpage>104</lpage>.<pub-id pub-id-type="pmid">7948658</pub-id><issn>0067-8856</issn></mixed-citation></ref>
<ref id="b99"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Manor</surname>, <given-names>B. R.</given-names></string-name>, and <string-name><surname>Gordon</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2003</year>). <article-title>Defining the temporal threshold for ocular fixation in free-viewing visuocognitive tasks.</article-title> <source>Journal of Neuroscience Methods</source>, <volume>128</volume>(<issue>1-2</issue>), <fpage>85</fpage>–<lpage>93</lpage>. <pub-id pub-id-type="doi">10.1016/S0165-0270(03)00151-1</pub-id><pub-id pub-id-type="pmid">12948551</pub-id><issn>0165-0270</issn></mixed-citation></ref>
<ref id="b100"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Martinez-Conde</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Macknik</surname>, <given-names>S. L.</given-names></string-name>, <string-name><surname>Troncoso</surname>, <given-names>X. G.</given-names></string-name>, and <string-name><surname>Hubel</surname>, <given-names>D. H.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Microsaccades: A neurophysiological analysis.</article-title> <source>Trends in Neurosciences</source>, <volume>32</volume>(<issue>9</issue>), <fpage>463</fpage>–<lpage>475</lpage>. <pub-id pub-id-type="doi">10.1016/j.tins.2009.05.006</pub-id><pub-id pub-id-type="pmid">19716186</pub-id><issn>0166-2236</issn></mixed-citation></ref>
<ref id="b101"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Marwan</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Romano</surname>, <given-names>M. C.</given-names></string-name>, <string-name><surname>Thiel</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Kurths</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2007</year>). <article-title>Recurrence plots for the analysis of complex systems.</article-title> <source>Physics Reports</source>, <volume>438</volume>(<issue>5–6</issue>), <fpage>237</fpage>–<lpage>329</lpage>. <pub-id pub-id-type="doi">10.1016/j.physrep.2006.11.001</pub-id><issn>0370-1573</issn></mixed-citation></ref>
<ref id="b102"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Mason</surname>, <given-names>R. L.</given-names></string-name></person-group> (<year>1976</year>). <article-title>Digital computer estimation of eye fixations.</article-title> <source>Behavior Research Methods and Instrumentation</source>, <volume>8</volume>(<issue>2</issue>), <fpage>185</fpage>–<lpage>188</lpage>. <pub-id pub-id-type="doi">10.3758/BF03201770</pub-id><issn>0005-7878</issn></mixed-citation></ref>
<ref id="b103"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Matin</surname>, <given-names>E.</given-names></string-name></person-group> (<year>1974</year>). <article-title>Saccadic suppression: A review and an analysis.</article-title> <source>Psychological Bulletin</source>, <volume>81</volume>(<issue>12</issue>), <fpage>899</fpage>–<lpage>917</lpage>. <pub-id pub-id-type="doi">10.1037/h0037368</pub-id><pub-id pub-id-type="pmid">4612577</pub-id><issn>0033-2909</issn></mixed-citation></ref>
<ref id="b104"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Matsuoka</surname>, <given-names>K.</given-names></string-name>, and <string-name><surname>Harato</surname>, <given-names>H.</given-names></string-name></person-group> (<year>1983</year>). <article-title>Detection of rapid phases of eye movements using third order derivatives.</article-title> <source>Japanese J. Ergonomics</source>, <volume>19</volume>(<issue>3</issue>), <fpage>147</fpage>–<lpage>153</lpage>. <pub-id pub-id-type="doi">10.5100/jje.19.147</pub-id></mixed-citation></ref>
<ref id="b105"><mixed-citation publication-type="web-page" specific-use="linked"><person-group person-group-type="author"><string-name><surname>McClung</surname>, <given-names>S. N.</given-names></string-name>, and <string-name><surname>Kang</surname>, <given-names>Z.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Characterization of visual scanning patterns in air traffic control.</article-title> Computational Intelligence and Neuroscience. Retrieved from <ext-link ext-link-type="uri" xlink:href="https://www.hindawi.com/journals/cin/2016/8343842/">https://www.hindawi.com/journals/cin/2016/8343842/</ext-link> doi: <pub-id pub-id-type="doi">10.1155/2016/8343842</pub-id></mixed-citation></ref>
<ref id="b106"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Menz</surname>, <given-names>C.</given-names></string-name>, and <string-name><surname>Groner</surname>, <given-names>R.</given-names></string-name></person-group> (<year>1985</year>). <chapter-title>The effects of stimulus characteristics, task requirements and individual differences on scanning patterns</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>R.</given-names> <surname>Groner</surname></string-name>, <string-name><given-names>G. W.</given-names> <surname>McConkie</surname></string-name>, and <string-name><given-names>C.</given-names> <surname>Menz</surname></string-name> (<role>Eds.</role>),</person-group> <source>Eye movements and human information processing</source>. <publisher-loc>Amsterdam</publisher-loc>: <publisher-name>North Holland</publisher-name>.</mixed-citation></ref>
<ref id="b107"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Arba Mosquera</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Verma</surname>, <given-names>S.</given-names></string-name>, and <string-name><surname>McAlinden</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Centration axis in refractive surgery.</article-title> <source>Eye and Vision (London, England)</source>, <volume>2</volume>(<issue>1</issue>), <fpage>4</fpage>. <pub-id pub-id-type="doi">10.1186/s40662-015-0014-6</pub-id><pub-id pub-id-type="pmid">26605360</pub-id><issn>2326-0254</issn></mixed-citation></ref>
<ref id="b108"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Mould</surname>, <given-names>M. S.</given-names></string-name>, <string-name><surname>Foster</surname>, <given-names>D. H.</given-names></string-name>, <string-name><surname>Amano</surname>, <given-names>K.</given-names></string-name>, and <string-name><surname>Oakley</surname>, <given-names>J. P.</given-names></string-name></person-group> (<year>2012</year>). <article-title>A simple nonparametric method for classifying eye fixations.</article-title> <source>Vision Research</source>, <volume>57</volume>(<issue>15</issue>), <fpage>18</fpage>–<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2011.12.006</pub-id><pub-id pub-id-type="pmid">22227608</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b109"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Munkres</surname>, <given-names>J. R.</given-names></string-name></person-group> (<year>1984</year>). <source>Elements of algebraic topology</source>. <publisher-name>Perseus Publishing</publisher-name>.</mixed-citation></ref>
<ref id="b110"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Munn</surname>, <given-names>S. M.</given-names></string-name>, <string-name><surname>Stefano</surname>, <given-names>L.</given-names></string-name>, and <string-name><surname>Pelz</surname>, <given-names>J. B.</given-names></string-name></person-group> (<year>2008</year>). <article-title>Fixationidentification in dynamic scenes: comparing an automated algorithm to manual coding.</article-title> In Apgv ’08: <source>Proceedings of the 5th symposium on applied perception in graphics and visualization</source> (pp. <fpage>33</fpage>–<lpage>42</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>. <pub-id pub-id-type="doi">10.1145/1394281.1394287</pub-id></mixed-citation></ref>
<ref id="b111"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Munoz</surname>, <given-names>D. P.</given-names></string-name>, <string-name><surname>Armstrong</surname>, <given-names>I.</given-names></string-name>, and <string-name><surname>Coe</surname>, <given-names>B.</given-names></string-name></person-group> (<year>2007</year>). <chapter-title>Using eye movements to probe development and dysfunction</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>R. P. V.</given-names> <surname>Gompel</surname></string-name>, <string-name><given-names>M. H.</given-names> <surname>Fischer</surname></string-name>, <string-name><given-names>W. S.</given-names> <surname>Murray</surname></string-name>, and <string-name><given-names>R. L.</given-names> <surname>Hill</surname></string-name> (<role>Eds.</role>),</person-group> <source>Eye movements</source> (pp. <fpage>99</fpage>–<lpage>124</lpage>). <publisher-loc>Oxford</publisher-loc>: <publisher-name>Elsevier</publisher-name>. <pub-id pub-id-type="doi">10.1016/B978-008044980-7/50007-0</pub-id></mixed-citation></ref>
<ref id="b112"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Nodine</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Kundel</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Toto</surname>, <given-names>L.</given-names></string-name>, and <string-name><surname>Krupinski</surname>, <given-names>E.</given-names></string-name></person-group> (<year>1992</year>). <article-title>Recording and analyzing eye-position data using a microcomputer workstation.</article-title> <source>Behavior Research Methods, Instruments, and Computers</source>, <volume>24</volume>(<issue>3</issue>), <fpage>475</fpage>–<lpage>485</lpage>. <pub-id pub-id-type="doi">10.3758/BF03203584</pub-id><issn>0743-3808</issn></mixed-citation></ref>
<ref id="b113"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Noton</surname>, <given-names>D.</given-names></string-name>, and <string-name><surname>Stark</surname>, <given-names>L.</given-names></string-name></person-group> (<year>1971</year>a). <article-title>Eye movements and visual perception.</article-title> <source>Scientific American</source>, <volume>224</volume>(<issue>6</issue>), <fpage>35</fpage>–<lpage>43</lpage>.<pub-id pub-id-type="pmid">5087474</pub-id><issn>0036-8733</issn></mixed-citation></ref>
<ref id="b114"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Noton</surname>, <given-names>D.</given-names></string-name>, and <string-name><surname>Stark</surname>, <given-names>L.</given-names></string-name></person-group> (<year>1971</year>b). <article-title>Scanpaths in eye movements during pattern perception.</article-title> <source>Science</source>, <volume>171</volume>(<issue>3968</issue>), <fpage>308</fpage>–<lpage>311</lpage>. <pub-id pub-id-type="doi">10.1126/science.171.3968.308</pub-id><pub-id pub-id-type="pmid">5538847</pub-id><issn>0036-8075</issn></mixed-citation></ref>
<ref id="b115"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Noton</surname>, <given-names>D.</given-names></string-name>, and <string-name><surname>Stark</surname>, <given-names>L.</given-names></string-name></person-group> (<year>1971</year>c). <article-title>Scanpaths in saccadic eye movements while viewing and recognizing patterns.</article-title> <source>Vision Research</source>, <volume>11</volume>(<issue>9</issue>), <fpage>929</fpage>–<lpage>942</lpage>. <pub-id pub-id-type="doi">10.1016/0042-6989(71)90213-6</pub-id><pub-id pub-id-type="pmid">5133265</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b116"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2010</year>). <article-title>An adaptive algorithm for fixation, saccade, and glissade detection in eyetracking data.</article-title> <source>Behavior Research Methods</source>, <volume>42</volume>(<issue>1</issue>), <fpage>188</fpage>–<lpage>204</lpage>. <pub-id pub-id-type="doi">10.3758/BRM.42.1.188</pub-id><pub-id pub-id-type="pmid">20160299</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b117"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Olsen</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2012</year>). <source>The Tobii I-VT fixation filter (Tech. Rep.)</source>. <publisher-name>Tobii Technology</publisher-name>.</mixed-citation></ref>
<ref id="b118"><mixed-citation publication-type="thesis" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Olsson</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2007</year>). Real-time and offline filters for eye tracking (Unpublished master’s thesis). KTH Royal Institute of Technology.</mixed-citation></ref>
<ref id="b119"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Over</surname>, <given-names>E. A.</given-names></string-name>, <string-name><surname>Hooge</surname>, <given-names>I. T.</given-names></string-name>, <string-name><surname>Vlaskamp</surname>, <given-names>B. N.</given-names></string-name>, and <string-name><surname>Erkelens</surname>, <given-names>C. J.</given-names></string-name></person-group> (<year>2007</year>). <article-title>Coarse-to-fine eye movement strategy in visual search.</article-title> <source>Vision Research</source>, <volume>47</volume>(<issue>17</issue>), <fpage>2272</fpage>–<lpage>2280</lpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://www.sciencedirect.com/science/article/pii/S0042698907002088">http://www.sciencedirect.com/science/article/pii/S0042698907002088</ext-link>. <pub-id pub-id-type="doi">10.1016/j.visres.2007.05.002</pub-id><pub-id pub-id-type="pmid">17617434</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b120"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Paulsen</surname>, <given-names>D. J.</given-names></string-name>, <string-name><surname>Hallquist</surname>, <given-names>M. N.</given-names></string-name>, <string-name><surname>Geier</surname>, <given-names>C. F.</given-names></string-name>, and <string-name><surname>Luna</surname>, <given-names>B.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Effects of incentives, age, and behavior on brain activation during inhibitory control: A longitudinal fMRI study.</article-title> <comment>[Proceedings from the inaugural Flux Congress; towards an integrative developmental cognitive neuroscience]</comment>. <source>Developmental Cognitive Neuroscience</source>, <volume>11</volume>, <fpage>105</fpage>–<lpage>115</lpage>. <pub-id pub-id-type="doi">10.1016/j.dcn.2014.09.003</pub-id><pub-id pub-id-type="pmid">25284272</pub-id><issn>1878-9293</issn></mixed-citation></ref>
<ref id="b121"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Peters</surname>, <given-names>R. J.</given-names></string-name>, and <string-name><surname>Itti</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2008</year>). <article-title>Applying computational tools to predict gaze direction in interactive visual environments.</article-title> <source>ACM Transactions on Applied Perception</source>, <volume>5</volume>(<issue>2</issue>), <fpage>1</fpage>–<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1145/1279920.1279923</pub-id><issn>1544-3558</issn></mixed-citation></ref>
<ref id="b122"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Poulton</surname>, <given-names>E. C.</given-names></string-name></person-group> (<year>1962</year>). <article-title>Peripheral vision, refractoriness and eye movements in fast oral reading.</article-title> <source>British Journal of Psychology</source>, <volume>53</volume>(<issue>4</issue>), <fpage>409</fpage>–<lpage>419</lpage>. <pub-id pub-id-type="doi">10.1111/j.2044-8295.1962.tb00846.x</pub-id><pub-id pub-id-type="pmid">13985785</pub-id><issn>0007-1269</issn></mixed-citation></ref>
<ref id="b123"><mixed-citation publication-type="book-chapter" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Privitera</surname>, <given-names>C. M.</given-names></string-name></person-group> (<year>2006</year>). <chapter-title>The scanpath theory: its definition and later developments.</chapter-title> In Human vision and electronic imaging XI (Vol. 6057, pp. 87–91). <pub-id pub-id-type="doi">10.1117/12.674146</pub-id></mixed-citation></ref>
<ref id="b124"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Privitera</surname>, <given-names>C. M.</given-names></string-name>, and <string-name><surname>Stark</surname>, <given-names>L. W.</given-names></string-name></person-group> (<year>2000</year>). <article-title>Algorithms for defining visual regions-of-interest: Comparison with eye fixations.</article-title> <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>, <volume>22</volume>(<issue>9</issue>), <fpage>970</fpage>–<lpage>982</lpage>. <pub-id pub-id-type="doi">10.1109/34.877520</pub-id><issn>0162-8828</issn></mixed-citation></ref>
<ref id="b125"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Rayner</surname>, <given-names>K.</given-names></string-name></person-group> (<year>1998</year>). <article-title>Eye movements in reading and information processing: 20 years of research.</article-title> <source>Psychological Bulletin</source>, <volume>124</volume>(<issue>3</issue>), <fpage>372</fpage>–<lpage>422</lpage>. <pub-id pub-id-type="doi">10.1037/0033-2909.124.3.372</pub-id><pub-id pub-id-type="pmid">9849112</pub-id><issn>0033-2909</issn></mixed-citation></ref>
<ref id="b126"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Reimer</surname>, <given-names>B.</given-names></string-name>, and <string-name><surname>Sodhi</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2006</year>). <article-title>Detecting eye movements in dynamic environments.</article-title> <source>Behavior Research Methods</source>, <volume>38</volume>(<issue>4</issue>), <fpage>667</fpage>–<lpage>682</lpage>. <pub-id pub-id-type="doi">10.3758/BF03193900</pub-id><pub-id pub-id-type="pmid">17393839</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b127"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Reutskaja</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Nagel</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Camerer</surname>, <given-names>C. F.</given-names></string-name>, and <string-name><surname>Rangel</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2011</year>). <article-title>Search dynamics in consumer choice under time pressure: An eye-tracking study.</article-title> <source>The American Economic Review</source>, <volume>101</volume>(<issue>2</issue>), <fpage>900</fpage>–<lpage>926</lpage>. <pub-id pub-id-type="doi">10.1257/aer.101.2.900</pub-id><issn>0002-8282</issn></mixed-citation></ref>
<ref id="b128"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Rigas</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Economou</surname>, <given-names>G.</given-names></string-name>, and <string-name><surname>Fotopoulos</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Biometric identification based on the eye movements and graph matching techniques.</article-title> <source>Pattern Recognition Letters</source>, <volume>33</volume>(<issue>6</issue>), <fpage>786</fpage>–<lpage>792</lpage>. <pub-id pub-id-type="doi">10.1016/j.patrec.2012.01.003</pub-id><issn>0167-8655</issn></mixed-citation></ref>
<ref id="b129"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Rolfs</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Microsaccades: Small steps on a long way.</article-title> <source>Vision Research</source>, <volume>49</volume>(<issue>20</issue>), <fpage>2415</fpage>–<lpage>2441</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2009.08.010</pub-id><pub-id pub-id-type="pmid">19683016</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b130"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Rothkopf</surname>, <given-names>C. A.</given-names></string-name>, and <string-name><surname>Pelz</surname>, <given-names>J. B.</given-names></string-name></person-group> (<year>2004</year>). <article-title>Head movement estimation for wearable eye tracker.</article-title> In Proceedings of the eye tracking research and application symposium etra 2004. <pub-id pub-id-type="doi">10.1145/968363.968388</pub-id></mixed-citation></ref>
<ref id="b131"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Russo</surname>, <given-names>J. E.</given-names></string-name></person-group> (<year>2010</year>). <chapter-title>Eye fixations as a process trace</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>M.</given-names> <surname>Schulte-Mecklenbeck</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Kühlberger</surname></string-name>, and <string-name><given-names>R.</given-names> <surname>Ranyard</surname></string-name> (<role>Eds.</role>),</person-group> <source>A handbook of process tracing methods for decision research</source> (pp. <fpage>43</fpage>–<lpage>64</lpage>). <publisher-name>Taylor and Francis</publisher-name>.</mixed-citation></ref>
<ref id="b132"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Saez de Urabain</surname>, <given-names>I. R.</given-names></string-name>, <string-name><surname>Johnson</surname>, <given-names>M. H.</given-names></string-name>, and <string-name><surname>Smith</surname>, <given-names>T. J.</given-names></string-name></person-group> (<year>2015</year>). GraFIX: A semiautomatic approach for parsing low- and high-quality eye-tracking data. Behavior Research Methods, 47(1), 53–72. <pub-id pub-id-type="doi">10.3758/s13428-014-0456-0</pub-id></mixed-citation></ref>
<ref id="b133"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Salvucci</surname>, <given-names>D. D.</given-names></string-name>, and <string-name><surname>Anderson</surname>, <given-names>J. R.</given-names></string-name></person-group> (<year>1998</year>). <article-title>Tracing eye movement protocols with cognitive process models.</article-title> In <source>Proceedings of the twentieth annual conference of the cognitive science society</source> (pp. <fpage>923</fpage>–<lpage>928</lpage>).</mixed-citation></ref>
<ref id="b134"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Salvucci</surname>, <given-names>D. D.</given-names></string-name>, and <string-name><surname>Goldberg</surname>, <given-names>J. H.</given-names></string-name></person-group> (<year>2000</year>). <article-title>Identifying fixations and saccades in eye-tracking protocols.</article-title> In Etra ’00: Proceedings of the 2000 symposium on eye tracking research and applications. <pub-id pub-id-type="doi">10.1145/355017.355028</pub-id></mixed-citation></ref>
<ref id="b135"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Santella</surname>, <given-names>A.</given-names></string-name>, and <string-name><surname>DeCarlo</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2004</year>). <chapter-title>Robust clustering of eye movement recordings for quantification of visual interest</chapter-title>. In <source>Etra ’04: Proceedings of the 2004 symposium on eye tracking research and applications</source> (pp. <fpage>27</fpage>–<lpage>34</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/968363.968368">http://doi.acm.org/10.1145/968363.968368</ext-link>. <pub-id pub-id-type="doi">10.1145/968363.968368</pub-id></mixed-citation></ref>
<ref id="b136"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Santini</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Fuhl</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Kübler</surname>, <given-names>T. C.</given-names></string-name>, and <string-name><surname>Kasneci</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2016</year>). Bayesian identification of fixations, saccades, and smooth pursuits. In ACM symposium on eye tracking research and applications, ETRA 2016.</mixed-citation></ref>
<ref id="b137"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Sauter</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Martin</surname>, <given-names>B. J.</given-names></string-name>, <string-name><surname>Di Renzo</surname>, <given-names>N.</given-names></string-name>, and <string-name><surname>Vomscheid</surname>, <given-names>C.</given-names></string-name></person-group> (<year>1991</year>). <article-title>Analysis of eye tracking movements using innovations generated by a Kalman filter.</article-title> <source>Medical and Biological Engineering and Computing</source>, <volume>29</volume>(<issue>1</issue>), <fpage>63</fpage>–<lpage>69</lpage>. <pub-id pub-id-type="doi">10.1007/BF02446297</pub-id><pub-id pub-id-type="pmid">2016922</pub-id><issn>0140-0118</issn></mixed-citation></ref>
<ref id="b138"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Schwartz</surname>, <given-names>S. H.</given-names></string-name></person-group> (<year>2010</year>). <source>Visual perception</source> (<person-group person-group-type="editor"><string-name><given-names>J.</given-names> <surname>Morita</surname></string-name> and <string-name><given-names>P. J.</given-names> <surname>Boyle</surname></string-name>, <role>Eds.</role></person-group>; <edition>4th ed.</edition>). <publisher-name>McGraw-Hill</publisher-name>.</mixed-citation></ref>
<ref id="b139"><mixed-citation publication-type="book" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Schwartz</surname>, <given-names>S. H.</given-names></string-name></person-group> (<year>2013</year>). <source>Geometric and visual optics</source> (second ed.). McGraw-Hill education.</mixed-citation></ref>
<ref id="b140"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Scinto</surname>, <given-names>L. F. M.</given-names></string-name>, and <string-name><surname>Barnette</surname>, <given-names>B. D.</given-names></string-name></person-group> (<year>1986</year>). <article-title>An algorithm for determining clusters, pairs or singletons in eye movement scan-path records.</article-title> <source>Behavior Research Methods, Instruments, and Computers</source>, <volume>18</volume>(<issue>1</issue>), <fpage>41</fpage>–<lpage>44</lpage>. <pub-id pub-id-type="doi">10.3758/BF03200992</pub-id><issn>0743-3808</issn></mixed-citation></ref>
<ref id="b141"><mixed-citation publication-type="unknown" specific-use="unparsed">Seeing Machines. (<year>2005</year>). faceLAB 4 user manual (4th ed.) [Computer software manual].</mixed-citation></ref>
<ref id="b142"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Shelhamer</surname>, <given-names>M.</given-names></string-name></person-group> (<year>1998</year>). <article-title>Nonlinear dynamic systems evaluation of rhythmic eye movements (optokinetic nystagmus).</article-title> <source>Journal of Neuroscience Methods</source>, <volume>83</volume>(<issue>1</issue>), <fpage>45</fpage>–<lpage>56</lpage>. <pub-id pub-id-type="doi">10.1016/S0165-0270(98)00062-4</pub-id><pub-id pub-id-type="pmid">9765050</pub-id><issn>0165-0270</issn></mixed-citation></ref>
<ref id="b143"><mixed-citation publication-type="unknown" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Shelhamer</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Zalewski</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2001</year>). <article-title>A new application for time-delay reconstruction: detection of fast-phase eye movements.</article-title> Physics Letters A, 291(4–5), 349–354. <pub-id pub-id-type="doi">10.1016/S0375-9601(01)00725-3</pub-id></mixed-citation></ref>
<ref id="b144"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Sheth</surname>, <given-names>B. R.</given-names></string-name>, and <string-name><surname>Young</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Two visual pathways in primates based on sampling of space: Exploitation and exploration of visual information.</article-title> <source>Frontiers in Integrative Neuroscience</source>, <volume>10</volume>, <fpage>37</fpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://journal.frontiersin.org/article/10.3389/fnint.2016.00037">http://journal.frontiersin.org/article/10.3389/fnint.2016.00037</ext-link>. <pub-id pub-id-type="doi">10.3389/fnint.2016.00037</pub-id><pub-id pub-id-type="pmid">27920670</pub-id><issn>1662-5145</issn></mixed-citation></ref>
<ref id="b145"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Shic</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Scassellati</surname>, <given-names>B.</given-names></string-name>, and <string-name><surname>Chawarska</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2008</year>). <article-title>The incomplete fixation measure.</article-title> In Etra ’08: Proceedings of the 2008 symposium on eye tracking research and applications (pp. 111–114). New York, NY, USA: ACM. <pub-id pub-id-type="doi">10.1145/1344471.1344500</pub-id></mixed-citation></ref>
<ref id="b146"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Smeets</surname>, <given-names>J. B. J.</given-names></string-name>, and <string-name><surname>Hooge</surname>, <given-names>I. T. C.</given-names></string-name></person-group> (<year>2003</year>). <article-title>Nature of variability in saccades.</article-title> <source>Journal of Neurophysiology</source>, <volume>90</volume>(<issue>1</issue>), <fpage>12</fpage>–<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1152/jn.01075.2002</pub-id><pub-id pub-id-type="pmid">12611965</pub-id><issn>0022-3077</issn></mixed-citation></ref>
<ref id="b147"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Smith</surname>, <given-names>N. D.</given-names></string-name>, <string-name><surname>Crabb</surname>, <given-names>D. P.</given-names></string-name>, <string-name><surname>Glen</surname>, <given-names>F. C.</given-names></string-name>, <string-name><surname>Burton</surname>, <given-names>R.</given-names></string-name>, and <string-name><surname>Garway-Heath</surname>, <given-names>D. F.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Eye movements in patients with glaucoma when viewing images of everyday scenes.</article-title> <source>Seeing and Perceiving</source>, <volume>25</volume>(<issue>5</issue>), <fpage>471</fpage>–<lpage>492</lpage>. <pub-id pub-id-type="doi">10.1163/187847612X634454</pub-id><pub-id pub-id-type="pmid">23193606</pub-id><issn>1878-4755</issn></mixed-citation></ref>
<ref id="b148"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Smith</surname>, <given-names>T. J.</given-names></string-name>, and <string-name><surname>Mital</surname>, <given-names>P. K.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes.</article-title> <source>Journal of Vision (Charlottesville, Va.)</source>, <volume>13</volume>(<issue>8</issue>), <fpage>1</fpage>–<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1167/13.8.16</pub-id><pub-id pub-id-type="pmid">23818677</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="b149"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Smyrnis</surname>, <given-names>N.</given-names></string-name></person-group> (<year>2008</year>). <article-title>Metric issues in the study of eye movements in psychiatry.</article-title> <source>Brain and Cognition</source>, <volume>68</volume>(<issue>3</issue>), <fpage>341</fpage>–<lpage>358</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandc.2008.08.022</pub-id><pub-id pub-id-type="pmid">18842328</pub-id><issn>0278-2626</issn></mixed-citation></ref>
<ref id="b150"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Stampe</surname>, <given-names>D. M.</given-names></string-name></person-group> (<year>1993</year>). <article-title>Heuristic filtering and reliable calibration methods for video-based pupil-tracking systems.</article-title> <source>Behavior Research Methods, Instruments, and Computers</source>, <volume>25</volume>(<issue>2</issue>), <fpage>137</fpage>–<lpage>142</lpage>. <pub-id pub-id-type="doi">10.3758/BF03204486</pub-id><issn>0743-3808</issn></mixed-citation></ref>
<ref id="b151"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Stark</surname>, <given-names>L.</given-names></string-name>, and <string-name><surname>Ellis</surname>, <given-names>S. R.</given-names></string-name></person-group> (<year>1981</year>). <chapter-title>Scanpath revisited: Cognitive models direct active looking</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>D. F.</given-names> <surname>Fisher</surname></string-name>, <string-name><given-names>R. A.</given-names> <surname>Monty</surname></string-name>, and <string-name><given-names>J. W.</given-names> <surname>Senders</surname></string-name> (<role>Eds.</role>),</person-group> <source>Eye movements: cognition and visual perception</source> (pp. <fpage>193</fpage>–<lpage>226</lpage>). <publisher-loc>Hillsdale, NJ</publisher-loc>: <publisher-name>Lawrence Erlbaum</publisher-name>.</mixed-citation></ref>
<ref id="b152"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Sundstedt</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Stavrakis</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Wimmer</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Reinhard</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2008</year>). <article-title>A psychophysical study of fixation behavior in a computer game.</article-title> In Apgv ’08: <source>Proceedings of the 5th symposium on applied perception in graphics and visualization</source> (pp. <fpage>43</fpage>–<lpage>50</lpage>). <publisher-name>ACM.</publisher-name> <pub-id pub-id-type="doi">10.1145/1394281.1394288</pub-id></mixed-citation></ref>
<ref id="b153"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Tafaj</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Kasneci</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Rosenstiel</surname>, <given-names>W.</given-names></string-name>, and <string-name><surname>Bogdan</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Bayesian online clustering of eye movement data.</article-title> In <source>Proceedings of the symposium on eye tracking research and applications</source> (pp. <fpage>285</fpage>–<lpage>288</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>. <pub-id pub-id-type="doi">10.1145/2168556.2168617</pub-id></mixed-citation></ref>
<ref id="b154"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Tatler</surname>, <given-names>B. W.</given-names></string-name>, <string-name><surname>Wade</surname>, <given-names>N. J.</given-names></string-name>, and <string-name><surname>Kaulard</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2007</year>). <article-title>Examining art: Dissociating pattern and perceptual influences on oculomotor behaviour.</article-title> <source>Spatial Vision</source>, <volume>21</volume>(<issue>1-2</issue>), <fpage>165</fpage>–<lpage>184</lpage>.<pub-id pub-id-type="pmid">18073057</pub-id><issn>0169-1015</issn></mixed-citation></ref>
<ref id="b155"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Tatler</surname>, <given-names>B. W.</given-names></string-name></person-group> (<year>2007</year>). The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. Journal of Vision, 7(14), 4. <pub-id pub-id-type="doi">10.1167/7.14.4</pub-id></mixed-citation></ref>
<ref id="b156"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Tatler</surname>, <given-names>B. W.</given-names></string-name>, <string-name><surname>Wade</surname>, <given-names>N. J.</given-names></string-name>, <string-name><surname>Kwan</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Findlay</surname>, <given-names>J. M.</given-names></string-name>, and <string-name><surname>Velichkovsky</surname>, <given-names>B. M.</given-names></string-name></person-group> (<year>2010</year>). Yarbus, eye movements, and vision. i-Perception, 1(1), 7–27.</mixed-citation></ref>
<ref id="b157"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Thornton</surname>, <given-names>T. L.</given-names></string-name>, and <string-name><surname>Gilden</surname>, <given-names>D. L.</given-names></string-name></person-group> (<year>2007</year>). <article-title>Parallel and serial processes in visual search.</article-title> <source>Psychological Review</source>, <volume>114</volume>(<issue>1</issue>), <fpage>71</fpage>–<lpage>103</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.114.1.71</pub-id><pub-id pub-id-type="pmid">17227182</pub-id><issn>0033-295X</issn></mixed-citation></ref>
<ref id="b158"><mixed-citation publication-type="unknown" specific-use="unparsed">Tobii. (<year>2014</year>). Tobii studio version 3.3.0 (3.3.0 ed.) [Computer software manual].</mixed-citation></ref>
<ref id="b159"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Tole</surname>, <given-names>J. R.</given-names></string-name>, and <string-name><surname>Young</surname>, <given-names>L. R.</given-names></string-name></person-group> (<year>1981</year>). <chapter-title>Digital filters for saccade and fixation detection</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>D. F.</given-names> <surname>Fisher</surname></string-name>, <string-name><given-names>R. A.</given-names> <surname>Monty</surname></string-name>, and <string-name><given-names>J. W.</given-names> <surname>Senders</surname></string-name> (<role>Eds.</role>),</person-group> <source>Eye movements: Cognition and visual perception</source>. <publisher-name>Lawrence Erlbaum</publisher-name>.</mixed-citation></ref>
<ref id="b160"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Trukenbrod</surname>, <given-names>H. A.</given-names></string-name>, and <string-name><surname>Engbert</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2012</year>). Eye movements in a sequential scanning task: Evidence for distributed processing. Journal of Vision, 12((1):5), 1–12.</mixed-citation></ref>
<ref id="b161"><mixed-citation publication-type="unknown" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Tseng</surname>, <given-names>P.-H.</given-names></string-name>, <string-name><surname>Carmi</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Cameron</surname>, <given-names>I. G. M.</given-names></string-name>, <string-name><surname>Munoz</surname>, <given-names>D. P.</given-names></string-name>, and <string-name><surname>Itti</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Quantifying center bias of observers in free viewing of dynamic natural scenes.</article-title> Journal of Vision, 9(7), 4. Retrieved from http://dx.doi.org/<pub-id pub-id-type="doi">10.1167/9.7.4</pub-id> doi:</mixed-citation></ref>
<ref id="b162"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Ungerleider</surname>, <given-names>L. G.</given-names></string-name>, and <string-name><surname>Haxby</surname>, <given-names>J. V.</given-names></string-name></person-group> (<year>1994</year>). <article-title>‘What’ and ‘where’ in the human brain.</article-title> <source>Current Opinion in Neurobiology</source>, <volume>4</volume>(<issue>2</issue>), <fpage>157</fpage>–<lpage>165</lpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://www.sciencedirect.com/science/article/pii/0959438894900663">http://www.sciencedirect.com/science/article/pii/0959438894900663</ext-link>. <pub-id pub-id-type="doi">10.1016/0959-4388(94)90066-3</pub-id><pub-id pub-id-type="pmid">8038571</pub-id><issn>0959-4388</issn></mixed-citation></ref>
<ref id="b163"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Urruty</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Lew</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Ihadaddene</surname>, <given-names>N.</given-names></string-name>, and <string-name><surname>Simovici</surname>, <given-names>D. A.</given-names></string-name></person-group> (<year>2007</year>). <article-title>Detecting eye fixations by projection clustering.</article-title> <source>ACM Transactions on Multimedia Computing Communications and Applications</source>, <volume>3</volume>(<issue>4</issue>), <fpage>1</fpage>–<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1145/1314303.1314308</pub-id><issn>1551-6857</issn></mixed-citation></ref>
<ref id="b164"><mixed-citation publication-type="journal" specific-use="unparsed"><person-group person-group-type="author"><collab>˘Spakov</collab></person-group>, O., and Miniotas, D. (<year>2007</year>). <article-title>Application of clustering algorithms in eye gaze visualizations.</article-title> <source>Information Technology and Control</source>, <volume>36</volume>(<issue>2</issue>), <fpage>213</fpage>–<lpage>216</lpage>.<issn>1392-124X</issn></mixed-citation></ref>
<ref id="b165"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Valsecchi</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Gegenfurtner</surname>, <given-names>K. R.</given-names></string-name>, and <string-name><surname>Schütz</surname>, <given-names>A. C.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Saccadic and smooth-pursuit eye movements during reading of drifting texts.</article-title> <source>Journal of Vision (Charlottesville, Va.)</source>, <volume>13</volume>(<issue>10</issue>), <fpage>8</fpage>. <pub-id pub-id-type="doi">10.1167/13.10.8</pub-id><pub-id pub-id-type="pmid">23956456</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="b166"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>van der Lans</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Wedel</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Pieters</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2011</year>). <article-title>Defining eye-fixation sequences across individuals and tasks: The Binocular-Individual Threshold (BIT) algorithm.</article-title> <source>Behavior Research Methods</source>, <volume>43</volume>(<issue>1</issue>), <fpage>239</fpage>–<lpage>257</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-010-0031-2</pub-id><pub-id pub-id-type="pmid">21287116</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b167"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Van Der Linde</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Rajashekar</surname>, <given-names>U.</given-names></string-name>, <string-name><surname>Bovik</surname>, <given-names>A. C.</given-names></string-name>, and <string-name><surname>Cormack</surname>, <given-names>L. K.</given-names></string-name></person-group> (<year>2009</year>). <article-title>DOVES: A database of visual eye movements.</article-title> <source>Spatial Vision</source>, <volume>22</volume>(<issue>2</issue>), <fpage>161</fpage>–<lpage>177</lpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://live.ece.utexas.edu/research/doves">http://live.ece.utexas.edu/research/doves</ext-link> <pub-id pub-id-type="doi">10.1163/156856809787465636</pub-id><pub-id pub-id-type="pmid">19228456</pub-id><issn>0169-1015</issn></mixed-citation></ref>
<ref id="b168"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Van der Stigchel</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Meeter</surname>, <given-names>M.</given-names></string-name>, and <string-name><surname>Theeuwes</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2006</year>). <article-title>Eye movement trajectories and what they tell us.</article-title> <source>Neuroscience and Biobehavioral Reviews</source>, <volume>30</volume>(<issue>5</issue>), <fpage>666</fpage>–<lpage>679</lpage>. <pub-id pub-id-type="doi">10.1016/j.neubiorev.2005.12.001</pub-id><pub-id pub-id-type="pmid">16497377</pub-id><issn>0149-7634</issn></mixed-citation></ref>
<ref id="b169"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>van Hateren</surname>, <given-names>J. H.</given-names></string-name>, and <string-name><surname>van der Schaaf</surname>, <given-names>A.</given-names></string-name></person-group> (<year>1998</year>). <article-title>Independent component filters of natural images compared with simple cells in primary visual cortex.</article-title> <source>Proceedings of the Royal Society of London. Series B, Biological Sciences</source>, <volume>265</volume>(<issue>1394</issue>), <fpage>359</fpage>–<lpage>366</lpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://rspb.royalsocietypublishing.org/content/265/1394/359">http://rspb.royalsocietypublishing.org/content/265/1394/359</ext-link>. <pub-id pub-id-type="doi">10.1098/rspb.1998.0303</pub-id><pub-id pub-id-type="pmid">9523437</pub-id><issn>0080-4649</issn></mixed-citation></ref>
<ref id="b170"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Velichkovsky</surname>, <given-names>B. M.</given-names></string-name>, <string-name><surname>Joos</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Helmert</surname>, <given-names>J. R.</given-names></string-name>, and <string-name><surname>Pannash</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2005</year>). <article-title>Two visual systems and their eye movements: evidence from static and dynamic scene perception.</article-title> In Proceedings of the xxvii conference of the cognitive science society (p. 2283-2288). Retrieved from <ext-link ext-link-type="uri" xlink:href="https://tudresden.de/mn/psychologie/appliedcognition/ressourcen/dateien/publikationen/pdf/velichkovsky2005.pdf?lang=de">https://tudresden.de/mn/psychologie/appliedcognition/ressourcen/dateien/publikationen/pdf/velichkovsky2005.pdf?lang=de</ext-link></mixed-citation></ref>
<ref id="b171"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Vella</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Infantino</surname>, <given-names>I.</given-names></string-name>, and <string-name><surname>Scardino</surname>, <given-names>G.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Person identification through entropy oriented mean shift clustering of human gaze patterns.</article-title> <source>Multimedia Tools and Applications</source>, <volume>•••</volume>, <fpage>1</fpage>–<lpage>25</lpage>.<issn>1380-7501</issn></mixed-citation></ref>
<ref id="b172"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Veneri</surname>, <given-names>G.</given-names></string-name></person-group> (<year>2013</year>). <source>Pattern recognition on human vision</source> (pp. <fpage>19</fpage>–<lpage>47</lpage>). <publisher-name>Transworld Research Network</publisher-name>.</mixed-citation></ref>
<ref id="b173"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Veneri</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Piu</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Federighi</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Rosini</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Federico</surname>, <given-names>A.</given-names></string-name>, and <string-name><surname>Rufa</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2010</year>, June). Eye fixations identification based on statistical analysis - case study. In Cognitive information processing (cip), 2010 2<sup>nd</sup> international workshop on (pp. 446–451).</mixed-citation></ref>
<ref id="b174"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Veneri</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Piu</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Rosini</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Federighi</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Federico</surname>, <given-names>A.</given-names></string-name>, and <string-name><surname>Rufa</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2011</year>). <article-title>Automatic eye fixations identification based on analysis of variance and covariance.</article-title> <source>Pattern Recognition Letters</source>, <volume>32</volume>(<issue>13</issue>), <fpage>1588</fpage>–<lpage>1593</lpage>. <pub-id pub-id-type="doi">10.1016/j.patrec.2011.06.012</pub-id><issn>0167-8655</issn></mixed-citation></ref>
<ref id="b175"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Vidal</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Bulling</surname>, <given-names>A.</given-names></string-name>, and <string-name><surname>Gellersen</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Detection of smooth pursuits using eye movement shape features.</article-title> In <source>Proceedings of the symposium on eye tracking research and applications</source> (pp. <fpage>177</fpage>–<lpage>180</lpage>). <pub-id pub-id-type="doi">10.1145/2168556.2168586</pub-id></mixed-citation></ref>
<ref id="b176"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Špakov</surname>, <given-names>O.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Comparison of eye movement filters used in hci.</article-title> In <source>Proceedings of the symposium on eye tracking research and applications</source> (pp. <fpage>281</fpage>–<lpage>284</lpage>). <publisher-name>ACM.</publisher-name> <pub-id pub-id-type="doi">10.1145/2168556.2168616</pub-id></mixed-citation></ref>
<ref id="b177"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Wass</surname>, <given-names>S. V.</given-names></string-name>, <string-name><surname>Smith</surname>, <given-names>T. J.</given-names></string-name>, and <string-name><surname>Johnson</surname>, <given-names>M. H.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Parsing eye-tracking data of variable quality to provide accurate fixation duration estimates in infants and adults.</article-title> <source>Behavior Research Methods</source>, <volume>45</volume>(<issue>1</issue>), <fpage>229</fpage>–<lpage>250</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-012-0245-6</pub-id><pub-id pub-id-type="pmid">22956360</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b178"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Weiskrantz</surname>, <given-names>L.</given-names></string-name></person-group> (<year>1972</year>). <article-title>Behavioural analysis of the monkey’s visual nervous system.</article-title> <source>Proceedings of the Royal Society of London. Series B, Biological Sciences</source>, <volume>182</volume>(<issue>1069</issue>), <fpage>427</fpage>–<lpage>455</lpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://rspb.royalsocietypublishing.org/content/182/1069/427">http://rspb.royalsocietypublishing.org/content/182/1069/427</ext-link>. <pub-id pub-id-type="doi">10.1098/rspb.1972.0087</pub-id><pub-id pub-id-type="pmid">4404807</pub-id><issn>0080-4649</issn></mixed-citation></ref>
<ref id="b179"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Widdel</surname>, <given-names>H.</given-names></string-name></person-group> (<year>1984</year>). <chapter-title>Operational problems in analyzing eye movements.</chapter-title> In Theoretical and applied aspects of eye movement research (Vol. 22, pp. 21–29).</mixed-citation></ref>
<ref id="b180"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Wooding</surname>, <given-names>D. S.</given-names></string-name></person-group> (<year>2002</year>a). <article-title>Eye movements of large populations: II. Deriving regions of interest, coverage, and similarity using fixation maps.</article-title> <source>Behavior Research Methods, Instruments, and Computers</source>, <volume>34</volume>(<issue>4</issue>), <fpage>518</fpage>–<lpage>528</lpage>. <pub-id pub-id-type="doi">10.3758/BF03195481</pub-id><pub-id pub-id-type="pmid">12564556</pub-id><issn>0743-3808</issn></mixed-citation></ref>
<ref id="b181"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Wooding</surname>, <given-names>D. S.</given-names></string-name></person-group> (<year>2002</year>b). <article-title>Fixation maps: quantifying eye-movement traces.</article-title> In Etra ’02: Proceedings of the 2002 symposium on eye tracking research and applications (pp. 31–36). New York, NY, USA: ACM. <pub-id pub-id-type="doi">10.1145/507072.507078</pub-id></mixed-citation></ref>
<ref id="b182"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Wooding</surname>, <given-names>D. S.</given-names></string-name>, <string-name><surname>Mugglestone</surname>, <given-names>M. D.</given-names></string-name>, <string-name><surname>Purdy</surname>, <given-names>K. J.</given-names></string-name>, and <string-name><surname>Gale</surname>, <given-names>A. G.</given-names></string-name></person-group> (<year>2002</year>). <article-title>Eye movements of large populations: I. Implementation and performance of an autonomous public eye tracker.</article-title> <source>Behavior Research Methods, Instruments, and Computers</source>, <volume>34</volume>(<issue>4</issue>), <fpage>509</fpage>–<lpage>517</lpage>. <pub-id pub-id-type="doi">10.3758/BF03195480</pub-id><pub-id pub-id-type="pmid">12564555</pub-id><issn>0743-3808</issn></mixed-citation></ref>
<ref id="b183"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Wyatt</surname>, <given-names>H. J.</given-names></string-name></person-group> (<year>1998</year>). <article-title>Detecting saccades with jerk.</article-title> <source>Vision Research</source>, <volume>38</volume>(<issue>14</issue>), <fpage>2147</fpage>–<lpage>2153</lpage>. <pub-id pub-id-type="doi">10.1016/S0042-6989(97)00410-0</pub-id><pub-id pub-id-type="pmid">9797975</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b184"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="editor"><string-name><surname>Zangemeister</surname>, <given-names>W. H.</given-names></string-name>, <string-name><surname>Stiehl</surname>, <given-names>H. S.</given-names></string-name>, and <string-name><surname>Freksa</surname>, <given-names>C.</given-names></string-name> (<role>Eds.</role>)</person-group>. (<year>1996</year>). <source>Visual attention and cognition</source> (<volume>Vol. 116</volume>). <publisher-name>North-Holland</publisher-name>.</mixed-citation></ref>
<ref id="b185"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Zemblys</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Eye-movement event detection meets machine learning.</article-title> In The 20th international conference biomedical engineering 2016. Retrieved from <ext-link ext-link-type="uri" xlink:href="https://www.researchgate.net/publication/311027097_Eye-movement_event_detection_meets_machine_learning">https://www.researchgate.net/publication/311027097_Eye-movement_event_detection_meets_machine_learning</ext-link></mixed-citation></ref>
<ref id="b186"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Zemblys</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Niehorster</surname>, <given-names>D. C.</given-names></string-name>, <string-name><surname>Komogortsev</surname>, <given-names>O.</given-names></string-name>, and <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Using machine learning to detect events in eye-tracking data (accepted paper).</article-title> <source>Behavior Research Methods</source>.<issn>1554-351X</issn></mixed-citation></ref>
</ref-list>

<fn-group>
  <fn id="FN1">
  <p>Realistically, the model is a strong assumption (a prior), and
very often the hypothesized construct is driven by the original model.</p>
  </fn>
  
  
  <fn id="FN2">
  <p> (Mason, 1976[<xref ref-type="bibr" rid="b102">102</xref>]; Karsh and Breitenbach, 1983[<xref ref-type="bibr" rid="b77">77</xref>]; Widdel, 1984[<xref ref-type="bibr" rid="b179">179</xref>];
Scinto and Barnette, 1986[<xref ref-type="bibr" rid="b140">140</xref>]; Stampe, 1993[<xref ref-type="bibr" rid="b150">150</xref>]; Krauzlis and Miles, 1996[<xref ref-type="bibr" rid="b86">86</xref>];
Wyatt, 1998[<xref ref-type="bibr" rid="b183">183</xref>]; Salvucci and Goldberg, 2000[<xref ref-type="bibr" rid="b134">134</xref>]; Privitera and Stark,
2000[<xref ref-type="bibr" rid="b124">124</xref>]; Larsson, 2002[<xref ref-type="bibr" rid="b93">93</xref>]; Engbert and Kliegl, 2003[<xref ref-type="bibr" rid="b40">40</xref>]; Smeets and Hooge,
2003[<xref ref-type="bibr" rid="b146">146</xref>]; Santella and DeCarlo, 2004[<xref ref-type="bibr" rid="b135">135</xref>]; Engbert and Mergenthaler,
2006[<xref ref-type="bibr" rid="b41">41</xref>]; Urruty, Lew, and Ihadaddene, 2007[<xref ref-type="bibr" rid="b163">163</xref>]; Špakov and Miniotas,
2007[<xref ref-type="bibr" rid="b164">164</xref>]; Shic, Scassellati, and Chawarska, 2008 [<xref ref-type="bibr" rid="b145">145</xref>]; Camilli, Nacchia,
Terenzi, and Nocera, 2008[<xref ref-type="bibr" rid="b25">25</xref>]; Kumar, Klingner, Puranik, Winograd, and Paepcke, 2008[<xref ref-type="bibr" rid="b87">87</xref>]; Munn, Stefano, and Pelz, 2008[<xref ref-type="bibr" rid="b110">110</xref>]; Blignaut,
2009[<xref ref-type="bibr" rid="b19">19</xref>]; Komogortsev, Jayarathna, Koh, and Gowda, 2009[<xref ref-type="bibr" rid="b82">82</xref>]; Nyström and Holmqvist, 2010[<xref ref-type="bibr" rid="b116">116</xref>]; Komogortsev, Gobert, Jayarathna,
Koh, and Gowda, 2010[<xref ref-type="bibr" rid="b81">81</xref>]; Dorr, Jarodzka, and Barth, 2010[<xref ref-type="bibr" rid="b33">33</xref>]; van der
Lans, Wedel, and Pieters, 2011[<xref ref-type="bibr" rid="b166">166</xref>]; Mould, Foster, Amano, and Oakley, 2012[<xref ref-type="bibr" rid="b108">108</xref>]; Komogortsev and Karpov, 2012/13[<xref ref-type="bibr" rid="b83">83</xref>]; Vidal, Bulling, and
Gellersen, 2012[<xref ref-type="bibr" rid="b175">175</xref>]; Liston, Krukowski, and Stone, 2012[<xref ref-type="bibr" rid="b96">96</xref>]; Špakov,
2012 [<xref ref-type="bibr" rid="b176">176</xref>]; Valsecchi, Gegenfurtner, and Schütz, 2013[<xref ref-type="bibr" rid="b165">165</xref>])</p>
  </fn>
  
  
  
  <fn id="FN3">
  <p>Entia non sunt multiplicanda praeter necessitatem (Entities must not be multiplied beyond necessity). –John Punch–.</p>
  </fn>
  
  
  <fn id="FN4">
  <p>In reading this is called gaze (Just and Carpenter, 1980 [<xref ref-type="bibr" rid="b73">73</xref>]) and in human factors glance (Green, 2002 [<xref ref-type="bibr" rid="b53">53</xref>]). To avoid confusion
with the standard meaning of gaze, the term dwell is used
(Holmqvist et al., 2011[<xref ref-type="bibr" rid="b64">64</xref>]).
</p>
  </fn>
  
  
  <fn id="FN5">
  <p>Bivariate Contour Ellipse Area</p>
  </fn>
  
  
  <fn id="FN6">
  <p>The term scanpath is somewhat vague and differs in its
meaning and interpretation between different research areas
and authors. Introduced in 1971 by Noton and Stark (Noton
and Stark, 1971b, 1971a, 1971c[<xref ref-type="bibr" rid="b115 b116 b117">115, 116, 117</xref>]; Zangemeister, Stiehl, and Freksa,
1996[<xref ref-type="bibr" rid="b184">184</xref>]), it was a fairly abstract concept to describe a repetitive
pattern of a single subject while viewing a static stimulus
(Privitera, 2006[<xref ref-type="bibr" rid="b123">123</xref>]). Common terminology has been improved
with works such as Holmqvist et al. (2011[<xref ref-type="bibr" rid="b64">64</xref>]), research networks
such as COGAIN, or industry driven demands such as the
ISO 15007 and SAE J2396 standards for in-vehicle visual demand measurements (Green, 2002 [<xref ref-type="bibr" rid="b53">53</xref>]).</p>
  </fn>
  
  
  <fn id="FN7">
  <p>wo-state Hidden Markov models (HMM) are intrinsically based on fitting individual data thus avoiding the problem of setting parameters explicitly (Salvucci and Goldberg,
2000 [<xref ref-type="bibr" rid="b134">134</xref>]; Rothkopf and Pelz, 2004[<xref ref-type="bibr" rid="b130">130</xref>]). A prerequisite is, however,
to assume two states, i.e., saccade and fixation, limiting the classification.
</p>
  </fn>
  
  
  <fn id="FN8">
  <p>E.g., a saccade epoch in the trajectory is, to a first approximation, a straight line, which is a geometric shape feature (a
closer look shows that it is curved; Inhoff, Seymoura, Schad, and Greenberg, 2010[<xref ref-type="bibr" rid="b69">69</xref>]).</p>
  </fn>
  
  
  
  <fn id="FN9">
  <p>Here gaze is understood as the ray from the center of the
entrance pupil to the point-of-regard, essentially the first
part of the line-of-sight. For a detailed discussion of the related notions line-of-sight, pupillary axis, visual axis, etc., see
Bennett and Rabbetts (2007[<xref ref-type="bibr" rid="b14">14</xref>]) and Schwartz (2013[<xref ref-type="bibr" rid="b139">139</xref>]).</p>
  </fn>
  
  
  
  <fn id="FN10">
  <p>Plotting a distance matrix is a technique used in different research areas and comes under different names, e.g., visual assessment of cluster tendency (VAT) (Havens, Bezdek,
Keller, and Popescu, 2008 [<xref ref-type="bibr" rid="b61">61</xref>]; Bezdek and Hathaway, 2002[<xref ref-type="bibr" rid="b16">16</xref>]); see also,
e.g., Junejo, Dexter, Laptev, and Pérez (2011[<xref ref-type="bibr" rid="b72">72</xref>]). In the context
of dynamical systems it is called a recurrence plot (Eckmann,
Kamphorst, and Ruelle, 1987[<xref ref-type="bibr" rid="b38">38</xref>]). Recurrence analysis is a successful tool for describing complex dynamical systems; see,
e.g., Marwan, Romano, Thiel, and Kurths (2007[<xref ref-type="bibr" rid="b101">101</xref>]). The reference also includes a simple statistical model for the movement of the eyes, i.e., disrupted Brownian motion. Recurrence analysis is also known in eye movement research
(Anderson, Bischof, Laidlaw, Risko, and Kingstone, 2013[<xref ref-type="bibr" rid="b2">2</xref>]; Farnand, Vaidyanathan, and Pelz, 2016[<xref ref-type="bibr" rid="b43">43</xref>]).</p>
  </fn>
  
  
  
  <fn id="FN11">
  <p>Anatomically, a separate pathway for color can be distinguished (Schwartz, 2010[<xref ref-type="bibr" rid="b138">138</xref>]).</p>
  </fn>
  
  
  
  </fn-group>
</back>
</article>
