<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
<article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.10.5.9</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Scanpath visualization and comparison using visual aggregation techniques</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Peysakhovich</surname>
						<given-names>Vsevolod</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Hurter</surname>
						<given-names>Christophe</given-names>
					</name>
					<xref ref-type="aff" rid="aff2">2</xref>
				</contrib>
				
        <aff id="aff1">
		<institution>ISAE-SUPAERO</institution>, <country>France</country>
        </aff>
		<aff id="aff2">
		<institution>ENAC, Toulouse</institution><country>France</country>
        </aff>
		</contrib-group>
     
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>8</day>  
		<month>1</month>
        <year>2018</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2017</year>
	</pub-date>
      <volume>10</volume>
      <issue>5</issue>
	<elocation-id>10.16910/jemr.10.5.9</elocation-id>
	<permissions> 
	<copyright-year>2018</copyright-year>
	<copyright-holder>Peysakhovich et al.</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>We demonstrate the use of different visual aggregation techniques to obtain non-cluttered visual representations of scanpaths. First, fixation points are clustered using the mean-shift algorithm. Second, saccades are aggregated using the Attribute-Driven Edge Bundling (ADEB) algorithm, which can use saccade direction, onset timestamp, magnitude, or a combination of these as the edge compatibility criterion. Flow direction maps, computed during bundling, can be visualized separately (vertical or horizontal components) or as a single image using the Oriented Line Integral Convolution (OLIC) algorithm. Furthermore, the cosine similarity between two flow direction maps provides a similarity map for comparing two scanpaths. Last, we provide examples involving basic patterns, a visual search task, and art perception. Used together, these techniques provide valuable insights into scanpath exploration and informative illustrations of eye movement data.</p>
      </abstract>
      <kwd-group>
        <kwd>eye tracking</kwd>
        <kwd>scanpath</kwd>
        <kwd>saccades</kwd>
        <kwd>visualization</kwd>
        <kwd>fixation clustering</kwd>
        <kwd>mean-shift</kwd>
        <kwd>edge bundling</kwd>
        <kwd>flow directional map</kwd>
        <kwd>oriented line integral convolution</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
   
   
   
    <sec id="s1">
      <title>Introduction</title>
      <p>
        The affordable prices of modern eye tracking devices
and the maturity of analytical methods have made gaze
recordings a standard source of information when
studying human-computer interaction, user behavior or
cognition [
        <xref ref-type="bibr" rid="b14 b22">14, 22</xref>
        ]. Gaze positions are computed at high speed (up
to 2 kHz) along with additional data dimensions such as pupil
diameter, and are further processed to analyze the behavior of
users. This analysis can be supported by statistical
comparison of numerous metrics derived from eye
movements (e.g. fixation duration, saccade amplitude,
etc.) or by static, dynamic, and interactive visualizations.
Gaze record processing in the data space [
        <xref ref-type="bibr" rid="b18">18</xref>
        ] is more
popular than processing in the image space and
displaying the data using visual simplification techniques.
However, interest has recently grown in image-based
techniques due to their fast computation and their
efficiency in supporting visual analysis [
        <xref ref-type="bibr" rid="b19">19</xref>
        ].
      </p>
	  
	  
	  
	  
	  
      <p>
        Raw eye tracking data is complex, and, therefore,
needs to be simplified for a visual analysis to support an
efficient exploration of visual patterns. A heat or saliency
map [
        <xref ref-type="bibr" rid="b38">38</xref>
        ] &#x2013; a conventional visualization of fixation
distribution &#x2013; allows an analyst to instantly perceive which
elements of the scene were focused on. Gaze plots &#x2013;
classic scanpath visualizations &#x2013; represent fixation points
as circles with diameters proportional to fixation
duration, connected by straight lines. However, in
general, such visualizations rapidly become cluttered
after a dozen drawn saccades. Therefore, scanpath
analysis and comparison, a cumbersome task, is often addressed at
a higher level [
        <xref ref-type="bibr" rid="b28">28</xref>
        ] using analyst-defined areas of
interest (AOIs) and visual analysis with infographics such as
line and bar charts, scatter plots, timeline visualizations,
histograms, etc. [
        <xref ref-type="bibr" rid="b2">2</xref>
        ]. Nevertheless, to the best of our
knowledge, there does not yet exist a commonly accepted
visualization technique for scanpaths in an intermediate
state between raw data and high-level representation.
      </p>
	  
	  
	  
	  
	  
      <p>Among techniques for the visual simplification of
graphs, edge bundling [
        <xref ref-type="bibr" rid="b25">25</xref>
        ] has exhibited a high potential
to support gaze analysis [
        <xref ref-type="bibr" rid="b33 b40 b26 b21">33, 40, 26, 21</xref>
        ]. Considering a recorded
gaze path as a sequence of points (i.e. fixations)
connected by lines (i.e. saccades), the resulting visualization of
these data corresponds to a set of tangled lines. Edge
bundling techniques aggregate these lines into bundles
using a compatibility criterion, often defined as
line vicinity: close lines are merged to form an
aggregated path.</p>



      <p>A recent review of state-of-the-art eye-tracking data
visualizations  [
        <xref ref-type="bibr" rid="b2">2</xref>
        ] revealed that, in spite of an important
number of high-quality visualization techniques available
to eye tracking practitioners, there is still a lack of
efficient point-based scanpath visualizations. For example,
Hurter et al.  [
        <xref ref-type="bibr" rid="b21">21</xref>
        ] proposed applying edge bundling to eye
traces. Peysakhovich et al. [
        <xref ref-type="bibr" rid="b33">33</xref>
        ] noted the importance of
saccade direction and developed an edge bundling
framework that takes the orientation
of edges into account. Building on these ideas, in this paper, we present
a new rationale for scanpath visualization using visual
aggregation techniques that reduce
visual clutter and provide a mathematical basis for
scanpath comparison. The paper is structured as follows: after
a brief review of previous work on eye-tracking
visualization, we explain our design rationale, which consists of four
steps: fixation detection, fixation clustering, saccade
bundling, and generation of flow direction maps; we then
present a set of examples where the visual aggregation
techniques help to extract meaningful information.
Finally, we present an example of comparing the scanpaths of
two participants using a similarity map. This work
contributes to state-of-the-art eye-tracking visualization
techniques by describing in detail how to reduce clutter in
scanpath visualizations.</p>
   </sec>
    <sec id="s2">
      <title>Previous work</title>
      <p>
        Fixation patterns can be transformed into transitions
between meaningful semantically different AOIs that can
be analyzed using graphs, trees, or matrices [
        <xref ref-type="bibr" rid="b4">4</xref>
        ]. The
sequences of annotated fixations can be further compared
using string edit metrics [
        <xref ref-type="bibr" rid="b27 b28 b15">27, 28, 15</xref>
        ], or represented as a
dotplot to discover scanpath patterns using linear
regression and hierarchical clustering [
        <xref ref-type="bibr" rid="b16">16</xref>
        ]. The string-based
scanpath comparison can also be performed without an a
priori AOI definition by regrouping fixations into clusters
automatically [
        <xref ref-type="bibr" rid="b13 b37">13, 37</xref>
        ].
      </p>
	  
	  
	  
      <p>
        Various visualizations exist to support the exploration
of the gaze data such as color bands [
        <xref ref-type="bibr" rid="b8">8</xref>
        ], eye movements
plots [
        <xref ref-type="bibr" rid="b6">6</xref>
        ], radial AOI transition graphs [
        <xref ref-type="bibr" rid="b3">3</xref>
        ], saccade
plots [
        <xref ref-type="bibr" rid="b7">7</xref>
        ], AOI rivers [
        <xref ref-type="bibr" rid="b9">9</xref>
        ], or interactive systems [
        <xref ref-type="bibr" rid="b35 b31">35, 31</xref>
        ].
      </p>
	  
	  
	  
	  
      <p>
        Scanpaths can also be broken down into individual
saccades that can be compactly represented as radial plots
[
        <xref ref-type="bibr" rid="b17">17</xref>
        ], or compared numerically using vector-based
alignment and statistical comparison of an average saccade
[
        <xref ref-type="bibr" rid="b23">23</xref>
        ].
      </p>
    </sec> 
    <sec id="s3">
      <title>Methodology</title>
      <p>In this section, we describe the pipeline for the
generation of a scanpath visualization using visual aggregation
techniques. First, fixations and saccades are extracted
from the gaze recording. Then, fixations are clustered and
saccades are bundled together. Finally, the analysis of
gaze data is performed using a flow visualization map.</p>
      <sec id="s3a">
        <title>Fixation detection</title>
        <p>A typical gaze recording consists of horizontal and
vertical coordinates varying over time. In order to apply
an edge bundling technique, we have to define the control
points &#x2013; the start and end points of trails that are not
affected by the edge aggregation. The natural choice for
gaze data is fixations. Fixations can be detected from
the raw data using dispersion or velocity thresholds
 [
        <xref ref-type="bibr" rid="b1 b32 b36">1, 32, 36</xref>
        ]. Consecutive fixations are connected with straight
lines that represent saccades. Hence, in terms of graph
theory, eye movement data can be represented as a
directed graph where fixations are vertices and saccades are
edges (see <xref ref-type="fig" rid="fig01">Figure 1</xref>, raw data). Note that throughout this
paper we call this representation (fixations connected
with saccades) &#x201C;raw data&#x201D; &#x2013; raw meaning relative to the
application of the visual aggregation techniques &#x2013; the
focus of this work.</p><fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1</label>
					<caption>
						<p>Different representations and maps of the raw data.</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-10-05-i-figure-01.png"/>
				</fig>
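To make the control-point extraction above concrete, here is a minimal, illustrative sketch of velocity-threshold (I-VT) fixation detection; the threshold, minimum sample count, and centroid reduction are assumed values for illustration, not the settings of the SMI event detector used later in this paper.

```python
import math

def detect_fixations(samples, rate_hz=500.0, vel_threshold=100.0, min_samples=5):
    """Velocity-threshold (I-VT) fixation detection sketch.

    samples: list of (x, y) gaze positions recorded at rate_hz.
    Samples whose point-to-point velocity (px/s) stays below
    vel_threshold are grouped into fixations; each fixation is
    reduced to the centroid of its samples.
    """
    dt = 1.0 / rate_hz
    fixations, current = [], []
    for i in range(1, len(samples)):
        (x0, y0), (x1, y1) = samples[i - 1], samples[i]
        velocity = math.hypot(x1 - x0, y1 - y0) / dt
        if velocity < vel_threshold:
            current.append(samples[i])       # slow movement: fixation sample
        elif current:                        # fast movement: a saccade ends the group
            if len(current) >= min_samples:
                cx = sum(p[0] for p in current) / len(current)
                cy = sum(p[1] for p in current) / len(current)
                fixations.append((cx, cy))
            current = []
    if len(current) >= min_samples:          # flush the last group
        cx = sum(p[0] for p in current) / len(current)
        cy = sum(p[1] for p in current) / len(current)
        fixations.append((cx, cy))
    return fixations
```

Consecutive fixation centroids returned here would then serve as the graph vertices, with saccades as the connecting edges.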
      </sec>
	  
	  
      <sec id="s3b">
        <title>Fixation clustering</title>
        <p>
          When we fixate the exact same object multiple times,
the detected fixation points are rarely at the exact same
position due to the inaccuracy of video-based eye
tracking systems and the size of the fovea. Therefore, while
semantically equal, the spread of the fixation points
produces unnecessary visual clutter. Fixation clustering
algorithms can reduce the clutter by aggregating
adjoining fixations. In this work, we propose applying the
mean-shift algorithm [
          <xref ref-type="bibr" rid="b11">11</xref>
          ]. This uses kernel density
estimation to generate a density map; the points are then
iteratively shifted to their densest neighborhood. The
density map of fixations is equal to a saliency map
(<xref ref-type="fig" rid="fig01">Figure 1</xref>, bottom left), i.e. for N fixations at
positions {x_n, n = 1, &#x2026;, N} the density map is defined by
<xref ref-type="fig" rid="eq01">Equation 1</xref>
where  K(&#x2219;) is a bivariate radial kernel and &#x3B4; is the
Kronecker symbol. In this work, we implemented maps with a
resolution of 420 &#xD7; 420 and a kernel width of 31. One
map pixel corresponds to a 4 &#xD7; 4 pixel square on the
screen. In each iteration, points are shifted towards the
locally densest area, and the density map is then
recomputed. To compute this gradient, we use a neighborhood
width of 40. We performed 10 clustering iterations for
all paper illustrations. <xref ref-type="fig" rid="fig02">Figure 2</xref> illustrates a few
intermediate results of the fixation clustering. The parameters
(number of iterations, kernel size, map resolution etc.)
have been chosen empirically. Some parameters are
related, for example, the kernel width and the gradient
neighborhood width (the latter should be larger than the former), and some
parameters must be adapted according to the recorded
data (for instance, the map resolution can be decreased if
the viewed objects are placed far enough from each
other). For consistency and comparison purposes, we fixed
the same parameters for every generated image.
        </p><fig id="eq01" fig-type="equation" position="anchor">
					<graphic id="equation01" xlink:href="jemr-10-05-i-equation-01.png"/>
				</fig>
		
		<fig id="fig02" fig-type="figure" position="float">
					<label>Figure 2</label>
					<caption>
						<p>Clustering of fixations using the mean-shift algorithm (i = #iterations).</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-10-05-i-figure-02.png"/>
				</fig>
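To make the clustering step concrete, the following is a minimal point-based mean-shift sketch with a flat kernel. The paper's implementation is grid-based (a 420 &#xD7; 420 density map, kernel width 31, neighborhood width 40), so the bandwidth value and the direct centroid update below are illustrative assumptions rather than the actual implementation.

```python
import math

def mean_shift(points, bandwidth=50.0, iterations=10):
    """Point-based mean-shift sketch: each point is iteratively moved
    to the centroid of all points within `bandwidth` of it, so nearby
    fixations collapse onto a common cluster position."""
    pts = [tuple(p) for p in points]
    for _ in range(iterations):
        shifted = []
        for x, y in pts:
            # flat-kernel neighborhood: all points within the bandwidth
            neigh = [(px, py) for px, py in pts
                     if math.hypot(px - x, py - y) <= bandwidth]
            shifted.append((sum(p[0] for p in neigh) / len(neigh),
                            sum(p[1] for p in neigh) / len(neigh)))
        pts = shifted
    return pts
```

After a few iterations, fixations belonging to the same focused location share (almost) the same coordinates, which is what reduces the clutter in Figure 2.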
      </sec>
      <sec id="s3c">
        <title>Saccade bundling</title>
        <p>
          Diminishing the dispersion of fixation points around a
focused location reduces visual clutter. It also facilitates
the use of the edge bundling technique by moving the
control points closer to each other (which are not affected
by the edge aggregation). Edge bundling techniques
regroup the close edges and draw them in bundles. Visual
suppression during saccades (i.e. the absence of
information encoding [
          <xref ref-type="bibr" rid="b29">29</xref>
          ]) supports such an approach. The
lines that represent the saccades do not carry any
information apart from connecting the subsequent fixations.
Many edge bundling algorithms exist; few, however,
handle the orientation of the edges (saccades). In this work, we use the Attribute-Driven Edge Bundling (ADEB) framework [
        <xref ref-type="bibr" rid="b33">33</xref>
        ]. This is an extension of the Kernel Density Estimation Edge Bundling (KDEEB) method  [
        <xref ref-type="bibr" rid="b20">20</xref>
        ], which applies the mean-shift algorithm to resampled edges. In comparison to previous work [
        <xref ref-type="bibr" rid="b33">33</xref>
        ], we provide additional uses of the ADEB framework and open eye-tracking datasets for which this technique helps to understand recorded data. Furthermore, we have taken the work of Peysakhovich et al. further by using the underlying computed gradient map (the flow direction map), presented in the next section.</p>



        <p>The method is similar to the procedure described in
the Section &#x201C;Fixation clustering&#x201D;, except for resampling
the lines (saccades) that connect fixation points and
computing the density map taking into account all these
resampled points (see <xref ref-type="fig" rid="fig01">Figure 1</xref>, bottom, Fixation density
map vs. Saccade density map). ADEB also introduced the
flow direction maps &#x2013; vector fields generated similarly to
density maps by weighting a unit vector tangent to the
saccade curve with a bivariate radial kernel. Given the N
fixations, resampling the N &#x2212; 1 saccades gives the
points {s_m, m = 1, &#x2026;, M_n}, where M_n is the number of
points composing the n-th saccade. Thus, the flow
direction map is defined by <xref ref-type="fig" rid="eq02">Equation 2</xref>,
where s_(m+1) &#x2212; s_m is an estimate of the tangent vector to the
saccade curve at the sampling point s_m. In the presence
of a dominant local direction, the directional component
is significant, otherwise, the vector sum of the directions
is relatively small (<xref ref-type="fig" rid="fig01">Figure 1</xref>, bottom right). At each point,
a local subspace of compatible directions is defined as the
cosine similarity between the edge direction and the flow
direction at this point, i.e. it is defined by a maximum
allowed angle between two vectors. The gradient of
advection is not computed across all points in the
neighborhood as in standard mean-shift, but across the
subneighborhood that is compatible directionally. We used
the same parameters for the map size, kernel width and
neighborhood width as for fixation clustering, and 60&#xB0; for
the compatibility criterion.</p>
<fig id="eq02" fig-type="equation" position="anchor">
					<graphic id="equation02" xlink:href="jemr-10-05-i-equation-02.png"/>
				</fig>
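The accumulation behind Equation 2 can be sketched as follows: each resampled saccade point contributes its unit tangent vector to the nearby grid cells, weighted by a radial kernel. The grid size, cell size, and the triangular kernel used here are illustrative assumptions, not the paper's exact parameters.

```python
import math

def flow_direction_map(saccades, size=64, cell=4.0, kernel_radius=3):
    """Accumulate, for every resampled saccade point, its unit tangent
    vector into a 2D grid, weighted by a radial (triangular) kernel.
    Returns two size x size grids: horizontal and vertical components.
    saccades: list of saccades, each a list of resampled (x, y) points."""
    fx = [[0.0] * size for _ in range(size)]
    fy = [[0.0] * size for _ in range(size)]
    for path in saccades:
        for (x0, y0), (x1, y1) in zip(path, path[1:]):
            dx, dy = x1 - x0, y1 - y0
            norm = math.hypot(dx, dy) or 1.0
            dx, dy = dx / norm, dy / norm    # unit tangent estimate s_{m+1} - s_m
            ci, cj = int(x0 / cell), int(y0 / cell)
            for i in range(ci - kernel_radius, ci + kernel_radius + 1):
                for j in range(cj - kernel_radius, cj + kernel_radius + 1):
                    if 0 <= i < size and 0 <= j < size:
                        d = math.hypot(i - ci, j - cj)
                        w = max(0.0, 1.0 - d / kernel_radius)
                        fx[j][i] += w * dx
                        fy[j][i] += w * dy
    return fx, fy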




        <p>ADEB introduced a compatibility criterion
based on edge proximity and direction: close edges
of the same direction are aggregated. However, other
factors can be considered, for example, the temporal
dimension, or the length of the saccade. We illustrate the
use of these different factors in the art perception
example.</p>
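The directional part of the compatibility criterion described above (a maximum allowed angle between the edge direction and the local flow direction) can be sketched as a cosine-similarity test. The function name is ours; the 60&#xB0; default matches the value used for the illustrations in this paper.

```python
import math

def directionally_compatible(v1, v2, max_angle_deg=60.0):
    """Two direction vectors are compatible when the angle between
    them does not exceed max_angle_deg (a cosine-similarity test)."""
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return False
    cos_sim = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    # compare against the cosine of the maximum allowed angle
    return cos_sim >= math.cos(math.radians(max_angle_deg))
```

During advection, only the sub-neighborhood passing this test contributes to the gradient, which is what keeps opposite-orientation saccades in separate bundles.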




        <p>We performed 20 saccade bundling iterations for all
paper illustrations. <xref ref-type="fig" rid="fig03">Figure 3</xref> shows a few iterations of
saccade bundling. Similar to the number of fixation
clustering iterations, the number of bundling iterations for
the saccades was chosen arbitrarily but seemed
appropriate for the goal of this work. Performing more iterations
would simply refine the flow direction maps further and
shift the compatible saccades closer together.</p><fig id="fig03" fig-type="figure" position="float">
					<label>Figure 3</label>
					<caption>
<p>Bundling of saccades using the Attribute-Driven Edge Bundling algorithm (i = #iterations).
Line width can be set proportional to the edge density.</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-10-05-i-figure-03.png"/>
				</fig>
      </sec>
	  
	  
	  
	  
      <sec id="s3d">
        <title>Flow direction map visualization</title>
        <p>The flow direction map is implemented as two
floating-point textures corresponding to the horizontal and vertical
components (<xref ref-type="fig" rid="fig04">Figure 4</xref>). In the ADEB framework  [
        <xref ref-type="bibr" rid="b33">33</xref>
        ]
these textures are used only to define the edge
compatibility criterion. However, visual analysis of the flow
direction map can complement the exploration of the
bundled saccade traces by revealing clearly visible saccade
patterns. Comparing, for instance, the maps before
(<xref ref-type="fig" rid="fig01">Figure 1</xref>, bottom right) and after (<xref ref-type="fig" rid="fig04">Figure 4</xref>) applying the
saccade bundling algorithm shows how the vertical and
horizontal paired transitions become clearly visible.
Nevertheless, while exploring two separate components can
be intuitive when the flows are parallel to the components
(i.e. purely vertical or horizontal, as in the square
scanpath dataset), it is more troublesome in cases of diagonal
or circular flows where both components are non-null. In
the scientific visualization domain, a variety of methods
exist that can depict a vector field in a single 2D image
[
          <xref ref-type="bibr" rid="b34">34</xref>
          ]. Flow visualization techniques include direct flow
visualizations using arrow glyphs, geometric flow
visualizations using streamlines, feature-based flow
visualizations using topological information, and dense,
texture-based flow visualizations using repetition of a texture
according to the local flow vector [
          <xref ref-type="bibr" rid="b24">24</xref>
          ]. The texture-based
flow visualization is among the most versatile and
effective methods, and is easy to implement.</p> 
<fig id="fig04" fig-type="figure" position="float">
					<label>Figure 4</label>
					<caption>
						<p>Visualization of the horizontal (on the
left) and vertical (on the right) components of the
flow direction map of the square scanpath after
fixation clustering and saccade bundling.</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-10-05-i-figure-04.png"/>
				</fig>


<p>In this work, we use the Line Integral Convolution
(LIC) algorithm [
          <xref ref-type="bibr" rid="b10">10</xref>
          ] which filters an input texture along
streamlines using a one-dimensional convolutional kernel
(for instance, a simple constant or Gaussian kernel).
Using white noise textures as an input (<xref ref-type="fig" rid="fig05">Figure 5a</xref>, top row),
LIC visualizes vector fields where ink droplets follow the
flow direction. The intensity I(x) of a pixel at location x
is calculated by <xref ref-type="fig" rid="eq03">Equation 3</xref>
where  k(&#x2219;) is the convolution kernel,  T(&#x2219;) is the input
noise texture, and  s(&#x2219;) is the function that parametrizes
the streamlines of the flow direction map. To each pixel
at position y it associates one of the surrounding pixels
according to the direction vector at that location (<xref ref-type="fig" rid="fig05">Figure 5b</xref>).</p>
<fig id="eq03" fig-type="equation" position="anchor">
					<graphic id="equation03" xlink:href="jemr-10-05-i-equation-03.png"/>
				</fig>
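A minimal sketch of the convolution in Equation 3, assuming a constant (box) kernel and nearest-neighbor texture sampling; streamlines are traced by unit steps along the normalized field in both directions. An actual implementation would typically use sub-pixel interpolation and a GPU.

```python
def lic(noise, vx, vy, length=8):
    """Line Integral Convolution sketch with a box kernel: each output
    pixel averages the input noise texture along the streamline of the
    vector field (vx, vy) passing through it."""
    h, w = len(noise), len(noise[0])
    out = [[0.0] * w for _ in range(h)]
    for j in range(h):
        for i in range(w):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):            # trace forward and backward
                x, y = i + 0.5, j + 0.5
                for _ in range(length):
                    xi, yi = int(x), int(y)
                    if not (0 <= xi < w and 0 <= yi < h):
                        break
                    total += noise[yi][xi]      # sample the input texture
                    count += 1
                    dx, dy = vx[yi][xi], vy[yi][xi]
                    norm = (dx * dx + dy * dy) ** 0.5
                    if norm == 0.0:
                        break                   # no flow at this cell
                    x += sign * dx / norm       # advance one unit step
                    y += sign * dy / norm
            out[j][i] = total / count if count else 0.0
    return out
```

Because the box kernel is symmetric, this version shows flow direction but not orientation; OLIC replaces the box with a ramp-like kernel and a sparse noise input to disambiguate orientation.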



        <p>
          By using a sparse noise texture and ramp-like kernel
function as an input, Oriented LIC (OLIC) [
          <xref ref-type="bibr" rid="b39">39</xref>
          ] enables
visual separation of streamlines with the same direction
but opposite orientation in static images. The ramp-like
kernel function makes the ink intensity vary according to
the streamline, indicating the direction of the flow
(<xref ref-type="fig" rid="fig05">Figure 5a</xref>, bottom row). By phase-shifting the kernel, these
textures can be animated to indicate the flow direction
more clearly.
        </p>
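The effect of the ramp-like kernel can be isolated in a small sketch: given the noise samples collected along a streamline, ordered in the flow direction, later samples receive larger weights, so the resulting ink intensity ramps up along the flow. The function and normalization below are illustrative assumptions.

```python
def olic_pixel(noise_samples):
    """Ramp-kernel convolution of noise samples taken along a
    streamline, ordered in the flow direction: later samples get
    larger weights, so ink intensity increases along the flow and
    thereby encodes orientation, not just direction."""
    total, wsum = 0.0, 0.0
    for k, v in enumerate(noise_samples, start=1):
        total += k * v                    # linear ramp weight
        wsum += k
    return total / wsum if wsum else 0.0
```

A droplet near the start of a streamline thus yields a lower intensity than the same droplet near its end, which is how static OLIC images distinguish opposite orientations; phase-shifting the ramp animates the droplets.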
		<fig id="fig05" fig-type="figure" position="float">
					<label>Figure 5</label>
					<caption>
						<p>A) Visualization of the flow direction map for the square scanpath dataset using the oriented
line integral convolution algorithm. In the top row, three input textures of decreasing density are shown.
In the bottom row, the corresponding OLIC visualizations are depicted. B) For each pixel, a noise texture
is filtered using a convolutional kernel according to the flow direction map.</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-10-05-i-figure-05.png"/>
				</fig>
		
      </sec>
      <sec id="s3e">
        <title>Illustrations Datasets</title>
        <p>We considered three use cases: a square scanpath, a
visual search task and an art perception task. A
participant&#x2019;s gaze position was recorded at 500 Hz with a
remote SMI RED eye tracker (SensoMotoric Instruments
GmbH, Germany). A 9-point calibration was performed
at the beginning of the acquisition. The calibration was
validated with four additional fixation points until the
precision was below 1&#xB0;. The participants had a viewing
distance of approximately 60 cm from a 22-inch LCD
monitor with a screen resolution of 1680 &#xD7; 1250 pixels. The
fixations and saccades were detected using the Event
Detector 3.0.20 by SMI using default settings. The
software that generated the illustrations using the described
visual aggregations algorithms was implemented in C#.
All the datasets, containing x and y coordinates of the
start and end fixation points of each saccade and their
timestamp, are available in supplementary files.</p>



        <p><bold>Square scanpath.</bold> In this example, the participant
followed a small black circle on the screen for one
minute. The circle moved from corner to corner of the
square, each side of which had a length of 200 pixels.
During the first half of the trial, the circle moved
clockwise; during the other half, it moved
anticlockwise. The resulting dataset contains 90 saccades.</p>
        <p><bold>Visual search task.</bold> During this task, the participant
had to find all the numbers from 1 to 90 in the correct
order. This test was used in the Soviet Union to assess
children&#x2019;s attentional capabilities. We considered the first
minute of the task recording. The resulting dataset
contains 595 saccades.</p>
        <p><bold>Art perception.</bold> The participant freely observed three
paintings for one minute each. The participant was
presented with Caf&#xE9; Terrace at Night (1888) by Vincent van
Gogh, I and the Village (1911) by Marc Chagall, and The
Creation of Adam (1510) by Michelangelo. The resulting
datasets contain 320, 380, and 375 saccades, respectively.</p>
      </sec>
    </sec>
    <sec id="s4">
      <title>Results and Discussion</title>
      <p>In this section, we present and discuss the three use
cases to illustrate the described scanpath visualizations
using visual aggregation techniques, i.e. fixation
clustering and saccade bundling. We close the discussion with
an example of scanpath comparison using the flow
direction maps.</p>
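As a sketch of the comparison step, the similarity map can be computed as the pixel-wise cosine similarity of two flow direction maps; treating cells where either flow vanishes as zero similarity is our assumption.

```python
def similarity_map(fx1, fy1, fx2, fy2):
    """Pixel-wise cosine similarity between two flow direction maps
    (each given as horizontal and vertical component grids).
    Values range from -1 (opposite flows) to 1 (identical flows);
    cells where either flow vanishes are set to 0."""
    h, w = len(fx1), len(fx1[0])
    sim = [[0.0] * w for _ in range(h)]
    for j in range(h):
        for i in range(w):
            n1 = (fx1[j][i] ** 2 + fy1[j][i] ** 2) ** 0.5
            n2 = (fx2[j][i] ** 2 + fy2[j][i] ** 2) ** 0.5
            if n1 > 0 and n2 > 0:
                dot = fx1[j][i] * fx2[j][i] + fy1[j][i] * fy2[j][i]
                sim[j][i] = dot / (n1 * n2)
    return sim
```

High values mark regions where two participants moved their eyes along the same direction and orientation; negative values mark opposing flows.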
      <sec id="s4a">
        <title>Square scanpath</title>
        <p>
          This basic square scanpath illustrates all the steps of
the described visualization methods. <xref ref-type="fig" rid="fig01">Figure 1</xref> shows the
raw fixations and saccades. At the top, fixations are
represented as small black dots and saccades are shown with
different color encodings. The color coding of the
saccade direction gives us initial information about the
scanpath nature. We used a standard rainbow colormap.
Though far from perfect, and confusing for the viewer in
some situations [
          <xref ref-type="bibr" rid="b30 b5">30, 5</xref>
          ], we consider it suitable for the
illustrations presented here. Indeed, for the purpose of
illustration we needed at least four principal colors to
depict four compass directions. For example, we can
easily distinguish red-cyan horizontal and green-violet
vertical transitions in <xref ref-type="fig" rid="fig03">Figure 3</xref>. Representing the line
width proportionally to the local density facilitates the
reading of the colors. Based on the raw fixations and
saccades, four maps (2D textures) are generated: a
fixation density map to perform fixation clustering, and a saccade
density map and a flow direction map (two component textures) to
perform saccade bundling. We used a grayscale colormap for the density
maps and the diverging colormap proposed by Moreland
[
          <xref ref-type="bibr" rid="b30">30</xref>
          ] for the flow direction maps.
        </p>
		
		
		
        <p>The square scanpath illustrates the inherent visual
clutter of gaze recordings. While the target &#x2013; a
small dot &#x2013; appeared at exactly the same locations, the
fixations were detected at quite different positions. A few
iterations of fixation clustering make it possible to bring
adjacent fixations together (<xref ref-type="fig" rid="fig02">Figure 2</xref>), and saccade
bundling merges the saccades of the same direction and
orientation (<xref ref-type="fig" rid="fig03">Figure 3</xref>). After applying these two steps, we
can easily distinguish mutual transitions between corners.
By separating the edges of different directions, the flow
direction map of bundled data can also be used to
interpret the data (<xref ref-type="fig" rid="fig04">Figure 4</xref>). While the overlapping saccades
of the raw data canceled the flow of the opposite
orientation (<xref ref-type="fig" rid="fig01">Figure 1</xref>, bottom right), the bundled layout has eight
clearly visible flows corresponding to saccade bundles:
four horizontal (<xref ref-type="fig" rid="fig04">Figure 4</xref>, left) and four vertical (<xref ref-type="fig" rid="fig04">Figure 4</xref>,
right).</p>
        <p>The resulting flow direction map can be shown as a
single texture by using the OLIC technique. <xref ref-type="fig" rid="fig05">Figure 5a</xref>
shows the result of convolving noisy textures with the
flow direction map from <xref ref-type="fig" rid="fig04">Figure 4</xref>. The ink droplets of
varying intensity that follow the saccade flow allow
instant reading of the flow direction and orientation.</p>
      </sec>
      <sec id="s4b">
        <title>Visual search task</title>
        <p>In this example (<xref ref-type="fig" rid="fig06">Figure 6a and 6b</xref>), we can notice the
benefit of the proposed scanpath visualization when
hundreds of saccades are present. While the clustered layout
with the color and line width encoding already gives us a
few insights about the direction of the scanpath (<xref ref-type="fig" rid="fig06">Figure 6c</xref>), the clustered and bundled layout significantly
reduces the visual clutter and uncovers the circular scanpath
(<xref ref-type="fig" rid="fig06">Figure 6d</xref>). The red east-west transitions at the top, the
blue-violet north-south transitions on the left, the cyan
west-east transition at the bottom, and the green
south-north transitions on the right can be seen. We can also
spot the clearly visible violet north-south transition on the
right and red east-west transition at the bottom. These
latter transitions indicate the presence of local loops that
can be seen on the OLIC representation (<xref ref-type="fig" rid="fig06">Figure 6e</xref>). The
participant confirmed the circular visual search strategy
afterwards. The obtained insights can also be seen in the
horizontal and vertical components of the flow direction
map (<xref ref-type="fig" rid="fig06">Figure 6f and 6g</xref>).</p>
<fig id="fig06" fig-type="figure" position="float">
					<label>Figure 6</label>
					<caption>
						<p>The scanpath visualization for the visual search task. A) visual stimulus, B) raw fixations and
saccades, C) clustered data colored with line width proportional to local density, D) bundled data with
line width proportional to local density, E) OLIC image of the flow direction map, F) horizontal and G)
vertical components of the flow direction map.</p>
					</caption>
					<graphic id="graph06" xlink:href="jemr-10-05-i-figure-06.png"/>
				</fig>


      </sec>
      <sec id="s4c">
        <title>Art perception</title>
        <p>As in the visual search task example, the visualization
of the art perception datasets reveals the scan strategy
used by the participant viewing the masterpieces. <xref ref-type="fig" rid="fig07">Figure 7a</xref> shows that the participant explored the Vincent van
Gogh painting in a triangle between the caf&#xE9; terrace, the
night sky and the shop on the street corner. The line
width encoding according to the bundle density also tells
us that the least viewed element was the corner shop, and
the majority of transitions were between the blue sky and
the yellow terrace. <xref ref-type="fig" rid="fig07">Figure 7b</xref> uncovers the main
transitions between the eyes and the lips of the peasant and the
cow. Small transitions to the figures of two peasants on
the top of the painting are also easily visible in the
proposed scanpath representation.</p>


<fig id="fig07" fig-type="figure" position="float">
					<label>Figure 7</label>
					<caption>
						<p>The scanpath visualization for the Vincent van Gogh (A) and Marc Chagall (B) paintings.</p>
					</caption>
					<graphic id="graph07" xlink:href="jemr-10-05-i-figure-07.png"/>
				</fig>
        <p><xref ref-type="fig" rid="fig08">Figure 8</xref> shows the bundled layout and different color
encodings of the gaze of the participant exploring the
Michelangelo masterpiece. <xref ref-type="fig" rid="fig08">Figure 8c</xref> shows that the
bundled layout reveals the main transitions between
Adam&#x2019;s head and hand and God&#x2019;s head and hand. However,
<xref ref-type="fig" rid="fig08">Figure 8d</xref>, which colors the saccades according to their
amplitude, reveals that saccades having the
same direction between the heads and hands were
bundled together with the transitions between the faces. We can
easily correct this by applying multi-criteria bundling
using both direction and saccade amplitude as
compatibility criteria. The resulting layout (<xref ref-type="fig" rid="fig08">Figure 8e</xref>) separates
Adam&#x2019;s hand-face and God&#x2019;s face-hand transitions from
the God-Adam face transitions. Encoding saccade
magnitude (<xref ref-type="fig" rid="fig08">Figure 8f</xref>) allows us to see that the bundles take the
differences in saccade length into account. Moreover,
color coding the saccade timestamp (<xref ref-type="fig" rid="fig08">Figure 8g</xref>) shows
the order in which the elements were looked at: first, the
figures around God, next, Adam&#x2019;s body, and, finally, a
long exploration of the transitions between the main
characters&#x2019; faces and hands.</p>
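<p>A minimal sketch of such a multi-criteria compatibility measure is given below. This is illustrative only, not the bundling implementation used in the paper (which relies on attribute-driven kernel density estimation edge bundling); the function name <monospace>saccade_compatibility</monospace> and the choice to combine the two criteria by a simple product are our own assumptions.</p>

```python
import numpy as np

def saccade_compatibility(p1, q1, p2, q2):
    """Illustrative multi-criteria compatibility in [0, 1] of two saccades,
    each given as a (start, end) pair of screen coordinates.

    Combines a direction term (cosine of the angle between the two saccade
    vectors, with negative values clamped to 0) and an amplitude term (ratio
    of the shorter to the longer saccade), so that only saccades that are
    roughly parallel AND of similar length score highly.
    """
    v1 = np.asarray(q1, dtype=float) - np.asarray(p1, dtype=float)
    v2 = np.asarray(q2, dtype=float) - np.asarray(p2, dtype=float)
    l1, l2 = np.linalg.norm(v1), np.linalg.norm(v2)
    if l1 == 0.0 or l2 == 0.0:
        return 0.0  # degenerate (zero-length) saccade: never compatible
    direction = max(0.0, float(np.dot(v1, v2)) / (l1 * l2))
    amplitude = min(l1, l2) / max(l1, l2)
    return direction * amplitude
```

<p>Under this measure, two parallel saccades of equal length score 1.0, antiparallel saccades score 0.0, and parallel saccades of different length score only their length ratio, so bundling driven by such a criterion keeps the short hand-face transitions apart from the long face-face ones.</p>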


<fig id="fig08" fig-type="figure" position="float">
					<label>Figure 8</label>
					<caption>
<p>Scanpath visualization of the dataset for the Michelangelo painting. A) visual stimulus, B) raw
fixations and saccades, C) data bundled according to saccade direction, D) layout “C” colored according to saccade
length, E) data bundled according to both direction and saccade length, F) layout “E” colored according to
saccade length, G) layout “C” colored according to timestamp.</p>
					</caption>
					<graphic id="graph08" xlink:href="jemr-10-05-i-figure-08.png"/>
				</fig>
      </sec>
      <sec id="s4d">
<title>Scanpath comparison</title>
        <p>
          The techniques presented provide visual support for
analysis. Moreover, our pipeline also provides flow
direction maps, which allow us not only to visualize
scanpaths but also to compare them quantitatively. Le Meur and
Baccino [
          <xref ref-type="bibr" rid="b28">28</xref>
          ] presented a number of methods for
comparing saliency maps, which are also suitable for comparing
flow direction maps: correlation-based measures, the
Kullback-Leibler divergence, and the Receiver Operating
Characteristic Analysis. These approaches can be used to,
first, individually compute the similarities S_V and S_H
between the vertical and horizontal components of the flow
direction maps; and, then, choose a norm for the vector (S_V, S_H) that defines the similarity between the two
scanpaths. In this paper, we provide an example of a
more straightforward approach that does not require
the choice of a norm. We use the cosine similarity cos &#x3B8;, in
which &#x3B8; is the angle between the two direction vectors.
This yields a similarity map in which each pixel&#x2019;s
value varies from &#x2212;1 (opposite directions) to 1 (the same
direction). We further apply a mask of the vectors&#x2019;
magnitudes (the average of the two flow direction maps) scaled to the
range (0, 1). This allows us to visualize only the parts of
the similarity map in which the direction flows are strong.
<xref ref-type="fig" rid="fig09">Figure 9</xref> shows a comparison of two participants&#x2019;
scanpaths. The upper parts of the two scanpaths (blue
areas) differ considerably, while the lower parts
(red areas) are similar. Notably, both scanpaths include a
transition from the caf&#xE9; terrace to the corner shop (cyan);
and while participant A used a triangle pattern (as
previously described), participant B mostly switched between
the upper part and the center of the painting. More
sophisticated approaches, such as a similarity measurement
using global distributions [
          <xref ref-type="bibr" rid="b12">12</xref>
          ], exist and can be used to
compare the flow direction maps of different scanpaths.
        </p>
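<p>The cosine-similarity map described above can be sketched as follows, assuming the flow direction maps are stored as (H, W, 2) arrays of horizontal and vertical components; the function name and the exact rescaling of the magnitude mask are illustrative assumptions, not the paper&#x2019;s implementation.</p>

```python
import numpy as np

def cosine_similarity_map(flow_a, flow_b, eps=1e-9):
    """Per-pixel cosine similarity of two flow direction maps.

    flow_a, flow_b: float arrays of shape (H, W, 2) holding the horizontal
    and vertical flow components at each pixel. Returns (similarity, mask):
    similarity[i, j] is cos(theta) in [-1, 1] between the two direction
    vectors at pixel (i, j); mask is the average flow magnitude rescaled
    to (0, 1), used to display only regions where the flow is strong.
    """
    dot = np.sum(flow_a * flow_b, axis=-1)
    mag_a = np.linalg.norm(flow_a, axis=-1)
    mag_b = np.linalg.norm(flow_b, axis=-1)
    similarity = dot / (mag_a * mag_b + eps)  # eps avoids division by zero

    mean_mag = (mag_a + mag_b) / 2.0
    mask = (mean_mag - mean_mag.min()) / (mean_mag.max() - mean_mag.min() + eps)
    return similarity, mask
```

<p>For rendering, the product of the similarity map and the mask can be displayed with a diverging colormap (e.g., blue for &#x2212;1, red for +1), so that weak-flow regions fade out, as in Figure 9.</p>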
		
		<fig id="fig09" fig-type="figure" position="float">
					<label>Figure 9</label>
					<caption>
						<p>Comparison of scanpaths of two participants who observed the Vincent van Gogh
painting. The similarity map is given by cosines between the two flow direction maps.</p>
					</caption>
					<graphic id="graph09" xlink:href="jemr-10-05-i-figure-09.png"/>
				</fig>
		
      </sec>
    </sec>
    <sec id="s5">
      <title>Conclusion and Future Work</title>
      <p>In this paper, we illustrated the use of different visual
aggregation techniques to obtain non-cluttered visual
representations of scanpaths. Fixation clustering and
saccade bundling simplified the scanpath representation
and allowed the scan strategy of the participant to be
read. Flow direction maps generated using edge bundling
can be further represented as a single image to explore
the transitions and can be compared using cosine
similarity maps. Used together, these techniques provide
effective support for visual analysis of scanpaths and
informative illustrations of eye movement data. We
also provide the example datasets in the supplementary
material so that other researchers can test their
visualization methods on the data and compare them with our results.</p>


      <p>It is worth noting that these are the first results based
on observations of the rendered images. To further
demonstrate the effectiveness of such visualizations, it
would be necessary to conduct a study with a group of
participants to statistically validate our findings.</p>


      <p>This work can be taken further in many directions.
Using the proposed approach, we can visually simplify
the scanpaths of multiple participants. To do so, we will
have to address the scalability issue of large quantities
of data to be simplified. The clustering and bundling
algorithms we used have already proven capable of addressing
such issues. The residual clutter of the generated layouts,
despite their visual simplification, can be further reduced
using filtering based on the density map. For instance, we
can choose not to display the least dense areas (which is
partly done using the line width modulation), or areas of
a specific direction or time period. Finally, as
addressed to some extent at the end of the discussion,
quantitative metrics can be extracted from these simplified
visualizations. Few metrics of scanpath comparison exist,
and our approach paves a new way to assess eye-tracking data.</p>
      <sec id="s5a" sec-type="COI-statement">
        <title>Ethics and Conflict of Interest</title>
        <p>The author(s) declare(s) that the contents of the article
are in agreement with the ethics described in <ext-link ext-link-type="uri" 
xlink:href="http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html" xlink:show="new">
http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html</ext-link> 
and that there is no conflict of interest regarding the
publication of this paper.</p>


      </sec>
    </sec>
  </body>
  <back>
<ref-list>
<ref id="b1"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Andersson</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Larsson</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Stridh</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2017</year>). <article-title>One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms.</article-title> <source>Behavior Research Methods</source>, <volume>49</volume>(<issue>2</issue>), <fpage>616</fpage>–<lpage>637</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-016-0738-9</pub-id><pub-id pub-id-type="pmid">27193160</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b2"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Blascheck</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Kurzhals</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Raschke</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Burch</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Weiskopf</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Ertl</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2014</year>). <article-title>State-of-the-art of visualization for eye tracking data.</article-title> In <source>Proceedings of Eurographics Conference on Visualization (EuroVis)</source> (pp. <fpage>63</fpage>-<lpage>82</lpage>). https://doi.org/<pub-id pub-id-type="doi">10.2312/eurovisstar.20141173</pub-id></mixed-citation></ref>
<ref id="b3"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Blascheck</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Schweizer</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Beck</surname>, <given-names>F.</given-names></string-name>, &amp; <string-name><surname>Ertl</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Visual Comparison of Eye Movement Patterns.</article-title> <source>Computer Graphics Forum</source>, <volume>36</volume>(<issue>3</issue>), <fpage>87</fpage>–<lpage>97</lpage>. <pub-id pub-id-type="doi">10.1111/cgf.13170</pub-id><issn>0167-7055</issn></mixed-citation></ref>
<ref id="b4"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Blascheck</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Kurzhals</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Raschke</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Strohmaier</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Weiskopf</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Ertl</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2016</year>). <article-title>AOI hierarchies for visual exploration of fixation sequences.</article-title> In Proceedings of the Symposium on Eye Tracking Research &amp; Applications (pp. 111-118). https://doi.org/<pub-id pub-id-type="doi">10.1145/2857491.2857524</pub-id></mixed-citation></ref>
<ref id="b5"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Borland</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Taylor</surname>, <given-names>M. R.</given-names>, <suffix>II</suffix></string-name></person-group>. (<year>2007</year>). <article-title>Rainbow color map (still) considered harmful.</article-title> <source>IEEE Computer Graphics and Applications</source>, <volume>27</volume>(<issue>2</issue>), <fpage>14</fpage>–<lpage>17</lpage>. <pub-id pub-id-type="doi">10.1109/MCG.2007.323435</pub-id><pub-id pub-id-type="pmid">17388198</pub-id><issn>0272-1716</issn></mixed-citation></ref>
<ref id="b6"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Burch</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Eye movement plots.</article-title> In <source>Proceedings of the 10th International Symposium on Visual Information Communication and Interaction</source> (pp. <fpage>101</fpage>-<lpage>108</lpage>). https://doi.org/<pub-id pub-id-type="doi">10.1145/3105971.3105973</pub-id></mixed-citation></ref>
<ref id="b7"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Burch</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Schmauder</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Raschke</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Weiskopf</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Saccade plots.</article-title> In <source>Proceedings of the Symposium on Eye Tracking Research and Applications</source> (pp. <fpage>307</fpage>-<lpage>310</lpage>). https://doi.org/<pub-id pub-id-type="doi">10.1145/2578153.2578205</pub-id></mixed-citation></ref>
<ref id="b8"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Burch</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Kumar</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Mueller</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Weiskopf</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Color bands: visualizing dynamic eye movement patterns.</article-title> <source>IEEE Second Workshop on Eye Tracking and Visualization</source> (pp. <fpage>40</fpage>-<lpage>44</lpage>). https://doi.org/<pub-id pub-id-type="doi">10.1109/ETVIS.2016.7851164</pub-id></mixed-citation></ref>
<ref id="b9"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Burch</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Kull</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Weiskopf</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2013</year>). <article-title>AOI rivers for visualizing dynamic eye gaze frequencies.</article-title> <source>Computer Graphics Forum</source>, <volume>32</volume>(<issue>3</issue>), <fpage>281</fpage>–<lpage>290</lpage>. <pub-id pub-id-type="doi">10.1111/cgf.12115</pub-id><issn>0167-7055</issn></mixed-citation></ref>
<ref id="b10"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Cabral</surname>, <given-names>B.</given-names></string-name>, &amp; <string-name><surname>Leedom</surname>, <given-names>L. C.</given-names></string-name></person-group> (<year>1993</year>). <article-title>Imaging vector fields using line integral convolution.</article-title> In <source>Proceedings of the Conference on Computer Graphics and Interactive Techniques</source> (pp. <fpage>263</fpage>-<lpage>270</lpage>). http://doi.org/<pub-id pub-id-type="doi">10.1145/166117.166151</pub-id></mixed-citation></ref>
<ref id="b11"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Comaniciu</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Meer</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2002</year>). <article-title>Mean shift: A robust approach toward feature space analysis.</article-title> <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>, <volume>24</volume>(<issue>5</issue>), <fpage>603</fpage>–<lpage>619</lpage>. <pub-id pub-id-type="doi">10.1109/34.1000236</pub-id><issn>0162-8828</issn></mixed-citation></ref>
<ref id="b12"><mixed-citation publication-type="book-chapter" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Dinh</surname>, <given-names>H. Q.</given-names></string-name>, &amp; <string-name><surname>Xu</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2008</year>). <chapter-title>Measuring the similarity of vector fields using global distributions.</chapter-title> In Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition and Structural and Syntactic Pattern Recognition (pp. 187-196). https://doi.org/<pub-id pub-id-type="doi">10.1007/978-3-540-89689-0_23</pub-id></mixed-citation></ref>
<ref id="b13"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Duchowski</surname>, <given-names>A. T.</given-names></string-name>, <string-name><surname>Driver</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Jolaoso</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Tan</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Ramey</surname>, <given-names>B. N.</given-names></string-name>, &amp; <string-name><surname>Robbins</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Scanpath comparison revisited.</article-title> In <source>Proceedings of the Symposium on Eye Tracking Research &amp; Applications</source> (pp. <fpage>219</fpage>-<lpage>226</lpage>). https://doi.org/<pub-id pub-id-type="doi">10.1145/1743666.1743719</pub-id></mixed-citation></ref>
<ref id="b14"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Duchowski</surname>, <given-names>A. T.</given-names></string-name></person-group> (<year>2002</year>). <article-title>A breadth-first survey of eye-tracking applications.</article-title> <source>Behavior Research Methods, Instruments, &amp; Computers</source>, <volume>34</volume>(<issue>4</issue>), <fpage>455</fpage>–<lpage>470</lpage>. <pub-id pub-id-type="doi">10.3758/BF03195475</pub-id><pub-id pub-id-type="pmid">12564550</pub-id><issn>0743-3808</issn></mixed-citation></ref>
<ref id="b15"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Eraslan</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Yesilada</surname>, <given-names>Y.</given-names></string-name>, &amp; <string-name><surname>Harper</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Eye tracking scanpath analysis techniques on web pages: A survey, evaluation and comparison.</article-title> <source>Journal of Eye Movement Research</source>, <volume>9</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>19</lpage>. <pub-id pub-id-type="doi">10.16910/jemr.9.1.2</pub-id><issn>1995-8692</issn></mixed-citation></ref>
<ref id="b16"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Goldberg</surname>, <given-names>J. H.</given-names></string-name>, &amp; <string-name><surname>Helfman</surname>, <given-names>J. I.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Visual scanpath representation.</article-title> In <source>Proceedings of the Symposium on Eye Tracking Research &amp; Applications</source> (pp. <fpage>203</fpage>-<lpage>210</lpage>). https://doi.org/<pub-id pub-id-type="doi">10.1145/1743666.1743717</pub-id></mixed-citation></ref>
<ref id="b17"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Goldberg</surname>, <given-names>J. H.</given-names></string-name>, &amp; <string-name><surname>Helfman</surname>, <given-names>J. I.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Scanpath clustering and aggregation.</article-title> In Proceedings of the Symposium on Eye Tracking Research &amp; Applications (pp. 227-234). https://doi.org/<pub-id pub-id-type="doi">10.1145/1743666.1743721</pub-id></mixed-citation></ref>
<ref id="b18"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Andersson</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Dewhurst</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Van de Weijer</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2011</year>). <source>Eye tracking: A comprehensive guide to methods and measures</source>. <publisher-name>OUP Oxford</publisher-name>.</mixed-citation></ref>
<ref id="b19"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Hurter</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Image-Based Visualization: Interactive Multidimensional Data Exploration.</article-title> <source>Synthesis Lectures on Visualization</source>, <volume>3</volume>(<issue>2</issue>), <fpage>1</fpage>–<lpage>127</lpage>. <pub-id pub-id-type="doi">10.2200/S00688ED1V01Y201512VIS006</pub-id></mixed-citation></ref>
<ref id="b20"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Hurter</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Ersoy</surname>, <given-names>O.</given-names></string-name>, &amp; <string-name><surname>Telea</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Graph bundling by kernel density estimation.</article-title> <source>Computer Graphics Forum</source>, <volume>31</volume>(<issue>3</issue>), <fpage>865</fpage>–<lpage>874</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-8659.2012.03079.x</pub-id><issn>0167-7055</issn></mixed-citation></ref>
<ref id="b21"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Hurter</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Ersoy</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>Fabrikant</surname>, <given-names>S. I.</given-names></string-name>, <string-name><surname>Klein</surname>, <given-names>T. R.</given-names></string-name>, &amp; <string-name><surname>Telea</surname>, <given-names>A. C.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Bundled visualization of dynamic graph and trail data.</article-title> <source>IEEE Transactions on Visualization and Computer Graphics</source>, <volume>20</volume>(<issue>8</issue>), <fpage>1141</fpage>–<lpage>1157</lpage>. <pub-id pub-id-type="doi">10.1109/TVCG.2013.246</pub-id><pub-id pub-id-type="pmid">26357367</pub-id><issn>1077-2626</issn></mixed-citation></ref>
<ref id="b22"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Jacob</surname>, <given-names>R. J.</given-names></string-name>, &amp; <string-name><surname>Karn</surname>, <given-names>K. S.</given-names></string-name></person-group> (<year>2003</year>). Eye tracking in human-computer interaction and usability research: Ready to deliver the promises. In The Mind’s Eye: Cognitive and Applied Aspects of Eye Movement Research. Hyona, Radach &amp; Deubel (eds.) Oxford.</mixed-citation></ref>
<ref id="b23"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2010</year>). <article-title>A vector-based, multidimensional scanpath similarity measure.</article-title> In <source>Proceedings of the Symposium on Eye Tracking Research &amp; Applications</source> (pp. <fpage>211</fpage>-<lpage>218</lpage>). https://doi.org/<pub-id pub-id-type="doi">10.1145/1743666.1743718</pub-id></mixed-citation></ref>
<ref id="b24"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Laramee</surname>, <given-names>R. S.</given-names></string-name>, <string-name><surname>Hauser</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Doleisch</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Vrolijk</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Post</surname>, <given-names>F. H.</given-names></string-name>, &amp; <string-name><surname>Weiskopf</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2004</year>). The State of the Art in Flow Visualization: Dense and Texture-Based Techniques. Computer Graphics Forum, 23(2):203-221. https://doi.org/<pub-id pub-id-type="doi">10.1111/j.1467-8659.2004.00753.x</pub-id></mixed-citation></ref>
<ref id="b25"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Lhuillier</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Hurter</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Telea</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2017</year>). <article-title>State of the Art in Edge and Trail Bundling Techniques.</article-title> <source>Computer Graphics Forum</source>, <volume>36</volume>(<issue>3</issue>), <fpage>619</fpage>–<lpage>645</lpage>. <pub-id pub-id-type="doi">10.1111/cgf.13213</pub-id><issn>0167-7055</issn></mixed-citation></ref>
<ref id="b26"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Lhuillier</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Hurter</surname> <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Telea</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2017</year>). <article-title>FFTEB: Edge Bundling of Huge Graphs by the Fast Fourier Transform.</article-title> In <source>IEEE Pacific Visualization Symposium</source> (PacificVis) (pp. <fpage>190</fpage>-<lpage>199</lpage>) https://doi.org/<pub-id pub-id-type="doi">10.1109/PACIFICVIS.2017.8031594</pub-id></mixed-citation></ref>
<ref id="b27"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Levenshtein</surname>, <given-names>V. I.</given-names></string-name></person-group> (<year>1966</year>). <article-title>Binary codes capable of correcting deletions, insertions, and reversals.</article-title> <source>Soviet Physics, Doklady</source>, <volume>10</volume>(<issue>8</issue>), <fpage>707</fpage>–<lpage>710</lpage>.<issn>0038-5689</issn></mixed-citation></ref>
<ref id="b28"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Le Meur</surname>, <given-names>O.</given-names></string-name>, &amp; <string-name><surname>Baccino</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Methods for comparing scanpaths and saliency maps: Strengths and weaknesses.</article-title> <source>Behavior Research Methods</source>, <volume>45</volume>(<issue>1</issue>), <fpage>251</fpage>–<lpage>266</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-012-0226-9</pub-id><pub-id pub-id-type="pmid">22773434</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b29"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Matin</surname>, <given-names>E.</given-names></string-name></person-group> (<year>1974</year>). <article-title>Saccadic suppression: A review and an analysis.</article-title> <source>Psychological Bulletin</source>, <volume>81</volume>(<issue>12</issue>), <fpage>899</fpage>–<lpage>917</lpage>. <pub-id pub-id-type="doi">10.1037/h0037368</pub-id><pub-id pub-id-type="pmid">4612577</pub-id><issn>0033-2909</issn></mixed-citation></ref>
<ref id="b30"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Moreland</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Diverging color maps for scientific visualization.</article-title> <source>Advances in Visual Computing</source>, <volume>5876</volume>, <fpage>92</fpage>–<lpage>103</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-642-10520-3_9</pub-id></mixed-citation></ref>
<ref id="b31"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Netzel</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Burch</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Weiskopf</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Interactive scanpath-oriented annotation of fixations.</article-title> In Proceedings of the Symposium on Eye Tracking Research &amp; Applications (pp. 183-187). https://doi.org/<pub-id pub-id-type="doi">10.1145/2857491.2857498</pub-id></mixed-citation></ref>
<ref id="b32"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2010</year>). <article-title>An adaptive algorithm for fixation, saccade, and glissade detection in eyetracking data.</article-title> <source>Behavior Research Methods</source>, <volume>42</volume>(<issue>1</issue>), <fpage>188</fpage>–<lpage>204</lpage>. <pub-id pub-id-type="doi">10.3758/BRM.42.1.188</pub-id><pub-id pub-id-type="pmid">20160299</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b33"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Peysakhovich</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Hurter</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Telea</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Attribute-driven edge bundling for general graphs with applications in trail analysis.</article-title> In <source>IEEE Pacific Visualization Symposium (PacificVis)</source> (pp. <fpage>39</fpage>-<lpage>46</lpage>). https://doi.org/<pub-id pub-id-type="doi">10.1109/PACIFICVIS.2015.7156354</pub-id></mixed-citation></ref>
<ref id="b34"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Post</surname>, <given-names>F. H.</given-names></string-name>, <string-name><surname>Vrolijk</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Hauser</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Laramee</surname>, <given-names>R. S.</given-names></string-name>, &amp; <string-name><surname>Doleisch</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2002</year>). <source>Feature extraction and visualization of flow fields. Proceedings of Eurographics Conference on Visualization</source> (pp. <fpage>69</fpage>–<lpage>100</lpage>). <publisher-name>EuroVis</publisher-name>.</mixed-citation></ref>
<ref id="b35"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Raschke</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Herr</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Blascheck</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Ertl</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Burch</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Willmann</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Schrauf</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2014</year>). <article-title>A visual approach for scan path comparison.</article-title> In <source>Proceedings of the Symposium on Eye Tracking Research and Applications</source> (pp. <fpage>135</fpage>-<lpage>142</lpage>). https://doi.org/<pub-id pub-id-type="doi">10.1145/2578153.2578173</pub-id></mixed-citation></ref>
<ref id="b36"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Salvucci</surname>, <given-names>D. D.</given-names></string-name>, &amp; <string-name><surname>Goldberg</surname>, <given-names>J. H.</given-names></string-name></person-group> (<year>2000</year>). <article-title>Identifying fixations and saccades in eye-tracking protocols.</article-title> In <source>Proceedings of the Symposium on Eye Tracking Research &amp; Applications</source> (pp. <fpage>71</fpage>-<lpage>78</lpage>). https://doi.org/<pub-id pub-id-type="doi">10.1145/355017.355028</pub-id></mixed-citation></ref>
<ref id="b37"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Santella</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>DeCarlo</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2004</year>). <article-title>Robust clustering of eye movement recordings for quantification of visual interest.</article-title> In <source>Proceedings of the Symposium on Eye Tracking Research &amp; Applications</source> (pp. <fpage>27</fpage>-<lpage>34</lpage>). https://doi.org/<pub-id pub-id-type="doi">10.1145/968363.968368</pub-id></mixed-citation></ref>
<ref id="b38"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Špakov</surname>, <given-names>O.</given-names></string-name>, &amp; <string-name><surname>Miniotas</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2007</year>). <article-title>Visualization of eye gaze data using heat maps.</article-title> <source>Elektronika ir Elektrotechnika</source>, <volume>2</volume>(<issue>74</issue>), <fpage>55</fpage>–<lpage>58</lpage>.<issn>1392-1215</issn></mixed-citation></ref>
<ref id="b39"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Wegenkittl</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Groller</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name><surname>Purgathofer</surname>, <given-names>W.</given-names></string-name></person-group> (<year>1997</year>). <chapter-title>Animating flow fields: rendering of oriented line integral convolution.</chapter-title> In <source>Computer Animation</source> (pp. <fpage>15</fpage>-<lpage>21</lpage>).</mixed-citation></ref>
<ref id="b40"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>van der Zwan</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Codreanu</surname>, <given-names>V.</given-names></string-name>, &amp; <string-name><surname>Telea</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2016</year>). <article-title>CUBu: Universal real-time bundling for large graphs.</article-title> <source>IEEE Transactions on Visualization and Computer Graphics</source>, <volume>22</volume>(<issue>12</issue>), <fpage>2550</fpage>–<lpage>2563</lpage>. <pub-id pub-id-type="doi">10.1109/TVCG.2016.2515611</pub-id><pub-id pub-id-type="pmid">26761819</pub-id><issn>1077-2626</issn></mixed-citation></ref>
</ref-list>
  </back>
</article>
