<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.10.5.3</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Gaze Self-Similarity Plot - A New Visualization Technique</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Kasprowski</surname>
						<given-names>Pawel</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Harezlak</surname>
						<given-names>Katarzyna</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>				
        <aff id="aff1">
		<institution>Silesian University of Technology</institution>,   <country>Poland</country>
        </aff>
		</contrib-group>   

		
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>16</day>  
		<month>10</month>
        <year>2017</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2017</year>
	</pub-date>
      <volume>10</volume>
      <issue>5</issue>
	 <elocation-id>10.16910/jemr.10.5.3</elocation-id> 
	<permissions> 
	<copyright-year>2017</copyright-year>
	<copyright-holder>Kasprowski and Harezlak</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>Eye tracking has become a valuable means of extending knowledge of human behavior based on visual patterns. One of the most important elements of such an analysis is the presentation of the obtained results, which proves to be a challenging task. Traditional visualization techniques such as scan-paths or heat maps may reveal interesting information; nevertheless, many useful features remain invisible, especially when the temporal characteristics of eye movements are taken into account. This paper introduces a technique called the gaze self-similarity plot (GSSP) that may be applied to visualize both spatial and temporal eye movement features on a single two-dimensional plot. The technique is an extension of the idea of recurrence plots, commonly used in time series analysis. The paper presents the basic concepts of the proposed approach (two types of GSSP), complemented with examples of the kind of information that may be disclosed, and finally outlines areas of possible GSSP applications.</p>
      </abstract>
      <kwd-group>
        <kwd>eye tracking</kwd>
        <kwd>visualization</kwd>
        <kwd>recurrence</kwd>
        <kwd>visual patterns</kwd>
        <kwd>classification</kwd>
      </kwd-group>
    </article-meta>
  </front>	
  <body>

    <sec id="S1">
      <title>Introduction</title>
	  
      <p>There are many visualization techniques for presenting eye movements, among which
scan-paths and heat maps, showing the spatial positions of gazes in relation to a stimulus, come to
the fore. The most important feature of these visualization approaches is that they are
straightforward and understandable even for laymen; however, they are not well
suited to presenting temporal information. Temporal eye movement features, such as fixation
durations, their order and recurrence, or saccade durations, are not visible on heat maps and
are barely visible on scan-paths, so they have to be presented by other means.</p>

      <p>There have been attempts to enrich scan-paths [
        <xref ref-type="bibr" rid="R14">1</xref>
		] or heat maps [
        <xref ref-type="bibr" rid="R30">17</xref>
		], but the general problem is
that it is impossible to present three properties (horizontal and vertical position together with
time) on a single two-dimensional plot. Therefore, many spatio-temporal visualization
techniques use complex 3D graphs or combine different kinds of information in the same picture. See [
        <xref ref-type="bibr" rid="R16">3</xref>
		]
for the state of the art in this area.</p>

      <p>The idea discussed in this paper alleviates the aforementioned problems by presenting
spatial information as relative distances between gazes instead of their absolute locations. Such
an approach, initially presented in [
        <xref ref-type="bibr" rid="R17">4</xref>
		] and significantly extended in the current
research, makes it possible to remove one dimension.</p>

      <p>The concept is based on the recurrence plot technique, used in time series analysis
to reveal repeating patterns in data [
        <xref ref-type="bibr" rid="R18">5</xref>
		]. This method has already been utilized in the eye tracking
field by [
        <xref ref-type="bibr" rid="R19">6</xref>
		] for a series of fixations placed on the X and Y axes according to their order of occurrence.
If the i<sub>th</sub> and j<sub>th</sub> fixations are close to each other, the point (i, j) on the plot is black; when
the distance between the fixations is above a threshold, it is white. Based on the recurrence
plot, several measures describing eye movement patterns have been defined. There are also
tools for building recurrence plots, of which VERP Explorer is a good example [
        <xref ref-type="bibr" rid="R15">2</xref>
		].</p>
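      <p>A minimal sketch (ours, not taken from the cited work) of such a thresholded recurrence plot may be written as follows; the function name, the sample fixation centers and the threshold value are our own assumptions:</p>
      <preformat>
```python
import numpy as np

def recurrence_plot(fixations, threshold):
    """Binary recurrence plot: cell (i, j) equals 1 (black) when
    fixations i and j lie within `threshold` of each other."""
    f = np.asarray(fixations, dtype=float)      # shape (n, 2)
    # pairwise Euclidean distances between fixation centers
    d = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=-1)
    return np.less_equal(d, threshold).astype(int)
```
      </preformat>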

      <p>A pattern created by a recurrence plot as used in [
        <xref ref-type="bibr" rid="R19">6</xref>
		] depends on two parameters: the maximal distance at which two fixations are treated as similar (or recurrent) and the algorithm
used for fixation detection. It may easily be shown that an algorithm that merges subsequent
fixations more eagerly may produce a completely different plot and different values of the
recurrence measures, thereby introducing ambiguity.</p>

      <p>In this paper we propose a visualization technique that does not depend on the
previously mentioned parameters, because: (1) it is based not on fixations but on
raw gaze coordinates, and (2) it visualizes the distance between gazes as a continuous value
instead of using only two values indicating whether the distance is above or below a
threshold, as in the method described above. The next section introduces the technique,
and the subsequent parts present a non-exhaustive list of
possible applications of the method, referred to as the Gaze Self-Similarity Plot (GSSP).</p>
    </sec>
	
    <sec id="S2">
      <title>Method</title>
	  
      <p>Suppose that we have a sequence of n gaze recordings g(1)...g(n), where each
recording g(i) is described as a point in two-dimensional space: (g<sub>x</sub>, g<sub>y</sub>). The x and y
values are the coordinates of a gaze on a screen with resolution (x<sub>max</sub>, y<sub>max</sub>). The GSSP is a
visualization of a matrix consisting of n*n points, where each point encodes the distance
between the i<sub>th</sub> and j<sub>th</sub> gaze points.</p>

      <p>The GSSP is defined by the following equation:</p>
	  
	  <disp-formula>
	   <label>(1)</label>
	  <mml:math id="m1">
	<mml:mtable class="m-equation" displaystyle="true" style="display: block; margin-top: 1.0em; margin-bottom: 2.0em">
	<mml:mtr>
		<mml:mtd>
			<mml:mspace width="6.0em" />
		</mml:mtd>
		<mml:mtd columnalign="left">
			<mml:mtable class="m-split" columnalign="left">
				<mml:mtr>
					<mml:mtd>
						<mml:mi>g</mml:mi>
						<mml:mi>s</mml:mi>
						<mml:mi>s</mml:mi>
						<mml:mi>p</mml:mi>
						<mml:mrow>
							<mml:mo form="prefix">(</mml:mo>
							<mml:mi>i</mml:mi>
							<mml:mo>,</mml:mo>
							<mml:mi>j</mml:mi>
							<mml:mo form="postfix">)</mml:mo>
						</mml:mrow>
						<mml:mo>=</mml:mo>
						<mml:mfrac linethickness="1">
							<mml:mrow>
								<mml:msqrt>
									<mml:mrow>
										<mml:msup>
											<mml:mrow>
												<mml:mo form="prefix">(</mml:mo>
												<mml:msub>
													<mml:mi>g</mml:mi>
													<mml:mi>x</mml:mi>
												</mml:msub>
												<mml:mo form="prefix">(</mml:mo>
												<mml:mi>i</mml:mi>
												<mml:mo form="postfix">)</mml:mo>
												<mml:mo>-</mml:mo>
												<mml:msub>
													<mml:mi>g</mml:mi>
													<mml:mi>x</mml:mi>
												</mml:msub>
												<mml:mo form="prefix">(</mml:mo>
												<mml:mi>j</mml:mi>
												<mml:mo form="postfix">)</mml:mo>
												<mml:mo form="postfix">)</mml:mo>
											</mml:mrow>
											<mml:mn>2</mml:mn>
										</mml:msup>
										<mml:mo>+</mml:mo>
										<mml:msup>
											<mml:mrow>
												<mml:mo form="prefix">(</mml:mo>
												<mml:msub>
													<mml:mi>g</mml:mi>
													<mml:mi>y</mml:mi>
												</mml:msub>
												<mml:mo form="prefix">(</mml:mo>
												<mml:mi>i</mml:mi>
												<mml:mo form="postfix">)</mml:mo>
												<mml:mo>-</mml:mo>
												<mml:msub>
													<mml:mi>g</mml:mi>
													<mml:mi>y</mml:mi>
												</mml:msub>
												<mml:mo form="prefix">(</mml:mo>
												<mml:mi>j</mml:mi>
												<mml:mo form="postfix">)</mml:mo>
												<mml:mo form="postfix">)</mml:mo>
											</mml:mrow>
											<mml:mn>2</mml:mn>
										</mml:msup>
									</mml:mrow>
								</mml:msqrt>
							</mml:mrow>
							<mml:mi>N</mml:mi>
						</mml:mfrac>
					</mml:mtd>
				</mml:mtr>
			</mml:mtable>
		</mml:mtd>
		<mml:mtd columnalign="right" style="width: 100%">
			<mml:mspace width="6.0em" />
		</mml:mtd>
	</mml:mtr>
</mml:mtable>
</mml:math>
 </disp-formula>
 	  

      <p>where N is the normalization factor, which is defined as the maximal possible distance
between two gaze points:</p>

	  <disp-formula>
	   <label>(2)</label>
	  <mml:math id="m2">
<mml:mtable class="m-equation" displaystyle="true" style="display: block; margin-top: 1.0em; margin-bottom: 2.0em">
	<mml:mtr>
		<mml:mtd>
			<mml:mspace width="6.0em" />
		</mml:mtd>
		<mml:mtd columnalign="left">
			<mml:mi>N</mml:mi>
			<mml:mo>=</mml:mo>
			<mml:msqrt>
				<mml:mrow>
					<mml:msubsup>
						<mml:mi>x</mml:mi>
						<mml:mi>max</mml:mi>
						<mml:mn>2</mml:mn>
					</mml:msubsup>
					<mml:mo>+</mml:mo>
					<mml:msubsup>
						<mml:mi>y</mml:mi>
						<mml:mi>max</mml:mi>
						<mml:mn>2</mml:mn>
					</mml:msubsup>
				</mml:mrow>
			</mml:msqrt>
		</mml:mtd>
		<mml:mtd columnalign="right" style="width: 100%">
			<mml:mspace width="6.0em" />
		</mml:mtd>
	</mml:mtr>
</mml:mtable>
</mml:math>
 </disp-formula>


      <p>Every element of the matrix may contain a value in the range (0...1), where 0 is
represented by a black point and 1 by a white one on the corresponding plot. The
brightness of a pixel on such a plot thus encodes the Euclidean distance between two gaze points:
black means that the two points are very close to each other, while white indicates
that they are far apart. The size of the plot is practically unlimited and depends
on the number of registered gazes.</p>
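      <p>Equations (1) and (2) translate directly into a few lines of code. The sketch below assumes a NumPy environment; the function and variable names are our own:</p>
      <preformat>
```python
import numpy as np

def gssp(gazes, xmax, ymax):
    """GSSP matrix (Eq. 1): pairwise Euclidean distances between raw
    gaze points, normalized by the screen diagonal N (Eq. 2)."""
    g = np.asarray(gazes, dtype=float)           # shape (n, 2)
    n_factor = np.hypot(xmax, ymax)              # Eq. (2)
    d = np.linalg.norm(g[:, None, :] - g[None, :, :], axis=-1)
    return d / n_factor                          # 0 = black, 1 = white
```
      </preformat>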

      <p>A sample recorded gaze sequence and the corresponding GSSP, with a description of its
characteristic elements, are presented in Figures 1a and 1b, respectively.
The diagonal line from the upper-left corner (start) to the lower-right corner is black, as it shows
the distance of a gaze point to itself. Each group of black points adjacent to the diagonal, visible as a
black square, may be interpreted as a fixation. The bigger the square, the longer the fixation
duration. Rectangles outside the diagonal represent distances between fixations. A dark rectangle
indicates that two fixations are close to each other, which may be noticed for fixations
2, 4 and 7 as well as for fixations 1 and 3. A bright rectangle indicates that the groups of gaze points
constituting the fixations are far from each other, as in the case of fixation pairs (1, 6) and (3, 6).</p>

<fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1:</label>
					<caption>
						<p>The GSSP example with an explanation of its characteristic elements. Numbers from 1 to 8 denote the black squares characteristic of fixations. On one hand, the GSSP shows that fixations 2, 4 and 7 appear very close to each other, which is a typical example of recurrent behavior. On the other hand, fixations 1 and 3 are close to each other and very far from fixation number 6.</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-10-05-c-figure-01.eps"/>
				</fig>

      <sec id="S2a">
        <title>Differentiating vertical and horizontal offsets using GSSP<sub>VH</sub></title>
		
        <p>The main disadvantage of recurrence plots, and at the same time of the GSSP presented
above, is that the upper-right part of the plot mirrors the lower-left part. To avoid this
redundancy and to provide more information on the same plot, we propose an extended
version of the GSSP, denoted GSSP<sub>VH</sub>, in which the upper-right part of the plot shows
horizontal distances between gazes, while the lower-left part presents vertical distances.
Additionally, we propose using directed distances, to preserve information not only about
the distance but also about its direction (e.g. from left to right or from right to
left).</p>

        <p>If we consider two gazes g(a) and g(b) for which a&lt;b, i.e. g(a) was measured
before g(b), we can calculate the horizontally and vertically directed distances as:</p>

	  <disp-formula>
	   <label>(3)</label>
	  <mml:math id="m3">
<mml:mtable class="m-equation" displaystyle="true" style="display: block; margin-top: 1.0em; margin-bottom: 2.0em">
	<mml:mtr>
		<mml:mtd>
			<mml:mspace width="6.0em" />
		</mml:mtd>
		<mml:mtd columnalign="left">
			<mml:mo>&#x02146;</mml:mo>
			<mml:mi>x</mml:mi>
			<mml:mo>=</mml:mo>
			<mml:mrow>
				<mml:mo form="prefix">(</mml:mo>
				<mml:msub>
					<mml:mi>g</mml:mi>
					<mml:mi>x</mml:mi>
				</mml:msub>
				<mml:mo form="prefix">(</mml:mo>
				<mml:mi>b</mml:mi>
				<mml:mo form="postfix">)</mml:mo>
				<mml:mo>-</mml:mo>
				<mml:msub>
					<mml:mi>g</mml:mi>
					<mml:mi>x</mml:mi>
				</mml:msub>
				<mml:mo form="prefix">(</mml:mo>
				<mml:mi>a</mml:mi>
				<mml:mo form="postfix">)</mml:mo>
				<mml:mo form="postfix">)</mml:mo>
			</mml:mrow>
			<mml:mo>&#x0002F;</mml:mo>
			<mml:msub>
				<mml:mi>x</mml:mi>
				<mml:mi>max</mml:mi>
			</mml:msub>
		</mml:mtd>
		<mml:mtd columnalign="right" style="width: 100%">
			<mml:mspace width="6.0em" />
		</mml:mtd>
	</mml:mtr>
</mml:mtable>
</mml:math>
 </disp-formula>

	  <disp-formula>
	   <label>(4)</label>
	  <mml:math id="m4">
<mml:mtable class="m-equation" displaystyle="true" style="display: block; margin-top: 1.0em; margin-bottom: 2.0em">
	<mml:mtr>
		<mml:mtd>
			<mml:mspace width="6.0em" />
		</mml:mtd>
		<mml:mtd columnalign="left">
			<mml:mo>&#x02146;</mml:mo>
			<mml:mi>y</mml:mi>
			<mml:mo>=</mml:mo>
			<mml:mrow>
				<mml:mo form="prefix">(</mml:mo>
				<mml:msub>
					<mml:mi>g</mml:mi>
					<mml:mi>y</mml:mi>
				</mml:msub>
				<mml:mo form="prefix">(</mml:mo>
				<mml:mi>b</mml:mi>
				<mml:mo form="postfix">)</mml:mo>
				<mml:mo>-</mml:mo>
				<mml:msub>
					<mml:mi>g</mml:mi>
					<mml:mi>y</mml:mi>
				</mml:msub>
				<mml:mo form="prefix">(</mml:mo>
				<mml:mi>a</mml:mi>
				<mml:mo form="postfix">)</mml:mo>
				<mml:mo form="postfix">)</mml:mo>
			</mml:mrow>
			<mml:mo>&#x0002F;</mml:mo>
			<mml:msub>
				<mml:mi>y</mml:mi>
				<mml:mi>max</mml:mi>
			</mml:msub>
		</mml:mtd>
		<mml:mtd columnalign="right" style="width: 100%">
			<mml:mspace width="6.0em" />
		</mml:mtd>
	</mml:mtr>
</mml:mtable>
</mml:math>
 </disp-formula>
								
        <p>Therefore, the general formula for GSSP<sub>VH</sub> calculation is:</p>
		
	  <disp-formula>
	   <label>(5)</label>
	  <mml:math id="m5">
<mml:mtable class="m-equation" displaystyle="true" style="display: block; margin-top: 1.0em; margin-bottom: 2.0em">
	<mml:mtr>
		<mml:mtd>
			<mml:mspace width="6.0em" />
		</mml:mtd>
		<mml:mtd columnalign="left">
			<mml:mi>g</mml:mi>
			<mml:mi>s</mml:mi>
			<mml:mi>s</mml:mi>
			<mml:msub>
				<mml:mi>p</mml:mi>
				<mml:mrow>
					<mml:mi>v</mml:mi>
					<mml:mi>h</mml:mi>
				</mml:mrow>
			</mml:msub>
			<mml:mrow>
				<mml:mo form="prefix">(</mml:mo>
				<mml:mi>i</mml:mi>
				<mml:mo>,</mml:mo>
				<mml:mi>j</mml:mi>
				<mml:mo form="postfix">)</mml:mo>
			</mml:mrow>
			<mml:mo>=</mml:mo>
			<mml:mrow>
				<mml:mo rspace="0.3em" lspace="0em" stretchy="true" fence="true" form="prefix">{</mml:mo>
				<mml:mtable class="m-matrix">
					<mml:mtr>
						<mml:mtd>
							<mml:mo>-</mml:mo>
							<mml:mo>&#x02146;</mml:mo>
							<mml:mi>x</mml:mi>
							<mml:mo>,</mml:mo>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>i</mml:mi>
							<mml:mo>&#x02265;</mml:mo>
							<mml:mi>j</mml:mi>
						</mml:mtd>
					</mml:mtr>
					<mml:mtr>
						<mml:mtd>
							<mml:mo>&#x02146;</mml:mo>
							<mml:mi>y</mml:mi>
							<mml:mo>,</mml:mo>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>i</mml:mi>
							<mml:mo>&#x0003C;</mml:mo>
							<mml:mi>j</mml:mi>
						</mml:mtd>
					</mml:mtr>
				</mml:mtable>
			</mml:mrow>
		</mml:mtd>
		<mml:mtd columnalign="right" style="width: 100%">
			<mml:mspace width="6.0em" />
		</mml:mtd>
	</mml:mtr>
</mml:mtable>
</mml:math>
 </disp-formula>		
					
        <p>and every value may be in the range of (&#x2212;1...1) .</p>
		
        <p>It is worth noting that when the condition i &gt; j is fulfilled, gaze i was recorded
<bold>after</bold> gaze j, so &#x2212;dx must be taken as the directed distance.</p>
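        <p>Equations (3)-(5) may be sketched as follows (our own code, with NumPy assumed). Note that for a cell (i, j) with i &gt;= j the earlier gaze is g(j), so the stored value &#x2212;dx equals (g<sub>x</sub>(j) - g<sub>x</sub>(i))/x<sub>max</sub>:</p>
        <preformat>
```python
import numpy as np

def gssp_vh(gazes, xmax, ymax):
    """GSSP_VH (Eq. 5): cell (i, j) holds -dx of Eq. (3) when i >= j
    and dy of Eq. (4) when j > i; every value lies in (-1...1)."""
    g = np.asarray(gazes, dtype=float)
    n = len(g)
    # (gx(j) - gx(i)) / xmax: for i >= j this is -dx with a = j, b = i
    dxm = (g[None, :, 0] - g[:, None, 0]) / xmax
    # (gy(j) - gy(i)) / ymax: for j > i this is dy with a = i, b = j
    dym = (g[None, :, 1] - g[:, None, 1]) / ymax
    rows = np.arange(n)[:, None]
    cols = np.arange(n)[None, :]
    return np.where(rows >= cols, dxm, dym)
```
        </preformat>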

        <p>Two ways of visualizing such a matrix may be applied. One is to rescale the values to
the (0...1) range in greyscale, similarly to the previous example. However, the main drawback
of this approach is that a distance equal to zero becomes 0.5 after rescaling and is therefore
difficult to distinguish visually.</p>

        <p>Therefore, we propose a colored plot, encoding each direction with a different color
channel. The color of every point on the plot is defined by its three components: red
(R), green (G) and blue (B). Each component may have a value in the range (0...1), where
0 denotes the absence of that component.</p>

        <p>For instance, movements from left to right and from top to bottom may be encoded
with the red component, and movements from right to left and from bottom to top with the
green component, although any other color pattern is also possible (see Figure 2).</p>

        <p>For such color encoding every pixel value would be calculated as:</p>
		
	  <disp-formula>
	   <label>(6)</label>
	  <mml:math id="m6">
<mml:mtable class="m-equation" displaystyle="true" style="display: block; margin-top: 1.0em; margin-bottom: 2.0em">
	<mml:mtr>
		<mml:mtd>
			<mml:mspace width="6.0em" />
		</mml:mtd>
		<mml:mtd columnalign="left">
			<mml:msub>
				<mml:mi>I</mml:mi>
				<mml:mrow>
					<mml:mo form="prefix">(</mml:mo>
					<mml:mi>R</mml:mi>
					<mml:mo>,</mml:mo>
					<mml:mi>G</mml:mi>
					<mml:mo>,</mml:mo>
					<mml:mi>B</mml:mi>
					<mml:mo form="postfix">)</mml:mo>
				</mml:mrow>
			</mml:msub>
			<mml:mrow>
				<mml:mo form="prefix">(</mml:mo>
				<mml:mi>i</mml:mi>
				<mml:mo>,</mml:mo>
				<mml:mi>j</mml:mi>
				<mml:mo form="postfix">)</mml:mo>
			</mml:mrow>
			<mml:mo>=</mml:mo>
			<mml:mrow>
				<mml:mo rspace="0.3em" lspace="0em" stretchy="true" fence="true" form="prefix">{</mml:mo>
				<mml:mtable class="m-matrix">
					<mml:mtr>
						<mml:mtd>
							<mml:msub>
								<mml:mi>I</mml:mi>
								<mml:mrow>
									<mml:mo form="prefix">(</mml:mo>
									<mml:mi>g</mml:mi>
									<mml:mi>s</mml:mi>
									<mml:mi>s</mml:mi>
									<mml:msub>
										<mml:mi>p</mml:mi>
										<mml:mrow>
											<mml:mi>v</mml:mi>
											<mml:mi>h</mml:mi>
										</mml:mrow>
									</mml:msub>
									<mml:mo form="prefix">(</mml:mo>
									<mml:mi>i</mml:mi>
									<mml:mo>,</mml:mo>
									<mml:mi>j</mml:mi>
									<mml:mo form="postfix">)</mml:mo>
									<mml:mn>,0 ,0</mml:mn>
									<mml:mo form="postfix">)</mml:mo>
								</mml:mrow>
							</mml:msub>
							<mml:mo>,</mml:mo>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>g</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:msub>
								<mml:mi>p</mml:mi>
								<mml:mrow>
									<mml:mi>v</mml:mi>
									<mml:mi>h</mml:mi>
								</mml:mrow>
							</mml:msub>
							<mml:mrow>
								<mml:mo form="prefix">(</mml:mo>
								<mml:mi>i</mml:mi>
								<mml:mo>,</mml:mo>
								<mml:mi>j</mml:mi>
								<mml:mo form="postfix">)</mml:mo>
							</mml:mrow>
							<mml:mo>&#x02265;</mml:mo>
							<mml:mn>0</mml:mn>
						</mml:mtd>
					</mml:mtr>
					<mml:mtr>
						<mml:mtd>
							<mml:msub>
								<mml:mi>I</mml:mi>
								<mml:mrow>
									<mml:mo form="prefix">(</mml:mo>
									<mml:mn>0</mml:mn>
									<mml:mo>,</mml:mo>
									<mml:mo>-</mml:mo>
									<mml:mi>g</mml:mi>
									<mml:mi>s</mml:mi>
									<mml:mi>s</mml:mi>
									<mml:msub>
										<mml:mi>p</mml:mi>
										<mml:mrow>
											<mml:mi>v</mml:mi>
											<mml:mi>h</mml:mi>
										</mml:mrow>
									</mml:msub>
									<mml:mo form="prefix">(</mml:mo>
									<mml:mi>i</mml:mi>
									<mml:mo>,</mml:mo>
									<mml:mi>j</mml:mi>
									<mml:mo form="postfix">)</mml:mo>
									<mml:mn>,0</mml:mn>
									<mml:mo form="postfix">)</mml:mo>
								</mml:mrow>
							</mml:msub>
							<mml:mo>,</mml:mo>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>g</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:msub>
								<mml:mi>p</mml:mi>
								<mml:mrow>
									<mml:mi>v</mml:mi>
									<mml:mi>h</mml:mi>
								</mml:mrow>
							</mml:msub>
							<mml:mrow>
								<mml:mo form="prefix">(</mml:mo>
								<mml:mi>i</mml:mi>
								<mml:mo>,</mml:mo>
								<mml:mi>j</mml:mi>
								<mml:mo form="postfix">)</mml:mo>
							</mml:mrow>
							<mml:mo>&#x0003C;</mml:mo>
							<mml:mn>0</mml:mn>
						</mml:mtd>
					</mml:mtr>
				</mml:mtable>
			</mml:mrow>
		</mml:mtd>
		<mml:mtd columnalign="right" style="width: 100%">
			<mml:mspace width="6.0em" />
		</mml:mtd>
	</mml:mtr>
</mml:mtable>
</mml:math>
 </disp-formula>		
		
<fig id="fig02" fig-type="figure" position="float">
					<label>Figure 2:</label>
					<caption>
						<p>Illustration of the GSSP<sub>VH</sub>  idea. Horizontal distances are presented in the upper right part of the plot and vertical distances in the lower left one.</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-10-05-c-figure-02.eps"/>
				</fig>				

        <p>It is worth noting that a point may only be black, red or green; the intensity of the red
or green component may vary, but it is not possible for a point to have both the red and green
components greater than 0.</p>
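        <p>Under the red/green convention above, Eq. (6) amounts to routing non-negative values to the red channel and the magnitudes of negative values to the green channel. A sketch with assumed names, NumPy assumed:</p>
        <preformat>
```python
import numpy as np

def gssp_vh_to_rgb(m):
    """Color encoding of Eq. (6): values of gssp_vh that are at least 0
    go to the red channel, the magnitudes of negative values go to the
    green channel, and blue stays 0, so no point mixes red and green."""
    m = np.asarray(m, dtype=float)
    rgb = np.zeros(m.shape + (3,))
    rgb[..., 0] = np.clip(m, 0.0, 1.0)     # red component
    rgb[..., 1] = np.clip(-m, 0.0, 1.0)    # green component
    return rgb
```
        </preformat>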

        <p>Figure 3b presents both types of GSSP calculated for the gaze sequence shown in Figure 3a.</p>
		
<fig id="fig03" fig-type="figure" position="float">
					<label>Figure 3:</label>
					<caption>
						<p>Both GSSP and GSSP<sub>VH</sub> plots (Figure 3b) for the gaze sequence shown in Figure 3a.</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-10-05-c-figure-03.eps"/>
				</fig>		

        <p>An interesting property of the GSSP<sub>VH</sub> matrix is that it may be used to reconstruct a
scan-path. The only additional information required is the absolute position of one gaze point. Given such
a gaze point g<sub>s</sub>(x<sub>s</sub>, y<sub>s</sub>), we can calculate the absolute position of any other gaze point
g<sub>i</sub>(x<sub>i</sub>, y<sub>i</sub>) using the following formulas:</p>

	  <disp-formula>
	   <label>(7)</label>
	  <mml:math id="m7">
<mml:mtable class="m-equation" displaystyle="true" style="display: block; margin-top: 1.0em; margin-bottom: 2.0em">
	<mml:mtr>
		<mml:mtd>
			<mml:mspace width="6.0em" />
		</mml:mtd>
		<mml:mtd columnalign="left">
			<mml:msub>
				<mml:mi>x</mml:mi>
				<mml:mi>i</mml:mi>
			</mml:msub>
			<mml:mo>=</mml:mo>
			<mml:mrow>
				<mml:mo rspace="0.3em" lspace="0em" stretchy="true" fence="true" form="prefix">{</mml:mo>
				<mml:mtable class="m-matrix">
					<mml:mtr>
						<mml:mtd>
							<mml:msub>
								<mml:mi>x</mml:mi>
								<mml:mi>s</mml:mi>
							</mml:msub>
							<mml:mo>+</mml:mo>
							<mml:mi>g</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:msub>
								<mml:mi>p</mml:mi>
								<mml:mrow>
									<mml:mi>v</mml:mi>
									<mml:mi>h</mml:mi>
								</mml:mrow>
							</mml:msub>
							<mml:mrow>
								<mml:mo form="prefix">(</mml:mo>
								<mml:mi>i</mml:mi>
								<mml:mo>,</mml:mo>
								<mml:mi>s</mml:mi>
								<mml:mo form="postfix">)</mml:mo>
							</mml:mrow>
							<mml:mo>,</mml:mo>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>i</mml:mi>
							<mml:mo>&#x02265;</mml:mo>
							<mml:mi>s</mml:mi>
						</mml:mtd>
					</mml:mtr>
					<mml:mtr>
						<mml:mtd>
							<mml:msub>
								<mml:mi>x</mml:mi>
								<mml:mi>s</mml:mi>
							</mml:msub>
							<mml:mo>-</mml:mo>
							<mml:mi>g</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:msub>
								<mml:mi>p</mml:mi>
								<mml:mrow>
									<mml:mi>v</mml:mi>
									<mml:mi>h</mml:mi>
								</mml:mrow>
							</mml:msub>
							<mml:mrow>
								<mml:mo form="prefix">(</mml:mo>
								<mml:mi>i</mml:mi>
								<mml:mo>,</mml:mo>
								<mml:mi>s</mml:mi>
								<mml:mo form="postfix">)</mml:mo>
							</mml:mrow>
							<mml:mo>,</mml:mo>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>i</mml:mi>
							<mml:mo>&#x0003C;</mml:mo>
							<mml:mi>s</mml:mi>
						</mml:mtd>
					</mml:mtr>
				</mml:mtable>
			</mml:mrow>
		</mml:mtd>
		<mml:mtd columnalign="right" style="width: 100%">
			<mml:mspace width="6.0em" />
		</mml:mtd>
	</mml:mtr>
</mml:mtable>
</mml:math>
 </disp-formula>

	  <disp-formula>
	   <label>(8)</label>
	  <mml:math id="m8">
<mml:mtable class="m-equation" displaystyle="true" style="display: block; margin-top: 1.0em; margin-bottom: 2.0em">
	<mml:mtr>
		<mml:mtd>
			<mml:mspace width="6.0em" />
		</mml:mtd>
		<mml:mtd columnalign="left">
			<mml:msub>
				<mml:mi>y</mml:mi>
				<mml:mi>i</mml:mi>
			</mml:msub>
			<mml:mo>=</mml:mo>
			<mml:mrow>
				<mml:mo rspace="0.3em" lspace="0em" stretchy="true" fence="true" form="prefix">{</mml:mo>
				<mml:mtable class="m-matrix">
					<mml:mtr>
						<mml:mtd>
							<mml:msub>
								<mml:mi>y</mml:mi>
								<mml:mi>s</mml:mi>
							</mml:msub>
							<mml:mo>+</mml:mo>
							<mml:mi>g</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:msub>
								<mml:mi>p</mml:mi>
								<mml:mrow>
									<mml:mi>v</mml:mi>
									<mml:mi>h</mml:mi>
								</mml:mrow>
							</mml:msub>
							<mml:mrow>
								<mml:mo form="prefix">(</mml:mo>
								<mml:mi>s</mml:mi>
								<mml:mo>,</mml:mo>
								<mml:mi>i</mml:mi>
								<mml:mo form="postfix">)</mml:mo>
							</mml:mrow>
							<mml:mo>,</mml:mo>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>i</mml:mi>
							<mml:mo>&#x02265;</mml:mo>
							<mml:mi>s</mml:mi>
						</mml:mtd>
					</mml:mtr>
					<mml:mtr>
						<mml:mtd>
							<mml:msub>
								<mml:mi>y</mml:mi>
								<mml:mi>s</mml:mi>
							</mml:msub>
							<mml:mo>-</mml:mo>
							<mml:mi>g</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:msub>
								<mml:mi>p</mml:mi>
								<mml:mrow>
									<mml:mi>v</mml:mi>
									<mml:mi>h</mml:mi>
								</mml:mrow>
							</mml:msub>
							<mml:mrow>
								<mml:mo form="prefix">(</mml:mo>
								<mml:mi>s</mml:mi>
								<mml:mo>,</mml:mo>
								<mml:mi>i</mml:mi>
								<mml:mo form="postfix">)</mml:mo>
							</mml:mrow>
							<mml:mo>,</mml:mo>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>i</mml:mi>
							<mml:mo>&#x0003C;</mml:mo>
							<mml:mi>s</mml:mi>
						</mml:mtd>
					</mml:mtr>
				</mml:mtable>
			</mml:mrow>
		</mml:mtd>
		<mml:mtd columnalign="right" style="width: 100%">
			<mml:mspace width="6.0em" />
		</mml:mtd>
	</mml:mtr>
</mml:mtable>
</mml:math>
 </disp-formula>							
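        <p>A literal sketch of Eqs. (7) and (8) is given below. Because the GSSP<sub>VH</sub> values are normalized, we multiply them back by x<sub>max</sub> and y<sub>max</sub> to obtain pixel coordinates; this denormalization step, like the function name and the sample matrix, is our own assumption:</p>
        <preformat>
```python
def reconstruct(m, s, xs, ys, xmax, ymax):
    """Rebuild absolute gaze positions from a GSSP_VH matrix `m`,
    given one known gaze point with index s located at (xs, ys)."""
    pts = []
    for i in range(len(m)):
        if i >= s:
            x = xs + m[i][s] * xmax    # Eq. (7), branch i >= s
            y = ys + m[s][i] * ymax    # Eq. (8), branch i >= s
        else:
            x = xs - m[i][s] * xmax    # Eq. (7), branch s > i
            y = ys - m[s][i] * ymax    # Eq. (8), branch s > i
        pts.append((x, y))
    return pts
```
        </preformat>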
      </sec>
	  
      <sec id="S2b">
        <title>Quantitative Metrics for GSSP</title>
		
        <p>Analysis of the above-described plots may reveal much interesting information, as
will be shown in later parts of the paper. However, comparing several such plots and
assessing them by visual inspection alone may be difficult, so we propose several
quantitative metrics for GSSP comparison. Since the GSSP is in fact an image, the metrics stem
from image analysis algorithms.</p>

        <p>The calculation of the various characteristics of GSSP images is based on the
Co-occurrence Matrix (CM). The CM characterizes the texture of an image by determining how often
pairs of pixels with specific values, in a specified spatial relationship, occur in the image [
        <xref ref-type="bibr" rid="R20">7</xref>
		].
The size of the CM is equal to the number of distinct values derived from the image, so its
calculation must start with a discretization of the distances encoded in the GSSP. All GSSP points
must be recalculated from the continuous range 0..1 to a new
matrix with integer values in the range 0..K.</p>

	  <disp-formula>
	   <label>(9)</label>
	  <mml:math id="m9">
<mml:mtable class="m-equation" displaystyle="true" style="display: block; margin-top: 1.0em; margin-bottom: 2.0em">
	<mml:mtr>
		<mml:mtd>
			<mml:mspace width="6.0em" />
		</mml:mtd>
		<mml:mtd columnalign="left">
			<mml:mi>I</mml:mi>
			<mml:mrow>
				<mml:mo form="prefix">(</mml:mo>
				<mml:mi>x</mml:mi>
				<mml:mo>,</mml:mo>
				<mml:mi>y</mml:mi>
				<mml:mo form="postfix">)</mml:mo>
			</mml:mrow>
			<mml:mo>=</mml:mo>
			<mml:mrow>
				<mml:mo form="prefix">&#x0230A;</mml:mo>
				<mml:mrow>
					<mml:mi>g</mml:mi>
					<mml:mi>s</mml:mi>
					<mml:mi>s</mml:mi>
					<mml:mi>p</mml:mi>
					<mml:mrow>
						<mml:mo form="prefix">(</mml:mo>
						<mml:mi>x</mml:mi>
						<mml:mo>,</mml:mo>
						<mml:mi>y</mml:mi>
						<mml:mo form="postfix">)</mml:mo>
					</mml:mrow>
					<mml:mo>*</mml:mo>
					<mml:mi>K</mml:mi>
				</mml:mrow>
				<mml:mo form="postfix">&#x0230B;</mml:mo>
			</mml:mrow>
		</mml:mtd>
		<mml:mtd columnalign="right" style="width: 100%">
			<mml:mspace width="6.0em" />
		</mml:mtd>
	</mml:mtr>
</mml:mtable>
</mml:math>
 </disp-formula>

        <p>where I(x,y) represents the GSSP with recalculated values. Subsequently, a CM of size (K + 1, K + 1)
is determined for every pair of values a = 0...K and b = 0...K and for a given offset
d = (dx, dy) representing their spatial relationship. For the purposes of this research, the value
of K was arbitrarily set to 10.</p>
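        <p>The discretization of Eq. (9) and the subsequent co-occurrence count can be sketched as follows (our own code, NumPy assumed; the boundary handling for the offset d = (dx, dy) is a straightforward interpretation):</p>
        <preformat>
```python
import numpy as np

def cooccurrence(gssp_matrix, K=10, offset=(1, 0)):
    """Discretize a GSSP into integer levels 0..K (Eq. 9), then count
    how often level pairs (a, b) co-occur at pixel offset d = (dx, dy)."""
    levels = np.floor(np.asarray(gssp_matrix) * K).astype(int)  # Eq. (9)
    dx, dy = offset
    n = levels.shape[0]
    cm = np.zeros((K + 1, K + 1), dtype=int)
    for x in range(n):
        for y in range(n):
            x2, y2 = x + dx, y + dy
            if x2 >= 0 and n > x2 and y2 >= 0 and n > y2:
                cm[levels[x, y], levels[x2, y2]] += 1
    return cm
```
        </preformat>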

        <p>In the case of GSSP<sub>VH</sub>, CM matrices are calculated separately for the horizontal (upper-right)
and vertical (lower-left) parts of the plot and are denoted by CM<sup>H</sup> and CM<sup>V</sup>, respectively.</p>

	  <disp-formula>
	   <label>(10)</label>
	  <mml:math id="m10">
<mml:mtable class="m-equation" displaystyle="true" style="display: block; margin-top: 1.0em; margin-bottom: 2.0em">
	<mml:mtr>
		<mml:mtd>
			<mml:mspace width="6.0em" />
		</mml:mtd>
		<mml:mtd columnalign="left">
			<mml:mi>c</mml:mi>
			<mml:msubsup>
				<mml:mi>m</mml:mi>
				<mml:mrow>
					<mml:mo>&#x02146;</mml:mo>
					<mml:mi>x</mml:mi>
					<mml:mo>,</mml:mo>
					<mml:mo>&#x02146;</mml:mo>
					<mml:mi>y</mml:mi>
				</mml:mrow>
				<mml:mi>H</mml:mi>
			</mml:msubsup>
			<mml:mrow>
				<mml:mo form="prefix">(</mml:mo>
				<mml:mi>a</mml:mi>
				<mml:mo>,</mml:mo>
				<mml:mi>b</mml:mi>
				<mml:mo form="postfix">)</mml:mo>
			</mml:mrow>
			<mml:mo>=</mml:mo>
			<mml:mstyle displaystyle="true">
				<mml:munderover>
					<mml:mo>&#x02211;</mml:mo>
					<mml:mrow>
						<mml:mi>x</mml:mi>
						<mml:mo>=</mml:mo>
						<mml:mn>1</mml:mn>
					</mml:mrow>
					<mml:mrow>
						<mml:mi>n</mml:mi>
						<mml:mo>-</mml:mo>
						<mml:mn>1</mml:mn>
					</mml:mrow>
				</mml:munderover>
			</mml:mstyle>
			<mml:mstyle displaystyle="true">
				<mml:munderover>
					<mml:mo>&#x02211;</mml:mo>
					<mml:mrow>
						<mml:mi>y</mml:mi>
						<mml:mo>=</mml:mo>
						<mml:mi>x</mml:mi>
						<mml:mo>+</mml:mo>
						<mml:mn>1</mml:mn>
					</mml:mrow>
					<mml:mi>n</mml:mi>
				</mml:munderover>
			</mml:mstyle>
			<mml:mrow>
				<mml:mo rspace="0.3em" lspace="0em" stretchy="true" fence="true" form="prefix">{</mml:mo>
				<mml:mtable class="m-matrix">
					<mml:mtr>
						<mml:mtd>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>I</mml:mi>
							<mml:mrow>
								<mml:mo form="prefix">(</mml:mo>
								<mml:mi>x</mml:mi>
								<mml:mo>,</mml:mo>
								<mml:mi>y</mml:mi>
								<mml:mo form="postfix">)</mml:mo>
							</mml:mrow>
							<mml:mo>=</mml:mo>
							<mml:mi>a</mml:mi>
						</mml:mtd>
					</mml:mtr>
					<mml:mtr>
						<mml:mtd>
							<mml:mn>1</mml:mn>
							<mml:mo>,</mml:mo>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>a</mml:mi>
							<mml:mi>n</mml:mi>
							<mml:mi>d</mml:mi>
						</mml:mtd>
					</mml:mtr>
					<mml:mtr>
						<mml:mtd>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>I</mml:mi>
							<mml:mrow>
								<mml:mo form="prefix">(</mml:mo>
								<mml:mi>x</mml:mi>
								<mml:mo>+</mml:mo>
								<mml:mo>&#x02146;</mml:mo>
								<mml:mi>x</mml:mi>
								<mml:mo>,</mml:mo>
								<mml:mi>y</mml:mi>
								<mml:mo>+</mml:mo>
								<mml:mo>&#x02146;</mml:mo>
								<mml:mi>y</mml:mi>
								<mml:mo form="postfix">)</mml:mo>
							</mml:mrow>
							<mml:mo>=</mml:mo>
							<mml:mi>b</mml:mi>
						</mml:mtd>
					</mml:mtr>
					<mml:mtr>
						<mml:mtd>
							<mml:mn>0</mml:mn>
							<mml:mo>,</mml:mo>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>o</mml:mi>
							<mml:mi>t</mml:mi>
							<mml:mi>h</mml:mi>
							<mml:mi>e</mml:mi>
							<mml:mi>r</mml:mi>
							<mml:mi>w</mml:mi>
							<mml:mi>i</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:mi>e</mml:mi>
						</mml:mtd>
					</mml:mtr>
				</mml:mtable>
			</mml:mrow>
		</mml:mtd>
		<mml:mtd columnalign="right" style="width: 100%">
			<mml:mspace width="6.0em" />
		</mml:mtd>
	</mml:mtr>
</mml:mtable>
</mml:math>
 </disp-formula>

	  <disp-formula>
	   <label>(11)</label>
	  <mml:math id="m11">
<mml:mtable class="m-equation" displaystyle="true" style="display: block; margin-top: 1.0em; margin-bottom: 2.0em">
	<mml:mtr>
		<mml:mtd>
			<mml:mspace width="6.0em" />
		</mml:mtd>
		<mml:mtd columnalign="left">
			<mml:mi>c</mml:mi>
			<mml:msubsup>
				<mml:mi>m</mml:mi>
				<mml:mrow>
					<mml:mo>&#x02146;</mml:mo>
					<mml:mi>x</mml:mi>
					<mml:mo>,</mml:mo>
					<mml:mo>&#x02146;</mml:mo>
					<mml:mi>y</mml:mi>
				</mml:mrow>
				<mml:mi>V</mml:mi>
			</mml:msubsup>
			<mml:mrow>
				<mml:mo form="prefix">(</mml:mo>
				<mml:mi>a</mml:mi>
				<mml:mo>,</mml:mo>
				<mml:mi>b</mml:mi>
				<mml:mo form="postfix">)</mml:mo>
			</mml:mrow>
			<mml:mo>=</mml:mo>
			<mml:mstyle displaystyle="true">
				<mml:munderover>
					<mml:mo>&#x02211;</mml:mo>
					<mml:mrow>
						<mml:mi>y</mml:mi>
						<mml:mo>=</mml:mo>
						<mml:mn>1</mml:mn>
					</mml:mrow>
					<mml:mrow>
						<mml:mi>n</mml:mi>
						<mml:mo>-</mml:mo>
						<mml:mn>1</mml:mn>
					</mml:mrow>
				</mml:munderover>
			</mml:mstyle>
			<mml:mstyle displaystyle="true">
				<mml:munderover>
					<mml:mo>&#x02211;</mml:mo>
					<mml:mrow>
						<mml:mi>x</mml:mi>
						<mml:mo>=</mml:mo>
						<mml:mi>y</mml:mi>
						<mml:mo>+</mml:mo>
						<mml:mn>1</mml:mn>
					</mml:mrow>
					<mml:mi>n</mml:mi>
				</mml:munderover>
			</mml:mstyle>
			<mml:mrow>
				<mml:mo rspace="0.3em" lspace="0em" stretchy="true" fence="true" form="prefix">{</mml:mo>
				<mml:mtable class="m-matrix">
					<mml:mtr>
						<mml:mtd>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>I</mml:mi>
							<mml:mrow>
								<mml:mo form="prefix">(</mml:mo>
								<mml:mi>x</mml:mi>
								<mml:mo>,</mml:mo>
								<mml:mi>y</mml:mi>
								<mml:mo form="postfix">)</mml:mo>
							</mml:mrow>
							<mml:mo>=</mml:mo>
							<mml:mi>a</mml:mi>
						</mml:mtd>
					</mml:mtr>
					<mml:mtr>
						<mml:mtd>
							<mml:mn>1</mml:mn>
							<mml:mo>,</mml:mo>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>a</mml:mi>
							<mml:mi>n</mml:mi>
							<mml:mi>d</mml:mi>
						</mml:mtd>
					</mml:mtr>
					<mml:mtr>
						<mml:mtd>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>I</mml:mi>
							<mml:mrow>
								<mml:mo form="prefix">(</mml:mo>
								<mml:mi>x</mml:mi>
								<mml:mo>+</mml:mo>
								<mml:mo>&#x02146;</mml:mo>
								<mml:mi>x</mml:mi>
								<mml:mo>,</mml:mo>
								<mml:mi>y</mml:mi>
								<mml:mo>+</mml:mo>
								<mml:mo>&#x02146;</mml:mo>
								<mml:mi>y</mml:mi>
								<mml:mo form="postfix">)</mml:mo>
							</mml:mrow>
							<mml:mo>=</mml:mo>
							<mml:mi>b</mml:mi>
						</mml:mtd>
					</mml:mtr>
					<mml:mtr>
						<mml:mtd>
							<mml:mn>0</mml:mn>
							<mml:mo>,</mml:mo>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>o</mml:mi>
							<mml:mi>t</mml:mi>
							<mml:mi>h</mml:mi>
							<mml:mi>e</mml:mi>
							<mml:mi>r</mml:mi>
							<mml:mi>w</mml:mi>
							<mml:mi>i</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:mi>e</mml:mi>
						</mml:mtd>
					</mml:mtr>
				</mml:mtable>
			</mml:mrow>
		</mml:mtd>
		<mml:mtd columnalign="right" style="width: 100%">
			<mml:mspace width="6.0em" />
		</mml:mtd>
	</mml:mtr>
</mml:mtable>
</mml:math>
 </disp-formula>
							
          <p>Co-occurrence matrices created in this way may serve to compute various image-related
metrics.</p>
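<p>The discretization of Equation (9) and the co-occurrence computation of Equations (10) and (11) can be sketched as follows. This is a minimal NumPy illustration under our own naming, not a published implementation; <monospace>gssp</monospace> is assumed to be an n-by-n array with values in [0, 1].</p>
<preformat>
```python
import numpy as np

def discretize(gssp, K=10):
    # Eq. (9): map GSSP values from the continuous range [0, 1]
    # to integer levels 0..K
    return np.floor(gssp * K).astype(int)

def cooccurrence(I, dx, dy, K=10, part="H"):
    # Eqs. (10)/(11): count value pairs (a, b) at offset (dx, dy),
    # taken over the upper-right ("H") or lower-left ("V") triangle of I
    n = I.shape[0]
    cm = np.zeros((K + 1, K + 1), dtype=int)
    for x in range(n - dx):
        for y in range(n - dy):
            inside = y > x if part == "H" else x > y
            if inside:
                cm[I[x, y], I[x + dx, y + dy]] += 1
    return cm
```
</preformat>
<p>For the plain GSSP the two triangles carry the same information, since the distance matrix is symmetric; the distinction matters only for the GSSP<sub>VH</sub>.</p>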
      </sec>
	  
        <sec id="S2ba">
          <title>Homogeneity</title>	  

          <p>The homogeneity of an image indicates to what extent nearby gazes are in
similar locations. It is high when values in the CM concentrate along the diagonal, meaning that
there are many pixel pairs with the same or very similar values. The range of homogeneity is [0, 1]. If
an image is constant then homogeneity equals 1.</p>

	  <disp-formula>
	   <label>(12)</label>
	  <mml:math id="m12">
<mml:mtable class="m-equation" displaystyle="true" style="display: block; margin-top: 1.0em; margin-bottom: 2.0em">
	<mml:mtr>
		<mml:mtd>
			<mml:mspace width="6.0em" />
		</mml:mtd>
		<mml:mtd columnalign="left">
			<mml:mi>h</mml:mi>
			<mml:mi>o</mml:mi>
			<mml:mi>m</mml:mi>
			<mml:mi>o</mml:mi>
			<mml:mi>g</mml:mi>
			<mml:mi>e</mml:mi>
			<mml:mi>n</mml:mi>
			<mml:mi>e</mml:mi>
			<mml:mi>i</mml:mi>
			<mml:mi>t</mml:mi>
			<mml:msub>
				<mml:mi>y</mml:mi>
				<mml:mrow>
					<mml:mo>&#x02146;</mml:mo>
					<mml:mi>x</mml:mi>
					<mml:mo>,</mml:mo>
					<mml:mo>&#x02146;</mml:mo>
					<mml:mi>y</mml:mi>
				</mml:mrow>
			</mml:msub>
			<mml:mo>=</mml:mo>
			<mml:mstyle displaystyle="true">
				<mml:munderover>
					<mml:mo>&#x02211;</mml:mo>
					<mml:mrow>
						<mml:mi>i</mml:mi>
						<mml:mo>=</mml:mo>
						<mml:mn>0</mml:mn>
					</mml:mrow>
					<mml:mi>K</mml:mi>
				</mml:munderover>
			</mml:mstyle>
			<mml:mstyle displaystyle="true">
				<mml:munderover>
					<mml:mo>&#x02211;</mml:mo>
					<mml:mrow>
						<mml:mi>j</mml:mi>
						<mml:mo>=</mml:mo>
						<mml:mn>0</mml:mn>
					</mml:mrow>
					<mml:mi>K</mml:mi>
				</mml:munderover>
			</mml:mstyle>
			<mml:mfrac linethickness="1">
				<mml:mrow>
					<mml:mi>c</mml:mi>
					<mml:msub>
						<mml:mi>m</mml:mi>
						<mml:mrow>
							<mml:mo>&#x02146;</mml:mo>
							<mml:mi>x</mml:mi>
							<mml:mo>,</mml:mo>
							<mml:mo>&#x02146;</mml:mo>
							<mml:mi>y</mml:mi>
						</mml:mrow>
					</mml:msub>
					<mml:mrow>
						<mml:mo form="prefix">(</mml:mo>
						<mml:mi>i</mml:mi>
						<mml:mo>,</mml:mo>
						<mml:mi>j</mml:mi>
						<mml:mo form="postfix">)</mml:mo>
					</mml:mrow>
				</mml:mrow>
				<mml:mrow>
					<mml:mn>1</mml:mn>
					<mml:mo>+</mml:mo>
					<mml:mo>|</mml:mo>
					<mml:mi>i</mml:mi>
					<mml:mo>-</mml:mo>
					<mml:mi>j</mml:mi>
					<mml:mo>|</mml:mo>
				</mml:mrow>
			</mml:mfrac>
		</mml:mtd>
		<mml:mtd columnalign="right" style="width: 100%">
			<mml:mspace width="6.0em" />
		</mml:mtd>
	</mml:mtr>
</mml:mtable>
</mml:math>
 </disp-formula>
        </sec>

        <sec id="S2bb">
          <title>Contrast</title>
		  
          <p>The contrast is a difference moment of the CM and measures the amount of local
variation in an image. If neighboring pixels have similar values, the contrast in the
image is low. Therefore, the contrast is sensitive to long jumps from one gaze point to another.
The range of contrast is [0, K&#x00B2;], and contrast is 0 for a constant image. The contrast is
inversely related to homogeneity.</p>

	  <disp-formula>
	   <label>(13)</label>
	  <mml:math id="m13">
<mml:mtable class="m-equation" displaystyle="true" style="display: block; margin-top: 1.0em; margin-bottom: 2.0em">
	<mml:mtr>
		<mml:mtd>
			<mml:mspace width="6.0em" />
		</mml:mtd>
		<mml:mtd columnalign="left">
			<mml:mi>c</mml:mi>
			<mml:mi>o</mml:mi>
			<mml:mi>n</mml:mi>
			<mml:mi>t</mml:mi>
			<mml:mi>r</mml:mi>
			<mml:mi>a</mml:mi>
			<mml:mi>s</mml:mi>
			<mml:msub>
				<mml:mi>t</mml:mi>
				<mml:mrow>
					<mml:mo>&#x02146;</mml:mo>
					<mml:mi>x</mml:mi>
					<mml:mo>,</mml:mo>
					<mml:mo>&#x02146;</mml:mo>
					<mml:mi>y</mml:mi>
				</mml:mrow>
			</mml:msub>
			<mml:mo>=</mml:mo>
			<mml:mstyle displaystyle="true">
				<mml:munderover>
					<mml:mo>&#x02211;</mml:mo>
					<mml:mrow>
						<mml:mi>i</mml:mi>
						<mml:mo>=</mml:mo>
						<mml:mn>0</mml:mn>
					</mml:mrow>
					<mml:mi>K</mml:mi>
				</mml:munderover>
			</mml:mstyle>
			<mml:mstyle displaystyle="true">
				<mml:munderover>
					<mml:mo>&#x02211;</mml:mo>
					<mml:mrow>
						<mml:mi>j</mml:mi>
						<mml:mo>=</mml:mo>
						<mml:mn>0</mml:mn>
					</mml:mrow>
					<mml:mi>K</mml:mi>
				</mml:munderover>
			</mml:mstyle>
			<mml:msup>
				<mml:mrow>
					<mml:mo form="prefix">(</mml:mo>
					<mml:mi>i</mml:mi>
					<mml:mo>-</mml:mo>
					<mml:mi>j</mml:mi>
					<mml:mo form="postfix">)</mml:mo>
				</mml:mrow>
				<mml:mn>2</mml:mn>
			</mml:msup>
			<mml:mi>c</mml:mi>
			<mml:msub>
				<mml:mi>m</mml:mi>
				<mml:mrow>
					<mml:mo>&#x02146;</mml:mo>
					<mml:mi>x</mml:mi>
					<mml:mo>,</mml:mo>
					<mml:mo>&#x02146;</mml:mo>
					<mml:mi>y</mml:mi>
				</mml:mrow>
			</mml:msub>
			<mml:mrow>
				<mml:mo form="prefix">(</mml:mo>
				<mml:mi>i</mml:mi>
				<mml:mo>,</mml:mo>
				<mml:mi>j</mml:mi>
				<mml:mo form="postfix">)</mml:mo>
			</mml:mrow>
		</mml:mtd>
		<mml:mtd columnalign="right" style="width: 100%">
			<mml:mspace width="6.0em" />
		</mml:mtd>
	</mml:mtr>
</mml:mtable>
</mml:math>
 </disp-formula>	
        </sec>

        <sec id="S2bc">
          <title>Uniformity</title>
		  
          <p>Uniformity (also called energy) measures repetitions of gaze pairs. It is high when the
GSSP contains similar areas, which means that the same pairs of values in the same
arrangement appear repeatedly in the image. It is low when there are no dominant pairs and
the CM contains a large number of small entries. The range of uniformity is [0, 1], and it is
1 for a constant image.</p>

	  <disp-formula>
	   <label>(14)</label>
	  <mml:math id="m14">
<mml:mtable class="m-equation" displaystyle="true" style="display: block; margin-top: 1.0em; margin-bottom: 2.0em">
	<mml:mtr>
		<mml:mtd>
			<mml:mspace width="6.0em" />
		</mml:mtd>
		<mml:mtd columnalign="left">
			<mml:mi>u</mml:mi>
			<mml:mi>n</mml:mi>
			<mml:mi>i</mml:mi>
			<mml:mi>f</mml:mi>
			<mml:mi>o</mml:mi>
			<mml:mi>r</mml:mi>
			<mml:mi>m</mml:mi>
			<mml:mi>i</mml:mi>
			<mml:mi>t</mml:mi>
			<mml:msub>
				<mml:mi>y</mml:mi>
				<mml:mrow>
					<mml:mo>&#x02146;</mml:mo>
					<mml:mi>x</mml:mi>
					<mml:mo>,</mml:mo>
					<mml:mo>&#x02146;</mml:mo>
					<mml:mi>y</mml:mi>
				</mml:mrow>
			</mml:msub>
			<mml:mo>=</mml:mo>
			<mml:mstyle displaystyle="true">
				<mml:munderover>
					<mml:mo>&#x02211;</mml:mo>
					<mml:mrow>
						<mml:mi>i</mml:mi>
						<mml:mo>=</mml:mo>
						<mml:mn>0</mml:mn>
					</mml:mrow>
					<mml:mi>K</mml:mi>
				</mml:munderover>
			</mml:mstyle>
			<mml:mstyle displaystyle="true">
				<mml:munderover>
					<mml:mo>&#x02211;</mml:mo>
					<mml:mrow>
						<mml:mi>j</mml:mi>
						<mml:mo>=</mml:mo>
						<mml:mn>0</mml:mn>
					</mml:mrow>
					<mml:mi>K</mml:mi>
				</mml:munderover>
			</mml:mstyle>
			<mml:msup>
				<mml:mrow>
					<mml:mo form="prefix">(</mml:mo>
					<mml:mi>c</mml:mi>
					<mml:msub>
						<mml:mi>m</mml:mi>
						<mml:mrow>
							<mml:mo>&#x02146;</mml:mo>
							<mml:mi>x</mml:mi>
							<mml:mo>,</mml:mo>
							<mml:mo>&#x02146;</mml:mo>
							<mml:mi>y</mml:mi>
						</mml:mrow>
					</mml:msub>
					<mml:mo form="prefix">(</mml:mo>
					<mml:mi>i</mml:mi>
					<mml:mo>,</mml:mo>
					<mml:mi>j</mml:mi>
					<mml:mo form="postfix">)</mml:mo>
					<mml:mo form="postfix">)</mml:mo>
				</mml:mrow>
				<mml:mn>2</mml:mn>
			</mml:msup>
		</mml:mtd>
		<mml:mtd columnalign="right" style="width: 100%">
			<mml:mspace width="6.0em" />
		</mml:mtd>
	</mml:mtr>
</mml:mtable>
</mml:math>
 </disp-formula>

          <p>For the purposes of the presented research all these metrics - homogeneity, contrast and
uniformity - were evaluated for three offsets: vertical (0,1), horizontal (1,0)
and diagonal (1,1).</p>
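<p>Assuming the co-occurrence matrix is first normalized so that its entries sum to 1 (which the stated [0, 1] ranges require), the three metrics of Equations (12)-(14) can be computed as in the following sketch; the code is our illustration, not the authors' implementation.</p>
<preformat>
```python
import numpy as np

def glcm_metrics(cm):
    # Eqs. (12)-(14): homogeneity, contrast and uniformity of a
    # co-occurrence matrix; entries are normalized to sum to 1 first
    p = cm / cm.sum()
    i, j = np.indices(p.shape)
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    contrast = ((i - j) ** 2 * p).sum()
    uniformity = (p ** 2).sum()
    return homogeneity, contrast, uniformity
```
</preformat>
<p>For a constant image the whole mass of the CM lies in a single diagonal cell, giving homogeneity 1, contrast 0 and uniformity 1, in line with the ranges quoted above.</p>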
        </sec>
      </sec>

	
    <sec id="S3">
      <title>Experiments and Results</title>
	  
      <p>The usefulness of the GSSP was verified in terms of both visual exploration of registered
eye movements and their quantification with the aforementioned metrics. In the
first case the GSSP may prove useful for quickly identifying problems or for revealing
characteristics of eye movement patterns that are not easily obtainable from a scan-path or
heat map.</p>

      <sec id="S3a">
        <title>Outlier detection</title>
		
        <p>Outliers are visible in the GSSP plot as a bright cross with a black square on the diagonal.
One look at the GSSP thus gives information about the overall signal quality. Figure 4 presents a
plot with one obvious outlier in its center and three more possible outliers. Of course,
evident outliers may be removed by means of simple analytic methods based on velocity
thresholds [
        <xref ref-type="bibr" rid="R21">8</xref>
		], however the GSSP may be useful for examining the remaining scan-path for
less obvious outliers.</p>
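<p>The paper identifies outliers visually; a simple programmatic counterpart (our heuristic sketch, not the authors' method) flags samples whose row of the GSSP is unusually bright, i.e. whose mean distance to all other samples lies far above average.</p>
<preformat>
```python
import numpy as np

def flag_outliers(gssp, z=3.0):
    # Flag samples whose mean GSSP distance to all other samples is
    # more than z standard deviations above the overall mean; such
    # samples appear as bright crosses in the plot
    row_mean = gssp.mean(axis=1)
    cut = row_mean.mean() + z * row_mean.std()
    return np.where(row_mean > cut)[0]
```
</preformat>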

<fig id="fig04" fig-type="figure" position="float">
					<label>Figure 4:</label>
					<caption>
						<p>An example of the GSSP with visible outliers. The white cross with a black square on the diagonal reveals several gaze points that are situated far away from all the other points and may be treated as outliers. The three darker crosses show other possible outliers.</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-10-05-c-figure-04.eps"/>
				</fig>
      </sec>
	  
      <sec id="S3b">
        <title>Distinguishing regions of interest</title>
		
        <p>Eye movement analysis is usually based on a fixation-saccade sequence extracted from
the registered signal. It has been shown that the structure of such a sequence is sensitive to the
fixation detection algorithm&#x2019;s settings ([
        <xref ref-type="bibr" rid="R22 R23">9, 10</xref>
		]), and it is difficult to check visually whether the settings used are
adequate. The GSSP makes it possible to present the detailed characteristics of fixations and
saccades in 2D space on a single plot. We applied such a plot to estimate how homogeneous
fixations are. On the one hand, shades within a fixation&#x2019;s square reveal that the gaze points
constituting the fixation are scattered. On the other hand, if two subsequent fixations appear in a
similar place, they are visible as one big square. This gives the opportunity to observe a scan-path
on a higher level - based on regions of interest instead of separate fixations. An example is
presented in Figure 5 on a plot with seven fixations and only two regions of interest.</p>

<fig id="fig05" fig-type="figure" position="float">
					<label>Figure 5:</label>
					<caption>
						<p>The GSSP with fixations (detected in the signal by the IDT algorithm) shown on the diagonal as white lines. Despite the seven fixations detected, three dark squares are easily distinguishable along the diagonal, indicating three regions of interest (A, B and C). Additionally, dark rectangles off the diagonal show that regions A and C are very close to each other, so they represent the same area of interest. It means that the observer looked at the first region (A), subsequently looked at the other (B) and then returned to the first one, as region C is located in the same place as region A.</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-10-05-c-figure-05.eps"/>
				</fig>
      </sec>
	  
      <sec id="S3c">
        <title>Recurrence of fixations</title>
		
        <p>One of the main aims of recurrence plots is to reveal repeated patterns existing
in time series. In the presented solution this feature was applied to the analysis of registered
gaze points. Figure 5 presents a GSSP with a recurrence of gaze point placements.
The recurrences are represented by dark rectangles appearing off the diagonal line, which means that
two groups of gaze points are located in the same place. In contrast to a classic recurrence plot,
the proposed approach reveals not only repeated gaze point positions but also - thanks to
the coloring mechanism - allows the relative positions of the remaining points to be estimated.
Additionally, the applied strategy of gaze presentation makes a simultaneous
comparison of the durations of recurring fixations possible.</p>
      </sec>
	  
      <sec id="S3d">
        <title>Smooth pursuits visualization</title>
		
        <p>Smooth pursuits are eye movements much slower than saccades, occurring when a
person follows a slowly moving object with the eyes. Unfortunately, algorithms commonly
used for fixation detection frequently misclassify smooth pursuits as fixations or
saccades [
        <xref ref-type="bibr" rid="R24">11</xref>
		]. Smooth pursuits are also difficult to visualize. We conducted an experiment showing
that, based on the GSSP, it is easy to distinguish smooth pursuits from fixations, because the
edges of the rectangles representing the former are smoother (Figure 6).</p>

<fig id="fig06" fig-type="figure" position="float">
					<label>Figure 6:</label>
					<caption>
						<p>The GSSP of a smooth pursuit following a point wandering around the screen (the scan-path is visible above). The whole distance was covered twice. Black lines above and below the diagonal represent recurrent recordings (the yellow arrow points to one of the lines). The whitest points represent distances between gazes recorded in the upper left corner and the lower right one. There are no squares with sharp edges; all edges are blurred.</p>
					</caption>
					<graphic id="graph06" xlink:href="jemr-10-05-c-figure-06.eps"/>
				</fig>
      </sec>
	  
      <sec id="S3e">
        <title>Distinguishing focal and ambient patterns</title>
		
        <p>Two modes of processing visual information are commonly known: focal and
ambient processing [
        <xref ref-type="bibr" rid="R25">12</xref>
		], [
        <xref ref-type="bibr" rid="R26">13</xref>
		], which serve two different tasks: exploration
and inspection. Short fixations followed by long saccades are characteristic of
ambient processing, while longer fixations followed by shorter saccades are indicative
of focal processing [
        <xref ref-type="bibr" rid="R27">14</xref>
		]. The visualization of eye movements that takes ambient/focal
processing into account is not a simple task. One of the attempts to deal with this issue may be
found in [
        <xref ref-type="bibr" rid="R14">1</xref>
		], where ambient and focal fixations were distinguished by the use of different
coloring.</p>

        <p>Assuming that the GSSP is a good tool for the ambient-focal distinction, we
undertook an appropriate experiment. Figure 7 shows two example plots. One of them is
an example of ambient processing - a person looking for something in the scene. The other
is a typical example of focal processing - only some interesting objects are carefully inspected.</p>

<fig id="fig07" fig-type="figure" position="float">
					<label>Figure 7:</label>
					<caption>
						<p>Two GSSPs showing the ambient (left) and the focal (right) processing of an image. There are many short fixations (small black squares along the diagonal) and long saccades (white rectangles adjacent to the fixation squares) in the left plot, while there are only a few big black squares with short (dark) saccades in the right one.</p>
					</caption>
					<graphic id="graph07" xlink:href="jemr-10-05-c-figure-07.eps"/>
				</fig>
      </sec>
	  
      <sec id="S3f">
        <title>Searching strategy</title>
		
        <p>As the GSSP reveals both spatial and temporal patterns on one plot, it may be used to
analyze the strategies used while exploring an image. Figure 8a presents two basic strategies,
horizontal and vertical, which are easily distinguishable in the GSSP<sub>VH</sub> shown in Figure 8b.</p>

        <p>In the horizontal search strategy a person exploring the scene starts in the upper left
corner and moves the eyes to the right, thus all subsequent gaze positions are
always to the right or in the same place (i.e. near the left edge of the scene). This is represented by
red and black regions in the first row of the upper part of the plot. The whole horizontal (upper
right) part of the GSSP<sub>VH</sub> consists of subsequent red and green regions of similar size, which
indicates that the gaze was moving left and right with similar speed. The part of the plot below
the diagonal (visualizing vertical movements) is only black and red, with very sparse green
components, because vertical eye movements are made only downwards.</p>

        <p>In the case of the vertical strategy a similar color layout may be found, but with the
lower and upper parts of the plot swapped.</p>

<fig id="fig08" fig-type="figure" position="float">
					<label>Figure 8:</label>
					<caption>
						<p>An example of GSSPs for different search strategies.</p>
					</caption>
					<graphic id="graph08" xlink:href="jemr-10-05-c-figure-08.eps"/>
				</fig>
      </sec>
	  
      <sec id="S3g">
        <title>Reading patterns</title>
		
        <p>Fixation patterns during reading are very specific, which makes the GSSP obtained for
reading tasks very specific as well. Analyzing the GSSP<sub>VH</sub> presented in Figure 9, it may be noticed
that vertical movements are directed only downwards, while horizontal ones go both to the left
and to the right. Subsequent lines of text are easily distinguishable in the horizontal (upper
right) part of the GSSP. It consists of squares with a red upper right part and a green lower left part,
which indicates that there were slow movements to the right followed by rapid movements to the
left (making it different from the GSSP for the horizontal search strategy presented in
Figure 8b).</p>

        <p>Another example of the text reading task is presented in Figure 10. A careful
examination of the vertical (lower left) part of the GSSP<sub>VH</sub> reveals that the same sequence
repeats twice, which means that the same text was read twice. It is not so obvious when looking
only at the scan-path.</p>

<fig id="fig09" fig-type="figure" position="float">
					<label>Figure 9:</label>
					<caption>
						<p>A scan-path (above) and the corresponding GSSP<sub>VH</sub> (below) recorded while reading a text. It is visible that vertical movements are directed only downwards, while horizontal movements are slow to the right and very fast to the left.</p>
					</caption>
					<graphic id="graph09" xlink:href="jemr-10-05-c-figure-09.eps"/>
				</fig>
				
<fig id="fig10" fig-type="figure" position="float">
					<label>Figure 10:</label>
					<caption>
						<p>A scan-path during text reading (above) and the corresponding GSSP<sub>VH</sub> (below). The vertical (lower left) part of the GSSP<sub>VH</sub> reveals that the same sequence was read twice.</p>
					</caption>
					<graphic id="graph10" xlink:href="jemr-10-05-c-figure-10.eps"/>
				</fig>				

        <p>The subsequent example is a backward reading task. Figure 11 presents the scan-path
and the GSSP<sub>VH</sub> for such a task. It is visible that this time the horizontal part of the plot consists
of rectangles with a green upper right corner and a red lower left corner, which indicates slow
movements to the left and rapid ones to the right. However, the pattern is not as clear as in the
case of normal text reading, because the person was not used to this kind of reading.</p>

<fig id="fig11" fig-type="figure" position="float">
					<label>Figure 11:</label>
					<caption>
						<p>A scan-path during backward reading (above) and the GSSP<sub>VH</sub> for this scan-path (below).</p>
					</caption>
					<graphic id="graph11" xlink:href="jemr-10-05-c-figure-11.eps"/>
				</fig>

        <p>The same text was presented to another person; the corresponding scan-path and
GSSP<sub>VH</sub> are presented in Figure 12. This time the person had serious problems with reading
from right to left, and it is visible on the GSSP<sub>VH</sub>: the rectangles are not similar to the previous
ones, as movements to the right and to the left have similar velocities (as in the case of the
horizontal search strategy). It is worth noting that this information is not visible in the scan-paths,
which look similar in Figures 11 and 12.</p>

<fig id="fig12" fig-type="figure" position="float">
					<label>Figure 12:</label>
					<caption>
						<p>A scan-path during backward reading (above) and the GSSP<sub>VH</sub> for this scan-path (below) for another person.</p>
					</caption>
					<graphic id="graph12" xlink:href="jemr-10-05-c-figure-12.eps"/>
				</fig>
      </sec>
	  
      <sec id="S3h">
        <title>GSSP metrics usage</title>
		
        <p>Our assumption was that the metrics presented in the Method section (contrast,
homogeneity and uniformity) may reveal interesting information about gaze patterns. To
check this, all three metrics for the (0,1) offset were calculated for the first three seconds of five GSSPs
presented in the previous sections (Table 1). This way we were able to compare metrics for
normal and smooth pursuit observations (first row) and for ambient and focal observations
(second row). The differences are visible for all compared observations, especially in the case of
contrast and uniformity. The third row of the table shows a comparison between the same
metrics calculated for the gaze pattern presented in Figure 9, but separately for the horizontal
and vertical directions.</p>

<table-wrap id="t01" position="float">
					<label>Table 1:</label>
					<caption>
						<p>Metrics calculated for the (0,1) offset for some GSSPs presented in the previous sections.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">observation</td>
            <td rowspan="1" colspan="1">contrast</td>
            <td rowspan="1" colspan="1">homog</td>
            <td rowspan="1" colspan="1">uniform</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">Normal (Fig. 5) </td>
            <td rowspan="1" colspan="1"> 0.062 </td>
            <td rowspan="1" colspan="1"> 0.979 </td>
            <td rowspan="1" colspan="1"> 0.237 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Smooth pursuit (Fig. 6) </td>
            <td rowspan="1" colspan="1"> 0.034 </td>
            <td rowspan="1" colspan="1"> 0.987 </td>
            <td rowspan="1" colspan="1"> 0.430 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Ambient (Fig. 7)</td>
            <td rowspan="1" colspan="1"> 0.092 </td>
            <td rowspan="1" colspan="1"> 0.972 </td>
            <td rowspan="1" colspan="1"> 0.259 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Focal (Fig. 7) </td>
            <td rowspan="1" colspan="1"> 0.009 </td>
            <td rowspan="1" colspan="1"> 0.996 </td>
            <td rowspan="1" colspan="1"> 0.632</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Text horizontal (Fig. 9)</td>
            <td rowspan="1" colspan="1"> 0.082 </td>
            <td rowspan="1" colspan="1"> 0.959 </td>
            <td rowspan="1" colspan="1"> 0.370 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Text vertical (Fig. 9)</td>
            <td rowspan="1" colspan="1"> 0.011 </td>
            <td rowspan="1" colspan="1"> 0.994 </td>
            <td rowspan="1" colspan="1"> 0.664 </td>
          </tr>
						</tbody>
					</table>
					</table-wrap>
      </sec>
	  
      <sec id="S3i">
        <title>Distinguishing picture types</title>
		
        <p>The next step in ascertaining the usefulness of the proposed metrics was to use them
to distinguish visual behavior depending on the image type.</p>

        <p>The dataset used for this purpose consisted of gaze recordings registered for 18
participants looking at four images - two free observation images denoted as &#x2019;bus&#x2019; and &#x2019;cat&#x2019;, one
image with text to be read (&#x2019;text&#x2019;) and one image for which the participants&#x2019; task was to count
the number of rabbits. All four images are presented in Figure 13. After removing two bad
samples the remaining subset formed a dataset consisting of 232 observations.</p>

<fig id="fig13" fig-type="figure" position="float">
					<label>Figure 13:</label>
					<caption>
						<p>Four images analyzed during the first experiment.</p>
					</caption>
					<graphic id="graph13" xlink:href="jemr-10-05-c-figure-13.eps"/>
				</fig>

        <p>For each observation the GSSP<sub>VH</sub> was created and the three metrics - contrast, homogeneity
and uniformity - were calculated separately (1) for each direction (horizontal - the upper right triangle -
and vertical - the lower left triangle) and (2) for three different offsets: (0,1), (1,0) and (1,1). This gave
18 attributes overall, derived from the single GSSP corresponding to one observation.</p>

        <p>During the metrics analysis it turned out that the values of uniformity calculated for the same
direction (V or H) but for different offsets are highly correlated (Pearson correlation for every
pair &gt;.9). Therefore, the uniformity values determined for the (0,1) and (1,0) offsets were
removed from further analysis. After that step, 14 attributes described every GSSP<sub>VH</sub> (and thus every observation).</p>
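This pruning step can be sketched as a greedy filter over attribute columns; the Pearson correlation is computed in pure Python, and the attribute names are illustrative only.

```python
from math import sqrt

def pearson(x, y):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def prune_correlated(attrs, names, threshold=0.9):
    """Keep a column only if it is not correlated above `threshold`
    with any already-kept column; `attrs` maps name -> value list."""
    kept = []
    for name in names:
        if all(abs(pearson(attrs[name], attrs[k])) <= threshold for k in kept):
            kept.append(name)
    return kept
```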

        <p>The resulting GSSPs for all four images and two exemplary participants are presented in
Figures 14 and 15.</p>

<fig id="fig14" fig-type="figure" position="float">
					<label>Figure 14:</label>
					<caption>
						<p>The GSSPs of one observer for four images presented in Figure 13. The order is the same as in Figure 13.</p>
					</caption>
					<graphic id="graph14" xlink:href="jemr-10-05-c-figure-14.eps"/>
				</fig>
				
<fig id="fig15" fig-type="figure" position="float">
					<label>Figure 15:</label>
					<caption>
						<p>The GSSPs of another observer for four images presented in Figure 13. The order is the same as in Figure 13</p>
					</caption>
					<graphic id="graph15" xlink:href="jemr-10-05-c-figure-15.eps"/>
				</fig>				

        <p>The mean values of the attributes calculated for each image are presented in Table 2.
Because, according to the Shapiro-Wilk test, none of the 14 analyzed attributes was normally
distributed, the nonparametric Kruskal-Wallis test was used to check whether attribute values
differ among the images. The differences were significant, so a post-hoc pairwise comparison
by means of the Mann-Whitney test was also performed (see Table 3).</p>
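In practice these tests come from a statistics package (e.g. scipy.stats.kruskal); purely for illustration, the H statistic reported in Table 2 can be computed directly. The sketch below uses average ranks for ties but omits the tie-correction divisor applied by full implementations.

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic for k independent samples
    (tie-corrected ranks, no tie-correction divisor)."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1                       # run of tied values: ranks i+1 .. j
        avg_rank = (i + 1 + j) / 2       # average rank of the tied run
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return 12.0 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)) - 3 * (n + 1)
```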

<table-wrap id="t02" position="float">
					<label>Table 2:</label>
					<caption>
						<p>Mean attribute values for different images averaged for all 18 participants. Standard deviation in brackets. Kruskal-Wallis test result in column H and significance in column p-value  </p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">attribute</td>
            <td rowspan="1" colspan="1">bus</td>
            <td rowspan="1" colspan="1">cat</td>
            <td rowspan="1" colspan="1">text</td>
            <td rowspan="1" colspan="1">task</td>
            <td rowspan="1" colspan="1">H</td>
            <td rowspan="1" colspan="1">p-value</td>
            <td rowspan="1" colspan="1">sign</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">H01 contrast </td>
            <td rowspan="1" colspan="1"> .062 (.048) </td>
            <td rowspan="1" colspan="1"> .083 (.042) </td>
            <td rowspan="1" colspan="1"> .154 (.064) </td>
            <td rowspan="1" colspan="1"> .063 (.037) </td>
            <td rowspan="1" colspan="1"> 28.451 </td>
            <td rowspan="1" colspan="1"> 0 </td>
            <td rowspan="1" colspan="1"> ***</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">H01 homogeneity </td>
            <td rowspan="1" colspan="1"> .977 (.01) </td>
            <td rowspan="1" colspan="1"> .969 (.011) </td>
            <td rowspan="1" colspan="1"> .952 (.007) </td>
            <td rowspan="1" colspan="1"> .975 (.006) </td>
            <td rowspan="1" colspan="1"> 34.253 </td>
            <td rowspan="1" colspan="1"> 0 </td>
            <td rowspan="1" colspan="1"> ***</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">H10 contrast </td>
            <td rowspan="1" colspan="1"> .069 (.048) </td>
            <td rowspan="1" colspan="1"> .079 (.067) </td>
            <td rowspan="1" colspan="1"> .141 (.075) </td>
            <td rowspan="1" colspan="1"> .058 (.025) </td>
            <td rowspan="1" colspan="1"> 22.321 </td>
            <td rowspan="1" colspan="1"> 0 </td>
            <td rowspan="1" colspan="1"> ***</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">H10 homogeneity </td>
            <td rowspan="1" colspan="1"> .977 (.01) </td>
            <td rowspan="1" colspan="1"> .973 (.013) </td>
            <td rowspan="1" colspan="1"> .957 (.01) </td>
            <td rowspan="1" colspan="1"> .978 (.006) </td>
            <td rowspan="1" colspan="1"> 32.848 </td>
            <td rowspan="1" colspan="1"> 0 </td>
            <td rowspan="1" colspan="1"> ***</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">H11 contrast </td>
            <td rowspan="1" colspan="1"> .125 (.086) </td>
            <td rowspan="1" colspan="1"> .154 (.092) </td>
            <td rowspan="1" colspan="1"> .286 (.133) </td>
            <td rowspan="1" colspan="1"> .114 (.052) </td>
            <td rowspan="1" colspan="1"> 25.403 </td>
            <td rowspan="1" colspan="1"> 0 </td>
            <td rowspan="1" colspan="1"> ***</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">H11 homogeneity </td>
            <td rowspan="1" colspan="1"> .957 (.017) </td>
            <td rowspan="1" colspan="1"> .947 (.02) </td>
            <td rowspan="1" colspan="1"> .917 (.013) </td>
            <td rowspan="1" colspan="1"> .957 (.01) </td>
            <td rowspan="1" colspan="1"> 35.417 </td>
            <td rowspan="1" colspan="1"> 0 </td>
            <td rowspan="1" colspan="1"> ***</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">H11 uniformity </td>
            <td rowspan="1" colspan="1"> .25 (.125) </td>
            <td rowspan="1" colspan="1"> .205 (.056) </td>
            <td rowspan="1" colspan="1"> .152 (.023) </td>
            <td rowspan="1" colspan="1"> .159 (.024) </td>
            <td rowspan="1" colspan="1"> 23.627 </td>
            <td rowspan="1" colspan="1"> 0 </td>
            <td rowspan="1" colspan="1"> ***</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V01 contrast </td>
            <td rowspan="1" colspan="1"> .045 (.016) </td>
            <td rowspan="1" colspan="1"> .051 (.022) </td>
            <td rowspan="1" colspan="1"> .074 (.022) </td>
            <td rowspan="1" colspan="1"> .07 (.015) </td>
            <td rowspan="1" colspan="1"> 21.51 </td>
            <td rowspan="1" colspan="1"> 0 </td>
            <td rowspan="1" colspan="1"> ***</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V01 homogeneity </td>
            <td rowspan="1" colspan="1"> .979 (.007) </td>
            <td rowspan="1" colspan="1"> .977 (.009) </td>
            <td rowspan="1" colspan="1"> .973 (.006) </td>
            <td rowspan="1" colspan="1"> .972 (.005) </td>
            <td rowspan="1" colspan="1"> 12.853 </td>
            <td rowspan="1" colspan="1"> 0.005 </td>
            <td rowspan="1" colspan="1"> **</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V10 contrast </td>
            <td rowspan="1" colspan="1"> .063 (.024) </td>
            <td rowspan="1" colspan="1"> .056 (.019) </td>
            <td rowspan="1" colspan="1"> .074 (.028) </td>
            <td rowspan="1" colspan="1"> .075 (.019) </td>
            <td rowspan="1" colspan="1"> 10.251 </td>
            <td rowspan="1" colspan="1"> 0.017 </td>
            <td rowspan="1" colspan="1"> *</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V10 homogeneity </td>
            <td rowspan="1" colspan="1"> .976 (.007) </td>
            <td rowspan="1" colspan="1"> .975 (.007) </td>
            <td rowspan="1" colspan="1"> .97 (.01) </td>
            <td rowspan="1" colspan="1"> .97 (.005) </td>
            <td rowspan="1" colspan="1"> 10.218 </td>
            <td rowspan="1" colspan="1"> 0.017 </td>
            <td rowspan="1" colspan="1"> *</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V11 contrast </td>
            <td rowspan="1" colspan="1"> .098 (.034) </td>
            <td rowspan="1" colspan="1"> .097 (.035) </td>
            <td rowspan="1" colspan="1"> .136 (.043) </td>
            <td rowspan="1" colspan="1"> .135 (.03) </td>
            <td rowspan="1" colspan="1"> 16.655 </td>
            <td rowspan="1" colspan="1"> 0.001 </td>
            <td rowspan="1" colspan="1"> ***</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V11 homogeneity </td>
            <td rowspan="1" colspan="1"> .96 (.011) </td>
            <td rowspan="1" colspan="1"> .957 (.013) </td>
            <td rowspan="1" colspan="1"> .949 (.013) </td>
            <td rowspan="1" colspan="1"> .948 (.008) </td>
            <td rowspan="1" colspan="1"> 13.832 </td>
            <td rowspan="1" colspan="1"> 0.003 </td>
            <td rowspan="1" colspan="1"> **</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V11 uniformity </td>
            <td rowspan="1" colspan="1"> .363 (.115) </td>
            <td rowspan="1" colspan="1"> .311 (.111) </td>
            <td rowspan="1" colspan="1"> .205 (.048) </td>
            <td rowspan="1" colspan="1"> .156 (.023) </td>
            <td rowspan="1" colspan="1"> 47.07 </td>
            <td rowspan="1" colspan="1"> 0 </td>
            <td rowspan="1" colspan="1"> ***</td>
          </tr>
						</tbody>
					</table>
					</table-wrap>
					
<table-wrap id="t03" position="float">
					<label>Table 3:</label>
					<caption>
						<p>The results of the Mann-Whitney test for the significance of differences between each pair of images, based on all 232 observations (and 18 participants). The table shows p-values for each attribute and pair; &#x2018;*&#x2019; denotes p-value&lt;0.01 and &#x2018;**&#x2019; p-value&lt;0.001.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"> bus-cat </td>
            <td rowspan="1" colspan="1"> bus-text </td>
            <td rowspan="1" colspan="1"> bus-task </td>
            <td rowspan="1" colspan="1"> cat-text </td>
            <td rowspan="1" colspan="1"> text-task </td>
            <td rowspan="1" colspan="1"> cat-task</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">H01 contrast </td>
            <td rowspan="1" colspan="1"> .09 </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> .24 </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> .25 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">H01 homog </td>
            <td rowspan="1" colspan="1"> .04 </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> .21 </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> .22 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">H10 contrast </td>
            <td rowspan="1" colspan="1"> .42 </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> 1.0 </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> .19 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">H10 homog </td>
            <td rowspan="1" colspan="1"> .29 </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> .66 </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> .14 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">H11 contrast </td>
            <td rowspan="1" colspan="1"> .19 </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> .66 </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> .14 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">H11 homog </td>
            <td rowspan="1" colspan="1"> .06 </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> .62 </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> .08 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">H11 uniformity </td>
            <td rowspan="1" colspan="1"> .19 </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> .48 </td>
            <td rowspan="1" colspan="1"> * </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V01 contrast </td>
            <td rowspan="1" colspan="1"> .41 </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> * </td>
            <td rowspan="1" colspan="1"> .79 </td>
            <td rowspan="1" colspan="1"> * </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V01 homog </td>
            <td rowspan="1" colspan="1"> .58 </td>
            <td rowspan="1" colspan="1"> * </td>
            <td rowspan="1" colspan="1"> * </td>
            <td rowspan="1" colspan="1"> .04 </td>
            <td rowspan="1" colspan="1"> .47 </td>
            <td rowspan="1" colspan="1"> .01 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V10 contrast </td>
            <td rowspan="1" colspan="1"> .32 </td>
            <td rowspan="1" colspan="1"> .24 </td>
            <td rowspan="1" colspan="1"> .08 </td>
            <td rowspan="1" colspan="1"> .03 </td>
            <td rowspan="1" colspan="1"> .82 </td>
            <td rowspan="1" colspan="1"> * </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V10 homog </td>
            <td rowspan="1" colspan="1"> .82 </td>
            <td rowspan="1" colspan="1"> .04 </td>
            <td rowspan="1" colspan="1"> * </td>
            <td rowspan="1" colspan="1"> .1 </td>
            <td rowspan="1" colspan="1"> .91 </td>
            <td rowspan="1" colspan="1"> .01 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V11 contrast </td>
            <td rowspan="1" colspan="1"> .96 </td>
            <td rowspan="1" colspan="1"> * </td>
            <td rowspan="1" colspan="1"> * </td>
            <td rowspan="1" colspan="1"> * </td>
            <td rowspan="1" colspan="1"> .76 </td>
            <td rowspan="1" colspan="1"> * </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V11 homog </td>
            <td rowspan="1" colspan="1"> .64 </td>
            <td rowspan="1" colspan="1"> .01 </td>
            <td rowspan="1" colspan="1"> * </td>
            <td rowspan="1" colspan="1"> .06 </td>
            <td rowspan="1" colspan="1"> .81 </td>
            <td rowspan="1" colspan="1"> * </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V11 uniformity </td>
            <td rowspan="1" colspan="1"> .08 </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> ** </td>
            <td rowspan="1" colspan="1"> ** </td>
          </tr>
						</tbody>
					</table>
					</table-wrap>					

        <p>The above results, with their statistically significant differences, showed that
distinguishing image types based on the calculated GSSP metrics is possible. To confirm the findings,
a subsequent analysis step was undertaken, in which all 14 attributes were used to
associate an observation with an image type. The classification was performed by
means of a Random Forest classifier with leave-one-out cross-validation, using the WEKA
implementation with default parameters [
        <xref ref-type="bibr" rid="R28">15</xref>
		]. The resulting confusion matrix is presented in
Table 4. It is visible that &#x2019;text&#x2019; and &#x2019;task&#x2019; were the easiest images to classify (17 out of 18 and 16
out of 18 correct classifications, respectively). On the other hand, the &#x2019;bus&#x2019; and &#x2019;cat&#x2019; images -
both representing the free-viewing pattern - were frequently confused with each other.</p>
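The leave-one-out evaluation used to obtain Table 4 can be sketched generically. The paper used WEKA's Random Forest; the 1-nearest-neighbour classifier below is only a hypothetical stand-in to keep the sketch self-contained.

```python
def leave_one_out_confusion(xs, ys, fit_predict, labels):
    """Leave-one-out cross-validation: train on all but one sample,
    predict the held-out one, and tally confusion[actual][predicted]."""
    confusion = {a: {p: 0 for p in labels} for a in labels}
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        confusion[ys[i]][fit_predict(train_x, train_y, xs[i])] += 1
    return confusion

def nn_fit_predict(train_x, train_y, query):
    """1-NN stand-in (illustrative) for the Random Forest classifier."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, query)) for x in train_x]
    return train_y[dists.index(min(dists))]
```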

<table-wrap id="t04" position="float">
					<label>Table 4:</label>
					<caption>
						<p>Confusion matrix for the images classification. Each cell shows how many instances of the actual class defined in the column were classified as the class defined in the row.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">actual -&gt; <break/> predicted </td>
            <td rowspan="1" colspan="1">bus</td>
            <td rowspan="1" colspan="1">cat</td>
            <td rowspan="1" colspan="1">text</td>
            <td rowspan="1" colspan="1">task</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">bus </td>
            <td rowspan="1" colspan="1"> 11 </td>
            <td rowspan="1" colspan="1"> 5 </td>
            <td rowspan="1" colspan="1"> 0 </td>
            <td rowspan="1" colspan="1"> 1 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">cat </td>
            <td rowspan="1" colspan="1"> 5 </td>
            <td rowspan="1" colspan="1"> 10 </td>
            <td rowspan="1" colspan="1"> 1 </td>
            <td rowspan="1" colspan="1"> 0 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">text </td>
            <td rowspan="1" colspan="1"> 1 </td>
            <td rowspan="1" colspan="1"> 3 </td>
            <td rowspan="1" colspan="1"> 17 </td>
            <td rowspan="1" colspan="1"> 1 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">task </td>
            <td rowspan="1" colspan="1"> 1 </td>
            <td rowspan="1" colspan="1"> 0 </td>
            <td rowspan="1" colspan="1"> 0 </td>
            <td rowspan="1" colspan="1"> 16 </td>
          </tr>
						</tbody>
					</table>
					</table-wrap>
      </sec>
	  
      <sec id="S3j">
        <title>Distinguishing a level of expertise</title>
		
        <p>One of the intensively studied issues in eye tracking research is revealing the
eye movement patterns of people with various levels of expertise, which is especially prominent in
medicine. For this reason, the next test aimed to check whether the GSSP may be used to distinguish
the gaze patterns of laymen and specialists. The dataset used in the analysis consisted of eye
movement recordings of 8 laymen and 8 specialists looking at 12 X-rays for 5 seconds each (the
duration was chosen arbitrarily). The set of images included chest X-rays with and without
various diseases. The participants&#x2019; task was to explore each image and assess it based on the four
possibilities provided.</p>

        <p>As in the previously described case, a GSSP<sub>VH</sub> was created for every
observation, and the three attributes - contrast, homogeneity and uniformity - were calculated separately
for both directions and three different offsets.</p>

        <p>In this case it turned out that the values of all attributes calculated for the same direction (V or
H) but for different offsets are highly correlated (Pearson correlation for every pair &gt;.8).
Therefore, only the (1,1) offset was taken into account. The mean values of the attributes for each
group, together with the Kruskal-Wallis test results, are presented in Table 5.</p>

<table-wrap id="t05" position="float">
					<label>Table 5:</label>
					<caption>
						<p>Mean attribute values for all 216 observations (12 images and 18 participants) with standard deviation in brackets. Kruskal-Wallis test results are provided in column H and significance in column p-value</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">attribute</td>
            <td rowspan="1" colspan="1">laymen</td>
            <td rowspan="1" colspan="1">specialists</td>
            <td rowspan="1" colspan="1">H</td>
            <td rowspan="1" colspan="1">p-value</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">H cont </td>
            <td rowspan="1" colspan="1"> .14 (.16) </td>
            <td rowspan="1" colspan="1"> .09 (.06) </td>
            <td rowspan="1" colspan="1"> 0.13 </td>
            <td rowspan="1" colspan="1"> 0.72 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">H homo </td>
            <td rowspan="1" colspan="1"> .94 (.05) </td>
            <td rowspan="1" colspan="1"> .96 (.02) </td>
            <td rowspan="1" colspan="1"> 0.2 </td>
            <td rowspan="1" colspan="1"> 0.66 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">H unif </td>
            <td rowspan="1" colspan="1"> .4 (.12) </td>
            <td rowspan="1" colspan="1"> .29 (.07) </td>
            <td rowspan="1" colspan="1"> 43.8 </td>
            <td rowspan="1" colspan="1"> 0 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V cont </td>
            <td rowspan="1" colspan="1"> .2 (.21) </td>
            <td rowspan="1" colspan="1"> .11 (.07) </td>
            <td rowspan="1" colspan="1"> 14.4 </td>
            <td rowspan="1" colspan="1"> 0 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V homo </td>
            <td rowspan="1" colspan="1"> .93 (.06) </td>
            <td rowspan="1" colspan="1"> .96 (.02) </td>
            <td rowspan="1" colspan="1"> 13.0 </td>
            <td rowspan="1" colspan="1"> 0 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">V unif </td>
            <td rowspan="1" colspan="1"> .32 (.15) </td>
            <td rowspan="1" colspan="1"> .25 (.09) </td>
            <td rowspan="1" colspan="1"> 9.5 </td>
            <td rowspan="1" colspan="1"> 0.002 </td>
          </tr>
						</tbody>
					</table>
					</table-wrap>

        <p>As in the previous experiment, all six attributes of each observation were used to
classify it as a specialist&#x2019;s or a layman&#x2019;s. Once more, the Random Forest classification algorithm
(WEKA implementation with default parameters [
        <xref ref-type="bibr" rid="R28">15</xref>
		]) was applied. The accuracy of the
classification was 85%; the confusion matrix is presented in Table 6. Additionally, the
Detection Error Tradeoff (DET) curve for the specialist-layman prediction is presented in Figure 16.</p>

<table-wrap id="t06" position="float">
					<label>Table 6:</label>
					<caption>
						<p>Confusion matrix for the experts&#x2019; classification. Each cell shows how many instances of the actual class defined in the column were classified as the class defined in the row.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">actual -&gt; <break/>classified as</td>
            <td rowspan="1" colspan="1">laymen</td>
            <td rowspan="1" colspan="1">specialists</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">laymen </td>
            <td rowspan="1" colspan="1"> 83 </td>
            <td rowspan="1" colspan="1"> 17 </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">specialists </td>
            <td rowspan="1" colspan="1"> 12 </td>
            <td rowspan="1" colspan="1"> 79 </td>
          </tr>
						</tbody>
					</table>
					</table-wrap>	

<fig id="fig16" fig-type="figure" position="float">
					<label>Figure 16:</label>
					<caption>
						<p>DET curve for the specialist-layman prediction based on calculated GSSP metrics.</p>
					</caption>
					<graphic id="graph16" xlink:href="jemr-10-05-c-figure-16.eps"/>
				</fig>

        <p>Moreover, when the classification results of the same person were aggregated over the 12
image observations with a classic majority voting scheme, all participants were classified
correctly as either laymen or specialists (8 out of 8 correct for both classes). Such results may
be treated as confirmation of the GSSP&#x2019;s usefulness for distinguishing laymen from specialists.</p>
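The per-participant aggregation amounts to a majority vote over the twelve per-image predictions; a minimal sketch:

```python
from collections import Counter

def majority_vote(per_image_labels):
    """Final per-participant label: the most frequent of the
    per-image classifier outputs (classic majority voting)."""
    return Counter(per_image_labels).most_common(1)[0][0]
```

With 12 votes per participant, a classifier that is right on most images yields the correct final label even when individual images are misclassified.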
      </sec>
	  
      <sec id="S3k">
        <title>Handling long sequences</title>
		
        <p>Visualization techniques very often have to deal with large numbers of
samples. Presenting many fixations and saccades makes scan-paths or heat maps
difficult to analyze, especially when detailed information is needed. The problem may be overcome
by analyzing the data in smaller parts. A similar solution may be used in the case
of the GSSP. If a gaze sequence (scan-path) is very long (e.g., recorded while watching a movie), it is not
necessary to analyze the whole GSSP - a better option is to create multiple GSSPs for
successive periods. The idea is presented visually in Figure 17, where parts are selected from
the whole GSSP. Such extracted GSSPs may then be compared to find characteristic moments
during the observation.</p>
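The moving-window idea can be sketched as a simple generator; the window and step are in samples, and `make_gssp` stands for whatever routine builds a GSSP from a gaze sub-sequence (its internals are not assumed here).

```python
def windowed_gssps(gaze, window, step, make_gssp):
    """Yield (start_index, GSSP) for successive overlapping windows of a
    long gaze sequence, e.g. window=60 samples (~1 s at 60 Hz) and
    step=10 samples (~0.16 s), as used later for the movie experiment."""
    for start in range(0, len(gaze) - window + 1, step):
        yield start, make_gssp(gaze[start:start + window])
```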

<fig id="fig17" fig-type="figure" position="float">
					<label>Figure 17:</label>
					<caption>
						<p>Calculation of the GSSP in a moving window.</p>
					</caption>
					<graphic id="graph17" xlink:href="jemr-10-05-c-figure-17.eps"/>
				</fig>

        <p>This GSSP feature was investigated in the next experiment, which aimed to check whether it is
possible to determine, based on GSSP metrics, that a person is reading a text. For the sake of the
experiment a cartoon movie was used, in which a foreground text appeared on the
screen from time to time (see Figure 18).</p>

<fig id="fig18" fig-type="figure" position="float">
					<label>Figure 18:</label>
					<caption>
						<p>One frame from the cartoon movie with a text displayed.</p>
					</caption>
					<graphic id="graph18" xlink:href="jemr-10-05-c-figure-18.eps"/>
				</fig>

        <p>Six different texts were displayed during the movie, with durations ranging from 7
to 9 seconds and short breaks (2-5 seconds) between subsequent text presentations. The
participant&#x2019;s task was to watch the movie and, at the same time, to read all the texts.</p>

        <p>The research question was whether it is possible to indicate, based on the metric values,
that a person was reading the text while watching the movie. To answer this
question, GSSPs were first created for one-second windows with a 0.16-second step. Then
all metrics were calculated separately for each GSSP, and their values were examined for
correlation with text visibility. For this purpose, the function defining the moments of
text presentation was defined as:</p>

	  <disp-formula>
	   <label>(15)</label>
	  <mml:math id="m15">
<mml:mtable class="m-equation" displaystyle="true" style="display: block; margin-top: 1.0em; margin-bottom: 2.0em">
	<mml:mtr>
		<mml:mtd>
			<mml:mspace width="6.0em" />
		</mml:mtd>
		<mml:mtd columnalign="left">
			<mml:mi>t</mml:mi>
			<mml:mi>e</mml:mi>
			<mml:mi>x</mml:mi>
			<mml:mi>t</mml:mi>
			<mml:mi>v</mml:mi>
			<mml:mi>i</mml:mi>
			<mml:mi>s</mml:mi>
			<mml:mi>i</mml:mi>
			<mml:mi>b</mml:mi>
			<mml:mi>l</mml:mi>
			<mml:mi>e</mml:mi>
			<mml:mrow>
				<mml:mo form="prefix">(</mml:mo>
				<mml:mi>t</mml:mi>
				<mml:mo form="postfix">)</mml:mo>
			</mml:mrow>
			<mml:mo>=</mml:mo>
			<mml:mrow>
				<mml:mo rspace="0.3em" lspace="0em" stretchy="true" fence="true" form="prefix">{</mml:mo>
				<mml:mtable class="m-matrix">
					<mml:mtr>
						<mml:mtd>
							<mml:mn>1</mml:mn>
							<mml:mo>,</mml:mo>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>t</mml:mi>
							<mml:mi>e</mml:mi>
							<mml:mi>x</mml:mi>
							<mml:mi>t</mml:mi>
							<mml:mspace width="0.167em" />
							<mml:mi>v</mml:mi>
							<mml:mi>i</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:mi>i</mml:mi>
							<mml:mi>b</mml:mi>
							<mml:mi>l</mml:mi>
							<mml:mi>e</mml:mi>
						</mml:mtd>
					</mml:mtr>
					<mml:mtr>
						<mml:mtd>
							<mml:mn>0</mml:mn>
							<mml:mo>,</mml:mo>
						</mml:mtd>
						<mml:mtd>
							<mml:mi>t</mml:mi>
							<mml:mi>e</mml:mi>
							<mml:mi>x</mml:mi>
							<mml:mi>t</mml:mi>
							<mml:mspace width="0.167em" />
							<mml:mi>n</mml:mi>
							<mml:mi>o</mml:mi>
							<mml:mi>t</mml:mi>
							<mml:mspace width="0.167em" />
							<mml:mi>v</mml:mi>
							<mml:mi>i</mml:mi>
							<mml:mi>s</mml:mi>
							<mml:mi>i</mml:mi>
							<mml:mi>b</mml:mi>
							<mml:mi>l</mml:mi>
							<mml:mi>e</mml:mi>
						</mml:mtd>
					</mml:mtr>
				</mml:mtable>
			</mml:mrow>
		</mml:mtd>
		<mml:mtd columnalign="right" style="width: 100%">
			<mml:mspace width="6.0em" />
		</mml:mtd>
	</mml:mtr>
</mml:mtable>
</mml:math>
 </disp-formula>

        <p>where textvisible(t) indicates whether at time t a text was visible on the screen (the
function value is 1) or not (the function value is 0).</p>
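Equation (15) amounts to an indicator function over the text display intervals; a minimal sketch (the interval values used below are illustrative, not the actual timings of the experiment):

```python
def make_textvisible(intervals):
    """Build textvisible(t) from a list of (start, end) display
    intervals in seconds, as defined in equation (15)."""
    def textvisible(t):
        return 1 if any(start <= t < end for start, end in intervals) else 0
    return textvisible
```

Sampling this function at the same moments as the windowed metrics yields the two series whose Pearson correlation is reported below.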

        <p>It turned out that the Pearson correlation between horizontal contrast and the
textvisible(t) function was 0.46 (see Figure 19), and between horizontal uniformity and the
textvisible(t) function it was -0.54 (see Figure 20).</p>

        <p>When a participant&#x2019;s task was defined as &#x2019;watch the movie and do not pay attention to the
texts&#x2019;, no correlation between the metrics and the textvisible(t) function was found.</p>

<fig id="fig19" fig-type="figure" position="float">
					<label>Figure 19:</label>
					<caption>
						<p>Horizontal contrast values calculated in a moving window of approximately 1 second. Grey areas are moments when a text was displayed as a foreground. The correlation is clearly visible - the only exception is a moment between the first and the second text appearance, when the contrast is higher than expected.</p>
					</caption>
					<graphic id="graph19" xlink:href="jemr-10-05-c-figure-19.eps"/>
				</fig>
				
<fig id="fig20" fig-type="figure" position="float">
					<label>Figure 20:</label>
					<caption>
						<p>Horizontal uniformity values calculated in a moving window of approximately 1 second. Grey areas are moments when a text was displayed as a foreground. The uniformity is clearly lower when there is the text to be read.</p>
					</caption>
					<graphic id="graph20" xlink:href="jemr-10-05-c-figure-20.eps"/>
				</fig>				
      </sec>
    </sec>
	
    <sec id="S4">
      <title>Discussion</title>
	  
      <p>The experiments presented in the previous section showed that the GSSP may serve as a
useful tool in various fields of eye movement analysis. It may be utilized to check the quality of
recordings through a convenient presentation of outliers. Additionally, the different plot patterns produced
by different tasks make the GSSP helpful in identifying the underlying activity, such as smooth
pursuit, reading or a searching task. Moreover, the GSSP offers the possibility of recognizing the
way a scene is observed - the direction of the scene scanning and the ambient/focal characteristic of
its exploration.</p>

      <p>However, the GSSP is not only a visual tool for gaze pattern analysis; it may also be
used to calculate meaningful quantitative metrics, which may enrich our understanding of
eye movements. For instance, when the metrics for the GSSP presented in Figure 5 are compared with
those of the smooth pursuit GSSP (Figure 6), it is visible that contrast is much lower for the latter,
while uniformity is higher (see Table 1).</p>

      <p>When the ambient and focal GSSPs (Figure 7) are compared, the GSSP for focal observation
is characterized by a much lower contrast, slightly higher homogeneity and much higher
uniformity (Table 1).</p>

      <p>Using the GSSP<sub>VH</sub> offers the opportunity to compare metrics obtained for the horizontal and
vertical directions. When these metrics are compared for the text reading GSSP (Figure 9), it turns out
that the contrast is lower and both homogeneity and uniformity are higher for the vertical direction
(Table 1).</p>

      <sec id="S4a">
        <title>Distinguishing picture types</title>

      <p>The results presented in Table 2 revealed a significant effect of the image on all
attributes derived from the GSSP. The differences were especially visible for the attributes calculated
for the horizontal part of the GSSP.</p>

      <p>The post-hoc pairwise comparison performed by means of the Mann-Whitney test revealed
that there were no significant differences between the &#x2018;bus&#x2019; and &#x2018;cat&#x2019; observations (Table 3).
However, all horizontal attribute values showed significant differences when comparing both
free observations (&#x2018;cat&#x2019; and &#x2018;bus&#x2019;) with the &#x2018;text&#x2019; one. On the other hand, there were no significant
differences in horizontal contrast and homogeneity between the free observations and the &#x2018;task&#x2019;
explorations, but there were some significant differences for the vertical attributes. Horizontal
contrast and homogeneity, as well as vertical uniformity, significantly distinguish the &#x2018;text&#x2019; and
&#x2018;task&#x2019; observations.</p>

      <p>The classification results presented in Table 4 show that it was possible to differentiate
observations based on their purpose. The &#x2018;text&#x2019; and &#x2018;task&#x2019; observations were classified correctly
with accuracies of 88% and 94%, respectively, while the &#x2018;cat&#x2019; and &#x2018;bus&#x2019; observations were frequently
misclassified.</p>

      <p>A careful analysis of the results leads to the following conclusions:</p>
	  
      <p>&#x2022; &#x2018;text&#x2019; has a significantly higher horizontal contrast and lower horizontal homogeneity than
the other images,</p>

      <p>&#x2022; &#x2018;task&#x2019; has a significantly lower horizontal contrast than the other images,</p>
	  
      <p>&#x2022; both &#x2018;task&#x2019; and &#x2018;text&#x2019; have significantly lower uniformity and higher vertical contrast than
both &#x2018;free observation&#x2019; images,</p>

      <p>&#x2022; the &#x2018;free observation&#x2019; images have similar attribute values and no significant differences
between them were observed,</p>

      <p>&#x2022; it is possible to distinguish the type of observation taking into account only three metrics
derived from the GSSP, as was demonstrated using the Random Forest classification
algorithm.</p>
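      <p>The classification step referred to above can be sketched as follows. The feature values below are synthetic and purely illustrative (not the data of this study); only the overall shape of the procedure - three GSSP-derived metrics per recording fed to a Random Forest - follows the text.</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature vectors per recording, e.g. [horizontal
# contrast, horizontal homogeneity, vertical uniformity].  The two
# clusters loosely mimic the reported trend that 'text' reading yields
# a high horizontal contrast while 'task' viewing yields a low one.
rng = np.random.default_rng(0)
text_like = rng.normal(loc=[0.9, 0.2, 0.1], scale=0.05, size=(20, 3))
task_like = rng.normal(loc=[0.2, 0.7, 0.3], scale=0.05, size=(20, 3))
X = np.vstack([text_like, task_like])
y = ["text"] * 20 + ["task"] * 20

# Train a Random Forest on the three metrics only.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

      <p>An unseen recording is then classified from its three metrics alone, e.g. <monospace>clf.predict([[0.88, 0.22, 0.12]])</monospace>.</p>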
      </sec>

      <sec id="S4b">
        <title>Distinguishing a level of expertise</title>
		
        <p>The results presented in Table 5 reveal that the uniformity and vertical contrast are
significantly lower for specialists, whereas the vertical homogeneity is higher. This suggests that
the specialists&#x2019; gaze patterns are more sophisticated - there are jumps/saccades in various
directions with no dominant direction, which results in lower uniformity. At the same
time, the jumps/saccades (especially in the vertical direction) are shorter, which results in lower
contrast and higher homogeneity. Additionally, the standard deviations of the metrics among
specialists are lower than among laymen.</p>

        <p>Based on these outcomes, it may be concluded that specialists observe the image more
carefully - they focus their attention on the relevant parts of the image (more or less the same for
each specialist), whereas laymen just scan the image, using similar and predictable patterns for
each image (but specific to each observer).</p>

        <p>The classification part of the experiment showed that it is possible to distinguish
layman and specialist gaze patterns taking into account only three metrics derived from the
GSSP. With 12 gaze patterns available per person, the classification algorithm performed
perfectly in predicting a person&#x2019;s level of expertise.</p>
      </sec>
	  
      <sec id="S4c">
        <title>Handling long sequences</title>
		
        <p>The last (movie) experiment described in the previous section leads to the conclusion
that the proposed technique scales to long sequences of recordings. By dividing them
into shorter series with arbitrarily defined windows, both within-series and
between-series comparisons are facilitated. Additionally, the results obtained showed the
usefulness of the proposed metrics: for example, the horizontal contrast and horizontal
uniformity metrics may be good indicators of whether a person is reading a text.</p>
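        <p>The moving-window processing described above can be sketched as follows; the window length, step and the concrete per-window metric used here are illustrative assumptions, not the exact values of the experiment.</p>

```python
import numpy as np

def windowed_metric(signal, window, step, metric):
    """Apply `metric` to the self-similarity matrix of each window.

    `signal` is a 1-D gaze coordinate sequence (e.g. horizontal
    position); `window` and `step` are in samples, so a window equal
    to the sampling rate covers roughly one second of recording.
    """
    signal = np.asarray(signal, dtype=float)
    values = []
    for start in range(0, len(signal) - window + 1, step):
        seg = signal[start:start + window]
        # Pairwise absolute distances inside the window: the window's
        # own (one-dimensional) self-similarity plot.
        d = np.abs(seg[:, None] - seg[None, :])
        values.append(metric(d))
    return values

# A crude stand-in for a per-window contrast: mean squared distance.
def contrast_like(d):
    return float(np.mean(d ** 2))
```

        <p>Plotting the returned series against time, with the text-on intervals shaded, reproduces the kind of comparison shown in Figures 19 and 20.</p>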
      </sec>
    </sec>
	
    <sec id="S5">
      <title>Summary</title>
	  
      <p>Eye movement analysis attracts the interest of scientists from many fields of research
and has become a promising tool for the exploration of human brain functioning [
        <xref ref-type="bibr" rid="R29">16</xref>
		]. The aim
of this paper was to present a new method for eye movement visualization, capable
of overcoming a limitation present in most other solutions, i.e. the difficulty of
simultaneously presenting the spatial and temporal characteristics of eye movements.</p>

      <p>The developed method - the Gaze Self-Similarity Plot (GSSP), based on the recurrence plot
technique - achieves this by means of a single two-dimensional plot. The most important features
of this solution are the use of raw gaze points instead of fixations and the encoding of distances
between gazes as continuous values. Both features make the GSSP completely independent of
any thresholds or initial assumptions. Its extended version - the GSSP<sub>VH</sub>, which
encodes horizontal and vertical movements in different ways and uses colors to distinguish
the sense of the movement - makes more information available on the same plot.</p>
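      <p>A minimal sketch of both constructions is given below. The plain GSSP follows directly from the description (continuous pairwise distances between raw gaze samples, with no fixation detection or threshold); the GSSP<sub>VH</sub> layout shown here - horizontal distances above the diagonal, vertical distances below - is an assumption for illustration, not necessarily the exact encoding used in the paper.</p>

```python
import numpy as np

def gssp(gaze):
    """GSSP of raw gaze samples: cell (i, j) holds the Euclidean
    distance between samples i and j as a continuous value, so no
    fixation detection or distance threshold is involved."""
    g = np.asarray(gaze, dtype=float)
    diff = g[:, None, :] - g[None, :, :]
    return np.linalg.norm(diff, axis=2)

def gssp_vh(gaze):
    """One plausible GSSP_VH layout (an assumption): horizontal
    distances above the diagonal, vertical distances below it.  When
    rendering, the sign of each move could additionally be mapped to
    color to convey the sense of the movement."""
    g = np.asarray(gaze, dtype=float)
    dx = np.abs(g[:, None, 0] - g[None, :, 0])
    dy = np.abs(g[:, None, 1] - g[None, :, 1])
    n = len(g)
    upper = np.triu(np.ones((n, n), dtype=bool))
    return np.where(upper, dx, dy)
```

      <p>Rendering either matrix as a greyscale image (sample index on both axes) yields the plot itself; the texture metrics discussed earlier are then computed over this image.</p>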

      <p>Along with the description of the method, a discussion of its possible applications was also
provided. Among them, the effortless revealing of reading patterns, outliers and ambient/focal
characteristics, as well as the differentiation of search strategies, may be mentioned. The presented solution
was equipped with several metrics as well. They allow for both a quantitative assessment of a GSSP
and the comparison of various such plots. Two examples of their usage were discussed in the paper:
(1) distinguishing picture types and (2) distinguishing levels of expertise. In both cases,
statistical analysis revealed significant differences in the metric values for the studied groups. These
findings were confirmed by the results obtained during the classification process performed to
assign an observation to one of these groups.</p>

      <p>Furthermore, based on eye movements gathered while watching a cartoon movie with
overlapping text, an example of processing gaze sets consisting of a large number of recordings
was provided. The example also showed that by means of the GSSP it is feasible to detect which
of the elements overlapping on the screen - the movie or the text - attracted the user&#x2019;s attention. This
distinction is hard to achieve with other visualization techniques.</p>

      <p>All the presented applications of the GSSP give - in the authors&#x2019; opinion - strong evidence that the
GSSP may be a valuable supplement to other, existing gaze pattern visualization techniques. It
should also be emphasized that the list is not exhaustive and many other measures, metrics and
interpretations may be taken into account - these issues may constitute a basis for future
analysis.</p>

      <sec id="S5a">
        <title>Acknowledgements</title>
		
        <p>The research presented in this paper was partially supported by the Silesian University
of Technology grant BK/263/RAu2/2016.
<named-content content-type="COI-statement">The authors declare that no
conflicts of interest exist.</named-content>
        </p>
      </sec>
    </sec>	
  </body>
  
  <back>
  <ref-list>
  <ref id="R21"><label>8</label><mixed-citation publication-type="book-chapter" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Binias</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Palus</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Niezabitowski</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2016</year>). <chapter-title>Elimination of bioelectrical source overlapping effects from the EEG measurements.</chapter-title> In 2016 17th International Carpathian Control Conference (ICCC) (pp. 70–75). <pub-id pub-id-type="doi">10.1109/CarpathianCC.2016.7501069</pub-id></mixed-citation></ref>

<ref id="R19"><label>6</label><mixed-citation publication-type="journal" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Anderson</surname>, <given-names>N. C.</given-names></string-name>, <string-name><surname>Bischof</surname>, <given-names>W. F.</given-names></string-name>, <string-name><surname>Laidlaw</surname>, <given-names>K. E.</given-names></string-name>, <string-name><surname>Risko</surname>, <given-names>E. F.</given-names></string-name>, &amp; <string-name><surname>Kingstone</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Recurrence quantification analysis of eye movements.</article-title> <source>Behavior Research Methods</source>, <volume>45</volume>(<issue>3</issue>), <fpage>842</fpage>–<lpage>856</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-012-0299-5</pub-id></mixed-citation></ref>
<ref id="R16"><label>3</label><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Blascheck</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Kurzhals</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Raschke</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Burch</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Weiskopf</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Ertl</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2014</year>). <chapter-title>State-of-the-art of visualization for eye tracking data.</chapter-title> In Proceedings of EuroVis (Vol. 2014).</mixed-citation></ref>
<ref id="R30"><label>17</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Burch</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Time-preserving visual attention maps.</article-title> <comment>[Springer.]</comment>. <source>Intelligent Decision Technologies</source>, <volume>2016</volume>, <fpage>273</fpage>–<lpage>283</lpage>.<issn>1872-4981</issn></mixed-citation></ref>
<ref id="R15"><label>2</label><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Demiralp</surname>, <given-names>Ç.</given-names></string-name>, <string-name><surname>Cirimele</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Heer</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Card</surname>, <given-names>S. K.</given-names></string-name></person-group> (<year>2015</year>). <article-title>The verp explorer: a tool for exploring eye movements of visual-cognitive tasks using recurrence plots.</article-title> In <source>Workshop on eye tracking and visualization</source> (pp. <fpage>41</fpage>– <lpage>55</lpage>).</mixed-citation></ref>
<ref id="R14"><label>1</label><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Duchowski</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Krejtz</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2015</year>). <chapter-title>Visualizing dynamic ambient/focal attention with coefficient k.</chapter-title> In Proceedings of ETVIS 2015.</mixed-citation></ref>
<ref id="R28"><label>15</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Hall</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Frank</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Holmes</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Pfahringer</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Reutemann</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Witten</surname>, <given-names>I. H.</given-names></string-name></person-group> (<year>2009</year>). <article-title>The WEKA data mining software: An update.</article-title> <source>SIGKDD Explorations</source>, <volume>11</volume>(<issue>1</issue>), <fpage>10</fpage>–<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1145/1656274.1656278</pub-id><issn>1931-0145</issn></mixed-citation></ref>
<ref id="R20"><label>7</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Haralick</surname>, <given-names>R. M.</given-names></string-name>, <string-name><surname>Shanmugam</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Dinstein</surname>, <given-names>I. H.</given-names></string-name></person-group> (<year>1973</year>). <article-title>Textural features for image classification.</article-title> <source>IEEE Transactions on Systems, Man, and Cybernetics</source>, <volume>3</volume>(<issue>6</issue>), <fpage>610</fpage>–<lpage>621</lpage>. <pub-id pub-id-type="doi">10.1109/TSMC.1973.4309314</pub-id><issn>0018-9472</issn></mixed-citation></ref>
<ref id="R23"><label>10</label><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Harężlak</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Kasprowski</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2014</year>). <chapter-title>Evaluating quality of dispersion based fixation detection algorithm</chapter-title>. In <source>Information sciences and systems 2014</source> (pp. <fpage>97</fpage>–<lpage>104</lpage>). <publisher-name>Springer</publisher-name>. <pub-id pub-id-type="doi">10.1007/978-3-319-09465-6_11</pub-id></mixed-citation></ref>
<ref id="R29"><label>16</label><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Kasprowski</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Mining of eye movement data to discover people intentions.</article-title> In International Conference: Beyond Databases, Architectures and Structures (pp. 355–363). <pub-id pub-id-type="doi">10.1007/978-3-319-06932-6_34</pub-id></mixed-citation></ref>
<ref id="R17"><label>4</label><mixed-citation publication-type="book-chapter" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Kasprowski</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Harezlak</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2016</year>). <chapter-title>Gaze self-similarity plots as a useful tool for eye movement characteristics analysis.</chapter-title> In Proceedings of ETVIS 2016. <pub-id pub-id-type="doi">10.1109/ETVIS.2016.7851157</pub-id></mixed-citation></ref>
<ref id="R27"><label>14</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Krejtz</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Duchowski</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Krejtz</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Szarkowska</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Kopacz</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Discerning ambient/focal attention with coefficient k.</article-title> <comment>[TAP]</comment>. <source>ACM Transactions on Applied Perception</source>, <volume>13</volume>(<issue>3</issue>), <fpage>11</fpage>. <pub-id pub-id-type="doi">10.1145/2896452</pub-id><issn>1544-3558</issn></mixed-citation></ref>
<ref id="R18"><label>5</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Marwan</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Romano</surname>, <given-names>M. C.</given-names></string-name>, <string-name><surname>Thiel</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Kurths</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2007</year>). <article-title>Recurrence plots for the analysis of complex systems.</article-title> <source>Physics Reports</source>, <volume>438</volume>(<issue>5</issue>), <fpage>237</fpage>–<lpage>329</lpage>. <pub-id pub-id-type="doi">10.1016/j.physrep.2006.11.001</pub-id><issn>0370-1573</issn></mixed-citation></ref>
<ref id="R25"><label>12</label><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Post</surname>, <given-names>R. B.</given-names></string-name>, <string-name><surname>Welch</surname>, <given-names>R. B.</given-names></string-name>, &amp; <string-name><surname>Bridgeman</surname>, <given-names>B.</given-names></string-name></person-group> (<year>2003</year>). <chapter-title>Perception and action: Two modes of processing visual information</chapter-title>. In <source>Visual perception: The influence of h. w. leibowitz</source> (pp. <fpage>143</fpage>–<lpage>154</lpage>). <publisher-name>American Psychological Association</publisher-name>. <pub-id pub-id-type="doi">10.1037/10485-011</pub-id></mixed-citation></ref>
<ref id="R22"><label>9</label><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Shic</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Scassellati</surname>, <given-names>B.</given-names></string-name>, &amp; <string-name><surname>Chawarska</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2008</year>). <article-title>The incomplete fixation measure.</article-title> In Proceedings of the 2008 symposium on eye tracking research &amp; applications (pp. 111–114). <pub-id pub-id-type="doi">10.1145/1344471.1344500</pub-id></mixed-citation></ref>
<ref id="R26"><label>13</label><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Velichkovsky</surname>, <given-names>B. M.</given-names></string-name>, <string-name><surname>Joos</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Helmert</surname>, <given-names>J. R.</given-names></string-name>, &amp; <string-name><surname>Pannasch</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2005</year>). <article-title>Two visual systems and their eye movements: Evidence from static and dynamic scene perception.</article-title> In <source>Proceedings of the XXVII Conference of the Cognitive Science Society</source> (pp. <fpage>2283</fpage>–<lpage>2288</lpage>).</mixed-citation></ref>
<ref id="R24"><label>11</label><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Vidal</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Bulling</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Gellersen</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Detection of smooth pursuits using eye movement shape features.</article-title> In <source>Proceedings of the symposium on eye tracking research and applications</source> (pp. <fpage>177</fpage>–<lpage>180</lpage>). <pub-id pub-id-type="doi">10.1145/2168556.2168586</pub-id></mixed-citation></ref>
 
 </ref-list></back>
</article>
