<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.10.3.1</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Ways of improving the precision of eye tracking data: Controlling the influence of dirt and dust on pupil detection</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Fuhl</surname>
						<given-names>Wolfgang</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>K&#xFC;bler</surname>
						<given-names>Thomas C.</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Hospach</surname>
						<given-names>Dennis</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Bringmann</surname>
						<given-names>Oliver</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Rosenstiel</surname>
						<given-names>Wolfgang</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Kasneci</surname>
						<given-names>Enkelejda</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>				
        <aff id="aff1">
		<institution>University of T&#xFC;bingen</institution>, <country>Germany</country>
        </aff>
		</contrib-group>
     
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>25</day>  
		<month>05</month>
        <year>2017</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2017</year>
	</pub-date>
      <volume>10</volume>
      <issue>3</issue>
	  <elocation-id>10.16910/jemr.10.3.1</elocation-id>
	<permissions> 
	<copyright-year>2017</copyright-year>
	<copyright-holder>Fuhl et al.</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>Eye-tracking technology has to date been primarily employed in research. With recent advances in affordable video-based devices, the implementation of gaze-aware smartphones, and marketable driver monitoring systems, a considerable step towards pervasive eye-tracking has been made. However, several new challenges arise with the usage of eye-tracking in the wild and will need to be tackled to increase the acceptance of this technology. The main challenge is still related to the usage of eye-tracking together with eyeglasses, which, in combination with reflections under changing illumination conditions, can make a subject "untrackable". If we really want to bring the technology to the consumer, we cannot simply exclude 30% of the population as potential users only because they wear eyeglasses, nor can we make them clean their glasses and the device regularly. Instead, the pupil detection algorithms need to be made robust to potential sources of noise. We hypothesize that the amount of dust and dirt on the eyeglasses and the eye-tracker camera has a significant influence on the performance of currently available pupil detection algorithms. Therefore, in this work, we present a systematic study of the effect of dust and dirt on pupil detection by simulating various quantities of dirt and dust on eyeglasses. Our results show 1) an overall high robustness to dust in an off-focus layer, 2) a vulnerability of edge-based methods to even small in-focus dust particles, and 3) a trade-off between tolerated particle size and particle amount, where a small number of rather large particles showed only a minor performance impact.</p>
      </abstract>
	   <kwd-group>
        <kwd>eye tracking</kwd>
        <kwd>pupil detection</kwd>
        <kwd>robustness</kwd>
        <kwd>dirt simulation</kwd>
        <kwd>data quality</kwd>
      </kwd-group>
    </article-meta>
  </front>

  <body>

    <sec id="S1">
      <title>Introduction</title>
	  
      <p>With the advent of affordable eye-tracking technology 
in consumer products such as controllers for video gaming, interaction 
with smartphones, or driver monitoring, new challenges arise. 
Outside of the controlled conditions of a laboratory, reliable 
eye-tracking can hardly be achieved. The main source of error in such 
settings is a non-robust pupil signal, which primarily arises from 
challenges in the image-based detection of the pupil due to changing 
illumination, especially for subjects wearing glasses. Excluding such 
subjects or declaring a customer untrackable (commonly 5-10% in lab setups, 
Schnipke and Todd (
			<xref ref-type="bibr" rid="R11">11</xref>
			)) is no longer an option. Instead, customers 
expect eye-tracking to just work. Hence, reliability of the eye-tracking 
signal remains an important issue.</p>

      <p>One of the first data-processing steps for video-based
eye-tracking is the localization of the pupil within the
eye-tracker image. Benchmark data for pupil detection are
declared especially challenging (and in fact are) if people
are simply walking around outdoors or driving a car Fuhl, Santini, 
K&#xFC;bler, and Kasneci (
			<xref ref-type="bibr" rid="R4">4</xref>
			); Fuhl, Tonsen, Bulling, and Kasneci (
			<xref ref-type="bibr" rid="R5">5</xref>
			); Tonsen, Zhang, Sugano, and Bulling (
			<xref ref-type="bibr" rid="R14">14</xref>
			). However, 
data quality means much more than the mere tracking rate Holmqvist, 
Nystr&#xF6;m, and Mulvey (
			<xref ref-type="bibr" rid="R6">6</xref>
			), yet it certainly is amongst the most
fundamental factors. A tracking loss affects all
subsequent processing steps, such as calibration and
fixation identification. Therefore, it alters almost every
key metric used in eye-tracking research Wass,
Forssman, and Lepp&#xE4;nen (
			<xref ref-type="bibr" rid="R15">15</xref>
			) and can be extremely
frustrating during interaction with a device.</p>

      <p>The development of robust algorithms has to keep
pace with the availability of consumer devices. In order
to improve the current generation of algorithms, we need
to get a better understanding of the factors that cause a
decrease of tracking quality in real-world applications.</p>

      <p>In this work, we systematically study the impact of
dust and dirt on the tracking rate. As most of today&#x2019;s
eye-trackers are video-based, dirt and smudges, both on the
device as well as on the subject&#x2019;s eyeglasses, are a potential
source of error that may be less common in a well
maintained laboratory, but become relevant in real-world
applications. Just think of a remote tracking setup in an
automotive driver monitoring system. Since it is hard to
objectively quantify the amount and nature of dirt in a
real experiment, we employ an image synthesis method
on top of real eye-tracking videos recorded during a
driving experiment. Tracking rate and performance of
four state-of-the-art pupil detection algorithms, namely
&#x15A;wirski and Dodgson (
			<xref ref-type="bibr" rid="R13">13</xref>
			), ExCuSe Fuhl, K&#xFC;bler,
Sippel, Rosenstiel, and Kasneci (
			<xref ref-type="bibr" rid="R2">2</xref>
			), Set Javadi,
Hakimi, Barati, Walsh, and Tcheang (
			<xref ref-type="bibr" rid="R7">7</xref>
			), and ElSe
Fuhl, Santini, K&#xFC;bler, and Kasneci (
			<xref ref-type="bibr" rid="R4">4</xref>
			) are evaluated.</p>

      <p>The remainder of this paper is organized as follows.
The next section gives an overview of the competing
pupil detection algorithms and discusses related work in
image synthesis for eye tracking. Details on the particle
simulation are given in Section Methods. Section Results
presents the performance of the state-of-the-art pupil
detectors for various conditions. Finally, the obtained
results are discussed and conclusions are drawn.</p>
        </sec>

    <sec id="S2">
      <title>Related work</title>
      <sec id="S2a">
        <title>Pupil detection algorithms</title>

          <p>Although many commercial eye-tracker
manufacturers do not provide exact documentation of their pupil
detection method, there are a number of published
algorithms. In the following, we will provide a summary of
the workflow for a selection of algorithms. For a more
detailed overview and comparison of the state-of-the-art
we refer the reader to a recent review by Fuhl, Tonsen, et
al. (
			<xref ref-type="bibr" rid="R5">5</xref>
			). In the following we will briefly discuss details
of some of these algorithms, namely &#x15A;wirski, Bulling,
and Dodgson (
			<xref ref-type="bibr" rid="R12">12</xref>
			), ElSe Fuhl, Santini, K&#xFC;bler, and
Kasneci (
			<xref ref-type="bibr" rid="R4">4</xref>
			), and ExCuSe Fuhl et al. (
			<xref ref-type="bibr" rid="R2">2</xref>
			) due to
their good performance in prior evaluations Fuhl, Tonsen,
et al. (
			<xref ref-type="bibr" rid="R5">5</xref>
			) and their conceptual differences. ElSe Fuhl,
Santini, K&#xFC;bler, and Kasneci (
			<xref ref-type="bibr" rid="R4">4</xref>
			) was chosen as the
currently best performing state-of-the-art method Fuhl,
Geisler, Santini, Rosenstiel, and Kasneci (
			<xref ref-type="bibr" rid="R1">1</xref>
			). We also
include the Set algorithm Javadi et al. (
			<xref ref-type="bibr" rid="R7">7</xref>
			) as a
representative of simple, threshold-based approaches.</p>
        </sec>
		
      <sec id="S2b">
        <title>Algorithm ExCuSe</title>		

          <p>The Exclusive Curve Selector (ExCuSe) Fuhl et al. (
			<xref ref-type="bibr" rid="R2">2</xref>
			) first analyzes the image based on the intensity
histogram with regard to large reflections. For images
with such reflections, the algorithm tries to find the pupil's
outer edge; otherwise this step is skipped. To localize the
pupil boundary, a Canny edge filter is applied and all
orthogonally connected edges are broken at their
intersection. This is done by applying different morphologic
operations. All non-curvy lines are then removed. For
each curved line, the average intensity of its enclosed
pixels is computed. The curved line with the darkest
enclosed intensity value is selected as pupil boundary
candidate and an ellipse fit is applied to it. If the previous
step did not yield a clear result or was skipped, a binary
threshold based on the standard deviation of the image is
applied. For four orientations, the Angular Integral
Projection Function Mohammed, Hong, and Jarjes (
			<xref ref-type="bibr" rid="R10">10</xref>
			) is
calculated on the binary image and an intersection of the
four maximal responses is determined. This intersection
location is further refined within the surrounding image
region by using similar or darker intensity values as
attractive force. Based on the refined position, the
surrounding image region is extracted and a Canny edge
filter applied. These edges are refined using the binary
image obtained by applying a calculated threshold.
Beginning at the estimated center location, rays are sent out
to select the closest edge candidates. The last step is a
least squares ellipse fit on the selected edges to correct
the pupil center location.</p>
        </sec>
		
      <sec id="S2c">
        <title>Algorithm ElSe</title>			
		
          <p>The Ellipse Selector (ElSe) Fuhl, Santini, K&#xFC;bler, and
Kasneci (
			<xref ref-type="bibr" rid="R4">4</xref>
			) begins by applying a Canny edge filter to
the eye image. Afterwards, all edges are filtered either
morphologically or algorithmically. In this work, we used
the morphological approach due to the lower
computational demands. The filter removes orthogonal
connections and applies a thinning and straightening with
different morphologic operations than ExCuSe Fuhl et al. (
			<xref ref-type="bibr" rid="R2">2</xref>
			). Afterwards, all straight lines are removed and
each curved segment is evaluated based on the enclosed
intensity value, size, ellipse parameters, and the ease of
fitting an ellipse to it. The last evaluation metric is a pupil
plausibility check. In case the previously described step
fails to detect a pupil, a convolution-based approach is
applied. To this end, a mean circle and a surface difference
circle are convolved with the downscaled image. The
magnitude result of both convolutions is then multiplied
and the maximum is selected as pupil center estimation.
This position is refined on the full-sized image by
calculating an intensity range from its neighborhood. All
connected pixels in this range are grouped and the center of
mass is calculated.</p>
        </sec>
		
      <sec id="S2d">
        <title>Algorithm Set</title>			

          <p>Set Javadi et al. (
			<xref ref-type="bibr" rid="R7">7</xref>
			) can be subdivided into pupil
extraction and validation. An intensity threshold is
provided as a parameter and used to convert the input image
into a binary image. All connected pixels below (darker
than) the threshold are considered as belonging to the
pupil and grouped together. Pixel groups that exceed a
certain size, provided to the algorithm as a second
parameter, are selected as possible pupil candidates. For each
such group the convex hull is computed and an ellipse is
fit to it. This ellipse fit is based on comparing the sine
and cosine part of each segment to possible ellipse axis
parameters. The most circular segment is chosen as the
final pupil.</p>
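<p>As a rough illustration of this threshold-and-group stage, the following Python sketch thresholds an eye image, groups connected dark pixels, and keeps sufficiently large groups as pupil candidates. It is not the authors' implementation; in particular, the convex-hull ellipse fit is replaced here by a simple center-of-mass estimate.</p>

```python
import numpy as np
from collections import deque

def set_like_pupil_candidates(img, intensity_thresh, min_size):
    """Sketch of Set-style pupil extraction: threshold the image,
    group connected dark pixels, keep sufficiently large groups,
    and return the center of mass (x, y) of each candidate group."""
    dark = img < intensity_thresh          # binary inside/outside mask
    visited = np.zeros_like(dark, dtype=bool)
    candidates = []
    h, w = dark.shape
    for sy, sx in zip(*np.nonzero(dark)):
        if visited[sy, sx]:
            continue
        # BFS over 4-connected dark pixels
        queue, group = deque([(sy, sx)]), []
        visited[sy, sx] = True
        while queue:
            y, x = queue.popleft()
            group.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and dark[ny, nx] and not visited[ny, nx]:
                    visited[ny, nx] = True
                    queue.append((ny, nx))
        if len(group) >= min_size:          # size gate (the second parameter)
            ys, xs = zip(*group)
            candidates.append((float(np.mean(xs)), float(np.mean(ys))))
    return candidates
```

<p>For a dark pupil on a brighter background, the center of mass of the largest surviving group already approximates the pupil center.</p>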
        </sec>
		
      <sec id="S2e">
        <title>Algorithm by &#x15A;wirski et al.</title>			

          <p>In a first step of the algorithm introduced by &#x15A;wirski
et al. (
			<xref ref-type="bibr" rid="R12">12</xref>
			), Haar-Cascade-like features of different sizes
are used to find a coarse position for the pupil. To save
computational costs this is done on the integral image.
The range at which these features are searched is
specified by a minimum and maximum pupil radius.</p>

          <p>This results in a magnitude map where the strongest
response is selected as coarse pupil center estimate. An
intensity histogram is calculated on the surrounding
region. This histogram is segmented using k-means
clustering, thus resulting in an intensity threshold. This
threshold converts the image into a binary inside-pupil/
outside-pupil image. The largest continuously connected
patch is selected as pupil and its center of mass as the
refined pupil center location. In the final step, an ellipse
is fitted to the pupil boundary. A morphologic
preprocessing by an opening operation is applied to the image to
remove the eyelashes. Afterwards, the Canny edge
detector is used for edge extraction. Edges that surround the
refined pupil location are selected and an ellipse is fitted
using RANSAC and an image aware support function for
edge pixel selection.</p>
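<p>The histogram-segmentation step can be illustrated with a two-cluster 1-D k-means on the intensity values of the surrounding region; the midpoint between the dark (pupil) and bright (background) cluster means then serves as the binarization threshold. This is a simplified sketch, not the original implementation.</p>

```python
import numpy as np

def kmeans_intensity_threshold(region, iters=20):
    """Two-cluster 1-D k-means on pixel intensities; the threshold is
    placed midway between the dark (pupil) and bright (background) means."""
    vals = region.astype(float).ravel()
    c_dark, c_bright = vals.min(), vals.max()   # initialize at the extremes
    for _ in range(iters):
        # assign each pixel to the nearer cluster center
        assign = np.abs(vals - c_dark) <= np.abs(vals - c_bright)
        if assign.all() or not assign.any():
            break                               # degenerate single-cluster case
        c_dark, c_bright = vals[assign].mean(), vals[~assign].mean()
    return (c_dark + c_bright) / 2.0
```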
        </sec>

      <sec id="S2f">
        <title>Image synthesis in eye-tracking algorithm development</title>	

          <p>Each eye-tracking recording is associated with a quite
unique mixture of noise components. Therefore, artificial
eye models and image synthesis methods for eye-tracker
images were created in order to produce mostly
artifact-free recordings. &#x15A;wirski and Dodgson (
			<xref ref-type="bibr" rid="R13">13</xref>
			) even model
and render the complete head to generate data for remote
as well as head-mounted eye trackers. The model renders
as physically correctly as possible, including reflections,
refraction, shadows, and depth-of-field blur, together with
facial features such as eyelashes and eyebrows.</p>

          <p>Wood, Baltru&#x161;aitis, Morency, Robinson, and Bulling(
			<xref ref-type="bibr" rid="R17">17</xref>
			) used rendered images for estimating the gaze of a
person using a k-nearest-neighbors estimator. For fast
rendering they employed the Unity game engine.
Furthermore, the authors generated data for different skin
colors, head poses, and pupil states. The accuracy of this
concept was further improved by Zhang, Sugano, Fritz,
and Bulling (
			<xref ref-type="bibr" rid="R18">18</xref>
			) using a convolutional neural network
trained on the complete face.</p>

          <p>The work by K&#xFC;bler, Rittig, Kasneci, Ungewiss, and
Krauss (
			<xref ref-type="bibr" rid="R9">9</xref>
			) goes in a different direction. More
specifically, the authors evaluate the effect of eyeglasses
on traditional gaze estimation methods by including the
optical medium into the simulation model. The authors
showed that eyeglasses have a major impact on gaze
direction predicted by a geometrical model, but not on
that of a polynomial fit.</p>
        </sec>
      </sec>

    <sec id="S3">
      <title>Methods</title>
	  
  
	  
      <sec id="S3a">
        <title>Observations on real recordings</title>
	  
          <p>To get an impression of the impact of dust during
real-world eye-tracking, we browsed about 30 datasets
from a real-world driving experiment Kasneci et al. (
			<xref ref-type="bibr" rid="R8">8</xref>
			). Dust particles are almost invisible on still images,
but become clearly visible in a video. This is due to the
static behavior of dust while the eye is moving. Figure 1
shows some examples of dust we found in the dataset.</p>
       
		
	<fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1</label>
					<caption>
						<p>Example eye images of two subjects showing dust particles within the focus layer of the camera (chosen because these are best visible in print). Most dust particles in our data were located slightly outside of the focus layer and therefore blurred.</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-10-03-a-figure-01.png"/>
				</fig>

 </sec>				
		
      <sec id="S3b">
        <title>Dirt particle image synthesis</title>		

          <p>In 2005, Willson et al. first described a method for the
simulation of dust particles on optical elements in
Willson, Maimone, Johnson, and Scherr (
			<xref ref-type="bibr" rid="R16">16</xref>
			). They
formulated a camera model, derived the influence of dust
particles on the final image, and specified formulae to
calculate these effects. However, their particle model and the
final image synthesis were still lacking some of the
occurring effects: particles were modeled as circular
achromatic shapes that were distributed on a plane
perpendicular to the image sensor. We extended their work
by modeling particles as triangulated objects with color
and positions in 3D, which allows distribution in
potentially arbitrary 3D-subspaces, e.g. curved and rotated
planes. By modeling particles as a set of triangles and
adding color information, the formulae for intersection
calculation and color blending are significantly different
and more expensive to compute. To calculate the final
image in reasonable time, an R-tree is used to speed up the
search for particles close to a specified point.</p>

          <p>As presented by Willson et al. (
			<xref ref-type="bibr" rid="R16">16</xref>
			), the influence of
dust particles on the final image depends on the camera
model and the dust particle model. These models are
presented in the following subsections.</p>
        </sec>

      <sec id="S3c">
        <title>Camera model</title>	

          <p>In real cameras, an aperture controls the amount of
light and the directions from which light is collected. This
bundle of light rays is called the collection cone. Such a
camera model was employed in this work to
vary the depth of field, to obtain naturally blurred images
of objects that are out of focus, and to control the amount
of light and blur (Figure 2).</p>

<fig id="fig02" fig-type="figure" position="float">
					<label>Figure 2</label>
					<caption>
						<p>Camera model of the dust simulation. Notice the parameters A (area of the projected collection cone), <italic>p<sub>w</sub></italic> (intersection point of the collection cone with the dust plane), and <italic>p<sub>x,y</sub></italic> (pixel position on the sensor).</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-10-03-a-figure-02.png"/>
				</fig>

        </sec>
		
      <sec id="S3d">
        <title>Dust particle model</title>			

          <p>We model dust particles based on position, color
(including an alpha channel for transparency), shape
variance, and size. They are randomly distributed on a
user-defined plane, not necessarily perpendicular to the image
plane. As the basic shape, we modify a circle by smooth
deviations. The final shape is triangulated, and the level of
triangulation detail is controlled by specifying the number of
edges for each particle. The maximum extent of the dust
plane is calculated using the maximum angle of view of
the camera. Finally, by setting the number of particles for
the next simulation run, the desired particles are
distributed over the given plane subset. Using the center 
location <italic>d<sub>i</sub></italic> of the dust particle <italic>d&#x302;<sub>i</sub></italic> 
with index <italic>i</italic>, its radius <italic>r<sub>i</sub></italic>, and the number of edges <italic>n</italic>, 
the edge vertices <italic>v<sub>i</sub></italic> of a circle-like particle can be 
calculated as formulated in the following equation</p>

<fig id="eq01" fig-type="equation" position="anchor">
                    <label>(1)</label>
					<graphic id="equation01" xlink:href="jemr-10-03-a-equation-01.png"/>
				</fig>

          <p>where 0 &#x2264; <italic>j</italic> &#x2264; <italic>n</italic>. These vertices are then appended to
form a polygon. A similar approach is used for varying
the particle shape. Setting a property value <italic>k</italic> &#x2208; [0; 2],
which controls the scale of the shape variance, a new
radius is calculated for each of the edge vertices by</p>

<fig id="eq02" fig-type="equation" position="anchor">
                    <label>(2)</label>
					<graphic id="equation02" xlink:href="jemr-10-03-a-equation-02.png"/>
				</fig>

          <p>These randomly shaped particles are then 
smoothed by using simple interpolation between two edge 
points. For each particle at location <italic>d<sub>i</sub></italic>, the left and 
right neighbor (<italic>d<sub>i-1</sub></italic> and <italic>d<sub>i+1</sub></italic>) are taken into account. 
Finally, the edge vertices are smoothed using the equation</p>

<fig id="eq03" fig-type="equation" position="anchor">
                    <label>(3)</label>
					<graphic id="equation03" xlink:href="jemr-10-03-a-equation-03.png"/>
				</fig>
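<p>Because Equations 1-3 are given as figures, the following Python sketch only illustrates the described three-step construction; the concrete forms of the radius variation and the neighbor smoothing are assumptions consistent with the text, not the authors' exact formulae.</p>

```python
import numpy as np

def particle_vertices(center, radius, n, k, rng):
    """Build a circle-like dust particle: n edge vertices on a circle
    (cf. Eq. 1), a per-vertex radius variation scaled by k in [0, 2]
    (cf. Eq. 2), and neighbor smoothing of the vertices (cf. Eq. 3)."""
    angles = 2.0 * np.pi * np.arange(n) / n
    # Step 1: vertices of a regular n-gon around the particle center
    radii = np.full(n, float(radius))
    # Step 2: vary each radius; k controls the scale of the variation
    radii *= 1.0 + k * (rng.random(n) - 0.5)
    vx = center[0] + radii * np.cos(angles)
    vy = center[1] + radii * np.sin(angles)
    verts = np.stack([vx, vy], axis=1)
    # Step 3: smooth each vertex with its left/right neighbors
    left, right = np.roll(verts, 1, axis=0), np.roll(verts, -1, axis=0)
    return (left + 2.0 * verts + right) / 4.0
```

<p>With k = 0 the result is a slightly shrunken regular polygon; increasing k produces the "varying polygons" visible in Figure 6(d).</p>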

        </sec>
		
      <sec id="S3e">
        <title>Image composition.</title>			

          <p>Every image that enters the simulation has already
been recorded with a real camera. The parameters chosen
for the simulation should therefore be as close as possible
to the real recording camera. The appearance of the dust
particles will only yield correct results if this condition
holds. For each pixel <italic>p<sub>x,y</sub></italic> on the output image, the
following steps are performed to calculate the final pixel
color.</p>

          <p>First, the intersection point <italic>p<sub>w</sub></italic> of the light ray starting
at the pixel at <italic>p<sub>x,y</sub></italic>, and leaving through the center of the
aperture towards the scene with the dust plane needs to be
found. In case of perpendicular planes, this can be
calculated rather easily using similar triangles Willson et al. (
			<xref ref-type="bibr" rid="R16">16</xref>
			). If the plane has arbitrary geometry, it is best
calculated using a ray-plane intersection.</p>
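<p>The general case can be sketched with a standard ray-plane intersection (a generic formulation, not the authors' code):</p>

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersect the ray origin + t*direction (t >= 0) with the plane
    through plane_point with normal plane_normal; returns the hit point,
    or None if the ray is parallel to or points away from the plane."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-12:
        return None                       # ray parallel to the dust plane
    t = np.dot(plane_point - origin, plane_normal) / denom
    if t < 0:
        return None                       # plane lies behind the ray origin
    return np.asarray(origin, float) + t * np.asarray(direction, float)
```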

          <p>Second, the projection of the collection cone at the
point <italic>p<sub>w</sub></italic> needs to be calculated. This is done by
projecting the triangle edge points of the aperture onto the plane,
yielding a projected polygon <italic>c<sub>w</sub></italic> of the collection cone section with area A.</p>

          <p>To determine the influence of the dust particles on the
final output color, all surrounding particles that satisfy
the condition &#x2016;<italic>p<sub>w</sub></italic>  &#x2212; <italic>d<sub>i</sub></italic>&#x2016; &lt; &#x3B1; + 2 &#x2217; <italic>r<sub>i</sub></italic> are retrieved. They
are referred to as the subset <italic>C</italic> of dust particles in the
following. These particles potentially have an influence
on the final pixel color. To calculate the magnitude of
that influence, for each particle <italic>d<sub>i</sub></italic> &#x2208; <italic>C</italic>, the intersection
area <italic>A<sub>i</sub></italic> with <italic>c<sub>w</sub></italic> is calculated, assuming that the
following equation for the overall area holds:</p>
 
 <fig id="eq04" fig-type="equation" position="anchor">
                     <label>(4)</label>
					<graphic id="equation04" xlink:href="jemr-10-03-a-equation-04.png"/>
				</fig>
 
          <p>The fraction <italic>&#x3B1;<sub>i</sub></italic> = <italic>A<sub>i</sub>/A</italic> 
is the alpha-blending factor of <italic>d&#x302;<sub>i</sub></italic>
and determines the amount of its color contributing to the
final pixel color. Therefore, if a particle has a large overlap
with the current collection cone, the final output color of
that pixel is strongly mixed with the particle color. Figure 3 
visualizes this process.</p>
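<p>Given the intersection areas, the blending step for one pixel amounts to a weighted color mix. The sketch below assumes that the background contributes the fraction of the collection cone left uncovered by particles, which Equation 4 guarantees to be non-negative.</p>

```python
import numpy as np

def blend_pixel(background, particle_colors, areas, cone_area):
    """Alpha-blend dust particles into one pixel: each particle d_i
    contributes with weight alpha_i = A_i / A, and the background
    contributes the fraction of the collection cone left uncovered."""
    alphas = np.asarray(areas, dtype=float) / cone_area
    colors = np.asarray(particle_colors, dtype=float)
    covered = alphas.sum()                  # total covered fraction of A
    return (1.0 - covered) * np.asarray(background, float) + alphas @ colors
```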

<fig id="fig03" fig-type="figure" position="float">
					<label>Figure 3</label>
					<caption>
						<p>Estimation of the mixing factors for the final output color. The blending factor of each particle is calculated as its fraction of the projected aperture area.</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-10-03-a-figure-03.png"/>
				</fig>

        </sec>

      <sec id="S3f">
        <title>Optimization of the computational time</title>			

      <p>For fast retrieval of the particles close to <italic>c<sub>w</sub></italic>, the
Boost implementation of an R-tree is used. Further, since 
the blending factors are constant as long as the camera
parameters remain the same, an attenuation image is
calculated that can be applied to all subsequent images of
a stream. The generation of the attenuation image is
computationally expensive, whereas the application to an
image can be done in real-time. The attenuation image
contains the alpha-blending values and the color
information for each pixel on the sensor and is valid as long as
the camera parameters remain fixed.</p>
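<p>Assuming the attenuation image stores a per-pixel alpha map and a per-pixel dust color map (an assumption about its layout), applying it to a new frame reduces to a single vectorized blend:</p>

```python
import numpy as np

def apply_attenuation(frame, alpha_map, color_map):
    """Apply a precomputed attenuation image to one video frame:
    alpha_map holds the per-pixel dust coverage in [0, 1] and color_map
    the per-pixel dust color. Both stay valid while the camera
    parameters are fixed, so they are computed once per stream."""
    a = alpha_map[..., None]              # broadcast over color channels
    return ((1.0 - a) * frame.astype(float) + a * color_map).astype(frame.dtype)
```

<p>The expensive particle geometry is thus paid once; per-frame application is a constant-time operation per pixel, which is what makes the real-time claim plausible.</p>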
        </sec>
		
      <sec id="S3g">
        <title>Dataset</title>			

        <p>We evaluated our approach on a subset of the data set 
by Fuhl et al. (
			<xref ref-type="bibr" rid="R2">2</xref>
			), namely data sets X, XII, XIV, and XVII (Figure 4). 
Based on visual inspection, these data sets were found to be mostly free 
of dust particles and thus provided good baseline results for all of the 
evaluated algorithms. A total of 2,101 images from four different subjects 
were extracted. These images do not contain any other challenges to 
pupil detection, such as make-up or contact lenses, since we wanted to 
investigate the isolated influence of dust and dirt. However, all 
subjects wore eyeglasses and the ambient illumination changed. Furthermore, 
we did not use completely synthetic images, as comparable results can 
only be achieved under strict laboratory conditions, where dust would 
usually simply be removed from the recording devices.</p>

<fig id="fig04" fig-type="figure" position="float">
					<label>Figure 4</label>
					<caption>
						<p>Example images selected from the respective data sets published by Fuhl et al. (2015).</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-10-03-a-figure-04.png"/>
				</fig>

        <p>Figure 5 shows the influence of the focal length on 
the final image. It should be noted here that in a realistic 
scenario the focal length would also have an influence on the image 
of the eye, not just on the particles. This effect was omitted here 
(visible, for example, at the eyelashes). For the images in 
Figure 5, a focal length of 5.6 puts the dust particles in focus. 
For real dust particles, this depends on their distance to the 
camera and mainly on the design of the worn glasses. Most eye 
cameras do not employ an autofocus mechanism but provide a 
possibility of adjusting the focus. However, it is rarely adjusted 
with dust on the eyeglasses in mind (in our experience, also not by 
the manufacturers). In the following, we will use the value 
of 5.6 mm focus as a reference.</p>

<fig id="fig05" fig-type="figure" position="float">
					<label>Figure 5</label>
					<caption>
						<p>Simulation results for different focal lengths on one image. 200 particles of size group 2 were inserted.</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-10-03-a-figure-05.png"/>
				</fig>

        <p>Another important aspect of dirt is the size of the 
different particles. This effect is shown in Figure 6. The amount 
and focal length are fixed to 200 and 5.6, respectively. In real-world 
recordings, dust can occur in different sizes, for which we 
used four size groups. As can be seen in Figure 6(d), the particles are 
not simple dots but varying polygons.</p>

<fig id="fig06" fig-type="figure" position="float">
					<label>Figure 6</label>
					<caption>
						<p>Simulation results for different particle size groups on one image. The particle amount is set to 200 and the focal length is 5.6.</p>
					</caption>
					<graphic id="graph06" xlink:href="jemr-10-03-a-figure-06.png"/>
				</fig>

        <p>The effect of the amount of particles rendered can be 
seen in Figure 7. The particles are spread uniformly over the image. 
This is one limitation of the current simulation, as realistic 
dust distributions would take into account the curvature of the 
camera lens and the curvature of the subject's glasses.</p>

<fig id="fig07" fig-type="figure" position="float">
					<label>Figure 7</label>
					<caption>
						<p>Simulation results for different amounts of particles on one image. The particle size group is set to 2 and the focal length is 5.6.</p>
					</caption>
					<graphic id="graph07" xlink:href="jemr-10-03-a-figure-07.png"/>
				</fig>

        </sec>
      </sec>

    <sec id="S4">
      <title>Results</title>		
		
        <p>Figure 8 shows the detection rate of the evaluated
algorithms over all data sets. The detection rate is reported
based on the difference in pixels between the manually
labeled and the automatically detected pupil center (pixel
error). The red vertical line marks the results (i.e., the
detection rate) for a pixel error of 5, which is considered an
acceptable pixel error for the given image resolution.</p>
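<p>The reported metric can be computed directly from per-frame center errors; the sketch below assumes one labeled and one detected pupil center per image.</p>

```python
import numpy as np

def detection_rate(detected, labeled, max_pixel_error=5.0):
    """Fraction of frames whose Euclidean distance between the detected
    and the manually labeled pupil center is within max_pixel_error."""
    d = np.linalg.norm(np.asarray(detected, float) - np.asarray(labeled, float), axis=1)
    return float(np.mean(d <= max_pixel_error))
```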

<fig id="fig08" fig-type="figure" position="float">
					<label>Figure 8</label>
					<caption>
						<p>Results on all data sets without dust simulation. The detection rate is shown with regard to the Euclidean distance error in pixels. The vertical red line shows the tracking rate at a 5 pixel error, the tolerance where all algorithms have reached saturation and that we will use throughout the following evaluation.</p>
					</caption>
					<graphic id="graph08" xlink:href="jemr-10-03-a-figure-08.png"/>
				</fig>

        <p>For the evaluation, we simulated the data sets with all
combinations of focal length (i.e., 2.8, 4.0, and 5.6), size
groups (1-4) and particle amount (50-500). Dirt particle
placement is calculated based on a uniform distribution.
To ensure the same dirt placement for each algorithm, we
stored the simulation results and performed the
evaluation on the stored images.</p>
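<p>The placement step described above can be sketched as follows; the function name and parameters are illustrative, not the simulator's actual interface. A fixed random seed reproduces an identical dirt layout for every algorithm, which is the purpose of storing the simulation results:</p>

```python
import random

def place_particles(width, height, amount, seed=0):
    """Draw particle centers from a uniform distribution over the image."""
    rng = random.Random(seed)  # fixed seed -> same placement for all runs
    return [(rng.uniform(0, width), rng.uniform(0, height))
            for _ in range(amount)]

# e.g. 200 particles on a 384x288 eye image
particles = place_particles(384, 288, amount=200, seed=42)
```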

        <p>Tables 1, 2, 3, and 4 show the impact of different
simulation parameters on the detection rate of the
algorithms SET, &#x15A;wirski, ExCuSe, and ElSe, respectively. The
provided values describe the loss of detection rate for a
pixel error of five in direct comparison to the detection
rate on the original data set. According to these results, the
algorithm SET Javadi et al. (
			<xref ref-type="bibr" rid="R7">7</xref>
			) seems to be more
robust to dust than the competitor algorithms.
Interestingly, in Fuhl, Tonsen, et al. (
			<xref ref-type="bibr" rid="R5">5</xref>
			), SET was found to
handle reflections poorly, yet in this evaluation SET
showed the highest robustness to simulated dirt. Only the
focal length in combination with large amounts of white
dust interferes with the pupil detection of this method.
This robustness stems from the threshold-based
nature of SET.</p>
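<p>A toy illustration of this point (not the actual SET implementation): a detector that segments the pupil as the dark below-threshold region ignores bright dust entirely, because white particles never enter the candidate set:</p>

```python
def dark_region_center(img, threshold=60):
    """Centroid of all pixels darker than the threshold."""
    pts = [(x, y) for y, row in enumerate(img)
           for x, v in enumerate(row) if v < threshold]
    xs, ys = zip(*pts)
    return sum(xs) / len(pts), sum(ys) / len(pts)

clean = [[200, 200, 200],
         [200,  20, 200],   # the single dark pixel plays the pupil
         [200, 200, 200]]
dusty = [[255, 200, 200],   # bright dust pixels in two corners
         [200,  20, 200],
         [200, 200, 255]]
# The dark-pixel centroid is unaffected by the bright particles.
assert dark_region_center(clean) == dark_region_center(dusty) == (1.0, 1.0)
```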

<table-wrap id="t01" position="float">
					<label>Table 1</label>
					<caption>
						<p>Performance of the SET algorithm. The values show the reduction in detection rate (in %) due to dirt at a pixel-error tolerance of 5. The baseline is the detection rate achieved on clean images. F denotes the focal length, SG the size group, and P50-P500 the amount of particles. Bold highlights a reduction of more than 10% relative to clean data.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
  <tr>
    <th align="center" rowspan="1" >F</th>
    <th align="center" rowspan="1" colspan="1">SG</th>
    <th align="center" rowspan="1" colspan="1">P50</th>
    <th align="center" rowspan="1" colspan="1">P100</th>
    <th align="center" rowspan="1" colspan="1">P200</th>
    <th align="center" rowspan="1" colspan="1">P300</th>
    <th align="center" rowspan="1" colspan="1">P400</th>
    <th align="center" rowspan="1" colspan="1">P500</th>
  </tr>
						</thead>
						<tbody>
  <tr>
    <td align="center"   rowspan="4">2.8</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-7</td>
    <td align="center" rowspan="1" colspan="1">-9</td>
    <td align="center" rowspan="1" colspan="1"><bold>-12</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-13</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">3</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1"><bold>-12</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-17</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-29</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-24</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">4</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
    <td align="center" rowspan="1" colspan="1"><bold>-13</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-24</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-22</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-37</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-52</bold></td>
  </tr>
  <tr>
    <td align="center"   rowspan="4">4.0</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
    <td align="center" rowspan="1" colspan="1">-4</td>
    <td align="center" rowspan="1" colspan="1"><bold>-11</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">3</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1">-6</td>
    <td align="center" rowspan="1" colspan="1">-7</td>
    <td align="center" rowspan="1" colspan="1">-9</td>
    <td align="center" rowspan="1" colspan="1">-9</td>
    <td align="center" rowspan="1" colspan="1"><bold>-20</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">4</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
    <td align="center" rowspan="1" colspan="1">-6</td>
    <td align="center" rowspan="1" colspan="1"><bold>-12</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-20</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-36</bold></td>
  </tr>
  <tr>
    <td align="center"   rowspan="4">5.6</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1">-5</td>
    <td align="center" rowspan="1" colspan="1">-5</td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">3</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-9</td>
    <td align="center" rowspan="1" colspan="1">-8</td>
    <td align="center" rowspan="1" colspan="1">-8</td>
    <td align="center" rowspan="1" colspan="1"><bold>-14</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">4</td>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-8</td>
    <td align="center" rowspan="1" colspan="1"><bold>-18</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-18</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-21</bold></td>
  </tr>
						</tbody>
					</table>
					</table-wrap>
					
<table-wrap id="t02" position="float">
					<label>Table 2</label>
					<caption>
						<p>Performance of the &#x15A;wirski algorithm. The values show the reduction in detection rate (in %) due to dirt at a pixel-error tolerance of 5. The baseline is the detection rate achieved on clean images. F denotes the focal length, SG the size group, and P50-P500 the amount of particles. Bold highlights a reduction of more than 10% relative to clean data.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
  <tr>
    <th align="center" rowspan="1" >F</th>
    <th align="center" rowspan="1" colspan="1">SG</th>
    <th align="center" rowspan="1" colspan="1">P50</th>
    <th align="center" rowspan="1" colspan="1">P100</th>
    <th align="center" rowspan="1" colspan="1">P200</th>
    <th align="center" rowspan="1" colspan="1">P300</th>
    <th align="center" rowspan="1" colspan="1">P400</th>
    <th align="center" rowspan="1" colspan="1">P500</th>
  </tr>
						</thead>
						<tbody>
  <tr>
    <td align="center"   rowspan="4">2.8</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-4</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-4</td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
    <td align="center" rowspan="1" colspan="1">-5</td>
    <td align="center" rowspan="1" colspan="1"><bold>-23</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-25</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-16</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-33</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">3</td>
    <td align="center" rowspan="1" colspan="1"><bold>-12</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-18</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-36</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-48</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-58</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-43</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">4</td>
    <td align="center" rowspan="1" colspan="1"><bold>-23</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-42</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-46</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-44</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-55</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-65</bold></td>
  </tr>
  <tr>
    <td align="center"   rowspan="4">4.0</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1">-4</td>
    <td align="center" rowspan="1" colspan="1">-5</td>
    <td align="center" rowspan="1" colspan="1"><bold>-14</bold></td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
    <td align="center" rowspan="1" colspan="1"><bold>-14</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-15</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-17</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-31</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">3</td>
    <td align="center" rowspan="1" colspan="1">-8</td>
    <td align="center" rowspan="1" colspan="1"><bold>-29</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-36</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-39</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-60</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-52</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">4</td>
    <td align="center" rowspan="1" colspan="1">-8</td>
    <td align="center" rowspan="1" colspan="1"><bold>-37</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-36</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-42</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-46</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-69</bold></td>
  </tr>
  <tr>
    <td align="center"   rowspan="4">5.6</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1">-5</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1">-5</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1"><bold>-14</bold></td>
    <td align="center" rowspan="1" colspan="1">-4</td>
    <td align="center" rowspan="1" colspan="1"><bold>-14</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-13</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">3</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1"><bold>-14</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-25</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-42</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-29</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-60</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">4</td>
    <td align="center" rowspan="1" colspan="1">-9</td>
    <td align="center" rowspan="1" colspan="1"><bold>-25</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-56</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-44</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-58</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-66</bold></td>
  </tr>
						</tbody>
					</table>
					</table-wrap>	
					
<table-wrap id="t03" position="float">
					<label>Table 3</label>
					<caption>
						<p>Performance of the ExCuSe algorithm. The values show the reduction in detection rate (in %) due to dirt at a pixel-error tolerance of 5. The baseline is the detection rate achieved on clean images. F denotes the focal length, SG the size group, and P50-P500 the amount of particles. Bold highlights a reduction of more than 10% relative to clean data.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
  <tr>
    <th align="center" rowspan="1" >F</th>
    <th align="center" rowspan="1" colspan="1">SG</th>
    <th align="center" rowspan="1" colspan="1">P50</th>
    <th align="center" rowspan="1" colspan="1">P100</th>
    <th align="center" rowspan="1" colspan="1">P200</th>
    <th align="center" rowspan="1" colspan="1">P300</th>
    <th align="center" rowspan="1" colspan="1">P400</th>
    <th align="center" rowspan="1" colspan="1">P500</th>
  </tr>
						</thead>
						<tbody>
  <tr>
    <td align="center"   rowspan="4">2.8</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">0</td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-7</td>
    <td align="center" rowspan="1" colspan="1"><bold>-11</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-14</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">3</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-7</td>
    <td align="center" rowspan="1" colspan="1"><bold>-14</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-27</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-35</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-37</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">4</td>
    <td align="center" rowspan="1" colspan="1">-4</td>
    <td align="center" rowspan="1" colspan="1"><bold>-17</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-31</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-45</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-42</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-59</bold></td>
  </tr>
  <tr>
    <td align="center"   rowspan="4">4.0</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">-5</td>
    <td align="center" rowspan="1" colspan="1"><bold>-12</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-17</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-25</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-28</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">3</td>
    <td align="center" rowspan="1" colspan="1">-5</td>
    <td align="center" rowspan="1" colspan="1"><bold>-11</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-26</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-26</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-57</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-54</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">4</td>
    <td align="center" rowspan="1" colspan="1">-6</td>
    <td align="center" rowspan="1" colspan="1"><bold>-18</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-29</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-47</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-62</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-75</bold></td>
  </tr>
  <tr>
    <td align="center"   rowspan="4">5.6</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-5</td>
    <td align="center" rowspan="1" colspan="1">-5</td>
    <td align="center" rowspan="1" colspan="1">-8</td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
    <td align="center" rowspan="1" colspan="1">-8</td>
    <td align="center" rowspan="1" colspan="1"><bold>-20</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-25</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-40</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-42</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">3</td>
    <td align="center" rowspan="1" colspan="1">-5</td>
    <td align="center" rowspan="1" colspan="1"><bold>-16</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-31</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-48</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-53</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-76</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">4</td>
    <td align="center" rowspan="1" colspan="1"><bold>-13</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-25</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-44</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-58</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-71</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-83</bold></td>
  </tr>
						</tbody>
					</table>
					</table-wrap>

<table-wrap id="t04" position="float">
					<label>Table 4</label>
					<caption>
						<p>Performance of the ElSe algorithm. The values show the reduction in detection rate (in %) due to dirt at a pixel-error tolerance of 5. The baseline is the detection rate achieved on clean images. F denotes the focal length, SG the size group, and P50-P500 the amount of particles. Bold highlights a reduction of more than 10% relative to clean data.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
  <tr>
    <th align="center" rowspan="1" >F</th>
    <th align="center" rowspan="1" colspan="1">SG</th>
    <th align="center" rowspan="1" colspan="1">P50</th>
    <th align="center" rowspan="1" colspan="1">P100</th>
    <th align="center" rowspan="1" colspan="1">P200</th>
    <th align="center" rowspan="1" colspan="1">P300</th>
    <th align="center" rowspan="1" colspan="1">P400</th>
    <th align="center" rowspan="1" colspan="1">P500</th>
  </tr>
						</thead>
						<tbody>
  <tr>
    <td align="center"   rowspan="4">2.8</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-4</td>
    <td align="center" rowspan="1" colspan="1">-5</td>
    <td align="center" rowspan="1" colspan="1">-8</td>
    <td align="center" rowspan="1" colspan="1"><bold>-12</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-14</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">3</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
    <td align="center" rowspan="1" colspan="1">-7</td>
    <td align="center" rowspan="1" colspan="1"><bold>-13</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-21</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-31</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-33</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">4</td>
    <td align="center" rowspan="1" colspan="1">-5</td>
    <td align="center" rowspan="1" colspan="1"><bold>-15</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-31</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-40</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-43</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-59</bold></td>
  </tr>
  <tr>
    <td align="center"   rowspan="4">4.0</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">0</td>
    <td align="center" rowspan="1" colspan="1">-1</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
    <td align="center" rowspan="1" colspan="1">-5</td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">-3</td>
    <td align="center" rowspan="1" colspan="1">-9</td>
    <td align="center" rowspan="1" colspan="1"><bold>-14</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-29</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-29</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-31</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">3</td>
    <td align="center" rowspan="1" colspan="1">-8</td>
    <td align="center" rowspan="1" colspan="1"><bold>-13</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-30</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-54</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-54</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-51</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">4</td>
    <td align="center" rowspan="1" colspan="1"><bold>-10</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-22</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-33</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-61</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-61</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-79</bold></td>
  </tr>
  <tr>
    <td align="center"   rowspan="4">5.6</td>
    <td align="center" rowspan="1" colspan="1">1</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1">-5</td>
    <td align="center" rowspan="1" colspan="1">-8</td>
    <td align="center" rowspan="1" colspan="1">-8</td>
    <td align="center" rowspan="1" colspan="1"><bold>-11</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">2</td>
    <td align="center" rowspan="1" colspan="1">-4</td>
    <td align="center" rowspan="1" colspan="1">-8</td>
    <td align="center" rowspan="1" colspan="1">-2</td>
    <td align="center" rowspan="1" colspan="1"><bold>-28</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-40</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-38</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">3</td>
    <td align="center" rowspan="1" colspan="1">-4</td>
    <td align="center" rowspan="1" colspan="1"><bold>-14</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-29</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-43</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-50</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-70</bold></td>
  </tr>
  <tr>
    <td align="center" rowspan="1" colspan="1">4</td>
    <td align="center" rowspan="1" colspan="1"><bold>-13</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-21</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-41</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-54</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-69</bold></td>
    <td align="center" rowspan="1" colspan="1"><bold>-77</bold></td>
  </tr>
						</tbody>
					</table>
					</table-wrap>					
														
        <p>The algorithms &#x15A;wirski et al. (
			<xref ref-type="bibr" rid="R12">12</xref>
			), ExCuSe Fuhl et al. (
			<xref ref-type="bibr" rid="R2">2</xref>
			) and ElSe Fuhl, Santini, K&#xFC;bler, and Kasneci (
			<xref ref-type="bibr" rid="R4">4</xref>
			) are, in contrast, all based on edge detection. Since
dust particles in the image interfere with the performance
of the Canny edge detector, the pupil boundary cannot be
extracted robustly. In addition, edges induced by the
particles themselves, when connected to the pupil edge, make the
ellipse fit much harder. Therefore, further improvements
to these algorithms should investigate automatic threshold
adjustments. Preliminary image refinement
steps are also necessary, since thresholding is not appropriate
for light gradients over the pupil.</p>

        <p>For a better overview, a heatmap visualization of the
results given in Tables 1-4 is shown in Figure 9. In this
visualization, yellow represents the lowest and dark blue the
highest negative influence on the algorithmic
performance.</p>
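<p>Such a heatmap can be produced by mapping each table entry onto a yellow-green-blue ramp; the anchor colors below are assumptions for illustration, not the exact palette of the figure:</p>

```python
def loss_to_rgb(loss, max_loss=83.0):
    """Map a detection-rate loss (in %) to an RGB color: 0 % loss is
    yellow, the largest observed loss is dark blue, green in between."""
    t = min(abs(loss) / max_loss, 1.0)           # 0 = no influence, 1 = worst
    yellow, green, blue = (255, 255, 0), (0, 200, 0), (0, 0, 139)
    if t < 0.5:                                   # yellow -> green
        a, b, u = yellow, green, t * 2
    else:                                         # green -> dark blue
        a, b, u = green, blue, (t - 0.5) * 2
    return tuple(round(ca + (cb - ca) * u) for ca, cb in zip(a, b))

no_influence = loss_to_rgb(0)      # yellow: (255, 255, 0)
worst_case   = loss_to_rgb(-83)    # dark blue: (0, 0, 139)
```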

<fig id="fig09" fig-type="figure" position="float">
					<label>Figure 9</label>
					<caption>
						<p>Heatmap visualization of the results shown in Tables 1-4. The colors range from yellow over green to blue, where yellow stands for no influence on the algorithmic performance and dark blue represents the highest negative influence on pupil detection.</p>
					</caption>
					<graphic id="graph09" xlink:href="jemr-10-03-a-figure-09.png"/>
				</fig>

      </sec>
	  
    <sec id="S5">
      <title>Discussion</title>	  
	  
        <p>We proposed a dirt simulation and evaluation for eye
images as obtained by commercially available
eye trackers. Such a simulation can help to evaluate
algorithms regarding their applicability in the wild and to
explore their limitations. Besides simulating different
colors and particle sizes for dirt, our approach offers the
possibility to vary the focal length; this can also occur
in real scenarios, since automatic focus
estimation is influenced by the dirt layer. We found most
algorithms to be relatively robust towards a few large off-focus
dust particles. These are particles close to the camera lens
(or, for example, a glass cover when the camera is
mounted within a car dashboard). We can therefore conclude
that the amount of dust that can be tolerated on the
tracking device itself is quite large, given the right choice of
pupil detection algorithm.</p>

        <p>Overall, it has to be mentioned that lower detection
rates of individual algorithms also significantly increase
tracking loss. While the edge-based pupil localization
methods still outperformed threshold-based methods in
most of the simulations, even small in-focus dust
particles can have a huge impact on their performance.
However, this impact likely occurs in images where
the pupil is hard to detect, so that other methods already
failed at the baseline level and therefore show only a
smaller percentage loss. This finding highlights (i) that
the current generation of pupil detection algorithms is
vulnerable to dust particles and (ii) the importance of a
sharp and intelligent autofocus for head-mounted trackers
in order to select the actual eye depth layer instead of the
eyeglasses as accurately as possible.</p>

        <p>This work provides a roadmap for the further
improvement of edge-based pupil localization
methods, which should focus on automatic threshold range
adjustments, image refinement, and reconstruction (dirt
removal and filtering). Dirt particles are static on the
subject&#x2019;s glasses and can therefore be identified in a
video sequence. Removing this noise factor could improve
algorithmic performance and robustness.</p>
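<p>As a sketch of this idea (a pure-Python toy on tiny frames, not the algorithms' actual pipeline), a per-pixel temporal median over a video isolates the static dirt layer, which can then be masked or filtered out:</p>

```python
from statistics import median

def temporal_median(frames):
    """Per-pixel median over a list of equally sized grayscale frames.
    Pixels that are static across frames (e.g. dirt on the glasses)
    survive; moving content (the eye) is averaged away."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[median(f[y][x] for f in frames) for x in range(w)]
            for y in range(h)]

# Three 1x3 frames: the middle pixel is a static dark dirt spot (value 10),
# the others vary with eye movement; the median recovers the static layer.
frames = [[[200, 10, 180]], [[190, 10, 40]], [[60, 10, 170]]]
background = temporal_median(frames)  # [[190, 10, 170]]
```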

        <p>In future work, we will evaluate our simulation results
against real dirt conditions on the camera and on the
subject&#x2019;s glasses. In addition, we will investigate the
robustness of pupil detection algorithms based on deep neural
networks, such as PupilNet Fuhl, Santini, Kasneci, and
Kasneci (
			<xref ref-type="bibr" rid="R3">3</xref>
			), to dust. Further improvements to the
simulation itself will include a combination with the
synthesis module for glasses from K&#xFC;bler et al. (
			<xref ref-type="bibr" rid="R9">9</xref>
			) and 
inclusion of different dirt area distributions.</p>
      </sec>
	  
    <sec id="S6" sec-type="COI-statement">
      <title>Ethics and Conflict of Interest</title>	  

          <p>The author(s) declare(s) that the contents of the article
are in agreement with the ethics described in
<ext-link ext-link-type="uri" xlink:href="http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html" xlink:show="new">http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html</ext-link> 
and that there is no conflict of interest regarding the
publication of this paper.</p>
        </sec>
		
    <sec id="S7">
      <title>Acknowledgements</title>			

          <p>We acknowledge support by Deutsche
Forschungsgemeinschaft and Open Access Publishing Fund
of University of T&#xFC;bingen.</p>
        </sec>

  </body>
  
<back>
<ref-list>
<ref id="R1"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Fuhl</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Geisler</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Santini</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Rosenstiel</surname>, <given-names>W.</given-names></string-name>, &amp; <string-name><surname>Kasneci</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2016</year>). <chapter-title>Evaluation of state-of-the-art pupil detection algorithms on remote eye images</chapter-title>. In <source>Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct</source> (pp. <fpage>1716</fpage>–<lpage>1725</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>; Retrieved from <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/2968219.2968340">http://doi.acm.org/10.1145/2968219.2968340</ext-link> <pub-id pub-id-type="doi">10.1145/2968219.2968340</pub-id></mixed-citation></ref>
<ref id="R2"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Fuhl</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Kübler</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Sippel</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Rosenstiel</surname>, <given-names>W.</given-names></string-name></person-group>, &amp; Kas- neci, E. (<year>2015</year>). <article-title>ExCuSe: Robust Pupil Detection in Real World Scenarios.</article-title> In <person-group person-group-type="editor"><string-name><given-names>G.</given-names> <surname>Azzopardi</surname></string-name> &amp; <string-name><given-names>N.</given-names> <surname>Petkov</surname></string-name> <role>(Eds.)</role></person-group>, Computer Analysis of Images and Patterns: <source>16th International Conference, CAIP 2015</source>, <conf-loc>Valletta, Malta</conf-loc>, <conf-date>September 2-4, 2015</conf-date> Proceedings, Part I (pp. <fpage>39</fpage>–<lpage>51</lpage>). Springer International Publish- ing. Retrieved from http://dx.doi.org/<pub-id pub-id-type="doi">10.1007/978-3-319-23192-14</pub-id></mixed-citation></ref>
<ref id="R3"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Fuhl</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Santini</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Kasneci</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name><surname>Kasneci</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2016</year>). <article-title>PupilNet: Convolutional Neural Networks for Robust Pupil Detection</article-title>. <source>preprint arXiv:1601.04902</source>.</mixed-citation></ref>
<ref id="R4"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Fuhl</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Santini</surname>, <given-names>T. C.</given-names></string-name>, <string-name><surname>Kübler</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Kasneci</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2016</year>). <chapter-title>Else: Ellipse selection for robust pupil detection in real-world environments</chapter-title>. In <source>Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research &amp; Applications</source> (pp. <fpage>123</fpage>–<lpage>130</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>; Retrieved from <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/2857491.2857505">http://doi.acm.org/10.1145/2857491.2857505</ext-link> <pub-id pub-id-type="doi">10.1145/2857491.2857505</pub-id></mixed-citation></ref>
<ref id="R5"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Fuhl</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Tonsen</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Bulling</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Kasneci</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Pupil detection for head-mounted eye tracking in the wild:  An evaluation of the state of  the art.</article-title> <source>Machine Vision and Applications</source>, <volume>27</volume>(<issue>8</issue>), <fpage>1275</fpage>–<lpage>1288</lpage>. <pub-id pub-id-type="doi">10.1007/s00138-016-0776-4</pub-id><issn>0932-8092</issn></mixed-citation></ref>
<ref id="R6"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Mulvey</surname>, <given-names>F.</given-names></string-name></person-group> (<year>2012</year>). <chapter-title>Eye tracker data quality: What it is and how to measure it</chapter-title>. In <source>Proceedings of the Symposium on Eye Tracking Re- search and Applications</source> (pp. <fpage>45</fpage>–<lpage>52</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>; Retrieved from <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/2168556.2168563">http://doi.acm.org/10.1145/2168556.2168563</ext-link> <pub-id pub-id-type="doi">10.1145/2168556.2168563</pub-id></mixed-citation></ref>
<ref id="R7"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Javadi</surname>, <given-names>A.-H.</given-names></string-name>, <string-name><surname>Hakimi</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Barati</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Walsh</surname>, <given-names>V.</given-names></string-name>, &amp; <string-name><surname>Tcheang</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2015</year>). <article-title>SET: A pupil detection method using sinusoidal approximation.</article-title> <source>Frontiers in Neuroengineering</source>, <volume>8</volume>, <fpage>4</fpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://journal.frontiersin.org/article/10.3389/fneng.2015.00004">http://journal.frontiersin.org/article/10.3389/fneng.2015.00004</ext-link> <pub-id pub-id-type="doi">10.3389/fneng.2015.00004</pub-id><pub-id pub-id-type="pmid">25914641</pub-id><issn>1662-6443</issn></mixed-citation></ref>
<ref id="R8"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kasneci</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Sippel</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Aehling</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Heister</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Rosenstiel</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Schiefer</surname>, <given-names>U.</given-names></string-name>, &amp; <string-name><surname>Papageorgiou</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Driving with binocular visual field loss? A study on a supervised on-road parcours with simultaneous eye and head tracking.</article-title> <source>PLoS One</source>, <volume>9</volume>(<issue>2</issue>), <fpage>e87470</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0087470</pub-id><pub-id pub-id-type="pmid">24523869</pub-id><issn>1932-6203</issn></mixed-citation></ref>
<ref id="R9"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kübler</surname>, <given-names>T. C.</given-names></string-name>, <string-name><surname>Rittig</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Kasneci</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Ungewiss</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Krauss</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2016</year>). <chapter-title>Rendering refraction and reflection of eyeglasses for synthetic eye tracker images</chapter-title>. In <source>Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research &amp; Applications</source> (pp. <fpage>143</fpage>–<lpage>146</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>; Retrieved from <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/2857491.2857494">http://doi.acm.org/10.1145/2857491.2857494</ext-link> <pub-id pub-id-type="doi">10.1145/2857491.2857494</pub-id></mixed-citation></ref>
<ref id="R10"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Mohammed</surname>, <given-names>G. J.</given-names></string-name>, <string-name><surname>Hong</surname>, <given-names>B.-R.</given-names></string-name>, &amp; <string-name><surname>Jarjes</surname>, <given-names>A. A.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Accurate pupil features extraction based on new projection function.</article-title> <source>Computer Information</source>, <volume>29</volume>(<issue>4</issue>), <fpage>663</fpage>–<lpage>680</lpage>.<issn>0374-2849</issn></mixed-citation></ref>
<ref id="R11"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Schnipke</surname>, <given-names>S. K.</given-names></string-name>, &amp; <string-name><surname>Todd</surname>, <given-names>M. W.</given-names></string-name></person-group> (<year>2000</year>). <chapter-title>Trials and tribulations of using an eye-tracking system</chapter-title>. In <source>CHI ’00 Extended Abstracts on Human Factors in Computing Systems</source> (pp. <fpage>273</fpage>–<lpage>274</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>; Retrieved from <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/633292.633452">http://doi.acm.org/10.1145/633292.633452</ext-link> <pub-id pub-id-type="doi">10.1145/633292.633452</pub-id></mixed-citation></ref>
<ref id="R12"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>S’wirski</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Bulling</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Dodgson</surname>, <given-names>N.</given-names></string-name></person-group> (<year>2012</year>). <chapter-title>Robust realtime pupil tracking in highly off-axis images</chapter-title>. In <source>Proceedings of the Symposium on Eye Tracking Research and Applications</source> (pp. <fpage>173</fpage>–<lpage>176</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>; Retrieved from <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/2168556.2168585">http://doi.acm.org/10.1145/2168556.2168585</ext-link> <pub-id pub-id-type="doi">10.1145/2168556.2168585</pub-id></mixed-citation></ref>
<ref id="R13"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>S’wirski</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Dodgson</surname>, <given-names>N.</given-names></string-name></person-group> (<year>2014</year>). <chapter-title>Rendering synthetic ground truth images for eye tracker evaluation</chapter-title>. In <source>Proceedings of the Symposium on Eye Tracking Research and Applications</source> (pp. <fpage>219</fpage>–<lpage>222</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>; Retrieved from <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/2578153.2578188">http://doi.acm.org/10.1145/2578153.2578188</ext-link> <pub-id pub-id-type="doi">10.1145/2578153.2578188</pub-id></mixed-citation></ref>
<ref id="R14"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Tonsen</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Sugano</surname>, <given-names>Y.</given-names></string-name>, &amp; <string-name><surname>Bulling</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2016</year>). <chapter-title>Labelled pupils in the wild: A dataset for studying pupil detection in unconstrained environments</chapter-title>. In <source>Pro- ceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research &amp; Applications</source> (pp. <fpage>139</fpage>–<lpage>142</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>; Retrieved from <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/2857491.2857520">http://doi.acm.org/10.1145/2857491.2857520</ext-link> <pub-id pub-id-type="doi">10.1145/2857491.2857520</pub-id></mixed-citation></ref>
<ref id="R15"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Wass</surname>, <given-names>S. V.</given-names></string-name>, <string-name><surname>Forssman</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Leppänen</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Robustness and precision: How data quality may influence key dependent variables in infant eye-tracker analyses.</article-title> <source>Infancy</source>, <volume>19</volume>(<issue>5</issue>), <fpage>427</fpage>–<lpage>460</lpage>. <pub-id pub-id-type="doi">10.1111/infa.12055</pub-id><issn>1525-0008</issn></mixed-citation></ref>
<ref id="R16"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Willson</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Maimone</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Johnson</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Scherr</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2005</year>, aug). <article-title>An optical model for image artifacts produced by dust particles on lenses.</article-title> In <source>’i-SAIRAS 2005’ -  The 8th International Symposium on Artificial Intelligence, Robotics and Automation in Space</source> (Vol. <volume>603</volume>, Retrieved from <ext-link ext-link-type="uri" xlink:href="http://adsabs.harvard.edu/abs/2005ESASP.603E.103W">http://adsabs.harvard.edu/abs/2005ESASP.603E.103W</ext-link></mixed-citation></ref>
<ref id="R17"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Wood</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Baltrušaitis</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Morency</surname>, <given-names>L.-P.</given-names></string-name>, <string-name><surname>Robinson</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Bulling</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2016</year>). <chapter-title>Learning an appearance- based gaze estimator from one million synthesised images</chapter-title>. In <source>Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research &amp; Applications</source> (pp. <fpage>131</fpage>–<lpage>138</lpage>). <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>; Retrieved from <ext-link ext-link-type="uri" xlink:href="http://doi.acm.org/10.1145/2857491.2857492">http://doi.acm.org/10.1145/2857491.2857492</ext-link> <pub-id pub-id-type="doi">10.1145/2857491.2857492</pub-id></mixed-citation></ref>
<ref id="R18"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Zhang</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Sugano</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Fritz</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Bulling</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2016</year>). <article-title>It’s written all over your face: Full-face appearance-based gaze estimation</article-title>. <source>CoRR, abs/1611.08860</source>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/1611.08860">http://arxiv.org/abs/1611.08860</ext-link></mixed-citation></ref>
</ref-list>
  </back>
</article>
