<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.11.1.4</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>The Effect of Real-time Headbox Adjustments on Data Quality</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Blignaut</surname>
						<given-names>Pieter</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
        <aff id="aff1">
		<institution>University of the Free State, Bloemfontein</institution>, <country>South Africa</country>
        </aff>
		</contrib-group>
     
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>21</day>  
		<month>3</month>
        <year>2018</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2018</year>
	</pub-date>
      <volume>11</volume>
      <issue>1</issue>
	<elocation-id>10.16910/jemr.11.1.4</elocation-id>
	<permissions> 
	<copyright-year>2018</copyright-year>
	<copyright-holder>Blignaut P. J.</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>Following a patent owned by Tobii, the framerate of a CMOS camera can be increased by reducing the size of the recording window so that it fits the eyes with minimum room to spare. The position of the recording window can be dynamically adjusted within the camera sensor area to follow the eyes as the participant moves the head. Since only a portion of the camera sensor data is communicated to the computer and processed, much higher framerates can be achieved with the same CPU and camera. Eye trackers can be expected to present data at a high speed, with good accuracy and precision, small latency and with minimal loss of data while allowing participants to behave as normally as possible. In this study, the effect of headbox adjustments in real-time is investigated with respect to the above-mentioned parameters. It was found that, for the specific camera model and tracking algorithm, one or two headbox adjustments per second, as would normally be the case during recording of human participants, could be tolerated in favour of a higher framerate. The effect of adjustment of the recording window can be reduced by using a larger recording window at the cost of the framerate.</p>
      </abstract>
      <kwd-group>
        <kwd>Low-cost eye tracking</kwd>
        <kwd>data quality</kwd>
        <kwd>high framerates</kwd>
        <kwd>head movement</kwd>
      </kwd-group>
    </article-meta>
  </front>	
  <body>

    <sec id="S1">
      <title>Introduction</title>
	  
      <p>This paper builds on a patent assigned to Tobii (
        <xref ref-type="bibr" rid="b14">14</xref>
        ) to
increase the framerate of a CMOS camera by reducing
the size of the recording window so that it fits the eyes
with minimum room to spare. The crux of the patent lies
in the fact that the position of the recording window can
be dynamically adjusted within the camera sensor area to
follow the eyes as the participant moves the head. Since
only a portion of the camera sensor data is communicated
to the computer and processed, much higher framerates
can be achieved with the same CPU and camera.</p>
	  
      <p>Besides the size of the recording window, the
framerate that can be attained depends also on parameters such
as the type and model of camera that is used, the amount
of illumination and the focal length of the lens. The
camera and lens that were used in this study allow framerates
up to 350 Hz, while allowing the participant to move their
head freely within a box of about 200&#xD7;200 mm. More
head movement can be allowed, but at the cost of
framerate.</p>

      <p>While the principle is described broadly in the said
patent, no implementation details or algorithm are provided. The principle
is explained in this paper with full credit to the patent as
it is available in the public domain. It is not the intention
of this paper to infringe on the patent, nor to
disclose any protected intellectual property, but to
evaluate the impact of the real-time adjustment of the
recording window on the data quality of the eye tracker.</p>

      <p>Since the details of implementation of the patent are
not known, an algorithm to implement the process has
been developed independently from the patent and is
described in this paper. For purposes of the analysis, a
self-assembled eye tracker with two infrared illuminators
and the UI-1550LE camera from IDS Imaging
(
<ext-link ext-link-type="uri" xlink:href="https://en.ids-imaging.com" xlink:show="new">https://en.ids-imaging.com</ext-link>
) was used. This equipment
and the software that was developed are used for research
purposes only and are not available commercially.</p>

      <p>As a specific camera and tracking algorithm were used,
the effects cannot necessarily be generalized to other
trackers, but the study aims to create awareness of the
possibility of attaining a high framerate at a low cost
with acceptable data quality.</p>

      <p>For a fixed period of time and recording at a specified
framerate, a specific number of data samples should be
captured. Some of these samples might, however, be lost
or contain invalid data when the recording window is
adjusted. The researcher needs to know the percentage of
lost or invalid samples and whether the remaining
samples can still be used to do valid research. The
adjustments might also affect the camera latency and it is
necessary to determine if, and to what extent, the accuracy
and precision of the gaze data are affected by frequent
head movements.</p>

      <p>The next section discusses the general criteria for
the evaluation of eye trackers when used to analyse gaze
behaviour in research projects. Subsequently, these
criteria, namely data quality, freedom of head movement and
tracking speed, are discussed in more detail. The
impact on the above-mentioned criteria of the strategy to use a
smaller recording window on the camera sensor and to adjust
it in real time to follow the eyes is evaluated by
simulating head movements programmatically.</p>
    </sec>

    <sec id="S2">
      <title>Criteria for Evaluation of an Eye Tracker</title>

      <p>Eye tracking can be used to obtain information on
how people acquire and process information while they
are reading (
        <xref ref-type="bibr" rid="b37">37</xref>
        ), browsing a Web site (
        <xref ref-type="bibr" rid="b17">17</xref>
        ), shopping (
        <xref ref-type="bibr" rid="b24">24</xref>
        ),
driving a motor vehicle (
        <xref ref-type="bibr" rid="b12">12</xref>
        ), interpreting medical
images (
        <xref ref-type="bibr" rid="b13">13</xref>
        ), and performing other tasks where the ability
to decipher the visual world around them is critical. Eye
tracking can also be used as an input modality during
computer interaction, such as eye typing (
        <xref ref-type="bibr" rid="b1">1</xref>
        ) or other
gaze-contingent systems (
        <xref ref-type="bibr" rid="b39">39</xref>
        ).</p>
		
      <p>Regardless of the application area, the validity of
research results based on eye movement analysis depends on
the quality of eye movement data (
        <xref ref-type="bibr" rid="b22">22</xref>
		). Generally, data
quality is expressed in terms of four metrics, namely
<italic>accuracy</italic> (offset between actual and reported gaze
coordinates), <italic>precision</italic> (reproducibility or repeatability of
measures; related but not equivalent to system
resolution), <italic>latency</italic> (time delay between occurrence of an event
and the reporting thereof) and <italic>robustness</italic> (percentage of
expected data samples that are captured) (
        <xref ref-type="bibr" rid="b22">22</xref>
		).</p>

      <p>Besides good quality data, many types of research
also expect gaze data to be delivered at a high speed (
        <xref ref-type="bibr" rid="b2">2</xref>
		).
For video-based eye trackers, gaze data is based on the
analysis of successive frames in a video stream and
therefore the speed of an eye tracker is often referred to as its
framerate. For studies where typical saccades are small
and brief, as in silent reading or when studying
microsaccades (
        <xref ref-type="bibr" rid="b10 b27 b36">10, 27, 36</xref>
		), it is important to track at framerates in
excess of 250 Hz (
        <xref ref-type="bibr" rid="b21">21</xref>, p. 30
		).</p>

      <p>Many studies, such as reading or usability studies, can
be done using a video-based eye tracker with participants
seated in front of a computer screen. To ensure ecological
validity, participants should be able to move or tilt their
heads sideways, lean on their arms, lean backward, etc., as
they would find comfortable (
        <xref ref-type="bibr" rid="b32">32</xref>
		). Expecting participants
to consciously keep their heads still or to put their heads
in a chin rest would divert their attention from the task
at hand and could impact their performance. The eye
tracker should, therefore, allow participants as much
freedom as possible with regard to head movement.</p>

      <p>Depending on the nature of the study, it is evident that
eye trackers may be required to present data at a high
speed, with good accuracy and precision, small latency
and with minimal loss of data while allowing participants
to behave as normally as possible. These requirements
are discussed in more detail below.</p>
    </sec>
	
    <sec id="S3">
      <title>Quality of Eye Tracking Data</title>
	  
      <p>With reference to eye tracking, the term <italic>data quality</italic>
refers to the evaluation of the fidelity with which the
continuous variation in the eye movement signal is
reflected in the values measured and reported by the eye
tracker (
        <xref ref-type="bibr" rid="b39">39</xref>
		).</p>

      <sec id="S3a">
        <title>Latency</title>
		
        <p>Latency is normally described as the time difference
between the occurrence of an event in the visual field of
the camera and the reporting thereof (
          <xref ref-type="bibr" rid="b38">38</xref>
          ). For this study,
latency is divided into two components, namely camera
latency and processing time. Camera latency is the
amount of time from the moment an event takes place
until the image thereof arrives at the host computer. This
includes camera exposure, image acquisition and transfer
through the network or USB. Processing time refers to
the time needed by the host computer to locate the
features in the image and calculate and report the gaze
coordinates. This is in agreement with Gibaldi et al. (
        <xref ref-type="bibr" rid="b16">16</xref>)
who found that an eye tracker's system latency results
from the sum of the image acquisition latency
(hardware) and the gaze computation latency (software).</p>

        <p>For a specific camera and under specific conditions,
such as light, framerate and shutter speed, the camera
latency is expected to stay constant and therefore, at a set
framerate, frames are expected to be delivered to the host
computer at fixed intervals. Although the arrival of
frames could be out of sync with the generation thereof,
the phase difference is assumed to stay constant.</p>

        <p>Repositioning of the recording window could cause a
short stutter on the camera sensor, which will be
propagated to the receiving end. This will manifest in a
longer-than-expected interval between successive frames, which
is referred to below as the delivery delay. In other words,
it is unnecessary to have access to the absolute camera
latency to study the effect of adjustments of the recording
window as the effect on latency can be represented by the
effect on the delivery delay. If the latency stays constant,
the delivery delay should be zero.</p>
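<p>As a minimal sketch (not part of the paper's software, which was written in C#), the delivery delay described above can be computed from host-side arrival timestamps as the amount by which each inter-frame interval exceeds the nominal period:</p>

```python
# Sketch: delivery delay per frame, assuming a nominally constant
# frame interval. A delay of 0 means the frame arrived on schedule.
def delivery_delays(arrival_times_ms, framerate_hz):
    period = 1000.0 / framerate_hz
    return [max(0.0, (curr - prev) - period)
            for prev, curr in zip(arrival_times_ms, arrival_times_ms[1:])]

# Example: at 200 Hz (5 ms period), one repositioning stutter of 3 ms.
print(delivery_delays([0.0, 5.0, 10.0, 18.0, 23.0], 200))
# → [0.0, 0.0, 3.0, 0.0]
```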
      </sec>
	  
      <sec id="S3b">
        <title>Robustness</title>
		
        <p>For a fixed period of time and recording at a specified
framerate, the eye tracker is expected to capture a specific
number of data samples. Loss of data typically occurs
when some of the critical features of the eye image &#x2013; for
example, the pupil and/or corneal reflection &#x2013; cannot be
detected reliably (
        <xref ref-type="bibr" rid="b21">21</xref>, p.141
		). Typically, glasses and contact
lenses may cause reflections that can either obscure the
corneal reflection or incorrectly be regarded by the eye
tracker as being corneal reflections. Participant-related
eye physiology, for example droopy eyelids or narrow
eyes, may also obscure the glint or pupil or part thereof,
with subsequent loss of data. Robustness can be
expressed in terms of the percentage of expected samples
that are captured.</p>

        <p>Besides the effect on latency, the short delay before
frame delivery caused by repositioning of the recording
window on the sensor will also result in less than the
expected number of frames being delivered to the host
computer. This phenomenon was confirmed by Holmqvist and 
Andersson (
        <xref ref-type="bibr" rid="b20">20</xref>, pp. 167, 168
		) for an SMI RED 250 Hz eye tracker. Gaze events 
that occurred at the same time as adjustment of the recording 
window would not be recorded.</p>
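<p>Robustness as defined above reduces to a simple ratio; a hypothetical helper (the name and signature are illustrative, not from the paper) might look as follows:</p>

```python
# Sketch: robustness as the percentage of expected samples that
# were actually captured over a fixed recording period.
def robustness_percent(samples_captured, duration_s, framerate_hz):
    expected = duration_s * framerate_hz
    return 100.0 * samples_captured / expected

# Example: 10 s at 250 Hz should yield 2500 samples; if 2400
# arrive, robustness is 96%.
print(robustness_percent(2400, 10, 250))  # → 96.0
```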
      </sec>

      <sec id="S3c">
        <title>Accuracy</title>

        <p>The ISO defines accuracy as the &#x201C;closeness of
agreement between a test result and the accepted reference
value&#x201D; (
        <xref ref-type="bibr" rid="b25">25</xref>). In layman's terms, accuracy can be regarded
as the offset between the actual fixation position and the
position as reported by the eye tracker.</p>

        <p>The traditional manner of recording data for
accuracy measurements is to ask participants to focus on
various small gaze targets across the display while gaze data
is recorded (
        <xref ref-type="bibr" rid="b5">5</xref>
		). The offsets at all target positions are
then averaged to obtain the accuracy of the system under
the current conditions. The accuracy of a specific eye
tracker is neither absolute nor constant and stating the
manufacturers&#x2019; specifications only could be misleading.
Accuracy depends on factors such as the hardware,
ambient light, some physiological characteristics of the
participants, calibration procedure, polynomial for
interpolation, gaze angle, etc. (
          <xref ref-type="bibr" rid="b6 b23">6, 23</xref>
          ). Hansen and Ji (
          <xref ref-type="bibr" rid="b18">18</xref>
          ) provided
an overview of remote eye trackers and reported the
accuracy of most model-based gaze estimation systems to be
in the order of 1&#xB0;&#x2013;2&#xB0;.</p>
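<p>Under the simplifying assumptions of a single gaze target and a known viewing distance, the mean angular offset described above can be sketched as follows (the function name and units are illustrative, not the paper's implementation):</p>

```python
import math

# Sketch: mean angular offset (degrees) between reported gaze points
# and a target, both given in mm on the display plane.
def accuracy_deg(gaze_points_mm, target_mm, distance_mm):
    offsets = []
    for gx, gy in gaze_points_mm:
        d = math.hypot(gx - target_mm[0], gy - target_mm[1])
        offsets.append(math.degrees(math.atan2(d, distance_mm)))
    return sum(offsets) / len(offsets)

# Example: points 12 mm off target at 700 mm distance are ~1 degree off.
print(round(accuracy_deg([(12.0, 0.0), (0.0, 12.0)], (0.0, 0.0), 700.0), 2))
# → 0.98
```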
      </sec>

        <sec id="S3d">
          <title>Precision</title>
		  
          <p>Precision is defined as the &#x201C;closeness of agreement
between independent test results obtained under
stipulated conditions&#x201D; (
        <xref ref-type="bibr" rid="b25">25</xref>
		). The spatial precision of eye tracking
data is an indication of the variation of gaze data over a
period of time. In other words, if the same gaze position is
reported for every sample recorded by the eye tracker
while a participant fixates on a target, the precision is 0.
Variation in reported gaze data can originate from either
system variations or human (physiological) variations.</p>

          <p>Closely related to spatial precision is a measure
termed spatial resolution, which refers to the smallest eye
movement that can be detected in the data (
        <xref ref-type="bibr" rid="b22">22</xref>
		).</p>

          <p>Spatial precision can be calculated as the square root
of the pooled variance over the X and Y dimensions of
the display (referred to as STD hereafter). In other words,
precision = &#x221A;((&#x3C3;<sub>x</sub>&#x00B2; + &#x3C3;<sub>y</sub>&#x00B2;)/2),
where &#x3C3;<sub>x</sub>&#x00B2; = 1/N &#x2211;<sup>N</sup><sub>i=1</sub> (x<sub>i</sub> &#x2212; x&#x0304;)&#x00B2; and 
&#x3C3;<sub>y</sub>&#x00B2; = 1/N &#x2211;<sup>N</sup><sub>i=1</sub> (y<sub>i</sub> &#x2212; y&#x0304;)&#x00B2;, with x&#x0304; and y&#x0304; the mean gaze coordinates. 
Another common measure of precision is the root mean square of the distances between
subsequent points, RMS = &#x221A;((&#x2211;<sup>N</sup><sub>i=1</sub> d<sub>i</sub>&#x00B2;)/N) (
        <xref ref-type="bibr" rid="b22">22</xref>
		). 
This paper will report STD precision values.</p>
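<p>The two precision measures above can be sketched directly from their definitions (a minimal illustration, not the paper's implementation):</p>

```python
import math

# Sketch: STD precision (square root of the pooled variance over X
# and Y) and RMS precision (root mean square of sample-to-sample
# distances) for gaze samples recorded during a fixation.
def precision_std(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var_x = sum((x - mx) ** 2 for x in xs) / n
    var_y = sum((y - my) ** 2 for y in ys) / n
    return math.sqrt((var_x + var_y) / 2)

def precision_rms(xs, ys):
    d2 = [(xs[i] - xs[i - 1]) ** 2 + (ys[i] - ys[i - 1]) ** 2
          for i in range(1, len(xs))]
    return math.sqrt(sum(d2) / len(d2))

xs, ys = [0.0, 0.1, 0.0, 0.1], [0.0, 0.0, 0.1, 0.1]
print(precision_std(xs, ys), precision_rms(xs, ys))
```

A perfectly still signal yields 0 for both measures; RMS is more sensitive to sample-to-sample noise, STD to slow drift.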
        </sec>
      </sec>

	
    <sec id="S4">
      <title>Eye Tracking and Head Movement</title>
      <sec id="S4a">
        <title>Head-mounted vs remote eye trackers</title>
		
        <p>Video-based eye trackers can be head-mounted or
remote devices. Although head-mounted eye trackers
produce more accurate results (
          <xref ref-type="bibr" rid="b43">43</xref>
          ) and allow for more head
movement (
          <xref ref-type="bibr" rid="b29">29</xref>
          ), they are intrusive. Remote devices are
less intrusive and do not require any physical contact
with the user, but are sensitive to head movements &#x2013;
especially movements in the depth dimension (
          <xref ref-type="bibr" rid="b8">8</xref>
          ).</p>
      </sec>
	  
      <sec id="S4b">
        <title>Finding the point of regard</title>
		
        <p>With remote eye tracking, gaze coordinates are
determined through either a feature-based, appearance-based or
model-based approach (
          <xref ref-type="bibr" rid="b18 b26">18, 26</xref>
          ). Feature-based methods
use the features in an eye video, i.e. the pupils and corneal
reflections, to map to gaze coordinates through
interpolation (
          <xref ref-type="bibr" rid="b18">18</xref>
          ). Appearance- and model-based methods use a geometric
model of the eye, or a mapping learnt through a deep neural
network, to estimate the 3D gaze direction (
          <xref ref-type="bibr" rid="b26 b42">26, 42</xref>
          ).</p>
		  
        <p>For feature-based systems, head movements within
the field of view of the camera can lead to reduced
accuracy as the subject moves away from the calibration
position (
          <xref ref-type="bibr" rid="b8 b40">8, 40</xref>
          ). Larger head movements would cause total
loss of gaze data and affect the fixed camera latency as
the camera has to be redirected again. Accuracy can be
improved by alternative calibration procedures (
          <xref ref-type="bibr" rid="b8 b31">8, 31</xref>
          ) or
by using an appearance-based approach that maps an eye
image into gaze coordinates (
          <xref ref-type="bibr" rid="b3 b42">3, 42</xref>
          ). These solutions are,
however, not always feasible &#x2013; as illustrated by the
alternative calibration proposed by Nguyen et al. (
          <xref ref-type="bibr" rid="b31">31</xref>
          ), which
takes more than 10 minutes to perform.</p>

        <p>Although simpler to execute, feature-based
methods are more restrictive than model-based methods
in terms of permissible head movement (
          <xref ref-type="bibr" rid="b11">11</xref>
          ). Depending
on the accuracy that is required for a specific study, a
small amount of head movement may be tolerated. It is
customary to let a participant use a chin rest or bite bar to
limit head movements if accuracy is of high importance
(
          <xref ref-type="bibr" rid="b30">30</xref>
          ).</p>
      </sec>
	  
      <sec id="S4c">
        <title>Requirements of the image acquisition system</title>
		
        <p>To allow for free head motion, a large field of view is
required (
          <xref ref-type="bibr" rid="b19">19</xref>
          ) but, to obtain high accuracy, a video-based
eye-tracker must acquire a high-resolution image of the
eye (
          <xref ref-type="bibr" rid="b30 b35">30, 35</xref>
          ), which essentially means that the field of
view must be confined (
          <xref ref-type="bibr" rid="b26">26</xref>
          ). In general, there is a
tradeoff between the field of view of an eye tracking system
and the gaze estimation accuracy (
          <xref ref-type="bibr" rid="b40">40</xref>
          ).</p>
		  
        <p>To maintain high accuracy with remote, feature-based
systems while allowing head movement, the image
acquisition system must be able to follow the eyes (
          <xref ref-type="bibr" rid="b30">30</xref>
          ). Some
of the first approaches to achieve this goal made use of
multiple cameras and/or multiple illuminators (
          <xref ref-type="bibr" rid="b4 b7 b28 b33 b34 b35 b41 b43">4, 7, 28, 33, 34, 35, 41, 43</xref>
          ). Most of these systems have one camera with a wide
field of view to track head movements and another with a
small field of view to track the eyes. The second camera
is mechanically directed based on feedback from the first.
See Hennessey et al. (
          <xref ref-type="bibr" rid="b19">19</xref>
          ) for an overview and
discussion of such systems. Although this approach is
effective, the mechanical nature of the set-up causes large
delays and few of these systems are capable of framerates
higher than 30 Hz.</p>

        <p>Instead of using two cameras or a single stereo
camera, a region of interest in a high resolution image from a
single fixed camera with a wide field-of-view can be
moved around to follow the eyes (
          <xref ref-type="bibr" rid="b14 b30">14, 30</xref>
          ). Besides being
easier to set up, the redirection of the region of interest is
done programmatically instead of mechanically, leading
to reduced latency (quicker reaction to head movements)
and improved robustness (less interruption in the stream
of gaze data).</p>
      </sec>
	  
      <sec id="S4d">
        <title>Use of programmable CMOS cameras</title>
		
        <p>CMOS (complementary metal-oxide semiconductor)
cameras use active pixel sensors (APS), containing a
photo detector and transistors to combine the image
sensor function and image processing within the same
integrated circuit (
          <xref ref-type="bibr" rid="b15">15</xref>
          ).</p>
		  
        <p>CMOS cameras can be programmable with a software
development kit (SDK) that enables developers to
manipulate camera properties such as shutter speed,
framerate and colour depth. The use of programmable CMOS
image sensors provides a number of important
advantages for eye tracking, <italic>inter alia</italic> fast sampling rates,
direct pixel addressing for pre-processing and acquisition,
and hard-disk storage of relevant image data (
          <xref ref-type="bibr" rid="b9">9</xref>
          ).</p>
		  
        <p>Modern CMOS cameras also largely solve the
trade-off between accuracy and robustness on the one
hand and framerate on the other. High resolution
camera sensors that provide enough clarity for the features to
be found with high accuracy and precision while
simultaneously allowing a wide field of view at a high framerate,
are available. Alternatively, a camera with a high
resolution sensor and large enough field of view, but with lower
native (full sensor) framerate can be used. Only a portion
of the image (the so-called recording window) can be
communicated to the host computer (
          <xref ref-type="bibr" rid="b14">14</xref>
          ), which will allow
the native framerate to be exceeded several times over.
The challenge with this approach is two-fold, namely to
determine the optimum size of the recording window and
to position it over the eyes.</p>

        <p>The size of the recording window (RW) determines
the frame rate. The smaller the RW, the higher the
achievable framerate. In turn, the focal length of the lens
determines the area that can fit into the RW. The higher
the focal length, the better the resolution and associated
accuracy, but less of the real world object can fit into it.
In other words, it is desirable to have a longer focal
length and a smaller recording window. Unfortunately, a
tight fit of the recording window around the eyes will
limit head movement to a large extent. As indicated by
Elvesj&#xF6; et al. (
          <xref ref-type="bibr" rid="b14">14</xref>
          ), the solution lies in the adjustment of
the recording window in real-time to follow the eyes
without affecting latency or causing loss of frames during
the adjustment.</p>
      </sec>
    </sec>
	
    <sec id="S5">
      <title>Experimental details</title>
	  
      <p>In order to examine the effect of real-time headbox
adjustments on the data quality delivered by an eye
tracker at various framerates, gaze data had to be recorded for
a range of framerates and headbox adjustments.</p>

      <p>For this experiment, the context was that of a
stationary seated participant in front of a single computer
display. The idea was that the system should compensate for
smooth head movements due to the participant changing
position from time to time, for example leaning sideways,
moving forward/backwards, etc.</p>

      <p>It is important to note that the study was done with a
specific model of camera and a specific tracking
algorithm. Although it is known that the SMI REDm, REDn,
RED250 and 500, the Tobii T120, T60 and TX300 all use
active recording windows, one should be careful when
comparing the results with them since they probably use more
expensive cameras. Existing low-cost commercial eye
trackers such as the Tobii EyeX
(
<ext-link ext-link-type="uri" xlink:href="tobiigaming.com/product/tobii-eyex/" xlink:show="new">tobiigaming.com/product/tobii-eyex/</ext-link>
), Tobii 4C (
<ext-link ext-link-type="uri" xlink:href="tobiigaming.com/eye-tracker-4c/" xlink:show="new">tobiigaming.com/eye-tracker-4c/</ext-link>) 
and MyGaze(
<ext-link ext-link-type="uri" xlink:href="www.mygaze.com" xlink:show="new">www.mygaze.com</ext-link>) 
(now discontinued) deliver
comparable data quality but probably do not use a smaller
recording window, or else they would have been able to attain
higher framerates.</p>

      <sec id="S5a">
        <title>Camera and lens</title>
		
        <p>An eye tracker with two infrared illuminators, 480
mm apart, and the UI-1550LE camera from IDS Imaging
(
<ext-link ext-link-type="uri" xlink:href="https://en.ids-imaging.com" xlink:show="new">https://en.ids-imaging.com</ext-link>
) was assembled. The
UI-1550LE camera with daylight cut filter has a 1600&#xD7;1200
sensor with pixel size 2.8 &#x3BC;m and a native framerate of
18.3 fps (period = 54.6 ms). (There is a linear relationship
between the number of pixels and the minimum possible
interval between frames.)</p>

        <p>The camera was fitted with a 10 mm lens from
Lensation (
<ext-link ext-link-type="uri" xlink:href="http://www.lensation.de/" xlink:show="new">http://www.lensation.de/</ext-link>
). Although the camera is
more sophisticated than a web camera, the camera, lens
and lens adapter are available from the manufacturer at
about 300 euro.</p>

        <p>The camera and lens have a field of view of 288&#xD7;217
mm at 700 mm distance. A recording window of
500&#xD7;116 pixels captures an image of a 90&#xD7;21 mm world
object at this distance, which leaves 198&#xD7;196 mm for
head movements in world space.</p>
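<p>A minimal check of the arithmetic above (all figures are taken from the text; the linear pixel-to-mm scaling is an assumption):</p>

```python
# Sketch: sensor-to-world arithmetic. The full 1600x1200 sensor sees
# 288x217 mm at 700 mm, so a 500x116-pixel recording window covers
# proportionally less, and the remaining sensor area is the room
# left for head movement in world space.
sensor_px = (1600, 1200)
fov_mm = (288, 217)
rw_px = (500, 116)

mm_per_px = (fov_mm[0] / sensor_px[0], fov_mm[1] / sensor_px[1])
rw_mm = (rw_px[0] * mm_per_px[0], rw_px[1] * mm_per_px[1])
headbox_mm = (fov_mm[0] - rw_mm[0], fov_mm[1] - rw_mm[1])

print([round(v) for v in rw_mm])       # → [90, 21]
print([round(v) for v in headbox_mm])  # → [198, 196]
```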
      </sec>

      <sec id="S5b">
        <title>Computer and Screen</title>

          <p>Data was recorded with a desktop computer with an i7
processor and 16 GB of memory, running Windows 10.
A 22 inch screen, 474&#xD7;299 mm, with resolution
1680&#xD7;1050, was used to display the stimuli. This means
that at a gaze distance of 700 mm, 1 degree of gaze angle
subtends 43 pixels (approximately 12 mm) on the
display.</p>
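<p>The 43 pixels per degree quoted above follows from simple trigonometry (a worked check, not from the paper's software):</p>

```python
import math

# Sketch: at 700 mm viewing distance, 1 degree of visual angle
# subtends about 12 mm, which on a 1680-pixel-wide, 474-mm-wide
# display corresponds to about 43 pixels.
distance_mm = 700.0
screen_w_mm, screen_w_px = 474.0, 1680.0

mm_per_deg = 2 * distance_mm * math.tan(math.radians(0.5))
px_per_deg = mm_per_deg * screen_w_px / screen_w_mm

print(round(mm_per_deg, 1), round(px_per_deg))  # → 12.2 43
```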
        </sec>
		
      <sec id="S5c">
        <title>Software to inspect and adjust the recording window</title>		

          <p>Software was developed using C# with .Net 4.5 along
with the camera manufacturer&#x2019;s software development kit
(SDK) to control the camera settings and process the eye
video. The software system provided an inspection panel
in which the camera sensor area is represented on the
computer screen, on a dark blue background (Figure 1).
The portion of image data that is communicated to the PC
and analysed (RW) is displayed inside this area as an eye
video. Figure 1 also shows the eye-camera distance in
mm and a status bar to indicate which eye(s) is/are
currently inside the RW.</p>

<fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1.</label>
					<caption>
						<p>Recording window with eye video within the camera sensor area.</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-11-01-d-figure-01.png"/>
				</fig>

          <p>When a participant is seated, the recording window
(RW) can be adjusted around his/her eyes manually in
three ways: the chair and seating position can be adjusted,
the camera can be adjusted and the RW can be grabbed
with the mouse and dragged within the confines of the
sensor area. Once the RW is positioned around the
participant's eyes, it will be adjusted in real-time to follow
smooth head movements.</p>
        </sec>
		
      <sec id="S5d">
        <title>Recording window</title>			

          <p>Through the software development kit (SDK), the
camera allows the selection of a smaller area of its sensor
to be communicated through USB 2. The size of the
recording window was chosen to (i) provide a tight fit
around the eyes in order to maximise the probability of
headbox adjustments for the sake of the experiment and
(ii) to allow the maximum possible framerate with the
specific camera model. This allowed for a worst-case
scenario to investigate the effect of headbox adjustments
on data quality at various framerates. This specific size is,
incidentally, very close to that used by the SMI RED 250
(a video clip to illustrate this can be provided on
request).</p>

          <p>The two illuminators provided enough light for a
short exposure time which, together with a recording
window (RW) of 500&#xD7;116 pixels, allowed a maximum
framerate of 352 Hz.</p>

          <p>The recording window fitted the image of the eyes at
700 mm gaze distance with some margin to spare (cf.
Figure 1). Through the SDK, the framerate could be
adjusted with automatic adjustment of the exposure and
gain.</p>
        </sec>
		
      <sec id="S5e">
        <title>Real-time adjustment of the recording window</title>		
		
          <p>The position of the eyes in the recording window is
used to adjust the recording window within the headbox
as the head moves around. Figure 2 shows the algorithm
that is executed for every frame that is received from the
camera. The magnitude of the adjustment, d, should be
such that the eyes do not move outside of the RW. If d
is too small, a fast or jerky head movement will
allow the relevant eye to move out of the RW; if it is too
large, the opposite eye will move out of the RW. A value
of d = 20 pixels (4% of the width of the recording
window) worked well for smooth head movements and for
the range of framerates tested in this study (50
Hz &#x2013; 350 Hz).</p>
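<p>A minimal sketch of the adjustment rule of Figure 2 follows. All names, the sensor resolution and the clamping behaviour are assumptions for illustration only; d = 20 pixels and the 500&#xD7;116 window are the values reported above, and the camera's mirroring of the image is ignored.</p>

```python
# Illustrative sketch of the per-frame recording-window (RW) adjustment
# described in Figure 2 -- not the author's actual code. Sensor size and
# all names are assumptions; D = 20 px and the 500x116 RW are from the text.

SENSOR_W, SENSOR_H = 2048, 1088   # assumed sensor resolution
RW_W, RW_H = 500, 116             # recording-window size from the text
D = 20                            # adjustment step in pixels (4% of RW width)

def adjust_rw(rw_x, rw_y, eye_xs, eye_ys, margin=D):
    """Shift the RW by D pixels whenever any tracked eye feature comes
    within `margin` pixels of a window edge, clamping to the sensor area."""
    dx = dy = 0
    if min(eye_xs) < rw_x + margin:            # eye near left edge
        dx = -D
    elif max(eye_xs) > rw_x + RW_W - margin:   # eye near right edge
        dx = D
    if min(eye_ys) < rw_y + margin:            # eye near top edge
        dy = -D
    elif max(eye_ys) > rw_y + RW_H - margin:   # eye near bottom edge
        dy = D
    new_x = max(0, min(SENSOR_W - RW_W, rw_x + dx))
    new_y = max(0, min(SENSOR_H - RW_H, rw_y + dy))
    return new_x, new_y
```

If d is chosen too small the step cannot keep up with a fast head movement; too large and the shift pushes the opposite eye out, as noted above.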

<fig id="fig02" fig-type="figure" position="float">
					<label>Figure 2.</label>
					<caption>
						<p>Algorithm for adjustment of the recording window in real-time. Note that the camera's mirror property is set to true so that the left edge of the sensor is displayed on the right-hand side of the image and vice versa.</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-11-01-d-figure-02.png"/>
				</fig>

          <p>The eyes are only lost if the recording window cannot
fit into the sensor area or if the participant suddenly jerks
his head to one side. In that case, the experimenter will
have to adjust the RW manually again as explained
above. The software can be developed such that the
sensor area and recording window (Figure 1) are visible on
the experimenter's screen and that manual adjustments
can be made during recording without interrupting the
participant.</p>

          <p>Figure 3 shows two successive eye video frames at
200 Hz (5 ms interval). The crosshairs show the reported
centres of the pupils and outer glints. In Frame 1, the left
pupil is too close to the bottom of the recording window
and needs to be corrected. In Frame 2, the recording
window is adjusted so that the pupil is further from the
border.</p>

<fig id="fig03" fig-type="figure" position="float">
					<label>Figure 3.</label>
					<caption>
<p>The recording window of two successive camera frames. In the first frame, the left pupil is too close to the bottom edge of the recording window. The second frame shows the recording window after adjustment.</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-11-01-d-figure-03.png"/>
				</fig>
        </sec>
		
      <sec id="S5f">
        <title>Simulation of head movements</title>				

<p>Instead of asking participants to move their heads
from side to side or up and down, their heads were
stabilised by a chin rest and head movements were
simulated programmatically by adjusting the recording
window. This procedure allowed for easier control of the
amount and speed of movement for evaluation purposes.</p>

<p>The recording window was adjusted either
horizontally or vertically by d pixels every t milliseconds. This is
the same adjustment that would occur if a participant
"bumped" his head against one of the walls of the headbox
during normal recording (cf. the algorithm in Figure 2). Once
the eyes come within d pixels of the edge of the
recording window, every adjustment results in an immediate
correction according to the algorithm in Figure 2 &#x2013; thus
causing the recording window to be adjusted back and
forth. At lower framerates, the two adjustments could be
done within one frame, but at higher framerates it is
probable that the adjustments are done in successive
frames.</p>
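<p>The simulated bump-and-correct cycle described above can be sketched as follows. All names are assumptions; for simplicity the corrective shift is assumed to land in the first frame after each bump, whereas at low framerates both adjustments may fall within one frame.</p>

```python
# Illustrative sketch (assumed names) of the simulated head "bump": every
# t ms the recording window is displaced by d pixels, and the Figure 2
# algorithm then shifts it back by d pixels, so the window oscillates.

def simulate_bumps(duration_ms, t_ms, frame_ms, d=20):
    """Return the horizontal window offset at each frame time, assuming
    the corrective shift is applied in the first frame after each bump."""
    offsets, offset = [], 0
    next_bump = t_ms
    pending_correction = False
    time = 0.0
    while time < duration_ms:
        if pending_correction:        # Figure 2 algorithm pushes window back
            offset -= d
            pending_correction = False
        if time >= next_bump:         # simulated "bump": displace window by d
            offset += d
            next_bump += t_ms
            pending_correction = True
        offsets.append(offset)
        time += frame_ms
    return offsets
```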

<p>The value of t ranged from 100 ms to 900 ms in
increments of 200 ms (i.e. 100, 300, 500, 700 and 900 ms).
This means that eleven recordings were made at each
framerate, namely five with vertical adjustments, five with
horizontal adjustments and one with a stationary headbox.</p>
        </sec>
		
      <sec id="S5g">
        <title>Data capturing</title>				

<p>The recording framerates were adjusted
programmatically from 50 Hz to 350 Hz in increments of 50 Hz as
part of the recording procedure. Refer to Figure 4 for an
algorithmic outline of the procedure that was followed.</p>

<fig id="fig04" fig-type="figure" position="float">
					<label>Figure 4.</label>
					<caption>
						<p>Procedure to record human participants</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-11-01-d-figure-04.png"/>
				</fig>

<p>Seven gaze targets (one in the middle of the display
and one in a random position in each of six rectangular
areas on the display; see Figure 5) were displayed one at a
time for each combination of framerate and headbox
adjustment. Targets were displayed for 1.5 seconds
each, but gaze data was recorded during the second 750
ms only, to allow time for the eyes to settle on a target
once it appeared. Taking into account also the time taken
to save data after every set of seven targets and to adjust the
framerate programmatically before the next set, the
recording of every set of seven targets took about 12 s.</p>

<fig id="fig05" fig-type="figure" position="float">
					<label>Figure 5.</label>
					<caption>
						<p>Position of gaze targets. The centre target is fixed while the others appeared in random positions within the six rectangles</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-11-01-d-figure-05.png"/>
				</fig>

<p>The above-mentioned combination of gaze targets (7
values), range of framerates (7 values) and intervals of
headbox adjustments (11 values) meant that 7&#xD7;7&#xD7;11 =
539 gaze targets had to be presented to each participant.
A single recording lasted about 15 minutes. Participants
had the opportunity to pause between target sets to rest
their eyes if necessary.</p>
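<p>As a rough arithmetic check of the reported session length (using the 12 s per seven-target set reported above):</p>

```python
# Back-of-the-envelope check of the condition grid and session length.
# The 12 s per set is the figure reported in the text; everything else
# follows from the design described above.

targets_per_set = 7
framerates = [50, 100, 150, 200, 250, 300, 350]       # Hz, increments of 50 Hz
adjustment_conditions = 11                            # 5 vertical + 5 horizontal + 1 static

sets_total = len(framerates) * adjustment_conditions  # 7 x 11 = 77 sets
targets_total = sets_total * targets_per_set          # 539 targets per participant
session_minutes = sets_total * 12 / 60                # about 15 minutes
```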
        </sec>
		
      <sec id="S5h">
        <title>Participants</title>		

<p>A participant was seated with his head stabilised by a
chinrest at a distance of 700 mm in front of the camera.
The chair, chinrest and camera were manually adjusted
until both eyes were visible in the recording window
and the recording window was centred in the sensor area.</p>

<p>In order to save time and not interrupt the recording
process, calibration was done before the recording
commenced. Calibration was done at 200 Hz, which is in the
middle of the range of framerates for which data was
captured. One could argue that the calibration should
have been repeated every time the framerate
changed, but that would have made the data capturing
procedure impractically long, with an interruption every
time the framerate changed.</p>

<p>The lengthy and exhausting procedure for human
participants presented a challenge in terms of recruiting a
large number of participants. Since the aim of the study
was not to determine the absolute values for indicators of
data quality for a variety of participants, but rather to
compare data quality for various settings of framerate and
headbox adjustments, it was decided to use only two
participants (one of whom was the author) and to record
data over a few repetitions over a period of time. This
also meant improved consistency with respect to gaze
behaviour across recordings.</p>
        </sec>
		
      <sec id="S5i">
        <title>Artificial eyes</title>			

<p>The aim of the study was to investigate the effect of
real-time headbox adjustments on the quality of eye
tracking data. In order to control for as much
participant-specific variance as possible, data
was also recorded for a set of artificial eyes. Of course,
the effect of headbox adjustments on the accuracy of
tracking could not be evaluated through the use of
artificial eyes.</p>

          <p>The artificial eyes were mounted at a fixed distance of
700 mm in front of the camera. The artificial eyes were
aimed roughly at the centre of the screen and gaze data
was recorded for seven repetitions, imitating the seven
gaze targets, for all combinations of framerate and
headbox adjustments. The entire procedure was repeated 10
times.</p>
        </sec>
      </sec>
	
    <sec id="S6">
      <title>Results</title>
	  
<p>Because of the small number of participants and
better control over external factors, the data from the
artificial eyes were considered to be more reliable than the
data from the human participants. However, since data
was in any case recorded for human participants to
facilitate analysis of accuracy, it was also analysed and is
presented below. Where the results for human participants
do not agree with those of the artificial eyes, the
conclusions are based on the results of the artificial eyes.</p>

      <sec id="S6a">
        <title>The effect of headbox adjustments on delivery delay</title>
		
        <p>As explained above, the effect of headbox adjustment
on latency can be represented by the effect on delivery
delay as the time between image acquisition and delivery
(camera latency) is expected to be constant. The delivery
delay can be expressed in terms of the difference between
the expected between-frames interval (based on the set
framerate) and the actual interval (as determined by
subtracting timestamps of successive frames). In the absence
of any effect on latency, the delivery delay should be
zero.</p>
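<p>The delivery-delay measure just described can be expressed as a short sketch (function and variable names are assumptions):</p>

```python
# Illustrative sketch of the delivery-delay measure: the difference
# between the actual frame-to-frame interval (from successive frame
# timestamps) and the interval expected from the set framerate.
# A value of zero means the frame arrived on schedule.

def delivery_delays(timestamps_ms, framerate_hz):
    """Per-frame delivery delay in ms for a list of frame timestamps."""
    expected = 1000.0 / framerate_hz
    return [(t1 - t0) - expected
            for t0, t1 in zip(timestamps_ms, timestamps_ms[1:])]
```

For example, at 50 Hz a dropped frame shows up as a 40 ms gap, i.e. a delay of 20 ms over the expected 20 ms interval.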

<p>Figure 6a shows the frame-to-frame interval for 6
fixations of one specific participant at 50 Hz and 100 ms
between headbox adjustments. Zooming into a specific
fixation revealed that, instead of 20 ms between frames,
one frame is lost every 100 ms, resulting in a 40 ms gap
to the next frame.</p>

<p>Figure 6b shows the frame-to-frame interval at 50 Hz
and 500 ms between adjustments. This time, there is only
one lost frame per 750 ms fixation. Because of the lower
percentage of lost frames and the consequent smaller
effect on the average frame-to-frame interval, it is
expected that, for the same framerate, the average delivery
delay will be shorter when less frequent headbox
adjustments are made.</p>

<p>Figure 6c shows a single fixation at 200 Hz and 100
ms between adjustments. The moment of adjustment can
clearly be identified by the periodic doubling of the
interframe interval &#x2013; once every 100 ms. It is expected that the
effect of the lost frames will be less pronounced at higher
framerates, as the percentage of affected frames is reduced,
unless more than one frame is lost for every adjustment,
as shown in Figure 6d for a recording at 300 Hz.</p>
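<p>A rough estimate of the fraction of affected frames, assuming one lost frame per adjustment (Figure 6d shows that more than one frame can be lost at 300 Hz), can be computed as follows. Names are assumptions for illustration:</p>

```python
# Rough estimate of the share of frames affected by headbox adjustments,
# assuming a fixed number of lost frames per adjustment.

def fraction_lost(framerate_hz, adjustment_interval_ms,
                  frames_lost_per_adjustment=1):
    """Fraction of frames lost: adjustments per second times losses per
    adjustment, divided by the framerate."""
    adjustments_per_s = 1000.0 / adjustment_interval_ms
    return adjustments_per_s * frames_lost_per_adjustment / framerate_hz
```

With adjustments every 100 ms, this gives 20% of frames lost at 50 Hz but only 5% at 200 Hz, consistent with the milder effect expected at higher framerates.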

        <p>Figure 7 shows the average delivery delay for each
value of the adjustment intervals and framerates that were
tested for artificial eyes and human participants. Note that
in this and all subsequent figures, the points on the X-axis
are categorical. The points are shown with horizontal
offsets to avoid overlapping (and thus improve
readability) of the vertical bars that indicate the 95% confidence
intervals, but these offsets do not have meaning.</p>

<fig id="fig06a" fig-type="figure" position="float">
					<label>Figure 6a.</label>
					<caption>
						<p>Frame-to-frame interval for a human participant at 50 Hz and 100 ms between headbox adjustments.</p>
					</caption>
					<graphic id="graph06a" xlink:href="jemr-11-01-d-figure-06.png"/>
				</fig>
				
<fig id="fig06b" fig-type="figure" position="float">
					<label>Figure 6b.</label>
					<caption>
						<p>Frame-to-frame interval for a human participant at 50 Hz and 500 ms between headbox adjustments.</p>
					</caption>
					<graphic id="graph06b" xlink:href="jemr-11-01-d-figure-07.png"/>
				</fig>

<fig id="fig06c" fig-type="figure" position="float">
					<label>Figure 6c.</label>
					<caption>
						<p>Frame-to-frame interval for one fixation at 200 Hz and 100 ms between headbox adjustments.</p>
					</caption>
					<graphic id="graph06c" xlink:href="jemr-11-01-d-figure-08.png"/>
				</fig>

<fig id="fig06d" fig-type="figure" position="float">
					<label>Figure 6d.</label>
					<caption>
						<p>Frame-to-frame interval for one fixation at 300 Hz and 100 ms between headbox adjustments.</p>
					</caption>
					<graphic id="graph06d" xlink:href="jemr-11-01-d-figure-09.png"/>
				</fig>

<fig id="fig07" fig-type="figure" position="float">
					<label>Figure 7.</label>
					<caption>
						<p>Average delivery delay for a range of framerates at 6 distinct values for the interval between headbox adjustments. Note that the points on the X-axis are categorical and are shown with horizontal offsets to avoid overlapping of the vertical bars (95% confidence intervals). These offsets do not have meaning.</p>
					</caption>
					<graphic id="graph07" xlink:href="jemr-11-01-d-figure-10.png"/>
				</fig>				

<p>The expected trend of shorter delivery delays as the
framerate is increased is confirmed for 50 Hz &#x2013; 250 Hz.
The inconsistency at higher framerates could possibly be
attributed to the fact that the headbox was adjusted twice
every t milliseconds during the simulation of head
movements, as explained above, and that the successive
adjustments could not be accommodated in a single frame (cf.
Figure 6d).</p>

        <p>Table 1 shows the results of a factorial analysis 
of variance for the effects of the interval between headbox 
adjustments and framerate on delivery delay. Since the interaction 
between the factors was significant (&#x3B1; =.01), a series of separate 
analyses of variance was done for each of the framerates that were 
tested (Table 2). The entries under <italic>post hoc</italic> list the individual 
differences that were significant according to Tukey's HSD post hoc 
test for unequal N. As expected, a significant increase in 
delivery delay for headbox adjustments every 100 ms was found 
at the lower framerates for both artificial and human eyes.</p>

<table-wrap id="t01" position="float">
					<label>Table 1</label>
					<caption>
						<p>The effect of framerate and headbox adjustment interval on delivery delay (**significant at &#x3B1;=.01).</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">Artificial eyes</td>
            <td rowspan="1" colspan="1"/>			
            <td rowspan="1" colspan="1">Human eyes</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Factor</td>
            <td rowspan="1" colspan="1">df</td>
            <td rowspan="1" colspan="1">F</td>
            <td rowspan="1" colspan="1">p</td>
            <td rowspan="1" colspan="1">F</td>
            <td rowspan="1" colspan="1">p</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">Adjustment interval</td>
            <td rowspan="1" colspan="1">5</td>
            <td rowspan="1" colspan="1">23.953</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">8.499</td>
            <td rowspan="1" colspan="1">.000**</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Framerate</td>
            <td rowspan="1" colspan="1">6</td>
            <td rowspan="1" colspan="1">682.7</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">599.5</td>
            <td rowspan="1" colspan="1">.000**</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Interaction</td>
            <td rowspan="1" colspan="1">30</td>
            <td rowspan="1" colspan="1">3.521</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">2.291</td>
            <td rowspan="1" colspan="1">.000**</td>
          </tr>
						</tbody>
					</table>					
					</table-wrap>
					
<table-wrap id="t02" position="float">
					<label>Table 2</label>
					<caption>
<p>The effect of adjustment interval on delivery delay while controlling for framerate (*significant at &#x3B1;=.05; **significant at &#x3B1;=.01).</p>					
					</caption>				
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">Artificial eyes</td>
          </tr>												
          <tr>
            <td rowspan="1" colspan="1">FPS</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">post hoc*</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>			
          </tr>
          <tr>
            <td rowspan="1" colspan="1">(Hz)</td>
            <td rowspan="1" colspan="1">F(5)</td>
            <td rowspan="1" colspan="1">p</td>
            <td rowspan="1" colspan="1">N</td>
            <td rowspan="1" colspan="1">9</td>
            <td rowspan="1" colspan="1">7</td>
            <td rowspan="1" colspan="1">5</td>
            <td rowspan="1" colspan="1">3</td>
            <td rowspan="1" colspan="1">1</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">50</td>
            <td rowspan="1" colspan="1">11.433</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">All</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">100</td>
            <td rowspan="1" colspan="1">20.236</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">All</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">150</td>
            <td rowspan="1" colspan="1">6.697</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">All</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">200</td>
            <td rowspan="1" colspan="1">13.569</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">9,7,5</td>
            <td rowspan="1" colspan="1">N,1</td>
            <td rowspan="1" colspan="1">N,3,1</td>
            <td rowspan="1" colspan="1">N,1</td>
            <td rowspan="1" colspan="1">7,1</td>
            <td rowspan="1" colspan="1">9,7,5,3</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">250</td>
            <td rowspan="1" colspan="1">0.969</td>
            <td rowspan="1" colspan="1">.436</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">300</td>
            <td rowspan="1" colspan="1">1.323</td>
            <td rowspan="1" colspan="1">.252</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">350</td>
            <td rowspan="1" colspan="1">3.681</td>
            <td rowspan="1" colspan="1">.003**</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">N,9,7</td>
          </tr>
						</tbody>
					</table>
					<table-wrap-foot>
						<fn id="FN2">
<p>*post hoc: Individual significant (&#x3B1;=.05) differences according to Tukey's HSD for unequal N (N=No adjustment, 9=900 ms, 7=700 ms, 5=500 ms, 3=300 ms, 1=100 ms)</p>
						</fn>
					</table-wrap-foot>					
					</table-wrap>					

      </sec>
	  
      <sec id="S6b">
        <title>Effect of headbox adjustments on processing time</title>
		
        <p>Since processing of delivered frames is done on the
host computer, processing time is expected to be
independent of headbox adjustments that are done on the
camera board. One would also expect the processing time
to be independent of framerate as the processing time
should only depend on the algorithms that are used to
locate the feature points and map them to gaze
coordinates.</p>

        <p>Figure 8 shows the average processing time for each
value of the adjustment intervals and framerates that were
tested for human participants and artificial eyes.</p>

<fig id="fig08" fig-type="figure" position="float">
					<label>Figure 8.</label>
					<caption>
						<p>Average processing time for a range of framerates at 6 distinct values for the interval between headbox adjustments.</p>
					</caption>
					<graphic id="graph08" xlink:href="jemr-11-01-d-figure-11.png"/>
				</fig>

        <p>For artificial eyes, a repeated-measures analysis of
variance for the effects of framerate and headbox
adjustment interval on processing time indicated no interaction
between these two factors (Table 3). The main effect of
headbox adjustment interval was not significant (&#x3B1;=.05)
but that of framerate was significant. There was,
however, a significant interaction between the factors for human
participants, and therefore a series of separate analyses
of variance was done for each of the framerates that were
tested (Table 4). The entries under <italic>post hoc</italic> list the
individual differences that were significant according to
Tukey's HSD post hoc test for unequal N. A significant
(&#x3B1;=.05) effect was found for only two of the seven levels
of framerate.</p>

<table-wrap id="t03" position="float">
					<label>Table 3</label>
					<caption>
<p>The effect of framerate and headbox adjustment interval on processing time per frame (*significant at &#x3B1;=.05; **significant at &#x3B1;=.01).</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">Artificial eyes</td>
            <td rowspan="1" colspan="1"/>			
            <td rowspan="1" colspan="1">Human eyes</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Factor</td>
            <td rowspan="1" colspan="1">df</td>
            <td rowspan="1" colspan="1">F</td>
            <td rowspan="1" colspan="1">p</td>
            <td rowspan="1" colspan="1">F</td>
            <td rowspan="1" colspan="1">p</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">Adj interval</td>
            <td rowspan="1" colspan="1">5</td>
            <td rowspan="1" colspan="1">1.986</td>
            <td rowspan="1" colspan="1">.077</td>
            <td rowspan="1" colspan="1">159.55</td>
            <td rowspan="1" colspan="1">.000**</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Framerate</td>
            <td rowspan="1" colspan="1">6</td>
            <td rowspan="1" colspan="1">1163.9</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">2.077</td>
            <td rowspan="1" colspan="1">.065</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Interaction</td>
            <td rowspan="1" colspan="1">30</td>
            <td rowspan="1" colspan="1">0.663</td>
            <td rowspan="1" colspan="1">.919</td>
            <td rowspan="1" colspan="1">1.505</td>
            <td rowspan="1" colspan="1">.038*</td>
          </tr>
						</tbody>
					</table>			
					</table-wrap>
					
<table-wrap id="t04" position="float">
					<label>Table 4</label>
					<caption>
						<p>The effect of headbox adjustment interval on processing time while controlling for framerate for human participants (*significant at &#x3B1;=.05; **significant at &#x3B1;=.01).</p>					
					</caption>				
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>											
          <tr>
            <td rowspan="1" colspan="1">Framerate</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">post hoc*</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>			
          </tr>
          <tr>
            <td rowspan="1" colspan="1">(Hz)</td>
            <td rowspan="1" colspan="1">F(5)</td>
            <td rowspan="1" colspan="1">p</td>
            <td rowspan="1" colspan="1">N</td>
            <td rowspan="1" colspan="1">9</td>
            <td rowspan="1" colspan="1">7</td>
            <td rowspan="1" colspan="1">5</td>
            <td rowspan="1" colspan="1">3</td>
            <td rowspan="1" colspan="1">1</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">50</td>
            <td rowspan="1" colspan="1">2.622</td>
            <td rowspan="1" colspan="1">.023*</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">7</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">100</td>
            <td rowspan="1" colspan="1">1.126</td>
            <td rowspan="1" colspan="1">.345</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">150</td>
            <td rowspan="1" colspan="1">2.075</td>
            <td rowspan="1" colspan="1">.067</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">200</td>
            <td rowspan="1" colspan="1">6.434</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">7,5,3,1</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">N</td>
            <td rowspan="1" colspan="1">N</td>
            <td rowspan="1" colspan="1">N</td>
            <td rowspan="1" colspan="1">N</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">250</td>
            <td rowspan="1" colspan="1">1.109</td>
            <td rowspan="1" colspan="1">.354</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">300</td>
            <td rowspan="1" colspan="1">0.129</td>
            <td rowspan="1" colspan="1">.986</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">350</td>
            <td rowspan="1" colspan="1">0.397</td>
            <td rowspan="1" colspan="1">.851</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
          </tr>
						</tbody>
					</table>
					<table-wrap-foot>
						<fn id="FN4">
						<p>*post hoc: Individual significant (&#x3B1;=.05) differences according to Tukey's HSD for unequal N (N=No adjustment, 9=900 ms, 7=700 ms, 5=500 ms, 3=300 ms, 1=100 ms)</p>
						</fn>
					</table-wrap-foot>					
					</table-wrap>						

<p>It is not clear why processing time is shorter at higher
framerates (cf. Figure 8). One could speculate that the
operating system assigns resources (CPU time and
memory) where they are needed most and that the
higher framerates attract more resources to the eye
tracking application. The allocation of more and more
resources can, of course, only persist until capacity is
reached. For artificial eyes, the processing time stabilises
at 0.75 ms &#x2013; 0.8 ms for framerates of 200 Hz and higher
(cf. Figure 8).</p>

        <p>No meaning should be attached to the fact that the
processing time for human eyes was higher than that for
artificial eyes since the pupil sizes differed and a larger
cluster of pupil pixels had to be examined for human
participants. Once again, this is dependent on the specific
tracking algorithm and other algorithms might produce
different absolute values for processing time.</p>
      </sec>
	  
      <sec id="S6c">
        <title>The effect of headbox adjustments on robustness</title>
		
        <p>Figure 9 shows the average tracking percentage 
for each of the combinations of framerate and headbox adjustment 
interval for human and artificial eyes separately. Table 5 shows 
the results of a factorial analysis of variance for the effects 
of the interval between headbox adjustments and framerate on robustness. 
Since the interaction between the factors was significant (&#x3B1;=.01), 
a series of separate analyses of variance was done for each of the
framerates that were tested (Table 6). The entries under
<italic>post hoc</italic> list the individual differences that were
significant according to Tukey's HSD post hoc test for unequal
N.</p>
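<p>Assuming that the tracking percentage (robustness) is the share of delivered frames in which the eyes were successfully detected, it can be sketched as follows (names are assumptions):</p>

```python
# Illustrative sketch of a tracking-percentage (robustness) measure:
# the percentage of delivered frames in which the eyes were detected.

def tracking_percentage(detected):
    """`detected` is a list of booleans, one per delivered frame:
    True if the eye features were found in that frame."""
    if not detected:
        return 0.0
    return 100.0 * sum(detected) / len(detected)
```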

<fig id="fig09" fig-type="figure" position="float">
					<label>Figure 9.</label>
					<caption>
						<p>Average robustness for a range of framerates at 6 distinct values for the interval between headbox adjustments.</p>
					</caption>
					<graphic id="graph09" xlink:href="jemr-11-01-d-figure-12.png"/>
				</fig>
				
<table-wrap id="t05" position="float">
					<label>Table 5</label>
					<caption>
						<p>The effect of framerate and headbox adjustment interval on robustness (**significant at &#x3B1;=.01).</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">Artificial eyes</td>
            <td rowspan="1" colspan="1"/>			
            <td rowspan="1" colspan="1">Human eyes</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Factor</td>
            <td rowspan="1" colspan="1">df</td>
            <td rowspan="1" colspan="1">F</td>
            <td rowspan="1" colspan="1">p</td>
            <td rowspan="1" colspan="1">F</td>
            <td rowspan="1" colspan="1">p</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">Adjinterval</td>
            <td rowspan="1" colspan="1">5</td>
            <td rowspan="1" colspan="1">459.80</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">189.66</td>
            <td rowspan="1" colspan="1">.000**</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Framerate</td>
            <td rowspan="1" colspan="1">6</td>
            <td rowspan="1" colspan="1">582.72</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">727.34</td>
            <td rowspan="1" colspan="1">.000**</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Interaction</td>
            <td rowspan="1" colspan="1">30</td>
            <td rowspan="1" colspan="1">40.398</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">19.396</td>
            <td rowspan="1" colspan="1">.000**</td>
          </tr>
						</tbody>
					</table>			
					</table-wrap>

<table-wrap id="t06" position="float">
					<label>Table 6</label>
					<caption>
						<p>The effect of headbox adjustment interval on robustness while controlling for framerate (*significant at &#x3B1;=.05; **significant at &#x3B1;=.01).</p>
					</caption>				
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">Artificial eyes</td>
          </tr>												
          <tr>
            <td rowspan="1" colspan="1">Framerate</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">post hoc*</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>			
          </tr>
          <tr>
            <td rowspan="1" colspan="1">(Hz)</td>
            <td rowspan="1" colspan="1">F(5)</td>
            <td rowspan="1" colspan="1">p</td>
            <td rowspan="1" colspan="1">N</td>
            <td rowspan="1" colspan="1">9</td>
            <td rowspan="1" colspan="1">7</td>
            <td rowspan="1" colspan="1">5</td>
            <td rowspan="1" colspan="1">3</td>
            <td rowspan="1" colspan="1">1</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">50</td>
            <td rowspan="1" colspan="1">3685.1</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">N,9,3,1</td>
            <td rowspan="1" colspan="1">N,9,3,1</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">All</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">100</td>
            <td rowspan="1" colspan="1">1209.6</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">N,9,3,1</td>
            <td rowspan="1" colspan="1">N,9,3,1</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">All</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">150</td>
            <td rowspan="1" colspan="1">976.25</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">N,9,3,1</td>
            <td rowspan="1" colspan="1">N,9,3,1</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">All</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">200</td>
            <td rowspan="1" colspan="1">133.86</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">N,3,1</td>
            <td rowspan="1" colspan="1">N,1</td>
            <td rowspan="1" colspan="1">N,1</td>
            <td rowspan="1" colspan="1">N,1</td>
            <td rowspan="1" colspan="1">All</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">250</td>
            <td rowspan="1" colspan="1">13.374</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">3,1</td>
            <td rowspan="1" colspan="1">3,1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">All</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">300</td>
            <td rowspan="1" colspan="1">10.658</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">3,1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">9</td>
            <td rowspan="1" colspan="1">N,9,7,5</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">350</td>
            <td rowspan="1" colspan="1">6.720</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">All</td>
          </tr>

          <tr>
            <td rowspan="1" colspan="1">Human eyes</td>
          </tr>												
          <tr>
            <td rowspan="1" colspan="1">Framerate</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">post hoc*</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>			
          </tr>
          <tr>
            <td rowspan="1" colspan="1">(Hz)</td>
            <td rowspan="1" colspan="1">F(5)</td>
            <td rowspan="1" colspan="1">p</td>
            <td rowspan="1" colspan="1">N</td>
            <td rowspan="1" colspan="1">9</td>
            <td rowspan="1" colspan="1">7</td>
            <td rowspan="1" colspan="1">5</td>
            <td rowspan="1" colspan="1">3</td>
            <td rowspan="1" colspan="1">1</td>
          </tr>

          <tr>
            <td rowspan="1" colspan="1">50</td>
            <td rowspan="1" colspan="1">591.60</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">N,3,1</td>
            <td rowspan="1" colspan="1">N,3,1</td>
            <td rowspan="1" colspan="1">N,3,1</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">All</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">100</td>
            <td rowspan="1" colspan="1">326.47</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">N,3,1</td>
            <td rowspan="1" colspan="1">N,3,1</td>
            <td rowspan="1" colspan="1">N,3,1</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">All</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">150</td>
            <td rowspan="1" colspan="1">244.10</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">N,3,1</td>
            <td rowspan="1" colspan="1">N,3,1</td>
            <td rowspan="1" colspan="1">N,3,1</td>
            <td rowspan="1" colspan="1">All</td>
            <td rowspan="1" colspan="1">All</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">200</td>
            <td rowspan="1" colspan="1">55.854</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">9,3,1</td>
            <td rowspan="1" colspan="1">N,5,1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">9,3,1</td>
            <td rowspan="1" colspan="1">N,5,1</td>
            <td rowspan="1" colspan="1">All</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">250</td>
            <td rowspan="1" colspan="1">4.082</td>
            <td rowspan="1" colspan="1">.001**</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">9,7,5</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">300</td>
            <td rowspan="1" colspan="1">2.326</td>
            <td rowspan="1" colspan="1">.041*</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">350</td>
            <td rowspan="1" colspan="1">2.222</td>
            <td rowspan="1" colspan="1">.051</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
            <td rowspan="1" colspan="1">-</td>
          </tr>
						</tbody>						
					</table>
					<table-wrap-foot>
						<fn id="FN6">
						<p>*post hoc: Individual significant (&#x3B1;=.05) differences according to Tukey's HSD for unequal N (N = None, 9 = 900 ms, 7 = 700 ms, 5 = 500 ms, 3 = 300 ms, 1 = 100 ms).</p>
						</fn>
					</table-wrap-foot>					
					</table-wrap>						

        <p>For artificial eyes, the interval between headbox
adjustments proved to be a significant (&#x3B1;=.05) indicator of
robustness at all framerates. Specifically, Tukey's
post hoc comparison of individual differences revealed
that the robustness when the headbox was not adjusted was
significantly (&#x3B1;=.05) better than when the headbox was
adjusted, regardless of the rate of adjustment. Conversely, when
the headbox was adjusted frequently, at 10 adjustments per second,
the robustness was significantly worse than when it was
adjusted less frequently. This was expected, since one or
more frames are lost at every adjustment of the headbox.</p>

        <p>The same trend was observed for human eyes,
although the effect was less pronounced at the higher
framerates of 250 Hz, 300 Hz and 350 Hz. Over the range
50 Hz &#x2013; 250 Hz, the robustness for human eyes was
roughly on the same level as that of artificial eyes, but
at 300 Hz and 350 Hz it was considerably worse.</p>

        <p>The results for framerate can be grouped into two
clusters, namely &#x2264; 200 Hz and &#x2265; 250 Hz. Within a
cluster, the robustness was roughly the same as long as the
headbox was not adjusted often. This can again be
explained by the fact that only one frame was lost per
adjustment at the lower framerates, whereas more frames
were lost at the higher framerates (cf. Figure 6d).</p>

        <p>For framerates &#x2264; 200 Hz, robustness increased with
framerate as the percentage of lost frames became smaller.
For framerates &#x2265; 250 Hz, robustness decreased as more
and more frames were lost at every adjustment.</p>

        <p>In conclusion, robustness is at its best at lower
framerates and with no headbox adjustments. Infrequent
headbox adjustments, at intervals of 300 ms or longer, do not,
however, affect robustness much.</p>
      </sec>
	  
      <sec id="S6d">
        <title>The effect of headbox adjustments on accuracy</title>
		
        <p>Figure 10 shows the average error for each
combination of framerate and headbox adjustment
interval for human participants. A repeated-measures analysis
of variance for the effects of framerate and headbox
adjustment interval indicated no interaction between the
two factors (Table 7; F(30, 4756) = 0.700, p = .888).</p>

<fig id="fig10" fig-type="figure" position="float">
					<label>Figure 10.</label>
					<caption>
						<p>Average accuracy for a range of framerates at 6 distinct values for the interval between adjustments.</p>
					</caption>
					<graphic id="graph10" xlink:href="jemr-11-01-d-figure-13.png"/>
				</fig>
				
<table-wrap id="t07" position="float">
					<label>Table 7</label>
					<caption>
						<p>The effect of framerate and headbox adjustment interval on accuracy (**significant at &#x3B1;=.01).</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">Factor</td>
            <td rowspan="1" colspan="1">df</td>
            <td rowspan="1" colspan="1">F</td>
            <td rowspan="1" colspan="1">p</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">Adjustment interval</td>
            <td rowspan="1" colspan="1">5</td>
            <td rowspan="1" colspan="1">0.947</td>
            <td rowspan="1" colspan="1">.449</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Framerate</td>
            <td rowspan="1" colspan="1">6</td>
            <td rowspan="1" colspan="1">18.825</td>
            <td rowspan="1" colspan="1">.000**</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Interaction</td>
            <td rowspan="1" colspan="1">30</td>
            <td rowspan="1" colspan="1">0.700</td>
            <td rowspan="1" colspan="1">.888</td>
          </tr>
						</tbody>
					</table>
					</table-wrap>					
		  
        <p>Framerate had a significant main effect
on accuracy (F(6, 4756) = 18.825, p = .000). The better
accuracy at 200 Hz and 250 Hz than at the other
framerates can probably be ascribed to participant
calibration having been done at 200 Hz.
This was not regarded as problematic, since the study
focused on the effect of headbox adjustments while
controlling for framerate. Moreover, the interest is not
so much in absolute accuracy as in the difference in
accuracy with and without headbox adjustments. In this
respect, the main effect of headbox adjustment interval
on accuracy was not significant
(F(5, 4756) = 0.947, p = .449).</p>
      </sec>
	  
      <sec id="S6e">
        <title>The effect of headbox adjustments on precision</title>
		
        <p>Figure 11 shows the average STD precision for each
of the combinations of framerate and headbox adjustment
interval for human and artificial eyes separately. Table 8
shows the results of a factorial analysis of variance for
the effects of the interval between headbox adjustments
and framerate on precision. Since the interaction between
the factors was significant (&#x3B1;=.01), a series of separate
analyses of variance was done for each of the framerates
that were tested (Table 9). The entries under <italic>post hoc</italic> list
the individual differences that were significant according
to Tukey's HSD post hoc test for unequal N.</p>
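As a point of reference for the measure being analysed, STD precision can be sketched as follows (an illustrative formulation, not necessarily the authors' exact implementation): the dispersion of gaze samples around their mean position during a fixation.

```python
# Hedged sketch of STD precision: standard deviation of gaze samples
# (in degrees) around their mean position during a fixation. This is
# one common formulation, assumed here for illustration.
import math

def std_precision(xs, ys):
    mx = sum(xs) / len(xs)          # mean gaze x
    my = sum(ys) / len(ys)          # mean gaze y
    # mean squared 2-D deviation from the mean position
    var = sum((x - mx) ** 2 + (y - my) ** 2 for x, y in zip(xs, ys)) / len(xs)
    return math.sqrt(var)

print(std_precision([0.0, 0.0, 2.0, 2.0], [0.0, 0.0, 0.0, 0.0]))  # 1.0
```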

<fig id="fig11" fig-type="figure" position="float">
					<label>Figure 11.</label>
					<caption>
						<p>Average precision for a range of framerates at 6 distinct values for the interval between headbox adjustments.</p>
					</caption>
					<graphic id="graph11" xlink:href="jemr-11-01-d-figure-14.png"/>
				</fig>
				
<table-wrap id="t08" position="float">
					<label>Table 8</label>
					<caption>
						<p>The effect of framerate and headbox adjustment interval on precision (**significant at &#x3B1;=.01).</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">Artificial eyes</td>
            <td rowspan="1" colspan="1"/>			
            <td rowspan="1" colspan="1">Human eyes</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Factor</td>
            <td rowspan="1" colspan="1">df</td>
            <td rowspan="1" colspan="1">F</td>
            <td rowspan="1" colspan="1">p</td>
            <td rowspan="1" colspan="1">F</td>
            <td rowspan="1" colspan="1">p</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">Adjustment interval</td>
            <td rowspan="1" colspan="1">5</td>
            <td rowspan="1" colspan="1">16.69</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">7.98</td>
            <td rowspan="1" colspan="1">.000**</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Framerate</td>
            <td rowspan="1" colspan="1">6</td>
            <td rowspan="1" colspan="1">67.35</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">39.00</td>
            <td rowspan="1" colspan="1">.000**</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Interaction</td>
            <td rowspan="1" colspan="1">30</td>
            <td rowspan="1" colspan="1">2.96</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">2.34</td>
            <td rowspan="1" colspan="1">.000**</td>
          </tr>
						</tbody>
					</table>			
					</table-wrap>

<table-wrap id="t09" position="float">
					<label>Table 9</label>
					<caption>
						<p>The effect of headbox adjustment interval on precision while controlling for framerate (*significant at &#x3B1;=.05; **significant at &#x3B1;=.01).</p>					
					</caption>				
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">Artificial eyes</td>
          </tr>												
          <tr>
            <td rowspan="1" colspan="1">Framerate</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">post hoc*</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>			
          </tr>
          <tr>
            <td rowspan="1" colspan="1">(Hz)</td>
            <td rowspan="1" colspan="1">F(5)</td>
            <td rowspan="1" colspan="1">p</td>
            <td rowspan="1" colspan="1">N</td>
            <td rowspan="1" colspan="1">9</td>
            <td rowspan="1" colspan="1">7</td>
            <td rowspan="1" colspan="1">5</td>
            <td rowspan="1" colspan="1">3</td>
            <td rowspan="1" colspan="1">1</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">50</td>
            <td rowspan="1" colspan="1">1.948</td>
            <td rowspan="1" colspan="1">.084</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">100</td>
            <td rowspan="1" colspan="1">2.665</td>
            <td rowspan="1" colspan="1">.021*</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">N</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">150</td>
            <td rowspan="1" colspan="1">2.195</td>
            <td rowspan="1" colspan="1">.053</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">200</td>
            <td rowspan="1" colspan="1">7.814</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">7,9</td>
            <td rowspan="1" colspan="1">N,3</td>
            <td rowspan="1" colspan="1">N,3,1</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">9,7</td>
            <td rowspan="1" colspan="1">7</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">250</td>
            <td rowspan="1" colspan="1">1.806</td>
            <td rowspan="1" colspan="1">.109</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">300</td>
            <td rowspan="1" colspan="1">10.863</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">N,9,7,5,3</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">350</td>
            <td rowspan="1" colspan="1">7.449</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">N,9,7,5,3</td>
          </tr>

          <tr>
            <td rowspan="1" colspan="1">Human eyes</td>
          </tr>												
          <tr>
            <td rowspan="1" colspan="1">Framerate</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">post hoc*</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>			
          </tr>
          <tr>
            <td rowspan="1" colspan="1">(Hz)</td>
            <td rowspan="1" colspan="1">F(5)</td>
            <td rowspan="1" colspan="1">p</td>
            <td rowspan="1" colspan="1">N</td>
            <td rowspan="1" colspan="1">9</td>
            <td rowspan="1" colspan="1">7</td>
            <td rowspan="1" colspan="1">5</td>
            <td rowspan="1" colspan="1">3</td>
            <td rowspan="1" colspan="1">1</td>
          </tr>

          <tr>
            <td rowspan="1" colspan="1">50</td>
            <td rowspan="1" colspan="1">1.732</td>
            <td rowspan="1" colspan="1">.125</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">100</td>
            <td rowspan="1" colspan="1">1.533</td>
            <td rowspan="1" colspan="1">.177</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">150</td>
            <td rowspan="1" colspan="1">0.633</td>
            <td rowspan="1" colspan="1">.674</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">200</td>
            <td rowspan="1" colspan="1">0.866</td>
            <td rowspan="1" colspan="1">.504</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">250</td>
            <td rowspan="1" colspan="1">1.279</td>
            <td rowspan="1" colspan="1">.271</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">300</td>
            <td rowspan="1" colspan="1">6.497</td>
            <td rowspan="1" colspan="1">.000**</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">N,9,7,5,3</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">350</td>
            <td rowspan="1" colspan="1">4.237</td>
            <td rowspan="1" colspan="1">.001**</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">1</td>
            <td rowspan="1" colspan="1">N,9,7,5,3</td>
          </tr>
						</tbody>						
					</table>
					<table-wrap-foot>
						<fn id="FN9">
						<p>*post hoc: Individual significant (&#x3B1;=.05) differences according to Tukey's HSD for unequal N (N = None, 9 = 900 ms, 7 = 700 ms, 5 = 500 ms, 3 = 300 ms, 1 = 100 ms).</p>
						</fn>
					</table-wrap-foot>					
					</table-wrap>					

        <p>The interval between headbox adjustments was found 
to have a highly significant (&#x3B1;=.01) effect on precision at the 
higher framerates of 300 Hz and 350 Hz for both artificial and 
human eyes. Specifically, Tukey's post hoc comparison of 
individual differences revealed that when the 
headbox was adjusted frequently, at 10 adjustments per second 
(100 ms intervals), the precision was significantly worse 
than when it was adjusted less frequently or not 
adjusted at all.</p>

        <p>As expected, physiological factors result in precision 
values for human participants that are slightly worse than those 
for artificial eyes. An interesting trend was noted, namely that 
lower framerates led to better precision. In other words, the 
best precision is obtained at lower framerates, with or without 
infrequent headbox adjustments.</p>
      </sec>
    </sec>
	
    <sec id="S7">
      <title>Summary</title>
	  
      <p>Table 10 shows the significance of the effect of headbox
adjustments on the various elements of data quality at specific
framerates. For all elements but accuracy, the data
captured from artificial eyes was used.</p>

<table-wrap id="t010" position="float">
					<label>Table 10</label>
					<caption>
						<p>Significance (&#x3B1;=.05) of headbox adjustments on data quality for artificial eyes at specific framerates</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">Framerate (Hz)</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Factor</td>
            <td rowspan="1" colspan="1">50</td>
            <td rowspan="1" colspan="1">100</td>
            <td rowspan="1" colspan="1">150</td>
            <td rowspan="1" colspan="1">200</td>
            <td rowspan="1" colspan="1">250</td>
            <td rowspan="1" colspan="1">300</td>
            <td rowspan="1" colspan="1">350</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">Delivery delay</td>
            <td rowspan="1" colspan="1">&#x2022;</td>
            <td rowspan="1" colspan="1">&#x2022;</td>
            <td rowspan="1" colspan="1">&#x2022;</td>
            <td rowspan="1" colspan="1">&#x2022;</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Processing time</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Robustness</td>
            <td rowspan="1" colspan="1">&#x2022;</td>
            <td rowspan="1" colspan="1">&#x2022;</td>
            <td rowspan="1" colspan="1">&#x2022;</td>
            <td rowspan="1" colspan="1">&#x2022;</td>
            <td rowspan="1" colspan="1">&#x2022;</td>
            <td rowspan="1" colspan="1">&#x2022;</td>
            <td rowspan="1" colspan="1">&#x2022;</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Accuracy</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Precision</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">&#x2022;</td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">&#x2022;</td>
            <td rowspan="1" colspan="1">&#x2022;</td>
          </tr>
						</tbody>
					</table>
					</table-wrap>	

        <p>When interpreting the results, it should be 
kept in mind that the absolute values of data quality 
are not as important as the effect of headbox adjustments 
on data quality. In other words, the absolute accuracy 
of about 0.6&#xB0; should not be compared with that of other 
systems, since data was available for only two 
participants.</p>

        <p>The main findings are summarised per element 
of data quality in the following paragraphs.</p>

      <sec id="S7a">
        <title>Delivery delay</title>
      <p>Delivery delay refers to the difference in time between
when a frame is expected to arrive at the host computer
and when it actually arrives. A significant (&#x3B1;=.05)
increase in delivery delay was found at the lower
framerates for both artificial and human eyes when the headbox
was adjusted frequently. This means that infrequent
headbox adjustments, as would be the case during normal
recording of human participants, will not affect
delivery delay.</p>
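      <p>As an illustration, delivery delay can be estimated per frame from host-side arrival timestamps and the nominal frame period. The following Python sketch is illustrative only; the function and variable names are assumptions, not part of the tracker's software.</p>

```python
def delivery_delays(arrival_times_ms, framerate_hz):
    """Delay (ms) of each frame relative to its expected arrival time,
    taking the first frame's arrival as the reference point.
    Positive values mean the frame arrived late."""
    period_ms = 1000.0 / framerate_hz
    t0 = arrival_times_ms[0]
    return [t - (t0 + i * period_ms)
            for i, t in enumerate(arrival_times_ms)]

# At 200 Hz the frame period is 5 ms, so a frame arriving at 15.9 ms
# (expected at 15 ms) is 0.9 ms late.
delays = delivery_delays([0.0, 5.2, 10.1, 15.9], framerate_hz=200)
```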
      </sec>
	  
      <sec id="S7b">
        <title>Processing time</title>
		
        <p>Since headbox adjustments are done on the camera
board and processing of frames is done on the host
computer, headbox adjustments were confirmed to have an
insignificant effect on processing time. For the computer
that was used in this study, processing time for artificial
eyes ranged from 0.75 ms at 350 Hz to 0.96 ms at 50 Hz.</p>
      </sec>
	  
      <sec id="S7c">
        <title>Robustness</title>
		
        <p>Robustness refers to the amount of data loss during
eye tracking and is expressed as a percentage in terms of
the expected number of data frames for the set framerate.
For artificial eyes, the tracking percentage was above
95% at framerates of 200 Hz or less when there were no
headbox adjustments. At these lower framerates, the
tracking percentage dropped a little, but stayed above
90% when only one or two headbox adjustments were
made per second. For higher framerates and for 3 or more
adjustments per second, the tracking percentage dropped
to about 70%-80%.</p>
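      <p>The tracking percentage used here can be computed directly from the number of received frames and the number expected for the set framerate; a minimal sketch, with illustrative names:</p>

```python
def tracking_percentage(frames_received, duration_s, framerate_hz):
    """Robustness: frames actually received as a percentage of the
    number expected for the set framerate over the recording."""
    expected = duration_s * framerate_hz
    return 100.0 * frames_received / expected

# 1900 frames over a 10 s recording at 200 Hz (2000 expected) -> 95.0
```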

        <p>The robustness when the headbox was not adjusted
was significantly (&#x3B1;=.05) better than when
the headbox was adjusted, irrespective of the rate of
adjustments. On the other hand, when the headbox was
adjusted frequently at 10 adjustments per second (100 ms
intervals), the robustness was significantly worse than
when it was adjusted less frequently.</p>

        <p>In summary, robustness is at its best at lower
framerates and with no headbox adjustments. Infrequent
headbox adjustments at intervals of 300 ms or longer do
not, however, affect robustness significantly.</p>
      </sec>
	  
      <sec id="S7d">
        <title>Accuracy</title>
		
        <p>Accuracy is a measurement of the offset between
actual gaze position and reported gaze position. It was
found that the interval between headbox adjustments did
not have a significant effect on accuracy. This should be
understood against the background that robustness is
affected by headbox adjustments. Calculation of point of
regard is done on the host computer for received data. In
other words, headbox adjustments lead to data loss, but
when there is data, it is mostly accurate.</p>
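      <p>A hedged sketch of how such an angular offset is typically computed (the geometry and names below are illustrative, not the exact procedure used in this study):</p>

```python
import math

def accuracy_deg(gaze_mm, target_mm, view_dist_mm):
    """Mean angular offset (degrees) between reported gaze samples and
    the known target position, with positions given in mm on the
    stimulus plane and a fixed perpendicular viewing distance."""
    offsets = [math.hypot(x - target_mm[0], y - target_mm[1])
               for x, y in gaze_mm]
    mean_offset = sum(offsets) / len(offsets)
    return math.degrees(math.atan2(mean_offset, view_dist_mm))

# Samples offset by 5 mm at a 600 mm viewing distance give roughly 0.48 deg.
```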
      </sec>
	  
      <sec id="S7e">
        <title>Precision</title>
		
        <p>Precision is an indication of the spread of data points
around the centre. It was found that headbox adjustments
affected precision quite significantly at the higher
framerates of 300 Hz and 350 Hz for both artificial and human
eyes, but only when the headbox was adjusted
frequently at 10 adjustments per second (100 ms intervals).
At lower framerates and with less frequent headbox
adjustments, STD precision for artificial eyes was in the
order of 0.10&#xB0;-0.14&#xB0;. For human eyes, the average
precision ranged between 0.20&#xB0; and 0.26&#xB0; as long as no more
than 3 headbox adjustments were done per second.</p>
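      <p>STD precision can be sketched as the standard deviation of the gaze samples around their own centroid; the implementation below is an illustrative Python version, not the code used in the study:</p>

```python
import math

def std_precision_deg(gaze_deg):
    """STD precision: standard deviation (degrees) of gaze samples
    around their own centroid, pooled over the x and y directions."""
    xs, ys = zip(*gaze_deg)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    var = sum((x - mx) ** 2 + (y - my) ** 2
              for x, y in gaze_deg) / len(gaze_deg)
    return math.sqrt(var)

# Two samples 0.2 deg apart on the x axis give an STD precision of 0.1 deg.
```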
      </sec>
    </sec>
	
    <sec id="S8">
      <title>Conclusions</title>
	  
      <p>The framerate of a CMOS camera can be increased by
sending only a part of the image taken by the camera
through to the computer for processing. This means that
the recording window must be adjusted in real-time to
follow the eyes as the head moves around. The purpose
of this paper was to evaluate the impact of these
adjustments of the recording window on the data quality of the
eye tracker.</p>
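      <p>The framerate gain from a smaller recording window can be approximated by assuming that sensor readout time scales with the number of rows transferred. This is a rough illustrative model under that assumption, not the camera's actual timing specification:</p>

```python
def roi_framerate(full_fps, full_rows, roi_rows, overhead_frac=0.0):
    """Rough estimate of the achievable framerate when only roi_rows of
    the sensor's full_rows are read out. overhead_frac is the fraction
    of the full frame time spent on fixed per-frame overhead."""
    full_time = 1.0 / full_fps
    roi_time = full_time * (overhead_frac
                            + (1.0 - overhead_frac) * roi_rows / full_rows)
    return 1.0 / roi_time

# A sensor running at 50 fps over 1024 rows could reach roughly 200 fps
# when only 256 rows are transferred (ignoring overhead).
```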

      <p>One or two headbox adjustments per second, as would
normally be the case during recording of human
participants, will not have an effect on delivery delay. At a
specific framerate, headbox adjustments have no effect
on processing time on the host computer.</p>

      <p>Likewise, infrequent headbox adjustments at intervals
of 300 ms or longer do not noticeably affect robustness.
For the data that is delivered to the host computer,
the accuracy will not be affected by headbox adjustments.</p>

      <p>Headbox adjustments affect precision at the higher
framerates of 300 Hz and 350 Hz for both artificial and
human eyes, but only when the headbox is adjusted
frequently at 10 adjustments per second (100 ms
intervals).</p>

      <p>Taking all of the above into consideration, it can be
concluded that a CMOS camera that allows a smaller
recording window to be sent to the host computer for
processing can be used to achieve higher framerates than
would normally be possible, provided that the number of
adjustments of the recording window to follow the eyes in real
time is limited to once or twice per second. The number
of adjustments per second can be reduced by using a
larger recording window, but that will reduce the
maximum framerate that can be achieved.</p>
    </sec>
	
    <sec id="S9">
      <title>Limitations and Future Research</title>
	  
      <p>The results above pertain to a specific camera model
and tracking algorithm. Although it is expected that the
results can be generalised even to high-end
commercial trackers (see, for example, Holmqvist and Andersson
(
        <xref ref-type="bibr" rid="b20">20</xref>, p. 168
		)), future research should include other models and
types of cameras to investigate the effect of headbox
adjustments on data quality.</p>

      <p>Specifically, the current camera does not allow
segmentation of the recording window to have separate areas
for the two eyes. That would have allowed even higher
framerates because it would exclude the large area
between the eyes which is currently also transferred to the
computer and processed. Alternatively, at the same
framerates, the margins around the eyes can be enlarged
which would allow for a larger headbox and fewer
adjustments of the recording window segments.</p>

      <p>Furthermore, more expensive cameras would
probably be less susceptible to delivery delays during
adjustments of the recording window. This should be
investigated.</p>

      <p>One could also experiment with different focal
lengths of the lens. A shorter focal length will reduce the
size of the features and consequently enlarge the margins
between the pupil and the borders of the recording
window. This will allow for more head movement but
probably at the cost of tracking accuracy.</p>

      <p>The rolling shutter of the camera that was used in this
study might have had an effect on the results during fast
or jerky head movements. Although the focus of this
paper was on the adjustment of the headbox during
smooth movements, the experiments could be repeated
with a camera with a global shutter.</p>

      <p>For this experiment, the size of the recording window
was fixed. A larger recording window will mean fewer
headbox adjustments and will consequently have a
smaller effect on data quality. Future experiments could
include the size of the recording window as a factor. A
generalised procedure can be established that will allow
eye tracker builders to determine the smallest recording
window for the specific camera model that will not affect
data quality.</p>

      <p>The adjustment algorithm in Figure 2 is based on the
margins between the pupils and the edge of the recording
window. The algorithm could instead be adapted to adjust
the recording window such that the centroid between the
pupils is centred in the window.</p>
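      <p>The suggested centroid-based alternative can be sketched as follows. The pupil coordinates, window size, and sensor limits are illustrative names, not the camera's actual API.</p>

```python
def recentre_window(left_pupil, right_pupil, win_w, win_h,
                    sensor_w, sensor_h):
    """Top-left corner of a win_w x win_h recording window whose centre
    is the centroid between the two pupils, clamped so the window stays
    within the sensor area."""
    cx = (left_pupil[0] + right_pupil[0]) / 2.0
    cy = (left_pupil[1] + right_pupil[1]) / 2.0
    x = min(max(cx - win_w / 2.0, 0), sensor_w - win_w)
    y = min(max(cy - win_h / 2.0, 0), sensor_h - win_h)
    return int(x), int(y)

# Pupils at (400, 300) and (600, 300): centroid (500, 300), so a
# 400 x 200 window is placed at (300, 200) on a 1280 x 1024 sensor.
```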
    </sec>
  </body>
   
  <back>
<ref-list>
<ref id="b1"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Abe</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Ohi</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Ohyama</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2007</year>). <article-title>An Eye- Gaze Input System Using Information on Eye Movement History.</article-title> In <person-group person-group-type="editor"><string-name><given-names>C.</given-names> <surname>Stephanidis</surname></string-name> <role>(Ed.)</role></person-group>, Universal Access in HCI: <source>Proceedings, Part II of HCII2007</source>, <conf-loc>Beijing, China</conf-loc>, <conf-date>July 22-27, 2007</conf-date>. (pp. <fpage>721</fpage>-<lpage>729</lpage>). <publisher-name>Springer Berlin Heidelberg</publisher-name> <pub-id pub-id-type="doi">10.1007/978-3-540-73281-5_79</pub-id></mixed-citation></ref>
<ref id="b2"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Andersson</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2010</year>). Sampling frequency and eye-tracking measures: how speed affects durations, latencies, and more. 2010, 3(3). doi:<pub-id pub-id-type="doi">10.16910/jemr.3.3.6</pub-id></mixed-citation></ref>
<ref id="b3"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Baluja</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Pomerleau</surname>, <given-names>D.</given-names></string-name></person-group> (<year>1994</year>). <article-title>Non-Intrusive Gaze Tracking Using Artificial</article-title>. <source>Neural Networks</source>.<issn>0893-6080</issn></mixed-citation></ref>
<ref id="b4"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Beymer</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Flickner</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2003</year>). Eye gaze tracking using an active stereo head. Paper presented at the <source>2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition</source>. <pub-id pub-id-type="doi">10.1109/CVPR.2003.1211502</pub-id></mixed-citation></ref>
<ref id="b5"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Blignaut</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Dewhurst</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2014</year>). <chapter-title>Improving the Accuracy of Video-Based Eye Tracking in Real Time through Post-Calibration Regression</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>M.</given-names> <surname>Horsley</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Eliot</surname></string-name>, <string-name><given-names>B. A.</given-names> <surname>Knight</surname></string-name>, &amp; <string-name><given-names>R.</given-names> <surname>Reilly</surname></string-name> (<role>Eds.</role>),</person-group> <source>Current Trends in Eye Tracking Research</source> (pp. <fpage>77</fpage>–<lpage>100</lpage>). <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>. <pub-id pub-id-type="doi">10.1007/978-3-319-02868-2_5</pub-id></mixed-citation></ref>
<ref id="b6"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Blignaut</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Wium</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Eye-tracking data quality as affected by ethnicity and experimental design.</article-title> <source>Behavior Research Methods</source>, <volume>46</volume>(<issue>1</issue>), <fpage>67</fpage>–<lpage>80</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-013-0343-0</pub-id><pub-id pub-id-type="pmid">23609415</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b7"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Brolly</surname>, <given-names>X.</given-names></string-name>, &amp; <string-name><surname>Mulligan</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2004</year>). Implicit Calibration of a Remote Gaze Tracker. Paper presented at the <source>2004 Conference on Computer Vision and Pattern Recognition Workshop</source>. <pub-id pub-id-type="doi">10.1109/CVPR.2004.366</pub-id></mixed-citation></ref>
<ref id="b8"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Cerrolaza</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Villanueva</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Villanueva</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Cabeza</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2012</year>). <article-title><italic>Error characterization and compensation in eye tracking systems</italic>.</article-title> Paper presented at the <source>Proceedings of the Symposium on Eye Tracking Research and Applications</source>, <conf-loc>Santa Barbara, California</conf-loc>. <pub-id pub-id-type="doi">10.1145/2168556.2168595</pub-id></mixed-citation></ref>
<ref id="b9"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Clarke</surname>, <given-names>A. H.</given-names></string-name>, <string-name><surname>Ditterich</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Drüen</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Schönfeld</surname>, <given-names>U.</given-names></string-name>, &amp; <string-name><surname>Steineke</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2002</year>). <article-title>Using high frame rate CMOS sensors for three-dimensional eye tracking.</article-title> <source>Behavior Research Methods, Instruments, &amp; Computers</source>, <volume>34</volume>(<issue>4</issue>), <fpage>549</fpage>–<lpage>560</lpage>. <pub-id pub-id-type="doi">10.3758/BF03195484</pub-id><pub-id pub-id-type="pmid">12564559</pub-id><issn>0743-3808</issn></mixed-citation></ref>
<ref id="b10"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Collewijn</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Kowler</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2008</year>). <article-title>The significance of microsaccades for vision and oculomotor control.</article-title> <source>Journal of Vision (Charlottesville, Va.)</source>, <volume>8</volume>(<issue>14</issue>), <fpage>1</fpage>–<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1167/8.14.20</pub-id><pub-id pub-id-type="pmid">19146321</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="b11"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Coutinho</surname>, <given-names>F. L.</given-names></string-name>, &amp; <string-name><surname>Morimoto</surname>, <given-names>C. H.</given-names></string-name></person-group> (<year>2012</year>). <article-title><italic>Augmenting the robustness of cross-ratio gaze tracking methods to head movement</italic>.</article-title> Paper presented at the <source>Proceedings of the Symposium on Eye Tracking Research and Applications</source>, <conf-loc>Santa Barbara, California</conf-loc>. <pub-id pub-id-type="doi">10.1145/2168556.2168565</pub-id></mixed-citation></ref>
<ref id="b12"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Crundall</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Chapman</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Phelps</surname>, <given-names>N.</given-names></string-name>, &amp; <string-name><surname>Underwood</surname>, <given-names>G.</given-names></string-name></person-group> (<year>2003</year>). <article-title>Eye movements and hazard perception in police pursuit and emergency response driving.</article-title> <source>Journal of Experimental Psychology. Applied</source>, <volume>9</volume>(<issue>3</issue>), <fpage>163</fpage>–<lpage>174</lpage>. <pub-id pub-id-type="doi">10.1037/1076-898X.9.3.163</pub-id><issn>1076-898X</issn></mixed-citation></ref>
<ref id="b13"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Donovan</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Manning</surname>, <given-names>D. J.</given-names></string-name>, &amp; <string-name><surname>Crawford</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2008</year>). Performance changes in lung nodule detection following perceptual feedback of eye movements. Paper presented at the Medical Imaging. <pub-id pub-id-type="doi">10.1117/12.768503</pub-id></mixed-citation></ref>
<ref id="b14"><mixed-citation publication-type="other" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Elvesjo</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Skogo</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Elvers</surname>, <given-names>G.</given-names></string-name></person-group> (<year>2009</year>). Method and Installation for detecting and following an eye and the gaze direction thereof. US Patent 7572008B2.</mixed-citation></ref>
<ref id="b15"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Fossum</surname>, <given-names>E. R.</given-names></string-name>, &amp; <string-name><surname>Hondongwa</surname>, <given-names>D. B.</given-names></string-name></person-group> (<year>2014</year>). <article-title>A Review of the Pinned Photodiode for CCD and CMOS Image Sensors.</article-title> <source>IEEE Journal of the Electron Devices Society</source>, <volume>2</volume>(<issue>3</issue>), <fpage>33</fpage>–<lpage>43</lpage>. <pub-id pub-id-type="doi">10.1109/JEDS.2014.2306412</pub-id></mixed-citation></ref>
<ref id="b16"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Gibaldi</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Vanegas</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Bex</surname>, <given-names>P. J.</given-names></string-name>, &amp; <string-name><surname>Maiello</surname>, <given-names>G.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Evaluation of the Tobii EyeX Eye tracking controller and Matlab toolkit for research.</article-title> <source>Behavior Research Methods</source>, <volume>49</volume>(<issue>3</issue>), <fpage>923</fpage>–<lpage>946</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-016-0762-9</pub-id><pub-id pub-id-type="pmid">27401169</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b17"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Goldberg</surname>, <given-names>J. H.</given-names></string-name>, <string-name><surname>Stimson</surname>, <given-names>M. J.</given-names></string-name>, <string-name><surname>Lewenstein</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Scott</surname>, <given-names>N.</given-names></string-name>, &amp; <string-name><surname>Wichansky</surname>, <given-names>A. M.</given-names></string-name></person-group> (<year>2002</year>). <article-title><italic>Eye tracking in web search tasks: design implications</italic>.</article-title> Paper presented at the Proceedings of the 2002 symposium on Eye tracking research &amp; applications, New Orleans, Louisiana. <pub-id pub-id-type="doi">10.1145/507072.507082</pub-id></mixed-citation></ref>
<ref id="b18"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Hansen</surname>, <given-names>D. W.</given-names></string-name>, &amp; <string-name><surname>Ji</surname>, <given-names>Q.</given-names></string-name></person-group> (<year>2010</year>). <article-title>In the eye of the beholder: A survey of models for eyes and gaze.</article-title> <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>, <volume>32</volume>(<issue>3</issue>), <fpage>478</fpage>–<lpage>500</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2009.30</pub-id><pub-id pub-id-type="pmid">20075473</pub-id><issn>0162-8828</issn></mixed-citation></ref>
<ref id="b19"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Hennessey</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Noureddin</surname>, <given-names>B.</given-names></string-name>, &amp; <string-name><surname>Lawrence</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2006</year>). <article-title><italic>A single camera eye-gaze tracking system with free head motion</italic>.</article-title> Paper presented at the Proceedings of the 2006 symposium on Eye tracking research &amp;amp; applications, San Diego, California. <pub-id pub-id-type="doi">10.1145/1117309.1117349</pub-id></mixed-citation></ref>
<ref id="b20"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Andersson</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2017</year>). <source>Eye tracking: A comprehensive guide to methods, paradigms and measures</source>. <publisher-name>Lund Eye Tracking Research Institute</publisher-name>.</mixed-citation></ref>
<ref id="b21"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Andersson</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Dewhurst</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Van de Weijer</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2011</year>). <source>Eye tracking: A comprehensive guide to methods and measures</source>. <publisher-name>OUP Oxford</publisher-name>.</mixed-citation></ref>
<ref id="b22"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Mulvey</surname>, <given-names>F.</given-names></string-name></person-group> (<year>2012</year>). <article-title><italic>Eye tracker data quality: what it is and how to measure it</italic>.</article-title> Paper presented at the <source>Proceedings of the Symposium on Eye Tracking Research and Applications</source>, <conf-loc>Santa Barbara, California</conf-loc>. <pub-id pub-id-type="doi">10.1145/2168556.2168563</pub-id></mixed-citation></ref>
<ref id="b23"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Hornof</surname>, <given-names>A. J.</given-names></string-name>, &amp; <string-name><surname>Halverson</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2002</year>). <article-title>Cleaning up systematic error in eye-tracking data by using required fixation locations.</article-title> <source>Behavior Research Methods, Instruments, &amp; Computers</source>, <volume>34</volume>(<issue>4</issue>), <fpage>592</fpage>–<lpage>604</lpage>. <pub-id pub-id-type="doi">10.3758/BF03195487</pub-id><pub-id pub-id-type="pmid">12564562</pub-id><issn>0743-3808</issn></mixed-citation></ref>
<ref id="b24"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Hwang</surname>, <given-names>Y. M.</given-names></string-name>, &amp; <string-name><surname>Lee</surname>, <given-names>K. C.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Using Eye Tracking to Explore Consumers’ Visual Behavior According to Their Shopping Motivation in Mobile Environments.</article-title> <source>Cyberpsychology, Behavior, and Social Networking</source>, <volume>20</volume>(<issue>7</issue>), <fpage>442</fpage>–<lpage>447</lpage>. <pub-id pub-id-type="doi">10.1089/cyber.2016.0235</pub-id><pub-id pub-id-type="pmid">28715265</pub-id><issn>2152-2715</issn></mixed-citation></ref>
<ref id="b25"><mixed-citation publication-type="book" specific-use="unparsed"><person-group person-group-type="author"><collab>ISO</collab></person-group>. (<year>1994</year>). Accuracy (trueness and precision) of measurement methods and results. Part 1: General principles and definitions. In (Vol. 5725-1). Geneva, Switzerland.</mixed-citation></ref>
<ref id="b26"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Lai</surname>, <given-names>C. C.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>Y. T.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>K. W.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>S. C.</given-names></string-name>, <string-name><surname>Shih</surname>, <given-names>S. W.</given-names></string-name>, &amp; <string-name><surname>Hung</surname>, <given-names>Y. P.</given-names></string-name></person-group> (<year>2014</year>, 24-28 Aug. 2014). Appearance-Based Gaze Tracking with Free Head Movement. Paper presented at the 2014 22nd International Conference on Pattern Recognition.</mixed-citation></ref>
<ref id="b27"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Manor</surname>, <given-names>B. R.</given-names></string-name>, &amp; <string-name><surname>Gordon</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2003</year>). <article-title>Defining the temporal threshold for ocular fixation in free-viewing visuocognitive tasks.</article-title> <source>Journal of Neuroscience Methods</source>, <volume>128</volume>(<issue>1-2</issue>), <fpage>85</fpage>–<lpage>93</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1016/s0165-0270(03)00151-1</pub-id> <pub-id pub-id-type="doi">10.1016/S0165-0270(03)00151-1</pub-id><pub-id pub-id-type="pmid">12948551</pub-id><issn>0165-0270</issn></mixed-citation></ref>
<ref id="b28"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Matsumoto</surname>, <given-names>Y.</given-names></string-name>, &amp; <string-name><surname>Zelinsky</surname>, <given-names>A.</given-names></string-name></person-group> (<year>1999</year>, 1999). Real-time face tracking system for human-robot interaction. Paper presented at the Systems, Man, and Cybernetics, 1999. IEEE SMC '99 Conference Proceedings. 1999 IEEE International Conference on.</mixed-citation></ref>
<ref id="b29"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Morimoto</surname>, <given-names>C. H.</given-names></string-name>, <string-name><surname>Amir</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Flickner</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2002</year>). <article-title><italic>Free head motion eye gaze tracking without calibration</italic>.</article-title> Paper presented at the <source>CHI ’02 Extended Abstracts on Human Factors in Computing Systems</source>, <conf-loc>Minneapolis, Minnesota, USA</conf-loc>. <pub-id pub-id-type="doi">10.1145/506443.506496</pub-id></mixed-citation></ref>
<ref id="b30"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Mulligan</surname>, <given-names>J. B.</given-names></string-name>, &amp; <string-name><surname>Gabayan</surname>, <given-names>K. N.</given-names></string-name></person-group> (<year>2010</year>). <article-title><italic>Robust optical eye detection during head movement</italic>.</article-title> Paper presented at the Proceedings of the 2010 Symposium on Eye-Tracking Research \&amp;\#38; Applications, Austin, Texas. <pub-id pub-id-type="doi">10.1145/1743666.1743698</pub-id></mixed-citation></ref>
<ref id="b31"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Nguyen</surname>, <given-names>B. L.</given-names></string-name>, <string-name><surname>Chahir</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Molina</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Tijus</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Jouen</surname>, <given-names>F.</given-names></string-name></person-group> (<year>2010</year>). <article-title><italic>Eye gaze tracking with free head movements using a single camera</italic>.</article-title> Paper presented at the <source>Proceedings of the 2010 Symposium on Information and Communication Technology</source>, <conf-loc>Hanoi, Vietnam</conf-loc>. <pub-id pub-id-type="doi">10.1145/1852611.1852632</pub-id></mixed-citation></ref>
<ref id="b32"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Niehorster</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Cornelissen</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Hooge</surname>, <given-names>I. T.</given-names></string-name>, &amp; <string-name><surname>Hessels</surname>, <given-names>R. S.</given-names></string-name></person-group> (<year>2017</year>). <article-title>What to expect from your remote eye-tracker when participants are unrestrained.</article-title> <source>Behavior Research Methods</source>. <pub-id pub-id-type="doi">10.3758/s13428-017-0863-0</pub-id><pub-id pub-id-type="pmid">28205131</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b33"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Ohno</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Mukawa</surname>, <given-names>N.</given-names></string-name></person-group> (<year>2004</year>). <article-title><italic>A free-head, simple calibration, gaze tracking system that enables gaze-based interaction</italic>.</article-title> Paper presented at the Proceedings of the 2004 symposium on Eye tracking research \&amp; applications, San Antonio, Texas. <pub-id pub-id-type="doi">10.1145/968363.968387</pub-id></mixed-citation></ref>
<ref id="b34"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Ohno</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Mukawa</surname>, <given-names>N.</given-names></string-name>, &amp; <string-name><surname>Kawato</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2003</year>). <article-title><italic>Just blink your eyes: a head-free gaze tracking system</italic>.</article-title> Paper presented at the <source>CHI ’03 Extended Abstracts on Human Factors in Computing Systems</source>, <conf-loc>Ft. Lauderdale, Florida, USA</conf-loc>. <pub-id pub-id-type="doi">10.1145/765891.766088</pub-id></mixed-citation></ref>
<ref id="b35"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Pérez</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Córdoba</surname>, <given-names>M. L.</given-names></string-name>, <string-name><surname>García</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Méndez</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Muñoz</surname>, <given-names>M. L.</given-names></string-name>, <string-name><surname>Pedraza</surname>, <given-names>J. L.</given-names></string-name>, &amp; <string-name><surname>Sánchez</surname>, <given-names>F.</given-names></string-name></person-group> (<year>2003</year>). <article-title><italic>A precise eye-gaze detection and tracking system</italic>.</article-title> Paper presented at the <source>WSCG</source>, <conf-loc>Plzen, Czech Republic</conf-loc>.</mixed-citation></ref>
<ref id="b36"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Rayner</surname>, <given-names>K.</given-names></string-name></person-group> (<year>1998</year>). <article-title>Eye movements in reading and information processing: 20 years of research.</article-title> <source>Psychological Bulletin</source>, <volume>124</volume>(<issue>3</issue>), <fpage>372</fpage>–<lpage>422</lpage>. <pub-id pub-id-type="doi">10.1037/0033-2909.124.3.372</pub-id><pub-id pub-id-type="pmid">9849112</pub-id><issn>0033-2909</issn></mixed-citation></ref>
<ref id="b37"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Rayner</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Pollatsek</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Drieghe</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Slattery</surname>, <given-names>T. J.</given-names></string-name>, &amp; <string-name><surname>Reichle</surname>, <given-names>E. D.</given-names></string-name></person-group> (<year>2007</year>). <article-title>Tracking the mind during reading via eye movements: Comments on Kliegl, Nuthmann, and Engbert (2006).</article-title> <source>Journal of Experimental Psychology. General</source>, <volume>136</volume>(<issue>3</issue>), <fpage>520</fpage>–<lpage>529</lpage>. <pub-id pub-id-type="doi">10.1037/0096-3445.136.3.520</pub-id><pub-id pub-id-type="pmid">17696697</pub-id><issn>0096-3445</issn></mixed-citation></ref>
<ref id="b38"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Reingold</surname>, <given-names>E. M.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Eye Tracking Research and Technology: Towards Objective Measurement of Data Quality.</article-title> <source>Visual Cognition</source>, <volume>22</volume>(<issue>3</issue>), <fpage>635</fpage>–<lpage>652</lpage>. <pub-id pub-id-type="doi">10.1080/13506285.2013.876481</pub-id><pub-id pub-id-type="pmid">24771998</pub-id><issn>1350-6285</issn></mixed-citation></ref>
<ref id="b39"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Saunders</surname>, <given-names>D. R.</given-names></string-name>, &amp; <string-name><surname>Woods</surname>, <given-names>R. L.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Direct measurement of the system latency of gaze-contingent displays.</article-title> <source>Behavior Research Methods</source>, <volume>46</volume>(<issue>2</issue>), <fpage>439</fpage>–<lpage>447</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-013-0375-5</pub-id><pub-id pub-id-type="pmid">23949955</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b40"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Sesma-Sanchez</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Villanueva</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Cabeza</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2014</year>). <article-title><italic>Design issues of remote eye tracking systems with large range of movement</italic>.</article-title> Paper presented at the <source>Proceedings of the Symposium on Eye Tracking Research and Applications</source>, <conf-loc>Safety Harbor, Florida</conf-loc>. <pub-id pub-id-type="doi">10.1145/2578153.2578193</pub-id></mixed-citation></ref>
<ref id="b41"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Shih</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Wu</surname>, <given-names>Y.</given-names></string-name>, &amp; <string-name><surname>Liu</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2000</year>). <article-title><italic>A calibration-free gaze tracking technique</italic>.</article-title> Paper presented at the <source>Proceedings 15th International Conference on Pattern Recognition. ICPR-2000</source>. <pub-id pub-id-type="doi">10.1109/ICPR.2000.902895</pub-id></mixed-citation></ref>
<ref id="b42"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Tan</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Kriegman</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Ahuja</surname>, <given-names>N.</given-names></string-name></person-group> (<year>2002</year>). <article-title><italic>Appearance-based eye gaze estimation</italic>.</article-title> Paper presented at the <source>Sixth IEEE Workshop on Applications of Computer Vision (WACV 2002), Proceedings</source>.</mixed-citation></ref>
<ref id="b43"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Yoo</surname>, <given-names>D. H.</given-names></string-name>, &amp; <string-name><surname>Chung</surname>, <given-names>M. J.</given-names></string-name></person-group> (<year>2005</year>). <article-title>A novel non-intrusive eye gaze estimation using cross-ratio under large head motion.</article-title> <source>Computer Vision and Image Understanding</source>, <volume>98</volume>(<issue>1</issue>), <fpage>25</fpage>–<lpage>51</lpage>. <pub-id pub-id-type="doi">10.1016/j.cviu.2004.07.011</pub-id><issn>1077-3142</issn></mixed-citation></ref>
</ref-list>
  </back>
</article>

