<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
	
	
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.10.5.2</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>A Quality-Centered Analysis of Eye Tracking Data in Foveated
Rendering</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Roth</surname>
						<given-names>Thorsten</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>	
					<xref ref-type="aff" rid="aff2">2</xref>
				</contrib>
				
				
				<contrib contrib-type="author">
					<name>
						<surname>Weier</surname>
						<given-names>Martin</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
					<xref ref-type="aff" rid="aff3">3</xref>
				</contrib>
				
				
				<contrib contrib-type="author">
					<name>
						<surname>Hinkenjann</surname>
						<given-names>André</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				
				<contrib contrib-type="author">
					<name>
						<surname>Li</surname>
						<given-names>Yongmin</given-names>
					</name>
					<xref ref-type="aff" rid="aff2">2</xref>
				</contrib>
				
				<contrib contrib-type="author">
					<name>
						<surname>Slusallek</surname>
						<given-names>Philipp</given-names>
					</name>
					<xref ref-type="aff" rid="aff3">3</xref>		
					<xref ref-type="aff" rid="aff4">4</xref>		
					<xref ref-type="aff" rid="aff5">5</xref>
				
		</contrib>
        <aff id="aff1">
		<institution>Bonn-Rhein-Sieg University of Applied Sciences</institution>, <country>Germany</country>
        </aff>
		<aff id="aff2">
		<institution>Brunel University London</institution>, <country>UK</country>
        </aff>
		<aff id="aff3">
		<institution>Saarland University</institution>, <country>Germany</country>
        </aff>
		<aff id="aff4">
		<institution>Intel Visual Computing Institute</institution>
        </aff>
		<aff id="aff5"> 
		<institution>German Research Center for Artificial Intelligence (DFKI)</institution>, <country>Germany</country>
      </aff>
    </contrib-group> 
	 
	 
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>28</day>  
		<month>9</month>
        <year>2017</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2017</year>
	</pub-date>
      <volume>10</volume>
      <issue>5</issue>
	   <elocation-id>10.16910/jemr.10.5.2</elocation-id> 
	<permissions> 
	<copyright-year>2017</copyright-year>
	<copyright-holder>Roth et al.</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
	<abstract>
        <p>This work presents the analysis of data recorded by an eye tracking device in the course of
evaluating a foveated rendering approach for head-mounted displays (HMDs). Foveated rendering
methods adapt the image synthesis process to the user&#x2019;s gaze, exploiting the human
visual system&#x2019;s limitations to increase rendering performance. Foveated rendering
has particularly great potential when strict requirements have to be fulfilled, such as low-latency rendering
to cope with high display refresh rates. This is crucial for virtual reality (VR), where a high level
of immersion, which can only be achieved with high rendering performance and which also helps to
reduce nausea, is an important factor. We provide context by first presenting
basic information about our rendering system, followed by a description of the user study and
the collected data. This data stems from fixation tasks that subjects had to perform while being
shown fly-through sequences of virtual scenes on an HMD. These fixation tasks consisted of a
combination of various scenes and fixation modes. Besides static fixation targets, moving targets
on randomized paths as well as a free focus mode were tested. Using this data, we estimate
the precision of the utilized eye tracker and analyze the participants’ accuracy in focusing the
displayed fixation targets. Here, we also take a look at eccentricity-dependent quality ratings.
Comparing this information with the users’ quality ratings given for the displayed sequences
then reveals an interesting connection between fixation modes, fixation accuracy and quality
ratings.</p>
      </abstract>
	   <kwd-group>
        <kwd>Rendering</kwd>
        <kwd>Ray tracing</kwd>
        <kwd>data analysis</kwd>
        <kwd>perceived quality</kwd>
        <kwd>eye tracking</kwd>
        <kwd>foveated rendering</kwd>
        <kwd>eye movement</kwd>
        <kwd>region of interest</kwd>
        <kwd>gaze</kwd>
      </kwd-group>
    </article-meta>
  </front>
  
  
  
  
  
  
  
  <body>

    <sec id="s1">
      <title>Introduction</title>
      <p>
        Virtual reality has the major goal of presenting a virtual
world in a way that resembles reality as closely as possible.
Recently, head-mounted displays (HMDs) have become widely
available, providing a suitable display technology for this
purpose. To enable a visually pleasant experience without
disturbing aliasing artifacts and visible pixel grids, high
display resolutions are required. Early HMDs like the Forte
VFX 3D (1997, 263&#215;480&#215;2 &#8776; 0.25 million pixels) worked at
very low resolutions, while modern HMDs like the StarVR
(2016, 2560&#215;1440&#215;2 &#8776; 7.37 million pixels) have made
a huge step forward in this regard. However, the full
retinal resolution including a user&#x2019;s full dynamic field of view
(200&#176; horizontally, 150&#176; vertically) would potentially require
a resolution of 32k&#215;24k = 768 million pixels [
        <xref ref-type="bibr" rid="b1">1</xref>
        ]. Such
resolutions are neither achievable by current display
technology nor are they tractable by current GPUs. In addition,
frames need to be displayed with a low latency and at high
frame rates to meet the requirements of the display devices
and at the same time reduce nausea caused by a perceptual
mismatch of the self-induced motion and the visual response
(simulator sickness) [
        <xref ref-type="bibr" rid="b2">2</xref>
        ]. With the ongoing
improvements of pixel densities in HMDs and the current inability
to render at the required resolutions while maintaining
performance, developing new rendering methods to tackle these
challenges is urgently required.
      </p>
	  
	  
	  
	  
	  
	  
      <p>Fortunately, the human visual system (HVS) has several
limitations which imply that it is not necessary to provide
the highest level of detail over the entire visual field. There
is a drop of the eye&#x2019;s visual acuity with increasing
<italic>eccentricities</italic>, where the eccentricity describes the angular deviation
from the central optical axis. Thus, one possible approach
is to adopt techniques that adjust rendering quality based on
the exploitation of a user&#x2019;s current viewing direction. This
process is referred to as <italic>foveated rendering</italic>.</p>
     


	 <p>
        The visual field can be divided into <italic>central</italic> and <italic>peripheral</italic>
vision. We follow the definition in [
        <xref ref-type="bibr" rid="b3">3</xref>
        ], where central vision
is defined to include the fovea (up to
5.2&#176; from the optical axis), the parafovea (up to 9&#176;) and the
perifovea (up to 17&#176;). Larger eccentricities are defined to
belong to peripheral vision.
      </p>
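<p>As a minimal sketch, these boundaries can be expressed as a small classifier (the function name and region labels are our own; only the angular thresholds come from the definition above):</p>
<preformat>
```python
# Classify a point of regard by its eccentricity in degrees, using the
# central-vision boundaries cited above: fovea up to 5.2 deg, parafovea
# up to 9 deg, perifovea up to 17 deg, peripheral beyond that.

def classify_eccentricity(ecc_deg):
    """Return the visual-field region for an angular eccentricity in degrees."""
    if ecc_deg >= 17.0:
        return "peripheral"
    if ecc_deg >= 9.0:
        return "perifovea"
    if ecc_deg >= 5.2:
        return "parafovea"
    return "fovea"
```
</preformat>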
	  
      <p>The tracking-based adaptation of rendering quality based
on the HVS&#x2019; drop in visual acuity requires highly accurate
and low-latency eye tracking to determine the point of regard
(PoR), i.e., the screen space position currently focused by the
user. Moreover, several models of human attention have been
used in computer graphics to get a notion of the screen space
position without using an active tracking mechanism. Both
approaches benefit from insights regarding eye tracking data
acquired in a foveated rendering system.</p>
      <p>
        Eye tracking as a way to actively measure the user&#x2019;s gaze
directly has been used by the computer graphics community
in various disciplines. A survey on rendering techniques can
be found in [
        <xref ref-type="bibr" rid="b4 b5">4, 5</xref>
        ], while perception-driven geometric
processing and mesh simplification are described in [
        <xref ref-type="bibr" rid="b6">6</xref>
        ]. Another
field that uses eye tracking is computational displays, where
a survey on various techniques can be found in [
        <xref ref-type="bibr" rid="b7">7</xref>
        ].
      </p>
	 
	 
      <p>
        Early work in the field of gaze-contingent and foveated
rendering techniques utilized focus assumptions [
        <xref ref-type="bibr" rid="b8">8</xref>
        ] and
visual attention models [
        <xref ref-type="bibr" rid="b9 b10">9, 10</xref>
        ] instead of employing eye
tracking devices. Those early systems suffered from the
lack of hardware capabilities for both accurate and low-latency
eye tracking as well as computational power to synthesize
high-quality images. One of the earliest approaches to speed
up rendering using perceptual methods is described in [
        <xref ref-type="bibr" rid="b11">11</xref>
        ],
where eye tracking is used to adapt the sampling frequency
on the image plane and in object space in accordance with the
spatial acuity of the HVS. Another early system can be found
in [
        <xref ref-type="bibr" rid="b12">12</xref>
        ], adapting the geometric quality of a three-dimensional
mesh using an anisotropic simplification system. Most
notably this is one of the first publications where binocular eye
tracking is used inside an HMD.
      </p>
	  
	  
      <p>
        However, the eye tracking and graphics hardware was still
lacking the necessary accuracy, latency and rendering
performance to meet perceptual requirements. Hence, early
systems focused primarily on a theoretical analysis, e.g., the
general influence of the quality degradation on visual
performance, especially on search performance. Watson et al.
[
        <xref ref-type="bibr" rid="b13">13</xref>
        ] demonstrated that image resolution could be reduced by
half for peripheral vision without a significant influence on
search time. Duchowski et al. [
        <xref ref-type="bibr" rid="b14">14</xref>
        ] demonstrated that color
precision can be reduced for peripheral vision, though not as
readily as resolution.
      </p>
      <p>
        Taking a closer look at the eye tracking data, one can
ask how precise eye tracking devices need to be in order to
be suitable for foveated rendering. Loschky et al. [
        <xref ref-type="bibr" rid="b15 b16">15, 16</xref>
        ]
showed that the update must start within 5 ms to 60 ms
after an eye movement for an image change to go undetected.
However, an acceptable delay highly depends on the task of
the application and the stimulus size and positioning in the
visual field. Ways to measure latency and a discussion on
different tasks can be found in the work by [
        <xref ref-type="bibr" rid="b17">17</xref>
        ] and [
        <xref ref-type="bibr" rid="b18">18</xref>
        ].
      </p>
      <p>
        Synthesizing images from a 3D scene description is made
possible by some basic methods in the field of computer
graphics, the most important being rasterization and
ray-based approaches (ray tracing). A comprehensive description
of recent work in the field of perception-driven accelerated
and foveated rendering can be found in [
        <xref ref-type="bibr" rid="b5">5</xref>
        ].
      </p>
      <p>
        Our analysis is based on the fully adaptive foveated ray
tracing technique suggested in [
        <xref ref-type="bibr" rid="b19">19</xref>
        ]. In addition to the
system&#x2019;s fully adaptive sampling, improvements of temporal
stability and a reduction of artifacts in the visual periphery
are achieved by incorporating a reprojection technique to
improve image quality and fill gaps in sparsely sampled images.
      </p>
      <p>
        The main idea here is that samples are cached in image
space, reprojected to a new view and used to aid the
image quality of subsequent frames. In this regard the
system most closely relates to temporal anti-aliasing [
        <xref ref-type="bibr" rid="b20">20</xref>
        ] and
the mathematical considerations on how to combine
samples temporally [
        <xref ref-type="bibr" rid="b21">21</xref>
        ]. The unique characteristics of our
system come from combining a performance-focused
reprojection method based on a coarse geometry approximation with
foveated rendering methods, which enables us to generate
visually pleasant results at high update rates. The user&#x2019;s gaze
is measured using a binocular SMI eye tracker built into an
Oculus Rift DK2 and then used to parameterize the rendering
process. The evaluation of our rendering system has shown
that the subjective perceived quality is very similar to full ray
tracing with benchmarks showing a clearly superior
rendering performance.
      </p>
      <p>
        This paper serves the purpose of extending the short
analysis of eye tracking data from our user study which is
provided in [
        <xref ref-type="bibr" rid="b22">22</xref>
        ]. The main objective of this extension is to give
better, more detailed insights into the recorded tracking data
and a more extensive discussion of the results and the
connections between gaze data and subjective perceived quality.
      </p>
      <p>In order to provide context, we describe the basic design
of the user study and the recorded eye tracking data. We
then describe how the recorded data is analyzed.
Amongst other things, this analysis revealed an interesting relation
between fixation accuracy and quality ratings for different
fixation modes.</p>
      <p>
        Although there has been related work on analyzing eye
tracking data, there is no work in the context of foveated
rendering that thoroughly investigates the link between the
subjective perceived image quality, the eye tracking precision
and the induced effects when asking users to focus or fixate
a target in the image. Tracking precision is measured by the
distance between the recorded PoR and the actual location
that people were instructed to fixate on predefined paths on
a screen. The authors of [
        <xref ref-type="bibr" rid="b23">23</xref>
        ] represent this precision by the standard
deviation of these measurements to evaluate their low-cost eye
tracking system. The authors of [
        <xref ref-type="bibr" rid="b24">24</xref>
        ] use the glint in the eye to derive a PoR.
However, they evaluate the precision in terms of determining
the right image quadrant. Although the book by Duchowski
[25, ch. 12] contains various strategies to evaluate eye
tracking data, the author focuses on different aspects like dwell
time, saccade detection and denoising.
      </p>
      <p>The main contributions of our work are:</p>
      <list list-type="order">
        <list-item><p>An estimation of the tracking precision of an
HMD-mounted eye tracking device, supported by the
evaluation of eccentricity-based quality ratings.</p></list-item>
        <list-item><p>An analysis of fixation accuracy based on the data
recorded during a quality-focused user study carried
out for our foveated rendering system.</p></list-item>
        <list-item><p>An analysis of the connection between subjective
perceived quality and fixation accuracy, providing
possible evidence of the presence of visual tunneling effects
and the magnitude of their influence on the user&#x2019;s
perception.</p></list-item>
      </list>
      <p>The results are discussed and conclusions are drawn in the
according sections at the end of this article, together with
some suggestions on how to benefit from our findings in
practical systems and how to further improve the suggested
methods.</p>
    </sec>
    <sec id="s2">
      <title>Methods</title>
      <sec id="s2a">
        <title>Rendering process</title>
        <p>
          As opposed to basic rasterization, ray-based approaches
enable us to sample the image plane in a fully adaptive way.
This is done by sampling each individual pixel with a
probability computed from its eccentricity, based on the foveal
function, a falloff function that can be freely parameterized
based on current gaze properties and performance
requirements. The receptor density of cones in the human eye is
approximated quite well by a hyperbolic falloff, which also
corresponds to the falloff in visual acuity with increasing
eccentricities. Rods, on the other hand, exhibit a density falloff
that is much more linear [
          <xref ref-type="bibr" rid="b26">26</xref>
          ]. In addition to that, visual
acuity can also be represented quite well by a linear model when
it comes to small angles [
          <xref ref-type="bibr" rid="b27">27</xref>
          ]. Because of the human visual
system&#x2019;s high sensitivity to peripheral flickering and motion
that results from these receptor distributions, we designed the
foveal function to be piecewise linear instead of adopting a
hyperbolic falloff, as shown in <xref ref-type="fig" rid="fig01">Figure 1</xref>.
        </p>
		<fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1</label>
					<caption>
<p>The sampling probability of each individual pixel
is computed by evaluating the foveal function with freely adjustable
parameters (r&#8320;; r&#8321;; p<sub>min</sub>). Image adapted from [
        <xref ref-type="bibr" rid="b22">22</xref>
        ].</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-10-05-b-figure-01.png"/>
				</fig>
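<p>A minimal sketch of such a piecewise-linear foveal function follows; the linear interpolation between r&#8320; and r&#8321; is our assumption about the exact shape, parameterized as in the figure:</p>
<preformat>
```python
# Piecewise-linear foveal function: full sampling probability inside r0,
# a linear falloff from 1.0 down to p_min between r0 and r1, and the
# floor probability p_min beyond r1. All angles are in degrees.

def foveal_function(ecc_deg, r0, r1, p_min):
    """Sampling probability for a pixel at the given eccentricity."""
    if ecc_deg >= r1:
        return p_min
    if ecc_deg >= r0:
        # linear interpolation from 1.0 at r0 down to p_min at r1
        t = (ecc_deg - r0) / (r1 - r0)
        return 1.0 + t * (p_min - 1.0)
    return 1.0

# e.g. the medium FRC from the study: foveal_function(e, 10.0, 20.0, 0.05)
```
</preformat>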
		
        <p>During each rendering iteration, a set of pixels to be
sampled is determined by evaluating the foveal function for each
pixel individually. As this results in a sparsely sampled
image with only little pixel information towards outer image
regions, it becomes necessary to provide a reconstruction
method for filling in unsampled image regions. In order to
do this, we rely on a reprojection-based approach.  </p>
 <p>
The support image and support G-buffer provide a low resolution version of the color and geometry information that
is computed for each rendering iteration. Based on these,
a coarse geometric approximation of the scene geometry as
seen from the user&#x2019;s current point of view is generated, which
is then textured with the known color information from the
preceding frame. This mesh is then reprojected by
rendering it from the camera position of the current frame. The
support image is now used to improve areas where
reprojection errors due to disocclusions or movement appear,
as well as areas of insufficient quality. Additional
samples are computed where necessary, which is done by
analyzing the reprojected image for depth and luminance
discontinuities between neighboring pixels. If such discontinuities
are found, pixels are scheduled for resampling.</p>
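<p>The discontinuity test can be sketched as follows (a simplified one-dimensional version; the thresholds and the right-hand-neighbor comparison are illustrative assumptions, not the system&#x2019;s actual parameters):</p>
<preformat>
```python
# Schedule a pixel for resampling when its reprojected depth or luminance
# differs from a neighbor by more than a threshold, as described above.
# depth and luma are per-pixel scanline arrays; i indexes the pixel.

def needs_resampling(depth, luma, i, depth_eps=0.05, luma_eps=0.1):
    """Check pixel i of a scanline against its right-hand neighbor."""
    d_jump = abs(depth[i] - depth[i + 1]) > depth_eps
    l_jump = abs(luma[i] - luma[i + 1]) > luma_eps
    return d_jump or l_jump
```
</preformat>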
      </sec>
      <sec id="s2b">
        <title>Evaluation</title>
        <p>A specific parameter set for the foveal function is referred
to as the foveal region configuration (FRC). An FRC is a
triplet (r&#8320;; r&#8321;; p<sub>min</sub>) that describes the sampling density falloff
(cf. <xref ref-type="fig" rid="fig01">Figure 1</xref>).</p>
        <p>Rendering performance (and thus speedups) compared to
full ray tracing depend strongly on the chosen FRC. Our user
study has shown good results for the subjective perceived
image quality for the medium-sized FRC (specific parameters
are shown below). The speedups achieved in our test scenes
with this FRC ranged from 1.46 to 4.18 depending on the
quality settings (with better speedups for higher quality
rendering). The benchmarks were run on an Intel Core i7-3820
CPU with 64GiB of RAM and an NVIDIA GeForce Titan
X graphics card at a resolution of 1182&#215;1464 pixels. The
chosen FRCs are based on the necessity of achieving a frame
rate that has to be at least as high as the display refresh rate
of the HMD utilized in our user study. Thus, while the FRC may be
chosen as large as possible within the range that still delivers
the required performance, it is also desirable to leave some
room for additional computations such as physics. Our user
study gives clues about the possible parameter range for FRC
adjustments.</p>
        <p>
          We carried out a user study to measure the visual quality
of our rendering method. Our main goal was to answer our
three research questions defined more clearly in [
          <xref ref-type="bibr" rid="b19">19</xref>
          ]:</p>
        <list list-type="order">
          <list-item><p>How well can users differentiate between foveated and
non-foveated rendering?</p></list-item>
          <list-item><p>How do varying foveal region configurations influence
the subjective quality perception?</p></list-item>
          <list-item><p>How do varying fixation modes affect the subjective
quality perception?</p></list-item>
        </list>
        <p>Each participant in our study was shown 96 trials resulting
from a 4&#215;4&#215;3 full factorial within-subject design. Each
of the trials consisted of the display of the fly-through (8
seconds) and a varying amount of time for the quality
rating after each sequence. 15 subjects participated in our user
study (10 male, 5 female). They were aged between 26 and
51 (M = 33, SD = 7.24) and all of them had an academic
background. There was no compensation for participating
in the experiment. The considered factors and the according
levels were:</p>
        <list list-type="order">
          <list-item><p>Four scenes {Sponza, TunnelGeom, TunnelMaps, Rungholt} (see <xref ref-type="fig" rid="fig02">Figure 2</xref>)</p></list-item>
          <list-item><p>Four FRCs {small (5&#176;; 10&#176;; 0.01), medium
(10&#176;; 20&#176;; 0.05), large (15&#176;; 30&#176;; 0.1), full (&#8734;; &#8734;; 1)}</p></list-item>
          <list-item><p>Three fixation types {fixed, moving, free}</p></list-item>
        </list>
        <p>Trials were shown to the participants in a randomized
order, with each condition being presented twice, resulting in
4&#215;4&#215;3&#215;2 = 96 trials.</p>
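<p>The resulting trial schedule can be reproduced schematically (a sketch; the shuffling seed is arbitrary, and per-subject randomization is implied by the text):</p>
<preformat>
```python
# Build the randomized 4x4x3x2 = 96-trial sequence: the full factorial
# crossing of scenes, FRCs and fixation types, each condition twice,
# shuffled into a per-subject presentation order.
import itertools
import random

scenes = ["Sponza", "TunnelGeom", "TunnelMaps", "Rungholt"]
frcs = ["small", "medium", "large", "full"]
fixations = ["fixed", "moving", "free"]

# each condition appears twice, then the order is randomized per subject
trials = list(itertools.product(scenes, frcs, fixations)) * 2
random.Random(42).shuffle(trials)
assert len(trials) == 96
```
</preformat>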

<p> The main idea behind varying the fixation types was to
find potential visual tunneling effects that had an influence
on the outcome of the user study. In the fixed focus mode, a
static fixation cross was displayed at the center of the screen. This had to be focused by the user for the entire trial. The
moving target, on the other hand, consisted in a green,
moving sphere. The position of this sphere was determined by
paths that were generated randomly across the image area. For each individual path, the velocity of the fixation target
was static (between 11 and 17 degrees per second).</p>
        <p>To minimize learning effects, the utilized paths were
varied in all trials except for repetitions. Identical combinations
of all variables including the fixation paths were presented to
all test subjects, but in a randomized order. For both fixation
modes, the foveal region was not controlled by the user, but
centered around the fixation target. The additional free focus
mode enabled the users to freely adjust the foveal region&#x2019;s
position with their eye movement. In this case, there is no
reference for the desired PoR as in the other fixation modes.
        Nonetheless, we analyze the tracking data from the free focus
mode in conjunction with given quality ratings and our
measured tracking precision, giving additional hints about
tracking precision and eccentricity-dependent quality perception.</p>
        <p>Quality had to be rated by giving a level of agreement for
two statements: "The shown sequence was free of visual
artifacts." and "I was confident of my answer." Rating was done
using a 7-point Likert scale which ranged from strongly
disagree (-3) to strongly agree (3).</p>
        <p>The tracking data was determined and recorded at a rate
of 60 Hz during all trials, while rendering was performed at a
static update rate of 75 Hz on an Oculus Rift DK2 HMD. The
tracking data was mainly recorded in order to compare it to
the prescribed paths.</p>


        <p>The analyses of variance (ANOVA) and additional t-tests
that were performed on the data collected during our user
study have suggested that it is not possible to make a
reliable differentiation between our optimized rendering
approach and full ray tracing, as long as the FRC is chosen to be
at least of size medium. There were no significant differences
in quality ratings given for FRCs of medium, large and full.</p>
       

	   
        <p>The significant main effect we have found for fixation
types can likely be attributed to effects reducing the
perception of some visual artifacts. The moving target mode was
rated best over all scenes with a mean of 0.99 and a standard
deviation of 1.63, while the static and free fixation resulted in
a mean of 0.43 for both and a standard deviation of 1.81 for
static and 1.89 for free fixation. The user study and
rendering algorithm are both described in more detail in [
          <xref ref-type="bibr" rid="b19">19</xref>
          ], also
providing details regarding the statistical significance of the
presented results.
        </p>
        <p>
          Our goal now is to analyze the tracking data and the users&#x2019;
corresponding quality ratings for the presence of effects like
visual tunneling further than in [
          <xref ref-type="bibr" rid="b19">19</xref>
          ], extending [
          <xref ref-type="bibr" rid="b22">22</xref>
          ]. Effects
like this may affect quality ratings in certain ways unexpected
from the raw data. In addition, we aim to give a deeper
insight into tracking quality by also providing information
on the relation between a user&#x2019;s gaze and the given quality
ratings. All distances in our analysis are average values of
the left and the right eye.
        </p>
		
		
		
		
        <p>To ensure that the recorded data is valid, we first tried
to determine the actual tracking precision. When using the
tracking device, it was quite noticeable that the precision
degraded towards outer image areas. This may be one reason
why the calibration process of the eye tracker&#x2019;s SDK employs
only a relatively small area around the image center. Our estimate for tracking precision is obtained by looking at the
deviations of the recorded PoR from the fixation target&#x2019;s
current position. Our basic assumption is that the fixation
accuracy, describing how well a user can fixate a target, is largely
independent of the target&#x2019;s position in the image. Based on
this assumption, it would follow that worse fixation towards
outer areas most likely results from tracking inaccuracies.</p>


        <p>To estimate tracking precision, we sort the data into bins,
where it is then averaged. These bins have a width of
w = 0.1&#176; and there is a total of n = &#8968;max(F<sub>p,t</sub>(i))/w&#8969; bins
B<sub>j</sub> = (F&#xAF;<sub>j</sub>, G&#xAF;<sub>j</sub>), 0 &#8804; j &lt; n, with
     <xref ref-type="fig" rid="eq01">Equation 1</xref> and <xref ref-type="fig" rid="eq02">Equation 2</xref></p>

	 
	 
	 <fig id="eq01" fig-type="equation" position="anchor">
					<graphic id="equation01" xlink:href="jemr-10-05-b-equation-01.png"/>
				</fig>
				
<fig id="eq02" fig-type="equation" position="anchor">
					<graphic id="equation02" xlink:href="jemr-10-05-b-equation-02.png"/>
				</fig>
				




<fig id="fig02" fig-type="figure" position="float">
					<label>Figure 2</label>
					<caption>
						<p>(a) to (d): Scenes used during our user study. (e), (f): Full ray tracing vs. foveated rendering. The white circles
represent r<sub>0</sub> and r<sub>1</sub> in the foveal function.</p>
					</caption>
					<graphic id="graph02a" xlink:href="jemr-10-05-b-figure-02.png"/><graphic id="graph02b" xlink:href="jemr-10-05-b-figure-03.png"/><graphic id="graph02c" xlink:href="jemr-10-05-b-figure-04.png"/><graphic id="graph02d" xlink:href="jemr-10-05-b-figure-05.png"/><graphic id="graph02e" xlink:href="jemr-10-05-b-figure-06.png"/><graphic id="graph02f" xlink:href="jemr-10-05-b-figure-07.png"/>
				</fig>







<p>Here, F<sub>p,t</sub>(i) is the distance between the fixation target&#x2019;s
current position and the image center, while G<sub>p,t</sub>(i) represents
the distance between the gaze and the fixation target in trial
t at frame i for participant p. G&#xAF;<sub>j</sub>, the average value for the
corresponding bin j, now provides an approximate tracking
quality measure for the contained eccentricities, which span
[j &#215; w, (j + 1) &#215; w]. We analyze this data further by performing
a linear regression, which is described in the results section
below.</p>
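<p>The binning step can be sketched in a few lines (variable names follow the text; the dictionary-based accumulation is our own choice):</p>
<preformat>
```python
# Sort gaze samples into w = 0.1 deg wide bins of the fixation target's
# eccentricity F and average the target distance and the gaze deviation
# G per bin, yielding the (F_bar_j, G_bar_j) pairs described above.
from collections import defaultdict

def bin_samples(F, G, w=0.1):
    """Return {j: (F_mean, G_mean)} with bin j covering [j*w, (j+1)*w)."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for f, g in zip(F, G):
        j = int(f // w)
        sums[j][0] += f
        sums[j][1] += g
        sums[j][2] += 1
    return {j: (sf / n, sg / n) for j, (sf, sg, n) in sums.items()}
```
</preformat>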























        <p>The average fixation accuracy of participants is then
compared for all tested scenes. Finally, we compare the
measured fixation accuracies with quality ratings for individual
scenes and try to explain the apparent effects. To support our
findings regarding tracking precision, we analyze how
quality ratings given by the users relate to average eccentricities
of the points of regard in the free focus mode.</p>
       


	   <p>
          Adults can physically rotate the eye up to 50&#176; horizontally,
42&#176; up and 48&#176; down around the line of sight in the eye&#x2019;s resting
position [
          <xref ref-type="bibr" rid="b28">28</xref>
          ]. However, it has to be noted that in practice,
humans usually do not rotate the eye to the physiologically
possible extent. Beyond a certain angular deviation, a
human will most likely start turning the head instead. This
angular deviation is referred to as the comfortable viewing angle
(CVA). It is considered to be &#8776; 15&#176;  around the normal line of
sight  [
        <xref ref-type="bibr" rid="b29">29, p.17</xref>
        ]. Thus, it is important to note that we did not
account for fixation target eccentricities larger than the CVA
in our tracking precision measures.
        </p>
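<p>Applying this cut-off amounts to a trivial filter (a sketch; only the &#8776;15&#176; value comes from the text):</p>
<preformat>
```python
# Drop fixation-target samples whose eccentricity exceeds the comfortable
# viewing angle (CVA), since such targets were excluded from the
# tracking precision measure described above.

CVA_DEG = 15.0  # comfortable viewing angle around the normal line of sight

def filter_by_cva(target_ecc_deg, cva=CVA_DEG):
    """Keep only samples whose target eccentricity does not exceed the CVA."""
    return [e for e in target_ecc_deg if cva >= e]
```
</preformat>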
        <p>In our user study, head tracking was not implemented
because it was necessary to present identical visual stimuli to
all participants. This would not have been possible if users
were able to freely look around. However, for fixation target
positions further away from the image center than the CVA,
users would most likely not just rely on eye movement to
fixate a target, but instead incorporate head movement.</p>
      </sec>
    </sec>
   


   <sec id="s3">
      <title>Results</title>
      <p>In this section we present the results of our analysis.</p>
      <sec id="s3a">
        <title>Tracking Precision</title>
        <p>To analyze in which way the tracking precision relates to
the actual eccentricity of the PoR (which we assume to be
identical to the fixation target position at this point), we
perform a linear regression with G&#x302;<sub>j</sub> = &#946;<sub>0</sub> + &#946;<sub>1</sub>F&#xAF;<sub>j</sub> + &#946;<sub>2</sub>F&#xAF;<sub>j</sub><sup>2</sup>. This
results in a correlation of 0.989 with &#946; = (1.05, 0.024, 0.008)
and R<sup>2</sup> = 0.978, with the constant (p &#8776; 0), linear (p &lt; 0.01)
and quadratic (p &#8776; 0) terms being statistically significant. The
quadratic prediction for gaze deviation is illustrated in <xref ref-type="fig" rid="fig04">Figure 4</xref>. The decreasing tracking precision for larger eccentricities
becomes apparent from the regression result.</p>
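<p>Such a quadratic fit can be reproduced with ordinary least squares on a design matrix containing constant, linear and quadratic terms. The sketch below uses synthetic data generated from the reported coefficients, not the study&#x2019;s recordings.</p>

```python
import numpy as np

# Synthetic illustration of the fit G_hat = b0 + b1*F + b2*F**2, where F is
# the fixation target eccentricity and G the measured gaze deviation.
rng = np.random.default_rng(0)
F = rng.uniform(0.0, 15.0, 500)                       # eccentricities (deg)
G = 1.05 + 0.024 * F + 0.008 * F**2 + rng.normal(0.0, 0.1, F.size)

# Design matrix with constant, linear and quadratic terms.
X = np.column_stack([np.ones_like(F), F, F**2])
beta, *_ = np.linalg.lstsq(X, G, rcond=None)

# Coefficient of determination R^2 of the fit.
G_hat = X @ beta
ss_res = np.sum((G - G_hat) ** 2)
ss_tot = np.sum((G - G.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

<p>On data generated this way, <code>beta</code> recovers coefficients close to (1.05, 0.024, 0.008); per-term p-values would additionally require the coefficient standard errors, which a statistics package such as statsmodels reports directly.</p>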
        
<fig id="fig04" fig-type="figure" position="float">
					<label>Figure 4</label>
					<caption>
						<p>Tracking precision vs. fixation target’s distance to the image center. The area used for calibration by our tracking
device is also denoted here by C<sub>h</sub> (horizontal extent), C<sub>v</sub> (vertical extent) and C<sub>max</sub> (diagonal extent). The result of linear
regression with a quadratic equation is represented by the blue line. The red area illustrates the residuals. Image adapted from
 [
          <xref ref-type="bibr" rid="b22">22</xref>
          ].</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-10-05-b-figure-09.png"/>
				</fig>


		</sec>
		  
		  
		  
		  
		  
		  
		  
		  
      <sec id="s3b">
        <title>Fixation Accuracy</title>
       

	   <p><xref ref-type="fig" rid="fig03">Figure 3</xref> shows the cumulative distribution functions
(CDFs) for the fixed and the moving fixation target for all
four scenes, with the horizontal axis representing the angular
distance between the user&#x2019;s gaze and the fixation target. It
can be seen that there is a significant difference between the
fixation accuracy for the fixed target (below 1.1&#176;) and the
moving target (approximately 4&#176; to 4.5&#176; for all scenes). In
the discussion section, we explain the result that we would
normally expect from this difference in accuracy and put that
into context with the users&#x2019; actual quality ratings. <xref ref-type="fig" rid="fig05">Figure 5</xref>
shows the distribution of gaze deviation for fixed and moving
targets, respectively.</p>
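<p>The empirical CDFs and the 95% quantiles marked by the dotted lines can be computed directly from the per-frame deviation samples; the following is an illustrative sketch, not the original analysis code.</p>

```python
import numpy as np


def cdf_and_quantile(deviations_deg, q=0.95):
    """Empirical CDF of angular gaze deviations and the q-quantile.

    Returns the sorted deviations, their cumulative probabilities, and
    the q-quantile (e.g. the 95% quantile shown as a dotted line)."""
    x = np.sort(np.asarray(deviations_deg, dtype=float))
    cdf = np.arange(1, x.size + 1) / x.size
    return x, cdf, np.quantile(x, q)
```

<p>Plotting <code>x</code> against <code>cdf</code> per scene yields curves of the kind shown in Figure 3.</p>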

<fig id="fig03" fig-type="figure" position="float">
					<label>Figure 3</label>
					<caption>
						<p>Cumulative distribution functions (CDFs) of the
measured fixation accuracy for fixed and moving targets. The
95% quantiles of gaze deviations for each scene are illustrated
with dotted lines. There are significant differences
between the fixation accuracy for the fixed and the moving
fixation targets. X is the actual gaze deviation.</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-10-05-b-figure-08.png"/>
				</fig><fig id="fig05" fig-type="figure" position="float">
					<label>Figure 5</label>
					<caption>
						<p>Gaze deviation for all individual scenes, fixed and moving targets.</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-10-05-b-figure-10.png"/>
				</fig>


        <p>As indicated by the color range, the figure also illustrates how
often the users&#x2019; gaze was found at the respective positions
relative to the fixation target. The shift of the gaze deviation
to the right can be explained by the utilized fixation paths
not being equally distributed with regard to the fixation
target&#x2019;s movement. An analysis of the paths revealed
that the fixation target moved left more often than right,
which is one possible explanation for the slight shift of the
PoR to the right on average. The difference in the fixation
accuracy between the moving target and the fixation cross is
likely due to smooth pursuit eye movements
(SPEM). Even though the speed of the moving object did not
exceed 100&#176;/s, where a decrease in accuracy is reported
due to physiological constraints, the movement of the target
was not predictable for the user. This naturally leads to
reduced SPEM precision. Moreover, precision is reduced by
the fact that the background lies at the same distance as
the pursuit target. Thus, other signals, e.g. from the vestibular
system, cannot be used by the HVS to discriminate between
target and background  [
          <xref ref-type="bibr" rid="b28">28, p.229</xref>
          ].</p>
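<p>Whether a given fixation path stays below the 100&#176;/s SPEM limit mentioned above can be checked from its per-frame positions; the helper below is a hypothetical sketch, assuming positions are given in degrees of visual angle.</p>

```python
import math


def max_angular_speed_deg_s(path_deg, frame_rate_hz):
    """Maximum angular speed (deg/s) of a fixation target, given its
    per-frame (x, y) positions in degrees of visual angle."""
    speeds = [
        math.hypot(x1 - x0, y1 - y0) * frame_rate_hz
        for (x0, y0), (x1, y1) in zip(path_deg, path_deg[1:])
    ]
    return max(speeds) if speeds else 0.0
```

<p>A path would be rejected (or slowed down) if this maximum exceeded the 100&#176;/s threshold.</p>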



      </sec>




      <sec id="s3c">
        <title>Subjective Perceived Quality: Fixed and Moving Target</title>
		
		
        <p><xref ref-type="fig" rid="fig06">Figure 6</xref> shows that the quality for the moving
target was rated better for all scenes on average. In order to shed
some light on the influence of the actual rendering detail,
<xref ref-type="fig" rid="fig07">Figure 7</xref> illustrates the data for the individual scenes, each with
all three fixation modes and all foveal region configurations
up to full rendering. The red lines show the means for each
of the fixation modes, exhibiting that the aforementioned
effect is present in all tested scenes. It also becomes
apparent that the increase in rendering detail between the medium
and the large FRC did not result in a consistent improvement
of subjective perceived quality. For the moving fixation
target, differences from a medium FRC up to full rendering are
mostly negligible. Interestingly, in some cases a larger FRC
even results in lower subjective perceived quality. We try to
explain the given quality ratings in the discussion section, as
they contradict intuition at first.</p>
<fig id="fig06" fig-type="figure" position="float">
					<label>Figure 6</label>
					<caption>
						<p>Quality for all combinations of scenes and fixation modes. Quality ratings were the highest for all scenes when the
moving target fixation mode was selected, although the fixation accuracy was worse for the moving target than for the fixed
target. The black dots inside the boxes represent the respective mean quality ratings.</p>
					</caption>
					<graphic id="graph06" xlink:href="jemr-10-05-b-figure-11.png"/>
				</fig><fig id="fig07" fig-type="figure" position="float">
					<label>Figure 7</label>
					<caption>
						<p>Quality ratings for fixation modes and Foveal Region Configurations, all scenes.</p>
					</caption>
					<graphic id="graph07" xlink:href="jemr-10-05-b-figure-12.png"/>
				</fig>







      </sec>
	  
	  
	  
	  
      <sec id="s3d">
        <title>Subjective Perceived Quality: Free Focus</title>
		
		
		
        <p>In the free focus mode, users were allowed to move their
eyes freely instead of having to follow a prescribed path. As
we have shown above, tracking quality seemingly degrades
with increasing eccentricities. To verify that this apparent
degradation does not merely stem from fixations, saccades and
other disturbances not being filtered from the raw data, we
take a look at the eccentricity-dependent quality ratings in
the free focus mode. <xref ref-type="fig" rid="fig08">Figure 8</xref> illustrates the
corresponding data (eccentricity and quality ratings) for all scenes.
The left column contains scatter plots for each scene. The
horizontal axis represents the eccentricity, while the
vertical axis represents the mean quality per bin, which has been
computed for bins of size w = 0.1&#176;. For the binning process,
each recorded frame from all trials of a scene was analyzed
for the tracked eccentricity, which was then used to account
for the quality rating in the corresponding bin. Another possible
approach would be to average eccentricities for all individual
trials and bin the data based on that. In addition to the mean
quality for each of the bins, we performed linear regressions
with quadratic equations, which are shown in each of the
plots as a red curve. The right column shows eccentricity
distributions for each scene, i.e., it gives an idea of how
far away from the image center the user inspected the shown
scenes. Differences between scenes turn out to be mostly
minuscule or at least too small to draw any further conclusions.
See the discussion section for further remarks regarding these
illustrations.</p>
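<p>The per-frame binning described above can be sketched as follows; this is an illustrative reconstruction, assuming each frame carries the tracked eccentricity and the quality rating of the trial it belongs to.</p>

```python
import math
from collections import defaultdict


def mean_quality_per_bin(eccentricities_deg, quality_ratings, bin_width=0.1):
    """Mean quality rating per eccentricity bin (bin width w = 0.1 deg).

    Each recorded frame contributes its trial's quality rating to the bin
    of its tracked eccentricity; the mean is taken over all contributions."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for ecc, q in zip(eccentricities_deg, quality_ratings):
        b = math.floor(ecc / bin_width)
        sums[b] += q
        counts[b] += 1
    # Map each bin's lower edge (in degrees) to its mean quality.
    return {b * bin_width: sums[b] / counts[b] for b in sums}
```

<p>The alternative mentioned in the text, averaging eccentricities per trial first, would simply bin one (eccentricity, rating) pair per trial instead of one per frame.</p>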
<fig id="fig08" fig-type="figure" position="float">
					<label>Figure 8</label>
					<caption>
						<p>Eccentricity-dependent mean quality measurements and eccentricity distributions for all scenes for free focus mode.
It is clearly visible that subjective perceived quality degrades with increasing eccentricities. Eccentricity distributions are
similar and almost uniform for all scenes.</p>
					</caption>
					<graphic id="graph08" xlink:href="jemr-10-05-b-figure-13.png"/>
				</fig>



      </sec>
    </sec>
	
    <sec id="s4">
      <title>Discussion</title>
      <p>As shown above, the precision of the utilized eye tracking
device drops with increasing eccentricities. Our solution at
this point has been to limit the area accounted for in our
considerations to fixation targets up to the CVA.
Still, this issue could also be approached differently. One
possibility would be to take a deeper look into the calibration
step of the eye tracking device. As <xref ref-type="fig" rid="fig04">Figure 4</xref> illustrates, the
tracking precision decreases smoothly with increasing
eccentricities. It may be worthwhile to analyze different
calibration procedures for their effect on tracking precision. Also,
our most recent tests have shown other devices to be possibly
more capable of capturing accurate gaze data over a larger
area. New-generation eye trackers will likely improve on
accuracy for greater angles. The result of our tracking
precision analysis should not be interpreted as a direct measure
of tracking precision, even though it seems to be quite
accurate: latency-based deviations, saccades and other possible
disturbances have not been filtered from the data. The actual
behaviour of the measured gaze deviation, however, yields a
good estimate of the eccentricity-dependent precision falloff.</p>
      <p>The lower fixation accuracy that we found in the
moving target mode implies that the user&#x2019;s PoR was often located
within the border area of the foveal region for the small FRC.
This exposes reconstructed (and thus lower-quality) parts of
the scene to the user&#x2019;s central vision. Causes for
the low accuracy that we measured are tracking latency and
the partial unpredictability of the target&#x2019;s movement, as well
as tracking precision itself.</p>



      <p>For the subjective perceived quality ratings, we have
found that an increase in rendering detail did not always
result in improved quality ratings. One possible cause for this
is the reprojection method hiding visual artifacts by effectively applying a low-pass filter to them, as even full
rendering &#x2013; like all rendering methods &#x2013; is still a subsampling of
the rendered scene, just with a finer and more regular pixel
grid. Thus, it may also contain visual artifacts. A more
detailed explanation of blur effects that occur when using
reprojection methods can be found in (?). Also, when rating the
subjective perceived quality for the fixed targets on the one
hand and the moving targets on the other, intuition may
suggest a worse outcome for the latter because of the larger
gaze deviation illustrated in <xref ref-type="fig" rid="fig03">Figure 3</xref> and <xref ref-type="fig" rid="fig05">Figure 5</xref>. This assumption is
mainly based on the fact that the visibility of image regions
rendered at a lower quality increases when the gaze
deviation from the fixation target is higher. However, contrary
to this, <xref ref-type="fig" rid="fig06">Figure 6</xref> and <xref ref-type="fig" rid="fig07">Figure 7</xref> reveal the quality ratings for moving
target fixation to be better in all tested scenes. We interpret
the consistent differences in subjective perceived quality
between fixation modes, and their counterintuitive nature when
taking tracking precision and temporal effects into account,
as evidence for the possible presence of visual tunneling
effects. This means that visual artifacts that appear in our
rendering system are effectively filtered by human perception,
which makes them largely imperceptible. Also, there often
was a clear tendency towards negative ratings for the small
FRC. One possibility to overcome this issue is to enlarge the
foveal region proportionally to the occurring eye movement.
However, this approach poses a significant challenge. The
achieved frame rate is already considered critical when
it comes to head-mounted displays in general, and it becomes
even more critical when incorporating (potentially rapid) eye
movements. Conversely, increasing the rendering quality
results in a performance hit, making it even more difficult
to achieve the necessary refresh rate. Thus, this approach is
only viable in an environment where enough computational
resources are available. This may raise the question why
these computational resources should not be put into
rendering a larger FRC in the first place, but it has to be kept in mind that
visualization is often only one part of an application and
resources also need to be available for other components such
as physics, interaction or AI, for example in computer games.</p>


      <p>We have also shown results of the subjective perceived
quality in the free focus mode (cf. <xref ref-type="fig" rid="fig08">Figure 8</xref>). As mentioned
above, it becomes clear that subjective perceived quality in
this mode degrades with increasing eccentricities. Besides a
degradation on average, quality measurements also become
rather unpredictable in areas further away from the image
center. This may imply that the visual effects that occur
through mismatches between the actual PoR and the
measured PoR do not have the same effect for all users. We
suggest that this can be attributed mainly to the FRCs, as a
large foveal region still presents the most important parts of
the image at full detail to the user, while smaller FRCs tend
to miss the user&#x2019;s central vision completely due to tracking
inaccuracies.</p>



      <p>One of the challenges that are yet unsolved is the issue
of HMDs getting out of place in the process of a user study,
or, more generally, during the execution of an application or
specific task. Even slight movements of the HMD may lead
to an eye tracker&#x2019;s calibration becoming invalid. However,
asking the user to repeat the calibration step each time the
HMD has moved too much is not a viable option. In
contrast to an explicit calibration procedure, a calibration
that is embedded into the task at hand would make HMDs
with eye trackers more practical for everyday applications.
</p>
    </sec>
	
	
    <sec id="s5">
      <title>Conclusion</title>
      <p>In this paper we have analyzed the data recorded by an eye
tracking device during the evaluation of our foveated
rendering method. We described our evaluation setup as well as
the rendering method itself. Tracking precision has been
analyzed regarding its angular dependencies, revealing a clear
drop of tracking quality for higher eccentricities.
Accordingly, quality ratings for free focus mode also show a clear
drop towards larger eccentricities. Properties of tracking
devices such as this have to be accounted for when
implementing foveated rendering methods, as the point of regard is a
crucial measurement in such setups. Having measured these
inaccuracies of the tracking device, it becomes clear that
applications that rely on methods from this field have to adjust
the specific parameterizations for the given circumstances.</p>
      <p>We have analyzed the ability of users to fixate static and
moving targets. While we found the PoR to be
scattered over larger areas in the moving target mode, the results
seemed to contradict the intuitive assumption that worse
fixations should result in worse quality ratings. The mean quality
ratings were best for the moving target mode in all scenes,
even though the match between the measured PoR and the
actually focused PoR was worse than for the static fixation
mode. Even though this may lead to subsampling and
reprojection artifacts being exposed to the user, the ratings were
still better, which we attribute to the potential presence of
visual tunneling effects induced by the mental load
of the task that has to be carried out, although the task was
just to follow a moving point. Effectively, this reduces
the user&#x2019;s field of view.</p>
      <p>
        Thus, there are circumstances which make it possible to
reduce visual quality. This is the case in games, where events
can be triggered that produce a change in the visuals, or in
task-driven environments, where task or navigation complexity
may lead to high mental workloads. Moreover, certain events
may allow for deriving a hint as to which part of the scene attracts
attention. Thus, visual quality can be reduced even further
(selective rendering). However, attentional models and gaze
predictions are far from accurate [
        <xref ref-type="bibr" rid="b5">5</xref>
        ]. More recently, though,
the flicker observer effect and the higher temporal resolution
of peripheral vision have successfully been used to direct the
user&#x2019;s gaze [
        <xref ref-type="bibr" rid="b30">30</xref>
        ].
      </p>
	
	  
	  
      <p>Future research in the area of foveated rendering may
analyze further how optimal foveal region configurations can
be determined and how the point of regard can be optimally
placed even with imprecise tracking. Also, it may be possible
to exploit visual tunneling effects directly to improve
performance or, alternatively, visual quality for central vision.
In addition, it may be worth looking into the comparative
behavior of tracking devices with different update rates for analyses such as the one we have presented here.</p>
    </sec>
    <sec id="s6">
      <title>Acknowledgements</title>
      <p>We would like to thank NVIDIA for providing us with
two Quadro K6000 graphics cards for the user study, the
Intel Visual Computing Institute, the European Union (EU)
for the co-funding as part of the Dreamspace project, the
German Federal Ministry for Economic Affairs and Energy
(BMWi) for funding the MATEDIS ZIM project (grant no
KF2644109) and the Federal Ministry of Education and
Research (BMBF) for funding the project OLIVE (grant no
13N13161).</p>
    </sec>
    <sec id="s7" sec-type="COI-statement">
      <title>Conflict of Interest</title>
      <p>The authors declare that there is no conflict of interest
regarding the publication of this paper.</p>
    </sec>
  </body>
  
  
  
  
  
  
  
  
  
  
  
  
  <back>
<ref-list>
<ref id="b1"><label>1.	</label><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Hunt</surname> <given-names>W</given-names></string-name></person-group>. Virtual Reality: The Next Great GraphicsRevolution; <year>2015</year>. Keynote Talk HPG. Availablefrom: <ext-link ext-link-type="uri" xlink:href="http://www.highperformancegraphics.org/wp-content/uploads/2015/Keynote1/WarrenHuntHPGKeynote.pptx" xlink:show="new">http://www.highperformancegraphics.org/wp-content/uploads/2015/Keynote1/WarrenHuntHPGKeynote.pptx</ext-link></mixed-citation></ref>
<ref id="b2"><label>2. </label><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Hale</surname> <given-names>KS</given-names></string-name>, <string-name><surname>Stanney</surname> <given-names>KM</given-names></string-name></person-group>. <source>Handbook of Virtual Environments:Design, Implementation, and Applications. 2nded</source>. <publisher-loc>Boca Raton, FL, USA</publisher-loc>: <publisher-name>CRC Press, Inc.</publisher-name>; <year>2014</year>. <pub-id pub-id-type="doi">10.1201/b17360</pub-id></mixed-citation></ref>
<ref id="b3"><label>3. </label><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Wandell</surname> <given-names>BA</given-names></string-name></person-group>. <source>Foundations of Vision</source>. <publisher-name>Stanford University</publisher-name>; <year>1995</year>.</mixed-citation></ref>
<ref id="b4"><label>4.	</label><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>McNamara</surname> <given-names>A</given-names></string-name>, <string-name><surname>Mania</surname> <given-names>K</given-names></string-name>, <string-name><surname>Banks</surname> <given-names>M</given-names></string-name>, <string-name><surname>Healey</surname> <given-names>C</given-names></string-name></person-group>. Perceptually-motivated Graphics, Visualization and 3DDisplays. In: SIGGRAPH ’10, Courses. ACM; <year>2010</year>.p. 7:1–7:159.</mixed-citation></ref>
<ref id="b5"><label>5.	</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Weier</surname> <given-names>M</given-names></string-name>, <string-name><surname>Stengel</surname> <given-names>M</given-names></string-name>, <string-name><surname>Roth</surname> <given-names>T</given-names></string-name>, <string-name><surname>Didyk</surname> <given-names>P</given-names></string-name>, <string-name><surname>Eisemann</surname> <given-names>E</given-names></string-name>, <string-name><surname>Eisemann</surname> <given-names>M</given-names></string-name> <etal>et al.</etal></person-group> <article-title>Perception-driven Accelerated Rendering</article-title>. <source>Comput Graph Forum</source>. <year>2017</year>;<volume>36</volume>(<issue>2</issue>):<fpage>611</fpage>–<lpage>43</lpage>. <pub-id pub-id-type="doi">10.1111/cgf.13150</pub-id><issn>0167-7055</issn></mixed-citation></ref>
<ref id="b6"><label>6.	</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Masia</surname> <given-names>B</given-names></string-name>, <string-name><surname>Wetzstein</surname> <given-names>G</given-names></string-name>, <string-name><surname>Didyk</surname> <given-names>P</given-names></string-name>, <string-name><surname>Gutierrez</surname> <given-names>D</given-names></string-name></person-group>. <article-title>A surveyon computational displays: pushing the boundaries of optics, computation, and perception</article-title>. <source>Comput Graph</source>. <year>2013</year>;<volume>37</volume>(<issue>8</issue>):<fpage>1012</fpage>–<lpage>38</lpage>. <pub-id pub-id-type="doi">10.1016/j.cag.2013.10.003</pub-id><issn>0097-8930</issn></mixed-citation></ref>
<ref id="b7"><label>7.	</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Corsini</surname> <given-names>M</given-names></string-name>, <string-name><surname>Larabi</surname> <given-names>MC</given-names></string-name>, <string-name><surname>Lavoué</surname> <given-names>G</given-names></string-name>, <string-name><surname>Petřík</surname> <given-names>O</given-names></string-name>, <string-name><surname>Váša</surname> <given-names>L</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>K</given-names></string-name></person-group>. <article-title>Perceptual metrics for static and dynamic triangle meshes</article-title>. <comment>ACM Eurographics ’12 - STAR</comment>. <source>Comput Graph Forum</source>. <year>2013</year>;<volume>32</volume>(<issue>1</issue>):<fpage>101</fpage>–<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1111/cgf.12001</pub-id><issn>0167-7055</issn></mixed-citation></ref>
<ref id="b8"><label>8.	</label><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Funkhouser</surname> <given-names>TA</given-names></string-name>, <string-name><surname>Séquin</surname> <given-names>CH</given-names></string-name></person-group>. <article-title>Adaptive display algorithm for interactive frame rates during visualization of complex virtual environments.</article-title> In: <source>20th annual conference on Computer graphics and interactive techniques</source>. <publisher-name>ACM</publisher-name>; <year>1993</year>. p. <fpage>247</fpage>–<lpage>254</lpage>. <pub-id pub-id-type="doi">10.1145/166117.166149</pub-id></mixed-citation></ref>
<ref id="b9"><label>9. </label><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Horvitz</surname> <given-names>E</given-names></string-name>, <string-name><surname>Lengyel</surname> <given-names>J</given-names></string-name></person-group>. <chapter-title>Perception, Attention, and Resources: A Decision-Theoretic Approach to Graphics Rendering</chapter-title>. <source>UAI</source>. <publisher-name>Morgan Kaufmann</publisher-name>; <year>1997</year>. pp. <fpage>238</fpage>–<lpage>49</lpage>.</mixed-citation></ref>
<ref id="b10"><label>10.	</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Yee</surname> <given-names>H</given-names></string-name>, <string-name><surname>Pattanaik</surname> <given-names>S</given-names></string-name>, <string-name><surname>Greenberg</surname> <given-names>DP</given-names></string-name></person-group>. <article-title>Spatiotemporal Sensitivity and Visual Attention for Efficient Rendering of Dynamic Environments</article-title>. <source>ACM Trans Graph</source>. <year>2001</year> <month>Jan</month>;<volume>20</volume>(<issue>1</issue>):<fpage>39</fpage>–<lpage>65</lpage>. <pub-id pub-id-type="doi">10.1145/383745.383748</pub-id><issn>0730-0301</issn></mixed-citation></ref>
<ref id="b11"><label>11.	</label><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Levoy</surname> <given-names>M</given-names></string-name>, <string-name><surname>Whitaker</surname> <given-names>R</given-names></string-name></person-group>. <article-title>Gaze-directed Volume Rendering.</article-title> In: Proceedings of the 1990 Symposium on Interactive 3D Graphics. I3D ’90. ACM; <year>1990</year>. p. 217–223. <pub-id pub-id-type="doi">10.1145/91385.91449</pub-id></mixed-citation></ref>
<ref id="b12"><label>12. </label><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Murphy</surname> <given-names>H</given-names></string-name>, <string-name><surname>Duchowski</surname> <given-names>AT</given-names></string-name></person-group>. <article-title>Gaze-contingent level of detail rendering.</article-title> Eurographics 2001. <year>2001</year>;.</mixed-citation></ref>
<ref id="b13"><label>13. </label><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Watson</surname> <given-names>B</given-names></string-name>, <string-name><surname>Walker</surname> <given-names>N</given-names></string-name>, <string-name><surname>Hodges</surname> <given-names>LF</given-names></string-name>, <string-name><surname>Worden</surname> <given-names>A</given-names></string-name></person-group>. Managing Level of Detail Through Peripheral Degradation: Effects on Search Performance with a Head-mounted Display. ACM Trans Comput-Hum Interact. <year>1997</year>.Dec;4(4):323–346.</mixed-citation></ref>
<ref id="b14"><label>14.	</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Duchowski</surname> <given-names>AT</given-names></string-name>, <string-name><surname>Bate</surname> <given-names>D</given-names></string-name>, <string-name><surname>Stringfellow</surname> <given-names>P</given-names></string-name>, <string-name><surname>Thakur</surname> <given-names>K</given-names></string-name>, <string-name><surname>Melloy</surname> <given-names>BJ</given-names></string-name>, <string-name><surname>Gramopadhye</surname> <given-names>AK</given-names></string-name></person-group>. <article-title>On Spatiochromatic Visual Sensitivity and Peripheral Color LOD Management</article-title> <comment>[TAP]</comment>. <source>ACM Trans Appl Percept</source>. <year>2009</year>;<volume>6</volume>(<issue>2</issue>):<fpage>9</fpage>. <pub-id pub-id-type="doi">10.1145/1498700.1498703</pub-id><issn>1544-3558</issn></mixed-citation></ref>
<ref id="b15"><label>15.	</label><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Loschky</surname> <given-names>LC</given-names></string-name>, <string-name><surname>McConkie</surname> <given-names>GW</given-names></string-name></person-group>. <article-title>User Performance with Gaze Contingent Multiresolutional Displays.</article-title> In: <source>Proceedings of the 2000 Symposium on Eye Tracking Research &amp; Applications. ETRA ’00</source>. <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>; <year>2000</year>. p. <fpage>97</fpage>–<lpage>103</lpage>. <pub-id pub-id-type="doi">10.1145/355017.355032</pub-id></mixed-citation></ref>
<ref id="b16"><label>16. </label><mixed-citation publication-type="unknown" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Loschky</surname> <given-names>LC</given-names></string-name>, <string-name><surname>Wolverton</surname> <given-names>GS</given-names></string-name></person-group>. <article-title>How Late Can You Update Gaze-contingent Multiresolutional Displays Without Detection?</article-title> ACM Trans Multimedia Comput Commun Appl. <year>2007</year> Dec;3(4):7:1–7:10. <pub-id pub-id-type="doi">10.1145/1314303.1314310</pub-id></mixed-citation></ref>
<ref id="b17"><label>17.	</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Saunders</surname> <given-names>DR</given-names></string-name>, <string-name><surname>Woods</surname> <given-names>RL</given-names></string-name></person-group>. <article-title>Direct measurement of the system latency of gaze-contingent displays</article-title>. <source>Behav Res Methods</source>. <year>2014</year> <month>Jun</month>;<volume>46</volume>(<issue>2</issue>):<fpage>439</fpage>–<lpage>47</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-013-0375-5</pub-id><pub-id pub-id-type="pmid">23949955</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b18"><label>18.	</label><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Ringer</surname> <given-names>RV</given-names></string-name>, <string-name><surname>Johnson</surname> <given-names>AP</given-names></string-name>, <string-name><surname>Gaspar</surname> <given-names>JG</given-names></string-name>, <string-name><surname>Neider</surname> <given-names>MB</given-names></string-name>, <string-name><surname>Crowell</surname> <given-names>J</given-names></string-name>, <string-name><surname>Kramer</surname> <given-names>AF</given-names></string-name> <etal>et al.</etal></person-group> <article-title>Creating a New Dynamic Measure of the Useful Field of View Using Gaze-contingent Displays.</article-title> In: <source>Proceedings of the Symposium on Eye Tracking Research and Applications. ETRA ’14</source>. <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>; <year>2014</year>. p. <fpage>59</fpage>–<lpage>66</lpage>. <pub-id pub-id-type="doi">10.1145/2578153.2578160</pub-id></mixed-citation></ref>
<ref id="b19"><label>19.	</label><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Weier</surname> <given-names>M</given-names></string-name>, <string-name><surname>Roth</surname> <given-names>T</given-names></string-name>, <string-name><surname>Kruij</surname> <given-names>E</given-names></string-name>, <string-name><surname>Hinkenjann</surname> <given-names>A</given-names></string-name>, <string-name><surname>Pérard-Gayot</surname> <given-names>A</given-names></string-name>, <string-name><surname>Slusallek</surname> <given-names>P</given-names></string-name> <etal>et al.</etal></person-group> <article-title>Foveated Real-time Ray Tracing for Head-mounted Displays.</article-title> In: <source>Proceedings of the 24th Pacific Conference on Computer Graphics and Applications. PG ’16</source>. <publisher-loc>Goslar Germany, Germany</publisher-loc>: <publisher-name>Eurographics Association</publisher-name>; <year>2016</year>. p. <fpage>289</fpage>–<lpage>298</lpage>. Available from <pub-id pub-id-type="doi">10.1111/cgf.13026</pub-id></mixed-citation></ref>

<ref id="b20"><label>20.	</label><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Nehab</surname> <given-names>D</given-names></string-name>, <string-name><surname>Sander</surname> <given-names>PV</given-names></string-name>, <string-name><surname>Lawrence</surname> <given-names>J</given-names></string-name>, <string-name><surname>Tatarchuk</surname> <given-names>N</given-names></string-name>, <string-name><surname>Isidoro</surname> <given-names>JR</given-names></string-name></person-group>. <article-title>Accelerating Real-time Shading with Reverse Reprojection Caching.</article-title> In: <source>Proceedings of the 22nd ACM SIGGRAPH/EUROGRAPHICS Symposium on Graphics Hardware. GH ’07</source>. <publisher-name>Eurographics Association</publisher-name>; <year>2007</year>. p. <fpage>25</fpage>–<lpage>35</lpage>.</mixed-citation></ref>
<ref id="b21"><label>21.	</label><mixed-citation publication-type="journal" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Yang</surname> <given-names>L</given-names></string-name>, <string-name><surname>Nehab</surname> <given-names>DF</given-names></string-name>, <string-name><surname>Sander</surname> <given-names>PV</given-names></string-name>, <string-name><surname>Sitthi-amorn</surname> <given-names>P</given-names></string-name>, <string-name><surname>Lawrence</surname> <given-names>J</given-names></string-name>, <string-name><surname>Hoppe</surname> <given-names>H</given-names></string-name></person-group>. <article-title>Amortized supersampling.</article-title> <source>ACM Trans Graph</source>. <year>2009</year>;<volume>28</volume>(<issue>5</issue>):<fpage>135:1</fpage>–<lpage>135:12</lpage>. <pub-id pub-id-type="doi">10.1145/1661412.1618481</pub-id></mixed-citation></ref>
<ref id="b22"><label>22.	</label><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Roth</surname> <given-names>T</given-names></string-name>, <string-name><surname>Weier</surname> <given-names>M</given-names></string-name>, <string-name><surname>Hinkenjann</surname> <given-names>A</given-names></string-name>, <string-name><surname>Li</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Slusallek</surname> <given-names>P</given-names></string-name></person-group>. <article-title>An Analysis of Eye-tracking Data in Foveated Ray Tracing.</article-title> In: <source>Second Workshop on Eye Tracking and Visualization (ETVIS 2016)</source>, <conf-loc>Baltimore, USA</conf-loc>; <year>2016</year>. p. <fpage>69</fpage>–<lpage>73</lpage>. <pub-id pub-id-type="doi">10.1109/ETVIS.2016.7851170</pub-id></mixed-citation></ref>
<ref id="b23"><label>23.	</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Ooms</surname> <given-names>K</given-names></string-name>, <string-name><surname>Dupont</surname> <given-names>L</given-names></string-name>, <string-name><surname>Lapon</surname> <given-names>L</given-names></string-name>, <string-name><surname>Popelka</surname> <given-names>S</given-names></string-name></person-group>. <article-title>Accuracy and precision of fixation locations recorded with the low-cost Eye Tribe tracker in different experimental set-ups</article-title>. <source>J Eye Mov Res</source>. <year>2015</year>;<volume>8</volume>(<issue>1</issue>). <issn>1995-8692</issn></mixed-citation></ref>
<ref id="b24"><label>24.	</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Sharma</surname> <given-names>A</given-names></string-name>, <string-name><surname>Abrol</surname> <given-names>P</given-names></string-name></person-group>. <article-title>Direction Estimation Model for Gaze Controlled Systems</article-title>. <source>J Eye Mov Res</source>. <year>2016</year>;<volume>9</volume>(<issue>6</issue>).<issn>1995-8692</issn></mixed-citation></ref>
<ref id="b25"><label>25. </label><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Duchowski</surname> <given-names>AT</given-names></string-name></person-group>. <source>Eye Tracking Methodology: Theory and Practice</source>. <publisher-loc>Secaucus, NJ, USA</publisher-loc>: <publisher-name>Springer-Verlag New York, Inc.</publisher-name>; <year>2007</year>.</mixed-citation></ref>
<ref id="b26"><label>26.	</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Strasburger</surname> <given-names>H</given-names></string-name>, <string-name><surname>Rentschler</surname> <given-names>I</given-names></string-name>, <string-name><surname>Jüttner</surname> <given-names>M</given-names></string-name></person-group>. <article-title>Peripheral vision and pattern recognition: a review</article-title>. <source>J Vis</source>. <year>2011</year> <month>Dec</month>;<volume>11</volume>(<issue>5</issue>):<fpage>13</fpage>. <pub-id pub-id-type="doi">10.1167/11.5.13</pub-id><pub-id pub-id-type="pmid">22207654</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="b27"><label>27.	</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Guenter</surname> <given-names>B</given-names></string-name>, <string-name><surname>Finch</surname> <given-names>M</given-names></string-name>, <string-name><surname>Drucker</surname> <given-names>S</given-names></string-name>, <string-name><surname>Tan</surname> <given-names>D</given-names></string-name>, <string-name><surname>Snyder</surname> <given-names>J</given-names></string-name></person-group>. <article-title>Foveated 3D Graphics</article-title>. <source>ACM Trans Graph</source>. <year>2012</year>;<volume>31</volume>(<issue>6</issue>):<fpage>164</fpage>. <pub-id pub-id-type="doi">10.1145/2366145.2366183</pub-id><issn>0730-0301</issn></mixed-citation></ref>
<ref id="b28"><label>28. </label><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Adler</surname> <given-names>FH</given-names></string-name>, <string-name><surname>Kaufman</surname> <given-names>PL</given-names></string-name>, <string-name><surname>Levin</surname> <given-names>LA</given-names></string-name>, <string-name><surname>Alm</surname> <given-names>A</given-names></string-name></person-group>. <source>Adler’s Physiology of the Eye</source>. <publisher-name>Elsevier Health Sciences</publisher-name>; <year>2011</year>.</mixed-citation></ref>
<ref id="b29"><label>29.	</label><mixed-citation publication-type="book" specific-use="unparsed"><person-group person-group-type="author"><collab>Department of Defense</collab></person-group>. <source>Design Criteria Standard: Human Engineering, MIL-STD-1472F</source>. United States of America; <year>1999</year>.</mixed-citation></ref>
<ref id="b30"><label>30.	</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Waldin</surname> <given-names>N</given-names></string-name>, <string-name><surname>Waldner</surname> <given-names>M</given-names></string-name>, <string-name><surname>Viola</surname> <given-names>I</given-names></string-name></person-group>. <article-title>Flicker Observer Effect: Guiding Attention Through High Frequency Flicker in Images</article-title>. <source>Comput Graph Forum</source>. <year>2017</year>;<volume>36</volume>(<issue>2</issue>):<fpage>467</fpage>–<lpage>76</lpage>. <pub-id pub-id-type="doi">10.1111/cgf.13141</pub-id><issn>0167-7055</issn></mixed-citation></ref>
</ref-list>
  </back>
</article>
