<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.10.4.3</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Pupil response as an indicator of hazard perception during simulator driving</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Vintila</surname>
						<given-names>Florentin</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>K&#xFC;bler</surname>
						<given-names>Thomas C.</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>				
				<contrib contrib-type="author">
					<name>
						<surname>Kasneci</surname>
						<given-names>Enkelejda</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
        <aff id="aff1">
		<institution>University of T&#xFC;bingen</institution>, <country>Germany</country>
        </aff>
		</contrib-group>
     
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>6</day>  
		<month>11</month>
        <year>2017</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2017</year>
	</pub-date>
      <volume>10</volume>
      <issue>4</issue>
	   <elocation-id>10.16910/jemr.10.4.3</elocation-id> 
	<permissions> 
	<copyright-year>2017</copyright-year>
	<copyright-holder>Vintila, K&#xFC;bler and Kasneci</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>We investigate the pupil response to hazard perception during driving simulation. Complementary to gaze movement and physiological stress indicators, pupil size changes can provide valuable information on traffic hazard perception with a relatively low temporal delay. We tackle the challenge of identifying those pupil dilation events associated with hazardous events from a noisy signal by a combination of wavelet transformation and machine learning. Therefore, we use features of the wavelet components as training data of a support vector machine. We further demonstrate how to utilize the method for the analysis of actual hazard perception and how it may differ from the behavioral driving response.</p>
      </abstract>
      <kwd-group>
        <kwd>Driving</kwd>
        <kwd>Stress indicators</kwd>
        <kwd>Pupil diameter</kwd>
        <kwd>Attention</kwd>
        <kwd>Visual field defect</kwd>
        <kwd>Wavelets</kwd>
        <kwd>Supervised classification</kwd>
      </kwd-group>
    </article-meta>
  </front>	

  <body>

    <sec id="S1">
      <title>Introduction</title>
	  
      <p>The size of the human pupil adapts to the amount
of light that enters the eye. An increase in luminance
therefore results in a fast constriction of the pupil, a
luminance decrease in a gradual &#x2018;unconstriction&#x2019;. This
pupillary light reflex (PLR) (
        <xref ref-type="bibr" rid="R24">1</xref>
        ) regulates the light influx. On
the other hand, there is a well-studied correlation between pupillary
dilation and cognitive factors such as workload (
        <xref ref-type="bibr" rid="R25">2</xref>
        ), surprise (
        <xref ref-type="bibr" rid="R26">3</xref>
        ), attention (
        <xref ref-type="bibr" rid="R27">4</xref>
        ), and emotional
arousal (
        <xref ref-type="bibr" rid="R28">5</xref>
        ).</p>
		
      <p>Pupil dilation constitutes a proxy for indirect
measurement of these cognitive factors, which would
otherwise only be visible with costly and intrusive
measurements such as EEG. Through behavioral observation,
however, one can only measure the superposition of PLR
and cognitive influences. Both &#x2018;unconstriction&#x2019; due to a
luminance change and pupil dilation due to an increased
arousal level result in a larger pupil. This fact effectively
limits the usefulness of pupil dilation in most practical
applications, as an equiluminant surrounding is only
realistic for laboratory experiments.</p>

      <p>Not only do PLR unconstriction and cognitive
pupil dilation have different causes; they are also triggered
by different brain regions and manifest through different
components of the eye musculature. The PLR is driven
mainly by the constriction and relaxation of the iris sphincter
muscle, whereas cognitive changes innervate the iris dilator
muscle (
        <xref ref-type="bibr" rid="R24">1</xref>
        ), (
        <xref ref-type="bibr" rid="R29">6</xref>
        ).</p>
		
      <p>With the &#x2018;index of cognitive activity&#x2019;
(ICA), Marshall demonstrated that these processes are in
fact so different in their manifestation (especially in the
speed and acceleration of dilation and constriction) that
sophisticated signal processing can separate cognitive
from PLR-caused pupil size changes (
        <xref ref-type="bibr" rid="R30">7</xref>
        ). Schwalm et al.
later used this method to distinguish between mental
workload levels of drivers in a simulator (
        <xref ref-type="bibr" rid="R25">2</xref>
        ). In the
context of driving, the study of pupillary dilation has mostly
focused on mental workload (
        <xref ref-type="bibr" rid="R31 R32">8, 9</xref>
        ).</p>
		
      <p>Driving is generally considered a foveated task (i.e.,
an object generally needs to be fixated by the driver in
order to be perceived). However, drivers can perceive
certain potential hazards without or shortly before an
explicit fixation. On the other hand, several studies have
empirically shown that the mere fixation of a specific
object does not imply its perception, nor its interpretation
as hazardous by the driver. For example, in (
        <xref ref-type="bibr" rid="R33">10</xref>
        ) hazard
fixation was found to be unreliable for predicting hazard
perception, as an object can either be &#x2018;cognitively
overlooked&#x2019; or incorrectly judged as non-hazardous by the
driver. Thus, if we want to infer information on hazard
perception in a driving scene, the fixation-based
information is not sufficient. Other physiological signals, such
as electrocardiography (ECG) or Galvanic Skin Response
(GSR), can help us to disambiguate. However, they
usually show a variable and relatively long delay (of
several seconds), so they are neither applicable to a
real-time use case, e.g., to trigger assistance systems, nor to
determining the exact moment in time when a hazard is
perceived. Contrary to these physiological parameters, the
pupil response happens almost instantly and spans only
about 2 seconds (
        <xref ref-type="bibr" rid="R34">11</xref>
        ). This lack of a delay allows for a
timely interaction with in-vehicle systems.</p>

      <p>In this study, we investigate the pupil dilation in
immediate response to a hazard during driving. Our aim is
to investigate the predictive quality of the pupillary signal
to infer hazard perception. Being able to detect hazard
perception of a driver reliably via a change in pupil
diameter is interesting for multiple reasons: in Underwood,
Ngai &#x26; Underwood (2013), the authors perform a hazard
perception task in which subjects are to press the space bar
once they perceive a hazard. Similar experimental setups
are common in studying hazard detection, e.g., in (
        <xref ref-type="bibr" rid="R35">12</xref>
        )
subjects were to honk upon detection of a pedestrian. We
could substitute such artificial manual feedback by a
noninvasive measurement of pupil dilation. Furthermore, we
could disambiguate other stress signals, such as hazard
fixation, heart rate changes or the galvanic skin response
by use of the pupil diameter: was an object perceived and
judged as hazardous?</p>

      <p>For this purpose, we can build on insights gained from
the analysis of mental workload during driving, as the
identification of a stress response shares the common
problem of isolating a cognitive pupillary dilation from
the PLR. For example, in (
        <xref ref-type="bibr" rid="R36">13</xref>
        ), the authors find an increase
in pupillary dilation with mental workload that is reliable
even under the daylight variations of a natural
environment. However, the detection of this effect is only
possible through averaging over a large amount of data and
by applying statistical methods. Finding a statistically
significant difference in a large collection of data does
not imply that a useful classification of individual trials
towards a specific mental workload state is possible.</p>

      <p>In this context, the index of cognitive activity is of
much interest, as its authors claim that it is almost
immune to illumination changes. To this end, a wavelet
transformation retains only those pupil changes that did not
originate from ambient illumination changes. By
analyzing only certain components of the wavelet-transformed
signal, we filter for a specific dilation speed and
amplitude (
        <xref ref-type="bibr" rid="R30">7</xref>
        ).</p>
		
      <p>For determining a stress level, increasing mean values
of the pupil diameter over time are commonly used (
        <xref ref-type="bibr" rid="R37">14</xref>
        ).
This averaging has the advantage of being relatively
robust towards momentary pupil diameter changes as
caused by rapid illumination changes. Pedrotti et al. used
a wavelet-transformed pupil diameter in a simulated
driving task in order to classify different stress levels of the
driver (
        <xref ref-type="bibr" rid="R38">15</xref>
        ). Such a procedure is useful when a gradual
change in stress level is expected. However, for our
application, we are interested in spontaneous, fast stress
events and an average filter would delay the detection of
the expected steep and short peaks.</p>

      <p>In the following, a filtering and classification cascade
for the pupil diameter signal is introduced that can be
utilized to classify the perception of hazards during
driving in a simulator.</p>
    </sec>
	
    <sec id="S2">
      <title>Methods</title>
      <sec id="S2a">
        <title>Driving simulator experiment</title>
		
        <p>Thirty-one subjects drove in the moving-base driving
simulator (
          <xref ref-type="bibr" rid="R39">16</xref>
          ) at the Mercedes-Benz Technology Center
in Sindelfingen, Germany. The cabin contained a real car
body amidst a 360&#xB0; virtual reality, making the driving
experience very realistic. Each subject completed a 40 min
drive of 37.5 km length. Nine hazardous situations
occurred at predefined positions along the course. A
Dikablis Essential eye tracker (Ergoneers GmbH,
Manching, Germany) recorded eye movements and pupil
size at 25 Hz. Simultaneously, we recorded the
physiological parameters galvanic skin conductance (GSC,
Biotrace+ with finger electrodes) and heart rate (ECG,
custom mobile 3-channel device). Figure 1 shows the
experimental setup. The processing steps required to
derive an indicator of hazard perception from these
sensors are published in (
          <xref ref-type="bibr" rid="R33">10</xref>
          ).</p>
		  
<fig id="fig01" fig-type="figure" position="float">
					<label>Fig. 1.</label>
					<caption>
						<p>Setup of the vital parameter sensors in the driving simulator.</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-10-04-c-figure-01.png"/>
				</fig>			  
		  
        <p>All subjects were recruited from the department of
Neuro-Ophthalmology at the University of T&#xFC;bingen
(Germany). The research study was approved by the
Institutional Review Board of the University of T&#xFC;bingen
(Germany) and was performed according to the
Declaration of Helsinki. The aim of the original study was to analyze
the driving performance of patients with binocular visual
field loss (16 patients, 15 control subjects). For the
analysis provided here, we do not expect an influence of these
groups on the pupil diameter and therefore provide no
further interpretation with regard to the visual field
defects.</p>
      </sec>
	  
      <sec id="S2b">
        <title>Pupillary data processing</title>
		
        <p>As we are operating on data recorded in a close-to-realistic
environment, we first have to ensure sufficient data
quality. In a preprocessing step, we eliminated blinks,
partial blinks and unlikely pupil sizes from the data:</p>

        <p>The first 30 seconds of the pupil signal were very
noisy due to an acclimation phase of the subject in the
car. We discarded this relatively short time interval for all
subjects. We identified blinks, tracking failures of the eye
tracker, and pupil size samples that differed by more than
10% from their preceding value (a threshold chosen
empirically, depending mainly on pupil detection quality). We
eliminated these usually short tracking losses from the
data; each such artifact spans up to five samples, given
the 25 Hz sampling rate of the eye tracker. Additionally,
two samples (40 ms each) before and after each blink
were removed as well, since a partial occlusion of the
pupil by a half-closed eyelid may cause the pupil
detection to report a smaller size than the actual pupil
size. To eliminate physiologically unlikely pupil sizes, we
used a statistical approach and considered as outliers all
pupil sizes that exceeded the average by more than three
standard deviations. Such samples result from a failure of
the pupil detection algorithm (e.g., detecting the iris
instead of the pupil). We filled the gaps from
missing/eliminated data by linear interpolation between the
neighboring valid samples. This step was necessary, as the
following frequency-based processing steps require a
continuous signal without gaps.</p>
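<p>The cleaning steps above can be sketched as follows (a minimal Python illustration, not the authors' implementation; the function name and exact handling of edge cases are our assumptions):</p>

```python
# Sketch of the preprocessing described above: drop zero-size samples
# and samples that jump more than 10% from their predecessor, pad two
# samples around each gap, flag >3 SD outliers, interpolate linearly.
import numpy as np

def clean_pupil_signal(diam, rel_jump=0.10, pad=2):
    d = np.asarray(diam, dtype=float)
    valid = np.ones(d.size, dtype=bool)
    valid[0] = d[0] > 0
    for i in range(1, d.size):
        ref = d[i - 1]
        if d[i] <= 0 or (ref > 0 and abs(d[i] - ref) / ref > rel_jump):
            valid[i] = False
    # pad each invalid stretch by `pad` samples on both sides
    for i in np.flatnonzero(~valid):
        valid[max(0, i - pad):i + pad + 1] = False
    # statistical outlier removal: more than 3 SD above the mean
    mu, sd = d[valid].mean(), d[valid].std()
    valid &= d <= mu + 3 * sd
    # linear interpolation over the eliminated samples
    idx = np.arange(d.size)
    d[~valid] = np.interp(idx[~valid], idx[valid], d[valid])
    return d, valid.mean()  # cleaned signal, fraction of valid data
```

<p>Trials whose returned valid-data fraction falls below 0.75 would then be excluded, as described in the next paragraph.</p>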

        <p>Trials with less than 75% of valid data (with
interpolated points of the previous step counting towards invalid
data) were not included for further analysis. In the next
step, we compensate for a non-stationary trend (i.e., a
gradual slow change in pupil diameter over several
minutes). We identify such a local trend by
reconstruction of the original signal from wavelet coefficients that
correspond to a low frequency band (see Figure 2). It is
necessary to remove such a trend before applying spectral
analysis, as it distorts the spectra of the signal at low
frequencies (
          <xref ref-type="bibr" rid="R40">17</xref>
          ).</p>
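<p>The detrending via partial wavelet reconstruction can be sketched with PyWavelets (our illustration; the wavelet name and decomposition level here are assumptions, not necessarily the authors' settings):</p>

```python
# Wavelet-based detrending: reconstruct only the coarse approximation
# (the slow trend) and subtract it from the raw signal.
import numpy as np
import pywt

def detrend(signal, wavelet="db4", level=6):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # keep only the coarse approximation, zero all detail coefficients
    approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    trend = pywt.waverec(approx_only, wavelet)[: len(signal)]
    return signal - trend, trend
```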
		  
<fig id="fig02" fig-type="figure" position="float">
					<label>Fig. 2.</label>
					<caption>
						<p>The raw pupil area signal (top) and its reconstruction using the wavelet coefficients (middle). We obtain a detrended signal (bottom) by subtracting the wavelet approximation from the original signal. The sampling rate is 25 frames/second.</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-10-04-c-figure-02.png"/>
				</fig>			  
		  
        <p>A manual analysis of the pupil diameter signal after
filtering and smoothing indicated that peaks do indeed
occur at the hazardous situations, but also that a simple
threshold approach is insufficient to detect them reliably
amongst the high noise level. Spurious pupil diameter
peaks need to be distinguished from the peaks
corresponding to hazardous situations. We employ the method
introduced in (
          <xref ref-type="bibr" rid="R41">18</xref>
          ) for this purpose:</p>
		  
        <p>First, we detect zero-crossings of the smoothed first
derivative of the pupil diameter signal. They correspond
to extrema in the original signal. We consider them
candidate peaks if their amplitude exceeds 1.5 standard
deviations. Then, a parabola is fit to the set of points
within a 2.5 second time window around the peak by a
least-squares quadratic fit (Figure 3) using the <italic>full width
at half maximum method</italic> (
          <xref ref-type="bibr" rid="R42">19</xref>
          ). The pupil response to
visual detection is reported to last for 2-2.5 seconds (
          <xref ref-type="bibr" rid="R34">11</xref>
          ),
motivating this choice of window width.</p>
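<p>The candidate-peak step can be sketched as follows (our reading of the text; the derivative smoothing width is an assumption):</p>

```python
# Candidate peaks: zero-crossings of the smoothed first derivative,
# amplitude-gated at 1.5 SD, with a parabola fit in a 2.5 s window.
import numpy as np

FS = 25  # Hz, eye-tracker sampling rate

def candidate_peaks(signal, fs=FS, win_s=2.5, thresh_sd=1.5):
    x = np.asarray(signal, dtype=float)
    # smoothed first derivative (5-sample moving average, our choice)
    d = np.convolve(np.gradient(x), np.ones(5) / 5, mode="same")
    half = int(win_s * fs / 2)
    peaks = []
    for i in range(1, x.size):
        # sign change from positive to non-positive = local maximum
        if d[i - 1] > 0 >= d[i] and x[i] - x.mean() > thresh_sd * x.std():
            lo, hi = max(0, i - half), min(x.size, i + half)
            coeffs = np.polyfit(np.arange(lo, hi), x[lo:hi], 2)  # parabola
            peaks.append((i, coeffs))
    return peaks
```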

<fig id="fig03" fig-type="figure" position="float">
					<label>Fig. 3.</label>
					<caption>
						<p>Fit of a parabola to a candidate pupil dilation peak.</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-10-04-c-figure-03.png"/>
				</fig>	
      </sec>
	  
      <sec id="S2c">
        <title>Wavelet Analysis</title>
		
        <p>For each drive, we identified and labelled all
hazardous events and the corresponding pupil signal. We
automated this process, as the driving simulation provided the
position of the vehicle on the track and we knew the
positions of the pre-programmed hazardous events.</p>

        <p>Several different events resulted in a stress or
emotional response at different levels of intensity during the
driving session. As the illumination within the simulator
environment does not change as rapidly and intensely as
during actual on-road driving, we can expect these events
to have a major impact on pupil dilation. A stress
response results in rapid pupil dilation, but also in a
subsequent gradual return to normal size. This gradual return
is often of oscillatory nature and contains several
(decreasing) waves. The more significant the event, the
longer this return phase (
          <xref ref-type="bibr" rid="R43">20</xref>
          ). In order to discriminate
between possible causes of a pupil dilation, we perform
a scale analysis of the time series: wavelet analysis.</p>

        <p>The wavelet transform decomposes a signal into
wavelets (i.e., small waves with their energy concentrated
in time). These wavelets are scaled and shifted copies of
a main pattern, called the mother wavelet. In a
multiresolution representation, the signal is decomposed into
increasingly finer details based on wavelet and scaling
functions, which correspond to a high pass and a low pass
filter. Precise time information is contained at high
frequencies and frequency information at low frequencies.
These filters are applied successively to the signal, each
followed by downsampling by a factor of 2 (Figure 4). The
maximum level of decomposition depends on the relevant time
scale of events under consideration(
          <xref ref-type="bibr" rid="R44">21</xref>
          ),(
          <xref ref-type="bibr" rid="R45">22</xref>
          ).</p>
		  
<fig id="fig04" fig-type="figure" position="float">
					<label>Fig. 4.</label>
					<caption>
						<p>Construction of the feature vector for one candidate peak. Mean, amplitude and area are calculated from the A4 component, the relative energy from the detail coefficients.</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-10-04-c-figure-04.png"/>
				</fig>			  
		  
        <p>We can separate events at different levels on the
arousal scale by partial reconstruction of the signal in
only one specific frequency sub-band, which corresponds
to the respective arousal level.</p>

        <p>For our purpose, we chose to decompose and
reconstruct the signal accurately at a time scale of 1-2 s. The
pupil can react to stimuli within 200-350 ms and reaches
peak response between 500-1000 ms(
          <xref ref-type="bibr" rid="R34">11</xref>
          ). For the 25 Hz
sampling rate of our eye tracker, this corresponds to the
fourth level decomposition.</p>
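<p>The mapping from decomposition level to frequency band at the 25 Hz sampling rate can be verified with a short calculation (illustration only; the dyadic-band formula is standard, the function name is ours):</p>

```python
# Dyadic frequency bands of a discrete wavelet transform: detail
# coefficients at level j cover roughly fs/2^(j+1) to fs/2^j Hz.
def detail_band(fs, level):
    """Frequency band (Hz) covered by the detail coefficients at `level`."""
    return fs / 2 ** (level + 1), fs / 2 ** level

for lvl in range(1, 5):
    lo, hi = detail_band(25.0, lvl)
    print(f"D{lvl}: {lo:.2f}-{hi:.2f} Hz")
# the fourth-level band covers roughly 0.78-1.56 Hz
```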

        <p>It is important to select a wavelet that matches the
shape and frequency characteristics of the signal we want
to separate. The Daubechies wavelet family is optimal in
the sense that most of the wavelet coefficients are small
or zero, making them well suited for matching smooth
polynomial features in a given signal(
          <xref ref-type="bibr" rid="R46">23</xref>
          ).</p>
      </sec>
	  
      <sec id="S2d">
        <title>Feature extraction</title>
		
        <p>For each of the candidate peaks extracted in the
previous step, we applied a temporal window to extract the
signal within 1.5 seconds before/after the peak. We then
decomposed the signal within this window into sub-band
frequencies by means of a discrete wavelet transform
with Daubechies 4 (db4) wavelets up to level 4 and
extracted the detail and approximation coefficients. From
these coefficients we calculated the relative wavelet
energy, which characterizes the signal&#x2019;s energy
distribution at different frequency bands.</p>
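<p>The relative wavelet energy computation can be sketched as follows (assuming PyWavelets; the coefficient ordering comment follows that library's convention):</p>

```python
# Relative wavelet energy per sub-band for one peak window: the
# fraction of total signal energy carried by each coefficient set.
import numpy as np
import pywt

def relative_energies(window, wavelet="db4", level=4):
    coeffs = pywt.wavedec(window, wavelet, level=level)  # [A4, D4, D3, D2, D1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()  # fractions of total energy, sum to 1
```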
      </sec>
	  
      <sec id="S2e">
        <title>Classification of pupil size peaks</title>
		
        <p>To discriminate peaks that occur as an effect
of noise and ambient illumination changes during the drive
from pupil responses to hazardous events, we used a
support vector machine (SVM) with a radial basis function
(RBF) kernel. The feature vectors used for the training of
the SVM were composed of the following: amplitude,
mean diameter, area of the approximation coefficient A4,
and the relative wavelet energy corresponding to the
detail coefficients D1-D4 (Figure 4). The SVM selects
those criteria and their interactions that help us to
distinguish between different kinds of peaks. Such a machine
learning approach is sensitive to unbalanced data. In our
case, the relatively large number of peaks occurring
during normal driving (which we want to classify as noise)
would result in a relatively high classification accuracy
even if the SVM simply classified every peak as noise,
neglecting the few hazardous events.
Therefore, we balanced the number of feature vectors for each
class by oversampling of the minority class (i.e., the
hazardous events). We trained and tested the SVM using
leave-one-out cross-validation and evaluated the
classification accuracy separately for each subject by using only
training data from the other subjects. This evaluation
procedure is almost unbiased and gives a good indication
of the cross-subject generalization performance (
        <xref ref-type="bibr" rid="R47">24</xref>
		)
while it makes good use of our limited training and test
data. It should however be noted that the selection of
candidate peaks and the construction of the feature vector
involves subject-specific adaptation such as the subject&#x2019;s
average pupil diameter and its distribution.</p>
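<p>The balanced training and leave-one-subject-out evaluation can be sketched with scikit-learn (synthetic features; the oversampling and cross-validation scheme follow the text, while the hyperparameters are library defaults, not the authors' choices):</p>

```python
# RBF-SVM with minority-class oversampling, evaluated with
# leave-one-subject-out cross-validation.
import numpy as np
from sklearn.svm import SVC

def loso_accuracy(X, y, subjects, seed=0):
    X, y, subjects = np.asarray(X), np.asarray(y), np.asarray(subjects)
    rng = np.random.default_rng(seed)
    accs = []
    for s in np.unique(subjects):
        train = subjects != s
        Xtr, ytr = X[train], y[train]
        # oversample the minority class (hazard events) to balance classes
        minority = np.flatnonzero(ytr == 1)
        extra = rng.choice(minority, size=(ytr == 0).sum() - minority.size)
        Xtr = np.vstack([Xtr, Xtr[extra]])
        ytr = np.concatenate([ytr, ytr[extra]])
        clf = SVC(kernel="rbf").fit(Xtr, ytr)
        accs.append(clf.score(X[~train], y[~train]))  # held-out subject
    return float(np.mean(accs))
```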
      </sec>
    </sec>
	
    <sec id="S3">
      <title>Results</title>

      <p>Figure 5 shows the detailed results of the classification
for each subject. A white circle indicates a change in pupil diameter
that the classifier judged relevant; a black circle indicates
that no such change was detected. The surrounding square indicates
whether the driving instructor judged the driving response as adequate
or not. Both markers have to be considered in conjunction. For example,
a black square and black circle indicate a situation that the driver
did not perceive and, consequently, did not react to. A white square
with a white circle would correspond to a hazard that the subject
responded to adequately and that caused a pupil dilation.</p>

<fig id="fig05" fig-type="figure" position="float">
					<label>Fig. 5.</label>
					<caption>
						<p>Presence of a detected pupil diameter change at hazardous situations for all subjects and situations. Each row corresponds to the drive of one subject. The squares along the drive correspond to the hazardous situations and are filled black, if the driving instructor judged an inadequate driving response. The inscribed circle shows whether a pupil response was observed (filled white) or not (filled black). Lines without pupil markers correspond to trials excluded from data analysis due to a bad tracking rate of the eye-tracker. In addition, we provide locations of dangerous situations along the route. In each case, where a subject aborted the experiment, the reason (either technical difficulties or motion sickness) is provided.</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-10-04-c-figure-05.png"/>
				</fig>	

      <p>Figure 6 shows the ROC curves for the classification, 
separately for the patient and the control group. As there were very 
few inadequate driving responses in the control group, we can expect 
the curves to differ even if the visual field defect
has no effect on the pupil diameter.</p>

<fig id="fig06" fig-type="figure" position="float">
					<label>Fig. 6.</label>
					<caption>
						<p>Receiver operating curves of the classification performance for both the visual field defect patients and the control group.</p>
					</caption>
					<graphic id="graph06" xlink:href="jemr-10-04-c-figure-06.png"/>
				</fig>	
	  
      <p>Such a prediction assumes that a hazard to which the
driver reacted was perceived and, vice versa, that a hazard
that was not reacted to adequately was overlooked by the
driver. From previous analyses of vital parameter data we
know that this was not always the case; e.g., some drivers
responded inadequately to a hazard they had perceived.
Therefore, we cannot expect a perfectly reliable
classification result. For our analysis, we decided to predict as
many of the hazardous situations from the pupil data as
possible and allowed for a moderate number of false
positives (so we judge in favor of hazard perception in
case of doubt). The numerical classification results are
provided in Table 1.</p>

<table-wrap id="t01" position="float">
					<label>Table 1.</label>
					<caption>
						<p>Results for the prediction of hazardous event perception from the pupil diameter.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1">
              <bold>Control group</bold>
            </td>
            <td rowspan="1" colspan="1">
              <bold>Patients</bold>
            </td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">Specificity</td>
            <td rowspan="1" colspan="1">0.89</td>
            <td rowspan="1" colspan="1">0.78</td>
          </tr>						
          <tr>
            <td rowspan="1" colspan="1">Sensitivity</td>
            <td rowspan="1" colspan="1">0.92</td>
            <td rowspan="1" colspan="1">0.83</td>
          </tr>						
          <tr>
            <td rowspan="1" colspan="1">Precision</td>
            <td rowspan="1" colspan="1">0.93</td>
            <td rowspan="1" colspan="1">0.87</td>
          </tr>
						</tbody>
					</table>
					</table-wrap>	

      <p>When we analyze those situations that led to a failure of
the driving test, we can now distinguish between a
perceptual failure and a behavioral failure: Subject PH11
fails the first situation without perceiving the hazard. The
same subject also fails the sixth situation, but this time
perceived the hazard, as a pupil diameter change occurs.
This might be due to a general awareness of a dangerous
situation without knowledge of its exact location. Just
as interesting is that we can also derive that PH07 showed
an adequate driving response to the first and seventh
situation, even though the hazardous object was likely not
perceived. Being able to include such events in the
evaluation of driving performance will allow us to better judge
driving safety also for subjects with a more defensive
driving style, who would require extensive testing before
the perceptual deficit becomes obvious in terms of a
driving test failure.</p>

      <p>Table 2 gives some insights into the false positives that
influence the ROC curves and classification results. We
can observe that the classification step performs well at
filtering many candidates down to few events (e.g.,
from 86 to 7 for subject CG66). It returns an average of
8&#xB1;8 false positives, i.e., it classified pupil size peaks as a
stress response to a hazard that were not associated with
one of the predefined hazardous situations. Without the
classifier, an average of 50 false peaks per drive would be
reported. For the task at hand, we aimed at predicting
hazard perception at the predefined hazardous situations.
It is possible that some subjects were very careful at
several other situations along the route that looked like
potential hazards and therefore showed valid additional
stress responses that we are (wrongly) counting as false
positives. In order to decide in favor of hazard perception,
we accepted a relatively high number of false positives
along the complete drive.</p>

<table-wrap id="t02" position="float">
					<label>Table 2.</label>
					<caption>
						<p>Pupil dilation event classification. F = the number of candidate peaks that did not correspond to a hazard situation but were misclassified as such an event (false positives); D = the number of candidate peaks before the classification step. Subject descriptors indicate the patient groups of hemianopia (PH) and glaucoma (PG) as well as their respective control groups (CH/CG)</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">Subj.
            </td>
            <td rowspan="1" colspan="1">
                F
            </td>
            <td rowspan="1" colspan="1">
                D
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
                Subj.
            </td>
            <td rowspan="1" colspan="1">
                F
            </td>
            <td rowspan="1" colspan="1">
                D
            </td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">
                PH07
            </td>
            <td rowspan="1" colspan="1">
                14
            </td>
            <td rowspan="1" colspan="1">
                31
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
                PG69
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">
                PH50
            </td>
            <td rowspan="1" colspan="1">
                4
            </td>
            <td rowspan="1" colspan="1">
                46
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
                PG63
            </td>
            <td rowspan="1" colspan="1">
                36
            </td>
            <td rowspan="1" colspan="1">
                94
            </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">
                PH11
            </td>
            <td rowspan="1" colspan="1">
                4
            </td>
            <td rowspan="1" colspan="1">
                29
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
                PG75
            </td>
            <td rowspan="1" colspan="1">
                5
            </td>
            <td rowspan="1" colspan="1">
                24
            </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">
                PH05
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
                PG61
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">
                PH27
            </td>
            <td rowspan="1" colspan="1">
                2
            </td>
            <td rowspan="1" colspan="1">
                34
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
                PG65
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">
                PH25
            </td>
            <td rowspan="1" colspan="1">
                1
            </td>
            <td rowspan="1" colspan="1">
                6
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
                PG71
            </td>
            <td rowspan="1" colspan="1">
                7
            </td>
            <td rowspan="1" colspan="1">
                42
            </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">
                PH15
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
                PG77
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">
                PH01
            </td>
            <td rowspan="1" colspan="1">
                5
            </td>
            <td rowspan="1" colspan="1">
                44
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
                CG74
            </td>
            <td rowspan="1" colspan="1">
                8
            </td>
            <td rowspan="1" colspan="1">                         
            </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">
                CH06
            </td>
            <td rowspan="1" colspan="1">
                8
            </td>
            <td rowspan="1" colspan="1">
                37
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
                CG66
            </td>
            <td rowspan="1" colspan="1">
                7
            </td>
            <td rowspan="1" colspan="1">
                86
            </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">
                CH12
            </td>
            <td rowspan="1" colspan="1">
                6
            </td>
            <td rowspan="1" colspan="1">
                41
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">
                CG62           
            </td>
            <td rowspan="1" colspan="1">             
                10          
            </td>
            <td rowspan="1" colspan="1">              
                54             
            </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">             
                CH08             
            </td>
            <td rowspan="1" colspan="1">             
            </td>
            <td rowspan="1" colspan="1">             
            </td>
            <td rowspan="1" colspan="1">              
            </td>
            <td rowspan="1" colspan="1">             
            </td>
            <td rowspan="1" colspan="1">             
                CG78           
            </td>
            <td rowspan="1" colspan="1">              
            </td>
            <td rowspan="1" colspan="1">              
            </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">              
                CH28            
            </td>
            <td rowspan="1" colspan="1">             
                17           
            </td>
            <td rowspan="1" colspan="1">              
                75             
            </td>
            <td rowspan="1" colspan="1">             
            </td>
            <td rowspan="1" colspan="1">
            </td>
            <td rowspan="1" colspan="1">              
                CG68            
            </td>
            <td rowspan="1" colspan="1">             
            </td>
            <td rowspan="1" colspan="1">              
            </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">              
                CH20           
            </td>
            <td rowspan="1" colspan="1">             
                7            
            </td>
            <td rowspan="1" colspan="1">              
                62             
            </td>
            <td rowspan="1" colspan="1">              
            </td>
            <td rowspan="1" colspan="1">             
            </td>
            <td rowspan="1" colspan="1">             
                CG80              
            </td>
            <td rowspan="1" colspan="1">             
                4             
            </td>
            <td rowspan="1" colspan="1">              
                55            
            </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">              
                CH02              
            </td>
            <td rowspan="1" colspan="1">             
            </td>
            <td rowspan="1" colspan="1">              
            </td>
            <td rowspan="1" colspan="1">             
            </td>
            <td rowspan="1" colspan="1">              
            </td>
            <td rowspan="1" colspan="1">              
                CG72           
            </td>
            <td rowspan="1" colspan="1">              
                2             
            </td>
            <td rowspan="1" colspan="1">              
                4              
            </td>
          </tr>
						</tbody>
					</table>
					</table-wrap>	
    </sec>
	
    <sec id="S4">
      <title>Discussion</title>
	  
      <p>Hazard perception involves the input of sensory
information and its subsequent cognitive processing. These
processes result in the identification of potentially
dangerous traffic situations. Only the combined process of
seeing and identifying a hazard leads to a stress
response. We found that pupil dilation can be utilized to
disambiguate hazard perception from an adequate driving
reaction in a simulated driving scenario.</p>

      <p>We employed a filtering and classification cascade
that identifies sudden stress responses in the
pupil data. We aimed at correctly detecting as many of
the hazardous situations as possible from the pupil
diameter alone while minimizing false positives. This
allows us to determine whether the subject likely
perceived a hazard. Due to the number of false positives
during the drive, the approach cannot serve as a stand-alone
detection system for hazardous situations, e.g., to trigger
assistance systems. It only indicates the driver's
perception, given that such an event has occurred. We designed the
hazard situations to be easily overlooked by the driver
and to resemble a looming emergency. They are therefore
highly attention-arousing and stress-inducing. For less
challenging scenarios, where the driver can detect hazardous
objects earlier and sufficient reaction time is available, no
stress signals would be expected; eye-tracking measures
would then be sufficient.</p>
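
      <p>The idea behind such a cascade can be sketched in simplified form. The following Python snippet is an illustrative approximation only, not the pipeline used in the study (which combines wavelet-based filtering with a classification step); the moving-average detrending, window size, and threshold factor are hypothetical stand-ins:</p>

```python
import numpy as np

# Illustrative sketch only: detect candidate pupil dilation peaks by
# removing the slow baseline with an edge-padded moving average and
# keeping local maxima that clearly exceed the residual noise level.
# The window size and threshold factor are hypothetical parameters.

def detect_dilation_peaks(pupil_diameter, window=25, n_std=2.0):
    """Return sample indices of candidate pupil dilation peaks."""
    x = np.asarray(pupil_diameter, dtype=float)
    pad = window // 2
    # Moving average removes slow drift (fatigue, light adaptation).
    kernel = np.ones(window) / window
    baseline = np.convolve(np.pad(x, pad, mode="edge"), kernel, mode="valid")
    detrended = x - baseline
    # Keep only local maxima well above the residual fluctuation level.
    threshold = detrended.mean() + n_std * detrended.std()
    return [
        i for i in range(1, len(x) - 1)
        if detrended[i] > threshold
        and detrended[i] >= detrended[i - 1]
        and detrended[i] > detrended[i + 1]
    ]

# Synthetic example: a flat 4 mm baseline with one smooth dilation event.
t = np.arange(200)
signal = 4.0 + 0.6 * np.exp(-((t - 100) ** 2) / (2 * 5.0 ** 2))
print(detect_dilation_peaks(signal))  # → [100]
```

      <p>On real data, the candidate peaks surviving such a filtering step would still require the subsequent classification step to separate hazard-related dilations from false positives.</p>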

      <p>Pupil dilation events were more often absent in those
situations that were relatively difficult (e.g., 1, 2, and 6, where
subjects actually failed the driving test or had only little
reaction time available). This indicates that careful,
prospective driving behavior may have resulted in a less
intense experience of the hazardous situation for some
drivers &#x2013; or that they were simply lucky to have passed
the situation. We further found a large individual
variation in the number of predicted stress peaks
per subject, likely associated with the driver's level of
engagement and emotional arousal during the test
scenario.</p>

      <p>We showed that pupil variation events co-occur with
the detection of, recognition of, and reaction to potentially
dangerous events while driving. They indicate the moment
at which a potentially dangerous event becomes relevant
to awareness. Furthermore, the pupil dynamics can
resolve the ambiguity of perception and the unexpected
uncertainty that play an important role in detecting and
recognizing unexpected dangerous events (
        <xref ref-type="bibr" rid="R48">25</xref>
		), (
        <xref ref-type="bibr" rid="R49">26</xref>
		).</p>

      <p>As brightness within a simulated world (road surface,
sky, vegetation, etc.) varies only about &#xB1;5% from the
average brightness (
        <xref ref-type="bibr" rid="R50">27</xref>
		), we cannot currently conclude
whether and to what extent these findings may hold
for on-road driving. Yet, the indicator may be useful for
studies that require a precise distinction between hazard
perception and the behavioral driving response without
requiring unnatural behavior such as pressing a button
upon detection. The approach may also be used to assess
the design of a simulator track with respect to whether a timely
detection of planned hazard scenarios is possible.</p>
    </sec>
	
    <sec id="S5" sec-type="COI-statement">
      <title>Ethics and Conflict of Interest</title>

        <p>The author(s) declare(s) that the contents of the article
are in agreement with the ethics described in
<ext-link ext-link-type="uri" xlink:href="http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html" xlink:show="new">http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html</ext-link> 
and that there is no conflict of interest regarding the
publication of this paper.</p>
      </sec>
	  
    <sec id="S6">
      <title>Acknowledgements</title>	  

        <p>The authors would like to thank Daimler AG for the
possibility to use the moving-base driving simulator for
this study.</p>

        <p>We would further like to thank Pfizer and MSD Sharp
&#x26; Dohme GmbH for supporting and enabling this study.
The authors declare that there is no conflict of interest
regarding the publication of this paper.</p>

        <p>Work of the authors is supported by the Institutional
Strategy of the University of T&#xFC;bingen (Deutsche
Forschungsgemeinschaft, ZUK 63).</p>

        <p>We acknowledge support by Deutsche
Forschungsgemeinschaft and Open Access Publishing Fund
of University of T&#xFC;bingen.</p>
      </sec>
  </body>
  <back>
<ref-list>
<ref id="R40"><label>17</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Andreas</surname>, <given-names>E. L.</given-names></string-name>, &amp; <string-name><surname>Trevino</surname>, <given-names>G.</given-names></string-name></person-group> (<year>1997</year>). <article-title>Using wavelets to detect trends.</article-title> <source>Journal of Atmospheric and Oceanic Technology</source>, <volume>14</volume>(<issue>3</issue>), <fpage>554</fpage>–<lpage>564</lpage>. <pub-id pub-id-type="doi">10.1175/1520-0426(1997)014&lt;0554:UWTDT&gt;2.0.CO;2</pub-id><issn>0739-0572</issn></mixed-citation></ref>
<ref id="R43"><label>20</label><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Andreassi</surname>, <given-names>J. L.</given-names></string-name></person-group> (<year>2000</year>). <article-title>Pupillary response and behavior.</article-title> Psychophysiology: Human Behavior &amp; Physiological Response, 218-233.</mixed-citation></ref>
<ref id="R35"><label>12</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Bowers</surname>, <given-names>A. R.</given-names></string-name>, <string-name><surname>Mandel</surname>, <given-names>A. J.</given-names></string-name>, <string-name><surname>Goldstein</surname>, <given-names>R. B.</given-names></string-name>, &amp; <string-name><surname>Peli</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Driving with hemianopia, I: Detection performance in a driving simulator.</article-title> <source>Investigative Ophthalmology &amp; Visual Science</source>, <volume>50</volume>(<issue>11</issue>), <fpage>5137</fpage>–<lpage>5147</lpage>. <pub-id pub-id-type="doi">10.1167/iovs.09-3799</pub-id><pub-id pub-id-type="pmid">19608541</pub-id><issn>0146-0404</issn></mixed-citation></ref>
<ref id="R46"><label>23</label><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Daubechies</surname>, <given-names>I.</given-names></string-name></person-group> (<year>1992</year>). <source>Ten Lectures on Wavelets</source>. <publisher-loc>Philadelphia</publisher-loc>: <publisher-name>SIAM Press</publisher-name>. <pub-id pub-id-type="doi">10.1137/1.9781611970104</pub-id></mixed-citation></ref>
<ref id="R48"><label>25</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Einhäuser</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Stout</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Koch</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Carter</surname>, <given-names>O.</given-names></string-name></person-group> (<year>2008</year>). <article-title>Pupil dilation reflects perceptual selection and predicts subsequent stability in perceptual rivalry.</article-title> <source>Proceedings of the National Academy of Sciences of the United States of America</source>, <volume>105</volume>(<issue>5</issue>), <fpage>1704</fpage>–<lpage>1709</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0707727105</pub-id><pub-id pub-id-type="pmid">18250340</pub-id><issn>0027-8424</issn></mixed-citation></ref>
<ref id="R47"><label>24</label><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Elisseeff</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Pontil</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2003</year>). <article-title>Leave-one-out error and stability of learning algorithms with applications.</article-title> Advances in learning theory: Methods, models and applications.</mixed-citation></ref>
<ref id="R28"><label>5</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Granholm</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name><surname>Steinhauer</surname>, <given-names>S. R.</given-names></string-name></person-group> (<year>2004</year>). <article-title>Pupillometric measures of cognitive and emotional processes.</article-title> <source>International Journal of Psychophysiology</source>, <volume>52</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2003.12.001</pub-id><pub-id pub-id-type="pmid">15003368</pub-id><issn>0167-8760</issn></mixed-citation></ref>
<ref id="R41"><label>18</label><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>O&#x2019;Haver</surname>, <given-names>T. C.</given-names></string-name></person-group> (<year>2008</year>, 5). A Pragmatic Introduction to Signal Processing.</mixed-citation></ref>
<ref id="R27"><label>4</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Hoeks</surname>, <given-names>B.</given-names></string-name>, &amp; <string-name><surname>Levelt</surname>, <given-names>W.</given-names></string-name></person-group> (<year>1993</year>). <article-title>Pupillary dilation as a measure of attention: A quantitative system analysis.</article-title> <source>Behavior Research Methods, Instruments, &amp; Computers</source>, <volume>25</volume>(<issue>1</issue>), <fpage>16</fpage>–<lpage>26</lpage>. <pub-id pub-id-type="doi">10.3758/BF03204445</pub-id><issn>0743-3808</issn></mixed-citation></ref>
<ref id="R44"><label>21</label><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kaiser</surname>, <given-names>G.</given-names></string-name></person-group> (<year>2010</year>). <source>A Friendly Guide to Wavelets</source>. <publisher-name>Springer Science &amp; Business Media</publisher-name>.</mixed-citation></ref>
<ref id="R26"><label>3</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kloosterman</surname>, <given-names>N. A.</given-names></string-name>, <string-name><surname>Meindertsma</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>van Loon</surname>, <given-names>A. M.</given-names></string-name>, <string-name><surname>Lamme</surname>, <given-names>V. A.</given-names></string-name>, <string-name><surname>Bonneh</surname>, <given-names>Y. S.</given-names></string-name>, &amp; <string-name><surname>Donner</surname>, <given-names>T. H.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Pupil size tracks perceptual content and surprise.</article-title> <source>The European Journal of Neuroscience</source>, <volume>41</volume>(<issue>8</issue>), <fpage>1068</fpage>–<lpage>1078</lpage>. <pub-id pub-id-type="doi">10.1111/ejn.12859</pub-id><pub-id pub-id-type="pmid">25754528</pub-id><issn>0953-816X</issn></mixed-citation></ref>
<ref id="R33"><label>10</label><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Kübler</surname>, <given-names>T. C.</given-names></string-name>, <string-name><surname>Kasneci</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Rosenstiel</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Schiefer</surname>, <given-names>U.</given-names></string-name>, <string-name><surname>Nagel</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Papageorgiou</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Stress indicators and exploratory gaze for the analysis of hazard perception in patients with visual field loss.</article-title> Transportation Research Part F: Traffic Psychology.</mixed-citation></ref>
<ref id="R45"><label>22</label><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Mallat</surname>, <given-names>S.</given-names></string-name></person-group> (<year>1998</year>). <source>A Wavelet Tour of Signal Processing</source>. <publisher-loc>San Diego, CA</publisher-loc>: <publisher-name>Academic Press</publisher-name>.</mixed-citation></ref>
<ref id="R30"><label>7</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Marshall</surname>, <given-names>S. P.</given-names></string-name></person-group> (<year>2007</year>). <article-title>Identifying cognitive state from eye metrics.</article-title> <source>Aviation, Space, and Environmental Medicine</source>, <volume>78</volume>(<issue>5</issue>, <supplement>Suppl</supplement>), <fpage>B165</fpage>–<lpage>B175</lpage>.<pub-id pub-id-type="pmid">17547317</pub-id><issn>0095-6562</issn></mixed-citation></ref>
<ref id="R24"><label>1</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Mathôt</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>van der Linden</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Grainger</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Vitu</surname>, <given-names>F.</given-names></string-name></person-group> (<year>2013</year>). <article-title>The pupillary light response reveals the focus of covert visual attention.</article-title> <source>PLoS One</source>, <volume>8</volume>(<issue>10</issue>), <fpage>e78168</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0078168</pub-id><pub-id pub-id-type="pmid">24205144</pub-id><issn>1932-6203</issn></mixed-citation></ref>
<ref id="R49"><label>26</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Nassar</surname>, <given-names>M. R.</given-names></string-name>, <string-name><surname>Rumsey</surname>, <given-names>K. M.</given-names></string-name>, <string-name><surname>Wilson</surname>, <given-names>R. C.</given-names></string-name>, <string-name><surname>Parikh</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Heasly</surname>, <given-names>B.</given-names></string-name>, &amp; <string-name><surname>Gold</surname>, <given-names>J. I.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Rational regulation of learning dynamics by pupil-linked arousal systems.</article-title> <source>Nature Neuroscience</source>, <volume>15</volume>(<issue>7</issue>), <fpage>1040</fpage>–<lpage>1046</lpage>. <pub-id pub-id-type="doi">10.1038/nn.3130</pub-id><pub-id pub-id-type="pmid">22660479</pub-id><issn>1097-6256</issn></mixed-citation></ref>
<ref id="R42"><label>19</label><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>O’Haver</surname>, <given-names>T. C.</given-names></string-name></person-group> (Producer). (<year>2017</year>, 09). Retrieved from <ext-link ext-link-type="uri" xlink:href="https://terpconnect.umd.edu/toh/spectrum/findpeaksG.m">https://terpconnect.umd.edu/toh/spectrum/findpeaksG.m</ext-link></mixed-citation></ref>
<ref id="R50"><label>27</label><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Palinko</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>Kun</surname>, <given-names>A. L.</given-names></string-name>, <string-name><surname>Shyrokov</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Heeman</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Estimating cognitive load using remote eye tracking in a driving simulator.</article-title> In Proceedings of the 2010 Symposium on Eye Tracking Research &amp; Applications, 141-144. <pub-id pub-id-type="doi">10.1145/1743666.1743701</pub-id></mixed-citation></ref>
<ref id="R38"><label>15</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Pedrotti</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Mirzaei</surname>, <given-names>M. A.</given-names></string-name>, <string-name><surname>Tedesco</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Chardonnet</surname>, <given-names>J. R.</given-names></string-name>, <string-name><surname>Mérienne</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Benedetto</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Baccino</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Automatic stress classification with pupil diameter analysis.</article-title> <source>International Journal of Human-Computer Interaction</source>, <volume>30</volume>(<issue>3</issue>), <fpage>220</fpage>–<lpage>236</lpage>. <pub-id pub-id-type="doi">10.1080/10447318.2013.848320</pub-id><issn>1044-7318</issn></mixed-citation></ref>
<ref id="R36"><label>13</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Recarte</surname>, <given-names>M. A.</given-names></string-name>, &amp; <string-name><surname>Nunes</surname>, <given-names>L. M.</given-names></string-name></person-group> (<year>2000</year>). <article-title>Effects of verbal and spatial-imagery tasks on eye fixations while driving.</article-title> <source>Journal of Experimental Psychology. Applied</source>, <volume>6</volume>(<issue>1</issue>), <fpage>31</fpage>–<lpage>43</lpage>. <pub-id pub-id-type="doi">10.1037/1076-898X.6.1.31</pub-id><pub-id pub-id-type="pmid">10937310</pub-id><issn>1076-898X</issn></mixed-citation></ref>
<ref id="R25"><label>2</label><mixed-citation publication-type="thesis" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Schwalm</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Pupillometry as a method for measuring mental workload within an automotive context</article-title> (Dissertation). Universität des Saarlandes.</mixed-citation></ref>
<ref id="R29"><label>6</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Tanaka</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Kuchiiwa</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Izumi</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2005</year>). <article-title>Parasympathetic mediated pupillary dilation elicited by lingual nerve stimulation in cats.</article-title> <source>Investigative Ophthalmology &amp; Visual Science</source>, <volume>46</volume>(<issue>11</issue>), <fpage>4267</fpage>–<lpage>4274</lpage>. <pub-id pub-id-type="doi">10.1167/iovs.05-0088</pub-id><pub-id pub-id-type="pmid">16249507</pub-id><issn>0146-0404</issn></mixed-citation></ref>
<ref id="R39"><label>16</label><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Zeeb</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2010</year>). Daimler's New Full-Scale, High-Dynamic Driving Simulator - A Technical Overview. In Proceedings of the <source>Driving Simulation Conference</source>.</mixed-citation></ref>
<ref id="R37"><label>14</label><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Zhai</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Barreto</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2006</year>). <article-title>Stress detection in computer users based on digital signal processing of noninvasive physiological variables.</article-title> In <source>Engineering in Medicine and Biology Society 28th Annual International Conference of the IEEE EMBS</source>. <pub-id pub-id-type="doi">10.1109/IEMBS.2006.259421</pub-id></mixed-citation></ref>
<ref id="R32"><label>9</label><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Zhang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Owechko</surname>, <given-names>Y.</given-names></string-name>, &amp; <string-name><surname>Zhang</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2004</year>). <article-title>Driver cognitive workload estimation: a data-driven perspective.</article-title> In <source>Proceedings of the 7th International IEEE Conference on Intelligent Transportation Systems</source>, <fpage>642</fpage>–<lpage>647</lpage>.</mixed-citation></ref>
<ref id="R34"><label>11</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Privitera</surname>, <given-names>C. M.</given-names></string-name>, <string-name><surname>Renninger</surname>, <given-names>L. W.</given-names></string-name>, <string-name><surname>Carney</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Klein</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Aguilar</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Pupil dilation during visual target detection.</article-title> <comment>[Journal of Eye Movement Research.]</comment>. <source>Journal of Vision (Charlottesville, Va.)</source>, <volume>10</volume>(<issue>10</issue>), <fpage>3</fpage>. <pub-id pub-id-type="doi">10.1167/10.10.3</pub-id><pub-id pub-id-type="pmid">20884468</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="R31"><label>8</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Recarte</surname>, <given-names>M. A.</given-names></string-name>, &amp; <string-name><surname>Nunes</surname>, <given-names>L. M.</given-names></string-name></person-group> (<year>2003</year>). <article-title>Mental workload while driving: Effects on visual search, discrimination, and decision making.</article-title> <source>Journal of Experimental Psychology</source>, <volume>9</volume>(<issue>2</issue>), <fpage>119</fpage>–<lpage>137</lpage>. <pub-id pub-id-type="doi">10.1037/1076-898X.9.2.119</pub-id><pub-id pub-id-type="pmid">12877271</pub-id><issn>0022-1015</issn></mixed-citation></ref>

</ref-list>
  
  
  </back>    
</article>


