<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.11.2.13</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Pupillary dilation response reflects
surprising moments in music.</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Liao</surname>
						<given-names>Hsin-I</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Yoneya</surname>
						<given-names>Makoto</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>	
 				<contrib contrib-type="author">
					<name>
						<surname>Kashino</surname>
						<given-names>Makio</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Furukawa</surname>
						<given-names>Shigeto</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>               			
        <aff id="aff1">
		<institution>NTT Communication Science Laboratories, NTT Corporation</institution>, <country>Japan</country>
        </aff>
		</contrib-group>   

		
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>14</day>  
		<month>12</month>
        <year>2018</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2018</year>
	</pub-date>
      <volume>11</volume>
      <issue>2</issue>
	 <elocation-id>10.16910/jemr.11.2.13</elocation-id> 
	<permissions> 
	<copyright-year>2018</copyright-year>
	<copyright-holder>Liao, H.-I., Yoneya, M., Kashino, M. &#x26; Furukawa, S.</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>There are indications that the pupillary dilation response (PDR) reflects surprising moments in an auditory sequence, such as the appearance of a deviant noise against repetitively presented pure tones (<xref ref-type="bibr" rid="b4">4</xref>), and sounds that human participants subjectively evaluate as salient and loud (<xref ref-type="bibr" rid="b12">12</xref>). In the current study, we further examined whether the PDR also reflects auditory surprise in a complex yet structured auditory stimulus, i.e., music, when the surprise is defined subjectively. Participants listened to 15 excerpts of music while their pupillary responses were recorded. In the surprise-rating session, participants rated how surprising each moment in the excerpt was, i.e., rich in variation versus monotonous, while they listened to it. In the passive-listening session, they listened to the same 15 excerpts again but were not involved in any task. The pupil diameter data obtained from both sessions were time-aligned to the rating data obtained from the surprise-rating session. Results showed that in both sessions, mean pupil diameter was larger at moments rated more surprising than at moments rated unsurprising. The result suggests that the PDR reflects surprise in music automatically. </p>
      </abstract>
      <kwd-group>
        <kwd>Pupil</kwd>
        <kwd>music</kwd>
        <kwd>surprise</kwd>
        <kwd>salience</kwd>
        <kwd>decision making</kwd>
        <kwd>familiarity</kwd>
        <kwd>eye tracking</kwd>
        <kwd>attention</kwd>
        <kwd>art perception</kwd>
        <kwd>individual differences</kwd>                                                                
      </kwd-group>
    </article-meta>
  </front>	
  <body>

    <sec id="S1">
      <title>Introduction</title>

<p>“The eyes are the windows to the soul”—by looking into a person’s
eyes, we may understand how she or he thinks and feels. Scientists have
backed up this proverb by showing that the pupil reflects various
cognitive functions such as cognitive processing load (<xref ref-type="bibr" rid="b1 b2 b3">1, 2, 3</xref>), emotion
(<xref ref-type="bibr" rid="b5">5</xref>), attentional modulation (<xref ref-type="bibr" rid="b6 b7">6, 7</xref>), memory (<xref ref-type="bibr" rid="b8 b9">8, 9</xref>), decision making (<xref ref-type="bibr" rid="b10 b11">10, 11</xref>), 
high-level visual content processing (<xref ref-type="bibr" rid="b13">13</xref>), and mental
imagery (<xref ref-type="bibr" rid="b14">14</xref>). The underlying mechanism is considered to be related to
the locus coeruleus (LC)–norepinephrine (NE) function, which modulates
adaptive gain and optimizes performance (<xref ref-type="bibr" rid="b15">15</xref>). Since changes in pupil
size are tightly coupled with the activity of the LC neurons, we may
infer the LC-NE function by observing pupillary responses.</p>

<p>The auditory system is sensitive to stimulus regularity and detects
any change rapidly to optimize environmental monitoring. It has been
demonstrated that pupillary responses reflect salient and surprising
auditory events (e.g., <xref ref-type="bibr" rid="b4 b12 b16 b17 b18 b19 b20">4, 12, 16, 17, 18, 19, 20</xref>). For example, Liao, Yoneya, et
al. (<xref ref-type="bibr" rid="b4">4</xref>) showed that when participants listened to an auditory sequence
consisting of repetitive tones with a deviant noise oddball presented
occasionally, pupil size increased when the oddball appeared. This
pupillary dilation response (PDR) was observed regardless of whether the
participant paid attention to the auditory sequence or not, suggesting
that the PDR is an automatic physiological response for auditory
surprise detection.</p>

<p>The PDR reflects a surprising moment not only when the surprise is
defined objectively as a deviant oddball event against the background,
but also when it is defined by human participants’ subjective
evaluations. Liao, Kidani, et al. (<xref ref-type="bibr" rid="b12">12</xref>) presented ten discrete
environmental sounds to participants while their pupillary responses
were recorded. Each sound was presented for 500 ms with a 10-s
inter-stimulus interval. After the pupillary response recording, they
were asked to rate several psychoacoustic aspects of the sounds,
including salience, loudness, preference, beauty, hardness,
vigorousness, and annoyance. Results showed that the pupil dilated when
the sounds were presented. Most importantly, the magnitude of the PDR
was positively correlated with the subjective salience of the sound, as
well as its loudness, but not with other aspects of the psychoacoustic
judgments.</p>

<p>The correspondence between auditory surprise and the PDR shown in our
previous studies was found when the salient auditory event was briefly
presented, e.g., 50 ms for the noise oddball (<xref ref-type="bibr" rid="b4">4</xref>) and 500 ms for the
environmental sound (<xref ref-type="bibr" rid="b12">12</xref>). In real-world situations, on the other hand, a
salient auditory event may be long-lasting and continuous. Therefore, it is
important to examine whether the PDR reflects auditory salience in
complex auditory scenes. In the current study, we examined whether the
PDR reflects subjective auditory surprise in music and how loudness may
contribute to the effect. Music is a long-lasting, continuous, complex,
and yet structured auditory stimulus. A composition usually consists of
both repetitions of certain structures and variations on them. These
characteristics of music enable us to trace subjective surprise
evaluations as an excerpt changes. We examined whether the pupil dilates
when an excerpt is evaluated as surprising.</p>
    </sec>
	
    <sec id="S2">
      <title>Methods</title>

<p>Participants listened to an excerpt of music for 90 s and
concurrently rated how surprising it was, i.e., rich in variation versus
monotonous, by sliding a rating bar continuously. Meanwhile, we had them
fixate a central point on the monitor to record their pupillary
responses. Each participant listened to 15 excerpts of classical, jazz,
and rock music. After the concurrent surprise-rating session,
participants listened to the same excerpts again while their pupillary
responses were recorded, but they were not involved in any task.</p>
	
    <sec id="S2a">
      <title>Participants</title>

<p>Twenty-two adults (aged 22-43; median of 35) participated in the
study. All had normal or corrected-to-normal vision and reported normal
hearing. All participants were naïve about the purpose of the study and
received payment for their participation. All the procedures were
approved by the NTT Communication Science Laboratories Ethical
Committee, and all participants gave informed written consent before the
experiment.</p>
    </sec>
	
    <sec id="S2b">
      <title>Materials</title>

<p>Stimuli were generated and controlled by a personal computer (Dell
OptiPlex 980) and presented through headphones (Sennheiser HD 595) and
on an 18.1-inch monitor (EIZO FlexScan L685Ex). Auditory stimuli were 15
excerpts comprising the first 90 s of selected pieces (Table 1). They were
selected because each contained both several repetitions and variations
of them. The sound pressure levels were fixed across the
participants at a comfortable listening level. The visual stimulus was a
dark gray fixation point (0.25 × 0.25°, 0.33 cd/m<sup>2</sup>) presented
against light gray background (27.0 cd/m<sup>2</sup>).</p>

<table-wrap id="t01" position="float">
					<label>Table 1.</label>
					<caption>
						<p>Excerpts used in the current study. Artists are indicated
in italics.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
    <thead>
      <tr>
        <th></th>
        <th>Classical</th>
        <th>Jazz</th>
        <th>Rock</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td>1</td>
        <td>Beethoven: Symphony #5 In C Minor, Op. 67, &#x22;Fate&#x22;:
        Allegro Con Brio
        <italic>Carlos Kleiber; Vienna Philharmonic
        Orchestra</italic></td>
        <td>Autumn Leaves
        <italic>Cannonball Adderley</italic></td>
        <td>(I Can't Get No) Satisfaction
        <italic>The Rolling Stones</italic></td>
      </tr>
      <tr>
        <td>2</td>
        <td>Bach: Chorale &#x22;Jesus bleibet meine Freude&#x22;
        <italic>Orchestre de Chambre de Jean-Francois
        Paillard</italic></td>
        <td>Somethin' Else
        <italic>Cannonball Adderley</italic></td>
        <td>London Calling
        <italic>The Clash</italic></td>
      </tr>
      <tr>
        <td>3</td>
        <td>Mozart: Serenade No.13 in G major, K.525, &#x22;Eine kleine
        Nachtmusik&#x22; Allegro
        <italic>I Musici</italic></td>
        <td>Blue Train
        <italic>John Coltrane</italic></td>
        <td>Smells Like Teen Spirit
        <italic>Nirvana</italic></td>
      </tr>
      <tr>
        <td>4</td>
        <td>Chopin: Nocturnes: No. 2 In E Flat Op. 9 No. 2
        <italic>Yundi Li</italic></td>
        <td>Moanin'
        <italic>Art Blakey and the Jazz Messengers</italic></td>
        <td>Comfortably Numb
        <italic>Pink Floyd</italic></td>
      </tr>
      <tr>
        <td>5</td>
        <td>Stravinsky: Pétrouchka. Scenes De Ballet, Russian Dance
        <italic>Phillip Moll: Piano / Berlin Philharmonic Orchestra:
        Cond: Bernard Haitink</italic></td>
        <td>Waltz for Debby
        <italic>Bill Evans</italic></td>
        <td>Highway Star
        <italic>Deep Purple</italic></td>
      </tr>
    </tbody>
  </table>
</table-wrap>

<p>Behavioral responses were collected from a transducer (TSD115)
connected to a Biopac MP system (HLT100C module, BIOPAC Systems, Inc.).
The transducer had a slider on the panel to allow participants to report
subjective assessments from 0 to 10 continuously. The sampling rate of
the transducer was 1000 Hz. Pupillary responses were recorded binocularly
by an infrared eye-tracker camera (Eyelink 1000 Desktop Mount, SR Research Ltd.)
with a sampling rate of 1000 Hz.</p>
    </sec>
	
    <sec id="S2c">
      <title>Design</title>

<p>The 15 musical excerpts were presented twice in different sessions:
first in a surprise-rating session and then again in a passive-listening
session. The order of the excerpts in each session for each participant
was randomly assigned. The inter-stimulus interval (ISI) was 5 s. The
total duration of each session was around 25 min.</p>
    </sec>
	
    <sec id="S2d">
      <title>Procedure</title>

<p>All participants were given written and oral explanations about the
nature of the experiment and the pupillary response recording.
Participants sat in front of the monitor at a viewing distance of 51 cm
in a dimly lit soundproof chamber, with their chin on a chinrest. Before
each session, a five-point calibration procedure was performed, after
which the participants were instructed to fixate the central point
throughout the experiment.</p>

<p>In the surprise-rating session, participants were asked to
concurrently rate how they felt about changes (in any sense) compared
with the portions within the excerpt they had heard so far. For example,
if they felt any aspect in the music, including melody, tempo, harmony,
or texture (e.g., more instruments playing), became richer in variation,
they moved the slider to the right to register higher scores. If they
felt the change became monotonous, they moved it to the left to register
lower scores. The slider was reset in the middle (i.e., scored as 5) at
the beginning of each excerpt.</p>

<p>In the passive-listening session, participants listened to the same
musical excerpts again without any task involvement. The break between
the two sessions was longer than 30 min. The order of the two sessions
was fixed to avoid the influence of expectation on the surprise rating
due to the repetition.</p>

<p>After the two sessions, participants answered a questionnaire to rate
from 1 (never heard the piece) to 7 (often heard the piece) how familiar
they were with each excerpt and to write down the name of the piece
and/or the artist/composer if they knew it. They were allowed to replay
the excerpts at their own pace when answering the questionnaire.</p>
    </sec>
    </sec>

    <sec id="S3">
      <title>Results</title>
    <sec id="S3a">
      <title>Familiarity with the excerpts (questionnaire)</title>

<p>The mean familiarity scores for the classical, jazz, and rock music
were 4.1, 2.4, and 2.6, respectively (scores for individual excerpts are
listed in Table 2). The mean scores for each participant were subjected
to a repeated-measures ANOVA with the music genres (classical, jazz,
rock) as within-subject factors. Results showed that participants were more familiar with the classical
music we selected than with the other genres [<italic>F</italic>(2,42) =
19.07, <italic>p</italic> &#x3C; .001, <italic>η<sup>2</sup></italic> =
.48]. The results of the open questions are shown in Table 2 (second and
third columns). Participants tended to give more answers and correct
ones to questions about the classical music than to those about the
other genres, which is consistent with the results of the subjective
feeling of familiarity.</p>

<table-wrap id="t02" position="float">
					<label>Table 2.</label>
					<caption>
						<p>Results of familiarity rating, questionnaire, and on-line
surprise rating. The first column shows the means of the familiarity
rating score across participants, with the standard deviation in
parentheses. The second and third columns show the number of
participants who gave any answer and a correct one to questions about
the excerpt or the artist/composer, respectively. The fourth and fifth
columns show the mean of the average and standard deviations of the
surprise rating over time, respectively, across participants. Numbers in
parentheses are standard deviations across participants. The last column
shows Kendall’s coefficient of concordance (W) of the on-line
surprise rating.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">

    <thead>
      <tr>
        <th></th>
        <th></th>
        <th>Familiarity rating</th>
        <th>Total answers</th>
        <th>Correct
        answers</th>
        <th>Average surprise rating over time</th>
        <th>Variations in surprise rating over time</th>
        <th>Kendall’s W</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td><italic>Classical</italic></td>
        <td>1</td>
        <td>5.0 (1.7)</td>
        <td>17</td>
        <td>16</td>
        <td>6.4 (1.1)</td>
        <td>1.4 (0.6)</td>
        <td>0.39</td>
      </tr>
      <tr>
        <td></td>
        <td>2</td>
        <td>4.4 (1.8)</td>
        <td>8</td>
        <td>4</td>
        <td>4.9 (1.3)</td>
        <td>1.1 (0.8)</td>
        <td>0.56</td>
      </tr>
      <tr>
        <td></td>
        <td>3</td>
        <td>5.0 (1.3)</td>
        <td>7</td>
        <td>5</td>
        <td>6.2 (1.0)</td>
        <td>1.4 (0.6)</td>
        <td>0.30</td>
      </tr>
      <tr>
        <td></td>
        <td>4</td>
        <td>4.6 (1.7)</td>
        <td>10</td>
        <td>9</td>
        <td>5.0 (1.8)</td>
        <td>1.0 (0.5)</td>
        <td>0.71</td>
      </tr>
      <tr>
        <td></td>
        <td>5</td>
        <td>1.7 (1.1)</td>
        <td>1</td>
        <td>0</td>
        <td>5.2 (1.5)</td>
        <td>1.3 (0.8)</td>
        <td>0.50</td>
      </tr>
      <tr>
        <td><italic>Jazz</italic></td>
        <td>1</td>
        <td>1.8 (1.5)</td>
        <td>1</td>
        <td>1</td>
        <td>4.7 (1.3)</td>
        <td>1.1 (0.8)</td>
        <td>0.52</td>
      </tr>
      <tr>
        <td></td>
        <td>2</td>
        <td>1.6 (1.4)</td>
        <td>0</td>
        <td>0</td>
        <td>5.1 (2.4)</td>
        <td>1.2 (1.0)</td>
        <td>0.66</td>
      </tr>
      <tr>
        <td></td>
        <td>3</td>
        <td>2.5 (1.8)</td>
        <td>0</td>
        <td>0</td>
        <td>5.2 (1.7)</td>
        <td>1.1 (0.5)</td>
        <td>0.63</td>
      </tr>
      <tr>
        <td></td>
        <td>4</td>
        <td>3.5 (1.7)</td>
        <td>0</td>
        <td>0</td>
        <td>5.7 (1.3)</td>
        <td>1.2 (0.7)</td>
        <td>0.47</td>
      </tr>
      <tr>
        <td></td>
        <td>5</td>
        <td>2.4 (1.6)</td>
        <td>0</td>
        <td>0</td>
        <td>5.5 (1.8)</td>
        <td>1.0 (0.4)</td>
        <td>0.73</td>
      </tr>
      <tr>
        <td><italic>Rock</italic></td>
        <td>1</td>
        <td>3.4 (2.0)</td>
        <td>3</td>
        <td>3</td>
        <td>6.2 (1.4)</td>
        <td>1.0 (0.6)</td>
        <td>0.64</td>
      </tr>
      <tr>
        <td></td>
        <td>2</td>
        <td>1.5 (1.2)</td>
        <td>1</td>
        <td>1</td>
        <td>6.1 (2.1)</td>
        <td>1.3 (0.9)</td>
        <td>0.62</td>
      </tr>
      <tr>
        <td></td>
        <td>3</td>
        <td>3.3 (2.3)</td>
        <td>4</td>
        <td>4</td>
        <td>6.4 (1.5)</td>
        <td>1.2 (0.6)</td>
        <td>0.52</td>
      </tr>
      <tr>
        <td></td>
        <td>4</td>
        <td>1.6 (1.2)</td>
        <td>1</td>
        <td>0</td>
        <td>4.1 (1.5)</td>
        <td>1.1 (0.6)</td>
        <td>0.53</td>
      </tr>
      <tr>
        <td></td>
        <td>5</td>
        <td>3.0 (1.9)</td>
        <td>1</td>
        <td>1</td>
        <td>6.7 (1.8)</td>
        <td>1.1 (0.4)</td>
        <td>0.70</td>
      </tr>
    </tbody>
  </table>
</table-wrap>
    </sec>

    <sec id="S3b">
      <title>On-line subjective surprise rating</title>

<p>Figure 1A shows examples of the surprise rating over time. The
average (the fourth column) and variation (the fifth column) of the
surprise rating over time are listed in Table 2.</p>

<p>To examine whether the surprise rating varied among the music genres
or excerpts (e.g., in terms of its familiarity), we conducted two
different analyses. First, the means of the average, as well as the
standard deviations, of the surprise rating score were subjected to a
repeated-measures ANOVA with the music genres (classical, jazz, rock) as
within-subject factors. Results showed that neither the average surprise
rating [<italic>F</italic>(2,42) = 2.80, <italic>p</italic> &#x3E; .07,
<italic>η<sup>2</sup></italic> = .12] nor the variations in the surprise
rating over time [<italic>F</italic>(2,42) = 1.13, <italic>p</italic>
&#x3E; .3, <italic>η<sup>2</sup></italic> = .05] differed among music
genres. Second, we calculated the correlation between the familiarity
rating and average surprise rating and the correlation between the
familiarity rating and the variations in the surprise rating over time.
Results showed a positive correlation between familiarity and the
average surprise rating over time (<italic>r</italic> = .24,
<italic>p</italic> &#x3C; .001) but not between familiarity and variations
in the surprise rating over time (<italic>r</italic> = -0.07,
<italic>p</italic> &#x3E; .2).</p>

<p>To examine the consensus among the participants on the surprise
rating, we calculated Kendall’s coefficient of concordance (W). The
rating data were resampled with a 10-Hz sampling rate for the analysis.
The results are shown in Table 2 (sixth column). The consensuses among
the participants were moderate but significant, and they varied among
musical excerpts (median of 0.56, min of 0.30, max of 0.73; all
<italic>p</italic>s &#x3C; .001).</p>
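<p>The concordance statistic used here can be illustrated with a short sketch. The original analyses were run in Matlab; the following is a generic Python version (without the tie correction that full implementations apply), with toy rating data:</p>

```python
def kendalls_w(ratings):
    """Kendall's coefficient of concordance W (no tie correction).

    ratings: list of per-rater score lists, shape (m raters, n items).
    Returns W in [0, 1]; 1 means perfect agreement among raters.
    """
    m = len(ratings)
    n = len(ratings[0])

    def ranks(row):
        # Assign average ranks so tied scores share a rank.
        order = sorted(range(n), key=lambda j: row[j])
        r = [0.0] * n
        i = 0
        while i < n:
            j = i
            while j + 1 < n and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    # Sum the per-item ranks over raters.
    rank_sums = [0.0] * n
    for row in ratings:
        for j, rj in enumerate(ranks(row)):
            rank_sums[j] += rj
    # Sum of squared deviations of the rank sums from their mean.
    mean = sum(rank_sums) / n
    s = sum((x - mean) ** 2 for x in rank_sums)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))
```

<p>With identical orderings across raters, W is 1; with fully opposed orderings of two raters, W is 0.</p>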
    </sec>

    <sec id="S3c">
      <title>Pupillary response analysis</title>

<p>Figure 1B shows the pupil size change over time. Only data recorded
from the right eye were analyzed since the pupillary responses from both
eyes were consensual. Data during blinks were treated as missing and
discarded (30.1%). The range of the average blink rate was about the
same as in our previous studies (<xref ref-type="bibr" rid="b4 b12">4, 12</xref>), where the task was an auditory
one that allowed normal blinks.</p>

<fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1.</label>
					<caption>
						<p>Examples of on-line surprise rating (A) and pupil size change
(B) over time. The red and blue lines represent the surprising and
unsurprising moments, respectively, as defined as when the rating score
was above 7 or below 4.</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-11-02-m-figure-01.png"/>
				</fig>

<p>The pupil size measurement in the video-based eye-tracker system, as
used in the current study, covaries with gaze position (<xref ref-type="bibr" rid="b21">21</xref>). To
avoid recording errors due to unexpected gaze positions, pupil size data
were discarded when the gaze position deviated more than 1.5 deg from the
central fixation point; 23.1% of the data were screened out.</p>

<p>The Eyelink system outputs arbitrary units [au] to represent the
pupil size, which are not calibrated across participants or conditions.
To compare the results across conditions, we computed z-scores within
each 90-s excerpt. To reduce high-frequency noise due to the unnecessarily
fine sampling rate (i.e., 1000 Hz) for pupillary response measurements, we
resampled the data at a 10-Hz sampling rate for the analysis.
Specifically, the data between the resampling points (i.e., every 100
data points) were discarded without any interpolation or filtering
procedure. In this work, we used an EDF converter (provided by SR
Research) to convert the Eyelink EDF file to the ASC format, and we used
Matlab for all the data analyses. The function for the resampling
procedure described above was “downsample.” We followed the same
protocol as in our previous study (<xref ref-type="bibr" rid="b4">4</xref>).</p>
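<p>The two normalization steps described above (z-scoring within each 90-s excerpt, then decimating from 1000 Hz to 10 Hz by keeping every 100th sample with no interpolation or filtering, as Matlab's “downsample” does) can be sketched in Python; the toy data and variable names are illustrative only:</p>

```python
def zscore(x):
    """Z-score a sequence of pupil sizes (arbitrary units)."""
    m = sum(x) / len(x)
    sd = (sum((v - m) ** 2 for v in x) / len(x)) ** 0.5
    return [(v - m) / sd for v in x]

def downsample(x, factor):
    """Keep every `factor`-th sample, starting with the first,
    with no interpolation or filtering (like Matlab's downsample)."""
    return x[::factor]

# Example: one 90-s excerpt sampled at 1000 Hz -> 900 samples at 10 Hz.
pupil = [float(i % 7) for i in range(90_000)]  # toy pupil trace
processed = downsample(zscore(pupil), 100)
```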
    </sec>

    <sec id="S3d">
      <title>Surprise-related PDR</title>

<p>The pupil data recorded in the two sessions (surprise-rating and
passive-listening) were time-aligned to the rating data obtained in the
surprise-rating session. The surprising moments were defined arbitrarily
as the period when the surprise rating score was above 7 (the red lines
in Fig. 1), the unsurprising moments as a surprise rating score below 4
(the blue lines in Fig. 1), and the neutral moments as a surprise rating
score between 4 and 7 (the black lines in Fig. 1). The criterion was set
to obtain similar probabilities of the valid data for surprising and
unsurprising moments: 24.7% and 21.7% of the total duration,
respectively.</p>
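<p>This labeling amounts to a simple threshold rule applied to the time-aligned rating trace. A minimal sketch with the thresholds of 7 and 4 given in the text (function names are hypothetical):</p>

```python
def label_moments(ratings, hi=7.0, lo=4.0):
    """Label each time-aligned rating sample as 'surprising' (above hi),
    'unsurprising' (below lo), or 'neutral' (in between)."""
    labels = []
    for r in ratings:
        if r > hi:
            labels.append("surprising")
        elif r < lo:
            labels.append("unsurprising")
        else:
            labels.append("neutral")
    return labels

def mean_pupil_by_label(pupil, labels):
    """Average pupil size within each label class."""
    sums, counts = {}, {}
    for p, l in zip(pupil, labels):
        sums[l] = sums.get(l, 0.0) + p
        counts[l] = counts.get(l, 0) + 1
    return {l: sums[l] / counts[l] for l in sums}
```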

<p>Results are shown in Fig. 2. Mean pupil diameter was subjected to a
three-way repeated-measures ANOVA with the task (surprise-rating,
passive-listening), music genre (classical, jazz, rock), and surprise
(surprising, neutral, unsurprising) as within-subject factors. Results
showed main effects of surprise [<italic>F</italic>(2,42) = 9.66,
<italic>p</italic> &#x3C; .001, <italic>η<sup>2</sup></italic> = .32] and
music genre [<italic>F</italic>(2,42) = 3.31, <italic>p</italic> &#x3C;
.05, <italic>η<sup>2</sup></italic> = .14] but not any other effect or
interaction (<italic>p</italic>s &#x3E; .1). When we applied a different
criterion to define the surprising moments, in which the deviation of the
rating score from the mean was more than 1.5 times the standard deviation,
the effect of surprise remained [<italic>F</italic>(2,42) = 10.82,
<italic>p</italic> &#x3C; .001, <italic>η<sup>2</sup></italic> = .34]. The
results suggest that the pupil dilated more strongly during the
surprising moments than during the unsurprising ones regardless of the
music genre or whether the on-line surprise rating was involved or
not.</p>

<fig id="fig02" fig-type="figure" position="float">
					<label>Figure 2.</label>
					<caption>
						<p>Mean of the pupil diameter during the surprising, neutral,
and unsurprising moments parameterized by music type in surprise-rating
(A) and passive-listening (B) sessions. Error bars represent standard
errors across participants.</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-11-02-m-figure-02.png"/>
				</fig>

<p>To further investigate whether there was systematic bias induced by a
particular musical excerpt or participant, we used scatter plots to
represent the surprise-related PDR for individual excerpts and
participants. Results are shown in Fig. 3. The data were clustered below
the diagonal line (confirming larger PDR during surprising moments than
during unsurprising ones), while the genres and participants were
distributed evenly, indicating a consistent tendency of the
surprise-related PDR across genres and participants. There was
no significant correlation between the surprise-related PDR (i.e., the
difference in average pupil size between surprising moments and
unsurprising ones) and the familiarity rating (<italic>r</italic> = .02,
<italic>p</italic> &#x3E; .8 for the surprise-rating condition, and
<italic>r</italic> = .03, <italic>p</italic> &#x3E; .7 for the
passive-listening condition; data not shown). The overall results
suggest that the surprise-related PDR did not depend on the genre,
participant, or familiarity with the excerpt.</p>

<fig id="fig03" fig-type="figure" position="float">
					<label>Figure 3.</label>
					<caption>
						<p>Scatter plots of PDR during the surprising moments against
PDR during the unsurprising ones in surprise-rating (A) and
passive-listening (B) sessions. Each marker represents each musical
genre (in the left panels) or participant (in the right panels).</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-11-02-m-figure-03.png"/>
				</fig>

<p>We conducted a further analysis to verify the effect of the
surprise-related PDRs and examine whether the effect could be explained
by stimulus-driven factors coupled with the musical excerpts or response
biases/tendencies associated with the participants. Specifically, we
calculated the estimated PDR-surprise association using bootstrapping
procedures. In the completely random procedure (as a baseline), the
pupil data were aligned with rating data randomly selected from
different participants/excerpts. The difference in the mean pupil
diameter between surprising and unsurprising moments, derived from the
ratings of different participants and excerpts, was calculated 1,000
times (by random selection between the pair of the pupil and rating
data) to form a distribution, where the PDR was expected not to be
associated with the surprise at all. The results are shown in Fig. 4. In
both the surprise-rating and passive-listening conditions, the baseline
distributions (i.e., the black distributions) were quite distant from
the observed surprise-related PDR (indicated as vertical dashed lines),
indicating a reliable surprise-related PDR: when the pupil data matched
the rating data for the same participant and musical excerpt, pupil size
was larger during surprising moments.</p>
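<p>The baseline procedure amounts to re-pairing each pupil trace with the rating labels of a randomly chosen different recording and recomputing the surprising-minus-unsurprising pupil difference many times. A schematic Python sketch under that reading (helper names and toy data are hypothetical; it assumes every label trace contains both surprising and unsurprising samples):</p>

```python
import random

def surprise_pdr(pupil, labels):
    """Mean pupil during 'surprising' minus mean during 'unsurprising'."""
    sur = [p for p, l in zip(pupil, labels) if l == "surprising"]
    uns = [p for p, l in zip(pupil, labels) if l == "unsurprising"]
    return sum(sur) / len(sur) - sum(uns) / len(uns)

def bootstrap_baseline(pupil_traces, label_traces, n_iter=1000, seed=0):
    """Null distribution: pair each pupil trace with the labels of a
    randomly chosen *different* recording, n_iter times."""
    rng = random.Random(seed)
    n = len(pupil_traces)
    dist = []
    for _ in range(n_iter):
        diffs = []
        for i, pupil in enumerate(pupil_traces):
            j = rng.choice([k for k in range(n) if k != i])
            diffs.append(surprise_pdr(pupil, label_traces[j]))
        dist.append(sum(diffs) / n)
    return dist
```

<p>The observed effect is then compared against this null distribution, as in Fig. 4.</p>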

<fig id="fig04" fig-type="figure" position="float">
					<label>Figure 4.</label>
					<caption>
						<p>Histogram of the estimated PDR-surprise associations in
surprise-rating (A) and passive-listening (B) sessions. The curves
represent the fitted normal densities. The vertical dashed lines
indicate the observed surprise-related PDR.</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-11-02-m-figure-04.png"/>
				</fig>

<p>We further calculated the estimated PDR-surprise associations when
the pupil data were paired with the rating data for the same excerpt,
but randomly selected from different participants (i.e.,
shuffled-participant condition), and when the pupil data were paired
with the rating data obtained from the same participant, but randomly
selected from different excerpts (i.e., shuffled-excerpt condition). We
expected that if the observed surprise-related PDR could mainly be
explained by the stimulus-driven factor, the distribution of the
estimated PDR-surprise association from the shuffled-participant
condition would be close to the observed surprise-related PDR. Namely,
as long as the pupil data were aligned with the rating data from the
same excerpt, regardless of the rater/participant, the PDR-surprise
association would increase. In contrast, if the surprise-related PDR
could mainly be explained by participant-related factors, such as
response bias or a systematic rating tendency, the surprise-related
PDR would be close to the estimate obtained from the shuffled-excerpt
procedure. Namely, the surprise-related PDR would be due to coordination
between the pupillary response and rating of a particular
rater/participant, regardless of the excerpt that was to be rated.</p>
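<p>The two shuffling schemes differ only in which index is re-sampled: the shuffled-participant condition keeps the excerpt fixed, the shuffled-excerpt condition keeps the participant fixed. A minimal sketch of the pairing logic (the index layout is our assumption):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n_part, n_exc = 5, 3

def shuffled_pairs(condition):
    """Yield (pupil_index, rating_index) pairs for one bootstrap pass.

    'participant': same excerpt, rating drawn from another participant
                   (tests a stimulus-driven account).
    'excerpt':     same participant, rating drawn from another excerpt
                   (tests a participant-related account).
    """
    for p in range(n_part):
        for e in range(n_exc):
            if condition == 'participant':
                other_p = rng.choice([q for q in range(n_part) if q != p])
                yield (p, e), (other_p, e)
            elif condition == 'excerpt':
                other_e = rng.choice([f for f in range(n_exc) if f != e])
                yield (p, e), (p, other_e)
            else:
                raise ValueError(condition)

pairs = list(shuffled_pairs('participant'))
# Every pairing keeps the excerpt fixed but swaps the participant:
assert all(pe[1] == re_[1] and pe[0] != re_[0] for pe, re_ in pairs)
```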

<p>As shown in Fig. 4, in the surprise-rating session, the observed
surprise-related PDR was quite close to the distribution derived from
the shuffled-participant procedure but not to the distribution derived
from the shuffled-excerpt procedure, indicating that the stimulus
characteristics might contribute to the surprise-related PDRs during
surprise rating. In contrast, no such result was found in the
passive-listening session. The distribution of the shuffled-participant
or shuffled-excerpt condition overlapped the baseline distribution and
was distant from the observed surprise-related PDR. The results suggest
that the surprise-related PDRs observed during passive listening cannot
be explained by the stimulus characteristics or response
biases/tendencies associated with the participants.</p>
    </sec>

    <sec id="S3e">
      <title>Decision-making-related PDR</title>

<p>Surprise-related PDRs were observed in both the surprise-rating and
passive-listening sessions. It may be suspected that participants
performed the surprise rating implicitly and spontaneously even though
they were not asked to, especially since the passive-listening session
was always conducted later. To examine whether participants
performed the rating while listening to the music, we
conducted another analysis to examine the decision-making-related PDR
(e.g., <xref ref-type="bibr" rid="b10 b11">10, 11</xref>).</p>

<p>The pupil data were time-locked to the rating change, instead of to
the rating moment as in the surprise-related PDR analysis. The rating
change was defined as the moment the rating started to move in a
particular direction. Figure 5 shows examples of the results of the
time-locked analysis of the rating change. The timing of the surprising
change was defined as the start of the period in which an increased
rating score lasted longer than 0.5 s (the vertical red dotted lines).
The timing of the unsurprising change was defined as the start of the
period in which a decreased rating score lasted longer than 0.5 s
(vertical blue dotted lines). To provide the baseline, we defined the
neutral time as the median timing of the period in which an unchanged
rating score lasted longer than 4 s (vertical green dotted lines).</p>
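<p>The event-detection rule above can be sketched as a run-length scan over the rating trace. This is a minimal illustration; the sampling rate and the use of the sign of successive differences are our assumptions:</p>

```python
import numpy as np

def change_events(rating, fs=60, min_move=0.5, min_still=4.0):
    """Detect reference timings from a continuous rating trace.

    rating: 1-D array sampled at fs Hz (fs is an assumed value).
    Returns (surprising, unsurprising, neutral) lists of sample indices:
    the starts of increases/decreases sustained longer than min_move
    seconds, and the medians of unchanged stretches longer than
    min_still seconds.
    """
    d = np.sign(np.diff(rating))           # +1 rising, -1 falling, 0 flat
    surprising, unsurprising, neutral = [], [], []
    i = 0
    while i < len(d):
        j = i
        while j < len(d) and d[j] == d[i]:
            j += 1                         # extend the current run
        dur = (j - i) / fs                 # run length in seconds
        if d[i] > 0 and dur > min_move:
            surprising.append(i)           # start of sustained increase
        elif d[i] < 0 and dur > min_move:
            unsurprising.append(i)         # start of sustained decrease
        elif d[i] == 0 and dur > min_still:
            neutral.append((i + j) // 2)   # median of unchanged period
        i = j
    return surprising, unsurprising, neutral
```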

<fig id="fig05" fig-type="figure" position="float">
					<label>Figure 5.</label>
					<caption>
						<p>Examples of on-line surprise rating (A) and pupil size change
(B) over time. The lines represent the surprising (red), unsurprising
(blue), and the unchanged (green) moments, which were defined as the
period when the rating score was increased, decreased, or unchanged over
time. The vertical dotted lines represent the reference timing of the
surprising change (red), unsurprising change (blue), and unchanged
(green) rating.</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-11-02-m-figure-05.png"/>
				</fig>

<p>Mean pupil diameter changes time-locked to the reference timing are
shown in Fig. 6. To examine whether pupil diameter reliably increased,
we conducted a pairwise <italic>t</italic>-test at each time point for
all the pairs in the three conditions (Bonferroni corrected
<italic>p</italic>-value). Results showed that in the surprise-rating
session, pupil diameter increased around 1 s before the decision-making
event, and reached statistical significance around the reference timing,
as indicated by the difference between the surprising/unsurprising and
neutral conditions. Note that the reference timing of the surprising and
unsurprising events was when the participant started moving the rating
bar. The pupil dilated before this timing, suggesting that the PDR is
evoked by decision-making processing rather than by motor behavior. This
is consistent with Einhäuser et al. (<xref ref-type="bibr" rid="b10">10</xref>), who showed that the pupil
dilated at the moment a choice was made even when the actual motor
response occurred thereafter. This decision-making-related PDR had a
similar pattern regardless of whether the decision was made in the
surprising or unsurprising direction. In contrast, in the
passive-listening condition, no such decision-making-related PDR was
found. The overall results suggest that there was no spontaneous
decision-making process involved in the passive-listening session.</p>
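<p>The pointwise statistical comparison can be sketched as follows for one of the three pairwise contrasts (surprising vs. neutral). The toy per-participant traces, sampling rate, and effect size are our inventions for illustration:</p>

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)

# Toy per-participant mean pupil traces time-locked to the reference
# timing: shape (participants, time points). 'surprising' carries a
# simulated dilation after time zero; 'neutral' does not.
n_sub, n_t = 22, 120                      # e.g., 2 s at 60 Hz (assumed)
t = np.linspace(-1, 1, n_t)
neutral = rng.normal(0, 0.05, (n_sub, n_t))
surprising = neutral + 0.3 * (t > 0) + rng.normal(0, 0.05, (n_sub, n_t))

# Paired t-test at each time point, Bonferroni-corrected across the
# n_t comparisons (alpha = .05 / n_t).
tvals, pvals = ttest_rel(surprising, neutral, axis=0)
significant = pvals < 0.05 / n_t

# Time points flagged as significant cluster after the reference timing.
```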

<fig id="fig06" fig-type="figure" position="float">
					<label>Figure 6.</label>
					<caption>
						<p>Mean pupil diameter change time-locked to the reference
timing in surprise-rating (A) and passive-listening (B) sessions. The
shadow represents standard errors across participants. The horizontal
lines represent significant differences between surprising and neutral
conditions (red lines), unsurprising and neutral conditions (blue
lines), or surprising and unsurprising conditions (black lines), p &#x3C;
.05 with Bonferroni correction. Ns represent the number of valid trials
from all participants for each condition.</p>
					</caption>
					<graphic id="graph06" xlink:href="jemr-11-02-m-figure-06.png"/>
				</fig>
    </sec>

    <sec id="S3f">
      <title>Loudness-related PDR</title>

<p>We have previously found that the subjective salience evaluation of
sounds is highly correlated with their loudness, as well as with the PDR
to them (<xref ref-type="bibr" rid="b12">12</xref>). The sounds used in the previous study were environmental
sounds presented briefly (500-ms duration) and discretely (10-s ISI). It
remains unclear whether the current online surprise judgment on the
long-lasting music can be explained by the instantaneous loudness change
of the music, and whether the surprise-related PDR can be simply
explained by the loudness change.</p>

<p>To investigate the issue, we conducted an analysis to examine the
similarity between the surprise rating and instantaneous loudness change
for each excerpt. The loudness of the musical excerpt was estimated by
an excitation-pattern-based loudness model (e.g., <xref ref-type="bibr" rid="b22">22</xref>). The acoustic
signal was bandpass filtered with a bank of filters of equivalent
rectangular bandwidth (ERB) with the center frequencies spaced 0.5 ERB
from 30 to 16,000 Hz, and weighted with the middle ear transfer
function. The outputs were divided into segments of 100-ms windows to
compute the instantaneous loudness. The instantaneous loudness was
smoothed with a 1-s window to represent the estimated loudness change
over time. We then calculated the correlation between the surprise
rating and loudness change over time. Results showed that among the 15
excerpts we selected, 13 showed significant correlation between the
surprise rating and instantaneous loudness change (see Fig. 7).</p>
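<p>The excitation-pattern loudness model itself is beyond a short sketch, but the windowing, smoothing, and correlation steps can be illustrated with a simple RMS proxy for instantaneous loudness. The proxy and the parameter defaults are our assumptions; the study used an ERB-filterbank excitation-pattern model:</p>

```python
import numpy as np

def instantaneous_loudness(signal, fs, win_s=0.1, smooth_s=1.0):
    """Proxy for instantaneous loudness: per-100-ms RMS level, smoothed
    with a 1-s moving average. (RMS stands in for the excitation-pattern
    loudness model used in the study.)"""
    win = int(win_s * fs)
    n = len(signal) // win
    frames = signal[:n * win].reshape(n, win)
    loud = np.sqrt((frames ** 2).mean(axis=1))   # 100-ms RMS per frame
    k = max(1, int(smooth_s / win_s))            # frames per 1-s window
    kernel = np.ones(k) / k
    return np.convolve(loud, kernel, mode='same')

def surprise_loudness_corr(rating, loudness):
    """Pearson correlation between the (frame-aligned) surprise rating
    and the loudness trace over time."""
    m = min(len(rating), len(loudness))
    return np.corrcoef(rating[:m], loudness[:m])[0, 1]
```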

<fig id="fig07" fig-type="figure" position="float">
					<label>Figure 7.</label>
					<caption>
						<p>Subjective salience rating (blue lines) and estimated
instantaneous loudness change (orange lines) over time. Pearson’s
correlation coefficients with hypothesis testing p-values are shown for
each excerpt.</p>
					</caption>
					<graphic id="graph07" xlink:href="jemr-11-02-m-figure-07.png"/>
				</fig>

<p>To further investigate whether the pupil data simply reflected the
instantaneous loudness change of the music, we conducted an analysis of
the loudness-related PDR. The idea was to align the pupil data with the
loudness change over time, as in the analysis of the surprise-related
PDR, to examine whether the pupil size was larger during the loud
moments than during the quiet ones. Loud and quiet moments were
defined as those at which the loudness was more than 1.5 standard
deviations above or below the mean, respectively; moments falling
between the two criteria were defined as the middle ground. Mean
pupil diameter during
loud, quiet, and middle ground moments (Fig. 8) was subjected to a
three-way repeated-measures ANOVA with the task (surprise-rating,
passive-listening), music genre (classical, jazz, rock), and loudness
(loud, middle ground, quiet) as within-subject factors. Results showed
a two-way interaction between music genre and loudness
[<italic>F</italic>(4,84) = 7.36, <italic>p</italic> &#x3C; .001,
<italic>η<sup>2</sup></italic> = .26] and the three-way interaction
among task, music genre, and loudness [<italic>F</italic>(4,84) = 2.95,
<italic>p</italic> &#x3C; .03, <italic>η<sup>2</sup></italic> = .12], but
no other main effect or interaction (<italic>p</italic>s &#x3E;
.3).</p>
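<p>Our reading of the 1.5-SD criterion can be sketched as follows; the labeling function and category names are illustrative assumptions, and the per-category means are the quantities that would enter the ANOVA:</p>

```python
import numpy as np

def classify_moments(loudness, k=1.5):
    """Label each time point 'loud', 'quiet', or 'middle' according to
    whether the loudness is more than k standard deviations above the
    mean, more than k below it, or in between (our reading of the
    1.5-SD criterion)."""
    mu, sd = loudness.mean(), loudness.std()
    labels = np.full(loudness.shape, 'middle', dtype=object)
    labels[loudness > mu + k * sd] = 'loud'
    labels[loudness < mu - k * sd] = 'quiet'
    return labels

def mean_pupil_by_label(pupil, labels):
    """Mean pupil diameter within each loudness category."""
    return {lab: pupil[labels == lab].mean()
            for lab in ('loud', 'middle', 'quiet')}
```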

<fig id="fig08" fig-type="figure" position="float">
					<label>Figure 8.</label>
					<caption>
						<p>Mean of the pupil diameter during loud, middle ground, and
quiet moments parameterized by music type in surprise-rating (A) and
passive-listening (B) sessions. Error bars represent standard errors
across participants.</p>
					</caption>
					<graphic id="graph08" xlink:href="jemr-11-02-m-figure-08.png"/>
				</fig>
    </sec>
    </sec>

    <sec id="S4">
      <title>Discussion</title>

<p>We examined whether the PDR reflects surprising moments in music.
Participants evaluated how surprisingly a musical excerpt changed over
time while concurrently listening to it. We found that their
pupil size increased at the moment they gave a surprise rating,
indicating a surprise-related PDR in music. This pattern of results was
also observed when they listened to the music passively without
performing any evaluation. Note that in the current study, the
surprise-related PDR was not revealed as a typical phasic (or biphasic)
response as it is when the surprise event is clearly defined and
presented discretely (e.g., <xref ref-type="bibr" rid="b4 b12">4, 12</xref>). In contrast, the ‘surprise’ was
defined by continuously updated processing over time (and therefore
was not necessarily a discrete transient event), as indexed by the
moments when surprise rating scores increased beyond certain levels, which might make
the stereotypical phasic response less noticeable. In any case, when
averaging the pupil size across time periods during ‘surprise’ events,
it has been consistently observed that the average pupil size is larger
around the surprise events than for background neutral sounds (<xref ref-type="bibr" rid="b4">4</xref>) or
less surprising/salient sounds (<xref ref-type="bibr" rid="b12">12</xref>).</p>

<p>Further bootstrapping analysis demonstrated that stimulus
characteristics might contribute to the surprise-related PDRs
during the surprise rating task but not during passive listening.
Moreover, the decision-making-related PDRs were only observed when the
participants performed the rating task but not when they listened to the
music passively, indicating the absence of spontaneous evaluation in the
latter case. The overall results indicate that PDR reflects surprising
moments in music, regardless of whether an evaluation of the surprise
per se is required. This suggests that the surprise-related PDR could be
due to a stimulus-driven response to the acoustic features embedded in
the music or due to automatic monitoring of surprise in an auditory
environment.</p>

<p>The surprise-related PDR was observed for all the music genres we
tested, regardless of the familiarity with the excerpt. In the
behavioral subjective rating, participants were more familiar with a
particular genre of music, i.e., classical music, than the others, and
tended to give a higher surprise rating on average over time if they
were familiar with the excerpt. The reason for this tendency could be
that when one is familiar with a particular excerpt, it becomes easier
to form an expectation and thus to predict the ‘surprise’ or
be predisposed to it. It has been shown that with familiarity with
excerpts, chills and emotional responses related to the excerpts
increase (e.g., <xref ref-type="bibr" rid="b23 b24 b25">23, 24, 25</xref>). Chills are also observed in the reflection
of pupillary dilation response (<xref ref-type="bibr" rid="b26">26</xref>) and are often present when music is
rich in variation. Because we did not measure chills or perform an
emotional evaluation of the excerpts, it remains unclear whether the
surprise rating resembled chills. However, the surprise-related PDR
did not correlate with familiarity with the excerpts and was observed
robustly and consistently regardless of music genre. This suggests that
the surprise-related PDR can hardly be explained by familiarity or
chills and is consistent with the idea that the reflection of the PDR in
auditory surprise is an automatic physiological response. This
conclusion is also supported by evidence showing that the PDR to a
deviant auditory oddball (<xref ref-type="bibr" rid="b4">4</xref>) is independent of the task demand, i.e.,
when the participant does not pay attention to the oddball per se.</p>

<p>It remains unclear whether and how the subjective surprise evaluation
in music can be derived from stimulus-driven factors. The consensus on
the surprise rating among the participants was generally at the
intermediate level and varied among the musical excerpts, indicating
that the evaluation was based on an interaction between the top-down
expectation (e.g., knowledge and familiarity with the excerpts) and
stimulus-driven factors (e.g., acoustic features). This conclusion is
also supported by the results of the bootstrapping analysis of the
estimated PDR-surprise associations, in that the surprise-related PDR
could be explained, but only partly, by the stimulus-driven effects
associated with the musical excerpts. Huang and Elhilali (<xref ref-type="bibr" rid="b17">17</xref>)
investigated auditory salience using natural soundscapes. They asked
participants to rate relative salience between two auditory streams and
took a data-driven approach to uncover the critical parameters for
auditory salience. They found that auditory salience spans
multidimensional features that combine nonlinearly and
context-dependently. Estimating auditory surprise in music requires, in addition
to the features contributing to auditory salience, parameters that are
possibly related to the time sequence and interactions among the
acoustic features to estimate the surprise derived from the passing
sequence. A related study has shown how surprise in popular music
contributes to preference (<xref ref-type="bibr" rid="b27">27</xref>). Considering that pupil size also
reflects emotional arousal (<xref ref-type="bibr" rid="b28">28</xref>) that might be related to preference,
more study is required to further investigate how the pupil reflects
surprise and preference and their interaction.</p>

<p>The surprise-related PDR in music cannot be explained by an explicit
or spontaneous surprise evaluation of the music or cognitive processing
load (in terms of task demand). Einhäuser and colleagues (<xref ref-type="bibr" rid="b10">10</xref>) showed
that the pupil dilates at the moment a decision is made, regardless of
the decision content or whether the motor response is required. This is
consistent with our observation of the decision-making-related PDR only
in the surprise-rating condition, regardless of the surprising or
unsurprising rating, but not in the passive-listening condition. In
contrast, surprise-related PDRs were consistently observed in both
conditions, and thus cannot be explained by the decision-making process.
With regard to the cognitive processing load, the pupil dilates when the
load increases (<xref ref-type="bibr" rid="b1 b2 b3">1, 2, 3</xref>). In the current study, a task demand was
continuously present during the surprise-rating session but entirely
absent during the passive-listening one, yet the surprise-related PDR
was consistently observed regardless of whether cognitive effort was
involved.</p>

<p>The surprise-related PDR might be partially explained by the loudness
change of the music, depending on the music genre. It has been shown
that subjective salience evaluation is highly correlated with loudness,
regardless of whether the sound is presented briefly (<xref ref-type="bibr" rid="b12">12</xref>) or if it is
long-lasting music as in the current study. However, the loudness change
could not explain the pupillary response to all the music genres. In the
loudness-related PDR analysis, larger pupil size during loud moments
than during quiet ones was only observed for the classical music, and
the effect was more remarkable during the surprise-rating condition than
the passive-listening condition. This general pattern is different from
the surprise-related PDR, in that the effect was observed for all the
music genres. Furthermore, given that the consensus of the surprise
rating among the participants was at the intermediate level, it is
possible that the pupillary response of an individual participant did
not simply reflect the loudness change, but instead was modulated by each
participant’s specific judgment of surprise. The overall results suggest
that loudness may partially explain the surprise-related PDR.</p>

<p>Pupillometry has recently been widely used to study various aspects
of musical processing such as arousal and preference (<xref ref-type="bibr" rid="b29">29</xref>), chills (<xref ref-type="bibr" rid="b26">26</xref>),
and familiarity (<xref ref-type="bibr" rid="b30">30</xref>). The current study contributes to our understanding
of the cognitive processing reflected in pupillary responses by
demonstrating that not only emotional arousal induced by music, but
also the orienting response to surprise, can be revealed by the pupillary response. By
presenting relatively long musical excerpts, we were able to apply
various analyses to investigate the dynamics of pupillary responses and
related cognitive processing. We conclude that the pupil dilates
automatically during surprising moments in music.</p>
    </sec>

    <sec id="S5" sec-type="COI-statement">
      <title>Ethics and Conflict of Interest</title>

<p>The authors declare that the contents of the article are in agreement
with the ethics described in
<ext-link ext-link-type="uri" xlink:href="http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html" xlink:show="new">http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html</ext-link>
and that there is no conflict of interest regarding the publication of
this paper.</p>
    </sec>

</body>
<back>
<ref-list>
<ref id="b15"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Aston-Jones</surname>, <given-names>G.</given-names></name>, &#x26; <name><surname>Cohen</surname>, <given-names>J. D.</given-names></name></person-group> (<year>2005</year>). <article-title>An integrative theory of locus coeruleus-norepinephrine function: Adaptive gain and optimal performance.</article-title> <source>Annual Review of Neuroscience</source>, <volume>28</volume>(<issue>1</issue>), <fpage>403</fpage>&#8211;<lpage>450</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1146/annurev.neuro.28.061604.135709</pub-id><pub-id pub-id-type="pmid">16022602</pub-id><issn>0147-006X</issn></mixed-citation></ref>
<ref id="b16"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Bala</surname>, <given-names>A. D.</given-names></name>, &#x26; <name><surname>Takahashi</surname>, <given-names>T. T.</given-names></name></person-group> (<year>2000</year>). <article-title>Pupillary dilation response as an indicator of auditory discrimination in the barn owl.</article-title> <source>Journal of Comparative Physiology. A, Neuroethology, Sensory, Neural, and Behavioral Physiology</source>, <volume>186</volume>(<issue>5</issue>), <fpage>425</fpage>&#8211;<lpage>434</lpage>. <pub-id pub-id-type="doi">10.1007/s003590050442</pub-id><pub-id pub-id-type="pmid">10879946</pub-id><issn>0340-7594</issn></mixed-citation></ref>
<ref id="b1"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Beatty</surname>, <given-names>J.</given-names></name></person-group> (<year>1982</year>). <article-title>Task-evoked pupillary responses, processing load, and the structure of processing resources.</article-title> <source>Psychological Bulletin</source>, <volume>91</volume>(<issue>2</issue>), <fpage>276</fpage>&#8211;<lpage>292</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1037/0033-2909.91.2.276</pub-id><pub-id pub-id-type="pmid">7071262</pub-id><issn>0033-2909</issn></mixed-citation></ref>
<ref id="b28"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Bradley</surname>, <given-names>M. M.</given-names></name>, <name><surname>Miccoli</surname>, <given-names>L.</given-names></name>, <name><surname>Escrig</surname>, <given-names>M. A.</given-names></name>, &#x26; <name><surname>Lang</surname>, <given-names>P. J.</given-names></name></person-group> (<year>2008</year>). <article-title>The pupil as a measure of emotional arousal and autonomic activation.</article-title> <source>Psychophysiology</source>, <volume>45</volume>(<issue>4</issue>), <fpage>602</fpage>&#8211;<lpage>607</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1111/j.1469-8986.2008.00654.x</pub-id><pub-id pub-id-type="pmid">18282202</pub-id><issn>0048-5772</issn></mixed-citation></ref>
<ref id="b11"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Einh&#228;user</surname>, <given-names>W.</given-names></name>, <name><surname>Stout</surname>, <given-names>J.</given-names></name>, <name><surname>Koch</surname>, <given-names>C.</given-names></name>, &#x26; <name><surname>Carter</surname>, <given-names>O.</given-names></name></person-group> (<year>2008</year>). <article-title>Pupil dilation reflects perceptual selection and predicts subsequent stability in perceptual rivalry.</article-title> <source>Proceedings of the National Academy of Sciences of the United States of America</source>, <volume>105</volume>(<issue>5</issue>), <fpage>1704</fpage>&#8211;<lpage>1709</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1073/pnas.0707727105</pub-id><pub-id pub-id-type="pmid">18250340</pub-id><issn>0027-8424</issn></mixed-citation></ref>
<ref id="b10"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Einh&#228;user</surname>, <given-names>W.</given-names></name>, <name><surname>Koch</surname>, <given-names>C.</given-names></name>, &#x26; <name><surname>Carter</surname>, <given-names>O. L.</given-names></name></person-group> (<year>2010</year>). <article-title>Pupil dilation betrays the timing of decisions.</article-title> <source>Frontiers in Human Neuroscience</source>, <volume>4</volume>, <fpage>18</fpage>. <pub-id pub-id-type="doi" specific-use="author">10.3389/fnhum.2010.00018</pub-id><pub-id pub-id-type="pmid">20204145</pub-id><issn>1662-5161</issn></mixed-citation></ref>
<ref id="b6"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Eldar</surname>, <given-names>E.</given-names></name>, <name><surname>Cohen</surname>, <given-names>J. D.</given-names></name>, &#x26; <name><surname>Niv</surname>, <given-names>Y.</given-names></name></person-group> (<year>2013</year>). <article-title>The effects of neural gain on attention and learning.</article-title> <source>Nature Neuroscience</source>, <volume>16</volume>(<issue>8</issue>), <fpage>1146</fpage>&#8211;<lpage>1153</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1038/nn.3428</pub-id><pub-id pub-id-type="pmid">23770566</pub-id><issn>1097-6256</issn></mixed-citation></ref>
<ref id="b7"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Gabay</surname>, <given-names>S.</given-names></name>, <name><surname>Pertzov</surname>, <given-names>Y.</given-names></name>, &#x26; <name><surname>Henik</surname>, <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Orienting of attention, pupil size, and the norepinephrine system.</article-title> <source>Attention, Perception &#x26; Psychophysics</source>, <volume>73</volume>(<issue>1</issue>), <fpage>123</fpage>&#8211;<lpage>129</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.3758/s13414-010-0015-4</pub-id><pub-id pub-id-type="pmid">21258914</pub-id><issn>1943-3921</issn></mixed-citation></ref>
<ref id="b21"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Gagl</surname>, <given-names>B.</given-names></name>, <name><surname>Hawelka</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Hutzler</surname>, <given-names>F.</given-names></name></person-group> (<year>2011</year>). <article-title>Systematic influence of gaze position on pupil size measurement: Analysis and correction.</article-title> <source>Behavior Research Methods</source>, <volume>43</volume>(<issue>4</issue>), <fpage>1171</fpage>&#8211;<lpage>1181</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.3758/s13428-011-0109-5</pub-id></mixed-citation></ref>
<ref id="b29"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Gingras</surname>, <given-names>B.</given-names></name>, <name><surname>Marin</surname>, <given-names>M. M.</given-names></name>, <name><surname>Puig-Waldm&#252;ller</surname>, <given-names>E.</given-names></name>, &#x26; <name><surname>Fitch</surname>, <given-names>W. T.</given-names></name></person-group> (<year>2015</year>). <article-title>The eye is listening: Music-induced arousal and individual differences predict pupillary responses.</article-title> <source>Frontiers in Human Neuroscience</source>, <volume>9</volume>, <fpage>619</fpage>. <pub-id pub-id-type="doi" specific-use="author">10.3389/fnhum.2015.00619</pub-id></mixed-citation></ref>
<ref id="b8"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Goldinger</surname>, <given-names>S. D.</given-names></name>, &#x26; <name><surname>Papesh</surname>, <given-names>M. H.</given-names></name></person-group> (<year>2012</year>). <article-title>Pupil dilation reflects the creation and retrieval of memories.</article-title> <source>Current Directions in Psychological Science</source>, <volume>21</volume>(<issue>2</issue>), <fpage>90</fpage>&#8211;<lpage>95</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1177/0963721412436811</pub-id><pub-id pub-id-type="pmid">29093614</pub-id><issn>0963-7214</issn></mixed-citation></ref>
<ref id="b17"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Huang</surname>, <given-names>N.</given-names></name>, &#x26; <name><surname>Elhilali</surname>, <given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>Auditory salience using natural soundscapes.</article-title> <source>The Journal of the Acoustical Society of America</source>, <volume>141</volume>(<issue>3</issue>), <fpage>2163</fpage>&#8211;<lpage>2176</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1121/1.4979055</pub-id><pub-id pub-id-type="pmid">28372080</pub-id><issn>0001-4966</issn></mixed-citation></ref>
<ref id="b2"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Hy&#246;n&#228;</surname>, <given-names>J.</given-names></name>, <name><surname>Tommola</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Alaja</surname>, <given-names>A.-M.</given-names></name></person-group> (<year>1995</year>). <article-title>Pupil dilation as a measure of processing load in simultaneous interpretation and other language tasks.</article-title> <source>The Quarterly Journal of Experimental Psychology.</source>, <volume>48</volume>(<issue>3</issue>), <fpage>598</fpage>&#8211;<lpage>612</lpage>. <pub-id pub-id-type="doi">10.1080/14640749508401407</pub-id><pub-id pub-id-type="pmid">7568993</pub-id><issn>0272-4987</issn></mixed-citation></ref>
<ref id="b3"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Kahneman</surname>, <given-names>D.</given-names></name></person-group> (<year>1973</year>). <source>Attention and effort</source>. <publisher-name>Prentice-Hall</publisher-name>.</mixed-citation></ref>
<ref id="b14"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Laeng</surname>, <given-names>B.</given-names></name>, &#x26; <name><surname>Sulutvedt</surname>, <given-names>U.</given-names></name></person-group> (<year>2014</year>). <article-title>The eye pupil adjusts to imaginary light.</article-title> <source>Psychological Science</source>, <volume>25</volume>(<issue>1</issue>), <fpage>188</fpage>&#8211;<lpage>197</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1177/0956797613503556</pub-id><pub-id pub-id-type="pmid">24285432</pub-id><issn>0956-7976</issn></mixed-citation></ref>
<ref id="b26"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Laeng</surname>, <given-names>B.</given-names></name>, <name><surname>Eidet</surname>, <given-names>L. M.</given-names></name>, <name><surname>Sulutvedt</surname>, <given-names>U.</given-names></name>, &#x26; <name><surname>Panksepp</surname>, <given-names>J.</given-names></name></person-group> (<year>2016</year>). <article-title>Music chills: The eye pupil as a mirror to music&#8217;s soul.</article-title> <source>Consciousness and Cognition</source>, <volume>44</volume>, <fpage>161</fpage>&#8211;<lpage>178</lpage>. <pub-id pub-id-type="doi">10.1016/j.concog.2016.07.009</pub-id><pub-id pub-id-type="pmid">27500655</pub-id><issn>1053-8100</issn></mixed-citation></ref>
<ref id="b4"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Liao</surname>, <given-names>H.-I.</given-names></name>, <name><surname>Yoneya</surname>, <given-names>M.</given-names></name>, <name><surname>Kidani</surname>, <given-names>S.</given-names></name>, <name><surname>Kashino</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Furukawa</surname>, <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>Human pupillary dilation response to deviant auditory stimuli: Effects of stimulus properties and voluntary attention.</article-title> <source>Frontiers in Neuroscience</source>, <volume>10</volume>, <fpage>43</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2016.00043</pub-id><pub-id pub-id-type="pmid">26924959</pub-id><issn>1662-4548</issn></mixed-citation></ref>
<ref id="b12"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Liao</surname>, <given-names>H.-I.</given-names></name>, <name><surname>Kidani</surname>, <given-names>S.</given-names></name>, <name><surname>Yoneya</surname>, <given-names>M.</given-names></name>, <name><surname>Kashino</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Furukawa</surname>, <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>Correspondences among pupillary dilation response, subjective salience of sounds, and loudness.</article-title> <source>Psychonomic Bulletin &#x26; Review</source>, <volume>23</volume>(<issue>2</issue>), <fpage>412</fpage>&#8211;<lpage>425</lpage>. <pub-id pub-id-type="doi">10.3758/s13423-015-0898-0</pub-id><pub-id pub-id-type="pmid">26163191</pub-id><issn>1069-9384</issn></mixed-citation></ref>
<ref id="b27"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Miles</surname>, <given-names>S. A.</given-names></name>, <name><surname>Rosen</surname>, <given-names>D. S.</given-names></name>, &#x26; <name><surname>Grzywacz</surname>, <given-names>N. M.</given-names></name></person-group> (<year>2017</year>). <article-title>A statistical analysis of the relationship between harmonic surprise and preference in popular music.</article-title> <source>Frontiers in Human Neuroscience</source>, <volume>11</volume>, <fpage>263</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2017.00263</pub-id><pub-id pub-id-type="pmid">28572763</pub-id></mixed-citation></ref>
<ref id="b23"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Mori</surname>, <given-names>K.</given-names></name>, &#x26; <name><surname>Iwanaga</surname>, <given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>Two types of peak emotional responses to music: The psychophysiology of chills and tears.</article-title> <source>Scientific Reports</source>, <volume>7</volume>(<issue>1</issue>), <fpage>46063</fpage>. <pub-id pub-id-type="doi" specific-use="author">10.1038/srep46063</pub-id><pub-id pub-id-type="pmid">28387335</pub-id><issn>2045-2322</issn></mixed-citation></ref>
<ref id="b9"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Naber</surname>, <given-names>M.</given-names></name>, <name><surname>Fr&#228;ssle</surname>, <given-names>S.</given-names></name>, <name><surname>Rutishauser</surname>, <given-names>U.</given-names></name>, &#x26; <name><surname>Einh&#228;user</surname>, <given-names>W.</given-names></name></person-group> (<year>2013</year>). <article-title>Pupil size signals novelty and predicts later retrieval success for declarative memories of natural scenes.</article-title> <source>Journal of Vision (Charlottesville, Va.)</source>, <volume>13</volume>(<issue>2</issue>), <fpage>11</fpage>. <pub-id pub-id-type="doi" specific-use="author">10.1167/13.2.11</pub-id><pub-id pub-id-type="pmid">23397036</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="b13"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Naber</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Nakayama</surname>, <given-names>K.</given-names></name></person-group> (<year>2013</year>). <article-title>Pupil responses to high-level image content.</article-title> <source>Journal of Vision (Charlottesville, Va.)</source>, <volume>13</volume>(<issue>6</issue>), <fpage>7</fpage>. <pub-id pub-id-type="doi" specific-use="author">10.1167/13.6.7</pub-id><pub-id pub-id-type="pmid">23685390</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="b24"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Panksepp</surname>, <given-names>J.</given-names></name></person-group> (<year>1995</year>). <article-title>The emotional sources of &#8220;chills&#8221; induced by music.</article-title> <source>Music Perception</source>, <volume>13</volume>(<issue>2</issue>), <fpage>171</fpage>&#8211;<lpage>207</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.2307/40285693</pub-id><issn>0730-7829</issn></mixed-citation></ref>
<ref id="b5"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Partala</surname>, <given-names>T.</given-names></name>, &#x26; <name><surname>Surakka</surname>, <given-names>V.</given-names></name></person-group> (<year>2003</year>). <article-title>Pupil size variation as an indication of affective processing.</article-title> <source>International Journal of Human-Computer Studies</source>, <volume>59</volume>(<issue>1-2</issue>), <fpage>185</fpage>&#8211;<lpage>198</lpage>. <pub-id pub-id-type="doi">10.1016/S1071-5819(03)00017-X</pub-id><issn>1071-5819</issn></mixed-citation></ref>
<ref id="b25"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Pereira</surname>, <given-names>C. S.</given-names></name>, <name><surname>Teixeira</surname>, <given-names>J.</given-names></name>, <name><surname>Figueiredo</surname>, <given-names>P.</given-names></name>, <name><surname>Xavier</surname>, <given-names>J.</given-names></name>, <name><surname>Castro</surname>, <given-names>S. L.</given-names></name>, &#x26; <name><surname>Brattico</surname>, <given-names>E.</given-names></name></person-group> (<year>2011</year>). <article-title>Music and emotions in the brain: Familiarity matters.</article-title> <source>PLoS One</source>, <volume>6</volume>(<issue>11</issue>), <fpage>e27241</fpage>. <pub-id pub-id-type="doi" specific-use="author">10.1371/journal.pone.0027241</pub-id><pub-id pub-id-type="pmid">22110619</pub-id><issn>1932-6203</issn></mixed-citation></ref>
<ref id="b22"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Petsas</surname>, <given-names>T.</given-names></name>, <name><surname>Harrison</surname>, <given-names>J.</given-names></name>, <name><surname>Kashino</surname>, <given-names>M.</given-names></name>, <name><surname>Furukawa</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Chait</surname>, <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>The effect of distraction on change detection in crowded acoustic scenes.</article-title> <source>Hearing Research</source>, <volume>341</volume>, <fpage>179</fpage>&#8211;<lpage>189</lpage>. <pub-id pub-id-type="doi">10.1016/j.heares.2016.08.015</pub-id><issn>0378-5955</issn></mixed-citation></ref>
<ref id="b18"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Wang</surname>, <given-names>C. A.</given-names></name>, <name><surname>Boehnke</surname>, <given-names>S. E.</given-names></name>, <name><surname>Itti</surname>, <given-names>L.</given-names></name>, &#x26; <name><surname>Munoz</surname>, <given-names>D. P.</given-names></name></person-group> (<year>2014</year>). <article-title>Transient pupil response is modulated by contrast-based saliency.</article-title> <source>The Journal of Neuroscience: The Official Journal of the Society for Neuroscience</source>, <volume>34</volume>(<issue>2</issue>), <fpage>408</fpage>&#8211;<lpage>417</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3550-13.2014</pub-id><pub-id pub-id-type="pmid">24403141</pub-id><issn>0270-6474</issn></mixed-citation></ref>
<ref id="b20"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Wang</surname>, <given-names>C. A.</given-names></name>, &#x26; <name><surname>Munoz</surname>, <given-names>D. P.</given-names></name></person-group> (<year>2014</year>). <article-title>Modulation of stimulus contrast on the human pupil orienting response.</article-title> <source>The European Journal of Neuroscience</source>, <volume>40</volume>(<issue>5</issue>), <fpage>2822</fpage>&#8211;<lpage>2832</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1111/ejn.12641</pub-id><pub-id pub-id-type="pmid">24911340</pub-id><issn>0953-816X</issn></mixed-citation></ref>
<ref id="b30"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Weiss</surname>, <given-names>M. W.</given-names></name>, <name><surname>Trehub</surname>, <given-names>S. E.</given-names></name>, <name><surname>Schellenberg</surname>, <given-names>E. G.</given-names></name>, &#x26; <name><surname>Habashi</surname>, <given-names>P.</given-names></name></person-group> (<year>2016</year>). <article-title>Pupils dilate for vocal or familiar music.</article-title> <source>Journal of Experimental Psychology. Human Perception and Performance</source>, <volume>42</volume>(<issue>8</issue>), <fpage>1061</fpage>&#8211;<lpage>1065</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1037/xhp0000226</pub-id><pub-id pub-id-type="pmid">27123682</pub-id><issn>0096-1523</issn></mixed-citation></ref>
<ref id="b19"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Wetzel</surname>, <given-names>N.</given-names></name>, <name><surname>Buttelmann</surname>, <given-names>D.</given-names></name>, <name><surname>Schieler</surname>, <given-names>A.</given-names></name>, &#x26; <name><surname>Widmann</surname>, <given-names>A.</given-names></name></person-group> (<year>2016</year>). <article-title>Infant and adult pupil dilation in response to unexpected sounds.</article-title> <source>Developmental Psychobiology</source>, <volume>58</volume>(<issue>3</issue>), <fpage>382</fpage>&#8211;<lpage>392</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1002/dev.21377</pub-id><pub-id pub-id-type="pmid">26507492</pub-id><issn>0012-1630</issn></mixed-citation></ref>
</ref-list>
</back>
</article>
