<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.11.2.12</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>A linear oscillator model predicts
dynamic temporal attention and
pupillary entrainment to rhythmic patterns</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Fink</surname>
						<given-names>Lauren K.</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Hurley</surname>
						<given-names>Brian K.</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">       
 					<name>      
						<surname>Geng</surname>
						<given-names>Joy J.</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">        
					<name>        
						<surname>Janata</surname>
						<given-names>Petr</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>                				
        <aff id="aff1">
		<institution>University of California, Davis</institution>,   <country>USA</country>
        </aff>
		</contrib-group>   

		
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>20</day>  
		<month>11</month>
        <year>2018</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2018</year>
	</pub-date>
      <volume>11</volume>
      <issue>2</issue>
	 <elocation-id>10.16910/jemr.11.2.12</elocation-id> 
	<permissions> 
	<copyright-year>2018</copyright-year>
	<copyright-holder>Fink, L. K., Hurley, B. K., Geng, J. J. &#x26; Janata, P.</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>Rhythm is a ubiquitous feature of music that induces specific neural modes of processing. In this
paper, we assess the potential of a stimulus-driven linear oscillator model (<xref ref-type="bibr" rid="b57">57</xref>)
to predict dynamic attention to complex musical rhythms on an instant-by-instant basis. We use
perceptual thresholds and pupillometry as attentional indices against which to test our model predictions.
During a deviance detection task, participants listened to continuously looping, multi-instrument
rhythmic patterns while being eye-tracked. Their task was to respond any time they
heard an increase in intensity (dB SPL). An adaptive thresholding algorithm adjusted deviant intensity
at multiple probed temporal locations throughout each rhythmic stimulus. The oscillator
model predicted participants’ perceptual thresholds for detecting deviants at probed locations, with
a low temporal salience prediction corresponding to a high perceptual threshold and vice versa. A
pupil dilation response was observed for all deviants. Notably, the pupil dilated even when participants
did not report hearing a deviant. Maximum pupil size and resonator model output were significant
predictors of whether a deviant was detected or missed on any given trial. Besides the
evoked pupillary response to deviants, we also assessed the continuous pupillary signal to the
rhythmic patterns. The pupil exhibited entrainment at prominent periodicities present in the stimuli
and followed each of the different rhythmic patterns in a unique way. Overall, these results replicate
previous studies using the linear oscillator model to predict dynamic attention to complex
auditory scenes and extend the utility of the model to the prediction of neurophysiological signals,
in this case the pupillary time course; however, we note that the amplitude envelope of the acoustic
patterns may serve as a similarly useful predictor. To our knowledge, this is the first paper to show
entrainment of pupil dynamics by demonstrating a phase relationship between musical stimuli and
the pupillary signal.</p>
      </abstract>
      <kwd-group>
        <kwd>Pupil</kwd>
        <kwd>attention</kwd>
        <kwd>entrainment</kwd>
        <kwd>rhythm</kwd>
        <kwd>music</kwd>
        <kwd>modeling</kwd>
        <kwd>amplitude envelope</kwd>
        <kwd>psychophysics</kwd>                                
      </kwd-group>
    </article-meta>
  </front>	
  <body>

    <sec id="S1">
      <title>Introduction</title>

<p>Though diverse forms of music exist across the globe, all music shares the
property of evolving through time. While certain scales, modes, meters,
or timbres may be more or less prevalent depending on the culture in
question, the use of time to organize sound is universal. Therefore,
rhythm, one of the most basic elements of music, provides an excellent
scientific starting point to begin to question and characterize the
neural mechanisms underlying music-induced changes in motor behavior and
attentional state. To remain consistent with previous literature, here
rhythm is defined as patterns of duration, timing, and stress in the
amplitude envelope of an auditory signal (a physical property), whereas
meter is a perceptual phenomenon that tends to include the pulse (beat
or tactus) frequency perceived in a rhythmic sequence, as well as slower
and faster integer-related frequencies (<xref ref-type="bibr" rid="b1">1</xref>).</p>

<p>Previous studies have shown that the presence of meter affects
attention and motor behavior. For instance, perceptual sensitivity is
enhanced and reaction times are decreased when targets occur in phase
with an on-going metric periodicity (<xref ref-type="bibr" rid="b2 b3 b4">2, 3, 4</xref>). Interestingly, this
facilitation via auditory regularity is observed not only for auditory
targets, including speech (<xref ref-type="bibr" rid="b5">5</xref>), but also for visual targets (<xref ref-type="bibr" rid="b6 b7 b8 b9 b10 b11 b12">6, 7, 8, 9, 10, 11, 12</xref>). One
promising theory that accounts for these results is Dynamic Attending
Theory (DAT) (<xref ref-type="bibr" rid="b13 b14">13, 14</xref>).</p>

    <sec id="S1a">
      <title>Dynamic Attending Theory</title>

<p>Dynamic Attending Theory (DAT) posits that the neural mechanisms of
attention are susceptible to entrainment by an external stimulus,
allowing for temporal predictions and therefore attention and motor
coordination to specific time points (<xref ref-type="bibr" rid="b13 b14 b15">13, 14, 15</xref>). For any given stimulus,
the periodicities with the most energy will capture attention most
strongly. Neurobiologically, the proposed mechanism is entrainment of
neuronal membrane potential of, for example, neurons in primary auditory
cortex (in the case of auditory entrainment), to the external stimulus.
These phase-locked fluctuations in membrane potential alter the
probability of firing action potentials at any given point in time (see
(<xref ref-type="bibr" rid="b16">16</xref>) for a review).</p>

<p>Similarly, the recent Active Sensing Hypothesis (<xref ref-type="bibr" rid="b16 b17 b18">16, 17, 18</xref>) proposes that
perception occurs actively via motor sampling routines, that neural
oscillations serve to selectively enhance or suppress input,
cross-modally, and that cortical entrainment is, in and of itself, a
mechanism of attentional selection. Higher frequency oscillations can
become nested within lower frequency ones via phase-phase coupling,
phase-amplitude coupling, or amplitude-amplitude coupling, allowing for
processing of different stimulus attributes in parallel (<xref ref-type="bibr" rid="b19 b20">19, 20</xref>). (<xref ref-type="bibr" rid="b21">21</xref>)
have connected the ideas of Active Sensing to those of DAT by
highlighting the critical role of low frequency neural oscillations. In
summary, DAT and Active Sensing are not incompatible, as outlined in
(<xref ref-type="bibr" rid="b22">22</xref>).</p>

<p>Interestingly, studies of neural entrainment are typically separate
from those investigating sensorimotor synchronization, defined as
spontaneous synchronization of one’s motor effectors with an external
rhythm (<xref ref-type="bibr" rid="b23">23</xref>). However, a recent study confirms that the amplitude of
neural entrainment at the beat frequency explains variability in
sensorimotor synchronization accuracy, as well as temporal prediction
capabilities (<xref ref-type="bibr" rid="b24">24</xref>). Although motor entrainment, typically referred to as
sensorimotor synchronization, is not explicitly mentioned as a mechanism
of DAT, the role of the motor system in shaping perception is discussed
in many DAT papers (<xref ref-type="bibr" rid="b25 b26 b27 b28 b29">25, 26, 27, 28, 29</xref>) and is a core tenet of the Active Sensing
Hypothesis (<xref ref-type="bibr" rid="b22">22</xref>).</p>

<p>In this paper, we test whether a computational model of Dynamic
Attending Theory can predict attentional fluctuations to rhythmic
patterns. We also attempt to bridge the gap between motor and cortical
entrainment by investigating coupling of pupil dynamics to musical
stimuli. We consider the pupil both a motor behavior and an overt index
of attention, which we discuss in more detail below.</p>
    </sec>
	
    <sec id="S1b">
      <title>Sensori(oculo)motor coupling</title>

<p>Though most sensorimotor synchronization research has focused on
large-scale motor effectors, the auditory system also seems to have a
tight relationship with the ocular motor system. For instance, (<xref ref-type="bibr" rid="b30">30</xref>) show
that eye movements can synchronize with a moving acoustic target whether
it is real or imagined, in light or in darkness. With regard to rhythm,
a recent paper by (<xref ref-type="bibr" rid="b31">31</xref>) suggests that the tempo of rhythmic auditory
stimuli modulates both fixation durations and inter-saccade-intervals:
rhythms with faster tempi result in shorter fixations and
inter-saccade-intervals and vice versa. These results seem to fit with
those observed in audiovisual illusions, which illustrate the ability of
auditory stimuli to influence visual perception and even enhance visual
discrimination (<xref ref-type="bibr" rid="b32 b33">32, 33</xref>). Such cross-modal influencing of perception also
occurs when participants are asked to engage in purely imaginary
situations (<xref ref-type="bibr" rid="b34">34</xref>).</p>

<p>Though most studies have focused on eyeball movements, some (outlined
below) have begun to analyze the effect of auditory stimuli on pupil
dilation. Such an approach holds particular promise, as changes in pupil
size reflect sub-second changes in attentional state related to locus
coeruleus-mediated noradrenergic (LC-NE) functioning (<xref ref-type="bibr" rid="b35 b36 b37">35, 36, 37</xref>). The LC-NE
system plays a critical role in sensory processing, attentional
regulation, and memory consolidation. Its activity is time-locked to
theta oscillations in hippocampal CA1, and is theorized to be capable of
phase-resetting forebrain gamma band fluctuations, which are similarly
implicated in a broad range of cognitive processes (<xref ref-type="bibr" rid="b38">38</xref>).</p>

<p>In the visual domain, the pupil can dynamically follow the frequency
of an attended luminance flicker and index the allocation of visual
attention (<xref ref-type="bibr" rid="b39">39</xref>), as well as the spread of attention, whether cued
endogenously or exogenously (<xref ref-type="bibr" rid="b40">40</xref>). However, such a pupillary entrainment
effect has never been studied in the auditory domain. Theoretically,
though, pupil dilation should be susceptible to auditory entrainment,
like other autonomic responses, such as respiration, heart rate, and
blood pressure, which can become entrained to slow periodicities present
in music (see (<xref ref-type="bibr" rid="b41">41</xref>), for a recent review on autonomic entrainment).</p>

<p>In the context of audition, the pupil seems to be a reliable index of
neuronal auditory cortex activity and behavioral sensory sensitivity.
(<xref ref-type="bibr" rid="b42">42</xref>) simultaneously recorded neurons in auditory cortex, medial
geniculate (MG), and hippocampal CA1 in conjunction with pupil size,
while mice detected auditory targets embedded in noise. They found that
pupil diameter was tightly related to both ripple activity in CA1 (in a
180 degree antiphase relationship) and neuronal membrane fluctuations in
auditory cortex. Slow rhythmic activity and high membrane potential
variability were observed in conjunction with constricted pupils, while
high frequency activity and high membrane potential variability were
observed with largely dilated pupils. At intermediate levels of pupil
dilation, the membrane was hyperpolarized and the variance in its
potential was decreased. The same inverted U relationship was observed
for MG neurons as well. Crucially, in the behavioral task, (<xref ref-type="bibr" rid="b42">42</xref>) found
that the decrease in membrane potential variance at intermediate
pre-stimulus pupil sizes predicted the best performance on the task.
Variability of membrane potential was smallest on detected trials
(intermediate pupil size), largest on false alarm trials (large pupil
size), and intermediate on miss trials (small pupil size). Though this
study was performed on mice, it provides compelling neurophysiological
evidence for using pupil size as an index of auditory processing.</p>

<p>The same inverted U relationship between pupil size and task
performance has been observed in humans during a standard auditory
oddball task. For instance, (<xref ref-type="bibr" rid="b43">43</xref>) showed that baseline pupil diameter
predicts both reaction time and P300 amplitude in an inverted U fashion
on an individual trial basis. Additionally, (<xref ref-type="bibr" rid="b43">43</xref>) found that baseline
pupil diameter is negatively correlated with the phasic pupillary
response elicited by deviants. Because of the well-established
relationship between tonic neuronal activity in locus coeruleus and
pupil diameter, it is theorized that both the P300 amplitude and pupil
diameter index locus coeruleus-norepinephrine activity
(<xref ref-type="bibr" rid="b35 b37 b43 b44 b45">35, 37, 43, 44, 45</xref>).</p>

<p>Moving towards musical stimuli, pupil size has been found to be
larger for: more arousing stimuli (<xref ref-type="bibr" rid="b46 b47">46, 47</xref>), well-liked stimuli (<xref ref-type="bibr" rid="b48">48</xref>), more
familiar stimuli (<xref ref-type="bibr" rid="b47">47</xref>), psychologically and physically salient auditory
targets (<xref ref-type="bibr" rid="b49 b50 b51">49, 50, 51</xref>), more perceptually stable auditory stimuli (<xref ref-type="bibr" rid="b52">52</xref>), and
chill-evoking musical passages (<xref ref-type="bibr" rid="b53">53</xref>). A particularly relevant paper by
(<xref ref-type="bibr" rid="b54">54</xref>) showed that the pupil responded to unattended omissions in on-going
rhythmic patterns when the omissions coincided with strong metrical
beats but not weak ones, suggesting that the pupil is sensitive to
internally generated hierarchical models of musical meter. While Damsma
and van Rijn’s analysis of difference waves was informative, the
continuous time series of pupil size may provide additional dynamic
insights into complex auditory processing.</p>

<p>Of particular note, (<xref ref-type="bibr" rid="b55">55</xref>) demonstrated a relationship between
attention and the time course of the pupillary signal while listening to
music. To do this, they had participants listen to 30-sec clips of
classical music while being eye-tracked. In the first phase of the
experiment, participants listened to each clip individually (diotic
presentation); in the second phase participants were presented with two
different clips at once (dichotic presentation) and instructed to attend
to one or the other. Kang &#x26; Wheatley compared the pupil signal
during dichotic presentation to the pupil signal during diotic
presentation of the attended vs. ignored clip. Using dynamic time
warping to determine the similarity between the pupillary signals of
interest, they showed that in the dichotic condition, the pupil signal
was more similar to the pupil signal recorded during diotic presentation
of the attended clip than to that recorded during diotic presentation of
the unattended clip (<xref ref-type="bibr" rid="b55">55</xref>). Such a finding implies that the pupil time
series is a time-locked, continuous dependent measure that can reveal
fine-grained information about an attended auditory stimulus. However,
it remains to be determined whether it is possible for the pupil to
become entrained to rhythmic auditory stimuli and whether such
oscillations would reflect attentional processes or merely passive
entrainment.</p>
    </sec>
	
    <sec id="S1c">
      <title>Predicting dynamic auditory attention</title>

<p>Because the metric structure perceived by listeners is not readily
derivable from the acoustic signal, a variety of algorithms have been
developed to predict at what period listeners will perceive the beat.
For example, most music software applications use algorithms to display
tempo to users and a variety of contests exist in the music information
retrieval community for developing the most accurate estimation of
perceived tempo, as well as individual beats, e.g. the Music Information
Retrieval Evaluation eXchange (MIREX) Audio Beat Tracking task (<xref ref-type="bibr" rid="b56">56</xref>). The
beat period, however, is just one aspect of the musical meter. More
sophisticated algorithms and models have been developed to predict all
prominent metric periodicities in a stimulus, as well as the way in
which attention might fluctuate as a function of the temporal structure
of an audio stimulus, as predicted by Dynamic Attending Theory.</p>

<p>For instance, the Beyond-the-Beat (BTB) model (<xref ref-type="bibr" rid="b57">57</xref>) parses audio in a
way analogous to the auditory nerve and uses a bank of 99 damped linear
oscillators (reson filters) tuned to frequencies between 0.25 and 10 Hz
to model the periodicities present in a stimulus. Several studies have
shown that temporal regularities present in behavioral movement data
(tapping and motion capture) collected from participants listening to
musical stimuli correspond to the modeled BTB periodicity predictions
for those same stimuli (<xref ref-type="bibr" rid="b57 b58 b59">57, 58, 59</xref>). Recent work (<xref ref-type="bibr" rid="b60">60</xref>) suggests that an
additional model calculation of time-varying temporal salience can
predict participants’ perceptual thresholds for detecting intensity
changes at a variety of probed time points throughout the modeled
stimuli, i.e. participants’ time-varying fluctuations in attention when
listening to rhythmic patterns.</p>

<p>In the current study, we further tested the BTB model’s temporal
salience predictions by asking whether output from the model could
predict the pupillary response to rhythmic musical patterns. We
hypothesized that the model could predict neurophysiological signals,
such as the pupillary response, which we use as a proxy for attention.
Specifically, we expected that the pupil would become entrained to the
rhythmic musical patterns in a stimulus specific way.</p>

<p>We also expected to see phasic pupil dilation responses to intensity
deviants. As in (<xref ref-type="bibr" rid="b60">60</xref>), we used an adaptive thresholding procedure to
probe participants’ perceptual thresholds for detecting intensity
increases (dB SPL) inserted at multiple time points throughout
realistic, multi-part rhythmic stimuli. Each probed position within the
stimulus had a corresponding value in terms of the model’s temporal
salience predictions. We hypothesized that detection thresholds should
be lower at moments of high model-predicted salience and vice versa. If
perceptual thresholds differ for different moments in time, we assume
this reflects fluctuations in attention, as predicted by DAT.</p>
    </sec>
    </sec>

    <sec id="S2">
      <title>Methods</title>
    <sec id="S2a">
      <title>Participants</title>


<p>Eighteen people participated in the experiment (13 female; mean age:
26 years; min: 19; max: 52; median: 23 years). Student participants from
UC Davis received course credit for participation; other volunteer
participants received no compensation. The experimental protocol was
approved by the Institutional Review Board at UC Davis.</p>
    </sec>

    <sec id="S2b">
      <title>Materials</title>

<p>The five rhythmic patterns used in this study (Fig. 1, left column,
top panels) were initially created by Dr. Peter Keller via a custom
audio sequencer in Max/MSP 4.5.7 (Cycling ‘74), for a previous
experiment in our lab. Multi-timbre percussive patterns, each consisting
of the snap, shaker, and conga samples from a Proteus 2000 sound module
(E-mu Systems, Scotts Valley, CA) were designed to be played back in a
continuous loop at 107 beats per minute, with a 4/4 meter in mind.
However, we remain agnostic as to the actual beat periodicity and metric
periodicities listeners perceived in the stimuli, as we leave such
predictions to the linear oscillator model. Each stimulus pattern lasted
2.2 s. We use the same stimulus names as in (<xref ref-type="bibr" rid="b60">60</xref>) for consistency. All
stimuli can be accessed in the supplemental material of (<xref ref-type="bibr" rid="b60">60</xref>).</p>

<p>Please note that the intensity level changed dynamically throughout
the experiment based on participants’ responses. The real-time, adaptive
presentation of the stimuli is discussed further in the <italic>Adaptive
Thresholding Procedure</italic> section below.</p>
    </sec>

    <sec id="S2c">
      <title>Linear oscillator model predictions</title>

<p>All stimuli were processed through the Beyond-the-Beat model (<xref ref-type="bibr" rid="b57">57</xref>) to
obtain mean periodicity profiles and temporal salience predictions. For
full details about the architecture of the model and the periodicity
surface calculations, see (<xref ref-type="bibr" rid="b57">57</xref>). For details about the temporal salience
calculations, please see (<xref ref-type="bibr" rid="b60">60</xref>).</p>

<p>In short, the model uses the Institute for Psychoacoustics and
Electronic Music toolbox (<xref ref-type="bibr" rid="b61">61</xref>) to transform the incoming audio in a
manner analogous to the auditory nerve, separating the signal into 40
different frequency bands, with center frequencies ranging from 141 to
8877 Hz. Then, onset detection is performed in each band by taking the
half-wave rectified first order difference of the root mean square (RMS)
amplitude. Adjacent bands are averaged together to reduce redundancy and
enhance computational efficiency. The signal from each of the remaining
five bands is fed through a bank of 99 reson filters (linear
oscillators) tuned to a range of frequencies up to 10 Hz. The
oscillators driven most strongly by the incoming signal oscillate with
the largest amplitude (Fig. 2A). A windowed RMS on the reson filter
outputs results in five periodicity surfaces (one for each of the five
bands), which show the energy output at each reson-filter periodicity
(Fig. 2B). The periodicity surfaces are averaged together to produce an
Average Periodicity Surface (Fig. 2C). The profile plotted to the right
of each stimulus (Fig. 1, right column; Fig. 2D) is termed the Mean
Periodicity Profile (MPP) and represents the energy at each periodicity
frequency, averaged over time. Periodicities in the MPP that exceed 5%
of the MPP’s amplitude range are considered peak periodicities and are
plotted as dark black lines against the gray profile (Figure 1, right
column).</p>
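<p>The core filtering step can be sketched as a damped two-pole resonator (reson filter) driven by an onset signal. The sketch below is illustrative only: the impulse-train input, the small set of tuning frequencies, and the damping value are assumptions; the full model drives 99 such filters in each of five bands.</p>

```python
import numpy as np

def reson_filter(x, f_osc, fs, r=0.99):
    """Damped linear oscillator (two-pole reson filter) tuned to f_osc Hz.
    Difference equation: y[n] = x[n] + 2*r*cos(theta)*y[n-1] - r**2*y[n-2]."""
    theta = 2.0 * np.pi * f_osc / fs
    b1, b2 = 2.0 * r * np.cos(theta), -r * r
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n]
        if n >= 1:
            y[n] += b1 * y[n - 1]
        if n >= 2:
            y[n] += b2 * y[n - 2]
    return y

fs = 100                                  # model rate (Hz); illustrative
onsets = np.zeros(fs * 10)
onsets[:: fs // 2] = 1.0                  # onset every 0.5 s -> 2 Hz periodicity
onsets -= onsets.mean()                   # remove DC so low-tuned filters are not driven by the mean
bank = {f: reson_filter(onsets, f, fs) for f in (0.5, 1.0, 2.0, 4.0)}
# windowed-RMS analogue: RMS of each oscillator output after a 1 s settling period
rms = {f: np.sqrt(np.mean(y[fs:] ** 2)) for f, y in bank.items()}
# the oscillator tuned to the stimulus periodicity (2 Hz) responds most strongly
```

As in the model, the RMS energy across such a bank, plotted per tuning frequency, yields a periodicity profile in which the driven frequencies stand out as peaks.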

<fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1.</label>
					<caption>
						<p>Stimulus patterns (left column, top
panels), temporal salience predictions (left column, bottom panels), and
mean periodicity profiles (right column) for each of the five stimuli
used in this experiment. Vertical tick marks underlying the stimulus
patterns (upper panels, left column) correspond to time in 140 ms
intervals. Dotted vertical lines (bottom panels, left column), indicate
moments in time that were probed with a deviant. These moments
correspond to the musical event onset(s) directly above in the top
panels. Dark vertical lines in the right column indicate peak
periodicities in the mean periodicity profile.</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-11-02-l-figure-01.png"/>
				</fig>

<p>After determining the peak periodicities for each stimulus, we return
to the output in each of the five bands from the reson filters. We mask
this output to only contain activity from the peak frequencies. Taking
the point-wise mean resonator amplitude across the peak-frequency reson
filters in all five bands yields the time series shown directly beneath
each stimulus pattern in Figure 1 (also see Fig. 2E). We consider this
output an estimate of salience over time.</p>
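<p>As a minimal sketch (with hypothetical array shapes and peak frequencies, not the model's actual values), the masking-and-averaging step amounts to indexing the filters tuned nearest each peak periodicity and taking the point-wise mean across those filters and all bands:</p>

```python
import numpy as np

# hypothetical resonator output: 5 bands x 99 filters x T time samples
rng = np.random.default_rng(0)
reson_out = rng.standard_normal((5, 99, 220))
freqs = np.linspace(0.25, 10.0, 99)       # filter tuning frequencies (Hz)
peak_freqs = [0.75, 1.5, 3.0]             # hypothetical MPP peak periodicities (Hz)

# keep only the filters tuned nearest each peak periodicity, then take the
# point-wise mean across those filters and all five bands
idx = [int(np.argmin(np.abs(freqs - f))) for f in peak_freqs]
salience = reson_out[:, idx, :].mean(axis=(0, 1))   # salience-over-time estimate
```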

<p>In deciding the possible time points at which to probe perceptual
thresholds, we tried to sample across the range of model-predicted
salience values for each stimulus by choosing four temporal locations
(dotted lines in lower panels of Figure 1). We treat the model
predictions as a continuous variable.</p>

<p>To predict the temporal and spectral properties of the pupillary
signal, we 1) extend the temporal salience prediction for multiple loop
iterations, 2) convolve the extended temporal salience prediction with a
canonical pupillary response function (<xref ref-type="bibr" rid="b62">62</xref>), and 3) calculate the spectrum
of this extended, convolved prediction. The pupillary response function
(PRF) is plotted in Figure 2F. Its parameters have been empirically
derived, first by (<xref ref-type="bibr" rid="b63">63</xref>) then refined by (<xref ref-type="bibr" rid="b62">62</xref>). The PRF is an Erlang gamma
function, with the equation:</p>

<p><italic>h = t<sup>n</sup>e<sup>&#x2212;nt/t<sub>max</sub></sup></italic></p>

<p>where <italic>h</italic> is the impulse response of the pupil, with
latency <italic>t<sub>max</sub>.</italic> (<xref ref-type="bibr" rid="b63">63</xref>) derived
<italic>n</italic> as 10.1, which represents the number of neural
signaling steps between attentional pulse and pupillary response. They
derived <italic>t</italic><sub>max</sub> as 930 ms when participants responded to
suprathreshold auditory tones with a button press. More recently, (<xref ref-type="bibr" rid="b62">62</xref>)
estimated <italic>t</italic><sub>max</sub> for suprathreshold auditory tones in the
absence of a button press to be 512 ms. They show that this non-motor PRF
is more accurate in correctly deconvolving precipitating attentional
events and that it can be used even when there are occasional motor
responses involved, e.g. responses to deviants, as long as they are
balanced across conditions. Hence, in our case, we model the continuous
pupillary response to our stimuli using the non-motor PRF and simply
treat any motor responses to deviants as noise that is balanced across
all of our conditions (stimuli).</p>
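<p>The non-motor PRF and the forward convolution can be sketched as follows; the sampling rate, duration, and the pulse train standing in for the temporal salience prediction are illustrative assumptions:</p>

```python
import numpy as np

def pupil_response_function(fs=100, n=10.1, t_max=0.512, dur=4.0):
    """Erlang gamma PRF h(t) = t**n * exp(-n*t/t_max), unit-peak normalized.
    n = 10.1 (neural signaling steps); t_max = 0.512 s (non-motor latency)."""
    t = np.arange(0.0, dur, 1.0 / fs)
    h = t ** n * np.exp(-n * t / t_max)
    return h / h.max()

fs = 100
prf = pupil_response_function(fs)
salience = np.zeros(fs * 6)
salience[fs :: 2 * fs] = 1.0              # hypothetical salience pulses every 2 s
predicted_pupil = np.convolve(salience, prf)[: len(salience)]
# h(t) peaks at t = t_max, so each pulse's pupil response lags it by ~512 ms
```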

<fig id="fig02" fig-type="figure" position="float">
					<label>Figure 2.</label>
					<caption>
						<p>Boxes represent processing stages of the
linear oscillator model; those labeled with a letter have a
corresponding plot below. Please see <italic>Linear oscillator model
predictions</italic> for full details. All plots are for stimulus
<italic>complex1.</italic></p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-11-02-l-figure-02.png"/>
				</fig>

<p>Though previous studies have taken a deconvolution approach
(deconvolving the recorded pupil data to get an estimate of the
attentional pulses that elicited it), note that we here take a forward,
convolutional approach. This allows us to generate predicted pupil data
(Fig. 2G) which we compare to our recorded pupil data. With this
approach, we avoid the issue of not being certain when exactly the
attentional pulse occurred, i.e. with deconvolution it is unclear what
the relationships are between the stimulus, attentional pulse, and the
system’s delay (see discussion in (<xref ref-type="bibr" rid="b63">63</xref>) p. 24); also note that
deconvolution approaches often require an additional temporal alignment
technique such as an optimization algorithm, e.g. (<xref ref-type="bibr" rid="b64">64</xref>), and/or dynamic
time-warping, e.g. (<xref ref-type="bibr" rid="b55">55</xref>). Here, we take the empirically derived delay,
<italic>t</italic>, of the pupil to return to baseline from (<xref ref-type="bibr" rid="b62">62</xref>), Figure
1a as 1300 ms.</p>
    </sec>

    <sec id="S2d">
      <title>Alternative models</title>      
      
<p>An important consideration is whether the complexity of the linear
oscillator model is necessary to accurately predict behavioral and pupillary data for rhythmic stimuli. To
address this question, two alternative models are considered in our
analyses, each representing different, relevant aspects of the acoustic
input sequence.</p>

<p><italic>Full resonator output:</italic> Rather than masking the
resonator output at the peak periodicities determined by the Mean
Periodicity Profile to get an estimate of salience over time that is
driven by the likely relevant metric frequencies, it is possible to just
average the output from all reson filters over time. Such a prediction
serves as a useful alternative to our filtered output and allows for a
comparison of whether the prominent metric periodicities play a role in
predicting attention over time.</p>

<p><italic>Amplitude Envelope:</italic> The spectrum of the amplitude
envelope of a sound signal has been shown to predict neural entrainment
frequencies, e.g. (<xref ref-type="bibr" rid="b65">65</xref>) show cortical steady-state evoked potentials at
peak frequencies in the envelope spectrum. Hence, as a comparison to the
linear oscillator model predictions, we also used the amplitude envelope
of our stimuli as a predictor. To extract the amplitude envelope of our
stimuli, we repeated each stimulus for multiple loops and calculated the
root mean square envelope using MATLAB’s <italic>envelope</italic>
function with the ‘rms’ flag and a sliding window of 50 ms. Proceeding
with just the upper half of the envelope, we low-pass filtered the
signal at 50 Hz using a 3<sup>rd</sup> order Butterworth filter then
down-sampled to 100 Hz to match the resolution of the oscillator model.
To predict pupil data, we convolved the envelope with the PRF, as
previously detailed for the linear oscillator model.</p>
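<p>As an illustrative sketch only (the original pipeline used MATLAB’s <italic>envelope</italic> function), the same envelope computation can be written in Python; the input sampling rate and function names here are our own assumptions:</p>

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def rms_envelope(x, fs, win_ms=50.0, lp_hz=50.0, fs_out=100):
    """RMS amplitude envelope: sliding-window RMS, 3rd-order Butterworth
    low-pass, then decimation to the oscillator model's resolution."""
    n = max(1, int(round(win_ms / 1000.0 * fs)))
    # moving RMS: square root of a moving average of the squared signal
    kernel = np.ones(n) / n
    rms = np.sqrt(np.convolve(x ** 2, kernel, mode="same"))
    # 3rd-order Butterworth low-pass at lp_hz
    sos = butter(3, lp_hz, btype="low", fs=fs, output="sos")
    smooth = sosfiltfilt(sos, rms)
    # simple decimation down to fs_out (100 Hz in the text)
    step = int(round(fs / fs_out))
    return smooth[::step]
```

For a 1 s, 100 Hz sine sampled at 1 kHz, the envelope hovers near the RMS value of a unit sine (≈0.707) away from the edges.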
    </sec>

    <sec id="S2e">
      <title>Apparatus</title>

<p>Participants were tested individually in a dimly lit,
sound-attenuating room, at a desk with a computer monitor, infrared
eye-tracker, Logitech Z-4 speaker system, and a Dell keyboard connected
to the computer via USB serial port. Participants were seated
approximately 60 cm away from the monitor. Throughout the experiment,
the screen was gray with a luminance of 17.7 cd/m<sup>2</sup>, a black
fixation cross in the center, and a refresh rate of 60 Hz. The fixation
cross subtended 2.8° of visual angle from center to edge. Pupil
diameter of the right eye was recorded with an Eyelink 1000 (SR
Research) sampling at 500 Hz in remote mode, using Pupil-CR tracking and
the ellipse pupil tracking model. Stimuli were presented at a
comfortable listening level, individually selected by each participant,
through speakers that were situated on the right and left sides of the
computer monitor. During the experiment, auditory stimuli were
adaptively presented through Max/MSP (Cycling ’74; code available at:
<ext-link ext-link-type="uri" xlink:href="https://github.com/janatalab/attmap.git" xlink:show="new">https://github.com/janatalab/attmap.git</ext-link>
), which also recorded behavioral
responses and sent event codes to the eye-tracking computer via a custom
Python socket.</p>
    </sec>

    <sec id="S2f">
      <title>Procedure</title>

<p>Participants were instructed to listen to the music, to maintain
their gaze as comfortably as possible on the central fixation cross, and
to press the “spacebar” key any time they heard an increase in volume (a
deviant). They were informed that some increases might be larger or
smaller than others and that they should respond to any such change. A 1
min practice run was delivered under the control of our experimental web
interface, Ensemble (<xref ref-type="bibr" rid="b66">66</xref>), after which participants were asked if they
had any questions.</p>

<p>During the experiment proper, a run was approximately 7 min long and
consisted of approximately 190 repetitions of a stimulus pattern. There
were no pauses between repetitions, thus creating a continuously
looping musical scene. Please note that the exact number of loop repetitions
any participant heard varied according to the adaptive procedure
outlined below. Each participant heard each stimulus once, resulting in
five total runs of approximately 7 min each, i.e. a roughly 35 min
experiment.</p>

<p>Stimulus order was randomized throughout the experiment. Messages to
take a break and continue when ready were presented after each run of
each stimulus. Following the deviance detection task, participants
completed questionnaires assessing musical experience, imagery
abilities, genre preferences, etc. These questionnaire data were
collected as part of larger ongoing projects in our lab and will not be
reported in this study. In total, the experimental session lasted
approximately 50 min; this includes the auditory task, self-determined
breaks between runs (which typically lasted 5–30 s), and the
completion of surveys.</p>

<p><italic>Adaptive Thresholding Procedure:</italic> Though participants
experienced a continuous musical scene, we can think of each repetition
of the stimulus loop as the fundamental organizing unit of the
experiment that determined the occurrence of deviants. Specifically,
after every standard (no-deviant) loop iteration, there was an 80%
chance of a deviant, in one of the four probed temporal locations,
without replacement, on the following loop iteration. After every
deviant loop, there was a 100% chance of a no-deviant loop. The Zippy
Estimation by Sequential Testing (ZEST) (<xref ref-type="bibr" rid="b67 b68">67, 68</xref>) algorithm was used to
dynamically change the decibel level of each deviant, depending on the
participant’s prior responses and an estimated probability density
function (p.d.f.) of their threshold.</p>

<p>The ZEST algorithm tracked thresholds for each of the four probed
temporal locations separately during each stimulus run. A starting
amplitude increase of 10 dB SPL was used as an initial difference limen.
On subsequent trials, the p.d.f. for each probed location was calculated
based on whether the participant detected the probe or not, within a
1000 ms window following probe onset. ZEST uses Bayes’ theorem to
constantly reduce the variance of a posterior probability density
function by reducing uncertainty in the participant’s threshold
probability distribution, given the participant’s preceding performance.
The mean of the resultant p.d.f. determines the magnitude of the
following deviant at that location. The mean of the estimated
probability density function on the last deviant trial is the
participant’s estimated perceptual threshold.</p>

<p>Compared to a traditional staircase procedure, ZEST allows for
relatively quick convergence on perceptual thresholds. Because the ZEST
procedure aims to minimize variance, using reversals as a stopping rule,
as in a staircase procedure, is not appropriate. Here, we
used 20 observations as a stopping rule because (<xref ref-type="bibr" rid="b68">68</xref>) showed that 18
observations were sufficient in a similar auditory task and (<xref ref-type="bibr" rid="b60">60</xref>)
demonstrated that, on average, 11 trials allowed for reliable estimation
of perceptual threshold when using a dynamic stopping rule. For the
current study, 20 was a conservative choice for estimating thresholds,
which simultaneously enabled multiple observations over which to average
pupillary data.</p>

<p>In summary, each participant was presented with a deviant at each of
the four probed locations, in each stimulus, 20 times. The intensity
change was always applied to the audio file for 200 ms, i.e.
participants heard an increase in volume of the on-going rhythmic
pattern for 200 ms before the pattern returned to the initial listening
volume. The dB SPL of each deviant was adjusted dynamically based on
participants’ prior responses. The mean of the estimated probability
density function on the last deviant trial (observation 20) was the
participant’s estimated threshold. Examples of this adaptive stimulus
presentation are accessible online as part of the Supplemental Material
in (<xref ref-type="bibr" rid="b60">60</xref>):<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1037/xhp0000563.supp" xlink:show="new">http://dx.doi.org/10.1037/xhp0000563.supp</ext-link>.
 </p>
    </sec>

    <sec id="S2g">
      <title>Analysis</title>

<p><italic>Perceptual Thresholds:</italic> Participants’ perceptual
thresholds for detecting deviants at each of the probed temporal
locations were computed via ZEST (<xref ref-type="bibr" rid="b67 b68">67, 68</xref>); see the <italic>Adaptive
Thresholding Procedure</italic> section above for further details.</p>

<p><italic>Reaction Time:</italic> Reaction times were calculated for
each trial for each participant, from deviant onset until button press.
Trials containing reaction times that did not fall within three scaled
median absolute deviations from the median were removed from subsequent
analysis. This process resulted in the removal of 0.12% of the data.</p>
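<p>The outlier criterion used here – more than three scaled median absolute deviations from the median, as in MATLAB’s default – might be sketched in Python as follows (function name ours):</p>

```python
import numpy as np

def mad_outlier_mask(x, n_mads=3.0):
    """Flag values more than n_mads *scaled* MADs from the median.

    The 1.4826 factor makes the MAD a consistent estimator of the
    standard deviation for normally distributed data."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    scaled_mad = 1.4826 * np.median(np.abs(x - med))
    return np.abs(x - med) > n_mads * scaled_mad
```

Applied to a vector of reaction times, the mask marks only the extreme trial for removal.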

<p><italic>Pupil Preprocessing:</italic> Blinks were identified in the
pupil data using the Eyelink parser blink detection algorithm (<xref ref-type="bibr" rid="b69">69</xref>),
which identifies blinks as periods of loss in pupil data surrounded by
saccade detection, presumed to occur based on the sweep of the eyelid
during the closing and opening of the eye. Saccades were also identified
using Eyelink’s default algorithm.</p>

<p>Subsequently, all ocular data were preprocessed using custom scripts
and third party toolboxes in MATLAB version 9.2 (<xref ref-type="bibr" rid="b70">70</xref>). Samples consisting
of blinks or saccades were set to NaN, as was any sample that was 20
arbitrary units greater than the preceding sample. A sliding window of
25 samples (50 ms) was used around all NaN events to remove edge
artifacts. Missing pupil data were imputed by linear interpolation. Runs
requiring 30% or more interpolation were discarded from further analysis,
which equated to 9% of the data. The pupil time series for each
participant, each run (~7 min), was high-pass filtered at .05 Hz, using
a 3<sup>rd</sup> order Butterworth filter, to remove any large-scale
drift in the data. For each participant, each stimulus run, pupil data
were normalized as follows: z-scoredPupilData = (rawData –
mean(rawData)) / std(rawData). See Figure S1 in the Supplementary
Materials accompanying this article for a visualization of these pupil
pre-processing steps.</p>
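<p>For readers who wish to reproduce the pipeline outside MATLAB, the steps above can be sketched in Python; the parameter values follow the text, while the function and argument names are our own:</p>

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def preprocess_pupil(pupil, fs=500, jump=20, pad=25, hp_hz=0.05):
    """Sketch of the pupil preprocessing pipeline described in the text:
    artifact NaN-ing, window padding, interpolation, high-pass, z-score."""
    x = np.asarray(pupil, dtype=float).copy()
    # mark implausible jumps (>20 a.u. above the preceding sample) as missing
    x[1:][np.diff(x) > jump] = np.nan
    # widen every missing region by `pad` samples to remove edge artifacts
    bad = np.isnan(x)
    for i in np.flatnonzero(bad):
        x[max(0, i - pad):i + pad + 1] = np.nan
    # impute missing samples by linear interpolation
    idx = np.arange(len(x))
    good = ~np.isnan(x)
    x = np.interp(idx, idx[good], x[good])
    # 3rd-order Butterworth high-pass at 0.05 Hz to remove slow drift
    sos = butter(3, hp_hz, btype="high", fs=fs, output="sos")
    x = sosfiltfilt(sos, x)
    # z-score per run: (raw - mean) / std
    return (x - x.mean()) / x.std()
```

A run preprocessed this way contains no missing samples and has (by construction) zero mean and unit standard deviation.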

<p>Collectively, the preprocessing procedures and some of the
statistical analyses reported below relied on the Signal Processing
Toolbox (v. 7.4), the Statistics and Machine Learning Toolbox (v. 11.1),
the Bioinformatics Toolbox (v. 4.4), Ensemble (<xref ref-type="bibr" rid="b66">66</xref>), and the Janata Lab
Music Toolbox (<xref ref-type="bibr" rid="b57 b71">57, 71</xref>). All custom analysis code is available upon
request.</p>

<p><italic>Pupil Dilation Response:</italic> The pupil dilation response
(PDR) was calculated for each probed deviant location in each stimulus
by time-locking the pupil data to deviant onset. A baseline period was
defined as 200 ms preceding deviant onset. The mean pupil size from the
baseline period was subtracted from the trial pupil data (deviant onset
through 3000 ms). The mean and max pupil size were calculated within
this 3000 ms window. We chose 3000 ms because the pupil dilation
response typically takes around 2500 ms to return to baseline following
a motor response (<xref ref-type="bibr" rid="b62">62</xref>); therefore, 3000 ms seemed a safe window length.
Additionally, the velocity of the change in pupil size from deviant
onset to max dilation was calculated as the slope of a line fit from
pupil size at deviant onset to max pupil size in the window, similar to
Figure 1C in (<xref ref-type="bibr" rid="b72">72</xref>). The latency until pupil size maximum was defined as
the duration (in ms) it took from deviant onset until the pupil reached
its maximum size. Trials containing a baseline mean pupil size that was
greater than three scaled median absolute deviations from the median
were removed from subsequent analyses (0.2% of all trials).</p>
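<p>The per-trial PDR features just described – baseline-corrected mean and maximum, latency to maximum, and velocity as the slope of a line from onset to maximum – might be computed as follows; the sketch assumes a trace sampled at 500 Hz beginning 200 ms before deviant onset:</p>

```python
import numpy as np

def pdr_features(trace, fs=500, baseline_ms=200, window_ms=3000):
    """Pupil dilation response features for one trial (sketch).

    `trace` is assumed to start baseline_ms before deviant onset and to
    extend at least window_ms past it."""
    nb = int(baseline_ms / 1000 * fs)
    nw = int(window_ms / 1000 * fs)
    baseline = trace[:nb].mean()                 # 200 ms pre-deviant mean
    resp = trace[nb:nb + nw] - baseline          # onset .. +3000 ms
    i_max = int(np.argmax(resp))
    latency_ms = i_max / fs * 1000               # time to maximum dilation
    # velocity: slope of a line from size at onset to the maximum (per ms)
    velocity = (resp[i_max] - resp[0]) / max(latency_ms, 1.0)
    return {"mean": float(resp.mean()), "max": float(resp.max()),
            "latency_ms": latency_ms, "velocity": velocity}
```

For a synthetic trace peaking 1 s after onset, the function recovers the expected latency and maximum.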

<p><italic>Time-Frequency Analyses:</italic> To examine the
spectro-temporal overlap between our varied model predictions and the
observed pupillary signals, we calculated the spectrum of the average
pupillary signal to 8-loop epochs of each stimulus, for each
participant. We then averaged the power at each frequency across all
participants, for each stimulus. We compared the continuous time series
and the power spectral density for the recorded pupil signal for each
stimulus to those predicted by the model predictions convolved with the
pupillary response function. These two average analyses are included for
illustrative purposes; note that the main analysis of interest is on the
level of the single participant, single stimulus, as outlined below.</p>

<p>To compare the fine-grained similarity between the pupil size time
series for any given stimulus to the linear oscillator model prediction
for that stimulus, we computed the Cross Power Spectral Density (CPSD)
between the pupil time series and itself, the model time series and
itself, and the pupil time series and the model time series. The CPSD
was calculated using Welch’s method (<xref ref-type="bibr" rid="b73">73</xref>), with a 4.4 s window and 75%
overlap. For each participant, each stimulus, we computed 1) the CPSD
between the pupil trace and the model prediction for that stimulus and
2) the CPSD between the pupil trace and the model prediction for all
other stimuli, which served as the null distribution of coherence
estimates.</p>

<p>The phase coherence between the pupil and the model for any given
stimulus was defined as the squared absolute value of the pupil-model
CPSD, divided by the power spectral density functions of the CPSD of the
individual signals with themselves. We then calculated a single true and
null coherence estimate for each participant, each stimulus, by finding
the true vs. null coherence at each model-predicted peak frequency (see
Table S2) under 3 Hz and averaging.</p>
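<p>Assuming equal-length, uniformly sampled pupil and model time series (at the model’s 100 Hz resolution), this coherence computation can be sketched in Python with SciPy’s Welch-method estimators:</p>

```python
import numpy as np
from scipy.signal import csd, welch

def pupil_model_coherence(pupil, model, fs=100, win_s=4.4, overlap=0.75):
    """Magnitude-squared coherence from Welch auto-/cross-spectra:
    C_xy(f) = |P_xy(f)|^2 / (P_xx(f) * P_yy(f))."""
    nper = int(win_s * fs)                  # 4.4 s window
    nover = int(nper * overlap)             # 75% overlap
    f, pxy = csd(pupil, model, fs=fs, nperseg=nper, noverlap=nover)
    _, pxx = welch(pupil, fs=fs, nperseg=nper, noverlap=nover)
    _, pyy = welch(model, fs=fs, nperseg=nper, noverlap=nover)
    return f, np.abs(pxy) ** 2 / (pxx * pyy)
```

Two noisy signals sharing a 1 Hz component show high coherence near 1 Hz and low coherence at unrelated frequencies.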
    </sec>
    </sec>

    <sec id="S3">
      <title>Results</title>
    <sec id="S3a">
      <title>Perceptual Thresholds</title>


<p>We tested each of the three alternative predictors of perceptual
thresholds (peak filtered resonator output, full resonator output, and
amplitude envelope) using mixed-effects models via the
<italic>nlme</italic> package (<xref ref-type="bibr" rid="b74">74</xref>) in R (<xref ref-type="bibr" rid="b75">75</xref>). Threshold was the
dependent variable and the given model’s predicted value at the time of
each probe was the fixed effect. Random-effect intercepts were included
for each participant. We calculated effect sizes of fixed effects using
Cohen’s <italic>f<sup>2</sup></italic>, a standardized measure of an
independent variable’s effect size in the context of a multivariate
model (<xref ref-type="bibr" rid="b76">76</xref>). We calculated <italic>f</italic><sup>2</sup> effect sizes
following the guidelines of (<xref ref-type="bibr" rid="b77">77</xref>) for mixed-effects multiple regression
models.</p>

<p>We assessed the relative goodness of model fit using Akaike’s
information criterion (AIC; (<xref ref-type="bibr" rid="b78">78</xref>)). As widely recommended (e.g. (<xref ref-type="bibr" rid="b79">79</xref>)), we
rescaled AIC values to represent the amount of information lost if
choosing an alternative model, as opposed to the preferred model, with
the equation:</p>

<p>∆<sub>i</sub> = AIC<sub>i</sub> - AIC<sub>min</sub></p>

<p>where AIC<sub>min</sub> is the model with the lowest AIC value of all
models considered and AIC<sub>i</sub> is the alternative model under
consideration. Given this equation, AIC<sub>min</sub>, by definition,
has a value of 0 and all other models are expressed in relation.
Typically, models having a value ∆<sub>i</sub> &#x3C; 2 are considered to
have strong support; models with 4 &#x3C; ∆<sub>i</sub> &#x3C; 7 have less
support, and models ∆<sub>i</sub> &#x3E; 10, no support. This
transformation of the AIC value is important as it is free of scaling
constants and sample size effects that influence the raw AIC score. In
our case, because each of our models being compared has the same amount
of complexity, Bayesian Information Criterion (<xref ref-type="bibr" rid="b80">80</xref>) differences are
identical to those calculated for AIC ∆<sub>i</sub> so we do not include
them here.</p>
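<p>The rescaling equation above amounts to a one-line computation; for example, with a dictionary of raw AIC values such as those in Table 1 (function name ours):</p>

```python
def aic_deltas(aics):
    """Rescale raw AIC values to differences from the preferred
    (lowest-AIC) model: delta_i = AIC_i - AIC_min."""
    aic_min = min(aics.values())
    return {name: aic - aic_min for name, aic in aics.items()}
```

By construction the preferred model receives a delta of exactly 0 and all other models are expressed relative to it.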

<p>As Table 1 indicates, our peak filtered model yielded the lowest AIC
value, suggesting that it is strongly preferred over the amplitude
envelope of the audio and decisively preferred over the full resonator
model output as a predictor of perceptual threshold, given this common
metric of model fit. Furthermore, although full reson
output and the amplitude envelope were both significant predictors of
participant thresholds, the effect size was largest for the
peak-filtered reson model compared to the alternatives. However, we note
that no broadly accepted significance test exists for comparing
non-nested mixed-effects models (i.e., each model contains a different
fixed-effect term). As such, we caution that a strong claim of model-fit
superiority would require further testing. Nevertheless, these results
suggest that the peak-filtered resonator model better explains variance
in participants’ thresholds than does the amplitude envelope of the
auditory stimulus or the unfiltered reson output. We interpret this in
favor of participants entraining their attention to endogenously
generated metrical expectations which are represented by the peak
periodicities from our model.</p>

<table-wrap id="t01" position="float">
					<label>Table 1.</label>
					<caption>
						<p>Comparison of three alternative predictive models of
perceptual threshold.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">

    <thead>
      <tr>
        <th>Threshold Predictor</th>
        <th>ß</th>
        <th>SE</th>
        <th><italic>df</italic></th>
        <th><italic>f<sup>2</sup></italic></th>
        <th>AIC</th>
        <th>∆<sub>i</sub></th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td>Peak-filtered reson</td>
        <td>-3.30 **</td>
        <td>.431</td>
        <td>344</td>
        <td>.17</td>
        <td>2082.857</td>
        <td>0</td>
      </tr>
      <tr>
        <td>Full reson</td>
        <td>-0.011**</td>
        <td>.0023</td>
        <td>344</td>
        <td>.064</td>
        <td>2125.737</td>
        <td>42.8</td>
      </tr>
      <tr>
        <td>Amplitude env</td>
        <td>-133.14**</td>
        <td>20.80</td>
        <td>344</td>
        <td>.12</td>
        <td>2090.672</td>
        <td>7.8</td>
      </tr>
    </tbody>
  </table>
					<table-wrap-foot>
						<fn id="FN1">
						<p><italic>Note.</italic> Model estimates were obtained using linear
    mixed-effects models to regress fixed effects of stimulus model type
    on threshold; participant intercept was included as a random effect.
    AIC = Akaike’s Information Criterion (a lower value indicates a more
    preferred model); Cohen’s <italic>f<sup>2</sup></italic> for effect
    size. ** <italic>p</italic> &#x3C; .001.</p>
						</fn>
					</table-wrap-foot>  
</table-wrap>

<p>The negative relationship between peak resonator level and increment
detection threshold is plotted in Figure 3 and visible within most
participants’ data. Note that random slopes were included in the final
model that generated Figure 3 so that participant-level fit could be
visualized and because there is growing consensus that the random
effects structure of linear mixed-effects models should be kept maximal
(<xref ref-type="bibr" rid="b81">81</xref>); however, we wish to note that adding the random slope did not
significantly improve the fit of the model, which is why it was not
included during our model comparisons. Overall, the final peak reson
filtered model had a marginal R<sup>2</sup> of .087 – reflecting an
approximation of the variance explained by the fixed effect of peak
reson output – and a conditional R<sup>2</sup> of .475 – reflecting an
approximation of the variance explained by the overall model (fixed
effect of peak reson output plus random intercepts and slopes for
participants). R<sup>2</sup> estimates were calculated using the
<italic>MuMIn</italic> package in R (<xref ref-type="bibr" rid="b82 b83 b84">82, 83, 84</xref>).</p>

<fig id="fig03" fig-type="figure" position="float">
					<label>Figure 3.</label>
					<caption>
						<p>Increment detection thresholds as a
    function of averaged peak resonator level. Each panel is an
    individual participant’s data. Lines reflect participants’ slopes
    and intercepts as random effects in a mixed-effects model.</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-11-02-l-figure-03.png"/>
				</fig>

<p>In the remainder of the paper, we refrain from using the alternative
full reson model, as it could be confidently rejected and was the weakest of
the three models. We do continue to compare the peak filtered model with
the amplitude envelope; however, we wish to note that a strong
correlation exists between the amplitude envelope predictions and the
peak-filtered predictions at each probed position (r(20) = .90,
<italic>p</italic> &#x3C; .001).</p>
    </sec>

    <sec id="S3b">
      <title>Pupil Dilation Response (PDR)</title>

<p>The average pupil dilation response for each trial type, for each
probed position, in each stimulus, is plotted in Figure 4. Possible
trial types are 1) trials in which a deviant occurred and was detected
by participants (blue), 2) trials in which a deviant occurred and was
not detected by participants (red), and 3) trials in which no deviant
occurred (black). In all cases the data are plotted with respect to the
probed time point. Please recall that the only difference between
deviant and no-deviant trials is that the auditory event at the probed
moment in time is increased in dB SPL, relative to the baseline volume
of the standard pattern. On average, per participant, per probe
position, 12 trials were used in calculating the “hit” average, and 8
trials were used in calculating the “miss” average. For a full report of
the average number and standard deviation of trials used in calculating
each grand average trace plotted in Figure 4, see Table S1.</p>

<fig id="fig04" fig-type="figure" position="float">
					<label>Figure 4.</label>
					<caption>
						<p>Average pupillary responses, across all
participants, to all probed locations in all stimuli. The blue trace
indicates the average pupillary response to trials during which a
deviant occurred and was detected; the red trace represents trials
during which a deviant occurred but was not detected; the black trace
indicates the same moments in time on trials in which no deviant
occurred. All data are time locked to deviant onset. For no-deviant
trials, this refers to the same point in time that the deviant could
have occurred (but did not). Width of each trace is the standard error
of the mean.</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-11-02-l-figure-04.png"/>
				</fig>

<p>As can be seen in Figure 4, the PDR to a deviant is consistent and
stereotyped across all probed time points. There was a significant
difference between mean pupil size on hit vs. missed trials, t(15) =
2.14, <italic>p</italic> = .049, and missed vs. no deviant trials, t(15)
= 4.60, <italic>p</italic> &#x3C; .001. Additional features of the PDR
also varied as a function of whether the deviant was detected or missed.
There was a significant difference between max pupil size on hit vs.
missed trials, t(15) = 4.48, <italic>p</italic> &#x3C; .001, and missed
vs. no-deviant trials, t(15) = 28.45, <italic>p</italic> &#x3C; .001, as
well as a marginally significant difference between pupil latency until
maximum on hit vs. missed trials t(15) = -2.11, <italic>p</italic> =
.052. There was no significant difference between pupil velocity to
maximum on hit vs. miss trials.</p>

<p>Before constructing trial-level predictive models based on specific
features of the PDR, we first assessed correlations between all
predictor and outcome variables. All Pearson correlation coefficients
for both hit and miss trials, across all participants, are reported in
Table 2. We found that baseline pupil size is negatively correlated with
mean and max evoked pupil size, as well as latency to max pupil size and
pupil velocity. Mean evoked pupil size – the standard metric in most
cognitive studies – was strongly positively correlated with max evoked
pupil size, latency to max pupil size, and pupil velocity. It is also
worth noting that decibel contrast (relative to baseline volume) was
negatively correlated with reson output, reflecting our perceptual
threshold results, i.e. moments of low salience required higher dB
contrast to be detected. There was no correlation between dB contrast
and pupil dilation, adding to the literature of mixed findings on this
topic. Reaction time was weakly correlated with baseline pupil size, dB
contrast, and max evoked pupil size, though not mean evoked pupil size,
despite the strong correlation of these two pupil variables with each
other. As the coefficients indicate, these reaction time effects are
very weak at best, possibly due to our use of a standard computer
keyboard which may have introduced jitter in the recording of
responses.</p>

<p>Because of the strong correlations between possible pupil metrics of
interest, in subsequent analyses we used maximum evoked pupil size in
our statistical models, so as not to construct models with collinear
predictors. While it may be argued that baseline pupil size is a more
intuitive metric to use, as it could indicate causality, we wish to
point out that there were no significant differences in baseline pupil
size between hit vs. missed trials t(15) = -0.75, <italic>p</italic> =
.463, while there was a significant difference in max evoked pupil size
between hit and missed trials, as previously indicated. Hence, though
baseline pupil size is strongly correlated with mean and max evoked
pupil size, it is not our predictor of interest in context of this
analysis.</p>

<p>To test if we could predict whether a deviant was detected or not
based on the evoked pupillary response, on a per-trial basis, we fit a
generalized linear mixed-effects logistic regression model. The
generalized linear mixed-effects model (GLMM) included max evoked pupil
size and peak reson output as predictors, participant as a random
intercept, and a binary <italic>hit</italic> (detection of the
increment) or <italic>miss</italic> (non-detection of the increment) as
the dependent variable. The GLMM was fit via maximum likelihood using
the Laplace approximation method and implemented using the
<italic>glmer</italic> function from the lme4 package (<xref ref-type="bibr" rid="b85">85</xref>) in R. Odds
ratios, z statistics, confidence intervals, and p-values are reported
for all fixed effects (Table 3).</p>

<p>As a comparison, we ran the same model but swapped peak reson
prediction for amplitude envelope prediction. We used the
<italic>pROC</italic> package (<xref ref-type="bibr" rid="b86">86</xref>) to calculate the Receiver Operating
Characteristic (ROC) curves. ROC curves compare the true positive
prediction rate against the false positive rate. We compared the area
under the ROC curves (AUC) of the peak reson model vs. the amplitude
envelope model using DeLong’s test for two correlated ROC curves (<xref ref-type="bibr" rid="b87">87</xref>).
The AUC is a common metric for evaluating both the goodness of fit of a
model, as well as the performance of two different models. In this case,
it reflects the probability that a randomly selected ‘hit’ trial is
correctly identified as a ‘hit’ rather than ‘miss’ (<xref ref-type="bibr" rid="b88">88</xref>). With a range of
.5 (chance) to 1 (perfect prediction), higher AUC values indicate better
model performance. The peak reson model had an AUC of .608, while the
amplitude envelope model had an AUC of .606, thus there was no
significant difference between the models (Z = 1.594, <italic>p</italic>
= .111). Both performed significantly above chance, whether chance was
defined as the standard .5 or more conservatively via shuffling of the
pupil and model data (peak reson vs. shuffled chance: Z = 3.79,
<italic>p</italic> &#x3C; .001; amp env vs. shuffled chance: Z = 3.53,
<italic>p</italic> &#x3C; .001).</p>
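<p>The AUC interpretation used here – the probability that a randomly selected ‘hit’ trial outscores a randomly selected ‘miss’ trial – is equivalent to the normalized Mann–Whitney U statistic, sketched below in Python (the reported analysis used the <italic>pROC</italic> package in R; the function name is ours):</p>

```python
import numpy as np

def auc_from_scores(hit_scores, miss_scores):
    """AUC as P(random 'hit' outscores random 'miss'); ties count 1/2."""
    h = np.asarray(hit_scores, float)[:, None]
    m = np.asarray(miss_scores, float)[None, :]
    wins = (h > m).sum() + 0.5 * (h == m).sum()
    return wins / (h.size * m.size)
```

Perfectly separated scores give an AUC of 1; fully overlapping scores give 0.5 (chance).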

<p>Given the similar performance of peak reson and amplitude envelope
models, and the fact that the pupil dilation response to deviants at
different moments of predicted salience is remarkably stereotyped, it is
not possible to be sure whether the pupil dilation response reflects
endogenous meter perception or merely a bottom-up response to the
stimuli. An experiment with a wider range of stimuli, incorporating
non-stationary rhythms, might be well suited to answer this question, as
such stimuli would likely result in a greater difference between the
amplitude envelope and the peak reson filter predictions. Nonetheless,
the finding of a PDR on trials in which participants failed to report
detecting a deviant has implications for future studies and is discussed
in more detail below.</p>

<table-wrap id="t02" position="float">
					<label>Table 2.</label>
					<caption>
						<p>Pearson Correlation Coefficients for all predictor and
outcome variables on ‘hit’ and ‘miss’ trials.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">

    <thead>
      <tr>
        <th>Variables</th>
        <th>1 Hit (Miss)</th>
        <th>2</th>
        <th>3</th>
        <th>4</th>
        <th>5</th>
        <th>6</th>
        <th>7</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td>1. Mean baseline pup</td>
        <td>-</td>
        <td></td>
        <td></td>
        <td></td>
        <td></td>
        <td></td>
        <td></td>
      </tr>
      <tr>
        <td>2. Decibel contrast</td>
        <td>.004 (0)</td>
        <td>-</td>
        <td></td>
        <td></td>
        <td></td>
        <td></td>
        <td></td>
      </tr>
      <tr>
        <td>3. Resonator level</td>
        <td>.012 (0.004)</td>
        <td>-.185**(-.338)**</td>
        <td>-</td>
        <td></td>
        <td></td>
        <td></td>
        <td></td>
      </tr>
      <tr>
        <td>4. Mean evoked pup</td>
        <td>-.811** (-.787)**</td>
        <td>-.001 (.004)</td>
        <td>-.022 (.01)</td>
        <td>-</td>
        <td></td>
        <td></td>
        <td></td>
      </tr>
      <tr>
        <td>5. Max evoked pup</td>
        <td>-.728** (-.742)**</td>
        <td>-.009 (.009)</td>
        <td>-.015 (-.01)</td>
        <td>.868** (.887**)</td>
        <td>-</td>
        <td></td>
        <td></td>
      </tr>
      <tr>
        <td>6. Max latency</td>
        <td>-.443** (-.492)**</td>
        <td>.023 (-.019)</td>
        <td>.015 (-.026)</td>
        <td>.443** (.466**)</td>
        <td>.357** (.417**)</td>
        <td>-</td>
        <td></td>
      </tr>
      <tr>
        <td>7. Pup velocity</td>
        <td>-.144** (-.142)**</td>
        <td>-.055* (.013)</td>
        <td>.008 (.023)</td>
        <td>.245** (.230**)</td>
        <td>.219** (.196**)</td>
        <td>.163** (.168**)</td>
        <td>-</td>
      </tr>
      <tr>
        <td>8. Reaction time</td>
        <td>.045* (-)</td>
        <td>.077** (-)</td>
        <td>.006 (-)</td>
        <td>-.01 (-)</td>
        <td>-.039* (-)</td>
        <td>.011 (-)</td>
        <td>-.003 (-)</td>
      </tr>
    </tbody>
  </table>
					<table-wrap-foot>
						<fn id="FN2">
						<p><italic>Note.</italic> ‘Miss’ trial correlation coefficients are in
parentheses; there is no reaction time for a ‘miss’ trial. Pup refers to
pupil size. Decibel contrast is the change in dB, relative to baseline
volume, of a deviant on any given trial. For the ‘hit’ data, df = 3108,
for the ‘miss’ data, df = 2167, *<italic>p</italic> &#x3C; .05,
**<italic>p</italic> &#x3C; .001</p>
						</fn>
					</table-wrap-foot>  
</table-wrap>

<table-wrap id="t03" position="float">
					<label>Table 3.</label>
					<caption>
						<p>Generalized linear mixed-effects logistic regression model
with dependent variable hit (1) or miss (0).</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">

    <thead>
      <tr>
        <th></th>
        <th></th>
        <th></th>
        <th colspan="2">95% CI for OR</th>

      </tr>
    </thead>
    <tbody>
      <tr>
        <td></td>
        <td>OR</td>
        <td><italic>Z</italic> stat</td>
        <td>Lower</td>
        <td>Upper</td>
      </tr>
      <tr>
        <td>Max pupil size</td>
        <td>1.28 **</td>
        <td>8.35</td>
        <td>1.21</td>
        <td>1.16</td>
      </tr>
      <tr>
        <td>Resonator output</td>
        <td>1.26 **</td>
        <td>3.86</td>
        <td>1.12</td>
        <td>1.41</td>
      </tr>
    </tbody>
  </table>
					<table-wrap-foot>
						<fn id="FN3">
						<p><italic>Note</italic>. 5279 observations. OR = Odds
Ratio<italic>.</italic> CI = Confidence Interval. **<italic>p</italic>
&#x3C; .001</p>
						</fn>
					</table-wrap-foot>  
</table-wrap>


    </sec>

    <sec id="S3c">
      <title>Continuous pupil signal</title>

<p>While the pupil dilation response to deviants is of interest with
regard to indexing change detection, we also wished to examine the
continuous pupillary response to our rhythmic stimuli. Specifically, we
wanted to assess whether the pupil entrained to the rhythmic patterns
and, if so, whether such dynamic changes in pupil size were unique for
each stimulus.</p>

<p>As can be seen in Figure 5, there appears to be a stimulus-specific
correspondence between the model predicted pupil time series (red) and
the observed continuous pupillary signal (black). Note the remarkably
similar predictions of the peak-filtered oscillator model (solid red) and the
amplitude envelope model (dashed red). This correspondence between the
two predictions and the recorded pupil data was also observable in the
Power Spectral Density (PSD) estimates for each stimulus (Figure 6).
Similar to studies of pupil oscillations in the visual domain (<xref ref-type="bibr" rid="b39">39</xref>), we
do not see much power in the pupil spectrum beyond about 3 Hz;
therefore, we plot frequencies in the range of 0 to 3 Hz (Figure 6). In
Figure 6, it is clear that the spectra of the pupil signal and that of
the two model predictions overlap. However, this analysis is not
sufficient to infer that the pupil is tracking our model output in a
stimulus-specific way, though it does indicate pupillary entrainment to
the prominent periodicities in the stimuli.</p>
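<p>For readers who wish to reproduce this type of spectral comparison, the PSD estimate can be sketched with standard tools. The following is a minimal illustration, not the study&#8217;s actual pipeline: the sampling rate, the Welch window length, and the toy 2 Hz pupil trace are all our own assumptions.</p>

```python
import numpy as np
from scipy.signal import welch

np.random.seed(0)
fs = 60.0                      # assumed pupil sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)   # one minute of (synthetic) data

# toy pupil trace: a 2 Hz stimulus periodicity buried in noise
pupil = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.randn(t.size)

# Welch PSD estimate, then keep only the 0-3 Hz band, as in Figure 6
freqs, psd = welch(pupil, fs=fs, nperseg=int(8 * fs))
band = freqs <= 3.0
freqs, psd = freqs[band], psd[band]

peak_freq = freqs[np.argmax(psd)]  # dominant pupillary periodicity
```

<p>A real analysis would substitute the recorded pupil time series and overlay the model-predicted PSD on the same axes.</p>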

<fig id="fig05" fig-type="figure" position="float">
					<label>Figure 5.</label>
					<caption>
						<p>Average pupillary responses, across all
participants, to 8-loop epochs of each stimulus (black), compared to the
peak reson filter model-predicted pupillary responses (solid red) and
the amplitude envelope-predicted pupillary responses (dotted red).</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-11-02-l-figure-05.png"/>
				</fig>

<fig id="fig06" fig-type="figure" position="float">
					<label>Figure 6.</label>
					<caption>
						<p>Average pupillary power spectral density
(PSD) across participants (black) vs. peak reson filter model-predicted
PSD for each stimulus (solid red) and amplitude envelope-predicted PSD
(dashed red).</p>
					</caption>
					<graphic id="graph06" xlink:href="jemr-11-02-l-figure-06.png"/>
				</fig>        


<p>To examine the relationship between each participant’s pupil time
series and the model prediction for each stimulus, we examined their
phase coherence. Because the peak reson and amplitude envelope models
are very similar in both the temporal and spectral domains, we computed
phase coherence for only the peak reson model. An additional reason was
that the peak reson model outputs predicted salient metric frequencies
at which coherence can be calculated on a theoretical basis, whereas,
with the amplitude envelope model, one would have to devise a comparable
method for picking peaks in the envelope spectrum and ensuring that they
are of a similar number, spacing, and theoretical relevance.</p>

<p>Shuffling the stimulus labels allowed us to compute null coherence
estimates (see <italic>Time Frequency Analyses</italic> for more
details). The average true vs. null coherence value for
each participant, across each model-predicted peak frequency, was subjected to a paired samples t-test, revealing a
significant difference in the true vs. null distributions for the peak
reson-filtered model, t(14) = 16.56, <italic>p</italic> &#x3C; .001. Thus,
we can conclude that the changing dynamics of pupil size were entrained
to the model predictions for each stimulus in a unique way. For
illustration, we have plotted the average coherence, across
participants, for each stimulus, in Figure 7 (black). The null coherence
estimate is plotted in magenta. We note that similar results would
likely have been obtained by calculating the coherence between the
amplitude envelope model and the pupil signal.</p>
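<p>The logic of the true vs. null comparison can be sketched as follows. This is an illustration under stated assumptions (synthetic signals, a magnitude-squared coherence estimate at a single model-predicted frequency, and a &#8216;wrong-stimulus&#8217; pairing standing in for the label shuffling); it is not the authors&#8217; exact implementation.</p>

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs = 60.0                       # assumed pupil sampling rate (Hz)
t = np.arange(0, 120, 1 / fs)

# toy model prediction and a pupil trace partially entrained to it
model = np.sin(2 * np.pi * 2.0 * t)
pupil = 0.8 * np.sin(2 * np.pi * 2.0 * t + 0.4) + rng.normal(0, 1.0, t.size)

def coh_at(x, y, f0):
    """Magnitude-squared coherence at the frequency bin nearest f0."""
    f, c = coherence(x, y, fs=fs, nperseg=int(16 * fs))
    return c[np.argmin(np.abs(f - f0))]

# true pairing: pupil vs. the prediction for the stimulus actually heard
true_coh = coh_at(pupil, model, 2.0)
# null pairing: pupil vs. the prediction for a mismatched stimulus
null_coh = coh_at(pupil, np.sin(2 * np.pi * 1.3 * t), 2.0)
```

<p>In the study, true and null coherence values were averaged across the model-predicted peak frequencies for each participant and compared with a paired-samples t-test.</p>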

<fig id="fig07" fig-type="figure" position="float">
					<label>Figure 7.</label>
					<caption>
						<p>Average phase coherence across participants between the
pupil and peak reson filter model, for each stimulus (black). Average
null coherence is plotted in magenta. The width of each trace represents
standard error of the mean.</p>
					</caption>
					<graphic id="graph07" xlink:href="jemr-11-02-l-figure-07.png"/>
				</fig>
    </sec>
    </sec>
    
    <sec id="S4">
      <title>Discussion</title>

<p>The current experiment used a linear oscillator model to predict both
perceptual detection thresholds and the pupil signal during a continuous
auditory psychophysical task. During the task, participants listened to
repeating percussion loops and detected momentary intensity increments.
We hypothesized that the linear oscillator model would predict
perceptual thresholds for detecting intensity deviants that were
adaptively embedded into our stimuli, as well as the continuous
pupillary response to the stimuli.</p>

<p>The linear oscillator model reflects the predictions of Dynamic
Attending Theory (DAT), which posits that attention can become entrained
by an external (quasi)-periodic stimulus. The model is driven largely by
onsets detected in the acoustic envelope of the input signal, which get
fed through a bank of linear oscillators (reson filters). From there it
is possible to calculate which oscillators are most active, mask the
output at those peak frequencies, and average over time. Throughout the
paper, we considered this peak-filtered signal the ideal prediction of
temporal salience, as in (<xref ref-type="bibr" rid="b60">60</xref>); however, for comparison, we also tested
how well output from all resonators would predict our data, as well as
how well the amplitude envelope alone (without any processing through
the oscillator model) would do in predicting both perceptual thresholds
and the pupillary signal.</p>
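<p>The pipeline described above can be sketched with a small reson-filter bank. Everything here is illustrative: the sampling rate, pole radius, filter spacing, and the 2 Hz toy onset train are our assumptions, not the parameters of the model used in the study.</p>

```python
import numpy as np
from scipy.signal import lfilter

fs = 100.0                     # assumed envelope sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)

# toy onset signal: an impulse every 0.5 s (a 2 Hz pulse train)
onsets = np.zeros(t.size)
onsets[::int(fs / 2)] = 1.0
onsets -= onsets.mean()        # remove DC so low resonators are not trivially favored

def reson(x, f0, r=0.99):
    """Two-pole resonator (reson filter) tuned to f0 Hz."""
    w0 = 2 * np.pi * f0 / fs
    return lfilter([1.0], [1.0, -2 * r * np.cos(w0), r ** 2], x)

# bank of oscillators spanning the motor-relevant range (.25-10 Hz)
bank_freqs = np.arange(0.25, 10.25, 0.25)
energies = np.array([np.mean(reson(onsets, f) ** 2) for f in bank_freqs])
peak_f = bank_freqs[np.argmax(energies)]  # most active oscillator
```

<p>From here, one would mask the bank output at such peak frequencies and average over time to obtain the temporal salience prediction.</p>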

<p>The peak-filtered model was best at predicting perceptual thresholds,
providing an important replication and extension of our previous study
(<xref ref-type="bibr" rid="b60">60</xref>). In the present study, we used only complex stimuli and intensity
increments, but our previous study showed the same predictive effects of
the peak-filtered model for both intensity increments and decrements, as
well as simple and complex stimuli (<xref ref-type="bibr" rid="b60">60</xref>). We assume that such results
imply that the peaks extracted by our model are attentionally relevant
periodicities that guide listeners’ attention throughout complex
auditory scenes. The fact that perceptual thresholds were higher at
moments of low predicted salience, i.e. a deviant needed to be louder at
that moment in time for participants to hear it and vice versa,
indicates that attention is not evenly distributed throughout time, in
line with the predictions of DAT. However, we note that the linear
oscillator model’s temporal salience prediction was strongly correlated
with the magnitude of the amplitude envelope of the signal.</p>

<p>Indeed, when it comes to the pupil signal, both the peak-filtered
model and the amplitude envelope performed almost identically. This
similarity likely results from several factors: 1) the rhythms
used in the current study are stationary (unchanging over time), and,
though some moments lack acoustic energy, the prominent periodicities
are present in the acoustic signal throughout. Hence, the
Fourier Transform of the amplitude envelope yields roughly identical
peak periodicities to that of the model. 2) Convolving both signals with
the pupillary response function smears out most subtle differences
between the two signals, making them even more similar.</p>
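<p>The smearing effect described in point 2 is easy to demonstrate. Below is a minimal sketch assuming the Hoeks and Levelt form of the pupillary response function (n = 10.1, t_max = 930 ms) and two toy predictor time series that differ only slightly; the sampling rate and predictors are our own illustrative choices.</p>

```python
import numpy as np

fs = 60.0  # assumed sampling rate (Hz)

def pupillary_response_function(fs, dur=4.0, n=10.1, t_max=0.930):
    """Erlang-gamma pupillary response function (Hoeks & Levelt form),
    normalized to unit peak."""
    t = np.arange(0, dur, 1 / fs)
    h = t ** n * np.exp(-n * t / t_max)
    return h / h.max()

t = np.arange(0, 30, 1 / fs)
# two pulse-train predictors that differ by a small phase shift
pred_a = (np.sin(2 * np.pi * 2.0 * t) > 0.9).astype(float)
pred_b = (np.sin(2 * np.pi * 2.0 * t + 0.2) > 0.9).astype(float)

prf = pupillary_response_function(fs)
pupil_a = np.convolve(pred_a, prf)[: t.size]
pupil_b = np.convolve(pred_b, prf)[: t.size]

# the slow PRF acts as a low-pass filter: after convolution the two
# predictions become nearly indistinguishable
raw_r = np.corrcoef(pred_a, pred_b)[0, 1]
smoothed_r = np.corrcoef(pupil_a, pupil_b)[0, 1]
```

<p>The correlation between the two convolved predictions exceeds that between the raw predictors, illustrating why subtly different salience models yield nearly identical pupil predictions.</p>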

<p>Regardless of the ambiguity regarding which model may be a better
predictor, the pupillary results reported in this paper are exciting
nonetheless. First and foremost, we show that the pupil can entrain to a
rhythmic auditory stimulus. To our knowledge, we are the first to report
such a finding, though others have reported pupillary entrainment in the
visual domain (<xref ref-type="bibr" rid="b39 b40">39, 40</xref>). The continuous pupillary signal and the pupil
spectrum to each stimulus were both well predicted by the linear
oscillator model and the amplitude envelope of the audio signal. That
pupil dilation/constriction dynamics, controlled by the smooth dilator
and sphincter muscles of the iris, respectively, entrain to auditory
stimuli is in line with a large literature on music-evoked entrainment.
Though the pupil has never been mentioned in this literature, other
areas of the autonomic nervous system have been shown to entrain to
music (<xref ref-type="bibr" rid="b41">41</xref>). It remains to be tested how pupillary oscillations might
relate to cortical neural oscillations, as highlighted in the
introduction of this paper. Are pupillary delta oscillations
phase-locked to cortical delta? Do cortical steady-state evoked
potentials overlap with those of the pupil? Pupillometry is a more
mobile and cost-effective method than EEG; as such, characterizing the
relationship between pupillary and cortical responses to music will
hopefully allow future studies to use pupillometry in situations that
otherwise might have required EEG.</p>

<p>Furthermore, we have shown not only that the pupil entrains to the
prominent periodicities present in our stimuli, but also that the
oscillatory pupillary response to each stimulus is unique. These results
extend those of (<xref ref-type="bibr" rid="b55">55</xref>) and speak to the effectiveness of using
pupillometry in the context of music cognition studies. Unlike (<xref ref-type="bibr" rid="b55">55</xref>), we
did not use deconvolution or dynamic time-warping to assess the fit of
our pupil data to our stimuli; rather, we took a forward approach to
modeling our stimuli, convolving stimulus-specific predictions with a
pupillary response function, effectively removing the need for
algorithms like dynamic time-warping or fitting optimizations. We hope
that this approach will prove beneficial for others, especially given
the simplicity in calculating the amplitude envelope of a signal and
convolving it with the pupillary response function. With regard to our
linear oscillator model, future work will use a wider and more
temporally dynamic variety of stimuli to assess the power of our linear
oscillator model vs. the amplitude envelope in predicting the
time-varying pupil signal across a diverse range of musical cases.
Hopefully such a study will shed more light on the issue of whether
pupillary oscillations are an evoked response or a reflection of
attention and in what contexts one might need to use a more complex
model, if any.</p>

<p>Even if the oscillations are driven more by the stimuli than by
endogenous attention, such oscillations nevertheless shape subsequent
input and reflect, in some way, the likely attended features of the
auditory, and perhaps visual, input. Because pupil dilation blurs the
retinal image and widens the visual receptive field, while pupil
constriction sharpens the retinal image and narrows the visual receptive
field (<xref ref-type="bibr" rid="b40">40</xref>), oscillations in pupil size that are driven by auditory
stimuli may also have ramifications for visual attention (<xref ref-type="bibr" rid="b89">89</xref>) and
audiovisual integration. For example, visual
attention should be more spatially spread (greater sensitivity) at
moments of greater auditory temporal salience (larger pupil dilation).
It is possible, however, that such small changes in pupil size elicited
by music are negligible with respect to visual sensitivity and/or acuity
(see (<xref ref-type="bibr" rid="b90">90</xref>) for a discussion). In short, such interactions remain to be
empirically tested.</p>

<p>Another important finding of the current study is the pupil dilation
response (PDR) to deviants. Of particular interest is the result that
the pupil responds to deviants even when participants do not report
hearing them, providing further evidence that pupillary responses are
indicators of preconscious processing (<xref ref-type="bibr" rid="b91">91</xref>). However, the current results
raise an additional important question of whether the PDR might be more
akin to the mismatch negativity (MMN) than the P3, which requires
conscious attention to be elicited (<xref ref-type="bibr" rid="b92">92</xref>). Others have shown a PDR to
deviants in the absence of attention (<xref ref-type="bibr" rid="b51 b54">51, 54</xref>) and here we show a PDR to
deviants that did not reach participants’ decision thresholds, or
possibly conscious awareness, despite their focused attention. Hence,
though there is evidence connecting the P3a to the PDR and the LC-NE
system (<xref ref-type="bibr" rid="b43 b93">43, 93</xref>), an important avenue of future research will be to
disentangle the relationship between the PDR, P3a, and MMN, which can
occur without the involvement of conscious attention (<xref ref-type="bibr" rid="b94">94</xref>) or perception
(<xref ref-type="bibr" rid="b95">95</xref>). While all three of these measures can be used as indices of
deviance detection, the P3a has been proposed to reflect comparison of
sensory input with a previously formed mental expectation that is
distributed across sensory and motor regions (<xref ref-type="bibr" rid="b96 b97">96, 97</xref>), whereas the MMN
and PDR have been interpreted as more sensory-driven, pre-attentive
comparison processes.</p>

<p>Though we are enthusiastic about the PDR results, a few
considerations remain. Since the present experiment only utilized
intensity increments as deviants, it could be argued that the pupil
dilation response observed on trials during which a deviant was
presented below perceptual threshold does not reflect subthreshold
processes but rather a linear response to stimulus amplitude. To this
argument, we point to the fact that there was no correlation between the
mean or max evoked pupil size on any given trial and the contrast in dB
SPL on that trial. However, to further address this possible alternative
interpretation, we conducted an experiment involving both intensity
increments and decrements. Those data show the same PDR patterns for
both increments and decrements, hit vs. missed trials, as reported in
the current study (<xref ref-type="bibr" rid="b98">98</xref>). In addition, previous studies of rhythmic
violations showed a standard PDR to the omission of an event (e.g.
(<xref ref-type="bibr" rid="b54">54</xref>)), suggesting that the PDR is not specific to intensity increment
deviance.</p>

<p>An additional critique might be that the difference in pupil size
between hit and missed trials arises because hit trials require a button
press while miss trials do not. Though it may be the case that the
additional motor response required to report deviance detection results
in a larger pupil size, this is unlikely to fully account for the
difference in results (<xref ref-type="bibr" rid="b52 b53">52, 53</xref>). Critically, even if a button press
results in a greater pupil dilation, this does not change the fact that
a PDR is observed on trials in which a deviant was presented but not
reported as heard (uncontaminated by a button press).</p>

<p>In summary, our study contributes to a growing literature emphasizing
the benefits of using eye-tracking in musical contexts (for further
examples, please see the other articles in this Special Issue on Music
&#x26; Eye-Tracking). We have shown that the pupil of the eye can
reliably index deviance detection, as well as rhythmic entrainment.
Considered in conjunction with previous studies from our lab, the linear
oscillator model utilized in this paper is a valuable predictor on
multiple scales – tapping and large body movements (<xref ref-type="bibr" rid="b57 b58 b59">57, 58, 59</xref>), perceptual
thresholds (<xref ref-type="bibr" rid="b60">60</xref>), and pupil dynamics (current study). In general, the
model can explain aspects of motor entrainment within a range of .25 –
10 Hz – the typical range of human motor movements. Future work should
further compare the strengths and limitations of models of rhythmic
attending (e.g. (<xref ref-type="bibr" rid="b27 b99">27, 99</xref>)), the added benefits of such models over simpler
predictors such as the amplitude envelope, and the musical contexts in
which one model is more effective than another.</p>
    </sec>

    <sec id="S5" sec-type="COI-statement">
      <title>Ethics and Conflict of Interest</title>

<p>The authors declare that the contents of the article are in agreement
with the ethics described in
<ext-link ext-link-type="uri" xlink:href="http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html" xlink:show="new">http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html</ext-link>
 and that
there is no conflict of interest regarding the publication of this
paper.</p>
    </sec>

    <sec id="S6">
      <title>Acknowledgements</title>

<p>This research was supported in part by LF’s Neuroscience Graduate
Group fellowship, ARCS Foundation scholarship, and Ling-Lie Chau Student
Award for Brain Research, as well as a Templeton Advanced Research
Program grant from the Metanexus Institute to PJ.</p>

<p>We wish to thank Dr. Jeff Rector for providing the communication
layer between the MAX/MSP software and the eye-tracking computer, and
Dr. Peter Keller for constructing the rhythmic patterns used in this
study.</p>
    </sec>
</body>
<back>
<ref-list>
<ref id="b78"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Akaike</surname>, <given-names>H.</given-names></name></person-group> (<year>1974</year>). <article-title>A new look at the statistical model identification.</article-title> <source>IEEE Transactions on Automatic Control</source>, <volume>19</volume>(<issue>6</issue>), <fpage>716</fpage>–<lpage>723</lpage>. <pub-id pub-id-type="doi">10.1109/TAC.1974.1100705</pub-id><issn>0018-9286</issn></mixed-citation></ref>
<ref id="b95"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Allen</surname>, <given-names>J.</given-names></name>, <name><surname>Kraus</surname>, <given-names>N.</given-names></name>, &#x26; <name><surname>Bradlow</surname>, <given-names>A.</given-names></name></person-group> (<year>2000</year>). <article-title>Neural representation of consciously imperceptible speech sound differences.</article-title> <source>Perception &#x26; Psychophysics</source>, <volume>62</volume>(<issue>7</issue>), <fpage>1383</fpage>–<lpage>1393</lpage>. <pub-id pub-id-type="doi">10.3758/BF03212140</pub-id><pub-id pub-id-type="pmid">11143450</pub-id><issn>0031-5117</issn></mixed-citation></ref>
<ref id="b35"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Aston-Jones</surname>, <given-names>G.</given-names></name>, <name><surname>Rajkowski</surname>, <given-names>J.</given-names></name>, <name><surname>Kubiak</surname>, <given-names>P.</given-names></name>, &#x26; <name><surname>Alexinsky</surname>, <given-names>T.</given-names></name></person-group> (<year>1994</year>). <article-title>Locus coeruleus neurons in monkey are selectively activated by attended cues in a vigilance task.</article-title> <source>The Journal of Neuroscience : The Official Journal of the Society for Neuroscience</source>, <volume>14</volume>(<issue>7</issue>), <fpage>4467</fpage>&#8211;<lpage>4480</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.14-07-04467.1994</pub-id><pub-id pub-id-type="pmid">8027789</pub-id><issn>0270-6474</issn></mixed-citation></ref>
<ref id="b81"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Barr</surname>, <given-names>D. J.</given-names></name>, <name><surname>Levy</surname>, <given-names>R.</given-names></name>, <name><surname>Scheepers</surname>, <given-names>C.</given-names></name>, &#x26; <name><surname>Tily</surname>, <given-names>H. J.</given-names></name></person-group> (<year>2013</year>). <article-title>Random effects structure for confirmatory hypothesis testing: Keep it maximal.</article-title> <source>Journal of Memory and Language</source>, <volume>68</volume>(<issue>3</issue>), <fpage>255</fpage>–<lpage>278</lpage>. <pub-id pub-id-type="doi">10.1016/j.jml.2012.11.001</pub-id></mixed-citation></ref>
<ref id="b85"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Bates</surname>, <given-names>D.</given-names></name>, <name><surname>Maechler</surname>, <given-names>M.</given-names></name>, <name><surname>Bolker</surname>, <given-names>B.</given-names></name>, &#x26; <name><surname>Walker</surname>, <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>Fitting linear mixed-effects models using lme4.</article-title> <source>Journal of Statistical Software</source>, <volume>67</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>48</lpage>. <pub-id pub-id-type="doi">10.18637/jss.v067.i01</pub-id><issn>1548-7660</issn></mixed-citation></ref>
<ref id="b49"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Beatty</surname>, <given-names>J.</given-names></name></person-group> (<year>1982</year>). <article-title>Phasic not tonic pupillary responses vary with auditory vigilance performance.</article-title> <source>Psychophysiology</source>, <volume>19</volume>(<issue>2</issue>), <fpage>167</fpage>&#8211;<lpage>172</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.1982.tb02540.x</pub-id><pub-id pub-id-type="pmid">7071295</pub-id><issn>0048-5772</issn></mixed-citation></ref>
<ref id="b34"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Berger</surname>, <given-names>C. C.</given-names></name>, &#x26; <name><surname>Ehrsson</surname>, <given-names>H. H.</given-names></name></person-group> (<year>2013</year>). <article-title>Mental imagery changes multisensory perception.</article-title> <source>Current Biology</source>, <volume>23</volume>(<issue>14</issue>), <fpage>1367</fpage>&#8211;<lpage>1372</lpage>. <pub-id pub-id-type="doi">10.1016/j.cub.2013.06.012</pub-id><pub-id pub-id-type="pmid">23810539</pub-id><issn>0960-9822</issn></mixed-citation></ref>
<ref id="b2"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Bergeson</surname>, <given-names>T. R.</given-names></name>, &#x26; <name><surname>Trehub</surname>, <given-names>S. E.</given-names></name></person-group> (<year>2006</year>). <article-title>Infants&#8217; Perception of Rhythmic Patterns.</article-title> <source>Music Perception: An Interdisciplinary Journal</source>, <volume>23</volume>(<issue>4</issue>), <fpage>345</fpage>–<lpage>360</lpage>. <pub-id pub-id-type="doi">10.1525/mp.2006.23.4.345</pub-id></mixed-citation></ref>
<ref id="b36"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Berridge</surname>, <given-names>C. W.</given-names></name>, &#x26; <name><surname>Waterhouse</surname>, <given-names>B. D.</given-names></name></person-group> (<year>2003</year>). <article-title>The locus coeruleus-noradrenergic system: Modulation of behavioral state and state-dependent cognitive processes.</article-title> <source>Brain Research. Brain Research Reviews</source>, <volume>42</volume>(<issue>1</issue>), <fpage>33</fpage>&#8211;<lpage>84</lpage>. <pub-id pub-id-type="doi">10.1016/S0165-0173(03)00143-7</pub-id><pub-id pub-id-type="pmid">12668290</pub-id><issn>0165-0173</issn></mixed-citation></ref>
<ref id="b7"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Bolger</surname>, <given-names>D.</given-names></name>, <name><surname>Coull</surname>, <given-names>J. T.</given-names></name>, &#x26; <name><surname>Schön</surname>, <given-names>D.</given-names></name></person-group> (<year>2014</year>). <article-title>Metrical rhythm implicitly orients attention in time as indexed by improved target detection and left inferior parietal activation.</article-title> <source>Journal of Cognitive Neuroscience</source>, <volume>26</volume>(<issue>3</issue>), <fpage>593</fpage>–<lpage>605</lpage>. <pub-id pub-id-type="doi">10.1162/jocn_a_00511</pub-id><pub-id pub-id-type="pmid">24168222</pub-id><issn>0898-929X</issn></mixed-citation></ref>
<ref id="b6"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Bolger</surname>, <given-names>D.</given-names></name>, <name><surname>Trost</surname>, <given-names>W.</given-names></name>, &#x26; <name><surname>Schön</surname>, <given-names>D.</given-names></name></person-group> (<year>2013</year>). <article-title>Rhythm implicitly affects temporal orienting of attention across modalities.</article-title> <source>Acta Psychologica</source>, <volume>142</volume>(<issue>2</issue>), <fpage>238</fpage>–<lpage>244</lpage>. <pub-id pub-id-type="doi">10.1016/j.actpsy.2012.11.012</pub-id><pub-id pub-id-type="pmid">23357092</pub-id><issn>0001-6918</issn></mixed-citation></ref>
<ref id="b79"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Burnham</surname>, <given-names>K. P.</given-names></name>, &#x26; <name><surname>Anderson</surname>, <given-names>D. R.</given-names></name></person-group> (<year>2004</year>). <article-title>Multimodel inference: Understanding AIC and BIC in model selection.</article-title> <source>Sociological Methods &#x26; Research</source>, <volume>33</volume>(<issue>2</issue>), <fpage>261</fpage>–<lpage>304</lpage>. <pub-id pub-id-type="doi">10.1177/0049124104268644</pub-id><issn>0049-1241</issn></mixed-citation></ref>
<ref id="b19"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Buzsáki</surname>, <given-names>G.</given-names></name>, &#x26; <name><surname>Draguhn</surname>, <given-names>A.</given-names></name></person-group> (<year>2004</year>). <article-title>Neuronal oscillations in cortical networks.</article-title> <source>Science</source>, <volume>304</volume>, <fpage>1926</fpage>–<lpage>1929</lpage>. <pub-id pub-id-type="doi">10.1126/science.1099745</pub-id></mixed-citation></ref>
<ref id="b5"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Cason</surname>, <given-names>N.</given-names></name>, &#x26; <name><surname>Schön</surname>, <given-names>D.</given-names></name></person-group> (<year>2012</year>). <article-title>Rhythmic priming enhances the phonological processing of speech.</article-title> <source>Neuropsychologia</source>, <volume>50</volume>(<issue>11</issue>), <fpage>2652</fpage>–<lpage>2658</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2012.07.018</pub-id><pub-id pub-id-type="pmid">22828660</pub-id><issn>0028-3932</issn></mixed-citation></ref>
<ref id="b76"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Cohen</surname>, <given-names>J.</given-names></name></person-group> (<year>1988</year>). <source>Statistical power analysis for the behavioral sciences</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Routledge</publisher-name>.</mixed-citation></ref>
<ref id="b54"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Damsma</surname>, <given-names>A.</given-names></name>, &#x26; <name><surname>van Rijn</surname>, <given-names>H.</given-names></name></person-group> (<year>2017</year>). <article-title>Pupillary response indexes the metrical hierarchy of unattended rhythmic violations.</article-title> <source>Brain and Cognition</source>, <volume>111</volume>, <fpage>95</fpage>&#8211;<lpage>103</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandc.2016.10.004</pub-id><pub-id pub-id-type="pmid">27816784</pub-id><issn>0278-2626</issn></mixed-citation></ref>
<ref id="b40"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Daniels</surname>, <given-names>L. B.</given-names></name>, <name><surname>Nichols</surname>, <given-names>D. F.</given-names></name>, <name><surname>Seifert</surname>, <given-names>M. S.</given-names></name>, &#x26; <name><surname>Hock</surname>, <given-names>H. S.</given-names></name></person-group> (<year>2012</year>). <article-title>Changes in pupil diameter entrained by cortically initiated changes in attention.</article-title> <source>Visual Neuroscience</source>, <volume>29</volume>(<issue>2</issue>), <fpage>131</fpage>&#8211;<lpage>142</lpage>. <pub-id pub-id-type="doi">10.1017/S0952523812000077</pub-id><pub-id pub-id-type="pmid">22391296</pub-id><issn>0952-5238</issn></mixed-citation></ref>
<ref id="b56"><mixed-citation publication-type="other" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Davies</surname> <given-names>MEP</given-names></name>, <name><surname>Degara</surname> <given-names>N</given-names></name>, <name><surname>Plumbley</surname> <given-names>MD</given-names></name></person-group>. <article-title>Evaluation Methods for Musical Audio Beat Tracking Algorithms.</article-title> Technical Report C4DM-TR-09-06; <year>2009</year>.</mixed-citation></ref>
<ref id="b87"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>DeLong</surname>, <given-names>E. R.</given-names></name>, <name><surname>DeLong</surname>, <given-names>D. M.</given-names></name>, &#x26; <name><surname>Clarke-Pearson</surname>, <given-names>D. L.</given-names></name></person-group> (<year>1988</year>). <article-title>Comparing the areas under two or more correlated receiver operating characteristic curves: A nonparametric approach.</article-title> <source>Biometrics</source>, <volume>44</volume>(<issue>3</issue>), <fpage>837</fpage>–<lpage>845</lpage>. <pub-id pub-id-type="doi">10.2307/2531595</pub-id><pub-id pub-id-type="pmid">3203132</pub-id><issn>0006-341X</issn></mixed-citation></ref>
<ref id="b52"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Einh&#228;user</surname>, <given-names>W.</given-names></name>, <name><surname>Stout</surname>, <given-names>J.</given-names></name>, <name><surname>Koch</surname>, <given-names>C.</given-names></name>, &#x26; <name><surname>Carter</surname>, <given-names>O.</given-names></name></person-group> (<year>2008</year>). <article-title>Pupil dilation reflects perceptual selection and predicts subsequent stability in perceptual rivalry.</article-title> <source>Proceedings of the National Academy of Sciences of the United States of America</source>, <volume>105</volume>(<issue>5</issue>), <fpage>1704</fpage>&#8211;<lpage>1709</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0707727105</pub-id><pub-id pub-id-type="pmid">18250340</pub-id><issn>0027-8424</issn></mixed-citation></ref>
<ref id="b8"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Escoffier</surname>, <given-names>N.</given-names></name>, <name><surname>Sheng</surname>, <given-names>D. Y.</given-names></name>, &#x26; <name><surname>Schirmer</surname>, <given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>Unattended musical beats enhance visual processing.</article-title> <source>Acta Psychologica</source>, <volume>135</volume>(<issue>1</issue>), <fpage>12</fpage>–<lpage>16</lpage>. <pub-id pub-id-type="doi">10.1016/j.actpsy.2010.04.005</pub-id><pub-id pub-id-type="pmid">20451167</pub-id><issn>0001-6918</issn></mixed-citation></ref>
<ref id="b98"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><name><surname>Fink</surname> <given-names>L</given-names></name>, <name><surname>Hurley</surname> <given-names>B</given-names></name>, <name><surname>Geng</surname> <given-names>J</given-names></name>, <name><surname>Janata</surname> <given-names>P</given-names></name></person-group>. <article-title>Predicting attention to auditory rhythms using a linear oscillator model and pupillometry.</article-title> In: <source>Proceedings of the Conference on Music &#x26; Eye-Tracking</source>. <conf-loc>Frankfurt, Germany</conf-loc>; <year>2017</year>.</mixed-citation></ref>
<ref id="b99"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Forth</surname>, <given-names>J.</given-names></name>, <name><surname>Agres</surname>, <given-names>K.</given-names></name>, <name><surname>Purver</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Wiggins</surname>, <given-names>G. A.</given-names></name></person-group> (<year>2016</year>). <article-title>Entraining IDyOT: Timing in the Information Dynamics of Thinking.</article-title> <source>Frontiers in Psychology</source>, <volume>7</volume>, <fpage>1575</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2016.01575</pub-id><pub-id pub-id-type="pmid">27803682</pub-id><issn>1664-1078</issn></mixed-citation></ref>
<ref id="b46"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Gingras</surname>, <given-names>B.</given-names></name>, <name><surname>Marin</surname>, <given-names>M. M.</given-names></name>, <name><surname>Puig-Waldm&#252;ller</surname>, <given-names>E.</given-names></name>, &#x26; <name><surname>Fitch</surname>, <given-names>W. T.</given-names></name></person-group> (<year>2015</year>). <article-title>The Eye is Listening: Music-Induced Arousal and Individual Differences Predict Pupillary Responses.</article-title> <source>Frontiers in Human Neuroscience</source>, <volume>9</volume>, <fpage>619</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2015.00619</pub-id><pub-id pub-id-type="pmid">26617511</pub-id><issn>1662-5161</issn></mixed-citation></ref>
<ref id="b9"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Grahn</surname>, <given-names>J. A.</given-names></name></person-group> (<year>2012</year>). <article-title>See what I hear? Beat perception in auditory and visual rhythms.</article-title> <source>Experimental Brain Research</source>, <volume>220</volume>(<issue>1</issue>), <fpage>51</fpage>–<lpage>61</lpage>. <pub-id pub-id-type="doi">10.1007/s00221-012-3114-8</pub-id><pub-id pub-id-type="pmid">22623092</pub-id><issn>0014-4819</issn></mixed-citation></ref>
<ref id="b10"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Grahn</surname>, <given-names>J. A.</given-names></name>, <name><surname>Henry</surname>, <given-names>M. J.</given-names></name>, &#x26; <name><surname>McAuley</surname>, <given-names>J. D.</given-names></name></person-group> (<year>2011</year>). <article-title>FMRI investigation of cross-modal interactions in beat perception: Audition primes vision, but not vice versa.</article-title> <source>NeuroImage</source>, <volume>54</volume>(<issue>2</issue>), <fpage>1231</fpage>–<lpage>1243</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2010.09.033</pub-id><pub-id pub-id-type="pmid">20858544</pub-id><issn>1053-8119</issn></mixed-citation></ref>
<ref id="b25"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Grahn</surname>, <given-names>J. A.</given-names></name>, &#x26; <name><surname>Rowe</surname>, <given-names>J. B.</given-names></name></person-group> (<year>2013</year>). <article-title>Finding and feeling the musical beat: Striatal dissociations between detection and prediction of regularity.</article-title> <source>Cerebral Cortex (New York, N.Y.)</source>, <volume>23</volume>(<issue>4</issue>), <fpage>913</fpage>–<lpage>921</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhs083</pub-id><pub-id pub-id-type="pmid">22499797</pub-id><issn>1047-3211</issn></mixed-citation></ref>
<ref id="b88"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Green</surname>, <given-names>D.</given-names></name>, &#x26; <name><surname>Swets</surname>, <given-names>J.</given-names></name></person-group> (<year>1996</year>). <source>Signal detection theory and psychophysics</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>John Wiley and Sons</publisher-name>.</mixed-citation></ref>
<ref id="b21"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Henry</surname>, <given-names>M. J.</given-names></name>, &#x26; <name><surname>Herrmann</surname>, <given-names>B.</given-names></name></person-group> (<year>2014</year>). <article-title>Low-Frequency Neural Oscillations Support Dynamic Attending in Temporal Context.</article-title> <source>Timing &#x26; Time Perception (Leiden, Netherlands)</source>, <volume>2</volume>(<issue>1</issue>), <fpage>62</fpage>–<lpage>86</lpage>. <pub-id pub-id-type="doi">10.1163/22134468-00002011</pub-id><issn>2213-445X</issn></mixed-citation></ref>
<ref id="b63"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Hoeks</surname>, <given-names>B.</given-names></name>, &#x26; <name><surname>Levelt</surname>, <given-names>W. J. M.</given-names></name></person-group> (<year>1993</year>). <article-title>Pupillary dilation as a measure of attention: A quantitative system analysis.</article-title> <source>Behavior Research Methods, Instruments, &#x26; Computers</source>, <volume>25</volume>(<issue>1</issue>), <fpage>16</fpage>–<lpage>26</lpage>. <pub-id pub-id-type="doi">10.3758/BF03204445</pub-id><issn>0743-3808</issn></mixed-citation></ref>
<ref id="b50"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Hong</surname>, <given-names>L.</given-names></name>, <name><surname>Walz</surname>, <given-names>J. M.</given-names></name>, &#x26; <name><surname>Sajda</surname>, <given-names>P.</given-names></name></person-group> (<year>2014</year>). <article-title>Your eyes give you away: Prestimulus changes in pupil diameter correlate with poststimulus task-related EEG dynamics.</article-title> <source>PLoS One</source>, <volume>9</volume>(<issue>3</issue>), <fpage>e91321</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0091321</pub-id><pub-id pub-id-type="pmid">24618591</pub-id><issn>1932-6203</issn></mixed-citation></ref>
<ref id="b11"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Hove</surname>, <given-names>M. J.</given-names></name>, <name><surname>Fairhurst</surname>, <given-names>M. T.</given-names></name>, <name><surname>Kotz</surname>, <given-names>S. A.</given-names></name>, &#x26; <name><surname>Keller</surname>, <given-names>P. E.</given-names></name></person-group> (<year>2013</year>). <article-title>Synchronizing with auditory and visual rhythms: An fMRI assessment of modality differences and modality appropriateness.</article-title> <source>NeuroImage</source>, <volume>67</volume>, <fpage>313</fpage>–<lpage>321</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2012.11.032</pub-id><pub-id pub-id-type="pmid">23207574</pub-id><issn>1053-8119</issn></mixed-citation></ref>
<ref id="b60"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Hurley</surname> <given-names>BK</given-names></name>, <name><surname>Fink</surname> <given-names>LK</given-names></name>, <name><surname>Janata</surname> <given-names>P</given-names></name></person-group>. Mapping the Dynamic Allocation of Temporal Attention in Musical Patterns Musical Patterns. J Exp Psychol Hum Percept Perform. <year>2018</year>;Advance On.</mixed-citation></ref>
<ref id="b58"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Hurley</surname>, <given-names>B. K.</given-names></name>, <name><surname>Martens</surname>, <given-names>P. A.</given-names></name>, &#x26; <name><surname>Janata</surname>, <given-names>P.</given-names></name></person-group> (<year>2014</year>). <article-title>Spontaneous sensorimotor coupling with multipart music.</article-title> <source>Journal of Experimental Psychology. Human Perception and Performance</source>, <volume>40</volume>(<issue>4</issue>), <fpage>1679</fpage>–<lpage>1696</lpage>. <pub-id pub-id-type="doi">10.1037/a0037154</pub-id><pub-id pub-id-type="pmid">24979362</pub-id><issn>0096-1523</issn></mixed-citation></ref>
<ref id="b26"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Iversen</surname>, <given-names>J. R.</given-names></name>, <name><surname>Repp</surname>, <given-names>B. H.</given-names></name>, &#x26; <name><surname>Patel</surname>, <given-names>A. D.</given-names></name></person-group> (<year>2009</year>). <article-title>Top-down control of rhythm perception modulates early auditory responses.</article-title> <source>Annals of the New York Academy of Sciences</source>, <volume>1169</volume>(<issue>1</issue>), <fpage>58</fpage>–<lpage>73</lpage>. <pub-id pub-id-type="doi">10.1111/j.1749-6632.2009.04579.x</pub-id><pub-id pub-id-type="pmid">19673755</pub-id><issn>0077-8923</issn></mixed-citation></ref>
<ref id="b71"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Janata</surname>, <given-names>P.</given-names></name></person-group> (<year>2009</year>). <article-title>The neural architecture of music-evoked autobiographical memories.</article-title> <source>Cerebral Cortex (New York, N.Y.)</source>, <volume>19</volume>(<issue>11</issue>), <fpage>2579</fpage>–<lpage>2594</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhp008</pub-id><pub-id pub-id-type="pmid">19240137</pub-id><issn>1047-3211</issn></mixed-citation></ref>
<ref id="b97"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Janata</surname>, <given-names>P.</given-names></name></person-group> (<year>2012</year>). <article-title>Acuity of mental representations of pitch.</article-title> <source>Annals of the New York Academy of Sciences</source>, <volume>1252</volume>(<issue>1</issue>), <fpage>214</fpage>&#8211;<lpage>221</lpage>. <pub-id pub-id-type="doi">10.1111/j.1749-6632.2011.06441.x</pub-id><pub-id pub-id-type="pmid">22524362</pub-id><issn>0077-8923</issn></mixed-citation></ref>
<ref id="b59"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Janata</surname>, <given-names>P.</given-names></name>, <name><surname>Tomic</surname>, <given-names>S. T.</given-names></name>, &#x26; <name><surname>Haberman</surname>, <given-names>J. M.</given-names></name></person-group> (<year>2012</year>). <article-title>Sensorimotor coupling in music and the psychology of the groove.</article-title> <source>Journal of Experimental Psychology. General</source>, <volume>141</volume>(<issue>1</issue>), <fpage>54</fpage>–<lpage>75</lpage>. <pub-id pub-id-type="doi">10.1037/a0024208</pub-id><pub-id pub-id-type="pmid">21767048</pub-id><issn>0096-3445</issn></mixed-citation></ref>
<ref id="b82"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Johnson</surname>, <given-names>P. C. D.</given-names></name></person-group> (<year>2014</year>). <article-title>Extension of Nakagawa &#x26; Schielzeth’s <italic>R</italic><sup>2</sup><sub>GLMM</sub> to random slopes models.</article-title> <source>Methods in Ecology and Evolution</source>, <volume>5</volume>(<issue>9</issue>), <fpage>944</fpage>–<lpage>946</lpage>. <pub-id pub-id-type="doi">10.1111/2041-210X.12225</pub-id><pub-id pub-id-type="pmid">25810896</pub-id><issn>2041-210X</issn></mixed-citation></ref>
<ref id="b4"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Jones</surname>, <given-names>M. R.</given-names></name>, <name><surname>Johnston</surname>, <given-names>H. M.</given-names></name>, &#x26; <name><surname>Puente</surname>, <given-names>J.</given-names></name></person-group> (<year>2006</year>). <article-title>Effects of auditory pattern structure on anticipatory and reactive attending.</article-title> <source>Cognitive Psychology</source>, <volume>53</volume>(<issue>1</issue>), <fpage>59</fpage>–<lpage>96</lpage>. <pub-id pub-id-type="doi">10.1016/j.cogpsych.2006.01.003</pub-id><pub-id pub-id-type="pmid">16563367</pub-id><issn>0010-0285</issn></mixed-citation></ref>
<ref id="b13"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Jones</surname>, <given-names>M. R.</given-names></name>, &#x26; <name><surname>Boltz</surname>, <given-names>M.</given-names></name></person-group> (<year>1989</year>). <article-title>Dynamic attending and responses to time.</article-title> <source>Psychological Review</source>, <volume>96</volume>(<issue>3</issue>), <fpage>459</fpage>–<lpage>491</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.96.3.459</pub-id><pub-id pub-id-type="pmid">2756068</pub-id><issn>0033-295X</issn></mixed-citation></ref>
<ref id="b44"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Joshi</surname>, <given-names>S.</given-names></name>, <name><surname>Li</surname>, <given-names>Y.</given-names></name>, <name><surname>Kalwani</surname>, <given-names>R. M.</given-names></name>, &#x26; <name><surname>Gold</surname>, <given-names>J. I.</given-names></name></person-group> (<year>2016</year>). <article-title>Relationships between Pupil Diameter and Neuronal Activity in the Locus Coeruleus, Colliculi, and Cingulate Cortex.</article-title> <source>Neuron</source>, <volume>89</volume>(<issue>1</issue>), <fpage>221</fpage>&#8211;<lpage>234</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2015.11.028</pub-id><pub-id pub-id-type="pmid">26711118</pub-id><issn>0896-6273</issn></mixed-citation></ref>
<ref id="b55"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Kang</surname>, <given-names>O.</given-names></name>, &#x26; <name><surname>Wheatley</surname>, <given-names>T.</given-names></name></person-group> (<year>2015</year>). <article-title>Pupil dilation patterns reflect the contents of consciousness.</article-title> <source>Consciousness and Cognition</source>, <volume>35</volume>, <fpage>128</fpage>&#8211;<lpage>135</lpage>. <pub-id pub-id-type="doi">10.1016/j.concog.2015.05.001</pub-id><pub-id pub-id-type="pmid">26002764</pub-id><issn>1053-8100</issn></mixed-citation></ref>
<ref id="b67"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>King-Smith</surname>, <given-names>P. E.</given-names></name>, <name><surname>Grigsby</surname>, <given-names>S. S.</given-names></name>, <name><surname>Vingrys</surname>, <given-names>A. J.</given-names></name>, <name><surname>Benes</surname>, <given-names>S. C.</given-names></name>, &#x26; <name><surname>Supowit</surname>, <given-names>A.</given-names></name></person-group> (<year>1994</year>). <article-title>Efficient and unbiased modifications of the QUEST threshold method: Theory, simulations, experimental evaluation and practical implementation.</article-title> <source>Vision Research</source>, <volume>34</volume>(<issue>7</issue>), <fpage>885</fpage>–<lpage>912</lpage>. <pub-id pub-id-type="doi">10.1016/0042-6989(94)90039-6</pub-id><pub-id pub-id-type="pmid">8160402</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b53"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Laeng</surname>, <given-names>B.</given-names></name>, <name><surname>Eidet</surname>, <given-names>L. M.</given-names></name>, <name><surname>Sulutvedt</surname>, <given-names>U.</given-names></name>, &#x26; <name><surname>Panksepp</surname>, <given-names>J.</given-names></name></person-group> (<year>2016</year>). <article-title>Music chills: The eye pupil as a mirror to music&#8217;s soul.</article-title> <source>Consciousness and Cognition</source>, <volume>44</volume>, <fpage>161</fpage>&#8211;<lpage>178</lpage>. <pub-id pub-id-type="doi">10.1016/j.concog.2016.07.009</pub-id><pub-id pub-id-type="pmid">27500655</pub-id><issn>1053-8100</issn></mixed-citation></ref>
<ref id="b91"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Laeng</surname>, <given-names>B.</given-names></name>, <name><surname>Sirois</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Gredebäck</surname>, <given-names>G.</given-names></name></person-group> (<year>2012</year>). <article-title>Pupillometry: A Window to the Preconscious?</article-title> <source>Perspectives on Psychological Science</source>, <volume>7</volume>(<issue>1</issue>), <fpage>18</fpage>–<lpage>27</lpage>. <pub-id pub-id-type="doi">10.1177/1745691611427305</pub-id><pub-id pub-id-type="pmid">26168419</pub-id><issn>1745-6916</issn></mixed-citation></ref>
<ref id="b17"><mixed-citation publication-type="unknown" specific-use="linked"><person-group person-group-type="author"><name><surname>Lakatos</surname> <given-names>P</given-names></name>, <name><surname>Karmos</surname> <given-names>G</given-names></name>, <name><surname>Mehta</surname> <given-names>AD</given-names></name>, <name><surname>Ulbert</surname> <given-names>I</given-names></name>, <name><surname>Schroeder</surname> <given-names>CE</given-names></name></person-group>. <article-title>Entrainment of neuronal oscillations as a mechanism of attentional selection.</article-title> Science (80-) . <year>2008</year>;320(5872):110–3. <pub-id pub-id-type="doi">10.1126/science.1154735</pub-id></mixed-citation></ref>
<ref id="b48"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Lange</surname>, <given-names>E. B.</given-names></name>, <name><surname>Zweck</surname>, <given-names>F.</given-names></name>, &#x26; <name><surname>Sinn</surname>, <given-names>P.</given-names></name></person-group> (<year>2017</year>). <article-title>Microsaccade-rate indicates absorption by music listening.</article-title> <source>Consciousness and Cognition</source>, <volume>55</volume>, <fpage>59</fpage>&#8211;<lpage>78</lpage>. <pub-id pub-id-type="doi">10.1016/j.concog.2017.07.009</pub-id><pub-id pub-id-type="pmid">28787663</pub-id><issn>1053-8100</issn></mixed-citation></ref>
<ref id="b27"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Large</surname>, <given-names>E. W.</given-names></name>, <name><surname>Herrera</surname>, <given-names>J. A.</given-names></name>, &#x26; <name><surname>Velasco</surname>, <given-names>M. J.</given-names></name></person-group> (<year>2015</year>). <article-title>Neural Networks for Beat Perception in Musical Rhythm.</article-title> <source>Frontiers in Systems Neuroscience</source>, <volume>9</volume>, <fpage>159</fpage>. <pub-id pub-id-type="doi">10.3389/fnsys.2015.00159</pub-id><pub-id pub-id-type="pmid">26635549</pub-id><issn>1662-5137</issn></mixed-citation></ref>
<ref id="b15"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Large</surname>, <given-names>E. W.</given-names></name>, &#x26; <name><surname>Palmer</surname>, <given-names>C.</given-names></name></person-group> (<year>2002</year>). <article-title>Perceiving temporal regularity in music.</article-title> <source>Cognitive Science</source>, <volume>26</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>37</lpage>. <pub-id pub-id-type="doi">10.1207/s15516709cog2601_1</pub-id><issn>0364-0213</issn></mixed-citation></ref>
<ref id="b14"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Large</surname>, <given-names>E. W.</given-names></name>, &#x26; <name><surname>Snyder</surname>, <given-names>J. S.</given-names></name></person-group> (<year>2009</year>). <article-title>Pulse and meter as neural resonance.</article-title> <source>Annals of the New York Academy of Sciences</source>, <volume>1169</volume>(<issue>1</issue>), <fpage>46</fpage>–<lpage>57</lpage>. <pub-id pub-id-type="doi">10.1111/j.1749-6632.2009.04550.x</pub-id><pub-id pub-id-type="pmid">19673754</pub-id><issn>0077-8923</issn></mixed-citation></ref>
<ref id="b61"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Leman</surname>, <given-names>M.</given-names></name>, <name><surname>Lesaffre</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Tanghe</surname>, <given-names>K.</given-names></name></person-group> (<year>2001</year>). <source>Computer code IPEM Toolbox</source>. <publisher-loc>Ghent, Belgium</publisher-loc>: <publisher-name>Ghent University</publisher-name>.</mixed-citation></ref>
<ref id="b51"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Liao</surname>, <given-names>H. I.</given-names></name>, <name><surname>Yoneya</surname>, <given-names>M.</given-names></name>, <name><surname>Kidani</surname>, <given-names>S.</given-names></name>, <name><surname>Kashino</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Furukawa</surname>, <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>Human Pupillary Dilation Response to Deviant Auditory Stimuli: Effects of Stimulus Properties and Voluntary Attention.</article-title> <source>Frontiers in Neuroscience</source>, <volume>10</volume>, <fpage>43</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2016.00043</pub-id><pub-id pub-id-type="pmid">26924959</pub-id><issn>1662-4548</issn></mixed-citation></ref>
<ref id="b1"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><collab>London J. Hearing in Time</collab></person-group>. (<year>2012</year>). <source>Psychological Aspects of Musical Meter</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="b31"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Mar&#243;ti</surname>, <given-names>E.</given-names></name>, <name><surname>Knakker</surname>, <given-names>B.</given-names></name>, <name><surname>Vidny&#225;nszky</surname>, <given-names>Z.</given-names></name>, &#x26; <name><surname>Weiss</surname>, <given-names>B.</given-names></name></person-group> (<year>2017</year>). <article-title>The effect of beat frequency on eye movements during free viewing.</article-title> <source>Vision Research</source>, <volume>131</volume>, <fpage>57</fpage>&#8211;<lpage>66</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2016.12.009</pub-id><pub-id pub-id-type="pmid">28057578</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b68"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Marvit</surname>, <given-names>P.</given-names></name>, <name><surname>Florentine</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Buus</surname>, <given-names>S.</given-names></name></person-group> (<year>2003</year>). <article-title>A comparison of psychophysical procedures for level-discrimination thresholds.</article-title> <source>The Journal of the Acoustical Society of America</source>, <volume>113</volume>(<issue>6</issue>), <fpage>3348</fpage>–<lpage>3361</lpage>. <pub-id pub-id-type="doi">10.1121/1.1570445</pub-id><pub-id pub-id-type="pmid">12822806</pub-id><issn>0001-4966</issn></mixed-citation></ref>
<ref id="b90"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Mathôt</surname>, <given-names>S.</given-names></name></person-group> (<year>2018</year>). <article-title>Pupillometry: Psychology, Physiology, and Function.</article-title> <source>Journal of Cognition</source>, <volume>1</volume>(<issue>1</issue>), <fpage>16</fpage>. <pub-id pub-id-type="doi">10.5334/joc.18</pub-id><pub-id pub-id-type="pmid">31517190</pub-id><issn>2514-4820</issn></mixed-citation></ref>
<ref id="b89"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Mathôt</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Van der Stigchel</surname>, <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>New Light on the Mind’s Eye: The Pupillary Light Response as Active Vision.</article-title> <source>Current Directions in Psychological Science</source>, <volume>24</volume>(<issue>5</issue>), <fpage>374</fpage>–<lpage>378</lpage>. <pub-id pub-id-type="doi">10.1177/0963721415593725</pub-id><pub-id pub-id-type="pmid">26494950</pub-id><issn>0963-7214</issn></mixed-citation></ref>
<ref id="b70"><mixed-citation publication-type="book" specific-use="restruct"><source>MATLAB</source>. (<year>2017</year>). <publisher-loc>Natick, MA</publisher-loc>: <publisher-name>MathWorks, Inc.</publisher-name></mixed-citation></ref>
<ref id="b62"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>McCloy</surname>, <given-names>D. R.</given-names></name>, <name><surname>Larson</surname>, <given-names>E. D.</given-names></name>, <name><surname>Lau</surname>, <given-names>B.</given-names></name>, &#x26; <name><surname>Lee</surname>, <given-names>A. K. C.</given-names></name></person-group> (<year>2016</year>). <article-title>Temporal alignment of pupillary response with stimulus events via deconvolution.</article-title> <source>The Journal of the Acoustical Society of America</source>, <volume>139</volume>(<issue>3</issue>), <fpage>EL57</fpage>–<lpage>EL62</lpage>. <pub-id pub-id-type="doi">10.1121/1.4943787</pub-id><pub-id pub-id-type="pmid">27036288</pub-id><issn>0001-4966</issn></mixed-citation></ref>
<ref id="b42"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>McGinley</surname>, <given-names>M. J.</given-names></name>, <name><surname>David</surname>, <given-names>S. V.</given-names></name>, &#x26; <name><surname>McCormick</surname>, <given-names>D. A.</given-names></name></person-group> (<year>2015</year>). <article-title>Cortical Membrane Potential Signature of Optimal States for Sensory Signal Detection.</article-title> <source>Neuron</source>, <volume>87</volume>(<issue>1</issue>), <fpage>179</fpage>&#8211;<lpage>192</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2015.05.038</pub-id><pub-id pub-id-type="pmid">26074005</pub-id><issn>0896-6273</issn></mixed-citation></ref>
<ref id="b12"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Miller</surname>, <given-names>J. E.</given-names></name>, <name><surname>Carlson</surname>, <given-names>L. A.</given-names></name>, &#x26; <name><surname>McAuley</surname>, <given-names>J. D.</given-names></name></person-group> (<year>2013</year>). <article-title>When what you hear influences when you see: Listening to an auditory rhythm influences the temporal allocation of visual attention.</article-title> <source>Psychological Science</source>, <volume>24</volume>(<issue>1</issue>), <fpage>11</fpage>–<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1177/0956797612446707</pub-id><pub-id pub-id-type="pmid">23160202</pub-id><issn>0956-7976</issn></mixed-citation></ref>
<ref id="b22"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Morillon</surname>, <given-names>B.</given-names></name>, <name><surname>Hackett</surname>, <given-names>T. A.</given-names></name>, <name><surname>Kajikawa</surname>, <given-names>Y.</given-names></name>, &#x26; <name><surname>Schroeder</surname>, <given-names>C. E.</given-names></name></person-group> (<year>2015</year>). <article-title>Predictive motor control of sensory dynamics in auditory active sensing.</article-title> <source>Current Opinion in Neurobiology</source>, <volume>31</volume>, <fpage>230</fpage>–<lpage>238</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2014.12.005</pub-id><pub-id pub-id-type="pmid">25594376</pub-id><issn>0959-4388</issn></mixed-citation></ref>
<ref id="b28"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Morillon</surname>, <given-names>B.</given-names></name>, <name><surname>Schroeder</surname>, <given-names>C. E.</given-names></name>, &#x26; <name><surname>Wyart</surname>, <given-names>V.</given-names></name></person-group> (<year>2014</year>). <article-title>Motor contributions to the temporal precision of auditory attention.</article-title> <source>Nature Communications</source>, <volume>5</volume>(<issue>1</issue>), <fpage>5255</fpage>. <pub-id pub-id-type="doi">10.1038/ncomms6255</pub-id><pub-id pub-id-type="pmid">25314898</pub-id><issn>2041-1723</issn></mixed-citation></ref>
<ref id="b43"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Murphy</surname>, <given-names>P. R.</given-names></name>, <name><surname>Robertson</surname>, <given-names>I. H.</given-names></name>, <name><surname>Balsters</surname>, <given-names>J. H.</given-names></name>, &#x26; <name><surname>O&#8217;connell</surname>, <given-names>R. G.</given-names></name></person-group> (<year>2011</year>). <article-title>Pupillometry and P3 index the locus coeruleus-noradrenergic arousal function in humans.</article-title> <source>Psychophysiology</source>, <volume>48</volume>(<issue>11</issue>), <fpage>1532</fpage>&#8211;<lpage>1543</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.2011.01226.x</pub-id><pub-id pub-id-type="pmid">21762458</pub-id><issn>0048-5772</issn></mixed-citation></ref>
<ref id="b45"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Murphy</surname>, <given-names>P. R.</given-names></name>, <name><surname>O&#8217;Connell</surname>, <given-names>R. G.</given-names></name>, <name><surname>O&#8217;Sullivan</surname>, <given-names>M.</given-names></name>, <name><surname>Robertson</surname>, <given-names>I. H.</given-names></name>, &#x26; <name><surname>Balsters</surname>, <given-names>J. H.</given-names></name></person-group> (<year>2014</year>). <article-title>Pupil diameter covaries with BOLD activity in human locus coeruleus.</article-title> <source>Human Brain Mapping</source>, <volume>35</volume>(<issue>8</issue>), <fpage>4140</fpage>&#8211;<lpage>4154</lpage>. <pub-id pub-id-type="doi">10.1002/hbm.22466</pub-id><pub-id pub-id-type="pmid">24510607</pub-id><issn>1065-9471</issn></mixed-citation></ref>
<ref id="b94"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Näätänen</surname>, <given-names>R.</given-names></name>, <name><surname>Paavilainen</surname>, <given-names>P.</given-names></name>, <name><surname>Rinne</surname>, <given-names>T.</given-names></name>, &#x26; <name><surname>Alho</surname>, <given-names>K.</given-names></name></person-group> (<year>2007</year>). <article-title>The mismatch negativity (MMN) in basic research of central auditory processing: A review.</article-title> <source>Clinical Neurophysiology</source>, <volume>118</volume>(<issue>12</issue>), <fpage>2544</fpage>–<lpage>2590</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2007.04.026</pub-id><pub-id pub-id-type="pmid">17931964</pub-id><issn>1388-2457</issn></mixed-citation></ref>
<ref id="b39"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Naber</surname>, <given-names>M.</given-names></name>, <name><surname>Alvarez</surname>, <given-names>G. A.</given-names></name>, &#x26; <name><surname>Nakayama</surname>, <given-names>K.</given-names></name></person-group> (<year>2013</year>). <article-title>Tracking the allocation of attention using human pupillary oscillations.</article-title> <source>Frontiers in Psychology</source>, <volume>4</volume>, <fpage>919</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2013.00919</pub-id><pub-id pub-id-type="pmid">24368904</pub-id><issn>1664-1078</issn></mixed-citation></ref>
<ref id="b83"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Nakagawa</surname>, <given-names>S.</given-names></name>, <name><surname>Johnson</surname>, <given-names>P. C. D.</given-names></name>, &#x26; <name><surname>Schielzeth</surname>, <given-names>H.</given-names></name></person-group> (<year>2017</year>). <article-title>The coefficient of determination <italic>R</italic><sup>2</sup> and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded.</article-title> <source>Journal of the Royal Society, Interface</source>, <volume>14</volume>(<issue>134</issue>), <fpage>20170213</fpage>. <pub-id pub-id-type="doi">10.1098/rsif.2017.0213</pub-id><pub-id pub-id-type="pmid">28904005</pub-id><issn>1742-5689</issn></mixed-citation></ref>
<ref id="b84"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Nakagawa</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Schielzeth</surname>, <given-names>H.</given-names></name></person-group> (<year>2013</year>). <article-title>A general and simple method for obtaining <italic>R</italic><sup>2</sup> from generalized linear mixed-effects models.</article-title> <source>Methods in Ecology and Evolution</source>, <volume>4</volume>(<issue>2</issue>), <fpage>133</fpage>–<lpage>142</lpage>. <pub-id pub-id-type="doi">10.1111/j.2041-210x.2012.00261.x</pub-id><issn>2041-210X</issn></mixed-citation></ref>
<ref id="b96"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Navarro Cebrian</surname>, <given-names>A.</given-names></name>, &#x26; <name><surname>Janata</surname>, <given-names>P.</given-names></name></person-group> (<year>2010</year>). <article-title>Electrophysiological correlates of accurate mental image formation in auditory perception and imagery tasks.</article-title> <source>Brain Research</source>, <volume>1342</volume>, <fpage>39</fpage>&#8211;<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2010.04.026</pub-id><pub-id pub-id-type="pmid">20406623</pub-id><issn>0006-8993</issn></mixed-citation></ref>
<ref id="b93"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Nieuwenhuis</surname>, <given-names>S.</given-names></name>, <name><surname>De Geus</surname>, <given-names>E. J.</given-names></name>, &#x26; <name><surname>Aston-Jones</surname>, <given-names>G.</given-names></name></person-group> (<year>2011</year>). <article-title>The anatomical and functional relationship between the P3 and autonomic components of the orienting response.</article-title> <source>Psychophysiology</source>, <volume>48</volume>(<issue>2</issue>), <fpage>162</fpage>–<lpage>175</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.2010.01057.x</pub-id><pub-id pub-id-type="pmid">20557480</pub-id><issn>0048-5772</issn></mixed-citation></ref>
<ref id="b24"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Nozaradan</surname>, <given-names>S.</given-names></name>, <name><surname>Peretz</surname>, <given-names>I.</given-names></name>, &#x26; <name><surname>Keller</surname>, <given-names>P. E.</given-names></name></person-group> (<year>2016</year>). <article-title>Individual Differences in Rhythmic Cortical Entrainment Correlate with Predictive Behavior in Sensorimotor Synchronization.</article-title> <source>Scientific Reports</source>, <volume>6</volume>(<issue>1</issue>), <fpage>20612</fpage>. <pub-id pub-id-type="doi">10.1038/srep20612</pub-id><pub-id pub-id-type="pmid">26847160</pub-id><issn>2045-2322</issn></mixed-citation></ref>
<ref id="b65"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Nozaradan</surname>, <given-names>S.</given-names></name>, <name><surname>Peretz</surname>, <given-names>I.</given-names></name>, &#x26; <name><surname>Mouraux</surname>, <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>Selective neuronal entrainment to the beat and meter embedded in a musical rhythm.</article-title> <source>The Journal of Neuroscience: The Official Journal of the Society for Neuroscience</source>, <volume>32</volume>(<issue>49</issue>), <fpage>17572</fpage>–<lpage>17581</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3203-12.2012</pub-id><pub-id pub-id-type="pmid">23223281</pub-id><issn>0270-6474</issn></mixed-citation></ref>
<ref id="b74"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Pinheiro</surname>, <given-names>J.</given-names></name>, <name><surname>Bates</surname>, <given-names>D.</given-names></name>, <name><surname>DebRoy</surname>, <given-names>S.</given-names></name>, <name><surname>Sarkar</surname>, <given-names>D.</given-names></name>, &#x26; <collab>R Development Core Team</collab></person-group> (<year>2013</year>). nlme: Linear and nonlinear mixed effects models (Version 3.1-113) [Computer software].</mixed-citation></ref>
<ref id="b92"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Polich</surname>, <given-names>J.</given-names></name></person-group> (<year>2007</year>). <article-title>Updating P300: An integrative theory of P3a and P3b.</article-title> <source>Clinical Neurophysiology</source>, <volume>118</volume>(<issue>10</issue>), <fpage>2128</fpage>–<lpage>2148</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2007.04.019</pub-id><pub-id pub-id-type="pmid">17573239</pub-id><issn>1388-2457</issn></mixed-citation></ref>
<ref id="b75"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><collab>R Core Team</collab></person-group>. (<year>2013</year>). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://www.R-project.org/">http://www.R-project.org/</ext-link></mixed-citation></ref>
<ref id="b37"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rajkowski</surname>, <given-names>J.</given-names></name>, <name><surname>Kubiak</surname>, <given-names>P.</given-names></name>, &#x26; <name><surname>Aston-Jones</surname>, <given-names>G.</given-names></name></person-group> (<year>1993</year>). <article-title>Correlations between locus coeruleus (LC) neural activity, pupil diameter, and behavior in monkey support a role of LC in attention.</article-title> <source>Abstracts - Society for Neuroscience</source>, <volume>19</volume>(<issue>974</issue>).<issn>0190-5295</issn></mixed-citation></ref>
<ref id="b32"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Recanzone</surname>, <given-names>G. H.</given-names></name></person-group> (<year>2002</year>). <article-title>Auditory influences on visual temporal rate perception.</article-title> <source>Journal of Neurophysiology</source>, <volume>89</volume>(<issue>2</issue>), <fpage>1078</fpage>&#8211;<lpage>1093</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00706.2002</pub-id><pub-id pub-id-type="pmid">12574482</pub-id><issn>0022-3077</issn></mixed-citation></ref>
<ref id="b23"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Repp</surname>, <given-names>B. H.</given-names></name></person-group> (<year>2005</year>). <article-title>Sensorimotor synchronization: A review of the tapping literature.</article-title> <source>Psychonomic Bulletin &#x26; Review</source>, <volume>12</volume>(<issue>6</issue>), <fpage>969</fpage>–<lpage>992</lpage>. <pub-id pub-id-type="doi">10.3758/BF03206433</pub-id><pub-id pub-id-type="pmid">16615317</pub-id><issn>1069-9384</issn></mixed-citation></ref>
<ref id="b86"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Robin</surname>, <given-names>X.</given-names></name>, <name><surname>Turck</surname>, <given-names>N.</given-names></name>, <name><surname>Hainard</surname>, <given-names>A.</given-names></name>, <name><surname>Tiberti</surname>, <given-names>N.</given-names></name>, <name><surname>Lisacek</surname>, <given-names>F.</given-names></name>, <name><surname>Sanchez</surname>, <given-names>J. C.</given-names></name>, &#x26; <name><surname>Müller</surname>, <given-names>M.</given-names></name></person-group> (<year>2011</year>). <article-title>pROC: An open-source package for R and S+ to analyze and compare ROC curves.</article-title> <source>BMC Bioinformatics</source>, <volume>12</volume>(<issue>1</issue>), <fpage>77</fpage>. <pub-id pub-id-type="doi">10.1186/1471-2105-12-77</pub-id><pub-id pub-id-type="pmid">21414208</pub-id><issn>1471-2105</issn></mixed-citation></ref>
<ref id="b69"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><collab>SR Research Ltd.</collab></person-group> (<year>2009</year>). EyeLink 1000 user manual (Version 1.5.0). Ontario, Canada: SR Research Ltd.</mixed-citation></ref>
<ref id="b38"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Sara</surname>, <given-names>S. J.</given-names></name></person-group> (<year>2015</year>). <article-title>Locus Coeruleus in time with the making of memories.</article-title> <source>Current Opinion in Neurobiology</source>, <volume>35</volume>, <fpage>87</fpage>&#8211;<lpage>94</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2015.07.004</pub-id><pub-id pub-id-type="pmid">26241632</pub-id><issn>0959-4388</issn></mixed-citation></ref>
<ref id="b30"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Schaefer</surname>, <given-names>K. P.</given-names></name>, <name><surname>S&#252;ss</surname>, <given-names>K. J.</given-names></name>, &#x26; <name><surname>Fiebig</surname>, <given-names>E.</given-names></name></person-group> (<year>1981</year>). <article-title>Acoustic-induced eye movements.</article-title> <source>Annals of the New York Academy of Sciences</source>, <volume>374</volume>(<supplement>1 Vestibular an</supplement>), <fpage>674</fpage>&#8211;<lpage>688</lpage>. <pub-id pub-id-type="doi">10.1111/j.1749-6632.1981.tb30910.x</pub-id><pub-id pub-id-type="pmid">6951453</pub-id><issn>0077-8923</issn></mixed-citation></ref>
<ref id="b16"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Schroeder</surname>, <given-names>C. E.</given-names></name>, &#x26; <name><surname>Lakatos</surname>, <given-names>P.</given-names></name></person-group> (<year>2009</year>). <article-title>Low-frequency neuronal oscillations as instruments of sensory selection.</article-title> <source>Trends in Neurosciences</source>, <volume>32</volume>(<issue>1</issue>), <fpage>9</fpage>–<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1016/j.tins.2008.09.012</pub-id><pub-id pub-id-type="pmid">19012975</pub-id><issn>0166-2236</issn></mixed-citation></ref>
<ref id="b18"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Schroeder</surname>, <given-names>C. E.</given-names></name>, <name><surname>Wilson</surname>, <given-names>D. A.</given-names></name>, <name><surname>Radman</surname>, <given-names>T.</given-names></name>, <name><surname>Scharfman</surname>, <given-names>H.</given-names></name>, &#x26; <name><surname>Lakatos</surname>, <given-names>P.</given-names></name></person-group> (<year>2010</year>). <article-title>Dynamics of Active Sensing and perceptual selection.</article-title> <source>Current Opinion in Neurobiology</source>, <volume>20</volume>(<issue>2</issue>), <fpage>172</fpage>–<lpage>176</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2010.02.010</pub-id><pub-id pub-id-type="pmid">20307966</pub-id><issn>0959-4388</issn></mixed-citation></ref>
<ref id="b80"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Schwarz</surname>, <given-names>G.</given-names></name></person-group> (<year>1978</year>). <article-title>Estimating the dimension of a model.</article-title> <source>Annals of Statistics</source>, <volume>6</volume>(<issue>2</issue>), <fpage>461</fpage>–<lpage>464</lpage>. <pub-id pub-id-type="doi">10.1214/aos/1176344136</pub-id><issn>0090-5364</issn></mixed-citation></ref>
<ref id="b33"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Sekuler</surname>, <given-names>R.</given-names></name>, <name><surname>Sekuler</surname>, <given-names>A. B.</given-names></name>, &#x26; <name><surname>Lau</surname>, <given-names>R.</given-names></name></person-group> (<year>1997</year>). <article-title>Sound alters visual motion perception.</article-title> <source>Nature</source>, <volume>385</volume>(<issue>6614</issue>), <fpage>308</fpage>. <pub-id pub-id-type="doi">10.1038/385308a0</pub-id><pub-id pub-id-type="pmid">9002513</pub-id><issn>0028-0836</issn></mixed-citation></ref>
<ref id="b77"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Selya</surname>, <given-names>A. S.</given-names></name>, <name><surname>Rose</surname>, <given-names>J. S.</given-names></name>, <name><surname>Dierker</surname>, <given-names>L. C.</given-names></name>, <name><surname>Hedeker</surname>, <given-names>D.</given-names></name>, &#x26; <name><surname>Mermelstein</surname>, <given-names>R. J.</given-names></name></person-group> (<year>2012</year>). <article-title>A practical guide to calculating Cohen’s <italic>f</italic><sup>2</sup>, a measure of local effect size, from PROC MIXED.</article-title> <source>Frontiers in Psychology</source>, <volume>3</volume>, <fpage>111</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2012.00111</pub-id><pub-id pub-id-type="pmid">22529829</pub-id><issn>1664-1078</issn></mixed-citation></ref>
<ref id="b20"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Siegel</surname>, <given-names>M.</given-names></name>, <name><surname>Donner</surname>, <given-names>T. H.</given-names></name>, &#x26; <name><surname>Engel</surname>, <given-names>A. K.</given-names></name></person-group> (<year>2012</year>). <article-title>Spectral fingerprints of large-scale neuronal interactions.</article-title> <source>Nature Reviews. Neuroscience</source>, <volume>13</volume>(<issue>2</issue>), <fpage>121</fpage>–<lpage>134</lpage>. <pub-id pub-id-type="doi">10.1038/nrn3137</pub-id><pub-id pub-id-type="pmid">22233726</pub-id><issn>1471-003X</issn></mixed-citation></ref>
<ref id="b29"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Teki</surname>, <given-names>S.</given-names></name>, <name><surname>Grube</surname>, <given-names>M.</given-names></name>, <name><surname>Kumar</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Griffiths</surname>, <given-names>T. D.</given-names></name></person-group> (<year>2011</year>). <article-title>Distinct neural substrates of duration-based and beat-based auditory timing.</article-title> <source>The Journal of Neuroscience: The Official Journal of the Society for Neuroscience</source>, <volume>31</volume>(<issue>10</issue>), <fpage>3805</fpage>&#8211;<lpage>3812</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.5561-10.2011</pub-id><pub-id pub-id-type="pmid">21389235</pub-id><issn>0270-6474</issn></mixed-citation></ref>
<ref id="b66"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Tomic</surname>, <given-names>S. T.</given-names></name>, &#x26; <name><surname>Janata</surname>, <given-names>P.</given-names></name></person-group> (<year>2007</year>). <article-title>Ensemble: A web-based system for psychology survey and experiment management.</article-title> <source>Behavior Research Methods</source>, <volume>39</volume>(<issue>3</issue>), <fpage>635</fpage>–<lpage>650</lpage>. <pub-id pub-id-type="doi">10.3758/BF03193036</pub-id><pub-id pub-id-type="pmid">17958178</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b57"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Tomic</surname>, <given-names>S. T.</given-names></name>, &#x26; <name><surname>Janata</surname>, <given-names>P.</given-names></name></person-group> (<year>2008</year>). <article-title>Beyond the beat: Modeling metric structure in music and performance.</article-title> <source>The Journal of the Acoustical Society of America</source>, <volume>124</volume>(<issue>6</issue>), <fpage>4024</fpage>–<lpage>4041</lpage>. <pub-id pub-id-type="doi">10.1121/1.3006382</pub-id><pub-id pub-id-type="pmid">19206825</pub-id><issn>0001-4966</issn></mixed-citation></ref>
<ref id="b41"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Trost</surname>, <given-names>J. W.</given-names></name>, <name><surname>Labb&#233;</surname>, <given-names>C.</given-names></name>, &#x26; <name><surname>Grandjean</surname>, <given-names>D.</given-names></name></person-group> (<year>2017</year>). <article-title>Rhythmic entrainment as a musical affect induction mechanism.</article-title> <source>Neuropsychologia</source>, <volume>96</volume>, <fpage>96</fpage>&#8211;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2017.01.004</pub-id><pub-id pub-id-type="pmid">28069444</pub-id><issn>0028-3932</issn></mixed-citation></ref>
<ref id="b72"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Wang</surname>, <given-names>C. A.</given-names></name>, <name><surname>Boehnke</surname>, <given-names>S. E.</given-names></name>, <name><surname>Itti</surname>, <given-names>L.</given-names></name>, &#x26; <name><surname>Munoz</surname>, <given-names>D. P.</given-names></name></person-group> (<year>2014</year>). <article-title>Transient pupil response is modulated by contrast-based saliency.</article-title> <source>The Journal of Neuroscience: The Official Journal of the Society for Neuroscience</source>, <volume>34</volume>(<issue>2</issue>), <fpage>408</fpage>–<lpage>417</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3550-13.2014</pub-id><pub-id pub-id-type="pmid">24403141</pub-id><issn>0270-6474</issn></mixed-citation></ref>
<ref id="b47"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Weiss</surname>, <given-names>M. W.</given-names></name>, <name><surname>Trehub</surname>, <given-names>S. E.</given-names></name>, <name><surname>Schellenberg</surname>, <given-names>E. G.</given-names></name>, &#x26; <name><surname>Habashi</surname>, <given-names>P.</given-names></name></person-group> (<year>2016</year>). <article-title>Pupils dilate for vocal or familiar music.</article-title> <source>Journal of Experimental Psychology. Human Perception and Performance</source>, <volume>42</volume>(<issue>8</issue>), <fpage>1061</fpage>&#8211;<lpage>1065</lpage>. <pub-id pub-id-type="doi">10.1037/xhp0000226</pub-id><pub-id pub-id-type="pmid">27123682</pub-id><issn>0096-1523</issn></mixed-citation></ref>
<ref id="b73"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Welch</surname>, <given-names>P. D.</given-names></name></person-group> (<year>1967</year>). The use of the fast Fourier transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms. <source>IEEE Transactions on Audio and Electroacoustics</source>, <volume>AU-15</volume>, <fpage>70</fpage>–<lpage>73</lpage>.</mixed-citation></ref>
<ref id="b64"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Wierda</surname>, <given-names>S. M.</given-names></name>, <name><surname>van Rijn</surname>, <given-names>H.</given-names></name>, <name><surname>Taatgen</surname>, <given-names>N. A.</given-names></name>, &#x26; <name><surname>Martens</surname>, <given-names>S.</given-names></name></person-group> (<year>2012</year>). <article-title>Pupil dilation deconvolution reveals the dynamics of attention at high temporal resolution.</article-title> <source>Proceedings of the National Academy of Sciences of the United States of America</source>, <volume>109</volume>(<issue>22</issue>), <fpage>8456</fpage>–<lpage>8460</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1201858109</pub-id><pub-id pub-id-type="pmid">22586101</pub-id><issn>0027-8424</issn></mixed-citation></ref>
<ref id="b3"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Yee</surname>, <given-names>W.</given-names></name>, <name><surname>Holleran</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Jones</surname>, <given-names>M. R.</given-names></name></person-group> (<year>1994</year>). <article-title>Sensitivity to event timing in regular and irregular sequences: Influences of musical skill.</article-title> <source>Perception &#x26; Psychophysics</source>, <volume>56</volume>(<issue>4</issue>), <fpage>461</fpage>–<lpage>471</lpage>. <pub-id pub-id-type="doi">10.3758/BF03206737</pub-id><pub-id pub-id-type="pmid">7984401</pub-id><issn>0031-5117</issn></mixed-citation></ref>
</ref-list>
</back>
</article>
