<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.11.2.11</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Eye movements, attention, and expert
knowledge in the observation of
Bharatanatyam dance</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Ponmanadiyil</surname>
						<given-names>Raganya</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Woolhouse</surname>
						<given-names>Matthew H.</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>				
        <aff id="aff1">
		<institution>McMaster University</institution>,   <country>Canada</country>
        </aff>
		</contrib-group>   

		
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>28</day>  
		<month>12</month>
        <year>2018</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2018</year>
	</pub-date>
      <volume>11</volume>
      <issue>2</issue>
	 <elocation-id>10.16910/jemr.11.2.11</elocation-id> 
	<permissions> 
	<copyright-year>2018</copyright-year>
	<copyright-holder>Ponmanadiyil, R. &#x26; Woolhouse, M. H.</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>Previous research indicates that dance expertise affects eye-movement behaviour—dance
experts tend to have faster saccades and more tightly clustered fixations than novices when
observing dance, suggesting that experts are able to predict movements and process choreographic
information more quickly. Relating to this, the present study aimed to explore (1)
the effects of expertise on eye movements (as a proxy for attentional focus and the existence
of movement-dance schemas) in Indian Bharatanatyam dance, and (2) the effects
of narrative, an important component of Bharatanatyam dance. Fixation durations, dwell
times, and fixation-position dispersions were recorded for novices and experts in Bharatanatyam
(N = 28) while they observed videos of narrative and non-narrative Bharatanatyam
dance. Consistent with previous research, experts had shorter fixation durations
and more tightly clustered fixations than novices. Tighter clustering of fixations was
also found for narrative dance versus non-narrative. Our results are discussed in relation to
previous dance and eye-tracking research.</p>
      </abstract>
      <kwd-group>
        <kwd>Bharatanatyam</kwd>
        <kwd>dance</kwd>
        <kwd>music</kwd>
        <kwd>eye tracking</kwd>
        <kwd>fixations</kwd>
        <kwd>attention</kwd>  
        <kwd>expertise</kwd>             
      </kwd-group>
    </article-meta>
  </front>	
  <body>

    <sec id="S1">
      <title>Introduction</title>

<p>Dance has long been recognized as a universal, non-verbal
communicative medium through which emotional states, social narratives,
and myths may be conveyed(<xref ref-type="bibr" rid="b1">1</xref>). Evidence from dance and emotion
research suggests that humans are highly attuned to dancers’ affective
states, communicated through body and limb movements; for an
overview, see(<xref ref-type="bibr" rid="b2">2</xref>). For many world cultures, music and dance are not
separable cultural categories; for example, Merriam 1964 quotes Gbeho as
stating that in the indigenous music of the Gold Coast “If we speak of a
man being musical we mean that he understands all the dances, the drums
and the songs”(<xref ref-type="bibr" rid="b3">3</xref>). From a global, cultural perspective, dance
is frequently integrated into ritualistic and religious ceremonies, both
in contemporary societies(<xref ref-type="bibr" rid="b4">4</xref>) and historically(<xref ref-type="bibr" rid="b5">5</xref>).
Moreover, dance is considered an essential ingredient in the formation
and maintenance of group identities(<xref ref-type="bibr" rid="b6 b7 b8">6, 7, 8</xref>). For example,
researchers investigated the effects of an instructional program in
creative dance on the social development of preschool
children(<xref ref-type="bibr" rid="b9">9</xref>). Teachers’ and parents’ ratings of the children’s social skills, both before and
after the program, revealed significant gains in social competence,
suggesting that creative dance instruction for (at-risk) preschoolers
improves social interactions and behaviour(<xref ref-type="bibr" rid="b9">9</xref>). Therefore, dance
might exist today due to an evolutionary past in which it promoted
prosocial behaviour, increasing one’s ability to survive within
‘primitive’ communities(<xref ref-type="bibr" rid="b10 b11 b12 b13 b14">10, 11, 12, 13, 14</xref>).</p>

<p>Key to dance’s ability to engender social affiliation is, arguably,
its capacity to direct the attentional foci and cognitive resources of
individuals. Evidence for this, and for its effect upon interpersonal
memory, comes from a study of group dancing(<xref ref-type="bibr" rid="b15">15</xref>). In brief, the study required
untrained dancers to recall various attributes of one another after
having danced in groups, some synchronously, others
asynchronously(<xref ref-type="bibr" rid="b15">15</xref>). Results showed that those who danced
together in time were more likely to remember each other. It was
hypothesized that this may facilitate social bonding, which would
presumably be difficult to achieve in situations where interpersonal
memory was absent(<xref ref-type="bibr" rid="b15">15</xref>). For recent related research regarding
dance and social bonding, see(<xref ref-type="bibr" rid="b16">16</xref>).</p>

<p>In a related study, the eye movements of participants were recorded as
they observed pairs of dancers, one of whom danced in synchrony with a
musical track while the other danced asynchronously(<xref ref-type="bibr" rid="b17">17</xref>). Gaze
dwell-times amongst participants were significantly greater for the
music-synchronous dancer, indicating a possible mechanism through which
attention may have been directed towards the in-tempo dancers in the
group study(<xref ref-type="bibr" rid="b15">15</xref>). Moreover, they investigated fixations across
different body regions, including head, torso, legs and
feet(<xref ref-type="bibr" rid="b17">17</xref>). Perhaps paradoxically, given the importance of legs
and feet in most dancing, feet attracted significantly less dwell time
than any other body region. In sum, dance, in combination with music,
appears to have the ability to direct attention as detected in the eye
movements of observers. This parallels findings for everyday motion,
which also draws gaze toward the head(<xref ref-type="bibr" rid="b18">18</xref>).</p>

<p>Two types of eye movements are usually studied in scene-perception
research: fixations, during which the eyes remain still and new
information is acquired from the visual field, and saccades, movements
between fixations during which vision is suppressed and no new
information is gained(<xref ref-type="bibr" rid="b19">19</xref>). In reading research,
regressions—reverse saccades in which the eyes backtrack to the previous
fixation point—are frequently examined in relation to syntactic
comprehension(<xref ref-type="bibr" rid="b20 b21">20, 21</xref>). In scene perception, fixations
typically last 260–330 ms, interspersed with saccades lasting about 50
ms(<xref ref-type="bibr" rid="b19">19</xref>). Saccade lengths can differ significantly depending on
the type of image being viewed, and are about 40% longer for complex
natural scenes than abstract patterns(<xref ref-type="bibr" rid="b22">22</xref>).</p>

<p>While several individuals may observe the same dance, various studies
indicate that <italic>how</italic> each individual completes this
potentially cognitively demanding task depends upon context and
experience. For instance, researchers found that emotional arousal
increases upon repeated exposure to a musical piece, suggesting that, in
general, familiarity influences cognition, possibly by allowing
individuals to develop schematic expectations(<xref ref-type="bibr" rid="b23">23</xref>); see
also(<xref ref-type="bibr" rid="b24">24</xref>). And while it has been known since the early 1970s
that expertise and familiarity with static visual stimuli
significantly influence eye movements(<xref ref-type="bibr" rid="b25">25</xref>), it was only in the
1990s that researchers thoroughly investigated how expectations
influence human motion perception. For example, individuals’ schema of
biological motion guides them to fixate on the ends of limbs to track
the movement of human extremities(<xref ref-type="bibr" rid="b18">18</xref>).</p>

<p>With respect to dance, in a study examining the influence of
expertise on the observation of dance, it was found that people with
advanced dance training had shorter fixation durations and faster
saccades than novices(<xref ref-type="bibr" rid="b26">26</xref>). In their analysis of body-directed
fixations, the experienced choreographer in the study attended mostly to
the head of the dancers, while novices attended equally to the head,
neck, torso and arms(<xref ref-type="bibr" rid="b26">26</xref>). It was suggested that the eye
movements of the expert in their study were “likely guided by the
expectancies and schemata in long-term memory”, and that this was due to
them being “adept at abstracting and extracting key information from
complex movement material” (p. 23)(<xref ref-type="bibr" rid="b26">26</xref>). Which is to say,
fixations and saccades are influenced by the type of image being viewed
and expertise that, in turn, are suggestive of viewers’ underlying
cognitive processes, attentional foci, and schematic knowledge. The
differences between experts and novices that were found(<xref ref-type="bibr" rid="b26">26</xref>) are
consistent with earlier findings that the performance of expert
dancers on a dance-sequence recall task depended in part on the amount
of structure in the material, implying that subjects’ knowledge
base impacted memorization(<xref ref-type="bibr" rid="b27">27</xref>). For related neurological
research see(<xref ref-type="bibr" rid="b28">28</xref>).</p>

<p>Common to the work of (<xref ref-type="bibr" rid="b17 b26">17, 26</xref>) and others(<xref ref-type="bibr" rid="b29">29</xref>) is
the treatment of dance as a relatively abstract form of human movement,
seemingly divorced from its richer cultural setting in which, for
example, narrative meaning may be conveyed; although, see(<xref ref-type="bibr" rid="b30">30</xref>)
for the effect of culture on eye movements during scene perception. The
influence that narrative contexts can have on perception, and, in
particular, eye movements, has recently received increased attention.
Researchers studied the relationship between film viewers’ eye movements
and their comprehension of film narrative by investigating whether eye
movements differed based on understanding(<xref ref-type="bibr" rid="b31">31</xref>). Referred to as
the <italic>mental model</italic> hypothesis, this notion is distinct
from the alternative <italic>tyranny of film</italic> hypothesis, which
stipulates that differences due to understanding are overwhelmed by
viewers’ attentional synchrony(<xref ref-type="bibr" rid="b31">31</xref>).</p>

<p>In brief, two groups were presented with a short clip from a James
Bond movie in which a villain (“Jaws”) was about to fall from the sky
onto a circus tent(<xref ref-type="bibr" rid="b31">31</xref>). Critically, one group saw only the clip
while the other saw the preceding two-and-a-half minutes of the
movie(<xref ref-type="bibr" rid="b31">31</xref>). The researchers hypothesized that the second group,
who viewed the clip with its narrative context, would be better able to
draw critical inferences and have more coherent perceptions than the
group who viewed only the short clip(<xref ref-type="bibr" rid="b31">31</xref>). However, despite the
difference in the stimuli, both groups showed strong attentional
synchrony, and only small between-group variance(<xref ref-type="bibr" rid="b31">31</xref>). Overall,
then, the results were more consistent with the <italic>tyranny of
film</italic> hypothesis than the <italic>mental model</italic>
hypothesis, suggesting that narrative context may contribute less to eye
movements than visual features such as flicker and motion (i.e. temporal
contrast) during free-viewing of videos(<xref ref-type="bibr" rid="b31">31</xref>); see
also(<xref ref-type="bibr" rid="b32 b33">32, 33</xref>).</p>

<p>Our intention in this eye-tracking study was to develop some of the
research discussed above in an experiment that examined the effects of
expertise and dance narrative on eye movements—that is, dances which,
through various gesture sequences, attempt to convey specific,
real-world and/or religious meanings. To preempt our hypotheses
somewhat, we envisaged that the difference between experts and novices
would lead to differences in eye movements. A dance form that lends
itself particularly well to such an investigation is Indian Bharatanatyam. As
discussed above, while prior eye-tracking studies have identified
several factors that influence the processing of dance—such as
expertise—these factors have yet to be explored within broader cultural
contexts. Our study sought to provide an expanded cultural understanding
of the effects of expertise on observing dance using videos of
Bharatanatyam.</p>

    <sec id="S1a">
      <title>Bharatanatyam</title>

<p>Originating in the southern states of India, Bharatanatyam is an
ancient form of female classical dance that involves extensive formal
training, passed from teacher to student through years of mentorship,
dedication, and practice. The <italic>Natyasastra</italic> scriptures
explain Bharatanatyam with reference to a taxonomy of body movements:
<italic>nritta</italic> (abstract, ‘pure’ dance, performed without
expressing a particular theme or emotion), and <italic>nritya</italic>
(representational, interpretive dance, performed to convey emotions and
narrative themes); for a detailed explanation see(<xref ref-type="bibr" rid="b34">34</xref>). Both
<italic>nritta</italic> and <italic>nritya</italic> are produced by a
combination of movements and positions involving the feet, limbs, and
body, along with hand gestures and facial expressions. These elements
constitute the ‘lexicon’ of Bharatanatyam, are highly codified, and are
responsible for its distinctive look (along with its brightly coloured,
traditional costumes).</p>

<p>One way in which <italic>nritta</italic> and <italic>nritya</italic>
can be distinguished is through the facial expressions of the
dancers. <italic>Nritta</italic> is predominantly performed with a
smile, and, despite eye movements, the face has a fixed, somewhat
mask-like quality. In <italic>nritya</italic>, multiple dynamic facial
expressions can be enacted by the dancer as they portray contrasting
emotions, characters and themes. A further distinction is the use of
particular hand gestures and shapes, referred to as
<italic>hastas</italic> (or sometimes <italic>mudras</italic>).
During <italic>nritta</italic>, <italic>hastas</italic> convey no
meaning and are entirely decorative. In <italic>nritya</italic>,
<italic>hastas</italic> in combination with eye movements and facial
expressions can be used to describe objects, communicate concepts (e.g.
truth and beauty), and illustrate thoughts, actions, and emotions. In
short, within Bharatanatyam there are passages that are composed
entirely of abstract, ‘pure dance’ gestures (<italic>nritta</italic>),
whilst others are wholly interpretive and/or representational
(<italic>nritya</italic>).</p>

<p>Lastly, although different styles of Bharatanatyam are taught in
different schools, the stylistic differences lead to only slight
variations in rules, forms, and steps. Which is to say, Bharatanatyam
conforms to a general set of choreographic rules that span the art form.
For example, such requirements include that, in general, a dance step be
completed three times, and that movements are executed on the right side
of the body before being duplicated on the left. As these rules are
common across Bharatanatyam, it can be assumed that dancers with at
least five years of training will have an adequate understanding of all
the basic movements of Bharatanatyam; however, more experience (e.g. a
minimum of eight years) is usually required before an individual is
considered to be an expert within the discipline.</p>
    </sec>
	
    <sec id="S1b">
      <title>Hypotheses</title>

<p>The present study builds in part upon previous works(<xref ref-type="bibr" rid="b17 b26">17, 26</xref>)
by using eye-tracking to investigate the following four hypotheses: (1)
that experts (of Bharatanatyam) will have shorter fixations than
novices, which, if true, would be consistent with the notion that
experienced viewers observe dance more efficiently(<xref ref-type="bibr" rid="b26">26</xref>); (2)
that there will be differences in eye movements while observing
narrative dance versus non-narrative dance, and possibly an interaction
between the type of dance (narrative versus non-narrative) and
expertise, reflecting differences in veridical knowledge; (3) that more
fixations (and greater gaze dwell times) will occur in relation to the
upper body than lower(<xref ref-type="bibr" rid="b17">17</xref>); and (4) that there will be greater
attentional similarity between experts than novices due to the influence
of shared schematic knowledge concerning Bharatanatyam.</p>

<p>A description of the study’s methods (including participants,
stimuli, apparatus, procedure, and analysis), and results now
follows.</p>
    </sec>
    </sec>

    <sec id="S2">
      <title>Methods</title>
    <sec id="S2a">
      <title>Participants</title>

<p>Twenty-eight female undergraduate psychology students and volunteers
participated in the study. Participants were categorized into
Bharatanatyam experts—individuals possessing at least eight years of
formal training—and novices—individuals possessing no training or
knowledge of Bharatanatyam. The decision to include only female
participants was taken due to a preponderance of females amongst the
expert cohort; in order to maintain balance, female participants were
therefore also used within the novice cohort. There were 14 experts
(mean age = 18.81 years; SD = 1.03) and 14 novices (mean age = 18.92
years; SD = 0.53). Experts possessed a mean of 9.55 years of
Bharatanatyam training (SD = 1.03), and commenced training at about 5
years of age. Novices had no formal training in any dance form, and
reported that they had no knowledge of Bharatanatyam, nor had they
previously seen it performed. All participants had normal or
corrected-to-normal eyesight. Each participant provided informed consent
prior to the experiment; student participants were compensated with a
single course credit. All procedures involving the participants were
consistent with Canadian Tri-Council Policy; the study had ethics
clearance from the Research Ethics Board of the host institution.</p>
    </sec>

    <sec id="S2b">
      <title>Materials</title>

<p>The primary stimuli for this eye-tracking experiment were taken from
a solo Bharatanatyam dance performance (<italic>Arangetram</italic>),
presented in front of a live audience by the first author in November
2011. A collection of dance pieces, each ranging in length from 10 to 20
minutes, was selected and trimmed into sixteen video clips, each
approximately 30 seconds in duration. Eight of the videos presented
Bharatanatyam dance that was narrative in nature
(<italic>nritya</italic>), while the remaining eight videos were
non-narrative (<italic>nritta</italic>). For the narrative videos,
selections were made such that each video portrayed a storyline or
specific character. The stage, stage lighting, camera angle, and
dancer’s costume were consistent across the video clips; see Figure
1.</p>

<fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1.</label>
					<caption>
						<p>Stills from narrative (<italic>nritya</italic>; top) and
non-narrative (<italic>nritta</italic>; bottom) video stimuli.</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-11-02-k-figure-01.png"/>
				</fig>

<p>Carnatic music, an Indian classical music genre, accompanied the
videos and was performed by a small ensemble of musicians to the right
of the dancer on stage; however, the ensemble was not visible in the
videos. Carnatic music consists of two main elements:
<italic>rāga</italic>, the melodic-scalic component of the music, and
<italic>tāḷa</italic>, the rhythmic cycles(<xref ref-type="bibr" rid="b35">35</xref>). The music used
for the narrative and non-narrative videos had a neutral mood, and did not
differ with respect to valence (i.e., was neither overtly positive nor
negative in affect). The specific <italic>rāgas</italic>,
<italic>tāḷas</italic>, and whether the video was narrative or
non-narrative are shown in Table 1.</p>

<table-wrap id="t01" position="float">
					<label>Table 1.</label>
					<caption>
						<p>Dance title, rāga, and dance type associated with each video
(#). The tāḷa for all videos was Adi. The music was composed by
Alakananda Nath.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
    <thead>
      <tr>
        <th><bold>#</bold></th>
        <th><bold>Dance title</bold></th>
        <th><bold>Rāga</bold></th>
        <th><bold>Dance type</bold></th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td>1</td>
        <td>Keerthanam (Hanuman)</td>
        <td>Poorvikalyani</td>
        <td>Narrative</td>
      </tr>
      <tr>
        <td>2</td>
        <td>Keerthanam (Hanuman)</td>
        <td>Poorvikalyani</td>
        <td>Narrative</td>
      </tr>
      <tr>
        <td>3</td>
        <td>Keerthanam (Hanuman)</td>
        <td>Poorvikalyani</td>
        <td>Narrative</td>
      </tr>
      <tr>
        <td>4</td>
        <td>Keerthanam (Hanuman)</td>
        <td>Poorvikalyani</td>
        <td>Narrative</td>
      </tr>
      <tr>
        <td>5</td>
        <td>Keerthanam (Hanuman)</td>
        <td>Poorvikalyani</td>
        <td>Narrative</td>
      </tr>
      <tr>
        <td>6</td>
        <td>Keerthanam (Hanuman)</td>
        <td>Poorvikalyani</td>
        <td>Narrative</td>
      </tr>
      <tr>
        <td>7</td>
        <td>Keerthanam (Hanuman)</td>
        <td>Poorvikalyani</td>
        <td>Narrative</td>
      </tr>
      <tr>
        <td>8</td>
        <td>Keerthanam (Hanuman)</td>
        <td>Poorvikalyani</td>
        <td>Narrative</td>
      </tr>
      <tr>
        <td>9</td>
        <td>Keerthanam (Kimartham)</td>
        <td>Ragamalika</td>
        <td>Non-narrative</td>
      </tr>
      <tr>
        <td>10</td>
        <td>Keerthanam (Kimartham)</td>
        <td>Ragamalika</td>
        <td>Non-narrative</td>
      </tr>
      <tr>
        <td>11</td>
        <td>Mangalam</td>
        <td>Suruti</td>
        <td>Non-narrative</td>
      </tr>
      <tr>
        <td>12</td>
        <td>Thilana</td>
        <td>Hamirkalyani</td>
        <td>Non-narrative</td>
      </tr>
      <tr>
        <td>13</td>
        <td>Thilana</td>
        <td>Hamirkalyani</td>
        <td>Non-narrative</td>
      </tr>
      <tr>
        <td>14</td>
        <td>Thilana</td>
        <td>Hamirkalyani</td>
        <td>Non-narrative</td>
      </tr>
      <tr>
        <td>15</td>
        <td>Thilana</td>
        <td>Hamirkalyani</td>
        <td>Non-narrative</td>
      </tr>
      <tr>
        <td>16</td>
        <td>Thilana</td>
        <td>Hamirkalyani</td>
        <td>Non-narrative</td>
      </tr>
    </tbody>
  </table>
</table-wrap>
    </sec>

    <sec id="S2c">
      <title>Apparatus</title>

<p>Eye movements were recorded using a Mirametrix S2 Eye Tracker at a
sampling rate of 60 Hz for each eye; only data recorded from the right
eye were used in the subsequent analyses. Blinks were linearly
interpolated using the system’s eye-tracking software. The bright-pupil
tracking system (sometimes referred to as “red eye effect”, caused by
on-camera-axis illumination; see(<xref ref-type="bibr" rid="b36">36</xref>) for a detailed summary) had
a 0.5-degree accuracy range, drift rating of &#x3C;0.3 degrees, and
allowed users to move their heads within the width-height-depth range of
25 × 11 × 30 cm. Video stimuli were presented to participants on a 27”
monitor with a resolution of 1920 × 1080. The eye-tracker equipment sat
unobtrusively below the monitor, facing the user. An artificially lit
booth surrounded the monitor and participant to minimize glare and
distraction. Music was presented through AKG K 172 HD headphones, and
set to a comfortable level by each participant prior to calibrating the
eye-tracker. All participants’ data were exported with the system’s
EyeMetrix Software (Mirametrix Inc.).</p>
    </sec>

    <sec id="S2d">
      <title>Procedure</title>

<p>Participants completed a questionnaire regarding their formal dance
and music-training experiences. Following a 9-point eye-tracking
calibration process, the 16 video stimuli were presented in a randomized
order unique to each participant. A black screen appeared for three
seconds between each video. Participants were instructed to observe the
dances in no specific manner, but simply to relax and watch the videos
as if viewing under normal conditions. Participants were also instructed
to tap along to the underlying beat of the music using the computer
mouse. This relatively undemanding task ensured that participants
attended to both the visual and acoustic/musical elements of the
stimuli; in most cases, individuals tend to seek out and move
synchronously (and sometimes spontaneously) to an observed
beat(<xref ref-type="bibr" rid="b37 b38">37, 38</xref>). The video-watching portion of the experiment
lasted approximately 12 minutes.</p>
    </sec>

    <sec id="S2e">
      <title>Analysis</title>

<p>In order to reduce “jitter” and “flicker” effects of the eye-tracking
system, and possible artifacts of its data-parsing algorithm, fixations
below 100 ms were omitted from the analysis; this resulted in
approximately 10% of the data being lost. For discussion on the relative
merits of omitting fixation durations below a certain threshold and
data-processing algorithms, see(<xref ref-type="bibr" rid="b39">39</xref>). In terms of raw data, each
participant produced a single data file which contained all their
fixation information for all videos. These data files contained the
following columns: <italic>Observation number, Frame number, Time stamp,
X and Y positions for both Left and Right eyes, and Pupil diameter
information.</italic> The frame number and time stamp columns were linked
such that each frame equated to 16.66 milliseconds (i.e. 60 Hz). The
total number of observations (i.e. rows) per participant data file was
in the region of 35,000. From the eye-tracking analysis software, we
exported data frames consisting only of fixations greater than 100 ms
(as mentioned above), together with their x- and y-coordinates and time
stamp information.</p>
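<p>The 100 ms fixation cut-off and the 60 Hz frame-to-time conversion described above can be sketched as follows. This is an illustrative reconstruction in Python, not the authors’ actual R/MATLAB code; the function names and dictionary keys are hypothetical.</p>

```python
# Illustrative sketch (hypothetical names): dropping fixations shorter than
# 100 ms and converting 60 Hz frame counts to milliseconds.

def frames_to_ms(n_frames, rate_hz=60):
    """Each 60 Hz frame spans 1000/60 ~= 16.66 ms."""
    return n_frames * 1000.0 / rate_hz

def filter_short_fixations(fixations, min_ms=100):
    """Omit fixations below min_ms, as in the analysis described above."""
    return [f for f in fixations if f["duration_ms"] >= min_ms]

fixations = [
    {"x": 512, "y": 300, "duration_ms": frames_to_ms(20)},  # ~333 ms, kept
    {"x": 640, "y": 410, "duration_ms": frames_to_ms(5)},   # ~83 ms, dropped
]
kept = filter_short_fixations(fixations)
```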

<p>Repeated-measures three-way mixed analyses of variance (ANOVAs),
with Dance-type and Region of Interest (ROI) as within-subject factors,
and Expertise as a between-subject factor, were run separately on two
dependent variables: fixation duration and dwell time. The data frame
for these analyses consisted of 112 rows and 7 columns with the
following headings: Subject ID; Expertise; Dance-type; ROI; Mean
fixation percentage; Fixation duration SD; Dwell time percentage. Within
Expertise there were two levels, expert and novice. There were also two
levels within Dance-type: narrative and non-narrative. ROI consisted of
the screen horizontally divided into two fixed, equally sized regions:
top, which covered the dancer’s upper body, and bottom, covering the
dancer’s lower body. It should be noted that this split did not
absolutely, nor consistently, divide the dancer’s body into two equal
parts (i.e. head/torso/arms and hips/legs/feet) due to the movement of
the dancer. Expertise and Dance-type with respect to average fixation
duration per participant for each factor combination were used to
investigate Hypotheses (1) and (2); ROI in relation to percentage dwell
time was used to investigate Hypothesis (3).</p>
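<p>As a check on the factor structure, the 112 rows of the data frame follow directly from the fully crossed design: 28 participants × 2 dance types × 2 ROIs. A minimal Python sketch of this layout (column names are hypothetical; the dependent-variable columns are omitted):</p>

```python
# Sketch of the fully crossed factor layout behind the 112-row data frame:
# 28 participants x 2 dance types x 2 ROIs = 112 rows.
from itertools import product

subjects = [f"S{i:02d}" for i in range(1, 29)]  # 14 experts + 14 novices
expertise = {s: ("expert" if i < 14 else "novice") for i, s in enumerate(subjects)}

rows = [
    {"subject": s, "expertise": expertise[s], "dance_type": d, "roi": r}
    for s, d, r in product(subjects, ["narrative", "non-narrative"], ["top", "bottom"])
]
assert len(rows) == 112  # matches the data frame described above
```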

<p>In order to investigate Hypothesis (4)—that experts will have greater
attentional synchrony due to the influence of shared schematic
knowledge—each video was divided into overlapping time windows of 1,000
ms, advancing in steps of 500 ms, producing a total of 60 time windows per
video. Fixations were only included in the time windows in which they
began, not in subsequent time windows. Thus, if a fixation began in
Window 1 and ended in Window 2, its position data were only included in
Window 1, not 2. Each fixation was associated with positional
coordinates (x, y) with which average SDs for x and y fixation positions
were calculated, and then used to calculate average fixation position
SDs. The fixation position SDs of each 1,000 ms time window,
corresponding to each video, were analyzed using a repeated-measures
two-way ANOVA, with <italic>Expertise</italic> as a between-subject
factor and <italic>Dance-type</italic> as a within-subject factor.
Outliers, i.e. data points more than 1.5 times the interquartile range
above the upper quartile or below the lower quartile, were removed from
this analysis. This resulted in a core data set
consisting of 1898 rows and 8 columns with the following headings:
<italic>Expertise; Semantics; Video ID; Window ID; Window start time;
Average SD, x-axis; Average SD, y-axis; Average SD, x- and
y-axes.</italic> This analysis also enabled us to further test
Hypothesis (2)—that there will be differences in eye movements while
observing narrative versus non-narrative dance.</p>
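<p>The windowing, dispersion, and outlier-removal steps above can be sketched as follows. This is a minimal Python illustration assuming 30 s videos; the function names are hypothetical, and the standard-library <italic>statistics</italic> routines stand in for the MATLAB computations.</p>

```python
# Illustrative sketch: 1,000 ms windows advancing by 500 ms over a 30 s video
# (60 window onsets), per-window dispersion as the mean of the x- and
# y-position SDs, and 1.5 x IQR (Tukey-style) outlier removal.
import statistics

def window_onsets(video_ms=30_000, step_ms=500):
    # Onsets 0, 500, ..., 29,500 -> 60 overlapping 1,000 ms windows; a
    # fixation is counted only in the window in which it begins.
    return list(range(0, video_ms, step_ms))

def dispersion(fix_xy):
    """Mean of the x- and y-position SDs for the fixations in one window."""
    if len(fix_xy) < 2:
        return None  # dispersion is undefined for fewer than two fixations
    xs, ys = zip(*fix_xy)
    return (statistics.stdev(xs) + statistics.stdev(ys)) / 2

def remove_outliers(values):
    """Drop points more than 1.5 x IQR above Q3 or below Q1."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if lo <= v <= hi]
```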

<p>All data were analyzed using the open-source statistical package R
(2.15.0, GUI 1.51). MATLAB (R2014) was used to calculate the fixation
SDs per time window per video. Effect sizes are reported with partial
eta-squared values.</p>
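Partial eta-squared, the effect-size measure used here, is defined as SS_effect / (SS_effect + SS_error), i.e. the proportion of effect-plus-error variability attributable to the effect. A one-line sketch (the sums of squares passed in are invented for illustration):

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta^2 = SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)
```

For example, an effect accounting for 2.0 units of sums of squares against 6.0 units of error sums of squares yields a partial eta-squared of 0.25.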
    </sec>
    </sec>

    <sec id="S3">
      <title>Results</title>
    <sec id="S3a">
      <title>Fixation duration</title>

<p>There was a significant main effect of <italic>Expertise</italic>
[<italic>F</italic>(1,72) = 6.478, <italic>p</italic> &#x3C; 0.05, η&#x00B2; = 0.009], and of <italic>ROI</italic>
[<italic>F</italic>(1,72) = 4.315, <italic>p</italic> &#x3C; 0.05, η&#x00B2; = 0.008], but not of
<italic>Dance-type</italic> (<italic>F</italic> &#x3C; 1). Expert participants
had significantly shorter fixation durations than novices (see Figure
2); participants’ fixation durations were significantly greater when
observing the top of the screen versus the bottom (see Figure 3).
Whether the dance was narrative or non-narrative had no effect on
fixation duration. No significant interactions were found between the
factors (<italic>F</italic> &#x3C; 1), and excluding outliers did not affect the
significance of the results.</p>

<fig id="fig02" fig-type="figure" position="float">
					<label>Figure 2.</label>
					<caption>
						<p>Boxplots of mean fixation durations for <italic>Expertise</italic>
(experts and novices).</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-11-02-k-figure-02.png"/>
				</fig>

<fig id="fig03" fig-type="figure" position="float">
					<label>Figure 3.</label>
					<caption>
						<p>Boxplots of mean fixation durations for <italic>ROI</italic> (bottom
and top).</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-11-02-k-figure-03.png"/>
				</fig>        
    </sec>

    <sec id="S3b">
      <title>Dwell time</title>
      
<p>There was a significant main effect of <italic>ROI</italic>
[<italic>F</italic>(1,72) = 51.424, <italic>p</italic> &#x3C; 0.001, η&#x00B2; = 0.054], but not of
<italic>Expertise</italic> or <italic>Dance-type</italic> (<italic>F</italic> &#x3C; 1).
Participants spent significantly more time observing the top part of the
screen (see Figure 4), irrespective of whether they were experts or
novices, or whether the dance was narrative or non-narrative in nature.
The non-significant result for <italic>Expertise</italic> and <italic>Dance-type</italic> was not
surprising given that experts and novices viewed all narrative and
non-narrative videos for an equal length of time. No significant
interactions were found between the factors (<italic>F</italic> &#x3C; 1), and excluding
outliers did not affect the significance of the results.</p>

<fig id="fig04" fig-type="figure" position="float">
					<label>Figure 4.</label>
					<caption>
						<p>Boxplots of dwell time for <italic>ROI</italic> (bottom and top).
Dwell time is expressed as a percentage per participant.</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-11-02-k-figure-04.png"/>
				</fig>
    </sec>

    <sec id="S3c">
      <title>Fixation position SD</title>

<p>There was a significant main effect of <italic>Expertise</italic>
[<italic>F</italic>(1,1890) = 7.074, <italic>p</italic> &#x3C; 0.01, η&#x00B2; =
0.004]; experts were found to have smaller fixation position SDs
compared to novices (see Figure 5). This finding is consistent with the
heat maps generated for experts and novices (Figure 6). A significant
main effect of <italic>Dance-type</italic> was also found
[<italic>F</italic>(1,1890) = 4.693, <italic>p</italic> &#x3C; 0.05, η&#x00B2; =
0.002]; non-narrative stimuli yielded larger fixation-position SDs than
narrative stimuli (see Figure 7). No significant interactions were found
between the factors (<italic>F</italic> &#x3C; 1), and excluding outliers
did not affect the significance of the results.</p>

<fig id="fig05" fig-type="figure" position="float">
					<label>Figure 5.</label>
					<caption>
						<p>Notched boxplots of mean fixation position SD per
time window for <italic>Expertise</italic> (experts and novices).</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-11-02-k-figure-05.png"/>
				</fig>

<fig id="fig06" fig-type="figure" position="float">
					<label>Figure 6.</label>
					<caption>
						<p>Heat maps showing the relative dispersion of fixations
for experts and novices. Red areas depict higher dwell time;
blue depicts lower dwell time.</p>
					</caption>
					<graphic id="graph06" xlink:href="jemr-11-02-k-figure-06.png"/>
				</fig>

 <fig id="fig07" fig-type="figure" position="float">
					<label>Figure 7.</label>
					<caption>
						<p>Notched boxplots of mean fixation position SD per
time window for <italic>Dance-type</italic> (non-narrative and narrative).</p>
					</caption>
					<graphic id="graph07" xlink:href="jemr-11-02-k-figure-07.png"/>
				</fig>               
    </sec>
    </sec>

    <sec id="S4">
      <title>Discussion</title>

<p>This study explored the extent to which expertise and narrative
content influenced the eye movements of people observing Bharatanatyam
dance. Two dependent variables were measured: fixation duration and
dwell time. A further analysis calculated the dispersion of fixations
within 1,000 ms time windows, advancing in increments of 500 ms. Despite
the relatively small partial eta-squared effect sizes, the analyses determined
that expert viewers possessed greater attentional similarity than
novices: lower fixation-position SDs within a given time period are
indicative of greater fixation alignment, and thus increased attentional
similarity.</p>

<p>The above was conducted in order to test four related hypotheses.
Hypothesis (1)—that novices will have longer fixations than experts—was
supported by the data: experts in our study did have shorter fixation
durations than novices, suggesting that the inexperienced participants
observed the dance videos less efficiently (Figure 2). This result is
consistent with previous research(<xref ref-type="bibr" rid="b26">26</xref>), which reported a similar
effect in a novice-expert dance study. However, while those findings
concerned western contemporary dance(<xref ref-type="bibr" rid="b26">26</xref>), our study
sought to expand and/or generalize this effect to a very different
cultural tradition, i.e. classical dance of southern India.</p>

<p>Hypothesis (2)—that narrative dance versus non-narrative dance will
produce differences in eye movements—was not supported by the
fixation-duration data, which showed no statistical differences within
factor <italic>Dance-type</italic>. This hypothesis was based on
previous research showing that, even in the absence of any visual
stimuli, contextual information influences oculomotor responses(<xref ref-type="bibr" rid="b40">40</xref>).
Prior to conducting the experiment, we had speculated that longer
fixation durations might occur for the narrative videos as these dances
contain richer semantic information, and thus, arguably, require greater
attention; however, this was not the case in this instance. That said, a
significant effect of <italic>Dance-type</italic> was observed when the
dependent variable was fixation position SD (Figure 7). Whether this
effect was genuinely cognitive (e.g. involving different attentional
resources or processes), or simply due to the dancer moving more in the
non-narrative videos is, however, uncertain—dynamic video-contrast
analysis(<xref ref-type="bibr" rid="b32">32</xref>) would be required in order to answer this point
definitively. Be that as it may, given that we are uncertain as to the
cause of the effect of <italic>Dance-type</italic> on fixation position
SD, the findings above cannot be said to support the conclusions
discussed in the Introduction(<xref ref-type="bibr" rid="b31">31</xref>); namely, that narrative
contexts may contribute less to eye movements than visual features such
as motion (i.e. temporal contrast) during free-viewing of videos.</p>

<p>Hypothesis (3)—that greater gaze dwell times will occur in relation
to the upper body—was conclusively found to be the case (Figure 4). This
result strongly aligns with the perhaps counter-intuitive finding that
the feet of the dancer attracted the fewest
fixations(<xref ref-type="bibr" rid="b17">17</xref>). Feet are arguably
a dancer’s greatest asset(<xref ref-type="bibr" rid="b41">41</xref>); it could be considered
paradoxical, then, that when observing dance, we seem to spend the
least amount of time fixating on this part of the body. That said, an
important confound should be mentioned. In scene viewing, gaze direction
can be predicted using a combination of saliency and face
detection(<xref ref-type="bibr" rid="b42">42</xref>), neither of which depends on whether the scene
involves dance. The effects of saliency and face detection significantly
bias gaze towards the upper screen area, in which the face of our dancer
was almost invariably located. As a result, no firm conclusions can be
drawn regarding whether dance per se specifically directs attention towards
the head.</p>

<p>An interesting finding concerning factor <italic>ROI</italic> was
that fixation durations were significantly shorter for the lower versus
upper part of the dancer’s body (Figure 3). We confess to being somewhat
puzzled by this, although it could be related to the fact that
participants spent significantly less time observing the lower part of
the dancer’s body (Figure 4). Perhaps observers genuinely find dancers’
legs and feet less interesting (or, at least, informationally
impoverished relative to the upper body and head), in which case, when
their gaze is drawn downwards, a vertical saccade abruptly directs
foveal vision upwards. Recent research supports this conjecture: dance
has been shown to communicate group coordination through the combined
movement dynamics of its performers, with movement synchrony among a
group of performers predicting the aesthetic appreciation of live dance
performances(<xref ref-type="bibr" rid="b43">43</xref>). The authors
concluded that these findings accord with the evolutionary function of
dance in transmitting social signals between people through human
movement(<xref ref-type="bibr" rid="b43">43</xref>).</p>

<p>Hypothesis (4)—that experts will have greater attentional
similarity—was supported by the data: the fixations of the experts in
our study were, in general, more tightly clustered than those of the
novices, both spatially and temporally (Figure 5). This is
particularly noticeable in the dwell time heat maps in Figure 6: the
lower image (novices) clearly has a larger and more diffuse gaze-pattern
relative to the upper (experts). This result lends support to the
conjecture, referred to in the Introduction, that experienced observers
of dance have choreographic schemata stored in long-term memory, which
enables them to target their attention towards (shared) salient elements
within the dance(<xref ref-type="bibr" rid="b26">26</xref>). In turn, this results in a higher degree
of attentional similarity between participants, as expressed in
fixation-dispersion data. Conversely, inexperienced viewers produce more
scattered fixations as they (variously) attempt to make sense of and
predict the dancer’s movements. Interestingly, within Bharatanatyam,
hand movements are generally considered more important than leg
movements, a fact which would have been known to our expert
participants, but not novices. Therefore, although the above finding may
be due, in part, to dance schemata, the possibility that explicit
Bharatanatyam knowledge may have also influenced our results cannot be
ruled out.</p>

<p>The fact that our video stimuli were based on examples of
Bharatanatyam has both benefits and drawbacks from an experimental
perspective. A particular benefit of using excerpts from an authentic
Bharatanatyam performance was that they provided the study with a degree
of ecological validity—the stimuli were not unduly controlled or
contrived, nor stripped of their cultural richness as expressed in the
dancer’s costume and the accompanying Carnatic music. It is legitimate
to claim, therefore, that our data, however imperfect, were at least
produced in response to a real-world phenomenon, and are thus
potentially applicable or relatable to other similar global dance
practices. For example, future research could similarly examine
Kathakali, which, like Bharatanatyam, is a major form of classical
Indian dance involving storytelling, but which, in contrast to
Bharatanatyam is predominantly performed by male actor-dancers.</p>

<p>With respect to our study’s limitations, because only one dancer was
included, factors specific to this individual that were not controlled
for in the experiment may have skewed our results. That said, given the highly
codified nature of Bharatanatyam, achieved through multiple years of
training, it would seem likely that our findings would be replicated
using other dancers. A further potential drawback, previously mentioned,
is that our participants were all female. Given that Bharatanatyam is
most commonly danced by females, experts in this discipline also tend
to be female, if not, for all practical purposes, exclusively so. Our
desire to avoid a gender imbalance between the experts and novices
naturally led to the exclusion of male (novice) participants, which may,
in itself, impose restrictions on the degree to which our results are
generalizable. One further limitation concerns the interdependence of
the music and dance—these two crucial factors were not separately
manipulated and thus it is possible that there was an undetected
interaction between the two, i.e. there may have been factors within the
music that interacted with particular gestures, giving rise to specific
eye movements. That said, as mentioned previously, all the music was of
a similar emotional character and mood, and thus we believe it is
unlikely that there were any significant interactions.</p>

<p>Eye-tracking cameras invariably produce a wealth of data that can be
analyzed using a system’s proprietary software and/or exported to other
analytical packages. In this regard, our decision to concentrate on
fixation durations and dispersions, and dwell time may seem unduly
restrictive: saccade and pupillometric information could, in theory, have
also been included in the analysis. Our reason for not doing so lay in
the hypotheses we wished to test and their relationship to previous
research(<xref ref-type="bibr" rid="b26 b17">26, 17</xref>). Multiple additional statistics
could, no doubt, have been included; the extent to which these would
have enriched or detracted from the study is, however, open to
debate.</p>
    </sec>

    <sec id="S5">
      <title>Summary</title>

<p>Our aim was to extend to a broader cultural context a series of
findings derived largely from western dance(<xref ref-type="bibr" rid="b17 b29">17, 29</xref>), and in this
regard the study achieved its main goal. Data consistent with three of
our four hypotheses were produced by stimuli consisting of videos of an
actual Bharatanatyam dance performance: experts had shorter fixations
and greater attentional similarity; greater gaze dwell times occurred
predominantly in relation to the upper body. The remaining hypothesis
received only mixed support: fixation durations showed no difference
between narrative and non-narrative videos, whereas fixation dispersion
patterns did differ. In sum, the study assists in building a nuanced
picture of some of the eye movements associated with Bharatanatyam, and,
in so doing, helps pave the way for research investigating other
non-western dance forms.</p>
    </sec>

    <sec id="S6" sec-type="COI-statement">
      <title>Ethics and Conflict of Interest</title>

<p>The author(s) declare(s) that the contents of the article are in
agreement with the ethics described in
<ext-link ext-link-type="uri" xlink:href="http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html" xlink:show="new">http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html</ext-link>
and that there is no conflict of interest regarding the publication of
this paper.</p>
    </sec>

    <sec id="S7">
      <title>Acknowledgements</title>

<p>This research was supported by funding from the School of the Arts,
McMaster University, and the Arts Research Board—Social Sciences and
Humanities Research Council (SSHRC), Institutional Grants (SIG)
program.</p>

<p>We especially wish to thank Ms. Colleen Tang Poy for her detailed
reading of the manuscript, and for her helpful edits and suggestions.
Thanks are also due to Mr. James Anthony for his assistance in
processing the time-window information. In addition to the university
students who took part in the experiment, special mention should be made
of our Bharatanatyam experts without whom this study would have been
impossible. We are deeply indebted to them for their input, made
possible through years of dedication to dance.</p>
    </sec>
 
</body>
<back>
<ref-list>
<ref id="b22"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Andrews</surname>, <given-names>T. J.</given-names></name>, &#x26; <name><surname>Coppola</surname>, <given-names>D. M.</given-names></name></person-group> (<year>1999</year>). <article-title>Idiosyncratic characteristics of saccadic eye movements when viewing different visual environments.</article-title> <source>Vision Research</source>, <volume>39</volume>(<issue>17</issue>), <fpage>2947</fpage>&#8211;<lpage>2953</lpage>. <pub-id pub-id-type="doi">10.1016/S0042-6989(99)00019-X</pub-id><pub-id pub-id-type="pmid">10492820</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b33"><mixed-citation publication-type="book" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Batten</surname> <given-names>JP</given-names></name>&#x26;<name><surname>Smith</surname> <given-names>TJ</given-names></name></person-group>. <article-title>Looking at sound: sound design and the audiovisual influences on gaze.</article-title>&#160;<source>Seeing into Screens: Eye Tracking and the Moving Image.</source> <year>2018</year>. p. 85&#8211;102</mixed-citation></ref>
<ref id="b35"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Bhagyalekshmy</surname>, <given-names>S.</given-names></name></person-group> (<year>1990</year>). <source>Ragas in Carnatic music</source>. <publisher-name>South Asia Books</publisher-name>.</mixed-citation></ref>
<ref id="b12"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Bispham</surname>, <given-names>J.</given-names></name></person-group> (<year>2006</year>). <article-title>Rhythm in music: What is it? Who has it? And why?</article-title>. <source>Music Perception: An Interdisciplinary J.</source>, <volume>24</volume>(<issue>2</issue>), <fpage>125</fpage>&#8211;<lpage>134</lpage>.</mixed-citation></ref>
<ref id="b2"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Bl&#228;sing</surname>, <given-names>B.</given-names></name>, <name><surname>Calvo-Merino</surname>, <given-names>B.</given-names></name>, <name><surname>Cross</surname>, <given-names>E. S.</given-names></name>, <name><surname>Jola</surname>, <given-names>C.</given-names></name>, <name><surname>Honisch</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Stevens</surname>, <given-names>C. J.</given-names></name></person-group> (<year>2012</year>). <article-title>Neurocognitive control in dance perception and performance.</article-title> <source>Acta Psychologica</source>, <volume>139</volume>(<issue>2</issue>), <fpage>300</fpage>&#8211;<lpage>308</lpage>. <pub-id pub-id-type="doi">10.1016/j.actpsy.2011.12.005</pub-id><pub-id pub-id-type="pmid">22305351</pub-id><issn>0001-6918</issn></mixed-citation></ref>
<ref id="b38"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Burger</surname>, <given-names>B.</given-names></name>, <name><surname>Saarikallio</surname>, <given-names>S.</given-names></name>, <name><surname>Luck</surname>, <given-names>G.</given-names></name>, <name><surname>Thompson</surname>, <given-names>M. R.</given-names></name>, &#x26; <name><surname>Toiviainen</surname>, <given-names>P.</given-names></name></person-group> (<year>2013</year>). <article-title>Relationships between perceived emotions in music and music-induced movement.</article-title> <source>Music Perception: Interdiscip J.</source>, <volume>30</volume>(<issue>5</issue>), <fpage>517</fpage>&#8211;<lpage>533</lpage>. <pub-id pub-id-type="doi">10.1525/mp.2013.30.5.517</pub-id></mixed-citation></ref>
<ref id="b30"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Chua</surname>, <given-names>H. F.</given-names></name>, <name><surname>Boland</surname>, <given-names>J. E.</given-names></name>, &#x26; <name><surname>Nisbett</surname>, <given-names>R. E.</given-names></name></person-group> (<year>2005</year>). <article-title>Cultural variation in eye movements during scene perception.</article-title> <source>Proceedings of the National Academy of Sciences of the United States of America</source>, <volume>102</volume>(<issue>35</issue>), <fpage>12629</fpage>&#8211;<lpage>12633</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0506162102</pub-id><pub-id pub-id-type="pmid">16116075</pub-id><issn>0027-8424</issn></mixed-citation></ref>
<ref id="b8"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Cirelli</surname>, <given-names>L. K.</given-names></name>, <name><surname>Einarson</surname>, <given-names>K. M.</given-names></name>, &#x26; <name><surname>Trainor</surname>, <given-names>L. J.</given-names></name></person-group> (<year>2014</year>). <article-title>Interpersonal synchrony increases prosocial behavior in infants.</article-title> <source>Developmental Science</source>, <volume>17</volume>(<issue>6</issue>), <fpage>1003</fpage>&#8211;<lpage>1011</lpage>. <pub-id pub-id-type="doi">10.1111/desc.12193</pub-id><pub-id pub-id-type="pmid">25513669</pub-id><issn>1363-755X</issn></mixed-citation></ref>
<ref id="b5"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Garfinkel</surname>, <given-names>Y.</given-names></name></person-group> (<year>2010</year>). <source>Dancing at the dawn of agriculture</source>. <publisher-name>University of Texas Press</publisher-name>.</mixed-citation></ref>
<ref id="b4"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Gundlach</surname>, <given-names>H. B.</given-names></name></person-group> (<year>2006</year>). <chapter-title>Imitation to&#160;Modification and&#160;Creation: Religious&#160;Dance in Contemporary Germany</chapter-title>. In <person-group person-group-type="editor"><name><given-names>E.</given-names> <surname>Arweck</surname></name> &#x26; <name><given-names>W. J.</given-names> <surname>Keenan</surname></name> (<role>Eds.</role>),</person-group> <source>Materializing religion: Expression, performance and ritual</source> (pp. <fpage>89</fpage>&#8211;<lpage>98</lpage>). <publisher-name>Ashgate Publishing</publisher-name>.</mixed-citation></ref>
<ref id="b1"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Hanna</surname>, <given-names>J.</given-names></name></person-group> (<year>1987</year>). <source>To dance is human: A theory of nonverbal communication</source>. <publisher-loc>Chicago</publisher-loc>: <publisher-name>University of Chicago Press</publisher-name>.</mixed-citation></ref>
<ref id="b36"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Holmqvist</surname>, <given-names>K.</given-names></name>, <name><surname>Nystr&#246;m</surname>, <given-names>M.</given-names></name>, <name><surname>Andersson</surname>, <given-names>R.</given-names></name>, <name><surname>Dewhurst</surname>, <given-names>R.</given-names></name>, &#x26; <name><surname>Jarodzka</surname>, <given-names>H.</given-names></name></person-group> (<year>2011</year>). <source>Van de Weijer J.&#160;Eye tracking: A comprehensive guide to methods and measures</source>. <publisher-name>OUP Oxford</publisher-name>.</mixed-citation></ref>
<ref id="b27"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Jean</surname>, <given-names>J.</given-names></name>, <name><surname>Cadopi</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Ille</surname>, <given-names>A.</given-names></name></person-group> (<year>2001</year>). <article-title>How are dance sequences encoded and recalled by expert dancers.</article-title> <comment>[Current Psychology of Cognition]</comment>. <source>Cahiers de Psychologie Cognitive</source>, <volume>20</volume>(<issue>5</issue>), <fpage>325</fpage>&#8211;<lpage>337</lpage>.<issn>0249-9185</issn></mixed-citation></ref>
<ref id="b28"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Jola</surname> <given-names>C</given-names></name>, <name><surname>McAleer</surname> <given-names>P</given-names></name>, <name><surname>Grosbras</surname> <given-names>MH</given-names></name>, <name><surname>Love</surname> <given-names>SA</given-names></name>, <name><surname>Morison</surname> <given-names>G</given-names></name> &#x26; <name><surname>Pollick</surname> <given-names>FE</given-names></name></person-group>. Uni- and multisensory brain areas are synchronised across spectators when watching unedited dance recordings.&#160;i-Perception.&#160;<year>2013</year>;4(4):265&#8211;284.</mixed-citation></ref>
<ref id="b42"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Judd</surname> <given-names>T</given-names></name>, <name><surname>Ehinger</surname> <given-names>K</given-names></name>, <name><surname>Durand</surname> <given-names>F</given-names></name> &#x26; <name><surname>Torralba</surname> <given-names>A</given-names></name></person-group>. Learning to predict where humans look. IEEE 12<sup>th</sup> International Conference on Computer Vision; <year>2009</year> <month>Sept.</month> p. 2106&#8211;2113.</mixed-citation></ref>
<ref id="b6"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Kirschner</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Tomasello</surname>, <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Joint music making promotes prosocial behavior in 4-year-old children.</article-title> <source>Evolution and Human Behavior</source>, <volume>31</volume>(<issue>5</issue>), <fpage>354</fpage>&#8211;<lpage>364</lpage>. <pub-id pub-id-type="doi">10.1016/j.evolhumbehav.2010.04.004</pub-id><issn>1090-5138</issn></mixed-citation></ref>
<ref id="b25"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Kundel</surname>, <given-names>H. L.</given-names></name>, &#x26; <name><surname>La Follette</surname>, <given-names>P. S.</given-names>, <suffix>Jr</suffix>.</name></person-group> (<year>1972</year>). <article-title>Visual search patterns and experience with radiological images.</article-title> <source>Radiology</source>, <volume>103</volume>(<issue>3</issue>), <fpage>523</fpage>&#8211;<lpage>528</lpage>. <pub-id pub-id-type="doi">10.1148/103.3.523</pub-id><pub-id pub-id-type="pmid">5022947</pub-id><issn>0033-8419</issn></mixed-citation></ref>
<ref id="b20"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Liversedge</surname>, <given-names>S. P.</given-names></name>, &#x26; <name><surname>Findlay</surname>, <given-names>J. M.</given-names></name></person-group> (<year>2000</year>). <article-title>Saccadic eye movements and cognition.</article-title> <source>Trends in Cognitive Sciences</source>, <volume>4</volume>(<issue>1</issue>), <fpage>6</fpage>&#8211;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1016/S1364-6613(99)01418-7</pub-id><pub-id pub-id-type="pmid">10637617</pub-id><issn>1364-6613</issn></mixed-citation></ref>
<ref id="b9"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Lobo</surname>, <given-names>Y. B.</given-names></name>, &#x26; <name><surname>Winsler</surname>, <given-names>A.</given-names></name></person-group> (<year>2006</year>). <article-title>The effects of a creative dance and movement program on the social competence of head start preschoolers.</article-title> <source>Social Development</source>, <volume>15</volume>(<issue>3</issue>), <fpage>501</fpage>&#8211;<lpage>519</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-9507.2006.00353.x</pub-id><issn>0961-205X</issn></mixed-citation></ref>
<ref id="b31"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Loschky</surname>, <given-names>L. C.</given-names></name>, <name><surname>Larson</surname>, <given-names>A. M.</given-names></name>, <name><surname>Magliano</surname>, <given-names>J. P.</given-names></name>, &#x26; <name><surname>Smith</surname>, <given-names>T. J.</given-names></name></person-group> (<year>2015</year>). <article-title>What would Jaws do? The tyranny of film and the relationship between gaze and higher-level narrative film comprehension.</article-title> <source>PLoS One</source>, <volume>10</volume>(<issue>11</issue>), <fpage>e0142474</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0142474</pub-id><pub-id pub-id-type="pmid">26606606</pub-id><issn>1932-6203</issn></mixed-citation></ref>
<ref id="b29"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Luck</surname>, <given-names>G.</given-names></name>, <name><surname>Saarikallio</surname>, <given-names>S.</given-names></name>, <name><surname>Burger</surname>, <given-names>B.</given-names></name>, <name><surname>Thompson</surname>, <given-names>M. R.</given-names></name>, &#x26; <name><surname>Toiviainen</surname>, <given-names>P.</given-names></name></person-group> (<year>2010</year>). <article-title>Effects of the Big Five and musical genre on music-induced movement.</article-title> <source>Journal of Research in Personality</source>, <volume>44</volume>(<issue>6</issue>), <fpage>714</fpage>&#8211;<lpage>720</lpage>. <pub-id pub-id-type="doi">10.1016/j.jrp.2010.10.001</pub-id><issn>0092-6566</issn></mixed-citation></ref>
<ref id="b41"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Macaulay</surname> <given-names>A.</given-names></name></person-group> Notice the feet in that body of work: The New York Times; <year>2009</year> [<date-in-citation content-type="access-date">cited 2018 March 1</date-in-citation>]. Available from <ext-link ext-link-type="uri" xlink:href="http://www.nytimes.com/2009/12/13/arts/dance/13feet.html?pagewanted=all">http://www.nytimes.com/2009/12/13/arts/dance/13feet.html?pagewanted=all</ext-link></mixed-citation></ref>
<ref id="b18"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Matari&#263;</surname>, <given-names>M. J.</given-names></name>, &#x26; <name><surname>Pomplun</surname>, <given-names>M.</given-names></name></person-group> (<year>1998</year>). <article-title>Fixation behavior in observation and imitation of human movement.</article-title> <source>Brain Research. Cognitive Brain Research</source>, <volume>7</volume>(<issue>2</issue>), <fpage>191</fpage>&#8211;<lpage>202</lpage>. <pub-id pub-id-type="doi">10.1016/S0926-6410(98)00025-1</pub-id><pub-id pub-id-type="pmid">9774730</pub-id><issn>0926-6410</issn></mixed-citation></ref>
<ref id="b13"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Merker</surname>, <given-names>B. H.</given-names></name>, <name><surname>Madison</surname>, <given-names>G. S.</given-names></name>, &#x26; <name><surname>Eckerdal</surname>, <given-names>P.</given-names></name></person-group> (<year>2009</year>). <article-title>On the role and origin of isochrony in human rhythmic entrainment.</article-title> <source>Cortex</source>, <volume>45</volume>(<issue>1</issue>), <fpage>4</fpage>&#8211;<lpage>17</lpage>. <pub-id pub-id-type="doi">10.1016/j.cortex.2008.06.011</pub-id><pub-id pub-id-type="pmid">19046745</pub-id><issn>0010-9452</issn></mixed-citation></ref>
<ref id="b3"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Merriam</surname>, <given-names>A. P.</given-names></name>, &#x26; <name><surname>Merriam</surname>, <given-names>V.</given-names></name></person-group> (<year>1964</year>). <source>The anthropology of music</source>. <publisher-loc>Evanston, IL</publisher-loc>: <publisher-name>Northwestern University Press</publisher-name>.</mixed-citation></ref>
<ref id="b7"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Miles</surname>, <given-names>L. K.</given-names></name>, <name><surname>Nind</surname>, <given-names>L. K.</given-names></name>, &#x26; <name><surname>Macrae</surname>, <given-names>C. N.</given-names></name></person-group> (<year>2009</year>). <article-title>The rhythm of rapport: Interpersonal synchrony and social perception.</article-title> <source>Journal of Experimental Social Psychology</source>, <volume>45</volume>(<issue>3</issue>), <fpage>585</fpage>&#8211;<lpage>589</lpage>. <pub-id pub-id-type="doi">10.1016/j.jesp.2009.02.002</pub-id><issn>0022-1031</issn></mixed-citation></ref>
<ref id="b32"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Mital</surname>, <given-names>P. K.</given-names></name>, <name><surname>Smith</surname>, <given-names>T. J.</given-names></name>, <name><surname>Hill</surname>, <given-names>R. L.</given-names></name>, &#x26; <name><surname>Henderson</surname>, <given-names>J. M.</given-names></name></person-group> (<year>2011</year>). <article-title>Clustering of gaze during dynamic scene viewing is predicted by motion.</article-title> <source>Cognitive Computation</source>, <volume>3</volume>(<issue>1</issue>), <fpage>5</fpage>&#8211;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1007/s12559-010-9074-z</pub-id><issn>1866-9956</issn></mixed-citation></ref>
<ref id="b24"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Orgs</surname>, <given-names>G.</given-names></name>, <name><surname>Hagura</surname>, <given-names>N.</given-names></name>, &#x26; <name><surname>Haggard</surname>, <given-names>P.</given-names></name></person-group> (<year>2013</year>). <article-title>Learning to like it: Aesthetic perception of bodies, movements and choreographic structure.</article-title> <source>Consciousness and Cognition</source>, <volume>22</volume>(<issue>2</issue>), <fpage>603</fpage>&#8211;<lpage>612</lpage>. <pub-id pub-id-type="doi">10.1016/j.concog.2013.03.010</pub-id><pub-id pub-id-type="pmid">23624142</pub-id><issn>1053-8100</issn></mixed-citation></ref>
<ref id="b14"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Ravignani</surname>, <given-names>A.</given-names></name>, <name><surname>Bowling</surname>, <given-names>D. L.</given-names></name>, &#x26; <name><surname>Fitch</surname>, <given-names>W. T.</given-names></name></person-group> (<year>2014</year>). <article-title>Chorusing, synchrony, and the evolutionary functions of rhythm.</article-title> <source>Frontiers in Psychology</source>, <volume>5</volume>, <fpage>1118</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2014.01118</pub-id><pub-id pub-id-type="pmid">25346705</pub-id><issn>1664-1078</issn></mixed-citation></ref>
<ref id="b21"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rayner</surname>, <given-names>K.</given-names></name>, <name><surname>Chace</surname>, <given-names>K. H.</given-names></name>, <name><surname>Slattery</surname>, <given-names>T. J.</given-names></name>, &#x26; <name><surname>Ashby</surname>, <given-names>J.</given-names></name></person-group> (<year>2006</year>). <article-title>Eye movements as reflections of comprehension processes in reading.</article-title> <source>Scientific Studies of Reading</source>, <volume>10</volume>(<issue>3</issue>), <fpage>241</fpage>&#8211;<lpage>255</lpage>. <pub-id pub-id-type="doi">10.1207/s1532799xssr1003_3</pub-id><issn>1088-8438</issn></mixed-citation></ref>
<ref id="b19"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rayner</surname>, <given-names>K.</given-names></name></person-group> (<year>2009</year>). <article-title>Eye movements and attention in reading, scene perception, and visual search.</article-title> <source>Quarterly Journal of Experimental Psychology</source>, <volume>62</volume>(<issue>8</issue>), <fpage>1457</fpage>&#8211;<lpage>1506</lpage>. <pub-id pub-id-type="doi">10.1080/17470210902816461</pub-id><pub-id pub-id-type="pmid">19449261</pub-id><issn>1747-0226</issn></mixed-citation></ref>
<ref id="b37"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Sebanz</surname>, <given-names>N.</given-names></name>, <name><surname>Bekkering</surname>, <given-names>H.</given-names></name>, &#x26; <name><surname>Knoblich</surname>, <given-names>G.</given-names></name></person-group> (<year>2006</year>). <article-title>Joint action: Bodies and minds moving together.</article-title> <source>Trends in Cognitive Sciences</source>, <volume>10</volume>(<issue>2</issue>), <fpage>70</fpage>&#8211;<lpage>76</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2005.12.009</pub-id><pub-id pub-id-type="pmid">16406326</pub-id><issn>1364-6613</issn></mixed-citation></ref>
<ref id="b34"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="editor"><name><surname>Soneji</surname>, <given-names>D.</given-names></name> (<role>Ed.</role>)</person-group>. (<year>2012</year>). <source>Bharatanatyam: A reader</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="b40"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Spivey</surname>, <given-names>M. J.</given-names></name>, <name><surname>Tyler</surname>, <given-names>M. J.</given-names></name>, <name><surname>Richardson</surname>, <given-names>D. C.</given-names></name>, &#x26; <name><surname>Young</surname>, <given-names>E. E.</given-names></name></person-group> (<year>2000</year>). <article-title>Eye movements during comprehension of spoken descriptions.</article-title> In <source>Proceedings of the Annual Meeting of the Cognitive Science Society</source>, <volume>22</volume>(<issue>22</issue>).</mixed-citation></ref>
<ref id="b26"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Stevens</surname>, <given-names>C.</given-names></name>, <name><surname>Winskel</surname>, <given-names>H.</given-names></name>, <name><surname>Howell</surname>, <given-names>C.</given-names></name>, <name><surname>Vidal</surname>, <given-names>L. M.</given-names></name>, <name><surname>Latimer</surname>, <given-names>C.</given-names></name>, &#x26; <name><surname>Milne-Home</surname>, <given-names>J.</given-names></name></person-group> (<year>2010</year>). <article-title>Perceiving dance: Schematic expectations guide experts&#8217; scanning of a contemporary dance film.</article-title> <source>Journal of Dance Medicine &#x26; Science: Official Publication of the International Association for Dance Medicine &#x26; Science</source>, <volume>14</volume>(<issue>1</issue>), <fpage>19</fpage>&#8211;<lpage>25</lpage>. <pub-id pub-id-type="pmid">20214851</pub-id><issn>1089-313X</issn></mixed-citation></ref>
<ref id="b10"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Tarr</surname>, <given-names>B.</given-names></name>, <name><surname>Launay</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Dunbar</surname>, <given-names>R. I.</given-names></name></person-group> (<year>2014</year>). <article-title>Music and social bonding: &#8220;self-other&#8221; merging and neurohormonal mechanisms.</article-title> <source>Frontiers in Psychology</source>, <volume>5</volume>, <fpage>1096</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2014.01096</pub-id><pub-id pub-id-type="pmid">25324805</pub-id><issn>1664-1078</issn></mixed-citation></ref>
<ref id="b11"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Tarr</surname>, <given-names>B.</given-names></name>, <name><surname>Launay</surname>, <given-names>J.</given-names></name>, <name><surname>Cohen</surname>, <given-names>E.</given-names></name>, &#x26; <name><surname>Dunbar</surname>, <given-names>R.</given-names></name></person-group> (<year>2015</year>). <article-title>Synchrony and exertion during dance independently raise pain threshold and encourage social bonding.</article-title> <source>Biology Letters</source>, <volume>11</volume>(<issue>10</issue>), <fpage>20150767</fpage>. <pub-id pub-id-type="doi">10.1098/rsbl.2015.0767</pub-id><pub-id pub-id-type="pmid">26510676</pub-id><issn>1744-9561</issn></mixed-citation></ref>
<ref id="b23"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>van den Bosch</surname>, <given-names>I.</given-names></name>, <name><surname>Salimpoor</surname>, <given-names>V. N.</given-names></name>, &#x26; <name><surname>Zatorre</surname>, <given-names>R. J.</given-names></name></person-group> (<year>2013</year>). <article-title>Familiarity mediates the relationship between emotional arousal and pleasure during music listening.</article-title> <source>Frontiers in Human Neuroscience</source>, <volume>7</volume>, <fpage>534</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2013.00534</pub-id><pub-id pub-id-type="pmid">24046738</pub-id><issn>1662-5161</issn></mixed-citation></ref>
<ref id="b43"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Vicary</surname>, <given-names>S.</given-names></name>, <name><surname>Sperling</surname>, <given-names>M.</given-names></name>, <name><surname>von Zimmermann</surname>, <given-names>J.</given-names></name>, <name><surname>Richardson</surname>, <given-names>D. C.</given-names></name>, &#x26; <name><surname>Orgs</surname>, <given-names>G.</given-names></name></person-group> (<year>2017</year>). <article-title>Joint action aesthetics.</article-title> <source>PLoS One</source>, <volume>12</volume>(<issue>7</issue>), <fpage>e0180101</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0180101</pub-id><pub-id pub-id-type="pmid">28742849</pub-id><issn>1932-6203</issn></mixed-citation></ref>
<ref id="b16"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>von Zimmermann</surname>, <given-names>J.</given-names></name>, <name><surname>Vicary</surname>, <given-names>S.</given-names></name>, <name><surname>Sperling</surname>, <given-names>M.</given-names></name>, <name><surname>Orgs</surname>, <given-names>G.</given-names></name>, &#x26; <name><surname>Richardson</surname>, <given-names>D. C.</given-names></name></person-group> (<year>2018</year>). <article-title>The choreography of group affiliation.</article-title> <source>Topics in Cognitive Science</source>, <volume>10</volume>(<issue>1</issue>), <fpage>80</fpage>&#8211;<lpage>94</lpage>. <pub-id pub-id-type="doi">10.1111/tops.12320</pub-id><pub-id pub-id-type="pmid">29327424</pub-id><issn>1756-8757</issn></mixed-citation></ref>
<ref id="b39"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Wass</surname>, <given-names>S. V.</given-names></name>, <name><surname>Smith</surname>, <given-names>T. J.</given-names></name>, &#x26; <name><surname>Johnson</surname>, <given-names>M. H.</given-names></name></person-group> (<year>2013</year>). <article-title>Parsing eye-tracking data of variable quality to provide accurate fixation duration estimates in infants and adults.</article-title> <source>Behavior Research Methods</source>, <volume>45</volume>(<issue>1</issue>), <fpage>229</fpage>&#8211;<lpage>250</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-012-0245-6</pub-id><pub-id pub-id-type="pmid">22956360</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b17"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Woolhouse</surname>, <given-names>M. H.</given-names></name>, &#x26; <name><surname>Lai</surname>, <given-names>R.</given-names></name></person-group> (<year>2014</year>). <article-title>Traces across the body: Influence of music-dance synchrony on the observation of dance.</article-title> <source>Frontiers in Human Neuroscience</source>, <volume>8</volume>, <fpage>965</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2014.00965</pub-id><pub-id pub-id-type="pmid">25520641</pub-id><issn>1662-5161</issn></mixed-citation></ref>
<ref id="b15"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Woolhouse</surname>, <given-names>M. H.</given-names></name>, <name><surname>Tidhar</surname>, <given-names>D.</given-names></name>, &#x26; <name><surname>Cross</surname>, <given-names>I.</given-names></name></person-group> (<year>2016</year>). <article-title>Effects on inter-personal memory of dancing in time with others.</article-title> <source>Frontiers in Psychology</source>, <volume>7</volume>, <fpage>167</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2016.00167</pub-id><pub-id pub-id-type="pmid">26941668</pub-id><issn>1664-1078</issn></mixed-citation></ref>
</ref-list>
</back>
</article>
