<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.10.4.4</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Does descriptive text change how people look at art? A novel analysis of eye-movements using data-driven Units of Interest</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Davies</surname>
						<given-names>Alan</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Reani</surname>
						<given-names>Manuele</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>				
				<contrib contrib-type="author">
					<name>
						<surname>Vigo</surname>
						<given-names>Markel</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Harper</surname>
						<given-names>Simon</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Gannaway</surname>
						<given-names>Clare</given-names>
					</name>
					<xref ref-type="aff" rid="aff2">2</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Grimes</surname>
						<given-names>Martin</given-names>
					</name>
					<xref ref-type="aff" rid="aff2">2</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Jay</surname>
						<given-names>Caroline</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>				
				
        <aff id="aff1">
		<institution>School of Computer Science, University of Manchester</institution>, <country>United Kingdom</country>
        </aff>
		<aff id="aff2">
		<institution>Manchester Art Gallery</institution>, <country>United Kingdom</country>
        </aff>
		</contrib-group>
     
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>22</day>  
		<month>11</month>
        <year>2017</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2017</year>
	</pub-date>
      <volume>10</volume>
      <issue>4</issue>
	   <elocation-id>10.16910/jemr.10.4.4</elocation-id> 
	<permissions> 
	<copyright-year>2017</copyright-year>
	<copyright-holder>Davies et al.</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>Does reading a description of an artwork affect how a person subsequently views it? In a controlled study, we show that in most cases, textual description does not influence how people subsequently view paintings, contrary to participants' self-report that they believed it did. To examine whether the description affected transition behaviour, we devised a novel analysis method that systematically determines Units of Interest (UOIs), and calculates transitions between these, to quantify the effect of an external factor (a descriptive text) on the viewing pattern of a naturalistic stimulus (a painting). UOIs are defined using a grid-based system, where the cell size is determined by a clustering algorithm (DBSCAN). The Hellinger distance between two Markov chains, constructed from the transition matrices (visual shifts between UOIs) of the two groups for each painting, is computed and assessed using a permutation test. Results show that the description does not affect the way in which people transition between UOIs for all but one of the paintings (an abstract work), suggesting that description may play more of a role in determining transition behaviour when a lack of semantic cues means it is unclear how the painting should be interpreted. The contribution is twofold: to the domain of art/curation, we provide evidence that descriptive texts do not affect how people view paintings, with the possible exception of some abstract paintings; to the domain of eye-movement research, we provide a method with the potential to answer questions across multiple research areas, where the goal is to determine whether a particular factor or condition consistently affects viewing behaviour of naturalistic stimuli.</p>
      </abstract>
      <kwd-group>
        <kwd>Art</kwd>
        <kwd>paintings</kwd>
        <kwd>eye tracking</kwd>
        <kwd>eye movement</kwd>
        <kwd>painting narration</kwd>
        <kwd>art perception</kwd>
        <kwd>areas of interest</kwd>
        <kwd>regions of interest</kwd>
        <kwd>Markov chain</kwd>
      </kwd-group>
    </article-meta>
  </front>	
  <body>

    <sec id="S1">
      <title>Introduction</title>
      <p>Science and art are often considered to be parallel
disciplines with little interaction between the two (
          <xref ref-type="bibr" rid="R31">1</xref>
		  ); here
we provide a scientific perspective on the perception of
art, which emerged from a collaborative project between
the University of Manchester and Manchester Art
Gallery. The Gallery was specifically interested in
understanding the behaviour of its web visitors for
curatorial purposes. In a controlled study, we explored
whether reading a description of an artwork affects the
way a person subsequently views it, leading to a richer
understanding of how people view art, and a
generalizable method that can be used by researchers in
eye-movement research. The method presented can help to
answer similar questions about differences in viewing
behaviour between groups, when using stimuli where
Areas of Interest (AOI) segmentation is challenging.</p>

      <p>Art is a unique and subjective perceptual experience (
          <xref ref-type="bibr" rid="R31">1</xref>
		  ). Although arguably the best context for some forms of
art, museums can be difficult for some people to visit,
including older people, those who have disabilities, and
those who are unable to travel to them. It is also known
that the time people spend viewing artworks decreases as
people move through an exhibition, a phenomenon
termed &#x201C;museum fatigue&#x201D; (
          <xref ref-type="bibr" rid="R32">2</xref>
		  ). As it is now possible to
view many paintings online, more people can potentially
access artworks than previously, and can access those
works faster. It is known that the context in which art is
viewed has an effect on how people evaluate it (
          <xref ref-type="bibr" rid="R33">3</xref>
		  ). With
more and more art consumed online, new questions are
emerging as to how best to digitally present it.
Eye-tracking can play a valuable role in understanding how
people perceive art, and has the potential to provide
information that can be used to support its curation.
Quantitatively analysing gaze data over artwork can be
challenging, due to the fact that images are often naturalistic
(representing the various colours and forms as they
appear in nature), and do not generally contain explicit
semantic regions that can be labelled as Areas/Regions of
Interest (AOIs/ROIs). In this paper we introduce a
method of stimulus segmentation for subsequent data analysis
that reduces researcher bias and aids in the segmentation
of stimuli with difficult-to-identify or subjective semantic
details. The method is used to quantitatively examine
whether presenting a descriptive text to people before
they view a painting subsequently affects their viewing
pattern. The text consists of a short description written by
a curator or other expert, providing information about the
painting; in the current study, we used texts taken from
the Art UK website (
<ext-link ext-link-type="uri" xlink:href="http://www.artuk.org" xlink:show="new">http://www.artuk.org</ext-link>
), which
accompany each painting displayed online. The following
example is taken from one of the paintings used in the
study, entitled &#x2018;Self Portrait&#x2019;, by Louise Jopling (<xref ref-type="fig" rid="fig01">Figure 1</xref>):</p>

      <p><italic>&#x201C;A frontal bust portrait of the artist as a young
woman with her hair tied up, wearing a pale coat with white
collar and matching hat, set at an angle. At her neck she
wears a decorative pink neck scarf. Her skin and features
are smoothly and evenly painted, in comparison to her
more textured clothes. She is set against a dark plain
background.&#x201D;</italic></p>

	<fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1.</label>
					<caption>
						<p>&#x2018;Self-Portrait&#x2019;, by Louise Jopling (1843-1933).</p>
						</caption>
					<graphic id="graph01" xlink:href="jemr-10-04-d-figure-01.png"/>
				</fig>

      <p>This study is the first to consider the impact of a
descriptive text on subsequent gaze patterns over a painting. As
visual scanning is the genesis of aesthetic experience (
          <xref ref-type="bibr" rid="R34">4</xref>
		  ),
we apply a quantitative method to determine if the
presence or absence of a description has any impact on the
visual behaviour of participants by using eye-tracking, an
established measure of visual attention (
          <xref ref-type="bibr" rid="R35">5</xref>
		  ) that has
previously been identified as a meaningful method for
quantifying how people view artworks (
          <xref ref-type="bibr" rid="R31">1</xref>
		  ).</p>

      <p>The texts examined in the current study are primarily
used for describing the stimulus such that it can be
searched for within an online collection. We demonstrate
that reading such descriptions does not generally appear
to affect people's viewing behaviour in terms of fixation
frequency or duration, and that whilst
transition behaviour between UOIs is generally similar
across groups, it appears to vary more when a work is
abstract.</p>
    </sec>
	
    <sec id="S2">
      <title>Background</title>
      <p>Areas of interest (AOIs) are used to identify semantic
regions in a stimulus that are of importance to an
experiment (
          <xref ref-type="bibr" rid="R36">6</xref>
		  ). It can be challenging to apply these to artwork,
due to the non-uniform and naturalistic nature of the
stimuli, which make it harder to determine how and
where to draw the boundaries for AOIs. Our work
addresses this issue by segmenting the image into regions
using a data-driven clustering algorithm, before going on
to compare differences in gaze transitions between these
areas across two groups. We begin with a review of other
work that has used eye-tracking to explore how people
view art, and highlight the effect of the environment on
how a work is perceived. The second section looks at
different methods of segregating images to produce areas
of interest, to provide context for our approach.</p>

      <sec id="S2a">
        <title>Art and eye-tracking</title>
        <p>Several studies have used eye-tracking to investigate how
people view and interact with art. Bubic, Susac and
Palmovic (
          <xref ref-type="bibr" rid="R37">7</xref>
		  ) used eye-tracking to explore how people
view images that represent either a human face or, when
inverted (displayed upside down), a still-life image in
which no distinct facial components are
identifiable. The results showed that in the upright
position people fixated more on the image elements that
represent faces, focusing more on the eyes (upper AOI) than
in the inverted position.</p>

        <p>Gartus, Klemer, and Leder (
          <xref ref-type="bibr" rid="R38">8</xref>
		  ) considered the effect
that context has on how people view art, using eye
tracking to determine whether perception changed according
to whether works were viewed in a museum or street
context. They demonstrated that viewing durations were
substantially longer in the museum context than they
were in the street context. The study also provided
evidence that the context had no impact on ratings of graffiti
art, but that modern art received higher ratings for beauty
and interest in the museum context.</p>

        <p>The effect of context on the &#x201C;experience&#x201D; of viewing
art has also been considered by Brieber et al. (
          <xref ref-type="bibr" rid="R32">2</xref>
		  ), who
determined that viewing art in the context of a museum,
as opposed to a laboratory setting, led people to view
paintings for longer, and that there was a stronger
relationship between viewing time and appreciation in the
laboratory context. In both situations the information
labels were viewed longer for works with a higher
appreciation rating.</p>

        <p>In a study that explored the centre-stage effect (CSE),
the phenomenon that items or options placed in the central
position are more popular than those located to the sides,
participants were shown three paintings in a row, and
asked to select their preferred painting. Findings show
that allocation of a substantially larger proportion of gaze
to the paintings in the left and centre positions was not
associated with preference when the paintings were
identical. The fixation duration did, however, predict
preference when the paintings were different. The centre-stage
effect was seen in the centre image only when the
paintings were identical and had a positive valence. They
conclude that valence has a greater impact on CSE than
gaze allocation.</p>

        <p>The authors suggest that the &#x2018;centre stage heuristic&#x2019;,
the assumption that the best items are in the centre, can
explain their results. The final fixation was found to be
predictive of people choosing the central item, if it
exhibited positive valence (
          <xref ref-type="bibr" rid="R39">9</xref>
		  ). As acknowledged by the
authors, only a subset of participants (22 of 50) had their
eye movements measured with an eye-tracker, which may
have had an impact on the relationship strength between
gaze allocation and preferred painting (
          <xref ref-type="bibr" rid="R39">9</xref>
		  ).</p>

        <p>Massaro et al. (
          <xref ref-type="bibr" rid="R34">4</xref>
		  ) found that visual exploration
patterns appeared to be affected more by knowledge-driven
top-down processes when people are viewing faces, than
when they are viewing natural scenes, where the gaze
path appears to be driven more by low-level features.</p>

        <p>Visual behaviour can also be manipulated by
modulating the luminance of a painting, to guide people's gaze
to a portion of the painting not being directly focused
upon (
          <xref ref-type="bibr" rid="R40">10</xref>
          ).
        </p>
		
        <p>Aesthetic experience is known to be composed of
competing top-down and bottom-up processes (
          <xref ref-type="bibr" rid="R34">4</xref>
		  ). A
large variability between participants (n=10) viewing
figurative paintings was identified by Quiroga and
Pedreira (
          <xref ref-type="bibr" rid="R31">1</xref>
		  ), in a study examining how digital
manipulation of artworks affects fixation patterns. This variability,
attributed to the participants&#x2019; individual knowledge and
appreciation of the work, made analysis of the subject
difficult (
          <xref ref-type="bibr" rid="R31">1</xref>
		  ).</p>

        <p>Image complexity has also been considered in relation
to gaze-behaviour. Complexity relating to pattern detail
that changes with scale can be examined with the fractal
dimension. Regions of paintings with a higher fractal
dimension were fixated on for a longer period than other
regions with a lower fractal dimension (
          <xref ref-type="bibr" rid="R41">11</xref>
		  ).</p>

        <p>Many studies do not use AOIs when analysing gaze
over artwork, choosing instead to employ qualitative
methods (e.g. observation of gaze plots or heat maps) or
statistical methods, for understanding basic distribution
of gaze (
          <xref ref-type="bibr" rid="R32 R33 R39">2, 3, 9</xref>
		  ).</p>

        <p>A study by Brinkmann et al. (
          <xref ref-type="bibr" rid="R42">12</xref>
		  ) looked at
differences in the attention profiles of participants when
looking at both abstract and representational artwork. The
study revealed more diffuse attention for abstract art.
They also found that eye-movement patterns varied more
for the abstract paintings than the representational
paintings, pointing to individual image characteristics playing
a greater part in structuring attention when compared to
socio-demographic factors. The study used a bottom-up
approach to define AOIs, which were defined by circles
with an area of 90 pixels and a minimum of 5 fixations in
the circle per minute.</p>

        <p>One study (
          <xref ref-type="bibr" rid="R43">13</xref>
		  ) did examine the effect that a painting
title had on non-realistic cubist paintings. In this study
there were three experimental conditions: 1) no title; 2)
participants had to decide on a title; 3) participants were
told the actual title. They discovered that the duration of fixations
increased in the group told the painting title relative to the
group tasked with coming up with a title for the painting.
They also found that the most fixated area for all the
paintings was the centre of the painting. There was an
increase in saccadic amplitude for the group that were
told the title of the paintings in the case of one painting,
which was attributed to additional cognitive processing
being required to link the title to the image. The study
concluded that the title information did have an impact on
the eye-movements and fixation distribution over time
(
          <xref ref-type="bibr" rid="R43">13</xref>
		  ). For this study the AOIs were defined for each
painting by arbitrarily splitting the image into a grid
containing 12 cells, the rationale for which is not discussed in
detail in the paper.</p>

        <p>Here we consider the impact of painting description,
written by experts, on the gaze behaviour of people
viewing artwork.</p>
      </sec>
	  
      <sec id="S2b">
        <title>Defining areas of interest</title>
        <p>The generation of areas or regions of interest requires
researchers to make decisions about how to segment a
stimulus. As AOIs usually correspond to semantic items
in a scene, they can be very useful for determining which
of these items participants focus their interest upon (
          <xref ref-type="bibr" rid="R36">6</xref>
		  ).
AOIs defined by the researcher in a top-down manner can
be very useful for answering particular research
questions, but they are also subject to bias, as a decision about
how to segment the scene will affect the subsequent data
analysis process, and may not be optimal. Although it is
not possible to eradicate all of the top-down factors that
can affect the way people view scenes, such as the
semantic dependency of objects or the context of the scene
(
          <xref ref-type="bibr" rid="R35">5</xref>
		  ), it is possible to reduce the potential bias introduced
by researcher-imposed segmentation of the scene for
analysis purposes. Gridded AOIs are crude in
comparison, but allow for content-independent analysis to take
place (
          <xref ref-type="bibr" rid="R44">14</xref>
		  ). One of the principal issues with using gridded
AOIs is determining the cell size, as this can significantly
affect the results (i.e. capture more or fewer fixations in
the defined geospatial area).</p>

        <p>Bottom-up AOIs have been generated with clustering
techniques, using circles and a minimum number of
fixations to define the AOI (
          <xref ref-type="bibr" rid="R42 R45">12, 15</xref>
		  ). This is also the case
for eye-tracking analysis software, such as Eyetrace (
          <xref ref-type="bibr" rid="R46">16</xref>
		  )
that allows both user-defined top-down AOIs and
bottom-up data-driven AOIs using fixation clustering. This is
achieved by setting neighbourhood thresholds or using
mean-shift clustering. As circles do not tessellate there
are gaps between them that exclude fixation data. Where
the circles do overlap, deciding
which AOI a fixation belongs to can be
problematic. This can also be compounded by differently
sized AOIs that make carrying out comparative analysis
challenging. Indeed, Klein (
          <xref ref-type="bibr" rid="R45">15</xref>
		  ) state that they did not
analyse the AOI data across paintings due to differences
in their gross geometric structure.</p>

        <p>Other methods used to segment stimuli include
Voronoi diagrams, fuzzy AOIs and convex hulls. Voronoi
diagrams (segmentation of an area into different regions,
derived from the distance between predefined points in
subsets of the area) divide scenes into cells. The
distribution of the cells correlates with the fixation density
distributions. This method is predominantly spatial and is
analogous to fixation clustering (
          <xref ref-type="bibr" rid="R47">17</xref>
		  ).</p>

        <p>Fuzzy AOIs do not have hard borders, and thus rather
than taking a &#x2018;hit or miss&#x2019; approach, they use a
probabilistic method to determine which AOI (or none) a fixation
belongs to (
          <xref ref-type="bibr" rid="R36">6</xref>
		  ). Fuzzy AOIs can also be of use when data
quality is lower, as thresholds can be varied to account
for poorer precision (
          <xref ref-type="bibr" rid="R36">6</xref>
		  ). A convex hull, the smallest convex
polygon that encloses a set of points in the plane
(i.e. one with no indents), can be used to
describe the minimum area covered by a cluster of
fixations. Holmqvist et al. (
          <xref ref-type="bibr" rid="R36">6</xref>
		  ) point out that generating
AOIs this way is unsuitable for transitional analysis due
to the amount of manual post editing that would be
required and the potential for inflated values (when
compared to smaller AOIs), caused by the collection of stray
data points. The irregularly shaped and sized AOIs
resulting from Voronoi diagrams and convex hulls makes
quantitative comparison between them complex, and
difficult from a statistical perspective.</p>

        <p>Orquin, Ashby, and Clarke (
          <xref ref-type="bibr" rid="R48">18</xref>
		  ) describe several
recommendations for using AOIs with behavioural
eye-tracking studies. These recommendations include using
maximum margins around AOIs when there are large
distances between objects of importance on the stimulus.
This allows fixations related to the object to be included
and reduces overlap. By contrast, if the distance between
objects is small, a smaller AOI margin should be used.
Orquin, Ashby, and Clarke (
          <xref ref-type="bibr" rid="R48">18</xref>
		  ) go on to state that
researchers should either choose these AOI margins
beforehand based on the possible overlap, or alternatively
do this post-hoc based on data conforming with quality
criteria. Finally, they suggest that details of the AOI
margins are reported alongside the analysis.</p>

        <p>Like (
          <xref ref-type="bibr" rid="R42">12</xref>
		  ) and (
          <xref ref-type="bibr" rid="R45">15</xref>
		  ), we use a bottom-up clustering
approach, but rather than using this to generate the AOIs
directly, we instead apply clustering to determine the cell
size for a grid that we then apply to each painting. Details
of our approach to AOI definition are provided in the
&#x2018;Analysis&#x2019; section below.</p>
      </sec>
    </sec>
	
    <sec id="S3">
      <title>Methods</title>
      <p>A between-subjects experiment was conducted. The
main factor was &#x201C;stimulus presentation&#x201D;. This had two
levels: 1) &#x201C;no-textual description&#x201D;, henceforth referred to
as the &#x201C;no-description&#x201D; condition, where 8 paintings were
shown sequentially without any description, and 2)
&#x201C;description&#x201D; condition, where the same 8 paintings were
presented sequentially, each preceded by a descriptive
narrative written by experts.
The order of the presentation of the paintings was fixed,
and the same for both conditions. Participants were
randomly allocated to one of the two conditions. Neither of
the groups were told the titles of the paintings or given
any information concerning the artists. No specific task(s)
were given to the participants, allowing them to view the
paintings naturally.</p>

      <sec id="S3a">
        <title>Procedure</title>
        <p>The experiment was run at two open day events at the
University of Manchester, in a quiet controlled
environment, in facilities dedicated to the purpose. Participants
were given an information sheet to read, and asked to sit
in front of a desktop computer with a Tobii X2-60
(<ext-link ext-link-type="uri" xlink:href="https://www.tobiipro.com/siteassets/tobii-pro/usermanuals/tobii-pro-x2-60-eye-tracker-usermanual.pdf/?v=1.0.3" xlink:show="new">https://www.tobiipro.com/siteassets/tobii-pro/usermanuals/tobii-pro-x2-60-eye-tracker-usermanual.pdf/?v=1.0.3</ext-link>
) eye-tracker, attached to a monitor
with a resolution of 1366 x 768 pixels.</p>

        <p>Forty-four participants (with normal or
corrected-to-normal vision) who attended the open days volunteered to
take part in the experiment. Two participants were
excluded due to poor data quality, leaving 42 participants
(24 male, 18 female). Before starting the experiment,
participants signed an informed consent form describing
the nature of the study, in accordance with the
University's ethical procedures.</p>

        <p>Once the participant's gaze had been calibrated, they
began the experiment. In the &#x2018;no-description&#x2019; condition,
the paintings were displayed on the screen in sequence
for 10 seconds each, and the participant sat and viewed
them. In the &#x2018;description&#x2019; condition, a written description
of the painting appeared on the screen; when the
participant had read this, he or she pressed the space bar to view
the subsequent painting (also for 10 seconds per
painting). Participants in the description condition were also
asked a multiple-choice on-screen question after viewing
all the paintings: &#x201C;Do you think the text (information
about the paintings) changed the way you looked at the
paintings?&#x201D; (yes/no). Participants' gaze was recorded
throughout the experiment.</p>
      </sec>
	  
      <sec id="S3b">
        <title>Stimuli</title>
        <p>Digital versions of eight paintings were selected by
Manchester Art Gallery staff as being representative of
their collections, and as artwork whose visual perception
they would be interested in understanding. The
descriptive text for each painting was obtained from the &#x201C;Art
UK&#x201D; website
(
<ext-link ext-link-type="uri" xlink:href="http://www.artuk.org/discover/artworks/search/venue:\\manchester-art-gallery-7282-54853" xlink:show="new">http://www.artuk.org/discover/artworks/search/venue:\\manchester-art-gallery-7282-54853</ext-link>
). The paintings consisted of 3 landscapes, 2 portraits and 3 abstract pieces.
The descriptions of the paintings were written by art
experts (<xref ref-type="fig" rid="fig02">Table 1</xref>).</p>

	<fig id="fig02" fig-type="figure" position="float">
					<label>Table 1.</label>
					<caption>
						<p>Paintings used as stimuli in the experiment</p>
						</caption>
					<graphic id="graph02" xlink:href="jemr-10-04-d-figure-09.png"/>
				</fig>	

      </sec>
    </sec>
	
    <sec id="S4">
      <title>Analysis</title>
      <p>All the analysis reported here was carried out using
the R project for statistical computing, version 3.3.2.
(
          <xref ref-type="bibr" rid="R49">19</xref>
		  ). Note that where effect size (partial eta squared)
is reported, it was calculated on untrimmed data. The
full code and data are available from (
          <xref ref-type="bibr" rid="R50">20</xref>
		  ). The
Density-based spatial clustering of applications with noise
(DBSCAN) algorithm (
          <xref ref-type="bibr" rid="R51">21</xref>
		  ) was used to cluster visual
fixations for each of the paintings, using the
DBSCAN package (
          <xref ref-type="bibr" rid="R52">22</xref>
		  ) available for R. The algorithm
was selected as it is widely used for cluster discovery,
without a requirement to state the number of clusters
in advance. This was important, as we did not know <italic>a
priori</italic> where the fixations would be clustered, or how
many clusters there would be. Density is determined
by counting the points in a specific radius (termed
Eps). Where the number of these points exceeds the
threshold defined by a value called MinPts, it is
considered a &#x201C;core point&#x201D; by the algorithm. Noise
points are those that are neither core points, nor
contain a core point within the Eps radius (
          <xref ref-type="bibr" rid="R51">21</xref>
		  ).
Formally, the Eps-neighbourhood of a point p is
defined as N<sub>Eps</sub>(p) = {q &#x2208; D | dist(p,q) &#x2264; Eps}. A point
p is directly density-reachable from q if p &#x2208; 
N<sub>Eps</sub>(q) and |N<sub>Eps</sub>(q)| &#x2265; MinPts (
          <xref ref-type="bibr" rid="R51">21</xref>
		  ). The optimal
Eps value was selected for each painting by
computing the k-nearest neighbour distances and
plotting them in ascending order to visualise the &#x201C;knee
of the curve&#x201D; (point of curve with significant change)
that corresponds to the optimal Eps value. The MinPts
value, referring to the minimum number of points that
are required to form &#x201C;core points&#x201D;, was set to 4. This
value was used because the original authors state that
k-distance graphs do not alter significantly with values
of k &gt; 4, but do require greater computational
effort (
          <xref ref-type="bibr" rid="R51">21</xref>
		  ).</p>
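      <p>The clustering step described above can be sketched as follows. This is our own illustrative Python code, not the study's R analysis (which is available from the cited repository); the function names are ours, and selecting Eps at the largest jump in the sorted k-distance curve is a simple programmatic stand-in for visually locating the &#x201C;knee of the curve&#x201D;.</p>
      <preformat>
```python
import math
from collections import deque

def region_query(points, i, eps):
    """All point indices (including i itself) within eps of point i."""
    return [j for j, q in enumerate(points)
            if math.dist(points[i], q) <= eps]

def dbscan(points, eps, min_pts=4):
    """Minimal DBSCAN: label each point with a cluster id; noise = -1."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbours = region_query(points, i, eps)
        if len(neighbours) < min_pts:
            labels[i] = -1                    # provisionally noise
            continue
        cluster += 1                          # i is a core point: new cluster
        labels[i] = cluster
        queue = deque(neighbours)
        while queue:
            j = queue.popleft()
            if labels[j] == -1:               # border point reached from a core
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbours = region_query(points, j, eps)
            if len(j_neighbours) >= min_pts:  # j is itself a core point: expand
                queue.extend(j_neighbours)
    return labels

def knee_eps(points, k=4):
    """Approximate the 'knee' of the sorted k-NN distance curve as the
    value before the largest jump between successive distances."""
    knn = sorted(
        sorted(math.dist(p, q) for q in points if q is not p)[k - 1]
        for p in points)
    jumps = [b - a for a, b in zip(knn, knn[1:])]
    return knn[jumps.index(max(jumps))]
```
      </preformat>
      <p>For example, two tight groups of fixations separated by empty canvas yield two clusters, and knee_eps returns a radius on the scale of the within-group spacing.</p>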

      <p>A grid of squares was then applied to each painting,
with the cell dimensions (height and width) set as double
the average of the optimal DBSCAN Eps value (radius).
Each of the cells represented a <italic>Unit of Interest</italic> (UOI), for
which fixation data was calculated.</p>

      <p>To ensure the analysis considered only fixations on
the painting itself, we added an offset to the grid using a
bespoke algorithm to calculate the position of the
painting inside the black container area (<xref ref-type="fig" rid="fig03">Figure 3</xref>).
Additionally, as the division of individual cell dimensions into the
image space may leave a remainder, we accounted for
this by locating the grid in the centre of the image so that
any additional space between the area of the grid and that
of the painting will be around the edges of the painting.
This was done instead of locating the grid in the top left
of the image on the assumption that the salient features
for a given painting are located centrally rather than
peripherally.</p>
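The centring offset and cell lookup can be illustrated with a short sketch. The function and parameter names are hypothetical, not taken from the study's code; it assumes integer pixel coordinates and cell sizes.

```python
def grid_offsets(img_w, img_h, cell):
    """Centre a grid of square cells on the image so that any
    remainder is split evenly around the edges of the painting."""
    cols, rows = img_w // cell, img_h // cell
    off_x = (img_w - cols * cell) // 2
    off_y = (img_h - rows * cell) // 2
    return cols, rows, off_x, off_y

def uoi_index(x, y, cell, off_x, off_y, cols, rows):
    """Map a fixation coordinate to its Unit of Interest (cell)
    index, or None if it falls in the margin outside the grid."""
    cx = (x - off_x) // cell
    cy = (y - off_y) // cell
    if not (0 <= cx < cols and 0 <= cy < rows):
        return None
    return cy * cols + cx
```

For an 800 x 600 px image with 90 px cells this yields an 8 x 6 grid with 40 px and 30 px of margin on either side.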

	<fig id="fig03" fig-type="figure" position="float">
					<label>Figure 3</label>
					<caption>
						<p>Horizontal and vertical offsets applied to locate the grid in the area occupied by the paintings</p>
						</caption>
					<graphic id="graph03" xlink:href="jemr-10-04-d-figure-04.png"/>
				</fig>
				
	<fig id="fig04" fig-type="figure" position="float">
					<label>Figure 4</label>
					<caption>
<p>Sample UOIs generated for the “Self-portrait” painting. The 8x8 grid generates 64 UOIs for this painting</p>
						</caption>
					<graphic id="graph04" xlink:href="jemr-10-04-d-figure-05.png"/>
				</fig>				

      <p>As every eye-tracker has an inherent error in gaze 
accuracy, we assessed whether the cells used in the grid 
were large enough not to be significantly impacted 
by this error. The units spanned by the visual 
field (x) can be calculated by first determining the visual 
angle (&#x3A6;) given the object size (s) and the object distance (d), 
converted from radians into degrees (Equation 1).</p>

<fig id="eq01a" fig-type="equation" position="anchor">
					<graphic id="equation01a" xlink:href="jemr-10-04-d-equation-01"/>
				</fig>
				
<fig id="eq01b" fig-type="equation" position="anchor">
					<graphic id="equation01b" xlink:href="jemr-10-04-d-equation-02"/>
				</fig>				
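Assuming the standard visual-angle relation (a sketch; the exact form of Equation 1 in the article may differ slightly), the calculation and its inverse are:

```python
import math

def visual_angle_deg(size, distance):
    """Visual angle Phi (degrees) subtended by an object of a given
    size at a given viewing distance (same units): Phi = 2*atan(s/2d)."""
    return math.degrees(2 * math.atan2(size, 2 * distance))

def size_for_angle(angle_deg, distance):
    """Inverse: the object size that spans a given visual angle."""
    return 2 * distance * math.tan(math.radians(angle_deg) / 2)
```

At a hypothetical 600 mm viewing distance, for example, a cell spanning 1.55 degrees (the smallest used here) corresponds to roughly 16 mm on screen.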

      <p>The smallest of the cell sizes used in this study (1.55°) 
is greater than the 1 to 1.5° that is suggested as the minimum 
practical AOI size (
          <xref ref-type="bibr" rid="R36">6</xref>
		  ). A transition matrix was then constructed 
for each condition (description and no-description), representing 
the number of transitions from and to each cell in the grid. This 
is then converted into a Markov chain representing the probability 
of transitioning from a given UOI to the same UOI, or to a 
different one. This technique has been used to compare differences 
between clinicians making correct and incorrect interpretations 
of medical scans with the Jensen-Shannon distance (
          <xref ref-type="bibr" rid="R53">23</xref>
		  ), and modified 
by Reani, (
          <xref ref-type="bibr" rid="R54">24</xref>
		  ) to use the Hellinger distance, which is more 
appropriate for comparing transition behaviour, as it permits values 
of 0 in the transition matrix. The Hellinger distance (Equation 2) 
is used to determine the difference between the Markov chains 
representing each condition and can be used as a proxy for dissimilarity.</p>

<fig id="eq02" fig-type="equation" position="anchor">
					<graphic id="equation02" xlink:href="jemr-10-04-d-equation-03"/>
				</fig>
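The pipeline from fixation sequence to transition matrix to Hellinger distance can be sketched as below. The function names are hypothetical, and the row-averaged aggregation over the Markov chain is one plausible reading of the method rather than the study's exact computation; note that the Hellinger distance is well defined even when a transition probability is 0, which is why it suits this comparison.

```python
import math

def transition_matrix(seq, n_units):
    """Row-normalised matrix of transition probabilities between UOIs,
    built from a sequence of visited UOI indices."""
    m = [[0.0] * n_units for _ in range(n_units)]
    for a, b in zip(seq, seq[1:]):
        m[a][b] += 1
    for row in m:
        total = sum(row)
        if total:
            for j in range(n_units):
                row[j] /= total
    return m

def hellinger(p, q):
    """Hellinger distance between two discrete distributions;
    handles zero probabilities without issue."""
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(p, q))) / math.sqrt(2)

def chain_distance(m1, m2):
    """Mean row-wise Hellinger distance between two Markov chains."""
    return sum(hellinger(r1, r2) for r1, r2 in zip(m1, m2)) / len(m1)
```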

      <p>This value can then be compared against that obtained
for two groups of equivalent size whose
participants are chosen at random. By performing this
operation 10,000 times using a permutation test (
          <xref ref-type="bibr" rid="R55">25</xref>
		  ),
we obtain a distribution of the differences between groups
chosen at random, against which the difference between the
description and no-description groups can then be compared. This allows a
threshold to be set, where &#x201C;p-values&#x201D; to the right of
the critical value allow the rejection of the null
hypothesis. This allows us to see whether there is something
&#x201C;special&#x201D; about the two conditions that is unlikely to
be explained by random chance, given that permutation tests are known 
to be robust against Type I error (
          <xref ref-type="bibr" rid="R56">26</xref>
		  ).</p>
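A generic two-sample permutation test of this kind can be sketched as follows; in the study the statistic would be the Hellinger distance between the two groups' Markov chains, but any group-difference statistic works. This is a minimal illustration, not the authors' implementation.

```python
import random

def permutation_test(group_a, group_b, statistic, n_iter=10000, seed=0):
    """Shuffle group labels and recompute the statistic to build a null
    distribution; the p-value is the fraction of permuted statistics at
    least as extreme as the observed one (with the usual +1 correction)."""
    rng = random.Random(seed)
    observed = statistic(group_a, group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        if statistic(pooled[:n_a], pooled[n_a:]) >= observed:
            count += 1
    return observed, (count + 1) / (n_iter + 1)
```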

      <p>The individual participants' scanpaths were also
compared against one another for both conditions (description
and no-description). The order in which the participants
transition their gaze around the UOIs was represented as
a &#x201C;string&#x201D; of text. This essentially represents each
participant's scanpath around the stimulus in terms of the
sequences of UOIs visited. Although this does not account
for the temporal dimension of the scanpath sequence, it
does allow comparison between the areas visited. The
Levenshtein distance applies a cost for each operation
(insertion, deletion and substitution) used to transform
one string of text into another (
          <xref ref-type="bibr" rid="R57">27</xref>
		  ). Visualising the
resulting distance in a matrix allows us to rapidly visually
compare all participants against each other and detect any
outliers, or participants that appear to use similar
visualisation strategies. The darker the shade of red the more
similar the scanpaths are; the darker the shade of blue the
less similar they are. <xref ref-type="fig" rid="fig02a">Figure 2</xref> shows an example of the
Levenshtein distance for the two groups (description and
no-description) for the &#x201C;Rhyl Sands&#x201D; painting, where the
participants' scanpaths can be compared with one another
in that condition and between conditions. This allows for
rapid initial analysis of the spatial and sequential aspects
of the participants&#x2019; eye-movements as they viewed the
paintings. In this representative example we can see that
most of the cells are red in both groups, implying that the
sequence of transitions is fairly similar for most of the
participants.</p>
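Encoding scanpaths as strings and computing the Levenshtein distance between them can be sketched as follows. The character encoding is a hypothetical choice (it assumes fewer than roughly 90 UOIs, one printable character per cell), not the study's own representation.

```python
def levenshtein(a, b):
    """Edit distance: unit cost per insertion, deletion or substitution,
    computed with a two-row dynamic programming table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def scanpath_string(uoi_sequence):
    """Encode a sequence of visited UOI indices as a string of
    printable characters, one per cell."""
    return "".join(chr(33 + u) for u in uoi_sequence)
```

Pairwise distances over all participants then populate the matrices visualised in Figure 2.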

	<fig id="fig02a" fig-type="figure" position="float">
					<label>Figure 2a.</label>
					<caption>
						<p>No-narrative</p>
						</caption>
					<graphic id="graph02a" xlink:href="jemr-10-04-d-figure-02.png"/>
				</fig>	
				
	<fig id="fig02b" fig-type="figure" position="float">
					<label>Figure 2b.</label>
					<caption>
						<p>Narrative</p>
						</caption>
					<graphic id="graph02b" xlink:href="jemr-10-04-d-figure-03.png"/>
				</fig>	
				
      <p>The visualisations were generated for each stimulus
for the two groups (description and no-description). The
visualisations allow for rapid high level comparison of
the two conditions per painting. As mentioned previously,
this also makes it easier to identify outliers among the
participants for further examination, or possible exclusion
from subsequent data analysis. A summary of the
Levenshtein distance results for each painting can be seen in
<xref ref-type="table" rid="t02">Table 2</xref>.</p>

<table-wrap id="t02" position="float">
					<label>Table 2.</label>
					<caption>
						<p>Summary of Levenshtein distance per painting</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="3">
              <bold>Levenshtein distance</bold>
            </td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">
              <bold>Condition 1</bold>
            </td>
            <td rowspan="1" colspan="1">
              <italic>Max</italic>
            </td>
            <td rowspan="1" colspan="1">
              <italic>Mean</italic>
            </td>
            <td rowspan="1" colspan="1">
              <italic>SD</italic>
            </td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Rhyl Sands</td>
            <td rowspan="1" colspan="1">25</td>
            <td rowspan="1" colspan="1">9.19</td>
            <td rowspan="1" colspan="1">5.41</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Flask-walk, Hampstead </td>
            <td rowspan="1" colspan="1">20</td>
            <td rowspan="1" colspan="1">7.88</td>
            <td rowspan="1" colspan="1">4.09</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Self-portrait</td>
            <td rowspan="1" colspan="1">39</td>
            <td rowspan="1" colspan="1">15.95</td>
            <td rowspan="1" colspan="1">7.74</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">When the West with evening glows</td>
            <td rowspan="1" colspan="1">28</td>
            <td rowspan="1" colspan="1">12.36</td>
            <td rowspan="1" colspan="1">6.82</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">14.6.1964</td>
            <td rowspan="1" colspan="1">32</td>
            <td rowspan="1" colspan="1">11.57</td>
            <td rowspan="1" colspan="1">7.03</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Woman and suspended man</td>
            <td rowspan="1" colspan="1">40</td>
            <td rowspan="1" colspan="1">13.15</td>
            <td rowspan="1" colspan="1">8.28</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Sir Gregory Page-Turner</td>
            <td rowspan="1" colspan="1">32</td>
            <td rowspan="1" colspan="1">14.77</td>
            <td rowspan="1" colspan="1">7.60</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Release</td>
            <td rowspan="1" colspan="1">30</td>
            <td rowspan="1" colspan="1">14.67</td>
            <td rowspan="1" colspan="1">7.90</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">
              <bold>Condition 2</bold>
            </td>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
            <td rowspan="1" colspan="1"/>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Rhyl Sands</td>
            <td rowspan="1" colspan="1">18</td>
            <td rowspan="1" colspan="1">8.57</td>
            <td rowspan="1" colspan="1">3.84</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Flask-walk, Hampstead</td>
            <td rowspan="1" colspan="1">35</td>
            <td rowspan="1" colspan="1">18.23</td>
            <td rowspan="1" colspan="1">6.45</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Self-portrait</td>
            <td rowspan="1" colspan="1">33</td>
            <td rowspan="1" colspan="1">17.23</td>
            <td rowspan="1" colspan="1">9.81</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">When the West with evening glows</td>
            <td rowspan="1" colspan="1">28</td>
            <td rowspan="1" colspan="1">14.24</td>
            <td rowspan="1" colspan="1">5.74</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">14.6.1964</td>
            <td rowspan="1" colspan="1">33</td>
            <td rowspan="1" colspan="1">12.25</td>
            <td rowspan="1" colspan="1">6.44</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Woman and suspended man</td>
            <td rowspan="1" colspan="1">31</td>
            <td rowspan="1" colspan="1">15.12</td>
            <td rowspan="1" colspan="1">7.72</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Sir Gregory Page-Turner</td>
            <td rowspan="1" colspan="1">33</td>
            <td rowspan="1" colspan="1">16.78</td>
            <td rowspan="1" colspan="1">7.29</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">Release</td>
            <td rowspan="1" colspan="1">30</td>
            <td rowspan="1" colspan="1">13.94</td>
            <td rowspan="1" colspan="1">6.47</td>
          </tr>
						</tbody>
					</table>
					</table-wrap>

    </sec>
	
    <sec id="S5">
      <title>Results</title>
      <p>The results indicate that the majority of fixations made
for both groups tend to occur in the 100-300 ms duration
range, suggesting that relatively short fixations are
predominant in both conditions. There were, on average,
893 (SD = 72) fixations in the no-description group and
815 (SD = 97) fixations in the description group. The
mean fixation duration for the no-description group was
239 ms (SD = 191) and 227 ms for the description group
(SD = 128). <xref ref-type="fig" rid="fig05">Figure 5</xref> shows the fixations for both
conditions along with the fixation durations. A two-way
mixed ANOVA with trimmed means (&#x3B3; = 0.2), which was
used because the data violated parametric assumptions
(
          <xref ref-type="bibr" rid="R58 R59">28, 29</xref>
		  ), showed that there was no significant difference
between fixation counts for the description and
no-description groups. There was a significant but small
main effect of painting (Q = 4.15, p = .006, &#x3B7;&#x00B2; = .04),
which post-hoc pairwise t-tests (Bonferroni correction)
showed was significant for the &#x201C;Flask-walk, Hampstead&#x201D; and
&#x201C;Sir Gregory Page-Turner&#x201D; paintings (p = .013).</p>

	<fig id="fig05" fig-type="figure" position="float">
					<label>Figure 5.</label>
					<caption>
						<p>Fixations for both groups with their associated durations (all paintings)</p>
						</caption>
					<graphic id="graph05" xlink:href="jemr-10-04-d-figure-06.png"/>
				</fig>	

      <p><xref ref-type="fig" rid="fig06">Figure 6</xref> summarises the average fixation durations and 
number of fixations for each painting for both groups.</p>

	<fig id="fig06" fig-type="figure" position="float">
					<label>Figure 6.</label>
					<caption>
<p>Average fixation durations and number of fixations for each painting for both groups</p>
						</caption>
					<graphic id="graph06" xlink:href="jemr-10-04-d-figure-07.png"/>
				</fig>	

      <p>With the exception of the painting &#x201C;Release&#x201D;, the results
of the permutation test did not show any significant
differences in transitions between the groups, with p-values
greater than .10. The painting &#x201C;Release&#x201D; did
indicate a difference between groups (Hd = 0.859, p = .08),
see <xref ref-type="fig" rid="fig07">Figure 7</xref>, although this is not significant at &#x3B1; = .05.</p>

	<fig id="fig07" fig-type="figure" position="float">
					<label>Figure 7.</label>
					<caption>
						<p>Distribution of distance results of permutation test for the painting “Release”. The vertical bar shows the distance for the description and no-description conditions</p>
						</caption>
					<graphic id="graph07" xlink:href="jemr-10-04-d-figure-08.png"/>
				</fig>	

      <p>This form of analysis is known to be robust, so
the risk of a Type I error is low.</p>

      <p>We examined the recording quality between the two
groups to see if this could have affected any
differences between the groups. Recording quality is the
percentage of gaze samples successfully identified by the
eye-tracking software, out of the total number of sampling
attempts. Failure to detect the eyes and participants looking away
from the screen both contribute to reducing the
recording quality value. To this end a t-test was carried out
comparing the recording quality percentage values for
both groups. The difference was not significant, t(37) =
0.61, p &gt; .05, suggesting that any differences detected were 
due to behavioural factors rather than to
uneven recording quality between the groups. </p>
    </sec>
	
    <sec id="S6">
      <title>Discussion</title>
      <p>The majority of the paintings (n=7) did not demonstrate
any difference in terms of fixation and transition
behaviour. The painting that was associated with a
difference between groups was &#x201C;Release&#x201D;. As the
permutation test is robust against type I error (
          <xref ref-type="bibr" rid="R56">26</xref>
		  ), this
result suggests that there was a real difference between
the groups for this painting.</p>

      <p>It is notable that this painting lacks distinctive
features differentiating one area of the painting from any
other. The text, which provided an explanation of what
the features of the painting represent (images of
chromosomes viewed through an electron microscope), could
explain why transitional behaviour differs between the
groups in this case. Here, the description appeared to
provide information that could not be gleaned from the
painting itself, and it thus caused people to examine the
features of the painting differently. With no salient
features and a fairly uniform pattern, there may not
otherwise be cues to drive viewing behaviour.</p>

      <p>Of the participants who read the description, 68%
thought that it made a difference to how they
subsequently viewed the painting. Such a difference in gaze
behaviour was not observed between groups in this experiment,
and it would thus be interesting to further explore the
potential nature of this difference, if it exists.</p>

      <p>Recently there has been a move towards providing
descriptive information in a form that departs from
formal language, using instead descriptions that focus on
placing the work in context and describing the artist's
intentions (
          <xref ref-type="bibr" rid="R60">30</xref>
		  ). The work reported here focuses on
detecting whether reading a descriptive text leads to a
difference in visual behaviour; future work could
systematically address how different forms and formats of
curatorial narrative affect gaze patterns as well as different
types of image.</p>

      <p>The method presented here goes some way toward
removing biases that can be introduced by researchers
when manually defining areas/regions of interest.
Although in the current study it was applied to detecting
differences in visual behaviour with paintings, the
method could also be applied to other domains. Applying a
grid to the stimulus allows for comparison between these
uniform spatial Units of Interests (UOIs), defined using a
data-driven approach. This could be applied to any type
of stimulus that lacks obvious or predefined semantic
areas or regions. AOIs are typically added to segment a
stimulus in response to a hypothesis. Changing the AOI
changes the hypothesis, and adding AOIs after recording
data thus equates to formulating a post-hoc hypothesis
(
          <xref ref-type="bibr" rid="R36">6</xref>
		  ). Data-driven UOIs allow the segmentation of the
stimuli space into equally sized regions based on the
clustering of the fixation data. This allows for the easy
comparison of such UOIs to determine areas of the
stimulus that the participants focus most attention on, and how
they move between these areas. This data-driven method
allows an unbiased, exploratory approach to interpreting
gaze data across a stimulus. To mitigate the arbitrary
selection of grid size when using a gridded AOI system,
Holmqvist et al. (
          <xref ref-type="bibr" rid="R36">6</xref>
		  ) recommend that several different cell
sizes be used. As changing the AOI size has an effect on
the results, there may be a temptation for researchers to do
this until they find a statistically significant result, a
practice that can be problematic from a scientific perspective
(
          <xref ref-type="bibr" rid="R48">18</xref>
		  ). The method presented in this work addresses this
issue by using a combination of clusters within the gaze
data and pragmatic considerations to determine the size
of the grid.</p>

      <sec id="S6a">
        <title>Limitations</title>
        <p>A primary limitation of this study from the perspective of
the domain was the fact that participants were not able to
view the descriptive text and painting simultaneously or
switch between them, as they would on a website or in a
gallery; separating them was necessary to ensure the
paintings could be presented in the same way to
participants in both conditions. Paintings were viewed for
only 10 seconds, and it may thus be the case that,
regardless of the description, in this short time the eyes
were instinctively drawn to the salient components of the
painting, such as buildings, body shapes, facial details etc
(bottom-up features). The texts used here were
functional, intended to describe an artwork and aid its
identification; whilst we can hypothesise that other forms of
curatorial narrative would likewise not affect viewing behaviour, we
cannot be sure. Viewing a painting online is likely to be
quite different to viewing in a gallery, where scale and
context will affect the experience, so it is not clear
whether these results would extend to this scenario.</p>
      </sec>
	  
      <sec id="S6b">
        <title>Conclusions and future work</title>
        <p>The use of a grid based system with cell size
determined by data-driven clustering allows for the creation of
units of interest (UOIs), which can serve as a basis for
subsequent analysis. The UOIs simultaneously aid in
removing the bias introduced by researchers deciding on
the size and location of these areas, and allow direct
comparison between the units, as they are of equal
dimensions. The study demonstrated that viewing a
descriptive text has no significant impact on subsequent
gaze patterns over a painting, with one exception -- an
image that had a relatively uniform pattern with few
distinctive features. The effect may also vary according
to the type and quality of the textual information
provided in these descriptions, and it would be interesting to test
this in a future study. Here we examined descriptions -- it
may be that different forms of curatorial narrative have a
greater effect on viewing patterns. The techniques
described in this study may have a much wider application,
as they could also be of use in identifying the effects of
descriptive data on viewing behaviour in other domains,
such as understanding how patient history shown before a
subsequent medical scan affects the way this image is
viewed.</p>
      </sec>
	  
      <sec id="S6c" sec-type="COI-statement">
        <title>Ethics and Conflict of Interest</title>
        <p>The authors declare that the contents of the article are
in agreement with the ethics described in
<ext-link ext-link-type="uri" xlink:href="http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html" xlink:show="new">http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html</ext-link> 
and that there is no conflict of interest regarding the
publication of this paper.</p>
      </sec>
	  
      <sec id="S6d">
        <title>Acknowledgements</title>
        <p>We would like to thank the Engineering and Physical
Sciences Research Council (EPSRC) for funding this
work through grants EP/K502947/1, EP/L504877/1 and
EP/K503782/1-079. We would also like to thank Liz
Mitchell and Alex Wood for their help selecting the
paintings.</p>
      </sec>
    </sec>
  </body>
  <back>  
    <ref-list>
  
<ref id="R35"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Borji</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Sihite</surname>, <given-names>D. N.</given-names></string-name>, &amp; <string-name><surname>Itti</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2013</year>). <article-title>What stands out in a scene? A study of human explicit saliency judgment.</article-title> <source>Vision Research</source>, <volume>91</volume>, <fpage>62</fpage>–<lpage>77</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2013.07.016</pub-id><pub-id pub-id-type="pmid">23954536</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="R32"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Brieber</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Nadal</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Leder</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Rosenberg</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Art in time and space: Context modulates the relation between art experience and viewing time.</article-title> <source>PLoS One</source>, <volume>9</volume>(<issue>6</issue>), <fpage>e99019</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0099019</pub-id><pub-id pub-id-type="pmid">24892829</pub-id><issn>1932-6203</issn></mixed-citation></ref>
<ref id="R42"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Brinkmann</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Commare</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Leder</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Rosenburg</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Abstract Art as a Universal Language?</article-title> <source>Leonardo</source>, <volume>46</volume>(<issue>5</issue>), <fpage>488</fpage>–<lpage>489</lpage>. <pub-id pub-id-type="doi">10.1162/Leon</pub-id><issn>0024-094X</issn></mixed-citation></ref>
<ref id="R37"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Bubic</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Susac</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Palmovic</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Keeping our eyes on the eyes: The case of Arcimboldo.</article-title> <source>Perception</source>, <volume>43</volume>(<issue>5</issue>), <fpage>465</fpage>–<lpage>468</lpage>. <pub-id pub-id-type="doi">10.1068/p7671</pub-id><pub-id pub-id-type="pmid">25109013</pub-id><issn>0301-0066</issn></mixed-citation></ref>
<ref id="R53"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Davies</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2016</year>). <source>ECG Eye-tracking Experiment 1</source>. <publisher-loc>Manchester</publisher-loc>: <publisher-name>The University of Manchester</publisher-name>; <pub-id pub-id-type="doi">10.5281/zenodo.996475</pub-id></mixed-citation></ref>
<ref id="R50"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Davies</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2017</year>). <source>Manchester Art Gallery Eye-tracking Analysis</source>. <publisher-loc>Manchester</publisher-loc>: <publisher-name>The University of Manchester</publisher-name>; <pub-id pub-id-type="doi">10.5281/zenodo.996060</pub-id></mixed-citation></ref>
<ref id="R51"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Ester</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Kriegel</surname>, <given-names>H.-P.</given-names></string-name>, <string-name><surname>Sander</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Xu</surname>, <given-names>X.</given-names></string-name></person-group> (<year>1996</year>). <article-title>A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise.</article-title> In <source>KDD-96 Proceedings</source> (pp. <fpage>226</fpage>–<lpage>231</lpage>).</mixed-citation></ref>
<ref id="R59"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Field</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Miles</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Field</surname>, <given-names>Z.</given-names></string-name></person-group> (<year>2012</year>). <source>Discovering statistics using R</source>. <publisher-loc>Los Angeles</publisher-loc>: <publisher-name>Sage</publisher-name>.</mixed-citation></ref>
<ref id="R60"><mixed-citation publication-type="web-page" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Gail</surname>, <given-names>G.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Your labels make me feel stupid.</article-title> <date-in-citation content-type="access-date">Retrieved November 18, 2016</date-in-citation>, from <ext-link ext-link-type="uri" xlink:href="http://www.webcitation.org/6m77vjVPX">http://www.webcitation.org/6m77vjVPX</ext-link></mixed-citation></ref>
<ref id="R49"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>R Core Team</surname></string-name></person-group> (<year>2014</year>). <source>R: A language and environment for statistical computing</source>. <publisher-loc>Vienna, Austria</publisher-loc>: <publisher-name>R Foundation for Statistical Computing</publisher-name>.</mixed-citation></ref>
<ref id="R38"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Gartus</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Klemer</surname>, <given-names>N.</given-names></string-name>, &amp; <string-name><surname>Leder</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2015</year>). <article-title>The effects of visual context and individual differences on perception and evaluation of modern art and graffiti art.</article-title> <source>Acta Psychologica</source>, <volume>156</volume>, <fpage>64</fpage>–<lpage>76</lpage>. <pub-id pub-id-type="doi">10.1016/j.actpsy.2015.01.005</pub-id><pub-id pub-id-type="pmid">25700235</pub-id><issn>0001-6918</issn></mixed-citation></ref>
<ref id="R33"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Gartus</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Leder</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2014</year>). <article-title>The white cube of the museum versus the gray cube of the street: The role of context in aesthetic evaluations.</article-title> <source>Psychology of Aesthetics, Creativity, and the Arts</source>, <volume>8</volume>(<issue>3</issue>), <fpage>311</fpage>–<lpage>320</lpage>. <pub-id pub-id-type="doi">10.1037/a0036847</pub-id></mixed-citation></ref>
<ref id="R44"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Goldberg</surname>, <given-names>J. H.</given-names></string-name>, &amp; <string-name><surname>Kotval</surname>, <given-names>X. P.</given-names></string-name></person-group> (<year>1999</year>). <article-title>Computer interface evaluation using eye movements: Methods and constructs.</article-title> <source>International Journal of Industrial Ergonomics</source>, <volume>24</volume>(<issue>6</issue>), <fpage>631</fpage>–<lpage>645</lpage>. <pub-id pub-id-type="doi">10.1016/S0169-8141(98)00068-7</pub-id><issn>0169-8141</issn></mixed-citation></ref>
<ref id="R52"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Hahsler</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2016</year>). dbscan: Density Based Clustering of Applications with Noise (DBSCAN) and Related Algorithms. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://cran.rproject.org/package=dbscan">http://cran.rproject.org/package=dbscan</ext-link></mixed-citation></ref>
<ref id="R36"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Nystrom</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Anderson</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Dewhurst</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Van de Weijer</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2011</year>). <source>Eye tracking: A comprehensive guide to methods and measures</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="R43"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kapoula</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Daunys</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Herbez</surname>, <given-names>O.</given-names></string-name>, &amp; <string-name><surname>Yang</surname>, <given-names>Q.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Effect of title on eye-movement exploration of cubist paintings by Fernand Léger.</article-title> <source>Perception</source>, <volume>38</volume>(<issue>4</issue>), <fpage>479</fpage>–<lpage>491</lpage>. <pub-id pub-id-type="doi">10.1068/p6080</pub-id><pub-id pub-id-type="pmid">19522318</pub-id><issn>0301-0066</issn></mixed-citation></ref>
<ref id="R45"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Klein</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Betz</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Hirschbuehl</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Fuchs</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Schmiedtová</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Engelbrecht</surname>, <given-names>M.</given-names></string-name>, <etal>. . .</etal> <string-name><surname>Rosenberg</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Describing art - An interdisciplinary approach to the effects of speaking on gaze movements during the beholding of paintings.</article-title> <source>PLoS One</source>, <volume>9</volume>(<issue>12</issue>), <fpage>e102439</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0102439</pub-id><pub-id pub-id-type="pmid">25494170</pub-id><issn>1932-6203</issn></mixed-citation></ref>
<ref id="R55"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Knijnenburg</surname>, <given-names>T. A.</given-names></string-name>, <string-name><surname>Wessels</surname>, <given-names>L. F.</given-names></string-name>, <string-name><surname>Reinders</surname>, <given-names>M. J.</given-names></string-name>, &amp; <string-name><surname>Shmulevich</surname>, <given-names>I.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Fewer permutations, more accurate P-values.</article-title> <source>Bioinformatics (Oxford, England)</source>, <volume>25</volume>(<issue>12</issue>), <fpage>i161</fpage>–<lpage>i168</lpage>. <pub-id pub-id-type="doi">10.1093/bioinformatics/btp211</pub-id><pub-id pub-id-type="pmid">19477983</pub-id><issn>1367-4803</issn></mixed-citation></ref>
<ref id="R39"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kreplin</surname>, <given-names>U.</given-names></string-name>, <string-name><surname>Thoma</surname>, <given-names>V.</given-names></string-name>, &amp; <string-name><surname>Rodway</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Looking behaviour and preference for artworks: The role of emotional valence and location.</article-title> <source>Acta Psychologica</source>, <volume>152</volume>, <fpage>100</fpage>–<lpage>108</lpage>. <pub-id pub-id-type="doi">10.1016/j.actpsy.2014.08.003</pub-id><pub-id pub-id-type="pmid">25203454</pub-id><issn>0001-6918</issn></mixed-citation></ref>
<ref id="R46"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kübler</surname>, <given-names>T. C.</given-names></string-name>, <string-name><surname>Sippel</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Fuhl</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Schievelbein</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Aufreiter</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Rosenberg</surname>, <given-names>R.</given-names></string-name>, <etal>. . .</etal> <string-name><surname>Kasneci</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Analysis of eye movements with EyeTrace.</article-title> <source>Communications in Computer and Information Science</source>. <pub-id pub-id-type="doi">10.1007/978-3-319-27707-3_28</pub-id><issn>1865-0929</issn></mixed-citation></ref>
<ref id="R57"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Levenshtein</surname>, <given-names>V. I.</given-names></string-name></person-group> (<year>1966</year>). <article-title>Binary codes capable of correcting deletions, insertions, and reversals.</article-title> <source>Soviet Physics Doklady</source>, <volume>10</volume>(<issue>8</issue>), <fpage>707</fpage>–<lpage>710</lpage>.</mixed-citation></ref>
<ref id="R34"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Massaro</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Savazzi</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Di Dio</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Freedberg</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Gallese</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Gilli</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name><surname>Marchetti</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2012</year>). <article-title>When art moves the eyes: A behavioral and eye-tracking study.</article-title> <source>PLoS One</source>, <volume>7</volume>(<issue>5</issue>), <fpage>e37285</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0037285</pub-id><pub-id pub-id-type="pmid">22624007</pub-id><issn>1932-6203</issn></mixed-citation></ref>
<ref id="R40"><mixed-citation publication-type="conference" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>McNamara</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Directing gaze in narrative art.</article-title> <source>Proceedings of the ACM Symposium on Applied Perception (SAP)</source>, <fpage>63</fpage>–<lpage>70</lpage>. <pub-id pub-id-type="doi">10.1145/2338676.2338689</pub-id></mixed-citation></ref>
<ref id="R41"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Nagai</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Oyama-Higa</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Miao</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2007</year>). <article-title>Relationship between image gaze location and fractal dimension.</article-title> Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics, 4014–4018. <pub-id pub-id-type="doi">10.1109/ICSMC.2007.4414253</pub-id></mixed-citation></ref>
<ref id="R48"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Orquin</surname>, <given-names>J. L.</given-names></string-name>, <string-name><surname>Ashby</surname>, <given-names>N. J. S.</given-names></string-name>, &amp; <string-name><surname>Clarke</surname>, <given-names>A. D. F.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Areas of Interest as a Signal Detection Problem in Behavioral Eye-Tracking Research.</article-title> <source>Journal of Behavioral Decision Making</source>, <volume>29</volume>(<issue>2–3</issue>), <fpage>103</fpage>–<lpage>115</lpage>. <pub-id pub-id-type="doi">10.1002/bdm.1867</pub-id><issn>0894-3257</issn></mixed-citation></ref>
<ref id="R47"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Over</surname>, <given-names>E. A.</given-names></string-name>, <string-name><surname>Hooge</surname>, <given-names>I. T.</given-names></string-name>, &amp; <string-name><surname>Erkelens</surname>, <given-names>C. J.</given-names></string-name></person-group> (<year>2006</year>). <article-title>A quantitative measure for the uniformity of fixation density: The Voronoi method.</article-title> <source>Behavior Research Methods</source>, <volume>38</volume>(<issue>2</issue>), <fpage>251</fpage>–<lpage>261</lpage>. <pub-id pub-id-type="doi">10.3758/BF03192777</pub-id><pub-id pub-id-type="pmid">16956102</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="R31"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Quiroga</surname>, <given-names>R. Q.</given-names></string-name>, &amp; <string-name><surname>Pedreira</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2011</year>). <article-title>How do we see art: An eye-tracker study.</article-title> <source>Frontiers in Human Neuroscience</source>, <volume>5</volume>, <fpage>98</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2011.00098</pub-id><pub-id pub-id-type="pmid">21941476</pub-id><issn>1662-5161</issn></mixed-citation></ref>
<ref id="R54"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Reani</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2017</year>). <source>Scanpath analysis of 2-grams using Hellinger distance</source>. <publisher-loc>Manchester</publisher-loc>: <publisher-name>The University of Manchester</publisher-name>; <pub-id pub-id-type="doi">10.5281/zenodo.896239</pub-id></mixed-citation></ref>
<ref id="R56"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Wilcox</surname>, <given-names>R. R.</given-names></string-name></person-group> (<year>2010</year>). <source>Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy</source> (<edition>2nd ed.</edition>). <publisher-loc>New York</publisher-loc>: <publisher-name>Springer Verlag</publisher-name>; <pub-id pub-id-type="doi">10.1007/978-1-4419-5525-8</pub-id></mixed-citation></ref>
<ref id="R58"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Wilcox</surname>, <given-names>R. R.</given-names></string-name></person-group> (<year>2012</year>). <source>Introduction to Robust Estimation and Hypothesis Testing</source> (<edition>3rd ed.</edition>). <publisher-loc>Amsterdam</publisher-loc>: <publisher-name>Elsevier</publisher-name>; <pub-id pub-id-type="doi">10.1016/B978-0-12-386983-8.00012-3</pub-id></mixed-citation></ref>
</ref-list></back>
</article>

