<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.11.6.4</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Attention and Information Acquisition: Comparison of Mouse-Click with Eye-Movement Attention Tracking</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Egner</surname>
						<given-names>Steffen</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Reimann</surname>
						<given-names>Stefanie</given-names>
					</name>
					<xref ref-type="aff" rid="aff2">2</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Hoeger</surname>
						<given-names>Rainer</given-names>
					</name>
					<xref ref-type="aff" rid="aff2">2</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Zangemeister</surname>
						<given-names>Wolfgang H.</given-names>
					</name>
					<xref ref-type="aff" rid="aff3">3</xref>
				</contrib>                				
        <aff id="aff1">
		<institution>MediaAnalyzer GmbH, Hamburg</institution>,   <country>Germany</country>
        </aff>
        <aff id="aff2">
		<institution>University of Luneburg</institution>,   <country>Germany</country>
        </aff>
        <aff id="aff3">
		<institution>University of Hamburg</institution>,   <country>Germany</country>
        </aff>                
		</contrib-group>   

		
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>16</day>  
		<month>11</month>
        <year>2018</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2018</year>
	</pub-date>
      <volume>11</volume>
      <issue>6</issue>
	 <elocation-id>10.16910/jemr.11.6.4</elocation-id> 
	<permissions> 
	<copyright-year>2018</copyright-year>
	<copyright-holder>Egner, S., Reimann, S., Hoeger, R., &#x26; Zangemeister, W. H. </copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>Attention is a fundamental prerequisite for perception. The measurement of attention in viewing and recognizing the images that surround us constitutes an important part of eye movement research, particularly in advertising-effectiveness research. Recording eye and gaze (i.e. eye and head) movements is considered the standard procedure for measuring attention. However, alternative measurement methods have been developed in recent years, one of which is mouse-click attention tracking (mcAT), an online-based procedure that measures gaze motion via mouse clicks (i.e. hand and finger positioning maneuvers) on a computer screen.</p> 
        <p>Here we compared the validity of mcAT with eye movement attention tracking (emAT). We recorded data in a between-subject design via emAT and mcAT and analyzed and compared the data of 20 subjects for correlations. The test stimuli consisted of 64 images that were assigned to eight categories. Our main results demonstrated a highly significant correlation (p &#x3C; 0.001) between mcAT and emAT data. We also found significant differences in correlations between different image categories. For simply structured pictures of humans or animals in particular, mcAT provided highly valid and more consistent results compared to emAT. We concluded that mcAT is a suitable method for measuring the attention we give to the images that surround us, such as photographs, graphics, art or digital and print advertisements.</p>        
      </abstract>
      <kwd-group>
        <kwd>Visual Attention</kwd>
        <kwd>Information acquisition</kwd>
        <kwd>Mouse-Click Attention Tracking</kwd>
        <kwd>Eye-Movement Attention Tracking</kwd>
        <kwd>Comparison of Attention Tracking</kwd>
        <kwd>Visual search</kwd>		
        <kwd>Scanpath</kwd>	        
      </kwd-group>
    </article-meta>
  </front>	
  <body>

<sec id="S1">
  <title>Introduction</title>

  <p>Visual attention is a research topic of increasing impact. Not only is
  attention an interesting topic in itself, it also plays a crucial role
  in perception and motor control. Moreover, measuring attention
  yields valuable data for studying higher cognitive functions such as
  interest, understanding and reading. The measurement of attention has
  traditionally been treated as parallel to eye tracking (<xref ref-type="bibr" rid="b1 b2 b3 b4 b5">1, 2, 3, 4, 5</xref>); the
  underlying rationale is that humans direct the region of their retina
  with the highest resolution (the fovea) to those aspects of the optical scene
  that are of high relevance for the organism. However, experiments
  with response latency tasks (<xref ref-type="bibr" rid="b7">7</xref>) clearly indicate that there are
  attention shifts that are not measurable as eye movements (covert
  attention). Moreover, attention does not seem to be specifically linked
  to the visual modality; in fact, attention seems to be
  modality-unspecific (<xref ref-type="bibr" rid="b8 b9">8, 9</xref>). The two classical ways to measure
  attention have specific advantages and disadvantages.</p>

  <sec id="S1a">
    <title>Eye-Tracking and attention</title>
    <p>As indicated by its name, eye tracking measures the position and
    orientation of the eye(s). Based on these raw data, gaze position in
    the environment can be determined. Eye-tracking is a technique to
    measure an individual's visual attention, focus, and eye movements.
    This experimental methodology has proven useful both for
    human-computer interaction research and for studying the cognitive
    processes involved in visual information processing, including which
    visual elements people look at first and spend the most time on
    (<xref ref-type="bibr" rid="b10">10</xref>).</p>
    
    <p>Fixation criteria are often unclear, and blinks, corrective saccades,
    and physiological and technical noise add to the difficulty of
    measurement. This approach is attention tracking by eye movement (emAT).</p>
    
    <p>Response latency tasks assume that the reaction to an event
    that happens at a specific location – for instance the onset of a
    stimulus – will be quicker when the position of the emerging
    stimulus is expected at that particular moment. This method allows
    the measurement of covert attention (<xref ref-type="bibr" rid="b7">7</xref>), which in some cases precedes
    eye movements. However, each trial reveals at most one attended
    location per respondent; therefore, we cannot measure a
    full path of attentional shifts, only individual locations. Some
    authors propose that attention can be measured in ways other than
    the two methods mentioned above.</p>
    
    <p>Some of these other ways employ the computer mouse to indicate
    attention locations. One of these methods, mouse-click based
    attention tracking (mcAT) will be examined in more detail in this
    paper. It seems obvious that a method that relies on the computer
    and a mouse as the only necessary devices would have many practical
    advantages over other methods. But the question is: Is it a method
    that generates results with validity comparable to eye tracking?</p>
    
    <p>Both the <italic>salience and conspicuousness</italic> of a stimulus
    relative to its environment (<xref ref-type="bibr" rid="b11">11</xref>) and the <italic>relevance of a
    stimulus</italic> are decisive criteria for the allocation of
    attention. They seem to be based upon two independent systems (<xref ref-type="bibr" rid="b12">12</xref>);
    however, within a given setup the impacts of bottom-up and top-down
    mechanisms may be highly difficult to quantify and to differentiate with
    respect to their individual importance for actual perception (<xref ref-type="bibr" rid="b13 b14">13, 14</xref>). 
    Various studies have shown that the degree of exogenous and
    endogenous direction of attention depends upon a number of factors.
    Novel or unfamiliar stimulus situations in free-viewing tasks, or the
    viewing of images in advertising, are usually thought to be dominated
    by bottom-up processes, especially at the beginning of the viewing
    time (<xref ref-type="bibr" rid="b15 b16 b17">15, 16, 17</xref>). However, with increased viewing time, with familiar
    visual situations, and in the search for certain stimuli, viewing is
    dominated by top-down processes.</p>
    
    <p>Viewing time: the total amount of time within an AOI
    approximately corresponds to the fixation duration – the time between
    two successive clicks, with generally half of it attributed to the
    preceding fixation (max. 500 ms) and half to the subsequent fixation
    (max. 500 ms).</p>
  </sec>
  <sec id="S1b">
    <title>Selective attention and eye movements – the classical
    relationship to study.</title>
    <p>Selective attention is the gateway to conscious experience,
    affecting our ability to perceive, distinguish and remember the
    various stimuli that come our way (<xref ref-type="bibr" rid="b18">18</xref>). Selective attention denotes
    the allocation of limited processing resources to some stimuli or
    tasks at the expense of others (<xref ref-type="bibr" rid="b19 b20 b21 b22 b23 b24">19, 20, 21, 22, 23, 24</xref>). Apart from its effects on
    perception or memory, selective attention is a significant
    contributor to motor control, determining which of the various
    objects in the visual field is to be the target used to plan and
    guide movement. As selective visual attention allows us to
    concentrate on one aspect of the visual field while ignoring other
    things, it is modulated by both involuntary bottom-up and
    voluntary top-down mechanisms (<xref ref-type="bibr" rid="b20">20</xref>), within a
    brainstem-parietotemporal and basal ganglia-frontal neuronal network
    (<xref ref-type="bibr" rid="b25">25</xref>).</p>
    
    <p>Selective visual attention for spatial locations is under the
    control of the same neural circuits as those in charge of motor
    programming of saccades (<xref ref-type="bibr" rid="b26 b27 b28 b29 b30">26, 27, 28, 29, 30</xref>).</p>
    
    <p>Directing visual attention to a certain location, as well as
    making ocular saccades in visual attention tasks, depends upon accurate
    saccade programming. Programming an eye saccade is thought to lead
    to an obligatory shift of attention to the saccade target before the
    voluntary eye movement is executed, which depends on two parameters:
    correct programming of the saccade and correct saccade dynamics
    (<xref ref-type="bibr" rid="b31 b32 b33 b34">31, 32, 33, 34</xref>).</p>
    
    <p>Therefore, the alertness of central, top-down programming
    influences oculomotor function and, conversely, a resulting
    oculomotor dysfunction could have a direct, bottom-up impact on the
    results of visual attention tasks.</p>
    
    <p>Visual selective attention can be investigated by visual search
    tasks.</p>
    
    <p>Visual search means to look for something in a cluttered visual
    environment. The item that the observer is searching for is termed
    the target, while non-target items are termed distractors. Many
    visual scenes contain more information than we can fully process all
    at once. Accordingly, mechanisms like those subserving object
    recognition might process only a selected/restricted part of the
    visual scene at any one time. Visual attention is used to control
    the selection of the subset of the scene, and most visual searches
    consist of a <italic>series of attentional deployments</italic>,
    which ends either when the target is found, or when the search is
    abandoned. Overt search refers to a series of eye movements around
    the scene made to bring difficult-to-resolve items onto the fovea.
    Only if the relevant items in the visual scene are large enough to
    be identified without fixation can the search be successfully
    performed while the eyes are focused upon a single point. In this
    case, attentional shifts made during a single fixation are termed
    <italic>covert</italic>, because they are <italic>inferred rather
    than directly observed</italic>.</p>
    
    <p>While under laboratory conditions many search tasks can be
    performed entirely with covert attention, under real-world
    conditions a new point of fixation is selected 3 to 4 times per
    second. <italic>Overt fast movements of the eye, saccades, and
    covert deployments of attention are closely related</italic> (<xref ref-type="bibr" rid="b20">20</xref>),
    as the sample rate of saccades is 4/sec. With stimuli that do not
    require direct foveation, 4–8 objects can be searched during each
    fixation. As estimates of the minimum time required to recognize a
    single object are almost always greater than 100 ms, multiple items
    may be processed in parallel (<xref ref-type="bibr" rid="b35">35</xref>). <italic>Volitional</italic>
    deployments of attention are much slower than
    <italic>automatic</italic> deployments (<xref ref-type="bibr" rid="b36">36</xref>), and occur at a rate
    similar to saccadic eye movements, i.e. a sample rate of 4/sec (<xref ref-type="bibr" rid="b37">37</xref>).
    <italic>Search termination</italic> happens after the target is found,
    or the target may be declared absent after every distractor object has
    been rejected, although it may be difficult to determine when this
    point has been reached.</p>
  </sec>
  <sec id="S1c">
    <title>Mouse-click attention tracking – Background</title>
    <p>The mouse-click attention tracking (mcAT) method measures
    attention by mouse clicks, which can be counted and concatenated into
    a time sequence analogous to the eye-movement scanpath. Egner
    and Scheier developed this method in collaboration with Laurent Itti
    (<xref ref-type="bibr" rid="b38">38</xref>) at the California Institute of Technology (USA). They assessed
    the predictive power of a computerized attention model with three
    categories of visual stimuli (photographs of natural scenes,
    artificial laboratory stimuli and web sites).</p>
    
    <p>Based upon empirical evidence of a close link between attention,
    eye movements/fixations and pointing movements (<xref ref-type="bibr" rid="b39">39</xref>), the eye tracking
    data (emAT) and the touch screen and mouse-click data (mcAT)
    were highly correlated (an overview can be found in (<xref ref-type="bibr" rid="b16 b38 b40">16, 38, 40</xref>)). The mcAT method was patented in the US and Europe as a
    mouse-click based AT procedure for measuring visual attention (<xref ref-type="bibr" rid="b41 b42">41, 42</xref>).</p>
    
    <p>The central idea of the mcAT method is the natural coupling of
    mouse clicks with eye movement measures, which in turn
    represent a valid indicator of attention.</p>
    
    <p>In the following years, other researchers also explored the
    relationship between users' mouse movements and eye movements on web
    pages (<xref ref-type="bibr" rid="b43">43</xref>).</p>
    
    <p>Deng, Krause and Fei-Fei (<xref ref-type="bibr" rid="b44">44</xref>) used the bubble paradigm of Gosselin
    and Schyns (<xref ref-type="bibr" rid="b45">45</xref>) to discover the object/image regions
    people explicitly choose to use when performing a task.</p>
    
    <p>Chen, Anderson and Sohn (<xref ref-type="bibr" rid="b46">46</xref>) described a study on
    the relationship between gaze position and cursor position on a
    computer screen during web browsing. Users were asked to browse
    several web sites while their eye and mouse movements were recorded. The
    data suggested a strong relationship between gaze
    position and cursor position, and also showed regular patterns
    of eye/mouse movements. Based on these findings,
    the authors argued that a mouse could provide more information than just
    the x, y position where a user is pointing; they speculated that by
    understanding the intent of every mouse movement, one should be able
    to build a better interface for human-computer interaction.</p>
    
    <p>Using eye and mouse data, Navalpakkam et al. (<xref ref-type="bibr" rid="b47">47</xref>) demonstrated
    that the mouse, like the eye, is sensitive to two key attributes of
    page elements: their position (layout), and their relevance to the
    user's task. They identified mouse measures that were strongly
    correlated with eye movements and developed models to predict user
    attention (eye gaze) from mouse activity.</p>
    
    <p>Our approach is different from the viewing window approach of
    (<xref ref-type="bibr" rid="b44">44</xref>) in that we explicitly collect the path of discretized click
    data, as each click represents a conscious choice made by the user
    to reveal a portion of the image. Since the clicks correspond to
    individual locations of attention, we can directly compare them to
    eye fixations.</p>
    
    <p>Kim et al. (<xref ref-type="bibr" rid="b48">48</xref>) investigated the utility of mouse clicks as
    an alternative to eye fixations in the context of understanding
    data visualizations. They developed a crowdsourced online study in
    which participants were presented with a series of images containing
    graphs and diagrams and were asked to describe them. Comparing the
    mouse-click data with the fixation data from a complementary
    eye-tracking experiment by calculating the similarity between the
    resulting heatmaps, they obtained a high similarity score. They
    suggested that this methodology could also be used to complement
    eye-tracking studies with an additional behavioral measurement, since
    it is specifically designed to measure which information people
    consciously choose to examine when trying to understand visualizations.</p>
  </sec>
  <sec id="S1d">
    <title>Aim of our study</title>
    <p>The question is, can mouse clicks approximate human fixations in
    the context of data visualization understanding? When we compare eye
    movement/fixations and hand mouse movement/clicks, we assume that
    the sensory-attentional and the cognitive part of these actions are
    highly similar, whereas the motor part is obviously different. From
    this reasoning we can infer three questions:</p>
    <list list-type="order">
      <list-item>
        <p>What <italic>are</italic> the differences between eye and
        hand movements that have been described by many researchers (<xref ref-type="bibr" rid="b49 b50">49, 50</xref>) and how do they relate to our findings?</p>
      </list-item>
      <list-item>
        <p>What are the similarities of the two responses?</p>
      </list-item>
      <list-item>
        <p>Are there non-motor differences related to
        attention/cognition and how do they relate to our findings?</p>
      </list-item>
    </list>
    <p>The present study is of interest to the eye movement research
    community for the following reasons. While eye tracking is the
    well-established method for measuring visual attention, eye
    movement data do not allow a distinction between
    eye-movement-specific and attention-specific effects. The
    alternative measurement described and used in the present article
    uses the hand (computer mouse) to measure attention. The resulting
    data are highly similar to eye-tracking data but are not affected
    by eye-movement-specific processes; thus, they allow
    eye-movement-specific and attention-specific effects to be separated.
    Additionally, the alternative measurement enables a method comparison,
    which enriches our knowledge about eye-tracking methodology. Lastly,
    the new method can help to gain a better understanding of attention,
    which is also the goal of much eye and mouse tracking research on web
    page viewing. So, both methods contribute to the same goal.</p>
    
    <p>Therefore, two practical issues were addressed:</p>
    <list list-type="roman-lower">
      <list-item>
        <p>Do the spatial distributions of fixations and clicks correlate
        highly and positively?</p>
      </list-item>
      <list-item>
        <p>Does this putative correlation depend upon the stimulus
        material?</p>
      </list-item>
    </list>
    <p>In view of our experiences, we hypothesized that:</p>
    <list list-type="roman-lower">
      <list-item>
        <p>The overall pattern of fixations/clicks correlates highly.</p>
      </list-item>
      <list-item>
        <p>The amount of the agreement between both recording methods
        depends upon the nature of the stimuli.</p>
      </list-item>
    </list>
  </sec>
</sec>
<sec id="S2">
  <title>Methods</title>
  <p>We used an independent-measures, i.e. between-subject, design,
  selected to prevent carry-over effects (a previous condition affecting a
  later experimental condition) as well as position effects such as
  fatigue and practice.</p>
  <sec id="S2a">
    <title>Subjects </title>
    <p>The data of twenty participants were used. One group of subjects
    underwent one experimental condition only: In ten subjects emAT was
    measured via eye movement recordings while viewing the stimulus
    material.</p>
    
    <p>The remaining ten subjects underwent mcAT, measured via
    mouse-click recordings while viewing the stimulus material. All
    other experimental conditions (stimulus material, type of
    presentation, site of examination, demographics of the subjects,
    experimenter) were kept constant. Before the actual test took place,
    multiple pilot sessions were run to clarify, check and optimize the
    instructions and the operation of the equipment.</p>
    
    <p>The study was approved by the local ethics committee. It
    complies with ethical practices and follows the Code of Conduct
    and Best Practice Guidelines outlined by the Committee on
    Publication Ethics. Informed consent for the research was obtained
    following an oral introduction and overview at the LEUPHANA University of
    Luneburg, Engineering Psychology research lab.</p>
  </sec>
  <sec id="S2b">
    <title>Sample</title>
    <p>The whole study consisted of three phases: (1) compilation of the
    stimulus material, (2) pilot run, and (3) final experiment.</p>
    
    <p>The participants of all three phases – the preliminary
    selection and classification of images in the classification scheme
    (n = 12), the trial run (n = 6) and the final test with emAT and
    mcAT measurements (n = 29) – were students recruited from the
    University of Lueneburg. All participants gave their written
    informed consent following the rules of the Helsinki Declaration.
    None of the subjects (Ss) had participated in more than one of the
    tests or knew the exact purpose of the investigation.</p>
    
    <p>A comparison of various studies shows a wide range in the
    number of study participants. An overview by Borji and Itti
    (<xref ref-type="bibr" rid="b11">11</xref>) lists 19 studies in which computational attention models
    were evaluated against emAT data; the subject numbers vary between 5
    and 40, but more than half of the listed studies used fewer than 15
    subjects. To keep the failure rate of the emAT test low, only
    subjects who did not rely on visual aids were invited.</p>
    
    <p>Four out of 14 emAT respondents were removed from the evaluation.
    The criterion for removing recordings was calibration quality: the
    calibration performed at the beginning of each recording was checked
    at the end, when the respondent had to redo the calibration
    procedure. If the results differed substantially from the initial
    calibration, we removed the recording; this was decided upon face
    validity. The reasoning is that if the calibration parameters
    changed throughout the recording process, this points to a
    distortion that happened during the recording. The remaining ten
    subjects were between 18 and 22 years old (average age: 20); four
    were female. Likewise, five out of 15 mcAT respondents were removed
    from the evaluation, based on their mouse behavior during the
    recording. A recording was excluded if:</p>
    <list list-type="bullet">
      <list-item>
        <p>the click rate went below 1.5 clicks per second,</p>
      </list-item>
      <list-item>
        <p>the respondent stopped moving the mouse,</p>
      </list-item>
      <list-item>
        <p>the click pattern revealed that the respondent did not
        understand the instructions. The last point was decided upon
        face validity.</p>
      </list-item>
    </list>
    <p>Subjects for the mcAT measurements had no restrictions concerning
    visual aids. Of the 15 recorded subjects, ten recordings could be
    used for further analysis. The ten remaining subjects were 20–26
    years old (average age: 23), with an even sex ratio.</p>
  </sec>
  <sec id="S2c">
    <title>Stimuli</title>
    <p>Until now, no generally accepted classification scheme – neither
    the number nor the type of classes – has been established in attention
    research. Examples of photo classifications are: natural
    landscapes and portraits (<xref ref-type="bibr" rid="b51">51</xref>); animals in natural environments,
    street scenes, buildings, flowers and natural landscapes (<xref ref-type="bibr" rid="b52">52</xref>);
    nature/landscape scenes, urban environments and artificial
    laboratory stimuli (<xref ref-type="bibr" rid="b53">53</xref>); as well as images with obvious and
    unclear/non-existent AOIs (<xref ref-type="bibr" rid="b54">54</xref>). None of these authors stated the
    reasons for the particular classification that was
    selected. Therefore, we developed our own classification scheme,
    which reflects the suggestions in the literature but has also been
    derived from features that mirror the mechanisms of the control of
    attention.</p>
    
    <p>Figure 1 shows the distribution of images at their different
    levels. At the highest level were photographs that represented
    animate and inanimate matter. At the second level, the animate class
    comprised pictures of people/animals and plants; the inanimate class
    included artificial and natural environments. Each of these four
    classes, independently of content aspects, was divided into simple
    and complex designs.</p>

    <fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1.</label>
					<caption>
						<p>Images and their category levels. Utilized stimulus material: human/animal, simple, complex, plants, inanimate, artificial,
natural</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-11-06-d-figure-01.png"/>
				</fig>
    
    <p>Although complexity in different contexts may be an important
    determinant of attention processes (<xref ref-type="bibr" rid="b12 b55">12, 55</xref>), different authors
    define the term complexity in many different ways.</p>
  </sec>
  <sec id="S2d">
    <title>Image selection</title>
    <p>The photographs originate from two sources: the image database of
    Borji et al. (<xref ref-type="bibr" rid="b56">56</xref>),
    currently the largest freely available and commonly used
    image data set, which was also used by Judd et al. (<xref ref-type="bibr" rid="b51">51</xref>), and
    photos from the pixabay.com website, a database of
    Creative Commons CC0 images.</p>
    
    <p>First, we made a preselection of 120 images across the categories. To
    ensure that the classification of images was done objectively
    despite different possible interpretations of the schema classes, we
    reduced the data set to eight photos per category using 12 subjects
    for classification. The subjects were given the task of classifying
    each of the 120 sequentially presented images into the classification
    scheme shown above, without further explanation of the schema
    classes. In this way, an impartial classification could be
    performed.</p>
    
    <p>Only those images for which a minimum of two thirds of the
    subjects chose a particular category were selected for the
    experiment.</p>
    
    <p>During the evaluation, it was found that some subjects had
    difficulties differentiating between the categories “natural” and
    “plants”; the terms “complex” and “simple” were also interpreted
    quite differently. Subsequently, the selected 64 images were cut to
    three uniform sizes to fit the screen used in the following
    test: 8 photos in portrait format (690 x 920 pixels), 30 photos in
    landscape format (1226 x 920 pixels), and 26 photos in landscape
    format (1250 x 833 pixels).</p>
  </sec>
  <sec id="S2e">
    <title>Distribution of image material at different levels. </title>
    <p>Photographs with representation of animate and inanimate matter
    were at the highest level. The second level, the “animate” class,
    represented images of people/animals, and plants, and the
    “inanimate” class in artificial and natural environments. Each of
    these four classes was again independent of content aspects and
    divided into simple and complex designs. Although complexity was in
    different contexts an important determinant of attention processes
    (<xref ref-type="bibr" rid="b12 b55">12, 55</xref>) many authors have used the term differently.</p>
  </sec>
  <sec id="S2f">
    <title>A priori AOIs – grid application</title>
    <p>To compare the spatial distributions of the viewing and click data
    in a meaningful way, regions of interest (ROIs) or areas of interest
    (AOIs) had to be defined. Semantically based AOIs have frequently been
    used for this, in particular in analyses of advertisements or web
    sites. As we were less interested in gaze behavior with respect to
    specific image regions and more interested in the global eye movements
    over the entire image, we used a grid laid over the image that
    divided it into a certain number of fields. In this way, we
    excluded subjective preferences that might confound the results of
    our analysis. Of course, this procedure had its disadvantages: the
    choice of the grid field’s size sometimes played a particularly
    important role by defining objects too inaccurately; e.g. a face
    might be divided into several fields and, therefore, subdivided by
    single fixations. As the image data set used here contained many
    complex stimuli for which AOIs were difficult to define, gridding
    was the only meaningful way to analyze the data. As a compromise, we
    selected a 5 x 7 grid (35 fields), so that the fields were
    approximately square. The attention parameters were continuously
    distributed features and could assume values between 0.0 and 1.0; for
    example, a value of 0.24 in a grid field meant that 24% of all clicks
    or fixations were made in this particular field. The contact
    parameters, however, were discretely distributed features, with eleven
    possible values between 0 and 1 (in increments of 0.1) for ten
    subjects; for example, a value of 0.3 means that 30% of the subjects
    had looked or clicked in a field.</p>
    
    <p>An arbitrarily chosen grid definitely has disadvantages in the
    evaluation. Among other things, adjacent fixations (or clicks) may
    fall into separate grid cells, even though both fixations belong to
    the same object. It would be desirable to evaluate fixations on the
    same object together. As an alternative to the grid approach, one
    can also define regions. Ideally, the regions are set to correspond
    to fixation goals (objects). This bypasses the above cited
    disadvantage. However, this approach also has a significant
    disadvantage. The manually selected regions can strongly distort the
    results if chosen unfavourably. They could also be used to
    deliberately distort results. We will not solve this general problem
    of the eye tracking community with our article (see (<xref ref-type="bibr" rid="b57">57</xref>) for a
    methodological overview). That is beyond the scope of our paper. The
    JEMR paper by Oliver Hein and Wolfgang H. Zangemeister (<xref ref-type="bibr" rid="b58">58</xref>) offers
    one possible solution.</p>
    
    <p>In summary, two measures were calculated from the fixation and
    click data: the <italic>contact value</italic>, i.e. the proportion
    of subjects who viewed or clicked in a particular field, and the
    <italic>attention value</italic>, obtained by averaging the subjects'
    <italic>single grid</italic> percentages of clicks or fixations
    relative to the total clicks or fixations per image.</p>
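    
    <p>To make the two measures concrete, the following minimal Python
    sketch computes them from raw event coordinates. This is a sketch
    under assumed data structures: the variable names and the input
    format are hypothetical, and the original analysis was carried out
    in Excel and SPSS rather than in code like this.</p>
    <preformat>
import numpy as np

ROWS, COLS = 5, 7          # the 5 x 7 grid (35 fields)
W, H = 1226, 920           # image size in pixels (landscape example)

def grid_index(x, y):
    """Map a fixation/click coordinate to a grid field (row, col)."""
    col = min(int(x / W * COLS), COLS - 1)
    row = min(int(y / H * ROWS), ROWS - 1)
    return row, col

def attention_and_contact(subject_points):
    """subject_points: one list of (x, y) events per subject."""
    shares = []                       # each subject's share of events per field
    contact = np.zeros((ROWS, COLS))  # proportion of subjects with >= 1 event
    for points in subject_points:
        counts = np.zeros((ROWS, COLS))
        for x, y in points:
            counts[grid_index(x, y)] += 1
        shares.append(counts / counts.sum())
        contact += (counts > 0)
    attention = np.mean(shares, axis=0)   # mean single-grid percentages
    return attention, contact / len(subject_points)
    </preformat>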
    
    <p>For automatic algorithmic generation of particular grid sizes
    and/or content specific AOIs see: Privitera and Stark (<xref ref-type="bibr" rid="b59">59</xref>) and Hein
    and Zangemeister (<xref ref-type="bibr" rid="b58">58</xref>).</p>
  </sec>
  <sec id="S2g">
    <title>Experimental Setup</title>
    <p>In order to keep the experimental conditions for both measuring
    methods as constant as possible, data collection for both emAT and
    mcAT was carried out in the eye movement laboratory of the University
    of Lüneburg between November 1 and 16, 2015. The stimulus material
    was presented on a 21.5-inch monitor (Acer) with a resolution of
    1920 x 1080 pixels. To avoid sequence effects, the 64 images in both
    experiments were presented in randomized order, each for a duration
    of 5 s. This timing was chosen in accordance with many other related
    studies (<xref ref-type="bibr" rid="b52 b54 b60">52, 54, 60</xref>). We separated the individual images by means of
    a blank screen (here for a duration of 2 s) on which a commonly used
    fixation cross was shown in the middle (<xref ref-type="bibr" rid="b61">61</xref>). The fixation cross was
    used for both measurement methods and all pictures to ensure a
    common starting position for the eyes and the computer mouse.
    Following both tests, the subjects answered a short questionnaire on
    their demographic data. They also completed a recognition test, in
    which they had to judge whether each of a series of images had been
    shown in the previous experiment or not. The mcAT subjects were
    additionally requested to answer three qualitative questions on their
    click behavior.</p>
  </sec>
  <sec id="S2ht">
    <title>Eye movement Attention Tracking (emAT)</title>
    <p>The eye movement measurement was carried out with the SMI iView
    X™ Hi-Speed 1250 eye tracker, a tower-mounted dark-pupil
    system recording the movements of one eye at a sampling rate of 500 Hz
    (SMI SensoMotoric Instruments GmbH). The distance between the chin
    rest of the iView X™ and the screen was 60 cm. Programming, evaluation
    and control of the experiment were carried out using SMI’s BeGaze
    Analysis software (version 3.5); the iViewX program was used to
    control the recording of eye movements.</p>
    
    <p>Subjects received standardized verbal instructions, during which
    they were informed of the calibration and test procedure. Automatic
    calibration then followed, with a spatial accuracy of at least 0.5°;
    we used additional manual calibration in case the accuracy was
    insufficient. Calibration took place before and after the presentation
    of the 64 images. Thus, both the measurement accuracy and precision
    were validated to provide an assessment of data quality. After the
    first calibration, further instructions were given on the screen.
    Subjects were asked to view the following images as they chose, in a
    free-viewing task, to create near-natural viewing conditions without
    any viewing strategies (see Borji &#x26; Itti (<xref ref-type="bibr" rid="b11">11</xref>); Parkhurst et al. (<xref ref-type="bibr" rid="b14">14</xref>)). Overall, the
    presentation of images took about ten minutes.</p>
  </sec>
  <sec id="S2i">
    <title>Mouse click Attention Tracking (mcAT)</title>
    <p>For the mcAT test, the instructions (Appendix 1), the click training,
    the click test with the 64 images, and the demographic data collection
    were programmed as an online questionnaire using the MALight software
    from MediaAnalyzer. While the subjects initially completed the click
    training, the experimenter stood by to answer questions and provide
    guidance on clicking behavior in case the instructions were not
    understood. The subsequent viewing of the images was done similarly to
    the emAT, without any task, with the supplementary advice: “You can
    click everywhere you are looking.” Completing the click
    training and the click test took about 12 minutes.</p>
  </sec>
  <sec id="S2j">
    <title>Data Analysis</title>
    <p>The default algorithm parameter settings of “BeGaze” (SMI
    SensoMotoric Instruments GmbH) were: saccade detection: min. duration
    22 ms, peak velocity threshold 40°/s, min. fixation duration 50 ms;
    peak velocity window: from 20% (start) to 80% (end) of saccade
    length.</p>
    
    <p>First, the emAT fixations were calculated. BeGaze contains both a
    dispersion-based and a velocity-based algorithm; the latter was used
    here because the SMI machine is a high-speed device. The minimum
    fixation duration was chosen based upon the inspection of selected
    images and subjects for durations of 50 ms, 100 ms and 200 ms. Based
    upon this visual analysis and the information in the literature
    (Holmqvist et al. (<xref ref-type="bibr" rid="b61">61</xref>)), we used a minimum fixation duration of 100 ms (instead of the
    default 50 ms) as the parameter setting.</p>
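    
    <p>For illustration, a minimal velocity-threshold (I-VT style)
    fixation filter with the settings reported above might look as
    follows in Python. This is a sketch under stated assumptions:
    BeGaze's actual algorithm may differ in detail, and the input format
    is hypothetical.</p>
    <preformat>
import numpy as np

FS = 500.0           # sampling rate in Hz
VEL_THRESH = 40.0    # velocity threshold in deg/s
MIN_FIX_DUR = 0.100  # minimum fixation duration in s (chosen setting)

def detect_fixations(gaze_deg):
    """gaze_deg: (n, 2) array of gaze positions in degrees of visual angle."""
    vel = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * FS  # deg/s
    is_fix = np.append(vel &lt; VEL_THRESH, False)
    fixations, start = [], None
    for i, f in enumerate(is_fix):
        if f and start is None:
            start = i                        # fixation onset
        elif not f and start is not None:
            dur = (i - start) / FS
            if dur >= MIN_FIX_DUR:           # discard too-short fixations
                centroid = gaze_deg[start:i + 1].mean(axis=0)
                fixations.append((start / FS, dur, tuple(centroid)))
            start = None
    return fixations  # list of (onset s, duration s, (x, y) centroid)
    </preformat>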
  </sec>
  <sec id="S2k">
    <title>Data Cleansing</title>
    <p>Next, the data quality of each subject was checked via their
    fixations at the beginning and at the end of the experiment. Where the
    deviation between the initial and final calibration fixations
    (precision and accuracy) was too high, we had to exclude the subject;
    this applied to four subjects (s. Annex 4). It can be assumed that the
    first stimulus-specific fixation does not necessarily coincide with
    the very first fixation, but occurs after a certain period of time,
    which we defined to be 500 ms; therefore, the first 500 ms were
    excluded from further evaluation. Compared to the first saccadic eye
    fixations, the manual start of the mouse clicks, i.e. the
    “mouse fixation”, was slightly slower than the sequence of
    eye movement fixations, due to the inertial load difference between
    eye and hand. Therefore, for mouse clicks we excluded the first
    800 ms from further evaluation. Decisive for the quality of the click
    data was a controllable minimum click speed; the required click rate
    was 1.5 clicks per second or higher, which all subjects attained. The
    click data for the attention and contact parameters were transformed
    per grid field and averaged across all subjects by means of Microsoft
    Excel. With 35 fields per frame and a total of 64 images, 2240 values
    per sample were observed for each method (mcAT and emAT).</p>
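    
    <p>The cleansing rules above can be summarized in a short Python
    sketch (hypothetical data layout: each recorded event is a
    (timestamp in s, x, y) tuple per image trial):</p>
    <preformat>
START_SKIP = {"emAT": 0.5, "mcAT": 0.8}  # initial window excluded, in s
MIN_CLICK_RATE = 1.5                     # required clicks per second (mcAT)
TRIAL_DUR = 5.0                          # presentation time per image, in s

def clean_trial(events, method):
    """Drop the initial orientation window; flag slow mcAT recordings."""
    kept = [e for e in events if e[0] >= START_SKIP[method]]
    if method == "mcAT":
        rate = len(kept) / (TRIAL_DUR - START_SKIP["mcAT"])
        if rate &lt; MIN_CLICK_RATE:
            return None                  # recording excluded from evaluation
    return kept
    </preformat>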
    
    <p>Click test. Subjects view a series of stimuli on a screen for 5–7
    seconds each – usually advertising materials, website or shelf views,
    mostly in combination with distractors that are shown before and
    after the test material. They are prompted to click quickly, and
    without thinking, on those places that they consider attractive.
    This measurement is carried out as a “click test” during
    an online survey, which can be performed by an online panel
    of recruited volunteers on their own computers at home. As the
    subjects are required to perceive the mouse as an “extension” of the
    eye, a short training exercise is required. This
    click training is an interactive and playful method based upon five
    tasks, during which the subjects get accustomed to clicking
    continuously fast – at least 1–2 times per second – while they
    track certain image regions with the mouse. Meanwhile, they
    receive real-time feedback on their click behavior. Only
    subjects who pass all tasks, even if several attempts are needed,
    can take part in the subsequent click test. The aim of the training
    is to teach subjects to click as spontaneously and unconsciously as
    they direct their gaze, so that a fixation and a mouse click become
    equivalent.</p>
    
    <p>The data collected from the click test and the survey are stored
    on a server, and MediaAnalyzer uses special software to
    statistically analyze and interpret the data. First, data quality is
    verified and, if necessary, the data are cleansed. Despite the
    click training, a few subjects fail to maintain the clicks throughout
    the test or show a lack of motivation by clicking only on the same
    spot. Data from these subjects can be detected and filtered by
    algorithms (<xref ref-type="bibr" rid="b39 b40 b62">39, 40, 62</xref>). During the evaluation, a click is taken as
    a fixation. It is analyzed similarly to an emAT, based upon
    semantically derived, predefined areas of interest (AOIs) – for an ad,
    e.g. based on the logo or the name – and on the average results of
    all subjects.</p>
    
    <p>Typical parameters are:</p>
    <list list-type="roman-lower">
      <list-item>
        <p>Time to contact: the time to the first click in an AOI.</p>
      </list-item>
      <list-item>
        <p>Percent attention: the share of clicks in an AOI relative to
        the total number of clicks on the stimulus; corresponds
        approximately to the relative fixation frequency; hereafter
        referred to as the attention value.</p>
      </list-item>
      <list-item>
        <p>Percent contact: the relative proportion of subjects who
        clicked at least once in an AOI; hereafter referred to as the
        contact value.</p>
      </list-item>
    </list>
  </sec>
  <sec id="S2l">
    <title>Statistical analysis</title>
    <p>Using IBM SPSS, various summary measures were calculated to
    determine the relationship between the converted data from mcAT and
    emAT: the Pearson product-moment correlation coefficient, the area
    under the curve (AUC) and the receiver operating characteristic (ROC)
    curve.</p>
    
    <p>Correlation analysis and the calculation of the ROC curve are the
    most commonly used methods for analyzing these data (<xref ref-type="bibr" rid="b53 b56 b63">53, 56, 63</xref>).
    The use of two or more evaluation methods is recommended to ensure
    that the observed effects are independent of the summary measure
    (<xref ref-type="bibr" rid="b56">56</xref>).</p>

<fig id="fig02" fig-type="figure" position="float">
					<label>Figure 2.</label>
					<caption>
						<p>ROC curve example (<xref ref-type="bibr" rid="b5 b6">5, 6</xref>).</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-11-06-d-figure-02.png"/>
				</fig>    
    
    <p>The <italic>Pearson product-moment correlation
    coefficient</italic> provides information about the strength and
    direction of the linear relationship between two interval-scaled
    variables, in this case between the pairs of values of the two
    samples (emAT and mcAT parameters per grid field). The correlation
    coefficient r can take values between -1 and +1, specifying the
    strength and the sign of the relationship; if r = 0, no linear
    relationship between the variables is evident.</p>
    
    <p>Correlations with r ≥ 0.5 are considered high, and r ≥ 0.7 very
    high (Cohen, 1988).</p>
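    
    <p>As an illustration of this first summary measure, the following
    Python sketch correlates the paired per-grid values of the two
    methods (64 images x 35 fields = 2240 value pairs per parameter).
    The arrays here are random placeholders for the real attention
    values, not data from this study.</p>
    <preformat>
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
emat = rng.random(2240)                     # emAT attention values per field
mcat = 0.8 * emat + 0.2 * rng.random(2240)  # correlated dummy mcAT values

r, p = pearsonr(emat, mcat)                 # r in [-1, 1], two-sided p-value
print(f"r = {r:.2f}, p = {p:.3g}")          # r >= 0.5 high, r >= 0.7 very high
    </preformat>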
  </sec>
  <sec id="S2m">
    <title>Receiver operating characteristic - ROC curve</title>
    <p>The ROC curve originated in signal detection theory and is
    used in medicine as a tool for the evaluation of diagnostic
    tests. Applied to the two methods of attention measurement, the
    ROC curve or AUC (area under the curve) measures how well one measure
    (the mcAT parameters) predicts the occurrence or absence
    of the variable of the other method (the emAT parameters). There are
    four possible prediction outcomes: true positive, false positive,
    true negative and false negative.</p>
    
    <p>The ROC curve is created by plotting the <italic>true
    positive rate</italic> (known as the <italic>“hit rate” or
    “sensitivity”</italic>) against the <italic>false
    positive rate</italic> (also known as <italic>“one minus
    specificity”</italic>) (Fig. 2), wherein the threshold of the
    classifier (the AT parameter) is continuously varied. The closer the
    curve lies to the diagonal, the more the true positive rate
    corresponds to the false positive rate – which is what would be
    expected of a random process. Thus, the greater the area under the
    curve (AUC), the better the prediction, and thus the agreement
    between the two variables (<xref ref-type="bibr" rid="b64 b65">64, 65</xref>).</p>
    
    <p><italic>Sensitivity</italic>, i.e. the probability of detection (see
    the two refs. above), measures the <italic>proportion</italic> of
    positives that are correctly identified as such (e.g. the percentage
    of mouse clicks that resemble <italic>true eye fixations</italic>).
    <italic>Specificity</italic> (also called the true negative rate)
    measures the proportion of negatives that are correctly identified as
    such (e.g. the <italic>proportion</italic> of mouse clicks correctly
    classified as not resembling eye fixations). Thus,
    <italic>sensitivity</italic> quantifies the avoidance of false
    negatives, as specificity does for false positives. For any test,
    there is usually a trade-off between these measures, which can be
    represented graphically as a receiver operating
    characteristic, or <italic>ROC curve</italic> (Fig. 2).</p>
    
    <p>A perfect predictor would be described as 100% sensitive and 100%
    specific, but any predictor will possess a minimum
    error bound (the Bayes error rate). The ROC curve is the
    <italic>sensitivity as a function of fall-out</italic>, i.e. of the
    proportion of non-relevant measures that are retrieved out of all
    non-relevant measures available. In general, if the probability
    distributions for both detection and false alarm are known, the ROC
    curve can be generated by plotting the cumulative distribution
    function of the detection probability (the area under the probability
    distribution up to the discrimination threshold) on the y-axis
    versus the cumulative distribution function of the false-alarm
    probability on the x-axis.</p>
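    
    <p>A sketch of this construction in Python, sweeping the classifier
    threshold and plotting the resulting rate pairs (matplotlib assumed;
    the labels and scores are dummy data, not measurements from this
    study):</p>
    <preformat>
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
labels = rng.random(500) &lt; 0.5             # binary "seen"/"not seen" labels
scores = labels + rng.normal(0, 0.8, 500)  # noisy classifier scores

thresholds = np.sort(scores)[::-1]         # sweep from strict to lenient
tpr = [(scores[labels] >= t).mean() for t in thresholds]   # sensitivity
fpr = [(scores[~labels] >= t).mean() for t in thresholds]  # 1 - specificity

plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], "--")             # chance diagonal
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.show()
    </preformat>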
    
    <p>In the medical field, the conventional divisions for assessing test
    accuracy are: an AUC value ≥ 0.7 is considered acceptable, ≥ 0.8
    good, and ≥ 0.9 <italic>excellent</italic> (<xref ref-type="bibr" rid="b65">65</xref>).</p>
    
    <p>Another method is to use the predictive power of
    a computer model as a reference value, whereby the inter-subject
    variance or inter-subject homogeneity is employed to validate emAT
    (<xref ref-type="bibr" rid="b53 b66">53, 66</xref>). For this purpose, the AUC for predicting the emAT
    data of one half of the subjects from the data of the other half
    is determined. The higher the value, the lower the variance – or the
    higher the homogeneity – among the emAT subjects. This value is
    considered the theoretically achievable AUC, i.e. the upper limit for
    a computer model predicting fixations (<xref ref-type="bibr" rid="b63 b66">63, 66</xref>).</p>
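    
    <p>A minimal Python sketch of this split-half homogeneity check
    (with random placeholder grid maps instead of real per-subject data;
    the binarization threshold is chosen at the median here only so that
    both classes occur in the demo):</p>
    <preformat>
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
maps = rng.random((10, 35))            # hypothetical per-subject grid maps

half_a = maps[:5].mean(axis=0)         # pooled map, first half of subjects
half_b = maps[5:].mean(axis=0)         # pooled map, second half
seen = (half_a >= np.median(half_a)).astype(int)  # binarised "ground truth"
auc = roc_auc_score(seen, half_b)      # achievable (upper-limit) AUC
print(f"split-half AUC = {auc:.2f}")
    </preformat>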
    
    <p>To address the <italic>first hypothesis</italic>, i.e. to
    determine the relationship between the two samples, the correlation
    coefficient (correlating the pairs of measurements per grid field) and
    the AUC were calculated based on the total stimulus material. A high
    positive correlation between the mcAT and emAT values is inferred for
    both parameters – <italic>contact</italic> and <italic>attention</italic>
    – if the AUC is significantly above the chance level of 0.5.</p>
    
    <p>To determine the ROC curve using the present emAT and mcAT data,
    it was necessary to clarify which contact or attention value of an
    emAT sample was to be interpreted as “seen”, since the ROC calculation
    uses only the two classes “seen” and “not seen”.</p>
    
    <p>Therefore, <italic>three possible limits</italic> were initially
    set per parameter and the curves for all three were calculated and
    compared.</p>
    
    <p>As no reference values for this problem were found in the
    literature, the limits were set primarily on the basis of theoretical
    considerations about the respective central test limits (contact:
    0.3; attention: 0.05) and used for the calculation of the AUC values
    of the individual categories. As the measured values were not normally
    distributed, and thus a prerequisite for the significance
    tests for correlations was not met (<xref ref-type="bibr" rid="b67">67</xref>), the confidence intervals of
    the correlation coefficients were determined using bootstrapping
    (<xref ref-type="bibr" rid="b68">68</xref>). Bootstrapping is a resampling technique used to obtain
    estimates of summary statistics; it can refer to any test or metric
    that relies on random sampling with replacement. Bootstrapping
    allows measures of accuracy to be assigned to sample estimates and
    permits estimation of the sampling distribution of almost any
    statistic.</p>
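    
    <p>The two steps just described (binarising the emAT values at the
    chosen limit to obtain an AUC, and bootstrapping a confidence
    interval for the correlation coefficient) can be sketched in Python
    as follows, with dummy data as in the earlier sketch; only the limit
    and the number of value pairs match the figures reported above.</p>
    <preformat>
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
emat = rng.random(2240)
mcat = 0.8 * emat + 0.2 * rng.random(2240)

ATTENTION_LIMIT = 0.05                  # emAT value counted as "seen"
seen = (emat >= ATTENTION_LIMIT).astype(int)
auc = roc_auc_score(seen, mcat)         # mcAT value used as classifier score

boot_r = []                             # bootstrap: resample with replacement
n = len(emat)
for _ in range(10000):
    idx = rng.integers(0, n, n)
    boot_r.append(pearsonr(emat[idx], mcat[idx])[0])
lo, hi = np.percentile(boot_r, [2.5, 97.5])
print(f"AUC = {auc:.2f}, 95% CI for r: [{lo:.2f}, {hi:.2f}]")
    </preformat>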
  </sec>
  <sec id="S2n">
    <title>Fixation/Click rates, scatter plots and difference
    histograms</title>

    <p>Figure 3 shows the attention values of mcAT as a function of emAT
    for all pictures. Graphically, it demonstrates a close relationship
    between the two methods.</p>

<fig id="fig03" fig-type="figure" position="float">
					<label>Figure 3.</label>
					<caption>
						<p>Attention values of mcAT as a function of emAT for all pictures.</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-11-06-d-figure-03.png"/>
				</fig>    
  </sec>
  </sec>  
  <sec id="S3">
    <title>Results</title>    

    
    <p>To address the second hypothesis, the two summary measures were
    calculated and compared for each category of images. In particular,
    higher agreement – i.e. a higher correlation coefficient and a higher
    AUC – between mcAT and emAT was postulated in the main category
    “simple” than in the adjacent category “complex”, as well as in
    the “human/animal” category compared to the other categories of the
    same level. In order to check whether the obtained correlation
    coefficients of the various categories differ significantly from
    each other, we used the online calculator for significance testing of
    correlations by Wolfgang Lenhard &#x26; Alexandra Lenhard
    (<xref ref-type="bibr" rid="b69">69</xref>), specifically the test for the comparison of two correlation
    coefficients from independent samples. The average attention value (n
    = 10) in the emAT test ranged between 0.00 and 0.44; this means
    that one single grid field received up to 44% of all fixations while an
    image was being viewed. The highest attention value in the
    mcAT test was 0.28, i.e. one single grid field received up to 28% of all
    clicks. With respect to the contact values, in both experiments all
    values were between 0, i.e. fields that nobody paid any attention
    to, and 1, i.e. fields that all subjects noticed. The distribution of
    emAT and mcAT value pairs per grid field is graphically depicted in
    Figure 3 by means of a scatter plot.</p>
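
    <p>The comparison of two correlation coefficients from independent
    samples follows the Fisher r-to-z approach; a minimal Python sketch
    of the test that the cited online calculator performs (the numbers in
    the example are illustrative, not results from this study):</p>
    <preformat>
import numpy as np
from scipy.stats import norm

def compare_correlations(r1, n1, r2, n2):
    """Two-sided p-value for H0: the two independent correlations are equal."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)    # Fisher r-to-z transform
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # SE of the z difference
    z = (z1 - z2) / se
    return 2 * norm.sf(abs(z))

# e.g. comparing a "simple" with a "complex" category correlation
print(compare_correlations(0.80, 1120, 0.65, 1120))
    </preformat>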

  <sec id="S3a">
    <title>First Research Question - The ROC curve results</title>
    <p>Across all image categories, the correlation amounted to r =
    0.76 (attention) and r = 0.71 (contact). Both correlations are
    highly significant and greater than zero (p &#x3C; 0.001). The
    confidence intervals determined by bootstrapping (0.72 to 0.78
    (attention) and 0.68 to 0.74 (contact)) also indicate that, in our
    sample, the correlation coefficients can be classified as high to
    very high.</p>
    
    <p>Figure 4 shows the ROC curves for the entire stimulus material
    with three different thresholds. For the attention value at the
    selected limit of 0.05, the AUC is 0.88 with a confidence
    interval of 0.87 to 0.90. For the contact value at the limit of 0.3,
    the AUC is 0.85 with a confidence interval of 0.84 to 0.87.
    Both values thus differ significantly from 0.5 (p &#x3C; 0.001).</p>

    <table-wrap id="t01" position="float">
					<label>Table 1.</label>
					<caption>
						<p>Correlation coefficients of the between-subjects (1st row) compared to the within-subject correlations (2nd and 3rd rows).</p>
					</caption>
					<graphic id="graph09" xlink:href="jemr-11-06-d-table-01.png"/>
					</table-wrap>

    <fig id="fig04" fig-type="figure" position="float">
					<label>Figure 4.</label>
					<caption>
						<p>ROC curves with different limits. Left: attention values; right: contact values.</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-11-06-d-figure-04.png"/>
				</fig>
    
    <p>The correlation coefficient and the AUC described above for the
    between-method prediction of fixations are 0.88 (attention) and
    0.85 (contact), whereas the correlation within the emAT sample amounts
    to r = 0.68 (attention) and r = 0.66 (contact). This means there is a
    closer link between the mcAT and the emAT data (n = 10) than between
    the emAT data of one sub-group and those of the other (n = 5) (see
    Table 1).</p>
    
    <p>The inter-subject variance of the emAT data was determined through
    a within-subject analysis; the resulting correlation coefficients and
    AUC values were lower than the results from the between-subject
    design.</p>
    
    <p>The finding that mouse clicks were more similar to eye fixations
    than eye fixations were to each other seems hard to understand at
    first glance. It may be a consequence of the way we generate eye
    movements. Eye movements are very fast in three respects: we perform
    many movements per second, eye movements are generated with a short
    response latency, and saccades are the fastest movements we can
    generate. This may have the effect that eye movements are
    somewhat inexact more often than mouse clicks are. We can observe the
    inaccuracy of eye movements in any eye-tracking recording: fixations
    on a given target are located in an area around the target of
    about one degree of visual angle. In comparison, mouse clicks seem
    to have a higher accuracy than eye fixations.</p>
    
    <p>Statistically speaking, the inaccuracy of eye movements leads to
    noise in the recorded data. When we compare eye fixation data with
    eye fixation data, we compare two noisy sources. In contrast, when
    we compare gaze data with click data, we compare one noisier and one
    less noisy source. This explains why the comparison of two sets of
    eye fixation data yields a larger difference than the comparison of
    eye fixation data with click data.</p>
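    <p>This attenuation effect is easy to demonstrate numerically. The
    toy simulation below (noise magnitudes chosen arbitrarily, not
    fitted to our data) draws a latent attention signal per grid field,
    derives two equally noisy “eye” measurements and one less noisy
    “mouse” measurement, and shows that the eye-eye correlation falls
    below the eye-mouse correlation:</p>
    <preformat>
import numpy as np

rng = np.random.default_rng(1)
n = 10_000                         # grid fields
latent = rng.normal(size=n)        # true attention per field

eye_noise, mouse_noise = 1.0, 0.5  # assumed noise magnitudes
eye1 = latent + eye_noise * rng.normal(size=n)
eye2 = latent + eye_noise * rng.normal(size=n)
mouse = latent + mouse_noise * rng.normal(size=n)

print(np.corrcoef(eye1, eye2)[0, 1])   # two noisy sources: about 0.50
print(np.corrcoef(eye1, mouse)[0, 1])  # one noisier, one less: about 0.63
</preformat>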
  </sec>
  <sec id="S3b">
    <title>Second Research Question – The picture categories and emAT
    vs. mcAT</title>
    <p>The correlation coefficients of the individual images showed a
    wide scatter depending upon the picture category (see Fig. 1 for
    reference). The picture with the highest obtained correlation (r =
    0.95 (<italic>attention</italic>) and r = 0.94
    (<italic>contact</italic>)) belonged to the category
    <italic>inanimate artificial simple</italic> (InAS) (Fig. 5).</p>

    <fig id="fig05" fig-type="figure" position="float">
					<label>Figure 5.</label>
					<caption>
						<p>Comparison of the viewing data (left) and click data (right) for image InAS #5 (inanimate, artificial, simple).</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-11-06-d-figure-05.png"/>
				</fig>
    
    <p>The picture with the lowest correlation for both parameters (r =
    0.22 (<italic>attention</italic>) and r = 0.17
    (<italic>contact</italic>)) was one of the eight images from the
    category <italic>inanimate natural complex</italic> (InNC, Fig.
    6).</p>

    <fig id="fig06" fig-type="figure" position="float">
					<label>Figure 6.</label>
					<caption>
						<p>Comparison of the viewing data (left) and click data (right) for image InNC #8 (inanimate, natural, complex).</p>
					</caption>
					<graphic id="graph06" xlink:href="jemr-11-06-d-figure-06.png"/>
				</fig>
    
    <p>Of the 64 images in total, seven (<italic>attention</italic>)
    and four (<italic>contact</italic>) showed a correlation of only
    r &#x3C; 0.5, i.e. a low effect size. On the other hand, six
    (<italic>attention</italic>) and two (<italic>contact</italic>)
    showed a correlation of r &#x3E; 0.9.</p>
    
    <p>Similar results were obtained when evaluating the AUC values of
    the individual super categories, represented in Figure 7.</p>

    <fig id="fig07" fig-type="figure" position="float">
					<label>Figure 7.</label>
					<caption>
						<p>Comparison of the AUC values. Left: basic categories, attention; right: super categories, 2nd level, attention.</p>
					</caption>
					<graphic id="graph07" xlink:href="jemr-11-06-d-figure-07.png"/>
				</fig>
    
    <p>All AUC values differ significantly from the chance level of 0.5,
    and all categories lie at least in an acceptable, almost good range
    (<xref ref-type="bibr" rid="b65">65</xref>).</p>
    
    <p>However, this analysis also reveals large, significant
    differences between the super categories. The image category with
    the highest values was AHS (<italic>animate human simple</italic>),
    with an AUC of 0.95 (<italic>attention</italic>) and 0.94
    (<italic>contact</italic>).</p>
    
    <p>With an AUC of 0.79, INC (<italic>inanimate natural
    complex</italic>) is the category with the lowest correspondence
    between emAT and mcAT data; its <italic>attention</italic> values
    are in the medium range (0.5). For <italic>contact</italic>, this
    applies to the category inanimate artificial complex (IAC), with an
    AUC of 0.771, similar to the AUC of category INC (0.774).</p>
    
    <p>Generally, attention showed slightly higher values than contact.
    The “human/animal” category consistently showed higher values than
    neighboring categories. Most of the significance tests between the
    different categories yielded similar results for attention and
    contact. Both the attention and the contact values clearly show the
    difference between the correlations for the AHS (animate human
    simple) category and almost all other categories. Both also differ
    very clearly between the “simple” and “complex” image categories.
    Furthermore, for the “human/animal” super category, the differences
    with respect to all three categories on the same level were highly
    significant, as was the difference between the “animate” and
    “inanimate” categories.</p>
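    <p>The category comparisons reported above rest on the test for two
    correlation coefficients from independent samples
    (<xref ref-type="bibr" rid="b69">69</xref>), which transforms each r with Fisher’s r-to-z before
    comparing. A minimal sketch of the standard formula (the sample
    sizes n1 and n2 are the numbers of value pairs behind each
    coefficient):</p>
    <preformat>
import math
from statistics import NormalDist

def compare_independent_r(r1, n1, r2, n2):
    """Two-sided test of whether two independent correlations differ
    (Fisher r-to-z transform)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p
</preformat>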
    
    <p>We conclude with the observation that the significant similarity
    of these data convincingly demonstrates the close link and
    resemblance between the mcAT and emAT methods in searching,
    recognizing and perceiving the images shown.</p>
  </sec>
</sec>
<sec id="S4">
  <title>Discussion</title>
  <p>We investigated the conformity of the mcAT measurement data
  (clicks, n = 10) with the emAT measurement data (fixations, n = 10),
  based on 64 photographs that were viewed by our subjects. These images
  were divided into the eight categories of our classification scheme.
  The comparison of click and fixation rates demonstrated that clicks
  yielded highly similar results to eye movements within our paradigm.
  In accordance with suggestions in the literature, clicks were on
  average slightly slower and less numerous than fixations.</p>
  
  <p>To what extent do fixations and clicks match each other, and are
  there differences depending upon the stimulus material?</p>
  
  <p>We found a highly positive and significant correlation between mcAT
  and emAT data. The AUC values were significantly (p &#x3C; 0.01) above
  the chance level of 0.5, with correlations of r = 0.76 for attention
  values and r = 0.71 for contact values. Inter-subject variance of the
  emAT data was determined through a within-subject analysis.
  Correlation coefficients as well as AUC values were below the results
  from the between-subject design. Due to the small number of
  participants (5 vs. 5 in the within-subject design compared to 10 vs.
  10 in the between-subject design), the variance was high.</p>
  
  <p>This was comparable to other studies with a higher number of
  subjects: Rajashekar, van der Linde, Bovik and Cormack (<xref ref-type="bibr" rid="b70">70</xref>) reported a
  larger inter-subject variance, with r = 0.75 compared to our r = 0.68
  (attention) and 0.66 (contact). Their study was based on an emAT
  sample of n = 29 and on stimuli excluding images with top-down
  features.</p>
  
  <p>Interestingly, using a calibration function built by an algorithm
  (<xref ref-type="bibr" rid="b71">71</xref>), it was possible to predict where a user will click with a
  mouse: the accuracy of the prediction was about 75%, which points to a
  high correlation, as also shown here.</p>
  
  <p>Our second hypothesis was also confirmed: the correlation
  coefficients and AUC values of the individual categories differed from
  each other. Both the eight basic categories and the super categories
  showed significant differences in many comparisons between categories
  of the same level. As expected, we found that the main “simple”
  category showed significantly higher correlation values (r = 0.81) for
  attention than the “complex” category (r = 0.66).</p>
  
  <p>The same was true for the “human/animal” super category (r = 0.82)
  with respect to the other three categories of the second level,
  “plants” (r = 0.74), “natural” (r = 0.74) and “artificial” (r = 0.72).
  This is important evidence for the validity of the mcAT procedure. It
  has been demonstrated by many researchers that viewers prefer images
  of humans and animals. The preference for pictures of humans and
  animals, especially of simply structured faces, is also reflected in
  the eight fundamental categories: the AHS category shows by far the
  highest correlation (0.86 for attention) and an excellent AUC of 0.95,
  and is thus significantly superior to all others. The lowest
  correlation between mcAT and emAT data is found for images
  representing complex natural structures, but even there the linear
  correlation remains high. As print ads and websites often depict
  people, with clearly defined AOIs such as title, motif, slogan and
  brand logo, we conclude that the mcAT procedure is highly suitable for
  measuring attention in this type of stimuli.</p>
  
  <p>It is interesting to note that our results are in line with most
  previously published reports on our categories, animate
  (<italic>human/animal, plants</italic>) and inanimate
  (<italic>artificial, natural</italic>), as we show in the following
  descriptions.</p>
  
  <p>The face recognition system is capable of extremely fine
  within-category judgments to recognize and discriminate between faces
  and different facial expressions displayed by the same face (<xref ref-type="bibr" rid="b72 b73">72, 73</xref>).
  To support this ability, it has been proposed that a separate system
  evolved to mediate face recognition.</p>
  
  <p>These results indicated the existence of an experience-independent
  ability for face processing as well as an apparent sensitive period
  during which a broad but flexible face prototype develops into a
  concrete one for the efficient processing of familiar faces (<xref ref-type="bibr" rid="b74">74</xref>).</p>
  
  <p>A cortical area selective for visual processing of the human body
  was described by Downing, Jiang, Shuman and
  Kanwisher (<xref ref-type="bibr" rid="b75">75</xref>). Despite extensive evidence for regions of human visual
  cortex that respond selectively to faces, few studies have considered
  the cortical representation of the appearance of the rest of the human
  body. They presented a series of functional magnetic resonance imaging
  (fMRI) studies revealing substantial evidence for a distinct cortical
  region in humans that responds selectively to images of the human
  body, as compared with a wide range of control stimuli. This region
  was found in the lateral occipitotemporal cortex in all subjects
  tested (n = 19) and apparently reflected a specialized neural system
  for the visual perception of the human body.</p>
  
  <p>Several lines of evidence suggest that animate entities have a
  privileged processing status over inanimate objects – in other words,
  that animates have priority over inanimates. The animate/inanimate
  distinction parallels the distinction between “living” and “nonliving”
  things that has been postulated to account for selective deficits in
  patients (for a review, see Capitani, Laiacona, Mahon, &#x26;
  Caramazza (<xref ref-type="bibr" rid="b76">76</xref>)). Animates belong to the general category of living
  things.</p>
  
  <p>Studies by Nairne and co-workers (<xref ref-type="bibr" rid="b77 b78">77, 78</xref>) revealed better
  recall for words denoting animate than inanimate items, which was also
  true with the use of pictures. These findings provide further evidence
  for the functionalist view of memory championed by these authors.</p>
  
  <p>Evidence from neuropsychology suggests that the distinction between
  animate and inanimate kinds is fundamental to human cognition.
  Previous neuroimaging studies have reported that viewing animate
  objects activates ventrolateral visual brain regions, whereas
  inanimate objects activate ventromedial regions.</p>
  <sec id="S4a">
    <title>Comparison of eye and hand movements from a
    neuro-bioengineering perspective</title>
    <p>A saccade made to a target that appears eccentric to the point of
    fixation is sometimes called a ‘reflexive’ (or ‘stimulus-elicited’)
    saccade in contrast to those made in situations that depend more
    heavily upon voluntary (or ‘endogenous’) cognitive control processes
    (for example when directed by a simple instruction “look to the
    left”). Most saccades are essentially voluntary in nature, as an
    observer can always decide not to move the eyes. Also, if the time
    and place of a target’s appearance can be predicted, an anticipatory
    saccade often occurs before the target itself appears, or so soon
    after it that visual guidance cannot have taken place.</p>
    
    <p>When reaching for targets presented in peripheral vision, the
    eyes generally begin moving before the hand (<xref ref-type="bibr" rid="b79 b80 b81 b82 b83 b84">79, 80, 81, 82, 83, 84</xref>).
    This is the case because much of the delay in hand movement onset,
    relative to eye movement onset, can be attributed to the greater
    inertia of the arm. Recent studies demonstrate that the motor
    commands underlying coordinated eye and hand movements appear to be
    issued in close temporal proximity, and that the commands for hand
    movement may even precede those for eye movement (<xref ref-type="bibr" rid="b80 b85 b86">80, 85, 86</xref>).</p>
    
    <p>Furthermore, hand movement can influence saccadic initiation.
    Saccadic reaction time (SRT) is greater when eye movement is
    accompanied by hand movement than when the eyes move alone
    (<xref ref-type="bibr" rid="b87 b88">87, 88</xref>), and SRT and hand reaction time (HRT) both increase when
    reaching for targets in contralateral versus ipsilateral space (<xref ref-type="bibr" rid="b89">89</xref>).
    In addition, in eye-hand coordination saccades are faster when
    accompanied by a coordinated arm movement (<xref ref-type="bibr" rid="b90">90</xref>). Because of the large
    variation in the response characteristics of both the hand and the
    eye system across subjects, the two systems have been compared by
    measuring their responses simultaneously (<xref ref-type="bibr" rid="b49 b88">49, 88</xref>). When eye and
    hand responses were recorded simultaneously to random steps and to
    predictive regular steps, the eye showed shorter response times than
    the hand (Fig. 8).</p>

    <fig id="fig08" fig-type="figure" position="float">
					<label>Figure 8.</label>
					<caption>
						<p>The dependence of median response time (i.e. latency) on the frequency of repetitive square-wave patterns (Stark, 1968). Histograms of response-time delays for hand and eye. Top: predictive square waves at 1.2 cps. Bottom: random target.</p>
					</caption>
					<graphic id="graph08" xlink:href="jemr-11-06-d-figure-08.png"/>
				</fig>
    
    <p>The eye muscles have considerable power with respect to their
    constant load, the eyeball, and show faster rise times than the
    hand, especially when tracking rapidly alternating signals. With
    random targets, the hand response lags behind the eye response due
    to the eye’s considerably smaller load. At moderate frequencies of
    0.7 to 1.0 cps the hand develops prediction faster and to a greater
    extent than the eye. At higher frequencies of &#x3E;1.1 cps the hand
    shows considerable prediction, while the median eye response time
    starts to lag despite the higher frequency characteristics of the
    actual movement dynamics of the eye. In general, at low frequencies
    there is some correlation evident between eye and hand response.</p>
    
    <p>Obviously, eye movement is not necessary for hand movement.
    Conversely, the physical movement of the hand appears to help the
    eye movement system: adequate eye tracking may occur at
    comparatively high frequencies if the hand is tracking, and may not
    occur if the hand is still. When hand tracking that had been
    improving eye performance was stopped, eye performance deteriorated
    significantly (<xref ref-type="bibr" rid="b49 b88">49, 88</xref>).</p>
    
    <p>With respect to our paradigm, this means that the mcAT method
    might support the sequence of eye fixations that must go along with
    the mouse clicks when viewing and perceiving the test images. This
    is particularly true in test settings where sensory-motor actions
    tend to be time-optimal due to limited time, as was the case in our
    test set with its time limit of 5 sec.</p>
    
    <p>Interestingly, Bednarik, Gowases, &#x26; Tukiainen (<xref ref-type="bibr" rid="b91">91</xref>) showed
    that users with gaze-augmented interaction outperformed two other
    groups – using mouse or dwell-time interaction – on several
    problem-solving measures: they committed fewer errors, were more
    immersed, and had a better user experience. As mentioned, the slower
    manual action of the hand/finger movement when activating the mouse
    may well be due to physiological properties rather than solely to
    the hidden cognitive processes of augmented gaze in problem solving,
    as the authors speculate. A combined eye and mouse interaction might
    show an even better result, since coordinated hand and eye movements
    can act faster and more precisely in many situations (<xref ref-type="bibr" rid="b88">88</xref>).</p>
    
    <p>In continuation of the early results by Stark (<xref ref-type="bibr" rid="b49">49</xref>), Helsen,
    Elliott, Starkes, and Ricker (<xref ref-type="bibr" rid="b92">92</xref>) studied eye and hand movement
    coordination in goal-directed movements. They found a remarkable
    temporal relationship between the arrival of the eye and of the hand
    on the target position. Because the point of gaze always arrives on
    the target area first, at roughly 50% of the hand response time,
    there is ample opportunity for the eye to provide extra-retinal
    information on target position, either through oculomotor efference
    or through visual and proprioceptive feedback resulting from the
    large primary saccade. They demonstrated an invariant relationship
    between the spatial-temporal characteristics of eye movements and
    hand kinematics in goal-directed movement that is optimal for the
    pickup of visual feedback. Interestingly, the natural coupling
    between eye and hand movements remains intact even when hand
    movements are merely imagined rather than physically executed. So it
    appears that the bioengineering and neurophysiological literature
    indeed provides a solid background for the highly correlated
    relationship of eye and mouse movements first described by Egner and
    Scheier in 2002.</p>
    
    <p>Bieg, Chuang, Fleming, Reiterer and Bülthoff (<xref ref-type="bibr" rid="b93">93</xref>) showed that,
    when the target location was unknown (quasi-random), the eyes led
    the mouse by 300 ms on average. When the approximate location of the
    target was known (i.e. predictive), the cursor often led gaze in
    acquiring the target, and fixations on the target occurred later in
    the pointing process. This again corresponds to the early results of
    Stark and Navas, inasmuch as the degree of target predictability
    influences the result.</p>
    
    <p>Knowledge about the target location is likely to be particularly
    important in non-laboratory settings. This was shown by Liebling and
    Dumais (<xref ref-type="bibr" rid="b94">94</xref>), who presented an in-situ study of gaze and mouse
    coordination as participants went about their normal activities
    before and after clicks. Analyzing the coordination between gaze and
    mouse, they showed that gaze leads the mouse about two thirds of the
    time, though not by as much as previously reported, and in ways that
    depend on the type of target and on familiarity with the
    application.</p>
    
    <p>Rodden, Fu, Aula and Spiro (<xref ref-type="bibr" rid="b95">95</xref>) tracked subjects’ eye and mouse
    movements and described three different types of eye-mouse
    coordination patterns. However, they found that the users were not
    easy to classify: each one seemed to exhibit the three patterns to
    varying degrees. There was also substantial variation between users
    in high-level measures: the mean distance between eye and mouse
    ranged from 144 to 456 pixels, and the proportion of mouse data
    points at which the eye and mouse were in the same region ranged
    from 25.8% to 59.9%. This corresponds with the results of Huang,
    White and Dumais (<xref ref-type="bibr" rid="b96">96</xref>), who found that during Web search eye and
    mouse were most often correlated. The average distance between eye
    and mouse was 178 px, with the differences in the x-direction being
    much larger (50 px) than in the y-direction (7 px).</p>
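    <p>Such alignment measures are straightforward to compute from
    time-aligned samples. A sketch (our illustration; region_of is a
    hypothetical helper mapping a screen coordinate to a page
    region):</p>
    <preformat>
import numpy as np

def gaze_mouse_alignment(gaze_xy, mouse_xy, region_of):
    """Mean Euclidean gaze-cursor distance (px) and the share of samples
    in which gaze and cursor fall into the same page region."""
    gaze_xy, mouse_xy = np.asarray(gaze_xy), np.asarray(mouse_xy)
    dist = np.linalg.norm(gaze_xy - mouse_xy, axis=1)
    same = np.mean([region_of(g) == region_of(m)
                    for g, m in zip(gaze_xy, mouse_xy)])
    return dist.mean(), same
</preformat>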
    
    <p>Chen et al. (<xref ref-type="bibr" rid="b46">46</xref>) likewise reported that, during certain
    subtasks in web browsing, mouse and gaze movements were very often
    correlated. They found that the average distance between mouse and
    gaze was 90 pixels during transitions from one area of interest
    (AOI) to another, and that 40% of the distances were closer than 35
    pixels.</p>
    
    <p>In summary, since gaze provides a measure of attention, knowing
    when mouse and gaze are aligned, i.e. highly correlated, confirms
    the usability of the mouse as an indicator of attention, as shown by
    Egner and Scheier (<xref ref-type="bibr" rid="b62">62</xref>).</p>
  </sec>
  <sec id="S4b">
    <title>Limitations of emAT and mcAT</title>
    <p>Both methods used in this study, emAT and mcAT, were geared
    towards determining the respondents’ current attention location.
    However, as attention is an internal process in the brain, we
    measure external responses (gaze direction, mouse location) and use
    the measured data to infer the actual attention location. These
    external responses, as well as their measurements, are subject to
    noise (Tab. 2) (Appendix 2).</p>

        <table-wrap id="t02" position="float">
					<label>Table 2.</label>
					<caption>
						<p>Comparison between the two attention measurements used in the present article. See also Appendix 3.</p>
					</caption>
					<graphic id="graph10" xlink:href="jemr-11-06-d-table-02.png"/>
					</table-wrap>
    
    <p>Also, past research has found a correlation between gaze and
    cursor positions (<xref ref-type="bibr" rid="b46 b95 b96 b97 b98">46, 95, 96, 97, 98</xref>) and has shown that cursor
    movements can be useful for determining relevant parts of a Web
    page, with varying degrees of success (<xref ref-type="bibr" rid="b96 b99 b100 b101">96, 99, 100, 101</xref>). Cursor
    interaction spans a variety of behaviors, including reading,
    hesitating, highlighting, marking, and actions such as scrolling and
    clicking. Huang et al. (<xref ref-type="bibr" rid="b96">96</xref>) showed
    that user, time, and (to a lesser extent) search task each
    contribute to the variation in gaze-cursor alignment. Gaze and
    cursor positions are also better aligned when the gaze position is
    compared to a future cursor position. Furthermore, by distinguishing
    between five different cursor behaviors (inactive, examining,
    reading, action, and click), one might get a better idea of the
    strength of alignment. Identifying these behaviors was beyond the
    scope of this paper; for further discussion of these problems see
    Huang et al. (<xref ref-type="bibr" rid="b96">96</xref>). Demšar and Çöltekin (<xref ref-type="bibr" rid="b102">102</xref>) investigated the actual
    connection between eye and mouse when searching web pages. They
    found a natural coupling when the eyes were not under conscious
    control, but this coupling breaks down when participants are
    instructed to move their eyes intentionally. They therefore
    suggested that, for natural tracing tasks, mouse tracking could
    provide similar information to eye tracking and thus be used as a
    proxy for attention. Since our paradigm used a clear-cut task that
    asked our subjects to consciously coordinate eye and mouse
    movements, this concern of Demšar and Çöltekin does not appear
    relevant in our context.</p>
  </sec>
  <sec id="S4c">
    <title>Starting points for further evaluations and
    investigations</title>
    <p>Based on the results and limitations of this study, and on the
    theoretical part of this work, recommendations for further research
    can be derived that could complete the present results and lead to a
    broader assessment of the contribution and validity of mcAT.
    Firstly, there are aspects already present in the data that can be
    evaluated in secondary analyses. One such method would be
    <italic>the analysis of the spatial correspondence of fixations
    <underline>and</underline> clicks</italic>, which would provide an
    extension to the viewing and clicking motion paths. The question of
    whether clicking may be influenced by short-term memory in relation
    to the presented stimuli cannot be answered in the context of the
    evaluated tests but should be studied. The evaluation of the
    qualitative survey could also shed some light on possible gaze and
    click strategies with mcAT. Furthermore, the number and size of the
    grids overlaid on the images should be varied systematically to
    determine whether the effects found in this study can be
    confirmed.</p>
    
    <p>Another possibility would be to select from the present stimulus
    material only those images that are semantically oriented and,
    instead of using grids, to define AOIs in a next step and then
    repeat the evaluation methods for these new AOIs. For checking the
    temporal coincidence of fixations and clicks, further parameters
    such as the viewing time and the time to contact could be calculated
    and compared for both methods. Another open question is how
    precisely mouse clicks are placed spatially. Varying the
    presentation duration for certain looking/clicking strategies of the
    subjects would enable a time-based evaluation that checks the extent
    to which the match of gaze and click data also depends upon the
    presentation duration.</p>
  </sec>
</sec>
<sec id="S5">
  <title>Conclusion</title>
  <p>The aim of this work was a fundamental review of the suitability of
  mcAT for measuring the attention of participants by registering clicks
  on the computer screen. The validation was carried out in a
  between-subject design against eye tracking (emAT), the standard
  method of attention measurement. The focus of the investigation was
  the analysis of the spatial relationship between the measured data of
  both methods. Based on findings reported in the literature, it was
  assumed that, in general, a close relationship between gaze and click
  data exists, but that the extent of this relationship varies across
  image categories. Both hypotheses were confirmed. As eye tracking
  (emAT) is predominantly accepted as the valid method for measuring
  attention, we conclude that mouse-click tracking (mcAT) is similarly
  highly valid.</p>
  
  <p>A further finding of our research is that this innovative method
  obtains particularly valid results with stimuli that are simply
  structured and that show humans or animals. This suggests that mcAT is
  well suited for measuring attention to print ads, and is thus, for
  advertising research, a valid alternative to eye tracking with
  benefits regarding practicability. Given some of emAT’s restrictions,
  we suggest that, in other fields, mcAT could replace the registration
  of eye movements in cases where eye tracking may be inaccurate or
  technically unfeasible, and that it provides a promising additional
  method for usability research (<xref ref-type="bibr" rid="b103">103</xref>).</p>
</sec>
<sec id="S6">
  <title>Ethics and Conflict of Interest</title>
  <p>The author(s) declare(s) that the contents of the article are in
  agreement with the ethics described in
  <ext-link ext-link-type="uri" xlink:href="http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html" xlink:show="new">http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html</ext-link>
  and that there is no conflict of interest regarding the publication of
  this paper.</p>
</sec>

</body>
<back>
<ref-list>
<ref id="b1"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Noton</surname>, <given-names>D.</given-names></name>, &#x26; <name><surname>Stark</surname>, <given-names>L.</given-names></name></person-group> (<year>1971</year>). <article-title>Scanpaths in eye movements during pattern perception.</article-title> <source>Science</source>, <volume>171</volume>(<issue>3968</issue>), <fpage>308</fpage>–<lpage>311</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1126/science.171.3968.308</pub-id><pub-id pub-id-type="pmid">5538847</pub-id><issn>0036-8075</issn></mixed-citation></ref>
<ref id="b2"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Groner</surname>, <given-names>R.</given-names></name></person-group> (<year>1988</year>). <chapter-title>Eye movements, attention and visual information processing: some experimental results and methodological consideration</chapter-title>. In <person-group person-group-type="editor"><name><given-names>G.</given-names> <surname>Luer</surname></name>, <name><given-names>U.</given-names> <surname>Lass</surname></name>, &#x26; <name><given-names>J.</given-names> <surname>Shallo-Hoffmann</surname></name> (<role>Eds.</role>),</person-group> <source>Eye movement research, physiological and psychological aspects</source>. <publisher-loc>Gottingen</publisher-loc>: <publisher-name>Hogrefe</publisher-name>.</mixed-citation></ref>
<ref id="b3"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Groner</surname>, <given-names>R.</given-names></name>, &#x26; <name><surname>Groner</surname>, <given-names>M. T.</given-names></name></person-group> (<year>1989</year>). <article-title>Attention and eye movement control: An overview.</article-title> <source>European Archives of Psychiatry and Neurological Sciences</source>, <volume>239</volume>(<issue>1</issue>), <fpage>9</fpage>–<lpage>16</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1007/BF01739737</pub-id><pub-id pub-id-type="pmid">2676541</pub-id><issn>0175-758X</issn></mixed-citation></ref>
<ref id="b4"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Groner</surname>, <given-names>R.</given-names></name>, &#x26; <name><surname>Groner</surname>, <given-names>M. T.</given-names></name></person-group> (<year>2000</year>). <chapter-title>The issue of control in sensory and perceptual processes: attention selects and modulates the visual input</chapter-title>. In <person-group person-group-type="editor"><name><given-names>W. J.</given-names> <surname>Perrig</surname></name> &#x26; <name><given-names>A.</given-names> <surname>Grob</surname></name> (<role>Eds.</role>),</person-group> <source>Control of human behavior, mental processes, and consciousness</source> (pp. <fpage>125</fpage>–<lpage>135</lpage>). <publisher-loc>Mahwah, N.J.</publisher-loc>: <publisher-name>Lawrence. Erlbaum</publisher-name>.</mixed-citation></ref>
<ref id="b5"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Yarbus</surname>, <given-names>A. L.</given-names></name></person-group> (<year>1961</year>). <article-title>Eye movements during the examination of complicated objects.</article-title> <source>Biofizika</source>, <volume>6</volume>(<issue>2</issue>), <fpage>52</fpage>–<lpage>56</lpage>.<pub-id pub-id-type="pmid">14040367</pub-id><issn>0006-3029</issn></mixed-citation></ref>
<ref id="b6"><mixed-citation publication-type="unknown" specific-use="parsed"><person-group person-group-type="author"><name><surname>Cadek</surname> <given-names>V.</given-names></name></person-group> <article-title>Why are we predicting a probability rather than a binary response?</article-title> <year>2015</year>.</mixed-citation></ref>
<ref id="b7"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Posner</surname>, <given-names>M. I.</given-names></name>, <name><surname>Snyder</surname>, <given-names>C. R.</given-names></name>, &#x26; <name><surname>Davidson</surname>, <given-names>B. J.</given-names></name></person-group> (<year>1980</year>). <article-title>Attention and the detection of signals.</article-title> <source>Journal of Experimental Psychology</source>, <volume>109</volume>(<issue>2</issue>), <fpage>160</fpage>–<lpage>174</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1037/0096-3445.109.2.160</pub-id><pub-id pub-id-type="pmid">7381367</pub-id><issn>0022-1015</issn></mixed-citation></ref>
<ref id="b8"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Allport</surname> <given-names>DA</given-names></name></person-group>. Selection for action: Some behavioral and neurophysiological considerations of attention and action. Perspectives on perception and action. <year>1987</year>;15:395-419.</mixed-citation></ref>
<ref id="b9"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Neumann</surname>, <given-names>O.</given-names></name></person-group> (<year>1987</year>). <chapter-title>Beyond capacity: A functional view of attention</chapter-title>. In <person-group person-group-type="editor"><name><given-names>H.</given-names> <surname>Heuer</surname></name> (<role>Ed.</role>),</person-group> <source>Perspectives on perception and action</source> (pp. <fpage>361</fpage>–<lpage>394</lpage>). <publisher-loc>Hillsdale, NJ</publisher-loc>: <publisher-name>Erlbaum</publisher-name>.</mixed-citation></ref>
<ref id="b10"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Jacob</surname>, <given-names>R. J. K.</given-names></name>, &#x26; <name><surname>Karn</surname>, <given-names>K. S.</given-names></name></person-group> (<year>2003</year>). <source>Eye tracking in human-computer interaction and usability research. The Mind’s Eye</source> (pp. <fpage>573</fpage>–<lpage>605</lpage>). <publisher-name>Elsevier</publisher-name>.</mixed-citation></ref>
<ref id="b11"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Borji</surname>, <given-names>A.</given-names></name>, &#x26; <name><surname>Itti</surname>, <given-names>L.</given-names></name></person-group> (<year>2013</year>). <article-title>State-of-the-art in visual attention modeling.</article-title> <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>, <volume>35</volume>(<issue>1</issue>), <fpage>185</fpage>–<lpage>207</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1109/TPAMI.2012.89</pub-id><pub-id pub-id-type="pmid">22487985</pub-id><issn>0162-8828</issn></mixed-citation></ref>
<ref id="b12"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Schütz</surname>, <given-names>A. C.</given-names></name>, <name><surname>Braun</surname>, <given-names>D. I.</given-names></name>, &#x26; <name><surname>Gegenfurtner</surname>, <given-names>K. R.</given-names></name></person-group> (<year>2011</year>). <article-title>Eye movements and perception: A selective review.</article-title> <source>Journal of Vision (Charlottesville, Va.)</source>, <volume>11</volume>(<issue>5</issue>), <fpage>1</fpage>–<lpage>30</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1167/11.5.9</pub-id><pub-id pub-id-type="pmid">21917784</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="b13"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Fischer</surname>, <given-names>B.</given-names></name></person-group> (<year>1999</year>). <source>Blick-Punkte: neurobiologische Grundlagen des Sehens und der Blicksteuerung</source>. <publisher-name>Huber</publisher-name>.</mixed-citation></ref>
<ref id="b14"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Parkhurst</surname>, <given-names>D.</given-names></name>, <name><surname>Law</surname>, <given-names>K.</given-names></name>, &#x26; <name><surname>Niebur</surname>, <given-names>E.</given-names></name></person-group> (<year>2002</year>). <article-title>Modeling the role of salience in the allocation of overt visual attention.</article-title> <source>Vision Research</source>, <volume>42</volume>(<issue>1</issue>), <fpage>107</fpage>–<lpage>123</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1016/S0042-6989(01)00250-4</pub-id><pub-id pub-id-type="pmid">11804636</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b15"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Anderson</surname>, <given-names>N. C.</given-names></name>, <name><surname>Ort</surname>, <given-names>E.</given-names></name>, <name><surname>Kruijne</surname>, <given-names>W.</given-names></name>, <name><surname>Meeter</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Donk</surname>, <given-names>M.</given-names></name></person-group> (<year>2015</year>). <article-title>It depends on when you look at it: Salience influences eye movements in natural scene viewing and search early in time.</article-title> <source>Journal of Vision (Charlottesville, Va.)</source>, <volume>15</volume>(<issue>5</issue>), <fpage>9</fpage>. <pub-id pub-id-type="doi" specific-use="author">10.1167/15.5.9</pub-id><pub-id pub-id-type="pmid">26067527</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="b16"><mixed-citation publication-type="unknown" specific-use="parsed"><person-group person-group-type="author"><name><surname>Berger</surname> <given-names>SJ</given-names></name></person-group>. <article-title>Spotlight-Viewer: das Aufzeichnen von manuellen Zeigebewegungen als neues Verfahren zur impliziten Messung der Effektivität von Werbeanzeigen: na</article-title>; <year>2009</year>.</mixed-citation></ref>
<ref id="b17"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Duchowski</surname>, <given-names>A. T.</given-names></name></person-group> (<year>2007</year>). <article-title>Eye tracking methodology: Theory into Practice</article-title> <source>(2. ed.). London: Springer.</source> <issn>0040-5841</issn></mixed-citation></ref>
<ref id="b18"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>James</surname>, <given-names>W.</given-names></name></person-group> (<year>1890</year>). <article-title>The principles of psychology</article-title><source>(1. Ed). New York: Dover Publications.</source></mixed-citation></ref>
<ref id="b19"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Dosher</surname>, <given-names>B. A.</given-names></name>, <name><surname>Sperling</surname>, <given-names>G.</given-names></name>, &#x26; <name><surname>Wurst</surname>, <given-names>S. A.</given-names></name></person-group> (<year>1986</year>). <article-title>Tradeoffs between stereopsis and proximity luminance covariance as determinants of perceived 3D structure.</article-title> <source>Vision Research</source>, <volume>26</volume>(<issue>6</issue>), <fpage>973</fpage>–<lpage>990</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1016/0042-6989(86)90154-9</pub-id><pub-id pub-id-type="pmid">3750879</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b20"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Kowler</surname>, <given-names>E.</given-names></name>, <name><surname>Anderson</surname>, <given-names>E.</given-names></name>, <name><surname>Dosher</surname>, <given-names>B.</given-names></name>, &#x26; <name><surname>Blaser</surname>, <given-names>E.</given-names></name></person-group> (<year>1995</year>). <article-title>The role of attention in the programming of saccades.</article-title> <source>Vision Research</source>, <volume>35</volume>(<issue>13</issue>), <fpage>1897</fpage>–<lpage>1916</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1016/0042-6989(94)00279-U</pub-id><pub-id pub-id-type="pmid">7660596</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b21"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Norman</surname>, <given-names>D. A.</given-names></name>, &#x26; <name><surname>Bobrow</surname>, <given-names>D. G.</given-names></name></person-group> (<year>1975</year>). <article-title>On data-limited and resource-limited processes.</article-title> <source>Cognitive Psychology</source>, <volume>7</volume>(<issue>1</issue>), <fpage>44</fpage>–<lpage>64</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1016/0010-0285(75)90004-3</pub-id><issn>0010-0285</issn></mixed-citation></ref>
<ref id="b22"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Reeves</surname>, <given-names>A.</given-names></name>, &#x26; <name><surname>Sperling</surname>, <given-names>G.</given-names></name></person-group> (<year>1986</year>). <article-title>Attention gating in short-term visual memory.</article-title> <source>Psychological Review</source>, <volume>93</volume>(<issue>2</issue>), <fpage>180</fpage>–<lpage>206</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1037/0033-295X.93.2.180</pub-id><pub-id pub-id-type="pmid">3714927</pub-id><issn>0033-295X</issn></mixed-citation></ref>
<ref id="b23"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Shaw</surname> <given-names>M.</given-names></name></person-group> <article-title>Division of attention among spatial locations: A fundamental difference between detection of letters and detection of luminance increments.</article-title> Attention and performance X: Control of language processes. <year>1984</year>:109-21.</mixed-citation></ref>
<ref id="b24"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Shaw</surname>, <given-names>M. L.</given-names></name></person-group> (<year>1982</year>). <article-title>Attending to multiple sources of information: I. The integration of information in decision making.</article-title> <source>Cognitive Psychology</source>, <volume>14</volume>(<issue>3</issue>), <fpage>353</fpage>–<lpage>409</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1016/0010-0285(82)90014-7</pub-id><issn>0010-0285</issn></mixed-citation></ref>
<ref id="b25"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Kastner</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Ungerleider</surname>, <given-names>L. G.</given-names></name></person-group> (<year>2000</year>). <article-title>Mechanisms of visual attention in the human cortex.</article-title> <source>Annual Review of Neuroscience</source>, <volume>23</volume>(<issue>1</issue>), <fpage>315</fpage>–<lpage>341</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1146/annurev.neuro.23.1.315</pub-id><pub-id pub-id-type="pmid">10845067</pub-id><issn>0147-006X</issn></mixed-citation></ref>
<ref id="b26"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Dubois</surname>, <given-names>B.</given-names></name>, &#x26; <name><surname>Pillon</surname>, <given-names>B.</given-names></name></person-group> (<year>1997</year>). <article-title>Cognitive deficits in Parkinson’s disease.</article-title> <source>Journal of Neurology</source>, <volume>244</volume>(<issue>1</issue>), <fpage>2</fpage>–<lpage>8</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1007/PL00007725</pub-id><pub-id pub-id-type="pmid">9007738</pub-id><issn>0340-5354</issn></mixed-citation></ref>
<ref id="b27"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Mink</surname>, <given-names>J. W.</given-names></name></person-group> (<year>1996</year>). <article-title>The basal ganglia: Focused selection and inhibition of competing motor programs.</article-title> <source>Progress in Neurobiology</source>, <volume>50</volume>(<issue>4</issue>), <fpage>381</fpage>–<lpage>425</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1016/S0301-0082(96)00042-1</pub-id><pub-id pub-id-type="pmid">9004351</pub-id><issn>0301-0082</issn></mixed-citation></ref>
<ref id="b28"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rizzolatti</surname>, <given-names>G.</given-names></name></person-group> (<year>1983</year>). <chapter-title>Mechanisms of Selective Attention in Mammals</chapter-title>. In <person-group person-group-type="editor"><name><given-names>J. P.</given-names> <surname>Ewert</surname></name>, <name><given-names>R. R.</given-names> <surname>Capranica</surname></name>, &#x26; <name><given-names>D. J.</given-names> <surname>Ingle</surname></name> (<role>Eds.</role>),</person-group> <source>Advances in Vertebrate Neuroethology</source> (pp. <fpage>261</fpage>–<lpage>297</lpage>). <publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Springer US</publisher-name>. <pub-id pub-id-type="doi">10.1007/978-1-4684-4412-4_12</pub-id></mixed-citation></ref>
<ref id="b29"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rizzolatti</surname>, <given-names>G.</given-names></name>, <name><surname>Riggio</surname>, <given-names>L.</given-names></name>, <name><surname>Dascola</surname>, <given-names>I.</given-names></name>, &#x26; <name><surname>Umiltá</surname>, <given-names>C.</given-names></name></person-group> (<year>1987</year>). <article-title>Reorienting attention across the horizontal and vertical meridians: Evidence in favor of a premotor theory of attention.</article-title> <source>Neuropsychologia</source>, <volume>25</volume>(<issue>1</issue>, <supplement>1A</supplement>), <fpage>31</fpage>–<lpage>40</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1016/0028-3932(87)90041-8</pub-id><pub-id pub-id-type="pmid">3574648</pub-id><issn>0028-3932</issn></mixed-citation></ref>
<ref id="b30"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Sheliga</surname>, <given-names>B. M.</given-names></name>, <name><surname>Craighero</surname>, <given-names>L.</given-names></name>, <name><surname>Riggio</surname>, <given-names>L.</given-names></name>, &#x26; <name><surname>Rizzolatti</surname>, <given-names>G.</given-names></name></person-group> (<year>1997</year>). <article-title>Effects of spatial attention on directional manual and ocular responses.</article-title> <source>Experimental Brain Research</source>, <volume>114</volume>(<issue>2</issue>), <fpage>339</fpage>–<lpage>351</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1007/PL00005642</pub-id><pub-id pub-id-type="pmid">9166923</pub-id><issn>0014-4819</issn></mixed-citation></ref>
<ref id="b31"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Deubel</surname>, <given-names>H.</given-names></name>, &#x26; <name><surname>Schneider</surname>, <given-names>W. X.</given-names></name></person-group> (<year>1996</year>). <article-title>Saccade target selection and object recognition: Evidence for a common attentional mechanism.</article-title> <source>Vision Research</source>, <volume>36</volume>(<issue>12</issue>), <fpage>1827</fpage>–<lpage>1837</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1016/0042-6989(95)00294-4</pub-id><pub-id pub-id-type="pmid">8759451</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b32"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Henderson</surname>, <given-names>J. M.</given-names></name>, &#x26; <name><surname>Hollingworth</surname>, <given-names>A.</given-names></name></person-group> (<year>1999</year>). <article-title>High-level scene perception.</article-title> <source>Annual Review of Psychology</source>, <volume>50</volume>(<issue>1</issue>), <fpage>243</fpage>–<lpage>271</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1146/annurev.psych.50.1.243</pub-id><pub-id pub-id-type="pmid">10074679</pub-id><issn>0066-4308</issn></mixed-citation></ref>
<ref id="b33"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Sheliga</surname>, <given-names>B. M.</given-names></name>, <name><surname>Riggio</surname>, <given-names>L.</given-names></name>, &#x26; <name><surname>Rizzolatti</surname>, <given-names>G.</given-names></name></person-group> (<year>1994</year>). <article-title>Orienting of attention and eye movements.</article-title> <source>Experimental Brain Research</source>, <volume>98</volume>(<issue>3</issue>), <fpage>507</fpage>–<lpage>522</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1007/BF00233988</pub-id><pub-id pub-id-type="pmid">8056071</pub-id><issn>0014-4819</issn></mixed-citation></ref>
<ref id="b34"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Umilta</surname>, <given-names>C.</given-names></name>, <name><surname>Riggio</surname>, <given-names>L.</given-names></name>, <name><surname>Dascola</surname>, <given-names>I.</given-names></name>, &#x26; <name><surname>Rizzolatti</surname>, <given-names>G.</given-names></name></person-group> (<year>1991</year>). <article-title>Differential effects of central and peripheral cues on the reorienting of spatial attention.</article-title> <source>The European Journal of Cognitive Psychology</source>, <volume>3</volume>(<issue>2</issue>), <fpage>247</fpage>–<lpage>267</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1080/09541449108406228</pub-id><issn>0954-1446</issn></mixed-citation></ref>
<ref id="b35"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Palmer</surname>, <given-names>J.</given-names></name></person-group> (<year>1995</year>). <article-title>Attention in Visual Search: Distinguishing Four Causes of a Set-Size Effect.</article-title> <source>Current Directions in Psychological Science</source>, <volume>4</volume>(<issue>4</issue>), <fpage>118</fpage>–<lpage>123</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1111/1467-8721.ep10772534</pub-id><issn>0963-7214</issn></mixed-citation></ref>
<ref id="b36"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Wolfe</surname>, <given-names>J. M.</given-names></name>, <name><surname>Alvarez</surname>, <given-names>G. A.</given-names></name>, &#x26; <name><surname>Horowitz</surname>, <given-names>T. S.</given-names></name></person-group> (<year>2000</year>). <article-title>Attention is fast but volition is slow.</article-title> <source>Nature</source>, <volume>406</volume>(<issue>6797</issue>), <fpage>691</fpage>. <pub-id pub-id-type="doi" specific-use="author">10.1038/35021132</pub-id><pub-id pub-id-type="pmid">10963584</pub-id><issn>0028-0836</issn></mixed-citation></ref>
<ref id="b37"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Gilchrist</surname>, <given-names>I. D.</given-names></name>, &#x26; <name><surname>Harvey</surname>, <given-names>M.</given-names></name></person-group> (<year>2006</year>). <article-title>Evidence for a systematic component within scan paths in visual search.</article-title> <source>Visual Cognition</source>, <volume>14</volume>(<issue>4-8</issue>), <fpage>704</fpage>–<lpage>715</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1080/13506280500193719</pub-id><issn>1350-6285</issn></mixed-citation></ref>
<ref id="b38"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="editor"><name><surname>Egner</surname> <given-names>S</given-names></name>, <name><surname>Itti</surname> <given-names>L</given-names></name>, <name><surname>Scheier</surname> <given-names>C</given-names></name></person-group> (<year>2000</year>). <article-title>Comparing attention models with different types of behavior data.</article-title><source>In: Investigative Ophthalmology and Visual Science (Proc. ARVO 2000), Vol. 41, No. 4, p. S39.</source></mixed-citation></ref>
<ref id="b39"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Scheier</surname> <given-names>CR</given-names></name>, <name><surname>Reigber</surname> <given-names>D</given-names></name>, <name><surname>Egner</surname> <given-names>S</given-names></name></person-group>. Messen der Aufmerksamkeit bei Internet-Nutzern -- Ansatz und Einsatz eines neuen Verfahrens zur Online-Messung von Aufmerksamkeit. In: Theobald A, Dreyer M, Starsetzki T, editors. Online-Marktforschung Theoretische Grundlagen und praktische Erfahrungen. 2nd ed. ed. Wiesbaden: Gabler Verlag; <year>2003</year>. p. 309-324.</mixed-citation></ref>
<ref id="b40"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Scheier</surname>, <given-names>C. R.</given-names></name>, &#x26; <name><surname>Egner</surname>, <given-names>S.</given-names></name></person-group> (<year>2005</year>). <source>Beobachten statt Fragen- Internetgestützte Verhaltensmessung mit Tracking</source>. <publisher-name>Planung &#x26; Analyse</publisher-name>.</mixed-citation></ref>
<ref id="b41"><mixed-citation publication-type="other" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Egner</surname> <given-names>S</given-names></name></person-group>, inventorApparatus and method for examination of images. U.S. patent 7,512,289. <year>2003</year>.</mixed-citation></ref>
<ref id="b42"><mixed-citation publication-type="other" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Egner</surname> <given-names>S</given-names></name></person-group>, inventorVorrichtung und Verfahren zum Untersuchen von Bildern. EU patent No. 3012738. <year>2003</year>.</mixed-citation></ref>
<ref id="b43"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="editor"><name><surname>Granka</surname> <given-names>L</given-names></name>, <name><surname>Joachims</surname> <given-names>T</given-names></name>, <name><surname>Gay</surname> <given-names>G</given-names></name></person-group> (<year>2013</year>). <article-title>Eye-tracking analysis of user behavior in www search2004.</article-title><source>In Proceed-ings of the 27th annual international ACM SIGIR conference on Research and development in infor-mation retrieval, pages 478-479.</source></mixed-citation></ref>
<ref id="b44"><mixed-citation publication-type="book-chapter" specific-use="linked"><person-group person-group-type="author"><name><surname>Deng</surname> <given-names>J</given-names></name>, <name><surname>Krause</surname> <given-names>J</given-names></name>, <name><surname>Fei-Fei</surname> <given-names>L</given-names></name></person-group>. <chapter-title>Fine-grained crowdsourcing for fine-grained recognition.</chapter-title> In CVPR, 580-587. <year>2013</year>. <pub-id pub-id-type="doi">10.1109/CVPR.2013.81</pub-id></mixed-citation></ref>
<ref id="b45"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Gosselin</surname>, <given-names>F.</given-names></name>, &#x26; <name><surname>Schyns</surname>, <given-names>P. G.</given-names></name></person-group> (<year>2001</year>). <article-title>Bubbles: A technique to reveal the use of information in recognition tasks.</article-title> <source>Vision Research</source>, <volume>41</volume>(<issue>17</issue>), <fpage>2261</fpage>–<lpage>2271</lpage>. <pub-id pub-id-type="doi">10.1016/S0042-6989(01)00097-9</pub-id><pub-id pub-id-type="pmid">11448718</pub-id><issn>0042-6989</issn></mixed-citation></ref>
<ref id="b46"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Chen</surname>, <given-names>M. C.</given-names></name>, <name><surname>Anderson</surname>, <given-names>J. R.</given-names></name>, &#x26; <name><surname>Sohn</surname>, <given-names>M. H.</given-names></name></person-group> (<year>2001</year>). <source>What can a mouse cursor tell us more? correlation of eye/mouse movements on web browsing. CHI EA</source> (pp. <fpage>281</fpage>–<lpage>282</lpage>). <publisher-name>ACM</publisher-name>.</mixed-citation></ref>
<ref id="b47"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="editor"><name><surname>Navalpakkam</surname> <given-names>V</given-names></name>, <name><surname>Jentzsch</surname> <given-names>L</given-names></name>, <name><surname>Sayres</surname> <given-names>R</given-names></name>, <name><surname>Ravi</surname> <given-names>S</given-names></name>, <name><surname>Ahmed</surname> <given-names>A</given-names></name>, <name><surname>Smola</surname> <given-names>A</given-names></name></person-group>. <article-title>Measurement and modeling of eye-mouse behavior in the presence of nonlinear page layouts.</article-title> <source>Proceedings of the 22nd international conference on World Wide Web</source>; <year>2013</year>: <publisher-name>ACM.</publisher-name> <pub-id pub-id-type="doi">10.1145/2488388.2488471</pub-id></mixed-citation></ref>
<ref id="b48"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="editor"><name><surname>Kim</surname> <given-names>NW</given-names></name>, <name><surname>Bylinskii</surname> <given-names>Z</given-names></name>, <name><surname>Borkin</surname> <given-names>MA</given-names></name>, <name><surname>Oliva</surname> <given-names>A</given-names></name>, <name><surname>Gajos</surname> <given-names>KZ</given-names></name>, <name><surname>Pfister</surname> <given-names>H</given-names></name></person-group>. <article-title>A crowdsourced alternative to eye-tracking for visualization understanding.</article-title> <source>Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems</source>; <year>2015</year>: <publisher-name>ACM.</publisher-name> <pub-id pub-id-type="doi">10.1145/2702613.2732934</pub-id></mixed-citation></ref>
<ref id="b49"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Stark</surname>, <given-names>L.</given-names></name></person-group> (<year>1968</year>). <source>Neulogical control systems; Studies in bio-engineering</source> (pp. <fpage>236</fpage>–<lpage>290</lpage>). <publisher-name>Plenum</publisher-name>.</mixed-citation></ref>
<ref id="b50"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Lacquaniti</surname>, <given-names>F.</given-names></name>, &#x26; <name><surname>Soechting</surname>, <given-names>J. F.</given-names></name></person-group> (<year>1982</year>). <article-title>Coordination of arm and wrist motion during a reaching task.</article-title> <source>The Journal of Neuroscience : The Official Journal of the Society for Neuroscience</source>, <volume>2</volume>(<issue>4</issue>), <fpage>399</fpage>–<lpage>408</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1523/JNEUROSCI.02-04-00399.1982</pub-id><pub-id pub-id-type="pmid">7069463</pub-id><issn>0270-6474</issn></mixed-citation></ref>
<ref id="b51"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="editor"><name><surname>Judd</surname>, <given-names>T.</given-names></name>, <name><surname>Ehinger</surname>, <given-names>K.</given-names></name>, <name><surname>Durand</surname>, <given-names>F.</given-names></name>, &#x26; <name><surname>Torralba</surname>, <given-names>A.</given-names></name> (<role>Eds.</role>)</person-group>. (<year>2009</year>). <source>Learning to predict where humans look</source>. <pub-id pub-id-type="doi">10.1109/ICCV.2009.5459462</pub-id></mixed-citation></ref>
<ref id="b52"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Kootstra</surname>, <given-names>G.</given-names></name>, <name><surname>de Boer</surname>, <given-names>B.</given-names></name>, &#x26; <name><surname>Schomaker</surname>, <given-names>L. R. B.</given-names></name></person-group> (<year>2011</year>). <article-title>Predicting Eye Fixations on Complex Visual Stimuli Using Local Symmetry.</article-title> <source>Cognitive Computation</source>, <volume>3</volume>(<issue>1</issue>), <fpage>223</fpage>–<lpage>240</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1007/s12559-010-9089-5</pub-id><pub-id pub-id-type="pmid">21475690</pub-id><issn>1866-9956</issn></mixed-citation></ref>
<ref id="b53"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Wilming</surname>, <given-names>N.</given-names></name>, <name><surname>Betz</surname>, <given-names>T.</given-names></name>, <name><surname>Kietzmann</surname>, <given-names>T. C.</given-names></name>, &#x26; <name><surname>König</surname>, <given-names>P.</given-names></name></person-group> (<year>2011</year>). <article-title>Measures and limits of models of fixation selection.</article-title> <source>PLoS One</source>, <volume>6</volume>(<issue>9</issue>), <fpage>e24038</fpage>. <pub-id pub-id-type="doi" specific-use="author">10.1371/journal.pone.0024038</pub-id><pub-id pub-id-type="pmid">21931638</pub-id><issn>1932-6203</issn></mixed-citation></ref>
<ref id="b54"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="editor"><name><surname>Oyekoya</surname>, <given-names>O.</given-names></name>, &#x26; <name><surname>Stentiford</surname>, <given-names>F.</given-names></name> (<role>Eds.</role>)</person-group>. (<year>2004</year>). <source>Exploring human eye behaviour using a model of visual attention</source>. <pub-id pub-id-type="doi">10.1109/ICPR.2004.1333929</pub-id></mixed-citation></ref>
<ref id="b55"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Pieters</surname>, <given-names>R.</given-names></name>, <name><surname>Wedel</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Batra</surname>, <given-names>R.</given-names></name></person-group> (<year>2010</year>). <article-title>The Stopping Power of Advertising: Measures and Effects of Visual Complexity.</article-title> <source>Journal of Marketing</source>, <volume>74</volume>(<issue>5</issue>), <fpage>48</fpage>–<lpage>60</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1509/jmkg.74.5.48</pub-id> <pub-id pub-id-type="doi">10.1509/jmkg.74.5.048</pub-id><issn>0022-2429</issn></mixed-citation></ref>
<ref id="b56"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Borji</surname>, <given-names>A.</given-names></name>, <name><surname>Sihite</surname>, <given-names>D. N.</given-names></name>, &#x26; <name><surname>Itti</surname>, <given-names>L.</given-names></name></person-group> (<year>2013</year>). <article-title>Quantitative analysis of human-model agreement in visual saliency modeling: A comparative study.</article-title> <source>IEEE Transactions on Image Processing</source>, <volume>22</volume>(<issue>1</issue>), <fpage>55</fpage>–<lpage>69</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2012.2210727</pub-id><pub-id pub-id-type="pmid">22868572</pub-id><issn>1057-7149</issn></mixed-citation></ref>
<ref id="b57"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="editor"><name><surname>Riche</surname> <given-names>N</given-names></name>, <name><surname>Duvinage</surname> <given-names>M</given-names></name>, <name><surname>Mancas</surname> <given-names>M</given-names></name>, <name><surname>Gosselin</surname> <given-names>B</given-names></name>, <name><surname>Dutoit</surname> <given-names>T</given-names></name></person-group>. <article-title>Saliency and human fixations: state-of-the-art and study of comparison metrics.</article-title> <source>Proceedings of the IEEE international conference on computer vision</source>; <year>2013</year>. <pub-id pub-id-type="doi">10.1109/ICCV.2013.147</pub-id></mixed-citation></ref>
<ref id="b58"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Hein</surname>, <given-names>O.</given-names></name>, &#x26; <name><surname>Zangemeister</surname>, <given-names>W. H.</given-names></name></person-group> (<year>2017</year>). <article-title>Topology for gaze analyses - Raw data segmentation.</article-title> <source>Journal of Eye Movement Research</source>, <volume>10</volume>(<issue>1</issue>). <pub-id pub-id-type="doi" specific-use="author">10.16910/jemr.10.1.1</pub-id><issn>1995-8692</issn></mixed-citation></ref>
<ref id="b59"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Privitera</surname>, <given-names>C. M.</given-names></name>, &#x26; <name><surname>Stark</surname>, <given-names>L. W.</given-names></name></person-group> (<year>1998</year>). <article-title>Evaluating image processing algorithms that predict regions of interest.</article-title> <source>Pattern Recognition Letters</source>, <volume>19</volume>(<issue>11</issue>), <fpage>1037</fpage>–<lpage>1043</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1016/S0167-8655(98)00077-4</pub-id><issn>0167-8655</issn></mixed-citation></ref>
<ref id="b60"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="editor"><name><surname>Jiang</surname> <given-names>M</given-names></name>, <name><surname>Huang</surname> <given-names>S</given-names></name>, <name><surname>Duan</surname> <given-names>J</given-names></name>, <name><surname>Zhao</surname> <given-names>Q</given-names></name></person-group> (<year>2015</year>). <article-title>SALICON: Saliency in Context.</article-title> <source>In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.1072–1080).</source></mixed-citation></ref>
<ref id="b61"><mixed-citation publication-type="book" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Holmqvist</surname> <given-names>K</given-names></name>, <name><surname>Nystrom</surname> <given-names>M</given-names></name>, <name><surname>Andersson</surname> <given-names>R</given-names></name>, <name><surname>Dewhurst</surname> <given-names>R</given-names></name>, <name><surname>Jarodzka</surname> <given-names>H</given-names></name></person-group>, van de Weijer J. Eye tracking: A comprehensive guide to methods and measures. 1st ed. ed. Oxford, New York, Auckland: Oxford University Press; <year>2011</year>.</mixed-citation></ref>
<ref id="b62"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Egner</surname> <given-names>S</given-names></name>, <name><surname>Scheier</surname> <given-names>CR</given-names></name></person-group>. <article-title>Erfassung der Kundenwirkung von Bildmaterial.</article-title> Kunstliche Intelligenz. (1), 84-86: German; <year>2002</year>.</mixed-citation></ref>
<ref id="b63"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Zhao</surname>, <given-names>Q.</given-names></name>, &#x26; <name><surname>Koch</surname>, <given-names>C.</given-names></name></person-group> (<year>2011</year>). <article-title>Learning a saliency map using fixated locations in natural scenes.</article-title> <source>Journal of Vision (Charlottesville, Va.)</source>, <volume>11</volume>(<issue>3</issue>), <fpage>1</fpage>–<lpage>15</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1167/11.3.9</pub-id><pub-id pub-id-type="pmid">21393388</pub-id><issn>1534-7362</issn></mixed-citation></ref>
<ref id="b64"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Fawcett</surname>, <given-names>T.</given-names></name></person-group> (<year>2006</year>). <article-title>An introduction to ROC analysis.</article-title> <source>Pattern Recognition Letters</source>, <volume>27</volume>(<issue>8</issue>), <fpage>861</fpage>–<lpage>874</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1016/j.patrec.2005.10.010</pub-id><issn>0167-8655</issn></mixed-citation></ref>
<ref id="b65"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Janssen</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Laatz</surname>, <given-names>W.</given-names></name></person-group> (<year>2016</year>). <source>Statistische Datenanalyse mit SPSS: eine anwendungsorientierte Einführung in das Basissystem und das Modul Exakte Tests</source>. <publisher-name>Springer-Verlag</publisher-name>.</mixed-citation></ref>
<ref id="b66"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="editor"><name><surname>Stankiewicz</surname>, <given-names>B. J.</given-names></name>, <name><surname>Anderson</surname>, <given-names>N. J.</given-names></name>, &#x26; <name><surname>Moore</surname>, <given-names>R. J.</given-names></name> (<role>Eds.</role>)</person-group>. (<year>2011</year>). <source>Using performance efficiency for testing and optimization of visual attention models. Image Quality and System Performance VIII</source>. <publisher-name>International Society for Optics and Photonics</publisher-name>.</mixed-citation></ref>
<ref id="b67"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Eid</surname> <given-names>M</given-names></name>, <name><surname>Gollwitzer</surname> <given-names>M</given-names></name>, <name><surname>Schmitt</surname> <given-names>M</given-names></name></person-group> (<year>2011</year>). Statistik und Forschungsmethoden (1. Ed). Weinheim u.a., Beltz.</mixed-citation></ref>
<ref id="b68"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><collab>IBM</collab></person-group>. IBM SPSS Bootstrapping 20 <year>2011</year>. Available from: <ext-link ext-link-type="uri" xlink:href="ftp://public.dhe.ibm.com/software/analytics/spss/documenttion/statistics/20.0/de/client/Manuals/IBM_SPSS_Bootstrapping.pdf">ftp://public.dhe.ibm.com/software/analytics/spss/documenttion/statistics/20.0/de/client/Manuals/IBM_SPSS_Bootstrapping.pdf</ext-link></mixed-citation></ref>
<ref id="b69"><mixed-citation publication-type="unknown" specific-use="parsed"><person-group person-group-type="author"><name><surname>Lenhard</surname> <given-names>W</given-names></name>, <name><surname>Lenhard</surname> <given-names>A.</given-names></name></person-group> <article-title>Testing the Significance of Correlations.</article-title> <year>2014</year>.</mixed-citation></ref>
<ref id="b70"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rajashekar</surname>, <given-names>U.</given-names></name>, <name><surname>van der Linde</surname>, <given-names>I.</given-names></name>, <name><surname>Bovik</surname>, <given-names>A. C.</given-names></name>, &#x26; <name><surname>Cormack</surname>, <given-names>L. K.</given-names></name></person-group> (<year>2008</year>). <article-title>GAFFE: A gaze-attentive fixation finding engine.</article-title> <source>IEEE Transactions on Image Processing</source>, <volume>17</volume>(<issue>4</issue>), <fpage>564</fpage>–<lpage>573</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2008.917218</pub-id><pub-id pub-id-type="pmid">18390364</pub-id><issn>1057-7149</issn></mixed-citation></ref>
<ref id="b71"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="editor"><name><surname>Kasprowski</surname>, <given-names>P.</given-names></name>, &#x26; <name><surname>Harezlak</surname>, <given-names>K.</given-names></name></person-group>. (<year>2016</year>). <article-title>Implicit Calibra-tion Using Predicted Gaze Targets.</article-title><source>In: Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research &#x26; Applications (ETRA '16). ACM, New York, NY, USA, 245-248.</source>. <pub-id pub-id-type="doi">10.1145/2857491.2857511</pub-id></mixed-citation></ref>
<ref id="b72"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Haxby</surname>, <given-names>J. V.</given-names></name>, <name><surname>Hoffman</surname>, <given-names>E. A.</given-names></name>, &#x26; <name><surname>Gobbini</surname>, <given-names>M. I.</given-names></name></person-group> (<year>2000</year>). <article-title>The distributed human neural system for face perception.</article-title> <source>Trends in Cognitive Sciences</source>, <volume>4</volume>(<issue>6</issue>), <fpage>223</fpage>–<lpage>233</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1016/S1364-6613(00)01482-0</pub-id><pub-id pub-id-type="pmid">10827445</pub-id><issn>1364-6613</issn></mixed-citation></ref>
<ref id="b73"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Kanwisher</surname>, <given-names>N.</given-names></name></person-group> (<year>2000</year>). <article-title>Domain specificity in face perception.</article-title> <source>Nature Neuroscience</source>, <volume>3</volume>(<issue>8</issue>), <fpage>759</fpage>–<lpage>763</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1038/77664</pub-id><pub-id pub-id-type="pmid">10903567</pub-id><issn>1097-6256</issn></mixed-citation></ref>
<ref id="b74"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Sugita</surname>, <given-names>Y.</given-names></name></person-group> (<year>2008</year>). <article-title>Face perception in monkeys reared with no exposure to faces.</article-title> <source>Proceedings of the National Academy of Sciences of the United States of America</source>, <volume>105</volume>(<issue>1</issue>), <fpage>394</fpage>–<lpage>398</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1073/pnas.0706079105</pub-id><pub-id pub-id-type="pmid">18172214</pub-id><issn>0027-8424</issn></mixed-citation></ref>
<ref id="b75"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Downing</surname>, <given-names>P. E.</given-names></name>, <name><surname>Jiang</surname>, <given-names>Y.</given-names></name>, <name><surname>Shuman</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Kanwisher</surname>, <given-names>N.</given-names></name></person-group> (<year>2001</year>). <article-title>A cortical area selective for visual processing of the human body.</article-title> <source>Science</source>, <volume>293</volume>(<issue>5539</issue>), <fpage>2470</fpage>–<lpage>2473</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1126/science.1063414</pub-id><pub-id pub-id-type="pmid">11577239</pub-id><issn>0036-8075</issn></mixed-citation></ref>
<ref id="b76"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Capitani</surname>, <given-names>E.</given-names></name>, <name><surname>Laiacona</surname>, <given-names>M.</given-names></name>, <name><surname>Mahon</surname>, <given-names>B.</given-names></name>, &#x26; <name><surname>Caramazza</surname>, <given-names>A.</given-names></name></person-group> (<year>2003</year>). <article-title>What are the facts of semantic category-specific deficits? A critical review of the clinical evidence.</article-title> <source>Cognitive Neuropsychology</source>, <volume>20</volume>(<issue>3</issue>), <fpage>213</fpage>–<lpage>261</lpage>. <pub-id pub-id-type="doi">10.1080/02643290244000266</pub-id><pub-id pub-id-type="pmid">20957571</pub-id><issn>0264-3294</issn></mixed-citation></ref>
<ref id="b77"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Nairne</surname> <given-names>JS</given-names></name></person-group>. Adaptive memory: Evolutionary constraints on remembering. In: Ross BH, editor. The psychology of learning and motivation. 1st ed. ed. London: Academic Press; <year>2010</year>. p. 1-32.</mixed-citation></ref>
<ref id="b78"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Nairne</surname>, <given-names>J. S.</given-names></name>, &#x26; <name><surname>Pandeirada</surname>, <given-names>J. N. S.</given-names></name></person-group> (<year>2010</year>). <article-title>Adaptive memory: Ancestral priorities and the mnemonic value of survival processing.</article-title> <source>Cognitive Psychology</source>, <volume>61</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>22</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1016/j.cogpsych.2010.01.005</pub-id><pub-id pub-id-type="pmid">20206924</pub-id><issn>0010-0285</issn></mixed-citation></ref>
<ref id="b79"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Bekkering</surname>, <given-names>H.</given-names></name>, <name><surname>Adam</surname>, <given-names>J. J.</given-names></name>, <name><surname>van den Aarssen</surname>, <given-names>A.</given-names></name>, <name><surname>Kingma</surname>, <given-names>H.</given-names></name>, &#x26; <name><surname>Whiting</surname>, <given-names>H. T.</given-names></name></person-group> (<year>1995</year>). <article-title>Interference between saccadic eye and goal-directed hand movements.</article-title> <source>Experimental Brain Research</source>, <volume>106</volume>(<issue>3</issue>), <fpage>475</fpage>–<lpage>484</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1007/BF00231070</pub-id><pub-id pub-id-type="pmid">8983991</pub-id><issn>0014-4819</issn></mixed-citation></ref>
<ref id="b80"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Biguer</surname>, <given-names>B.</given-names></name>, <name><surname>Jeannerod</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Prablanc</surname>, <given-names>C.</given-names></name></person-group> (<year>1982</year>). <article-title>The coordination of eye, head, and arm movements during reaching at a single visual target.</article-title> <source>Experimental Brain Research</source>, <volume>46</volume>(<issue>2</issue>), <fpage>301</fpage>–<lpage>304</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1007/BF00237188</pub-id><pub-id pub-id-type="pmid">7095037</pub-id><issn>0014-4819</issn></mixed-citation></ref>
<ref id="b81"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Jeannerod</surname>, <given-names>M.</given-names></name></person-group> (<year>1988</year>). <source>The neural and behavioural organization of goal-directed movements. Oxford psychology series</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford Univ. Press</publisher-name>.</mixed-citation></ref>
<ref id="b82"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Johansson</surname>, <given-names>R. S.</given-names></name>, <name><surname>Westling</surname>, <given-names>G.</given-names></name>, <name><surname>Bäckström</surname>, <given-names>A.</given-names></name>, &#x26; <name><surname>Flanagan</surname>, <given-names>J. R.</given-names></name></person-group> (<year>2001</year>). <article-title>Eye-hand coordination in object manipulation.</article-title> <source>The Journal of Neuroscience : The Official Journal of the Society for Neuroscience</source>, <volume>21</volume>(<issue>17</issue>), <fpage>6917</fpage>–<lpage>6932</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1523/JNEUROSCI.21-17-06917.2001</pub-id><pub-id pub-id-type="pmid">11517279</pub-id><issn>0270-6474</issn></mixed-citation></ref>
<ref id="b83"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Land</surname>, <given-names>M.</given-names></name>, <name><surname>Mennie</surname>, <given-names>N.</given-names></name>, &#x26; <name><surname>Rusted</surname>, <given-names>J.</given-names></name></person-group> (<year>1999</year>). <article-title>The roles of vision and eye movements in the control of activities of daily living.</article-title> <source>Perception</source>, <volume>28</volume>(<issue>11</issue>), <fpage>1311</fpage>–<lpage>1328</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1068/p2935</pub-id><pub-id pub-id-type="pmid">10755142</pub-id><issn>0301-0066</issn></mixed-citation></ref>
<ref id="b84"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Prablanc</surname>, <given-names>C.</given-names></name>, <name><surname>Echallier</surname>, <given-names>J. F.</given-names></name>, <name><surname>Komilis</surname>, <given-names>E.</given-names></name>, &#x26; <name><surname>Jeannerod</surname>, <given-names>M.</given-names></name></person-group> (<year>1979</year>). <article-title>Optimal response of eye and hand motor systems in pointing at a visual target. I. Spatio-temporal characteristics of eye and hand movements and their relationships when varying the amount of visual information.</article-title> <source>Biological Cybernetics</source>, <volume>35</volume>(<issue>2</issue>), <fpage>113</fpage>–<lpage>124</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1007/BF00337436</pub-id><pub-id pub-id-type="pmid">518932</pub-id><issn>0340-1200</issn></mixed-citation></ref>
<ref id="b85"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Gribble</surname>, <given-names>P. L.</given-names></name>, <name><surname>Everling</surname>, <given-names>S.</given-names></name>, <name><surname>Ford</surname>, <given-names>K.</given-names></name>, &#x26; <name><surname>Mattar</surname>, <given-names>A.</given-names></name></person-group> (<year>2002</year>). <article-title>Hand-eye coordination for rapid pointing movements. Arm movement direction and distance are specified prior to saccade onset.</article-title> <source>Experimental Brain Research</source>, <volume>145</volume>(<issue>3</issue>), <fpage>372</fpage>–<lpage>382</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1007/s00221-002-1122-9</pub-id><pub-id pub-id-type="pmid">12136387</pub-id><issn>0014-4819</issn></mixed-citation></ref>
<ref id="b86"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Sailer</surname> <given-names>U</given-names></name>, <name><surname>Flanagan</surname> <given-names>JR</given-names></name>, <name><surname>Johansson</surname> <given-names>RS</given-names></name></person-group>(<year>2005</year>). Eye-hand coordination during learning of a novel visuomotor task. J Neurosci. ;25(39):8833-42. <pub-id pub-id-type="doi" specific-use="author">10.1523/JNEUROSCI.2658-05</pub-id>.</mixed-citation></ref>
<ref id="b87"><mixed-citation publication-type="unknown" specific-use="linked"><person-group person-group-type="author"><name><surname>Mather</surname> <given-names>JA</given-names></name>, <name><surname>Fisk</surname> <given-names>JD</given-names></name></person-group>. Orienting to targets by looking and pointing: Parallels and interactions in ocular and manual performance. <year>1985</year>;37(3):315-38. doi: DOI:<pub-id pub-id-type="doi" specific-use="author">10.1080/14640748508400938</pub-id>.</mixed-citation></ref>
<ref id="b88"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Navas</surname>, <given-names>F.</given-names></name>, &#x26; <name><surname>Stark</surname>, <given-names>L.</given-names></name></person-group> (<year>1968</year>). <article-title>Sampling or intermittency in hand control system dynamics.</article-title> <source>Biophysical Journal</source>, <volume>8</volume>(<issue>2</issue>), <fpage>252</fpage>–<lpage>302</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1016/S0006-3495(68)86488-4</pub-id><pub-id pub-id-type="pmid">5639937</pub-id><issn>0006-3495</issn></mixed-citation></ref>
<ref id="b89"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Fisk</surname>, <given-names>J. D.</given-names></name>, &#x26; <name><surname>Goodale</surname>, <given-names>M. A.</given-names></name></person-group> (<year>1985</year>). <article-title>The organization of eye and limb movements during unrestricted reaching to targets in contralateral and ipsilateral visual space.</article-title> <source>Experimental Brain Research</source>, <volume>60</volume>(<issue>1</issue>), <fpage>159</fpage>–<lpage>178</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1007/BF00237028</pub-id><pub-id pub-id-type="pmid">4043274</pub-id><issn>0014-4819</issn></mixed-citation></ref>
<ref id="b90"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Snyder</surname>, <given-names>L. H.</given-names></name>, <name><surname>Calton</surname>, <given-names>J. L.</given-names></name>, <name><surname>Dickinson</surname>, <given-names>A. R.</given-names></name>, &#x26; <name><surname>Lawrence</surname>, <given-names>B. M.</given-names></name></person-group> (<year>2002</year>). <article-title>Eye-hand coordination: Saccades are faster when accompanied by a coordinated arm movement.</article-title> <source>Journal of Neurophysiology</source>, <volume>87</volume>(<issue>5</issue>), <fpage>2279</fpage>–<lpage>2286</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1152/jn.00854.2001</pub-id><pub-id pub-id-type="pmid">11976367</pub-id><issn>0022-3077</issn></mixed-citation></ref>
<ref id="b91"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Bednarik</surname>, <given-names>R.</given-names></name>, <name><surname>Gowases</surname>, <given-names>T.</given-names></name>, &#x26; <name><surname>Tukiainen</surname>, <given-names>M.</given-names></name></person-group> (<year>2009</year>). <article-title>Gaze interaction enhances problem solving: Effects of dwell-time based, gaze-augmented, and mouse interaction on problem-solving strategies and user experience.</article-title> <source>Journal of Eye Movement Research</source>, <volume>3</volume>(<issue>1</issue>). <pub-id pub-id-type="doi" specific-use="author">10.16910/jemr.3.1.3</pub-id><issn>1995-8692</issn></mixed-citation></ref>
<ref id="b92"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Helsen</surname>, <given-names>W. F.</given-names></name>, <name><surname>Elliott</surname>, <given-names>D.</given-names></name>, <name><surname>Starkes</surname>, <given-names>J. L.</given-names></name>, &#x26; <name><surname>Ricker</surname>, <given-names>K. L.</given-names></name></person-group> (<year>2000</year>). <article-title>Coupling of eye, finger, elbow, and shoulder movements during manual aiming.</article-title> <source>Journal of Motor Behavior</source>, <volume>32</volume>(<issue>3</issue>), <fpage>241</fpage>–<lpage>248</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1080/00222890009601375</pub-id><pub-id pub-id-type="pmid">10975272</pub-id><issn>0022-2895</issn></mixed-citation></ref>
<ref id="b93"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="editor"><name><surname>Bieg</surname>, <given-names>H. J.</given-names></name>, <name><surname>Chuang</surname>, <given-names>L. L.</given-names></name>, <name><surname>Fleming</surname>, <given-names>R. W.</given-names></name>, <name><surname>Reiterer</surname>, <given-names>H.</given-names></name>, &#x26; <name><surname>Bulthoff</surname>, <given-names>H. H.</given-names></name> (<role>Eds.</role>)</person-group>. (<year>2010</year>). <article-title>Eye and Pointer Coordination in Search and Selection Tasks</article-title><source>In: Proceedings of the Symposium on Eye Tracking Research and Applications, 89–92.</source>. <pub-id pub-id-type="doi">10.1145/1743666.1743688</pub-id></mixed-citation></ref>
<ref id="b94"><mixed-citation publication-type="unknown" specific-use="linked"><person-group person-group-type="author"><name><surname>Liebling</surname> <given-names>DJ</given-names></name>, &#x26; <name><surname>Dumais</surname>, <given-names>S. T.</given-names></name></person-group> . <article-title>Gaze and Mouse Coordination in Everyday Work</article-title> <year>2014</year>. Available from: http://dx.doi.org/<pub-id pub-id-type="doi" specific-use="author">10.1145/2638728.2641692</pub-id>.</mixed-citation></ref>
<ref id="b95"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rodden</surname>, <given-names>K.</given-names></name>, <name><surname>Fu</surname>, <given-names>X.</given-names></name>, <name><surname>Aula</surname>, <given-names>A.</given-names></name>, &#x26; <name><surname>Spiro</surname>, <given-names>I.</given-names></name></person-group> (<year>2008</year>). <source>Eye-mouse coordination patterns on web search results pages. CHI EA</source> (pp. <fpage>2997</fpage>–<lpage>3002</lpage>). <publisher-name>ACM</publisher-name>.</mixed-citation></ref>
<ref id="b96"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="editor"><name><surname>Huang</surname> <given-names>J</given-names></name>, <name><surname>White</surname> <given-names>RW</given-names></name>, <name><surname>Dumais</surname> <given-names>S</given-names></name><role>, editors</role></person-group>. <article-title>No clicks, no problem: using cursor movements to understand and improve search2011.</article-title> <person-group person-group-type="editor"><name><surname>Cooke</surname> <given-names>L</given-names></name><role>, editor</role></person-group> Is the mouse a poor man's eye tracker?2006. Guo Q, Agichtein E, editors. Exploring mouse movements for inferring query intent. <source>Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval</source>; <year>2008</year>: ACM. Goecks J, Shavlik J, editors. Learning users' interests by unobtrusively observing their normal behavior2000.</mixed-citation></ref>
<ref id="b97"><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="editor"><name><surname>Cooke</surname> <given-names>L</given-names></name></person-group> <article-title>Is the mouse a poor man’s eye tracker?</article-title> In: <source>Proceedings of STC</source>. <year>2006</year>.</mixed-citation></ref>
<ref id="b98"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="editor"><name><surname>Guo</surname> <given-names>Q</given-names></name>, <name><surname>Agichtein</surname> <given-names>E</given-names></name></person-group>. <article-title>Exploring mouse movements for inferring query intent.</article-title> <source>Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval</source>; <year>2008</year>: <publisher-name>ACM.</publisher-name> <pub-id pub-id-type="doi">10.1145/1390334.1390462</pub-id></mixed-citation></ref>
<ref id="b99"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="editor"><name><surname>Goecks</surname> <given-names>J</given-names></name>, <name><surname>Shavlik</surname> <given-names>J</given-names></name></person-group>. <article-title>Learning users’ interests by unobtrusively observing their normal behavior.</article-title> In: <source>Proceedings of IUI</source>, <fpage>129</fpage>–<lpage>132</lpage>. <year>2000</year>. <pub-id pub-id-type="doi">10.1145/325737.325806</pub-id></mixed-citation></ref>
<ref id="b100"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="editor"><name><surname>Hijikata</surname>, <given-names>Y.</given-names></name> (<role>Ed.</role>)</person-group>. (<year>2004</year>). <article-title>Implicit user profiling for on demand relevance feedback.</article-title><source>In Proceedings of IUI,198–205.</source>. <pub-id pub-id-type="doi">10.1145/964442.964480</pub-id></mixed-citation></ref>
<ref id="b101"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="editor"><name><surname>Shapira</surname>, <given-names>B.</given-names></name>, <name><surname>Taieb-Maimon</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Moskowitz</surname>, <given-names>A.</given-names></name> (<role>Eds.</role>)</person-group>. (<year>2006</year>). <article-title>Study of the usefulness of known and new implicit indicators and their optimal combination for accurate inference of users interests</article-title><source>In: Proceedings of SAC, 1118–1119.</source>. <pub-id pub-id-type="doi">10.1145/1141277.1141542</pub-id></mixed-citation></ref>
<ref id="b102"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Demšar</surname>, <given-names>U.</given-names></name>, &#x26; <name><surname>Çöltekin</surname>, <given-names>A.</given-names></name></person-group> (<year>2017</year>). <article-title>Quantifying gaze and mouse interactions on spatial visual interfaces with a new movement analytics methodology.</article-title> <source>PLoS One</source>, <volume>12</volume>(<issue>8</issue>), <fpage>e0181818</fpage>. <pub-id pub-id-type="doi" specific-use="author">10.1371/journal.pone.0181818</pub-id><pub-id pub-id-type="pmid">28777822</pub-id><issn>1932-6203</issn></mixed-citation></ref>
<ref id="b103"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Groner</surname>, <given-names>R.</given-names></name>, <name><surname>Raess</surname>, <given-names>R.</given-names></name>, &#x26; <name><surname>Sury</surname>, <given-names>P.</given-names></name></person-group> (<year>2008</year>). <chapter-title>Usability: Gestaltung und Optimierung von Benutzerschnittstellen</chapter-title>. In <person-group person-group-type="editor"><name><given-names>B.</given-names> <surname>Batinic</surname></name> (<role>Ed.</role>),</person-group> <source>Lehrbuch Medienpsychologie, Kapitel 18</source>. <publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name>. <pub-id pub-id-type="doi">10.1007/978-3-540-46899-8_18</pub-id></mixed-citation></ref>
</ref-list>

<app-group>
	<app>
	
      <title>Appendix</title>

  <sec id="appendix-1">
  <title>Appendix 1</title>
  <p>Utilized stimulus material</p>
  
  <p><sc>Animate</sc></p>
  <list list-type="bullet">
    <list-item>
      <p><italic>human/animal</italic>: simple; complex</p>
    </list-item>
    <list-item>
      <p><italic>plants</italic>: simple; complex</p>
    </list-item>
  </list>
  
  <p><sc>Inanimate</sc></p>
  <list list-type="bullet">
    <list-item>
      <p><italic>artificial</italic>: simple; complex</p>
    </list-item>
    <list-item>
      <p><italic>natural</italic>: simple; complex</p>
    </list-item>
  </list>
</sec>

<sec id="appendix-2">
  <title>
  Appendix 2</title>
  <p>Instructions for mcAT trial</p>
  
  <p>Follow your viewing of the image with the computer mouse, so that
  your fixation points are transferred into mouse clicks. To optimize
  your eye-hand coordination and get a feel for the required click
  speed, you will first complete a short click training. Once you have
  passed the training, the actual test starts: for each of 64 images,
  follow your viewing process for 5 seconds; each image is preceded and
  followed by a separating fixation cross. Throughout the trial, try to
  keep your click speed constant, at a frequency of at least 1-2 clicks
  per second. Note that the click test cannot be paused once started,
  so try to stay fully concentrated until the end of the test
  (duration: 7.5 minutes). The click test is followed by a brief survey
  about the pictures you have just viewed: for each of 16 images (1 per
  page; 8 shown in the test, 8 not; 1 from each category), indicate
  whether or not you saw it in the test. This allows us to evaluate the
  technical quality of your click performance.</p>
</sec>

<sec id="appendix-3">
  <title>
  Appendix 3</title>
  <p>Sources of noise</p>
  
  <p>Sources of noise in classical eye-tracking (emAT):</p>
  <list list-type="bullet">
    <list-item>
      <p>Gaze direction is not always identical to attention direction
      (Anderson et al., 2015; Chun &#x26; Wolfe, 2005). It can be assumed
      that, under normal viewing conditions, attention precedes the
      gaze. This introduces a delay into the measured data.</p>
    </list-item>
    <list-item>
      <p>When the gaze follows attention, it typically does so with
      saccades aimed at the current attention direction. However,
      saccades are not always precise: when a saccade misses the actual
      attention direction, an additional (corrective) saccade is
      performed. In such a case, emAT measures multiple saccades and
      fixations although only one attentional shift has occurred. These
      additional saccades and fixations do not correspond to attention;
      they are noise. (Saccades representing a strategic undershoot or
      overshoot are, of course, also possible.)</p>
    </list-item>
    <list-item>
      <p>Blinks are an additional source of noise. However, we attempted
      to remove all blinks from the emAT data in this study.</p>
    </list-item>
    <list-item>
      <p>An underlying assumption of emAT is that fixations directly
      correspond to attention directions. However, classifying raw
      eye-tracker data into fixations is often hard: depending on the
      temporal and spatial parameters of the fixation detection
      algorithm, differing portions of the trajectory are classified as
      fixation. There is no optimal parameter regime; some fixations
      will therefore always be mis-detected, and these can be seen as
      noise (a schematic detector is sketched after this list).</p>
    </list-item>
    <list-item>
      <p>The measurement of gaze direction is technically demanding,
      because small differences in eye position correspond to large
      differences in gaze position. The measurement therefore has to be
      highly accurate at the raw-data level to avoid large errors at
      the level of gaze positions. Head movements in all directions
      create additional difficulty. Altogether, these technical
      difficulties lead to more or less noisy data.</p>
    </list-item>
  </list>
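  <p>To illustrate how these parameter choices shape fixation
  detection, the following dispersion-threshold (I-DT) detector is a
  minimal sketch in Python; the default thresholds (30 px dispersion,
  100 ms duration) are hypothetical examples, not the settings used in
  this study:</p>
  <preformat>
# Minimal dispersion-threshold (I-DT) fixation detector (illustrative).
# samples: list of (t, x, y) tuples; time in seconds, position in pixels.
def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """Return fixations as (t_start, t_end, x_mean, y_mean)."""
    fixations = []
    i = 0
    while i &lt; len(samples):
        # Grow a window from i until its dispersion exceeds the threshold.
        j = i + 1
        while j &lt;= len(samples):
            xs = [x for _, x, _ in samples[i:j]]
            ys = [y for _, _, y in samples[i:j]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        window = samples[i:j - 1]
        # Keep the window as a fixation only if it lasts long enough.
        if len(window) >= 2 and window[-1][0] - window[0][0] >= min_duration:
            xs = [x for _, x, _ in window]
            ys = [y for _, _, y in window]
            fixations.append((window[0][0], window[-1][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j - 1   # continue after the detected fixation
        else:
            i += 1      # no fixation here; slide the window by one sample
    return fixations
  </preformat>
  <p>Raising max_dispersion or lowering min_duration classifies more of
  the trajectory as fixation; there is no single correct setting.</p>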
  <p>Analogous to the physiological and technical sources of noise in
  emAT, mcAT must deal with both kinds of noise sources:</p>
  <list list-type="bullet">
    <list-item>
      <p>The hand is much slower than the eye. It can therefore be
      assumed that the lag between the attention location and the
      measured position is even larger than in emAT.</p>
    </list-item>
    <list-item>
      <p>Mouse movements are also not precise.</p>
    </list-item>
  </list>
  <p><italic>Stimulus material:</italic> There remains a certain
  arbitrariness in the derivation and definition of the image
  categories. In addition, real scenes usually contain mixed forms of
  the categories, such as simultaneous displays of plants and
  animals.</p>
  
  <p><italic>Sample</italic>: Although both samples are similar in
  their demographic structure, the subjects were exclusively students
  aged between 18 and 26 years, which covers only a small segment of
  the population. One advantage of the mcAT procedure is that it can be
  administered to large samples and to many different audiences.</p>
  
  <p><italic>Statistical analysis:</italic> As the determination of
  semantically oriented AOIs involves certain disadvantages with the
  stimulus material used, dividing the images into grids was the better
  alternative. However, the optimal number of fields per frame could
  not be determined. Since each emAT and mcAT data pair per AOI counted
  as one case in the analysis, an overly large number of fields, and
  hence of cases, might cause small, practically negligible effects to
  be rated as statistically significant. For this reason, we calculated
  and assessed effects across both linked measures (correlation and
  AUC) and parameters (attention and contact). To test the significance
  of differences between the correlation values of the individual
  picture categories, an online calculator for the comparison of two
  correlation coefficients
  (<ext-link ext-link-type="uri" xlink:href="http://www.socscistatistics.com/tests/pearson/" xlink:show="new">http://www.socscistatistics.com/tests/pearson/</ext-link>)
  was used. The emAT and mcAT metrics are independent in terms of
  subjects and measurement methods, but not with respect to the
  stimulus material: the stimuli are the same for both maneuvers, while
  the maneuvers themselves differ in timing and dynamics.</p>
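  <p>The comparison of two independent correlations, as performed by
  such calculators, is conventionally based on the Fisher r-to-z
  transform; a minimal sketch follows (the example values are
  hypothetical):</p>
  <preformat>
import math

def compare_correlations(r1, n1, r2, n2):
    """Two-tailed test of H0: rho1 = rho2 for two independent Pearson
    correlations, via the Fisher r-to-z transform. Returns (z, p)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)          # Fisher transforms
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # SE of z1 - z2
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2.0))           # two-tailed p-value
    return z, p

# Hypothetical example: r = .62 (n = 64) vs. r = .45 (n = 64).
print(compare_correlations(0.62, 64, 0.45, 64))
  </preformat>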
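  <p>For readers who wish to reproduce the AUC measure (cf. Fawcett,
  2006), the following Python sketch computes a rank-based AUC from
  per-AOI data. Treating the mcAT attention values as scores and the
  emAT data as binary labels (AOI fixated or not) reflects our reading
  of the procedure; the helper is illustrative rather than the exact
  analysis pipeline:</p>
  <preformat>
# Pairwise AUC (illustrative): the probability that a randomly chosen
# fixated AOI receives a higher mcAT score than a non-fixated one.
def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]  # fixated AOIs
    neg = [s for s, l in zip(scores, labels) if l == 0]  # non-fixated AOIs
    if not pos or not neg:
        raise ValueError("need both fixated and non-fixated AOIs")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)                # ties count 0.5
    return wins / (len(pos) * len(neg))
  </preformat>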
</sec>

	</app>
</app-group>

</back>
</article>
