<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.10.5.a</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Eye Tracking and Visualization: Introduction to the Special Thematic Issue of the Journal of Eye Movement Research</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Burch</surname>
						<given-names>Michael</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Chuang</surname>
						<given-names>Lewis L.</given-names>
					</name>
					<xref ref-type="aff" rid="aff2">2</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Duchowski</surname>
						<given-names>Andrew</given-names>
					</name>
					<xref ref-type="aff" rid="aff3">3</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Weiskopf</surname>
						<given-names>Daniel</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Groner</surname>
						<given-names>Rudolf</given-names>
					</name>
					<xref ref-type="aff" rid="aff4">4</xref>
				</contrib>				
        <aff id="aff1">
		<institution>Visualization Research Center, University of Stuttgart</institution>,   <country>Germany</country>
        </aff>
        <aff id="aff2">
		<institution>Max Planck Institute for Biological Cybernetics, Tübingen</institution>,   <country>Germany</country>
        </aff>
        <aff id="aff3">
		<institution>Clemson University</institution>,   <country>USA</country>
        </aff>
        <aff id="aff4">
		<institution>scians Ltd, Bern, and University of Bern</institution>,   <country>Switzerland</country>
        </aff>				
		</contrib-group>   

		
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>25</day>  
		<month>5</month>
        <year>2018</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2017</year>
	</pub-date>
      <volume>10</volume>
      <issue>5</issue>
	 <elocation-id>10.16910/jemr.10.5.1</elocation-id> 
	<permissions> 
	<copyright-year>2017</copyright-year>
	<copyright-holder>Burch, M., Chuang, L., Duchowski, A., Weiskopf, D., Groner, R.</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>There is a growing interest in eye tracking technologies
    applied to support traditional visualization techniques like diagrams,
    charts, maps, or plots, either static, animated, or interactive ones. More
    complex data analyses are required to derive knowledge and meaning from
    the data. Eye tracking systems serve that purpose in combination with
    biological and computer vision, cognition, perception, visualization,
    human-computer interaction, as well as usability and user experience
    research. The ten articles collected in this thematic special issue provide
    interesting examples of how sophisticated methods of data analysis and
    representation enable researchers to discover and describe fundamental
    spatio-temporal regularities in the data. The human visual system,
    supported by appropriate visualization tools, enables the human operator
    to solve complex tasks, like understanding and interpreting
    three-dimensional medical images, controlling air traffic by radar
    displays, supporting instrument flight tasks, or interacting with virtual
    realities. The development and application of new visualization techniques
    is of major importance for future technological progress.</p>
      </abstract>
      <kwd-group>
        <kwd>Eye movement</kwd>
        <kwd>eye tracking</kwd>
        <kwd>visualization</kwd>
        <kwd>vision</kwd>
        <kwd>cognition</kwd>
        <kwd>perception</kwd>
        <kwd>human-computer interaction</kwd>
        <kwd>usability</kwd>
        <kwd>user experience</kwd>		
      </kwd-group>
    </article-meta>
  </front>	
  <body>
    
    <p>There is a growing interest in eye tracking technologies, particularly
       applied to understand visualization techniques like diagrams, charts,
       maps, or plots, whether static, animated, or interactive. However, in
       most cases the recorded eye movement data are too complex to be
       efficiently and fully explored with the traditional visualization
       techniques integrated into eye tracking systems, such as charts of
       frequency distributions or heat maps of perceptual and cognitive
       parameters. More complex data analyses are required to derive knowledge
       and meaning from the data, in particular to support the detection of
       design flaws, misinterpretations, and perceptual illusions in the
       stimuli, or to observe visual or attentional problems, all of which
       degrade performance at certain tasks.</p>

    <p>More advanced analyses can proceed in two ways: either by algorithmic
    approaches that try to reduce the amount of data with the goal of finding
    patterns, rules, and correlations, or by visualization approaches that try
    to exploit the strengths of the human visual system for rapidly detecting
    visual patterns. Combining algorithms and interactive visualizations with
    the user in the loop is a powerful concept nowadays referred to as visual
    analytics, with the goal of gaining knowledge from heterogeneous,
    conflicting, and big data, which is a reasonable description of eye
    movement data, as many eye movement researchers will attest. “Computers
    are incredibly fast, accurate and stupid; humans are incredibly slow,
    inaccurate and brilliant; together they are powerful beyond imagination”
    (a quotation attributed to Albert Einstein; see, however, the critique by
    Shoemaker (
        <xref ref-type="bibr" rid="b13">13</xref>)
		). Effective visualization
    techniques allow for complex information organization and re-organization
    without compromising serendipitous discovery.</p>

    <p>The immense flood of eye movement data has been made possible by the
    technological advances in computer vision algorithms combined with
    sophisticated sensor hardware that have become affordable to many
    researchers and research institutions all over the world. In fact, we are
    likely to witness even more complex eye movement datasets with the growing
    integration of eye trackers in consumer products as well as mobile eye
    tracking in the wild (
        <xref ref-type="bibr" rid="b2">2</xref>
		). These advances
    make eye tracking applicable to several research fields like psychology,
    neuroscience, and optometry.</p>

    <p>Eye movement data have an inherent spatio-temporal nature complemented
    by additional metrical and physiological data in a multivariate and
    dynamic form. Interpreting such complex and time-varying data together
    with the semantic meaning of the displayed stimuli consequently requires
    time-efficient algorithmic and heuristic concepts. To do this, the field
    of eye tracking and visualization has to take into account several related
    and interdependent fields like biological and computer vision, cognition,
    perception, visualization, human-computer interaction, and usability and
    user experience research. Through the combination and interplay of these
    research disciplines, eye tracking becomes a powerful tool for evaluating
    stimuli like visualizations, and visualization, in turn, becomes a
    powerful tool for finding insights in the spatio-temporal domain of eye
    movement data.</p>

    <p>This special issue follows from the Second Workshop on Eye Tracking and
    Visualization (ETVIS 2016), held in conjunction with the IEEE VIS
    Conference (
<ext-link ext-link-type="uri" xlink:href="http://ieeevis.org" xlink:show="new">http://ieeevis.org</ext-link>	
		) in Baltimore, Maryland, USA, on October 23, 2016. For this
    special issue of the Journal of Eye Movement Research, authors who
    presented papers at the ETVIS 2016 workshop were invited to submit
    extended versions of their research as full-length articles. In addition,
    an open call invited interested researchers to submit articles that had
    not been presented at the workshop. Following the normal peer-review
    process, with two to five reviews per submission according to the
    standards of the Journal of Eye Movement Research, some submissions were
    not accepted for publication. Those that were revised and finally accepted
    by the editors of the special issue (who are also the authors of this
    introduction) were published in this special thematic issue on a rolling
    basis, immediately after final acceptance.</p>

    <p>In the article “A quality-centered analysis of eye tracking data in
    foveated rendering” the authors (
        <xref ref-type="bibr" rid="b10">10</xref>
		) present an analysis of eye tracking data for evaluating a
    foveated rendering approach for head-mounted displays. Foveated rendering
    methods adapt the image synthesis process to the user’s gaze, exploiting
    the capacities of the human visual system to increase rendering
    performance. Foveated rendering has great potential when certain
    requirements are fulfilled, such as low-latency rendering to cope with the
    high display refresh rates that are crucial for virtual reality at a high
    level of immersion.</p>

    <p>The contribution “Gaze self-similarity plot - a new visualization
    technique” (
        <xref ref-type="bibr" rid="b6">6</xref>
		) introduces a technique called
    gaze self-similarity plot that can be applied to visualize both spatial
    and temporal eye movement features on a single two-dimensional plot. The
    technique is an extension of the idea of recurrence plots, commonly used
    in time series analysis. The paper presents the basic concepts of the
    proposed approach together with some examples of what kind of information
    may be disclosed and shows some possible applications.</p>

    <p>With the increasing number of studies where participants’ eye
    movements are tracked while watching videos, the volume of gaze data
    records is growing immensely. In “Visual analytics of gaze data with
    standard multimedia player” Schöning, Gundler, Heidemann, König, &#x26;
    Krumnack (
        <xref ref-type="bibr" rid="b11">11</xref>
		) define and utilize an exchange format that can be
    interpreted by standard multimedia players and can be streamed via
    Internet by using multimedia container formats for distributing and
    archiving eye-tracking and gaze data bundled with the watched videos.</p>

    <p>Several popular visualizations of gaze data, such as scanpaths (
        <xref ref-type="bibr" rid="b4">4</xref>
		) and heatmaps, can be used independently of the
    viewing task. For a specific task, such as reading, more informative
    visualizations can be created, as Špakov, Siirtola, Istance, &#x26; Räihä
    (
        <xref ref-type="bibr" rid="b12">12</xref>
		) demonstrate in “Visualizing the reading activity of people learning
    to read”. The authors present several static and dynamic techniques to
    communicate the reading activity of children to primary school teachers.
    Static visualizations help in getting a simple overview of how the
    children read as a group and of their active vocabulary, while dynamic
    visualizations help to give the teachers a good understanding of how the
    individual students read.</p>

    <p>Although eye tracking data are in wide usage, little has been done to
    visually represent the uncertainty of recorded gaze data. In “Uncertainty
    visualization of gaze estimation to support operator controlled
    calibration” Hassoumi, Peysakhovich, &#x26; Hurter (
        <xref ref-type="bibr" rid="b5">5</xref>
		) demonstrate how
    visualization assets can support the qualitative evaluation of gaze
    estimation uncertainty. The authors’ gaze data processing method provides
    an estimate of uncertainty at every stage of the data transformation.</p>

    <p>In the article “A skeleton-based approach to analyze and visualize
    oculomotor behavior when viewing animated characters” Le Naour and
    Bresciani (
        <xref ref-type="bibr" rid="b8">8</xref>
		) propose a new approach to quantify and visualize the
    oculomotor behavior of viewers watching the movements of animated
    characters in dynamic sequences. They illustrate the gaze distribution of
    one or several viewers by visualizing viewer timelines that combine the
    spatial and temporal characteristics of the gaze pattern, which provides
    an efficient tool for comparing the oculomotor behaviors of different
    viewers.</p>

    <p>In their study “Eye movement planning on single-sensor-single-indicator
    displays is vulnerable to user anxiety and cognitive load” Allsop, Gray,
    Bülthoff, and Chuang (
        <xref ref-type="bibr" rid="b1">1</xref>
		) demonstrate the effects of anxiety and
    cognitive load on eye movement planning in an instrument flight task based
    on a single-sensor-single-indicator data visualization design philosophy.
    The task was performed under neutral and anxiety conditions, while
    cognitive load was manipulated with a low- or high-load auditory
    <italic>n</italic>-back task. Higher cognitive load led to a reduction in
    the number of transitions between instruments and impaired task
    performance. The results suggest that both cognitive load and anxiety
    influence gaze behavior. These effects should be taken into account when
    designing data visualization displays.</p>

    <p>In “Scanpath visualization and comparison using visual aggregation
    techniques” Peysakhovich and Hurter (
        <xref ref-type="bibr" rid="b9">9</xref>
		) demonstrate the use of
    different visual aggregation techniques to obtain visual representations
    of scanpaths. Fixation points and saccades are aggregated using an
    algorithm whose edge-compatibility criterion handles saccade direction,
    onset timestamp, magnitude, or their combination. Flow direction maps,
    computed during bundling, can be visualized separately or as a single
    image. The authors provide examples of basic patterns, a visual search
    task, and art perception. Used together, the applied techniques provide
    interesting new information about the eye movement data.</p>

    <p>Kumar, Netzel, Burch, Weiskopf and Mueller (
        <xref ref-type="bibr" rid="b7">7</xref>
		) in their article
    “Visual multi-metric grouping of eye-tracking data” present an algorithmic
    and visual grouping of eye-tracking data using two visualization concepts.
    First, parallel coordinates are used to provide an overview of the used
    metrics, their interactions, and similarities. Next, a similarity matrix
    is used to visually represent the affine combination of metrics. In an
    algorithmic grouping of subjects the eye-tracking data are encoded into
    the cells of a similarity matrix of participants, a procedure that leads
    to distinct visual groups of similar behavior. The authors illustrate this
    visualization by a data set of subjects reading metro maps.</p>

    <p>In the article “Using simultaneous scanpath visualization to
    investigate the influence of visual behaviour on medical image
    interpretation” Davies, Vigo, Harper, and Jay (
        <xref ref-type="bibr" rid="b3">3</xref>
		) explore how a
    number of novel methods for visualizing and analyzing differences in
    eye-tracking data, including scanpath length, Levenshtein distance, and
    visual transition frequency, can help to reveal the methods clinicians use
    for interpreting electrocardiograms. Visualizing the differences between
    the scanpaths of the participants simultaneously gave answers to questions
    whether clinicians fixate randomly on the electrocardiograms or apply a
    systematic approach, and about the relationship between interpretation
    accuracy and visual behavior. Results indicate that practitioners have
    very different visual search strategies. Clinicians who incorrectly
    interpret the image have greater scanpath variability than those who
    correctly interpret it.</p>

    <p>Taken together, the ten articles collected in this thematic special
    issue provide interesting examples of how sophisticated methods of data
    analysis and representation enable researchers to discover and describe
    fundamental spatio-temporal regularities in the data. On the other hand,
    the human visual system, equipped with appropriate visualization tools,
    supports the human operator in complex tasks, like understanding and
    interpreting three-dimensional medical images, controlling air traffic by
    radar displays, performing an instrument flight task, or interacting with
    virtual realities. The development and application of new visualization
    techniques is of major importance for future technological progress.</p>

    <sec id="S1">
      <title>Acknowledgement</title>
		
    <p>The research of Daniel Weiskopf and Lewis Chuang is supported by the
    DFG SFB-TRR 161 (B01 &#x26; C03).</p>	
      </sec>
  </body>	

<back>
<ref-list>
<ref id="b1"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Allsop</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Gray</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Bülthoff</surname>, <given-names>H. H.</given-names></string-name>, &#x26; <string-name><surname>Chuang</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2017</year>). Eye movement planning on single-sensor-single-indicator displays is vulnerable to user anxiety and cognitive load. Journal of Eye Movement Research, 10(5):8, 1-15.</mixed-citation></ref>
<ref id="b2"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Bulling</surname>, <given-names>A.</given-names></string-name>, &#x26; <string-name><surname>Gellersen</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Toward mobile eye-based human-computer interaction.</article-title> <source>IEEE Pervasive Computing</source>, <volume>9</volume>(<issue>4</issue>), <fpage>8</fpage>–<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1109/MPRV.2010.86</pub-id><issn>1536-1268</issn></mixed-citation></ref>
<ref id="b3"><mixed-citation publication-type="book" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Davies</surname>, <given-names>A.</given-names></string-name></person-group>, Vigo, Harper, S., &#x26; Jay, C. (<year>2017</year>). Using simultaneous scanpath visualization to investigate the relationship between accuracy and eye movement during medical image interpretation. Journal of Eye Movement Research, 10(5):11.</mixed-citation></ref>
<ref id="b4"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Groner</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Walder</surname>, <given-names>F.</given-names></string-name>, &#x26; <string-name><surname>Groner</surname>, <given-names>M.</given-names></string-name></person-group> (<year>1984</year>). <chapter-title>Looking at faces: Local and global aspects of scanpaths</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>A. G.</given-names> <surname>Gale</surname></string-name> &#x26; <string-name><given-names>F.</given-names> <surname>Johnson</surname></string-name> (<role>Eds.</role>),</person-group> <source>Theoretical and applied aspects of eye movement research</source>. <publisher-loc>Amsterdam</publisher-loc>: <publisher-name>North-Holland</publisher-name>.</mixed-citation></ref>
<ref id="b5"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Hassoumi</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Peysakhovich</surname>, <given-names>V.</given-names></string-name>, &#x26; <string-name><surname>Hurter</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2018</year>). Uncertainty visualization of gaze estimation to support operator controlled calibration. Journal of Eye Movement Research, 10(5):6, 1-15.</mixed-citation></ref>
<ref id="b6"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Kasprowski</surname>, <given-names>P.</given-names></string-name>, &#x26; <string-name><surname>Harezlak</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2017</year>). Gaze self-similarity plot - a new visualization technique. Journal of Eye Movement Research, 10(5):3, 1-14.</mixed-citation></ref>
<ref id="b7"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kumar</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Netzel</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Burch</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Weiskopf</surname>, <given-names>D.</given-names></string-name>, &#x26; <string-name><surname>Mueller</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Visual multi-metric grouping of eye-tracking data.</article-title> <source>Journal of Eye Movement Research</source>, <volume>10</volume>(<issue>5</issue>):10. <issn>1995-8692</issn></mixed-citation></ref>
<ref id="b8"><mixed-citation publication-type="journal" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Le Naour</surname>, <given-names>T.</given-names></string-name></person-group>, &#x26; Bresciani. (<year>2017</year>). <article-title>A skeleton-based approach to analyzing oculomotor behavior when viewing animated characters.</article-title> <source>Journal of Eye Movement Research</source>, <volume>10</volume>(<issue>5</issue>):7. <issn>1995-8692</issn></mixed-citation></ref>
<ref id="b9"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Peysakhovich</surname>, <given-names>V.</given-names></string-name>, &#x26; <string-name><surname>Hurter</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2018</year>). <article-title>Scanpath visualization and comparison using visual aggregation techniques.</article-title> <source>Journal of Eye Movement Research</source>, <volume>10</volume>(<issue>5</issue>):9. <issn>1995-8692</issn></mixed-citation></ref>
<ref id="b10"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Roth</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Weier</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Hinkenjann</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>Y.</given-names></string-name>, &#x26; <string-name><surname>Slusallek</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2017</year>). A quality-centered analysis of eye tracking data in foveated rendering. Journal of Eye Movement Research, 10(5):2, 1-12.</mixed-citation></ref>
<ref id="b11"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Schöning</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Gundler</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Heidemann</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>König</surname>, <given-names>P.</given-names></string-name>, &#x26; <string-name><surname>Krumnack</surname>, <given-names>U.</given-names></string-name></person-group> (<year>2017</year>). Visual analytics of gaze data with standard multimedia player. Journal of Eye Movement Research, 10(5):4, 1-14.</mixed-citation></ref>
<ref id="b12"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Špakov</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>Siirtola</surname>, <given-names>H.</given-names></string-name></person-group>, Istance, H., &#x26; Räihä, K.-J. (<year>2017</year>). Visualizing the reading activity of people learning to read. Journal of Eye Movement Research, 10(5):5, 1-12.</mixed-citation></ref>
<ref id="b13"><mixed-citation publication-type="web-page" specific-use="parsed"><person-group person-group-type="author"><string-name><surname>Shoemaker</surname>, <given-names>B.</given-names></string-name></person-group> (<year>2008</year>). <article-title>Einstein never said that.</article-title><ext-link ext-link-type="uri" xlink:href="http://www.benshoemate.com/2008/11/30/einsteinnever-said-that/">http://www.benshoemate.com/2008/11/30/einsteinnever-said-that/</ext-link></mixed-citation></ref>
</ref-list>
</back>
</article>