<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.11.3.6</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Eye-tracking en masse: Group user studies, lab infrastructure, and practices</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Bielikova</surname>
						<given-names>Maria</given-names>
					</name>
					<xref ref-type="aff" rid="aff1 aff2">1, 2</xref>
				</contrib>
 				<contrib contrib-type="author">
					<name>
						<surname>Konopka</surname>
						<given-names>Martin</given-names>
					</name>
					<xref ref-type="aff" rid="aff1 aff2">1, 2</xref>
				</contrib>
 				<contrib contrib-type="author">
					<name>
						<surname>Simko</surname>
						<given-names>Jakub</given-names>
					</name>
					<xref ref-type="aff" rid="aff1 aff2">1, 2</xref>
				</contrib>
 				<contrib contrib-type="author">
					<name>
						<surname>Moro</surname>
						<given-names>Robert</given-names>
					</name>
					<xref ref-type="aff" rid="aff1 aff2">1, 2</xref>
				</contrib>
 				<contrib contrib-type="author">
					<name>
						<surname>Tvarozek</surname>
						<given-names>Jozef</given-names>
					</name>
					<xref ref-type="aff" rid="aff1 aff2">1, 2</xref>
				</contrib>
 				<contrib contrib-type="author">
					<name>
						<surname>Hlavac</surname>
						<given-names>Patrik</given-names>
					</name>
					<xref ref-type="aff" rid="aff1 aff2">1, 2</xref>
				</contrib>
 				<contrib contrib-type="author">
					<name>
						<surname>Kuric</surname>
						<given-names>Eduard</given-names>
					</name>
					<xref ref-type="aff" rid="aff1 aff2">1, 2</xref>
				</contrib>                                         
			
        <aff id="aff1">
		<institution>User Experience and Interaction Research Center, Slovak University of Technology in Bratislava</institution>
        </aff>
        <aff id="aff2">
{maria.bielikova, martin_konopka, jakub.simko, robert.moro, jozef.tvarozek, patrik.hlavac, eduard.kuric}@stuba.sk	, www.uxi.sk	
        </aff>       
		</contrib-group>   

		
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>20</day>  
		<month>8</month>
        <year>2018</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2018</year>
	</pub-date>
      <volume>11</volume>
      <issue>3</issue>
	 <elocation-id>10.16910/jemr.11.3.6</elocation-id> 
	<permissions> 
	<copyright-year>2018</copyright-year>
	<copyright-holder>Bielikova, M., Konopka, M., Simko, J., Moro, R., Tvarozek, J., Hlavac, P., &#x26; Kuric, E.</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>The costs of eye-tracking technologies are steadily decreasing, which allows research institutions to obtain multiple eye-tracking devices. Several multiple-eye-tracker laboratories have already been established, and researchers are beginning to recognize the subfield of group eye-tracking. In comparison with single-participant eye-tracking, group eye-tracking brings new technical and methodological challenges, and solutions to these challenges are far from established within the research community. In this paper, we present the Group Studies system, which manages the infrastructure of the group eye-tracking laboratory at the User Experience and Interaction Research Center (UXI) at the Slovak University of Technology in Bratislava. We discuss the functional and architectural characteristics of the system and illustrate our infrastructure with one of our past studies. With this paper, we also publish the source code and the documentation of our system for re-use.</p>
      </abstract>
      <kwd-group>
        <kwd>Group eye-tracking</kwd>
        <kwd>eye tracking</kwd>
        <kwd>gaze</kwd>
        <kwd>infrastructure</kwd>
        <kwd>group user study</kwd>
        <kwd>user experience</kwd>
        <kwd>usability</kwd>
        <kwd>reading</kwd>   
        <kwd>interaction</kwd>                                                   
      </kwd-group>
    </article-meta>
  </front>	
  <body>

    <sec id="S1">
      <title>Motivation for group eye-tracking</title>

<p>Nowadays, technological advances and vendor competition are steadily lowering the
price of eye-tracking technology. Research institutions can buy more
eye-tracking equipment, i.e., more individual eye-tracking stations
(<xref ref-type="bibr" rid="b12">12</xref>). The
number of eye-trackers can easily surpass the number of rooms available
to house them, or the number of personnel available (and able) to use
them in a &#x201C;traditional&#x201D; study setup. By a traditional setup, we mean
studies in which the participants work one at a time on a single
eye-tracking station and the study moderators manage the
participants one-to-one. As prices drop, an institution may want
to furnish a couple more &#x201C;traditional setup&#x201D; eye-tracking labs. This way,
however, the low prices of eye-tracking technology cannot really be
exploited, because the personnel and overhead costs would need to scale
as well.</p>

<p>It may, however, make sense to arrange multiple eye-trackers in a
different setup. Multiple eye-tracking stations (e.g., 20 PCs) can be
placed together into a <italic>group eye-tracking room</italic>. In such
a physical setup, studies no longer have a
<italic>one-to-one</italic> but a <italic>one-to-many</italic> design, in
which the moderator simultaneously manages multiple
participants working in parallel.</p>

<p>A group eye-tracking setup requires a special infrastructure. The
system must provide means to design the experiments, effectively
distribute the scenarios to workstations, orchestrate the work and
collect the recorded data.</p>

<p>Laboratories with group eye-tracking setups have
existed for some years in several (but not many) institutions worldwide
(<xref ref-type="bibr" rid="b14 b2 b8 b6 b19">14, 2, 8, 6, 19</xref>). Software solutions with features allowing the
control of multiple eye-trackers have also started to emerge (<xref ref-type="bibr" rid="b12">12</xref>). However, as a discipline, group eye-tracking is not yet
well described, discussed, or methodologically established. Only
recently have fora dedicated to this field started to emerge, and seasoned
infrastructural solutions are not yet available.</p>

    <sec id="S1a">
      <title>Contribution of this paper</title>

<p>With this paper, we aim to contribute to the forming field of group
eye-tracking. We present the <italic>infrastructure of our group
eye-tracking laboratory</italic>, which we developed at the User
Experience and Interaction Research Center (UXI) of the Slovak
University of Technology in Bratislava.</p>

<p>Our system, called <italic>Group Studies,</italic> places all the
eye-tracking stations under one umbrella so they can be easily controlled. In
this paper, we discuss various aspects of this system, mostly from a
functional and architectural perspective. We put a strong emphasis on the
flexibility of the study design process, extensibility, and the integration
of our system with other applications. To better illustrate the potential
of our infrastructure, this paper also presents an example study from
the domain of programmer eye-tracking.</p>

<p>With this paper, we also publish the source code of our
infrastructure along with the necessary technical documentation. Our
solution can thus be used by any individual or institution wishing to
use group eye-tracking.</p>
    </sec>
    </sec>	

    <sec id="S2">
      <title>Background</title>

<p>In comparison with the traditional setup, <italic>group
eye-tracking</italic> has several advantages, but also limitations and
challenges. Depending on the study requirements, the trade-off between
the pros and cons can, in many cases, play in favor of the group
setup.</p>

<p>The advantages and benefits of the group eye-tracking include:</p>
<list list-type="order">
  <list-item>
    <p><italic>Time and effort savings</italic>. If the study
    participants work in parallel, the total duration of the experiment
    sessions can be radically cut down. The effort needed to
    moderate the sessions scales down, too (e.g., a couple of moderators
    to tens of participants). This especially shortens studies that rely on
    an automated quantitative evaluation. Naturally, if there is a need
    for manual evaluation (coding) of the recorded sessions, the group
    setup differs only little from the single-device setup, as both
    require human labor.</p>
  </list-item>
  <list-item>
    <p><italic>Move towards uniform experiment conditions</italic>. Group
    studies have all participants work at the same time, at the
    same place, and listen to the same instructions. This lowers the risk
    of biases caused by uncontrolled environment variables.</p>
  </list-item>
  <list-item>
    <p><italic>Possibilities for collaborative scenarios</italic>. Using
    multiple eye-tracking stations at once opens a completely new domain
    of studies, where interaction of participants is involved. Examples
    already exist, e.g., in collaborative gaming, learning or search
    (<xref ref-type="bibr" rid="b1 b13 b18">1, 13, 18</xref>).</p>
  </list-item>
</list>

<p>The limitations and challenges of the group eye-tracking include:</p>
<list list-type="order">
  <list-item>
    <p><italic>Need for a non-trivial infrastructure</italic>, which
    must provide means for a central control of the study process and
    enable integration with the experimental and analytic applications,
    required for the study. Addressing this concern is a primary
    contribution of this paper.</p>
  </list-item>
  <list-item>
    <p><italic>Study organization issues.</italic> In group studies, we
    have less control over the individual participants, fewer
    instructing options, tighter scheduling, etc.</p>
  </list-item>
  <list-item>
    <p><italic>Data quality issues</italic>. The reduced control over the
    participants throughout the experiment may lower the quality of the
    acquired eye-tracking data.</p>
  </list-item>
  <list-item>
    <p><italic>Potential interactions between the participants</italic>.
    Some studies suggest that the very presence of other participants
    may influence the outcome of certain metrics (<xref ref-type="bibr" rid="b16">16</xref>).
    The participants may also disturb each other, for example by noise.
    Therefore, certain types of experiments may not be possible (e.g.,
    when we need the participants to express themselves verbally during
    a think-aloud protocol).</p>
  </list-item>
</list>

<p>Today, group eye-tracking requires a custom software and hardware
infrastructure. The available eye-tracking software tools (e.g., Tobii
Studio, SMI Experiment Center, OGama) are suited for single-user
experiments and are generally inadequate for group studies. To be
fair, we are aware of initiatives by some traditional eye-tracker
vendors and other companies to develop solutions for group
eye-tracking use. There are also some open-source initiatives (<xref ref-type="bibr" rid="b12">12</xref>). Still, as in our case, laboratories tend to develop and
maintain their own solutions for practical reasons.</p>

<p>The main source of inadequacy of the existing tools is the absence of
proper study management features. Especially missing are features
for the centralized remote control and monitoring of the study process.
A group study also requires an effective distribution and collection of
the required data (stimuli, tasks, logs) to and from the workstations.
Finally, a high-level programmatic control of the eye-trackers is seldom
available outside of the vendors&#x2019; <italic>canned</italic> (closed)
tools. Although the eye-trackers do have low-level SDKs, these require a
lot of programming effort to be set up for studies. This hampers the
integration of external applications, which is often required in the studies.</p>

<p>We have overcome some of these challenges by building a custom
infrastructure for the group eye-tracking laboratory at our
institute.</p>
    </sec>
	
    <sec id="S3">
      <title>Infrastructure overview</title>

<p>Our system, the <italic>Group Studies,</italic> was developed and is
currently deployed at the User Experience and Interaction Research
Center (UXI) at the Slovak University of Technology in Bratislava.</p>

<p>The principal high-level requirements of the <italic>Group
Studies</italic> system are:</p>
<list list-type="order">
  <list-item>
    <p>To run the eye-tracking experiments on the individual
    workstations in the group eye-tracking laboratory (room).</p>
  </list-item>
  <list-item>
    <p>To allow a centralized design and scheduling of the
    experiments.</p>
  </list-item>
  <list-item>
    <p>To monitor the experiments centrally.</p>
  </list-item>
  <list-item>
    <p>To access the recorded data centrally.</p>
  </list-item>
</list>

<p>Following these requirements, we designed the <italic>Group
Studies</italic> system as a thick client-server application. The system
consists of two principal components:</p>
<list list-type="order">
  <list-item>
    <p><italic>UXR</italic> (UX Research): a web-based management
    application for administration of the experiments. This application
    is deployed on a physical server in the laboratory.</p>
  </list-item>
  <list-item>
    <p><italic>UXC</italic> (UX Client): a desktop-based client
    application, which executes the experiment sessions. This
    application is deployed on every workstation in the laboratory (PC
    with an eye-tracker).</p>
  </list-item>
</list>

<p>Our system works primarily with the Tobii technology but allows
integration with devices from other vendors. It is implemented in C#,
utilizes the .NET and Windows ecosystem, and relies on a fast intranet
connection between its elements (10 Gbps in our case). However, since
the bulk of the recorded data (screen recordings, eye-tracker logs,
etc.) is sent from the individual workstations to the server after the
session ends, it could in theory also be used in a setup in which the
server and clients do not reside on the same local network.</p>

<p>The system was designed iteratively and incrementally. We base it on
our experience with the study organization systems and experimental
education platforms, which support group classroom experiments (<xref ref-type="bibr" rid="b25 b26">25, 26</xref>). We were also
inspired by crowdsourcing systems such as Mechanical Turk and systems
for interactive experiment support (<xref ref-type="bibr" rid="b21">21</xref>).</p>

<p>Our system distinguishes between two types of users: (1) the
<italic>study owners</italic>, who interact with both the UXR and the UXC, and
(2) the <italic>study participants</italic>, who interact with the UXC.
The study owner role covers the study designers, moderators, and
analysts.</p>

<p>Following is a typical workflow of an experiment in the <italic>Group
Studies</italic> (see also Figure 1):</p>
<list list-type="order">
  <list-item>
    <p>The study owner defines the experiment (scenario).</p>
  </list-item>
  <list-item>
    <p>The study owner schedules the experiment session(s) using the UXR
    web interface.</p>
  </list-item>
  <list-item>
    <p>During the experiment session, the study participants interact
    with the UXC (which runs all the necessary steps of the session,
    e.g., instructions, calibration, stimuli, questionnaires). When
    necessary, 3<sup>rd</sup> party applications can exchange events and
    gaze data with the UXC as well.</p>
  </list-item>
  <list-item>
    <p>When the session ends, the UXC uploads all recorded data to the
    UXR.</p>
  </list-item>
  <list-item>
    <p>The recordings are exported from the UXR for further
    analyses.</p>
  </list-item>
</list>

<fig id="fig1" fig-type="figure" position="float">
					<label>Figure 1</label>
					<caption>
						<p>Workflow of a typical experiment conducted in the Group Studies system.</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-11-03-f-figure-01.png"/>
				</fig>


<p><italic>The client application (UXC) is autonomous</italic> to a large extent. The
UXC can run on its own, without a connection to the UXR (the
server-side application of our system). This has several advantages.
First, when experiments are run in the group session laboratory, the
system is less prone to server and network failures. Second, it allows
the experiments to be designed and tested anywhere, using only a single
machine, even without an eye-tracker (which can be substituted by a
mock input).</p>

<p>The experiment is defined using a data structure called the
<italic>Session definition</italic>. This structure is stored as a JSON
file. It contains various setup parameters and, most importantly, the
timeline. The timeline is a sequence of stimuli, questionnaires,
calibrations, calibration validations, and other events that the
participants encounter during the session.</p>
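<p>A minimal Session definition could look as follows. Note that the field and step type names shown here are illustrative assumptions of ours; the authoritative schema is described in the UXC documentation.</p>

```json
{
  "deviceTypes": ["eyetracker", "screencast"],
  "timeline": [
    { "type": "etCalibration" },
    { "type": "instructions", "text": "Welcome! Press Continue to start." },
    { "type": "showDesktop", "durationSeconds": 120 }
  ]
}
```

<p>Because the whole scenario is a single plain-text file, it can be edited in any text editor and shared between the study owners like any other source file.</p>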

<p>Defining the timeline through JSON files differs from other
eye-tracking tools, which usually use graphical interfaces. We chose
this approach because of its:</p>
<list list-type="bullet">
  <list-item>
    <p><italic>Flexibility</italic>. The experiment owner can write the
    Session definition JSON anywhere. He/she can then load it directly
    into a UXC instance (for testing purposes) or distribute it through
    the UXR to all the workstations in the lab (when the real experiment is
    about to start).</p>
  </list-item>
  <list-item>
    <p><italic>Transparency</italic>. The experiment owner can rely
    solely on the content of the JSON file. There are no
    &#x22;invisible&#x22; side effects, as the UXC literally interprets
    the contents of the timeline.</p>
  </list-item>
  <list-item>
    <p><italic>Versionability</italic>. The JSON files can easily be
    versioned in the source code control tools.</p>
  </list-item>
  <list-item>
    <p><italic>Maintainability</italic>. We did not have to write any
    graphical timeline definition tool, either in the UXR or the UXC
    (which was a design dilemma on its own). This made future
    functionality extensions of our system easier.</p>
  </list-item>
</list>

<p>A downside of such a &#x22;programmatic&#x22; approach is, of
course, the lower accessibility of our system for study owners with a
non-technical background. Yet, using a learn-by-example approach, even
non-technical persons can quickly grasp the principles of the JSON
session definition, especially when they have access to a battery of
example scenarios.</p>
    </sec>
	
    <sec id="S4">
      <title>System functionality</title>

<p>The following section lists the functionality provided by the
<italic>Group Studies</italic> system. We present it component-wise
(first the UXR, then the UXC). In addition, we describe the options available
in the <italic>Session definition</italic> JSON file, which is defined
outside of both components.</p>

    <sec id="S4a">
      <title>Functionality of the UXR (server) application</title>

<p><italic>Create a new project (experiment)</italic>. The experiment
owner creates a new experiment in the system and describes it with a name,
free-text details, and a session definition file from which all newly
scheduled sessions will inherit.</p>

<p><italic>Schedule the experiment</italic>. The experiment owner plans
the experiment as one or multiple sessions. The start and end times of
each session must be defined. The session timespans may overlap (we
allow sessions to run in parallel). This allows the introduction of
some variability in <italic>Session definitions</italic> among
participants of the same experiment (for example, the study owner may
want to counterbalance the task order). Each scheduled session inherits
its <italic>Session definition</italic> from the default definition
specified for the project, but the experiment owner may modify it
in any way, for example to provide an alternative stimuli timeline
per group of participants.</p>

<p><italic>Load Session definition</italic>. The experiment owner loads the prepared
JSON file with the experiment scenario into the project or an individual
session.</p>

<p><italic>Alter Session definition.</italic> The study owner may alter
each Session definition, even for the same project, e.g., to change the
tasks or the stimuli between the groups of participants.</p>

<p><italic>Integrate external applications</italic>. The study owner
registers all applications that will interact in real time with the
client application (UXC) during the experiment. The UXC enables this through
a local web API. The interaction may be outbound and/or inbound. Outbound
interaction stands for feeding the gaze data to an external
application. Inbound interaction represents the case when the
external application feeds arbitrary logs (as JSON objects) into the UXC
application, so that they are timestamped and can later be retrieved with the
other recorded data. Such logs may be, for example, AOI hits occurring in
a dynamic environment of the external application, resolved by that
application itself based on the gaze data retrieved from the UXC. (In that
case, the application should define the AOIs as well; neither the UXC nor the
UXR currently allows AOI definition, leaving it to the
external application or to a data analyst who may wish to define the AOIs
manually after the data recording and collection is finished.) Another
type of inbound interaction is the direct control of the experiment
timeline. An external application may force the UXC to advance on the
timeline. Or, it may even <italic>insert</italic> a new step into the
timeline dynamically, which makes adaptive scenarios possible.</p>
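<p>As a rough illustration of the inbound direction, the sketch below shows how a third-party stimulus application might package a log entry before pushing it to the UXC local web API. The payload fields, the event type, and the endpoint mentioned in the comments are our own hypothetical examples, not the documented UXC interface.</p>

```python
import json
import time

def build_event(event_type, data):
    """Wrap an arbitrary log entry as a JSON object with a client-side timestamp.

    Hypothetical sketch; the real UXC payload format may differ.
    """
    return json.dumps({
        "type": event_type,              # e.g., an AOI hit resolved by the application
        "clientTimestamp": time.time(),  # the UXC also timestamps events on receipt
        "data": data,
    })

# An AOI hit that the external application resolved itself,
# based on gaze data it previously received from the UXC:
payload = build_event("aoi-hit", {"aoi": "menu-bar", "durationMs": 215})
# The application would then POST `payload` to the UXC's local web API endpoint.
```

<p>The UXC stores such inbound logs alongside the other recorded data, so they can be retrieved from the UXR after the session together with the gaze recordings.</p>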

<p><italic>Remotely observe the state of the workstations in the
laboratory</italic>. For each workstation, the study owner can centrally
oversee its connection status and the state of its sensors. The
information is arranged in a dashboard according to the physical floor
plan of the lab. This allows the study owner to quickly track down a
problematic workstation and deal with possible physical issues.</p>

<p><italic>Start the experiment recording</italic>. The study owner
initiates the session on the workstations in the laboratory. Multiple
sessions may be started in parallel on different workstations, allowing
the study owner to conduct variants of the same experiment. The UXC also
offers the option of an individual manual experiment startup by the
participants themselves, which is more suitable in some situations.</p>

<p><italic>Retrieve recorded data</italic>. The study owner may retrieve
(download) all data recorded in the experiment so far. The data are
organized first participant-wise and then source-wise (for each
participant, the output of each device is in a separate file).</p>
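<p>As an illustration, the retrieved data might be laid out as follows. The folder and file names here are hypothetical; the actual naming is determined by the UXC recorders.</p>

```text
recordings/
  participant-01/
    eyetracker.csv
    keyboard-events.json
    mouse-events.json
    webcam.mp4
    screen.mp4
  participant-02/
    eyetracker.csv
    screen.mp4
```

<p>This participant-first organization makes it straightforward to batch-process the outputs of a single device type across all participants.</p>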
    </sec>
	
    <sec id="S4b">
      <title>Functionality of the UXC (client) application</title>

<p><italic>Start up the client station</italic>. The workstations in the
laboratory are usually started by the study owner, not by the participants.
After the study owner turns on a workstation PC, the UXC is launched
automatically. When running, the application listens for any centrally
issued commands and sends updates to the UXR.</p>

<p><italic>Start the session recording</italic>. In experiments
where the sessions do not need to be synchronized, the participants can
start the session by themselves. They do so using the main application
screen (see Figure 2).</p>

<fig id="fig2" fig-type="figure" position="float">
					<label>Figure 2</label>
					<caption>
						<p>UXC application screen. A participant selects a session from a dropdown and starts it using the bottom-right button.</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-11-03-f-figure-02.png"/>
				</fig>

<p><italic>Calibrate the eye-tracker</italic>. The participant is
informed about the need for calibrating the eye-tracker. The calibration
is managed by the UXC. It consists of three steps: (1) head-positioning,
(2) point animation (default is a 9-point calibration) and (3)
calibration result. The result is displayed graphically and can be
either accepted or rejected (after which the calibration is restarted).
Currently, it is not possible to select the best calibration result from
the recent calibrations; this might not even be desirable, since the
participant or the eye tracker position (e.g., the screen tilt) might
have changed between the different calibration runs. The default
calibration behavior (the number of calibration points, etc.) can be
overridden in the <italic>Session definition</italic> JSON.</p>

<p><italic>Validate the calibration</italic>. The participant is
informed that the calibration procedure must be validated. Then, he/she
follows a procedure similar to the regular calibration. We recommend
that the experiment owner schedule this procedure at least once
somewhere on the experiment timeline. The computation of
validation metrics such as accuracy and precision (<xref ref-type="bibr" rid="b9">9</xref>) is currently not part of this step and is left to the
study owner after the experiment.</p>

<p><italic>Watch instructions.</italic> An instruction text, centered on
a screen, is displayed. The participant proceeds by pressing a
“continue” button or after a time limit elapses.</p>

<p><italic>Fill a questionnaire</italic>. The participant is requested
by the system to answer some questions.</p>

<p><italic>Interact with a stimulus</italic>. The UXC displays the
desktop or starts up a program, which the participant is expected to
interact with.</p>

<p><italic>Complete the experiment</italic>. After completing all steps,
the recording finishes and the participant is informed about it. The
recorded data are transferred to the server where the experiment owner
may access them. The upload process can be observed in the UXC (so the
participants do not shut down the station too early by accident).</p>
    </sec>
	
    <sec id="S4c">
      <title>Session definition JSON schema</title>

<p>Through the <italic>Session definition</italic> JSON, the experiment
owner defines a sequence of <italic>steps</italic>. The steps represent
the activities in which the participants will be engaged during the
experiment session. Also, the study owner defines which devices should
be used for the session recording. The complete documentation on the
<italic>Session definition</italic> can be found within the UXC GitHub
repository<xref ref-type="fn" rid="fn1">1</xref>.</p>

<p>There are several step types from which a timeline may be composed.
Each of these step types can be, to some degree, configured, and each step
type can have multiple instances within a single timeline. Each step
starts when the previous one ends. The end of a step can be triggered by a
hotkey, a time limit, or an API call (a third-party application used in the
experiment may force a step to end, for example during the <italic>Show
desktop</italic> or <italic>Launch program</italic> steps).</p>

<p>The <italic>Group Studies</italic> system supports the following
timeline step types:</p>
<list list-type="order">
  <list-item>
    <p><italic>Eye-tracker calibration</italic>. The default 9-point
    calibration can be overridden with a custom number of calibration
    points placed on arbitrary locations with an arbitrary order.</p>
  </list-item>
  <list-item>
    <p><italic>Eye-tracker validation</italic>. As with the calibration,
    the 9-point default can be overridden.</p>
  </list-item>
  <list-item>
    <p><italic>Instructions</italic>. The study owner specifies the
    instruction text. Optionally, the text size and color and the color
    of the background can be specified. Also, a continue button may be
    optionally set up.</p>
  </list-item>
  <list-item>
    <p><italic>Questionnaire</italic>. The study owner defines a set of
    questions. The questions can be of two types: (1) a free text answer
    or (2) a pre-defined multi-choice. The answers to a question may be
    constrained by a regular expression for the text input or by the
    maximum number of the selected choices. Any question can be marked
    as required. The text and background style can also be defined for
    the questionnaires.</p>
  </list-item>
  <list-item>
    <p><italic>Show desktop</italic>. This step serves as a means for
    general screen recording. The study owner defines whether
    any running applications should be minimized.</p>
  </list-item>
  <list-item>
    <p><italic>Launch program</italic>. This step launches any program
    available on the workstation machine. The step ends when the program
    is closed. The study owner defines a launch command (path, working
    directory, etc.), for which parameters can be specified. These
    parameters can take values acquired during the previous session
    steps (e.g., a user name), which is another option for scenario
    adaptation.</p>
  </list-item>
  <list-item>
    <p><italic>Fixation filter</italic>. Usually the last step in a session,
    this step silently executes an event detection algorithm for
    eye movement events on the client workstation, e.g., a
    velocity-based one. The detected eye movement events are transferred
    to the UXR along with the raw data.</p>
  </list-item>
</list>
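<p>To give an idea of what a velocity-based <italic>Fixation filter</italic> step computes, the following is a minimal sketch of a velocity-threshold (I-VT) classifier. The sample format, the threshold value, and the implementation itself are our own simplifications; the actual algorithm and parameters used by the UXC may differ.</p>

```python
def classify_ivt(samples, velocity_threshold=30.0):
    """Label each consecutive gaze-sample pair as 'fixation' or 'saccade'.

    samples: list of (t_seconds, x_degrees, y_degrees) gaze samples.
    velocity_threshold: angular velocity (deg/s) separating the two classes.
    Illustrative sketch only, not the UXC implementation.
    """
    labels = []
    for prev, cur in zip(samples, samples[1:]):
        dt = cur[0] - prev[0]
        # Point-to-point angular distance and velocity (deg, deg/s).
        distance = ((cur[1] - prev[1]) ** 2 + (cur[2] - prev[2]) ** 2) ** 0.5
        velocity = distance / dt
        labels.append("fixation" if velocity < velocity_threshold else "saccade")
    return labels

# Three ~60 Hz samples: two stable points, then a large jump.
samples = [(0.0, 10.0, 10.0), (0.0167, 10.1, 10.0), (0.0333, 15.0, 12.0)]
labels = classify_ivt(samples)
```

<p>Running the detection on the client workstation, before the upload, means the UXR receives both the raw samples and the derived events without any extra server-side processing.</p>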

<p>The timeline consists of three consecutive sub-timelines, into
which instances of the step types can be assigned (see Figure 3 for an
illustration):</p>
<list list-type="order">
  <list-item>
    <p><italic>Pre-session timeline</italic>. The steps in this section
    are executed first and they are not recorded. The eye-tracker
    calibration step must be placed here. Optionally, the study owner
    may place other steps here (for example some longer questionnaires
    that do not require gaze recording).</p>
  </list-item>
  <list-item>
    <p><italic>Session timeline</italic>. The steps in this section are
    executed after the pre-session timeline and are recorded. In
    general, all stimuli steps are placed here, along with the
    respective instructions. The calibration validation steps should be
    placed here as well.</p>
  </list-item>
  <list-item>
    <p><italic>Post-session timeline</italic>. These steps are executed
    last and are not recorded. This section can be used for fixation
    filtering or any other steps which do not require recording.</p>
  </list-item>
</list>
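<p>As an illustration, a session definition JSON file assigning steps to the three sub-timelines might look roughly as follows. All field names here are hypothetical; the actual schema is described in the project documentation:</p>

```json
{
  "preSessionSteps": [
    { "type": "EyeTrackerCalibration" },
    { "type": "Questionnaire", "id": "demographics" }
  ],
  "sessionSteps": [
    { "type": "Instructions", "text": "Read the code fragment." },
    { "type": "CalibrationValidation" },
    { "type": "LaunchProgram", "command": "editor.exe" }
  ],
  "postSessionSteps": [
    { "type": "FixationFilter" }
  ]
}
```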
 
<fig id="fig3" fig-type="figure" position="float">
					<label>Figure 3</label>
					<caption>
						<p>The session timeline is comprised of 3 sub-timelines (pre-session, session and post-session). Each sub-timeline comprises one or more steps. The timelines are defined in a Session definition JSON file.</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-11-03-f-figure-03.png"/>
				</fig> 

<p>Apart from defining the timeline steps, the study owner must enumerate which
devices will be used for data recording. Currently, the <italic>Group
Studies</italic> system supports the following data sources:</p>
<list list-type="order">
  <list-item>
    <p><italic>Eye-tracker</italic>.</p>
  </list-item>
  <list-item>
    <p><italic>External events</italic>.</p>
  </list-item>
  <list-item>
    <p><italic>Keyboard events</italic>.</p>
  </list-item>
  <list-item>
    <p><italic>Mouse events</italic>.</p>
  </list-item>
  <list-item>
    <p><italic>Webcam audio</italic>.</p>
  </list-item>
  <list-item>
    <p><italic>Webcam video</italic>.</p>
  </list-item>
  <list-item>
    <p><italic>Screen recording video</italic>.</p>
  </list-item>
</list>

<p>Most devices can be recorded automatically without further
configuration. The exception is external events, which must be pushed
in via the local web API by the third-party applications that the study
owner wishes to use as stimuli. The quality of audio and video
recording can also be optionally configured before the recording starts.</p>
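<p>For example, a third-party stimulus application might push an external event to the client's local web API roughly as follows. The endpoint URL, port, and event schema in this sketch are hypothetical assumptions; consult the project documentation for the actual API:</p>

```python
# Sketch: pushing an external event to the UXC local web API.
# The endpoint URL and the event field names are illustrative
# assumptions, not the documented API.
import json
import urllib.request

def make_event_payload(name, timestamp_ms, data):
    """Serialize an external event as a JSON request body."""
    return json.dumps({
        "name": name,
        "timestamp": timestamp_ms,
        "data": data,
    }).encode("utf-8")

def push_event(payload, url="http://localhost:55555/api/external-events"):
    """POST the event to the locally running client (hypothetical URL);
    requires a running UXC instance listening on localhost."""
    request = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(request)

payload = make_event_payload("scroll", 1530000000000, {"offsetY": 240})
```

<p>Because the communication goes through the localhost domain only, the workstation remains autonomous even when such events are logged.</p>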
    </sec>
    </sec>

    <sec id="S5">
      <title>System architecture and physical setup</title>

<p>The <italic>Group Studies</italic> system has two principal
components: (1) UXR, the web application for experiment management, and
(2) UXC, the desktop client application for operating the eye-trackers
and stimuli. The system also allows the use of (3) external
applications, which are often required to serve as stimuli. Figure 7
(appendix) shows the interconnection of these system components. Depending
on the use case, an external application may use any of the interfaces
provided by the client application, i.e., push events, read gaze data, or
even control the experiment timeline.</p>

<p>The role of the UXR (which runs on a web server) is to support the
use cases of study setup and control, as well as data retrieval
after the experiment. The UXR also serves for distributing UXC updates.
Figure 8 (appendix) shows the main internal components of the management
application, built on top of the Microsoft ASP.NET
MVC framework and the Microsoft SQL Server database.</p>

<p>The client application (UXC) is autonomous during the session
execution. The architectural style of the system is a
<italic>thick-client</italic>. The client application can receive
information (e.g., a session definition) and commands (e.g., a
synchronized recording start) from the server (the management
application), but apart from that, the <italic>client application
manages the session autonomously</italic>. The client application
implements the eye-tracker calibration, records the session data (e.g.,
eye-tracking data, screen recording, user camera, keyboard, and mouse
events) and sends them back to the server after the session
finishes.</p>

<p><italic>The autonomous character of the client application is
important, because it increases the robustness of the system</italic>,
making it less prone to server and network failures caused by
bottlenecks.</p>

<p>Figure 9 (appendix) shows the main internal components of the UXC
with the <italic>Sessions Control</italic> module for controlling the
session recording. The data sources (devices) are controlled
automatically by the <italic>Sessions Control</italic> through the
<italic>Adapters Control</italic> module. The data source components are
adapters, which implement routines required for collecting the specific
data types, but which share the same internal interface for the
<italic>Adapters Control</italic> module.</p>
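<p>The shared adapter interface can be sketched as follows. This is an illustrative Python analogy of the internal design described above; the class and method names are assumptions, not the system's actual (C#) interface:</p>

```python
# Sketch of a common data-source adapter interface: every device
# adapter exposes the same start/stop contract to Adapters Control.
# Names and methods are illustrative assumptions.
from abc import ABC, abstractmethod

class DataSourceAdapter(ABC):
    """Shared interface through which Adapters Control drives a device."""

    @abstractmethod
    def start_recording(self):
        ...

    @abstractmethod
    def stop_recording(self):
        ...

class MouseEventsAdapter(DataSourceAdapter):
    """A trivial adapter that buffers events only while recording."""

    def __init__(self):
        self.recording = False
        self.events = []

    def start_recording(self):
        self.recording = True

    def stop_recording(self):
        self.recording = False

    def on_event(self, event):
        if self.recording:
            self.events.append(event)
```

<p>The design choice here is that the controlling module never needs device-specific knowledge; each adapter hides its collection routine behind the common contract.</p>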

<p>The <italic>Eye-tracker</italic> component uses the Tobii Pro
SDK<xref ref-type="fn" rid="fn2">2</xref> library to communicate with a
Tobii Pro eye-tracker device.
FFmpeg<xref ref-type="fn" rid="fn3">3</xref> is used for recording
multimedia: the participant’s screen (with the
UScreenCapture<xref ref-type="fn" rid="fn4">4</xref> software) and the
webcam available on the workstation. The <italic>Mouse &#x26;
Keyboard</italic> component records the participant’s keystrokes, mouse
clicks, and movements using the WinAPI provided by the Microsoft Windows
operating system. A special data source type is <italic>External
Events</italic>, which allows external applications to add events
recorded during the experiment. During the whole session recording, the
experiment timeline is played and gaze data may be accessed by an
external application.</p>

<p>When external applications are to be used, the study owner or the
application developer must implement communication with the UXC local
web services, using either the REST API or web sockets. The external
applications can be either desktop applications or browser-based
applications with their own web servers. They communicate with the UXC
through the localhost domain, which helps to preserve the overall
workstation autonomy. A problem arises if the external application is
secured (i.e., uses HTTPS); this can be solved with an advanced
configuration of the workstation, which is detailed in the project
documentation.</p>

<p>Physically, the <italic>Group Studies</italic> system is, with the
exception of the server, entirely deployed in the room where the group
experiments take place. Figure 4 shows the room during an experiment.
Twenty workstations are arranged to form a classroom. In our
setup, each workstation is equipped with a 60 Hz eye-tracker (Tobii Pro
X2-60) and a web camera (Creative Senz3D). One additional workstation is
dedicated to the study owner and is equipped with a projector; the
study owner can use it for controlling the recording of an
experiment session. The server side of the system (the management
application) runs on a dedicated server, which also hosts the data
storage, allowing direct and single-point access to the recorded
data.</p>

<fig id="fig4" fig-type="figure" position="float">
					<label>Figure 4</label>
					<caption>
						<p>The eye-tracking group lab during an experiment. The layout of the room follows a classroom setup.</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-11-03-f-figure-04.png"/>
				</fig>
    </sec>

    <sec id="S6">
      <title>Example user study</title>

<p>So far, we have used our infrastructure for several studies. These
included studies on cleaning pupillary dilation data of the lighting
effects of real-world stimuli (<xref ref-type="bibr" rid="b10">10</xref>), student attention
during an interactive lecture (<xref ref-type="bibr" rid="b26">26</xref>) (see also Figure
4), visual search on real websites (<xref ref-type="bibr" rid="b7">7</xref>), detecting deception in questionnaires (<xref ref-type="bibr" rid="b20">20</xref>), and
eye-tracking-aided crowdsourcing (<xref ref-type="bibr" rid="b24">24</xref>).</p>

<p>To demonstrate the use of our <italic>Group Studies</italic> system
in real lab settings, we present a setup from a study for our
ongoing research on program comprehension (<xref ref-type="bibr" rid="b27 b11">27, 11</xref>), in which the participants’ task is to read source
code fragments, understand them, and answer comprehension
questions.</p>

    <sec id="S6a">
      <title>Study motivation</title>

<p>Reading and writing source code is an essential task in software
development. Source code reading strategies differ from typical
natural text reading strategies (<xref ref-type="bibr" rid="b3">3</xref>). In education, we
seek to understand how novice programmers read, comprehend, and
write source code, how they find and repair bugs, and how we can improve
their learning process. However, most of the time we only know the
correctness of their solution, not the process leading to it. The
programmer’s visual attention reflects not only the source code itself,
but also the programmer’s experience and familiarity with the source
code. Research in empirical software engineering investigates
how novices differ from expert programmers and how they can
become experts faster.</p>

<p>In these studies, we use eye-tracking to observe how students in
introductory programming courses solve programming exercises.
During the course’s lab session, the students use an online programming
environment, which is integrated with the <italic>Group Studies</italic>
system. We collect gaze data and fine-grained interactions with the code
editor during the programming session. Then, we are able to reconstruct,
analyze, and replay the programmer’s activity over time (<xref ref-type="bibr" rid="b27">27</xref>). The collected data is used for automatic identification of
program comprehension patterns (e.g., linear scan, retrace declaration,
control and data flow tracing) (<xref ref-type="bibr" rid="b4">4</xref>). We use these
patterns along with source code-related eye-tracking metrics
(<xref ref-type="bibr" rid="b22">22</xref>) to train models for predicting the programmer’s
performance in program comprehension tasks, to compare their
comprehension strategies, and to describe them to the teacher. We explore
whether describing the programmer’s activity in program
comprehension tasks can help the teacher to better identify the
students’ misconceptions.</p>

<p>A program source code, although a textual stimulus, differs from
natural texts in its structure, semantics, and the cognitive processes
required for understanding it (<xref ref-type="bibr" rid="b4">4</xref>). Previous
program comprehension studies with eye-tracking were performed with
short code fragments due to software limitations (<xref ref-type="bibr" rid="b3 b15">3, 15</xref>), or were tightly coupled with the source code
editor (<xref ref-type="bibr" rid="b23">23</xref>), which makes them difficult to replicate.
The UXI <italic>Group Studies</italic> system enables us to collect data
from program comprehension studies more robustly and efficiently
compared to the previous works (<xref ref-type="bibr" rid="b15">15</xref>). In total,
we had 33 participants in this experiment, comprising two recording
sessions (<xref ref-type="bibr" rid="b11">11</xref>).</p>
    </sec>
	
    <sec id="S6b">
<title>The role of the Group Studies system in the study</title>

<p>The UXC client application was used with the Tobii X2-60 eye-trackers
to record gaze data, screen recording, mouse events, and external
events. The source code fragment stimuli were presented to the
participants on a custom website with the web-based source code editor
Monaco<xref ref-type="fn" rid="fn5">5</xref>. Unlike in the previous
studies, the participants could interact with the editor, i.e., scroll the
document, move the text cursor, and select text. If needed, they could also
move the window or change its size; we monitored these changes as well.
The editor was set to read-only mode for this study, although code
changes could be logged as well. All interactions with the editor were
translated into stimulus change logs and pushed as <italic>External
events</italic> into the UXC using its local API. The code editor also
managed the session through the API for session control.</p>

<p>At the beginning of the recording, the eye-tracker calibration was
performed, together with calibration validations before and after the
source code reading tasks. Figure 5 outlines the recording part of the
study with the UXC. After the recording, all data were collected in the
UXR server application.</p>

<fig id="fig5" fig-type="figure" position="float">
					<label>Figure 5</label>
					<caption>
<p>Overview of data recording in the program comprehension study with the UXC. The gaze data, interaction events, mouse events, and screen recording are recorded for all participants. All data is collected in the UXR after the recording.</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-11-03-f-figure-05.png"/>
				</fig>

<p>The experiment sessions took place during the seminars of the
introductory procedural programming course at our faculty. The
participants were accustomed to working with web-based source code
editors.</p>

<p>For the data analysis part (see Figure 6), we map gaze fixations to
positions in the source code documents, while considering where and how
each source code fragment was displayed. This mapping, however, is done
outside of our system. What remains inside our infrastructure is the
fixation filter we used: our implementation of the I-VT
filter<xref ref-type="fn" rid="fn6">6</xref>
based on the Tobii whitepaper (<xref ref-type="bibr" rid="b17">17</xref>). From the recorded
interactions with the source code editor, we reconstruct its visual
state for each point in time during the recording and then recalculate
the fixations to positions relative to the source code document. Since
the source code elements form an AOI hierarchy, such a mapping allows us
to automatically analyze eye movement data together with AOIs in the
source code.</p>
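<p>The recalculation of fixations to document-relative positions can be sketched as follows. This is a simplified geometric model under illustrative assumptions (a uniform character grid, scroll offsets, and a window origin); the actual reconstruction also accounts for window moves, resizes, and the editor's true layout:</p>

```python
# Sketch: mapping a screen-space fixation into a (line, column)
# position of the source code document, given the editor's
# reconstructed visual state at the fixation's timestamp.
# All geometry values and field names are illustrative assumptions.
def fixation_to_document_position(fx, fy, editor_state):
    """Return the (line, column) under a fixation at screen (fx, fy)."""
    # Translate from screen coordinates to editor-local coordinates.
    local_x = fx - editor_state["window_x"]
    local_y = fy - editor_state["window_y"]
    # Undo scrolling, then quantize by the character cell size.
    line = (local_y + editor_state["scroll_y"]) // editor_state["line_height"]
    column = (local_x + editor_state["scroll_x"]) // editor_state["char_width"]
    return int(line), int(column)

state = {"window_x": 100, "window_y": 50, "scroll_x": 0, "scroll_y": 40,
         "line_height": 20, "char_width": 8}
```

<p>For example, with the state above, a fixation at screen point (180, 130) falls on line 6, column 10 of the document; since lines and tokens form the AOI hierarchy, the AOI hit follows directly from this position.</p>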

<fig id="fig6" fig-type="figure" position="float">
					<label>Figure 6</label>
					<caption>
						<p>The gaze and interaction data processing from the program comprehension study to reconstruct the visual state of the code editor and fixations relative to the source code documents.</p>
					</caption>
					<graphic id="graph06" xlink:href="jemr-11-03-f-figure-06.png"/>
				</fig>
    </sec>
    </sec>

    <sec id="S7">
      <title>Source code and documentation</title>

<p>We made the software components of our infrastructure publicly
available as source code and documentation. We publish the software in
several GitHub repositories:</p>
<list list-type="order">
  <list-item>
    <p>UXC source code<xref ref-type="fn" rid="fn7">7</xref></p>
  </list-item>
  <list-item>
    <p>UXR source code<xref ref-type="fn" rid="fn8">8</xref></p>
  </list-item>
  <list-item>
    <p>UXC and UXR dependency
    libraries<xref ref-type="fn" rid="fn9">9</xref></p>
  </list-item>
</list>

<p>The documentation for the source code is placed within the wiki
sections of these repositories.</p>
    </sec>

    <sec id="S8">
      <title>Discussion</title>

<p>The <italic>Group Studies</italic> system is currently best suited
for scaling up eye-tracking experiments that otherwise have a
“single-participant nature”, i.e., experiments that do not
involve any interaction between the participants (workstations). Many
studies are like this, and group eye-tracking simply helps to finish
them faster.</p>

<p>There is, however, an entire line of research dealing with the
between-participant interactive eye-tracking scenarios (<xref ref-type="bibr" rid="b1 b13 b18 b5">1, 13, 18, 5</xref>). Such scenarios are currently not supported by our system, as
there is no direct native support for the data exchange between the
workstations.</p>

<p>Despite that, our system does not prevent collaborative scenarios and
provides a suitable basis for implementing them, namely the local API
capabilities of the UXC application. Using this API, an external
application can request gaze data from the UXC (the UXC can provide the
current samples or buffered historical data). Therefore, if a
collaborative scenario were to be run on our infrastructure, the data
exchange would have to be implemented in the external application.</p>
    </sec>

    <sec id="S9">
      <title>Conclusion</title>

<p>The prices of eye-tracking technologies are steadily dropping,
allowing larger (hardware) purchases. This allows institutions to furnish
more conventional eye-tracking labs, but it also opens the possibility
of furnishing labs with group eye-tracking setups. However, running
studies in such setups requires special infrastructure to support
them.</p>

<p>This paper presented the <italic>Group Studies</italic> system, the
software part of a group eye-tracking infrastructure deployed at the
User Experience and Interaction Research Center (UXI). The system is
based on a thick client-server architecture. It allows flexible
preparation of experiment scenarios, supports the integration of
3<sup>rd</sup>-party software, and is designed to be extensible.</p>

<p>We have described the functionality of the system, which supports all
required phases of a group eye-tracking experiment. We have also looked
at the system from the architectural perspective and shown a high-level
overview of its components. This overview serves as a good introduction
to the entire implementation of our system, which we made publicly
available on GitHub.</p>

<p>The system was primarily designed to support our specific needs in
conducting group eye-tracking studies (at the UXI Research Center). It
was designed through multiple iterations and evolved over time. Despite
that, we believe it can inspire new labs as well. Moreover,
researchers can use our code and modify, tailor, and deploy this system
at their own lab sites.</p>

<p>The system does not currently support all possible scenarios for
group eye-tracking or user study designs, but neither does it prevent
them. For example, we did not focus on collaborative scenarios;
researchers pursuing this path would therefore need to put in
additional effort to use the system for such studies. Nevertheless, the
ability of our system to integrate external software into the
infrastructure would enable such scenarios.</p>

<p>We see several possible directions for future work. First, we
understand that the stimuli timeline structure in its current state may
be limiting for certain studies because of its linearity. It is possible
to define alternative session timelines when scheduling the session in
the UXR, or to control the stimuli timeline and insert new steps during
the recording using the UXC local API from a 3<sup>rd</sup>-party
application. However, it is currently not possible to randomize or
counterbalance the order of the pre-defined timeline steps, nor is it
possible to conditionally select the next step during the recording,
possibly based on the results of the previous steps. Another possible
feature, the need for which we identified during our studies, is to
validate the eye-tracking data upon completion of the
<italic>Eye-tracker validation</italic> step during the recording and
to ask the participant to re-calibrate the eye-tracker if necessary.
Thanks to the design and architecture of the presented system, it will
be possible to implement these features in the future.</p>
    </sec>

    <sec id="S10" sec-type="COI-statement">
      <title>Ethics and Conflict of Interest</title>

<p>The authors declare that the contents of the article are in agreement
with the ethics described in
<ext-link ext-link-type="uri" xlink:href="http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html" xlink:show="new">http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html</ext-link>
and that there is no conflict of interest regarding the publication of
this paper.</p>
    </sec>

    <sec id="S11">
      <title>Acknowledgements</title>

<p>This article was created with the support of the Slovak Research and
Development Agency under the contracts No. APVV-15-0508 and
APVV-17-0267, the Scientific Grant Agency of the Slovak Republic, grant
No. VG 1/0646/15, the Ministry of Education, Science, Research and Sport
of the Slovak Republic within the Research and Development Operational
Programme for the project “University Science Park of STU Bratislava”,
ITMS 26240220084, co-funded by the ERDF and the project “Development of
research infrastructure STU”, project no. 003STU-2-3/2016 by the
Ministry of Education, Science, Research and Sport of the Slovak
Republic.</p>

<p>We wish to thank Tobii Pro for fruitful collaboration with setting up
and maintaining our lab infrastructure.</p>
    </sec>

</body>
<back>

<ref-list>
<ref id="b1"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Acarturk</surname>, <given-names>C.</given-names></name>, <name><surname>Tajaddini</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Kilic</surname>, <given-names>O.</given-names></name></person-group> (<year>2017</year>). Group Eye Tracking (GET) Applications in Gaming and Decision [abstract]. In Radach, R., Deubel, H., Vorstius, C., &#x26; Hofmann, M.J. (Eds.), Abstracts of the 19th European Conference on Eye Movements, 2017, Wuppertal. Journal of Eye Movement Research, 10(6), 103.</mixed-citation></ref>
<ref id="b2"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Blignaut</surname>, <given-names>P.</given-names></name></person-group> (<year>2017</year>): Real-time visualisation of student attention in a computer laboratory [abstract]. In Radach, R., Deubel, H., Vorstius, C., &#x26; Hofmann, M.-J. (Eds.), Abstracts of the 19th European Conference on Eye Movements, 2017, Wuppertal. Journal of Eye Movement Research, 10(6), 288.</mixed-citation></ref>
<ref id="b3"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><name><surname>Busjahn</surname>, <given-names>T.</given-names></name>, <name><surname>Schulte</surname>, <given-names>C.</given-names></name>, &#x26; <name><surname>Sharif</surname>, <given-names>B.</given-names></name></person-group>, Simon, Begel, A., Hansen, M., Bednarik, R., Orlov, P., Ihantola, P., Shchekotova, G., &#x26;Antropova, M. (<year>2014</year>, <month>August</month>). <article-title>Eye tracking in computing education.</article-title> In <source>Proceedings of 10th Annual Conference on International Computing Education Research</source> (pp. <fpage>3</fpage>-<lpage>10</lpage>). <conf-loc>Glasgow, Scotland, UK</conf-loc>. <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>. doi:<pub-id pub-id-type="doi" specific-use="author">10.1145/2632320.2632344</pub-id></mixed-citation></ref>
<ref id="b4"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><name><surname>Busjahn</surname>, <given-names>T.</given-names></name>, <name><surname>Bednarik</surname>, <given-names>R.</given-names></name>, <name><surname>Begel</surname>, <given-names>A.</given-names></name>, <name><surname>Crosby</surname>, <given-names>M.</given-names></name>, <name><surname>Paterson</surname>, <given-names>J. H.</given-names></name>, <name><surname>Schulte</surname>, <given-names>C.</given-names></name>, <name><surname>Sharif</surname>, <given-names>B.</given-names></name>, &#x26; <name><surname>Tamm</surname>, <given-names>S.</given-names></name></person-group> (<year>2015</year>, <month>May</month>). <article-title>Eye Movements in Code Reading: Relaxing the Linear Order.</article-title> In <source>Proceedings of the 2015 IEEE 23rd International Conference on Program Comprehension</source> (pp. <fpage>255</fpage>-<lpage>265</lpage>). <conf-loc>Florence, Italy</conf-loc>. <publisher-loc>Piscataway, NJ, USA</publisher-loc>: <publisher-name>IEEE Press</publisher-name>. doi:<pub-id pub-id-type="doi" specific-use="author">10.1109/ICPC.2015.36</pub-id></mixed-citation></ref>
<ref id="b5"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Dalmaijer</surname>, <given-names>E. S.</given-names></name>, <name><surname>Niehorster</surname>, <given-names>D. C.</given-names></name>, <name><surname>Holmqvist</surname>, <given-names>K.</given-names></name>, &#x26; <name><surname>Husain</surname>, <given-names>M.</given-names></name></person-group> (<year>2017</year>). Joint visual working memory through implicit collaboration [abstract]. In Radach, R., Deubel, H., Vorstius, C., &#x26; Hofmann, M.J. (Eds.), Abstracts of the 19th European Conference on Eye Movements, 2017, Wuppertal. Journal of Eye Movement Research, 10(6), 111.</mixed-citation></ref>
<ref id="b6"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Deniz</surname>, <given-names>O.</given-names></name>, <name><surname>Fal</surname>, <given-names>M.</given-names></name>, <name><surname>Bozkurt</surname>, <given-names>U.</given-names></name>, &#x26; <name><surname>Acartürk</surname>, <given-names>C.</given-names></name></person-group> (<year>2015</year>). GET-Social: Group Eye Tracking environment for social gaze analysis [abstract]. In Ansorge, U., Ditye, T., Florack, A., &#x26;Leder, H. (Eds.), Abstracts of the 18th European Conference on Eye Movements, 2015, Vienna. Journal of Eye Movement Research, 8(4):1, 252.</mixed-citation></ref>
<ref id="b7"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><name><surname>Dragunova</surname>, <given-names>M.</given-names></name>, <name><surname>Moro</surname>, <given-names>R.</given-names></name>, &#x26; <name><surname>Bielikova</surname>, <given-names>M.</given-names></name></person-group> (<year>2017</year>, <month>March</month>). <article-title>Measuring Visual Search Ability on the Web.</article-title> In <source>Proceedings of the 22nd International Conference on Intelligent User Interfaces Companion</source> (pp. <fpage>97</fpage>-<lpage>100</lpage>). <conf-loc>Limassol, Cyprus</conf-loc>. <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>. doi:<pub-id pub-id-type="doi" specific-use="author">10.1145/3030024.3038272</pub-id></mixed-citation></ref>
<ref id="b8"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Duchowski</surname>, <given-names>A. T.</given-names></name></person-group> (<year>2016</year>, <month>October</month>). <chapter-title>The Eye Tracking Digital Classroom</chapter-title>. [<comment>invited talk</comment>] In <person-group person-group-type="editor"><name><given-names>K.</given-names> <surname>Holmqvist</surname></name>, <name><given-names>H.</given-names> <surname>Jarodzka</surname></name>, &#x26; <name><given-names>D.</given-names> <surname>Niehorster</surname></name> (<role>Eds.</role>),</person-group> <source>Colloquium on Multiple Eye Tracker Classrooms. Humanities Lab</source>. <publisher-name>Lund University</publisher-name>.</mixed-citation></ref>
<ref id="b9"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><name><surname>Holmqvist</surname>, <given-names>K.</given-names></name>, <name><surname>Nyström</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Mulvey</surname>, <given-names>F.</given-names></name></person-group> (<year>2012</year>, <month>March</month>). <article-title>Eye tracker data quality: what it is and how to measure it.</article-title> In <source>Proceedings of the Symposium on Eye Tracking Research and Applications</source> (pp. <fpage>45</fpage>-<lpage>52</lpage>). <conf-loc>Santa Barbara, CA, USA</conf-loc>. <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>. doi:<pub-id pub-id-type="doi" specific-use="author">10.1145/2168556.2168563</pub-id></mixed-citation></ref>
<ref id="b10"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Juhaniak</surname>, <given-names>T.</given-names></name>, <name><surname>Hlavac</surname>, <given-names>P.</given-names></name>, <name><surname>Moro</surname>, <given-names>R.</given-names></name>, <name><surname>Simko</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Bielikova</surname>, <given-names>M.</given-names></name></person-group> (<year>2016</year>, <month>July</month>). Pupillary Response: Removing Screen Luminosity Effects for Clearer Implicit Feedback. In Late-breaking Results, Posters, Demos, Doctoral Consortium and Workshops Proceedings of the 24th ACM Conference on User Modeling, Adaptation and Personalisation. CEUR-WS, 1618.</mixed-citation></ref>
<ref id="b11"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><name><surname>Konopka</surname>, <given-names>M.</given-names></name>, <name><surname>Talian</surname>, <given-names>A.</given-names></name>, <name><surname>Tvarozek</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Navrat</surname>, <given-names>P.</given-names></name></person-group> (<year>2018</year>, <month>June</month>). <article-title>Data Flow Metrics in Program Comprehension Tasks.</article-title> In <source>Proceedings of the Workshop on Eye Movements in Programming</source> (Article No. 2). Warsaw, Poland. <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>. doi:<pub-id pub-id-type="doi" specific-use="author">10.1145/3216723.3216728</pub-id></mixed-citation></ref>
<ref id="b12"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Lejarraga</surname>, <given-names>T.</given-names></name>, <name><surname>Schulte-Mecklenbeck</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Smedema</surname>, <given-names>D.</given-names></name></person-group> (<year>2017</year>). <article-title>The pyeTribe: Simultaneous eyetracking for economic games.</article-title> <source>Behavior Research Methods</source>, <volume>49</volume>(<issue>5</issue>), <fpage>1769</fpage>–<lpage>1779</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.3758/s13428-016-0819-9</pub-id><pub-id pub-id-type="pmid">27797092</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b13"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Niehorster</surname>, <given-names>D. C.</given-names></name>, <name><surname>Cornelissen</surname>, <given-names>T. H. W.</given-names></name>, <name><surname>Hooge</surname>, <given-names>I. T. C.</given-names></name>, &#x26; <name><surname>Holmquist</surname>, <given-names>K.</given-names></name></person-group> (<year>2017</year>). Searching with and against each other [abstract]. In Radach, R., Deubel, H., Vorstius, C., &#x26; Hofmann, M.J. (Eds.), Abstracts of the 19th European Conference on Eye Movements, 2017, Wuppertal. Journal of Eye Movement Research, 10(6), 146.</mixed-citation></ref>
<ref id="b14"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Nyström</surname>, <given-names>M.</given-names></name>, <name><surname>Niehorster</surname>, <given-names>D. C.</given-names></name>, <name><surname>Cornelissen</surname>, <given-names>T.</given-names></name>, &#x26; <name><surname>Garde</surname>, <given-names>H.</given-names></name></person-group> (<year>2016</year>). <article-title>Real-time sharing of gaze data between multiple eye trackers-evaluation, tools, and advice.</article-title> <source>Behavior Research Methods</source>, <volume>49</volume>(<issue>4</issue>), <fpage>1310</fpage>–<lpage>1322</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.3758/s13428-016-0806-1</pub-id><pub-id pub-id-type="pmid">27743316</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b15"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Obaidellah</surname>, <given-names>U.</given-names></name>, <name><surname>Al Haek</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Cheng</surname>, <given-names>P. C.-H.</given-names></name></person-group> (<year>2018</year>). <article-title>A Survey on the Usage of Eye-Tracking in Computer Programming.</article-title> <source>ACM Computing Surveys</source>, <volume>51</volume>(<issue>1</issue>), <fpage>5</fpage>. <comment>Advance online publication</comment>. <pub-id pub-id-type="doi" specific-use="author">10.1145/3145904</pub-id><issn>0360-0300</issn></mixed-citation></ref>
<ref id="b16"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Oliva</surname>, <given-names>M.</given-names></name>, <name><surname>Niehorster</surname>, <given-names>D. C.</given-names></name>, <name><surname>Jarodzka</surname>, <given-names>H.</given-names></name>, &#x26; <name><surname>Holmqvist</surname>, <given-names>K.</given-names></name></person-group> (<year>2017</year>). <article-title>Influence of Coactors on Saccadic and Manual Responses.</article-title> <source>i-Perception</source>, <volume>8</volume>(<issue>1</issue>), <fpage>2041669517692814</fpage>. <comment>Advance online publication</comment>. <pub-id pub-id-type="doi" specific-use="author">10.1177/2041669517692814</pub-id><pub-id pub-id-type="pmid">28321288</pub-id><issn>2041-6695</issn></mixed-citation></ref>
<ref id="b17"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Olsen</surname>, <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title><italic>The Tobii I-VT Fixation Filter: Algorithm description</italic>.</article-title> Retrieved from Tobii Pro website: <ext-link ext-link-type="uri" xlink:href="https://www.tobiipro.com/siteassets/tobii-pro/learn-and-support/analyze/how-do-we-classify-eye-movements/tobii-pro-i-vt-fixation-filter.pdf">https://www.tobiipro.com/siteassets/tobii-pro/learn-and-support/analyze/how-do-we-classify-eye-movements/tobii-pro-i-vt-fixation-filter.pdf</ext-link></mixed-citation></ref>
<ref id="b18"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Räihä</surname>, <given-names>K.-J.</given-names></name>, <name><surname>Spakov</surname>, <given-names>O.</given-names></name>, <name><surname>Istance</surname>, <given-names>H.</given-names></name>, &#x26; <name><surname>Niehorster</surname>, <given-names>D. C.</given-names></name></person-group> (<year>2017</year>). Gaze-assisted remote communication between teacher and students [abstract]. In Radach, R., Deubel, H., Vorstius, C., &#x26; Hofmann, M.J. (Eds.), Abstracts of the 19th European Conference on Eye Movements, 2017, Wuppertal. Journal of Eye Movement Research, 10(6), 105.</mixed-citation></ref>
<ref id="b19"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Richter</surname>, <given-names>J.</given-names></name></person-group> (<year>2016</year>, <month>October</month>). <chapter-title>The Tübingen Digital Teaching Lab and Eye Tracking Multimedia Research</chapter-title>. [<comment>invited talk</comment>] In <person-group person-group-type="editor"><name><given-names>K.</given-names> <surname>Holmqvist</surname></name>, <name><given-names>H.</given-names> <surname>Jarodzka</surname></name>, &#x26; <name><given-names>D.</given-names> <surname>Niehorster</surname></name> (<role>Eds.</role>),</person-group> <source>Colloquium on Multiple Eye Tracker Classrooms. Humanities Lab</source>. <publisher-name>Lund University</publisher-name>.</mixed-citation></ref>
<ref id="b20"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><name><surname>Rybar</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Bielikova</surname>, <given-names>M.</given-names></name></person-group> (<year>2016</year>, <month>October</month>). <article-title>Automated detection of user deception in on-line questionnaires with focus on eye tracking use.</article-title> In <source>Proceedings of the 11th International Workshop on Semantic and Social Media Adaptation and Personalization</source> (pp. <fpage>24</fpage>-<lpage>28</lpage>). <conf-loc>Thessaloniki, Greece</conf-loc>. <publisher-loc>Piscataway, NJ, USA</publisher-loc>: <publisher-name>IEEE Press</publisher-name>. doi:<pub-id pub-id-type="doi" specific-use="author">10.1109/SMAP.2016.7753379</pub-id></mixed-citation></ref>
<ref id="b21"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Seithe</surname>, <given-names>M.</given-names></name>, <name><surname>Morina</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Glöckner</surname>, <given-names>A.</given-names></name></person-group> (<year>2016</year>). <article-title>Bonn eXperimental System (BoXS): An open-source platform for interactive experiments in psychology and economics.</article-title> <source>Behavior Research Methods</source>, <volume>48</volume>(<issue>4</issue>), <fpage>1454</fpage>–<lpage>1475</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.3758/s13428-015-0660-6</pub-id><pub-id pub-id-type="pmid">26558901</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b22"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><name><surname>Sharafi</surname>, <given-names>Z.</given-names></name>, <name><surname>Shaffer</surname>, <given-names>T.</given-names></name>, <name><surname>Sharif</surname>, <given-names>B.</given-names></name>, &#x26; <name><surname>Guéhéneuc</surname>, <given-names>Y.-G.</given-names></name></person-group> (<year>2015</year>, <month>December</month>). <article-title>Eye-Tracking Metrics in Software Engineering.</article-title> In <source>Proceedings of the 2015 Asia-Pacific Software Engineering Conference</source> (pp. <fpage>96</fpage>-<lpage>103</lpage>). <conf-loc>New Delhi, India</conf-loc>. <publisher-loc>Piscataway, NJ, USA</publisher-loc>: <publisher-name>IEEE Press</publisher-name>. doi:<pub-id pub-id-type="doi" specific-use="author">10.1109/APSEC.2015.53</pub-id></mixed-citation></ref>
<ref id="b23"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Sharif</surname>, <given-names>B.</given-names></name>, <name><surname>Shaffer</surname>, <given-names>T.</given-names></name>, <name><surname>Wise</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Maletic</surname>, <given-names>J. I.</given-names></name></person-group> (<year>2016</year>). <article-title>Tracking Developers’ Eyes in the IDE.</article-title> <source>IEEE Software</source>, <volume>33</volume>(<issue>3</issue>), <fpage>105</fpage>–<lpage>108</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1109/MS.2016.84</pub-id><issn>0740-7459</issn></mixed-citation></ref>
<ref id="b24"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><name><surname>Simko</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Bielikova</surname>, <given-names>M.</given-names></name></person-group> (<year>2015</year>, <month>November</month>). <article-title>Gaze-tracked crowdsourcing.</article-title> In <source>Proceedings of the 2015 10th International Workshop on Semantic and Social Media Adaptation and Personalization</source> (pp. <fpage>1</fpage>-<lpage>5</lpage>). <conf-loc>Trento, Italy</conf-loc>. <publisher-loc>Piscataway, NJ, USA</publisher-loc>: <publisher-name>IEEE Press</publisher-name>. doi:<pub-id pub-id-type="doi" specific-use="author">10.1109/SMAP.2015.7370084</pub-id></mixed-citation></ref>
<ref id="b25"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Simko</surname>, <given-names>M.</given-names></name>, <name><surname>Barla</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Bielikova</surname>, <given-names>M.</given-names></name></person-group> (<year>2010</year>). <chapter-title>ALEF: A framework for adaptive web-based learning 2.0</chapter-title>. In <source>Key Competencies in the Knowledge Society</source>. <publisher-loc>Berlin, Heidelberg</publisher-loc>: <publisher-name>Springer</publisher-name>. <pub-id pub-id-type="doi" specific-use="author">10.1007/978-3-642-15378-5_36</pub-id></mixed-citation></ref>
<ref id="b26"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><name><surname>Triglianos</surname>, <given-names>V.</given-names></name>, <name><surname>Labaj</surname>, <given-names>M.</given-names></name>, <name><surname>Moro</surname>, <given-names>R.</given-names></name>, <name><surname>Simko</surname>, <given-names>J.</given-names></name>, <name><surname>Hucko</surname>, <given-names>M.</given-names></name>, <name><surname>Tvarozek</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Bielikova</surname>, <given-names>M.</given-names></name></person-group> (<year>2017</year>, <month>July</month>). <article-title>Experiences Using an Interactive Presentation Platform in a Functional and Logic Programming Course.</article-title> In <source>Adjunct Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization</source> (pp. <fpage>311</fpage>-<lpage>316</lpage>). <conf-loc>Bratislava, Slovakia</conf-loc>. <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>ACM</publisher-name>. doi:<pub-id pub-id-type="doi" specific-use="author">10.1145/3099023.3099082</pub-id></mixed-citation></ref>
<ref id="b27"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Tvarozek</surname>, <given-names>J.</given-names></name>, <name><surname>Konopka</surname>, <given-names>M.</given-names></name>, <name><surname>Hucko</surname>, <given-names>J.</given-names></name>, <name><surname>Navrat</surname>, <given-names>P.</given-names></name>, &#x26; <name><surname>Bielikova</surname>, <given-names>M.</given-names></name></person-group> (<year>2017</year>). Robust Recording of Program Comprehension Studies with Eye Tracking for Repeatable Analysis and Replay [abstract]. In Radach, R., Deubel, H., Vorstius, C., &#x26; Hofmann, M.J. (Eds.), Abstracts of the 19th European Conference on Eye Movements, 2017, Wuppertal. Journal of Eye Movement Research, 10(6), 293.</mixed-citation></ref>
</ref-list>

<app-group>
	<app>
	
      <title>Appendix</title>
	  

	
	<fig id="app01" fig-type="figure" position="anchor">
					<label>Figure 7</label>
					<caption>
<p>An overview of the components of the Group Studies system. The web application and the client application are the principal components of the system; optional external applications may be used by the participant during the recording session.</p>
					</caption>
					<graphic id="appendix01" xlink:href="jemr-11-03-f-figure-07.png"/>
				</fig>

	<fig id="app02" fig-type="figure" position="anchor">
					<label>Figure 8</label>
					<caption>
<p>The main internal components of the UXR web application for the management of group studies, data storage and access, and distribution of client applications in the laboratory.</p>
					</caption>
					<graphic id="appendix02" xlink:href="jemr-11-03-f-figure-08.png"/>
				</fig>  

	<fig id="app03" fig-type="figure" position="anchor">
					<label>Figure 9</label>
					<caption>
<p>The main internal components of the UXC client application of the Group Studies system. The application automatically records experiment sessions with multiple devices and allows external applications to control the recording and access the gaze data.</p>
					</caption>
					<graphic id="appendix03" xlink:href="jemr-11-03-f-figure-09.png"/>
				</fig>             
							

	
	</app>
</app-group>

<fn-group>
  <fn id="fn1">
    <p><ext-link ext-link-type="uri" xlink:href="https://github.com/uxifiit/UXC/wiki/Stimuli-Timeline-Definition" xlink:show="new">https://github.com/uxifiit/UXC/wiki/Stimuli-Timeline-Definition</ext-link></p>
    <p><ext-link ext-link-type="uri" xlink:href="https://github.com/uxifiit/UXC/wiki/Session-Recording-Definition" xlink:show="new">https://github.com/uxifiit/UXC/wiki/Session-Recording-Definition</ext-link></p>
  </fn>
  <fn id="fn2">
    <p><ext-link ext-link-type="uri" xlink:href="https://www.tobiipro.com/product-listing/tobii-pro-sdk/" xlink:show="new">https://www.tobiipro.com/product-listing/tobii-pro-sdk/</ext-link></p>
  </fn>
  <fn id="fn3">
    <p><ext-link ext-link-type="uri" xlink:href="https://www.ffmpeg.org/" xlink:show="new">https://www.ffmpeg.org/</ext-link></p>
  </fn>
  <fn id="fn4">
    <p><ext-link ext-link-type="uri" xlink:href="http://umediaserver.net/components/index.html" xlink:show="new">http://umediaserver.net/components/index.html</ext-link></p>
  </fn>
  <fn id="fn5">
    <p><ext-link ext-link-type="uri" xlink:href="https://microsoft.github.io/monaco-editor/index.html" xlink:show="new">https://microsoft.github.io/monaco-editor/index.html</ext-link></p>
  </fn>
  <fn id="fn6">
    <p><ext-link ext-link-type="uri" xlink:href="https://github.com/uxifiit/UXI.GazeToolkit/" xlink:show="new">https://github.com/uxifiit/UXI.GazeToolkit/</ext-link></p>
  </fn>
  <fn id="fn7">
    <p><ext-link ext-link-type="uri" xlink:href="https://github.com/uxifiit/UXC" xlink:show="new">https://github.com/uxifiit/UXC</ext-link></p>
  </fn>
  <fn id="fn8">
    <p><ext-link ext-link-type="uri" xlink:href="https://github.com/uxifiit/UXR" xlink:show="new">https://github.com/uxifiit/UXR</ext-link></p>
  </fn>
  <fn id="fn9">
    <p><ext-link ext-link-type="uri" xlink:href="https://github.com/uxifiit/UXI.Libs" xlink:show="new">https://github.com/uxifiit/UXI.Libs</ext-link></p>
  </fn>
</fn-group>
</back>
</article>

