<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">
<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.10.1.3</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Eye tracking in Educational Science: Theoretical frameworks and research agendas.</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Jarodzka</surname>
						<given-names>Halszka</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
					<xref ref-type="aff" rid="aff2">2</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Holmqvist</surname>
						<given-names>Kenneth </given-names>
					</name>
					<xref ref-type="aff" rid="aff3">3</xref>
						<xref ref-type="aff" rid="aff2">2</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Gruber</surname>
						<given-names>Hans</given-names>
					</name>
					<xref ref-type="aff" rid="aff4">4</xref>
						<xref ref-type="aff" rid="aff5">5</xref>
				</contrib>
        <aff id="aff1">
		<institution>Open University of the Netherlands</institution>, <country>Netherlands</country>
        </aff>
		<aff id="aff2">
		<institution>Lund University</institution>, <country>Sweden</country>
        </aff>
		<aff id="aff3">
		<institution>UPSET, NWU Vaal</institution>, <country>South Africa</country>
        </aff>
		
			<aff id="aff4">
		<institution>University of Regensburg</institution>, <country>Germany</country>
        </aff>
		
			<aff id="aff5">
		<institution>Turku University</institution>, <country>Finland</country>
        </aff>
		</contrib-group>
		<pub-date date-type="pub" publication-format="electronic"> 
		<day>4</day>  
		<month>2</month>
        <year>2017</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2017</year>
	</pub-date>
      <volume>10</volume>
      <issue>1</issue>
	  <elocation-id>10.16910/jemr.10.1.3</elocation-id>
	<permissions> 
	<copyright-year>2017</copyright-year>
	<copyright-holder>Jarodzka et al.</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
	<abstract>
          <p>Eye tracking is increasingly being used in Educational Science, and the eye tracking community&#8217;s interest in this topic has grown accordingly. In this paper we briefly introduce the discipline of Educational Science and explain why coupling it with eye tracking research is worthwhile. We then introduce three major research areas in Educational Science that have already used eye tracking successfully: First, eye tracking has been used to improve the instructional design of computer-based learning and testing environments, often using hyper- or multimedia. Second, eye tracking has shed light on expertise and its development in visual domains, such as chess or medicine. Third, eye tracking has recently also been used to promote visual expertise by means of eye movement modeling examples. We outline the main educational theories for these research areas and indicate where further eye tracking research is needed to expand them.</p></abstract>
	 <kwd-group>
        <kwd>applied eye tracking</kwd>
        <kwd>education</kwd>
        <kwd>learning</kwd>
        <kwd>expertise</kwd>
      </kwd-group>
    </article-meta>
  </front>
<body>



<sec id="s1">
<title>Introduction</title>
<p>Eye tracking was developed to measure where we look. For a long time, and up until now, the focus has been on optimizing the apparatuses to measure accurately and unobtrusively how the eyes move, on considering which eye movements can be distinguished from a neurological perspective (cf. the discussion of whether post-saccadic oscillations are separate eye movements or belong to saccades), and on developing software to detect these different types of eye movements. These topics are still ongoing, and there is still plenty of room for this fundamental eye tracking research. But already from the beginning, these apparatuses were applied – irrespective of the many fundamental unknowns and imperfections – to answer research questions from other fields. This applied eye tracking research began with letting people view art paintings [<xref ref-type="bibr" rid="b93">93</xref>]. Linguistics quickly jumped onto the eye tracking train, and it became probably the best investigated field of applied eye tracking research [<xref ref-type="bibr" rid="b67 b68">67, 68</xref>]. Later on, usability and human-computer interaction researchers discovered the value of eye tracking for their purposes [<xref ref-type="bibr" rid="b33">33</xref>]. A rather young field of applied eye tracking research is Educational Science, which we would like to introduce to the readers here. Let us begin with what Educational Science actually entails.</p>
<p>Educational Science investigates how people learn and how this learning can be fostered with instruction. But what is learning? Kids at school begin with deciphering single letters and end up analyzing complex texts and relating these to accompanying graphs or pictures. University students begin by studying countless facts over years to finally become highly specialized experts who effortlessly diagnose complex problems. Hence, learning is the act of acquiring or improving knowledge, skills, or behavior; its result is a persistent change of these. Learning follows a trajectory from an initial encounter with a topic or task, such as studying a textbook page for 30 minutes, to mastering it at high levels of expertise, in professional development lasting for decades. Thus, learning is a process rather than merely an outcome, such as a grade or a diploma. Researchers in Educational Science investigate this process to understand how learning is constituted and how it can be fostered through instruction.</p>
<p>Eye tracking [<xref ref-type="bibr" rid="b29">29</xref>] has become an important tool for investigating learning processes over the past years. The reason is that we take in most information via our eyes; this is true when we learn, but also when we execute a professional task. Consider, for instance, scientific illustrations. Such illustrations of the composition or functioning of diverse systems have been around for hundreds of years. Below you see an example from the 19th century [<xref ref-type="bibr" rid="b48">48</xref>] on the flight of birds (<xref ref-type="fig" rid="fig01">Figure 1</xref>). Not only professionals had to deal with such illustrations; students, too, had to use them to study the subject matter. Nowadays, with increasing possibilities to create visualizations, both their use and their variability have mushroomed. For instance, professionals have to operate complex computer-generated simulations (e.g., interactive 3D medical images), while students have to learn from all sorts of visualizations, such as videos, and often have to integrate information from many sources. And these are just a few examples of where eye tracking can aid in understanding and even improving learning and its instruction within Educational Science.</p>

<fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1</label>
					<caption>
						<p>Scientific illustration on the flight of birds. Otto
Lilienthal, Der Vogelflug als Grundlage der Fliegekunst,
Berlin, 1889.</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-10-01-c-figure-01.png"/>
				</fig>
<p>Nowadays, learning often takes place in environments that are rich in information. These environments may be learning materials, such as textbooks or e-learning settings. But they may also be working environments, such as a surgical room for medical residents or a flight simulator for pilots. Often, they are so information-rich that they can easily overwhelm the learner. Basically, there are two ways to deal with this issue. First, the environment can be adapted to the learner. This approach is most effective in initial stages of learning and is called Instructional Design. Instructional material that is designed to make optimal use of the human cognitive information processing system as well as the abilities of the learner enables the learner to make progress autonomously and efficiently. In later stages of learning, it is important to encounter environments in their full complexity. This is, for instance, the case in workplace learning. In such cases, the second option comes into play, namely scaffolding the learner to the environment. This part of educational research is called expertise development, again with the long-term aim of enabling the learner to develop autonomously. The theories used in Educational Science are based on findings from fundamental research on cognition and perception, but are at the same time applicable to concrete educational practice.</p>
<p>In the following we will describe these two areas of research in education with concrete examples from our own research. Next we will show how both areas can be integrated into a training method of visual expertise, called eye movement modeling examples.</p>
</sec>	
	 
<sec id="s2">
<title>Instructional Design – adapting the environment to the learner’s abilities</title>
	<sec id="s2a">
	<title>Theories of human learning – the working memory perspective</title>
	<p>Let us begin with the initial stage of learning: a person who has little prior knowledge of a topic wants to learn new facts from a textbook, for instance about the functioning of a car engine. The material presented in this book contains a text describing the functioning of this engine, but also several graphs that show how the different elements of the engine move at different stages of the stroke cycle. This person might experience considerable difficulty relating all this information into one coherent mental model in his or her mind. He or she might also be distracted by a picture of a fancy car placed on the page. The research area of Instructional Design investigates how to construct learning material that optimally supports the learner. One very important aspect of this is how the material is visually presented.</p>
	<p>The strongest focus in Instructional Design lies on the (visual) flow and processing of information to and within working memory. This view is based on (a simple version of) Baddeley&#8217;s working memory model [<xref ref-type="bibr" rid="b3">3</xref>] and Paivio&#8217;s dual coding theory [<xref ref-type="bibr" rid="b63">63</xref>]. The two most influential theories in Instructional Design are the Cognitive Theory of Multimedia Learning (CTML) [<xref ref-type="bibr" rid="b54">54</xref>] and the Cognitive Load Theory (CLT) [<xref ref-type="bibr" rid="b11">11</xref>]. Both theories assume that (a) working memory capacity is limited and learning can only take place if enough capacity is available and not consumed by &#8216;bad&#8217; Instructional Design, and that (b) learning only takes place if the learner actively engages with the learning material or the task. The Cognitive Load Theory [<xref ref-type="bibr" rid="b11">11</xref>] mainly states that working memory capacity can be consumed by different types of load, which can be attributed to the difficulty of the task itself (known as intrinsic load), to an ineffective layout of the instructional material (known as extraneous load), or to active elaborations on the task content (known as germane load). Only the latter results in learning. The Cognitive Theory of Multimedia Learning [<xref ref-type="bibr" rid="b54">54</xref>] focuses on the working memory&#8217;s dual coding in interaction with the instructional material and long-term memory. This theory predicts how pictures and words are processed in working memory depending on their modality (written or spoken) and integrated with long-term memory content. For learning to occur, relevant information from the material must be visually selected, organized into mental models, and integrated with prior knowledge. If this happens, the person has learned.
It is easy to see that these theories include statements on perceptual processes (e.g., visual search for relevant information; integration of information from different sources), although such processes were not directly tested when the theories were formed.</p>
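<p>The additive load assumption of CLT can be sketched as a toy calculation. This is our own illustration, not part of the theory: the capacity units and numbers below are hypothetical.</p>

```python
# Toy sketch of CLT's additive load assumption (our own illustration;
# the theory itself is verbal, not computational). Units are arbitrary.

WM_CAPACITY = 10.0  # hypothetical working memory capacity

def germane_capacity(intrinsic, extraneous):
    """Capacity left for germane (learning-relevant) processing.

    Learning can only occur if intrinsic plus extraneous load leave
    spare capacity; otherwise working memory is fully consumed.
    """
    spare = WM_CAPACITY - intrinsic - extraneous
    return max(spare, 0.0)

# A hard topic with a clean layout still leaves room to learn:
assert germane_capacity(intrinsic=6, extraneous=1) == 3.0
# The same topic with a poor layout leaves none:
assert germane_capacity(intrinsic=6, extraneous=5) == 0.0
```

<p>The sketch makes the key point of the guidelines below concrete: intrinsic load is fixed by the task, so only reducing extraneous load frees capacity for learning.</p>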
	<p>Both theories result in astonishingly similar guidelines on how to design (the layout of) instructional material [<xref ref-type="bibr" rid="b55 b81">55, 81</xref>]. The aim of these guidelines is to decrease unnecessary cognitive processing (i.e., extraneous load) and to foster the cognitive processes leading to learning (i.e., germane load). These guidelines are meant to make learning efficient (i.e., as much content learned in as little time as possible). The most established guidelines include:</p>
	<p>Seeking <bold>coherence of information</bold>. First and foremost, it is crucial to avoid unnecessary information in instructional material, such as decorative pictures. As the learner tries to make sense of every piece of information given and to integrate it with the other presented information and with his or her own prior knowledge, irrelevant information will only unnecessarily consume cognitive capacity.</p>
	<p>Avoiding <bold>redundant information</bold>. The exact same information should not be given in different formats, because the learner tries to integrate all pieces of information with each other as well as with prior knowledge. This in turn costs cognitive capacity, which is then no longer available for learning. One common &#8216;bad&#8217; example is presenting text on slides while reading it out loud at the same time.</p>
	<p>Making use of <bold>multimedia</bold>. Even though the exact same information should not be presented in different modalities, the same subject matter should preferably be presented in different ways. For instance, an explanation of a car engine is easier to understand with an accompanying picture or animation.</p>
	<p>Making use of <bold>different modalities</bold>. To account for the dual-coding characteristics of working memory, instructional material should present related information in different modalities. For instance, a graph accompanied by spoken rather than written text.</p>
	<p>Avoiding <bold>split attention</bold> by seeking contiguity. Instructional material should present related information that needs to be integrated close together, in both space and time. For instance, the legend of a graph is better incorporated into the graph itself than presented at its side.</p>
	<p>More principles have been developed over time and fill entire textbooks [<xref ref-type="bibr" rid="b54">54</xref>], but these are the most fundamental ones. These guidelines sound valid and were often supported by empirical studies – but not always.</p>
	</sec>
	<sec id="s2b">
	<title>Testing learning theories in educational practice</title>
	<p>The theories described above were developed on the basis of many empirical studies that were conducted under specific circumstances. We will exemplify this with the studies of Mayer (for an overview of 15 years of studies: [<xref ref-type="bibr" rid="b54">54</xref>]) and describe how new studies should enrich these findings. First, most studies were conducted with psychology students as participants. This is common research practice, as psychology students have to participate in research for course credits and form the backbone of a lot of psychological research. For many research topics that should be equal across humans (e.g., perception, memory), psychology students are valid participants. For educational research, however, they represent a preselected group with very specific characteristics that may influence the outcomes (e.g., in Germany only students with very high grades are allowed to enter psychology studies). Thus, we argue that it is crucial to test the actual target group of a learning material when investigating educational principles. Second, the illustrations used were very specific. In most studies, Mayer used short black-and-white drawings (animated or static) showing the formation of lightning (or a bicycle pump). Of course, it was important to keep the material constant when investigating different principles. Nowadays, however, we must acknowledge that this was a very specific format (simple black-and-white drawings) and a specific topic (shouldn&#8217;t lightning formation be known to university students?). Third, these studies used short, one-sentence texts in English. This may have caused artefacts in the findings. For instance, research suggests that a modality effect only occurs for short sentences, while for long sentences only the last part is affected [<xref ref-type="bibr" rid="b73">73</xref>], or that it might even occur only for English text [<xref ref-type="bibr" rid="b50">50</xref>]. We argue that it is necessary to test the guidelines and principles found thus far on diverse material that uses more up-to-date multimedia.</p>
	<p>In the following, we present two examples, carried out in ecologically valid scenarios, in which eye tracking shed light on the processes underlying these effects. In the first example, we tested the <bold>split-attention effect</bold> [<xref ref-type="bibr" rid="b37">37</xref>]. In our study, we used multimedia material on the topic of arts that is used nation-wide for the assessment of all Dutch pupils at secondary school level. Moreover, our participants were 16-year-old pupils. So, we used ecologically valid material that was tested with the actual target group. The material itself consisted not of one task, but of eight. Each task consisted of a text paragraph describing the task background and additional multimedia material, such as pictures, text, or videos. We compared two versions of this material (<xref ref-type="fig" rid="fig02">Figure 2</xref>): In one version, all additional material was presented on one side of the screen and the task text on the other. This is a classic split-attention design, as the pupils must visually search for the related information. In the second version, all additional material was placed within the text, right where it was referred to. This corresponds to a classic integrated design, as it allows the pupils to process the multimedia information right when it is needed.</p>
<fig id="fig02" fig-type="figure" position="float">
					<label>Figure 2</label>
					<caption>
						<p>The computer-based testing environment in a
split (left) and in an integrated design (right). Adapted
from Jarodzka, Janssen, et al. (2015)[<xref ref-type="bibr" rid="b37">37</xref>].</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-10-01-c-figure-02.png"/>
				</fig>
	<p>Surprisingly, pupils achieved better test scores in the split version of the test (50&#x25; correct vs. 44&#x25; correct in the integrated format). Eye tracking data showed that pupils largely neglected the additional information in the split design (32 sec fixation time). Contrary to the predictions of the CTML [<xref ref-type="bibr" rid="b54">54</xref>], pupils did not invest much effort in integrating the related information, which would have consumed cognitive capacity (5 points on a 9-point score for both conditions). Actually, these pupils were &#8216;lazy&#8217; (or clever!) and ignored everything that they figured was not mandatory to solve the task. This was indeed the better strategy, as it turned out that this additional information was not crucial for solving these tasks correctly. So was the integrated design pointless? On the contrary! Eye tracking results showed that exactly the same pupils processed all information in the integrated design (44 sec fixation time). Hence, they might have built a richer mental model in these cases. Probably, the test items were just not appropriately designed to tap this richness of the mental model. Either way, learners might not always be as eager to actively process all given information as multimedia theories assume them to be.</p>
	<p>In another example, we investigated the <bold>multimedia effect</bold> [<xref ref-type="bibr" rid="b61">61</xref>]. In this study, we used multimedia material on the topic of vector calculus. Again, for our participants this was relevant educational material, as they were university physics students. These students solved eight tasks. Each task was composed of a text describing the problem, including a formula, and a statement about this formula that the students had to confirm or reject (i.e., task performance). Additionally, half of the problems included a graph that presented one exemplary instance of the formula (<xref ref-type="fig" rid="fig03">Figure 3</xref>).</p>

<fig id="fig03" fig-type="figure" position="float">
					<label>Figure 3</label>
					<caption>
						<p>Exemplary task from the multimedia condition.
Adapted from Ögren et al. (2016)[<xref ref-type="bibr" rid="b61">61</xref>].</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-10-01-c-figure-03.png"/>
				</fig>
	<p>The CTML would predict that such an additional visualization should enrich the mental model the students are building and thus lead to better performance. This was not what we found (56&#x25; correct with graphs vs. 52&#x25; correct without graphs). Instead, we found a bias in students to confirm the statement if a graph was present (65&#x25; confirmation vs. 47&#x25; rejection). This is in line with findings that scientific pictures make text appear more credible [<xref ref-type="bibr" rid="b56">56</xref>]. Hence, our students probably saw the graph, judged it as being correct, and concluded the same for the statement. Eye tracking data revealed that in the multimedia tasks students paid less attention to the task description (50&#x25; vs. 40&#x25;) and to the statement (45&#x25; vs. 40&#x25;) – obviously, as they also looked at the graph (20&#x25;). The amount of looking at the graph was not related to task performance (dwell time on the graph when answering correctly 20&#x25; vs. incorrectly 19&#x25;). Looking at the statement, however, was positively related to task performance (dwell time on the statement when answering correctly 43&#x25; vs. incorrectly 38&#x25;). Also, many transitions between the statement and the graph were positively related to task performance (correct answers: 9 transitions vs. 6 transitions for incorrect answers). Lin and Lin [<xref ref-type="bibr" rid="b49">49</xref>] reported similar findings when investigating geometrical problem solving with eye tracking: while looking at the graph was an indicator of perceived difficulty, looking at the area where the task is actively performed (here: the calculation area) was positively correlated with task performance. Consequently, we must specify the CTML based on our findings: it is not enough that learners process a graph; they must process it in the context of the main task question. Only then are graphs beneficial; otherwise they might even lead learners to be uncritical.
Moreover, a recent study by Krejtz, Duchowski, Krejtz, Kopacz, and Chrzastowski-Wachtel [<xref ref-type="bibr" rid="b44">44</xref>] has shown that the type of graph presented plays a role: interactive graphs evoke more systematic text-graph integrative saccades than static or dynamic graphs. Future research should investigate whether this also has a positive effect on learning outcomes.</p>
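<p>Measures of the kind reported in these studies – dwell proportions per area of interest (AOI) and counts of transitions between two AOIs – can be computed from a fixation sequence along the following lines. This is a minimal sketch; the log format, AOI names, and durations are hypothetical, not the data of the studies cited above.</p>

```python
# Sketch of a typical AOI analysis: dwell proportions per AOI and
# transition counts between two AOIs. Data format is hypothetical.

from collections import Counter

# Each fixation: (AOI label, duration in ms), in temporal order.
fixations = [
    ("task", 300), ("statement", 250), ("graph", 400),
    ("statement", 200), ("graph", 350), ("statement", 300),
]

def dwell_proportions(fixs):
    """Share of total fixation time spent in each AOI."""
    total = sum(dur for _, dur in fixs)
    dwell = Counter()
    for aoi, dur in fixs:
        dwell[aoi] += dur
    return {aoi: dur / total for aoi, dur in dwell.items()}

def transitions(fixs, a, b):
    """Count gaze switches between AOIs a and b, in either direction."""
    pairs = zip(fixs, fixs[1:])
    return sum(1 for (x, _), (y, _) in pairs if {x, y} == {a, b})

print(dwell_proportions(fixations))
print(transitions(fixations, "statement", "graph"))
```

<p>In this toy sequence, half of the dwell time falls outside the task description, and the statement-graph pair accounts for four transitions; real analyses additionally require mapping raw gaze samples to AOIs and detecting fixations first.</p>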
	</sec>
	<sec id="s2c">
	<title>Research agenda for Instructional Design theories</title>
	<p>We can already conclude from these two examples that eye tracking can help to explain unexpected findings, as it allows unique insights into the processes underlying learning outcomes. One possible reason for the unexpected findings might be that the perceptual processes assumed by the multimedia theories (CTML, CLT) were not directly tested with eye tracking when these theories were developed. These theories have been very helpful heuristics for designing instructional material. However, we must now gather new evidence to further develop, specify, and correct these theories. The following issues should be considered in future eye tracking research to achieve this:</p>
	<p>In the latter example presented above [<xref ref-type="bibr" rid="b61">61</xref>], we saw that the given guidelines might need to be specified further. Hence, it is crucial to test the <bold>other guidelines for Instructional Design</bold> with eye tracking as well, and under ecologically valid circumstances (i.e., actual learning material with real students).</p>
	<p>In the first example above [<xref ref-type="bibr" rid="b37">37</xref>], we saw that even if those guidelines are appropriate, some <bold>basic pre-assumptions of these theories</bold> might not be (e.g., that learners do their best to actively integrate material). Hence, it is crucial to test these as well. In particular, the many assumptions about perceptual processes must be tested directly with eye tracking.</p>
	<p>The research discussed thus far considered cognitive processes. However, <bold>metacognitive processes</bold> are also crucial for learning (i.e., monitoring what I can already do and what I still need to practice). Too little research has been conducted on this important topic so far [<xref ref-type="bibr" rid="b84">84</xref>].</p>
	<p>Finally, eye tracking research is usually conducted in laboratories where one participant at a time is tested under minimal disturbance. This, however, has little to do with educational practice. From social psychology research, we know that performing a task in the presence of others can be inhibiting, but also facilitating [<xref ref-type="bibr" rid="b9">9</xref>]. Eye tracking research also shows effects of social presence on attention [<xref ref-type="bibr" rid="b62 b72">62, 72</xref>]. Hence, future eye tracking research should investigate <bold>social effects</bold> on processes of learning, for instance within so-called digital classrooms.</p>
	<p>It has to be noted that eye tracking – in particular in methodological triangulation with other process data – can be used not only to derive instructional guidelines, but also to usability-test concrete computer-based multimedia learning environments. For a comprehensive description of how to proceed in such a case, see Groner and Siegenthaler [<xref ref-type="bibr" rid="b26">26</xref>].</p>
	</sec>
</sec>	
	
<sec id="s3">
<title>Expertise development – scaffolding the learner to the environment</title>
	<sec id="s3a">
	<title>Theories of human learning – the long-term memory perspective</title>
	<p>So far, we have looked into initial learning processes. The more a person knows about a task or a domain, the more we must also take long-term memory into account. All knowledge is stored in long-term memory and, with increasing experience in a task, it is reorganized. This <bold>knowledge organization</bold>, in turn, changes the situation for working memory to such an extent that Ericsson and Kintsch [<xref ref-type="bibr" rid="b20">20</xref>] suggested the concept of long-term working memory. For instance, with increasing numerical skills, children do not have to memorize six digits separately, but can form two chunks of three digits each and thus increase their working memory capacity [<xref ref-type="bibr" rid="b59">59</xref>]. With ongoing mathematical education, children can even solve mathematical problems described in text form. They quickly see the crucial cues that indicate which type of formula should be used. Based on this information, they know which other information they have to search for in the text, and which they can ignore, to fill in the formula. Next, they solve the formula and formulate a solution to the problem. This procedure describes an exemplary use of a schema [<xref ref-type="bibr" rid="b88">88</xref>]. Similar to a chunk, a schema is not only an efficient way to store information in long-term memory, but it also expands working memory: one entire schema functions as a single entity. Thus, plenty of capacity is left over to collect new information to fill the schema&#8217;s empty slots. If a schema includes a specific temporal order, such as visiting a restaurant (enter the restaurant, look for a table, order from the menu, &#8230;), it is called a script [<xref ref-type="bibr" rid="b74">74</xref>]. Another form of knowledge organization is forming short-cuts within long chains of reasoning by encapsulating parts of them into entities that are only unfolded into their pieces if necessary [<xref ref-type="bibr" rid="b10 b75">10, 75</xref>].
The more knowledge a person has of a task and the more efficiently it is organized, the faster and more accurately this person can execute the task, until he or she eventually becomes an expert [<xref ref-type="bibr" rid="b19">19</xref>]. For certain professions, such as medicine, we already know so much from research that these short-cuts and organizations of knowledge can be described very specifically [<xref ref-type="bibr" rid="b35">35</xref>]. In the current section, we specifically focus on visual expertise and what we know so far about its knowledge and skill organization.</p>
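<p>The digit-chunking example above can be made concrete with a small sketch (our own illustration; the digits and chunk size are arbitrary): chunking does not change the information itself, only the number of items that working memory must hold.</p>

```python
# Toy illustration of chunking: regrouping six digits into two
# three-digit chunks reduces the number of working memory items
# from six to two. Digits and chunk size are arbitrary examples.

def chunk(digits, size):
    """Split a digit string into consecutive chunks of the given size."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

six_digits = "493817"
chunks = chunk(six_digits, 3)

assert len(six_digits) == 6        # six separate items without chunking
assert chunks == ["493", "817"]    # the same information, regrouped
assert len(chunks) == 2            # only two items once chunked
```

<p>A schema works analogously at a larger scale: an entire structured procedure occupies a single working memory slot.</p>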
	<p><bold>The specific case of visual expertise</bold></p>
	<p>Expertise is defined as consistently superior performance on a specified set of <bold>representative tasks</bold> for a domain [<xref ref-type="bibr" rid="b22 b24">22, 24</xref>]. This superiority is due to the above-described efficient organization of large amounts of knowledge and skills in a domain. This efficient knowledge organization is reflected in different aspects, depending on the task itself. One example is the above-mentioned, well-documented cognitive chunking in chess [<xref ref-type="bibr" rid="b12 b15">12, 15</xref>]. Typically, expert and novice chess players are asked to rebuild chess formations from memory, a task in which experts excel greatly [<xref ref-type="bibr" rid="b25 b27">25, 27</xref>]. Eye tracking research revealed that this chunking is also reflected in perceptual processes: experts tend to look in between chess pieces, while novices look at each single piece [<xref ref-type="bibr" rid="b69 b70">69, 70</xref>]. We see that the concept of chunking in chess is reflected in two aspects: cognitive recall performance and perceptual processes. Similar findings also occur in other domains of expertise, such as playing music [<xref ref-type="bibr" rid="b47">47</xref>]. In most cases, reading from notes is an important part of playing music, and thus it is one aspect of musical expertise that is investigated with eye tracking [<xref ref-type="bibr" rid="b1 b64 b65 b66">1, 64, 65, 66</xref>].</p>
	<p>Reingold and Sheridan [<xref ref-type="bibr" rid="b70">70</xref>] provide a comprehensive overview of eye tracking research on visual expertise. The authors draw two main conclusions from their review. First, experts are able to encode domain-related patterns in a superior way, which is due to their larger visual span. Second, experts’ eye tracking data often contain information of which the experts themselves were not aware; this is a clear indicator of experts’ tacit knowledge. The increased visual span reflects the chunking in perceptual processes described above, while the tacit knowledge can be linked to encapsulated knowledge and its automated use. When reading this review, one quickly realizes that most research was conducted in the traditional expertise domains of chess and medicine. These studies used static and perceptually simple stimuli, such as chess boards or X-rays of the chest.</p>
	<p>However, much visual expertise is exercised in perceptually far more complex environments, such as air traffic control [<xref ref-type="bibr" rid="b7">7</xref>], new medical imaging techniques [<xref ref-type="bibr" rid="b8">8</xref>], or meteorology [<xref ref-type="bibr" rid="b80">80</xref>]. These environments are difficult to process cognitively for two reasons [<xref ref-type="bibr" rid="b2 b11 b54">2, 11, 54</xref>]. First, they are information-rich [<xref ref-type="bibr" rid="b18 b76">18, 76</xref>]: they contain large amounts of information, much of it irrelevant, and thematic relevance and visual saliency often do not coincide. Hence, it is challenging to select the relevant information. Second, these environments are dynamic [<xref ref-type="bibr" rid="b28 b52">28, 52</xref>]: information may be transient, and several information elements may appear (and disappear) simultaneously (cf. the split-attention effect). Consequently, it is challenging to keep information active so that it can be integrated. The stimuli used in most visual expertise research so far are therefore not representative of most expertise domains, and we cannot simply generalize these findings to information-rich or even dynamic domains. Research in this field is the focus of the following section.</p>
	</sec>
	<sec id="s3b">
	<title>Research on visual expertise in information-rich environments</title>
	<p>The concept of visual expertise is difficult to tackle as it entails so many different aspects (as described above). In most cases, it is thus necessary to approach this concept from different angles by means of methodological triangulation [<xref ref-type="bibr" rid="b16 b82">16, 82</xref>]. On the one hand, eye tracking can tackle the perceptual aspects of visual expertise, while other data sources complete the picture on the cognitive side, such as performance data, verbal data, and even drawings of where a person thinks he or she looked. Due to the nature of this concept and the research tradition, verbal data are most often used to investigate expertise [<xref ref-type="bibr" rid="b23">23</xref>]. They can take the form of interviews, self-explanations, retrospective reports, or thinking aloud (for an overview of different forms of verbal data and how to combine them with eye tracking, see Chapters 3.4.8 and 4.7.3 in Holmqvist et al. [<xref ref-type="bibr" rid="b29">29</xref>]). If implemented carefully, verbal reports will not disturb the actual task performance; instead, they will give us more information on why a person looked at a certain area. In the following, we present examples from our own research using this methodological triangulation to investigate visual expertise and its knowledge organization in information-rich environments.</p>
	<p>One reason why this field is still so little investigated [<xref ref-type="bibr" rid="b70">70</xref>], despite its obvious relevance as described above, is software limitations. In 2010, we published the very first article investigating visual expertise with eye tracking using video material and an AOI analysis [<xref ref-type="bibr" rid="b38">38</xref>]. This study investigated expertise in the domain of marine zoology: seven professors and PhD students and 14 biology students classified the swimming modes of reef fish. In reality, marine zoologists often exercise their profession under water (either snorkeling or diving). To get as close as possible to this situation, we asked participants to watch four videos of single fish swimming in a colorful reef for as long as they wanted. In this way, we created representative, but at the same time experimentally controllable, tasks. Afterwards, participants watched their own eye tracking recordings and reported what they had been thinking while approaching the task [<xref ref-type="bibr" rid="b87">87</xref>]. As we wanted to compare where experts and novices looked, we used a cumbersome manual procedure to define AOIs on videos, which delivered interesting findings: experts clearly outperformed novices (experts: 4/4 points; novices: 3/4 points; η<sub>p</sub><sup>2</sup> = .18), which means that they were indeed true experts in this task (not a trivial finding in expertise research!). We also compared the sequences in which participants inspected the different body parts of the fish. Experts’ scanpaths were more diverse than novices’ (scanpath similarity of experts: 67&#x25;; novices: 72&#x25;; η<sub>p</sub><sup>2</sup> = .08). Probably, novices simply followed the most visually salient features, which resulted in rather similar scanpaths. Experts, on the other hand, seem to have had different scripts to approach this task, which resulted in different scanpaths. 
These different scripts might be due to different forms of experience (e.g., when diving you see the fish from the side, whereas when snorkeling you see it from above; consequently, you rely on different features when classifying its motion). Indeed, dwell time analyses of AOI data, taken together with participants’ verbal reports, showed that some of the experts took a short-cut: they first classified the fish and deduced from this how it must swim (dwell time on the corresponding AOIs of experts: 375 ms; novices: 160 ms; η<sub>p</sub><sup>2</sup> = .29; corresponding verbal utterances of experts: 57; novices: 26; η<sub>p</sub><sup>2</sup> = .56). In sum, we found that visual expertise in marine zoology leads to different types of scripts, probably depending on the concrete experience with the task, and we could describe which forms these scripts can take.</p>
	<p>In a next step, we moved towards an interactive task stimulus, namely digital pathology [<xref ref-type="bibr" rid="b30 b31 b32">30, 31, 32</xref>]. In the first study [<xref ref-type="bibr" rid="b31">31</xref>], we compared how participants at three expertise levels diagnosed 10 pathological slides based on a two-second inspection. They were eye tracked during this inspection and reported afterwards how they arrived at their diagnosis. As expected, novices were often incorrect (38&#x25; correct diagnoses), incomplete, and inconclusive in their diagnoses (hardly any conclusive terms or diagnostic specifications mentioned) and looked little at relevant areas (3 fixations). Experts (85&#x25; correct diagnoses) and intermediates (87&#x25;), on the other hand, diagnosed these slides equally well. However, they differed in how they processed the slides. Experts relied on their first inspection of the relevant area (fixation dispersion in the 1st trial part: 135) and then further checked the slide for other potentially relevant information (2nd trial part: 167). In their explanations, they mainly focused on the typicality of the slide (e.g., high usage of comparative terms). Intermediates kept inspecting the relevant area throughout the entire trial duration (fixation dispersion 1st trial part: 192; 2nd: 165) and considered many potential diagnoses (e.g., frequent mentioning of pathologies). Regarding their knowledge organization, we may conclude that experts possess such consolidated illness scripts that they can rely on them, which leaves capacity to check for further potential problems. Intermediates already possess the relevant schemata; however, they still have to check many competing schemata to reach a diagnosis. Even though this study yielded interesting findings, the task we used was not really representative of this profession. 
Hence, in the following studies [<xref ref-type="bibr" rid="b32 b30">32, 30</xref>], we used a digital version of a tissue sample that could be operated like a regular microscope: zooming in and out as well as panning around the slide. This made it a highly representative task. Despite the progress in commercial eye tracking software, using a stimulus that can be changed individually to this extent (and that is not a website) is still challenging and requires a lot of manual work and programming. We found that experts were more efficient, as they used fewer microscope movements (e.g., opposed zooming movements: η<sub>p</sub><sup>2</sup> = .03; expertise effect for all navigation behavior: η<sub>p</sub><sup>2</sup> = .11) and shorter reasoning chains to reach a diagnosis (reasoning terms used by experts: 109; intermediates: 63; novices: 159). This is in line with the findings from the first study, which indicated that experts possess consolidated illness scripts that allow fast decision making. Also, navigation data showed that experts visited fewer diagnostically relevant areas (experts: 3.05; intermediates: 3.98; novices: 4.05). This raises the question of whether it is even possible to define areas as being relevant for each expertise group; the effects might be difficult to grasp because experts understand the stimuli so quickly. Intermediates again showed processes in line with Study 1: they took longer to reach a decision (experts: 86 sec; intermediates: 110 sec; novices: 152 sec) and looked more at relevant areas while basing their diagnosis on many specific abnormalities (novices: 35; intermediates: 96; experts: 94). Thus, intermediates have already established schemata; however, they still need a lot of time to check them. Novices, again, looked all over the slide and clearly lacked the relevant knowledge (or its organization).</p>
	<p>Another expertise domain we have investigated is air traffic control [<xref ref-type="bibr" rid="b36 b90">36, 90</xref>]. Controlling air traffic is a truly challenging task: constantly arriving and departing airplanes need to be coordinated with a high emphasis on safety, but also on environmentally friendly travel. Thirty-one air traffic controllers at three different expertise levels solved nine situations. Each depicted a real radar screen, with airplanes (including type, height, and speed), sectors, and starting and landing points. Participants reported the optimal order of arrival of the airplanes while their eye movements were recorded. Individuals with higher levels of expertise clearly outperformed those at lower levels (experts: 4.63; intermediates: 4.30; novices: 3.82; η<sub>p</sub><sup>2</sup> = .49). Interestingly, the performance of those with higher expertise was more similar to one another than that of those with lower expertise (experts: 0.59; intermediates: 0.53; novices: 0.43; η<sub>p</sub><sup>2</sup> = .44; in contrast to our findings with marine zoologists: Jarodzka et al. [<xref ref-type="bibr" rid="b38">38</xref>]). In this profession, it thus seems that there is one optimal script for solving this task. Eye tracking analyses revealed that individuals with higher expertise looked mainly at the aircraft and at the background between them (e.g., time to first fixation on aircraft for experts: 41.59 sec; intermediates: 54.6 sec; novices: 65.06 sec; η<sub>p</sub><sup>2</sup> = .37). This indicates that the script individuals with more expertise establish allows them to better focus on the relevant information and to chunk single information entities. 
Novices, on the other hand, had no appropriate strategy to rely on and fell back on the sub-optimal means-end strategy, as indicated by their looking mainly at the destination of the airplanes (e.g., time to first fixation on destination for experts: 38.38 sec; intermediates: 36.62 sec; novices: 25.37 sec; η<sub>p</sub><sup>2</sup> = .36). We have to admit, though, that participants only saw static screenshots of radar screens. In a recent follow-up study, we therefore used a more representative task of this profession [<xref ref-type="bibr" rid="b36">36</xref>], in which twelve participants with varying expertise levels worked on a simulation of an actual airport. The situation was entirely realistic, including communication with co-workers. The first eye tracking analyses already reveal a drastic difference from the first study: novices mainly focus on the area of their own responsibility, while individuals with higher expertise look more outside this area, including the starting and landing points of the planes. This strategy allowed them to plan ahead in this very dynamic environment. Hence, the scripts that individuals with higher expertise possess in this task must be updated dynamically when the task involves more time pressure.</p>
	</sec>
	<sec id="s3c">
	<title>Research agenda for visual expertise research</title>
	<p>From the research presented above, but also from other research on the visual expertise of teachers [<xref ref-type="bibr" rid="b45 b92">45, 92</xref>], pediatric neurologists [<xref ref-type="bibr" rid="b4">4</xref>], or radiologists [<xref ref-type="bibr" rid="b43 b83">43, 83</xref>], we have already learned a lot about visual expertise in information-rich environments. Experts use chunks (e.g., in air traffic control) and short-cuts (e.g., in marine zoology), and this can also be seen in their perceptual processes and measured with eye tracking. We have also clearly seen, in each profession, the use of cognitive scripts or schemata and their influence on the visual processing of an environment, and vice versa. Often, even very concrete statements about the form of these schemata or scripts could be made. Still, many open research questions remain.</p>
	<p>To what extent can we generalize these findings? We have seen that sometimes even slight changes in the task can lead to different outcomes (cf. air traffic control), while sometimes the changes point in the same direction (cf. pathology). Also, some findings from one profession (e.g., experts become more similar in air traffic control) do not hold in another (e.g., experts in marine zoology become more diverse). Hence, future research should systematically vary task characteristics and professions to understand which aspects of visual expertise are generic and which are domain-specific.</p>
	<p>A lot of research on visual expertise has been conducted on simplified tasks. This was largely due to technological restrictions of eye tracking apparatus and software. Research should not be held back by technological obstacles, but should rather feed their development. In particular, two issues must be tackled to foster ecologically valid research on visual expertise. First, the detection of smooth pursuit is needed to enable valid analyses of dynamic stimuli. It is not enough to detect smooth pursuit with a stand-alone algorithm; it must be implemented in existing analysis software so that it can be used in applied research as well. Second, more automated analyses for mobile eye tracking are needed. Clearly, the truest way of analyzing visual expertise often requires real-world eye tracking; however, cumbersome manual analyses often hold researchers back.</p>
	<p>The research presented here has shown how much we can benefit from methodological triangulation when investigating multifaceted concepts such as visual expertise. In a next step, research should directly link the analysis of verbal and eye tracking data. Only in this way will it be possible to make more concrete statements about the cognitive structures underlying these processes.</p>
	<p>Finally, it must become the ultimate aim of this research line to unravel the organization of knowledge and skills in long-term memory and how it develops with increasing expertise. Only then will it be possible to draw meaningful conclusions from eye tracking data that go beyond superficial statements such as ‘experts had longer fixation durations’, which have virtually no meaning for professional or educational practice [<xref ref-type="bibr" rid="b42">42</xref>].</p>
	</sec>
</sec>

<sec id="s4">
<title>Eye movement modeling examples: Bridging Instructional Design and expertise research</title>
	<sec id="s4a">
	<title>Theories of human learning – training visual aspects of expertise</title>
	<p>So far, we have discussed how initial learning takes place, how it can be supported by Instructional Design, and which role eye tracking can play in this. We have then shown how individuals develop further over time until they become experts in visual domains. In this section, we try to bring both research areas together to show how this road to visual expertise can be supported by instruction. This is not as trivial as it may sound, as Instructional Design entails the simplification of learning material, while expertise development requires being confronted with authentic, information-rich tasks.</p>
	<p>One very powerful way of learning authentic tasks is imitation. It is so inherent to our system that even two-week-old babies imitate adults [<xref ref-type="bibr" rid="b58">58</xref>]. 
	Bandura [<xref ref-type="bibr" rid="b5">5</xref>] showed in his classic Bobo doll experiment that imitation indeed leads to learning. Children watched videos of an adult playing with a ‘Bobo doll’, an inflatable, large doll that stands up again once it is tipped over. Depending on the experimental condition, the adult either behaved aggressively towards the doll (e.g., punching it) or not. When these children were later confronted with the doll, they treated it in a similar way to the model they had seen in the video [<xref ref-type="bibr" rid="b6">6</xref>].</p>
	<p>Consequently, research on teaching and training has picked up this approach. Indeed, decades of research have shown that studying examples of a model successfully executing a task is more efficient for learning than learning by trial-and-error [<xref ref-type="bibr" rid="b41">41</xref>]. 
	It is not trivial, though, to model a task. Many critical processes, such as solving a mathematical equation, are not observable from the outside. In such cases, the model verbalizes his or her thoughts (cognitive apprenticeship: Collins, Brown, and Newman [<xref ref-type="bibr" rid="b14">14</xref>]; process-oriented modeling examples: Van Gog, Paas, and Van Merriënboer [<xref ref-type="bibr" rid="b86">86</xref>]). But what about perceptual processes in a visual task? We know that simply telling beginners to “look the way experts do” does change where they look, but does not necessarily improve their performance [<xref ref-type="bibr" rid="b43">43</xref>]. These beginners may now know where to look, but not why.</p>
	<p>To address this issue, we developed eye movement modeling examples (EMME). These are video recordings of a model executing a task while explaining how he or she goes about it. In addition, the model’s eye movements are tracked and replayed on top of the video [<xref ref-type="bibr" rid="b85">85</xref>]. However, novices are often already overwhelmed by the information-rich material that forms the basis of visual tasks, and adding an eye movement display on top of that is likely to overwhelm them further. An alternative is to display the model’s eye movements by reducing existing information in the video [<xref ref-type="bibr" rid="b17 b60">17, 60</xref>]. This results in a spotlight wandering across the video, while the rest of it appears blurred. <xref ref-type="fig" rid="fig04">Figure 4</xref> presents screenshots of both a traditional and a spotlight display used in EMME.</p>
	<fig id="fig04" fig-type="figure" position="float">
					<label>Figure 4</label>
					<caption>
						<p>Eye movement modeling examples with a traditional dot display (left) and a spotlight display (right).
Material used in Jarodzka et al. (2010)[<xref ref-type="bibr" rid="b38">38</xref>].</p></caption>
					<graphic id="graph04" xlink:href="jemr-10-01-c-figure-04.png"/>
				</fig>
	
	
	</sec>
	<sec id="s4b">
	<title>Research on eye movement modeling examples</title>
	<p>The research described in the last section has shown that experts differ dramatically from novices. Hence, there is no point in trying to ‘make novices act like experts’. Consequently, in our research, we have always used a systematic procedure to make the expert model act more didactically. On the one hand, the models in our studies were not only experts in their domains, but also highly experienced in teaching them. Hence, they knew from experience which difficulties students face in these tasks and how best to explain the tasks to them. On the other hand, we used a specific recording procedure to ensure that the EMME videos were of high quality. First, to ensure a close coupling of the models’ voice and eye movements, we showed them the task itself (e.g., a video recording of something they needed to classify); only after they were familiar with this specific task did we begin the recording. Such recording procedures have resulted in tight gaze-voice couplings elsewhere [<xref ref-type="bibr" rid="b71">71</xref>]. Second, to shift the models’ focus from the task to the novice recipient, they evaluated their own recordings based on several questions: Will a student know what each term means? Is the task explained in terms comprehensible to students? Is it explained in enough detail? Is all the information a student needs contained? Is all the contained information really important? Such questions have been shown to improve experts’ written communication to novices [<xref ref-type="bibr" rid="b40">40</xref>]. Third, if necessary, the models could revise their recordings.</p>
	<p>We have used such EMMEs, for instance, to train the classification of the locomotion patterns of reef fish [<xref ref-type="bibr" rid="b39">39</xref>]. In the learning phase, participants studied four videos with either a dot-display EMME, a spotlight EMME, or a video with verbal explanations only (<xref ref-type="fig" rid="fig04">Figure 4</xref>). Meanwhile, their eye movements were recorded to study whether they actually followed the eye movement display of the model on the videos. In the testing phase, participants watched four new videos without any form of guidance or verbal explanation; their task was to classify these videos accordingly. While watching the testing videos, participants’ eye movements were recorded to investigate the efficiency of their visual search for relevant information in the videos. Then, they indicated via a questionnaire how they interpreted this information. Results showed that both EMME videos guided the eye movements of the participants to the spots where the model had looked (measured as coherence between the model’s and the learner’s scanpaths: Spot = 15.10; Dot = 15.11; Control = 12.07; η<sub>p</sub><sup>2</sup> = .39). Moreover, in the spotlight condition, participants showed a more efficient visual search on the testing videos (measured, e.g., as time to first fixation on relevant areas: Spot = 1236 ms; Dot = 1530 ms; Control = 1632 ms; η<sub>p</sub><sup>2</sup> = .11), while participants in the dot group exhibited better interpretation performance in comparison to the control group (measured as &#x25; correct: Dot = 74&#x25;; Spot = 69&#x25;; Control = 67&#x25;; η<sub>p</sub><sup>2</sup> = .12).</p>
	<p>We conducted a similar study in the domain of diagnosing epileptic seizures in infants [<xref ref-type="bibr" rid="b34">34</xref>]. The experimental procedure was the same as in the study described above, except for the task: participants watched videos of infants either suffering from a form of epileptic seizure or showing a differential diagnosis. Even though both tasks sound very different, they had crucial commonalities: participants had to identify relevant body parts (fins used to produce propulsion vs. limbs that might be affected by the disease) and to describe how exactly these body parts move. Based on these two steps, a classification or a diagnosis, respectively, can be made. Also, both steps rely on the visual inspection of video input. A further difference from the fish locomotion study was the display of the eye movements: the traditional display was shown as a circle instead of a dot so as not to occlude relevant information in the video (e.g., a twitching eye), and the spotlight display was far more subtle than in the fish locomotion study. Results showed an overall advantage of the spotlight display for attention guidance in the learning phase (measured as Euclidean distance to the model’s gaze: Spot = 210; Circle = 238; Control = 237; η<sub>p</sub><sup>2</sup> = .13), as well as for visual search (measured, e.g., as time until looking at the relevant area: Spot = 189 ms; Circle = 274 ms; Control = 289 ms; η<sub>p</sub><sup>2</sup> = .13) and interpretation performance (measured as &#x25; correct: Spot = 60&#x25;; Circle = 53&#x25;; Control = 50&#x25;; η<sub>p</sub><sup>2</sup> = .11) in the testing phase.</p>
	<p>Similar training approaches have been used for visual tasks that require hardly any prior knowledge [<xref ref-type="bibr" rid="b51 b53 b77">51, 53, 77</xref>], for expertise tasks [<xref ref-type="bibr" rid="b46 b57 b79">46, 57, 79</xref>], and even for problem solving in dyads [<xref ref-type="bibr" rid="b13">13</xref>]. However, these studies did not test whether the performance differences found could be transferred to similar tasks (as we did in our studies), i.e., whether learning took place. Thus, strictly speaking, these cannot be seen as educational studies.</p>
	</sec>
	<sec id="s4c">
	<title>Research agenda for EMME</title>
	<p>EMME and similar gaze-based approaches may be helpful in training visual tasks. Still, we should not become too enthusiastic, as there are also enough examples where these approaches had no effect (single conditions in the two studies reported above) or even detrimental effects [<xref ref-type="bibr" rid="b78 b85">78, 85</xref>]. Hence, the question is not whether EMME fosters the performance of visual tasks (or even visual expertise), but rather under which circumstances it does so. We thus recommend that the following research questions be addressed in the future:</p>
	<p>The role of task and stimulus characteristics: The research on EMME covers a diversity of tasks (from insight problem solving, to performance only, to transfer and learning) and a diversity of stimuli (from simple line drawings to complex videos). A systematic variation and concrete description of these factors should shed more light on when EMME are effective. For instance, existing studies already indicate that the visual complexity of the task is crucial: Van Gog et al. [<xref ref-type="bibr" rid="b85">85</xref>] used a task that could be executed without perceptual input and found negative effects of EMME on performance [<xref ref-type="bibr" rid="b89">89</xref>]. Jarodzka et al. [<xref ref-type="bibr" rid="b39">39</xref>] used a fish locomotion classification task in which all relevant information was visually salient; EMME was partly helpful in this case. Jarodzka, Balslev, et al. [<xref ref-type="bibr" rid="b34">34</xref>] used a pediatric neurology task in which the relevant information was transient and not salient; this is where EMME were most helpful.</p>
	<p>The role of the eye movement display design is an entirely understudied aspect. Apart from two studies [<xref ref-type="bibr" rid="b34 b39">34, 39</xref>], none has compared different designs directly, even though these studies indicate that this might be a crucial success factor for EMME. Their results showed that reducing information in a spotlight manner guides visual attention on EMME videos best; the spotlight also facilitates visual search on testing videos most. However, the interpretation of relevant features is only enhanced if holistic processing is possible during learning.</p>
	<p>Moreover, the role of didacticizing the expert model, as we have done in our studies, has not been directly investigated. In fact, most studies provide hardly any description of how the model’s eye movements were collected. This is surprising, as we know very well from research to what large extent experts and novices differ in their processing, and how unlikely it thus is that simply forcing experts’ processes upon novices will work.</p>
	<p>Finally, the EMME methodology could be embedded into well-established methods of expertise training. For instance, the 4C/ID model [<xref ref-type="bibr" rid="b91">91</xref>] is an elaborated model for designing a curriculum for complex tasks. It includes modeling episodes that might easily be filled in with EMME for specific visual tasks. Another example is deliberate practice [<xref ref-type="bibr" rid="b21">21</xref>]. This method involves a detailed study of one’s own and others’ performance. If the task includes visual aspects, studying the eye movements of an expert (or one’s own) might provide additional benefits.</p>
	</sec>
</sec>

<sec id="s5">
<title>Discussion</title>
<p>In the current paper, we have introduced Educational Science as a field of applied eye tracking research. We have structured it along three topics, namely Instructional Design, expertise development, and eye movement modeling examples. The topic of Instructional Design investigates how the learning of new skills or knowledge can be supported by optimally designing the corresponding learning material. Educational theories on human cognitive processing, in particular in working memory, have resulted in guidelines on how to design such material and which processes learners should engage in to achieve learning gains efficiently. Up until now, eye tracking has helped us to understand how learners actually process such instructional material, which was not always in line with what theory predicted. Future eye tracking research on this topic can thus help to further corroborate, improve, and enrich these theories; not only to understand and support processes of initial learning, but also to better understand how we as humans process information in working memory under realistic circumstances.</p>
<p>The topic of expertise development investigates the other side of the learning spectrum, namely people who already have a lot of experience and knowledge of a task. How do they process information? How do they differ from people with slightly less or more experience? A large body of expertise research began many years ago to expand towards the visual processes underlying expertise and thus towards eye tracking research. This research showed that, indeed, the changes in long-term memory structures that accompany the development of expertise influence not only working memory processing, but also the visual processing of the environment, and vice versa. Future eye tracking research on this promising topic must dive into more real-world scenarios with diverse tasks and information-rich, dynamic environments. Not only will we understand more about the development and characteristics of visual expertise in this way, but we will also better understand how long-term memory structures influence the way we see and interpret our environment, both in everyday and in challenging situations.</p>
<p>The third topic we have presented is eye movement modeling examples. This is the youngest topic within the field of applied eye tracking research in Educational Science, but nonetheless a very promising one. It addresses the question of how visual expertise could be trained with the help of instructional videos of real-world tasks that are explained by experts in the field. These videos include an overlay of the expert’s visual focus to support the learner in connecting the verbal explanation of the expert to the real-world complexity of the task. Of course, this research topic yields practical implications for educational practice. But it also provides interesting research questions beyond education, such as: How best to guide people’s eye movements on videos? How to support speech comprehension by displaying the eye movements of the speaker to the listener?</p>
<p>It is important to keep in mind that applied eye tracking in Educational Science is clearly applied research. This means that the tasks and stimuli used are very diverse and less well controlled than in fundamental experiments in, for instance, vision science. However, they are ecologically valid, which is crucial if this research is to allow actual conclusions for educational practice. Therefore, research questions should always be developed together with stakeholders from educational practice, and the models or frameworks derived in research should always be tested ‘in the wild’, that is, in schools and universities. But this also means that we can learn a lot from this research field about real-world processing, which in turn can be fruitful for establishing new research questions for fundamental research.</p>
<p>Furthermore, this research area is still relatively new. This means that there are no well-established eye tracking measures, as in reading research, that can be clearly related to concrete processes. This is partly because simply less research has been conducted than, for instance, on reading. But the ecologically valid nature of the field also makes it almost impossible to hope for such simple relations: each learning environment and each expertise domain is so inherently different in terms of tasks and stimuli that appropriate eye tracking measures have to be found anew each time. The process of finding these measures must not be driven by what manufacturers provide by default. Instead, it is important to work from existing theories and to carefully operationalize measures that are clearly related to concrete hypotheses.</p>
</sec>	

<sec id="s6" sec-type="COI-statement">
<title>Acknowledgements and Conflict of Interest</title>
<p>This paper is based on two keynote speeches given by the first author: at the 4th Polish Eye Tracking Conference, Warsaw, Poland (2016), and at the 7th Scandinavian Workshop on Applied Eye Tracking, Turku, Finland (2016).</p>
<p>The authors declare that there is no conflict of interest regarding the publication of this paper.</p>
</sec>	

</body>

<back>
<ref-list>
<ref id="b1"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Arthur</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Blom</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Khuu</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Music sight-reading expertise, visually disrupted score and eye movements.</article-title> <source>Journal of Eye Movement Research</source>, <volume>9</volume>(<issue>7</issue>), <fpage>1</fpage>–<lpage>12</lpage>. <pub-id pub-id-type="doi">10.16910/jemr.9.7.1</pub-id><issn>1995-8692</issn></mixed-citation></ref>
<ref id="b2"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Atkinson</surname>, <given-names>R. C.</given-names></string-name>, &amp; <string-name><surname>Shiffrin</surname>, <given-names>R. M.</given-names></string-name></person-group> (<year>1968</year>). <chapter-title>Human memory: A proposed system and its control processes</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>K. W.</given-names> <surname>Spence</surname></string-name> &amp; <string-name><given-names>J. T.</given-names> <surname>Spence</surname></string-name> (<role>Eds.</role>),</person-group> <source>The psychology of learning and motivation: Advances in research and theory</source> (<volume>Vol. 2</volume>, pp. <fpage>89</fpage>–<lpage>192</lpage>). <publisher-loc>New York</publisher-loc>: <publisher-name>Academic Press</publisher-name>.</mixed-citation></ref>
<ref id="b3"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Baddeley</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Working memory: Theories, models, and controversies.</article-title> <source>Annual Review of Psychology</source>, <volume>63</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>29</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-psych-120710-100422</pub-id><pub-id pub-id-type="pmid">21961947</pub-id><issn>0066-4308</issn></mixed-citation></ref>
<ref id="b4"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Balslev</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>de Grave</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Muijtjens</surname>, <given-names>A. M.</given-names></string-name>, <string-name><surname>Eika</surname>, <given-names>B.</given-names></string-name>, <etal>. . .</etal> <string-name><surname>Scherpbier</surname>, <given-names>A. J.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Visual expertise in paediatric neurology.</article-title> <source>European Journal of Paediatric Neurology</source>, <volume>16</volume>(<issue>2</issue>), <fpage>161</fpage>–<lpage>166</lpage>. <pub-id pub-id-type="doi">10.1016/j.ejpn.2011.07.004</pub-id><pub-id pub-id-type="pmid">21862371</pub-id><issn>1090-3798</issn></mixed-citation></ref>
<ref id="b5"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Bandura</surname>, <given-names>A.</given-names></string-name></person-group> (<year>1977</year>). <source>Social learning theory</source>. <publisher-loc>Englewood Cliffs</publisher-loc>: <publisher-name>Prentice-Hall</publisher-name>.</mixed-citation></ref>
<ref id="b6"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Bandura</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Ross</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Ross</surname>, <given-names>S. A.</given-names></string-name></person-group> (<year>1961</year>). <article-title>Transmission of aggression through imitation of aggressive models.</article-title> <source>Journal of Abnormal and Social Psychology</source>, <volume>63</volume>(<issue>3</issue>), <fpage>575</fpage>–<lpage>582</lpage>. <pub-id pub-id-type="doi">10.1037/h0045925</pub-id><pub-id pub-id-type="pmid">13864605</pub-id><issn>0096-851X</issn></mixed-citation></ref>
<ref id="b7"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Beck</surname>, <given-names>M. R.</given-names></string-name>, <string-name><surname>Trenchard</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>van Lamsweerde</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Goldstein</surname>, <given-names>R. R.</given-names></string-name>, &amp; <string-name><surname>Lohrenz</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Searching in clutter: Visual attention strategies of expert pilots.</article-title> <source>Proceedings of the Human Factors and Ergonomics Society Annual Meeting</source>, <volume>56</volume>(<issue>1</issue>), <fpage>1411</fpage>–<lpage>1415</lpage>. <pub-id pub-id-type="doi">10.1177/1071181312561400</pub-id><issn>1071-1813</issn></mixed-citation></ref>
<ref id="b8"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Bertram</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Kaakinen</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Bensch</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Helle</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Lantto</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Niemi</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Lundbom</surname>, <given-names>N.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Eye movements of radiologists reflect expertise in CT study interpretation: A potential tool to measure resident development.</article-title> <source>Radiology</source>, <volume>281</volume>(<issue>3</issue>), <fpage>805</fpage>–<lpage>815</lpage>. <pub-id pub-id-type="doi">10.1148/radiol.2016151255</pub-id><pub-id pub-id-type="pmid">27409563</pub-id><issn>0033-8419</issn></mixed-citation></ref>
<ref id="b9"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Bond</surname>, <given-names>C. F.</given-names>, <suffix>Jr</suffix>.,</string-name> &amp; <string-name><surname>Titus</surname>, <given-names>L. J.</given-names></string-name></person-group> (<year>1983</year>). <article-title>Social facilitation: A meta-analysis of 241 studies.</article-title> <source>Psychological Bulletin</source>, <volume>94</volume>(<issue>2</issue>), <fpage>265</fpage>–<lpage>292</lpage>. <pub-id pub-id-type="doi">10.1037/0033-2909.94.2.265</pub-id><pub-id pub-id-type="pmid">6356198</pub-id><issn>0033-2909</issn></mixed-citation></ref>
<ref id="b10"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Boshuizen</surname>, <given-names>H. P. A.</given-names></string-name>, &amp; <string-name><surname>Schmidt</surname>, <given-names>H. G.</given-names></string-name></person-group> (<year>1992</year>). <article-title>On the role of biomedical knowledge in clinical reasoning by experts, intermediates, and novices.</article-title> <source>Cognitive Science</source>, <volume>16</volume>(<issue>2</issue>), <fpage>153</fpage>–<lpage>184</lpage>. <pub-id pub-id-type="doi">10.1207/s15516709cog1602_1</pub-id><issn>0364-0213</issn></mixed-citation></ref>
<ref id="b11"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Chandler</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Sweller</surname>, <given-names>J.</given-names></string-name></person-group> (<year>1991</year>). <article-title>Cognitive load theory and the format of instruction.</article-title> <source>Cognition and Instruction</source>, <volume>8</volume>(<issue>4</issue>), <fpage>293</fpage>–<lpage>332</lpage>. <pub-id pub-id-type="doi">10.1207/s1532690xci0804_2</pub-id><issn>0737-0008</issn></mixed-citation></ref>
<ref id="b12"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Chase</surname>, <given-names>W. G.</given-names></string-name>, &amp; <string-name><surname>Simon</surname>, <given-names>H. A.</given-names></string-name></person-group> (<year>1973</year>). <chapter-title>The mind’s eye in chess</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>W. G.</given-names> <surname>Chase</surname></string-name> (<role>Ed.</role>),</person-group> <source>Visual information processing</source> (pp. <fpage>215</fpage>–<lpage>281</lpage>). <publisher-loc>New York</publisher-loc>: <publisher-name>Academic Press</publisher-name>. <pub-id pub-id-type="doi">10.1016/B978-0-12-170150-5.50011-1</pub-id></mixed-citation></ref>
<ref id="b13"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Cherubini</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Nüssli</surname>, <given-names>M. A.</given-names></string-name>, &amp; <string-name><surname>Dillenbourg</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2010</year>). <article-title>This is it! Indicating and looking in collaborative work at distance.</article-title> <source>Journal of Eye Movement Research</source>, <volume>3</volume>(<issue>5</issue>), <fpage>1</fpage>–<lpage>20</lpage>. <pub-id pub-id-type="doi">10.16910/jemr.3.5.3</pub-id><issn>1995-8692</issn></mixed-citation></ref>
<ref id="b14"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Collins</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Brown</surname>, <given-names>J. S.</given-names></string-name>, &amp; <string-name><surname>Newman</surname>, <given-names>S. E.</given-names></string-name></person-group> (<year>1989</year>). <chapter-title>Cognitive apprenticeship: Teaching the craft of reading, writing, and mathematics</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>L. B.</given-names> <surname>Resnick</surname></string-name> (<role>Ed.</role>),</person-group> <source>Cognition and instruction: Issues and agendas</source> (pp. <fpage>453</fpage>–<lpage>494</lpage>). <publisher-loc>Mahwah</publisher-loc>: <publisher-name>Erlbaum</publisher-name>.</mixed-citation></ref>
<ref id="b15"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>De Groot</surname>, <given-names>A. D.</given-names></string-name></person-group> (<year>2008</year>). <source>Thought and choice in chess</source>. <publisher-loc>The Hague</publisher-loc>: <publisher-name>Mouton</publisher-name>. <comment>(Original work published 1946)</comment> <pub-id pub-id-type="doi">10.5117/9789053569986</pub-id></mixed-citation></ref>
<ref id="b16"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Denzin</surname>, <given-names>N. K.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Triangulation 2.0.</article-title> <source>Journal of Mixed Methods Research</source>, <volume>6</volume>(<issue>2</issue>), <fpage>80</fpage>–<lpage>88</lpage>. <pub-id pub-id-type="doi">10.1177/1558689812437186</pub-id><issn>1558-6898</issn></mixed-citation></ref>
<ref id="b17"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Dorr</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Vig</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Gegenfurtner</surname>, <given-names>K. R.</given-names></string-name>, <string-name><surname>Martinetz</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Barth</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2008</year>, October). <article-title>Eye movement modelling and gaze guidance.</article-title> Paper presented at the <source>Fourth International Workshop on Human-Computer Conversation</source>, <conf-loc>Oxford, UK</conf-loc>.</mixed-citation></ref>
<ref id="b18"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Dwyer</surname>, <given-names>F. M.</given-names></string-name></person-group> (<year>1976</year>). <article-title>Adapting media attributes for effective learning.</article-title> <source>Educational Technology</source>, <volume>16</volume>(<issue>8</issue>), <fpage>7</fpage>–<lpage>13</lpage>.<issn>0013-1962</issn></mixed-citation></ref>
<ref id="b19"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="editor"><string-name><surname>Ericsson</surname>, <given-names>K. A.</given-names></string-name>, <string-name><surname>Charness</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Feltovich</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Hoffman</surname>, <given-names>R. R.</given-names></string-name> (<role>Eds.</role>)</person-group>. (<year>2006</year>). <source>The Cambridge handbook of expertise and expert performance</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. <pub-id pub-id-type="doi">10.1017/CBO9780511816796</pub-id></mixed-citation></ref>
<ref id="b20"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Ericsson</surname>, <given-names>K. A.</given-names></string-name>, &amp; <string-name><surname>Kintsch</surname>, <given-names>W.</given-names></string-name></person-group> (<year>1995</year>). <article-title>Long-term working memory.</article-title> <source>Psychological Review</source>, <volume>102</volume>(<issue>2</issue>), <fpage>211</fpage>–<lpage>245</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.102.2.211</pub-id><pub-id pub-id-type="pmid">7740089</pub-id><issn>0033-295X</issn></mixed-citation></ref>
<ref id="b21"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Ericsson</surname>, <given-names>K. A.</given-names></string-name>, <string-name><surname>Krampe</surname>, <given-names>R. T.</given-names></string-name>, &amp; <string-name><surname>Tesch-Römer</surname>, <given-names>C.</given-names></string-name></person-group> (<year>1993</year>). <article-title>The role of deliberate practice in the acquisition of expert performance.</article-title> <source>Psychological Review</source>, <volume>100</volume>(<issue>3</issue>), <fpage>363</fpage>–<lpage>406</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.100.3.363</pub-id><issn>0033-295X</issn></mixed-citation></ref>
<ref id="b22"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Ericsson</surname>, <given-names>K. A.</given-names></string-name>, &amp; <string-name><surname>Lehmann</surname>, <given-names>A. C.</given-names></string-name></person-group> (<year>1996</year>). <article-title>Expert and exceptional performance: Evidence of maximal adaptation to task constraints.</article-title> <source>Annual Review of Psychology</source>, <volume>47</volume>(<issue>1</issue>), <fpage>273</fpage>–<lpage>305</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.psych.47.1.273</pub-id><pub-id pub-id-type="pmid">15012483</pub-id><issn>0066-4308</issn></mixed-citation></ref>
<ref id="b23"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Ericsson</surname>, <given-names>K. A.</given-names></string-name>, &amp; <string-name><surname>Simon</surname>, <given-names>H. A.</given-names></string-name></person-group> (<year>1993</year>). <source>Protocol analysis: Verbal reports as data</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</mixed-citation></ref>
<ref id="b24"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Ericsson</surname>, <given-names>K. A.</given-names></string-name>, &amp; <string-name><surname>Smith</surname>, <given-names>J.</given-names></string-name></person-group> (<year>1991</year>). <chapter-title>Prospects and limits in the empirical study of expertise</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>K. A.</given-names> <surname>Ericsson</surname></string-name> &amp; <string-name><given-names>J.</given-names> <surname>Smith</surname></string-name> (<role>Eds.</role>),</person-group> <source>Towards a general theory of expertise: Prospects and limits</source> (pp. <fpage>1</fpage>–<lpage>38</lpage>). <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</mixed-citation></ref>
<ref id="b25"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Freyhof</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Gruber</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Ziegler</surname>, <given-names>A.</given-names></string-name></person-group> (<year>1992</year>). <article-title>Expertise and hierarchical knowledge representation in chess.</article-title> <source>Psychological Research</source>, <volume>54</volume>(<issue>1</issue>), <fpage>32</fpage>–<lpage>37</lpage>. <pub-id pub-id-type="doi">10.1007/BF01359221</pub-id><issn>0340-0727</issn></mixed-citation></ref>
<ref id="b26"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Groner</surname>, <given-names>R.</given-names></string-name>, &amp; <string-name><surname>Siegenthaler</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2009</year>). Improving the usability of eLearning tools: The IFeL Multifunctional Analysis and its application in distance teaching. Proceedings of the ICDE/EADTU Conference in Maastricht, June 2009. Retrieved from: <ext-link ext-link-type="uri" xlink:href="http://www.ou.nl/Docs/Campagnes/ICDE2009/Papers/Final_Paper_100Groner.pdf">http://www.ou.nl/Docs/Campagnes/ICDE2009/Papers/Final_Paper_100Groner.pdf</ext-link></mixed-citation></ref>
<ref id="b27"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Gruber</surname>, <given-names>H.</given-names></string-name></person-group> (<year>1991</year>). <source>Wissensakquisition und Gedächtnisleistung in Abhängigkeit vom Expertisegrad</source>. <publisher-loc>Munich</publisher-loc>: <publisher-name>University of Munich, Chair of Education and Educational Psychology</publisher-name>.</mixed-citation></ref>
<ref id="b28"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Hegarty</surname>, <given-names>M.</given-names></string-name></person-group> (<year>1992</year>). <article-title>Mental animation: Inferring motion from static displays of mechanical systems.</article-title> <source>Journal of Experimental Psychology. Learning, Memory, and Cognition</source>, <volume>18</volume>(<issue>5</issue>), <fpage>1084</fpage>–<lpage>1102</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1037//0278-7393.18.5.1084</pub-id> <pub-id pub-id-type="doi">10.1037/0278-7393.18.5.1084</pub-id><pub-id pub-id-type="pmid">1402712</pub-id><issn>0278-7393</issn></mixed-citation></ref>
<ref id="b29"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Andersson</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Dewhurst</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Van de Weijer</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2011</year>). <source>Eye tracking: A comprehensive guide to methods and measures</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="b30"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Jaarsma</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Boshuizen</surname>, <given-names>H. P. A.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Nap</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Verboon</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Van Merriënboer</surname>, <given-names>J. J. G.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Tracks to a medical diagnosis: Expertise differences in visual problem solving.</article-title> <source>Applied Cognitive Psychology</source>, <volume>30</volume>(<issue>3</issue>), <fpage>314</fpage>–<lpage>322</lpage>. <pub-id pub-id-type="doi">10.1002/acp.3201</pub-id><issn>0888-4080</issn></mixed-citation></ref>
<ref id="b31"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Jaarsma</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Nap</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>van Merriënboer</surname>, <given-names>J. J. G.</given-names></string-name>, &amp; <string-name><surname>Boshuizen</surname>, <given-names>H. P. A.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Expertise under the microscope: Processing histopathological slides.</article-title> <source>Medical Education</source>, <volume>48</volume>(<issue>3</issue>), <fpage>292</fpage>–<lpage>300</lpage>. <pub-id pub-id-type="doi">10.1111/medu.12385</pub-id><pub-id pub-id-type="pmid">24528464</pub-id><issn>0308-0110</issn></mixed-citation></ref>
<ref id="b32"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Jaarsma</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Nap</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Van Merriënboer</surname>, <given-names>J. J. G.</given-names></string-name>, &amp; <string-name><surname>Boshuizen</surname>, <given-names>H. P. A.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Expertise in clinical pathology: Bridging the gap.</article-title> <source>Advances in Health Sciences Education : Theory and Practice</source>, <volume>20</volume>(<issue>4</issue>), <fpage>1089</fpage>–<lpage>1106</lpage>. <pub-id pub-id-type="doi">10.1007/s10459-015-9589-x</pub-id><pub-id pub-id-type="pmid">25677013</pub-id><issn>1382-4996</issn></mixed-citation></ref>
<ref id="b33"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Jacob</surname>, <given-names>R.</given-names></string-name>, &amp; <string-name><surname>Karn</surname>, <given-names>K. S.</given-names></string-name></person-group> (<year>2003</year>). <chapter-title>Eye tracking in human-computer interaction and usability research: Ready to deliver the promises</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>J.</given-names> <surname>Hyönä</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Radach</surname></string-name>, &amp; <string-name><given-names>H.</given-names> <surname>Deubel</surname></string-name> (<role>Eds.</role>),</person-group> <source>The mind’s eye: Cognitive and applied aspects of eye movement research</source> (pp. <fpage>573</fpage>–<lpage>605</lpage>). <publisher-loc>Oxford</publisher-loc>: <publisher-name>Elsevier</publisher-name>. <pub-id pub-id-type="doi">10.1016/B978-044451020-4/50031-1</pub-id></mixed-citation></ref>
<ref id="b34"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Balslev</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Scheiter</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Gerjets</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Eika</surname>, <given-names>B.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Conveying clinical reasoning based on visual observation via eye-movement modelling examples.</article-title> <source>Instructional Science</source>, <volume>40</volume>(<issue>5</issue>), <fpage>813</fpage>–<lpage>827</lpage>. <pub-id pub-id-type="doi">10.1007/s11251-012-9218-5</pub-id><issn>0020-4277</issn></mixed-citation></ref>
<ref id="b35"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Boshuizen</surname>, <given-names>H. P. A.</given-names></string-name>, &amp; <string-name><surname>Kirschner</surname>, <given-names>P. A.</given-names></string-name></person-group> (<year>2012</year>). <chapter-title>Cognitive skills in medical diagnosis and intervention</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>P.</given-names> <surname>Lanzer</surname></string-name> (<role>Ed.</role>),</person-group> <source>Catheter-based cardiovascular interventions; Knowledge-based approach</source> (pp. <fpage>69</fpage>–<lpage>86</lpage>). <publisher-loc>Berlin, Germany</publisher-loc>: <publisher-name>Springer</publisher-name>.</mixed-citation></ref>
<ref id="b36"><mixed-citation publication-type="conference" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Gouw</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Van Meeuwen</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Brand-Gruwel</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2015</year>, August). <article-title>Air traffic control: Visual expertise in a dynamic problem solving task.</article-title> Paper presented at the <source>18th European Conference on Eye Movements</source>, <conf-loc>Vienna, Austria</conf-loc>.</mixed-citation></ref>
<ref id="b37"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Janssen</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Kirschner</surname>, <given-names>P. A.</given-names></string-name>, &amp; <string-name><surname>Erkens</surname>, <given-names>G.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Avoiding split attention in computer-based testing: Is neglecting additional information facilitative?</article-title> <source>British Journal of Educational Technology</source>, <volume>46</volume>(<issue>4</issue>), <fpage>803</fpage>–<lpage>817</lpage>. <pub-id pub-id-type="doi">10.1111/bjet.12174</pub-id><issn>0007-1013</issn></mixed-citation></ref>
<ref id="b38"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Scheiter</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Gerjets</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Van Gog</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2010</year>). <article-title>In the eyes of the beholder: How experts and novices interpret dynamic stimuli.</article-title> <source>Learning and Instruction</source>, <volume>20</volume>(<issue>2</issue>), <fpage>146</fpage>–<lpage>154</lpage>. <pub-id pub-id-type="doi">10.1016/j.learninstruc.2009.02.019</pub-id><issn>0959-4752</issn></mixed-citation></ref>
<ref id="b39"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Van Gog</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Dorr</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Scheiter</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Gerjets</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Learning to see: Guiding students’ attention via a model’s eye movements fosters learning.</article-title> <source>Learning and Instruction</source>, <volume>25</volume>, <fpage>62</fpage>–<lpage>70</lpage>. <pub-id pub-id-type="doi">10.1016/j.learninstruc.2012.11.004</pub-id><issn>0959-4752</issn></mixed-citation></ref>
<ref id="b40"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Jucks</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Schulte-Löbbert</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Bromme</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2007</year>). <article-title>Supporting experts’ written knowledge communication through reflective prompts on the use of specialist concepts.</article-title> <source>Zeitschrift für Psychologie / Journal of Psychology</source>, <volume>215</volume>(<issue>4</issue>), <fpage>237</fpage>–<lpage>247</lpage>. <pub-id pub-id-type="doi">10.1027/0044-3409.215.4.237</pub-id><issn>0044-3409</issn></mixed-citation></ref>
<ref id="b41"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kirschner</surname>, <given-names>P. A.</given-names></string-name>, <string-name><surname>Sweller</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Clark</surname>, <given-names>R. E.</given-names></string-name></person-group> (<year>2006</year>). <article-title>Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching.</article-title> <source>Educational Psychologist</source>, <volume>41</volume>(<issue>2</issue>), <fpage>75</fpage>–<lpage>86</lpage>. <pub-id pub-id-type="doi">10.1207/s15326985ep4102_1</pub-id><issn>0046-1520</issn></mixed-citation></ref>
<ref id="b42"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kok</surname>, <given-names>E. M.</given-names></string-name>, &amp; <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Before your very eyes: The value and limitations of eye tracking in medical education.</article-title> <source>Medical Education</source>, <volume>51</volume>(<issue>1</issue>), <fpage>114</fpage>–<lpage>122</lpage>. <pub-id pub-id-type="doi">10.1111/medu.13066</pub-id><pub-id pub-id-type="pmid">27580633</pub-id><issn>0308-0110</issn></mixed-citation></ref>
<ref id="b43"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Kok</surname>, <given-names>E. M.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>de Bruin</surname>, <given-names>A. B. H.</given-names></string-name>, <string-name><surname>BinAmir</surname>, <given-names>H. A.</given-names></string-name>, <string-name><surname>Robben</surname>, <given-names>S. G.</given-names></string-name>, &amp; <string-name><surname>van Merriënboer</surname>, <given-names>J. J.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Systematic viewing in radiology: Seeing more, missing less?</article-title> <source>Advances in Health Sciences Education: Theory and Practice</source>, <volume>21</volume>(<issue>1</issue>), <fpage>189</fpage>–<lpage>205</lpage>. <pub-id pub-id-type="doi">10.1007/s10459-015-9624-y</pub-id><pub-id pub-id-type="pmid">26228704</pub-id><issn>1382-4996</issn></mixed-citation></ref>
<ref id="b44"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Krejtz</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Duchowski</surname>, <given-names>A. T.</given-names></string-name>, <string-name><surname>Krejtz</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Kopacz</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Chrząstowski-Wachtel</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Gaze transitions when learning with multimedia.</article-title> <source>Journal of Eye Movement Research</source>, <volume>9</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>17</lpage>. <pub-id pub-id-type="doi">10.16910/jemr.9.1.5</pub-id><issn>1995-8692</issn></mixed-citation></ref>
<ref id="b45"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Lachner</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Nückles</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2016</year>). <article-title>What makes an expert teacher? Investigating teachers’ professional vision and discourse abilities.</article-title> <source>Instructional Science</source>, <volume>44</volume>(<issue>3</issue>), <fpage>197</fpage>–<lpage>203</lpage>. <pub-id pub-id-type="doi">10.1007/s11251-016-9376-y</pub-id><issn>0020-4277</issn></mixed-citation></ref>
<ref id="b46"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Leff</surname>, <given-names>D. R.</given-names></string-name>, <string-name><surname>James</surname>, <given-names>D. R.</given-names></string-name>, <string-name><surname>Orihuela-Espina</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Kwok</surname>, <given-names>K.-W.</given-names></string-name>, <string-name><surname>Sun</surname>, <given-names>L. W.</given-names></string-name>, <string-name><surname>Mylonas</surname>, <given-names>G.</given-names></string-name>, <etal>. . .</etal> <string-name><surname>Yang</surname>, <given-names>G.-Z.</given-names></string-name></person-group> (<year>2015</year>). <article-title>The impact of expert visual guidance on trainee visual search strategy, visual attention and motor skills.</article-title> <source>Frontiers in Human Neuroscience</source>, <volume>9</volume>, <fpage>526</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2015.00526</pub-id><pub-id pub-id-type="pmid">26528160</pub-id><issn>1662-5161</issn></mixed-citation></ref>
<ref id="b47"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Lehmann</surname>, <given-names>A. C.</given-names></string-name>, &amp; <string-name><surname>Gruber</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2006</year>). <chapter-title>Music</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>K. A.</given-names> <surname>Ericsson</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Charness</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Feltovich</surname></string-name>, &amp; <string-name><given-names>R. R.</given-names> <surname>Hoffman</surname></string-name> (<role>Eds.</role>),</person-group> <source>The Cambridge handbook of expertise and expert performance</source> (pp. <fpage>457</fpage>–<lpage>470</lpage>). <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. <pub-id pub-id-type="doi">10.1017/CBO9780511816796.026</pub-id></mixed-citation></ref>
<ref id="b48"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Lilienthal</surname>, <given-names>O.</given-names></string-name></person-group> (<year>1889</year>). Der Vogelflug als Grundlage der Fliegekunst. Berlin, Germany. Retrieved from <ext-link ext-link-type="uri" xlink:href="https://commons.wikimedia.org/wiki/File:LilienthalFliegekunst.png">https://commons.wikimedia.org/wiki/File:LilienthalFliegekunst.png</ext-link></mixed-citation></ref>
<ref id="b49"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Lin</surname>, <given-names>J. J.</given-names></string-name>, &amp; <string-name><surname>Lin</surname>, <given-names>S. S.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Tracking eye movements when solving geometry problems with handwriting devices.</article-title> <source>Journal of Eye Movement Research</source>, <volume>7</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>15</lpage>. <pub-id pub-id-type="doi">10.16910/jemr.7.1.2</pub-id><issn>1995-8692</issn></mixed-citation></ref>
<ref id="b50"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Lindow</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Fuchs</surname>, <given-names>H. M.</given-names></string-name>, <string-name><surname>Fürstenberg</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Kleber</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Schweppe</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Rummer</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2011</year>). <article-title>On the robustness of the modality effect: Attempting to replicate a basic finding.</article-title> <source>Zeitschrift für Pädagogische Psychologie</source>, <volume>25</volume>(<issue>4</issue>), <fpage>231</fpage>–<lpage>243</lpage>. <pub-id pub-id-type="doi">10.1024/1010-0652/a000049</pub-id><issn>1010-0652</issn></mixed-citation></ref>
<ref id="b51"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Litchfield</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Ball</surname>, <given-names>L. J.</given-names></string-name></person-group> (<year>2011</year>). <article-title>Using another’s gaze as an explicit aid to insight problem solving.</article-title> <source>Quarterly Journal of Experimental Psychology</source>, <volume>64</volume>(<issue>4</issue>), <fpage>649</fpage>–<lpage>656</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1037/a0020082</pub-id> <pub-id pub-id-type="doi">10.1080/17470218.2011.558628</pub-id><pub-id pub-id-type="pmid">21347990</pub-id><issn>1747-0218</issn></mixed-citation></ref>
<ref id="b52"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Lowe</surname>, <given-names>R. K.</given-names></string-name></person-group> (<year>2003</year>). <article-title>Animation and learning: Selective processing of information in dynamic graphics.</article-title> <source>Learning and Instruction</source>, <volume>13</volume>(<issue>2</issue>), <fpage>157</fpage>–<lpage>176</lpage>. <pub-id pub-id-type="doi">10.1016/S0959-4752(02)00018-X</pub-id><issn>0959-4752</issn></mixed-citation></ref>
<ref id="b53"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Mason</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Tornatora</surname>, <given-names>M. C.</given-names></string-name>, &amp; <string-name><surname>Pluchino</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Do fourth graders integrate text and picture in processing and learning from an illustrated science text? Evidence from eye-movement patterns.</article-title> <source>Computers &amp; Education</source>, <volume>60</volume>(<issue>1</issue>), <fpage>95</fpage>–<lpage>109</lpage>. <pub-id pub-id-type="doi">10.1016/j.compedu.2012.07.011</pub-id><issn>0360-1315</issn></mixed-citation></ref>
<ref id="b54"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Mayer</surname>, <given-names>R. E.</given-names></string-name></person-group> (<year>2009</year>). <source>Multimedia learning</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. <pub-id pub-id-type="doi">10.1017/CBO9780511811678</pub-id></mixed-citation></ref>
<ref id="b55"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Mayer</surname>, <given-names>R. E.</given-names></string-name>, &amp; <string-name><surname>Moreno</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2003</year>). <article-title>Nine ways to reduce cognitive load in multimedia learning.</article-title> <source>Educational Psychologist</source>, <volume>38</volume>(<issue>1</issue>), <fpage>43</fpage>–<lpage>52</lpage>. <pub-id pub-id-type="doi">10.1207/S15326985EP3801_6</pub-id><issn>0046-1520</issn></mixed-citation></ref>
<ref id="b56"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>McCabe</surname>, <given-names>D. P.</given-names></string-name>, &amp; <string-name><surname>Castel</surname>, <given-names>A. D.</given-names></string-name></person-group> (<year>2008</year>). <article-title>Seeing is believing: The effect of brain images on judgments of scientific reasoning.</article-title> <source>Cognition</source>, <volume>107</volume>(<issue>1</issue>), <fpage>343</fpage>–<lpage>352</lpage>. <pub-id pub-id-type="doi">10.1016/j.cognition.2007.07.017</pub-id><pub-id pub-id-type="pmid">17803985</pub-id><issn>0010-0277</issn></mixed-citation></ref>
<ref id="b57"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>McNamara</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Booth</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Sridharan</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Caffey</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Grimm</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Bailey</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2012</year>, August). <article-title>Directing gaze in narrative art.</article-title> In <source>Proceedings of the ACM Symposium on Applied Perception</source>, <conf-loc>Los Angeles, USA</conf-loc>. <pub-id pub-id-type="doi">10.1145/2338676.2338689</pub-id></mixed-citation></ref>
<ref id="b58"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Meltzoff</surname>, <given-names>A. N.</given-names></string-name>, &amp; <string-name><surname>Moore</surname>, <given-names>M. K.</given-names></string-name></person-group> (<year>1977</year>). <article-title>Imitation of facial and manual gestures by human neonates.</article-title> <source>Science</source>, <volume>198</volume>(<issue>4312</issue>), <fpage>75</fpage>–<lpage>78</lpage>. <pub-id pub-id-type="doi">10.1126/science.198.4312.75</pub-id><pub-id pub-id-type="pmid">17741897</pub-id><issn>0036-8075</issn></mixed-citation></ref>
<ref id="b59"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Miller</surname>, <given-names>G. A.</given-names></string-name></person-group> (<year>1956</year>). <article-title>The magical number seven plus or minus two: Some limits on our capacity for processing information.</article-title> <source>Psychological Review</source>, <volume>63</volume>(<issue>2</issue>), <fpage>81</fpage>–<lpage>97</lpage>. <pub-id pub-id-type="doi">10.1037/h0043158</pub-id><pub-id pub-id-type="pmid">13310704</pub-id><issn>0033-295X</issn></mixed-citation></ref>
<ref id="b60"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2008</year>). <article-title>Semantic override of low-level features in image viewing – both initially and overall.</article-title> <source>Journal of Eye Movement Research</source>, <volume>2</volume>, <fpage>1</fpage>–<lpage>11</lpage>. <pub-id pub-id-type="doi">10.16910/jemr.2.2.2</pub-id><issn>1995-8692</issn></mixed-citation></ref>
<ref id="b61"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Ögren</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Nyström</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2016</year>). <article-title>There’s more to the multimedia effect than meets the eye: Is seeing pictures believing?</article-title> <source>Instructional Science</source>, <volume>•••</volume>, <fpage>1</fpage>–<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1007/s11251-016-9397-6</pub-id><issn>0020-4277</issn></mixed-citation></ref>
<ref id="b62"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Oliva</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Niehorster</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Holmqvist</surname>, <given-names>K.</given-names></string-name></person-group> (in press). Influence of coactors on saccadic and manual responses. i-Perception.</mixed-citation></ref>
<ref id="b63"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Paivio</surname>, <given-names>A.</given-names></string-name></person-group> (<year>1991</year>). <article-title>Dual coding theory: Retrospect and current status.</article-title> <source>Canadian Journal of Psychology</source>, <volume>45</volume>(<issue>3</issue>), <fpage>255</fpage>–<lpage>287</lpage>. <pub-id pub-id-type="doi">10.1037/h0084295</pub-id><issn>0008-4255</issn></mixed-citation></ref>
<ref id="b64"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Penttinen</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Huovinen</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2011</year>). <article-title>The early development of sight-reading skills in adulthood: A study of eye movements.</article-title> <source>Journal of Research in Music Education</source>, <volume>59</volume>(<issue>2</issue>), <fpage>196</fpage>–<lpage>220</lpage>. <pub-id pub-id-type="doi">10.1177/0022429411405339</pub-id><issn>0022-4294</issn></mixed-citation></ref>
<ref id="b65"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Penttinen</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Huovinen</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name><surname>Ylitalo</surname>, <given-names>A.-K.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Silent music reading: Amateur musicians’ visual processing and descriptive skill.</article-title> <source>Musicae Scientiae</source>, <volume>17</volume>(<issue>2</issue>), <fpage>198</fpage>–<lpage>216</lpage>. <pub-id pub-id-type="doi">10.1177/1029864912474288</pub-id><issn>1029-8649</issn></mixed-citation></ref>
<ref id="b66"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Penttinen</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Huovinen</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name><surname>Ylitalo</surname>, <given-names>A.-K.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Reading ahead: Adult music students’ eye movements in temporally controlled performances of a children’s song.</article-title> <source>International Journal of Music Education</source>, <volume>33</volume>(<issue>1</issue>), <fpage>36</fpage>–<lpage>50</lpage>. <pub-id pub-id-type="doi">10.1177/0255761413515813</pub-id><issn>0255-7614</issn></mixed-citation></ref>
<ref id="b67"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Rayner</surname>, <given-names>K.</given-names></string-name></person-group> (<year>1998</year>). <article-title>Eye movements in reading and information processing: 20 years of research.</article-title> <source>Psychological Bulletin</source>, <volume>124</volume>(<issue>3</issue>), <fpage>372</fpage>–<lpage>422</lpage>. <pub-id pub-id-type="doi">10.1037/0033-2909.124.3.372</pub-id><pub-id pub-id-type="pmid">9849112</pub-id><issn>0033-2909</issn></mixed-citation></ref>
<ref id="b68"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Rayner</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Eye movements and attention in reading, scene perception, and visual search.</article-title> <source>Quarterly Journal of Experimental Psychology</source>, <volume>62</volume>(<issue>8</issue>), <fpage>1457</fpage>–<lpage>1506</lpage>. <pub-id pub-id-type="doi">10.1080/17470210902816461</pub-id><pub-id pub-id-type="pmid">19449261</pub-id><issn>1747-0218</issn></mixed-citation></ref>
<ref id="b69"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Reingold</surname>, <given-names>E. M.</given-names></string-name>, <string-name><surname>Charness</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Pomplun</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Stampe</surname>, <given-names>D. M.</given-names></string-name></person-group> (<year>2001</year>). <article-title>Visual span in expert chess players: Evidence from eye movements.</article-title> <source>Psychological Science</source>, <volume>12</volume>(<issue>1</issue>), <fpage>48</fpage>–<lpage>55</lpage>. <pub-id pub-id-type="doi">10.1111/1467-9280.00309</pub-id><pub-id pub-id-type="pmid">11294228</pub-id><issn>0956-7976</issn></mixed-citation></ref>
<ref id="b70"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Reingold</surname>, <given-names>E. M.</given-names></string-name>, &amp; <string-name><surname>Sheridan</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2011</year>). <chapter-title>Eye movements and visual expertise in chess and medicine</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>S. P.</given-names> <surname>Liversedge</surname></string-name>, <string-name><given-names>I. D.</given-names> <surname>Gilchrist</surname></string-name>, &amp; <string-name><given-names>S.</given-names> <surname>Everling</surname></string-name> (<role>Eds.</role>),</person-group> <source>Oxford handbook of eye movements</source> (pp. <fpage>523</fpage>–<lpage>550</lpage>). <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>. <pub-id pub-id-type="doi">10.1093/oxfordhb/9780199539789.013.0029</pub-id></mixed-citation></ref>
<ref id="b71"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Richardson</surname>, <given-names>D. C.</given-names></string-name>, &amp; <string-name><surname>Dale</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2005</year>). <article-title>Looking to understand: The coupling between speakers’ and listeners’ eye movements and its relationship to discourse comprehension.</article-title> <source>Cognitive Science</source>, <volume>29</volume>(<issue>6</issue>), <fpage>1045</fpage>–<lpage>1060</lpage>. <pub-id pub-id-type="doi">10.1207/s15516709cog0000_29</pub-id><pub-id pub-id-type="pmid">21702802</pub-id><issn>0364-0213</issn></mixed-citation></ref>
<ref id="b72"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Richardson</surname>, <given-names>D. C.</given-names></string-name>, <string-name><surname>Street</surname>, <given-names>C. N. H.</given-names></string-name>, <string-name><surname>Tan</surname>, <given-names>J. Y. M.</given-names></string-name>, <string-name><surname>Kirkham</surname>, <given-names>N. Z.</given-names></string-name>, <string-name><surname>Hoover</surname>, <given-names>M. A.</given-names></string-name>, &amp; <string-name><surname>Ghane Cavanaugh</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Joint perception: Gaze and social context.</article-title> <source>Frontiers in Human Neuroscience</source>, <volume>6</volume>, <fpage>194</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2012.00194</pub-id><pub-id pub-id-type="pmid">22783179</pub-id><issn>1662-5161</issn></mixed-citation></ref>
<ref id="b73"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Rummer</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Schweppe</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Fürstenberg</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Scheiter</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Zindler</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2011</year>). <article-title>The perceptual basis of the modality effect in multimedia learning.</article-title> <source>Journal of Experimental Psychology: Applied</source>, <volume>17</volume>(<issue>2</issue>), <fpage>159</fpage>–<lpage>173</lpage>. <pub-id pub-id-type="doi">10.1037/a0023588</pub-id><pub-id pub-id-type="pmid">21604912</pub-id><issn>1076-898X</issn></mixed-citation></ref>
<ref id="b74"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Schank</surname>, <given-names>R. C.</given-names></string-name>, &amp; <string-name><surname>Abelson</surname>, <given-names>R. P.</given-names></string-name></person-group> (<year>2013</year>). <source>Scripts, plans, goals, and understanding: An inquiry into human knowledge structures</source>. <publisher-loc>Hillsdale</publisher-loc>: <publisher-name>Lawrence Erlbaum Associates</publisher-name>. <pub-id pub-id-type="doi">10.4324/9780203781036</pub-id></mixed-citation></ref>
<ref id="b75"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Schmidt</surname>, <given-names>H. G.</given-names></string-name>, &amp; <string-name><surname>Boshuizen</surname>, <given-names>H. P. A.</given-names></string-name></person-group> (<year>1992</year>). <chapter-title>Encapsulation of biomedical knowledge</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>D. A.</given-names> <surname>Evans</surname></string-name> &amp; <string-name><given-names>V. L.</given-names> <surname>Patel</surname></string-name> (<role>Eds.</role>),</person-group> <source>Advanced models of cognition for medical training and practice</source> (pp. <fpage>265</fpage>–<lpage>282</lpage>). <publisher-loc>New York</publisher-loc>: <publisher-name>Springer Verlag</publisher-name>. <pub-id pub-id-type="doi">10.1007/978-3-662-02833-9_15</pub-id></mixed-citation></ref>
<ref id="b76"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Schnotz</surname>, <given-names>W.</given-names></string-name>, &amp; <string-name><surname>Lowe</surname>, <given-names>R. K.</given-names></string-name></person-group> (<year>2008</year>). <chapter-title>A unified view of learning from animated and static graphics</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>R. K.</given-names> <surname>Lowe</surname></string-name> &amp; <string-name><given-names>W.</given-names> <surname>Schnotz</surname></string-name> (<role>Eds.</role>),</person-group> <source>Learning with animation: Research and design implications</source> (pp. <fpage>304</fpage>–<lpage>356</lpage>). <publisher-loc>New York</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</mixed-citation></ref>
<ref id="b77"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Skuballa</surname>, <given-names>I. T.</given-names></string-name>, <string-name><surname>Fortunski</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Renkl</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2015</year>). <article-title>An eye movement pre-training fosters the comprehension of processes and functions in technical systems.</article-title> <source>Frontiers in Psychology</source>, <volume>6</volume>, <fpage>598</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2015.00598</pub-id><pub-id pub-id-type="pmid">26029138</pub-id><issn>1664-1078</issn></mixed-citation></ref>
<ref id="b78"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Skuballa</surname>, <given-names>I. T.</given-names></string-name>, <string-name><surname>Schwonke</surname>, <given-names>R.</given-names></string-name>, &amp; <string-name><surname>Renkl</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Learning from narrated animations with different support procedures: Working memory capacity matters.</article-title> <source>Applied Cognitive Psychology</source>, <volume>26</volume>(<issue>6</issue>), <fpage>840</fpage>–<lpage>847</lpage>. <pub-id pub-id-type="doi">10.1002/acp.2884</pub-id><issn>0888-4080</issn></mixed-citation></ref>
<ref id="b79"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><string-name><surname>Sridharan</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Bailey</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>McNamara</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Grimm</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2012</year>, March). <article-title>Subtle gaze manipulation for improved mammography training.</article-title> <source>Proceedings of the Symposium on Eye Tracking Research and Applications</source>, <conf-loc>Santa Barbara, USA</conf-loc>. <pub-id pub-id-type="doi">10.1145/2168556.2168568</pub-id></mixed-citation></ref>
<ref id="b80"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Stofer</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Che</surname>, <given-names>X.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Comparing experts and novices on scaffolded data visualizations using eye-tracking.</article-title> <source>Journal of Eye Movement Research</source>, <volume>7</volume>(<issue>5</issue>), <fpage>1</fpage>–<lpage>15</lpage>. <pub-id pub-id-type="doi">10.16910/jemr.7.5.2</pub-id><issn>1995-8692</issn></mixed-citation></ref>
<ref id="b81"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Sweller</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Van Merriënboer</surname>, <given-names>J. J. G.</given-names></string-name>, &amp; <string-name><surname>Paas</surname>, <given-names>F.</given-names></string-name></person-group> (<year>1998</year>). <article-title>Cognitive architecture and instructional design.</article-title> <source>Educational Psychology Review</source>, <volume>10</volume>(<issue>3</issue>), <fpage>251</fpage>–<lpage>296</lpage>. <pub-id pub-id-type="doi">10.1023/A:1022193728205</pub-id><issn>1040-726X</issn></mixed-citation></ref>
<ref id="b82"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Thurmond</surname>, <given-names>V. A.</given-names></string-name></person-group> (<year>2001</year>). <article-title>The point of triangulation.</article-title> <source>Journal of Nursing Scholarship</source>, <volume>33</volume>(<issue>3</issue>), <fpage>253</fpage>–<lpage>258</lpage>. <pub-id pub-id-type="doi">10.1111/j.1547-5069.2001.00253.x</pub-id><pub-id pub-id-type="pmid">11552552</pub-id><issn>1527-6546</issn></mixed-citation></ref>
<ref id="b83"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Van der Gijp</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Ravesloot</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Van der Schaaf</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Van der Schaaf</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Van Schaik</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Ten Cate</surname>, <given-names>T. J.</given-names></string-name></person-group> (<year>2016</year>). <article-title>How visual search relates to visual diagnostic performance: A narrative systematic review of eye-tracking research in radiology.</article-title> <source>Advances in Health Sciences Education : Theory and Practice</source>, <volume>•••</volume>, <fpage>1</fpage>–<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1007/s10459-016-9698-1</pub-id><pub-id pub-id-type="pmid">27436353</pub-id><issn>1382-4996</issn></mixed-citation></ref>
<ref id="b84"><mixed-citation publication-type="book-chapter" specific-use="unparsed"><person-group person-group-type="author"><string-name><surname>Van Gog</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2013</year>). Eye tracking as a tool to study and enhance (meta-)cognitive processes in computer-based learning environments. In R. Azevedo &amp; V. Aleven (Eds.), International handbook of metacognition and learning technologies (pp. 143-156). New York: Springer Science+Business Media.</mixed-citation></ref>
<ref id="b85"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Van Gog</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Scheiter</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Gerjets</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Paas</surname>, <given-names>F.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Attention guidance during example study via the model’s eye movements.</article-title> <source>Computers in Human Behavior</source>, <volume>25</volume>(<issue>3</issue>), <fpage>785</fpage>–<lpage>791</lpage>. <pub-id pub-id-type="doi">10.1016/j.chb.2009.02.007</pub-id><issn>0747-5632</issn></mixed-citation></ref>
<ref id="b86"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Van Gog</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Paas</surname>, <given-names>F.</given-names></string-name>, &amp; <string-name><surname>Van Merriënboer</surname>, <given-names>J. J. G.</given-names></string-name></person-group> (<year>2004</year>). <article-title>Process-oriented worked examples: Improving transfer performance through enhanced understanding.</article-title> <source>Instructional Science</source>, <volume>32</volume>(<issue>1/2</issue>), <fpage>83</fpage>–<lpage>98</lpage>. <pub-id pub-id-type="doi">10.1023/B:TRUC.0000021810.70784.b0</pub-id><issn>0020-4277</issn></mixed-citation></ref>
<ref id="b87"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>van Gog</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Paas</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>van Merriënboer</surname>, <given-names>J. J.</given-names></string-name>, &amp; <string-name><surname>Witte</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2005</year>). <article-title>Uncovering the problem-solving process: Cued retrospective reporting versus concurrent and retrospective reporting.</article-title> <source>Journal of Experimental Psychology. Applied</source>, <volume>11</volume>(<issue>4</issue>), <fpage>237</fpage>–<lpage>244</lpage>. <pub-id pub-id-type="doi">10.1037/1076-898X.11.4.237</pub-id><pub-id pub-id-type="pmid">16393033</pub-id><issn>1076-898X</issn></mixed-citation></ref>
<ref id="b88"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>VanLehn</surname>, <given-names>K.</given-names></string-name></person-group> (<year>1996</year>). <article-title>Cognitive skill acquisition.</article-title> <source>Annual Review of Psychology</source>, <volume>47</volume>(<issue>1</issue>), <fpage>513</fpage>–<lpage>539</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.psych.47.1.513</pub-id><pub-id pub-id-type="pmid">15012487</pub-id><issn>0066-4308</issn></mixed-citation></ref>
<ref id="b89"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Van Marlen</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Van Wermeskerken</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Van Gog</surname>, <given-names>T.</given-names></string-name></person-group> (in press). <article-title>Showing a model’s eye movements in examples does not improve performance of problem-solving tasks.</article-title> <source>Computers in Human Behavior</source>. <pub-id pub-id-type="doi">10.1016/j.chb.2016.08.041</pub-id><issn>0747-5632</issn></mixed-citation></ref>
<ref id="b90"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Van Meeuwen</surname>, <given-names>L. W.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Brand-Gruwel</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Kirschner</surname>, <given-names>P. A.</given-names></string-name>, <string-name><surname>De Bock</surname>, <given-names>J. J. P. R.</given-names></string-name>, &amp; <string-name><surname>Van Merriënboer</surname>, <given-names>J. J. G.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Identification of effective visual problem solving strategies in a complex visual domain.</article-title> <source>Learning and Instruction</source>, <volume>32</volume>, <fpage>10</fpage>–<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1016/j.learninstruc.2014.01.004</pub-id><issn>0959-4752</issn></mixed-citation></ref>
<ref id="b91"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Van Merriënboer</surname>, <given-names>J. J. G.</given-names></string-name>, &amp; <string-name><surname>Kirschner</surname>, <given-names>P. A.</given-names></string-name></person-group> (<year>2007</year>). <source>Ten steps to complex learning</source>. <publisher-loc>Hillsdale</publisher-loc>: <publisher-name>Erlbaum</publisher-name>. <pub-id pub-id-type="doi">10.4324/9781410618054</pub-id></mixed-citation></ref>
<ref id="b92"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Wolff</surname>, <given-names>C. E.</given-names></string-name>, <string-name><surname>Jarodzka</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Van den Bogert</surname>, <given-names>N.</given-names></string-name>, &amp; <string-name><surname>Boshuizen</surname>, <given-names>H. P. A.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Teacher vision: Comparing expert and novice teachers’ perception of problematic classroom management scenes.</article-title> <source>Instructional Science</source>, <volume>44</volume>(<issue>3</issue>), <fpage>243</fpage>–<lpage>265</lpage>. <pub-id pub-id-type="doi">10.1007/s11251-016-9367-z</pub-id><issn>0020-4277</issn></mixed-citation></ref>
<ref id="b93"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><string-name><surname>Yarbus</surname>, <given-names>A. L.</given-names></string-name></person-group> (<year>1967</year>). <source>Eye movements and vision</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Plenum</publisher-name>. <pub-id pub-id-type="doi">10.1007/978-1-4899-5379-7</pub-id></mixed-citation></ref>
</ref-list>
</back>
</article>
