<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">
<article article-type="research-article"
	xmlns:xlink="http://www.w3.org/1999/xlink"
	xmlns:mml="http://www.w3.org/1998/Math/MathML">
	<front>
		<journal-meta>
			<journal-id journal-id-type="publisher-id">Jemr</journal-id>
			<journal-title-group>
				<journal-title>Journal of Eye Movement Research</journal-title>
			</journal-title-group>
			<issn pub-type="epub">1995-8692</issn>
			<publisher>
				<publisher-name>Bern Open Publishing</publisher-name>
				<publisher-loc>Bern, Switzerland</publisher-loc>
			</publisher>
		</journal-meta>
		<article-meta>
			<article-id pub-id-type="doi">10.16910/jemr.12.1.1</article-id>
			<article-categories>
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
			</article-categories>
			<title-group>
				<article-title>Comparing written and photo-based indoor wayfinding instructions through eye fixation measures and user ratings as mental effort assessments</article-title>
			</title-group>
			<contrib-group>
				<contrib contrib-type="author">
					<name>
						<surname>De Cock</surname>
						<given-names>Laure</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Michels</surname>
						<given-names>Ralph</given-names>
					</name>
					<xref ref-type="aff" rid="aff2">2</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Viaene</surname>
						<given-names>Pepijn</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>De Wulf</surname>
						<given-names>Alain</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Ooms</surname>
						<given-names>Kristien</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Vanhaeren</surname>
						<given-names>Nina</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Van de Weghe</surname>
						<given-names>Nico</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>De Maeyer</surname>
						<given-names>Philippe</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<aff id="aff1">
					<institution>Ghent University</institution>,
					<country>Belgium</country>
				</aff>
				<aff id="aff2">
					<institution>Eyedog Wayfinding, Culemborg</institution>,
					<country>The Netherlands</country>
				</aff>
			</contrib-group>
			<pub-date date-type="pub" publication-format="electronic">
				<day>09</day>
				<month>01</month>
				<year>2019</year>
			</pub-date>
			<pub-date date-type="collection" publication-format="electronic">
				<year>2019</year>
			</pub-date>
			<volume>12</volume>
			<issue>1</issue>
			<elocation-id>10.16910/jemr.12.1.1</elocation-id>
			<permissions>
				<copyright-year>2019</copyright-year>
				<copyright-holder>De Cock, L., Viaene, P., Ooms, K., Van de Weghe, N., Michels, R., De Wulf, A., Vanhaeren, N. &#x26; De Maeyer, P.</copyright-holder>
				<license license-type="open-access">
					<license-p>This work is licensed under a Creative Commons Attribution 4.0 International License (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
				</license>
			</permissions>
			<abstract>
				<p>The use of mobile pedestrian wayfinding applications is gaining importance indoors. However, compared to outdoors, much less research has been conducted with respect to the most adequate ways to convey indoor wayfinding information to a user. An explorative study was conducted to compare two pedestrian indoor wayfinding applications, one text-based (SoleWay) and one image-based (Eyedog), in terms of mental effort. To do this, eye tracking data and mental effort ratings were collected from 29 participants during two routes in an indoor environment. The results show that both textual instructions and photographs can enable a navigator to find his/her way while experiencing no or very little cognitive effort or difficulties. However, these instructions must be in line with a user’s expectations of the route, which are based on his/her interpretation of the indoor environment at decision points. In this case, textual instructions offer the advantage that specific information can be explicitly and concisely shared with the user. Furthermore, the study drew attention to potential usability issues of the wayfinding aids (e.g. the incentive to swipe) and, as such, demonstrated the value of eye tracking and mental effort assessments in usability research.</p>
			</abstract>
			<kwd-group>
				<kwd>Indoor navigation</kwd>
				<kwd>wayfinding aids</kwd>
				<kwd>route communication</kwd>
				<kwd>eye tracking</kwd>
				<kwd>attention</kwd>
				<kwd>eye movement</kwd>
				<kwd>head-mounted eye tracker</kwd>
				<kwd>mental effort</kwd>
				<kwd>usability</kwd>
			</kwd-group>
		</article-meta>
	</front>
	<body>
		<sec id="S1">
			<title>Introduction</title>
			<p>The use of mobile pedestrian wayfinding applications (e.g. Insoft,
					MediNav by Connexient, SPREO Indoor Navigation, Meridian) is a form of
					wayfinding aid that is omnipresent outdoors and is gaining importance
					indoors, especially in very large and complex
					buildings. To enable an optimal use of these
					applications indoors, it must be examined how wayfinding information
					should be conveyed to the navigator in a user-friendly and adequate way
					[<xref ref-type="bibr" rid="b1">1</xref>].
			</p>
			<p>Maps (with or without a route displayed on top) are frequently used
					to communicate a path from A to B. The survey perspective of the
					environment, which maps offer, enables a user to build up and improve
					his/her cognitive map (i.e. a mental representation of the external
					environment). However, the limited screen size decreases the map
					interaction quality in terms of efficiency and effectiveness and may
					result in more map reading difficulties [<xref ref-type="bibr" rid="b2">2</xref>]. Moreover, many visitors of
					complex buildings, especially non-recurrent visitors, wish to maximise
					the ease of wayfinding and have no interest in acquiring or improving
					their cognitive map [<xref ref-type="bibr" rid="b3">3</xref>].
			</p>
			<p>A valuable alternative can be found in (simple) turn-by-turn route
					instructions, defined and generated by a system. Here, the route is
					divided into segments. In route instructions, these segments should be
					described by at least two elements, which form so-called view-action
					pairs. Firstly, a description must contain an indication of movement or
					state-of-being describing a wayfinding action, such as ‘turn left at’,
					‘go down to’, ‘continue along’ and other basic motor activities.
					Secondly, a route (segment) description should also contain unambiguous
					and concise references to clearly visible physical features along the
					route or at decision points that serve as environmental cues to
					correctly pinpoint the location where that wayfinding action should take
					place and can act as feedback to the navigator [<xref ref-type="bibr" rid="b4">4</xref>,				<xref ref-type="bibr" rid="b5">5</xref>,				<xref ref-type="bibr" rid="b6">6</xref>]. These salient
					physical features are often referred to as landmarks. These play an
					important role in natural wayfinding behaviour as they are central to
					all forms of spatial reasoning (e.g. orientation, wayfinding) and
					spatial communication [<xref ref-type="bibr" rid="b7">7</xref>].
			</p>
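			<p>For illustration purposes only, such a route description can be thought of as an ordered list of view-action pairs. The following sketch (in Python, with invented field names and instructions that do not stem from either application discussed below) merely shows how a wayfinding action and a landmark reference combine into one instruction:</p>
			<preformat preformat-type="code">
from dataclasses import dataclass


@dataclass
class ViewActionPair:
    """One route segment: a wayfinding action anchored to a visible physical feature."""
    action: str    # indication of movement or state-of-being, e.g. "turn left at"
    landmark: str  # concise reference to a salient feature at the decision point


# A short, invented route expressed as an ordered list of view-action pairs.
route = [
    ViewActionPair("go through", "the main entrance"),
    ViewActionPair("turn left at", "the reception desk"),
    ViewActionPair("go up to", "the second floor"),
]

for step in route:
    print(step.action, step.landmark)
</preformat>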
			<p>Often, one automatically assumes that these view-action pairs are
					expressed verbally or textually. However, a symbol (e.g. an arrow)
					combined with a photograph depicting (one or more landmarks at) a
					location may be equally useful. In this explorative study, two mobile
					wayfinding aids are compared in terms of cognitive load: one provides
					written route instructions (<italic>SoleWay</italic>) and the other
					photograph-based route instructions (<italic>Eyedog</italic>).
			</p>
			<p>This paper is organised as follows. In the next section, previous
					work on indoor wayfinding smartphone applications and their use is
					described. Section 3 presents the study design. The results and
					discussion are then presented in sections 4 and 5, respectively. Finally,
					section 6 presents the main conclusions and future work on this
					topic.</p>
		</sec>
		<sec id="S2">
			<title>Background</title>
			<p>According to Fallah, Apostolopoulos, Bekris and Folmer (2013), an
					indoor human wayfinding system should include at least four
					functionalities or components: (1) a (basic) form of localisation, (2)
					the ability to plan a path and turn it into easy to follow instructions,
					(3) the ability to retrieve and store different types of information and
					(4) the ability to interact with a navigator [<xref ref-type="bibr" rid="b8">8</xref>]. This paper only
					focusses on the last functionality, namely how a system can adequately
					interact with a user to provide the previously determined directions.
					More specifically, the user interaction of two publicly available
					indoor wayfinding smartphone applications will be compared, namely
				<italic>SoleWay</italic> and <italic>Eyedog (Indoor
						Navigation)</italic>.
			</p>
			<sec id="S2a">
				<title>SoleWay</title>
				<p>The first, <italic>SoleWay</italic>, offers indoor wayfinding support
						through textual instructions (see Figure 1, on the left). The app and
						related website are based on a crowdsourcing platform. As
						such, a community of (potential) wayfinders is created. On the one hand,
						people who are familiar with a certain route can add written route
						descriptions to the <italic>SoleWay</italic> database. Added route
						descriptions are purely textual. Each description is geographically
						located by pinpointing the approximate location of the starting point
						(e.g. main entrance) or the building in which the described route is
						situated on a map (i.e. Google Maps). On the other hand, wayfinders can
						find these descriptions by searching for the destination through a
						search box. The <italic>SoleWay</italic> platform will then provide the
						user with all descriptions of routes that lead to that destination and
						are in the vicinity of the user or the building of interest.
						Consequently, there is no need to spatially model the indoor environment
						or to use indoor positioning techniques which would require the
						installation of sensors (e.g. RFID tags, Bluetooth beacons, WLAN), which
						in turn may require costly infrastructure or augmentation of the
						building. A member of the community only needs a device with network
						capabilities (e.g. a smartphone). As a result, the cost and complexity
						of the application are minimised. <italic>SoleWay</italic> was developed
						by co-author Prof. Nico Van de Weghe (Department of Geography, UGent -
					<ext-link ext-link-type="uri" xlink:href="https://soleway.ugent.be">https://soleway.ugent.be</ext-link>).
				</p>
				<fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1.</label>
					<caption>
						<p>Screenshot SoleWay route (on the left) and screenshot Eyedog route (on the right).</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-12-01-a-figure-01.png"/>
				</fig>
				<p>These textual instructions offer the advantage that (more) abstract
						concepts can concisely be conveyed to the user. Navigators have a good
						understanding of locations like the main entrance, a reception (desk)
						and a meeting room, regardless of their appearance or the extent to
						which they can change over time in terms of design. Consequently, these
						locations can easily be incorporated in an instruction without lengthy
						or detailed descriptions [<xref ref-type="bibr" rid="b9">9</xref>]. Analogously, it can be assumed that these
						textual route instructions (only) contain the most relevant information
						for the wayfinding task at hand and are not cluttered with unnecessary
						elements. In contrast, photographs may contain a high level of
						information and details as these depict everything present at a specific
						location.</p>
				<p>Unfortunately, this is a double-edged sword as it can be difficult to
						determine what the most relevant, essential or suitable (type of)
						wayfinding information is. There are no unequivocal criteria for
						selecting salient physical features that can be used to describe the
						location where an action should take place as part of the previously
						mentioned view-action pairs. Several studies have shown that landmarks
						are the most commonly used cues to enable wayfinding decisions and that
						route instructions containing landmarks as descriptive features are
						rated as highly effective [<xref ref-type="bibr" rid="b10">10</xref>,					<xref ref-type="bibr" rid="b11">11</xref>]. The main reason for this is that they
						allow fast reasoning and efficient communication for directing a person
						from A to B. Firstly, they act as points of correspondence
						between different forms of spatial knowledge (e.g. reality, wayfinding
						tools [such as maps] and the cognitive model of the environment) [<xref ref-type="bibr" rid="b12">12</xref>].
						Secondly, landmarks define a place with reduced representational
						complexity. While a place may exhibit a high level of information and
						details, a landmark is an anchor point that is abstracted to a node
						without internal structure [<xref ref-type="bibr" rid="b7">7</xref>]. As Streeter, Vitello and Wonsiewicz
						(1985) put forward, however, the landmark selection process is highly
						individual [<xref ref-type="bibr" rid="b13">13</xref>]. It depends on the perception and individual preferences
						of the observer, which are influenced by gender, age, social and
						cultural background, experience, familiarity with the environment and
						intentions [<xref ref-type="bibr" rid="b14">14</xref>]. For example, women prefer three-dimensional objects
						over two-dimensional elements [<xref ref-type="bibr" rid="b15">15</xref>]. In addition to the selection of the
						correct wayfinding information, the assessment of the adequate amount of
						information may be problematic as well. An instruction that is too brief
						may lead to uncertainty, while too much information can result in
						confusion and both will lead to higher cognitive load levels [<xref ref-type="bibr" rid="b16">16</xref>].
				</p>
			</sec>
			<sec id="S2b">
				<title>Eyedog</title>
				<p>The other smartphone application, <italic>Eyedog</italic>, provides
						wayfinding support by means of ‘street-view’ like photographic imagery
						(see Figure 1, on the right). The route is presented as a sequence of
						photographs wherein the user can (manually) swipe back and forth. The
						photographs are augmented with textual or schematic (e.g. arrows)
						directions to clarify the intended wayfinding instruction. Although
					<italic>Eyedog</italic> can operate in combination with indoor
						positioning systems, similar to <italic>SoleWay</italic> it can function
						without the use of external hardware. In contrast to
					<italic>SoleWay</italic>, the indoor environment is spatially modelled
						with the help of a network of nodes and edges attributed with weights
						and photographs. Based on this network, shortest paths are generated
						automatically. <italic>Eyedog</italic> was developed by co-author Ralph
						Michels (PhD researcher and CTO of <italic>Eyedog Indoor Navigation
							-</italic>
					<ext-link ext-link-type="uri" xlink:href="http://www.eyedog.mobi">http://www.eyedog.mobi</ext-link>).
				</p>
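				<p>As a rough illustration of this principle (and not of <italic>Eyedog</italic>’s actual implementation), the following sketch derives a route from a small, invented weighted node-edge network with the networkx library; node names, weights and photograph identifiers are hypothetical:</p>
				<preformat preformat-type="code">
import networkx as nx

# Hypothetical fragment of an indoor network: nodes are decision points,
# edges carry a traversal cost (weight) and a reference to a photograph.
G = nx.Graph()
G.add_edge("entrance", "atrium", weight=12, photo="p_entrance_atrium.jpg")
G.add_edge("atrium", "stairs_A", weight=20, photo="p_atrium_stairs.jpg")
G.add_edge("atrium", "elevator", weight=8, photo="p_atrium_elevator.jpg")
G.add_edge("stairs_A", "office_2F", weight=15, photo="p_stairs_office.jpg")
G.add_edge("elevator", "office_2F", weight=30, photo="p_elevator_office.jpg")

# Shortest path between an origin and a destination, by total edge weight.
path = nx.shortest_path(G, "entrance", "office_2F", weight="weight")

# The photographs attached to the traversed edges form the instruction sequence.
photos = [G.edges[a, b]["photo"] for a, b in zip(path, path[1:])]
print(path)    # ['entrance', 'atrium', 'stairs_A', 'office_2F']
print(photos)
</preformat>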
				<p>Photographs, as used by <italic>Eyedog</italic>, represent the indoor
						environment at a certain location. This way, the landmark and wayfinding
						information selection process is to a large extent in the hands of the
						user. Photographs support this process as the visual sense contributes
						greatly to the recognition of landmarks and the estimation of distance
						and orientation during navigation [<xref ref-type="bibr" rid="b8">8</xref>]. Indoors this is of great value.
						Indoor routes are often characterised by frequent shifts in direction
						and, therefore, require a higher density of landmarks to be clearly
						described. Moreover, the number of object categories from which
						landmarks can be selected is usually limited indoors [<xref ref-type="bibr" rid="b17">17</xref>]. Accordingly,
						several studies have shown that, in comparison to paper and mobile maps,
						participants prefer images to visualise the environment while executing
						wayfinding tasks indoors as the use of photographs leads to improved
						wayfinding performance in terms of task duration and success rate
						[					<xref ref-type="bibr" rid="b17">17</xref>,					<xref ref-type="bibr" rid="b18">18</xref>].
				</p>
			</sec>
			<sec id="S2c">
				<title>Performance measures</title>
				<p>The comparison between different modalities to convey route
						descriptions to a navigator is generally based on the time needed to
						complete a wayfinding task and whether or not a person is able to reach
						the destination with the help of a specific modality. In most of these
						indoor wayfinding studies, however, it is very rare that a person does
						not reach the destination point. Furthermore, the observed task duration
						differences may be statistically significant, but in practice these are
						very small. Other performance measures that may be more adequate to
						reflect the usability of a description are location and orientation
						accuracy [<xref ref-type="bibr" rid="b3">3</xref>], numbers of errors, feeling lost episodes and/or dwell
						points [<xref ref-type="bibr" rid="b19">19</xref>], smartphone interaction recordings to identify wayfinding
						strategies [<xref ref-type="bibr" rid="b20">20</xref>] and user ratings with respect to quality and usefulness
						[					<xref ref-type="bibr" rid="b16">16</xref>].
				</p>
				<p>Another research tool, which is relatively new in the domain of
						indoor wayfinding, is eye tracking. The analysis of gaze characteristics
						can provide useful insights regarding a navigator’s use of environmental
						and wayfinding information, and the interplay of both [<xref ref-type="bibr" rid="b21">21</xref>].
						Consequently, in recent years eye tracking has frequently been used in a
						wide range of settings within the field of pedestrian navigation (e.g.
						spatial decision making, map interaction, wayfinding aids) [<xref ref-type="bibr" rid="b22">22</xref>].
						However, the number of studies, specifically within the context of
						communication modalities of indoor pedestrian wayfinding systems, is
						limited.</p>
				<p>Ohm, Müller and Ludwig (2017) used eye tracking measures such as fixation
						time, number of fixations and revisits to the mobile phone screen to
						demonstrate that participants preferred a reduced interface displaying
						landmarks and simplified route segments instead of an interface using
						floor plans [<xref ref-type="bibr" rid="b23">23</xref>]. In contrast to the original experimental design, the
						smartphone screen was seen as a single area of interest. The small
						screen and the limited accuracy of the eye tracking device did not allow
						fixations to be attributed to different interface elements. Next,
						Schnitzler et al. (2016) investigated the effect of a wayfinding
						aid (i.e. no map, paper map, digital map) on fixation frequencies at
						decision points [<xref ref-type="bibr" rid="b21">21</xref>]. The number of fixations was determined for three
						areas of interest: signage, correct route option and incorrect option.
						Next, Li (2017) [<xref ref-type="bibr" rid="b18">18</xref>] used the number of fixations and their duration to
						create heat maps and gaze plots on photographs and maps to investigate
						the role of maps in combination with other aids during indoor
						wayfinding. In these three studies, eye tracking data was collected from a
						group of test persons who individually completed an indoor route to a
						destination with the help of a wayfinding aid. During this task,
						participants were equipped with a mobile eye tracker.</p>
			</sec>
			<sec id="S2d">
				<title>Mental effort</title>
				<p>Most studies on communication modalities of (indoor) pedestrian
						wayfinding applications focus on the usability of such a modality
						compared to or in combination with (mobile) maps. Moreover, no studies
						take into account the aspect of mental effort. Mental effort refers to
						the proportion of working memory capacity that is allocated to the
						(instructional) demands of the task and can be used as an index to
						assess the cognitive load that the execution of a task imposes on a
						person [<xref ref-type="bibr" rid="b24">24</xref>,					<xref ref-type="bibr" rid="b25">25</xref>]. Photographs and textual descriptions are both external
						representations of (wayfinding) information that are employed to support
						memory and thinking [<xref ref-type="bibr" rid="b9">9</xref>]. A person must translate such a representation
						(e.g. of a location), which is briefly stored in the short-term memory,
						to reality (e.g. the corresponding actual location in his/her
						surroundings). This translation requires the interaction of the
						short-term memory with previously acquired knowledge and skills (e.g.
						orientation skills) stored in the long-term memory. In turn, this
						interaction (i.e. working memory) and, as such, this translation demand
						mental effort [<xref ref-type="bibr" rid="b26">26</xref>]. Central in the cognitive load theory is that the
						working memory capacity is limited [<xref ref-type="bibr" rid="b27">27</xref>]. Visitors of large-scale spaces,
						especially first-time visitors, may already experience high stress
						levels and a significant working memory load caused by factors other
						than the wayfinding task. The wayfinding aid and the method used to
						present wayfinding information (e.g. words or images) can reduce the
						complexity of the decision-making process and therefore the cognitive
						load [<xref ref-type="bibr" rid="b28">28</xref>].
				</p>
				<p>In general, there are four ways to assess mental effort: (1) indirect
						(performance) measures, (2) subjective measures, (3) secondary task
						measures, and (4) physiological measures [<xref ref-type="bibr" rid="b27">27</xref>]. In this study, subjective
						measures (i.e. rating scales) and physiological measures (i.e. eye
						tracking) will be used to assess which communication modality (i.e.
						written or photographic-based route instructions) requires less mental
						effort to understand and to act in accordance with. A large number of
						studies have used a nine-grade rating scale as a subjective measure to
						examine the experienced mental effort [<xref ref-type="bibr" rid="b24">24</xref>,					<xref ref-type="bibr" rid="b29">29</xref>,					<xref ref-type="bibr" rid="b30">30</xref>]. For an extensive
						overview of these studies, we refer to [<xref ref-type="bibr" rid="b31">31</xref>] and [<xref ref-type="bibr" rid="b24">24</xref>]. This frequent use
						has proven that the numerical values of a (nine-grade) rating scale
						enable test persons to veraciously express the required mental effort.
						Furthermore, multiple measurements during an experiment are possible.
						This way, a more detailed analysis of mental effort and task complexity
						variations can be conducted [<xref ref-type="bibr" rid="b27">27</xref>,					<xref ref-type="bibr" rid="b31">31</xref>].
				</p>
				<p>Eye tracking can also be used as a measure of the
						processing demands of a task [<xref ref-type="bibr" rid="b32">32</xref>]. Especially in problem solving tasks,
						longer fixations and shorter saccadic amplitudes are linked to more
						effortful cognitive processing and indicate that a person has (more)
						difficulties in extracting information or relating this information to
						internalised representations. In scene perception, features that are
						considered more important, interesting or semantically informative
						generate longer fixations and more revisits compared to those elements
						that are perceived less important. Additionally, several studies assume
						that the number of fixations and the overall saccadic rate are negatively
						correlated with search efficiency and could be an indication of the
						difficulties a person experiences while collecting relevant information
						[					<xref ref-type="bibr" rid="b33">33</xref>,					<xref ref-type="bibr" rid="b34">34</xref>].
				</p>
				<p>Ooms (2016) emphasizes the importance of using mixed methods in
						usability research [<xref ref-type="bibr" rid="b35">35</xref>]. Using multiple eye tracking measures and mental
						effort ratings makes it possible to verify the results across datasets.
						This improves the reliability and validity of the study. However, some
						authors have argued that fixation measures and mental effort ratings
						measure different aspects of cognitive load [<xref ref-type="bibr" rid="b27">27</xref>,					<xref ref-type="bibr" rid="b32">32</xref>]. Fixations represent
						parts of the task or an individual problem, while an effort rating
						represents the mental effort of the overall task or process (e.g. the
						total number of problems). As such, these may result in non-equivalent
						assessments of the invested mental effort. By collecting mental effort
						ratings at multiple intermediate points, the authors hope to minimise
						such distortion.</p>
			</sec>
		</sec>
		<sec id="S3">
			<title>Methods</title>
			<p>Most user studies in interactive cartography are conducted in
					controlled, laboratory environments. Roth et al. (2017) state in their
					research agenda the need for both laboratory and field-based studies
					[<xref ref-type="bibr" rid="b36">36</xref>]. Explorative user studies in the field are essential to confirm
					laboratory findings, or to identify new aspects that need follow-up
					laboratory research. Therefore, in order to assess the experienced
					mental effort linked to the textual route instructions offered by
					SoleWay on the one hand and Eyedog’s photographic-based instructions on
					the other hand, both apps are used in a real environment. Participants
					are guided by Eyedog on one route and by SoleWay on another route in a
					complex building. During the full extent of both routes eye fixations
					are recorded and user ratings on a nine-grade scale are collected at
					intermediate points. Four eye tracking measures are extracted (the
					number of revisits, fixation count, fixation time and average fixation
					duration) and two areas of interest are defined (smartphone screen and
					signage). Although all eye tracking measures are correlated, they do not
					measure the same aspect of mental effort. The number of revisits
					indicates how many times participants needed to switch their gaze from
					the environment to an aid (smartphone or signage). The average fixation
					duration indicates how difficult it is to interpret the information
					provided by one fixation on a specific element of the wayfinding aid,
					while the total fixation time and count indicate how difficult it is to
					gain the relevant wayfinding information from the wayfinding aid in
					general and translate this to the environment. To be able to interpret
					the eye tracking results correctly, all measures must be analysed
					together [<xref ref-type="bibr" rid="b33">33</xref>]. Saccadic measures are not included in the analysis but
					could be an equally valid alternative. After completing the two routes,
					participants answered a questionnaire to gain insight into their general
					appreciation of the two wayfinding aids.</p>
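			<p>For clarity, the following sketch illustrates how these four measures could be computed from a temporally ordered list of AOI-mapped fixations; the record layout and values are invented for illustration and do not reproduce the export format of the analysis software used in this study:</p>
			<preformat preformat-type="code">
# Minimal sketch: each fixation is (aoi_label, duration_ms), in temporal order.
fixations = [
    ("smartphone", 240), ("smartphone", 310), ("environment", 180),
    ("signage", 220), ("smartphone", 290), ("signage", 260),
]


def aoi_measures(all_fixations, aoi):
    """Revisits, fixation count, total fixation time and average duration for one AOI."""
    durations = [d for label, d in all_fixations if label == aoi]
    fixation_count = len(durations)
    fixation_time = sum(durations)                        # total dwell time [ms]
    avg_duration = fixation_time / fixation_count if fixation_count else 0.0
    # A revisit is counted each time gaze returns to the AOI after having left it.
    revisits, previous = 0, None
    for label, _ in all_fixations:
        if label == aoi and previous not in (None, aoi):
            revisits += 1
        previous = label
    return {"revisits": revisits, "count": fixation_count,
            "time_ms": fixation_time, "avg_ms": avg_duration}


for aoi in ("smartphone", "signage"):
    print(aoi, aoi_measures(fixations, aoi))
</preformat>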
			<sec id="S3a">
				<title>Participants</title>
				<p>In total 14 male and 15 female subjects participated in the
						experiment. The questionnaire, wherein participants were asked to rate a
						series of statements, revealed the following. Participants were, on
						average, relatively familiar with (parts of) the test environment (see
						materials section). As such, they were acquainted with the building’s
						structure and design. In contrast, the destinations along the route were
						not known to them. Furthermore, test persons had used smartphone
						applications as a wayfinding aid outdoors before, but rarely in an indoor
						setting. Their ages ranged between twenty and sixty years old
						(					<italic>M</italic> = 34, <italic>SD</italic> = 9). During the test,
						they were not distracted by the researcher following them or by the
						mobile eye tracking device. Five participants were excluded from the eye
						tracking results, because the tracking ratio was too low. The required
						tracking ratio was set to 95%.</p>
			</sec>
			<sec id="S3b">
				<title>Materials</title>
				<p>During the completion of both routes, participants wore an SMI ETG 2.1
						mobile eye tracking device (60 Hz / 30 FPS). Fixations were calculated
						with the help of the SMI Event Detection (dispersion-based) algorithm
						and were transferred manually to four reference images (i.e. one for
						each route and application). Each reference image displayed two
						categories (i.e. (screen of) smartphone, signage (along the route)),
						which were attributed with areas of interest by using the semantic gaze
						mapping tool of BeGaze 3.6.</p>
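				<p>The SMI event detection algorithm itself is proprietary; the following generic sketch of a dispersion threshold identification (I-DT) procedure merely illustrates the dispersion-based principle it follows, with arbitrarily chosen threshold values:</p>
				<preformat preformat-type="code">
def _dispersion(window):
    """Dispersion = (max x - min x) + (max y - min y) over a gaze window."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))


def idt_fixations(samples, max_dispersion=1.0, min_duration_ms=100, rate_hz=60):
    """Generic I-DT sketch: samples is a list of (x, y) gaze points (e.g. in degrees)."""
    min_len = max(1, int(min_duration_ms * rate_hz / 1000))  # minimum window length
    fixations, i = [], 0
    while i + min_len &lt;= len(samples):
        j = i + min_len
        if _dispersion(samples[i:j]) &lt;= max_dispersion:
            # Grow the window for as long as the dispersion stays under the threshold.
            while j &lt; len(samples) and _dispersion(samples[i:j + 1]) &lt;= max_dispersion:
                j += 1
            xs = [p[0] for p in samples[i:j]]
            ys = [p[1] for p in samples[i:j]]
            duration_ms = (j - i) * 1000 / rate_hz
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), duration_ms))
            i = j       # continue after the detected fixation
        else:
            i += 1      # slide the window one sample forward
    return fixations
</preformat>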
				<p>Both wayfinding apps (i.e. <italic>Eyedog Indoor Navigation</italic>
						1.0.0 and <italic>SoleWay</italic> v15) were installed and presented on
						the same smartphone, namely an LG Nexus 4. As mentioned in section 2,
					<italic>Eyedog</italic> automatically provided the shortest routes
						between the selected destinations based on a network model of the
						building. The <italic>SoleWay</italic> routes were formulated and
						entered manually. A <italic>SoleWay</italic> route is a text depicted on
						a single screen. Each line in the text consisted of one view-action pair
						specifying the location and the required wayfinding action as simply as
						possible (see Figure 1). For each route, the number of lines was
						comparable to the number of photographs.</p>
				<p>The star-shaped building (see Figure 2) was considered to be fairly
						complex by most participants. It was built in 1976 and has a traditional
						interior (see Figures 1 and 4). Within this building two routes were
						selected. Both routes had a total length of approximately 360 meters
						and consisted of three connected route segments leading to a
						destination. All participants completed the same routes (see Figure 2).
						The first route went from the starting point on the second floor level
						to the men’s toilet (destination A) on the ground floor. From there,
						participants were asked to go to the secretariat of the Marine Biology
						Research Group (B). Finally, they were asked to find the Geophysics
						Processing Lab (C) on the first floor level. The second route started at
						the same starting point. The first intermediate destination was the
						small garage (D) in the curved outer corridor on the ground floor level.
						The following destination was the office of prof. V. Cnudde on the first
						floor level (E). The final destination of the second route was lecture
						room 3.065 on the third floor (F). Route 1 included 22 decision points
						and route 2 included 24 decision points.</p>
				<fig id="fig02" fig-type="figure" position="float">
					<label>Figure 2.</label>
					<caption>
						<p>Illustration of the building and routes.</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-12-01-a-figure-02.png"/>
				</fig>
				<p>Finally, the following statements (subdivided into four categories) were
						rated on a seven-grade scale in the concluding questionnaire:</p>
				<p>Evaluation of the route</p>
				<p>
					<italic>(1) “The route was complex.”</italic>
				</p>
				<p>Evaluation of the wayfinding instructions</p>
				<p>
					<italic>(2) “I often had doubts about the further course of the route.”</italic>
				</p>
				<p>
					<italic>(3) “The wayfinding instructions were clear.”</italic>
				</p>
				<p>
					<italic>(4) “The wayfinding instructions were easy to follow.”</italic>
				</p>
				<p>
					<italic>(5) “The wayfinding instructions were detailed enough.”</italic>
				</p>
				<p>
					<italic>(6) “The wayfinding instructions were adequate to convey the route.”</italic>
				</p>
				<p>Memory of the route</p>
				<p>
					<italic>(7) “After the experiment, I am able to complete the same route without help of a wayfinding aid.”</italic>
				</p>
				<p>
					<italic>(8) “After the experiment, I am able to verbalise route instructions to a person who is not familiar with the route.”</italic>
				</p>
				<p>
					<italic>(9) “After the experiment, I am able to draw the route on a floor plan.”</italic>
				</p>
				<p>Application recommendation</p>
				<p>
					<italic>(10) “I would recommend the application for other
							buildings.”</italic>
				</p>
			</sec>
			<sec id="S3c">
				<title>Procedure</title>
				<p>At the beginning of the experiment, the participants were instructed
						as follows. “<italic>After the calibration of the eye tracking device,
							you will be asked to complete two routes while wearing the eye tracking
							device. This device will register your eye movements. I will follow
							you while you complete these routes. One route will be explained with the help
							of SoleWay. During the other route, you will be guided by Eyedog. Each
							route starts in this office and at the end of each route we will check
							if the eye tracking device is still correctly calibrated. Each route
							consists of a number of destinations or intermediate stops. At each
							stop, I will ask you to rate the mental effort that was needed to reach
							this destination with the help of the wayfinding application. You can
							rate this effort on a nine-grade scale: zero being very, very low and
							nine being very, very high. Then I will give you the next destination.
							After the completion of both routes, you will be asked to fill in a
							small questionnaire. During the experiment, you may always ask for help
							or clarification. I will intervene if you get lost.</italic>”
				</p>
				<p>The experiment proceeded as described. Five point targets placed at
						a distance of approximately 1.5 meters were used to calibrate the eye tracking device.
						For all five points, the gaze error was corrected, making the calibration
						more accurate with every additional point. Participants always completed
						the routes in the same order. However, the wayfinding app used to guide
						a participant along a route was randomised in order to assess the
						potential influence of familiarity with the experimental setup. As such,
						half of the participants completed the first route with
					<italic>SoleWay</italic> and the second with <italic>Eyedog</italic>,
						while the other half first used <italic>Eyedog</italic> and then
					<italic>SoleWay</italic>. The guidelines as expressed by Holmqvist et
						al. (2011) were taken into account during calibration, instruction
						giving and route completion [<xref ref-type="bibr" rid="b33">33</xref>]. After completing the two routes,
						participants filled in the final questionnaire.</p>
			</sec>
			<sec id="S3d">
				<title>Data analysis</title>
				<p>The significance of potential differences in eye tracking measures
						between both groups (i.e. <italic>SoleWay</italic> users and
					<italic>Eyedog</italic> users) was determined by a parametric test, namely
						an independent samples t-test (see Table 2). As there is disagreement about whether (ordinal) rating scale
						data should be analysed with parametric statistics or nonparametric
						statistics [<xref ref-type="bibr" rid="b37">37</xref>], the normality of the mental effort and questionnaire
						data was tested with the Shapiro-Wilk Test. Because not all data samples
						were normally distributed, both the parametric t-test and the
						non-parametric Mann-Whitney test were conducted to determine the
						significance of potential differences between both
					<italic>SoleWay</italic> users and <italic>Eyedog</italic> users
						regarding mental effort. Both tests showed the same result and are
						listed in Table 1. The significance of potential differences between
					<italic>SoleWay</italic> and <italic>Eyedog</italic> in the general
						questionnaire was also analysed with both a parametric and a
						non-parametric test, for the same reason. In this case, a Wilcoxon
						signed-rank test and a dependent t-test were executed, because all
						participants used both <italic>SoleWay</italic> and
					<italic>Eyedog</italic> and are therefore in both groups. The results of
						both tests are indicated in Table 3.</p>
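				<p>As a minimal illustration of this testing scheme (with invented rating values that are unrelated to the study data), the same sequence of tests can be run with SciPy:</p>
				<preformat preformat-type="code">
from scipy import stats

# Hypothetical mental effort ratings (nine-grade scale) for one route segment,
# one independent sample per application.
soleway = [2, 1, 3, 2, 4, 1, 2, 3, 2, 1, 2, 3, 2, 1]
eyedog = [5, 6, 4, 7, 5, 6, 8, 4, 5, 6, 7, 5, 4, 6, 5]

# 1. Normality check per sample (Shapiro-Wilk).
print(stats.shapiro(soleway))
print(stats.shapiro(eyedog))

# 2. Independent groups (different participants used each app on a given route):
#    report both the parametric and the non-parametric test.
print(stats.ttest_ind(soleway, eyedog, equal_var=True))
print(stats.mannwhitneyu(soleway, eyedog, alternative="two-sided"))

# 3. Paired design (every participant rated both apps in the questionnaire):
#    dependent t-test and Wilcoxon signed-rank test on equally long samples.
ratings_app_a = [2, 3, 1, 2, 4, 3, 2, 1, 3, 2]
ratings_app_b = [4, 5, 2, 4, 6, 5, 4, 3, 6, 4]
print(stats.ttest_rel(ratings_app_a, ratings_app_b))
print(stats.wilcoxon(ratings_app_a, ratings_app_b))
</preformat>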

				<table-wrap id="t01" position="float">
					<label>Table 1</label>
					<caption>
						<p>Overview of mental effort ratings at intermediate
											destinations for both routes<sup>*</sup>
						</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
							<tr>
								<td/>
								<td>
									<italic>SoleWay</italic>
								</td>
								<td/>
								<td/>
								<td>
									<italic>Eyedog</italic>
								</td>
								<td/>
								<td/>
								<td/>
								<td/>
							</tr>
						</thead>
						<tbody>
							<tr>
								<td>
									<bold>Route 1<sup>c</sup>
									</bold>
								</td>
								<td>
									<italic>N</italic>
								</td>
								<td>
									<italic>Mean</italic>
								</td>
								<td>
									<italic>SD</italic>
								</td>
								<td>
									<italic>N</italic>
								</td>
								<td>
									<italic>Mean</italic>
								</td>
								<td>
									<italic>SD</italic>
								</td>
								<td>
									<italic>Sig</italic>.									<sup>a</sup>
								</td>
								<td>
									<italic>Sig</italic>.									<sup>b</sup>
								</td>
							</tr>
							<tr>
								<td>Start - A</td>
								<td>15</td>
								<td>1.93</td>
								<td>1.58</td>
								<td>14</td>
								<td>2.21</td>
								<td>1.85</td>
								<td>.771</td>
								<td>.665</td>
							</tr>
							<tr>
								<td>A - B</td>
								<td>15</td>
								<td>2.13</td>
								<td>1.51</td>
								<td>14</td>
								<td>1.64</td>
								<td>1.28</td>
								<td>.369</td>
								<td>.352</td>
							</tr>
							<tr>
								<td>B - C</td>
								<td>15</td>
								<td>3.07</td>
								<td>1.67</td>
								<td>14</td>
								<td>2.86</td>
								<td>1.96</td>
								<td>.532</td>
								<td>.760</td>
							</tr>
							<tr>
								<td>
									<bold>Route 2<sup>c</sup>
									</bold>
								</td>
								<td/>
								<td/>
								<td/>
								<td/>
								<td/>
								<td/>
								<td/>
								<td/>
							</tr>
							<tr>
								<td>Start - D</td>
								<td>14</td>
								<td>2.29</td>
								<td>1.73</td>
								<td>15</td>
								<td>5.93</td>
								<td>2.12</td>
								<td>
									<bold>.000</bold>
								</td>
								<td>
									<bold>.000</bold>
								</td>
							</tr>
							<tr>
								<td>D - E</td>
								<td>14</td>
								<td>1.57</td>
								<td>1.02</td>
								<td>15</td>
								<td>4.73</td>
								<td>2.25</td>
								<td>
									<bold>.000</bold>
								</td>
								<td>
									<bold>.000</bold>
								</td>
							</tr>
							<tr>
								<td>E - F</td>
								<td>14</td>
								<td>1.50</td>
								<td>1.40</td>
								<td>15</td>
								<td>4.40</td>
								<td>2.67</td>
								<td>
									<bold>.004</bold>
								</td>
								<td>
									<bold>.001</bold>
								</td>
							</tr>
						</tbody>
					</table>
					<table-wrap-foot>
						<fn id="FN1">
							<p>
								<sup>*</sup> based on scores on a nine-grade
											scale.
								<sup>a</sup> two-tailed significance value at the 95
											% confidence level resulting from a Mann-Whitney
											test.
								<sup>b</sup> two-tailed significance value at the 95
											% confidence level resulting from an independent samples t-test
											with tested equal variances.
								<sup>c</sup> more information on route segmentation
											can be found in Materials section.
							</p>
						</fn>
					</table-wrap-foot>
				</table-wrap>

				<table-wrap id="t02" position="float">
					<label>Table 2</label>
					<caption>
						<p>Overview of eye fixation measures during route
											completion</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
							<tr>
								<td/>
								<td>Route</td>
								<td>Smartphone AOI</td>
								<td/>
								<td/>
								<td/>
								<td/>
								<td>Signage AOI</td>
								<td/>
								<td/>
								<td/>
								<td/>
							</tr>
						</thead>
						<tbody>
							<tr>
								<td/>
								<td/>
								<td>
									<italic>SoleWay</italic>
								</td>
								<td/>
								<td>
									<italic>Eyedog</italic>
								</td>
								<td/>
								<td/>
								<td>
									<italic>SoleWay</italic>
								</td>
								<td/>
								<td>
									<italic>Eyedog</italic>
								</td>
								<td/>
								<td/>
							</tr>
							<tr>
								<td/>
								<td/>
								<td>
									<italic>M</italic>
								</td>
								<td>
									<italic>SD</italic>
								</td>
								<td>
									<italic>M</italic>
								</td>
								<td>
									<italic>SD</italic>
								</td>
								<td>
									<italic>Sig</italic>.*
								</td>
								<td>
									<italic>M</italic>
								</td>
								<td>
									<italic>SD</italic>
								</td>
								<td>
									<italic>M</italic>
								</td>
								<td>
									<italic>SD</italic>
								</td>
								<td>
									<italic>Sig</italic>.*
								</td>
							</tr>
							<tr>
								<td>Revisits</td>
								<td>1</td>
								<td>21.20</td>
								<td>10.69</td>
								<td>71.93</td>
								<td>25.32</td>
								<td>
									<bold>.000</bold>
								</td>
								<td>8.11</td>
								<td>6.31</td>
								<td>4.43</td>
								<td>2.68</td>
								<td>
									<bold>.003</bold>
								</td>
							</tr>
							<tr>
								<td/>
								<td>2</td>
								<td>32.50</td>
								<td>8.33</td>
								<td>66.70</td>
								<td>23.99</td>
								<td>
									<bold>.000</bold>
								</td>
								<td>10.07</td>
								<td>4.25</td>
								<td>13.40</td>
								<td>4.79</td>
								<td>.096</td>
							</tr>
							<tr>
								<td>Fixation Count</td>
								<td>1</td>
								<td>262.70</td>
								<td>146.23</td>
								<td>464.43</td>
								<td>146.08</td>
								<td>
									<bold>.003</bold>
								</td>
								<td>21.80</td>
								<td>14.97</td>
								<td>11.21</td>
								<td>6.86</td>
								<td>
									<bold>.029</bold>
								</td>
							</tr>
							<tr>
								<td/>
								<td>2</td>
								<td>291.71</td>
								<td>64.57</td>
								<td>490.60</td>
								<td>197.46</td>
								<td>
									<bold>.002</bold>
								</td>
								<td>29.93</td>
								<td>13.08</td>
								<td>39.70</td>
								<td>16.44</td>
								<td>.138</td>
							</tr>
							<tr>
								<td>Fixation Time [ms]</td>
								<td>1</td>
								<td>65,383.7</td>
								<td>39,161.14</td>
								<td>123,871.1</td>
								<td>49,979.70</td>
								<td>
									<bold>.004</bold>
								</td>
								<td>5,170.9</td>
								<td>3,084.99</td>
								<td>2,572.9</td>
								<td>1,864.47</td>
								<td>
									<bold>.033</bold>
								</td>
							</tr>
							<tr>
								<td/>
								<td>2</td>
								<td>76,494.0</td>
								<td>18,717.71</td>
								<td>122,391.1</td>
								<td>52,893.86</td>
								<td>
									<bold>.006</bold>
								</td>
								<td>10,461.2</td>
								<td>4,495.02</td>
								<td>12,010.2</td>
								<td>4,289.12</td>
								<td>.403</td>
							</tr>
							<tr>
								<td>Fixation Time [%]</td>
								<td>1</td>
								<td>24.91</td>
								<td>9.75</td>
								<td>37.939</td>
								<td>11.04</td>
								<td>
									<bold>.006</bold>
								</td>
								<td>1.77</td>
								<td>1.02</td>
								<td>0.82</td>
								<td>0.58</td>
								<td>
									<bold>.020</bold>
								</td>
							</tr>
							<tr>
								<td/>
								<td>2</td>
								<td>24.44</td>
								<td>5.37</td>
								<td>30.052</td>
								<td>10.49</td>
								<td>.146</td>
								<td>3.27</td>
								<td>1.19</td>
								<td>3.07</td>
								<td>1.16</td>
								<td>.677</td>
							</tr>
							<tr>
								<td>Average fixation duration [ms]</td>
								<td>1</td>
								<td>243.65</td>
								<td>32.97</td>
								<td>260.62</td>
								<td>31.98</td>
								<td>.223</td>
								<td>214.57</td>
								<td>107.55</td>
								<td>211.15</td>
								<td>61.57</td>
								<td>.929</td>
							</tr>
							<tr>
								<td/>
								<td>2</td>
								<td>262.92</td>
								<td>36.73</td>
								<td>248.12</td>
								<td>17.35</td>
								<td>.251</td>
								<td>351.05</td>
								<td>79.87</td>
								<td>311.37</td>
								<td>62.32</td>
								<td>.186</td>
							</tr>
						</tbody>
					</table>
					<table-wrap-foot>
						<fn id="FN2">
							<p>* two-tailed significance value at the 95 % confidence level resulting from an independent samples t-test with tested equal variances</p>
						</fn>
					</table-wrap-foot>
				</table-wrap>


				<table-wrap id="t03" position="float">
					<label>Table 3</label>
					<caption>
						<p>Overview of statement ratings in questionnaire for
											both applications<sup>*</sup>
						</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
							<tr>
								<td/>
								<td>
									<italic>SoleWay</italic>
								</td>
								<td/>
								<td/>
								<td>
									<italic>Eyedog</italic>
								</td>
								<td/>
								<td/>
								<td/>
								<td/>
							</tr>
						</thead>
						<tbody>
							<tr>
								<td>Statement<sup>c</sup>
								</td>
								<td>
									<italic>N</italic>
								</td>
								<td>
									<italic>Mean</italic>
								</td>
								<td>
									<italic>SD</italic>
								</td>
								<td>
									<italic>N</italic>
								</td>
								<td>
									<italic>Mean</italic>
								</td>
								<td>
									<italic>SD</italic>
								</td>
								<td>
									<italic>Sig</italic>.									<sup>a</sup>
								</td>
								<td>
									<italic>Sig</italic>.									<sup>b</sup>
								</td>
							</tr>
							<tr>
								<td>(1)</td>
								<td>29</td>
								<td>-1.76</td>
								<td>1.57</td>
								<td>29</td>
								<td>-1.14</td>
								<td>1.68</td>
								<td>.107</td>
								<td>.065</td>
							</tr>
							<tr>
								<td>(2)</td>
								<td>29</td>
								<td>-1.76</td>
								<td>1.33</td>
								<td>29</td>
								<td>0.07</td>
								<td>2.10</td>
								<td>
									<bold>.000</bold>
								</td>
								<td>
									<bold>.000</bold>
								</td>
							</tr>
							<tr>
								<td>(3)</td>
								<td>29</td>
								<td>2.07</td>
								<td>0.80</td>
								<td>29</td>
								<td>0.48</td>
								<td>1.98</td>
								<td>
									<bold>.001</bold>
								</td>
								<td>
									<bold>.000</bold>
								</td>
							</tr>
							<tr>
								<td>(4)</td>
								<td>29</td>
								<td>2.03</td>
								<td>0.82</td>
								<td>29</td>
								<td>0.66</td>
								<td>1.95</td>
								<td>
									<bold>.002</bold>
								</td>
								<td>
									<bold>.002</bold>
								</td>
							</tr>
							<tr>
								<td>(5)</td>
								<td>29</td>
								<td>2.14</td>
								<td>0.92</td>
								<td>29</td>
								<td>0.55</td>
								<td>1.88</td>
								<td>
									<bold>.001</bold>
								</td>
								<td>
									<bold>.000</bold>
								</td>
							</tr>
							<tr>
								<td>(6)</td>
								<td>29</td>
								<td>2.43</td>
								<td>0.67</td>
								<td>29</td>
								<td>0.86</td>
								<td>1.64</td>
								<td>
									<bold>.000</bold>
								</td>
								<td>
									<bold>.000</bold>
								</td>
							</tr>
							<tr>
								<td>(7)</td>
								<td>29</td>
								<td>0.10</td>
								<td>1.74</td>
								<td>29</td>
								<td>0.21</td>
								<td>1.86</td>
								<td>.586</td>
								<td>.621</td>
							</tr>
							<tr>
								<td>(8)</td>
								<td>29</td>
								<td>-0.21</td>
								<td>1.78</td>
								<td>29</td>
								<td>-0.28</td>
								<td>1.89</td>
								<td>.747</td>
								<td>.805</td>
							</tr>
							<tr>
								<td>(9)</td>
								<td>29</td>
								<td>-0.41</td>
								<td>1.76</td>
								<td>29</td>
								<td>-0.14</td>
								<td>1.68</td>
								<td>.437</td>
								<td>.318</td>
							</tr>
							<tr>
								<td>(10)</td>
								<td>29</td>
								<td>1.34</td>
								<td>1.26</td>
								<td>29</td>
								<td>0.55</td>
								<td>1.35</td>
								<td>
									<bold>.019</bold>
								</td>
								<td>
									<bold>.018</bold>
								</td>
							</tr>
						</tbody>
					</table>
					<table-wrap-foot>
						<fn id="FN3">
							<p>
								<sup>*</sup> based on scores on a seven-grade
											scale. 
								<sup>a</sup> two-tailed significance value at the 95
											% confidence level resulting from a Wilcoxon signed-rank
											test.
								<sup>b</sup> two-tailed significance value at the 95
											% confidence level resulting from a dependent
											t-test.
								<sup>c</sup> Overview of statements in the research
											design.											
							</p>
						</fn>
					</table-wrap-foot>
				</table-wrap>
			</sec>
		</sec>
		<sec id="S4">
			<title>Results</title>
			<p>The results for the mental effort ratings can be found in Table 1.
					Participants experienced a significantly lower mental effort when using
				<italic>SoleWay</italic> while completing the second route.</p>
			<p>Table 2 shows the results for the eye fixation measures.
					Based on the number of revisits, fixations and fixation time, it can be
					said that SoleWay users paid more attention to the available signs
					along route 1, while Eyedog users focussed more (frequently) on the
					smartphone screen. With respect to the second route, Eyedog users still
					fixated more on the application, but there was no significant difference
					with respect to signage use. There are no significant differences
					between the average fixation duration on smartphone and signage.
					Finally, the results of the questionnaire were analysed analogously to
					the mental effort ratings (see Table 3).</p>
		</sec>
		<sec id="S5">
			<title>Discussion</title>
			<p>An explorative study was conducted to examine different modalities to
					share wayfinding information by comparing two pedestrian indoor
					wayfinding applications, namely <italic>SoleWay</italic> and
				<italic>Eyedog</italic>, in terms of mental effort. To do this, eye
					tracking data and mental effort ratings were collected from participants
					during a series of wayfinding tasks in an indoor environment.</p>
			<p>No significant differences between the two applications were found
					with respect to the mental effort ratings collected during the first
					route and the experienced mental effort was relatively low (mostly less
					than 3 on the nine-grade scale). In contrast, participants had
					significantly more difficulty understanding, interpreting and acting in
					accordance with the view-action pairs displayed by photographs (i.e.
				<italic>Eyedog</italic>) during the second route. An explanation for
					this finding might be found in the eye tracking results.</p>
			<p>The average fixation duration does not differ significantly.
					Therefore, the difficulty lies not in the interpretation of the
					information provided by one fixation on the wayfinding aids (smartphone
					and signage). The total fixation count and overall fixation time,
					however, show that <italic>Eyedog</italic> users looked significantly
					more at different elements of the smartphone screen during both routes.
					This seems logical as more information is displayed by the
				<italic>Eyedog</italic> interface. Users needed to interpret this
					information and relate the (selected) depicted features to reality. The
					number of revisits indicates that Eyedog users switched their gaze back
					to the smartphone more often, suggesting that translating the information
					to the environment was more difficult compared to the
					text instructions. More striking is the use of signage during the
					wayfinding tasks. During the first route, <italic>SoleWay</italic> users
					gave significantly more attention to signs along the route compared to
				<italic>Eyedog</italic> users. Although not statistically significant,
					this pattern was reversed during the second route as a result
					of an increase in fixations on signs by <italic>Eyedog</italic> users.
					Nevertheless, also in this case the average fixation duration was not
					found to be significantly different.</p>
			<p>The extent to which signage is fixated on can be related to (1) the
					availability of signage, (2) the wayfinding task complexity and (3)
					whether or not the smartphone application provides sufficient
					information to ensure a comfortable wayfinding experience. Firstly,
					although the entire building has a similar design, a slightly larger
					number of signs was visible along route two. This may have accounted for
					the increased use of signage for both applications during the second
					route compared to the traversal of the first route. However, the signage
					was the same for both wayfinding applications and, therefore, this
					cannot explain the substantial increase of attention to signs when using
				<italic>Eyedog</italic>. Secondly, the results of the questionnaire show
					that no significant difference was found in terms of experienced route
					complexity. As such, it is not expected that this factor influenced sign
					usage. Therefore, the finding with respect to signage is most likely to
					be explained by the third factor: a lack of (an) adequate (amount of)
					information offered by the application. For example, the conciseness of
					the written route instructions might have prompted wayfinders to collect
					additional information (through signs) in the first route. As mentioned
					in the introduction, the selection of the adequate amount of information
					can be challenging. With respect to <italic>SoleWay</italic>, this
					assessment has to be made by the author each time he/she describes a
					route. Turning back to <italic>Eyedog</italic>, if
				<italic>Eyedog</italic> did not provide sufficient or adequate
					wayfinding information during the second route, then this will have
					forced <italic>Eyedog</italic> users to rely more on signage while
					completing this second route. In turn, having to interpret both detailed
					photographs and a large number of signs could have led to the increased
					mental effort ratings as mentioned earlier.</p>
			<p>Based on the recordings, it is clear that all <italic>Eyedog</italic>
					users that ostensibly experienced wayfinding difficulties (e.g. errors,
					doubt), encountered them at two specific decision points along route
					two.</p>
			<p>Firstly, when participants arrived on the ground floor on their way
					to destination D, they came across a covered passageway that is part of
					the curved hallway where the destination is situated. At this
					passageway, however, participants were not aware that this hallway is
					curved. The <italic>SoleWay</italic> instruction, which was generated by
					a person based on his/her wayfinding experience, says “continue straight
					ahead through the glass double doors in front of you”. In contrast, the
				<italic>Eyedog</italic> platform took this curve into account when
					automatically generating wayfinding instructions, as it starts from the
					spatial network of the building, which in turn is based on the floor
					plan. As a result, the <italic>Eyedog</italic> instruction displays an
					arrow that follows the curve and instructs the user to “keep left” (see
					Figure 3). Although this is not incorrect, it was not in line with the
					user’s expectations. Consequently, there was considerable doubt whether
					to continue through the glass doors or to take a left turn into the
					inner courtyard.</p>
			<fig id="fig03" fig-type="figure" position="float">
				<label>Figure 3.</label>
				<caption>
					<p>Screenshot of Eyedog at the start of the curved hallway.</p>
				</caption>
				<graphic id="graph03" xlink:href="jemr-12-01-a-figure-03.png"/>
			</fig>
			<p>Secondly, to pinpoint destination E as the (intermediate) destination,
				<italic>SoleWay</italic> (see Figure 1) and <italic>Eyedog</italic> (see
					Figure 2) both refer to a display cabinet, which is situated right in
					front of the office. However, <italic>SoleWay</italic> specifies that
					this cabinet is located “halfway through the hallway”. This addition
					turned out to be of great value as a nearly identical cabinet is located
					at the beginning of the hallway. As a result, <italic>Eyedog</italic>
					users expected the destination to be (near the display cabinet) at the
					beginning of the corridor. In this case, the cabinet functioned as a
					‘false landmark’, namely an identical or very similar object that can
					mislead the navigator as it is wrongfully associated with specific
					wayfinding actions [<xref ref-type="bibr" rid="b38">38</xref>]. Although several details on the photograph
					allow a differentiation between the two cabinets, most participants
					focused only on the cabinet itself. As they were not familiar with the
					route, it was difficult for them to assess to what level of detail the
					depicted information needed to be interpreted.</p>
			<p>At these two problematic decision points, <italic>SoleWay</italic>
					offered the advantage that the information had been interpreted in
					advance by the author of the instructions, who was highly familiar with
					the route(s). As such, the author formulated route instructions that
					were (likely to be) in line with the user’s expectations and
					(unwittingly) differentiated between the cabinets by providing
					information regarding the location in the hallway. This explains why
				<italic>Eyedog</italic> users experienced more mental effort than
				<italic>SoleWay</italic> users during the second route, as mentioned
					earlier. Analogously, the results of the questionnaire (see Table 3)
					show that <italic>SoleWay</italic> users had fewer doubts about the
					further course of the route and found the <italic>SoleWay</italic>
					instructions significantly clearer, easier to follow, more detailed and
					more adequate for conveying the route. Consequently, participants were
					more inclined to recommend <italic>SoleWay</italic> than
				<italic>Eyedog</italic> for other buildings.
			</p>
			<p>Additionally, the recordings reveal a point of particular interest in
					terms of the usability and interface of <italic>Eyedog</italic>, namely
					the (lack of) incentive to swipe from one photograph to another. The
					apparent difference between <italic>Eyedog</italic> users who
					experienced little or no difficulties and those who expressed high
					mental effort ratings was that the former swiped freely between
					photographs. For example, these participants started by viewing the
					first three photographs, or even the entire route, before commencing the
					route itself and returning to the first photograph. That way they gained
					route knowledge, enabling them to connect different landmarks into a
					route. In contrast, the latter focussed strictly on a single photograph
					and only swiped to the next one once they were absolutely sure that they
					had encountered the location depicted on that photograph. This group
					could therefore only rely on landmark knowledge for orientation in the
					building. Navigators are more successful in finding destinations inside
					a building when using multiple types of spatial knowledge, such as
					landmark knowledge and route knowledge [<xref ref-type="bibr" rid="b39">39</xref>]. As a result, the participants
					with only landmark knowledge were not able to anticipate certain
					wayfinding actions and/or adapt their expectations with respect to the
					continuation of the route. In the current design, the app itself gives
					no clear incentive to swipe. At present, the user interface of
				<italic>Eyedog</italic> is being redesigned so that the app will indicate
					within which distance a user is expected to swipe to the next picture.
					The discovery of such usability issues is an important advantage of
					qualitative, field-based research [<xref ref-type="bibr" rid="b36">36</xref>].
			</p>
		</sec>
		<sec id="S6">
			<title>Limitations</title>
			<p>An explorative study was conducted to compare two pedestrian indoor
					wayfinding applications. Although explorative studies have many
					strengths, they also impose a few limitations. One of them is the
					difficulty of generalizing the findings, as two existing systems were
					used in a realistic setting with possible end-users. This means that the
					configuration of the wayfinding aids, the architecture of the building
					and the familiarity of the participants with the building influenced the
					results. However, because the participants were not familiar with the
					destinations, both routes were equally new to them. Therefore, the order
					of the routes was not randomized. Another factor that imposed some
					restrictions is the use of a mobile eye tracker. Extraction of saccadic
					measures can be difficult as both the eye and the head are moving.
					Therefore, this research used fixation measures and revisits as measures
					of cognitive load.</p>
		</sec>
		<sec id="S7">
			<title>Conclusions and Future Research</title>
			<p>This explorative study aimed to gain more insight into the use of
					textual and photo-based route instructions by comparing two wayfinding
					aids in terms of mental effort. A combination of eye fixation measures
					and subjective user ratings showed that both textual instructions and
					(augmented) photographs can enable a navigator to find his/her way while
					experiencing little or no cognitive effort or difficulty. However,
					certain decision points during a given wayfinding task require a
					specific interpretation of the situation or location to facilitate a
					comfortable wayfinding experience. In such cases, textual instructions
					offer the advantage that this specific information can be explicitly and
					concisely shared with the user, provided that the author is able to
					deduce this information from his/her wayfinding experience. Furthermore,
					the study drew attention to potential usability issues of the wayfinding
					aids and, as such, demonstrated the value of eye tracking and mental
					effort assessments in facilitating a user-centered design.</p>
			<p>Future research will examine whether a new design, in which incentives
					to swipe are given, can avoid the type of problems encountered in this
					study. This may also require an analysis of the swiping behavior of
				<italic>Eyedog</italic> users to examine when and where wayfinders need new
					information to get from A to B. This need for new information, and hence
					for a wayfinding instruction, can differ when a Location Based System
					(LBS) is used instead of swiping. As already mentioned in the background
					section, <italic>Eyedog</italic> can operate with an LBS, which
					facilitates this line of research. Another possibility for future
					research is linking different types of route instructions (e.g. text,
					image, video) to building architecture. To reduce the cognitive load
					during wayfinding, the right amount of information has to be provided in
					the most suitable manner. In other words, the right type of route
					instruction must be given at each type of decision point.</p>
		</sec>
		<sec id="S8" sec-type="COI-statement">
			<title>Ethics and Conflict of Interest</title>
			<p>The author(s) declare(s) that the contents of the article are in
					agreement with the ethics described in
				<ext-link ext-link-type="uri" xlink:href="http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html">http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html</ext-link>
					and that there is no conflict of interest regarding the publication of
					this paper.</p>
		</sec>
		<sec id="S9">
			<title>Acknowledgements</title>
			<p>This research was supported and funded by the Research Foundation
					Flanders (FWO)
					(<ext-link ext-link-type="uri" xlink:href="http://www.fwo.be">http://www.fwo.be</ext-link>)
					by a PhD fellowship grant (FWO17/ASP/242).</p>
		</sec>
	</body>
	<back>
		<ref-list>
			<ref id="b3">
				<mixed-citation publication-type="web-page" specific-use="linked">
					<person-group person-group-type="author">
						<string-name>
							<surname>Bouwer</surname>,							<given-names>A.</given-names>
						</string-name>,						<string-name>
							<surname>Nack</surname>,							<given-names>F.</given-names>
						</string-name>, &#x26; <string-name>
						<surname>El Ali</surname>,						<given-names>A.</given-names>
					</string-name>
				</person-group>
				<article-title>Lost in navigation.</article-title> Proc 14th ACM Int Conf Multimodal Interact - ICMI &#8217;12 [Internet]. <year>2012</year>;(<month>October</month>):173.
				<pub-id pub-id-type="doi">10.1145/2388676.2388712</pub-id>
			</mixed-citation>
		</ref>
		<ref id="b6">
			<mixed-citation publication-type="journal" specific-use="restruct">
				<person-group person-group-type="author">
					<string-name>
						<surname>Burnett</surname>,						<given-names>G.</given-names>
					</string-name>,					<string-name>
						<surname>Smith</surname>,						<given-names>D.</given-names>
					</string-name>, &#x26; <string-name>
					<surname>May</surname>,					<given-names>A.</given-names>
				</string-name>
			</person-group> (			<year>2001</year>).			<article-title>Supporting the navigation task: Characteristics of &#8220;good&#8221; landmarks.</article-title>
			<source>Contemp Ergon.</source>,			<volume>1</volume>,			<fpage>441</fpage>&#8211;<lpage>446</lpage>.
		</mixed-citation>
	</ref>
	<ref id="b37">
		<mixed-citation publication-type="journal" specific-use="restruct">
			<person-group person-group-type="author">
				<string-name>
					<surname>de Winter</surname>,					<given-names>J. C. F.</given-names>
				</string-name>, &#x26; <string-name>
				<surname>Dodou</surname>,				<given-names>D.</given-names>
			</string-name>
		</person-group> (		<year>2010</year>).		<article-title>Five-Point Likert Items: t test versus Mann-Whitney-Wilcoxon.</article-title>
		<source>Practical Assessment, Research &#x26; Evaluation</source>,		<volume>15</volume>(		<issue>11</issue>),		<fpage>1</fpage>&#8211;<lpage>16</lpage>.		<issn>1531-7714</issn>
	</mixed-citation>
</ref>
<ref id="b15">
	<mixed-citation publication-type="journal" specific-use="restruct">
		<person-group person-group-type="author">
			<string-name>
				<surname>Denis</surname>,				<given-names>M.</given-names>
			</string-name>
		</person-group> (		<year>1997</year>).		<article-title>The description of routes: A cognitive approach to the production of spatial discourse.</article-title>
		<source>Curr Psychol Cogn.</source>,		<volume>16</volume>(		<issue>4</issue>),		<fpage>409</fpage>&#8211;<lpage>458</lpage>.
	</mixed-citation>
</ref>
<ref id="b38">
	<mixed-citation publication-type="web-page" specific-use="unparsed">
		<person-group person-group-type="author">
			<string-name>
				<surname>Elias</surname>,				<given-names>B.</given-names>
			</string-name>
		</person-group>
		<article-title>Determination of Landmarks and Reliability Criteria for Landmarks.</article-title> In: Fifth Workshop on Progress in Automated Map Generalization [Internet]. Paris; <year>2003</year>. p. 1&#8211;12.
	</mixed-citation>
</ref>
<ref id="b8">
	<mixed-citation publication-type="journal" specific-use="restruct">
		<person-group person-group-type="author">
			<string-name>
				<surname>Fallah</surname>,				<given-names>N.</given-names>
			</string-name>,			<string-name>
				<surname>Apostolopoulos</surname>,				<given-names>I.</given-names>
			</string-name>,			<string-name>
				<surname>Bekris</surname>,				<given-names>K.</given-names>
			</string-name>, &#x26; <string-name>
			<surname>Folmer</surname>,			<given-names>E.</given-names>
		</string-name>
	</person-group> (	<year>2013</year>).	<article-title>Indoor human navigation systems: A survey.</article-title>
	<source>Interacting with Computers</source>,	<volume>25</volume>(	<issue>1</issue>),	<fpage>21</fpage>&#8211;<lpage>33</lpage>.	<issn>0953-5438</issn>
</mixed-citation>
</ref>
<ref id="b26">
<mixed-citation publication-type="journal" specific-use="restruct">
	<person-group person-group-type="author">
		<string-name>
			<surname>Fu</surname>,			<given-names>E.</given-names>
		</string-name>,		<string-name>
			<surname>Bravo</surname>,			<given-names>M.</given-names>
		</string-name>, &#x26; <string-name>
		<surname>Roskos</surname>,		<given-names>B.</given-names>
	</string-name>
</person-group> (<year>2015</year>).<article-title>Single-destination navigation in a multiple-destination environment: A new &#8220;later-destination attractor&#8221; bias in route choice.</article-title>
<source>Memory &#x26; Cognition</source>,<volume>43</volume>(<issue>7</issue>),<fpage>1043</fpage>&#8211;<lpage>1055</lpage>.<pub-id pub-id-type="doi">10.3758/s13421-015-0521-7</pub-id>
<pub-id pub-id-type="pmid">25821948</pub-id>
<issn>0090-502X</issn>
</mixed-citation>
</ref>
<ref id="b2">
<mixed-citation publication-type="conference" specific-use="linked">
<person-group person-group-type="author">
	<string-name>
		<surname>Giannopoulos</surname>,		<given-names>I.</given-names>
	</string-name>,	<string-name>
		<surname>Kiefer</surname>,		<given-names>P.</given-names>
	</string-name>, &#x26; <string-name>
	<surname>Raubal</surname>,	<given-names>M.</given-names>
</string-name>
</person-group>
<article-title>The influence of gaze history visualization on map interaction sequences and cognitive maps.</article-title>
<source>Proc 1st ACM SIGSPATIAL Int Work MapInteraction</source>.<year>2013</year>;(<month>November</month>):<fpage>1</fpage>&#8211;<lpage>6</lpage>.<pub-id pub-id-type="doi">10.1145/2534931.2534940</pub-id>
</mixed-citation>
</ref>
<ref id="b28">
<mixed-citation publication-type="conference" specific-use="linked">
<person-group person-group-type="author">
<string-name>
	<surname>Giannopoulos</surname>,	<given-names>I.</given-names>
</string-name>,<string-name>
	<surname>Kiefer</surname>,	<given-names>P.</given-names>
</string-name>,<string-name>
	<surname>Raubal</surname>,	<given-names>M.</given-names>
</string-name>,<string-name>
	<surname>Richter</surname>,	<given-names>K.</given-names>
</string-name>, &#x26; <string-name>
<surname>Thrash</surname>,<given-names>T.</given-names>
</string-name>
</person-group>
<article-title>Wayfinding Decision Situations: A Conceptual Model and Evaluation.</article-title> In: <source>International Conference on Geographic Information Science</source>.<publisher-name>Springer</publisher-name>,<publisher-loc>Cham</publisher-loc>;<year>2014</year>. p. <fpage>221</fpage>&#8211;<lpage>34</lpage>.<pub-id pub-id-type="doi">10.1007/978-3-319-11593-1_15</pub-id>
</mixed-citation>
</ref>
<ref id="b34">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Goldberg</surname>,<given-names>J. H.</given-names>
</string-name>, &#x26; <string-name>
<surname>Kotval</surname>,<given-names>X. P.</given-names>
</string-name>
</person-group> (<year>1999</year>).<article-title>Computer interface evaluation using eye movements: Methods and constructs.</article-title>
<source>International Journal of Industrial Ergonomics</source>,<volume>24</volume>(<issue>6</issue>),<fpage>631</fpage>&#8211;<lpage>645</lpage>.<pub-id pub-id-type="doi">10.1016/S0169-8141(98)00068-7</pub-id>
<issn>0169-8141</issn>
</mixed-citation>
</ref>
<ref id="b29">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Hasler</surname>,<given-names>B. S.</given-names>
</string-name>,<string-name>
<surname>Kersten</surname>,<given-names>B.</given-names>
</string-name>, &#x26; <string-name>
<surname>Sweller</surname>,<given-names>J.</given-names>
</string-name>
</person-group> (<year>2007</year>).<article-title>Learner Control, Cognitive Load and Instructional Animation.</article-title>
<comment>[Internet]</comment>.<source>Applied Cognitive Psychology</source>,<volume>21</volume>,<fpage>713</fpage>&#8211;<lpage>729</lpage>.
<pub-id pub-id-type="doi">10.1002/acp.1345</pub-id>
<issn>0888-4080</issn>
</mixed-citation>
</ref>
<ref id="b33">
<mixed-citation publication-type="book" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Holmqvist</surname>,<given-names>K.</given-names>
</string-name>,<string-name>
<surname>Nystr&#246;m</surname>,<given-names>M.</given-names>
</string-name>,<string-name>
<surname>Andersson</surname>,<given-names>R.</given-names>
</string-name>,<string-name>
<surname>Dewhurst</surname>,<given-names>R.</given-names>
</string-name>,<string-name>
<surname>Jarodzka</surname>,<given-names>H.</given-names>
</string-name>, &#x26; <string-name>
<surname>Van De Weijer</surname>,<given-names>J.</given-names>
</string-name>
</person-group> (<year>2011</year>).<source>Eye Tracking: A Comprehensive Guide to Methods and Measures</source>.<publisher-name>Oxford University Press</publisher-name>.
</mixed-citation>
</ref>
<ref id="b11">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Hund</surname>,<given-names>A. M.</given-names>
</string-name>, &#x26; <string-name>
<surname>Padgitt</surname>,<given-names>A. J.</given-names>
</string-name>
</person-group> (<year>2010</year>,<month>December</month>).<article-title>Direction giving and following in the service of wayfinding in a complex indoor environment.</article-title>
<comment>[Internet]</comment>.<source>Journal of Environmental Psychology</source>,<volume>30</volume>(<issue>4</issue>),<fpage>553</fpage>&#8211;<lpage>564</lpage>.
<pub-id pub-id-type="doi">10.1016/j.jenvp.2010.01.002</pub-id>
<issn>0272-4944</issn>
</mixed-citation>
</ref>
<ref id="b39">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>H&#246;lscher</surname>,<given-names>C.</given-names>
</string-name>,<string-name>
<surname>Meilinger</surname>,<given-names>T.</given-names>
</string-name>,<string-name>
<surname>Vrachliotis</surname>,<given-names>G.</given-names>
</string-name>,<string-name>
<surname>Br&#246;samle</surname>,<given-names>M.</given-names>
</string-name>, &#x26; <string-name>
<surname>Knauff</surname>,<given-names>M.</given-names>
</string-name>
</person-group> (<year>2006</year>,<month>December</month>).<article-title>Up the down staircase: Wayfinding strategies in multi-level buildings.</article-title>
<comment>[Internet]</comment>.<source>Journal of Environmental Psychology</source>,<volume>26</volume>(<issue>4</issue>),<fpage>284</fpage>&#8211;<lpage>299</lpage>.
<pub-id pub-id-type="doi">10.1016/j.jenvp.2006.09.002</pub-id>
<issn>0272-4944</issn>
</mixed-citation>
</ref>
<ref id="b22">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Kiefer</surname>,<given-names>P.</given-names>
</string-name>,<string-name>
<surname>Giannopoulos</surname>,<given-names>I.</given-names>
</string-name>, &#x26; <string-name>
<surname>Raubal</surname>,<given-names>M.</given-names>
</string-name>
</person-group> (<year>2014</year>).<article-title>Where am i? Investigating map matching during self-localization with mobile eye tracking in an urban environment.</article-title>
<source>Transactions in GIS</source>,<volume>18</volume>(<issue>5</issue>),<fpage>660</fpage>&#8211;<lpage>686</lpage>.<pub-id pub-id-type="doi">10.1111/tgis.12067</pub-id>
<issn>1361-1682</issn>
</mixed-citation>
</ref>
<ref id="b18">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Li</surname>,<given-names>Q.</given-names>
</string-name>
</person-group> (<year>2017</year>).<article-title>Use of maps in indoor wayfinding.</article-title>
<source>University of Twente.</source>
</mixed-citation>
</ref>
<ref id="b19">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Liu</surname>,<given-names>A. L.</given-names>
</string-name>,<string-name>
<surname>Hile</surname>,<given-names>H.</given-names>
</string-name>,<string-name>
<surname>Kautz</surname>,<given-names>H.</given-names>
</string-name>,<string-name>
<surname>Borriello</surname>,<given-names>G.</given-names>
</string-name>,<string-name>
<surname>Brown</surname>,<given-names>P. A.</given-names>
</string-name>,<string-name>
<surname>Harniss</surname>,<given-names>M.</given-names>
</string-name>, &#x26; <string-name>
<surname>Johnson</surname>,<given-names>K.</given-names>
</string-name>
</person-group> (<year>2008</year>).<article-title>Indoor wayfinding: Developing a functional interface for individuals with cognitive impairments.</article-title>
<source>Disability and Rehabilitation. Assistive Technology</source>,<volume>3</volume>(<issue>1</issue>),<fpage>69</fpage>&#8211;<lpage>81</lpage>.<pub-id pub-id-type="doi">10.1080/17483100701500173</pub-id>
<pub-id pub-id-type="pmid">18416519</pub-id>
<issn>1748-3107</issn>
</mixed-citation>
</ref>
<ref id="b5">
<mixed-citation publication-type="book-chapter" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Lovelace</surname>,<given-names>K. L.</given-names>
</string-name>,<string-name>
<surname>Hegarty</surname>,<given-names>M.</given-names>
</string-name>, &#x26; <string-name>
<surname>Montello</surname>,<given-names>D. R.</given-names>
</string-name>
</person-group> (<year>1999</year>).<chapter-title>Elements of Good Route Directions in Familiar and Unfamiliar Environments</chapter-title>. In <person-group person-group-type="editor">
<string-name>
<given-names>C.</given-names>
<surname>Freksa</surname>
</string-name> &#x26; <string-name>
<given-names>D.</given-names>
<surname>Mark</surname>
</string-name> (<role>Eds.</role>),
</person-group>
<source>Spatial Information Theory: Cognitive and Computational Foundations of Geographic Information Science</source> (pp. <fpage>65</fpage>&#8211;<lpage>82</lpage>).<publisher-name>Springer-Verlag</publisher-name>.<pub-id pub-id-type="doi">10.1007/3-540-48384-5_5</pub-id>
</mixed-citation>
</ref>
<ref id="b16">
<mixed-citation publication-type="book-chapter" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Mackaness</surname>,<given-names>W.</given-names>
</string-name>,<string-name>
<surname>Bartie</surname>,<given-names>P.</given-names>
</string-name>, &#x26; <string-name>
<surname>Espeso</surname>,<given-names>C. S.-R.</given-names>
</string-name>
</person-group> (<year>2014</year>).<chapter-title>Understanding Information Requirements in &#8216;Text Only&#8217; Pedestrian Wayfinding Systems</chapter-title>. In <person-group person-group-type="editor">
<string-name>
<given-names>M.</given-names>
<surname>Duckham</surname>
</string-name>,<string-name>
<given-names>E.</given-names>
<surname>Pebesma</surname>
</string-name>,<string-name>
<given-names>K.</given-names>
<surname>Stewart</surname>
</string-name>, &#x26; <string-name>
<given-names>A. U.</given-names>
<surname>Frank</surname>
</string-name> (<role>Eds.</role>),
</person-group>
<source>GIScience Conference</source> (pp. <fpage>235</fpage>&#8211;<lpage>252</lpage>).<pub-id pub-id-type="doi">10.1007/978-3-319-11593-1_16</pub-id>
</mixed-citation>
</ref>
<ref id="b10">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>May</surname>,<given-names>A. J.</given-names>
</string-name>,<string-name>
<surname>Ross</surname>,<given-names>T.</given-names>
</string-name>,<string-name>
<surname>Bayer</surname>,<given-names>S. H.</given-names>
</string-name>, &#x26; <string-name>
<surname>Tarkiainen</surname>,<given-names>M. J.</given-names>
</string-name>
</person-group> (<year>2003</year>,<month>December</month>
<day>1</day>).<article-title>Pedestrian navigation aids: Information requirements and design implications.</article-title>
<comment>[Internet]</comment>.<source>Personal and Ubiquitous Computing</source>,<volume>7</volume>(<issue>6</issue>),<fpage>331</fpage>&#8211;<lpage>338</lpage>.
<pub-id pub-id-type="doi" specific-use="author">10.1007/s00779-003-0248-5</pub-id>
<issn>1617-4909</issn>
</mixed-citation>
</ref>
<ref id="b20">
<mixed-citation publication-type="web-page" specific-use="unparsed">
<person-group person-group-type="author">
<string-name>
<surname>M&#246;ller</surname>,<given-names>A.</given-names>
</string-name>,<string-name>
<surname>Diewald</surname>,<given-names>S.</given-names>
</string-name>,<string-name>
<surname>Roalter</surname>,<given-names>L.</given-names>
</string-name>, &#x26; <string-name>
<surname>Kranz</surname>,<given-names>M.</given-names>
</string-name>
</person-group> Computer Aided Systems Theory - EUROCAST 2009. Eurocast 2009 [Internet]. <year>2009</year>;(<month>February</month>):53&#8211;62. 
</mixed-citation>
</ref>
<ref id="b1">
<mixed-citation publication-type="web-page" specific-use="linked">
<person-group person-group-type="author">
<string-name>
<surname>M&#246;ller</surname>,<given-names>A.</given-names>
</string-name>,<string-name>
<surname>Kranz</surname>,<given-names>M.</given-names>
</string-name>,<string-name>
<surname>Diewald</surname>,<given-names>S.</given-names>
</string-name>,<string-name>
<surname>Roalter</surname>,<given-names>L.</given-names>
</string-name>,<string-name>
<surname>Huitl</surname>,<given-names>R.</given-names>
</string-name>,<string-name>
<surname>Stockinger</surname>,<given-names>T.</given-names>
</string-name>,<etal>. . .</etal>
</person-group>.<article-title>Experimental evaluation of user interfaces for visual indoor navigation.</article-title> Proc 32nd Annu ACM Conf Hum factors Comput Syst - CHI &#8217;14 [Internet]. <year>2014</year>;(<month>May</month>):3607&#8211;16. 
<pub-id pub-id-type="doi">10.1145/2556288.2557003</pub-id>
</mixed-citation>
</ref>
<ref id="b17">
<mixed-citation publication-type="conference" specific-use="unparsed">
<person-group person-group-type="author">
<string-name>
<surname>Ohm</surname>,<given-names>C.</given-names>
</string-name>,<string-name>
<surname>Ludwig</surname>,<given-names>B.</given-names>
</string-name>, &#x26; <string-name>
<surname>Gerstmeier</surname>,<given-names>S.</given-names>
</string-name>
</person-group>
(<year>2015</year>).
<article-title>Photographs or Mobile Maps? - Displaying Landmarks in Pedestrian Navigation Systems.</article-title>
<source>Reinventing Inf Sci Networked Soc Proc 14th Int Symp Inf Sci (ISI 2015), Zadar, Croatia, 19th&#8211;21st May 2015</source>,
<volume>66</volume>,<fpage>302</fpage>&#8211;<lpage>312</lpage>.
</mixed-citation>
</ref>
<ref id="b23">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Ohm</surname>,<given-names>C.</given-names>
</string-name>,<string-name>
<surname>M&#252;ller</surname>,<given-names>M.</given-names>
</string-name>, &#x26; <string-name>
<surname>Ludwig</surname>,<given-names>B.</given-names>
</string-name>
</person-group> (<year>2017</year>).<article-title>Evaluating indoor pedestrian navigation interfaces using mobile eye tracking.</article-title>
<source>Spatial Cognition and Computation</source>,<volume>17</volume>(<issue>1&#8211;2</issue>),<fpage>89</fpage>&#8211;<lpage>120</lpage>.<pub-id pub-id-type="doi">10.1080/13875868.2016.1219913</pub-id>
<issn>1387-5868</issn>
</mixed-citation>
</ref>
<ref id="b35">
<mixed-citation publication-type="web-page" specific-use="unparsed">
<person-group person-group-type="author">
<string-name>
<surname>Ooms</surname>,<given-names>K.</given-names>
</string-name>
</person-group> Cartographic User Research in the 21st Century: Mixing and Interacting. 6th Int Conf Cartogr GIS Proc [Internet]. <year>2016</year>;(<month>June</month>):367&#8211;77.
</mixed-citation>
</ref>
<ref id="b25">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Paas</surname>,<given-names>F.</given-names>
</string-name>
</person-group> (<year>1992</year>).<article-title>Training strategies for attaining transfer of problem-solving skill in statistics: A cognitive-load approach.</article-title>
<source>J Educ Psychol.</source>,<volume>84</volume>(<issue>4</issue>),<fpage>429</fpage>&#8211;<lpage>434</lpage>.<pub-id pub-id-type="doi">10.1037/0022-0663.84.4.429</pub-id>
</mixed-citation>
</ref>
<ref id="b24">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Paas</surname>,<given-names>F.</given-names>
</string-name>,<string-name>
<surname>Tuovinen</surname>,<given-names>J.</given-names>
</string-name>,<string-name>
<surname>Tabbers</surname>,<given-names>H.</given-names>
</string-name>, &#x26; <string-name>
<surname>Van Gerven</surname>,<given-names>P. W. M.</given-names>
</string-name>
</person-group> (<year>2010</year>).<article-title>Cognitive Load Measurement as a Means to Advance Cognitive Load Theory.</article-title>
<source>Educ Psychol.</source>,<volume>38</volume>(<issue>38</issue>),<fpage>43</fpage>&#8211;<lpage>52</lpage>.
</mixed-citation>
</ref>
<ref id="b12">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Presson</surname>,<given-names>C. C.</given-names>
</string-name>, &#x26; <string-name>
<surname>Montello</surname>,<given-names>D. R.</given-names>
</string-name>
</person-group> (<year>1988</year>).<article-title>Points of reference in spatial cognition Stalking the elusive landmark.</article-title>
<source>British Journal of Developmental Psychology</source>,<volume>6</volume>,<fpage>378</fpage>&#8211;<lpage>381</lpage>.<pub-id pub-id-type="doi">10.1111/j.2044-835X.1988.tb01113.x</pub-id>
<issn>0261-510X</issn>
</mixed-citation>
</ref>
<ref id="b14">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Raubal</surname>,<given-names>M.</given-names>
</string-name>
</person-group> (<year>2001</year>).<article-title>Human wayfinding in unfamiliar buildings: A simulation with a cognizing agent.</article-title>
<source>Cognitive Processing</source>,<volume>2</volume>(<issue>2&#8211;3</issue>),<fpage>363</fpage>&#8211;<lpage>388</lpage>.<issn>1612-4782</issn>
</mixed-citation>
</ref>
<ref id="b7">
<mixed-citation publication-type="book" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Richter</surname>,<given-names>K.</given-names>
</string-name>, &#x26; <string-name>
<surname>Winter</surname>,<given-names>S.</given-names>
</string-name>
</person-group> (<year>2014</year>).<source>Landmarks</source>.<publisher-name>Springer Cham Heidelberg New York Dordrecht London</publisher-name>.<pub-id pub-id-type="doi">10.1007/978-3-319-05732-3</pub-id>
</mixed-citation>
</ref>
<ref id="b36">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Roth</surname>,<given-names>R. E.</given-names>
</string-name>,<string-name>
<surname>&#199;&#246;ltekin</surname>,<given-names>A.</given-names>
</string-name>,<string-name>
<surname>Delazari</surname>,<given-names>L.</given-names>
</string-name>,<string-name>
<surname>Filho</surname>,<given-names>H. F.</given-names>
</string-name>,<string-name>
<surname>Griffin</surname>,<given-names>A.</given-names>
</string-name>,<string-name>
<surname>Hall</surname>,<given-names>A.</given-names>
</string-name>,<string-name>
<surname>Korpi</surname>,<given-names>J.</given-names>
</string-name>,<string-name>
<surname>Lokka</surname>,<given-names>I.</given-names>
</string-name>,<string-name>
<surname>Mendon&#231;a</surname>,<given-names>A.</given-names>
</string-name>,<string-name>
<surname>Ooms</surname>,<given-names>K.</given-names>
</string-name>, &#x26; <string-name>
<surname>van Elzakker</surname>,<given-names>C. P. J. M.</given-names>
</string-name>
</person-group> (<year>2017</year>).<article-title>User studies in cartography: Opportunities for empirical research on interactive maps and visualizations.</article-title>
<source>Int J Cartogr [Internet]</source>,<volume>3</volume>(<supplement>S1</supplement>),<fpage>61</fpage>&#8211;<lpage>89</lpage>.
<pub-id pub-id-type="doi" specific-use="author">10.1080/23729333.2017.1288534</pub-id>
</mixed-citation>
</ref>
<ref id="b27">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Schmeck</surname>,<given-names>A.</given-names>
</string-name>,<string-name>
<surname>Opfermann</surname>,<given-names>M.</given-names>
</string-name>,<string-name>
<surname>van Gog</surname>,<given-names>T.</given-names>
</string-name>,<string-name>
<surname>Paas</surname>,<given-names>F.</given-names>
</string-name>, &#x26; <string-name>
<surname>Leutner</surname>,<given-names>D.</given-names>
</string-name>
</person-group> (<year>2015</year>).<article-title>Measuring cognitive load with subjective rating scales during problem solving: Differences between immediate and delayed ratings.</article-title>
<source>Instructional Science</source>,<volume>43</volume>(<issue>1</issue>),<fpage>93</fpage>&#8211;<lpage>114</lpage>.<pub-id pub-id-type="doi">10.1007/s11251-014-9328-3</pub-id>
<issn>0020-4277</issn>
</mixed-citation>
</ref>
<ref id="b21">
<mixed-citation publication-type="web-page" specific-use="linked">
<person-group person-group-type="author">
<string-name>
<surname>Schnitzler</surname>,<given-names>V.</given-names>
</string-name>,<string-name>
<surname>Giannopoulos</surname>,<given-names>I.</given-names>
</string-name>,<string-name>
<surname>H&#246;lscher</surname>,<given-names>C.</given-names>
</string-name>, &#x26; <string-name>
<surname>Barisic</surname>,<given-names>I.</given-names>
</string-name>
</person-group>
<article-title>The interplay of pedestrian navigation, wayfinding devices, and environmental features in indoor settings.</article-title> Proc Ninth Bienn ACM Symp Eye Track Res Appl - ETRA &#8217;16 [Internet]. <year>2016</year>;85&#8211;93. 
<pub-id pub-id-type="doi">10.1145/2857491.2857533</pub-id>
</mixed-citation>
</ref>
<ref id="b4">
<mixed-citation publication-type="book-chapter" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Sorrows</surname>,<given-names>M.</given-names>
</string-name>, &#x26; <string-name>
<surname>Hirtle</surname>,<given-names>S.</given-names>
</string-name>
</person-group> (<year>1999</year>).<chapter-title>The nature of landmarks for real and electronic spaces</chapter-title>. In <person-group person-group-type="editor">
<string-name>
<given-names>C.</given-names>
<surname>Freksa</surname>
</string-name> &#x26; <string-name>
<given-names>D. M.</given-names>
<surname>Mark</surname>
</string-name> (<role>Eds.</role>),
</person-group>
<source>Spatial Information Theory: Cognitive and Computational Foundations of Geographic Information Science</source> [<comment>Internet</comment>]. (pp. <fpage>37</fpage>&#8211;<lpage>50</lpage>).<publisher-name>Springer-Verlag</publisher-name>.
<pub-id pub-id-type="doi" specific-use="author">10.1007/3-540-48384-5_3</pub-id>
</mixed-citation>
</ref>
<ref id="b30">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Stark</surname>,<given-names>R.</given-names>
</string-name>,<string-name>
<surname>Mandl</surname>,<given-names>H.</given-names>
</string-name>,<string-name>
<surname>Gruber</surname>,<given-names>H.</given-names>
</string-name>, &#x26; <string-name>
<surname>Renkl</surname>,<given-names>A.</given-names>
</string-name>
</person-group> (<year>2002</year>).<article-title>Conditions and effects of example elaboration.</article-title>
<source>Learning and Instruction</source>,<volume>12</volume>(<issue>1</issue>),<fpage>39</fpage>&#8211;<lpage>60</lpage>.<pub-id pub-id-type="doi">10.1016/S0959-4752(01)00015-9</pub-id>
<issn>0959-4752</issn>
</mixed-citation>
</ref>
<ref id="b13">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Streeter</surname>,<given-names>L.</given-names>
</string-name>,<string-name>
<surname>Vitello</surname>,<given-names>D.</given-names>
</string-name>, &#x26; <string-name>
<surname>Wonsiewicz</surname>,<given-names>S. A.</given-names>
</string-name>
</person-group> (<year>1985</year>).<article-title>How to tell people where to go: Comparing navigational aids.</article-title>
<source>International Journal of Man-Machine Studies</source>,<volume>22</volume>(<issue>5</issue>),<fpage>549</fpage>&#8211;<lpage>562</lpage>.<pub-id pub-id-type="doi">10.1016/S0020-7373(85)80017-1</pub-id>
<issn>0020-7373</issn>
</mixed-citation>
</ref>
<ref id="b9">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>Tversky</surname>,<given-names>B.</given-names>
</string-name>, &#x26; <string-name>
<surname>Lee</surname>,<given-names>P. U.</given-names>
</string-name>
</person-group> (<year>1999</year>).<article-title>Pictorial and Verbal Tools for Conveying Routes.</article-title> In <source>Spatial Information Theory: Cognitive and Computational Foundations of Geographic Information Science, Lecture Notes in Computer Science</source>,<volume>1661</volume>,<fpage>51</fpage>&#8211;<lpage>64</lpage>.
</mixed-citation>
</ref>
<ref id="b31">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>van Gog</surname>,<given-names>T.</given-names>
</string-name>, &#x26; <string-name>
<surname>Paas</surname>,<given-names>F.</given-names>
</string-name>
</person-group> (<year>2008</year>).<article-title>Instructional efficiency: Revisiting the original construct in educational research.</article-title>
<source>Educ Psychol.</source>,<volume>43</volume>(<issue>1</issue>),<fpage>16</fpage>&#8211;<lpage>26</lpage>.<pub-id pub-id-type="doi">10.1080/00461520701756248</pub-id>
</mixed-citation>
</ref>
<ref id="b32">
<mixed-citation publication-type="journal" specific-use="restruct">
<person-group person-group-type="author">
<string-name>
<surname>van Gog</surname>,<given-names>T.</given-names>
</string-name>,<string-name>
<surname>Kester</surname>,<given-names>L.</given-names>
</string-name>,<string-name>
<surname>Nievelstein</surname>,<given-names>F.</given-names>
</string-name>,<string-name>
<surname>Giesbers</surname>,<given-names>B.</given-names>
</string-name>, &#x26; <string-name>
<surname>Paas</surname>,<given-names>F.</given-names>
</string-name>
</person-group> (<year>2009</year>,<month>March</month>).<article-title>Uncovering cognitive processes: Different techniques that can contribute to cognitive load research and instruction.</article-title>
<comment>[Internet]</comment>.<source>Computers in Human Behavior</source>,<volume>25</volume>(<issue>2</issue>),<fpage>325</fpage>&#8211;<lpage>331</lpage>.
<pub-id pub-id-type="doi">10.1016/j.chb.2008.12.021</pub-id>
<issn>0747-5632</issn>
</mixed-citation>
</ref>
</ref-list>
</back>
</article>
	