<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.11.4.2</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>The impact of text segmentation
on subtitle reading</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Gerber-Morón</surname>
						<given-names>Olivia</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Szarkowska</surname>
						<given-names>Agnieszka</given-names>
					</name>
					<xref ref-type="aff" rid="aff2 aff3">2, 3</xref>
				</contrib>	
				<contrib contrib-type="author">
					<name>
						<surname>Woll</surname>
						<given-names>Bencie</given-names>
					</name>
					<xref ref-type="aff" rid="aff2"></xref>
				</contrib>        			
        <aff id="aff1">
		<institution>Universitat Autònoma de Barcelona</institution>,   <country>Spain</country>
        </aff>
        <aff id="aff2">
		<institution>University College London</institution>,   <country>UK</country>
        </aff>
        <aff id="aff3">
		<institution>University of Warsaw</institution>,   <country>Poland</country>
        </aff>                
		</contrib-group>   

		
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>30</day>  
		<month>6</month>
        <year>2018</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2018</year>
	</pub-date>
      <volume>11</volume>
      <issue>4</issue>
	 <elocation-id>10.16910/jemr.11.4.2</elocation-id> 
	<permissions> 
	<copyright-year>2018</copyright-year>
	<copyright-holder>Gerber-Morón, O., Szarkowska, A., &#x26; Woll, B.</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>Understanding the way people watch subtitled films has become a central concern for
subtitling researchers in recent years. Both subtitling scholars and professionals generally
believe that in order to reduce cognitive load and enhance readability, line breaks in two-line
subtitles should follow syntactic units. However, previous research has been
inconclusive as to whether syntactic-based segmentation facilitates comprehension and
reduces cognitive load. In this study, we assessed the impact of text segmentation on subtitle
processing among different groups of viewers: hearing people with different mother tongues
(English, Polish, and Spanish) and deaf, hard of hearing, and hearing people with English
as a first language. We measured three indicators of cognitive load (difficulty, effort, and
frustration) as well as comprehension and eye tracking variables. Participants watched two
video excerpts with syntactically and non-syntactically segmented subtitles. The aim was to
determine whether syntactic-based text segmentation as well as the viewers’ linguistic
background influence subtitle processing. Our findings show that non-syntactically
segmented subtitles induced higher cognitive load, but they did not adversely affect
comprehension. The results are discussed in the context of cognitive load, audiovisual
translation, and deafness.</p>
      </abstract>
      <kwd-group>
        <kwd>eye movement</kwd>
        <kwd>reading</kwd>
        <kwd>region of interest</kwd>
        <kwd>subtitling</kwd>
        <kwd>audiovisual translation</kwd>
        <kwd>media accessibility</kwd>
        <kwd>cognitive load</kwd>
        <kwd>segmentation</kwd>
        <kwd>line breaks</kwd>
        <kwd>revisits</kwd>                                                             
      </kwd-group>
    </article-meta>
  </front>	
  <body>

<sec id="S1">
  <title>Introduction </title>

  <p>In the modern world, we are surrounded by screens, captions, and
  moving images more than ever before. Technological advancements and
  accessibility legislation, such as the United Nations Convention on
the Rights of Persons with Disabilities (2006), the Audiovisual Media
  Services Directive, or the European Accessibility Act, have empowered
  different types of viewers across the globe to access multilingual
  audiovisual content. Viewers who do not know the language of the
  original production or people who are deaf or hard of hearing can
  follow film dialogues thanks to subtitles (<xref ref-type="bibr" rid="b15">15</xref>).</p>

  <p>Because watching subtitled films requires viewers to follow the
  action, listen to the soundtrack and read the subtitles, it is
  important for subtitles to be presented in a way that facilitates
  rather than hampers reading (<xref ref-type="bibr" rid="b12 b21">12, 21</xref>). Some typographical subtitle parameters, such as
  small font size, illegible typeface or optical blur, have been shown
  to impede reading (<xref ref-type="bibr" rid="b2 b63">2, 63</xref>). In this study, we examine whether segmentation,
  i.e. the way text is divided across lines in a two-line subtitle,
  affects the subtitle reading process. We predict that segmentation not
  aligned with grammatical structure may have a detrimental effect on
  the processing of subtitles.</p>

  <sec id="S1a">
    <title>Readability and syntactic segmentation in subtitles</title>

    <p>The general consensus among scholars in audiovisual translation,
    media regulation, and television broadcasting is that to enhance
    readability, linguistic phrases in two-line subtitles should not be
    split across lines (<xref ref-type="bibr" rid="b6 b12 b18 b21 b42">6, 12, 18, 21, 42</xref>). For
    instance, subtitle (1a) below is an example of correct
    syntactic-based line segmentation, whereas in (1b) the indefinite
    article “a” is incorrectly separated from the accompanying noun
    phrase (<xref ref-type="bibr" rid="b6">6</xref>).</p>

  <p specific-use="wrapper">
    <disp-quote>
      <p>(1a)</p>
      <p>We are aiming to get</p>
      <p><underline>a better television service</underline>.</p>
      <p>(1b)</p>
      <p>We are aiming to get <underline>a</underline></p>
      <p><underline>better television service</underline>.</p>
    </disp-quote>
  </p>      
        
    <p>The underlying assumption is that more cognitive effort is
    required to process text when it is not segmented according to
    syntactic rules (<xref ref-type="bibr" rid="b44">44</xref>). However, segmentation rules are not
    always respected in the subtitling industry. One of the reasons for
    this might be the cost: editing text in subtitles requires human
    time and effort, and as such is not always cost-effective. Another
    reason is that syntactic-based segmentation may require substantial
    text reduction in order to comply with maximum line length limits.
    As a result, when applying syntactic rules to segmentation of
    subtitles, some information might be lost. Following this line of
    thought, BBC subtitling guidelines (<xref ref-type="bibr" rid="b6">6</xref>) stress that
    well-edited text and synchronisation should be prioritized over
    syntactically-based line breaks.</p>
    
    <p>The widely held belief that words “intimately connected by logic,
    semantics, or grammar” should be kept in the same line whenever
    possible (<xref ref-type="bibr" rid="b18">18</xref>, p. 77) may be rooted in the
    concept of parsing in reading (<xref ref-type="bibr" rid="b51">51</xref>, p. 216). Parsing, i.e. the process of identifying
    which groups of words go together in a sentence (<xref ref-type="bibr" rid="b68">68</xref>),
    allows a text to be interpreted incrementally as it is read. It has
    been reported that “line breaks, like punctuation, may have quite
    profound effects on the reader’s segmentation strategies” (<xref ref-type="bibr" rid="b23">23</xref>, p. 56). Insight into these
    strategies can be obtained through studies of readers’ eye
    movements, which reflect the process of parsing: longer fixation
    durations, higher frequency of regressions, and longer reading time
    may be indicative of processing difficulties (<xref ref-type="bibr" rid="b49">49</xref>). An
    inappropriately placed line break may lead a reader to incorrectly
    interpret the meaning and structure, luring the reader into a parse
    that turns out to be a dead end or yields a clearly unintended
    reading – a so-called “garden path” experience (<xref ref-type="bibr" rid="b14 b51">14, 51</xref>). The reader must then reject their initial
    interpretation and re-read the text. This takes extra time and, as
    such, is unwanted in subtitling, which is supposed to be as
    unobtrusive as possible and should not interfere with the viewer’s
    enjoyment of the moving images (<xref ref-type="bibr" rid="b12">12</xref>).</p>
    
    <p>Despite a substantial body of experimental research on subtitling
    (<xref ref-type="bibr" rid="b7 b9 b10 b24 b29 b30 b47 b61">7, 9, 10, 24, 29, 30, 47, 61</xref>), the question of whether text segmentation affects subtitle
    processing (<xref ref-type="bibr" rid="b44">44</xref>) still remains unanswered. Previous
    research is inconclusive as to whether linguistically segmented text
    facilitates subtitle processing and comprehension. Contrary to
    arguments underpinning professional subtitling recommendations,
    Perego, Del Missier, Porta, &#x26;  Mosconi (<xref ref-type="bibr" rid="b46">46</xref>), who used
    eye-tracking to examine subtitle comprehension and processing, found
    no disruptive effect of “syntactically incoherent” segmentation of
    noun phrases on the effectiveness of subtitle processing in Italian.
    In their study, the number of fixations and saccadic crossovers
    (i.e. gaze jumps between the image and the subtitle) did not differ
    between the syntactically segmented and non-segmented conditions. In
    contrast, in a study on live subtitling, Rajendran, Duchowski,
    Orero, Martínez, &#x26;  Romero-Fresco (<xref ref-type="bibr" rid="b48">48</xref>) showed benefits of
    linguistically-based segmentation by phrase, which induced fewer
    fixations and saccadic crossovers, and resulted in the shortest mean
    fixation duration, together indicating less effortful
    processing.</p>
    
    <p>Ivarsson &#x26;  Carroll (<xref ref-type="bibr" rid="b18">18</xref>) noted that “matching line breaks
    with sense blocks is especially important for viewers with any kind
    of linguistic disadvantage, e.g. immigrants or young children
    learning to read or the deaf with their acknowledged reading
    problems” (p. 78). Indeed, early deafness is strongly associated
    with reading difficulties (<xref ref-type="bibr" rid="b37 b41">37, 41</xref>). Researchers investigating subtitle reading
    by deaf viewers have demonstrated processing difficulties resulting
    in lower comprehension and more time spent by deaf viewers on
    reading subtitles (<xref ref-type="bibr" rid="b25 b26 b60">25, 26, 60</xref>). Lack of familiarity with subtitling is
    another aspect which may affect the way people read subtitles. In a
    recent study, Perego et al. (<xref ref-type="bibr" rid="b47">47</xref>) found that subtitling can hinder
    viewers accustomed to dubbing from fully processing film images,
    especially in the case of structurally complex subtitles.</p>
  </sec>

  <sec id="S1b">
    <title>Cognitive load</title>

    <p>Watching a subtitled video is a complex task: not only do viewers
    need to follow the dynamically unfolding on-screen actions,
    accompanied by various sounds, but they also need to read the
    subtitles (<xref ref-type="bibr" rid="b32">32</xref>). This complex
    processing task may be hindered by poor quality subtitles, possibly
    including aspects such as non-syntactic segmentation. The processing
    of subtitles has been previously studied in association with the
    concept of cognitive load (<xref ref-type="bibr" rid="b27">27</xref>), rooted in
    cognitive load theory (CLT) and instructional design (<xref ref-type="bibr" rid="b56">56</xref>). According to the central tenet of CLT, materials
    should be designed to reduce any unnecessary load, freeing up processing
    capacity for task-related activities (<xref ref-type="bibr" rid="b58">58</xref>).</p>
    
    <p>In the initial formulation of CLT, two types of cognitive load
    were distinguished: intrinsic and extraneous (<xref ref-type="bibr" rid="b8">8</xref>). Intrinsic cognitive load is related to the
    complexity and characteristics of the task (<xref ref-type="bibr" rid="b54">54</xref>). Extraneous load relates to how the
    information is presented; if presentation is inefficient, learning
    can be hindered (<xref ref-type="bibr" rid="b57">57</xref>). For instance,
    too many colours or blinking headlines in a lecture presentation can
    distract students rather than help them focus, wasting attentional
    resources on task-irrelevant details (<xref ref-type="bibr" rid="b54">54</xref>). Later
    studies in CLT also distinguish the concept of ‘germane cognitive
    load’ and, more recently, ‘germane resources’ (<xref ref-type="bibr" rid="b54 b57">54, 57</xref>). It is believed that germane load is not
    imposed by the characteristics of the materials and that germane
    resources should be “high enough to deal with the intrinsic
    cognitive load caused by the content” (<xref ref-type="bibr" rid="b54">54</xref>). In
    this paper, we set out to test whether non-syntactically segmented
    text may strain working memory capacity and prevent viewers from
    efficiently processing subtitled videos. It is our contention that
    just as the goal of instructional designers is to foster learning by
    keeping extraneous cognitive load as low as possible (<xref ref-type="bibr" rid="b54">54</xref>), so it is the task of subtitlers to reduce the extraneous
    load on viewers, enabling them to focus on what is important during
    the film-watching experience.</p>
    
    <p>The concept of cognitive load encompasses different categories
    (<xref ref-type="bibr" rid="b58 b67">58, 67</xref>). Mental effort is
    understood, following Paas, Tuovinen, Tabbers, &#x26;  Van Gerven
    (<xref ref-type="bibr" rid="b43">43</xref>, p. 64) and Sweller et al. (<xref ref-type="bibr" rid="b57">57</xref>, p. 73), as “the aspect of
    cognitive load that refers to the cognitive capacity that is
    actually allocated to accommodate the demands imposed by the task”.
    As mental effort invested in a task is not necessarily equal to the
    difficulty of the task, difficulty is a construct distinct from
    effort (<xref ref-type="bibr" rid="b66">66</xref>). Drawing on the multidimensional
    NASA Task Load Index (<xref ref-type="bibr" rid="b16">16</xref>), some researchers
    also included other aspects of cognitive load, such as temporal
    demand, performance, and frustration with the task (<xref ref-type="bibr" rid="b57">57</xref>). Apart from effort, difficulty and frustration, of particular
    importance in the present study is performance, operationalised here
    as comprehension score, which demonstrates how well a person carried
    out the task. Performance may be positively affected by lower
    cognitive load, as there is more unallocated processing capacity to
    carry out the task. As the task complexity increases, more effort
    needs to be expended to keep the performance at the same level (<xref ref-type="bibr" rid="b43">43</xref>).</p>
    
    <p>Cognitive load can be measured using subjective or objective
    methods (<xref ref-type="bibr" rid="b27 b57">27, 57</xref>).
    Subjective cognitive load measurement is usually done indirectly
    using rating scales (<xref ref-type="bibr" rid="b43 b54">43, 54</xref>), where
    people are asked to rate their mental effort or the perceived
    difficulty of a task on a 7- or 9-point Likert scale, ranging from
    “very low” to “very high” (<xref ref-type="bibr" rid="b66">66</xref>). Subjective
    rating scales have been criticised for relying on a single item
    (usually either mental load or difficulty) in assessing cognitive
    load (<xref ref-type="bibr" rid="b54">54</xref>). Yet, they have been found to
    effectively show the correlations between the variation in cognitive
    load reported by people and the variation in the complexity of the
    task they were given (<xref ref-type="bibr" rid="b43">43</xref>). According to Sweller et
    al. (<xref ref-type="bibr" rid="b57">57</xref>), “the simple subjective rating scale [...], has, perhaps
    surprisingly, been shown to be the most sensitive measure available
    to differentiate the cognitive load imposed by different
    instructional procedures” (p. 74). The problem with rating scales is
    that they are applied to the task as a whole, after it has been
    completed. In contrast, objective methods, which include
    physiological tools such as eye tracking or electroencephalography
    (EEG), enable researchers to see fluctuations in cognitive load over
    time (<xref ref-type="bibr" rid="b3 b65">3, 65</xref>). A higher number of fixations and longer fixation
    durations are generally associated with higher processing effort and
    increased cognitive load (<xref ref-type="bibr" rid="b17 b28">17, 28</xref>). In our study, we combine subjective
    rating scales with objective eye-tracking measures to obtain a more
    reliable view on cognitive load during the task of subtitle
    processing.</p>
    
    <p>Various types of measures have been used to evaluate cognitive
    load in subtitling. Some previous studies have used subjective
    post-hoc rating scales to assess people’s cognitive load when
    watching subtitled audiovisual material (<xref ref-type="bibr" rid="b27 b30 b69">27, 30, 69</xref>);
    subtitlers’ cognitive load when producing live subtitles with
    respeaking (<xref ref-type="bibr" rid="b62">62</xref>); or the level
    of translation difficulty (<xref ref-type="bibr" rid="b55">55</xref>). Some studies on
    subtitling have used eye tracking to examine cognitive load and
    attention distribution in a subtitled lecture (<xref ref-type="bibr" rid="b30">30</xref>);
    cognitive load while reading edited and verbatim subtitles
    (<xref ref-type="bibr" rid="b60">60</xref>); or the processing of native and foreign
    subtitles in films (<xref ref-type="bibr" rid="b7">7</xref>); to mention just a few.
    Using both eye tracking and subjective self-report ratings, Łuczak
    (<xref ref-type="bibr" rid="b34">34</xref>) tested the impact of the language of the soundtrack (English,
    Hungarian, or no audio) on viewers’ cognitive load. Kruger, Doherty,
    Fox, et al. (<xref ref-type="bibr" rid="b28">28</xref>) combined eye tracking, EEG, and self-reported
    psychometrics in their examination of the effects of language and
    subtitle placement on cognitive load in traditional intralingual
    subtitling and experimental integrated titles. For a critical
    overview of eye tracking measures used in empirical research on
    subtitling, see (<xref ref-type="bibr" rid="b13">13</xref>), and of the
    applications of cognitive load theory to subtitling research, see
    Kruger &#x26;  Doherty (<xref ref-type="bibr" rid="b27">27</xref>).</p>
  </sec>

  <sec id="S1c">
    <title>Overview of the current study</title>

    <p>The main goal of this study is to test the impact of segmentation
    on subtitle processing. With this goal in mind, we showed
    participants two videos: one with syntactically segmented text in
    the subtitles (SS) and one where text was not syntactically
    segmented (NSS). In order to compensate for any differences in the
    knowledge of source language and accessibility of the soundtrack to
    deaf and hearing participants, we used videos where the soundtrack
    was in Hungarian – a language that participants could not
    understand.</p>
    
    <p>All subtitles in this study were shown in English.
    The reason for this is threefold. First, non-compliance with
    subtitling guidelines with regard to text segmentation and line
    breaks is particularly visible on British television in
    English-to-English subtitling. Although the UK is the leader in
    subtitling when it comes to the quantity of subtitle provision, with
    many TV channels subtitling 100% of their programmes, the
    quality of pre-recorded subtitles is often below professional
    subtitling standards with regard to subtitle segmentation. Another
    reason for using English – as opposed to showing participants
    subtitles in their respective mother tongues – was to ensure
    identical linguistic structures in the subtitles. A final reason for
    using English is that, as participants live in the UK, they are able
    to watch English subtitles on television. The choice of English
    subtitles is therefore ecologically valid.</p>
    
    <p>We measured participants’ cognitive load and comprehension as
    well as a number of eye tracking variables. Following the
    established method of measuring self-reported cognitive load
    previously used by Kruger et al. (<xref ref-type="bibr" rid="b30">30</xref>, <xref ref-type="bibr" rid="b61">61</xref>) and Łuczak (<xref ref-type="bibr" rid="b34">34</xref>), we measured three
    aspects of cognitive load: perceived difficulty, effort, and
    frustration, using subjective rating scales (<xref ref-type="bibr" rid="b54">54</xref>). We also related viewers’ cognitive load to their performance,
    operationalised here as comprehension score. Based on the subtitling
    literature (<xref ref-type="bibr" rid="b45">45</xref>), we predicted that non-syntactically
    segmented text in subtitles would result in higher cognitive load
    and lower comprehension. We hypothesised that subtitles in the NSS
    condition would be more difficult to read because of increased
    parsing difficulties and extra cognitive resources which might be
    expended on additional processing.</p>
    
    <p>In terms of eye tracking, we hypothesised that people would spend
    more time reading subtitles in the NSS condition. To measure this,
    we calculated the absolute reading time and proportional reading
    time of subtitles as well as fixation count in the subtitles.
    Absolute reading time is the time the viewers spent in the subtitle
    area, measured in milliseconds, whereas proportional reading time is
    a percentage of time spent in the subtitle area relative to subtitle
    duration (<xref ref-type="bibr" rid="b11 b24">11, 24</xref>). Furthermore, because we thought that the
    non-syntactically segmented text would be more difficult to process,
    we also expected higher mean fixation duration and more revisits to
    the subtitle area in the NSS condition (<xref ref-type="bibr" rid="b17 b50 b51">17, 50, 51</xref>).</p>
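
    <p>The eye-tracking measures defined above can be sketched in code. The
    following is a minimal illustration assuming a simple per-fixation record
    format; the field names and function are hypothetical and not taken from the
    authors' analysis pipeline.</p>

```python
def reading_measures(fixations, subtitle_duration_ms):
    """Compute subtitle reading measures from per-fixation records.

    fixations: ordered list of dicts with 'duration_ms' (int) and
    'in_subtitle_area' (bool) keys. Field names are illustrative.
    """
    in_area = [f for f in fixations if f["in_subtitle_area"]]

    # Absolute reading time: total fixation time in the subtitle area (ms).
    absolute_reading_time = sum(f["duration_ms"] for f in in_area)

    # Proportional reading time: % of subtitle display time spent in the area.
    proportional_reading_time = 100.0 * absolute_reading_time / subtitle_duration_ms

    fixation_count = len(in_area)
    mean_fixation_duration = (
        absolute_reading_time / fixation_count if fixation_count else 0.0
    )

    # Revisits: the gaze re-enters the subtitle area after having left it,
    # i.e. every entry into the area beyond the first.
    entries = 0
    inside = False
    for f in fixations:
        if f["in_subtitle_area"] and not inside:
            entries += 1
        inside = f["in_subtitle_area"]
    revisits = max(0, entries - 1)

    return {
        "absolute_reading_time": absolute_reading_time,
        "proportional_reading_time": proportional_reading_time,
        "fixation_count": fixation_count,
        "mean_fixation_duration": mean_fixation_duration,
        "revisits": revisits,
    }
```

    <p>For example, for a subtitle displayed for 4000 ms with fixations of 200
    and 300 ms in the subtitle area, 250 ms in the image, and a further 150 ms
    back in the subtitle area, the function yields an absolute reading time of
    650 ms, a proportional reading time of 16.25%, three fixations, and one
    revisit.</p>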
    
    <p>To address the contribution of hearing status and experience with
    subtitling to cognitive processing, our study includes British
    viewers with varying hearing status (deaf, hard of hearing, and
    hearing), and hearing native speakers of different languages:
    Spanish people, who grew up in a country where the dominant type of
    audiovisual translation is dubbing, and Polish people, who come from
    the tradition of voice-over and subtitling. We conducted two
    experiments: Experiment 1 with hearing people from the UK, Poland,
    and Spain, and Experiment 2 with English hearing, hard of hearing
    and deaf people. We predicted that for those who are not used to
    subtitling, cognitive load would be higher, comprehension would be
    lower, and time spent in the subtitle area would be longer, as indicated
    by absolute reading time, fixation count and proportional reading
    time.</p>
    
    <p>By using a combination of different research methods, such as eye
    tracking, self-reports, and questionnaires, we have been able to
    analyse the impact of text segmentation on the processing of
    subtitles, modulated by different linguistic backgrounds of viewers.
    Examining these issues is particularly relevant from the point of
    view of current subtitling standards and practices.</p>
  </sec>
</sec>

<sec id="S2">
  <title>Methods</title>

  <p>The study took place at University College London and was part of a
  larger project on testing subtitle processing with eye tracking. In
  this paper, we report the results from two experiments using the same
  methodology and materials: Experiment 1 with hearing native speakers
  of English, Polish, and Spanish; and Experiment 2 with hearing, hard
  of hearing, and deaf British participants. The English-speaking hearing
  participants are the same in both experiments. In each of the two
  experiments, we employed a mixed factorial design with segmentation
  (syntactically segmented vs. non-syntactically segmented) as the main
  within-subject independent variable, and language (Exp. 1) or hearing
  loss (Exp. 2) as a between-subject factor.</p>

  <p>All the study materials and results are available in an open data
  repository RepOD hosted by the University of Warsaw (<xref ref-type="bibr" rid="b59">59</xref>).</p>

  <sec id="S2a">
    <title>Participants </title>

    <p>Participants were recruited from the UCL Psychology pool of
    volunteers, social media (Facebook page of the project, Twitter),
    and personal networking. Hard of hearing participants were recruited
    with the help of the National Association of Deafened People. Deaf
    participants were also contacted through the UCL Deafness,
    Cognition, and Language Research Centre participant pool.
    Participants were required not to know Hungarian.</p>
    
    <table-wrap id="t01" position="float">
					<label>Table 1.</label>
					<caption>
						<p>Demographic information on participants</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">

        <thead>
          <tr>
            <th colspan="2">Experiment 1</th>
            <th></th>
            <th></th>
            <th></th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td></td>          
            <td></td>
            <td>English</td>
            <td>Polish</td>
            <td>Spanish</td>
          </tr>
          <tr>
            <td>Gender</td>
            <td>Male</td>
            <td>13</td>
            <td>5</td>
            <td>10</td>
          </tr>
          <tr>
            <th></th>          
            <td>Female</td>
            <td>14</td>
            <td>16</td>
            <td>16</td>
          </tr>
          <tr>
            <td>Age</td>
            <td>Mean</td>
             <td>27.59</td>
            <td>24.71</td>
             <td>28.12</td> 
          </tr>
           <tr>    
            <th></th>                                     
              <td>(SD)</td>
              <td>(7.79)</td>
              <td>(5.68)</td>
              <td>(5.88)</td>
          </tr>
          <tr>
            <th></th>          
            <td>Range</td>
            <td>20-54</td>
            <td>19-38</td>
            <td>19-42</td>
          </tr>
          <tr>
            <td colspan="2">Experiment 2</td>
            <td></td>
            <td></td>
            <td></td>
          </tr>
          <tr>
            <td></td>
            <td></td>            
            <td>Hearing</td>
            <td>Hard of hearing</td>
            <td>Deaf</td>
          </tr>
          <tr>
            <td>Gender</td>
            <td>Male</td>
            <td>13</td>
            <td>2</td>
            <td>4</td>
          </tr>
          <tr>
            <th></th>          
            <td>Female</td>
            <td>14</td>
            <td>8</td>
            <td>5</td>
          </tr>
          <tr>
            <td>Age</td>
            <td>Mean</td>
             <td>27.59</td>
             <td>46.40</td>
            <td>42.33</td>
          </tr>
          <tr>
            <th></th>          
              <td>(SD)</td>
              <td>(7.79)</td>
              <td>(12.9)</td>
              <td>(14.18)</td>
          </tr>
          <tr>
            <th></th>          
            <td>Range</td>
            <td>20-54</td>
            <td>22-72</td>
            <td>24-74</td>
          </tr>
        </tbody>
      </table>
    </table-wrap>

    <p>Experiment 1 participants were pre-screened to be native speakers
    of English, Polish or Spanish, aged above 18. They were all resident
    in the UK. We tested 27 English, 21 Polish, and 26 Spanish speakers
    (see Table 1). At the study planning and design stage, Spanish
    speakers were included on the assumption that they would be
    unaccustomed to subtitling as they come from Spain, a country in
    which foreign programming is traditionally presented with dubbing.
    Polish participants were included as Poland is a country where
    voice-over and subtitling are commonly used, the former on
    television and VOD, and the latter in cinemas, DVDs, and VOD. The
    hearing English participants were used as a control group.</p>
    
    <p>Despite their experiences in their native countries, when asked
    about the preferred type of audiovisual translation (AVT), most of
    the Spanish participants declared they preferred subtitling and many
    of the Polish participants reported that they watch films in the
    original (see Table 2).</p>
    
<table-wrap id="t02" position="float">
					<label>Table 2.</label>
					<caption>
						<p>Preferred way of watching foreign films</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">

        <thead>
          <tr>
            <th></th>
            <th>English</th>
            <th>Polish</th>
            <th>Spanish</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>Subtitling</td>
            <td>24</td>
            <td>11</td>
            <td>22</td>
          </tr>
          <tr>
            <td>Dubbing</td>
            <td>0</td>
            <td>0</td>
            <td>1</td>
          </tr>
          <tr>
            <td>Voice-over</td>
            <td>1</td>
            <td>0</td>
            <td>0</td>
          </tr>
          <tr>
            <td>I watch films in their original version</td>
            <td>1</td>
            <td>10</td>
            <td>3</td>
          </tr>
          <tr>
            <td>I never watch foreign films</td>
            <td>1</td>
            <td>0</td>
            <td>0</td>
          </tr>
        </tbody>
      </table>
    </table-wrap>

    <p>We also asked the participants how often they watched English and
    non-English programmes with English subtitles (Fig. 1).</p>

<fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1.</label>
					<caption>
						<p>Participants’ subtitle viewing habits</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-11-04-b-figure-01.png"/>
				</fig>    
    
    <p>The heterogeneity of participants’ habits and preferences
    reflects the changing AVT landscape in Europe (<xref ref-type="bibr" rid="b36">36</xref>) on the one hand, and on the other, may be
    attributed to the fact that participants were living in the UK and
    thus had different experiences of audiovisual translation than in
    their home countries. These profiles mean that the participants are
    not fully representative of the Spanish and Polish populations, which
    we acknowledge here as a limitation of the study.</p>


    <p>To determine the level of participants’ education, hearing people
    were asked to state the highest level of education they completed
    (Table 3, see also Table 5 for hard of hearing and deaf participants). Overall,
              the sample was relatively well-educated.</p>

<table-wrap id="t03" position="float">
					<label>Table 3.</label>
					<caption>
						<p>Education background of hearing participants in
    Experiment 1</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">

        <thead>
          <tr>
            <th></th>
            <th>English</th>
            <th>Polish</th>
            <th>Spanish</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>Secondary education</td>
            <td>5</td>
            <td>9</td>
            <td>6</td>
          </tr>
          <tr>
            <td>Bachelor's degree</td>
            <td>14</td>
            <td>4</td>
            <td>6</td>
          </tr>
          <tr>
            <td>Master's degree</td>
            <td>8</td>
            <td>8</td>
            <td>13</td>
          </tr>
          <tr>
            <td>PhD</td>
            <td>0</td>
            <td>0</td>
            <td>1</td>
          </tr>
        </tbody>
      </table>
    </table-wrap>

    <p>As the subtitles used in the experiments were in English, we asked
    Polish and Spanish speakers to assess their proficiency in reading
    English using the Common European Framework of Reference for
    Languages (levels A1 to C2); see Table 4. None of the participants
    declared a reading level lower than B1. The difference in English
    reading proficiency between the Polish and Spanish participants was
    not statistically significant, <italic>χ<sup>2</sup></italic>(3) =
    5.144, <italic>p</italic> = .162. Before declaring their
    proficiency, each participant was presented with a sheet describing
    the skills and competences required at each proficiency level
    (<xref ref-type="bibr" rid="b59">59</xref>). There is evidence that
    self-report correlates reasonably well with objective assessments
    (<xref ref-type="bibr" rid="b35">35</xref>).</p>

<table-wrap id="t04" position="float">
					<label>Table 4.</label>
					<caption>
						<p>Self-reported English reading proficiency of
Polish and Spanish participants</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">

        <thead>
          <tr>
            <th></th>
            <th>Polish</th>
            <th>Spanish</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>B1</td>
            <td>0</td>
            <td>1</td>
          </tr>
          <tr>
            <td>B2</td>
            <td>0</td>
            <td>4</td>
          </tr>
          <tr>
            <td>C1</td>
            <td>3</td>
            <td>5</td>
          </tr>
          <tr>
            <td>C2</td>
            <td>18</td>
            <td>16</td>
          </tr>
          <tr>
            <td>Total</td>
            <td>21</td>
            <td>26</td>
          </tr>
        </tbody>
      </table>
    </table-wrap>
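    <p>The reported χ² statistic can be reproduced from the counts in
    Table 4; the degrees of freedom (3) match a 4 × 2 contingency table
    of level counts for the two language groups. The following is a
    minimal pure-Python sketch under that assumption (the function name
    is ours, not part of any analysis software used in the study):</p>

```python
# Pearson chi-square test on the Table 4 counts (rows B1-C2,
# columns Polish and Spanish). Illustrative sketch; assumes the
# reported test was a standard 4 x 2 contingency chi-square.

def chi_square(table):
    """Return the Pearson chi-square statistic and degrees of freedom."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return stat, dof

# Rows: B1, B2, C1, C2; columns: Polish, Spanish
counts = [[0, 1], [0, 4], [3, 5], [18, 16]]
stat, dof = chi_square(counts)
print(round(stat, 3), dof)  # matches the reported chi2(3) = 5.144
```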

    <p>In Experiment 2, participants were classified as hearing,
    hard of hearing, or deaf. Before taking part in the study, those
    with hearing impairment completed a questionnaire about the severity
    of their hearing impairment, the age of onset, and their
    communication preferences, and were asked whether they described
    themselves as deaf or hard of hearing. They were also asked to
    indicate their education background (see Table 5). We recruited 27
    hearing, 10 hard of hearing, and 9 deaf participants. Of the deaf
    and hard of hearing participants, 7 were born deaf or hard of
    hearing, 4 lost their hearing under the age of 8, 2 between the
    ages of 9 and 17, and 6 between the ages of 18 and 40. Nine
    were profoundly deaf, 6 were severely deaf, and 4 had a moderate
    hearing loss. Seventeen of the deaf and hard of hearing participants
    preferred to use spoken English as their means of communication in
    the study, and 2 chose to use a British Sign Language interpreter.
    In relation to AVT, 84.2% stated that they often watch films in
    English with English subtitles; 78.9% declared they could not follow
    a film without subtitles; and 58% stated that they always or very
    often watch non-English films with English subtitles. Overall, the
    deaf and hard of hearing participants in our study were experienced
    subtitle users who relied on subtitles to follow audiovisual
    materials.</p>

<table-wrap id="t05" position="float">
					<label>Table 5.</label>
					<caption>
						<p>Education background of deaf and hard of hearing participants</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">

        <thead>
          <tr>
            <th></th>
            <th>Deaf</th>
            <th>Hard of hearing</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>GCSE/O-levels</td>
            <td>3</td>
            <td>1</td>
          </tr>
          <tr>
            <td>A-levels</td>
            <td>2</td>
            <td>4</td>
          </tr>
          <tr>
            <td>University level</td>
            <td>4</td>
            <td>5</td>
          </tr>
        </tbody>
      </table>
    </table-wrap>

    <p>In line with UCL hourly rates for experimental participants,
    hearing participants received £10 for their participation in the
    experiment. In recognition of the greater difficulty in recruiting
    special populations, hard of hearing and deaf participants were paid
    £25. Travel expenses were reimbursed as required.</p>
  </sec>

  <sec id="S2b">
    <title>Materials </title>

    <p>These comprised two self-contained 1-minute scenes from films
    featuring two people engaged in a conversation: one from
    <italic>Philomena</italic> (Desplat &#x26;  Frears, 2013) and one from
    <italic>Chef</italic> (Bespalov &#x26;  Favreau, 2014). The clips were
    dubbed into Hungarian – a language unknown to any of the
    participants and linguistically unrelated to their native languages.
    Subtitles were displayed in English, while the audio of the films
    was in Hungarian. Table 6 shows the number of linguistic units
    manipulated for each clip.</p>

<table-wrap id="t06" position="float">
					<label>Table 6.</label>
					<caption>
						<p>Number of instances manipulated for each type of
    linguistic unit</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">

        <thead>
          <tr>
            <th>Linguistic unit</th>
            <th>Chef</th>
            <th>Philomena</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>Auxiliary and lexical verb</td>
            <td>2</td>
            <td>2</td>
          </tr>
          <tr>
            <td>Subject and predicate</td>
            <td>3</td>
            <td>3</td>
          </tr>
          <tr>
            <td>Article and noun</td>
            <td>3</td>
            <td>3</td>
          </tr>
          <tr>
            <td>Conjunction between two clauses</td>
            <td>4</td>
            <td>5</td>
          </tr>
        </tbody>
      </table>
    </table-wrap>

    <p>Subtitles were prepared in two versions: syntactically segmented
    and non-syntactically segmented (see Table 7) (SS and NSS,
    respectively). The SS condition was prepared in accordance with
    professional subtitling standards, with linguistic phrases appearing
    on a single line. In the NSS version, syntactic phrases were split
    between the first and the second line of the subtitle. Both the SS
    and the NSS versions had identical time codes and contained exactly
    the same text. The clip from <italic>Philomena</italic> contained 16
    subtitles, of which 13 were manipulated for the purposes of the
    experiment; the clip from <italic>Chef</italic> contained 22
    subtitles, of which 12 were manipulated. Four types of
    linguistic units were manipulated in the NSS version of both clips
    (see Tables 6 and 7).</p>
    
    <p>Each participant watched two clips: one from
    <italic>Philomena</italic> and one from <italic>Chef</italic>; one
    in the SS and one in the NSS condition. The conditions were
    counterbalanced and their order of presentation was randomised using
    SMI Experiment Centre (see <xref ref-type="bibr" rid="b59">59</xref>).</p>
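    <p>The counterbalancing described above (each participant sees both
    clips, one per segmentation condition, with clip–condition pairings
    balanced across participants and presentation order randomised) can
    be sketched as follows. All names here are illustrative; the actual
    assignment was handled by SMI Experiment Centre:</p>

```python
# Sketch of the counterbalancing scheme: each participant watches
# both clips, one in the SS and one in the NSS condition, with the
# clip-condition pairing alternating across participants. Names are
# ours; the study used SMI Experiment Centre for randomisation.
import random

CLIPS = ["Philomena", "Chef"]
CONDITIONS = ["SS", "NSS"]

def assign_trials(participant_index, rng=None):
    """Return the two (clip, condition) trials for one participant."""
    rng = rng or random.Random(participant_index)
    # Alternate which clip gets the SS condition across participants.
    if participant_index % 2 == 0:
        trials = list(zip(CLIPS, CONDITIONS))
    else:
        trials = list(zip(CLIPS, reversed(CONDITIONS)))
    rng.shuffle(trials)  # randomise presentation order
    return trials

# Across any even-sized group of participants, each clip appears
# equally often in each segmentation condition.
```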

<table-wrap id="t07" position="float">
					<label>Table 7.</label>
					<caption>
						<p>Examples of line breaks in the SS and the NSS
              condition</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>

                  <tr>
                    <th>Linguistic unit</th>
                    <th>SS condition</th>
                    <th>NSS condition</th>
                  </tr>
                </thead>
                <tbody>
                  <tr>
                    <td>Auxiliary and lexical verb</td>
                    <td>Now, should we <underline>have served</underline></td>
                    <td>Now, should we <underline>have</underline></td>
                  </tr>
                  <tr>
                    <td></td>                  
                    <td>that sandwich?</td>
                    <td><underline>served</underline> that sandwich?</td>
                  </tr>
                  <tr>
                    <td>Subject and predicate</td>
                    <td>That's my son. Get back in there.</td>
                    <td>That's my son. Get back in there. <underline>We</underline></td>
                  </tr>
                  <tr>
                    <td></td>
                    <td><underline>We got</underline> some hungry people.</td>
                    <td><underline>got</underline> some hungry people.</td>
                  </tr>                  
                  <tr>
                    <td>Article and noun</td>
                    <td>I've loved <underline>the hotels</underline>,</td>
                    <td>I've loved <underline>the</underline></td>
                  </tr>
                  <tr>
                    <td></td>
                    <td>the food and everything,</td>
                    <td><underline>hotels</underline>, the food and everything,</td>
                  </tr>
                  <tr>                                   
                    <td>Conjunction between two clauses</td>
                    <td>Now I've made a decision</td>
                    <td>Now I've made a decision <underline>and</underline></td>
                  </tr>
                  <tr>                  
                    <td></td>
                    <td><underline>and</underline> my mind's made up.</td>
                    <td>my mind's made up.</td>
                  </tr>                  
                </tbody>
              </table>
            </table-wrap>
  </sec>

  <sec id="S2c">
    <title>Eye tracking recording </title>

    <p>An SMI RED 250 mobile eye tracker was used in the experiment.
    Participants’ eye movements were recorded at a sampling rate of
    250 Hz. The experiment was designed and conducted with the SMI
    software package Experiment Suite, using the velocity-based saccade
    detection algorithm. The minimum duration of a fixation was 80 ms.
    The analyses used SMI BeGaze and SPSS v. 24. Eighteen participants
    whose tracking ratio was below 80% were excluded from the eye
    tracking analyses (but not from the comprehension or cognitive load
    assessments).</p>
  </sec>

  <sec id="S2d">
    <title>Dependent variables </title>

    <p>The dependent variables were three indicators of cognitive load
    (difficulty, effort, and frustration), comprehension score, and five
    eye tracking measures.</p>
    
    <p>The following three indicators of cognitive load were measured
    using self-reports on a 1-7 scale: difficulty (“Was it difficult for
    you to read the subtitles in this clip?”, ranging from “very easy”
    to “very difficult”), effort (“Did you have to put a lot of effort
    into reading the subtitles in this clip?”, ranging from “very little
    effort” to “a lot of effort”), and frustration (“Did you feel
    annoyed when reading the subtitles in this clip?”, ranging from “not
    annoyed at all” to “very annoyed”).</p>
    
    <p>Comprehension was measured as the number of correct answers to a
    set of five questions per clip about the content, focussing on the
    information from the dialogue (not the visual elements). See
    Szarkowska &#x26; Gerber-Morón (<xref ref-type="bibr" rid="b59">59</xref>) for the details, including the
    exact formulations of the questions.</p>

    <p>Table 8 contains a description of the eye tracking measures. We
    drew individual areas of interest (AOIs) around each subtitle in each
    clip. All eye tracking data reported here come from the subtitle
    AOIs.</p>

<table-wrap id="t08" position="float">
					<label>Table 8.</label>
					<caption>
						<p>Description of the eye tracking measures</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">


                <thead>
                  <tr>
                    <th>Eye tracking measure</th>
                    <th>Description</th>
                  </tr>
                </thead>
                <tbody>
                  <tr>
                    <td>Absolute reading time</td>
                    <td>The sum of all fixation durations and saccade
                    durations, starting from the duration of the saccade
                    entering the AOI, referred to in SMI software as
                    ‘glance duration’. Longer time spent on reading may
                    be indicative of difficulties with extracting
                    information (<xref ref-type="bibr" rid="b17">17</xref>).</td>
                  </tr>
                  <tr>
                    <td>Proportional reading time</td>
                    <td>The percentage of dwell time (the sum of
                    durations of all fixations and saccades in an AOI
                    starting with the first fixation) a participant
                    spent in the AOI as a function of subtitle display
                    time. For example, if a subtitle was displayed for 3
                    seconds and the participant spent 2.5 seconds in
                    its AOI, the proportional reading time was
                    2500 ms / 3000 ms ≈ 83% (i.e. the participant was
                    looking at that subtitle for 83% of its display
                    time). Longer proportional time spent in the AOI
                    translates into less time available to follow the
                    on-screen action.</td>
                  </tr>
                  <tr>
                    <td>Mean fixation duration</td>
                    <td>The duration of a fixation in a subtitle AOI,
                    averaged per clip per participant. Longer mean
                    fixation duration may indicate more effortful
                    cognitive processing (<xref ref-type="bibr" rid="b17">17</xref>).</td>
                  </tr>
                  <tr>
                    <td>Fixation count</td>
                    <td>The number of fixations in the AOI, averaged per
                    clip per participant. Higher numbers of fixations
                    have been reported in poor readers (<xref ref-type="bibr" rid="b17">17</xref>).</td>
                  </tr>
                  <tr>
                    <td>Revisits</td>
                    <td>The number of glances a participant made to the
                    subtitle AOI after visiting the subtitle for the
                    first time. Revisits to the AOI may indicate
                    problems with processing, as people go back to the
                    AOI to re-read the text.</td>
                  </tr>
                </tbody>
              </table>
            </table-wrap>
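    <p>The proportional reading time defined in Table 8 is a simple
    ratio of dwell time to subtitle display time. A minimal sketch of
    the calculation (the function name is ours, not the SMI software
    API):</p>

```python
def proportional_reading_time(dwell_ms, display_ms):
    """Dwell time in the subtitle AOI as a fraction of the
    subtitle's on-screen display time."""
    if display_ms <= 0:
        raise ValueError("display time must be positive")
    return dwell_ms / display_ms

# The worked example from Table 8: 2.5 s of dwell time on a
# subtitle displayed for 3 s.
print(f"{proportional_reading_time(2500, 3000):.0%}")  # 83%
```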

  </sec>

  <sec id="S2e">
    <title>Procedure </title>

    <p>The study received full ethical approval from the UCL Research
    Ethics Committee. Participants were tested individually. They were
    informed they would take part in an eye tracking study on the
    quality of subtitles. The details of the experiment were not
    revealed until the debrief.</p>
    
    <p>After reading the information sheet and signing the informed
    consent form, each participant underwent a 9-point calibration
    procedure. There was a training session, whose results were not
    recorded. Its aim was to familiarise the participants with the
    experimental procedure and the type of questions that would be asked
    in the experiment (comprehension and cognitive load). Participants
    watched the clips with the sound on. After the test, participants’
    views on subtitle segmentation were elicited in a brief
    interview.</p>
    
    <p>Each experiment lasted approximately 90 minutes (including other tests
    not reported in this paper), depending on the time it took the
    participants to answer the questions and participate in the
    interview. Deaf participants had the option of either communicating
    via a British Sign Language interpreter or by using their preferred
    combination of spoken language, writing and lip-reading.</p>
  </sec>
</sec>

<sec id="S3">
  <title>Results </title>
  <sec id="S3a">
    <title><underline>Experiment 1</underline></title>

    <p>Seventy-four participants took part in this experiment: 27
    English, 21 Polish, 26 Spanish.</p>
  </sec>

    <sec id="S3b">
      <title>Cognitive load </title>

      <p>To examine whether subtitle segmentation affects viewers’
      cognitive load, we conducted a 2 x 3 mixed ANOVA on three
      indicators of cognitive load: difficulty, effort, and frustration,
      with segmentation as a within-subject independent variable (SS vs.
      NSS) and language (English, Polish, Spanish) as a between-subject
      factor. We found a main effect of segmentation on all three
      aspects of cognitive load, which were consistently higher in the
      NSS condition compared to the SS one (Table 9).</p>  

<table-wrap id="t09" position="float">
					<label>Table 9.</label>
					<caption>
						<p>Mean cognitive load indicators for different
                participant groups in Experiment 1</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
         

                  <thead>
                    <tr>
                      <th></th>

                      <th colspan="3">Language</th>
                      <th></th>
                      <th></th>
                      <th></th>
                      <th></th>

                    </tr>
                  </thead>
                  <tbody>
                    <tr>
                      <td></td>
                      <td>English</td>
                      <td>Polish</td>
                      <td>Spanish</td>
                      <td>df</td>
                      <td><italic>F</italic></td>
                      <td><italic>p</italic></td>
                      <td>𝜂<sub>p</sub>&#x00B2;</td>
                    </tr>
                    <tr>
                      <td>Difficulty</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td>1,71</td>
                      <td>15.584</td>
                      <td>&#x3C; .001*</td>
                      <td>.18</td>
                    </tr>
                    <tr>
                      <td>SS</td>
                      <td>2.37 (1.27)</td>
                      <td>2.05 (1.02)</td>
                      <td>1.96 (1.14)</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>NSS</td>
                      <td>2.63 (1.44)</td>
                      <td>2.67 (1.46)</td>
                      <td>3.42 (1.65)</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>Effort</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td>1,71</td>
                      <td>7.788</td>
                      <td>.007*</td>
                      <td>.099</td>
                    </tr>
                    <tr>
                      <td>SS</td>
                      <td>2.78 (1.55)</td>
                      <td>1.90 (1.26)</td>
                      <td>2.23 (1.50)</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>NSS</td>
                      <td>2.89 (1.60)</td>
                      <td>2.43 (1.16)</td>
                      <td>3.54 (2.10)</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>Frustration</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td>1,71</td>
                      <td>27.030</td>
                      <td>&#x3C; .001*</td>
                      <td>.276</td>
                    </tr>
                    <tr>
                      <td>SS</td>
                      <td>2.15 (1.40)</td>
                      <td>1.38 (.80)</td>
                      <td>1.62 (.89)</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>NSS</td>
                      <td>3.04 (1.85)</td>
                      <td>2.48 (1.91)</td>
                      <td>3.27 (2.07)</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                  </tbody>
                </table>
              </table-wrap>
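      <p>The partial eta squared values in Table 9 follow directly from
      each F statistic and its degrees of freedom via
      η<sub>p</sub>&#x00B2; = (F · df<sub>effect</sub>) /
      (F · df<sub>effect</sub> + df<sub>error</sub>). A minimal sketch
      of that conversion (the function name is ours):</p>

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared recovered from an F statistic and its
    degrees of freedom: F*df1 / (F*df1 + df2)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Reproducing the effect sizes in Table 9, all with df = (1, 71):
print(round(partial_eta_squared(15.584, 1, 71), 3))  # difficulty:  0.18
print(round(partial_eta_squared(7.788, 1, 71), 3))   # effort:      0.099
print(round(partial_eta_squared(27.030, 1, 71), 3))  # frustration: 0.276
```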

      <p>We also found an interaction between segmentation and language
      in the case of difficulty, <italic>F</italic>(2,71) = 3.494,
      <italic>p</italic> = .036, 𝜂<sub>p</sub>&#x00B2; = .090,
      which we decomposed with simple effects analyses (post-hoc tests
      with Bonferroni correction). We found a significant main effect of
      segmentation on the difficulty of reading subtitles among Spanish
      participants, <italic>F</italic>(1,25) = 19.161,
      <italic>p</italic> &#x3C; .001, 𝜂<sub>p</sub>&#x00B2; =
      .434. Segmentation did not have a statistically significant effect
      on the difficulty experienced by English participants,
      <italic>F</italic>(1,26) = .855, <italic>p</italic> = .364,
      𝜂<sub>p</sub>&#x00B2; = .032, or by Polish participants,
      <italic>F</italic>(1,20) = 2.147, <italic>p</italic> = .158,
      𝜂<sub>p</sub>&#x00B2; = .097. To recap, although
      cognitive load difficulty was reported to be higher by all
      participants in the NSS condition, the main effect of segmentation
      was statistically significant only for Spanish participants.</p>

      <p>We did not find any significant main effect of language on
      cognitive load (Table 10), which means that participants reported
      similar scores regardless of their linguistic background.</p>

<table-wrap id="t10" position="float">
					<label>Table 10.</label>
					<caption>
						<p>Between-subjects results for cognitive load</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">

          <thead>
            <tr>
              <th>Measure</th>
              <th>df</th>
              <th><italic>F</italic></th>
              <th><italic>p</italic></th>
              <th>𝜂<sub>p</sub>&#x00B2;</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Difficulty</td>
              <td>2,71</td>
              <td>.592</td>
              <td>.556</td>
              <td>.016</td>
            </tr>
            <tr>
              <td>Effort</td>
              <td>2,71</td>
              <td>2.382</td>
              <td>.100</td>
              <td>.063</td>
            </tr>
            <tr>
              <td>Frustration</td>
              <td>2,71</td>
              <td>1.850</td>
              <td>.165</td>
              <td>.050</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>
    </sec>

    <sec id="S3c">
      <title>Comprehension </title>

      <p>To see whether segmentation affects viewers’ performance, we
      conducted a 2 x 3 mixed ANOVA with segmentation (SS vs. NSS
      condition) as a within-subject factor and language (English,
      Polish, Spanish) as a between-subject factor. The dependent
      variable was comprehension score. There was no main effect of
      segmentation on comprehension,
      <italic>F</italic>(1,71) = .412, <italic>p</italic> = .523,
      𝜂<sub>p</sub>&#x00B2; = .006. Table 11 shows descriptive
      statistics for this analysis. There were no significant
      interactions.</p>

<table-wrap id="t11" position="float">
					<label>Table 11.</label>
					<caption>
						<p>Descriptive statistics for comprehension</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
    
          <thead>
            <tr>
              <th></th>
              <th>Language</th>
              <th>Mean</th>
              <th>(SD)</th>              
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Comprehension SS</td>
              <td>English</td>
              <td>4.11</td>
              <td>(1.01)</td>
            </tr>
            <tr>
              <td></td>
              <td>Polish</td>
              <td>4.48</td>
              <td>(.81)</td>
            </tr>            
            <tr>
              <td></td>
              <td>Spanish</td>
              <td>4.08</td>
              <td>(1.09)</td>              
            </tr>
            <tr>
              <td></td>
              <td>Total</td>
              <td>4.20</td>
              <td>(.99)</td>              
            </tr>            
            <tr>
              <td>Comprehension NSS</td>
              <td>English</td>
              <td>4.26</td>
              <td>(1.02)</td>
            </tr>
            <tr>
              <td></td>
              <td>Polish</td>
              <td>4.76</td>
              <td>(.43)</td>
            </tr>            
            <tr>
              <td></td>
              <td>Spanish</td>
              <td>3.88</td>
              <td>(1.21)</td>              
            </tr>
            <tr>
              <td></td>
              <td>Total</td>
              <td>4.27</td>
              <td>(1.02)</td>              
            </tr>
          </tbody>
        </table>
      </table-wrap>

      <p>We found a main effect of language on comprehension,
      <italic>F</italic>(2,71) = 3.563, <italic>p</italic> = .034,
      𝜂<sub>p</sub>&#x00B2; = .091. Pairwise comparisons with
      Bonferroni correction showed that Polish participants had
      significantly higher comprehension than Spanish participants,
      <italic>p</italic> = .031, 95% CI [.05, 1.23]. There was no
      difference between Polish and English participants, <italic>p</italic> = .224,
      95% CI [-.15, 1.02], or between Spanish and English participants, <italic>p</italic>
      = 1.00, 95% CI [-.76, .35].</p>
    </sec>

    <sec id="S3d">
      <title>Eye tracking measures </title>

      <p>Because of data quality issues, we excluded 8 participants from
      the original sample for the eye tracking analyses, leaving 22
      English, 19 Polish, and 25 Spanish participants. We found a significant main effect of
      segmentation on revisits to the subtitle area (Table 12).
      Participants went back to the subtitles more in the NSS condition
      (<italic>M<sub>NSS</sub></italic> = .37, <italic>SD</italic> =
      .25) compared to the SS one (<italic>M<sub>SS</sub></italic> =
      .25, <italic>SD</italic> = .22), implying potential parsing
      problems. There was no effect of segmentation for any other eye
      tracking measure (Table 12). There were no interactions.</p>

<table-wrap id="t12" position="float">
					<label>Table 12.</label>
					<caption>
						<p>Mean eye tracking measures by segmentation
                in Experiment 1</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">     

                  <thead>
                    <tr>
                      <th></th>
                      <th colspan="3">Language</th>
                      <th></th>
                      <th></th>
                      <th></th>
                      <th></th>
                    </tr>
                  </thead>
                  <tbody>
                    <tr>
                      <td></td>
                      <td>English</td>
                      <td>Polish</td>
                      <td>Spanish</td>
                      <td>df</td>
                      <td><italic>F</italic></td>
                      <td><italic>p</italic></td>
                      <td>𝜂<sub>p</sub>&#x00B2;</td>
                    </tr>
                    <tr>
                      <td>Absolute reading time (ms)</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td>1,63</td>
                      <td>2.950</td>
                      <td>.091</td>
                      <td>.045</td>
                    </tr>
                    <tr>
                      <td>SS</td>
                      <td>1614</td>
                      <td>1634</td>
                      <td>1856</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>NSS</td>
                      <td>1617</td>
                      <td>1529</td>
                      <td>1817</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>Proportional reading time</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td>1,63</td>
                      <td>2.128</td>
                      <td>.150</td>
                      <td>.033</td>
                    </tr>
                    <tr>
                      <td>SS</td>
                      <td>.65</td>
                      <td>.67</td>
                      <td>.76</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>NSS</td>
                      <td>.66</td>
                      <td>.62</td>
                      <td>.74</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>Mean fixation duration (ms)</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td>1,63</td>
                      <td>2.128</td>
                      <td>.906</td>
                      <td>.000</td>
                    </tr>
                    <tr>
                      <td>SS</td>
                      <td>209</td>
                      <td>194</td>
                      <td>214</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>NSS</td>
                      <td>211</td>
                      <td>187</td>
                      <td>218</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>Fixation count</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td>1,63</td>
                      <td>2.279</td>
                      <td>.136</td>
                      <td>.035</td>
                    </tr>
                    <tr>
                      <td>SS</td>
                      <td>6.41</td>
                      <td>6.68</td>
                      <td>7.27</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>NSS</td>
                      <td>6.45</td>
                      <td>6.42</td>
                      <td>6.95</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>Revisits</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td>1,63</td>
                      <td>11.839</td>
                      <td>.001*</td>
                      <td>.158</td>
                    </tr>
                    <tr>
                      <td>SS</td>
                      <td>.28</td>
                      <td>.27</td>
                      <td>.21</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>NSS</td>
                      <td>.39</td>
                      <td>.34</td>
                      <td>.36</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                  </tbody>
                </table>
              </table-wrap>

      <p>In relation to the between-subject factor, we found a main
      effect of language on absolute reading time, proportional reading
      time, mean fixation duration, and fixation count, but not on
      revisits (see Table 13).</p>

      <p>Post-hoc Bonferroni analyses showed that Spanish participants
      spent significantly more time in the subtitle area compared to
      English and Polish participants. This was shown by significantly
      longer absolute reading time in the case of Spanish participants
      compared to English, <italic>p</italic> = .027, 95% CI [19.20,
      422.73], and Polish participants, <italic>p</italic> = .012, 95%
      CI [44.61, 464.75]. Polish and English participants did not differ
      from each other in absolute reading time, <italic>p</italic>
      = 1.00, 95% CI [-249.88, 182.45]. There was a tendency approaching
      significance for fixation count to be higher among Spanish
      participants than English participants, <italic>p</italic> = .077,
      95% CI [-.05, 1.41]. Spanish participants also had a higher
      proportional reading time than English participants,
      <italic>p</italic> = .029, 95% CI [.007, .189], and Polish
      participants, <italic>p</italic> = .015, 95% CI [.01, .20]; that is,
      the Spanish participants spent the most time reading the subtitles
      while viewing the clips. Finally, Polish participants had a
      significantly shorter mean fixation duration compared to English,
      <italic>p</italic> = .041, 95% CI [-38.10, -.59], and Spanish participants,
      <italic>p</italic> = .003, 95% CI [-43.62, -7.16]. English and
      Spanish participants did not differ from each other in mean
      fixation duration, <italic>p</italic> = 1.00, 95% CI [-23.55,
      11.47].</p>

<table-wrap id="t13" position="float">
					<label>Table 13.</label>
					<caption>
						<p>ANOVA results for between-subject effects in Experiment 1</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">

          <thead>
            <tr>
              <th>Measure</th>
              <th>df</th>
              <th><italic>F</italic></th>
              <th><italic>p</italic></th>
              <th>𝜂<sub>p</sub>&#x00B2;</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Absolute reading time</td>
              <td>2,63</td>
              <td>5.593</td>
              <td>.006*</td>
              <td>.151</td>
            </tr>
            <tr>
              <td>Proportional reading time</td>
              <td>2,63</td>
              <td>5.398</td>
              <td>.007*</td>
              <td>.146</td>
            </tr>
            <tr>
              <td>Mean fixation duration</td>
              <td>2,63</td>
              <td>6.166</td>
              <td>.004*</td>
              <td>.164</td>
            </tr>
            <tr>
              <td>Fixation count</td>
              <td>2,63</td>
              <td>2.980</td>
              <td>.058</td>
              <td>.086</td>
            </tr>
            <tr>
              <td>Revisits</td>
              <td>2,63</td>
              <td>.332</td>
              <td>.719</td>
              <td>.010</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>

      <p>Overall, the results indicate that the processing of subtitles
      was least effortful for Polish participants and most effortful for
      Spanish participants.</p>
    </sec>

  <sec id="S3e">
    <title><underline>Experiment 2</underline></title>

    <p>A total of 46 participants (19 males, 27 females) took part in
    the experiment: 27 were hearing, 10 hard of hearing, and 9 deaf.</p>
    </sec>

    <sec id="S3f">
      <title>Cognitive load </title>

      <p>We conducted 2 x 3 mixed ANOVAs on each indicator of cognitive
      load with segmentation (SS vs. NSS) as a within-subject variable
      and degree of hearing loss (hearing, hard of hearing, deaf) as a
      between-subject variable.</p>

        
          <p>Similarly to Experiment 1, we found a significant main effect
      of segmentation on difficulty, effort, and frustration (Table 14).
      The NSS subtitles induced higher cognitive load than the SS
      subtitles in all groups of participants. There were no interactions.</p>

<table-wrap id="t14" position="float">
					<label>Table 14.</label>
					<caption>
						<p>Mean cognitive load indicators for
                different participant groups in Experiment 2</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>        

							<tr>
								<td></td>
								<td colspan="3">Degree of hearing loss</td>
								<td></td>
								<td></td>
								<td></td>
								<td></td>                                                                
 							</tr>
							<tr>
								<td></td>
								<td>Hearing</td>
								<td>Hard of hearing</td>
								<td>Deaf</td>                
								<td>df</td>
								<td><italic>F</italic></td>
								<td><italic>p</italic></td>
								<td>𝜂<sub>p</sub>&#x00B2;</td> 
 							</tr>
 							<tr>
								<td></td>
								<td>M (SD)</td>
								<td>M (SD)</td>
								<td>M (SD)</td>
								<td></td>
								<td></td>    
								<td></td>
								<td></td>                                                                            
 							</tr>                              
						</thead>  
						<tbody>
							<tr>
 								<td>Difficulty</td>
								<td></td>
								<td></td>
								<td></td>              
								<td>1,43</td>
								<td>6.580</td>
								<td>.014*</td>
								<td>.133</td> 
 							</tr>
 							<tr>              
								<td>SS</td>
								<td>2.37 (1.27)</td>
								<td>1.60 (1.07)</td>
								<td>2.56 (1.42)</td> 
								<td></td>
								<td></td>
								<td></td>
								<td></td> 
 							</tr>             
 							<tr>              
								<td>NSS</td>
								<td>2.63 (1.44)</td>
								<td>2.20 (1.31)</td>
								<td>3.44 (1.59)</td> 
								<td></td>
								<td></td>
								<td></td>
								<td></td> 
 							</tr> 
							<tr>
 								<td>Effort</td>
								<td></td>
								<td></td>
								<td></td>              
								<td>1,43</td>
								<td>4.372</td>
								<td>.042*</td>
								<td>.092</td> 
 							</tr>           
 							<tr>              
								<td>SS</td>
								<td>2.78 (1.55)</td>
								<td>1.60 (1.07)</td>
								<td>2.78 (1.64)</td> 
								<td></td>
								<td></td>
								<td></td>
								<td></td> 
 							</tr>
  						<tr>              
								<td>NSS</td>
								<td>2.89 (1.60)</td>
								<td>2.50 (1.35)</td>
								<td>3.44 (1.42)</td> 
								<td></td>
								<td></td>
								<td></td>
								<td></td> 
 							</tr>             
							<tr>
 								<td>Frustration</td>
								<td></td>
								<td></td>
								<td></td>              
								<td>1,43</td>
								<td>7.669</td>
								<td>.008*</td>
								<td>.151</td> 
 							</tr> 
							<tr>               
								<td>SS</td>
								<td>2.15 (1.40)</td>
								<td>1.00 (.00)</td>
								<td>2.56 (1.59)</td> 
								<td></td>
								<td></td>
								<td></td>
								<td></td> 
 							</tr>
  						<tr>              
								<td>NSS</td>
								<td>3.04 (1.85)</td>
								<td>2.10 (1.28)</td>
								<td>3.00 (1.58)</td> 
								<td></td>
								<td></td>
								<td></td>
								<td></td> 
 							</tr> 
          </tbody>
        </table>
      </table-wrap>
      
      <p>There was no main effect of hearing loss on difficulty,
      <italic>F</italic>(2,43) = 2.100, <italic>p</italic> = .135,
      𝜂<sub>p</sub>&#x00B2; = .089 or on effort,
      <italic>F</italic>(2,43) = 1.932, <italic>p</italic> = .157,
      𝜂<sub>p</sub>&#x00B2; = .082, but there was an effect
      approaching significance on frustration, <italic>F</italic>(2,43) =
      3.100, <italic>p</italic> = .052, 𝜂<sub>p</sub>&#x00B2; =
      .129. Post-hoc tests showed a result approaching significance:
      hard of hearing participants reported lower frustration levels
      than hearing participants, <italic>p</italic> = .079, 95% CI
      [-2.17, .09]. In general, the lowest cognitive load was reported
      by hard of hearing participants.</p>
    </sec>

    <sec id="S3g">
      <title>Comprehension </title>

      <p>Expecting that non-syntactic segmentation would negatively
      affect comprehension, we conducted a 2 x 3 mixed ANOVA on
      segmentation (SS vs. NSS) and degree of hearing loss (hearing,
      hard of hearing, and deaf).</p>

<table-wrap id="t15" position="float">
					<label>Table 15.</label>
					<caption>
						<p>Descriptive statistics for comprehension in Experiment 2</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">

          <thead>
            <tr>
              <th></th>
              <th>Degree of hearing loss</th>
              <th>Mean (SD)</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Comprehension SS</td>
              <td>Hearing</td>
              <td>4.11 (1.01)</td>
            </tr>
            <tr>
              <td></td>
              <td>Hard of hearing</td>
              <td>4.60 (.51)</td>
            </tr>
            <tr>
              <td></td>
              <td>Deaf</td>
              <td>4.00 (.70)</td>
            </tr>
            <tr>
              <td></td>
              <td>Total</td>
              <td>4.20 (.88)</td>
            </tr>
            <tr>
              <td>Comprehension NSS</td>
              <td>Hearing</td>
              <td>4.26 (1.02)</td>
            </tr>
            <tr>
              <td></td>
              <td>Hard of hearing</td>
              <td>4.50 (.70)</td>
            </tr>
            <tr>
              <td></td>
              <td>Deaf</td>
              <td>3.44 (1.23)</td>
            </tr>
            <tr>
              <td></td>
              <td>Total</td>
              <td>4.15 (1.05)</td>
            </tr>
          </tbody>
        </table>
					<table-wrap-foot>
						<fn id="FN1">
						<p>Note: Maximum score was 5.</p>
						</fn>
					</table-wrap-foot>        
      </table-wrap>

      <p>Contrary to our predictions, and similarly to Experiment 1, we
      found no main effect of segmentation on comprehension,
      <italic>F</italic>(1,43) = .713, <italic>p</italic> = .403,
      𝜂<sub>p</sub>&#x00B2; = .016. There were no
      interactions.</p>

      <p>As for between-subject effects, we found a marginally
      significant main effect of hearing loss on comprehension,
      <italic>F</italic>(2,43) = 3.061, <italic>p</italic> = .057,
      𝜂<sub>p</sub>&#x00B2; = .125. The highest comprehension
      scores were obtained by hard of hearing participants and the
      lowest by deaf participants (Table 15). Post-hoc analyses with
      Bonferroni correction showed that deaf participants differed from
      hard of hearing participants, <italic>p</italic> = .053, 95% CI
      [-1.66, .01].</p>
    </sec>

    <sec id="S3h">
      <title>Eye tracking measures </title>

      <p>Due to problems with calibration, 10 participants had to be
      excluded from eye tracking analyses, leaving a total of 22
      hearing, 8 hard of hearing, and 6 deaf participants.</p>

      <p>To examine whether the non-syntactically segmented text
      resulted in longer reading times, more revisits, and a higher mean
      fixation duration, we conducted an analogous mixed ANOVA. We found
      no main effect of segmentation on any of the eye tracking measures
      (Table 16), but several interactions between segmentation and
      degree of hearing loss: in absolute reading time, <italic>F</italic>(2,33) =
      4.205, <italic>p</italic> = .024, 𝜂<sub>p</sub>&#x00B2; =
      .203; proportional reading time, <italic>F</italic>(2,33) = 4.912,
      <italic>p</italic> = .014, 𝜂<sub>p</sub>&#x00B2; = .229;
      fixation count, <italic>F</italic>(2,33) = 3.992,
      <italic>p</italic> = .028, 𝜂<sub>p</sub>&#x00B2; = .195;
      and revisits, <italic>F</italic>(2,33) = 6.572, <italic>p</italic>
      = .004, 𝜂<sub>p</sub>&#x00B2; = .285.</p>

<table-wrap id="t16" position="float">
					<label>Table 16.</label>
					<caption>
						<p>Mean eye tracking measures by segmentation in Experiment 2</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>        

							<tr>
								<td></td>
								<td colspan="3">Hearing loss</td>
								<td></td>
								<td></td>
								<td></td>
								<td></td>                                                                
 							</tr>
							<tr>
								<td></td>
								<td>Hearing</td>
								<td>Hard of hearing</td>
								<td>Deaf</td>                
								<td>df</td>
								<td><italic>F</italic></td>
								<td><italic>p</italic></td>
								<td>𝜂<sub>p</sub>&#x00B2;</td> 
 							</tr>
 						</thead>
						<tbody>              
                    <tr>
                      <td>Absolute reading time (ms)</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td>1,33</td>
                      <td>1.752</td>
                      <td>.195</td>
                      <td>.050</td>
                    </tr>
                    <tr>
                      <td>SS</td>
                      <td>1614</td>
                      <td>1619</td>
                      <td>1222</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>NSS</td>
                      <td>1617</td>
                      <td>1519</td>
                      <td>1522</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>Proportional reading time</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td>1,33</td>
                      <td>2.270</td>
                      <td>.141</td>
                      <td>.064</td>
                    </tr>
                    <tr>
                      <td>SS</td>
                      <td>.65</td>
                      <td>.66</td>
                      <td>.45</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>NSS</td>
                      <td>.66</td>
                      <td>.61</td>
                      <td>.62</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>Mean fixation duration (ms)</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td>1,33</td>
                      <td>.199</td>
                      <td>.659</td>
                      <td>.006</td>
                    </tr>
                    <tr>
                      <td>SS</td>
                      <td>209</td>
                      <td>199</td>
                      <td>214</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>NSS</td>
                      <td>211</td>
                      <td>185</td>
                      <td>219</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>Fixation count</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td>1,33</td>
                      <td>2.686</td>
                      <td>.111</td>
                      <td>.075</td>
                    </tr>
                    <tr>
                      <td>SS</td>
                      <td>6.41</td>
                      <td>6.73</td>
                      <td>4.63</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>NSS</td>
                      <td>6.45</td>
                      <td>6.45</td>
                      <td>5.90</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>Revisits</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td>1,33</td>
                      <td>.352</td>
                      <td>.557</td>
                      <td>.011</td>
                    </tr>
                    <tr>
                      <td>SS</td>
                      <td>.28</td>
                      <td>.20</td>
                      <td>.45</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                    <tr>
                      <td>NSS</td>
                      <td>.39</td>
                      <td>.30</td>
                      <td>.15</td>
                      <td></td>
                      <td></td>
                      <td></td>
                      <td></td>
                    </tr>
                  </tbody>
                </table>
              </table-wrap>              


      <p>We broke down the interactions with simple-effects analyses by
      means of post-hoc tests using Bonferroni correction. In the deaf
      group, we found an effect of segmentation on revisits approaching
      significance, <italic>F</italic>(1,5) = 5.934, <italic>p</italic>
      = .059, 𝜂<sub>p</sub>&#x00B2; = .543. Deaf participants
      had more revisits in the SS condition than in the NSS one,
      <italic>p</italic> = .059. They also had a higher absolute reading
      time, proportional reading time, and fixation count in the NSS
      compared to the SS condition, but possibly owing to the small
      sample size, these differences did not reach statistical
      significance. In the hard of hearing group, there was no
      significant main effect of segmentation on any of the eye tracking
      measures (<italic>ps</italic> &#x3E; .05). In the hearing group,
      there was no statistically significant main effect of segmentation
      (all <italic>ps</italic> &#x3E; .05).</p>

      <p>A between-subject analysis showed a main effect of degree of
      hearing loss on fixation count approaching significance,
      <italic>F</italic>(2,33) = 3.204, <italic>p</italic> = .054,
      𝜂<sub>p</sub>&#x00B2; = .163. Deaf participants had fewer
      fixations per subtitle compared to hard of hearing participants,
      <italic>p</italic> = .088, 95% CI [-2.79, .14], and hearing
      participants, <italic>p</italic> = .076, 95% CI [-2.41, .08]. No
      other measures were significant.</p>
    </sec>

    <sec id="S3i">
      <title>Interviews </title>

      <p>Following the eye tracking tests, we conducted short
      semi-structured interviews to elicit participants’ views on
      subtitle segmentation, complementing the quantitative part of the
      study (<xref ref-type="bibr" rid="b5">5</xref>). We used inductive coding to identify themes
      reported by participants. Several Spanish, Polish, and deaf
      participants said that keeping units of meaning together
      contributed to the readability of subtitles because by creating
      false expectations (i.e. “garden path” sentences), NSS line-breaks
      can require more effort to process. These participants believed
      that chunking text by phrases according to “natural thoughts”
      allowed subtitles to be read quickly. In contrast, other
      participants said that NSS subtitles gave them a sense of
      continuity in reading the subtitles. A third theme in relation to
      dealing with SS and NSS subtitles was that participants adapted
      their reading strategies to different types of line-breaks.
      Finally, a number of people also admitted they had not noticed any
      differences in the subtitle segmentation between the clips, saying
      they had never paid any attention to subtitle segmentation.</p>
    </sec>
  </sec>

<sec id="S4">
  <title>Discussion </title>

  <p>The two experiments reported in this paper examined the impact of
  text segmentation in subtitles on cognitive load and reading
  performance. We also investigated whether viewers’ linguistic
  background (native language and hearing status) impacts on how they
  process syntactically and non-syntactically segmented subtitles.
  Drawing on the large body of literature on text segmentation in
  subtitling (<xref ref-type="bibr" rid="b12 b18 b44 b45 b48">12, 18, 44, 45, 48</xref>) and literature on
  parsing and text chunking during reading (<xref ref-type="bibr" rid="b22 b23 b33 b39 b40 b51">22, 23, 33, 39, 40, 51</xref>), we predicted that subtitle
  reading would be adversely affected by non-syntactic segmentation.</p>

  <p>This prediction was partly upheld. One of the most important
  findings of this study is that participants reported higher cognitive
  load with non-syntactically segmented (NSS) subtitles compared to
  syntactically segmented (SS) ones. In both experiments, mental effort,
  difficulty, and frustration were reported as higher in the NSS
  condition. A possible explanation of this finding may be that NSS text
  increases extraneous load, i.e. the type of cognitive load related to
  the way information is presented (<xref ref-type="bibr" rid="b58">58</xref>). Given the
  limitations of working memory capacity (<xref ref-type="bibr" rid="b4 b8">4, 8</xref>), NSS text may leave viewers less capacity to process the remaining
  visual, auditory, and textual information. This, in turn, would
  increase their frustration, make them expend more effort, and lead them
  to perceive the task as more difficult.</p>

  <p>Although cognitive load was consistently higher in the
  NSS condition in all participant groups, the mean
  differences between the two conditions were not substantial, and
  thus the effect sizes are not large. We believe the small effect sizes
  may stem from the fact that the clips used in this study were quite
  short. As cognitive fatigue increases with the length of the task,
  accompanied by a simultaneous decline in performance (<xref ref-type="bibr" rid="b1 b53 b64">1, 53, 64</xref>), we might expect that in longer clips
  with non-syntactically segmented subtitles, the cognitive load would
  accumulate over time, resulting in more prominent mean differences
  between the two conditions. We acknowledge that the short duration of
  clips, necessitated by the length of the entire experiment, is an
  important limitation of this study. However, a number of previous
  studies on subtitling have also used very short clips (<xref ref-type="bibr" rid="b19 b20 b48 b52">19, 20, 48, 52</xref>). In this study, we only examined text
  segmentation within a single subtitle; further research should also
  explore the effects of non-syntactic segmentation across two or more
  consecutive subtitles, where the impact of NSS subtitles on cognitive
  load may be even higher.</p>

  <p>Despite the higher cognitive load and contrary to our predictions,
  we found no evidence that subtitles which are not segmented in
  accordance with professional standards result in lower comprehension.
  Participants coped well in both conditions, achieving similar
  comprehension scores regardless of segmentation. This finding is in
  line with the results reported by Perego et al. (<xref ref-type="bibr" rid="b46">46</xref>) for Italian
  participants, namely that subtitles containing non-syntactically segmented
  noun phrases did not negatively affect participants’ comprehension.
  Our research extends these findings to other linguistic units in
  English (verb phrases and conjunctions as well as noun phrases) and
  other groups of participants (hearing English, Polish, and Spanish
  speakers, as well as deaf and hard of hearing participants). The
  finding that performance in processing NSS text is not negatively
  affected despite the participants’ extra effort (as shown by increased
  cognitive load) may be attributed to the short duration of the clips
  and also to overall high comprehension scores. As the clips were
  short, there were limited points that could be included in the
  comprehension questions. Other likely reasons for the lack of
  significant differences between the two conditions are the extensive
  experience that all the participants had of using subtitles in the UK,
  and the possibility that participants have become accustomed to subtitling not
  adhering to professional segmentation standards. Our sample of
  participants was also relatively well-educated, which may have been a
  reason for their comprehension scores being near ceiling. Furthermore,
  as noted by Mitchell (<xref ref-type="bibr" rid="b40">40</xref>), when interpreting the syntactic structure
  of sentences in reading, people use non-lexical cues such as text
  layout or punctuation as parsing aids, although these cues are of
  secondary importance when compared to words, which constitute “the
  central source of information” (p. 123). This is also consistent with
  what the participants in our study reported in the interviews. For
  example, one deaf participant said: “Line breaks have their value, yet
  when you are reading fast, most of the time it becomes less
  relevant.”</p>

  <p>In addition to understanding the effects of segmentation on
  subtitle processing, this study also found interesting results
  relating to differences in subtitle processing between the different
  groups of viewers. In Experiment 1, Spanish participants had the
  highest cognitive load and lowest comprehension, and spent more time
  reading subtitles than Polish and English participants. Although it is
  impossible to attribute these findings unequivocally to Spanish
participants coming from a dubbing country, this finding may be
related to their experience of having grown up with more exposure to
dubbing than to subtitling. In Experiment 2, we found that subtitle processing was the
  least effortful for the hard of hearing group: they reported the
  lowest cognitive effort and had the highest comprehension score. This
  result may be attributed to their high familiarity with subtitling (as
  declared in the pre-test questionnaire) compared to the hearing group.
  Although no data were obtained for the groups in Experiment 2 in
  relation to English literacy measures, as a group, individuals born
  deaf or deafened early in life have low average reading ages, and more
  effortful processing by the deaf group may be related to lower
  literacy.</p>

  <p>Different viewers adopt different strategies to cope with reading
  NSS subtitles. In the case of hearing participants, there were more
  revisits to the subtitle area for NSS subtitles, which is a likely
  indication of parsing difficulties (<xref ref-type="bibr" rid="b51">51</xref>). In the group
  of participants with hearing loss, deaf people spent more time reading
  NSS subtitles than SS ones. Given that longer reading time may
  indicate difficulty in extracting information (<xref ref-type="bibr" rid="b17">17</xref>), this may also be taken to reflect parsing problems. This
  interpretation is also in accordance with the longer durations of
  fixations in the deaf group, which is another indicator of processing
  difficulties (<xref ref-type="bibr" rid="b17 b49">17, 49</xref>). Unlike the
  findings of other studies (<xref ref-type="bibr" rid="b26 b60 b61">26, 60, 61</xref>), in this study, deaf
  participants fixated less on the subtitles than hard of hearing and
  hearing participants. Our results, however, are in line with a recent
  eye tracking study (<xref ref-type="bibr" rid="b38">38</xref>), where deaf people also had
fewer fixations than hearing viewers. According to Miquel
  Iriarte (<xref ref-type="bibr" rid="b38">38</xref>), deaf viewers relate to the visual information on the
  screen as a whole to a greater extent than hearing viewers, reading
  the subtitles faster to give them more time to direct their attention
  towards the visual narrative.</p>
</sec>

<sec id="S5">
  <title>Conclusions </title>

  <p>Our study has shown that text segmentation influences the
  processing of subtitled videos: non-syntactically segmented subtitles
  may increase viewers’ cognitive load and the number of eye movements
  they make. This was particularly noticeable for Spanish and deaf
  participants. Using syntactic segmentation may facilitate the reading
  of subtitles, thus enhancing the viewing experience by giving viewers
  more time to follow the visual narrative of the film.
  Further research is necessary to disentangle the impact of the
  viewers’ country of origin, familiarity with subtitling, reading
  skills, and language proficiency on subtitle processing.</p>

  <p>This study also provides support for the need to base subtitling
  guidelines on research evidence, particularly in view of the
  tremendous expansion of subtitling across different media and formats.
  The results are directly applicable to current practices in television
  broadcasting and video-on-demand services. They can also be adopted
  in subtitle personalization to improve algorithms for automatic
  subtitle display, thereby facilitating subtitle processing for the
  many different groups of viewers who use subtitles.</p>
</sec>

  <sec id="S6" sec-type="COI-statement">
    <title>Ethics and Conflict of Interest </title>

    <p>The author(s) declare(s) that the contents of the article are in
    agreement with the ethics described in
    <ext-link ext-link-type="uri" xlink:href="http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html" xlink:show="new">http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html</ext-link>
     and that there is no conflict of interest
    regarding the publication of this paper.</p>
  </sec>

  <sec id="S7">
    <title>Acknowledgements</title>

    <p>The research reported here has been supported by a grant from the
    European Union’s Horizon 2020 research and innovation programme
    under the Marie Skłodowska-Curie Grant Agreement No. 702606, “La
    Caixa” Foundation (E-08-2014-1306365) and Transmedia Catalonia
    Research Group (2017SGR113).</p>
    
    <p>Many thanks to Pilar Orero and Gert Vercauteren for their
    comments on an earlier version of the manuscript.</p>
  </sec>

  </body>
<back>
<ref-list>
<ref id="b1"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Ackerman</surname>, <given-names>P. L.</given-names></name>, &#x26;  <name><surname>Kanfer</surname>, <given-names>R.</given-names></name></person-group> (<year>2009</year>). <article-title>Test length and cognitive fatigue: An empirical examination of effects on performance and test-taker reactions.</article-title> <source>Journal of Experimental Psychology. Applied</source>, <volume>15</volume>(<issue>2</issue>), <fpage>163</fpage>–<lpage>181</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1037/a0015719</pub-id><pub-id pub-id-type="pmid">19586255</pub-id><issn>1076-898X</issn></mixed-citation></ref>
<ref id="b2"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><name><surname>Allen</surname>, <given-names>P.</given-names></name>, <name><surname>Garman</surname>, <given-names>J.</given-names></name>, <name><surname>Calvert</surname>, <given-names>I.</given-names></name>, &#x26;  <name><surname>Murison</surname>, <given-names>J.</given-names></name></person-group> (<year>2011</year>). <article-title>Reading in multimodal environments: assessing legibility and accessibility of typography for television.</article-title> In The proceedings of the 13th international ACM SIGACCESS conference on computers and accessibility (pp. 275–276). https://doi.org/<pub-id pub-id-type="doi" specific-use="author">10.1145/2049536.2049604</pub-id></mixed-citation></ref>
<ref id="b3"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Antonenko</surname>, <given-names>P.</given-names></name>, <name><surname>Paas</surname>, <given-names>F.</given-names></name>, <name><surname>Grabner</surname>, <given-names>R.</given-names></name>, &#x26;  <name><surname>van Gog</surname>, <given-names>T.</given-names></name></person-group> (<year>2010</year>). <article-title>Using Electroencephalography to Measure Cognitive Load.</article-title> <source>Educational Psychology Review</source>, <volume>22</volume>(<issue>4</issue>), <fpage>425</fpage>–<lpage>438</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1007/s10648-010-9130-y</pub-id><issn>1040-726X</issn></mixed-citation></ref>
<ref id="b4"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Baddeley</surname>, <given-names>A.</given-names></name></person-group> (<year>2007</year>). <source>Working memory, thought, and action</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>. <pub-id pub-id-type="doi">10.1093/acprof:oso/9780198528012.001.0001</pub-id></mixed-citation></ref>
<ref id="b5"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Bazeley</surname>, <given-names>P.</given-names></name></person-group> (<year>2013</year>). <source>Qualitative Data</source>. <publisher-loc>Los Angeles</publisher-loc>: <publisher-name>Sage</publisher-name>.</mixed-citation></ref>
<ref id="b6"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><collab>BBC</collab></person-group>. (<year>2017</year>). <article-title><italic>BBC subtitle guidelines</italic>.</article-title> London: The British Broadcasting Corporation. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://bbc.github.io/subtitleguidelines/">http://bbc.github.io/subtitleguidelines/</ext-link></mixed-citation></ref>
<ref id="b7"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Bisson</surname>, <given-names>M.-J.</given-names></name>, <name><surname>Van Heuven</surname>, <given-names>W. J. B.</given-names></name>, <name><surname>Conklin</surname>, <given-names>K.</given-names></name>, &#x26;  <name><surname>Tunney</surname>, <given-names>R. J.</given-names></name></person-group> (<year>2012</year>). <article-title>Processing of native and foreign language subtitles in films: An eye tracking study.</article-title> <source>Applied Psycholinguistics</source>, <volume>35</volume>(<issue>02</issue>), <fpage>399</fpage>–<lpage>418</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1017/s0142716412000434</pub-id> <pub-id pub-id-type="doi">10.1017/S0142716412000434</pub-id><issn>0142-7164</issn></mixed-citation></ref>
<ref id="b8"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Chandler</surname>, <given-names>P.</given-names></name>, &#x26;  <name><surname>Sweller</surname>, <given-names>J.</given-names></name></person-group> (<year>1991</year>). <article-title>Cognitive Load Theory and the Format of Instruction.</article-title> <source>Cognition and Instruction</source>, <volume>8</volume>(<issue>4</issue>), <fpage>293</fpage>–<lpage>332</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1207/s1532690xci0804_2</pub-id><issn>0737-0008</issn></mixed-citation></ref>
<ref id="b9"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>d’Ydewalle</surname>, <given-names>G.</given-names></name>, &#x26;  <name><surname>De Bruycker</surname>, <given-names>W.</given-names></name></person-group> (<year>2007</year>). <article-title>Eye Movements of Children and Adults While Reading Television Subtitles.</article-title> <source>European Psychologist</source>, <volume>12</volume>(<issue>3</issue>), <fpage>196</fpage>–<lpage>205</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1027/1016-9040.12.3.196</pub-id><issn>1016-9040</issn></mixed-citation></ref>
<ref id="b10"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>d’Ydewalle</surname>, <given-names>G.</given-names></name>, <name><surname>Praet</surname>, <given-names>C.</given-names></name>, <name><surname>Verfaillie</surname>, <given-names>K.</given-names></name>, &#x26;  <name><surname>Van Rensbergen</surname>, <given-names>J.</given-names></name></person-group> (<year>1991</year>). <article-title>Watching Subtitled Television: Automatic Reading Behavior.</article-title> <source>Communication Research</source>, <volume>18</volume>(<issue>5</issue>), <fpage>650</fpage>–<lpage>666</lpage>. <pub-id pub-id-type="doi">10.1177/009365091018005005</pub-id><issn>0093-6502</issn></mixed-citation></ref>
<ref id="b11"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>D’Ydewalle</surname>, <given-names>G.</given-names></name>, <name><surname>Van Rensbergen</surname>, <given-names>J.</given-names></name>, &#x26;  <name><surname>Pollet</surname>, <given-names>J.</given-names></name></person-group> (<year>1987</year>). <chapter-title>Reading a message when the same message is available auditorily in another language: the case of subtitling</chapter-title>. In <person-group person-group-type="editor"><name><given-names>J. K.</given-names> <surname>O’Regan</surname></name> &#x26;  <name><given-names>A.</given-names> <surname>Levy-Schoen</surname></name> (<role>Eds.</role>),</person-group> <source>Eye movements: from physiology to cognition</source> (pp. <fpage>313</fpage>–<lpage>321</lpage>). <publisher-loc>Amsterdam, New York</publisher-loc>: <publisher-name>Elsevier</publisher-name>. <pub-id pub-id-type="doi">10.1016/B978-0-444-70113-8.50047-3</pub-id></mixed-citation></ref>
<ref id="b12"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Díaz Cintas</surname>, <given-names>J.</given-names></name>, &#x26;  <name><surname>Remael</surname>, <given-names>A.</given-names></name></person-group> (<year>2007</year>). <source>Audiovisual translation: subtitling</source>. <publisher-loc>Manchester</publisher-loc>: <publisher-name>St. Jerome</publisher-name>.</mixed-citation></ref>
<ref id="b13"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Doherty</surname>, <given-names>S.</given-names></name>, &#x26;  <name><surname>Kruger</surname>, <given-names>J.-L.</given-names></name></person-group> (<year>2018</year>). <chapter-title>The development of eye tracking in empirical research on subtitling and captioning</chapter-title>. In <person-group person-group-type="editor"><name><given-names>T.</given-names> <surname>Dwyer</surname></name>, <name><given-names>C.</given-names> <surname>Perkins</surname></name>, <name><given-names>S.</given-names> <surname>Redmond</surname></name>, &#x26;  <name><given-names>J.</given-names> <surname>Sita</surname></name> (<role>Eds.</role>),</person-group> <source>Seeing into Screens: Eye Tracking and the Moving Image</source> (pp. <fpage>46</fpage>–<lpage>64</lpage>). <publisher-loc>London</publisher-loc>: <publisher-name>Bloomsbury</publisher-name>. <pub-id pub-id-type="doi">10.5040/9781501329012.0009</pub-id></mixed-citation></ref>
<ref id="b14"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Frazier</surname>, <given-names>L.</given-names></name></person-group> (<year>1979</year>). <source>On comprehending sentences: syntactic parsing strategies</source>. <publisher-loc>Storrs</publisher-loc>: <publisher-name>University of Connecticut</publisher-name>.</mixed-citation></ref>
<ref id="b15"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Gernsbacher</surname>, <given-names>M. A.</given-names></name></person-group> (<year>2015</year>). <article-title>Video Captions Benefit Everyone.</article-title> <source>Policy Insights from the Behavioral and Brain Sciences</source>, <volume>2</volume>(<issue>1</issue>), <fpage>195</fpage>–<lpage>202</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1177/2372732215602130</pub-id><pub-id pub-id-type="pmid">28066803</pub-id><issn>2372-7322</issn></mixed-citation></ref>
<ref id="b16"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Hart</surname>, <given-names>S. G.</given-names></name>, &#x26;  <name><surname>Staveland</surname>, <given-names>L. E.</given-names></name></person-group> (<year>1988</year>). <chapter-title>Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research</chapter-title>. In <person-group person-group-type="editor"><name><given-names>P. A.</given-names> <surname>Hancock</surname></name> &#x26;  <name><given-names>N.</given-names> <surname>Meshkati</surname></name> (<role>Eds.</role>),</person-group> <source>Human mental workload</source> (pp. <fpage>139</fpage>–<lpage>183</lpage>). <publisher-loc>Amsterdam</publisher-loc>: <publisher-name>North-Holland</publisher-name>. <pub-id pub-id-type="doi">10.1016/S0166-4115(08)62386-9</pub-id></mixed-citation></ref>
<ref id="b17"><mixed-citation publication-type="book" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Holmqvist</surname>, <given-names>K.</given-names></name>, <name><surname>Nyström</surname>, <given-names>M.</given-names></name>, <name><surname>Andersson</surname>, <given-names>R.</given-names></name>, <name><surname>Dewhurst</surname>, <given-names>R.</given-names></name>, <name><surname>Jarodzka</surname>, <given-names>H.</given-names></name>, &#x26;  <name><surname>van de Weijer</surname>, <given-names>J.</given-names></name></person-group> (<year>2011</year>). <source><italic>Eye tracking: a comprehensive guide to methods and measures</italic>.</source> Oxford: Oxford University Press.</mixed-citation></ref>
<ref id="b18"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Ivarsson</surname>, <given-names>J.</given-names></name>, &#x26;  <name><surname>Carroll</surname>, <given-names>M.</given-names></name></person-group> (<year>1998</year>). <source>Subtitling</source>. <publisher-loc>Simrishamn</publisher-loc>: <publisher-name>TransEdit HB</publisher-name>.</mixed-citation></ref>
<ref id="b19"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Jensema</surname>, <given-names>C.</given-names></name></person-group> (<year>1998</year>). <article-title>Viewer reaction to different television captioning speeds</article-title>. <source>American Annals of the Deaf</source>, <volume>143</volume>(<issue>4</issue>), <fpage>318</fpage>–<lpage>324</lpage>.</mixed-citation></ref>
<ref id="b20"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Jensema</surname>, <given-names>C. J.</given-names></name>, <name><surname>el Sharkawy</surname>, <given-names>S.</given-names></name>, <name><surname>Danturthi</surname>, <given-names>R. S.</given-names></name>, <name><surname>Burch</surname>, <given-names>R.</given-names></name>, &#x26;  <name><surname>Hsu</surname>, <given-names>D.</given-names></name></person-group> (<year>2000</year>). <article-title>Eye movement patterns of captioned television viewers.</article-title> <source>American Annals of the Deaf</source>, <volume>145</volume>(<issue>3</issue>), <fpage>275</fpage>–<lpage>285</lpage>. <pub-id pub-id-type="doi">10.1353/aad.2012.0093</pub-id><pub-id pub-id-type="pmid">10965591</pub-id><issn>0002-726X</issn></mixed-citation></ref>
<ref id="b21"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Karamitroglou</surname>, <given-names>F.</given-names></name></person-group> (<year>1998</year>). A Proposed Set of Subtitling Standards in Europe. Translation Journal, 2(2). Retrieved from <ext-link ext-link-type="uri" xlink:href="http://translationjournal.net/journal/04stndrd.htm">http://translationjournal.net/journal/04stndrd.htm</ext-link></mixed-citation></ref>
<ref id="b22"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Keenan</surname>, <given-names>S. A.</given-names></name></person-group> (<year>1984</year>). <article-title>Effects of Chunking and Line Length on Reading Efficiency.</article-title> <source>Visible Language</source>, <volume>18</volume>(<issue>1</issue>), <fpage>61</fpage>–<lpage>80</lpage>.<issn>0022-2224</issn></mixed-citation></ref>
<ref id="b23"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Kennedy</surname>, <given-names>A.</given-names></name>, <name><surname>Murray</surname>, <given-names>W. S.</given-names></name>, <name><surname>Jennings</surname>, <given-names>F.</given-names></name>, &#x26;  <name><surname>Reid</surname>, <given-names>C.</given-names></name></person-group> (<year>1989</year>). <article-title>Parsing Complements: Comments on the Generality of the Principle of Minimal Attachment.</article-title> <source>Language and Cognitive Processes</source>, <volume>4</volume>(<issue>3–4</issue>), <fpage>SI51</fpage>–<lpage>SI76</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1080/01690968908406363</pub-id><issn>0169-0965</issn></mixed-citation></ref>
<ref id="b24"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Koolstra</surname>, <given-names>C. M.</given-names></name>, <name><surname>Van Der Voort</surname>, <given-names>T. H. A.</given-names></name>, &#x26;  <name><surname>D’Ydewalle</surname>, <given-names>G.</given-names></name></person-group> (<year>1999</year>). <article-title>Lengthening the Presentation Time of Subtitles on Television: Effects on Children’s Reading Time and Recognition.</article-title> <source>Communications</source>, <volume>24</volume>(<issue>4</issue>), <fpage>407</fpage>–<lpage>422</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1515/comm.1999.24.4.407</pub-id><issn>0341-2059</issn></mixed-citation></ref>
<ref id="b25"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Krejtz</surname>, <given-names>I.</given-names></name>, <name><surname>Szarkowska</surname>, <given-names>A.</given-names></name>, &#x26;  <name><surname>Krejtz</surname>, <given-names>K.</given-names></name></person-group> (<year>2013</year>). The Effects of Shot Changes on Eye Movements in Subtitling. Journal of Eye Movement Research, 6(5), 1–12. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.16910/jemr.6.5.3">https://doi.org/10.16910/jemr.6.5.3</ext-link></mixed-citation></ref>
<ref id="b26"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Krejtz</surname>, <given-names>I.</given-names></name>, <name><surname>Szarkowska</surname>, <given-names>A.</given-names></name>, &#x26;  <name><surname>Łogińska</surname>, <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Reading Function and Content Words in Subtitled Videos.</article-title> <source>Journal of Deaf Studies and Deaf Education</source>, <volume>21</volume>(<issue>2</issue>), <fpage>222</fpage>–<lpage>232</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1093/deafed/env061</pub-id><pub-id pub-id-type="pmid">26681268</pub-id><issn>1081-4159</issn></mixed-citation></ref>
<ref id="b27"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Kruger</surname>, <given-names>J.-L.</given-names></name>, &#x26;  <name><surname>Doherty</surname>, <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>Measuring Cognitive Load in the Presence of Educational Video: Towards a Multimodal Methodology.</article-title> <source>Australasian Journal of Educational Technology</source>, <volume>32</volume>(<issue>6</issue>), <fpage>19</fpage>–<lpage>31</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.14742/ajet.3084</pub-id><issn>1449-3098</issn></mixed-citation></ref>
<ref id="b28"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Kruger</surname>, <given-names>J.-L.</given-names></name>, <name><surname>Doherty</surname>, <given-names>S.</given-names></name>, <name><surname>Fox</surname>, <given-names>W.</given-names></name>, &#x26;  <name><surname>de Lissa</surname>, <given-names>P.</given-names></name></person-group> (<year>2017</year>). <chapter-title>Multimodal measurement of cognitive load during subtitle processing: Same-language subtitles for foreign language viewers</chapter-title>. In <person-group person-group-type="editor"><name><given-names>I.</given-names> <surname>Lacruz</surname></name> &#x26;  <name><given-names>R.</given-names> <surname>Jääskeläinen</surname></name> (<role>Eds.</role>),</person-group> <source>New Directions in Cognitive and Empirical Translation Process Research</source> (pp. <fpage>267</fpage>–<lpage>294</lpage>). <publisher-loc>London</publisher-loc>: <publisher-name>John Benjamins</publisher-name>.</mixed-citation></ref>
<ref id="b29"><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><name><surname>Kruger</surname>, <given-names>J.-L.</given-names></name>, <name><surname>Hefer</surname>, <given-names>E.</given-names></name>,&#x26;  <name><surname>Matthew</surname>, <given-names>G.</given-names></name></person-group> (<year>2013</year>). <article-title>Measuring the impact of subtitles on cognitive load: eye tracking and dynamic audiovisual texts.</article-title> <source>Proceedings of the 2013 Conference on Eye Tracking South Africa</source>. <publisher-loc>Cape Town, South Africa</publisher-loc>: <publisher-name>ACM</publisher-name>. https://doi.org/<pub-id pub-id-type="doi" specific-use="author">10.1145/2509315.2509331</pub-id></mixed-citation></ref>
<ref id="b30"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Kruger</surname>, <given-names>J.-L.</given-names></name>, <name><surname>Hefer</surname>, <given-names>E.</given-names></name>, &#x26;  <name><surname>Matthew</surname>, <given-names>G.</given-names></name></person-group> (<year>2014</year>). Attention distribution and cognitive load in a subtitled academic lecture: L1 vs. L2. Journal of Eye Movement Research, 7(5), 1–15. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.16910/jemr.7.5.4">https://doi.org/10.16910/jemr.7.5.4</ext-link></mixed-citation></ref>
<ref id="b31"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Kruger</surname>, <given-names>J.-L.</given-names></name>, &#x26;  <name><surname>Steyn</surname>, <given-names>F.</given-names></name></person-group> (<year>2014</year>). <article-title>Subtitles and Eye Tracking: Reading and Performance.</article-title> <source>Reading Research Quarterly</source>, <volume>49</volume>(<issue>1</issue>), <fpage>105</fpage>–<lpage>120</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1002/rrq.59</pub-id><issn>0034-0553</issn></mixed-citation></ref>
<ref id="b32"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Kruger</surname>, <given-names>J.-L.</given-names></name>, <name><surname>Szarkowska</surname>, <given-names>A.</given-names></name>, &#x26;  <name><surname>Krejtz</surname>, <given-names>I.</given-names></name></person-group> (<year>2015</year>). Subtitles on the moving image: an overview of eye tracking studies. Refractory, 25.</mixed-citation></ref>
<ref id="b33"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>LeVasseur</surname>, <given-names>V. M.</given-names></name>, <name><surname>Macaruso</surname>, <given-names>P.</given-names></name>, <name><surname>Palumbo</surname>, <given-names>L. C.</given-names></name>, &#x26;  <name><surname>Shankweiler</surname>, <given-names>D.</given-names></name></person-group> (<year>2006</year>). <article-title>Syntactically cued text facilitates oral reading fluency in developing readers.</article-title> <source>Applied Psycholinguistics</source>, <volume>27</volume>(<issue>3</issue>), <fpage>423</fpage>–<lpage>445</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1017/s0142716406060346</pub-id> <pub-id pub-id-type="doi">10.1017/S0142716406060346</pub-id><issn>0142-7164</issn></mixed-citation></ref>
<ref id="b34"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Łuczak</surname>, <given-names>K.</given-names></name></person-group> (<year>2017</year>). <source>The effects of the language of the soundtrack on film comprehension, cognitive load and subtitle reading patterns. An eye-tracking study</source>. <publisher-loc>Warsaw</publisher-loc>: <publisher-name>University of Warsaw</publisher-name>.</mixed-citation></ref>
<ref id="b35"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Marian</surname>, <given-names>V.</given-names></name>, <name><surname>Blumenfeld</surname>, <given-names>H. K.</given-names></name>, &#x26;  <name><surname>Kaushanskaya</surname>, <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>The Language Experience and Proficiency Questionnaire (LEAP-Q): Assessing language profiles in bilinguals and multilinguals.</article-title> <source>Journal of Speech, Language, and Hearing Research: JSLHR</source>, <volume>50</volume>(<issue>4</issue>), <fpage>940</fpage>–<lpage>967</lpage>. <pub-id pub-id-type="doi">10.1044/1092-4388(2007/067)</pub-id><pub-id pub-id-type="pmid">17675598</pub-id><issn>1092-4388</issn></mixed-citation></ref>
<ref id="b36"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Matamala</surname>, <given-names>A.</given-names></name>, <name><surname>Perego</surname>, <given-names>E.</given-names></name>, &#x26;  <name><surname>Bottiroli</surname>, <given-names>S.</given-names></name></person-group> (<year>2017</year>). <article-title>Dubbing versus subtitling yet again?</article-title> <source>Babel</source>, <volume>63</volume>(<issue>3</issue>), <fpage>423</fpage>–<lpage>441</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1075/babel.63.3.07mat</pub-id></mixed-citation></ref>
<ref id="b37"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Mayberry</surname>, <given-names>R. I.</given-names></name>, <name><surname>del Giudice</surname>, <given-names>A. A.</given-names></name>, &#x26;  <name><surname>Lieberman</surname>, <given-names>A. M.</given-names></name></person-group> (<year>2011</year>). <article-title>Reading achievement in relation to phonological coding and awareness in deaf readers: A meta-analysis.</article-title> <source>Journal of Deaf Studies and Deaf Education</source>, <volume>16</volume>(<issue>2</issue>), <fpage>164</fpage>–<lpage>188</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1093/deafed/enq049</pub-id><pub-id pub-id-type="pmid">21071623</pub-id><issn>1081-4159</issn></mixed-citation></ref>
<ref id="b38"><mixed-citation publication-type="book" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Miquel Iriarte</surname>, <given-names>M.</given-names></name></person-group> (<year>2017</year>). <source><italic>The reception of subtitling for the deaf and hard of hearing: viewers’ hearing and communication profile &#x26;  Subtitling speed of exposure</italic>.</source> Universitat Autònoma de Barcelona, Barcelona.</mixed-citation></ref>
<ref id="b39"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Mitchell</surname>, <given-names>D. C.</given-names></name></person-group> (<year>1987</year>). <chapter-title>Lexical guidance in human parsing: Locus and processing characteristics</chapter-title>. In <person-group person-group-type="editor"><name><given-names>M.</given-names> <surname>Coltheart</surname></name> (<role>Ed.</role>),</person-group> <source>Attention and Performance XII</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Lawrence Erlbaum Associates Ltd.</publisher-name></mixed-citation></ref>
<ref id="b40"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Mitchell</surname>, <given-names>D. C.</given-names></name></person-group> (<year>1989</year>). <article-title>Verb guidance and other lexical effects in parsing.</article-title> <source>Language and Cognitive Processes</source>, <volume>4</volume>(<issue>3–4</issue>), <fpage>SI123</fpage>–<lpage>SI154</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1080/01690968908406366</pub-id><issn>0169-0965</issn></mixed-citation></ref>
<ref id="b41"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Musselman</surname>, <given-names>C.</given-names></name></person-group> (<year>2000</year>). <article-title>How do children who can’t hear learn to read an alphabetic script? A review of the literature on reading and deafness.</article-title> <source>Journal of Deaf Studies and Deaf Education</source>, <volume>5</volume>(<issue>1</issue>), <fpage>9</fpage>–<lpage>31</lpage>. <pub-id pub-id-type="doi">10.1093/deafed/5.1.9</pub-id><pub-id pub-id-type="pmid">15454515</pub-id><issn>1081-4159</issn></mixed-citation></ref>
<ref id="b42"><mixed-citation publication-type="unknown" specific-use="unparsed">Ofcom. (<year>2015</year>). Ofcom’s Code on Television Access Services.</mixed-citation></ref>
<ref id="b43"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Paas</surname>, <given-names>F.</given-names></name>, <name><surname>Tuovinen</surname>, <given-names>J. E.</given-names></name>, <name><surname>Tabbers</surname>, <given-names>H.</given-names></name>, &#x26;  <name><surname>Van Gerven</surname>, <given-names>P. W. M.</given-names></name></person-group> (<year>2003</year>). <article-title>Cognitive load measurement as a means to advance cognitive load theory.</article-title> <source>Educational Psychologist</source>, <volume>38</volume>(<issue>1</issue>), <fpage>63</fpage>–<lpage>71</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1207/s15326985ep3801_8</pub-id> <pub-id pub-id-type="doi">10.1207/S15326985EP3801_8</pub-id><issn>0046-1520</issn></mixed-citation></ref>
<ref id="b44"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Perego</surname>, <given-names>E.</given-names></name></person-group> (<year>2008</year>a). <chapter-title>Subtitles and line-breaks: Towards improved readability</chapter-title>. In <person-group person-group-type="editor"><name><given-names>D.</given-names> <surname>Chiaro</surname></name>, <name><given-names>C.</given-names> <surname>Heiss</surname></name>, &#x26;  <name><given-names>C.</given-names> <surname>Bucaria</surname></name> (<role>Eds.</role>),</person-group> <source>Between Text and Image: Updating research in screen translation</source> (pp. <fpage>211</fpage>–<lpage>223</lpage>). <publisher-name>John Benjamins</publisher-name>; <pub-id pub-id-type="doi" specific-use="author">10.1075/btl.78.21per</pub-id></mixed-citation></ref>
<ref id="b45"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Perego</surname>, <given-names>E.</given-names></name></person-group> (<year>2008</year>b). <article-title>What would we read best? Hypotheses and suggestions for the location of line breaks in film subtitles.</article-title> <source>The Sign Language Translator and Interpreter</source>, <volume>2</volume>(<issue>1</issue>), <fpage>35</fpage>–<lpage>63</lpage>.</mixed-citation></ref>
<ref id="b46"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Perego</surname>, <given-names>E.</given-names></name>, <name><surname>Del Missier</surname>, <given-names>F.</given-names></name>, <name><surname>Porta</surname>, <given-names>M.</given-names></name>, &#x26;  <name><surname>Mosconi</surname>, <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>The cognitive effectiveness of subtitle processing.</article-title> <source>Media Psychology</source>, <volume>13</volume>(<issue>3</issue>), <fpage>243</fpage>–<lpage>272</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1080/15213269.2010.502873</pub-id><issn>1521-3269</issn></mixed-citation></ref>
<ref id="b47"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Perego</surname>, <given-names>E.</given-names></name>, <name><surname>Laskowska</surname>, <given-names>M.</given-names></name>, <name><surname>Matamala</surname>, <given-names>A.</given-names></name>, <name><surname>Remael</surname>, <given-names>A.</given-names></name>, <name><surname>Robert</surname>, <given-names>I. S.</given-names></name>, <name><surname>Szarkowska</surname>, <given-names>A.</given-names></name>, <etal>. . .</etal> <name><surname>Bottiroli</surname>, <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>Is subtitling equally effective everywhere? A first cross-national study on the reception of interlingually subtitled messages.</article-title> <source>Across Languages and Cultures</source>, <volume>17</volume>(<issue>2</issue>), <fpage>205</fpage>–<lpage>229</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1556/084.2016.17.2.4</pub-id><issn>1585-1923</issn></mixed-citation></ref>
<ref id="b48"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rajendran</surname>, <given-names>D. J.</given-names></name>, <name><surname>Duchowski</surname>, <given-names>A. T.</given-names></name>, <name><surname>Orero</surname>, <given-names>P.</given-names></name>, <name><surname>Martínez</surname>, <given-names>J.</given-names></name>, &#x26;  <name><surname>Romero-Fresco</surname>, <given-names>P.</given-names></name></person-group> (<year>2013</year>). <article-title>Effects of text chunking on subtitling: A quantitative and qualitative examination.</article-title> <source>Perspectives</source>, <volume>21</volume>(<issue>1</issue>), <fpage>5</fpage>–<lpage>21</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1080/0907676X.2012.722651</pub-id></mixed-citation></ref>
<ref id="b49"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rayner</surname>, <given-names>K.</given-names></name></person-group> (<year>1998</year>). <article-title>Eye movements in reading and information processing: 20 years of research.</article-title> <source>Psychological Bulletin</source>, <volume>124</volume>(<issue>3</issue>), <fpage>372</fpage>–<lpage>422</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1037/0033-2909.124.3.372</pub-id><pub-id pub-id-type="pmid">9849112</pub-id><issn>0033-2909</issn></mixed-citation></ref>
<ref id="b50"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rayner</surname>, <given-names>K.</given-names></name></person-group> (<year>2015</year>). <chapter-title>Eye movements in reading</chapter-title>. In <person-group person-group-type="editor"><name><given-names>J. D.</given-names> <surname>Wright</surname></name> (<role>Ed.</role>),</person-group> <source>International Encyclopedia of the Social &#x26;  Behavioral Sciences</source> (<edition>2nd ed.</edition>, pp. <fpage>631</fpage>–<lpage>634</lpage>). <publisher-loc>United Kingdom</publisher-loc>: <publisher-name>Elsevier</publisher-name>; <pub-id pub-id-type="doi" specific-use="author">10.1016/b978-0-08-097086-8.54008-2</pub-id> <pub-id pub-id-type="doi">10.1016/B978-0-08-097086-8.54008-2</pub-id></mixed-citation></ref>
<ref id="b51"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rayner</surname>, <given-names>K.</given-names></name>, <name><surname>Pollatsek</surname>, <given-names>A.</given-names></name>, <name><surname>Ashby</surname>, <given-names>J.</given-names></name>, &#x26;  <name><surname>Clifton</surname>, <given-names>C. J.</given-names></name></person-group> (<year>2012</year>). <source>Psychology of reading</source> (<edition>2nd ed.</edition>). <publisher-loc>New York, London</publisher-loc>: <publisher-name>Psychology Press</publisher-name>. <pub-id pub-id-type="doi">10.4324/9780203155158</pub-id></mixed-citation></ref>
<ref id="b52"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Romero-Fresco</surname>, <given-names>P.</given-names></name></person-group> (<year>2015</year>). <source>The reception of subtitles for the deaf and hard of hearing in Europe</source>. <publisher-loc>Bern</publisher-loc>: <publisher-name>Peter Lang</publisher-name>.</mixed-citation></ref>
<ref id="b53"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Sandry</surname>, <given-names>J.</given-names></name>, <name><surname>Genova</surname>, <given-names>H. M.</given-names></name>, <name><surname>Dobryakova</surname>, <given-names>E.</given-names></name>, <name><surname>DeLuca</surname>, <given-names>J.</given-names></name>, &#x26;  <name><surname>Wylie</surname>, <given-names>G.</given-names></name></person-group> (<year>2014</year>). <article-title>Subjective cognitive fatigue in multiple sclerosis depends on task length.</article-title> <source>Frontiers in Neurology</source>, <volume>5</volume>, <fpage>214</fpage>. <pub-id pub-id-type="doi" specific-use="author">10.3389/fneur.2014.00214</pub-id><pub-id pub-id-type="pmid">25386159</pub-id><issn>1664-2295</issn></mixed-citation></ref>
<ref id="b54"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Schmeck</surname>, <given-names>A.</given-names></name>, <name><surname>Opfermann</surname>, <given-names>M.</given-names></name>, <name><surname>van Gog</surname>, <given-names>T.</given-names></name>, <name><surname>Paas</surname>, <given-names>F.</given-names></name>, &#x26;  <name><surname>Leutner</surname>, <given-names>D.</given-names></name></person-group> (<year>2014</year>). <article-title>Measuring cognitive load with subjective rating scales during problem solving: Differences between immediate and delayed ratings.</article-title> <source>Instructional Science</source>, <volume>43</volume>(<issue>1</issue>), <fpage>93</fpage>–<lpage>114</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1007/s11251-014-9328-3</pub-id><issn>0020-4277</issn></mixed-citation></ref>
<ref id="b55"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Sun</surname>, <given-names>S.</given-names></name>, &#x26;  <name><surname>Shreve</surname>, <given-names>G.</given-names></name></person-group> (<year>2014</year>). <article-title>Measuring translation difficulty: An empirical study.</article-title> <source>Target</source>, <volume>26</volume>(<issue>1</issue>), <fpage>98</fpage>–<lpage>127</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1075/target.26.1.04sun</pub-id></mixed-citation></ref>
<ref id="b56"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Sweller</surname>, <given-names>J.</given-names></name></person-group> (<year>2011</year>). <article-title>Cognitive load theory.</article-title> <source>Psychology of Learning and Motivation</source>, <volume>55</volume>, <fpage>37</fpage>–<lpage>76</lpage>. <pub-id pub-id-type="doi">10.1016/B978-0-12-387691-1.00002-8</pub-id><issn>0079-7421</issn></mixed-citation></ref>
<ref id="b57"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Sweller</surname>, <given-names>J.</given-names></name>, <name><surname>Ayres</surname>, <given-names>P.</given-names></name>, &#x26;  <name><surname>Kalyuga</surname>, <given-names>S.</given-names></name></person-group> (<year>2011</year>). <source>Cognitive load theory</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Springer</publisher-name>. <pub-id pub-id-type="doi">10.1007/978-1-4419-8126-4</pub-id></mixed-citation></ref>
<ref id="b58"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Sweller</surname>, <given-names>J.</given-names></name>, <name><surname>Van Merrienboer</surname>, <given-names>J.</given-names></name>, &#x26;  <name><surname>Paas</surname>, <given-names>F.</given-names></name></person-group> (<year>1998</year>). <article-title>Cognitive architecture and instructional design.</article-title> <source>Educational Psychology Review</source>, <volume>10</volume>(<issue>3</issue>), <fpage>251</fpage>–<lpage>296</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1023/a:1022193728205</pub-id> <pub-id pub-id-type="doi">10.1023/A:1022193728205</pub-id><issn>1040-726X</issn></mixed-citation></ref>
<ref id="b59"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Szarkowska</surname>, <given-names>A.</given-names></name>, &#x26;  <name><surname>Gerber-Morón</surname>, <given-names>O.</given-names></name></person-group> (<year>2018</year>). <source>SURE Project Dataset</source>. <publisher-name>RepOD</publisher-name>; <pub-id pub-id-type="doi" specific-use="author">10.18150/repod.4469278</pub-id></mixed-citation></ref>
<ref id="b60"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Szarkowska</surname>, <given-names>A.</given-names></name>, <name><surname>Krejtz</surname>, <given-names>I.</given-names></name>, <name><surname>Kłyszejko</surname>, <given-names>Z.</given-names></name>, &#x26;  <name><surname>Wieczorek</surname>, <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Verbatim, standard, or edited? Reading patterns of different captioning styles among deaf, hard of hearing, and hearing viewers.</article-title> <source>American Annals of the Deaf</source>, <volume>156</volume>(<issue>4</issue>), <fpage>363</fpage>–<lpage>378</lpage>. <pub-id pub-id-type="doi">10.1353/aad.2011.0039</pub-id><pub-id pub-id-type="pmid">22256538</pub-id><issn>0002-726X</issn></mixed-citation></ref>
<ref id="b61"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Szarkowska</surname>, <given-names>A.</given-names></name>, <name><surname>Krejtz</surname>, <given-names>I.</given-names></name>, <name><surname>Pilipczuk</surname>, <given-names>O.</given-names></name>, <name><surname>Dutka</surname>, <given-names>Ł.</given-names></name>, &#x26;  <name><surname>Kruger</surname>, <given-names>J.-L.</given-names></name></person-group> (<year>2016</year>). <article-title>The effects of text editing and subtitle presentation rate on the comprehension and reading patterns of interlingual and intralingual subtitles among deaf, hard of hearing and hearing viewers.</article-title> <source>Across Languages and Cultures</source>, <volume>17</volume>(<issue>2</issue>), <fpage>183</fpage>–<lpage>204</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1556/084.2016.17.2.3</pub-id><issn>1585-1923</issn></mixed-citation></ref>
<ref id="b62"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Szarkowska</surname>, <given-names>A.</given-names></name>, <name><surname>Krejtz</surname>, <given-names>K.</given-names></name>, <name><surname>Dutka</surname>, <given-names>Ł.</given-names></name>, &#x26;  <name><surname>Pilipczuk</surname>, <given-names>O.</given-names></name></person-group> (<year>2016</year>). <article-title>Cognitive load in intralingual and interlingual respeaking—a preliminary study.</article-title> <source>Poznań Studies in Contemporary Linguistics</source>, <volume>52</volume>(<issue>2</issue>). <pub-id pub-id-type="doi" specific-use="author">10.1515/psicl-2016-0008</pub-id><issn>1732-0747</issn></mixed-citation></ref>
<ref id="b63"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Thorn</surname>, <given-names>F.</given-names></name>, &#x26;  <name><surname>Thorn</surname>, <given-names>S.</given-names></name></person-group> (<year>1996</year>). <article-title>Television captions for hearing-impaired people: A study of key factors that affect reading performance.</article-title> <source>Human Factors</source>, <volume>38</volume>(<issue>3</issue>), <fpage>452</fpage>–<lpage>463</lpage>. <pub-id pub-id-type="doi">10.1518/001872096778702006</pub-id><pub-id pub-id-type="pmid">8865768</pub-id><issn>0018-7208</issn></mixed-citation></ref>
<ref id="b64"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Van Dongen</surname>, <given-names>H. P. A.</given-names></name>, <name><surname>Belenky</surname>, <given-names>G.</given-names></name>, &#x26;  <name><surname>Krueger</surname>, <given-names>J. M.</given-names></name></person-group> (<year>2011</year>). <source>Investigating the temporal dynamics and underlying mechanisms of cognitive fatigue</source>. <publisher-name>American Psychological Association</publisher-name>; <pub-id pub-id-type="doi" specific-use="author">10.1037/12343-006</pub-id></mixed-citation></ref>
<ref id="b65"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Van Gerven</surname>, <given-names>P. W. M.</given-names></name>, <name><surname>Paas</surname>, <given-names>F.</given-names></name>, <name><surname>Van Merriënboer</surname>, <given-names>J. J. G.</given-names></name>, &#x26;  <name><surname>Schmidt</surname>, <given-names>H. G.</given-names></name></person-group> (<year>2004</year>). <article-title>Memory load and the cognitive pupillary response in aging.</article-title> <source>Psychophysiology</source>, <volume>41</volume>(<issue>2</issue>), <fpage>167</fpage>–<lpage>174</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.2003.00148.x</pub-id><pub-id pub-id-type="pmid">15032982</pub-id><issn>0048-5772</issn></mixed-citation></ref>
<ref id="b66"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>van Gog</surname>, <given-names>T.</given-names></name>, &#x26;  <name><surname>Paas</surname>, <given-names>F.</given-names></name></person-group> (<year>2008</year>). <article-title>Instructional efficiency: Revisiting the original construct in educational research.</article-title> <source>Educational Psychologist</source>, <volume>43</volume>(<issue>1</issue>), <fpage>16</fpage>–<lpage>26</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1080/00461520701756248</pub-id><issn>0046-1520</issn></mixed-citation></ref>
<ref id="b67"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Wang</surname>, <given-names>Z.</given-names></name>, &#x26;  <name><surname>Duff</surname>, <given-names>B. R. L.</given-names></name></person-group> (<year>2016</year>). <article-title>All loads are not equal: Distinct influences of perceptual load and cognitive load on peripheral ad processing.</article-title> <source>Media Psychology</source>, <volume>19</volume>(<issue>4</issue>), <fpage>589</fpage>–<lpage>613</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1080/15213269.2015.1108204</pub-id><issn>1521-3269</issn></mixed-citation></ref>
<ref id="b68"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Warren</surname>, <given-names>P.</given-names></name></person-group> (<year>2012</year>). <source>Introducing Psycholinguistics</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. <pub-id pub-id-type="doi">10.1017/CBO9780511978531</pub-id></mixed-citation></ref>
<ref id="b69"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Yoon</surname>, <given-names>J.-O.</given-names></name>, &#x26;  <name><surname>Kim</surname>, <given-names>M.</given-names></name></person-group> (<year>2011</year>). <article-title>The effects of captions on deaf students’ content comprehension, cognitive load, and motivation in online learning.</article-title> <source>American Annals of the Deaf</source>, <volume>156</volume>(<issue>3</issue>), <fpage>283</fpage>–<lpage>289</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1353/aad.2011.0026</pub-id><pub-id pub-id-type="pmid">21941878</pub-id><issn>0002-726X</issn></mixed-citation></ref>
</ref-list>
</back>
</article>
