<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.11.3.2</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Line breaks in subtitling: an eye tracking study on viewer preferences</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Gerber-Morón</surname>
						<given-names>Olivia</given-names>
					</name>
					<xref ref-type="aff" rid="aff1">1</xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Szarkowska</surname>
						<given-names>Agnieszka</given-names>
					</name>
					<xref ref-type="aff" rid="aff2 aff3">2, 3</xref>
				</contrib>				
        <aff id="aff1">
		<institution>Universitat Autònoma de Barcelona</institution>,   <country>Spain</country>
        </aff>
        <aff id="aff2">
		<institution>University College London</institution>,   <country>UK</country>
        </aff>
        <aff id="aff3">
		<institution>University of Warsaw</institution>,   <country>Poland</country>
        </aff>		
		</contrib-group>   

		
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>17</day>  
		<month>5</month>
        <year>2018</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2018</year>
	</pub-date>
      <volume>11</volume>
      <issue>3</issue>
	 <elocation-id>10.16910/jemr.11.3.2</elocation-id> 
	<permissions> 
	<copyright-year>2018</copyright-year>
	<copyright-holder>Gerber-Morón, O. &#x26; Szarkowska, A. </copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License, 
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>There is a discrepancy between
          professional subtitling guidelines and how they are implemented in
          real life. One example of such a discrepancy is line breaks: the way
          the text is divided between the two lines in a subtitle. Although we
          know from the guidelines what subtitles <italic>should</italic> look like and
          from watching subtitled materials what they <italic>really</italic> look like,
          little is known about which line breaks viewers would prefer. We
          examined individual differences in syntactic processing and viewers’
          preferences regarding line breaks in various linguistic units,
          including noun, verb and adjective phrases. We studied people’s eye
          movements while they were reading subtitled screenshots. We also
          investigated whether these preferences are affected by hearing
          status and previous experience with subtitling. Viewers were shown
          30 pairs of screenshots with syntactically segmented and
          non-syntactically segmented subtitles and they were asked to choose
          which subtitle in each pair was better. We tested 21 English, 26
          Spanish and 21 Polish hearing people, and 19 hard of hearing and
          deaf people from the UK. Our results show that viewers prefer
          syntactically segmented line breaks. Eye tracking results indicate
          that linguistic units are processed differently depending on the
          linguistic category and the viewers’ profile.</p>
      </abstract>
      <kwd-group>
        <kwd>Eye movements</kwd>
        <kwd>eye tracking</kwd>
        <kwd>reading</kwd>
        <kwd>subtitling</kwd>
        <kwd>line breaks</kwd>
        <kwd>individual differences</kwd>
        <kwd>segmentation</kwd>
        <kwd>audiovisual translation</kwd>
        <kwd>syntactic processing</kwd>		
      </kwd-group>
    </article-meta>
  </front>	
  <body>

    <sec id="S1">
      <title>Introduction</title>

    <p>It is a truth universally acknowledged that subtitles should be easy to
    read and should not stand in the way of viewers’ enjoyment of a film. One way of enhancing
    subtitle readability is segmentation, i.e. the way the text is divided
    between the two lines in a subtitle. Both subtitling scholars and
    professionals believe that subtitle segmentation should follow syntactic
    rules (<xref ref-type="bibr" rid="b1 b2 b3 b4 b5 b6 b7 b8">1, 2, 3, 4, 5, 6, 7, 8</xref>). This means that linguistic units should be kept together in
    one line. For instance, rather than having a subtitle segmented in this
    way (<xref ref-type="bibr" rid="b2">2</xref>):</p>


    <p>We are aiming to get a </p>
    <p>better television service.</p>


    <p>a well-segmented subtitle would have the indefinite article ‘<italic>a</italic>’
    in the second line together with the rest of the noun phrase it belongs
    to:</p>



    <p>We are aiming to get</p>
    <p>a better television service.</p>



    <p>As subtitles compete for screen space and viewers’ attention with
    images, good subtitle segmentation is crucial to optimise readability and
    to enhance viewers’ enjoyment of the film (<xref ref-type="bibr" rid="b3">3</xref>). In this study, we look into
    viewers’ preferences on subtitle segmentation and its impact on
    readability.</p>

    <sec id="S1a">
      <title>Syntactically-cued text and reading</title>

    <p>When reading, people make sense of words by grouping them
    into phrases – a process known as parsing (<xref ref-type="bibr" rid="b9">9</xref>). Parsing is done
    incrementally, word by word: readers do not wait until the end of the
    sentence to interpret it, but try to make sense of it while they are
    reading (<xref ref-type="bibr" rid="b10 b11">10, 11</xref>). To understand a sentence, readers must “first identify
    its syntactic relations” (<xref ref-type="bibr" rid="b11">11</xref>). If text is not syntactically cued, the
    reader’s comprehension may be disrupted. Syntactic ambiguities leading the
    reader to an incorrect interpretation, known as “garden path” sentences,
    need to be reanalysed and disambiguated (<xref ref-type="bibr" rid="b11 b12">11, 12</xref>). These ambiguities and
    disruptions affect eye movements, as readers make longer fixations and
    regress to earlier parts of the sentence to disambiguate unclear text
    (<xref ref-type="bibr" rid="b10">10</xref>).</p>

    <p>Previous studies on reading printed text showed that syntactically-cued
    text facilitates reading (<xref ref-type="bibr" rid="b13 b14 b15">13, 14, 15</xref>), resulting in fewer disfluencies at line
    breaks than uncued texts (<xref ref-type="bibr" rid="b13">13</xref>). Dividing phrases based on syntactic units
    has also been found to improve children’s reading comprehension (<xref ref-type="bibr" rid="b14 b15">14, 15</xref>).
    From previous eye tracking literature, we know that some grammatical
    structures are more difficult to process than others, resulting in
    regressive eye movements and longer reading times (<xref ref-type="bibr" rid="b16 b17 b18">16, 17, 18</xref>). In this study,
    we expect to find eye movement disfluencies (revisits, longer dwell time)
    in non-syntactically segmented text.</p>
    </sec>
	
    <sec id="S1b">
      <title>Linguistic units in subtitle segmentation</title>

    <p>Subtitling guidelines recommend that subtitle text should be presented
    in sense blocks and divided based on linguistic units (<xref ref-type="bibr" rid="b1 b19 b20 b21">1, 19, 20, 21</xref>), at the
    highest syntactic nodes possible (<xref ref-type="bibr" rid="b6">6</xref>). At the phrase level, it is believed
    (<xref ref-type="bibr" rid="b7">7</xref>) that the following phrases should be displayed on the same subtitle
    line: noun phrases (nouns preceded by an article); prepositional phrases
    (simple and/or complex preposition heading a noun or noun phrase); and
    verb phrases (auxiliaries and main verbs or phrasal verbs). At the clause
    and sentence level, constructions that should be kept on the same subtitle
    line include (<xref ref-type="bibr" rid="b7">7</xref>): coordination constructions (sentential conjunctions such
    as ‘and’ and negative constructions with ‘not’); subordination
    constructions (clauses introduced by the conjunction ‘that’);
    <italic>if</italic>-structures and comparative constructions (clauses preceded by
    the conjunction ‘than’).</p>

    <p>Similar rules regarding line breaks are put forward in many
    subtitling guidelines endorsed by television broadcasters and media
    regulators (<xref ref-type="bibr" rid="b2 b8 b22 b23 b24 b25">2, 8, 22, 23, 24, 25</xref>). According to them, the parts of speech that should
    not be split across a two-line subtitle are: article and noun; noun and
    adjective; first and last name; preposition and following phrase;
    conjunction and following phrase/clause; prepositional verb and
    preposition; pronoun and verb; and parts of a complex verb. However, when
    there is a conflict, synchronisation with the soundtrack should take
    precedence over line breaks (<xref ref-type="bibr" rid="b2">2</xref>).</p>
    </sec>
	
    <sec id="S1c">
      <title>Geometry in subtitle segmentation</title>

    <p>Apart from sense blocks and syntactic phrases, another important
    consideration in how to form a two-line subtitle is its geometry (<xref ref-type="bibr" rid="b1 b3 b5 b6">1, 3, 5, 6</xref>). 
	When watching subtitled videos, viewers may
    not be aware of syntactic rules used to split linguistic units between the
    lines. What they may notice instead is subtitle shape: either closer to a
    pyramid or trapezoid with one line shorter than the other, or a rectangle
    with two lines of roughly equal length.</p>

    <p>It is generally believed that lines within a subtitle should be
    proportionally equal in length because “untidy formats are disliked by
    viewers” (<xref ref-type="bibr" rid="b1">1</xref>) and people are used to reading printed material in a
    rectangular format (<xref ref-type="bibr" rid="b6">6</xref>). When two lines of unequal length are used, “the
    upper line should preferably be shorter to keep as much of the image as
    free” (<xref ref-type="bibr" rid="b19">19</xref>). If geometry is in conflict with syntax, then preference is
    given to the latter (<xref ref-type="bibr" rid="b6">6</xref>).</p>

    <p>In view of the above, it is plausible that viewers base their
    preferences on subtitle shape rather than on syntax (<xref ref-type="bibr" rid="b1 b26">1, 26</xref>). Tests with
    viewers are therefore needed to understand subtitle segmentation
    preferences and to establish the effects of line breaks on subtitling
    processing.</p>
    </sec>
	
    <sec id="S1d">
      <title>Empirical studies on subtitle segmentation</title>

    <p>Previous research on subtitle segmentation, including studies with eye
    tracking, has been limited and inconclusive. In a study
    on the cognitive effectiveness of subtitle processing (<xref ref-type="bibr" rid="b27">27</xref>), no differences
    were found in processing subtitles with and without syntactic-based
    segmentation, except for longer fixations in non-syntactically segmented
    text. Similarly, Gerber-Morón and Szarkowska (<xref ref-type="bibr" rid="b28">28</xref>) did not find differences
    in comprehension between syntactically and non-syntactically segmented
    subtitles, but reported higher cognitive load in the latter. In contrast,
    a study on text chunking in live subtitles (<xref ref-type="bibr" rid="b29">29</xref>) showed that subtitles
    segmented following linguistic phrases facilitate subtitle processing:
    compared to non-syntactically segmented subtitles displayed word by word,
    they found a significant difference in the number of eye movements
    shifting between the subtitles and the image.</p>
    </sec>
	
    <sec id="S1e">
      <title>Different types of viewers</title>

    <p>People may watch subtitled films differently depending on whether or
    not they are familiar with subtitling. Yet, despite a growing
    number of eye tracking studies on subtitling (<xref ref-type="bibr" rid="b30 b31 b32 b33">30, 31, 32, 33</xref>), little is
    known about the role of viewers’ previous experience with subtitling on
    the way they process subtitled videos. Perego et al. (<xref ref-type="bibr" rid="b34">34</xref>) conducted a
    cross-national study on subtitle reception and found that Italians, who
    are not habitual subtitle users, spent most of the watching time on
    reading subtitles and needed more effort to process them. In a study on
    eye movements of adults and children while reading television subtitles
    (<xref ref-type="bibr" rid="b35">35</xref>), longer fixations in the text were observed in children, who were
    less experienced in subtitling than adults. Similar fixation durations
    were obtained in another study on the processing of native and foreign
    language subtitles in native English speakers (<xref ref-type="bibr" rid="b30">30</xref>), which was attributed
    to the lack of familiarity with subtitles.</p>

    <p>Apart from previous experience with subtitling, another factor that
    affects the processing of subtitled videos is hearing status (<xref ref-type="bibr" rid="b36">36</xref>).
    Burnham et al. (<xref ref-type="bibr" rid="b37">37</xref>) note that “hearing status and literacy tend to covary”
    (p. 392). Early deafness has been found to be a predictor of poor reading
    (<xref ref-type="bibr" rid="b38 b39 b40 b41 b42 b43 b44">38, 39, 40, 41, 42, 43, 44</xref>). In consequence, deaf viewers may experience difficulties when
    reading subtitles and their comprehension of subtitled content may be
    lower than that of hearing viewers (<xref ref-type="bibr" rid="b45 b46 b47">45, 46, 47</xref>). One of the difficulties
    experienced by deaf people when reading is related to definite and
    indefinite articles (<xref ref-type="bibr" rid="b48 b49">48, 49</xref>). Deaf people spend more time reading function
    words in subtitles (such as determiners, prepositions, conjunctions or
    auxiliary verbs) than hard of hearing and hearing viewers (<xref ref-type="bibr" rid="b50">50</xref>). This has
    been attributed to the fact that many function words do not exist in sign
    languages, that such words tend to be short and unstressed, and therefore
    more difficult to identify, and that they have “low fixed semantic content
    outside of specific context in which they occur” (<xref ref-type="bibr" rid="b48">48</xref>). Given that function
    words are an important part of the linguistic units split between the two
    subtitle lines, in this study we investigate whether hearing status and
    previous experience with subtitling affect the preferences for or against
    syntactically-cued text.</p>
    </sec>
	
    <sec id="S1f">
      <title>The present study</title>

    <p>This study adopts the viewers’ perspective on subtitle segmentation by
    analysing people’s preferences and reactions to different types of line
    breaks. To investigate these issues, the approach we developed was
    three-fold. First, we examined the preferences of different groups of
    subtitle viewers with the goal of identifying any potential differences
    depending on their experience with subtitling, their hearing status and
    the nature of the linguistic units. Second, we analysed viewers’ eye
    movements while they were reading syntactically segmented and
    non-syntactically segmented subtitles. Drawing on the assumption that
    processing takes longer in the case of more effortful texts (<xref ref-type="bibr" rid="b51">51</xref>), we
    predicted that syntactically segmented text would be preferred by viewers,
    whereas non-syntactically segmented text would take more time to read and
    result in higher mean fixation durations, particularly in the case of
    viewers who were less experienced with subtitling or deaf, given the known
    difficulties of deaf readers with processing syntactic structures (<xref ref-type="bibr" rid="b52 b53 b54 b55 b56 b57">52, 53, 54, 55, 56, 57</xref>). Finally, we
    invited participants to a short semi-structured interview to elicit their
    views on subtitle segmentation.</p>

    <p>This study consists of two experiments: in Experiment 1 we tested
    hearing viewers from the UK, Poland, and Spain, while in Experiment 2 we
    tested British deaf, hard of hearing and hearing people. In each
    experiment, participants were asked to choose subtitles which they thought
    were better from 30 pairs of screenshots (see the Methods section). In
    each pair, one subtitle was segmented following the established subtitling
    rules, as described in the Introduction, and the other violated them,
    splitting linguistic units between the two lines. After the experiment,
    participants were also asked whether they made their choices based on
    linguistic considerations or rather on subtitle shape.</p>

    <p>Using a mixed-methods approach, in which we combined preferences, eye
    tracking and interviews, enabled us to gain unique insights into the
    reception of subtitle segmentation among different groups of viewers. To
    the best of our knowledge, no previous research has been conducted into
    viewers’ preferences on subtitle segmentation, using such a wide selection
    of linguistic units. The results of this study are particularly relevant
    in the context of current subtitling practices and subtitle
    readability.</p>
    </sec>
    </sec>
	
    <sec id="S2">
      <title>Methods</title>

    <p>The study took place at University College London. Two experiments
    were conducted, using the same methodology and materials. The study
    received full ethical approval from the UCL Research Ethics Committee.</p>

    <sec id="S2a">
      <title>Participants</title>

    <p>Experiment 1 involved 68 participants (21 English, 21 Polish, and 26
    Spanish native speakers) ranging from 19 to 42 years of age
    (<italic>M</italic>=26.51, <italic>SD</italic>=6.02). Spanish speakers were included given
    their exposure to dubbing. Polish speakers were more accustomed to
    watching subtitles in comparison with Spanish speakers. English speakers
    were used as a control group. However, even though the participants came
    from different audiovisual translation traditions, most of them declared
    that subtitling is their preferred mode of watching foreign films. They
    said they either use subtitles in their mother tongue or in English, which
    is not surprising given that the majority of the productions they watch
    are in English. This can be explained on the one hand by changing
    viewer habits (<xref ref-type="bibr" rid="b58">58</xref>) and on the other by the fact that our participants
    were living in the UK. The fact that they are frequent subtitle users also
    makes them a good group to ask about certain solutions used in subtitles,
    such as line breaks.</p>

    <p>As the subtitles in this study were in English, we asked Polish and
    Spanish participants to evaluate their proficiency in reading English
    using the Common European Framework of Reference for Languages (from A1 to
    C2). All the participants declared a reading level equal to or higher than
    B1. Of the total sample of Polish participants, 3 had a C1 level and 18
    had a C2 level. In the sample of Spanish participants, 1 had a B1 level, 4
    had a B2 level, 5 had a C1 and 16 had a C2 level. No statistically
    significant differences were found between the proficiency of Polish and
    Spanish participants, <italic>χ</italic><sup><italic>2</italic></sup>(3)=5.144,
    <italic>p</italic>=.162.</p>
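
    <p>The reported statistic can be reproduced from the proficiency counts
    above. The following pure-Python sketch is ours, not part of the study
    materials; note that the closed-form p-value used here is valid only for
    three degrees of freedom:</p>

```python
import math

# CEFR reading-proficiency counts reported above: (B1, B2, C1, C2)
polish = [0, 0, 3, 18]
spanish = [1, 4, 5, 16]

def chi_square(rows):
    """Pearson's chi-square statistic for a contingency table."""
    col_totals = [sum(col) for col in zip(*rows)]
    grand = sum(col_totals)
    stat = 0.0
    for row in rows:
        row_total = sum(row)
        for observed, col_total in zip(row, col_totals):
            expected = row_total * col_total / grand
            stat += (observed - expected) ** 2 / expected
    return stat

def chi2_sf_df3(x):
    """Survival function (p-value) of the chi-square distribution,
    using the closed form that holds only for df = 3."""
    return 1 - math.erf(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

chi2 = chi_square([polish, spanish])
p = chi2_sf_df3(chi2)
print(f"chi2(3) = {chi2:.3f}, p = {p:.3f}")  # chi2(3) = 5.144, p = 0.162
```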

    <p>Experiment 2 involved hearing, hard of hearing, and deaf
    participants from the UK. We recruited 40 participants (21 hearing, 10
    hard of hearing and 9 deaf) ranging from 20 to 74 years of age
    (<italic>M</italic>=35.59, <italic>SD</italic>=13.7). Before taking part in the experiment,
    hard of hearing and deaf participants completed a demographic
    questionnaire with information on their hearing impairment, age of hearing
    loss onset, communication preferences, etc. and were asked if they
    described themselves as either deaf or hard of hearing. Of the total
    sample of deaf and hard of hearing participants, 10 were profoundly deaf,
    6 were severely deaf and 3 had a moderate hearing loss. In relation to the
    age of onset, 7 were born deaf or hard of hearing, 4 lost hearing under
    the age of 8, 2 lost hearing between the ages of 9 and 17, and 6 lost hearing
    between the ages of 18 and 40. Except for the two participants who used a BSL
    interpreter, the other hard of hearing and deaf participants chose spoken and
    written English to communicate during the experiment.</p>

    <p>Participants were recruited using the UCL Psychology pool of
    volunteers, social media (Facebook page of the SURE project, Twitter), and
    personal networking. Hard of hearing and deaf participants were recruited
    with the help of the National Association of Deafened People and the UCL
    Deafness, Cognition and Language Centre participant pool. Hearing
    participants were paid £10 for participating in the experiment, following
    UCL hourly rates for experimental participants. Hard of hearing and deaf
    participants received £25 in recognition of the greater difficulty in
    recruiting special populations.</p>
    </sec>
	
    <sec id="S2b">
      <title>Design</title>

    <p>In each experiment, we employed a mixed factorial design. The
    independent between-subject variables were language in Experiment 1
    (English, Polish, Spanish) or hearing loss in Experiment 2 (hearing, hard
    of hearing and deaf), and the type of segmentation (syntactically
    segmented subtitles vs. non-syntactically segmented subtitles, henceforth
    referred to as SS and NSS, respectively). The main dependent variables
    were preferences on line breaks (SS and NSS) and eye tracking measures
    (dwell time, mean fixation duration and revisits).</p>
    </sec>
	
    <sec id="S2c">
      <title>Materials</title>

    <p>The subtitles used in this study were in English. One reason for this
    choice was that it would be difficult to test line breaks and subtitle
    segmentation across different languages. For instance, as opposed to
    English and Spanish, the Polish language does not have articles, so it
    would be impossible to compare this linguistic unit across the languages
    of study participants. Another reason for using English subtitles was that
    it is particularly in intralingual English-to-English subtitles on
    television in the UK (where our study materials came from and where this
    study was based) that non-syntactically based segmentation is common despite
    the current subtitling guidelines (<xref ref-type="bibr" rid="b2 b8">2, 8</xref>).</p>

    <p>The stimuli were 30 pairs of screenshots with subtitles in English
    from the BBC’s <italic>Sherlock, Series 4 </italic>(2017, dir. Mark Gatiss and
    Steven Moffat). Each pair contained exactly the same text, but differently
    segmented lines (see Figure 1). In one version, the two lines were segmented in accordance with
    subtitling standards, using syntactic rules to keep linguistic units on a
    single line (SS version). In the other version, syntactic rules were not
    followed and linguistic units were split between the first and the second
    line of the subtitle (NSS version).</p>

<fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1.</label>
					<caption>
						<p>Stimulus example with syntactically segmented (left) and
    non-syntactically segmented text (right).</p>
					</caption>	
					<graphic id="graph01" xlink:href="jemr-11-03-b-figure-01.png"/>
				</fig>


    <p>The following ten categories of the
    most common linguistic units (<xref ref-type="bibr" rid="b59">59</xref>) were manipulated in the study:</p>


      <p>1. Indefinite article + noun (<italic>IndArt</italic>)</p>

      <p>2. Definite article + noun (<italic>DefArt</italic>)</p>

      <p>3. To + infinitive (<italic>ToInf</italic>)</p>

      <p>4. Compound (<italic>Comp</italic>)</p>

      <p>5. Auxiliary + lexical verb (<italic>AuxVerb</italic>)</p>

      <p>6. Sentence + sentence (<italic>SentSent</italic>)</p>

      <p>7. Preposition (<italic>Prep</italic>)</p>

      <p>8. Possessive (<italic>Poss</italic>)</p>

      <p>9. Adjective + noun (<italic>AdjN</italic>)</p>

      <p>10. Conjunction (<italic>Conj</italic>)</p>
	  


    <p>For each of these categories, three instances, i.e. three different
    sentence stimuli, were shown (see Table 1 for examples). The presentation
    of screenshots (right/left) was counterbalanced, with 15 sentences in the
    SS condition displayed on the left, and 15 on the right. The order of
    presentation of the pairs (and therefore of different linguistic units)
    was randomised using SMI Experiment Centre.</p>
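
    <p>The counterbalancing and randomisation scheme described above can be
    sketched as follows. This is our illustrative reconstruction, not the
    actual SMI Experiment Centre configuration; the unit abbreviations follow
    the list above:</p>

```python
import random

UNITS = ["IndArt", "DefArt", "ToInf", "Comp", "AuxVerb",
         "SentSent", "Prep", "Poss", "AdjN", "Conj"]

def build_trial_list(seed=None):
    """Build the 30 trials (10 linguistic units x 3 instances).
    The syntactically segmented (SS) screenshot appears on the left in
    15 trials and on the right in the other 15; trial order is randomised."""
    rng = random.Random(seed)
    trials = [{"unit": unit, "instance": i} for unit in UNITS for i in (1, 2, 3)]
    ss_sides = ["left"] * 15 + ["right"] * 15
    rng.shuffle(ss_sides)              # which side shows the SS version
    for trial, side in zip(trials, ss_sides):
        trial["ss_side"] = side
    rng.shuffle(trials)                # randomise presentation order
    return trials

trials = build_trial_list(seed=42)
print(len(trials), sum(t["ss_side"] == "left" for t in trials))  # 30 15
```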
	
<table-wrap id="t01" position="float">
					<label>Table 1.</label>
					<caption>
						<p>Examples of linguistic units manipulated in the syntactically
    segmented and non-syntactically segmented versions.</p>						
					</caption>
					<graphic id="graph07" xlink:href="jemr-11-03-b-table-01.png"/>					
					</table-wrap>
		
    </sec>
	
    <sec id="S2d">
      <title>Apparatus</title>

    <p>An SMI RED 250 mobile eye tracker was used with a two-screen set-up, one
    for the experimenter and the other for the participant. Participants’ eye
    movements were recorded at a sampling rate of 250 Hz. The minimum
    duration of a fixation was set at 80 ms. We used the SMI velocity-based
    saccade detection algorithm. Participants with tracking ratio below 80%
    were excluded from eye tracking analyses. The experiment was designed and
    conducted using the SMI Experiment Suite. SMI BeGaze and SPSS v. 24 were
    used to analyse the data.</p>
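
    <p>The exclusion criterion amounts to a simple filter over participants.
    The identifiers and tracking ratios below are hypothetical, for
    illustration only:</p>

```python
def usable_participants(tracking_ratios, threshold=80.0):
    """Keep only participants whose tracking ratio (%) meets the
    threshold applied to the eye tracking analyses (80% here)."""
    return [p for p, ratio in tracking_ratios.items() if ratio >= threshold]

# Hypothetical tracking ratios per participant
ratios = {"p01": 95.2, "p02": 74.8, "p03": 88.0}
print(usable_participants(ratios))  # ['p01', 'p03']
```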
    </sec>
	
    <sec id="S2e">
      <title>Dependent variables</title>

    <p>The dependent variables were the preference score and three eye
    tracking measures (see Table 2). The preference score was calculated based
    on the preference expressed by a participant regarding each linguistic
    unit: as a percentage of people preferring SS or NSS subtitles in each
    linguistic unit. As there were three examples per unit, their scores were
    averaged per participant per unit. Participants expressed their preference
    by clicking on the picture with subtitles they thought were better (see
    Figure 2).</p>
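
    <p>The preference score described above can be computed as in the
    following sketch. The data layout and participant identifiers are ours,
    for illustration; as in the study, each participant contributes three
    choices per linguistic unit, one per instance:</p>

```python
from collections import defaultdict

def preference_scores(choices):
    """choices: list of (participant, unit, picked) tuples, where picked
    is 'SS' or 'NSS'. Returns, per participant and linguistic unit, the
    percentage of instances in which the SS version was chosen."""
    picks = defaultdict(list)
    for participant, unit, picked in choices:
        picks[(participant, unit)].append(picked == "SS")
    return {key: 100 * sum(v) / len(v) for key, v in picks.items()}

# Hypothetical participant choosing SS in 2 of the 3 IndArt instances
demo = [("p01", "IndArt", "SS"),
        ("p01", "IndArt", "NSS"),
        ("p01", "IndArt", "SS")]
print(round(preference_scores(demo)[("p01", "IndArt")], 1))  # 66.7
```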

<fig id="fig02" fig-type="figure" position="float">
					<label>Figure 2.</label>
					<caption>
						<p>Visualisation of mouse clicks on syntactically segmented (left) and
    non-syntactically segmented (right) subtitles (SentSent condition).</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-11-03-b-figure-03.png"/>
				</fig>

    <p>After completing the test with 30 pairs of subtitles, participants were
    asked a multiple-choice follow-up question displayed on the screen: <italic>
    What was most important for you when deciding which subtitles were
    better?</italic> The following options were provided: <italic>I chose those that
    looked like a pyramid/trapeze (shape), I chose those that looked like a
    rectangle (shape), I chose those that had semantic and syntactic phrases
    together, I don’t know</italic>. In the post-test interview, we asked the
    participants if they prefer to have the first line in the subtitle
    shorter, longer or the same length as the second line, which prompted them
    to elaborate on their choices and allowed us to elicit their views on line
    breaks in subtitling.</p>

    <p>Eye tracking analyses were conducted on data from areas of interest
    (AOIs) drawn for each subtitle in each screenshot. The three eye tracking
    measures used in this study are described in Table 2.</p>
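
    <p>Under common definitions of these measures (dwell time as the summed
    duration of fixations inside the AOI, revisits as re-entries after the
    first visit; these are our assumptions, and the exact SMI BeGaze
    operationalisation may differ), they can be computed from a fixation
    sequence as follows:</p>

```python
def aoi_measures(fixations, aoi):
    """fixations: chronological list of (x, y, duration_ms) tuples.
    aoi: (x_min, y_min, x_max, y_max) rectangle drawn around one subtitle.
    Returns (dwell time, mean fixation duration, revisits) for the AOI."""
    x0, y0, x1, y1 = aoi
    inside = [x0 <= x <= x1 and y0 <= y <= y1 for x, y, _ in fixations]
    durations = [d for (_, _, d), hit in zip(fixations, inside) if hit]
    dwell = sum(durations)                      # summed fixation durations
    mean_fix = dwell / len(durations) if durations else 0.0
    # An entry is a fixation inside the AOI preceded by none, or by one outside
    entries = sum(1 for i, hit in enumerate(inside)
                  if hit and (i == 0 or not inside[i - 1]))
    revisits = max(0, entries - 1)              # re-entries after the first visit
    return dwell, mean_fix, revisits

# Hypothetical 1280x720 frame: three fixations on the subtitle, one on the image
fixations = [(100, 600, 200), (180, 610, 150), (400, 300, 250), (150, 605, 180)]
subtitle_aoi = (0, 550, 1280, 720)
dwell, mean_fix, revisits = aoi_measures(fixations, subtitle_aoi)
print(dwell, round(mean_fix, 1), revisits)  # 530 176.7 1
```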
	
<table-wrap id="t02" position="float">
					<label>Table 2.</label>
					<caption>
						<p>Description of the eye tracking measures.</p>				
					</caption>
					<graphic id="graph08" xlink:href="jemr-11-03-b-table-02.png"/>				
					</table-wrap>	

    </sec>
	
    <sec id="S2f">
      <title>Procedure</title>


    <p>Participants were tested individually in a lab. They were informed that the
    study was on the quality of subtitles. The details of the experiment were
    not revealed until the end of the test during the debrief.</p>

    <p>Before starting the test, participants read the information sheet,
    signed an informed consent form and underwent a 9-point calibration
    procedure. Participants saw 30 pairs of screenshots in randomised order.
    From each pair, participants had to select (i.e. click on) the screenshot
    with the subtitle segmentation they preferred (SS or NSS). Participants
    then answered the question on segmentation style preference. At the end,
    they undertook a short interview in which they expressed their views on
    subtitle segmentation based on the test and their personal experience with
    subtitles. The session concluded with a debrief. The
    experiment lasted approximately 15 minutes, depending on the time it took
    the participants to answer the questions and participate in the
    interview.</p>
    </sec>
    </sec>
	
    <sec id="S3">
      <title>Results</title>


    <p>All raw data, results and experimental protocols from this
    experiment are openly available in the RepOD repository (<xref ref-type="bibr" rid="b63">63</xref>).</p>

    <sec id="S3a">
      <title>Experiment 1</title>
    </sec>	  
    <sec id="S3b">
      <title>Preferences</title>


    <p>We conducted a 2 x 3 mixed ANOVA with segmentation (SS vs. NSS
    subtitles) as a within-subjects factor and language (English, Polish,
    Spanish) as a between-subjects factor, with the percentage of preference for
    a particular linguistic unit as the dependent variable. For all linguistic
    units tested, we found a large main effect of segmentation (see Table
    3). The SS subtitles were preferred over the NSS ones.</p> 
 
 <p>Figure 3 shows preferences by linguistic unit and Table 3 by participant group. There
    were no differences between groups in any of the linguistic conditions and
    no interactions: regardless of their mother tongue, all
    participants had similar preferences.</p>

<fig id="fig03" fig-type="figure" position="float">
					<label>Figure 3.</label>
					<caption>
						<p>Preferences for SS and NSS subtitles by linguistic units in
    Experiment 1</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-11-03-b-figure-03.png"/>
				</fig>
				
<table-wrap id="t03" position="float">
					<label>Table 3.</label>
					<caption>
						<p>Percentage of participants who preferred the syntactically
    segmented condition.</p>				
					</caption>
					<graphic id="graph09" xlink:href="jemr-11-03-b-table-03.png"/>				
					</table-wrap>					

    <p>As shown in Figure 4, the overwhelming majority of
    participants made their choices based on semantic and syntactic units
    rather than subtitle shape. Most Polish participants declared that they
    prioritized semantic and syntactic units, whereas for English and Spanish
    participants the pyramid shape also influenced their choice.</p>

<fig id="fig04" fig-type="figure" position="float">
					<label>Figure 4.</label>
					<caption>
						<p>Segmentation preferences by group and style</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-11-03-b-figure-04.png"/>
				</fig>

    </sec>
	
    <sec id="S3c">
      <title>Eye tracking measures</title>

    <p>Due to data quality issues, eye tracking analyses in Experiment 1 were
    conducted on 16 English, 16 Polish and 18 Spanish participants.</p>
	
    <sec id="S3ca">
      <title>Dwell time</title>

    <p>There was a main effect of segmentation on dwell
    time in all linguistic units apart from <italic>ToInf</italic>, <italic>SentSent</italic> and
    <italic>Prep</italic> (see Table 4). Dwell time was higher in the SS condition for most
    noun phrases (<italic>IndArt</italic>, <italic>DefArt</italic>, <italic>Comp</italic>, <italic>Poss</italic>) and for
    <italic>Conj</italic>, but higher in the NSS condition for <italic>AuxVerb</italic> and <italic>AdjN</italic>. There was no
    main effect of language on dwell time in any of the linguistic units. We
    found an interaction, approaching statistical significance, between
    segmentation and language in <italic>Poss</italic>, <italic>F</italic>(2,47)=3.092,
    <italic>p</italic>=.055, &#x3B7;<sub>p</sub>&#x00B2;=.116. We
    decomposed this interaction with simple effects with Bonferroni correction
    and found that for English participants there was a main effect of
    segmentation on dwell time in <italic>Poss</italic>, <italic>F</italic>(1,15)=13.217, <italic>p</italic>=.002, &#x3B7;<sub>p</sub>&#x00B2;=.468. Their
    dwell time was higher in the SS condition than in the NSS condition. There
    was no main effect for either Polish or Spanish participants.</p>
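<p>Decomposing an interaction with simple effects amounts to testing the within-subjects contrast separately in each group, with the familywise alpha divided by the number of tests. The sketch below illustrates the logic with hypothetical dwell-time values (not the study's data), using paired t-tests, which are equivalent to one-degree-of-freedom simple-effect F-tests:</p>

```python
import numpy as np
from scipy import stats

def simple_effects(groups, alpha=0.05):
    """Per-group paired SS-vs-NSS test with Bonferroni correction.

    `groups` maps a group label to (ss_scores, nss_scores) sequences.
    The familywise alpha is divided by the number of groups tested;
    F and partial eta squared are derived from the paired t statistic.
    """
    k = len(groups)
    out = {}
    for label, (ss, nss) in groups.items():
        t, p = stats.ttest_rel(ss, nss)
        df = len(ss) - 1
        F = t ** 2
        out[label] = {"F": F, "p": p, "eta_p2": F / (F + df),
                      "significant": p < alpha / k}
    return out

# Hypothetical dwell times (ms); only the first group shows an effect:
res = simple_effects({
    "English": ([900, 950, 980, 1020, 960], [700, 720, 690, 750, 710]),
    "Polish":  ([800, 810, 790, 805, 795], [802, 808, 792, 803, 798]),
    "Spanish": ([850, 860, 840, 855, 845], [848, 862, 839, 856, 844]),
})
```

This mirrors the pattern reported above: a significant simple effect of segmentation in one group and none in the others.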
	
<table-wrap id="t04" position="float">
					<label>Table 4.</label>
					<caption>
						<p>Dwell Time on subtitles by linguistic unit and segmentation
    (ms).</p>				
					</caption>
					<graphic id="graph10" xlink:href="jemr-11-03-b-table-04.png"/>				
					</table-wrap>	
	
    </sec>

    <sec id="S3cb">
      <title>Mean fixation duration (MFD)</title>


    <p>There was a main effect of segmentation on MFD in only one linguistic
    unit, <italic>AdjN</italic> (Table 5), where the SS condition resulted in higher MFD
    than the NSS one. We also found an interaction approaching significance between segmentation and
    language in <italic>DefArt</italic>, <italic>F</italic>(2,41)=3.199, <italic>p</italic>=.051, &#x3B7;<sub>p</sub>&#x00B2;=.135. We decomposed this
    interaction with simple effects with Bonferroni correction and found that
    for Polish participants there was a main effect of segmentation on MFD in
    <italic>DefArt</italic>, <italic>F</italic>(1,12)=8.215, <italic>p</italic>=.014, &#x3B7;<sub>p</sub>&#x00B2;=.140; their mean fixation duration
    was longer in the NSS condition. There was no main effect for English or
    Spanish participants.</p>


    <p>There was a main effect of language on MFD in a number of linguistic
    units (see Table 6). Post-hoc Bonferroni tests showed that Polish participants had
    significantly shorter MFD than Spanish participants in <italic>IndArt</italic>,
    <italic>p</italic>=.042, 95% CI [-74.52, -1.06]; <italic>DefArt</italic>, <italic>p</italic>=.020, 95%
    CI [-60.83, -4.21]; <italic>ToInf</italic>, <italic>p</italic>=.009, 95% CI [-68.47, -7.97];
    <italic>Comp</italic>, <italic>p</italic>=.029, 95% CI [-61.92, -2.62]; and <italic>Prep</italic>,
    <italic>p</italic>=.034, 95% CI [-66.18, -1.95]. English participants did not differ
    from Polish or Spanish participants.</p>
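<p>Post-hoc pairwise comparisons of this kind report a mean difference with a Bonferroni-adjusted confidence interval for each pair of groups. The sketch below illustrates the idea with hypothetical MFD values (not the study's data), using Welch t-tests as a stand-in; the authors' exact post-hoc procedure may differ in its error term:</p>

```python
import numpy as np
from itertools import combinations
from scipy import stats

def bonferroni_pairwise(groups, alpha=0.05):
    """Pairwise between-group mean differences with Bonferroni-adjusted
    p-values and confidence intervals (Welch t-tests; illustrative only)."""
    pairs = list(combinations(groups, 2))
    k = len(pairs)                       # number of comparisons in the family
    results = {}
    for a, b in pairs:
        x, y = np.asarray(groups[a], float), np.asarray(groups[b], float)
        t, p = stats.ttest_ind(x, y, equal_var=False)
        vx, vy = x.var(ddof=1) / x.size, y.var(ddof=1) / y.size
        # Welch-Satterthwaite degrees of freedom for the interval
        df = (vx + vy) ** 2 / (vx**2 / (x.size - 1) + vy**2 / (y.size - 1))
        crit = stats.t.ppf(1 - alpha / (2 * k), df)  # alpha split over k pairs
        diff = x.mean() - y.mean()
        se = np.sqrt(vx + vy)
        results[(a, b)] = {"diff": diff, "p_adj": min(1.0, p * k),
                           "ci": (diff - crit * se, diff + crit * se)}
    return results

# Hypothetical MFD values (ms): Polish clearly shorter than Spanish,
# English variable and distinguishable from neither.
res_mfd = bonferroni_pairwise({
    "Polish":  [200, 210, 205, 195, 198, 202],
    "Spanish": [260, 255, 270, 248, 265, 258],
    "English": [180, 260, 230, 200, 255, 215],
})
```

A CI that excludes zero after adjustment, as for Polish vs. Spanish here, corresponds to the significant contrasts reported above.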
	
<table-wrap id="t05" position="float">
					<label>Table 5.</label>
					<caption>
						<p>Mean fixation duration by linguistic unit and segmentation.</p>				
					</caption>
					<graphic id="graph11" xlink:href="jemr-11-03-b-table-05.png"/>				
					</table-wrap>	

<table-wrap id="t06" position="float">
					<label>Table 6.</label>
					<caption>
						<p>ANOVA results for between-subject effects in mean fixation
    duration in Experiment 1.</p>				
					</caption>
					<graphic id="graph12" xlink:href="jemr-11-03-b-table-06.png"/>				
					</table-wrap>					

    </sec>

    <sec id="S3cc">
      <title>Revisits</title>


    <p>To see whether NSS subtitles induced more re-reading, which would
    indicate lower readability, we analysed the number of revisits to the
    subtitles. We found a main effect of segmentation on revisits in all
    linguistic units apart from <italic>SentSent</italic>, <italic>Prep</italic> and <italic>Conj</italic>
    (see Table 7). Contrary to expectations, the number of revisits was higher
    in the SS condition for noun phrases (<italic>IndArt</italic>, <italic>DefArt</italic>,
    <italic>Comp</italic>, <italic>Poss</italic>). As for verb phrases (<italic>ToInf</italic>,
    <italic>AuxVerb</italic>) and <italic>AdjN</italic>, revisits were higher in the NSS
    condition.</p>

    <p>We found interactions between segmentation and language in
    <italic>Poss</italic>, <italic>F</italic>(2,53)=3.418, <italic>p</italic>=.040, &#x3B7;<sub>p</sub>&#x00B2;=.114, and <italic>AdjN</italic>,
    <italic>F</italic>(2,53)=7.696, <italic>p</italic>=.001, &#x3B7;<sub>p</sub>&#x00B2;=.225. We decomposed these
    interactions with simple effects with Bonferroni correction and found that
    for English participants there was a main effect of segmentation on
    revisits in <italic>Poss</italic>, <italic>F</italic>(1,17)=20.823, <italic>p</italic>&lt;.001, &#x3B7;<sub>p</sub>&#x00B2;=.551, and <italic>AdjN</italic>,
    <italic>F</italic>(1,17)=5.017, <italic>p</italic>=.039, &#x3B7;<sub>p</sub>&#x00B2;=.228. Revisits for <italic>Poss</italic> were higher in
    the SS condition and for <italic>AdjN</italic> in the NSS condition. For
    Polish participants, there was no main effect of segmentation in
    <italic>Poss</italic>, but there was a main effect in <italic>AdjN</italic>,
    <italic>F</italic>(1,15)=26.340, <italic>p</italic>&lt;.001, &#x3B7;<sub>p</sub>&#x00B2;=.637, with revisits higher in the NSS
    condition. For Spanish participants, we found a main effect in
    <italic>Poss</italic>, <italic>F</italic>(1,21)=5.469, <italic>p</italic>=.029, &#x3B7;<sub>p</sub>&#x00B2;=.207, but only a tendency in
    <italic>AdjN</italic>, <italic>F</italic>(1,21)=3.980, <italic>p</italic>=.059, &#x3B7;<sub>p</sub>&#x00B2;=.159. They had more revisits for
    <italic>Poss</italic> in the SS condition, whereas there were more revisits for
    <italic>AdjN</italic> in the NSS condition.</p>

    <p>There was no main effect of language on revisits in any of the
    linguistic units, apart from <italic>AuxVerb</italic>, <italic>F</italic>(2,53)=6.437,
    <italic>p</italic>=.003, &#x3B7;<sub>p</sub>&#x00B2;=.195. Post-hoc Bonferroni tests
    showed that Polish participants made significantly more revisits than
    Spanish participants, <italic>p</italic>=.003, 95% CI [.37, 2.10], with revisits
    higher in the NSS condition for both groups.</p>
	
	<table-wrap id="t07" position="float">
					<label>Table 7.</label>
					<caption>
						<p>Revisits by linguistic unit and segmentation.</p>				
					</caption>
					<graphic id="graph13" xlink:href="jemr-11-03-b-table-07.png"/>				
					</table-wrap>	

    </sec>
	
    <sec id="S3cd">
      <title>Discussion</title>	



    <p>All participants preferred SS over NSS subtitles. The strongest effect
    was found in the SS <italic>SentSent</italic> condition, with 86% of participants
    expressing a preference for the syntactically cued subtitles compared to 14%
    for the non-syntactically cued ones. Most participants stated that they
    preferred subtitles segmented according to semantic and syntactic phrase
    structures, not shape.</p>

    <p>Two interesting patterns emerged from the eye tracking results on the time
    spent reading noun and verb phrases in the subtitles. SS subtitles
    consistently induced longer dwell time for noun phrases (<italic>IndArt,
    DefArt, Comp, Poss</italic>), whereas NSS subtitles induced longer dwell time
    for verb phrases (<italic>AuxVerb</italic> and <italic>ToInf</italic>). We observed an
    interaction effect for English participants: for <italic>Poss</italic>, they had
    longer dwell time in the SS than in the NSS condition, an effect absent in
    Polish and Spanish participants.</p>

    <p>Results in revisits followed the same pattern: participants made more
    revisits in the SS subtitles in noun phrases (<italic>IndArt</italic>,
    <italic>DefArt</italic>, <italic>Comp</italic>, <italic>Poss</italic>) and more revisits in NSS
    subtitles in verb phrases (<italic>ToInf</italic>, <italic>AuxVerb</italic>). The interactions
    indicated that there were more revisits for <italic>AdjN</italic> in the NSS condition
    across the three groups and for <italic>Poss</italic> in the SS condition for
    English and Spanish participants. These results seem to indicate that noun
    phrases are more difficult to process in the SS condition, and verb phrases
    in the NSS condition.</p>

    <p>In line with our predictions, Spanish participants, who come from a
    dubbing tradition, showed longer mean fixation duration than Polish
    participants in both SS and NSS subtitles. There was an interaction
    showing that Polish participants had more difficulty processing <italic>DefArt</italic> in the
    NSS condition, with longer mean fixation duration.</p>
    </sec>
	    </sec>
		
    <sec id="S3d">
      <title>Experiment 2</title>
	    </sec>	  
    <sec id="S3e">
      <title>Preferences</title>


    <p>Similarly to Experiment 1, we conducted a 2 x 3 mixed ANOVA with
    segmentation (SS vs. NSS subtitles) as a within-subjects factor and hearing
    loss (hearing, hard of hearing, and deaf) as a between-subjects factor,
    with the percentage of preference for a linguistic unit as the dependent
    variable.</p>

    <p>This time we found a main effect of segmentation in all linguistic
    parameters apart from <italic>AuxVerb</italic> and <italic>AdjN</italic>: the SS subtitles
    were preferred over the NSS ones. Figure 5 presents general preferences
    for all linguistic units and Table 8 shows how they differed by hearing
    loss.</p>

<fig id="fig05" fig-type="figure" position="float">
					<label>Figure 5.</label>
					<caption>
						<p>Preferences for SS and NSS subtitles by linguistic units in
    Experiment 2.</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-11-03-b-figure-05.png"/>
				</fig>
				
	<table-wrap id="t08" position="float">
					<label>Table 8.</label>
					<caption>
						<p>Percentage of Experiment 2 participants who preferred the
    syntactically segmented condition.</p>				
					</caption>
					<graphic id="graph14" xlink:href="jemr-11-03-b-table-08.png"/>				
					</table-wrap>					

    <p>We found an interaction approaching significance between segmentation and
    hearing loss in <italic>DefArt</italic>, <italic>F</italic>(2,37)=3.086, <italic>p</italic>=.058,
    &#x3B7;<sub>p</sub>&#x00B2;=.143. We decomposed it with simple
    effects with Bonferroni correction and found that for hearing participants
    there was a main effect of segmentation on preference in <italic>DefArt</italic>,
    <italic>F</italic>(1,20)=19.375, <italic>p</italic>&lt;.001, &#x3B7;<sub>p</sub>&#x00B2;=.492, as well as for hard of
    hearing participants, <italic>F</italic>(1,9)=7.111, <italic>p</italic>=.026, &#x3B7;<sub>p</sub>&#x00B2;=.441, but there was no effect for
    deaf participants, who expressed a slight but non-significant preference
    towards NSS.</p>

    <p>There was a main effect of hearing loss in <italic>AdjN</italic>, <italic>F</italic>(2,37)=3.469,
    <italic>p</italic>=.042, &#x3B7;<sub>p</sub>&#x00B2;=.158, and a tendency approaching
    significance in <italic>Comp</italic>, <italic>F</italic>(2,37)=3.063, <italic>p</italic>=.059, &#x3B7;<sub>p</sub>&#x00B2;=.142. Post-hoc Bonferroni tests
    showed that hearing participants tended to express a higher preference for
    SS <italic>AdjN</italic> than hard of hearing participants, <italic>p</italic>=.051, 95% CI
    [-.0009, .0834], as well as for SS <italic>Comp</italic>, <italic>p</italic>=.057, 95% CI
    [-.1001, .0001]. No statistically significant difference was found for
    the group of deaf participants.</p>

    <p>When asked about their choices, most hearing and hard of hearing
    participants declared that they prioritized semantic and syntactic units,
    whereas for deaf participants it was the subtitle shape that was more
    important, as shown in Figure 6.</p>

<fig id="fig06" fig-type="figure" position="float">
					<label>Figure 6.</label>
					<caption>
						<p>Segmentation preferences by group.</p>
					</caption>
					<graphic id="graph06" xlink:href="jemr-11-03-b-figure-06.png"/>
				</fig>

	  </sec>
		
    <sec id="S3f">
      <title>Eye tracking measures</title>

    <p>Due to data quality issues, eye tracking analyses in Experiment 2 were
    conducted on 16 hearing, 8 hard of hearing and 5 deaf participants.</p>

    <sec id="S3fa">
      <title>Dwell time</title>

    <p>We found a significant main effect of segmentation on dwell time in
    <italic>IndArt</italic>, <italic>AuxVerb</italic> and <italic>Poss</italic> (see Table 9). Dwell time
    was higher for <italic>IndArt</italic> in the SS condition and for <italic>AuxVerb</italic> in
    the NSS condition.</p>

    <p>We found interactions between segmentation and hearing loss in dwell
    time for <italic>AdjN</italic>, <italic>F</italic>(2,26)=7.898, <italic>p</italic>=.002, &#x3B7;<sub>p</sub>&#x00B2;=.378, and <italic>Conj</italic>,
    <italic>F</italic>(2,26)=4.334, <italic>p</italic>=.024, &#x3B7;<sub>p</sub>&#x00B2;=.250. We decomposed these
    interactions with simple effects with Bonferroni correction and found that
    for hard of hearing participants there was a main effect of segmentation
    on dwell time in <italic>AdjN</italic>, <italic>F</italic>(1,7)=31.727, <italic>p</italic>=.001, &#x3B7;<sub>p</sub>&#x00B2;=.819, and <italic>Conj</italic>,
    <italic>F</italic>(1,7)=8.306, <italic>p</italic>=.024, &#x3B7;<sub>p</sub>&#x00B2;=.543. Dwell time was higher for
    <italic>AdjN</italic> in the NSS condition and for <italic>Conj</italic> in the SS condition.
    For hard of hearing participants, dwell time for <italic>Poss</italic> was higher
    in the SS condition, whereas for deaf participants it was higher in the NSS
    condition. There was no effect for hearing or deaf participants in
    <italic>AdjN</italic> and <italic>Conj</italic>.</p>

    <p>Between-subject analysis showed a significant main effect of hearing
    loss in <italic>DefArt</italic> (<italic>F</italic>(2,26)=3.846, <italic>p</italic>=.034, &#x3B7;<sub>p</sub>&#x00B2; = .228) and a tendency approaching
    significance in <italic>SentSent</italic> (<italic>F</italic>(2,26)=3.241, <italic>p</italic>=.055,
    &#x3B7;<sub>p</sub>&#x00B2;=.200). Post-hoc tests with
    Bonferroni correction showed that deaf participants had significantly
    lower dwell time than hard of hearing in <italic>DefArt</italic>, <italic>p</italic>=.032, 95%
    CI [-1801.76, -64.33]. Hard of hearing participants tended to have higher
    dwell time than hearing participants in <italic>SentSent</italic>, <italic>p</italic>=.053,
    95% CI [-962.76, -4.14].</p>
	
	<table-wrap id="t09" position="float">
					<label>Table 9.</label>
					<caption>
						<p>Dwell Time by linguistic unit and segmentation (ms).</p>				
					</caption>
					<graphic id="graph15" xlink:href="jemr-11-03-b-table-09.png"/>				
					</table-wrap>	

    </sec>
	
    <sec id="S3fb">
      <title>Mean fixation duration (MFD)</title>

    <p>Segmentation had no effect on MFD (Table 10) and there were no
    interactions between segmentation and degree of hearing loss.</p>

    <p>There was a main effect of hearing loss on mean fixation duration in
    <italic>SentSent</italic>, <italic>F</italic>(2,20)=3.603, <italic>p=.</italic>046, &#x3B7;<sub>p</sub>&#x00B2; =.265.</p>

    <p>Post-hoc Bonferroni tests showed that hard of hearing participants had
    significantly longer mean fixation durations than hearing participants in
    <italic>SentSent</italic>, <italic>p</italic>=.044, 95% CI [-59.84, -64]. Mean fixation
    duration for <italic>SentSent</italic> was higher in the SS condition for both
    groups.</p>
	
	<table-wrap id="t10" position="float">
					<label>Table 10.</label>
					<caption>
						<p>Mean Fixation Duration by linguistic unit and
    segmentation</p>				
					</caption>
					<graphic id="graph16" xlink:href="jemr-11-03-b-table-10.png"/>				
					</table-wrap>		

    </sec>
	
    <sec id="S3fc">
      <title>Revisits</title>

    <p>We found a significant main effect of segmentation on revisits in
    <italic>IndArt</italic>, <italic>AuxVerb</italic> and <italic>Poss</italic> (see Table 11). The number of revisits was
    higher for <italic>IndArt</italic> and <italic>Poss</italic> in the SS condition and for
    <italic>AuxVerb</italic> in the NSS condition.</p>

    <p>We also found an interaction between segmentation and hearing loss in
    revisits in <italic>ToInf</italic>, <italic>F</italic>(2,29)=4.148, <italic>p</italic>=.026, &#x3B7;<sub>p</sub>&#x00B2;=.222.
	We decomposed this interaction with simple effects with Bonferroni
    correction and found that deaf participants tended to have more revisits
    for <italic>ToInf</italic> in the SS condition, <italic>F</italic>(1,4)=6.968, <italic>p</italic>=.058,
    &#x3B7;<sub>p</sub>&#x00B2;=.635. There was no effect for
    hearing or hard of hearing participants.</p>
	
	<table-wrap id="t11" position="float">
					<label>Table 11.</label>
					<caption>
						<p>Revisits by linguistic unit and segmentation.</p>				
					</caption>
					<graphic id="graph17" xlink:href="jemr-11-03-b-table-11.png"/>				
					</table-wrap>	

    </sec>
	
    <sec id="S3fd">
      <title>Discussion</title>

    <p>Similarly to Experiment 1, most participants expressed a marked
    preference for SS subtitles. Again, the strongest effect was in the
    <italic>SentSent</italic> condition, with 90% of participants choosing the SS
    condition compared to 10% choosing NSS. Deaf participants showed weaker
    preferences than the other groups for SS subtitles in function words such
    as <italic>DefArt</italic>, <italic>Conj</italic>,
    <italic>Poss</italic> and <italic>Prep</italic>.</p>

    <p>Hearing and hard of hearing participants stated clearly they chose
    subtitles based on semantic and syntactic phrases, whereas deaf
    participants based their decisions on shape, with the preference towards
    the pyramid-shaped subtitles.</p>

    <p>Deaf participants also seemed to have more difficulty than the other
    groups processing certain syntactic structures, as shown by the eye
    tracking results: they tended to have more revisits for the SS <italic>ToInf</italic>
    compared to hearing and hard of hearing participants.</p>
    </sec>
	
    <sec id="S3fe">
      <title>Interviews</title>

    <p>In the post-task interviews, more than half of the participants
    of all the groups stated that they preferred line breaks that follow
    syntactic and semantic rules. However, a number of participants opted for
    non-syntactic line breaks, stating they give them a sense of continuity in
    reading, especially for some linguistic categories such as <italic>ToInf</italic> or
    <italic>IndArt</italic>. Many participants commented that segmentation should keep
    syntax and shape in balance; subtitles should be chunked according to
    natural thoughts, so that they can be read as quickly as possible. Other
    participants specified that segmentation might be an important aspect for
    slow readers. One interesting observation by a hard of hearing participant
    was that “line breaks have their value, yet when you are reading fast most
    of the time it becomes less relevant.”</p>
    </sec>
    </sec>
    </sec>
	
    <sec id="S4">
      <title>General discussion</title>

    <p>In this study we investigated the preferences and reactions of viewers
    to syntactically segmented (SS) and non-syntactically segmented (NSS) text
    in subtitles. Our study combined an offline, metalinguistic measure of
    preference with online eye tracking-based reading time measures. To
    determine whether these measures depend on previous experience with
    subtitling or on hearing loss, we tested participants from countries with
    different audiovisual translation traditions: hearing people from the UK,
    Poland and Spain as well as British deaf, hard of hearing, and hearing
    viewers. We expected participants to prefer SS subtitles as this type of
    segmentation follows the “natural sentence structure” (<xref ref-type="bibr" rid="b21">21</xref>). We also
    hypothesized that NSS text would be more difficult to read, resulting in
    longer reading times. Our predictions were confirmed in relation to
    preferences, but only partially confirmed when it comes to eye tracking
    measures.</p>

    <p>The most important finding of this study is that viewers expressed a
    very clear preference for syntactically segmented text in subtitles. They
    also declared in post-test interviews that when making their decisions,
    they relied on syntactic and semantic considerations rather than on
    subtitle shape. These results confirm previous conjectures expressed in
    subtitling guidelines (<xref ref-type="bibr" rid="b5 b6">5, 6</xref>) and provide empirical evidence in their
    support.</p>

    <p>SS text was preferred over NSS in nearly all linguistic units by all
    types of viewers except for the deaf in the case of the definite article.
    The largest preference for SS was found in the <italic>SentSent</italic> condition,
    whereas the lowest was found for <italic>AuxVerb</italic>. The <italic>SentSent</italic>
    condition was the only one in our study which included punctuation. The
    two sentences in a subtitle were clearly separated by a full stop, thus
    providing participants with guidance on where one unit of meaning finished
    and another began. Viewers preferred punctuation marks to be placed at the
    end of the first line and not separating the subject from the predicate in
    the second sentence, thus supporting the view that each subtitle line
    should contain one clause or sentence (<xref ref-type="bibr" rid="b6">6</xref>). In contrast, in the
    <italic>AuxVerb</italic> condition, which tested the splitting of the auxiliary from
    the main verb in a two-constituent verb phrase, the viewers preferred SS
    text, but their preference was not as strong as in the case of the
    <italic>SentSent</italic> condition. It is plausible that in order to fully
    integrate the meaning of text in the subtitle, viewers needed to process
    not only the verb phrase itself (auxiliary + main verb), but also the verb
    complement.</p>

    <p>Contrary to our predictions, some linguistic units took longer to
    read in the SS than in the NSS condition, as reflected by longer dwell
    time and more revisits. To interpret the differences between linguistic
    units, we classified some of them as noun or verb phrases. The
    <italic>IndArt</italic>, <italic>DefArt</italic>, <italic>Comp</italic> and <italic>Poss</italic> conditions were
    grouped under the umbrella term ‘noun phrases’, whereas <italic>AuxVerb</italic> was
    classed as a ‘verb phrase’. In general, people spent more time reading the SS text in
    noun phrases, and more time reading the NSS text in the <italic>AuxVerb</italic> condition.
    This finding goes against the results reported by (<xref ref-type="bibr" rid="b27">27</xref>), who tested
    ‘ill-segmented’ and ‘well-segmented’ noun phrases in Italian subtitles on
    a group of hearing people, and found no differences in the number of
    fixations or proportion of fixation time between the SS and NSS
    conditions. Interestingly, the authors also found a slightly longer mean
    fixation duration on NSS subtitles (228 ms in NSS compared to 216 ms in
    SS) – a result which was not confirmed by our data. In fact, in our study
    the mean fixation duration in the noun phrase <italic>AdjN</italic> in Experiment 1
    was longer in the SS than in the NSS condition. That readers looked longer
    at this noun phrase category in the SS condition may be attributed to its
    final position at the end of the first subtitle line.</p>

    <p>Compare, for instance:</p>


    <p> (SS) He's looking for the memory stick</p> 
	<p> he managed to hide.</p>

   
   <p>and</p>


    <p> (NSS) He's looking for the memory </p> 
	<p>stick he managed to hide.</p>


    <p>where in the SS condition, the complete noun phrase <italic>Comp</italic>
    is situated at the end of the first subtitle line. Rayner et al. (<xref ref-type="bibr" rid="b64">64</xref>) found that readers
    looked longer at noun phrases when they were in the clause-final position.
    Syntactically segmented text in subtitles is characterized by the presence
    of complete phrases at the end of lines (<xref ref-type="bibr" rid="b6">6</xref>). According to Rayner et al.
    (<xref ref-type="bibr" rid="b64">64</xref>), readers “fixate longer on a word when it ends a clause than when the
    same word does not end a clause,” which could explain the longer fixation
    time. This result may be taken as an indication that people integrate the
    information from the clause at its end, including any unfinished
    processing before they move on, which has been referred to in the literature
    as the “clause wrap-up effect” (<xref ref-type="bibr" rid="b64 b65">64, 65</xref>).</p>

    <p>This study also brought to light some important differences between how
    various types of viewers process line breaks in subtitling. Spanish
    viewers, who are generally less accustomed to subtitling and more to
    dubbing, had the longest mean fixation duration in a number of linguistic
    units, indicating more effortful cognitive processing (<xref ref-type="bibr" rid="b60">60</xref>) compared to
    Polish participants, who were more accustomed to subtitling. This result
    is not necessarily related to the nature of text segmentation, but rather
    to participant characteristics.</p>

    <p>We also discovered interesting patterns of results depending on hearing
    loss. Deaf participants were not as concerned about syntactic segmentation
    as other groups, which was demonstrated by a lack of effect of
    segmentation on preferences in some linguistic units. This finding
    confirms our initial prediction about deaf people experiencing more
    difficulties in processing syntactic structures. The fact that there was
    no effect of segmentation in <italic>DefArt</italic> for deaf participants, combined
    with their longer dwell time spent on reading sentences in the
    <italic>DefArt</italic> condition, should perhaps be unsurprising, considering that
    deaf people with profound and severe prelingual hearing loss tend to
    experience difficulties with function words, including articles (<xref ref-type="bibr" rid="b48 b49 b50">48, 49, 50</xref>).
    This effect can be attributed to the absence of many function words in
    sign languages, their context-dependence and low fixed semantic content
    (<xref ref-type="bibr" rid="b48 b66">48, 66</xref>).</p>

    <p>One important limitation of this study is that we tested static text of
    subtitles rather than dynamically changing subtitles displayed naturally
    as part of a film. We chose this approach because it enabled us
    to control linguistic units and to present participants with two clear
    conditions to compare. However, this self-paced reading allowed
    participants to take as much time as they needed to complete the task,
    whereas in real-life subtitling, viewers have no control over the
    presentation speed and have thus less time to process subtitles. The
    understanding of subtitled text is also context-sensitive, and as our
    study only contained screenshots, it did not allow participants to rely
    more on the context to interpret the sentences, as they would normally do
    when watching subtitled videos. Another limitation is the lack of sound,
    which could have given more context to hearing and hard of hearing
    participants. Yet, despite these limitations in ecological validity, we
    believe that this study contributes to our understanding of processing
    different linguistic units in subtitles.</p>

    <p>Future research could look into subtitle segmentation in subtitled
    videos (see also Gerber-Morón and Szarkowska (<xref ref-type="bibr" rid="b28">28</xref>)), using languages
    with syntactic structures different from English, which was the only language
    tested in this study. Further research is also required to fully
    understand the impact of word frequency and word length on the reading of
    subtitles (<xref ref-type="bibr" rid="b67 b68">67, 68</xref>). Subtitle segmentation implications could also be
    explored across subtitles, when a sentence runs over two or more
    subtitles.</p>

    <p>Our findings may have direct implications for current subtitling
    practices: where possible, text in subtitles should be segmented to keep
    syntactic phrases together. This is particularly important in the case of
    two clauses or sentences separated by a punctuation mark, and perhaps
    less important in the case of verb phrases such as auxiliary plus main verb.
    Following syntactic rules for segmenting subtitles can facilitate the
    reading process for viewers less experienced with subtitling, and may
    help deaf viewers process syntactic structures.</p>

    <sec id="S4a" sec-type="COI-statement">
      <title>Ethics and Conflict of Interest</title>

    <p>The authors declare that the contents of the article are in agreement
    with the ethics described in <ext-link ext-link-type="uri" xlink:href="http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html" xlink:show="new">http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html</ext-link>
	and that there is no conflict of interest regarding the publication of
    this paper.</p>
    </sec>
	
    <sec id="S4b">
      <title>Acknowledgements</title>

    <p>The research reported here has been supported by a grant from the
    European Union’s Horizon 2020 research and innovation programme under the
    Marie Skłodowska-Curie Grant Agreement No. 702606, “La Caixa” Foundation
    (E-08-2014-1306365) and Transmedia Catalonia Research Group
    (SGR2005/230).</p>

    <p>Many thanks to Pilar Orero and Judit Castellà for their comments on
    an earlier version of the manuscript.</p>
    </sec>
    </sec>

  </body>
<back>
<ref-list>
<ref id="b22"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><collab>ABC</collab></person-group>. Industry Guidelines on Captioning Television Programs [Internet]. <year>2010</year>. Available from: <ext-link ext-link-type="uri" xlink:href="www.abc.net.au/mediawatch/transcripts/1105_freetvguidelines.pdf">www.abc.net.au/mediawatch/transcripts/1105_freetvguidelines.pdf</ext-link></mixed-citation></ref>
<ref id="b38"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Albertini</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Mayer</surname>, <given-names>C.</given-names></name></person-group> (<year>2011</year>). <article-title>Using miscue analysis to assess comprehension in deaf college readers.</article-title> <source>Journal of Deaf Studies and Deaf Education</source>, <volume>16</volume>(<issue>1</issue>), <fpage>35</fpage>&#8211;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1093/deafed/enq017</pub-id><pub-id pub-id-type="pmid">20488871</pub-id><issn>1081-4159</issn></mixed-citation></ref>
<ref id="b39"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Antia</surname>, <given-names>S. D.</given-names></name>, <name><surname>Jones</surname>, <given-names>P. B.</given-names></name>, <name><surname>Reed</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Kreimeyer</surname>, <given-names>K. H.</given-names></name></person-group> (<year>2009</year>). <article-title>Academic status and progress of deaf and hard-of-hearing students in general education classrooms.</article-title> <source>Journal of Deaf Studies and Deaf Education</source>, <volume>14</volume>(<issue>3</issue>), <fpage>293</fpage>&#8211;<lpage>311</lpage>. <pub-id pub-id-type="doi">10.1093/deafed/enp009</pub-id><pub-id pub-id-type="pmid">19502625</pub-id><issn>1081-4159</issn></mixed-citation></ref>
<ref id="b1"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Baker</surname>, <given-names>R. G.</given-names></name>, <name><surname>Lambourne</surname>, <given-names>A.</given-names></name>, &#x26; <name><surname>Rowston</surname>, <given-names>G.</given-names></name></person-group> (<year>1984</year>). <source>Handbook for Television Subtitlers</source>. <publisher-loc>Winchester</publisher-loc>: <publisher-name>Independent Broadcasting Authority</publisher-name>.</mixed-citation></ref>
<ref id="b2"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><collab>BBC</collab></person-group>. BBC subtitle guidelines [Internet]. London: The British Broadcasting Corporation; <year>2017</year>. Available from: <ext-link ext-link-type="uri" xlink:href="http://bbc.github.io/subtitle-guidelines/">http://bbc.github.io/subtitle-guidelines/</ext-link></mixed-citation></ref>
<ref id="b59"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Biber</surname>, <given-names>D.</given-names></name>, <name><surname>Johansson</surname>, <given-names>S.</given-names></name>, <name><surname>Leech</surname>, <given-names>G.</given-names></name>, <name><surname>Conrad</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Finegan</surname>, <given-names>E.</given-names></name></person-group> (<year>1999</year>). <source>Longman grammar of spoken and written English</source>. <publisher-loc>Harlow</publisher-loc>: <publisher-name>Longman</publisher-name>.</mixed-citation></ref>
<ref id="b30"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Bisson</surname>, <given-names>M.-J.</given-names></name>, <name><surname>Van Heuven</surname>, <given-names>W. J. B.</given-names></name>, <name><surname>Conklin</surname>, <given-names>K.</given-names></name>, &#x26; <name><surname>Tunney</surname>, <given-names>R. J.</given-names></name></person-group> (<year>2014</year>). <article-title>Processing of native and foreign language subtitles in films: An eye tracking study.</article-title> <source>Applied Psycholinguistics</source>, <volume>35</volume>(<issue>2</issue>), <fpage>399</fpage>&#8211;<lpage>418</lpage>. <pub-id pub-id-type="doi">10.1017/S0142716412000434</pub-id><issn>0142-7164</issn></mixed-citation></ref>
<ref id="b52"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Brasel</surname>, <given-names>K. E.</given-names></name>, &#x26; <name><surname>Quigley</surname>, <given-names>S.</given-names></name></person-group> (<year>1975</year>). <source>The influence of early language and communication environments on the development of language in deaf children</source>. <publisher-loc>Urbana, IL</publisher-loc>: <publisher-name>University of Illinois</publisher-name>.</mixed-citation></ref>
<ref id="b53"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Brown</surname>, <given-names>R.</given-names></name></person-group> (<year>1973</year>). <source>A First Language: The Early Stages</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>Harvard University Press</publisher-name>. <pub-id pub-id-type="doi">10.4159/harvard.9780674732469</pub-id></mixed-citation></ref>
<ref id="b37"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Burnham</surname> <given-names>D</given-names></name>, <name><surname>Leigh</surname> <given-names>G</given-names></name>, <name><surname>Noble</surname> <given-names>W</given-names></name>, <name><surname>Jones</surname> <given-names>C</given-names></name>, <name><surname>Tyler</surname> <given-names>M</given-names></name>, <name><surname>Grebennikov</surname> <given-names>L</given-names></name>, <etal>et al.</etal></person-group> Parameters in television captioning for deaf and hard-of-hearing adults: effects of caption rate versus text reduction on comprehension. J Deaf Stud Deaf Educ [Internet]. 2008/03/29. <year>2008</year>;13(3):391–404. Available from: <ext-link ext-link-type="uri" xlink:href="https://www.ncbi.nlm.nih.gov/pubmed/18372297">https://www.ncbi.nlm.nih.gov/pubmed/18372297</ext-link></mixed-citation></ref>
<ref id="b45"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Cambra</surname>, <given-names>C.</given-names></name>, <name><surname>Silvestre</surname>, <given-names>N.</given-names></name>, &#x26; <name><surname>Leal</surname>, <given-names>A.</given-names></name></person-group> (<year>2009</year>). <article-title>Comprehension of television messages by deaf students at various stages of education.</article-title> <source>American Annals of the Deaf</source>, <volume>153</volume>(<issue>5</issue>), <fpage>425</fpage>&#8211;<lpage>434</lpage>. <pub-id pub-id-type="doi">10.1353/aad.0.0065</pub-id><pub-id pub-id-type="pmid">19350951</pub-id><issn>0002-726X</issn></mixed-citation></ref>
<ref id="b19"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Carroll</surname> <given-names>M</given-names></name>, <name><surname>Ivarsson</surname> <given-names>J.</given-names></name></person-group> Code of Good Subtitling Practice [Internet]. Berlin; <year>1998</year> [<date-in-citation content-type="access-date">cited 2017 Nov 27</date-in-citation>]. Available from: <ext-link ext-link-type="uri" xlink:href="https://www.esist.org/wp-content/uploads/2016/06/Code-of-Good-Subtitling-Practice.PDF.pdf">https://www.esist.org/wp-content/uploads/2016/06/Code-of-Good-Subtitling-Practice.PDF.pdf</ext-link></mixed-citation></ref>
<ref id="b48"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Channon</surname>, <given-names>R.</given-names></name>, &#x26; <name><surname>Sayers</surname>, <given-names>E. E.</given-names></name></person-group> (<year>2007</year>). <article-title>Toward a description of deaf college students&#8217; written English: Overuse, avoidance, and mastery of function words.</article-title> <source>American Annals of the Deaf</source>, <volume>152</volume>(<issue>2</issue>), <fpage>91</fpage>&#8211;<lpage>103</lpage>. <pub-id pub-id-type="doi">10.1353/aad.2007.0018</pub-id><pub-id pub-id-type="pmid">17849610</pub-id><issn>0002-726X</issn></mixed-citation></ref>
<ref id="b54"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Conrad</surname>, <given-names>R.</given-names></name></person-group> (<year>1979</year>). <source>The Deaf School Child</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Harper and Row</publisher-name>.</mixed-citation></ref>
<ref id="b35"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>d’Ydewalle</surname>, <given-names>G.</given-names></name>, &#x26; <name><surname>De Bruycker</surname>, <given-names>W.</given-names></name></person-group> (<year>2007</year>). <article-title>Eye Movements of Children and Adults While Reading Television Subtitles.</article-title> <source>European Psychologist</source>, <volume>12</volume>(<issue>3</issue>), <fpage>196</fpage>–<lpage>205</lpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://econtent.hogrefe.com/doi/abs/10.1027/1016-9040.12.3.196">http://econtent.hogrefe.com/doi/abs/10.1027/1016-9040.12.3.196</ext-link> <pub-id pub-id-type="doi" specific-use="author">10.1027/1016-9040.12.3.196</pub-id><issn>1016-9040</issn></mixed-citation></ref>
<ref id="b23"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><collab>DCMP</collab></person-group>. DCMP Captioning Key [Internet]. <year>2017</year>. Available from: <ext-link ext-link-type="uri" xlink:href="http://www.captioningkey.org/">http://www.captioningkey.org/</ext-link></mixed-citation></ref>
<ref id="b36"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>de Linde</surname>, <given-names>Z.</given-names></name></person-group> (<year>1996</year>). <chapter-title>Le sous-titrage intralinguistique pour les sourds et les mal entendants</chapter-title>. In <person-group person-group-type="editor"><name><given-names>Y.</given-names> <surname>Gambier</surname></name> (<role>Ed.</role>),</person-group> <source>Les transferts linguistiques dans les medias audiovisuels</source> (pp. <fpage>173</fpage>&#8211;<lpage>183</lpage>). <publisher-loc>Villeneuve d&#8217;Ascq</publisher-loc>: <publisher-name>Presses Universitaires du Septentrion</publisher-name>.</mixed-citation></ref>
<ref id="b3"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>D&#237;az Cintas</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Remael</surname>, <given-names>A.</given-names></name></person-group> (<year>2007</year>). <source>Audiovisual translation: subtitling</source>. <publisher-loc>Manchester</publisher-loc>: <publisher-name>St. Jerome</publisher-name>.</mixed-citation></ref>
<ref id="b62"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Doherty</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Kruger</surname>, <given-names>J.-L.</given-names></name></person-group> (<year>2018</year>). <chapter-title>The development of eye tracking in empirical research on subtitling and captioning</chapter-title>. In <person-group person-group-type="editor"><name><given-names>J.</given-names> <surname>Sita</surname></name>, <name><given-names>T.</given-names> <surname>Dwyer</surname></name>, <name><given-names>S.</given-names> <surname>Redmond</surname></name>, &#x26; <name><given-names>C.</given-names> <surname>Perkins</surname></name> (<role>Eds.</role>),</person-group> <source>Seeing into Screens</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Bloomsbury</publisher-name>. <pub-id pub-id-type="doi">10.5040/9781501329012.0009</pub-id></mixed-citation></ref>
<ref id="b18"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Ehrlich</surname>, <given-names>S. F.</given-names></name>, &#x26; <name><surname>Rayner</surname>, <given-names>K.</given-names></name></person-group> (<year>1981</year>). <article-title>Contextual effects on word perception and eye movements during reading.</article-title> <source>Journal of Verbal Learning and Verbal Behavior</source>, <volume>20</volume>(<issue>6</issue>), <fpage>641</fpage>&#8211;<lpage>655</lpage>. <pub-id pub-id-type="doi">10.1016/S0022-5371(81)90220-6</pub-id><issn>0022-5371</issn></mixed-citation></ref>
<ref id="b12"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Frazier</surname>, <given-names>L.</given-names></name></person-group> (<year>1979</year>). <source>On comprehending sentences: syntactic parsing strategies</source>. <publisher-loc>Storrs</publisher-loc>: <publisher-name>University of Connecticut</publisher-name>.</mixed-citation></ref>
<ref id="b10"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Frazier</surname>, <given-names>L.</given-names></name>, &#x26; <name><surname>Rayner</surname>, <given-names>K.</given-names></name></person-group> (<year>1982</year>). <article-title>Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences.</article-title> <source>Cognitive Psychology</source>, <volume>14</volume>(<issue>2</issue>), <fpage>178</fpage>&#8211;<lpage>210</lpage>. <pub-id pub-id-type="doi">10.1016/0010-0285(82)90008-1</pub-id><issn>0010-0285</issn></mixed-citation></ref>
<ref id="b4"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Gambier</surname>, <given-names>Y.</given-names></name></person-group> (<year>2006</year>). <chapter-title>Subtitling</chapter-title>. In <person-group person-group-type="editor"><name><given-names>K.</given-names> <surname>Brown</surname></name> (<role>Ed.</role>),</person-group> <source>Encyclopedia of Language and Linguistics</source> (<edition>2nd ed.</edition>, pp. <fpage>258</fpage>&#8211;<lpage>263</lpage>). <publisher-loc>Oxford, UK</publisher-loc>: <publisher-name>Elsevier</publisher-name>. <pub-id pub-id-type="doi">10.1016/B0-08-044854-2/00472-7</pub-id></mixed-citation></ref>
<ref id="b28"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Gerber-Mor&#243;n</surname>, <given-names>O.</given-names></name>, &#x26; <name><surname>Szarkowska</surname>, <given-names>A.</given-names></name></person-group> (<year>2018</year>). (Forthcoming). <article-title>The impact of text segmentation on subtitle reading: An eye tracking study on cognitive load and reading performance.</article-title> <source>Journal of Eye Movement Research</source>.<issn>1995-8692</issn></mixed-citation></ref>
<ref id="b60"><mixed-citation publication-type="book" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Holmqvist</surname> <given-names>K</given-names></name>, <name><surname>Nystr&#246;m</surname> <given-names>M</given-names></name>, <name><surname>Andersson</surname> <given-names>R</given-names></name>, <name><surname>Dewhurst</surname> <given-names>R</given-names></name>, <name><surname>Jarodzka</surname> <given-names>H</given-names></name>, <name><surname>van de Weijer</surname> <given-names>J</given-names></name></person-group>. Eye tracking: a comprehensive guide to methods and measures. Oxford: Oxford University Press; <year>2011</year>.</mixed-citation></ref>
<ref id="b5"><mixed-citation publication-type="unknown" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Ivarsson</surname> <given-names>J</given-names></name>, <name><surname>Carroll</surname> <given-names>M.</given-names></name></person-group> Subtitling. Simrishamn: TransEdit HB; <year>1998</year>.</mixed-citation></ref>
<ref id="b65"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Just</surname>, <given-names>M. A.</given-names></name>, &#x26; <name><surname>Carpenter</surname>, <given-names>P. A.</given-names></name></person-group> (<year>1980</year>). <article-title>A theory of reading: From eye fixations to comprehension.</article-title> <source>Psychological Review</source>, <volume>87</volume>(<issue>4</issue>), <fpage>329</fpage>&#8211;<lpage>354</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.87.4.329</pub-id><pub-id pub-id-type="pmid">7413885</pub-id><issn>0033-295X</issn></mixed-citation></ref>
<ref id="b6"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Karamitroglou</surname> <given-names>F.</given-names></name></person-group> <article-title>A Proposed Set of Subtitling Standards in Europe.</article-title> Transl J [Internet]. <year>1998</year>;2(2). Available from: <ext-link ext-link-type="uri" xlink:href="http://translationjournal.net/journal/04stndrd.htm">http://translationjournal.net/journal/04stndrd.htm</ext-link></mixed-citation></ref>
<ref id="b40"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Karchmer</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Mitchell</surname>, <given-names>R. E.</given-names></name></person-group> (<year>2003</year>). <chapter-title>Demographic and achievement characteristics of deaf and hard-of-hearing students</chapter-title>. In <person-group person-group-type="editor"><name><given-names>M.</given-names> <surname>Marschark</surname></name> &#x26; <name><given-names>P. E.</given-names> <surname>Spencer</surname></name> (<role>Eds.</role>),</person-group> <source>Oxford handbook of deaf studies, language and education</source> (pp. <fpage>21</fpage>&#8211;<lpage>37</lpage>). <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="b31"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Krejtz</surname>, <given-names>I.</given-names></name>, <name><surname>Szarkowska</surname>, <given-names>A.</given-names></name>, &#x26; <name><surname>Krejtz</surname>, <given-names>K.</given-names></name></person-group> (<year>2013</year>). <article-title>The Effects of Shot Changes on Eye Movements in Subtitling.</article-title> <source>Journal of Eye Movement Research</source>, <volume>6</volume>(<issue>5</issue>), <fpage>1</fpage>&#8211;<lpage>12</lpage>.<issn>1995-8692</issn></mixed-citation></ref>
<ref id="b50"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Krejtz</surname>, <given-names>I.</given-names></name>, <name><surname>Szarkowska</surname>, <given-names>A.</given-names></name>, &#x26; <name><surname>Łogińska</surname>, <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Reading Function and Content Words in Subtitled Videos.</article-title> <comment>[Internet]</comment>. <source>Journal of Deaf Studies and Deaf Education</source>, <volume>21</volume>(<issue>2</issue>), <fpage>222</fpage>–<lpage>232</lpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://jdsde.oxfordjournals.org/content/21/2/222.full.pdf">http://jdsde.oxfordjournals.org/content/21/2/222.full.pdf</ext-link> <pub-id pub-id-type="doi">10.1093/deafed/env061</pub-id><pub-id pub-id-type="pmid">26681268</pub-id><issn>1081-4159</issn></mixed-citation></ref>
<ref id="b32"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Kruger</surname>, <given-names>J.-L.</given-names></name>, &#x26; <name><surname>Steyn</surname>, <given-names>F.</given-names></name></person-group> (<year>2014</year>). <article-title>Subtitles and Eye Tracking: Reading&#160;and&#160;Performance.</article-title> <source>Reading Research Quarterly</source>, <volume>49</volume>(<issue>1</issue>), <fpage>105</fpage>&#8211;<lpage>120</lpage>. <pub-id pub-id-type="doi">10.1002/rrq.59</pub-id><issn>0034-0553</issn></mixed-citation></ref>
<ref id="b33"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Kruger</surname>, <given-names>J.-L.</given-names></name>, <name><surname>Szarkowska</surname>, <given-names>A.</given-names></name>, &#x26; <name><surname>Krejtz</surname>, <given-names>I.</given-names></name></person-group> (<year>2015</year>). <source>Subtitles on the moving image: an overview of eye tracking studies</source> (p. <fpage>25</fpage>). <publisher-name>Refractory</publisher-name>.</mixed-citation></ref>
<ref id="b13"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Levasseur</surname> <given-names>V.</given-names></name></person-group> Promoting gains in reading fluency by rereading phrasally-cued text [Internet]. ProQuest Dissertations Publishing; <year>2004</year>. Available from: <ext-link ext-link-type="uri" xlink:href="https://search.proquest.com/docview/305210949/?pq-origsite=primo">https://search.proquest.com/docview/305210949/?pq-origsite=primo</ext-link></mixed-citation></ref>
<ref id="b21"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Luyken</surname>, <given-names>G.-M.</given-names></name>, <name><surname>Herbst</surname>, <given-names>T.</given-names></name>, <name><surname>Langham-Brown</surname>, <given-names>J.</given-names></name>, <name><surname>Reid</surname>, <given-names>H.</given-names></name>, &#x26; <name><surname>Spinhof</surname>, <given-names>H.</given-names></name></person-group> (<year>1991</year>). <source>Overcoming language barriers in television: dubbing and subtitling for the European audience</source>. <publisher-loc>Manchester</publisher-loc>: <publisher-name>The European Institute for the Media</publisher-name>.</mixed-citation></ref>
<ref id="b41"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Marschark</surname>, <given-names>M.</given-names></name></person-group> (<year>1993</year>). <source>Psychological development of deaf children</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Oxford Press</publisher-name>.</mixed-citation></ref>
<ref id="b42"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Marschark</surname>, <given-names>M.</given-names></name>, <name><surname>Lang</surname>, <given-names>H. G.</given-names></name>, &#x26; <name><surname>Albertini</surname>, <given-names>J. A.</given-names></name></person-group> (<year>2002</year>). <source>Educating deaf students: Research into practice</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="b58"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Matamala</surname>, <given-names>A.</given-names></name>, <name><surname>Perego</surname>, <given-names>E.</given-names></name>, &#x26; <name><surname>Bottiroli</surname>, <given-names>S.</given-names></name></person-group> (<year>2017</year>). <source>Dubbing versus subtitling yet again?</source> <publisher-name>Babel</publisher-name>. <pub-id pub-id-type="doi">10.1075/babel.63.3.07mat</pub-id></mixed-citation></ref>
<ref id="b24"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><collab>Media Access Australia</collab></person-group>. Captioning Guidelines [Internet]. <year>2012</year>. Available from: <ext-link ext-link-type="uri" xlink:href="https://mediaaccess.org.au/practical-web-accessibility/media/caption-guidelines">https://mediaaccess.org.au/practical-web-accessibility/media/caption-guidelines</ext-link></mixed-citation></ref>
<ref id="b67"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Moran</surname>, <given-names>S.</given-names></name></person-group> (<year>2009</year>). <chapter-title>The Effect of Linguistic Variation on Subtitle Reception</chapter-title>. In <person-group person-group-type="editor"><name><given-names>E.</given-names> <surname>Perego</surname></name> (<role>Ed.</role>),</person-group> <source>Eye Tracking in Audiovisual Translation</source> (pp. <fpage>183</fpage>&#8211;<lpage>222</lpage>). <publisher-loc>Roma</publisher-loc>: <publisher-name>Aracne Editrice</publisher-name>.</mixed-citation></ref>
<ref id="b14"><mixed-citation publication-type="web-page" specific-use="unparsed"><person-group person-group-type="author"><name><surname>Murnane</surname> <given-names>Y.</given-names></name></person-group> The influences of chunking text and familiarity of text on comprehension [Internet]. Marquette University; <year>1987</year>. Available from: <ext-link ext-link-type="uri" xlink:href="https://search.proquest.com/docview/303587165/?pq-origsite=primo">https://search.proquest.com/docview/303587165/?pq-origsite=primo</ext-link></mixed-citation></ref>
<ref id="b25"><mixed-citation publication-type="web-page" specific-use="unparsed">Netflix. Timed Text Style Guide: General Requirements [Internet]. <year>2016</year>. Available from: <ext-link ext-link-type="uri" xlink:href="https://backlothelp.netflix.com/hc/en-us/articles/215758617-Timed-Text-Style-Guide-General-Requirements">https://backlothelp.netflix.com/hc/en-us/articles/215758617-Timed-Text-Style-Guide-General-Requirements</ext-link></mixed-citation></ref>
<ref id="b55"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Odom</surname>, <given-names>P. B.</given-names></name>, &#x26; <name><surname>Blanton</surname>, <given-names>R.</given-names></name></person-group> (<year>1970</year>). <article-title>Implicit and explicit grammatical factors in reading achievement in the deaf.</article-title> <source>Journal of Reading Behavior</source>, <volume>2</volume>(<issue>1</issue>), <fpage>47</fpage>&#8211;<lpage>55</lpage>. <pub-id pub-id-type="doi">10.1080/10862967009546877</pub-id><issn>0022-4111</issn></mixed-citation></ref>
<ref id="b8"><mixed-citation publication-type="web-page" specific-use="unparsed">Ofcom. Code on Television Access Services [Internet]. <year>2017</year>. Available from: <ext-link ext-link-type="uri" xlink:href="https://www.ofcom.org.uk/__data/assets/pdf_file/0020/97040/Access-service-code-Jan-2017.pdf">https://www.ofcom.org.uk/__data/assets/pdf_file/0020/97040/Access-service-code-Jan-2017.pdf</ext-link></mixed-citation></ref>
<ref id="b51"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Paas</surname>, <given-names>F.</given-names></name>, <name><surname>Tuovinen</surname>, <given-names>J. E.</given-names></name>, <name><surname>Tabbers</surname>, <given-names>H.</given-names></name>, &#x26; <name><surname>Van Gerven</surname>, <given-names>P. W. M.</given-names></name></person-group> (<year>2003</year>). <article-title>Cognitive Load Measurement as a Means to Advance Cognitive Load Theory.</article-title> <source>Educational Psychologist</source>, <volume>38</volume>(<issue>1</issue>), <fpage>63</fpage>&#8211;<lpage>71</lpage>. <pub-id pub-id-type="doi">10.1207/S15326985EP3801_8</pub-id></mixed-citation></ref>
<ref id="b20"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Perego</surname>, <given-names>E.</given-names></name></person-group> (<year>2008a</year>). <chapter-title>Subtitles and line-breaks: Towards improved readability</chapter-title>. In <person-group person-group-type="editor"><name><given-names>D.</given-names> <surname>Chiaro</surname></name>, <name><given-names>C.</given-names> <surname>Heiss</surname></name>, &#x26; <name><given-names>C.</given-names> <surname>Bucaria</surname></name> (<role>Eds.</role>),</person-group> <source>Between Text and Image: Updating research in screen translation</source> (pp. <fpage>211</fpage>–<lpage>223</lpage>). <publisher-name>John Benjamins</publisher-name>. <pub-id pub-id-type="doi">10.1075/btl.78.21per</pub-id></mixed-citation></ref>
<ref id="b7"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Perego</surname>, <given-names>E.</given-names></name></person-group> (<year>2008b</year>). <article-title>What would we read best? Hypotheses and suggestions for the location of line breaks in film subtitles.</article-title> <source>The Sign Language Translator and Interpreter</source>, <volume>2</volume>(<issue>1</issue>), <fpage>35</fpage>&#8211;<lpage>63</lpage>.</mixed-citation></ref>
<ref id="b27"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Perego</surname>, <given-names>E.</given-names></name>, <name><surname>Del Missier</surname>, <given-names>F.</given-names></name>, <name><surname>Porta</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Mosconi</surname>, <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>The Cognitive Effectiveness of Subtitle Processing.</article-title> <source>Media Psychology</source>, <volume>13</volume>(<issue>3</issue>), <fpage>243</fpage>&#8211;<lpage>272</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1080/15213269.2010.502873</pub-id><issn>1521-3269</issn></mixed-citation></ref>
<ref id="b34"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Perego</surname>, <given-names>E.</given-names></name>, <name><surname>Laskowska</surname>, <given-names>M.</given-names></name>, <name><surname>Matamala</surname>, <given-names>A.</given-names></name>, <name><surname>Remael</surname>, <given-names>A.</given-names></name>, <name><surname>Robert</surname>, <given-names>I. S.</given-names></name>, <name><surname>Szarkowska</surname>, <given-names>A.</given-names></name>, <name><surname>Bottiroli</surname>, <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>Is subtitling equally effective everywhere? A first cross-national study on the reception of interlingually subtitled messages.</article-title> <source>Across Languages and Cultures</source>, <volume>17</volume>(<issue>2</issue>), <fpage>205</fpage>&#8211;<lpage>229</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1556/084.2016.17.2.4</pub-id><issn>1585-1923</issn></mixed-citation></ref>
<ref id="b43"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Qi</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Mitchell</surname>, <given-names>R. E.</given-names></name></person-group> (<year>2012</year>). <article-title>Large-scale academic achievement testing of deaf and hard-of-hearing students: Past, present, and future.</article-title> <source>Journal of Deaf Studies and Deaf Education</source>, <volume>17</volume>(<issue>1</issue>), <fpage>1</fpage>&#8211;<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1093/deafed/enr028</pub-id><pub-id pub-id-type="pmid">21712463</pub-id><issn>1081-4159</issn></mixed-citation></ref>
<ref id="b56"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Quigley</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Paul</surname>, <given-names>P.</given-names></name></person-group> (<year>1984</year>). <source>Language and Deafness</source>. <publisher-loc>California</publisher-loc>: <publisher-name>College-Hill Press</publisher-name>.</mixed-citation></ref>
<ref id="b29"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rajendran</surname>, <given-names>D. J.</given-names></name>, <name><surname>Duchowski</surname>, <given-names>A. T.</given-names></name>, <name><surname>Orero</surname>, <given-names>P.</given-names></name>, <name><surname>Mart&#237;nez</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Romero-Fresco</surname>, <given-names>P.</given-names></name></person-group> (<year>2013</year>). <article-title>Effects of text chunking on subtitling: A quantitative and qualitative examination.</article-title> <source>Perspectives</source>, <volume>21</volume>(<issue>1</issue>), <fpage>5</fpage>&#8211;<lpage>21</lpage>. <pub-id pub-id-type="doi" specific-use="author">10.1080/0907676X.2012.722651</pub-id><issn>0191-6556</issn></mixed-citation></ref>
<ref id="b16"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rayner</surname>, <given-names>K.</given-names></name>, &#x26; <name><surname>Well</surname>, <given-names>A. D.</given-names></name></person-group> (<year>1996</year>). <article-title>Effects of contextual constraint on eye movements in reading: A further examination.</article-title> <source>Psychonomic Bulletin &#x26; Review</source>, <volume>3</volume>(<issue>4</issue>), <fpage>504</fpage>&#8211;<lpage>509</lpage>. <pub-id pub-id-type="doi">10.3758/BF03214555</pub-id><pub-id pub-id-type="pmid">24213985</pub-id><issn>1069-9384</issn></mixed-citation></ref>
<ref id="b61"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rayner</surname>, <given-names>K.</given-names></name></person-group> (<year>1998</year>). <article-title>Eye movements in reading and information processing: 20 years of research.</article-title> <source>Psychological Bulletin</source>, <volume>124</volume>(<issue>3</issue>), <fpage>372</fpage>&#8211;<lpage>422</lpage>. <pub-id pub-id-type="doi">10.1037/0033-2909.124.3.372</pub-id><issn>0033-2909</issn></mixed-citation></ref>
<ref id="b64"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rayner</surname>, <given-names>K.</given-names></name>, <name><surname>Kambe</surname>, <given-names>G.</given-names></name>, &#x26; <name><surname>Duffy</surname>, <given-names>S. A.</given-names></name></person-group> (<year>2000</year>). <article-title>The effect of clause wrap-up on eye movements during reading.</article-title> <source>The Quarterly Journal of Experimental Psychology Section A</source>, <volume>53</volume>(<issue>4</issue>), <fpage>1061</fpage>&#8211;<lpage>1080</lpage>. <pub-id pub-id-type="doi">10.1080/713755934</pub-id><pub-id pub-id-type="pmid">11131813</pub-id><issn>0272-4987</issn></mixed-citation></ref>
<ref id="b17"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rayner</surname>, <given-names>K.</given-names></name>, <name><surname>Ashby</surname>, <given-names>J.</given-names></name>, <name><surname>Pollatsek</surname>, <given-names>A.</given-names></name>, &#x26; <name><surname>Reichle</surname>, <given-names>E. D.</given-names></name></person-group> (<year>2004</year>). <article-title>The effects of frequency and predictability on eye fixations in reading: Implications for the E-Z Reader model.</article-title> <source>Journal of Experimental Psychology: Human Perception and Performance</source>, <volume>30</volume>(<issue>4</issue>), <fpage>720</fpage>&#8211;<lpage>732</lpage>. <pub-id pub-id-type="doi">10.1037/0096-1523.30.4.720</pub-id><pub-id pub-id-type="pmid">15301620</pub-id><issn>0096-1523</issn></mixed-citation></ref>
<ref id="b11"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rayner</surname>, <given-names>K.</given-names></name>, <name><surname>Pollatsek</surname>, <given-names>A.</given-names></name>, <name><surname>Ashby</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Clifton</surname>, <given-names>C. J.</given-names></name></person-group> (<year>2012</year>). <source>Psychology of reading</source> (<edition>2nd ed.</edition>). <publisher-loc>New York, London</publisher-loc>: <publisher-name>Psychology Press</publisher-name>. <pub-id pub-id-type="doi">10.4324/9780203155158</pub-id></mixed-citation></ref>
<ref id="b68"><mixed-citation publication-type="book-chapter" specific-use="restruct"><person-group person-group-type="author"><name><surname>Rayner</surname>, <given-names>K.</given-names></name></person-group> (<year>2015</year>). <chapter-title>Eye Movements in Reading</chapter-title>. In <person-group person-group-type="editor"><name><given-names>J. D.</given-names> <surname>Wright</surname></name> (<role>Ed.</role>),</person-group> <source>International Encyclopedia of the Social &#x26; Behavioral Sciences</source> (<edition>2nd ed.</edition>, pp. <fpage>631</fpage>&#8211;<lpage>634</lpage>). <publisher-loc>United Kingdom</publisher-loc>: <publisher-name>Elsevier</publisher-name>. <pub-id pub-id-type="doi">10.1016/B978-0-08-097086-8.54008-2</pub-id></mixed-citation></ref>
<ref id="b57"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Savage</surname>, <given-names>R. D.</given-names></name>, <name><surname>Evans</surname>, <given-names>L.</given-names></name>, &#x26; <name><surname>Savage</surname>, <given-names>J. F.</given-names></name></person-group> (<year>1981</year>). <source>Psychology and Communication in Deaf Children</source>. <publisher-loc>Sydney, London</publisher-loc>: <publisher-name>Grune &#x26; Stratton</publisher-name>.</mixed-citation></ref>
<ref id="b44"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Schirmer</surname>, <given-names>B. R.</given-names></name>, &#x26; <name><surname>McGough</surname>, <given-names>S. M.</given-names></name></person-group> (<year>2005</year>). <article-title>Teaching reading to children who are deaf: Do the conclusions of the National Reading Panel apply?</article-title> <source>Review of Educational Research</source>, <volume>75</volume>(<issue>1</issue>), <fpage>83</fpage>&#8211;<lpage>117</lpage>. <pub-id pub-id-type="doi">10.3102/00346543075001083</pub-id><issn>0034-6543</issn></mixed-citation></ref>
<ref id="b46"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Szarkowska</surname>, <given-names>A.</given-names></name>, <name><surname>Krejtz</surname>, <given-names>I.</given-names></name>, <name><surname>Klyszejko</surname>, <given-names>Z.</given-names></name>, &#x26; <name><surname>Wieczorek</surname>, <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Verbatim, standard, or edited? Reading patterns of different captioning styles among deaf, hard of hearing, and hearing viewers.</article-title> <source>American Annals of the Deaf</source>, <volume>156</volume>(<issue>4</issue>), <fpage>363</fpage>&#8211;<lpage>378</lpage>. <pub-id pub-id-type="doi">10.1353/aad.2011.0039</pub-id><pub-id pub-id-type="pmid">22256538</pub-id><issn>0002-726X</issn></mixed-citation></ref>
<ref id="b63"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Szarkowska</surname>, <given-names>A.</given-names></name>, &#x26; <name><surname>Gerber-Mor&#243;n</surname>, <given-names>O.</given-names></name></person-group> (<year>2018</year>). <source>SURE Project Dataset</source> [Data set]. <publisher-name>RepOD</publisher-name>. <pub-id pub-id-type="doi" specific-use="author">10.18150/repod.4469278</pub-id></mixed-citation></ref>
<ref id="b26"><mixed-citation publication-type="web-page" specific-use="restruct"><person-group person-group-type="author"><collab>TED</collab></person-group> (<year>2015</year>). <source>How to break lines</source>. Open Translation Project. Retrieved <date-in-citation content-type="access-date">November 27, 2017</date-in-citation>, from <ext-link ext-link-type="uri" xlink:href="http://translations.ted.org/wiki/How_to_break_lines">http://translations.ted.org/wiki/How_to_break_lines</ext-link></mixed-citation></ref>
<ref id="b47"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Torres Monreal</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Santana Hern&#225;ndez</surname>, <given-names>R.</given-names></name></person-group> (<year>2005</year>). <article-title>Reading levels of Spanish deaf students.</article-title> <source>American Annals of the Deaf</source>, <volume>150</volume>(<issue>4</issue>), <fpage>379</fpage>&#8211;<lpage>387</lpage>. Retrieved from <ext-link ext-link-type="uri" xlink:href="http://muse.jhu.edu/content/crossref/journals/american_annals_of_the_deaf/v150/150.4monreal.html">http://muse.jhu.edu/content/crossref/journals/american_annals_of_the_deaf/v150/150.4monreal.html</ext-link> <pub-id pub-id-type="doi">10.1353/aad.2005.0043</pub-id><pub-id pub-id-type="pmid">16466193</pub-id><issn>0002-726X</issn></mixed-citation></ref>
<ref id="b66"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Trezek</surname>, <given-names>B. J.</given-names></name>, <name><surname>Wang</surname>, <given-names>Y.</given-names></name>, &#x26; <name><surname>Paul</surname>, <given-names>P. V.</given-names></name></person-group> (<year>2010</year>). <source>Reading and deafness: Theory, research, and practice</source>. <publisher-loc>Clifton Park, N.Y.</publisher-loc>: <publisher-name>Delmar Cengage Learning</publisher-name>.</mixed-citation></ref>
<ref id="b9"><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Warren</surname>, <given-names>P.</given-names></name></person-group> (<year>2012</year>). <source>Introducing Psycholinguistics</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. <pub-id pub-id-type="doi">10.1017/CBO9780511978531</pub-id></mixed-citation></ref>
<ref id="b15"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Weiss</surname>, <given-names>D. S.</given-names></name></person-group> (<year>1983</year>). <article-title>The effects of text segmentation on children&#8217;s reading comprehension.</article-title> <source>Discourse Processes</source>, <volume>6</volume>(<issue>1</issue>), <fpage>77</fpage>&#8211;<lpage>89</lpage>. <pub-id pub-id-type="doi">10.1080/01638538309544555</pub-id><issn>0163-853X</issn></mixed-citation></ref>
<ref id="b49"><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Wolbers</surname>, <given-names>K. A.</given-names></name>, <name><surname>Dostal</surname>, <given-names>H. M.</given-names></name>, &#x26; <name><surname>Bowers</surname>, <given-names>L. M.</given-names></name></person-group> (<year>2012</year>). <article-title>&#8220;I was born full deaf.&#8221; Written language outcomes after 1 year of strategic and interactive writing instruction.</article-title> <source>Journal of Deaf Studies and Deaf Education</source>, <volume>17</volume>(<issue>1</issue>), <fpage>19</fpage>&#8211;<lpage>38</lpage>. <pub-id pub-id-type="doi">10.1093/deafed/enr018</pub-id><pub-id pub-id-type="pmid">21571902</pub-id><issn>1081-4159</issn></mixed-citation></ref>
</ref-list>
</back>
</article>
