<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">

<article article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink">
 <front>
    <journal-meta>
	<journal-id journal-id-type="publisher-id">Jemr</journal-id>
      <journal-title-group>
        <journal-title>Journal of Eye Movement Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">1995-8692</issn>
	  <publisher>								
	  <publisher-name>Bern Open Publishing</publisher-name>
	  <publisher-loc>Bern, Switzerland</publisher-loc>
	</publisher>
    </journal-meta>
    <article-meta>
	<article-id pub-id-type="doi">10.16910/jemr.11.3.3</article-id> 
	  <article-categories>								
				<subj-group subj-group-type="heading">
					<subject>Research Article</subject>
				</subj-group>
		</article-categories>
      <title-group>
        <article-title>Identifying experts in the field of visual arts using oculomotor signals</article-title>
      </title-group>
	   <contrib-group> 
				<contrib contrib-type="author">
					<name>
						<surname>Ko&#x142;odziej</surname>
						<given-names>Marcin</given-names>
					</name>
					<xref ref-type="aff" rid="aff1"></xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Majkowski</surname>
						<given-names>Andrzej</given-names>
					</name>
					<xref ref-type="aff" rid="aff1"></xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Rak</surname>
						<given-names>Remigiusz J.</given-names>
					</name>
					<xref ref-type="aff" rid="aff1"></xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Francuz</surname>
						<given-names>Piotr</given-names>
					</name>
					<xref ref-type="aff" rid="aff2"></xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Augustynowicz</surname>
						<given-names>Pawe&#x142;</given-names>
					</name>
					<xref ref-type="aff" rid="aff2"></xref>
				</contrib>				
        <aff id="aff1">
		<institution>Warsaw University of Technology</institution>,   <country>Poland</country>
        </aff>
        <aff id="aff2">
		<institution>John Paul II Catholic University of Lublin</institution>,   <country>Poland</country>
        </aff>		
		</contrib-group>     
	  <pub-date date-type="pub" publication-format="electronic"> 
		<day>24</day>  
		<month>5</month>
        <year>2018</year>
      </pub-date>
	  <pub-date date-type="collection" publication-format="electronic"> 
	  <year>2018</year>
	</pub-date>
      <volume>11</volume>
      <issue>3</issue>
 <elocation-id>10.16910/jemr.11.3.3</elocation-id>
	<permissions> 
	<copyright-year>2018</copyright-year>
	<copyright-holder>Ko&#x142;odziej, M., Majkowski, A., Francuz, P., Rak, R. J., &#x26; Augustynowicz, P.</copyright-holder>
	<license license-type="open-access">
  <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License
  (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">
    https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
	</permissions>
      <abstract>
        <p>In this article, we present a system that enables identifying experts in the field of visual art based on oculographic data. The difference between the two classified groups of tested people concerns their formal education. First, regions of interest (ROI) were determined based on the positions of fixations on the viewed picture. For each ROI, a set of features (the number of fixations and their durations) was calculated that enabled distinguishing professionals from laymen. The developed system was tested on several dozen users. We used k-nearest neighbors (k-NN) and support vector machine (SVM) classifiers for the classification process. The classification results proved that it is possible to distinguish experts from non-experts.</p>
      </abstract>
      <kwd-group>
        <kwd>Expert system</kwd>
        <kwd>eye-tracking</kwd>
        <kwd>fixation</kwd>
        <kwd>clusters</kwd>
        <kwd>neural network</kwd>
        <kwd>support vector machine</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>

    <sec id="S1">
      <title>Introduction</title>
	  
      <p>In the field of empirical esthetics, we pose questions
about the differences between experts and non-experts in
terms of their esthetic preferences and emotional,
behavioral, or neurophysiological reactions. In the vast
majority of these studies, we assume that experts in the field of art
(as opposed to laymen) have completed, or are continuing,
studies in art history, at an academy of fine arts, or at a
conservatory. We assume that they are involved in some
kind of art or actively participate in cultural life (e.g.,
they visit museums and exhibitions, paint, take
photographs, sculpt, or read about art either professionally
or as a hobby). Furthermore, studies have shown the
inhomogeneity of groups of experts and non-experts
confronted with the evaluation of works of art. Therefore, there
is a need for an objective method of measuring
expertise in the field of art.</p>

      <p>Oculography, as a method of measuring human
visual activity, offers some possibilities in this field. One
of the reliable indicators of interest in a specific
fragment of an image is the density of fixations registered by
an eye-tracker [
        <xref ref-type="bibr" rid="b1">1</xref>
        ]. Regions of interest (ROI) are interpreted
as places of especially high information values [
        <xref ref-type="bibr" rid="b2 b3 b4 b5">2-5</xref>
        ].
Generally, higher values of many oculomotor indicators
(e.g., average fixation time, duration of fixations, or
length of saccades preceding fixations) are recorded in
areas of high information values [
        <xref ref-type="bibr" rid="b10 b11 b6 b7 b8 b9">6, 7, 8, 9, 10, 11</xref>
        ]. The results of
eye-tracking research comparing experts and non-experts in
the field of visual arts show some differences in the
distribution of fixations on known and unknown pictures
[
        <xref ref-type="bibr" rid="b1">1</xref>
        ]. Unlike non-experts, practicing artists often pay attention to those
fragments of images that lie beyond the obvious centers of
interest (e.g., the faces of people). It
was also found that experts have a more global
strategy of searching the image area than non-experts [
        <xref ref-type="bibr" rid="b12">12</xref>
        ]. However,
non-experts pay more attention to the objects and people
shown in pictures, whereas experts are more interested in
the structural features of these images. Vogt and Magnussen
[
        <xref ref-type="bibr" rid="b13">13</xref>
        ] found that non-experts fixate longer on
previously viewed parts of images than experts do. It was
also found that non-experts, regardless of the type of task
being performed (free viewing of photos or scanning
them to find a specified object), fixate according
to the salience-driven effect, which is in line with the
bottom-up strategy of information processing [
        <xref ref-type="bibr" rid="b14">14</xref>
        ].
Hermens [
        <xref ref-type="bibr" rid="b15">15</xref>
        ] presented an extensive review of the literature
concerning eye movements in surgery. On the basis of
eye movements, techniques to assess surgical skill
have been developed, and the role of eye movements in
surgical training has been examined. Sugano [
        <xref ref-type="bibr" rid="b16">16</xref>
        ] investigated the
possibility of image preference estimation based on a
person&#x2019;s eye movements. Stofer and Che [
        <xref ref-type="bibr" rid="b17">17</xref>
        ]
investigated expert and novice meaning-making from
scaffolded data visualizations using clinical interviews.
Boccignone [
        <xref ref-type="bibr" rid="b18">18</xref>
        ] applied machine learning to detect
expertise from the oculomotor behavior of novice and
expert billiard players.</p>

      <p>Viewing a picture proceeds in a fragmentary way. While viewing
a picture, people focus their eyes on different parts of it
with different frequencies [
        <xref ref-type="bibr" rid="b2">2</xref>
        ]. If an image is watched by
a dozen or so people, it is likely that they will pay
attention to similar fragments of it. This tendency has been
confirmed by numerous studies, starting with
experiments conducted by the pioneers of oculography
[
        <xref ref-type="bibr" rid="b19 b20 b21 b6">6, 19, 20, 21</xref>
        ]. Can we, based on the coordinates and
durations of fixations, predict who is watching the image:
an expert or a layman? In this article, we present a system
that enables identifying experts in the field of art based
on eye movements recorded while watching the assessed paintings. The
difference between the classified groups of people
concerns formal education and the related greater or
lesser experience in dealing with works of art.</p>
    </sec>
	
    <sec id="S2">
      <title>Participants and setup</title>
	  
      <p>In this study, we collected data from 44 people: 23
experts (including 11 women) and 21 non-experts
(including 11 women), who were in the age group of 20&#x2013;27
years (mean value = 23.4; standard deviation = 1.6).
Eighty-five percent of the people in the group of experts
were students of the fourth and fifth years of studies, and
fifteen percent were students of the second and third years,
mainly of art history (90%) and of painting and graphics
(10%). In addition to formal education, all of them
declared an interest in visual arts, and about half of them had
been actively involved in some form of art (painting,
graphics, sculpture, photography, design, etc.) for several
years. Non-experts did not meet any of the above criteria.
All participants had normal or corrected-to-normal vision and
did not report any symptoms of neurological disorders. All
people participating in the experiments received financial
compensation.</p>

      <p>We used digitized reproductions of five well-known
paintings, listed below:</p>

      <p>P1. 	&#xA0;James J. Tissot&#x2014;The Traveller [1883&#x2013;1885],</p>
      <p>P2. 	&#xA0;Caravaggio&#x2014;Crucifixion of St. Peter [1600],</p>
      <p>P3. 	&#xA0;Gustave Courbet&#x2014;Malle Babbe [1628&#x2013;1640],</p>
      <p>P4. 	&#xA0;Carl Hols&#xF8;e&#x2014;Reflections [year unknown],</p>
      <p>P5. 	&#xA0;Ilja Repin&#x2014;Unexpected Visitor [1884&#x2013;1888].</p>

      <p>One image was used in the instructions for users:</p>
	  
      <p>P0. 	&#xA0;Alexandre Cabanel&#x2014;Cleopatra Testing Poisons
on Condemned Prisoners [1887].</p>

      <p>In this study, we used an SMI RED 500 eye-tracker. The
images were displayed on a color monitor with a
resolution of 1920x1200 pixels. The person being examined
sat in front of the monitor at a distance of
approximately 65 cm. The program for stimulus presentation and
for recording the reactions of the respondents was written in
E-Prime v2.0. The subjects answered the question of
esthetic evaluation using a keyboard with a variable
arrangement of keys.</p>

      <p>The task of the users was to watch a random sequence of
five test pictures. Their eye movements were recorded
while they were viewing the images, in the form of
fixations and fixation durations. The recording session lasted
approximately 20 min, including the time required for
calibration of the eye-tracker and giving instructions to
the user. The experiment consisted of the following
phases:</p>

      <p>1. Instruction on how to perform tasks in test phase,</p>
      <p>2. Eye-tracker calibration,</p>
      <p>3. Random drawing of an image,</p>
      <p>4. Watching the image for 15 s,</p>
      <p>5. Esthetic image evaluation.</p>

        <p>It needs to be highlighted that our aim was not to
classify experts and non-experts based on their esthetic
preferences. The idea was to check whether we can
distinguish experts from non-experts by the way they view the
images.</p>
    </sec>
	
    <sec id="S3">
      <title>Methods</title>
	  
      <p>We assumed that each image contains individual
ROIs that attract the attention of
experts and non-experts in different ways. Therefore, we specified a set of
ROIs for each image separately. For each ROI, we
calculated the following features: the number of fixations and
the average duration of fixations, which could enable
distinguishing an expert from a non-expert. We did not use
the diameter of the eye pupil as a feature, because it is significantly
linked to the brightness of the observed portion of
the image [
        <xref ref-type="bibr" rid="b22 b23">22,23</xref>
        ]. We deliberately limited ourselves to
static features related to the specified clusters. We did not
consider transitions between clusters, which might also be
useful [
        <xref ref-type="bibr" rid="b24">24</xref>
        ]. We are aware that this could limit the
classification accuracy, but the purpose of this article was
to examine static features only. In the first step, the
calculated features were used to train the classifier. Then, the
system was tested using a cross-validation (CV) test. A block
diagram of the proposed system is given in Fig. 1.</p>

<fig id="fig01" fig-type="figure" position="float">
					<label>Figure 1</label>
					<caption>
						<p>The steps of operation of the system for automatic recognition of experts</p>
					</caption>
					<graphic id="graph01" xlink:href="jemr-11-03-c-figure-01.png"/>
				</fig>

      <sec id="S3a">
        <title>Specification of ROI</title>
		
        <p>We considered several methods of specifying ROIs. The
simplest of them involved an arbitrary division of an
image into separate areas (e.g., rectangles). However, in
this case, both the selection of the size and the number of ROIs
posed a serious problem. Consequently, we decided that such a
simple division is unnatural and ineffective. Therefore,
we used the number of fixations to identify ROIs.</p>

        <p>To specify ROIs based on the registered fixations, many
clustering methods could be applied. The basis of most of
them is the similarity between elements, expressed by
some metric. Hierarchical methods, K-means, and fuzzy
cluster analysis are frequently used for that purpose [
          <xref ref-type="bibr" rid="b8">8</xref>
          ]. It
turns out that, depending on the nature of the observations,
the type of method used plays an important role. Not
without significance is the number of clusters into which
we want to divide the observations. In a number of
known methods, the researcher must decide on the
number of clusters. This makes the analysis more difficult and
requires the researcher&#x2019;s participation in working out the results.</p>

        <p>We decided to use expectation-maximization (EM)
clustering algorithm [
          <xref ref-type="bibr" rid="b4">4</xref>
		  ]. The Bayesian information criterion
(BIC) [
          <xref ref-type="bibr" rid="b9">9</xref>
		  ] was implemented to automatically determine
the number of meaningful clusters. In the EM algorithm, we
approximated the distribution of the observations (x, y)
using a mixture of normal probability density
functions [
          <xref ref-type="bibr" rid="b10">10</xref>
		  ]. Suppose that the probability
density function of observations x for K clusters is defined
as [
          <xref ref-type="bibr" rid="b11">11</xref>
		  ]:
		  
<fig id="eq01" fig-type="equation" position="anchor">
					<label>(1)</label>
					<graphic id="equation01" xlink:href="jemr-11-03-c-equation-01.png"/>
				</fig>		  

where f(x; &#x398;<sub>k</sub>) is the probability density function of the k-th
cluster with parameter &#x398;<sub>k</sub>, and &#x3C0;<sub>k</sub> is a mixture
weight. If f(x; &#x398;<sub>k</sub>) is a normal distribution function,
then &#x398;<sub>k</sub> = (&#x3BC;<sub>k</sub>, 𝕽<sub>k</sub>), where &#x3BC;<sub>k</sub> is the vector
of expected values of the observations and 𝕽<sub>k</sub> is the covariance
matrix. We can use the EM algorithm to determine
the expected values&#x2019; vectors and the covariance matrix of
the probability density function of the k-th cluster. Let us
define &#x3A8; = {&#x3C0;<sub>k</sub>, &#x398;<sub>k</sub>: k = 1, … , K} as a set of parameters
of the mixture of normal distributions. Then, the probability p<sub>ik</sub>
that observation x<sub>i</sub> belongs to the k-th cluster can be expressed
as [
          <xref ref-type="bibr" rid="b9">9</xref>
		  ]:
		  
<fig id="eq02" fig-type="equation" position="anchor">
					<label>(2)</label>
					<graphic id="equation02" xlink:href="jemr-11-03-c-equation-02.png"/>
				</fig>		  

This is a basic step of EM method, denoted as E. In the
following steps (called M), we can estimate the parameters
of f(x) [
          <xref ref-type="bibr" rid="b22">22</xref>
		  ]:
		  
<fig id="eq03" fig-type="equation" position="anchor">
					<label>(3)</label>
					<graphic id="equation03" xlink:href="jemr-11-03-c-equation-03.png"/>
				</fig>

<fig id="eq04" fig-type="equation" position="anchor">
					<label>(4)</label>
					<graphic id="equation04" xlink:href="jemr-11-03-c-equation-04.png"/>
				</fig>	

<fig id="eq05" fig-type="equation" position="anchor">
					<label>(5)</label>
					<graphic id="equation05" xlink:href="jemr-11-03-c-equation-05.png"/>
				</fig>				
  
where N is the number of fixations. Using this procedure
iteratively, starting from initial values of the
normal distributions and repeating the E and M steps, we can
guarantee that the log-likelihood of the observed
data does not decrease [
          <xref ref-type="bibr" rid="b22">22</xref>
          ]. This means that the
parameters &#x398;<sub>k</sub> converge to a local maximum of the
log-likelihood function. It should be noted that an
observation belongs to the k-th cluster when the value p<sub>ik</sub>/p<sub>k</sub>
is maximal, where p<sub>k</sub> = &#x2211;<sup>N</sup><sub>i=1</sub> p<sub>ik</sub>.</p>

        <p>Clusters were determined for all registered fixations
(of both experts and non-experts), as the large number of
fixations ensured that the calculated clusters can be interpreted as
representative ROIs.</p>
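        <p>As an illustration, the clustering step described above can be sketched with off-the-shelf tools. The snippet below is a minimal sketch under our own assumptions (synthetic fixation data; not the authors' code): it fits Gaussian mixtures by EM for several candidate numbers of clusters, keeps the model with the lowest BIC, and assigns each fixation to the cluster that maximizes p<sub>ik</sub>.</p>
<preformat>
```python
# Sketch (assumed implementation): EM clustering of fixations with
# the number of ROIs chosen by BIC, via scikit-learn's GaussianMixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic fixation coordinates (x, y) in pixels, drawn around three "ROIs"
fix = np.vstack([
    rng.normal([400, 300], 40, size=(60, 2)),
    rng.normal([1200, 500], 50, size=(80, 2)),
    rng.normal([800, 900], 30, size=(40, 2)),
])

# Fit a mixture for each candidate K and keep the model with minimal BIC
models = [GaussianMixture(n_components=k, random_state=0).fit(fix)
          for k in range(1, 9)]
best = min(models, key=lambda m: m.bic(fix))

# Hard assignment: each fixation goes to the cluster with maximal p_ik
labels = best.predict(fix)
```
</preformat>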
      </sec>
	  
      <sec id="S3b">
        <title>Feature extraction and selection</title>
		
        <p>A fixation is described by its location on the screen
(x,y) and/or its duration. Therefore, for each person and
each cluster k (ROI), we determined features associated
with fixations:</p>

        <p>&#x2022; l<sub>k</sub> &#x2013; the number of fixations in cluster k,</p>
        <p>&#x2022; t<sub>k</sub> &#x2013; the average fixation time in cluster k.</p>
		
        <p>Consequently, we calculated 2K features (two
features for each of the K clusters). Features were determined
without data normalization (method labeled Z&#x2070;) and for
three different normalization methods (labeled Z&#x00B9;, Z&#x00B2;, and Z&#x00B3;),
for which standardized z<sup>j</sup><sub>i</sub> values were calculated
according to the general rule:

<fig id="eq06" fig-type="equation" position="anchor">
					<label>(6)</label>
					<graphic id="equation06" xlink:href="jemr-11-03-c-equation-06.png"/>
				</fig>

where x<sub>i</sub> is the number of fixations or the fixation duration,
m is the mean value, &#x3C3; is the standard deviation, and j = 1, 2, or 3 denotes
the Z&#x00B9;, Z&#x00B2;, or Z&#x00B3; normalization method. In the case of Z&#x00B9;,
m&#x00B9; and &#x3C3;&#x00B9; refer to all data together. In the case of
Z&#x00B2;, m&#x00B2; and &#x3C3;&#x00B2; refer to individual users. In the case of
Z&#x00B3;, m&#x00B3; and &#x3C3;&#x00B3; refer to the individual users and the viewed
images. Thus, z<sub>i</sub>&#x00B3; takes into account individual
differences between the examined people separately for each
image.</p>
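        <p>A minimal sketch of how the three z-score variants might be computed (our assumption; the column names and toy data are hypothetical): Z&#x00B9; standardizes over all data together, Z&#x00B2; per user, and Z&#x00B3; per user and viewed image.</p>
<preformat>
```python
# Sketch (assumed implementation): the Z1/Z2/Z3 normalizations with pandas.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "user": np.repeat(["u1", "u2"], 10),
    "image": np.tile(["P1", "P2"], 10),
    "n_fix": rng.normal(10.0, 3.0, size=20),  # toy feature values
})

def z(x):
    # Standard z-score; ddof=0 matches the population formula (assumption)
    return (x - x.mean()) / x.std(ddof=0)

df["z1"] = z(df["n_fix"])                                       # all data together
df["z2"] = df.groupby("user")["n_fix"].transform(z)             # per user
df["z3"] = df.groupby(["user", "image"])["n_fix"].transform(z)  # per user and image
```
</preformat>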

        <p>After feature extraction, the resulting features were
assigned to specific ROIs. Not all features were equally
useful for classification; therefore, it seemed sensible to
perform feature selection. We used two well-known
feature selection methods: the t-statistic [
          <xref ref-type="bibr" rid="b23">23</xref>
          ] and sequential
forward selection (SFS) [
          <xref ref-type="bibr" rid="b25">25</xref>
          ]. The first is a ranking method that
determines the features best suited for
distinguishing the two classes. Having knowledge about the
observations for experts and non-experts, we were able to
compare the feature distributions for each ROI. This
method uses only the statistical distribution of the features;
the classification results are not taken into
consideration. Unfortunately, this method often
yields correlated features. The second
method, SFS, uses the classification accuracy
calculated for the tested feature subsets as its criterion. Consequently,
it selects features that are more
independent.</p>
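        <p>The two selection methods can be sketched as follows (a toy example under assumed data, not the study's pipeline): the t-statistic ranking scores each feature independently by how well it separates the classes, while SFS greedily grows a subset scored by cross-validated classification accuracy.</p>
<preformat>
```python
# Sketch (assumed tooling): t-statistic ranking and sequential forward
# selection (SFS), using scipy and scikit-learn.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_exp = rng.normal(0.0, 1.0, size=(23, 12))   # experts, 12 ROI features (toy)
X_non = rng.normal(0.3, 1.0, size=(21, 12))   # non-experts, shifted mean (toy)
X = np.vstack([X_exp, X_non])
y = np.array([1] * 23 + [0] * 21)

# t-statistic ranking: the largest absolute t separates the classes best
t, _ = ttest_ind(X_exp, X_non, axis=0)
top5_t = np.argsort(np.abs(t))[::-1][:5]

# SFS: adds features one by one, judged by cross-validated accuracy
sfs = SequentialFeatureSelector(SVC(kernel="linear"),
                                n_features_to_select=5, cv=5)
sfs.fit(X, y)
top5_sfs = np.flatnonzero(sfs.get_support())
```
</preformat>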
      </sec>
	  
      <sec id="S3c">
        <title>Classification</title>
		
        <p>We used the k-nearest neighbors (k-NN) classifier and
the support vector machine (SVM) with different types of
kernel functions. The k-NN classifier compares the values of
the explanatory variables from the test set with the
variables from the training set. The k nearest observations from the
training set are chosen, and the classification is made on
the basis of this choice. The definition of the &#x201C;nearest
observation&#x201D; boils down to minimizing a metric measuring the
distance between two observation vectors; we applied the
Euclidean metric. The k-NN classifier is especially useful
when the relationship between the explanatory variables
and the class is complex or unusual.</p>

        <p>The essence of the SVM method is the separation of a set of
samples from different classes by a hyperplane. SVM
enables classification of data of any structure, not just
linearly separable data. There are many ways of
determining the hyperplane by using different kernel
functions, but the quality of the resulting divisions is not always the
same. Applying a proper kernel function increases the
chances of improving the separability of the data and the
efficiency of classification. In our experiments, we used
a linear kernel, a sigmoid (MLP) kernel, and an RBF kernel
[
          <xref ref-type="bibr" rid="b11">11</xref>
          ].</p>
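        <p>All of the classifiers compared here are available off the shelf; a minimal sketch with toy data (our assumption: scikit-learn's &#x201C;sigmoid&#x201D; kernel standing in for the MLP kernel) might look as follows:</p>
<preformat>
```python
# Sketch (assumed setup): the k-NN and SVM variants used in the study,
# instantiated as scikit-learn estimators with toy data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(44, 5))            # 44 subjects, 5 selected features (toy)
y = (rng.random(44) > 0.5).astype(int)  # expert / non-expert labels (toy)

classifiers = {
    "3-NN":       KNeighborsClassifier(n_neighbors=3),  # Euclidean by default
    "SVM-linear": SVC(kernel="linear"),
    "SVM-MLP":    SVC(kernel="sigmoid"),
    "SVM-RBF":    SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    clf.fit(X, y)
```
</preformat>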
      </sec>
	  
      <sec id="S3d">
        <title>Training and Testing</title>

        <p>We decided not to use the same data at the training
and testing stages, so we implemented a leave-one-out test [
          <xref ref-type="bibr" rid="b11">11</xref>
          ]. It is a modified cross-validation (CV) test, in which all
N examples are divided into N subsets, each containing one
element. In our case, the data from only one user was
used for testing, whereas the data registered for all the other
users was used for training the classifier. This procedure
was repeated consecutively for all users, and then the
classification accuracies were averaged. This approach
ensures that the classifier was trained and tested on separate
data sets, and the subsequent averaging provided a correct
result.</p>
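        <p>The leave-one-out procedure can be sketched as follows (toy data; in the study, each left-out example corresponds to one user):</p>
<preformat>
```python
# Sketch (assumed procedure): leave-one-out evaluation with scikit-learn.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(44, 5))        # 44 users, 5 features (toy)
y = np.array([1] * 23 + [0] * 21)   # 23 experts, 21 non-experts

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    # Train on 43 users, test on the single held-out user
    clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
    correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

accuracy = correct / len(y)         # averaged over all 44 folds
```
</preformat>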
      </sec>
    </sec>
	
    <sec id="S4">
      <title>Results</title>
	  
      <p>The results comprise the classification accuracies for
the two classes: experts and non-experts. Classification
accuracy was defined as the sum of true positives and true
negatives divided by the number of all examples. Tables
1&#x2013;7 present the classification accuracies for the respective
images (P1&#x2013;P5) and the various methods of data
standardization (Z&#x2070;, Z&#x00B9;&#x2013;Z&#x00B3;). All details, such as the type of classifier,
the number of features, and the feature selection method, are
given in the table headers. For classification, we used a varying number
of features (10, 5, and 3) selected using the t-statistic or SFS.
The classification results presented in
this study show that it is possible to distinguish an expert
from a non-expert using oculographic signals. We obtained
the highest average classification accuracy for the SVM&#x2013;
MLP method with the five best features and the SFS selection
method (Table 7).</p>
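      <p>The accuracy measure defined above, (TP + TN) / N, can be computed directly from the predicted and true labels; a small worked example (toy labels of our own choosing):</p>
<preformat>
```python
# Sketch: classification accuracy as defined in the text, (TP + TN) / N.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 0])   # 1 = expert, 0 = non-expert (toy)
y_pred = np.array([1, 0, 0, 0, 1, 1])

tp = int(np.sum(np.logical_and(y_pred == 1, y_true == 1)))  # true positives
tn = int(np.sum(np.logical_and(y_pred == 0, y_true == 0)))  # true negatives
accuracy = (tp + tn) / len(y_true)      # fraction correctly classified
```
</preformat>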

<table-wrap id="t01" position="float">
					<label>Table 1.</label>
					<caption>
						<p>Classification accuracies for SVM-MLP method, 10-best features selected using t-statistic.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">Picture</td>
            <td rowspan="1" colspan="1">Z&#x2070;</td>
            <td rowspan="1" colspan="1">Z&#x00B9;</td>
            <td rowspan="1" colspan="1">Z&#x00B2;</td>
            <td rowspan="1" colspan="1">Z&#x00B3;</td>
            <td rowspan="1" colspan="1">mean</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">P1</td>
            <td rowspan="1" colspan="1">0.75</td>
            <td rowspan="1" colspan="1">0.89</td>
            <td rowspan="1" colspan="1">0.64</td>
            <td rowspan="1" colspan="1">0.67</td>
            <td rowspan="1" colspan="1">0.74</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P2</td>
            <td rowspan="1" colspan="1">0.51</td>
            <td rowspan="1" colspan="1">0.77</td>
            <td rowspan="1" colspan="1">0.67</td>
            <td rowspan="1" colspan="1">0.62</td>
            <td rowspan="1" colspan="1">0.64</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P3</td>
            <td rowspan="1" colspan="1">0.57</td>
            <td rowspan="1" colspan="1">0.54</td>
            <td rowspan="1" colspan="1">0.74</td>
            <td rowspan="1" colspan="1">0.71</td>
            <td rowspan="1" colspan="1">0.64</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P4</td>
            <td rowspan="1" colspan="1">0.65</td>
            <td rowspan="1" colspan="1">0.54</td>
            <td rowspan="1" colspan="1">0.84</td>
            <td rowspan="1" colspan="1">0.62</td>
            <td rowspan="1" colspan="1">0.66</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P5</td>
            <td rowspan="1" colspan="1">0.73</td>
            <td rowspan="1" colspan="1">0.73</td>
            <td rowspan="1" colspan="1">0.78</td>
            <td rowspan="1" colspan="1">0.63</td>
            <td rowspan="1" colspan="1">0.72</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">mean</td>
            <td rowspan="1" colspan="1">0.64</td>
            <td rowspan="1" colspan="1">0.7</td>
            <td rowspan="1" colspan="1">0.73</td>
            <td rowspan="1" colspan="1">0.65</td>
            <td rowspan="1" colspan="1"/>
          </tr>
						</tbody>
					</table>
					</table-wrap>	
					
<table-wrap id="t02" position="float">
					<label>Table 2.</label>
					<caption>
						<p>Classification accuracies for 3-NN method, 10-best features selected using t-statistic.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">Picture</td>
            <td rowspan="1" colspan="1">Z&#x2070;</td>
            <td rowspan="1" colspan="1">Z&#x00B9;</td>
            <td rowspan="1" colspan="1">Z&#x00B2;</td>
            <td rowspan="1" colspan="1">Z&#x00B3;</td>
            <td rowspan="1" colspan="1">mean</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">P1</td>
            <td rowspan="1" colspan="1">0.75</td>
            <td rowspan="1" colspan="1">0.75</td>
            <td rowspan="1" colspan="1">0.78</td>
            <td rowspan="1" colspan="1">0.81</td>
            <td rowspan="1" colspan="1">0.77</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P2</td>
            <td rowspan="1" colspan="1">0.54</td>
            <td rowspan="1" colspan="1">0.56</td>
            <td rowspan="1" colspan="1">0.59</td>
            <td rowspan="1" colspan="1">0.59</td>
            <td rowspan="1" colspan="1">0.57</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P3</td>
            <td rowspan="1" colspan="1">0.34</td>
            <td rowspan="1" colspan="1">0.6</td>
            <td rowspan="1" colspan="1">0.6</td>
            <td rowspan="1" colspan="1">0.6</td>
            <td rowspan="1" colspan="1">0.54</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P4</td>
            <td rowspan="1" colspan="1">0.68</td>
            <td rowspan="1" colspan="1">0.65</td>
            <td rowspan="1" colspan="1">0.62</td>
            <td rowspan="1" colspan="1">0.65</td>
            <td rowspan="1" colspan="1">0.65</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P5</td>
            <td rowspan="1" colspan="1">0.65</td>
            <td rowspan="1" colspan="1">0.55</td>
            <td rowspan="1" colspan="1">0.58</td>
            <td rowspan="1" colspan="1">0.55</td>
            <td rowspan="1" colspan="1">0.58</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">mean</td>
            <td rowspan="1" colspan="1">0.59</td>
            <td rowspan="1" colspan="1">0.62</td>
            <td rowspan="1" colspan="1">0.63</td>
            <td rowspan="1" colspan="1">0.64</td>
            <td rowspan="1" colspan="1"/>
          </tr>
						</tbody>
					</table>
					</table-wrap>	

<table-wrap id="t03" position="float">
					<label>Table 3.</label>
					<caption>
						<p>Classification accuracies for SVM-linear method, 10-best features selected using t-statistic.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">Picture</td>
            <td rowspan="1" colspan="1">Z&#x2070;</td>
            <td rowspan="1" colspan="1">Z&#x00B9;</td>
            <td rowspan="1" colspan="1">Z&#x00B2;</td>
            <td rowspan="1" colspan="1">Z&#x00B3;</td>
            <td rowspan="1" colspan="1">mean</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">P1</td>
            <td rowspan="1" colspan="1">0.72</td>
            <td rowspan="1" colspan="1">0.75</td>
            <td rowspan="1" colspan="1">0.75</td>
            <td rowspan="1" colspan="1">0.81</td>
            <td rowspan="1" colspan="1">0.76</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P2</td>
            <td rowspan="1" colspan="1">0.69</td>
            <td rowspan="1" colspan="1">0.77</td>
            <td rowspan="1" colspan="1">0.64</td>
            <td rowspan="1" colspan="1">0.56</td>
            <td rowspan="1" colspan="1">0.67</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P3</td>
            <td rowspan="1" colspan="1">0.69</td>
            <td rowspan="1" colspan="1">0.69</td>
            <td rowspan="1" colspan="1">0.57</td>
            <td rowspan="1" colspan="1">0.6</td>
            <td rowspan="1" colspan="1">0.64</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P4</td>
            <td rowspan="1" colspan="1">0.78</td>
            <td rowspan="1" colspan="1">0.65</td>
            <td rowspan="1" colspan="1">0.68</td>
            <td rowspan="1" colspan="1">0.7</td>
            <td rowspan="1" colspan="1">0.70</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P5</td>
            <td rowspan="1" colspan="1">0.7</td>
            <td rowspan="1" colspan="1">0.7</td>
            <td rowspan="1" colspan="1">0.73</td>
            <td rowspan="1" colspan="1">0.68</td>
            <td rowspan="1" colspan="1">0.70</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">mean</td>
            <td rowspan="1" colspan="1">0.72</td>
            <td rowspan="1" colspan="1">0.71</td>
            <td rowspan="1" colspan="1">0.67</td>
            <td rowspan="1" colspan="1">0.67</td>
            <td rowspan="1" colspan="1"/>
          </tr>
						</tbody>
					</table>
					</table-wrap>	

<table-wrap id="t04" position="float">
					<label>Table 4.</label>
					<caption>
						<p>Classification accuracies for SVM-RBF method, 10-best features selected using t-statistic.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">Picture</td>
            <td rowspan="1" colspan="1">Z&#x2070;</td>
            <td rowspan="1" colspan="1">Z&#x00B9;</td>
            <td rowspan="1" colspan="1">Z&#x00B2;</td>
            <td rowspan="1" colspan="1">Z&#x00B3;</td>
            <td rowspan="1" colspan="1">mean</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">P1</td>
            <td rowspan="1" colspan="1">0.58</td>
            <td rowspan="1" colspan="1">0.53</td>
            <td rowspan="1" colspan="1">0.64</td>
            <td rowspan="1" colspan="1">0.67</td>
            <td rowspan="1" colspan="1">0.61</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P2</td>
            <td rowspan="1" colspan="1">0.62</td>
            <td rowspan="1" colspan="1">0.64</td>
            <td rowspan="1" colspan="1">0.44</td>
            <td rowspan="1" colspan="1">0.49</td>
            <td rowspan="1" colspan="1">0.55</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P3</td>
            <td rowspan="1" colspan="1">0.57</td>
            <td rowspan="1" colspan="1">0.66</td>
            <td rowspan="1" colspan="1">0.54</td>
            <td rowspan="1" colspan="1">0.51</td>
            <td rowspan="1" colspan="1">0.57</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P4</td>
            <td rowspan="1" colspan="1">0.62</td>
            <td rowspan="1" colspan="1">0.65</td>
            <td rowspan="1" colspan="1">0.59</td>
            <td rowspan="1" colspan="1">0.62</td>
            <td rowspan="1" colspan="1">0.62</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P5</td>
            <td rowspan="1" colspan="1">0.60</td>
            <td rowspan="1" colspan="1">0.50</td>
            <td rowspan="1" colspan="1">0.55</td>
            <td rowspan="1" colspan="1">0.60</td>
            <td rowspan="1" colspan="1">0.56</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">mean</td>
            <td rowspan="1" colspan="1">0.59</td>
            <td rowspan="1" colspan="1">0.59</td>
            <td rowspan="1" colspan="1">0.55</td>
            <td rowspan="1" colspan="1">0.57</td>
            <td rowspan="1" colspan="1"/>
          </tr>
						</tbody>
					</table>
					</table-wrap>	

<table-wrap id="t05" position="float">
					<label>Table 5.</label>
					<caption>
						<p>Classification accuracies for SVM-MLP method, 5-best features selected using t-statistic.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">Picture</td>
            <td rowspan="1" colspan="1">Z&#x2070;</td>
            <td rowspan="1" colspan="1">Z&#x00B9;</td>
            <td rowspan="1" colspan="1">Z&#x00B2;</td>
            <td rowspan="1" colspan="1">Z&#x00B3;</td>
            <td rowspan="1" colspan="1">mean</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">P1</td>
            <td rowspan="1" colspan="1">0.67</td>
            <td rowspan="1" colspan="1">0.86</td>
            <td rowspan="1" colspan="1">0.64</td>
            <td rowspan="1" colspan="1">0.89</td>
            <td rowspan="1" colspan="1">0.77</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P2</td>
            <td rowspan="1" colspan="1">0.51</td>
            <td rowspan="1" colspan="1">0.46</td>
            <td rowspan="1" colspan="1">0.51</td>
            <td rowspan="1" colspan="1">0.77</td>
            <td rowspan="1" colspan="1">0.56</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P3</td>
            <td rowspan="1" colspan="1">0.63</td>
            <td rowspan="1" colspan="1">0.63</td>
            <td rowspan="1" colspan="1">0.69</td>
            <td rowspan="1" colspan="1">0.51</td>
            <td rowspan="1" colspan="1">0.62</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P4</td>
            <td rowspan="1" colspan="1">0.57</td>
            <td rowspan="1" colspan="1">0.57</td>
            <td rowspan="1" colspan="1">0.70</td>
            <td rowspan="1" colspan="1">0.81</td>
            <td rowspan="1" colspan="1">0.66</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P5</td>
            <td rowspan="1" colspan="1">0.75</td>
            <td rowspan="1" colspan="1">0.55</td>
            <td rowspan="1" colspan="1">0.60</td>
            <td rowspan="1" colspan="1">0.65</td>
            <td rowspan="1" colspan="1">0.64</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">mean</td>
            <td rowspan="1" colspan="1">0.62</td>
            <td rowspan="1" colspan="1">0.61</td>
            <td rowspan="1" colspan="1">0.62</td>
            <td rowspan="1" colspan="1">0.72</td>
            <td rowspan="1" colspan="1"/>
          </tr>
						</tbody>
					</table>
					</table-wrap>

<table-wrap id="t06" position="float">
					<label>Table 6.</label>
					<caption>
						<p>Classification accuracies for SVM-MLP method, 3-best features selected using SFS.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">Picture</td>
            <td rowspan="1" colspan="1">Z&#x2070;</td>
            <td rowspan="1" colspan="1">Z&#x00B9;</td>
            <td rowspan="1" colspan="1">Z&#x00B2;</td>
            <td rowspan="1" colspan="1">Z&#x00B3;</td>
            <td rowspan="1" colspan="1">mean</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">P1</td>
            <td rowspan="1" colspan="1">0.72</td>
            <td rowspan="1" colspan="1">0.78</td>
            <td rowspan="1" colspan="1">0.83</td>
            <td rowspan="1" colspan="1">0.89</td>
            <td rowspan="1" colspan="1">0.81</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P2</td>
            <td rowspan="1" colspan="1">0.72</td>
            <td rowspan="1" colspan="1">0.74</td>
            <td rowspan="1" colspan="1">0.59</td>
            <td rowspan="1" colspan="1">0.69</td>
            <td rowspan="1" colspan="1">0.69</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P3</td>
            <td rowspan="1" colspan="1">0.83</td>
            <td rowspan="1" colspan="1">0.77</td>
            <td rowspan="1" colspan="1">0.71</td>
            <td rowspan="1" colspan="1">0.57</td>
            <td rowspan="1" colspan="1">0.72</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P4</td>
            <td rowspan="1" colspan="1">0.70</td>
            <td rowspan="1" colspan="1">0.76</td>
            <td rowspan="1" colspan="1">0.76</td>
            <td rowspan="1" colspan="1">0.62</td>
            <td rowspan="1" colspan="1">0.71</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P5</td>
            <td rowspan="1" colspan="1">0.83</td>
            <td rowspan="1" colspan="1">0.65</td>
            <td rowspan="1" colspan="1">0.70</td>
            <td rowspan="1" colspan="1">0.80</td>
            <td rowspan="1" colspan="1">0.75</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">mean</td>
            <td rowspan="1" colspan="1">0.76</td>
            <td rowspan="1" colspan="1">0.74</td>
            <td rowspan="1" colspan="1">0.72</td>
            <td rowspan="1" colspan="1">0.71</td>
            <td rowspan="1" colspan="1"/>
          </tr>
						</tbody>
					</table>
					</table-wrap>

<table-wrap id="t07" position="float">
					<label>Table 7.</label>
					<caption>
						<p>Classification accuracies for SVM-MLP method, 5-best features selected using SFS.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">Picture</td>
            <td rowspan="1" colspan="1">Z&#x2070;</td>
            <td rowspan="1" colspan="1">Z&#x00B9;</td>
            <td rowspan="1" colspan="1">Z&#x00B2;</td>
            <td rowspan="1" colspan="1">Z&#x00B3;</td>
            <td rowspan="1" colspan="1">mean</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">P1</td>
            <td rowspan="1" colspan="1">0.75</td>
            <td rowspan="1" colspan="1">0.89</td>
            <td rowspan="1" colspan="1">0.92</td>
            <td rowspan="1" colspan="1">0.81</td>
            <td rowspan="1" colspan="1">0.84</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P2</td>
            <td rowspan="1" colspan="1">0.72</td>
            <td rowspan="1" colspan="1">0.51</td>
            <td rowspan="1" colspan="1">0.79</td>
            <td rowspan="1" colspan="1">0.72</td>
            <td rowspan="1" colspan="1">0.69</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P3</td>
            <td rowspan="1" colspan="1">0.69</td>
            <td rowspan="1" colspan="1">0.80</td>
            <td rowspan="1" colspan="1">0.60</td>
            <td rowspan="1" colspan="1">0.69</td>
            <td rowspan="1" colspan="1">0.70</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P4</td>
            <td rowspan="1" colspan="1">0.62</td>
            <td rowspan="1" colspan="1">0.65</td>
            <td rowspan="1" colspan="1">0.76</td>
            <td rowspan="1" colspan="1">0.81</td>
            <td rowspan="1" colspan="1">0.71</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">P5</td>
            <td rowspan="1" colspan="1">0.73</td>
            <td rowspan="1" colspan="1">0.78</td>
            <td rowspan="1" colspan="1">0.73</td>
            <td rowspan="1" colspan="1">0.78</td>
            <td rowspan="1" colspan="1">0.76</td>
          </tr>
          <tr>
            <td rowspan="1" colspan="1">mean</td>
            <td rowspan="1" colspan="1">0.70</td>
            <td rowspan="1" colspan="1">0.72</td>
            <td rowspan="1" colspan="1">0.76</td>
            <td rowspan="1" colspan="1">0.76</td>
            <td rowspan="1" colspan="1"/>
          </tr>
						</tbody>
					</table>
					</table-wrap>					

      <p>For this case, the average classification accuracy for
all images was 0.74. For the considered combination of
algorithms (SVM&#x2013;MLP classifier, five best features, SFS
selection method), we obtained classification accuracies
of 0.84 for image P1, 0.69 for P2, 0.70 for P3, 0.71
for P4, and 0.76 for P5. The classification accuracies
averaged over all tested methods were: 0.74 for image
P1, 0.64 for P2, 0.64 for P3, 0.66 for P4, and 0.72 for P5.</p>
    </sec>
	
    <sec id="S5">
      <title>Discussion</title>
	  
      <p>Fig. 2 presents the result of clustering with the EM
method. The chosen number of clusters is eight. Each
fixation belonging to a cluster is located near its center
of gravity. The EM algorithm builds clusters from the
statistical distributions of the data, so omitting
several fixations does not affect the determination of clusters:
the lack of some fixations does not disrupt the calculation of
the statistical parameters [
        <xref ref-type="bibr" rid="b11">11</xref>
        ]. The method of specifying the
clusters has a significant influence on the further steps of the
quantitative description of a cluster. For example, cluster #1
can easily be interpreted as being associated with the
natural concentration of attention on the woman&#x2019;s face. Similarly,
cluster #7 (brown) can be interpreted as associated with the
concentration of attention on the man&#x2019;s hand. The EM method
takes into account statistical dependencies in the
distribution of fixations and, to a large extent, allows the
specification of clusters that can be interpreted semantically.
Mixtures of normal distributions give very good results
for clusters of elliptical shape. It was also found that the result
of grouping with the EM algorithm is sensitive to the initial
&#x398;<sub>k</sub> parameters. To address this, the algorithm can be repeated many
times with different initial parameters, and the solution that
best meets the BIC can then be chosen.</p>
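EM clustering of fixations with repeated random restarts can be sketched with a Gaussian mixture model. This is an illustrative example on synthetic fixation coordinates, assuming scikit-learn's `GaussianMixture`; the coordinates, covariances, and cluster count are invented for the sketch.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic fixation coordinates (x, y) in pixels: three elliptical blobs,
# the shape for which normal-mixture clustering works particularly well.
fix = np.vstack([
    rng.multivariate_normal([200, 150], [[400, 120], [120, 200]], 80),
    rng.multivariate_normal([600, 300], [[300, -90], [-90, 250]], 80),
    rng.multivariate_normal([400, 500], [[250, 0], [0, 400]], 80),
])

# n_init restarts EM from different initial parameters (Theta_k)
# and keeps the best solution, addressing the sensitivity to
# initialization noted above.
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      n_init=10, random_state=0)
labels = gmm.fit_predict(fix)
```

Each fitted component's mean and covariance define the center of gravity and ellipse of one cluster.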

<fig id="fig02" fig-type="figure" position="float">
					<label>Figure 2</label>
					<caption>
						<p>Result of clustering for EM method.</p>
					</caption>
					<graphic id="graph02" xlink:href="jemr-11-03-c-figure-02.png"/>
				</fig>

      <p>We assumed that the method of data normalization
could significantly affect classification accuracy;
however, no such relationship was found: normalization
had no significant impact on classification accuracy.
Average accuracies for the tested classifiers and the
different data normalization methods are presented in Table 8.</p>
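Typical normalization schemes of the kind compared here can be sketched as below. The mapping of these variants to the paper's Z&#x2070;&#x2013;Z&#x00B3; labels is an assumption for illustration only; the paper does not define them in this section.

```python
import numpy as np

def normalize(X, method):
    # Hypothetical variants; the paper's Z0-Z3 definitions may differ.
    if method == "none":        # raw features, no scaling
        return X
    if method == "minmax":      # scale each feature to [0, 1]
        span = X.max(axis=0) - X.min(axis=0)
        return (X - X.min(axis=0)) / np.where(span == 0, 1, span)
    if method == "zscore":      # zero mean, unit variance per feature
        sd = X.std(axis=0)
        return (X - X.mean(axis=0)) / np.where(sd == 0, 1, sd)
    if method == "unit":        # scale each sample to unit Euclidean norm
        n = np.linalg.norm(X, axis=1, keepdims=True)
        return X / np.where(n == 0, 1, n)
    raise ValueError(method)

# Example: fixation count and mean duration (ms) for three observers
X = np.array([[10.0, 200.0], [20.0, 180.0], [30.0, 220.0]])
Xz = normalize(X, "zscore")
```

Comparing classifier accuracy across such variants, as in Table 8, shows whether the scaling choice matters for a given classifier.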

<table-wrap id="t08" position="float">
					<label>Table 8.</label>
					<caption>
						<p>Average classification accuracies for different data normalization methods.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
          <tr>
            <td rowspan="1" colspan="1">Method</td>
            <td rowspan="1" colspan="1">Z&#x2070;</td>
            <td rowspan="1" colspan="1">Z&#x00B9;</td>
            <td rowspan="1" colspan="1">Z&#x00B2;</td>
            <td rowspan="1" colspan="1">Z&#x00B3;</td>
          </tr>
						</thead>
						<tbody>
          <tr>
            <td rowspan="1" colspan="1">Average accuracy</td>
            <td rowspan="1" colspan="1">0.66</td>
            <td rowspan="1" colspan="1">0.67</td>
            <td rowspan="1" colspan="1">0.67</td>
            <td rowspan="1" colspan="1">0.68</td>
          </tr>
						</tbody>
					</table>
					</table-wrap>	

      <p>At the classification stage, we used two kinds of features:
the number of fixations in a cluster and the average fixation
duration for a cluster. It was worthwhile to determine which
feature better distinguishes an expert from a
non-expert. For this purpose, we calculated the sum of t-values over all clusters
of the individual pictures (Table 9). The better
feature for distinguishing experts from non-experts turned out
to be the average fixation duration.</p>
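The per-cluster comparison of the two groups can be sketched with an independent-samples t-test. This is a synthetic-data illustration, assuming `scipy.stats.ttest_ind`; the group sizes, means, and cluster count are invented for the example.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
# Average fixation duration (ms) per cluster: rows = observers,
# columns = the clusters found by EM for one picture.
experts = rng.normal(loc=210, scale=30, size=(20, 14))
novices = rng.normal(loc=190, scale=30, size=(19, 14))

# One t-test per cluster (column); the sum of |t| over clusters gives a
# single separability score for the feature, as used in Table 9.
t_vals, p_vals = ttest_ind(experts, novices, axis=0)
score = np.abs(t_vals).sum()
best_cluster = int(np.argmin(p_vals))  # cluster with the smallest p-value
```

The same computation repeated for the fixation-count feature allows the two feature types to be ranked, as done in the text.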

      <p>The average of the sum of t-values over the individual
images was 7.72 for the number of fixations and 13.12 for
the average fixation duration (as features). The
t-values and p-values were calculated for the data
divided into two sets: experts versus non-experts. The
p-values showed that the features calculated for certain clusters
make it possible to distinguish experts from non-experts. For
the average fixation duration of the best cluster, the only image
with no significant difference was P5 (p&#x3E;0.05). The
average p-value calculated over the best clusters of all
images was 0.03 for the average fixation duration, whereas it
was 0.26 for the number of fixations. This confirms that the
average fixation duration is a better feature than the number
of fixations for distinguishing experts from non-experts.</p>

      <p>An important element of the developed EM algorithm
was assigning fixations to specific clusters and
determining the appropriate number of clusters. The
optimal number of clusters for each image, calculated
using the BIC, is listed in Table 9. Proper cluster
determination was significantly affected by the number of
registered fixations: too few fixations could be
insufficient to calculate representative clusters covering
all ROIs. The dependence of the BIC value on the number of
clusters for the P2 picture is illustrated in Fig. 3. In this case,
the smallest BIC value (3.59&#xD7;10<sup>-5</sup>) was obtained for 14 clusters.
The division of fixations into clusters for different
assumed numbers of clusters is presented in Figs. 4&#x2013;6. For
the case presented in Fig. 4, three clusters of fixations
were created. It can easily be observed that this is not an
optimal division; intuitively, more clusters should be
selected in this case. Although cluster #2 represented
fixations on one face, there were not enough clusters
representing the other faces.</p>
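Selecting the number of clusters by sweeping K and minimizing the BIC, as in Fig. 3, can be sketched as follows. The data and the K range are synthetic assumptions for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Synthetic fixations drawn from four well-separated regions of interest
centers = ([100, 100], [300, 120], [200, 300], [350, 320])
fix = np.vstack([rng.normal(c, 15, size=(60, 2)) for c in centers])

# Fit a Gaussian mixture for each candidate K and record its BIC;
# the K with the smallest BIC is taken as the optimal cluster count.
bic = {}
for k in range(2, 9):
    gmm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(fix)
    bic[k] = gmm.bic(fix)
best_k = min(bic, key=bic.get)
```

Plotting `bic` against K reproduces the kind of curve shown in Fig. 3, with the minimum marking the chosen number of clusters.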

<table-wrap id="t09" position="float">
					<label>Table 9.</label>
					<caption>
						<p>T-values, p-values, and the optimal number of clusters according to the BIC criterion.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
  <tr>
    <td rowspan="1" colspan="1">Parameters</td>
    <td rowspan="1" colspan="1">P1</td>
    <td rowspan="1" colspan="1">P2</td>
    <td rowspan="1" colspan="1">P3</td>
    <td rowspan="1" colspan="1">P4</td>
    <td rowspan="1" colspan="1">P5</td>
    <td rowspan="1" colspan="1">Mean</td>
  </tr>
						</thead>
						<tbody>
  <tr>
    <td rowspan="1" colspan="1">The sum of t-values for the number of fixations</td>
    <td rowspan="1" colspan="1">8.26</td>
    <td rowspan="1" colspan="1">8.54</td>
    <td rowspan="1" colspan="1">9.81</td>
    <td rowspan="1" colspan="1">7.00</td>
    <td rowspan="1" colspan="1">4.98</td>
    <td rowspan="1" colspan="1">7.72</td>
  </tr>
  <tr>
    <td rowspan="1" colspan="1">The sum of t-values for the average fixation duration</td>
    <td rowspan="1" colspan="1">16.35</td>
    <td rowspan="1" colspan="1">13.65</td>
    <td rowspan="1" colspan="1">13.08</td>
    <td rowspan="1" colspan="1">10.96</td>
    <td rowspan="1" colspan="1">11.54</td>
    <td rowspan="1" colspan="1">13.12</td>
  </tr>
  <tr>
    <td rowspan="1" colspan="1">The optimal number of clusters for the BIC criterion</td>
    <td rowspan="1" colspan="1">11</td>
    <td rowspan="1" colspan="1">14</td>
    <td rowspan="1" colspan="1">12</td>
    <td rowspan="1" colspan="1">12</td>
    <td rowspan="1" colspan="1">12</td>
    <td rowspan="1" colspan="1">12.2</td>
  </tr>
  <tr>
    <td rowspan="1" colspan="1">p-value for the best cluster for the number of fixations</td>
    <td rowspan="1" colspan="1">0.114</td>
    <td rowspan="1" colspan="1">0.071</td>
    <td rowspan="1" colspan="1">0.853</td>
    <td rowspan="1" colspan="1">0.229</td>
    <td rowspan="1" colspan="1">0.047</td>
    <td rowspan="1" colspan="1">0.26</td>
  </tr>
  <tr>
    <td rowspan="1" colspan="1">p-value for the best cluster for the average fixation duration</td>
    <td rowspan="1" colspan="1">0.001</td>
    <td rowspan="1" colspan="1">0.038</td>
    <td rowspan="1" colspan="1">0.041</td>
    <td rowspan="1" colspan="1">0.022</td>
    <td rowspan="1" colspan="1">0.055</td>
    <td rowspan="1" colspan="1">0.03</td>
  </tr>
						</tbody>
					</table>
					</table-wrap>

<fig id="fig03" fig-type="figure" position="float">
					<label>Figure 3</label>
					<caption>
						<p>The dependence of the BIC value on number of clusters for the P2 image.</p>
					</caption>
					<graphic id="graph03" xlink:href="jemr-11-03-c-figure-03.png"/>
				</fig>
				
<fig id="fig04" fig-type="figure" position="float">
					<label>Figure 4</label>
					<caption>
						<p>Fixation assignment to 3 clusters for the P2 image.</p>
					</caption>
					<graphic id="graph04" xlink:href="jemr-11-03-c-figure-04.png"/>
				</fig>				

      <p>For the case of K=6 (Fig. 5), the situation improves,
but the number of clusters is still too small. Only for
K=14 (Fig. 6) could the clusters be interpreted as
representative ROIs. Thus, clusters #1, #2, and #3 can be
interpreted as the ROIs associated with the faces of individual
characters, cluster #4 is associated with a sword, and so
on. For greater clarity, Fig. 6 contains only the ellipses of the 14
clusters for the P2 image.</p>

<fig id="fig05" fig-type="figure" position="float">
					<label>Figure 5</label>
					<caption>
						<p>Fixation assignment to 6 clusters for the P2 image.</p>
					</caption>
					<graphic id="graph05" xlink:href="jemr-11-03-c-figure-05.png"/>
				</fig>	
				
<fig id="fig06" fig-type="figure" position="float">
					<label>Figure 6</label>
					<caption>
						<p>Ellipses for 14 clusters in P2 image.</p>
					</caption>
					<graphic id="graph06" xlink:href="jemr-11-03-c-figure-06.png"/>
				</fig>	

<fig id="fig07" fig-type="figure" position="float">
					<label>Figure 7</label>
					<caption>
						<p>Distribution of the two features (number of fixations and average fixation duration for a cluster) for the groups of experts and non-experts.</p>
					</caption>
					<graphic id="graph07" xlink:href="jemr-11-03-c-figure-07.png"/>
				</fig>					

      <p>Table 10 presents the average feature values (number
of fixations and mean fixation time) for experts and non-experts,
and the p-values calculated for each cluster of the P2 image.
It can be seen that there are two clusters for which the
distributions of the features suggest significant differences
(p&#x3C;0.1) between the groups of experts and non-experts (cluster
#1 and cluster #10). For cluster #1, the average number of
fixations was 13.6 for experts and 18.3 for non-experts (p=0.07). The
greatest statistical significance (p=0.04) was found for cluster
#10, with the average fixation duration as the feature. For
experts, the average fixation duration was 119.3 ms and for
non-experts 128.1 ms. This is consistent with results
obtained by other research groups.</p>

<table-wrap id="t10" position="float">
					<label>Table 10.</label>
					<caption>
						<p>Average feature values (number of fixations and mean fixation times) for experts and non-experts, and p-values calculated for the individual clusters of the P2 image.</p>
					</caption>
					<table frame="hsides" rules="groups" cellpadding="3">
						<thead>
  <tr>						
    <td rowspan="1" colspan="1"/>
    <td rowspan="1" colspan="14" style="text-align: center;">Cluster number</td>			
  </tr>	
  <tr>						
    <td rowspan="1" colspan="1"/>
    <td rowspan="1" colspan="1">1</td>
    <td rowspan="1" colspan="1">2</td>	
    <td rowspan="1" colspan="1">3</td>	
    <td rowspan="1" colspan="1">4</td>	
    <td rowspan="1" colspan="1">5</td>	
    <td rowspan="1" colspan="1">6</td>	
    <td rowspan="1" colspan="1">7</td>	
    <td rowspan="1" colspan="1">8</td>	
    <td rowspan="1" colspan="1">9</td>	
    <td rowspan="1" colspan="1">10</td>	
    <td rowspan="1" colspan="1">11</td>	
    <td rowspan="1" colspan="1">12</td>	
    <td rowspan="1" colspan="1">13</td>	
    <td rowspan="1" colspan="1">14</td>			
  </tr>	
  
						</thead>
						<tbody>
  <tr>
    <td rowspan="1" colspan="1">The average number of fixations: experts</td>
    <td rowspan="1" colspan="1">13.6</td>
    <td rowspan="1" colspan="1">10.9</td>
    <td rowspan="1" colspan="1">4.9</td>
    <td rowspan="1" colspan="1">4.6</td>
    <td rowspan="1" colspan="1">3.9</td>
    <td rowspan="1" colspan="1">3.5</td>
    <td rowspan="1" colspan="1">3.3</td>
    <td rowspan="1" colspan="1">3.15</td>
    <td rowspan="1" colspan="1">2.55</td>
    <td rowspan="1" colspan="1">1.45</td>
    <td rowspan="1" colspan="1">1.35</td>
    <td rowspan="1" colspan="1">0.85</td>
    <td rowspan="1" colspan="1">0.7</td>
    <td rowspan="1" colspan="1">0.65</td>
  </tr>
  <tr>
    <td rowspan="1" colspan="1">The average number of fixations: non-experts</td>
    <td rowspan="1" colspan="1">18.3</td>
    <td rowspan="1" colspan="1">9.4</td>
    <td rowspan="1" colspan="1">6.58</td>
    <td rowspan="1" colspan="1">5.26</td>
    <td rowspan="1" colspan="1">4.47</td>
    <td rowspan="1" colspan="1">3.89</td>
    <td rowspan="1" colspan="1">3.58</td>
    <td rowspan="1" colspan="1">2.47</td>
    <td rowspan="1" colspan="1">2.32</td>
    <td rowspan="1" colspan="1">1.95</td>
    <td rowspan="1" colspan="1">1.79</td>
    <td rowspan="1" colspan="1">0.74</td>
    <td rowspan="1" colspan="1">0.68</td>
    <td rowspan="1" colspan="1">0.37</td>
  </tr>
 						</tbody> 
						<tbody>  
  <tr>
    <td rowspan="1" colspan="1">p for the number of fixations</td>
    <td rowspan="1" colspan="1">0.07</td>
    <td rowspan="1" colspan="1">0.37</td>
    <td rowspan="1" colspan="1">0.17</td>
    <td rowspan="1" colspan="1">0.47</td>
    <td rowspan="1" colspan="1">0.72</td>
    <td rowspan="1" colspan="1">0.76</td>
    <td rowspan="1" colspan="1">0.76</td>
    <td rowspan="1" colspan="1">0.61</td>
    <td rowspan="1" colspan="1">0.75</td>
    <td rowspan="1" colspan="1">0.47</td>
    <td rowspan="1" colspan="1">0.45</td>
    <td rowspan="1" colspan="1">0.80</td>
    <td rowspan="1" colspan="1">0.96</td>
    <td rowspan="1" colspan="1">0.45</td>
  </tr>
						</tbody>  
						<tbody>  
  <tr>
    <td rowspan="1" colspan="1">Mean fixation time (ms): experts</td>
    <td rowspan="1" colspan="1">214.3</td>
    <td rowspan="1" colspan="1">192.1</td>
    <td rowspan="1" colspan="1">178.8</td>
    <td rowspan="1" colspan="1">218.8</td>
    <td rowspan="1" colspan="1">117</td>
    <td rowspan="1" colspan="1">164.4</td>
    <td rowspan="1" colspan="1">166.8</td>
    <td rowspan="1" colspan="1">126.4</td>
    <td rowspan="1" colspan="1">143.8</td>
    <td rowspan="1" colspan="1">119.3</td>
    <td rowspan="1" colspan="1">128.0</td>
    <td rowspan="1" colspan="1">44.9</td>
    <td rowspan="1" colspan="1">60.7</td>
    <td rowspan="1" colspan="1">44</td>
  </tr>
  <tr>
    <td rowspan="1" colspan="1">Mean fixation time (ms): non-experts</td>
    <td rowspan="1" colspan="1">191.7</td>
    <td rowspan="1" colspan="1">167.9</td>
    <td rowspan="1" colspan="1">185.1</td>
    <td rowspan="1" colspan="1">171.2</td>
    <td rowspan="1" colspan="1">158.7</td>
    <td rowspan="1" colspan="1">184.2</td>
    <td rowspan="1" colspan="1">117.1</td>
    <td rowspan="1" colspan="1">101.1</td>
    <td rowspan="1" colspan="1">108.3</td>
    <td rowspan="1" colspan="1">128.1</td>
    <td rowspan="1" colspan="1">128.1</td>
    <td rowspan="1" colspan="1">50.6</td>
    <td rowspan="1" colspan="1">54.5</td>
    <td rowspan="1" colspan="1">18.7</td>
  </tr>
						</tbody>  
						<tbody>  
  <tr>
    <td rowspan="1" colspan="1">p for fixation time</td>
    <td rowspan="1" colspan="1">0.33</td>
    <td rowspan="1" colspan="1">0.11</td>
    <td rowspan="1" colspan="1">0.56</td>
    <td rowspan="1" colspan="1">0.14</td>
    <td rowspan="1" colspan="1">0.77</td>
    <td rowspan="1" colspan="1">0.75</td>
    <td rowspan="1" colspan="1">0.18</td>
    <td rowspan="1" colspan="1">0.21</td>
    <td rowspan="1" colspan="1">0.10</td>
    <td rowspan="1" colspan="1">0.04</td>
    <td rowspan="1" colspan="1">0.30</td>
    <td rowspan="1" colspan="1">0.52</td>
    <td rowspan="1" colspan="1">0.98</td>
    <td rowspan="1" colspan="1">0.42</td>
  </tr>
						</tbody>
					</table>
					</table-wrap>

      <p>Krupi&#x144;ski [
        <xref ref-type="bibr" rid="b26">26</xref>
        ] and Manning [
        <xref ref-type="bibr" rid="b27">27</xref>
        ] showed that, in
comparison to non-experts, experts typically perform
tasks with fewer fixations. In [
        <xref ref-type="bibr" rid="b28 b29">28,29</xref>
        ] it was shown
that experts had longer fixation durations than novices
when driving a car.</p>
	  
      <p>The distributions of the number of fixations (cluster
#1) and the average fixation duration for a cluster (cluster
#1) for the groups of experts and non-experts are shown in
Fig. 7.</p>

      <p>Clusters calculated for all P1&#x2013;P5 images using the EM
method and the BIC are given in the Supplementary Materials
(available online).</p>
    </sec>
	
    <sec id="S6">
      <title>Conclusions</title>
	  
      <p>The proposed algorithm allows us to automatically
classify a person watching a painting as an expert or
non-expert in the field of art. A key role in the
proposed algorithm is played by the EM clustering method,
which makes it possible to determine ROIs in the image.
With features computed for the ROIs, such as the number of
fixations and the average fixation duration, automatic
classification of an image viewer is possible. The
algorithm was tested in a way that approximates the
actual operating conditions of an expert system.</p>
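The overall pipeline summarized above (EM clustering to find ROIs, per-ROI features, SVM classification) can be sketched end to end. All data here are synthetic and the helper `observer_features` is a hypothetical name; this is an illustration of the approach, not the authors' code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

def observer_features(fixations, durations, gmm):
    """Per-cluster fixation count and mean fixation duration for one observer."""
    k = gmm.n_components
    labels = gmm.predict(fixations)
    counts = np.bincount(labels, minlength=k).astype(float)
    mean_dur = np.array([durations[labels == c].mean() if (labels == c).any() else 0.0
                         for c in range(k)])
    return np.concatenate([counts, mean_dur])

# Synthetic stand-in data: 30 observers (first 15 "experts"), 40 fixations each
observers, y = [], []
for i in range(30):
    expert = i < 15
    fx = rng.uniform([0, 0], [800, 600], size=(40, 2))     # fixation positions (px)
    dur = rng.normal(210 if expert else 190, 25, size=40)  # fixation durations (ms)
    observers.append((fx, dur))
    y.append(int(expert))

# ROIs: one EM model fitted on the pooled fixations of all observers
gmm = GaussianMixture(n_components=8, n_init=3, random_state=0)
gmm.fit(np.vstack([fx for fx, _ in observers]))

X = np.array([observer_features(fx, dur, gmm) for fx, dur in observers])
y = np.array(y)
acc = cross_val_score(SVC(kernel="rbf", gamma="scale"), X, y, cv=5).mean()
```

The cross-validated accuracy plays the role of the per-image accuracies reported in Tables 3&#x2013;7.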

      <sec id="S6a" sec-type="COI-statement">
        <title>Ethics and Conflict of Interest</title>
		
        <p>The authors declare that the contents of the article are
in agreement with the ethics described in
<ext-link ext-link-type="uri" xlink:href="http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html" xlink:show="new">http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html</ext-link> 
and that there is no conflict of interest regarding the
publication of this paper.</p>
      </sec>
	  
      <sec id="S6b">
        <title>Acknowledgements</title>
		
        <p>This work was supported in part by the grant of
National Science Centre (Poland) No.
DEC2013/11/B/HS6/01816.</p>
      </sec>
    </sec>
  </body>
<back>
<ref-list>
<ref id="b1"><label>1</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Antes</surname>, <given-names>J. R.</given-names></name>, &#x26; <name><surname>Kristjanson</surname>, <given-names>A. F.</given-names></name></person-group> (<year>1991</year>). <article-title>Discriminating artists from nonartists by their eye-fixation patterns.</article-title> <source>Perceptual and Motor Skills</source>, <volume>73</volume>(<issue>3 Pt 1</issue>), <fpage>893</fpage>–<lpage>894</lpage>. <pub-id pub-id-type="doi">10.2466/pms.1991.73.3.893</pub-id><pub-id pub-id-type="pmid">1792138</pub-id><issn>0031-5125</issn></mixed-citation></ref>
<ref id="b2"><label>2</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Locher</surname>, <given-names>P. J.</given-names></name></person-group> (<year>2006</year>). <article-title>The usefulness of eye movement recordings to subject an aesthetic episode with visual art to empirical scrutiny.</article-title> <source>Psychological Science</source>, <volume>48</volume>(<issue>2</issue>), <fpage>106</fpage>–<lpage>114</lpage>.<issn>0956-7976</issn></mixed-citation></ref>
<ref id="b3"><label>3</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Henderson</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Hollingworth</surname>, <given-names>A.</given-names></name></person-group> (<year>1999</year>). <article-title>The Role of Fixation Position in Detecting Scene Changes Across Saccades.</article-title> <source>Psychological Science</source>, <volume>10</volume>(<issue>5</issue>), <fpage>438</fpage>–<lpage>443</lpage>. <pub-id pub-id-type="doi">10.1111/1467-9280.00183</pub-id><issn>0956-7976</issn></mixed-citation></ref>
<ref id="b4"><label>4</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Massaro</surname>, <given-names>D.</given-names></name>, <name><surname>Savazzi</surname>, <given-names>F.</given-names></name>, <name><surname>Di Dio</surname>, <given-names>C.</given-names></name>, <name><surname>Freedberg</surname>, <given-names>D.</given-names></name>, <name><surname>Gallese</surname>, <given-names>V.</given-names></name>, <name><surname>Gilli</surname>, <given-names>G.</given-names></name>, &#x26; <name><surname>Marchetti</surname>, <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>When art moves the eyes: A behavioral and eye-tracking study.</article-title> <source>PLoS One</source>, <volume>7</volume>(<issue>5</issue>), <fpage>e37285</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0037285</pub-id><pub-id pub-id-type="pmid">22624007</pub-id><issn>1932-6203</issn></mixed-citation></ref>
<ref id="b5"><label>5</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>DeAngelus</surname>, <given-names>M.</given-names></name>, &#x26; <name><surname>Pelz</surname>, <given-names>J.</given-names></name></person-group> (<year>2009</year>). <article-title>Top-down control of eye movements: Yarbus revisited.</article-title> <source>Visual Cognition</source>, <volume>17</volume>(<issue>6-7</issue>), <fpage>790</fpage>–<lpage>811</lpage>. <pub-id pub-id-type="doi">10.1080/13506280902793843</pub-id><issn>1350-6285</issn></mixed-citation></ref>
<ref id="b6"><label>6</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Antes</surname>, <given-names>J. R.</given-names></name></person-group> (<year>1974</year>). <article-title>The time course of picture viewing.</article-title> <source>Journal of Experimental Psychology</source>, <volume>103</volume>(<issue>1</issue>), <fpage>62</fpage>–<lpage>70</lpage>. <pub-id pub-id-type="doi">10.1037/h0036799</pub-id><pub-id pub-id-type="pmid">4424680</pub-id><issn>0022-1015</issn></mixed-citation></ref>
<ref id="b7"><label>7</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Plumhoff</surname>, <given-names>J. E.</given-names></name>, &#x26; <name><surname>Schirillo</surname>, <given-names>J. A.</given-names></name></person-group> (<year>2009</year>). <article-title>Mondrian, eye movements, and the oblique effect.</article-title> <source>Perception</source>, <volume>38</volume>(<issue>5</issue>), <fpage>719</fpage>–<lpage>731</lpage>. <pub-id pub-id-type="doi">10.1068/p6160</pub-id><pub-id pub-id-type="pmid">19662947</pub-id><issn>0301-0066</issn></mixed-citation></ref>
<ref id="b8"><label>8</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Jain</surname>, <given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>Data clustering: 50 years beyond K-means.</article-title> <source>Pattern Recognition Letters</source>, <volume>31</volume>(<issue>8</issue>), <fpage>651</fpage>–<lpage>666</lpage>. <pub-id pub-id-type="doi">10.1016/j.patrec.2009.09.011</pub-id><issn>0167-8655</issn></mixed-citation></ref>
<ref id="b9"><label>9</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Celeux</surname>, <given-names>G.</given-names></name>, &#x26; <name><surname>Soromenho</surname>, <given-names>G.</given-names></name></person-group> (<year>1996</year>). <article-title>An entropy criterion for assessing the number of clusters in a mixture model.</article-title> <source>Journal of Classification</source>, <volume>13</volume>(<issue>2</issue>), <fpage>195</fpage>–<lpage>212</lpage>. <pub-id pub-id-type="doi">10.1007/BF01246098</pub-id><issn>0176-4268</issn></mixed-citation></ref>
<ref id="b10"><label>10</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Fraley</surname>, <given-names>C.</given-names></name>, &#x26; <name><surname>Raftery</surname>, <given-names>A. E.</given-names></name></person-group> (<year>1998</year>). <article-title>How Many Clusters? Which Clustering Method? Answers Via Model-Based Cluster Analysis.</article-title> <source>The Computer Journal</source>, <volume>41</volume>(<issue>8</issue>), <fpage>578</fpage>–<lpage>588</lpage>. <pub-id pub-id-type="doi">10.1093/comjnl/41.8.578</pub-id><issn>0010-4620</issn></mixed-citation></ref>
<ref id="b11"><label>11</label><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Bishop</surname>, <given-names>C.</given-names></name></person-group> (<year>2006</year>). <source>Pattern recognition and machine learning</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Springer</publisher-name>.</mixed-citation></ref>
<ref id="b12"><label>12</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Zangemeister</surname>, <given-names>W. H.</given-names></name>, <name><surname>Sherman</surname>, <given-names>K.</given-names></name>, &#x26; <name><surname>Stark</surname>, <given-names>L.</given-names></name></person-group> (<year>1995</year>). <article-title>Evidence for a global scanpath strategy in viewing abstract compared with realistic images.</article-title> <source>Neuropsychologia</source>, <volume>33</volume>(<issue>8</issue>), <fpage>1009</fpage>–<lpage>1025</lpage>. <pub-id pub-id-type="doi">10.1016/0028-3932(95)00014-T</pub-id><pub-id pub-id-type="pmid">8524451</pub-id><issn>0028-3932</issn></mixed-citation></ref>
<ref id="b13"><label>13</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Vogt</surname>, <given-names>S.</given-names></name>, &#x26; <name><surname>Magnussen</surname>, <given-names>S.</given-names></name></person-group> (<year>2007</year>). <article-title>Expertise in pictorial perception: Eye-movement patterns and visual memory in artists and laymen.</article-title> <source>Perception</source>, <volume>36</volume>(<issue>1</issue>), <fpage>91</fpage>–<lpage>100</lpage>. <pub-id pub-id-type="doi">10.1068/p5262</pub-id><pub-id pub-id-type="pmid">17357707</pub-id><issn>0301-0066</issn></mixed-citation></ref>
<ref id="b14"><label>14</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Fuchs</surname>, <given-names>I.</given-names></name>, <name><surname>Ansorge</surname>, <given-names>U.</given-names></name>, <name><surname>Redies</surname>, <given-names>C.</given-names></name>, &#x26; <name><surname>Leder</surname>, <given-names>H.</given-names></name></person-group> (<year>2010</year>). <article-title>Salience in Paintings: Bottom-Up Influences on Eye Fixations.</article-title> <source>Cognitive Computation</source>, <volume>3</volume>(<issue>1</issue>), <fpage>25</fpage>–<lpage>36</lpage>. <pub-id pub-id-type="doi">10.1007/s12559-010-9062-3</pub-id><issn>1866-9956</issn></mixed-citation></ref>
<ref id="b15"><label>15</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Hermens</surname>, <given-names>F.</given-names></name>, <name><surname>Flin</surname>, <given-names>R.</given-names></name>, &#x26; <name><surname>Ahmed</surname>, <given-names>I.</given-names></name></person-group> (<year>2013</year>). <article-title>Eye movements in surgery: A literature review.</article-title> <source>Journal of Eye Movement Research</source>, <volume>6</volume>(<issue>4</issue>).<issn>1995-8692</issn></mixed-citation></ref>
<ref id="b16"><label>16</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Sugano</surname>, <given-names>Y.</given-names></name>, <name><surname>Ozaki</surname>, <given-names>Y.</given-names></name>, <name><surname>Kasai</surname>, <given-names>H.</given-names></name>, <name><surname>Ogaki</surname>, <given-names>K.</given-names></name>, &#x26; <name><surname>Sato</surname>, <given-names>Y.</given-names></name></person-group> (<year>2014</year>). <article-title>Image preference estimation with a data-driven approach: A comparative study between gaze and image features.</article-title> <source>Journal of Eye Movement Research</source>, <volume>7</volume>(<issue>3</issue>).<issn>1995-8692</issn></mixed-citation></ref>
<ref id="b17"><label>17</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Stofer</surname>, <given-names>K.</given-names></name>, &#x26; <name><surname>Che</surname>, <given-names>X.</given-names></name></person-group> (<year>2014</year>). <article-title>Comparing Experts and Novices on Scaffolded Data Visualizations using Eye-tracking.</article-title> <source>Journal of Eye Movement Research</source>, <volume>7</volume>(<issue>5</issue>).<issn>1995-8692</issn></mixed-citation></ref>
<ref id="b18"><label>18</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Boccignone</surname>, <given-names>G.</given-names></name>, <name><surname>Ferraro</surname>, <given-names>M.</given-names></name>, <name><surname>Crespi</surname>, <given-names>S.</given-names></name>, <name><surname>Robino</surname>, <given-names>C.</given-names></name>, &#x26; <name><surname>de&#x2019;Sperati</surname>, <given-names>C.</given-names></name></person-group> (<year>2014</year>). <article-title>Detecting expert&#x2019;s eye using a multiple-kernel Relevance Vector Machine.</article-title> <source>Journal of Eye Movement Research</source>, <volume>7</volume>(<issue>2</issue>).<issn>1995-8692</issn></mixed-citation></ref>
<ref id="b19"><label>19</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Tinker</surname>, <given-names>M.</given-names></name></person-group> (<year>1936</year>). <article-title>How People Look at Pictures.</article-title> <source>Psychological Bulletin</source>, <volume>33</volume>(<issue>2</issue>), <fpage>142</fpage>–<lpage>143</lpage>. <pub-id pub-id-type="doi">10.1037/h0053409</pub-id><issn>0033-2909</issn></mixed-citation></ref>
<ref id="b20"><label>20</label><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Yarbus</surname>, <given-names>A.</given-names></name></person-group> (<year>1967</year>). <source>Eye movements and vision</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Plenum Press</publisher-name>. <pub-id pub-id-type="doi">10.1007/978-1-4899-5379-7</pub-id></mixed-citation></ref>
<ref id="b21"><label>21</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Mackworth</surname>, <given-names>N.</given-names></name>, &#x26; <name><surname>Morandi</surname>, <given-names>A.</given-names></name></person-group> (<year>1967</year>). <article-title>The gaze selects informative details within pictures.</article-title> <source>Perception &#x26; Psychophysics</source>, <volume>2</volume>(<issue>11</issue>), <fpage>547</fpage>–<lpage>552</lpage>. <pub-id pub-id-type="doi">10.3758/BF03210264</pub-id><issn>0031-5117</issn></mixed-citation></ref>
<ref id="b22"><label>22</label><mixed-citation publication-type="book" specific-use="restruct"><person-group person-group-type="author"><name><surname>Hand</surname>, <given-names>D.</given-names></name>, <name><surname>Mannila</surname>, <given-names>H.</given-names></name>, &#x26; <name><surname>Smyth</surname>, <given-names>P.</given-names></name></person-group> (<year>2012</year>). <source>Principles of data mining</source>. <publisher-loc>New Delhi</publisher-loc>: <publisher-name>PHI Learning Private Limited</publisher-name>.</mixed-citation></ref>
<ref id="b23"><label>23</label><mixed-citation publication-type="conference" specific-use="linked"><person-group person-group-type="author"><name><surname>Jiaxi</surname> <given-names>L.</given-names></name></person-group> <article-title>The application and research of t-test in medicine</article-title>, <source>First International Conference on Networking and Distributed Computing</source>. <conf-loc>Hangzhou, Zhejiang China</conf-loc>, <year>2010</year>; <fpage>321</fpage>–<lpage>323</lpage>. <pub-id pub-id-type="doi">10.1109/ICNDC.2010.70</pub-id></mixed-citation></ref>
<ref id="b24"><label>24</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Coutrot</surname>, <given-names>A.</given-names></name>, <name><surname>Hsiao</surname>, <given-names>J. H.</given-names></name>, &#x26; <name><surname>Chan</surname>, <given-names>A. B.</given-names></name></person-group> (<year>2018</year>). <article-title>Scanpath modeling and classification with hidden Markov models.</article-title> <source>Behavior Research Methods</source>, <volume>50</volume>(<issue>1</issue>), <fpage>362</fpage>–<lpage>379</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-017-0876-8</pub-id><pub-id pub-id-type="pmid">28409487</pub-id><issn>1554-351X</issn></mixed-citation></ref>
<ref id="b25"><label>25</label><mixed-citation publication-type="conference" specific-use="parsed"><person-group person-group-type="author"><name><surname>Ververidis</surname> <given-names>D.</given-names></name> <name><surname>Kotropoulos</surname> <given-names>C.</given-names></name></person-group> <article-title>Sequential forward feature selection with low computational cost</article-title>, <source>13th European Signal Processing Conference (EUSIPCO)</source>. <year>2005</year>;<fpage>1</fpage>–<lpage>4</lpage>.</mixed-citation></ref>
<ref id="b26"><label>26</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Krupinski</surname>, <given-names>E. A.</given-names></name></person-group> (<year>1996</year>). <article-title>Visual scanning patterns of radiologists searching mammograms.</article-title> <source>Academic Radiology</source>, <volume>3</volume>(<issue>2</issue>), <fpage>137</fpage>–<lpage>144</lpage>. <pub-id pub-id-type="doi">10.1016/S1076-6332(05)80381-2</pub-id><pub-id pub-id-type="pmid">8796654</pub-id><issn>1076-6332</issn></mixed-citation></ref>
<ref id="b27"><label>27</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Manning</surname>, <given-names>D.</given-names></name>, <name><surname>Ethell</surname>, <given-names>S.</given-names></name>, <name><surname>Donovan</surname>, <given-names>T.</given-names></name>, &#x26; <name><surname>Crawford</surname>, <given-names>T.</given-names></name></person-group> (<year>2006</year>). <article-title>How do radiologists do it? The influence of experience and training on searching for chest nodules.</article-title> <source>Radiography</source>, <volume>12</volume>(<issue>2</issue>), <fpage>134</fpage>–<lpage>142</lpage>. <pub-id pub-id-type="doi">10.1016/j.radi.2005.02.003</pub-id><issn>1078-8174</issn></mixed-citation></ref>
<ref id="b28"><label>28</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Crundall</surname>, <given-names>D.</given-names></name>, &#x26; <name><surname>Underwood</surname>, <given-names>G.</given-names></name></person-group> (<year>1998</year>). <article-title>Effects of experience and processing demands on visual information acquisition in drivers.</article-title> <source>Ergonomics</source>, <volume>41</volume>(<issue>4</issue>), <fpage>448</fpage>–<lpage>458</lpage>. <pub-id pub-id-type="doi">10.1080/001401398186937</pub-id><issn>0014-0139</issn></mixed-citation></ref>
<ref id="b29"><label>29</label><mixed-citation publication-type="journal" specific-use="restruct"><person-group person-group-type="author"><name><surname>Underwood</surname>, <given-names>G.</given-names></name>, <name><surname>Chapman</surname>, <given-names>P.</given-names></name>, <name><surname>Brocklehurst</surname>, <given-names>N.</given-names></name>, <name><surname>Underwood</surname>, <given-names>J.</given-names></name>, &#x26; <name><surname>Crundall</surname>, <given-names>D.</given-names></name></person-group> (<year>2003</year>). <article-title>Visual attention while driving: Sequences of eye fixations made by experienced and novice drivers.</article-title> <source>Ergonomics</source>, <volume>46</volume>(<issue>6</issue>), <fpage>629</fpage>–<lpage>646</lpage>. <pub-id pub-id-type="doi">10.1080/0014013031000090116</pub-id><pub-id pub-id-type="pmid">12745692</pub-id><issn>0014-0139</issn></mixed-citation></ref>
</ref-list>
</back>
</article>
