Audio-visual integration during overt visual attention

  • Cliodhna Quigley, Neurobiopsychology Department, Institute of Cognitive Science, University of Osnabrueck
  • Selim Onat, Neurobiopsychology Department, Institute of Cognitive Science, University of Osnabrueck
  • Sue Harding, Speech and Hearing Group, Department of Computer Science, University of Sheffield
  • Martin Cooke, Speech and Hearing Group, Department of Computer Science, University of Sheffield
  • Peter König, Neurobiopsychology Department, Institute of Cognitive Science, University of Osnabrueck
Keywords: eye movements, attention, crossmodal integration

Abstract

How do different sources of information arising from different modalities interact to control where we look? To address this question under conditions approximating real-world viewing, we presented natural images and spatially localized sounds in Visual (V), Audio-visual (AV), and Auditory (A) conditions and measured subjects' eye movements. Our results demonstrate that eye movements in the AV condition are spatially biased towards the part of the image corresponding to the sound source. Interestingly, this spatial bias depends on the probability that a given image region is fixated (its saliency) in the V condition. This indicates that fixation behaviour in the AV condition is the result of an integration process. Regression analysis shows that this integration is best accounted for by a linear combination of unimodal saliencies.
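
The abstract's final claim, that audio-visual fixation behaviour is best explained by a linear combination of unimodal saliencies, amounts to a simple regression model. Below is a minimal illustrative sketch, not the authors' code: the saliency arrays, region count, and generating weights are hypothetical stand-ins for empirical per-region fixation probabilities measured in the V, A, and AV conditions.

```python
# Sketch: fit AV saliency as a linear combination of unimodal saliencies.
# "Saliency" here means the empirical fixation probability of an image region.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-region fixation probabilities, e.g. one value per cell
# of a grid laid over each image (all values below are simulated).
n_regions = 200
sal_v = rng.random(n_regions)   # visual-only (V) condition
sal_a = rng.random(n_regions)   # auditory-only (A) condition
# Simulated AV observations generated from assumed weights plus noise.
sal_av = 0.7 * sal_v + 0.3 * sal_a + rng.normal(0, 0.05, n_regions)

# Least-squares fit of AV saliency as w_v * V + w_a * A + bias.
X = np.column_stack([sal_v, sal_a, np.ones(n_regions)])
(w_v, w_a, bias), _, _, _ = np.linalg.lstsq(X, sal_av, rcond=None)
print(f"w_v={w_v:.2f}, w_a={w_a:.2f}, bias={bias:.2f}")
```

Under this kind of model, the fitted weights indicate how strongly each unimodal saliency map contributes to overt attention in the bimodal condition.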
Published: 2008-09-17
How to Cite: Quigley, C., Onat, S., Harding, S., Cooke, M., & König, P. (2008). Audio-visual integration during overt visual attention. Journal of Eye Movement Research, 1(2). https://doi.org/10.16910/jemr.1.2.4
Section: Articles
