Probing Bottom-up Processing with Multistable Images

  • Ozgur E. Akman, University of Edinburgh
  • Richard A. Clement, University College London
  • David S. Broomhead, University of Manchester
  • Sabira Mannan, Imperial College London
  • Ian Moorhead, QinetiQ
  • Hugh R. Wilson, York University
Keywords: bottom-up processing, V1, Marroquin pattern, V4, saliency toolbox

Abstract

The selection of fixation targets involves a combination of top-down and bottom-up processing. The role of bottom-up processing can be enhanced by using multistable stimuli, because their constantly changing appearance seems to depend predominantly on stimulus-driven factors. We used this approach to investigate whether visual processing models based on V1 need to be extended to incorporate specific computations attributed to V4. Eye movements of eight subjects were recorded during free viewing of the Marroquin pattern, in which illusory circles appear and disappear. Fixations were concentrated on features arranged in concentric rings within the pattern. Comparison with simulated fixation data demonstrated that the saliency of these features can be predicted by appropriate weighting of lateral connections in existing V1 models.
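For readers unfamiliar with the stimulus, the sketch below shows one way a Marroquin-style pattern can be constructed: several copies of a regular square dot lattice superimposed at different rotations, which gives rise to transient illusory circles. The lattice spacing, rotation angles, and aperture size here are illustrative choices only, not the parameters used in the study.

```python
import numpy as np
import matplotlib.pyplot as plt

def dot_lattice(spacing=1.0, extent=20.0, angle_deg=0.0):
    """Regular square lattice of dot positions, rotated about the origin."""
    coords = np.arange(-extent, extent + spacing, spacing)
    xx, yy = np.meshgrid(coords, coords)
    pts = np.column_stack([xx.ravel(), yy.ravel()])
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return pts @ rot.T

# Superimpose three rotated copies of the lattice (angles are illustrative).
pattern = np.vstack([dot_lattice(angle_deg=a) for a in (0.0, 15.0, 30.0)])

# Clip to a circular aperture so the outline of the display stays uniform.
pattern = pattern[np.hypot(pattern[:, 0], pattern[:, 1]) <= 15.0]

fig, ax = plt.subplots(figsize=(6, 6))
ax.scatter(pattern[:, 0], pattern[:, 1], s=4, c='k')
ax.set_aspect('equal')
ax.axis('off')
plt.show()
```

Viewing the resulting dot field for a few seconds typically produces the multistable percept described in the abstract: circular groupings of dots that emerge, shift, and dissolve without any change in the stimulus itself.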
Published
2009-02-09
How to Cite
Akman, O. E., Clement, R. A., Broomhead, D. S., Mannan, S., Moorhead, I., & Wilson, H. R. (2009). Probing Bottom-up Processing with Multistable Images. Journal of Eye Movement Research, 1(3). https://doi.org/10.16910/jemr.1.3.4
Section
Articles