Multistage integration model for human egomotion perception
Authors: Zacharias GL, Miao AX, Warren R

Affiliation: Charles River Analytics, Cambridge, Massachusetts 02138-1125, USA.
Abstract: Human computational vision models that attempt to account for the dynamic perception of egomotion and relative depth typically assume a common three-stage process: first, compute the optical flow field based on the dynamically changing image; second, estimate the egomotion states based on the flow; and third, estimate the relative depth/shape based on the egomotion states and possibly on a model of the viewed surface. We propose a model more in line with recent work in human vision, employing multistage integration. Here the dynamic image is first processed to generate spatial and temporal image gradients that drive a mutually interconnected state estimator and depth/shape estimator. The state estimator uses the image gradient information in combination with a depth/shape estimate of the viewed surface and an assumed model of the viewer's dynamics to generate current state estimates; in tandem, the depth/shape estimator uses the image gradient information in combination with the viewer's state estimate and assumed shape model to generate current depth/shape estimates. In this paper, we describe the model and compare model predictions with empirical data.
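The coupled estimation scheme the abstract describes can be illustrated with a minimal numerical sketch. The toy setup below is an assumption, not the authors' implementation: a pinhole camera (unit focal length, no rotation) translating past a planar surface, with inverse depth parameterized by a linear shape model, and brightness-constancy image gradients as the only input. The two estimators alternate, each consuming the raw gradients plus the other's current estimate, in the spirit of the mutually interconnected state and depth/shape estimators described above. All variable names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy synthetic scene (illustrative assumptions): N image points on a planar
# surface viewed by a translating pinhole camera, unit focal length, no rotation.
N = 200
x = rng.uniform(-1.0, 1.0, N)
y = rng.uniform(-1.0, 1.0, N)
Ix = rng.normal(size=N)                  # spatial image gradients
Iy = rng.normal(size=N)
T_true = np.array([0.2, -0.1, 1.0])      # viewer translation (egomotion state)
k_true = np.array([0.4, 0.1, -0.05])     # planar shape model: rho = k0 + k1*x + k2*y
rho_true = k_true[0] + k_true[1] * x + k_true[2] * y  # inverse depth (positive here)

# Translational optical flow and the temporal gradient implied by
# brightness constancy: Ix*u + Iy*v + It = 0.
u = (-T_true[0] + x * T_true[2]) * rho_true
v = (-T_true[1] + y * T_true[2]) * rho_true
It = -(Ix * u + Iy * v)

# Multistage integration sketch: alternate the two estimators, each driven by
# the image gradients plus the other's current estimate.
k = np.array([0.5, 0.0, 0.0])            # crude initial shape guess
P = np.column_stack([np.ones(N), x, y])  # plane basis, so rho = P @ k
for _ in range(200):
    rho = P @ k
    # State estimator: least squares for T given the current inverse depth.
    A = np.column_stack([-Ix, -Iy, Ix * x + Iy * y]) * rho[:, None]
    T, *_ = np.linalg.lstsq(A, -It, rcond=None)
    T /= np.linalg.norm(T)               # resolve the translation/depth scale ambiguity
    # Shape estimator: least squares for the plane parameters given T.
    a = Ix * (-T[0] + x * T[2]) + Iy * (-T[1] + y * T[2])
    k, *_ = np.linalg.lstsq(P * a[:, None], -It, rcond=None)

# Heading recovery: cosine between estimated and true translation direction.
cos = T @ T_true / np.linalg.norm(T_true)
```

Because a single gradient constraint per point cannot separate translation from per-point depth, the sketch leans on the "assumed shape model" mentioned in the abstract (here a plane) to make the alternation well posed; the overall translation/depth scale ambiguity is resolved by normalizing the state estimate.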