Topic

Foveal

About: Foveal is a research topic. Over the lifetime, 2,652 publications have been published within this topic, receiving 94,120 citations.


Papers
Journal ArticleDOI
TL;DR: The results show that the proposed perceptual model not only improves on the baselines, but also achieves state-of-the-art performance on various datasets at very competitive computational times.

27 citations

01 Jan 2002
TL;DR: The present series of experiments investigated whether localization error might be due, in part, to the binding of the moving stimulus in an action plan, and whether the actions produced in relation to a moving stimulus contribute to the spatial distortion manifested in the localization error.
Abstract: When observers are asked to indicate the final position of a moving stimulus, their localizations are reliably displaced beyond the final position, in the direction the stimulus was traveling just prior to its offset. Recent experiments indicate that these localization errors depend on whether or not observers track the moving stimulus with eye movements. If they track, there is a localization error; if not, the error reduces to zero. The present series of experiments investigated whether localization error might be due, in part, to the binding of the moving stimulus in an action plan. Experiment 1 utilized circular stimulus trajectories, and the eye-tracking/no-tracking discrepancy revealed in previous studies was replicated. Experiment 2 required central fixation by all observers, and either the computer program (i.e. induction) or a button press by the observer (i.e. intention) produced the stimulus offset. The localizations made in the intention condition were further in the direction of the planned action effect than those made in the induction condition. Experiment 3 demonstrated these differences to be due to the intention to stop the stimulus, not the button press. And Experiment 4 revealed that action planning has its binding effect on the localization error for a duration that extends beyond the actual moment of action execution. In light of these data, an approach to perception-action coupling is proposed in which spatial perception and spatially directed action are modeled, not as input and output, respectively, but rather as synergistically coupled control systems.

When observers are asked to indicate the final location of an apparently moving, or moving, stimulus, the indicated location is reliably displaced beyond the final location, in the direction the target was traveling just prior to its offset (Finke, Freyd, and Shyi 1986; Freyd and Finke 1984; Hubbard 1995). In addition, the magnitude and direction of the displacement varies in a manner that is consistent with the laws of physics (i.e. velocity, friction, gravity; Hubbard 1995). Accounts of these errors are often conceptualized in terms of representational momentum: the notion that the dynamics of the external environment have been internalized into the dynamics of cognitive representational systems. Given that internal representations, just as external events, have dynamic properties that cannot simply be brought to a halt upon stimulus offset, dynamic representational transformations are assumed to continue for some time following stimulus offset. It is the momentum of these representations that is assumed to underlie the resulting localization error. Implicit in this account of localization error is the assumption that the actions produced by observers during stimulus movement do not influence the processes underlying the error. In short, action processes and representational momentum processes are assumed to be independent, and the localization error is described as a post-perceptual cognitive phenomenon. Contrary to this assumed independence, the purpose of the present paper is to present a series of experiments that test whether or not the actions produced in relation to a moving stimulus contribute to the spatial distortion manifested in the localization error. These experiments are motivated by the following: (1) data that indicate the localization error may, in part, be due to the action planning required to maintain an ongoing relationship between action and stimulus motion (i.e. action control), and (2) data that indicate that perception and action planning share common mechanisms (i.e. common neural mediation). Collectively, these data imply that the very act of planning an action in relation to a stimulus event serves to transform the processes underlying perceptual mappings of that stimulus event. In short, it implies that action planning influences the localization error.

7.1 Action control and localization error

In representational momentum paradigms, observers are free to move their eyes. In fact, in most experiments no instruction is given in this regard, and it is assumed that eye movements used to pursue and track the target do not contribute to the localization error. It has been demonstrated, however, that the eyes continue to drift in the direction of target motion if a pursued target, travelling on a linear trajectory, suddenly vanishes (Mitrani and Dimitrov 1978), and the magnitude of such drift varies directly with tracking velocity (Mitrani, Dimitrov, Yakimoff, and Mateeff 1979). In addition, static stimuli presented in the periphery are localized closer toward the fovea than they actually are (foveal bias; e.g. Müsseler, Van der Heijden, Mahmud, Deubel, and Ertsey 1999; O'Regan 1984; Osaka 1977; Van der Heijden, Müsseler, and Bridgeman 1999). In light of these data, it may be the case that when a moving target suddenly disappears, the eyes overshoot the final position of the stimulus, such that the fovea is shifted in the direction of motion. Subsequently, the foveal bias inherent in static localizations, coupled with the changed position of the fovea due to overshoot, causes the final position of the target to be localized in the direction of the fovea's motion (i.e. in the direction of the target's motion). In short, it may be the case that the localization error is related to eye-movement control. To test this idea, Kerzel, Jordan, and Müsseler (in press) conducted a representational momentum experiment in which they asked observers to localize the final position of a moving stimulus. Unlike other representational momentum experiments, however, they devised a condition in which observers were instructed to fixate a stationary fixation point during the presentation of the moving stimulus. This instruction, of course, prevented observers from making the smooth-pursuit movements observers normally make during such tasks. The results are depicted in Fig. 7.1. In the tracking condition, in which observers were allowed to track the moving stimulus, the traditional representational momentum effect was obtained: localizations were displaced beyond the vanishing point, in the direction of stimulus motion, and the magnitude of the localization error varied directly with the velocity of the moving stimulus. In the fixation condition, however, there was no displacement in the direction of stimulus motion. There was vertical displacement, probably due to the retinal eccentricity of the vanishing point (i.e. the fixation stimulus was located 2° below the trajectory of the moving stimulus), but there was no horizontal localization error whatsoever. These data strongly imply that the localization errors reported in previous representational momentum experiments may have been due, in part, to the control of the eye movements necessary to track the moving stimulus.
To be sure, arguments against an eye-movement account have been posed on many occasions (see Kerzel et al., in press, for a thorough review of these arguments). These arguments tend to treat the moving eye as a moving camera, however, and they do so by downplaying the fact that oculomotor tracking is a controlled action. Given the data of Kerzel et al., it seems this latter point is rather central to the localization error, and really cannot be downplayed. Oculomotor control requires planning, and this planning must (1) take into account anticipated future locations of the moving stimulus, and (2) be generated continuously in order to effectively control eye-target relationships. In light of these demands on eye-movement control, it may be the …

27 citations

Journal ArticleDOI
TL;DR: An algorithm is presented for processing and analysis of differential interference contrast (DIC) microscopy images of the fovea to study the cone mosaic and additional algorithms are presented that analyze the cone positions to extract information on cone neighbor relationships as well as the short-range order and domain structure of the mosaic.
Abstract: An algorithm is presented for processing and analysis of differential interference contrast (DIC) microscopy images of the fovea to study the cone mosaic. The algorithm automatically locates the cones and their boundaries in such images and is assessed by comparison with results from manual analysis. Additional algorithms are presented that analyze the cone positions to extract information on cone neighbor relationships as well as the short-range order and domain structure of the mosaic. The methods are applied to DIC images of the human fovea.
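The paper's own algorithm is not reproduced here, but the general pipeline it describes (locating cone centers, then analyzing cone neighbor relationships) can be sketched with standard tools. A minimal Python sketch using scipy, where all parameter values and the synthetic test image are assumptions for illustration:

```python
# Minimal sketch of cone-mosaic analysis in the spirit of the abstract:
# smooth the image, find local intensity maxima as candidate cone centers,
# then derive neighbor counts from the Delaunay triangulation. This is a
# generic approach, not the paper's algorithm; all parameters are guesses.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter
from scipy.spatial import Delaunay

def find_cone_centers(image, sigma=2.0, min_separation=5, threshold=0.1):
    """Return an (N, 2) array of (row, col) candidate cone centers."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    # A pixel is a candidate cone center if it is the maximum in its
    # neighborhood and sufficiently bright relative to the image peak.
    local_max = maximum_filter(smoothed, size=min_separation) == smoothed
    bright = smoothed > threshold * smoothed.max()
    return np.argwhere(local_max & bright)

def neighbor_counts(centers):
    """Number of Delaunay neighbors per cone (six = hexagonal packing)."""
    tri = Delaunay(centers)
    neighbors = [set() for _ in range(len(centers))]
    for simplex in tri.simplices:
        for i in simplex:
            neighbors[i].update(j for j in simplex if j != i)
    return np.array([len(n) for n in neighbors])

# Example on a synthetic mosaic: Gaussian blobs on a jittered grid.
rng = np.random.default_rng(0)
img = np.zeros((100, 100))
for r in range(10, 100, 10):
    for c in range(10, 100, 10):
        img[r + rng.integers(-1, 2), c + rng.integers(-1, 2)] = 1.0
img = gaussian_filter(img, 1.5)
centers = find_cone_centers(img)
print(len(centers), "cones; mean neighbors:", neighbor_counts(centers).mean())
```

On a real DIC image the thresholds and separation would need tuning, and the short-range order and domain structure the paper analyzes would be derived from statistics of these neighbor relationships.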

27 citations

Proceedings ArticleDOI
01 Mar 2020
TL;DR: It is shown that adding high-resolution input from predicted human driver gaze locations significantly improves the driving accuracy of the model and achieves a significantly higher performance gain in pedestrian-involved critical situations than in other non-critical situations.
Abstract: Inspired by human vision, we propose a new periphery-fovea multi-resolution driving model that predicts vehicle speed from dash camera videos. The peripheral vision module of the model processes the full video frames in low resolution with large receptive fields. Its foveal vision module selects sub-regions and uses high-resolution input from those regions to improve its driving performance. We train the fovea selection module with supervision from driver gaze. We show that adding high-resolution input from predicted human driver gaze locations significantly improves the driving accuracy of the model. Our periphery-fovea multi-resolution model outperforms a uni-resolution periphery-only model that has the same amount of floating-point operations. More importantly, we demonstrate that our driving model achieves a significantly higher performance gain in pedestrian-involved critical situations than in other non-critical situations. Our code is publicly available at https://github.com/pascalxia/periphery_fovea_driving.
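The authors' implementation is available at the linked repository. As an independent illustration of the architecture described above (a low-resolution peripheral stream over the full frame plus a high-resolution foveal crop at a predicted gaze location), here is a minimal PyTorch sketch; module sizes, the crop mechanism, and all names are assumptions, not the paper's code:

```python
# Minimal PyTorch sketch of a periphery-fovea two-stream speed predictor.
# An illustration of the idea in the abstract, not the authors'
# implementation (see their repository for that); sizes are arbitrary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PeripheryFoveaModel(nn.Module):
    def __init__(self, fovea_size=64):
        super().__init__()
        self.fovea_size = fovea_size
        # Peripheral stream: full frame, heavily downsampled.
        self.periphery = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Gaze head: predicts a normalized (x, y) fixation in [-1, 1],
        # standing in for the gaze-supervised fovea selection module.
        self.gaze_head = nn.Linear(32, 2)
        # Foveal stream: high-resolution crop around the predicted gaze.
        self.fovea = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.speed_head = nn.Linear(32 + 32, 1)

    def forward(self, frame):
        # frame: (B, 3, H, W) full-resolution dash-cam image.
        low_res = F.interpolate(frame, size=(72, 128), mode="bilinear",
                                align_corners=False)
        p = self.periphery(low_res)
        gaze = torch.tanh(self.gaze_head(p))      # (B, 2) in [-1, 1]
        crop = self.crop_at(frame, gaze)          # high-res foveal patch
        f = self.fovea(crop)
        return self.speed_head(torch.cat([p, f], dim=1)), gaze

    def crop_at(self, frame, gaze):
        # Differentiable crop via grid_sample around the gaze point.
        B, s = frame.shape[0], self.fovea_size
        lin = torch.linspace(-0.2, 0.2, s, device=frame.device)
        gy, gx = torch.meshgrid(lin, lin, indexing="ij")
        grid = torch.stack([gx, gy], dim=-1).unsqueeze(0).expand(B, s, s, 2)
        grid = grid + gaze.view(B, 1, 1, 2)
        return F.grid_sample(frame, grid, align_corners=False)

model = PeripheryFoveaModel()
speed, gaze = model(torch.randn(2, 3, 720, 1280))
print(speed.shape, gaze.shape)  # torch.Size([2, 1]) torch.Size([2, 2])
```

In training, the gaze head would be supervised with recorded driver gaze and the speed head with the vehicle's ground-truth speed, matching the two sources of supervision the abstract describes.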

27 citations

Journal ArticleDOI
01 May 1991
TL;DR: A visual detection model, including search, is constructed from empirical data on foveal and off-axis contrast thresholds, fixation times, saccadic sizes, and eye response times, which correlates well with published laboratory results at light levels of 1 fL or more and gives intuitively satisfying predictions of field performance.
Abstract: A visual detection model, including search, is constructed from empirical data on foveal and off-axis contrast thresholds, fixation times, saccadic sizes, and eye response times. The model includes one parameter to account for scene complexity and human preparedness, and it accounts quantitatively for the effects of clutter on this parameter and on other variables. The resulting algorithm correlates well with published laboratory results at light levels of 1 fL or more, and gives intuitively satisfying predictions of field performance.
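A toy rendering of the kind of model the abstract describes: a per-fixation detection probability derived from an off-axis contrast threshold, a clutter parameter that scales the threshold, and search simulated as a sequence of fixations. The functional forms and constants below are illustrative assumptions, not the paper's empirical fits:

```python
# Toy sketch of a search-plus-detection model in the spirit of the
# abstract: the contrast threshold rises away from the fovea, a clutter
# parameter scales it, and search is a sequence of discrete fixations.
# All functional forms and constants are illustrative assumptions.
import math
import random

def contrast_threshold(ecc_deg, clutter=1.0, foveal=0.01, k=0.05):
    """Contrast threshold, rising quadratically with eccentricity."""
    return clutter * foveal * (1.0 + k * ecc_deg ** 2)

def p_detect(contrast, ecc_deg, clutter=1.0):
    """Probability of detecting the target during a single fixation."""
    ratio = contrast / contrast_threshold(ecc_deg, clutter)
    return 1.0 - math.exp(-max(ratio - 1.0, 0.0))  # zero below threshold

def search_time(contrast, clutter=1.0, field_deg=20.0, fixation_s=0.3,
                max_fixations=200, seed=0):
    """Simulate random fixations until detection; return time or None."""
    rng = random.Random(seed)
    for n in range(1, max_fixations + 1):
        ecc = rng.uniform(0.0, field_deg)  # target eccentricity this fixation
        if rng.random() < p_detect(contrast, ecc, clutter):
            return n * fixation_s
    return None

# Doubling clutter roughly doubles thresholds and lengthens search:
for clutter in (1.0, 2.0):
    times = [search_time(0.05, clutter, seed=s) for s in range(200)]
    found = [t for t in times if t is not None]
    print(f"clutter={clutter}: mean search time {sum(found)/len(found):.2f} s")
```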

27 citations


Network Information
Related Topics (5)
Retinal: 24.4K papers, 718.9K citations (89% related)
Visual acuity: 32K papers, 797.1K citations (89% related)
Retina: 28K papers, 1.2M citations (88% related)
Retinal ganglion: 11.7K papers, 512.9K citations (86% related)
Eye movement: 14.1K papers, 540.5K citations (85% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    144
2022    385
2021    95
2020    119
2019    108
2018    83