
Showing papers on "Monocular vision published in 2000"


Journal Article
TL;DR: For evaluations of functional visual field influences on task performance, daily activities, and related quality-of-life issues, either the BINOCULAR SUMMATION or BEST LOCATION model provides good estimates of binocular visual field sensitivity.
Abstract: Purpose: To compare methods of predicting binocular visual field sensitivity of patients with glaucoma from monocular visual field data. Methods: Monocular and binocular visual fields were obtained for 111 patients with varying degrees of glaucomatous damage in one or both eyes, using the Humphrey 30-2 full-threshold procedure. Four binocular sensitivity prediction models were evaluated: BEST EYE, predictions based on individual values for the most sensitive eye, defined by mean deviation (MD); AVERAGE EYE, predictions based on the average sensitivity between eyes at each visual field location; BEST LOCATION, predictions based on the highest sensitivity between eyes at each visual field location; and BINOCULAR SUMMATION, predictions based on binocular summation of sensitivity between eyes at each location. Differences between actual and predicted binocular sensitivities were calculated for each model. Results: The average difference between predicted and actual binocular sensitivities was close to zero for the BINOCULAR SUMMATION and BEST LOCATION models, with 95% of all predictions being within +/-3 dB of actual binocular sensitivities. The best eye (MD) prediction had an average error of 1.5 dB (95% confidence limits [CL], +/-3.7 dB). The average eye prediction was the poorest, with an average error of 3.7 dB (95% CL, +/-4.6 dB). Conclusions: The BINOCULAR SUMMATION and BEST LOCATION models provided better predictions of binocular visual field sensitivity than the other two models, with a statistically significant difference in performance. The small difference in performance between the BINOCULAR SUMMATION and BEST LOCATION models was not statistically significant. For evaluations of functional visual field influences on task performance, daily activities, and related quality-of-life issues, either the BINOCULAR SUMMATION or BEST LOCATION model provides good estimates of binocular visual field sensitivity.
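The four prediction models in the abstract can be sketched directly. This is a hedged illustration: the location-wise operations follow the model names, but the exact summation rule is not stated in this listing, so quadratic summation of linear sensitivities (a common choice in the binocular-summation literature) and a mean-sensitivity proxy for MD are assumptions.

```python
import math

def best_eye(left, right):
    """BEST EYE: take all locations from the eye with the higher mean
    sensitivity (mean used here as a stand-in for mean deviation, MD)."""
    return left if sum(left) / len(left) >= sum(right) / len(right) else right

def average_eye(left, right):
    """AVERAGE EYE: mean of the two eyes' sensitivities (dB) at each location."""
    return [(l + r) / 2 for l, r in zip(left, right)]

def best_location(left, right):
    """BEST LOCATION: higher of the two eyes' sensitivities at each location."""
    return [max(l, r) for l, r in zip(left, right)]

def binocular_summation(left, right, k=2.0):
    """BINOCULAR SUMMATION: combine the eyes at each location.

    dB values are converted to linear sensitivity (10**(dB/10)), combined by
    quadratic summation (k=2 is an assumption), and converted back to dB;
    two equal eyes then gain about 1.5 dB binocularly.
    """
    out = []
    for l, r in zip(left, right):
        lin = ((10 ** (l / 10)) ** k + (10 ** (r / 10)) ** k) ** (1 / k)
        out.append(10 * math.log10(lin))
    return out
```

With equal 30 dB inputs at a location, the summation model predicts roughly 31.5 dB, which is consistent with the small positive binocular advantage the models above are meant to capture.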

233 citations


Journal Article
TL;DR: There is little evidence for binocular inhibition when the monocular acuities in the two eyes are unequal, contrary to the assumption of the widely used AMA algorithm for computing binocular visual impairment. For tasks that are strongly associated with visual acuity, this association can be captured from measures of monocular acuity and does not require separate assessment of binocular acuity.
Abstract: PURPOSE: To examine the relationship between monocular and binocular visual acuities as predictors of visual disability in a population-based sample of individuals 65 years of age and older. METHODS: Two thousand five hundred twenty community-dwelling residents of Salisbury, Maryland, between 65 and 84 years of age were recruited for the study. Corrected visual acuity was measured monocularly and binocularly using ETDRS charts. Reading speed, face discrimination, and self-reported difficulty with visual tasks were also determined. RESULTS: Binocular acuity is predicted with reasonable accuracy by acuity in the better eye alone, but not by the widely used American Medical Association (AMA) weighted-average algorithm. The AMA algorithm significantly underestimates binocular acuity when the interocular acuity difference exceeds one line. Monocular acuity and binocular acuity were significantly better predictors of reading speed than the AMA weighted score or a recently proposed Functional Vision Score (FVS). Monocular acuity in the better eye, binocular acuity, and the AMA and FVS algorithms were equally good predictors of self-reported vision disability. None of the acuity measures were good predictors of face recognition ability. CONCLUSIONS: The binocular acuities of older individuals can be inferred from measures of monocular acuity. There is little evidence for binocular inhibition when the monocular acuities in the two eyes are unequal, contrary to the assumption of the widely used AMA algorithm for computing binocular visual impairment. For tasks that are strongly associated with visual acuity, such as reading, this association can be captured from measures of monocular acuity and does not require separate assessment of binocular acuity.

150 citations


Journal ArticleDOI
TL;DR: It is concluded that in normal viewing conditions, reaching and grasping movements are less dependent on binocular information than has previously been thought.

85 citations


Journal ArticleDOI
TL;DR: Geometrical considerations based on these results support the existence of two modes of visual detection of body sway, afferent (retinal slippage) and efferent (extraretinal or eye-movement based).
Abstract: Visual control of postural sway during quiet standing was investigated in normal subjects to see if motion parallax cues were able to improve postural stability. In experiment 1, six normal subjects fixated a fluorescent foreground target, either alone or in the presence of full room illumination. The results showed that subjects reduced body sway when the background was visible. This effect, however, could be mediated not only by parallax cues but also by an increase in the total area of visual field involved. In experiment 2, other parameters such as image angular size and target distance were controlled for. Twelve subjects fixated a target composed of two light-emitting diodes (LEDs), placed at 45 cm from their eyes in a dark room. A second similar two-LED target was placed either at 170 cm (maximum parallax) or at 85 cm (medium parallax) from the fixated target, or in the same plane as the fixated target (0 cm, no parallax). It was found that the amplitude of sway was reduced significantly, by approximately 20%, when the two targets were presented in depth (parallax present) as compared to when they were in the same plane (no parallax). The effect was only present in the lateral direction and for low frequency components of sway (up to 0.5 Hz). In experiment 3, using eight subjects and a design similar to that of experiment 2, we confirmed that the effect of motion parallax on body sway was of monocular origin, since it was observed with both monocular and binocular vision. Geometrical considerations based on these results support the existence of two modes of visual detection of body sway, afferent (retinal slippage) and efferent (extraretinal or eye-movement based).

77 citations


Journal ArticleDOI
TL;DR: It is shown that outward drift in Experiment 1 was visually driven, and visually guided reaches were accurate when participants used binocular vision but when they used monocular vision, reaches were distorted.
Abstract: Psychophysical studies reveal distortions in perception of distance and shape. Are reaches calibrated to eliminate distortions? Participants reached to the front, side, or back of a target sphere. In Experiment 1, feedforward reaches yielded distortion and outward drift. In Experiment 2, haptic feedback corrected distortions and instability. In Experiment 3, feedforward reaches with only haptic experience of targets replicated the shape distortions but drifted inward. This showed that outward drift in Experiment 1 was visually driven. In Experiment 4, visually guided reaches were accurate when participants used binocular vision but when they used monocular vision, reaches were distorted. Haptic feedback corrected inaccuracy and instability of distance but did not correct monocular shape distortions. Dynamic binocular vision is representative and accurate and merits further study.

73 citations


Journal ArticleDOI
TL;DR: The results suggest that the monocular underestimation in the prehension task is not a consequence of a purely perceptual bias but rather it is visuomotor in nature – a monocular input to a system that normally calibrates motor output on the basis of binocular vision.
Abstract: Previous work has demonstrated that monocular vision affects the kinematics of skilled visually guided reaching movements in humans. In these experiments, prior to movement onset, subjects appeared to be underestimating the distance of objects (and as a consequence, their size) under monocular viewing relative to their reaches made under binocular control. The present series of experiments was conducted to assess whether this underestimation was a consequence of a purely visual distance underestimation under monocular viewing or whether it was due to some implicit inaccuracy in calibrating the reach by a visuomotor system normally under binocular control. In a purely perceptual task, a group of subjects made similar explicit distance estimations of the objects used in the prehension task under monocular and binocular viewing conditions, with no time constraints. A second group of subjects made these explicit distance estimations with only 500-ms views of the objects. No differences were found between monocular and binocular viewing in either of these explicit distance-estimation tasks. The limited-views subjects also performed a visually guided reaching task under monocular and binocular conditions and showed the previously demonstrated monocular underestimation (in that their monocular grasping movements showed lower peak velocities and smaller grip apertures). A distance underestimation of 4.1 cm in the monocular condition was computed by taking the y intercepts of the monocular and binocular peak velocity functions and dividing them by a common slope that minimised the sum of squares error. This distance underestimation was then used to predict the corresponding underestimation of size that should have been observed in the monocular reaches – a value closely approximating the observed value of 0.61 cm. 
Taken together, these results suggest that the monocular underestimation in the prehension task is not a consequence of a purely perceptual bias but rather it is visuomotor in nature – a monocular input to a system that normally calibrates motor output on the basis of binocular vision.

72 citations


Journal ArticleDOI
TL;DR: It is concluded that reducing the FOV produces substantial and dissociable effects on reaching and grasping behaviour and that field size must be taken into account in any context where visuo-motor performance is important.
Abstract: It has been observed that wearing goggles that restrict the field of view (FOV) causes familiar objects to appear both smaller and nearer. To investigate this further, we examined the effect of a range of field sizes (4°, 8°, 16°, 32° and 64°) on estimates of object distance and object size used to control reaching and grasping movements of binocular observers. No visual or haptic feedback was available during the experiment. It was found that, as the FOV was decreased, the distance reached by subjects also decreased, whereas the size of their grasp was unaffected. In a second experiment, we compared reaching and grasping responses under binocular and monocular conditions for 8° and 64° field sizes and showed that the effects of FOV do not result from the progressive loss of binocular information. We conclude that reducing the FOV produces substantial and dissociable effects on reaching and grasping behaviour and that field size must be taken into account in any context where visuo-motor performance is important.

51 citations


Proceedings ArticleDOI
31 Oct 2000
TL;DR: The setup's practicability under conditions of continuous localization during motion in real time (referred to as on-the-fly localization) is investigated in large-scale experiments and very high localization precision is obtained.
Abstract: In this paper a multisensor setup for localization consisting of a 360 degree laser range finder and a monocular vision system is presented. Its practicability under conditions of continuous localization during motion in real time (referred to as on-the-fly localization) is investigated in large-scale experiments. The features in use are infinite horizontal lines for the laser and vertical lines for the camera, providing an extremely compact environment representation. They are extracted using physically well-grounded models for all sensors and passed to the Kalman filter for fusion and position estimation. Very high localization precision is obtained in general. The vision information has been found to further increase this precision, particularly in orientation, even with a moderate number of matched features. The results were obtained with a fully autonomous system in extensive tests with an overall length of more than 1.4 km and 9,500 localization cycles. Furthermore, general aspects of multisensor on-the-fly localization are discussed.
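The fusion step named in the abstract can be illustrated with a minimal sketch. The actual system is a Kalman filter over the full robot pose with line features; the scalar example below, with made-up noise values, shows only how a laser-derived and a vision-derived measurement of one quantity (e.g. heading) are blended by the measurement update.

```python
def kalman_update(x, P, z, R):
    """One Kalman measurement update for a scalar state (e.g. heading).

    x, P : prior estimate and its variance
    z, R : measurement and its variance
    """
    K = P / (P + R)            # Kalman gain: weight measurement by its variance
    x_post = x + K * (z - x)   # blend prior and measurement
    P_post = (1 - K) * P       # posterior variance shrinks after fusion
    return x_post, P_post

# Fuse a laser-derived heading, then a vision-derived one (illustrative values):
x, P = 0.10, 0.05                         # prior heading (rad) and variance
x, P = kalman_update(x, P, 0.14, 0.02)    # from horizontal laser line features
x, P = kalman_update(x, P, 0.12, 0.01)    # from vertical camera line features
```

Each update shrinks the variance, which mirrors the paper's observation that adding vision features further tightens the estimate, particularly in orientation.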

44 citations


Journal ArticleDOI
TL;DR: The results suggest that the IOVD cue makes a significant contribution to MID speed perception, despite the fact that the disparity system appears to be unaffected.

41 citations


Journal ArticleDOI
TL;DR: In this study a multiple-view two-dimensional display was compared with a three-dimensional monocular display and a 3D stereoscopic display using a simulated telerobotic task; the results showed that the multiple-view 2D display was superior to the 3D monocular and 3D stereoscopic displays in the absence of the visual enhancement depth cues.
Abstract: In this study a multiple-view two-dimensional (2D) display was compared with a three-dimensional (3D) monocular display and a 3D stereoscopic display using a simulated telerobotic task. As visual aids, three new types of visual enhancement cues were provided and evaluated for each display type. The results showed that the multiple-view 2D display was superior to the 3D monocular and the 3D stereoscopic display in the absence of the visual enhancement depth cues. When participants were provided with the proposed visual enhancement cues, the stereoscopic and monocular displays became equivalent to the multiple-view 2D display. Actual or potential applications of this study include the design of visual displays for teleoperation systems.

36 citations


Journal ArticleDOI
TL;DR: Overall, the color-appearance measurements are explained by monocular encoding of chromatic differences at edges, and a central binocular mechanism of chromatic-contrast gain control.

Journal ArticleDOI
TL;DR: The structure of visual space was investigated using an exocentric pointing task; it is concluded that stereopsis improves space perception but does not improve veridicality.
Abstract: Classically, it has been assumed that visual space can be represented by a metric. This means that the distance between points and the angle between lines can be uniquely defined. However, this assumption has never been tested. Also, measurements outdoors, where monocular cues are abundant, conflict with this model. This paper reports on two experiments in which the structure of visual space was investigated, using an exocentric pointing task. In the first experiment, we measured the influence of the separation between pointer and target and of the orientation of the stimuli with respect to the observer. This was done both monocularly and binocularly. It was found that the deviation of the pointer settings depended linearly on the orientation, indicating that visual space is anisotropic. The deviations for configurations that were symmetrical in the median plane were approximately the same, indicating that left/right symmetry was maintained. The results for monocular and binocular conditions were very different, which indicates that stereopsis was an important cue. In both conditions, there were large deviations from the veridical. In the second experiment, the relative distance of the pointer and the target with respect to the observer was varied in both the monocular and the binocular conditions. The relative distance turned out to be the main parameter for the ranges used (1-5 m). Any distance function must have an expanding and a compressing part in order to describe the data. In the binocular case, the results were much more consistent than in the monocular case and had a smaller standard deviation. Nevertheless, the systematic mispointings remained large. It can therefore be concluded that stereopsis improves space perception but does not improve veridicality.

Proceedings ArticleDOI
03 Sep 2000
TL;DR: Presents a method designed to recover the 3D geometry of a road from an image sequence provided by an on-board monocular monochromatic camera; it only requires the road edges to be detected in the image.
Abstract: This paper deals with a method designed to recover the 3D geometry of a road from an image sequence provided by an on-board monocular monochromatic camera. It only requires the road edges to be detected in the image. The reconstruction process is able to compute (1) the 3D coordinates of the road axis points, (2) the vehicle's position on its lane and (3) the prediction of the road edge localization in the next images of the sequence, which is very helpful for the detection phase. It also computes the confidence intervals associated with the 3D parameters. The description of the method is followed by the presentation of its most significant results.

Journal ArticleDOI
TL;DR: This result indicates that in binocular vision the integration of left and right eye signals first occurs after retinal and oculomotor signals have been integrated for each eye separately, which challenges the prevailing concept of cyclopean vision and current views about stereoscopic depth perception.

Journal ArticleDOI
TL;DR: This work investigated whether forward or side-to-side head movements yielded more accurate and precise monocular egocentric distance information, as shown by performance in a reaching task, and tested performance in the two head movement conditions when the observers were given haptic feedback.
Abstract: We investigated whether forward or side-to-side head movements yielded more accurate and precise monocular egocentric distance information, as shown by performance in a reaching task. Observers wore a head-mounted camera and display to isolate the optic flow generated by their head movements and had to reach to align a stylus directly under a target surface. Performance in the two head movement conditions was also tested with normal monocular vision. We tested performance in the two head movement conditions when the observers were given haptic feedback and compared performance when haptic feedback was removed. Performance was both more accurate and more precise in the forward head movement condition than in the side-to-side head movement condition. Performance in the side-to-side condition also deteriorated more after the removal of haptic feedback than did performance in the forward head movement condition. In the normal monocular condition, performance was comparable for the two head movement conditions. The implications for enucleated patients are discussed.

Proceedings ArticleDOI
Yi Lu Murphey1, J. Chen1, J.A. Crossman1, J. Zhang1, Paul Richardson, L. Sieh1 
03 Oct 2000
TL;DR: This paper presents DepthFinder, a real-time depth detection system that finds the distances of objects through a monocular vision model and can be used with a camera mounted either at the front or side of a moving vehicle.
Abstract: Many military applications require distance information from a moving vehicle to targets in video image sequences. For indirect driving, lack of depth perception in the view hinders steering and navigation. In this paper we present DepthFinder, a real-time depth detection system that finds the distances of objects through a monocular vision model. DepthFinder can be used with a camera mounted either at the front or side of a moving vehicle. A real-time matching algorithm is introduced to improve the matching performance by several orders of magnitude. The experimental results and the performance analysis are presented.
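This listing does not describe DepthFinder's internals, but the geometry that makes monocular depth possible from a moving vehicle can be sketched: two frames taken a known distance apart act like a stereo pair. The focal length, baseline, and disparity values below are illustrative assumptions, not the paper's numbers.

```python
def depth_from_motion(focal_px, baseline_m, disparity_px):
    """Depth of a static point seen from two positions of one camera.

    Two frames taken baseline_m apart (vehicle speed * frame interval) act
    like a stereo pair; disparity_px is the image shift of a matched feature.
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or a bad match")
    return focal_px * baseline_m / disparity_px

# Vehicle at 10 m/s, frames 0.1 s apart -> 1.0 m baseline (illustrative):
z = depth_from_motion(focal_px=800.0, baseline_m=1.0, disparity_px=40.0)
```

The hard part, which the abstract attributes to the real-time matching algorithm, is finding the correct feature correspondences between frames; the depth formula itself is the easy step.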

Journal ArticleDOI
TL;DR: Binocular superiority appeared to be most pronounced when participants were unable to adjust their limb control strategy or procedure on the basis of terminal feedback about performance, and binocular vision was associated with greater spatial accuracy.
Abstract: In the present research the authors examined the time course of binocular integration in goal-directed aiming and grasping. With liquid-crystal goggles, the authors manipulated vision independently to the right and left eyes of 10 students during movement preparation and movement execution. Contrary to earlier findings reported in catching experiments (I. Olivier, D. J. Weeks, K. L. Ricker, J. Lyons, & D. Elliott, 1998), neither a temporal nor a spatial binocular advantage was obtained in 1 grasping and 2 aiming studies. That result suggests that, at least in some circumstances, monocular vision is sufficient for the precise control of limb movements. In a final aiming experiment involving 3-dimensional spatial variability and no trial-to-trial visual feedback about performance, binocular vision was associated with greater spatial accuracy. Binocular superiority appeared to be most pronounced when participants were unable to adjust their limb control strategy or procedure on the basis of terminal feedback about performance.

Proceedings ArticleDOI
13 Jun 2000
TL;DR: The augmented reality system superimposes the 3D object wireframe onto the live viewing image taken from the surgical microscope as well as displaying other useful navigation information, while allowing the surgeons to freely change its zoom and focus for viewing.
Abstract: This paper presents a robust and accurate vision-based augmented reality system for surgical navigation. The key point of our system is a robust and real-time monocular vision algorithm to estimate the 3D pose of surgical tools, utilizing specially designed code markers and Kalman filter-based position updating. The vision system is not impaired by occlusion or rapid changes of illumination. The augmented reality system superimposes the 3D object wireframe onto the live viewing image taken from the surgical microscope as well as displaying other useful navigation information, while allowing the surgeons to freely change its zoom and focus for viewing. The experimental results verified the robustness and usefulness of the system, with an image registration error of less than 2 mm.

Journal ArticleDOI
TL;DR: It is suggested that in everyday life enucleated individuals make use of as many optical variables as possible to partially compensate for the lack of binocularity.

Journal ArticleDOI
TL;DR: Experimental results demonstrate the effectiveness of the adaptive controller, which uses direct perception of the environment and does not require an explicit global target position.
Abstract: This paper presents a solution to the problem of manipulation control: target identification and grasping. The proposed controller is designed for a real platform in combination with a monocular vision system. The objective of the controller is to learn an optimal policy to reach and grasp a spherical object of known size, randomly placed in the environment. To accomplish this, the task has been treated as a reinforcement learning problem, in which the controller learns the situation-action mapping by trial and error. The optimal policy is found by using the Q-Learning algorithm, a model-free reinforcement learning technique that rewards actions that move the arm closer to the target. The vision system uses geometrical computation to simplify the segmentation of the moving target (a spherical object) and determines an estimate of the target parameters. To speed up learning, the knowledge acquired in simulation was ported to the real platform, an industrial PUMA 560 robot manipulator. Experimental results demonstrate the effectiveness of the adaptive controller, which uses direct perception of the environment and does not require an explicit global target position.
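The tabular Q-Learning update the abstract names can be sketched on a toy problem. The states, actions, and reward below are hypothetical stand-ins for the real arm/vision state; only the one-step update rule itself is the standard algorithm.

```python
import random

def q_learning(n_states, n_actions, step, reward, episodes=500,
               alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-Learning: Q[s][a] += alpha * (r + gamma*max Q[s'] - Q[s][a])."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:                    # last state = target grasped
            if random.random() < eps:               # epsilon-greedy exploration
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda a: Q[s][a])
            s2 = step(s, a)
            Q[s][a] += alpha * (reward(s2) + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Toy 1-D "arm": action 1 moves toward the target at state 4, action 0 away;
# reward, as in the abstract, favors actions that move closer to the target.
random.seed(0)
step_fn = lambda s, a: min(4, s + 1) if a == 1 else max(0, s - 1)
reward_fn = lambda s2: 1.0 if s2 == 4 else 0.0
Q = q_learning(5, 2, step_fn, reward_fn)
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(4)]
```

After training, the greedy policy moves toward the target from every state, which is the "optimal policy" the controller is said to learn; the simulation-to-robot transfer in the paper amounts to initializing the real platform with a Q-table learned this way in simulation.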

Journal ArticleDOI
TL;DR: Stereoacuity for second-order stimuli and monocular acuity for non-abutting targets are more likely limited by stimulus-dependent spatial subsampling than first-order stereopsis, which was found to depend solely on blur.

Journal Article
TL;DR: 3D coordinate vision measuring methods using feature points on the workpiece are demonstrated, such as the structured light method, laser auto-focusing method, binocular vision method, trinocular vision method, and monocular vision method.
Abstract: 3D coordinate vision measuring methods using feature points on the workpiece are demonstrated, such as the structured light method, laser auto-focusing method, binocular vision method, trinocular vision method, monocular vision method, etc. The characteristics and measuring precision of each method are analyzed in detail, and the present development and application status are also introduced.

Journal ArticleDOI
TL;DR: This work investigated stereoscopically perceived aspect ratios of frontoparallel occluding and occluded rectangles for various distances and fixation depths and found that observers did not perceive the distortions that would be predicted on the basis of the above-mentioned comparison of the perceived visual directions of the edges of the rectangle.
Abstract: In monocular vision, the horizontal/vertical aspect ratio (shape) of a frontoparallel rectangle can be based on the comparison of the perceived directions of the rectangle's edges. In binocular vision of a typical three-dimensional scene (when occlusions are present), this is not the case: Frontoparallel rectangles would be perceived in a distorted fashion if an observer were to base perceived aspect ratio on the perceived directions of the rectangle's edges. We psychophysically investigated stereoscopically perceived aspect ratios of frontoparallel occluding and occluded rectangles for various distances and fixation depths. We found that observers did not perceive the distortions that would be predicted on the basis of the above-mentioned comparison of the perceived visual directions of the edges of the rectangle. Our results strongly suggest that the mechanism that determines perceived aspect ratio is dissociated from the mechanism that determines perceived direction. The consequences of the findings for the Kanizsa, Poggendorff, and horizontal/vertical illusions are discussed.

Proceedings ArticleDOI
03 Oct 2000
TL;DR: The work described concerns the detection, location and especially tracking, by monocular vision, of target vehicles equipped with virtual marks, and precise determination of the position and relative speed of the target vehicles.
Abstract: We consider the problems of perception of an adaptive cruise control system. The purpose of this type of system is to regulate the speed of our vehicle so as to respect safety distances relative to vehicles ahead. The work described concerns the detection, location and especially tracking, by monocular vision, of target vehicles equipped with virtual marks. We focus on precise determination of the position and relative speed of the target vehicles. We show an example of cooperation between road detection and obstacle detection. The methods presented are tested on real roads on our VELAC demonstration vehicle. We include results obtained by this means.

Journal Article
Aufrere, Marmoiton, Chapuis, Collange, Derutin 
TL;DR: In this paper, the authors deal with a process designed first to extract the vehicle's lane by on-board monocular vision; a reconstruction algorithm then computes the vehicle's location on its lane and the 3D shape of the road.
Abstract: This article deals with a process designed first to extract the vehicle's lane by on-board monocular vision. This detection process is based upon a recursive updating of a statistical model of the lane obtained by a training phase. Once the lane has been located, a reconstruction algorithm computes the vehicle's location on its lane and the 3D shape of the road. Thereafter, we focus on the detection, location and tracking of front vehicles equipped with specific visual markers in order to achieve an accurate determination of the location and speed of these vehicles. Merging these various pieces of information makes it possible to point out the most dangerous obstacle. Each of these three processes is detailed and significant examples are provided.

Journal ArticleDOI
TL;DR: Monocular symmetry is neither necessary nor sufficient for dichoptic bilateral symmetry perception; and symmetry mechanisms have no access to monocular signals.

Book ChapterDOI
07 Sep 2000
TL;DR: The paper proposes an alternative method based on set-membership estimation including dynamics, which limits the depth ambiguity by considering loose constraint knowledge represented as inequalities and provides the shape recovery of articulated objects.
Abstract: This paper presents a method of estimating both 3-D shapes and moving poses of an articulated object from a monocular image sequence. Instead of using direct depth data, prior loose knowledge about the object, such as possible ranges of joint angles, lengths or widths of parts, and some relationships between them, is referred to as system constraints. This paper first points out that Kalman filter estimates essentially converge to a wrong state for non-linear unobservable systems. Thus the paper proposes an alternative method based on set-membership estimation including dynamics. The method limits the depth ambiguity by considering loose constraint knowledge represented as inequalities and provides the shape recovery of articulated objects. Effectiveness of the framework is shown by experiments.
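The core set-membership idea, intersecting the feasible intervals implied by inequality constraints to limit depth ambiguity, can be sketched as follows. The projection model and all numeric bounds are illustrative assumptions, not the paper's actual constraint set.

```python
def intersect(a, b):
    """Intersection of two closed intervals (lo, hi); None if disjoint."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def depth_interval(u, f, x_range):
    """Feasible depth Z for an image coordinate u = f * X / Z with X in x_range.

    Assumes u > 0 and positive X bounds, so Z = f * X / u is monotone in X.
    """
    return (f * x_range[0] / u, f * x_range[1] / u)

# A limb endpoint at u = 100 px with f = 500 px; part-length bounds give
# X in [0.2, 0.6] m, while joint-limit constraints independently bound
# the depth to [1.2, 2.5] m (all values hypothetical):
z_from_image = depth_interval(100.0, 500.0, (0.2, 0.6))   # from projection
z_feasible = intersect(z_from_image, (1.2, 2.5))          # both constraints
```

A single monocular view leaves the depth as an interval rather than a point; each additional inequality constraint can only shrink that interval, which is how loose knowledge substitutes for direct depth data.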

Proceedings ArticleDOI
A.L. Maganto1, José Manuel Menéndez, Luis Salgado, E. Rendon, Narciso Garcia 
10 Sep 2000
TL;DR: A basic monocular vision system, focused on road location solely, is described, which obtains excellent final results, succeeding in more than 75% of the analysed images from the tested sequences.
Abstract: The improvement of vehicle security is a major priority of the car industry. Both active and passive security systems have experienced great development during the last decade, but the main research is still focused on minimising the errors committed by the driver rather than trying to avoid them. In this paper, a basic monocular vision system, focused solely on road location, is described. Working with only one video camera hinders exact 3D reconstruction of the scene: no information about distances and dimensions is available unless artificial a priori constraints are imposed. The problem increases because of the movement of the camera with respect to the scene. In the present system, the road searching and following algorithms operate on the two-dimensional image plane, and 2D to 3D conversion is not accomplished. The system obtains excellent final results, succeeding in more than 75% of the analysed images from the tested sequences.

Journal ArticleDOI
TL;DR: In this paper, a method for integrating ultrasonic range readings and monocular visual information in the environmental occupancy grid of an autonomous vehicle is presented. The main features of the proposed method are its low computational effort and the low cost of the sensor systems.

Proceedings ArticleDOI
08 Oct 2000
TL;DR: The paper proposes a virtual active vision head with binocular fish-eye lenses to simulate eye movements and real time gaze control is achieved on the proposed virtual binocular active vision system.
Abstract: A human continuously directs his eyes toward interesting points to recognize an object. Such eye movement, or visual attention, plays an important role in human vision. To simulate such eye movements, the paper proposes a virtual active vision head with binocular fish-eye lenses. Moving objects are detected within the wide angle of view obtained through the fish-eye lens by comparing the input image with the background image estimated by an adaptive M-estimator. The location of the moving object is used as a visual cue to achieve the saccadic eye movement. The proposed virtual active vision system was implemented on a PC cluster. Two PCs are used to capture images through the left and right fish-eye lenses and a PC with four processors is used for image processing. Real-time gaze control is achieved on the proposed virtual binocular active vision system.
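The background comparison step can be sketched in a simplified form. The paper uses an adaptive M-estimator for the background image; the toy version below approximates that with a Huber-style clipped running update, so that large residuals from moving objects barely shift the background model. All pixel values and thresholds are illustrative assumptions.

```python
def update_background(bg, frame, rate=0.05, clip=10.0):
    """Robust per-pixel background update: the influence of large residuals
    (moving objects) is clipped so they barely corrupt the background model."""
    out = []
    for b, f in zip(bg, frame):
        r = max(-clip, min(clip, f - b))   # clipped (robustified) residual
        out.append(b + rate * r)
    return out

def detect_motion(bg, frame, thresh=25.0):
    """Flag pixels whose residual against the background exceeds the threshold."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]

bg = [100.0, 100.0, 100.0]
frame = [101.0, 180.0, 99.0]       # middle pixel covered by a moving object
mask = detect_motion(bg, frame)    # only the middle pixel is flagged
bg = update_background(bg, frame)  # moving pixel shifts by at most rate * clip
```

The centroid of the flagged pixels would then serve as the visual cue driving the saccadic gaze shift described in the abstract.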