
Showing papers on "Monocular vision" published in 1994


Journal ArticleDOI
TL;DR: It is suggested that contrast acts as a pictorial depth cue simulating the optical effects of aerial perspective, and is an effective depth cue in the absence of any other depth information.

183 citations


Journal ArticleDOI
TL;DR: Simulated cataract resulted in the greatest detriment to driving performance, followed by binocular visual field restriction, and the monocular condition did not significantly affect driving performance for any of the driving tasks assessed.
Abstract: The aim of the study was to determine the effect on driving of restricting vision. This was undertaken by comparing the driving performance of young, normal subjects under conditions of simulated visual impairment with a baseline condition. Visual impairment was simulated using goggles designed to replicate the effects of cataracts, binocular visual field restriction, and monocular vision. All subjects had binocular visual acuity greater than 6/12 when wearing the goggles and thus satisfied the visual requirements for a driver's license. Driving performance was assessed on a closed-road circuit for a series of driving tasks including peripheral awareness, maneuvering, reversing, reaction time, speed estimation, road position, and time to complete the course. Simulated cataract resulted in the greatest detriment to driving performance, followed by binocular visual field restriction. The monocular condition did not significantly affect driving performance for any of the driving tasks assessed.

148 citations


Journal ArticleDOI
TL;DR: A binocular viewbox has been reconstructed and pictorial relief under monocular, 'synoptic', and natural binocular viewing is described; the results corroborate and go beyond early introspective reports and turn out to pose intriguing problems for modern research.
Abstract: Human observers are apparently well able to judge properties of 'three-dimensional objects' on the basis of flat pictures such as photographs of physical objects. They obtain this 'pictorial relief' without much conscious effort and with little interference from the (flat) picture surface. Methods for 'magnifying' pictorial relief from single pictures include viewing instructions as well as a variety of monocular and binocular 'viewboxes'. Such devices are reputed to yield highly increased pictorial depth, though no methodologies for the objective verification of such claims exist. A binocular viewbox has been reconstructed and pictorial relief under monocular, 'synoptic', and natural binocular viewing is described. The results corroborate and go beyond early introspective reports and turn out to pose intriguing problems for modern research.

121 citations


Journal ArticleDOI
TL;DR: It is clear that binocular vision makes important contributions to both the planning and the on-line control of skilled, visually guided reaching and grasping movements.
Abstract: The contribution of binocular visual feedback to the kinematics of human prehension was studied in two related experiments. In both experiments, the field of view of each eye was independently controlled by means of goggles fitted with liquid-crystal shutters. While wearing these goggles, which permitted either a binocular or a monocular view of the world, subjects were required to reach out and grasp a target object, which varied in size and position from trial to trial. In experiment 1, two viewing conditions were used. In one condition, binocular vision was available throughout the entire trial; in the second condition, the initial binocular view was replaced by a monocular view after the reaching movement had been initiated. When only monocular feedback was available, subjects showed a prolonged deceleration phase, although the time they spent in contact with the object was the same in both conditions. In experiment 2, monocular vision was available throughout a given trial in one condition and was replaced by binocular vision upon movement initiation in the second condition. Subjects in this experiment also displayed a prolonged deceleration phase in the monocular feedback condition relative to their performance in the binocular feedback condition. Unlike experiment 1, however, allowing only monocular feedback resulted in an increase in the amount of time subjects spent in contact with the object. Moreover, the object contact phases under the two conditions of experiment 2 were much longer than those observed in experiment 1, in which subjects received initial binocular views of the object. This latter finding suggests that an initial binocular view provides better information about the size and location of the object, information that allows subjects to form their final grasp more efficiently. In summary, these findings make it clear that binocular vision makes important contributions to both the planning and the on-line control of skilled, visually guided reaching and grasping movements.

114 citations


Journal ArticleDOI
04 Aug 1994 - Nature
TL;DR: It is reported here that after 1 week of monocular deprivation, cortical orientation maps for the deprived eye had vanished, but it is discovered that after subsequent reverse occlusion the restored orientation maps were very similar to the original maps.
Abstract: In the visual system of young kittens, the layout of the cortical maps for ocular dominance and orientation preference converges to an equilibrium state within the first few weeks of life and normally remains largely unchanged. If during the critical period, however, patterned visual experience is restricted to only one eye for a few days, cortical neurons lose their ability to respond to stimulation of the deprived eye. We used the 'reverse occlusion' protocol together with chronic optical imaging to investigate how the profound anatomical changes accompanying monocular deprivation [1] affect the spatial pattern of the cortical orientation preference map. We report here that after 1 week of monocular deprivation, cortical orientation maps for the deprived eye had vanished. But we also discovered that after subsequent reverse occlusion the restored orientation maps were very similar to the original maps. This demonstrates that in spite of functional disconnection of one eye after monocular deprivation, the layout of cortical orientation maps, when re-established for this eye, is not formed from scratch but is strongly influenced by previous experience.

104 citations


Journal ArticleDOI
TL;DR: It is concluded that subjects are able monocularly to discriminate differences in the direction of motion in depth, even when both the direction and speed of retinal image translation are removed as reliable cues.

69 citations


Patent
24 Jun 1994
TL;DR: In this article, a process for background information recovery in an image including an image of a moving object is described; it comprises identifying regions of moving objects relative to the background, deriving a moving constellation containing the moving objects in a minimum circumscribing polygon, deriving partial background images associated with respective positions of the moving constellation, and combining those partial background images.
Abstract: A process for background information recovery in an image including an image of a moving object, comprises the steps of identifying regions of moving objects relative to the background; deriving a moving constellation containing moving objects in a minimum circumscribing polygon; deriving partial background images associated with respective positions of the moving constellation; and combining ones of the partial background images.
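The recovery process described above can be illustrated with a simple frame-differencing sketch. The Python/NumPy fragment below is only a rough, generic illustration of the idea, not the patented method: the threshold, the axis-aligned bounding box standing in for the minimum circumscribing polygon, and all names are assumptions of this sketch.

```python
import numpy as np

def recover_background(frames, motion_thresh=25):
    """Rough sketch: recover a static background from a list of grayscale frames.

    Pixels that change strongly between consecutive frames are treated as moving;
    their bounding box stands in for the minimum circumscribing polygon of the
    moving constellation.  Everything outside that box is a partial background
    image, and the partial images are combined by keeping, for each pixel, the
    first value observed while that pixel was outside the constellation.
    """
    h, w = frames[0].shape
    background = np.full((h, w), np.nan)

    for prev, curr in zip(frames, frames[1:]):
        moving = np.abs(curr.astype(int) - prev.astype(int)) > motion_thresh
        ys, xs = np.nonzero(moving)
        constellation = np.zeros((h, w), dtype=bool)
        if len(ys):
            constellation[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = True
        fill = np.isnan(background) & ~constellation   # static pixels still unknown
        background[fill] = curr[fill]

    return background   # pixels never seen uncovered remain NaN
```

A median over all frames in which a pixel lay outside the constellation would be an equally plausible combination rule.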

68 citations


Journal ArticleDOI
TL;DR: The first experiment showed that both the vibration-induced illusions and the pointing shifts disappeared in a structured visual context, which suggests that the processes involved when the target is viewed in darkness might differ from those occurring in structured surroundings.

66 citations


Journal ArticleDOI
TL;DR: It is shown for this model that the overall pattern of stripes produced is strongly influenced by the shape of the cortex, and stripes with a global order similar to that seen biologically can be produced under appropriate conditions.
Abstract: The elastic net (Durbin and Willshaw 1987) can account for the development of both topography and ocular dominance in the mapping from the lateral geniculate nucleus to primary visual cortex (Goodhill and Willshaw 1990). Here it is further shown for this model that (1) the overall pattern of stripes produced is strongly influenced by the shape of the cortex: in particular, stripes with a global order similar to that seen biologically can be produced under appropriate conditions, and (2) the observed changes in stripe width associated with monocular deprivation are reproduced in the model.
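For readers unfamiliar with the model, the elastic net update can be sketched compactly. The NumPy fragment below is a generic Durbin-Willshaw-style step for a one-dimensional chain of cortical points in a feature space (retinal position plus an ocularity axis), not the simulation reported in this paper; cortical shape and monocular deprivation would enter through the choice and weighting of the training points, and the parameter values here are arbitrary.

```python
import numpy as np

def elastic_net_step(X, Y, kappa, alpha=0.2, beta=2.0):
    """One annealing step of a 1-D elastic net (Durbin & Willshaw style).

    X: (N, d) feature points (e.g. retinal position plus an ocularity coordinate).
    Y: (M, d) cortical points forming an open chain; returns the updated chain.
    """
    # responsibilities: how strongly each feature point pulls each cortical point
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)      # (N, M) squared distances
    phi = np.exp(-d2 / (2 * kappa ** 2))
    w = phi / phi.sum(axis=1, keepdims=True)                 # normalise over the cortex

    # data term: weighted pull of the feature points on each cortical point
    pull = (w[:, :, None] * (X[:, None, :] - Y[None, :, :])).sum(axis=0)

    # elasticity term: keep neighbouring cortical points close (open chain ends fixed)
    tension = np.zeros_like(Y)
    tension[1:-1] = Y[2:] - 2 * Y[1:-1] + Y[:-2]

    return Y + alpha * pull + beta * kappa * tension
```

Annealing consists of repeating the step while slowly lowering kappa; ocular dominance stripes appear as the chain segregates along the ocularity axis.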

44 citations


Journal ArticleDOI
TL;DR: The results show that the rules of cyclopean direction are not sufficient for human vision in this general situation; stimulus conditions in which one line is presented to one eye and the other line to the other eye provide a more critical test of the validity of the cyclopean rules.

39 citations


Journal ArticleDOI
TL;DR: Monocular eye-hand coordination was tested in a pointing experiment in the central and peripheral visual field of each eye of strabismic and anisometropic amblyopes, strabismic alternators and normal controls, and the pointing pattern was similar in the two eyes of these subjects.

Journal ArticleDOI
TL;DR: It is concluded that, in spite of the relative morphological maturity of the peripheral retina, visual acuity develops in the peripheral visual field.

Journal ArticleDOI
TL;DR: Although stereopsis and fusion terminate rivalry, both are initially disrupted for a few hundred milliseconds by rivalry suppression, showing that an appropriately matched stereo pair can break rivalry suppression more easily than can monocular changes in position.
Abstract: Does the shift from binocular rivalry to fusion or stereopsis take time? We measured stereoacuity after rivalry suppression of one half-image of a stereoacuity line target. After the observer signalled that the single stereo half-image had been suppressed, the other half-image was presented for a variable duration. Stereoacuity thresholds were elevated for 150–200 ms. A control experiment demonstrated that the threshold elevation was due to rivalry suppression per se, rather than masking effects associated with the rivalry-inducing target. Monocular Vernier thresholds, measured as the smallest identifiable abrupt shift in the upper line of an aligned Vernier target that had previously been suppressed by rivalry, were elevated for a much longer duration. This result shows that an appropriately matched stereo pair can break rivalry suppression more easily than can monocular changes in position. With the aid of a similar paradigm, we also measured the duration needed to detect a disparate feature in a random...

Journal ArticleDOI
TL;DR: The improvement in binocular acuity compared to monocular visual acuity was less than would occur in normal subjects with minimal 'associated' phorias, although the improvement differed according to whether the readings were eso- or exo- in nature.

Journal ArticleDOI
TL;DR: The experiments reported here further investigated lateralization and unilateral transfer of memory in food-storing marsh tits, Parus palustris, using the technique of monocular occlusion, and predicted that birds should show better memory performance after 3 and 24 h than after 7 h and memory should be more accurate when both eyes are used during storage than with monocular Occlusion.
Abstract: Two previous experiments on food storing and one-trial associative learning in marsh tits (Clayton 1992a; Clayton and Krebs 1992) demonstrate that information coming into the brain from the left eye disappears from the left eye system between 3 and 24 h after memory formation, whereas that coming into the brain from the right eye remains stable within the right eye system for at least 51 h after memory formation. Performance after a 7 h retention interval appears to represent an intermediate stage in which the information is no longer accessible to the left eye system but is not yet available to the right eye system, suggesting a unilateral transfer of memory. The experiments reported here further investigated lateralization and unilateral transfer of memory in food-storing marsh tits, Parus palustris, using the technique of monocular occlusion. Birds were tested for their ability to retrieve stored seeds after retention intervals of 3, 7 and 24 h under 4 different occlusion treatments. Two predictions were tested: (a) with right eye occlusion during storage, birds should show better memory performance after 3 and 24 h than after 7 h and (b) memory should be more accurate when both eyes are used during storage than with monocular occlusion. The first prediction, which arises from the fact that memory is transferred from the left to the right eye system at about 7 h and is inaccessible during the transfer, was supported by the data. The second prediction, however, was not supported. Previous work has shown that in marsh tits the two eye systems remember preferentially different aspects of the stimulus: the left eye system responds to spatial position and the right eye system to object-specific cues. It is possible that the failure to find superior performance in binocular tests was because the task could be solved by either spatial or object-specific memory.

Journal ArticleDOI
TL;DR: It was shown that acuity in the nonstrabismic eye of some of the strabismic subjects was improved by allowing the strabismic eye to view; these were the subjects with the greatest depths of amblyopia.

Journal ArticleDOI
TL;DR: Monocular habit in normal viewing reinforces other evidence for the unorthodox idea that visual perception arises from a union in consciousness of monocular images that are elaborated independently.
Abstract: Faced with an unobstructed view, both foveas can be readily aligned with a distant visual target. The minor difference in the view of the two eyes (which arises from slightly different lines of sight) presents no special problem and is, indeed, the basis of stereopsis. However, when obstructing objects are present in the foreground, the view provided by one eye becomes wholly or partially incompatible with the view of the other. We have investigated how we cope with this everyday situation by having volunteers observe distant targets through a fenestrated screen. In this circumstance, subjects naturally position themselves to view a target of interest with one eye--usually the right eye. This monocular habit in normal viewing reinforces other evidence for the unorthodox idea that visual perception arises from a union in consciousness of monocular images that are elaborated independently.

Journal ArticleDOI
TL;DR: The results of Expt 2 suggest the possibility that this monocular mechanism which appears to respond directly to the movement of the intersections (or "blobs") in a two-dimensional image is inhibited by binocular exposure.

Journal ArticleDOI
TL;DR: The contrast sensitivity function measured with the VCTS test showed a considerable loss of low-frequency sensitivity in the authors' subjects compared to a normal population, which was more marked in the more severely impaired subjects.

Proceedings ArticleDOI
12 Sep 1994
TL;DR: An autonomous monocular vision system is described whose output is used as a feedback signal to control the position and orientation of the manipulator with respect to a dynamic object part in a 3D scene, without any previous knowledge of the part's placement or motion.
Abstract: In this paper, we present a model-based vision system using a CCD camera mounted on the end-effector of an industrial robot. We describe an autonomous monocular vision system whose output is used as a feedback signal to control the position and orientation of the manipulator with respect to a dynamic object part in a 3D scene. Without any previous knowledge of the part's placement or motion, this visual feedback would allow a robot to track a pattern in real time. Applications for such a vision-guided robot include 3D object recognition, by extracting 2D image features and rebuilding 3D features; this is performed via a rigidity constraint and by search and comparison against a model database using geometric inference. Kinematic redundancy is used to obtain a general solution for the joint velocities, represented by the generalized inverse of the Jacobian matrix. The problem of redundancy utilization can generally be formulated in the framework of tasks with an order of priority. We present a visual servoing scheme using the task function approach for 6-degree-of-freedom regulation of the robot's gripper. Redundancy theory allows the position and orientation of the end-effector of an industrial robot to be controlled with a cycle time of less than 80 ms.
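The redundancy resolution mentioned in the abstract is conventionally written with the Moore-Penrose pseudo-inverse of the task Jacobian. The sketch below is a generic resolved-rate, task-priority step in NumPy under that standard formulation, not the authors' controller; the gain and the secondary task are placeholders.

```python
import numpy as np

def servo_joint_velocity(J, error, secondary=None, gain=1.0):
    """Generic resolved-rate visual servoing step.

    J:         (m, n) task Jacobian mapping joint velocities to task velocities.
    error:     (m,)   task-space error (e.g. 6-DOF pose error of the gripper).
    secondary: optional (n,) joint velocity for a lower-priority task; it is
               projected into the null space of J so it cannot disturb the
               primary task.
    """
    J_pinv = np.linalg.pinv(J)                 # generalized (Moore-Penrose) inverse
    qdot = -gain * J_pinv @ error              # primary task: drive the error to zero
    if secondary is not None:
        null_proj = np.eye(J.shape[1]) - J_pinv @ J
        qdot += null_proj @ secondary          # exploit kinematic redundancy
    return qdot
```

With a 6-row task Jacobian and a 7-joint arm, the null-space projector is what lets the spare degree of freedom serve a secondary objective (for example, joint-limit avoidance) without disturbing the gripper regulation.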

Journal ArticleDOI
TL;DR: The results show that motion parallax is a useful depth cue when relative motion is extracted from different hemifields and suggests that the precision of interhemispheric comparison for binocular depth is not affected by the absence of the corpus callosum.

Journal ArticleDOI
TL;DR: In monocular viewing conditions, an activational imbalance between the cerebral hemispheres was assumed to develop, the direction of which depends on the side of the viewing eye, and heartbeat perception was more accurate when viewing with the left eye.
Abstract: In monocular viewing conditions, an activational imbalance between the cerebral hemispheres was assumed to develop, the direction of which depends on the side of the viewing eye. This assumption was based on the morphological differences between the nasal and the temporal hemiretinas and on physiological data. It was assumed that the hemisphere receiving visual information via the nasal optic fibers, that is, the hemisphere contralateral to the viewing eye, would be the more activated one. Because heartbeat perception is regarded as a predominantly right hemispheric function, it was predicted that during right hemispheric activation created by left monocular viewing, heartbeat discrimination performance would be better than during left hemispheric activation created by right monocular viewing. This hypothesis was tested on 30 male right-handed university students who performed a Whitehead-type heartbeat discrimination task while viewing only with the left or with the right eye. Heartbeat perception was more accurate when viewing with the left eye. Additionally, respiratory manipulation during heartbeat discrimination improved performance on this task.

Proceedings ArticleDOI
21 Jun 1994
TL;DR: Experiments on sequences of real images prove that this model-based approach to locating and tracking an object in a multi-camera system leads to better results than the monocular ones in two respects: better accuracy, as many different viewpoints are used, and better robustness, as the system is able to deal gracefully with the loss of some of the sensors.
Abstract: Our paper proposes a new model-based approach to locating and tracking an object in a multi-camera system; this method does not involve triangulation. If the calibration and the mutual geometry of the acquisition systems are known, it is possible to express the whole set of equations pertaining to each monocular system in a unique reference system. In this way it is possible to show that model-based methods are not restricted to monocular vision and that some techniques generally viewed as purely monocular can be readily extended and integrated into a multi-camera system. We show by experiments on sequences of real images that this approach leads to better results than the monocular ones in two respects: better accuracy, as many different viewpoints are used, and better robustness, as the system is able to deal gracefully with the loss of some of the sensors.
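The central idea, expressing every monocular system's equations in a single reference frame, amounts to transforming model points through each camera's known extrinsics before projecting them. The sketch below (a plain pinhole model with assumed variable names, not the paper's formulation) shows how the per-camera reprojection residuals can be stacked into one vector that a pose estimator would minimise.

```python
import numpy as np

def stacked_residuals(object_pose, model_points, cameras, observations):
    """Reprojection residuals of one object seen by several calibrated cameras.

    object_pose:  4x4 pose of the object in the common (world) frame.
    model_points: (P, 3) object model points expressed in the object frame.
    cameras:      list of (K, world_to_cam) pairs: 3x3 intrinsics, 4x4 extrinsics.
    observations: list of (P, 2) measured image points, one array per camera.
    """
    pts_h = np.c_[model_points, np.ones(len(model_points))]   # homogeneous coords
    world_pts = (object_pose @ pts_h.T).T                      # object -> world
    residuals = []
    for (K, world_to_cam), uv in zip(cameras, observations):
        cam_pts = (world_to_cam @ world_pts.T).T[:, :3]        # world -> camera
        proj = (K @ cam_pts.T).T
        proj = proj[:, :2] / proj[:, 2:3]                      # perspective divide
        residuals.append((proj - uv).ravel())
    return np.concatenate(residuals)   # minimise this vector over object_pose
```

Losing a camera simply removes its block of residuals, which is one way the graceful degradation described above can come about.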

Journal ArticleDOI
TL;DR: A new, occlusion-free, monocular three-dimensional vision system is described that can lead to the development of three-dimensional real-image sensing devices for manufacturing, medical, and defense-related applications; if combined with existing technology, it has high potential for the development of three-dimensional television.
Abstract: This paper describes a new, occlusion-free, monocular three-dimensional vision system. A matrix of light beams (lasers, fiber optics, etc.), substantially parallel to the optic axis of the lens of a video camera, is projected onto a scene. The corresponding coordinates of the perspective image generated on the video-camera sensor, the focal length of the camera lens, and the lateral position of the projected beams of light are used to determine the "perspective depth" z* of the three-dimensional real image in the space between the lens and the image plane. Direct inverse perspective transformations are used to reconstruct the three-dimensional real-world scene. This system can lead to the development of three-dimensional real-image sensing devices for manufacturing, medical, and defense-related applications. If combined with existing technology, it has high potential for the development of three-dimensional television.
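Under a plain pinhole model, a beam parallel to the optic axis at lateral offset X0 images at x = f * X0 / Z, so depth follows directly from where the spot lands on the sensor. The tiny sketch below illustrates that triangulation only; the paper's "perspective depth" z* formulation and its calibration details may differ.

```python
def depth_from_beam(f_mm, beam_offset_mm, image_x_mm):
    """Pinhole-style depth recovery for a light beam parallel to the optic axis.

    A beam at lateral offset X0 from the axis illuminates a surface point at
    depth Z; that point images at x = f * X0 / Z, hence Z = f * X0 / x.
    All quantities are in consistent units (here millimetres on the sensor plane).
    """
    if image_x_mm == 0:
        raise ValueError("spot on the optic axis: depth is unconstrained")
    return f_mm * beam_offset_mm / image_x_mm

# example: f = 25 mm, beam 40 mm off-axis, spot imaged 2 mm off-centre -> Z = 500 mm
print(depth_from_beam(25.0, 40.0, 2.0))
```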

Patent
02 Sep 1994
TL;DR: In this article, a robot position measuring system using monocular vision is presented which is high in processing speed and easily controlled; the absolute position of the robot is obtained from the measured distance and direction.
Abstract: PURPOSE: To provide a robot position measuring system which operates with monocular vision, is high in processing speed, and is easily controlled. CONSTITUTION: Marks V1, V2, V3, and V4 are disposed at plural specific positions in and around the robot's movement area. The distance to an object is obtained on the basis of the size of the image of a mark V photographed by a camera 20. The absolute position of the robot is thus obtained from the distance and the direction.
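The distance-from-mark-size step reduces, under a pinhole assumption, to the usual proportionality between image size and range. The sketch below is a generic illustration with hypothetical parameter values, not the patented implementation.

```python
def distance_from_mark(focal_length_mm, mark_width_mm, image_width_px, pixel_pitch_mm):
    """Estimate the range to a mark of known physical size from its image width.

    Pinhole model: image_width = focal_length * mark_width / distance,
    hence distance = focal_length * mark_width / image_width.
    """
    image_width_mm = image_width_px * pixel_pitch_mm
    return focal_length_mm * mark_width_mm / image_width_mm

# example: 16 mm lens, 200 mm mark imaged across 80 px of 0.01 mm pixels -> 4000 mm
print(distance_from_mark(16.0, 200.0, 80, 0.01))
```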

Patent
10 Jan 1994
TL;DR: In this article, a wide field monocular is provided which is essentially a wide-field virtual telescope, and two of these monoculars can be placed side-by-side to achieve a binocular with an extended monocular field.
Abstract: A wide field monocular is provided which is essentially a wide field virtual telescope. Two of these monoculars can be placed side-by-side to achieve a binocular with an extended monocular field. Also for such binoculars with objective lenses larger than the interocular separation, lunes can be cut from the objective lenses so that they can be placed adjacent each other, with the eyes essentially aligned with the optic axis of each telescope. This provides a remarkable wide field binocular with stereo vision and depth perception.

Proceedings ArticleDOI
21 Apr 1994
TL;DR: The performance of such a setup in terms of range and heading angle errors is studied, and the optimum values of a controllable subspace, consisting of the object height and depression angle, are found by employing the mini-max estimator for the worst case performance and the minimum mean-squared estimator for the average performance.
Abstract: When a vision sensor is used to track an object in an outdoor realistic navigational environment, it is subjected to unexpected movements or vibrations of the mounting platform. In this paper, the performance of such a setup in terms of range and heading angle errors is studied. The noise generated by the navigational environment is represented in two ways: sensor rotation angle errors and image coordinates errors. A consistent detectable region is obtained such that the tracked object is always seen by the sensor. Based on this region, a reliable region consisting of no singularity points is defined so that the range error does not become infinity. The optimum values of a controllable subspace, consisting of the object height and depression angle, with respect to an uncontrollable subspace, consisting of object coordinates and sensor movement errors, are then found by employing the mini-max estimator for the worst case performance and the minimum mean-squared estimator for the average performance.
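The selection of the controllable parameters can be illustrated by a brute-force grid search over height and depression angle. The error model below is a placeholder callback, not the paper's sensor and geometry model; only the mini-max versus mean-error selection logic is the point of the sketch.

```python
import itertools
import numpy as np

def choose_setup(heights, angles, error_fn, noise_samples):
    """Pick (height, depression angle) by worst-case and by average range error.

    error_fn(height, angle, noise) -> scalar range error for one noise draw;
    it stands in for whatever sensor/geometry model is of interest.
    """
    best_minimax, best_mean = None, None
    for h, a in itertools.product(heights, angles):
        errs = np.array([abs(error_fn(h, a, n)) for n in noise_samples])
        if best_minimax is None or errs.max() < best_minimax[0]:
            best_minimax = (errs.max(), h, a)    # mini-max: best worst case
        if best_mean is None or errs.mean() < best_mean[0]:
            best_mean = (errs.mean(), h, a)      # minimum mean error: best average
    return best_minimax, best_mean
```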

Journal ArticleDOI
TL;DR: A monocular approach for generating depth maps of an object using computer vision, together with a matching algorithm based on 3-D features of the object, is developed and verified by an experiment recognizing a set of objects.

01 Mar 1994
TL;DR: A robotic system that accepts autonomously generated motion and control commands and provides images from the monocular vision of a camera mounted on a robot's end effector is described, eliminating the need for traditional guidance targets that must be predetermined and specifically identified.
Abstract: A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
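Recovering the target's pose relative to the camera from one image plus its known solid geometry is, in modern terms, a perspective-n-point problem. The sketch below uses OpenCV's solvePnP as a stand-in for whatever the reported system did internally; the point correspondences are assumed to be given.

```python
import numpy as np
import cv2

def object_pose_from_single_image(model_points, image_points, camera_matrix):
    """Recover an object's pose relative to the camera from a single image.

    model_points:  (N, 3) known 3-D points on the object (its solid geometry).
    image_points:  (N, 2) their detected pixel locations in the image.
    camera_matrix: 3x3 intrinsic matrix; lens distortion is neglected here.
    """
    ok, rvec, tvec = cv2.solvePnP(
        model_points.astype(np.float64),
        image_points.astype(np.float64),
        camera_matrix.astype(np.float64),
        None,                              # no distortion coefficients
    )
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)             # rotation of the object in the camera frame
    return R, tvec                         # with R and t, other viewpoints can be rendered
```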

Proceedings ArticleDOI
27 Jul 1994
TL;DR: A method for position estimation based on monocular vision and triangulation from self-motion is described, which is computationally simple due to a manipulation of the transformation mathematics, and does not require the placement of specially designed signs or landmarks in the environment.
Abstract: Reliable navigation of mobile robots demands that the machine be able to extract location information from its environment. Numerous researchers have used machine vision to follow road edges, to read specially designed signposts or to triangulate position from the sighting of multiple landmarks. This paper describes a method for position estimation based on monocular vision and triangulation from self-motion. The method is computationally simple due to a manipulation of the transformation mathematics, and does not require the placement of specially designed signs or landmarks in the environment. "Natural" landmarks such as corners, light fixtures, or stationary equipment can be used for triangulation. If the position of the landmark is known in some global coordinate frame, the method can be used for fixing the robot's absolute position. The technique can also be used for circumnavigation of obstacles or for other types of "relative" navigation such as docking or close-quarters maneuvering. Travel on non-flat surfaces is accounted for by compensation for vehicle pitch and roll. The method requires input from pitch and roll clinometers, pan/tilt position sensors, odometric sensors and the calculation of one position parameter from independent sensors such as ultrasonic range sensors, optical range sensors, a fluxgate compass or another reliable source independent of the vision system. Keywords: mobile robot, navigation, landmark, vision, triangulation.
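The geometric core, triangulating a landmark from two bearings taken at two self-reported robot poses, fits in a few lines. The planar sketch below ignores the pitch and roll compensation discussed in the paper and uses assumed variable names; with a landmark of known global position, the same intersection can be solved the other way round to fix the robot.

```python
import math

def triangulate_landmark(pose1, pose2, bearing1, bearing2):
    """Intersect two bearing rays to a landmark observed from two robot poses.

    pose*:    (x, y, heading) of the robot in world coordinates (radians).
    bearing*: bearing to the landmark relative to the robot heading (radians).
    Returns the landmark (x, y), or None if the rays are (nearly) parallel.
    """
    x1, y1, h1 = pose1
    x2, y2, h2 = pose2
    a1, a2 = h1 + bearing1, h2 + bearing2        # absolute ray directions
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None                              # baseline too short: rays nearly parallel
    # solve p1 + t1*d1 = p2 + t2*d2 for t1 by Cramer's rule
    t1 = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t1 * d1[0], y1 + t1 * d1[1])

# example: the same corner sighted before and after a 1 m forward move -> about (2.0, 2.0)
print(triangulate_landmark((0, 0, 0), (1, 0, 0), math.radians(45), math.radians(63.43)))
```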