Combining central and peripheral vision for reactive robot navigation
Summary
1 Introduction
- The term navigation refers to the capability of a system to move autonomously in its environment by using its own sensors.
- The more specific term visual navigation is used for the process of motion control based on the analysis of data gathered by visual sensors.
- This research was carried out during the first author's 1996-97 appointment to CVAP/NADA/KTH and was funded under the VIRGO research network (EC Contract No ERBFMRX-CT96-0049) of the TMR Programme.
2 The behavior, the environment and the body
- In this work the authors assume a robot that can translate in the forward direction and rotate (pan) around its vertical axis (Fig. 1 ).
- The term reactive expresses the absence of a particular destination that could be set using maps of the environment, landmark recognition, etc.
- The behavior is based on velocity information computed at the left and right eyes of the bee [5].
- Second, it turns out that control is facilitated when the cameras are slanted.
- If the robot is in the middle of the free space, it perceives equal distances from the walls only if its pose is parallel to the walls.
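The bee-inspired balancing idea behind this behavior can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the function name, gain, and sign convention are assumptions. It relies on the fact that, under pure forward translation, the magnitude of image motion seen by a side-looking camera is inversely proportional to the distance of the viewed wall, so equalizing left and right flow keeps the robot centered.

```python
def centering_command(left_flow_mag, right_flow_mag, gain=0.5):
    """Pan-velocity command that steers the robot away from the nearer wall.

    A larger peripheral flow magnitude indicates a nearer wall (image
    motion is inversely proportional to distance under forward
    translation). Positive output = turn right (assumed convention).
    """
    return gain * (left_flow_mag - right_flow_mag)


# Robot centered in the corridor: equal flow on both sides, no turn.
print(centering_command(1.0, 1.0))  # 0.0

# Left wall nearer (larger left flow): positive command, turn right.
print(centering_command(2.0, 1.0))  # 0.5
```

When the robot's pose is not parallel to the walls, the two flow magnitudes differ even at the corridor's center, which is exactly the coupling between position and heading that the paper's method has to handle.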
3 Method description
- In Eq. (13), F is a quantity that can be directly computed from functions of normal flow extracted from the central and peripheral cameras.
- F is equal to zero when the left and right cameras are at equal distances from world points, and takes positive or negative values depending on whether the right camera is farther from or closer to obstacles than the left camera.
- Therefore, the computable quantity F can be used to control the rotational velocity by keeping F as close to zero as possible, thus achieving the desired behavior.
- In their robot setup the authors avoid this special case.
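The control idea of this section can be illustrated with a short proportional-law sketch. This is a hedged illustration, not the paper's controller: the gain, the saturation limit, and the sign convention for F (here, F > 0 taken to mean the right camera is farther from obstacles) are all assumptions.

```python
def rotational_velocity(F, gain=1.0, omega_max=0.8):
    """Proportional steering law that drives the quantity F toward zero.

    With the assumed convention (F > 0 means more free space on the
    right), the robot turns toward the free space; the command is
    clipped to a stand-in actuator limit omega_max (rad/s).
    """
    omega = gain * F
    return max(-omega_max, min(omega_max, omega))
```

In a closed loop, F is recomputed from normal flow at every frame, so the robot continuously re-centers itself without any map, landmarks, or destination, which matches the reactive character of the behavior.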
4 Implementation issues - Experimental results
- In their present algorithm, the authors did not allow for individual motions of the cameras (eye movements).
- This is roughly tantamount to the assumption of approximately known FOE while exhibiting this behavior.
- The authors did not calibrate the head so that the central camera pointed exactly in the forward motion translation direction.
- In fact, the authors noticed that in many of the successful navigation experiments the optical axis of the central camera was 5-10 degrees off the x_s = 0 direction.
5 Conclusions
- The method does not make strict assumptions about the environment; it requires only very low-level information to be extracted from the images, produces robust robot behavior, and is computationally efficient.
- Results obtained from both simulations and a prototype on-line implementation demonstrate the effectiveness of the method.
- Peripheral vision appears to be very useful for achieving certain behaviors, and its combination with central vision is both natural and powerful.
Citations
144 citations
72 citations
Cites methods from "Combining central and peripheral vi..."
...The method presented in [14] is based on the method proposed in [1] and it is inspired by experiments on the navigational capabilities of bees [3, 11]....
[...]
67 citations
Cites background or methods from "Combining central and peripheral vi..."
...Stereovision systems have been extensively studied for various terrestrial and planetary applications, including object reconstruction in industrial robotics systems [14], [24]; indoor and outdoor mobile robotic navigation [2], [12], [60], [62], [74]; autonomous vehicles for highway and offroad navigation [6], [11], [55]; and planetary rovers [26], [38], [50], [59]....
[...]
...Stereo imaging is a classic technique that has been utilized commonly for 3-D reconstruction in industrial robotics manufacturing systems [14], [19], [24], [67], [69], as well as for obstacle avoidance and 3-D mapping in the deployment of autonomous terrestrial and space mobile platforms [2], [12], [26], [38], [50], [59], [60], [62], [74]....
[...]
59 citations
Cites methods from "Combining central and peripheral vi..."
...Since the computation of the quantity L_{1,2} involves only the horizontal component of the optical flow, normal flow in selected edge directions (e.g., vertical edges) could have been used instead of optical flow, as in [9]....
[...]
...This problem is corrected in [9], where a trinocular camera system is employed....
[...]
39 citations
References
1,796 citations
"Combining central and peripheral vi..." refers background in this paper
...The interest in purposive vision is largely motivated by the fact that all biological vision systems are highly active and purposive [2]....
[...]
1,537 citations
1,468 citations
"Combining central and peripheral vi..." refers background in this paper
...If the RCS moves with 3D translational velocity (U, V, W) and 3D rotational velocity (α, β, γ), the equations relating the 2D velocity (u, v) of an image point p(x, y) to the 3D motion parameters of the projected 3D point P(X, Y, Z) are [7]: u = [−Uf + xW + α·xY_s − β(xX_s + Z_s f) + γ·Y_s f]/Z + α·xy/f − β(x²/f + f) + γ·y...
[...]
[...]
1,425 citations
338 citations
"Combining central and peripheral vi..." refers background in this paper
...The purposiveness of visual processes enables the formulation and the solution of simpler problems that have a relative small number of possible solutions and can be treated in a qualitative manner [3]....
[...]