Qualitative Vision-Based Path Following
Citations
Mobile robot vision navigation & localization using Gist and Saliency
Simple yet stable bearing-only navigation
Image features for visual teach-and-repeat navigation in changing environments
A Taxonomy of Vision Systems for Ground Mobile Robots
References
Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography
A tutorial on visual servo control
Vision for mobile robot navigation: a survey
Experiences with an interactive museum tour-guide robot
Real-time motion planning for agile autonomous vehicles
Frequently Asked Questions
Q2. What future work is mentioned in the paper "Qualitative vision-based path following"?
Future work should be aimed at incorporating higher-level scene knowledge to enable obstacle avoidance and terrain characterization, as well as connecting multiple teaching paths in a graph-based framework to enable autonomous navigation between arbitrary points.
Q3. Why did the algorithm recover from the loss of features caused by dynamic objects?
Because the milestone images change frequently, the algorithm quickly recovered from the feature loss caused by occluding dynamic objects.
Q4. How did the robot navigate the ramp?
The additional maneuvering room enabled the driving and turning speeds to be increased to 750 mm/s (the maximum driving speed of the robot) and 6 degrees per second, respectively.
Q5. What maximum errors did the earlier algorithm achieve?
While the earlier algorithm works well when the ground is paved and the scenery is rich in texture, the improved algorithm is more robust, achieving maximum errors of only 0.23 m, 1.20 m, and 1.76 m, respectively, compared with 0.45 m, 1.20 m, and 5.68 m for the earlier algorithm.
Q6. What does the algorithm not require?
The algorithm does not make use of the traditional concepts of Jacobians, homographies, fundamental matrices, or the focus of expansion, and it does not require any camera calibration, including lens calibration.
Q7. How does the robot combine visual and odometry measurements?
At any given time, the desired heading of the robot is given by

θ_d = η · (1/N) Σ_{i=1}^{N} θ_d^(i) + (1 − η) θ_o,    (1)

where N is the total number of feature points, θ_d^(i) is the desired heading suggested by the i-th feature, θ_o is the desired heading obtained by sampling a third-order polynomial that is fit to the initial and destination odometry measurements of the segment in the teaching phase, and the factor 0 ≤ η ≤ 1 determines the relative importance of visual measurements versus odometry measurements.
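For concreteness, here is a minimal Python sketch of Eq. (1). The function name, argument names, and the fallback to odometry when no features are tracked are illustrative assumptions, not details from the paper:

```python
def desired_heading(theta_vis, theta_o, eta=0.5):
    """Blend per-feature visual headings with the odometry-derived heading (Eq. 1).

    theta_vis -- list of per-feature desired headings theta_d^(i), in radians
    theta_o   -- heading sampled from the polynomial fit to the teaching odometry
    eta       -- weight in [0, 1]: 1 trusts vision only, 0 trusts odometry only
    """
    n = len(theta_vis)
    if n == 0:
        return theta_o  # assumed fallback: no tracked features, use odometry alone
    return eta * sum(theta_vis) / n + (1.0 - eta) * theta_o
```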
Q8. What is the definition of the funnel lane?
Definition 2: The funnel lane of a fixed landmark λ, a robot location D, and a relative angle α is the set of locations F_{λ,D,α} ⊂ F_{λ,D} such that θ_C − θ_D = α for each C ∈ F_{λ,D,α}. Multiple features yield multiple funnel lanes, the intersection of which is the set of locations for which both constraints are satisfied for all the features.
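As a hedged illustration of this intersection, the sketch below applies the two qualitative constraints suggested by the observation quoted in Q12 (the feature's image coordinate stays on the same side of the principal point as its milestone coordinate, and has not yet overshot it) to every tracked feature; the exact constraint form and all names are assumptions for illustration:

```python
def in_funnel_lane(u_current, u_milestone):
    """Qualitative funnel-lane test for one feature (assumed form):
    u_current lies between the principal point (0) and the milestone
    coordinate u_milestone, on the same side as u_milestone."""
    same_side = (u_current >= 0.0) == (u_milestone >= 0.0)
    not_overshot = abs(u_current) <= abs(u_milestone)
    return same_side and not_overshot

def in_funnel_intersection(feature_pairs):
    """The intersection of funnel lanes: the constraints must hold for
    every (u_current, u_milestone) pair, one per tracked feature."""
    return all(in_funnel_lane(uc, ud) for uc, ud in feature_pairs)
```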
Q9. How was the replay driving speed set?
This ability is achieved by setting the replay driving speed to the recorded teaching driving speed, which is decreased during a turn.
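A toy sketch of this policy, reusing the straight-line speed quoted in Q4; the log format, the turn-segment speed, and the commented-out drive call are all hypothetical:

```python
# Toy replay loop: each teaching sample stores the commanded speeds, so
# replaying them reproduces the slow-down during turns with no extra logic.
teaching_log = [
    (750, 0.0),  # straight segment: maximum driving speed (mm/s), no turning
    (400, 6.0),  # turn: the teaching speed was decreased, so replay slows too
    (750, 0.0),  # straight again
]
for speed_mm_s, turn_deg_s in teaching_log:
    # drive(speed_mm_s, turn_deg_s)  # hypothetical robot command
    print(f"replay: drive {speed_mm_s} mm/s, turn {turn_deg_s} deg/s")
```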
Q10. What was the first experiment performed with the robot?
In the first, the robot navigated a slanted ramp in a 40 m run, thus verifying that the algorithm does not require a flat ground plane.
Q11. How was the robot able to follow the path?
The second experiment showed the robot following a path over rough terrain, in which roll and tilt angles of up to 5 degrees were encountered.
Q12. What observation motivates the theorem?
If the robot moves toward the destination in a straight line with the same heading direction as that of the destination (i.e., θ_C = θ_D), then the point u_C will move away from the principal point toward u_D, reaching u_D when the robot reaches D. This observation is made more precise in the following theorem.
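A quick numeric check of this observation, assuming a simple pinhole camera purely for illustration (the algorithm itself requires no such model or calibration; all values below are made up):

```python
# As the remaining distance to D shrinks, the feature's image coordinate u_C
# grows monotonically away from the principal point (0) toward u_D.
f = 500.0       # assumed focal length (pixels)
x = 2.0         # landmark's lateral offset from the straight-line path (m)
z_dest = 5.0    # landmark depth when the robot stands at the destination D (m)

u_D = f * x / z_dest
for remaining in (20.0, 10.0, 5.0, 1.0, 0.0):  # distance still to drive (m)
    u_C = f * x / (z_dest + remaining)
    print(f"{remaining:5.1f} m before D: u_C = {u_C:6.1f} px  (u_D = {u_D:.1f} px)")
```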