A clickable world: Behavior selection through pointing and context for mobile manipulation
Citations
Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments
EL-E: an assistive mobile manipulator that autonomously fetches objects from flat surfaces
Robots for humanity: using assistive robotics to empower people with disabilities
Real-time perception-guided motion planning for a personal robot
Benchmarking grasping and manipulation: Properties of the Objects of Daily Living
References
Behavior-Based Robotics
Contextual Priming for Object Detection
Non-Intrusive Gaze Tracking Using Artificial Neural Networks
Frequently Asked Questions (18)
Q2. What future work is mentioned in the paper "A clickable world: behavior selection through pointing and context for mobile manipulation"?
In future work, the authors hope to further demonstrate the applicability of their approach to robots that assist motor-impaired people with activities of daily living.
Q3. Why did the robot fail to detect the object?
When placing an object on table surfaces, the three failures were due, respectively, to the object bouncing off the table surface after being dropped from slightly above the table, a table scan failure, and the robot failing to acquire a second laser pointer detection, probably because the user illuminated the table at a very shallow angle of incidence from a long distance.
Q4. What is the possible use of the clickable world interface?
It is conceivable that a clickable world interface could make use of eye gaze, pointing with the hand, and other natural gestures.
Q5. What was the object retrieval command in the human button case?
In the human button case, the task was considered successful if El-E handed the object to the selected seated person, such that the person could retrieve the object from El-E without standing up.
Q6. What is the behavior of the robot when it has an object in its gripper?
If a face is detected then the robot drives towards the person, extends its hand so that the object can be grasped by the person, and releases the object after a preset amount of time.
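As a rough illustration of this hand-over sequence, the sketch below assumes hypothetical robot and face_detector interfaces (detect, drive_towards, extend_arm, open_gripper) and an assumed release delay; it is not the authors' implementation.

    import time

    RELEASE_DELAY_S = 3.0  # assumed value for the "preset amount of time"

    def hand_over_object(robot, face_detector):
        face = face_detector.detect()        # hypothetical: returns a detected face or None
        if face is None:
            return False                     # no person found; keep holding the object
        robot.drive_towards(face.position)   # approach the detected person
        robot.extend_arm()                   # present the object within the person's reach
        time.sleep(RELEASE_DELAY_S)          # wait the preset amount of time
        robot.open_gripper()                 # release the object
        return True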
Q7. How far away was the object from the target square?
For driving to a given location, the only failure resulted from El-E being 5 cm away from the border of the marked target square.
Q8. What is the effect of each module in the cascade?
Each module in this cascade results in a difficult-to-reverse change in the robot and world state as the robot attempts to collect more information and thereby disambiguate the command.
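One way to picture such a cascade is the sketch below; the stage interface and the context dictionary are illustrative structuring, not the authors' code.

    def run_cascade(robot, click_3d, stages):
        # Each stage may move the robot or scan the scene (a hard-to-reverse
        # change to robot and world state) while trying to disambiguate the click.
        context = {"click": click_3d}
        for stage in stages:
            behavior = stage(robot, context)   # stage adds evidence or resolves a behavior
            if behavior is not None:
                return behavior                # command disambiguated; run this behavior
        return None                            # no stage could interpret the click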
Q9. What is the current implementation of the detector?
The current implementation of the detector classifies a range rectangle as a vertical surface (wall) if it is not classified as a horizontal surface (table).
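A minimal sketch of this decision rule follows; the feature names and thresholds (surface normal direction and table height range) are assumptions for illustration, not values from the paper.

    def classify_range_rectangle(normal_z, height_m,
                                 horizontal_threshold=0.9,
                                 table_heights=(0.4, 1.0)):
        # Assumed features: a patch counts as horizontal if its surface normal
        # points mostly upward and it sits at a plausible table height.
        is_horizontal = abs(normal_z) > horizontal_threshold
        if is_horizontal and table_heights[0] <= height_m <= table_heights[1]:
            return "table"   # classified as a horizontal surface
        return "wall"        # anything not classified as a table is treated as vertical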
Q10. What is the role of the location in a robot's behavior?
If a user wishes to have the robot manipulate a fixed part of the environment, such as a door handle or a light switch, the location of the manipulable device is sufficient to command the robot to reach out and make contact with it.
Q11. What is the role of the clickable world interface?
With their clickable world interface, the authors posit that a location in the world can be a powerful cue for user-directed behavior selection.
Q12. What is the purpose of the experiment?
The experimenter would then instruct the subject to command the robot to perform one of its programmed actions: drive to a location, place the object on a table, or hand the object over to a person.
Q13. What is the power of this approach as a user interface?
As the authors demonstrate in this paper, the power of this approach as a user interface derives from the intuitive relationship between a location and a mobile manipulation behavior.
Q14. What is the relationship between the clickable world interface and the ACT-R system?
Classic work modeling the mechanisms and structure used for cognition, such as ACT-R [2], is related to their work in that these systems can also be used to determine the correct behavior to execute given some input.
Q15. What is the purpose of the behavior in which the robot reaches out and touches the selected location?
This behavior serves as a plausible precursor for future behaviors such as operating a light switch or opening a door, which would require that the robot reach out and make contact.
Q16. What was the reason for the failures in the touching task?
Two of the three failures in the touching task were due to the robot touching a location approximately 1 cm away from the border of the yellow square.
Q17. What are the actions that El-E uses to select objects?
When activated, these buttons (illustrated in Figures 4 and 5) trigger the following associated behaviors: Grasp On Floor, Follow Laserpoint, Grasp On Table, and Reachout And Touch.
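A hedged sketch of how a clicked 3D location and the robot's state could be dispatched to these four behaviors is shown below; the predicates (holding_object, on_floor, on_table, on_wall) and their ordering are illustrative assumptions, not the paper's actual decision logic.

    def select_behavior(robot, click):
        # Illustrative predicates and ordering only.
        if robot.holding_object():
            return None                  # object-in-gripper behaviors handled separately
        if click.on_floor():
            return "Grasp On Floor"      # grasp the illuminated object on the floor
        if click.on_table():
            return "Grasp On Table"      # scan the table top and grasp the object
        if click.on_wall():
            return "Reachout And Touch"  # reach out and touch the selected location
        return "Follow Laserpoint"       # otherwise follow the moving laser point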
Q18. How many trials did the robot successfully retrieve?
In the object fetching experiment, the robot successfully grasped and returned one object from the floor and a table in 10 out of 10 trials as shown in Figure 11.