Open Access Proceedings Article
Communication with a Semi-Autonomous Robot Combining Natural Language and Gesture
Dennis Perzanowski
pp. 55-59
TLDR
This interface utilizes robust natural language understanding and resolves some of the ambiguities in natural language by means of gesture input to create a natural language and gesture interface to a mobile robot.
Abstract
The Intelligent Multimodal Multimedia and the Adaptive Systems Groups at the Navy Center for Applied Research in Artificial Intelligence have been investigating a natural language and gesture interface to a mobile robot. Our interface utilizes robust natural language understanding and resolves some of the ambiguities in natural language by means of gesture input. The natural language and gestural information is integrated with knowledge of a particular environment, and appropriate robotic responses are …
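The core idea of the abstract, using a gesture to resolve an ambiguity in a spoken command, can be sketched as follows. This is an illustrative toy, not the paper's actual implementation; the function name, the dictionary-based command representation, and the list of deictic words are all assumptions made for the example.

```python
# Toy sketch of gesture-based disambiguation of a natural-language command.
# All names and data structures here are illustrative assumptions.

def resolve_command(utterance, gesture_target, known_locations):
    """Map a possibly ambiguous parsed command to a concrete goal.

    utterance: parsed command, e.g. {"action": "go", "destination": "there"}
    gesture_target: (x, y) point indicated by a pointing gesture, or None
    known_locations: mapping of named places to coordinates
    """
    dest = utterance.get("destination")
    if dest in known_locations:
        # Unambiguous: the utterance names a known place directly.
        return {"action": utterance["action"], "goal": known_locations[dest]}
    if dest in ("there", "over there", "that way") and gesture_target:
        # Deictic reference: the gesture supplies the missing referent.
        return {"action": utterance["action"], "goal": gesture_target}
    return None  # Still ambiguous: the robot would ask for clarification.

# "Go over there" plus a pointing gesture resolves to the pointed-at spot.
cmd = resolve_command({"action": "go", "destination": "there"},
                      gesture_target=(3.0, 1.5),
                      known_locations={"doorway": (0.0, 5.0)})
```

When neither the utterance nor a gesture pins down the referent, returning `None` leaves room for the clarification dialogue that a real interface would initiate.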
Citations
Ten Years of the AAAI Mobile Robot Competition and Exhibition: Looking Back and to the Future
Tucker Balch, Holly A. Yanco +1 more
TL;DR: A look back at the origins of the AAAI Mobile Robot Competition, how it evolved, and how the contest has served as an arena for important debates in the AI and robotics communities.
Journal Article
Advanced methods for displays and remote control of robots.
TL;DR: This research sheds light on the preferred display type and control method for operating robots from a distance, making it easier to cope with the challenges of operating such systems.
Book Chapter
Probabilistic Contextual Situation Analysis
Guy Ramel, Roland Siegwart +1 more
TL;DR: An approach using laser range data and Bayesian programming to recognize places (such as corridors, crossings, rooms, and doors) is developed, for both topological navigation in a typical indoor environment and object recognition.
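The Bayesian place recognition summarized above can be illustrated with a minimal naive Bayes classifier over features extracted from a laser scan. This is a sketch only: the binary features, priors, and likelihood values below are invented for illustration and are not taken from the chapter.

```python
# Illustrative naive Bayes place classifier over invented laser-scan features.
import math

# Assumed P(feature present | place) for two binary features:
# "narrow" (small passage width) and "opening" (door-sized gap detected).
LIKELIHOODS = {
    "corridor": {"narrow": 0.9, "opening": 0.2},
    "room":     {"narrow": 0.1, "opening": 0.3},
    "doorway":  {"narrow": 0.8, "opening": 0.9},
}
PRIORS = {"corridor": 0.4, "room": 0.4, "doorway": 0.2}

def classify(features):
    """Return the most probable place given observed binary features."""
    def log_posterior(place):
        lp = math.log(PRIORS[place])
        for f, present in features.items():
            p = LIKELIHOODS[place][f]
            lp += math.log(p if present else 1.0 - p)
        return lp
    return max(PRIORS, key=log_posterior)

place = classify({"narrow": True, "opening": True})
```

Working in log space avoids numerical underflow when many features are combined, which matters once a real system uses more than a couple of scan features.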
References
Proceedings Article
Recognizing and interpreting gestures on a mobile robot
TL;DR: This paper describes a real-time, three-dimensional gesture recognition system that resides on-board a mobile robot, capable of recognizing six distinct gestures made by an unadorned human in an unaltered environment, including the coarse model and the active vision approach.
Proceedings Article
Toward robust skin identification in video images
D. Saxe, Richard Foulds +1 more
TL;DR: An approach is described for identifying skin-colored regions of an image that is robust to variations in skin pigmentation in a single subject, differences in skin pigmentation across a population of potential users, and subject clothing and image background.
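As a point of comparison for the robust approach summarized above, skin identification is often introduced with a simple per-pixel color rule. The sketch below uses a commonly cited RGB heuristic; the specific thresholds are illustrative assumptions, and the cited paper's method is deliberately more robust than any fixed rule like this.

```python
# Baseline skin-color pixel classification with a fixed RGB rule.
# Threshold values are a common textbook heuristic, not the paper's method.

def is_skin_rgb(r, g, b):
    """Classify one pixel (0-255 channels) as skin-colored."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and  # enough color saturation
            abs(r - g) > 15 and r > g and r > b)  # red-dominant hue

def skin_mask(pixels):
    """Return a boolean mask for an iterable of (r, g, b) pixels."""
    return [is_skin_rgb(r, g, b) for (r, g, b) in pixels]

# A typical light-skin tone passes; a dark background pixel does not.
mask = skin_mask([(220, 170, 140), (30, 30, 30)])
```

Fixed thresholds like these fail under lighting changes and across the full range of skin pigmentation, which is exactly the limitation the paper's approach targets.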
Eucalyptus: Integrating Natural Language Input with a Graphical User Interface
TL;DR: Eucalyptus, a natural language (NL) interface that has been integrated with the graphical user interface of the KOALAS Test Planning Tool, a simulated Naval air combat command system, handles both imperative commands and database queries while still allowing full use of the original graphical interface.
Proceedings Article
Recognition approach to gesture language understanding
TL;DR: Experimental results are presented suggesting that two features of signing affect recognition accuracy: signing frequency, which can to a large extent be accounted for by training a network on samples of the respective frequency; and a coarticulation effect, which a network fails to identify.
Proceedings Article
Natural Language in Four Spatial Interfaces
TL;DR: This paper describes experiences building spoken language interfaces to four demonstration applications, all involving 2- or 3-D spatial displays or gestural interactions: an air combat command and control simulation, an immersive VR tactical scenario viewer, a map-based air strike simulation tool with a cartographic database, and a speech/gesture controller for mobile robots.