Towards Meaningful Robot Gesture
References
McNeill, D.: Hand and Mind: What Gestures Reveal about Thought
Reiter, E., Dale, R.: Building Natural Language Generation Systems
Cassell, J. et al. (eds.): Embodied Conversational Agents
Cassell, J., Vilhjálmsson, H., Bickmore, T.: BEAT: the Behavior Expression Animation Toolkit
Frequently Asked Questions (13)
Q2. What have the authors contributed in "Towards meaningful robot gesture" ?
The authors present an approach enabling the humanoid robot ASIMO to flexibly produce and synchronize speech and co-verbal gestures at run-time, without being limited to a predefined repertoire of motor actions. Since this research challenge has already been tackled in various ways within the domain of virtual conversational agents, the authors build upon the experience gained from developing the speech and gesture production model used for their virtual human Max. As the underlying action generation architecture, they explain how ACE draws upon a tight, bi-directional coupling of ASIMO's perceptuo-motor system with multi-modal scheduling via both efferent control signals and afferent sensory feedback.
Q3. What are the future works in "Towards meaningful robot gesture" ?
To tackle this challenge, the cross-modal adaptation mechanisms applied in ACE will be extended to allow for a finer mutual adaptation between robot gesture and speech.
Q4. What are the primary candidates for extending the communicative capabilities of social robots?
Forming an integral part of human communication, hand and arm gestures are primary candidates for extending the communicative capabilities of social robots.
Q5. Why is the outer form of a gesture distorted?
Due to deviations from the original postures and the respective joint angles, the outer form of a gesture might be distorted such that its original meaning is altered.
Q6. What is the main advantage of using ACE as an underlying action generation architecture?
By re-employing ACE as an underlying action generation architecture, the authors draw upon a tight coupling of ASIMO’s perceptuo-motor system with multi-modal scheduling.
Q7. What is the main advantage of the ACE?
Being one of the most sophisticated multi-modal schedulers, the Articulated Communicator Engine (ACE) allows for on-the-spot production of flexibly planned behavior representations.
Q8. What is the main advantage of this approach to robot control in combination with ACE?
The main advantage of this approach to robot control in combination with ACE is the formulation of the trajectory in terms of effector targets in task space, which are then used to derive a joint-space description using the standard WBM controller for ASIMO.
Q9. What is the simplest way to describe a given chunk?
A given chunk consists of an intonation phrase and a co-expressive gesture phrase, concertedly conveying a prominent concept [10].
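The chunk described above can be sketched as a small data structure; all class and field names here are illustrative assumptions, not the paper's actual representation:

```python
# Minimal sketch of a "chunk": an intonation phrase paired with a
# co-expressive gesture phrase, both conveying one prominent concept.
# Names (Chunk, GesturePhrase, affiliate) are hypothetical.
from dataclasses import dataclass

@dataclass
class GesturePhrase:
    stroke: str      # the meaning-bearing movement phase
    affiliate: str   # the concept the stroke expresses

@dataclass
class Chunk:
    intonation_phrase: str        # the spoken part of the chunk
    gesture_phrase: GesturePhrase # the co-expressive gesture

chunk = Chunk(
    intonation_phrase="the box is THIS big",
    gesture_phrase=GesturePhrase(stroke="two-handed span", affiliate="size"),
)
```

The pairing makes the synchronization unit explicit: scheduling operates on whole chunks rather than on isolated words or movements.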
Q10. What are the common types of gestures used in interactional conversations?
Gestures produced during interactional conversations are generated on-line and mainly consist of human-like arm movements and pointing gestures performed with eyes, head, and arms.
Q11. What is the inverse kinematics of the arm?
The inverse kinematics (IK) of the arm is solved at the velocity level using the ASIMO whole body motion (WBM) controller framework [5].
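Velocity-level IK is commonly solved via the Jacobian pseudoinverse; the sketch below shows that standard technique on a toy two-link planar arm as an illustrative stand-in, not the actual WBM controller:

```python
# Velocity-level IK sketch: joint velocities q_dot = pinv(J) @ x_dot
# map a desired task-space (effector) velocity into joint space.
# Toy 2-link planar arm with unit link lengths; not the WBM controller.
import numpy as np

def velocity_ik(jacobian: np.ndarray, x_dot: np.ndarray) -> np.ndarray:
    """Map a task-space velocity to joint velocities via the pseudoinverse."""
    return np.linalg.pinv(jacobian) @ x_dot

def jacobian_2link(q: np.ndarray) -> np.ndarray:
    """Analytic Jacobian of a planar 2-link arm (link lengths 1, 1)."""
    q1, q2 = q
    return np.array([
        [-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
        [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)],
    ])

q = np.array([0.3, 0.6])          # current joint angles
x_dot = np.array([0.1, 0.0])      # desired effector velocity
q_dot = velocity_ik(jacobian_2link(q), x_dot)
```

Formulating targets in task space and resolving them to joint velocities at each step is what lets the controller absorb on-line timing changes from the scheduler.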
Q12. Where is the research project “Conceptual Motorics” based?
The research project “Conceptual Motorics” is based at the Research Institute for Cognition and Robotics, Bielefeld University, Germany.
Q13. What is the main advantage of the proposed robot control architecture?
This has been realized in a bi-directional robot control architecture which uses both efferent actuator control signals and afferent sensory feedback.
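The bi-directional loop above can be sketched as a single closed-loop control step: an efferent command is sent toward the target while afferent feedback corrects the ongoing motion. All function names and the gain value are illustrative assumptions:

```python
# Hedged sketch of a bi-directional control loop: afferent sensory
# feedback is read back and used to shape the next efferent command.
# read_sensor / send_command / gain are hypothetical, not the paper's API.
def control_step(target, read_sensor, send_command, gain=0.5):
    """One closed-loop step: command toward target, corrected by feedback."""
    actual = read_sensor()                    # afferent: sensed position
    command = actual + gain * (target - actual)
    send_command(command)                     # efferent: actuator signal
    return command

# Toy simulated joint that tracks the commanded position exactly.
state = {"q": 0.0}
def read_sensor(): return state["q"]
def send_command(q): state["q"] = q

for _ in range(20):
    control_step(1.0, read_sensor, send_command)
# state["q"] converges toward the target 1.0
```

Because each command depends on the latest afferent reading, the same loop also gives the scheduler up-to-date state for cross-modal adaptation between gesture and speech.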