Proceedings ArticleDOI

Controlling of servomotors according to pitch, yaw and roll motions of accelerometer

Giriraj M1, Pudi Anvesh1
07 Apr 2016-pp 886-889
TL;DR: This project develops a hand-movement monitoring system that feeds sensor data into a computer and renders a 3D visualization of the accelerometer according to its pitch, yaw, and roll motions.
Abstract: A gesture-controlled robot is a robot that moves according to the operator's hand movements. An accelerometer mounted on the hand senses its movement in a particular direction. This project develops a hand-movement monitoring system that feeds the data into a computer and provides a 3D visualization of the accelerometer according to its pitch, yaw, and roll motions. The ADXL335 is used as the accelerometer, and a servomotor (Futaba S3003) is controlled according to the accelerometer's motions. The 3D visualization is obtained by interfacing an Arduino (ATmega328P) with the Processing software, and MATLAB is used to produce graphical results of the accelerometer motions.
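The tilt-to-servo mapping the abstract describes can be sketched on the host side in Python (a minimal illustration, not the paper's Arduino code; the axis convention and the -90°..+90° to 0°..180° scaling are assumptions):

```python
import math

def pitch_roll_deg(ax, ay, az):
    """Tilt angles (degrees) from accelerometer readings scaled to g.

    Uses a common tilt-sensing convention; yaw cannot be recovered from
    gravity alone, so a static accelerometer yields only pitch and roll.
    """
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    return pitch, roll

def servo_angle(tilt_deg):
    """Map a tilt of -90..+90 degrees onto a 0..180 degree servo command."""
    return max(0.0, min(180.0, tilt_deg + 90.0))

# Sensor lying flat: gravity entirely on the z axis, so both tilts are 0
pitch, roll = pitch_roll_deg(0.0, 0.0, 1.0)
```

In a full pipeline, raw ADC values would arrive over serial from the Arduino, be converted to g, and the resulting angles written back to the servo.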
References
Book ChapterDOI
01 Jun 1994
TL;DR: A Finite State Machine models four qualitatively distinct phases of a generic gesture; seven gestures representing Left, Right, Up, Down, Grab, Rotate, and Stop actions are recognized.
Abstract: This paper presents a method for recognizing human-hand gestures using a model-based approach. A Finite State Machine is used to model four qualitatively distinct phases of a generic gesture. Fingertips are tracked in multiple frames to compute motion trajectories, which are then used for finding the start and stop position of the gesture. Gestures are represented as a list of vectors and are then matched to stored gesture vector models using table lookup based on vector displacements. Results are presented showing recognition of seven gestures using images sampled at 4 Hz on a SPARC-1 without any special hardware. The seven gestures are representatives for actions of Left, Right, Up, Down, Grab, Rotate, and Stop.

151 citations
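The four-phase, finite-state view of a gesture in the work above can be sketched as a table-driven state machine (the phase and event names below are illustrative assumptions, not the paper's own definitions):

```python
class GestureFSM:
    """Table-driven FSM over four gesture phases.

    Phase names (REST, START, MOTION, STOP) and event names are
    hypothetical; unrecognized events leave the machine in its state.
    """
    TRANSITIONS = {
        ("REST", "hand_raised"): "START",
        ("START", "fingertips_moving"): "MOTION",
        ("MOTION", "fingertips_still"): "STOP",
        ("STOP", "hand_lowered"): "REST",
    }

    def __init__(self):
        self.state = "REST"

    def step(self, event):
        # Fall back to the current state when no transition matches
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

A recognizer would emit these events from the fingertip trajectories and read a completed gesture off each STOP-to-REST transition.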

Proceedings ArticleDOI
14 Mar 2010
TL;DR: A gesture recognition system based primarily on a single 3-axis accelerometer achieves almost perfect user-dependent recognition and a user-independent accuracy competitive with statistical methods that require significantly more training samples and with other accelerometer-based gesture recognition systems in the literature.
Abstract: We propose a gesture recognition system based primarily on a single 3-axis accelerometer. The system employs dynamic time warping and affinity propagation algorithms for training and exploits the sparse nature of the gesture sequence by applying compressive sensing for recognition. A dictionary of 18 gestures is defined and a database of over 3,700 repetitions is created from 7 users. To the best of our knowledge, this dictionary of gestures is the largest in published studies of acceleration-based gesture recognition. The proposed system achieves almost perfect user-dependent recognition, and a user-independent recognition accuracy that is competitive both with statistical methods requiring significantly more training samples and with other accelerometer-based gesture recognition systems in the literature.

139 citations
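Dynamic time warping, the matching step named in the abstract above, aligns two acceleration sequences that may differ in length and speed. A textbook O(nm) version for 1-D sequences (a real system would run it per axis on 3-axis data) looks like:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    using absolute difference as the local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j]: best alignment cost of a[:i] against b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Because warping absorbs timing differences, a time-stretched copy of a gesture still scores a distance of zero against the original.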

Journal ArticleDOI
TL;DR: This tutorial presents techniques for using the Wiimote in 3DUIs and discusses the device's strengths and how to compensate for its limitations, with implications for future spatially convenient devices.
Abstract: The Nintendo Wii Remote (Wiimote) has served as an input device in 3D user interfaces (3DUIs) but differs from the general-purpose input hardware typically found in research labs and commercial applications. Despite this, no one has systematically evaluated the device in terms of what it offers 3DUI designers. Experience with the Wiimote indicates that it's an imperfect harbinger of a new class of spatially convenient devices, classified in terms of spatial data, functionality, and commodity design. This tutorial presents techniques for using the Wiimote in 3DUIs. It discusses the device's strengths and how to compensate for its limitations, with implications for future spatially convenient devices.

134 citations

Proceedings ArticleDOI
10 Nov 2009
TL;DR: An accelerometer-based system controls an industrial robot using two low-cost, small 3-axis wireless accelerometers; an Artificial Neural Network trained with a back-propagation algorithm recognizes arm gestures and postures, which are then used to control the robot.
Abstract: Most industrial robots are still programmed through the typical teaching process, using the robot teach pendant. This paper proposes an accelerometer-based system to control an industrial robot using two low-cost, small 3-axis wireless accelerometers. These accelerometers are attached to the human arms, capturing their behavior (gestures and postures). An Artificial Neural Network (ANN) trained with a back-propagation algorithm was used to recognize arm gestures and postures, which are then used as input to the control of the robot. The aim is for the robot to start moving almost at the same time as the user starts to perform a gesture or posture (low response time). The results show that the system allows an industrial robot to be controlled in an intuitive way. However, the achieved recognition rate of gestures and postures (92%) should be improved in the future while keeping the compromise with the system response time (160 milliseconds). Finally, the results of some tests performed with an industrial robot are presented and discussed.

84 citations
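The back-propagation training used above can be illustrated at its smallest scale with a single sigmoid unit separating two posture feature vectors (the features, labels, and learning rate here are made up for the sketch; the paper's network is larger):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, epochs=500, lr=1.0):
    """Gradient descent on cross-entropy loss for one sigmoid unit --
    the one-neuron special case of the back-propagation update."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, t in zip(samples, labels):
            y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            g = y - t  # dLoss/dz for sigmoid + cross-entropy
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

# Hypothetical features: (mean |ax|, mean |az|) for two arm postures
samples = [(0.9, 0.1), (0.1, 0.9)]
labels = [1.0, 0.0]  # 1 = "arm raised", 0 = "arm lowered"
w, b = train(samples, labels)
```

In a multi-layer network the same gradient is propagated backwards through each layer's weights, which is all that "back-propagation" adds to this picture.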

Journal ArticleDOI
TL;DR: A research platform for a perceptually guided robot, which also serves as a demonstrator for a coming generation of service robots, and shows how the calibration of the robot's image-to-world coordinate transform can be learned from experience, thus making detailed and unstable calibration of this important subsystem superfluous.
Abstract: We have designed a research platform for a perceptually guided robot, which also serves as a demonstrator for a coming generation of service robots. In order to operate semi-autonomously, these require a capacity for learning about their environment and tasks, and will have to interact directly with their human operators. Thus, they must be supplied with skills in the fields of human-computer interaction, vision, and manipulation. GripSee is able to autonomously grasp and manipulate objects on a table in front of it. The choice of object, the grip to be used, and the desired final position are indicated by an operator using hand gestures. Grasping is performed similarly to human behavior: the object is first fixated, then its form, size, orientation, and position are determined, a grip is planned, and finally the object is grasped, moved to a new position, and released. As a final example of useful autonomous behavior we show how the calibration of the robot's image-to-world coordinate transform can be learned from experience, thus making detailed and unstable calibration of this important subsystem superfluous. The integration concepts developed at our institute have led to a flexible library of robot skills that can be easily recombined for a variety of useful behaviors.

65 citations