Journal ArticleDOI

MEMS Accelerometer Based Nonspecific-User Hand Gesture Recognition

TL;DR: A recognition algorithm based on sign sequence and template matching, as presented in this paper, can be used for nonspecific-user hand gesture recognition without a time-consuming user-training process prior to recognition.
Abstract: This paper presents three gesture recognition models capable of recognizing seven hand gestures, i.e., up, down, left, right, tick, circle, and cross, based on the input signals from MEMS three-axis accelerometers. The accelerations of a hand in motion along three perpendicular directions are detected by three accelerometers and transmitted to a PC via the Bluetooth wireless protocol. An automatic gesture segmentation algorithm is developed to identify individual gestures in a sequence. To compress the data and to minimize the influence of variations resulting from gestures made by different users, a basic feature based on the sign sequence of the gesture acceleration is extracted. This method reduces the hundreds of data values of a single gesture to a gesture code of 8 numbers. Finally, the gesture is recognized by comparing the gesture code with the stored templates. Results based on 72 experiments, each containing a sequence of hand gestures (628 gestures in total), show that the best of the three models achieves an overall recognition accuracy of 95.6%, with per-gesture accuracy ranging from 91% to 100%. We conclude that a recognition algorithm based on sign sequence and template matching, as presented in this paper, can be used for nonspecific-user hand gesture recognition without a time-consuming user-training process prior to recognition.
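To make the sign-sequence idea concrete, here is a minimal Python sketch of one plausible reading of the approach: the gesture trace is split into 8 equal time slices, the sign pattern of the mean acceleration in each slice is packed into one small integer, and the resulting 8-number code is matched against stored templates by counting mismatched positions. The segment count, sign-to-integer mapping, and distance rule are assumptions made here for illustration; the paper's actual encoding and matching rules may differ.

```python
import numpy as np

def gesture_code(acc, n_segments=8):
    """Compress an (N, 3) acceleration trace into a short gesture code.

    Hypothetical encoding: split the trace into n_segments equal time
    slices and, for each slice, map the sign pattern (-1/0/+1) of the
    mean x/y/z acceleration to a single integer in 0..26 (base 3).
    """
    acc = np.asarray(acc, dtype=float)
    code = []
    for seg in np.array_split(acc, n_segments, axis=0):
        s = np.sign(seg.mean(axis=0)).astype(int)  # per-axis sign of this slice
        code.append(int((s[0] + 1) * 9 + (s[1] + 1) * 3 + (s[2] + 1)))
    return code

def recognize(code, templates):
    """Return the template label whose code differs from `code`
    in the fewest positions (simple template matching)."""
    mismatches = lambda a, b: sum(x != y for x, y in zip(a, b))
    return min(templates, key=lambda label: mismatches(code, templates[label]))

# Usage with made-up template codes and a stand-in gesture trace.
templates = {"up": [22, 22, 25, 25, 25, 22, 22, 13],
             "down": [4, 4, 1, 1, 1, 4, 4, 13]}
trace = np.random.randn(240, 3)  # placeholder for one segmented gesture
print(recognize(gesture_code(trace), templates))
```

Because each gesture collapses to just 8 small integers, the matching step stays cheap and is largely insensitive to amplitude differences between users, which is the property the paper exploits for nonspecific-user recognition.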
Citations
Journal ArticleDOI
TL;DR: An algorithmic framework is proposed to process acceleration and surface electromyographic (SEMG) signals for gesture recognition, which includes a novel segmentation scheme, a score-based sensor fusion scheme, and two new features.
Abstract: An algorithmic framework is proposed to process acceleration and surface electromyographic (SEMG) signals for gesture recognition. It includes a novel segmentation scheme, a score-based sensor fusion scheme, and two new features. A Bayes linear classifier and an improved dynamic time-warping algorithm are utilized in the framework. In addition, a prototype system, including a wearable gesture sensing device (embedded with a three-axis accelerometer and four SEMG sensors) and an application program with the proposed algorithmic framework for a mobile phone, is developed to realize gesture-based real-time interaction. With the device worn on the forearm, the user is able to manipulate a mobile phone using 19 predefined gestures or even personalized ones. Results suggest that the developed prototype responded to each gesture instruction within 300 ms on the mobile phone, with an average accuracy of 95.0% in user-dependent testing and 89.6% in user-independent testing. Such performance during the interaction testing, along with positive user experience questionnaire feedback, demonstrates the utility of the framework.
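The abstract does not spell out the score-based fusion scheme; the sketch below only illustrates the generic idea of score-level fusion, combining per-class scores from the accelerometer and SEMG pipelines with min-max normalization and fixed weights, all of which are assumptions made here for illustration rather than the paper's actual method.

```python
import numpy as np

def fuse_scores(acc_scores, semg_scores, w_acc=0.5, w_semg=0.5):
    """Score-level fusion sketch: rescale each sensor's per-class scores
    to [0, 1], combine them with fixed weights, and return the index of
    the winning class. Weights and normalization are illustrative only."""
    def rescale(s):
        s = np.asarray(s, dtype=float)
        span = s.max() - s.min()
        return (s - s.min()) / span if span > 0 else np.zeros_like(s)

    fused = w_acc * rescale(acc_scores) + w_semg * rescale(semg_scores)
    return int(np.argmax(fused))

# Example: per-class scores for four candidate gestures from each sensor.
print(fuse_scores([0.2, 3.5, 0.4, 0.1], [0.3, 0.9, 0.2, 0.1]))  # -> 1 (both sensors favour class 1)
```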

249 citations


Additional excerpts

  • ...Three new ACC features and a score-based sensor fusion scheme are designed to improve accuracy....


Journal ArticleDOI
TL;DR: In this study, existing approaches to using smartphones for ITS applications are analysed and compared, with particular focus on vehicle-based monitoring systems such as driving behaviour and style recognition, accident detection, and road condition monitoring.
Abstract: Road crashes are a growing concern of governments and are rising to become one of the leading preventable causes of death, especially in developing countries. The ubiquitous presence of smartphones provides a new platform on which to implement sensor networks and driver-assistance systems, as well as other intelligent transportation system (ITS) applications. In this study, existing approaches to using smartphones for ITS applications are analysed and compared. Particular focus is placed on vehicle-based monitoring systems, such as driving behaviour and style recognition, accident detection and road condition monitoring systems. Further opportunities for the use of smartphones in ITS are highlighted, and remaining challenges in this emerging field of research are identified.

148 citations


Cites background from "MEMS Accelerometer Based Nonspecifi..."

  • ...An example of such an application is gesture recognition, which is used to answer a call when bringing the phone to one’s ear, or paging through a document by the wave of a hand [1, 2]....


Proceedings ArticleDOI
05 Oct 2015
TL;DR: WiG is proposed, a device-free gesture recognition system based solely on Commercial Off-The-Shelf (COTS) WiFi infrastructures and devices that stands out for its systematic simplicity, extremely low cost and high practicability.
Abstract: Recently, gesture recognition has attracted intense academic and industrial interest due to its various applications in daily life, such as home automation and mobile gaming. Existing approaches to gesture recognition, mainly vision-based, sensor-based, and RF-based, all have limitations that hinder their practical use in some scenarios. For example, vision-based approaches fail to work well in poor lighting conditions, and sensor-based ones require users to wear devices. To address these limitations, we propose WiG, a device-free gesture recognition system based solely on Commercial Off-The-Shelf (COTS) WiFi infrastructure and devices. Compared with existing Radio Frequency (RF)-based systems, WiG stands out for its simplicity, extremely low cost, and high practicability. We implemented WiG in an indoor environment and conducted experiments to evaluate its performance in two typical scenarios. The results demonstrate that WiG achieves an average recognition accuracy of 92% in the line-of-sight scenario and 88% in the non-line-of-sight scenario.

136 citations


Cites background from "MEMS Accelerometer Based Nonspecifi..."

  • ...Some researchers even use accelerometer sensors, surface electromyography (SEMG) sensors [40], [37] and magnetic sensors [17] to recognize different gestures....


Journal ArticleDOI
TL;DR: A technique capable of continuous recognition of hand gestures using the three-axis accelerometer and gyroscope sensors in a smart device, together with a prototype system developed for continuous hand-gesture-based HMI.
Abstract: Recent advances in smart devices have made them a compelling alternative for the design of human–machine interaction (HMI), because they are equipped with an accelerometer, a gyroscope, and an advanced operating system. This paper presents a technique capable of continuous recognition of hand gestures using the three-axis accelerometer and gyroscope sensors in a smart device. To reduce the influence of hand instability while making a gesture and to compress the data, a gesture coding algorithm is developed. An automatic gesture spotting algorithm is developed to detect the start and end points of meaningful gesture segments. Finally, a gesture is recognized by comparing its gesture code against a gesture database using a dynamic time warping algorithm. In addition, a prototype system is developed to demonstrate continuous hand-gesture-based HMI. With the smartphone, the user is able to perform the predefined gestures and control smart appliances using the Samsung AllShare protocol.
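Neither the gesture coding scheme nor the exact dynamic time warping variant is detailed in the abstract; as a rough illustration of the matching step, the sketch below applies the textbook DTW recurrence to compare an incoming gesture code against a small, entirely hypothetical code database.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping distance between two 1-D code
    sequences, with |a - b| as the local cost. This is the textbook
    recurrence, not necessarily the paper's exact variant."""
    a, b = np.asarray(seq_a, dtype=float), np.asarray(seq_b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(code, database):
    """Pick the database gesture with the smallest DTW distance."""
    return min(database, key=lambda label: dtw_distance(code, database[label]))

# Example: codes of different lengths still compare cleanly under DTW.
database = {"circle": [1, 2, 3, 4, 3, 2, 1], "tick": [5, 5, 6, 7, 7]}
print(classify([1, 2, 2, 3, 4, 3, 1], database))  # -> "circle" (closer match)
```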

126 citations


Cites methods from "MEMS Accelerometer Based Nonspecifi..."

  • ...Our technique is different to existing work [3], [7]–[9], [16], [18], [19] in that it does not require a camera or dedicated sensors....


  • ...[3] presented gesture recognition models based on the input signals from 3-axes accelerometer sensors....


Journal ArticleDOI
TL;DR: This work examined the performance of hand gesture classification using Force Myography (FMG) and surface Electromyography (sEMG): 3 sets of 48 hand gestures were performed with a prototyped FMG band and an array of commercial sEMG sensors worn on the wrist and forearm simultaneously.

122 citations

References
Book
Christopher M. Bishop1
17 Aug 2006
TL;DR: Probability Distributions, Linear Models for Regression, Linear Models for Classification, Neural Networks, Graphical Models, Mixture Models and EM, Sampling Methods, Continuous Latent Variables, and Sequential Data are studied.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

22,840 citations

Journal ArticleDOI
TL;DR: A book review of Pattern Recognition and Machine Learning (Bishop, 2006) published in Technometrics.
Abstract: (2007). Pattern Recognition and Machine Learning. Technometrics: Vol. 49, No. 3, pp. 366-366.

18,802 citations


"MEMS Accelerometer Based Nonspecifi..." refers methods in this paper

  • ...For sequential data such as measurement of time series and acoustic features at successive time frames used for speech recognition, HMM (Hidden Markov Model) is one of the most important models [16]....


Book
01 Jan 1973
TL;DR: A book on the principles of interactive computer graphics.
Abstract: Principles of Interactive Computer Graphics.

1,243 citations

Proceedings ArticleDOI
18 Feb 2008
TL;DR: The design and evaluation of a sensor-based gesture recognition system that allows users to train arbitrary gestures, which can then be recalled for interacting with applications such as photo browsing on a home TV.
Abstract: In many applications today, user interaction is moving away from mice and pens and is becoming pervasive and much more physical and tangible. New emerging interaction technologies allow developing and experimenting with new interaction methods on the long way to providing intuitive human–computer interaction. In this paper, we aim at recognizing gestures to interact with an application and present the design and evaluation of our sensor-based gesture recognition system. As the input device we employ the Wii controller (Wiimote), which recently gained much attention worldwide. We use the Wiimote's acceleration sensor, independent of the gaming console, for gesture recognition. The system allows users to train arbitrary gestures, which can then be recalled for interacting with systems such as photo browsing on a home TV. The developed library exploits Wii sensor data and employs a hidden Markov model for training and recognizing user-chosen gestures. Our evaluation shows that we can already recognize gestures with a small number of training samples. In addition to the gesture recognition, we also present our experiences with the Wii controller and the implementation of the gesture recognition. The system forms the basis for our ongoing work on multimodal, intuitive media browsing and is available to other researchers in the field.
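For background on how an HMM-based recognizer scores a candidate gesture, the sketch below implements the standard forward algorithm for a discrete HMM over quantized acceleration symbols and returns the sequence likelihood; recognition then amounts to picking the gesture model with the highest likelihood. The state count, symbol alphabet, and probabilities are toy values, not those used by the Wiimote library.

```python
import numpy as np

def forward_likelihood(obs, pi, A, B):
    """Likelihood of a discrete observation sequence under an HMM
    (forward algorithm). obs: list of symbol indices; pi: initial state
    probabilities (S,); A: state transition matrix (S, S); B: emission
    matrix (S, K). For long sequences a scaled or log-space version
    should be used to avoid numerical underflow."""
    alpha = pi * B[:, obs[0]]               # initialise with the first symbol
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]  # propagate states, absorb symbol t
    return float(alpha.sum())

# Toy 2-state model over 3 quantized acceleration symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.3, 0.6]])
print(forward_likelihood([0, 1, 2, 2], pi, A, B))
```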

512 citations


"MEMS Accelerometer Based Nonspecifi..." refers methods in this paper

  • ...For sequential data such as measurement of time series and acoustic features at successive time frames used for speech recognition, HMM (Hidden Markov Model) is one of the most important models [16]....


Journal ArticleDOI
TL;DR: To illustrate the potential of multilayer neural networks for adaptive interfaces, a VPL Data-Glove connected to a DECtalk speech synthesizer via five neural networks was used to implement a hand-gesture to speech system, demonstrating that neural networks can be used to develop the complex mappings required in a high bandwidth interface that adapts to the individual user.
Abstract: To illustrate the potential of multilayer neural networks for adaptive interfaces, a VPL Data-Glove connected to a DECtalk speech synthesizer via five neural networks was used to implement a hand-gesture-to-speech system. Using minor variations of the standard backpropagation learning procedure, the complex mapping of hand movements to speech is learned using data obtained from a single 'speaker' in a simple training phase. With a 203 gesture-to-word vocabulary, the wrong word is produced less than 1% of the time, and no word is produced about 5% of the time. Adaptive control of the speaking rate and word stress is also available. The training times and final performance speed are improved by using small, separate networks for each naturally defined subtask. The system demonstrates that neural networks can be used to develop the complex mappings required in a high-bandwidth interface that adapts to the individual user.
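As a loose, modern stand-in for that kind of learned gesture-to-label mapping, the sketch below trains a small multilayer perceptron on synthetic glove-style feature vectors with scikit-learn; the feature dimension, class count, and data are purely illustrative and unrelated to the original Data-Glove setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data: 200 glove readings with 18 joint-angle-like
# features, each labelled with one of 5 word classes (all made up).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 18))
y = rng.integers(0, 5, size=200)

# One small backpropagation-trained network; the original system instead
# used several small task-specific networks to cut training time.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:3]))
```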

394 citations


"MEMS Accelerometer Based Nonspecifi..." refers methods in this paper

  • ...Existing gesture recognition approaches include template-matching [11], dictionary lookup [12], statistical matching [13], linguistic matching [14], and neural network [15]....
