Other affiliations: Nokia
Bio: Sanna Kallio is an academic researcher from VTT Technical Research Centre of Finland. The author has contributed to research on topics including gesture recognition. The author has an h-index of 8 and has co-authored 13 publications receiving 734 citations. Previous affiliations of Sanna Kallio include Nokia.
31 Jul 2006
TL;DR: Gesture commands were found to be natural, especially for commands with a spatial association in design environment control, and can augment other interaction modalities.
Abstract: Accelerometer-based gesture control is studied as a supplementary or an alternative interaction modality. Gesture commands freely trainable by the user can be used for controlling external devices with a handheld wireless sensor unit. Two user studies are presented. The first study concerns finding gestures for controlling a design environment (Smart Design Studio), TV, VCR, and lighting. The results indicate that different people usually prefer different gestures for the same task, and hence it should be possible to personalise them. The second user study concerns evaluating the usefulness of the gesture modality compared to other interaction modalities for controlling a design environment. The other modalities were speech, RFID-based physical tangible objects, a laser-tracked pen, and a PDA stylus. The results suggest that gestures are a natural modality for certain tasks, and can augment other modalities. Gesture commands were found to be natural, especially for commands with a spatial association in design environment control.
27 Oct 2004
TL;DR: In this article, a procedure based on adding noise-distorted signal duplicates to the training set is applied and shown to increase recognition accuracy while decreasing user effort in training.
Abstract: Accelerometer-based gesture control is proposed as a complementary interaction modality for handheld devices. Gesture commands, either predetermined or freely trainable by the user, can also be used for controlling functions in other devices. To support versatility of gesture commands in various types of personal device applications, gestures should be customisable and easy and quick to train. In this paper we experiment with a procedure for training and recognizing customised accelerometer-based gestures with a minimum amount of user effort in training. Discrete Hidden Markov Models (HMMs) are applied. Recognition results are presented for gesture commands controlling an external device, a DVD player. A procedure based on adding noise-distorted signal duplicates to the training set is applied and shown to increase recognition accuracy while decreasing user effort in training. For a set of eight gestures, each trained with two original gestures and two Gaussian noise-distorted duplicates, the average recognition accuracy was 97%; with two original gestures and four noise-distorted duplicates, the average recognition accuracy was 98%, cross-validated from a total data set of 240 gestures. Use of the procedure facilitates quick and effortless customisation in accelerometer-based interaction.
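The paper's augmentation idea, enlarging a tiny training set with Gaussian-noise-distorted copies of each recorded gesture, can be sketched as follows. This is a minimal illustration: the function name, noise level, and API are assumptions, not taken from the paper.

```python
import numpy as np

def augment_with_noise(gesture, n_duplicates=2, noise_std=0.05, seed=0):
    """Return the original gesture plus Gaussian-noise-distorted duplicates.

    gesture: (T, 3) array of 3-axis acceleration samples. The noise level
    is an illustrative assumption; the paper only specifies adding
    noise-distorted signal duplicates to the training set.
    """
    rng = np.random.default_rng(seed)
    duplicates = [gesture + rng.normal(0.0, noise_std, gesture.shape)
                  for _ in range(n_duplicates)]
    return [gesture] + duplicates

# Two original training gestures, each expanded with two duplicates,
# give six training sequences, as in the paper's "two + two" setting.
training_set = []
for g in (np.zeros((50, 3)), np.ones((50, 3))):
    training_set.extend(augment_with_noise(g, n_duplicates=2))
print(len(training_set))  # 6
```

Each duplicate keeps the gesture's overall shape while perturbing individual samples, which is what lets the HMM generalise from only two real recordings.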
20 Jan 2005
TL;DR: In this paper, a gesture control system functioning especially in mobile terminals is presented: trained free-form gestures serve as commands for any application configured to receive them through a general-purpose interface, so one and the same application functions across different models of mobile terminal.
Abstract: A control system based on the use of gestures and functioning especially in mobile terminals. The gesture control system is provided with a general-purpose interface (320) with its commands for the applications (310) to be controlled. The processing software (330) for the gesture signals includes a training program (331), which stores the free-form gestures trained by the user in a gesture library, and a recognizing program (332), which matches a gesture made by the user against the stored gestures and chooses the most similar one. Gestures can hence be used as commands for controlling any application configured or programmed to receive the command. One and the same application functions in different models of mobile terminal without adaptation, and a given mobile terminal can run all applications that use the specified interface commands. The application (310) can be e.g. a game or an activity included in the basic implementation of a mobile terminal.
10 Nov 2003
TL;DR: Experimental results show great potential for recognising both simple and more complex gestures with good accuracy; the design of online gesture recognition for mobile devices sets requirements for data processing.
Abstract: This paper introduces an accelerometer-based online gesture recognition system. Recognition of gestures can be utilised as part of human-computer interaction for mobile devices, e.g. cell phones, PDAs and remote controllers. Gestures are captured with a small wireless sensor box that produces a three-dimensional acceleration signal. The acceleration signal is preprocessed, vector quantised and finally classified using Hidden Markov Models. The design of online gesture recognition for mobile devices sets requirements for data processing; thus, the system uses a small codebook and simple preprocessing methods. The recognition accuracy of the system is tested with gestures of four degrees of complexity. Experimental results show great potential for recognising both simple and more complex gestures with good accuracy.
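The pipeline (preprocess, vector quantise, classify with a discrete HMM) hinges on the small codebook. A minimal sketch of the quantisation stage, using k-means as an assumed codebook-training method with illustrative parameters (the paper does not specify its codebook algorithm):

```python
import numpy as np

def build_codebook(samples, k=8, iters=20, seed=0):
    """Small k-means codebook over 3-D acceleration vectors.

    A small k keeps lookup cheap on mobile hardware, matching the
    paper's small-codebook design; k and iters here are assumptions.
    """
    rng = np.random.default_rng(seed)
    centroids = samples[rng.choice(len(samples), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((samples[:, None] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):  # skip empty clusters
                centroids[j] = samples[labels == j].mean(axis=0)
    return centroids

def quantise(signal, codebook):
    """Replace each acceleration vector with its nearest codebook index,
    producing the discrete observation sequence a discrete HMM consumes."""
    return np.argmin(((signal[:, None] - codebook) ** 2).sum(-1), axis=1)
```

After quantisation, each gesture is just a short sequence of integers, which is what makes the discrete-HMM classification cheap enough for phones and remote controllers.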
TL;DR: The results suggest that HMM is feasible for practical user-independent gesture control applications in mobile low-resource embedded environments, with continuous HMM recommended as the preferred method due to its better suitability for a continuous-valued signal and its better recognition accuracy.
Abstract: Accelerometer-based gesture recognition facilitates a complementary interaction modality for controlling mobile devices and home appliances. Using gestures for the task of home appliance control requires use of the same device and gestures by different persons, i.e. user-independent gesture recognition. The practical application in small embedded low-resource devices also requires high computational performance. The user-independent gesture recognition accuracy was evaluated with a set of eight gestures and seven users, with a total of 1120 gestures in the dataset. A twenty-state continuous HMM yielded an average of 96.9% user-independent recognition accuracy, cross-validated by leaving each user in turn out of the training set. Continuous and discrete five-state HMM computational performances were compared with a reference test in a PC environment, indicating that discrete HMM is 20% faster. Computational performance of the discrete five-state HMM was evaluated in an embedded hardware environment with a 104 MHz ARM-9 processor and Symbian OS. The average recognition time per gesture, calculated from 1120 gesture repetitions, was 8.3 ms. With this result, the computational performance difference between the compared methods is considered insignificant in terms of practical application. Continuous HMM is hence recommended as the preferred method due to its better suitability for a continuous-valued signal and better recognition accuracy. The results suggest that, according to both evaluation criteria, HMM is feasible for practical user-independent gesture control applications in mobile low-resource embedded environments.
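The leave-one-user-out scheme behind the 96.9% figure is easy to state in code; this generic sketch is not tied to the paper's implementation:

```python
def leave_one_user_out(users):
    """Yield (training_users, held_out_user) splits: each user's gestures
    are in turn excluded from training and used only for testing, which
    is what makes the reported accuracy user-independent."""
    for i, held_out in enumerate(users):
        yield users[:i] + users[i + 1:], held_out

# With seven users this produces seven train/test splits; the reported
# accuracy is the average over all held-out users.
splits = list(leave_one_user_out(["u1", "u2", "u3", "u4", "u5", "u6", "u7"]))
print(len(splits))  # 7
```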
TL;DR: The field of AR is described, including a brief definition and development history, the enabling technologies and their characteristics, and some known limitations regarding human factors in the use of AR systems that developers will need to overcome.
Abstract: We are on the verge of ubiquitously adopting Augmented Reality (AR) technologies to enhance our perception and help us see, hear, and feel our environments in new and enriched ways. AR will support us in fields such as education, maintenance, design and reconnaissance, to name but a few. This paper describes the field of AR, including a brief definition and development history, the enabling technologies and their characteristics. It surveys the state of the art by reviewing some recent applications of AR technology as well as some known limitations regarding human factors in the use of AR systems that developers will need to overcome.
09 Mar 2009
TL;DR: This work evaluates uWave using a large gesture library with over 4000 samples collected from eight users over an extended period of time, for a gesture vocabulary of eight gesture patterns identified by Nokia research, and shows that uWave achieves 98.6% accuracy, competitive with statistical methods that require significantly more training samples.
Abstract: The proliferation of accelerometers on consumer electronics has brought an opportunity for interaction based on gestures or physical manipulation of the devices. We present uWave, an efficient recognition algorithm for such interaction using a single three-axis accelerometer. Unlike statistical methods, uWave requires a single training sample for each gesture pattern and allows users to employ personalized gestures and physical manipulations. We evaluate uWave using a large gesture library with over 4000 samples collected from eight users over an extended period of time, for a gesture vocabulary of eight gesture patterns identified by Nokia research. The evaluation shows that uWave achieves 98.6% accuracy, competitive with statistical methods that require significantly more training samples. Our evaluation data set is the largest and most extensive in published studies, to the best of our knowledge. We also present applications of uWave in gesture-based user authentication and interaction with three-dimensional mobile user interfaces using user-created gestures.
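uWave's core is template matching under Dynamic Time Warping with a single stored example per gesture. The sketch below shows that core as a plain DTW dynamic program plus nearest-template lookup; uWave's additional quantisation and windowing optimisations are not reproduced, and the function names are illustrative.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two (T, 3) acceleration sequences.

    A plain O(len(a) * len(b)) dynamic program over per-sample
    Euclidean costs, allowing match, insertion and deletion steps.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognise(gesture, templates):
    """Return the label of the stored template nearest under DTW,
    reflecting uWave's one-training-sample-per-gesture design."""
    return min(templates, key=lambda label: dtw_distance(gesture, templates[label]))
```

Because DTW tolerates variations in speed and timing, one recording per gesture pattern suffices, which is exactly what lets users define personalized gestures on the spot.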
18 Nov 2011
TL;DR: A novel system that uses Dynamic Time Warping (DTW) and smartphone-based sensor fusion to detect, recognize and record potentially-aggressive driving actions without external processing, utilizing the Euler representation of device attitude to aid in classification.
Abstract: Driving style can characteristically be divided into two categories: “typical” (non-aggressive) and aggressive. Understanding and recognizing driving events that fall into these categories can aid in vehicle safety systems. Potentially-aggressive driving behavior is currently a leading cause of traffic fatalities in the United States. More often than not, drivers are unaware that they commit potentially-aggressive actions daily. To increase awareness and promote driver safety, we propose a novel system that uses Dynamic Time Warping (DTW) and smartphone-based sensor fusion (accelerometer, gyroscope, magnetometer, GPS, video) to detect, recognize and record these actions without external processing. Our system differs from past driving pattern recognition research by fusing related inter-axial data from multiple sensors into a single classifier. It also utilizes the Euler representation of device attitude (also based on fused data) to aid in classification. All processing is done completely on the smartphone.
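The Euler-attitude input to the classifier can be illustrated with the textbook accelerometer tilt formulas. This is a quasi-static approximation, not the paper's fused attitude estimator, and yaw is omitted because it requires the magnetometer.

```python
import numpy as np

def roll_pitch_from_accel(ax, ay, az):
    """Approximate roll and pitch (radians) from one accelerometer
    reading, assuming the device is quasi-static so gravity dominates
    the measured acceleration. Yaw needs the magnetometer and is
    omitted in this sketch."""
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch
```

Expressing device attitude as Euler angles like these gives the classifier orientation features that are independent of where the phone sits in the vehicle, which is the motivation the paper gives for using the attitude representation.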
TL;DR: Body posture and finger pointing are a natural modality for human-machine interaction, but first the system must know what it's seeing.
Abstract: Body posture and finger pointing are a natural modality for human-machine interaction, but first the system must know what it's seeing.
01 Nov 2011
TL;DR: A framework for hand gesture recognition based on the information fusion of a three-axis accelerometer (ACC) and multichannel electromyography (EMG) sensors that facilitates intelligent and natural control in gesture-based interaction.
Abstract: This paper presents a framework for hand gesture recognition based on the information fusion of a three-axis accelerometer (ACC) and multichannel electromyography (EMG) sensors. In our framework, the start and end points of meaningful gesture segments are detected automatically by the intensity of the EMG signals. A decision tree and multistream hidden Markov models are utilized as decision-level fusion to get the final results. For sign language recognition (SLR), experimental results on the classification of 72 Chinese Sign Language (CSL) words demonstrate the complementary functionality of the ACC and EMG sensors and the effectiveness of our framework. Additionally, the recognition of 40 CSL sentences is implemented to evaluate our framework for continuous SLR. For gesture-based control, a real-time interactive system is built as a virtual Rubik's cube game using 18 kinds of hand gestures as control commands. While ten subjects play the game, the performance is also examined in user-specific and user-independent classification. Our proposed framework facilitates intelligent and natural control in gesture-based interaction.
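The automatic detection of gesture start and end points from EMG intensity can be sketched as below. The moving-average window and threshold are illustrative assumptions; the paper does not publish these parameters.

```python
import numpy as np

def segment_by_intensity(emg, threshold=1.0, win=5):
    """Find the start and end sample of a gesture in a multichannel EMG
    recording: smooth the rectified, channel-summed signal with a moving
    average and take the region where it exceeds the threshold.

    emg: (T, C) array of C-channel EMG samples. Returns (start, end)
    sample indices, or None when no segment exceeds the threshold.
    """
    intensity = np.convolve(np.abs(emg).sum(axis=1),
                            np.ones(win) / win, mode="same")
    active = np.flatnonzero(intensity > threshold)
    if active.size == 0:
        return None  # no gesture detected
    return active[0], active[-1]
```

Only the segment between the detected start and end points is then passed to the accelerometer/EMG fusion classifier, so the system never has to classify resting-state signal.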