
Showing papers on "Sketch recognition published in 2010"


Proceedings ArticleDOI
01 Mar 2010
TL;DR: A system is designed that identifies specific hand gestures and uses them to convey information, achieving up to 89% correct results on a typical test set.
Abstract: Visual interpretation of gestures can be useful in accomplishing natural Human-Computer Interaction (HCI). In this paper we propose a method for recognizing hand gestures. We have designed a system which can identify specific hand gestures and use them to convey information. At any time, a user can exhibit his/her hand doing a specific gesture in front of a web camera linked to a computer. First, we captured the hand gestures of users and stored them on disk. We then read the captured videos one by one, converted them to binary images, and created a 3D Euclidean space of binary values. We used a supervised feed-forward neural network trained with the backpropagation algorithm to classify hand gestures into ten categories: hand pointing up, pointing down, pointing left, pointing right, and pointing front, plus the number of fingers the user was showing. We achieved up to 89% correct results on a typical test set.
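
As a rough illustration of this pipeline, the sketch below binarizes frames and trains a feed-forward network with backpropagation using scikit-learn. The frame size, class count, and stand-in data are assumptions for illustration, not the paper's actual setup.

```python
# Minimal sketch of the described pipeline: binarize frames, flatten them,
# and train a feed-forward network with backpropagation. Frame size, class
# labels, and data are hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

def binarize(frames, threshold=128):
    """Convert grayscale frames (N, H, W) to flat binary feature vectors."""
    return (frames > threshold).astype(np.float32).reshape(len(frames), -1)

# X_train: (N, 64, 64) grayscale frames; y_train: one of ten gesture labels.
X_train = np.random.randint(0, 256, size=(200, 64, 64))   # stand-in data
y_train = np.random.randint(0, 10, size=200)              # stand-in labels

clf = MLPClassifier(hidden_layer_sizes=(128,), solver="sgd", max_iter=500)
clf.fit(binarize(X_train), y_train)
print(clf.predict(binarize(X_train[:5])))
```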

132 citations


Proceedings ArticleDOI
10 Apr 2010
TL;DR: This paper describes the first system for a computer to provide direction and feedback for assisting a user to draw a human face as accurately as possible from an image.
Abstract: When asked to draw, many people are hesitant because they consider themselves unable to draw well. This paper describes the first system for a computer to provide direction and feedback for assisting a user to draw a human face as accurately as possible from an image. Face recognition is first used to model the features of a human face in an image, which the user wishes to replicate. Novel sketch recognition algorithms were developed to use the information provided by the face recognition to evaluate the hand-drawn face. Two design iterations and user studies led to nine design principles for providing such instruction, presenting reference media, giving corrective feedback, and receiving actions from the user. The result is a proof-of-concept application that can guide a person through step-by-step instruction and generated feedback toward producing his/her own sketch of a human face in a reference image.

114 citations


Proceedings ArticleDOI
23 Aug 2010
TL;DR: A new face sketch synthesis method is presented, inspired by recent advances in sparse signal representation and by the neuroscience finding that the human brain probably perceives images using high-level features which are sparse.
Abstract: Face sketch synthesis from a photo is challenging because the psychological mechanism of sketch generation is difficult to express precisely with rules. Current learning-based sketch synthesis methods concentrate on learning such rules by optimizing cost functions over low-level image features. In this paper, a new face sketch synthesis method is presented, inspired by recent advances in sparse signal representation and by evidence from neuroscience that the human brain probably perceives images using high-level features which are sparse. Sparse representations are desirable in sketch synthesis because sparseness adaptively selects the most relevant samples, which give the best representations of the input photo. We assume that a face photo patch and its corresponding sketch patch follow the same sparse representation. In feature extraction, we select succinct high-level features using the sparse coding technique, and in the sketch synthesis process each sketch patch is synthesized with respect to high-level features by solving an $l_1$-norm optimization. Experiments on the CUHK database show that our method resembles the true sketch fairly well.
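
The coupled sparse-representation step can be sketched as follows: solve an l1-regularized regression of the photo patch against a photo-patch dictionary, then reuse the same code with the paired sketch-patch dictionary. The dictionaries here are random stand-ins; in the paper they would come from training photo/sketch patch pairs.

```python
# Sketch of the coupled sparse-representation idea: the photo patch and its
# sketch patch are assumed to share the same sparse code.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
D_photo  = rng.standard_normal((64, 256))   # photo-patch dictionary (atoms as columns)
D_sketch = rng.standard_normal((64, 256))   # paired sketch-patch dictionary
x_photo  = rng.standard_normal(64)          # input photo patch (flattened)

# Solve min_a ||x - D_photo a||^2 + lambda ||a||_1  (the l1 step in the paper).
lasso = Lasso(alpha=0.1, max_iter=10000)
lasso.fit(D_photo, x_photo)
a = lasso.coef_

# Reuse the same sparse code with the sketch dictionary to synthesize the patch.
x_sketch = D_sketch @ a
```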

76 citations


Proceedings ArticleDOI
02 Apr 2010
TL;DR: A wearable input device is presented that enables the user to input text into a computer via character gestures, as if writing on an imaginary blackboard: a data glove equipped with three gyroscopes and three accelerometers to measure hand motion.
Abstract: In this work we present a wearable input device which enables the user to input text into a computer. The text is written into the air via character gestures, as if on an imaginary blackboard. To allow hands-free operation, we designed and implemented a data glove, equipped with three gyroscopes and three accelerometers to measure hand motion. Data is sent wirelessly to the computer via Bluetooth. We use HMMs for character recognition and concatenated character models for word recognition. As features we apply normalized raw sensor signals. Experiments on single character and word recognition are performed to evaluate the end-to-end system. On a character database with 10 writers, we achieve an average writer-dependent character recognition rate of 94.8% and a writer-independent character recognition rate of 81.9%. Based on a small vocabulary of 652 words, we achieve a single-writer word recognition rate of 97.5%, a performance we deem suitable for many applications. The final system is integrated into an online word recognition demonstration system to showcase its applicability.
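
A minimal HMM-per-character classifier in the spirit of this system might look like the following, using the hmmlearn library: one model per character, classification by maximum log-likelihood. The six-channel sensor streams, model sizes, and character set are illustrative assumptions.

```python
# HMM-per-character classification sketch. Sensor streams stand in for the
# normalized gyro/accelerometer signals (6 channels).
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_char_model(sequences, n_states=4):
    """Fit one HMM on all training sequences of a single character."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
    model.fit(X, lengths)
    return model

def classify(models, sequence):
    """Pick the character whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda ch: models[ch].score(sequence))

rng = np.random.default_rng(1)
train = {ch: [rng.standard_normal((50, 6)) for _ in range(5)] for ch in "ab"}
models = {ch: train_char_model(seqs) for ch, seqs in train.items()}
print(classify(models, rng.standard_normal((50, 6))))
```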

63 citations


BookDOI
19 Nov 2010
TL;DR: In this article, the application of graph theory to low-level processing of digital images is presented, along with graph-theoretic learning algorithms for high-level computer vision and pattern recognition applications, and detailed descriptions of several applications of graph-based methods to real-world pattern recognition tasks.
Abstract: This book presents novel graph-theoretic methods for complex computer vision and pattern recognition tasks. It presents the application of graph theory to low-level processing of digital images, presents graph-theoretic learning algorithms for high-level computer vision and pattern recognition applications, and provides detailed descriptions of several applications of graph-based methods to real-world pattern recognition tasks.

59 citations


BookDOI
16 Dec 2010
TL;DR: This unique text/reference bridges the two complementary research areas of user interaction, and graphical modeling and construction (sketch-based modeling), and discusses the state of the art of this rapidly evolving field of sketch-based interfaces and modeling.
Abstract: The field of sketch-based interfaces and modeling (SBIM) is concerned with developing methods and techniques to enable users to interact with a computer through sketching - a simple, yet highly expressive medium. SBIM blends concepts from computer graphics, human-computer interaction, artificial intelligence, and machine learning. Recent improvements in hardware, coupled with new machine learning techniques for more accurate recognition, and more robust depth inferencing techniques for sketch-based modeling, have resulted in an explosion of both sketch-based interfaces and pen-based computing devices. Presenting the first coherent, unified overview of SBIM, this unique text/reference bridges the two complementary research areas of user interaction (sketch-based interfaces), and graphical modeling and construction (sketch-based modeling). The book discusses the state of the art of this rapidly evolving field, with contributions from an international selection of experts. Also covered are sketch-based systems that allow the user to manipulate and edit existing data - from text, images, 3D shapes, and video - as opposed to modeling from scratch. Topics and features: reviews pen/stylus interfaces to graphical applications that avoid reliance on user interface modes; describes systems for diagrammatic sketch recognition, mathematical sketching, and sketch-based retrieval of vector drawings; examines pen-based user interfaces for engineering and educational applications; presents a set of techniques for sketch recognition that rely strictly on spatial information; introduces the Teddy system, a pioneering sketching interface for designing free-form 3D models; investigates a range of advanced sketch-based systems for modeling and designing 3D objects, including complex contours, clothing, and hair-styles; explores methods for modeling from just a single sketch or using only a few strokes. This text is an essential resource for researchers, practitioners and graduate students involved in human-factors and user interfaces, interactive computer graphics, and intelligent user interfaces and AI.

55 citations


Journal ArticleDOI
01 May 2010
TL;DR: The performances of humans and a principal component analysis (PCA)-based algorithm in recognizing face sketches are compared; the algorithm was superior with sketches of less distinctive features, while humans seemed more efficient in handling tonality (or pigmentation) cues of sketches that were not processed with advanced transformation functions.
Abstract: Because face sketches represent the original faces in a very concise yet recognizable form, they play an important role in criminal investigations, human visual perception, and face biometrics. In this paper, we compared the performances of humans and a principal component analysis (PCA)-based algorithm in recognizing face sketches. A total of 250 sketches of 50 subjects were involved. All of the sketches were drawn manually by five artists (each artist drew 50 sketches, one for each subject). The experiments were carried out by matching sketches in a probe set to photographs in a gallery set. This study resulted in the following findings: 1) A large interartist variation in terms of sketch recognition rate was observed; 2) fusion of the sketches drawn by different artists significantly improved the recognition accuracy of both humans and the algorithm; 3) human performance seems mildly correlated with that of the PCA algorithm; 4) humans performed better in recognizing the caricature-like sketches that show various degrees of geometrical distortion or deviation, given the particular data set used; 5) score-level fusion with the sum rule worked well in combining sketches, at least for a small number of artists; and 6) the algorithm was superior with the sketches of less distinctive features, while humans seemed more efficient in handling tonality (or pigmentation) cues of the sketches that were not processed with advanced transformation functions.
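
The PCA-based matching protocol can be approximated in a few lines with scikit-learn: fit an eigenspace on the gallery photographs, project the probe sketch into the same space, and match by nearest neighbor. The image sizes and data arrays below are hypothetical stand-ins.

```python
# Minimal sketch of probe-to-gallery matching in a PCA eigenspace.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
gallery = rng.standard_normal((50, 64 * 64))   # 50 subjects' photos, flattened
probe   = rng.standard_normal((1, 64 * 64))    # one hand-drawn sketch, flattened

pca = PCA(n_components=20).fit(gallery)
g = pca.transform(gallery)                     # gallery in eigenspace
p = pca.transform(probe)                       # probe in the same space

match = np.argmin(np.linalg.norm(g - p, axis=1))
print("best-matching gallery subject:", match)
```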

52 citations


Journal ArticleDOI
TL;DR: A real-time sign recognition architecture including both gesture and movement recognition is presented, which deals especially with problems such as sensor noise and simplification of the training phase.

42 citations


Book ChapterDOI
Tai-hoon Kim
23 Jun 2010
TL;DR: The objective of this review paper is to summarize and compare some of the well-known methods used in the various stages of an ANN-based pattern recognition system, and to identify research topics and applications at the forefront of this exciting and challenging field.
Abstract: Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, artificial neural network techniques have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in the various stages of a pattern recognition system using ANN, and to identify research topics and applications which are at the forefront of this exciting and challenging field.

39 citations


Proceedings ArticleDOI
01 Nov 2010
TL;DR: A hand gesture detection and recognition system for Ethiopian Sign Language (ESL) is proposed, in which Gabor filters together with Principal Component Analysis (PCA) extract features, and an Artificial Neural Network (ANN) recognizes the ESL signs from the extracted features and translates them into Amharic voice.
Abstract: Pattern recognition is a very challenging multidisciplinary research area attracting researchers and practitioners. Gesture recognition is a specialized pattern recognition task with the goal of interpreting human gestures via mathematical models. One use of gesture recognition is sign language recognition, the basic communication method among deaf people. Since there is a lack of proficient sign language teachers at schools for the deaf, the teaching and learning process remains affected. A system is therefore required to overcome communication barriers facing the deaf community. So, in this paper, a hand gesture detection and recognition system for Ethiopian Sign Language (ESL) is proposed. Gabor Filters (GF) together with Principal Component Analysis (PCA) have been used for extracting features from the digital images of hand gestures, while an Artificial Neural Network (ANN) is used for recognizing the ESL signs from the extracted features and translating them into Amharic voice. The experimental results show that the system produces a recognition rate of 98.53%.
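
A rough sketch of the described feature pipeline follows, using scikit-image's Gabor filter, PCA, and an MLP. The dataset, image size, filter bank, and network shape are illustrative assumptions rather than the paper's configuration.

```python
# Gabor-bank features -> PCA -> ANN classifier, as a minimal illustration.
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def gabor_features(img, frequency=0.2, n_orientations=4):
    """Concatenate Gabor magnitude responses over a small bank of orientations."""
    feats = []
    for k in range(n_orientations):
        real, imag = gabor(img, frequency=frequency, theta=k * np.pi / n_orientations)
        feats.append(np.hypot(real, imag).ravel())
    return np.concatenate(feats)

rng = np.random.default_rng(3)
images = rng.random((40, 32, 32))              # stand-in hand-gesture images
labels = rng.integers(0, 5, size=40)           # stand-in sign labels

X = np.array([gabor_features(im) for im in images])
X = PCA(n_components=15).fit_transform(X)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, labels)
```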

37 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This work details a discriminative approach for optimizing one-shot recognition using micro-sets and presents experiments on the Animals with Attributes and Caltech-101 datasets that demonstrate the benefits of the formulation.
Abstract: For object category recognition to scale beyond a small number of classes, it is important that algorithms be able to learn from a small amount of labeled data per additional class. One-shot recognition aims to apply the knowledge gained from a set of categories with plentiful data to categories for which only a single exemplar is available for each. As with earlier efforts motivated by transfer learning, we seek an internal representation for the domain that generalizes across classes. However, in contrast to existing work, we formulate the problem in a fundamentally new manner by optimizing the internal representation for the one-shot task using the notion of micro-sets. A micro-set is a sample of data that contains only a single instance of each category, sampled from the pool of available data, which serves as a mechanism to force the learned representation to explicitly address the variability and noise inherent in the one-shot recognition task. We optimize our learned domain features so that they minimize an expected loss over micro-sets drawn from the training set and show that these features generalize effectively to previously unseen categories. We detail a discriminative approach for optimizing one-shot recognition using micro-sets and present experiments on the Animals with Attributes and Caltech-101 datasets that demonstrate the benefits of our formulation.
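
The micro-set idea can be illustrated with a small sampler: draw sets containing exactly one example per category and average a loss over them, so the representation is rehearsed under the one-shot condition. The pool contents and toy loss below are hypothetical.

```python
# Monte Carlo estimate of an expected loss over micro-sets.
import random

def sample_micro_set(pool):
    """pool: {category: [examples]} -> {category: single example}."""
    return {cat: random.choice(examples) for cat, examples in pool.items()}

def expected_micro_set_loss(pool, loss_fn, n_draws=100):
    """Average the loss over many sampled micro-sets."""
    return sum(loss_fn(sample_micro_set(pool)) for _ in range(n_draws)) / n_draws

pool = {"cat": [1, 2, 3], "dog": [4, 5], "horse": [6]}
toy_loss = lambda ms: max(ms.values()) - min(ms.values())   # stand-in loss
print(expected_micro_set_loss(pool, toy_loss))
```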

Journal ArticleDOI
TL;DR: A new technique for static hand gesture recognition in Human-Computer Interaction (HCI), based on shape analysis, is presented; it is designed to be a simple and robust gestural interface prototype for various PC applications.
Abstract: Considerable effort has been put towards developing intelligent and natural interfaces between users and computer systems. This is done by means of a variety of modes of information (visual, audio, pen, etc.) used either individually or in combination. The use of gestures to convey information is an important part of human communication. The automatic recognition of gestures enriches Human-Computer Interaction by offering a natural and intuitive method of data input. This paper presents a new technique for static hand gesture recognition in Human-Computer Interaction (HCI) based on shape analysis. The objective of this effort was to explore the utility of a neural network-based approach to the recognition of hand gestures. The proposed system uses the hand contour as a geometric feature. A unique multi-layer perceptron neural network is built for the classification using the back-propagation learning algorithm. The overall model is designed to be a simple and robust gestural interface prototype for various PC applications.


Journal ArticleDOI
TL;DR: IStraw is presented, a corner finding technique based on the ShortStraw algorithm that addresses deficiencies with ShortSt Straw while maintaining its simplicity and efficiency and develops an extension for ink strokes containing curves and arcs.
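
IStraw builds on ShortStraw, whose core "straw" computation is easy to state: on a resampled stroke, the straw at point i is the chord length between its neighbors at i-w and i+w, and straws well below the median mark corner candidates. The sketch below follows the commonly reported ShortStraw defaults, not IStraw's curve- and arc-handling refinements.

```python
# ShortStraw-style corner detection on a resampled stroke.
import numpy as np

def straw_corners(points, w=3, ratio=0.95):
    """points: (N, 2) resampled stroke; returns indices of corner candidates."""
    pts = np.asarray(points, dtype=float)
    # straws[i] is the chord length between points i and i+2w (centered at i+w)
    straws = np.linalg.norm(pts[2 * w:] - pts[:-2 * w], axis=1)
    threshold = ratio * np.median(straws)
    corners = []
    for i, s in enumerate(straws):
        # keep local minima that fall well below the median straw length
        if s < threshold and s == straws[max(0, i - w):i + w + 1].min():
            corners.append(i + w)   # shift back to stroke indexing
    return corners
```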

Book ChapterDOI
21 Sep 2010
TL;DR: This paper presents a system that fuses and interprets the outputs of several computer vision components as well as speech recognition to obtain a high-level understanding of the perceived scene.
Abstract: Most approaches to the visual perception of humans do not include high-level activity recognition. This paper presents a system that fuses and interprets the outputs of several computer vision components as well as speech recognition to obtain a high-level understanding of the perceived scene. Our laboratory for investigating new ways of human-machine interaction and teamwork support is equipped with an assemblage of cameras, some close-talking microphones, and a videowall as the main interaction device. Here, we develop state-of-the-art real-time computer vision systems to track and identify users, and estimate their visual focus of attention and gesture activity. We also monitor the users' speech activity in real time. This paper explains our approach to high-level activity recognition based on these perceptual components and a temporal logic engine.

Proceedings Article
05 Jul 2010
TL;DR: A real-time sketch recognition interface that recognizes 485 freely-drawn military course-of-action symbols that achieves an accuracy of 90% when considering the top 3 interpretations and requiring every aspect of the shape to be correct.
Abstract: Military course-of-action (COA) diagrams are used to depict battle scenarios and include thousands of unique symbols, complete with additional textual and designator modifiers. We have created a real-time sketch recognition interface that recognizes 485 freely-drawn military course-of-action symbols. When the variations (not allowable by other systems) are factored in, our system is several orders of magnitude larger than the next biggest system. On 5,900 hand-drawn symbols, the system achieves an accuracy of 90% when considering the top 3 interpretations and requiring every aspect of the shape (variations, text, symbol, location, orientation) to be correct.

Posted Content
TL;DR: An overview of real-time gesture recognition using the concepts of correlation and Mahalanobis distance is presented, considering the six universal emotional categories: joy, anger, fear, disgust, sadness and surprise.
Abstract: Augmenting human computer interaction with automated analysis and synthesis of facial expressions is a goal towards which much research effort has been devoted recently. Facial gesture recognition is one of the important components of natural human-machine interfaces; it may also be used in behavioural science, security systems and clinical practice. Although humans recognise facial expressions virtually without effort or delay, reliable expression recognition by machine is still a challenge. The face expression recognition problem is challenging because different individuals display the same expression differently. This paper presents an overview of gesture recognition in real time using the concepts of correlation and Mahalanobis distance. We consider the six universal emotional categories, namely joy, anger, fear, disgust, sadness and surprise.
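
A minimal nearest-class-mean classifier under the Mahalanobis distance, the matching idea this overview rests on, might look like the following. The feature vectors and the pooled covariance estimate are stand-ins.

```python
# Nearest-class-mean classification under the Mahalanobis distance.
import numpy as np
from scipy.spatial.distance import mahalanobis

EMOTIONS = ["joy", "anger", "fear", "disgust", "sadness", "surprise"]

rng = np.random.default_rng(4)
train = {e: rng.standard_normal((20, 10)) for e in EMOTIONS}  # stand-in features

means = {e: X.mean(axis=0) for e, X in train.items()}
pooled = np.vstack(list(train.values()))
VI = np.linalg.inv(np.cov(pooled, rowvar=False))              # inverse covariance

def classify(x):
    return min(EMOTIONS, key=lambda e: mahalanobis(x, means[e], VI))

print(classify(rng.standard_normal(10)))
```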

Journal ArticleDOI
TL;DR: This paper describes the sketch recognition-based LAMPS system for teaching MPS1 by emulating the naturalness and realism of paper-based workbooks, while extending their functionality with human instructor-level critique and assessment at an automated level.
Abstract: The non-Romanized Mandarin Phonetic Symbols I (MPS1) system is a highly advantageous phonetic system for native English users studying Chinese Mandarin to learn, yet its steep initial learning curve leads language programs to adopt Romanized phonetic systems instead. Computer-assisted language instruction (CALI) can greatly reduce this learning curve, enabling students to benefit sooner from the long-term advantages of MPS1 usage during the course of Chinese Mandarin study. Unfortunately, the technologies surrounding existing online handwriting recognition algorithms and CALI applications are insufficient in providing a "dynamic" counterpart to traditional paper-based workbooks employed in the classroom setting. In this paper, we describe our sketch recognition-based LAMPS system for teaching MPS1 by emulating the naturalness and realism of paper-based workbooks, while extending their functionality with automated, human instructor-level critique and assessment.

Proceedings ArticleDOI
07 Jun 2010
TL;DR: All these issues are significant factors, which substantially affect the ultimate performance of a sketch recognition engine, and the pros and cons of various choices that can be made while building sketch recognizers are presented and discussed.
Abstract: Image-based approaches to sketch recognition typically cast sketch recognition as a machine learning problem. In systems that adopt image-based recognition, the collected ink is generally fed through a standard three-stage pipeline consisting of feature extraction, learning, and classification steps. Although these approaches make regular use of machine learning, existing work falls short of presenting a proper treatment of important issues such as feature extraction, feature selection, feature combination, and classifier fusion. In this paper, we show that all these issues are significant factors which substantially affect the ultimate performance of a sketch recognition engine. We support our case with experimental results obtained from two databases using representative sets of feature extraction, feature selection, classification, and classifier combination methods. We present the pros and cons of various choices that can be made while building sketch recognizers and discuss their trade-offs.
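
The classifier-fusion step studied here can be illustrated with soft voting, which averages class posteriors across heterogeneous classifiers. The specific estimators and synthetic data below are assumptions, not the paper's chosen methods.

```python
# Classifier fusion by averaging predicted class probabilities (soft voting).
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.standard_normal((120, 30))        # e.g., image-based sketch features
y = rng.integers(0, 4, size=120)          # sketch classes

fusion = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("knn", KNeighborsClassifier()),
                ("svm", SVC(probability=True))],
    voting="soft")                        # average predicted probabilities
fusion.fit(X, y)
```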

Proceedings ArticleDOI
08 Nov 2010
TL;DR: A recognition system for continuous natural gestures, meaning those encountered in spontaneous interaction rather than artificial gestures chosen to simplify recognition, is presented; it achieves 95.6% accuracy on isolated gesture recognition and a 73% recognition rate on continuous gesture recognition.
Abstract: Using a new hand tracking technology capable of tracking 3D hand postures in real-time, we developed a recognition system for continuous natural gestures. By natural gestures, we mean those encountered in spontaneous interaction, rather than a set of artificial gestures chosen to simplify recognition. To date we have achieved 95.6% accuracy on isolated gesture recognition, and 73% recognition rate on continuous gesture recognition, with data from 3 users and twelve gesture classes. We connected our gesture recognition system to Google Earth, enabling real time gestural control of a 3D map. We describe the challenges of signal accuracy and signal interpretation presented by working in a real-world environment, and detail how we overcame them.

Journal ArticleDOI
TL;DR: A variant of a popular and simple gesture recognition algorithm that recognizes freely drawn shapes as well as a highly accurate but more complex recognizer designed explicitly for free-sketch recognition are introduced.

Proceedings ArticleDOI
07 Feb 2010
TL;DR: A pen-based geometry theorem proving system that can effectively recognize hand-drawn figures and hand-written proof scripts, and accurately establish the correspondence between geometric components and proof steps is described.
Abstract: Computer-based geometry systems have been widely used for teaching and learning, but, largely based on mouse-and-keyboard interaction, these systems usually require users to draw figures by following strict task structures defined by menus, buttons, and mouse and keyboard actions. Pen-based designs offer a more natural way to develop geometry theorem proofs with hand-drawn figures and scripts. This paper describes a pen-based geometry theorem proving system that can effectively recognize hand-drawn figures and hand-written proof scripts, and accurately establish the correspondence between geometric components and proof steps. Our system provides dynamic and intelligent visual assistance to help users understand the process of proving and allows users to manipulate geometric components and proof scripts based on structures rather than strokes. The results from the evaluation study show that our system is well perceived and that users are highly satisfied with the accuracy of sketch recognition, the effectiveness of visual hints, and the efficiency of structure-based manipulation.

Proceedings ArticleDOI
07 Jun 2010
TL;DR: A sketch-based modeling system, inspired from anatomical drawing, which constructs plausible 3D models of branching vessels from a single sketch, using an expressive rendering method that imitates the aspect of chalk drawing is presented.
Abstract: We present a sketch-based modeling system, inspired from anatomical drawing, which constructs plausible 3D models of branching vessels from a single sketch. The input drawing typically includes non-flat silhouettes and occluded parts. We exploit the sketching conventions used in anatomical drawings to infer depth and curvature from contour and skeleton curves extracted from the sketch. We then model the set of branching vessels as a convolution surface generated by a graph of skeleton curves: while these curves are set to fit the sketch in the front plane, non-uniform B-spline interpolation is used to give them smoothly varying depth values that meet the set of constraints. The final model is displayed using an expressive rendering method that imitates the aspect of chalk drawing. We discuss the future use of this system as a step towards the interactive teaching of anatomy.
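
A convolution surface can be sketched by point-sampling the skeleton and summing a kernel over the samples; an isosurface of the resulting field then gives the vessel geometry. The Cauchy kernel and sampling below are illustrative choices, not necessarily the paper's.

```python
# Point-sampled approximation of a convolution-surface scalar field.
import numpy as np

def field(p, skeleton_samples, width=0.3):
    """Cauchy-kernel convolution field of sampled skeleton points at p."""
    r2 = np.sum((skeleton_samples - p) ** 2, axis=1) / width ** 2
    return np.sum(1.0 / (1.0 + r2) ** 2)

# One straight branch sampled along its length (a stand-in for the B-spline
# skeleton curves with inferred depth described in the paper).
t = np.linspace(0, 1, 50)[:, None]
branch = t * np.array([1.0, 0.5, 0.2])      # 50 samples of a 3D segment

# Extracting the isosurface field(p) = T (e.g., with marching cubes) would
# yield the final vessel mesh.
print(field(np.array([0.5, 0.25, 0.1]), branch))
```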


Proceedings ArticleDOI
10 Apr 2010
TL;DR: This work's sketch recognition interface recognizes 485 different freely drawn military course-of-action diagram symbols in real time, with each shape containing its own elaborate set of text labels and other variations.
Abstract: Sketch recognition is the automated recognition of hand drawn diagrams. Military course-of-action (COA) diagrams are used to depict battle scenarios. The domain of military course of action diagrams is particularly interesting because it includes tens of thousands of different geometric shapes, complete with many additional textual and designator modifiers. Existing sketch recognition systems recognize on the order of at most 20 different shapes. Our sketch recognition interface recognizes 485 different freely drawn military course-of-action diagram symbols in real time, with each shape containing its own elaborate set of text labels and other variations. We are able to do this by combining multiple recognition techniques in a single system. When the variations (not allowable by other systems) are factored in, our system is several orders of magnitude larger than the next biggest system. On 5,900 hand-drawn symbols drawn by 8 researchers, the system achieves an accuracy of 90% when considering the top 3 interpretations and requiring every aspect of the shape (variations, text, symbol, location, orientation) to be correct.

Journal ArticleDOI
TL;DR: This paper proposes a concept and architecture for a generic geometry-based recognizer that not only recognizes single components, but can also understand sketched diagrams as a whole, and can resolve ambiguities by syntactical and semantical analysis.
Abstract: Many of today's recognition approaches for hand-drawn sketches are feature-based, which is conceptually similar to the recognition of hand-written text. While very suitable for the latter (and more tasks, e.g., for entering gestures as commands), such approaches do not easily allow for clustering and segmentation of strokes, which is crucial to their recognition. This results in applications which do not feel natural but impose artificial restrictions on the user regarding how sketches and single components (shapes) are to be drawn. This paper proposes a concept and architecture for a generic geometry-based recognizer. It is designed for the mentioned issue of clustering and segmentation. All strokes are fed into independent preprocessors called transformers that process and abstract the strokes. The result of the transformers is stored in models. Each model is responsible for a certain type of primitive, e.g., a line or an arc. The advantage of models is that different interpretations of a stroke exist in parallel, and there is no need to rate or sort these interpretations. The recognition of a component in the drawing is then decomposed into the recognition of its primitives that can be directly queried for in the models. Finally, the identified primitives are assembled to the complete component. This process is directed by an automatically computed search plan, which exhibits shape characteristics in order to ensure an efficient recognition. In several case studies the applicability and generality of the proposed approach is shown, as very different types of components can be recognized. Furthermore, the proposed approach is part of a complete system for sketch understanding. This system not only recognizes single components, but can also understand sketched diagrams as a whole, and can resolve ambiguities by syntactical and semantical analysis. A user study was conducted to obtain recognition rates and runtime data of our recognizer.
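
The transformer/model decomposition might be organized as below: each stroke passes through independent transformers, and each model keeps every interpretation of strokes as one primitive type, so alternative readings coexist without ranking. All class names here are hypothetical.

```python
# Compressed sketch of the transformer/model architecture described above.
from dataclasses import dataclass, field

@dataclass
class Model:
    """Stores all interpretations of strokes as one primitive type."""
    primitive: str                      # e.g., "line" or "arc"
    interpretations: list = field(default_factory=list)

    def query(self, predicate):
        # Recognition of a component queries the models for its primitives.
        return [i for i in self.interpretations if predicate(i)]

class LineTransformer:
    """Abstracts a stroke into zero or more line interpretations."""
    def process(self, stroke, model):
        first, last = stroke[0], stroke[-1]
        model.interpretations.append({"start": first, "end": last})

lines = Model("line")
LineTransformer().process([(0, 0), (3, 1), (6, 2)], lines)
print(lines.query(lambda i: i["end"][0] > 5))
```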

Journal ArticleDOI
TL;DR: This project covers various issues: what gestures are, their classification, their role in a gesture recognition system, system architecture concepts for implementing a gesture recognition system, major issues involved in implementing a simplified gesture recognition system, exploitation of gestures in experimental systems, the importance of gesture recognition systems, real-time applications, and the future scope of gesture recognition systems.
Abstract: Gestures are a major form of human communication, and hence an appealing way to interact with computers, as they are already a natural part of how we communicate. A primary goal of gesture recognition research is to create a system which can identify specific human gestures and use them to convey information or for device control; by implementing real-time gesture recognition, a user can control a computer by performing a specific gesture in front of a video camera linked to the computer. This project covers various issues: what gestures are, their classification, their role in a gesture recognition system, system architecture concepts for implementing a gesture recognition system, major issues involved in implementing a simplified gesture recognition system, exploitation of gestures in experimental systems, the importance of gesture recognition systems, real-time applications, and the future scope of gesture recognition systems. The algorithms used in this project are a finger-counting algorithm and an X-Y axis method (to recognize the thumb).

Proceedings ArticleDOI
19 Apr 2010
TL;DR: The aim of this work is to propose a real-time vision system for use within visual interaction environments through hand gesture recognition, using general-purpose hardware and low-cost sensors, such as a simple personal computer and a USB webcam, so that any user could make use of it at the office or home.
Abstract: Hand gestures are an important modality for human computer interaction (HCI) [1]. Compared to many existing interfaces, hand gestures have the advantages of being easy to use, natural, and intuitive. Successful applications of hand gesture recognition include computer game control [2], human-robot interaction [3], and sign language recognition [4], to name a few. Vision-based recognition systems can give computers the capability of understanding and responding to hand gestures. The aim of this work is to propose a real-time vision system for use within visual interaction environments through hand gesture recognition, using general-purpose hardware and low-cost sensors, such as a simple personal computer and a USB webcam, so that any user could make use of it at the office or home. The basis of our approach is a fast segmentation process to obtain the moving hand from the whole image, which is able to deal with a large number of hand shapes against different backgrounds and lighting conditions, and a recognition process that identifies the hand posture from the temporal sequence of segmented hands. The use of a visual memory (stored database) allows the system to handle variations within a gesture and to speed up the recognition process through the storage of different variables related to each gesture. A hierarchical gesture recognition algorithm is introduced to recognize a large number of gestures. Three stages of the proposed algorithm are based on a new hand tracking technique that recognizes the actual beginning of a gesture using a Kalman filtering process, hidden Markov models, and graph matching. Processing time is important when working with large databases, so special care is taken to deal with the large number of gestures.
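
The Kalman filtering stage mentioned above is commonly realized as a constant-velocity filter over the hand centroid; since the paper's exact state model is not given, the following is an assumption-laden sketch.

```python
# Constant-velocity Kalman filter over a stream of hand-centroid measurements.
# State: [x, y, vx, vy]; measurement: 2D position only.
import numpy as np

dt = 1.0 / 30.0                               # webcam frame interval
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]])    # state transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])    # we observe position only
Q, R = np.eye(4) * 1e-4, np.eye(2) * 1e-2     # process / measurement noise

x, P = np.zeros(4), np.eye(4)
for z in [np.array([0.10, 0.20]), np.array([0.12, 0.21])]:  # centroid stream
    x, P = F @ x, F @ P @ F.T + Q                           # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)            # Kalman gain
    x, P = x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P     # update
print("filtered position:", x[:2])
```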

Book
10 Sep 2010
TL;DR: Machine-based Intelligent Face Recognition discusses the general engineering method of imitating intelligent human brains for video-based face recognition in a fundamental way, which is completely unsupervised, automatic, self-learning, self-updating and robust.
Abstract: Machine-based Intelligent Face Recognition discusses the general engineering method of imitating intelligent human brains for video-based face recognition in a fundamental way, which is completely unsupervised, automatic, self-learning, self-updating and robust. It also overviews state-of-the-art research on cognitive-based biometrics and machine-based biometrics, and especially the advances in face recognition. This book is intended for scientists, researchers, engineers, and students in the fields of computer vision, machine intelligence, and particularly face recognition. Dr. Dengpan Mou, Dr.-Ing. and MSc from the University of Ulm, Germany, is with Harman/Becker Automotive Systems GmbH, working on video processing, computer vision and machine learning research and development topics.

Proceedings ArticleDOI
07 Jun 2010
TL;DR: This work provides a method for extracting and classifying stroke segments from a line drawing or sketch with the goal of producing perceptually-valid output in the context of mesh inflation.
Abstract: We provide a method for extracting and classifying stroke segments from a line drawing or sketch with the goal of producing perceptually-valid output in the context of mesh inflation. This is important as processing freehand sketch input is a fundamental task in sketch-based interfaces, yet many systems bypass the problem by forcing simplified, unnatural drawing patterns. Our stroke extraction combines contour tracing with feature-preserving post-processing. The extracted strokes are classified according to the objects and regions in the sketch: object and region boundaries, interior features, and suggestive lines. The outcome of this classification is demonstrated with examples in feature-sensitive mesh inflation.