Topic

Sketch recognition

About: Sketch recognition is a research topic. Over its lifetime, 1,611 publications have been published within this topic, receiving 40,284 citations.


Papers
Journal ArticleDOI
TL;DR: A new method is presented for user-independent gesture recognition from time-varying images, using relative motion-based feature extraction together with discriminant analysis and dynamically updated buffer structures that provide online learning and recognition abilities.

14 citations

Journal Article
TL;DR: Experimental results show that model matching based on the Hausdorff distance is a feasible way to realize vision-based static gesture recognition.
Abstract: With the development of advanced techniques for human-computer interaction (HCI), gesture recognition is becoming one of the key techniques of HCI. Owing to some notable advantages of vision-based gesture recognition (VGR), e.g., its greater naturalness for HCI, VGR is now an active research topic in image processing, pattern recognition, computer vision and related fields. Model matching using the Hausdorff distance has the advantages of low computational cost and strong adaptability. The system described in this paper applies the Hausdorff distance for the first time to visually recognize the Chinese finger alphabet (CFA) gestures (30 gestures in total), using edge pixels in the distance-transform space as recognition features. To improve the robustness of the system, a modified Hausdorff distance (MHD) is proposed and applied in the recognition process. The average recognition rate of the system using MHD reaches 96.7% on the testing set. The experimental results show that model matching based on the Hausdorff distance is a feasible way to realize vision-based static gesture recognition.
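
As an illustration of this kind of matching, the sketch below computes the standard modified Hausdorff distance of Dubuisson and Jain between two sets of edge pixels and uses it for nearest-template classification. The point-set shapes and template dictionary are illustrative assumptions, not necessarily the exact MHD variant proposed in the paper.

    import numpy as np

    def directed_mhd(a, b):
        # Mean distance from each point of a to its nearest neighbour in b.
        diffs = a[:, None, :] - b[None, :, :]      # (n, m, 2) pairwise offsets
        dists = np.sqrt((diffs ** 2).sum(axis=2))  # (n, m) Euclidean distances
        return dists.min(axis=1).mean()

    def modified_hausdorff(a, b):
        # Symmetric MHD: the larger of the two directed distances.
        return max(directed_mhd(a, b), directed_mhd(b, a))

    def classify(query_edges, templates):
        # Pick the template (e.g. one per CFA gesture) closest under MHD.
        return min(templates, key=lambda name: modified_hausdorff(query_edges, templates[name]))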

14 citations

Proceedings Article
11 Jul 2009
TL;DR: This work reports on techniques that use information from both sketch and speech to distinguish gesture strokes from non-gestures -- a critical first step in understanding a sketch of a device.
Abstract: Mechanical design tools would be considerably more useful if we could interact with them in the way that human designers communicate design ideas to one another, i.e., using crude sketches and informal speech. Those crude sketches frequently contain pen strokes of two different sorts: one type portrays device structure, the other denotes gestures, such as arrows used to indicate motion. We report here on techniques we developed that use information from both sketch and speech to distinguish gesture strokes from non-gestures -- a critical first step in understanding a sketch of a device. We collected and analyzed unconstrained device descriptions, which revealed six common types of gestures. Guided by this knowledge, we developed a classifier that uses both sketch and speech features to distinguish gesture strokes from non-gestures. Experiments with our techniques indicate that the sketch and speech modalities alone produce equivalent classification accuracy, but combining them produces higher accuracy.
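
A minimal sketch of the feature-level fusion idea described above, assuming per-stroke sketch and speech feature vectors have already been extracted. The feature dimensions, random placeholder data, and choice of a scikit-learn logistic-regression classifier are illustrative assumptions, not the authors' actual feature set or model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical training data: one row per pen stroke.
    sketch_feats = rng.random((200, 8))   # e.g. curvature, speed, arrow-likeness
    speech_feats = rng.random((200, 4))   # e.g. motion-word cues aligned with the stroke
    labels = rng.integers(0, 2, 200)      # 1 = gesture stroke, 0 = structural stroke

    # Feature-level fusion: concatenate the two modalities per stroke.
    fused = np.hstack([sketch_feats, speech_feats])

    clf = LogisticRegression().fit(fused, labels)
    print(clf.predict(fused[:5]))         # gesture / non-gesture decisions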

14 citations

Book ChapterDOI
15 Sep 2011
TL;DR: This paper demonstrates the integration of a classifier based on an incremental learning method into an interactive sketch analyzer that uses a competitive breadth-first exploration of the analysis tree to interpret 2D architectural floor plans.
Abstract: In this paper, we present the integration of a classifier, based on an incremental learning method, into an interactive sketch analyzer. The classifier recognizes each symbol with a degree of confidence. When the analyzer judges this response insufficient to make the right decision, the decision process solicits the user to validate the decision explicitly. The user associates the symbol with an existing class or a newly created class, or ignores the recognition. The classifier learns during the interpretation phase, which yields a method for auto-evolutionary interpretation of sketches. In practice, user participation goes a long way toward avoiding error accumulation during the analysis. This paper demonstrates this integration in an interactive method based on a competitive breadth-first exploration of the analysis tree for interpreting 2D architectural floor plans.
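
The interaction loop can be sketched as a reject-and-ask pattern, shown below. The classifier interface, confidence threshold, and user callback are assumptions made for illustration, not the paper's actual API.

    CONFIDENCE_THRESHOLD = 0.8  # hypothetical cut-off for accepting a recognition

    def interpret_symbol(classifier, symbol, ask_user):
        label, confidence = classifier.predict(symbol)
        if confidence < CONFIDENCE_THRESHOLD:
            # Solicit the user: confirm the guess, choose or create another
            # class, or return None to ignore this recognition entirely.
            label = ask_user(symbol, suggested=label)
            if label is None:
                return None
        classifier.learn(symbol, label)  # incremental update during interpretation
        return label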

14 citations

Proceedings ArticleDOI
21 May 2015
TL;DR: This study proposes an integrated system with the ability to track multiple people at the same time, recognize their facial expressions, and identify the social atmosphere, and develops algorithms to determine hand signs via a process called the combinatorial approach recognizer equation.
Abstract: In this paper we introduce multimodal information fusion for a human-robot interaction system. This multimodal information combines methods for hand sign recognition and emotion recognition of multiple people. These different recognition modalities are essential for Human-Robot Interaction (HRI). Sign language is the most intuitive and direct way for impaired or disabled people to communicate. Through hand or body gestures, disabled users can easily let a caregiver or robot know what message they want to convey. Emotional interaction with human beings is also desirable for robots. In this study, we propose an integrated system that is able to track multiple people at the same time, recognize their facial expressions, and identify the social atmosphere. Consequently, robots can easily recognize the facial expressions and emotion variations of different people and respond properly. We have developed algorithms to determine hand signs via a process called the combinatorial approach recognizer equation; the two recognizers are intended to complement each other's discriminative ability. In our facial expression recognition scheme, we fuse a feature-vector-based approach with a differential active appearance model feature-based approach to obtain not only appropriate positions of the feature points but also more information about texture and appearance. We have successfully demonstrated hand gesture recognition and emotion recognition experimentally as a proof of concept.
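
As one plausible reading of combining complementary recognizers, the sketch below fuses per-class posteriors from two recognizers over the same label set by weighted averaging. The weights and interfaces are illustrative assumptions, not the paper's combinatorial approach recognizer equation.

    import numpy as np

    def fuse_posteriors(p_a, p_b, w=0.5):
        # Weighted sum of two per-class posterior vectors; argmax is the decision.
        fused = w * np.asarray(p_a) + (1.0 - w) * np.asarray(p_b)
        return int(fused.argmax()), fused

    # Toy example: recognizer A hesitates between classes 0 and 1, while
    # recognizer B is confident about class 1; fusion settles on class 1.
    label, scores = fuse_posteriors([0.48, 0.47, 0.05], [0.10, 0.80, 0.10])
    print(label, scores)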

14 citations


Network Information
Related Topics (5)

Topic                          Papers / Citations               Relatedness
Feature (computer vision)      128.2K papers, 1.7M citations    84% related
Object detection               46.1K papers, 1.3M citations     83% related
Feature extraction             111.8K papers, 2.1M citations    82% related
Image segmentation             79.6K papers, 1.8M citations     81% related
Convolutional neural network   74.7K papers, 2M citations       80% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    26
2022    71
2021    30
2020    29
2019    46
2018    27