Topic

Sketch recognition

About: Sketch recognition is a research topic. Over its lifetime, 1,611 publications have been published within this topic, receiving 40,284 citations.


Papers
Proceedings Article
01 Aug 2008
TL;DR: The comparison of both contactless interfaces shows that, in spite of the differences in the computer vision and ASR techniques applied, they provide similar performance in contactless human-computer interaction.
Abstract: The paper presents two different multimodal interfaces based on automatic recognition and interpretation of speech and of gestures of the user's head and hands, developed within the framework of the SIMILAR European Network of Excellence. The architectures of the ICANDO and MOWGLI multimodal interfaces, modality recognition, information synchronization and fusion, as well as a qualitative comparison and a quantitative evaluation using Fitts' law experiments, are described. The comparison of both contactless interfaces shows that, in spite of the differences in the computer vision and ASR techniques applied, they provide similar performance in contactless human-computer interaction.
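
The evaluation relies on Fitts' law, which relates pointing difficulty to target distance and width. As a rough illustration of how such a comparison is scored, here is a minimal sketch using the Shannon formulation of the index of difficulty and per-trial throughput; the distances, widths, and movement times below are hypothetical, and the exact Fitts' law variant used in the paper is not specified here.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits/s for a single pointing trial."""
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical trials: (target distance px, target width px, movement time s)
trials = [(400, 40, 1.8), (600, 30, 2.4), (200, 60, 1.1)]
tp = [throughput(d, w, t) for d, w, t in trials]
print(f"mean throughput: {sum(tp) / len(tp):.2f} bits/s")
```

Averaging per-trial throughput over many targets yields a single figure per interface, which is one common way to compare contactless pointing techniques.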

1 citation

Proceedings Article
10 Jul 1999
TL;DR: A one-layered, hard-limited perceptron can classify analog pattern vectors provided they satisfy the PLI condition, and an automatic feature extraction scheme can then be derived from N-dimensional Euclidean geometry.
Abstract: In the author's previous works (1990-1999), it was shown that a one-layered, hard-limited perceptron can classify analog pattern vectors if the latter satisfy the PLI condition, which holds for most pattern recognition applications. When this condition is satisfied, an automatic feature extraction scheme can be derived from N-dimensional Euclidean geometry. The scheme automatically extracts the most distinctive parts of the pattern vectors used in training, selecting feature vectors in descending order of the volumes of the parallelepipeds spanned by these sub-vectors. Theoretical derivations and numerical examples revealing the physical nature of this process and its effect on the robustness of this novel pattern recognition system are reported in detail. An experiment shows that the system learns 4 handwritten characters in near real time; recognition of untrained handwritten characters is above 90% correct and runs in real time.
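
The volume-based ranking described above can be illustrated with a Gram determinant: the volume of the parallelepiped spanned by a set of sub-vectors is sqrt(det(V V^T)). The sketch below ranks hypothetical candidate sub-vectors by this volume in descending order; the names and values are made up, and the paper's own geometric construction may differ in detail.

```python
import numpy as np

def parallelepiped_volume(vectors):
    """Volume of the parallelepiped spanned by the row vectors,
    computed as sqrt(det(G)) with the Gram matrix G = V V^T."""
    V = np.asarray(vectors, dtype=float)
    gram = V @ V.T
    return float(np.sqrt(max(np.linalg.det(gram), 0.0)))

# Hypothetical sub-vectors extracted from training patterns
candidates = {
    "sub_a": [[1.0, 0.2, 0.1], [0.1, 0.9, 0.3]],
    "sub_b": [[0.5, 0.5, 0.5], [0.4, 0.6, 0.4]],
}
ranked = sorted(candidates,
                key=lambda k: parallelepiped_volume(candidates[k]),
                reverse=True)
print(ranked)  # sub-vectors in descending order of spanned volume
```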

1 citation

Proceedings Article
15 Dec 2014
TL;DR: This paper describes a significant improvement in online handwritten Japanese text recognition that is free from line-direction and character-orientation constraints, applying a robust context integration model to recognize each text line element.
Abstract: This paper describes a significant improvement in online handwritten Japanese text recognition that is free from line-direction and character-orientation constraints. The original system [1, 2] separates freely written text into text line elements and estimates and normalizes character orientation and line direction. It then hypothetically segments each text line element into primitive segments, constructs a segmentation-recognition candidate lattice, and evaluates the likelihood of candidate segmentation-recognition paths by combining the scores of character recognition, geometric features, and linguistic context. In this scheme, we have updated the over-segmentation for each text line element and applied a robust context integration model to recognize each text line element. Experimental results on text from the HANDS-Kondate_t_bf-2001-11 database demonstrate a large improvement in the character recognition rate compared with the previous system [1, 2].
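
The candidate lattice can be pictured as a directed graph whose edges carry a combined score. The sketch below is a deliberately tiny, hypothetical lattice in which each edge mixes character recognition, geometric, and linguistic log-scores as a weighted sum, and the best segmentation-recognition path is found by dynamic programming; the weights, scores, and characters are invented and do not come from the paper.

```python
# edges[start_node] -> list of (end_node, characters, rec_score, geo_score, lang_score)
edges = {
    0: [(1, "日", -0.2, -0.5, -0.3), (2, "日本", -1.0, -0.4, -0.2)],
    1: [(2, "本", -0.3, -0.6, -0.4)],
    2: [],
}
weights = (1.0, 0.5, 0.8)  # hypothetical weights: recognition, geometric, linguistic

def best_path(edges, start=0, end=2):
    """Dynamic programming over the lattice: keep the best combined
    score (and its character sequence) reaching each node."""
    best = {start: (0.0, [])}
    for node in sorted(edges):          # lattice nodes in topological order
        if node not in best:
            continue
        score, chars = best[node]
        for nxt, ch, rec, geo, lang in edges[node]:
            s = score + weights[0] * rec + weights[1] * geo + weights[2] * lang
            if nxt not in best or s > best[nxt][0]:
                best[nxt] = (s, chars + [ch])
    return best[end]

print(best_path(edges))  # highest-scoring segmentation and its combined score
```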

1 citation

Proceedings Article
20 Jun 2015
TL;DR: The results of the comprehensive experiments demonstrate that the newly developed alternative system is a more successful candidate (in terms of prediction accuracy and early prediction speed) than the existing system for real-time activity prediction.
Abstract: Recently there has been a growing interest in sketch recognition technologies for facilitating human-computer interaction. Existing sketch recognition studies mainly focus on recognizing pre-defined symbols and gestures. However, just as there is a need for systems that can automatically recognize symbols and gestures, there is also a pressing need for systems that can automatically recognize pen-based manipulation activities (e.g. dragging, maximizing, minimizing, scrolling). There are two main challenges in classifying manipulation activities. First is the inherent lack of characteristic visual appearances of pen inputs that correspond to manipulation activities. Second is the necessity of real-time classification based upon the principle that users must receive immediate and appropriate visual feedback about the effects of their actions. In this paper (1) an existing activity prediction system for pen-based devices is modified for real-time activity prediction and (2) an alternative time-based activity prediction system is introduced. Both systems use eye gaze movements that naturally accompany pen-based user interaction for activity classification. The results of our comprehensive experiments demonstrate that the newly developed alternative system is a more successful candidate (in terms of prediction accuracy and early prediction speed) than the existing system for real-time activity prediction. More specifically, midway through an activity, the alternative system reaches 66% of its maximum accuracy value (i.e. 66% of 70.34%) whereas the existing system reaches only 36% of its maximum accuracy value (i.e. 36% of 55.69%).
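
The closing comparison becomes concrete with a little arithmetic: 66% of a 70.34% maximum corresponds to roughly 46% absolute accuracy midway through an activity, versus roughly 20% (36% of 55.69%) for the existing system. A minimal sketch of that calculation follows; only the maxima and midpoint fractions are taken from the abstract, the rest is illustrative.

```python
# Fraction-of-maximum-accuracy comparison at the midpoint of an activity.
# System names are descriptive labels, not names used in the paper.
systems = {
    "alternative (time-based)": {"max_accuracy": 70.34, "midpoint_fraction": 0.66},
    "existing (modified)":      {"max_accuracy": 55.69, "midpoint_fraction": 0.36},
}

for name, s in systems.items():
    midway = s["max_accuracy"] * s["midpoint_fraction"]
    print(f"{name}: ~{midway:.1f}% accuracy halfway through an activity "
          f"(of a {s['max_accuracy']}% maximum)")
```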

1 citation

Book Chapter
19 Mar 1996
TL;DR: A novel hand data model is described as a system evaluator that defines the minimum requirements for a robust recognition system; it estimates the Bayes error by calculating the classification error from the distribution characteristics of the patterns.
Abstract: This paper describes a novel hand data model as a system evaluator that defines the minimum requirements for a robust recognition system. Interface devices for the user's hand range from a simple switch to highly complex multi-sensor gloves. A model is therefore required that can cope with both the diversity of input devices and the rich variety of gesture-based sign languages and user interfaces. We estimate the Bayes error by calculating the classification error from the distribution characteristics of the patterns; this estimate is useful for classifying gestures, evaluating the gesture input system, and specifying the gesture vocabulary.
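
The abstract does not state which estimator of the Bayes error is used. One common choice, shown here purely as an assumed illustration, is the Bhattacharyya upper bound between two Gaussian class-conditional distributions; the means and covariances below are hypothetical.

```python
import numpy as np

def bhattacharyya_bound(mu1, cov1, mu2, cov2, p1=0.5, p2=0.5):
    """Upper bound on the Bayes error between two Gaussian classes:
    P_e <= sqrt(p1*p2) * exp(-D_B), with D_B the Bhattacharyya distance."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov = (cov1 + cov2) / 2.0
    diff = mu2 - mu1
    d_b = (0.125 * diff @ np.linalg.solve(cov, diff)
           + 0.5 * np.log(np.linalg.det(cov)
                          / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
    return float(np.sqrt(p1 * p2) * np.exp(-d_b))

# Hypothetical 2-D feature distributions for two gestures
err = bhattacharyya_bound([0, 0], np.eye(2), [2, 1], 1.5 * np.eye(2))
print(f"estimated upper bound on Bayes error: {err:.3f}")
```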

1 citation


Network Information
Related Topics (5)
Feature (computer vision): 128.2K papers, 1.7M citations, 84% related
Object detection: 46.1K papers, 1.3M citations, 83% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image segmentation: 79.6K papers, 1.8M citations, 81% related
Convolutional neural network: 74.7K papers, 2M citations, 80% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    26
2022    71
2021    30
2020    29
2019    46
2018    27