Topic

Sketch recognition

About: Sketch recognition is a research topic. Over the lifetime, 1611 publications have been published within this topic receiving 40284 citations.


Papers
Proceedings ArticleDOI
27 Apr 2015
TL;DR: An improved segmentation model based on HSV and YCbCr mixed skin-colour space is presented and problems such as recognition of similar gestures and movement epenthesis seem to be handled effectively with the proposed techniques.
Abstract: Hand gesture recognition systems are widely used for Human Computer Interaction (HCI) and sign language recognition. The primary requirement of a hand gesture based application system is to segment the hand/palm part from the other body parts and background in the best possible way. In this paper, we report certain techniques for recognizing isolated English alphabet gestures as well as continuous alphabet gestures. We present an improved segmentation model based on HSV and YCbCr mixed skin-colour space. The classification has been done by a three-layer Multi-layer Perceptron Artificial Neural Network (MLP-ANN). Problems such as recognition of similar gestures and movement epenthesis seem to be handled effectively with the proposed techniques.
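The mixed-space segmentation idea can be sketched as a per-pixel rule: a pixel counts as skin only if it falls inside a skin range in both YCbCr and HSV. The threshold ranges and function names below are common illustrative defaults, not the paper's tuned values:

```python
import colorsys

def rgb_to_ycbcr(r, g, b):
    # ITU-R BT.601 full-range RGB -> YCbCr conversion
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(rgb):
    """Classify one RGB pixel (0-255 channels) as skin.

    A pixel is kept only if it lies inside the skin range in
    YCbCr *and* in HSV -- the 'mixed' skin-colour rule.
    Ranges are textbook defaults, not the paper's values.
    """
    r, g, b = rgb
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    h, s, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    in_ycbcr = 77 <= cb <= 127 and 133 <= cr <= 173
    in_hsv = h * 360 <= 50 and 0.23 <= s <= 0.68
    return in_ycbcr and in_hsv
```

Applied over every pixel of a frame, this yields a binary hand mask; in practice the mask would then be cleaned with morphological operations before features are fed to the classifier.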

7 citations

01 Jan 2014
TL;DR: This work intends to study and implement a solution, generic enough, able to interpret user commands composed of a set of dynamic and static gestures, and to use those solutions to build an application able to work in real-time human-computer interaction systems.
Abstract: Hand gesture recognition for human-computer interaction is an area of active research in computer vision and machine learning. The primary goal of gesture recognition research is to create a system which can identify specific gestures and use them to convey information or for device control. For that, gestures need to be modelled in the spatial and temporal domains, where a hand posture is the static structure of the hand and a gesture is the dynamic movement of the hand. Hand pose being one of the most important communication tools in daily human life, and with the continuous advances of image and video processing techniques, research on human-machine interaction through gesture recognition has led to the use of such technology in a very broad range of applications, like touch screens, video game consoles, virtual reality, medical applications, etc. There are areas where this trend is an asset, as for example in the application of these technologies in interfaces that can help people with physical disabilities, or areas where it is a complement to the normal way of communicating. There are basically two types of approaches for hand gesture recognition: vision-based approaches and data-glove methods. In this study we focus our attention on vision-based approaches. Why vision-based hand gesture recognition systems? Vision-based hand gesture recognition systems provide a simpler and more intuitive way of communication between a human and a computer. Using visual input in this context makes it possible to communicate remotely with computerized equipment, without the need for physical contact. The main objective of this work is to study and implement solutions that can be generic enough, with the help of machine learning algorithms, to allow their application in a wide range of human-computer interfaces for online gesture recognition.
In pursuit of this, we intend to use a depth sensor camera to detect and extract hand information (hand features) for gesture classification. With the implemented solutions we intend to develop an integrated vision-based hand gesture recognition system for offline training of static and dynamic hand gestures.
Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. This is an area with many different possible applications, giving users a simpler and more natural way to communicate with robots/systems interfaces, without the need for extra devices. So, the primary goal of gesture recognition research is to create systems which can identify specific human gestures and use them to convey information or for device control. This work intends to study and implement a solution, generic enough, able to interpret user commands composed of a set of dynamic and static gestures, and to use those solutions to build an application able to work in real-time human-computer interaction systems. The proposed solution is composed of two modules controlled by an FSM (Finite State Machine): a real-time hand tracking and feature extraction system, supported by an SVM (Support Vector Machine) model for static hand posture classification and a set of HMMs (Hidden Markov Models) for dynamic single-stroke hand gesture recognition. The experimental results showed that the system works very reliably, being able to recognize the set of defined commands in real time. The SVM model for hand posture classification, trained with the selected hand features, achieved an accuracy of 99.2%. The proposed solution has the advantage of being computationally simple to train and use, and at the same time generic enough to allow its application in any robot/system command interface.
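The two-module architecture described above (a static posture branch and a dynamic stroke branch, coordinated by a finite state machine) can be illustrated with a deliberately simplified sketch. The classifiers below are stand-ins for the paper's SVM and HMM models, and every name here is hypothetical:

```python
def classify_posture(fingers):
    # Stand-in for the SVM posture model: maps a toy feature
    # (number of extended fingers) to a posture label.
    return {0: "fist", 5: "open_hand"}.get(fingers, "unknown")

def classify_stroke(trajectory):
    # Stand-in for the HMM stroke models: classifies a single
    # stroke by its net displacement direction.
    dx = trajectory[-1][0] - trajectory[0][0]
    dy = trajectory[-1][1] - trajectory[0][1]
    if abs(dx) > abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

class GestureFSM:
    """Routes each hand observation to the static or dynamic branch."""

    def __init__(self):
        self.state = "idle"
        self.trajectory = []

    def step(self, position, moving, fingers):
        if self.state == "idle":
            if moving:                 # a stroke begins: start tracking
                self.state = "tracking"
                self.trajectory = [position]
                return None
            return classify_posture(fingers)          # static branch
        if self.state == "tracking":
            self.trajectory.append(position)
            if not moving:             # stroke finished: classify it
                self.state = "idle"
                return classify_stroke(self.trajectory)  # dynamic branch
            return None
```

A still hand yields a posture label each frame, while a moving hand is tracked until it stops and the whole trajectory is classified as one gesture, mirroring the static/dynamic split in the abstract.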

7 citations

Journal ArticleDOI
TL;DR: The formal context-free parsing method of Earley is examined and shown to suggest a useful control structure model for integrating top-down and bottom-up search in schemata representations.
Abstract: This paper is concerned with generalizing formal recognition methods from parsing theory to schemata knowledge representations. Within Artificial Intelligence, recognition tasks include aspects of natural language understanding, computer vision, episode understanding, speech recognition, and others. The notion of schemata as a suitable knowledge representation for these tasks is discussed. A number of problems with current schemata-based recognition systems are presented. To gain insight into alternative approaches, the formal context-free parsing method of Earley is examined. It is shown to suggest a useful control structure model for integrating top-down and bottom-up search in schemata representations.
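Earley's algorithm interleaves top-down prediction with bottom-up completion, which is exactly the control structure the paper generalizes. A minimal textbook recognizer (not tied to the schemata discussion, and assuming a grammar without ε-productions) makes the three operations concrete:

```python
def earley_recognize(grammar, start, tokens):
    """Return True if `tokens` derives from `start`.

    grammar: dict mapping a nonterminal to a list of productions,
    each production a tuple of symbols (nonterminals are grammar
    keys, everything else is a terminal). No epsilon-productions.
    Items are (head, body, dot, origin) tuples.
    """
    n = len(tokens)
    chart = [set() for _ in range(n + 1)]
    for body in grammar[start]:
        chart[0].add((start, body, 0, 0))
    for i in range(n + 1):
        added = True
        while added:                      # iterate to a fixed point
            added = False
            for head, body, dot, origin in list(chart[i]):
                if dot < len(body):
                    sym = body[dot]
                    if sym in grammar:    # PREDICT: top-down expansion
                        for prod in grammar[sym]:
                            item = (sym, prod, 0, i)
                            if item not in chart[i]:
                                chart[i].add(item)
                                added = True
                    elif i < n and tokens[i] == sym:   # SCAN a terminal
                        chart[i + 1].add((head, body, dot + 1, origin))
                else:                     # COMPLETE: bottom-up combination
                    for h2, b2, d2, o2 in list(chart[origin]):
                        if d2 < len(b2) and b2[d2] == head:
                            item = (h2, b2, d2 + 1, o2)
                            if item not in chart[i]:
                                chart[i].add(item)
                                added = True
    return any(h == start and d == len(b) and o == 0
               for h, b, d, o in chart[n])
```

For example, with the ambiguous grammar `E -> E + E | a`, the recognizer accepts `a + a` and rejects the incomplete `a +`; the predict step is the top-down move and the complete step the bottom-up move discussed above.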

7 citations

Journal ArticleDOI
TL;DR: This review highlights the superiority of artificial neural networks, a popular area of Artificial Intelligence, over various other available methods such as fuzzy logic and genetic algorithms in scene text recognition.
Abstract: Scene text recognition has become an important emerging area of research in the field of image processing. In image processing, character recognition adds complexity in the area of Artificial Intelligence. Character recognition is not easy for computer programs in comparison to humans. In the broad spectrum of things, it may be considered that recognizing patterns is something humans do well and computers do not. There are many reasons for this, including various sources of variability and the absence of hard-and-fast rules that define the appearance of a visual character. Hence, there is an unavoidable requirement for heuristic deduction of rules from different samples. This review highlights the superiority of artificial neural networks, a popular area of Artificial Intelligence, over various other available methods such as fuzzy logic and genetic algorithms. In this paper, two classes of methods are listed for character recognition: offline and online. The "offline" methods include feature extraction, clustering, and pattern matching; artificial neural networks use the static image properties. The online methods are divided into two approaches: the k-NN classifier and a direction-based algorithm. Thus, the range of techniques available for scene text recognition deserves admiration. This review gives a detailed survey of the use of artificial neural networks in scene text recognition. Keywords: Recognition, Scene text recognition, Text extraction.
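One of the online methods the review lists, the k-NN classifier, assigns a character the majority label among its k closest training samples. A minimal sketch with toy two-dimensional feature vectors (real systems would use richer glyph features; all names are illustrative):

```python
from collections import Counter

def knn_classify(samples, query, k=3):
    """Majority vote among the k nearest labelled samples.

    samples: list of (feature_vector, label) pairs.
    Squared Euclidean distance is used, since only the
    ordering of distances matters for choosing neighbours.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    nearest = sorted(samples, key=lambda s: sq_dist(s[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

For instance, a query point sitting inside a cluster of "O" samples is labelled "O" even if a single "X" sample lies somewhat nearby, which is the noise-robustness that motivates voting over k neighbours instead of taking the single nearest one.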

7 citations

Proceedings ArticleDOI
06 May 2015
TL;DR: With the use of a camera and computer vision technology such as image segmentation and feature extraction, a technique is developed which can be used for computer control using hand gestures.
Abstract: This paper presents a technique to develop a vision-based interface system for controlling and performing various computer functions, with the aim of improving human-computer interaction. The main aim of human-computer interaction is to develop simpler ways for users to interact with computers. One of the main areas of research in human-machine interaction is hand gesture recognition. It makes interaction with machines intelligible and effortless. In this investigation, with the use of a camera and computer vision technology such as image segmentation and feature extraction, a technique is developed which can be used for computer control using hand gestures.
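The segmentation-plus-feature-extraction pipeline can be illustrated on a toy grayscale grid; a real system would run equivalent steps (threshold, keep the largest connected region, compute simple shape features) on camera frames with a library such as OpenCV. All names below are illustrative:

```python
from collections import deque

def segment_and_features(image, threshold):
    """Segment the largest bright region and extract simple features.

    image: 2D list of grayscale values.
    Returns (area, centroid_row, centroid_col) of the largest
    4-connected component above `threshold`, or zeros if none.
    """
    h, w = len(image), len(image[0])
    mask = [[image[r][c] > threshold for c in range(w)] for r in range(h)]
    seen = [[False] * w for _ in range(h)]
    best = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                # breadth-first flood fill of one connected component
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    if not best:
        return 0, 0.0, 0.0
    area = len(best)
    cy = sum(y for y, _ in best) / area
    cx = sum(x for _, x in best) / area
    return area, cy, cx
```

Features like area and centroid (tracked over time) are what a downstream gesture classifier or cursor-control mapping would consume.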

7 citations


Network Information
Related Topics (5)
- Feature (computer vision): 128.2K papers, 1.7M citations (84% related)
- Object detection: 46.1K papers, 1.3M citations (83% related)
- Feature extraction: 111.8K papers, 2.1M citations (82% related)
- Image segmentation: 79.6K papers, 1.8M citations (81% related)
- Convolutional neural network: 74.7K papers, 2M citations (80% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    26
2022    71
2021    30
2020    29
2019    46
2018    27