Topic
Sketch recognition
About: Sketch recognition is a research topic. Over its lifetime, 1611 publications have been published within this topic, receiving 40284 citations.
Papers published on a yearly basis
Papers
••
29 Sep 2014
TL;DR: This paper briefly reviews a number of classification methods used in automatic speech recognition systems and proposes a new back-end classifier, based on artificial life, that can be used in a speech recognition system.
Abstract: After years of research activity, the machine recognition performance of speech still does not match human performance. As speech recognition is a pattern recognition problem, classification is an important part of any speech recognition system. This paper briefly reviews a number of classification methods that have been used in automatic speech recognition systems and proposes a new back-end classifier that is based on artificial life. The paper describes how the proposed classifier can be used in a speech recognition system.
5 citations
••
01 Jul 2007
TL;DR: This chapter presents a vision-based face and gesture recognition system for human-robot interaction that uses the robot's eye cameras or CCD cameras to identify humans and recognize their gestures based on face and hand poses.
Abstract: This chapter presents a vision-based face and gesture recognition system for human-robot interaction. Using a subspace method, the face and predefined hand poses are classified from the three largest skin-like regions, which are segmented using the YIQ color representation system. In the subspace method we consider separate eigenspaces for each class or pose. The face is recognized using a pose-specific subspace method, and a gesture is recognized using a rule-based approach whenever the combination of three skin-like regions in a particular image frame satisfies a predefined condition. These gesture commands are sent to the robot through a TCP/IP wireless network for human-robot interaction. The effectiveness of this method has been demonstrated by interacting with an entertainment robot named AIBO and a humanoid robot, Robovie.

Human-robot symbiotic systems have been studied extensively in recent years, considering that robots will play an important role in the future welfare society [Ueno, 2001]. The use of intelligent robots encourages the view of the machine as a partner in communication rather than as a tool. In the near future, robots will interact closely with groups of humans in their everyday environment in fields such as entertainment, recreation, health care, and nursing. In human-human interaction, multiple communication modalities such as speech, gestures, and body movements are frequently used. Standard input methods, such as text input via the keyboard and pointer/location information from a mouse, do not provide natural, intuitive interaction between humans and robots. It is therefore essential to create models for natural and intuitive communication between humans and robots. Furthermore, for intuitive gesture-based interaction, the robot should understand the meaning of a gesture with respect to society and culture.
The ability to understand hand gestures will improve the naturalness and efficiency of human interaction with robots, and allow the user to communicate in complex tasks without using tedious sets of detailed instructions. This interactive system uses the robot's eye cameras or CCD cameras to identify humans and recognize their gestures based on face and hand poses. Vision-based face recognition systems have three major components: image processing or extracting important clues (face pose and position), tracking the facial features (relative position or motion of face and hand poses), and face recognition. Vision-based face recognition systems vary along a number of
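The YIQ-based skin segmentation step described above can be illustrated with the standard NTSC RGB-to-YIQ conversion. Note that the skin-tone threshold on the I channel below is a hypothetical illustration, not the chapter's actual parameters:

```python
def rgb_to_yiq(r, g, b):
    """Convert normalized RGB values (0-1) to YIQ using the NTSC matrix
    (same coefficients as Python's colorsys.rgb_to_yiq)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    i = 0.596 * r - 0.274 * g - 0.322 * b   # orange-blue chrominance
    q = 0.211 * r - 0.523 * g + 0.312 * b   # purple-green chrominance
    return y, i, q

def is_skin_like(r, g, b, i_range=(0.0, 0.3)):
    """Hypothetical skin test: skin tones tend to cluster at positive I
    values, so threshold the I channel. The range is illustrative only."""
    _, i, _ = rgb_to_yiq(r, g, b)
    return i_range[0] < i < i_range[1]
```

A segmentation pass would apply `is_skin_like` per pixel and then keep the three largest connected skin-like regions (face and two hands), as the chapter describes.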
5 citations
••
01 Jan 2017
TL;DR: This study proposes a robust fusion scheme, namely feature-level fusion, that uses deep convolutional neural networks for recognizing free-hand sketches, and develops a sketch recognition application for smartphones based on a client-server architecture.
Abstract: Understanding free-hand sketches with automated methods is a challenging task due to the diversity and abstract structure of sketches. In this study, we propose a robust fusion scheme, namely feature-level fusion, that uses deep convolutional neural networks (CNNs) for recognizing free-hand sketches, and we develop a sketch recognition application for smartphones based on a client-server architecture. We employ inter-layer CNN features, combined with a fusion operator, to capture different levels of abstraction in sketches. Our results on the TU-Berlin free-hand sketch benchmark dataset show that our proposed feature-level fusion scheme achieves a recognition accuracy of 69.175%. This result is promising when compared with the human recognition accuracy of 73.1% on the same dataset.
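The feature-level fusion idea, concatenating feature vectors drawn from different CNN layers before classification, can be sketched in miniature. The toy vectors and nearest-centroid classifier below are illustrative stand-ins, not the paper's actual network or fusion operator:

```python
# Minimal sketch of feature-level fusion: the two input vectors stand in
# for activations taken from two different CNN layers; concatenation is
# the fusion operator, and a nearest-centroid rule stands in for the
# classifier.

def fuse(features_a, features_b):
    """Feature-level fusion: concatenate feature vectors from two layers."""
    return features_a + features_b

def nearest_centroid(fused, centroids):
    """Return the label of the closest class centroid (squared Euclidean)."""
    def dist2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    return min(centroids, key=lambda label: dist2(fused, centroids[label]))
```

Usage: given hypothetical per-class centroids `{"cat": [...], "house": [...]}` computed over fused training features, `nearest_centroid(fuse(low_level, high_level), centroids)` yields a predicted sketch category. The point of fusion is that the concatenated vector carries both coarse and fine levels of abstraction.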
5 citations
••
07 Nov 2005
TL;DR: The new framework proposed in this paper offers users more sketching freedom by automatically grouping strokes and recognizing editing gestures as well as over-traced strokes, in the framework of a Bayesian network.
Abstract: Sketch recognition is an essential process for sketch understanding. The free drawing style of sketching makes it difficult to build a robust sketch recognition system that can support the imprecision and high variability present in sketches. This paper addresses these problems in the framework of a Bayesian network, which can readily represent uncertainty in the recognition process and make inferences based on partial evidence. To further improve recognition accuracy, context information is incorporated in the recognition process rather than identifying sketches in isolation. The new framework proposed in this paper offers users more sketching freedom by automatically grouping strokes and recognizing editing gestures as well as over-traced strokes.
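The core inference step, combining a prior over sketch symbols with whatever partial evidence the strokes provide, can be sketched with a naive-Bayes toy (a simplification of a full Bayesian network). The shape classes, feature names, and probabilities below are all hypothetical:

```python
def posterior(priors, likelihoods, evidence):
    """Bayes' rule: P(class | evidence) is proportional to
    P(evidence | class) * P(class). `likelihoods[c][e]` gives P(e | c);
    `evidence` is a list of observed stroke features, assumed conditionally
    independent given the class (the naive-Bayes simplification). Features
    not yet observed simply contribute nothing, which is how partial
    evidence is handled."""
    scores = {}
    for c, prior in priors.items():
        score = prior
        for e in evidence:
            score *= likelihoods[c].get(e, 1e-6)  # tiny prob for unseen features
        scores[c] = score
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}  # normalize to sum to 1
```

In this toy, context could be incorporated by replacing the flat `priors` with class probabilities conditioned on neighboring, already-recognized symbols, mirroring the paper's use of context rather than recognizing each sketch in isolation.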
5 citations
••
07 Jun 1992
TL;DR: The author discusses two complete neural network recognition systems, a character recognition system and a fingerprint classification system, both of which demonstrated state-of-the-art accuracy but need improvements to be commercially viable.
Abstract: The author discusses two complete neural network recognition systems: a character recognition system and a fingerprint classification system. The requirements for a total vision system must include the capability for image isolation, segmentation, and feature extraction, as well as recognition. The systems were developed on a massively parallel array processor, which was used to illustrate the importance of these higher-level functions. Both systems demonstrated state-of-the-art accuracy, but both need improvements to be commercially viable. The issue in the character recognition system is to provide this accuracy at a speed compatible with the commercial requirement of 1 page/s. This will require more sophisticated higher-level image parsing functions without loss of accuracy. The issue in fingerprint classification is the requirement for 99.7% accuracy at current speeds.
5 citations