scispace - formally typeset
Topic

Sketch recognition

About: Sketch recognition is a research topic. Over its lifetime, 1,611 publications have been published on this topic, receiving 40,284 citations.


Papers
Journal Article
TL;DR: Low complexity and acceptable performance are the most notable features of this method, allowing it to be easily implemented on mobile and battery-limited computing devices.
Abstract: In this research, we present a heuristic method for character recognition. For this purpose, a sketch is constructed from the image that contains the character to be recognized. This sketch contains the most important pixels of the image, which are representative of the original image. These points are the most probable points in a pixel-by-pixel matching of the image against the target image. Furthermore, a technique called gravity shifting is used to overcome the problem of elongation of characters. Combining the sketch and gravity techniques leads to a language-independent character recognition method. This method can be used independently for real-time applications or in combination with other classifiers as a feature extraction algorithm. Low complexity and acceptable performance are the most notable features of this method, allowing it to be easily implemented on mobile and battery-limited computing devices. Results show that in the best case 86% accuracy is obtained, and in the worst case 28% of recognized characters are accurate.
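The pipeline the abstract describes (keep only the most informative pixels as a "sketch", normalize position via a gravity shift, then match pixel by pixel) can be illustrated with a minimal sketch in Python. All function names, the darkest-pixel selection rule, and the centroid-to-centre shift below are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def extract_sketch(image, keep_fraction=0.1):
    """Keep only the darkest pixels as the sketch -- a stand-in for the
    paper's selection of the 'most important pixels' of the image."""
    flat = image.ravel()
    k = max(1, int(keep_fraction * flat.size))
    threshold = np.sort(flat)[k - 1]          # value of the k-th darkest pixel
    return (image <= threshold).astype(np.uint8)

def gravity_shift(mask):
    """Translate a binary sketch so its centre of mass sits at the image
    centre -- one plausible reading of 'gravity shifting', which makes the
    match invariant to where the character sits in the frame."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return mask
    h, w = mask.shape
    dy = int(round((h - 1) / 2 - ys.mean()))
    dx = int(round((w - 1) / 2 - xs.mean()))
    shifted = np.zeros_like(mask)
    ys2, xs2 = ys + dy, xs + dx
    keep = (ys2 >= 0) & (ys2 < h) & (xs2 >= 0) & (xs2 < w)
    shifted[ys2[keep], xs2[keep]] = 1
    return shifted

def match_score(sketch, template):
    """Pixel-by-pixel agreement (intersection over union) between two
    aligned binary sketches; 1.0 means a perfect match."""
    inter = np.logical_and(sketch, template).sum()
    union = np.logical_or(sketch, template).sum()
    return inter / max(1, union)
```

In this reading, recognition reduces to gravity-shifting both the input sketch and each stored character template, then picking the template with the highest match score.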
01 Jan 2005
TL;DR: An alternative recognition-system framework that can in the future support the analysis and identification of a face based on a sketch; by developing the method and utilizing facial components, a generative model of face sketches can be produced for future improvement.
Abstract: In this paper, we propose an alternative framework for a recognition system that can in the future support the analysis and identification of a face based on a sketch. Our approach differs from previous research in that it uses components of the face as objects to be recognized, in addition to the whole face sketch. A photograph is transformed into a sketch image using image processing steps, and the features are detected using geometry-based and prototype-based algorithms. Matching is performed in the same modality environment using a statistical classifier. The face database consists of the resulting sketch images and related features, including bitmap pictures produced during the training steps. By developing the method and utilizing the components, a generative model of face sketches can be produced and used for future improvement.
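The pipeline sketched in this abstract (photograph → sketch via image processing, geometry-based features, then matching in the same modality) can be outlined minimally in Python. The gradient-based edge map and the particular descriptors below are illustrative assumptions standing in for the authors' unspecified image-processing and feature-detection steps:

```python
import numpy as np

def photo_to_sketch(gray, threshold=0.25):
    """Turn a grayscale photo (float array in [0, 1]) into a binary line
    sketch by thresholding gradient magnitude -- a minimal stand-in for
    the paper's photograph-to-sketch transformation."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag = mag / mag.max()
    return (mag >= threshold).astype(np.uint8)

def geometric_features(mask):
    """A few simple geometry-based descriptors of a binary sketch:
    normalised centroid and ink density. Hypothetical stand-ins for the
    geometric features the abstract mentions."""
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return np.zeros(3)
    return np.array([ys.mean() / h, xs.mean() / w, len(ys) / (h * w)])

def match(features_a, features_b):
    """Nearest-neighbour style score between feature vectors;
    smaller distance means a better match."""
    return float(np.linalg.norm(features_a - features_b))
```

Because both the probe photograph and the gallery entries are reduced to sketches before feature extraction, the comparison happens in a single modality, which is the key idea the abstract emphasises.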
Journal ArticleDOI
TL;DR: Xiao et al. propose the DifferSketching dataset, which consists of 3,620 freehand multi-view sketches registered with their corresponding 3D objects under certain views.
Abstract: Multiple sketch datasets have been proposed to understand how people draw 3D objects. However, such datasets are often of small scale and cover a small set of objects or categories. In addition, these datasets contain freehand sketches mostly from expert users, making it difficult to compare the drawings by expert and novice users, while such comparisons are critical in informing more effective sketch-based interfaces for either user groups. These observations motivate us to analyze how differently people with and without adequate drawing skills sketch 3D objects. We invited 70 novice users and 38 expert users to sketch 136 3D objects, which were presented as 362 images rendered from multiple views. This leads to a new dataset of 3,620 freehand multi-view sketches, which are registered with their corresponding 3D objects under certain views. Our dataset is an order of magnitude larger than the existing datasets. We analyze the collected data at three levels, i.e., sketch-level, stroke-level, and pixel-level, under both spatial and temporal characteristics, and within and across groups of creators. We found that the drawings by professionals and novices show significant differences at stroke-level, both intrinsically and extrinsically. We demonstrate the usefulness of our dataset in two applications: (i) freehand-style sketch synthesis, and (ii) posing it as a potential benchmark for sketch-based 3D reconstruction. Our dataset and code are available at https://chufengxiao.github.io/DifferSketching/.
Journal ArticleDOI
TL;DR: In this paper, the authors outline coordination between humans and PCs through gesture recognition technology over the past few years and give insight into the negative factors that hinder the development of gesture recognition.
Abstract: Currently, gesture recognition is among the most rapidly advancing fields in image processing and artificial intelligence. Gesture processing is a procedure whereby gestures of the human body are recognized and used to control PCs and other intelligent devices. This research broadly outlines coordination between humans and PCs through gesture recognition technology over the past few years. Gesture recognition provides a simplified communication path between humans and PCs known as Human-Computer Interaction (HCI). This study seeks to show the significance of gesture recognition technology in recent years. In addition, it surveys the various ways gesture recognition technology can be applied in different domains, namely education, transport, healthcare, and entertainment, and elaborates on the pros and cons of each application. Lastly, it gives insight into the negative factors that hinder the development of gesture recognition technology. Research on gesture recognition mainly focuses on the visual aspect, and gestures provide an easy-to-understand recognition method. As a result, gesture recognition products have emerged and are widely used around the world. Users no longer need to touch anything: simply by waving their hands in the air, they can perform a series of complex operations that previously required touching the screen or using a mouse and keyboard.
Journal ArticleDOI
TL;DR: The authors provide a method to deploy the trained model on end devices by building an edge computing architecture using Kubernetes, and provide developers with a real-time gesture recognition component.
Abstract: As end devices have become ubiquitous in daily life, natural human-machine interfaces have become an important topic. Many researchers have proposed frameworks to improve the performance of dynamic hand gesture recognition, and some CNN models are widely used to increase its accuracy. However, most CNN models are not suitable for end devices, because image frames are captured continuously, resulting in lower hand gesture recognition accuracy. In addition, trained models need to be deployed efficiently on end devices. To solve these problems, this study proposes a dynamic hand gesture recognition framework for end devices. The authors provide a method (i.e., ModelOps) to deploy the trained model on end devices by building an edge computing architecture using Kubernetes, giving developers a real-time gesture recognition component. The experimental results show that the framework is suitable for end devices.

Network Information
Related Topics (5)
Feature (computer vision): 128.2K papers, 1.7M citations, 84% related
Object detection: 46.1K papers, 1.3M citations, 83% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image segmentation: 79.6K papers, 1.8M citations, 81% related
Convolutional neural network: 74.7K papers, 2M citations, 80% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  26
2022  71
2021  30
2020  29
2019  46
2018  27