scispace - formally typeset
Topic

Sketch recognition

About: Sketch recognition is a research topic. Over the lifetime, 1611 publications have been published within this topic receiving 40284 citations.


Papers
Proceedings ArticleDOI
03 Mar 2016
TL;DR: A new combination of algorithms is developed for the segmentation and zoning stages to achieve better accuracy and to overcome the drawbacks found in other available OCR algorithms.
Abstract: Handwritten character recognition is the task of enabling a machine to automatically recognize characters or scripts written in the user's language. Optical character recognition (OCR) has become one of the most successful applications of technology in the fields of pattern recognition and artificial intelligence. In this project, a scanned image is translated into machine-encoded text by means of optical character recognition. A handwritten English cursive word is scanned, and this image is fed into the computer, where it is recognized using a neural network and converted into the same word in equivalent printed characters. A new combination of algorithms is developed for the segmentation and zoning stages to achieve better accuracy and to overcome the drawbacks found in other available OCR algorithms.
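The zoning stage mentioned above can be illustrated with a minimal sketch (not the paper's actual algorithm, which is not specified in detail): divide a binary character image into a grid of zones and use the ink density of each zone as a feature vector to feed a classifier. The grid size and function names here are illustrative assumptions.

```python
import numpy as np

def zoning_features(char_img, grid=(4, 4)):
    """Split a binary character image into a grid of zones and
    return the ink (foreground-pixel) density of each zone."""
    h, w = char_img.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            zone = char_img[i * h // gh:(i + 1) * h // gh,
                            j * w // gw:(j + 1) * w // gw]
            feats.append(zone.mean())  # fraction of ink pixels in the zone
    return np.array(feats)

# Toy 8x8 "character": a vertical stroke down the middle
img = np.zeros((8, 8))
img[:, 3:5] = 1.0
fv = zoning_features(img, grid=(4, 4))
print(fv.shape)  # (16,) -- one density value per zone
```

A feature vector like this is position-sensitive but compact, which is why zoning is a common preprocessing step before a neural-network classifier.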

4 citations

Proceedings ArticleDOI
12 Oct 2020
TL;DR: This paper proposes to jointly predict the tactile saliency, depth map and semantic category of a sketch in an end-to-end learning-based framework, and to synthesize training data by leveraging a collection of 3D shapes with 3D tactile saliency information.
Abstract: In this paper, we aim to understand the functionality of 2D sketches by predicting how humans would interact with the objects depicted by sketches in real life. Given a 2D sketch, we learn to predict a tactile saliency map for it, which represents where humans would grasp, press, or touch the object depicted by the sketch. We hypothesize that understanding 3D structure and category of the sketched object would help such tactile saliency reasoning. We thus propose to jointly predict the tactile saliency, depth map and semantic category of a sketch in an end-to-end learning-based framework. To train our model, we propose to synthesize training data by leveraging a collection of 3D shapes with 3D tactile saliency information. Experiments show that our model can predict accurate and plausible tactile saliency maps for both synthetic and real sketches. In addition, we also demonstrate that our predicted tactile saliency is beneficial to sketch recognition and sketch-based 3D shape retrieval, and enables us to establish part-based functional correspondences among sketches.
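The joint-prediction idea can be sketched in miniature: a shared encoder feeds three task-specific heads, one per output (saliency, depth, category). The paper uses an end-to-end CNN; the toy dimensions, weight shapes, and single linear layer below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper's actual architecture is a CNN)
n_pixels, n_hidden, n_classes = 64, 32, 10   # e.g. an 8x8 sketch, 10 categories

# Shared encoder weights and three task-specific heads
W_enc = rng.normal(size=(n_hidden, n_pixels)) * 0.1
W_sal = rng.normal(size=(n_pixels, n_hidden)) * 0.1   # tactile-saliency head
W_dep = rng.normal(size=(n_pixels, n_hidden)) * 0.1   # depth-map head
W_cls = rng.normal(size=(n_classes, n_hidden)) * 0.1  # category head

def forward(sketch):
    """Shared features feed all three predictions jointly."""
    h = np.maximum(W_enc @ sketch, 0.0)            # shared ReLU features
    saliency = 1 / (1 + np.exp(-(W_sal @ h)))      # per-pixel saliency in [0, 1]
    depth = W_dep @ h                              # per-pixel depth (unbounded)
    logits = W_cls @ h
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                           # softmax over categories
    return saliency, depth, probs

sal, dep, cls_probs = forward(rng.normal(size=n_pixels))
# shapes: (64,), (64,); category probabilities sum to 1
```

Because all three heads share the encoder, gradients from the depth and category losses shape the same features the saliency head uses, which is the hypothesized benefit of joint training.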

4 citations

Journal ArticleDOI
TL;DR: SR-Sketch is a sketch creation tool that can act both as a front-end visual query module to visual information retrieval systems and as an aid tool for fast image composition.
Abstract: This article describes SR-Sketch, a sketch creation tool that can act both as a front-end visual query module to visual information retrieval systems and as an aid tool for fast image composition. The system allows the user to draw shapes on the computer screen using the mouse cursor. At any time during this process the user can query a shape database for similar shapes and then select the ones s/he thinks are more relevant. The system can then automatically align and replace user-drawn objects with the chosen database shapes in the user sketch. For any match between a sketch shape and a database shape, the application can provide a visual explanation of how and why two shapes are considered similar. Evaluation results show that the system achieves favorable results with respect to noise tolerance, speed and reliability. SR-Sketch is freely available for download and experimentation.
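The align-and-match step such a tool needs can be sketched as follows: normalize each outline for position and size, then score similarity by mean point-to-point distance. This is a generic shape-matching sketch, not SR-Sketch's actual algorithm; all names and the sampling scheme are assumptions.

```python
import numpy as np

def normalize(points):
    """Translate to the centroid and scale to unit RMS radius, so the
    comparison ignores where and how large the user drew the shape."""
    p = points - points.mean(axis=0)
    scale = np.sqrt((p ** 2).sum(axis=1).mean())
    return p / scale

def shape_distance(a, b):
    """Mean point-to-point distance between two normalized outlines
    sampled with the same number of points in the same order."""
    return np.linalg.norm(normalize(a) - normalize(b), axis=1).mean()

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])      # database shape
# User-drawn version: shifted, enlarged, and wobbly
drawn = 3 * circle + np.array([5.0, 2.0]) + 0.05 * rng.normal(size=circle.shape)
line = np.column_stack([t, 0.1 * t])                  # a dissimilar database shape

# The noisy circle matches the clean circle far better than the line
print(shape_distance(drawn, circle) < shape_distance(drawn, line))  # True
```

The pointwise distances also support the "visual explanation" idea: the per-point residuals show exactly where two shapes agree or differ.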

4 citations

Dissertation
01 Jan 2009
TL;DR: This thesis presents two methods for combining recognition systems and shows that combining several recognition systems based on different representations can improve the accuracy of existing recognition methods.
Abstract: Sketching is a common means of conveying, representing, and preserving information, and it has become a subject of research as a method for human-computer interaction, specifically in the area of computer-aided design. Digitally collected sketches contain both spatial and temporal information; additionally, they may contain a conceptual structure of shapes and subshapes. These multiple aspects suggest several ways of representing sketches, each with advantages and disadvantages for recognition. Most existing sketch recognition systems are based on a single representation and do not use all available information. We propose combining several representations and systems as a way to improve recognition accuracy. This thesis presents two methods for combining recognition systems. The first improves recognition by improving segmentation, while the second seeks to predict how well systems will recognize a given domain or symbol and combines their outputs accordingly. We show that combining several recognition systems based on different representations can improve the accuracy of existing recognition methods.
Thesis Supervisor: Randall Davis, Professor
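The second combination method, weighting each recognizer's output by how well it is expected to perform, can be sketched as a weighted vote. The recognizer names, confidences, and weights below are hypothetical; the thesis predicts such weights per domain or symbol rather than fixing them by hand.

```python
from collections import defaultdict

def combine(predictions, weights):
    """Weighted vote over several recognizers' (label, confidence) outputs.
    Each recognizer's vote is scaled by how much it is trusted."""
    scores = defaultdict(float)
    for (label, conf), w in zip(predictions, weights):
        scores[label] += w * conf
    return max(scores, key=scores.get)

# Three hypothetical recognizers disagree on a sketched symbol
preds = [("arrow", 0.9), ("rectangle", 0.6), ("rectangle", 0.7)]
weights = [0.5, 1.0, 1.0]       # the last two recognizers are trusted more
print(combine(preds, weights))  # rectangle
```

Even though the first recognizer is individually most confident, the weighted combination overrules it, which is the point of predicting per-system reliability before merging outputs.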

4 citations

01 Jan 2005
TL;DR: This paper focuses on models of the shapes of objects that are made up of fixed collections of sub-parts whose dimensions and spatial arrangement exhibit variation, and demonstrates how to use models learned in three dimensions for recognition of two-dimensional sketches of.
Abstract: Artifacts made by humans, such as items of furniture and houses, exhibit an enormous amount of variability in shape. In this paper, we concentrate on models of the shapes of objects that are made up of fixed collections of sub-parts whose dimensions and spatial arrangement exhibit variation. Our goals are: to learn these models from data and to use them for recognition. Our emphasis is on learning and recognition from three-dimensional data, to test the basic shape-modeling methodology. In this paper we also demonstrate how to use models learned in three dimensions for recognition of two-dimensional sketches of
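The idea of fixed collections of sub-parts whose dimensions vary can be sketched as a toy generative model: each part dimension is a Gaussian, and recognition picks the class model under which the observed dimensions are most likely. The class names, part names, and numbers below are invented for illustration; the paper learns such distributions from 3D data.

```python
import math

def gauss_logpdf(x, mu, sigma):
    """Log-density of a 1-D Gaussian."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

# Hypothetical class models: part dimension -> (mean, std), in meters
chair = {"seat_width": (0.45, 0.05), "leg_length": (0.45, 0.08), "back_height": (0.40, 0.10)}
table = {"seat_width": (1.20, 0.15), "leg_length": (0.75, 0.05), "back_height": (0.02, 0.02)}

def log_likelihood(model, obs):
    """Score observed part dimensions under a class model
    (parts assumed independent for this toy example)."""
    return sum(gauss_logpdf(obs[k], mu, sd) for k, (mu, sd) in model.items())

obs = {"seat_width": 0.50, "leg_length": 0.50, "back_height": 0.35}
best = max([("chair", chair), ("table", table)],
           key=lambda m: log_likelihood(m[1], obs))[0]
print(best)  # chair
```

A real model of this kind would also capture the spatial arrangement of the parts and correlations between dimensions, not just independent sizes.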

4 citations


Network Information
Related Topics (5)
Feature (computer vision): 128.2K papers, 1.7M citations, 84% related
Object detection: 46.1K papers, 1.3M citations, 83% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image segmentation: 79.6K papers, 1.8M citations, 81% related
Convolutional neural network: 74.7K papers, 2M citations, 80% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  26
2022  71
2021  30
2020  29
2019  46
2018  27