Topic

Sketch recognition

About: Sketch recognition is a research topic. Over its lifetime, 1,611 publications have been published on this topic, receiving 40,284 citations.


Papers
Journal ArticleDOI
TL;DR: A multi-layered architecture for sketch-based interaction within virtual environments is presented, focused on table-like projection systems as human-centered output devices, with the aim of making sketching an integral part of the next-generation human–computer interface.

82 citations

Proceedings ArticleDOI
01 Jan 2013
TL;DR: This work presents a method for the representation and matching of sketches by exploiting not only local features but also global structures of sketches, through a star graph based ensemble matching strategy, and shows that by encapsulating holistic structure matching and learned bag-of-features models into a single framework, notable recognition performance improvement can be observed.
Abstract: Sketch recognition aims to automatically classify human hand sketches of objects into known categories. This has become an increasingly desirable capability due to recent advances in human-computer interaction on portable devices. The problem is nontrivial because of the sparse and abstract nature of hand drawings as compared to photographic images of objects, compounded by a highly variable degree of detail in human sketches. To this end, we present a method for the representation and matching of sketches by exploiting not only local features but also global structures of sketches, through a star-graph-based ensemble matching strategy. Different local feature representations were evaluated using the star graph model to demonstrate the effectiveness of the ensemble matching of structured features. We further show that by encapsulating holistic structure matching and learned bag-of-features models into a single framework, a notable recognition performance improvement over the state of the art can be observed. Extensive comparative experiments were carried out using the largest sketch dataset currently available, released by Eitz et al. [15], with over 20,000 sketches of 250 object categories generated by AMT (Amazon Mechanical Turk) crowd-sourcing.

80 citations
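As a rough illustration of the idea in the entry above, the following minimal Python sketch combines a learned bag-of-features histogram with a simple star-graph structural term when scoring a match between two hand-drawn sketches. The toy orientation-histogram descriptor, the KMeans codebook size, the histogram-intersection similarity, and the weight `alpha` are all assumptions made for illustration; they are not the paper's actual features or matching pipeline.

```python
# Hypothetical sketch: bag-of-features + star-graph structural matching for sketches.
import numpy as np
from sklearn.cluster import KMeans

def local_descriptors(points, patch=16):
    """Toy local descriptor: orientation histogram of stroke segments near sampled points."""
    descs = []
    for i in range(0, len(points) - 1, 4):          # subsample keypoints along the stroke points
        seg = points[i:i + patch]
        d = np.diff(seg, axis=0)
        angles = np.arctan2(d[:, 1], d[:, 0])
        hist, _ = np.histogram(angles, bins=8, range=(-np.pi, np.pi))
        descs.append(hist / (hist.sum() + 1e-8))
    return np.array(descs), points[::4][:len(descs)]

def bag_of_features(descs, codebook):
    """Quantize local descriptors against a learned codebook; return a normalized histogram."""
    words = codebook.predict(descs)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)

def star_graph(keypoints):
    """Star graph: every keypoint connects to the sketch centroid; edges are offset vectors."""
    return keypoints - keypoints.mean(axis=0)

def match_score(descs_a, kp_a, descs_b, kp_b, codebook, alpha=0.5):
    """Blend holistic BoF similarity with a crude structural term comparing star-edge layouts."""
    h_a, h_b = bag_of_features(descs_a, codebook), bag_of_features(descs_b, codebook)
    bof_sim = np.minimum(h_a, h_b).sum()            # histogram intersection
    e_a, e_b = star_graph(kp_a), star_graph(kp_b)
    n = min(len(e_a), len(e_b))
    struct_sim = 1.0 / (1.0 + np.linalg.norm(e_a[:n] - e_b[:n], axis=1).mean())
    return alpha * bof_sim + (1 - alpha) * struct_sim

# Usage with random stand-in stroke points; a real system would train the codebook on many sketches.
rng = np.random.default_rng(0)
sketch_a = rng.uniform(0, 256, size=(200, 2))
sketch_b = rng.uniform(0, 256, size=(200, 2))
descs_a, kp_a = local_descriptors(sketch_a)
descs_b, kp_b = local_descriptors(sketch_b)
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack([descs_a, descs_b]))
print("match score:", match_score(descs_a, kp_a, descs_b, kp_b, codebook))
```

Swapping in stronger local descriptors only changes `local_descriptors`; the bag-of-features term and the structural term stay the same, which is the appeal of an ensemble formulation of this kind.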

Proceedings ArticleDOI
19 May 2015
TL;DR: In the proposed algorithm, a deep-learning-based facial representation is first learned using a large database of face photos, and the representation is then updated using a small problem-specific training database.
Abstract: Sketch recognition is one of the integral components used by law enforcement agencies in solving crime. In the recent past, software-generated composite sketches have been preferred because they are more consistent and faster to construct than hand-drawn sketches. Matching these composite sketches to face photographs is a complex task because the composite sketches are drawn from the witness's description and lack the minute details that are present in photographs. This paper presents a novel algorithm for matching composite sketches with photographs using transfer learning with a deep learning representation. In the proposed algorithm, a deep-learning-based facial representation is first learned using a large database of face photos, and the representation is then updated using a small problem-specific training database. Experiments are performed on the extended PRIP database, and it is observed that the proposed algorithm outperforms a recently proposed approach and a commercial face recognition system.

79 citations
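The transfer-learning recipe described in the entry above (learn a representation on a large photo database, then update it on a small problem-specific set) can be sketched, under loose assumptions, with a tiny PyTorch example. The small CNN, the random stand-in tensors, the frozen-layer choice, and the learning rates are hypothetical; the paper's actual deep architecture and the extended PRIP data are not reproduced here.

```python
# Hypothetical two-stage transfer learning: pre-train on a large set, fine-tune on a small one.
import torch
import torch.nn as nn

class FaceEncoder(nn.Module):
    """Small stand-in encoder; a real system would use a much deeper network."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train(model, images, labels, epochs, lr, params=None):
    """One training loop reused for both the pre-training and fine-tuning stages."""
    opt = torch.optim.Adam(params if params is not None else model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    return loss.item()

# Stage 1: learn the representation on a large photo database (random stand-ins here).
big_photos, big_labels = torch.randn(256, 1, 32, 32), torch.randint(0, 50, (256,))
model = FaceEncoder(n_classes=50)
train(model, big_photos, big_labels, epochs=5, lr=1e-3)

# Stage 2: update the representation on the small problem-specific set:
# freeze the earliest layers, replace the head, and fine-tune the rest at a smaller learning rate.
small_images, small_labels = torch.randn(32, 1, 32, 32), torch.randint(0, 10, (32,))
for p in model.features[:3].parameters():      # keep the earliest filters fixed
    p.requires_grad = False
model.head = nn.Linear(32 * 4 * 4, 10)         # new head for the target identities
tunable = [p for p in model.parameters() if p.requires_grad]
print("fine-tune loss:", train(model, small_images, small_labels, epochs=5, lr=1e-4, params=tunable))
```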

Proceedings ArticleDOI
13 Feb 2011
TL;DR: A new sketch recognition framework for chemical structure drawings that combines multiple levels of visual features using a jointly trained conditional random field, improving accuracy and robustness, and a novel learning-based approach to corner detection that achieves nearly perfect accuracy in the domain.
Abstract: We describe a new sketch recognition framework for chemical structure drawings that combines multiple levels of visual features using a jointly trained conditional random field. This joint model of appearance at different levels of detail makes our framework less sensitive to noise and drawing variations, improving accuracy and robustness. In addition, we present a novel learning-based approach to corner detection that achieves nearly perfect accuracy in our domain. The result is a recognizer that is better able to handle the wide range of drawing styles found in messy freehand sketches. Our system handles both graphics and text, producing a complete molecular structure as output. It works in real time, providing visual feedback about the recognition progress. On a dataset of chemical drawings, our system achieved an accuracy rate of 97.4%, an improvement over the best results reported in the literature. A preliminary user study also showed that participants were, on average, over twice as fast using our sketch-based system as using ChemDraw, a popular CAD-based tool for authoring chemical diagrams. This was the case even though most of the users had years of experience with ChemDraw and little or no experience with Tablet PCs.

79 citations
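One piece of the pipeline above that translates naturally into a short example is learning-based corner detection on pen strokes. The sketch below trains a classifier on simple per-point features (turning angle and local segment length) using synthetic L-shaped strokes; the feature set, the logistic-regression model, and the synthetic data are illustrative assumptions, not the paper's actual detector.

```python
# Hypothetical learning-based corner detection: classify stroke points as corner / not-corner.
import numpy as np
from sklearn.linear_model import LogisticRegression

def point_features(stroke, w=3):
    """Per-point features: absolute turning angle and local segment length in a small window."""
    feats = []
    for i in range(len(stroke)):
        a, b, c = stroke[max(i - w, 0)], stroke[i], stroke[min(i + w, len(stroke) - 1)]
        v1, v2 = b - a, c - b
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
        turn = np.arccos(np.clip(cosang, -1.0, 1.0))           # sharp turn -> likely corner
        speed = np.linalg.norm(v1) + np.linalg.norm(v2)        # the pen tends to slow at corners
        feats.append([turn, speed])
    return np.array(feats)

def synthetic_L_stroke(n=40, noise=0.5, seed=0):
    """An L-shaped stroke with one true corner at its midpoint, plus jitter."""
    rng = np.random.default_rng(seed)
    down = np.stack([np.zeros(n // 2), np.linspace(0, 50, n // 2)], axis=1)
    right = np.stack([np.linspace(0, 50, n // 2), np.full(n // 2, 50.0)], axis=1)
    stroke = np.vstack([down, right]) + rng.normal(0, noise, (n, 2))
    labels = np.zeros(n, dtype=int)
    labels[n // 2 - 1 : n // 2 + 1] = 1                        # points around the bend are corners
    return stroke, labels

# Train on a few jittered strokes, then predict corners on a new one.
X, y = [], []
for s in range(5):
    stroke, labels = synthetic_L_stroke(seed=s)
    X.append(point_features(stroke)); y.append(labels)
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(np.vstack(X), np.concatenate(y))
test_stroke, _ = synthetic_L_stroke(seed=99)
print("predicted corner indices:", np.where(clf.predict(point_features(test_stroke)) == 1)[0])
```

In a full system, per-point corner labels of this kind would feed into the segment-level and symbol-level features that a jointly trained CRF then combines.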

01 Jan 2002
TL;DR: This work presents an architecture to support the development of robust recognition systems across multiple domains that maintains a separation between low-level shape information and high-level domain-specific context information, but uses the two sources of information together to improve recognition accuracy.
Abstract: People use sketches to express and record their ideas in many domains, including mechanical engineering, software design, and information architecture. Unfortunately, most computer programs cannot interpret free-hand sketches; designers transfer their sketches into computer design tools through menu-based interfaces. The few existing sketch recognition systems either tightly constrain the user’s drawing style or are fragile and difficult to construct. In previous work we found that domain knowledge can aid recognition. Here we present an architecture to support the development of robust recognition systems across multiple domains. Our architecture maintains a separation between low-level shape information and high-level domain-specific context information, but uses the two sources of information together to improve recognition accuracy.

78 citations
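The separation argued for in the entry above, between low-level shape information and high-level domain-specific context, can be caricatured in a few lines of Python. The class names, the stubbed geometric scores, and the toy "circuit" domain below are hypothetical; they only illustrate how a context layer can re-rank candidates proposed by a generic shape recognizer.

```python
# Hypothetical layered recognizer: domain-independent shape scores, re-ranked by domain context.
from dataclasses import dataclass

@dataclass
class Candidate:
    shape: str        # low-level label, e.g. "circle", "rectangle", "line"
    score: float      # confidence from geometric fit alone

class LowLevelRecognizer:
    """Domain-independent layer: rank shapes purely by geometric evidence (stubbed here)."""
    def recognize(self, stroke_points):
        return [Candidate("circle", 0.55), Candidate("rectangle", 0.40), Candidate("line", 0.05)]

class DomainContext:
    """Domain-specific layer: adjust shape scores using knowledge of what the domain expects."""
    def __init__(self, expected_shapes):
        self.expected = expected_shapes          # e.g. symbols legal in a circuit diagram

    def rerank(self, candidates):
        boosted = [
            Candidate(c.shape, c.score * (1.5 if c.shape in self.expected else 0.5))
            for c in candidates
        ]
        return sorted(boosted, key=lambda c: c.score, reverse=True)

# The two layers stay separate but cooperate: context can overturn the purely geometric ranking.
low_level = LowLevelRecognizer()
circuit_context = DomainContext(expected_shapes={"rectangle", "line"})   # toy domain knowledge
candidates = low_level.recognize(stroke_points=[])                       # stub stroke input
best = circuit_context.rerank(candidates)[0]
print("best interpretation:", best.shape, round(best.score, 2))
```

Because the context layer is the only domain-specific part of this toy design, retargeting the recognizer to a new domain amounts to supplying a different DomainContext.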


Network Information
Related Topics (5)
Feature (computer vision): 128.2K papers, 1.7M citations, 84% related
Object detection: 46.1K papers, 1.3M citations, 83% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image segmentation: 79.6K papers, 1.8M citations, 81% related
Convolutional neural network: 74.7K papers, 2M citations, 80% related
Performance Metrics
No. of papers in the topic in previous years:
Year: Papers
2023: 26
2022: 71
2021: 30
2020: 29
2019: 46
2018: 27