Topic

Sketch recognition

About: Sketch recognition is a research topic. Over the lifetime, 1611 publications have been published within this topic receiving 40284 citations.


Papers
01 Jan 2013
TL;DR: The intuitive, natural character of hand gestures in HCI has been the driving force and motivation to develop an interaction device that can replace current unwieldy tools.
Abstract: The use of gestures in daily life as a natural form of human-human interaction has inspired researchers to simulate and exploit this ability in human-machine interaction. Such interaction is appealing: it could replace the tedious interaction modes of existing devices such as televisions, radios, and various home appliances, and would make virtual reality truly worthy of its name. This kind of interaction promises satisfying outcomes if applied systematically, and it lets the unadorned human hand convey the message to these devices, which is easier, more comfortable, and more desirable than communication requiring extra apparatus to deliver the message. With the rapid emergence of 3D applications and virtual environments in computer systems, the need for a new type of interaction device arises, because traditional devices such as the mouse, keyboard, and joystick become inefficient and cumbersome within these virtual environments; in other words, the evolution of user interfaces shapes the change in Human-Computer Interaction (HCI). The intuitive, natural character of hand gestures in HCI has been the driving force and motivation to develop an interaction device that can replace current unwieldy tools. This paper provides a survey of methods for analysing, modelling, and recognizing hand gestures in the context of HCI.

34 citations

Proceedings ArticleDOI
14 Jun 2020
TL;DR: This work presents Sketch-BERT, a model that learns Sketch Bidirectional Encoder Representations from Transformers, generalizing BERT to the sketch domain with novel components and pre-training algorithms, including newly designed sketch embedding networks and self-supervised learning of sketch gestalt.
Abstract: Previous research on sketches has often considered them in pixel format and leveraged CNN-based models for sketch understanding. Fundamentally, however, a sketch is stored as a sequence of data points, a vector-format representation, rather than a photo-realistic image of pixels. SketchRNN studied a generative neural representation for sketches in vector format using Long Short-Term Memory (LSTM) networks. Unfortunately, the representation learned by SketchRNN is primarily suited to generation tasks, rather than to recognition and retrieval of sketches. To this end, and inspired by the recent BERT model, we present a model that learns Sketch Bidirectional Encoder Representations from Transformers (Sketch-BERT). We generalize BERT to the sketch domain with novel components and pre-training algorithms, including newly designed sketch embedding networks and self-supervised learning of sketch gestalt. In particular, for the pre-training task, we present a novel Sketch Gestalt Model (SGM) to help train Sketch-BERT. Experimentally, we show that the learned representation of Sketch-BERT improves performance on the downstream tasks of sketch recognition, sketch retrieval, and sketch gestalt.
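The vector format the abstract describes can be made concrete with a small sketch (illustrative only; the (dx, dy, pen_state) encoding follows the SketchRNN convention, while the MASK placeholder value is an assumption, not the paper's actual embedding): a sketch is flattened into per-point offsets, and a BERT-style pre-training step hides a random subset of points for the model to reconstruct.

```python
import random

def strokes_to_sequence(strokes):
    """Flatten strokes (lists of absolute (x, y) points) into vector
    format: one (dx, dy, pen_state) triple per point, where pen_state
    is 1 at the last point of a stroke (pen lifts) and 0 otherwise."""
    seq, prev = [], (0.0, 0.0)
    for stroke in strokes:
        for i, (x, y) in enumerate(stroke):
            pen_up = 1 if i == len(stroke) - 1 else 0
            seq.append((x - prev[0], y - prev[1], pen_up))
            prev = (x, y)
    return seq

def mask_points(seq, mask_ratio=0.15, rng=None):
    """BERT-style masking: replace a random subset of points with a
    MASK token; a model would be trained to reconstruct the originals.
    The token value (0, 0, -1) is a hypothetical encoding choice."""
    rng = rng or random.Random(0)
    MASK = (0.0, 0.0, -1)
    n_mask = max(1, int(len(seq) * mask_ratio))
    idx = set(rng.sample(range(len(seq)), n_mask))
    masked = [MASK if i in idx else p for i, p in enumerate(seq)]
    return masked, sorted(idx)
```

For example, a one-stroke square `[[(0, 0), (1, 0), (1, 1), (0, 1)]]` becomes a four-triple sequence whose final triple has pen_state 1, and `mask_points` hides one of the four points for reconstruction.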

34 citations

Journal ArticleDOI
TL;DR: A new approach to static hand gesture recognition is proposed, in which both the feature vector and the neural network are tuned by a multi-objective evolutionary algorithm based on the Non-dominated Sorting Genetic Algorithm II (NSGA-II), achieving a good recognition rate at low computational cost.
Abstract: Hand gestures are an intuitive way for humans to interact with computers. They are becoming increasingly popular in several applications, such as smart houses, games, vehicle infotainment systems, kitchens and operating theaters. An effective human–computer interaction system should aim at both good recognition accuracy and speed. This paper proposes a new approach for static hand gesture recognition. A benchmark database with 36 gestures is used, containing variations in scale, illumination and rotation. Several common image descriptors, such as Fourier, Zernike moments, pseudo-Zernike moments, Hu moments, complex moments and Gabor features are comprehensively compared in terms of their respective accuracy and speed. Gesture recognition is undertaken by a multilayer perceptron which has a flexible structure and fast recognition. In order to achieve improved accuracy and minimize computational cost, both the feature vector and the neural network are tuned by a multi-objective evolutionary algorithm based on the Nondominated Sorting Genetic Algorithm II (NSGA-II). The proposed method is compared with state-of-the-art methods. A real-time gesture recognition system based on the proposed descriptor is constructed and evaluated. Experimental results show a good recognition rate, using a descriptor with low computational cost and reduced size.
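The core of the NSGA-II algorithm mentioned above is non-dominated sorting: candidate solutions are grouped into Pareto fronts over competing objectives, here recognition error versus descriptor cost. A minimal sketch, assuming lower-is-better objective tuples (this is the straightforward O(n^2) version, not NSGA-II's optimized book-keeping, and the candidate values below are hypothetical):

```python
def dominates(a, b):
    """a dominates b if a is no worse on every objective (lower is
    better) and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_sort(points):
    """Group objective tuples into successive Pareto fronts, as in the
    first phase of NSGA-II: front 0 holds the non-dominated points,
    front 1 the points dominated only by front 0, and so on."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Hypothetical candidates as (error_rate, feature_count) pairs:
candidates = [(0.10, 40), (0.12, 20), (0.10, 20), (0.20, 60)]
fronts = non_dominated_sort(candidates)
```

Here `(0.10, 20)` dominates every other candidate, so it alone forms the first front; NSGA-II would preferentially keep earlier fronts when selecting the next generation.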

34 citations

Proceedings ArticleDOI
01 Sep 2000
TL;DR: A recognition system is presented that classifies four kinds of human interaction: shaking hands, pointing at the opposite person, standing hand-in-hand, and an intermediate/transitional state between them, requiring no parsing procedure for sequential data.
Abstract: This paper presents a recognition system that classifies four kinds of human interactions: shaking hands, pointing at the opposite person, standing hand-in-hand, and an intermediate/transitional state between them. Our system achieves recognition by applying the K-nearest neighbor classifier to the parametric human-interaction model, which describes the interpersonal configuration with multiple features from gray scale images (i.e., binary blob, silhouette contour, and intensity distribution). Unlike the algorithms that use temporal information about motion, our system independently classifies each frame by estimating the relative poses of the interacting persons. The system provides a tool to detect the initiation and the termination of an interaction with no parsing procedure for sequential data. Experimental results are presented and illustrated.
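The per-frame scheme described above can be sketched with a plain K-nearest-neighbor vote (a minimal illustration; the two-dimensional feature vectors and labels below are hypothetical stand-ins for the paper's blob, contour, and intensity features):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify one frame's feature vector by majority vote among its
    k nearest labeled training vectors (Euclidean distance).
    `train` is a list of (feature_vector, label) pairs."""
    neighbors = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical labeled frames: 2-D features standing in for the
# paper's interpersonal-configuration descriptors.
train = [((0.0, 0.0), "hand-in-hand"), ((0.1, 0.0), "hand-in-hand"),
         ((1.0, 1.0), "shaking"), ((1.1, 0.9), "shaking"),
         ((0.9, 1.1), "shaking")]
```

Because each frame is classified independently, a change in the predicted label between consecutive frames directly marks the initiation or termination of an interaction, with no sequence parsing.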

34 citations

Journal ArticleDOI
TL;DR: This paper reviews and categorizes sketch-based modeling systems in four aspects: the input, the knowledge they use, the modeling approach, and the output, and discusses inherent challenges and open problems for future research.
Abstract: As 3D technology, including computer graphics, virtual reality and 3D printing, has developed rapidly in recent years, demand for 3D models has grown enormously. Traditional 3D modeling platforms such as Maya and ZBrush use "windows, icons, menus, pointers" (WIMP) interface paradigms for fine-grained control to construct detailed models. However, the modeling process can be tedious and frustrating, and thus too hard for a novice user or even a well-trained artist. Therefore, a more intuitive interface is needed. Sketching, an intuitive communication and modeling tool for human beings, has become the first choice of the modeling community. So far, various sketch-based modeling systems have been created and studied. In this paper, we attempt to show how these systems work and give a comprehensive survey. We review and categorize the systems in four aspects: the input, the knowledge they use, the modeling approach and the output. We also discuss inherent challenges and open problems for future research.

34 citations


Network Information
Related Topics (5)
Feature (computer vision)
128.2K papers, 1.7M citations
84% related
Object detection
46.1K papers, 1.3M citations
83% related
Feature extraction
111.8K papers, 2.1M citations
82% related
Image segmentation
79.6K papers, 1.8M citations
81% related
Convolutional neural network
74.7K papers, 2M citations
80% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  26
2022  71
2021  30
2020  29
2019  46
2018  27