
Showing papers on "Sketch recognition published in 2020"


Proceedings ArticleDOI
14 Jun 2020
TL;DR: This work presents Sketch-BERT, a model that learns Sketch Bidirectional Encoder Representations from Transformers, generalizing BERT to the sketch domain with novel components and pre-training algorithms, including newly designed sketch embedding networks and self-supervised learning of sketch gestalt.
Abstract: Previous research on sketches often considered sketches in pixel format and leveraged CNN-based models for sketch understanding. Fundamentally, however, a sketch is stored as a sequence of data points, a vector-format representation, rather than a photo-realistic image of pixels. SketchRNN studied a generative neural representation for vector-format sketches using Long Short-Term Memory (LSTM) networks. Unfortunately, the representation learned by SketchRNN is primarily suited to generation tasks, rather than to recognition and retrieval of sketches. To this end, and inspired by the recent BERT model, we present a model for learning Sketch Bidirectional Encoder Representations from Transformers (Sketch-BERT). We generalize BERT to the sketch domain with novel components and pre-training algorithms, including newly designed sketch embedding networks and self-supervised learning of sketch gestalt. In particular, for the pre-training task, we present a novel Sketch Gestalt Model (SGM) to help train Sketch-BERT. Experimentally, we show that the learned representation of Sketch-BERT improves the performance of the downstream tasks of sketch recognition, sketch retrieval, and sketch gestalt.
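The vector format the abstract contrasts with pixel images can be made concrete with a small sketch. The following is a minimal, hypothetical illustration of a sketch as a (dx, dy, pen-state) point sequence with BERT-style random masking; the masking scheme is an assumption in the spirit of self-supervised pretraining, not the paper's exact sketch-gestalt objective:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch: 10 points, each (dx, dy, pen_state)
sketch = np.column_stack([
    rng.normal(size=(10, 2)),          # pen offsets dx, dy
    rng.integers(0, 2, size=(10, 1)),  # pen-down / pen-up state
]).astype(np.float32)

def mask_points(seq, mask_ratio=0.3, rng=rng):
    """Zero out a random subset of points; the model must reconstruct them."""
    n = len(seq)
    n_mask = max(1, int(n * mask_ratio))
    idx = rng.choice(n, size=n_mask, replace=False)
    masked = seq.copy()
    masked[idx] = 0.0            # masked positions fed to the encoder
    return masked, idx           # idx selects the reconstruction targets

masked, target_idx = mask_points(sketch)
print(masked.shape, len(target_idx))  # (10, 3) 3
```

A Transformer encoder would consume `masked` and be trained to predict the original points at `target_idx`.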

34 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel architecture, named Hybrid CNN, composed of A-Net and S-Net, which describe appearance information and shape information, respectively, and demonstrates that the Hybrid CNN achieves competitive accuracy compared with state-of-the-art methods.

34 citations


Proceedings ArticleDOI
Jia Li1, Nan Gao1, Tong Shen, Wei Zhang, Tao Mei, Hui Ren1 
12 Oct 2020
TL;DR: This work proposes a new challenging task sketch enhancement (SE) defined in an ill-posed space, i.e. enhancing a non-professional sketch (NPS) to a professional sketch (PS), which is a creative generation task different from sketch abstraction, sketch completion and sketch variation.
Abstract: Human free-hand sketches have been studied in various fields, including sketch recognition, synthesis, and sketch-based image retrieval. We propose a new and challenging task, sketch enhancement (SE), defined in an ill-posed space: enhancing a non-professional sketch (NPS) to a professional sketch (PS), which is a creative generation task distinct from sketch abstraction, sketch completion, and sketch variation. For the first time, we release a database of NPS paired with PS for anime characters. We cast sketch enhancement as an image-to-image translation problem by exploiting the relationship of the sketch domain to corresponding dense or sparse pixel domains. Specifically, we explore three different routines based on the conditional generative adversarial network (cGAN): Sketch-Sketch (SS), Sketch-Colorization-Sketch (SCS), and Sketch-Abstraction-Sketch (SAS). SS is a one-stage model that directly maps NPS to PS, while SCS and SAS are two-stage models in which auxiliary inputs, grayscale parsing and shape parsing, are involved. Multiple metrics are used to evaluate the performance of the models in both the sketch domain and other low-level feature domains. With quantitative and qualitative analysis of the experiments, we establish solid baselines which, we hope, will encourage more research on this task. Our dataset is publicly available via https://github.com/LCXCUC/SketchMan2020.

9 citations


Proceedings ArticleDOI
06 Jul 2020
TL;DR: This paper proposes a novel graph representation specifically designed for sketches, which follows the inherent hierarchical relationship (segment-stroke-sketch) of sketching elements, and introduces a joint network that encapsulates both the structural and temporal traits of sketches for sketch recognition, termed S3Net.
Abstract: Sketches are distinctly different from photos. They are highly abstract and exhibit a severe lack of visual cues. Prior works have therefore explored additional traits unique to sketches, such as stroke ordering, to help recognition. In this paper, we pioneer the study of the role of structure in sketches for the task of sketch recognition. In particular, we propose a novel graph representation specifically designed for sketches, which follows the inherent hierarchical relationship (segment-stroke-sketch) of sketching elements. Conforming to this hierarchy, we introduce a joint network that encapsulates both the structural and temporal traits of sketches for sketch recognition, termed S3Net. S3Net employs a recurrent neural network (RNN) to extract segment-level features, followed by a graph convolutional network (GCN) to aggregate them into sketch-level features. The RNN first encodes temporal cues in sketches, and its outputs are used as node embeddings to construct a hierarchical sketch-graph. The GCN module then takes in this sketch-graph to produce a structure-aware embedding for sketches. Extensive experiments on the QuickDraw dataset exhibit superior performance over the state of the art, surpassing it by over 4%. Ablative studies further demonstrate the effectiveness of the proposed structural graph for both inter-class and intra-class feature discrimination. Code is available at: https://github.com/yanglan0225/s3net.
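The segment-stroke-sketch hierarchy can be sketched as an adjacency matrix plus one graph-convolution step. The node counts, edges, and mean-aggregation rule below are illustrative assumptions, not S3Net's exact formulation:

```python
import numpy as np

# Hypothetical hierarchy: segments link to their parent stroke, strokes link to
# a single sketch node, and one GCN layer averages neighbor features.
n_seg, n_stroke = 4, 2
n = n_seg + n_stroke + 1                 # 4 segments + 2 strokes + 1 sketch node
A = np.eye(n)                            # self-loops
parent_stroke = [4, 4, 5, 5]             # segments 0,1 -> stroke 4; 2,3 -> stroke 5
for seg, stroke in enumerate(parent_stroke):
    A[seg, stroke] = A[stroke, seg] = 1.0
for stroke in (4, 5):                    # strokes -> sketch node 6
    A[stroke, 6] = A[6, stroke] = 1.0

H = np.random.default_rng(1).normal(size=(n, 8))   # e.g. RNN segment embeddings
W = np.random.default_rng(2).normal(size=(8, 8))
H_next = (A / A.sum(axis=1, keepdims=True)) @ H @ W  # mean-aggregate + project
print(H_next.shape)  # (7, 8)
```

Stacking such layers lets the sketch node (row 6) accumulate a structure-aware embedding of the whole drawing.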

8 citations


Posted Content
TL;DR: In this article, a model of learning Sketch Bidirectional Encoder Representation from Transformer (Sketch-BERT) was proposed to improve the performance of the downstream tasks of sketch recognition, sketch retrieval, and sketch gestalt.
Abstract: Previous researches of sketches often considered sketches in pixel format and leveraged CNN based models in the sketch understanding. Fundamentally, a sketch is stored as a sequence of data points, a vector format representation, rather than the photo-realistic image of pixels. SketchRNN studied a generative neural representation for sketches of vector format by Long Short Term Memory networks (LSTM). Unfortunately, the representation learned by SketchRNN is primarily for the generation tasks, rather than the other tasks of recognition and retrieval of sketches. To this end and inspired by the recent BERT model, we present a model of learning Sketch Bidirectional Encoder Representation from Transformer (Sketch-BERT). We generalize BERT to sketch domain, with the novel proposed components and pre-training algorithms, including the newly designed sketch embedding networks, and the self-supervised learning of sketch gestalt. Particularly, towards the pre-training task, we present a novel Sketch Gestalt Model (SGM) to help train the Sketch-BERT. Experimentally, we show that the learned representation of Sketch-BERT can help and improve the performance of the downstream tasks of sketch recognition, sketch retrieval, and sketch gestalt.

7 citations


Proceedings ArticleDOI
26 Jun 2020
TL;DR: A novel sketch recognition model based on Convolutional Neural Networks is proposed which outperformed the related work on the same dataset and revealed the model’s efficiency in terms of predicting the classes of the given sketches.
Abstract: Deep neural networks have been widely used for visual recognition tasks on real images, as they have proven their efficiency. Unlike real images, sketches exhibit a high level of abstraction: they lack the rich features that real images contain, such as varied colors, backgrounds, and environmental detail. Despite these shortcomings, and despite being drawn with just a few strokes, sketches still convey an appropriate level of meaning. The efficiency of deep neural networks on sketch recognition has been relatively less studied compared to visual recognition of real images. To experiment with the efficiency of deep neural networks on sketch recognition, a novel sketch recognition model based on Convolutional Neural Networks is proposed in this study. The proposed model consists of 21 layers and was tuned in an automated manner to find the best-optimized configuration. To reveal the proposed model's efficiency in predicting the classes of given sketches, the model was evaluated on a gold-standard sketch dataset, namely Quick, Draw!. According to the experimental results, the proposed model's accuracy was as high as 89.53%, which outperformed related work on the same dataset. The key findings obtained during the experiments are discussed to shed light on future studies.

7 citations


Proceedings ArticleDOI
17 Mar 2020
TL;DR: This work created Syn, a synthetic dataset containing 125,000 lo-fi sketches, and used it to train a UI element detector, Meta-Morph, to support future research on UI element sketch detection and automating prototype fidelity transformation.
Abstract: User Interface design is an iterative process that progresses through low-, medium-, and high-fidelity prototypes. A few research projects use deep learning to automate this process by transforming low fidelity (lo-fi) sketches into front-end code. However, these research projects lack a large scale dataset of lo-fi sketches to train detection models. As a solution, we created Syn, a synthetic dataset containing 125,000 lo-fi sketches. These lo-fi sketches were synthetically generated using our UISketch dataset containing 5,917 sketches of 19 UI elements drawn by 350 participants. To realize the usage of Syn, we used it to train a UI element detector, Meta-Morph. It detects UI elements from a lo-fi sketch with 84.9% mAP and 72.7% AR. This work aims to support future research on UI element sketch detection and automating prototype fidelity transformation.
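The mAP and AR figures quoted above are computed from intersection-over-union (IoU) matches between predicted and ground-truth boxes. A minimal IoU helper (the standard definition, not tied to Meta-Morph's code) looks like:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 = 0.14285714285714285
```

A detection counts as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), and mAP/AR aggregate those decisions across classes and thresholds.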

6 citations


Journal ArticleDOI
TL;DR: The proposed Teach Machine to Learn (TML), a few-shot learning model for hand-drawn multi-symbol sketch recognition, outperforms currently popular image-based deep models in recognition accuracy and is capable of continuously learning new concepts, even in one-shot settings.
Abstract: The ability to learn sequentially from few examples and re-utilize previous knowledge is an important milestone on the path to artificial general intelligence. In this paper, we propose Teach Machine to Learn (TML), a few-shot learning model for hand-drawn multi-symbol sketch recognition. The model decomposes a multi-symbol sketch into stroke primitives and then explains the observed sequences under a Bayesian criterion. A Bidirectional Long Short-Term Memory (BiLSTM) encoder is employed for encoding stroke primitives. Meanwhile, a probabilistic Hidden Markov Model (HMM) is constructed for complete sketch inference and recognition. The challenging task of hand-drawn multi-symbol sketch recognition is evaluated on two public datasets. The comparative results indicate that the proposed method outperforms currently popular image-based deep models in recognition accuracy. Furthermore, our method is capable of continuously learning new concepts, even in one-shot settings. The code is available at https://github.com/chongyupan/Teach-Machine-to-Learn.
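The HMM inference step can be illustrated with a tiny Viterbi decoder over stroke-primitive observations. The two-state model and all probabilities below are toy assumptions; TML's HMM is built over its own BiLSTM stroke encodings:

```python
import numpy as np

states = ["circle", "line"]
log_pi = np.log([0.6, 0.4])                      # initial state probabilities
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])         # state transitions
log_B = np.log([[0.9, 0.1], [0.2, 0.8]])         # P(observation | state)
obs = [0, 0, 1]                                  # observed primitive indices

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely state sequence for a sequence of observation indices."""
    T, N = len(obs), len(log_pi)
    delta = np.zeros((T, N)); psi = np.zeros((T, N), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                # backtrack best predecessors
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

print([states[s] for s in viterbi(obs, log_pi, log_A, log_B)])
# ['circle', 'circle', 'line']
```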

5 citations


Proceedings ArticleDOI
01 Feb 2020
TL;DR: A method to retrieve facial photos of a potential suspect using a sketch as a query; the proposed model performs better than state-of-the-art methods on forensic sketch datasets.
Abstract: Faces are the most common biometric used to identify a person. Every person has distinct facial visual attributes that discriminate them from others. Law enforcement agencies use the face as key evidence to identify suspects involved in unlawful activities. When a photograph of the suspect is unavailable, sketches are used to apprehend the suspect. A sketch is a rendering of the visual description given by an onlooker. A hand-drawn sketch produced by a sketch artist is inherently uncertain: it depends on the description, observation, and memory of the eyewitness. This uncertainty in facial visual attributes is generally ignored by existing methods, and face shape and texture in a sketch differ from the visual features of a facial photo. In this paper, we propose a method to retrieve facial photos of a potential suspect using the sketch as a query. We use local facial visual attributes to identify facial features. First, face key points are identified using an 81-point facial landmark detector. Then, local facial attributes are extracted and features are computed. We utilize Bayesian classification to retrieve the mugshot images. We evaluated the proposed method on a forensic dataset and compared it with state-of-the-art methods. The experimental results show that the proposed model performs better than state-of-the-art methods on forensic sketch datasets.

5 citations


Journal ArticleDOI
TL;DR: This work uses sketches represented as a sequence of strokes, i.e., as vector images, to capture the long-term temporal dependencies in hand-drawn sketches and address machines' ability to recognize human-drawn sketches.
Abstract: For the past few decades, machines have replaced humans in several disciplines. However, machine cognition still lags behind human capabilities. We address machines' ability to recognize human-drawn sketches in this work. Visual representations such as sketches have long been a medium of communication for humans. For artificially intelligent systems to immerse effectively in interactive environments, machines must understand such notations. The abstract nature and varied artistic styling of these sketches make automatic recognition of drawings more challenging than other areas of image classification. In this paper, we use sketches represented as a sequence of strokes, i.e., as vector images, to effectively capture the long-term temporal dependencies in hand-drawn sketches. The proposed approach combines the self-attention capabilities of Transformers with the long-term temporal dependencies captured by Temporal Convolution Networks (TCN) for sketch recognition. The confidence scores obtained from the two techniques are combined using a triangular norm (T-norm). Attention heat-maps are plotted to isolate the discriminating parts of a sketch that contribute to sketch classification. Extensive quantitative and qualitative evaluation confirms that the proposed network performs favorably against state-of-the-art techniques.
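The T-norm fusion of the two confidence scores can be illustrated with the product T-norm, one common choice of triangular norm (the abstract does not specify which T-norm is used, so treat that choice, and the score vectors, as assumptions):

```python
import numpy as np

def tnorm_fuse(p_transformer, p_tcn):
    """Combine two classifiers' class confidences with the product T-norm."""
    fused = np.asarray(p_transformer) * np.asarray(p_tcn)  # elementwise T-norm
    return fused / fused.sum()                              # renormalize

p1 = np.array([0.6, 0.3, 0.1])   # hypothetical Transformer scores
p2 = np.array([0.5, 0.4, 0.1])   # hypothetical TCN scores
print(tnorm_fuse(p1, p2))
```

The product T-norm rewards classes on which both models agree; other T-norms (e.g. minimum) implement different conjunction semantics.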

5 citations


Journal ArticleDOI
TL;DR: This paper explores the recognition performance of different feature layers of a pretrained VGG-Face model and, to accelerate matching, adopts the ball-tree algorithm to search for the nearest neighbors of query sketches among gallery photos.
Abstract: Forensic face sketch-photo recognition attracts considerable interest from law enforcement agencies. This paper proposes a new face sketch-photo recognition method based on VGG deep features and the ball-tree search algorithm. We explore the recognition performance of different feature layers of the pretrained VGG-Face model. In addition, to accelerate matching, the ball-tree algorithm is adopted to search for the nearest neighbors of query sketches among gallery photos. Experimental results on the CUFS and IIIT-D datasets demonstrate the superiority of the proposed method over existing algorithms.
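The ball-tree search over gallery features can be sketched with scikit-learn's BallTree. The random vectors below stand in for VGG-Face activations, which is an assumption for illustration:

```python
import numpy as np
from sklearn.neighbors import BallTree

rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 128))   # 1000 photo features, 128-D stand-ins
query = rng.normal(size=(1, 128))        # one sketch feature

tree = BallTree(gallery)                 # build once over the gallery
dist, idx = tree.query(query, k=5)       # 5 nearest gallery photos
print(idx.shape)                          # (1, 5)
```

The tree is built once, after which each query avoids a brute-force scan of the full gallery.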

Proceedings ArticleDOI
31 Mar 2020
TL;DR: This work reviews and analyzes research on face sketch recognition, surveying the different classes of approaches, datasets, and evaluation protocols used, analyzing their core techniques and identifying their limitations.
Abstract: Face sketch recognition has been one of the most studied topics in the forensic literature. Automatic retrieval of suspect photos from a police mug-shot database can help investigators quickly narrow down potential suspects, but in most cases a photographic image of the suspect is not available. The best substitute is often a sketch based on the memory of an eyewitness or a victim. This process is generally slow and often ineffective, failing to identify and apprehend the right suspect, so a robust algorithm for even partial face sketch recognition can be useful. Many methods have been proposed in this scenario, especially techniques adapted from face recognition systems, which rank among the most effective. The main objective of this paper is to present a review of recent research on recognizing face sketches. We survey the different classes of approaches, datasets, and evaluation protocols used, analyzing their core techniques and identifying their limitations.

Proceedings ArticleDOI
17 Mar 2020
TL;DR: An algorithm is developed that can classify rectilinear perspective strokes, as well as classify which of those strokes are accurate on a stroke-by-stroke basis, and an intelligent user interface is developed which can provide real-time accuracy feedback on a user's free-hand digital perspective sketch.
Abstract: Sketching in perspective is a valuable skill for art, and for professional disciplines like industrial design, architecture, and engineering. However, it tends to be a difficult skill to grasp for novices. We have developed an algorithm that can classify rectilinear perspective strokes, as well as classify which of those strokes are accurate on a stroke-by-stroke basis. We also developed an intelligent user interface which can provide real-time accuracy feedback on a user's free-hand digital perspective sketch. To evaluate the system, we conducted a between-subjects user study with 40 novice participants which involved sketching city street corners in 2-point perspective. We discovered that the participants who received real-time intelligent feedback improved their perspective accuracy significantly (p
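One plausible way to score a rectilinear perspective stroke is to test whether its direction points at the vanishing point within an angular tolerance. This is a hypothetical check with an assumed 5-degree threshold, not the paper's actual classifier:

```python
import math

def stroke_accurate(p0, p1, vp, tol_deg=5.0):
    """True if the stroke p0->p1 aims at vanishing point vp within tol_deg."""
    sx, sy = p1[0] - p0[0], p1[1] - p0[1]          # stroke direction
    mx, my = (p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2
    vx, vy = vp[0] - mx, vp[1] - my                 # midpoint -> vanishing point
    dot = sx * vx + sy * vy
    cos = abs(dot) / (math.hypot(sx, sy) * math.hypot(vx, vy))
    return math.degrees(math.acos(min(1.0, cos))) <= tol_deg

print(stroke_accurate((0, 0), (10, 5), (100, 50)))   # aligned -> True
print(stroke_accurate((0, 0), (10, 5), (100, -50)))  # off-axis -> False
```

Running such a test per stroke, against each of the scene's vanishing points, yields the kind of stroke-by-stroke accuracy feedback the abstract describes.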

Proceedings ArticleDOI
12 Oct 2020
TL;DR: This paper explicitly explores the shape properties of sketches, which has almost been neglected before in the context of deep learning, and proposes a sequential dual learning strategy that combines both shape and texture features.
Abstract: Recognizing freehand sketches with high arbitrariness is such a great challenge that the automatic recognition rate has reached a ceiling in recent years. In this paper, we explicitly explore the shape properties of sketches, which has almost been neglected before in the context of deep learning, and propose a sequential dual learning strategy that combines both shape and texture features. We devise a two-stage recurrent neural network to balance these two types of features. Our architecture also considers stroke orders of sketches to reduce the intra-class variations of input features. Extensive experiments on the TU-Berlin benchmark set show that our method achieves over 90% recognition rate for the first time on this task, outperforming both humans and state-of-the-art algorithms by over 19 and 7.5 percentage points, respectively. Especially, our approach can distinguish the sketches with similar textures but different shapes more effectively than recent deep networks. Based on the proposed method, we develop an on-line sketch retrieval and imitation application to teach children or adults to draw. The application is available as Sketch.Draw.

Proceedings ArticleDOI
12 Oct 2020
TL;DR: This paper proposes to jointly predict the tactile saliency, depth map and semantic category of a sketch in an end-to-end learning-based framework, and proposes to synthesize training data by leveraging a collection of 3D shapes with 3D tactile Saliency information.
Abstract: In this paper, we aim to understand the functionality of 2D sketches by predicting how humans would interact with the objects depicted by sketches in real life. Given a 2D sketch, we learn to predict a tactile saliency map for it, which represents where humans would grasp, press, or touch the object depicted by the sketch. We hypothesize that understanding 3D structure and category of the sketched object would help such tactile saliency reasoning. We thus propose to jointly predict the tactile saliency, depth map and semantic category of a sketch in an end-to-end learning-based framework. To train our model, we propose to synthesize training data by leveraging a collection of 3D shapes with 3D tactile saliency information. Experiments show that our model can predict accurate and plausible tactile saliency maps for both synthetic and real sketches. In addition, we also demonstrate that our predicted tactile saliency is beneficial to sketch recognition and sketch-based 3D shape retrieval, and enables us to establish part-based functional correspondences among sketches.

Proceedings Article
01 Jan 2020
TL;DR: A Transformer-based network is proposed, dubbed as TransSketchNet, for sketch recognition, which incorporates ordinal information to perform the classification task in real-time through vector images.
Abstract: Sketches have been employed since the ancient era of cave paintings as simple illustrations representing real-world entities. Their abstract nature and varied artistic styling make automatic recognition of drawings more challenging than other areas of image classification. Moreover, representing sketches as a sequence of strokes instead of raster images introduces them at the correct level of abstraction, though dealing with images as sequences of small units of information makes the task challenging. In this paper, we propose a Transformer-based network, dubbed TransSketchNet, for sketch recognition. The architecture incorporates ordinal information to perform the classification task in real time on vector images.

Proceedings ArticleDOI
Xianyi Zhu1, Yi Xiao1, Yan Zheng1, Guanghua Tan1, Shizhe Zhou1 
04 May 2020
TL;DR: A joint pixel and point convolutional neural network for LR sketch image recognition, equipped with both image convolution and point convolution, which can simultaneously handle both the image and point representations of sketches.
Abstract: Sketch recognition using deep neural networks has become a recent trend. However, traditional pixel (image) based convolutional neural networks show poor recognition performance on low-resolution (LR) sketch images due to the loss of image detail. To solve this problem, we propose a joint pixel and point convolutional neural network for LR sketch image recognition. The network, equipped with both image convolution and point convolution, can simultaneously handle both the image and point representations of sketches. Furthermore, we propose a hybrid classifier, a corresponding loss function, and a training scheme to better extract features for recognition. Experimental results show that our method outperforms state-of-the-art deep neural networks.

Proceedings ArticleDOI
17 Mar 2020
TL;DR: An intelligent user interface called Mechanix is developed to provide automated, real-time feedback on hand-drawn free body diagrams for students, driven by novel sketch recognition algorithms developed for recognizing and comparing trusses, general shapes, and arrows in diagrams.
Abstract: Sketching free body diagrams is an important skill that students learn in introductory physics and engineering classes; however, university class sizes are growing, and a single class often has hundreds of students. This creates a grading challenge for instructors, as there is simply not enough time or resources to provide adequate feedback on every problem. We have developed an intelligent user interface called Mechanix to provide automated, real-time feedback on students' hand-drawn free body diagrams. The system is driven by novel sketch recognition algorithms developed for recognizing and comparing trusses, general shapes, and arrows in diagrams. Through deployment to five universities, with 350 students completing homework on the system over the 2018 and 2019 school years, we have also discovered trends in how students utilize extra submissions for learning. A study with 57 students showed the system allowed for homework scores similar to other homework mediums while additionally requiring and automatically grading the free body diagrams.

Proceedings ArticleDOI
01 Jul 2020
TL;DR: The implementation of cGANs for synthesizing images from hand-drawn sketches gives remarkable output, and the proposed sketch-to-image translation network performs well.
Abstract: Technology today holds remarkable appeal in the areas of computer graphics and vision. Producing complete, realistic images from rough hand-drawn sketches is a demanding and laborious task in this area. Hand-drawn sketch recognition is widely used in sketch-based image and video retrieval, manipulation, and reorganization. In sketch-to-image synthesis, sketches are translated to realistic images with the use of a generative model. The sketch is put forward to an image translation network that produces a synthesized image from the input sketch via an adversarial process. A Conditional Generative Adversarial Network (cGAN), an extension of Generative Adversarial Networks (GANs), is used to produce images subject to conditions or attributes. In this work, the implementation of cGANs for synthesizing images from hand-drawn sketches gives remarkable output, and the performance of the proposed sketch-to-image translation network is excellent.
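The adversarial process described here follows the standard conditional GAN objective from the cGAN literature (the general formulation, not necessarily this paper's exact loss), where generator $G$ and discriminator $D$ are both conditioned on the input sketch $y$:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x \mid y)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z \mid y) \mid y)\big)\big]$$

Here $x$ is a real image paired with sketch $y$ and $z$ is a noise vector; at the optimum, the generator produces images the discriminator cannot distinguish from real photos conditioned on the same sketch.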

Dissertation
01 Aug 2020
TL;DR: Zhang et al. propose an improved Siamese network combined with features extracted from an encoder-decoder network to extract more correlated features from facial photos and the corresponding face sketches.
Abstract: Face sketch recognition refers to automatically identifying a person from a set of facial photos using a face sketch. This thesis focuses on matching facial images between front face photos and front face hand-drawn sketches, and between front face photos and front face composite sketches produced by software. Because the matching image pairs span different visual domains, image forms, and collection methods, face sketch recognition is more difficult than traditional facial recognition. In this thesis, three novel deep learning models are presented to increase recognition accuracy on face photo-sketch datasets. First, an improved Siamese network combined with features extracted from an encoder-decoder network is proposed to extract more correlated features from facial photos and the corresponding face sketches. Second, attention modules are proposed to extract features from the same locations in the photos and the sketches. In the third method, to reduce the difference between visual domains, the images are transferred into a graph to strengthen the relationships among face attributes and facial landmarks, and a graph neural network is utilized to learn the weights of neighbors adaptively. The first method fuses image features from the Siamese network and the encoder-decoder network to improve recognition results. The attention modules locate similar positions across domain images to extract correlated features, and the visualized feature maps exhibit the correlated features extracted from a photo and its corresponding face sketch. In addition, a stable deep learning model based on graph structure is introduced to capture the topology of the graph and the relationships after images have been mapped into the graph structure, reducing the gap between face photos and face sketches.
The experimental results show that the recognition accuracy of the proposed deep learning models achieves the state of the art on composite face sketch datasets. Meanwhile, the recognition results on hand-drawn face sketch datasets exceed those of other deep learning methods.

Journal ArticleDOI
TL;DR: A sketch recognition learning approach based on the Visual Geometry Group 16 Convolutional Neural Network (VGG16 CNN), applied to predict the labels of input sketches and automatically recognize the label of a sketch.
Abstract: With the rapid development of computer vision technology, increasing focus has been put on image recognition. More specifically, sketches are important hand-drawn images that are garnering increased attention. Moreover, as handheld devices such as tablets and smartphones have become more popular, it has become increasingly convenient for people to hand-draw sketches on this equipment, so sketch recognition is a necessary task for improving the performance of intelligent equipment. In this paper, a sketch recognition learning approach is proposed that is based on the Visual Geometry Group 16 Convolutional Neural Network (VGG16 CNN). In particular, to diminish the effect of the number of sketches on the learning method, we adopt a strategy of increasing the quantity of sketches to improve their diversity and scale. Initially, sketch features are extracted via the pretrained VGG16 CNN. Additionally, we obtain contextual features based on the traverse-stroke scheme. Then, the VGG16 CNN is trained using a joint Bayesian method to update the related network parameters. Moreover, the network is applied to predict the labels of input sketches in order to automatically recognize the label of a sketch. Finally, experiments are conducted comparing our method with state-of-the-art methods, showing that our approach is feasible and superior.

Journal ArticleDOI
TL;DR: A novel sketch recognition algorithm that uses a graph to model the input strokes and their relationships, and leverages cycles formed by local strokes to detect circuit components; it outperforms previous state-of-the-art methods numerically.
Abstract: The understanding of circuit diagrams is very important in the study of electrical engineering. Existing circuit diagram simulation tools are mostly based on GUI interfaces and rely on users to click or drag icons with a mouse, which requires them to be familiar with the software and distracts a great deal of their attention from the circuit diagram itself. Although many previous works have been devoted to designing algorithmic solutions to recognize hand-drawn circuit diagrams automatically, strict constraints on users' drawing habits and stroke orders still exist. To address these inconveniences, this paper proposes a novel sketch recognition algorithm named $LS^{4}D$. It uses a graph to model the input strokes and their relationships, and leverages cycles formed by local strokes to detect circuit components. Theoretical derivations demonstrate that $LS^{4}D$ can efficiently recognize diagrams with different drawing styles and arbitrary stroke orders. To further illustrate the practical value of the proposed approach, we construct a prototype pen-based circuit diagram system based on $LS^{4}D$, which enables users to draw circuit diagrams directly on a digital screen without any other restriction. An experiment on 158 samples collected from 17 users was conducted on the designed platform. Our approach achieves 93.04% recognition accuracy and an overall 4.53 on a 5-point user satisfaction scale, outperforming previous state-of-the-art methods numerically. The same approach can also be generalized to many other sketch recognition applications with minor modifications. To facilitate future research and applications, we publish our source code, model, and training data at https://github.com/Huage001/Graph-Based-Circuit-Painter.
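Modeling strokes as graph nodes and detecting cycles can be illustrated with a generic DFS cycle finder; this is a sketch of the general idea (strokes as nodes, edges for intersecting strokes, cycles as candidate closed components), not the $LS^{4}D$ algorithm itself:

```python
def find_cycle(adj):
    """Return one cycle in an undirected graph as a list of nodes, or None."""
    visited, parent = set(), {}
    for start in adj:
        if start in visited:
            continue
        stack = [(start, None)]
        while stack:
            node, par = stack.pop()
            if node in visited:
                continue
            visited.add(node)
            parent[node] = par
            for nxt in adj[node]:
                if nxt == par:
                    continue
                if nxt in visited:          # back-edge: reconstruct the cycle
                    cycle, cur = [nxt], node
                    while cur != nxt:
                        cycle.append(cur)
                        cur = parent[cur]
                    return cycle
                stack.append((nxt, node))
    return None

# Four strokes forming a closed loop, plus one dangling stroke.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0, 4], 4: [3]}
print(sorted(find_cycle(adj)))  # [0, 1, 2, 3]
```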

Proceedings ArticleDOI
01 Nov 2020
TL;DR: Wang et al. presented a new sketch-based interaction method for planning-based interactive storytelling systems, which uses a deep learning model based on a convolutional neural network to recognize digital hand-drawn sketches.
Abstract: Drawings have been used for thousands of years as a visual complement to oral and written storytelling. The evolution of technology and the advent of interactive narratives bring the possibility of exploring drawings and storytelling in new ways. This paper presents a new sketch-based interaction method for planning-based interactive storytelling systems, which uses a deep learning model based on a Convolutional Neural Network to recognize digital hand-drawn sketches. By combining real-time sketch recognition with a planning-based plot generation algorithm, the proposed system allows users to interact with narratives by sketching objects on smartphones or tablet computers; the sketches are then recognized by the system and converted into virtual objects in the story world, thereby affecting the plot of the narrative. Preliminary results show that the sketch recognition model has remarkable accuracy for small sets of sketch classes (95.1% for 14 classes), which are sufficient to provide a good variety of interaction options. It can also be extended to more complex scenarios while maintaining considerable accuracy (87.4% for 172 classes and 71.6% for 345 classes).
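
The CNN itself is beyond the scope of this summary, but a pipeline like this needs to turn touch strokes into the fixed-size pixel grid a CNN consumes. The sketch below is a hedged illustration of that preprocessing step: it rasterizes polylines of normalized (x, y) points onto a binary grid by linear interpolation. The 28x28 grid size and the stroke format are assumptions, not details from the paper.

```python
def rasterize(strokes, size=28):
    """Rasterize strokes (lists of (x, y) points in [0, 1]) onto a
    size x size binary grid, the input format a CNN classifier expects."""
    grid = [[0] * size for _ in range(size)]

    def plot(x, y):
        grid[min(size - 1, int(y * size))][min(size - 1, int(x * size))] = 1

    for stroke in strokes:
        # Interpolate between consecutive points so fast strokes
        # (few sampled points) still leave a continuous line of ink.
        for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
            steps = max(1, int(max(abs(x1 - x0), abs(y1 - y0)) * size * 2))
            for i in range(steps + 1):
                t = i / steps
                plot(x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
        if len(stroke) == 1:  # a single tap still leaves a dot
            plot(*stroke[0])
    return grid
```

A horizontal stroke from (0, 0.5) to (1, 0.5), for instance, fills the entire middle row of the grid and nothing else.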

Journal ArticleDOI
TL;DR: A sketch recognition system based on a multistroke primitive grouping method, which groups the strokes that lie within the mutual boundaries between adjacent regions to create line drawings from online freehand axonometric sketches of mechanical models.
Abstract: Multistroke drawing occurs frequently in conceptual design sketches; however, it is almost unsupported by current sketch-based user interfaces. We propose a sketch recognition system based on a multistroke primitive grouping method. By grouping the strokes that lie within the mutual boundaries between adjacent regions, we create line drawings from online freehand axonometric sketches of mechanical models. First, the closed regions of the sketch and their boundary bands are extracted. Then, the strokes that cross the boundary bands of two or more closed regions are segmented, and the strokes that lie within the intersection of two adjacent boundary bands are grouped. Finally, the grouped strokes are simplified into a single new stroke and fitted as a geometric primitive; thus, the input sketches are recognized as line drawings. We developed a prototype of the sketch recognition system to evaluate the proposed method. The results show that input sketches are efficiently simplified into accurate line drawings. The proposed method can be applied to both overtraced and non-overtraced multistroke sketches.
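
The final step above, fitting a group of merged strokes as a geometric primitive, can be sketched with an ordinary least-squares line fit over the pooled stroke points. This is only an illustrative stand-in for the paper's fitting procedure, and it assumes the primitive is a non-vertical straight line.

```python
def fit_line(points):
    """Least-squares fit y = a*x + b to merged stroke points.
    Returns the slope a and intercept b of the simplified stroke."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    # Normal equations for simple linear regression.
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

Points sampled from overtraced strokes along y = 2x + 1 recover slope 2 and intercept 1 exactly; noisy strokes land on the best-fitting line instead.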

Proceedings ArticleDOI
04 Nov 2020
TL;DR: The authors present a threat model for forensic face sketch recognition that traces the attacker's trajectory and the corresponding protection measures, identifies attack points in the recognition system, and tabulates the types of threats and the protection measures against face falsification.
Abstract: This paper describes the forensic face sketch recognition paradigm and its problems. It also presents the trajectory of an attacker and the means of protection via a threat model. The result of this work is a threat model identifying attack points in a face sketch recognition system, together with a table describing the types of threats and the protection measures against face falsification.

Proceedings ArticleDOI
16 Oct 2020
TL;DR: In this article, an architecture using convolutional neural networks was implemented that transforms an image into a sequence of strokes to be replicated by a Poppy humanoid robot, which uses inverse kinematics to reproduce the sketches.
Abstract: Sketches have been one of the most ancient techniques used by humans to portray their ideas and thoughts. Replicating this ability would help us to better understand the way in which human beings acquire their capabilities. In this work, we implemented an architecture using convolutional neural networks that transforms an image into a sequence of strokes to be replicated by a Poppy humanoid robot, which uses inverse kinematics to reproduce the sketches.
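
The abstract does not detail the robot's inverse kinematics; as a hedged illustration of the kind of computation involved, the sketch below is the textbook closed-form IK for a planar two-link arm reaching a pen-tip target. The unit link lengths and the elbow-down convention are assumptions, not Poppy's actual kinematic parameters.

```python
import math

def ik_two_link(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics for a planar two-link arm:
    return joint angles (theta1, theta2) that place the end effector
    at target (x, y), elbow-down solution. Raises ValueError if the
    target is out of reach."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle directly.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    # Shoulder angle: direction to target minus the offset caused
    # by the bent elbow.
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

A quick sanity check is to run the forward kinematics on the returned angles and confirm the end effector lands back on the requested target.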

Journal ArticleDOI
14 Aug 2020
TL;DR: This paper presents an approach for the recognition of multi-domain hand-drawn diagrams that exploits Sketch Grammars (SkGs) to model the symbols' shapes and the abstract syntax of diagrammatic notations, and is able to exploit contextual information to improve recognition accuracy and resolve interpretation ambiguities.
Abstract: This paper presents an approach for the recognition of multi-domain hand-drawn diagrams, which exploits Sketch Grammars (SkGs) to model the symbols' shapes and the abstract syntax of diagrammatic notations. The recognition systems automatically generated from SkGs process input sketches in the following phases: the user's strokes are first segmented and interpreted as primitive shapes; then, by exploiting the domain context, they are clustered into symbols of the domain; finally, an interpretation of the whole diagram is given. The main contribution of this paper is an efficient parsing model suitable for both interactive and non-interactive sketch-based interfaces, configurable to different domains, and able to exploit contextual information to improve recognition accuracy and resolve interpretation ambiguities. The proposed approach was evaluated in the domain of UML class diagrams, obtaining good results in terms of recognition accuracy and usability.
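
The SkG formalism is not reproduced in this summary; as a loose illustration of the primitives-to-symbols clustering phase, the sketch below greedily matches multisets of recognized primitives against hypothetical productions (e.g., one rectangle plus two horizontal separators forming a UML class box). The production rules and primitive names are invented and much simpler than a real sketch grammar, which also constrains spatial relations.

```python
from collections import Counter

# Hypothetical productions: multiset of primitive shapes -> domain symbol.
PRODUCTIONS = {
    "uml_class": Counter({"rectangle": 1, "hline": 2}),
    "arrow": Counter({"line": 1, "triangle": 1}),
}

def interpret(primitives):
    """Greedily cluster recognized primitives into domain symbols,
    returning the matched symbols and any leftover primitives."""
    pool = Counter(primitives)
    symbols = []
    changed = True
    while changed:
        changed = False
        for name, rule in PRODUCTIONS.items():
            if all(pool[shape] >= n for shape, n in rule.items()):
                pool -= rule  # consume the matched primitives
                symbols.append(name)
                changed = True
    return symbols, +pool  # unary + drops zero counts
```

On the primitive stream rectangle, hline, hline, line, triangle, this yields one "uml_class" and one "arrow" with nothing left over.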

Book ChapterDOI
03 Apr 2020
TL;DR: A novel edge–texture characteristic attribute for human face recognition, based on the concept of the radius of gyration face, which is invariant to changes in illumination, rotation and noise, is put forward.
Abstract: It is well known that most of the edges in an image can be found on the facial segments, and that they belong to the image's high-frequency components. Besides edges, another crucial feature for face matching is texture. Therefore, both edges and texture can contribute significantly to extracting facial attributes and thus to human face recognition. This paper puts forward a novel edge–texture characteristic attribute for human face recognition based on the concept of the radius of gyration face, which is invariant to changes in illumination, rotation and noise. The superiority of the proposed approach in human face recognition is exhibited by comparing its recognition accuracy with other recent state-of-the-art techniques on challenging databases such as the CMU-PIE, Extended Yale B, AR and CUFS databases, under varying conditions of illumination, noise and rotation, as well as in face sketch recognition.
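
The paper's full "radius of gyration face" transform is not described in this summary; the sketch below only illustrates the underlying mechanical quantity, the radius of gyration of an intensity patch about its centroid, with pixel intensities treated as point masses. How the authors apply it per neighborhood across the face is an assumption left out here.

```python
import math

def radius_of_gyration(patch):
    """Radius of gyration of a 2-D intensity patch about its centroid:
    sqrt(sum(m_i * d_i^2) / sum(m_i)), with pixel values as masses m_i
    and d_i the distance of pixel i from the intensity centroid."""
    total = sum(v for row in patch for v in row)
    if total == 0:
        return 0.0
    cx = sum(x * v for row in patch for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(patch) for v in row) / total
    sq = sum(v * ((x - cx) ** 2 + (y - cy) ** 2)
             for y, row in enumerate(patch)
             for x, v in enumerate(row))
    return math.sqrt(sq / total)
```

Because the measure is a ratio of intensity-weighted moments, scaling all intensities by a constant (a uniform illumination change) leaves it unchanged, which is the intuition behind the claimed illumination invariance.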

Book ChapterDOI
07 Oct 2020
TL;DR: In this article, a method for human face sketch gender classification and recognition is presented, inspired by the authors' earlier model, which was trained on the same task but with sixteen features and a fuzzy approach.
Abstract: Machine learning is a subset of artificial intelligence that focuses on developing computer programs that can access data and use it to learn for themselves. Bayes' theorem is widely used in machine learning. The main objective of this paper is to classify the gender of human beings from their face sketch images using eyebrow features and a Bayes classifier. This paper presents a method for human face sketch gender classification and recognition, inspired by our earlier model, which was trained on the same task but with sixteen features and a fuzzy approach. To this end, just three features are extracted from the input face sketch image, based on the eyebrow's golden ratio with the face and two other measurements. The face detection stage uses the Viola–Jones algorithm, and the classification task is performed with a Bayes classifier. An experimental evaluation demonstrates the satisfactory performance of our approach on the CUFS database, with 80% of the data used for training and 20% for testing. The proposed machine learning algorithm is competitive with state-of-the-art approaches: the recognition rate reaches more than 98.96% for the male gender and 97.38% for the female gender.
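
The paper's exact eyebrow measurements and training data are not reproduced here; as a hedged sketch of the classification stage, the code below is a generic two-class Gaussian Bayes classifier over three scalar features. The feature vectors in the usage example are made up for illustration and are not CUFS measurements.

```python
import math
from collections import defaultdict

def train(samples):
    """samples: list of (label, feature_vector) pairs. Returns, per
    class, the prior and per-feature (mean, variance) of a Gaussian
    Bayes model."""
    by_label = defaultdict(list)
    for label, vec in samples:
        by_label[label].append(vec)
    model = {}
    for label, vecs in by_label.items():
        n = len(vecs)
        stats = []
        for feature in zip(*vecs):  # one column of values per feature
            mean = sum(feature) / n
            var = sum((x - mean) ** 2 for x in feature) / n + 1e-9
            stats.append((mean, var))
        model[label] = (n / len(samples), stats)
    return model

def classify(model, vec):
    """Pick the class with the highest log posterior under the model."""
    def log_post(label):
        prior, stats = model[label]
        ll = math.log(prior)
        for x, (mean, var) in zip(vec, stats):
            ll += -0.5 * math.log(2 * math.pi * var) \
                  - (x - mean) ** 2 / (2 * var)
        return ll
    return max(model, key=log_post)
```

Training on a few hypothetical three-feature vectors per gender and classifying a new measurement near one class mean returns that class, which is the Bayes decision rule the abstract relies on.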