Journal ArticleDOI

Universal Face Photo-Sketch Style Transfer via Multiview Domain Translation

TLDR
A novel universal face photo-sketch style transfer method is presented that requires no images from the source domain for training and flexibly combines a convolutional neural network representation with hand-crafted features in an optimal way.
Abstract
Face photo-sketch style transfer aims to convert a representation of a face from the photo (or sketch) domain to the sketch (respectively, photo) domain while preserving the character of the subject. It has wide-ranging applications in law enforcement, forensic investigation, and digital entertainment. However, conventional face photo-sketch synthesis methods usually require training images from both the source domain and the target domain, and are limited in that they cannot be applied to universal conditions where collecting training images in the source domain that match the style of the test image is impractical. This problem entails two major challenges: 1) designing an effective and robust domain translation model for the universal situation in which images of the source domain needed for training are unavailable, and 2) preserving the facial character while performing a transfer to the style of an entire image collection in the target domain. To this end, we present a novel universal face photo-sketch style transfer method that does not need any image from the source domain for training. The regression relationship between an input test image and the entire training image collection in the target domain is inferred via a deep domain translation framework, in which a domain-wise adaptation term and a local consistency adaptation term are developed. To improve the robustness of the style transfer process, we propose a multiview domain translation method that flexibly leverages a convolutional neural network representation together with hand-crafted features in an optimal way. Qualitative and quantitative comparisons are provided for universal unconstrained conditions in which training images from the source domain are unavailable, demonstrating the effectiveness and superiority of our method for universal face photo-sketch style transfer.
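
The abstract only sketches the pipeline, so below is a minimal, illustrative Python sketch of the multiview idea: a deep "view" (pooled VGG-16 activations from torchvision, used as a stand-in for the paper's CNN representation) is fused with a hand-crafted "view" (a HOG descriptor), and a test photo is translated by a distance-weighted blend of its nearest neighbors in the target-domain sketch collection. All function names, feature choices, and the simple nearest-neighbor regression are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: fuse a CNN "view" with a hand-crafted "view" and translate
# a test photo by regressing onto its nearest neighbors in the target-domain sketch
# collection. VGG-16, HOG, the exp-weighting, and all names are assumptions.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from skimage.feature import hog

_vgg = models.vgg16(weights="DEFAULT").features.eval()   # pretrained CNN backbone
_prep = T.Compose([T.ToTensor(),
                   T.Resize((224, 224), antialias=True),
                   T.Normalize(mean=[0.485, 0.456, 0.406],
                               std=[0.229, 0.224, 0.225])])

def cnn_view(img_rgb):
    """Deep view: spatially pooled VGG-16 convolutional activations."""
    with torch.no_grad():
        feat = _vgg(_prep(img_rgb).unsqueeze(0))
    return feat.mean(dim=(2, 3)).squeeze(0).numpy()       # shape (512,)

def handcrafted_view(img_gray):
    """Hand-crafted view: a HOG descriptor of the same face."""
    return hog(img_gray, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2))

def multiview_descriptor(img_rgb, img_gray, alpha=0.5):
    """Fuse the two views; alpha trades off deep vs. hand-crafted evidence."""
    f_cnn = cnn_view(img_rgb)
    f_hog = handcrafted_view(img_gray)
    f_cnn /= np.linalg.norm(f_cnn) + 1e-8
    f_hog /= np.linalg.norm(f_hog) + 1e-8
    return np.concatenate([alpha * f_cnn, (1.0 - alpha) * f_hog])

def translate_to_sketch(test_desc, train_descs, train_sketches, k=5):
    """Toy domain translation: a distance-weighted blend of the k nearest
    target-domain training sketches (no source-domain images are used)."""
    dists = np.linalg.norm(train_descs - test_desc, axis=1)
    idx = np.argsort(dists)[:k]
    w = np.exp(-dists[idx])
    w /= w.sum()
    return np.tensordot(w, train_sketches[idx], axes=1)
```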


Citations
Journal ArticleDOI

Towards Practical Sketch-Based 3D Shape Generation: The Role of Professional Sketches

TL;DR: This paper presents the first large-scale dataset of professional sketches, in which each sketch is paired with a reference 3D shape, and introduces two bespoke designs within a deep adversarial network to tackle the imprecision of human sketches and the figure/ground ambiguity problem inherent to sketch-based reconstruction.
Journal ArticleDOI

Memory-Modulated Transformer Network for Heterogeneous Face Recognition

TL;DR: To address the information deficiency in the input images, this work formulates the image translation process as a “one-to-many” generation problem and introduces reference images to guide the generation process.
Journal ArticleDOI

Partial NIR-VIS Heterogeneous Face Recognition With Automatic Saliency Search

TL;DR: A saliency search network (SSN) is proposed to extract domain-invariant identity features, with the search process guided by an information bottleneck network to mitigate the overfitting caused by small datasets.
Journal ArticleDOI

Discriminative shared transform learning for sketch to image matching

TL;DR: A discriminative shared transform learning (DSTL) algorithm is proposed that learns a shared transform for data from the two domains while modeling the class variations.
References
Journal ArticleDOI

Distinctive Image Features from Scale-Invariant Keypoints

TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
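
SIFT is one of the hand-crafted descriptors a multiview representation can draw on. The following is a minimal OpenCV sketch of extracting SIFT keypoints and descriptors; the file name is a placeholder and the snippet is not taken from the paper.

```python
# Minimal SIFT extraction with OpenCV (cv2.SIFT_create is available in OpenCV >= 4.4);
# "face_photo.png" is a placeholder path.
import cv2

img = cv2.imread("face_photo.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints), descriptors.shape)   # descriptors are (N, 128) float32
```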
Journal ArticleDOI

Image quality assessment: from error visibility to structural similarity

TL;DR: A structural similarity (SSIM) index is proposed for image quality assessment based on the degradation of structural information, and is validated against subjective ratings and existing objective methods on a database of images compressed with JPEG and JPEG2000.
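
SSIM is a standard quantitative measure in the photo-sketch synthesis literature; a small scikit-image sketch of scoring a synthesized sketch against its ground truth is shown below (file names are placeholders, not data from the paper).

```python
# Score a synthesized sketch against the ground-truth sketch with SSIM.
# File names are placeholders for illustration.
import cv2
from skimage.metrics import structural_similarity as ssim

synth = cv2.imread("synthesized_sketch.png", cv2.IMREAD_GRAYSCALE)
truth = cv2.imread("ground_truth_sketch.png", cv2.IMREAD_GRAYSCALE)
score, ssim_map = ssim(truth, synth, data_range=255, full=True)
print(f"SSIM = {score:.4f}")   # 1.0 would mean structurally identical images
```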
Proceedings ArticleDOI

Histograms of oriented gradients for human detection

TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
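
HOG is another hand-crafted view such a multiview descriptor could use; the following scikit-image sketch uses illustrative cell and block sizes rather than values from the paper, and the file path is a placeholder.

```python
# Compute a HOG descriptor with scikit-image; parameters are illustrative defaults.
from skimage import io
from skimage.feature import hog

img = io.imread("face_photo.png", as_gray=True)            # placeholder path
features, hog_image = hog(img, orientations=9,
                          pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2),
                          visualize=True)
print(features.shape)
```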
Journal ArticleDOI

Nonlinear dimensionality reduction by locally linear embedding

TL;DR: Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs and thereby learns the global structure of nonlinear manifolds.
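
The neighborhood-preserving reconstruction weights at the heart of LLE are the same local-linear idea that many patch-based photo-sketch synthesis methods (and the local consistency term above) build on. A short scikit-learn sketch on synthetic data standing in for face patches is given below; the data and parameters are illustrative, not from the paper.

```python
# LLE with scikit-learn on synthetic data standing in for flattened face patches.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

X = np.random.rand(200, 64)                       # e.g., 200 flattened 8x8 patches
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
Y = lle.fit_transform(X)                          # neighborhood-preserving embedding
print(Y.shape, lle.reconstruction_error_)
```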
Book ChapterDOI

SURF: speeded up robust features

TL;DR: A novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features), which approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster.
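
For completeness, a SURF sketch mirroring the SIFT example above; note that SURF lives in OpenCV's contrib xfeatures2d module and is only present in builds that enable it. The file name is again a placeholder.

```python
# SURF keypoints and descriptors; requires an OpenCV build with the contrib
# xfeatures2d (non-free) module enabled. "face_photo.png" is a placeholder.
import cv2

img = cv2.imread("face_photo.png", cv2.IMREAD_GRAYSCALE)
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
keypoints, descriptors = surf.detectAndCompute(img, None)
print(len(keypoints), descriptors.shape)   # 64-dim (or 128-dim extended) descriptors
```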