Author

Daniel Sýkora

Bio: Daniel Sýkora is an academic researcher at Czech Technical University in Prague. He has contributed to research in the topics of animation and rendering (computer graphics), has an h-index of 19, and has co-authored 54 publications receiving 1,186 citations. His previous affiliations include Adobe Systems and Trinity College Dublin.


Papers
Journal ArticleDOI
TL;DR: This paper presents LazyBrush, a novel interactive tool for painting hand-made cartoon drawings and animations. Because it is not sensitive to imprecise placement of color strokes, painting becomes less tedious, bringing significant time savings in the context of cartoon animation.
Abstract: In this paper we present LazyBrush, a novel interactive tool for painting hand-made cartoon drawings and animations. Its key advantages are simplicity and flexibility. As opposed to previous custom-tailored approaches [SBv05, QWH06], LazyBrush does not rely on style-specific features such as homogeneous regions or pattern continuity, yet still offers comparable or even less manual effort for a broad class of drawing styles. In addition, it is not sensitive to imprecise placement of color strokes, which makes painting less tedious and brings significant time savings in the context of cartoon animation. LazyBrush originally stems from a requirements analysis carried out with professional ink-and-paint illustrators, who established a list of useful features for an ideal painting tool. We incorporate this list into an optimization framework leading to a variant of Potts energy with several interesting theoretical properties. We show how to minimize it efficiently and demonstrate its usefulness in various practical scenarios, including the ink-and-paint production pipeline.
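The Potts energy mentioned in the abstract has a simple shape: each pixel takes a color label, paying a data cost for disagreeing with nearby strokes plus a fixed penalty for every unlike-labeled neighbor pair. The sketch below is only an illustration of that energy using naive iterated conditional modes (ICM) on a row-major grid with hypothetical data costs; it is not LazyBrush's actual solver, which minimizes the energy far more robustly.

```python
# Minimal Potts-energy labeling sketch (illustrative only; not LazyBrush's solver).
# E(L) = sum_p D_p(L_p) + lam * sum_{p~q} [L_p != L_q]

def potts_icm(data_cost, width, height, lam=1.0, iters=10):
    """data_cost[p][l]: cost of assigning label l to pixel p (row-major grid)."""
    n_labels = len(data_cost[0])
    # Start from the per-pixel data-term minimizer.
    labels = [min(range(n_labels), key=lambda l: data_cost[p][l])
              for p in range(width * height)]

    def neighbors(p):
        x, y = p % width, p // width
        if x > 0: yield p - 1
        if x < width - 1: yield p + 1
        if y > 0: yield p - width
        if y < height - 1: yield p + width

    for _ in range(iters):
        changed = False
        for p in range(width * height):
            def local_cost(l):
                # Data term plus one Potts penalty per disagreeing neighbor.
                return data_cost[p][l] + sum(
                    lam for q in neighbors(p) if labels[q] != l)
            best = min(range(n_labels), key=local_cost)
            if best != labels[p]:
                labels[p] = best
                changed = True
        if not changed:
            break
    return labels
```

On a 1x4 strip with color scribbles pinning the two ends to different labels, the unconstrained middle pixels settle so that only a single label discontinuity is paid.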

126 citations

Journal ArticleDOI
11 Jul 2016
TL;DR: This work presents an approach to example-based stylization of 3D renderings that better preserves the rich expressiveness of hand-created artwork, based on light propagation in the scene, and demonstrates its effectiveness on a variety of scenes and styles.
Abstract: We present an approach to example-based stylization of 3D renderings that better preserves the rich expressiveness of hand-created artwork. Unlike previous techniques, which are mainly guided by colors and normals, our approach is based on light propagation in the scene. This novel type of guidance can distinguish among context-dependent illumination effects, for which artists typically use different stylization techniques, and delivers a look closer to realistic artwork. In addition, we demonstrate that the current state of the art in guided texture synthesis produces artifacts that can significantly decrease the fidelity of the synthesized imagery, and propose an improved algorithm that alleviates them. Finally, we demonstrate our method's effectiveness on a variety of scenes and styles, in applications like interactive shading study or autocompletion.
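The guidance idea can be caricatured in a few lines: each target location finds the source location with the most similar guidance value and borrows its style there. The toy below uses a single scalar guide per pixel as a hypothetical stand-in for the paper's light-propagation channels, and matches single values rather than the patch neighborhoods that real guided texture synthesis compares.

```python
# Toy guided style transfer: each target pixel copies the style value of the
# source pixel whose guidance value is closest. A single scalar guide stands in
# (hypothetically) for per-pixel light-propagation channels; real guided
# synthesis matches whole patches, not single values.

def guided_transfer(source_guide, source_style, target_guide):
    out = []
    for g in target_guide:
        # Nearest-neighbor lookup in guidance space.
        i = min(range(len(source_guide)), key=lambda j: abs(source_guide[j] - g))
        out.append(source_style[i])
    return out
```

For example, with a style exemplar whose guide runs from shadow (0.0) to highlight (1.0), a brightly lit target pixel picks up the exemplar's highlight treatment.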

124 citations

Proceedings ArticleDOI
07 Jun 2004
TL;DR: A novel color-by-example technique is presented that combines image segmentation, patch-based sampling, and probabilistic reasoning, and is able to automate colorization when new color information is applied to an already designed black-and-white cartoon.
Abstract: We present a novel color-by-example technique which combines image segmentation, patch-based sampling and probabilistic reasoning. This method is able to automate colorization when new color information is applied to an already designed black-and-white cartoon. Our technique is especially suitable for cartoons digitized from classical celluloid films, which were originally produced by a paper- or cel-based method. In this case, the background is usually a static image and only the dynamic foreground needs to be colored frame-by-frame. We also assume that objects in the foreground layer consist of several well-visible outlines which emphasize the shape of homogeneous regions.
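At its simplest, color-by-example amounts to: segment the new frame into regions, describe each region, and give it the color of the most similar region in an already-colored example frame. The sketch below uses a single mean-intensity descriptor as a hypothetical stand-in; the paper's method instead combines patch-based sampling with probabilistic reasoning over much richer evidence.

```python
# Naive stand-in for color-by-example: each segmented foreground region in a
# new frame takes the color of the example region with the most similar
# descriptor (mean intensity here; the actual method reasons probabilistically
# over patch-based statistics).

def colorize_regions(example_regions, new_regions):
    """example_regions: list of (descriptor, color); new_regions: descriptors."""
    colors = []
    for d in new_regions:
        _, color = min(example_regions, key=lambda rc: abs(rc[0] - d))
        colors.append(color)
    return colors
```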

113 citations

Proceedings ArticleDOI
01 Aug 2009
TL;DR: A novel, geometrically motivated iterative scheme in which point movements are decoupled from shape consistency is proposed; by combining locally optimal block matching with as-rigid-as-possible shape regularization, it can register images undergoing large free-form deformations and appearance variations.
Abstract: We present a new approach to deformable image registration suitable for articulated images such as hand-drawn cartoon characters and human postures. For this type of data, state-of-the-art techniques typically yield undesirable results. We propose a novel geometrically motivated iterative scheme where point movements are decoupled from shape consistency. By combining locally optimal block matching with as-rigid-as-possible shape regularization, our algorithm allows us to register images undergoing large free-form deformations and appearance variations. We demonstrate its practical usability in various challenging tasks performed in the cartoon animation production pipeline including unsupervised inbetweening, example-based shape deformation, auto-painting, editing, and motion retargeting.
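The "decoupled" iteration alternates two steps: points first move independently to their locally best matches, then the configuration is pulled back toward shape consistency. The sketch below shows the regularization step in its simplest, global-rigid limit, snapping the point set to the closest rigid transform of the rest shape via 2D Procrustes; the paper's as-rigid-as-possible regularization instead works on overlapping local frames, so this is only an illustrative stand-in.

```python
import math

# "Match then regularize" in the global-rigid limit: matched points are snapped
# back to the closest rigid (rotation + translation) transform of the rest
# shape, a stand-in for per-neighborhood as-rigid-as-possible regularization.

def fit_rigid(rest, moved):
    """Best rigid transform mapping rest -> moved (2D Procrustes, closed form)."""
    n = len(rest)
    cr = (sum(p[0] for p in rest) / n, sum(p[1] for p in rest) / n)
    cm = (sum(p[0] for p in moved) / n, sum(p[1] for p in moved) / n)
    a = b = 0.0
    for (rx, ry), (mx, my) in zip(rest, moved):
        rx, ry, mx, my = rx - cr[0], ry - cr[1], mx - cm[0], my - cm[1]
        a += rx * mx + ry * my  # cosine accumulator
        b += rx * my - ry * mx  # sine accumulator
    theta = math.atan2(b, a)
    c, s = math.cos(theta), math.sin(theta)

    def xform(p):
        x, y = p[0] - cr[0], p[1] - cr[1]
        return (c * x - s * y + cm[0], s * x + c * y + cm[1])
    return xform

def regularize(rest, matched):
    """Project noisy/independent matches onto a rigid motion of the rest shape."""
    xf = fit_rigid(rest, matched)
    return [xf(p) for p in rest]
```

When the matches already form an exact rigid motion of the rest shape (e.g. a 90-degree rotation plus a translation), the regularized points reproduce them exactly.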

91 citations

Journal ArticleDOI
TL;DR: A new approach for generating global illumination renderings of hand-drawn characters using only a small set of simple annotations that exploits the concept of bas-relief sculptures, and forms an optimization process that automatically constructs approximate geometry sufficient to evoke the impression of a consistent 3D shape.
Abstract: We present a new approach for generating global illumination renderings of hand-drawn characters using only a small set of simple annotations. Our system exploits the concept of bas-relief sculptures, making it possible to generate 3D proxies suitable for rendering without requiring side-views or extensive user input. We formulate an optimization process that automatically constructs approximate geometry sufficient to evoke the impression of a consistent 3D shape. The resulting renders provide the richer stylization capabilities of 3D global illumination while still retaining the 2D hand-drawn look-and-feel. We demonstrate our approach on a varied set of hand-drawn images and animations, showing that even in comparison to ground-truth renderings of full 3D objects, our bas-relief approximation is able to produce convincing global illumination effects, including self-shadowing, glossy reflections, and diffuse color bleeding.
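To give a feel for what "approximate geometry sufficient to evoke a 3D impression" means, the sketch below inflates a binary character mask into a shallow height field by distance to the mask boundary (a BFS distance transform). This is a much cruder proxy than the paper's annotation-driven bas-relief optimization, and is included only as a hypothetical baseline for the idea of 2.5D proxies.

```python
from collections import deque

# Crude 2.5D proxy (NOT the paper's method): inflate a binary mask into a
# height field where height grows with distance from the silhouette boundary.

def inflate(mask):
    """mask: list of lists of 0/1. Returns integer heights (0 outside mask)."""
    h, w = len(mask), len(mask[0])
    height = [[0] * w for _ in range(h)]
    q = deque()
    # Seed the BFS with boundary pixels: inside the mask, touching the outside.
    for y in range(h):
        for x in range(w):
            if mask[y][x] and any(
                not (0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx])
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                height[y][x] = 1
                q.append((y, x))
    # Propagate inward; interior pixels get height = distance to boundary + 1.
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and height[ny][nx] == 0:
                height[ny][nx] = height[y][x] + 1
                q.append((ny, nx))
    return height
```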

90 citations


Cited by
01 Jan 2016
Using Multivariate Statistics

14,604 citations

Journal ArticleDOI
TL;DR: Computer and Robot Vision, Vol. 1, by R.M. Haralick and Linda G. Shapiro.
Abstract: Computer and Robot Vision Vol. 1, by R.M. Haralick and Linda G. Shapiro, Addison-Wesley, 1992, ISBN 0-201-10887-1.

1,426 citations

Journal ArticleDOI
11 Jul 2016
TL;DR: A novel technique to automatically colorize grayscale images that combines both global priors and local image features and can process images of any resolution, unlike most existing approaches based on CNN.
Abstract: We present a novel technique to automatically colorize grayscale images that combines both global priors and local image features. Based on Convolutional Neural Networks, our deep network features a fusion layer that allows us to elegantly merge local information dependent on small image patches with global priors computed using the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained in an end-to-end fashion. Furthermore, our architecture can process images of any resolution, unlike most existing CNN-based approaches. We leverage an existing large-scale scene classification database to train our model, exploiting the class labels of the dataset to more efficiently and discriminatively learn the global priors. We validate our approach with a user study and compare against the state of the art, where we show significant improvements. Furthermore, we demonstrate our method extensively on many different types of images, including black-and-white photography from over a hundred years ago, and show realistic colorizations.
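The fusion layer's core move is simple: replicate a global feature vector (computed from the whole image) at every spatial location and concatenate it with the local per-location features, so later layers see both. The sketch below shows just that tensor operation with plain nested lists standing in (hypothetically) for learned CNN feature maps.

```python
# Sketch of the fusion idea: a global feature vector is broadcast over the
# spatial grid and concatenated channel-wise with local features. Plain lists
# stand in for the learned CNN tensors the actual network fuses.

def fuse(local_features, global_features):
    """local_features: H x W x C_local nested lists; global_features: C_global."""
    return [[cell + list(global_features) for cell in row]
            for row in local_features]
```

After fusion, every spatial position carries C_local + C_global channels, which is what lets a per-pixel decoder exploit scene-level context.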

758 citations

Posted Content
TL;DR: A fully automatic image colorization system that leverages recent advances in deep networks, exploiting both low-level and semantic representations, and explores colorization as a vehicle for self-supervised visual representation learning.
Abstract: We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning.
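Predicting a per-pixel color histogram leaves a final step: collapsing the distribution to one color when an image is actually rendered. One simple summary is the expectation over bin centers, sketched below; the paper's intermediate histogram output also supports other manipulations (e.g. sampling) before image formation, so treat this as one illustrative choice rather than the method's prescribed decoder.

```python
# One way to turn a predicted per-pixel color histogram into a single color
# value: the expectation over bin centers. Illustrative only; the histogram
# output can also be sampled or otherwise manipulated before image formation.

def histogram_to_color(bin_centers, probs):
    assert abs(sum(probs) - 1.0) < 1e-6, "probabilities must sum to 1"
    return sum(c * p for c, p in zip(bin_centers, probs))
```

Note that for strongly multimodal histograms the expectation can land between modes (a desaturated average), which is exactly why richer summaries of the distribution are worth keeping around.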

669 citations