Open Access · Posted Content
Stylizing 3D Scene via Implicit Representation and HyperNetwork
TL;DR
The authors propose a joint framework that directly renders novel views of a scene in a desired style. It consists of two components: an implicit representation of the 3D scene based on the neural radiance field (NeRF) model, and a hypernetwork that transfers style information into that scene representation.

Abstract
In this work, we aim to address the 3D scene stylization problem: generating stylized images of the scene at arbitrary novel view angles. A straightforward solution is to combine existing novel view synthesis and image/video style transfer approaches, which often leads to blurry results or inconsistent appearance. Inspired by the high-quality results of the neural radiance fields (NeRF) method, we propose a joint framework to directly render novel views with the desired style. Our framework consists of two components: an implicit representation of the 3D scene with the neural radiance field model, and a hypernetwork to transfer the style information into the scene representation. In particular, our implicit representation model disentangles the scene into geometry and appearance branches, and the hypernetwork learns to predict the parameters of the appearance branch from the reference style image. To alleviate the training difficulties and memory burden, we propose a two-stage training procedure and a patch sub-sampling approach to optimize the style and content losses with the neural radiance field model. After optimization, our model is able to render consistent novel views at arbitrary view angles with arbitrary styles. Both quantitative evaluation and a human subject study demonstrate that the proposed method generates faithful stylization results with consistent appearance across different views.
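As a rough illustration of the disentangled design described above, the following NumPy sketch shows a shared geometry branch producing density and a feature vector, with a hypernetwork mapping a style embedding to the weights of the appearance branch, so that changing the style changes colour but not geometry. All layer sizes, variable names, and the tiny one-layer networks are invented for illustration; they are not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the paper's actual sizes).
D_POS, D_DIR, D_FEAT, D_STYLE, D_HID = 3, 3, 16, 8, 32

# Geometry branch: position -> (density, feature). In the paper's two-stage
# procedure this branch would be trained first and then held fixed.
W_geo = rng.standard_normal((D_POS, D_HID)) * 0.1
W_sigma = rng.standard_normal((D_HID, 1)) * 0.1
W_feat = rng.standard_normal((D_HID, D_FEAT)) * 0.1

# Hypernetwork: style embedding -> weights of the appearance branch.
N_APP = (D_FEAT + D_DIR) * 3  # appearance layer maps [feature, view dir] -> RGB
W_hyper = rng.standard_normal((D_STYLE, N_APP)) * 0.1

def render_point(x, d, style):
    """Evaluate density and style-conditioned colour at one 3D point."""
    h = np.tanh(x @ W_geo)                     # shared geometry trunk
    sigma = np.log1p(np.exp(h @ W_sigma))      # softplus keeps density >= 0
    feat = np.tanh(h @ W_feat)                 # appearance feature
    W_app = (style @ W_hyper).reshape(D_FEAT + D_DIR, 3)  # predicted weights
    rgb = 1.0 / (1.0 + np.exp(-np.concatenate([feat, d]) @ W_app))  # sigmoid RGB
    return sigma.item(), rgb

x = rng.standard_normal(D_POS)       # a 3D sample point
d = rng.standard_normal(D_DIR)       # viewing direction
s1 = rng.standard_normal(D_STYLE)    # embedding of style image 1
s2 = rng.standard_normal(D_STYLE)    # embedding of style image 2

sigma1, rgb1 = render_point(x, d, s1)
sigma2, rgb2 = render_point(x, d, s2)
# Density (geometry) is identical across styles; only the colour changes.
```

The key property the sketch demonstrates is the disentanglement: the hypernetwork only touches the appearance branch, so the scene geometry is shared across all styles, which is what makes the stylized views multi-view consistent.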
Citations
Proceedings Article
StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning
TL;DR: The authors propose a mutual learning framework for 3D scene stylization that combines a 2D image stylization network and neural radiance fields (NeRF), fusing the stylization ability of 2D networks with the 3D consistency of NeRF.
Journal Article
Neural Parameterization for Dynamic Human Head Editing
TL;DR: NeP is a hybrid representation that provides the advantages of both implicit and explicit methods; it is capable of photo-realistic rendering while allowing fine-grained editing of the scene geometry and appearance.
Journal Article
Recolorable Posterization of Volumetric Radiance Fields Using Visibility‐Weighted Palette Extraction
Kenji Tojo, Nobuyuki Umetani
TL;DR: This study investigates artistic posterization of volumetric radiance fields by extending a recent palette-based image-editing framework, which naturally enables intuitive color manipulation of the posterized results, to radiance fields.
Proceedings Article
3D Photo Stylization: Learning to Generate Stylized Novel Views from a Single Image
TL;DR: In this paper, a deep model is proposed that learns geometry-aware content features for stylization from a point cloud representation of the scene, resulting in high-quality stylized images that are consistent across views.
Proceedings Article
Artistic Style Novel View Synthesis Based on A Single Image
TL;DR: In this article, a view stylization framework is proposed that can convert a single 2D image into multiple stylized views by estimating dense optical flow between the source and novel views, so that the style transfer model can produce consistent results.
References
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings Article
Auto-Encoding Variational Bayes
Diederik P. Kingma, Max Welling
TL;DR: A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
Book Chapter
Perceptual Losses for Real-Time Style Transfer and Super-Resolution
TL;DR: In this paper, the authors combine the benefits of both approaches and propose the use of perceptual loss functions for training feed-forward networks for image style transfer, where a feed-forward network is trained to solve, in real time, the optimization problem proposed by Gatys et al.
Proceedings Article
Image Style Transfer Using Convolutional Neural Networks
TL;DR: A Neural Algorithm of Artistic Style is introduced that can separate and recombine the image content and style of natural images and provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.
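The style representation used throughout this line of work is the Gram matrix of CNN feature maps, which captures feature correlations independent of spatial layout. A minimal sketch of that statistic is below; the random arrays stand in for activations that would normally come from a pretrained network such as VGG, and the function names are invented for illustration.

```python
import numpy as np

def gram_matrix(feats):
    """Normalized Gram matrix of a (channels, height, width) feature map."""
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)          # flatten spatial dimensions
    return f @ f.T / (c * h * w)         # channel-by-channel correlations

def style_loss(feats_a, feats_b):
    """Squared Frobenius distance between the two Gram matrices."""
    diff = gram_matrix(feats_a) - gram_matrix(feats_b)
    return float(np.sum(diff ** 2))

rng = np.random.default_rng(0)
fa = rng.standard_normal((4, 8, 8))      # stand-in for style-image activations
fb = rng.standard_normal((4, 8, 8))      # stand-in for generated-image activations

# The loss is zero for identical feature maps and positive otherwise.
loss_same = style_loss(fa, fa)
loss_diff = style_loss(fa, fb)
```

Because the Gram matrix discards spatial arrangement, matching it transfers texture and colour statistics (style) while a separate content loss on raw activations preserves scene structure.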
Proceedings Article
Structure-from-Motion Revisited
TL;DR: This work proposes a new SfM technique that improves upon the state of the art to make a further step towards building a truly general-purpose pipeline.