Wenqi Xian
Researcher at Cornell University
Publications: 15
Citations: 2,723
Wenqi Xian is an academic researcher at Cornell University whose work focuses on rendering (computer graphics) and computer science. The author has an h-index of 9 and has co-authored 13 publications receiving 1,406 citations. Previous affiliations of Wenqi Xian include the Georgia Institute of Technology.
Papers
Proceedings ArticleDOI
BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning
Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, Trevor Darrell +7 more
TL;DR: This work constructs BDD100K, the largest driving video dataset, comprising 100K videos and 10 tasks, to evaluate the progress of image recognition algorithms on autonomous driving, and shows that special training strategies are needed for existing models to perform such heterogeneous tasks.
Posted Content
BDD100K: A Diverse Driving Video Database with Scalable Annotation Tooling.
TL;DR: This paper presents the design and implementation of a scalable annotation system that provides a comprehensive set of image labels for large-scale driving datasets, together with a new driving dataset that is an order of magnitude larger than previous efforts.
Proceedings ArticleDOI
TextureGAN: Controlling Deep Image Synthesis with Texture Patches
Wenqi Xian, Patsorn Sangkloy, Varun Agrawal, Amit Raj, Jingwan Lu, Chen Fang, Fisher Yu, James Hays +7 more
TL;DR: In this article, a user can place a texture patch on a sketch at arbitrary locations and scales to control the desired output texture, and a generative network learns to synthesize objects consistent with these texture suggestions.
Posted Content
TextureGAN: Controlling Deep Image Synthesis with Texture Patches
Wenqi Xian, Patsorn Sangkloy, Varun Agrawal, Amit Raj, Jingwan Lu, Chen Fang, Fisher Yu, James Hays +7 more
TL;DR: This paper is the first to examine texture control in deep image synthesis guided by sketch, color, and texture, and it develops a local texture loss, in addition to adversarial and content losses, to train the generative network.
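The TL;DR above mentions combining a local texture loss with adversarial and content losses. A minimal NumPy sketch of that combination follows; the function names, the raw-pixel patch comparison, and the loss weights are illustrative assumptions (the paper compares feature statistics via a pretrained network, not pixels), not the authors' implementation.

```python
import numpy as np

def content_loss(gen_feat, ref_feat):
    # L2 distance between feature maps (the content/perceptual term).
    return float(np.mean((gen_feat - ref_feat) ** 2))

def local_texture_loss(gen_img, texture_patch, loc, size):
    # Compare a crop of the generated image against the user-placed
    # texture patch at its placement location. Hypothetical pixel-space
    # simplification of the paper's feature-based texture term.
    y, x = loc
    crop = gen_img[y:y + size, x:x + size]
    return float(np.mean((crop - texture_patch) ** 2))

def total_loss(adv, content, texture, w_adv=1.0, w_content=1.0, w_tex=1.0):
    # Weighted sum of the three terms; weights here are placeholders.
    return w_adv * adv + w_content * content + w_tex * texture
```

In training, the adversarial term would come from a discriminator, while the texture term is evaluated only inside the regions where the user placed patches.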
Posted Content
Space-time Neural Irradiance Fields for Free-Viewpoint Video
TL;DR: A method that learns a spatiotemporal neural irradiance field for dynamic scenes from a single video, using scene depth estimated by video depth estimation methods and aggregating content from individual frames into a single global representation.
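The space-time field described above is rendered by sampling density and color at (x, y, z, t) along each ray and compositing them; the expected ray depth can then be supervised against the video depth estimates the TL;DR mentions. A minimal NumPy sketch under that reading follows; the `field(pts, t)` signature and the sampling bounds are assumptions for illustration, not the paper's code.

```python
import numpy as np

def render_ray(field, origin, direction, t, n_samples=64, near=0.1, far=4.0):
    # Sample the spatiotemporal field at points along the ray at time t,
    # then composite with the standard volume-rendering quadrature.
    ts = np.linspace(near, far, n_samples)
    pts = origin[None, :] + ts[:, None] * direction[None, :]
    sigma, rgb = field(pts, t)                 # densities (N,), colors (N, 3)
    delta = np.diff(ts, append=ts[-1] + (ts[1] - ts[0]))
    alpha = 1.0 - np.exp(-sigma * delta)       # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans                    # compositing weights
    color = (weights[:, None] * rgb).sum(axis=0)
    depth = (weights * ts).sum()               # expected ray termination depth
    return color, depth
```

A depth-supervision loss would then penalize, say, `(depth - estimated_depth) ** 2` per ray, tying the field's geometry at each time step to the monocular depth estimates.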