Jay Busch
Researcher at Google
Publications - 44
Citations - 1873
Jay Busch is an academic researcher at Google. The author has contributed to research in topics including Rendering (computer graphics) and Motion capture. The author has an h-index of 20 and has co-authored 44 publications receiving 1350 citations. Previous affiliations of Jay Busch include the University of Southern California and the Institute for Creative Technologies.
Papers
Proceedings Article
Multiview face capture using polarized spherical gradient illumination
TL;DR: A new pair of linearly polarized lighting patterns is presented that enables multiview diffuse–specular separation under a given spherical illumination condition from just two photographs, allowing more efficient acquisition of diffuse and specular albedo and normal maps from multiple viewpoints.
Journal Article
Immersive light field video with a layered mesh representation
Michael Broxton,John Flynn,Ryan Overbeck,Daniel Erickson,Peter Hedman,Matthew DuVall,Jason Dourgarian,Jay Busch,Matt Whalen,Paul Debevec +9 more
TL;DR: Advancing over previous work, this system is able to reproduce challenging content such as view-dependent reflections, semi-transparent surfaces, and near-field objects as close as 34 cm to the surface of the camera rig.
Journal Article
Single image portrait relighting
Tiancheng Sun,Jonathan T. Barron,Yun-Ta Tsai,Zexiang Xu,Xueming Yu,Graham Fyffe,Christoph Rhemann,Jay Busch,Paul Debevec,Ravi Ramamoorthi +9 more
TL;DR: In this paper, a neural network is trained on a small database of 18 individuals captured under different directional light sources in a controlled light stage setup consisting of a densely sampled sphere of lights.
Journal Article
Achieving eye contact in a one-to-many 3D video teleconferencing system
Andrew Jones,Magnus Lang,Graham Fyffe,Xueming Yu,Jay Busch,Ian E. McDowall,Mark Bolas,Paul Debevec +7 more
TL;DR: A set of algorithms and an associated display system produce correctly rendered eye contact between a three-dimensionally transmitted remote participant and a group of observers, reproducing the effects of gaze, attention, and eye contact generally missing in traditional teleconferencing.
Journal Article
The relightables: volumetric performance capture of humans with realistic relighting
Kaiwen Guo,Peter Lincoln,Philip Davidson,Jay Busch,Xueming Yu,Matt Whalen,Geoff Harvey,Sergio Orts-Escolano,Rohit Pandey,Jason Dourgarian,Danhang Tang,Anastasia Tkach,Adarsh Kowdle,Emily Cooper,Mingsong Dou,Sean Fanello,Graham Fyffe,Christoph Rhemann,Jonathan Taylor,Paul Debevec,Shahram Izadi +20 more
TL;DR: Multiple experiments, comparisons, and applications show that The Relightables significantly improves upon the level of realism in placing volumetrically captured human performances into arbitrary CG scenes.