Brian McWilliams
Researcher at Disney Research
Publications - 69
Citations - 4354
Brian McWilliams is an academic researcher from Disney Research. The author has contributed to research in topics including artificial neural networks and partial least squares regression. The author has an h-index of 22 and has co-authored 65 publications receiving 2978 citations. Previous affiliations of Brian McWilliams include Google and ETH Zurich.
Papers
Proceedings ArticleDOI
A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation
Federico Perazzi, Jordi Pont-Tuset, Brian McWilliams, L. Van Gool, Markus Gross, Alexander Sorkine-Hornung +5 more
TL;DR: This work presents a new benchmark dataset and evaluation methodology for the area of video object segmentation, named DAVIS (Densely Annotated VIdeo Segmentation), and provides a comprehensive analysis of several state-of-the-art segmentation approaches using three complementary metrics.
Journal ArticleDOI
Kernel-predicting convolutional networks for denoising Monte Carlo renderings
Steve Bako, Thijs Vogels, Brian McWilliams, Mark Meyer, Jan Novák, Alex Harvill, Pradeep Sen, Tony DeRose, Fabrice Rousselle +8 more
TL;DR: A novel supervised learning approach that allows the filtering kernel to be more complex and general by leveraging a deep convolutional neural network (CNN) architecture, and introduces a novel kernel-prediction network that uses the CNN to estimate the local weighting kernels used to compute each denoised pixel from its neighbors.
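The core reconstruction step described in this TL;DR — each denoised pixel is a weighted combination of its neighbors, with the weights predicted per pixel — can be sketched as below. This is a minimal illustration, not the paper's implementation: the kernel-predicting CNN is omitted, the per-pixel kernels are assumed given and already normalized, and the function names are hypothetical.

```python
import numpy as np

def apply_predicted_kernels(noisy, kernels):
    """Kernel-prediction reconstruction sketch.

    noisy:   (H, W) noisy image.
    kernels: (H, W, k, k) per-pixel weighting kernels, assumed to be
             the (normalized) output of a kernel-predicting network.
    Each output pixel is the kernel-weighted sum of its k x k neighborhood.
    """
    H, W = noisy.shape
    k = kernels.shape[2]
    r = k // 2
    padded = np.pad(noisy, r, mode="edge")  # replicate borders
    out = np.empty_like(noisy)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k]
            out[y, x] = np.sum(patch * kernels[y, x])
    return out

# Example: uniform 3x3 kernels act as a box filter.
img = np.random.default_rng(0).random((8, 8))
uniform = np.full((8, 8, 3, 3), 1.0 / 9.0)
denoised = apply_predicted_kernels(img, uniform)
```

Because the predicted weights are normalized, a constant image passes through unchanged, which is one practical appeal of predicting kernels rather than pixel values directly.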
Proceedings ArticleDOI
A Fully Progressive Approach to Single-Image Super-Resolution
Yifan Wang, Federico Perazzi, Brian McWilliams, Alexander Sorkine-Hornung, Olga Sorkine-Hornung, Christopher Schroers +5 more
TL;DR: ProGanSR, as discussed by the authors, proposes a progressive multi-scale GAN architecture that upsamples an image in intermediate steps, while the learning process is organized from easy to hard, as in curriculum learning.
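The progressive structure summarized above — reaching a large upscaling factor via intermediate ×2 steps rather than a single jump — can be sketched as below. This is an illustration only: each learned GAN stage is replaced by a nearest-neighbor placeholder, and the function names are hypothetical, not from the paper.

```python
import numpy as np

def stage_upscale2x(img):
    """Placeholder for one learned x2 stage (nearest-neighbor here;
    in a progressive SR model this would be a trained sub-network)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def progressive_sr(img, stages=3):
    """Upsample in intermediate x2 steps (x8 total for stages=3),
    keeping each intermediate scale, which is where progressive
    training can supervise the easier, lower-magnification outputs first.
    """
    outputs = []
    for _ in range(stages):
        img = stage_upscale2x(img)
        outputs.append(img)
    return outputs

low_res = np.random.default_rng(0).random((4, 4))
pyramid = progressive_sr(low_res, stages=3)  # scales x2, x4, x8
```

The easy-to-hard ordering mentioned in the TL;DR corresponds to training the ×2 stage before the ×4 and ×8 stages.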
Proceedings Article
The shattered gradients problem: if resnets are the answer, then what is the question?
TL;DR: It is shown that the correlation between gradients in standard feedforward networks decays exponentially with depth, resulting in gradients that resemble white noise, whereas the gradients in architectures with skip-connections are far more resistant to shattering, decaying sublinearly.
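The effect summarized in this TL;DR can be probed numerically: sweep a scalar input through a deep random ReLU net and measure how correlated the input gradient is at nearby inputs. The sketch below is a toy stand-in for the paper's analysis, assuming He-scaled random weights and an identity shortcut as a rough proxy for skip-connections; all function names are illustrative.

```python
import numpy as np

def input_gradient(x, w0, Ws, v, skip=False):
    """dy/dx for a scalar-in, scalar-out random ReLU net, by manual backprop.
    With skip=True each hidden layer adds an identity shortcut."""
    h = np.maximum(w0 * x, 0.0)
    masks = [w0 * x > 0]
    for W in Ws:
        z = W @ h
        masks.append(z > 0)
        h = np.maximum(z, 0.0) + (h if skip else 0.0)
    g = v.copy()
    for W, m in zip(reversed(Ws), reversed(masks[1:])):
        g_new = W.T @ (g * m)
        if skip:
            g_new = g_new + g  # shortcut passes the gradient through
        g = g_new
    return float((g * masks[0]) @ w0)

def lag1_autocorr(values):
    """Lag-1 autocorrelation; values near 0 indicate white-noise-like
    ('shattered') gradients, values near 1 a smoothly varying gradient."""
    g = np.asarray(values) - np.mean(values)
    return float(np.sum(g[:-1] * g[1:]) / np.sum(g * g))

rng = np.random.default_rng(0)
width, depth = 32, 20
w0 = rng.standard_normal(width)
v = rng.standard_normal(width)
Ws = [rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
      for _ in range(depth)]
xs = np.linspace(-2.0, 2.0, 400)

c_ff = lag1_autocorr([input_gradient(x, w0, Ws, v, skip=False) for x in xs])
c_skip = lag1_autocorr([input_gradient(x, w0, Ws, v, skip=True) for x in xs])
```

Comparing `c_ff` and `c_skip` across seeds and depths is one way to see the qualitative difference the paper describes; this toy does not reproduce its quantitative decay rates.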