
David Novotny

Researcher at Facebook

Publications -  47
Citations -  1321

David Novotny is an academic researcher from Facebook. The author has contributed to research in the topics Computer science and Object (computer science). The author has an h-index of 11 and has co-authored 37 publications receiving 648 citations. Previous affiliations of David Novotny include University College London and Xerox.

Papers
Posted Content

Accelerating 3D Deep Learning with PyTorch3D

TL;DR:
1. Accelerating 3D Deep Learning with PyTorch3D, arXiv 2007
2. Mesh R-CNN, ICCV 2019
3. SynSin: End-to-end View Synthesis from a Single Image, CVPR 2020
4. Fast Differentiable Raycasting for Neural Rendering using Sphere-based Representations.
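PyTorch3D exposes differentiable 3D operators (mesh data structures, surface point sampling, geometric losses) that plug directly into PyTorch autograd. The snippet below is a minimal usage sketch of a few of its documented building blocks (Meshes, sample_points_from_meshes, chamfer_distance); the toy geometry and the sample count are only illustrative.

```python
# Minimal PyTorch3D sketch: build a mesh, sample points on its surface, and
# compute a differentiable Chamfer distance to a target point cloud.
import torch
from pytorch3d.structures import Meshes
from pytorch3d.ops import sample_points_from_meshes
from pytorch3d.loss import chamfer_distance

# A single triangle as a toy mesh (verts: (V, 3) float, faces: (F, 3) long).
verts = torch.tensor([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]], requires_grad=True)
faces = torch.tensor([[0, 1, 2]], dtype=torch.int64)
mesh = Meshes(verts=[verts], faces=[faces])

# Differentiably sample points on the mesh surface.
points = sample_points_from_meshes(mesh, num_samples=500)  # (1, 500, 3)

# Compare against an arbitrary target cloud; gradients flow back to `verts`.
target = torch.rand(1, 500, 3)
loss, _ = chamfer_distance(points, target)
loss.backward()
```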
Proceedings ArticleDOI

C3DPO: Canonical 3D Pose Networks for Non-Rigid Structure From Motion

TL;DR: This work proposes C3DPO, a method for extracting 3D models of deformable objects from 2D keypoint annotations in unconstrained images by learning a deep network that reconstructs a 3D object from a single view at a time, and introduces a novel regularization technique.
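The sketch below is an illustrative reconstruction of the factorization idea described above, not the authors' released code: a small network maps the 2D keypoints of a single view to shape-basis coefficients and a viewpoint, and training minimises the 2D reprojection error. The layer sizes, keypoint count K, basis size D, and the orthographic camera are all assumptions made for the example.

```python
# Illustrative factorization sketch (not the C3DPO implementation).
import torch
import torch.nn as nn
from pytorch3d.transforms import axis_angle_to_matrix

K, D = 17, 10  # number of 2D keypoints and shape-basis size (illustrative)

class FactorizationNet(nn.Module):
    """Predict shape-basis coefficients and a viewpoint from 2D keypoints."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(2 * K, 256), nn.ReLU(),
                                      nn.Linear(256, 256), nn.ReLU())
        self.alpha_head = nn.Linear(256, D)   # per-view shape coefficients
        self.rot_head = nn.Linear(256, 3)     # axis-angle viewpoint
        self.basis = nn.Parameter(0.01 * torch.randn(D, K, 3))  # learned 3D basis

    def forward(self, kp2d):                  # kp2d: (B, K, 2)
        h = self.backbone(kp2d.flatten(1))
        shape = torch.einsum('bd,dkc->bkc', self.alpha_head(h), self.basis)
        R = axis_angle_to_matrix(self.rot_head(h))  # (B, 3, 3)
        return shape, R

net = FactorizationNet()
kp2d = torch.rand(4, K, 2)
shape, R = net(kp2d)
reproj = torch.einsum('bij,bkj->bki', R, shape)[..., :2]  # orthographic camera
loss = (reproj - kp2d).abs().mean()                       # 2D reprojection loss
loss.backward()
```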
Proceedings ArticleDOI

Learning 3D Object Categories by Looking Around Them

TL;DR: In this article, a Siamese viewpoint factorization network is proposed to align different videos together without explicitly comparing 3D shapes, and a 3D shape completion network is used to extract the full shape of an object from partial observations.
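A minimal sketch of what such a Siamese viewpoint network could look like follows; it is an assumption-laden illustration rather than the paper's architecture. A shared-weight encoder predicts an absolute rotation per frame, and supervision uses only the relative rotation between two frames of the same video (a toy identity rotation stands in for it here), so different videos never need to be compared directly.

```python
# Illustrative Siamese viewpoint sketch (not the paper's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewpointNet(nn.Module):
    """Shared-weight encoder mapping a frame to an absolute rotation,
    parameterised as 6D output turned into a rotation via Gram-Schmidt."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 6))

    def forward(self, img):                      # img: (B, 3, H, W)
        a, b = self.encoder(img).chunk(2, dim=-1)
        c1 = F.normalize(a, dim=-1)
        b = b - (c1 * b).sum(-1, keepdim=True) * c1
        c2 = F.normalize(b, dim=-1)
        c3 = torch.cross(c1, c2, dim=-1)
        return torch.stack([c1, c2, c3], dim=-1)  # (B, 3, 3) rotation matrix

net = ViewpointNet()
frame_a, frame_b = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
R_a, R_b = net(frame_a), net(frame_b)            # Siamese: shared weights
R_rel_pred = R_a.transpose(1, 2) @ R_b           # predicted relative rotation
R_rel_gt = torch.eye(3).expand(2, 3, 3)          # toy stand-in for ego-motion
loss = (R_rel_pred - R_rel_gt).pow(2).sum(dim=(1, 2)).mean()
loss.backward()
```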
Book ChapterDOI

Semi-convolutional Operators for Instance Segmentation

TL;DR: In this paper, the authors show theoretically and empirically that constructing dense pixel embeddings that can separate object instances cannot be easily achieved using convolutional operators, and they show that simple modifications, which they call semi-convolutional, have a much better chance of succeeding at this task.
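As a rough illustration of the semi-convolutional idea (mixing a convolutional embedding with the pixel's own coordinates, so that visually identical pixels in different instances can still be separated), the following sketch adds a normalised coordinate grid to two channels of a per-pixel embedding. The layer sizes and embedding dimension are invented for the example and are not the authors' configuration.

```python
# Illustrative semi-convolutional operator sketch.
import torch
import torch.nn as nn

class SemiConv(nn.Module):
    """Per-pixel embedding f(x) augmented with the pixel's normalised
    coordinates (the 'semi-convolutional' mixing of appearance and location)."""
    def __init__(self, in_ch, emb_dim=8):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, emb_dim, kernel_size=3, padding=1)

    def forward(self, feats):                     # feats: (B, C, H, W)
        emb = self.conv(feats)                    # translation-invariant part
        B, E, H, W = emb.shape
        ys = torch.linspace(-1, 1, H, device=feats.device)
        xs = torch.linspace(-1, 1, W, device=feats.device)
        grid_y, grid_x = torch.meshgrid(ys, xs, indexing='ij')
        coords = torch.stack([grid_x, grid_y]).expand(B, 2, H, W)
        pad = torch.zeros(B, E - 2, H, W, device=feats.device)
        return emb + torch.cat([coords, pad], dim=1)  # add location to 2 dims

op = SemiConv(in_ch=64)
out = op(torch.rand(1, 64, 32, 32))               # (1, 8, 32, 32) embeddings
```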
Proceedings ArticleDOI

Self-Supervised Learning of Geometrically Stable Features Through Probabilistic Introspection

TL;DR: This paper shows empirically that a network pre-trained in this manner requires significantly less supervision to learn semantic object parts than numerous pre-training alternatives, and that the pre-trained representation is excellent for semantic object matching.
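One common way to realise this kind of probabilistic introspection is to predict a per-pixel log-variance alongside the dense descriptor and weight the matching loss by it, letting the network down-weight unreliable regions. The sketch below illustrates that pattern under an identity-warp correspondence; it is an assumption-based illustration, not the paper's implementation.

```python
# Illustrative uncertainty-weighted dense matching sketch.
import torch
import torch.nn as nn

class DenseFeatureNet(nn.Module):
    """Dense descriptors plus a per-pixel log-variance ('introspection') map."""
    def __init__(self, dim=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim + 1, 3, padding=1))    # last channel: log-variance
        self.dim = dim

    def forward(self, img):
        out = self.body(img)
        return out[:, :self.dim], out[:, self.dim:]  # descriptors, log-variance

def introspective_match_loss(f1, f2, logvar):
    # Heteroscedastic matching loss between features of two views at known
    # corresponding pixels; the correspondence here is the identity warp.
    sq = (f1 - f2).pow(2).sum(dim=1, keepdim=True)   # (B, 1, H, W)
    return (torch.exp(-logvar) * sq + logvar).mean()

net = DenseFeatureNet()
view1, view2 = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
f1, logvar = net(view1)
f2, _ = net(view2)
loss = introspective_match_loss(f1, f2, logvar)
loss.backward()
```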