Christian Rupprecht

Researcher at University of Oxford

Publications -  89
Citations -  5325

Christian Rupprecht is an academic researcher from the University of Oxford. The author has contributed to research in topics including Computer science and Convolutional neural network. The author has an h-index of 21 and has co-authored 70 publications receiving 3637 citations. Previous affiliations of Christian Rupprecht include Johns Hopkins University and Technische Universität München.

Papers
Proceedings ArticleDOI

Deeper Depth Prediction with Fully Convolutional Residual Networks

TL;DR: Proposes a fully convolutional architecture incorporating residual learning to model the ambiguous mapping between monocular images and depth maps, together with a novel way to efficiently learn feature map up-sampling within the network.
Posted Content

Deeper Depth Prediction with Fully Convolutional Residual Networks

TL;DR: In this article, a fully convolutional architecture incorporating residual learning is proposed to model the ambiguous mapping between monocular images and depth maps; the network can be trained end-to-end and does not rely on post-processing techniques such as CRFs or other additional refinement steps.
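
The two entries above summarize the same depth-prediction work in its preprint and conference versions. As a rough illustration of the kind of architecture they describe (a fully convolutional residual encoder followed by learned up-sampling, trained end-to-end with a pixel-wise loss), the sketch below uses a torchvision ResNet-50 trunk and plain transposed-convolution up-sampling stages; the module choices, layer sizes, and L1 loss are illustrative stand-ins, not the paper's exact up-projection blocks or training loss.

```python
# Minimal sketch of a fully convolutional residual depth-prediction network.
# Assumption: a ResNet-50 trunk and transposed-convolution up-sampling stand in
# for the paper's exact architecture; sizes and loss are illustrative only.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class DepthFCRN(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=None)
        # Keep only the convolutional trunk (drop avgpool and fc) so the
        # network stays fully convolutional and accepts arbitrary input sizes.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])

        def up(cin, cout):
            # Learned up-sampling: each stage doubles spatial resolution.
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
            )

        self.decoder = nn.Sequential(
            up(2048, 1024), up(1024, 512), up(512, 256), up(256, 128),
            nn.Conv2d(128, 1, kernel_size=3, padding=1),  # 1-channel depth map
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# End-to-end training with a simple pixel-wise regression loss and no CRF or
# other post-processing, matching the end-to-end claim in the summary above.
if __name__ == "__main__":
    model = DepthFCRN()
    images = torch.randn(2, 3, 224, 224)   # dummy RGB batch
    target = torch.rand(2, 1, 112, 112)    # dummy depth at half resolution
    pred = model(images)
    loss = nn.functional.l1_loss(pred, target)
    loss.backward()
    print(pred.shape, loss.item())
```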
Posted Content

Self-labelling via simultaneous clustering and representation learning

TL;DR: The proposed novel, principled learning formulation self-labels visual data to train highly competitive image representations without manual labels, yielding the first self-supervised AlexNet that outperforms the supervised Pascal VOC detection baseline.
Proceedings Article

Self-labelling via simultaneous clustering and representation learning

TL;DR: In this paper, the authors propose maximizing the information between labels and input data indices to solve the cross-entropy minimization problem underlying unsupervised learning of deep neural networks.
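
As an illustration of what "simultaneous clustering and representation learning" can look like in code, the sketch below alternates a balanced (Sinkhorn-style) pseudo-label assignment with ordinary cross-entropy training. The normalization schedule, the temperature `eps`, and the per-batch assignment are assumptions made for brevity; the paper's exact formulation and training schedule may differ.

```python
# Rough illustration of alternating pseudo-label assignment and representation
# learning. The balanced (equipartition) assignment step uses a Sinkhorn-style
# normalization; treat this as a sketch, not a reimplementation of the paper.
import torch

@torch.no_grad()
def balanced_assignments(logits, n_iters=50, eps=0.05):
    """Turn an (N, K) logit matrix into pseudo-labels whose cluster sizes
    are approximately equal, i.e. every cluster receives about N/K samples."""
    q = torch.exp(logits / eps)                  # (N, K) positive scores
    n, k = q.shape
    for _ in range(n_iters):
        q = q / q.sum(dim=0, keepdim=True) / k   # normalize over clusters
        q = q / q.sum(dim=1, keepdim=True) / n   # normalize over samples
    return (q * n).argmax(dim=1)                 # hard label per sample

def train_step(model, optimizer, images):
    """One round of self-labelling: assign labels, then minimize cross-entropy."""
    logits = model(images)
    labels = balanced_assignments(logits.detach())
    loss = torch.nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice such balanced assignments are typically recomputed periodically over the full dataset rather than per mini-batch; the per-batch call here is only to keep the sketch self-contained.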
Proceedings ArticleDOI

Learning in an Uncertain World: Representing Ambiguity Through Multiple Hypotheses

TL;DR: This work proposes a framework for reformulating existing single-prediction models as multiple hypothesis prediction (MHP) models, together with an associated meta loss and optimization procedure to train them, and finds that MHP models outperform their single-hypothesis counterparts in all cases and expose valuable insights into the variability of predictions.
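
As a hedged sketch of the multiple-hypothesis idea summarized above, the function below implements a relaxed winner-takes-all meta loss over M predictions: the hypothesis closest to the ground truth receives most of the gradient, while a small `epsilon` share keeps the remaining heads from collapsing. The L2 base loss and the value of `epsilon` are illustrative choices rather than the paper's exact settings.

```python
# Sketch of a multiple-hypothesis "meta loss": the model emits M predictions
# and the relaxed winner-takes-all objective trains mainly the hypothesis
# closest to the ground truth. Base loss (L2) and epsilon are illustrative.
import torch

def mhp_meta_loss(hypotheses, target, epsilon=0.05):
    """hypotheses: (B, M, D) predictions; target: (B, D) ground truth."""
    b, m, _ = hypotheses.shape
    per_hyp = ((hypotheses - target.unsqueeze(1)) ** 2).mean(dim=2)  # (B, M)
    best = per_hyp.argmin(dim=1)                                     # (B,)
    # Soft assignment: the best hypothesis gets most of the weight,
    # the others share epsilon so every head keeps receiving gradient.
    weights = torch.full((b, m), epsilon / (m - 1), device=per_hyp.device)
    weights.scatter_(1, best.unsqueeze(1), 1.0 - epsilon)
    return (weights * per_hyp).sum(dim=1).mean()

# Example: a single-prediction regressor can be reformulated as an M-headed
# one by widening its output layer to M * D and reshaping to (B, M, D).
if __name__ == "__main__":
    B, M, D = 4, 5, 2
    preds = torch.randn(B, M, D, requires_grad=True)
    target = torch.randn(B, D)
    loss = mhp_meta_loss(preds, target)
    loss.backward()
    print(loss.item())
```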