Open Access · Proceedings Article

Learning in an Uncertain World: Representing Ambiguity Through Multiple Hypotheses

TL;DR
This work proposes a framework for reformulating existing single-prediction models as multiple hypothesis prediction (MHP) models, together with an associated meta loss and optimization procedure to train them; MHP models are found to outperform their single-hypothesis counterparts in all cases while exposing valuable insights into the variability of predictions.
Abstract
Many prediction tasks contain uncertainty. In some cases, uncertainty is inherent in the task itself. In future prediction, for example, many distinct outcomes are equally valid. In other cases, uncertainty arises from the way data is labeled. For example, in object detection, many objects of interest often go unlabeled, and in human pose estimation, occluded joints are often labeled with ambiguous values. In this work, we focus on a principled approach for handling such scenarios. In particular, we propose a framework for reformulating existing single-prediction models as multiple hypothesis prediction (MHP) models, along with an associated meta loss and optimization procedure to train them. To demonstrate our approach, we consider four diverse applications: human pose estimation, future prediction, image classification and segmentation. We find that MHP models outperform their single-hypothesis counterparts in all cases, and that they simultaneously expose valuable insights into the variability of predictions.
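The abstract describes reformulating a single-prediction model as a K-hypothesis predictor trained with a meta loss. The sketch below illustrates one common way to realize that idea, assuming PyTorch and a relaxed winner-takes-all meta loss; the class name MHPHead, the choice of K = 5, and the epsilon value are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class MHPHead(nn.Module):
    """Replaces a single output layer with K parallel hypothesis heads."""
    def __init__(self, feat_dim, out_dim, num_hypotheses=5):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, out_dim) for _ in range(num_hypotheses)]
        )

    def forward(self, features):
        # Shape (batch, K, out_dim): one prediction per hypothesis.
        return torch.stack([head(features) for head in self.heads], dim=1)

def mhp_meta_loss(hypotheses, target, eps=0.05):
    """Relaxed winner-takes-all meta loss.

    The best hypothesis per sample receives most of the weight; the remaining
    hypotheses share a small epsilon weight so they keep receiving gradient.
    """
    per_hyp = ((hypotheses - target.unsqueeze(1)) ** 2).mean(dim=-1)  # (batch, K)
    k = per_hyp.shape[1]
    winner = per_hyp.argmin(dim=1, keepdim=True)                      # (batch, 1)
    weights = torch.full_like(per_hyp, eps / (k - 1))                 # small weight for losers
    weights.scatter_(1, winner, 1.0 - eps)                            # large weight for the winner
    return (weights * per_hyp).sum(dim=1).mean()

# Usage with a stand-in feature extractor.
backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
head = MHPHead(feat_dim=32, out_dim=2, num_hypotheses=5)
x, y = torch.randn(8, 16), torch.randn(8, 2)
loss = mhp_meta_loss(head(backbone(x)), y)
loss.backward()

Keeping a small epsilon weight on the non-winning hypotheses prevents unused heads from collapsing, since every head continues to receive some gradient signal.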


Citations
Journal Article

Hematoma Expansion Context Guided Intracranial Hemorrhage Segmentation and Uncertainty Estimation

TL;DR: Wang et al. propose a slice expansion module (SEM) that effectively transfers contextual information between two adjacent slices by mapping predictions from one slice to the other; they design two information transmission paths, forward and backward slice expansion, and aggregate the results from both paths with a novel weighting strategy.
Journal Article

Epistemic and aleatoric uncertainties reduction with rotation variation for medical image segmentation with ConvNets

TL;DR: In this paper, the authors propose reducing uncertainty in medical image segmentation by training models with data augmentation: rotation transformations and noise are estimated by Monte Carlo simulation with prior parameter distributions, and the aleatoric uncertainty is quantified.
Posted Content

Enabling Viewpoint Learning through Dynamic Label Generation

TL;DR: This work proposes to separate viewpoint selection from rendering through an end-to-end learning approach, reducing the influence of mesh quality by predicting viewpoints from unstructured point clouds instead of polygonal meshes, and incorporating label generation into the training procedure so that the label decision adapts to the current network predictions.
Posted Content

CT Image Synthesis Using Weakly Supervised Segmentation and Geometric Inter-Label Relations For COVID Image Analysis.

TL;DR: In this article, a weakly supervised segmentation method is used to obtain pixel-level semantic label maps of images, which are then used to learn the intrinsic relationships of geometry and shape across semantic labels.
Proceedings Article

Feasible and Adaptive Multimodal Trajectory Prediction with Semantic Maneuver Fusion

TL;DR: In this paper, a novel Maneuver Fusion layer is proposed to incorporate logic-based semantic maneuvers into deep neural networks, and a hierarchical multi-task learning framework with an adaptive loss is designed to provide multimodal trajectory predictions.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously and won 1st place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors train a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieving state-of-the-art image classification performance.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Journal Article

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for document recognition; trained with gradient-based learning, the system can synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.