
Showing papers by "William G. Macready" published in 2019


Proceedings ArticleDOI
01 Oct 2019
TL;DR: In this article, the authors propose a robust object detection framework that is resilient to noise in bounding box class labels, locations, and size annotations. To adapt to domain shift, the model is trained on the target domain using noisy object bounding boxes obtained from a detection model trained only on the source domain.
Abstract: Domain shift is unavoidable in real-world applications of object detection. For example, in self-driving cars, the target domain consists of unconstrained road environments which cannot all possibly be observed in training data. Similarly, in surveillance applications sufficiently representative training data may be lacking due to privacy regulations. In this paper, we address the domain adaptation problem from the perspective of robust learning and show that the problem may be formulated as training with noisy labels. We propose a robust object detection framework that is resilient to noise in bounding box class labels, locations and size annotations. To adapt to the domain shift, the model is trained on the target domain using a set of noisy object bounding boxes that are obtained by a detection model trained only in the source domain. We evaluate the accuracy of our approach in various source/target domain pairs and demonstrate that the model significantly improves the state-of-the-art on multiple domain adaptation scenarios on the SIM10K, Cityscapes and KITTI datasets.
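
The recipe the abstract describes — pseudo-label the target domain with a source-trained detector, then train on those noisy boxes with a noise-tolerant objective — can be sketched roughly as below. This is a minimal illustration under assumed interfaces, not the authors' implementation: the detector call signature and the soft-bootstrapping loss are placeholders.

```python
# Minimal sketch of the pseudo-labelling recipe described above. The detector
# interface and the noise-tolerant loss are assumed placeholders, not the
# paper's actual architecture or objective.
import torch
import torch.nn.functional as F

def pseudo_label_target(source_detector, target_images, score_thresh=0.8):
    """Run a source-trained detector on unlabeled target images and keep
    confident detections as (noisy) pseudo ground-truth boxes."""
    pseudo = []
    with torch.no_grad():
        for img in target_images:
            boxes, scores, labels = source_detector(img)  # hypothetical API
            keep = scores > score_thresh
            pseudo.append((boxes[keep], labels[keep]))
    return pseudo

def robust_classification_loss(logits, noisy_labels, alpha=0.9):
    """Soft-bootstrapping loss: blend the noisy one-hot label with the
    model's own (detached) prediction, one common heuristic for learning
    with label noise."""
    one_hot = F.one_hot(noisy_labels, num_classes=logits.size(-1)).float()
    target = alpha * one_hot + (1 - alpha) * logits.softmax(-1).detach()
    return -(target * logits.log_softmax(-1)).sum(-1).mean()
```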

206 citations


Journal ArticleDOI
TL;DR: Tool annotation algorithms based on deep learning are evaluated for cataract surgery, comparing the quality of their annotations to that of human interpretations; these algorithms are expected to guide the design of efficient surgery monitoring tools in the near future.

76 citations


Posted Content
TL;DR: A robust object detection framework resilient to noise in bounding box class labels, locations, and size annotations is proposed; it significantly improves the state of the art on multiple domain adaptation scenarios on the SIM10K, Cityscapes and KITTI datasets.
Abstract: Domain shift is unavoidable in real-world applications of object detection. For example, in self-driving cars, the target domain consists of unconstrained road environments which cannot all possibly be observed in training data. Similarly, in surveillance applications sufficiently representative training data may be lacking due to privacy regulations. In this paper, we address the domain adaptation problem from the perspective of robust learning and show that the problem may be formulated as training with noisy labels. We propose a robust object detection framework that is resilient to noise in bounding box class labels, locations and size annotations. To adapt to the domain shift, the model is trained on the target domain using a set of noisy object bounding boxes that are obtained by a detection model trained only in the source domain. We evaluate the accuracy of our approach in various source/target domain pairs and demonstrate that the model significantly improves the state-of-the-art on multiple domain adaptation scenarios on the SIM10K, Cityscapes and KITTI datasets.

56 citations


Posted Content
TL;DR: An efficient method to train undirected approximate posteriors is developed by showing that the gradient of the training objective with respect to the parameters of the undirected posterior can be computed by backpropagation through Markov chain Monte Carlo updates.
Abstract: The representation of the approximate posterior is a critical aspect of effective variational autoencoders (VAEs). Poor choices for the approximate posterior have a detrimental impact on the generative performance of VAEs due to the mismatch with the true posterior. We extend the class of posterior models that may be learned by using undirected graphical models. We develop an efficient method to train undirected approximate posteriors by showing that the gradient of the training objective with respect to the parameters of the undirected posterior can be computed by backpropagation through Markov chain Monte Carlo updates. We apply these gradient estimators for training discrete VAEs with Boltzmann machines as approximate posteriors and demonstrate that undirected models outperform previous results obtained using directed graphical models. Our implementation is available at this https URL.
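
As a rough illustration of the idea of backpropagating through sampler updates: the toy below unrolls a few updates of a tiny restricted Boltzmann machine and differentiates through them. To keep the chain differentiable in this sketch, each discrete Gibbs update is replaced by its conditional mean — a simplification for exposition, not the paper's actual estimator.

```python
# Toy illustration of differentiating through unrolled sampler updates for a
# tiny restricted Boltzmann machine posterior. Real Gibbs updates are
# discrete; here each step is relaxed to its conditional mean so the chain
# stays differentiable end to end.
import torch

n_vis, n_hid, n_steps = 6, 4, 10
W = torch.randn(n_vis, n_hid, requires_grad=True)
b = torch.zeros(n_vis, requires_grad=True)
c = torch.zeros(n_hid, requires_grad=True)

v = torch.rand(1, n_vis)              # initial state of the chain
for _ in range(n_steps):              # unrolled "MCMC-like" updates
    h = torch.sigmoid(v @ W + c)      # relaxed update for hidden units
    v = torch.sigmoid(h @ W.t() + b)  # relaxed update for visible units

objective = v.mean()                  # placeholder training objective
objective.backward()                  # gradients flow through all updates
print(W.grad.shape)                   # torch.Size([6, 4])
```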

4 citations


Posted Content
TL;DR: An efficient method to train undirected posteriors is developed by showing that the gradient of the training objective with respect to the parameters of the undirected posterior can be computed by backpropagation through Markov chain Monte Carlo updates.
Abstract: The representation of the posterior is a critical aspect of effective variational autoencoders (VAEs). Poor choices for the posterior have a detrimental impact on the generative performance of VAEs due to the mismatch with the true posterior. We extend the class of posterior models that may be learned by using undirected graphical models. We develop an efficient method to train undirected posteriors by showing that the gradient of the training objective with respect to the parameters of the undirected posterior can be computed by backpropagation through Markov chain Monte Carlo updates. We apply these gradient estimators for training discrete VAEs with Boltzmann machine posteriors and demonstrate that undirected models outperform previous results obtained using directed graphical models as posteriors.

2 citations


Patent
05 Jul 2019
TL;DR: In this article, a digital processor runs a machine learning algorithm in parallel with a sampling server, and the sampling server continuously or intermittently draws samples for the machine learning algorithm during its execution.
Abstract: A digital processor runs a machine learning algorithm in parallel with a sampling server. The sampling server may continuously or intermittently draw samples for the machine learning algorithm during execution of the machine learning algorithm, for example on a given problem. The sampling server may run in parallel (e.g., concurrently, overlapping, simultaneously) with a quantum processor to draw samples from the quantum processor.
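
The producer/consumer pattern the patent describes can be sketched in a few lines: a background worker keeps a queue topped up with samples while the main loop consumes them. The random-spin sampler below is a stand-in for whatever actually generates the samples (e.g., a quantum processor in the patent's setting), and the training loop is a placeholder.

```python
# Sketch of the sampling-server pattern: a background worker keeps a queue
# topped up with samples while the main loop consumes them. The random-spin
# sampler is a stand-in for the actual sample source.
import threading, queue, random

sample_queue = queue.Queue(maxsize=64)

def sampling_server(num_vars=8):
    while True:
        spins = [random.choice([-1, 1]) for _ in range(num_vars)]
        sample_queue.put(spins)  # blocks when the queue is full

threading.Thread(target=sampling_server, daemon=True).start()

for step in range(100):          # placeholder machine learning loop
    batch = [sample_queue.get() for _ in range(4)]
    # ... consume `batch` in a gradient update here ...
```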

Patent
15 Aug 2019
TL;DR: In this article, generative and inference machine learning models for variational autoencoders with discrete-variable latent spaces are presented; the models may be relaxed via continuous proxies to support a wider range of training techniques, such as importance weighting.
Abstract: Generative and inference machine learning models with discrete-variable latent spaces are provided. Discrete variables may be transformed by a smoothing transformation with overlapping conditional distributions, or made natively reparametrizable by definition over a Gumbel distribution. Models may be trained by sampling from different models in the positive and negative phases, and/or by sampling with different frequencies in the two phases. Machine learning models may be defined over high-dimensional quantum statistical systems near a phase transition to take advantage of long-range correlations. Machine learning models may be defined over graph-representable input spaces and use multiple spanning trees to form latent representations. Machine learning models may be relaxed via continuous proxies to support a greater range of training techniques, such as importance weighting. Example architectures for (discrete) variational autoencoders using such techniques are also provided, as are techniques for improving training efficacy and sparsity of variational autoencoders.
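
One of the techniques named here, reparametrization over a Gumbel distribution, has a well-known generic form (the Gumbel-softmax, or Concrete, relaxation), sketched below. The temperature and tensor shapes are illustrative choices, not the patent's specifics.

```python
# Sketch of Gumbel-based reparametrization for discrete latent variables:
# categorical sampling is relaxed via Gumbel noise and a softmax so that
# gradients can flow through the "sample". Generic form, not the patent's.
import torch

def gumbel_softmax_sample(logits, temperature=0.5):
    """Draw a relaxed one-hot sample from the categorical given by `logits`."""
    u = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(u + 1e-20) + 1e-20)
    return torch.softmax((logits + gumbel) / temperature, dim=-1)

logits = torch.zeros(2, 5, requires_grad=True)  # batch of 2, 5 categories
z = gumbel_softmax_sample(logits)               # differentiable "sample"
z.sum().backward()                              # gradients reach `logits`
```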