
Proceedings ArticleDOI

Whitened Expectation Propagation: Non-Lambertian Shape from Shading and Shadow

23 Jun 2013-pp 1674-1681

TL;DR: This work proposes a variation of EP that exploits regularities in natural scene statistics to achieve run times that are linear in both number of pixels and clique size, and uses large, non-local cliques to exploit cast shadow, which is traditionally ignored in shape from shading.

Abstract: For problems over continuous random variables, MRFs with large cliques pose a challenge in probabilistic inference. Difficulties in performing optimization efficiently have limited the probabilistic models explored in computer vision and other fields. One inference technique that handles large cliques well is Expectation Propagation. EP offers run times independent of clique size, which instead depend only on the rank, or intrinsic dimensionality, of potentials. This property would be highly advantageous in computer vision. Unfortunately, for grid-shaped models common in vision, traditional Gaussian EP requires quadratic space and cubic time in the number of pixels. Here, we propose a variation of EP that exploits regularities in natural scene statistics to achieve run times that are linear in both number of pixels and clique size. We test these methods on shape from shading, and we demonstrate strong performance not only for Lambertian surfaces, but also on arbitrary surface reflectance and lighting arrangements, which requires highly non-Gaussian potentials. Finally, we use large, non-local cliques to exploit cast shadow, which is traditionally ignored in shape from shading.

Topics: Photometric stereo (56%), Clique (56%), Expectation propagation (54%), Gaussian process (52%), Approximate inference (52%)

Summary

1. Introduction

  • Probabilistic inference for large loopy graphical models has become an important subfield with a growing body of applications, including many in computer vision.
  • These methods have resulted in significant progress for several applications.
  • The principal difference between BP and Gaussian EP can thus be summarized by a trade-off in their respective approximating families: BP favors flexible non-Gaussian marginals, while Gaussian EP favors a flexible covariance structure.
  • Another possible explanation is that for a grid-based graphical model with D pixels, Gaussian EP requires O(D^2) space and a run time of O(D^3).
  • Finally, the authors use the method to efficiently perform inference over large cliques produced by cast shadows and by global spatial priors.

2. Expectation Propagation

  • The family P̃ is chosen so that E_P̃[τ_j(x)] can be estimated easily.
  • EP achieves this goal by approximating each potential function φ_i(x) with an exponential family distribution P̃_i(x_i | θ^(i)).
  • Regardless of the rank of each potential, the covariance matrix of the posterior S remains full-rank, and must be stored as a D×D matrix.
  • For large problems with tens of thousands of variables or more, this becomes limiting.
  • When the underlying graphical model is highly sparse, such as a nearest-neighbor pairwise-connected MRF, each iteration can be performed in time O(D^1.5) [2].
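The clique-size independence noted above is concrete in the Gaussian case: a rank-1 site touching a clique of any size enters the posterior through a single direction v, so the EP update is a Sherman-Morrison correction costing O(D^2) time, never O(D^3). A minimal sketch, assuming a generic natural-parameter site exp(eta*(v.x) - 0.5*lam*(v.x)^2) that is not taken from the paper:

```python
import numpy as np

def rank1_ep_update(mu, S, v, lam, eta):
    """Fold the rank-1 Gaussian site exp(eta*(v.x) - 0.5*lam*(v.x)^2)
    into the posterior N(mu, S) via the Sherman-Morrison identity.
    Cost is O(D^2), independent of how many variables v touches."""
    Sv = S @ v                          # O(D^2)
    s = float(v @ Sv)                   # prior marginal variance of v.x
    denom = 1.0 + lam * s
    S_new = S - np.outer(Sv, Sv) * (lam / denom)
    mu_new = mu + Sv * (eta - lam * float(v @ mu)) / denom
    return mu_new, S_new
```

Even so, S itself remains a dense D×D matrix, which is the O(D^2)-space bottleneck the whitened variant targets.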

3. Whitened EP

  • For many problems of computer vision, both the number of variables D and the number of potentials N grow linearly with the number of pixels.
  • Low-rank potentials of large clique size have a wide array of promising applications in computer vision [17, 10].
  • Expectation propagation can be made more efficient by limiting the forms of covariance structure expressible by the posterior covariance S; the constraint is derived from the covariance structure of natural scenes.
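One way to read the whitening idea: represent the problem in coordinates where the natural-scene covariance is the identity, so that a diagonal covariance approximation (O(D) storage) becomes accurate and updates stay linear in the number of pixels. The sketch below is an illustration under an assumed stationary 1/f^2 power spectrum, which is diagonalized by the DFT; the paper's actual choice of scene statistics may differ.

```python
import numpy as np

D = 64
# Assumed 1/f^2 power spectrum as a stand-in for natural-scene statistics;
# a stationary covariance like this is diagonalized by the DFT.
freqs = np.fft.fftfreq(D)
power = 1.0 / (np.abs(freqs) + 1.0 / D) ** 2

def color(w):
    """Impose the assumed scene covariance on white noise w, in O(D log D)."""
    return np.real(np.fft.ifft(np.fft.fft(w) * np.sqrt(power)))

def whiten(x):
    """Map a scene-distributed signal to (approximately) white coordinates,
    where a diagonal covariance approximation is accurate."""
    return np.real(np.fft.ifft(np.fft.fft(x) / np.sqrt(power)))
```

Because `whiten` exactly inverts `color`, a signal drawn with the scene covariance becomes uncorrelated white noise in the new basis, which is what makes a diagonal S faithful there.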

4. Shape from Shading

  • Whitened EP permits inference over images in linear time with respect to both pixels and clique size.
  • In particular, the authors are interested in whether Gaussian message approximation will be effective when the potentials φ_i are highly non-Gaussian.
  • In recent years, several methods have been developed that solve the classical SfS problem well as long as surface reflectance R is assumed to be Lambertian [19, 17, 6, 3, 7].
  • For each pixel, one potential φ_R(p, q | i) constrains the surface normal to be consistent with the known pixel intensity i(x, y).
  • Whitened EP provides two benefits for spatial priors.
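As an illustration of the per-pixel intensity potential in the Lambertian special case (the Gaussian noise model and σ below are assumptions for the sketch; the paper's formulation also admits arbitrary reflectance R):

```python
import numpy as np

def lambertian_intensity(p, q, light):
    """Rendered intensity for surface gradient (p, q) = (dz/dx, dz/dy)
    under a distant light: n = (-p, -q, 1)/||.||, i = max(0, n . light)."""
    n = np.array([-p, -q, 1.0])
    n /= np.linalg.norm(n)
    return max(0.0, float(n @ light))

def shading_potential(p, q, i_obs, light, sigma=0.05):
    """Unnormalized log-potential log phi_R(p, q | i): a Gaussian penalty
    on the mismatch between rendered and observed intensity (illustrative
    noise model; sigma is an assumed parameter)."""
    r = lambertian_intensity(p, q, light) - i_obs
    return -0.5 * (r / sigma) ** 2
```

Since the rendering map from (p, q) to intensity is nonlinear, this potential is non-Gaussian in the surface gradients, which is precisely the regime the Gaussian message approximation is being tested on.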

5. Conclusions

  • The methods in this paper reduce the run time of EP from cubic to linear in the number of pixels for visual inference, while retaining a run time that is linear in clique size.
  • The computational expense of inference for large cliques has prohibited the investigation of complex probabilistic models for vision.
  • The authors' hope is that whitened EP will facilitate further research in these directions.
  • Results for whitened EP on SfS show that the sacrifice in performance for this approach is small, even in problems with highly non-Gaussian potentials.
  • Performance remained strong for surfaces with arbitrary reflectance and arbitrary lighting, which is a novel finding in SfS.


Citations

Dissertation
31 May 2014
TL;DR: This thesis focuses on studying the statistical properties of single objects and their range images which can benefit shape inference techniques, including laser-acquired depth, binocular stereo, photometric stereo and High Dynamic Range (HDR) photography.
Abstract: Depth inference is a fundamental problem of computer vision with a broad range of potential applications. Monocular depth inference, particularly shape from shading, dates back to as early as the 1940s, when it was first used to study the shape of the lunar surface. Since then there has been ample research to develop depth inference algorithms using monocular cues. Most of these are based on physical models of image formation and rely on a number of simplifying assumptions that do not hold for real world and natural imagery. Very few make use of the rich statistical information contained in real world images and their 3D information. There have been a few notable exceptions though. The study of statistics of natural scenes has been concentrated on outdoor scenes which are cluttered. Statistics of scenes of single objects has been less studied, but is an essential part of daily human interaction with the environment. Inferring shape of single objects is a very important computer vision problem which has captured the interest of many researchers over the past few decades and has applications in object recognition, robotic grasping, fault detection and Content Based Image Retrieval (CBIR). This thesis focuses on studying the statistical properties of single objects and their range images which can benefit shape inference techniques. I acquired two databases: Single Object Range and HDR (SORH) and the Eton Myers Database of single objects, including laser-acquired depth, binocular stereo, photometric stereo and High Dynamic Range (HDR) photography. I took a data driven approach and studied the statistics of color and range images of real scenes of single objects along with whole 3D objects and uncovered

2 citations


Posted Content
Abstract: This paper presents a new Expectation Propagation (EP) framework for image restoration using patch-based prior distributions. While Monte Carlo techniques are classically used to sample from intractable posterior distributions, they can suffer from scalability issues in high-dimensional inference problems such as image restoration. To address this issue, EP is used here to approximate the posterior distributions using products of multivariate Gaussian densities. Moreover, imposing structural constraints on the covariance matrices of these densities allows for greater scalability and distributed computation. While the method is naturally suited to handle additive Gaussian observation noise, it can also be extended to non-Gaussian noise. Experiments conducted for denoising, inpainting and deconvolution problems with Gaussian and Poisson noise illustrate the potential benefits of such flexible approximate Bayesian method for uncertainty quantification in imaging problems, at a reduced computational cost compared to sampling techniques.

Dissertation
31 Aug 2013
TL;DR: This research builds an intelligent system based on brachiopod fossil images and their descriptions published in Treatise on Invertebrate Paleontology to compare fossil images directly, without referring to textual information.
Abstract: Science advances not only because of new discoveries, but also due to revolutionary ideas drawn from accumulated data. The quality of studies in paleontology, in particular, depends on accessibility of fossil data. This research builds an intelligent system based on brachiopod fossil images and their descriptions published in Treatise on Invertebrate Paleontology. The project is still ongoing and some significant developments will be discussed here. This thesis has two major parts. The first part describes the digitization, organization and integration of information extracted from the Treatise. The Treatise is in PDF format and it is non-trivial to convert large volumes into a structured, easily accessible digital library. Three important topics will be discussed: (1) how to extract data entries from the text, and save them in a structured manner; (2) how to crop individual specimen images from figures automatically, and associate each image with text entries; (3) how to build a search engine to perform both keyword search and natural language search. The search engine already has a web interface and many useful tasks can be done with ease. Verbal descriptions are second-hand information of fossil images and thus have limitations. The second part of the thesis develops an algorithm to compare fossil images directly, without referring to textual information. After similarities between fossil images are calculated, we can use the results in image search, fossil classification, and so on. The algorithm is based on deformable templates, and utilizes expectation propagation to find the optimal deformation. Specifically, I superimpose a “warp” on each image. Each node of the warp encapsulates a vector of local texture features, and comparing two images involves two steps: (1) deform the warp to the optimal configuration, so the energy function is minimized; and (2) based on the optimal configuration, compute the distance between the two images. Experiment results confirmed that the method is reasonable and robust.

References

Proceedings ArticleDOI
20 Jun 2005
TL;DR: A new measure, the method noise, is proposed, to evaluate and compare the performance of digital image denoising methods, and a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image is proposed.
Abstract: We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.

5,832 citations


"Whitened Expectation Propagation: N..." refers background in this paper

  • ...Cues that stem from non-local similarity within a scene have been applied very successfully towards image denoising [2]....


Proceedings Article
12 Dec 2011
TL;DR: This paper considers fully connected CRF models defined on the complete set of pixels in an image and proposes a highly efficient approximate inference algorithm in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels.
Abstract: Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.

2,822 citations


"Whitened Expectation Propagation: N..." refers methods in this paper

  • ...Others have advanced methods of inference which can be applied to probabilistic models over discrete variables with large cliques [10, 24], or large numbers of small cliques [12]....


Journal ArticleDOI
TL;DR: This work explains how to obtain region-based free energy approximations that improve the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms, and describes empirical results showing that GBP can significantly outperform BP.
Abstract: Important inference problems in statistical physics, computer vision, error-correcting coding theory, and artificial intelligence can all be reformulated as the computation of marginal probabilities on factor graphs. The belief propagation (BP) algorithm is an efficient way to solve these problems that is exact when the factor graph is a tree, but only approximate when the factor graph has cycles. We show that BP fixed points correspond to the stationary points of the Bethe approximation of the free energy for a factor graph. We explain how to obtain region-based free energy approximations that improve the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms. We emphasize the conditions a free energy approximation must satisfy in order to be a "valid" or "maxent-normal" approximation. We describe the relationship between four different methods that can be used to generate valid approximations: the "Bethe method", the "junction graph method", the "cluster variation method", and the "region graph method". Finally, we explain how to tell whether a region-based approximation, and its corresponding GBP algorithm, is likely to be accurate, and describe empirical results showing that GBP can significantly outperform BP.

1,740 citations


"Whitened Expectation Propagation: N..." refers background in this paper

  • ...In addition to its efficient run time, unifying many pairwise potentials into one large potential increases the fidelity of the Bethe approximation implicit in message passing algorithms [25]....


Proceedings Article
02 Aug 2001
TL;DR: Expectation Propagation approximates the belief states by only retaining expectations, such as mean and variance, and iterates until these expectations are consistent throughout the network, which makes it applicable to hybrid networks with discrete and continuous nodes.
Abstract: This paper presents a new deterministic approximation technique in Bayesian networks. This method, "Expectation Propagation," unifies two previous techniques: assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. Loopy belief propagation, because it propagates exact belief states, is useful for a limited class of belief networks, such as those which are purely discrete. Expectation Propagation approximates the belief states by only retaining expectations, such as mean and variance, and iterates until these expectations are consistent throughout the network. This makes it applicable to hybrid networks with discrete and continuous nodes. Experiments with Gaussian mixture models show Expectation Propagation to be convincingly better than methods with similar computational cost: Laplace's method, variational Bayes, and Monte Carlo. Expectation Propagation also provides an efficient algorithm for training Bayes point machine classifiers.

1,386 citations


"Whitened Expectation Propagation: N..." refers background in this paper

  • ...Computing V_i S V_i′ here does not require sampling, and can be found analytically [14]....

  • ...Minka showed that when x is discrete-valued and the approximating exponential family is a product of independent univariate discrete distributions, then EP is equivalent to classical belief propagation (BP) [14]....

  • ...When the approximating family is a product of independent univariate marginals, EP is equivalent to BP [14]....

  • ...In 2001, Minka proposed a generalization of BP known as Expectation Propagation [14]....

  • ...Recall that when S is constrained to be diagonal, EP is equivalent to belief propagation [14]....


Posted Content
Abstract: This paper presents a new deterministic approximation technique in Bayesian networks. This method, "Expectation Propagation", unifies two previous techniques: assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. All three algorithms try to recover an approximate distribution which is close in KL divergence to the true distribution. Loopy belief propagation, because it propagates exact belief states, is useful for a limited class of belief networks, such as those which are purely discrete. Expectation Propagation approximates the belief states by only retaining certain expectations, such as mean and variance, and iterates until these expectations are consistent throughout the network. This makes it applicable to hybrid networks with discrete and continuous nodes. Expectation Propagation also extends belief propagation in the opposite direction - it can propagate richer belief states that incorporate correlations between nodes. Experiments with Gaussian mixture models show Expectation Propagation to be convincingly better than methods with similar computational cost: Laplace's method, variational Bayes, and Monte Carlo. Expectation Propagation also provides an efficient algorithm for training Bayes point machine classifiers.

1,364 citations