# Whitened Expectation Propagation: Non-Lambertian Shape from Shading and Shadow

TL;DR: This work proposes a variation of EP that exploits regularities in natural scene statistics to achieve run times that are linear in both number of pixels and clique size, and uses large, non-local cliques to exploit cast shadow, which is traditionally ignored in shape from shading.

Abstract: For problems over continuous random variables, MRFs with large cliques pose a challenge in probabilistic inference. Difficulties in performing optimization efficiently have limited the probabilistic models explored in computer vision and other fields. One inference technique that handles large cliques well is Expectation Propagation. EP offers run times independent of clique size, which instead depend only on the rank, or intrinsic dimensionality, of potentials. This property would be highly advantageous in computer vision. Unfortunately, for grid-shaped models common in vision, traditional Gaussian EP requires quadratic space and cubic time in the number of pixels. Here, we propose a variation of EP that exploits regularities in natural scene statistics to achieve run times that are linear in both number of pixels and clique size. We test these methods on shape from shading, and we demonstrate strong performance not only for Lambertian surfaces, but also on arbitrary surface reflectance and lighting arrangements, which requires highly non-Gaussian potentials. Finally, we use large, non-local cliques to exploit cast shadow, which is traditionally ignored in shape from shading.

## Summary (2 min read)

### 1. Introduction

- Probabilistic inference for large loopy graphical models has become an important subfield with a growing body of applications, including many in computer vision.
- These methods have resulted in significant progress for several applications.
- The principal difference between BP and Gaussian EP can thus be summarized by a trade-off in their respective approximating families: BP favors flexible non-Gaussian marginals, while Gaussian EP favors a flexible covariance structure.
- Another possible explanation for Gaussian EP's limited adoption in vision is that, for a grid-based graphical model with D pixels, it requires O(D^2) space and O(D^3) run time.
- Finally, the authors use the method to efficiently perform inference over large cliques produced by cast shadows and by global spatial priors.

### 2. Expectation Propagation

- The family P̃ is chosen so that E_P̃[τ_j(x)] can be estimated easily.
- EP achieves this goal by approximating each potential function φ_i(x) with an exponential family distribution P̃_i(x_i | θ^(i)).
- Regardless of the rank of each potential, the covariance matrix S of the posterior remains full-rank and must be stored as a D×D matrix.
- For large problems with tens of thousands of variables or more, this becomes limiting.
- When the underlying graphical model is highly sparse, such as nearest-neighbor pairwise-connected MRFs, each iteration can be performed in time O(D^1.5) [2].
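The moment-matching loop at the heart of EP can be sketched in one dimension (a toy illustration, not the paper's grid model; the sigmoid factor shapes and grid quadrature are my own choices). Each non-Gaussian potential t_i is approximated by an unnormalized Gaussian "site", refined by matching moments of the tilted distribution:

```python
import numpy as np

# Minimal 1D Gaussian EP sketch (illustrative, not the paper's model).
# Target posterior: p(x) ∝ N(x; 0, 1) * t_0(x) * t_1(x), where the t_i
# are non-Gaussian sigmoid factors. Each t_i gets a Gaussian site with
# natural parameters (tau_i, nu_i), refined by moment matching.

xs = np.linspace(-8.0, 8.0, 4001)      # grid for numerical moments
dx = xs[1] - xs[0]

# Two non-Gaussian factors (soft steps, like probit/logit likelihoods)
factors = [1.0 / (1.0 + np.exp(-4.0 * (xs - 1.0))),
           1.0 / (1.0 + np.exp(-4.0 * (xs + 0.5)))]

tau = np.zeros(2)   # site precisions
nu = np.zeros(2)    # site precision-times-mean

for sweep in range(20):
    for i, t_i in enumerate(factors):
        # Cavity: prior (precision 1, mean 0) times all other sites
        tau_cav = 1.0 + tau.sum() - tau[i]
        nu_cav = nu.sum() - nu[i]
        cavity = np.exp(-0.5 * tau_cav * xs**2 + nu_cav * xs)
        # Tilted distribution: cavity times the exact non-Gaussian factor
        tilted = cavity * t_i
        Z = tilted.sum() * dx
        m = (xs * tilted).sum() * dx / Z
        v = ((xs - m)**2 * tilted).sum() * dx / Z
        # Moment matching: the new site makes the approximation hit (m, v)
        tau[i] = 1.0 / v - tau_cav
        nu[i] = m / v - nu_cav

post_prec = 1.0 + tau.sum()
post_mean = nu.sum() / post_prec

# Exact posterior mean on the same grid, for comparison
exact = np.exp(-0.5 * xs**2) * factors[0] * factors[1]
exact_mean = (xs * exact).sum() / exact.sum()
print(post_mean, exact_mean)   # the EP mean closely tracks the exact mean
```

The key property the summary highlights shows up here: each update touches only the low-dimensional tilted distribution, regardless of how many variables a potential would couple in the full model.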

### 3. Whitened EP

- For many problems of computer vision, both the number of variables D and the number of potentials N grow linearly with the number of pixels.
- Low-rank potentials of large clique size have a wide array of promising applications in computer vision [17, 10].
- Expectation propagation can be made more efficient by limiting the forms of covariance structure expressible by S; whitened EP constrains this structure using the covariance statistics of natural scenes.
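The whitening idea can be sketched in one dimension (my own illustration under an assumed 1/f²-style scene spectrum, not the paper's exact construction): when scene covariance is stationary, the Fourier basis diagonalizes it, so whitening costs O(D log D), and a diagonal covariance in the whitened domain still encodes long-range spatial correlations in the original domain:

```python
import numpy as np

# Sketch of whitening with a stationary "natural scene" covariance.
# The 1/f^2-style spectrum below is an assumption for illustration.

rng = np.random.default_rng(0)
D = 256

freqs = np.fft.rfftfreq(D, d=1.0)
power = 1.0 / np.maximum(freqs, freqs[1]) ** 2   # assumed ~1/f^2 spectrum

def sample_scene():
    """Draw a signal whose covariance matches the assumed spectrum."""
    coefs = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
    coefs *= np.sqrt(power)
    return np.fft.irfft(coefs, n=D)

def whiten(x):
    """Divide out the spectrum so components become decorrelated."""
    return np.fft.irfft(np.fft.rfft(x) / np.sqrt(power), n=D)

samples = np.stack([sample_scene() for _ in range(2000)])
white = np.stack([whiten(x) for x in samples])

def lag1_corr(batch):
    """Correlation between neighboring samples, pooled over the batch."""
    a, b = batch[:, :-1].ravel(), batch[:, 1:].ravel()
    return np.corrcoef(a, b)[0, 1]

# Raw scenes are strongly correlated pixel-to-pixel; whitened ones are not.
print(lag1_corr(samples), lag1_corr(white))
```

A diagonal posterior covariance maintained over the whitened variables therefore expresses far richer structure than a diagonal covariance over raw pixels, which is one way to read the summary's claim about limiting the expressible covariance forms.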

### 4. Shape from Shading

- Whitened EP permits inference over images in linear time with respect to both pixels and clique size.
- In particular, the authors are interested in whether Gaussian message approximation will be effective when the potentials φi are highly non-Gaussian.
- In recent years, several methods have been developed that solve the classical SfS problem well as long as surface reflectance R is assumed to be Lambertian [19, 17, 6, 3, 7].
- For each pixel, one potential φ_R(p, q | i) constrains the surface normal to be consistent with the known pixel intensity i(x, y).
- Whitened EP provides two benefits for spatial priors.
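A Lambertian version of the per-pixel potential can be sketched as follows (a toy illustration: the Gaussian form and the noise level sigma are my assumptions, and the paper's non-Lambertian potentials are more general):

```python
import numpy as np

# Toy per-pixel reflectance potential. Under Lambertian shading, a surface
# with gradient (p, q) and light direction L produces intensity
# i = max(0, n · L), where n = (-p, -q, 1) / sqrt(1 + p^2 + q^2) is the
# unit surface normal. phi_R(p, q | i) scores how consistent (p, q) is
# with the observed intensity i.

def lambertian_intensity(p, q, light):
    """Rendered intensity of a patch with gradient (p, q)."""
    n = np.array([-p, -q, 1.0])
    n /= np.linalg.norm(n)
    return max(0.0, float(n @ light))

def phi_R(p, q, i_obs, light, sigma=0.05):
    """Unnormalized Gaussian potential around the observed intensity
    (sigma is an assumed noise level, not taken from the paper)."""
    r = lambertian_intensity(p, q, light) - i_obs
    return np.exp(-0.5 * (r / sigma) ** 2)

light = np.array([0.0, 0.0, 1.0])  # overhead illumination

# A flat patch (p = q = 0) faces the light, so intensity 1 is consistent:
print(phi_R(0.0, 0.0, 1.0, light))   # ≈ 1.0
# A steeply tilted patch cannot produce intensity 1 under this light:
print(phi_R(2.0, 0.0, 1.0, light))   # ≈ 0
```

Because many gradients (p, q) map to the same intensity, this potential is highly non-Gaussian over (p, q), which is exactly the regime the paper tests Gaussian message approximation in.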

### 5. Conclusions

- The methods in this paper reduce the run time of EP from cubic to linear in the number of pixels for visual inference, while retaining a run time that is linear in clique size.
- The computational expense of inference for large cliques has prohibited the investigation of complex probabilistic models for vision.
- The authors' hope is that whitened EP will facilitate further research in these directions.
- Results for whitened EP on SfS show that the sacrifice in performance for this approach is small, even in problems with highly non-Gaussian potentials.
- Performance remained strong for surfaces with arbitrary reflectance and arbitrary lighting, which is a novel finding in SfS.

