
Showing papers by "Andrés Bruhn published in 2012"


Journal ArticleDOI
01 Nov 2012
TL;DR: This approach is the first to capture facial performances of such high quality from a single stereo rig, and it is demonstrated that it brings facial performance capture out of the studio, into the wild, and within the reach of everybody.
Abstract: Recent progress in passive facial performance capture has shown impressively detailed results on highly articulated motion. However, most methods rely on complex multi-camera set-ups, controlled lighting or fiducial markers. This prevents them from being used in general environments, outdoor scenes, during live action on a film set, or by freelance animators and everyday users who want to capture their digital selves. In this paper, we therefore propose a lightweight passive facial performance capture approach that is able to reconstruct high-quality dynamic facial geometry from only a single pair of stereo cameras. Our method succeeds under uncontrolled and time-varying lighting, and also in outdoor scenes. Our approach builds upon and extends recent image-based scene flow computation, lighting estimation and shading-based refinement algorithms. It integrates them into a pipeline that is specifically tailored towards facial performance reconstruction from challenging binocular footage under uncontrolled lighting. In an experimental evaluation, the strong capabilities of our method become evident: We achieve detailed and spatio-temporally coherent results for expressive facial motion in both indoor and outdoor scenes -- even from low quality input images recorded with a hand-held consumer stereo camera. We believe that our approach is the first to capture facial performances of such high quality from a single stereo rig and we demonstrate that it brings facial performance capture out of the studio, into the wild, and within the reach of everybody.

178 citations


Book ChapterDOI
16 Jul 2012
TL;DR: The proposed reparametrization is generic and can be applied to almost every existing algorithm; its advantages are illustrated using the classic TV-L1 optical flow algorithm as a prototype, demonstrating that this widely used method can produce results competitive with current state-of-the-art methods.
Abstract: We consider the problem of interpolating frames in an image sequence. For this purpose, accurate motion estimation can be very helpful. We propose to move the motion estimation from the surrounding frames directly to the unknown frame by parametrizing the optical flow objective function such that the interpolation assumption is directly modeled. This reparametrization is a powerful trick that results in a number of appealing properties; in particular, the motion estimation becomes more robust to noise and large displacements, and the computational workload is more than halved compared to usual bidirectional methods. The proposed reparametrization is generic and can be applied to almost every existing algorithm. In this paper we illustrate its advantages by considering the classic TV-L1 optical flow algorithm as a prototype. We demonstrate that this widely used method can produce results that are competitive with current state-of-the-art methods. Finally, we show that the scheme can be implemented on graphics hardware such that it becomes possible to double the frame rate of 640×480 video footage at 30 fps, i.e. to perform frame doubling in real time.
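To make the interpolation assumption concrete: once a flow field has been estimated at the unknown intermediate frame, synthesizing that frame amounts to sampling each neighbouring frame half a displacement away and averaging. A minimal NumPy sketch with nearest-neighbour sampling and a hypothetical function name; the paper's actual method estimates the flow variationally and runs on graphics hardware:

```python
import numpy as np

def interpolate_midframe(frame0, frame1, flow):
    """Synthesize the frame halfway between frame0 and frame1.

    `flow` is assumed to be parametrized at the *unknown* intermediate
    frame: pixel (y, x) of the new frame moves by -0.5*flow into frame0
    and by +0.5*flow into frame1. Nearest-neighbour sampling keeps the
    sketch short; a real implementation would interpolate bilinearly.
    """
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)

    # Sample positions in the two surrounding frames (clamped to the image).
    x0 = np.clip(np.rint(xs - 0.5 * flow[..., 0]), 0, w - 1).astype(int)
    y0 = np.clip(np.rint(ys - 0.5 * flow[..., 1]), 0, h - 1).astype(int)
    x1 = np.clip(np.rint(xs + 0.5 * flow[..., 0]), 0, w - 1).astype(int)
    y1 = np.clip(np.rint(ys + 0.5 * flow[..., 1]), 0, h - 1).astype(int)

    # Average the two warped neighbours.
    return 0.5 * (frame0[y0, x0] + frame1[y1, x1])
```

Because the flow lives at the new frame, a single field suffices; the usual bidirectional approach would need one flow computation per direction, which is where the halved workload comes from.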

58 citations


Journal ArticleDOI
TL;DR: The results prove that dense variational methods can be a serious alternative even in classical application domains of sparse feature based approaches.
Abstract: There are two main strategies for solving correspondence problems in computer vision: sparse local feature based approaches and dense global energy based methods. While sparse feature based methods are often used for estimating the fundamental matrix by matching a small set of carefully optimised interest points, dense energy based methods mark the state of the art in optical flow computation. The goal of our paper is to show that this separation into different application domains is unnecessary and can be bridged in a natural way. As a first contribution we present a new application of dense optical flow for estimating the fundamental matrix. Comparing our results with those obtained by feature based techniques we identify cases in which dense methods have advantages over sparse approaches. Motivated by these promising results we propose, as a second contribution, a new variational model that recovers the fundamental matrix and the optical flow simultaneously as the minimisers of a single energy functional. In experiments we show that our coupled approach is able to further improve the estimates of both the fundamental matrix and the optical flow. Our results prove that dense variational methods can be a serious alternative even in classical application domains of sparse feature based approaches.
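The first contribution, estimating the fundamental matrix from dense flow, can be sketched as a least-squares eight-point solve over all flow-induced correspondences. The sketch below uses the standard normalized eight-point algorithm rather than the paper's coupled variational model, and the function name is illustrative:

```python
import numpy as np

def fundamental_from_flow(pts1, pts2):
    """Normalized 8-point estimate of the fundamental matrix F from
    dense correspondences, e.g. every pixel x in frame 1 paired with
    x + flow(x) in frame 2 (pts1, pts2: (N, 2) arrays, N >= 8).
    Least squares over all N matches followed by a rank-2 projection;
    F is returned with unit Frobenius norm.
    """
    def normalize(p):
        c = p.mean(axis=0)
        s = np.sqrt(2.0) / np.mean(np.linalg.norm(p - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
        ph = np.hstack([p, np.ones((len(p), 1))])
        return (T @ ph.T).T, T

    p1, T1 = normalize(np.asarray(pts1, float))
    p2, T2 = normalize(np.asarray(pts2, float))

    # One row per match: the epipolar constraint p2^T F p1 = 0,
    # which is linear in the 9 entries of F.
    A = np.einsum('ni,nj->nij', p2, p1).reshape(len(p1), 9)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)

    # Enforce rank 2 (a valid F is singular), then undo normalization.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

Where a sparse method feeds a handful of interest-point matches into such a solve, a dense flow field contributes every pixel, which is the paper's point of departure.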

57 citations


Book ChapterDOI
05 Nov 2012
TL;DR: The key idea of the approach is to use the matching energy of the baseline method to carefully select those locations where feature matches may potentially improve the estimation; this also improves the reliability of the estimation by identifying unnecessary and unreliable features and thus excluding spurious matches.
Abstract: Despite the significant progress in terms of accuracy achieved by recent variational optical flow methods, the correct handling of large displacements still poses a severe problem for many algorithms. In particular if the motion exceeds the size of an object, standard coarse-to-fine estimation schemes fail to produce meaningful results. While the integration of point correspondences may help to overcome this limitation, such strategies often deteriorate the performance for small displacements due to false or ambiguous matches. In this paper we address the aforementioned problem by proposing an adaptive integration strategy for feature matches. The key idea of our approach is to use the matching energy of the baseline method to carefully select those locations where feature matches may potentially improve the estimation. This adaptive selection not only reduces the runtime compared to an exhaustive search, but also improves the reliability of the estimation by identifying unnecessary and unreliable features and thus excluding spurious matches. Results for the Middlebury benchmark and several other image sequences demonstrate that our approach succeeds in handling large displacements in such a way that the performance for small displacements is not compromised. Moreover, experiments even indicate that image sequences with small displacements can benefit from carefully selected point correspondences.
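The selection idea can be sketched as: evaluate the data (matching) energy of the baseline flow at every pixel and keep only the worst-explained locations as candidates for feature-match integration. A toy NumPy version with a simple quadratic brightness-constancy energy; the baseline's actual energy and the subsequent match integration are not modelled here, and the function name is hypothetical:

```python
import numpy as np

def candidate_locations(I1, I2, flow, frac=0.05):
    """Pick pixels where feature matches might help a baseline flow.

    Evaluates a quadratic brightness-constancy energy of the baseline
    estimate and keeps the fraction `frac` of pixels with the highest
    residual: exactly the places where the current flow explains the
    data worst, e.g. because a displacement exceeded the object size
    and coarse-to-fine estimation failed there.
    """
    h, w = I1.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Warp I2 back by the baseline flow (nearest-neighbour for brevity).
    xw = np.clip(np.rint(xs + flow[..., 0]), 0, w - 1).astype(int)
    yw = np.clip(np.rint(ys + flow[..., 1]), 0, h - 1).astype(int)
    energy = (I2[yw, xw] - I1) ** 2
    mask = energy > np.quantile(energy, 1.0 - frac)
    return mask, energy
```

Restricting match integration to this mask is what keeps well-estimated small-displacement regions untouched, so spurious matches cannot degrade them.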

30 citations


Book ChapterDOI
28 Aug 2012
TL;DR: This paper extends the state-of-the-art approach of Zach et al. (2007) in several ways, replacing the isotropic space-variant smoothing behaviour by an anisotropic (direction-dependent) one and using the more accurate closest signed distances instead of directional signed distances when converting range images into 3D signed distance fields.
Abstract: Obtaining high-quality 3D models of real world objects is an important task in computer vision. A very promising approach to achieve this is given by variational range image integration methods: They are able to deal with a substantial amount of noise and outliers, while regularising and thus creating smooth surfaces at the same time. Our paper extends the state-of-the-art approach of Zach et al. (2007) in several ways: (i) We replace the isotropic space-variant smoothing behaviour by an anisotropic (direction-dependent) one. Due to the directional adaptation, a better control of the smoothing with respect to the local structure of the signed distance field can be achieved. (ii) In order to keep data and smoothness term in balance, a normalisation factor is introduced. As a result, oversmoothing of locations that are seldom seen is prevented. This allows high quality reconstructions in uncontrolled capture setups, where the camera positions are unevenly distributed around an object. (iii) Finally, we use the more accurate closest signed distances instead of directional signed distances when converting range images into 3D signed distance fields. Experiments demonstrate that each of our three contributions leads to clearly visible improvements in the reconstruction quality.
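Two of the ingredients above can be made concrete in a few lines: the conversion of a range measurement into a truncated signed distance along a ray (the simple directional variant that contribution (iii) improves upon), and the robustness of the L1 data term, whose regularizer-free minimizer is the per-voxel median. A sketch under those simplifications, with illustrative function names:

```python
import numpy as np

def directional_sdf(depth, z_grid, delta=0.1):
    """Truncated *directional* signed distance of voxels along one ray:
    positive in front of the measured surface, negative behind, clipped
    at truncation width `delta`. The paper's contribution (iii) replaces
    this simple per-ray distance by the more accurate closest-point
    distance, which requires the full 3D neighbourhood.
    """
    return np.clip((depth - np.asarray(z_grid, float)) / delta, -1.0, 1.0)

def fuse_l1(fields):
    """Voxel-wise fusion of K stacked distance fields. Without the
    smoothness term, the L1 data fidelity of Zach et al. (2007) is
    minimized per voxel by the median, which is why the approach
    tolerates a minority of outlier range images.
    """
    return np.median(np.asarray(fields, float), axis=0)
```

The reconstructed surface is the zero level set of the fused field; the paper's anisotropic smoothing and normalisation factor then act on top of this data term.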

24 citations


Proceedings ArticleDOI
01 Jan 2012
TL;DR: For the first time, it becomes possible to theoretically justify the use of the FM method as a solver for the Oren-Nayar model, which so far has been applied on a purely empirical basis only.
Abstract: Due to their improved capability to handle realistic illumination scenarios, non-Lambertian reflectance models are becoming increasingly popular in the Shape from Shading (SfS) community. One of these advanced models is the Oren-Nayar model, which is particularly suited to handle rough surfaces. However, not only the proper selection of the model is important; the validation of stable and efficient algorithms also plays a fundamental role when it comes to practical applicability. While there are many works dealing with such algorithms in the case of Lambertian SfS, no such analysis has been performed so far for the Oren-Nayar model. In our paper we address this problem and present an in-depth study for such an advanced SfS model. To this end, we investigate under which conditions, i.e. model parameters, the Fast Marching (FM) method can be applied – a method that is known to be one of the most efficient algorithms for solving the underlying partial differential equations of Hamilton-Jacobi type. In this context, we not only perform a general investigation of the model using Osher's criterion for verifying the suitability of the FM method, but also conduct a parameter-dependent analysis which shows that FM can safely be used for the model for a wide range of settings relevant to practical applications. Thus, for the first time, it becomes possible to theoretically justify the use of the FM method as a solver for the Oren-Nayar model, which so far has been applied on a purely empirical basis only. Numerical experiments demonstrate the validity of our theoretical analysis. They show a stable behaviour of the FM method for the predicted range of model parameters.
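For reference, the reflectance model under study can be sketched directly. The qualitative Oren-Nayar model reduces to the Lambertian case for zero roughness, which gives a quick sanity check; only the reflectance equation is shown here, not the Hamilton-Jacobi SfS PDE or the Fast Marching solver that the paper analyses:

```python
import numpy as np

def oren_nayar(theta_i, theta_r, dphi, sigma, rho=1.0, E0=1.0):
    """Qualitative Oren-Nayar reflectance for a rough surface.

    theta_i / theta_r: incidence / viewing angles, dphi: azimuth
    difference between light and viewer, sigma: surface roughness
    (radians), rho: albedo, E0: irradiance. For sigma = 0 the model
    reduces to the Lambertian case (rho / pi) * E0 * cos(theta_i).
    """
    s2 = sigma ** 2
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha, beta = max(theta_i, theta_r), min(theta_i, theta_r)
    return (rho / np.pi) * E0 * np.cos(theta_i) * (
        A + B * max(0.0, np.cos(dphi)) * np.sin(alpha) * np.tan(beta))
```

The roughness parameter sigma is exactly the kind of model parameter for which the paper's analysis determines when the FM method remains applicable.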

6 citations