Author

Xavier Descombes

Bio: Xavier Descombes is an academic researcher at the French Institute for Research in Computer Science and Automation (Inria). He has contributed to research on topics including image segmentation and simulated annealing, has an h-index of 32, and has co-authored 228 publications receiving 4151 citations. His previous affiliations include Télécom ParisTech and the Max Planck Society.


Papers
Journal ArticleDOI
TL;DR: A new method for the extraction of roads from remotely sensed images is proposed: under the assumption that roads form a thin network in the image, the network is approximated by connected line segments, and the estimate is found by minimizing an energy function.
Abstract: In this paper, we propose a new method for the extraction of roads from remotely sensed images. Under the assumption that roads form a thin network in the image, we approximate such a network by connected line segments. To perform this task, we construct a point process able to simulate and detect thin networks. The segments have to be connected in order to form a line network. Aligned segments are favored, whereas superposition is penalized. These constraints are enforced by the interaction model (called the Candy model). The specific properties of the road network in the image are described by the data term, which is based on statistical hypothesis tests. The proposed probabilistic model can be written within a Gibbs point process framework. The estimate of the network is found by minimizing an energy function. In order to avoid local minima, we use a simulated annealing algorithm based on a reversible jump Markov chain Monte Carlo (RJMCMC) dynamics for finite point processes. Results are shown on SPOT, ERS, and aerial images.

208 citations
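
The Candy-model paper above optimizes a point process of line segments by simulated annealing with birth and death moves. The following toy sketch illustrates only that optimization loop; the unit-square "image", the segment score, the constant per-segment prior cost, and all numerical settings are invented stand-ins for the paper's data term and interaction model, and plain Metropolis acceptance is used where full RJMCMC would include dimension-matching proposal ratios.

```python
# A minimal, self-contained sketch of birth/death simulated annealing
# over a configuration of line segments. Everything numeric here is a
# toy assumption, not the Candy model's actual potentials.
import math
import random

random.seed(0)

def data_score(seg):
    x, y, theta, length = seg
    # Negative values reward segments lying on the toy "road": the
    # horizontal line y = 0.5 of a unit square.
    return abs(y - 0.5) + 0.5 * min(theta, math.pi - theta) - 0.4

def prior_energy(config):
    # Crude stand-in for the Candy prior: a constant cost per segment
    # discourages clutter; the real model rewards aligned, connected
    # segments and penalizes superposition.
    return 0.3 * len(config)

def energy(config):
    return sum(data_score(s) for s in config) + prior_energy(config)

def random_segment():
    return (random.random(), random.random(),
            random.uniform(0.0, math.pi), random.uniform(0.05, 0.2))

config, T = [], 1.0
for _ in range(20000):
    proposal = list(config)
    if not proposal or random.random() < 0.5:
        proposal.append(random_segment())              # birth move
    else:
        proposal.pop(random.randrange(len(proposal)))  # death move
    dE = energy(proposal) - energy(config)
    # Plain Metropolis acceptance; full RJMCMC would also include the
    # proposal-ratio terms of the birth/death moves.
    if dE < 0 or random.random() < math.exp(-dE / T):
        config = proposal
    T *= 0.9997  # geometric cooling schedule

print(len(config), "segments kept; final energy", round(energy(config), 3))
```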

Journal ArticleDOI
TL;DR: This paper addresses the problem of unsupervised extraction of line networks from remotely sensed images by modeling the target line network as an object process whose objects correspond to interacting line segments, and shows the relevance of an offline computation of the data potential.
Abstract: This paper addresses the problem of unsupervised extraction of line networks (for example, road or hydrographic networks) from remotely sensed images. We model the target line network by an object process, where the objects correspond to interacting line segments. The prior model, called "quality candy," is designed to exploit as fully as possible the topological properties of the network under consideration, while the radiometric properties of the network are modeled using a data term based on statistical tests. Two techniques are used to compute this term: one is more accurate, the other more efficient. A calibration technique is used to choose the model parameters. Optimization is done via simulated annealing using a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. We accelerate convergence of the algorithm by using appropriate proposal kernels. The results obtained on satellite and aerial images are quantitatively evaluated with respect to manual extractions. A comparison with the results obtained using a previous model, called the "candy" model, demonstrates the benefit of adding quality coefficients to the interactions in the prior density. The relevance of an offline computation of the data potential is shown, in particular when a proposal kernel based on this computation is added to the RJMCMC algorithm.

186 citations
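
One idea highlighted above is computing the data potential offline and reusing it inside a proposal kernel. The sketch below shows that pattern on a toy grid of candidate segments: the potential table is filled once, then birth proposals are drawn with probability decreasing in the potential. The candidate grid, the potential itself, and the weighting constant are illustrative assumptions, not the paper's hypothesis-test data term.

```python
# A minimal sketch of the "offline data potential" idea: score a
# discrete set of candidate segments once, then reuse the table as a
# birth-proposal kernel favoring promising segments.
import math
import random

random.seed(1)

# Candidate segments: centers on a coarse grid, a few orientations.
candidates = [(x / 20, y / 20, k * math.pi / 8)
              for x in range(20) for y in range(20) for k in range(8)]

def data_potential(c):
    x, y, theta = c
    # Toy potential: low (good) near the line y = 0.5, horizontal.
    return abs(y - 0.5) + 0.3 * min(theta, math.pi - theta)

# Offline pass: computed once, before any sampling.
potential = [data_potential(c) for c in candidates]

# Birth-proposal weights: prefer candidates with low potential.
weights = [math.exp(-5.0 * p) for p in potential]

def propose_birth():
    # Draw a candidate with probability proportional to its weight.
    return random.choices(candidates, weights=weights, k=1)[0]

# In an RJMCMC sampler, such a nonuniform birth kernel must enter the
# acceptance ratio through its proposal density to stay reversible.
for s in (propose_birth() for _ in range(5)):
    print(tuple(round(v, 2) for v in s),
          "potential", round(data_potential(s), 3))
```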

Journal ArticleDOI
TL;DR: The method has been validated on multiple data sets spanning a wide resolution range, such as 0.7 m satellite and 0.1 m aerial DSMs, and provides 3D representations of complex buildings and dense urban areas with various levels of detail.
Abstract: We present a new approach for building reconstruction from a single Digital Surface Model (DSM). It treats buildings as an assemblage of simple urban structures extracted from a library of 3D parametric blocks (like a LEGO set). First, the 2D supports of the urban structures are extracted either interactively or automatically. Then, 3D blocks are placed on the 2D supports using a Gibbs model which controls both the block assemblage and the fit to the data. A Bayesian decision finds the optimal configuration of 3D blocks using a Markov chain Monte Carlo sampler associated with original proposal kernels. The method has been validated on multiple data sets spanning a wide resolution range, such as 0.7 m satellite and 0.1 m aerial DSMs, and provides 3D representations of complex buildings and dense urban areas with various levels of detail.

178 citations
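
The reconstruction above places parametric 3D blocks on 2D supports and scores the assemblage against the DSM. The sketch below fits a single block from a two-entry toy library (flat and gabled roofs) to a synthetic DSM patch by least squares; the block shapes, the synthetic data, and the exhaustive grid search (standing in for the paper's MCMC sampler and proposal kernels) are all assumptions for illustration.

```python
# Fit one parametric roof block to a DSM patch, in the spirit of the
# library-of-3D-blocks idea. All shapes and data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic DSM patch over a footprint: a gabled roof plus noise.
n = 32
xs = np.linspace(-1, 1, n)
X, Y = np.meshgrid(xs, xs)
dsm = 10.0 + 3.0 * (1 - np.abs(X)) + 0.1 * rng.standard_normal((n, n))

# Two parametric blocks from a toy library.
def flat_roof(h):
    return np.full((n, n), h)

def gabled_roof(h_base, h_ridge):
    return h_base + (h_ridge - h_base) * (1 - np.abs(X))

def data_energy(model):
    return float(np.mean((model - dsm) ** 2))  # fit-to-data term

# Exhaustive search over a coarse parameter grid stands in for the
# paper's MCMC sampler; with many blocks, sampling replaces this loop.
best = None
for h in np.arange(8.0, 16.0, 0.5):
    e = data_energy(flat_roof(h))
    if best is None or e < best[0]:
        best = (e, "flat", (h,))
    for hr in np.arange(h + 0.5, h + 6.0, 0.5):
        e = data_energy(gabled_roof(h, hr))
        if e < best[0]:
            best = (e, "gabled", (h, hr))

energy_val, kind, params = best
print(f"best block: {kind} {tuple(round(p, 2) for p in params)}, "
      f"energy {energy_val:.3f}")
```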

Journal ArticleDOI
TL;DR: A new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs is introduced, and a novel nonuniform stochastic object birth process which generates relevant objects with higher probability based on low-level image features is proposed.
Abstract: In this paper, we introduce a new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs. A global optimization process attempts to find the optimal configuration of buildings, considering the observed data, prior knowledge, and interactions between the neighboring building parts. We present methodological contributions in three key issues: 1) We implement a novel object-change modeling approach based on multitemporal marked point processes, which simultaneously exploits low-level change information between the time layers and object-level building description to recognize and separate changed and unaltered buildings. 2) To answer the challenges of data heterogeneity in aerial and satellite image repositories, we construct a flexible hierarchical framework which can create various building appearance models from different elementary feature-based modules. 3) To simultaneously meet the convergence, optimality, and computational complexity constraints raised by the increased data quantity, we adopt the quick multiple birth and death optimization technique for change detection purposes and propose a novel nonuniform stochastic object birth process, which generates relevant objects with higher probability based on low-level image features.

147 citations
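
The optimization described above alternates massive birth steps, biased by low-level features, with probabilistic death steps under a cooling schedule. Below is a minimal sketch of that multiple birth and death loop on a toy grid; the feature map, the per-object energy, the death-rate formula, and all schedules are illustrative choices rather than the paper's building model.

```python
# A toy multiple birth and death (MBD) loop with a nonuniform birth map.
import numpy as np

rng = np.random.default_rng(2)

# Toy low-level feature map on a grid: high values mark likely objects.
H = W = 40
feature = rng.random((H, W)) * 0.2
feature[10:15, 20:30] = 0.9           # a "building-like" blob

birth_prob = feature / feature.sum()  # nonuniform birth map

def cell_energy(i, j):
    # Negative energy (good) where the feature response is strong.
    return 0.5 - feature[i, j]

alive = set()
beta, T = 1.0, 1.0
for step in range(50):
    # Birth phase: propose many objects at once, biased by the map.
    flat = rng.choice(H * W, size=30, p=birth_prob.ravel())
    alive.update((int(k) // W, int(k) % W) for k in flat)
    # Death phase: each object dies with a probability that grows with
    # its energy; the temperature T is lowered over the iterations.
    survivors = set()
    for (i, j) in alive:
        d = beta * np.exp(cell_energy(i, j) / T)
        if rng.random() >= d / (1.0 + d):
            survivors.add((i, j))
    alive = survivors
    T *= 0.9

inside = sum(1 for (i, j) in alive if 10 <= i < 15 and 20 <= j < 30)
print(f"{len(alive)} objects alive, {inside} inside the true blob")
```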

Journal ArticleDOI
TL;DR: This work details the required theory and presents an algorithm that is easily implemented and practical in terms of computation time, demonstrating it on three MRF models: the standard Potts model, an inhomogeneous variation of the Potts model, and a long-range interaction model better adapted to modeling real-world images.
Abstract: Developments in statistics now allow maximum likelihood estimators for the parameters of Markov random fields (MRFs) to be constructed. We detail the theory required, and present an algorithm that is easily implemented and practical in terms of computation time. We demonstrate this algorithm on three MRF models: the standard Potts model, an inhomogeneous variation of the Potts model, and a long-range interaction model better adapted to modeling real-world images. We estimate the parameters from a synthetic and a real image, and then resynthesize the models to demonstrate which features of the image have been captured by the model. Segmentations are computed based on the estimated parameters and conclusions drawn.

139 citations
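
The paper above constructs maximum likelihood estimators for MRF parameters. As a simpler, closely related stand-in for its MCMC-based estimator, the sketch below estimates the Potts interaction parameter by maximum pseudo-likelihood on a field simulated with a Gibbs sampler; the grid size, number of labels, true parameter, and grid search are all illustrative.

```python
# Maximum pseudo-likelihood estimation of the Potts parameter beta,
# on a field simulated by single-site Gibbs sweeps. A stand-in for the
# paper's MCMC-based maximum likelihood estimator.
import numpy as np

rng = np.random.default_rng(3)
q, n, beta_true = 3, 40, 0.8

def neighbor_counts(field, i, j):
    # Count, per label, how many of the 4 neighbors carry that label.
    counts = np.zeros(q)
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < n and 0 <= nj < n:
            counts[field[ni, nj]] += 1
    return counts

# Simulate a Potts field at the true parameter.
field = rng.integers(0, q, size=(n, n))
for sweep in range(60):
    for i in range(n):
        for j in range(n):
            p = np.exp(beta_true * neighbor_counts(field, i, j))
            field[i, j] = rng.choice(q, p=p / p.sum())

def neg_pseudo_loglik(beta):
    # Negative log product of site-wise conditionals P(x_s | neighbors).
    total = 0.0
    for i in range(n):
        for j in range(n):
            c = neighbor_counts(field, i, j)
            total -= beta * c[field[i, j]] - np.log(np.exp(beta * c).sum())
    return total

# Grid search over beta; a real estimator would use a 1D optimizer.
betas = np.arange(0.1, 1.6, 0.05)
beta_hat = betas[np.argmin([neg_pseudo_loglik(b) for b in betas])]
print(f"true beta {beta_true}, pseudo-likelihood estimate {beta_hat:.2f}")
```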


Cited by
Book
24 Aug 2012
TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Abstract: Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package, PMTK (probabilistic modeling toolkit), which is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

8,059 citations

Journal ArticleDOI
TL;DR: In this paper, the authors describe the steps involved in VBM, with particular emphasis on segmenting gray matter from MR images with nonuniformity artifacts, and provide evaluations of the assumptions that underpin the method, including the accuracy of the segmentation and the assumptions made about the statistical distribution of the data.

8,049 citations
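
VBM, as summarized above, ends in a voxelwise statistical comparison of smoothed gray-matter maps across groups. The sketch below runs an equal-variance two-sample t-test at every voxel of synthetic maps; the group sizes, volume shape, noise level, and injected "atrophy" effect are invented for illustration, and the naive threshold ignores the multiple-comparison corrections (random field theory, FDR) that VBM actually requires.

```python
# Voxelwise two-sample t-test on synthetic gray-matter density maps.
import numpy as np

rng = np.random.default_rng(4)
shape, nA, nB = (16, 16, 16), 12, 12

# Synthetic smoothed gray-matter maps for two groups.
groupA = rng.normal(0.5, 0.05, size=(nA, *shape))
groupB = rng.normal(0.5, 0.05, size=(nB, *shape))
groupB[:, 4:8, 4:8, 4:8] -= 0.06   # simulated regional atrophy in B

# Voxelwise equal-variance t statistic.
mA, mB = groupA.mean(axis=0), groupB.mean(axis=0)
vA, vB = groupA.var(axis=0, ddof=1), groupB.var(axis=0, ddof=1)
sp = np.sqrt(((nA - 1) * vA + (nB - 1) * vB) / (nA + nB - 2))
t = (mA - mB) / (sp * np.sqrt(1 / nA + 1 / nB))

# With ~4096 tests this raw threshold is far too lenient; real VBM
# controls family-wise error with random field theory or FDR.
print("voxels with t > 3:", int((t > 3).sum()))
```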

Journal ArticleDOI
TL;DR: This work considers approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, in which the latent field is Gaussian, controlled by a few hyperparameters, and the response variables are non-Gaussian; it shows that very accurate approximations to the posterior marginals can be computed directly.
Abstract: Structured additive regression models are perhaps the most commonly used class of models in statistical applications. It includes, among others, (generalized) linear models, (generalized) additive models, smoothing spline models, state space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes and geostatistical and geoadditive models. We consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form owing to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, in terms of both convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo sampling is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where Markov chain Monte Carlo algorithms need hours or days to run, our approximations provide more precise estimates in seconds or minutes. Another advantage with our approach is its generality, which makes it possible to perform Bayesian analysis in an automatic, streamlined way, and to compute model comparison criteria and various predictive measures so that models can be compared and the model under study can be challenged.

4,164 citations
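
The INLA approach summarized above builds on the Laplace approximation: replace an intractable posterior by a Gaussian centered at its mode, with variance given by the curvature there. The toy below applies that single building block to a one-dimensional posterior (Poisson counts with a Gaussian prior on the log-rate) and checks it against a brute-force numeric posterior; it sketches the core approximation only, not the paper's nested scheme, and the data and prior settings are arbitrary.

```python
# Laplace approximation of a 1D posterior: Poisson likelihood with a
# Gaussian prior on the log-rate eta.
import numpy as np

y = np.array([3, 5, 4, 6, 2])   # observed Poisson counts
mu0, tau2 = 0.0, 4.0            # Gaussian prior on eta = log(rate)

def log_post(eta):
    # log p(eta | y) up to a constant: Poisson log-lik + log-prior.
    return np.sum(y * eta - np.exp(eta)) - (eta - mu0) ** 2 / (2 * tau2)

# Newton iterations for the posterior mode.
eta = 0.0
for _ in range(25):
    grad = np.sum(y) - len(y) * np.exp(eta) - (eta - mu0) / tau2
    hess = -len(y) * np.exp(eta) - 1.0 / tau2
    eta -= grad / hess
var = -1.0 / hess               # Laplace variance from the curvature

# Compare against a brute-force numeric posterior on a grid.
grid = np.linspace(eta - 3, eta + 3, 4001)
w = np.exp([log_post(g) - log_post(eta) for g in grid])
w /= w.sum()
print(f"Laplace: mean {eta:.3f}, sd {np.sqrt(var):.3f}")
print(f"Numeric: mean {np.dot(grid, w):.3f}, "
      f"sd {np.sqrt(np.dot(grid**2, w) - np.dot(grid, w)**2):.3f}")
```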

01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance, and describes numerous important application areas such as image-based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Users learn techniques that have proven useful through first-hand experience and a wide range of mathematical methods. A CD-ROM included with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. Appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

Proceedings Article
01 Jan 1989
TL;DR: A scheme is developed for classifying the types of motion perceived by a humanlike robot and equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.
Abstract: A scheme is developed for classifying the types of motion perceived by a humanlike robot. It is assumed that the robot receives visual images of the scene using a perspective system model. Equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.

2,000 citations
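
The abstract above concerns equations relating 3D motion to its image under a perspective model. The sketch below projects a translating point through a pinhole camera and checks the analytic image velocity, obtained by differentiating x = fX/Z, against a finite difference; the focal length, point position, and velocity are arbitrary illustrative values.

```python
# Perspective projection of a moving 3D point and its image velocity.
import numpy as np

f = 1.0                           # focal length (image plane at Z = f)
P = np.array([1.0, 0.5, 4.0])     # point position (X, Y, Z)
V = np.array([0.2, -0.1, -0.5])   # constant 3D velocity

def project(p):
    X, Y, Z = p
    return np.array([f * X / Z, f * Y / Z])  # pinhole projection

# Analytic image velocity from differentiating the projection:
# u = f (X'Z - X Z') / Z^2, and similarly for v.
X, Y, Z = P
Xd, Yd, Zd = V
u = f * (Xd * Z - X * Zd) / Z**2
v = f * (Yd * Z - Y * Zd) / Z**2

# Numerical check with a small time step.
dt = 1e-6
flow = (project(P + dt * V) - project(P)) / dt
print(f"analytic image velocity ({u:.4f}, {v:.4f})")
print(f"numeric  image velocity ({flow[0]:.4f}, {flow[1]:.4f})")
```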