
Showing papers by "Jean-Christophe Pesquet published in 2014"


Journal ArticleDOI
TL;DR: An acceleration strategy based on the use of variable metrics and of the Majorize–Minimize principle is proposed and the sequence generated by the resulting Variable Metric Forward–Backward algorithm converges to a critical point of G.
Abstract: We consider the minimization of a function G defined on $\mathbb{R}^N$, which is the sum of a (not necessarily convex) differentiable function and a (not necessarily differentiable) convex function. Moreover, we assume that G satisfies the Kurdyka-Łojasiewicz property. Such a problem can be solved with the Forward-Backward algorithm. However, the latter algorithm may suffer from slow convergence. We propose an acceleration strategy based on the use of variable metrics and of the Majorize-Minimize principle. We give conditions under which the sequence generated by the resulting Variable Metric Forward-Backward algorithm converges to a critical point of G. Numerical results illustrate the performance of the proposed algorithm in an image reconstruction application.
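
As an illustration of the scheme described in this abstract, here is a minimal Python sketch of a variable metric forward-backward iteration for the special case min 0.5||Hx - y||^2 + lam||x||_1, using a fixed diagonal Majorize-Minimize metric. The diagonal majorant construction and all names are illustrative choices, not the paper's exact algorithm (which handles nonconvex smooth terms and iteration-dependent metrics):

```python
import numpy as np

def vmfb(H, y, lam, n_iter=200):
    """Illustrative Variable Metric Forward-Backward sketch for
    min 0.5*||Hx - y||^2 + lam*||x||_1, with a fixed diagonal
    Majorize-Minimize metric (assumed construction, not the paper's)."""
    # Diagonal majorant of H^T H: a_i = sum_j |(H^T H)_ij|, so that
    # diag(a) - H^T H is diagonally dominant, hence positive semidefinite.
    a = np.abs(H.T @ H).sum(axis=1) + 1e-12
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)                 # forward (gradient) step
        z = x - grad / a                         # metric-scaled step
        # backward step: prox of lam*||.||_1 in the diagonal metric
        x = np.sign(z) * np.maximum(np.abs(z) - lam / a, 0.0)
    return x
```

With a diagonal metric, the backward step reduces to componentwise soft-thresholding with per-coordinate thresholds, which is what keeps the variable metric cheap to apply.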

213 citations


Posted Content
TL;DR: This article aims to present the principles of primal-dual approaches while providing an overview of the numerical methods that have been proposed in different contexts and lead to algorithms that are easily parallelizable.
Abstract: Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. For a long time, it has been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies which jointly bring into play the primal and the dual problems is however a more recent idea which has generated many important new contributions in recent years. These novel developments are grounded on recent advances in convex analysis, discrete optimization, parallel processing, and non-smooth optimization with emphasis on sparsity issues. In this paper, we aim at presenting the principles of primal-dual approaches, while giving an overview of numerical methods which have been proposed in different contexts. We show the benefits which can be drawn from primal-dual algorithms both for solving large-scale convex optimization problems and discrete ones, and we provide various application examples to illustrate their usefulness.
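
As a concrete instance of a primal-dual scheme of the kind surveyed here, the following sketch implements a Chambolle-Pock-type iteration for 1D total-variation denoising. The specific problem, step sizes, and operator-norm bound are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def primal_dual_tv1d(y, lam, n_iter=500):
    """Illustrative primal-dual (Chambolle-Pock-type) iteration for
    min_x 0.5*||x - y||^2 + lam*||D x||_1, with D the 1D
    finite-difference operator."""
    n = len(y)
    D = lambda x: x[1:] - x[:-1]                                       # forward differences
    Dt = lambda u: np.concatenate(([-u[0]], u[:-1] - u[1:], [u[-1]]))  # adjoint of D
    tau = sigma = 0.5                    # valid since tau*sigma*||D||^2 <= 1 (||D||^2 <= 4)
    x = y.copy(); x_bar = x.copy(); u = np.zeros(n - 1)
    for _ in range(n_iter):
        # dual step: prox of the conjugate of lam*||.||_1 is a projection (clipping)
        u = np.clip(u + sigma * D(x_bar), -lam, lam)
        # primal step: prox of the quadratic data term
        x_new = (x - tau * Dt(u) + tau * y) / (1 + tau)
        x_bar = 2 * x_new - x            # extrapolation
        x = x_new
    return x
```

The dual update only requires clipping and the primal update is a closed-form average, which is the kind of splitting into simple, parallelizable steps that primal-dual methods are valued for.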

118 citations


Journal Article
TL;DR: The proposed approach can be used to develop novel asynchronous distributed primal-dual algorithms in a multi-agent context and may be useful for reducing computational complexity and memory requirements.
Abstract: Based on a preconditioned version of the randomized block-coordinate forward-backward algorithm recently proposed in [23], several variants of block-coordinate primal-dual algorithms are designed in order to solve a wide array of monotone inclusion problems. These methods rely on a sweep of blocks of variables which are activated at each iteration according to a random rule, and they allow stochastic errors in the evaluation of the involved operators. Then, this framework is employed to derive block-coordinate primal-dual proximal algorithms for solving composite convex variational problems. The resulting algorithm implementations may be useful for reducing computational complexity and memory requirements. Furthermore, we show that the proposed approach can be used to develop novel asynchronous distributed primal-dual algorithms in a multi-agent context.
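
The random activation rule can be illustrated on a toy separable problem: at each iteration, one randomly selected block of coordinates undergoes a forward-backward update. This sketch is a deliberately simplified stand-in for the paper's block-coordinate primal-dual iterations (no dual variables, no stochastic errors):

```python
import numpy as np

def random_block_fb(y, lam, n_blocks=4, n_iter=300, seed=0):
    """Illustrative random block-activation rule: each iteration updates
    one randomly chosen block via a forward-backward step on
    min 0.5*||x - y||^2 + lam*||x||_1 (unit Lipschitz constant)."""
    rng = np.random.default_rng(seed)
    x = np.zeros_like(y)
    blocks = np.array_split(np.arange(len(y)), n_blocks)
    for _ in range(n_iter):
        b = blocks[rng.integers(n_blocks)]        # random rule: activate one block
        z = x[b] - (x[b] - y[b])                  # gradient step on the block
        x[b] = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)  # prox on the block
    return x
```

Only the activated block is touched at each iteration, which is the mechanism that reduces per-iteration computational cost and memory traffic in the full algorithm.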

107 citations


Posted Content
TL;DR: The objective of this paper is to show that a number of existing algorithms can be derived from a general form of the forward-backward algorithm applied in a suitable product space, and to develop useful extensions of existing algorithms by introducing a variable metric.
Abstract: A wide array of image recovery problems can be abstracted into the problem of minimizing a sum of composite convex functions in a Hilbert space. To solve such problems, primal-dual proximal approaches have been developed which provide efficient solutions to large-scale optimization problems. The objective of this paper is to show that a number of existing algorithms can be derived from a general form of the forward-backward algorithm applied in a suitable product space. Our approach also allows us to develop useful extensions of existing algorithms by introducing a variable metric. An illustration to image restoration is provided.

92 citations


Journal ArticleDOI
TL;DR: The results demonstrate the interest of introducing a nonlocal ST regularization and show that the proposed approach leads to significant improvements in terms of convergence speed over current state-of-the-art methods, such as the alternating direction method of multipliers.
Abstract: Nonlocal total variation (NLTV) has emerged as a useful tool in variational methods for image recovery problems. In this paper, we extend the NLTV-based regularization to multicomponent images by taking advantage of the structure tensor (ST) resulting from the gradient of a multicomponent image. The proposed approach allows us to penalize the nonlocal variations, jointly for the different components, through various ℓ1,p matrix norms with p ≥ 1. To facilitate the choice of the hyperparameters, we adopt a constrained convex optimization approach in which we minimize the data fidelity term subject to a constraint involving the ST-NLTV regularization. The resulting convex optimization problem is solved with a novel epigraphical projection method. This formulation can be efficiently implemented because of the flexibility offered by recent primal-dual proximal algorithms. Experiments are carried out for color, multispectral, and hyperspectral images. The results demonstrate the interest of introducing a nonlocal ST regularization and show that the proposed approach leads to significant improvements in terms of convergence speed over current state-of-the-art methods, such as the alternating direction method of multipliers.
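
A building block of such epigraphical methods is the closed-form projection onto the epigraph of a norm. As an illustration of the kind of projection involved (not the paper's specific ℓ1,p construction), the projection onto the epigraph of the ℓ2 norm, i.e. the second-order cone, has a well-known closed form:

```python
import numpy as np

def project_epi_l2(x, zeta):
    """Closed-form projection of the pair (x, zeta) onto the epigraph
    {(v, t) : ||v||_2 <= t} of the l2 norm (the second-order cone)."""
    nx = np.linalg.norm(x)
    if nx <= zeta:                      # already inside the epigraph
        return x, zeta
    if nx <= -zeta:                     # projects onto the apex of the cone
        return np.zeros_like(x), 0.0
    alpha = (1.0 + zeta / nx) / 2.0     # shrink onto the cone boundary
    return alpha * x, alpha * nx
```

Because such projections are cheap and exact, a constraint on the regularization level can be enforced directly, which is what lets the constrained formulation sidestep hyperparameter tuning.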

84 citations


Proceedings ArticleDOI
27 Oct 2014
TL;DR: In this article, a general form of the forward-backward algorithm applied in a suitable product space is shown to provide efficient solutions to large-scale optimization problems for image recovery.
Abstract: A wide array of image recovery problems can be abstracted into the problem of minimizing a sum of composite convex functions in a Hilbert space. To solve such problems, primal-dual proximal approaches have been developed which provide efficient solutions to large-scale optimization problems. The objective of this paper is to show that a number of existing algorithms can be derived from a general form of the forward-backward algorithm applied in a suitable product space. Our approach also allows us to develop useful extensions of existing algorithms by introducing a variable metric. An illustration to image restoration is provided.

80 citations


Journal ArticleDOI
TL;DR: The algorithm is shown to outperform recent optimization strategies in terms of convergence speed and can handle various subsampling schemes, both convex and nonconvex penalization functions and different possibly redundant frame representations.

49 citations


Journal ArticleDOI
TL;DR: In this article, a new penalty term based on a smooth approximation to the l1/l2 ratio regularization function was proposed to solve the nonconvex and nonsmooth minimization problems resulting from the use of such a penalty term in current restoration methods.
Abstract: The l1/l2 ratio regularization function has shown good performance for retrieving sparse signals in a number of recent works, in the context of blind deconvolution. Indeed, it benefits from a scale invariance property much desirable in the blind context. However, the l1/l2 function raises some difficulties when solving the nonconvex and nonsmooth minimization problems resulting from the use of such a penalty term in current restoration methods. In this paper, we propose a new penalty based on a smooth approximation to the l1/l2 function. In addition, we develop a proximal-based algorithm to solve variational problems involving this function and we derive theoretical convergence results. We demonstrate the effectiveness of our method through a comparison with a recent alternating optimization strategy dealing with the exact l1/l2 term, on an application to seismic data blind deconvolution.
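
One plausible way to smooth the ℓ1/ℓ2 ratio, shown purely as an illustration (the paper's exact surrogate may differ), is to replace the ℓ1 term by a coordinatewise smooth approximation and to bound the ℓ2 term away from zero:

```python
import numpy as np

def smooth_l1_over_l2(x, alpha=1e-2, eta=1e-2):
    """Illustrative smooth surrogate of the nonsmooth, nonconvex l1/l2
    ratio penalty: the l1 term is smoothed per coordinate and the l2
    term is kept strictly positive, making the ratio differentiable."""
    l1_smooth = np.sum(np.sqrt(x**2 + alpha**2) - alpha)   # smooth |.| per coordinate
    l2_smooth = np.sqrt(np.sum(x**2) + eta**2)             # smooth, never zero
    return l1_smooth / l2_smooth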

47 citations


Journal ArticleDOI
TL;DR: The designed primal-dual algorithm solves a constrained minimization problem that alleviates standard regularization issues in finding hyperparameters, and demonstrates significantly good performance in low signal-to-noise ratio conditions, both for simulated and real field seismic data.
Abstract: Unveiling meaningful geophysical information from seismic data requires dealing with both random and structured "noises". As their amplitude may be greater than signals of interest (primaries), additional prior information is especially important in performing efficient signal separation. We address here the problem of multiple reflections, caused by wave-field bouncing between layers. Since only approximate models of these phenomena are available, we propose a flexible framework for time-varying adaptive filtering of seismic signals, using sparse representations, based on inaccurate templates. We recast the joint estimation of adaptive filters and primaries in a new convex variational formulation. This approach allows us to incorporate plausible knowledge about noise statistics, data sparsity and slow filter variation in parsimony-promoting wavelet frames. The designed primal-dual algorithm solves a constrained minimization problem that alleviates standard regularization issues in finding hyperparameters. The approach demonstrates good performance in low signal-to-noise ratio conditions, both for simulated and real field data.

38 citations


Journal ArticleDOI
TL;DR: In this article, a non-local structure tensor regularization for multispectral and hyperspectral image recovery is proposed, which penalizes the nonlocal variations jointly for the different components through various matrix norms with different hyper-parameters.
Abstract: Non-Local Total Variation (NLTV) has emerged as a useful tool in variational methods for image recovery problems. In this paper, we extend the NLTV-based regularization to multicomponent images by taking advantage of the Structure Tensor (ST) resulting from the gradient of a multicomponent image. The proposed approach allows us to penalize the non-local variations, jointly for the different components, through various $\ell_{1,p}$ matrix norms with $p \ge 1$. To facilitate the choice of the hyper-parameters, we adopt a constrained convex optimization approach in which we minimize the data fidelity term subject to a constraint involving the ST-NLTV regularization. The resulting convex optimization problem is solved with a novel epigraphical projection method. This formulation can be efficiently implemented thanks to the flexibility offered by recent primal-dual proximal algorithms. Experiments are carried out for multispectral and hyperspectral images. The results demonstrate the interest of introducing a non-local structure tensor regularization and show that the proposed approach leads to significant improvements in terms of convergence speed over current state-of-the-art methods.

36 citations


Journal ArticleDOI
TL;DR: The algorithm is shown to provide reliable estimates of the mean/variance of the Gaussian noise and of the scale parameter of the Poisson component, as well as of its exponential decay rate, which can be interpreted as a good-quality denoised version of the data.
Abstract: The problem of estimating the parameters of a Poisson-Gaussian model from experimental data has recently raised much interest in various applications, for instance in confocal fluorescence microscopy. In this context, a field of independent random variables is observed, which is varying both in time and space. Each variable is a sum of two components, one following a Poisson and the other a Gaussian distribution. In this paper, a general formulation is considered where the associated Poisson process is nonstationary in space and also exhibits an exponential decay in time, whereas the Gaussian component corresponds to a stationary white noise with arbitrary mean. To solve the considered parametric estimation problem, we follow an iterative Expectation-Maximization (EM) approach. The parameter update equations involve deriving finite approximations of infinite sums. Expressions for the maximum error incurred in the process are also given. Since the problem is non-convex, we pay attention to the EM initialization, using a moment-based method where recent optimization tools come into play. We carry out a performance analysis by computing the Cramer-Rao bounds on the estimated variables. The practical performance of the proposed estimation procedure is illustrated on both synthetic data and real fluorescence macroscopy image sequences. The algorithm is shown to provide reliable estimates of the mean/variance of the Gaussian noise and of the scale parameter of the Poisson component, as well as of its exponential decay rate. In particular, the mean estimate of the Poisson component can be interpreted as a good-quality denoised version of the data.
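
The idea of a moment-based initialization can be illustrated in a simplified zero-mean-Gaussian setting: for a scaled Poisson-Gaussian variable, the variance is an affine function of the mean, so a line fitted to (mean, variance) pairs across intensity levels recovers the gain and the Gaussian variance. The names and simplifications below are ours, not the paper's exact procedure:

```python
import numpy as np

def moment_init(samples_per_level):
    """Illustrative moment-based initialization of Poisson-Gaussian
    parameters, assuming zero-mean Gaussian noise: for
    y = a * Poisson(lam) + N(0, s2), Var[y] = a * E[y] + s2, so a line
    fitted to (mean, variance) pairs across intensity levels gives the
    gain a (slope) and the Gaussian variance s2 (intercept)."""
    means = np.array([s.mean() for s in samples_per_level])
    variances = np.array([s.var() for s in samples_per_level])
    a, s2 = np.polyfit(means, variances, 1)   # slope = gain, intercept = s2
    return a, s2
```

Such a closed-form estimate is only a starting point; in the paper it serves to initialize the EM iterations, which refine it under the full nonstationary model.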

Posted Content
TL;DR: Based on a preconditioned version of the randomized block-coordinate forward-backward algorithm recently proposed in [Combettes, Pesquet, 2014], several variants of block-coordinate primal-dual algorithms are designed in order to solve a wide array of monotone inclusion problems as mentioned in this paper.
Abstract: Based on a preconditioned version of the randomized block-coordinate forward-backward algorithm recently proposed in [Combettes, Pesquet, 2014], several variants of block-coordinate primal-dual algorithms are designed in order to solve a wide array of monotone inclusion problems. These methods rely on a sweep of blocks of variables which are activated at each iteration according to a random rule, and they allow stochastic errors in the evaluation of the involved operators. Then, this framework is employed to derive block-coordinate primal-dual proximal algorithms for solving composite convex variational problems. The resulting algorithm implementations may be useful for reducing computational complexity and memory requirements. Furthermore, we show that the proposed approach can be used to develop novel asynchronous distributed primal-dual algorithms in a multi-agent context.

Journal ArticleDOI
TL;DR: This paper extends 2D regularization in the wavelet domain to 3D-wavelet representations and the 3D sparsity-promoting regularization term, in order to address reconstruction artifacts that propagate across adjacent slices, and outperforms the SENSE reconstruction at the subject and group levels.
Abstract: Background: Parallel magnetic resonance imaging (MRI) is a fast imaging technique that helps acquire highly resolved images in space/time. Its performance depends on the reconstruction algorithm, which can proceed either in the k-space or in the image domain. Objective and methods: To improve the performance of the widely used SENSE algorithm, 2D regularization in the wavelet domain has been investigated. In this paper, we first extend this approach to 3D-wavelet representations and a 3D sparsity-promoting regularization term, in order to address reconstruction artifacts that propagate across adjacent slices. The resulting optimality criterion is convex but nonsmooth, and we resort to the parallel proximal algorithm to minimize it. Second, to account for temporal correlation between successive scans in functional MRI (fMRI), we extend our first contribution to 3D + t acquisition schemes by incorporating a prior along the time axis into the objective function. Results: Our first method (3D-UWR-SENSE) is validated on T1-MRI anatomical data for gray/white matter segmentation. The second method (4D-UWR-SENSE) is validated for detecting evoked activity during a fast event-related functional MRI protocol. Conclusion: We show that our algorithm outperforms the SENSE reconstruction at the subject and group levels (15 subjects) for different contrasts of interest (motor or computation tasks) and two parallel acceleration factors (R = 2 and R = 4) on 2x2x3 mm3 echo planar imaging (EPI) images.

Proceedings ArticleDOI
01 Oct 2014
TL;DR: A novel phase retrieval approach is proposed, which is based on a smooth nonconvex approximation of the standard data fidelity term, which allows it to employ a wide range of convex separable regularization functions.
Abstract: With the development of new imaging systems delivering large-size data sets, phase retrieval has recently become the focus of much attention. The problem is especially challenging due to its intrinsically nonconvex formulation. In addition, the applicability of many existing solutions may be limited either by their estimation performance or by their computational cost, especially in the case of non-Fourier measurements. In this paper, we propose a novel phase retrieval approach, which is based on a smooth nonconvex approximation of the standard data fidelity term. In addition, the proposed method allows us to employ a wide range of convex separable regularization functions. The optimization process is performed by a block coordinate proximal algorithm which is amenable to solving large-scale problems. An application of this algorithm to an image reconstruction problem shows that it may be very competitive with respect to state-of-the-art methods.

Posted Content
TL;DR: This work proposes block-coordinate fixed point algorithms with applications to nonlinear analysis and optimization in Hilbert spaces and relies on a notion of stochastic quasi-Fejer monotonicity for its asymptotic analysis.
Abstract: This work proposes block-coordinate fixed point algorithms with applications to nonlinear analysis and optimization in Hilbert spaces. The asymptotic analysis relies on a notion of stochastic quasi-Fej\'er monotonicity, which is thoroughly investigated. The iterative methods under consideration feature random sweeping rules to select arbitrarily the blocks of variables that are activated over the course of the iterations and they allow for stochastic errors in the evaluation of the operators. Algorithms using quasinonexpansive operators or compositions of averaged nonexpansive operators are constructed, and weak and strong convergence results are established for the sequences they generate. As a by-product, novel block-coordinate operator splitting methods are obtained for solving structured monotone inclusion and convex minimization problems. In particular, the proposed framework leads to random block-coordinate versions of the Douglas-Rachford and forward-backward algorithms and of some of their variants. In the standard case of $m=1$ block, our results remain new as they incorporate stochastic perturbations.

Journal ArticleDOI
TL;DR: In this article, the authors developed an efficient bit allocation strategy for subband-based image coding systems based on a rate-distortion optimality criterion and formulated the problem as a convex optimization problem.
Abstract: In this paper, we develop an efficient bit allocation strategy for subband-based image coding systems. More specifically, our objective is to design a new optimization algorithm based on a rate-distortion optimality criterion. To this end, we consider the uniform scalar quantization of a class of mixed distributed sources following a Bernoulli-generalized Gaussian distribution. This model appears to be particularly well-adapted for image data, which have a sparse representation in a wavelet basis. In this paper, we propose new approximations of the entropy and the distortion functions using piecewise affine and exponential forms, respectively. Because of these approximations, bit allocation is reformulated as a convex optimization problem. Solving the resulting problem allows us to derive the optimal quantization step for each subband. Experimental results show the benefits that can be drawn from the proposed bit allocation method in a typical transform-based coding application.

Proceedings ArticleDOI
01 Sep 2014
TL;DR: A novel method is proposed for tuning the related drift term of Langevin diffusion where the proposal accounts for a directional component and is preconditioned by an adaptive matrix based on a Majorize-Minimize strategy.
Abstract: One challenging task in MCMC methods is the choice of the proposal density. It should ideally provide an accurate approximation of the target density with a low computational cost. In this paper, we are interested in Langevin diffusion where the proposal accounts for a directional component. We propose a novel method for tuning the related drift term. This term is preconditioned by an adaptive matrix based on a Majorize-Minimize strategy. This new procedure is shown to exhibit good performance in a multispectral image restoration example.
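
The preconditioned drift can be sketched as follows. Here the preconditioning matrix B is simply passed in, whereas the paper adapts it with a Majorize-Minimize strategy, so this is an illustrative simplification:

```python
import numpy as np

def langevin_proposal(x, grad_log_pi, B, step, rng):
    """Illustrative preconditioned Langevin proposal: the drift
    (step/2) * B @ grad_log_pi(x) is scaled by a preconditioning
    matrix B (fixed here; adaptive in the paper), and the noise is
    correlated accordingly through a Cholesky factor of B."""
    drift = 0.5 * step * B @ grad_log_pi(x)
    noise = np.sqrt(step) * np.linalg.cholesky(B) @ rng.standard_normal(x.size)
    return x + drift + noise
```

Scaling both the drift and the noise by the same matrix keeps the proposal consistent with a discretized Langevin diffusion in the transformed geometry, which is what allows a well-chosen B to improve mixing.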

Proceedings ArticleDOI
04 May 2014
TL;DR: This work combines the Forward-Backward algorithm with an alternating minimization strategy to address a broad class of optimization problems involving large-size signals and an application example to a nonconvex spectral unmixing problem will be presented.
Abstract: Many inverse problems require the minimization of a criterion that is the sum of a not necessarily smooth function and a Lipschitz-differentiable function. Such an optimization problem can be solved with the Forward-Backward algorithm, which can be accelerated thanks to the use of variable metrics derived from the Majorize-Minimize principle. The convergence of this approach is guaranteed provided that the criterion satisfies some additional technical conditions. Combining this method with an alternating minimization strategy is shown to allow us to address a broad class of optimization problems involving large-size signals. An application example to a nonconvex spectral unmixing problem is presented.

Proceedings ArticleDOI
01 Sep 2014
TL;DR: This work proposes a new blind deconvolution algorithm for the restoration of old analog television sequences based on a variational formulation of the problem, which accounts for motion between frames, while enforcing some level of temporal continuity through the use of a novel penalty function involving optical flow operators.
Abstract: Old analog television sequences suffer from a number of degradations. Some of them can be modeled through convolution with a kernel and an additive noise term. In this work, we propose a new blind deconvolution algorithm for the restoration of such sequences based on a variational formulation of the problem. Our method accounts for motion between frames, while enforcing some level of temporal continuity through the use of a novel penalty function involving optical flow operators, in addition to an edge-preserving regularization. The optimization process is performed by a proximal alternating minimization scheme benefiting from theoretical convergence guarantees. Simulation results on synthetic and real video sequences confirm the effectiveness of our method.

Proceedings ArticleDOI
27 Oct 2014
TL;DR: This paper deals with noise parameter estimation from a single image under Poisson-Gaussian noise statistics, formulated within a mixed discrete-continuous optimization framework and inspired from a spatial regularization approach for vector quantization.
Abstract: This paper deals with noise parameter estimation from a single image under Poisson-Gaussian noise statistics. The problem is formulated within a mixed discrete-continuous optimization framework. The proposed approach jointly estimates the signal of interest and the noise parameters. This is achieved by introducing an adjustable regularization term inside an optimized criterion, together with a data fidelity error measure. The optimal solution is sought iteratively by alternating the minimization of a label field and of a noise parameter vector. Noise parameters are updated at each iteration using an Expectation-Maximization approach. The proposed algorithm is inspired from a spatial regularization approach for vector quantization. We illustrate the usefulness of our approach on macroconfocal images. The identified noise parameters are applied to a denoising algorithm, thus yielding a complete denoising scheme.

Proceedings ArticleDOI
04 May 2014
TL;DR: A learning algorithm for multiclass support vector machines is designed that allows us to enforce sparsity through various nonsmooth regularizations, such as the mixed ℓ1, p-norm with p ≥ 1, and the proposed constrained convex optimization approach involves an epigraphical constraint.
Abstract: Sparsity inducing penalizations are useful tools in variational methods for machine learning. In this paper, we design a learning algorithm for multiclass support vector machines that allows us to enforce sparsity through various nonsmooth regularizations, such as the mixed ℓ1,p-norm with p ≥ 1. The proposed constrained convex optimization approach involves an epigraphical constraint for which we derive the closed-form expression of the associated projection. This sparse multiclass SVM problem can be efficiently implemented thanks to the flexibility offered by recent primal-dual proximal algorithms. Experiments carried out for handwritten digits demonstrate the interest of considering nonsmooth sparsity-inducing regularizations and the efficiency of the proposed epigraphical projection method.

Proceedings ArticleDOI
04 May 2014
TL;DR: In this article, a quantized Bernoulli-generalized Gaussian source with a sparse representation in a transformed domain is considered and the authors provide accurate approximations of the entropy and the distortion functions evaluated through a p-th order error measure.
Abstract: The objective of this paper is to study rate-distortion properties of a quantized Bernoulli-Generalized Gaussian source. Such a source model has been found to be well-adapted for signals having a sparse representation in a transformed domain. We provide here accurate approximations of the entropy and the distortion functions evaluated through a p-th order error measure. These theoretical results are then validated experimentally. Finally, the benefit that can be drawn from the proposed approximations in bit allocation problems is illustrated for a wavelet-based compression scheme.

Proceedings ArticleDOI
13 Nov 2014
TL;DR: Simulation results illustrate the good practical performance of the proposed MM Memory Gradient (3MG) algorithm when applied to 2D filter identification.
Abstract: Stochastic optimization plays an important role in solving many problems encountered in machine learning or adaptive processing. In this context, the second-order statistics of the data are often unknown a priori or their direct computation is too intensive, and they have to be estimated online from the related signals. In the context of batch optimization of an objective function being the sum of a data fidelity term and a penalization (e.g. a sparsity promoting function), Majorize-Minimize (MM) subspace methods have recently attracted much interest since they are fast, highly flexible and effective in ensuring convergence. The goal of this paper is to show how these methods can be successfully extended to the case when the cost function is replaced by a sequence of stochastic approximations of it. Simulation results illustrate the good practical performance of the proposed MM Memory Gradient (3MG) algorithm when applied to 2D filter identification.

Proceedings ArticleDOI
04 May 2014
TL;DR: In this paper, the under-determined problem is formulated as a convex optimization one, providing estimates of both filters and primaries, and the criterion to be minimized mainly consists of two parts: a data fidelity term and hard constraints model-ing a priori information.
Abstract: Random and structured noise both affect seismic data, hiding the reflections of interest (primaries) that carry meaningful geophysical interpretation. When the structured noise is composed of multiple reflections, its adaptive cancellation is obtained through time-varying filtering, compensating inaccuracies in given approximate templates. The under-determined problem can then be formulated as a convex optimization one, providing estimates of both filters and primaries. Within this framework, the criterion to be minimized mainly consists of two parts: a data fidelity term and hard constraints modeling a priori information. This formulation may avoid, or at least facilitate, some parameter determination tasks, usually difficult to perform in inverse problems. Not only classical constraints, such as sparsity, are considered here, but also constraints expressed through hyperplanes, onto which the projection is easy to compute. The latter constraints lead to improved performance by further constraining the space of geophysically sound solutions.

Proceedings ArticleDOI
29 May 2014
TL;DR: This paper addresses the more challenging case of an uncountable set of vectors parameterized by a real variable, and presents a proximal forward-backward algorithm to minimize an ℓ0 penalized cost, which allows the derived bounds to be approached.
Abstract: Complex-valued data play a prominent role in a number of signal and image processing applications. The aim of this paper is to establish some theoretical results concerning the Cramer-Rao bound for estimating a sparse complex-valued vector. Instead of considering a countable dictionary of vectors, we address the more challenging case of an uncountable set of vectors parameterized by a real variable. We also present a proximal forward-backward algorithm to minimize an ℓ0-penalized cost, which allows us to approach the derived bounds. These results are illustrated on a spectrum analysis problem in the case of irregularly sampled observations.
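
The backward step of such a forward-backward scheme relies on the proximity operator of the ℓ0 penalty, which is componentwise hard thresholding (a standard fact; the sketch and the convention at the threshold boundary are ours):

```python
import numpy as np

def prox_l0(z, lam):
    """Proximity operator of the l0 penalty lam*||x||_0: componentwise
    hard thresholding that keeps z_i only when 0.5*z_i^2 > lam,
    i.e. |z_i| > sqrt(2*lam); ties are resolved to zero here.
    np.abs also handles complex-valued entries."""
    x = z.copy()
    x[np.abs(z) <= np.sqrt(2.0 * lam)] = 0.0
    return x
```

Unlike the soft thresholding associated with the ℓ1 penalty, hard thresholding introduces no bias on the retained coefficients, at the price of a nonconvex, discontinuous update.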

Proceedings ArticleDOI
18 Sep 2014
TL;DR: The main focus of this work is the estimation of a complex valued signal assumed to have a sparse representation in an uncountable dictionary of signals which is recast as a constrained sparse perturbed model.
Abstract: The main focus of this work is the estimation of a complex valued signal assumed to have a sparse representation in an uncountable dictionary of signals. The dictionary elements are parameterized by a real-valued vector and the available observations are corrupted with an additive noise. By applying a linearization technique, the original model is recast as a constrained sparse perturbed model. The problem of the computation of the involved multiple parameters is addressed from a nonconvex optimization viewpoint. A cost function is defined including an arbitrary Lipschitz differentiable data fidelity term accounting for the noise statistics, and an l0-like penalty. A proximal algorithm is then employed to solve the resulting nonconvex and nonsmooth minimization problem. Experimental results illustrate the good practical performance of the proposed approach when applied to 2D spectrum analysis.

Posted Content
TL;DR: Within this framework, the criterion to be minimized mainly consists of a data fidelity term and hard constraints modeling a priori information, which may avoid, or at least facilitate, some parameter determination tasks, usually difficult to perform in inverse problems.
Abstract: Random and structured noise both affect seismic data, hiding the reflections of interest (primaries) that carry meaningful geophysical interpretation. When the structured noise is composed of multiple reflections, its adaptive cancellation is obtained through time-varying filtering, compensating inaccuracies in given approximate templates. The under-determined problem can then be formulated as a convex optimization one, providing estimates of both filters and primaries. Within this framework, the criterion to be minimized mainly consists of two parts: a data fidelity term and hard constraints modeling a priori information. This formulation may avoid, or at least facilitate, some parameter determination tasks, usually difficult to perform in inverse problems. Not only classical constraints, such as sparsity, are considered here, but also constraints expressed through hyperplanes, onto which the projection is easy to compute. The latter constraints lead to improved performance by further constraining the space of geophysically sound solutions.