
Showing papers on "Maxima and minima" published in 2009


Journal ArticleDOI
01 Dec 2009
TL;DR: In this paper, the local extrema of the input image are used to extract information about oscillations, a key property that distinguishes textures from individual edges, and an algorithm for decomposing images into multiple scales of superposed oscillations is developed.
Abstract: We propose a new model for detail that inherently captures oscillations, a key property that distinguishes textures from individual edges. Inspired by techniques in empirical data analysis and morphological image analysis, we use the local extrema of the input image to extract information about oscillations: We define detail as oscillations between local minima and maxima. Building on the key observation that the spatial scale of oscillations is characterized by the density of local extrema, we develop an algorithm for decomposing images into multiple scales of superposed oscillations. Current edge-preserving image decompositions assume image detail to be low-contrast variation. Consequently, they apply filters that extract features with increasing contrast as successive layers of detail. As a result, they are unable to distinguish between high-contrast, fine-scale features and edges of similar contrast that are to be preserved. We compare our results with existing edge-preserving image decomposition algorithms and demonstrate exciting applications that are made possible by our new notion of detail.
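The core construction — detecting local minima and maxima and treating the oscillation between their envelopes as detail — can be sketched in one dimension. This is an illustrative reduction of the paper's 2D method, assuming a simple three-point extremum test and linear envelope interpolation (the function names are ours):

```python
import numpy as np

def local_extrema(x):
    # Indices of strict three-point local maxima and minima.
    maxima = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] < x[i + 1]]
    return maxima, minima

def extrema_detail(x):
    # One decomposition step: interpolate envelopes through the extrema;
    # their mean is the coarse layer, the residual oscillation is detail.
    maxima, minima = local_extrema(x)
    idx = np.arange(len(x))
    upper = np.interp(idx, maxima, x[maxima])
    lower = np.interp(idx, minima, x[minima])
    base = 0.5 * (upper + lower)
    return base, x - base

# Coarse ramp plus fine oscillation: the ramp is recovered as the base.
t = np.linspace(0.0, 1.0, 200)
x = t + 0.1 * np.sin(40 * np.pi * t)
base, detail = extrema_detail(x)
```

Iterating on the base layer would peel off successively coarser oscillations, in the spirit of the multiscale decomposition.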

262 citations


Journal ArticleDOI
15 Jul 2009
TL;DR: This paper proposes the Auto Diffusion Function (ADF) which is a linear combination of the eigenfunctions of the Laplace‐Beltrami operator in a way that has a simple physical interpretation.
Abstract: Scalar functions defined on manifold triangle meshes are a starting point for many geometry processing algorithms such as mesh parametrization, skeletonization, and segmentation. In this paper, we propose the Auto Diffusion Function (ADF), which is a linear combination of the eigenfunctions of the Laplace-Beltrami operator in a way that has a simple physical interpretation. The ADF of a given 3D object has a number of further desirable properties: Its extrema are generally at the tips of features of a given object, its gradients and level sets follow or encircle features, respectively, it is controlled by a single parameter which can be interpreted as feature scale, and, finally, the ADF is invariant to rigid and isometric deformations. We describe the ADF and its properties in detail and compare it to other choices of scalar functions on manifolds. As an example of an application, we present a pose-invariant, hierarchical skeletonization and segmentation algorithm which makes direct use of the ADF.
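The ADF has a compact spectral form: a sum of squared Laplacian eigenfunctions with exponentially decaying weights. A minimal sketch, using a path-graph Laplacian as a stand-in for the Laplace-Beltrami operator of a mesh, and omitting the paper's normalization of the time parameter by the first nonzero eigenvalue:

```python
import numpy as np

def auto_diffusion(L, t):
    # ADF at each vertex: sum_i exp(-t * lam_i) * phi_i(v)**2 over the
    # nonconstant eigenpairs of the Laplacian (lam_0 = 0 is skipped).
    lam, phi = np.linalg.eigh(L)
    return (np.exp(-t * lam[1:]) * phi[:, 1:] ** 2).sum(axis=1)

# Path graph on n vertices as a toy 1D "manifold" with two feature tips.
n = 21
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0   # Neumann-style boundary rows
adf = auto_diffusion(L, t=1.0)
```

Consistent with the "extrema at feature tips" property, the values at the two endpoints of the path exceed those in the interior.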

180 citations


01 Jan 2009
TL;DR: A new model for detail is proposed that inherently captures oscillations, a key property that distinguishes textures from individual edges, and an algorithm for decomposing images into multiple scales of superposed oscillations is developed.
Abstract: We propose a new model for detail that inherently captures oscillations, a key property that distinguishes textures from individual edges. Inspired by techniques in empirical data analysis and morphological image analysis, we use the local extrema of the input image to extract information about oscillations: We define detail as oscillations between local minima and maxima. Building on the key observation that the spatial scale of oscillations is characterized by the density of local extrema, we develop an algorithm for decomposing images into multiple scales of superposed oscillations. Current edge-preserving image decompositions assume image detail to be low-contrast variation. Consequently, they apply filters that extract features with increasing contrast as successive layers of detail. As a result, they are unable to distinguish between high-contrast, fine-scale features and edges of similar contrast that are to be preserved. We compare our results with existing edge-preserving image decomposition algorithms and demonstrate exciting applications that are made possible by our new notion of detail.

175 citations


Journal ArticleDOI
TL;DR: The optimization scheme is based on ideas from global optimization theory, in particular convex underestimators in combination with branch-and-bound methods, and provides a provably optimal algorithm and demonstrates good performance on both synthetic and real data.
Abstract: In this paper, we propose a practical and efficient method for finding the globally optimal solution to the problem of determining the pose of an object. We present a framework that allows us to use point-to-point, point-to-line, and point-to-plane correspondences for solving various types of pose and registration problems involving Euclidean (or similarity) transformations. Traditional methods such as the iterative closest point algorithm or bundle adjustment methods for camera pose may get trapped in local minima due to the nonconvexity of the corresponding optimization problem. Our approach of solving the mathematical optimization problems guarantees global optimality. The optimization scheme is based on ideas from global optimization theory, in particular convex underestimators in combination with branch-and-bound methods. We provide a provably optimal algorithm and demonstrate good performance on both synthetic and real data. We also give examples of where traditional methods fail due to the local minima problem.
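Branch-and-bound with certified underestimators is easiest to see in one dimension. The sketch below is not the paper's rotation-space algorithm; it globally minimizes a Lipschitz function on an interval, using f(mid) − L·width/2 as the lower bound (the paper's convex underestimators play the same certifying role):

```python
import heapq, math

def branch_and_bound(f, lo, hi, lip, tol=1e-6):
    # Global minimization of a Lipschitz function on [lo, hi].  On each
    # interval, f(mid) - lip * width / 2 is a certified lower bound.
    best_x, best_f = lo, f(lo)
    heap = [(f((lo + hi) / 2) - lip * (hi - lo) / 2, lo, hi)]
    while heap:
        bound, a, b = heapq.heappop(heap)
        if bound > best_f - tol:
            continue                # cannot beat the incumbent: prune
        m = (a + b) / 2
        if f(m) < best_f:
            best_x, best_f = m, f(m)
        for a2, b2 in ((a, m), (m, b)):
            lb = f((a2 + b2) / 2) - lip * (b2 - a2) / 2
            if lb < best_f - tol:   # branch only where improvement is possible
                heapq.heappush(heap, (lb, a2, b2))
    return best_x, best_f

# Nonconvex test objective with several local minima on [0, 6].
obj = lambda x: math.sin(3 * x) + 0.1 * (x - 2) ** 2
x, fx = branch_and_bound(obj, 0.0, 6.0, lip=4.0)  # |obj'| <= 3.8 < 4 on [0, 6]
```

Unlike a local descent started in the wrong basin, the returned value is within `tol` of the global minimum, because an interval is discarded only when its certified bound proves it cannot contain a better point.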

162 citations


Proceedings ArticleDOI
20 Jun 2009
TL;DR: This work introduces a new technique that can reduce any higher-order Markov random field with binary labels into a first-order one that has the same minima as the original, and combines the reduction with the fusion-move and QPBO algorithms to optimize higher-order multi-label problems.
Abstract: We introduce a new technique that can reduce any higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we combine the reduction with the fusion-move and QPBO algorithms to optimize higher-order multi-label problems. While many vision problems today are formulated as energy minimization problems, they have mostly been limited to using first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms to optimize energies with higher-order interactions. Our algorithm challenges this restriction that limits the representational power of the models, so that higher-order energies can be used to capture the rich statistics of natural scenes. To demonstrate the algorithm, we minimize a third-order energy, which allows clique potentials with up to four pixels, in an image restoration problem. The problem uses the fields of experts model, a learned spatial prior of natural images that has been used to test two belief propagation algorithms capable of optimizing higher-order energies. The results show that the algorithm exceeds the BP algorithms in both optimization performance and speed.
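The flavor of such reductions is visible in the classic identity for a third-order term with a negative coefficient: one auxiliary binary variable w converts it to unary and pairwise terms with the same minimum. This is a sketch of the idea only; the paper's general construction, which also covers positive coefficients and arbitrary order, is more involved:

```python
from itertools import product

a = -3.0  # coefficient of the cubic term; the identity below requires a < 0

def cubic(x1, x2, x3):
    # Third-order term a * x1 * x2 * x3 over binary variables.
    return a * x1 * x2 * x3

def reduced(x1, x2, x3):
    # First-order form: minimizing over one auxiliary binary variable w
    # leaves only unary and pairwise terms in (x1, x2, x3, w).
    return min(a * w * (x1 + x2 + x3 - 2) for w in (0, 1))

# The two energies agree on all eight binary assignments.
agree = all(cubic(*x) == reduced(*x) for x in product((0, 1), repeat=3))
```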

154 citations


Journal ArticleDOI
TL;DR: Three optimization problems related to the generalized Nash equilibrium problem are presented, using a regularized Nikaido-Isoda function, to obtain a smooth constrained optimization problem whose global minima correspond to so-called normalized Nash equilibria.
Abstract: We consider the generalized Nash equilibrium problem which, in contrast to the standard Nash equilibrium problem, allows joint constraints of all players involved in the game. Using a regularized Nikaido-Isoda function, we then present three optimization problems related to the generalized Nash equilibrium problem. The first optimization problem is a complete reformulation of the generalized Nash game in the sense that the global minima are precisely the solutions of the game. However, this reformulation is nonsmooth. We then modify this approach and obtain a smooth constrained optimization problem whose global minima correspond to so-called normalized Nash equilibria. The third approach uses the difference of two regularized Nikaido-Isoda functions in order to get a smooth unconstrained optimization problem whose global minima are, once again, precisely the normalized Nash equilibria. Conditions for stationary points to be global minima of the two smooth optimization problems are also given. Some numerical results illustrate the behaviour of our approaches.

133 citations


Journal ArticleDOI
TL;DR: In this paper, analytic methods for finding the loss-minimizing solution are studied; since the solution lies either in the interior or on the voltage-limit boundary, the two cases are dealt with separately.
Abstract: Normally, lookup-table-based methods are being utilized for loss-minimizing control of permanent magnet synchronous motors (PMSMs). But numerous repetitive experiments are required to make a lookup table, and the program size becomes bulky. In this paper, analytic methods for finding the loss-minimizing solution are studied. Since the solution lies either in the interior or on the voltage limit boundary, two different cases are dealt with separately. In both cases, fourth-order polynomials are derived. To obtain approximate solutions, methods of order reduction and linear approximation are utilized. The accuracies are good enough for practical use. These approximate solutions are fused into a proposed loss-minimizing algorithm and implemented in an inverter digital signal processor. Experiments were done with a real PMSM developed for a sport utility fuel cell electric vehicle. The analytically derived minima were justified by experimental evidence, and the dynamic performances over a wide range of speed were shown to be satisfactory.
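Once a fourth-order optimality polynomial is in hand, the remaining work is root finding and root selection. A generic sketch with illustrative coefficients (not derived from any motor model), assuming the admissible d-axis current lies in a known negative interval:

```python
import numpy as np

# Hypothetical quartic optimality condition p(i_d) = 0 for the d-axis
# current; the coefficients are illustrative only.
coeffs = [1.0, 2.0, -7.0, -8.0, 12.0]  # i_d^4 + 2 i_d^3 - 7 i_d^2 - 8 i_d + 12
roots = np.roots(coeffs)
real = np.sort(roots[np.abs(roots.imag) < 1e-8].real)
# Keep only the root inside an assumed feasible current interval.
feasible = real[(real > -2.5) & (real < 0.0)]
```

A companion polynomial-based solve like this replaces the lookup table entirely; the paper's order-reduction and linearization steps exist to avoid even this quartic solve on a DSP.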

102 citations


Journal ArticleDOI
TL;DR: In this paper, an optimization strategy that incorporates several techniques, including grid, frequency, and time-window continuation, primal-dual methods for treating bound inequality constraints and total variation regularization, and inexact matrix-free Newton-Krylov optimization was developed.
Abstract: Full-waveform seismic inversion, i.e., the iterative minimization of the misfit between observed seismic data and synthetic data obtained by a numerical solution of the wave equation, provides a systematic, flexible, general mechanism for reconstructing earth models from observed ground motion. However, many difficulties arise for highly resolved models and the associated large-dimensional parameter spaces and high-frequency sources. First, the least-squares data-misfit functional suffers from spurious local minima, which necessitates an accurate initial guess for the smooth background model. Second, total variation regularization methods that are used to resolve sharp interfaces create significant numerical difficulties because of their nonlinearity and near-degeneracy. Third, bound constraints on continuous model parameters present considerable difficulty for commonly used active-set or interior-point methods for inequality constraints because of the infinite-dimensional nature of the parameters. Finally, common gradient-based optimization methods have difficulties scaling to the many model parameters that result when the continuous parameter fields are discretized. We have developed an optimization strategy that incorporates several techniques to address these four difficulties, including grid, frequency, and time-window continuation; primal-dual methods for treating bound inequality constraints and total variation regularization; and inexact matrix-free Newton-Krylov optimization. Using this approach, several computations were performed effectively for a 1D setting with synthetic observations.

96 citations


Journal ArticleDOI
TL;DR: A new average offspring recombination operator is introduced and compared with previously used operators in the field of cluster structure prediction, and minima hopping is improved with a softening method and a stronger feedback mechanism.
Abstract: We compare evolutionary algorithms with minima hopping for global optimization in the field of cluster structure prediction. We introduce a new average offspring recombination operator and compare it with previously used operators. Minima hopping is improved with a softening method and a stronger feedback mechanism. Test systems are atomic clusters with Lennard-Jones interaction as well as silicon and gold clusters described by force fields. The improved minima hopping is found to be well-suited to all these homoatomic problems. The evolutionary algorithm is more efficient for systems with compact and symmetric ground states, including LJ150, but it fails for systems with very complex energy landscapes and asymmetric ground states, such as LJ75 and silicon clusters with more than 30 atoms. Both successes and failures of the evolutionary algorithm suggest ways for its improvement.
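A stripped-down minima hopping loop, with a random kick standing in for the molecular-dynamics escape and with only the kinetic-energy feedback (the Ediff feedback of the full method is kept fixed here); the 1D landscape and all parameters are illustrative:

```python
import math, random

def f(x):
    # Rugged 1D landscape: a parabola with a superposed oscillation.
    return 0.1 * x * x + math.sin(3 * x)

def local_minimize(x, step=1e-3, iters=5000):
    # Crude gradient-descent stand-in for a local geometry relaxation;
    # the expression below is the derivative of f.
    for _ in range(iters):
        x -= step * (0.2 * x + 3 * math.cos(3 * x))
    return x

def minima_hopping(x0, n_hops=60, seed=0):
    rng = random.Random(seed)
    ekin = 1.0    # kick strength, adapted by feedback
    ediff = 0.5   # acceptance margin (fixed in this sketch)
    x = local_minimize(x0)
    best_x, best_f = x, f(x)
    for _ in range(n_hops):
        # "MD escape" modeled as a random kick scaled by ekin.
        trial = local_minimize(x + rng.gauss(0, ekin))
        if abs(trial - x) < 1e-3:
            ekin *= 1.1   # fell back into the same minimum: kick harder
            continue
        ekin *= 0.9       # reached a different minimum: soften the kicks
        if f(trial) < f(x) + ediff:
            x = trial     # accept downhill and mildly uphill hops
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

bx, bf = minima_hopping(3.0)
```

The feedback is the essential ingredient: revisiting a known minimum automatically raises the escape energy, which is what the paper strengthens.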

94 citations


Journal ArticleDOI
TL;DR: In this article, the connection between local minima in the problem Hamiltonian and first-order quantum phase transitions during adiabatic quantum computation was investigated, and an analytical formula that cannot only predict the behavior of the gap, but also provide insight on how to controllably vary the gap size by changing the parameters.
Abstract: We investigate the connection between local minima in the problem Hamiltonian and first-order quantum phase transitions during adiabatic quantum computation. We demonstrate how some properties of the local minima can lead to an extremely small gap that is exponentially sensitive to the Hamiltonian parameters. Using perturbation expansion, we derive an analytical formula that can not only predict the behavior of the gap, but also provide insight into how to controllably vary the gap size by changing the parameters. We show agreement with numerical calculations for a weighted maximum independent set problem instance.

91 citations


Journal ArticleDOI
TL;DR: In this paper, an adaptive stochastic finite elements approach with Newton-Cotes quadrature and simplex elements is developed for resolving the effect of random parameters in flow problems.

Journal ArticleDOI
TL;DR: The Gaussian-mixture umbrella sampling method (GAMUS) is introduced, a biased molecular dynamics technique based on adaptive umbrella sampling that efficiently escapes free energy minima in multidimensional problems.
Abstract: We introduce the Gaussian-mixture umbrella sampling method (GAMUS), a biased molecular dynamics technique based on adaptive umbrella sampling that efficiently escapes free energy minima in multidimensional problems. The prior simulation data are reweighted with a maximum likelihood formulation, and the new approximate probability density is fit to a Gaussian-mixture model, augmented by information about the unsampled areas. The method can be used to identify free energy minima in multidimensional reaction coordinates. To illustrate GAMUS, we apply it to the alanine dipeptide (2D reaction coordinate) and tripeptide (4D reaction coordinate).

Journal ArticleDOI
TL;DR: In this article, a polar coordinate integration grid with a smaller grid spacing in the radial direction than in the angular direction is proposed to model high-magnification planetary microlensing events.
Abstract: I present a previously unpublished method for modeling multiple lens microlensing events that is based on the image centered ray shooting approach of Bennett and Rhie. It has been used to model a wide variety of binary and triple lens systems, but it is designed to efficiently model high-magnification planetary microlensing events, because these are, by far, the most challenging events to model. It is designed to be efficient enough to handle microlensing events with more than two lens masses and lens orbital motion. This method uses a polar coordinate integration grid with a smaller grid spacing in the radial direction than in the angular direction, and it employs an integration scheme specifically designed to handle limb darkened sources. I present tests that show that these features achieve second order accuracy for the light curves of a number of high-magnification planetary events. They improve the precision of the calculations by a factor of >100 compared to first order integration schemes with the same grid spacing in both directions (for a fixed number of grid points). This method also includes a Metropolis algorithm chi^2 minimization method that allows the jump function to vary in a way that allows quick convergence to chi^2 minima. Finally, I introduce a global parameter space search strategy that allows a blind search of parameter space for light curve models without requiring chi^2 minimization over a large grid of fixed parameters. Instead, the parameter space is explored on a grid of initial conditions for a set of chi^2 minimizations using the full parameter space. While this method may be faster than methods that find the chi^2 minima over a large grid of parameters, I argue that the main strength of this method is for events with the signals of multiple planets, where a much higher dimensional parameter space must be explored to find the correct light curve model.

Journal ArticleDOI
TL;DR: In this article, a linearized algorithm based on inequality constraints is introduced for the inversion of observed dispersion curves, which provides a flexible way to insert a priori information as well as physical constraints into the linearized inversion process.
Abstract: The multichannel analysis of the surface waves method is based on the inversion of observed Rayleigh‐wave phase‐velocity dispersion curves to estimate the shear‐wave velocity profile of the site under investigation. This inverse problem is nonlinear and it is often solved using ‘local’ or linearized inversion strategies. Among linearized inversion algorithms, least‐squares methods are widely used in research and prevailing in commercial software; the main drawback of this class of methods is their limited capability to explore the model parameter space. The possibility for the estimated solution to be trapped in local minima of the objective function strongly depends on the degree of nonuniqueness of the problem, which can be reduced by an adequate model parameterization and/or imposing constraints on the solution. In this article, a linearized algorithm based on inequality constraints is introduced for the inversion of observed dispersion curves; this provides a flexible way to insert a priori information as well as physical constraints into the inversion process. As linearized inversion methods are strongly dependent on the choice of the initial model and on the accuracy of partial derivative calculations, these factors are carefully reviewed. Attention is also focused on the appraisal of the inverted solution, using resolution analysis and uncertainty estimation together with a posteriori effective‐velocity modelling. Efficiency and stability of the proposed approach are demonstrated using both synthetic and real data; in the latter case, cross‐hole S‐wave velocity measurements are blind‐compared with the results of the inversion process.

Journal ArticleDOI
TL;DR: The model parameters of a two-layer dielectric structure with random slightly rough boundaries are retrieved from data that consist of the backscattering coefficients for multiple polarizations, angles, and frequencies using the small perturbation method.
Abstract: In this paper, the model parameters of a two-layer dielectric structure with random slightly rough boundaries are retrieved from data that consist of the backscattering coefficients for multiple polarizations, angles, and frequencies. We use the small perturbation method to solve the forward problem. The inversion problem is then formulated as a least-squares problem and is solved using a global optimization method known as simulated annealing, which is shown to be a robust retrieval algorithm for our purpose. The algorithm performance depends on several parameters. We make recommendations on these parameters and propose a technique for exiting local minima when encountered. We test the sensitivity of the inversion scheme to measurement noise and present the noise analysis results.
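The retrieval loop is standard simulated annealing: Metropolis acceptance with a slowly lowered temperature. A minimal 1D sketch on a made-up multimodal misfit (the paper additionally adds a technique for exiting local minima when encountered):

```python
import math, random

def simulated_annealing(f, x0, sigma=0.5, t0=1.0, cooling=0.995,
                        n_steps=4000, seed=1):
    # Downhill moves are always taken; uphill moves pass with
    # probability exp(-delta / T) under a geometric cooling schedule.
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(n_steps):
        y = x + rng.gauss(0, sigma)
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# Made-up multimodal misfit; global minimum value 0 at x = 0.
misfit = lambda x: x * x + 2.0 * (1.0 - math.cos(5.0 * x))
x, fx = simulated_annealing(misfit, x0=4.0)
```

The early high-temperature phase hops between the cosine wells; the late cold phase refines within the best basin found.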

Journal ArticleDOI
TL;DR: A novel contouring algorithm is proposed for the construction of a discrete Reeb graph with a minimal number of nodes, which correspond to the critical points of f and its level sets passing through the saddle points; its O(sn) cost is competitive with respect to the O(n log n) cost of previous work.
Abstract: Given a manifold surface M and a continuous scalar function f: M → ℝ, the Reeb graph of (M, f) is a widely used high-level descriptor of M and its usefulness has been demonstrated for a variety of applications, which range from shape parameterization and abstraction to deformation and comparison. In this context, we propose a novel contouring algorithm for the construction of a discrete Reeb graph with a minimal number of nodes, which correspond to the critical points of f (i.e., minima, maxima, and saddle points) and its level sets passing through the saddle points. In this way, we do not need to sample, sweep, or incrementally sort the f-values. Since most of the computation uses only local information on the mesh connectivity, equipped with the f-values at the surface vertices, the proposed approach is insensitive to noise and requires a small memory footprint and temporary data structures. Furthermore, we maintain the parametric nature of the Reeb graph with respect to the input scalar function and we efficiently extract the Reeb graph of time-varying maps. Indicating with n and s the number of vertices of M and saddle points of f, the overall computational cost O(sn) is competitive with respect to the O(n log n) cost of previous work. This cost becomes optimal if M is highly sampled or s ≤ log n, as it happens for Laplacian eigenfunctions, harmonic maps, and one-forms.

Journal ArticleDOI
TL;DR: Two variants of the extended Rosenbrock function are analyzed in order to find their stationary points. The first variant is shown to possess a single stationary point, the global minimum; for the second, the remaining saddle points have a predictable form, and a method is proposed to estimate their number.
Abstract: Two variants of the extended Rosenbrock function are analyzed in order to find the stationary points. The first variant is shown to possess a single stationary point, the global minimum. The second variant has numerous stationary points for high dimensionality. A previously proposed method is shown to be numerically intractable, requiring arbitrary precision computation in many cases to enumerate candidate solutions. Instead, a standard Newtonian method with multi-start is applied to locate stationary points. The relative magnitude of the negative and positive eigenvalues of the Hessian is also computed, in order to characterize the saddle points. For dimensions up to 100, only two local minimizers are found, but many saddle points exist. Two saddle points with a single negative eigenvalue exist for high dimensionality, which may appear as “near” local minima. The remaining saddle points we found have a predictable form, and a method is proposed to estimate their number. Monte Carlo simulation indicates that it is unlikely to escape these saddle points using uniform random search. A standard particle swarm algorithm also struggles to improve upon a saddle point contained within the initial population.
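Newton's method with multi-start is easy to reproduce on the two-dimensional Rosenbrock function, whose only stationary point is the global minimum (1, 1); diverging runs are simply discarded. A sketch (not the paper's arbitrary-precision enumeration):

```python
import random

def rosenbrock_grad_hess(x, y):
    # Gradient and Hessian of f(x, y) = (1 - x)^2 + 100 (y - x^2)^2.
    g = (-2 * (1 - x) - 400 * x * (y - x * x), 200 * (y - x * x))
    h = ((2 - 400 * y + 1200 * x * x, -400 * x), (-400 * x, 200))
    return g, h

def newton(x, y, iters=100):
    # Plain Newton iteration on the gradient; it targets stationary
    # points (minima or saddles), not necessarily minimizers.
    for _ in range(iters):
        (g0, g1), ((a, b), (c, d)) = rosenbrock_grad_hess(x, y)
        det = a * d - b * c
        if abs(det) < 1e-12:
            break
        # Solve H * step = -g by Cramer's rule (2x2 system).
        x -= (d * g0 - b * g1) / det
        y -= (a * g1 - c * g0) / det
    return x, y

rng = random.Random(0)
points = [newton(rng.uniform(-2, 2), rng.uniform(-1, 3)) for _ in range(20)]
```

For the higher-dimensional variants studied in the paper, the same multi-start loop also lands on saddle points, which is why the authors inspect the Hessian eigenvalues at each root.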

Proceedings ArticleDOI
19 Apr 2009
TL;DR: The aim of this paper is to propose a tempering scheme that favors convergence of IS-NMF to global minima, based on NMF with the beta-divergence, where the shape parameter beta acts as a temperature parameter.
Abstract: In this paper we are interested in non-negative matrix factorization (NMF) with the Itakura-Saito (IS) divergence. Previous work has demonstrated the relevance of this cost function for the decomposition of audio power spectrograms. This is in particular due to its scale invariance, which makes it more robust to the wide dynamics of audio, a property which is not shared by other popular costs such as the Euclidean distance or the generalized Kullback-Leibler (KL) divergence. However, while the latter two cost functions are convex, the IS divergence is not, which makes it more prone to convergence to irrelevant local minima, as observed empirically. Thus, the aim of this paper is to propose a tempering scheme that favors convergence of IS-NMF to global minima. Our algorithm is based on NMF with the beta-divergence, where the shape parameter beta acts as a temperature parameter. Results on both synthetic and music data (in a transcription context) show the relevance of our approach.
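A sketch of the tempering idea: standard multiplicative beta-divergence NMF updates, with beta annealed from 2 (Euclidean, the convex end) down to 0 (Itakura-Saito) over the iterations. The linear schedule and the problem dimensions are illustrative, not the paper's:

```python
import numpy as np

def beta_nmf_tempered(V, k, n_iter=200, seed=0):
    # Multiplicative beta-NMF updates; beta acts as a "temperature"
    # lowered from the Euclidean cost (beta=2) to Itakura-Saito (beta=0).
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    for beta in np.linspace(2.0, 0.0, n_iter):
        Vh = W @ H
        H *= (W.T @ (V * Vh ** (beta - 2))) / (W.T @ Vh ** (beta - 1))
        Vh = W @ H
        W *= ((V * Vh ** (beta - 2)) @ H.T) / (Vh ** (beta - 1) @ H.T)
    return W, H

# Strictly positive rank-2 data; the factorization should fit it closely.
rng = np.random.default_rng(1)
V = (rng.random((8, 2)) + 0.05) @ (rng.random((2, 10)) + 0.05)
W, H = beta_nmf_tempered(V, k=2)
```

Starting at the convex end of the beta family and cooling toward IS is what reduces the chance of ending in a poor IS-specific local minimum.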

Journal ArticleDOI
TL;DR: A series of techniques is presented for overcoming some of the numerical instabilities associated with SIMP materials; these are combined to create a robust topology optimization algorithm designed to accommodate a large suite of problems that more closely resemble those found in industry applications.
Abstract: A series of techniques is presented for overcoming some of the numerical instabilities associated with SIMP materials. These techniques are combined to create a robust topology optimization algorithm designed to be able to accommodate a large suite of problems that more closely resemble those found in industry applications. A variant of the Kreisselmeier–Steinhauser (KS) function in which the aggregation parameter is dynamically increased over the course of the optimization is used to handle multi-load problems. Results from this method are compared with those obtained using the bound formulation. It is shown that the KS aggregation method produces results superior to those of the bound formulation, which can be highly susceptible to local minima. Adaptive mesh-refinement is presented as a means of addressing the mesh-dependency problem. It is shown that successive mesh-refinement cycles can generate smooth, well-defined structures, and when used in combination with nine-node elements, virtually eliminate...
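The KS function itself is a smooth log-sum-exp overestimate of the maximum constraint value, and increasing the aggregation parameter tightens it toward the true max; that is the dynamic schedule the paper exploits. A small numerical illustration (the constraint values are made up):

```python
import math

def ks_aggregate(values, rho):
    # Kreisselmeier-Steinhauser aggregate: a smooth, conservative
    # overestimate of max(values) that tightens as rho grows.
    # Shifting by the max keeps the exponentials from overflowing.
    m = max(values)
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

g = [0.3, 0.9, 0.7, 0.85]  # made-up per-load-case constraint values
estimates = [ks_aggregate(g, rho) for rho in (5, 50, 500)]
```

Because the aggregate stays above the true maximum for every rho, a design that satisfies the aggregated constraint also satisfies each individual one, at any point of the schedule.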

Journal ArticleDOI
TL;DR: An inversion algorithm is presented that is based on hybridization of the adjoint scheme for calculating gradient directions with the method of moments, and the numerical results show that implementing the well-known frequency hopping technique helps the algorithm avoid getting trapped in local minima.
Abstract: An inversion algorithm is presented that is based on hybridization of the adjoint scheme for calculating gradient directions with the method of moments. The goal is to reconstruct the shapes of 3D objects immersed in a lossy medium. The irregular shape of the reference object is modeled by a representation with spherical harmonics functions, whereas during the reconstruction, individual surface nodes are updated. In the adjoint scheme, gradient directions for the least squares data misfit cost functional are calculated by solving the forward problem twice in each iteration, regardless of the number of spherical harmonics parameters used in the reference model or the number of surface nodes used for the discretization of the shapes. The numerical results show that implementing the well-known frequency hopping technique helps the algorithm avoid getting trapped in local minima.

Journal ArticleDOI
TL;DR: The error expression of Fourier interpolation is discussed and its upper bound is derived for general band-limited signals, which implies that Fourier interpolation yields errors, especially near the boundary, when the signal is non-integer-period sampled.

Journal ArticleDOI
TL;DR: The performance and application of a radial basis function artificial neural network (RBF-ANN) in the inversion of seismic data are examined, and the inverted acoustic impedance section shows the approach to be efficient.

Journal ArticleDOI
TL;DR: An efficient technique, ANMBP, for training single-hidden-layer neural networks to improve convergence speed and to escape from local minima is presented.

Journal ArticleDOI
TL;DR: It is shown that axially symmetric soft discoids can self-assemble into helical columnar arrangements, and the molecular origin of such spatial organisation has important implications for the rational design of materials with useful optoelectronic applications.
Abstract: Nature has mastered the art of creating complex structures through self-assembly of simpler building blocks. Adapting such a bottom-up view provides a potential route to the fabrication of novel materials. However, this approach suffers from the lack of a sufficiently detailed understanding of the noncovalent forces that hold the self-assembled structures together. Here we demonstrate that nature can indeed guide us, as we explore routes to helicity with achiral building blocks driven by the interplay between two competing length scales for the interactions, as in DNA. By characterizing global minima for clusters, we illustrate several realizations of helical architecture, the simplest one involving ellipsoids of revolution as building blocks. In particular, we show that axially symmetric soft discoids can self-assemble into helical columnar arrangements. Understanding the molecular origin of such spatial organisation has important implications for the rational design of materials with useful optoelectronic applications.

Journal ArticleDOI
TL;DR: In this paper, the authors describe the application of optimization methods to 2D and 3D trishear inverse modeling, which traverse the parameter space in search for the combination of trisheear parameters that best restores a fold profile to a straight line in 2D, or a folded surface to a plane in 3D.

Journal ArticleDOI
TL;DR: A stochastic global optimization method based on Multistart is presented, in which the local search is conditionally applied with a probability that takes into account the topology of the objective function at the detail offered by the current status of exploration.

Journal ArticleDOI
TL;DR: Numerical approximations of a constrained minimization problem, where the objective function is a quadratic Dirichlet functional for vector fields and the interior constraint is given by a convex function, are considered.
Abstract: In this paper we consider numerical approximations of a constrained minimization problem, where the objective function is a quadratic Dirichlet functional for vector fields and the interior constraint is given by a convex function. The solutions of this problem are usually referred to as harmonic maps. The solution is characterized by a nonlinear saddle point problem, and the corresponding linearized problem is well-posed near strict local minima. The main contribution of the present paper is to establish a corresponding result for a proper finite element discretization in the case of two space dimensions. Iterative schemes of Newton type for the discrete nonlinear saddle point problems are investigated, and mesh independent preconditioners for the iterative methods are proposed.

ReportDOI
20 Aug 2009
TL;DR: This work provides a method for computing a global solution for the (non-convex) multi-phase piecewise constant Mumford-Shah (spatially continuous Potts) image segmentation problem and believes it to be the first in the literature that can make this claim.
Abstract: : Most variational models for multi-phase image segmentation are non-convex and possess multiple local minima, which makes solving for a global solution an extremely difficult task. In this work, we provide a method for computing a global solution for the (non-convex) multi-phase piecewise constant Mumford-Shah (spatially continuous Potts) image segmentation problem. Our approach is based on using a specific representation of the problem due to Lie et al. [27]. We then rewrite this representation using the dual formulation for total variation so that a variational convexification technique due to Pock et al. [30] may be employed. Unlike some recent methods in this direction, our method can guarantee that a global solution is obtained. We believe our method to be the first in the literature that can make this claim. Once we have the convex optimization problem, we give an algorithm to compute a global solution. We demonstrate our algorithm on several multi-phase image segmentation examples, including a medical imaging application.

Journal ArticleDOI
TL;DR: This paper explores the use of damped least-squares methods for a purpose that goes beyond local optimization, and enables a better understanding of peculiarities encountered with damped least-squares algorithms in conventional local optimization tasks.
Abstract: In lens design, damped least-squares methods are typically used to find the nearest local minimum to a starting configuration in the merit function landscape. In this paper, we explore the use of such a method for a purpose that goes beyond local optimization. The merit function barrier, which separates an unsatisfactory solution from a neighboring one that is better, can be overcome by using low damping and by allowing the merit function to temporarily increase. However, such an algorithm displays chaos, chaotic transients and other types of complex behavior. A successful escape of the iteration trajectory from a poor local minimum to a better one is associated with a crisis phenomenon that transforms a chaotic attractor into a chaotic saddle. The present analysis also enables a better understanding of peculiarities encountered with damped least-squares algorithms in conventional local optimization tasks.

Book ChapterDOI
18 Aug 2009
TL;DR: An efficient and global minimization method for the binary level set representation of the multiphase Chan-Vese model based on graph cuts is developed, and a novel method for minimizing nonsubmodular functions is proposed with particular emphasis on this energy function.
Abstract: The Mumford-Shah model is an important variational image segmentation model. A popular multiphase level set approach, the Chan-Vese model, was developed for this model by representing the phases by several overlapping level set functions. Recently, exactly the same model was also formulated by using binary level set functions. In both approaches, the gradient descent equations had to be solved numerically, a procedure which is slow and has the potential of getting stuck in local minima. In this work, we develop an efficient and global minimization method for the binary level set representation of the multiphase Chan-Vese model based on graph cuts. If the average intensity values of the different phases are sufficiently evenly distributed, the discretized energy function becomes submodular. Otherwise, a novel method for minimizing nonsubmodular functions is proposed with particular emphasis on this energy function.