
Showing papers on "Maxima and minima" published in 2007


Journal ArticleDOI
TL;DR: In this paper, the authors present an updated reconstruction of sunspot number over multiple millennia from 14C data by means of a physics-based model, using an updated model of the evolution of the solar open magnetic flux.
Abstract: Aims. Using a reconstruction of sunspot numbers stretching over multiple millennia, we analyze the statistics of the occurrence of grand minima and maxima and set new observational constraints on long-term solar and stellar dynamo models. Methods. We present an updated reconstruction of sunspot number over multiple millennia, from 14C data by means of a physics-based model, using an updated model of the evolution of the solar open magnetic flux. A list of grand minima and maxima of solar activity is presented for the Holocene (since 9500 BC) and the statistics of both the length of individual events as well as the waiting time between them are analyzed. Results. The occurrence of grand minima/maxima is driven not by long-term cyclic variability, but by a stochastic/chaotic process. The waiting time distribution of the occurrence of grand minima/maxima deviates from an exponential distribution, implying that these events tend to cluster together with long event-free periods between the clusters. Two different types of grand minima are observed: short (30–90 years) minima of Maunder type and long (>110 years) minima of Spörer type, implying that a deterministic behaviour of the dynamo during a grand minimum defines its length. The duration of grand maxima follows an exponential distribution, suggesting that the duration of a grand maximum is determined by a random process. Conclusions. These results set new observational constraints upon the long-term behaviour of the solar dynamo.
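The clustering inference rests on a simple statistical signature: for a memoryless (exponential) process the coefficient of variation of waiting times equals 1, while clustered events with long quiet spells push it above 1. A toy sketch with synthetic waiting times (illustrative only, not the 14C series):

```python
import random

def coefficient_of_variation(waits):
    """CV = std / mean. A memoryless (exponential) process has CV = 1;
    CV > 1 indicates that events cluster, with long event-free gaps."""
    n = len(waits)
    mean = sum(waits) / n
    var = sum((w - mean) ** 2 for w in waits) / n
    return var ** 0.5 / mean

random.seed(0)
# Exponential waiting times (no clustering): CV close to 1.
expo = [random.expovariate(1.0) for _ in range(20000)]
# Clustered events: short intra-cluster waits mixed with rare long gaps.
clustered = [random.expovariate(10.0) if random.random() < 0.9
             else random.expovariate(0.05) for _ in range(20000)]
print(round(coefficient_of_variation(expo), 2))       # close to 1.0
print(round(coefficient_of_variation(clustered), 2))  # well above 1
```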

397 citations


Journal ArticleDOI
TL;DR: In this paper, a robust adaptive method is presented that is able to cope with contaminated data, formulated as an iterative re-weighted Kalman filter; annealing is introduced to avoid local minima in the optimization.
Abstract: Vertex fitting frequently has to deal with both mis-associated tracks and mis-measured track errors. A robust, adaptive method is presented that is able to cope with contaminated data. The method is formulated as an iterative re-weighted Kalman filter. Annealing is introduced to avoid local minima in the optimization. For the initialization of the adaptive filter a robust algorithm is presented that turns out to perform well in a wide range of applications. The tuning of the annealing schedule and of the cut-off parameter is described using simulated data from the CMS experiment. Finally, the adaptive property of the method is illustrated in two examples.
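The combination of re-weighting and annealing can be sketched on a toy 1D estimation problem. The `adaptive_mean` function, the Fermi-function weights, and the temperature schedule below are illustrative choices, not the CMS vertex-fit implementation:

```python
import math

def adaptive_mean(xs, sigma=1.0, cutoff=3.0, temps=(64.0, 16.0, 4.0, 1.0)):
    """Toy 1D analogue of an annealed, iteratively re-weighted estimator:
    each datum gets a Fermi-function weight from its standardized residual,
    and the temperature T is lowered on a fixed annealing schedule so that
    outliers are down-weighted softly at first, then firmly."""
    est = sum(xs) / len(xs)          # naive initialization
    for T in temps:
        for _ in range(10):          # re-weighted passes at this temperature
            ws = []
            for x in xs:
                chi2 = ((x - est) / sigma) ** 2
                ws.append(1.0 / (1.0 + math.exp((chi2 - cutoff ** 2) / (2.0 * T))))
            est = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
    return est

data = [0.1, -0.2, 0.05, 0.3, -0.1, 25.0]    # one gross outlier
print(round(adaptive_mean(data), 2))         # ~0.03: the outlier is ignored
```

At high temperature the weight function is nearly flat (all points count), so the fit is insensitive to initialization; as T drops it approaches a hard cut at `cutoff` standard deviations.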

214 citations


Proceedings ArticleDOI
17 Jun 2007
TL;DR: A touch-expand algorithm is proposed for finding a minimum cut on a huge 3D grid using an automatically adjusted band, overcoming the prohibitively high memory cost of graph cuts when computing globally optimal surfaces at high resolution.
Abstract: We propose a global optimization framework for 3D shape reconstruction from sparse noisy 3D measurements frequently encountered in range scanning, sparse feature-based stereo, and shape-from-X. In contrast to earlier local or banded optimization methods for shape fitting, we compute the global optimum in the whole volume, removing dependence on the initial guess and sensitivity to numerous local minima. Our global method is based on two main ideas. First, we suggest a new regularization functional with a data alignment term that maximizes the number of (weakly-oriented) data points contained by a surface while allowing for some measurement errors. Second, we propose a touch-expand algorithm for finding a minimum cut on a huge 3D grid using an automatically adjusted band. This overcomes the prohibitively high memory cost of graph cuts when computing globally optimal surfaces at high resolution. Our results for sparse or incomplete 3D data from laser scanning and passive multi-view stereo are robust to noise, outliers, missing parts, and varying sampling density.

159 citations


Journal ArticleDOI
TL;DR: In this paper, a GA-based inversion of shear-wave velocity and layer thickness while fixing compressional wave velocity and density according to user-defined Poisson's ratios and velocity-density relationship is presented.

147 citations


Journal ArticleDOI
TL;DR: This work introduces a framework for computing statistically optimal estimates of geometric reconstruction problems with a hierarchy of convex relaxations to solve non-convex optimization problems with polynomials and shows how one can detect whether the global optimum is attained at a given relaxation.
Abstract: We introduce a framework for computing statistically optimal estimates of geometric reconstruction problems. While traditional algorithms often suffer from either local minima or non-optimality--or a combination of both--we pursue the goal of achieving global solutions of the statistically optimal cost-function. Our approach is based on a hierarchy of convex relaxations to solve non-convex optimization problems with polynomials. These convex relaxations generate a monotone sequence of lower bounds and we show how one can detect whether the global optimum is attained at a given relaxation. The technique is applied to a number of classical vision problems: triangulation, camera pose, homography estimation and last, but not least, epipolar geometry estimation. Experimental validation on both synthetic and real data is provided. In practice, only a few relaxations are needed for attaining the global optimum.

136 citations


Journal ArticleDOI
TL;DR: In this paper, the replica approach to statistical mechanics of a single classical particle placed in a random N(1)-dimensional Gaussian landscape and confined by a spherically symmetric potential suitably growing at infinity is discussed.
Abstract: We start with a rather detailed, general discussion of recent results of the replica approach to statistical mechanics of a single classical particle placed in a random N(≫1)-dimensional Gaussian landscape and confined by a spherically symmetric potential suitably growing at infinity. Then we employ random matrix methods to calculate the density of stationary points, as well as minima, of the associated energy surface. This is used to show that for generic smooth, concave confining potentials the condition of zero-temperature replica symmetry breaking coincides with the one signaling that both the mean total number of stationary points in the energy landscape and the mean number of minima are exponential in N. For such systems the (annealed) complexity of minima vanishes cubically when approaching the critical confinement, whereas the cumulative annealed complexity vanishes quadratically. Different behaviour reported in our earlier short communication (Fyodorov et al. in JETP Lett. 85:261, 2007) was due to non-analyticity of the hard-wall confinement potential. Finally, for the simplest case of parabolic confinement we investigate how the complexity depends on the index of stationary points. In particular, we show that in the vicinity of critical confinement the saddle-points with a positive annealed complexity must be close to minima, as they must have a vanishing fraction of negative eigenvalues in the Hessian.

133 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used the Nash embedding theorem to obtain consistent unbiased estimators for the true p values based on new results of Taylor and Adler for random fields on manifolds, which replace the Euclidean metric by the variogram.
Abstract: Brain mapping data have been modeled as Gaussian random fields, and local increases in mean are detected by local maxima of a random field of test statistics derived from these data. Accurate p values for local maxima are readily available for isotropic data based on the expected Euler characteristic of the excursion set of the test statistic random field. In this article we give a simple method for dealing with nonisotropic data. Our approach has connections to the model of Sampson and Guttorp for nonisotropy in which there exists an unknown mapping of the support of the data to a space in which the random fields are isotropic. Heuristic justification for our approach comes from the Nash embedding theorem. Formally, we show that our method gives consistent unbiased estimators for the true p values based on new results of Taylor and Adler for random fields on manifolds, which replace the Euclidean metric by the variogram. The results are used to detect gender differences in the cortical thickness of the b...

125 citations


Journal ArticleDOI
TL;DR: Theoretical results ensure that imposing some constraints on the eigenvalues of the covariance matrices of the multivariate normal components leads to a constrained parameter space with no singularities and, at the least, a smaller number of local maxima of the likelihood function.

113 citations


Journal ArticleDOI
TL;DR: In this paper, a response surface model is developed using radial basis functions, producing a model whose objective function values match those of the original system at all sampled data points; interpolation to any other point is easily accomplished and yields a model that represents the system over the entire parameter space.
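A minimal sketch of such an interpolating response surface, with a Gaussian basis and a 1D test objective as illustrative assumptions:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (fine for small systems)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf_surrogate(xs, ys, eps=1.0):
    """Gaussian-RBF response surface: solving sum_j c_j * phi(|x_i - x_j|) = y_i
    makes the model's values match the sampled system at every data point;
    evaluating the same expansion elsewhere interpolates over the whole
    parameter space."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    c = solve([[phi(abs(xi - xj)) for xj in xs] for xi in xs], ys)
    return lambda x: sum(cj * phi(abs(x - xj)) for cj, xj in zip(c, xs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]        # sampled design points
f = lambda x: math.sin(3.0 * x)       # stand-in for an expensive objective
model = rbf_surrogate(xs, [f(x) for x in xs])
print(max(abs(model(x) - f(x)) for x in xs) < 1e-8)  # True: exact at samples
```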

111 citations


Proceedings Article
11 Mar 2007
TL;DR: This paper demonstrates how deterministic annealing can be applied to different SVM formulations of the multiple-instance learning (MIL) problem and proposes a new objective function which, together with the deterministic annealing algorithm, finds better local minima and achieves better performance on a set of benchmark datasets.
Abstract: In this paper we demonstrate how deterministic annealing can be applied to different SVM formulations of the multiple-instance learning (MIL) problem. Our results show that we find better local minima compared to the heuristic methods those problems are usually solved with. However, this does not always translate into a better test error, suggesting an inadequacy of the objective function. Based on this finding, we propose a new objective function which, together with the deterministic annealing algorithm, finds better local minima and achieves better performance on a set of benchmark datasets. Furthermore, the results also show how the structure of MIL datasets influences the performance of MIL algorithms, and we discuss how future benchmark datasets for the MIL problem should be designed.

107 citations


Journal ArticleDOI
TL;DR: A novel, fast, and flexible dual front implementation of active contours, motivated by minimal path techniques and utilizing fast sweeping algorithms, which is easily manipulated to yield minima with variable "degrees" of localness and globalness.
Abstract: Most variational active contour models are designed to find local minima of data-dependent energy functionals with the hope that reasonable initial placement of the active contour will drive it toward a "desirable" local minimum as opposed to an undesirable configuration due to noise or complex image structure. As such, there has been much research into the design of complex region-based energy functionals that are less likely to yield undesirable local minima when compared to simpler edge-based energy functionals whose sensitivity to noise and texture is significantly worse. Unfortunately, most of these more "robust" region-based energy functionals are applicable to a much narrower class of imagery compared to typical edge-based energies due to stronger global assumptions about the underlying image data. Devising new implementation algorithms for active contours that attempt to capture more global minimizers of already proposed image-based energies would allow us to choose an energy that makes sense for a particular class of energy without concern over its sensitivity to local minima. Such implementations have been proposed for capturing global minima. However, sometimes the completely-global minimum is just as undesirable as a minimum that is too local. In this paper, we propose a novel, fast, and flexible dual front implementation of active contours, motivated by minimal path techniques and utilizing fast sweeping algorithms, which is easily manipulated to yield minima with variable "degrees" of localness and globalness. By simply adjusting the size of active regions, the ability to gracefully move from capturing minima that are more local (according to the initial placement of the active contour/surface) to minima that are more global allows this model to more easily obtain "desirable" minimizers (which often are neither the most local nor the most global). 
Experiments on various 2D and 3D images and comparisons with some active contour models and region-growing methods are also given to illustrate the properties of this model and its performance in a variety of segmentation applications.

Journal ArticleDOI
TL;DR: A method that allows for the merger of the good features of sliding-mode control and neural network (NN) design is presented, and it has been proven that the selected cost function has no local minima in controller parameter space.
Abstract: In this paper, a method that allows for the merger of the good features of sliding-mode control and neural network (NN) design is presented. Design is performed by applying an NN to minimize the cost function that is selected to depend on the distance from the sliding-mode manifold, thus providing that the NN controller enforces sliding-mode motion in a closed-loop system. It has been proven that the selected cost function has no local minima in controller parameter space, so under certain conditions, selection of the NN weights guarantees that the global minimum is reached, and then the sliding-mode conditions are satisfied; thus, closed-loop motion is robust against parameter changes and disturbances. For controller design, the system states and the nominal value of the control input matrix are used. The design for both multiple-input-multiple-output and single-input-single-output systems is discussed. Due to the structure of the (M)ADALINE network used in control calculation, the proposed algorithm can also be interpreted as a sliding-mode-based control parameter adaptation scheme. The controller performance is verified by experimental results.

Proceedings ArticleDOI
18 Feb 2007
TL;DR: A gradient descent flow based on a novel energy functional that is capable of producing robust and accurate segmentations of medical images is presented and compared to standard techniques using medical and synthetic images to demonstrate the proposed method's robustness and accuracy.
Abstract: In this paper we present a gradient descent flow based on a novel energy functional that is capable of producing robust and accurate segmentations of medical images. This flow is a hybridization of local geodesic active contours and more global region-based active contours. The combination of these two methods allows curves deforming under this energy to find only significant local minima and delineate object borders despite noise, poor edge information, and heterogeneous intensity profiles. To accomplish this, we construct a cost function that is evaluated along the evolving curve. In this cost, the value at each point on the curve is based on the analysis of interior and exterior means in a local neighborhood around that point. We also demonstrate a novel mathematical derivation used to implement this and other similar flows. Results for this algorithm are compared to standard techniques using medical and synthetic images to demonstrate the proposed method's robustness and accuracy compared to both edge-based and region-based methods alone.

Proceedings ArticleDOI
15 Apr 2007
TL;DR: This paper proves the convergence of an iterative hard thresholding algorithm and shows that the fixed points of that algorithm are local minima of the sparse approximation cost function, which measures both the reconstruction error and the number of elements in the representation.
Abstract: Sparse signal approximations are approximations that use only a small number of elementary waveforms to describe a signal. In this paper we prove the convergence of an iterative hard thresholding algorithm and show that the fixed points of that algorithm are local minima of the sparse approximation cost function, which measures both the reconstruction error and the number of elements in the representation. Simulation results suggest that the algorithm is comparable in performance to a commonly used alternative method.
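The iteration in question can be sketched directly: x is updated as H_K(x + A^T(y - Ax)), where H_K keeps only the K largest-magnitude coefficients. The small dictionary and sparsity level below are illustrative; the scaling keeps the operator norm below 1, as the convergence analysis assumes:

```python
def iht(A, y, K, iters=100):
    """Iterative hard thresholding: x <- H_K(x + A^T (y - A x)).
    Fixed points of this map are local minima of the sparse approximation
    cost (residual energy plus a penalty on the number of elements)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [x[j] + sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        keep = set(sorted(range(n), key=lambda j: -abs(g[j]))[:K])
        x = [g[j] if j in keep else 0.0 for j in range(n)]
    return x

# Overcomplete dictionary, scaled so its operator norm is below 1.
A = [[0.5, 0.0, 0.0, 0.0, 0.5, 0.0],
     [0.0, 0.5, 0.0, 0.0, 0.0, 0.5],
     [0.0, 0.0, 0.5, 0.0, 0.5, 0.0],
     [0.0, 0.0, 0.0, 0.5, 0.0, 0.5]]
x_true = [1.0, 0.0, -0.7, 0.0, 0.0, 0.0]          # 2-sparse signal
y = [sum(A[i][j] * x_true[j] for j in range(6)) for i in range(4)]
x = iht(A, y, K=2)
print([round(v, 4) for v in x])  # [1.0, 0.0, -0.7, 0.0, 0.0, 0.0]
```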

Proceedings ArticleDOI
01 Dec 2007
TL;DR: If there is sufficient time-scale separation between the fast dither and slow gradient motions of the leader vehicle, the followers only respond to the gradient motion, and filter out the dither component, while keeping the prescribed formation.
Abstract: We consider a gradient climbing problem where the objective is to steer a group of vehicles to the extrema of an unknown scalar field distribution while keeping a prescribed formation. We address this task by developing a scheme in which the leader performs extremum seeking for the minima or maxima of the field, and other vehicles follow according to passivity-based coordination rules. The extremum-seeking approach generates approximate gradients of the field locally by "dithering" sensor positions. We show that if there is sufficient time-scale separation between the fast dither and slow gradient motions of the leader vehicle, the followers only respond to the gradient motion, and filter out the dither component, while keeping the prescribed formation.
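The dither-and-demodulate idea behind the leader's extremum seeking can be sketched in one dimension. The gains, dither amplitude, and quadratic field below are illustrative choices; the point is the time-scale separation between the fast dither and the slow gradient motion:

```python
import math

def extremum_seek(field, x0, a=0.05, omega=40.0, k=0.5, wf=5.0, dt=0.001, T=40.0):
    """1D extremum-seeking sketch: dither the probe position with a fast
    sinusoid, demodulate the measured field against the same sinusoid to
    estimate the local gradient, and descend it slowly. Time-scale
    separation (omega >> wf >> k) is what makes the estimate clean."""
    x_hat, grad_est = x0, 0.0
    for i in range(int(T / dt)):
        t = i * dt
        s = math.sin(omega * t)
        J = field(x_hat + a * s)                   # measurement at dithered position
        demod = (2.0 / a) * J * s                  # ~ dJ/dx plus fast ripple
        grad_est += dt * wf * (demod - grad_est)   # low-pass filter removes ripple
        x_hat -= dt * k * grad_est                 # slow descent toward the minimum
    return x_hat

# Unknown quadratic field with its minimum at x = 1.3 (illustrative)
print(round(extremum_seek(lambda x: (x - 1.3) ** 2, 0.0), 2))  # ~1.3
```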

Journal ArticleDOI
TL;DR: In this article, a least-squares optimization method for solving the inverse problem of determining the crustal velocity and intrinsic attenuation properties of sedimentary valleys in earthquake-prone regions is presented.
Abstract: We present a least-squares optimization method for solving the nonlinear full waveform inverse problem of determining the crustal velocity and intrinsic attenuation properties of sedimentary valleys in earthquake-prone regions. Given a known earthquake source and a set of seismograms generated by the source, the inverse problem is to reconstruct the anelastic properties of a heterogeneous medium with possibly discontinuous wave velocities. The inverse problem is formulated as a constrained optimization problem, where the constraints are the partial and ordinary differential equations governing the anelastic wave propagation from the source to the receivers in the time domain. This leads to a variational formulation in terms of the material model plus the state variables and their adjoints. We employ a wave propagation model in which the intrinsic energy-dissipating nature of the soil medium is modeled by a set of standard linear solids. The least-squares optimization approach to inverse wave propagation presents the well-known difficulties of ill posedness and multiple minima. To overcome ill posedness, we include a total variation regularization functional in the objective function, which annihilates highly oscillatory material property components while preserving discontinuities in the medium. To treat multiple minima, we use a multilevel algorithm that solves a sequence of subproblems on increasingly finer grids with increasingly higher frequency source components to remain within the basin of attraction of the global minimum. We illustrate the methodology with high-resolution inversions for two-dimensional sedimentary models of the San Fernando Valley, under SH-wave excitation. We perform inversions for both the seismic velocity and the intrinsic attenuation using synthetic waveforms at the observer locations as pseudo-observed data.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new optimization technique by modifying a chaos optimization algorithm (COA) based on the fractal theory, in which the chaotic property is used to determine the initial choice of the optimization parameters both in the starting step and in the mutations applied when a convergence to local minima occurred.

Journal ArticleDOI
TL;DR: In this article, the hybrid input-output algorithm was improved by a lower-dimensional subspace saddle-point optimization, which can produce images at resolutions beyond the capabilities of lens-based optical methods.
Abstract: Iterative algorithms with feedback are among the most powerful and versatile optimization methods for phase retrieval. Among these, the hybrid input-output algorithm has demonstrated practical solutions to giga-element nonlinear phase retrieval problems, escaping local minima and producing images at resolutions beyond the capabilities of lens-based optical methods. Here the input-output iteration is improved by a lower-dimensional subspace saddle-point optimization.

Journal ArticleDOI
TL;DR: This paper presents a practical approach for the characterization of critical points on conical intersection seams as either local minima or saddle points using second-derivative technology and illustrates the latter idea for the cyclopentadienyl radical.
Abstract: In this paper, we present a practical approach for the characterization of critical points on conical intersection seams as either local minima or saddle points using second-derivative technology. ...

Journal ArticleDOI
TL;DR: In this article, a generalized Takahashi's existence theorem is given and equivalence relations between generalized Caristi's fixed point theorems and generalized Ekeland's variational principles are established.
Abstract: In this paper, we first give a generalized Takahashi's existence theorem. From the existence theorem, we establish some equivalence relations between generalized Caristi's fixed point theorems and generalized Ekeland's variational principles. Some applications to the existence theorems of weak sharp minima and global error bounds for lower semicontinuous functions are also given.

Book ChapterDOI
18 Nov 2007
TL;DR: This paper presents a practical method for obtaining the global minimum to the least-squares (L2) triangulation problem and proposes a simpler branch-and-bound algorithm to approach the global estimate.
Abstract: This paper presents a practical method for obtaining the global minimum to the least-squares (L2) triangulation problem. Although optimal algorithms for the triangulation problem under the L∞-norm have been given, finding an optimal solution to the L2 triangulation problem is difficult. This is because the cost function under the L2-norm is not convex. Since there are no ideal techniques for initialization, traditional iterative methods that are sensitive to initialization may be trapped in local minima. A branch-and-bound algorithm was introduced in [1] for finding the optimal solution, and it theoretically guarantees the global optimality within a chosen tolerance. However, this algorithm is complicated and too slow for large-scale use. In this paper, we propose a simpler branch-and-bound algorithm to approach the global estimate. Linear programming algorithms plus iterative techniques are all we need in implementing our method. Experiments on a large data set of 277,887 points show that it only takes on average 0.02s for each triangulation problem.
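The branch-and-bound idea (bound each region from below, prune regions that cannot beat the incumbent, certify optimality within a tolerance) can be sketched generically in one dimension. The sketch uses a Lipschitz lower bound for simplicity, whereas the paper derives its bounds from linear programming relaxations of the L2 cost:

```python
import heapq

def branch_and_bound(f, lo, hi, lip, tol=1e-4):
    """Best-first 1D branch-and-bound: split intervals, prune any interval
    whose Lipschitz lower bound f(center) - lip*halfwidth cannot beat the
    incumbent by more than tol. On exit, best is within tol of the global
    minimum over [lo, hi]."""
    best_x = (lo + hi) / 2.0
    best = f(best_x)
    heap = [(best - lip * (hi - lo) / 2.0, lo, hi)]
    while heap:
        bound, a, b = heapq.heappop(heap)
        if bound > best - tol:          # no remaining interval can improve: done
            break
        mid = (a + b) / 2.0
        for aa, bb in ((a, mid), (mid, b)):
            c = (aa + bb) / 2.0
            v = f(c)
            if v < best:
                best, best_x = v, c
            lb = v - lip * (bb - aa) / 2.0
            if lb < best - tol:
                heapq.heappush(heap, (lb, aa, bb))
    return best_x, best

# Non-convex cost: local minimum near x = 2.0, global minimum near x = -2.03
f = lambda x: (x * x - 4.0) ** 2 + x
x_opt, f_opt = branch_and_bound(f, -3.0, 3.0, lip=160.0)
print(round(x_opt, 2), round(f_opt, 2))  # -2.03 -2.02
```

An iterative method started near x = 2 would stop at the local minimum with cost about +2; the certificate-based search finds the global one regardless of initialization.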

Journal ArticleDOI
Changchun Yin1, Greg Hodges1
TL;DR: In this paper, a simulated annealing (SA) algorithm is proposed for airborne EM inversion, which runs both downhill and uphill searches, allowing the search to escape local minima and converge to a global minimum.
Abstract: The traditional algorithms for airborne electromagnetic (EM) inversion, e.g., the Marquardt-Levenberg method, generally run only a downhill search. Consequently, the model solutions are strongly dependent on the starting model and are easily trapped in local minima. Simulated annealing (SA) starts from the Boltzmann distribution and runs both downhill and uphill searches, allowing the search to escape local minima and converge to a global minimum. In the SA process, the calculation of Jacobian derivatives can be avoided because no preferred searching direction is required as in the case of the traditional algorithms. We apply SA technology for airborne EM inversion by comparing the inversion with a thermodynamic process, and we discuss specifically the SA procedure with respect to model configuration, random walk for model updates, objective function, and annealing schedule. We demonstrate the SA flexibility for starting models by allowing the model parameters to vary in a large range (far away from the true model). Further, we choose a temperature-dependent random walk for model updates and an exponential cooling schedule for the SA searching process. The initial temperature for the SA cooling scheme is chosen differently for different model parameters according to their resolvabilities. We examine the effectiveness of the algorithm for airborne EM by inverting both theoretical and survey data and by comparing the results with those from the traditional algorithms.
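The downhill-plus-uphill search can be sketched with a Metropolis acceptance rule and an exponential cooling schedule. The 1D multimodal cost and all tuning constants below are illustrative, not the airborne EM setup:

```python
import math, random

def simulated_annealing(cost, x0, step=0.5, t0=5.0, cooling=0.999, iters=20000):
    """Metropolis-style SA sketch: downhill moves are always accepted;
    uphill moves are accepted with probability exp(-dE/T), which lets the
    search escape local minima. T follows an exponential cooling schedule."""
    random.seed(1)                               # fixed seed for reproducibility
    x, e = x0, cost(x0)
    best_x, best_e = x, e
    T = t0
    for _ in range(iters):
        cand = x + random.gauss(0.0, step)       # random-walk model update
        de = cost(cand) - e
        if de < 0.0 or random.random() < math.exp(-de / T):
            x, e = cand, e + de
            if e < best_e:
                best_x, best_e = x, e
        T *= cooling                             # exponential cooling
    return best_x, best_e

# 1D multimodal cost: many local minima, global minimum at x = 0.
cost = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
x_best, e_best = simulated_annealing(cost, x0=4.3)
print(e_best < 5.0)   # True: the search escaped the starting basin near x = 4
```

A purely downhill search started at x = 4.3 would stop in the local well near x = 4 (cost 16); the uphill acceptances during the hot phase let the chain hop between basins.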

Journal ArticleDOI
TL;DR: This paper investigates the performance of genetic optimization in a nonlinear system for active noise control based on Volterra filters and shows that a simple GA is able to find satisfactory solutions even in the presence of nonlinearities in the secondary path.
Abstract: This paper investigates the performance of genetic optimization in a nonlinear system for active noise control based on Volterra filters. While standard Filtered-X algorithms may converge to local minima, genetic algorithms (GAs) may handle this problem efficiently. In addition, this class of algorithms does not require the identification of the secondary paths. This is a key advantage of the proposed approach. Computer simulations show that a simple GA is able to find satisfactory solutions even in the presence of nonlinearities in the secondary path. The results are more accurate than using the linear techniques and the nonlinear systems based on classical LMS algorithms.

Journal ArticleDOI
TL;DR: It is proven that the generated sequence of approximate minima converges to the exact one and the convergence rate with respect to the number of degrees of freedom is optimal in that it coincides with the one of nonlinear or adaptive approximation.
Abstract: We consider obstacle problems where a quadratic functional associated with the Laplacian is minimized in the set of functions above a possibly discontinuous and thin but piecewise affine obstacle. In order to approximate minimum point and value, we propose an adaptive algorithm that relies on minima with respect to admissible linear finite element functions and on an a posteriori estimator for the error in the minimum value. It is proven that the generated sequence of approximate minima converges to the exact one. Furthermore, our numerical results in two and three dimensions indicate that the convergence rate with respect to the number of degrees of freedom is optimal in that it coincides with the one of nonlinear or adaptive approximation.

Journal ArticleDOI
TL;DR: In this paper, a quantum control landscape is defined as the physical objective as a function of the control variables and three practically important properties of the objective function are found: (a) the absence of local maxima or minima; (b) the existence of multi-dimensional sub-manifolds of optimal solutions corresponding to the global maximum and minimum; and (c) the connectivity of each level set.
Abstract: A quantum control landscape is defined as the physical objective as a function of the control variables. In this paper the control landscapes for two-level open quantum systems, whose evolution is described by general completely positive trace preserving maps (i.e., Kraus maps), are investigated in detail. The objective function, which is the expectation value of a target system operator, is defined on the Stiefel manifold representing the space of Kraus maps. Three practically important properties of the objective function are found: (a) the absence of local maxima or minima (i.e., false traps); (b) the existence of multi-dimensional sub-manifolds of optimal solutions corresponding to the global maximum and minimum; and (c) the connectivity of each level set. All of the critical values and their associated critical sub-manifolds are explicitly found for any initial system state. Away from the absolute extrema there are no local maxima or minima, and only saddles may exist, whose number and the explicit structure of the corresponding critical sub-manifolds are determined by the initial system state. There are no saddles for pure initial states, one saddle for a completely mixed initial state, and two saddles for other initial states. In general, the landscape analysis of critical points and optimal manifolds is relevant to the problem of explaining the relative ease of obtaining good optimal control outcomes in the laboratory, even in the presence of the environment.

Journal ArticleDOI
01 Jan 2007
TL;DR: It is shown that triangulating a set of points with elevations such that the number of local minima of the resulting terrain is minimized is NP-hard for degenerate point sets.
Abstract: For hydrologic applications, terrain models should have few local minima, and drainage lines should coincide with edges. We show that triangulating a set of points with elevations such that the number of local minima of the resulting terrain is minimized is NP-hard for degenerate point sets. The same result applies when there are no degeneracies for higher-order Delaunay triangulations. Two heuristics are presented to reduce the number of local minima for higher-order Delaunay triangulations, which start out with the Delaunay triangulation. We give efficient algorithms for their implementation, and test on real-world data how well they perform. We also study another desirable drainage characteristic, few valley components, and how to obtain it for higher-order Delaunay triangulations. This gives rise to a third heuristic. Tables and visualizations show how the heuristics perform for the drainage characteristics on real-world data.
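The quantity being minimized, the number of local minima of a terrain, is easy to state concretely. A sketch on a raster grid (the paper works on triangulated terrains; the 4-neighbour grid below is only to make the notion concrete):

```python
def local_minima(grid):
    """Count strict local minima (pits) of a gridded terrain under
    4-neighbour adjacency; for hydrology, each pit traps drainage."""
    rows, cols = len(grid), len(grid[0])
    count = 0
    for i in range(rows):
        for j in range(cols):
            nbrs = [grid[i + di][j + dj]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + di < rows and 0 <= j + dj < cols]
            if all(grid[i][j] < v for v in nbrs):
                count += 1
    return count

terrain = [[5, 4, 5, 6],
           [4, 1, 4, 2],   # pits at the cells holding 1 and 2
           [5, 4, 5, 6]]
print(local_minima(terrain))  # 2
```

On a triangulation the same count depends on which edges are present, which is what the heuristics above exploit: flipping to a different (higher-order Delaunay) triangulation can remove a pit by giving it a descending edge.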

Proceedings ArticleDOI
17 Jun 2007
TL;DR: A novel progressive finite Newton optimization scheme for the non-rigid surface detection problem, which is reduced to only solving a set of linear equations, and which takes advantage of the rank information of detected correspondences.
Abstract: Detecting nonrigid surfaces is an interesting research problem for computer vision and image analysis. One important challenge of nonrigid surface detection is how to register a nonrigid surface mesh having a large number of free deformation parameters. This is particularly significant for detecting nonrigid surfaces from noisy observations. Nonrigid surface detection is usually regarded as a robust parameter estimation problem, which is typically solved iteratively from a good initialization in order to avoid local minima. In this paper, we propose a novel progressive finite Newton optimization scheme for the non-rigid surface detection problem, which is reduced to only solving a set of linear equations. The key to our approach is to formulate the nonrigid surface detection as an unconstrained quadratic optimization problem which has a closed-form solution for a given set of observations. Moreover, we employ a progressive active-set selection scheme, which takes advantage of the rank information of detected correspondences. We have conducted extensive experiments for performance evaluation on various environments, whose promising results show that the proposed algorithm is more efficient and effective than the existing iterative methods.

Proceedings ArticleDOI
TL;DR: In this paper, the authors proposed to compute the gradient and objective function locally, within several or even one depth step at a time, to reduce the possibility of falling into a false minimum and significantly reduce the number of iterations required in the optimization.
Abstract: Current wave-equation tomography techniques based on migrated image differences, such as those observed in 4D data sets, use the image difference as a measure of velocity misfit. Computation of the objective function gradient is accomplished by the adjoint application of the derivative of the imaging operator to this image difference. In all techniques developed to date this process is carried out by computing the gradient over a relatively large number of depth steps, and then optimizing the objective function globally over the entire range. In this abstract, we extend this concept to compute the gradient and objective function locally, within several or even one depth step at a time. In principle, for objective functions that are sharply peaked around the global minimum, and have other minima elsewhere, this localization should reduce the possibility of falling into a false minimum, and significantly reduce the number of iterations required in the optimization. In addition, since the velocity is optimized in depth as the extrapolation proceeds, the method is significantly more immune to cycle-skipping at higher frequencies than global methods.

Journal ArticleDOI
TL;DR: Comparisons with several representative existing algorithms show that the proposed nonparametric clustering algorithm can robustly identify major clusters even when there are complex configurations and/or large overlaps.