
Showing papers on "Maxima and minima" published in 2006


Journal ArticleDOI
TL;DR: The sequential tree-reweighted message passing (TRW-S) algorithm as discussed by the authors is a modification of tree-reweighted max-product message passing (TRW), which was proposed by Wainwright et al.
Abstract: Algorithms for discrete energy minimization are of fundamental importance in computer vision. In this paper, we focus on the recent technique proposed by Wainwright et al. (Nov. 2005): tree-reweighted max-product message passing (TRW). It was inspired by the problem of maximizing a lower bound on the energy. However, the algorithm is not guaranteed to increase this bound - it may actually go down. In addition, TRW does not always converge. We develop a modification of this algorithm which we call sequential tree-reweighted message passing. Its main property is that the bound is guaranteed not to decrease. We also give a weak tree agreement condition which characterizes local maxima of the bound with respect to TRW algorithms. We prove that our algorithm has a limit point that achieves weak tree agreement. Finally, we show that our algorithm requires half as much memory as traditional message passing approaches. Experimental results demonstrate that on certain synthetic and real problems, our algorithm outperforms both ordinary belief propagation and the tree-reweighted algorithm in (M. J. Wainwright, et al., Nov. 2005). In addition, on stereo problems with Potts interactions, we obtain a lower energy than graph cuts.

1,116 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider some elementary functions of the components of a regularly varying random vector, such as linear combinations, products, minima, maxima, order statistics, and powers, and give conditions under which these functions are again regularly varying.
Abstract: We consider some elementary functions of the components of a regularly varying random vector such as linear combinations, products, minima, maxima, order statistics, powers. We give conditions under which these functions are again regularly varying, possibly with a different index.

414 citations


Journal ArticleDOI
TL;DR: A survey of regularity results for both minima of variational integrals and solutions to non-linear elliptic, and sometimes parabolic, systems of partial differential equations can be found in this article.
Abstract: I am presenting a survey of regularity results for both minima of variational integrals, and solutions to non-linear elliptic, and sometimes parabolic, systems of partial differential equations. I will try to take the reader to the Dark Side...

410 citations


Journal ArticleDOI
TL;DR: This paper shows that the n-dimensional (n = 4∼30) Rosenbrock function has 2 minima, and analysis is proposed to verify this. It also demonstrates that one of the "local minima" for the 20-variable Rosenbrock function found by Deb might not in fact be a local minimum.
Abstract: The Rosenbrock function is a well-known benchmark for numerical optimization problems, which is frequently used to assess the performance of Evolutionary Algorithms. The classical Rosenbrock function, which is a two-dimensional unimodal function, has been extended to higher dimensions in recent years. Many researchers take the high-dimensional Rosenbrock function as a unimodal function by instinct. In 2001 and 2002, Hansen and Deb found that the Rosenbrock function is not a unimodal function for higher dimensions although no theoretical analysis was provided. This paper shows that the n-dimensional (n = 4∼30) Rosenbrock function has 2 minima, and analysis is proposed to verify this. The local minima in some cases are presented. In addition, this paper demonstrates that one of the "local minima" for the 20-variable Rosenbrock function found by Deb might not in fact be a local minimum.

285 citations
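For reference, the commonly used n-dimensional extension (a chained sum of the classical two-dimensional terms; the exact variant analyzed in the paper may differ) and its gradient can be written as:

```python
def rosenbrock(x):
    """Chained n-dimensional Rosenbrock: sum of 100*(x[i+1]-x[i]^2)^2 + (1-x[i])^2."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def rosenbrock_grad(x):
    """Analytic gradient; each x[i] appears in at most two chained terms."""
    n = len(x)
    g = [0.0] * n
    for i in range(n - 1):
        g[i] += -400.0 * x[i] * (x[i + 1] - x[i] ** 2) - 2.0 * (1.0 - x[i])
        g[i + 1] += 200.0 * (x[i + 1] - x[i] ** 2)
    return g
```

The global minimum is at (1, ..., 1) with value 0 and zero gradient; the second, local minimum reported in the paper for n = 4∼30 lies near x1 ≈ -1 with the remaining coordinates close to 1.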


Journal ArticleDOI
TL;DR: An exact and parameter-free algorithm to build scale-sets image descriptions whose sections constitute a monotone sequence of upward global minima of a multi-scale energy, which is called the “scale climbing” algorithm is introduced.
Abstract: This paper introduces a multi-scale theory of piecewise image modelling, called the scale-sets theory, which can be regarded as a region-oriented scale-space theory. The first part of the paper studies the general structure of a geometrically unbiased region-oriented multi-scale image description and introduces the scale-sets representation, a representation which makes it possible to handle such a description exactly. The second part of the paper deals with the way scale-sets image analyses can be built according to an energy minimization principle. We consider a rather general formulation of the partitioning problem which involves minimizing a two-term energy of the form λC + D, where D is a goodness-of-fit term and C is a regularization term. We describe the way such energies arise from basic principles of approximate modelling and we relate them to the operational rate/distortion problems involved in lossy compression. We then show that an important subset of these energies constitutes a class of multi-scale energies, in that the minimal cut of a hierarchy gets coarser and coarser as the parameter λ increases. This allows us to devise a fast dynamic-programming procedure to find the complete scale-sets representation of this family of minimal cuts. Considering then the construction of the hierarchy from which the minimal cuts are extracted, we end up with an exact and parameter-free algorithm to build scale-sets image descriptions whose sections constitute a monotone sequence of upward global minima of a multi-scale energy, which is called the "scale climbing" algorithm. This algorithm can be viewed as a continuation method along the scale dimension or as a minimum pursuit along the operational rate/distortion curve. Furthermore, the solution verifies a linear scale invariance property which makes it possible to postpone the tuning of the scale parameter entirely to a subsequent stage. For computational reasons, the scale climbing algorithm is approximated by a pair-wise region merging scheme; however, the principal properties of the solutions are kept. Some results obtained with Mumford-Shah's piecewise constant model and a variant are provided, and different applications of the proposed multi-scale analyses are finally sketched.

238 citations


Journal ArticleDOI
TL;DR: In this article, a discrete path sampling approach is used to obtain phenomenological two-state rates for sets of local minima sharing a particular structural motif. The authors also consider the transition states that link individual local minima and evaluate rate constants for the corresponding elementary rearrangements.
Abstract: The stationary points of a potential energy surface provide a convenient framework for coarse-graining calculations of thermodynamics and kinetics. Thermodynamic properties can be extracted from a database of local minima using the superposition approach, where the total partition function is written as a sum over the contributions from each minimum. To analyse kinetics, we must also consider the transition states that link individual local minima, and evaluate rate constants for the corresponding elementary rearrangements. For small molecules the assignment of separate thermodynamic quantities, such as free energies, to individual isomers, and the notion of isomerisation rates between these structures, is usually straightforward. However, for larger systems the experimental states of interest generally correspond to sets of local minima with some common feature, such as a particular structural motif. This review focuses upon the discrete path sampling approach to obtaining phenomenological two-state rate...

195 citations


Journal ArticleDOI
TL;DR: Stepping between the local minima of V provides powerful methods for locating the global potential energy minimum and for calculating global thermodynamic properties; when the transition states that link local minima are also sampled, the authors can exploit statistical rate theory to obtain insight into global dynamics and rare events.
Abstract: Familiar concepts for small molecules may require reinterpretation for larger systems. For example, rearrangements between geometrical isomers are usually considered in terms of transitions between the corresponding local minima on the underlying potential energy surface, V. However, transitions between bulk phases such as solid and liquid, or between the denatured and native states of a protein, are normally addressed in terms of free energy minima. To reestablish a connection with the potential energy surface we must think in terms of representative samples of local minima of V, from which a free energy surface is projected by averaging over most of the coordinates. The present contribution outlines how this connection can be developed into a tool for quantitative calculations. In particular, stepping between the local minima of V provides powerful methods for locating the global potential energy minimum, and for calculating global thermodynamic properties. When the transition states that link local minima are also sampled we can exploit statistical rate theory to obtain insight into global dynamics and rare events. Visualizing the potential energy landscape helps to explain how the network of local minima and transition states determines properties such as heat capacity features, which signify transitions between free energy minima. The organization of the landscape also reveals how certain systems can reliably locate particular structures on the experimental time scale from among an exponentially large number of local minima. Such directed searches not only enable proteins to overcome Levinthal's paradox but may also underlie the formation of "magic numbers" in molecular beams, the self-assembly of macromolecular structures, and crystallization.

190 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the Hausdorff dimension of the singular set of minima of general variational integrals is always strictly less than n, where the integrand F is suitably convex with respect to Dv and Hölder continuous in (x, v).
Abstract: In this paper we provide upper bounds for the Hausdorff dimension of the singular set of minima of general variational integrals ∫ F(x, v, Dv) dx, where F is suitably convex with respect to Dv and Hölder continuous with respect to (x, v). In particular, we prove that the Hausdorff dimension of the singular set is always strictly less than n.

175 citations


Proceedings ArticleDOI
25 Jun 2006
TL;DR: This paper proposes to use a global optimization technique known as continuation to alleviate the non-convexity of the S3VM optimization problem, whose many local minima often result in suboptimal performance.
Abstract: Semi-Supervised Support Vector Machines (S3VMs) are an appealing method for using unlabeled data in classification: their objective function favors decision boundaries which do not cut clusters. However their main problem is that the optimization problem is non-convex and has many local minima, which often results in suboptimal performances. In this paper we propose to use a global optimization technique known as continuation to alleviate this problem. Compared to other algorithms minimizing the same objective function, our continuation method often leads to lower test errors.

173 citations
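Continuation (graduated optimization) is easy to illustrate away from the S3VM setting: start from an easy convex surrogate and gradually morph it into the hard non-convex target, warm-starting gradient descent at each stage. The 1-D objective, blending scheme, and schedule below are illustrative only, not the paper's formulation:

```python
def target(x):
    """Non-convex toy objective: two basins, global minimum near x = -1.04."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def target_grad(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

def continuation_minimize(x=0.0, stages=21, steps=400, lr=0.01):
    for s in range(stages):
        gamma = s / (stages - 1)   # 0 = fully convex surrogate, 1 = target
        for _ in range(steps):
            # blended gradient: (1 - gamma) * d/dx[x^2] + gamma * target'
            g = (1.0 - gamma) * 2.0 * x + gamma * target_grad(x)
            x -= lr * g
    return x
```

Starting at x = 0, the tracked minimum of the blended objective drifts continuously into the deeper basin, ending near the global minimum rather than the shallower local one near x ≈ +0.96.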


Journal ArticleDOI
TL;DR: Investigations made in this paper help to better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
Abstract: This paper investigates new learning algorithms (LF I and LF II) based on Lyapunov function for the training of feedforward neural networks. It is observed that such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, where the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, has been introduced with an aim to avoid local minima. This modification also helps in improving the convergence speed in some cases. Conditions for achieving the global minimum for these kinds of algorithms have been studied in detail. The performances of the proposed algorithms are compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. It is found that the proposed algorithms (LF I and II) are much faster in convergence than the other two algorithms in attaining the same accuracy. Finally, the comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations made in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.

164 citations
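The core idea, treating the training error as a Lyapunov function V and adapting the step size so that V cannot increase, can be sketched with a backtracking safeguard. This is only the descent-guarantee flavor on a toy objective, not the paper's LF I/II update rules:

```python
def lyapunov_descent(loss, grad, x, lr=1.0, iters=50):
    """Gradient descent whose step is shrunk until the loss (our Lyapunov
    function V) is certified to decrease via an Armijo-style condition."""
    hist = [loss(x)]
    for _ in range(iters):
        g = grad(x)
        gg = sum(gi * gi for gi in g)
        if gg < 1e-18:       # stationary point: V can no longer decrease
            break
        step = lr
        # halve the step until V drops by at least 0.5 * step * |g|^2
        while step > 1e-12 and \
                loss([xi - step * gi for xi, gi in zip(x, g)]) > hist[-1] - 0.5 * step * gg:
            step *= 0.5
        x = [xi - step * gi for xi, gi in zip(x, g)]
        hist.append(loss(x))
    return x, hist
```

On a quadratic loss the recorded V-trajectory is nonincreasing by construction, which is the Lyapunov-style guarantee the paper's adaptive learning rate provides for network training.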


Journal ArticleDOI
01 Jul 2006
TL;DR: A batch variant of NG which shows much faster convergence and can be interpreted as an optimization of the cost function by the Newton method, and a variant for non-vectorial proximity data can be introduced.
Abstract: Neural Gas (NG) constitutes a very robust clustering algorithm for Euclidean data; it does not suffer from the problem of local minima, like simple vector quantization, or from topological restrictions, like the self-organizing map. Based on the cost function of NG, we introduce a batch variant of NG which shows much faster convergence and which can be interpreted as an optimization of the cost function by the Newton method. This formulation has the additional benefit that, based on the notion of the generalized median in analogy to Median SOM, a variant for non-vectorial proximity data can be introduced. We prove convergence of the batch and median versions of NG, SOM, and k-means in a unified formulation, and we investigate the behavior of the algorithms in several experiments.
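A minimal batch-NG iteration for 1-D data: prototypes are updated as rank-weighted averages of all samples, with the neighborhood range annealed over epochs. The schedule, deterministic initialization, and one-dimensional setting are our simplifications of the method described above:

```python
import math

def batch_neural_gas(data, k=2, lam0=2.0, epochs=30):
    """Batch Neural Gas sketch for 1-D data: each prototype becomes a
    rank-weighted mean of all samples, with weight exp(-rank/lambda)."""
    protos = [float(v) for v in data[:k]]   # deterministic init for the demo
    for t in range(epochs):
        # anneal the neighborhood range from lam0 down to 0.01
        lam = lam0 * (0.01 / lam0) ** (t / (epochs - 1))
        num, den = [0.0] * k, [0.0] * k
        for x in data:
            # rank prototypes by distance to the sample
            order = sorted(range(k), key=lambda j: abs(x - protos[j]))
            for rank, j in enumerate(order):
                h = math.exp(-rank / lam)
                num[j] += h * x
                den[j] += h
        protos = [num[j] / den[j] for j in range(k)]
    return sorted(protos)
```

Even when both prototypes start inside the same cluster, the wide early neighborhood pulls them apart and they end near the two cluster means, which is the robustness-to-local-minima property the abstract describes.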

Journal ArticleDOI
TL;DR: Applications of this work range from coherent remeshing of geometry with respect to the symmetries of a shape to geometric compression, intelligent mesh editing, and automatic instantiation.
Abstract: We propose an automatic method for finding symmetries of 3D shapes, that is, isometric transforms which leave a shape globally unchanged. These symmetries are deterministically found through the use of an intermediate quantity: the generalized moments. By examining the extrema and spherical harmonic coefficients of these moments, we recover the parameters of the symmetries of the shape. The computation for large composite models is made efficient by using this information in an incremental algorithm capable of recovering the symmetries of a whole shape using the symmetries of its subparts. Applications of this work range from coherent remeshing of geometry with respect to the symmetries of a shape to geometric compression, intelligent mesh editing, and automatic instantiation.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the statistical properties of vacua and inflationary trajectories associated with a random multifield potential and show that if the cross-couplings (off-diagonal terms) are of the same order as the self-coupplings (diagonal term), essentially all extrema are saddles, and the number of minima is effectively zero.
Abstract: We consider the statistical properties of vacua and inflationary trajectories associated with a random multifield potential. Our underlying motivation is the string landscape, but our calculations apply to general potentials. Using random matrix theory, we analyse the Hessian matrices associated with the extrema of this potential. These potentials generically have a vast number of extrema. We show that if the cross-couplings (off-diagonal terms) are of the same order as the self-couplings (diagonal terms), essentially all extrema are saddles, and the number of minima is effectively zero. Avoiding this requires the same separation of scales as is needed to ensure that Newton's constant is stable against radiative corrections in a string landscape. Using the central limit theorem we find that even if the number of extrema is enormous, the typical distance between extrema is still substantial—with challenging implications for inflationary models that depend on the existence of a complicated path inside the landscape.
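The headline effect is easy to reproduce numerically: draw random symmetric "Hessians" with fixed-scale self-couplings and vary the cross-coupling scale, then count how often the matrix is positive definite (i.e., the extremum is a minimum). The construction below is a toy illustration, not the paper's random-matrix analysis:

```python
import random

def is_positive_definite(a):
    """Cholesky factorization test, stdlib only: succeeds iff a is PD."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = a[i][j] - sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                if s <= 0.0:
                    return False
                l[i][i] = s ** 0.5
            else:
                l[i][j] = s / l[j][j]
    return True

def minima_fraction(n=8, off_scale=1.0, trials=300, seed=1):
    """Fraction of random symmetric matrices (stable unit self-couplings,
    Gaussian cross-couplings of scale off_scale) that are minima."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        h = [[0.0] * n for _ in range(n)]
        for i in range(n):
            h[i][i] = 1.0 + rng.gauss(0.0, 0.1)       # self-couplings
            for j in range(i):
                h[i][j] = h[j][i] = rng.gauss(0.0, off_scale)  # cross-couplings
        hits += is_positive_definite(h)
    return hits / trials
```

With a strong separation of scales (small `off_scale`) almost every extremum is a minimum; when cross-couplings are of the same order as the self-couplings, the eigenvalue spread swamps the diagonal and minima essentially vanish, mirroring the paper's conclusion.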

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new method of analysis based on determining local minimum and maximum points of the fluctuating scalar field via gradient trajectories starting from every grid point in the directions of ascending and descending scalar gradients.
Abstract: In order to extract small-scale statistical information from passive scalar fields obtained by direct numerical simulation (DNS) a new method of analysis is introduced. It consists of determining local minimum and maximum points of the fluctuating scalar field via gradient trajectories starting from every grid point in the directions of ascending and descending scalar gradients. The ensemble of grid cells from which the same pair of extremal points is reached determines a spatial region which is called a 'dissipation element'. This region may be highly convoluted but on average it has an elongated shape with, on average, a nearly constant diameter of a few Kolmogorov scales and a variable length that has the mean of a Taylor scale. We parameterize the geometry of these elements by the linear distance between their extremal points and their scalar structure by the absolute value of the scalar difference at these points. The joint p.d.f. of these two parameters contains most of the information needed to reconstruct the statistics of the scalar field. It is decomposed into a marginal p.d.f. of the linear distance and a conditional p.d.f. of the scalar difference. It is found that the conditional mean of the scalar difference follows the 1/3 inertial-range Kolmogorov scaling over a large range of length-scales even for the relatively small Reynolds number of the present simulations. This surprising result is explained by the additional conditioning on minima and maxima points. A stochastic evolution equation for the marginal p.d.f. of the linear distance is derived and solved numerically. The stochastic problem that we consider consists of a Poisson process for the cutting of linear elements and a reconnection process due to molecular diffusion. The resulting length-scale distribution compares well with those obtained from the DNS.
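A one-dimensional toy version of the construction (the paper works on 3-D DNS fields with true gradient trajectories): walk each grid point uphill and downhill to its extremal points, and group points by the resulting (minimum, maximum) pair; each group is one "dissipation element". The discrete field below is made up for illustration:

```python
def dissipation_elements(phi):
    """Partition the 1-D grid of scalar values phi into dissipation
    elements: points sharing the same (min index, max index) pair."""
    n = len(phi)

    def walk(i, up):
        # steepest ascent (up=True) or descent along grid neighbors
        while True:
            best = i
            for j in (i - 1, i + 1):
                if 0 <= j < n and ((phi[j] > phi[best]) if up else (phi[j] < phi[best])):
                    best = j
            if best == i:
                return i   # reached a local extremum
            i = best

    elements = {}
    for i in range(n):
        key = (walk(i, up=False), walk(i, up=True))
        elements.setdefault(key, []).append(i)
    return elements
```

The element geometry the paper parameterizes would then be the distance between the two extremal points of each group and the scalar difference at those points.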

Journal ArticleDOI
TL;DR: In this paper, Pallaschke et al. developed various (exact) calculus rules for Frechet lower and upper subgradients of extended real-valued functions in real Banach spaces.
Abstract: We develop various (exact) calculus rules for Frechet lower and upper subgradients of extended-real-valued functions in real Banach spaces. Then we apply this calculus to derive new necessary optimality conditions for some remarkable classes of problems in constrained optimization including minimization problems for difference-type functions under geometric and operator constraints as well as subdifferential optimality conditions for the so-called weak sharp minima. §Dedicated to Diethard Pallaschke in honor of his 65th birthday.

Journal ArticleDOI
TL;DR: STMD shows a superior ability to find local minima in proteins and new global minima are found for the 55 bead AB model in two and three dimensions and Calculations of the occupation probabilities of individual protein inherent structures provide new insights into folding and misfolding.
Abstract: A simulation method is presented that achieves a flat energy distribution by updating the statistical temperature instead of the density of states in Wang-Landau sampling. A novel molecular dynamics algorithm (STMD) applicable to complex systems and a Monte Carlo algorithm are developed from this point of view. Accelerated convergence for large energy bins, essential for large systems, is demonstrated in tests on the Ising model, the Lennard-Jones fluid, and bead models of proteins. STMD shows a superior ability to find local minima in proteins and new global minima are found for the 55 bead AB model in two and three dimensions. Calculations of the occupation probabilities of individual protein inherent structures provide new insights into folding and misfolding.

Journal ArticleDOI
TL;DR: This paper represents the local mean surface of the data, a key step in EMD, as a linear combination of a set of two-dimensional linear basis functions smoothed with bi-cubic spline interpolation, and develops a fast algorithm for implementation of the EMD.
Abstract: Empirical mode decomposition (EMD) is a powerful tool for analysis of non-stationary and nonlinear signals, and has drawn significant attention in various engineering application areas. This paper presents a finite element-based EMD method for two-dimensional data analysis. Specifically, we represent the local mean surface of the data, a key step in EMD, as a linear combination of a set of two-dimensional linear basis functions smoothed with bi-cubic spline interpolation. The coefficients of the basis functions in the linear combination are obtained from the local extrema of the data using a generalized low-pass filter. By taking advantage of the principle of finite-element analysis, we develop a fast algorithm for implementation of the EMD. The proposed method provides an effective approach to overcome several challenging difficulties in extending the original one-dimensional EMD to the two-dimensional EMD. Numerical experiments using both simulated and practical texture images show that the proposed method works well.

Journal ArticleDOI
TL;DR: Two algorithmic enhancements to the GML method are presented that retain its strengths, but which overcome its weaknesses in the face of local optima.

Journal ArticleDOI
TL;DR: The application of a multiscale strategy integrated with a stochastic technique to the solution of nonlinear inverse scattering problems is presented; the approach allows the explicit and effective handling of many difficulties associated with such problems, ranging from ill-conditioning to nonlinearity and the false-solution drawback.
Abstract: The application of a multiscale strategy integrated with a stochastic technique to the solution of nonlinear inverse scattering problems is presented. The approach allows the explicit and effective handling of many difficulties associated with such problems, ranging from ill-conditioning to nonlinearity and the false-solution drawback. The choice of a finite dimensional representation for the unknowns, due to the upper bound on the essential dimension of the data, is iteratively accomplished by means of an adaptive multiresolution model, which offers considerable flexibility for the use of the information on the scattering domain acquired during the iterative steps of the multiscaling process. Even though a suitable representation of the unknowns could limit the local minima problem, the multiresolution strategy is integrated with a customized stochastic optimizer based on the behavior of a particle swarm, which prevents the solution from being trapped in false solutions without a large increase in the overall computational burden. Selected examples concerned with a two-dimensional microwave imaging problem are presented for illustrating the key features of the integrated stochastic multiscaling strategy.

Journal ArticleDOI
TL;DR: The basin-sampling approach proves to be efficient for systems involving broken ergodicity and has allowed us to calculate converged heat capacity curves for systems that could previously only be treated using the harmonic superposition approximation.
Abstract: We present a “basin-sampling” approach for calculation of the potential energy density of states for classical statistical models. It combines a Wang-Landau-type uniform sampling of local minima and a novel approach for approximating the relative contributions from local minima in terms of the volumes of basins of attraction. We have employed basin-sampling to study phase changes in atomic clusters modeled by the Lennard-Jones potential and for ionic clusters. The approach proves to be efficient for systems involving broken ergodicity and has allowed us to calculate converged heat capacity curves for systems that could previously only be treated using the harmonic superposition approximation. Benchmarks are also provided by comparison with parallel tempering and Wang-Landau simulations, where these proved feasible.

Proceedings ArticleDOI
Romer Rosales1, Glenn Fung1
20 Aug 2006
TL;DR: A method for constructing relative-distance-preserving low-dimensional mappings (sparse mappings) that allows learning unknown distance functions (or approximating known functions) with the additional property of reducing distance computation time.
Abstract: Calculation of object similarity, for example through a distance function, is a common part of data mining and machine learning algorithms. This calculation is crucial for efficiency since distances are usually evaluated a large number of times, the classical example being query-by-example (find objects that are similar to a given query object). Moreover, the performance of these algorithms depends critically on choosing a good distance function. However, it is often the case that (1) the correct distance is unknown or chosen by hand, and (2) its calculation is computationally expensive (e.g., for large-dimensional objects). In this paper, we propose a method for constructing relative-distance-preserving low-dimensional mappings (sparse mappings). This method allows learning unknown distance functions (or approximating known functions) with the additional property of reducing distance computation time. We present an algorithm that, given examples of proximity comparisons among triples of objects (object i is more like object j than object k), learns a distance function, in as few dimensions as possible, that preserves these distance relationships. The formulation is based on solving a linear programming optimization problem that finds an optimal mapping for the given dataset and distance relationships. Unlike other popular embedding algorithms, this method can easily generalize to new points, does not have local minima, and explicitly models computational efficiency by finding a mapping that is sparse, i.e., one that depends on a small subset of features or dimensions. Experimental evaluation shows that the proposed formulation compares favorably with a state-of-the-art method in several publicly available datasets.

Proceedings ArticleDOI
11 Sep 2006
TL;DR: This paper provides new results with PSO using the exponential probability distribution, aiming at improved performance; the resulting variant, PSO-E, is tested on a suite of well-known benchmark functions with many local optima.
Abstract: Studies with the Gaussian and Cauchy probability distributions have shown that the performance of the standard PSO algorithm can be improved. But these versions may also get stuck in local minima when optimizing functions with many local minima in high dimensional search space. In this paper, we will provide new results with PSO using the Exponential probability distribution aiming at improvement in performance. This version of the algorithm, termed PSO-E, was tested on a suite of well-known benchmark functions with many local optima and the results were compared with those obtained by the standard PSO (constriction factor). Simulation results show the suitability of PSO-E.

Journal ArticleDOI
TL;DR: In this paper, the scaling on which the present Γ-convergence analysis is based has the effect of separating the bulk and surface contributions to the energy, and it differs crucially from other scalings employed in the past in that it renders both contributions of the same order.
Abstract: A simple model of cleavage in brittle crystals consists of a layer of material containing N atomic planes separating in accordance with an interplanar potential under the action of an opening displacement δ prescribed on the boundary of the layer. The problem addressed in this work concerns the characterization of the constrained minima of the energy EN of the layer as a function of δ as N becomes large. These minima determine the effective or macroscopic cohesive law of the crystal. The main results presented in this communication are: (i) the computation of the Γ-limit E0 of EN as N → ∞; (ii) the characterization of the minimum values of E0 as a function of the macroscopic opening displacement; (iii) a proof of uniform convergence of the minima of EN for the case of nearest-neighbor interactions; and (iv) a proof of uniform convergence of the derivatives of EN, or tractions, in the same case. The scaling on which the present Γ-convergence analysis is based has the effect of separating the bulk and surface contributions to the energy. It differs crucially from other scalings employed in the past in that it renders both contributions of the same order.

Posted Content
TL;DR: This work presents an easy-to-implement online algorithm that computes the running maximum (or minimum) filter using no more than 3 comparisons per element in the worst case; by comparison, no algorithm is known to achieve 1.5 comparisons per element in the worst case.
Abstract: The running maximum-minimum (max-min) filter computes the maxima and minima over running windows of size w. This filter has numerous applications in signal processing and time series analysis. We present an easy-to-implement online algorithm requiring no more than 3 comparisons per element, in the worst case. Comparatively, no algorithm is known to compute the running maximum (or minimum) filter in 1.5 comparisons per element, in the worst case. Our algorithm has reduced latency and memory usage.
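A compact way to compute the same filter in amortized constant time per element is the monotonic-deque formulation below. Note this is the classic two-queue variant, not necessarily the paper's algorithm, whose wedge data structure is what achieves the 3-comparison worst case:

```python
from collections import deque

def running_max_min(x, w):
    """Return [(max, min)] over every full window of size w.

    U holds indices of a decreasing subsequence (front = window max),
    L an increasing one (front = window min)."""
    U, L = deque(), deque()
    out = []
    for i, v in enumerate(x):
        while U and x[U[-1]] <= v:   # drop elements dominated by v
            U.pop()
        U.append(i)
        while L and x[L[-1]] >= v:
            L.pop()
        L.append(i)
        if U[0] <= i - w:            # evict indices that left the window
            U.popleft()
        if L[0] <= i - w:
            L.popleft()
        if i >= w - 1:
            out.append((x[U[0]], x[L[0]]))
    return out
```

Each index is pushed and popped at most once per deque, giving O(n) total work independent of the window size w.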

Journal ArticleDOI
TL;DR: In this paper, a power quality (PQ) event detection and classification method using higher order cumulants as the feature parameter, and quadratic classifiers as the classification method is presented.
Abstract: In this paper, we present a novel power-quality (PQ) event detection and classification method using higher order cumulants as the feature parameters and quadratic classifiers as the classification method. We have observed that local higher order statistical parameters that are estimated from short segments of 50-Hz notch-filtered voltage waveform data carry discriminative features for the PQ events analyzed herein. A vector with six parameters, consisting of the local minima and maxima of higher order central cumulants from the second (variance) up to the fourth cumulant, is used as the feature vector. Local vector magnitudes and simple thresholding provide an immediate event detection criterion. After the detection of a PQ event, local maxima and minima of the cumulants around the event instant are used for event-type classification. We have observed that the minima and maxima for each statistical order produce clusters in the feature space. These clusters were observed to exhibit noncircular topology; hence, quadratic-type classifiers that require the Mahalanobis distance metric are proposed. The events investigated and presented are line-to-ground arcing faults and voltage sags due to induction motor starting. Detection and classification results obtained from an experimentally staged PQ event data set are presented.
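The feature construction can be sketched as follows: estimate the 2nd-4th central cumulants on sliding windows and keep the extrema of each cumulant trace. The window length and the use of global rather than local extrema here are simplifications of the method described above:

```python
def central_cumulants(seg):
    """2nd, 3rd and 4th central cumulants of a 1-D segment:
    c2 = m2, c3 = m3, c4 = m4 - 3*m2^2 (m_k = k-th central moment)."""
    n = len(seg)
    mu = sum(seg) / n
    m2, m3, m4 = (sum((v - mu) ** k for v in seg) / n for k in (2, 3, 4))
    return m2, m3, m4 - 3.0 * m2 ** 2

def cumulant_features(signal, win):
    """Six-parameter feature vector: extrema of each cumulant trace
    computed over sliding windows of length win."""
    traces = [central_cumulants(signal[i:i + win])
              for i in range(len(signal) - win + 1)]
    c2, c3, c4 = zip(*traces)
    return [min(c2), max(c2), min(c3), max(c3), min(c4), max(c4)]
```

For a pure ±1 square-wave segment the cumulants are (1, 0, -2), the strongly sub-Gaussian signature (negative c4) that helps separate event types in the feature space.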

Journal ArticleDOI
TL;DR: In this article, stochastic partial differential equations (SPDEs) are used to model the evolution of a curve; a Stratonovich differential is introduced to guarantee the well-posedness of the evolution and to make it independent of the implicit representation of the initial curve.
Abstract: Based on recent work on Stochastic Partial Differential Equations (SPDEs), this paper presents a simple and well-founded method to implement the stochastic evolution of a curve. First, we explain why great care should be taken when considering such an evolution in a Level Set framework. To guarantee the well-posedness of the evolution and to make it independent of the implicit representation of the initial curve, a Stratonovich differential has to be introduced. To implement this differential, a standard Ito plus drift approximation is proposed to turn an implicit scheme into an explicit one. Subsequently, we consider shape optimization techniques, which are a common framework to address various applications in Computer Vision, like segmentation, tracking, stereo vision etc. The objective of our approach is to improve these methods through the introduction of stochastic motion principles. The extension we propose can deal with local minima and with complex cases where the gradient of the objective function with respect to the shape is impossible to derive exactly. Finally, as an application, we focus on image segmentation methods, leading to what we call Stochastic Active Contours.
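The "Ito plus drift" device the abstract mentions, simulating a Stratonovich SDE by adding the corresponding Itô correction drift and stepping explicitly, can be illustrated on a scalar toy SDE dX = sigma(X) ∘ dW. This is a hedged 1-D sketch of the general idea, not the paper's level-set scheme; the function name and signature are assumptions.

```python
import numpy as np

def stratonovich_euler(x0, sigma, dsigma, dt, n, rng):
    """Simulate dX = sigma(X) ∘ dW (Stratonovich) by converting to
    Ito form with the correction drift 0.5 * sigma * sigma' and
    stepping with explicit Euler-Maruyama."""
    x = x0
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        x += 0.5 * sigma(x) * dsigma(x) * dt + sigma(x) * dw
    return x
```

With constant sigma the correction drift vanishes and the scheme reduces to x0 + sigma * W_t, which makes the conversion easy to sanity-check.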

Journal Article
TL;DR: In this article, an online algorithm for the running maximum-minimum (max-min) filter was proposed, which requires no more than 3 comparisons per element, in the worst case.
Abstract: The running maximum-minimum (max-min) filter computes the maxima and minima over running windows of size w. This filter has numerous applications in signal processing and time series analysis. We present an easy-to-implement online algorithm requiring no more than 3 comparisons per element, in the worst case. Comparatively, no algorithm is known to compute the running maximum (or minimum) filter in 1.5 comparisons per element, in the worst case. Our algorithm has reduced latency and memory usage.
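Running max-min filters of this kind are classically maintained with monotonic double-ended queues of candidate indices. The sketch below is an illustrative Python version of that idea (function name assumed), not the paper's code, and makes no claim about matching its worst-case comparison count.

```python
from collections import deque

def running_max_min(a, w):
    """Maxima and minima over all running windows of size w.

    Two deques hold indices of window candidates: U keeps values in
    decreasing order (front = current max), L in increasing order
    (front = current min). Each index enters and leaves each deque
    at most once, so the total work is linear.
    """
    maxima, minima = [], []
    U, L = deque(), deque()
    for i, x in enumerate(a):
        # drop candidates dominated by the new element
        while U and a[U[-1]] <= x:
            U.pop()
        while L and a[L[-1]] >= x:
            L.pop()
        U.append(i)
        L.append(i)
        # expire the front candidate if it slid out of the window
        if U[0] <= i - w:
            U.popleft()
        if L[0] <= i - w:
            L.popleft()
        if i >= w - 1:
            maxima.append(a[U[0]])
            minima.append(a[L[0]])
    return maxima, minima
```

For example, `running_max_min([1, 3, 2, 5, 4], 3)` yields the per-window maxima `[3, 5, 5]` and minima `[1, 2, 2]`.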

Proceedings Article
01 Jan 2006
TL;DR: This paper presents a simple and well-founded method to implement the stochastic evolution of a curve, and considers shape optimization techniques, which are a common framework to address various applications in Computer Vision, like segmentation, tracking, stereo vision etc.
Abstract: Based on recent work on Stochastic Partial Differential Equations (SPDEs), this paper presents a simple and well-founded method to implement the stochastic evolution of a curve. First, we explain why great care should be taken when considering such an evolution in a Level Set framework. To guarantee the well-posedness of the evolution and to make it independent of the implicit representation of the initial curve, a Stratonovich differential has to be introduced. To implement this differential, a standard Ito plus drift approximation is proposed to turn an implicit scheme into an explicit one. Subsequently, we consider shape optimization techniques, which are a common framework to address various applications in Computer Vision, like segmentation, tracking, stereo vision etc. The objective of our approach is to improve these methods through the introduction of stochastic motion principles. The extension we propose can deal with local minima and with complex cases where the gradient of the objective function with respect to the shape is impossible to derive exactly. Finally, as an application, we focus on image segmentation methods, leading to what we call Stochastic Active Contours.

Journal ArticleDOI
TL;DR: A new method for segmentation of molecular surfaces using topological analysis of a scalar function defined on the surface and its associated gradient field to study the role of cavities and protrusions in protein-protein interactions is presented.
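Segmentation by topological analysis of a scalar function and its gradient field can be caricatured on a graph: follow steepest descent from every vertex and label it with the local minimum it reaches. This toy sketch (names assumed) only illustrates the general Morse-style idea, not the paper's molecular-surface method.

```python
def descent_segments(values, neighbors):
    """Label each node with the local minimum reached by steepest
    descent of a scalar function on a graph.

    values:    dict node -> scalar function value
    neighbors: dict node -> list of adjacent nodes
    """
    label = {}
    def sink(v):
        if v in label:
            return label[v]
        best = min(neighbors[v], key=lambda u: values[u], default=v)
        if values[best] >= values[v]:
            label[v] = v  # v is a local minimum: it seeds a segment
        else:
            label[v] = sink(best)  # descend along the steepest edge
        return label[v]
    for v in values:
        sink(v)
    return label
```

On a surface mesh, the resulting basins around minima (or, on the negated function, around maxima) give the kind of cavity/protrusion segments the TL;DR refers to.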

Proceedings ArticleDOI
17 Jun 2006
TL;DR: A practical and efficient method for finding the globally optimal solution to the problem of pose estimation of a known object, based on ideas from global optimization theory, in particular, convex under-estimators in combination with branch and bound.
Abstract: In this paper we propose a practical and efficient method for finding the globally optimal solution to the problem of pose estimation of a known object. We present a framework that allows us to use point-to-point, point-to-line, and point-to-plane correspondences in the optimization algorithm. Traditional methods such as the iterative closest point algorithm may get trapped in local minima due to the non-convexity of the problem; our approach, by contrast, guarantees global optimality. The approach is based on ideas from global optimization theory, in particular convex under-estimators in combination with branch and bound. We provide a provably optimal algorithm and demonstrate good performance on both synthetic and real data.
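The branch-and-bound strategy described above, subdivide the domain, prune any region whose lower bound cannot beat the best solution found so far, can be shown on a generic 1-D problem. The sketch below is an assumption-laden illustration (here the under-estimator is a simple Lipschitz bound), not the paper's pose-specific convex under-estimators.

```python
import heapq

def branch_and_bound(f, lower_bound, lo, hi, tol=1e-6):
    """Globally minimize f on [lo, hi], given lower_bound(a, b) that
    under-estimates min f on [a, b]. Best-first branch and bound:
    pop the interval with the smallest bound, prune if the bound
    cannot improve the incumbent, otherwise split at the midpoint.
    """
    best_x, best_f = lo, f(lo)
    heap = [(lower_bound(lo, hi), lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb >= best_f - tol:
            continue  # prune: this region cannot beat the incumbent
        m = 0.5 * (a + b)
        fm = f(m)
        if fm < best_f:
            best_x, best_f = m, fm  # improve the incumbent
        if b - a > tol:
            heapq.heappush(heap, (lower_bound(a, m), a, m))
            heapq.heappush(heap, (lower_bound(m, b), m, b))
    return best_x, best_f
```

For f(x) = (x - 2)^2 on [0, 5] with Lipschitz constant 6, a valid under-estimator is `f(mid) - 6 * (b - a) / 2`, and the search converges to the global minimum at x = 2 with a guarantee no local method provides.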