
Showing papers on "Maxima and minima published in 2005"


Proceedings Article
01 Jan 2005
TL;DR: This paper develops sequential tree-reweighted message passing, a modification of the recent technique proposed by Wainwright et al. (Nov. 2005), and shows that it outperforms both ordinary belief propagation and the tree-reweighted algorithm on synthetic and real problems.
Abstract: Algorithms for discrete energy minimization are of fundamental importance in computer vision. In this paper, we focus on the recent technique proposed by Wainwright et al. (Nov. 2005): tree-reweighted max-product message passing (TRW). It was inspired by the problem of maximizing a lower bound on the energy. However, the algorithm is not guaranteed to increase this bound - it may actually go down. In addition, TRW does not always converge. We develop a modification of this algorithm which we call sequential tree-reweighted message passing. Its main property is that the bound is guaranteed not to decrease. We also give a weak tree agreement condition which characterizes local maxima of the bound with respect to TRW algorithms. We prove that our algorithm has a limit point that achieves weak tree agreement. Finally, we show that our algorithm requires half as much memory as traditional message passing approaches. Experimental results demonstrate that on certain synthetic and real problems, our algorithm outperforms both the ordinary belief propagation and the tree-reweighted algorithm in (M. J. Wainwright, et al., Nov. 2005). In addition, on stereo problems with Potts interactions, we obtain a lower energy than graph cuts.
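TRW-S itself is involved, but its building block is min-sum message passing on trees. As a hedged illustration (not the authors' implementation), the following sketch performs exact discrete energy minimization on a chain by dynamic programming, the elementary operation that tree-based message-passing schemes apply tree by tree:

```python
import numpy as np

def chain_min_energy(unary, pairwise):
    """Exact energy minimization on a chain via min-sum message
    passing (dynamic programming).
    unary: (n, k) per-node label costs; pairwise: (k, k) cost
    between neighboring labels."""
    n, k = unary.shape
    msg = np.zeros(k)                 # message passed from node i-1 to node i
    back = []                         # backpointers for label recovery
    for i in range(1, n):
        cand = (unary[i - 1] + msg)[:, None] + pairwise  # (prev, cur)
        back.append(cand.argmin(axis=0))
        msg = cand.min(axis=0)
    cost = unary[-1] + msg
    labels = [int(cost.argmin())]
    for bp in reversed(back):
        labels.append(int(bp[labels[-1]]))
    labels.reverse()
    return float(cost.min()), labels
```

On a tree this generalizes to messages flowing from leaves to a root; TRW additionally reweights messages over a collection of trees to bound the energy.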

1,172 citations


Journal ArticleDOI
TL;DR: The local minima of LRSDPr are classified and the optimal convergence of a slight variant of the successful, yet experimental, algorithm of Burer and Monteiro is proved, which handles LRSDPr via the nonconvex change of variables X=RR^T.
Abstract: The low-rank semidefinite programming problem LRSDPr is a restriction of the semidefinite programming problem SDP in which a bound r is imposed on the rank of X, and it is well known that LRSDPr is equivalent to SDP if r is not too small. In this paper, we classify the local minima of LRSDPr and prove the optimal convergence of a slight variant of the successful, yet experimental, algorithm of Burer and Monteiro [5], which handles LRSDPr via the nonconvex change of variables X=RR^T. In addition, for particular problem classes, we describe a practical technique for obtaining lower bounds on the optimal solution value during the execution of the algorithm. Computational results are presented on a set of combinatorial optimization relaxations, including some of the largest quadratic assignment SDPs solved to date.
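The change of variables X=RR^T can be sketched in a few lines. The following is an illustrative first-order method (not the authors' algorithm) for min ⟨C, X⟩ over X ⪰ 0 with diag(X) = 1, e.g. a max-cut relaxation; the rank bound is enforced by the width of R, the diagonal constraint by renormalizing its rows, and the step size and iteration count are arbitrary choices:

```python
import numpy as np

def low_rank_sdp(C, r, steps=500, lr=0.1, seed=0):
    """Burer-Monteiro-style sketch: optimize over R (n x r) with
    X = R R^T, keeping each row of R on the unit sphere so that
    diag(X) = 1 holds throughout. C is assumed symmetric."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    R = rng.standard_normal((n, r))
    R /= np.linalg.norm(R, axis=1, keepdims=True)
    for _ in range(steps):
        G = 2.0 * C @ R                      # gradient of <C, R R^T> for symmetric C
        R -= lr * G
        R /= np.linalg.norm(R, axis=1, keepdims=True)  # re-project rows
    X = R @ R.T
    return X, float(np.sum(C * X))
```

The appeal of the reformulation is that the iterate is an n x r matrix rather than an n x n one; the paper's contribution is characterizing when such nonconvex iterations cannot get trapped in spurious local minima.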

444 citations


Proceedings ArticleDOI
20 Jun 2005
TL;DR: A novel algorithm aiming to estimate the 3D shape, the texture of a human face, along with the 3D pose and the light direction from a single photograph by recovering the parameters of a 3D morphable model is presented.
Abstract: We present a novel algorithm aiming to estimate the 3D shape, the texture of a human face, along with the 3D pose and the light direction from a single photograph by recovering the parameters of a 3D morphable model. Generally, the algorithms tackling the problem of 3D shape estimation from image data use only the pixel intensity as input to drive the estimation process. This was previously done either with a simple model, such as the Lambertian reflectance model, leading to a linear fitting algorithm, or with a more precise model at the price of minimizing a non-convex cost function with many local minima. One way to reduce the local minima problem is to use a stochastic optimization algorithm. However, the convergence properties (such as the radius of convergence) of such algorithms are limited. Here, as well as the pixel intensity, we use various image features such as the edges or the location of the specular highlights. The 3D shape, texture and imaging parameters are then estimated by maximizing the posterior of the parameters given these image features. The overall cost function obtained is smoother and, hence, a stochastic optimization algorithm is not needed to avoid the local minima problem. This leads to the multi-features fitting algorithm that has a wider radius of convergence and a higher level of precision. This is shown on some example photographs, and on a recognition experiment performed on the CMU-PIE image database.

341 citations


01 Jan 2005
TL;DR: The proposed method retains spatial coherence on initial data characteristic of curve evolution techniques, as well as the balance between a pixel/voxel’s proximity to the curve and its intention to cross over the curve from the underlying energy.
Abstract: In this paper, we first draw a connection between a level set algorithm and k-Means plus nonlinear diffusion preprocessing. Then, we exploit this link to develop a new hybrid numerical technique for segmentation that draws on the speed and simplicity of k-Means procedures, and the robustness of level set algorithms. The proposed method retains spatial coherence on initial data characteristic of curve evolution techniques, as well as the balance between a pixel/voxel’s proximity to the curve and its intention to cross over the curve from the underlying energy. However, it is orders of magnitude faster than standard curve evolutions. Moreover, it does not suffer from the limitations of k-Means due to inaccurate local minima and allows for segmentation results ranging from k-Means clustering type partitioning to level set partitions.

178 citations


Journal ArticleDOI
TL;DR: In this paper, a framework is proposed for dealing with several problems related to the analysis of shapes, such as the definition of the relevant set of shapes and of a metric on it.
Abstract: This paper proposes a framework for dealing with several problems related to the analysis of shapes. Two related such problems are the definition of the relevant set of shapes and that of defining a metric on it. Following a recent research monograph by Delfour and Zolesio [11], we consider the characteristic functions of the subsets of ℝ2 and their distance functions. The L2 norm of the difference of characteristic functions, the L∞ and the W1,2 norms of the difference of distance functions define interesting topologies, in particular the well-known Hausdorff distance. Because of practical considerations arising from the fact that we deal with image shapes defined on finite grids of pixels, we restrict our attention to subsets of ℝ2 of positive reach in the sense of Federer [16], with smooth boundaries of bounded curvature. For this particular set of shapes we show that the three previous topologies are equivalent. The next problem we consider is that of warping a shape onto another by infinitesimal gradient descent, minimizing the corresponding distance. Because the distance function involves an inf, it is not differentiable with respect to the shape. We propose a family of smooth approximations of the distance function which are continuous with respect to the Hausdorff topology, and hence with respect to the other two topologies. We compute the corresponding Gâteaux derivatives. They define deformation flows that can be used to warp a shape onto another by solving an initial value problem. We show several examples of this warping and prove properties of our approximations that relate to the existence of local minima. We then use this tool to produce computational definitions of the empirical mean and covariance of a set of shape examples. They yield an analog of the notion of principal modes of variation. We illustrate them on a variety of examples.

157 citations


Journal ArticleDOI
TL;DR: A new algorithm for constructing pathways between local minima that involve a large number of intervening transition states on the potential energy surface and applications to the formation of buckminsterfullerene and to the folding of various biomolecules are presented.
Abstract: We report a new algorithm for constructing pathways between local minima that involve a large number of intervening transition states on the potential energy surface. A significant improvement in efficiency has been achieved by changing the strategy for choosing successive pairs of local minima that serve as endpoints for the next search. We employ Dijkstra's algorithm [E. W. Dijkstra, Numer. Math. 1, 269 (1959)] to identify the "shortest" path corresponding to missing connections within an evolving database of local minima and the transition states that connect them. The metric employed to determine the shortest missing connection is a function of the minimized Euclidean distance. We present applications to the formation of buckminsterfullerene and to the folding of various biomolecules: the B1 domain of protein G, tryptophan zippers, and the villin headpiece subdomain. The corresponding pathways contain up to 163 transition states and will be used in future discrete path sampling calculations.
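Dijkstra's algorithm, the routine used above to pick the shortest missing connection, can be sketched as follows. The dictionary-based graph representation is an illustrative choice: in the paper's setting the nodes would be local minima and the edge weights a function of the minimized Euclidean distance between them.

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths on a weighted graph given as
    {node: [(neighbor, weight), ...]}; returns a dict of distances
    from src."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Re-running this on the evolving database after each new transition state is found identifies the next pair of endpoints to connect.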

153 citations


01 Jan 2005
TL;DR: This paper describes a “merely” exponential space/time algorithm for finding a Bayesian network that corresponds to a global maximum of a decomposable scoring function, such as BDeu or BIC.
Abstract: Finding the Bayesian network that maximizes a score function is known as structure learning or structure discovery. Most approaches use local search in the space of acyclic digraphs, which is prone to local maxima. Exhaustive enumeration requires super-exponential time. In this paper we describe a “merely” exponential space/time algorithm for finding a Bayesian network that corresponds to a global maximum of a decomposable scoring function, such as BDeu or BIC.
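The “merely” exponential idea can be sketched as a dynamic program over subsets of variables: the best network over a set S is built by choosing a sink whose parents come from the rest of S. This compact, unoptimized illustration assumes a user-supplied decomposable `local_score(v, parents)`; it is not the paper's implementation:

```python
from itertools import combinations

def best_network(n, local_score):
    """Exact structure-learning sketch. bs[v][S] is the best score of
    variable v with parents drawn from the set encoded by bitmask S;
    dp[S] is the best total score of a network over the variables in
    S, obtained by peeling off a sink variable at a time."""
    full = (1 << n) - 1
    bs = [[max(local_score(v, ps)
               for k in range(bin(S).count("1") + 1)
               for ps in combinations([u for u in range(n)
                                       if S >> u & 1], k))
           for S in range(1 << n)] for v in range(n)]
    dp = [0.0] * (1 << n)
    for S in range(1, 1 << n):
        dp[S] = max(dp[S ^ (1 << v)] + bs[v][S ^ (1 << v)]
                    for v in range(n) if S >> v & 1)
    return dp[full]
```

Because every ordering of sinks yields an acyclic digraph, the maximum over all 2^n subsets covers all structures, avoiding the super-exponential enumeration of digraphs mentioned above.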

150 citations


Proceedings ArticleDOI
17 Oct 2005
TL;DR: A solution for optimal triangulation in three views is presented and it is shown experimentally that the number of stationary points that are local minima and lie in front of each camera is small but does depend on the scene geometry.
Abstract: We present a solution for optimal triangulation in three views. The solution is guaranteed to find the optimal solution because it computes all the stationary points of the (maximum likelihood) objective function. Internally, the solution is found by computing roots of multivariate polynomial equations, directly solving the conditions for stationarity. The solver makes use of standard methods from computational commutative algebra to convert the root-finding problem into a 47 × 47 nonsymmetric eigenproblem. Although there are in general 47 roots, counting both real and complex ones, the number of real roots is usually much smaller. We also show experimentally that the number of stationary points that are local minima and lie in front of each camera is small but does depend on the scene geometry.
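The univariate analog of converting root finding into a nonsymmetric eigenproblem is the companion matrix, whose eigenvalues are exactly the roots of a monic polynomial. A minimal sketch:

```python
import numpy as np

def companion_roots(coeffs):
    """Roots of the monic polynomial
    x^n + c[0] x^(n-1) + ... + c[n-1]
    as eigenvalues of its (nonsymmetric) companion matrix."""
    c = np.asarray(coeffs, dtype=float)
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)   # ones on the subdiagonal
    C[:, -1] = -c[::-1]          # last column carries the coefficients
    return np.linalg.eigvals(C)
```

The multivariate version used in the paper plays the same game with multiplication matrices on a quotient ring, yielding the 47 × 47 eigenproblem.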

143 citations


Journal ArticleDOI
TL;DR: An algorithm based on a constrained optimization method is proposed, which chooses a set of measurement configurations by iteratively selecting one pose after another inside the workspace so as to maximize an index of observability associated with the identification Jacobian.
Abstract: The robustness of robot calibration with respect to sensor noise is sensitive to the manipulator poses used to collect measurement data. In this paper we propose an algorithm based on a constrained optimization method, which allows us to choose a set of measurement configurations. It works by selecting iteratively one pose after another inside the workspace. After a few steps, a set of configurations is obtained, which maximizes an index of observability associated with the identification Jacobian. This algorithm has been shown, in a former work, to be sensitive to local minima. This is why we propose here meta-heuristic methods to decrease this sensitivity of our algorithm. Finally, a validation through the simulation of a calibration experiment shows that using selected configurations significantly improves the kinematic parameter identification, dividing the noise associated with the results by a factor of 10-15. Also, we present an application to the calibration of a parallel robot with a vision-based measurement device.

125 citations


Journal ArticleDOI
01 Oct 2005
TL;DR: A novel approach to the tomographic reconstruction of binary objects from few projection directions within a limited range of angles, with robustness against local minima and excellent reconstruction performance using five projections within a range of 90°.
Abstract: We present a novel approach to the tomographic reconstruction of binary objects from few projection directions within a limited range of angles. A quadratic objective functional over binary variables, comprising the squared projection error and a prior penalizing non-homogeneous regions, is supplemented with a concave functional enforcing binary solutions. Application of a primal-dual subgradient algorithm to a suitable decomposition of the objective functional into the difference of two convex functions leads to an algorithm which provably converges with parallel updates to binary solutions. Numerical results demonstrate robustness against local minima and excellent reconstruction performance using five projections within a range of 90°. Our approach is applicable to quite general objective functions over binary variables with constraints and thus applicable to a wide range of problems within and beyond the field of discrete tomography.

123 citations


Journal ArticleDOI
TL;DR: It is confirmed that mode coupling theory successfully predicts the nonmonotonic behavior of dynamics and the presence of multiple glass phases, providing strong evidence that structure (the only input of mode coupling Theory) controls dynamics.
Abstract: We compare theoretical and simulation results for static and dynamic properties for a model of particles interacting via a spherically symmetric repulsive ramp potential. The model displays anomalies similar to those found in liquid water, namely, expansion upon cooling and an increase of diffusivity upon compression. In particular, we calculate the state points (P, T) from the simulation and successfully compare them with the state points (P, T) obtained using the Rogers-Young (RY) closure for the Ornstein-Zernike (OZ) equation. Both the theoretical and the numerical calculations confirm the presence of a line of isobaric density maxima, and lines of compressibility minima and maxima. Indirect evidence of a liquid-liquid critical point is found. Dynamic properties also show anomalies. Along constant temperature paths, as the density increases, the dynamics alternate between slowing down and speeding up, and we associate this behavior with the progressive structuring and destructuring of the liquid. Finally we confirm that mode coupling theory successfully predicts the nonmonotonic behavior of dynamics and the presence of multiple glass phases, providing strong evidence that structure (the only input of mode coupling theory) controls dynamics.

Proceedings ArticleDOI
12 Dec 2005
TL;DR: Simulation results show that the GPSO with Gaussian and Cauchy jump outperforms the standard one and presents a very competitive performance compared to PSO with constriction factor and also self-adaptive evolutionary programming.
Abstract: The Gaussian particle swarm optimization (GPSO) algorithm has shown promising results for solving multimodal optimization problems in low dimensional search space. But similar to evolutionary algorithms (EAs), GPSO may also get stuck in local minima when optimizing functions with many local minima, like the Rastrigin or Griewank functions, in high dimensional search space. In this paper, an approach which consists of a GPSO with jumps to escape from local minima is presented. The jump strategy is implemented as a mutation operator based on the Gaussian and Cauchy probability distribution. The new algorithm was tested on a suite of well-known benchmark functions with many local optima and the results were compared with those obtained by the standard PSO algorithm, and PSO with constriction factor. Simulation results show that the GPSO with Gaussian and Cauchy jump outperforms the standard one and presents a very competitive performance compared to PSO with constriction factor and also self-adaptive evolutionary programming.
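A hedged sketch of the jump idea (illustrative parameter values, not those of the paper): a standard PSO velocity update, plus an occasional Gaussian or Cauchy perturbation, chosen with equal probability, applied to a particle's position to help it escape a local minimum:

```python
import numpy as np

def pso_with_jumps(f, dim, n=30, iters=300, jump_prob=0.1, seed=1):
    """PSO with a jump (mutation) operator: with probability
    jump_prob a particle is perturbed with Gaussian or Cauchy
    noise. Inertia and acceleration constants are generic choices."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.12, 5.12, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        jump = rng.random(n) < jump_prob
        gauss = rng.random(n) < 0.5          # Gaussian vs Cauchy jump
        noise = np.where(gauss[:, None],
                         rng.standard_normal((n, dim)),
                         rng.standard_cauchy((n, dim)))
        x[jump] += noise[jump]
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

def rastrigin(z):
    """Classic multimodal benchmark; global minimum 0 at the origin."""
    return 10 * len(z) + float(np.sum(z * z - 10 * np.cos(2 * np.pi * z)))
```

The heavy tails of the Cauchy distribution occasionally produce long jumps, which is precisely what lets trapped particles leave deep basins.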

Journal ArticleDOI
TL;DR: This paper proposes and proves a characterization of the points that can be lowered during a W-thinning, which may be checked locally and efficiently implemented thanks to a data structure called component tree, and proposes quasi-linear algorithms for computing M-watersheds and topological watersheds.
Abstract: The watershed transformation is an efficient tool for segmenting grayscale images. An original approach to the watershed (Bertrand, Journal of Mathematical Imaging and Vision, Vol. 22, Nos. 2/3, pp. 217--230, 2005.; Couprie and Bertrand, Proc. SPIE Vision Geometry VI, Vol. 3168, pp. 136--146, 1997.) consists in modifying the original image by lowering some points while preserving some topological properties, namely, the connectivity of each lower cross-section. Such a transformation (and its result) is called a W-thinning, a topological watershed being an "ultimate" W-thinning. In this paper, we study algorithms to compute topological watersheds. We propose and prove a characterization of the points that can be lowered during a W-thinning, which may be checked locally and efficiently implemented thanks to a data structure called component tree. We introduce the notion of M-watershed of an image F, which is a W-thinning of F in which the minima cannot be extended anymore without changing the connectivity of the lower cross-sections. The set of points in an M-watershed of F which do not belong to any regional minimum corresponds to a binary watershed of F. We propose quasi-linear algorithms for computing M-watersheds and topological watersheds. These algorithms are proved to give correct results with respect to the definitions, and their time complexity is analyzed.

Journal ArticleDOI
TL;DR: The basic results on weak sharp minima in Part I are applied to a number of important problems in convex programming and applications to the linear regularity and boundedlinear regularity of a finite collection of convex sets are studied.
Abstract: The notion of weak sharp minima is an important tool in the analysis of the perturbation behavior of certain classes of optimization problems as well as in the convergence analysis of algorithms designed to solve these problems. It has been studied extensively by several authors. This paper is the second of a series on this subject where the basic results on weak sharp minima in Part I are applied to a number of important problems in convex programming. In Part II we study applications to the linear regularity and bounded linear regularity of a finite collection of convex sets as well as global error bounds in convex programming. We obtain both new results and reproduce several existing results from a fresh perspective.
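For context, the notion studied in this series can be stated in one inequality: the solution set $\bar{S}$ of $\min_{x \in C} f(x)$ is a set of weak sharp minima with modulus $\alpha > 0$ when

```latex
f(x) \;\ge\; \bar{f} \;+\; \alpha \,\operatorname{dist}\bigl(x, \bar{S}\bigr)
\qquad \text{for all } x \in C ,
```

where $\bar{f}$ is the optimal value. This linear growth of $f$ away from $\bar{S}$ is what drives the error bounds and regularity results discussed above.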

Journal ArticleDOI
TL;DR: In this article, the problem of finding optimal point correspondences between images related by a homography is addressed, and the problem is reduced to the solution of a polynomial of degree eight in a single variable, which can be computed numerically.

Journal ArticleDOI
TL;DR: It is shown how geometrical constraints can be implemented in an approach based on nonredundant curvilinear coordinates, avoiding the inclusion of the constraints in the set of redundant coordinates used to define the internal coordinates.
Abstract: A modification of the constrained geometry optimization method by Anglada and Bofill (Anglada, J. M.; Bofill, J. M. J. Comput. Chem. 1997, 18, 992-1003) is designed and implemented. The changes include the choice of projection, quasi-line-search, and the use of a Rational Function optimization approach rather than a reduced-restricted-quasi-Newton-Raphson method in the optimization step. Furthermore, we show how geometrical constraints can be implemented in an approach based on nonredundant curvilinear coordinates, avoiding the inclusion of the constraints in the set of redundant coordinates used to define the internal coordinates. The behavior of the new implementation is demonstrated in geometry optimizations featuring single or multiple geometrical constraints (bond lengths, angles, etc.), optimizations on hyperspherical cross sections (as in the computation of steepest descent paths), and location of energy minima on the intersection subspace of two potential energy surfaces (i.e. minimum energy crossing points). In addition, a novel scheme to determine the crossing point geometrically nearest to a given molecular structure is proposed.

Journal ArticleDOI
TL;DR: In the tabulation of smallest base energies found at various lengths, statistical evidence suggests the authors have good candidates for global minima or ground states up to length 45 and an algorithm applying stochastic methods and calculus to find polyphase sequences that are good local minima for the base energy.
Abstract: Low autocorrelation for sequences is usually described in terms of low base energy, i.e., the sum of the sidelobe energies, or the maximum modulus of its autocorrelations, a Barker sequence occurring when this value is ≤ 1. We describe first an algorithm applying stochastic methods and calculus to the problem of finding polyphase sequences that are good local minima for the base energy. Starting from these, a second algorithm uses calculus to locate sequences that are local minima for the maximum modulus on autocorrelations. In our tabulation of smallest base energies found at various lengths, statistical evidence suggests we have good candidates for global minima or ground states up to length 45. We extend the list of known polyphase Barker sequences to length 63.
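The two objectives mentioned above, base energy (sum of sidelobe energies) and maximum sidelobe modulus, are straightforward to compute from the aperiodic autocorrelation. As a sketch, for the classical length-13 binary Barker sequence every sidelobe has modulus at most 1 and the base energy is 6:

```python
import numpy as np

def base_energy(seq):
    """Sum of squared moduli of the aperiodic autocorrelation
    sidelobes of a (possibly polyphase) sequence."""
    s = np.asarray(seq, dtype=complex)
    n = len(s)
    side = [np.vdot(s[:-k], s[k:]) for k in range(1, n)]
    return float(sum(abs(c) ** 2 for c in side))

def max_sidelobe(seq):
    """Maximum modulus over the nonzero-shift autocorrelations;
    a Barker sequence has this value <= 1."""
    s = np.asarray(seq, dtype=complex)
    return max(abs(np.vdot(s[:-k], s[k:])) for k in range(1, len(s)))
```

These are the quantities the two local-search algorithms in the paper descend on, over the continuous phases of a polyphase sequence.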

Journal ArticleDOI
TL;DR: The multilevel structure of global optimization problems is discussed; the number of levels that can be recognized in a problem represents a more complete measure of its difficulty than the standard measure given by the total number of local minima.
Abstract: In this paper we will discuss the multilevel structure of global optimization problems. Such problems can often be seen at different levels, the number of which varies from problem to problem. At each level different objects are observed, but all levels display a similar structure. The number of levels which can be recognized for a given optimization problem represents a more complete measure of the difficulty of the problem with respect to the standard measure given by the total number of local minima. Moreover, the subdivision in levels will also suggest the introduction of appropriate tools, which will be different for each level but, in accordance with the fact that all levels display a similar structure, will all be based on a common concept namely that of local move. Some computational experiments will reveal the effectiveness of such tools.

Journal ArticleDOI
TL;DR: A new continuity correction to the P value for local maxima of a statistical parametric map is presented, which resulted in P values that were approximately 43% lower than the best of Bonferroni or random field theory methods when applied to a typical fMRI data set.

Journal ArticleDOI
TL;DR: The results presented here show that an image can be successfully decomposed into a number of intrinsic mode functions and a residue image with a minimum number of extrema points, and that subsampling offers a way to keep the total number of samples generated by empirical mode decomposition approximately equal to the number of pixels of the original image.
Abstract: Previous work on empirical mode decomposition in two dimensions typically generates a residue with many extrema points. In this paper we propose an improved method to decompose an image into a number of intrinsic mode functions and a residue image with a minimum number of extrema points. We further propose a method for the variable sampling of the two-dimensional empirical mode decomposition. Since the traditional frequency concept is not applicable in this work, we introduce the concept of empiquency, short for empirical mode frequency, to describe the signal oscillations. The very special properties of the intrinsic mode functions are used for variable sampling in order to reduce the number of parameters to represent the image. This is done blockwise, using the occurrence of extrema points of the intrinsic mode function to steer the sampling rate of the block. A method of using overlapping 7 × 7 blocks is introduced to overcome blocking artifacts and to further reduce the number of parameters required to represent the image. The results presented here show that an image can be successfully decomposed into a number of intrinsic mode functions and a residue image with a minimum number of extrema points. The results also show that subsampling offers a way to keep the total number of samples generated by empirical mode decomposition approximately equal to the number of pixels of the original image.

Journal ArticleDOI
TL;DR: A probabilistic approach for estimating parameters of an option pricing model from a set of observed option prices is proposed, based on a stochastic optimization algorithm which generates a random sample from the set of global minima of the in-sample pricing error and allows for the existence of multipleglobal minima.
Abstract: We propose a probabilistic approach for estimating parameters of an option pricing model from a set of observed option prices. Our approach is based on a stochastic optimization algorithm which generates a random sample from the set of global minima of the in-sample pricing error and allows for the existence of multiple global minima. Starting from an IID population of candidate solutions drawn from a prior distribution of the set of model parameters, the population of parameters is updated through cycles of independent random moves followed by “selection” according to pricing performance. We examine conditions under which such an evolving population converges to a sample of calibrated models. The heterogeneity of the obtained sample can then be used to quantify the degree of ill–posedness of the inverse problem: it provides a natural example of a coherent measure of risk, which is compatible with observed prices of benchmark (“vanilla”) options and takes into account the model uncertainty resulting from incomplete identification of the model. We describe in detail the algorithm in the case of a diffusion model, where one aims at retrieving the unknown local volatility surface from a finite set of option prices, and illustrate its performance on simulated and empirical data sets of index options.

Journal ArticleDOI
TL;DR: If the learning parameters of the three-term BP algorithm satisfy the conditions given in this paper, then it is guaranteed that the system is stable and will converge to a local minimum, and it is proved that if at least one of the eigenvalues of matrix F is negative, then the system becomes unstable.

Journal ArticleDOI
TL;DR: The Short Transformation Method (STM) is proposed as an attractive alternative to the original TM, which allows reconstructing the fuzzy FRF from a much lower number of deterministic computations, with only a small reduction in the accuracy of FRFs.

DissertationDOI
01 Jan 2005
TL;DR: A novel algorithm for fitting a Three-Dimensional Morphable Model of faces to a 2D input image that has a wider radius of convergence and a higher level of precision and is applied for such tasks as face identification, facial expression transfer from one image to another image and face tracking across 3D pose and expression variations.
Abstract: The main contribution of this thesis is a novel algorithm for fitting a Three-Dimensional Morphable Model of faces to a 2D input image. This fitting algorithm enables the estimation of the 3D shape, the texture, the 3D pose and the light direction from a single input image. Generally, the algorithms tackling the problem of 3D shape estimation from image data use only the pixel intensity as input to drive the estimation process. This was previously done either with a simple model, such as the Lambertian reflectance model, leading to a linear fitting algorithm, or with a more precise model at the price of minimizing a non-convex cost function with many local minima. One way to reduce the local minima problem is to use a stochastic optimization algorithm. However, the convergence properties (such as the radius of convergence) of such algorithms are limited. Here, as well as the pixel intensity, we use various image features such as the edges or the location of the specular highlights. The 3D shape, texture and imaging parameters are then estimated by maximizing the posterior of the parameters given these image features. The overall cost function obtained is smoother and, hence, a stochastic optimization algorithm is not needed to avoid the local minima problem. This leads to the Multi-Features Fitting algorithm that has a wider radius of convergence and a higher level of precision. The new Multi-Features Fitting algorithm is applied to such tasks as face identification, facial expression transfer from one image to another image (of different individuals) and face tracking across 3D pose and expression variations. The second contribution of this thesis is a careful comparison of well known fitting algorithms used in the context of face modelling and recognition. It is shown that these algorithms achieve high run time efficiency at the cost of accuracy and generality (few face images may be analysed).
The third and last contribution is the Matlab Morphable Model toolbox, a set of software tools developed in the Matlab programming environment. It allows (i) the generation of 3D faces from model parameters, (ii) the rendering of 3D faces, (iii) the fitting of an input image using the Multi-Features Fitting algorithm and (iv) identification from model parameters. The toolbox has a modular design that allows anyone to build on it and, for instance, to improve the fitting algorithm by incorporating new features in the cost function.

Journal ArticleDOI
TL;DR: Small modifications to the conjugate gradient method for solving symmetric positive definite systems have resulted in an increase in performance over LU decomposition by a factor of around 84, and the behaviour of the new algorithm has been tested against the crystallographic problems of Pawley refinement, rigid-body and general crystal structure refinement.
Abstract: Small modifications to the conjugate gradient method for solving symmetric positive definite systems have resulted in an increase in performance over LU decomposition by a factor of around 84 for solving a dense system of 1325 unknowns. Performance is further increased in the case of applying upper- and lower-bound parameter constraints. For structure solution employing simulated annealing and the Newton–Raphson method of non-linear least squares, the overall performance gain can be a factor of four, depending on the applied constraints. In addition, the new algorithm with bounding constraints often seeks out lower minima than would otherwise be attainable without constraints. The behaviour of the new algorithm has been tested against the crystallographic problems of Pawley refinement, rigid-body and general crystal structure refinement.
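For contrast with the modified method, a plain textbook conjugate gradient for symmetric positive definite systems, the baseline being improved upon, looks like this (a generic sketch, not the paper's modified algorithm):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A by the
    classical conjugate gradient iteration."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # A-conjugate update of direction
        rs = rs_new
    return x
```

The paper's modifications concern preconditioning-style tweaks and bound constraints on the parameters; the core iteration above converges in at most n steps in exact arithmetic.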

Journal ArticleDOI
TL;DR: In this paper, the Hausdorff dimension of the singular sets of convex variational integrals in the vectorial case n,N≥2 has been investigated.
Abstract: We consider ω-minima of convex variational integrals in the vectorial case n, N ≥ 2, and we provide estimates for the Hausdorff dimension of their singular sets.

Journal ArticleDOI
TL;DR: The GA was found to perform exceptionally well for all cases considered, whereas SQP, although a more computationally efficient method, was somewhat limited for two error function choices due to local minima trapping.
Abstract: In this paper, two separate techniques, i.e., sequential quadratic programming (SQP) and a genetic algorithm (GA), were used to estimate the complex permittivity of each layer in a multilayer composite structure. The relative performance of the algorithms was characterized by applying each algorithm to one of three different error functions. Computer generated S-parameter data sets were initially used in order to establish the achievable accuracy of each algorithm. Based on these data sets and S-parameter measurements of single and multilayer samples obtained using a standard X-band waveguide procedure, the GA was determined to be the more robust algorithm in terms of minimizing the rms error between measured/generated and formulated S-parameters. The GA was found to perform exceptionally well for all cases considered, whereas SQP, although a more computationally efficient method, was somewhat limited for two error function choices due to local minima trapping.
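The local-minima trapping that limits gradient-based methods like SQP can be illustrated on a standard multimodal test function. A minimal real-coded GA sketch (our own illustration on the 1D Rastrigin function, not the paper's permittivity error functions):

```python
import numpy as np

def rastrigin(x):
    """1D Rastrigin: regularly spaced local minima, global minimum 0 at x = 0."""
    return x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0

rng = np.random.default_rng(0)
pop_size, n_gen = 40, 100
pop = rng.uniform(-5.0, 5.0, size=pop_size)

for gen in range(n_gen):
    fit = rastrigin(pop)
    # Binary tournament selection: the fitter of two random individuals survives.
    idx = rng.integers(0, pop_size, size=(pop_size, 2))
    parents = np.where(fit[idx[:, 0]] < fit[idx[:, 1]], pop[idx[:, 0]], pop[idx[:, 1]])
    # Arithmetic crossover with a shuffled copy of the parent pool.
    partners = rng.permutation(parents)
    alpha = rng.uniform(0.0, 1.0, size=pop_size)
    children = alpha * parents + (1.0 - alpha) * partners
    # Gaussian mutation with a slowly shrinking scale.
    children += rng.normal(0.0, 0.5 * 0.97**gen, size=pop_size)
    # Elitism: carry the best individual of this generation forward unchanged.
    children[0] = pop[np.argmin(fit)]
    pop = children

best = pop[np.argmin(rastrigin(pop))]
```

A local optimizer started at, say, x = 4.5 would typically converge to the nearby local minimum near x = 4 (function value about 16), whereas the population-based search keeps sampling across basins.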

Proceedings ArticleDOI
17 Oct 2005
TL;DR: This work introduces a framework for computing statistically optimal estimates of geometric reconstruction problems, based on a hierarchy of convex relaxations for nonconvex polynomial optimization, and shows how one can detect whether the global optimum is attained at a given relaxation.
Abstract: We introduce a framework for computing statistically optimal estimates of geometric reconstruction problems. While traditional algorithms often suffer from either local minima or nonoptimality - or a combination of both - we pursue the goal of achieving global solutions of the statistically optimal cost-function. Our approach is based on a hierarchy of convex relaxations to solve nonconvex optimization problems with polynomials. These convex relaxations generate a monotone sequence of lower bounds and we show how one can detect whether the global optimum is attained at a given relaxation. The technique is applied to a number of classical vision problems: triangulation, camera pose, homography estimation and last, but not least, epipolar geometry estimation. Experimental validation on both synthetic and real data is provided. In practice, only a few relaxations are needed to attain the global optimum.
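The certified-lower-bound idea behind such relaxation hierarchies can be seen in a one-variable example (our own illustration, not taken from the paper): a value γ is a valid lower bound on a polynomial f whenever f − γ admits a sum-of-squares decomposition.

```latex
\[
f(x) = x^4 - 2x^2, \qquad
f(x) + 1 = x^4 - 2x^2 + 1 = (x^2 - 1)^2 \;\ge\; 0 ,
\]
% hence \gamma = -1 is a certified lower bound, and since f(\pm 1) = -1
% this bound is attained, certifying the global minimum.
```

Here the certificate is found at the lowest level of the hierarchy, mirroring the paper's observation that in practice only a few relaxations are needed.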

Journal ArticleDOI
TL;DR: The reliability of two different implementations of a quadratic phase retrieval approach to the problem of determining the far field of a radiating system from only square amplitude information on the near field zone is studied in this paper.
Abstract: The reliability of two different implementations of a quadratic phase retrieval approach to the problem of determining the far field of a radiating system from only square amplitude information on the near field zone is studied. The first implementation exploits square amplitude data over two scanning surfaces. The second one exploits the square amplitude of the voltages received by two different probes moving over a single scanning surface. It is pointed out how the diversity between the data on the two scanning surfaces, or between the two probes, gives rise to "cancellation effects" which help in avoiding the local minima problem. Numerical examples are shown to discuss the global convergence properties of the two algorithms.
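The general flavor of recovering phase from magnitude-only data can be sketched with the classical Gerchberg-Saxton iteration, a simpler relative of the quadratic approach studied in the paper (our own 1D toy illustration, not the paper's two-surface or two-probe algorithms):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
true_field = rng.normal(size=n) + 1j * rng.normal(size=n)
near_amp = np.abs(true_field)               # "measured" near-field magnitudes
far_amp = np.abs(np.fft.fft(true_field))    # "measured" far-field magnitudes

# Start from the measured near-field magnitude with a random phase guess.
x = near_amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))
err0 = np.linalg.norm(np.abs(np.fft.fft(x)) - far_amp) / np.linalg.norm(far_amp)

for _ in range(200):
    X = np.fft.fft(x)
    X = far_amp * np.exp(1j * np.angle(X))   # enforce far-field magnitudes
    x = np.fft.ifft(X)
    x = near_amp * np.exp(1j * np.angle(x))  # enforce near-field magnitudes

err = np.linalg.norm(np.abs(np.fft.fft(x)) - far_amp) / np.linalg.norm(far_amp)
```

This alternating-projection scheme has a non-increasing magnitude mismatch but can stagnate in local minima, which is exactly the failure mode the paper's data diversity is designed to mitigate.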

Journal ArticleDOI
TL;DR: This work exploits a fictitious dynamics between the basins of attraction of local minima, since the objective is to find the lowest minimum, rather than to reproduce the thermodynamics or dynamics.
Abstract: Thermodynamic and dynamic properties of biomolecules can be calculated using a coarse-grained approach based upon sampling stationary points of the underlying potential energy surface. The superposition approximation provides an overall partition function as a sum of contributions from the local minima, and hence functions such as internal energy, entropy, free energy and the heat capacity. To obtain rates we must also sample transition states that link the local minima, and the discrete path sampling method provides a systematic means to achieve this goal. A coarse-grained picture is also helpful in locating the global minimum using the basin-hopping approach. Here we can exploit a fictitious dynamics between the basins of attraction of local minima, since the objective is to find the lowest minimum, rather than to reproduce the thermodynamics or dynamics.
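The fictitious dynamics between basins of attraction, a random hop, a local quench, then a Metropolis acceptance test on the resulting basin minima, can be sketched as follows (a minimal basin-hopping illustration on a standard 1D multimodal test surface, not tied to any biomolecular potential):

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Multimodal 1D test surface with several basins of attraction."""
    x = np.atleast_1d(x)
    return float(np.cos(14.5 * x[0] - 0.3) + (x[0] + 0.2) * x[0])

rng = np.random.default_rng(1)
T = 1.0                                   # fictitious "temperature" for the Metropolis test
res = minimize(f, np.array([2.0]))        # quench the starting point to a local minimum
cur_x, cur_f = res.x, res.fun
best_x, best_f = cur_x, cur_f

for _ in range(50):
    trial = cur_x + rng.normal(0.0, 1.0, size=cur_x.shape)  # random hop
    res = minimize(f, trial)                                # quench to the nearest minimum
    # Metropolis acceptance between basin minima (the fictitious dynamics):
    # downhill moves always accepted, uphill moves accepted with Boltzmann probability.
    if res.fun < cur_f or rng.random() < np.exp(-(res.fun - cur_f) / T):
        cur_x, cur_f = res.x, res.fun
    if cur_f < best_f:
        best_x, best_f = cur_x, cur_f
```

Because only the lowest minimum matters, the acceptance rule need not reproduce any physical dynamics, which is the freedom the abstract refers to.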