
Showing papers on "Maxima and minima published in 2008"


Journal ArticleDOI
TL;DR: If a highly accurate MEP is desired, it is found to be more efficient to descend from the saddle to the minima than to use a chain-of-states method with many images.
Abstract: A comparison of chain-of-states based methods for finding minimum energy pathways (MEPs) is presented. In each method, a set of images along an initial pathway between two local minima is relaxed to find a MEP. We compare the nudged elastic band (NEB), doubly nudged elastic band, string, and simplified string methods, each with a set of commonly used optimizers. Our results show that the NEB and string methods are essentially equivalent and the most efficient methods for finding MEPs when coupled with a suitable optimizer. The most efficient optimizer was found to be a form of the limited-memory Broyden-Fletcher-Goldfarb-Shanno method in which the approximate inverse Hessian is constructed globally for all images along the path. The use of a climbing-image allows for finding the saddle point while representing the MEP with as few images as possible. If a highly accurate MEP is desired, it is found to be more efficient to descend from the saddle to the minima than to use a chain-of-states method with many images. Our results are based on a pairwise Morse potential to model rearrangements of a heptamer island on Pt(111), and plane-wave based density functional theory to model a rollover diffusion mechanism of a Pd tetramer on MgO(100) and dissociative adsorption and diffusion of oxygen on Au(111).

1,409 citations
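The nudged-elastic-band construction discussed above can be illustrated with a minimal sketch (the 2D model potential and all parameter choices below are illustrative, not from the paper): interior images feel only the component of the true force perpendicular to the path tangent, plus a spring force acting only along the tangent.

```python
import math

def V(x, y):
    """Toy 2D potential: two minima at (+/-1, 0.5), a saddle at (0, 0)."""
    return (x * x - 1.0) ** 2 + 5.0 * (y - 0.5 * x * x) ** 2

def gradV(x, y):
    dx = 4.0 * x * (x * x - 1.0) - 10.0 * x * (y - 0.5 * x * x)
    dy = 10.0 * (y - 0.5 * x * x)
    return dx, dy

def neb(n_images=11, k=2.0, step=0.01, iters=4000):
    """Relax a straight-line chain of images between the two minima."""
    path = [(-1.0 + 2.0 * i / (n_images - 1), 0.5) for i in range(n_images)]
    for _ in range(iters):
        new = [path[0]]
        for i in range(1, n_images - 1):
            (xm, ym), (x, y), (xp, yp) = path[i - 1], path[i], path[i + 1]
            tx, ty = xp - xm, yp - ym                  # central-difference tangent
            tn = math.hypot(tx, ty)
            tx, ty = tx / tn, ty / tn
            gx, gy = gradV(x, y)
            dot = gx * tx + gy * ty
            fx, fy = -(gx - dot * tx), -(gy - dot * ty)  # perpendicular true force
            # spring force acts only along the tangent -- the "nudge"
            fs = k * (math.hypot(xp - x, yp - y) - math.hypot(x - xm, y - ym))
            new.append((x + step * (fx + fs * tx), y + step * (fy + fs * ty)))
        new.append(path[-1])
        path = new
    return path

path = neb()
saddle = path[len(path) // 2]   # by symmetry the middle image relaxes onto the saddle
```

After relaxation the middle image sits near the saddle (0, 0), which is the highest-energy point on the chain; a climbing image would additionally remove the spring force on that image and invert the parallel component of the true force, so the saddle is reached exactly.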


Journal ArticleDOI
TL;DR: This paper proposes a continuation method in which one tracks the minimizers along a sequence of approximate nonsmooth energies, the first of which is strictly convex and the last of which is the original energy to be minimized for the segmentation task.
Abstract: We consider the restoration of piecewise constant images where the number of the regions and their values are not fixed in advance, with a good difference of piecewise constant values between neighboring regions, from noisy data obtained at the output of a linear operator (e.g., a blurring kernel or a Radon transform). Thus we also address the generic problem of unsupervised segmentation in the context of linear inverse problems. The segmentation and the restoration tasks are solved jointly by minimizing an objective function (an energy) composed of a quadratic data-fidelity term and a nonsmooth nonconvex regularization term. The pertinence of such an energy is ensured by the analytical properties of its minimizers. However, its practical interest used to be limited by the difficulty of the computational stage which requires a nonsmooth nonconvex minimization. Indeed, the existing methods are unsatisfactory since they (implicitly or explicitly) involve a smooth approximation of the regularization term and often get stuck in shallow local minima. The goal of this paper is to design a method that efficiently handles the nonsmooth nonconvex minimization. More precisely, we propose a continuation method where one tracks the minimizers along a sequence of approximate nonsmooth energies $\{J_\varepsilon\}$, the first of which is strictly convex and the last of which is the original energy to be minimized. Knowing the importance of the nonsmoothness of the regularization term for the segmentation task, each $J_\varepsilon$ is nonsmooth and is expressed as the sum of an $\ell_1$ regularization term and a smooth nonconvex function. Furthermore, the local minimization of each $J_{\varepsilon}$ is reformulated as the minimization of a smooth function subject to a set of linear constraints. The latter problem is solved by the modified primal-dual interior point method, which guarantees the descent direction at each step.
Experimental results are presented and show the effectiveness and the efficiency of the proposed method. Comparison with simulated annealing methods further shows the advantage of our method.

191 citations
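The continuation strategy above can be demonstrated on a one-dimensional toy energy (illustrative only, far simpler than the paper's segmentation energy): start from a strictly convex surrogate, gradually blend in the nonconvex part, and warm-start each minimization from the previous minimizer.

```python
import math

def grad(x, t):
    """Gradient of J_t(x) = 0.05*(x-6)^2 + t*(1 - cos x):
    strictly convex for t = 0, the full nonconvex energy for t = 1."""
    return 0.1 * (x - 6.0) + t * math.sin(x)

def J(x):
    return 0.05 * (x - 6.0) ** 2 + (1.0 - math.cos(x))

def descend(x, t, step=0.3, iters=500):
    for _ in range(iters):
        x -= step * grad(x, t)
    return x

def continuation(stages=21):
    x = 6.0                       # minimizer of the strictly convex J_0
    for s in range(stages):
        t = s / (stages - 1)      # blend in the nonconvex part gradually
        x = descend(x, t)         # warm-start from the previous minimizer
    return x

x_cont = continuation()           # tracked into the global basin (near 2*pi)
x_naive = descend(0.0, 1.0)       # direct descent from a poor start gets stuck
```

The naive descent converges to a shallow local minimum near x = 0.57, while the tracked minimizer ends near the global minimum of J, mirroring the motivation for the paper's continuation scheme.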


Journal ArticleDOI
TL;DR: Simulation results demonstrate that FABEMD is not only faster and adaptive, but also outperforms the original BEMD in terms of the quality of the BIMFs.
Abstract: A novel approach for bidimensional empirical mode decomposition (BEMD) is proposed in this paper. BEMD decomposes an image into multiple hierarchical components known as bidimensional intrinsic mode functions (BIMFs). In each iteration of the process, two-dimensional (2D) interpolation is applied to a set of local maxima (minima) points to form the upper (lower) envelope. However, 2D scattered-data interpolation methods incur a huge computation time and introduce artifacts into the decomposition. This paper suggests a simple, but effective, method of envelope estimation that replaces the surface interpolation. In this method, order statistics filters are used to get the upper and lower envelopes, where the filter size is derived from the data. Based on these properties, the proposed approach is termed fast and adaptive BEMD (FABEMD). Simulation results demonstrate that FABEMD is not only faster and adaptive, but also outperforms the original BEMD in terms of the quality of the BIMFs.

188 citations
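The envelope-estimation idea is easy to sketch in one dimension (a 1D analogue of the paper's 2D order-statistics filters; the test signal and the window-size rule below are illustrative): a sliding MAX filter gives the upper envelope, a sliding MIN filter the lower one, with the filter size derived from the spacing of the local extrema.

```python
import math

def order_stat_envelopes(s, w):
    """Upper/lower envelopes via sliding max/min (order-statistics) filters."""
    n, h = len(s), w // 2
    upper = [max(s[max(0, i - h):min(n, i + h + 1)]) for i in range(n)]
    lower = [min(s[max(0, i - h):min(n, i + h + 1)]) for i in range(n)]
    return upper, lower

def window_from_extrema(s):
    """Derive the filter size from the average gap between local maxima."""
    peaks = [i for i in range(1, len(s) - 1) if s[i - 1] < s[i] > s[i + 1]]
    gaps = [b - a for a, b in zip(peaks, peaks[1:])]
    w = round(sum(gaps) / len(gaps))
    return w + 1 if w % 2 == 0 else w          # force an odd window

n = 256
trend = [0.5 * i / n for i in range(n)]        # slow component to recover
signal = [trend[i] + math.sin(2 * math.pi * i / 32) for i in range(n)]
w = window_from_extrema(signal)                # 33 here: one oscillation period + 1
upper, lower = order_stat_envelopes(signal, w)
mean_env = [(u + l) / 2 for u, l in zip(upper, lower)]
```

The mean envelope tracks the slow trend, which is what each BEMD iteration subtracts to isolate the oscillatory mode; the 2D version simply replaces the sliding window with a square neighborhood.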


Proceedings ArticleDOI
12 Jul 2008
TL;DR: This work exhaustively extracts the inherent networks of combinatorial fitness landscapes, adapting the notion of inherent networks proposed for energy surfaces and using the well-known family of NK landscapes as an example.
Abstract: We propose a network characterization of combinatorial fitness landscapes by adapting the notion of inherent networks proposed for energy surfaces (Doye, 2002). We use the well-known family of NK landscapes as an example. In our case the inherent network is the graph where the vertices are all the local maxima and edges mean basin adjacency between two maxima. We exhaustively extract such networks on representative small NK landscape instances, and show that they are 'small-worlds'. However, the maxima graphs are not random, since their clustering coefficients are much larger than those of corresponding random graphs. Furthermore, the degree distributions are close to exponential instead of Poissonian. We also describe the nature of the basins of attraction and their relationship with the local maxima network.

167 citations
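The extraction procedure can be reproduced at small scale (a minimal sketch; N, K, and all helper names are ours, not the paper's): build a random NK landscape, hill-climb from every genotype to map the basins exhaustively, and connect two local optima whenever their basins contain adjacent genotypes.

```python
import random

random.seed(0)
N, K = 8, 2
# contribution tables: site i depends on bits (i, i+1, ..., i+K) mod N,
# filled lazily with random values (deterministic given the seed)
table = [{} for _ in range(N)]

def fitness(g):
    total = 0.0
    for i in range(N):
        key = tuple(g[(i + j) % N] for j in range(K + 1))
        if key not in table[i]:
            table[i][key] = random.random()
        total += table[i][key]
    return total / N

def neighbors(g):
    for i in range(N):
        yield g[:i] + (1 - g[i],) + g[i + 1:]

def hill_climb(g):
    """Steepest-ascent walk to the local optimum of g's basin."""
    while True:
        best = max(neighbors(g), key=fitness)
        if fitness(best) <= fitness(g):
            return g
        g = best

genotypes = [tuple((x >> i) & 1 for i in range(N)) for x in range(2 ** N)]
basin = {g: hill_climb(g) for g in genotypes}      # exhaustive basin mapping
optima = set(basin.values())
# inherent network: vertices are local optima, edges mean basin adjacency
edges = {frozenset((basin[g], basin[h]))
         for g in genotypes for h in neighbors(g) if basin[g] != basin[h]}
```

On such small instances one can then measure clustering coefficients and degree distributions of the maxima graph directly, as the paper does for representative NK instances.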


Journal ArticleDOI
TL;DR: In this article, the authors propose a method based on rectifying the singular points in the parameter space by using a blow-up argument and then asymptotically matching the approximations around such points with the regular approximation away from them.
Abstract: Our starting point is a parameterized family of functionals (a ‘theory’) for which we are interested in approximating the global minima of the energy when one of these parameters goes to zero. The goal is to develop a set of increasingly accurate asymptotic variational models allowing one to deal with the cases when this parameter is ‘small’ but finite. Since Γ-convergence may be non-uniform within the ‘theory’, we pose a problem of finding a uniform approximation. To achieve this goal we propose a method based on rectifying the singular points in the parameter space by using a blow-up argument and then asymptotically matching the approximations around such points with the regular approximation away from them. We illustrate the main ideas with physically meaningful examples covering a broad set of subjects from homogenization and dimension reduction to fracture and phase transitions. In particular, we give considerable attention to the problem of transition from discrete to continuum when the internal and external scales are not well separated, and one has to deal with the so-called ‘size’ or ‘scale’ effects.

129 citations


Journal ArticleDOI
TL;DR: In this article, a linear velocity model is used to estimate the maximum depth to which full waveform tomography can update the velocity model. The authors also investigate how frequencies should be selected when the seismic data are modelled in the frequency domain, and show that, in the presence of noise, the condition to avoid local minima requires more frequencies than are needed for sufficient spectral coverage.
Abstract: The least‐squares error measures the difference between observed and modelled seismic data. Because it suffers from local minima, a good initial velocity model is required to avoid convergence to the wrong model when using a gradient‐based minimization method. If a data set mainly contains reflection events, it is difficult to update the velocity model with the least‐squares error because the minimization method easily ends up in the nearest local minimum without ever reaching the global minimum. Several authors observed that the model could be updated by diving waves, requiring a wide‐angle or large‐offset data set. This full waveform tomography is limited to a maximum depth. Here, we use a linear velocity model to obtain estimates for the maximum depth. In addition, we investigate how frequencies should be selected if the seismic data are modelled in the frequency domain. In the presence of noise, the condition to avoid local minima requires more frequencies than needed for sufficient spectral coverage. We also considered acoustic inversion of a synthetic marine data set created by an elastic time‐domain finite‐difference code. This allowed us to validate the estimates made for the linear velocity model. The acoustic approximation leads to a number of problems when using long‐offset data. Nevertheless, we obtained reasonable results. The use of a variable density in the acoustic inversion helped to match the data at the expense of accuracy in the inversion result for the density.

127 citations


Journal ArticleDOI
TL;DR: In this paper, the authors focus on the model updating of complex structural systems and identify physically different local minima, giving the analyst the power to decide which model would better describe the system based on his or her experience and engineering judgment.

116 citations


Book ChapterDOI
06 Sep 2008
TL;DR: In this work, a novel method for determining the principal directions (maxima) of the diffusion orientation distribution function (ODF) is proposed, together with the use of the principal curvatures of the graph of the ODF as a measure of the degree of diffusion anisotropy in each such direction.
Abstract: In this work, a novel method for determining the principal directions (maxima) of the diffusion orientation distribution function(ODF) is proposed. We represent the ODF as a symmetric high-order Cartesian tensor restricted to the unit sphere and show that the extrema of the ODF are solutions to a system of polynomial equations whose coefficients are polynomial functions of the tensor elements. In addition to demonstrating the ability of our methods to identify the principal directions in real data, we show that this method correctly identifies the principal directions under a range of noise levels. We also propose the use of the principal curvatures of the graph of the ODF function as a measure of the degree of diffusion anisotropy in that direction. We present simulated results illustrating the relationship between the mean principal curvature, measured at the maxima, and the fractional anisotropy of the underlying diffusion tensor.

110 citations


Journal ArticleDOI
TL;DR: In this article, the existence of heteroclinic connections for Hamiltonian systems of N 2nd order differential equations, with potentials possessing possibly more than two global minima, is studied; for potentials with exactly two global minima, existence is proved under very weak nondegeneracy hypotheses on the potential.
Abstract: The problem considered is the existence of heteroclinic connections for Hamiltonian systems of N 2nd order differential equations with a potential possessing possibly more than two global minima. First restricting to potentials with exactly two global minima, we give an existence theorem under very weak nondegeneracy hypotheses on the potential. Our approach is variational: we prove existence by showing that the Action functional has a minimizer on the set of maps connecting the two minima. Next, allowing more than two minima but restricting to systems of two 2nd order equations, we analyze the phenomenon of nonexistence. In particular, by extending a result from [3], we conclude that generally nonexistence is robust under small analytic perturbations of the potential.

107 citations


Journal ArticleDOI
TL;DR: An approach that allows one to produce the two-body density matrix during the density matrix renormalization group (DMRG) run without an additional increase in the current disk and memory requirements is presented.
Abstract: We present an approach that allows one to produce the two-body density matrix during the density matrix renormalization group (DMRG) run without an additional increase in the current disk and memory requirements. The computational cost of producing the two-body density matrix is proportional to O(M^3k^2 + M^2k^4). The method is based on the assumption that different elements of the two-body density matrix can be calculated during different steps of a sweep. Hence, it is desirable that the wave function at convergence does not change during a sweep. We discuss the theoretical structure of the wave function ansatz used in DMRG, concluding that during the one-site DMRG procedure, the energy and the wave function are converging monotonically at every step of the sweep. Thus, the one-site algorithm provides an opportunity to obtain the two-body density matrix free from the N-representability problem. We explain the problem of local minima that may be encountered in the DMRG calculations. We discuss theoretically why and when the one- and two-site DMRG procedures may get stuck in a metastable solution, and we list practical solutions helping the minimization to avoid the local minima.

106 citations


Journal ArticleDOI
TL;DR: In this paper, two semi-analytic procedures for the detection of wheel-rail contact points are presented, named the DIST and the DIFF methods, which consider the wheel and the rail as two surfaces whose analytic expressions are known.
Abstract: The multibody simulation of railway vehicle dynamics needs a reliable and efficient method to determine the location of the contact points between wheel and rail that represent the application points of the contact forces and influence their directions and intensities. In this work, two semi-analytic procedures for the detection of the wheel–rail contact points (named the DIST and the DIFF methods) are presented. Both methods consider the wheel and the rail as two surfaces whose analytic expressions are known. The first method is based on the idea that the contact points are located at the points where the distance between the contact surfaces has local maxima, and is equivalent to solving an algebraic 4D-system. The second method is based on the idea that at the contact points the difference between the surfaces has local minima, and is equivalent to solving an algebraic 2D-system. In both cases, the original problem can be reduced analytically to a simple 1D-problem that can be easily solved numerically.
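The reduction to a 1D problem can be sketched on a 2D cross-section (the profiles and numbers below are illustrative stand-ins, not real wheel/rail geometry): express both profiles as functions of the lateral coordinate, then locate the local minima of their difference by a grid scan refined with a derivative-free ternary search.

```python
import math

def rail(x):
    return 0.1 * x * x                     # illustrative rail cross-section

def wheel(x):
    # circular wheel profile of radius 1 centred at (0.5, 1.2)
    return 1.2 - math.sqrt(1.0 - (x - 0.5) ** 2)

def diff(x):
    return wheel(x) - rail(x)              # vertical separation of the surfaces

def refine(a, b, iters=80):
    """Ternary search for the minimum of diff on [a, b] (locally unimodal)."""
    for _ in range(iters):
        m1, m2 = a + (b - a) / 3, b - (b - a) / 3
        if diff(m1) < diff(m2):
            b = m2
        else:
            a = m1
    return (a + b) / 2

# a grid scan brackets each local minimum of the difference function,
# then each bracket is refined -- a 1D analogue of the DIFF idea
xs = [-0.4 + 1.8 * i / 200 for i in range(201)]
brackets = [(xs[i - 1], xs[i + 1]) for i in range(1, 200)
            if diff(xs[i]) < diff(xs[i - 1]) and diff(xs[i]) < diff(xs[i + 1])]
contacts = [refine(a, b) for a, b in brackets]
```

For this convex toy geometry there is a single candidate contact point near x = 0.62; with a real hollow-worn profile the grid scan would return several brackets, one per local minimum of the separation.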

Journal ArticleDOI
TL;DR: This work presents a new, extended artificial potential field method that uses dynamic internal agent states to manipulate the potential field in which the agent is situated; it successfully solves reactive path-planning problems, such as a complex maze with multiple local minima, that cannot be solved using conventional static potential fields.

Journal ArticleDOI
TL;DR: In this article, the authors consider to what extent the long-term dynamics of cyclic solar activity in the form of Grand Minima can be associated with random fluctuations of the parameters governing the solar dynamo.
Abstract: We consider to what extent the long-term dynamics of cyclic solar activity in the form of Grand Minima can be associated with random fluctuations of the parameters governing the solar dynamo. We consider fluctuations of the alpha coefficient in the conventional Parker migratory dynamo, and also in slightly more sophisticated dynamo models, and demonstrate that they can mimic the gross features of the phenomenon of the occurrence of Grand Minima over suitable parameter ranges. The temporal distribution of these Grand Minima appears chaotic, with a more or less exponential waiting time distribution, typical of Poisson processes. In contrast, however, the available reconstruction of Grand Minima statistics based on cosmogenic isotope data demonstrates substantial deviations from this exponential law. We were unable to reproduce the non-Poissonic tail of the waiting time distribution either in the framework of a simple alpha-quenched Parker model or in its straightforward generalization, nor in simple models with feedback on the differential rotation. We suggest that the disagreement may only be apparent and is plausibly related to the limited observational data, and that the observations and results of numerical modeling can be consistent and represent physically similar dynamo regimes.

Journal ArticleDOI
TL;DR: In this article, the authors compare evolutionary algorithms with Minima Hopping for global optimization in the field of cluster structure prediction and find that the evolutionary algorithm is more efficient for systems with compact and symmetric ground states.
Abstract: We compare Evolutionary Algorithms with Minima Hopping for global optimization in the field of cluster structure prediction. We introduce a new 'average offspring' recombination operator and compare it with previously used operators. Minima Hopping is improved with a 'softening' method and a stronger feedback mechanism. Test systems are atomic clusters with Lennard-Jones interaction as well as silicon and gold clusters described by force fields. The improved Minima Hopping is found to be well-suited to all these homoatomic problems. The evolutionary algorithm is more efficient for systems with compact and symmetric ground states, including LJ150, but it fails for systems with very complex energy landscapes and asymmetric ground states, such as LJ75 and silicon clusters with more than 30 atoms. Both successes and failures of the evolutionary algorithm suggest ways for its improvement.

Proceedings Article
09 Jul 2008
TL;DR: This paper introduces two efficient BP-like algorithms that are guaranteed to converge to the global minimum, for any graph, over the class of energies known as "convex free energies" and proposes an efficient heuristic for setting the parameters of the convex free energy based on the structure of the graph.
Abstract: Inference problems in graphical models can be represented as a constrained optimization of a free energy function. It is known that when the Bethe free energy is used, the fixed-points of the belief propagation (BP) algorithm correspond to the local minima of the free energy. However BP fails to converge in many cases of interest. Moreover, the Bethe free energy is non-convex for graphical models with cycles thus introducing great difficulty in deriving efficient algorithms for finding local minima of the free energy for general graphs. In this paper we introduce two efficient BP-like algorithms, one sequential and the other parallel, that are guaranteed to converge to the global minimum, for any graph, over the class of energies known as "convex free energies". In addition, we propose an efficient heuristic for setting the parameters of the convex free energy based on the structure of the graph.

Journal ArticleDOI
TL;DR: It is shown that the recently introduced label propagation method for detecting communities in complex networks is equivalent to finding the local minima of a simple Potts model.
Abstract: We show that the recently introduced label propagation method for detecting communities in complex networks is equivalent to finding the local minima of a simple Potts model. When the method is applied to empirical data, the number of such local minima is found to be very high, much larger than the number of nodes in the graph. The aggregation method for combining information from several local minima shows a tendency to fragment the communities into very small pieces.
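Label propagation itself is only a few lines; the sketch below (an illustrative graph of two 4-cliques joined by a single edge, not the paper's empirical data) also checks the fixed-point condition that makes converged states local minima of the corresponding Potts energy: every node's label is among the most frequent labels of its neighbors.

```python
import random
from collections import Counter

random.seed(1)
# two 4-cliques {0..3} and {4..7} joined by the bridge edge (3, 4)
edges = [(a, b) for c in ([0, 1, 2, 3], [4, 5, 6, 7])
         for i, a in enumerate(c) for b in c[i + 1:]] + [(3, 4)]
adj = {v: set() for v in range(8)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

labels = {v: v for v in adj}               # every node starts with its own label

def stable(v):
    counts = Counter(labels[u] for u in adj[v])
    return counts[labels[v]] == max(counts.values())

while not all(stable(v) for v in adj):
    order = list(adj)
    random.shuffle(order)                  # asynchronous, random update order
    for v in order:
        counts = Counter(labels[u] for u in adj[v])
        best = max(counts.values())
        choices = [l for l, c in counts.items() if c == best]
        if labels[v] not in choices:       # keep the current label on ties
            labels[v] = random.choice(choices)
```

Each actual label change strictly increases the number of agreeing edges (i.e., strictly lowers the Potts energy), so the loop terminates in a state where no single-label flip improves the energy, which is exactly a local minimum in the paper's sense.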

Journal ArticleDOI
TL;DR: Using the parameters of the planes that pass through the planar faces of an object as the variables of the objective function leads to a set of linear constraints on the planes of the object, resulting in a much lower-dimensional null space where optimization is easier to achieve.
Abstract: In previous optimization-based methods of 3D planar-faced object reconstruction from single 2D line drawings, the missing depths of the vertices of a line drawing (and other parameters in some methods) are used as the variables of the objective functions. A 3D object with planar faces is derived by finding values for these variables that minimize the objective functions. These methods work well for simple objects with a small number N of variables. As N grows, however, it is very difficult for them to find the expected objects. This is because with the nonlinear objective functions in a space of large dimension, the search for optimal solutions can easily get trapped into local minima. In this paper, we use the parameters of the planes that pass through the planar faces of an object as the variables of the objective function. This leads to a set of linear constraints on the planes of the object, resulting in a much lower dimensional null space where optimization is easier to achieve. We prove that the dimension of this null space is exactly equal to the minimum number of vertex depths that define the 3D object. Since a practical line drawing is usually not an exact projection of a 3D object, we expand the null space to a larger space based on the singular value decomposition of the projection matrix of the line drawing. In this space, robust 3D reconstruction can be achieved. Compared with the two most related methods, our method not only can reconstruct more complex 3D objects from 2D line drawings but also is computationally more efficient.

Journal ArticleDOI
TL;DR: A range of density functional theory methods, including conventional hybrid and meta-hybrid functionals, a double-hybrid functional, and DFT-D (DFT augmented with an empirical dispersion term), were assessed for their ability to describe the three minima along the ϕGly rotational profile of one particular Tyr-Gly conformer.
Abstract: A range of density functional theory methods, including conventional hybrid and meta-hybrid functionals, a double-hybrid functional, and DFT-D (DFT augmented with an empirical dispersion term) were assessed for their ability to describe the three minima along the ϕGly rotational profile of one particular Tyr-Gly conformer. Previous work had shown that these minima are sensitive to intramolecular dispersion and basis set superposition error, the latter rendering MP2 calculations with small to medium-sized basis sets unsuitable for describing this molecule. Energy profiles for variation of the ϕGly torsion angle were compared to an estimated CCSD(T)/CBS reference profile. The hybrid functionals and the meta-hybrid PWB6K failed to predict all three minima; the meta-hybrid functionals M05−2X and M06−2X and the nonhybrid meta functional M06-L as well as the double-hybrid mPW2-PLYP and the B3LYP-D method did find all three minima but underestimated the relative stability of the two with rotated C-terminus. The ...

Journal ArticleDOI
TL;DR: This paper reports a novel methodology for the free-energy minimization of crystal structures exhibiting strong, anisotropic interactions due to hydrogen bonding, and demonstrates that the majority (approximately 75%) of lattice-energy minima are thermally stable at ambient conditions, and hence, the free-energy surface is complex and highly undulating.
Abstract: This paper reports a novel methodology for the free-energy minimization of crystal structures exhibiting strong, anisotropic interactions due to hydrogen bonding. The geometry of the thermally expanded cell was calculated by exploiting the dependence of the free-energy derivatives with respect to cell lengths and angles on the average pressure tensor computed in short molecular dynamics simulations. All dynamic simulations were performed with an elaborate anisotropic potential based on a distributed multipole analysis of the isolated molecule charge density. Changes in structure were monitored via simulated X-ray diffraction patterns. The methodology was used to minimize the free energy at ambient conditions of a set of experimental and hypothetical 5-fluorouracil crystal structures, generated in a search for lattice-energy minima with the same model potential. Our results demonstrate that the majority (approximately 75%) of lattice-energy minima are thermally stable at ambient conditions, and hence, the free-energy (like the lattice-energy) surface is complex and highly undulating. Metadynamics trajectories (Laio, A.; Parrinello, M. Proc. Natl. Acad. Sci. U.S.A. 2002, 99, 12562) started from the free-energy minima only produced transitions that preserved the hydrogen-bonding motif, and thus, further developments are needed for this method to efficiently explore such free-energy surfaces. The existence of so many free-energy minima, with large barriers for the alteration of the hydrogen-bonding motif, is consistent with the range of motifs observed in crystal structures of 5-fluorouracil and other 5-substituted uracils.

Journal ArticleDOI
TL;DR: This work proposes a separate learning algorithm for backpropagation, in which the hidden-to-output and input-to-hidden weights are trained separately; it shows stable training performance despite a large number of hidden nodes.

Journal ArticleDOI
TL;DR: In this article, the authors analyze the non-reversible Markov jump processes arising in stochastic networks under a thermodynamic limit regime, i.e., when the networks have some symmetry properties and when the number of nodes goes to infinity.
Abstract: This paper analyzes stochastic networks consisting of a set of finite capacity sites where different classes of individuals move according to some routing policy. The associated (non-reversible) Markov jump processes are analyzed under a thermodynamic limit regime, i.e. when the networks have some symmetry properties and when the number of nodes goes to infinity. A metastability property is proved: under some conditions on the parameters, it is shown that, in the limit, several equilibrium points coexist for the empirical distribution. The key ingredient of the proof of this property is a dimension reduction achieved by the introduction of two energy functions and a convenient mapping of their local minima and saddle points. Cases with a unique equilibrium point are also presented.

Journal ArticleDOI
TL;DR: A new model is developed that explicitly enforces positivity of the light sources under the assumption that the object is Lambertian with piecewise constant albedo; it is shown that this model significantly improves the accuracy and robustness relative to existing approaches.
Abstract: We propose a variational algorithm to jointly estimate the shape, albedo, and light configuration of a Lambertian scene from a collection of images taken from different vantage points. Our work can be thought of as extending classical multi-view stereo to cases where point correspondence cannot be established, or extending classical shape from shading to the case of multiple views with unknown light sources. We show that a first naive formalization of this problem yields algorithms that are numerically unstable, no matter how close the initialization is to the true geometry. We then propose a computational scheme to overcome this problem, resulting in provably stable algorithms that converge to (local) minima of the cost functional. We develop a new model that explicitly enforces positivity in the light sources with the assumption that the object is Lambertian and its albedo is piecewise constant and show that the new model significantly improves the accuracy and robustness relative to existing approaches.

Journal ArticleDOI
TL;DR: A novel algorithm for learning mixture models from multivariate data is introduced that uses TRUST-TECH to compute neighborhood local maxima on the likelihood surface via stability regions; although applied to Gaussian mixtures, it can be easily generalized to any other parametric finite mixture model.
Abstract: The expectation maximization (EM) algorithm is widely used for learning finite mixture models despite its greedy nature. Most popular model-based clustering techniques might yield poor clusters if the parameters are not initialized properly. To reduce the sensitivity of initial points, a novel algorithm for learning mixture models from multivariate data is introduced in this paper. The proposed algorithm takes advantage of TRUST-TECH (TRansformation Under STability-reTaining Equilibria CHaracterization) to compute neighborhood local maxima on the likelihood surface using stability regions. Basically, our method coalesces the advantages of the traditional EM with that of the dynamic and geometric characteristics of the stability regions of the corresponding nonlinear dynamical system of the log-likelihood function. Two phases, namely, the EM phase and the stability region phase, are repeated alternatively in the parameter space to achieve local maxima with improved likelihood values. The EM phase obtains the local maximum of the likelihood function and the stability region phase helps to escape out of the local maximum by moving toward the neighboring stability regions. Though applied to Gaussian mixtures in this paper, our technique can be easily generalized to any other parametric finite mixture model. The algorithm has been tested on both synthetic and real data sets and the improvements in the performance compared to other approaches are demonstrated. The robustness with respect to initialization is also illustrated experimentally.
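The initialization sensitivity that motivates TRUST-TECH is easy to demonstrate with a plain EM loop for a two-component 1D Gaussian mixture (the synthetic data and all parameter choices below are illustrative; the TRUST-TECH phase itself is not reproduced here). For brevity the sketch keeps equal priors and unit variances fixed and updates only the means; the Gaussian normalization constant cancels in the responsibilities.

```python
import math
import random

random.seed(2)
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]

def em(data, means, iters=60):
    """Plain EM for a two-component unit-variance Gaussian mixture."""
    m1, m2 = means
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in data:
            p1 = math.exp(-0.5 * (x - m1) ** 2)
            p2 = math.exp(-0.5 * (x - m2) ** 2)
            r.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted means
        m1 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
        m2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - sum(r))
    loglik = sum(math.log(0.5 * math.exp(-0.5 * (x - m1) ** 2)
                          + 0.5 * math.exp(-0.5 * (x - m2) ** 2)) for x in data)
    return (m1, m2), loglik

good, ll_good = em(data, (1.0, 4.0))    # init near the true means 0 and 5
bad, ll_bad = em(data, (2.5, 2.5))      # symmetric init: a poor stationary point
```

The symmetric initialization is a degenerate stationary point of the likelihood (both means collapse onto the overall data mean and never separate), which is precisely the kind of inferior solution the paper's stability-region phase is designed to escape.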

Journal ArticleDOI
TL;DR: I will report on some recent developments concerning the problem of estimating the Hausdorff dimension of the singular sets of solutions to elliptic and variational problems.
Abstract: I will report on some recent developments concerning the problem of estimating the Hausdorff dimension of the singular sets of solutions to elliptic and variational problems. Emphasis will be given on some open issues. Connections with measure data problems will be outlined.

Journal ArticleDOI
TL;DR: The main theorems not only recover known convergence results in this field but also provide a theoretical basis for the development of new iterative methods.
Abstract: We present iterative methods for finding the critical points and/or the minima of extended real valued functions of the form $\phi = \psi+ g-h$, where $\psi$ is a differentiable function and $g$ and $h$ are convex, proper, and lower semicontinuous. The underlying idea relies upon the discretization of a first order dissipative dynamical system which allows us to preserve the local feature and to obtain some convergence results. The main theorems not only recover known convergence results in this field but also provide a theoretical basis for the development of new iterative methods.

Journal ArticleDOI
TL;DR: In this article, it is shown that basin-hopping global optimization can identify low-lying minima on the corresponding mildly frustrated energy landscapes, and that the energy surface can be improved by employing bioinformatic techniques.
Abstract: Associative memory Hamiltonian structure prediction potentials are not overly rugged, thereby suggesting their landscapes are like those of actual proteins. In the present contribution we show how basin-hopping global optimization can identify low-lying minima for the corresponding mildly frustrated energy landscapes. For small systems the basin-hopping algorithm succeeds in locating both lower minima and conformations closer to the experimental structure than does molecular dynamics with simulated annealing. For large systems the efficiency of basin-hopping decreases for our initial implementation, where the steps consist of random perturbations to the Cartesian coordinates. We implemented umbrella sampling using basin-hopping to further confirm when the global minima are reached. We have also improved the energy surface by employing bioinformatic techniques for reducing the roughness or variance of the energy surface. Finally, the basin-hopping calculations have guided improvements in the excluded volume of the Hamiltonian, producing better structures. These results suggest a novel and transferable optimization scheme for future energy function development.
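Basin-hopping itself is compact enough to sketch (the rugged 1D test function and all parameters below are illustrative, far simpler than a protein landscape): alternate random perturbations with local minimization, and accept or reject each hop with a Metropolis criterion on the transformed, staircase-shaped landscape.

```python
import math
import random

def f(x):
    return x * x + 10.0 * math.sin(3.0 * x)   # rugged toy landscape

def fprime(x):
    return 2.0 * x + 30.0 * math.cos(3.0 * x)

def local_min(x, step=0.005, iters=1500):
    """Plain gradient descent stands in for the local minimizer."""
    for _ in range(iters):
        x -= step * fprime(x)
    return x

def basin_hopping(x0, hops=200, hop_size=2.0, temp=2.0, seed=3):
    rng = random.Random(seed)
    x = local_min(x0)
    best_x, best_f = x, f(x)
    for _ in range(hops):
        # random perturbation followed by local minimization
        trial = local_min(x + rng.uniform(-hop_size, hop_size))
        # Metropolis acceptance between the two basin-bottom energies
        if f(trial) < f(x) or rng.random() < math.exp((f(x) - f(trial)) / temp):
            x = trial
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

best_x, best_f = basin_hopping(2.0)
```

Starting from a basin with energy around -7.6, the hops carry the walker into the global basin near x = -0.51 (energy about -9.73); in the paper's setting the Cartesian random perturbation plays the same role, with molecular-dynamics-based local minimization.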

Journal ArticleDOI
TL;DR: The DIviding RECTangles (DIRECT) method as discussed by the authors was used to solve six benchmark phase equilibrium examples drawn from the literature and converged to the global minimum of the tangent plane distance function for all examples evaluated.

Journal ArticleDOI
TL;DR: Comparison with the global minima reported in the literature shows that the present method reproduces the global minima for clusters with n = 6, 8, 13, 19, 28, 30, and 32 and yields new global minima for (CO2)23, (CO2)25, and (CO2)35.
Abstract: Geometry optimization of carbon dioxide clusters (CO2)n with sizes 4 ≤ n < 40 is performed by a heuristic and unbiased method combined with geometrical perturbations. Comparison with the global minima reported in the literature shows that the present method reproduces the global minima for clusters with n = 6, 8, 13, 19, 28, 30, and 32 and yields new global minima for (CO2)23, (CO2)25, and (CO2)35. For the other clusters under investigation, global minima are reported here for the first time. Structural features of CO2 clusters and the efficiency of the optimization method are discussed.

Journal ArticleDOI
TL;DR: This work applies the foundation for the theory of weak sharp minima in the infinite-dimensional setting to error bounds for differentiable convex inclusions and applies the results to linear regularity and error limits for nondifferentiability convex inequalities.
Abstract: The notion of weak sharp minima unifies a number of important ideas in optimization. Part I of this work provides the foundation for the theory of weak sharp minima in the infinite-dimensional setting. Part II discusses applications of these results to linear regularity and error bounds for nondifferentiable convex inequalities. This work applies the results of Part I to error bounds for differentiable convex inclusions. A number of standard constraint qualifications for such inclusions are also examined.
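For reference, the standard notion the three-part series is built on can be stated as follows; the notation here is generic and not quoted from the papers:

```latex
% A nonempty set S \subseteq C of minimizers of f over C is a set of
% weak sharp minima with modulus \alpha > 0 if
\[
  f(x) \;\ge\; \bar{f} + \alpha \,\operatorname{dist}(x, S)
  \quad \text{for all } x \in C,
\]
% where \bar{f} = \inf_{x \in C} f(x) and
% \operatorname{dist}(x, S) = \inf_{s \in S} \lVert x - s \rVert.
```

The linear growth of $f$ away from the solution set is what makes the notion useful for deriving error bounds: the distance to $S$ is controlled by the residual $f(x) - \bar{f}$.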

Journal ArticleDOI
15 Aug 2008-Proteins
TL;DR: The stability of clusters for enzyme–inhibitor and antibody–antigen complexes in the Protein Docking Benchmark is studied; all clusters that are close to the native structure are found to be stable, and the combined approach is less dependent on a priori information than exploring the potential conformational space by Monte Carlo minimizations alone.
Abstract: Fast Fourier Transform (FFT) correlation methods of protein-protein docking, combined with the clustering of low energy conformations, can find a number of local minima on the energy surface. For most complexes the locations of the near-native structures can be constrained to the 30 largest clusters, each surrounding a local minimum. However, no reliable further discrimination can be obtained by energy measures because the differences in the energy levels between the minima are comparable to the errors in the energy evaluation. In fact, no current scoring function accounts for the entropic contributions that relate to the width rather than the depth of the minima. Since structures at narrow minima lose more entropy, some of the non-native states can be detected by determining whether or not a local minimum is surrounded by a broad region of attraction on the energy surface. The analysis is based on starting Monte Carlo Minimization (MCM) runs from random points around each minimum, and observing whether a certain fraction of trajectories converge to a small region within the cluster. The cluster is considered stable if such a strong attractor exists, has at least 10 convergent trajectories, is relatively close to the original cluster center, and contains a low energy structure. We studied the stability of clusters for enzyme-inhibitor and antibody-antigen complexes in the Protein Docking Benchmark. The analysis yields three main results. First, all clusters that are close to the native structure are stable. Second, restricting considerations to stable clusters eliminates around half of the false positives, i.e., solutions that are low in energy but far from the native structure of the complex. Third, by dividing the conformational space into clusters and determining the stability of each cluster, the combined approach is less dependent on a priori information than exploring the potential conformational space by Monte Carlo minimizations.
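The stability test described in the abstract can be sketched in a few lines; the gradient-descent quench, one-dimensional double well, and numeric thresholds below are illustrative assumptions standing in for the paper's Monte Carlo minimization protocol:

```python
import random

def quench(grad, x, lr=0.01, steps=2000):
    # gradient-descent quench standing in for a Monte Carlo minimization run
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def cluster_is_stable(grad, center, spread=0.5, n_traj=30, radius=0.1,
                      min_convergent=10, max_drift=0.5, seed=0):
    """Start trajectories from random points around a cluster center and ask
    whether a strong attractor close to the center captures enough of them."""
    rng = random.Random(seed)
    ends = [quench(grad, center + rng.uniform(-spread, spread))
            for _ in range(n_traj)]
    attractor = sorted(ends)[n_traj // 2]           # median endpoint
    convergent = sum(abs(e - attractor) < radius for e in ends)
    return convergent >= min_convergent and abs(attractor - center) <= max_drift

# Double well f(x) = (x^2 - 1)^2: x = 1 sits in a broad basin of attraction,
# while x = 0 is a ridge point from which trajectories split toward +/-1.
grad = lambda x: 4.0 * x * (x * x - 1.0)
print(cluster_is_stable(grad, center=1.0))  # True
print(cluster_is_stable(grad, center=0.0))  # False
```

A cluster centered on a true minimum passes because nearly all perturbed trajectories fall back to one attractor near the center; a cluster centered on a ridge fails because the endpoints drift far from it, mimicking the rejection of narrow or spurious minima.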