
Showing papers in "Computational Optimization and Applications in 2004"


Journal ArticleDOI
TL;DR: The proposed stand-alone Newton method can handle classification problems in very high dimensional spaces, and generates a classifier that depends on very few input features, such as 7 out of the original 28,032.
Abstract: A fast Newton method, that suppresses input space features, is proposed for a linear programming formulation of support vector machine classifiers. The proposed stand-alone method can handle classification problems in very high dimensional spaces, such as 28,032 dimensions, and generates a classifier that depends on very few input features, such as 7 out of the original 28,032. The method can also handle problems with a large number of data points and requires no specialized linear programming packages but merely a linear equation solver. For nonlinear kernel classifiers, the method utilizes a minimal number of kernel functions in the classifier that it generates.

304 citations


Journal ArticleDOI
TL;DR: The problem of optimizing OSPF weights for a given set of projected demands so as to avoid congestion is shown to be NP-hard, even to approximate, and a local search heuristic is proposed to solve it.
Abstract: Open Shortest Path First (OSPF) is one of the most commonly used intra-domain internet routing protocols. Traffic flow is routed along shortest paths, splitting flow evenly at nodes where several outgoing links are on shortest paths to the destination. The weights of the links, and thereby the shortest path routes, can be changed by the network operator. The weights could be set proportional to the physical lengths of the links, but often the main goal is to avoid congestion, i.e. overloading of links, and the standard heuristic recommended by Cisco (a major router vendor) is to make the weight of a link inversely proportional to its capacity. We study the problem of optimizing OSPF weights for a given set of projected demands so as to avoid congestion. We show this problem is NP-hard, even to approximate, and propose a local search heuristic to solve it. We also provide worst-case results on the performance of OSPF routing vs. an optimal multi-commodity flow routing. Our numerical experiments compare the results obtained with our local search heuristic to the optimal multi-commodity flow routing, as well as to simple and commonly used heuristics for setting the weights. Experiments were done with a proposed next-generation AT&T WorldNet backbone as well as with synthetic internetworks.
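The Cisco inverse-capacity heuristic mentioned in the abstract is simple to state in code. The sketch below is illustrative only: the reference bandwidth, the rounding, and the clamping to a minimum weight of 1 are our assumptions, not details from the paper.

```python
def inverse_capacity_weights(capacities, reference_bw=1e8):
    """Assign each link a weight inversely proportional to its capacity,
    as in the Cisco heuristic: weight = reference_bw / capacity.
    Weights are rounded and clamped to at least 1 (assumed conventions)."""
    return {link: max(1, round(reference_bw / cap))
            for link, cap in capacities.items()}

# Capacities in bits/s: a 100 Mb/s link gets weight 1, a 10 Mb/s link weight 10.
caps = {("a", "b"): 1e8, ("a", "c"): 1e7, ("c", "b"): 1e9}
print(inverse_capacity_weights(caps))
```

The paper's point is that such fixed per-link rules can be far from optimal for given demands, which is what motivates its local search over weight settings.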

254 citations


Journal ArticleDOI
TL;DR: Two types of preconditioners which use some form of incomplete Cholesky factorization for indefinite systems are proposed in this paper, and it is revealed that the solution times for such problems on a modern PC are measured in minutes when direct methods are used and drop to seconds when iterative methods with appropriate preconditioners are used.
Abstract: Every Newton step in an interior-point method for optimization requires a solution of a symmetric indefinite system of linear equations. Most of today's codes apply direct solution methods to perform this task. The use of logarithmic barriers in interior point methods causes unavoidable ill-conditioning of linear systems and, hence, iterative methods fail to provide sufficient accuracy unless appropriately preconditioned. Two types of preconditioners which use some form of incomplete Cholesky factorization for indefinite systems are proposed in this paper. Although they involve significantly sparser factorizations than those used in direct approaches they still capture most of the numerical properties of the preconditioned system. The spectral analysis of the preconditioned matrix is performed: for convex optimization problems all the eigenvalues of this matrix are strictly positive. Numerical results are given for a set of public domain large linearly constrained convex quadratic programming problems with sizes reaching tens of thousands of variables. The analysis of these results reveals that the solution times for such problems on a modern PC are measured in minutes when direct methods are used and drop to seconds when iterative methods with appropriate preconditioners are used.

163 citations


Journal ArticleDOI
TL;DR: This work provides a direction which adequately substitutes for the projected gradient, and establishes results which mirror those available for the scalar-valued case, namely stationarity of the cluster points (if any) without convexity assumptions, and convergence of the full sequence generated by the algorithm to a weakly efficient optimum in the convex case, under mild assumptions.
Abstract: Vector optimization problems are a significant extension of multiobjective optimization, which has a large number of real life applications. In vector optimization the preference order is related to an arbitrary closed and convex cone, rather than the nonnegative orthant. We consider extensions of the projected gradient method to vector optimization, which work directly with vector-valued functions, without using scalar-valued objectives. We provide a direction which adequately substitutes for the projected gradient, and establish results which mirror those available for the scalar-valued case, namely stationarity of the cluster points (if any) without convexity assumptions, and convergence of the full sequence generated by the algorithm to a weakly efficient optimum in the convex case, under mild assumptions. We also prove that our results still hold when the search direction is only approximately computed.

156 citations


Journal ArticleDOI
TL;DR: A simple algorithm to compute a temperature which is compatible with a given acceptance ratio is proposed and the properties of the acceptance probability are studied, showing that this function is convex for low temperatures and concave for high temperatures.
Abstract: The classical version of simulated annealing is based on a cooling schedule. Generally, the initial temperature is set such that the acceptance ratio of bad moves equals a certain value χ0. In this paper, we first propose a simple algorithm to compute a temperature that is compatible with a given acceptance ratio. Then, we study the properties of the acceptance probability. It is shown that this function is convex for low temperatures and concave for high temperatures. We also provide a lower bound on the number of plateaux of a simulated annealing run based on a geometric cooling schedule. Finally, many numerical experiments are reported.
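For intuition, a much simpler stand-in for the paper's procedure: estimate the acceptance ratio of a sample of uphill (bad) move costs as mean(exp(-ΔE/T)) and bisect on T until it matches the target χ0. The sampling model and the geometric bisection are our assumptions, not the paper's algorithm.

```python
import math

def initial_temperature(uphill_deltas, chi0, lo=1e-6, hi=1e6, iters=100):
    """Find T such that the estimated acceptance ratio of the sampled
    uphill moves, mean(exp(-dE/T)), equals chi0 (bisection on log T)."""
    def chi(T):
        return sum(math.exp(-d / T) for d in uphill_deltas) / len(uphill_deltas)
    for _ in range(iters):
        mid = math.sqrt(lo * hi)  # geometric midpoint: T can span many orders
        if chi(mid) < chi0:
            lo = mid  # acceptance too low -> raise the temperature
        else:
            hi = mid
    return math.sqrt(lo * hi)

T = initial_temperature([1.0, 2.0, 3.0], chi0=0.8)
```

Bisection works because the estimated acceptance ratio is monotonically increasing in T, from 0 (freezing) to 1 (accept everything).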

148 citations


Journal ArticleDOI
TL;DR: A trust-region feasibility-perturbed sequential quadratic programming algorithm is described, and its adaptation to the problems arising in nonlinear model predictive control is discussed.
Abstract: Model predictive control requires the solution of a sequence of continuous optimization problems that are nonlinear if a nonlinear model is used for the plant. We describe briefly a trust-region feasibility-perturbed sequential quadratic programming algorithm (developed in a companion report), then discuss its adaptation to the problems arising in nonlinear model predictive control. Computational experience with several representative sample problems is described, demonstrating the effectiveness of the proposed approach.

113 citations


Journal ArticleDOI
TL;DR: This paper makes use of the modified secant condition for quasi-Newton methods given by Zhang et al. (1999) and Zhang and Xu (2001) and proposes a new conjugate gradient method following Dai and Liao (2001).
Abstract: Conjugate gradient methods are appealing for large scale nonlinear optimization problems. Recently, seeking fast convergence of these methods, Dai and Liao (2001) used the secant condition of quasi-Newton methods. In this paper, we make use of the modified secant condition given by Zhang et al. (1999) and Zhang and Xu (2001) and propose a new conjugate gradient method following Dai and Liao (2001). A new feature of this method is that it uses both gradient and function value information, and thereby achieves a high-order accuracy in approximating the second-order curvature of the objective function. The method is shown to be globally convergent under some assumptions. Numerical results are reported.
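A hedged sketch of the kind of direction parameter involved: the Dai-Liao β uses the secant vector y_k, and the modified secant condition replaces it with a vector corrected by function values. The correction term θ and the choice of s as the correcting vector below follow our reading of Zhang et al. and may differ in detail from the paper.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def modified_dai_liao_beta(g_new, g_old, d, s, f_old, f_new, t=0.1):
    """Dai-Liao-type beta with a modified secant vector z that folds in
    function-value information (theta); our assumed form, for illustration."""
    y = [gn - go for gn, go in zip(g_new, g_old)]              # gradient difference
    theta = 6.0 * (f_old - f_new) + 3.0 * dot(
        [go + gn for go, gn in zip(g_old, g_new)], s)          # function-value term
    z = [yi + theta * si / dot(s, s) for yi, si in zip(y, s)]  # modified secant vector
    return dot(g_new, [zi - t * si for zi, si in zip(z, s)]) / dot(d, z)
```

For quadratic objectives θ vanishes and z reduces to the ordinary secant vector y; the extra accuracy claimed in the abstract concerns general (non-quadratic) curvature.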

111 citations


Journal ArticleDOI
TL;DR: This article proves the superlinear convergence of Algorithm 4.3 under some suitable conditions.
Abstract: The BFGS method is the most effective of the quasi-Newton methods for solving unconstrained optimization problems. Wei, Li, and Qi [16] have proposed some modified BFGS methods based on the new quasi-Newton equation B_{k+1} s_k = y*_k, where y*_k is the sum of y_k and A_k s_k, and A_k is some matrix. The average performance of Algorithm 4.3 in [16] is better than that of the BFGS method, but its superlinear convergence is still open. This article proves the superlinear convergence of Algorithm 4.3 under some suitable conditions.

103 citations


Journal ArticleDOI
Volker Stix
TL;DR: Algorithms are provided to track all maximal cliques in a fully dynamic graph to solve fuzzy clustering problems in models with non-disjunct clusters.
Abstract: Clustering applications dealing with perception-based or biased data lead to models with non-disjunct clusters. There, objects to be clustered are allowed to belong to several clusters at the same time, which results in a fuzzy clustering. It can be shown that this is equivalent to searching all maximal cliques in dynamic graphs G_t = (V, E_t), where E_{t-1} ⊂ E_t, t = 1,…,T; E_0 = ∅. In this article algorithms are provided to track all maximal cliques in a fully dynamic graph.
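For a static baseline, the classic Bron-Kerbosch recursion enumerates all maximal cliques of a fixed graph; the paper's algorithms instead maintain this set incrementally as edges are added to G_t. The sketch below (without pivoting) is a from-scratch illustration, not the paper's incremental method.

```python
def maximal_cliques(adj):
    """Enumerate all maximal cliques via Bron-Kerbosch (no pivoting).
    adj maps each vertex to the set of its neighbors."""
    cliques = []
    def bk(R, P, X):
        # R: current clique, P: candidates, X: already-processed vertices
        if not P and not X:
            cliques.append(sorted(R))
            return
        for v in list(P):
            bk(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)
    bk(set(), set(adj), set())
    return sorted(cliques)

# Triangle a-b-c plus pendant edge c-d: maximal cliques {a,b,c} and {c,d}.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(maximal_cliques(adj))
```

Adding an edge to a graph can only create new maximal cliques containing both endpoints and retire cliques they dominate, which is the locality the dynamic algorithms exploit.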

90 citations


Journal ArticleDOI
TL;DR: A Matlab solver for constrained nonlinear equations based on the affine scaling trust-region method STRN, recently proposed by the authors, is presented and its features and capabilities are illustrated by numerical experiments.
Abstract: In this paper a Matlab solver for constrained nonlinear equations is presented. The code, called STRSCNE, is based on the affine scaling trust-region method STRN, recently proposed by the authors. The approach taken in implementing the key steps of the method is discussed. The structure and the usage of STRSCNE are described and its features and capabilities are illustrated by numerical experiments. The results of a comparison with high quality codes for nonlinear optimization are shown.

70 citations


Journal ArticleDOI
TL;DR: This work describes in detail how to convert several application problems to SOCP, and a proof is given of the existence of the step for the infeasible long-step path-following method.
Abstract: Interior point methods (IPM) have been developed for all types of constrained optimization problems. In this work the extension of IPM to second order cone programming (SOCP) is studied based on the work of Andersen, Roos, and Terlaky. SOCP minimizes a linear objective function over the direct product of quadratic cones, rotated quadratic cones, and an affine set. It is described in detail how to convert several application problems to SOCP. Moreover, a proof is given of the existence of the step for the infeasible long-step path-following method. Furthermore, variants are developed of both long-step path-following and of predictor-corrector algorithms. Numerical results are presented and analyzed for those variants using test cases obtained from a number of application problems.

Journal ArticleDOI
TL;DR: An ant colony optimisation (ACO) algorithm is introduced to generate good solutions to flexible machine layout problems, with ACO obtaining better solutions than the reduction heuristic.
Abstract: Flexible machine layout problems describe the dynamic arrangement of machines to optimise the trade-off between material handling and rearrangement costs under changing and uncertain production environments. A previous study used integer-programming techniques to solve heuristically reduced versions of the problem. As an alternative, this paper introduces an ant colony optimisation (ACO) algorithm to generate good solutions. Experimental results are presented, with ACO obtaining better solutions than the reduction heuristic.

Journal ArticleDOI
TL;DR: It is shown that if the objective function is LC2, then the methods possess local quadratic convergence under a local error bound condition without the requirement of isolated nonsingular solutions.
Abstract: This paper studies convergence properties of regularized Newton methods for minimizing a convex function whose Hessian matrix may be singular everywhere. We show that if the objective function is LC2, then the methods possess local quadratic convergence under a local error bound condition without the requirement of isolated nonsingular solutions. By using a backtracking line search, we globalize an inexact regularized Newton method. We show that the unit stepsize is accepted eventually. Limited numerical experiments are presented, which show the practical advantage of the method.
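The idea can be illustrated in one dimension, where the second derivative may vanish at the minimizer (e.g. f(x) = x^4 at x = 0) and a plain Newton step degenerates. Regularizing by a multiple of the gradient norm, as below, is one common choice in this literature; the paper's exact rule and line search are not reproduced.

```python
def regularized_newton_1d(fprime, fsecond, x, tol=1e-10, max_iter=200):
    """Regularized Newton in 1-D: the step solves
    (f''(x) + |f'(x)|) d = -f'(x),
    so the model stays well-posed even where f''(x) = 0."""
    for _ in range(max_iter):
        g = fprime(x)
        if abs(g) < tol:
            break
        x -= g / (fsecond(x) + abs(g))
    return x

# f(x) = x**4 is convex with f''(0) = 0; plain Newton's denominator degenerates.
x_star = regularized_newton_1d(lambda x: 4 * x**3, lambda x: 12 * x**2, x=1.0)
```

The regularizer shrinks as the gradient does, which is what allows fast local convergence under an error bound instead of a nonsingularity assumption.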

Journal ArticleDOI
TL;DR: This analysis explains theoretically why the extra-gradient methods usually outperform the forward-backward splitting methods for monotone variational inequalities.
Abstract: In this paper, we study the relationship between the forward-backward splitting method and the extra-gradient method for monotone variational inequalities. Both of the methods can be viewed as prediction-correction methods. The only difference is that they use different search directions in the correction-step. Our analysis explains theoretically why the extra-gradient methods usually outperform the forward-backward splitting methods. We suggest some modifications for the two methods and numerical results are given to verify the superiority of the modified methods.
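The structural point above — same prediction step, different operator evaluation in the correction — can be seen in a small sketch of the extragradient method for a box-constrained monotone variational inequality. The operator, box, step size and iteration count are illustrative choices.

```python
def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return [min(max(xi, lo), hi) for xi in x]

def extragradient(F, x, lo=-1.0, hi=1.0, tau=0.1, iters=2000):
    """Prediction step uses F(x); the correction re-evaluates F at the
    predicted point y (a forward-backward-style step would reuse F(x))."""
    for _ in range(iters):
        y = project_box([xi - tau * fi for xi, fi in zip(x, F(x))], lo, hi)
        x = project_box([xi - tau * fi for xi, fi in zip(x, F(y))], lo, hi)
    return x

# A monotone rotation operator F(x) = (x2, -x1); the VI solution is the origin.
sol = extragradient(lambda x: [x[1], -x[0]], [1.0, 0.5])
```

On this rotation operator a plain step x - τF(x) fails to contract for any τ > 0, while the extragradient correction does, which illustrates the behavioral gap the paper analyzes.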

Journal ArticleDOI
TL;DR: It is shown that a class of mathematical programs with P-matrix linear complementarity constraints can be reformulated as a piecewise convex program and solved through a sequence of continuously differentiable convex programs.
Abstract: We consider a mathematical program whose constraints involve a parametric P-matrix linear complementarity problem with the design (upper level) variables as parameters. Solutions of this complementarity problem define a piecewise linear function of the parameters. We study a smoothing function of this function for solving the mathematical program. We investigate the limiting behaviour of optimal solutions, KKT points and B-stationary points of the smoothing problem. We show that a class of mathematical programs with P-matrix linear complementarity constraints can be reformulated as a piecewise convex program and solved through a sequence of continuously differentiable convex programs. Preliminary numerical results indicate that the method and convex reformulation are promising.

Journal ArticleDOI
TL;DR: The augmented Lagrangian algorithm is specially designed for solving very large scale MCNF instances and provides near-optimal solutions to instances with over 3,600 nodes, 14,000 arcs and 80,000 commodities within reasonable computing time.
Abstract: The linear multicommodity network flow (MCNF) problem has many applications in the areas of transportation and telecommunications. It has therefore received much attention, and many algorithms that exploit the problem structure have been suggested and implemented. The practical difficulty of solving MCNF models increases fast with respect to the problem size, and especially with respect to the number of commodities. Applications in telecommunications typically lead to instances with huge numbers of commodities, and tackling such instances computationally is challenging. In this paper, we describe and evaluate a fast and convergent lower-bounding procedure which is based on an augmented Lagrangian reformulation of MCNF, that is, a combined Lagrangian relaxation and penalty approach. The algorithm is specially designed for solving very large scale MCNF instances. Compared to a standard Lagrangian relaxation approach, it has more favorable convergence characteristics. To solve the nonlinear augmented Lagrangian subproblem, we apply a disaggregate simplicial decomposition scheme, which fully exploits the structure of the subproblem and has good reoptimization capabilities. Finally, the augmented Lagrangian algorithm can also be used to provide heuristic upper bounds. The efficiency of the augmented Lagrangian method is demonstrated through computational experiments on large scale instances. In particular, it provides near-optimal solutions to instances with over 3,600 nodes, 14,000 arcs and 80,000 commodities within reasonable computing time.

Journal ArticleDOI
TL;DR: The results suggest that the GR algorithm provides an efficient way to identify subsets of preferred Pareto optima from larger sets in multi-objective optimization.
Abstract: Algorithms for multi-objective optimization problems are designed to generate a single Pareto optimum (non-dominated solution) or a set of Pareto optima that reflect the preferences of the decision-maker. If a set of Pareto optima are generated, then it is useful for the decision-maker to be able to obtain a small set of preferred Pareto optima using an unbiased technique of filtering solutions. This suggests the need for an efficient selection procedure to identify such a preferred subset that reflects the preferences of the decision-maker with respect to the objective functions. Selection procedures typically use a value function or a scalarizing function to express preferences among objective functions. This paper introduces and analyzes the Greedy Reduction (GR) algorithm for obtaining subsets of Pareto optima from large solution sets in multi-objective optimization. Selection of these subsets is based on maximizing a scalarizing function of the vector of percentile ordinal rankings of the Pareto optima within the larger set. A proof of optimality of the GR algorithm that relies on the non-dominated property of the vector of percentile ordinal rankings is provided. The GR algorithm executes in linear time in the worst case. The GR algorithm is illustrated on sets of Pareto optima obtained from five interactive methods for multi-objective optimization and three non-linear multi-objective test problems. These results suggest that the GR algorithm provides an efficient way to identify subsets of preferred Pareto optima from larger sets.

Journal ArticleDOI
TL;DR: Through extensive computational testing, it is shown that the shortest path algorithm serves as an effective heuristic for the product-level subproblem, yielding high quality solutions with only a fraction of the computer time.
Abstract: We propose a planning model for products manufactured across multiple manufacturing facilities sharing similar production capabilities. The need for cross-facility capacity management is most evident in high-tech industries that have capital-intensive equipment and a short technology life cycle. We propose a multicommodity flow network model where each commodity represents a product and the network structure represents manufacturing facilities in the supply chain capable of producing the products. We analyze in depth the product-level (single-commodity, multi-facility) subproblem when the capacity constraints are relaxed. We prove that even the general-cost version of this uncapacitated subproblem is NP-complete. We show that there exists an optimization algorithm that is polynomial in the number of facilities, but exponential in the number of periods. We further show that under special cost structures the shortest-path algorithm could achieve optimality. We analyze cases when the optimal solution does not correspond to a source-to-sink path, thus the shortest path algorithm would fail. To solve the overall (multicommodity) planning problem we develop a Lagrangean decomposition scheme, which separates the planning decisions into a resource subproblem, and a number of product-level subproblems. The Lagrangean multipliers are updated iteratively using a subgradient search algorithm. Through extensive computational testing, we show that the shortest path algorithm serves as an effective heuristic for the product-level subproblem (a mixed integer program), yielding high quality solutions with only a fraction (roughly 2%) of the computer time.

Journal ArticleDOI
TL;DR: This work describes a new algorithm based on Successive Quadratic Programming (SQP) and constrains the SQP steps in a trust region for global convergence and considers the second-order information in three ways: quasi-Newton updates, Gauss- newton approximation, and exact second derivatives, and they are compared.
Abstract: We describe a new algorithm for a class of parameter estimation problems, which are either unconstrained or have only equality constraints and bounds on parameters. Due to the presence of unobservable variables, parameter estimation problems may have non-unique solutions for these variables. These can also lead to singular or ill-conditioned Hessians and this may be responsible for slow or non-convergence of nonlinear programming (NLP) algorithms used to solve these problems. For this reason, we need an algorithm that leads to strong descent and converges to a stationary point. Our algorithm is based on Successive Quadratic Programming (SQP) and constrains the SQP steps in a trust region for global convergence. We consider the second-order information in three ways: quasi-Newton updates, Gauss-Newton approximation, and exact second derivatives, and we compare their performance. Finally, we provide results of tests of our algorithm on various problems from the CUTE and COPS sets.

Journal ArticleDOI
Ping-Qi Pan
TL;DR: In this paper, the proposed dual projective simplex method is recast in a more compact form so that it can get itself started from scratch with any dual (basic or nonbasic) feasible solution.
Abstract: Recently, a linear programming problem solver, called dual projective simplex method, was proposed (Pan, Computers and Mathematics with Applications, vol. 35, no. 6, pp. 119–135, 1998). This algorithm requires a crash procedure to provide an initial (normal or deficient) basis. In this paper, it is recast in a more compact form so that it can get itself started from scratch with any dual (basic or nonbasic) feasible solution. A new dual Phase-1 approach for producing such a solution is proposed. Reported are also computational results obtained with a set of standard NETLIB problems.

Journal ArticleDOI
TL;DR: A result is presented that gives general guidelines for constructing convex and concave envelopes of functions of two variables on bounded quadrilaterals, and it is shown how one can use this result to construct convex and concave envelopes of bilinear and fractional functions on rectangles, parallelograms and trapezoids.
Abstract: Convex and concave envelopes play important roles in various types of optimization problems. In this article, we present a result that gives general guidelines for constructing convex and concave envelopes of functions of two variables on bounded quadrilaterals. We show how one can use this result to construct convex and concave envelopes of bilinear and fractional functions on rectangles, parallelograms and trapezoids. Applications of these results to global optimization are indicated.

Journal ArticleDOI
TL;DR: This paper models the network capacity design problem for uncertain demand in telecommunication networks with integer link capacities as a two-stage mixed integer program, which is solved using a stochastic subgradient procedure, Barahona's volume approach and Benders decomposition.
Abstract: The expansion of telecommunication services has increased the number of users sharing network resources. When a given service is highly demanded, some demands may be unmet due to the limited capacity of the network links. Moreover, for such demands, telecommunication operators should pay penalty costs. To avoid rejecting demands, we can install more capacity in the existing network. In this paper we report experiments on network capacity design for uncertain demand in telecommunication networks with integer link capacities. We use Poisson demands with bandwidths given by normal or log-normal distribution functions. The expectation function is evaluated using a predetermined set of realizations of the random parameter. We model this problem as a two-stage mixed integer program, which is solved using a stochastic subgradient procedure, Barahona's volume approach and Benders decomposition.

Journal ArticleDOI
TL;DR: This paper provides three different characterizations of undominated difference-of-convex (d.c.) decompositions for indefinite quadratic functions, and shows that there are infinitely many undominated decompositions for such functions.
Abstract: In this paper we analyze difference-of-convex (d.c.) decompositions for indefinite quadratic functions. Given a quadratic function, there are many possible ways to decompose it as a difference of two convex quadratic functions. Some decompositions are dominated, in the sense that other decompositions exist with a lower curvature. Obviously, undominated decompositions are of particular interest. We provide three different characterizations of such decompositions, and show that there are infinitely many undominated decompositions for indefinite quadratic functions. Moreover, two different procedures will be suggested to find an undominated decomposition starting from a generic one. Finally, we address applications where undominated d.c. decompositions may be helpful: in particular, we show how to improve bounds in branch-and-bound procedures for quadratic optimization problems.

Journal ArticleDOI
TL;DR: The problem of designing at minimum cost a two-connected network such that each edge belongs to a cycle using at most K edges is shown to be strongly NP-complete for any fixed K, and a new class of facet defining inequalities is derived.
Abstract: We study the problem of designing at minimum cost a two-connected network such that each edge belongs to a cycle using at most K edges. This problem is a particular case of the two-connected networks with bounded meshes problem studied by Fortz, Labbe and Maffioli (Operations Research, vol. 48, no. 6, pp. 866–877, 2000). In this paper, we compute a lower bound on the number of edges in a feasible solution, we show that the problem is strongly NP-complete for any fixed K, and we derive a new class of facet defining inequalities. Numerical results obtained with a branch-and-cut algorithm using these inequalities show their effectiveness for solving the problem.

Journal ArticleDOI
TL;DR: An algorithm is described for computing an optimal p* for any specified set of m data points, and computational results are presented showing that the optimal q(p*,x) can be obtained efficiently.
Abstract: For some applications it is desired to approximate a set of m data points in R^n with a convex quadratic function. Furthermore, it is required that the convex quadratic approximation underestimate all m of the data points. It is shown here how to formulate and solve this problem using a convex quadratic function with s = (n + 1)(n + 2)/2 parameters, s ≤ m, so as to minimize the approximation error in the L1 norm. The approximating function is q(p,x), where p ∈ R^s is the vector of parameters, and x ∈ R^n. The Hessian of q(p,x) with respect to x (for fixed p) is positive semi-definite, and its Hessian with respect to p (for fixed x) is shown to be positive semi-definite and of rank ≤ n. An algorithm is described for computing an optimal p* for any specified set of m data points, and computational results (for n = 4, 6, 10, 15) are presented showing that the optimal q(p*,x) can be obtained efficiently. It is shown that the approximation will usually interpolate s of the m data points.

Journal ArticleDOI
TL;DR: The theory and experimental results demonstrate the ability to apply the LF function to dynamic and static design of survivable connection oriented networks.
Abstract: Issues of computer network survivability have gained much attention in recent years, since computer networks play an important role in the modern world. Many organizations, institutions, and companies use computer networks as a basic tool for transmitting many kinds of information. Service disruptions in modern networks are expected to be significant, because loss of services and traffic in high-speed fiber systems could cause extensive damage, including economic losses, political conflicts, and human health problems. In this paper we focus on problems of survivable connection oriented network design. A new objective function LF for primary route assignment applying the local-destination rerouting strategy is defined. Next, an optimization problem of primary route assignment using the LF function is formulated. Moreover, a branch and bound algorithm for that problem is proposed. The theory and experimental results demonstrate the ability to apply the LF function to dynamic and static design of survivable connection oriented networks.

Journal ArticleDOI
TL;DR: This paper considers a network improvement problem, called vertex-to-vertices distance reduction problem, and presents a strongly polynomial algorithm to solve the problem and shows that achieving an approximation ratio O(log(|V|)) is NP-hard.
Abstract: In this paper, we first consider a network improvement problem, called the vertex-to-vertices distance reduction problem. The problem is how to use a minimum cost to reduce the lengths of the edges in a network so that the distances from a given vertex to all other vertices are within a given upper bound. We use the l∞, l1 and l2 norms to measure the total modification cost, respectively. Under the l∞ norm, we present a strongly polynomial algorithm to solve the problem, and under the l1 or weighted l2 norm, we show that achieving an approximation ratio O(log(|V|)) is NP-hard. We also extend the results to the vertex-to-points distance reduction problem, which is to reduce the lengths of edges most economically so that the distances from a given vertex to all points on the edges of the network are within a given upper bound.

Journal ArticleDOI
TL;DR: This work proposes alternative objective functions to solve the (generalized) eigenproblem via (unconstrained) optimization, and describes the variational properties of these functions.
Abstract: In certain circumstances, it is advantageous to use an optimization approach in order to solve the generalized eigenproblem Ax = λBx, where A and B are real symmetric matrices and B is positive definite. In particular, this is the case when A and B are very large and solving systems of equations involving them with high accuracy is prohibitively expensive. Usually, the optimization approach involves optimizing the Rayleigh quotient. We first propose alternative objective functions to solve the (generalized) eigenproblem via (unconstrained) optimization, and we describe the variational properties of these functions. We then introduce some optimization algorithms (based on one of these formulations) designed to compute the largest eigenpair. According to preliminary numerical experiments, this work could lead the way to practical methods for computing the largest eigenpair of a (very) large symmetric matrix (pair).
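To make the optimization view concrete: for the special case B = I, gradient ascent on the Rayleigh quotient over the unit sphere converges to the largest eigenpair. This is the textbook objective, not the paper's alternative formulations, and the fixed step size is an illustrative assumption.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def rayleigh_ascent(A, x, step=0.1, iters=1000):
    """Maximize R(x) = x'Ax / x'x by gradient ascent with renormalization;
    only matrix-vector products with A are needed (no linear solves)."""
    for _ in range(iters):
        Ax = matvec(A, x)
        nrm2 = sum(xi * xi for xi in x)
        r = sum(xi * ai for xi, ai in zip(x, Ax)) / nrm2   # Rayleigh quotient
        grad = [2.0 * (ai - r * xi) / nrm2 for ai, xi in zip(Ax, x)]
        x = [xi + step * gi for xi, gi in zip(x, grad)]
        n = sum(xi * xi for xi in x) ** 0.5
        x = [xi / n for xi in x]                           # stay on the unit sphere
    return r, x

A = [[2.0, 1.0], [1.0, 2.0]]   # symmetric; eigenvalues are 1 and 3
lam, v = rayleigh_ascent(A, [1.0, 0.0])
```

Avoiding linear solves is exactly the appeal stated in the abstract: only products with A (and, in the generalized case, B) are required.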

Journal ArticleDOI
Igor Griva
TL;DR: The paper shows that in certain cases where the interior point method fails to achieve the solution with a high level of accuracy, the use of the exterior point method (EPM) can remedy the situation.
Abstract: The paper presents an algorithm for solving nonlinear programming problems. The algorithm is based on the combination of interior and exterior point methods. The latter is also known as the primal-dual nonlinear rescaling method. The paper shows that in certain cases when the interior point method (IPM) fails to achieve the solution with a high level of accuracy, the use of the exterior point method (EPM) can remedy this situation. The result is demonstrated by solving problems from the COPS and CUTE problem sets using the nonlinear programming solver LOQO, modified to include the exterior point method subroutine.

Journal ArticleDOI
TL;DR: Lower bounds and upper bounds for the interatomic distances in clusters of atoms minimizing the Lennard-Jones energy are proved in dimension three and in the two-dimensional case.
Abstract: We prove in this article lower bounds and upper bounds for the interatomic distances in clusters of atoms minimizing the Lennard-Jones energy. Our main result is in dimension three, but we also prove it in the two-dimensional case, since it seems interesting from a theoretical point of view.