
Showing papers on "Solution set" published in 2009


Journal ArticleDOI
TL;DR: A way is given to deal with a larger set of problems while retaining fine control over the solving mechanisms; the aim is to give more freedom in solver design by introducing programming concepts where only configuration parameters were previously available.

222 citations


Journal ArticleDOI
TL;DR: A general projective framework for finding a zero of the sum of $n$ maximal monotone operators over a real Hilbert space is described, which gives rise to a family of splitting methods of unprecedented flexibility.
Abstract: We describe a general projective framework for finding a zero of the sum of $n$ maximal monotone operators over a real Hilbert space. Unlike prior methods for this problem, we neither assume $n=2$ nor first reduce the problem to the case $n=2$. Our analysis defines a closed convex extended solution set for which we can construct a separating hyperplane by individually evaluating the resolvent of each operator. At the cost of a single, computationally simple projection step, this framework gives rise to a family of splitting methods of unprecedented flexibility: numerous parameters, including the proximal stepsize, may vary by iteration and by operator. The order of operator evaluation may vary by iteration and may be either serial or parallel. The analysis essentially generalizes our prior results for the case $n=2$. We also include a relative error criterion for approximately evaluating resolvents, which was not present in our earlier work.

109 citations


Journal ArticleDOI
TL;DR: Recently developed pruning techniques for incremental search space reduction in combination with subdivision techniques for the approximation of the entire solution set, the so-called Pareto set, are used.
Abstract: A multi-objective problem is addressed consisting of finding optimal low-thrust gravity-assist trajectories for interplanetary and orbital transfers. For this, recently developed pruning techniques for incremental search space reduction – which will be extended for the current situation – in combination with subdivision techniques for the approximation of the entire solution set, the so-called Pareto set, are used. Subdivision techniques are particularly promising for the numerical treatment of these multi-objective design problems since they are characterized (amongst others) by highly disconnected feasible domains, which can easily be handled by these set oriented methods. The complexity of the novel pruning techniques is analysed, and finally the usefulness of the novel approach is demonstrated by showing some numerical results for two realistic cases.

56 citations


Journal ArticleDOI
TL;DR: This article presents computational evidence to illustrate that the use of this new algorithm greatly reduces the cost of so-called “junk-point filtering,” previously a significant bottleneck in the computation of a numerical irreducible decomposition.
Abstract: The solution set $V$ of a polynomial system, i.e., the set of common zeroes of a set of multivariate polynomials with complex coefficients, may contain several components, e.g., points, curves, surfaces, etc. Each component has attached to it a number of quantities, one of which is its dimension. Given a numerical approximation to a point $\mathbf{p}$ on the set $V$, this article presents an efficient algorithm to compute the maximum dimension of the irreducible components of $V$ which pass through $\mathbf{p}$, i.e., a local dimension test. Such a test is a crucial element in the homotopy-based numerical irreducible decomposition algorithms of Sommese, Verschelde, and Wampler. This article presents computational evidence to illustrate that the use of this new algorithm greatly reduces the cost of so-called “junk-point filtering,” previously a significant bottleneck in the computation of a numerical irreducible decomposition. For moderate size examples, this results in well over an order of magnitude improvement in the computation of a numerical irreducible decomposition. As the computation of a numerical irreducible decomposition is a fundamental backbone operation, gains in efficiency in the irreducible decomposition algorithm carry over to the many computations which require this decomposition as an initial step. Another feature of a local dimension test is that one can now compute the irreducible components in a prescribed dimension without first computing the numerical irreducible decomposition of all higher dimensions. For example, one may compute the isolated solutions of a polynomial system without having to carry out the full numerical irreducible decomposition.

56 citations


Journal ArticleDOI
S.J. Li, C.R. Chen
TL;DR: In this paper, a key assumption is introduced by virtue of a parametric gap function, and sufficient conditions of the continuity and Hausdorff continuity of a solution set map for a weak vector variational inequality are obtained in Banach spaces with the objective space being finite-dimensional.
Abstract: In this paper, a key assumption is introduced by virtue of a parametric gap function. Then, by using the key assumption, sufficient conditions of the continuity and Hausdorff continuity of a solution set map for a parametric weak vector variational inequality are obtained in Banach spaces with the objective space being finite-dimensional.

48 citations


Journal ArticleDOI
TL;DR: It is shown that the representativeness of the solution set identified by inverse analysis of geotechnical measurements is controlled by the GA population size, for which an optimal value can be defined.
Abstract: This study concerns the identification of parameters of soil constitutive models from geotechnical measurements by inverse analysis. To deal with the non-uniqueness of the solution, the inverse analysis is based on a genetic algorithm (GA) optimization process. For a given uncertainty on the measurements, the GA identifies a set of solutions. A statistical method based on a principal component analysis (PCA) is then proposed to evaluate the representativeness of this set. It is shown that this representativeness is controlled by the GA population size, for which an optimal value can be defined. The PCA also gives a first-order approximation of the solution set of the inverse problem as an ellipsoid. These developments are first made on a synthetic excavation problem and on a pressuremeter test. Some experimental applications are then studied in a companion paper, to show the reliability of the method. Copyright © 2009 John Wiley & Sons, Ltd.

45 citations


Journal ArticleDOI
TL;DR: This work addresses the problem of determining a robust maximum flow value in a network with uncertain link capacities taken in a polyhedral uncertainty set and shows this class of problems to be polynomially solvable for planar graphs, but NP-hard for graphs without special structure.
Abstract: We address the problem of determining a robust maximum flow value in a network with uncertain link capacities taken in a polyhedral uncertainty set. Besides a few polynomial cases, we focus on the case where the uncertainty set is taken to be the solution set of an associated (continuous) knapsack problem. This class of problems is shown to be polynomially solvable for planar graphs, but NP-hard for graphs without special structure. The latter result provides evidence of the fact that the problem investigated here has a structure fundamentally different from the robust network flow models proposed in various other published works.

39 citations


Journal ArticleDOI
TL;DR: In this paper, two basic lemmas on exact and approximate solutions of inclusions and equations in general spaces are presented, which characterize calmness, lower semicontinuity and the Aubin property of solution sets in some Hölder-type setting.

39 citations


Journal ArticleDOI
TL;DR: An iterative algorithm for the variational inequality problem for a monotone operator over the fixed point set of a nonexpansive mapping is presented and the strong convergence for the proposed algorithm to the solution is guaranteed under some assumptions.
Abstract: The variational inequality problem for a monotone operator over the fixed point set of a nonexpansive mapping is connected with many signal processing problems, and such problems have hierarchical structure, for example, the convex optimization problem over the solution set of the variational inequality problem over the fixed point set has triple-hierarchical structure. In this paper, we present an iterative algorithm for this problem. The strong convergence for the proposed algorithm to the solution is guaranteed under some assumptions.

38 citations


Journal ArticleDOI
TL;DR: It is shown that the nonmonotone algorithm is globally convergent under an assumption that the solution set of the problem concerned is nonempty, which is weaker than those given in most existing algorithms for solving optimization problems over symmetric cones.
Abstract: In this paper, we propose a smoothing algorithm for solving the monotone symmetric cone complementarity problems (SCCP for short) with a nonmonotone line search. We show that the nonmonotone algorithm is globally convergent under an assumption that the solution set of the problem concerned is nonempty. Such an assumption is weaker than those given in most existing algorithms for solving optimization problems over symmetric cones. We also prove that the solution obtained by the algorithm is a maximally complementary solution to the monotone SCCP under some assumptions.

38 citations


Posted Content
TL;DR: The main theorem states that a uniformly bounded matricial solution set Dp has a linear matrix inequality (LMI) representation if and only if it is convex, settling the core case of the conjecture that a dimension-free problem can be made convex if and only if it can be made into an LMI.
Abstract: The (matricial) solution set of a Linear Matrix Inequality (LMI) is a convex basic non-commutative semi-algebraic set. The main theorem of this paper is a converse, a result which has implications for both semidefinite programming and systems engineering. For p(x) a non-commutative polynomial in free variables x = (x1, ... xg) we can substitute a tuple of symmetric matrices X = (X1, ... Xg) for x and obtain a matrix p(X). Assume p is symmetric with p(0) invertible, let Ip denote the set {X: p(X) is an invertible matrix}, and let Dp denote the component of Ip containing 0. THEOREM: If the set Dp is uniformly bounded independent of the size of the matrix tuples, then Dp has an LMI representation if and only if it is convex. Linear engineering systems problems are called "dimension free" if they can be stated purely in terms of a signal flow diagram with L2 performance measures, e.g., H-infinity control. Conjecture: A dimension free problem can be made convex if and only if it can be made into an LMI. The theorem here settles the core case affirmatively.

Journal ArticleDOI
TL;DR: In this paper, the generalized Ekeland vector variational principle was used to define sharp efficiency in locally convex spaces and generalized Takahashi's condition and generalized Hamel's condition for vector-valued functions.
Abstract: In this paper, we give a generalized Ekeland vector variational principle. By using the principle, we extend and improve the related results in sharp efficiency. In the framework of locally convex spaces, we introduce two kinds of generalized sharp efficiencies and prove that they are equivalent. In particular, we show that a sharp efficient solution with respect to an interior point of the ordering cone is also one with respect to every interior point. Moreover, we introduce the generalized Takahashi's condition and the generalized Hamel's condition for vector-valued functions. From the generalized Ekeland principle we deduce that the two conditions are equivalent. From this, we discuss the relationship between the 'distance' of f(x) from E(f(X)) and the distance of x from E(f), where E(f(X)) denotes the efficient point set of f(X) and E(f) denotes the efficient solution set.

Journal ArticleDOI
TL;DR: A new projection method for solving a system of nonlinear equations with convex constraints is presented; under mild conditions it is globally convergent, and if an error bound assumption holds in addition, it is superlinearly convergent.
Abstract: In this paper, a new projection method for solving a system of nonlinear equations with convex constraints is presented. Compared with the existing projection method for solving the problem, the projection region in this new algorithm is modified, which makes an optimal stepsize available at each iteration and hence guarantees that the next iterate is closer to the solution set. Under mild conditions, we show that the method is globally convergent, and if an error bound assumption holds in addition, it is shown to be superlinearly convergent. Preliminary numerical experiments also show that this method is more efficient and promising than the existing projection method.

Book ChapterDOI
21 Apr 2009
TL;DR: This paper presents a multi-population multiobjective optimization framework and demonstrates its usefulness on several test problems and a sensor network application.
Abstract: Most existing evolutionary approaches to multiobjective optimization aim at finding an appropriate set of compromise solutions, ideally a subset of the Pareto-optimal set. That means they are solving a set problem where the search space consists of all possible solution sets. Taking this perspective, multiobjective evolutionary algorithms can be regarded as hill-climbers on solution sets: the population is one element of the set search space and selection as well as variation implement a specific type of set mutation operator. Therefore, one may ask whether a 'real' evolutionary algorithm on solution sets can have advantages over the classical single-population approach. This paper investigates this issue; it presents a multi-population multiobjective optimization framework and demonstrates its usefulness on several test problems and a sensor network application.

Journal ArticleDOI
TL;DR: In this article, the authors suggest and analyze some extragradient iterative methods for finding the common element of the fixed points of a nonexpansive mapping and the solution set of the variational inequality for an inverse strongly monotone mapping in a Hilbert space.
Abstract: In this paper, we suggest and analyze some new extragradient iterative methods for finding the common element of the fixed points of a nonexpansive mapping and the solution set of the variational inequality for an inverse strongly monotone mapping in a Hilbert space. We also consider the strong convergence of the proposed method under some mild conditions. Several special cases are also discussed. Results proved in this paper may be viewed as improvement and refinement of the previously known results.

Journal ArticleDOI
TL;DR: It is shown that under some monotonicity conditions, an iterated MP-maximizer is robust to incomplete information and absorbing and globally accessible under perfect foresight dynamics for a small friction.

Book ChapterDOI
01 Jan 2009
TL;DR: A risk averse SBTP approach aiming to optimize for the worst-case scenario, which can be formulated as a min-max problem, is proposed, and a scheme is developed such that the solution set of an affine UE can be explicitly expressed.
Abstract: Existing second best toll pricing (SBTP) models determine optimal tolls of a subset of links in a transportation network by minimizing certain system objective, while the traffic flow pattern is assumed to follow user equilibrium (UE). We show in this paper that such toll design approach is risk prone, which tries to optimize for the best-case scenario, if the UE problem has multiple solutions. Accordingly, we propose a risk averse SBTP approach aiming to optimize for the worst-case scenario, which can be formulated as a min-max problem. We establish a general solution existence condition for the risk averse model and discuss in detail that such a condition may not be always satisfied in reality. In case a solution does not exist, it is possible to replace the exact UE solution set by a set of approximate solutions. This replacement guarantees the solution existence of the risk averse model. We then develop a scheme such that the solution set of an affine UE can be explicitly expressed. Using this explicit representation, an improved simplex method can be adopted to solve the risk averse SBTP model.

Journal ArticleDOI
TL;DR: An iterative algorithm is constructed to solve the minimum Frobenius norm residual problem.

Journal ArticleDOI
TL;DR: In this paper, the minimization of a pseudoinvex function over an invex subset was studied and several new and simple characterizations of the solution set of pseudoinvex extremum problems were provided.
Abstract: In this paper, we study the minimization of a pseudoinvex function over an invex subset and provide several new and simple characterizations of the solution set of pseudoinvex extremum problems. By means of the basic properties of pseudoinvex functions, the solution set of a pseudoinvex program is characterized, for instance, by the equality $\nabla f(x)^{T}\eta(\bar{x},x)=0$, for each feasible point x, where $\bar{x}$ is in the solution set. Our study naturally improves and extends some previously known results in Mangasarian (Oper. Res. Lett. 7: 21-26, 1988) and Jeyakumar and Yang (J. Opt. Theory Appl. 87: 747-755, 1995).
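In the convex special case, where f is convex and the kernel function is eta(x-bar, x) = x-bar - x, the stated characterization reduces to Mangasarian's classical description of the solution set (a sketch of the specialization, with S the feasible set and x-bar a fixed solution):

```latex
\bar{S} \;=\; \bigl\{\, x \in S \;:\; \nabla f(x)^{T}(\bar{x}-x) = 0 \,\bigr\}.
```

The pseudoinvex setting replaces the difference $\bar{x}-x$ by a general kernel $\eta(\bar{x},x)$ while keeping the same zero-inner-product test.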

Journal ArticleDOI
TL;DR: In this article, a multi-objective optimization model is developed to minimize the comprehensive cost and the whole production load with time-sequence constraints, and a non-dominated sorting genetic algorithm (NSGA-II) is applied to solve optimization functions.
Abstract: To realize the sharing and optimized deployment of manufacturing resources, a concept of collaborative manufacturing chain (CMC) is proposed for the manufacturing of complex products in a networked manufacturing environment. To acquire the optimal CMC, a multi-objective optimization model is developed to minimize the comprehensive cost and the whole production load with time-sequence constraints. The non-dominated sorting genetic algorithm (NSGA-II) is applied to solve the optimization functions, yielding the Pareto-optimal solution set. The technique for order preference by similarity to ideal solution (TOPSIS) approach is then used to identify the optimal compromise solution from this set. Simulation results obtained in this study indicate that the proposed model and algorithm are able to obtain satisfactory solutions.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the connectedness and path-connectedness of the solution sets for symmetric vector equilibrium problems in locally convex Hausdorff topological vector spaces under some suitable assumptions.
Abstract: In this paper, we study the connectedness and path-connectedness of the solution sets for symmetric vector equilibrium problems in locally convex Hausdorff topological vector spaces under some suitable assumptions. The results presented in this paper generalize some known results in [10, 14, 24, 32, 33].

Book ChapterDOI
Marco Terzer, Jörg Stelling
13 Sep 2009
TL;DR: This work presents and compares two approaches for the parallel enumeration of extreme rays, both based on the double description method, and introduces a born/die matrix to store intermediary results, allowing for parallelization across several iteration steps.
Abstract: Pathway analysis is a powerful tool to study metabolic reaction networks under steady state conditions. An elementary pathway constitutes a minimal set of reactions that can operate at steady state such that each reaction also proceeds in the appropriate direction. In mathematical terms, elementary pathways are the extreme rays of a polyhedral cone, the solution set of homogeneous equality and inequality constraints. Enumerating all extreme rays, given the constraints, is difficult, especially if the problem is degenerate and high dimensional. We present and compare two approaches for the parallel enumeration of extreme rays, both based on the double description method. This iterative algorithm has proven efficient especially for degenerate problems, but is difficult to parallelize due to its sequential operation. The first approach parallelizes single iteration steps individually. In the second approach, we introduce a born/die matrix to store intermediary results, allowing for parallelization across several iteration steps. We test our multicore implementations on a 16 core machine using large examples from combinatorics and biology.

Journal ArticleDOI
01 Jul 2009
TL;DR: This work decomposes a positive dimensional solution set of a polynomial system into irreducible components and reduces the expected number of homotopy continuation paths tracked, which gives a faster serial algorithm.
Abstract: Our problem is to decompose a positive dimensional solution set of a polynomial system into irreducible components. This solution set is represented by a witness set, which can be partitioned into irreducible subsets according to the action of monodromy. Our straightforward parallel version of the original monodromy breakup algorithm suffers from synchronisation issues. The new approach not only resolves these issues, but also, according to accumulated statistics, reduces the expected number of homotopy continuation paths tracked. The latter property gives a faster serial algorithm.

Journal ArticleDOI
TL;DR: In this article, the authors give sufficient conditions for the local openness/local closedness properties and the lower/upper semicontinuity properties of the solution sets of a general model which includes as special cases many generalized vector quasi-equilibrium problems with set-valued maps.
Abstract: In this paper, we give sufficient conditions for the local openness/local closedness properties and the lower/upper semicontinuity properties of the solution sets of a general model which includes as special cases many generalized vector quasi-equilibrium problems with set-valued maps. The obtained results generalize and improve several known results. An application is also given for a model which can be interpreted as a system of generalized vector quasi-equilibrium problems.

Journal ArticleDOI
TL;DR: In this article, the nonemptiness and compactness of solution sets for Stampacchia vector variational-like inequalities with generalized bifunctions defined on nonconvex sets are investigated by introducing the concepts of generalized weak cone-pseudomonotonicity and generalized (proper) cone-suboddness.
Abstract: In this paper, the nonemptiness and compactness of solution sets for Stampacchia vector variational-like inequalities (for short, SVVLIs) and Minty vector variational-like inequalities (for short, MVVLIs) with generalized bifunctions defined on nonconvex sets are investigated by introducing the concepts of generalized weak cone-pseudomonotonicity and generalized (proper) cone-suboddness. Moreover, some equivalent relations between a solution of SVVLIs and MVVLIs, and a generalized weakly efficient solution of vector optimization problems (for short, VOPs) are established under the assumptions of generalized pseudoconvexity and generalized invexity in the sense of Clarke generalized directional derivative. These results extend and improve the corresponding results of others.

Journal ArticleDOI
TL;DR: In this paper, the extremality results for the variational inequality were extended to a higher order evolution hemivariational inequality and an existence theorem of solution for the higher-order evolution hemivariational inequality was given.
Abstract: In this paper, we extend the extremality results for the variational inequality to a higher order evolution hemivariational inequality. More precisely, we give an existence theorem of solution for the higher order evolution hemivariational inequality by using the sub- and super-solution method. We prove the compactness of the solution set within an order interval formed by the sub-solution and super-solution. We also show an existence theorem of the extremal solution for the higher order evolution hemivariational inequality under consideration.

Journal ArticleDOI
TL;DR: A new one-step smoothing Newton method is proposed for solving the non-linear complementarity problem with P0-function (P0-NCP), based on a new smoothing NCP-function; it is shown that any accumulation point of the iteration sequence generated by the algorithm is a solution of the P0-NCP.

Journal ArticleDOI
TL;DR: In this paper, the authors present an algorithm for converting between solution sets in quadrilateral and standard coordinates, yielding both the speed of quadrilateral coordinates and the wider applicability of standard coordinates.
Abstract: The enumeration of normal surfaces is a crucial but very slow operation in algorithmic 3–manifold topology. At the heart of this operation is a polytope vertex enumeration in a high-dimensional space (standard coordinates). Tollefson’s Q–theory speeds up this operation by using a much smaller space (quadrilateral coordinates), at the cost of a reduced solution set that might not always be sufficient for our needs. In this paper we present algorithms for converting between solution sets in quadrilateral and standard coordinates. As a consequence we obtain a new algorithm for enumerating all standard vertex normal surfaces, yielding both the speed of quadrilateral coordinates and the wider applicability of standard coordinates. Experimentation with the software package Regina shows this new algorithm to be extremely fast in practice, improving speed for large cases by factors from thousands up to millions.

Journal ArticleDOI
TL;DR: A strict mathematical result is shown that ensures, under rather mild conditions, convergence of the current solution set to the set of Pareto-optimal solutions of bicriteria combinatorial optimization (CO) problems under uncertainty.
Abstract: We propose a general-purpose algorithm APS (Adaptive Pareto-Sampling) for determining the set of Pareto-optimal solutions of bicriteria combinatorial optimization (CO) problems under uncertainty, where the objective functions are expectations of random variables depending on a decision from a finite feasible set. APS is iterative and population-based and combines random sampling with the solution of corresponding deterministic bicriteria CO problem instances. Special attention is given to the case where the corresponding deterministic bicriteria CO problem can be formulated as a bicriteria integer linear program (ILP). In this case, well-known solution techniques such as the algorithm by Chalmet et al. can be applied for solving the deterministic subproblem. If the execution of APS is terminated after a given number of iterations, only an approximate solution is obtained in general, such that APS must be considered a metaheuristic. Nevertheless, a strict mathematical result is shown that ensures, under rather mild conditions, convergence of the current solution set to the set of Pareto-optimal solutions. A modification replacing or supporting the bicriteria ILP solver by some metaheuristic for multicriteria CO problems is discussed. As an illustration, we outline the application of the method to stochastic bicriteria knapsack problems by specializing the general framework to this particular case and by providing computational examples.

Proceedings Article
11 Jul 2009
TL;DR: This work proposes finding a representative subset of the Pareto set, measuring its quality with the Integrated Convex Preference (ICP) model originally developed in the OR community, and implements several heuristic approaches based on the Metric-LPG planner to find a good solution set according to this measure.
Abstract: In many real-world planning scenarios, the users are interested in optimizing multiple objectives (such as makespan and execution cost), but are unable to express their exact tradeoff between those objectives. When a planner encounters such partial preference models, rather than look for a single optimal plan, it needs to present the Pareto set of plans and let the user choose from them. This idea of presenting the full Pareto set is fraught with both computational and user-interface challenges. To make it practical, we propose the approach of finding a representative subset of the Pareto set. We measure the quality of this representative set using the Integrated Convex Preference (ICP) model, originally developed in the OR community. We implement several heuristic approaches based on the Metric-LPG planner to find a good solution set according to this measure. We present empirical results demonstrating the promise of our approach.