
Showing papers in "Optimization Letters in 2010"


Journal ArticleDOI
Jonathan M. Borwein1
TL;DR: The history of the subject is surveyed and why maximal monotone operators are both interesting and important is explained, with a description of the remarkable progress made during the past decade.
Abstract: Maximal monotone operator theory is about to turn (or just has turned) 50. I intend to briefly survey the history of the subject. I shall try to explain why maximal monotone operators are both interesting and important—culminating with a description of the remarkable progress made during the past decade.

90 citations


Journal ArticleDOI
TL;DR: In this note, a system of absolute value equations (AVEs) is reformulated as a standard linear complementarity problem (LCP) without any assumption, and existence and convexity results for the solution set of the AVE are proposed.
Abstract: In this note, we reformulate a system of absolute value equations (AVEs) as a standard linear complementarity problem (LCP) without any assumption. Utilizing some known results for the LCP, existence and convexity results for the solution set of the AVE are proposed.
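The variable-splitting identity underlying such AVE-to-LCP reformulations can be checked numerically. The sketch below is illustrative only and does not reproduce the note's exact construction; the function name `split` is ours.

```python
# Variable splitting behind AVE -> LCP reformulations (illustrative sketch;
# the note's exact construction may differ): write x = u - v with
# u, v >= 0 and u^T v = 0, so that |x| = u + v componentwise.
def split(x):
    u = [max(xi, 0.0) for xi in x]   # positive part of x
    v = [max(-xi, 0.0) for xi in x]  # negative part of x
    return u, v

x = [1.5, -2.0, 0.0, 3.0]
u, v = split(x)

assert all(ui - vi == xi for ui, vi, xi in zip(u, v, x))       # x = u - v
assert all(ui + vi == abs(xi) for ui, vi, xi in zip(u, v, x))  # |x| = u + v
assert sum(ui * vi for ui, vi in zip(u, v)) == 0.0             # u^T v = 0
```

With this split, an equation involving |x| becomes linear in (u, v) subject to the complementarity condition u^T v = 0, which is the LCP-shaped structure the note exploits.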

75 citations


Journal ArticleDOI
TL;DR: A proximal point algorithm (PPA) for maximal monotone operators with appropriate regularization parameters is considered and a strong convergence result is stated and proved under the general condition that the error sequence tends to zero in norm.
Abstract: In this paper a proximal point algorithm (PPA) for maximal monotone operators with appropriate regularization parameters is considered. A strong convergence result for PPA is stated and proved under the general condition that the error sequence tends to zero in norm. Note that Rockafellar (SIAM J Control Optim 14:877–898, 1976) assumed summability for the error sequence to derive weak convergence of PPA in its initial form, and this restrictive condition on errors has been extensively used so far for different versions of PPA. Thus this Note provides a solution to a long standing open problem and in particular offers new possibilities towards the approximation of the minimum points of convex functionals.
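As a toy illustration of the inexact proximal point iteration discussed above — with an error sequence that tends to zero in norm but is not summable — one can run PPA on a one-dimensional quadratic whose proximal map has a closed form. The objective, regularization parameter, and error schedule below are our own choices, not the paper's.

```python
# Inexact proximal point iteration x_{k+1} = prox_{lam*f}(x_k) + e_k on the
# toy objective f(x) = (x - 3)^2 / 2, whose proximal map has the closed form
# prox_{lam*f}(y) = (y + 3*lam) / (1 + lam).  The error sequence e_k tends
# to zero but is NOT summable, echoing the relaxed condition in the abstract.
def prox(y, lam):
    return (y + 3.0 * lam) / (1.0 + lam)

x, lam = 10.0, 1.0
for k in range(1, 200):
    e_k = 0.5 / k          # e_k -> 0, but sum over k diverges
    x = prox(x, lam) + e_k

print(f"final iterate {x:.3f}")  # approaches the minimizer 3
```

Despite the non-summable errors, the iterates settle near the minimizer, which is the qualitative behavior the strong convergence result guarantees in the exact setting of the paper.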

66 citations


Journal ArticleDOI
TL;DR: It is shown that for any representation of K that satisfies a mild nondegeneracy assumption, every minimizer is a Karush-Kuhn-Tucker (KKT) point and conversely every KKT point is a minimizer.
Abstract: We consider the convex optimization problem $${\min_{\mathbf{x}} \{f(\mathbf{x}): g_j(\mathbf{x})\leq 0, j=1,\ldots,m\}}$$ where f is convex, the feasible set $${\mathbf{K}}$$ is convex and Slater’s condition holds, but the functions g j ’s are not necessarily convex. We show that for any representation of $${\mathbf{K}}$$ that satisfies a mild nondegeneracy assumption, every minimizer is a Karush-Kuhn-Tucker (KKT) point and conversely every KKT point is a minimizer. That is, the KKT optimality conditions are necessary and sufficient as in convex programming where one assumes that the g j ’s are convex. So in convex optimization, and as far as one is concerned with KKT points, what really matters is the geometry of $${\mathbf{K}}$$ and not so much its representation.

55 citations


Journal ArticleDOI
TL;DR: A branch and bound algorithm using simplicial partitions and a combination of Lipschitz bounds is investigated experimentally; similar results may be expected for other branch and bound algorithms.
Abstract: Speed and memory requirements of branch and bound algorithms depend on the selection strategy of which candidate node to process next. The goal of this paper is to experimentally investigate this influence on the performance of sequential and parallel branch and bound algorithms. The experiments have been performed solving a number of multidimensional test problems for global optimization. A branch and bound algorithm using simplicial partitions and a combination of Lipschitz bounds has been investigated. Similar results may be expected for other branch and bound algorithms.
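A minimal sketch of how the selection strategy enters a Lipschitz branch and bound loop follows; this is a one-dimensional interval version, not the paper's simplicial algorithm, and all names and parameters are our own.

```python
import heapq

# 1-D Lipschitz branch and bound (illustrative sketch).  Intervals are
# processed either best-first (smallest lower bound first, via a heap) or
# depth-first (a stack); the selection strategy changes the node count.
def bnb(f, a, b, L, strategy="best", eps=1e-3):
    best_val = min(f(a), f(b))                         # incumbent value
    nodes = [(f((a + b) / 2) - L * (b - a) / 2, a, b)]  # (lower bound, a, b)
    explored = 0
    while nodes:
        if strategy == "best":
            lb, a0, b0 = heapq.heappop(nodes)  # smallest lower bound first
        else:
            lb, a0, b0 = nodes.pop()           # most recent node first
        explored += 1
        if lb >= best_val - eps:
            continue  # prune: this node cannot improve the incumbent
        m = (a0 + b0) / 2
        best_val = min(best_val, f(m))
        for lo, hi in ((a0, m), (m, b0)):
            c = (lo + hi) / 2
            child = (f(c) - L * (hi - lo) / 2, lo, hi)  # Lipschitz bound
            if strategy == "best":
                heapq.heappush(nodes, child)
            else:
                nodes.append(child)
    return best_val, explored

f = lambda x: (x - 0.7) ** 2
v_best, n_best = bnb(f, 0.0, 1.0, L=1.4, strategy="best")
v_dfs, n_dfs = bnb(f, 0.0, 1.0, L=1.4, strategy="depth")
print(n_best, n_dfs)  # explored-node counts under the two strategies
```

Both strategies reach the same minimum within the tolerance; the number of nodes explored (and hence time and memory) is what the selection strategy affects, which is the effect the paper measures.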

55 citations


Journal ArticleDOI
TL;DR: This paper proposes an approach to solve the two problems jointly, making use of a biased random-key genetic algorithm for the optimization of transportation network performance by strategically allocating tolls on some of the links of the road network.
Abstract: One of the main goals in transportation planning is to achieve solutions for two classical problems, the traffic assignment and toll pricing problems. The traffic assignment problem aims to minimize total travel delay among all travelers. Based on data derived from the first problem, the toll pricing problem determines the set of tolls and corresponding tariffs that would collectively benefit all travelers and would lead to a user equilibrium solution. Obtaining high-quality solutions for this framework is a challenge for large networks. In this paper, we propose an approach to solve the two problems jointly, making use of a biased random-key genetic algorithm for the optimization of transportation network performance by strategically allocating tolls on some of the links of the road network. Since a transportation network may have thousands of intersections and hundreds of road segments, our algorithm takes advantage of mechanisms for speeding up shortest-path algorithms.
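The core moves of a biased random-key genetic algorithm (BRKGA) can be sketched as follows. The decoder and fitness are stand-ins — a real implementation would evaluate tolls through a traffic assignment — so every name and constant here is hypothetical.

```python
import random

# BRKGA sketch on a toy fitness.  A chromosome is a vector of random keys
# in [0, 1]; here each key is decoded as the toll level on one link.
random.seed(1)
N_LINKS, POP, ELITE, RHO = 5, 20, 4, 0.7

def decode(keys, max_toll=10.0):
    return [k * max_toll for k in keys]  # key -> toll on each link

def fitness(keys):
    # stand-in objective: prefer tolls near 2.0 on every link
    return -sum((t - 2.0) ** 2 for t in decode(keys))

def biased_crossover(elite_parent, other_parent):
    # inherit each key from the elite parent with probability RHO > 0.5
    return [e if random.random() < RHO else o
            for e, o in zip(elite_parent, other_parent)]

pop = [[random.random() for _ in range(N_LINKS)] for _ in range(POP)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:ELITE]                   # elites survive unchanged
    children = [biased_crossover(random.choice(elite),
                                 random.choice(pop[ELITE:]))
                for _ in range(POP - ELITE - 2)]
    mutants = [[random.random() for _ in range(N_LINKS)] for _ in range(2)]
    pop = elite + children + mutants

best = max(pop, key=fitness)
print(f"best fitness {fitness(best):.3f}")
```

The "biased" part is the elite-favoring crossover probability RHO; the random-key encoding keeps every chromosome feasible by construction, which is why it pairs well with problem-specific decoders such as the toll allocation in the paper.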

42 citations


Journal ArticleDOI
TL;DR: Without any convexity assumptions, necessary and sufficient optimality conditions are obtained for weakly efficient solutions of set-valued optimization problems by employing the generalized higher-order derivatives.

Abstract: In this paper, generalized higher-order contingent (adjacent) derivatives of set-valued maps are introduced and some of their properties are discussed. Without any convexity assumptions, necessary and sufficient optimality conditions are obtained for weakly efficient solutions of set-valued optimization problems by employing the generalized higher-order derivatives.

31 citations


Journal ArticleDOI
TL;DR: Developments on dominating sets, connected dominating sets, and their variations, motivated by applications in wireless networks and social networks, are surveyed, with a focus on wireless networking, domination, and packing problems.
Abstract: Recently, many papers have been published on dominating sets, connected dominating sets and their variations, motivated by various applications in wireless networks and social networks. In this article, we survey those developments for wireless networking, domination, and packing problems.

27 citations


Journal ArticleDOI
TL;DR: The implication is interpreted from a topological point of view, showing that the Minkowski sum of the lifted feasible set and the lifted recession cone gives exactly the closure of the former.
Abstract: In an important paper, Burer (Math. Program Ser. A 120:479–495, 2009) recently showed how to reformulate general mixed-binary quadratic optimization problems (QPs) into copositive programs where a linear functional is minimized over a linearly constrained subset of the cone of completely positive matrices. In this note we interpret the implication from a topological point of view, showing that the Minkowski sum of the lifted feasible set and the lifted recession cone gives exactly the closure of the former. We also discuss why feasibility of the copositive program implies feasibility of the original mixed-binary QP, which can be derived from the arguments in Burer (Math. Program Ser. A 120:479–495, 2009) without any further condition.

27 citations


Journal ArticleDOI
TL;DR: Weak, strong and converse duality theorems are established under second-order K–F-convexity/K–η-bonvexity assumptions, and a self duality theorem is obtained by assuming the functions involved to be skew-symmetric.
Abstract: In this paper, a pair of Mond–Weir type nondifferentiable multiobjective second-order symmetric dual programs over arbitrary cones is formulated. Weak, strong and converse duality theorems are established under second-order K–F-convexity/K–η-bonvexity assumptions. A self duality theorem is also obtained by assuming the functions involved to be skew-symmetric.

26 citations


Journal ArticleDOI
TL;DR: The hardness of this problem is proved, including an inapproximability result, and an approximation algorithm together with an efficient heuristic is presented that can yield a routing backbone for broadcast protocols.
Abstract: We study the following problem: Given a weighted graph G = (V, E, w) with \({w: E \rightarrow \mathbb{Z}^+}\) , the dominating tree (DT) problem asks us to find a minimum total edge weight tree T such that for every \({v \in V}\) , v is either in T or adjacent to a vertex in T. To the best of our knowledge, this problem has not been addressed in the literature. Solving the DT problem can yield a routing backbone for broadcast protocols since (1) each node does not have to construct its own broadcast tree, (2) the virtual backbone can be utilized to reduce the message overhead, and (3) the weight of the backbone, representing the energy consumption, is minimized. We prove the hardness of this problem, including an inapproximability result, and present an approximation algorithm together with an efficient heuristic. Finally, we verify the effectiveness of our proposal through simulation.
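The domination requirement that defines a feasible tree T can be stated as a one-line check; this sketch is illustrative and only verifies domination (tree-ness of T is assumed rather than checked).

```python
# Feasibility check for the dominating tree problem: every vertex is in T
# or adjacent to some vertex of T.  adj maps each vertex to its neighbor set.
def dominates(adj, tree_nodes):
    t = set(tree_nodes)
    return all(v in t or adj[v] & t for v in adj)

# star graph with center 0 and leaves 1..3
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
assert dominates(adj, {0})       # the center alone dominates everything
assert not dominates(adj, {1})   # a single leaf leaves vertices uncovered
```

The optimization problem then asks for such a set of tree nodes whose connecting edges have minimum total weight, which the paper shows is hard even to approximate.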

Journal ArticleDOI
TL;DR: New bounds are established for ratios involving the volume of the unit ball in $${\mathbb{R}^{n}}$$.
Abstract: The aim of this paper is to establish new bounds for ratios involving the volume of the unit ball in \({\mathbb{R}^{n}}\).

Journal ArticleDOI
TL;DR: The equivalent relationship between the generalized Tykhonov well-posedness of the system of vector quasi-equilibrium problems and that of the minimization problems is established.
Abstract: In this paper, the notion of generalized Tykhonov well-posedness for systems of vector quasi-equilibrium problems is investigated. By using the gap functions of the system of vector quasi-equilibrium problems, we establish the equivalent relationship between the generalized Tykhonov well-posedness of the system of vector quasi-equilibrium problems and that of the minimization problems. We also present some metric characterizations for the generalized Tykhonov well-posedness of the system of vector quasi-equilibrium problems. The results in this paper are new and extend some known results in the literature.

Journal ArticleDOI
TL;DR: The fuzzy Delphi method (FDM) is used to extract the development characteristics and demands of a regional sector, and the analytic network process (ANP) is used to create a quantitative evaluation model that converts the abstract concept of sustainability into an understandable network model for evaluating different development projects.
Abstract: When sustainable development becomes the global trend on residential environment, the transformation of abstract concepts into practicable implementations is necessary. In consequence, an objective evaluation model on regional property needs to be established and carried out to examine the effectiveness and performance of local action. Because of the limitations of the small area and dense population in Taiwan, the development trend of the residential environment is toward multistory and high-density housing communities. This paper discusses the development characteristics and demands of a regional sector, and the fuzzy Delphi method (FDM) is used to extract the factors. To consider possible interdependencies among dimensions and among selected factors, the analytic network process (ANP) is used to create a quantitative evaluation model to convert the abstract concept of sustainability into an understandable network model for evaluating different development projects. Our objective and practical evaluation model can be used in related living environment planning fields, and it can be tailored and applied in other management studies.

Journal ArticleDOI
TL;DR: This study proposes the development of classification models that assign the banking sectors of various countries in three classes, labelled “low stability”, “medium stability”, and “high stability”, using a sample of 114 banking sectors.
Abstract: Banking crises can be damaging for the economy, and as the recent experience has shown, nowadays they can spread rapidly across the globe with contagious effects. Therefore, the assessment of the stability of a country’s banking sector is important for regulators, depositors, investors and the general public. In the present study, we propose the development of classification models that assign the banking sectors of various countries in three classes, labelled “low stability”, “medium stability”, and “high stability”. The models are developed using three multicriteria decision aid techniques, which are well-suited to ordinal classification problems. We use a sample of 114 banking sectors (i.e., countries), and a set of criteria that includes indicators of the macroeconomic, institutional and regulatory environment, as well as basic characteristics of the banking and financial sector. The models are developed and tested using a tenfold cross-validation approach and they are benchmarked against models developed with discriminant analysis and logistic regression.
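The tenfold cross-validation protocol mentioned above can be sketched with a plain index splitter; the models themselves (multicriteria, discriminant analysis, logistic regression) are not reproduced here.

```python
# Tenfold cross-validation as a protocol sketch: each observation appears
# in the test fold exactly once, and train/test never overlap.
def k_fold_indices(n, k=10):
    folds = [list(range(i, n, k)) for i in range(k)]  # round-robin split
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

n = 114  # the study's sample of 114 banking sectors
seen = []
for train, test in k_fold_indices(n, k=10):
    assert set(train).isdisjoint(test)  # no leakage between folds
    seen += test
assert sorted(seen) == list(range(n))   # every sector tested exactly once
```

Any of the candidate classifiers would be fit on each `train` index set and scored on the corresponding `test` set, and the ten scores averaged for the benchmark comparison.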

Journal ArticleDOI
TL;DR: The equilibrium conditions governing the model are provided and the evolutionary variational inequality formulation is derived, and the existence of solutions is investigated and a numerical example is presented.
Abstract: In this paper we develop the time-dependent pollution control problem in which different countries aim to determine the optimal investment allocation in environmental projects and the tolerable pollutant emissions, so as to maximize their welfare. We provide the equilibrium conditions governing the model and derive the evolutionary variational inequality formulation. The existence of solutions is investigated and a numerical example is also presented.

Journal ArticleDOI
TL;DR: Several classical approximation results for independent sets in UDGs are extended to co-k-plexes, and a recent conjecture on the approximability of co- k-plex coloring inUDGs is settled.
Abstract: This article studies a degree-bounded generalization of independent sets called co-k-plexes. Constant factor approximation algorithms are developed for the maximum co-k-plex problem on unit-disk graphs. The related problem of minimum co-k-plex coloring that generalizes classical vertex coloring is also studied in the context of unit-disk graphs. We extend several classical approximation results for independent sets in UDGs to co-k-plexes, and settle a recent conjecture on the approximability of co-k-plex coloring in UDGs.
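A co-k-plex relaxes independence by allowing each vertex at most k − 1 neighbors inside the set (so a co-1-plex is an independent set). A minimal membership check, with names of our own choosing:

```python
# A co-k-plex is a vertex subset whose induced subgraph has maximum degree
# at most k - 1.  adj maps each vertex to its neighbor set.
def is_co_k_plex(adj, subset, k):
    s = set(subset)
    return all(len(adj[v] & s) <= k - 1 for v in s)

# path graph 0 - 1 - 2 - 3
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
assert is_co_k_plex(adj, {0, 2}, 1)        # co-1-plex = independent set
assert not is_co_k_plex(adj, {0, 1}, 1)    # adjacent pair is not independent
assert is_co_k_plex(adj, {0, 1, 2, 3}, 3)  # max induced degree 2 <= k - 1
```

The maximum co-k-plex problem asks for the largest such subset, and co-k-plex coloring partitions the vertices into as few co-k-plexes as possible, generalizing classical vertex coloring.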

Journal ArticleDOI
TL;DR: Second-order optimality conditions of Fritz John and Karush–Kuhn–Tucker types for the problem with inequality constraints in nonsmooth settings are obtained using a new second-order directional derivative of Hadamard type.
Abstract: In this paper we obtain second-order optimality conditions of Fritz John and Karush–Kuhn–Tucker types for the problem with inequality constraints in nonsmooth settings using a new second-order directional derivative of Hadamard type. We derive necessary and sufficient conditions for a point \({\bar x}\) to be a local minimizer and an isolated local one of order two. In the primal necessary conditions we suppose that all functions are locally Lipschitz, but in all other conditions the data are locally Lipschitz, regular in the sense of Clarke, Gâteaux differentiable at \({\bar x}\), and the constraint functions are second-order Hadamard differentiable at \({\bar x}\) in every direction. It is shown by an example that regularity and Gâteaux differentiability cannot be removed from the sufficient conditions.

Journal ArticleDOI
TL;DR: The equivalence among the general quasi-variational inequality, implicit fixed point problems, and the Wiener–Hopf equations is established, and this equivalent formulation is used to suggest and analyze some iterative algorithms for solving the general quasi-variational inequality.
Abstract: In this paper, we introduce a new class of variational inequalities, which is called the general quasi-variational inequality. We establish the equivalence among the general quasi-variational inequality, implicit fixed point problems, and the Wiener–Hopf equations. We use this equivalent formulation to discuss the existence of a solution of the general quasi-variational inequality. This equivalent formulation is also used to suggest and analyze some iterative algorithms for solving the general quasi-variational inequality. We discuss the convergence analysis of these iterative methods as well. Several special cases are also discussed.

Journal ArticleDOI
TL;DR: This paper designs the first polynomial time approximation scheme for the d-hop connected dominating set (d-CDS) problem in growth-bounded graphs, a general class of graphs that includes unit disk graphs, unit ball graphs, etc.
Abstract: In this paper, we design the first polynomial time approximation scheme for the d-hop connected dominating set (d-CDS) problem in growth-bounded graphs, a general class of graphs that includes unit disk graphs, unit ball graphs, etc. Such graphs can represent most types of existing wireless networks. Our algorithm does not need a geometric representation (e.g., the positions of each node in the plane) beforehand. The main strategy is clustering partition. We select a d-CDS for each subset separately, take the union, and then connect the induced graph of this set. We also provide detailed performance and complexity analysis.

Journal ArticleDOI
TL;DR: A general labeling method, as well as several implementations, are presented for finding shortest paths and detecting negative cycles in networks for which arc costs can vary with time, each arc has a transit time, and parking with a corresponding time-varying cost is allowed at the nodes.
Abstract: This paper concerns the problem of finding shortest paths from one node to all other nodes in networks for which arc costs can vary with time, each arc has a transit time, and parking with a corresponding time-varying cost is allowed at the nodes. The transit times can also take negative values. A general labeling method, as well as several implementations, are presented for finding shortest paths and detecting negative cycles under the assumption that arc traversal costs are piecewise linear and node parking costs are piecewise constant.
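The static core of such label-correcting methods — shortest paths with negative arc costs plus negative-cycle detection — can be sketched with Bellman-Ford; the paper's time-varying costs and node parking are omitted from this illustration.

```python
# Bellman-Ford with negative-cycle detection: the static skeleton of a
# labeling method (time-varying costs and parking are not modeled here).
def bellman_ford(n, arcs, src):
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0.0
    for _ in range(n - 1):                 # n - 1 relaxation passes
        for u, v, w in arcs:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one more pass: any further improvement implies a negative cycle
    has_neg_cycle = any(dist[u] + w < dist[v] for u, v, w in arcs)
    return dist, has_neg_cycle

arcs = [(0, 1, 4.0), (0, 2, 2.0), (2, 1, -3.0), (1, 3, 1.0)]
dist, neg = bellman_ford(4, arcs, 0)
print(dist, neg)  # [0.0, -1.0, 2.0, 0.0] False

arcs_cyc = [(0, 1, 1.0), (1, 0, -2.0)]    # cycle of total cost -1
_, neg2 = bellman_ford(2, arcs_cyc, 0)
print(neg2)  # True
```

In the paper's setting the labels carry arrival times as well as costs, and the piecewise-linear cost structure keeps each label finitely representable.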

Journal ArticleDOI
S. J. Li1, P. Zhao1
TL;DR: A dual scheme for a mixed vector equilibrium problem is introduced by using the method of the Fenchel conjugate function, and the solutions of MVEP and DMVEP are shown to be related to the saddle points of an associated Lagrangian mapping.
Abstract: In this paper, a dual scheme for a mixed vector equilibrium problem is introduced by using the method of the Fenchel conjugate function. Under the stabilization condition, the relationships between the solutions of the mixed vector equilibrium problem (MVEP) and the dual mixed vector equilibrium problem (DMVEP) are discussed. Moreover, under the same condition, the solutions of MVEP and DMVEP are shown to be related to the saddle points of an associated Lagrangian mapping. As applications, this dual scheme is applied to vector convex optimization and vector variational inequalities.

Journal ArticleDOI
TL;DR: In this paper, a nonlinear integer programming model is used to determine the optimal solution under the extended basic period and power-of-two policy and a small-step search algorithm is presented to find a solution which approaches optimal when the step size approaches zero.
Abstract: The economic lot scheduling problem schedules the production of several different products on a single machine over an infinite planning horizon. In this paper, a nonlinear integer programming model is used to determine the optimal solution under the extended basic period and power-of-two policy. A small-step search algorithm is presented to find a solution which approaches the optimum as the step size approaches zero, where a divide-and-conquer procedure is introduced to speed up the search. Further, a faster heuristic algorithm is proposed which finds the same solutions in almost all the randomly generated sample cases.
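Under a power-of-two policy, each product's cycle time is restricted to a power-of-two multiple of a basic period B. A small hypothetical helper illustrates the restriction; the paper's actual search over B and the multipliers is not reproduced here.

```python
import math

# Power-of-two policy sketch (illustrative): each product's cycle time is
# T_i = 2**m_i * B with integer m_i >= 0, so an unconstrained cycle time T
# is rounded to the nearest admissible value on a log-2 scale.
def round_to_power_of_two(T, B):
    m = max(0, round(math.log2(T / B)))  # nearest nonnegative exponent
    return (2 ** m) * B

assert round_to_power_of_two(3.9, 1.0) == 4.0  # 2^2 * B
assert round_to_power_of_two(1.2, 1.0) == 1.0  # 2^0 * B
```

Restricting cycle times this way makes schedules easy to interleave on a single machine, at a provably small cost increase relative to unconstrained cycle times.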

Journal ArticleDOI
TL;DR: This paper designs an algorithm based on a multistart randomized scheme which exploits an adapted extension of the augmenting path algorithm to produce starting solutions for the authors' problem, which are then enhanced by means of an iterative improvement routine.
Abstract: In this paper, we propose a fast heuristic algorithm for the maximum concurrent k-splittable flow problem. In such an optimization problem, one is concerned with maximizing the routable demand fraction across a capacitated network, given a set of commodities and a constant k expressing the maximum number of paths that can be used to route flows for each commodity. Starting from known results on the k-splittable flow problem, we design an algorithm based on a multistart randomized scheme which exploits an adapted extension of the augmenting path algorithm to produce starting solutions for our problem, which are then enhanced by means of an iterative improvement routine. The proposed algorithm has been tested on several sets of instances, and the results of an extensive experimental analysis are provided in association with a comparison to the results obtained by a different heuristic approach and an exact algorithm based on branch and bound rules.
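The augmenting path building block the heuristic adapts can be sketched in its classical single-commodity form (plain Edmonds-Karp); the k-splittable, multicommodity adaptation from the paper is not shown.

```python
from collections import deque

# Classical augmenting-path max flow (Edmonds-Karp sketch): repeatedly find
# a shortest augmenting path by BFS in the residual network and push the
# bottleneck capacity along it.  cap is a dense capacity matrix.
def max_flow(cap, s, t):
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total                      # no augmenting path remains
        path, v = [], t                       # recover the path s -> t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] - flow[u][v] for u, v in path)  # bottleneck
        for u, v in path:
            flow[u][v] += aug
            flow[v][u] -= aug                 # residual bookkeeping
        total += aug

cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
mf = max_flow(cap, 0, 3)
print(mf)  # 4
```

Each augmenting path found this way is a candidate routing path; the paper's extension bounds the number of distinct paths per commodity by k, which is the defining constraint of k-splittable flows.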

Journal ArticleDOI
TL;DR: The performance of some EAs is evaluated on randomly generated multimodal functions and compared with published results on classical benchmarks, revealing a considerable drop in performance.
Abstract: Previous research has shown that the excellent performance of some evolutionary algorithms (EAs) depends strongly on the existence of certain properties in the structure of the objective function. Unlike classical benchmark functions, randomly generated multimodal functions do not have any of these properties. An improved function generator is utilized to generate six benchmarks with random structure. The performance of some EAs is evaluated on these functions and compared with results on classical benchmarks available in the literature. The comparison reveals a considerable drop in performance, even though some of these methods have all possible invariances. This demonstrates that, in addition to these properties, classical benchmarks have special patterns which may be exploited by EAs. Unlike the properties, these patterns are not eliminated under linear transformations of the coordinates or the objective function; hence, limitations should be considered when generalizing the performance of EAs on classical benchmarks to practical problems, where these properties or patterns do not necessarily exist.

Journal ArticleDOI
TL;DR: One of the two strategies is naturally suggested by the convergence theory of the PR method and has been devised to reduce the initial values of the duality gap and the infeasibility measure, with the objective of decreasing the number of PR iterations.
Abstract: We present two strategies for choosing a “hot” starting-point in the context of an infeasible potential reduction (PR) method for convex quadratic programming. The basic idea of both strategies is to select a preliminary point and to suitably scale it in order to obtain a starting point such that its nonnegative entries are sufficiently bounded away from zero, and the ratio between the duality gap and a suitable measure of the infeasibility is small. One of the two strategies is naturally suggested by the convergence theory of the PR method; the other has been devised to reduce the initial values of the duality gap and the infeasibility measure, with the objective of decreasing the number of PR iterations. Numerical experiments show that the second strategy generally performs better than the first, and both outperform a starting-point strategy based on the affine-scaling step.

Journal ArticleDOI
TL;DR: New classes of second order (F, α, ρ, d)-V-type I functions for a nondifferentiable multiobjective programming problem are introduced, and weak, strong and strict converse duality theorems are studied.
Abstract: In this paper, new classes of second order (F, α, ρ, d)-V-type I functions for a nondifferentiable multiobjective programming problem are introduced. Furthermore, second order Mangasarian type and general Mond-Weir type dual problems are formulated for a nondifferentiable multiobjective programming problem. Weak, strong and strict converse duality theorems are studied in both cases, assuming the involved functions to be second order (F, α, ρ, d)-V-type I.

Journal ArticleDOI
TL;DR: This work considers some eigenvalue problems for the p-Laplacian with variable domain, calculates the first variation of the eigenvalue functional, and investigates the behavior of the eigenvalues when the domain varies.
Abstract: In this work we consider some eigenvalue problems for the p-Laplacian with variable domain. The eigenvalues of this operator are treated as a functional of the domain. We calculate the first variation of this functional and, using the obtained formula, investigate the behavior of the eigenvalues when the domain varies. We then consider a shape optimization problem for the first eigenvalue, prove the necessary condition of optimality with respect to the domain, and offer an algorithm for the numerical solution of this problem.

Journal ArticleDOI
Zi Xu1
TL;DR: A new combined direction stochastic approximation algorithm is proposed which employs a weighted combination of the current noisy negative gradient and some former noisy negative gradient as the iterative direction, and it outperforms the classical RM algorithm.
Abstract: The stochastic approximation problem is to find some root or minimum of a nonlinear function in the presence of noisy measurements. The classical algorithm for stochastic approximation problem is the Robbins-Monro (RM) algorithm, which uses the noisy negative gradient direction as the iterative direction. In order to accelerate the classical RM algorithm, this paper gives a new combined direction stochastic approximation algorithm which employs a weighted combination of the current noisy negative gradient and some former noisy negative gradient as iterative direction. Both the almost sure convergence and the asymptotic rate of convergence of the new algorithm are established. Numerical experiments show that the new algorithm outperforms the classical RM algorithm.
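The classical Robbins-Monro step and a combined-direction variant can be compared on a one-dimensional toy problem; the averaging weight below is our own illustrative choice, not necessarily the paper's combination.

```python
import random

# Robbins-Monro iteration x_{k+1} = x_k - a_k * d_k on noisy gradients of
# f(x) = (x - 5)^2 / 2.  With beta = 0 the direction d_k is the current
# noisy gradient (classical RM); with beta > 0 it is a weighted combination
# of the current and previous noisy gradients (illustrative weighting).
random.seed(0)

def noisy_grad(x):
    return (x - 5.0) + random.gauss(0.0, 0.5)  # gradient plus noise

def rm(x0, steps=2000, beta=0.0):
    x, prev = x0, 0.0
    for k in range(1, steps + 1):
        g = noisy_grad(x)
        d = (1.0 - beta) * g + beta * prev  # beta = 0 recovers classical RM
        x -= (1.0 / k) * d                  # diminishing step size a_k = 1/k
        prev = g
    return x

x_rm = rm(0.0, beta=0.0)
x_cd = rm(0.0, beta=0.5)
print(f"classical RM: {x_rm:.3f}, combined direction: {x_cd:.3f}")
```

Both variants settle near the root x = 5; the paper's contribution is showing that a suitable combination preserves almost sure convergence while improving the convergence rate in practice.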

Journal ArticleDOI
TL;DR: A residual existence theorem for linear equations is proved and unique solvability of the absolute value equation Ax + B|x| = b is given.
Abstract: A residual existence theorem for linear equations is proved: if \({A \in \mathbb{R}^{m\times n}}\), \({b \in \mathbb{R}^{m}}\) and if X is a finite subset of \({\mathbb{R}^{n}}\) satisfying \({{\rm max}_{x \in X}p^T(Ax-b) \geq 0}\) for each \({p \in \mathbb{R}^{m}}\), then the system of linear equations Ax = b has a solution in the convex hull of X. An application of this result to unique solvability of the absolute value equation Ax + B|x| = b is given.