Author

# Joydeep Dutta

Other affiliations: Rensselaer Polytechnic Institute, Indian Institutes of Technology, Indian Statistical Institute

Bio: Joydeep Dutta is an academic researcher at the Indian Institute of Technology Kanpur. He has contributed to research on vector optimization and convex optimization, has an h-index of 22, and has co-authored 63 publications receiving 1263 citations. His previous affiliations include Rensselaer Polytechnic Institute and the Indian Institutes of Technology.


##### Papers



TL;DR: It is shown that global optimal solutions of the MPCC correspond to global optimal solutions of the bilevel problem provided the lower-level problem satisfies Slater's constraint qualification, and that this correspondence can fail if Slater's constraint qualification fails to hold at the lower level.

Abstract: Bilevel programming problems are often reformulated using the Karush–Kuhn–Tucker conditions for the lower-level problem, resulting in a mathematical program with complementarity constraints (MPCC). Clearly, both problems are closely related. But the answer to the question posed (whether the two problems are equivalent) is “No” even when the lower-level programming problem is a parametric convex optimization problem. This is not obvious and concerns local optimal solutions. We show that global optimal solutions of the MPCC correspond to global optimal solutions of the bilevel problem provided the lower-level problem satisfies Slater's constraint qualification. We also show by examples that this correspondence can fail if Slater's constraint qualification fails to hold at the lower level. When we consider local solutions, the relationship between the bilevel problem and its corresponding MPCC is more complicated. We also demonstrate the issues relating to a local minimum through examples.
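The KKT reformulation described above can be checked on a toy instance. The sketch below is our own illustration (the problem data is not from the paper): the lower level minimises `0.5*(y - x)**2` over `y >= 0`, Slater's condition holds, and the lower-level argmin agrees with the solution of the KKT (complementarity) system used in the MPCC reformulation.

```python
# Toy bilevel problem (our own illustration; not an example from the paper):
#   lower level:  y(x) solves  min_y 0.5 * (y - x)**2  s.t.  y >= 0
# Slater's CQ holds for the lower level (y = 1 is strictly feasible), so the
# lower-level problem may be replaced by its KKT system, giving the MPCC
# constraints:  y - x - lam = 0,  lam >= 0,  y >= 0,  lam * y = 0

def lower_level_argmin(x):
    """Direct solution of the convex lower-level problem."""
    return max(x, 0.0)

def kkt_solution(x):
    """Closed-form solution of the lower-level KKT (complementarity) system."""
    if x >= 0:
        y, lam = x, 0.0        # constraint y >= 0 inactive
    else:
        y, lam = 0.0, -x       # constraint active, multiplier lam = -x > 0
    assert abs(y - x - lam) < 1e-12 and lam >= 0 and y >= 0 and lam * y == 0
    return y, lam

for x in (-2.0, -0.5, 0.0, 1.5):
    assert abs(lower_level_argmin(x) - kkt_solution(x)[0]) < 1e-12
print("KKT system reproduces the lower-level argmin at every test point")
```

When Slater's condition fails at the lower level, the KKT system need not capture the lower-level solutions faithfully, which is in the spirit of the failure mode the abstract describes.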

211 citations


TL;DR: This work reduces a basic optimistic model in bilevel programming to a one-level framework of nondifferentiable programs formulated via the (nonsmooth) optimal value function of the parametric lower-level problem in the original model, and derives new necessary optimality conditions for bilevel programs reflecting significant phenomena that have never been observed earlier.

Abstract: The article is devoted to the study of the so-called optimistic version of bilevel programming in finite-dimensional spaces. Problems of this type are intrinsically nonsmooth (even for smooth initial data) and can be treated by using appropriate tools of modern variational analysis and generalized differentiation. Considering a basic optimistic model in bilevel programming, we reduce it to a one-level framework of nondifferentiable programs formulated via (nonsmooth) optimal value function of the parametric lower-level problem in the original model. Using advanced formulas for computing basic subgradients of value/marginal functions in variational analysis, we derive new necessary optimality conditions for bilevel programs reflecting significant phenomena that have never been observed earlier. In particular, our optimality conditions for bilevel programs do not depend on the partial derivatives with respect to parameters of the smooth objective function in the parametric lower-level problem. We present ef...

140 citations


TL;DR: The proposed KKT-proximity measure can be used as a termination condition for optimization algorithms and helps to find Lagrange multipliers corresponding to near-optimal solutions, which can be of importance to practitioners.

Abstract: Karush–Kuhn–Tucker (KKT) optimality conditions are often checked to investigate whether a solution obtained by an optimization algorithm is a likely candidate for the optimum. In this study, we report that although the KKT conditions must all be satisfied at the optimal point, the extent of violation of the KKT conditions at points arbitrarily close to the KKT point is not smooth, making the KKT conditions difficult to use directly to evaluate the performance of an optimization algorithm. This happens due to the complementary slackness condition associated with the KKT optimality conditions. To overcome this difficulty, we define modified $\epsilon$-KKT points by relaxing the complementary slackness and equilibrium equations of the KKT conditions and suggest a KKT-proximity measure, which is shown to reduce sequentially to zero as the iterates approach the KKT point. Besides the theoretical development defining the modified $\epsilon$-KKT point, we present extensive computer simulations of the proposed methodology on a set of iterates obtained through an evolutionary optimization algorithm to illustrate the working of our proposed procedure on smooth and non-smooth problems. The results indicate that the proposed KKT-proximity measure can be used as a termination condition for optimization algorithms. As a by-product, the method helps to find Lagrange multipliers corresponding to near-optimal solutions, which can be of importance to practitioners. We also provide a comparison of our KKT-proximity measure with the stopping criteria used in popular commercial software.
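To make the idea of a relaxed KKT violation measure concrete, here is a one-variable sketch in the same spirit, entirely our own construction: for f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0, we pick, in closed form, the multiplier that minimises a combined stationarity-plus-relaxed-slackness residual. The paper's actual measure is defined differently (via modified epsilon-KKT points); this toy only shows a violation measure decaying smoothly to zero as iterates approach the KKT point x = 1.

```python
# One-variable sketch of a relaxed KKT violation measure (our own toy
# problem, not the measure from the paper):
#   f(x) = (x - 2)**2  s.t.  g(x) = x - 1 <= 0;  the KKT point is x = 1.

def kkt_proximity(x):
    a = 2.0 * (x - 2.0)                 # f'(x)
    b = x - 1.0                         # g(x); note g'(x) = 1
    # Best multiplier u >= 0 for the combined residual
    #   e(u) = (f'(x) + u*g'(x))**2 + (u*g(x))**2,
    # obtained in closed form by setting de/du = 0 and projecting onto u >= 0.
    u = max(0.0, -a / (1.0 + b * b))
    return (a + u) ** 2 + (u * b) ** 2  # stationarity + relaxed slackness

for x in (0.5, 0.9, 0.99, 1.0):         # iterates approaching the KKT point
    print(f"x = {x:4}: proximity = {kkt_proximity(x):.6f}")
```

The printed values decrease monotonically to exactly zero at x = 1, illustrating why such a relaxed measure is usable as a termination criterion where the raw KKT residual is not smooth.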

93 citations


TL;DR: In this paper, the existence of Lagrange multipliers for vector optimization problems is established in the case where the ordering cone in the codomain has an empty interior.

Abstract: This paper presents some results concerning the existence of Lagrange multipliers for vector optimization problems in the case where the ordering cone in the codomain has an empty interior. The main tool for deriving our assertions is a scalarization by means of a functional introduced by Hiriart-Urruty (Math. Oper. Res. 4:79–97, 1979), the so-called oriented distance function. Moreover, we explain some applications of our results to a vector equilibrium problem, to a vector control-approximation problem and to an unconstrained vector fractional programming problem.
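The oriented distance function used for the scalarization has a simple closed form for concrete sets. The snippet below, our own specialisation for illustration, evaluates Delta_A(y) = d(y, A) - d(y, complement of A) for A the nonpositive orthant in R^n; the key property is that Delta_A(y) <= 0 exactly when y is in A, which is what makes the function usable as a scalarizer even when the ordering cone has an empty interior.

```python
import math

def oriented_distance(y):
    """Oriented distance Delta_A(y) = d(y, A) - d(y, complement of A),
    specialised (for illustration only) to A = the nonpositive orthant."""
    if any(t > 0 for t in y):                              # y lies outside A
        return math.hypot(*(max(t, 0.0) for t in y))       # = d(y, A)
    return max(y)                                          # = -d(y, complement)

# Delta_A(y) <= 0  iff  y in A: a scalarization needing no interior point
assert oriented_distance((-1.0, -2.0)) == -1.0   # strictly inside A
assert oriented_distance((0.0, -1.0)) == 0.0     # on the boundary of A
assert abs(oriented_distance((3.0, 4.0)) - 5.0) < 1e-12  # outside A
```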

69 citations


TL;DR: In this article, the notion of approximate saddle point is introduced, the relation between approximate saddle points and approximate minima is established, and necessary and sufficient conditions are obtained for the existence of approximate minima in vector optimization problems.

Abstract: Necessary and sufficient conditions are obtained for the existence of approximate minima in vector optimization problems. The notion of approximate saddle point is introduced and the relation between approximate saddle points and approximate minima is established.
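For a finite set of outcome vectors, approximate minimality can be made concrete. The following sketch uses our own toy definition with the componentwise order on R^2 (the paper's setting is more general): a point is eps-minimal if no other point improves on it by at least eps in every component; with eps = 0 this reduces to the usual Pareto-minimal points.

```python
# Toy eps-minimality on R^2 with the componentwise order (our own
# illustrative definition, not the paper's): y is eps-minimal in Y if no
# z in Y satisfies z_i <= y_i - eps in every component.

def eps_minimal(Y, eps):
    def eps_dominates(z, y):
        return all(zi <= yi - eps for zi, yi in zip(z, y))
    return [y for y in Y if not any(eps_dominates(z, y) for z in Y if z != y)]

Y = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5)]
print(eps_minimal(Y, 0.0))   # exact (Pareto) minimal points: first three
print(eps_minimal(Y, 0.6))   # relaxing by eps admits (2.5, 2.5) as well
```

Enlarging eps enlarges the set of approximate minima, mirroring the way approximate notions relax exact ones.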

59 citations

##### Cited by




TL;DR: “Multivalued Analysis” is the theory of set-valued maps (also called multifunctions) and has important applications in many different areas; there is no doubt that a modern treatise on “Nonlinear Functional Analysis” cannot afford the luxury of ignoring multivalued analysis.

Abstract: “Multivalued Analysis” is the theory of set-valued maps (also called multifunctions) and has important applications in many different areas. Multivalued analysis is a remarkable mixture of many different parts of mathematics, such as point-set topology, measure theory and nonlinear functional analysis. It is also closely related to “Nonsmooth Analysis” (Chapter 5), and in fact one of the main motivations behind the development of the theory was to provide the necessary analytical tools for the study of problems in nonsmooth analysis. It is not a coincidence that the developments of the two fields coincide chronologically and follow parallel paths. Today multivalued analysis is a mature mathematical field with its own methods, techniques and applications that range from the social and economic sciences to the biological sciences and engineering. There is no doubt that a modern treatise on “Nonlinear Functional Analysis” cannot afford the luxury of ignoring multivalued analysis. The omission of the theory of multifunctions would drastically limit the possible applications.

996 citations


TL;DR: This paper provides a comprehensive review of bilevel optimization from the basic principles to solution strategies, discusses a number of potential application problems, and reports an automated text-analysis of an extended list of papers.

Abstract: Bilevel optimization is defined as a mathematical program where an optimization problem contains another optimization problem as a constraint. These problems have received significant attention from the mathematical programming community. Only limited work exists on bilevel problems using evolutionary computation techniques; however, recently there has been an increasing interest due to the proliferation of practical applications and the potential of evolutionary algorithms in tackling these problems. This paper provides a comprehensive review of bilevel optimization from the basic principles to solution strategies, both classical and evolutionary. A number of potential application problems are also discussed. To offer the readers insights on the prominent developments in the field of bilevel optimization, we have performed an automated text-analysis of an extended list of papers published on bilevel optimization to date. This paper should motivate evolutionary computation researchers to pay more attention to this practical yet challenging area.

588 citations


01 Jan 2011

TL;DR: This chapter provides a brief introduction to the operating principles of evolutionary multi-objective optimisation (EMO) and outlines its current research and application studies.

Abstract: As the name suggests, multi-objective optimisation involves optimising a number of objectives simultaneously. The problem becomes challenging when the objectives conflict with each other, that is, when the optimal solution of one objective function differs from that of another. In the course of solving such problems, with or without the presence of constraints, these problems give rise to a set of trade-off optimal solutions, popularly known as Pareto-optimal solutions. Because of this multiplicity of solutions, evolutionary algorithms, which use a population approach in their search procedure, were proposed as a suitable way of solving these problems. Starting with parameterised procedures in the early 90s, evolutionary multi-objective optimisation (EMO) is now an established field of research and application with many dedicated texts and edited books, commercial software and numerous freely downloadable codes, a biennial conference series running successfully since 2001, special sessions and workshops held at all major evolutionary computing conferences, and full-time researchers from universities and industries all around the globe. In this chapter, we provide a brief introduction to its operating principles and outline the current research and application studies of EMO.
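The notion of Pareto-optimality that EMO algorithms target can be illustrated in a few lines of code. The sketch below shows the generic non-dominated sorting idea used by many EMO methods (e.g. NSGA-II) rather than any specific algorithm from this chapter; the population of objective vectors is made up.

```python
# Generic non-dominated sorting sketch (illustrative; population made up).

def dominates(a, b):
    """a Pareto-dominates b: no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fronts(pop):
    """Peel off successive non-dominated fronts, as in NSGA-II-style ranking."""
    fronts, remaining = [], list(pop)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

pop = [(1, 4), (2, 2), (4, 1), (3, 3), (4, 4)]
for rank, front in enumerate(nondominated_fronts(pop), start=1):
    print(f"front {rank}: {front}")
```

The first front is the set of trade-off (Pareto-optimal) solutions within the population; later fronts are successively more dominated, which is the ranking information a population-based EMO search exploits.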

564 citations