Author

R. T. Rockafellar

Other affiliations: University of Grenoble, University of Florida, Bowdoin College
Bio: R. T. Rockafellar is an academic researcher from the University of Washington. The author has contributed to research in topics: Convex analysis & Subderivative. The author has an h-index of 59 and has co-authored 142 publications receiving 19,132 citations. Previous affiliations of R. T. Rockafellar include University of Grenoble & University of Florida.


Papers
Journal ArticleDOI
TL;DR: In this paper, a new approach to optimizing or hedging a portfolio of financial instruments to reduce risk is presented and tested on applications. It focuses on minimizing Conditional Value-at-Risk (CVaR) rather than minimizing Value-at-Risk (VaR), but portfolios with low CVaR necessarily have low VaR as well.
Abstract: A new approach to optimizing or hedging a portfolio of financial instruments to reduce risk is presented and tested on applications. It focuses on minimizing Conditional Value-at-Risk (CVaR) rather than minimizing Value-at-Risk (VaR), but portfolios with low CVaR necessarily have low VaR as well. CVaR, also called Mean Excess Loss, Mean Shortfall, or Tail VaR, is in any case considered to be a more consistent measure of risk than VaR. Central to the new approach is a technique for portfolio optimization which calculates VaR and optimizes CVaR simultaneously. This technique is suitable for use by investment companies, brokerage firms, mutual funds, and any business that evaluates risks. It can be combined with analytical or scenario-based methods to optimize portfolios with large numbers of instruments, in which case the calculations often come down to linear programming or nonsmooth programming. The methodology can also be applied to the optimization of percentiles in contexts outside of finance.

5,622 citations
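The reduction to linear programming mentioned in the abstract can be made concrete. Below is a minimal Python sketch of the scenario-based CVaR minimization the paper describes, in which an auxiliary variable alpha and one slack per scenario turn the problem into an LP; the simulated return matrix, the long-only constraint, and all parameter values are illustrative assumptions, not data from the paper.

import numpy as np
from scipy.optimize import linprog

# Hypothetical data: N return scenarios for n assets (assumption, for illustration).
rng = np.random.default_rng(0)
N, n = 1000, 4
R = rng.normal(0.001, 0.02, size=(N, n))   # scenario returns, one row per scenario
beta = 0.95                                 # CVaR confidence level

# Decision vector z = [x (n portfolio weights), alpha (VaR estimate), u (N tail excesses)].
# Minimize  alpha + 1/((1-beta)N) * sum(u)  subject to  u_i >= -R[i]@x - alpha,  u_i >= 0.
c = np.concatenate([np.zeros(n), [1.0], np.full(N, 1.0 / ((1.0 - beta) * N))])
A_ub = np.hstack([-R, -np.ones((N, 1)), -np.eye(N)])   # -R[i]@x - alpha - u_i <= 0
b_ub = np.zeros(N)
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(N)]).reshape(1, -1)  # fully invested
b_eq = [1.0]
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * N  # long-only (assumption)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
weights, var_est, cvar = res.x[:n], res.x[n], res.fun

At the optimum, alpha recovers the VaR while the objective value is the CVaR of the optimal portfolio, which is the simultaneous calculation the abstract refers to.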

Journal ArticleDOI
TL;DR: This paper develops for the first time a rigorous algorithmic procedure for determining a robust decision policy in response to any weighting of the scenarios.
Abstract: A common approach to coping with multiperiod optimization problems under uncertainty, where statistical information is not really enough to support a stochastic programming model, has been to set up and analyze a number of scenarios. The aim then is to identify trends and essential features on which a robust decision policy can be based. This paper develops for the first time a rigorous algorithmic procedure for determining such a policy in response to any weighting of the scenarios. The scenarios are bundled at various levels to reflect the availability of information, and iterative adjustments are made to the decision policy to adapt to this structure and remove the dependence on hindsight.

1,321 citations
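The iterative adjustment scheme described here is what is now commonly called progressive hedging. A toy Python sketch for a two-stage problem, in which each scenario would prefer a different first-stage decision but the policy must be nonanticipative; the scenario targets, weights, and penalty parameter are illustrative assumptions.

import numpy as np

# Each scenario s has cost f_s(x) = 0.5*(x - c[s])**2 and weight p[s];
# a nonanticipative policy must use a single x across all scenarios.
c = np.array([1.0, 2.0, 4.0])   # scenario-wise ideal decisions (assumption)
p = np.array([0.5, 0.3, 0.2])   # scenario weights (assumption)
rho = 1.0                        # proximal penalty parameter

x_s = c.copy()                   # start from the hindsight (scenario-wise) optima
w = np.zeros_like(c)             # multipliers pricing out the use of hindsight
for _ in range(100):
    x_bar = p @ x_s              # aggregate scenario decisions into one policy
    w += rho * (x_s - x_bar)     # adjust multipliers toward nonanticipativity
    # Scenario subproblem argmin_x f_s(x) + w[s]*x + (rho/2)*(x - x_bar)**2,
    # available in closed form for this quadratic toy cost.
    x_s = (c - w + rho * x_bar) / (1.0 + rho)

print(p @ x_s)   # converges to the weighted compromise decision (1.9 here)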

Journal ArticleDOI
TL;DR: The theory of the proximal point algorithm for maximal monotone operators is applied to three algorithms for solving convex programs, one of which has not previously been formulated and is shown to have much the same convergence properties, but with some potential advantages.
Abstract: The theory of the proximal point algorithm for maximal monotone operators is applied to three algorithms for solving convex programs, one of which has not previously been formulated. Rate-of-convergence results for the "method of multipliers," of the strong sort already known, are derived in a generalized form relevant also to problems beyond the compass of the standard second-order conditions for optimality. The new algorithm, the "proximal method of multipliers," is shown to have much the same convergence properties, but with some potential advantages.

1,221 citations
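The engine behind all three algorithms is the proximal point iteration x_{k+1} = argmin_x { f(x) + (1/(2c))*(x - x_k)^2 }. A minimal Python sketch on the convex function f(x) = |x - 3|, whose proximal map has a closed form (the function, starting point, and step size are assumptions for illustration):

import numpy as np

def prox(z, lam, center=3.0):
    # Proximal map of f(x) = |x - center|: soft-thresholding around the center.
    d = z - center
    return center + np.sign(d) * max(abs(d) - lam, 0.0)

x, lam = 10.0, 0.5
for k in range(30):
    x = prox(x, lam)   # x_{k+1} = argmin_x f(x) + (1/(2*lam))*(x - x_k)**2
print(x)               # reaches the minimizer x* = 3.0 exactly

Roughly speaking, the method of multipliers is recovered by applying this iteration to the dual of a convex program, while the proximal method of multipliers adds the proximal term in the primal as well.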

Journal ArticleDOI
TL;DR: In this article, it was shown that for any two monotone operators $T_1$ and $T_2$ from $X$ to $X^*$, the operator $T_1 + T_2$ is again monotone, and conditions are studied under which maximality of the sum is guaranteed.
Abstract: is called the effective domain of $T$, and $T$ is said to be locally bounded at a point $x \in D(T)$ if there exists a neighborhood $U$ of $x$ such that the set
$$(1.4)\qquad T(U) = \bigcup\{\, T(u) \mid u \in U \,\}$$
is a bounded subset of $X^*$. It is apparent that, given any two monotone operators $T_1$ and $T_2$ from $X$ to $X^*$, the operator $T_1 + T_2$ is again monotone, where
$$(1.5)\qquad (T_1 + T_2)(x) = T_1(x) + T_2(x) = \{\, x_1^* + x_2^* \mid x_1^* \in T_1(x),\ x_2^* \in T_2(x) \,\}.$$
If $T_1$ and $T_2$ are maximal, it does not necessarily follow, however, that $T_1 + T_2$ is maximal; some sort of condition is needed, since for example the graph of $T_1 + T_2$ can even be empty (as happens when $D(T_1) \cap D(T_2) = \emptyset$). The problem of determining conditions under which $T_1 + T_2$ is maximal turns out to be of fundamental importance in the theory of monotone operators. Results in this direction have been proved by Lescarret [9] and Browder [5], [6], [7]. The strongest result which is known at present is:

922 citations
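The abstract breaks off at the colon. Whatever was stated there, the result this 1970 paper itself is generally cited for, reconstructed here from memory (so the reflexivity hypothesis and the interiority condition should be checked against the original), is:

\begin{theorem}[sum theorem, as usually quoted]
Let $X$ be a reflexive Banach space and let $T_1, T_2 : X \rightrightarrows X^*$ be maximal monotone. If
\[
D(T_1) \cap \operatorname{int} D(T_2) \neq \emptyset,
\]
then $T_1 + T_2$ is maximal monotone.
\end{theorem}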

Journal ArticleDOI
TL;DR: In this article, it was shown that the subdifferential of a lower semicontinuous proper convex function on a Banach space is a maximal monotone operator, as well as a maximal cyclically monotonous operator.
Abstract: The subdifferential of a lower semicontinuous proper convex function on a Banach space is a maximal monotone operator, as well as a maximal cyclically monotone operator. This result was announced by the author in a previous paper, but the argument given there was incomplete; the result is proved here by a different method, which is simpler in the case of reflexive Banach spaces. At the same time, a new fact is established about the relationship between the subdifferential of a convex function and the subdifferential of its conjugate in the nonreflexive case.

681 citations
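The easy half of this result, that the subdifferential is monotone, follows in one line from the subgradient inequality; maximality is the substantive part proved in the paper. A sketch, using only the definition:
\[
x^* \in \partial f(x) \iff f(y) \ge f(x) + \langle x^*,\, y - x \rangle \quad \text{for all } y \in X.
\]
For $x^* \in \partial f(x)$ and $y^* \in \partial f(y)$ this gives
\[
f(y) \ge f(x) + \langle x^*,\, y - x \rangle, \qquad f(x) \ge f(y) + \langle y^*,\, x - y \rangle,
\]
and adding the two inequalities yields monotonicity:
\[
\langle x^* - y^*,\, x - y \rangle \ge 0.
\]
Summing the subgradient inequality around a cycle of points $x_0, x_1, \dots, x_n = x_0$ gives cyclic monotonicity in the same way.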


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.

17,433 citations
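For a concrete instance of the method, here is a minimal ADMM sketch for the lasso, one of the applications listed in the abstract; the synthetic data and the parameter choices (lam, rho, iteration count) are illustrative assumptions.

import numpy as np

def soft_threshold(v, kappa):
    # Proximal map of kappa*||.||_1, applied elementwise.
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

# Synthetic sparse-regression instance (assumption, for illustration).
rng = np.random.default_rng(1)
m, n = 50, 20
A = rng.normal(size=(m, n))
x_true = np.zeros(n); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=m)

lam, rho = 0.1, 1.0
z = np.zeros(n); u = np.zeros(n)
M = A.T @ A + rho * np.eye(n)    # built once; reused by every x-update
Atb = A.T @ b
for _ in range(200):
    x = np.linalg.solve(M, Atb + rho * (z - u))   # x-update: ridge-type solve
    z = soft_threshold(x + u, lam / rho)          # z-update: prox of lam*||.||_1
    u += x - z                                    # scaled dual update
print(np.round(z, 3))             # sparse estimate, close to x_true

The split here is minimize (1/2)*||Ax - b||^2 + lam*||z||_1 subject to x = z; each update is a proximal step on one term, which is the connection to Douglas-Rachford splitting and the other proximal methods the review relates ADMM to.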

Book
01 Nov 2008
TL;DR: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization, responding to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems.
Abstract: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems. For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have strived to produce a text that is pleasant to read, informative, and rigorous - one that reveals both the beautiful nature of the discipline and its practical side.

17,420 citations

Book
01 Feb 1993
TL;DR: A survey of the theory of convex bodies, from basic convexity and boundary structure through Minkowski addition, curvature measures and quermassintegrals, to mixed volumes, inequalities for mixed volumes, and selected applications.
Abstract: Contents: 1. Basic convexity; 2. Boundary structure; 3. Minkowski addition; 4. Curvature measures and quermassintegrals; 5. Mixed volumes; 6. Inequalities for mixed volumes; 7. Selected applications; Appendix.

3,954 citations