Author

# Eric Chu

Other affiliations: Monash University, National Tsing Hua University, Massachusetts Institute of Technology

Bio: Eric Chu is an academic researcher from the University of California, Davis. He has contributed to research in topics including urban planning and urban climate, has an h-index of 31, and has co-authored 96 publications receiving 19,139 citations. Previous affiliations of Eric Chu include Monash University and National Tsing Hua University.

##### Papers published on a yearly basis

##### Papers



23 May 2011

TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.

Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.

14,958 citations
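The lasso, one of the applications named in the abstract above, gives a compact sketch of an ADMM iteration. This is a hedged illustration, not code from the review: the x/z splitting is the standard one, and the problem sizes, data, and penalty parameter rho are invented for the example.

```python
import numpy as np

def soft_threshold(v, k):
    """Elementwise soft-thresholding: the proximal operator of k*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=500):
    """Solve min 0.5*||Ax - b||^2 + lam*||x||_1 by ADMM with the x/z splitting."""
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)
    # Factor once: every x-update solves (A^T A + rho*I) x = A^T b + rho*(z - u).
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)   # z-update: prox of the l1 term
        u = u + x - z                          # dual update (running residual)
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = admm_lasso(A, b, lam=1.0)
print(x_hat[:4].round(2))
```

Because the quadratic term's factorization is cached, each iteration costs only two triangular solves plus vector operations, which is part of why the method scales to the large problems the review targets.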


17 Jul 2013

TL;DR: This paper describes the embedded conic solver (ECOS), an interior-point solver for second-order cone programming (SOCP) designed specifically for embedded applications, written in low footprint, single-threaded, library-free ANSI-C and so runs on most embedded platforms.

Abstract: In this paper, we describe the embedded conic solver (ECOS), an interior-point solver for second-order cone programming (SOCP) designed specifically for embedded applications. ECOS is written in low footprint, single-threaded, library-free ANSI-C and so runs on most embedded platforms. The main interior-point algorithm is a standard primal-dual Mehrotra predictor-corrector method with Nesterov-Todd scaling and self-dual embedding, with search directions found via a symmetric indefinite KKT system, chosen to allow stable factorization with a fixed pivoting order. The indefinite system is solved using Davis' SparseLDL package, which we modify by adding dynamic regularization and iterative refinement for stability and reliability, as is done in the CVXGEN code generation system, allowing us to avoid all numerical pivoting; the elimination ordering is found entirely symbolically. This keeps the solver simple, only 750 lines of code, with virtually no variation in run time. For small problems, ECOS is faster than most existing SOCP solvers; it is still competitive for medium-sized problems up to tens of thousands of variables.

511 citations


TL;DR: In this article, the alternating directions method of multipliers is used to solve the homogeneous self-dual embedding, an equivalent feasibility problem involving finding a nonzero point in the intersection of a subspace and a cone.

Abstract: We introduce a first-order method for solving very large convex cone programs. The method uses an operator splitting method, the alternating directions method of multipliers, to solve the homogeneous self-dual embedding, an equivalent feasibility problem involving finding a nonzero point in the intersection of a subspace and a cone. This approach has several favorable properties. Compared to interior-point methods, first-order methods scale to very large problems, at the cost of requiring more time to reach very high accuracy. Compared to other first-order methods for cone programs, our approach finds both primal and dual solutions when available or a certificate of infeasibility or unboundedness otherwise, is parameter free, and the per-iteration cost of the method is the same as applying a splitting method to the primal or dual alone. We discuss efficient implementation of the method in detail, including direct and indirect methods for computing projection onto the subspace, scaling the original problem data, and stopping criteria. We describe an open-source implementation, which handles the usual (symmetric) nonnegative, second-order, and semidefinite cones as well as the (non-self-dual) exponential and power cones and their duals. We report numerical results that show speedups over interior-point cone solvers for large problems, and scaling to very large general cone programs.

441 citations
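The feasibility problem at the core of this approach, finding a nonzero point in the intersection of a subspace and a cone, can be pictured with a deliberately tiny toy. The sketch below uses alternating projections on made-up data; it is not the paper's ADMM-based solver, only an illustration of the geometry.

```python
import numpy as np

# Subspace S = {x : a^T x = 0} for an arbitrary a; cone K = nonnegative orthant.
a = np.array([1.0, 1.0, -2.0])

def proj_subspace(x):
    """Euclidean projection onto the hyperplane a^T x = 0."""
    return x - a * (a @ x) / (a @ a)

def proj_cone(x):
    """Euclidean projection onto the nonnegative orthant R^3_+."""
    return np.maximum(x, 0.0)

x = np.array([2.0, -1.0, 3.0])   # arbitrary nonzero starting point
for _ in range(500):
    x = proj_cone(proj_subspace(x))

print(x.round(4), "subspace residual:", abs(a @ x))
```

The iterate stays nonzero and converges to a point satisfying both constraints; the actual solver replaces this naive scheme with ADMM applied to the homogeneous self-dual embedding, which also yields certificates of infeasibility or unboundedness.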


27 Nov 2013

TL;DR: It is shown that this message passing method converges to a solution when the device objectives and constraints are convex, and that the method is fast enough that even a serial implementation can solve substantial problems in reasonable time frames.

Abstract: We consider a network of devices, such as generators, fixed loads, deferrable loads, and storage devices, each with its own dynamic constraints and objective, connected by AC and DC lines. The problem is to minimize the total network objective subject to the device and line constraints, over a given time horizon. This is a large optimization problem, with variables for consumption or generation for each device, power flow for each line, and voltage phase angles at AC buses, in each time period. In this paper we develop a decentralized method for solving this problem called proximal message passing. The method is iterative: At each step, each device exchanges simple messages with its neighbors in the network and then solves its own optimization problem, minimizing its own objective function, augmented by a term determined by the messages it has received. We show that this message passing method converges to a solution when the device objectives and constraints are convex. The method is completely decentralized, and needs no global coordination other than synchronizing iterations; the problems to be solved by each device can typically be solved extremely efficiently and in parallel. The method is fast enough that even a serial implementation can solve substantial problems in reasonable time frames. We report results for several numerical experiments, demonstrating the method's speed and scaling, including the solution of a problem instance with over 10 million variables in under 50 minutes for a serial implementation; with decentralized computing, the solve time would be less than one second.

318 citations
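A minimal sketch of the proximal message passing idea: two quadratic-cost generators and one fixed load sharing a single net. The costs, load value, penalty parameter rho, and iteration count below are invented for illustration; the real method handles general convex devices, many nets, and line constraints.

```python
def prox_quad(c, v, rho):
    """Closed-form prox of f(p) = c*p^2: argmin_p f(p) + (rho/2)*(p - v)^2."""
    return rho * v / (2.0 * c + rho)

def message_passing(c1, c2, d, rho=1.0, iters=2000):
    """Two generators with costs c1*p^2, c2*p^2 meet a fixed load d on one net."""
    p1 = p2 = 0.0
    p_load = -d                        # the load's power is fixed at -d
    u = 0.0                            # scaled price on the net
    p_bar = (p1 + p2 + p_load) / 3.0   # average power imbalance (the "message")
    for _ in range(iters):
        # Each device minimizes its own cost plus a term set by the messages.
        p1, p2 = (prox_quad(c1, p1 - p_bar - u, rho),
                  prox_quad(c2, p2 - p_bar - u, rho))
        p_bar = (p1 + p2 + p_load) / 3.0
        u += p_bar                     # price update drives the imbalance to zero
    return p1, p2

# At the optimum, marginal costs equalize (2*c1*p1 == 2*c2*p2) and p1 + p2 == d.
p1, p2 = message_passing(c1=1.0, c2=2.0, d=3.0)
print(round(p1, 3), round(p2, 3))
```

Each device's update uses only its own previous power, the net's average imbalance, and the price, which is what makes the scheme decentralized and parallelizable.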

##### Cited by



01 Jan 1996

TL;DR: A reference entry for 'The Production of Space' as it appears in Frans Jacobi, Imagine, Space Poetry, Copenhagen, 1996, unpaginated.

Abstract: ‘The Production of Space’, in: Frans Jacobi, Imagine, Space Poetry, Copenhagen, 1996, unpaginated.

6,698 citations




TL;DR: The many different interpretations of proximal operators and algorithms are discussed, their connections to many other topics in optimization and applied mathematics are described, some popular algorithms are surveyed, and a large number of examples of proximal operators that commonly arise in practice are provided.

Abstract: This monograph is about a class of optimization algorithms called proximal algorithms. Much like Newton's method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but are especially well-suited to problems of substantial recent interest involving large or high-dimensional datasets. Proximal methods sit at a higher level of abstraction than classical algorithms like Newton's method: the base operation is evaluating the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems, which generalize the problem of projecting a point onto a convex set, often admit closed-form solutions or can be solved very quickly with standard or simple specialized methods. Here, we discuss the many different interpretations of proximal operators and algorithms, describe their connections to many other topics in optimization and applied mathematics, survey some popular algorithms, and provide a large number of examples of proximal operators that commonly arise in practice.

3,174 citations
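Two of the closed-form proximal operators the monograph catalogs can be written in a few lines; the test values below are arbitrary.

```python
import numpy as np

def prox_l1(v, t):
    """prox_{t*||.||_1}(v): elementwise soft-thresholding, in closed form."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_box(v, lo, hi):
    """The prox of the indicator of [lo, hi]^n is Euclidean projection onto the box."""
    return np.clip(v, lo, hi)

v = np.array([3.0, -0.5, 1.2])
print(prox_l1(v, 1.0))          # shrinks toward zero, zeroing small entries
print(prox_box(v, -1.0, 1.0))   # clips each entry to the box
```

This illustrates the abstract's point that proximal operators generalize projection onto a convex set: the box prox is exactly a projection, while the l1 prox shrinks rather than clips.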