Author

Jia Kang

Other affiliations: Purdue University
Bio: Jia Kang is an academic researcher from Texas A&M University. The author has contributed to research in the topics of nonlinear programming and interior point methods, has an h-index of 5, and has co-authored 7 publications receiving 113 citations. Previous affiliations of Jia Kang include Purdue University.

Papers
Journal ArticleDOI
TL;DR: This paper shows that the Schur-complement bottleneck can be overcome by solving the Schur-complement equations implicitly, using a quasi-Newton preconditioned conjugate gradient method that dramatically reduces the computational cost for problems with many coupling variables.

48 citations
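A minimal numpy sketch of the implicit Schur-complement idea the TL;DR describes: apply the Schur complement to a vector through per-block solves instead of forming it, and run conjugate gradients on that operator. All data here is a hypothetical toy system, and plain (unpreconditioned) CG stands in for the paper's quasi-Newton preconditioned variant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy block-bordered system: two SPD blocks K_i coupled through columns A_i
# to a small set of coupling variables y (hypothetical data).
n, m = 40, 3                      # block size, number of coupling variables
K = [M @ M.T + n * np.eye(n)
     for M in (rng.standard_normal((n, n)) for _ in range(2))]
A = [rng.standard_normal((n, m)) for _ in range(2)]
r = rng.standard_normal(m)

def schur_matvec(v):
    # Apply S = sum_i A_i^T K_i^{-1} A_i without ever forming S: one block
    # solve per block, parallelizable across blocks. (A real implementation
    # would factor each K_i once and reuse the factors.)
    return sum(Ai.T @ np.linalg.solve(Ki, Ai @ v) for Ki, Ai in zip(K, A))

def cg(matvec, b, tol=1e-10, maxiter=200):
    # Plain conjugate gradients on the implicitly defined SPD operator.
    x = np.zeros_like(b)
    res = b - matvec(x)
    p = res.copy()
    rs = res @ res
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        res -= alpha * Ap
        rs_new = res @ res
        if np.sqrt(rs_new) < tol:
            break
        p = res + (rs_new / rs) * p
        rs = rs_new
    return x

y = cg(schur_matvec, r)           # coupling-variable solution
```

Because the operator is only ever applied to vectors, the cost per CG iteration is a handful of block solves rather than the dense Schur-complement factorization that becomes the bottleneck when there are many coupling variables.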

Journal ArticleDOI
TL;DR: This paper presents a decomposition strategy applicable to DAE-constrained optimization problems, applied to find the optimal control profile of a combined cycle power plant, with promising results on both distributed-memory and shared-memory computing architectures and speedups of over 50 times possible.
Abstract: This paper presents a decomposition strategy applicable to DAE constrained optimization problems. A common solution method for such problems is to apply a direct transcription method and solve the resulting nonlinear program using an interior-point algorithm. For this approach, the time to solve the linearized KKT system at each iteration typically dominates the total solution time. In our proposed method, we exploit the structure of the KKT system resulting from a direct collocation scheme for approximating the DAE constraints in order to compute the necessary linear algebra operations on multiple processors. This approach is applied to find the optimal control profile of a combined cycle power plant with promising results on both distributed memory and shared memory computing architectures with speedups of over 50 times possible.

31 citations
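The abstract notes that solving the linearized KKT system dominates the total solution time for this approach. As a minimal illustration of what that system looks like, here is the KKT solve for an equality-constrained QP (the subproblem shape an interior-point iteration reduces to); the data is a hypothetical two-variable example, not from the paper.

```python
import numpy as np

# Equality-constrained QP:  min 0.5 x'Hx + g'x   s.t.  Jx = c.
# Stationarity Hx + g + J'lam = 0 plus feasibility Jx = c gives the
# symmetric indefinite KKT system solved at each interior-point iteration.
H = np.array([[2.0, 0.0], [0.0, 2.0]])
g = np.array([-2.0, -5.0])
J = np.array([[1.0, 1.0]])
c = np.array([1.0])

KKT = np.block([[H, J.T],
                [J, np.zeros((1, 1))]])
sol = np.linalg.solve(KKT, np.concatenate([-g, c]))
x, lam = sol[:2], sol[2:]         # primal solution and multiplier
```

In the paper's setting, the direct collocation scheme makes this KKT matrix block-structured by time, which is what allows the linear algebra to be distributed across processors.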

Proceedings ArticleDOI
01 Dec 2015
TL;DR: Fundamental concepts that enable the solution of large-scale structured nonlinear programming problems on high-performance computers are discussed, including linear algebra parallelization strategies and how such strategies influence the choice of algorithmic frameworks.
Abstract: In this tutorial we discuss fundamental concepts that enable the solution of large-scale structured nonlinear programming problems on high-performance computers. We focus on linear algebra parallelization strategies and discuss how such strategies influence the choice of algorithmic frameworks capable of enforcing global convergence and deal with nonconvexities. We also discuss how the characteristics of different computing architectures influence the choice of algorithmic strategies.

26 citations

Proceedings ArticleDOI
01 Jul 2015
TL;DR: The proposed approach outperforms the non-decomposed ADMM approach and compares favorably with Gurobi, a commercial QP solver, on a number of MPC problems derived from stopping control of a transportation system.
Abstract: We present a scenario-decomposition based Alternating Direction Method of Multipliers (ADMM) algorithm for the efficient solution of scenario-based Model Predictive Control (MPC) problems which arise for instance in the control of stochastic systems. We duplicate the variables involved in the non-anticipativity constraints, which allows us to develop an ADMM algorithm in which the computations scale linearly in the number of scenarios. Further, the decomposition allows for using different values of the ADMM stepsize parameter for each scenario. We provide convergence analysis and derive the optimal selection of the parameter for each scenario. The proposed approach outperforms the non-decomposed ADMM approach and compares favorably with Gurobi, a commercial QP solver, on a number of MPC problems derived from stopping control of a transportation system.

13 citations
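A toy sketch of the scenario decomposition the abstract describes: each scenario gets its own copy of the shared variable, the copies are driven to consensus (the role played by the duplicated non-anticipativity constraints), and the per-scenario update decouples so it can run in parallel. The data is hypothetical, and a single stepsize is used for simplicity where the paper derives optimal per-scenario stepsizes.

```python
import numpy as np

# Consensus ADMM for:  min over z of  sum_i 0.5 * a[i] * (z - b[i])**2,
# with one local copy x[i] per scenario and the constraint x[i] = z.
a = np.array([1.0, 2.0, 4.0])     # per-scenario curvatures (hypothetical)
b = np.array([0.0, 1.0, 3.0])     # per-scenario targets (hypothetical)
rho = 1.0                          # single ADMM stepsize for simplicity

x = np.zeros(3)                    # local copies
u = np.zeros(3)                    # scaled duals
z = 0.0                            # consensus variable
for _ in range(1000):
    # x-step decouples: one independent update per scenario (parallelizable)
    x = (a * b + rho * (z - u)) / (a + rho)
    z = np.mean(x + u)             # consensus step
    u += x - z                     # dual update

z_star = (a * b).sum() / a.sum()   # closed-form optimum for comparison
```

Since each scenario touches only its own `x[i]` and `u[i]` in the first and third steps, the per-iteration work scales linearly with the number of scenarios, matching the abstract's claim.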

Journal ArticleDOI
30 Jun 2016
TL;DR: This paper solves the optimization problems resulting from a robust nonlinear model predictive control strategy at each sampling instance using the parallel Schur complement method developed to solve stochastic programs on distributed and shared memory machines.
Abstract: Representing the uncertainties with a set of scenarios, the optimization problem resulting from a robust nonlinear model predictive control (NMPC) strategy at each sampling instance can be viewed as a large-scale stochastic program. This paper solves these optimization problems using the parallel Schur complement method developed to solve stochastic programs on distributed and shared memory machines. The control strategy is illustrated with a case study of a multidimensional unseeded batch crystallization process. For this application, a robust NMPC based on min–max optimization guarantees satisfaction of all state and input constraints for a set of uncertainty realizations, and also provides better robust performance compared with open-loop optimal control, nominal NMPC, and robust NMPC minimizing the expected performance at each sampling instance. The performance of robust NMPC can be improved by generating optimization scenarios using Bayesian inference. With the efficient parallel solver, the solution time of one optimization problem is reduced from 6.7 min to 0.5 min, allowing for real-time application.

6 citations
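The min-max idea behind the robust NMPC in this paper can be illustrated on a tiny scale: choose the input that minimizes the worst-case cost over a set of uncertainty scenarios. This is a hypothetical one-input, three-scenario example evaluated on a grid, not the paper's crystallization model or its optimization method.

```python
import numpy as np

# Min-max input selection against a scenario set: pick the u minimizing
# the worst-case quadratic tracking cost over gain realizations.
scenarios = np.array([0.8, 1.0, 1.3])     # hypothetical gain realizations
target = 2.0

u_grid = np.linspace(0.0, 4.0, 4001)
# cost[s, u] = (gain_s * u - target)^2 for every scenario/input pair
cost = (scenarios[:, None] * u_grid[None, :] - target) ** 2
worst = cost.max(axis=0)                  # worst case over scenarios
u_minmax = u_grid[worst.argmin()]         # guarantees against every scenario
```

At this u, the extreme gains 0.8 and 1.3 produce equal costs (the worst cases balance at u = 40/21), which is the defining feature of a min-max solution; the paper solves a structured large-scale version of this problem in parallel.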


Cited by
Journal ArticleDOI
TL;DR: The purpose of this paper is to demonstrate that, using simple standard building blocks from nonlinear programming, combined with a structure-exploiting linear system solver, it is possible to achieve computation times in the range typical of solvers for QPs, while retaining nonlinearities and solving the nonlinear programs (NLP) to local optimality.
Abstract: Real-time implementation of optimisation-based control and trajectory planning can be very challenging for nonlinear systems. As a result, if an implementation based on a fixed linearisatio...

168 citations

Journal ArticleDOI
TL;DR: It is suggested that some proposals are too complex and computationally demanding for application in this area, and tentative proposals are made for research on robust and stochastic model predictive control to aid applicability.

121 citations

Journal ArticleDOI
TL;DR: Pyomo.dae is an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks.
Abstract: We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org . One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts, discretization schemes, and the benchmarking of state-of-the-art optimization solvers.

96 citations
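A key feature the abstract highlights is automatically transforming differential constraints into finite-dimensional algebraic ones. The following hand-rolled numpy sketch shows what such a transformation produces for a single ODE, using a backward Euler scheme; it only mimics the idea, not pyomo.dae's API, and the ODE is a hypothetical example.

```python
import numpy as np

# The ODE  dx/dt = -x,  x(0) = 1  on [0, 1] becomes, under backward Euler
# with step h, the algebraic equations  x[k] - x[k-1] = -h * x[k].
# pyomo.dae automates this discretization step for general models.
nfe = 100                         # number of finite elements
h = 1.0 / nfe
x = np.empty(nfe + 1)
x[0] = 1.0
for k in range(1, nfe + 1):
    x[k] = x[k - 1] / (1.0 + h)   # solve the k-th algebraic equation

# The discrete trajectory x approximates exp(-t) on the mesh
```

In pyomo.dae the same step is a model transformation (the differential constraints become ordinary algebraic constraints on a mesh), after which the finite-dimensional problem can be handed to off-the-shelf solvers.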

Journal ArticleDOI
TL;DR: This work proposes to use community detection in network representations of optimization problems as a systematic method of partitioning the optimization variables into groups, such that variables in the same group generally share more constraints than variables in different groups.

54 citations
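The partitioning idea in this TL;DR starts from a graph linking variables that appear in the same constraint. As a minimal illustration, the sketch below builds that graph and groups variables by connected components; this is a crude stand-in for the community detection the paper proposes (which uses modularity-style methods and can split connected graphs), and the constraint data is hypothetical.

```python
# Each constraint lists the variables it involves (hypothetical example).
constraints = [
    ["x1", "x2"],
    ["x2", "x3"],
    ["x4", "x5"],
]

# Adjacency: variables appearing in the same constraint are linked.
adj = {}
for vars_in_con in constraints:
    for v in vars_in_con:
        adj.setdefault(v, set()).update(vars_in_con)

def components(adj):
    # Depth-first search over the variable-interaction graph.
    seen, groups = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            v = stack.pop()
            if v in group:
                continue
            group.add(v)
            stack.extend(adj[v] - group)
        seen |= group
        groups.append(group)
    return groups

groups = components(adj)   # two groups: {x1, x2, x3} and {x4, x5}
```

Variables in the same group share constraint couplings while the groups themselves are independent here; community detection generalizes this by also cutting weakly coupled parts of a single connected graph.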