Author

Arash Hassibi

Bio: Arash Hassibi is an academic researcher from Stanford University. The author has contributed to research in the topics of Convex optimization and Linear matrix inequality. The author has an h-index of 14 and has co-authored 15 publications receiving 2,534 citations.

Papers
Journal ArticleDOI
TL;DR: This tutorial paper collects together in one place the basic background material needed to do GP modeling, shows how to recognize functions and problems compatible with GP, and shows how to approximate functions or data in a form compatible with GP.
Abstract: A geometric program (GP) is a type of mathematical optimization problem characterized by objective and constraint functions that have a special form. Recently developed solution methods can solve even large-scale GPs extremely efficiently and reliably; at the same time a number of practical problems, particularly in circuit design, have been found to be equivalent to (or well approximated by) GPs. Putting these two together, we get effective solutions for the practical problems. The basic approach in GP modeling is to attempt to express a practical problem, such as an engineering analysis or design problem, in GP format. In the best case, this formulation is exact; when this is not possible, we settle for an approximate formulation. This tutorial paper collects together in one place the basic background material needed to do GP modeling. We start with the basic definitions and facts, and some methods used to transform problems into GP format. We show how to recognize functions and problems compatible with GP, and how to approximate functions or data in a form compatible with GP (when this is possible). We give some simple and representative examples, and also describe some common extensions of GP, along with methods for solving (or approximately solving) them.
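
To make the GP format concrete, here is a minimal modeling sketch in Python using CVXPY's geometric-programming mode (gp=True); the paper does not prescribe any particular tool, and the toy sizing problem below is invented purely for illustration.

```python
import cvxpy as cp

# GP decision variables must be positive.
w = cp.Variable(pos=True)   # e.g., a device width
h = cp.Variable(pos=True)   # e.g., a device height

# Posynomial objective: a sum of monomials c * w^a * h^b with c > 0.
delay = 1.0 / (w * h) + 0.5 * w

constraints = [
    w * h <= 10.0,          # posynomial <= monomial (area budget)
    0.1 <= w, w <= 5.0,     # monomial bounds
    0.1 <= h, h <= 5.0,
]

prob = cp.Problem(cp.Minimize(delay), constraints)
prob.solve(gp=True)         # interpret the model under GP (log-log convexity) rules
print("w =", w.value, "h =", h.value, "optimal delay =", prob.value)
```

In the best case a design problem maps exactly onto this monomial/posynomial structure; otherwise, as the abstract notes, one fits an approximate GP-compatible model to the functions or data.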

1,215 citations

Proceedings ArticleDOI
28 Jun 2000
TL;DR: This work presents a V-K iteration algorithm to design switching and non-switching controllers for digital control systems with random but bounded delays in the feedback loop, with the transition jumps being modeled as finite-state Markov chains.
Abstract: Digital control systems with random but bounded delays in the feedback loop can be modeled as finite-dimensional, discrete-time jump linear systems, with the transition jumps being modeled as finite-state Markov chains. This type of system can be called a "stochastic hybrid system". Due to the structure of the augmented state-space model, control of such a system is an output feedback problem, even if a state feedback law is intended for the original system. We present a V-K iteration algorithm to design switching and non-switching controllers for such systems. This algorithm uses an outer iteration loop to perturb the transition probability matrix. Inside this loop, one or more steps of V-K iteration is used to do controller synthesis, which requires the solution of two convex optimization problems constrained by LMIs.
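
As a concrete illustration of the system class being controlled (not of the V-K synthesis algorithm itself), the NumPy sketch below simulates a discrete-time jump linear system whose mode is driven by a finite-state Markov chain; all matrices and transition probabilities are made up.

```python
# x[k+1] = A_{r(k)} x[k], where the mode r(k) (here, the current network delay)
# evolves as a finite-state Markov chain with transition matrix Pi.
import numpy as np

rng = np.random.default_rng(0)

# Two delay modes with different closed-loop dynamics (illustrative numbers).
A = [np.array([[0.9, 0.2], [0.0, 0.8]]),
     np.array([[1.0, 0.3], [0.0, 0.7]])]

# Transition probability matrix of the delay Markov chain.
Pi = np.array([[0.8, 0.2],
               [0.5, 0.5]])

x = np.array([1.0, -1.0])
mode = 0
for k in range(50):
    x = A[mode] @ x                      # jump-linear update in the current mode
    mode = rng.choice(2, p=Pi[mode])     # next mode drawn from the Markov chain
print("state after 50 steps:", x)
```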

410 citations

Proceedings ArticleDOI
02 Jun 1999
TL;DR: In this paper, a path-following (homotopy) method for solving bilinear matrix inequality (BMI) problems in control is presented: the BMI is linearized using a first-order perturbation approximation, and a perturbation that "slightly" improves the controller performance is then computed iteratively by solving a semidefinite program.
Abstract: We present a path-following (homotopy) method for (locally) solving bilinear matrix inequality (BMI) problems in control. The method is to linearize the BMI using a first order perturbation approximation, and then iteratively compute a perturbation that "slightly" improves the controller performance by solving a semidefinite program. This process is repeated until the desired performance is achieved, or the performance cannot be improved any further. While this is an approximate method for solving BMIs, we present several examples that illustrate the effectiveness of the approach.
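
The Python/CVXPY sketch below shows the structure of one such linearize-and-step iteration for the static output-feedback BMI (A + BKC)^T P + P(A + BKC) < 0. It is an illustrative reconstruction under assumed toy plant data and trust-region sizes, not the paper's implementation, and may need tuning to converge.

```python
import numpy as np
import cvxpy as cp

# Hypothetical open-loop unstable plant with static output feedback u = K y.
A = np.array([[0.0, 1.0], [-1.0, 0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0]])
n = A.shape[0]

P0 = np.eye(n)                    # current Lyapunov matrix guess
K0 = np.zeros((1, 1))             # current controller guess

for it in range(20):
    dP = cp.Variable((n, n), symmetric=True)
    dK = cp.Variable((1, 1))
    Acl = A + B @ K0 @ C
    dA = B @ dK @ C
    # First-order expansion of (A+BKC)^T P + P (A+BKC) around (P0, K0);
    # the bilinear dP*dA cross term is dropped, leaving an LMI in (dP, dK, t).
    M = Acl.T @ (P0 + dP) + (P0 + dP) @ Acl + dA.T @ P0 + P0 @ dA
    t = cp.Variable()
    constraints = [P0 + dP >> 1e-6 * np.eye(n),
                   M << t * np.eye(n),
                   cp.norm(dP, "fro") <= 0.5,   # keep the perturbation "slight"
                   cp.norm(dK, "fro") <= 0.5]   # so the linearization stays valid
    cp.Problem(cp.Minimize(t), constraints).solve()
    if dP.value is None:                        # solver failure: stop the sketch
        break
    P0, K0 = P0 + dP.value, K0 + dK.value
    if t.value < -1e-4:                         # Lyapunov inequality met: stable
        break

print("K =", K0.ravel(),
      "max Re(eig) of closed loop:", np.linalg.eigvals(A + B @ K0 @ C).real.max())
```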

350 citations

Proceedings ArticleDOI
21 Jun 1998
TL;DR: In this paper, the authors consider analysis and controller synthesis of piecewise-linear systems based on constructing quadratic and piecewise-quadratic Lyapunov functions that prove stability and performance for the system.
Abstract: We consider analysis and controller synthesis of piecewise-linear systems. The method is based on constructing quadratic and piecewise-quadratic Lyapunov functions that prove stability and performance for the system. It is shown that proving stability and performance, or designing (state-feedback) controllers, can be cast as convex optimization problems involving linear matrix inequalities that can be solved very efficiently. A couple of simple examples are included to demonstrate applications of the methods described.
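
A minimal sketch of the simpler common-quadratic case, assuming CVXPY and two invented stable pieces: a single P > 0 satisfying the Lyapunov LMI for every piece certifies stability via V(x) = x^T P x. The piecewise-quadratic construction in the paper additionally uses region-dependent terms, which this sketch omits.

```python
import numpy as np
import cvxpy as cp

# Two hypothetical pieces (regions) of a piecewise-linear system.
A1 = np.array([[-1.0, 0.5], [0.0, -2.0]])
A2 = np.array([[-1.5, 0.0], [1.0, -0.5]])

n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(n)]            # P positive definite
for Ai in (A1, A2):
    # Lyapunov LMI for each piece: Ai^T P + P Ai negative definite.
    constraints.append(Ai.T @ P + P @ Ai << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve()
print("common quadratic Lyapunov matrix P =\n", P.value)
```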

330 citations


Cited by
Journal ArticleDOI
TL;DR: This paper studies a novel method for sparse signal recovery that in many situations outperforms ℓ1 minimization, in the sense that substantially fewer measurements are needed for exact recovery.
Abstract: It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) this can be done by constrained ℓ1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms ℓ1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted ℓ1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations—not by reweighting the ℓ1 norm of the coefficient sequence as is common, but by reweighting the ℓ1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as Compressive Sensing.
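
The iteration described above is straightforward to sketch. The Python/CVXPY snippet below uses synthetic data and one common weight update, w_i = 1/(|x_i| + ε), consistent with the description of weights computed from the current solution; the abstract itself does not fix these details.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, n, k = 40, 100, 8                          # measurements, dimension, sparsity
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

eps = 1e-3
w = np.ones(n)                                # first pass = plain l1 minimization
for it in range(5):
    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))),
                      [A @ x == y])
    prob.solve()
    w = 1.0 / (np.abs(x.value) + eps)         # small entries get larger weights

print("recovery error:", np.linalg.norm(x.value - x_true))
```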

4,869 citations

Journal ArticleDOI
TL;DR: It is shown that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space.
Abstract: The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.
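
A minimal sketch of the convex relaxation described above, assuming CVXPY and synthetic random Gaussian measurements: minimize the nuclear norm of X subject to the affine constraints ⟨A_i, X⟩ = b_i.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, r, m = 10, 2, 80                      # 10x10 matrix of rank 2, 80 measurements
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
A_ops = [rng.standard_normal((n, n)) for _ in range(m)]
b = np.array([np.sum(Ai * X_true) for Ai in A_ops])   # <A_i, X> = trace(A_i^T X)

X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(A_ops, b)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)   # nuclear-norm heuristic
prob.solve()
print("relative error:",
      np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```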

3,432 citations

Journal ArticleDOI
TL;DR: This work considers the problem of finding a linear iteration that yields distributed averaging consensus over a network, i.e., that asymptotically computes the average of some initial values given at the nodes, and gives several extensions and variations on the basic problem.
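
A small NumPy sketch of such a linear iteration on an invented 4-node path graph: with a symmetric, doubly stochastic weight matrix W whose non-principal eigenvalues have modulus below one, x(t+1) = W x(t) drives every node's value to the average of the initial values.

```python
import numpy as np

# Path graph on 4 nodes with a symmetric, doubly stochastic weight matrix.
W = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.5]])

x = np.array([1.0, 3.0, 5.0, 7.0])
print("average of initial values:", x.mean())
for t in range(200):
    x = W @ x            # each node replaces its value by a weighted local average
print("node values after 200 iterations:", x)
```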

2,692 citations

Book
03 Jan 2018
TL;DR: This monograph summarizes many years of research insights in a clear and self-contained way and provides the reader with the necessary knowledge and mathematical tools to carry out independent research in this area.
Abstract: Massive multiple-input multiple-output (MIMO) is one of the most promising technologies for the next generation of wireless communication networks because it has the potential to provide game-changing improvements in spectral efficiency (SE) and energy efficiency (EE). This monograph summarizes many years of research insights in a clear and self-contained way and provides the reader with the necessary knowledge and mathematical tools to carry out independent research in this area. Starting from a rigorous definition of Massive MIMO, the monograph covers the important aspects of channel estimation, SE, EE, hardware efficiency (HE), and various practical deployment considerations. From the beginning, a very general, yet tractable, canonical system model with spatial channel correlation is introduced. This model is used to realistically assess the SE and EE, and is later extended to also include the impact of hardware impairments. Owing to this rigorous modeling approach, a lot of classic "wisdom" about Massive MIMO, based on too simplistic system models, is shown to be questionable.

1,352 citations