Author

Dimitry Gorinevsky

Bio: Dimitry Gorinevsky is an academic researcher from Stanford University. The author has contributed to research in topics: Control theory & Control systems. The author has an h-index of 32 and has co-authored 149 publications receiving 5,371 citations. Previous affiliations of Dimitry Gorinevsky include Honeywell Aerospace & Technische Universität München.


Papers
Journal ArticleDOI
TL;DR: In this paper, a specialized interior-point method for large-scale ℓ1-regularized least-squares programs (LSPs) is described that uses the preconditioned conjugate gradients (PCG) algorithm to compute the search direction.
Abstract: Recently, a lot of attention has been paid to ℓ1-regularization-based methods for sparse signal reconstruction (e.g., basis pursuit denoising and compressed sensing) and feature selection (e.g., the Lasso algorithm) in signal processing, statistics, and related fields. These problems can be cast as ℓ1-regularized least-squares programs (LSPs), which can be reformulated as convex quadratic programs and then solved by several standard methods such as interior-point methods, at least for small and medium-size problems. In this paper, we describe a specialized interior-point method for solving large-scale ℓ1-regularized LSPs that uses the preconditioned conjugate gradients algorithm to compute the search direction. The interior-point method can solve large sparse problems, with a million variables and observations, in a few tens of minutes on a PC. It can efficiently solve large dense problems that arise in sparse signal recovery with orthogonal transforms by exploiting fast algorithms for these transforms. The method is illustrated on a magnetic resonance imaging data set.
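
The problem this paper targets is easy to state even without the specialized solver. A minimal sketch of the ℓ1-regularized LSP using the cvxpy modeling package (not the authors' PCG-based interior-point method); the data A, b and the weight lam are made-up placeholders:

```python
import numpy as np
import cvxpy as cp

# placeholder problem data: random design matrix, sparse ground truth, small noise
rng = np.random.default_rng(0)
m, n = 100, 400
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n) * (rng.random(n) < 0.05)
b = A @ x_true + 0.01 * rng.standard_normal(m)
lam = 0.1

# the l1-regularized LSP: minimize ||Ax - b||_2^2 + lam * ||x||_1
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b) + lam * cp.norm1(x))).solve()
print("nonzeros in recovered x:", int(np.sum(np.abs(x.value) > 1e-4)))
```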

2,047 citations

Journal ArticleDOI
TL;DR: This paper proposes a variation on Hodrick-Prescott (H-P) filtering, a widely used method for trend estimation, that substitutes a sum of absolute values for the sum of squares H-P filtering uses to penalize variations in the estimated trend.
Abstract: The problem of estimating underlying trends in time series data arises in a variety of disciplines. In this paper we propose a variation on Hodrick-Prescott (H-P) filtering, a widely used method for trend estimation. The proposed $\ell_1$ trend filtering method substitutes a sum of absolute values (i.e., $\ell_1$ norm) for the sum of squares used in H-P filtering to penalize variations in the estimated trend. The $\ell_1$ trend filtering method produces trend estimates that are piecewise linear, and therefore it is well suited to analyzing time series with an underlying piecewise linear trend. The kinks, knots, or changes in slope of the estimated trend can be interpreted as abrupt changes or events in the underlying dynamics of the time series. Using specialized interior-point methods, $\ell_1$ trend filtering can be carried out with not much more effort than H-P filtering; in particular, the number of arithmetic operations required grows linearly with the number of data points. We describe the method and some of its basic properties and give some illustrative examples. We show how the method is related to $\ell_1$ regularization-based methods in sparse signal recovery and feature selection, and we list some extensions of the basic method.
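
For concreteness, $\ell_1$ trend filtering amounts to a one-line convex program. A minimal cvxpy sketch rather than the specialized interior-point method the paper describes; the series y and the weight lam are illustrative inputs:

```python
import numpy as np
import cvxpy as cp

# made-up series: piecewise-linear trend plus noise
rng = np.random.default_rng(0)
n = 300
trend = np.concatenate([np.linspace(0, 10, 150), np.linspace(10, 2, 150)])
y = trend + 0.5 * rng.standard_normal(n)
lam = 50.0

# D is the second-order difference operator; ||Dx||_1 penalizes changes in slope
D = np.diff(np.eye(n), n=2, axis=0)
x = cp.Variable(n)
cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - x) + lam * cp.norm1(D @ x))).solve()
# x.value is piecewise linear; its kinks mark slope changes in the underlying trend
```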

577 citations

Journal ArticleDOI
TL;DR: The authors formulate and prove a persistency-of-excitation (PE) condition on both the system state parameters and the control inputs, and study affine RBF network identification, which is important for affine nonlinear system control.
Abstract: Considers radial basis function (RBF) network approximation of a multivariate nonlinear mapping as a linear parametric regression problem. Linear recursive identification algorithms applied to this problem are known to converge, provided the regressor vector sequence has the persistency of excitation (PE) property. The main contribution of this paper is the formulation and proof of PE conditions on the input variables. In RBF network identification, the regressor vector is a nonlinear function of these input variables. According to the formulated condition, the inputs provide PE if they belong to domains around the network node centers. For a two-input network with Gaussian RBFs that have typical width and are centered on a regular mesh, these domains cover about 25% of the input domain volume. The authors further generalize the proposed solution of the standard RBF network identification problem and study affine RBF network identification, which is important for affine nonlinear system control. For the affine RBF network, the authors formulate and prove a PE condition on both the system state parameters and the control inputs.
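
To make the setting concrete, here is a small sketch of the linear-regression view of RBF identification: Gaussian RBF features on a regular mesh of centers, fitted by recursive least squares. The mesh size, width, and test mapping are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# two-input Gaussian RBF network: 5x5 node centers on a regular mesh over [0,1]^2
gx, gy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
centers = np.column_stack([gx.ravel(), gy.ravel()])
width = 0.25  # illustrative RBF width

def regressor(u):
    # the regressor vector is a nonlinear (Gaussian) function of the inputs
    return np.exp(-np.sum((centers - u) ** 2, axis=1) / (2 * width ** 2))

f = lambda u: np.sin(2 * np.pi * u[0]) * np.cos(np.pi * u[1])  # mapping to identify

# recursive least squares on the linear weights theta
theta = np.zeros(len(centers))
P = 1e3 * np.eye(len(centers))
for _ in range(2000):
    u = rng.uniform(0, 1, 2)   # inputs repeatedly visit domains around node centers
    phi = regressor(u)
    k = P @ phi / (1.0 + phi @ P @ phi)
    theta += k * (f(u) - phi @ theta)
    P -= np.outer(k, phi @ P)
```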

182 citations

Journal ArticleDOI
TL;DR: The control system of the hexapod walking vehicle is extended to effect the control of foot-contact forces and locomotion in soft soil, and a number of algorithms are proposed to control vertical force components (loads on legs) and leg sinkage in locomotion in elastic and consolidating soils.
Abstract: The control system of the hexapod walking vehicle, designed at the Institute for Mechanics at Moscow State University and at the Institute for Problems of Information Transmission at the USSR Academy of Sciences, is extended to effect the control of foot-contact forces and locomotion in soft soil. The previously developed positional control system enables the computation of commanded motion of the vehicle legs and positional feedback to track this commanded motion. Force feedback is added to the control system, in addition to computation of commanded forces and leg position corrections for leg sinkage during soft-soil locomotion. Such an elaborate control system has made it possible to solve the problems of controlling the distribution of vertical foot-force components in locomotion over a rigid surface and of foot-force vectors in locomotion between planes forming a dihedral angle. A number of algorithms are proposed to control vertical force components (loads on legs) and leg sinkage in locomotion in ...
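
One sub-problem the abstract mentions, distributing vertical foot forces among the supporting legs, reduces to equilibrium equations with extra freedom. A hypothetical sketch for a six-legged stance; the foot positions, weight, and minimum-norm criterion are all illustrative choices, not the paper's algorithm:

```python
import numpy as np

# hypothetical stance: six feet at (x_i, y_i) in the body frame, metres
feet = np.array([[ 0.5,  0.3], [0.0,  0.3], [-0.5,  0.3],
                 [ 0.5, -0.3], [0.0, -0.3], [-0.5, -0.3]])
W = 600.0          # vehicle weight, N
cx, cy = 0.0, 0.0  # projection of the centre of mass

# equilibrium constraints on vertical loads f: total force and both moment balances
A = np.vstack([np.ones(len(feet)), feet[:, 0], feet[:, 1]])
b = np.array([W, W * cx, W * cy])

# the stance is statically indeterminate; pick the minimum-norm load distribution
f = np.linalg.pinv(A) @ b
print(f)  # commanded leg loads; force feedback would track these on real soil
```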

115 citations


Cited by
Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
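
As one concrete instance, ADMM applied to the lasso alternates a ridge-regression solve, a soft-threshold, and a dual update. A minimal numpy sketch with made-up data; rho and the iteration count are arbitrary choices:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    """min 0.5*||Ax - b||_2^2 + lam*||x||_1 via ADMM with the splitting x = z."""
    n = A.shape[1]
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached for every x-update
    Atb = A.T @ b
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))                                    # ridge solve
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # shrinkage
        u = u + x - z                                                    # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
b = A @ np.where(rng.random(200) < 0.05, 1.0, 0.0)
print(int(np.sum(np.abs(admm_lasso(A, b, lam=0.5)) > 1e-4)))
```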

17,433 citations

Book
01 Jan 2009

8,216 citations

Journal ArticleDOI
TL;DR: This paper studies a novel method for sparse signal recovery that in many situations outperforms ℓ1 minimization, in the sense that substantially fewer measurements are needed for exact recovery.
Abstract: It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) this can be done by constrained ℓ1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms ℓ1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted ℓ1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction, and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations, not by reweighting the ℓ1 norm of the coefficient sequence as is common, but by reweighting the ℓ1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as Compressive Sensing.
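
The iteration the abstract describes is short enough to sketch. A cvxpy version of reweighted ℓ1 minimization under equality-constrained measurements; the epsilon and iteration count follow common practice rather than the paper's exact settings:

```python
import numpy as np
import cvxpy as cp

def reweighted_l1(A, b, iters=5, eps=0.1):
    """Iteratively reweighted l1: weights grow where the current solution is small."""
    n = A.shape[1]
    w = np.ones(n)
    x = cp.Variable(n)
    for _ in range(iters):
        cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))), [A @ x == b]).solve()
        w = 1.0 / (np.abs(x.value) + eps)  # next weights from the current solution
    return x.value
```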

4,869 citations

Journal ArticleDOI
TL;DR: A new iterative recovery algorithm called CoSaMP is described that delivers the same guarantees as the best optimization-based approaches and offers rigorous bounds on computational cost and storage.
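
For reference, the CoSaMP iteration can be sketched in a few lines: form a signal proxy, merge supports, solve least squares on the merged support, prune, and update the residual. An illustrative numpy version, not the authors' reference implementation:

```python
import numpy as np

def cosamp(A, b, s, iters=30, tol=1e-9):
    """Recover an s-sparse x with A @ x ≈ b (illustrative sketch)."""
    n = A.shape[1]
    x = np.zeros(n)
    r = b.copy()
    for _ in range(iters):
        proxy = A.T @ r                                    # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * s:]         # 2s largest components
        T = np.union1d(omega, np.flatnonzero(x))           # merge supports
        z = np.zeros(n)
        z[T] = np.linalg.lstsq(A[:, T], b, rcond=None)[0]  # least squares on T
        x = np.zeros(n)
        keep = np.argsort(np.abs(z))[-s:]                  # prune to s largest
        x[keep] = z[keep]
        r = b - A @ x                                      # update residual
        if np.linalg.norm(r) <= tol:
            break
    return x
```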

3,970 citations