Author

Perinkulam S. Krishnaprasad

Bio: Perinkulam S. Krishnaprasad is an academic researcher at the University of Maryland, College Park. He has contributed to research in topics including Lie groups and nonlinear systems, has an h-index of 44, and has co-authored 204 publications receiving 11,714 citations. Previous affiliations include the United States Department of the Army and Harvard University.


Papers
Proceedings ArticleDOI
01 Nov 1993
TL;DR: A modification to the matching pursuit algorithm of Mallat and Zhang (1992) that maintains full backward orthogonality of the residual at every step and thereby leads to improved convergence is proposed.
Abstract: We describe a recursive algorithm to compute representations of functions with respect to nonorthogonal and possibly overcomplete dictionaries of elementary building blocks, e.g., affine (wavelet) frames. We propose a modification to the matching pursuit algorithm of Mallat and Zhang (1992) that maintains full backward orthogonality of the residual (error) at every step and thereby leads to improved convergence. We refer to this modified algorithm as orthogonal matching pursuit (OMP). It is shown that all additional computation required for the OMP algorithm may be performed recursively.
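The OMP modification can be sketched as follows: at each step the atom most correlated with the residual is selected, and then a least-squares problem is re-solved over all atoms selected so far, which keeps the residual orthogonal to their span. A minimal numpy sketch (not the paper's recursive implementation, which avoids re-solving from scratch):

```python
import numpy as np

def omp(D, y, n_atoms):
    """Orthogonal matching pursuit: greedily select dictionary atoms,
    re-fitting over all selected atoms at each step so the residual
    stays fully orthogonal to their span."""
    residual = y.copy()
    support = []
    for _ in range(n_atoms):
        # Select the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # Least-squares fit over all selected atoms -- this backward
        # orthogonalization is the modification over plain MP.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

With an orthonormal dictionary, OMP recovers an exactly sparse signal in as many steps as it has nonzeros.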

4,607 citations

Journal ArticleDOI
TL;DR: In this article, the authors developed the geometry and dynamics of nonholonomic systems using an Ehresmann connection to model the constraints, and showed how the curvature of this connection entered into Lagrange's equations.
Abstract: This work develops the geometry and dynamics of mechanical systems with nonholonomic constraints and symmetry from the perspective of Lagrangian mechanics and with a view to control-theoretic applications. The basic methodology is that of geometric mechanics applied to the Lagrange-d'Alembert formulation, generalizing the use of connections and momentum maps associated with a given symmetry group to this case. We begin by formulating the mechanics of nonholonomic systems using an Ehresmann connection to model the constraints, and show how the curvature of this connection enters into Lagrange's equations. Unlike the situation with standard configuration space constraints, the presence of symmetries in the nonholonomic case may or may not lead to conservation laws. However, the momentum map determined by the symmetry group still satisfies a useful differential equation that decouples from the group variables. This momentum equation, which plays an important role in control problems, involves parallel transport operators and is computed explicitly in coordinates. An alternative description using a "body reference frame" relates part of the momentum equation to the components of the Euler-Poincaré equations along those symmetry directions consistent with the constraints. One of the purposes of this paper is to derive this evolution equation for the momentum and to distinguish geometrically and mechanically the cases where it is conserved from those where it is not. An example of the former is a ball or vertical disk rolling on a flat plane; an example of the latter is the snakeboard, a modified version of the skateboard which uses momentum coupling for locomotion generation. We construct a synthesis of the mechanical connection and the Ehresmann connection defining the constraints, obtaining an important new object we call the nonholonomic connection. When the nonholonomic connection is a principal connection for the given symmetry group, we show how to perform Lagrangian reduction in the presence of nonholonomic constraints, generalizing previous results which only held in special cases. Several detailed examples are given to illustrate the theory. (September 1994; revised March 1995 and June 1995.)
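The Lagrange-d'Alembert principle underlying this formulation can be stated compactly in its standard form (here q is the configuration variable and the constraint distribution is denoted by a calligraphic D; this is the textbook statement, not a formula quoted from the paper):

```latex
\delta \int_{a}^{b} L(q, \dot q)\, dt = 0,
\qquad \delta q(t) \in \mathcal{D}_{q(t)}, \quad \delta q(a) = \delta q(b) = 0,
```

where the curve itself is also required to satisfy the constraints, $\dot q(t) \in \mathcal{D}_{q(t)}$. Because variations are restricted to the distribution but the dynamics are not projected onto it first, this is genuinely different from imposing the constraints before taking variations, which is why conservation laws can fail.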

763 citations

Journal ArticleDOI
TL;DR: It is shown that by appropriate grouping of terms, feedforward neural networks with sigmoidal activation functions can be viewed as architectures which implement affine wavelet decompositions of mappings.
Abstract: A representation of a class of feedforward neural networks in terms of discrete affine wavelet transforms is developed. It is shown that by appropriate grouping of terms, feedforward neural networks with sigmoidal activation functions can be viewed as architectures which implement affine wavelet decompositions of mappings. It is shown that the wavelet transform formalism provides a mathematical framework within which it is possible to perform both analysis and synthesis of feedforward networks. For the purpose of analysis, the wavelet formulation characterizes a class of mappings which can be implemented by feedforward networks as well as reveals an exact implementation of a given mapping in this class. Spatio-spectral localization properties of wavelets can be exploited in synthesizing a feedforward network to perform a given approximation task. Two synthesis procedures based on spatio-spectral localization that reduce the training problem to one of convex optimization are outlined.
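The key observation behind such groupings is that a difference of two shifted sigmoid units is a localized bump, so pairs of hidden units can be read as translated and dilated copies of a single mother function. A minimal sketch of this construction (illustrative only; the offsets and the specific grouping here are assumptions, not the paper's exact wavelet):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_bump(x, scale=1.0, shift=0.0):
    """A localized 'bump' built from two sigmoid hidden units with
    shared weight and offset inputs: the kind of term grouping that
    lets a sigmoidal network be read as an affine decomposition of
    translated/dilated copies of one mother function."""
    u = scale * x - shift
    return sigmoid(u + 1.0) - sigmoid(u - 1.0)
```

The bump peaks near x = shift/scale and decays to zero away from it, giving the spatial localization that the synthesis procedures exploit.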

434 citations

Journal ArticleDOI
TL;DR: The set of all possible (relative) equilibria for arbitrary G-invariant curvature controls is described, and a global convergence result for the two-vehicle control law is proved.

413 citations

Journal ArticleDOI
TL;DR: In this paper, the authors studied the perturbation of a Lie-Poisson (or, equivalently, an Euler-Poincaré) system by a special dissipation term that has Brockett's double bracket form, and showed that a formally unstable equilibrium of the unperturbed system becomes a spectrally, and hence nonlinearly, unstable equilibrium after the dissipation is added.
Abstract: This paper studies the perturbation of a Lie-Poisson (or, equivalently, an Euler-Poincaré) system by a special dissipation term that has Brockett's double bracket form. We show that a formally unstable equilibrium of the unperturbed system becomes a spectrally and hence nonlinearly unstable equilibrium after the perturbation is added. We also investigate the geometry of this dissipation mechanism and its relation to Rayleigh dissipation functions. This work complements our earlier work (Bloch, Krishnaprasad, Marsden and Ratiu [1991, 1994]) in which we studied the corresponding problem for systems with symmetry with the dissipation added to the internal variables; here it is added directly to the group or Lie algebra variables. The mechanisms discussed here include a number of interesting examples of physical interest such as the Landau-Lifschitz equations for ferromagnetism, certain models for dissipative rigid body dynamics and geophysical fluids, and certain relative equilibria in plasma physics and stellar dynamics.
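For reference, Brockett's double bracket form mentioned in the abstract is, in its standard matrix statement (quoted from the general literature, not from this paper's equations), the flow

```latex
\dot{L} = [\,L, [\,L, N\,]\,], \qquad [A, B] = AB - BA,
```

where N is a fixed symmetric matrix. The dissipation studied in the paper adds a term of this bracket type to the Lie-Poisson (Euler-Poincaré) equations, which is what drives formally unstable equilibria to become spectrally unstable.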

248 citations


Cited by
Book
18 Nov 2016
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications, such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an l_p ball for 0 < p ≤ 1.
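As a rough illustration of how much smaller n is than m, the sampling bound above can be evaluated numerically (constants omitted, so this is purely order-of-magnitude and not a design rule):

```python
import math

def cs_sample_count(m):
    """Order-of-magnitude sample count n ~ m^(1/4) * log(m)^(5/2)
    from the abstract above (constants omitted; illustrative only)."""
    return m ** 0.25 * math.log(m) ** 2.5
```

For a megapixel image (m = 10^6), this evaluates to a few tens of thousands, i.e. a small fraction of the m pixel samples that direct sampling would require.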

18,609 citations

Book
01 Jan 1998
TL;DR: A textbook tour of wavelet signal processing, covering Fourier and time-frequency analysis, frames, wavelet bases, wavelet packet and local cosine bases, approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
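The reduction of BP to a linear program uses the standard split of each coefficient into positive and negative parts. A minimal sketch using scipy's general-purpose LP solver (an assumption of this sketch; the paper itself uses a primal-dual log-barrier interior-point method):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Basis pursuit: minimize ||x||_1 subject to A x = y, recast as
    a linear program by splitting x = u - v with u, v >= 0, so the
    objective becomes sum(u) + sum(v)."""
    n = A.shape[1]
    c = np.ones(2 * n)                     # sum(u) + sum(v) = ||x||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v
```

The 8192-by-212,992 program size quoted above is consistent with this reformulation: a wavelet packet dictionary for length-8192 signals has 8192 * log2(8192) = 106,496 atoms, and splitting each coefficient into positive and negative parts doubles that to 212,992 variables.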

9,950 citations

Journal ArticleDOI
TL;DR: A novel algorithm for adapting dictionaries in order to achieve sparse signal representations, the K-SVD algorithm, an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data.
Abstract: In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method, the K-SVD algorithm, generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data.
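The dictionary-update stage that alternates with sparse coding can be sketched as a single K-SVD sweep (a simplified sketch; a full implementation would also handle unused atoms and run a pursuit step such as OMP between sweeps):

```python
import numpy as np

def ksvd_step(D, Y, X):
    """One K-SVD dictionary-update sweep: for each atom, take the
    rank-1 SVD of the residual restricted to the signals that use that
    atom, updating the atom and its coefficients jointly -- this joint
    update is what accelerates convergence over updating D alone."""
    for k in range(D.shape[1]):
        users = np.nonzero(X[k])[0]            # signals that use atom k
        if users.size == 0:
            continue                           # skip unused atoms
        # Residual over the users, with atom k's contribution removed.
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                      # best rank-1 fit: new atom
        X[k, users] = s[0] * Vt[0]             # matching coefficients
    return D, X
```

Because each atom update is the optimal rank-1 fit to its restricted residual, and it only touches signals that already use that atom, a sweep never increases the overall reconstruction error and preserves the sparsity pattern of X.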

8,905 citations