SciSpace (formerly Typeset)
Author

Eduardo D. Sontag

Bio: Eduardo D. Sontag is an academic researcher at Northeastern University. His research focuses on topics including nonlinear systems and linear systems. He has an h-index of 97 and has co-authored 661 publications receiving 49,633 citations. Previous affiliations of Eduardo D. Sontag include the University of California, Santa Cruz and the University of Minnesota.


Papers
Book
01 Jan 1990
TL;DR: This book covers what constitutes the common core of control theory and is unique in its emphasis on foundational aspects; while covering a wide range of topics in a standard theorem/proof style, it develops the necessary techniques from scratch.
Abstract: Geared primarily to an audience consisting of mathematically advanced undergraduate or beginning graduate students, this text may additionally be used by engineering students interested in a rigorous, proof-oriented systems course that goes beyond the classical frequency-domain material and more applied courses. The minimal mathematical background required is a working knowledge of linear algebra and differential equations. The book covers what constitutes the common core of control theory and is unique in its emphasis on foundational aspects. While covering a wide range of topics written in a standard theorem/proof style, it also develops the necessary techniques from scratch. In this second edition, new chapters and sections have been added, dealing with time optimal control of linear systems, variational and numerical approaches to nonlinear control, nonlinear controllability via Lie-algebraic methods, and controllability of recurrent nets and of linear systems with bounded controls.

3,353 citations

Journal ArticleDOI
TL;DR: In this paper, it was shown that coprime right factorizations exist for the input-to-state mapping of a continuous-time nonlinear system provided that the smooth feedback stabilization problem is solvable for this system.
Abstract: It is shown that coprime right factorizations exist for the input-to-state mapping of a continuous-time nonlinear system provided that the smooth feedback stabilization problem is solvable for this system. It follows that feedback linearizable systems admit such factorizations. In order to establish the result, a Lyapunov-theoretic definition is proposed for bounded-input bounded-output stability. The notion of stability studied in the state-space nonlinear control literature is related to a notion of stability under bounded control perturbations analogous to those studied in operator-theoretic approaches to systems; in particular, it is proved that smooth stabilization implies smooth input-to-state stabilization.

2,504 citations
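For orientation, the input-to-state stability (ISS) property introduced in this line of work is commonly stated as the following estimate on trajectories; this is the standard textbook formulation, not a quotation from the paper:

\[
|x(t)| \le \beta\big(|x(0)|, t\big) + \gamma\big(\|u\|_{\infty}\big) \qquad \text{for all } t \ge 0,
\]

where \(\beta\) is a class-\(\mathcal{KL}\) function and \(\gamma\) is a class-\(\mathcal{K}\) function, so the effect of the initial state decays with time and bounded inputs can only produce bounded state excursions.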

Journal ArticleDOI
TL;DR: The most comprehensive list so far of human p53-regulated genes and their experimentally validated, functional binding sites that confer p53 regulation is presented.
Abstract: The p53 protein regulates the transcription of many different genes in response to a wide variety of stress signals. Following DNA damage, p53 regulates key processes, including DNA repair, cell-cycle arrest, senescence and apoptosis, in order to suppress cancer. This Analysis article provides an overview of the current knowledge of p53-regulated genes in these pathways and others, and the mechanisms of their regulation. In addition, we present the most comprehensive list so far of human p53-regulated genes and their experimentally validated, functional binding sites that confer p53 regulation.

1,799 citations

Journal ArticleDOI
TL;DR: In this paper, the Lyapunov sufficient condition for "input-to-state stability" (ISS) is also shown to be necessary and sufficient, which is an open question raised by several authors.

1,672 citations
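The Lyapunov characterization referred to above is usually written in the following dissipation form; the statement below is the standard one, paraphrased rather than quoted from the paper. A smooth, proper, positive-definite function V is an ISS-Lyapunov function for \(\dot{x} = f(x,u)\) if

\[
\nabla V(x) \cdot f(x,u) \le -\alpha(|x|) + \sigma(|u|)
\]

for some class-\(\mathcal{K}_\infty\) function \(\alpha\) and class-\(\mathcal{K}\) function \(\sigma\); the existence of such a V is equivalent to the ISS property.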

Journal ArticleDOI
TL;DR: In this article, it is shown that the existence of a smooth control-Lyapunov function implies smooth stabilizability; the result is extended to the real-analytic and rational cases as well.

1,210 citations
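The constructive side of this result is usually presented through the "universal formula". The sketch below assumes a single-input control-affine system \(\dot{x} = f(x) + g(x)u\) with control-Lyapunov function V, and writes \(a(x) = \nabla V(x)\cdot f(x)\), \(b(x) = \nabla V(x)\cdot g(x)\); it is the standard textbook form, given for orientation rather than quoted from the paper:

\[
k(x) =
\begin{cases}
-\dfrac{a(x) + \sqrt{a(x)^2 + b(x)^4}}{b(x)}, & b(x) \neq 0, \\[4pt]
0, & b(x) = 0.
\end{cases}
\]

This feedback is smooth away from the origin, and when V satisfies the small-control property the resulting controller is also continuous at the origin.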


Cited by
Book
18 Nov 2016
TL;DR: Deep learning, as presented in this book, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations
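As a minimal illustration of the deep feedforward networks described above, the sketch below stacks a few fully connected layers in NumPy; the layer sizes, random weights, and ReLU activation are arbitrary choices made for this example, not taken from the book.

```python
import numpy as np

def relu(z):
    """Elementwise rectified linear activation."""
    return np.maximum(0.0, z)

def forward(x, weights, biases):
    """Forward pass through a stack of fully connected layers.

    Each layer computes relu(W @ a + b); composing several such layers
    is what makes the network 'deep'.
    """
    a = x
    for W, b in zip(weights, biases):
        a = relu(W @ a + b)
    return a

# Illustrative 3-layer network: 4 inputs -> 8 -> 8 -> 2 outputs.
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]
weights = [0.1 * rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

x = rng.standard_normal(4)
print(forward(x, weights, biases))
```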

Book
01 Jan 1995
TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Abstract: From the Publisher: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts, the book examines techniques for modelling probability density functions and the properties and merits of the multi-layer perceptron and radial basis function network models. Also covered are various forms of error functions, principal algorithms for error function minimization, learning and generalization in neural networks, and Bayesian techniques and their applications. Designed as a text, with over 100 exercises, this fully up-to-date work will benefit anyone involved in the fields of neural computation and pattern recognition.

19,056 citations

28 Jul 2005
TL;DR: Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion.
Abstract: Antigenic variation allows many pathogenic microorganisms to evade host immune responses. Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion. The var gene family encodes about 60 members per haploid genome, and switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations

Book ChapterDOI
TL;DR: The chapter discusses two important directions of research to improve learning algorithms: dynamic node generation, which is used by the cascade correlation algorithm, and designing learning algorithms where the choice of parameters is not an issue.
Abstract: Publisher Summary This chapter provides an account of different neural network architectures for pattern recognition. A neural network consists of several simple processing elements called neurons. Each neuron is connected to some other neurons and possibly to the input nodes. Neural networks provide a simple computing paradigm to perform complex recognition tasks in real time. The chapter categorizes neural networks into three types: single-layer networks, multilayer feedforward networks, and feedback networks. It discusses gradient descent and the relaxation method as the two underlying mathematical themes for deriving learning algorithms. A lot of research activity is centered on learning algorithms because of their fundamental importance in neural networks. The chapter discusses two important directions of research to improve learning algorithms: dynamic node generation, which is used by the cascade correlation algorithm, and designing learning algorithms where the choice of parameters is not an issue. It closes with a discussion of performance and implementation issues.

13,033 citations
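The gradient-descent theme mentioned in the chapter summary can be made concrete with a one-line weight update. The sketch below fits a single-layer linear network by gradient descent on a mean-squared-error loss; the toy data, learning rate, and iteration count are illustrative choices, not code from the chapter.

```python
import numpy as np

# Toy regression data: the single-layer network should recover w_true.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
w_true = np.array([0.5, -2.0, 1.0])
y = X @ w_true

w = np.zeros(3)   # weights of the single-layer linear network
lr = 0.1          # learning rate (gradient-descent step size)
for _ in range(200):
    grad = X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
    w -= lr * grad                      # gradient-descent update
print(w)  # converges toward w_true
```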