Author

Yu-Chi Ho

Bio: Yu-Chi Ho is an academic researcher from Harvard University. The author has contributed to research in topics: Ordinal optimization & Optimization problem. The author has an h-index of 50 and has co-authored 169 publications receiving 14,968 citations. Previous affiliations of Yu-Chi Ho include National University of Singapore & Tsinghua University.


Papers
Book
01 Oct 1979
TL;DR: This best-selling text focuses on the analysis and design of complicated dynamic systems and is recommended as a reference for engineers, applied mathematicians, and undergraduates.
Abstract: This best-selling text focuses on the analysis and design of complicated dynamic systems. CHOICE called it "a high-level, concise book that could well be used as a reference by engineers, applied mathematicians, and undergraduates. The format is good, the presentation clear, the diagrams instructive, the examples and problems helpful. References and a multiple-choice examination are included."

2,782 citations

Journal ArticleDOI
TL;DR: This paper develops the theory of nonzero-sum differential games for application to economic analysis, discussing Nash equilibrium, minimax, and noninferior strategy sets.
Abstract: Differential game theory with nonzero sum is developed for application to economic analysis, discussing Nash equilibrium, minimax, and noninferior strategy sets.

719 citations

Journal ArticleDOI
TL;DR: Equivalence relations in information and in control functions among different systems are developed; these relations aid in solving many general problems by relating their solutions to those of systems with "perfect memory".
Abstract: General dynamic team decision problems with linear information structures and quadratic payoff functions are studied. The primitive random variables are jointly Gaussian. No constraints on the information structures are imposed except causality. Equivalence relations in information and in control functions among different systems are developed. These equivalence relations aid in the solving of many general problems by relating their solutions to those of the systems with "perfect memory." The latter can be obtained by the method derived in Part I. A condition is found which enables each decision maker to infer the information available to his precedents, while at the same time the controls which will affect the information assessed can be proven optimal. When this condition fails, upper and lower bounds of the payoff function can still be obtained systematically, and suboptimal controls can be obtained.

677 citations

Journal ArticleDOI
TL;DR: In this paper, a general class of stochastic estimation and control problems is formulated from the Bayesian decision-theoretic viewpoint, and it is discussed how these problems can be solved step by step, in principle and in practice, from this approach.
Abstract: In this paper, a general class of stochastic estimation and control problems is formulated from the Bayesian Decision-Theoretic viewpoint. A discussion of how these problems can be solved step by step, in principle and in practice, from this approach is presented. As a specific example, the closed-form Wiener-Kalman solution for linear estimation in Gaussian noise is derived. The purpose of the paper is to show that the Bayesian approach provides: 1) a general unifying framework within which to pursue further research in stochastic estimation and control problems, and 2) a view of the necessary computations and the difficulties that must be overcome for these problems. An example of a nonlinear, non-Gaussian estimation problem is also solved.

653 citations
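As the abstract above notes, the closed-form Wiener-Kalman solution for linear estimation in Gaussian noise falls out of the Bayesian formulation. A minimal scalar Kalman-filter sketch, assuming an invented scalar model (the constants F, H, Q, R and the simulated data are illustrative only, not taken from the paper):

```python
import numpy as np

# Minimal scalar Kalman filter for x_k = F x_{k-1} + w, z_k = H x_k + v,
# with w ~ N(0, Q), v ~ N(0, R).  All numbers below are illustrative.
F, H, Q, R = 1.0, 1.0, 0.01, 0.25

def kalman_step(x, P, z):
    # Predict: propagate the Gaussian posterior through the linear dynamics.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: condition on the new measurement z (Bayes' rule for Gaussians).
    K = P_pred * H / (H * P_pred * H + R)        # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
truth, x, P = 0.0, 0.0, 1.0
for _ in range(20):
    truth = F * truth + rng.normal(0.0, np.sqrt(Q))
    z = H * truth + rng.normal(0.0, np.sqrt(R))
    x, P = kalman_step(x, P, z)
print(f"final estimate {x:.3f} vs truth {truth:.3f} (variance {P:.3f})")
```

The predict/update recursion is simply Bayes' rule applied to jointly Gaussian variables, which is the sense in which the Kalman filter appears as a special case of the decision-theoretic formulation.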


Cited by
Book
18 Nov 2016
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations
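The "hierarchy of concepts" described above is what a deep feedforward network computes: each layer builds its representation out of the simpler one produced by the layer before it. A minimal NumPy sketch of such a forward pass (layer sizes, weights, and the input are arbitrary values chosen only to illustrate the idea, not anything from the book):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A feedforward network with three hidden layers: each layer re-represents
# the previous layer's output, building richer features from simpler ones.
layer_sizes = [8, 16, 16, 16, 4]          # input -> hidden layers -> output
weights = [rng.normal(0, 0.3, (m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)               # compose features layer by layer
    return h @ weights[-1] + biases[-1]   # linear output layer (e.g. class scores)

x = rng.normal(size=8)                    # a dummy input vector
print(forward(x))                         # 4 output scores
```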

Book
31 Jul 1981
TL;DR: This entry is the book Pattern Recognition with Fuzzy Objective Function Algorithms; the scraped abstract contained no usable summary of its content.

15,662 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium; it reviews deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations

Journal ArticleDOI
TL;DR: This paper reviews both optimal and suboptimal Bayesian algorithms for nonlinear/non-Gaussian tracking problems, with a focus on particle filters.
Abstract: Increasingly, for many application areas, it is becoming important to include elements of nonlinearity and non-Gaussianity in order to model accurately the underlying dynamics of a physical system. Moreover, it is typically crucial to process data on-line as it arrives, both from the point of view of storage costs as well as for rapid adaptation to changing signal characteristics. In this paper, we review both optimal and suboptimal Bayesian algorithms for nonlinear/non-Gaussian tracking problems, with a focus on particle filters. Particle filters are sequential Monte Carlo methods based on point mass (or "particle") representations of probability densities, which can be applied to any state-space model and which generalize the traditional Kalman filtering methods. Several variants of the particle filter such as SIR, ASIR, and RPF are introduced within a generic framework of the sequential importance sampling (SIS) algorithm. These are discussed and compared with the standard EKF through an illustrative example.

11,409 citations
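The survey's central object, the bootstrap (SIR) particle filter, is straightforward to sketch: propagate weighted samples through the dynamics, reweight them by the measurement likelihood, and resample. A minimal one-dimensional example (the state-transition model, noise levels, and data are invented for illustration and are not the examples used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500                                    # number of particles

def dynamics(x):
    # A mildly nonlinear state transition with Gaussian process noise.
    return 0.5 * x + 2.0 * np.sin(x) + rng.normal(0.0, 0.5, size=x.shape)

def likelihood(z, x):
    # Measurement z = x + v, v ~ N(0, 1): evaluate p(z | x) for each particle.
    return np.exp(-0.5 * (z - x) ** 2)

particles = rng.normal(0.0, 2.0, N)        # samples from the prior
truth = 0.0
for _ in range(30):
    truth = 0.5 * truth + 2.0 * np.sin(truth) + rng.normal(0.0, 0.5)
    z = truth + rng.normal(0.0, 1.0)
    particles = dynamics(particles)        # predict: sample from the transition
    w = likelihood(z, particles) + 1e-300  # update: importance weights
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)       # resample to counter weight degeneracy
    particles = particles[idx]
print(f"estimate {particles.mean():.3f} vs truth {truth:.3f}")
```

Resampling at every step is the simplest (bootstrap) choice; the variants surveyed in the paper, such as ASIR and RPF, modify how the proposal and resampling steps are performed.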

Book
01 Jan 1994
TL;DR: In this book, the authors present a brief history of LMIs in control theory and discuss standard problems involving LMIs, linear differential inclusions, and matrix problems with analytic solutions.
Abstract: Preface. 1. Introduction: Overview; A Brief History of LMIs in Control Theory; Notes on the Style of the Book; Origin of the Book. 2. Some Standard Problems Involving LMIs: Linear Matrix Inequalities; Some Standard Problems; Ellipsoid Algorithm; Interior-Point Methods; Strict and Nonstrict LMIs; Miscellaneous Results on Matrix Inequalities; Some LMI Problems with Analytic Solutions. 3. Some Matrix Problems: Minimizing Condition Number by Scaling; Minimizing Condition Number of a Positive-Definite Matrix; Minimizing Norm by Scaling; Rescaling a Matrix Positive-Definite; Matrix Completion Problems; Quadratic Approximation of a Polytopic Norm; Ellipsoidal Approximation. 4. Linear Differential Inclusions: Differential Inclusions; Some Specific LDIs; Nonlinear System Analysis via LDIs. 5. Analysis of LDIs: State Properties: Quadratic Stability; Invariant Ellipsoids. 6. Analysis of LDIs: Input/Output Properties: Input-to-State Properties; State-to-Output Properties; Input-to-Output Properties. 7. State-Feedback Synthesis for LDIs: Static State-Feedback Controllers; State Properties; Input-to-State Properties; State-to-Output Properties; Input-to-Output Properties; Observer-Based Controllers for Nonlinear Systems. 8. Lur'e and Multiplier Methods: Analysis of Lur'e Systems; Integral Quadratic Constraints; Multipliers for Systems with Unknown Parameters. 9. Systems with Multiplicative Noise: Analysis of Systems with Multiplicative Noise; State-Feedback Synthesis. 10. Miscellaneous Problems: Optimization over an Affine Family of Linear Systems; Analysis of Systems with LTI Perturbations; Positive Orthant Stabilizability; Linear Systems with Delays; Interpolation Problems; The Inverse Problem of Optimal Control; System Realization Problems; Multi-Criterion LQG; Nonconvex Multi-Criterion Quadratic Problems. Notation. List of Acronyms. Bibliography. Index.

11,085 citations
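One of the book's standard problems, quadratic stability, reduces to an LMI feasibility test: find P ≻ 0 with AᵀP + PA ≺ 0. A small sketch using CVXPY as a modern solver front end (the matrix A and the strictness tolerance are arbitrary choices for illustration; the book itself works with the ellipsoid and interior-point methods directly rather than any particular toolbox):

```python
import cvxpy as cp
import numpy as np

# Quadratic stability of dx/dt = A x: feasibility of P > 0, A'P + P A < 0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])               # an example Hurwitz matrix

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6                                  # enforce strict inequalities numerically
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve()
print(prob.status)                          # 'optimal' => a Lyapunov matrix P exists
print(P.value)
```

Any P returned here certifies V(x) = xᵀPx as a quadratic Lyapunov function, which is exactly the role LMIs play in the analysis chapters of the book.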