Author

Tong Heng Lee

Bio: Tong Heng Lee is an academic researcher from the National University of Singapore. The author has contributed to research in topics including control theory and adaptive control. The author has an h-index of 81 and has co-authored 756 publications receiving 23,333 citations. Previous affiliations of Tong Heng Lee include the University of California, Berkeley and Harvard University.


Papers
Journal ArticleDOI
TL;DR: A barrier Lyapunov function (BLF) is introduced to address two open and challenging problems in the neuro-control area: for any initial compact set, how to determine a priori the compact superset on which NN approximation is valid; and how to ensure that the arguments of the unknown functions remain within the specified compact supersets.
Abstract: In this brief, adaptive neural control is presented for a class of output feedback nonlinear systems in the presence of unknown functions. The unknown functions are handled via on-line neural network (NN) control using only output measurements. A barrier Lyapunov function (BLF) is introduced to address two open and challenging problems in the neuro-control area: 1) for any initial compact set, how to determine a priori the compact superset, on which NN approximation is valid; and 2) how to ensure that the arguments of the unknown functions remain within the specified compact superset. By ensuring boundedness of the BLF, we actively constrain the argument of the unknown functions to remain within a compact superset such that the NN approximation conditions hold. The semiglobal boundedness of all closed-loop signals is ensured, and the tracking error converges to a neighborhood of zero. Simulation results demonstrate the effectiveness of the proposed approach.
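As a concrete illustration (a standard log-type barrier function, not necessarily the exact choice in this brief), a barrier Lyapunov function for an error signal z constrained to |z| < k_b can be taken as

    V_b(z) = \frac{1}{2} \ln \frac{k_b^2}{k_b^2 - z^2},

which is positive definite on |z| < k_b and grows unbounded as |z| \to k_b. Keeping V_b bounded along the closed-loop trajectories therefore keeps z, and hence the arguments of the unknown functions, inside the prescribed compact set on which the NN approximation is valid.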

818 citations

Book
30 Nov 2001
TL;DR: Stable Adaptive Neural Network Control offers an in-depth study of stable adaptive control designs using approximation-based techniques, and presents rigorous analysis for system stability and control performance.
Abstract: While neural network control has been successfully applied in various practical applications, many important issues, such as stability, robustness, and performance, have not been extensively researched for neural adaptive systems. Motivated by the need for systematic neural control strategies for nonlinear systems, Stable Adaptive Neural Network Control offers an in-depth study of stable adaptive control designs using approximation-based techniques, and presents rigorous analysis for system stability and control performance. Both linearly parameterized and multi-layer neural networks (NN) are discussed and employed in the design of adaptive NN control systems for completeness. Stable adaptive NN control has been thoroughly investigated for several classes of nonlinear systems, including nonlinear systems in Brunovsky form, nonlinear systems in strict-feedback and pure-feedback forms, nonaffine nonlinear systems, and a class of MIMO nonlinear systems. In addition, the developed design methodologies are not only applied to typical example systems, but also to real application-oriented systems, such as the variable length pendulum system, the underactuated inverted pendulum system and nonaffine nonlinear chemical processes (CSTR).
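As a hedged sketch of the approximation-based design idea (the generic form used throughout this literature, not a specific result quoted from the book), an unknown smooth function f(x) is approximated on a compact set \Omega by a linearly parameterized network

    f(x) = W^{*\top} S(x) + \epsilon(x), \qquad |\epsilon(x)| \le \epsilon^*, \ \forall x \in \Omega,

where S(x) is a vector of known basis functions (e.g., radial basis functions) and W^* is an unknown ideal weight vector. The controller then uses the estimate \hat{W}^{\top} S(x) together with an adaptation law of the form \dot{\hat{W}} = \Gamma [ S(x) z - \sigma \hat{W} ], with z a tracking-error signal, \Gamma > 0 an adaptation gain, and \sigma > 0 a small robustifying (sigma-modification) term.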

665 citations

Journal ArticleDOI
01 Feb 2004
TL;DR: It is proved that the proposed backstepping design method is able to guarantee semi-global uniform ultimate boundedness of all the signals in the closed-loop system.
Abstract: In this paper, adaptive neural control is presented for a class of strict-feedback nonlinear systems with unknown time delays. The proposed design method does not require a priori knowledge of the signs of the unknown virtual control coefficients. The unknown time delays are compensated for using appropriate Lyapunov-Krasovskii functionals in the design. It is proved that the proposed backstepping design method is able to guarantee semi-global uniform ultimate boundedness of all the signals in the closed-loop system. In addition, the output of the system is proven to converge to a small neighborhood of the origin. Simulation results are provided to show the effectiveness of the proposed approach.
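A hedged sketch of the delay-compensation mechanism (generic form, not necessarily the exact functionals used in the paper): for an unknown constant delay \tau, a Lyapunov-Krasovskii term

    V_U(t) = \int_{t-\tau}^{t} U(x(s)) \, ds

is added to the Lyapunov function candidate. Its derivative \dot{V}_U = U(x(t)) - U(x(t-\tau)) supplies a term that cancels the delayed unknown-function terms arising in the error dynamics, which is why the resulting control law does not need to know the delays themselves.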

629 citations

Journal ArticleDOI
TL;DR: Complete geometric criteria are presented for controllability and reachability of switched linear systems.
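For context (an illustrative statement of the type of criterion involved; the paper should be consulted for the precise formulation): for a switched linear system \dot{x} = A_{\sigma} x + B_{\sigma} u with subsystems (A_i, B_i), i = 1, \dots, m, the reachable and controllable sets are characterized geometrically in terms of the smallest subspace of the state space that contains \sum_i \mathrm{Im}\, B_i and is invariant under every A_i.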

464 citations

Journal ArticleDOI
TL;DR: The proposed composite nonlinear feedback control technique is capable of beating the well-known time-optimal control in asymptotic tracking situations and can be applied to design servo systems that deal with "point-and-shoot" fast targeting.
Abstract: We study in this paper the theory and applications of a nonlinear control technique, i.e., the so-called composite nonlinear feedback control, for a class of linear systems with actuator nonlinearities. It consists of a linear feedback law and a nonlinear feedback law without any switching element. The linear feedback part is designed to yield a closed-loop system with a small damping ratio for a quick response, while at the same time not exceeding the actuator limits for the desired command input levels. The nonlinear feedback law is used to increase the damping ratio of the closed-loop system as the system output approaches the target reference, to reduce the overshoot caused by the linear part. It is shown that the proposed technique is capable of beating the well-known time-optimal control in asymptotic tracking situations. The application of this new technique to an actual hard disk drive servo system shows that it outperforms the conventional method by more than 30%. The technique can be applied to design servo systems that deal with "point-and-shoot" fast targeting.
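A hedged sketch of the general structure (state-feedback form; gain selection and the measurement-feedback case follow the paper): the composite law is the sum of a linear and a nonlinear part,

    u = u_L + u_N, \qquad u_L = F x + G r, \qquad u_N = \rho(r, y) \, B^{\top} P (x - x_e),

where F is chosen so that A + BF gives a fast but lightly damped linear closed loop, P > 0 solves a Lyapunov equation for A + BF, x_e is the steady state associated with the reference r, and \rho(r, y) \le 0 is a smooth gain whose magnitude grows as the output y approaches r, injecting extra damping near the target to suppress the overshoot caused by the linear part.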

434 citations


Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
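As a minimal illustration of the fourth category (a sketch only, assuming scikit-learn is available; the variable names and toy data are hypothetical), a personal mail filter can be learned from examples of messages the user kept or rejected:

    # Minimal sketch: learn a personal mail filter from labeled examples.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy training data: 0 = message kept, 1 = message rejected by the user.
    messages = ["meeting moved to 3pm", "win a free prize now",
                "project report attached", "cheap loans, act fast"]
    labels = [0, 1, 0, 1]

    # Bag-of-words features feeding a naive Bayes classifier.
    mail_filter = make_pipeline(CountVectorizer(), MultinomialNB())
    mail_filter.fit(messages, labels)

    print(mail_filter.predict(["claim your free prize"]))  # expected: [1]

As further examples of kept and rejected messages arrive, the model can simply be refit, which is the sense in which the filtering rules are maintained automatically rather than hand-programmed.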

13,246 citations

Journal ArticleDOI
TL;DR: Experimental results have demonstrated that MOEA/D with simple decomposition methods outperforms or performs similarly to MOGLS and NSGA-II on multiobjective 0-1 knapsack problems and continuous multiobjective optimization problems.
Abstract: Decomposition is a basic strategy in traditional multiobjective optimization. However, it has not yet been widely used in multiobjective evolutionary optimization. This paper proposes a multiobjective evolutionary algorithm based on decomposition (MOEA/D). It decomposes a multiobjective optimization problem into a number of scalar optimization subproblems and optimizes them simultaneously. Each subproblem is optimized by only using information from its several neighboring subproblems, which makes MOEA/D have lower computational complexity at each generation than MOGLS and nondominated sorting genetic algorithm II (NSGA-II). Experimental results have demonstrated that MOEA/D with simple decomposition methods outperforms or performs similarly to MOGLS and NSGA-II on multiobjective 0-1 knapsack problems and continuous multiobjective optimization problems. It has been shown that MOEA/D using objective normalization can deal with disparately-scaled objectives, and MOEA/D with an advanced decomposition method can generate a set of very evenly distributed solutions for 3-objective test instances. The ability of MOEA/D with small population, the scalability and sensitivity of MOEA/D have also been experimentally investigated in this paper.
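As an illustration of the decomposition step (the Tchebycheff approach, one of the scalarizations commonly used with MOEA/D), a multiobjective problem min F(x) = (f_1(x), \dots, f_m(x)) is decomposed into N scalar subproblems

    \min_x \ g^{te}(x \mid \lambda^j, z^*) = \max_{1 \le i \le m} \lambda_i^j \, | f_i(x) - z_i^* |, \qquad j = 1, \dots, N,

where \lambda^j is the j-th weight vector and z^* is the current ideal point. Subproblems whose weight vectors are close to each other are treated as neighbors, and each subproblem is optimized using solutions exchanged only with its neighbors, which is what keeps the per-generation computational cost low.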

6,657 citations

Book
30 Jun 2002
TL;DR: This book provides a comprehensive treatment of multiobjective evolutionary algorithms (MOEAs), covering basic concepts, test suites, testing and analysis, theory, applications, parallelization, and multi-criteria decision making.
Abstract: List of Figures. List of Tables. Preface. Foreword. 1. Basic Concepts. 2. Evolutionary Algorithm MOP Approaches. 3. MOEA Test Suites. 4. MOEA Testing and Analysis. 5. MOEA Theory and Issues. 6. Applications. 7. MOEA Parallelization. 8. Multi-Criteria Decision Making. 9. Special Topics. 10. Epilog. Appendix A: MOEA Classification and Technique Analysis. Appendix B: MOPs in the Literature. Appendix C: Ptrue & PFtrue for Selected Numeric MOPs. Appendix D: Ptrue & PFtrue for Side-Constrained MOPs. Appendix E: MOEA Software Availability. Appendix F: MOEA-Related Information. Index. References.

5,994 citations