Author

Kannan Parthasarathy

Other affiliations: Motorola, Yale University, Samsung SDS
Bio: Kannan Parthasarathy is an academic researcher from Citrix Systems. The author has contributed to research in topics including artificial neural networks and dynamical systems theory. The author has an h-index of 15 and has co-authored 32 publications receiving 9,438 citations. Previous affiliations of Kannan Parthasarathy include Motorola and Yale University.

Papers
Journal ArticleDOI
TL;DR: It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems, and that the models introduced are practically feasible.
Abstract: It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems. The emphasis is on models for both identification and control. Static and dynamic backpropagation methods for the adjustment of parameters are discussed. In the models that are introduced, multilayer and recurrent networks are interconnected in novel configurations, and hence there is a real need to study them in a unified fashion. Simulation results reveal that the identification and adaptive control schemes suggested are practically feasible. Basic concepts and definitions are introduced throughout, and theoretical questions that have to be addressed are also described.

7,692 citations
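
A minimal sketch of the series-parallel identification idea described in the abstract above, written in NumPy under illustrative assumptions (a hypothetical scalar plant, a single hidden-layer network, and plain stochastic gradient descent); it is not the paper's exact models or training procedure:

import numpy as np

# Hypothetical nonlinear plant y(k+1) = f(y(k), u(k)); f is unknown to the identifier.
def plant(y, u):
    return y / (1.0 + y**2) + u**3

rng = np.random.default_rng(0)

# One-hidden-layer MLP approximating f from the regressor [y(k), u(k)].
W1 = rng.normal(0, 0.5, (2, 20)); b1 = np.zeros(20)
W2 = rng.normal(0, 0.5, (20, 1)); b2 = np.zeros(1)

def mlp(x):
    h = np.tanh(x @ W1 + b1)           # hidden layer
    return (h @ W2 + b2).item(), h     # scalar prediction and hidden activations

lr, y = 0.05, 0.0
for k in range(10000):
    u = np.sin(2 * np.pi * k / 25)     # training input signal
    x = np.array([y, u])
    y_next = plant(y, u)               # measured plant output
    y_hat, h = mlp(x)                  # series-parallel: the regressor uses the measured y(k),
                                       # not the model's own previous prediction
    e = y_hat - y_next                 # one-step prediction error
    # Static backpropagation of the squared error through the network.
    gW2 = np.outer(h, e); gb2 = np.array([e])
    dh = (W2[:, 0] * e) * (1 - h**2)
    gW1 = np.outer(x, dh); gb1 = dh
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    y = y_next

print("final one-step prediction error:", abs(e))

Because the regressor is the measured plant output, static backpropagation suffices in this configuration; a parallel model that feeds back its own prediction would call for the dynamic backpropagation discussed in the next paper.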

Journal ArticleDOI
TL;DR: An extension of the backpropagation method, termed dynamic backpropagation, which can be applied in a straightforward manner for the optimization of the weights (parameters) of multilayer neural networks, is discussed.
Abstract: An extension of the backpropagation method, termed dynamic backpropagation, which can be applied in a straightforward manner for the optimization of the weights (parameters) of multilayer neural networks, is discussed. The method is based on the fact that gradient methods used in linear dynamical systems can be combined with backpropagation methods for neural networks to obtain the gradient of a performance index of nonlinear dynamical systems. The method can be applied to any complex system which can be expressed as the interconnection of linear dynamical systems and multilayer neural networks. To facilitate the practical implementation of the proposed method, emphasis is placed on the diagrammatic representation of the system which generates the gradient of the performance function.

662 citations
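
A small sketch of the dynamic backpropagation idea in its forward-sensitivity form, for an assumed first-order linear system in feedback with a two-parameter network; all names, values, and the performance index are illustrative:

import numpy as np

# Illustrative closed loop: a first-order linear system x(k+1) = a*x(k) + N(x(k); w)
# driven by a small network N(x; w) = w2 * tanh(w1 * x).
a = 0.8                      # linear system pole
w = np.array([1.0, 0.5])     # network parameters [w1, w2]

def net(x, w):
    return w[1] * np.tanh(w[0] * x)

def net_grads(x, w):
    # Partial derivatives of N with respect to x and to each weight.
    t = np.tanh(w[0] * x)
    dN_dx = w[1] * (1 - t**2) * w[0]
    dN_dw = np.array([w[1] * (1 - t**2) * x, t])
    return dN_dx, dN_dw

# Dynamic backpropagation, forward-sensitivity form: the sensitivity s(k) = dx(k)/dw
# obeys its own linear time-varying dynamical equation driven by the network gradients.
x, s = 1.0, np.zeros(2)
grad_J = np.zeros(2)          # gradient of J = sum_k 0.5 * x(k)^2
for k in range(50):
    grad_J += x * s                   # dJ/dw accumulates x(k) * dx(k)/dw
    dN_dx, dN_dw = net_grads(x, w)
    s = (a + dN_dx) * s + dN_dw       # sensitivity dynamics
    x = a * x + net(x, w)             # closed-loop state update

print("gradient of the performance index:", grad_J)

The point mirrored here is the one in the abstract: the gradient of a performance index of the nonlinear system is generated by an auxiliary dynamical system (the sensitivity recursion) running alongside the original interconnection.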

Patent
13 Aug 1998
TL;DR: A method and apparatus for implementing a graphical user interface keyboard (10) and a text buffer (12) on an electronic device is described in this paper, where a character that is active upon pointer-up is accepted as a text character.
Abstract: A method and apparatus for implementing a graphical user interface keyboard (10) and a text buffer (12) on an electronic device. A character that is active upon pointer-up is accepted as a text character, even though the character that is active upon pointer-up is different from a character that was active and inserted in the text buffer (12) upon pointer-down.

307 citations
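
A minimal sketch of the pointer-down/pointer-up behaviour described in the abstract above; the Keyboard class, key layout, and hit-testing are assumptions made only for illustration:

# Graphical keyboard sketch: a character is inserted provisionally on pointer-down,
# and the character active on pointer-up is the one finally accepted.
class Keyboard:
    def __init__(self, keys):
        self.keys = keys           # e.g. {"a": (0, 0), "s": (1, 0)}: key -> cell origin
        self.buffer = []           # the text buffer
        self.pending_index = None  # position of the provisionally inserted character

    def key_at(self, x, y):
        # Return the character whose 1x1 key cell contains (x, y), if any.
        for ch, (kx, ky) in self.keys.items():
            if kx <= x < kx + 1 and ky <= y < ky + 1:
                return ch
        return None

    def pointer_down(self, x, y):
        ch = self.key_at(x, y)
        if ch is not None:
            self.buffer.append(ch)                 # provisional insert on pointer-down
            self.pending_index = len(self.buffer) - 1

    def pointer_up(self, x, y):
        ch = self.key_at(x, y)
        if self.pending_index is not None and ch is not None:
            # Accept the character active on pointer-up, even if it differs
            # from the one inserted on pointer-down.
            self.buffer[self.pending_index] = ch
        self.pending_index = None

kb = Keyboard({"a": (0, 0), "s": (1, 0)})
kb.pointer_down(0.4, 0.5)    # pressed over "a"
kb.pointer_up(1.2, 0.5)      # released over "s", so "s" is accepted
print("".join(kb.buffer))    # -> "s"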

Patent
28 Aug 1997
TL;DR: In this paper, a method and apparatus for recognition of handwritten input is disclosed, where handwritten input, composed of a sequence of (x, y, pen) points, is preprocessed into a sequence of strokes.
Abstract: A method and apparatus for recognition of handwritten input is disclosed where handwritten input, composed of a sequence of (x, y, pen) points, is preprocessed into a sequence of strokes. A short list of candidate characters that are likely matches for the handwritten input is determined by finding a fast matching distance between the input sequence of strokes and a sequence of strokes representing each candidate character of a large character set, where the sequence of strokes for each candidate character is derived from statistical analysis of empirical data. The final sorted list of candidate characters that are likely matches for the handwritten input is determined by finding a detailed matching distance between the input sequence of strokes and the sequence of strokes for each candidate character of the short list. A final selectable list of candidate characters is presented to a user.

167 citations
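
A sketch of the two-stage matching described above, with assumed (deliberately simple) features and distances: a cheap fast-match distance prunes the full character set to a short list, and a more expensive detailed distance ranks only that short list. None of the metrics below are the patent's:

import numpy as np

def mean_direction(strokes):
    # Mean direction of the end-minus-start vector of each stroke.
    vecs = [s[-1] - s[0] for s in strokes]
    return float(np.mean([np.arctan2(v[1], v[0]) for v in vecs]))

def fast_distance(a, b):
    # Coarse cost: difference in stroke count plus difference in mean direction.
    return abs(len(a) - len(b)) + abs(mean_direction(a) - mean_direction(b))

def detailed_distance(a, b):
    # Pointwise cost between strokes, cycling over the shorter character.
    n = max(len(a), len(b))
    cost = 0.0
    for i in range(n):
        sa, sb = a[i % len(a)], b[i % len(b)]
        m = min(len(sa), len(sb))
        cost += float(np.mean(np.linalg.norm(sa[:m] - sb[:m], axis=1)))
    return cost

def recognize(input_strokes, templates, shortlist_size=5):
    # Stage 1: fast match over the whole character set gives a short list.
    shortlist = sorted(templates, key=lambda c: fast_distance(input_strokes, templates[c]))
    shortlist = shortlist[:shortlist_size]
    # Stage 2: detailed match sorts the short list into the final candidate list.
    return sorted(shortlist, key=lambda c: detailed_distance(input_strokes, templates[c]))

# Tiny example: two template characters, each a list of strokes (N x 2 point arrays).
templates = {
    "1": [np.array([[0.0, 0.0], [0.0, 1.0]])],
    "7": [np.array([[0.0, 1.0], [1.0, 1.0]]), np.array([[1.0, 1.0], [0.3, 0.0]])],
}
inp = [np.array([[0.05, 0.0], [0.0, 0.95]])]
print(recognize(inp, templates))    # "1" should rank first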

Patent
Kannan Parthasarathy
09 Feb 1998
TL;DR: In this article, a storage medium having stored thereon a set of instructions, which, when loaded into a microprocessor, causes the microprocessor to extract strokes from a plurality of characters (76), derive a pre-defined number of stroke models based on the strokes extracted from the plurality of characters (78), and represent the plurality of characters as sequences of stroke models (80).
Abstract: A storage medium (72) having stored thereon a set of instructions, which, when loaded into a microprocessor (74), causes the microprocessor (74) to extract strokes from a plurality of characters (76), derive a pre-defined number of stroke models based on the strokes extracted from the plurality of characters (78), and represent the plurality of characters as sequences of stroke models (80).

137 citations
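
A sketch of one way to realise the idea in the abstract above, under assumptions not taken from the patent: strokes are resampled to a fixed number of points, the "statistical analysis" is approximated by k-means clustering into a pre-defined number of stroke models, and characters are then encoded as sequences of nearest-model indices:

import numpy as np

def resample(stroke, n=8):
    # Linearly interpolate a stroke (M x 2 array) to n points and flatten it.
    t = np.linspace(0, 1, len(stroke))
    ti = np.linspace(0, 1, n)
    return np.column_stack([np.interp(ti, t, stroke[:, d]) for d in (0, 1)]).ravel()

def kmeans(X, k, iters=50, seed=0):
    # Plain k-means used here as a stand-in for the statistical derivation of stroke models.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def encode(character_strokes, centers):
    # Represent a character as the sequence of nearest stroke-model indices.
    feats = np.array([resample(s) for s in character_strokes])
    return list(np.argmin(((feats[:, None, :] - centers[None]) ** 2).sum(-1), axis=1))

# Toy training data: strokes pooled from many characters.
rng = np.random.default_rng(1)
training_strokes = [rng.random((5, 2)) for _ in range(40)]
X = np.array([resample(s) for s in training_strokes])
stroke_models = kmeans(X, k=4)      # pre-defined number of stroke models
print(encode([rng.random((5, 2)), rng.random((6, 2))], stroke_models))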


Cited by
Journal ArticleDOI
01 May 1993
TL;DR: The architecture and learning procedure underlying ANFIS (adaptive-network-based fuzzy inference system) are presented; ANFIS is a fuzzy inference system implemented in the framework of adaptive networks.
Abstract: The architecture and learning procedure underlying ANFIS (adaptive-network-based fuzzy inference system) is presented, which is a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. In the simulation, the ANFIS architecture is employed to model nonlinear functions, identify nonlinear components on-line in a control system, and predict a chaotic time series, all yielding remarkable results. Comparisons with artificial neural networks and earlier work on fuzzy modeling are listed and discussed. Other extensions of the proposed ANFIS and promising applications to automatic control and signal processing are also suggested.

15,085 citations
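
A minimal forward pass of an ANFIS-style (first-order Sugeno) fuzzy inference system with one input and two rules; the membership functions, parameter values, and the omission of the hybrid learning procedure are simplifications for illustration only:

import numpy as np

def gaussian_mf(x, c, s):
    # Gaussian membership function with centre c and spread s.
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def anfis_forward(x, premise, consequent):
    # Layers 1-2: firing strength of each rule from its membership function.
    w = np.array([gaussian_mf(x, c, s) for c, s in premise])
    # Layer 3: normalised firing strengths.
    wn = w / w.sum()
    # Layers 4-5: weighted sum of first-order consequents p*x + r.
    rule_out = np.array([p * x + r for p, r in consequent])
    return float(wn @ rule_out)

premise = [(-1.0, 1.0), (1.0, 1.0)]       # (centre, spread) of each rule's membership function
consequent = [(0.5, 0.0), (-0.5, 1.0)]    # (p, r) of each rule's linear consequent
for x in (-2.0, 0.0, 2.0):
    print(x, anfis_forward(x, premise, consequent))

In the paper, the premise parameters are tuned by gradient descent and the consequent parameters by least squares (the hybrid learning procedure); here both are simply fixed.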

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, and reviews deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations

Journal ArticleDOI
TL;DR: It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems, and that the models introduced are practically feasible.
Abstract: It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems. The emphasis is on models for both identification and control. Static and dynamic backpropagation methods for the adjustment of parameters are discussed. In the models that are introduced, multilayer and recurrent networks are interconnected in novel configurations, and hence there is a real need to study them in a unified fashion. Simulation results reveal that the identification and adaptive control schemes suggested are practically feasible. Basic concepts and definitions are introduced throughout, and theoretical questions that have to be addressed are also described.

7,692 citations

BookDOI
01 Jan 2001
TL;DR: This book presents the first comprehensive treatment of Monte Carlo techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection.
Abstract: Monte Carlo methods are revolutionizing the on-line analysis of data in fields as diverse as financial modeling, target tracking and computer vision. These methods, appearing under the names of bootstrap filters, condensation, optimal Monte Carlo filters, particle filters and survival of the fittest, have made it possible to solve numerically many complex, non-standard problems that were previously intractable. This book presents the first comprehensive treatment of these techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection, computer vision, semiconductor design, population biology, dynamic Bayesian networks, and time series analysis. This will be of great value to students, researchers and practitioners who have some basic knowledge of probability. Arnaud Doucet received the Ph.D. degree from the University of Paris-XI Orsay in 1997. From 1998 to 2000, he conducted research at the Signal Processing Group of Cambridge University, UK. He is currently an assistant professor at the Department of Electrical Engineering of Melbourne University, Australia. His research interests include Bayesian statistics, dynamic models and Monte Carlo methods. Nando de Freitas obtained a Ph.D. degree in information engineering from Cambridge University in 1999. He is presently a research associate with the artificial intelligence group of the University of California at Berkeley. His main research interests are in Bayesian statistics and the application of on-line and batch Monte Carlo methods to machine learning. Neil Gordon obtained a Ph.D. in Statistics from Imperial College, University of London in 1993. He is with the Pattern and Information Processing group at the Defence Evaluation and Research Agency in the United Kingdom. His research interests are in time series, statistical data analysis, and pattern recognition with a particular emphasis on target tracking and missile guidance.

6,574 citations
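
A minimal bootstrap particle filter, one of the methods surveyed in the book above, sketched for a commonly used scalar nonlinear benchmark model; the model, noise levels, and particle count are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
N, T, q, r = 500, 30, 1.0, 1.0      # particles, time steps, process and measurement noise variances

def f(x, k):
    # Nonlinear state transition.
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * k)

def h(x):
    # Nonlinear measurement function.
    return x**2 / 20.0

# Simulate a short ground-truth trajectory and its noisy measurements.
x_true, truth, ys = 0.0, [], []
for k in range(T):
    x_true = f(x_true, k) + rng.normal(0, np.sqrt(q))
    truth.append(x_true)
    ys.append(h(x_true) + rng.normal(0, np.sqrt(r)))

# Bootstrap filter: propagate, weight by the likelihood, resample ("survival of the fittest").
particles = rng.normal(0, 2, N)
estimates = []
for k, y in enumerate(ys):
    particles = f(particles, k) + rng.normal(0, np.sqrt(q), N)   # propagate through the dynamics
    w = np.exp(-0.5 * (y - h(particles))**2 / r) + 1e-300        # likelihood weights
    w /= w.sum()
    estimates.append(float(w @ particles))                       # posterior-mean estimate
    particles = particles[rng.choice(N, size=N, p=w)]            # resample in proportion to weight

print("mean absolute estimation error:", np.mean(np.abs(np.array(estimates) - np.array(truth))))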

Journal ArticleDOI
TL;DR: The general regression neural network (GRNN) is a one-pass learning algorithm with a highly parallel structure that provides smooth transitions from one observed value to another.
Abstract: A memory-based network that provides estimates of continuous variables and converges to the underlying (linear or nonlinear) regression surface is described. The general regression neural network (GRNN) is a one-pass learning algorithm with a highly parallel structure. It is shown that, even with sparse data in a multidimensional measurement space, the algorithm provides smooth transitions from one observed value to another. The algorithmic form can be used for any regression problem in which an assumption of linearity is not justified.

4,091 citations
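
A minimal GRNN-style sketch in the Nadaraya-Watson form: "training" is a single pass that stores the samples, and a prediction is a Gaussian-kernel weighted average of the stored outputs. The class interface, smoothing parameter, and test function are assumptions:

import numpy as np

class GRNN:
    def __init__(self, sigma=0.3):
        self.sigma = sigma                 # kernel smoothing parameter

    def fit(self, X, y):
        # One-pass learning: just store the training samples.
        self.X, self.y = np.asarray(X, float), np.asarray(y, float)
        return self

    def predict(self, Xq):
        Xq = np.atleast_2d(np.asarray(Xq, float))
        d2 = ((Xq[:, None, :] - self.X[None]) ** 2).sum(-1)   # squared distances to stored samples
        w = np.exp(-d2 / (2 * self.sigma**2))                 # Gaussian kernel weights
        return (w @ self.y) / w.sum(axis=1)                   # weighted average of stored outputs

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
model = GRNN(sigma=0.3).fit(X, y)
print(model.predict([[0.0], [1.5]]))       # roughly [sin(0), sin(1.5)]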