
Showing papers on "Function approximation published in 2012"


Journal ArticleDOI
TL;DR: Simulation and real-world experiments demonstrate that CVNNs with an amplitude-phase-type activation function show smaller generalization error than real-valued networks, such as bivariate and dual-univariate real-valued neural networks.
Abstract: Applications of complex-valued neural networks (CVNNs) have expanded widely in recent years, in particular in radar and coherent imaging systems. In general, the most important merit of neural networks lies in their generalization ability. This paper compares the generalization characteristics of complex-valued and real-valued feedforward neural networks in terms of the coherence of the signals to be dealt with. We assume a task of function approximation such as interpolation of temporal signals. Simulation and real-world experiments demonstrate that CVNNs with an amplitude-phase-type activation function show smaller generalization error than real-valued networks, such as bivariate and dual-univariate real-valued neural networks. Based on the results, we discuss how the generalization characteristics are influenced by the coherence of the signals, depending on the degree of freedom in the learning and on the circularity in neural dynamics.
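The amplitude-phase-type activation the abstract refers to can be sketched as a map that squashes the magnitude of a complex pre-activation while leaving its phase untouched. The minimal forward pass below is a generic illustration only; the layer sizes, random weights, and the tanh amplitude squashing are assumptions, not the paper's exact network:

```python
import numpy as np

def amp_phase_activation(z):
    """Amplitude-phase-type activation: squash the amplitude with tanh,
    keep the phase unchanged (a common CVNN choice; illustrative here)."""
    return np.tanh(np.abs(z)) * np.exp(1j * np.angle(z))

def cvnn_forward(x, W1, W2):
    """One-hidden-layer complex-valued forward pass."""
    h = amp_phase_activation(W1 @ x)
    return W2 @ h

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))
W2 = rng.normal(size=(1, 4)) + 1j * rng.normal(size=(1, 4))
x = np.array([1.0 + 0.5j, -0.3 + 1.0j])
y = cvnn_forward(x, W1, W2)
```

Phase preservation is what distinguishes this activation from treating real and imaginary parts as two independent real channels.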

249 citations


Journal ArticleDOI
TL;DR: It is proved that an ELM with adaptive growth of hidden nodes (AG-ELM), which provides a new approach for the automated design of networks, can approximate any Lebesgue p-integrable function on a compact input set.
Abstract: Extreme learning machines (ELMs) have been proposed for generalized single-hidden-layer feedforward networks which need not be neuron-like and perform well in both regression and classification applications. In this brief, we propose an ELM with adaptive growth of hidden nodes (AG-ELM), which provides a new approach for the automated design of networks. Different from other incremental ELMs (I-ELMs) whose existing hidden nodes are frozen when the new hidden nodes are added one by one, in AG-ELM the number of hidden nodes is determined in an adaptive way in the sense that the existing networks may be replaced by newly generated networks which have fewer hidden nodes and better generalization performance. We then prove that such an AG-ELM using Lebesgue p-integrable hidden activation functions can approximate any Lebesgue p-integrable function on a compact input set. Simulation results demonstrate and verify that this new approach can achieve a more compact network architecture than the I-ELM.

206 citations


Journal ArticleDOI
TL;DR: Many aspects associated with the RBF network, such as network structure, universal approximation capability, radial basis functions, RBF network learning, structure optimization, normalized RBF networks, application to dynamic system modeling, and nonlinear complex-valued signal processing, are described.
Abstract: The radial basis function (RBF) network has its foundation in conventional approximation theory. It has the capability of universal approximation. The RBF network is a popular alternative to the well-known multilayer perceptron (MLP), since it has a simpler structure and a much faster training process. In this paper, we give a comprehensive survey on the RBF network and its learning. Many aspects associated with the RBF network, such as network structure, universal approximation capability, radial basis functions, RBF network learning, structure optimization, normalized RBF networks, application to dynamic system modeling, and nonlinear complex-valued signal processing, are described. We also compare the features and capability of the two models.
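As a minimal concrete instance of the surveyed model, a Gaussian RBF network with fixed centers reduces to linear least squares on the output weights. The grid placement of centers, the width, and the target function below are illustrative choices, not the survey's full learning machinery:

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / (2 w^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# Fit a 1-D target with 10 centers on a grid; only output weights are solved.
X = np.linspace(0, 1, 100).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()
centers = np.linspace(0, 1, 10).reshape(-1, 1)
Phi = rbf_design(X, centers, width=0.1)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
max_err = np.max(np.abs(Phi @ w - y))
```

Center selection and width tuning (clustering, structure optimization) are where most of the surveyed learning methods differ.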

162 citations


Journal ArticleDOI
TL;DR: The analysis suggests that using the Chebyshev measure to precondition the ℓ1-minimization, which has been shown to be numerically advantageous in one dimension in the literature, may in fact become less efficient in high dimensions.
Abstract: The idea of ℓ1-minimization is the basis of the widely adopted compressive sensing method for function approximation. In this paper, we extend its application to high-dimensional stochastic collocation methods. To facilitate practical implementation, we employ orthogonal polynomials, particularly Legendre polynomials, as basis functions, and focus on the cases where the dimensionality is high enough that one cannot afford to construct high-degree polynomial approximations. We provide theoretical analysis on the validity of the approach. The analysis also suggests that using the Chebyshev measure to precondition the ℓ1-minimization, which has been shown to be numerically advantageous in one dimension in the literature, may in fact become less efficient in high dimensions. Numerical tests are provided to examine the performance of the methods and validate the theoretical findings.
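The ℓ1-minimization at the heart of the paper can be illustrated with a small basis-pursuit problem in a Legendre basis, solved as a linear program: minimize ||c||_1 subject to Ac = y via the standard split c = u − v with u, v ≥ 0. The degree, sample count, and sparsity below are toy choices for illustration, not the paper's high-dimensional regime:

```python
import numpy as np
from numpy.polynomial.legendre import legvander
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m = 21, 12                            # 21 Legendre coefficients, 12 samples
c_true = np.zeros(n)
c_true[[1, 4, 9]] = [1.0, -0.5, 0.8]     # sparse ground truth
x = rng.uniform(-1, 1, size=m)
A = legvander(x, n - 1)                  # Legendre design matrix
y = A @ c_true

# LP reformulation of basis pursuit: c = u - v, minimize sum(u + v).
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * n))
c_hat = res.x[:n] - res.x[n:]
```

Since the true coefficient vector is feasible, the recovered solution can never have larger ℓ1 norm than the truth; whether it matches the truth exactly depends on the sampling, which is the question the paper analyzes.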

129 citations


Journal ArticleDOI
TL;DR: In this paper, a sampling choice of points to be drawn at random for each function approximation is presented, together with algorithms for computing the approximating function whose complexity is at most polynomial in the dimension d and in the number m of points.
Abstract: Let us assume that f is a continuous function defined on the unit ball of ℝ^d, of the form f(x) = g(Ax), where A is a k×d matrix and g is a function of k variables for k ≪ d. We are given a budget m ∈ ℕ of possible point evaluations f(x_i), i = 1, …, m, of f, which we are allowed to query in order to construct a uniform approximating function. Under certain smoothness and variation assumptions on the function g, and an arbitrary choice of the matrix A, we present in this paper a sampling choice of the points {x_i} drawn at random for each function approximation, and algorithms (Algorithm 1 and Algorithm 2) for computing the approximating function, whose complexity is at most polynomial in the dimension d and in the number m of points. Due to the arbitrariness of A, the sampling points will be chosen according to suitable random distributions, and our results hold with overwhelming probability. Our approach uses tools taken from the compressed sensing framework, recent Chernoff bounds for sums of positive semidefinite matrices, and classical stability bounds for invariant subspaces of singular value decompositions.

104 citations



Journal ArticleDOI
TL;DR: An online actor–critic reinforcement learning algorithm with function approximation is developed for a problem of control under inequality constraints, and the asymptotic almost sure convergence of the algorithm to a locally optimal solution is proved.
Abstract: We develop an online actor–critic reinforcement learning algorithm with function approximation for a problem of control under inequality constraints. We consider the long-run average cost Markov decision process (MDP) framework in which both the objective and the constraint functions are suitable policy-dependent long-run averages of certain sample path functions. The Lagrange multiplier method is used to handle the inequality constraints. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal solution. We also provide the results of numerical experiments on a problem of routing in a multi-stage queueing network with constraints on long-run average queue lengths. We observe that our algorithm exhibits good performance on this setting and converges to a feasible point.

92 citations


Journal ArticleDOI
TL;DR: An alternative method for predicting maximum ground surface settlement, based on the integration of wavelet theory and artificial neural networks (ANN), i.e., a wavelet network (wavenet), is presented and demonstrated to enhance the function approximation capability, consequently exhibiting excellent learning ability.

83 citations


Journal ArticleDOI
TL;DR: Performance studies on function approximation and real-valued classification problems show that the proposed McFCRN performs better than existing results reported in the literature.

62 citations


Journal ArticleDOI
TL;DR: In this article, a backstepping-like adaptive controller based on function approximation technique (FAT) is designed so that the system in the new space can be stabilised with uniformly ultimately bounded performance which further implies the same stability property to the original space.
Abstract: This study proposes a strategy for the control of a class of underactuated mechanical systems. The Olfati transformation is first applied to represent the system in a special cascade form. Since, in general cases, some of the terms in the system dynamic equation represented in the new space might become too complex to derive, they are regarded as uncertainties. These uncertainties enter the system in a mismatched fashion, and their variation bounds are not available; therefore, most conventional robust or adaptive designs fail to stabilise the closed-loop dynamics. A backstepping-like adaptive controller based on the function approximation technique (FAT) is designed so that the system in the new space can be stabilised with uniformly ultimately bounded performance, which further implies the same stability property for the system in the original space. Simulation cases for the control of an inverted pendulum and a translational oscillator/rotational actuator (TORA) system are presented to justify the proposed design.

60 citations


Journal ArticleDOI
01 Jun 2012
TL;DR: A hybrid approach, Hybrid Algorithm (HA), which combines evolutionary and gradient-based learning methods to estimate the architecture, weights and node topology of GRBFNN classifiers is described, which leads to a promising improvement in accuracy.
Abstract: Radial Basis Function Neural Networks (RBFNNs) have been successfully employed in several function approximation and pattern recognition problems. The use of different RBFs in RBFNN has been reported in the literature and here the study centres on the use of the Generalized Radial Basis Function Neural Networks (GRBFNNs). An interesting property of the GRBF is that it can continuously and smoothly reproduce different RBFs by changing a real parameter τ. In addition, the mixed use of different RBF shapes in only one RBFNN is allowed. The Generalized Radial Basis Function (GRBF) is based on the Generalized Gaussian Distribution (GGD), which adds a shape parameter, τ, to the standard Gaussian Distribution. Moreover, this paper describes a hybrid approach, Hybrid Algorithm (HA), which combines evolutionary and gradient-based learning methods to estimate the architecture, weights and node topology of GRBFNN classifiers. The feasibility and benefits of the approach are demonstrated by means of six gene microarray classification problems taken from bioinformatics and biomedical domains. Three filters were applied: Fast Correlation-Based Filter (FCBF), Best Incremental Ranked Subset (BIRS), and Best Agglomerative Ranked Subset (BARS); this was done in order to identify salient expression genes from among the thousands of genes in microarray data that can directly contribute to determining the class membership of each pattern. After different gene subsets were obtained, the proposed methodology was performed using the selected gene subsets as new input variables. The results confirm that the GRBFNN classifier leads to a promising improvement in accuracy.

Journal ArticleDOI
TL;DR: The result shows that the proposed algorithm is able to identify simulated examples correctly, and identifies the adequate model for real process data based on a set of solutions called the Pareto optimal set, from which the best network can be selected.
Abstract: The problem of constructing an adequate and parsimonious neural network topology for modeling non-linear dynamic system is studied and investigated. Neural networks have been shown to perform function approximation and represent dynamic systems. The network structures are usually guessed or selected in accordance with the designer’s prior knowledge. However, the multiplicity of the model parameters makes it troublesome to get an optimum structure. In this paper, an alternative algorithm based on a multi-objective optimization algorithm is proposed. The developed neural network model should fulfil two criteria or objectives namely good predictive accuracy and minimum model structure. The result shows that the proposed algorithm is able to identify simulated examples correctly, and identifies the adequate model for real process data based on a set of solutions called the Pareto optimal set, from which the best network can be selected.

Journal ArticleDOI
TL;DR: It is proved that all signals in the closed-loop system are semiglobally uniformly bounded and control errors converge to an adjustable neighborhood of the origin.

Journal ArticleDOI
TL;DR: An offline–online computational procedure for the calculation of the reduced basis approximation and associated error bound is developed and it is shown that these bounds are rigorous upper bounds for the approximation error under certain conditions on the function interpolation, thus addressing the demand for certainty of the approximation.
Abstract: We present reduced basis approximations and associated a posteriori error bounds for parabolic partial differential equations involving (i) a nonaffine dependence on the parameter and (ii) a nonlinear dependence on the field variable. The method employs the Empirical Interpolation Method in order to construct "affine" coefficient-function approximations of the "nonaffine" (or nonlinear) parametrized functions. We consider linear time-invariant as well as linear time-varying nonaffine functions and introduce a new sampling approach to generate the function approximation space for the latter case. Our a posteriori error bounds take both error contributions explicitly into account — the error introduced by the reduced basis approximation and the error induced by the coefficient function interpolation. We show that these bounds are rigorous upper bounds for the approximation error under certain conditions on the function interpolation, thus addressing the demand for certainty of the approximation. As regards efficiency, we develop an offline–online computational procedure for the calculation of the reduced basis approximation and associated error bound. The method is thus ideally suited for the many-query or real-time contexts. Numerical results are presented to confirm and test our approach.

Journal ArticleDOI
TL;DR: The work establishes the superiority of feed-forward backpropagation neural nets for the airfoil geometry determination due to good function approximation properties of the neural architecture and the use of Bezier–PARSEC 3434 parameterization scheme.

Book
30 Oct 2012
TL;DR: Fuzzy Rule Extraction for Function Approximation from Numerical Data and Solutions to Problems.
Abstract: Foreword Anca Ralescu. Preface. Introduction. 1. Overview of Neural Networks. 2. The Hopfield Network. 3. Multilayered Networks. 4. Other Neural Networks. 5. Overview of Fuzzy Systems. 6. Fuzzy Rule Extraction for Pattern Classification from Numerical Data. 7. Fuzzy Rule Extraction for Function Approximation from Numerical Data. 8. Composite Systems. References. Solutions to Problems. Index: Subject Index. Author Index.

Proceedings Article
22 Jul 2012
TL;DR: This work derives an adaptive upper bound on the step-size parameter to guarantee that online TD learning with linear function approximation will not diverge, and empirically evaluates algorithms using this upper bound as a heuristic for adapting the step-size parameter online.

Abstract: The step-size, often denoted as α, is a key parameter for most incremental learning algorithms. Its importance is especially pronounced when performing online temporal difference (TD) learning with function approximation. Several methods have been developed to adapt the step-size online. These range from straightforward back-off strategies to adaptive algorithms based on gradient descent. We derive an adaptive upper bound on the step-size parameter to guarantee that online TD learning with linear function approximation will not diverge. We then empirically evaluate algorithms using this upper bound as a heuristic for adapting the step-size parameter online. We compare performance with related work including HL(λ) and Autostep. Our results show that this adaptive upper bound heuristic outperforms all existing methods without requiring any meta-parameters. This effectively eliminates the need to tune the learning rate of temporal difference learning with linear function approximation.
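The flavor of such a step-size bound can be sketched for TD(0) with linear function approximation. Here the per-step α is clipped by 1/|φ⊤(φ − γφ′)|, which is one simple reading of this kind of non-divergence bound and not the paper's exact formula; the two-state chain, tabular features, and base step-size are illustrative assumptions:

```python
import numpy as np

def td0_adaptive(phis, rewards, gamma=0.9, n_passes=200):
    """TD(0) with linear function approximation; the step-size is capped
    by 1 / |phi.(phi - gamma*phi_next)| on every transition (a hedged
    sketch of a divergence-preventing bound, not the paper's formula)."""
    w = np.zeros(phis.shape[1])
    for _ in range(n_passes):
        for t in range(len(rewards)):
            phi, phi_next = phis[t], phis[t + 1]
            delta = rewards[t] + gamma * (phi_next @ w) - phi @ w
            denom = abs(phi @ (phi - gamma * phi_next))
            alpha = min(0.5, 1.0 / denom) if denom > 0 else 0.5
            w += alpha * delta * phi
    return w

# Two-state cycle s0 -> s1 -> s0, reward 1 everywhere, tabular features,
# so the true values are 1/(1 - gamma) = 10 for both states.
phis = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
rewards = np.array([1.0, 1.0])
w = td0_adaptive(phis, rewards)
```

With tabular features the cap is inactive most of the time; its purpose shows up when features overlap and a fixed α would overshoot.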

Journal ArticleDOI
TL;DR: This paper presents an algorithm based on stochastic optimization to tune the thresholds that are associated with a TLC algorithm for optimal performance, and proposes the following three novel TLC algorithms: a full-state Q- learning algorithm with state aggregation, a Q-learning algorithm with function approximation that involves an enhanced feature selection scheme, and a priority-based TLC scheme.
Abstract: Adaptive control of traffic lights is a key component of any intelligent transportation system. Many real-time traffic light control (TLC) algorithms are based on graded thresholds, because precise information about the traffic congestion in the road network is hard to obtain in practice. For example, using thresholds L1 and L2 , we could mark the congestion level on a particular lane as “low,” “medium,” or “high” based on whether the queue length on the lane is below L1, between L1 and L2, or above L2 , respectively. However, the TLC algorithms that were proposed in the literature incorporate fixed values for the thresholds, which, in general, are not optimal for all traffic conditions. In this paper, we present an algorithm based on stochastic optimization to tune the thresholds that are associated with a TLC algorithm for optimal performance. We also propose the following three novel TLC algorithms: 1) a full-state Q-learning algorithm with state aggregation, 2) a Q-learning algorithm with function approximation that involves an enhanced feature selection scheme, and 3) a priority-based TLC scheme. All these algorithms are threshold based. Next, we combine the threshold-tuning algorithm with the three aforementioned algorithms. Such a combination results in several interesting consequences. For example, in the case of Q-learning with full-state representation, our threshold-tuning algorithm suggests an optimal way of clustering states to reduce the cardinality of the state space, and in the case of the Q-learning algorithm with function approximation, our (threshold-tuning) algorithm provides a novel feature adaptation scheme to obtain an “optimal” selection of features. Our tuning algorithm is an incremental-update online scheme with proven convergence to the optimal values of thresholds. 
Moreover, the additional computational effort that is required because of the integration of the tuning scheme in any of the graded-threshold-based TLC algorithms is minimal. Simulation results show a significant gain in performance when our threshold-tuning algorithm is used in conjunction with various TLC algorithms compared to the original TLC algorithms without tuning and with fixed thresholds.

Journal ArticleDOI
TL;DR: A neural network approximation to direct trajectory optimization methods is presented and provides a continuously differentiable function approximation which may be advantageous when a discontinuous objective function is used in a nonlinear solver.
Abstract: A neural network approximation to direct trajectory optimization methods is presented. The method uses neural networks to approximate the dynamics and objective equations integrated over a given time interval. The trajectory is then built recursively and treated as a nonlinear programming problem. The method is compared to a direct collocation method as well as more recent pseudospectral methods and shows competitive results while being computationally faster. In addition, a neural network provides a continuously differentiable function approximation which may be advantageous when a discontinuous objective function is used in a nonlinear solver. A surveillance trajectory planning problem for an unmanned aerial vehicle is given as an example application and results are presented for all three methods.

Journal ArticleDOI
01 Jan 2012
TL;DR: An efficient approach is presented to improve the local and global approximation and modelling capability of Takagi-Sugeno (T-S) fuzzy model and the high accuracy obtained in approximating nonlinear and unstable systems locally and globally in comparison with the original T-S model.
Abstract: An efficient approach is presented to improve the local and global approximation and modelling capability of the Takagi-Sugeno (T-S) fuzzy model. The main aim is obtaining high function approximation accuracy. The main problem is that the T-S identification method cannot be applied when the membership functions are overlapped by pairs. This restricts the use of the T-S method, because this type of membership function has been widely used during the last two decades in stability analysis and controller design, and is popular in industrial control applications. The approach developed here can be considered as a generalized version of the T-S method with optimized performance in approximating nonlinear functions. A simple approach with little computational effort, based on the well-known parameter-weighting method, is suggested for tuning the T-S parameters in order to improve the choice of the performance index and minimize it. A global fuzzy controller (FC) based on a Linear Quadratic Regulator (LQR) is proposed in order to show the effectiveness of the estimation method developed here in control applications. Illustrative examples of an inverted pendulum and the Van der Pol system are chosen to evaluate the robustness and remarkable performance of the proposed method and the high accuracy obtained in approximating nonlinear and unstable systems locally and globally, in comparison with the original T-S model. Simulation results indicate the potential, simplicity and generality of the algorithm.

Journal ArticleDOI
TL;DR: In this article, a linear parameterization (LP) model is proposed to represent the tyre friction and a modified version of the recursive least squares, subject to a set of equality constraints on parameters, is employed to identify the LP in real time.
Abstract: Spurred by the problem of identifying, in real-time, the adhesion levels between the tyre and the road, a practical, linear parameterisation (LP) model is proposed to represent the tyre friction. Towards that aim, results from the theory of function approximation, together with optimisation techniques, are explored to approximate the non-linear Burckhardt model with a new LP representation. It is shown that, compared with other approximations described in the literature, the proposed LP model is more efficient, that is, it requires a smaller number of parameters, and provides better approximation capabilities. Next, a modified version of the recursive least squares, subject to a set of equality constraints on parameters, is employed to identify the LP in real time. The inclusion of these constraints, arising from the parametric relationships present when the tyre is in free-rolling mode, reduces the variance of the parametric estimation and improves the convergence of the identification algorithm, particularly in situations with low tyre slips. The simulation results obtained with the full-vehicle CarSim model under different road adhesion conditions demonstrate the effectiveness of the proposed LP and the robustness of the friction peak estimation method. Furthermore, the experimental tests, performed with an electric vehicle under low-grip roads, provide further validation of the accuracy and potential of the estimation technique.
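The recursive least squares core of such an identification scheme can be sketched as follows; the paper's equality constraints on the parameters are omitted here, and the toy regressors and parameter vector are illustrative assumptions, not the Burckhardt-based friction model:

```python
import numpy as np

class RLS:
    """Standard recursive least squares for a linear-in-parameters model
    y = phi . theta, with forgetting factor lam (the paper additionally
    imposes equality constraints on theta, not shown in this sketch)."""
    def __init__(self, n, lam=0.99):
        self.theta = np.zeros(n)
        self.P = np.eye(n) * 1e3        # large initial covariance
        self.lam = lam

    def update(self, phi, y):
        P_phi = self.P @ phi
        k = P_phi / (self.lam + phi @ P_phi)          # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, P_phi)) / self.lam
        return self.theta

# Identify a toy linear parameterization from noisy samples.
rng = np.random.default_rng(2)
theta_true = np.array([1.2, -0.7, 0.3])
est = RLS(3)
for _ in range(500):
    phi = rng.normal(size=3)
    y = phi @ theta_true + 0.01 * rng.normal()
    est.update(phi, y)
```

The constrained variant in the paper projects each update onto the set defined by the free-rolling parametric relationships, which is what reduces estimator variance at low slip.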

Journal ArticleDOI
TL;DR: An adaptive modeling and control scheme for drug delivery system based on the proposed SGFNN is presented and the ability of the proposed approach for estimating the drug's effect and regulating blood pressure at a prescribed level is demonstrated.
Abstract: In this paper, a novel efficient learning algorithm towards self-generating fuzzy neural network (SGFNN) is proposed based on ellipsoidal basis function (EBF) and is functionally equivalent to a Takagi-Sugeno-Kang (TSK) fuzzy system. The proposed algorithm is simple and efficient and is able to generate a fuzzy neural network with high accuracy and compact structure. The structure learning algorithm of the proposed SGFNN combines criteria of fuzzy-rule generation with a pruning technology. The Kalman filter (KF) algorithm is used to adjust the consequent parameters of the SGFNN. The SGFNN is employed in a wide range of applications ranging from function approximation and nonlinear system identification to chaotic time-series prediction problem and real-world fuel consumption prediction problem. Simulation results and comparative studies with other algorithms demonstrate that a more compact architecture with high performance can be obtained by the proposed algorithm. In particular, this paper presents an adaptive modeling and control scheme for drug delivery system based on the proposed SGFNN. Simulation study demonstrates the ability of the proposed approach for estimating the drug's effect and regulating blood pressure at a prescribed level.

Journal ArticleDOI
01 Jan 2012-Robotica
TL;DR: A backstepping-like procedure incorporating the model reference adaptive control strategy is employed to construct the impedance controller and the function approximation technique is applied to estimate time-varying uncertainties in the system dynamics.
Abstract: To the best of our knowledge, this is the first paper to focus on the adaptive impedance control of robot manipulators with consideration of joint flexibility and actuator dynamics. Controller design for this problem is difficult because each joint of the robot has to be described by a fifth-order cascade differential equation. In this paper, a backstepping-like procedure incorporating the model reference adaptive control strategy is employed to construct the impedance controller. The function approximation technique is applied to estimate time-varying uncertainties in the system dynamics. The proposed control law is free from the calculation of the tedious regressor matrix, which is a significant simplification in implementation. Closed-loop stability and boundedness of internal signals are proved by the Lyapunov-like analysis with consideration of the function approximation error. Computer simulation results are presented to demonstrate the usefulness of the proposed scheme.

Journal ArticleDOI
TL;DR: A novel robust control approach to the learning problems of FNNs is investigated in this study in order to develop efficient learning algorithms that can be implemented with optimal parameter settings while accounting for noise in the data.

Journal ArticleDOI
TL;DR: A weight theory inspired by quasi-Monte Carlo theory is developed to identify which functions have low effective dimension using the ANOVA expansion in different norms using Jacobi polynomial chaos to represent the terms of the expansion.
Abstract: We focus on the analysis of variance (ANOVA) method for high dimensional function approximation using Jacobi polynomial chaos to represent the terms of the expansion. First, we develop a weight theory inspired by quasi-Monte Carlo theory to identify which functions have low effective dimension using the ANOVA expansion in different norms. We then present estimates for the truncation error in the ANOVA expansion and for the interpolation error using multielement polynomial chaos in the weighted Korobov spaces over the unit hypercube. We consider both the standard ANOVA expansion using the Lebesgue measure and the anchored ANOVA expansion using the Dirac measure. The optimality of different sets of anchor points is also examined through numerical examples.

Journal ArticleDOI
01 Jun 2012
TL;DR: This work proposes a new approach to approximate any known function by a Takagi-Sugeno-Kang fuzzy system with a guaranteed upper bound on the approximation error and provides sufficient conditions to be universal approximators with specified error bounds.
Abstract: Fuzzy systems are excellent approximators of known functions or for the dynamic response of a physical system. We propose a new approach to approximate any known function by a Takagi-Sugeno-Kang fuzzy system with a guaranteed upper bound on the approximation error. The new approach is also used to approximately represent the behavior of a dynamic system from its input-output pairs using experimental data with known error bounds. We provide sufficient conditions for this class of fuzzy systems to be universal approximators with specified error bounds. The new conditions require a smaller number of membership functions than all previously published conditions. We illustrate the new results and compare them to published error bounds through numerical examples.
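A generic first-order TSK system of the kind discussed, with Gaussian rule memberships and linear consequents, can be sketched as below. The rule count, centers, widths, and the tangent-line consequents for approximating x² are illustrative choices, not the paper's construction or its error bounds:

```python
import numpy as np

def tsk_eval(x, centers, widths, coeffs):
    """First-order Takagi-Sugeno-Kang system on a scalar input:
    Gaussian firing strengths, normalized weighted sum of local
    linear consequents a*x + b (a generic TSK form)."""
    w = np.exp(-((x - centers) ** 2) / (2 * widths ** 2))  # firing strengths
    w = w / w.sum()                                        # normalize
    rule_outputs = coeffs[:, 0] * x + coeffs[:, 1]
    return w @ rule_outputs

# Approximate f(x) = x^2 on [0, 1] with three rules whose consequents
# are the tangent lines of x^2 at the rule centers: 2c*x - c^2.
centers = np.array([0.0, 0.5, 1.0])
widths = np.full(3, 0.25)
coeffs = np.column_stack([2 * centers, -centers ** 2])
errs = [abs(tsk_eval(x, centers, widths, coeffs) - x ** 2)
        for x in np.linspace(0, 1, 21)]
```

The paper's contribution is a constructive bound on the worst case of exactly this kind of error, with fewer membership functions than prior conditions required.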

Journal ArticleDOI
TL;DR: A computationally efficient optimization ofANFIS networks is proposed, based on a hierarchical constructive procedure, by which the number of rules is progressively increased and the optimal one is automatically determined on the basis of learning theory in order to maximize the generalization capability of the resulting ANFIS network.
Abstract: Adaptive neurofuzzy inference systems (ANFIS) represent an efficient technique for the solution of function approximation problems. When numerical samples are available in this regard, the synthesis of ANFIS networks can be carried out exploiting clustering algorithms. Starting from a hyperplane clustering synthesis in the joint input-output space, a computationally efficient optimization of ANFIS networks is proposed in this paper. It is based on a hierarchical constructive procedure, by which the number of rules is progressively increased and the optimal one is automatically determined on the basis of learning theory in order to maximize the generalization capability of the resulting ANFIS network. Extensive computer simulations prove the validity of the proposed algorithm and show a favorable comparison with other well-established techniques.

Proceedings ArticleDOI
30 Jul 2012
TL;DR: In this article, specific divergence examples of value-iteration for several major Reinforcement Learning and Adaptive Dynamic Programming algorithms, when using a function approximator for the value function, were given.
Abstract: This paper gives specific divergence examples of value-iteration for several major Reinforcement Learning and Adaptive Dynamic Programming algorithms, when using a function approximator for the value function. These divergence examples differ from previous divergence examples in the literature, in that they are applicable for a greedy policy, i.e. in a “value iteration” scenario. Perhaps surprisingly, with a greedy policy, it is also possible to get divergence for the algorithms TD(1) and Sarsa(1). In addition to these divergences, we also achieve divergence for the Adaptive Dynamic Programming algorithms HDP, DHP and GDHP.
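The kind of divergence at issue can be reproduced with the textbook two-feature sketch below, where a single transition from a state with feature value 1 to a state with feature value 2 is sampled repeatedly, as can happen under a greedy policy. This mirrors the flavor of such counterexamples, not any specific example from the paper:

```python
import numpy as np

# Linear value function V(s) = w * phi(s), with phi = 1 at the source
# state and phi = 2 at the successor, zero reward. TD(0) then updates
# w <- w + alpha * (gamma*2*w - w), which grows whenever gamma > 1/2.
gamma, alpha, w = 0.99, 0.1, 1.0
history = []
for _ in range(100):
    delta = 0.0 + gamma * (2 * w) - w   # TD error on the sampled transition
    w += alpha * delta * 1.0            # feature of the source state is 1
    history.append(w)
```

The weight grows by a constant factor per update, so the value estimates blow up even though the true values are zero.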

Proceedings ArticleDOI
13 Aug 2012
TL;DR: This paper investigates two classes of nonparametric adaptive elements, that is, adaptive elements whose number of parameters grows in response to data: RBF adaptive elements with dynamically allocated centers, and Gaussian Process based adaptive elements, which generalize the notion of a Gaussian distribution to function approximation.
Abstract: Most current model reference adaptive control methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element is fixed a priori, often through expert judgment. Examples of such adaptive elements are the commonly used Radial Basis Function Neural Networks (RBF-NNs) with centers allocated a priori based on the expected operating domain. If the system operates outside of the expected operating domain, such adaptive elements can become non-effective, thus rendering the adaptive controller only semi-global in nature. This paper investigates two classes of nonparametric adaptive elements, that is, adaptive elements whose number of parameters grows in response to data. This includes RBF adaptive elements with centers that are allocated dynamically as the system evolves using a kernel linear independence test, and Gaussian Process based adaptive elements, which generalize the notion of a Gaussian distribution to function approximation. We show that these nonparametric adaptive elements result in good closed-loop performance without requiring any prior knowledge about the domain of the uncertainty. These results indicate that the use of such nonparametric adaptive elements can improve the global stability properties of adaptive controllers.

Journal ArticleDOI
TL;DR: This paper considers the tracking control of a nonaffine system and proposes a performance-dependent self-organizing approximation approach, which monitors the tracking performance and adds basis elements only as needed to achieve the tracking specification.
Abstract: This paper considers tracking control for single-input, single-output nonaffine dynamic systems. A performance-dependent self-organizing approximation-based approach is proposed. The designer specifies a positive tracking error criterion. The self-organizing approximation-based controller then monitors the tracking performance and adds basis elements only as needed to achieve the tracking specification. Even though the system is not affine, the approach is defined such that the approximated function is independent of the control variable u. Stability is proved and the self-organization is derived in a Lyapunov-based methodology. To illustrate certain novel aspects of the proposed controller, a numerical example is included.