Open Access · Journal Article · DOI

Orthogonal least squares methods and their application to non-linear system identification

Sheng Chen, +2 more
01 Nov 1989 · Vol. 50, Iss. 5, pp. 1873–1896
Abstract
Identification algorithms based on the well-known linear least squares methods of Gaussian elimination, Cholesky decomposition, classical Gram-Schmidt, modified Gram-Schmidt, Householder transformation, Givens method, and singular value decomposition are reviewed. The classical Gram-Schmidt, modified Gram-Schmidt, and Householder transformation algorithms are then extended to combine structure determination (that is, deciding which terms to include in the model) and parameter estimation in a very simple and efficient manner for a class of multivariate discrete-time non-linear stochastic systems which are linear in the parameters.
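The combined structure-determination and parameter-estimation step works by orthogonalizing candidate regressors one at a time and keeping the terms that explain the most output variance. A minimal Python sketch of this kind of orthogonal forward selection follows; it uses classical Gram-Schmidt together with the error reduction ratio (ERR) criterion associated with this line of work, and all names and the stopping tolerance are illustrative, not taken from the paper.

```python
import numpy as np

def ols_forward_select(P, y, tol=0.01):
    """Greedy orthogonal forward selection of model terms.

    P   : (N, M) matrix whose columns are candidate regressors.
    y   : (N,) output vector.
    tol : stop once less than `tol` of the output variance is unexplained.

    Returns the indices of the selected terms and their error reduction
    ratios (ERR), the fraction of y'y each orthogonalized term explains.
    """
    y = np.asarray(y, dtype=float)
    yy = y @ y
    selected, errs, W = [], [], []
    remaining = list(range(P.shape[1]))
    while remaining and 1.0 - sum(errs) > tol:
        best = None
        for j in remaining:
            w = P[:, j].astype(float)
            for wk in W:                   # classical Gram-Schmidt step
                w = w - (wk @ P[:, j]) / (wk @ wk) * wk
            d = w @ w
            if d < 1e-12:                  # dependent on already-chosen terms
                continue
            err = (w @ y) ** 2 / (d * yy)  # error reduction ratio
            if best is None or err > best[0]:
                best = (err, j, w)
        if best is None:
            break
        err, j, w = best
        selected.append(j); errs.append(err); W.append(w)
        remaining.remove(j)
    return selected, errs
```

The parameter estimates for the selected terms then follow by back-substitution through the triangular system the orthogonalization builds up. The classical Gram-Schmidt version above is only the easiest to read; the modified Gram-Schmidt and Householder variants the abstract mentions perform the same selection with better numerical behaviour.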

Citations
Proceedings Article

Bayesian pursuit algorithms

TL;DR: This paper shows that the Lagrangian formulation of the standard sparse representation (SR) problem can be regarded as a limit case of a general maximum a posteriori (MAP) problem involving Bernoulli-Gaussian variables, and proposes different tractable implementations of this MAP problem.
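As orientation, the limit relationship this TL;DR refers to is usually stated as follows (a standard formulation, not a quotation from the paper): each coefficient is modelled as

$$x_i = s_i a_i, \qquad s_i \sim \mathrm{Bernoulli}(p), \qquad a_i \sim \mathcal{N}(0, \sigma_x^2),$$

and, with Gaussian observation noise, joint MAP estimation of $(s, a)$ leads in the limit $\sigma_x^2 \to \infty$ to minimizing a cost of the form $\Vert y - Ax \Vert_2^2 + \lambda \Vert x \Vert_0$, with $\lambda$ fixed by the noise variance and $p$; that is, the Lagrangian form of the standard SR problem.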
Journal Article · DOI

Identification of nonlinear time-varying systems using an online sliding-window and common model structure selection (CMSS) approach with applications to EEG

TL;DR: The identification of nonlinear time-varying systems using linear-in-the-parameter models is investigated, and an efficient common model structure selection (CMSS) algorithm is proposed to select a common model structure, with application to EEG data modelling.
Journal Article · DOI

An adaptive wavelet neural network for spatio-temporal system identification

TL;DR: A novel two-stage hybrid training scheme is developed for constructing a parsimonious network model, which produces a ranked list of wavelet neurons according to the capability of each neuron to represent the total variance in the system output signal.
Journal Article · DOI

Homotopy-Based Algorithms for $\ell_0$-Regularized Least-Squares

Abstract: Sparse signal restoration is usually formulated as the minimization of a quadratic cost function $\Vert y - Ax \Vert_2^2$, where $A$ is a dictionary and $x$ is an unknown sparse vector. It is well known that imposing an $\ell_0$ constraint leads to an NP-hard minimization problem. The convex relaxation approach, in which the $\ell_0$-norm is replaced by the $\ell_1$-norm, has received considerable attention. Among the many effective $\ell_1$ solvers, the homotopy algorithm minimizes $\Vert y - Ax \Vert_2^2 + \lambda \Vert x \Vert_1$ with respect to $x$ for a continuum of $\lambda$'s. It is inspired by the piecewise regularity of the $\ell_1$-regularization path, also referred to as the homotopy path. In this paper, we address the minimization problem $\Vert y - Ax \Vert_2^2 + \lambda \Vert x \Vert_0$ for a continuum of $\lambda$'s and propose two heuristic search algorithms for $\ell_0$-homotopy. Continuation Single Best Replacement is a forward-backward greedy strategy extending the Single Best Replacement algorithm, previously proposed for $\ell_0$-minimization at a given $\lambda$. The adaptive search of the $\lambda$-values is inspired by $\ell_1$-homotopy. $\ell_0$ Regularization Path Descent is a more complex algorithm exploiting the structural properties of the $\ell_0$-regularization path, which is piecewise constant with respect to $\lambda$. Both algorithms are empirically evaluated for difficult inverse problems involving ill-conditioned dictionaries. Finally, we show that they can be easily coupled with usual methods of model order selection.
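A naive sketch may make the Single Best Replacement step concrete. The version below (Python; illustrative names, and it refits the least-squares subproblem from scratch where the actual algorithm uses efficient updates) descends on $\Vert y - Ax \Vert_2^2 + \lambda \Vert x \Vert_0$ at a fixed $\lambda$ by trying every single insertion into, or removal from, the current support and taking the best strictly improving move.

```python
import numpy as np

def sbr(A, y, lam, max_iter=100):
    """Single Best Replacement-style greedy descent on
    ||y - A x||_2^2 + lam * ||x||_0 at a fixed lam."""
    n = A.shape[1]
    support = set()

    def cost(S):
        if not S:
            return float(y @ y)
        idx = sorted(S)
        sol, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ sol
        return float(r @ r) + lam * len(S)

    best_cost = cost(support)
    for _ in range(max_iter):
        best_move, best_new = None, best_cost
        for j in range(n):
            trial = support ^ {j}   # insert j if absent, remove if present
            c = cost(trial)
            if c < best_new - 1e-12:
                best_move, best_new = trial, c
        if best_move is None:       # no single change improves the cost
            break
        support, best_cost = best_move, best_new

    x = np.zeros(n)
    if support:
        idx = sorted(support)
        sol, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        x[idx] = sol
    return x
```

The continuation ($\ell_0$-homotopy) idea is then to re-run such a step while sweeping $\lambda$ downward, warm-starting each solve from the previous support.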
References
Book

Applied Regression Analysis

TL;DR: In this book, fitting a straight line by least squares is treated as the basic case, and the Durbin-Watson test is used for checking the straight-line fit.
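As a small illustration of the two techniques this TL;DR names, here is a self-contained Python example on synthetic data (the coefficients and noise level are made up):

```python
import numpy as np

# Fit a straight line y = b0 + b1*x by least squares, then check the
# residuals for serial correlation with the Durbin-Watson statistic
# (values near 2 suggest uncorrelated residuals).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 1.5 + 0.8 * x + rng.normal(scale=0.3, size=x.size)

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta

dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
print(f"intercept={beta[0]:.3f}, slope={beta[1]:.3f}, DW={dw:.3f}")
```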
Journal Article · DOI

Singular value decomposition and least squares solutions

TL;DR: The decomposition of $A$ is called the singular value decomposition (SVD) and the diagonal elements of $\Sigma$ are the non-negative square roots of the eigenvalues of $A^{T}A$; they are called singular values.
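The stated relationship is easy to check numerically; a short NumPy verification on an arbitrary example matrix:

```python
import numpy as np

# The singular values of A equal the non-negative square roots of the
# eigenvalues of A^T A (checked here on an arbitrary 3x2 matrix).
A = np.array([[2.0, 0.0],
              [1.0, 1.0],
              [0.0, 3.0]])

s = np.linalg.svd(A, compute_uv=False)     # singular values, descending
lam = np.linalg.eigvalsh(A.T @ A)[::-1]    # eigenvalues of A^T A, descending
assert np.allclose(s, np.sqrt(lam))
print(s, np.sqrt(lam))
```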
Book

Linear regression analysis

TL;DR: In this book, the authors treat the further development of efficient, accurate regression computer programs as an important part of statistical research, and provide up-to-date accounts of computational methods and algorithms currently in use without getting entrenched in minor computing details.
Journal Article · DOI

Input-output parametric models for non-linear systems. Part II: stochastic non-linear systems

TL;DR: Recursive input-output models for non-linear multivariate discrete-time systems are derived, and sufficient conditions for their existence are defined.
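For context, the recursive input-output models referred to here are of the NARMAX type. A representative single-output form, stated from the general NARMAX literature rather than quoted from this paper, is

$$y(k) = F\bigl(y(k-1), \ldots, y(k-n_y),\; u(k-1), \ldots, u(k-n_u),\; e(k-1), \ldots, e(k-n_e)\bigr) + e(k),$$

where $u$, $y$ and $e$ denote the input, output and noise sequences and $F(\cdot)$ is some non-linear function. Expanding $F$ over a fixed set of candidate terms gives a model that is linear in the parameters, which is exactly the model class the orthogonal least squares procedures above are designed for.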