Journal ArticleDOI

Computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations

N. K. Gupta, +1 more
- 01 Dec 1974
- Vol. 19, Iss. 6, pp. 774-783
TLDR
Different gradient-based nonlinear programming methods are discussed in a unified framework, their applicability to maximum likelihood estimation is examined, and new results on the calculation of state sensitivity functions via reduced-order models are given.
Abstract
This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems caused by a singular Hessian or singular information matrix, which are common in practice, are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced-order models are given. Several methods for speeding convergence and reducing computation time are also discussed.
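One standard remedy for a singular Hessian of the kind the abstract mentions is to add a damping term before solving the Newton system. A minimal sketch of that idea (the function, matrices, and damping value are illustrative, not the paper's specific algorithm):

```python
def damped_newton_step_2d(g, H, mu=1e-3):
    """Solve (H + mu*I) d = -g for a 2x2 system via Cramer's rule.
    The Levenberg-style damping mu*I keeps the linear system solvable
    even when H itself is singular (illustrative remedy only)."""
    a, b = H[0][0] + mu, H[0][1]
    c, d = H[1][0], H[1][1] + mu
    det = a * d - b * c
    d0 = (-g[0] * d - b * (-g[1])) / det
    d1 = (a * (-g[1]) - (-g[0]) * c) / det
    return [d0, d1]

# Quadratic cost 0.5 * x^T H x with a rank-deficient Hessian.
H = [[2.0, 0.0], [0.0, 0.0]]
x = [1.0, 1.0]
g = [2.0 * x[0], 0.0]                      # gradient H x
step = damped_newton_step_2d(g, H, mu=1e-3)
x_new = [x[0] + step[0], x[1] + step[1]]
```

With the undamped Hessian the Newton system has no unique solution; with damping, the step moves the first coordinate toward the minimizer and leaves the unobservable second coordinate essentially untouched.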


Citations
Book

Inference in Hidden Markov Models

TL;DR: This book is a comprehensive treatment of inference for hidden Markov models, including both algorithms and statistical theory, and builds on recent developments to present a self-contained view.
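As a flavor of the inference algorithms such a book covers, here is a minimal forward-algorithm likelihood computation for a toy two-state HMM (all probabilities invented for illustration):

```python
# Forward algorithm: likelihood of an observation sequence under an HMM.
states = (0, 1)
pi = [0.6, 0.4]                       # initial state distribution
A = [[0.7, 0.3], [0.4, 0.6]]          # transition matrix A[i][j] = P(j | i)
B = [[0.9, 0.1], [0.2, 0.8]]          # emission probs B[s][o] for symbols 0/1
obs = [0, 1, 0]

# alpha[s] = P(observations so far, current state = s)
alpha = [pi[s] * B[s][obs[0]] for s in states]
for o in obs[1:]:
    alpha = [sum(alpha[sp] * A[sp][s] for sp in states) * B[s][o]
             for s in states]
likelihood = sum(alpha)
```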
Journal ArticleDOI

An approach to time series smoothing and forecasting using the EM algorithm

TL;DR: In this article, an approach to smoothing and forecasting for time series with missing observations is proposed, where the EM algorithm is used in conjunction with the conventional Kalman smoothed estimators to derive a simple recursive procedure for estimating the parameters.
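In the same spirit, though far simpler than the paper's full state-space setting, EM can estimate an unknown measurement-noise variance when the latent state is constant: the E-step is the Gaussian posterior a Kalman filter would produce, and the M-step re-estimates the variance. All model choices below are illustrative:

```python
import random

random.seed(0)
n = 500
x_true, r_true = 2.0, 0.5
y = [x_true + random.gauss(0.0, r_true ** 0.5) for _ in range(n)]

p0 = 10.0    # prior variance of the constant latent state
r = 5.0      # initial guess for the measurement-noise variance

for _ in range(100):
    # E-step: Gaussian posterior of the state under the current r
    # (equivalently, a Kalman filter run over all n observations).
    P = 1.0 / (1.0 / p0 + n / r)
    m = P * sum(y) / r
    # M-step: re-estimate r from the expected squared residuals.
    r = sum((yt - m) ** 2 for yt in y) / n + P
```

After convergence `m` is close to the true state and `r` close to the true noise variance.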

Bayesian Filtering and Smoothing

Simo Särkkä
TL;DR: This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework, showing what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages.
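The simplest closed-form instance of the Bayesian filters such a book covers is the scalar Kalman filter; a minimal sketch (model and numbers are toy choices):

```python
def kalman_filter_1d(ys, a, q, r, m0, p0):
    """Scalar Kalman filter for x_t = a*x_{t-1} + w_t, y_t = x_t + v_t,
    with w ~ N(0, q) and v ~ N(0, r). Returns the filtered means."""
    m, p = m0, p0
    means = []
    for y in ys:
        # Predict through the dynamics.
        m_pred = a * m
        p_pred = a * a * p + q
        # Update with the new observation.
        k = p_pred / (p_pred + r)          # Kalman gain
        m = m_pred + k * (y - m_pred)
        p = (1.0 - k) * p_pred
        means.append(m)
    return means

# Noise-free observations of a constant state: the filter locks on.
est = kalman_filter_1d([1.0, 1.0, 1.0, 1.0], a=1.0, q=0.0, r=0.1,
                       m0=0.0, p0=1.0)
```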

References
Journal ArticleDOI

A method for the solution of certain non-linear problems in least squares

TL;DR: In this article, least-squares problems with non-linear normal equations are solved by an extension of the standard method that ensures improvement of the initial solution and can also be viewed as an extension of Newton's method.
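The damped normal-equation idea can be sketched on a one-parameter nonlinear least-squares fit. Toy data; the accept/reject damping schedule is one common variant, not necessarily the paper's:

```python
import math

# Fit theta in the model f(t) = exp(-theta * t) to noiseless toy data.
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
theta_true = 1.3
ys = [math.exp(-theta_true * t) for t in ts]

theta, lam = 0.1, 1.0                     # start far from the optimum
for _ in range(100):
    resid = [math.exp(-theta * t) - y for t, y in zip(ts, ys)]
    jac = [-t * math.exp(-theta * t) for t in ts]   # d resid / d theta
    jtj = sum(j * j for j in jac)
    jtr = sum(j * r for j, r in zip(jac, resid))
    step = -jtr / (jtj + lam)             # damped normal equation
    cand = theta + step
    old_cost = sum(r * r for r in resid)
    new_cost = sum((math.exp(-cand * t) - y) ** 2 for t, y in zip(ts, ys))
    if new_cost < old_cost:               # accept step, relax damping
        theta, lam = cand, lam * 0.5
    else:                                 # reject step, increase damping
        lam *= 2.0
```

Large damping makes the step a small gradient-descent move (guaranteeing improvement from a poor start), while small damping recovers the fast Gauss-Newton step near the solution.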
Journal ArticleDOI

A Rapidly Convergent Descent Method for Minimization

TL;DR: A number of theorems are proved showing that the method always converges, and converges rapidly; it has been used to solve a system of one hundred non-linear simultaneous equations.
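Descent methods of this family avoid forming second derivatives by building a curvature estimate from successive gradients. A one-dimensional secant sketch of that idea (the multivariate methods update a full inverse-Hessian approximation; this is only the scalar analogue):

```python
def grad(x):
    # Gradient of the convex toy cost f(x) = (x - 3)^2.
    return 2.0 * (x - 3.0)

x_prev, x = 0.0, 1.0
for _ in range(50):
    g_prev, g = grad(x_prev), grad(x)
    if abs(g) < 1e-12:
        break                              # converged
    h = (g - g_prev) / (x - x_prev)        # secant curvature estimate
    x_prev, x = x, x - g / h               # quasi-Newton step
```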
Journal ArticleDOI

The differentiation of pseudoinverses and nonlinear least squares problems whose variables separate.

TL;DR: Algorithms are presented which make extensive use of well-known reliable linear least squares techniques, and numerical results and comparisons are given.
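The separable-variables idea can be illustrated on the simplest case: in a model that is linear in one parameter and nonlinear in another, the linear parameter is eliminated by least squares so only the nonlinear one is searched. A toy sketch (grid search stands in for a proper minimizer):

```python
import math

# Separable model y ~ c * exp(-theta * t): c enters linearly.
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
c_true, theta_true = 2.0, 0.8
ys = [c_true * math.exp(-theta_true * t) for t in ts]

def projected_cost(theta):
    # For fixed theta the optimal amplitude c is a one-variable
    # linear least-squares solution; return the projected residual cost.
    phi = [math.exp(-theta * t) for t in ts]
    c = sum(p * y for p, y in zip(phi, ys)) / sum(p * p for p in phi)
    return sum((c * p - y) ** 2 for p, y in zip(phi, ys))

# Only the nonlinear parameter theta is searched (crude grid here).
cost, theta_hat = min((projected_cost(th / 100.0), th / 100.0)
                      for th in range(1, 200))
```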
Journal ArticleDOI

Maximum likelihood identification of Gaussian autoregressive moving average models

TL;DR: It is shown that the procedure described by Hannan (1969) for the estimation of the parameters of one-dimensional autoregressive moving average processes is equivalent to a three-stage realization of one step of the Newton-Raphson procedure for the numerical maximization of the likelihood function, using the gradient and the approximate Hessian.
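A single Newton-Raphson step with an expected (rather than observed) Hessian, i.e. Fisher scoring, can already be exact in simple models. A textbook illustration with the Poisson rate, not Hannan's ARMA procedure itself:

```python
# One Fisher-scoring step for the Poisson rate lambda.
ys = [3, 1, 4, 1, 5, 9, 2, 6]
n = len(ys)
lam = 1.0                         # arbitrary starting value
score = sum(ys) / lam - n         # d log-likelihood / d lambda
info = n / lam                    # expected (Fisher) information
lam = lam + score / info          # lands exactly on the MLE, sum(ys)/n
```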