
Showing papers on "Maximum a posteriori estimation published in 1970"


Journal ArticleDOI
TL;DR: Parameter transformation, sequential minimization and nested minimization can be used to solve maximum-likelihood estimation problems numerically; applications to well-known problems of distribution fitting, quantal responses and least-squares curve fitting are described.
Abstract: Maximum‐likelihood estimation problems can be solved numerically using function minimization algorithms, but the amount of computing required and the accuracy of the results depend on the way the algorithms are used. Attention to the analytical properties of the model, to the relationship between the model and the data, and to descriptive properties of the data can greatly simplify the problem, sometimes providing a method of solution on a desk calculator. This paper describes how parameter transformation, sequential minimization and nested minimization can be used to solve particular problems. Applications to well‐known problems of distribution fitting, quantal responses and least‐squares curve fitting are described. The implications for computer programming are discussed.
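The nested-minimization idea the abstract describes can be illustrated with a minimal sketch (assumptions mine, not from the paper): fitting a gamma distribution by maximum likelihood, where the inner minimization over the scale parameter has a closed form for each fixed shape, and a parameter transformation (optimizing log k) keeps the outer one-dimensional search unconstrained.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.5, scale=1.5, size=500)   # synthetic sample
n, mean = len(data), data.mean()
sum_log = np.log(data).sum()

def neg_log_lik(log_k):
    # Parameter transformation: optimize log k so k > 0 automatically.
    k = np.exp(log_k)
    # Inner "minimization": for fixed shape k, the ML scale is mean/k
    # in closed form, so only a 1-D outer search over k remains.
    theta = mean / k
    return -((k - 1) * sum_log - data.sum() / theta
             - n * k * np.log(theta) - n * gammaln(k))

res = minimize_scalar(neg_log_lik, bounds=(-3, 3), method="bounded")
k_hat = np.exp(res.x)
theta_hat = mean / k_hat
```

With the inner problem eliminated analytically, the outer search is one-dimensional and could even be tabulated by hand, which matches the paper's point about desk-calculator solutions.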

108 citations



Journal ArticleDOI
TL;DR: The general problem of determining the photoelectron "counting" distribution resulting from an electromagnetic field impinging on a quantum detector is formulated, and various limiting forms of this distribution are derived, including the conditions under which the commonly accepted forms hold.
Abstract: In this paper we formulate the general problem of determining the photoelectron "counting" distribution resulting from an electromagnetic field impinging on a quantum detector. Although the detector model used was derived quantum mechanically, our treatment is wholly classical and includes all results known to date. This combination is commonly referred to as the semiclassical approach. The emphasis, however, lies in directing the problem towards optical communication. The electromagnetic field is assumed to be the sum of a deterministic signal and a zero-mean narrow-band Gaussian random process, and is expanded in a Karhunen-Loeve series of orthogonal functions. Several examples are given. It is shown that all the results obtainable can be written explicitly in terms of the noise covariance function. Particular attention is given to the case of a signal plus white Gaussian noise, both of which are band-limited to \pm B Hz. Since the result is a fundamental one, to add some physical insight, we show four methods by which it can be obtained. Various limiting forms of this distribution are derived, including the necessary conditions for those commonly accepted. The likelihood functional is established and is shown to be the product of Laguerre polynomials. For the problem of continuous estimation, the Fisher information kernel is derived and an important limiting form is obtained. The maximum a posteriori (MAP) and maximum-likelihood (ML) estimation equations are also derived. In the latter case the results are also functions of Laguerre polynomials.
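For intuition about the Laguerre-polynomial results mentioned above, here is a minimal sketch (my own single-mode simplification, not the paper's multimode derivation) of the standard semiclassical photocount distribution for a coherent signal of mean count S in narrow-band Gaussian noise of mean count N; the helper name `photocount_pmf` is hypothetical.

```python
import numpy as np
from scipy.special import eval_laguerre

def photocount_pmf(n, S, N):
    # Single-mode Laguerre photocount distribution:
    # P(n) = N^n / (1+N)^(n+1) * exp(-S/(1+N)) * L_n(-S / (N (1+N))).
    # Limits: S=0 gives Bose-Einstein; N->0 gives Poisson(S).
    n = np.asarray(n)
    return (N**n / (1.0 + N)**(n + 1)
            * np.exp(-S / (1.0 + N))
            * eval_laguerre(n, -S / (N * (1.0 + N))))

ns = np.arange(200)                  # truncated support for the pmf
p = photocount_pmf(ns, S=4.0, N=1.0)
```

The probabilities sum to one and the mean count is S + N, consistent with the signal-plus-noise picture in the abstract.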

68 citations


Journal ArticleDOI
T. Bohlin1
TL;DR: Further developments of the maximum likelihood principle of estimation applied to the linear black-box identification problem have been presented, the reliability and speed of the identification algorithm have been improved, and the method has been made easier to use.
Abstract: The maximum likelihood principle of estimation applied to the linear black-box identification problem gives models with theoretically attractive properties. Also, the method has been applied to industrial data (various processes in paper production) and proved able to work in practice. This paper presents further developments of the method in the case of a single output. The reliability and speed of the identification algorithm have been improved, and the method has been made easier to use. A rather sophisticated computer program, however, was needed. It employs a generalized model structure, an improved hill-climbing algorithm, and an automatic procedure for determining model orders and transport delays. Some statistics from performance tests of the program are presented.
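As a minimal sketch of the identification setting (a first-order ARX example of my own, far simpler than the paper's generalized structure, which needs iterative hill-climbing): with Gaussian disturbances, minimizing the sum of squared one-step prediction errors is the maximum-likelihood estimate, and for an ARX structure it reduces to linear least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
u = rng.normal(size=T)                 # known input signal
e = 0.1 * rng.normal(size=T)           # Gaussian disturbance
a_true, b_true = 0.8, 0.5
y = np.zeros(T)
for t in range(1, T):
    # First-order single-output black box: y[t] = a y[t-1] + b u[t-1] + e[t]
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] + e[t]

# Stack lagged outputs and inputs as regressors; least squares on the
# one-step prediction errors is the ML estimate under Gaussian noise.
Phi = np.column_stack([y[:-1], u[:-1]])
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
```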

44 citations


Proceedings ArticleDOI
01 Dec 1970
TL;DR: In this article, the problem of simultaneous estimation of state and parameters in linear discrete-time dynamic systems is formulated under the assumption that the system parameters are unknown constants and the solution is obtained by iterating between two systems of linear equations.
Abstract: The topic of this paper is the simultaneous estimation of state and parameters in linear discrete-time dynamic systems. The system is subject to a known arbitrary input (control), a random input (additive driving noise) and the output observation is also contaminated by noise. The noises are Gaussian, zero-mean, independent and with known variances. The problem is formulated under the assumption that the system parameters are unknown constants. Previous works in the literature treated this problem assuming that each parameter can take values over a finite set with known a priori probabilities. The proposed scheme yields the maximum a posteriori and maximum likelihood estimates for the system's state and parameters, respectively. They are obtained by solving the likelihood equations-a system of nonlinear equations with the state and parameters as unknowns. Use is made of the fact that the dynamical system considered is linear and the problem is separated into two interconnected linear problems: one for the state, the other for the parameters. The solution is obtained by iterating between two systems of linear equations. The estimation technique presented is optimal in the following sense: no approximations are involved and the estimates of the parameters converge to the true values at the fastest possible rate, as given by the Cramér-Rao lower bound, i.e. they are asymptotically efficient. This is proved, using a theorem which states that, under certain general conditions, the maximum likelihood estimate with dependent observations is consistent and asymptotically efficient. The problem of uniqueness of the solution is discussed for the case of a scalar unknown parameter. Use is made of a theorem due to Perlman, generalized for the case of dependent observations. Due to the fact that the estimation-identification is done in the presence of input and output noise and an arbitrary known input, the procedure can be considered an on-line technique. Since estimates are available after each measurement, this estimation-identification procedure is suited for use in the adaptive control of unknown (or partially known) linear plants.
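The "iterate between two linear problems" structure can be sketched for a scalar system (my own toy setup, not the paper's algorithm): with the state trajectory fixed, the unknown parameter a enters the joint MAP cost linearly; with a fixed, the cost is quadratic in the states, so each half-step is a linear least-squares solve.

```python
import numpy as np

rng = np.random.default_rng(2)
T, a_true, q, r = 300, 0.9, 0.05, 0.1
u = np.sin(0.1 * np.arange(T))              # known arbitrary input
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = a_true * x[t] + u[t] + np.sqrt(q) * rng.normal()
y = x + np.sqrt(r) * rng.normal(size=T)     # noisy output observations

a_hat, x_hat = 0.0, y.copy()
for _ in range(20):
    # Parameter step: with the states fixed, a is a scalar linear LS problem.
    num = np.dot(x_hat[:-1], x_hat[1:] - u[:-1])
    a_hat = num / np.dot(x_hat[:-1], x_hat[:-1])
    # State step: with a fixed, stack measurement and dynamics residuals
    # (each weighted by its noise standard deviation) and solve by LS.
    A = np.zeros((2 * T - 1, T))
    b = np.zeros(2 * T - 1)
    A[:T] = np.eye(T) / np.sqrt(r)
    b[:T] = y / np.sqrt(r)
    for t in range(T - 1):
        A[T + t, t + 1] = 1.0 / np.sqrt(q)
        A[T + t, t] = -a_hat / np.sqrt(q)
        b[T + t] = u[t] / np.sqrt(q)
    x_hat = np.linalg.lstsq(A, b, rcond=None)[0]
```

Each iteration solves only linear systems, yet the pair converges to a joint stationary point of the nonlinear likelihood equations, mirroring the separation the abstract describes.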

20 citations


Journal ArticleDOI
TL;DR: In this article, a summary derivation of maximum a posteriori estimation for continuous and discrete non-linear systems is presented; with Gaussian a priori statistics, the maximum a posteriori estimate is equivalent to an appropriate least-squares fit.
Abstract: This paper discusses a summary derivation of maximum a posteriori estimation for continuous and discrete non-linear systems. It is known that with Gaussian a priori statistics, the maximum a posteriori estimate is equivalent to an appropriate least squares fit. Filtering, fixed interval and fixed point smoothing algorithms for approximate non-linear estimation are obtained for the least squares curve fit using 'running time' and 'fixed time' invariant embedding. Examples illustrating the use of the algorithms are presented.
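The MAP-equals-least-squares equivalence for Gaussian statistics can be checked in a minimal scalar sketch (my own example, not from the paper): maximizing a Gaussian posterior is the same as minimizing a weighted least-squares cost combining the prior and the measurement residuals.

```python
import numpy as np

rng = np.random.default_rng(3)
m0, p0 = 0.0, 4.0        # Gaussian prior mean and variance for theta
r = 0.5                  # measurement-noise variance
theta_true = 1.7
y = theta_true + np.sqrt(r) * rng.normal(size=50)

# MAP estimate: maximizing the Gaussian posterior minimizes
#   J(theta) = (theta - m0)^2 / p0 + sum_i (y_i - theta)^2 / r,
# whose minimizer has a closed form (precision-weighted average).
n = len(y)
theta_map = (m0 / p0 + y.sum() / r) / (1.0 / p0 + n / r)

# Cross-check by brute-force minimization of the same least-squares cost.
grid = np.linspace(-5, 5, 20001)
J = (grid - m0) ** 2 / p0 + ((y[:, None] - grid) ** 2).sum(axis=0) / r
theta_ls = grid[J.argmin()]
```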

17 citations


Journal ArticleDOI
TL;DR: In this article, the approximate solution of the non-linear two-point boundary value problem for maximum a posteriori estimation is presented, which is obtained by the discrete invariant embedding procedure.
Abstract: This paper presents the approximate solution of the non-linear two-point boundary value problem for maximum a posteriori estimation. Filtering, fixed point smoothing, fixed interval smoothing, and fixed lag smoothing algorithms are obtained by the discrete invariant embedding procedure. These are tabulated as well as the associated approximate error variances.
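For reference, the linear-Gaussian special case that such approximate non-linear filtering and fixed-interval smoothing algorithms reduce to can be sketched as a scalar Kalman filter followed by a Rauch-Tung-Striebel backward pass (my own illustration; the paper's invariant-embedding derivation is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(4)
T, a, q, r = 200, 0.95, 0.1, 1.0
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.normal()
y = x + np.sqrt(r) * rng.normal(size=T)

# Forward Kalman filter, storing filtered and predicted moments.
m_f = np.zeros(T); P_f = np.zeros(T)
m_p = np.zeros(T); P_p = np.zeros(T)
m, P = 0.0, 1.0                          # prior on x[0]
for t in range(T):
    if t > 0:
        m, P = a * m, a * a * P + q      # predict
    m_p[t], P_p[t] = m, P
    K = P / (P + r)                      # measurement update with y[t]
    m, P = m + K * (y[t] - m), (1 - K) * P
    m_f[t], P_f[t] = m, P

# Backward fixed-interval (RTS) smoothing pass.
m_s = m_f.copy()
for t in range(T - 2, -1, -1):
    G = a * P_f[t] / P_p[t + 1]
    m_s[t] = m_f[t] + G * (m_s[t + 1] - m_p[t + 1])
```

The smoothed trajectory uses all T measurements at every time point, so its error is smaller than the filter's, which uses only past data.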

15 citations


Proceedings ArticleDOI
01 Dec 1970
TL;DR: This paper presents the use of maximum likelihood estimation, optimization theory, and discrete invariant imbedding in the development of algorithms for the maximum likelihood estimation of bias coefficients in discrete linear systems with stochastic inputs and disturbances.
Abstract: This paper presents the use of maximum likelihood estimation, optimization theory, and discrete invariant imbedding in the development of algorithms for the maximum likelihood estimation of bias coefficients in discrete linear systems with stochastic inputs and disturbances.