Author

C. Reinsch

Bio: C. Reinsch is an academic researcher from Munich University of Applied Sciences. The author has contributed to research in the topics of smoothing splines and spline interpolation, has an h-index of 9, and has co-authored 11 publications receiving 5,696 citations. Previous affiliations of C. Reinsch include Ludwig Maximilian University of Munich.

Papers
Journal ArticleDOI
TL;DR: The decomposition $A = U\Sigma V^T$ is called the singular value decomposition (SVD); the diagonal elements of $\Sigma$ are the non-negative square roots of the eigenvalues of $A^TA$ and are called singular values.
Abstract: Let A be a real m×n matrix with m ≧ n. It is well known (cf. [4]) that $$A = U\Sigma V^T \quad (1)$$ where $$U^TU = V^TV = VV^T = I_n \quad\text{and}\quad \Sigma = \operatorname{diag}(\sigma_1, \ldots, \sigma_n).$$ The matrix U consists of n orthonormalized eigenvectors associated with the n largest eigenvalues of $AA^T$, and the matrix V consists of the orthonormalized eigenvectors of $A^TA$. The diagonal elements of $\Sigma$ are the non-negative square roots of the eigenvalues of $A^TA$; they are called singular values. We shall assume that $$\sigma_1 \geqq \sigma_2 \geqq \cdots \geqq \sigma_n \geqq 0.$$ Thus if $\operatorname{rank}(A) = r$, then $\sigma_{r+1} = \sigma_{r+2} = \cdots = \sigma_n = 0$. The decomposition (1) is called the singular value decomposition (SVD).
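A quick numerical check of these properties, assuming a random test matrix; NumPy's LAPACK-backed np.linalg.svd stands in here for the Golub–Reinsch procedure of the paper:

```python
# Verify the stated SVD properties on a random m x n matrix (m >= n).
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 4
A = rng.standard_normal((m, n))

# full_matrices=False gives the "thin" SVD: U is m x n, Vt is n x n.
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)

# Singular values come back sorted: sigma_1 >= ... >= sigma_n >= 0.
assert np.all(np.diff(sigma) <= 0) and np.all(sigma >= 0)

# U^T U = V^T V = V V^T = I_n.
I_n = np.eye(n)
assert np.allclose(U.T @ U, I_n)
assert np.allclose(Vt @ Vt.T, I_n)
assert np.allclose(Vt.T @ Vt, I_n)

# The sigma_i are the non-negative square roots of the eigenvalues of A^T A.
eigs = np.linalg.eigvalsh(A.T @ A)[::-1]          # descending order
assert np.allclose(np.sqrt(np.clip(eigs, 0, None)), sigma)

# Reconstruction: A = U diag(sigma) V^T.
assert np.allclose(U @ np.diag(sigma) @ Vt, A)
```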

3,036 citations

Journal ArticleDOI
TL;DR: The authors generalize the results of [4] and modify the algorithm presented there to obtain a better rate of convergence.
Abstract: In this paper we generalize the results of [4] and modify the algorithm presented there to obtain a better rate of convergence.

2,225 citations

Journal ArticleDOI
TL;DR: Handbook for Automatic Computation, Vol. II: Linear Algebra, covering among other topics the Jordan form, Kronecker's form for matrix pencils, and various condition numbers.

253 citations

Journal ArticleDOI
TL;DR: This algorithm is based on the work of Osborne, who pointed out that existing eigenvalue programs usually produce results with errors at least of order ε‖A‖_E, where ε is the machine precision; he therefore recommends preceding the call of such a routine by certain diagonal similarity transformations of A designed to reduce its norm.
Abstract: This algorithm is based on the work of Osborne [1]. He pointed out that existing eigenvalue programs usually produce results with errors at least of order $\varepsilon\|A\|_E$, where $\varepsilon$ is the machine precision and $\|A\|_E$ is the Euclidean (Frobenius) norm of the given matrix A. Hence he recommends that one precede the calling of such a routine by certain diagonal similarity transformations of A designed to reduce its norm.
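A small demonstration of the balancing idea, assuming a badly scaled test matrix; scipy.linalg.matrix_balance (which wraps LAPACK's ?gebal) stands in for the ALGOL procedure of the paper:

```python
# Balancing: a diagonal similarity transformation B = T^-1 A T that
# reduces the norm of A before an eigenvalue routine is called.
import numpy as np
from scipy.linalg import matrix_balance

# Entries span ten orders of magnitude, so ||A||_E is dominated by a
# few large elements even though the eigenvalues are moderate.
A = np.array([[1e0,  1e6,  1e-4],
              [1e-6, 1e0,  1e2 ],
              [1e4,  1e-2, 1e0 ]])

B, T = matrix_balance(A)        # B = T^-1 A T, T diagonal up to permutation

print(np.linalg.norm(A))        # large Frobenius norm before balancing
print(np.linalg.norm(B))        # much smaller norm afterwards

# The similarity transformation preserves the eigenvalues exactly, but
# the error bound eps*||.||_E of the subsequent eigenvalue routine now
# involves the much smaller norm of B.
print(np.sort_complex(np.linalg.eigvals(A)))
print(np.sort_complex(np.linalg.eigvals(B)))
```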

179 citations

Journal ArticleDOI
TL;DR: In this article, improved versions of Householder's algorithm for the tridiagonalization of a real symmetric matrix are given; more than one variant is presented, since the most efficient form of the procedure depends on the method used to solve the eigenproblem of the derived tridiagonal matrix.
Abstract: In an early paper in this series [4] Householder’s algorithm for the tridiagonalization of a real symmetric matrix was discussed. In the light of experience gained since its publication and in view of its importance it seems worthwhile to issue improved versions of the procedure given there. More than one variant is given since the most efficient form of the procedure depends on the method used to solve the eigenproblem of the derived tridiagonal matrix.
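A bare-bones sketch of Householder tridiagonalization, assuming a random symmetric test matrix; it illustrates the basic reduction only, not the tuned variants published in the series:

```python
# Reduce a real symmetric matrix to tridiagonal form with n-2
# Householder reflections P_k = I - 2 w w^T; the spectrum is preserved.
import numpy as np

def householder_tridiagonalize(A):
    A = np.array(A, dtype=float)            # work on a copy
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k + 1:, k]
        alpha = -np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        v = x.copy()
        v[0] -= alpha
        v_norm = np.linalg.norm(v)
        if v_norm == 0.0:                   # column already reduced
            continue
        w = v / v_norm
        # Apply P from the left and right; earlier rows/columns of the
        # trailing block are already zero, so only this block changes.
        A[k + 1:, k:] -= 2.0 * np.outer(w, w @ A[k + 1:, k:])
        A[k:, k + 1:] -= 2.0 * np.outer(A[k:, k + 1:] @ w, w)
    return A

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
S = (M + M.T) / 2                           # symmetric test matrix

T = householder_tridiagonalize(S)
band = np.triu(np.tril(T, 1), -1)           # keep only the tridiagonal band
assert np.allclose(T, band, atol=1e-12)     # result is tridiagonal
assert np.allclose(np.linalg.eigvalsh(T),   # and has the same spectrum
                   np.linalg.eigvalsh(S))
```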

100 citations


Cited by

Journal ArticleDOI
TL;DR: The age calibration program CALIB (Stuiver & Reimer 1986), first made available in 1986 and subsequently modified in 1987 (revisions 2.0 and 2.1), has been amended anew.
Abstract: The age calibration program, CALIB (Stuiver & Reimer 1986), first made available in 1986 and subsequently modified in 1987 (revisions 2.0 and 2.1), has been amended anew. The 1993 program (revision 3.0) incorporates further refinements and a new calibration data set covering nearly 22,000 cal yr (≈18,400 ¹⁴C yr). The new data, and corrections to the previously used data set, derive from a 6-yr (1986–1992) time-scale calibration effort of several laboratories.
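A toy illustration of the calibration idea: a measured radiocarbon age is mapped to calendar years through a calibration curve. The curve below is fabricated for illustration only; real calibrations use the published data sets via programs such as CALIB:

```python
# Map a conventional 14C age to a calendar age by interpolating on a
# (hypothetical, monotone) calibration curve.
import numpy as np

# Hypothetical calibration curve: calendar age vs. conventional 14C age,
# both in years BP (before present).
cal_age = np.array([1000, 1100, 1200, 1300, 1400, 1500])
c14_age = np.array([1050, 1120, 1230, 1290, 1410, 1480])

def calibrate(measured):
    """Interpolate the calendar age for a measured 14C age.

    Assumes the toy curve is monotone; real curves have wiggles, so one
    14C age can map to several disjoint calendar-age intervals.
    """
    return np.interp(measured, c14_age, cal_age)

print(calibrate(1260))   # falls between 1200 and 1300 cal yr BP
```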

7,368 citations

OtherDOI
22 Apr 2014
TL;DR: The generalized additive model (GAM) as discussed by the authors generalizes the generalized linear model by replacing the linear form with a sum of smooth functions, estimated in an iterative procedure called the local scoring algorithm.
Abstract: Likelihood-based regression models such as the normal linear regression model and the linear logistic model assume a linear (or some other parametric) form for the covariates $X_1, X_2, \cdots, X_p$. We introduce the class of generalized additive models which replaces the linear form $\sum \beta_jX_j$ by a sum of smooth functions $\sum s_j(X_j)$. The $s_j(\cdot)$'s are unspecified functions that are estimated using a scatterplot smoother, in an iterative procedure we call the local scoring algorithm. The technique is applicable to any likelihood-based regression model: the class of generalized linear models contains many of these. In this class the linear predictor $\eta = \sum \beta_jX_j$ is replaced by the additive predictor $\sum s_j(X_j)$; hence, the name generalized additive models. We illustrate the technique with binary response and survival data. In both cases, the method proves to be useful in uncovering nonlinear covariate effects. It has the advantage of being completely automatic, i.e., no "detective work" is needed on the part of the statistician. As a theoretical underpinning, the technique is viewed as an empirical method of maximizing the expected log likelihood, or equivalently, of minimizing the Kullback-Leibler distance to the true model.
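A bare-bones backfitting loop for the Gaussian case, in which local scoring reduces to backfitting; the Gaussian-kernel smoother and the simulated data are assumptions for illustration, standing in for a general scatterplot smoother:

```python
# Fit an additive model y = alpha + s1(x1) + s2(x2) + error by cycling
# a smoother over the partial residuals of each covariate.
import numpy as np

def kernel_smooth(x, r, bandwidth=0.3):
    """Smooth the partial residuals r against x (Nadaraya-Watson)."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ r) / w.sum(axis=1)

rng = np.random.default_rng(2)
n = 300
x1 = rng.uniform(-2, 2, n)
x2 = rng.uniform(-2, 2, n)
y = np.sin(x1) + 0.5 * x2**2 + rng.normal(0.0, 0.2, n)

alpha = y.mean()                   # intercept
s1 = np.zeros(n)                   # smooth effect of x1
s2 = np.zeros(n)                   # smooth effect of x2
for _ in range(20):                # backfit until practically converged
    s1 = kernel_smooth(x1, y - alpha - s2)
    s1 -= s1.mean()                # center for identifiability
    s2 = kernel_smooth(x2, y - alpha - s1)
    s2 -= s2.mean()

print(np.std(y - (alpha + s1 + s2)))   # close to the noise level 0.2
```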

5,700 citations

Journal ArticleDOI
TL;DR: Locally weighted regression as discussed by the authors is a way of estimating a regression surface through a multivariate smoothing procedure, fitting a function of the independent variables locally and in a moving fashion analogous to how a moving average is computed for a time series.
Abstract: Locally weighted regression, or loess, is a way of estimating a regression surface through a multivariate smoothing procedure, fitting a function of the independent variables locally and in a moving fashion analogous to how a moving average is computed for a time series. With local fitting we can estimate a much wider class of regression surfaces than with the usual classes of parametric functions, such as polynomials. The goal of this article is to show, through applications, how loess can be used for three purposes: data exploration, diagnostic checking of parametric models, and providing a nonparametric regression surface. Along the way, the following methodology is introduced: (a) a multivariate smoothing procedure that is an extension of univariate locally weighted regression; (b) statistical procedures that are analogous to those used in the least-squares fitting of parametric functions; (c) several graphical methods that are useful tools for understanding loess estimates and checking the adequacy of parametric models.
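A bare-bones univariate version of locally weighted regression, using tricube weights and a local linear fit; the smoothing fraction and test data are assumptions, and the article's method is multivariate and adds robustness iterations, so this is only the core idea:

```python
# At each point, fit a weighted straight line over the nearest
# neighbours; the tricube weights downweight distant observations.
import numpy as np

def loess_1d(x, y, frac=0.3):
    n = len(x)
    k = max(2, int(np.ceil(frac * n)))                # neighbourhood size
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]                       # k nearest neighbours
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube weights
        # Weighted least-squares line through the neighbourhood.
        X = np.column_stack([np.ones(k), x[idx]])
        WX = w[:, None] * X
        beta = np.linalg.solve(X.T @ WX, WX.T @ y[idx])
        fitted[i] = beta[0] + beta[1] * x[i]
    return fitted

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(0.0, 0.3, 200)
smooth = loess_1d(x, y)            # the estimated regression curve
```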

5,188 citations

Journal ArticleDOI
B. Moore
TL;DR: In this paper, it is shown that principal component analysis, together with an algorithm for computing the singular value decomposition, is a powerful tool for coping with structural instability in dynamic systems, and it is proposed that a natural first step in model reduction is to apply the mechanics of minimal realization using working approximations of the controllable and unobservable subspaces.
Abstract: Kalman's minimal realization theory involves geometric objects (controllable, unobservable subspaces) which are subject to structural instability. Specifically, arbitrarily small perturbations in a model may cause a change in the dimensions of the associated subspaces. This situation is manifested in computational difficulties which arise in attempts to apply textbook algorithms for computing a minimal realization. Structural instability associated with geometric theories is not unique to control; it arises in the theory of linear equations as well. In this setting, the computational problems have been studied for decades and excellent tools have been developed for coping with the situation. One of the main goals of this paper is to call attention to principal component analysis (Hotelling, 1933), and an algorithm (Golub and Reinsch, 1970) for computing the singular value decomposition of a matrix. Together they form a powerful tool for coping with structural instability in dynamic systems. As developed in this paper, principal component analysis is a technique for analyzing signals. (Singular value decomposition provides the computational machinery.) For this reason, Kalman's minimal realization theory is recast in terms of responses to injected signals. Application of the signal analysis to controllability and observability leads to a coordinate system in which the "internally balanced" model has special properties. For asymptotically stable systems, this yields working approximations of $X_c$, $X_{\bar{o}}$, the controllable and unobservable subspaces. It is proposed that a natural first step in model reduction is to apply the mechanics of minimal realization using these working subspaces.
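A compact sketch of the internally balanced coordinates via the square-root method, assuming an illustrative stable test system; SciPy supplies the Lyapunov solver and SVD, and the paper develops the theory rather than this particular recipe:

```python
# Compute the controllability/observability Gramians, balance them via
# an SVD of the product of their Cholesky factors, and inspect the
# resulting diagonal Gramians whose small entries mark truncatable states.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# Asymptotically stable test system: xdot = A x + B u, y = C x.
A = np.array([[-1.0,  0.5,  0.0],
              [ 0.0, -2.0,  1.0],
              [ 0.0,  0.0, -5.0]])
B = np.array([[1.0], [0.0], [1.0]])
C = np.array([[1.0, 1.0, 0.0]])

# Gramians:  A Wc + Wc A^T + B B^T = 0,   A^T Wo + Wo A + C^T C = 0.
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root balancing: an SVD of Lo^T Lc yields the transformation.
Lc = cholesky(Wc, lower=True)
Lo = cholesky(Wo, lower=True)
U, s, Vt = svd(Lo.T @ Lc)
S_inv_half = np.diag(s ** -0.5)
T = Lc @ Vt.T @ S_inv_half          # balancing transformation, x = T z
T_inv = S_inv_half @ U.T @ Lo.T

# In balanced coordinates both Gramians equal diag(s); states with small
# s_i are nearly uncontrollable/unobservable, the natural candidates for
# truncation in model reduction.
print(np.round(T_inv @ Wc @ T_inv.T, 6))
print(np.round(T.T @ Wo @ T, 6))
```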

5,134 citations