Author

Friedrich Pukelsheim

Bio: Friedrich Pukelsheim is an academic researcher. The author has contributed to research in topics: Matrix (mathematics) & Optimal design. The author has an h-index of 1, co-authored 1 publication receiving 1817 citations.

Papers
Book
08 Mar 1993
Abstract: Contents: Experimental Designs in Linear Models; Optimal Designs for Scalar Parameter Systems; Information Matrices; Loewner Optimality; Real Optimality Criteria; Matrix Means; The General Equivalence Theorem; Optimal Moment Matrices and Optimal Designs; D-, A-, E-, T-Optimality; Admissibility of Moment and Information Matrices; Bayes Designs and Discrimination Designs; Efficient Designs for Finite Sample Sizes; Invariant Design Problems; Kiefer Optimality; Rotatability and Response Surface Designs; Comments and References; Biographies; Bibliography; Index.

1,823 citations


Cited by
Journal ArticleDOI
TL;DR: This paper reviews the literature on Bayesian experimental design, for both linear and nonlinear models, and presents a unified view of the topic by placing experimental design in a decision-theoretic framework.
Abstract: This paper reviews the literature on Bayesian experimental design. A unified view of this topic is presented, based on a decision-theoretic approach. This framework casts criteria from the Bayesian design literature as part of a single coherent approach. The decision-theoretic structure incorporates both linear and nonlinear design problems, and it suggests possible new directions for the experimental design problem, motivated by the use of new utility functions. We show that, in some special cases of linear design problems, Bayesian solutions change in a sensible way when the prior distribution and the utility function are modified to allow for the specific structure of the experiment. The decision-theoretic approach also gives a mathematical justification for selecting the appropriate optimality criterion.
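The decision-theoretic recipe described above can be sketched numerically: pick the design that maximizes expected utility under the prior. The conjugate normal model, the specific prior and noise variances, and the linear sampling cost below are illustrative assumptions, not taken from the paper.

```python
def posterior_variance(n, prior_var=1.0, noise_var=4.0):
    # Conjugate normal model for a mean: posterior precision is
    # prior precision plus n observations' worth of data precision.
    return 1.0 / (1.0 / prior_var + n / noise_var)

def expected_utility(n, cost_per_obs=0.01):
    # A simple utility: negative posterior variance minus a linear
    # sampling cost; the "design" here is just the sample size n.
    return -posterior_variance(n) - cost_per_obs * n

# Bayesian design choice: the n that maximizes expected utility.
best_n = max(range(0, 101), key=expected_utility)
```

With these assumed variances and cost, the trade-off between information gain and sampling cost settles at a moderate sample size; changing the utility function (as the abstract suggests) changes the optimal design.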

1,903 citations

Journal ArticleDOI
TL;DR: This paper describes a heuristic, based on convex optimization, that gives a subset selection as well as a bound on the best performance that can be achieved by any selection of k sensor measurements.
Abstract: We consider the problem of choosing a set of k sensor measurements, from a set of m possible or potential sensor measurements, that minimizes the error in estimating some parameters. Solving this problem by evaluating the performance for each of the (m choose k) possible choices of sensor measurements is not practical unless m and k are small. In this paper, we describe a heuristic, based on convex optimization, for approximately solving this problem. Our heuristic gives a subset selection as well as a bound on the best performance that can be achieved by any selection of k sensor measurements. There is no guarantee that the gap between the performance of the chosen subset and the performance bound is always small; but numerical experiments suggest that the gap is small in many cases. Our heuristic method requires on the order of m^3 operations; for m = 1000 possible sensors, we can carry out sensor selection in a few seconds on a 2-GHz personal computer.
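A back-of-the-envelope check makes the abstract's motivation concrete: exhaustively scoring every (m choose k) subset is astronomically more work than the O(m^3) cost of the convex-relaxation heuristic. The numbers below only illustrate the count, not the heuristic itself.

```python
import math

m, k = 1000, 30
exhaustive = math.comb(m, k)   # number of candidate subsets to score
heuristic_ops = m ** 3         # rough operation count of the heuristic

# Exhaustive search is ~10^57 subsets; the heuristic is ~10^9 operations,
# which is why it finishes in seconds on commodity hardware.
ratio = exhaustive // heuristic_ops
```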

1,251 citations

Journal ArticleDOI
TL;DR: Point forecasting methods are typically compared by means of an error measure or scoring function, with the absolute error and the squared error being key examples. The authors demonstrate that this common practice can lead to grossly misguided inferences unless the scoring function and the forecasting task are carefully matched.
Abstract: Typically, point forecasting methods are compared and assessed by means of an error measure or scoring function, with the absolute error and the squared error being key examples. The individual scores are averaged over forecast cases, to result in a summary measure of the predictive performance, such as the mean absolute error or the mean squared error. I demonstrate that this common practice can lead to grossly misguided inferences, unless the scoring function and the forecasting task are carefully matched. Effective point forecasting requires that the scoring function be specified ex ante, or that the forecaster receives a directive in the form of a statistical functional, such as the mean or a quantile of the predictive distribution. If the scoring function is specified ex ante, the forecaster can issue the optimal point forecast, namely, the Bayes rule. If the forecaster receives a directive in the form of a functional, it is critical that the scoring function be consistent for it, in the sense that the expected score is minimized when the forecaster follows the directive.
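The abstract's central point can be verified with a tiny numerical experiment: under squared error the optimal point forecast is the mean of the predictive distribution, while under absolute error it is the median. The toy outcome sample below is invented for illustration.

```python
# Toy predictive distribution, represented by a small outcome sample.
outcomes = [1, 2, 2, 3, 10]

def expected_score(point, score):
    # Average score of issuing `point` against each possible outcome.
    return sum(score(point, y) for y in outcomes) / len(outcomes)

squared = lambda x, y: (x - y) ** 2
absolute = lambda x, y: abs(x - y)

# Grid-search the best point forecast under each scoring function.
candidates = [i / 10 for i in range(0, 101)]
best_sq = min(candidates, key=lambda x: expected_score(x, squared))
best_abs = min(candidates, key=lambda x: expected_score(x, absolute))
# best_sq lands on the mean (3.6); best_abs on the median (2.0):
# the two scoring functions direct the forecaster to different values.
```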

924 citations

Journal ArticleDOI
TL;DR: It is shown that UD's have many desirable properties for a wide variety of applications and the global optimization algorithm, threshold accepting, is used to generate UD's with low discrepancy.
Abstract: A uniform design (UD) seeks design points that are uniformly scattered on the domain. It has been popular since 1980. A survey of UD is given in the first portion: The fundamental idea and construction method are presented and discussed and examples are given for illustration. It is shown that UD's have many desirable properties for a wide variety of applications. Furthermore, we use the global optimization algorithm, threshold accepting, to generate UD's with low discrepancy. The relationship between uniformity and orthogonality is investigated. It turns out that most UD's obtained here are indeed orthogonal.
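The uniformity criterion a UD minimizes is a discrepancy measure. In one dimension the star discrepancy has a simple closed form, which makes the idea easy to demonstrate; the formula and the midpoint-lattice example below are standard facts, not taken from the paper, and the threshold-accepting search itself is not implemented here.

```python
def star_discrepancy_1d(points):
    # Closed form for sorted points x_1 <= ... <= x_n in [0, 1]:
    # D* = max over i of max(i/n - x_i, x_i - (i-1)/n).
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

n = 8
# The midpoint lattice {(2i-1)/(2n)} attains the minimal value 1/(2n).
uniform = [(2 * i + 1) / (2 * n) for i in range(n)]
# Points clumped in [0.1, 0.8] leave the right end of the domain empty.
clumped = [0.1 * (i + 1) for i in range(n)]
```

A design with lower discrepancy scatters its points more evenly over the domain, which is exactly the property the survey's construction methods and the threshold-accepting search aim for.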

825 citations

Journal Article
TL;DR: This paper proposes a new method called importance weighted cross validation (IWCV), whose unbiasedness is proved even under covariate shift. The IWCV procedure is the only one that can be applied for unbiased classification under covariate shift.
Abstract: A common assumption in supervised learning is that the input points in the training set follow the same probability distribution as the input points that will be given in the future test phase. However, this assumption is not satisfied, for example, when the outside of the training region is extrapolated. The situation where the training input points and test input points follow different distributions, while the conditional distribution of output values given input points is unchanged, is called the covariate shift. Under the covariate shift, standard model selection techniques such as cross validation do not work as desired since their unbiasedness is no longer maintained. In this paper, we propose a new method called importance weighted cross validation (IWCV), for which we prove its unbiasedness even under the covariate shift. The IWCV procedure is the only one that can be applied for unbiased classification under covariate shift, whereas alternatives to IWCV exist for regression. The usefulness of our proposed method is illustrated by simulations, and furthermore demonstrated in the brain-computer interface, where strong non-stationarity effects can be seen between training and test sessions.
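The core mechanism of IWCV is simple to sketch: each held-out loss is reweighted by the density ratio p_test(x) / p_train(x), so the average estimates the risk under the test distribution rather than the training one. The helper name, losses, and density ratios below are all made up for illustration.

```python
def iw_validation_error(losses, weights):
    # Hypothetical helper: weights[i] is the density ratio
    # p_test(x_i) / p_train(x_i) for held-out point x_i.
    return sum(w * l for w, l in zip(weights, losses)) / len(losses)

losses = [0.2, 0.1, 0.9, 0.4]     # per-point validation losses
weights = [0.5, 0.5, 2.0, 1.0]    # density ratios under covariate shift

plain = sum(losses) / len(losses)                 # ordinary CV estimate
shifted = iw_validation_error(losses, weights)    # importance-weighted
# The third point is twice as likely at test time, so its large loss
# counts double and the weighted estimate exceeds the plain one.
```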

807 citations