
Showing papers by "Pritam Ranjan published in 2011"


Journal ArticleDOI
TL;DR: In this paper, the authors propose a lower bound on the nugget that minimizes the over-smoothing, and an iterative regularization approach to construct a predictor that further improves the interpolation.
Abstract: For many expensive deterministic computer simulators, the outputs do not have replication error and the desired metamodel (or statistical emulator) is an interpolator of the observed data. Realizations of Gaussian spatial processes (GP) are commonly used to model such simulator outputs. Fitting a GP model to n data points requires the computation of the inverse and determinant of n×n correlation matrices, R, which are sometimes computationally unstable due to near-singularity of R. This happens if any two design points are very close together in the input space. The popular approach to overcome near-singularity is to introduce a small nugget (or jitter) parameter in the model that is estimated along with the other model parameters. The inclusion of a nugget in the model often causes unnecessary over-smoothing of the data. In this article, we propose a lower bound on the nugget that minimizes the over-smoothing, and an iterative regularization approach to construct a predictor that further improves the interpolation.
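As a rough illustration of the regularization idea, the sketch below builds a Gaussian correlation matrix for a nearly coincident design and computes the smallest nugget that caps the condition number of R at a chosen threshold. The function names (gaussian_corr, nugget_floor) and the threshold kappa_max are illustrative assumptions; this is not the paper's exact closed-form lower bound.

```python
import numpy as np

def gaussian_corr(X, theta):
    """Squared-exponential (Gaussian) correlation matrix for an n x d design X."""
    diff = X[:, None, :] - X[None, :, :]          # pairwise coordinate differences
    return np.exp(-np.sum(theta * diff**2, axis=2))

def nugget_floor(R, kappa_max=1e8):
    """Smallest nugget delta such that cond(R + delta*I) <= kappa_max.
    Illustrative only; the paper derives its own lower bound on the nugget."""
    lam = np.linalg.eigvalsh(R)                   # ascending eigenvalues of symmetric R
    lam_min, lam_max = lam[0], lam[-1]
    kappa = lam_max / max(lam_min, np.finfo(float).eps)
    if kappa <= kappa_max:
        return 0.0
    # solve (lam_max + delta) / (lam_min + delta) = kappa_max for delta
    return (lam_max - kappa_max * lam_min) / (kappa_max - 1.0)

# toy example: two nearly coincident design points make R near-singular
X = np.array([[0.10], [0.1000001], [0.50], [0.90]])
R = gaussian_corr(X, theta=np.array([5.0]))
delta = nugget_floor(R)
R_reg = R + delta * np.eye(len(X))                # regularized correlation matrix
```

Keeping delta at this floor, rather than estimating a larger nugget freely, limits the amount of smoothing introduced purely for numerical stability.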

106 citations


Posted Content
TL;DR: This small study demonstrates that the computational cost of implementing GP models can be significantly reduced by using a CPU+GPU heterogeneous computing system over an analogous implementation with no GPU acceleration, and suggests that GP models are fertile ground for further implementation on CPU+GPU systems.
Abstract: The graphics processing unit (GPU) has emerged as a powerful and cost-effective processor for general performance computing. GPUs are capable of an order of magnitude more floating-point operations per second than modern central processing units (CPUs), and thus hold great promise for computationally intensive statistical applications. Fitting complex statistical models with a large number of parameters and/or for large datasets is often very computationally expensive. In this study, we focus on Gaussian process (GP) models -- statistical models commonly used for emulating expensive computer simulators. We demonstrate that the computational cost of implementing GP models can be significantly reduced by using a CPU+GPU heterogeneous computing system over an analogous implementation on a traditional computing system with no GPU acceleration. Our small study suggests that GP models are fertile ground for further implementation on CPU+GPU systems.
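To illustrate where a GPU helps, the sketch below times a Cholesky-based GP likelihood evaluation, whose O(n^3) factorization of the n×n correlation matrix dominates the cost, on the CPU with NumPy and, if available, on the GPU with CuPy. CuPy, the toy correlation model, and the timing setup are assumptions for illustration, not the implementation used in the study.

```python
import time
import numpy as np

def gp_neg_log_likelihood(R, y, xp=np):
    """Profile negative log-likelihood of a zero-mean, unit-variance-profiled GP
    (up to constants). The Cholesky factorization is the step offloaded to the GPU."""
    L = xp.linalg.cholesky(R)                       # O(n^3) bottleneck
    alpha = xp.linalg.solve(R, y)                   # could reuse L via triangular solves
    log_det = 2.0 * xp.sum(xp.log(xp.diagonal(L)))
    n = y.shape[0]
    return 0.5 * n * xp.log(y @ alpha / n) + 0.5 * log_det

# CPU (NumPy) vs. GPU (CuPy, if installed) on a synthetic 1-d design
n = 2000
rng = np.random.default_rng(0)
X = rng.uniform(size=(n, 1))
R = np.exp(-5.0 * (X - X.T) ** 2) + 1e-6 * np.eye(n)  # small nugget for stability
y = rng.standard_normal(n)

t0 = time.perf_counter()
gp_neg_log_likelihood(R, y, xp=np)
print("CPU:", time.perf_counter() - t0, "s")

try:
    import cupy as cp
    t0 = time.perf_counter()
    gp_neg_log_likelihood(cp.asarray(R), cp.asarray(y), xp=cp)
    cp.cuda.Stream.null.synchronize()               # wait for GPU work to finish
    print("GPU:", time.perf_counter() - t0, "s")
except ImportError:
    print("CuPy not installed; skipping GPU timing.")
```

Because likelihood-based fitting evaluates this quantity many times while optimizing the correlation parameters, even a modest per-evaluation speedup compounds over the whole fit.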

19 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present branch-and-bound algorithms for efficiently maximizing the expected improvement function in specific problems, including the simultaneous estimation of a global maximum and minimum, and the estimation of contours.
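For context, one common closed form of the expected improvement for the simultaneous max/min problem is sketched below, assuming a normal GP predictive distribution at each candidate point. The grid of candidates stands in for the paper's branch-and-bound search, and all function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def ei_max_min(mu, s, f_max, f_min):
    """Expected improvement for simultaneously estimating the global max and min.
    mu, s: GP predictive mean and standard deviation at candidate points.
    f_max, f_min: current best observed maximum and minimum.
    With improvement I(x) = max(y - f_max, 0) + max(f_min - y, 0) and
    y ~ N(mu, s^2), the expectation is a sum of two one-sided EI terms."""
    s = np.maximum(s, 1e-12)                       # guard against zero predictive variance
    u_hi = (mu - f_max) / s                        # standardized gain above current max
    u_lo = (f_min - mu) / s                        # standardized gain below current min
    ei_hi = (mu - f_max) * norm.cdf(u_hi) + s * norm.pdf(u_hi)
    ei_lo = (f_min - mu) * norm.cdf(u_lo) + s * norm.pdf(u_lo)
    return ei_hi + ei_lo

# candidate grid in place of the branch-and-bound maximization (assumption)
mu = np.array([0.2, 0.9, -0.4])
s = np.array([0.3, 0.1, 0.5])
print(ei_max_min(mu, s, f_max=0.8, f_min=-0.3))
```

The next simulator run would then be placed at the candidate maximizing this criterion; the paper's contribution is doing that maximization efficiently and with guarantees via branch and bound rather than a fixed grid.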

14 citations


Journal ArticleDOI
TL;DR: This paper develops a methodology for selecting optimal follow-up designs, based on integrated mean squared error, that helps capture and reduce prediction uncertainty as much as possible.
Abstract: In many branches of physical science, when the complex physical phenomena are either too expensive or too time-consuming to observe, deterministic computer codes are often used to simulate these processes. Nonetheless, the true physical processes are also observed in some disciplines. It is preferable to integrate both the true physical process data and the computer model data for a better understanding of the underlying phenomena. In this paper, we develop a methodology for selecting optimal follow-up designs, based on integrated mean squared error, that helps capture and reduce prediction uncertainty as much as possible. We also compare the efficiency of the optimal designs with intuitive choices for the follow-up computer and field trials.
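As a hedged sketch of an integrated mean squared error (IMSE) style criterion, the code below averages a simple-kriging predictive variance over a reference grid after hypothetically adding each candidate follow-up point, and picks the candidate with the smallest value. The Gaussian correlation function, the single data source, and all names are simplifying assumptions; the paper's criterion combines computer model and field data.

```python
import numpy as np

def corr(A, B, theta=5.0):
    """Gaussian correlation between two 1-d point sets."""
    return np.exp(-theta * (A[:, None] - B[None, :]) ** 2)

def imse_after_adding(x_new, X, grid, nugget=1e-8):
    """Approximate integrated MSE of a simple-kriging predictor after adding x_new.
    Averages the GP predictive variance over a reference grid (unit process variance)."""
    Xa = np.append(X, x_new)
    R = corr(Xa, Xa) + nugget * np.eye(len(Xa))
    r = corr(grid, Xa)                                   # grid-to-design correlations
    var = 1.0 - np.einsum('ij,ij->i', r @ np.linalg.inv(R), r)
    return var.mean()

# pick the follow-up trial that most reduces prediction uncertainty (illustrative)
X = np.array([0.1, 0.4, 0.8])                            # existing design points
grid = np.linspace(0.0, 1.0, 201)                        # integration grid
candidates = np.linspace(0.0, 1.0, 21)
best = min(candidates, key=lambda c: imse_after_adding(c, X, grid))
print("follow-up point:", best)
```

Note that this predictive variance does not depend on the unobserved response at the candidate, which is what makes an IMSE-type criterion convenient for planning follow-up runs before they are made.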

14 citations