Author

Pritam Ranjan

Bio: Pritam Ranjan is an academic researcher at the Indian Institute of Management Indore. He has contributed to research on topics including Gaussian processes and computer experiments, has an h-index of 14, and has co-authored 64 publications receiving 1,038 citations. His previous affiliations include Simon Fraser University and Acadia University.


Papers
Journal ArticleDOI
TL;DR: The main idea is to first find a good local neighborhood for every input location, and then emulate the simulator output via a singular value decomposition (SVD) based Gaussian process (GP) model.
Abstract: The recent accelerated growth in computing power has popularized experimentation with dynamic computer models across physical and engineering applications. Despite extensive statistical research in computer experiments, most of the focus has been on theoretical and algorithmic innovations for the design and analysis of computer models with scalar responses. In this article, we propose a computationally efficient statistical emulator for a large-scale dynamic computer simulator (i.e., a simulator that gives time-series outputs). The main idea is to first find a good local neighborhood for every input location, and then emulate the simulator output via a singular value decomposition (SVD) based Gaussian process (GP) model. We develop a new design criterion for sequentially finding this local neighborhood set of training points. Several test functions and a real-life application have been used to demonstrate the performance of the proposed approach over a naive method…

6 citations
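The SVD step of the emulation idea above can be sketched in a few lines. This is a hedged illustration, not the paper's method: the variance-based truncation rule (`energy`) and the toy damped-sine simulator are my assumptions, and the full approach would additionally fit a GP to each retained coefficient over the input space.

```python
import numpy as np

def svd_basis(Y, energy=0.99):
    """Truncated SVD basis for time-series simulator outputs.

    Y is an (n_runs, n_timesteps) matrix; `energy` is a hypothetical
    truncation rule that keeps enough singular vectors to explain
    that fraction of the (centered) output variance.
    """
    mu = Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Y - mu, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(frac, energy)) + 1
    # Each run is now summarized by k coefficients; the full method
    # would model each coefficient as a GP over the simulator inputs.
    coeffs = U[:, :k] * s[:k]
    return mu, Vt[:k], coeffs

# Toy demo: 20 runs of a damped sine with varying frequency.
t = np.linspace(0.0, 1.0, 50)
Y = np.array([np.exp(-t) * np.sin(2 * np.pi * f * t)
              for f in np.linspace(1.0, 3.0, 20)])
mu, basis, coeffs = svd_basis(Y)
rel_err = np.linalg.norm(Y - (mu + coeffs @ basis)) / np.linalg.norm(Y - mu)
print(rel_err < 0.11)  # True: truncation keeps >= 99% of the variance
```

By construction, keeping 99% of the squared singular values bounds the relative Frobenius reconstruction error by about 0.1, which is why the check above holds regardless of how many modes are retained.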

01 Jan 2010
TL;DR: In this paper, the authors proposed new factorial and fractional factorial designs that minimize the number of effects that have to be sacrificed to assess the significance of some of the factorial effects.
Abstract: Factorial and fractional factorial designs are widely used for assessing the impact of several factors on a process. Frequently, restrictions are placed on the randomization of the experimental trials. The randomization structure of such a factorial design can be characterized by its set of randomization defining contrast subspaces. It turns out that in many practical situations, these subspaces will overlap, thereby making it impossible to assess the significance of some of the factorial effects. In this article, we propose new designs that minimize the number of effects that have to be sacrificed. We also propose new designs, called stars, that are easy to construct and allow the assessment of a large number of factorial effects under an appropriately chosen overlapping strategy.

6 citations
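For readers unfamiliar with the objects being discussed, here is a minimal sketch of the two-level factorial machinery the paper builds on. The -1/+1 coding and the defining relation I = ABC below are standard textbook constructions, not the paper's star designs or overlapping strategies.

```python
import itertools
import numpy as np

def full_factorial(k):
    """All 2**k runs of a two-level factorial design in -1/+1 coding."""
    return np.array(list(itertools.product([-1, 1], repeat=k)))

D = full_factorial(3)
# A 2^(3-1) half fraction with defining relation I = ABC: keep the runs
# where the product of the three factor columns is +1.
half = D[D.prod(axis=1) == 1]
print(D.shape, half.shape)  # (8, 3) (4, 3)
```

Each factor column in `D` is balanced (it sums to zero), which is what makes effect estimates orthogonal in the unrestricted-randomization case; the restricted-randomization designs in the paper address what happens when this ideal cannot be achieved.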

Journal ArticleDOI
TL;DR: In this paper, a modified history matching approach was proposed for calibrating the time-series rainfall-runoff models with respect to real data collected from the state of Georgia, USA.
Abstract: Calibration of hydrological time-series models is a challenging task, since these models produce a wide spectrum of output series and calibration procedures require a significant amount of time. From a statistical standpoint, this model parameter estimation problem reduces to finding an inverse solution of a computer model that generates a pre-specified time-series output (i.e., a realistic output series). In this paper, we propose a modified history matching approach for calibrating time-series rainfall-runoff models with respect to real data collected from the state of Georgia, USA. We present the methodology and illustrate the application of the algorithm by carrying out a simulation study and two case studies. Several goodness-of-fit statistics were calculated to assess model performance. The results showed that the proposed history matching algorithm led to significant improvements of 30% and 14% (in terms of root mean squared error) and 26% and 118% (in terms of the peak percent threshold statistic) for the two case studies with the Matlab-Simulink and SWAT models, respectively.

5 citations
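The percent improvements quoted above are relative reductions in a goodness-of-fit statistic. A small sketch of how such a figure is computed for RMSE (the function names and toy data are illustrative, not from the paper):

```python
import numpy as np

def rmse(obs, sim):
    """Root mean squared error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def percent_improvement(obs, sim_before, sim_after):
    """Relative RMSE reduction, in percent, after recalibration."""
    before, after = rmse(obs, sim_before), rmse(obs, sim_after)
    return 100.0 * (before - after) / before

obs = [1.0, 2.0, 3.0, 4.0]
# Toy example: calibration halves a uniform 0.5 bias to 0.25.
print(percent_improvement(obs,
                          [1.5, 2.5, 3.5, 4.5],
                          [1.25, 2.25, 3.25, 4.25]))  # 50.0
```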

Journal ArticleDOI
Pritam Ranjan
TL;DR: The author believes that the methodologies presented by Picheny et al. are innovative and should be useful for computer experiment practitioners.

5 citations


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
01 Jan 2016
TL;DR: This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.
Abstract: Big Data applications are typically associated with systems involving large numbers of users, massive complex software systems, and large-scale heterogeneous computing and storage architectures. The construction of such systems involves many distributed design choices. The end products (e.g., recommendation systems, medical analysis tools, real-time game engines, speech recognizers) thus involve many tunable configuration parameters. These parameters are often specified and hard-coded into the software by various developers or teams. If optimized jointly, these parameters can result in significant improvements. Bayesian optimization is a powerful tool for the joint optimization of design choices that is gaining great popularity in recent years. It promises greater automation so as to increase both product quality and human productivity. This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.

3,703 citations
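The core loop the review describes (fit a surrogate, maximize an acquisition function, evaluate, repeat) can be sketched compactly. This is a hedged, minimal illustration: the fixed-hyperparameter RBF Gaussian process, expected improvement acquisition, grid search, and toy objective are all my simplifying assumptions, not the review's recommended setup.

```python
import math
import numpy as np

def gp_posterior(X, y, Xs, ls=0.2, noise=1e-6):
    """Posterior mean/variance of a unit-variance RBF GP at points Xs."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    alpha = np.linalg.solve(K, y)
    mu = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = np.maximum(1.0 - np.sum(Ks * v, axis=0), 1e-12)
    return mu, var

def expected_improvement(mu, var, best):
    """EI for minimization: expected amount by which we beat `best`."""
    sd = np.sqrt(var)
    z = (best - mu) / sd
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best - mu) * cdf + sd * pdf

f = lambda x: (x - 0.3) ** 2            # toy objective to minimize
grid = np.linspace(0.0, 1.0, 101)
X = np.array([0.0, 0.5, 1.0])           # initial design
y = f(X)
for _ in range(5):
    mu, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, var, y.min()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
best_x = X[np.argmin(y)]
print(best_x)  # the sampled point closest to the optimum of f
```

Note how few evaluations of `f` are needed: the surrogate's uncertainty steers samples toward the gap containing the minimum, which is exactly the automation-of-tuning argument the abstract makes.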

Journal ArticleDOI
TL;DR: An iterative approach based on Monte Carlo simulation and Kriging metamodels (AK-MCS) to assess the reliability of structures more efficiently; the probability of failure obtained with AK-MCS is shown to be very accurate while requiring only a small number of calls to the performance function.

1,234 citations

Journal ArticleDOI
TL;DR: This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space and is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions.
Abstract: Many engineering applications are characterized by implicit response functions that are expensive to evaluate and sometimes nonlinear in their behavior, making reliability analysis difficult. This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space. The method begins with a Gaussian process model built from a very small number of samples, and then adaptively chooses where to generate subsequent samples to ensure that the model is accurate in the vicinity of the limit state. The resulting Gaussian process model is then sampled using multimodal adaptive importance sampling to calculate the probability of exceeding (or failing to exceed) the response level of interest. By locating multiple points on or near the limit state, more complex and nonlinear limit states can be modeled, leading to more accurate probability integration. By concentrating the samples in the area where accuracy is important (i.e., in the vicinity of the limit state), only a small number of true function evaluations are required to build a quality surrogate model. The resulting method is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions. This new method is applied to a collection of example problems including one that analyzes the reliability of a microelectromechanical system device that current available methods have difficulty solving either accurately or efficiently.

804 citations
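The adaptive-sampling idea in this line of work (choose the next true function evaluation where the predicted limit state g = 0 is both close and uncertain) can be sketched with the "U" learning function popularized by AK-MCS-style methods. This is an illustrative fragment under that assumption, not the paper's multimodal importance-sampling algorithm; the toy numbers are made up.

```python
import numpy as np

def next_refinement_point(mu, sd):
    """Index of the Monte Carlo candidate with the smallest U = |mu| / sd,
    i.e. the point most likely to be misclassified by the surrogate."""
    U = np.abs(mu) / np.maximum(sd, 1e-12)
    return int(np.argmin(U))

def failure_probability(mu):
    """Surrogate-based Monte Carlo estimate: fraction of candidates whose
    predicted response falls in the failure region g < 0."""
    return float(np.mean(mu < 0.0))

mu = np.array([1.2, 0.1, -0.4, 2.0])   # predicted g at 4 candidates
sd = np.array([0.1, 0.3, 0.05, 0.1])   # predictive standard deviations
print(next_refinement_point(mu, sd))   # 1 -> |0.1| / 0.3 is the smallest U
print(failure_probability(mu))         # 0.25
```

Concentrating true evaluations at small-U points is what lets these methods resolve the limit state accurately with very few calls to the expensive response function.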

Journal ArticleDOI
TL;DR: In many situations across computational science and engineering, multiple computational models are available that describe a system of interest, and these different models have varying evaluation costs.
Abstract: In many situations across computational science and engineering, multiple computational models are available that describe a system of interest. These different models have varying evaluation costs…

678 citations