Author

Pritam Ranjan

Bio: Pritam Ranjan is an academic researcher at the Indian Institute of Management Indore. His research topics include Gaussian process modeling and computer experiments. He has an h-index of 14 and has co-authored 64 publications receiving 1038 citations. Previous affiliations of Pritam Ranjan include Simon Fraser University and Acadia University.


Papers
Journal ArticleDOI
TL;DR: The proposed approach uses Gaussian Process (GP) modeling to improve upon SELC, and is hence named G-SELC; it is implemented on a real pharmaceutical data set for finding a group of chemical compounds with optimal properties.
Abstract: Identifying promising compounds from a vast collection of feasible compounds is an important and yet challenging problem in the pharmaceutical industry. An efficient solution to this problem will help reduce the expenditure at the early stages of drug discovery. In an attempt to solve this problem, Mandal, Wu and Johnson [Technometrics 48 (2006) 273–283] proposed the SELC algorithm. Although powerful, it fails to extract substantial information from the data to guide the search efficiently, as this methodology is not based on any statistical modeling. The proposed approach uses Gaussian Process (GP) modeling to improve upon SELC, and is hence named $\mathcal{G}$-SELC. The performance of the proposed methodology is illustrated using four and five dimensional test functions. Finally, we implement the new algorithm on a real pharmaceutical data set for finding a group of chemical compounds with optimal properties.

14 citations
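The mechanic this abstract describes, fitting a Gaussian process to already-scored compounds and using the fitted model to rank untested candidates, can be sketched in a few lines. This is a minimal illustration under assumptions not stated in the abstract (an RBF kernel, expected improvement as the ranking score, a maximization objective), not the authors' implementation.

```python
# Minimal sketch: GP-guided candidate ranking in the spirit of G-SELC.
# Kernel, acquisition rule, and all names are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def response(X):                           # toy stand-in for a measured property
    return -np.sum((X - 0.5) ** 2, axis=1)

X_tested = rng.random((20, 4))             # compounds already assayed (4 descriptors)
y_tested = response(X_tested)
X_pool = rng.random((500, 4))              # untested candidate compounds

gp = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF(length_scale=0.5),
                              alpha=1e-6, normalize_y=True)
gp.fit(X_tested, y_tested)

mu, sd = gp.predict(X_pool, return_std=True)
best = y_tested.max()
z = (mu - best) / np.maximum(sd, 1e-12)
ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement

print("next batch to assay:", np.argsort(ei)[::-1][:5])
```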

Journal ArticleDOI
TL;DR: This paper focuses on estimating the inverse solution, i.e., finding the set(s) of input combinations of the simulator that generate a pre-determined simulator output, for a time-series valued simulator.
Abstract: For an expensive-to-evaluate computer simulator, even estimating the overall response surface can be a challenging problem. In this paper, we focus on the estimation of the inverse solution, i.e., finding the set(s) of input combinations of the simulator that generate a pre-determined simulator output. Ranjan et al. [1] proposed an expected improvement criterion under a sequential design framework for the inverse problem with a scalar valued simulator. In this paper, we focus on the inverse problem for a time-series valued simulator. We use a few simulated and two real examples for performance comparison.

13 citations
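One common way to frame such an inverse problem is to collapse the time-series match into a scalar discrepancy and minimize it sequentially with a surrogate. The sketch below does that with a GP surrogate and a lower-confidence-bound rule; it is an illustrative simplification, not the expected improvement criterion of Ranjan et al. [1], and the toy simulator and all names are assumptions.

```python
# Minimal sketch of a surrogate-based inverse search for a time-series
# simulator: reduce the match to a scalar misfit and minimize it.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
bounds = np.array([1.0, 3.0])               # input ranges: x0 in [0,1], x1 in [0,3]

def simulator(x):                           # toy time-series valued simulator
    return np.sin(2 * np.pi * (t + x[0])) * np.exp(-x[1] * t)

y_target = simulator(np.array([0.3, 1.5]))  # pre-determined output to invert

def discrepancy(x):                         # scalar misfit to the target series
    return np.linalg.norm(simulator(x) - y_target)

X = rng.random((15, 2)) * bounds            # initial design
d = np.array([discrepancy(x) for x in X])

for _ in range(20):                         # sequential refinement toward d = 0
    gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-8,
                                  normalize_y=True).fit(X, d)
    cand = rng.random((2_000, 2)) * bounds
    mu, sd = gp.predict(cand, return_std=True)
    x_new = cand[np.argmin(mu - 2.0 * sd)]  # lower confidence bound on misfit
    X = np.vstack([X, x_new])
    d = np.append(d, discrepancy(x_new))

print("best input found:", X[np.argmin(d)], "misfit:", d.min())
```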

Journal ArticleDOI
TL;DR: In this paper, a new class of space-filling LHDs based on Orthogonal Arrays (OAs) derived from stars of PG(p-1, 2) was presented.

12 citations

Posted Content
TL;DR: In this article, a new class of space-filling LHDs based on nearly Orthogonal Arrays (NOAs) derived from stars of PG(p-1, 2) is presented.
Abstract: Latin hypercube designs (LHDs) with space-filling properties are widely used for emulating computer simulators. Over the last three decades, a wide spectrum of LHDs have been proposed with space-filling criteria like minimum correlation among factors, maximin interpoint distance, and orthogonality among the factors via orthogonal arrays (OAs). Projective geometric structures like spreads, covers and stars of PG(p-1,q) can be used to characterize the randomization restriction of multistage factorial experiments. These geometric structures can also be used for constructing OAs and nearly OAs (NOAs). In this paper, we present a new class of space-filling LHDs based on NOAs derived from stars of PG(p-1, 2).

10 citations
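The geometric NOA construction in the paper is specialized, but the two ingredients the abstract names, Latin hypercube structure and a space-filling criterion such as maximin interpoint distance, are easy to show generically. The sketch below builds plain random LHDs and keeps the best one under the maximin criterion; it does not reproduce the PG(p-1, 2) construction.

```python
# Minimal sketch: random Latin hypercube designs scored by the maximin
# interpoint-distance criterion mentioned in the abstract.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)

def random_lhd(n, k):
    """n-run, k-factor Latin hypercube: one point per axis stratum."""
    perms = np.column_stack([rng.permutation(n) for _ in range(k)])
    return (perms + rng.random((n, k))) / n   # jitter within each cell

def maximin_score(X):
    return pdist(X).min()     # larger minimum interpoint distance is better

# Keep the best of many random LHDs under the maximin criterion.
best = max((random_lhd(16, 3) for _ in range(200)), key=maximin_score)
print("minimum interpoint distance:", round(maximin_score(best), 4))
```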


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
01 Jan 2016
TL;DR: This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.
Abstract: Big Data applications are typically associated with systems involving large numbers of users, massive complex software systems, and large-scale heterogeneous computing and storage architectures. The construction of such systems involves many distributed design choices. The end products (e.g., recommendation systems, medical analysis tools, real-time game engines, speech recognizers) thus involve many tunable configuration parameters. These parameters are often specified and hard-coded into the software by various developers or teams. If optimized jointly, these parameters can result in significant improvements. Bayesian optimization is a powerful tool for the joint optimization of design choices that is gaining great popularity in recent years. It promises greater automation so as to increase both product quality and human productivity. This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.

3,703 citations
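The kind of loop this review surveys, a probabilistic surrogate plus an acquisition function choosing the next expensive evaluation, fits in a few lines. Below is a minimal sketch with a GP surrogate and expected improvement on a one-dimensional tuning parameter; the objective, kernel, and grid search are illustrative assumptions, and production systems add noise handling, batching, and constraints.

```python
# Minimal Bayesian optimization sketch: GP surrogate + expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                          # stand-in for an expensive system
    return -(x - 0.65) ** 2 + 0.1 * np.sin(12.0 * x)

rng = np.random.default_rng(3)
X = rng.random((4, 1))                     # initial configurations
y = objective(X).ravel()
grid = np.linspace(0.0, 1.0, 512).reshape(-1, 1)

for _ in range(15):                        # sequential expensive evaluations
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-8,
                                  normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    z = (mu - y.max()) / np.maximum(sd, 1e-12)
    ei = (mu - y.max()) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_new = grid[np.argmax(ei)]
    X = np.vstack([X, x_new])
    y = np.append(y, objective(x_new)[0])

print("best configuration:", X[np.argmax(y)].item(), "value:", y.max())
```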

Journal ArticleDOI
TL;DR: An iterative approach based on Monte Carlo simulation and Kriging metamodeling (AK-MCS) is proposed to assess the reliability of structures more efficiently; the method is shown to yield a very accurate probability of failure using only a small number of calls to the performance function.

1,234 citations
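In the AK-MCS family, the expensive performance function g is evaluated only where the Kriging model is unsure of the sign of g. A commonly cited form of this is the learning function U(x) = |mu(x)| / sigma(x), with refinement stopping once min U >= 2; the sketch below implements that rule on a toy g, and all other choices (kernel, population size, initial design) are assumptions.

```python
# Sketch of the AK-MCS idea: a GP classifies a Monte Carlo population as
# safe/failed, and U = |mu|/sigma picks the next point to evaluate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)

def g(X):                                  # performance function; g < 0 is failure
    return 3.0 - X[:, 0] - X[:, 1]

pop = rng.normal(size=(20_000, 2))         # Monte Carlo population
idx = rng.choice(len(pop), size=12, replace=False)
X_doe, y_doe = pop[idx], g(pop[idx])       # small initial design of experiments

for _ in range(60):                        # cap on adaptive iterations
    gp = GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-8,
                                  normalize_y=True).fit(X_doe, y_doe)
    mu, sd = gp.predict(pop, return_std=True)
    U = np.abs(mu) / np.maximum(sd, 1e-12)
    if U.min() >= 2.0:                     # sign of g settled for every point
        break
    j = int(np.argmin(U))                  # most ambiguous population point
    X_doe = np.vstack([X_doe, pop[j]])
    y_doe = np.append(y_doe, g(pop[j:j + 1]))

print("estimated P_f:", (mu < 0).mean(), "using", len(y_doe), "calls to g")
```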

Journal ArticleDOI
TL;DR: This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space and is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions.
Abstract: Many engineering applications are characterized by implicit response functions that are expensive to evaluate and sometimes nonlinear in their behavior, making reliability analysis difficult. This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space. The method begins with a Gaussian process model built from a very small number of samples, and then adaptively chooses where to generate subsequent samples to ensure that the model is accurate in the vicinity of the limit state. The resulting Gaussian process model is then sampled using multimodal adaptive importance sampling to calculate the probability of exceeding (or failing to exceed) the response level of interest. By locating multiple points on or near the limit state, more complex and nonlinear limit states can be modeled, leading to more accurate probability integration. By concentrating the samples in the area where accuracy is important (i.e., in the vicinity of the limit state), only a small number of true function evaluations are required to build a quality surrogate model. The resulting method is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions. This new method is applied to a collection of example problems including one that analyzes the reliability of a microelectromechanical system device that current available methods have difficulty solving either accurately or efficiently.

804 citations
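The adaptive step this abstract describes, refining a GP where the limit state g = 0 is both close and uncertain, is commonly scored with an expected feasibility function in this line of work. The sketch below uses that scoring rule on a toy problem; whether it matches the paper's exact criterion is not confirmed by the abstract, and the final plain Monte Carlo estimate stands in for the multimodal adaptive importance sampling the abstract mentions.

```python
# Sketch: adaptive GP refinement near the limit state g = 0, scored by an
# expected feasibility function. Toy problem; all names are illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)

def g(X):                                  # implicit response; g < 0 is failure
    return X[:, 0] ** 3 + X[:, 1] + 1.5

def expected_feasibility(mu, sd, eps):
    # Expected degree to which the response lies within +/- eps of g = 0.
    t0, tm, tp = -mu / sd, (-eps - mu) / sd, (eps - mu) / sd
    return (mu * (2 * norm.cdf(t0) - norm.cdf(tm) - norm.cdf(tp))
            - sd * (2 * norm.pdf(t0) - norm.pdf(tm) - norm.pdf(tp))
            + eps * (norm.cdf(tp) - norm.cdf(tm)))

X = rng.uniform(-3, 3, size=(10, 2))       # very small initial sample
y = g(X)
cand = rng.uniform(-3, 3, size=(5_000, 2))

for _ in range(25):                        # refine where g = 0 is unsettled
    gp = GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-8,
                                  normalize_y=True).fit(X, y)
    mu, sd = gp.predict(cand, return_std=True)
    sd = np.maximum(sd, 1e-12)
    j = int(np.argmax(expected_feasibility(mu, sd, eps=2 * sd)))
    X = np.vstack([X, cand[j]])
    y = np.append(y, g(cand[j:j + 1]))

gp.fit(X, y)                               # final surrogate
samp = rng.normal(0.0, 1.5, size=(200_000, 2))
print("P_f estimate from surrogate:", (gp.predict(samp) < 0).mean())
```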

Journal ArticleDOI
TL;DR: In many situations across computational science and engineering, multiple computational models are available that describe a system of interest, and these different models have varying evaluation costs.
Abstract: In many situations across computational science and engineering, multiple computational models are available that describe a system of interest. These different models have varying evaluation costs...

678 citations