SciSpace - formally typeset
Author

Pritam Ranjan

Bio: Pritam Ranjan is an academic researcher at the Indian Institute of Management Indore. His research focuses on Gaussian processes and computer experiments. He has an h-index of 14 and has co-authored 64 publications receiving 1,038 citations. Previous affiliations of Pritam Ranjan include Simon Fraser University and Acadia University.


Papers
Posted Content
TL;DR: This paper focuses on estimating the inverse solution, i.e., finding the set(s) of input combinations of the simulator that generate (or closely approximate) a pre-determined simulator output.
Abstract: For an expensive-to-evaluate computer simulator, even estimating the overall response surface can be a challenging problem. In this paper, we focus on estimating the inverse solution, i.e., finding the set(s) of input combinations of the simulator that generate (or closely approximate) a pre-determined simulator output. Ranjan et al. (2008) proposed an expected improvement criterion under a sequential design framework for the inverse problem with a scalar-valued simulator. In this paper, we focus on the inverse problem for a time-series-valued simulator. We use several simulated examples and two real examples for performance comparison.
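The scalar-valued version of this inverse (contour-estimation) idea can be sketched with a toy Gaussian process surrogate. Everything below is illustrative: the `simulator` is a cheap stand-in for an expensive code, and the acquisition rule is a simplified heuristic in the spirit of, but not identical to, the Ranjan et al. (2008) improvement criterion.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential covariance between two sets of 1-D inputs.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression posterior mean and standard deviation.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def simulator(x):
    # Hypothetical cheap stand-in for the expensive simulator.
    return np.sin(2 * np.pi * x)

target = 0.5                             # pre-determined output to invert
X = np.array([0.05, 0.3, 0.55, 0.8])     # small initial design
y = simulator(X)
grid = np.linspace(0.0, 1.0, 201)

for _ in range(5):                       # sequential design loop
    mu, sd = gp_posterior(X, y, grid)
    # Heuristic acquisition: favour points that are both uncertain
    # and close to the target contour.
    score = sd - np.abs(mu - target)
    x_new = grid[np.argmax(score)]
    X, y = np.append(X, x_new), np.append(y, simulator(x_new))

mu, sd = gp_posterior(X, y, grid)
sign = np.sign(mu - target)
cross = grid[np.where(np.diff(sign) != 0)[0]]
print("estimated inverse solutions near x =", np.round(cross, 3))
```

The sign changes of the posterior mean minus the target locate the estimated inverse set; for a time-series-valued simulator the same loop would operate on a discrepancy between the full output series and the target series.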

1 citations

Journal ArticleDOI
TL;DR: In this paper, a mean-correction is proposed for stochastic volatility models (SVMs) of discrete-time returns with non-zero correlation between the return and volatility errors, and the performance of the proposed and classical SVMs is compared on S&P 500 index returns obtained from the NYSE.
Abstract: In an efficient stock market, the log-returns and their time-dependent variances are often jointly modelled by stochastic volatility models (SVMs). Many SVMs assume that the errors in the log-return and latent volatility processes are uncorrelated, which is unrealistic. It turns out that if a non-zero correlation is included in the SVM (e.g., Shephard, 2005), then the expected log-return at time $t$ conditional on the past returns is non-zero, which is not a desirable feature of an efficient stock market. In this paper, we propose a mean-correction for such an SVM for discrete-time returns with non-zero correlation. We also find closed-form analytical expressions for higher moments of the log-return and its lead-lag correlations with the volatility process. We compare the performance of the proposed and classical SVMs on S&P 500 index returns obtained from the NYSE.
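The correlated SV setup described above can be simulated in a few lines. This is a generic textbook-style sketch with illustrative parameter values, not the authors' model or their mean correction; it only shows the leverage-type lead-lag dependence that a non-zero correlation induces.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete-time SV model: r_t = exp(h_t / 2) * eps_t,
# h_{t+1} = mu + phi * (h_t - mu) + sigma_v * eta_t,
# with corr(eps_t, eta_t) = rho (illustrative parameter choices).
T, mu, phi, sigma_v, rho = 100_000, -1.0, 0.95, 0.2, -0.5

h = np.empty(T)
r = np.empty(T)
h[0] = mu
for t in range(T - 1):
    eps = rng.standard_normal()
    eta = rho * eps + np.sqrt(1 - rho**2) * rng.standard_normal()
    r[t] = np.exp(h[t] / 2) * eps
    h[t + 1] = mu + phi * (h[t] - mu) + sigma_v * eta
r[-1] = np.exp(h[-1] / 2) * rng.standard_normal()

# Lead-lag dependence: today's return shock vs tomorrow's volatility state.
lead_corr = np.corrcoef(r[:-1], h[1:])[0, 1]
print("sample mean return:", r.mean())
print("corr(r_t, h_{t+1}):", lead_corr)
```

With a negative `rho` the return-volatility lead correlation comes out negative, which is the dependence structure whose consequences for the conditional mean the paper corrects.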

1 citations

Journal ArticleDOI
TL;DR: In this paper, a unified approach to the construction of regular factorial designs with randomization restrictions is presented, using randomization defining contrast subspaces to represent the randomization restrictions.
Abstract: Regular factorial designs with randomization restrictions are widely used in practice. This paper provides a unified approach to the construction of such designs using randomization defining contrast subspaces for the representation of randomization restrictions. We use finite projective geometry to determine the existence of designs with the required structure and develop a systematic approach for their construction. An attractive feature is that commonly used factorial designs with randomization restrictions are special cases of this general representation. Issues related to the use of these designs for particular factorial experiments are also addressed.

1 citations

DOI
TL;DR: In this paper, the authors assess the degree to which the social value of patents can be connected to their private value across discrete and complex innovation, using a variety of logit regression models for the impact assessment analysis.
Abstract: This study assesses the degree to which the social value of patents can be connected to the private value of patents across discrete and complex innovation. The underlying theory suggests that the social value of cumulative patents is less related to the private value of patents. We use the patents applied for between 1995 and 2002 and granted on or before December 2018 by the Indian Patent Office (IPO). Here, the patent renewal information is utilised as a proxy for the private value of the patent. We have used a variety of logit regression models for the impact assessment analysis. The results reveal that the technology classification (i.e., discrete vs. complex innovations) plays an important role in patent value assessment, and some technologies are significantly different from the others even within the two broader classifications. Moreover, the non-resident patents in India are more likely to have a higher value than the resident patents. According to the conclusions of this study, only a few technologies from the discrete and complex innovation categories have some private value. There is no evidence that patent social value indicators are less useful in complex technical classes than in discrete ones.

Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
01 Jan 2016
TL;DR: This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.
Abstract: Big Data applications are typically associated with systems involving large numbers of users, massive complex software systems, and large-scale heterogeneous computing and storage architectures. The construction of such systems involves many distributed design choices. The end products (e.g., recommendation systems, medical analysis tools, real-time game engines, speech recognizers) thus involve many tunable configuration parameters. These parameters are often specified and hard-coded into the software by various developers or teams. If optimized jointly, these parameters can result in significant improvements. Bayesian optimization is a powerful tool for the joint optimization of design choices that is gaining great popularity in recent years. It promises greater automation so as to increase both product quality and human productivity. This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.
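The Bayesian optimization loop this review introduces can be sketched in its simplest form: fit a Gaussian process to the observations, maximize expected improvement to pick the next evaluation, repeat. The objective below is a hypothetical tunable-parameter response, and the GP hyperparameters are fixed for brevity; a practical implementation would estimate them.

```python
import math
import numpy as np

def kernel(a, b, ls=0.15):
    # Squared-exponential covariance for 1-D inputs.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp(X, y, Xs, noise=1e-6):
    # GP posterior mean and standard deviation on a candidate grid.
    K = kernel(X, X) + noise * np.eye(len(X))
    Ks = kernel(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sd, best):
    # EI for minimization: E[max(best - f, 0)] under the GP posterior.
    z = (best - mu) / sd
    Phi = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))
    phi = np.exp(-0.5 * z**2) / math.sqrt(2 * math.pi)
    return (best - mu) * Phi + sd * phi

def objective(x):
    # Hypothetical expensive black-box response to be tuned.
    return (x - 0.3) ** 2 + 0.1 * np.sin(9 * x)

X = np.array([0.1, 0.5, 0.9])        # initial evaluations
y = objective(X)
grid = np.linspace(0.0, 1.0, 401)

for _ in range(10):                  # Bayesian optimization loop
    mu, sd = gp(X, y, grid)
    ei = expected_improvement(mu, sd, y.min())
    x_new = grid[np.argmax(ei)]
    X, y = np.append(X, x_new), np.append(y, objective(x_new))

print("best x found:", X[np.argmin(y)], "value:", y.min())
```

The EI acquisition trades off exploiting the current best region against exploring uncertain ones, which is what makes the loop sample-efficient for expensive configuration parameters.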

3,703 citations

Journal ArticleDOI
TL;DR: An iterative approach based on Monte Carlo simulation and Kriging metamodelling is proposed to assess the reliability of structures more efficiently; the resulting method (AK-MCS) is shown to be very efficient, as the probability of failure it obtains is very accurate while requiring only a small number of calls to the performance function.

1,234 citations

Journal ArticleDOI
TL;DR: This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space and is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions.
Abstract: Many engineering applications are characterized by implicit response functions that are expensive to evaluate and sometimes nonlinear in their behavior, making reliability analysis difficult. This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space. The method begins with a Gaussian process model built from a very small number of samples, and then adaptively chooses where to generate subsequent samples to ensure that the model is accurate in the vicinity of the limit state. The resulting Gaussian process model is then sampled using multimodal adaptive importance sampling to calculate the probability of exceeding (or failing to exceed) the response level of interest. By locating multiple points on or near the limit state, more complex and nonlinear limit states can be modeled, leading to more accurate probability integration. By concentrating the samples in the area where accuracy is important (i.e., in the vicinity of the limit state), only a small number of true function evaluations are required to build a quality surrogate model. The resulting method is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions. This new method is applied to a collection of example problems including one that analyzes the reliability of a microelectromechanical system device that currently available methods have difficulty solving either accurately or efficiently.
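The adaptive refine-near-the-limit-state idea shared by this paper and AK-MCS can be sketched in one dimension. The sketch below uses the simple U-type learning function from the AK-MCS literature rather than either paper's own criterion, and the limit-state function and distribution are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def kern(a, b, ls=1.0):
    # Squared-exponential covariance for 1-D inputs.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp(X, y, Xs, noise=1e-6):
    # GP posterior mean and standard deviation.
    K = kern(X, X) + noise * np.eye(len(X))
    Ks = kern(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def g(x):
    # Hypothetical limit-state function: failure when g(x) < 0.
    return 2.0 - x + 0.2 * np.sin(2 * x)

mc = rng.standard_normal(50_000)        # Monte Carlo population, x ~ N(0, 1)
X = np.array([-2.0, 0.0, 2.0])          # very small initial design
y = g(X)

for _ in range(6):                      # adaptive enrichment loop
    mu, sd = gp(X, y, mc)
    U = np.abs(mu) / sd                 # U-type learning function
    x_new = mc[np.argmin(U)]            # refine where the sign of g is least certain
    X, y = np.append(X, x_new), np.append(y, g(x_new))

mu, _ = gp(X, y, mc)
pf = np.mean(mu < 0)                    # failure probability from the surrogate
print("estimated P(failure):", pf)
```

Because every added sample lands near the predicted limit state, the surrogate's sign (safe vs. failed) becomes reliable with very few true evaluations of `g`, which is the source of the efficiency both papers report.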

804 citations

Journal ArticleDOI
TL;DR: In many situations across computational science and engineering, multiple computational models with varying evaluation costs are available to describe a system of interest.
Abstract: In many situations across computational science and engineering, multiple computational models are available that describe a system of interest. These different models have varying evaluation costs...

678 citations