# Tapas Kumar Samanta

Bio: Tapas Kumar Samanta is an academic researcher from Uluberia College. The author has contributed to research topics including Space (mathematics) and Normed vector space. The author has an h-index of 11, and has co-authored 89 publications receiving 847 citations.

##### Papers


27 Jul 2006

TL;DR: This book presents Bayesian inference and decision theory from a large number of perspectives, covering utility, priors, Bayesian robustness, and some applications.

Abstract: Contents: Statistical Preliminaries; Bayesian Inference and Decision Theory; Utility, Prior, and Bayesian Robustness; Large Sample Methods; Choice of Priors for Low-dimensional Parameters; Hypothesis Testing and Model Selection; Bayesian Computations; Some Common Problems in Inference; High-dimensional Problems; Some Applications.

306 citations

01 Jan 2006

TL;DR: Advances in both low-dimensional and high-dimensional problems are covered, as well as important topics such as empirical Bayes and hierarchical Bayes methods and Markov chain Monte Carlo techniques.

Abstract: This is a graduate-level textbook on Bayesian analysis blending modern Bayesian theory, methods, and applications. Starting from basic statistics, undergraduate calculus and linear algebra, ideas of both subjective and objective Bayesian analysis are developed to a level where real-life data can be analyzed using the current techniques of statistical computing.
Advances in both low-dimensional and high-dimensional problems are covered, as well as important topics such as empirical Bayes and hierarchical Bayes methods and Markov chain Monte Carlo (MCMC) techniques.
Many topics are at the cutting edge of statistical research. Solutions to common inference problems appear throughout the text along with discussion of what prior to choose. There is a discussion of elicitation of a subjective prior as well as the motivation, applicability, and limitations of objective priors. By way of important applications the book presents microarrays, nonparametric regression via wavelets as well as DMA mixtures of normals, and spatial analysis with illustrations using simulated and real data. Theoretical topics at the cutting edge include high-dimensional model selection and Intrinsic Bayes Factors, which the authors have successfully applied to geological mapping.
The style is informal but clear. Asymptotics is used to supplement simulation or understand some aspects of the posterior.

136 citations
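As a minimal taste of the conjugate calculations such a textbook begins with, here is a Beta-Binomial posterior update; the prior pseudo-counts and data below are illustrative only, not taken from the book:

```python
# Conjugate Beta-Binomial update: Beta(a, b) prior, k successes in n trials.
# Numbers are hypothetical, chosen only for illustration.
a, b = 2.0, 2.0        # prior pseudo-counts
k, n = 7, 10           # observed successes out of n trials
post_a, post_b = a + k, b + (n - k)   # posterior is Beta(a + k, b + n - k)
post_mean = post_a / (post_a + post_b)
print(post_a, post_b, post_mean)      # Beta(9, 5), mean 9/14
```

The conjugacy means the posterior stays in the Beta family, so the update is just bookkeeping on pseudo-counts; non-conjugate models require the computational machinery (e.g. MCMC) the book develops later.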

••

TL;DR: In earlier work the authors obtained, in terms of the limiting likelihood ratio process, a necessary condition for convergence of a suitably centered (and normalized) posterior to a constant limit; this paper shows that the condition is also sufficient for posterior convergence.

Abstract: A general (asymptotic) theory of estimation was developed by Ibragimov and Has’minskii under certain conditions on the normalized likelihood ratios. In an earlier work, the present authors studied the limiting behaviour of the posterior distributions under the general setup of Ibragimov and Has’minskii. In particular, they obtained a necessary condition for the convergence of a suitably centered (and normalized) posterior to a constant limit in terms of the limiting likelihood ratio process. In this paper, it is shown that this condition is also sufficient to imply the posterior convergence. Some related results are also presented.

88 citations

01 Jan 2001

TL;DR: Because large data sets are increasingly common due to advances in information technology, model selection has become an essential part of analysing such data; this article reviews some of the major statistical developments in the area.

Abstract: For many scientists models are synonymous with paradigms. They are models of some aspects of reality as depicted in a particular science. So the problem of choosing a model appears when that science is at a crossroads. An example of this was the situation in the twenties, when physicists had to choose between Newton’s classical theory of gravitation and the theory of gravitation in Einstein’s general theory of relativity. One of our examples, Example 2, illustrates this sort of problem, but most others are of a different kind; they occur all the time.

Typically, when one has to analyse data arising from complex scientific experiments or observational studies in the social sciences and epidemiology, there are various aspects that are not deterministic. One way of modelling non-deterministic phenomena is through a probability model. For complex phenomena it is quite rare to have only one plausible model; instead there are several to choose from, and in all such situations model selection becomes a fundamental problem. Because large data sets are increasingly common as a result of advances in information technology, selecting a model has become an essential part of analysing such data. These problems are challenging methodologically, computationally, and theoretically, and have led to a fast-growing literature in both statistics and computer science.

This article reviews some of the major statistical developments in this area. No previous background in model selection is assumed. The next section presents a brief background, followed by six examples, some theory, and analysis of some of the examples in later sections. The last section provides some concluding remarks. The section ‘State-of-the-art’ is based mainly on Shao and Mukhopadhyay.

33 citations
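One of the standard tools such a review covers is penalized-likelihood model selection. A minimal sketch, using BIC to compare polynomial regression models on synthetic data (the data, degrees, and noise level are invented for illustration):

```python
import numpy as np

# Toy model selection via BIC: compare polynomial degrees on synthetic data
# whose true generating model is linear. Illustrative only.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, x.size)  # truth: degree-1 polynomial

def bic(degree):
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)
    n, k = x.size, degree + 1
    sigma2 = np.mean(resid ** 2)                 # MLE of the noise variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + k * np.log(n)           # BIC = -2 log L + k log n

scores = {d: bic(d) for d in (1, 2, 5)}
best = min(scores, key=scores.get)
print(scores, best)
```

The `k log n` penalty grows with sample size, so BIC resists the overfitting that pure likelihood comparison rewards; the degree-5 model fits the noise slightly better but pays for its extra parameters.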

##### Cited by


03 Jan 2001

TL;DR: This paper proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model, also known as probabilistic latent semantic indexing (pLSI).

Abstract: We propose a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams [6], and Hofmann's aspect model, also known as probabilistic latent semantic indexing (pLSI) [3]. In the context of text modeling, our model posits that each document is generated as a mixture of topics, where the continuous-valued mixture proportions are distributed as a latent Dirichlet random variable. Inference and learning are carried out efficiently via variational algorithms. We present empirical results on applications of this model to problems in text modeling, collaborative filtering, and text classification.

25,546 citations
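The generative story in the abstract, Dirichlet-distributed topic proportions per document, then a topic and a word sampled per position, can be sketched directly. All sizes and hyperparameters below are made up for illustration; this is the sampling story only, not the paper's variational inference:

```python
import numpy as np

# Sketch of LDA's generative process (illustrative hyperparameters).
rng = np.random.default_rng(1)
n_topics, vocab = 3, 8
alpha = np.full(n_topics, 0.5)                        # Dirichlet prior on topic mixtures
beta = rng.dirichlet(np.ones(vocab), size=n_topics)   # per-topic word distributions

def generate_document(n_words):
    theta = rng.dirichlet(alpha)                      # this document's topic proportions
    topics = rng.choice(n_topics, size=n_words, p=theta)  # a topic per word position
    words = [int(rng.choice(vocab, p=beta[z])) for z in topics]
    return theta, topics, words

theta, topics, words = generate_document(10)
print(theta.round(2), words)
```

Learning inverts this process: given only `words`, infer the posterior over `theta` and `beta`, which is intractable exactly and motivates the paper's variational algorithms.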


10 May 2011

TL;DR: This handbook surveys Markov chain Monte Carlo methodology, from foundations and algorithms to applications including a multilevel model for functional MRI data, environmental epidemiology, educational research, and fisheries science.

Abstract: Foreword (Stephen P. Brooks, Andrew Gelman, Galin L. Jones, and Xiao-Li Meng)

- Introduction to MCMC (Charles J. Geyer)
- A short history of Markov chain Monte Carlo: Subjective recollections from incomplete data (Christian Robert and George Casella)
- Reversible jump Markov chain Monte Carlo (Yanan Fan and Scott A. Sisson)
- Optimal proposal distributions and adaptive MCMC (Jeffrey S. Rosenthal)
- MCMC using Hamiltonian dynamics (Radford M. Neal)
- Inference and Monitoring Convergence (Andrew Gelman and Kenneth Shirley)
- Implementing MCMC: Estimating with confidence (James M. Flegal and Galin L. Jones)
- Perfection within reach: Exact MCMC sampling (Radu V. Craiu and Xiao-Li Meng)
- Spatial point processes (Mark Huber)
- The data augmentation algorithm: Theory and methodology (James P. Hobert)
- Importance sampling, simulated tempering and umbrella sampling (Charles J. Geyer)
- Likelihood-free Markov chain Monte Carlo (Scott A. Sisson and Yanan Fan)
- MCMC in the analysis of genetic data on related individuals (Elizabeth Thompson)
- A Markov chain Monte Carlo based analysis of a multilevel model for functional MRI data (Brian Caffo, DuBois Bowman, Lynn Eberly, and Susan Spear Bassett)
- Partially collapsed Gibbs sampling & path-adaptive Metropolis-Hastings in high-energy astrophysics (David van Dyk and Taeyoung Park)
- Posterior exploration for computationally intensive forward models (Dave Higdon, C. Shane Reese, J. David Moulton, Jasper A. Vrugt, and Colin Fox)
- Statistical ecology (Ruth King)
- Gaussian random field models for spatial data (Murali Haran)
- Modeling preference changes via a hidden Markov item response theory model (Jong Hee Park)
- Parallel Bayesian MCMC imputation for multiple distributed lag models: A case study in environmental epidemiology (Brian Caffo, Roger Peng, Francesca Dominici, Thomas A. Louis, and Scott Zeger)
- MCMC for state space models (Paul Fearnhead)
- MCMC in educational research (Roy Levy, Robert J. Mislevy, and John T. Behrens)
- Applications of MCMC in fisheries science (Russell B. Millar)
- Model comparison and simulation for hierarchical models: analyzing rural-urban migration in Thailand (Filiz Garip and Bruce Western)

2,415 citations
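The simplest algorithm in this family, and the starting point of the handbook's introductory chapters, is random-walk Metropolis. A minimal sketch targeting a standard normal (target, step size, and sample count are arbitrary illustrative choices):

```python
import numpy as np

# Minimal random-walk Metropolis sampler targeting N(0, 1). Illustrative only.
rng = np.random.default_rng(42)

def log_target(x):
    return -0.5 * x * x   # unnormalized log density of a standard normal

def metropolis(n_samples, step=1.0):
    x = 0.0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.normal()          # symmetric random-walk proposal
        # Accept with probability min(1, target(prop) / target(x)),
        # done in log space for numerical stability.
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        samples[i] = x                          # rejected proposals repeat x
    return samples

s = metropolis(20_000)
print(s.mean(), s.std())  # should be near 0 and 1
```

Only the *unnormalized* density is needed, which is exactly why MCMC is the workhorse of Bayesian computation: the posterior's normalizing constant never has to be evaluated.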


TL;DR: This work suggests an approach that exploits the profile likelihood to detect structural non-identifiabilities, which manifest in functionally related model parameters, as well as practical non-identifiabilities, which may arise due to a limited amount and quality of experimental data.

Abstract: Motivation: Mathematical description of biological reaction networks by differential equations leads to large models whose parameters are calibrated in order to optimally explain experimental data. Often only parts of the model can be observed directly. Given a model that sufficiently describes the measured data, it is important to infer how well model parameters are determined by the amount and quality of experimental data. This knowledge is essential for further investigation of model predictions. For this reason a major topic in modeling is identifiability analysis.
Results: We suggest an approach that exploits the profile likelihood. It enables the detection of structural non-identifiabilities, which manifest in functionally related model parameters. Furthermore, practical non-identifiabilities, which may arise due to a limited amount and quality of experimental data, are also detected. Finally, confidence intervals can be derived. The results are easy to interpret and can be used for experimental planning and for model reduction.
Availability: An implementation is freely available for MATLAB and the PottersWheel modeling toolbox at http://web.me.com/andreas.raue/profile/software.html.
Contact: andreas.raue@me.com
Supplementary information: Supplementary data are available at Bioinformatics online.

1,150 citations
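The signature of a structural non-identifiability is a flat profile likelihood: fixing one parameter and re-optimizing the rest leaves the fit unchanged. A toy sketch (not the authors' MATLAB/PottersWheel implementation) for the model y = a·b·x + noise, where only the product a·b is determined by the data:

```python
import numpy as np

# Profile-likelihood sketch for a structurally non-identifiable toy model:
# y = a * b * x + noise, so only the product a*b is identifiable.
rng = np.random.default_rng(0)
sigma = 0.05
x = np.linspace(0, 1, 30)
y = 2.0 * x + rng.normal(0, sigma, x.size)      # true a*b = 2

def neg_log_lik(a, b):
    resid = y - a * b * x
    return 0.5 * np.sum(resid ** 2) / sigma ** 2

def profile(a):
    # For fixed a, the optimal b has a closed form (linear least squares),
    # so re-optimization over the nuisance parameter is exact here.
    b_hat = (x @ y) / (a * (x @ x))
    return neg_log_lik(a, b_hat)

a_grid = np.linspace(0.5, 4.0, 8)
prof = np.array([profile(a) for a in a_grid])
print(prof.round(3))  # flat profile: every a is exactly compensated by b
```

Any fixed `a` is compensated by `b`, so the profile never rises; a practically non-identifiable parameter would instead give a shallow but eventually rising profile, and an identifiable one a clear parabola-like minimum.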


Virginia Tech

TL;DR: A comprehensive and systematic development of the basic concepts, principles, and procedures for verification and validation of models and simulations, with emphasis on models described by partial differential and integral equations and the simulations that result from their numerical solution.

Abstract: Advances in scientific computing have made modelling and simulation an important part of the decision-making process in engineering, science, and public policy. This book provides a comprehensive and systematic development of the basic concepts, principles, and procedures for verification and validation of models and simulations. The emphasis is placed on models that are described by partial differential and integral equations and the simulations that result from their numerical solution. The methods described can be applied to a wide range of technical fields, from the physical sciences, engineering and technology and industry, through to environmental regulations and safety, product and plant safety, financial investing, and governmental regulations. This book will be genuinely welcomed by researchers, practitioners, and decision makers in a broad range of fields, who seek to improve the credibility and reliability of simulation results. It will also be appropriate either for university courses or for independent study.

966 citations


TL;DR: This review is an introduction to Bayesian methods in cosmology and astrophysics and to recent results in the field; it presents Bayesian probability theory and its conceptual underpinnings, Bayes' theorem, and the role of priors.

Abstract: The application of Bayesian methods in cosmology and astrophysics has flourished over the past decade, spurred by data sets of increasing size and complexity. In many respects, Bayesian methods have proven to be vastly superior to more traditional statistical tools, offering the advantage of higher efficiency and of a consistent conceptual basis for dealing with the problem of induction in the presence of uncertainty. This trend is likely to continue in the future, when the way we collect, manipulate and analyse observations and compare them with theoretical models will assume an even more central role in cosmology. This review is an introduction to Bayesian methods in cosmology and astrophysics and recent results in the field. I first present Bayesian probability theory and its conceptual underpinnings, Bayes' Theorem and the role of priors. I discuss the problem of parameter inference and its general solution, along with numerical techniques such as Monte Carlo Markov Chain methods. I then review the th...

962 citations