# Bayes estimator

About: Bayes estimator is a research topic. Over its lifetime, 6,481 publications have been published within this topic, receiving 143,436 citations. The topic is also known as: Bayes action.
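
Under squared-error loss, the Bayes estimator (the "Bayes action" named above) is the posterior mean. A minimal sketch with a conjugate Beta-Binomial model; the uniform Beta(1, 1) prior and the sample numbers are illustrative choices, not taken from any paper below:

```python
def beta_binomial_bayes_estimate(successes, trials, a=1.0, b=1.0):
    """Bayes estimator of a binomial proportion under a Beta(a, b) prior.

    Under squared-error loss the Bayes estimator is the posterior mean;
    with a Beta prior and binomial data the posterior is
    Beta(a + successes, b + trials - successes), whose mean is below.
    """
    return (a + successes) / (a + b + trials)

# With a uniform Beta(1, 1) prior and 7 successes in 10 trials, the Bayes
# estimate (1 + 7) / (2 + 10) = 2/3 shrinks the MLE 0.7 toward the prior mean 0.5.
est = beta_binomial_bayes_estimate(7, 10)
```

The shrinkage toward the prior mean is what distinguishes the Bayes estimator from the maximum likelihood estimate, and it vanishes as the number of trials grows.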

##### Papers

01 Jun 1970

TL;DR: In this book, the author presents a survey of probability theory followed by a development of subjective probability, utility, and Bayesian statistical decision theory, including conjugate prior distributions, limiting posterior distributions, estimation and hypothesis testing, and sequential decision problems.

Abstract: Foreword. Preface.
PART ONE. SURVEY OF PROBABILITY THEORY.
Chapter 1. Introduction.
Chapter 2. Experiments, Sample Spaces, and Probability. 2.1 Experiments and Sample Spaces. 2.2 Set Theory. 2.3 Events and Probability. 2.4 Conditional Probability. 2.5 Binomial Coefficients. Exercises.
Chapter 3. Random Variables, Random Vectors, and Distribution Functions. 3.1 Random Variables and Their Distributions. 3.2 Multivariate Distributions. 3.3 Sums and Integrals. 3.4 Marginal Distributions and Independence. 3.5 Vectors and Matrices. 3.6 Expectations, Moments, and Characteristic Functions. 3.7 Transformations of Random Variables. 3.8 Conditional Distributions. Exercises.
Chapter 4. Some Special Univariate Distributions. 4.1 Introduction. 4.2 The Bernoulli Distribution. 4.3 The Binomial Distribution. 4.4 The Poisson Distribution. 4.5 The Negative Binomial Distribution. 4.6 The Hypergeometric Distribution. 4.7 The Normal Distribution. 4.8 The Gamma Distribution. 4.9 The Beta Distribution. 4.10 The Uniform Distribution. 4.11 The Pareto Distribution. 4.12 The t Distribution. 4.13 The F Distribution. Exercises.
Chapter 5. Some Special Multivariate Distributions. 5.1 Introduction. 5.2 The Multinomial Distribution. 5.3 The Dirichlet Distribution. 5.4 The Multivariate Normal Distribution. 5.5 The Wishart Distribution. 5.6 The Multivariate t Distribution. 5.7 The Bilateral Bivariate Pareto Distribution. Exercises.
PART TWO. SUBJECTIVE PROBABILITY AND UTILITY.
Chapter 6. Subjective Probability. 6.1 Introduction. 6.2 Relative Likelihood. 6.3 The Auxiliary Experiment. 6.4 Construction of the Probability Distribution. 6.5 Verification of the Properties of a Probability Distribution. 6.6 Conditional Likelihoods. Exercises.
Chapter 7. Utility. 7.1 Preferences Among Rewards. 7.2 Preferences Among Probability Distributions. 7.3 The Definition of a Utility Function. 7.4 Some Properties of Utility Functions. 7.5 The Utility of Monetary Rewards. 7.6 Convex and Concave Utility Functions. 7.7 The Axiomatic Development of Utility. 7.8 Construction of the Utility Function. 7.9 Verification of the Properties of a Utility Function. 7.10 Extension of the Properties of a Utility Function to the Class ?E. Exercises.
PART THREE. STATISTICAL DECISION PROBLEMS.
Chapter 8. Decision Problems. 8.1 Elements of a Decision Problem. 8.2 Bayes Risk and Bayes Decisions. 8.3 Nonnegative Loss Functions. 8.4 Concavity of the Bayes Risk. 8.5 Randomization and Mixed Decisions. 8.6 Convex Sets. 8.7 Decision Problems in Which Ω and D Are Finite. 8.8 Decision Problems with Observations. 8.9 Construction of Bayes Decision Functions. 8.10 The Cost of Observation. 8.11 Statistical Decision Problems in Which Both Ω and D Contain Two Points. 8.12 Computation of the Posterior Distribution When the Observations Are Made in More Than One Stage. Exercises.
Chapter 9. Conjugate Prior Distributions. 9.1 Sufficient Statistics. 9.2 Conjugate Families of Distributions. 9.3 Construction of the Conjugate Family. 9.4 Conjugate Families for Samples from Various Standard Distributions. 9.5 Conjugate Families for Samples from a Normal Distribution. 9.6 Sampling from a Normal Distribution with Unknown Mean and Unknown Precision. 9.7 Sampling from a Uniform Distribution. 9.8 A Conjugate Family for Multinomial Observations. 9.9 Conjugate Families for Samples from a Multivariate Normal Distribution. 9.10 Multivariate Normal Distributions with Unknown Mean Vector and Unknown Precision Matrix. 9.11 The Marginal Distribution of the Mean Vector. 9.12 The Distribution of a Correlation. 9.13 Precision Matrices Having an Unknown Factor. Exercises.
Chapter 10. Limiting Posterior Distributions. 10.1 Improper Prior Distributions. 10.2 Improper Prior Distributions for Samples from a Normal Distribution. 10.3 Improper Prior Distributions for Samples from a Multivariate Normal Distribution. 10.4 Precise Measurement. 10.5 Convergence of Posterior Distributions. 10.6 Supercontinuity. 10.7 Solutions of the Likelihood Equation. 10.8 Convergence of Supercontinuous Functions. 10.9 Limiting Properties of the Likelihood Function. 10.10 Normal Approximation to the Posterior Distribution. 10.11 Approximation for Vector Parameters. 10.12 Posterior Ratios. Exercises.
Chapter 11. Estimation, Testing Hypotheses, and Linear Statistical Models. 11.1 Estimation. 11.2 Quadratic Loss. 11.3 Loss Proportional to the Absolute Value of the Error. 11.4 Estimation of a Vector. 11.5 Problems of Testing Hypotheses. 11.6 Testing a Simple Hypothesis About the Mean of a Normal Distribution. 11.7 Testing Hypotheses About the Mean of a Normal Distribution. 11.8 Deciding Whether a Parameter Is Smaller or Larger Than a Specific Value. 11.9 Deciding Whether the Mean of a Normal Distribution Is Smaller or Larger Than a Specific Value. 11.10 Linear Models. 11.11 Testing Hypotheses in Linear Models. 11.12 Investigating the Hypothesis That Certain Regression Coefficients Vanish. 11.13 One-Way Analysis of Variance. Exercises.
PART FOUR. SEQUENTIAL DECISIONS.
Chapter 12. Sequential Sampling. 12.1 Gains from Sequential Sampling. 12.2 Sequential Decision Procedures. 12.3 The Risk of a Sequential Decision Procedure. 12.4 Backward Induction. 12.5 Optimal Bounded Sequential Decision Procedures. 12.6 Illustrative Examples. 12.7 Unbounded Sequential Decision Procedures. 12.8 Regular Sequential Decision Procedures. 12.9 Existence of an Optimal Procedure. 12.10 Approximating an Optimal Procedure by Bounded Procedures. 12.11 Regions for Continuing or Terminating Sampling. 12.12 The Functional Equation. 12.13 Approximations and Bounds for the Bayes Risk. 12.14 The Sequential Probability-Ratio Test. 12.15 Characteristics of Sequential Probability-Ratio Tests. 12.16 Approximating the Expected Number of Observations. Exercises.
Chapter 13. Optimal Stopping. 13.1 Introduction. 13.2 The Statistician's Reward. 13.3 Choice of the Utility Function. 13.4 Sampling Without Recall. 13.5 Further Problems of Sampling with Recall and Sampling without Recall. 13.6 Sampling without Recall from a Normal Distribution with Unknown Mean. 13.7 Sampling with Recall from a Normal Distribution with Unknown Mean. 13.8 Existence of Optimal Stopping Rules. 13.9 Existence of Optimal Stopping Rules for Problems of Sampling with Recall and Sampling without Recall. 13.10 Martingales. 13.11 Stopping Rules for Martingales. 13.12 Uniformly Integrable Sequences of Random Variables. 13.13 Martingales Formed from Sums and Products of Random Variables. 13.14 Regular Supermartingales. 13.15 Supermartingales and General Problems of Optimal Stopping. 13.16 Markov Processes. 13.17 Stationary Stopping Rules for Markov Processes. 13.18 Entrance-fee Problems. 13.19 The Functional Equation for a Markov Process. Exercises.
Chapter 14. Sequential Choice of Experiments. 14.1 Introduction. 14.2 Markovian Decision Processes with a Finite Number of Stages. 14.3 Markovian Decision Processes with an Infinite Number of Stages. 14.4 Some Betting Problems. 14.5 Two-Armed-Bandit Problems. 14.6 Two-Armed-Bandit Problems When the Value of One Parameter Is Known. 14.7 Two-Armed-Bandit Problems When the Parameters Are Dependent. 14.8 Inventory Problems. 14.9 Inventory Problems with an Infinite Number of Stages. 14.10 Control Problems. 14.11 Optimal Control When the Process Cannot Be Observed without Error. 14.12 Multidimensional Control Problems. 14.13 Control Problems with Actuation Errors. 14.14 Search Problems. 14.15 Search Problems with Equal Costs. 14.16 Uncertainty Functions and Statistical Decision Problems. 14.17 Sufficient Experiments. 14.18 Examples of Sufficient Experiments. Exercises.
References. Supplementary Bibliography. Name Index. Subject Index.
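
The book's central notion of a Bayes decision (Section 8.2) — the action minimizing posterior expected loss — can be sketched for a finite problem. The accept/reject lot example, its states, and its loss values below are hypothetical, chosen only to make the computation concrete:

```python
def bayes_decision(posterior, loss, decisions):
    """Return the decision minimizing posterior expected loss (Bayes risk).

    posterior: dict mapping each state to its posterior probability.
    loss: function (decision, state) -> nonnegative loss.
    """
    def expected_loss(d):
        return sum(p * loss(d, s) for s, p in posterior.items())
    return min(decisions, key=expected_loss)

# Hypothetical two-state, two-action problem: accept or reject a production
# lot that is either "good" or "bad". Rejecting a good lot costs 1;
# accepting a bad lot costs 5; correct decisions cost nothing.
posterior = {"good": 0.8, "bad": 0.2}
loss = lambda d, s: {("accept", "bad"): 5.0, ("reject", "good"): 1.0}.get((d, s), 0.0)

# Expected losses: accept -> 0.2 * 5 = 1.0, reject -> 0.8 * 1 = 0.8,
# so the Bayes decision here is to reject.
best = bayes_decision(posterior, loss, ["accept", "reject"])
```

Even with 80% posterior probability that the lot is good, the asymmetric losses make rejection the Bayes action; with a sharper posterior the decision flips.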

4,255 citations

TL;DR: This book presents traditional and modern methods for handling missing data, including maximum likelihood estimation via the EM algorithm, multiple imputation, and models for data that are missing not at random.

Abstract: Part 1. An Introduction to Missing Data. 1.1 Introduction. 1.2 Chapter Overview. 1.3 Missing Data Patterns. 1.4 A Conceptual Overview of Missing Data Theory. 1.5 A More Formal Description of Missing Data Theory. 1.6 Why Is the Missing Data Mechanism Important? 1.7 How Plausible Is the Missing at Random Mechanism? 1.8 An Inclusive Analysis Strategy. 1.9 Testing the Missing Completely at Random Mechanism. 1.10 Planned Missing Data Designs. 1.11 The Three-Form Design. 1.12 Planned Missing Data for Longitudinal Designs. 1.13 Conducting Power Analyses for Planned Missing Data Designs. 1.14 Data Analysis Example. 1.15 Summary. 1.16 Recommended Readings. Part 2. Traditional Methods for Dealing with Missing Data. 2.1 Chapter Overview. 2.2 An Overview of Deletion Methods. 2.3 Listwise Deletion. 2.4 Pairwise Deletion. 2.5 An Overview of Single Imputation Techniques. 2.6 Arithmetic Mean Imputation. 2.7 Regression Imputation. 2.8 Stochastic Regression Imputation. 2.9 Hot-Deck Imputation. 2.10 Similar Response Pattern Imputation. 2.11 Averaging the Available Items. 2.12 Last Observation Carried Forward. 2.13 An Illustrative Simulation Study. 2.14 Summary. 2.15 Recommended Readings. Part 3. An Introduction to Maximum Likelihood Estimation. 3.1 Chapter Overview. 3.2 The Univariate Normal Distribution. 3.3 The Sample Likelihood. 3.4 The Log-Likelihood. 3.5 Estimating Unknown Parameters. 3.6 The Role of First Derivatives. 3.7 Estimating Standard Errors. 3.8 Maximum Likelihood Estimation with Multivariate Normal Data. 3.9 A Bivariate Analysis Example. 3.10 Iterative Optimization Algorithms. 3.11 Significance Testing Using the Wald Statistic. 3.12 The Likelihood Ratio Test Statistic. 3.13 Should I Use the Wald Test or the Likelihood Ratio Statistic? 3.14 Data Analysis Example 1. 3.15 Data Analysis Example 2. 3.16 Summary. 3.17 Recommended Readings. Part 4. Maximum Likelihood Missing Data Handling. 4.1 Chapter Overview. 4.2 The Missing Data Log-Likelihood.
4.3 How Do the Incomplete Data Records Improve Estimation? 4.4 An Illustrative Computer Simulation Study. 4.5 Estimating Standard Errors with Missing Data. 4.6 Observed Versus Expected Information. 4.7 A Bivariate Analysis Example. 4.8 An Illustrative Computer Simulation Study. 4.9 An Overview of the EM Algorithm. 4.10 A Detailed Description of the EM Algorithm. 4.11 A Bivariate Analysis Example. 4.12 Extending EM to Multivariate Data. 4.13 Maximum Likelihood Software Options. 4.14 Data Analysis Example 1. 4.15 Data Analysis Example 2. 4.16 Data Analysis Example 3. 4.17 Data Analysis Example 4. 4.18 Data Analysis Example 5. 4.19 Summary. 4.20 Recommended Readings. Part 5. Improving the Accuracy of Maximum Likelihood Analyses. 5.1 Chapter Overview. 5.2 The Rationale for an Inclusive Analysis Strategy. 5.3 An Illustrative Computer Simulation Study. 5.4 Identifying a Set of Auxiliary Variables. 5.5 Incorporating Auxiliary Variables Into a Maximum Likelihood Analysis. 5.6 The Saturated Correlates Model. 5.7 The Impact of Non-Normal Data. 5.8 Robust Standard Errors. 5.9 Bootstrap Standard Errors. 5.10 The Rescaled Likelihood Ratio Test. 5.11 Bootstrapping the Likelihood Ratio Statistic. 5.12 Data Analysis Example 1. 5.13 Data Analysis Example 2. 5.14 Data Analysis Example 3. 5.15 Summary. 5.16 Recommended Readings. Part 6. An Introduction to Bayesian Estimation. 6.1 Chapter Overview. 6.2 What Makes Bayesian Statistics Different? 6.3 A Conceptual Overview of Bayesian Estimation. 6.4 Bayes' Theorem. 6.5 An Analysis Example. 6.6 How Does Bayesian Estimation Apply to Multiple Imputation? 6.7 The Posterior Distribution of the Mean. 6.8 The Posterior Distribution of the Variance. 6.9 The Posterior Distribution of a Covariance Matrix. 6.10 Summary. 6.11 Recommended Readings. Part 7. The Imputation Phase of Multiple Imputation. 7.1 Chapter Overview. 7.2 A Conceptual Description of the Imputation Phase. 7.3 A Bayesian Description of the Imputation Phase. 
7.4 A Bivariate Analysis Example. 7.5 Data Augmentation with Multivariate Data. 7.6 Selecting Variables for Imputation. 7.7 The Meaning of Convergence. 7.8 Convergence Diagnostics. 7.9 Time-Series Plots. 7.10 Autocorrelation Function Plots. 7.11 Assessing Convergence from Alternate Starting Values. 7.12 Convergence Problems. 7.13 Generating the Final Set of Imputations. 7.14 How Many Data Sets Are Needed? 7.15 Summary. 7.16 Recommended Readings. Part 8. The Analysis and Pooling Phases of Multiple Imputation. 8.1 Chapter Overview. 8.2 The Analysis Phase. 8.3 Combining Parameter Estimates in the Pooling Phase. 8.4 Transforming Parameter Estimates Prior to Combining. 8.5 Pooling Standard Errors. 8.6 The Fraction of Missing Information and the Relative Increase in Variance. 8.7 When Is Multiple Imputation Comparable to Maximum Likelihood? 8.8 An Illustrative Computer Simulation Study. 8.9 Significance Testing Using the t Statistic. 8.10 An Overview of Multiparameter Significance Tests. 8.11 Testing Multiple Parameters Using the D1 Statistic. 8.12 Testing Multiple Parameters by Combining Wald Tests. 8.13 Testing Multiple Parameters by Combining Likelihood Ratio Statistics. 8.14 Data Analysis Example 1. 8.15 Data Analysis Example 2. 8.16 Data Analysis Example 3. 8.17 Summary. 8.18 Recommended Readings. Part 9. Practical Issues in Multiple Imputation. 9.1 Chapter Overview. 9.2 Dealing with Convergence Problems. 9.3 Dealing with Non-Normal Data. 9.4 To Round or Not to Round? 9.5 Preserving Interaction Effects. 9.6 Imputing Multiple-Item Questionnaires. 9.7 Alternate Imputation Algorithms. 9.8 Multiple Imputation Software Options. 9.9 Data Analysis Example 1. 9.10 Data Analysis Example 2. 9.11 Summary. 9.12 Recommended Readings. Part 10. Models for Missing Not at Random Data. 10.1 Chapter Overview. 10.2 An Ad Hoc Approach to Dealing with MNAR Data. 10.3 The Theoretical Rationale for MNAR Models. 10.4 The Classic Selection Model. 10.5 Estimating the Selection Model. 
10.6 Limitations of the Selection Model. 10.7 An Illustrative Analysis. 10.8 The Pattern Mixture Model. 10.9 Limitations of the Pattern Mixture Model. 10.10 An Overview of the Longitudinal Growth Model. 10.11 A Longitudinal Selection Model. 10.12 Random Coefficient Selection Models. 10.13 Pattern Mixture Models for Longitudinal Analyses. 10.14 Identification Strategies for Longitudinal Pattern Mixture Models. 10.15 Delta Method Standard Errors. 10.16 Overview of the Data Analysis Examples. 10.17 Data Analysis Example 1. 10.18 Data Analysis Example 2. 10.19 Data Analysis Example 3. 10.20 Data Analysis Example 4. 10.21 Summary. 10.22 Recommended Readings. Part 11. Wrapping Things Up: Some Final Practical Considerations. 11.1 Chapter Overview. 11.2 Maximum Likelihood Software Options. 11.3 Multiple Imputation Software Options. 11.4 Choosing between Maximum Likelihood and Multiple Imputation. 11.5 Reporting the Results from a Missing Data Analysis. 11.6 Final Thoughts. 11.7 Recommended Readings.
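
The EM algorithm outlined in Part 4 can be sketched for a bivariate case of the kind the book uses as a running example: X fully observed, some y-values missing at random. This is a minimal illustration written for this page, not code from the book; the starting values and iteration count are pragmatic choices:

```python
def em_bivariate_normal(x, y, n_iter=200):
    """EM estimates for a bivariate normal (X, Y) when X is fully observed
    and some y-values are missing (None), assuming data are missing at
    random; a minimal sketch of the algorithm outlined in Part 4."""
    n = len(x)
    mu_x = sum(x) / n
    var_x = sum((xi - mu_x) ** 2 for xi in x) / n
    # crude starting values from the complete cases
    cc = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    m = len(cc)
    mu_y = sum(yi for _, yi in cc) / m
    var_y = sum((yi - mu_y) ** 2 for _, yi in cc) / m
    cov = sum((xi - mu_x) * (yi - mu_y) for xi, yi in cc) / m
    for _ in range(n_iter):
        beta = cov / var_x                    # current slope of Y on X
        resid = var_y - cov ** 2 / var_x      # current residual variance
        # E-step: expected sufficient statistics, filling in each missing y
        # with its conditional mean (and y^2 with mean^2 + residual variance)
        ey, ey2, exy = [], [], []
        for xi, yi in zip(x, y):
            if yi is None:
                mu_i = mu_y + beta * (xi - mu_x)
                ey.append(mu_i)
                ey2.append(mu_i ** 2 + resid)
                exy.append(xi * mu_i)
            else:
                ey.append(yi)
                ey2.append(yi ** 2)
                exy.append(xi * yi)
        # M-step: re-estimate the parameters from the completed statistics
        mu_y = sum(ey) / n
        var_y = sum(ey2) / n - mu_y ** 2
        cov = sum(exy) / n - mu_x * mu_y
    return mu_x, mu_y, var_x, var_y, cov
```

The key point the book stresses appears directly in the E-step: incomplete records still contribute information about the mean and covariance through the conditional distribution of the missing value given the observed one.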

3,910 citations

TL;DR: In this article, the authors considered the problem of estimating latent ability using the entire response pattern of free-response items, first in the general case and then in the case where the items are scored in a graded way, especially when the thinking process required for solving each item is assumed to be homogeneous.

Abstract: Estimation of latent ability using the entire response pattern of free-response items is discussed, first in the general case and then in the case where the items are scored in a graded way, especially when the thinking process required for solving each item is assumed to be homogeneous.
The maximum likelihood estimator, the Bayes modal estimator, and the Bayes estimator obtained by using the mean-square error multiplied by the density function of the latent variate as the loss function are taken as our estimators. Sufficient conditions for the existence of a unique maximum likelihood estimator and of a unique Bayes modal estimator are formulated with respect to an individual item rather than with respect to a whole set of items. These conditions are especially useful when we are free to choose optimal items for a particular examinee from an item library in which a sufficient number of items are stored under reliable quality control.
Advantages of the present methods are investigated by comparing them with those which make use of conventional dichotomous items or test scores, theoretically as well as empirically, in terms of the amounts of information, the standard errors of estimators, and the mean-square errors of estimators. The utility of the Bayes modal estimator as a computational compromise for the Bayes estimator is also discussed and observed. The relationship between the formula for the item characteristic function and the philosophy of scoring is observed with respect to dichotomous items.
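
The contrast between the Bayes modal estimator (the posterior mode) and the Bayes estimator (here, the posterior mean) of latent ability can be sketched numerically. The sketch below uses hypothetical two-parameter logistic items and a standard normal prior rather than Samejima's graded-response items, and evaluates the posterior on a grid instead of solving the likelihood equations:

```python
import math

def logistic_prob(theta, a, b):
    """P(correct | theta) for a 2PL item with discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ability_estimates(responses, items, lo=-6.0, hi=6.0, n=2001):
    """Bayes modal (posterior mode) and Bayes (posterior mean) estimates of
    latent ability under a standard normal prior, computed on a grid.
    responses: 0/1 scores; items: list of (a, b) pairs, one per response."""
    step = (hi - lo) / (n - 1)
    grid = [lo + i * step for i in range(n)]
    post = []
    for t in grid:
        lp = -0.5 * t * t                     # log N(0, 1) prior, up to a constant
        for u, (a, b) in zip(responses, items):
            p = logistic_prob(t, a, b)
            lp += math.log(p if u else 1.0 - p)
        post.append(math.exp(lp))
    z = sum(post)
    post = [w / z for w in post]              # normalize on the grid
    modal_est = grid[max(range(n), key=post.__getitem__)]
    mean_est = sum(t * w for t, w in zip(grid, post))
    return modal_est, mean_est
```

With one correct and one incorrect response on identical items the posterior is symmetric about the item difficulty, so both estimates coincide there; with asymmetric response patterns the two estimators generally differ, which is why the abstract treats the modal estimator as a computational compromise for the Bayes estimator.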

2,849 citations

TL;DR: A new, fast, approximate likelihood-ratio test (aLRT) for branches is presented here as a competitive alternative to nonparametric bootstrap and Bayesian estimation of branch support and is shown to be accurate, powerful, and robust to certain violations of model assumptions.

Abstract: We revisit statistical tests for branches of evolutionary trees reconstructed upon molecular data. A new, fast, approximate likelihood-ratio test (aLRT) for branches is presented here as a competitive alternative to nonparametric bootstrap and Bayesian estimation of branch support. The aLRT is based on the idea of the conventional LRT, with the null hypothesis corresponding to the assumption that the inferred branch has length 0. We show that the LRT statistic is asymptotically distributed as a maximum of three random variables drawn from the ½χ₀² + ½χ₁² mixture distribution. The new aLRT of interior branch uses this distribution for significance testing, but the test statistic is approximated in a slightly conservative but practical way as 2(ℓ₁ − ℓ₂), i.e., double the difference between the maximum log-likelihood values corresponding to the best tree and the second-best topological arrangement around the branch of interest. Such a test is fast because the log-likelihood value ℓ₂ is computed by optimizing only over the branch of interest and the four adjacent branches, whereas the other parameters are fixed at their optimal values corresponding to the best ML tree. The performance of the new test was studied on simulated 4-, 12-, and 100-taxon data sets with sequences of different lengths. The aLRT is shown to be accurate, powerful, and robust to certain violations of model assumptions. The aLRT is implemented within the algorithm used by the recent fast maximum likelihood tree estimation program PHYML (Guindon and Gascuel, 2003).
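
A rough sketch of the significance computation the abstract describes, assuming the null distribution is the maximum of three draws from the ½χ₀² + ½χ₁² mixture; PHYML's actual implementation may differ in details:

```python
import math

def alrt_pvalue(l1, l2):
    """Approximate p-value for the aLRT statistic 2(l1 - l2).

    Assumes the statistic is compared to the maximum of three i.i.d.
    variables from the 1/2*chi2_0 + 1/2*chi2_1 mixture, as described in
    the abstract. l1, l2 are log-likelihoods of the best and second-best
    topological arrangements around the branch.
    """
    stat = 2.0 * (l1 - l2)
    if stat <= 0.0:
        return 1.0
    # Mixture survival function: P(Z > x) = 0.5 * P(chi2_1 > x),
    # and P(chi2_1 > x) = erfc(sqrt(x / 2)).
    surv = 0.5 * math.erfc(math.sqrt(stat / 2.0))
    # Survival of the maximum of three i.i.d. draws.
    return 1.0 - (1.0 - surv) ** 3
```

The point mass at zero (the χ₀² component) reflects the boundary null hypothesis of a zero-length branch, which is why half the mass of each mixture component sits at the origin.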

2,135 citations

TL;DR: The authors describe approximations to the posterior means and variances of positive functions of a real or vector-valued parameter, and to the marginal posterior densities of arbitrary parameters; the approximations can also be used to compute approximate predictive densities.

Abstract: This article describes approximations to the posterior means and variances of positive functions of a real or vector-valued parameter, and to the marginal posterior densities of arbitrary (i.e., not necessarily positive) parameters. These approximations can also be used to compute approximate predictive densities. To apply the proposed method, one only needs to be able to maximize slightly modified likelihood functions and to evaluate the observed information at the maxima. Nevertheless, the resulting approximations are generally as accurate as, and in some cases more accurate than, approximations based on third-order expansions of the likelihood that require the evaluation of third derivatives. The approximate marginal posterior densities behave very much like saddle-point approximations for sampling distributions. The principal regularity condition required is that the likelihood times the prior be unimodal.
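
The flavor of the method can be sketched in one dimension: with h = log(likelihood × prior) and h* = h + log g for a positive function g, two Laplace approximations give E[g(θ) | data] ≈ (σ*/σ) exp(h*(θ*) − h(θ̂)), where θ̂ and θ* maximize h and h* and σ, σ* come from the curvatures at those maxima. The grid-based optimizer and numerical second differences below are illustrative simplifications, not the paper's procedure:

```python
import math

def laplace_posterior_mean(log_post, g, lo, hi, n=20001):
    """Approximate E[g(theta) | data] for a positive function g via two
    Laplace approximations: E[g] ~= (sigma*/sigma) * exp(h*(t*) - h(t_hat)),
    with h = log(likelihood x prior) and h* = h + log g.

    log_post: unnormalized log posterior h; g: positive function of theta;
    [lo, hi]: interval assumed to contain both maxima in its interior.
    """
    step = (hi - lo) / (n - 1)

    def maximize(f):
        grid = [lo + i * step for i in range(n)]
        i = max(range(1, n - 1), key=lambda j: f(grid[j]))
        t = grid[i]
        # curvature from a central second difference; sigma = (-h'')^(-1/2)
        d2 = (f(t + step) - 2.0 * f(t) + f(t - step)) / step ** 2
        return t, f(t), math.sqrt(-1.0 / d2)

    hstar = lambda t: log_post(t) + math.log(g(t))
    t_hat, h_hat, sigma = maximize(log_post)
    t_star, h_star, sigma_star = maximize(hstar)
    return (sigma_star / sigma) * math.exp(h_star - h_hat)
```

As a sanity check on the sketch: for an exactly Gaussian posterior N(1, 0.5²) and g(θ) = exp(θ), both h and h* are quadratic, so the approximation reproduces the exact value exp(1 + 0.5²/2) = exp(1.125), matching the paper's observation that only maximization and observed information at the maxima are needed.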

1,927 citations