
Showing papers on "Estimator published in 1991"


Journal ArticleDOI
TL;DR: Using these results, data-dependent automatic bandwidth/lag truncation parameters are introduced, and asymptotically optimal kernel/weighting schemes and bandwidth/lag truncation parameters are obtained.
Abstract: This paper is concerned with the estimation of covariance matrices in the presence of heteroskedasticity and autocorrelation of unknown forms. Currently available estimators that are designed for this context depend upon the choice of a lag truncation parameter and a weighting scheme. Results in the literature provide a condition on the growth rate of the lag truncation parameter as T \rightarrow \infty that is sufficient for consistency. No results are available, however, regarding the choice of lag truncation parameter for a fixed sample size, regarding data-dependent automatic lag truncation parameters, or regarding the choice of weighting scheme. In consequence, available estimators are not entirely operational and the relative merits of the estimators are unknown. This paper addresses these problems. The asymptotic truncated mean squared errors of estimators in a given class are determined and compared. Asymptotically optimal kernel/weighting scheme and bandwidth/lag truncation parameters are obtained using an asymptotic truncated mean squared error criterion. Using these results, data-dependent automatic bandwidth/lag truncation parameters are introduced. The finite sample properties of the estimators are analyzed via Monte Carlo simulation.

4,219 citations
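The kernel/weighting-scheme estimators studied above can be illustrated with the Bartlett (Newey-West-type) kernel for a scalar series. This sketch takes the bandwidth as given by hand; the paper's contribution, the data-dependent automatic bandwidth choice, is not implemented here, and the function name is illustrative:

```python
def bartlett_hac_variance(x, bandwidth):
    """Long-run variance of a scalar series using the Bartlett kernel:
    gamma_0 + 2 * sum_{j=1}^{m} (1 - j/(m+1)) * gamma_j,
    where gamma_j are sample autocovariances of the demeaned series.
    Illustrative sketch only; the automatic bandwidth selection of the
    paper is not implemented."""
    n = len(x)
    mean = sum(x) / n
    z = [v - mean for v in x]

    def gamma(j):
        return sum(z[t] * z[t - j] for t in range(j, n)) / n

    total = gamma(0)
    for j in range(1, bandwidth + 1):
        total += 2.0 * (1.0 - j / (bandwidth + 1)) * gamma(j)
    return total

# Hand-checkable example: x = [1, -1, 1, -1] has gamma_0 = 1 and
# gamma_1 = -0.75, so with m = 1 the estimate is 1 + 2*0.5*(-0.75) = 0.25.
print(bartlett_hac_variance([1.0, -1.0, 1.0, -1.0], 1))
```

The Bartlett weights guarantee a nonnegative estimate; other kernels in the class trade this off against asymptotic mean squared error.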


Journal ArticleDOI
TL;DR: A case-control design involving only cases may be used when brief exposure causes a transient change in risk of a rare acute-onset disease and self-matching of cases eliminates the threat of control-selection bias and increases efficiency.
Abstract: A case-control design involving only cases may be used when brief exposure causes a transient change in risk of a rare acute-onset disease. The design resembles a retrospective nonrandomized crossover study but differs in having only a sample of the base population-time. The average incidence rate ratio for a hypothesized effect period following the exposure is estimable using the Mantel-Haenszel estimator. The duration of the effect period is assumed to be that which maximizes the rate ratio estimate. Self-matching of cases eliminates the threat of control-selection bias and increases efficiency. Pilot data from a study of myocardial infarction onset illustrate the control of within-individual confounding due to temporal association of exposures.

2,042 citations
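The Mantel-Haenszel rate-ratio estimator for person-time data used in this design can be sketched as follows (the formula is assumed in its standard stratified form; the function name and stratum layout are illustrative):

```python
def mantel_haenszel_rate_ratio(strata):
    """Mantel-Haenszel rate ratio for stratified person-time data.
    strata: list of (cases_exposed, persontime_exposed,
                     cases_unexposed, persontime_unexposed).
    Standard form: sum_i a_i*T0_i/T_i divided by sum_i c_i*T1_i/T_i."""
    num = den = 0.0
    for a, t1, c, t0 in strata:
        t = t1 + t0
        num += a * t0 / t
        den += c * t1 / t
    return num / den

# One stratum: (4 cases / 10 person-units exposed) vs
# (2 cases / 20 person-units unexposed) gives 0.4 / 0.1 = 4.0.
print(mantel_haenszel_rate_ratio([(4, 10.0, 2, 20.0)]))
```

In the case-crossover design each case contributes its own stratum, which is what makes the estimator self-matched.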


Book
13 Mar 1991
TL;DR: This book develops the counting process and local square integrable martingale framework for censored survival data, covering the martingale central limit theorem and the large-sample theory of the Kaplan-Meier estimator, weighted logrank statistics, and proportional hazards regression.
Abstract: Preface. 0. The Applied Setting. 1. The Counting Process and Martingale Framework. 2. Local Square Integrable Martingales. 3. Finite Sample Moments and Large Sample Consistency of Tests and Estimators. 4. Censored Data Regression Models and Their Application. 5. Martingale Central Limit Theorem. 6. Large Sample Results of the Kaplan-Meier Estimator. 7. Weighted Logrank Statistics. 8. Distribution Theory for Proportional Hazards Regression. Appendix A: Some Results from Stieltjes Integration and Probability Theory. Appendix B: An Introduction to Weak Convergence. Appendix C: The Martingale Central Limit Theorem: Some Preliminaries. Appendix D: Data. Appendix E: Exercises. Bibliography. Notation. Author Index. Subject Index.

1,997 citations


Book
01 Jan 1991
TL;DR: This book introduces the rudiments of linear algebra and multivariate normal theory, then develops sufficiency and MVUB estimators, Neyman-Pearson and Bayes detectors, and maximum likelihood, Bayes, minimum mean-squared error, and least-squares estimators.
Abstract: 1. Introduction. 2. Rudiments of Linear Algebra and Multivariate Normal Theory. 3. Sufficiency and MVUB Estimators. 4. Neyman-Pearson Detectors. 5. Bayes Detectors. 6. Maximum Likelihood Estimators. 7. Bayes Estimators. 8. Minimum Mean-Squared Error Estimators. 9. Least Squares. 10. Linear Prediction. 11. Modal Analysis.

1,670 citations


Journal ArticleDOI
TL;DR: In this article, an asymptotic optimality theory for the estimation of cointegration regressions is developed, which applies to a reasonably wide class of estimators without making any specific assumptions about the probability distribution or short-run dynamics of the data-generating process.
Abstract: An asymptotic optimality theory for the estimation of cointegration regressions is developed in this paper. The theory applies to a reasonably wide class of estimators without making any specific assumptions about the probability distribution or short-run dynamics of the data-generating process. Due to the nonstandard nature of the estimation problem, the conventional minimum variance criterion does not provide a convenient measure of asymptotic efficiency. An alternative criterion, based on the concentration or peakedness of the limiting distribution of an estimator, is therefore adopted. The limiting distribution of estimators with maximum asymptotic efficiency is characterized in the paper and used to discuss the optimality of some known estimators. A new asymptotically efficient estimator is also introduced. This estimator is obtained from the ordinary least-squares estimator by a time domain correction which is nonparametric in the sense that no assumption of a finite parameter model is required. The estimator can be computed with least squares without any initial estimations.

1,151 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the difficulty of deconvolution depends on the smoothness of error distributions: the smoother, the harder it is to estimate the density of a random variable.
Abstract: Deconvolution problems arise in a variety of situations in statistics. An interesting problem is to estimate the density $f$ of a random variable $X$ based on $n$ i.i.d. observations from $Y = X + \varepsilon$, where $\varepsilon$ is a measurement error with a known distribution. In this paper, the effect of errors in variables of nonparametric deconvolution is examined. Insights are gained by showing that the difficulty of deconvolution depends on the smoothness of error distributions: the smoother, the harder. In fact, there are two types of optimal rates of convergence according to whether the error distribution is ordinary smooth or supersmooth. It is shown that optimal rates of convergence can be achieved by deconvolution kernel density estimators.

945 citations
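The deconvolution kernel density estimator whose optimal rates are established above can be sketched in pure Python for the ordinary-smooth case. This assumes Laplace(0, sigma) measurement error, whose characteristic function is 1/(1 + sigma^2 t^2), and a kernel specified through its compactly supported Fourier transform (1 - t^2)^3 on [-1, 1]; all names, the crude trapezoidal integration, and the tuning choices are illustrative assumptions, not the paper's construction:

```python
import cmath
import math

def deconv_kde(y, x_grid, h, sigma, m=400):
    """Deconvolution kernel density estimate of f_X at points x_grid,
    from observations Y = X + eps with Laplace(0, sigma) error.
    f_hat(x) = (1/pi) * Int_0^{1/h} Re[exp(-itx) phi_Y_hat(t)]
                          * phi_K(ht) / phi_eps(t) dt,
    where dividing by phi_eps(t) = 1/(1 + sigma^2 t^2) is just a
    multiplication. Trapezoidal rule with m nodes (illustrative)."""
    n = len(y)
    dt = (1.0 / h) / m
    ts = [k * dt for k in range(m + 1)]
    # Empirical characteristic function of the observed (noisy) data.
    phi_y = [sum(cmath.exp(1j * t * yj) for yj in y) / n for t in ts]
    # Kernel FT times the reciprocal error characteristic function.
    weights = [(1.0 - (h * t) ** 2) ** 3 * (1.0 + (sigma * t) ** 2)
               for t in ts]
    est = []
    for x in x_grid:
        total = 0.0
        for k, t in enumerate(ts):
            integrand = (cmath.exp(-1j * t * x) * phi_y[k]).real * weights[k]
            w = 0.5 if k in (0, m) else 1.0
            total += w * integrand * dt
        est.append(total / math.pi)
    return est
```

For supersmooth errors (e.g. Gaussian), the 1/phi_eps factor grows exponentially in t, which is exactly why the paper's rates are so much slower in that case.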


Proceedings ArticleDOI
01 Aug 1991
TL;DR: A control-theoretic approach to reactive flow control in networks that do not reserve bandwidth is presented, and a technique to extract and use additional information from the system to develop a continuous-time system model is presented.
Abstract: This paper presents a control-theoretic approach to reactive flow control in networks that do not reserve bandwidth. We assume a round-robin-like queue service discipline in the output queues of the network's switches, and propose deterministic and stochastic models for a single conversation in a network of such switches. These models motivate the Packet-Pair rate probing technique, and a provably stable rate-based flow control scheme. A Kalman state estimator is derived from discrete-time state space analysis, but there are difficulties in using the estimator in practice. These difficulties are overcome by a novel estimation scheme based on fuzzy logic. We then present a technique to extract and use additional information from the system to develop a continuous-time system model. This is used to design a variant of the control law that is also provably stable, and, in addition, takes control action as rapidly as possible. Finally, practical issues such as correcting parameter drift and coordination with window flow control are described.

790 citations
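The Kalman state estimator mentioned above can be illustrated, in a much-reduced setting, by a one-dimensional filter for a random-walk state observed in noise. This toy model is an assumption for illustration only, not the paper's packet-pair network model:

```python
def scalar_kalman(measurements, q, r, x0=0.0, p0=1.0):
    """One-dimensional Kalman filter: random-walk state with process
    variance q, observed with measurement variance r. A toy stand-in
    for the paper's state estimator."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: variance grows by q
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update toward the measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Repeated measurements of 5 pull the estimate from 0 toward 5.
print(scalar_kalman([5.0] * 20, q=0.01, r=1.0)[-1])
```

The difficulty the paper reports in practice is that q and r are unknown and time-varying in a network, which motivates its fuzzy-logic estimation scheme.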


Journal ArticleDOI
TL;DR: It is shown that in the state-feedback case one can come arbitrarily close to the optimal (even over full information controllers) mixed H/sub 2//H/sub infinity / performance measure using constant gain state feedback.
Abstract: The problem of finding an internally stabilizing controller that minimizes a mixed H/sub 2//H/sub infinity / performance measure subject to an inequality constraint on the H/sub infinity / norm of another closed-loop transfer function is considered. This problem can be interpreted and motivated as a problem of optimal nominal performance subject to a robust stability constraint. Both the state-feedback and output-feedback problems are considered. It is shown that in the state-feedback case one can come arbitrarily close to the optimal (even over full information controllers) mixed H/sub 2//H/sub infinity / performance measure using constant gain state feedback. Moreover, the state-feedback problem can be converted into a convex optimization problem over a bounded subset of n*n and n*q real matrices, where n and q are, respectively, the state and input dimensions. Using the central H/sub infinity / estimator, it is shown that the output feedback problem can be reduced to a state-feedback problem. In this case, the dimension of the resulting controller does not exceed the dimension of the generalized plant.

762 citations


Journal ArticleDOI
David Pollard1
TL;DR: The LAD estimator of the vector parameter in a linear regression is defined by minimizing the sum of the absolute values of the residuals; this paper gives a direct proof of its asymptotic normality and shows that it converges at a 1/n rate for a first-order autoregression with Cauchy errors.
Abstract: The LAD estimator of the vector parameter in a linear regression is defined by minimizing the sum of the absolute values of the residuals. This paper provides a direct proof of asymptotic normality for the LAD estimator. The main theorem assumes deterministic carriers. The extension to random carriers includes the case of autoregressions whose error terms have finite second moments. For a first-order autoregression with Cauchy errors the LAD estimator is shown to converge at a 1/n rate.

676 citations
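The LAD criterion itself is easy to illustrate. The sketch below fits a one-parameter regression through the origin by ternary search on the convex objective sum |y - b*x|; this toy scalar setup and the search bounds are illustrative assumptions, not the paper's vector-parameter machinery:

```python
def lad_slope(x, y, lo=-100.0, hi=100.0, iters=200):
    """Least-absolute-deviations slope for y ~ b*x (no intercept).
    The objective sum |y_i - b x_i| is convex and piecewise linear in b,
    so ternary search over a bracketing interval converges to a
    minimizer. Toy illustration of the LAD criterion."""
    def loss(b):
        return sum(abs(yi - b * xi) for xi, yi in zip(x, y))

    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if loss(m1) <= loss(m2):
            hi = m2          # minimum lies in [lo, m2]
        else:
            lo = m1          # minimum lies in [m1, hi]
    return 0.5 * (lo + hi)

# Noiseless data y = 2x recovers slope 2.
print(lad_slope([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]))
```

Unlike least squares, this objective is not differentiable at the data points, which is why the paper's asymptotic normality proof requires a direct argument.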


Journal ArticleDOI
TL;DR: In this article, a new estimator of the price of a share based on the high, low, and closing prices in a day's trading is proposed, which has the merit of being unbiased regardless of the drift.
Abstract: The log of the price of a share is commonly modelled as a Brownian motion with drift, $\sigma B_t + ct$, where the constants $c$ and $\sigma$ are unknown. In order to use the Black-Scholes option pricing formula, one needs an estimate of $\sigma$, though not of $c$. In this paper, we propose a new estimator of $\sigma$ based on the high, low, and closing prices in a day's trading. This estimator has the merit of being unbiased whatever the drift $c$. In common with other estimators of $\sigma$, the approximation of the true high and low values of the drifting Brownian motion by the high and low values of a random walk introduces error, often quite a serious error. We shall show how a simple correction can overcome this error almost completely.

582 citations
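A drift-independent high/low/close variance estimate of the kind proposed here can be sketched as follows. The formula below is the standard Rogers-Satchell form, u(u - c) + d(d - c) with u, d, c the log high, low, and close relative to the open; treat it as an illustration rather than a transcription of the paper:

```python
import math

def rs_daily_variance(high, low, close, open_):
    """Drift-independent daily variance estimate from high, low, and
    closing prices, in the spirit of the Rogers-Satchell estimator:
        u(u - c) + d(d - c),
    where u = ln(H/O), d = ln(L/O), c = ln(C/O)."""
    u = math.log(high / open_)
    d = math.log(low / open_)
    c = math.log(close / open_)
    return u * (u - c) + d * (d - c)

# With u = 0.02, d = -0.01, c = 0.005 the estimate is
# 0.02*0.015 + (-0.01)*(-0.015) = 0.00045.
print(rs_daily_variance(math.exp(0.02), math.exp(-0.01),
                        math.exp(0.005), 1.0))
```

Averaging the daily estimates over many days gives a variance-rate estimate; the discreteness correction discussed in the abstract addresses the fact that observed highs and lows understate the true continuous extremes.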


Journal ArticleDOI
TL;DR: An index of resolvability is proved to bound the rate of convergence of minimum complexity density estimators as well as the information-theoretic redundancy of the corresponding total description length to demonstrate the statistical effectiveness of the minimum description-length principle as a method of inference.
Abstract: The authors introduce an index of resolvability that is proved to bound the rate of convergence of minimum complexity density estimators as well as the information-theoretic redundancy of the corresponding total description length. The results on the index of resolvability demonstrate the statistical effectiveness of the minimum description-length principle as a method of inference. The minimum complexity estimator converges to the true density nearly as fast as an estimator based on prior knowledge of the true subclass of densities. Interpretations and basic properties of minimum complexity estimators are discussed. Some regression and classification problems that can be examined from the minimum description-length framework are considered.

Journal ArticleDOI
TL;DR: In this paper, the authors present a review of recent developments in nonparametric density estimation and include topics that have been omitted from review articles and books on the subject, such as the histogram, kernel estimators, and orthogonal series estimators.
Abstract: Advances in computation and the fast and cheap computational facilities now available to statisticians have had a significant impact upon statistical research, and especially the development of nonparametric data analysis procedures. In particular, theoretical and applied research on nonparametric density estimation has had a noticeable influence on related topics, such as nonparametric regression, nonparametric discrimination, and nonparametric pattern recognition. This article reviews recent developments in nonparametric density estimation and includes topics that have been omitted from review articles and books on the subject. The early density estimation methods, such as the histogram, kernel estimators, and orthogonal series estimators are still very popular, and recent research on them is described. Different types of restricted maximum likelihood density estimators, including order-restricted estimators, maximum penalized likelihood estimators, and sieve estimators, are discussed, where restrictions are imposed upon the class of densities or on the form of the likelihood function. Nonparametric density estimators that are data-adaptive and lead to locally smoothed estimators are also discussed; these include variable partition histograms, estimators based on statistically equivalent blocks, nearest-neighbor estimators, variable kernel estimators, and adaptive kernel estimators. For the multivariate case, extensions of methods of univariate density estimation are usually straightforward but can be computationally expensive. A method of multivariate density estimation that did not spring from a univariate generalization is described, namely, projection pursuit density estimation, in which both dimensionality reduction and density estimation can be pursued at the same time. 
Finally, some areas of related research are mentioned, such as nonparametric estimation of functionals of a density, robust parametric estimation, semiparametric models, and density estimation for censored and incomplete data, directional and spherical data, and density estimation for dependent sequences of observations.
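The global window width kernel estimator that anchors this review can be written in a few lines. This is a Gaussian-kernel sketch with the bandwidth h taken as given rather than data-selected; the function name is illustrative:

```python
import math

def gaussian_kde(data, x, h):
    """Fixed-bandwidth (global window width) Gaussian kernel density
    estimate at a point x:
    f_hat(x) = (1/(n*h)) * sum_i phi((x - x_i)/h),
    with phi the standard normal density."""
    n = len(data)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
               for xi in data) / (n * h * math.sqrt(2.0 * math.pi))

# With a single datum at 0 and h = 1, this is the standard normal
# density at 0, about 0.3989.
print(gaussian_kde([0.0], 0.0, 1.0))
```

The adaptive methods surveyed in the review (variable kernel, nearest-neighbor, transformation-based) all amount to letting h vary with location or with the data.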

Journal ArticleDOI
TL;DR: In this paper, a semiparametric autoregressive conditional heteroscedasticity (ARCH) model is proposed, which has conditional first and second moments given by auto-gressive moving average and ARCH parametric formulations but a conditional density that is assumed only to be sufficiently smooth to be approximated by a nonparametric density estimator.
Abstract: This article introduces a semiparametric autoregressive conditional heteroscedasticity (ARCH) model that has conditional first and second moments given by autoregressive moving average and ARCH parametric formulations but a conditional density that is assumed only to be sufficiently smooth to be approximated by a nonparametric density estimator. For several particular conditional densities, the relative efficiency of the quasi-maximum likelihood estimator is compared with maximum likelihood under correct specification. These potential efficiency gains for a fully adaptive procedure are compared in a Monte Carlo experiment with the observed gains from using the proposed semiparametric procedure, and it is found that the estimator captures a substantial proportion of the potential. The estimator is applied to daily stock returns from small firms that are found to exhibit conditional skewness and kurtosis and to the British pound to dollar exchange rate.

Book
01 Nov 1991
TL;DR: Time Series: The Asymptotic Distribution of Auto-Correlation Coefficients On a Test of Serial Correlation for Regression Models with Lagged Dependent Variables
Abstract: Time Series: The Asymptotic Distribution of Auto-Correlation Coefficients. On a Test of Serial Correlation for Regression Models with Lagged Dependent Variables. Directional Data Analysis: Optimal Robust Estimators for the Concentration Parameter of a Von Mises-Fisher Distribution. On Watson's ANOVA for Directions. Compositional and Shape Data Analysis: Spherical Triangles Revisited. New Directions in Shape Analysis. Technical Problems in Inference: Tests of Fit for Logistic Models. A Class of Nearly Exact Saddlepoint Approximations. Spatial Statistics: A Comparison of Variogram Estimation with Covariogram Estimation. Statistics and Genetics: Stochastic Comparisons Between Means and Medians for Random Variables. Case Studies on Issues of Public Policy: Parameter Estimation in the Operational Modelling of HIV/AIDS.

Journal ArticleDOI
TL;DR: Finite-sample replacement breakdown points are derived for different types of estimators of multivariate location and covariance matrices in this paper, and the breakdown point is related to a measure of performance based on large deviations probabilities.
Abstract: Finite-sample replacement breakdown points are derived for different types of estimators of multivariate location and covariance matrices. The role of various equivariance properties is illustrated. The breakdown point is related to a measure of performance based on large deviations probabilities. Finally, we show that one-step reweighting preserves the breakdown point.

Journal ArticleDOI
TL;DR: In this paper, a formal econometric treatment of the estimation of the parameters of a fully specified stochastic equilibrium model is proposed; it yields an estimator that is shown to have an asymptotically normal distribution.

Journal ArticleDOI
01 May 1991-Genetics
TL;DR: The maximum likelihood estimator is proposed, which is relatively insensitive to violation of the assumptions made during analysis and is the method of choice for estimating the genetic length of a genome from counts of recombinants and nonrecombinants studied from a backcross linkage experiment.
Abstract: The genetic length of a genome, in units of Morgans or centimorgans, is a fundamental characteristic of an organism. We propose a maximum likelihood method for estimating this quantity from counts of recombinants and nonrecombinants between marker locus pairs studied from a backcross linkage experiment, assuming no interference and equal chromosome lengths. This method allows the calculation of the standard deviation of the estimate and a confidence interval containing the estimate. Computer simulations have been performed to evaluate and compare the accuracy of the maximum likelihood method and a previously suggested method-of-moments estimator. Specifically, we have investigated the effects of the number of meioses, the number of marker loci, and variation in the genetic lengths of individual chromosomes on the estimate. The effect of missing data, obtained when the results of two separate linkage studies with a fraction of marker loci in common are pooled, is also investigated. The maximum likelihood estimator, in contrast to the method-of-moments estimator, is relatively insensitive to violation of the assumptions made during analysis and is the method of choice. The various methods are compared by application to partial linkage data from Xiphophorus.

Journal ArticleDOI
TL;DR: In this paper, the probability density functions for one and three-dimensional fields in a mode-stirred chamber were derived and verified with chi-square goodness-of-fit tests on experimental data.
Abstract: The probability density functions for one- and three-dimensional fields in a mode-stirred chamber are derived and verified with chi-square goodness-of-fit tests on experimental data. Each of the three components of the field in the chamber is Rayleigh distributed, which is the same as chi distributed with two degrees of freedom. Each component of the power density is then exponentially distributed. Experimental data confirm these distributions, though unexpected high values, or outliers, were consistently found. Maximum-likelihood estimators of the functions' parameters are derived, and their accuracy is determined as a function of the amount of data. These results are applied to estimating chamber Q. The amount of data required for a given accuracy is determined.
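The maximum-likelihood estimator for the Rayleigh scale parameter used here has a closed form, sigma_hat^2 = sum(x_i^2) / (2n), which follows from setting the derivative of the log-likelihood of the Rayleigh density x/sigma^2 * exp(-x^2/(2 sigma^2)) to zero (the function name is illustrative):

```python
import math

def rayleigh_mle_sigma(samples):
    """Maximum-likelihood estimate of the Rayleigh scale parameter:
    sigma_hat = sqrt( sum(x_i^2) / (2n) )."""
    n = len(samples)
    return math.sqrt(sum(x * x for x in samples) / (2.0 * n))

# For samples [1, 1]: sqrt(2 / 4) = sqrt(0.5), about 0.7071.
print(rayleigh_mle_sigma([1.0, 1.0]))
```

The accuracy-versus-sample-size question the paper addresses comes from the sampling variability of this estimator, which shrinks at the usual 1/sqrt(n) rate.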

Journal ArticleDOI
TL;DR: In this article, the authors established the asymptotic normality of series estimators for nonparametric regression models, such as additive interactive regression, semiparametric regression, and semi-parametric index regression.
Abstract: This paper establishes the asymptotic normality of series estimators for nonparametric regression models. Gallant's Fourier flexible form estimators, trigonometric series estimators, and polynomial series estimators are prime examples of the estimators covered by the results. The results apply to a wide variety of estimates in the regression model under consideration, including derivatives and integrals of the regression function. The errors in the model may be homoskedastic or heteroskedastic. The paper also considers series estimators for additive interactive regression, semiparametric regression, and semiparametric index regression models, and shows them to be consistent and asymptotically normal. Copyright 1991 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this article, the authors propose a large sample theory for estimating the parameters of a class of semi-parametric failure-time models, the rank-preserving structural failure time models, using a set of rank estimators.
Abstract: We propose correcting for non-compliance in randomized trials by estimating the parameters of a class of semi-parametric failure time models, the rank preserving structural failure time models, using a class of rank estimators. These models are the structural or strong version of the “accelerated failure time model with time-dependent covariates” of Cox and Oakes (1984). In this paper we develop a large sample theory for these estimators, derive the optimal estimator within this class, and briefly consider the construction of “partially adaptive” estimators whose efficiency may approach that of the optimal estimator. We show that in the absence of censoring the optimal estimator attains the semiparametric efficiency bound for the model.

Journal ArticleDOI
TL;DR: The localization of multiple near-field sources in a spatially white Gaussian noise environment is studied and it is shown that in the single source situation, the covariances of both the 2-D MUSIC estimator and the maximum likelihood estimator (MLE) approach the Cramer-Rao lower bound as the number of snapshots increases to infinity.
Abstract: The localization of multiple near-field sources in a spatially white Gaussian noise environment is studied. A modified two-dimensional (2-D) version of the multiple signal classification (MUSIC) algorithm is used to localize the signal sources in range and bearing. A global-optimum maximum likelihood searching approach to localize these sources is discussed. It is shown that in the single source situation, the covariances of both the 2-D MUSIC estimator and the maximum likelihood estimator (MLE) approach the Cramer-Rao lower bound as the number of snapshots increases to infinity. In the multiple source situation, it is observed that for a high signal-to-noise ratio (SNR) and a large number of snapshots, the root mean square errors (RMSEs) of both localization techniques are relatively small. However, for low SNR and/or a small number of snapshots, the performance of the MLE is much superior to that of the modified 2-D MUSIC.

Journal ArticleDOI
TL;DR: A new empirical Bayes estimator, with parameters simply estimated by moments, is proposed and compared with iterative alternatives suggested by Clayton and Kaldor.
Abstract: Methods for estimating regional mortality and disease rates with a view to mapping disease are discussed. A new empirical Bayes estimator with parameters simply estimated by moments is proposed and compared with iterative alternatives suggested by Clayton and Kaldor. The author develops a local shrinkage estimator in which a crude disease rate is shrunk toward a local neighborhood rate. The estimators are compared using simulations and an empirical example based on infant mortality data for Auckland, New Zealand.

Journal ArticleDOI
TL;DR: In this paper, the authors propose new ratio- and product-type estimators for estimating the mean of a finite population using information on a single auxiliary variable; the bias and mean square error of these estimators are obtained.
Abstract: This paper proposes new ratio- and product-type estimators for estimating the mean of a finite population using information on a single auxiliary variable. The bias and mean square error of these estimators have been obtained. These estimators are compared for their precision with the usual mean per unit, ratio, and product estimators and are found to be more efficient in many practical situations. Further, it is shown that these estimators reduce to the regression estimator.
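The classical ratio estimator that these proposals generalize is easily sketched. It assumes the population mean of the auxiliary variable x is known; the function name is illustrative:

```python
def ratio_estimator(y, x, x_pop_mean):
    """Classical ratio estimator of the finite-population mean of y
    using an auxiliary variable x with known population mean:
        ybar_R = ybar * (Xbar / xbar)."""
    ybar = sum(y) / len(y)
    xbar = sum(x) / len(x)
    return ybar * (x_pop_mean / xbar)

# Sample means ybar = 3 and xbar = 1.5; with Xbar = 1.8 the
# estimate is 3 * (1.8 / 1.5) = 3.6.
print(ratio_estimator([2.0, 4.0], [1.0, 2.0], 1.8))
```

The ratio estimator gains over the sample mean when y and x are strongly positively correlated, which is the situation the paper's comparisons target.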

Journal ArticleDOI
TL;DR: In this article, a state estimator design scheme for linear dynamical systems driven by partially unknown inputs is presented, where no prior assumption is made about the nature of these inputs.
Abstract: A novel state estimator design scheme for linear dynamical systems driven by partially unknown inputs is presented. It is assumed that there is no information available about the unknown inputs, and thus no prior assumption is made about the nature of these inputs. A simple approach for designing a reduced-order unknown input observer (UIO) with pole-placement capability is proposed. By carefully examining the dynamic system involved and through simple algebraic manipulations, it is possible to rewrite the equations, eliminating the unknown inputs from part of the system, and to put the system into a form in which it can be partitioned into two interconnected subsystems, one of which is directly driven by known inputs only. This makes it possible to use a conventional Luenberger observer with a slight modification for the purpose of estimating the state of the system. As a result, it is also possible to state necessary and sufficient conditions, similar to those of a conventional observer, for the existence of a stable estimator and for arbitrary placement of the eigenvalues of the observer. The design and computational complexities involved in designing UIOs are greatly reduced in the proposed approach.

Journal ArticleDOI
TL;DR: This article proposes to transform the data with the intention that a global window width is more appropriate for the density of the transformed data, and explores choosing the transformation from suitable parametric families.
Abstract: For the density estimation problem the global window width kernel density estimator does not perform well when the underlying density has features that require different amounts of smoothing at different locations. In this article we propose to transform the data with the intention that a global window width is more appropriate for the density of the transformed data. The density estimate of the original data is the “back-transform” by change of variables of the global window width estimate of the transformed data's density. We explore choosing the transformation from suitable parametric families. Data-based selection rules for the choice of transformations and the window width are discussed. Application to real and simulated data demonstrates the usefulness of our proposals.

Journal ArticleDOI
TL;DR: A novel lattice-based adaptive infinite impulse response (IIR) notch filter is developed which features independent tuning of the notch frequency and attenuation bandwidth, and the estimation of extremal frequencies is less prone to overflow instability than previously reported structures.
Abstract: A novel lattice-based adaptive infinite impulse response (IIR) notch filter is developed which features independent tuning of the notch frequency and attenuation bandwidth. The internal structure is based on planar rotators, ensuring reliable numerical behaviour and high processing rates in CORDIC environments. A simple update law allows a simpler implementation than previously proposed designs. Rather than minimizing an output error cost function, the algorithm is designed to achieve a stable associated differential equation, resulting in a globally convergent unbiased frequency estimator in the single sinusoid case, independent of the notch filter bandwidth. Using a second-order structure in the multiple sinusoid case, unbiased estimation of one of the input frequencies is achieved by thinning the notch bandwidth. The tracking behavior is superior to conventional output error designs, and the estimation of extremal frequencies is less prone to overflow instability than previously reported structures.

Journal ArticleDOI
TL;DR: It is recommended that the proportional bias in logarithmic regressions be estimated from the ratio of the arithmetic sample mean and the mean of the back-transformed predicted values from the regression under the assumption of a lognormal distribution of errors.
Abstract: It is recommended that the proportional bias in logarithmic regressions be estimated from the ratio of the arithmetic sample mean and the mean of the back-transformed predicted values from the regr...

Journal ArticleDOI
TL;DR: In this paper, the problem of estimating a smooth monotone regression function $m$ is studied, where the estimator is composed of a smoothing step and an isotonisation step.
Abstract: The problem of estimating a smooth monotone regression function $m$ will be studied. We will consider the estimator $m_{SI}$ consisting of a smoothing step (application of a kernel estimator based on a kernel $K$) and of an isotonisation step (application of the pool adjacent violators algorithm). The estimator $m_{SI}$ will be compared with the estimator $m_{IS}$ where these two steps are interchanged. A higher order stochastic expansion of these estimators will be given which shows that $m_{SI}$ and $m_{IS}$ are asymptotically first order equivalent and that $m_{IS}$ has a smaller mean squared error than $m_{SI}$ if and only if the kernel function of the kernel estimator is not too smooth.
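The isotonisation step, the pool adjacent violators algorithm, can be sketched directly. This is a least-squares isotonic (non-decreasing) fit to a sequence; the block-merging implementation below is an illustrative choice:

```python
def pava(y):
    """Pool adjacent violators algorithm: least-squares isotonic
    (non-decreasing) fit to the sequence y. Each block stores
    [sum, count]; neighbouring blocks whose means violate
    monotonicity are merged and replaced by their pooled mean."""
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out

# The violating pair (3, 2) is pooled to its mean 2.5.
print(pava([1.0, 3.0, 2.0, 4.0]))
```

Applied to kernel-smoothed values it gives $m_{SI}$; applied to the raw data before smoothing it gives the first stage of $m_{IS}$.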

Journal ArticleDOI
A. Khalili1, K. Kromp1
TL;DR: In this paper, the Weibull modulus distribution was estimated for data produced by Monte Carlo simulations using three different approaches: linear regression, moments method, and maximum likelihood method.
Abstract: The Weibull parameters were estimated for data produced by Monte Carlo simulations using three different approaches: linear regression, moments method, and maximum likelihood method. The last of these was shown to be the most appropriate approach for the whole range of sample sizes of 4 to 100 for estimating the Weibull parameters of a brittle material. In each simulation 10000 estimators were produced. Using these values histograms of the estimators were created, which showed the asymmetry of the Weibull modulus distribution. The integrals of these density functions were directly used to determine confidence intervals for the estimated Weibull moduli. Furthermore it was reaffirmed that a minimum of 30 samples are required for a good characterization of the strength of a brittle material.
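The maximum likelihood approach recommended above can be sketched by bisection on the standard Weibull profile likelihood equation for the shape parameter; the bracketing interval, solver, and function name are illustrative assumptions:

```python
import math

def weibull_mle(data, k_lo=0.01, k_hi=50.0, iters=200):
    """Maximum-likelihood Weibull shape k and scale lam. The shape
    solves the standard profile equation
      g(k) = sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x) = 0,
    found here by bisection (assumes the root lies in [k_lo, k_hi]);
    then lam = (sum(x^k)/n)^(1/k)."""
    n = len(data)
    mean_log = sum(math.log(x) for x in data) / n

    def g(k):
        s = sum(x ** k for x in data)
        sl = sum((x ** k) * math.log(x) for x in data)
        return sl / s - 1.0 / k - mean_log

    lo, hi = k_lo, k_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = (sum(x ** k for x in data) / n) ** (1.0 / k)
    return k, lam
```

For small samples (the 4-to-100 range studied here) the MLE of the shape is known to be biased upward, which is one reason the paper builds confidence intervals from simulated estimator distributions rather than asymptotics.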