
Showing papers by "Donald B. Rubin published in 1994"


Journal ArticleDOI
TL;DR: ECME, as discussed by the authors, is a generalisation of the ECM algorithm, which is itself an extension of the EM algorithm (Dempster, Laird & Rubin, 1977). It is obtained by replacing some CM-steps of ECM, which maximise the constrained expected complete-data log-likelihood function, with steps that maximise the correspondingly constrained actual likelihood function.
Abstract: A generalisation of the ECM algorithm (Meng & Rubin, 1993), which is itself an extension of the EM algorithm (Dempster, Laird & Rubin, 1977), can be obtained by replacing some CM-steps of ECM, which maximise the constrained expected complete-data log-likelihood function, with steps that maximise the correspondingly constrained actual likelihood function. This algorithm, which we call the ECME algorithm, for Expectation/Conditional Maximisation Either, shares with both EM and ECM their stable monotone convergence and basic simplicity of implementation relative to competing faster-converging methods. Moreover, ECME can have a substantially faster convergence rate than either EM or ECM, measured using either the number of iterations or actual computer time.
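To make the "Either" idea concrete, here is a minimal sketch (not code from the paper; the model, function names, and starting values are illustrative assumptions) of ECME for a univariate Student-t location/scale model with unknown degrees of freedom: the location and scale are updated from the expected complete-data log-likelihood as in EM/ECM, while the degrees of freedom are updated by maximising the actual observed-data log-likelihood.

```python
# Illustrative ECME sketch for a univariate Student-t model (mu, sigma2, nu).
# E-step and the (mu, sigma2) CM-step use the usual expected complete-data
# quantities; the nu CM-step maximises the ACTUAL t log-likelihood ("Either").
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def ecme_t(x, n_iter=50):
    x = np.asarray(x, dtype=float)
    mu, sigma2, nu = np.mean(x), np.var(x), 5.0   # crude starting values (assumed)
    for _ in range(n_iter):
        # E-step: expected precision weights given current parameters
        d2 = (x - mu) ** 2 / sigma2
        w = (nu + 1.0) / (nu + d2)
        # CM-step 1: maximise the expected complete-data log-likelihood in (mu, sigma2)
        mu = np.sum(w * x) / np.sum(w)
        sigma2 = np.mean(w * (x - mu) ** 2)
        # CM-step 2 ("Either"): maximise the actual observed-data log-likelihood in nu
        def neg_loglik(log_nu):
            return -np.sum(stats.t.logpdf(x, df=np.exp(log_nu),
                                          loc=mu, scale=np.sqrt(sigma2)))
        res = minimize_scalar(neg_loglik, bounds=(np.log(0.5), np.log(100.0)),
                              method="bounded")
        nu = np.exp(res.x)
    return mu, sigma2, nu

# Example: recover parameters from simulated t data
sample = stats.t.rvs(df=4, loc=1.0, scale=2.0, size=1000, random_state=0)
print(ecme_t(sample))
```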

604 citations


Journal ArticleDOI
TL;DR: In this article, the counternull value of an effect size is defined as the nonnull magnitude of effect size that is supported by exactly the same amount of evidence as supports the null value of the effect size.
Abstract: We introduce a new, readily computed statistic, the counternull value of an obtained effect size, which is the nonnull magnitude of effect size that is supported by exactly the same amount of evidence as supports the null value of the effect size. In other words, if the counternull value were taken as the null hypothesis, the resulting p value would be the same as the obtained p value for the actual null hypothesis. Reporting the counternull, in addition to the p value, virtually eliminates two common errors: (a) equating failure to reject the null with the estimation of the effect size as equal to zero, and (b) taking the rejection of a null hypothesis on the basis of a significant p value to imply a scientifically important finding. In many common situations with a one-degree-of-freedom effect size, the value of the counternull is simply twice the magnitude of the obtained effect size, but the counternull is defined in general, even with multi-degree-of-freedom effect sizes, and therefore can be applied when...
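For the one-degree-of-freedom case mentioned above, the counternull is easy to compute; the small sketch below is a hypothetical illustration (not taken from the paper) that assumes the effect-size estimate has a symmetric distribution, so the counternull is the obtained effect size reflected about the null value.

```python
# Counternull for a symmetric, one-degree-of-freedom effect-size estimate:
# the obtained effect size reflected about the null value
# (twice the obtained effect size when the null value is 0).
def counternull(es_obtained, es_null=0.0):
    return 2.0 * es_obtained - es_null

# Example: an observed standardized mean difference d = 0.30 tested against a
# null of 0 is supported by exactly as much evidence as the counternull 0.60.
print(counternull(0.30))  # 0.6
```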

132 citations


Journal ArticleDOI
TL;DR: This article presents a general description of how and when the componentwise rates differ, as well as their relationships with the global rate, and provides an example, a standard contaminated normal model, to show that such phenomena are not necessarily pathological but can occur in useful statistical models.
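As a rough empirical illustration of componentwise rates (an assumed setup, not the paper's analysis), the sketch below runs EM on a contaminated-normal model with a known variance-inflation factor and estimates each parameter's rate of convergence as the ratio of successive errors, treating the final iterate as the limit.

```python
# Empirical look at componentwise EM convergence rates under a contaminated
# normal model: x ~ (1-eps)*N(mu, s2) + eps*N(mu, lam*s2), lam known.
import numpy as np

rng = np.random.default_rng(0)
lam, n = 9.0, 2000                                   # assumed variance inflation, sample size
z = rng.random(n) < 0.1                              # true contamination indicators
x = rng.normal(0.0, 1.0, n) * np.where(z, np.sqrt(lam), 1.0)

def em_path(x, n_iter=200):
    mu, s2, eps = np.mean(x), np.var(x), 0.2         # starting values
    path = []
    for _ in range(n_iter):
        # E-step: posterior probability that each point is contaminated
        d = (x - mu) ** 2
        f0 = np.exp(-0.5 * d / s2) / np.sqrt(s2)
        f1 = np.exp(-0.5 * d / (lam * s2)) / np.sqrt(lam * s2)
        r = eps * f1 / (eps * f1 + (1.0 - eps) * f0)
        # M-step: weighted updates (weight 1 for clean points, 1/lam for contaminated)
        w = (1.0 - r) + r / lam
        mu = np.sum(w * x) / np.sum(w)
        s2 = np.sum(w * (x - mu) ** 2) / len(x)
        eps = np.mean(r)
        path.append((mu, s2, eps))
    return np.array(path)

path = em_path(x)
err = np.abs(path[:-1] - path[-1])                   # errors, taking the last iterate as the limit
rates = err[11:26] / err[10:25]                      # ratios of successive errors, mid-run
print("approximate componentwise rates (mu, s2, eps):", rates.mean(axis=0))
```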

87 citations



Journal ArticleDOI
TL;DR: In this article, the authors showed that high power RF processing (HPP) is an effective technique to reduce field emission in superconducting cavities, so higher accelerating gradients can be reached.
Abstract: In the previous companion paper we showed that high power RF processing (HPP) is an effective technique to reduce field emission in superconducting cavities, so higher accelerating gradients can be reached. In this work we show improved understanding of the mechanisms at work when field emitters process. Thermometry measurements of the outer wall of single-cell cavities reveal the field emission from localized sites and also the reduction in field emission by processing. Subsequent scanning electron microscope (SEM) examination of the RF surface at the emission/processed sites reveals 5–10 μm-sized molten craters, micron-sized molten particles of foreign elements, and sub-mm-sized spots shaped like starbursts. These features indicate that processing occurs through a violent melting/vaporization phenomenon. A "model" for RF processing is presented based upon the experimental evidence, both from this study and from others.

19 citations


Journal ArticleDOI
TL;DR: In this article, the problem of equating a new standardized test to an old reference test is considered when the samples for equating are not randomly selected from the target population of test takers.
Abstract: The problem of equating a new standardized test to an old reference test is considered when the samples for equating are not randomly selected from the target population of test takers. Two problems with equating from biased samples are distinguished: (a) bias in the equating function arising from nonrandom selection of the equating sample, and (b) excessive variance in the equating function at scores that are relatively underrepresented in the equating sample relative to the target population. A theorem is presented that suggests that bias may not be a major problem for equating, even when the marginal distributions of scores are distorted by selection. Empirical analysis of data for equating the Armed Services Vocational Aptitude Battery (ASVAB) based on samples of recruits and applicants supports this contention. Analysis of ASVAB data also indicates that excessive variance in the equating function is a more serious issue. Variance-reducing methods, which smooth the test score distributions using extended beta binomial and loglinear polynomial models before equating by the equipercentile method, are presented. Empirical evidence suggests that these smoothing models are successful and yield equating functions that improve on both equipercentile and linear equating of the raw scores.
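The equipercentile step itself is simple to illustrate; the sketch below uses hypothetical simulated scores and omits the beta binomial / loglinear smoothing of the score distributions that the paper applies first. It maps a raw score on the new form to the reference-form score with the same percentile rank.

```python
# Bare-bones equipercentile equating (illustrative only; no presmoothing).
import numpy as np

def equipercentile_equate(new_scores, ref_scores, x):
    """Map score x on the new form to the equivalent score on the reference form."""
    # percentile rank of x among new-form scores (midpoint convention for ties)
    pr = (np.sum(new_scores < x) + 0.5 * np.sum(new_scores == x)) / len(new_scores)
    # reference-form score with that percentile rank
    return np.quantile(ref_scores, pr)

# Example with simulated 50-item raw scores (hypothetical data):
rng = np.random.default_rng(1)
new = rng.binomial(50, 0.55, size=3000)     # new-form raw scores
ref = rng.binomial(50, 0.60, size=3000)     # reference-form raw scores
print(equipercentile_equate(new, ref, 30))
```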

18 citations


Journal ArticleDOI
Hal S. Stern, Doreen Arcus, Jerome Kagan, Donald B. Rubin, Nancy Snidman
TL;DR: In this paper, a finite mixture model was applied to two sets of longitudinal observations of infants and young children, and a measure of predictive efficacy was described for comparing the mixture model with competing models, principally a linear regression analysis.
Abstract: Temperamental characteristics can be conceptualized as either continuous dimensions or qualitative categories. The distinction concerns the underlying temperamental characteristics rather than the measured variables, which can usually be recorded as either continuous or categorical variables. A finite mixture model captures the categorical view, and we apply such a model here to two sets of longitudinal observations of infants and young children. A measure of predictive efficacy is described for comparing the mixture model with competing models, principally a linear regression analysis. The mixture model performs mildly better than the linear regression model with respect to this measure of fit to the data; however, the primary advantage of the mixture model relative to competing approaches is that, because it matches our a priori theory, it can easily be used to address improvements and corrections to the theory and to suggest extensions of the research.
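The sketch below gives a crude sense of such a comparison; the data, model choices, and error measure are assumptions for illustration, not the study's. It contrasts a two-class Gaussian mixture fitted to early measurements with an ordinary linear regression, compared on out-of-sample mean squared error of a later outcome.

```python
# Illustrative comparison: categorical (mixture) vs continuous (regression) view.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
latent_class = rng.random(n) < 0.3                                    # hypothetical qualitative types
early = rng.normal(np.where(latent_class, 1.5, 0.0), 1.0, (2, n)).T   # two early (infant) measures
later = 0.8 * latent_class + rng.normal(0.0, 0.5, n)                  # later-childhood outcome

X_tr, X_te, y_tr, y_te = train_test_split(early, later, random_state=0)

# Mixture route: class posterior from a 2-component mixture, then predict from it
gm = GaussianMixture(n_components=2, random_state=0).fit(X_tr)
post_tr, post_te = gm.predict_proba(X_tr)[:, 1], gm.predict_proba(X_te)[:, 1]
mix_pred = LinearRegression().fit(post_tr[:, None], y_tr).predict(post_te[:, None])

# Continuous route: ordinary linear regression on the raw early measurements
lin_pred = LinearRegression().fit(X_tr, y_tr).predict(X_te)

print("mixture-based MSE:", np.mean((mix_pred - y_te) ** 2))
print("linear-regression MSE:", np.mean((lin_pred - y_te) ** 2))
```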

16 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a systematic study of the effects of pulsed high power RF processing (HPP) as a method of reducing field emission (FE) in superconducting radio frequency (SRF) cavities, so that higher accelerating gradients can be reached for future particle accelerators.
Abstract: A systematic study is presented of the effects of pulsed high power RF processing (HPP) as a method of reducing field emission (FE) in superconducting radio frequency (SRF) cavities to reach higher accelerating gradients for future particle accelerators. The processing apparatus was built to provide up to 150 kW peak RF power to 3 GHz cavities, for pulse lengths from 200 μs to 1 ms. Single-cell and nine-cell cavities were tested extensively. The thermal conductivity of the niobium for these cavities was made as high as possible to ensure stability against thermal breakdown of superconductivity. HPP proves to be a highly successful method of reducing FE loading in nine-cell SRF cavities. Attainable continuous wave (CW) fields increase by as much as 80% from their pre-HPP limits. The CW accelerating field achieved with nine-cell cavities improved with HPP from 8–15 MV/m to 14–20 MV/m. The benefits are stable with subsequent exposure to dust-free air. More importantly, HPP also proves effective against new field emission subsequently introduced by cold and warm vacuum "accidents" which admitted "dirty" air into the cavities. Clear correlations are obtained linking FE reduction with the maximum surface electric field attained during processing. In single cells the maximum fields reached were E_peak = 72 MV/m and H_peak = 1660 Oe. Thermal breakdown, initiated by accompanying high surface magnetic fields, is the dominant limitation on the attainable fields for pulsed processing, as well as for final CW and long pulse operation. To prove that the surface magnetic field, rather than the surface electric field, is the limitation to HPP effectiveness, a special two-cell cavity with a reduced magnetic to electric field ratio is successfully tested. During HPP, pulsed fields reach E_peak = 113 MV/m (H_peak = 1600 Oe), and subsequent CW low power measurement reached E_peak = 100 MV/m, the highest CW field ever measured in a superconducting accelerator cavity.

14 citations