
Showing papers in "Journal of Applied Statistics in 2011"


Journal ArticleDOI
TL;DR: This book, with its four parts, is a valuable reference on probability, Markov chains, queuing systems and computer simulation, and gives various statistical characteristics of Markov chains and processes.
Abstract: Probability, Markov chains, queues, and simulation, by William J. Stewart, Princeton, Princeton University Press, 2009, xviii+758 pp., £55.00 or US$80.00 (hardback), ISBN 978-0-691-14062-9 This boo...

167 citations


Journal ArticleDOI
Heonsang Lim1, Bong-Jin Yum1
TL;DR: In this article, optimal accelerated degradation test (ADT) plans are developed assuming that the constant-stress loading method is employed and the degradation characteristic follows a Wiener process, and the test stress levels and the proportion of test units allocated to each stress level such that the asymptotic variance of the maximum-likelihood estimator of the qth quantile of the lifetime distribution at the use condition is minimized.
Abstract: Optimal accelerated degradation test (ADT) plans are developed assuming that the constant-stress loading method is employed and the degradation characteristic follows a Wiener process. Unlike previous works on planning ADTs based on stochastic process models, this article determines the test stress levels and the proportion of test units allocated to each stress level such that the asymptotic variance of the maximum-likelihood estimator of the qth quantile of the lifetime distribution at the use condition is minimized. In addition, compromise plans are also developed for checking the validity of the relationship between the model parameters and the stress variable. Finally, using an example, sensitivity analysis procedures are presented for evaluating the robustness of optimal and compromise plans against the uncertainty in the pre-estimated parameter value, and the importance of optimally determining test stress levels and the proportion of units allocated to each stress level is illustrated.

153 citations
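A minimal sketch of the degradation model underlying such ADT plans: a Wiener degradation path with linear drift, whose first-passage time over a failure threshold defines the lifetime. The drift, diffusion and threshold values below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: simulate a Wiener degradation path W(t) = drift*t + sigma*B(t)
# at one constant stress level and record the first-passage time over a failure
# threshold; lifetime quantiles can then be estimated by Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)

def wiener_failure_time(drift, sigma, threshold, dt=0.01, t_max=1000.0):
    """First time the degradation path crosses `threshold` (np.inf if never)."""
    level, t = 0.0, 0.0
    while t < t_max:
        level += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if level >= threshold:
            return t
    return np.inf

# Monte Carlo estimate of the 0.1 lifetime quantile at one stress level
times = np.array([wiener_failure_time(drift=0.5, sigma=0.3, threshold=10.0)
                  for _ in range(500)])
print("estimated 0.1 quantile of lifetime:", np.quantile(times, 0.1))
```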


Journal ArticleDOI
TL;DR: This book gives an overview of current multiple hypothesis testing practice in the pharmaceutical field and addresses the multiple testing problem through the conventional p-value approach as well as parametric and resampling approaches.
Abstract: Multiple testing problems in pharmaceutical statistics, edited by A. Dmitrienko, A.C. Tamhame, and F. Bretz, Boca Raton, Chapman and Hall/CRC, 2010, xvi+304 pp., £57.99 or US$89.95 (hardback), ISBN...

123 citations


Journal ArticleDOI
TL;DR: A review of several statistical methods currently in use for outlier identification is presented, and their performances are compared theoretically for typical statistical distributions of experimental data, taking values derived from the distribution of extreme order statistics as reference terms.
Abstract: A review of several statistical methods that are currently in use for outlier identification is presented, and their performances are compared theoretically for typical statistical distributions of experimental data, considering values derived from the distribution of extreme order statistics as reference terms. A simple modification of a popular, broadly used method based upon box-plot is introduced, in order to overcome a major limitation concerning sample size. Examples are presented concerning exploitation of methods considered on two data sets: a historical one concerning evaluation of an astronomical constant performed by a number of leading observatories and a substantial database pertaining to an ongoing investigation on absolute measurement of gravity acceleration, exhibiting peculiar aspects concerning outliers. Some problems related to outlier treatment are examined, and the requirement of both statistical analysis and expert opinion for proper outlier management is underlined.

121 citations
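For reference, the box-plot rule that the paper modifies is the classical Tukey fence: flag points outside [Q1 − k·IQR, Q3 + k·IQR]. The sketch below shows only this standard k = 1.5 baseline, not the sample-size correction proposed in the paper; the data values are illustrative.

```python
# Classical box-plot (Tukey) outlier rule: flag points outside
# [Q1 - k*IQR, Q3 + k*IQR]. The paper's modification for sample size is not
# reproduced here; this is the uncorrected baseline it builds on.
import numpy as np

def boxplot_outliers(x, k=1.5):
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return x[(x < lower) | (x > upper)]

data = np.array([9.8, 10.1, 9.9, 10.0, 10.2, 9.7, 12.9])  # illustrative values
print(boxplot_outliers(data))   # -> [12.9]
```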


Journal ArticleDOI
TL;DR: In this paper, a robust, heteroscedastic generalization of Cohen's d is proposed, which has the additional advantage of being able to generalize to a large number of tasks.
Abstract: Motivated by involvement in an intervention study, the paper proposes a robust, heteroscedastic generalization of what is popularly known as Cohen's d. The approach has the additional advantage of ...

114 citations


Journal ArticleDOI
TL;DR: The authors proposed a corrected variance inflation factor (VIF) measure to evaluate the impact of the correlation among the explanatory variables on the variance of the ordinary least squares estimators, and showed that the real impact on the variance can be overestimated by the traditional VIF when the explanatory variables contain no redundant information about the dependent variable, in which case a corrected version of this multicollinearity indicator becomes necessary.
Abstract: In this paper, we propose a new corrected variance inflation factor (VIF) measure to evaluate the impact of the correlation among the explanatory variables in the variance of the ordinary least squares estimators. We show that the real impact on variance can be overestimated by the traditional VIF when the explanatory variables contain no redundant information about the dependent variable and a corrected version of this multicollinearity indicator becomes necessary.

105 citations
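As background, the traditional VIF that the paper argues can overstate the impact on the OLS variances is VIF_j = 1 / (1 − R_j²), where R_j² comes from regressing the j-th explanatory variable on the others. The sketch below computes only this traditional quantity on simulated data; the corrected VIF of the paper is not reproduced.

```python
# Traditional variance inflation factor via auxiliary regressions.
import numpy as np

def vif(X):
    """X: (n, p) design matrix without intercept; returns one VIF per column."""
    n, p = X.shape
    vifs = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=200)   # strongly collinear with x1
x3 = rng.normal(size=200)
print(vif(np.column_stack([x1, x2, x3])))    # large VIFs for x1 and x2
```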


Journal ArticleDOI
TL;DR: This work builds up the samples with non-neighbouring items, according to the time they were produced, to counteract the undesired effect of autocorrelation.

Abstract: Measurement error and autocorrelation often exist in quality control applications. Both have an adverse effect on the X¯ chart's performance. To counteract the undesired effect of autocorrelation, we build up the samples with non-neighbouring items, according to the time they were produced. To counteract the undesired effect of measurement error, we measure the quality characteristic of each item of the sample several times. The chart's performance is assessed when multiple measurements are applied and the samples are built by taking one item from the production line and skipping one, two or more before selecting the next.

90 citations
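A hedged sketch of the sampling scheme just described: items come from an autocorrelated (here AR(1)) stream, samples are formed by taking one item and skipping a fixed number before the next, and each selected item is measured several times with additive error, the readings being averaged. All function names and parameter values are illustrative assumptions.

```python
# Build X-bar samples from non-neighbouring items of an AR(1) stream, with m
# repeated noisy measurements per selected item averaged to damp measurement error.
import numpy as np

rng = np.random.default_rng(2)

def ar1_stream(n_items, phi=0.7, sigma=1.0):
    x = np.zeros(n_items)
    for t in range(1, n_items):
        x[t] = phi * x[t - 1] + rng.normal(scale=sigma)
    return x

def sample_means(items, n=5, skip=2, m=3, meas_sigma=0.5):
    step = skip + 1
    means = []
    for start in range(0, len(items) - n * step, n * step):
        idx = start + step * np.arange(n)
        # m noisy measurements per selected item, averaged before charting
        measured = items[idx][:, None] + rng.normal(scale=meas_sigma, size=(n, m))
        means.append(measured.mean())
    return np.array(means)

xbar = sample_means(ar1_stream(5000))
print(len(xbar), xbar[:5])
```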


Journal ArticleDOI
TL;DR: In this paper, a Bayesian procedure for fitting the monotonic regression model by adapting currently available variable selection procedures was developed, where the Bernstein polynomials were used to provide a smooth estimate over equidistant knots.
Abstract: One of the standard problems in statistics consists of determining the relationship between a response variable and a single predictor variable through a regression function. Background scientific knowledge is often available that suggests that the regression function should have a certain shape (e.g. monotonically increasing or concave) but not necessarily a specific parametric form. Bernstein polynomials have been used to impose certain shape restrictions on regression functions. The Bernstein polynomials are known to provide a smooth estimate over equidistant knots. Bernstein polynomials are used in this paper due to their ease of implementation, continuous differentiability, and theoretical properties. In this work, we demonstrate a connection between the monotonic regression problem and the variable selection problem in the linear model. We develop a Bayesian procedure for fitting the monotonic regression model by adapting currently available variable selection procedures. We demonstrate the effectiv...

83 citations
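To make the shape constraint concrete: writing the regression function as f(x) = Σ_k β_k B_{k,N}(x) in the Bernstein basis, monotonicity follows if the coefficients β_k are non-decreasing, i.e. if their increments are non-negative. The sketch below enforces this with a non-negative least-squares fit (scipy's nnls); it is a frequentist illustration of the constraint, not the paper's Bayesian variable-selection procedure, and the degree and data are assumed for illustration.

```python
# Monotone (non-decreasing) regression with Bernstein polynomials via
# non-negative increments of the basis coefficients.
import numpy as np
from scipy.optimize import nnls
from scipy.special import comb

def bernstein_basis(x, N):
    k = np.arange(N + 1)
    return comb(N, k) * x[:, None] ** k * (1 - x[:, None]) ** (N - k)

def fit_monotone(x, y, N=8):
    B = bernstein_basis(x, N)                        # (n, N+1)
    # column for each increment gamma_j (j>=1): sum of basis functions k >= j
    inc_cols = np.column_stack([B[:, j:].sum(axis=1) for j in range(1, N + 1)])
    # intercept gamma_0 is sign-free: split into positive and negative parts
    A = np.column_stack([np.ones_like(x), -np.ones_like(x), inc_cols])
    coef, _ = nnls(A, y)
    gamma0 = coef[0] - coef[1]
    beta = gamma0 + np.concatenate([[0.0], np.cumsum(coef[2:])])
    return lambda t: bernstein_basis(np.asarray(t, float), N) @ beta

rng = np.random.default_rng(3)
x = rng.uniform(size=150)
y = np.sqrt(x) + rng.normal(scale=0.1, size=150)     # monotone signal + noise
f = fit_monotone(x, y)
print(f(np.array([0.1, 0.5, 0.9])))                   # non-decreasing estimates
```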


Journal ArticleDOI
Boran Gazi1
TL;DR: In this article, the authors discuss crucial and fundamental topics concerning credit risk management, including credit scoring, credit ratings, risk modelling and measurement, portfolio models and the Basel II accord.
Abstract: In financial organisations, credit risk management has been one of the most crucial issues. Credit risk concerns separating good customers from bad customers. For this reason, leading financial organisations employ advanced quantitative models and decision support tools for predicting default events and making effective lending decisions. Nowadays, with the help of computers and technology, such decisions can be made quickly, accurately and without having to rely solely on human judgement. Meanwhile, banks not only need to allocate sufficient capital as a buffer to absorb losses incurred from credit, but must also allocate their capital optimally so that the most effective investments can be made. With recent failures in the financial sector, it is becoming more evident that banks and other financial institutions need to invest more in their financial risk models and internal rating systems. In their book, the authors discuss crucial and fundamental topics concerning credit risk management. These include credit scoring, credit ratings, risk modelling and measurement, portfolio models and the Basel II accord. This book is the first of a three-part series. The first book aims to lay the foundation for financial risk management and provides a comprehensive overview of credit scoring, ratings systems and regulatory risk. It is intended for practitioners, academics and also for those who are new to the field and want to understand credit risk management concepts. The other two books provide more in-depth discussions mainly concentrating on developing credit risk systems (Book II), and risk model validation, monitoring, auditing and regulation (Book III). The book starts with a history of banking and the types of risk challenging financial organisations. It then introduces two important topics crucial to credit risk management: credit scoring and credit ratings. Credit scoring is concerned with predicting customers’ level of risk and their exposure at different stages of the customer life cycle. Internal ratings are used in regulatory capital calculations and internal risk management, whereas external ratings are used for benchmarking and also by investors for investment decisions. The authors also provide overviews of the model development life cycle and portfolio models. The book concludes with an overview of the Basel II capital accord. The book is well organised and well presented. The authors introduce each topic by building on knowledge and understanding from previous chapters. Newcomers to the field and those practitioners involved in the oversight of financial risk and internal rating models would greatly benefit from this book.

68 citations


Journal ArticleDOI
John Pemberton1
TL;DR: Time series analysis with applications in R, Second edition, by Jonathan D. Cryer and Kung-Sik Chan, New York, Springer, 2008, xiii+491 pp., £55.99 or US$84.95 (hardback), ISBN 978-0-387-75958-6 Th...
Abstract: Time Series Analysis with Applications in R, Second edition, by Jonathan D. Cryer and Kung-Sik Chan, New York, Springer, 2008, xiii+491 pp., £55.99 or US$84.95 (hardback), ISBN 978-0-387-75958-6 Th...

68 citations


Journal ArticleDOI
TL;DR: In this paper, the non-parametric method of multivariate singular spectrum analysis (MSSA) is developed for multi-vintage data and used to model and forecast the final vintage of data, illustrated with the UK index of industrial production.
Abstract: Real-time data on national accounts statistics typically undergo an extensive revision process, leading to multiple vintages on the same generic variable. The time between the publication of the initial and final data is a lengthy one and raises the question of how to model and forecast the final vintage of data – an issue that dates from seminal articles by Mankiw et al., Mankiw and Shapiro and Nordhaus. To solve this problem, we develop the non-parametric method of multivariate singular spectrum analysis (MSSA) for multi-vintage data. MSSA is much more flexible than the standard methods of modelling that involve at least one of the restrictive assumptions of linearity, normality and stationarity. The benefits are illustrated with data on the UK index of industrial production: neither the preliminary vintages nor the competing models are as accurate as the forecasts using MSSA.
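To give a flavour of the machinery involved, the sketch below shows the basic univariate SSA building block: embed a series into a trajectory (Hankel) matrix, take an SVD, and reconstruct a smoothed signal by diagonal averaging of the leading components. The paper's method is the multivariate extension applied jointly to several data vintages; the window length, number of components and series here are assumptions for illustration only.

```python
# Basic singular spectrum analysis (SSA): trajectory matrix, truncated SVD,
# and reconstruction by anti-diagonal averaging.
import numpy as np

def ssa_reconstruct(x, window, n_components):
    n = len(x)
    K = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(K)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    # anti-diagonal averaging back to a series of length n
    recon = np.zeros(n)
    counts = np.zeros(n)
    for j in range(K):
        recon[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return recon / counts

t = np.arange(200)
series = np.sin(2 * np.pi * t / 24) + 0.3 * np.random.default_rng(4).normal(size=200)
trend_cycle = ssa_reconstruct(series, window=48, n_components=2)
print(trend_cycle[:5])
```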

Journal ArticleDOI
TL;DR: In this article, a systematic procedure for the derivation of linearized variables for the estimation of sampling errors of complex nonlinear statistics involved in the analysis of poverty and income inequality is developed.
Abstract: A systematic procedure for the derivation of linearized variables for the estimation of sampling errors of complex nonlinear statistics involved in the analysis of poverty and income inequality is developed. The linearized variable extends the use of standard variance estimation formulae, developed for linear statistics such as sample aggregates, to nonlinear statistics. The context is that of cross-sectional samples of complex design and reasonably large size, as typically used in population-based surveys. Results of application of the procedure to a wide range of poverty and inequality measures are presented. Standardized software for the purpose has been developed and can be provided to interested users on request. Procedures are provided for the estimation of the design effect and its decomposition into the contribution of unequal sample weights and of other design complexities such as clustering and stratification. The consequence of treating a complex statistic as a simple ratio in estimating its ...

Journal ArticleDOI
TL;DR: This work expands the set of alternatives to allow for the consideration of multiple change-points, and proposes a model selection algorithm using sequential testing for the piecewise constant hazard model.
Abstract: The National Cancer Institute (NCI) suggests a sudden reduction in prostate cancer mortality rates, likely due to highly successful treatments and screening methods for early diagnosis. We are interested in understanding the impact of medical breakthroughs, treatments, or interventions, on the survival experience for a population. For this purpose, estimating the underlying hazard function, with possible time change points, would be of substantial interest, as it will provide a general picture of the survival trend and when this trend is disrupted. Increasing attention has been given to testing the assumption of a constant failure rate against a failure rate that changes at a single point in time. We expand the set of alternatives to allow for the consideration of multiple change-points, and propose a model selection algorithm using sequential testing for the piecewise constant hazard model. These methods are data driven and allow us to estimate not only the number of change points in the hazard function ...
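For fixed change points, the maximum likelihood estimate of each interval's hazard in a piecewise constant (piecewise exponential) model is simply the number of events in the interval divided by the total time at risk there. The sketch below computes those rates for given change points; the paper's contribution, selecting the number and location of change points by sequential testing, is not reproduced, and the simulated data are illustrative.

```python
# MLE of piecewise constant hazard rates given candidate change points.
import numpy as np

def piecewise_hazard(times, events, change_points):
    """times: follow-up times; events: 1=event, 0=censored; change_points: sorted."""
    edges = np.concatenate([[0.0], np.asarray(change_points, float), [np.inf]])
    rates = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        exposure = np.clip(np.minimum(times, hi) - lo, 0.0, None).sum()
        n_events = np.sum((events == 1) & (times > lo) & (times <= hi))
        rates.append(n_events / exposure if exposure > 0 else np.nan)
    return np.array(rates)

rng = np.random.default_rng(5)
# illustrative data: hazard drops from 0.5 to 0.1 at t = 2
t1 = rng.exponential(1 / 0.5, 300)
times = np.where(t1 <= 2, t1, 2 + rng.exponential(1 / 0.1, 300))
events = np.ones(300, dtype=int)
print(piecewise_hazard(times, events, change_points=[2.0]))  # roughly [0.5, 0.1]
```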

Journal ArticleDOI
TL;DR: In this article, the authors deal with a Bayesian analysis for right-censored survival data suitable for populations with a cure rate based on the negative binomial distribution, encompassing as a special case the promotion time cure model.
Abstract: In this paper we deal with a Bayesian analysis for right-censored survival data suitable for populations with a cure rate. We consider a cure rate model based on the negative binomial distribution, encompassing as a special case the promotion time cure model. Bayesian analysis is based on Markov chain Monte Carlo (MCMC) methods. We also present some discussion on model selection and an illustration with a real data set.

Journal ArticleDOI
TL;DR: This paper investigates the PrCA data of Louisiana from the Surveillance, Epidemiology, and End Results program; the violation of the PH assumption suggests that a spatial survival model based on the AFT model is more appropriate for this data set.

Abstract: Prostate cancer is the most common cancer diagnosed in American men and the second leading cause of death from malignancies. There is large geographical variation, and there are racial disparities, in the survival rate of prostate cancer. Much work on the spatial survival model is based on the proportional hazards model, but few studies have focused on the accelerated failure time model. In this paper, we investigate the prostate cancer data of Louisiana from the SEER program, and the violation of the proportional hazards assumption suggests that a spatial survival model based on the accelerated failure time model is more appropriate for this data set. To account for the possible extra-variation, we consider spatially referenced independent or dependent spatial structures. The deviance information criterion (DIC) is used to select the best-fitting model within the Bayesian framework. The results from our study indicate that age, race, stage and geographical distribution are significant in evaluating prostate cancer survival.

Journal ArticleDOI
TL;DR: In this paper, a non-parametric maximum likelihood approach in a finite mixture context was used to estimate the probability of university drop-out by using a multinomial latent effects model with endogeneity that accounts for both heterogeneity and omitted covariates.
Abstract: University drop-out is a topic of increasing concern in Italy as well as in other countries. In empirical analysis, university drop-out is generally measured by means of a binary variable indicating the drop-out versus retention. In this paper, we argue that the withdrawal decision is one of the possible outcomes of a set of four alternatives: retention in the same faculty, drop out, change of faculty within the same university, and change of institution. We examine individual-level data collected by the administrative offices of “Sapienza” University of Rome, which cover 117 072 students enrolling full-time for a 3-year degree in the academic years from 2001/2002 to 2006/2007. Relying on a non-parametric maximum likelihood approach in a finite mixture context, we introduce a multinomial latent effects model with endogeneity that accounts for both heterogeneity and omitted covariates. Our estimation results show that the decisions to change faculty or university have their own peculiarities, thus we sugge...

Journal ArticleDOI
TL;DR: According to the reviewer, the book is more suitable for a reader who already knows something about time series than as an introduction to time series analysis, and its wide range of examples and the scope of models and methods covered ensure that it meets anybody's basic needs as well as showing where one might move to next.
Abstract: important for the applied statistician. On the other hand, I think that the reader should expect to not be misled especially when it comes to such matters. In my opinion, the book is a lot more suitable for the reader who already knows something about time series than as an introduction to time series analysis. Despite these shortcomings, I genuinely liked the book. The wide range of examples and the scope of models and methods covered ensure that the book covers anybody’s basic needs as well as showing where we might move to next. Knowing a little about time series and using R regularly, I learned quite a bit and found the large variety of examples are very inspiring. I think it is highly suitable as a book for anyone with some knowledge of time series and of R, and I also think that it will be useful as a supplementary textbook for introductory courses on time series.

Journal ArticleDOI
TL;DR: In this paper, a Birnbaum-Saunders distribution with an unknown shift parameter was discussed and applied to wind energy modeling, including structural aspects of this distribution including properties, moments, mode and hazard and shape analyses.
Abstract: In this paper, we discuss a Birnbaum–Saunders distribution with an unknown shift parameter and apply it to wind energy modeling. We describe structural aspects of this distribution including properties, moments, mode and hazard and shape analyses. We also discuss estimation, goodness of fit and diagnostic methods for this distribution. A computational implementation in the R language of the obtained results is provided. Finally, we apply such results to two unpublished real wind speed data sets from Chile, which allows us to show the characteristics of this statistical distribution and to model wind energy flux.
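As a reminder of the distribution involved, a Birnbaum–Saunders variable has the normal representation T = β((αZ/2) + √((αZ/2)² + 1))² with Z ~ N(0, 1). The sketch below samples from this representation and evaluates the density, with a constant shift added to mimic a shifted version; all parameter values are illustrative assumptions, and the shift handling is only a simple location translation, not necessarily the paper's exact parameterization.

```python
# Birnbaum-Saunders sampling via the normal representation, plus the density,
# with an optional location shift gamma.
import numpy as np

def bs_sample(alpha, beta, size, gamma=0.0, rng=np.random.default_rng(6)):
    z = rng.standard_normal(size)
    t = beta * (alpha * z / 2 + np.sqrt((alpha * z / 2) ** 2 + 1)) ** 2
    return t + gamma

def bs_pdf(t, alpha, beta, gamma=0.0):
    x = np.asarray(t, float) - gamma            # shift back to the standard BS
    c = 1.0 / (2 * alpha * beta * np.sqrt(2 * np.pi))
    term = np.sqrt(beta / x) + (beta / x) ** 1.5
    return c * term * np.exp(-(x / beta + beta / x - 2) / (2 * alpha ** 2))

wind = bs_sample(alpha=0.8, beta=5.0, size=1000, gamma=1.0)  # e.g. wind speeds
print(wind.mean(), bs_pdf(6.0, alpha=0.8, beta=5.0, gamma=1.0))
```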

Journal ArticleDOI
TL;DR: A number of different Hurst parameter estimation methods are compared in the context of a wide range of simulated, laboratory-generated, and real data sets to reveal deep insights on how well the laboratory data mimic the real data.
Abstract: Long-range-dependent time series are endemic in the statistical analysis of Internet traffic. The Hurst parameter provides a good summary of important self-similar scaling properties. We compare a number of different Hurst parameter estimation methods and some important variations. This is done in the context of a wide range of simulated, laboratory-generated, and real data sets. Important differences between the methods are highlighted. Deep insights are revealed on how well the laboratory data mimic the real data. Non-stationarities, which are local in time, are seen to be central issues and lead to both conceptual and practical recommendations.
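One of the classical estimators such a comparison would include is the aggregated-variance method: for block size m, the variance of the block means of a long-range-dependent series scales like m^(2H−2), so H is read off the slope of a log-log regression. The sketch below is only this single estimator on simulated i.i.d. data, with assumed block sizes; it is not the paper's full comparison.

```python
# Aggregated-variance estimator of the Hurst parameter.
import numpy as np

def hurst_aggvar(x, block_sizes=None):
    x = np.asarray(x, float)
    if block_sizes is None:
        block_sizes = np.unique(np.logspace(1, np.log10(len(x) // 10), 20).astype(int))
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var()))
    slope, _ = np.polyfit(log_m, log_v, 1)      # slope = 2H - 2
    return 1.0 + slope / 2.0

rng = np.random.default_rng(7)
iid = rng.standard_normal(100_000)
print(hurst_aggvar(iid))   # close to 0.5 for i.i.d. / short-range-dependent data
```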

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a double-sampling (DS) control chart for the detection of increases of 100% or more in the fraction non-conforming and compared it with the single-sampled chart, variable sample size chart, CUSUM chart, and EWMA chart.
Abstract: In this article, we propose a double-sampling (DS) np control chart. We assume that the time interval between samples is fixed. The choice of the design parameters of the proposed chart and also comparisons between charts are based on statistical properties, such as the average number of samples until a signal. The optimal design parameters of the proposed control chart are obtained. During the optimization procedure, constraints are imposed on the in-control average sample size and on the in-control average run length. In this way, required statistical properties can be assured. Varying some input parameters, the proposed DS np chart is compared with the single-sampling np chart, variable sample size np chart, CUSUM np and EWMA np charts. The comparisons are carried out considering the optimal design for each chart. For the ranges of parameters considered, the DS scheme is the fastest one for the detection of increases of 100% or more in the fraction non-conforming and, moreover, the DS np chart is easy ...

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a method to assess influence in skew-Birnbaum-Saunders regression models, which are an extension based on the skew-normal distribution of the usual Birnbaum -Saunders (BS) regression model.
Abstract: In this paper, we propose a method to assess influence in skew-Birnbaum–Saunders regression models, which are an extension based on the skew-normal distribution of the usual Birnbaum–Saunders (BS) regression model. An interesting characteristic of the new regression model is its capacity to predict extreme percentiles, which is not possible with the BS model. In addition, since the observed likelihood function associated with the new regression model is more complex than that from the usual model, we facilitate the parameter estimation using a type-EM algorithm. Moreover, we employ influence diagnostic tools that consider this algorithm. Finally, a numerical illustration includes a brief simulation study and an analysis of real data in order to show the proposed methodology.

Journal ArticleDOI
TL;DR: A more extensive simulation study is used to further investigate the in-control robustness (to non-normality) of the three different EWMA designs studied by Borror et al.
Abstract: The traditional exponentially weighted moving average (EWMA) chart is one of the most popular control charts used in practice today. The in-control robustness is the key to the proper design and implementation of any control chart, lack of which can render its out-of-control shift detection capability almost meaningless. To this end, Borror et al. [5] studied the performance of the traditional EWMA chart for the mean for i.i.d. data. We use a more extensive simulation study to further investigate the in-control robustness (to non-normality) of the three different EWMA designs studied by Borror et al. [5]. Our study includes a much wider collection of non-normal distributions including light- and heavy-tailed and symmetric and asymmetric bi-modal as well as the contaminated normal, which is particularly useful to study the effects of outliers. Also, we consider two separate cases: (i) when the process mean and standard deviation are both known and (ii) when they are both unknown and estimated from an in-co...
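A minimal version of the kind of in-control robustness check described above: simulate the run length of a standard EWMA chart for the mean under a normal and under a heavy-tailed t(4) distribution (both scaled to unit variance). The smoothing constant and control-limit multiplier below are illustrative, not the exact Borror et al. designs studied in the paper.

```python
# In-control ARL simulation for a standard EWMA chart under two distributions.
import numpy as np

rng = np.random.default_rng(8)

def ewma_run_length(draw, lam=0.1, L=2.7, max_n=100_000):
    limit = L * np.sqrt(lam / (2 - lam))   # asymptotic control limit, sigma = 1
    z = 0.0
    for t in range(1, max_n + 1):
        z = lam * draw() + (1 - lam) * z
        if abs(z) > limit:
            return t
    return max_n

def mean_arl(draw, reps=1000):
    return np.mean([ewma_run_length(draw) for _ in range(reps)])

normal = lambda: rng.standard_normal()
t4 = lambda: rng.standard_t(4) / np.sqrt(2.0)   # t(4) scaled to unit variance
print("ARL0 normal:", mean_arl(normal))
print("ARL0 t(4):  ", mean_arl(t4))             # typically shorter -> more false alarms
```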

Journal ArticleDOI
TL;DR: In this article, the maximum likelihood estimates of the three parameters of the Dagum distribution are determined from samples with type I right and type II doubly censored data, and a probability plot providing a graphical check of the appropriateness of the proposed model for right-censored data is constructed, with details given in the appendix.

Abstract: In this work, we show that the Dagum distribution [3] may be a competitive model for describing data which include censored observations in lifetime and reliability problems. Maximum likelihood estimates of the three parameters of the Dagum distribution are determined from samples with type I right and type II doubly censored data. We perform an empirical analysis using published censored data sets: in certain cases, the Dagum distribution fits the data better than other parametric distributions that are more commonly used in survival and reliability analysis. Graphical comparisons confirm that the Dagum model behaves better than a number of competitive distributions in describing the empirical hazard rate of the analyzed data. A probability plot to provide a graphical check of the appropriateness of the Dagum model for right-censored data is constructed, and the details are given in the appendix. Finally, a simulation study that shows the good performance of the maximum likelihood estimators of the Dagum s...
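To make the censored-data likelihood explicit: with Dagum CDF F(x) = (1 + (x/b)^(−a))^(−p), an uncensored observation contributes log f(t) and a type I right-censored one contributes log S(t) = log(1 − F(t)). The sketch below maximizes this likelihood with a generic optimizer on simulated data; parameter values, starting values and the censoring time are assumptions for illustration, not the paper's analysis.

```python
# Dagum maximum likelihood with type I right censoring.
import numpy as np
from scipy.optimize import minimize

def dagum_logpdf(x, a, b, p):
    z = (x / b) ** (-a)
    return np.log(a * p / b) + (-a - 1) * np.log(x / b) + (-p - 1) * np.log1p(z)

def dagum_logsf(x, a, b, p):
    return np.log1p(-(1.0 + (x / b) ** (-a)) ** (-p))

def neg_loglik(theta, t, delta):
    a, b, p = np.exp(theta)                     # log scale enforces positivity
    return -(np.sum(dagum_logpdf(t[delta == 1], a, b, p))
             + np.sum(dagum_logsf(t[delta == 0], a, b, p)))

rng = np.random.default_rng(9)
u = rng.uniform(size=500)
t_true = 2.0 * (u ** (-1.0 / 3.0) - 1.0) ** (-1.0 / 4.0)   # inverse CDF: a=4, b=2, p=3
c = 4.0                                                     # fixed censoring time
t_obs, delta = np.minimum(t_true, c), (t_true <= c).astype(int)
fit = minimize(neg_loglik, x0=np.log([1.0, 1.0, 1.0]), args=(t_obs, delta))
print(np.exp(fit.x))    # estimates of (a, b, p), roughly (4, 2, 3)
```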

Journal ArticleDOI
TL;DR: In this paper, the impact of imputation model misspecification on the quality of parameter estimates by employing multiple imputation under assumptions of a normal model (MI/NM) with two-level cross-sectional data when values are missing at random on the dependent variable at rates of 10, 30, and 50%.
Abstract: When modeling multilevel data, it is important to accurately represent the interdependence of observations within clusters. Ignoring data clustering may result in parameter misestimation. However, it is not well established to what degree parameter estimates are affected by model misspecification when applying missing data techniques (MDTs) to incomplete multilevel data. We compare the performance of three MDTs with incomplete hierarchical data. We consider the impact of imputation model misspecification on the quality of parameter estimates by employing multiple imputation under assumptions of a normal model (MI/NM) with two-level cross-sectional data when values are missing at random on the dependent variable at rates of 10%, 30%, and 50%. Five criteria are used to compare estimates from MI/NM to estimates from MI assuming a linear mixed model (MI/LMM) and maximum likelihood estimation to the same incomplete data sets. With 10% missing data (MD), techniques performed similarly for fixed-effects estimate...

Journal ArticleDOI
TL;DR: This book is strongly recommended for those interested in the use of asymptotics in statistics and probability problems; it provides a quick and accessible overview of the available results, giving both a basic understanding of their context and references to sources where a more detailed treatment can be found.
Abstract: Asymptotic Theory of Statistics and Probability, by Anirban DasGupta, New York, Springer, 2008, xxvii+722 pp., £55.99 or US$89.95 (hardback), ISBN 978-0-387-75970-8 Unlike many books on asymptotic ...

Journal ArticleDOI
TL;DR: This book by Agung provides practical guidance on time-series data analysis using EViews.
Abstract: Time-series data analysis using EViews, by I. Gusti Ngurah Agung, Chichester, Wiley, 2009, xx+609 pp., £80.00 or US$115 (hardback), ISBN 978-0-470-82367-5 This book provides a practical guidance on...

Journal ArticleDOI
TL;DR: The approximated likelihood is precisely that of a finite-state hidden Markov model (HMM), and the proposed method is easy to implement and can be extended to all kinds of SSMs in a straightforward manner.
Abstract: Nonlinear and non-Gaussian state–space models (SSMs) are fitted to different types of time series. The applications include homogeneous and seasonal time series, in particular earthquake counts, polio counts, rainfall occurrence data, glacial varve data and daily returns on a share. The considered SSMs comprise Poisson, Bernoulli, gamma and Student-t distributions at the observation level. Parameter estimation for the SSMs is carried out using a likelihood approximation that is obtained after discretization of the state space. The approximation can be made arbitrarily accurate, and the approximated likelihood is precisely that of a finite-state hidden Markov model (HMM). The proposed method enables us to apply standard HMM techniques. It is easy to implement and can be extended to all kinds of SSMs in a straightforward manner.
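A sketch of the approximation idea for one simple case: a Poisson observation model whose log-intensity follows a Gaussian AR(1) state. Discretizing the state onto a fine grid turns the model into a finite-state HMM, whose likelihood is computed exactly with the forward algorithm. Grid size, grid range and the AR(1) parameters below are illustrative assumptions, not the paper's settings.

```python
# Likelihood of a Poisson state-space model approximated by state discretization
# (finite-state HMM forward algorithm).
import numpy as np
from scipy.stats import norm, poisson

def approx_loglik(counts, phi, sigma, m=100, grid_half_width=4.0):
    sd_stat = sigma / np.sqrt(1 - phi ** 2)            # stationary sd of the state
    grid = np.linspace(-grid_half_width * sd_stat, grid_half_width * sd_stat, m)
    width = grid[1] - grid[0]
    # transition matrix: P[i, j] ~ Pr(next state in cell j | current state grid[i])
    P = norm.pdf(grid[None, :], loc=phi * grid[:, None], scale=sigma) * width
    P /= P.sum(axis=1, keepdims=True)
    init = norm.pdf(grid, scale=sd_stat) * width
    init /= init.sum()
    loglik, alpha = 0.0, init
    for y in counts:
        alpha = alpha * poisson.pmf(y, np.exp(grid))   # observation update
        c = alpha.sum()
        loglik += np.log(c)
        alpha = (alpha / c) @ P                        # normalize, then propagate
    return loglik

rng = np.random.default_rng(10)
phi, sigma, T = 0.8, 0.5, 200
state = np.zeros(T)
for t in range(1, T):
    state[t] = phi * state[t - 1] + rng.normal(scale=sigma)
counts = rng.poisson(np.exp(state))
print(approx_loglik(counts, phi, sigma))
```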

Journal ArticleDOI
TL;DR: A general hazard model is proposed which accommodates comprehensive families of cure rate models as particular cases, including the model proposed by Berkson and Gage, and the maximum-likelihood-estimation procedure is discussed.
Abstract: Historically, the cure rate model has been used for modeling time-to-event data within which a significant proportion of patients are assumed to be cured of illnesses, including breast cancer, non-Hodgkin lymphoma, leukemia, prostate cancer, melanoma, and head and neck cancer. Perhaps the most popular type of cure rate model is the mixture model introduced by Berkson and Gage [1]. In this model, it is assumed that a certain proportion of the patients are cured, in the sense that they do not present the event of interest during a long period of time and can be found to be immune to the cause of failure under study. In this paper, we propose a general hazard model which accommodates comprehensive families of cure rate models as particular cases, including the model proposed by Berkson and Gage. The maximum-likelihood-estimation procedure is discussed. A simulation study analyzes the coverage probabilities of the asymptotic confidence intervals for the parameters. A real data set on children exposed to HIV by v...
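For reference, the Berkson–Gage mixture model mentioned above has population survival function S(t) = π + (1 − π)·S₀(t), where π is the cured fraction and S₀ is the survival function of the susceptible patients. The tiny sketch below evaluates this function with an exponential S₀ chosen purely for illustration; it is the special case the paper's general hazard model encompasses, not the proposed model itself.

```python
# Berkson-Gage mixture cure survival function S(t) = pi + (1 - pi) * S0(t).
import numpy as np

def mixture_cure_survival(t, pi, rate):
    s0 = np.exp(-rate * np.asarray(t, float))   # survival of the uncured group
    return pi + (1 - pi) * s0

print(mixture_cure_survival([0.0, 1.0, 5.0, 50.0], pi=0.3, rate=0.5))
# tends to the cure fraction 0.3 as t grows
```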

Journal ArticleDOI
TL;DR: In this article, the authors analyzed the leading journals in Neurosciences using quantifiable research assessment measures (RAM), highlights the similarities and differences in alternative RAM, shows that several RAM capture similar performance characteristics of highly cited journals, and shows that some other RAM have low correlations with each other, and hence add significant informational value.
Abstract: The paper analyses the leading journals in Neurosciences using quantifiable Research Assessment Measures (RAM), highlights the similarities and differences in alternative RAM, shows that several RAM capture similar performance characteristics of highly cited journals, and shows that some other RAM have low correlations with each other, and hence add significant informational value. Alternative RAM are discussed for the Thomson Reuters ISI Web of Science database (hereafter ISI). The RAM that are calculated annually or updated daily include the classic 2-year impact factor (2YIF), 5-year impact factor (5YIF), Immediacy (or zero-year impact factor (0YIF)), Eigenfactor score, Article Influence score, C3PO (Citation Performance Per Paper Online), h-index, Zinfluence, PI-BETA (Papers Ignored - By Even The Authors), 2-year and historical Self-citation Threshold Approval Ratings (STAR), Impact Factor Inflation (IFI), and Cited Article Influence (CAI). The RAM are analysed for 26 highly cited journals in the ISI category of Neurosciences. The paper finds that the Eigenfactor score and PI-BETA are not highly correlated with the other RAM scores, so that they convey additional information regarding journal rankings, that Article Influence is highly correlated with some existing RAM, so that it has little informative incremental value, and that CAI has additional informational value to that of Article Influence. Harmonic mean rankings of the 13 RAM criteria for the 26 highly cited journals are also presented. Emphasizing the 2-year impact factor of a journal to the exclusion of other informative RAM criteria is shown to lead to a distorted evaluation of journal performance and influence, especially given the informative value of several other RAM.

Journal ArticleDOI
TL;DR: In this article, the authors propose the one-sided s-EWMA (exponentially weighted moving average) control chart, which is based on a new type of rounding operation, for detecting positive shifts in the mean of a Poisson INAR(1) process.

Abstract: Processes of serially dependent Poisson counts are commonly observed in real-world applications and can often be modeled by the first-order integer-valued autoregressive (INAR) model. For detecting positive shifts in the mean of a Poisson INAR(1) process, we propose the one-sided s-EWMA (exponentially weighted moving average) control chart, which is based on a new type of rounding operation. The s-EWMA chart allows average run lengths (ARLs) to be computed exactly and efficiently with a Markov chain approach. Using an implementation of this procedure for ARL computation, the s-EWMA chart is easily designed, which is demonstrated with a real-data example. Based on an extensive study of ARLs, the out-of-control performance of the chart is analyzed and compared with that of a c chart and a one-sided cumulative sum (CUSUM) chart. We also investigate the robustness of the chart against departures from the assumed Poisson marginal distribution.
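The sketch below simulates a Poisson INAR(1) process via binomial thinning and monitors it with a one-sided EWMA statistic rounded to an integer, which is the general idea behind keeping the chart statistic on the same integer scale as the counts. The specific rounding operation, the Markov chain ARL computation and the chart design of the paper are not reproduced; the smoothing constant, control limit and shift below are purely illustrative assumptions.

```python
# Poisson INAR(1) via binomial thinning, monitored by a rounded one-sided EWMA.
import numpy as np

rng = np.random.default_rng(11)

def inar1(n, alpha, lam, x0=0):
    x = np.empty(n, dtype=int)
    x[0] = x0
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

def rounded_ewma_signal(x, lam_smooth=0.2, limit=6):
    """Return the first index where the rounded one-sided EWMA exceeds `limit`."""
    z = int(x[0])
    for t, xt in enumerate(x):
        z = int(round(lam_smooth * xt + (1 - lam_smooth) * z))  # keep z integer
        if z > limit:
            return t
    return None

counts = inar1(500, alpha=0.5, lam=2.0)          # in-control mean = 2/(1-0.5) = 4
counts[250:] += rng.poisson(2.0, 250)            # upward shift after t = 250
print(rounded_ewma_signal(counts))
```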