
Showing papers in "Austrian Journal of Statistics in 2019"


Journal ArticleDOI
TL;DR: A full Bayesian bootstrap prior ANOVA test function is developed within the framework of parametric empirical Bayes and is then used for variable screening in a multiclass classification scenario.
Abstract: In this paper, the one-way ANOVA model and its application in Bayesian multi-class variable selection are considered. A full Bayesian bootstrap prior ANOVA test function is developed within the framework of parametric empirical Bayes. The test function is then used for variable screening in a multiclass classification scenario. The performance of the proposed method is compared with that of the existing classical ANOVA method on simulated and real-life gene expression datasets. The results reveal a lower false positive rate and higher sensitivity for the proposed method.

7 citations
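
As a point of reference for the comparison reported above, the classical ANOVA screening baseline can be sketched in a few lines. This is a minimal illustration of the comparator method, not the paper's Bayesian bootstrap prior test function; the data and the 0.05 threshold are invented.

# Classical one-way ANOVA screening for multiclass data: keep the features
# whose F-test p-value falls below a threshold.
import numpy as np
from scipy.stats import f_oneway

def anova_screen(X, y, alpha=0.05):
    # X: (n_samples, n_features) expression matrix; y: class labels
    classes = np.unique(y)
    keep = []
    for j in range(X.shape[1]):
        groups = [X[y == c, j] for c in classes]
        _, p = f_oneway(*groups)
        if p < alpha:
            keep.append(j)
    return keep

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 100))
y = np.repeat([0, 1, 2], 20)
X[y == 2, :5] += 1.5                # make the first five features informative
print(anova_screen(X, y))           # mostly indices 0..4, plus false positives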


Journal ArticleDOI
TL;DR: An alternative average design-based mean squared error estimator is proposed which always produces positive estimates and accounts for the bias component usually present in synthetic estimators.
Abstract: The use of the area-specific design-based mean squared error (MSE) to measure the uncertainty associated with synthetic and direct estimators is appealing, since the same model-free criterion is applied to both. However, the small sample size often makes it difficult to obtain a reliable estimator of the area-specific design-based MSE. Moreover, the area-specific design-based MSE estimator may yield undesirable negative values under certain circumstances. The existing solution to the small-sample problem is to consider the average design-based MSE, the average being taken over the available small areas; this, however, does not resolve the problem of negative estimates. An alternative average design-based MSE estimator is proposed which always produces positive estimates. Simulation shows that this estimator performs better than the existing average design-based MSE estimators, as it always produces positive estimates and accounts for the bias component usually present in synthetic estimators.

4 citations
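
The abstract does not spell out the proposed estimator, but the classical average design-based MSE estimator it improves on is easy to state. The sketch below, with invented numbers, shows that standard form and why negative values can occur; the paper's always-positive alternative is not reproduced here.

import numpy as np

def avg_design_mse(synth, direct, var_direct):
    """Classical average design-based MSE of a synthetic estimator:
    mean over areas of (synthetic - direct)^2 - var(direct).
    Design-unbiased for the average MSE, but nothing prevents it
    from being negative."""
    return np.mean((synth - direct) ** 2 - var_direct)

rng = np.random.default_rng(0)
m = 15                                     # number of small areas (invented)
theta = rng.normal(100.0, 5.0, m)          # true area means
direct = theta + rng.normal(0.0, 8.0, m)   # unbiased but noisy direct estimates
synth = np.full(m, direct.mean())          # stable but biased synthetic estimate
print(avg_design_mse(synth, direct, var_direct=64.0))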


Journal ArticleDOI
TL;DR: In this article, a new lifetime distribution is proposed by compounding the gamma and Lindley distributions; its construction can be interpreted from the viewpoint of reliability analysis and Bayesian inference, and the distribution has decreasing and unimodal hazard rate shapes.
Abstract: In this paper, we propose a new lifetime distribution by compounding the gamma and Lindley distributions. Its construction can be interpreted from the viewpoint of reliability analysis and Bayesian inference. Moreover, the distribution has decreasing and unimodal hazard rate shapes. Several properties of the distribution are obtained, involving characteristics of the (reverse) hazard rate function, quantiles, moments, extreme order statistics, and some stochastic order relations. Estimation of the distribution parameters by several methods is discussed, and the performance of the estimators is evaluated in a simulation study. Estimation of the stress-strength parameter is also investigated. The usefulness of the distribution relative to other models is illustrated by fitting two failure data sets and applying goodness-of-fit measures.

4 citations
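
The abstract does not make the compounding mechanism explicit. One plausible reading, shown below purely as an illustration, draws a gamma rate from a Lindley mixing distribution; the paper's actual construction may well differ.

import numpy as np

def rlindley(n, lam, rng):
    """Lindley(lam) draws via the standard mixture representation:
    Exp(lam) w.p. lam/(1+lam), Gamma(2, lam) otherwise."""
    mix = rng.random(n) < lam / (1 + lam)
    return np.where(mix, rng.exponential(1 / lam, n),
                    rng.gamma(2, 1 / lam, n))

def rgamma_lindley(n, shape, lam, rng):
    """Hypothetical gamma-Lindley compound: gamma lifetimes whose rate is
    itself Lindley-distributed (one reading of the construction)."""
    rate = rlindley(n, lam, rng)
    return rng.gamma(shape, 1 / rate, n)

rng = np.random.default_rng(7)
x = rgamma_lindley(10_000, shape=2.0, lam=1.5, rng=rng)
print(x.mean(), np.median(x))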


Journal ArticleDOI
TL;DR: In this article, two measures of reliability functions, namely R(t)=P(X>t) and P=P(X>Y), are considered.
Abstract: Two measures of reliability functions, namely R(t)=P(X>t) and P=P(X>Y), are considered.

3 citations
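
For concreteness, the two measures named above admit simple nonparametric estimates. The sketch below computes both from simulated exponential samples; the distributions and the evaluation point t = 2 are invented.

import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(2.0, 500)       # strength / lifetime sample
y = rng.exponential(1.0, 500)       # stress sample

R_hat = np.mean(x > 2.0)                      # estimate of R(t) at t = 2
P_hat = np.mean(x[:, None] > y[None, :])      # estimate of P(X > Y), all pairs
print(R_hat, P_hat)     # theory: R(2) = exp(-1) ~ 0.37, P(X > Y) = 2/3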


Journal ArticleDOI
TL;DR: In this article, preliminary test estimators (PTEs) and a preliminary test confidence interval (PTCI) are derived for the Moore and Bilikam (1978) family of lifetime distributions.
Abstract: We consider two measures of reliability functions, namely R(t)=P(X>t) and P=P(X>Y), for the Moore and Bilikam (1978) family of lifetime distributions, which covers fourteen distributions as specific cases. For record data from this family, preliminary test estimators (PTEs) and a preliminary test confidence interval (PTCI) based on the uniformly minimum variance unbiased estimator (UMVUE), the maximum likelihood estimator (MLE), and the empirical Bayes estimator (EBE) are obtained for the parameter. The bias and mean square error (MSE) (exact and asymptotic) of the proposed estimators are derived to study their relative efficiency, and simulation studies establish that the PTEs perform better than the ordinary UMVUE, MLE, and EBE. We also obtain the coverage probability (CP) and the expected length of the PTCI of the parameter and establish that the confidence intervals based on the MLE are more precise. An application of the ordinary preliminary test estimator is also considered. To the best of the authors' knowledge, no PTEs have been derived for R(t) and P based on records, and thus we define improved PTEs based on the MLE and UMVUE of R(t) and P. A comparative simulation study of the different estimation methods establishes that the PTEs perform better than the ordinary UMVUE and MLE.

3 citations
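
The preliminary-test idea itself is easy to illustrate. The sketch below applies it to an ordinary i.i.d. exponential sample (a member of the Moore and Bilikam family) rather than to record data, so it shows the mechanism only, not the paper's estimators; the guess theta0 and sample are invented.

import numpy as np
from scipy.stats import chi2

def pte_exponential_mean(x, theta0, alpha=0.05):
    """Preliminary test estimator of an exponential mean: return the
    guessed value theta0 if the test of H0: mean = theta0 accepts,
    and the MLE (sample mean) otherwise."""
    n = len(x)
    stat = 2 * x.sum() / theta0               # ~ chi-square(2n) under H0
    lo, hi = chi2.ppf([alpha / 2, 1 - alpha / 2], 2 * n)
    return theta0 if lo < stat < hi else x.mean()

rng = np.random.default_rng(11)
x = rng.exponential(1.1, 30)                  # true mean 1.1, guess 1.0
print(pte_exponential_mean(x, theta0=1.0))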


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of optimal inspection times for the progressive type-I interval censoring scheme where uncertainty in the process is governed by the two-parameter Rayleigh distribution.
Abstract: In this paper, we consider the problem of optimal inspection times for the progressive type-I interval censoring scheme where uncertainty in the process is governed by the two-parameter Rayleigh distribution. We introduce several optimality criteria and determine the optimum inspection times accordingly. The effect of the number of inspections and of the choice of optimally spaced inspection times is investigated on the basis of the asymptotic relative efficiencies of the maximum likelihood estimates of the parameters. Further, we discuss the optimal progressive type-I interval censoring plan when the inspection times and the expected proportions of total failures in the experiment are under control.

3 citations
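
To make the setup concrete, the sketch below writes the interval-censored log-likelihood under one common parameterization of the two-parameter (location-scale) Rayleigh distribution; the inspection times t would then be chosen to optimize a criterion built on the resulting Fisher information. Progressive removals are omitted for brevity, the parameterization is an assumption, and all numbers are invented.

import numpy as np

def F(t, mu, lam):
    """One common two-parameter Rayleigh CDF: 1 - exp(-lam (t - mu)^2), t > mu."""
    return np.where(t > mu, 1.0 - np.exp(-lam * (t - mu) ** 2), 0.0)

def interval_loglik(params, t, n_fail, n_surv):
    """Log-likelihood of n_fail[i] failures observed in (t[i-1], t[i]]
    plus n_surv units still alive at the last inspection time."""
    mu, lam = params
    edges = np.concatenate(([0.0], t))
    cell = F(edges[1:], mu, lam) - F(edges[:-1], mu, lam)
    return np.sum(n_fail * np.log(cell)) + n_surv * np.log(1 - F(t[-1], mu, lam))

t = np.array([1.0, 2.0, 3.0, 4.0])       # invented inspection times
n_fail = np.array([5, 18, 24, 9])        # invented failure counts per interval
print(interval_loglik((0.5, 0.2), t, n_fail, n_surv=44))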


Journal ArticleDOI
TL;DR: In this article, a new point estimation method based on the Kullback-Leibler divergence of survival functions (KLS), measuring the distance between an empirical and a prescribed survival function, is used to estimate the parameter of the Lindley distribution.
Abstract: A new point estimation method based on the Kullback-Leibler divergence of survival functions (KLS), measuring the distance between an empirical and a prescribed survival function, is used to estimate the parameter of the Lindley distribution. Simulation studies have been carried out to compare the performance of the proposed estimator with the corresponding least squares (LS), maximum likelihood (ML), and maximum product spacing (MPS) methods of estimation.

3 citations
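
A discretized version of the KLS idea can be sketched directly: choose the Lindley parameter minimizing a Kullback-Leibler-type distance between the empirical and model survival functions evaluated at the data points. The exact objective in the paper may differ in form; the sample below is simulated.

import numpy as np
from scipy.optimize import minimize_scalar

def S_lindley(x, theta):
    """Lindley survival function."""
    return (1 + theta * x / (1 + theta)) * np.exp(-theta * x)

def kls_objective(theta, x):
    x = np.sort(x)
    n = len(x)
    S_emp = 1.0 - np.arange(1, n + 1) / (n + 1.0)   # empirical survival
    S_mod = S_lindley(x, theta)
    # discretized KL-type distance between the two survival curves
    return np.sum(S_emp * np.log(S_emp / S_mod) - (S_emp - S_mod))

# Lindley(theta) sample via its exponential/gamma mixture representation
rng = np.random.default_rng(5)
theta_true, n = 1.5, 200
mix = rng.random(n) < theta_true / (1 + theta_true)
x = np.where(mix, rng.exponential(1 / theta_true, n),
             rng.gamma(2, 1 / theta_true, n))
res = minimize_scalar(kls_objective, bounds=(0.05, 10.0), args=(x,),
                      method="bounded")
print(res.x)                     # should land near theta_true = 1.5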


Journal ArticleDOI
TL;DR: In this article, the authors performed linear regression analysis on a continuous aggregate outcome from a Bayesian perspective using a Markov chain Monte Carlo algorithm (Gibbs sampling) for systolic blood pressure.
Abstract: The main purpose of this paper is to perform linear regression analysis on a continuous aggregate outcome from a Bayesian perspective using a Markov chain Monte Carlo algorithm (Gibbs sampling). In many situations, data are partially available due to privacy and confidentiality of the subjects in the sample. So, in this study, the vector of outcomes, Y, is realistically assumed to be missing and is partially available through summary statistics, sum(Y), aggregated over groups of subjects, while the covariate values, X, are available for all subjects in the sample. The results of the simulation study highlight both the efficiency of the regression parameter estimates and the predictive power of the proposed model compared with classical methods. The proposed approach is fully implemented in an example regarding systolic blood pressure for illustrative purposes.

2 citations
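
Under one reading of this setup, the group sums satisfy sum(Y)_g | beta, sigma^2 ~ N(sum(X)_g beta, n_g sigma^2), which yields a two-block Gibbs sampler. The sketch below implements that reading with a flat prior on beta and a Jeffreys-type prior on sigma^2; the paper's model may carry more structure, and the data are simulated.

import numpy as np

rng = np.random.default_rng(42)
n_groups, group_size, p = 40, 25, 3
beta_true = np.array([2.0, -1.0, 0.5])

# Simulate individual data, then discard Y, keeping only its group sums
X = rng.normal(size=(n_groups * group_size, p))
y = X @ beta_true + rng.normal(0.0, 1.0, n_groups * group_size)
g = np.repeat(np.arange(n_groups), group_size)
sum_y = np.bincount(g, weights=y)                      # observed aggregates
sum_X = np.vstack([np.bincount(g, weights=X[:, j]) for j in range(p)]).T
n_g = np.bincount(g).astype(float)

sigma2, draws = 1.0, []
for it in range(2000):
    # beta | sigma2: weighted least squares on the aggregated rows
    W = 1.0 / (n_g * sigma2)
    V = np.linalg.inv(sum_X.T @ (W[:, None] * sum_X))
    m = V @ (sum_X.T @ (W * sum_y))
    beta = rng.multivariate_normal(m, V)
    # sigma2 | beta: inverse-gamma update from the group-sum residuals
    ss = np.sum((sum_y - sum_X @ beta) ** 2 / n_g)
    sigma2 = 1.0 / rng.gamma(n_groups / 2.0, 2.0 / ss)
    if it >= 500:
        draws.append(beta)
print(np.mean(draws, axis=0))                          # close to beta_true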


Journal ArticleDOI
TL;DR: The modified maximum likelihood estimators (MMLEs) for the mean and standard deviation are a pair of statistics with good robustness properties; they are introduced into control charting and the advantages of using them are investigated.
Abstract: The Shewhart control chart is the most popular and widely used statistical process control tool for monitoring a process. It is developed under the assumption of an independent and normally distributed process. For controlling the process mean and standard deviation, robust estimators of these parameters can be better alternatives, since charts based on them are more resistant to moderate changes in the process distribution. The modified maximum likelihood estimator (MMLE) for the mean and standard deviation is a pair of statistics with good robustness properties. The authors introduce these measures into the control charting process and investigate the advantages of using them. A modification of the MMLE-based mean and its standard deviation is introduced to improve industrial process performance. Using Monte Carlo simulation, the performance of this chart is compared with the classical control chart. Performance is also studied in terms of the average run length (ARL).

2 citations
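
The Monte Carlo ARL machinery used in such comparisons is compact. The MMLE formulas are not reproduced in the abstract, so the sketch below plugs in a robust (median, scaled-MAD) pair as a stand-in for the estimated process parameters; all data are simulated.

import numpy as np

def arl(mu0, sd0, shift=0.0, n=5, L=3.0, reps=2000, seed=0):
    """Monte Carlo average run length of an X-bar chart with given limits."""
    rng = np.random.default_rng(seed)
    ucl = mu0 + L * sd0 / np.sqrt(n)
    lcl = mu0 - L * sd0 / np.sqrt(n)
    runs = np.empty(reps)
    for r in range(reps):
        t = 0
        while True:
            t += 1
            xbar = rng.normal(shift, 1.0, n).mean()   # true process: N(shift, 1)
            if xbar > ucl or xbar < lcl:
                break
        runs[r] = t
    return runs.mean()

phase1 = np.random.default_rng(1).normal(0.0, 1.0, 200)   # in-control data
mu_hat = np.median(phase1)                       # robust stand-ins for the
sd_hat = 1.4826 * np.median(np.abs(phase1 - mu_hat))  # estimated mean and sd
print(arl(mu_hat, sd_hat))     # near 370 in control; rerun with shift=1.0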


Journal ArticleDOI
TL;DR: Two methods to sample uniform subtrees from graphs using Metropolis-Hastings algorithms are presented, together with simulation studies that confirm the theoretical convergence results by monitoring the convergence of the Markov chains to the equilibrium distribution.
Abstract: This article presents two methods to sample uniform subtrees from graphs using Metropolis-Hastings algorithms. One method is an independent Metropolis-Hastings sampler and the other is a type of add-and-delete MCMC. In addition to the theoretical contributions, we present simulation studies which confirm the theoretical convergence results on our methods by monitoring the convergence of our Markov chains to the equilibrium distribution.

1 citation
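
The add-and-delete move admits a short self-contained sketch: the state is a subtree of a fixed graph, a proposal either attaches a new leaf edge or removes one, and the Metropolis-Hastings ratio of proposal counts makes the uniform distribution over subtrees the target. Boundary details (for instance, single-vertex subtrees, excluded below) follow the paper, not this sketch.

import random

def addable(V, graph):
    """Edges joining a subtree vertex u to an outside vertex v."""
    return [(u, v) for u in V for v in graph[u] if v not in V]

def leaf_edges(V, E):
    """(edge, leaf endpoint) pairs; removing such an edge keeps a tree."""
    deg = {x: 0 for x in V}
    for e in E:
        for x in e:
            deg[x] += 1
    return [(e, x) for e in E for x in e if deg[x] == 1]

def mh_step(V, E, graph, rng):
    """One add-or-delete move targeting the uniform distribution over
    subtrees of `graph` that have at least one edge."""
    if rng.random() < 0.5:                        # propose adding a leaf edge
        A = addable(V, graph)
        if not A:
            return V, E
        u, v = rng.choice(A)
        V2, E2 = V | {v}, E | {frozenset((u, v))}
        # accept with q(T'->T)/q(T->T') = |A(T)| / |D(T')|
        if rng.random() < len(A) / len(leaf_edges(V2, E2)):
            return V2, E2
        return V, E
    if len(E) <= 1:                               # keep at least one edge
        return V, E
    D = leaf_edges(V, E)                          # propose deleting a leaf edge
    e, leaf = rng.choice(D)
    V2, E2 = V - {leaf}, E - {e}
    # accept with q(T'->T)/q(T->T') = |D(T)| / |A(T')|
    if rng.random() < len(D) / len(addable(V2, graph)):
        return V2, E2
    return V, E

# Diamond graph: 4 vertices, 5 edges; every subtree should be equally likely.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
rng = random.Random(0)
V, E = {0, 1}, {frozenset((0, 1))}
counts = {}
for _ in range(200_000):
    V, E = mh_step(V, E, graph, rng)
    key = frozenset(E)
    counts[key] = counts.get(key, 0) + 1
print(sorted(counts.values()))        # roughly equal visit counts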


Journal ArticleDOI
TL;DR: Two recursive filtering algorithms, the optimized particle filter and the Viterbi algorithm, which allow the joint estimation of states and parameters of continuous-time stochastic volatility models such as the Cox-Ingersoll-Ross and Heston models, are described and implemented.
Abstract: In this paper, we describe and implement two recursive filtering algorithms, the optimized particle filter and the Viterbi algorithm, which allow the joint estimation of states and parameters of continuous-time stochastic volatility models, such as the Cox-Ingersoll-Ross and Heston models. In practice, good parameter estimates are required so that the models are able to generate accurate forecasts. To achieve these objectives, the proposed algorithms were implemented using daily empirical data from the time series of S&P 500 stock exchange index returns. The proposed methodology facilitates computation of the marginal likelihood of the states, allows the reconstruction of unknown states in a suitable way, and yields reliable estimates of the parameters. To measure the quality of estimation of the algorithms, we used the root mean square error and the relative standard deviation as measures of goodness of fit. The estimated errors are insignificant for the analyzed data and the two models considered. We also measured the execution times of the algorithms, demonstrating that the Viterbi algorithm runs faster than the optimized particle filter.
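
A plain bootstrap particle filter on an Euler-discretized Heston-type variance process conveys the state-filtering half of the method. It is not the paper's optimized filter, the parameters (kappa, theta, sigma) are held fixed rather than estimated, and the returns below are simulated stand-ins for the S&P 500 series.

import numpy as np

def particle_filter(y, kappa, theta, sigma, dt=1 / 252, N=2000, seed=0):
    """Bootstrap particle filter for the latent variance v_t given returns y."""
    rng = np.random.default_rng(seed)
    v = np.full(N, theta)                         # variance particles
    v_hat, loglik = np.empty(len(y)), 0.0
    for t, yt in enumerate(y):
        # propagate: Euler step of dv = kappa*(theta - v) dt + sigma*sqrt(v) dW
        v = np.abs(v + kappa * (theta - v) * dt
                   + sigma * np.sqrt(v * dt) * rng.normal(size=N))
        # weight by the return likelihood y_t ~ N(0, v dt)
        w = np.exp(-0.5 * yt ** 2 / (v * dt)) / np.sqrt(2 * np.pi * v * dt)
        loglik += np.log(w.mean())
        v = v[rng.choice(N, N, p=w / w.sum())]    # multinomial resampling
        v_hat[t] = v.mean()
    return v_hat, loglik

rng = np.random.default_rng(1)
y = rng.normal(0.0, np.sqrt(0.04 / 252), 250)     # simulated daily returns
v_hat, ll = particle_filter(y, kappa=3.0, theta=0.04, sigma=0.4)
print(v_hat[-1], ll)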

Journal ArticleDOI
TL;DR: A class of adaptive designs is formulated in two stages for clinical trials to favour the better-performing treatment for further allocation in an efficient way, and is compared with existing allocation designs.
Abstract: A class of adaptive designs is formulated in two stages for clinical trials to favour the better-performing treatment for further allocation in an efficient way. The first stage of the allocation consists of randomizing subjects to each treatment arm with equal probability and performing a test of equality of treatment effects. The resulting p value and the available estimate of a treatment difference measure are then used to assign the incoming second-stage subjects. Considering binary and normal responses, several exact and asymptotic properties of the proposed allocation are thoroughly examined and compared with existing allocation designs.
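
The exact allocation function is not given in the abstract, so the sketch below uses an invented but representative stage-two rule for binary responses: the probability of assigning the first treatment grows with its estimated advantage and shrinks toward 1/2 when the equality test's p-value is large.

import numpy as np
from scipy.stats import norm

def stage2_prob(x_a, x_b):
    """Probability of assigning treatment A to a stage-two subject.

    x_a, x_b: binary stage-one responses on treatments A and B.
    """
    pa, pb = x_a.mean(), x_b.mean()
    se = np.sqrt(pa * (1 - pa) / len(x_a) + pb * (1 - pb) / len(x_b))
    z = (pa - pb) / max(se, 1e-9)
    pval = 2 * norm.sf(abs(z))                 # test of equal effects
    # shrink toward 1/2 when the evidence of a difference is weak
    return 0.5 + (1 - pval) * (norm.cdf(z) - 0.5)

rng = np.random.default_rng(2)
x_a = rng.binomial(1, 0.7, 40)     # stage-one responses, treatment A
x_b = rng.binomial(1, 0.5, 40)     # stage-one responses, treatment B
print(stage2_prob(x_a, x_b))       # > 0.5: favour the better-performing arm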

Journal ArticleDOI
TL;DR: The dlsem package for R, implementing inference functionalities for DLSEMs, is presented; its use is illustrated through an example on simulated data and a real-world application assessing the impact of agricultural research expenditure on multiple dimensions in Europe.
Abstract: In this paper, an extension of linear Markovian structural causal models is introduced, called distributed-lag linear structural equation models (DLSEMs), where each factor of the joint probability distribution is a distributed-lag linear regression with constrained lag shapes. DLSEMs account for temporal delays in the dependence relationships among the variables and allow the assessment of dynamic causal effects. As such, they represent a suitable methodology to investigate the effect of an external impulse on a multidimensional system through time. In this paper, we present the dlsem package for R, implementing inference functionalities for DLSEMs. The use of the package is illustrated through an example on simulated data and a real-world application aiming at assessing the impact of agricultural research expenditure on multiple dimensions in Europe.
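
The dlsem package itself is written in R; as a language-neutral illustration of the core ingredient of each DLSEM factor, the sketch below fits an unconstrained distributed-lag linear regression on simulated data (dlsem additionally constrains the lag shapes, which is not reproduced here).

import numpy as np

def distributed_lag_ols(y, x, L):
    """Regress y_t on an intercept and x_t, x_{t-1}, ..., x_{t-L}."""
    idx = np.arange(L, len(y))
    X = np.column_stack([np.ones(len(idx))] +
                        [x[idx - k] for k in range(L + 1)])
    beta, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
    return beta                       # [intercept, lag 0, ..., lag L]

rng = np.random.default_rng(8)
x = rng.normal(size=300)
y = np.convolve(x, [0.5, 1.0, 0.5])[:300] + rng.normal(0.0, 0.1, 300)
print(distributed_lag_ols(y, x, L=4).round(2))  # recovers 0.5, 1.0, 0.5, 0, 0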

Journal ArticleDOI
TL;DR: In this paper, a multivariate version of a univariate kurtosis-based normality test from the literature is proposed, and its power is investigated through Monte Carlo simulation with different significance levels, dimensions, and sample sizes.
Abstract: In this paper, we first transform a multivariate normal random vector into a random vector with elements that are approximately independent standard normal random variables. We then propose a multivariate version of a univariate kurtosis-based normality test from the literature. Power is investigated through Monte Carlo simulation with different significance levels, dimensions, and sample sizes. To assess the validity and accuracy of the new tests, we carry out a comparative study with several other existing tests, selecting certain types of symmetric and asymmetric alternative distributions.
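
The transform-then-test recipe can be sketched directly: whiten the sample so its coordinates are approximately independent standard normal under the null, apply a univariate kurtosis test per coordinate, and combine. The Bonferroni combination below is an illustrative choice, not necessarily the paper's.

import numpy as np
from scipy.stats import kurtosistest

def mv_kurtosis_test(X):
    """Whiten, test each coordinate's kurtosis, return a combined p-value."""
    Xc = X - X.mean(axis=0)
    w, U = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = Xc @ U @ np.diag(w ** -0.5) @ U.T        # approx. independent N(0,1)
    pvals = np.array([kurtosistest(Z[:, j]).pvalue for j in range(Z.shape[1])])
    return min(pvals.min() * Z.shape[1], 1.0)    # Bonferroni combination

rng = np.random.default_rng(4)
print(mv_kurtosis_test(rng.normal(size=(200, 3))))          # large under H0
print(mv_kurtosis_test(rng.standard_t(3, size=(200, 3))))   # small: heavy tails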

Journal ArticleDOI
TL;DR: A technique for eliciting prior hyperparameters based on the well-known multinomial-Dirichlet model is provided, together with inferences on odds ratios and interaction parameters, illustrated on data from ovarian cancer patients.
Abstract: Elicitation of the prior plays a very important role in the Bayesian paradigm, especially when dealing with rare-disease problems in the medical field, because we do not always get enough data to draw valid inferences. Since the subject of study is the human population, one cannot experiment with people's health. The prior distribution supports the final results with additional information gained from experts. If an appropriate expert is not available, past data can be used to obtain information about the prior and its hyperparameters. The present paper provides a technique for eliciting prior hyperparameters based on the well-known multinomial-Dirichlet model. Since the main focus is on medical data problems, inferences on odds ratios and interaction parameters are also provided. The numerical illustration is based on a real dataset from Israel on patients with ovarian cancer. Although the details are given in the context of ovarian cancer patients, the development in the paper applies equally well to any such disease.
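
The multinomial-Dirichlet machinery underlying the elicitation is conjugate, so posterior inference on an odds ratio reduces to simulation from a Dirichlet. The counts and flat hyperparameters below are invented for illustration; in the paper the hyperparameters would be elicited from past data.

import numpy as np

counts = np.array([12, 5, 30, 60])   # invented 2x2 table, flattened as
                                     # [case&exposed, control&exposed,
                                     #  case&unexposed, control&unexposed]
prior = np.array([1.0, 1.0, 1.0, 1.0])   # hyperparameters to be elicited
rng = np.random.default_rng(9)

p = rng.dirichlet(prior + counts, size=10_000)  # posterior draws of cell probs
odds_ratio = (p[:, 0] * p[:, 3]) / (p[:, 1] * p[:, 2])
print(np.median(odds_ratio), np.percentile(odds_ratio, [2.5, 97.5]))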

Journal ArticleDOI
TL;DR: The possibility of improving the current escalation and de-escalation algorithm of the isotonic design is investigated, and a stopping rule is proposed to avoid selecting a severely toxic dose as the MTD.
Abstract: One of the most challenging tasks in clinical trials is finding the maximum tolerated dose (MTD) to be tested in the next phase. Assurance of patient safety and recommendation of a suitable dose for phase II are the main objectives of a phase I trial. The MTD can be identified through various approaches. A non-parametric approach, known as the isotonic design, is explored in our study. The design relies on the monotonicity assumption of the dose-toxicity relationship. Usually, the number of patients in a trial has an impact on the adequacy of the dose recommendation. This paper is a humble attempt to assess the impact of cohort size and the total number of cohorts on the isotonic design. It investigates the possibility of improving the current algorithm of the isotonic design for escalation and de-escalation. The paper also proposes a stopping rule to avoid selecting a severely toxic dose as the MTD. The simulation study shows that, along with the total number of cohorts, cohort size also has an appreciable effect on MTD selection. The proposed modification of the algorithm is also found to work satisfactorily in the majority of cases.
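
The computational core of the isotonic design is the pool-adjacent-violators algorithm (PAVA), which enforces the monotone dose-toxicity assumption on the observed rates; the dose whose isotonic estimate is closest to the target toxicity rate is the MTD candidate. The sketch below shows that core with invented counts; the escalation bookkeeping and the proposed stopping rule are not reproduced.

import numpy as np

def pava(rates, weights):
    """Weighted isotonic (non-decreasing) regression by pooling violators."""
    blocks = [[r, w] for r, w in zip(rates, weights)]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:          # violator: pool the pair
            r1, w1 = blocks[i]
            r2, w2 = blocks.pop(i + 1)
            blocks[i] = [(r1 * w1 + r2 * w2) / (w1 + w2), w1 + w2]
            i = max(i - 1, 0)
        else:
            i += 1
    out = []
    for r, w in blocks:
        out.extend([r] * int(w))    # unit input weights: one value per dose
    return out

tox = np.array([0, 1, 2, 1, 5])     # invented toxicities per dose (note the dip)
n = np.array([6, 6, 6, 6, 6])       # patients per dose
iso = pava(list(tox / n), [1] * 5)  # isotonic toxicity-rate estimates
mtd = int(np.argmin(np.abs(np.array(iso) - 0.30)))   # target rate 30%
print(iso, "MTD index:", mtd)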