Author

Bo Henry Lindqvist

Other affiliations: University of Oslo, SINTEF, Norwegian Institute of Technology
Bio: Bo Henry Lindqvist is an academic researcher at the Norwegian University of Science and Technology. He has contributed to research topics including renewal theory and Markov chains, has an h-index of 24, and has co-authored 114 publications receiving 2,229 citations. His previous affiliations include the University of Oslo and SINTEF.


Papers
Journal ArticleDOI
TL;DR: In this paper, the problem of estimating the proportion of true null hypotheses, π0, in a multiple-hypothesis set-up is considered; the tests are based on observed p-values.
Abstract: Summary. We consider the problem of estimating the proportion of true null hypotheses, π0, in a multiple-hypothesis set-up. The tests are based on observed p-values. We first review published estimators based on the estimator suggested by Schweder and Spjøtvoll. Then we derive new estimators based on nonparametric maximum likelihood estimation of the p-value density, restricting to decreasing and convex decreasing densities. The estimators of π0 are all derived under the assumption of independent test statistics. Their performance under dependence is investigated in a simulation study. We find that the estimators are relatively robust with respect to the assumption of independence and also work well for test statistics with moderate dependence.
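The Schweder–Spjøtvoll idea underlying these estimators can be sketched in a few lines: under the null, p-values are Uniform(0, 1), so the fraction of p-values exceeding a cut-off λ estimates (1 − λ)·π0. The mixture weights and the Beta alternative below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def schweder_spjotvoll_pi0(pvalues, lam=0.5):
    """Estimate the proportion of true null hypotheses pi0.

    Null p-values are Uniform(0, 1), so the count of p-values above a
    threshold lambda is roughly n * (1 - lambda) * pi0; dividing gives
    the estimator.
    """
    pvalues = np.asarray(pvalues)
    n = len(pvalues)
    return np.sum(pvalues > lam) / ((1.0 - lam) * n)

rng = np.random.default_rng(0)
# 800 true nulls (uniform p-values) mixed with 200 non-nulls whose
# p-values are concentrated near zero (Beta(0.5, 10) is an assumption)
p = np.concatenate([rng.uniform(size=800), rng.beta(0.5, 10.0, size=200)])
print(schweder_spjotvoll_pi0(p))  # should be close to the true pi0 = 0.8
```

The choice of λ trades bias (non-null p-values leaking above λ inflate the estimate) against variance; the paper's nonparametric maximum likelihood estimators avoid fixing a single λ.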

302 citations

Journal ArticleDOI
TL;DR: Screening data with tumor measurements can provide population-based estimates of tumor growth and screen test sensitivity directly linked to tumor size, and there is a large variation in breast cancer tumor growth, with faster growth among younger women.
Abstract: Knowledge of tumor growth is important in the planning and evaluation of screening programs, clinical trials, and epidemiological studies. Studies of tumor growth rates in humans are usually based on small and selected samples. In the present study based on the Norwegian Breast Cancer Screening Program, tumor growth was estimated from a large population using a new estimating procedure/model. A likelihood-based estimating procedure was used, where both tumor growth and the screen test sensitivity were modeled as continuously increasing functions of tumor size. The method was applied to cancer incidence and tumor measurement data from 395,188 women aged 50 to 69 years. Tumor growth varied considerably between subjects, with 5% of tumors taking less than 1.2 months to grow from 10 mm to 20 mm in diameter, and another 5% taking more than 6.3 years. The mean time a tumor needed to grow from 10 mm to 20 mm in diameter was estimated as 1.7 years, increasing with age. The screen test sensitivity was estimated to increase sharply with tumor size, rising from 26% at 5 mm to 91% at 10 mm. Compared with previously used Markov models for tumor progression, the applied model gave considerably higher model fit (85% increased predictive power) and provided estimates directly linked to tumor size. Screening data with tumor measurements can provide population-based estimates of tumor growth and screen test sensitivity directly linked to tumor size. There is a large variation in breast cancer tumor growth, with faster growth among younger women.

179 citations

Journal Article
TL;DR: In this article, the authors present a framework where the observed events are modeled as marked point processes, with marks labeling the types of events; the emphasis is more on modeling than on statistical inference.
Abstract: We review basic modeling approaches for failure and maintenance data from repairable systems. In particular we consider imperfect repair models, defined in terms of virtual age processes, and the trend-renewal process which extends the nonhomogeneous Poisson process and the renewal process. In the case where several systems of the same kind are observed, we show how observed covariates and unobserved heterogeneity can be included in the models. We also consider various approaches to trend testing. Modern reliability databases usually contain information on the type of failure, the type of maintenance and so forth in addition to the failure times themselves. Basing our work on recent literature we present a framework where the observed events are modeled as marked point processes, with marks labeling the types of events. Throughout the paper the emphasis is more on modeling than on statistical inference.

177 citations

Journal ArticleDOI
TL;DR: A framework is presented where the observed events are modeled as marked point processes, with marks labeling the types of events; the emphasis is more on modeling than on statistical inference.
Abstract: We review basic modeling approaches for failure and maintenance data from repairable systems. In particular we consider imperfect repair models, defined in terms of virtual age processes, and the trend-renewal process which extends the nonhomogeneous Poisson process and the renewal process. In the case where several systems of the same kind are observed, we show how observed covariates and unobserved heterogeneity can be included in the models. We also consider various approaches to trend testing. Modern reliability databases usually contain information on the type of failure, the type of maintenance and so forth in addition to the failure times themselves. Basing our work on recent literature we present a framework where the observed events are modeled as marked point processes, with marks labeling the types of events. Throughout the paper the emphasis is more on modeling than on statistical inference.

176 citations

Journal ArticleDOI
TL;DR: The trend-renewal process (TRP) is a time-transformed renewal process having both the ordinary renewal process and the nonhomogeneous Poisson process as special cases.
Abstract: The most commonly used models for the failure process of a repairable system are nonhomogeneous Poisson processes, corresponding to minimal repairs, and renewal processes, corresponding to perfect repairs. This article introduces and studies a more general model for recurrent events, the trend-renewal process (TRP). The TRP is a time-transformed renewal process having both the ordinary renewal process and the nonhomogeneous Poisson process as special cases. Parametric inference in the TRP model is studied, with emphasis on the case in which several systems are observed in the presence of a possible unobserved heterogeneity between systems.
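The defining property of the TRP makes simulation straightforward: the transformed times Λ(T₁), Λ(T₂), … form an ordinary renewal process with inter-arrival distribution F, so one draws renewals on the transformed scale and inverts Λ. The power-law trend Λ(t) = a·t^b and the Gamma renewal distribution below are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

def simulate_trp(n_events, a=1.0, b=1.5, renewal_draw=None, rng=None):
    """Simulate event times from a trend-renewal process TRP(F, lambda).

    The transformed times Lambda(T_1), Lambda(T_2), ... form a renewal
    process with inter-arrival distribution F.  Here Lambda(t) = a * t**b
    (a Weibull-type cumulative trend) and, by default, F is a Gamma
    distribution with mean 1 -- both are assumptions for illustration.
    """
    rng = rng or np.random.default_rng()
    if renewal_draw is None:
        renewal_draw = lambda: rng.gamma(shape=2.0, scale=0.5)  # mean 1
    times, cum = [], 0.0
    for _ in range(n_events):
        cum += renewal_draw()                 # renewal step on transformed scale
        times.append((cum / a) ** (1.0 / b))  # invert Lambda to get real time
    return np.array(times)

t = simulate_trp(5, rng=np.random.default_rng(1))
```

The two special cases from the abstract fall out directly: taking F exponential with mean 1 gives a nonhomogeneous Poisson process with intensity λ(t) = a·b·t^(b−1), while taking Λ(t) = t gives an ordinary renewal process.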

128 citations


Cited by
More filters
Journal ArticleDOI
TL;DR: The philosophy and design of the limma package is reviewed, summarizing both new and historical features, with an emphasis on recent enhancements and features that have not been previously described.
Abstract: limma is an R/Bioconductor software package that provides an integrated solution for analysing data from gene expression experiments. It contains rich features for handling complex experimental designs and for information borrowing to overcome the problem of small sample sizes. Over the past decade, limma has been a popular choice for gene discovery through differential expression analyses of microarray and high-throughput PCR data. The package contains particularly strong facilities for reading, normalizing and exploring such data. Recently, the capabilities of limma have been significantly expanded in two important directions. First, the package can now perform both differential expression and differential splicing analyses of RNA sequencing (RNA-seq) data. All the downstream analysis tools previously restricted to microarray data are now available for RNA-seq as well. These capabilities allow users to analyse both RNA-seq and microarray data with very similar pipelines. Second, the package is now able to go past the traditional gene-wise expression analyses in a variety of ways, analysing expression profiles in terms of co-regulated sets of genes or in terms of higher-order expression signatures. This provides enhanced possibilities for biological interpretation of gene expression differences. This article reviews the philosophy and design of the limma package, summarizing both new and historical features, with an emphasis on recent enhancements and features that have not been previously described.

22,147 citations

Journal ArticleDOI
TL;DR: The hierarchical model of Lonnstedt and Speed (2002) is developed into a practical approach for general microarray experiments with arbitrary numbers of treatments and RNA samples and the moderated t-statistic is shown to follow a t-distribution with augmented degrees of freedom.
Abstract: The problem of identifying differentially expressed genes in designed microarray experiments is considered. Lonnstedt and Speed (2002) derived an expression for the posterior odds of differential expression in a replicated two-color experiment using a simple hierarchical parametric model. The purpose of this paper is to develop the hierarchical model of Lonnstedt and Speed (2002) into a practical approach for general microarray experiments with arbitrary numbers of treatments and RNA samples. The model is reset in the context of general linear models with arbitrary coefficients and contrasts of interest. The approach applies equally well to both single channel and two color microarray experiments. Consistent, closed form estimators are derived for the hyperparameters in the model. The estimators proposed have robust behavior even for small numbers of arrays and allow for incomplete data arising from spot filtering or spot quality weights. The posterior odds statistic is reformulated in terms of a moderated t-statistic in which posterior residual standard deviations are used in place of ordinary standard deviations. The empirical Bayes approach is equivalent to shrinkage of the estimated sample variances towards a pooled estimate, resulting in far more stable inference when the number of arrays is small. The use of moderated t-statistics has the advantage over the posterior odds that the number of hyperparameters which need to be estimated is reduced; in particular, knowledge of the non-null prior for the fold changes is not required. The moderated t-statistic is shown to follow a t-distribution with augmented degrees of freedom. The moderated t inferential approach extends to accommodate tests of composite null hypotheses through the use of moderated F-statistics. The performance of the methods is demonstrated in a simulation study. Results are presented for two publicly available data sets.
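The variance-shrinkage step can be written in a few lines. A minimal sketch, assuming the prior degrees of freedom d0 and prior variance s0² are given (in limma they are estimated from all genes via the closed-form hyperparameter estimators the abstract mentions):

```python
import numpy as np

def moderated_t(betahat, s2, df, d0, s0_sq, v=1.0):
    """Moderated t-statistic via empirical-Bayes variance shrinkage.

    The gene-wise sample variance s2 (on df degrees of freedom) is
    shrunk towards the prior value s0_sq carrying d0 prior degrees of
    freedom:
        s2_tilde = (d0 * s0_sq + df * s2) / (d0 + df)
    The resulting statistic follows a t-distribution with df + d0
    degrees of freedom; v is the unscaled variance of betahat.
    """
    s2_tilde = (d0 * s0_sq + df * s2) / (d0 + df)
    return betahat / np.sqrt(s2_tilde * v), df + d0

# A gene with an unusually small sample variance is pulled back towards
# the prior, stabilising its t-statistic (values here are made up).
t_stat, dof = moderated_t(betahat=1.5, s2=0.1, df=4, d0=4, s0_sq=1.0)
```

With d0 = 0 this reduces to the ordinary t-statistic; as d0 grows, all genes share the pooled variance s0², which is why inference stays stable with few arrays.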

11,864 citations

Journal ArticleDOI
TL;DR: A review of P. Billingsley's monograph Convergence of Probability Measures (Wiley, 1968).
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4″. 117s.

5,689 citations

Journal ArticleDOI
29 Jun 1997
TL;DR: It is proved that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit, and experimental results for binary-symmetric channels and Gaussian channels demonstrate that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved.
Abstract: We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay-Neal (1995)) codes are recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are "very good", in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binary-symmetric channel but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance of Gallager codes is almost as close to the Shannon limit as that of turbo codes.
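The basic object in both code families is a parity-check matrix H: a word x is a codeword exactly when Hx = 0 (mod 2), and decoding works from the syndrome Hr of the received word r. The toy sketch below uses the tiny (7,4) Hamming code rather than the large sparse Gallager/MN matrices of the paper, and simple syndrome decoding rather than the sum-product algorithm, purely to illustrate the parity-check mechanics.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j is the binary
# representation of j+1, so for a single-bit error the syndrome spells
# out the error position directly.
H = np.array([[(j + 1) >> k & 1 for j in range(7)] for k in range(3)])

def correct_single_error(received):
    """Correct at most one flipped bit using the syndrome H @ r mod 2."""
    syndrome = H @ received % 2
    pos = int(syndrome @ [1, 2, 4])   # read the syndrome as a binary number
    fixed = received.copy()
    if pos:                           # nonzero syndrome -> bit pos is flipped
        fixed[pos - 1] ^= 1
    return fixed

codeword = np.zeros(7, dtype=int)     # the all-zero word is always a codeword
received = codeword.copy()
received[4] = 1                       # one channel error
print(correct_single_error(received)) # -> [0 0 0 0 0 0 0]
```

Gallager and MN codes use the same Hx = 0 structure but with very long, very sparse H, which is what makes iterative sum-product decoding on the check graph practical.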

3,842 citations

Book ChapterDOI
01 Jan 2011
TL;DR: Weak convergence methods in metric spaces were studied in this article, with applications sufficient to show their power and utility; the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4, which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable."
Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an

3,554 citations