Showing papers in "Journal of the American Statistical Association in 1998"



Journal ArticleDOI
TL;DR: This book can be used by researchers and graduate students in machine learning, data mining, and knowledge discovery who wish to understand techniques of feature extraction, construction, and selection for data preprocessing and to solve large, real-world problems.
Abstract: From the Publisher: The book can be used by researchers and graduate students in machine learning, data mining, and knowledge discovery who wish to understand techniques of feature extraction, construction, and selection for data preprocessing and to solve large, real-world problems. The book can also serve as a reference for those who are conducting research on feature extraction, construction, and selection and are ready to meet the exciting challenges ahead of us.

953 citations


Journal ArticleDOI
TL;DR: In this paper, a Bayesian analysis of linear regression models that can account for skewed error distributions with fat tails is presented; the proposed skewing procedure leaves tail behavior unaffected, so skewness and tail thickness are controlled by separate parameters.
Abstract: We consider a Bayesian analysis of linear regression models that can account for skewed error distributions with fat tails. The latter two features are often observed characteristics of empirical datasets, and we formally incorporate them in the inferential process. A general procedure for introducing skewness into symmetric distributions is first proposed. Even though this allows for a great deal of flexibility in distributional shape, tail behavior is not affected. Applying this skewness procedure to a Student t distribution, we generate a “skewed Student” distribution, which displays both flexible tails and possible skewness, each entirely controlled by a separate scalar parameter. The linear regression model with a skewed Student error term is the main focus of the article. We first characterize existence of the posterior distribution and its moments, using standard improper priors and allowing for inference on skewness and tail parameters. For posterior inference with this model, we suggest ...
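To make the skewing procedure concrete, here is a minimal sketch of one standard inverse-scale-factor construction with skewness parameter γ applied to a Student t density; the function name and parameter defaults are illustrative, not taken from the article.

```python
# Sketch of a skewness procedure for a symmetric density f, assuming the
# inverse-scale-factor construction with skewness parameter gamma > 0:
#   p(e | gamma) = 2/(gamma + 1/gamma) * [ f(e/gamma) if e >= 0 else f(gamma*e) ]
# Applied to a Student t density, this yields a "skewed Student" density whose
# tails are governed by nu and whose skewness is governed by gamma.
import numpy as np
from scipy import stats

def skewed_student_pdf(e, nu=5.0, gamma=2.0):
    """Density of a skewed Student distribution: nu controls tails, gamma skewness."""
    f = stats.t(df=nu).pdf
    scale = 2.0 / (gamma + 1.0 / gamma)
    return scale * np.where(e >= 0, f(e / gamma), f(gamma * e))

x = np.linspace(-6, 6, 7)
print(skewed_student_pdf(x))           # right-skewed for gamma > 1
print(skewed_student_pdf(x, gamma=1))  # gamma = 1 recovers the symmetric t
```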

829 citations


Journal ArticleDOI
TL;DR: A Bayesian approach for finding classification and regression tree (CART) models by having the prior induce a posterior distribution that will guide the stochastic search toward more promising CART models.
Abstract: In this article we put forward a Bayesian approach for finding classification and regression tree (CART) models. The two basic components of this approach consist of prior specification and stochastic search. The basic idea is to have the prior induce a posterior distribution that will guide the stochastic search toward more promising CART models. As the search proceeds, such models can then be selected with a variety of criteria, such as posterior probability, marginal likelihood, residual sum of squares or misclassification rates. Examples are used to illustrate the potential superiority of this approach over alternative methods.

749 citations


Journal ArticleDOI
Keming Yu1, M. C. Jones1
TL;DR: In this paper, nonparametric regression quantile estimation by kernel-weighted local linear fitting is studied. One estimator is based on localizing the characterization of a regression quantile as the minimizer of E{ρp(Y − a) | X = x}, where ρp is the appropriate check function.
Abstract: In this article we study nonparametric regression quantile estimation by kernel weighted local linear fitting. Two such estimators are considered. One is based on localizing the characterization of a regression quantile as the minimizer of E{ρp(Y − a) | X = x}, where ρp is the appropriate “check” function. The other follows by inverting a local linear conditional distribution estimator and involves two smoothing parameters, rather than one. Our aim is to present fully operational versions of both approaches and to show that each works quite well; although either might be used in practice, we have a particular preference for the second. Our automatic smoothing parameter selection method is novel; the main regression quantile smoothing parameters are chosen by rule-of-thumb adaptations of state-of-the-art methods for smoothing parameter selection for regression mean estimation. The techniques are illustrated by application to two datasets and compared in simulations.
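A minimal sketch of the first approach: the check function and a kernel-weighted local linear quantile fit. It assumes a Gaussian kernel, a hand-picked bandwidth, and a generic optimizer, rather than the authors' implementation or their rule-of-thumb bandwidth selection.

```python
import numpy as np
from scipy.optimize import minimize

def check(u, p):
    """Check function rho_p(u) = u * (p - I(u < 0))."""
    return u * (p - (u < 0))

def local_linear_quantile(x0, x, y, p=0.5, h=0.5):
    """Kernel-weighted local linear regression quantile at x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)  # Gaussian kernel weights
    def obj(ab):
        a, b = ab
        return np.sum(w * check(y - a - b * (x - x0), p))
    return minimize(obj, x0=[np.quantile(y, p), 0.0], method="Nelder-Mead").x[0]

rng = np.random.default_rng(0)
x = rng.uniform(0, 3, 400)
y = np.sin(x) + 0.3 * rng.standard_normal(400)
print(local_linear_quantile(1.5, x, y, p=0.9))  # ~ sin(1.5) + 0.3 * z_0.9
```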

662 citations


Journal ArticleDOI
TL;DR: In this article, the authors use predictive residuals to construct a test statistic for detecting threshold nonlinearity in a vector time series and propose a procedure for building a multivariate threshold model.
Abstract: Threshold autoregressive models in which the process is piecewise linear in the threshold space have received much attention in recent years. In this article I use predictive residuals to construct a test statistic for detecting threshold nonlinearity in a vector time series and propose a procedure for building a multivariate threshold model. The thresholds and the model are selected jointly based on the Akaike information criterion. The finite-sample performance of the proposed test is studied by simulation. The modeling procedure is then used to study arbitrage in security markets and results in a threshold cointegration between logarithms of futures contracts and spot prices of a security after adjusting for the cost of carrying the contracts. In this particular application, thresholds are determined in part by the transaction costs. I also apply the proposed procedure to U.S. monthly interest rates and two river flow series of Iceland.
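The core of threshold model building, searching over candidate threshold values, can be sketched in the univariate two-regime case as follows. This is a simplified least squares version of the idea: the article selects thresholds and model jointly by AIC and uses predictive residuals for the nonlinearity test.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulate a two-regime threshold AR(1): slope 0.8 below the threshold 0, -0.5 above.
y = np.zeros(500)
for t in range(1, 500):
    phi = 0.8 if y[t - 1] <= 0.0 else -0.5
    y[t] = phi * y[t - 1] + rng.standard_normal()

x_lag, x_now = y[:-1], y[1:]

def pooled_ssr(r):
    """Total SSR from fitting separate AR(1) lines in each regime split at r."""
    ssr = 0.0
    for mask in (x_lag <= r, x_lag > r):
        X = np.column_stack([np.ones(mask.sum()), x_lag[mask]])
        beta = np.linalg.lstsq(X, x_now[mask], rcond=None)[0]
        ssr += np.sum((x_now[mask] - X @ beta) ** 2)
    return ssr

# Search candidate thresholds over interior sample quantiles of the lagged series.
cands = np.quantile(x_lag, np.linspace(0.15, 0.85, 71))
r_hat = min(cands, key=pooled_ssr)
print(f"estimated threshold: {r_hat:.3f}")  # should be near the true value 0
```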

660 citations


Journal ArticleDOI
TL;DR: The concept of GDF offers a unified framework under which complex and highly irregular modeling procedures can be analyzed in the same way as classical linear models and many difficult problems can be solved easily.
Abstract: In the theory of linear models, the concept of degrees of freedom plays an important role. This concept is often used for measurement of model complexity, for obtaining an unbiased estimate of the error variance, and for comparison of different models. I have developed a concept of generalized degrees of freedom (GDF) that is applicable to complex modeling procedures. The definition is based on the sum of the sensitivity of each fitted value to perturbation in the corresponding observed value. The concept is nonasymptotic in nature and does not require analytic knowledge of the modeling procedures. The concept of GDF offers a unified framework under which complex and highly irregular modeling procedures can be analyzed in the same way as classical linear models. By using this framework, many difficult problems can be solved easily. For example, one can now measure the number of observations used in a variable selection process. Different modeling procedures, such as a tree-based regression and a ...
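A minimal Monte Carlo sketch of the perturbation idea behind GDF, assuming a hand-picked perturbation scale tau: perturb each observation slightly, refit, and sum the estimated sensitivities of fitted values to their own perturbations. For ordinary least squares this recovers the trace of the hat matrix, i.e., the classical degrees of freedom.

```python
import numpy as np

def gdf(fit, y, tau=0.5, n_mc=200, seed=0):
    """Monte Carlo estimate of generalized degrees of freedom for a fitting
    procedure `fit`: perturb y, refit, and regress each fitted value on its
    own perturbation; GDF is the sum of those slopes."""
    rng = np.random.default_rng(seed)
    deltas = tau * rng.standard_normal((n_mc, len(y)))
    fits = np.array([fit(y + d) for d in deltas])
    dc = deltas - deltas.mean(axis=0)   # center, as in a regression with intercept
    fc = fits - fits.mean(axis=0)
    slopes = (dc * fc).sum(axis=0) / (dc ** 2).sum(axis=0)
    return slopes.sum()

# Sanity check on OLS, where GDF should equal the number of parameters.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.standard_normal((50, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.standard_normal(50)
ols_fit = lambda yy: X @ np.linalg.lstsq(X, yy, rcond=None)[0]
print(gdf(ols_fit, y))  # ~ 3.0, the trace of the hat matrix
```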

525 citations


Journal ArticleDOI
TL;DR: This edited volume surveys statistical methods for heavy-tailed data, including estimating the intensity of long-range dependence in finite- and infinite-variance time series and numerical tables for the maximally skewed stable distributions.
Abstract: Part 1, Applications: heavy tailed probability distributions in the World Wide Web, M.E. Crovella et al; self-similarity and heavy tails - structural modelling of network traffic, W. Willinger et al; heavy tails in high-frequency financial data, U.A. Muller et al; stable paretian modelling in finance - some empirical and theoretical aspects, S. Mittnik et al; risk management and quantile estimation, F. Bassi et al.
Part 2, Time series: analyzing stable time series, R.J. Adler et al; inference for linear processes with stable noise, M. Calder and R.A. Davis; on estimating the intensity of long-range dependence in finite and infinite variance time series, M.S. Taqqu and V. Teverovsky; why non-linearities can ruin the heavy-tailed modeller's day, S.I. Resnick; periodogram estimates from heavy-tailed data, T. Mikosch; Bayesian inference for time series with infinite variance stable innovations, N. Ravishanker and Z. Qiou.
Part 3, Heavy tail estimation: Hill, bootstrap and jackknife estimators for heavy tails, O.V. Pictet et al; characteristic function based estimation of stable distribution parameters, S.M. Kogan and D.B. Williams.
Part 4, Regression: bootstrapping signs and permutations for regression with heavy tailed errors - a robust resampling, R. LePage et al; linear regression with stable disturbances, J.H. McCulloch.
Part 5, Signal processing: deviation from normality in statistical signal processing - parameter estimation with alpha-stable distributions, P. Tsakalides and C.L. Nikias; statistical modelling and receiver design for multi-user communication networks, G.A. Tsihrintzis.
Part 6, Model structures: subexponential distributions, C.M. Goldie and C. Kluppelberg; structure of stationary stable processes, J. Rosinski; tail behaviour of some shot noise processes, G. Samorodnitsky.
Part 7, Numerical procedures: numerical approximation of the symmetric stable distribution and density, J.H. McCulloch; table of the maximally-skewed stable distributions, J.H. McCulloch and D.B. Panton; multivariate stable distributions - approximation, estimation, simulation and identification, J.P. Nolan; univariate stable distributions - parametrizations and software, J.P. Nolan.

500 citations


Journal ArticleDOI
TL;DR: The authors propose a class of augmented inverse probability of response weighted estimators that are consistent and asymptotically normal (CAN) for estimating β* when the response probabilities can be parametrically modeled and a CAN estimator exists.
Abstract: We consider inference about the parameter β* indexing the conditional mean of a vector of correlated outcomes given a vector of explanatory variables when some of the outcomes are missing in a subsample of the study and the probability of response depends on both observed and unobserved data values; that is, nonresponse is nonignorable. We propose a class of augmented inverse probability of response weighted estimators that are consistent and asymptotically normal (CAN) for estimating β* when the response probabilities can be parametrically modeled and a CAN estimator exists. The proposed estimators do not require full specification of a parametric likelihood, and their computation does not require numerical integration. Our estimators can be viewed as an extension of generalized estimating equation estimators that allows for nonignorable nonresponse. We show that our class essentially consists of all CAN estimators of β*. We also show that the asymptotic variance of the optimal estimator in our ...
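To illustrate the flavor of augmented inverse probability weighting, here is a sketch simplified to estimating a scalar mean under missingness that depends only on an observed covariate (an ignorable setting, unlike the article's nonignorable one), with a parametric logistic model for the response probabilities. All variable names and models are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.standard_normal(n)
y = 2.0 + 1.5 * x + rng.standard_normal(n)    # full-data mean of y is 2.0
p_resp = 1.0 / (1.0 + np.exp(-(0.5 + x)))     # response more likely for large x
r = (rng.random(n) < p_resp).astype(float)    # r = 1 if y is observed

# Parametric response-probability model and an outcome regression on responders.
pi_hat = LogisticRegression().fit(x[:, None], r).predict_proba(x[:, None])[:, 1]
obs = r == 1
m_hat = LinearRegression().fit(x[obs][:, None], y[obs]).predict(x[:, None])

# Augmented IPW estimator of the mean: IPW term plus an augmentation term
# that restores efficiency and protects against misspecification.
aipw = np.mean(r * y / pi_hat - (r - pi_hat) / pi_hat * m_hat)
print(f"complete-case mean: {y[obs].mean():.3f}, AIPW estimate: {aipw:.3f}")
```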

470 citations


Journal ArticleDOI
TL;DR: This edited volume on survey measurement and process quality covers the path from theoretical concept to survey question, along with questionnaire design, data collection, post-survey processing, and quality assessment and control.
Abstract: Partial table of contents:
QUESTIONNAIRE DESIGN: From Theoretical Concept to Survey Question (J. Hox); Designing Rating Scales for Effective Measurement in Surveys (J. Krosnick & L. Fabrigar).
DATA COLLECTION: Developing a Speech Recognition Application for Survey Research (B. Blyth); Children as Respondents: Methods for Improving Data Quality (J. Scott).
POST SURVEY PROCESSING AND OPERATIONS: Integrated Control Systems for Survey Processing (J. Bethlehem).
QUALITY ASSESSMENT AND CONTROL: Continuous Quality Improvement in Statistical Agencies (D. Morganstein & D. Marker).
ERROR EFFECTS ON ESTIMATION, ANALYSES, AND INTERPRETATION: Categorical Data Analysis and Misclassification (J. Kuha & C. Skinner).
Index.

461 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of detecting features, such as minefields or seismic faults, in spatial point processes with substantial clutter is considered, using model-based clustering based on a mixture model for the process in which features generate points according to highly linear multivariate normal densities.
Abstract: We consider the problem of detecting features, such as minefields or seismic faults, in spatial point processes when there is substantial clutter. We use model-based clustering based on a mixture model for the process, in which features are assumed to generate points according to highly linear multivariate normal densities, and the clutter arises according to a spatial Poisson process. Nonlinear features are represented by several densities, giving a piecewise linear representation. Hierarchical model-based clustering provides a first estimate of the features, and this is then refined using the EM algorithm. The number of features is estimated from an approximation to its posterior distribution. The method gives good results for the minefield and seismic fault problems. Software to implement it is available on the World Wide Web.

Journal ArticleDOI
TL;DR: In this paper, a class of models for an additive decomposition of groups of curves stratified by crossed and nested factors is introduced, and the model parameters are estimated using a highly efficient implementation of the EM algorithm for restricted maximum likelihood (REML) estimation based on a preliminary eigenvector decomposition.
Abstract: We introduce a class of models for an additive decomposition of groups of curves stratified by crossed and nested factors, generalizing smoothing splines to such samples by associating them with a corresponding mixed-effects model. The models are also useful for imputation of missing data and exploratory analysis of variance. We prove that the best linear unbiased predictors (BLUPs) from the extended mixed-effects model correspond to solutions of a generalized penalized regression where smoothing parameters are directly related to variance components, and we show that these solutions are natural cubic splines. The model parameters are estimated using a highly efficient implementation of the EM algorithm for restricted maximum likelihood (REML) estimation based on a preliminary eigenvector decomposition. Variability of computed estimates can be assessed with asymptotic techniques or with a novel hierarchical bootstrap resampling scheme for nested mixed-effects models. Our methods are applied to me...

Journal ArticleDOI
TL;DR: A class of orthogonal Latin hypercubes that preserves orthogonality among columns is proposed; it can also facilitate nonparametric fitting procedures, because one can select good space-filling designs within the class of orthogonal Latin hypercubes according to selection criteria.
Abstract: Latin hypercubes have been frequently used in conducting computer experiments. In this paper, a class of orthogonal Latin hypercubes that preserves orthogonality among columns is proposed. Applying an orthogonal Latin hypercube design to a computer experiment benefits the data analysis in two ways. First, it retains the orthogonality of traditional experimental designs. The estimates of linear effects of all factors are uncorrelated not only with each other, but also with the estimates of all quadratic effects and bilinear interactions. Second, it can facilitate nonparametric fitting procedures, because one can select good space-filling designs within the class of orthogonal Latin hypercubes according to selection criteria.
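A small sketch of what the orthogonality conditions mean, using a randomly generated Latin hypercube for contrast. Constructing designs that satisfy these conditions exactly is the subject of the article; the code below only states and checks them.

```python
import numpy as np

def random_latin_hypercube(n, k, rng):
    """n-run, k-factor Latin hypercube with centered, equally spaced levels."""
    levels = np.arange(n) - (n - 1) / 2.0
    return np.column_stack([rng.permutation(levels) for _ in range(k)])

rng = np.random.default_rng(0)
D = random_latin_hypercube(9, 3, rng)

# Orthogonality properties the proposed class guarantees by construction:
# (1) zero correlation between columns (linear effect estimates uncorrelated),
# (2) zero correlation of each column with squares and bilinear interactions.
lin = D.T @ D                           # off-diagonals are 0 for an orthogonal LH
quad = D.T @ (D ** 2)                   # all entries 0 for an orthogonal LH
bilin = D.T @ (D[:, [0]] * D[:, [1]])   # 0 against the column-1 x column-2 term
print(lin, quad, bilin, sep="\n")       # a random LH satisfies these only approximately
```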

BookDOI
TL;DR: This volume surveys the contributions of Ragnar Frisch to economics and econometrics, including his influence on macroeconomic planning and policy in Norway.
Abstract: Introduction. List of contributors.
Part I, Ragnar Frisch and his Contributions to Economics: 1. Ragnar Frisch at the University of Oslo (Jens C. Andvig and Tore Thonstad); 2. Ragnar Frisch and the foundation of the Econometric Society and Econometrica (Olav Bjerkholt); 3. The contributions of Ragnar Frisch to economics and econometrics (John S. Chipman).
Part II, Utility Measurement: 4. Nonparametric estimation of exact consumer surplus and deadweight loss (Jerry A. Hausman and Whitney K. Newey); 5. Consumer demand and intertemporal allocations: Engle, Slutsky and Frisch (Richard Blundell).
Part III, Production Theory: 6. Production functions: the search for identification (Zvi Griliches and Jacques Mairesse); 7. Investment and growth (Dale W. Jorgenson).
Part IV, Microeconomic Policy: 8. Evaluating the Welfare State (James J. Heckman and Jeffrey Smith); 9. Frisch, Hotelling and the marginal-cost pricing controversy (Jean-Jacques Laffont).
Part V, Econometric Methods: 10. Scientific explanations in econometrics (Bernt P. Stigum); 11. An autoregressive distributed-lag modelling approach to cointegration analysis (M. Hashem Pesaran and Yongcheol Shin); 12. Econometric issues related to errors in variables in financial models (G. S. Maddala); 13. Statistical analysis of some nonstationary time series (Soren Johansen).
Part VI, Macrodynamics: 14. Frisch's vision and explanation of the trade-cycle phenomenon: his connection with Wicksell, Akerman and Schumpeter (Bjorn Thalberg); 15. Ragnar Frisch's conception of the business cycle (Lawrence R. Klein); 16. Business cycles: real facts or fallacies? (Gunnar Bardsen, Paul G. Fisher and Ragnar Nymoen).
Part VII, Macroeconomic Planning: 17. The influence of Ragnar Frisch on macroeconomic planning and policy in Norway (Petter Jakob Bjerve); 18. How Frisch saw in the 1960s the contribution of economists to development planning (E. Malinvaud); 19. On the need for macroeconomic planning in market economies: three examples from the European Monetary Union project (A. J. Hughes Hallett).
Author index. Subject index.

Journal ArticleDOI
TL;DR: In this paper, a self-organizing filter and smoother for the general nonlinear non-Gaussian state-space model is proposed, which is defined by augmenting the state vector with the unknown parameters of the original state space model.
Abstract: A self-organizing filter and smoother for the general nonlinear non-Gaussian state-space model is proposed. An expanded state-space model is defined by augmenting the state vector with the unknown parameters of the original state-space model. The state of the augmented state-space model, and hence the state and the parameters of the original state-space model, are estimated simultaneously by either a non-Gaussian filter/smoother or a Monte Carlo filter/smoother. In contrast to maximum likelihood estimation of model parameters in ordinary state-space modeling, for which the recursive filter computation has to be done many times, model parameter estimation in the proposed self-organizing filter/smoother is achieved with only two passes of the recursive filter and smoother operations. Examples such as automatic tuning of dispersion and shape parameters, adaptation to changes in the amplitude of a signal in seismic data, state estimation for a nonlinear state-space model with unknown parameters, and seasonal adjustment with a nonlinear model with changing variance parameters are shown to exemplify the usefulness of the proposed method.
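A minimal Monte Carlo (bootstrap) filter sketch of the state-augmentation idea, assuming a linear Gaussian toy model with one unknown observation-noise parameter. The small jitter on the parameter particles is a common practical fix for sample depletion and is our addition here, not part of the article's method.

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma_sys, sigma_obs = 200, 0.1, 0.5
x_true = np.cumsum(sigma_sys * rng.standard_normal(T))  # random-walk state
y = x_true + sigma_obs * rng.standard_normal(T)         # noisy observations

# Augment the state with the unknown parameter log(sigma_obs), then run a
# single pass of a particle filter over the augmented state.
N = 5000
state = rng.normal(0.0, 1.0, N)
log_sig = np.log(rng.uniform(0.1, 2.0, N))
for t in range(T):
    state = state + sigma_sys * rng.standard_normal(N)    # propagate state
    log_sig = log_sig + 0.005 * rng.standard_normal(N)    # tiny jitter (see note)
    sig = np.exp(log_sig)
    w = np.exp(-0.5 * ((y[t] - state) / sig) ** 2) / sig  # Gaussian likelihood
    idx = rng.choice(N, size=N, p=w / w.sum())            # resample particles
    state, log_sig = state[idx], log_sig[idx]

print(f"filtered estimate of sigma_obs: {np.exp(log_sig).mean():.3f}")  # ~ 0.5
```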

Journal ArticleDOI
TL;DR: This paper develops asymptotically median unbiased estimators by inverting quantile functions of regression-based parameter stability test statistics, computed under the constant-parameter null, and applies these results to an unobserved components model of trend growth in postwar U.S. per capita gross domestic product.
Abstract: This article considers inference about the variance of coefficients in time-varying parameter models with stationary regressors. The Gaussian maximum likelihood estimator (MLE) has a large point mass at 0. We thus develop asymptotically median unbiased estimators and asymptotically valid confidence intervals by inverting quantile functions of regression-based parameter stability test statistics, computed under the constant-parameter null. These estimators have good asymptotic relative efficiencies for small to moderate amounts of parameter variability. We apply these results to an unobserved components model of trend growth in postwar U.S. per capita gross domestic product. The MLE implies that there has been no change in the trend growth rate, whereas the upper range of the median-unbiased point estimates implies that the annual trend growth rate has fallen by 0.9% per annum since the 1950s.


Journal ArticleDOI
TL;DR: In this article, the authors describe a framework based on the concept of Markov chain regeneration, which allows adaptation to occur infinitely often but does not disturb the stationary distribution of the chain or the consistency of sample path averages.
Abstract: Markov chain Monte Carlo (MCMC) is used for evaluating expectations of functions of interest under a target distribution π. This is done by calculating averages over the sample path of a Markov chain having π as its stationary distribution. For computational efficiency, the Markov chain should be rapidly mixing. This sometimes can be achieved only by careful design of the transition kernel of the chain, on the basis of a detailed preliminary exploratory analysis of π. An alternative approach might be to allow the transition kernel to adapt whenever new features of π are encountered during the MCMC run. However, if such adaptation occurs infinitely often, then the stationary distribution of the chain may be disturbed. We describe a framework, based on the concept of Markov chain regeneration, which allows adaptation to occur infinitely often but does not disturb the stationary distribution of the chain or the consistency of sample path averages.
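For reference, here is a minimal non-adaptive random-walk Metropolis sampler computing sample-path averages exactly as described above; the article's contribution concerns when a tuning constant like the proposal scale below could legitimately be re-tuned during the run without disturbing the stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pi(x):
    """Log of an (unnormalized) target density: a standard normal here."""
    return -0.5 * x * x

# Plain random-walk Metropolis: expectations under pi are estimated by
# averages over the chain's sample path.
n, step = 50_000, 2.4
chain = np.empty(n)
x = 0.0
for i in range(n):
    prop = x + step * rng.standard_normal()
    if np.log(rng.random()) < log_pi(prop) - log_pi(x):  # accept/reject
        x = prop
    chain[i] = x

print(chain.mean(), (chain ** 2).mean())  # ~0 and ~1, i.e. E[x] and E[x^2] under pi
```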

Journal ArticleDOI
Brani Vidakovic1
TL;DR: A wavelet shrinkage by coherent Bayesian inference in the wavelet domain is proposed and the methods are tested on standard Donoho-Johnstone test functions.
Abstract: Wavelet shrinkage, the method proposed by the seminal work of Donoho and Johnstone, is a disarmingly simple and efficient way of denoising data. Shrinkage of wavelet coefficients has been proposed under several optimality criteria. In this article a wavelet shrinkage by coherent Bayesian inference in the wavelet domain is proposed. The methods are tested on standard Donoho-Johnstone test functions.
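A sketch of the classical Donoho-Johnstone shrinkage that the article takes as its starting point (not the Bayesian rule it proposes), using the PyWavelets package and the universal threshold sigma * sqrt(2 log n).

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
n = 1024
t = np.linspace(0, 1, n)
signal = np.sin(8 * np.pi * t) * (t > 0.3)      # a simple test signal
noisy = signal + 0.3 * rng.standard_normal(n)

# Soft-threshold the detail coefficients at the universal threshold,
# with the noise scale estimated from the finest-level coefficients.
coeffs = pywt.wavedec(noisy, "db8")
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
lam = sigma * np.sqrt(2 * np.log(n))
coeffs[1:] = [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db8")

print(f"noisy MSE: {np.mean((noisy - signal) ** 2):.4f}  "
      f"denoised MSE: {np.mean((denoised - signal) ** 2):.4f}")
```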

Journal ArticleDOI
TL;DR: In this paper, the observed Kth nearest neighbor distances are modeled as a mixture distribution, the parameters of which are estimated by a simple EM algorithm, which allows for detection of generally shaped features that need not be path connected.
Abstract: We consider the problem of detecting features in spatial point processes in the presence of substantial clutter. One example is the detection of minefields using reconnaissance aircraft images that identify many objects that are not mines. Our solution uses Kth nearest neighbor distances of points in the process to classify them as clutter or otherwise. The observed Kth nearest neighbor distances are modeled as a mixture distribution, the parameters of which are estimated by a simple EM algorithm. This method allows for detection of generally shaped features that need not be path connected. In the minefield example this method yields high detection and low false-positive rates. Another application, to outlining seismic faults, is considered with some success. The method works well in high dimensions. The method can also be used to produce very high-breakdown-point–robust estimators of a covariance matrix.
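A minimal sketch of the classification idea: compute Kth nearest neighbor distances and split them with a two-component mixture fitted by EM. The article derives the component densities from Poisson-process theory; a Gaussian mixture on the distances is used here purely as an off-the-shelf stand-in, and the simulated "minefield" is illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# A dense linear "feature" buried in uniform clutter on the unit square.
line = np.linspace(0.2, 0.8, 150)
feature = np.column_stack([line, line + 0.01 * rng.standard_normal(150)])
clutter = rng.random((600, 2))
pts = np.vstack([feature, clutter])

# Kth nearest neighbor distances: small inside dense features, large in clutter.
K = 5
dists, _ = NearestNeighbors(n_neighbors=K + 1).fit(pts).kneighbors(pts)
dK = dists[:, K]  # column 0 is each point's zero distance to itself

# Two-component mixture on the distances, fitted by EM.
gm = GaussianMixture(n_components=2, random_state=0).fit(dK[:, None])
labels = gm.predict(dK[:, None])
feature_label = gm.means_.argmin()  # the small-distance component
print(f"points classified as feature: {(labels == feature_label).sum()} "
      f"(true feature size: {len(feature)})")
```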

Journal ArticleDOI
TL;DR: In this article, the authors consider inference for a semiparametric stochastic mixed model for longitudinal data and derive maximum penalized likelihood estimators of the regression coefficients and the nonparametric function.
Abstract: We consider inference for a semiparametric stochastic mixed model for longitudinal data. This model uses parametric fixed effects to represent the covariate effects and an arbitrary smooth function to model the time effect and accounts for the within-subject correlation using random effects and a stationary or nonstationary stochastic process. We derive maximum penalized likelihood estimators of the regression coefficients and the nonparametric function. The resulting estimator of the nonparametric function is a smoothing spline. We propose and compare frequentist inference and Bayesian inference on these model components. We use restricted maximum likelihood to estimate the smoothing parameter and the variance components simultaneously. We show that estimation of all model components of interest can proceed by fitting a modified linear mixed model. We illustrate the proposed method by analyzing a hormone dataset and evaluate its performance through simulations.


Journal ArticleDOI
TL;DR: In this article, the authors show that the distribution of this process may be approximated by the wild bootstrap, which is applied to simulated datasets as well as to real data.
Abstract: Let M = {mθ : θ ∈ Θ} be a parametric model for an unknown regression function m. For example, M may consist of all polynomials or trigonometric polynomials with a given bound on the degree. To check the full model M (i.e., to test H0: m ∈ M), it is known that optimal tests should be based on the empirical process of the regressors marked by the residuals. In this article we show that the distribution of this process may be approximated by the wild bootstrap. The method is applied to simulated datasets as well as to real data.
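A minimal sketch of the wild bootstrap applied to the residual-marked empirical process, assuming a linear null model, Rademacher multipliers (Mammen's two-point weights are another standard choice), and a sup-type statistic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(-1, 1, n))
y = 1 + x + 0.5 * x ** 2 + 0.3 * rng.standard_normal(n)  # truth is quadratic

def fit_linear(x, y):
    """Fitted values from the linear null model."""
    X = np.column_stack([np.ones_like(x), x])
    return X @ np.linalg.lstsq(X, y, rcond=None)[0]

def marked_sup(resid):
    """sup over x of the marked process n^{-1/2} * sum 1{x_i <= x} e_i (x sorted)."""
    return np.abs(np.cumsum(resid)).max() / np.sqrt(len(resid))

fitted = fit_linear(x, y)
resid = y - fitted
T_obs = marked_sup(resid)

# Wild bootstrap: multiply residuals by random signs, rebuild y under the null,
# refit, and recompute the statistic to approximate its null distribution.
T_boot = np.empty(999)
for b in range(999):
    v = rng.choice([-1.0, 1.0], size=n)
    y_b = fitted + resid * v
    T_boot[b] = marked_sup(y_b - fit_linear(x, y_b))

print(f"statistic: {T_obs:.3f}  bootstrap p-value: {(T_boot >= T_obs).mean():.3f}")
```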

Journal ArticleDOI
TL;DR: This monograph on quasi-likelihood and estimating functions covers an alternative approach based on E-sufficiency, asymptotic confidence zones of minimum size, asymptotic quasi-likelihood, combining estimating functions, bypassing the likelihood, and hypothesis testing.
Abstract: The General Framework. An Alternative Approach: E-Sufficiency. Asymptotic Confidence Zones of Minimum Size. Asymptotic Quasi-Likelihood. Combining Estimating Functions. Projected Quasi-Likelihood. Bypassing the Likelihood. Hypothesis Testing. Infinite Dimensional Problems. Miscellaneous Applications. Consistency and Asymptotic Normality for Estimating Functions. Complements and Strategies for Application.

BookDOI
TL;DR: This monograph defines single-index models, explains why they are useful for dimension reduction, and covers estimation for binary response models, including extensions of the maximum score and smoothed maximum score estimators.
Abstract: 1. Introduction. 2. Single-Index Models: 2.1 Definition of a Single-Index Model; 2.2 Why Single-Index Models Are Useful; 2.3 Other Approaches to Dimension Reduction; 2.4 Identification of Single-Index Models; 2.5 Estimating G in a Single-Index Model; 2.6 Optimization Estimators of β; 2.7 Direct Semiparametric Estimators; 2.8 Bandwidth Selection; 2.9 An Empirical Example. 3. Binary Response Models: 3.1 Random-Coefficients Models; 3.2 Identification; 3.3 Estimation; 3.4 Extensions of the Maximum Score and Smoothed Maximum Score Estimators; 3.5 An Empirical Example. 4. Deconvolution Problems: 4.1 A Model of Measurement Error; 4.2 Models for Panel Data; 4.3 Extensions; 4.4 An Empirical Example. 5. Transformation Models: 5.1 Estimation with Parametric T and Nonparametric F; 5.2 Estimation with Nonparametric T and Parametric F; 5.3 Estimation when Both T and F are Nonparametric; 5.4 Predicting Y Conditional on X; 5.5 An Empirical Example. Appendix: Nonparametric Estimation: A.1 Nonparametric Density Estimation; A.2 Nonparametric Mean Regression. References.

Journal ArticleDOI
TL;DR: Kullback-Leibler discrimination information and the Chernoff information measure are developed for the multivariate non-Gaussian case for discrimination between different classes of multivariate time series.
Abstract: Minimum discrimination information provides a useful generalization of likelihood methodology for classification and clustering of multivariate time series. Discrimination between different classes of multivariate time series that can be characterized by differing covariance or spectral structures is of importance in applications occurring in the analysis of geophysical and medical time series data. For discrimination between such multivariate series, Kullback-Leibler discrimination information and the Chernoff information measure are developed for the multivariate non-Gaussian case. Asymptotic error rates and limiting distributions are given for a generalized spectral disparity measure that includes the foregoing criteria as special cases. Applications to problems of clustering and classifying earthquakes and mining explosions are given.

Journal ArticleDOI
TL;DR: In this paper, the estimation of the k + 1-dimensional nonparametric component β(t) of the varying-coefficient model was considered, and asymptotic distributions were established for a kernel estimate of β (t) that minimizes a local least squares criterion.
Abstract: We consider the estimation of the k + 1-dimensional nonparametric component β(t) of the varying-coefficient model Y(t) = X T (t)β(t) + e(t) based on longitudinal observations (Yij , X i (tij ), tij ), i = 1, …, n, j = 1, …, ni , where tij is the jth observed design time point t of the ith subject and Yij and X i (tij ) are the real-valued outcome and R k+1 valued covariate vectors of the ith subject at tij. The subjects are independently selected, but the repeated measurements within subject are possibly correlated. Asymptotic distributions are established for a kernel estimate of β(t) that minimizes a local least squares criterion. These asymptotic distributions are used to construct a class of approximate pointwise and simultaneous confidence regions for β(t). Applying these methods to an epidemiological study, we show that our procedures are useful for predicting CD4 (T-helper lymphocytes) cell changes among HIV (human immunodeficiency virus)-infected persons. The finite-sample properties of o...
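A cross-sectional sketch of the local least squares criterion with a Gaussian kernel and a hand-picked bandwidth: at each time point, β(t0) is estimated by kernel-weighted least squares of Y on X. The article's setting additionally has repeated, possibly correlated, measurements per subject, which this sketch ignores.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs = 2000
t = rng.uniform(0, 1, n_obs)
x = rng.standard_normal(n_obs)
beta0 = lambda s: np.sin(2 * np.pi * s)   # true time-varying intercept
beta1 = lambda s: 1 + s                   # true time-varying slope
y = beta0(t) + beta1(t) * x + 0.2 * rng.standard_normal(n_obs)

def beta_hat(t0, h=0.08):
    """Kernel-weighted least squares estimate of (beta0(t0), beta1(t0))."""
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)   # Gaussian kernel weights in time
    X = np.column_stack([np.ones(n_obs), x])
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

print(beta_hat(0.25))  # ~ [sin(pi/2), 1.25] = [1.0, 1.25]
```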

Journal ArticleDOI
TL;DR: In this article, a computationally simple method for estimation and prediction using binary or indicator data in space is proposed based on pairwise likelihood contributions, and the large-sample properties of the estimators are obtained in a straightforward manner.
Abstract: Conventional geostatistics addresses the problem of estimation and prediction for continuous observations. But in many practical applications in public health, environmental remediation, or ecological research the most commonly available data are in the form of counts (e.g., number of cases) or indicator variables denoting above or below threshold values. Also, in many situations it is less expensive to obtain an imprecise categorical observation than to obtain precise measurements of the variable of interest (such as a contaminant). This article proposes a computationally simple method for estimation and prediction using binary or indicator data in space. The proposed method is based on pairwise likelihood contributions, and the large-sample properties of the estimators are obtained in a straightforward manner. We illustrate the methodology through application to indicator data related to gypsy moth defoliation in Massachusetts.

Journal ArticleDOI
TL;DR: The current article develops the theoretical framework of variants of the origin-destination flow problem and introduces Bayesian approaches to analysis and inference.
Abstract: We study Bayesian models and methods for analysing network traffic counts in problems of inference about the traffic intensity between directed pairs of origins and destinations in networks. This is a class of problems very recently discussed by Vardi in a 1996 JASA article and is of interest in both communication and transportation network studies. The current article develops the theoretical framework of variants of the origin-destination flow problem and introduces Bayesian approaches to analysis and inference. In the first, the so-called fixed routing problem, traffic or messages pass between nodes in a network, with each message originating at a specific source node, and ultimately moving through the network to a predetermined destination node. All nodes are candidate origin and destination points. The framework assumes no travel time complications, considering only the number of messages passing between pairs of nodes in a specified time interval. The route count, or route flow, problem is ...

Journal ArticleDOI
TL;DR: In this article, a comparison of forecasting performance for a variety of linear and nonlinear time series models using the U.S. unemployment rate is presented, and the results show that significant improvements in forecasting accuracy can be obtained over existing methods.
Abstract: This article presents a comparison of forecasting performance for a variety of linear and nonlinear time series models using the U.S. unemployment rate. Our main emphases are on measuring forecasting performance during economic expansions and contractions by exploiting the asymmetric cyclical behavior of unemployment numbers, on building vector models that incorporate initial jobless claims as a leading indicator, and on utilizing additional information provided by the monthly rate for forecasting the quarterly rate. Comparisons are also made with the consensus forecasts from the Survey of Professional Forecasters. In addition, the forecasts of nonlinear models are combined with the consensus forecasts. The results show that significant improvements in forecasting accuracy can be obtained over existing methods.