
Showing papers in "Biometrika in 2010"


Journal ArticleDOI
TL;DR: In this article, the authors proposed a new approach to sparsity called the horseshoe estimator, which, like the widely used double-exponential and Cauchy priors, is a member of the family of multivariate scale mixtures of normals.
Abstract: This paper proposes a new approach to sparsity called the horseshoe estimator. The horseshoe is a close cousin of other widely used Bayes rules arising from, for example, double-exponential and Cauchy priors, in that it is a member of the same family of multivariate scale mixtures of normals. But the horseshoe enjoys a number of advantages over existing approaches, including its robustness, its adaptivity to different sparsity patterns, and its analytical tractability. We prove two theorems that formally characterize both the horseshoe’s adeptness at handling large outlying signals, and its super-efficient rate of convergence to the correct estimate of the sampling density in sparse situations. Finally, using a combination of real and simulated data, we show that the horseshoe estimator corresponds quite closely to the answers one would get by pursuing a full Bayesian model-averaging approach using a discrete mixture prior to model signals and noise.
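
The following sketch (not from the paper) illustrates the scale-mixture construction: coefficients are drawn as theta_i | lambda_i, tau ~ N(0, tau^2 lambda_i^2) with half-Cauchy local scales lambda_i, and the implied shrinkage weights kappa_i = 1/(1 + tau^2 lambda_i^2) are tabulated; the global scale tau = 1 and the unit error variance are illustrative assumptions.

```python
# Sketch of the horseshoe prior as a scale mixture of normals (illustrative only).
# theta_i | lambda_i, tau ~ N(0, tau^2 * lambda_i^2),  lambda_i ~ C+(0, 1).
import numpy as np

rng = np.random.default_rng(0)
n, tau = 10_000, 1.0                      # number of draws and global scale (assumed)

lam = np.abs(rng.standard_cauchy(n))      # half-Cauchy local scales
theta = rng.normal(0.0, tau * lam)        # draws from the horseshoe prior

# Shrinkage weights kappa_i = 1 / (1 + tau^2 * lambda_i^2); with unit error variance,
# the posterior mean of a single observation y_i is roughly (1 - kappa_i) * y_i.
kappa = 1.0 / (1.0 + (tau * lam) ** 2)

# The name comes from the U-shaped (Beta(1/2, 1/2)-like) density of kappa: mass piles
# up near 0 (signals left unshrunk) and near 1 (noise shrunk to zero).
hist, edges = np.histogram(kappa, bins=10, range=(0, 1), density=True)
for lo, hi, h in zip(edges[:-1], edges[1:], hist):
    print(f"kappa in [{lo:.1f}, {hi:.1f}): density ~ {h:.2f}")
print("tail heaviness, P(|theta| > 5):", np.mean(np.abs(theta) > 5))
```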

1,260 citations


Journal ArticleDOI
Zhiqiang Tan
TL;DR: This paper proposes new doubly robust estimators, which have desirable efficiency properties if the propensity score model is correctly specified and remain bounded even if the inverse probability weights are highly variable.
Abstract: Consider estimating the mean of an outcome in the presence of missing data or estimating population average treatment effects in causal inference. A doubly robust estimator remains consistent if an outcome regression model or a propensity score model is correctly specified. We build on a previous nonparametric likelihood approach and propose new doubly robust estimators, which have desirable properties in efficiency if the propensity score model is correctly specified, and in boundedness even if the inverse probability weights are highly variable. We compare the new and existing estimators in a simulation study and find that the robustified likelihood estimators yield overall the smallest mean squared errors.
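
For orientation, here is a minimal sketch of a generic augmented inverse-probability-weighted (doubly robust) estimator of a mean under data missing at random; it is not Tan's robustified likelihood estimator, and the simulated data and logistic/linear working models are assumptions made purely for illustration.

```python
# Generic doubly robust (AIPW) estimate of E[Y] with Y missing at random (sketch).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 1))
y = 1.0 + 2.0 * x[:, 0] + rng.normal(size=n)            # outcome (partly unobserved)
pi = 1.0 / (1.0 + np.exp(-(0.5 + x[:, 0])))             # true response probability
r = rng.binomial(1, pi)                                  # r = 1 if y is observed

ps_model = LogisticRegression().fit(x, r)                # propensity score model
pi_hat = ps_model.predict_proba(x)[:, 1]
om_model = LinearRegression().fit(x[r == 1], y[r == 1])  # outcome regression model
m_hat = om_model.predict(x)

# AIPW estimating function: consistent if either working model is correctly specified.
mu_dr = np.mean(r * y / pi_hat - (r - pi_hat) / pi_hat * m_hat)
print("doubly robust estimate of E[Y]:", round(mu_dr, 3))
```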

286 citations


Journal ArticleDOI
TL;DR: A simple constrained version of quantile regression is proposed to avoid the crossing problem for both linear and nonparametric quantile curves.
Abstract: Since quantile regression curves are estimated individually, the quantile curves can cross, leading to an invalid distribution for the response. A simple constrained version of quantile regression is proposed to avoid the crossing problem for both linear and nonparametric quantile curves. A simulation study and a reanalysis of tropical cyclone intensity data show the usefulness of the procedure. Asymptotic properties of the estimator are equivalent to those of the typical approach under standard conditions, and the proposed estimator reduces to the classical one if there is no crossing. The constrained estimator also shows significantly improved performance, as the constraint adds smoothing and stability across the quantile levels.

265 citations


Journal ArticleDOI
TL;DR: The authors showed that the marginal AIC is not an asymptotically unbiased estimator of the Akaike information, and that ignoring estimation uncertainty in the random effects covariance matrix induces a bias that can lead to the selection of any random effect not predicted to be exactly zero.
Abstract: In linear mixed models, model selection frequently includes the selection of random effects. Two versions of the Akaike information criterion, AIC, have been used, based either on the marginal or on the conditional distribution. We show that the marginal AIC is not an asymptotically unbiased estimator of the Akaike information, and favours smaller models without random effects. For the conditional AIC, we show that ignoring estimation uncertainty in the random effects covariance matrix, as is common practice, induces a bias that can lead to the selection of any random effect not predicted to be exactly zero. We derive an analytic representation of a corrected version of the conditional AIC, which avoids the high computational cost and imprecision of available numerical approximations. An implementation in an R package (R Development Core Team, 2010) is provided. All theoretical results are illustrated in simulation studies, and their impact in practice is investigated in an analysis of childhood malnutrition in Zambia.

249 citations


Journal ArticleDOI
TL;DR: A family of scalable schemes is proposed based on the sum of the local cumulative sum, CUSUM, statistics from each individual data stream, and is shown to asymptotically minimize the detection delays for each and every possible combination of affected data streams, subject to the global false alarm constraint.
Abstract: The sequential changepoint detection problem is studied in the context of global online monitoring of a large number of independent data streams. We are interested in detecting an occurring event as soon as possible, but we do not know when the event will occur, nor do we know which subset of data streams will be affected by the event. A family of scalable schemes is proposed based on the sum of the local cumulative sum, CUSUM, statistics from each individual data stream, and is shown to asymptotically minimize the detection delays for each and every possible combination of affected data streams, subject to the global false alarm constraint. The usefulness and limitations of our asymptotic optimality results are illustrated by numerical simulations and heuristic arguments. The Appendices contain a probabilistic result on the first epoch to simultaneous record values for multiple independent random walks.
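
A minimal sketch of the sum-of-CUSUMs idea under an assumed Gaussian model: each stream runs its own CUSUM recursion for a shift from mean 0 to a known post-change mean delta, and an alarm is raised when the sum of the local statistics exceeds a threshold. The statistic, threshold and data below are illustrative rather than the paper's exact formulation.

```python
# Sum of local CUSUM statistics over many independent data streams (sketch).
import numpy as np

rng = np.random.default_rng(2)
K, T, delta = 100, 300, 1.0              # streams, time horizon, assumed post-change mean
change_time, affected = 150, range(5)    # event at t = 150 affecting 5 streams (simulated)

x = rng.normal(size=(K, T))
for k in affected:
    x[k, change_time:] += delta

threshold = 50.0                         # assumed global alarm threshold
w = np.zeros(K)                          # local CUSUM statistics, one per stream
for t in range(T):
    # Gaussian log-likelihood-ratio increment for a mean shift of size delta,
    # accumulated with reflection at zero (the usual CUSUM recursion).
    w = np.maximum(0.0, w + delta * x[:, t] - delta ** 2 / 2.0)
    if w.sum() > threshold:              # global statistic: sum of the local CUSUMs
        print(f"alarm at t = {t} (true change at t = {change_time})")
        break
```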

213 citations


Journal ArticleDOI
TL;DR: It is shown using replicates and parent-child comparisons that pooling data across samples results in more accurate detection of copy number variants and the multisample segmentation algorithm is applied to the analysis of a cohort of tumour samples containing complex nested and overlapping copy number aberrations.
Abstract: We discuss the detection of local signals that occur at the same location in multiple one-dimensional noisy sequences, with particular attention to relatively weak signals that may occur in only a fraction of the sequences. We propose simple scan and segmentation algorithms based on the sum of the chi-squared statistics for each individual sample, which is equivalent to the generalized likelihood ratio for a model where the errors in each sample are independent. The simple geometry of the statistic allows us to derive accurate analytic approximations to the significance level of such scans. The formulation of the model is motivated by the biological problem of detecting recurrent DNA copy number variants in multiple samples. We show using replicates and parent-child comparisons that pooling data across samples results in more accurate detection of copy number variants. We also apply the multisample segmentation algorithm to the analysis of a cohort of tumour samples containing complex nested and overlapping copy number aberrations, for which our method gives a sparse and intuitive cross-sample summary.
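
To fix ideas, the sketch below scans all candidate intervals on a common grid and, for each, sums over samples the squared standardized interval means, a chi-squared-type statistic in the spirit described above; the simulated data, the range of interval widths and the known unit variance are illustrative assumptions rather than the authors' exact procedure.

```python
# Multi-sample scan: sum over samples of squared standardized interval means (sketch).
import numpy as np

rng = np.random.default_rng(3)
n_samples, length = 20, 500
x = rng.normal(size=(n_samples, length))       # noise with unit variance (assumed known)
x[:5, 200:240] += 0.8                          # weak shared signal in 5 of the 20 samples

csum = np.concatenate([np.zeros((n_samples, 1)), np.cumsum(x, axis=1)], axis=1)

best = (0.0, None)
for s in range(length):
    for t in range(s + 10, min(s + 100, length) + 1):   # candidate intervals [s, t)
        z = (csum[:, t] - csum[:, s]) / np.sqrt(t - s)   # per-sample standardized sum
        stat = np.sum(z ** 2)                            # chi-squared-type scan statistic
        if stat > best[0]:
            best = (stat, (s, t))
print("max statistic %.1f on interval %s (signal planted on [200, 240))" % best)
```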

184 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose an efficient penalized likelihood method for estimating the adjacency matrix of directed acyclic graphs when the variables inherit a natural ordering, and show that the adaptive lasso can consistently estimate the true graph under the usual regularity assumptions.
Abstract: Directed acyclic graphs are commonly used to represent causal relationships among random variables in graphical models. Applications of these models arise in the study of physical and biological systems where directed edges between nodes represent the influence of components of the system on each other. Estimation of directed graphs from observational data is computationally NP-hard. In addition, directed graphs with the same structure may be indistinguishable based on observations alone. When the nodes exhibit a natural ordering, the problem of estimating directed graphs reduces to the problem of estimating the structure of the network. In this paper, we propose an efficient penalized likelihood method for estimation of the adjacency matrix of directed acyclic graphs, when variables inherit a natural ordering. We study variable selection consistency of lasso and adaptive lasso penalties in high-dimensional sparse settings, and propose an error-based choice for selecting the tuning parameter. We show that although the lasso is only variable selection consistent under stringent conditions, the adaptive lasso can consistently estimate the true graph under the usual regularity assumptions.
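
With a known ordering, the estimation problem decouples into one penalized regression per node on its predecessors. The sketch below uses scikit-learn's cross-validated lasso as a stand-in; the paper also studies the adaptive lasso and uses an error-based tuning-parameter choice, so this is only a rough analogue on simulated data.

```python
# Lasso regressions of each node on its predecessors under a known ordering (sketch).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(4)
n, p = 300, 10
true_adj = np.triu(rng.binomial(1, 0.2, size=(p, p)) * 0.8, k=1)   # upper-triangular DAG

# Generate data respecting the ordering: node j depends only on nodes 0..j-1.
x = np.zeros((n, p))
for j in range(p):
    x[:, j] = x[:, :j] @ true_adj[:j, j] + rng.normal(size=n)

est_adj = np.zeros((p, p))
for j in range(1, p):
    fit = LassoCV(cv=5).fit(x[:, :j], x[:, j])    # regress node j on its predecessors
    est_adj[:j, j] = fit.coef_

print("true parents of node 9:     ", np.flatnonzero(true_adj[:, 9]))
print("estimated parents of node 9:", np.flatnonzero(np.abs(est_adj[:, 9]) > 0.1))
```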

172 citations


Journal ArticleDOI
TL;DR: A novel selection criterion that is applicable to all kinds of clustering algorithms, including both distance-based and non-distance-based algorithms, is proposed, based on clustering instability, which measures the robustness of any given clustering algorithm against the randomness in sampling.
Abstract: In cluster analysis, one of the major challenges is to estimate the number of clusters. Most existing approaches attempt to minimize some distance-based dissimilarity measure within clusters. This article proposes a novel selection criterion that is applicable to all kinds of clustering algorithms, including both distance-based and non-distance-based algorithms. The key idea is to select the number of clusters that minimizes the algorithm's instability, which measures the robustness of any given clustering algorithm against the randomness in sampling. A novel estimation scheme for clustering instability is developed based on cross-validation. The proposed selection criterion's effectiveness is demonstrated on a variety of numerical experiments, and its asymptotic selection consistency is established when the dataset is properly split.
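
A rough sketch of the cross-validation idea: the data are repeatedly split, the clustering algorithm is run on two disjoint training sets, the two fitted clusterings are transferred to a common evaluation set, and instability is measured as the fraction of pairs on which the two clusterings disagree. The use of k-means, the nearest-centroid assignment and the three-way split are illustrative choices, not necessarily those analysed in the paper.

```python
# Choosing the number of clusters by minimizing clustering instability (sketch).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
# Three well-separated Gaussian clusters in the plane (simulated).
data = np.vstack([rng.normal(c, 0.5, size=(100, 2)) for c in ([0, 0], [4, 0], [2, 4])])

def instability(x, k, reps=20):
    scores = []
    for _ in range(reps):
        idx = rng.permutation(len(x))
        a, b, test = np.array_split(idx, 3)       # two training sets plus an evaluation set
        lab_a = KMeans(n_clusters=k, n_init=10).fit(x[a]).predict(x[test])
        lab_b = KMeans(n_clusters=k, n_init=10).fit(x[b]).predict(x[test])
        # Pairwise disagreement: pairs grouped together by one clustering but not the other.
        same_a = lab_a[:, None] == lab_a[None, :]
        same_b = lab_b[:, None] == lab_b[None, :]
        scores.append(np.mean(same_a != same_b))
    return np.mean(scores)

for k in range(2, 7):
    print(k, round(instability(data, k), 4))      # instability should dip at k = 3
```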

169 citations


Journal ArticleDOI
TL;DR: A new particle smoother with a computational complexity of O(N), where N is the number of particles, is proposed; it substantially outperforms the simple filter-smoother and overcomes some degeneracy problems in existing algorithms.
Abstract: In this paper we propose a new particle smoother that has a computational complexity of O(N), where N is the number of particles. This compares favourably with the O(N^2) computational cost of most smoothers. The new method also overcomes some degeneracy problems in existing algorithms. Through simulation studies we show that substantial gains in efficiency are obtained for practical amounts of computational cost. It is shown both through these simulation studies, and by the analysis of an athletics dataset, that our new method also substantially outperforms the simple filter-smoother, the only other smoother with computational cost that is O(N).

164 citations


Journal ArticleDOI
TL;DR: It is shown that, for all commonly used parametric and semiparametric models, there is no asymptotic efficiency gain by analyzing original data if the parameter of main interest has a common value across studies, the nuisance parameters have distinct values among studies, and the summary statistics are based on maximum likelihood.
Abstract: Meta-analysis is widely used to synthesize the results of multiple studies. Although meta-analysis is traditionally carried out by combining the summary statistics of relevant studies, advances in technologies and communications have made it increasingly feasible to access the original data on individual participants. In the present paper, we investigate the relative efficiency of analyzing original data versus combining summary statistics. We show that, for all commonly used parametric and semiparametric models, there is no asymptotic efficiency gain by analyzing original data if the parameter of main interest has a common value across studies, the nuisance parameters have distinct values among studies, and the summary statistics are based on maximum likelihood. We also assess the relative efficiency of the two methods when the parameter of main interest has different values among studies or when there are common nuisance parameters across studies. We conduct simulation studies to confirm the theoretical results and provide empirical comparisons from a genetic association study.
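
The no-loss-of-efficiency claim is easy to probe numerically in the simplest setting of a common mean with study-specific variances; the sketch below compares the inverse-variance-weighted combination of per-study summary statistics with a pooled analysis of the individual data, all on simulated Gaussian data (the sample sizes, variances and Gaussian model are assumptions chosen purely for illustration).

```python
# Combining summary statistics vs analysing the pooled individual data (sketch).
import numpy as np

rng = np.random.default_rng(6)
mu = 1.5                                   # common parameter of interest across studies
sigmas = [0.5, 1.0, 2.0]                   # study-specific nuisance parameters
studies = [rng.normal(mu, s, size=200) for s in sigmas]

# Summary-statistics route: per-study mean and squared standard error, then
# a fixed-effect (inverse-variance-weighted) meta-analysis.
est = np.array([s.mean() for s in studies])
se2 = np.array([s.var(ddof=1) / len(s) for s in studies])
meta = np.sum(est / se2) / np.sum(1.0 / se2)

# Individual-data route: weighted estimate of the common mean allowing each study its
# own variance; in this Gaussian case the weights coincide, so the estimates agree.
pooled = np.sum([s.sum() / s.var(ddof=1) for s in studies]) / np.sum(
    [len(s) / s.var(ddof=1) for s in studies])
print("meta-analysis of summary statistics:", round(meta, 4))
print("pooled individual-data analysis:    ", round(pooled, 4))
```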

145 citations


Journal ArticleDOI
TL;DR: This work proposes a simple methodology, based on latent dimensions and existing covariance models for univariate random fields, to develop flexible, interpretable and computationally feasible classes of cross-covariance functions in closed form.
Abstract: The problem of constructing valid parametric cross-covariance functions is challenging. We propose a simple methodology, based on latent dimensions and existing covariance models for univariate random fields, to develop flexible, interpretable and computationally feasible classes of cross-covariance functions in closed form. We focus on spatio-temporal cross-covariance functions that can be nonseparable, asymmetric and can have different covariance structures, for instance different smoothness parameters, in each component. We discuss estimation of these models and perform a small simulation study to demonstrate our approach. We illustrate our methodology on a trivariate spatio-temporal pollution dataset from California and demonstrate that our cross-covariance model performs better than other competing models.
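
A compact sketch of the latent-dimension construction: each component process is assigned a coordinate in a latent space, and a valid univariate covariance on the augmented domain, evaluated at the augmented lag, yields a valid matrix-valued cross-covariance. The separable exponential covariance and the latent coordinates below are illustrative assumptions; the paper's models are richer (nonseparable, asymmetric, component-specific smoothness).

```python
# Cross-covariance functions built from latent dimensions (illustrative sketch).
import numpy as np

def cross_cov(h, u, i, j, xi, a_space=1.0, a_time=1.0, a_latent=1.0, sigma2=1.0):
    """Covariance between component i at (s, t) and component j at (s + h, t + u).

    Component i is given a latent coordinate xi[i]; a valid univariate covariance
    (here a product of exponentials) evaluated at the augmented lag (h, u, xi[i]-xi[j])
    automatically gives a valid multivariate cross-covariance.
    """
    lag = (np.linalg.norm(h) / a_space
           + abs(u) / a_time
           + abs(xi[i] - xi[j]) / a_latent)
    return sigma2 * np.exp(-lag)

xi = np.array([0.0, 0.3, 1.2])      # latent coordinates for a trivariate field (assumed)
h, u = np.array([0.5, 0.2]), 1.0    # spatial and temporal lags
for i in range(3):
    for j in range(3):
        print(i, j, round(cross_cov(h, u, i, j, xi), 3))
```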

Journal ArticleDOI
TL;DR: In this paper, the authors extend the common linear functional regression model to the case where the dependency of a scalar response on a functional predictor is of polynomial rather than linear nature.
Abstract: We extend the common linear functional regression model to the case where the dependency of a scalar response on a functional predictor is of polynomial rather than linear nature. Focusing on the quadratic case, we demonstrate the usefulness of the polynomial functional regression model, which encompasses linear functional regression as a special case. Our approach works under mild conditions for the case of densely spaced observations and also can be extended to the important practical situation where the functional predictors are derived from sparse and irregular measurements, as is the case in many longitudinal studies. A key observation is the equivalence of the functional polynomial model with a regression model that is a polynomial of the same order in the functional principal component scores of the predictor processes. Theoretical analysis as well as practical implementations are based on this equivalence and on basis representations of predictor processes. We also obtain an explicit representation of the regression surface that defines quadratic functional regression and provide functional asymptotic results for an increasing number of model components as the number of subjects in the study increases. The improvements that can be gained by adopting quadratic as compared to linear functional regression are illustrated with a case study that includes absorption spectra as functional predictors.
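
The equivalence noted above suggests a simple plug-in scheme: estimate functional principal component scores from the discretized curves and fit an ordinary polynomial regression in those scores. The sketch below does this for the quadratic case with densely observed curves; the two-component truncation and the simulated data are assumptions, and the paper's estimation procedure is more careful.

```python
# Quadratic functional regression via functional principal component scores (sketch).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(7)
n, grid = 400, np.linspace(0, 1, 101)

# Densely observed predictor curves built from two random components plus noise.
a = rng.normal(size=(n, 2))
curves = a[:, [0]] * np.sin(2 * np.pi * grid) + a[:, [1]] * np.cos(2 * np.pi * grid)
curves += 0.05 * rng.normal(size=curves.shape)

# Scalar response depending quadratically on the predictor process.
y = 1.0 + a[:, 0] - 0.5 * a[:, 1] + 0.8 * a[:, 0] ** 2 + rng.normal(scale=0.2, size=n)

# Step 1: functional PCA of the centred, discretized curves, truncated at 2 components.
centred = curves - curves.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
dt = grid[1] - grid[0]
scores = centred @ vt[:2].T * dt                    # approximate FPC scores

# Step 2: ordinary least squares on the scores, their squares and cross-product.
design = PolynomialFeatures(degree=2, include_bias=False).fit_transform(scores)
fit = LinearRegression().fit(design, y)
print("R^2 of the quadratic functional regression:", round(fit.score(design, y), 3))
```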

Journal ArticleDOI
TL;DR: A new regression interpretation of the Cholesky factor of the covariance matrix is proposed, as opposed to the well-known regression interpretation of the Cholesky factor of the inverse covariance, leading to a new class of regularized covariance estimators suitable for high-dimensional problems.
Abstract: In this paper we propose a new regression interpretation of the Cholesky factor of the covariance matrix, as opposed to the well-known regression interpretation of the Cholesky factor of the inverse covariance, which leads to a new class of regularized covariance estimators suitable for high-dimensional problems. Regularizing the Cholesky factor of the covariance via this regression interpretation always results in a positive definite estimator. In particular, one can obtain a positive definite banded estimator of the covariance matrix at the same computational cost as the popular banded estimator of Bickel & Levina (2008b), which is not guaranteed to be positive definite. We also establish theoretical connections between banding Cholesky factors of the covariance matrix and its inverse and constrained maximum likelihood estimation under the banding constraint, and compare the numerical performance of several methods in simulations and on a sonar data example.
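
A minimal sketch of the regression interpretation described above, on assumed simulated data: writing X = L eps with L unit lower triangular and uncorrelated innovations eps, each variable is regressed on the innovations of the preceding variables, and keeping only the K most recent innovations bands L. The resulting estimator L D L' is positive definite by construction, which is the point of the approach; the band width and data below are illustrative.

```python
# Banded covariance estimate via the Cholesky-factor regression interpretation (sketch).
import numpy as np

rng = np.random.default_rng(8)
n, p, K = 200, 30, 2                    # sample size, dimension, assumed band width

# Simulated data with short-range dependence (AR(1)-type covariance).
true_cov = 0.6 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
x = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
x -= x.mean(axis=0)

L = np.eye(p)                           # unit lower-triangular factor in X = L * eps
eps = np.zeros_like(x)                  # estimated innovations
eps[:, 0] = x[:, 0]
for j in range(1, p):
    lo = max(0, j - K)                  # banding: use only the K preceding innovations
    z = eps[:, lo:j]
    coef, *_ = np.linalg.lstsq(z, x[:, j], rcond=None)
    L[j, lo:j] = coef
    eps[:, j] = x[:, j] - z @ coef      # innovation for variable j

d = eps.var(axis=0)
cov_hat = L @ np.diag(d) @ L.T          # banded, always positive definite estimate
print("smallest eigenvalue of the estimate:", round(np.linalg.eigvalsh(cov_hat).min(), 4))
```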

Journal ArticleDOI
TL;DR: It is established that nested sampling has an approximation error that vanishes at the standard Monte Carlo rate and that this error is asymptotically Gaussian, and it is shown that the asymptotic variance of the nested sampling approximation typically grows linearly with the dimension of the parameter.
Abstract: The link gives access to the working-paper version entitled "Contemplating Evidence: properties, extensions of, and alternatives to Nested Sampling".

Journal ArticleDOI
TL;DR: In this paper, the authors provide some theoretical results for testing hypotheses after covariate-adaptive randomization and show that one way to obtain a valid test procedure is to use a correct model between outcomes and covariates including those used in randomization.
Abstract: The covariate-adaptive randomization method was proposed for clinical trials long ago but little theoretical work has been done for statistical inference associated with it. Practitioners often apply test procedures available for simple randomization, which is controversial since procedures valid under simple randomization may not be valid under other randomization schemes. In this paper, we provide some theoretical results for testing hypotheses after covariate-adaptive randomization. We show that one way to obtain a valid test procedure is to use a correct model between outcomes and covariates, including those used in randomization. We also show that the simple two-sample t-test, without using any covariate, is conservative under covariate-adaptive biased coin randomization in terms of its Type I error, and that a valid bootstrap t-test can be constructed. The powers of several tests are examined theoretically and empirically. Our study provides guidance for applications and sheds light on further research in this area.

Journal ArticleDOI
TL;DR: A simplified version of the PC algorithm is developed, which is computationally feasible even with thousands of covariates and provides consistent variable selection under conditions on the random design matrix that are of a different nature than coherence conditions for penalty-based approaches like the lasso.
Abstract: We consider variable selection in high-dimensional linear models where the number of covariates greatly exceeds the sample size. We introduce the new concept of partial faithfulness and use it to infer associations between the covariates and the response. Under partial faithfulness, we develop a simplified version of the PC algorithm (Spirtes et al., 2000), which is computationally feasible even with thousands of covariates and provides consistent variable selection under conditions on the random design matrix that are of a different nature than coherence conditions for penalty-based approaches like the lasso. Simulations and application to real data show that our method is competitive compared to penalty-based approaches. We provide an efficient implementation of the algorithm in the R-package pcalg.
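
In the same spirit as the simplified PC algorithm (though not its exact implementation, which is provided in the R package pcalg), the sketch below screens covariates by testing partial correlations with the response given conditioning sets of increasing size, dropping a covariate as soon as some conditioning set renders its partial correlation non-significant; the Fisher z-test, the truncation at order two and the simulated design are assumptions for illustration.

```python
# Correlation screening with growing conditioning sets, PC-simple style (sketch).
import numpy as np
from itertools import combinations
from scipy import stats

def fisher_z_pvalue(r, n, q):
    """Two-sided p-value for a (partial) correlation r of order q via Fisher's z."""
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - q - 3)
    return 2 * stats.norm.sf(abs(z))

def partial_corr(x, y, z):
    """Correlation between x and y after regressing out the columns of z."""
    if z.shape[1] == 0:
        return np.corrcoef(x, y)[0, 1]
    proj = z @ np.linalg.lstsq(z, np.column_stack([x, y]), rcond=None)[0]
    return np.corrcoef(x - proj[:, 0], y - proj[:, 1])[0, 1]

rng = np.random.default_rng(9)
n, p, alpha = 300, 50, 0.05
x = rng.normal(size=(n, p))
y = x[:, 0] + 0.8 * x[:, 1] - 0.6 * x[:, 2] + rng.normal(size=n)   # 3 active covariates

active = list(range(p))
for order in range(3):                        # conditioning sets of size 0, 1, 2
    keep = []
    for j in active:
        others = [k for k in active if k != j]
        survives = True
        for cond in combinations(others, min(order, len(others))):
            r = partial_corr(x[:, j], y, x[:, list(cond)])
            if fisher_z_pvalue(r, n, len(cond)) > alpha:
                survives = False              # some conditioning set explains j away
                break
        if survives:
            keep.append(j)
    active = keep
print("selected covariates:", active)         # ideally [0, 1, 2]
```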

Journal ArticleDOI
TL;DR: In this article, a discretization-expectation estimation method is proposed to avoid selecting the number of slices, while preserving the integrity of the central subspace, which can be applied to regressions with multivariate responses.
Abstract: In the context of sufficient dimension reduction, the goal is to parsimoniously recover the central subspace of a regression model. Many inverse regression methods use slicing estimation to recover the central subspace. The efficacy of slicing estimation depends heavily upon the number of slices. However, the selection of the number of slices is an open and long-standing problem. In this paper, we propose a discretization-expectation estimation method, which avoids selecting the number of slices, while preserving the integrity of the central subspace. This generic method assures root-n consistency and asymptotic normality of slicing estimators for many inverse regression methods, and can be applied to regressions with multivariate responses. A BIC type criterion for the dimension of the central subspace is proposed. Comprehensive simulations and an illustrative application show that our method compares favourably with existing estimators.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a method for choosing a small subset of design points to optimize the prediction of a response variable, referred to as the most predictive design points, or covariates, for a given value of k, and computed using information contained in a set of independent observations (X i, Y i ) of (X, Y).
Abstract: We suggest a way of reducing the very high dimension of a functional predictor, X, to a low number of dimensions chosen so as to give the best predictive performance. Specifically, if X is observed on a fine grid of design points t_1, ..., t_r, we propose a method for choosing a small subset of these, say t_{i_1}, ..., t_{i_k}, to optimize the prediction of a response variable, Y. The values t_{i_j} are referred to as the most predictive design points, or covariates, for a given value of k, and are computed using information contained in a set of independent observations (X_i, Y_i) of (X, Y). The algorithm is based on local linear regression, and calculations can be accelerated using linear regression to preselect the design points. Boosting can be employed to further improve the predictive performance. We illustrate the usefulness of our ideas through simulations and examples drawn from chemometrics, and we develop theoretical arguments showing that the methodology can be applied successfully in a range of settings.
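
A rough sketch of the idea of selecting the most predictive design points, using greedy forward selection with cross-validated ordinary linear regression as a simple stand-in for the paper's local linear criterion; the simulated random-walk curves, the number k of points and the scoring rule are assumptions for illustration.

```python
# Greedy search for the most predictive design points of a functional predictor (sketch).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
n, r = 300, 100
grid = np.linspace(0, 1, r)                        # design points t_1, ..., t_r

# Simulated curves; the response depends on the curves near indices 20 and 70 only.
curves = np.cumsum(rng.normal(size=(n, r)), axis=1) / np.sqrt(r)
y = 2.0 * curves[:, 20] - 1.5 * curves[:, 70] + 0.1 * rng.normal(size=n)

k, selected = 3, []
for _ in range(k):                                 # forward selection of k design points
    best_j, best_score = None, -np.inf
    for j in range(r):
        if j in selected:
            continue
        cols = selected + [j]
        score = cross_val_score(LinearRegression(), curves[:, cols], y, cv=5).mean()
        if score > best_score:
            best_j, best_score = j, score
    selected.append(best_j)
print("selected design points:", [round(grid[j], 2) for j in sorted(selected)])
```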

Journal ArticleDOI
TL;DR: In this article, the authors proposed penalized empirical likelihood for parameter estimation and variable selection for problems with diverging numbers of parameters, which has the oracle property, with probability tending to 1.
Abstract: We propose penalized empirical likelihood for parameter estimation and variable selection for problems with diverging numbers of parameters. Our results are demonstrated for estimating the mean vector in multivariate analysis and regression coefficients in linear models. By using an appropriate penalty function, we show that penalized empirical likelihood has the oracle property. That is, with probability tending to 1, penalized empirical likelihood identifies the true model and estimates the nonzero coefficients as efficiently as if the sparsity of the true model were known in advance. The advantage of penalized empirical likelihood as a nonparametric likelihood approach is illustrated by testing hypotheses and constructing confidence regions. Numerical simulations confirm our theoretical findings.

Journal ArticleDOI
TL;DR: The doubly robust estimation of the parameters in a semiparametric conditional odds ratio model is considered and the estimators are consistent and asymptotically normal in a union model that assumes either of two variation independent baseline functions is correctly modelled but not necessarily both.
Abstract: We consider the doubly robust estimation of the parameters in a semiparametric conditional odds ratio model. Our estimators are consistent and asymptotically normal in a union model that assumes either of two variation independent baseline functions is correctly modelled but not necessarily both. Furthermore, when either outcome has finite support, our estimators are semiparametric efficient in the union model at the intersection submodel where both nuisance functions models are correct. For general outcomes, we obtain doubly robust estimators that are nearly efficient at the intersection submodel. Our methods are easy to implement as they do not require the use of the alternating conditional expectations algorithm of Chen (2007).

Journal ArticleDOI
TL;DR: In this article, the central solution space is used to modify second-order inverse conditional moment-based methods, such as sliced average variance estimation and directional regression, so as to relax the distributional assumption on the predictors.
Abstract: Many classical dimension reduction methods, especially those based on inverse conditional moments, require the predictors to have elliptical distributions, or at least to satisfy a linearity condition. Such conditions, however, are too strong for some applications. Li and Dong (2009) introduced the notion of the central solution space and used it to modify first-order methods, such as sliced inverse regression, so that they no longer rely on these conditions. In this paper we generalize this idea to second-order methods, such as sliced average variance estimation and directional regression. In doing so we demonstrate that the central solution space is a versatile framework: we can use it to modify essentially all inverse conditional moment-based methods to relax the distributional assumption on the predictors. Simulation studies and an application show a substantial improvement of the modified methods over their classical counterparts.

Journal ArticleDOI
TL;DR: In this article, a discriminant direction vector that generally exists only in high-dimension, low sample size settings is studied and the authors investigate mathematical properties and classification performance of this discrimination method.
Abstract: We study a discriminant direction vector that generally exists only in high-dimension, low sample size settings. Projections of data onto this direction vector take on only two distinct values, one for each class. There exist infinitely many such directions in the subspace generated by the data; but the maximal data piling vector has the longest distance between the projections. This paper investigates mathematical properties and classification performance of this discrimination method.

Journal ArticleDOI
TL;DR: This article proposes a broad class of mixture models constructed using suitable kernels on a general compact metric space and then on the planar shape space in particular, under which the Kullback-Leibler property holds.
Abstract: Statistical analysis on landmark-based shape spaces has diverse applications in morphometrics, medical diagnostics, machine vision and other areas. These shape spaces are non-Euclidean quotient manifolds. To conduct nonparametric inferences, one may define notions of centre and spread on this manifold and work with their estimates. However, it is useful to consider full likelihood-based methods, which allow nonparametric estimation of the probability density. This article proposes a broad class of mixture models constructed using suitable kernels on a general compact metric space and then on the planar shape space in particular. Following a Bayesian approach with a nonparametric prior on the mixing distribution, conditions are obtained under which the Kullback–Leibler property holds, implying large support and weak posterior consistency. Gibbs sampling methods are developed for posterior computation, and the methods are applied to problems in density estimation and classification with shape-based predictors. Simulation studies show improved estimation performance relative to existing approaches.

Journal ArticleDOI
TL;DR: In this article, locally D- and EDp-optimal designs for the exponential, log-linear and three-parameter emax models are derived at the same set of points, while the corresponding weights are different.
Abstract: We derive locally D- and EDp-optimal designs for the exponential, log-linear and three-parameter emax models. For each model the locally D- and EDp-optimal designs are supported at the same set of points, while the corresponding weights are different. This indicates that for a given model, D-optimal designs are efficient for estimating the smallest dose that achieves 100p% of the maximum effect in the observed dose range. Conversely, EDp-optimal designs also yield good D-efficiencies. We illustrate the results using several examples and demonstrate that locally D- and EDp-optimal designs for the emax, log-linear and exponential models are relatively robust with respect to misspecification of the model parameters.

Journal ArticleDOI
TL;DR: A semiparametric additive rate model for modelling recurrent events in the presence of a terminal event and an estimating equation for parameter estimation is constructed and the asymptotic distributions of the proposed estimators are derived.
Abstract: We propose a semiparametric additive rate model for modelling recurrent events in the presence of a terminal event. The dependence between recurrent events and terminal event is nonparametric. A general transformation model is used to model the terminal event. We construct an estimating equation for parameter estimation and derive the asymptotic distributions of the proposed estimators. Simulation studies demonstrate that the proposed inference procedure performs well in realistic settings. Application to a medical study is presented.

Journal ArticleDOI
TL;DR: The present paper deals with nonparametric and semiparametric estimation of attributable fraction functions for cohort studies with potentially censored event time data and shows the proposed estimators to be consistent, asymptotically normal and asymptotically efficient.
Abstract: Attributable fractions are commonly used to measure the impact of risk factors on disease incidence in the population. These static measures can be extended to functions of time when the time to disease occurrence or event time is of interest. The present paper deals with nonparametric and semiparametric estimation of attributable fraction functions for cohort studies with potentially censored event time data. The semiparametric models include the familiar proportional hazards model and a broad class of transformation models. The proposed estimators are shown to be consistent, asymptotically normal and asymptotically efficient. Extensive simulation studies demonstrate that the proposed methods perform well in practical situations. An application to a cardiovascular health study is provided. Connections to causal inference are discussed.

Journal ArticleDOI
TL;DR: In this paper, the authors considered the asymptotic distribution of the likelihood ratio statistic T for testing a subset of the parameter of interest θ, where θ = (γ, η) and H_0: γ = γ_0, based on the pseudolikelihood L(θ, ϕ̂), where ϕ̂ is a consistent estimator of the nuisance parameter ϕ.
Abstract: This paper considers the asymptotic distribution of the likelihood ratio statistic T for testing a subset of the parameter of interest θ, where θ = (γ, η) and H_0: γ = γ_0, based on the pseudolikelihood L(θ, ϕ̂), where ϕ̂ is a consistent estimator of the nuisance parameter ϕ. We show that the asymptotic distribution of T under H_0 is a weighted sum of independent chi-squared variables. Some sufficient conditions are provided for the limiting distribution to be a chi-squared variable. When the true value of the parameter of interest, θ_0, or the true value of the nuisance parameter, ϕ_0, lies on the boundary of the parameter space, the problem is shown to be asymptotically equivalent to the problem of testing the restricted mean of a multivariate normal distribution based on one observation from a multivariate normal distribution with misspecified covariance matrix, or from a mixture of multivariate normal distributions. A variety of examples are provided for which the limiting distributions of T may be mixtures of chi-squared variables. We conducted simulation studies to examine the performance of the likelihood ratio test statistics in variance component models and teratological experiments.

Journal ArticleDOI
Yongtao Guan, Ye Shen
TL;DR: In this paper, the authors introduce a new estimation method for parametric intensity function models of inhomogeneous spatial point processes based on weighted estimating equations, which can incorporate information on both inhomogeneity and dependence of the process.
Abstract: We introduce a new estimation method for parametric intensity function models of inhomogeneous spatial point processes based on weighted estimating equations. The weights can incorporate information on both inhomogeneity and dependence of the process. Simulations show that significant efficiency gains can be achieved for non-Poisson processes, compared to the Poisson maximum likelihood estimator. An application to tropical forest data illustrates the use of the proposed method.
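
For concreteness, a first-order weighted estimating equation of the kind referred to above can be written as follows, with w a generic weight function and W the observation window (a standard unbiased form; the paper's contribution lies in the specific choice of weights, which is not reproduced here):

\[
U(\theta) \;=\; \sum_{x \in X \cap W} w(x)\, \frac{\nabla_\theta \lambda_\theta(x)}{\lambda_\theta(x)}
\;-\; \int_W w(u)\, \nabla_\theta \lambda_\theta(u)\, \mathrm{d}u \;=\; 0,
\]

which is unbiased by the Campbell formula whenever the intensity model λ_θ is correctly specified, for any choice of weight function w; taking w ≡ 1 recovers the Poisson maximum likelihood score.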

Journal ArticleDOI
TL;DR: This article develops point and interval estimation procedures for t-year mortality rates conditional on the estimated parametric risk score, illustrated with a dataset from a large clinical trial with post-myocardial infarction patients.
Abstract: For modern evidence-based medicine, decisions on disease prevention or management strategies are often guided by a risk index system. For each individual, the system uses his/her baseline information to estimate the risk of experiencing a future disease-related clinical event. Such a risk scoring scheme is usually derived from an overly simplified parametric model. To validate a model-based procedure, one may perform a standard global evaluation via, for instance, a receiver operating characteristic analysis. In this article, we propose a method to calibrate the risk index system at a subject level. Specifically, we develop point and interval estimation procedures for t-year mortality rates conditional on the estimated parametric risk score. The proposals are illustrated with a dataset from a large clinical trial with post-myocardial infarction patients.

Journal ArticleDOI
TL;DR: In this article, a dependence measure derived from a bivariate locally stationary wavelet time series model was proposed to estimate the non-stationary dependence between neural time series, since changes in the dependence structure are presumed to reflect functional interactions between neuronal populations.
Abstract: Large volumes of neuroscience data comprise multiple, nonstationary electrophysiological or neuroimaging time series recorded from different brain regions. Accurately estimating the dependence between such neural time series is critical, since changes in the dependence structure are presumed to reflect functional interactions between neuronal populations. We propose a new dependence measure, derived from a bivariate locally stationary wavelet time series model. Since wavelets are localized in both time and scale, this approach leads to a natural, local and multi-scale estimate of nonstationary dependence. Our methodology is illustrated by application to a simulated example, and to electrophysiological data relating to interactions between the rat hippocampus and prefrontal cortex during working memory and decision making.