Joint Learning of Topology and Invertible Nonlinearities from Multiple Time Series

TL;DR: This paper proposes a nonlinear modelling technique for multiple time series that has a complexity similar to that of a linear vector autoregressive (VAR) model, yet can account for nonlinear interactions in each sensor variable.
Abstract: Discovery of causal dependencies among time series has been tackled in the past either with linear models or with kernel- or deep-learning-based nonlinear models, the latter entailing great complexity. This paper proposes a nonlinear modelling technique for multiple time series that has a complexity similar to that of a linear vector autoregressive (VAR) model, yet can account for nonlinear interactions in each sensor variable. The modelling assumption is that the time series are generated in two steps: i) a VAR process in a latent space, and ii) a set of invertible nonlinear mappings applied component-wise, mapping each sensor variable into the latent space. Successful identification of the support of the VAR coefficients reveals the topology of the interconnected system. The proposed method enforces sparsity on the VAR coefficients and models the component-wise nonlinearities using invertible neural networks. To solve the estimation problem, a technique combining proximal gradient descent (PGD) and projected gradient descent is designed. Experiments conducted on real and synthetic data sets show that the proposed algorithm provides improved identification of the support of the VAR coefficients, while also improving prediction capability.
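
A minimal, illustrative sketch of this two-step model and of the alternating proximal/projected updates is given below. It is not the authors' implementation: the invertible map g_i(y) = y + b_i*tanh(y) with b_i >= 0 stands in for the paper's invertible neural networks, and plain entrywise soft-thresholding stands in for the sparsity-enforcing proximal step; all sizes and names are hypothetical.

# Sketch only: latent sparse VAR + component-wise invertible maps,
# fit by alternating a proximal step on A and a projected step on b.
import torch

torch.manual_seed(0)
N, P, T = 5, 2, 400                      # sensors, VAR order, time samples (toy)
y = torch.randn(T, N)                    # toy observations; use real series here

b = torch.full((N,), 0.1, requires_grad=True)    # nonlinearity parameters, b >= 0
A = torch.zeros(P, N, N, requires_grad=True)     # VAR coefficient matrices
lam, step = 1e-2, 1e-2                           # sparsity weight, step size

for it in range(200):
    z = y + b * torch.tanh(y)                    # component-wise invertible map
    pred = sum(z[P - p - 1:T - p - 1] @ A[p].T for p in range(P))
    loss = ((z[P:] - pred) ** 2).mean()          # one-step VAR fit in latent space
    loss.backward()
    with torch.no_grad():
        # Proximal gradient step: gradient descent + soft-thresholding on A
        A_new = A - step * A.grad
        A.copy_(torch.sign(A_new) * torch.clamp(A_new.abs() - step * lam, min=0))
        # Projected gradient step: keep b in the set where g_i stays invertible
        b.copy_(torch.clamp(b - step * b.grad, min=0.0))
        A.grad.zero_(); b.grad.zero_()

# The support of A (thresholded here for display) plays the role of the
# interconnection topology mentioned in the abstract.
topology = (A.detach().abs().sum(dim=0) > 1e-3).int()
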
References
Journal Article
Alex Tank, Ian Covert, Nicholas J. Foti, Ali Shojaie, Emily B. Fox
TL;DR: This paper proposes a class of nonlinear methods that apply structured multilayer perceptrons (MLPs) or recurrent neural networks (RNNs) combined with sparsity-inducing penalties on the weights.
Abstract: While most classical approaches to Granger causality detection assume linear dynamics, many interactions in applied domains, like neuroscience and genomics, are inherently nonlinear. In these cases, using linear models may lead to inconsistent estimation of Granger causal interactions. We propose a class of nonlinear methods by applying structured multilayer perceptrons (MLPs) or recurrent neural networks (RNNs) combined with sparsity-inducing penalties on the weights. By encouraging specific sets of weights to be zero---in particular through the use of convex group-lasso penalties---we can extract the Granger causal structure. To further contrast with traditional approaches, our framework naturally enables us to efficiently capture long-range dependencies between series either via our RNNs or through an automatic lag selection in the MLP. We show that our neural Granger causality methods outperform state-of-the-art nonlinear Granger causality methods on the DREAM3 challenge data. This data consists of nonlinear gene expression and regulation time courses with only a limited number of time points. The successes we show in this challenging dataset provide a powerful example of how deep learning can be useful in cases that go beyond prediction on large datasets. We likewise illustrate our methods in detecting nonlinear interactions in a human motion capture dataset.
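
A rough sketch of the componentwise-MLP idea with a group-lasso proximal step is shown below; variable names, sizes, and the single target series are illustrative assumptions, not the authors' code.

# Sketch only: one small MLP predicts series 0 from the past lags of all
# series; a group-lasso proximal step zeroes out non-causal input blocks.
import torch

torch.manual_seed(0)
N, lag, T, H = 4, 3, 500, 16              # series, lags, samples, hidden units
x = torch.randn(T, N)                     # toy multivariate time series

# Lagged design matrix: row t holds [x[t-1], ..., x[t-lag]] flattened
X = torch.cat([x[lag - k - 1:T - k - 1] for k in range(lag)], dim=1)
target = x[lag:, 0]                       # predict series 0; repeat per series

W1 = torch.empty(H, N * lag); torch.nn.init.xavier_uniform_(W1); W1.requires_grad_(True)
W2 = torch.empty(1, H); torch.nn.init.xavier_uniform_(W2); W2.requires_grad_(True)
b1 = torch.zeros(H, requires_grad=True)

lam, step = 1e-2, 1e-2
for it in range(300):
    pred = torch.relu(X @ W1.T + b1) @ W2.T
    loss = ((pred.squeeze() - target) ** 2).mean()
    loss.backward()
    with torch.no_grad():
        for w in (W1, b1, W2):
            w -= step * w.grad
            w.grad.zero_()
        # Group-lasso proximal step: shrink the first-layer weight block of
        # each input series; a block that is exactly zero means "series j does
        # not Granger-cause series 0".
        for j in range(N):
            cols = [j + k * N for k in range(lag)]
            norm = W1[:, cols].norm()
            W1[:, cols] *= max(0.0, 1.0 - step * lam / (norm.item() + 1e-12))

# Estimated Granger-causal parents of series 0
parents = [j for j in range(N)
           if W1[:, [j + k * N for k in range(lag)]].norm().item() > 1e-3]
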

84 citations

Journal Article
TL;DR: The merits of kernel-based methods are extended here to the task of learning the effective brain connectivity, and an efficient regularized estimator is put forth to leverage the edge sparsity inherent to real-world complex networks.
Abstract: Structural equation models (SEMs) and vector autoregressive models (VARMs) are two broad families of approaches that have been shown useful in effective brain connectivity studies. While VARMs postulate that a given region of interest in the brain is directionally connected to another one by virtue of time-lagged influences, SEMs assert that directed dependencies arise due to instantaneous effects, and may even be adopted when nodal measurements are not necessarily multivariate time series. To unify these complementary perspectives, linear structural vector autoregressive models (SVARMs) that leverage both instantaneous and time-lagged nodal data have recently been put forth. Albeit simple and tractable, linear SVARMs are quite limited since they are incapable of modeling nonlinear dependencies between neuronal time series. To this end, the overarching goal of the present paper is to considerably broaden the span of linear SVARMs by capturing nonlinearities through kernels, which have recently emerged as a powerful nonlinear modeling framework in canonical machine learning tasks, e.g., regression, classification, and dimensionality reduction. The merits of kernel-based methods are extended here to the task of learning the effective brain connectivity, and an efficient regularized estimator is put forth to leverage the edge sparsity inherent to real-world complex networks. Judicious kernel choice from a preselected dictionary of kernels is also addressed using a data-driven approach. Numerical tests on ECoG data captured through a study on epileptic seizures demonstrate that it is possible to unveil previously unknown directed links between brain regions of interest.
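
The following is a rough numpy sketch of the kernel regression underlying this idea, assuming an RBF kernel on each node's lagged signal; the paper's estimator uses a sparsity-promoting group penalty and a dictionary of kernels, while plain ridge regression is used here only to keep the sketch short.

# Sketch only: regress a node on kernel features of every node's lagged
# signal; per-source coefficient blocks play the role of directed edges.
import numpy as np

rng = np.random.default_rng(0)
N, T, lag = 4, 300, 1
y = rng.standard_normal((T, N))           # toy nodal series (e.g. ECoG channels)

def rbf_gram(u, v, gamma=1.0):
    d = u[:, None] - v[None, :]
    return np.exp(-gamma * d ** 2)

target_node = 0
Y = y[lag:, target_node]                  # samples to be explained
blocks = [rbf_gram(y[:-lag, j], y[:-lag, j]) for j in range(N)]
Phi = np.hstack(blocks)                   # kernel features, one block per source node

mu = 1e-1                                 # ridge weight (stand-in for the group penalty)
coef = np.linalg.solve(Phi.T @ Phi + mu * np.eye(Phi.shape[1]), Phi.T @ Y)

# Directed-edge strength j -> target_node: norm of node j's coefficient block
m = T - lag
edge_strength = [np.linalg.norm(coef[j * m:(j + 1) * m]) for j in range(N)]
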

41 citations

Book Chapter
31 Aug 2010
TL;DR: The concept of Granger causality is described, and recent advances and applications in gene expression regulatory networks are explored using extensions of vector autoregressive models.
Abstract: Understanding the molecular biological processes underlying disease onset requires a detailed description of which genes are expressed at which time points and how their products interact in so-called cellular networks. High-throughput technologies, such as gene expression analysis using DNA microarrays, have been extensively used with this purpose. As a consequence, mathematical methods aiming to infer the structure of gene networks have been proposed in the last few years. Granger causality-based models are among them, presenting well established mathematical interpretations to directionality at the edges of the regulatory network. Here, we describe the concept of Granger causality and explore recent advances and applications in gene expression regulatory networks by using extensions of Vector Autoregressive models.
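
As a concrete reminder of the underlying test, the toy numpy example below runs the standard linear Granger check (restricted versus full least-squares fit and an F statistic) on simulated data; it is not taken from the chapter, and the data are synthetic rather than gene-expression measurements.

# Sketch only: does x2 Granger-cause x1? Compare a fit of x1 on its own past
# with a fit that also includes x2's past, via an F statistic.
import numpy as np

rng = np.random.default_rng(1)
T, lag = 200, 2
x2 = rng.standard_normal(T)
x1 = np.zeros(T)
for t in range(1, T):                      # x2 drives x1 with a one-step delay
    x1[t] = 0.5 * x1[t - 1] + 0.8 * x2[t - 1] + 0.1 * rng.standard_normal()

def lagged(v):
    return np.column_stack([v[lag - k - 1:T - k - 1] for k in range(lag)])

y = x1[lag:]
X_restricted = np.column_stack([np.ones(T - lag), lagged(x1)])   # own past only
X_full = np.column_stack([X_restricted, lagged(x2)])             # plus x2's past

def rss(X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

F = ((rss(X_restricted) - rss(X_full)) / lag) / (rss(X_full) / (T - lag - X_full.shape[1]))
# A large F statistic is evidence that x2 Granger-causes x1.
print(F)
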

27 citations

Proceedings Article
24 Aug 2014
TL;DR: This paper proposes an exact and efficient method for solving the restricted problem in dual decomposition by reducing it to Euclidean projection onto the positive simplex, and demonstrates that the method empirically achieves state-of-the-art convergence on several large-scale high-dimensional datasets.
Abstract: Dual decomposition methods are the current state-of-the-art for training multiclass formulations of Support Vector Machines (SVMs). At every iteration, dual decomposition methods update a small subset of dual variables by solving a restricted optimization problem. In this paper, we propose an exact and efficient method for solving the restricted problem. In our method, the restricted problem is reduced to the well-known problem of Euclidean projection onto the positive simplex, which we can solve exactly in expected O(k) time, where k is the number of classes. We demonstrate that our method empirically achieves state-of-the-art convergence on several large-scale high-dimensional datasets.
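
For reference, the simplex-projection subproblem that the restricted updates reduce to can be solved with the standard sort-based routine below; this O(k log k) variant is simpler than the expected-O(k) method described in the abstract, but it solves the same projection exactly.

# Sketch only: Euclidean projection onto the simplex {w : w >= 0, sum(w) = z}.
import numpy as np

def project_simplex(v, z=1.0):
    """Return argmin_w ||w - v||^2 subject to w >= 0 and sum(w) = z."""
    u = np.sort(v)[::-1]                                   # decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - z) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - z) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

w = project_simplex(np.array([0.5, 2.0, -1.0, 0.3]))       # -> [0., 1., 0., 0.]
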

23 citations

Posted Content
TL;DR: This work develops an approach to nonlinear Granger causality detection using multilayer perceptrons where the input to the network is the past time lags of all series and the output is the future value of a single series.
Abstract: While most classical approaches to Granger causality detection repose upon linear time series assumptions, many interactions in neuroscience and economics applications are nonlinear. We develop an approach to nonlinear Granger causality detection using multilayer perceptrons where the input to the network is the past time lags of all series and the output is the future value of a single series. A sufficient condition for Granger non-causality in this setting is that all of the outgoing weights of the input data, the past lags of a series, to the first hidden layer are zero. For estimation, we utilize a group lasso penalty to shrink groups of input weights to zero. We also propose a hierarchical penalty for simultaneous Granger causality and lag estimation. We validate our approach on simulated data from both a sparse linear autoregressive model and the sparse and nonlinear Lorenz-96 model.
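
The group-lasso shrinkage described above amounts to block soft-thresholding of the first-layer weights grouped by input series; a hypothetical numpy version of that proximal operator (not the authors' code, with made-up sizes) is sketched below. A group driven exactly to zero encodes Granger non-causality of the corresponding series.

# Sketch only: block soft-thresholding of column groups of a weight matrix.
import numpy as np

def prox_group_lasso(W, groups, thresh):
    """Shrink each column group of W toward zero; zero it out if its norm is small."""
    W = W.copy()
    for cols in groups:
        norm = np.linalg.norm(W[:, cols])
        W[:, cols] *= max(0.0, 1.0 - thresh / (norm + 1e-12))
    return W

# Hypothetical sizes: 3 input series, 2 lags each, 4 hidden units;
# each group collects all lags of one series.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 6))
groups = [[0, 3], [1, 4], [2, 5]]
W1 = prox_group_lasso(W1, groups, thresh=2.5)
non_causal = [j for j, cols in enumerate(groups) if np.linalg.norm(W1[:, cols]) == 0.0]
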

19 citations