SFB 649 Discussion Paper 2012-051
Using transfer entropy to measure information flows between financial markets
Thomas Dimpfl*
Franziska J. Peter*
* Universität Tübingen, Germany
This research was supported by the Deutsche Forschungsgemeinschaft through the SFB 649 "Economic Risk".
http://sfb649.wiwi.hu-berlin.de
ISSN 1860-5664
SFB 649, Humboldt-Universität zu Berlin
Spandauer Straße 1, D-10178 Berlin

Using transfer entropy to measure information
flows between financial markets
Thomas Dimpfl
Franziska J. Peter
August 23, 2012
Abstract
We use transfer entropy to quantify information flows between financial markets and propose a suitable bootstrap procedure for statistical inference. Transfer entropy is a model-free measure designed as the Kullback-Leibler distance of transition probabilities. Our approach allows us to determine, measure and test for information transfer without being restricted to linear dynamics. In our empirical application, we examine the importance of the credit default swap market relative to the corporate bond market for the pricing of credit risk. We also analyze the dynamic relation between market risk and credit risk, proxied by the VIX and the iTraxx Europe, respectively. We conduct the analyses for pre-crisis, crisis and post-crisis periods.
Keywords: entropy; information flow; non-linear dynamics; price discovery; credit risk; CDS.
JEL classifications: C14, G15
University of Tübingen
Corresponding author: Franziska Julia Peter, University of Tübingen, Department of Statistics, Econometrics and Empirical Economics, Mohlstraße 36, 72074 Tübingen, Germany, Phone: +49 7071 29 78165. Fax: +49 7071 29 5546. Email: franziska-julia.peter@uni-tuebingen.de.
We thank participants of the annual meeting of the Society of Nonlinear Dynamics and Econometrics, Washington 2011, of the interdisciplinary workshop on Econometric and Statistical Modelling of Multivariate Time Series, UCL Louvain-la-Neuve, 2011, and of the 2012 Midwest Financial Association meeting, New Orleans, and two anonymous referees for helpful comments. This research project is funded by the German Research Foundation (DFG) through grant GR 2288; data are provided through the DFG SFB 649 "Economic Risk".

1 Introduction
Detecting and measuring interactions between different time series is the subject of research in various areas. In finance, the informational link between different markets is of particular interest. Yet, there is only a small range of methods to empirically examine these linkages. The predominant concept is that of Granger causality (see Granger, 1969), which is widely applied to detect lead-lag relationships between time series. However, the insights that can be gained from this method are limited to the mere existence of information flows rather than their quantification. An actual measure of information transfer between financial markets exists only for a particular setting of empirical applications: if prices in different markets refer to the same underlying asset, price discovery measures such as the Hasbrouck (1995) information shares can be used to determine the informationally dominant market. This approach requires a cointegration relationship between the different time series and only provides a sensible interpretation of the results if cointegration is implied by theory and supported by the data. Furthermore, Granger causality and price discovery measures are based on a Vector Autoregressive (VAR) or Vector Error Correction Model (VECM) framework, which imposes rather restrictive assumptions concerning the underlying (linear) dynamics.
As an alternative, we propose to apply the concept of transfer entropy to measure information flows between different financial time series. Transfer entropy is a model-free measure which is designed as the Kullback-Leibler distance of transition probabilities. Under weak assumptions this approach allows us to quantify information transfer without being restricted to linear dynamics. It is therefore an appealing tool if the assumptions required by the standard models are not supported by the data. There exist only a few studies that apply transfer entropy within the context of financial markets. Kwon and Yang (2008a), for example, analyze the information flow between the S&P 500, the Dow Jones index and selected individual companies on a daily basis. Baek, Jung, Kwon, and Moon (2005) examine the information transfer between groups of NYSE listed stocks to determine market sensitive and market leading companies, and Kwon and Yang (2008b) investigate the strength and direction of information transfer between various stock indices using transfer entropy. The measurement of interactions between the Indian stock and commodity market is the subject of a study by Reddy and Sebastin (2009).
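For two discretized series, the transfer entropy from Y to X with one lag on each side is T(Y→X) = Σ p(x_{t+1}, x_t, y_t) log[ p(x_{t+1}|x_t, y_t) / p(x_{t+1}|x_t) ], i.e. the Kullback-Leibler distance between the two transition laws. A minimal plug-in estimator for symbolized (binned) series — an illustrative sketch, not the authors' implementation — might look like:

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of TE(Y -> X) in bits for symbolized series,
    with one lag on each side (k = l = 1):
      TE = sum p(x_{t+1}, x_t, y_t) * log2[ p(x_{t+1}|x_t, y_t) / p(x_{t+1}|x_t) ]
    """
    n = len(x) - 1
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))  # (x_{t+1}, x_t, y_t)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))        # (x_t, y_t)
    pairs_xx = Counter(zip(x[1:], x[:-1]))         # (x_{t+1}, x_t)
    singles = Counter(x[:-1])                      # x_t
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_cond_xy = c / pairs_xy[(x0, y0)]           # p(x_{t+1} | x_t, y_t)
        p_cond_x = pairs_xx[(x1, x0)] / singles[x0]  # p(x_{t+1} | x_t)
        te += (c / n) * math.log2(p_cond_xy / p_cond_x)
    return te

# x copies y with a one-period delay, so all information flows from y to x.
rng = random.Random(42)
y = [rng.randint(0, 1) for _ in range(5000)]
x = [0] + y[:-1]
print(transfer_entropy(x, y))  # close to 1 bit
print(transfer_entropy(y, x))  # close to 0 bits
```

The asymmetry of the two estimates is what makes the measure directional: unlike a correlation, TE(Y→X) and TE(X→Y) can differ sharply.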
In all these finance studies, statistical inference has not been accounted for. We address this issue and propose to bootstrap the underlying Markov process (see Horowitz, 2003) in order to derive the distribution of the estimates and assess the statistical significance of the estimated information flows. Furthermore, we address the issue of correcting for the small-sample bias of the transfer entropy estimator in a novel way: we suggest using the bootstrapped distribution under the null hypothesis of no information flow to correct for finite-sample bias, rather than using shuffled data, which has been the standard procedure in the information theory literature so far (see, inter alia, Papana, Kugiumtzis, and Larsson, 2001).
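The general idea can be sketched as follows: resample the source series from a Markov chain fitted to it (preserving its own dynamics but destroying any flow to the other series), and use the resulting null distribution both for a p-value and as the finite-sample bias estimate. This is an illustrative sketch under simplifying assumptions (first-order chains, a compact plug-in estimator defined inline), not the authors' exact procedure:

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in TE(Y -> X) in bits, one lag on each side (compact version)."""
    n = len(x) - 1
    tri = Counter(zip(x[1:], x[:-1], y[:-1]))
    xy = Counter(zip(x[:-1], y[:-1]))
    xx = Counter(zip(x[1:], x[:-1]))
    sx = Counter(x[:-1])
    return sum((c / n) * math.log2((c / xy[x0, y0]) / (xx[x1, x0] / sx[x0]))
               for (x1, x0, y0), c in tri.items())

def markov_bootstrap(series, rng):
    """Resample a series from its fitted first-order Markov transition law."""
    successors = {}
    for a, b in zip(series[:-1], series[1:]):
        successors.setdefault(a, []).append(b)
    out = [series[0]]
    for _ in range(len(series) - 1):
        nxt = successors.get(out[-1])
        out.append(rng.choice(nxt) if nxt else rng.choice(series))
    return out

def effective_te(x, y, n_boot=200, seed=1):
    """Effective transfer entropy and bootstrap p-value for TE(Y -> X):
    the null distribution comes from resampled versions of y, and its
    mean serves as the finite-sample bias correction."""
    rng = random.Random(seed)
    te_hat = transfer_entropy(x, y)
    null = [transfer_entropy(x, markov_bootstrap(y, rng)) for _ in range(n_boot)]
    p_value = sum(t >= te_hat for t in null) / n_boot
    return te_hat - sum(null) / n_boot, p_value

rng = random.Random(0)
y = [rng.randint(0, 1) for _ in range(2000)]
x = [0] + y[:-1]     # x follows y with a one-period lag
ete, p = effective_te(x, y)
print(ete, p)        # large effective transfer entropy, p-value near zero
```

Subtracting the mean of the null distribution rather than a shuffled-data benchmark is the bias correction proposed here; the same draws deliver the significance test for free.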
This paper includes two empirical applications of the transfer entropy methodology to measure information flows between financial markets. First, we examine the information flows between the credit default swap (CDS) and the bond market, analyzing data on 27 iTraxx Europe companies. Both markets reflect the price of credit risk for the same reference entity. Therefore, as outlined in Blanco, Brennan, and Marsh (2005), assuming a cointegration relationship between the time series is plausible. Our dataset consists of a sample of CDS and bond market data of 27 European companies from January 2004 to December 2011. Due to market imperfections, cointegration is not always supported for each and every company (as is also the case in the study of Blanco et al., 2005), so that a VECM cannot be reasonably applied. Using transfer entropy, we find in general significant bi-directional information flows between the CDS and the bond market. Our results point towards a slight dominance of the CDS market, in particular during the financial crisis, and are thus in line with previous findings in the literature such as Blanco et al. (2005) and Coudert and Gex (2010).
In our second application we analyze factors that might influence the CDS market by examining the link between market risk and credit risk. Thereby, we follow Figuerola-Ferretti and Paraskevopoulos (2009) and consider the dynamic relation between the iTraxx and the VIX. Again, we draw on a long sample of iTraxx and VIX data ranging from March 2005 to November 2011. We find that in general the transfer entropy estimates for the flow of information from the VIX to the iTraxx are statistically significant and exceed the information flow from the iTraxx to the VIX.
The remainder of the paper is organized as follows. Section 2 introduces the
concept of transfer entropy. Diagnostics and inference are discussed in Section 3.
Section 4 contains the empirical applications and Section 5 concludes.
2 Measuring information flows by transfer entropy
The concept of transfer entropy is best understood within the context of information theory. In the era of early telecommunications, when communication was based on Morse code, Hartley (1928) introduced a measure of information relying on the logarithm of the number of all possible symbol sequences that can occur. The general aim was to encode messages optimally so that they could be transmitted more quickly. For that purpose it was necessary to quantify the information that can be derived from a specific sequence of transmitted symbols.
Consider the following example: when flipping a fair coin, there are two equally likely outcomes, heads or tails. According to Hartley (1928), the information (denoted by H) that can be gained from flipping a coin once is given by H = log(2). If the base of the logarithm is 2, H = log_2(2) = 1 and the measurement unit is bits. Consequently, n flips of the coin yield n bits of information (H = log_2(2^n) = n × log_2(2) = n), and we would need n binary digits to specify the resulting sequence (such as 1 for heads and 0 for tails).
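The coin-flip arithmetic can be checked numerically; a minimal sketch in Python:

```python
import math

def hartley_information(n_outcomes):
    """Bits of information in one draw among equally likely outcomes."""
    return math.log2(n_outcomes)

print(hartley_information(2))       # one fair coin flip: 1.0 bit
print(hartley_information(2 ** 8))  # 8 flips -> 2**8 sequences: 8.0 bits
```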
In the case of symbols that are not equally likely, but occur with different probabilities p_j, the amount of information gained from a specific symbol j is given by log(1/p_j). The average amount of information (per symbol) one can get from such a sequence is defined as H = ∑_{j=1}^{n} p_j log(1/p_j), where n is the number of distinct symbols. This leads to the general formula of Shannon (1948): assume that J is a discrete variable with probability distribution p(j), where j labels the different
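The average-information formula H = ∑_j p_j log_2(1/p_j) can be checked with a short sketch (illustrative Python, with plug-in probabilities from symbol counts):

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Average information per symbol in bits: H = sum_j p_j * log2(1/p_j)."""
    counts = Counter(symbols)
    n = len(symbols)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# Four equally likely symbols reproduce Hartley's measure: log2(4) = 2 bits.
print(shannon_entropy("ABCD"))  # 2.0
# Unequal probabilities lower the average information per symbol.
print(shannon_entropy("AAAB"))  # ~0.81
```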

References
Fraser, A. M., and Swinney, H. L. (1986). Independent coordinates for strange attractors from mutual information. Physical Review A, 33(2), 1134–1140.
Granger, C. W. J. (1969). Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37(3), 424–438.
Merton, R. C. (1974). On the pricing of corporate debt: The risk structure of interest rates. Journal of Finance, 29(2), 449–470.
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423.