Using transfer entropy to measure information flows between financial markets
Frequently Asked Questions (13)
Q2. What is the subject of a study by Reddy and Sebastin (2009)?
The measurement of interactions between the Indian stock and commodity markets is the subject of the study by Reddy and Sebastin (2009).
Q3. How do the authors derive the distribution of the transfer entropy estimates?
In order to derive the distribution of the transfer entropy estimates themselves, the authors have to alter the proposed bootstrap procedure, since the dependencies between the two time series need to be preserved.
Q4. How do the authors calculate the spreads of bonds?
In order to calculate the corresponding bond spreads, the authors have to construct a time series of bond yields that matches the constant five-year maturity of the CDS time series.
Q5. Why is the intermediate bin kept large?
Since the time series are likely to contain a considerable amount of noise due to the illiquidity of the bond market and the OTC trading of both assets, the intermediate bin is kept rather large.
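The discretization behind this can be sketched as follows; the three-bin scheme with a deliberately wide middle bin matches the answer above, but the 5%/95% quantile cut-offs are an illustrative assumption, not the paper's exact choice.

```python
import numpy as np

def discretize(returns, lower_q=0.05, upper_q=0.95):
    """Map returns to three symbols. The middle bin is kept deliberately
    wide so that small, noise-driven moves all fall into the same state.
    The quantile cut-offs here are illustrative, not the paper's values."""
    lo, hi = np.quantile(returns, [lower_q, upper_q])
    # 0 = large negative move, 1 = middle (noise) bin, 2 = large positive move
    return np.digitize(returns, [lo, hi])

rng = np.random.default_rng(0)
symbols = discretize(rng.normal(size=1000))
print(np.bincount(symbols))  # roughly [50, 900, 50]
```

With a wide middle bin, the bulk of the observations collapses into the "no large move" state, so illiquidity-driven noise does not masquerade as information flow.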
Q6. What is the standard method to measure information transmission between financial markets?
The standard methodology to measure information transmission between financial markets is the information share technique of Hasbrouck (1995), which relies on the existence of a cointegration relationship.
Q7. What is the way to correct for the small sample bias of the transfer entropy?
The authors address the issue of correcting for the small sample bias of the transfer entropy estimator in a novel way: they suggest using the bootstrapped distribution under the null hypothesis of no information flow to correct for finite sample bias, rather than using shuffled data, which has been the standard procedure in the information theory literature so far (see, inter alia, Papana, Kugiumtzis, and Larsson, 2001).
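A minimal sketch of this kind of bias correction for symbolic series is given below. For brevity it generates the no-flow null by simple shuffling (the standard procedure the answer contrasts with), rather than the paper's bootstrap that preserves the serial dependence of the source series; the function names are illustrative.

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of transfer entropy from y to x with one lag each:
    TE = sum p(x_t1, x_t, y_t) * log2[ p(x_t1 | x_t, y_t) / p(x_t1 | x_t) ]
    for equally long, already discretized (symbolic) series x and y."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))
    pairs_xy = Counter(zip(x[:-1], y[:-1]))
    pairs_xx = Counter(zip(x[1:], x[:-1]))
    singles = Counter(x[:-1])
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_xy = c / pairs_xy[(x0, y0)]
        p_cond_x = pairs_xx[(x1, x0)] / singles[x0]
        te += p_joint * math.log2(p_cond_xy / p_cond_x)
    return te

def effective_te(x, y, reps=200, seed=0):
    """Effective transfer entropy: subtract the mean TE estimated under the
    null of no information flow. Here the null is produced by shuffling y;
    the paper's refinement instead bootstraps y so that its own serial
    dependence is preserved while the link to x is destroyed."""
    rng = random.Random(seed)
    y_null = list(y)
    bias = 0.0
    for _ in range(reps):
        rng.shuffle(y_null)
        bias += transfer_entropy(x, y_null)
    return transfer_entropy(x, y) - bias / reps
```

Subtracting the null-mean rather than a single shuffled estimate reduces the variance of the correction; as noted in Q9 below, the corrected value can come out negative when the bias estimate exceeds the raw estimate.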
Q8. Why do empirical studies use a VECM framework?
These studies also note that, due to market imperfections, cointegration is not always supported by the data, so a VECM might not suit the observed time series.
Q9. What is the effect of the iTraxx on the transfer entropy?
In three cases, the estimate of the bias became larger than the estimate of transfer entropy, resulting in negative values of the effective transfer entropy (irrespective of using shuffling or bootstrapping for bias correction).
Q10. What is the main advantage of transfer entropy?
Even if cointegration is supported, but the information share bounds are very large, transfer entropy may be used as a further back-check to determine the informationally dominant market.
Q11. How do the authors calculate the yield of a bond with five years to maturity?
The authors then linearly interpolate these bond yields obtaining a yield curve for different maturities which is in turn used to predict the yield of an artificial bond with five years to maturity.
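The interpolation step can be sketched with `np.interp`; the maturities and yields below are made-up inputs standing in for the matched bond sample described above.

```python
import numpy as np

# Hypothetical maturities (years) and yields (percent) of observed bonds;
# real inputs would come from the bond sample matched to the CDS series.
maturities = np.array([2.0, 4.0, 7.0, 10.0])
yields = np.array([1.8, 2.3, 2.9, 3.2])

# Linearly interpolate the yield curve and read off the five-year point,
# mirroring the construction of the artificial constant-maturity bond.
five_year_yield = np.interp(5.0, maturities, yields)
print(five_year_yield)  # ≈ 2.5
```

`np.interp` requires the maturities to be sorted in increasing order, which a freshly constructed yield curve satisfies by construction.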
Q12. What is the probability of observing I at time t+1?
Let I be a stationary Markov process of order k; then, for the probability of observing I at time t+1 in state i conditional on the k previous observations, it holds that p(i_{t+1} | i_t, ..., i_{t−k+1}) = p(i_{t+1} | i_t, ..., i_{t−k}).
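The Markov property can be checked numerically: for a simulated order-1 chain, conditioning on an extra lag should leave the estimated transition probability essentially unchanged. The transition matrix below is an arbitrary illustrative choice.

```python
import numpy as np

# Simulate a binary order-1 Markov chain with an illustrative transition
# matrix P, where P[a, b] = p(next state = b | current state = a).
rng = np.random.default_rng(42)
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
n = 20000
chain = np.empty(n, dtype=int)
chain[0] = 0
for t in range(1, n):
    chain[t] = rng.random() < P[chain[t - 1], 1]

# Compare p(i_{t+1} = 1 | i_t = 0) with p(i_{t+1} = 1 | i_t = 0, i_{t-1} = 1):
cur, prev, nxt = chain[1:-1], chain[:-2], chain[2:]
p_one_lag = nxt[cur == 0].mean()
p_two_lags = nxt[(cur == 0) & (prev == 1)].mean()
print(round(p_one_lag, 2), round(p_two_lags, 2))  # both ≈ 0.30
```

For an order-k process the analogous check holds once k lags are conditioned on, which is exactly the property the transfer entropy construction exploits.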
Q13. What is the average amount of information that one can get from a sequence?
The average amount (per symbol) of information one can get from such a sequence is defined as H = ∑_{j=1}^{n} p_j log(1/p_j), where n is the number of distinct symbols and p_j is the probability of symbol j.
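As a minimal illustration of this definition, the entropy of a symbol sequence can be estimated from relative frequencies; this is a plain plug-in sketch (in bits, i.e. base-2 logarithm), not the paper's estimator.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy H = sum_j p_j * log2(1/p_j) of a symbol sequence,
    with p_j taken as the relative frequency of symbol j."""
    counts = Counter(symbols)
    total = len(symbols)
    return sum((c / total) * math.log2(total / c) for c in counts.values())

# A balanced two-symbol sequence carries one bit per symbol:
print(shannon_entropy(["H", "T", "H", "T"]))  # 1.0
```

A constant sequence yields H = 0, and H is maximal (log2 n bits) when all n symbols are equally frequent.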