Transfer Entropy
[book]
2018
Statistical relationships among the variables of a complex system reveal much about its physical behavior. Identifying the relevant variables and characterizing their interactions are therefore crucial for a better understanding of a complex system. Correlation-based techniques have been widely used to elucidate linear statistical dependencies in many science and engineering applications. For the analysis of nonlinear dependencies, however, information-theoretic quantities such as Mutual Information (MI) and Transfer Entropy (TE) have proven superior. MI quantifies the amount of information obtained about one random variable by observing another, and it is symmetric. As an asymmetric measure, TE quantifies the directed (time-asymmetric) transfer of information between random processes and is therefore related to measures of causality. In the literature, Granger causality has been applied in many fields, such as biomedicine, atmospheric sciences, fluid dynamics, finance, and neuroscience. Despite its success in identifying couplings between interacting variables, its reliance on structural models restricts its performance. Unlike Granger causality, TE is estimated directly from data and does not suffer from such constraints. In the specific case of Gaussian-distributed random variables, the equivalence of TE and Granger causality has been proven.

The estimation of TE from data is a numerically challenging problem. In general, it depends on accurate representations of the probability distributions of the relevant variables; histogram and kernel estimates are two common ways of obtaining these from data. TE can be expressed in terms of other information-theoretic quantities, such as Shannon entropy and MI, which are themselves functions of the probability distributions of the variables, so TE estimates are prone to errors introduced by approximating those distributions. Moreover, many TE estimation techniques suffer from bias effects arising from algebraic sums of other information-theoretic quantities, and bias correction has therefore been an active research area for improving estimation performance. Methods such as Symbolic TE and the Kraskov-Stögbauer-Grassberger (KSG) algorithm are among the other techniques used to estimate TE from data, and efficient estimation of TE remains an active research area. Most of these techniques were proposed to solve specific problems in diverse applications, so a method suited to one application might not be the best choice for another. This Special Issue collects distinctive approaches in one publication, as a reference tool for the theory and applications of TE. The contributions are organized into two sections: methods and applications.
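For orientation, the standard (Schreiber-style) definitions of the two quantities contrasted above; the notation, including the history lengths k and l, is ours rather than the book's:

I(X;Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}

T_{Y \to X} = \sum p\left(x_{t+1}, x_t^{(k)}, y_t^{(l)}\right) \log \frac{p\left(x_{t+1} \mid x_t^{(k)}, y_t^{(l)}\right)}{p\left(x_{t+1} \mid x_t^{(k)}\right)}

where x_t^{(k)} and y_t^{(l)} are the k- and l-step histories of the two processes. Swapping X and Y leaves I(X;Y) unchanged but generally changes T_{Y \to X}, which is what makes TE a directed measure. Equivalently, T_{Y \to X} = H(X_{t+1} \mid X_t^{(k)}) - H(X_{t+1} \mid X_t^{(k)}, Y_t^{(l)}), an algebraic sum of Shannon entropy terms; this is the decomposition through which the finite-sample bias of each individual entropy estimate accumulates.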
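As a concrete illustration of the histogram route mentioned above, TE with history length one reduces to a sum and difference of four joint-entropy estimates. The following is a minimal sketch under our own assumptions (the bin count, the unit history length, and the toy coupled series are illustrative choices, not anything prescribed in the book):

import numpy as np

def entropy_from_counts(counts):
    """Shannon entropy (in bits) of an empirical histogram."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def transfer_entropy(x, y, n_bins=8):
    """Histogram estimate of T_{y->x} (information flow from y into x),
    using history length 1 for both series."""
    # Discretize each series into n_bins bins.
    xd = np.digitize(x, np.histogram_bin_edges(x, bins=n_bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins=n_bins)[1:-1])
    x_next, x_past, y_past = xd[1:], xd[:-1], yd[:-1]

    def joint_entropy(*series):
        # Count occurrences of each joint symbol, then take Shannon entropy.
        _, counts = np.unique(np.column_stack(series), axis=0,
                              return_counts=True)
        return entropy_from_counts(counts)

    # T_{y->x} = H(X_t, X_{t-1}) - H(X_{t-1})
    #            + H(X_{t-1}, Y_{t-1}) - H(X_t, X_{t-1}, Y_{t-1}).
    # This algebraic sum of entropies is exactly where the bias
    # discussed above creeps in.
    return (joint_entropy(x_next, x_past) - joint_entropy(x_past)
            + joint_entropy(x_past, y_past)
            - joint_entropy(x_next, x_past, y_past))

# Toy example: x is driven by lagged y, so the first value printed
# (flow y->x) should clearly exceed the second (flow x->y).
rng = np.random.default_rng(0)
y = rng.normal(size=5000)
x = np.roll(y, 1) + 0.5 * rng.normal(size=5000)
print(transfer_entropy(x, y), transfer_entropy(y, x))

Because each of the four entropy terms carries its own finite-sample bias, their sum inherits it; this is precisely the effect that the bias-correction methods and KSG-style estimators mentioned above aim to mitigate.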
doi:10.3390/books978-3-03842-920-3
fatcat:i5amuwcd7zco5caibgb4lgpcyy