14 On Divergence and Dissimilarity Measures for Multiple Time Series
Divergence and dissimilarity measures play an important role in mathematical statistics and statistical data analysis. This work is devoted to a review of the most popular divergence and dissimilarity measures, as well as of some new advanced measures associated with multiple time series data.
14.1. Introduction
An issue of fundamental importance in Statistics is the investigation of Information Measures, which constitute a broad class of measures that includes, among others, the divergence and dissimilarity measures. These measures quantify the amount of information contained in the data and/or the divergence or dissimilarity between two populations, functions or data sets. Traditionally, measures of information are classified into four main categories, namely divergence-type, entropy-type, Fisher-type and Bayesian-type (see Vonta and Karagrigoriou 2011).
Measures of divergence between two probability distributions have a history of more than 100 years, initiated by Pearson, Mahalanobis, Lévy and Kolmogorov. Among the most popular measures of divergence are the Kullback–Leibler measure of divergence (Kullback and Leibler 1951) and Csiszár's φ-divergence family of measures (Csiszár 1963; Ali and Silvey 1966). Cressie and Read (1984) attempted to provide a unified analysis by introducing the so-called power divergence family of statistics, which involves an index and is used in goodness-of-fit ...
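For orientation, the measures named above admit the following standard forms; the notation used here (densities f and g, convex generator φ, power index λ) follows the cited literature rather than this excerpt. The Kullback–Leibler divergence between densities f and g is

D_{KL}(f \,\|\, g) = \int f(x) \log \frac{f(x)}{g(x)} \, dx ,

and Csiszár's φ-divergence, defined for a convex function φ with φ(1) = 0, is

D_{\varphi}(f, g) = \int g(x) \, \varphi\!\left( \frac{f(x)}{g(x)} \right) dx ,

which recovers D_{KL} for φ(u) = u \log u. For multinomial goodness-of-fit with observed counts O_i and expected counts E_i, the Cressie–Read power divergence statistic with index λ (λ ≠ 0, −1) reads

2nI^{\lambda} = \frac{2}{\lambda(\lambda + 1)} \sum_i O_i \left[ \left( \frac{O_i}{E_i} \right)^{\lambda} - 1 \right] ,

with Pearson's X² obtained at λ = 1 and the likelihood-ratio statistic G² in the limit λ → 0.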