The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances
TLDR
This work implemented 18 recently proposed algorithms in a common Java framework and compared them against two standard benchmark classifiers (and each other) by performing 100 resampling experiments on each of the 85 datasets, indicating that only nine of these algorithms are significantly more accurate than both benchmarks.
Abstract:
In the last 5 years there have been a large number of new time series classification algorithms proposed in the literature. These algorithms have been evaluated on subsets of the 47 data sets in the University of California, Riverside time series classification archive. The archive has recently been expanded to 85 data sets, over half of which have been donated by researchers at the University of East Anglia. Aspects of previous evaluations have made comparisons between algorithms difficult. For example, several different programming languages have been used, experiments involved a single train/test split and some used normalised data whilst others did not. The relaunch of the archive provides a timely opportunity to thoroughly evaluate algorithms on a larger number of datasets. We have implemented 18 recently proposed algorithms in a common Java framework and compared them against two standard benchmark classifiers (and each other) by performing 100 resampling experiments on each of the 85 datasets. We use these results to test several hypotheses relating to whether the algorithms are significantly more accurate than the benchmarks and each other. Our results indicate that only nine of these algorithms are significantly more accurate than both benchmarks and that one classifier, the collective of transformation ensembles, is significantly more accurate than all of the others. All of our experiments and results are reproducible: we release all of our code, results and experimental details and we hope these experiments form the basis for more robust testing of new algorithms in the future.
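The evaluation protocol in the abstract (repeated random resampling, with accuracy averaged over runs) can be sketched in miniature. This is a hypothetical pure-Python illustration, not the study's actual code, which was a common Java framework; the 1-NN Euclidean classifier stands in for one of the standard benchmarks, and the toy dataset is invented:

```python
import random

def euclidean(a, b):
    # squared Euclidean distance between two equal-length series
    return sum((x - y) ** 2 for x, y in zip(a, b))

def one_nn(train, test_series):
    # predict the label of the nearest training series (1-NN benchmark)
    return min(train, key=lambda item: euclidean(item[0], test_series))[1]

def resample_accuracy(data, n_resamples=100, train_frac=0.5, seed=0):
    # average 1-NN accuracy over repeated random train/test resamples
    rng = random.Random(seed)
    accs = []
    for _ in range(n_resamples):
        shuffled = data[:]
        rng.shuffle(shuffled)
        split = int(len(shuffled) * train_frac)
        train, test = shuffled[:split], shuffled[split:]
        correct = sum(one_nn(train, s) == y for s, y in test)
        accs.append(correct / len(test))
    return sum(accs) / len(accs)

# tiny synthetic two-class problem: rising vs falling series
data = [([i, i + 1, i + 2], "up") for i in range(10)] + \
       [([i + 2, i + 1, i], "down") for i in range(10)]
print(resample_accuracy(data))
```

Fixing the random seed makes each run of 100 resamples reproducible, which is the property the authors rely on when comparing classifiers on identical splits.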
Citations
Proceedings ArticleDOI
TSEvo: Evolutionary Counterfactual Explanations for Time Series Classification
TL;DR: TSEvo as discussed by the authors is a model-agnostic, multi-objective evolutionary approach to time series counterfactuals that incorporates a variety of time series transformation mechanisms to cope with different time series types and structures.
Proceedings ArticleDOI
A Feature-based Approach for Identifying Soccer Moves using an Accelerometer Sensor
Omar Alobaid, Lakshmish Ramaswamy, et al.
TL;DR: This paper explores three different feature-based algorithms: Time Series Forest, Fast Shapelets, and Bag-of-SFA-Symbols, and introduces a novel collaborative model that combines these algorithms in a majority voting mechanism to further enhance the performance of the system.
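The majority-voting combination mentioned in the TL;DR above can be sketched in a few lines (a hypothetical illustration, not the paper's implementation):

```python
from collections import Counter

def majority_vote(predictions):
    # Combine the labels produced by several component classifiers;
    # Counter.most_common breaks ties by first-encountered order.
    return Counter(predictions).most_common(1)[0][0]

print(majority_vote(["kick", "pass", "kick"]))  # -> kick
```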
Journal ArticleDOI
TSLOD: a coupled generalized subsequence local outlier detection model for multivariate time series
TL;DR: Wang et al. as discussed by the authors proposed a coupled generalized local outlier detection model for multivariate time series (MTS), which extends the traditional generalized local outliers detection model to cope with subsequence outliers by incorporating a novel non-IID similarity metric.
Posted Content
Early and Revocable Time Series Classification
TL;DR: In this paper, a cost-based framework was proposed for early classification of time series in which the decision maker can revoke its earlier decisions based on newly available measurements, and two new approaches were derived from it.
Journal ArticleDOI
Ultra-fast meta-parameter optimization for time series similarity measures with application to nearest neighbour classification
TL;DR: UltraFastMPSearch as discussed by the authors is a family of algorithms to learn the meta-parameters for different types of time series distance measures, which are significantly faster than the prior state of the art.
References
Journal ArticleDOI
The WEKA data mining software: an update
TL;DR: This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.
Journal Article
Statistical Comparisons of Classifiers over Multiple Data Sets
TL;DR: A set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers is recommended: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparisons of more classifiers over multiple data sets.
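The recommended Wilcoxon signed-ranks test can be sketched for the two-classifier case. This minimal version computes only the test statistic min(W+, W-) over paired per-dataset scores (zero differences dropped, tied absolute differences given average ranks); comparing the statistic against a critical value, or computing a p-value, is left out:

```python
def wilcoxon_w(scores_a, scores_b):
    # Paired per-dataset scores; drop zero differences (standard practice).
    diffs = [a - b for a, b in zip(scores_a, scores_b) if a != b]
    # Rank absolute differences, averaging ranks across ties.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j):  # positions i..j-1 share ranks i+1..j
            ranks[order[k]] = (i + 1 + j) / 2.0
        i = j
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    # small values of min(W+, W-) suggest a significant difference
    return min(w_plus, w_minus)

# error counts of two classifiers on three datasets (invented numbers)
print(wilcoxon_w([10, 8, 6], [12, 8, 5]))  # -> 1.0
```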
Book ChapterDOI
Domain-adversarial training of neural networks
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, Victor Lempitsky, et al.
TL;DR: In this article, a new representation learning approach for domain adaptation is proposed for settings in which data at training and test time come from similar but different distributions; the approach promotes the emergence of features that are discriminative for the main learning task on the source domain yet cannot discriminate between the training (source) and test (target) domains.
Journal ArticleDOI
Experiencing SAX: a novel symbolic representation of time series
TL;DR: The utility of the new symbolic representation of time series is demonstrated: it allows dimensionality/numerosity reduction, and it allows distance measures to be defined on the symbolic representation that lower bound the corresponding distance measures defined on the original series.
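The SAX pipeline summarised above (z-normalise, piecewise aggregate approximation, discretise against Gaussian breakpoints) can be sketched as follows. This assumes a 3-symbol alphabet; the breakpoints are the standard values that split a standard normal distribution into three equiprobable regions:

```python
def sax(series, segments, breakpoints=(-0.43, 0.43)):
    # z-normalise the series (guard against zero variance)
    n = len(series)
    mean = sum(series) / n
    std = (sum((x - mean) ** 2 for x in series) / n) ** 0.5 or 1.0
    z = [(x - mean) / std for x in series]
    # piecewise aggregate approximation, then map each segment mean
    # to a symbol via the Gaussian breakpoints
    step = n / segments
    word = ""
    for s in range(segments):
        seg = z[int(s * step):int((s + 1) * step)]
        avg = sum(seg) / len(seg)
        word += "abc"[sum(avg > b for b in breakpoints)]
    return word

print(sax([1, 1, 2, 8, 9, 9, 2, 1], 4))  # -> abca
```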
Journal ArticleDOI
Querying and mining of time series data: experimental comparison of representations and distance measures
TL;DR: An extensive set of time series experiments is conducted, re-implementing 8 different representation methods and 9 similarity measures (and their variants) and testing their effectiveness on 38 time series data sets from a wide variety of application domains, providing a unified validation of some of the existing achievements.
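Comparisons of the kind surveyed above typically pit Euclidean distance against elastic measures such as dynamic time warping (DTW). A minimal, unoptimised sketch of the contrast, using full-window DTW over squared point costs:

```python
def dtw(a, b):
    # classic dynamic time warping distance (full window, squared costs)
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def euclidean(a, b):
    # squared Euclidean distance for equal-length series
    return sum((x - y) ** 2 for x, y in zip(a, b))

# DTW aligns a peak shifted by one step that Euclidean penalises heavily
a, b = [0, 0, 1, 0, 0], [0, 1, 0, 0, 0]
print(dtw(a, b), euclidean(a, b))  # -> 0.0 2
```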