scispace - formally typeset
Topic

Dynamic time warping

About: Dynamic time warping (DTW) is an algorithm for measuring similarity between two temporal sequences that may vary in speed or timing; it is an active research topic. Over its lifetime, 6,013 publications have been published within this topic, receiving 133,130 citations.


Papers
Journal ArticleDOI
01 Jun 2015-Optik
TL;DR: A hand gesture recognition method using the Microsoft Kinect is proposed that operates robustly in uncontrolled environments and is insensitive to hand variations and distortions; the use of two different learning techniques, dynamic time warping and hidden Markov models, is demonstrated.

61 citations

Proceedings ArticleDOI
30 Jul 2006
TL;DR: In this paper, a two-pass dynamic time warping algorithm is used to find correspondence between the hand and full-body motions, which can be captured separately and spliced together seamlessly with little or no user input required.
Abstract: We propose a solution to a new problem in animation research: how to use human motion capture data to create character motion with detailed hand gesticulation without the need for the simultaneous capture of hands and the full-body. Occlusion and a difference in scale make it difficult to capture both the detail of the hand movement and unrestricted full-body motion at the same time. With our method, the two can be captured separately and spliced together seamlessly with little or no user input required. The algorithm relies on a novel distance metric derived from research on gestures and uses a two-pass dynamic time warping algorithm to find correspondence between the hand and full-body motions. In addition, we provide a method for supplying user input, useful to animators who want more control over the integrated animation. We show the power of our technique with a variety of common and highly specialized gesticulation examples.
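The two-pass variant and the gesture-derived distance metric are specific to this paper, but the single-pass dynamic-programming DTW step it builds on is standard and can be sketched in a few lines (a minimal pure-Python sketch, not the authors' implementation):

```python
def dtw_distance(x, y, dist=lambda a, b: abs(a - b)):
    """Classic dynamic-programming DTW between two 1-D sequences.

    Returns the minimal cumulative alignment cost; dist is the local
    cost between a pair of samples.
    """
    n, m = len(x), len(y)
    INF = float("inf")
    # D[i][j] = cost of the best alignment of x[:i] and y[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(x[i - 1], y[j - 1])
            # extend the cheapest of: diagonal match, repeat x, repeat y
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]
```

Because the recurrence may repeat a sample of either sequence, a locally stretched copy of a sequence (e.g. `[1, 2, 2, 3, 4]` against `[1, 2, 3, 4]`) aligns at zero cost, which is exactly the property that lets DTW match hand and full-body motions captured at different timings.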

61 citations

Proceedings ArticleDOI
01 May 2001
TL;DR: A survey of time series similarity and indexing techniques can be found in this article, where the authors examine a variety of similarity measures, including Lp norms, time warping, longest common subsequence measures, baselines, moving averages, and deformable Markov model templates.
Abstract: Time series are the simplest form of temporal data. A time series is a sequence of real numbers collected regularly in time, where each number represents a value. Time series data come up in a variety of domains, including stock market analysis, environmental data, telecommunications data, medical data and financial data. Web data that count the number of clicks on given sites, or that model the usage of different pages, are also modeled as time series. Time series therefore account for a large fraction of the data stored in commercial databases. There is growing recognition of this fact, and support for time series as a distinct data type in commercial database management systems is increasing; IBM DB2, for example, implements support for time series using data blades.

The pervasiveness and importance of time series data has sparked a lot of research work on the topic. While the statistics literature on time series is vast, it has not studied methods that would be appropriate for the time series similarity and indexing problems we discuss here; much of the relevant work on these problems has been done by the computer science community.

One interesting problem with time series data is finding whether different time series display similar behavior. More formally, the problem can be stated as: given two time series X and Y, determine whether they are similar or not (in other words, define and compute a distance function dist(X, Y)). Typically each time series describes the evolution of an object, for example the price of a stock, or the levels of pollution as a function of time at a given data collection station. The objective can be to cluster the different objects into similar groups, or to classify an object based on a set of known object examples. The problem is hard because the similarity model should allow for imprecise matches.
One interesting variation is the subsequence similarity problem: given two time series X and Y, determine those subsequences of X that are similar to pattern Y. To answer these questions, different notions of similarity between time series have been proposed in data mining research.

In the tutorial we examine the different time series similarity models that have been proposed, in terms of efficiency and accuracy. The solutions encompass techniques from a wide variety of disciplines, such as databases, signal processing, speech recognition, pattern matching, combinatorics and statistics. We survey proposed similarity techniques, including the Lp norms, time warping, longest common subsequence measures, baselines, moving averages, and deformable Markov model templates.

Another problem that comes up in applications is the indexing problem: given a time series X and a set of time series S = {Y1,…,YN}, find the time series in S that are most similar to the query X. A variation is the subsequence indexing problem: given a set of sequences S and a query sequence (pattern) X, find the sequences in S that contain subsequences similar to X. To solve these problems efficiently, appropriate indexing techniques have to be used. Typically, the similarity problem is related to the indexing problem: simple (and possibly inaccurate) similarity measures are usually easy to build indexes for, while more sophisticated similarity measures make the indexing problem hard and interesting.

We examine the indexing techniques that can be used for the different models, and the dimensionality reduction techniques that have been proposed to improve indexing performance. A time series of length n can be considered as a tuple in an n-dimensional space. Indexing this space directly is inefficient because of its very high dimensionality.
The main idea for improving on this is to use a dimensionality reduction technique that takes the n-item-long time series and maps it to a lower-dimensional space with k dimensions (hopefully, with k much smaller than n).
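One widely used reduction of this kind (named here for illustration, not singled out by the tutorial text above) is Piecewise Aggregate Approximation (PAA), which replaces each of k roughly equal-width frames of the series by its mean, so the series can be indexed in k dimensions instead of n. A minimal sketch:

```python
def paa(series, k):
    """Piecewise Aggregate Approximation: map an n-point series to k means.

    Each output value is the mean of one of k (nearly) equal-width
    frames, reducing an n-dimensional point to a k-dimensional one
    that a conventional spatial index can handle.
    """
    n = len(series)
    out = []
    for frame in range(k):
        # frame covers indices [frame*n//k, (frame+1)*n//k)
        lo, hi = frame * n // k, (frame + 1) * n // k
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out
```

For example, `paa([1, 1, 2, 2, 3, 3, 4, 4], 4)` averages each adjacent pair, yielding `[1.0, 2.0, 3.0, 4.0]`. Because frame means underestimate distances, an index built on the reduced points never misses a true match and only admits false positives that a final scan removes.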

61 citations

01 Jan 2007
TL;DR: A speaker segmentation and clustering system aiming at improving the robustness of speaker recognition as well as automatic speech recognition performance in the multiple-speaker scenarios such as telephony conversations and meetings is implemented.
Abstract: Automatic speaker recognition has developed into an increasingly important technology required by many speech-aided applications. The main challenge for automatic speaker recognition is to deal with the variability of the environments and channels from which the speech is obtained. In previous work, good results have been achieved for clean, high-quality speech with matched training and test acoustic conditions, such as high accuracy of speaker identification and verification using clean wideband speech and Gaussian Mixture Models (GMM). However, under mismatched conditions and in noisy environments, as often expected in real-world conditions, the performance of GMM-based systems degrades significantly, far from a satisfactory level. Robustness has therefore become a crucial research issue in the speaker recognition field. In this thesis, our main focus is to improve the robustness of speaker recognition systems on far-field distant microphones. We investigate approaches to improve robustness from two directions. First, we investigate approaches that improve the robustness of the traditional speaker recognition system, which is based on low-level spectral information. We introduce a new reverberation compensation approach which, along with feature warping in the feature processing procedure, improves system performance significantly. We propose four multiple-channel combination approaches, which utilize information from multiple far-field microphones, to improve robustness under mismatched training-testing conditions. Secondly, we investigate approaches that use high-level speaker information to improve robustness. We propose new techniques to model speaker pronunciation idiosyncrasy along two dimensions: the cross-stream dimension and the time dimension. Such high-level information is expected to be robust under different mismatched conditions. We also built systems that support robust speaker recognition.
We implemented a speaker segmentation and clustering system aimed at improving the robustness of speaker recognition as well as automatic speech recognition performance in multiple-speaker scenarios such as telephone conversations and meetings. We also integrate a speaker identification modality with a face recognition modality to build a robust person identification system.
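The feature warping mentioned above is commonly implemented in the wider literature (not necessarily exactly as in this thesis) by mapping each feature's empirical ranks within a sliding window onto a standard normal distribution, which suppresses channel and environment shifts in the raw values. A stdlib-only sketch for one window:

```python
from statistics import NormalDist

def feature_warp(values):
    """Rank-based feature warping: map samples onto a standard normal.

    Each value is replaced by the Gaussian quantile of its empirical
    CDF position, so the warped feature is distributed N(0, 1)
    regardless of linear or monotone distortions of the raw values.
    """
    nd = NormalDist()  # standard normal, mean 0, sigma 1
    n = len(values)
    # rank each value within the window (1-based), then take the
    # mid-rank CDF position (rank - 0.5) / n to avoid hitting 0 or 1
    order = sorted(range(n), key=lambda i: values[i])
    warped = [0.0] * n
    for rank, i in enumerate(order, start=1):
        warped[i] = nd.inv_cdf((rank - 0.5) / n)
    return warped
```

The transform preserves the ordering of the samples while forcing the output distribution to a fixed Gaussian, which is why it helps when training and test channels differ.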

61 citations

Journal ArticleDOI
TL;DR: A Bayesian approach based on the dynamic time warping (DTW) method for sleep and wake classification is proposed, which uses heart rate (HR) and pulse oxygen saturation (SPO2) signals to analyze sleep states and the occurrence of some sleep-related problems.
Abstract: Sleep plays a significant role in humans' mental and physical health. Recently, associations between lack of sleep and weight gain, the development of cancer and many other health problems have been recognized, so monitoring sleep and wake states throughout the night has become a topic of growing interest. Traditionally, sleep state is classified from a PSG recording, which is very costly and uncomfortable. Nowadays, with the advance of the Internet of Things, many convenient wearable devices are being used for medical purposes, such as measuring heart rate (HR), blood pressure and other signals. For the sleep quality monitoring problem, the key question is how to discriminate the sleep and wake stages from these signals. This paper proposes a Bayesian approach based on the dynamic time warping (DTW) method for sleep and wake classification. It uses HR and pulse oxygen saturation (SPO2) signals to analyze sleep states and the occurrence of some sleep-related problems. DTW is an algorithm that searches for an optimal alignment between time series under scaling and shifting, and Bayesian methods have been used successfully for object classification in many studies. In this paper, a three-step process is used for sleep and wake classification. In the first step, DTW is used to extract features from the original HR and SPO2 signals. Then a probabilistic model is introduced for applying Bayesian classification to uncertain data. In the classification step, the DTW features are used as the training dataset in the Bayesian approach for sleep and wake classification. Finally, a case study from a real-world application, with data collected from the website of the Sleep Heart Health Study, is presented to show the feasibility and advantages of the DTW-based Bayesian approach.
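The three-step pipeline (DTW features, then a probabilistic model, then a Bayesian decision) can be caricatured with a single scalar feature: the DTW distance of a segment to a template. Everything below (the template, the class statistics, the priors) is illustrative, not taken from the paper:

```python
from statistics import NormalDist

def dtw_distance(x, y):
    """Minimal DTW cost between two 1-D sequences (local cost |a - b|)."""
    INF = float("inf")
    D = [[INF] * (len(y) + 1) for _ in range(len(x) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            D[i][j] = abs(x[i - 1] - y[j - 1]) + min(
                D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[-1][-1]

def classify(segment, template, class_models, priors):
    """Bayes rule on one DTW feature: argmax over classes of P(c) * p(d | c).

    class_models maps a class name to a NormalDist fitted to the DTW
    distances of training segments from that class.
    """
    d = dtw_distance(segment, template)
    scores = {c: priors[c] * class_models[c].pdf(d) for c in class_models}
    return max(scores, key=scores.get)

# Hypothetical numbers: wake segments lie close to the wake HR template.
models = {"wake": NormalDist(2.0, 1.0), "sleep": NormalDist(10.0, 2.0)}
priors = {"wake": 0.3, "sleep": 0.7}
template = [70, 72, 75, 74]  # illustrative wake HR template (beats/min)
```

With these made-up statistics, `classify([71, 72, 74, 75], template, models, priors)` returns `"wake"`: the segment's DTW distance to the template is small, so the wake likelihood dominates despite the lower prior.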

61 citations


Network Information
Related Topics (5)
- Feature extraction: 111.8K papers, 2.1M citations (91% related)
- Convolutional neural network: 74.7K papers, 2M citations (87% related)
- Deep learning: 79.8K papers, 2.1M citations (87% related)
- Image segmentation: 79.6K papers, 1.8M citations (86% related)
- Artificial neural network: 207K papers, 4.5M citations (84% related)
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  236
2022  471
2021  341
2020  416
2019  420
2018  377