
Showing papers on "Dynamic time warping published in 2003"


Proceedings ArticleDOI
18 Jun 2003
TL;DR: This work presents an algorithm for matching handwritten words in noisy historical documents and reports experimental results on two data sets from the George Washington collection, showing that it performs better and is faster than competing matching techniques.
Abstract: Libraries and other institutions are interested in providing access to scanned versions of their large collections of handwritten historical manuscripts on electronic media. Convenient access to a collection requires an index, which is manually created at great labor and expense. Since current handwriting recognizers do not perform well on historical documents, a technique called word spotting has been developed: clusters with occurrences of the same word in a collection are established using image matching. By annotating "interesting" clusters, an index can be built automatically. We present an algorithm for matching handwritten words in noisy historical documents. The segmented word images are preprocessed to create sets of 1-dimensional features, which are then compared using dynamic time warping. We present experimental results on two different data sets from the George Washington collection. Our experiments show that this algorithm performs better and is faster than competing matching techniques.
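
For readers unfamiliar with the matching step, here is a minimal sketch of the classic dynamic time warping recurrence between two 1-dimensional feature sequences. It is illustrative only: the paper's feature extraction, normalization and path constraints are not reproduced, and the function name and local cost are my own choices.

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping between two
    1-D sequences, using absolute difference as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # step both
    return D[n][m]

# Example: two profile-like sequences that differ only by warping
print(dtw_distance([1, 2, 3, 2, 1], [1, 1, 2, 3, 3, 2, 1]))  # 0.0
```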

626 citations


Proceedings ArticleDOI
24 Aug 2003
TL;DR: Motivated by the need for a single index structure that can support multiple distance measures, this work presents an index whose experimental results demonstrate that it can help speed up the computation of expensive similarity measures such as LCSS and DTW.
Abstract: Although most time-series data mining research has concentrated on providing solutions for a single distance function, in this work we motivate the need for a single index structure that can support multiple distance measures. Our specific area of interest is the efficient retrieval and analysis of trajectory similarities. Trajectory datasets are very common in environmental applications, mobility experiments, video surveillance and are especially important for the discovery of certain biological patterns. Our primary similarity measure is based on the Longest Common Subsequence (LCSS) model, that offers enhanced robustness, particularly for noisy data, which are encountered very often in real world applications. However, our index is able to accommodate other distance measures as well, including the ubiquitous Euclidean distance, and the increasingly popular Dynamic Time Warping (DTW). While other researchers have advocated one or other of these similarity measures, a major contribution of our work is the ability to support all these measures without the need to restructure the index. Our framework guarantees no false dismissals and can also be tailored to provide much faster response time at the expense of slightly reduced precision/recall. The experimental results demonstrate that our index can help speed-up the computation of expensive similarity measures such as the LCSS and the DTW.

419 citations


Proceedings ArticleDOI
09 Jun 2003
TL;DR: This work treats music as a time series and exploits and improves well-developed techniques from time series databases to index the music for fast similarity queries; it improves on existing DTW indexing techniques by introducing the concept of envelope transforms, which gives a general guideline for extending existing dimensionality reduction methods to DTW indexes.
Abstract: A Query by Humming system allows the user to find a song by humming part of the tune. No musical training is needed. Previous query by humming systems have not provided satisfactory results for various reasons. Some systems have low retrieval precision because they rely on melodic contour information from the hum tune, which in turn relies on the error-prone note segmentation process. Some systems yield better precision when matching the melody directly from audio, but they are slow because of their extensive use of Dynamic Time Warping (DTW). Our approach improves both the retrieval precision and speed compared to previous approaches. We treat music as a time series and exploit and improve well-developed techniques from time series databases to index the music for fast similarity queries. We improve on existing DTW indexing techniques by introducing the concept of envelope transforms, which gives a general guideline for extending existing dimensionality reduction methods to DTW indexes. The net result is high scalability. We confirm our claims through extensive experiments.
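
As a rough illustration of the flavor of envelope-based DTW indexing (not the paper's exact envelope transforms), the sketch below computes a query's warping envelope for a given window radius and reduces it piecewise while preserving the bounding property; the function names and the segmentation scheme are assumptions of mine.

```python
def envelope(q, r):
    """Upper/lower DTW envelope of q for a warping window of radius r:
    U[i] = max(q[i-r .. i+r]), L[i] = min(q[i-r .. i+r])."""
    n = len(q)
    U = [max(q[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]
    L = [min(q[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]
    return U, L

def paa_envelope(U, L, n_segments):
    """Reduce the envelope to n_segments pieces in a way that keeps it an
    envelope: take the max of U and the min of L over each segment."""
    n = len(U)
    bounds = [round(k * n / n_segments) for k in range(n_segments + 1)]
    U_red = [max(U[bounds[k]:bounds[k + 1]]) for k in range(n_segments)]
    L_red = [min(L[bounds[k]:bounds[k + 1]]) for k in range(n_segments)]
    return U_red, L_red

U, L = envelope([60, 62, 65, 63, 60, 58, 57, 59], r=2)
print(paa_envelope(U, L, n_segments=4))
```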

302 citations


Journal ArticleDOI
TL;DR: A Haar wavelet-based approximation function for time warping distance is suggested, called Low Resolution Time Warping, which results in less computation by trading off a small amount of accuracy, and is highly effective in suppressing the number of false alarms in similarity search.
Abstract: We address the handling of time series search based on two important distance definitions: Euclidean distance and time warping distance. The conventional method reduces the dimensionality by means of a discrete Fourier transform. We apply the Haar wavelet transform technique and propose the use of a proper normalization so that the method can guarantee no false dismissal for Euclidean distance. We found that this method has competitive performance from our experiments. Euclidean distance measurement cannot handle the time shifts of patterns. It fails to match the same rise and fall patterns of sequences with different scales. A distance measure that handles this problem is the time warping distance. However, the complexity of computing the time warping distance function is high. Also, as time warping distance is not a metric, most indexing techniques would not guarantee any false dismissal. We propose efficient strategies to mitigate the problems of time warping. We suggest a Haar wavelet-based approximation function for time warping distance, called Low Resolution Time Warping, which results in less computation by trading off a small amount of accuracy. We apply our approximation function to similarity search in time series databases, and show by experiment that it is highly effective in suppressing the number of false alarms in similarity search.
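
The paper defines Low Resolution Time Warping precisely; the sketch below only illustrates the ingredient it builds on, namely a Haar-style reduction of a sequence to a coarser resolution, on which a cheaper warping distance can then be computed. The scaling factors and detail coefficients of the full Haar transform are omitted here.

```python
def haar_approx(x, levels=1):
    """Coarse Haar-style approximation: replace adjacent pairs by their
    average. (Detail coefficients are discarded; an odd-length tail is
    kept as-is.)"""
    for _ in range(levels):
        out = [(x[i] + x[i + 1]) / 2.0 for i in range(0, len(x) - 1, 2)]
        if len(x) % 2:          # keep a trailing unpaired sample
            out.append(x[-1])
        x = out
    return x

series = [2, 4, 4, 6, 8, 8, 6, 4]
print(haar_approx(series, levels=1))   # [3.0, 5.0, 8.0, 5.0]
print(haar_approx(series, levels=2))   # [4.0, 6.5]
```

A warping distance computed on such reduced sequences is cheaper than on the originals at the cost of some accuracy, which is the trade-off the paper quantifies.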

259 citations


Journal ArticleDOI
TL;DR: A comprehensive comparative study of artificial neural networks, learning vector quantization and dynamic time warping classification techniques combined with stationary/non-stationary feature extraction for environmental sound recognition shows 70% recognition using mel frequency cepstral coefficients or continuous wavelet transform with dynamic time warping.

246 citations


Book ChapterDOI
TL;DR: The dynamic time-warping (DTW) approach is used for matching so that non-linear time normalization may be used to deal with the naturally-occurring changes in walking speed.
Abstract: Human gait is an attractive modality for recognizing people at a distance. In this paper we adopt an appearance-based approach to the problem of gait recognition. The width of the outer contour of the binarized silhouette of a walking person is chosen as the basic image feature. Different gait features are extracted from the width vector, such as the downsampled, smoothed width vectors, the velocity profile, etc., and sequences of such temporally ordered feature vectors are used for representing a person's gait. We use the dynamic time-warping (DTW) approach for matching so that non-linear time normalization may be used to deal with the naturally-occurring changes in walking speed. The performance of the proposed method is tested using different gait databases.
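
To make the width-of-contour feature concrete, here is a small numpy sketch; it is illustrative only, and the paper's smoothing, downsampling and velocity-profile features are not reproduced.

```python
import numpy as np

def width_vector(silhouette):
    """Per-row width of a binarized silhouette: for each image row,
    the span between the leftmost and rightmost foreground pixels
    (0 for rows with no foreground)."""
    widths = np.zeros(silhouette.shape[0], dtype=int)
    for r, row in enumerate(silhouette):
        cols = np.flatnonzero(row)
        if cols.size:
            widths[r] = cols[-1] - cols[0] + 1
    return widths

# Toy 5x7 silhouette (1 = person, 0 = background)
sil = np.array([[0, 0, 1, 1, 1, 0, 0],
                [0, 1, 1, 1, 1, 1, 0],
                [0, 1, 1, 1, 1, 1, 0],
                [0, 0, 1, 1, 1, 0, 0],
                [0, 0, 1, 0, 1, 0, 0]])
print(width_vector(sil))   # [3 5 5 3 3]
```

Sequences of such per-frame width vectors are then what DTW compares across walking sequences.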

242 citations


Proceedings ArticleDOI
03 Aug 2003
TL;DR: A range of features suitable for matching words using dynamic time warping (DTW), which aligns and compares sets of features extracted from two images, is analyzed; a combination of these features outperforms competing techniques in speed and precision.
Abstract: For the transition from traditional to digital libraries, the large number of handwritten manuscripts that exist pose a great challenge. Easy access to such collections requires an index, which is currently created manually at great cost. Because automatic handwriting recognizers fail on historical manuscripts, the word spotting technique has been developed: the words in a collection are matched as images and grouped into clusters which contain all instances of the same word. By annotating "interesting" clusters, an index that links words to the locations where they occur can be built automatically. Due to the noise in historical documents, selecting the right features for matching words is crucial. We analyzed a range of features suitable for matching words using dynamic time warping (DTW), which aligns and compares sets of features extracted from two images. Each feature's individual performance was measured on a test set. With an average precision of 72%, a combination of features outperforms competing techniques in speed and precision.
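
For orientation, the sketch below computes three column-wise profile features that are commonly used in this line of word-spotting work (projection profile, upper and lower word profiles); the exact feature set, normalization and combination evaluated in the paper may differ. Each image column yields one feature vector, and DTW then aligns the resulting sequences.

```python
import numpy as np

def profile_features(word_img):
    """word_img: 2-D array, nonzero = ink. Returns one feature vector per
    image column: [projection profile, upper profile, lower profile]."""
    ink = word_img > 0
    h, w = ink.shape
    feats = np.zeros((w, 3), dtype=float)
    for c in range(w):
        rows = np.flatnonzero(ink[:, c])
        feats[c, 0] = rows.size / h                        # fraction of ink
        feats[c, 1] = rows[0] / h if rows.size else 1.0    # topmost ink
        feats[c, 2] = rows[-1] / h if rows.size else 0.0   # bottommost ink
    return feats   # shape (w, 3); compare two words with multivariate DTW

word = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [1, 1, 0, 1]])
print(profile_features(word))
```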

234 citations


Proceedings ArticleDOI
06 Apr 2003
TL;DR: Two approaches that use the fundamental frequency and energy trajectories to capture long-term information are proposed; they achieve a 77% relative improvement over a system based on short-term pitch and energy features alone.
Abstract: Most current state-of-the-art automatic speaker recognition systems extract speaker-dependent features by looking at short-term spectral information. This approach ignores long-term information that can convey supra-segmental information, such as prosodics and speaking style. We propose two approaches that use the fundamental frequency and energy trajectories to capture long-term information. The first approach uses bigram models to model the dynamics of the fundamental frequency and energy trajectories for each speaker. The second approach uses the fundamental frequency trajectories of a predefined set of words as the speaker templates and then, using dynamic time warping, computes the distance between the templates and the words from the test message. The results presented in this work are on Switchboard I using the NIST Extended Data evaluation design. We show that these approaches can achieve an equal error rate of 3.7%, which is a 77% relative improvement over a system based on short-term pitch and energy features alone.

212 citations


Proceedings ArticleDOI
15 Oct 2003
TL;DR: A simple novel technique for preparing reliable reference templates, called crossword reference templates (CWRTs), is presented to improve the recognition rate; it extracts the template from a set of examples and can be adapted to any DTW-based speech recognition system to improve its performance.
Abstract: One of the main problems in dynamic time-warping (DTW) based speech recognition systems is the preparation of reliable reference templates for the set of words to be recognised. This paper presents a simple novel technique for preparing reliable reference templates to improve the recognition rate. The developed technique produces templates called crossword reference templates (CWRTs). It extracts the reference template from a set of examples rather than a single example. This technique can be adapted to any DTW-based speech recognition system to improve its performance. The speaker-dependent recognition rate, as tested on the English digits, is improved from 85.3% using the traditional technique to 99% using the developed technique.
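
The paper defines its crossword reference templates precisely; purely to illustrate the general idea of deriving one template from several examples rather than a single one, here is a toy sketch that DTW-aligns each example to a seed and averages the frames mapped onto each seed position. Real systems operate on vectors of spectral features per frame rather than scalars, and the averaging scheme here is my assumption, not the paper's.

```python
def dtw_path(a, b):
    """DTW between 1-D sequences a and b; returns the warping path as a
    list of (index_in_a, index_in_b) pairs."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    # Backtrack from the end cell to (0, 0) along cheapest predecessors.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)],
                   key=lambda p: D[p[0]][p[1]])
    return list(reversed(path))

def averaged_template(examples):
    """Average several utterances of the same word into one template by
    DTW-aligning each example to a seed (the first example) and averaging,
    per seed frame, all frames mapped onto it."""
    seed = examples[0]
    sums = list(seed)
    counts = [1] * len(seed)
    for ex in examples[1:]:
        for i, j in dtw_path(seed, ex):
            sums[i] += ex[j]
            counts[i] += 1
    return [s / c for s, c in zip(sums, counts)]

print(averaged_template([[1, 2, 3, 2], [1, 2, 2, 3, 2], [0, 2, 3, 3, 2]]))
```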

144 citations


Journal ArticleDOI
TL;DR: The new warping technique proposed, named extreme points warping (EPW), proves to be more adaptive in the field of signature verification than DTW, given the presence of forgeries.

130 citations


Proceedings ArticleDOI
15 Dec 2003
TL;DR: This work presents an effective and efficient approach for word image matching using gradient-based binary features; it has much higher retrieval accuracy and is 893 times faster than dynamic time warping with profile-based shape features.
Abstract: Existing word image retrieval algorithms suffer from either low retrieval precision or high computation complexity. We present an effective and efficient approach for word image matching by using gradient-based binary features. Experiments over a large database of handwritten word images show that the proposed approach consistently outperforms the existing best handwritten word image retrieval algorithm, Dynamic Time Warping (DTW) with profile-based shape features. Not only does the proposed approach have much higher retrieval accuracy, but it is also 893 times faster than DTW.

Proceedings Article
26 Oct 2003
TL;DR: The automatic alignment presented in this paper is based on a dynamic time warping methodology and is robust for difficulties such as trills, vibratos and fast sequences.
Abstract: Music alignment links events in a score and points on the audio performance time axis. All the parts of a recording can thus be indexed according to score information. The automatic alignment presented in this paper is based on a dynamic time warping methodology. Local distances are computed using the signal's spectral features through an attack plus sustain note modeling. Good alignment has been obtained for polyphony of up to five instruments. The method is robust to difficulties such as trills, vibratos and fast sequences. It provides an accurate indicator giving the position of score interpretation errors and extra or forgotten notes. Implementation optimizations allow aligning long sound files in a relatively short time. Evaluation results have been obtained on piano jazz recordings.

Journal ArticleDOI
TL;DR: In this article, a modification of the dynamic time warping (DTW) algorithm is proposed for warping spectral batch data; it takes into account the amount of warping information of every process variable.

01 Jan 2003
TL;DR: A tight lower-bounding measure for dynamic time warping distances for univariate time series is introduced and a proof for its lower-bounding property is presented.
Abstract: A tight lower-bounding measure for dynamic time warping (DTW) distances for univariate time series was introduced in [Keogh 2002] and a proof for its lower-bounding property was presented. Here we extend these findings to allow lower-bounding of DTW distances for multivariate time series.
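
For context, an LB_Keogh-style bound for the univariate case can be sketched as below; the multivariate variant shown is just one simple possibility (summing per-dimension squared bounds, assuming a warping path shared across dimensions and additive squared local costs) and is not necessarily the bound derived and proved in this report.

```python
def lb_keogh_sq(q, c, r):
    """Squared LB_Keogh-style bound: build the warping envelope of q for
    window radius r and accumulate how far each sample of c falls outside it.
    Assumes q and c have equal length."""
    n = len(q)
    total = 0.0
    for i, x in enumerate(c):
        window = q[max(0, i - r):min(n, i + r + 1)]
        u, l = max(window), min(window)
        if x > u:
            total += (x - u) ** 2
        elif x < l:
            total += (l - x) ** 2
    return total

def lb_keogh(q, c, r):
    """Lower bound on DTW(q, c) for equal-length univariate sequences."""
    return lb_keogh_sq(q, c, r) ** 0.5

def lb_keogh_multi(Q, C, r):
    """One simple multivariate variant: sum the per-dimension squared
    bounds, then take the square root (illustrative only)."""
    return sum(lb_keogh_sq(qd, cd, r) for qd, cd in zip(Q, C)) ** 0.5

print(lb_keogh([0, 1, 2, 3, 2, 1], [0, 0, 1, 4, 3, 1], r=1))            # 1.0
print(lb_keogh_multi([[0, 1, 2], [5, 5, 5]], [[0, 3, 2], [5, 9, 5]], r=0))
```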

Proceedings Article
01 Jan 2003
TL;DR: In this article, the authors evaluated various query-by-humming (QBH) search systems and found that natural queries from two sources led to lower performance than that typically reported in the QBH literature.
Abstract: Evaluating music information retrieval systems is acknowledged to be a difficult problem. We have created a database and a software testbed for the systematic evaluation of various query-by-humming (QBH) search systems. As might be expected, different queries and different databases lead to wide variations in observed search precision. “Natural” queries from two sources led to lower performance than that typically reported in the QBH literature. These results point out the importance of careful measurement and objective comparisons to study retrieval algorithms. This study compares search algorithms based on note-interval matching with dynamic programming, fixed-frame melodic contour matching with dynamic time warping, and a hidden Markov model. An examination of scaling trends is encouraging: precision falls off very slowly as the database size increases. This trend is simple to compute and could be useful to predict performance on larger databases.

01 Jan 2003
TL;DR: Investigations into a number of different matching techniques for word images, including shape context matching, SSD correlation, Euclidean Distance Mapping and dynamic time warping, are described.
Abstract: Indexing and searching collections of handwritten archival documents and manuscripts has always been a challenge because handwriting recognizers do not perform well on such noisy documents. Given a collection of documents written by a single author (or a few authors), one can apply a technique called word spotting. The approach is to cluster word images based on their visual appearance, after segmenting them from the documents. Annotation can then be performed for clusters rather than documents. Given segmented pages, matching handwritten word images in historical documents is a great challenge due to the variations in handwriting and the noise in the images. We describe investigations into a number of different matching techniques for word images. These include shape context matching, SSD correlation, Euclidean Distance Mapping and dynamic time warping. Experimental results show that dynamic time warping works best and gives an average precision of around 70% on a test set of 2000 word images (from ten pages) from the George Washington corpus. Dynamic time warping is relatively expensive and we will describe approaches to speeding up the computation so that the approach scales. Our immediate goal is to process a set of 100 page images with a longer term goal of processing all 6000 available pages.

Book ChapterDOI
TL;DR: In this contribution a function-based approach to on-line signature verification is presented, attaining an outstanding best figure of 0.35% EER for skilled forgeries, when signer-dependent thresholds are considered.
Abstract: In this contribution a function-based approach to on-line signature verification is presented. An initial set of 8 time sequences is used; then first and second time derivatives of each function are computed over these, so 24 time sequences are simultaneously considered. A valuable function normalization is applied as a previous stage to a continuous-density HMM-based complete signal modeling scheme of these 24 functions, so no derived statistical features are employed, fully exploiting in this manner the HMM modeling capabilities of the inherent time structure of the dynamic process. In the verification stage, scores are considered not as absolute but rather as relative values with respect to a reference population, permitting the use of a best-reference score-normalization technique. Results using the MCYT_Signature sub-corpus on 50 clients are presented, attaining an outstanding best figure of 0.35% EER for skilled forgeries, when signer-dependent thresholds are considered.

Book ChapterDOI
22 Sep 2003
TL;DR: A novel technique to speed up similarity search under uniform scaling is demonstrated, which can achieve a speedup of 2 to 3 orders of magnitude under realistic settings.
Abstract: The problem of efficiently finding patterns in massive time series databases has attracted great interest, and, at least for the Euclidean distance measure, may now be regarded as a solved problem. However, in recent years there has been an increasing awareness that Euclidean distance is inappropriate for many real world applications. The limitations of Euclidean distance stem from the fact that it is very sensitive to distortions in the time axis. A partial solution to this problem, Dynamic Time Warping (DTW), aligns the time axis before calculating the Euclidean distance. However, DTW can only address the problem of local scaling. As we demonstrate in this work, uniform scaling may be just as important in many domains, including applications as diverse as bioinformatics, space telemetry monitoring and motion editing for computer animation. In this work, we demonstrate a novel technique to speed up similarity search under uniform scaling. As we will demonstrate, our technique is simple and intuitive, and can achieve a speedup of 2 to 3 orders of magnitude under realistic settings.
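
The contribution of the paper is a lower-bounding technique that avoids most of this work; the sketch below only illustrates the brute-force uniform-scaling search that such a technique speeds up (nearest-sample rescaling and the candidate scale set are arbitrary choices of mine).

```python
def uniform_scale(q, m):
    """Stretch/shrink query q to length m by index rescaling (nearest sample)."""
    n = len(q)
    return [q[int(j * n / m)] for j in range(m)]

def best_scaled_match(q, c, scales):
    """Brute-force uniform-scaling search: rescale q to several candidate
    lengths and keep the scaling with the smallest Euclidean distance to
    the prefix of candidate sequence c of the same length."""
    best = (float("inf"), None)
    for s in scales:
        m = int(round(len(q) * s))
        if m == 0 or m > len(c):
            continue
        qs = uniform_scale(q, m)
        d = sum((a - b) ** 2 for a, b in zip(qs, c[:m])) ** 0.5
        best = min(best, (d, s))
    return best   # (distance, scaling factor)

print(best_scaled_match([1, 2, 3, 2, 1],
                        [1, 1, 2, 2, 3, 3, 2, 2, 1, 1],
                        scales=[1.0, 1.5, 2.0]))   # (0.0, 2.0)
```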

Proceedings Article
01 Jan 2003
TL;DR: As part of improved support for building unit selection voices, the Festival speech synthesis system now includes two algorithms for automatic labeling of wavefile data based on dynamic time warping and HMM-based acoustic modeling.
Abstract: As part of improved support for building unit selection voices, the Festival speech synthesis system now includes two algorithms for automatic labeling of wavefile data. The two methods are based on dynamic time warping and HMM-based acoustic modeling. Our experiments show that DTW is more accurate 70% of the time, but is also more prone to gross labeling errors. HMM modeling exhibits a systematic bias of 15 ms. Combining both methods directs human labelers towards data most likely to be problematic.

Journal ArticleDOI
TL;DR: In this article, context dependent dynamic time warping was used to recognize isolated musical patterns in a monophonic environment; it exploits the correlation exhibited among adjacent frequency jumps of the feature sequence.
Abstract: Automatic recognition of musical patterns plays a crucial part in musicological and ethnomusicological research and can become an indispensable tool for the search and comparison of music extracts within a large multimedia database. This paper presents an efficient method for recognizing isolated musical patterns in a monophonic environment, using a novel extension of dynamic time warping, which we call context dependent dynamic time warping. Each pattern, to be recognized, is converted into a sequence of frequency jumps by means of a fundamental frequency tracking algorithm, followed by a quantizer. The resulting sequence of frequency jumps is presented to the input of the recognizer. The main characteristic of context dependent dynamic time warping is that it exploits the correlation exhibited among adjacent frequency jumps of the feature sequence. The methodology has been tested in the context of Greek traditional music, which exhibits certain characteristics that make the classification task harder, when compared with western musical tradition. A recognition rate higher than 95% was achieved.

Journal ArticleDOI
TL;DR: The Baum-Welch (1972) estimation algorithm for HMMs is extended to obtain an iterative method, based on the Baum inequality, for estimating the parameters of the new model; it efficiently considers all possible alignment paths between the training data and the current model.
Abstract: We introduce an enhanced dynamic time warping model (EDTW) which, unlike conventional dynamic time warping (DTW), considers all possible alignment paths for recognition as well as for parameter estimation. The model, for which DTW and the hidden Markov model (HMM) are special cases, is based on a well-defined quality measure. We extend the derivation of the Forward and Viterbi algorithms for HMMs, in order to obtain efficient solutions for the problems of recognition and optimal path alignment in the new proposed model. We then extend the Baum-Welch (1972) estimation algorithm for HMMs and obtain an iterative method for estimating the model parameters of the new model based on the Baum inequality. This estimation method efficiently considers all possible alignment paths between the training data and the current model. A standard segmental K-means estimation algorithm is also derived for EDTW. We compare the performance of the two training algorithms, with various path movement constraints, in two isolated letter recognition tasks. The new estimation algorithm was found to improve performance over segmental K-means in most experiments.
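
As a toy illustration of the difference between scoring only the single best alignment path (classic DTW, Viterbi-style) and aggregating over all monotonic alignment paths (forward-algorithm-style), consider the sketch below; it is not the authors' EDTW formulation, and the exponential weighting with parameter beta is my own simplification.

```python
import math

def all_paths_score(a, b, beta=1.0):
    """Aggregate over *all* monotonic alignment paths instead of only the
    single best one: each cell sums exp(-beta*cost) contributions from its
    three predecessors, whereas classic DTW keeps only the minimum-cost
    predecessor."""
    n, m = len(a), len(b)
    F = [[0.0] * (m + 1) for _ in range(n + 1)]
    F[0][0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = math.exp(-beta * abs(a[i - 1] - b[j - 1]))
            F[i][j] = local * (F[i - 1][j] + F[i][j - 1] + F[i - 1][j - 1])
    return -math.log(F[n][m]) / beta   # soft aggregate over every path

print(all_paths_score([1, 2, 3], [1, 2, 2, 3]))
```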

Journal ArticleDOI
TL;DR: It will be shown that HMM2 can be used to extract noise-robust features, supposed to be related to formant regions, which can be used as extra features for traditional HMM recognizers to improve their performance.

Patent
07 Nov 2003
TL;DR: In this article, handwritten characters are classified as print or cursive based upon numerical feature values calculated from the shape of an input character; the feature values are applied to the inputs of an artificial neural network, which outputs the probability of the input character being print or cursive.
Abstract: Input handwritten characters are classified as print or cursive based upon numerical feature values calculated from the shape of an input character. The feature values are applied to inputs of an artificial neural network which outputs a probability of the input character being print or cursive. If a character is classified as print, it is analyzed by a print character recognizer. If a character is classified as cursive, it is analyzed using a cursive character recognizer. The cursive character recognizer compares the input character to multiple prototype characters using a Dynamic Time Warping (DTW) algorithm.

Proceedings ArticleDOI
16 Jul 2003
TL;DR: In this article, the authors present a method that supports dynamic time warping for subsequence matching within a collection of sequences, which takes full advantage of the sliding window approach and can handle queries of arbitrary length.
Abstract: It has been found that the technique of searching for similar patterns among time series data is very important in a wide range of scientific and business applications. Most of the research works use Euclidean distance as their similarity metric. However, dynamic time warping (DTW) is a more robust distance measure than Euclidean distance in many situations, where sequences may have different lengths or have patterns which are out of phase in the time axis. Unfortunately, DTW does not satisfy the triangle inequality, so spatial indexing techniques cannot be applied. In this paper, we present a method that supports dynamic time warping for subsequence matching within a collection of sequences. Our method takes full advantage of the "sliding window" approach and can handle queries of arbitrary length.
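
As background on what "subsequence matching under DTW" computes (the paper's sliding-window indexing machinery is not reproduced here), the standard open-start/open-end dynamic program looks roughly as follows.

```python
def subsequence_dtw(query, series):
    """Best warped match of `query` anywhere inside the longer `series`:
    the first row of the DP table is zero, so a match may start at any
    position, and the answer is the minimum over all possible end positions."""
    n, m = len(query), len(series)
    INF = float("inf")
    D = [[0.0] * (m + 1)] + [[INF] * (m + 1) for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - series[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    end = min(range(1, m + 1), key=lambda j: D[n][j])
    return D[n][end], end   # distance and (1-based) end position in series

print(subsequence_dtw([5, 6, 7, 6], [1, 1, 5, 6, 6, 7, 6, 2, 1]))
# (0.0, 7): an exact warped match ending at position 7
```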

Proceedings ArticleDOI
01 Jan 2003
TL;DR: Extensions and modifications to the compass operator make it applicable to texture edge detection in high dimensional images whose dimensions represent the output of a texture filter bank; the results show that the extended compass operator can robustly locate edges in natural scenes with complex textures.
Abstract: The compass operator has proven to be a useful tool for the detection of color edges in real images. Its fundamental contribution is the comparison of oriented distributions of image features over a local area at each pixel. This paper presents extensions and modifications to the operator to make it applicable to texture edge detection in high dimensional images whose dimensions represent the output of a texture filter bank. The results show that the extended compass operator can robustly locate edges in natural scenes with complex textures. In addition, the use of a dynamic time warping distribution matching metric and jittered application of the operator improves the computational running time by a factor of over 50 while still producing comparable results. This large-scale speedup makes application of the algorithm to an entire image database computationally feasible.

Book ChapterDOI
24 Jul 2003
TL;DR: A new k-warping distance algorithm is proposed which modifies the existing time warping distance algorithm by permitting up to k replications for an arbitrary motion of a query trajectory to measure the similarity between two trajectories.
Abstract: Moving objects' trajectories play an important role in doing content-based retrieval in video databases. In this paper, we propose a new k-warping distance algorithm which modifies the existing time warping distance algorithm by permitting up to k replications for an arbitrary motion of a query trajectory to measure the similarity between two trajectories. Based on our k-warping distance algorithm, we also propose a new similar sub-trajectory retrieval scheme for efficient retrieval on moving objects' trajectories in video databases. Our scheme can support multiple properties including direction, distance, and time and can provide the approximate matching that is superior to the exact matching. As its application, we implement the Content-based Soccer Video Retrieval (CSVR) system. Finally, we show from our experiment that our scheme outperforms Li's scheme (no-warping) and Shan's scheme (infinite-warping) in terms of precision and recall measures.

Book ChapterDOI
TL;DR: An algorithm is presented that utilizes minutiae, associate ridges and orientation fields to determine the registration pattern between two fingerprints and match them; experimental results demonstrate the performance of the proposed algorithm.
Abstract: The "registration pattern" between two fingerprints is the optimal registration of each part of one fingerprint with respect to the other fingerprint. Registration patterns generated from imposter's matching attempts are different from those patterns from genuine matching attempts, although they may share some similarities in the aspect of minutiae. In this paper, we present an algorithm that utilizes minutiae, associate ridges and orientation fields to determine the registration pattern between two fingerprints and match them. The proposed matching scheme has two stages. An offline, training stage, derives a genuine registration pattern base from a set of genuine matching attempts. Then, an online matching stage registers the two fingerprints and determines the registration pattern. Only if the pattern makes a genuine one, a further fine matching is conducted. The genuine registration pattern base was derived using a set of fingerprints extracted from the NIST Special Database 24. The algorithm has been tested on the second FVC2002 database. Experimental results demonstrate the performance of the proposed algorithm.

Journal ArticleDOI
TL;DR: A one-dimensional model-based misclassification measure is proposed to evaluate the distance between a particular model of interest and a combination of many of its competing models, and it is demonstrated that the error rate of a recognition system in a noisy environment can also be predicted.
Abstract: A model-based framework of classification error rate estimation is proposed for speech and speaker recognition. It aims at predicting the run-time performance of a hidden Markov model (HMM) based recognition system for a given task vocabulary and grammar without the need of running recognition experiments using a separate set of testing samples. This is highly desirable both in theory and in practice. However, the error rate expression in HMM-based speech recognition systems has no closed form solution due to the complexity of the multi-class comparison process and the need for dynamic time warping to handle various speech patterns. To alleviate the difficulty, we propose a one-dimensional model-based misclassification measure to evaluate the distance between a particular model of interest and a combination of many of its competing models. The error rate for a class characterized by the HMM is then the value of a smoothed zero-one error function given the misclassification measure. The overall error rate of the task vocabulary could then be computed as a function of all the available class error rates. The key here is to evaluate the misclassification measure in terms of the parameters of environmental-matched models without running recognition experiments, where the models are adapted by very limited data that could be just the testing utterance itself. In this paper, we show how the misclassification measure could be approximated by first computing the distance between two mixture Gaussian densities, then between two HMMs with mixture Gaussian state observation densities and finally between two sequences of HMMs. The misclassification measure is then converted into classification error rate. When comparing the error rate obtained in running actual experiments and that of the new framework, the proposed algorithm accurately estimates the classification error rate for many types of speech and speaker recognition problems. Based on the same framework, it is also demonstrated that the error rate of a recognition system in a noisy environment could also be predicted.

Proceedings ArticleDOI
02 Nov 2003
TL;DR: A modified dynamic time warping algorithm is presented that applies a linguistic-variable concept tree to describe the slope features of time series; it is strongly robust to the loss of feature data thanks to piecewise segment preprocessing.
Abstract: A growing attention has been paid to mining time series knowledge recently. The Euclidean distance measure is commonly used for comparing time series; however, it can be a brittle distance measure with limited robustness. In this paper, a modified dynamic time warping algorithm is presented that applies a linguistic-variable concept tree to describe the slope features of time series. To reduce computation time and the disturbance of local shape variance, a piecewise linear representation is used to process the warping path. Moreover, the linguistic concept tree is developed based on cloud model theory, which integrates the randomness and probability of uncertainty, enabling conversion between qualitative and quantitative knowledge. Cluster analysis experiments based on this algorithm, compared with the Euclidean measure, were implemented on the synthetic control chart time series. The results show that the method presented in this paper is strongly robust to the loss of feature data thanks to the piecewise segment preprocessing. Moreover, after the construction of the shape concept tree, knowledge of time series can be discovered at different time granularities.
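
The cloud-model concept tree itself is not reproduced here; purely as an illustration of the kind of piecewise slope description such a representation starts from, the sketch below segments a series and maps each segment's least-squares slope onto a small set of linguistic labels (segment length and thresholds are arbitrary).

```python
def slope_labels(series, seg_len=4):
    """Piecewise-linear style preprocessing: cut the series into fixed-length
    segments, fit a slope to each by least squares, and map the slope onto a
    small set of linguistic labels (a stand-in for a concept tree's leaves)."""
    labels = []
    for start in range(0, len(series) - seg_len + 1, seg_len):
        seg = series[start:start + seg_len]
        xs = range(seg_len)
        x_mean = sum(xs) / seg_len
        y_mean = sum(seg) / seg_len
        num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, seg))
        den = sum((x - x_mean) ** 2 for x in xs)
        slope = num / den
        if slope > 0.5:
            labels.append("rising")
        elif slope < -0.5:
            labels.append("falling")
        else:
            labels.append("flat")
    return labels

print(slope_labels([0, 1, 2, 3, 3, 3, 3, 3, 3, 2, 1, 0]))
# ['rising', 'flat', 'falling']
```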