
Showing papers on "Cluster analysis published in 1995"


Book
29 Dec 1995
TL;DR: This book, by the authors of the Neural Network Toolbox for MATLAB, provides a clear and detailed coverage of fundamental neural network architectures and learning rules, as well as methods for training them and their applications to practical problems.
Abstract: This book, by the authors of the Neural Network Toolbox for MATLAB, provides a clear and detailed coverage of fundamental neural network architectures and learning rules. In it, the authors emphasize a coherent presentation of the principal neural networks, methods for training them and their applications to practical problems. Features include: extensive coverage of training methods for both feedforward networks (including multilayer and radial basis networks) and recurrent networks; in addition to conjugate gradient and Levenberg-Marquardt variations of the backpropagation algorithm, coverage of Bayesian regularization and early stopping, which ensure the generalization ability of trained networks; associative and competitive networks, including feature maps and learning vector quantization, explained with simple building blocks; a chapter of practical training tips for function approximation, pattern recognition, clustering and prediction, along with five chapters presenting detailed real-world case studies; and detailed examples and numerous solved problems. Slides and comprehensive demonstration software can be downloaded from hagan.okstate.edu/nnd.html.

6,463 citations


Journal ArticleDOI
TL;DR: Mean shift, a simple iterative procedure that shifts each data point to the average of data points in its neighborhood, is generalized and analyzed; the generalization makes some k-means-like clustering algorithms special cases of mean shift.
Abstract: Mean shift, a simple iterative procedure that shifts each data point to the average of data points in its neighborhood, is generalized and analyzed in the paper. This generalization makes some k-means-like clustering algorithms its special cases. It is shown that mean shift is a mode-seeking process on the surface constructed with a "shadow" kernel. For Gaussian kernels, mean shift is a gradient mapping. Convergence is studied for mean shift iterations. Cluster analysis is treated as a deterministic problem of finding a fixed point of mean shift that characterizes the data. Applications in clustering and Hough transform are demonstrated. Mean shift is also considered as an evolutionary strategy that performs multistart global optimization.
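As a rough illustration of the flat-kernel procedure the abstract describes, the sketch below shifts every point toward the mean of the original data points within a fixed radius until the shifts become negligible; the kernel generalizations, convergence results and blurring variant analyzed in the paper are not reproduced, and the function name and `radius` parameter are illustrative assumptions.

```python
import numpy as np

def mean_shift(data, radius, n_iter=100, tol=1e-6):
    """Flat-kernel mean shift sketch: repeatedly move each point to the mean
    of the original data points lying within `radius` of it. Points that
    belong to the same mode end up at (nearly) the same location."""
    shifted = data.astype(float).copy()
    for _ in range(n_iter):
        max_move = 0.0
        for i in range(len(shifted)):
            dist = np.linalg.norm(data - shifted[i], axis=1)
            neighbours = data[dist <= radius]
            if len(neighbours) == 0:
                continue
            new_pos = neighbours.mean(axis=0)
            max_move = max(max_move, np.linalg.norm(new_pos - shifted[i]))
            shifted[i] = new_pos
        if max_move < tol:
            break
    return shifted
```

Points whose final positions coincide (up to a small tolerance) can then be grouped into clusters, which is the sense in which mean shift subsumes some k-means-like algorithms.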

3,924 citations


Journal ArticleDOI
TL;DR: The proposed test can detect clusters of any size, located anywhere in the study region, and is not restricted to clusters that conform to predefined administrative or political borders.
Abstract: We present a new method of detection and inference for spatial clusters of a disease. To avoid ad hoc procedures to test for clustering, we have a clearly defined alternative hypothesis and our test statistic is based on the likelihood ratio. The proposed test can detect clusters of any size, located anywhere in the study region. It is not restricted to clusters that conform to predefined administrative or political borders. The test can be used for spatially aggregated data as well as when exact geographic co-ordinates are known for each individual. We illustrate the method on a data set describing the occurrence of leukaemia in Upstate New York.
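For orientation, the sketch below shows the kind of Poisson log-likelihood-ratio score that such a scan test maximizes over candidate zones of varying size and location; the exact statistic, the construction of the zones and the inference procedure are as defined in the paper, so the function below is only a hedged illustration with assumed argument names.

```python
import numpy as np

def zone_log_likelihood_ratio(cases_in, expected_in, total_cases):
    """Illustrative Poisson log-likelihood-ratio score for one candidate zone:
    c observed cases inside the zone versus e expected, with C cases in the
    whole study region. Zones with no excess (c <= e) score zero."""
    c, e, C = float(cases_in), float(expected_in), float(total_cases)
    if c <= e:
        return 0.0
    llr = c * np.log(c / e)
    if C > c:
        llr += (C - c) * np.log((C - c) / (C - e))
    return llr
```

The test statistic is then the maximum of this score over all candidate zones, with significance typically assessed by comparing it with the maxima obtained on randomly relabelled replicates of the data.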

1,452 citations


Posted Content
TL;DR: The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated.
Abstract: We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The statistical modeling techniques introduced in this paper differ from those common to much of the natural language processing literature since there is no probabilistic finite state or push-down automaton on which the model is built. Our approach also differs from the techniques common to the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches including decision trees and Boltzmann machines are given. As a demonstration of the method, we describe its application to the problem of automatic word classification in natural language processing. Key words: random field, Kullback-Leibler divergence, iterative scaling, divergence geometry, maximum entropy, EM algorithm, statistical learning, clustering, word morphology, natural language processing

1,140 citations


Proceedings ArticleDOI
22 May 1995
TL;DR: A fast algorithm is proposed to map objects into points in some k-dimensional space (k is user-defined) such that the dissimilarities are preserved; an older method from pattern recognition, Multi-Dimensional Scaling (MDS), is used as a yardstick for comparison.
Abstract: A very promising idea for fast searching in traditional and multimedia databases is to map objects into points in k-d space, using k feature-extraction functions, provided by a domain expert [25]. Thus, we can subsequently use highly fine-tuned spatial access methods (SAMs), to answer several types of queries, including the 'Query By Example' type (which translates to a range query); the 'all pairs' query (which translates to a spatial join [8]); the nearest-neighbor or best-match query, etc. However, designing feature extraction functions can be hard. It is relatively easier for a domain expert to assess the similarity/distance of two objects. Given only the distance information though, it is not obvious how to map objects into points. This is exactly the topic of this paper. We describe a fast algorithm to map objects into points in some k-dimensional space (k is user-defined), such that the dissimilarities are preserved. There are two benefits from this mapping: (a) efficient retrieval, in conjunction with a SAM, as discussed before, and (b) visualization and data-mining: the objects can now be plotted as points in 2-d or 3-d space, revealing potential clusters, correlations among attributes and other regularities that data-mining is looking for. We introduce an older method from pattern recognition, namely, Multi-Dimensional Scaling (MDS) [51]; although unsuitable for indexing, we use it as a yardstick for our method. Then, we propose a much faster algorithm to solve the problem at hand, while in addition it allows for indexing. Experiments on real and synthetic data indeed show that the proposed algorithm is significantly faster than MDS (being linear, as opposed to quadratic, in the database size N), while it manages to preserve distances and the overall structure of the data-set.
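A hedged sketch of the pivot-projection idea behind such a distance-preserving mapping is given below (it is not claimed to be the authors' exact algorithm): pick two far-apart pivot objects, project every object onto the line through them using the cosine law, then recurse on the residual distances for the next coordinate. The matrix-based interface (`d2` holding squared dissimilarities) is an assumption for compactness.

```python
import numpy as np

def project_to_kd(d2, k):
    """Map n objects to k-d points that roughly preserve the given pairwise
    squared dissimilarities d2 (an n x n symmetric matrix)."""
    n = d2.shape[0]
    coords = np.zeros((n, k))
    d2 = d2.astype(float).copy()
    for axis in range(k):
        # choose two far-apart pivot objects with a few greedy sweeps
        a = 0
        for _ in range(3):
            b = int(np.argmax(d2[a]))
            a, b = b, a
        dab2 = d2[a, b]
        if dab2 <= 0:
            break                      # remaining residual distances are zero
        # project every object onto the pivot line (cosine law)
        x = (d2[a, :] + dab2 - d2[b, :]) / (2.0 * np.sqrt(dab2))
        coords[:, axis] = x
        # residual squared distances in the hyperplane orthogonal to that line
        d2 = d2 - (x[:, None] - x[None, :]) ** 2
        d2[d2 < 0] = 0.0
    return coords
```

In a practical implementation only the distances to the two pivots would be computed on demand (O(N) per coordinate), which is what makes this family of mappings linear rather than quadratic in the database size; the full matrix is used here only to keep the sketch short.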

1,124 citations


Proceedings ArticleDOI
22 May 1995
TL;DR: This paper details the sorted neighborhood method that is used by some to solve merge/purge and presents experimental results that demonstrate this approach may work well in practice but at great expense, and shows a means of improving the accuracy of the results based upon a multi-pass approach.
Abstract: Many commercial organizations routinely gather large numbers of databases for various marketing and business analysis functions. The task is to correlate information from different databases by identifying distinct individuals that appear in a number of different databases, typically in an inconsistent and often incorrect fashion. The problem we study here is the task of merging data from multiple sources in as efficient a manner as possible, while maximizing the accuracy of the result. We call this the merge/purge problem. In this paper we detail the sorted neighborhood method that is used by some to solve merge/purge and present experimental results that demonstrate this approach may work well in practice but at great expense. An alternative method based upon clustering is also presented with a comparative evaluation to the sorted neighborhood method. We show a means of improving the accuracy of the results based upon a multi-pass approach that succeeds by computing the transitive closure over the results of independent runs considering alternative primary key attributes in each pass.
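A minimal sketch of the sorted-neighborhood idea described above: sort the records on a discriminating key, then compare only records that fall within a small sliding window of the sorted order. The helper names (`make_key`, `is_match`) and the window size are illustrative assumptions.

```python
def sorted_neighborhood(records, make_key, is_match, window=10):
    """Return candidate duplicate pairs (as index pairs) found by comparing
    each record only with its neighbours in key-sorted order."""
    order = sorted(range(len(records)), key=lambda i: make_key(records[i]))
    matches = []
    for pos, i in enumerate(order):
        for j in order[pos + 1 : pos + window]:
            if is_match(records[i], records[j]):
                matches.append((i, j))
    return matches
```

The multi-pass variant mentioned in the abstract would run this with several independently chosen keys and then take the transitive closure of the union of the resulting match sets.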

985 citations


Journal ArticleDOI
TL;DR: Methods of optimization to derive the maximum likelihood estimates, as well as the practical usefulness of these models, are discussed, and an application to stellar data dramatically illustrates the relevance of allowing clusters to have different volumes.

858 citations


Journal ArticleDOI
TL;DR: This survey describes research directions in netlist partitioning during the past two decades in terms of both problem formulations and solution approaches, and discusses methods which combine clustering with existing algorithms (e.g., two-phase partitioning).

673 citations


Journal ArticleDOI
TL;DR: It is demonstrated that in some cases it may be reasonable to replace the computation of GLCM with that of GLDH (Gray Level Difference Histogram), in order to benefit from a better compromise between texture measurement accuracy, computer storage and computation time.
Abstract: The aim of this study was to investigate the statistical meaning of six GLCM (Gray Level Cooccurrence Matrix) parameters. This objective was mainly pursued by means of a self-consistent, theoretical assessment in order to remain independent from test images. The six statistical parameters are energy, contrast, variance, correlation, entropy and inverse difference moment, which are considered the most relevant among the 14 originally proposed by Haralick et al. The functional analysis supporting theoretical considerations was based on natural clustering in the feature space of segment texture values. The results show that among the six GLCM statistical parameters, five different sets can be identified, each set featuring a specific textural meaning. The first set contains energy and entropy, while the four remaining parameters can be regarded as belonging to four different sets. Two parameters, energy and contrast, are considered to be the most efficient for discriminating different textural patterns. A new GLCM statistical parameter, recursivity, is presented in order to replace energy, which presents some degree of correlation with contrast. It is demonstrated that in some cases it may be reasonable to replace the computation of GLCM with that of GLDH (Gray Level Difference Histogram), in order to benefit from a better compromise between texture measurement accuracy, computer storage and computation time.
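For concreteness, the sketch below builds a normalized co-occurrence matrix for one displacement on an image already quantized to a small number of grey levels and evaluates three of the statistics named in the abstract (energy, contrast, entropy); the quantization step, displacement choice and function name are assumptions for illustration.

```python
import numpy as np

def glcm_features(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one displacement on an integer
    image with values in [0, levels), plus energy, contrast and entropy."""
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()                      # joint probability estimate
    i, j = np.indices(p.shape)
    energy = np.sum(p ** 2)                    # angular second moment
    contrast = np.sum(((i - j) ** 2) * p)
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log(nz))
    return {"energy": energy, "contrast": contrast, "entropy": entropy}
```

Replacing the 2-D co-occurrence matrix with the 1-D grey-level difference histogram, as the abstract suggests, amounts to keeping only the distribution of |i - j|, which is enough for difference-based statistics such as contrast.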

571 citations


Journal ArticleDOI
TL;DR: An unsupervised segmentation algorithm which uses Markov random field models for color textures which characterize a texture in terms of spatial interaction within each color plane and interaction between different color planes is presented.
Abstract: We present an unsupervised segmentation algorithm which uses Markov random field models for color textures. These models characterize a texture in terms of spatial interaction within each color plane and interaction between different color planes. The models are used by a segmentation algorithm based on agglomerative hierarchical clustering. At the heart of agglomerative clustering is a stepwise optimal merging process that at each iteration maximizes a global performance functional based on the conditional pseudolikelihood of the image. A test for stopping the clustering is applied based on rapid changes in the pseudolikelihood. We provide experimental results that illustrate the advantages of using color texture models and that demonstrate the performance of the segmentation algorithm on color images of natural scenes. Most of the processing during segmentation is local, making the algorithm amenable to high-performance parallel implementation.

485 citations


Journal ArticleDOI
01 Aug 1995
TL;DR: This paper discusses parallel algorithms to perform hierarchical clustering using various distance metrics, and a general algorithm is given that can be used to perform clustering with the complete link and average link metrics on a butterfly.
Abstract: Hierarchical clustering is a common method used to determine clusters of similar data points in multi-dimensional spaces. $O(n^2)$ algorithms, where $n$ is the number of points to cluster, have long been known for this problem. This paper discusses parallel algorithms to perform hierarchical clustering using various distance metrics. I describe $O(n)$ time algorithms for clustering using the single link, average link, complete link, centroid, median, and minimum variance metrics on an $n$ node CRCW PRAM and $O(n \log n)$ algorithms for these metrics (except average link and complete link) on $\frac{n}{\log n}$ node butterfly networks or trees. Thus, optimal efficiency is achieved for a significant number of processors using these distance metrics. A general algorithm is given that can be used to perform clustering with the complete link and average link metrics on a butterfly. While this algorithm achieves optimal efficiency for the general class of metrics, it is not optimal for the specific cases of complete link and average link clustering.
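For reference, a naive serial single-link implementation of the kind of agglomerative metric the paper parallelizes is sketched below; it is deliberately the simplest possible formulation, not the O(n) PRAM or butterfly algorithms the abstract describes.

```python
import numpy as np

def single_link(points, k):
    """Agglomerative single-link clustering down to k clusters: repeatedly
    merge the two clusters whose closest members are nearest to each other."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best_a, best_b, best_d = 0, 1, np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                dist = min(d[i, j] for i in clusters[a] for j in clusters[b])
                if dist < best_d:
                    best_a, best_b, best_d = a, b, dist
        clusters[best_a].extend(clusters[best_b])
        del clusters[best_b]
    return clusters
```

Swapping `min` for `max`, or for an average over the cross-cluster pairs, gives the complete-link and average-link metrics mentioned in the abstract.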

Journal ArticleDOI
TL;DR: A new algorithm for solving the problem of clustering m objects into c clusters based on a tabu search technique is developed that compares favorably with both the k-means and the simulated annealing algorithms.

Journal ArticleDOI
TL;DR: A hybrid numeric-symbolic method is proposed that integrates an extended version of the K-means algorithm for cluster determination and a complementary conceptual characterization algorithm for cluster description.

Proceedings ArticleDOI
23 Oct 1995
TL;DR: A scheme to match video shots and to cluster them by taking into account the temporal variations within individual shots is proposed, using a much reduced representation for a video shot.
Abstract: Browsing, search and retrieval in digital video libraries depend on the ability of the system to match, classify and group video shots by their visual contents. However, similarity of shots cannot always be settled by using only one key frame per shot, as is commonly practiced. In this paper we propose a scheme to match video shots and to cluster them by taking into account the temporal variations within individual shots. A much reduced representation for a video shot is used; these images still capture the dynamics of visual contents for the matching and clustering process. Experimental results are reported.

Posted Content
TL;DR: A method for automatic sense disambiguation of nouns appearing within sets of related nouns — the kind of data one finds in on-line thesauri, or as the output of distributional clustering algorithms.
Abstract: Word groupings useful for language processing tasks are increasingly available, as thesauri appear on-line, and as distributional word clustering techniques improve. However, for many tasks, one is interested in relationships among word senses, not words. This paper presents a method for automatic sense disambiguation of nouns appearing within sets of related nouns --- the kind of data one finds in on-line thesauri, or as the output of distributional clustering algorithms. Disambiguation is performed with respect to WordNet senses, which are fairly fine-grained; however, the method also permits the assignment of higher-level WordNet categories rather than sense labels. The method is illustrated primarily by example, though results of a more rigorous evaluation are also presented.

Journal Article
TL;DR: After visual examination of traffic operations at sites where breakdown occurred, it was observed that immediately before breakdown, large ramp-vehicle clusters entered the freeway stream and disrupted traffic operations, and a probabilistic model for describing the process of breakdown at ramp-freeway junctions was examined.
Abstract: Observation of field data collected as part of NCHRP Project 3-37 showed that at ramp merge junctions, breakdown may occur at flows lower than the maximum observed, or capacity, flows. Furthermore, it was observed that at the same site and for the same ramp and freeway flows, breakdown may or may not occur. After visual examination of traffic operations at sites where breakdown occurred, it was observed that immediately before breakdown, large ramp-vehicle clusters entered the freeway stream and disrupted traffic operations. It was concluded that breakdown is a probabilistic rather than deterministic event and is a function of ramp-vehicle cluster occurrence. Subsequently, a probabilistic model for describing the process of breakdown at ramp-freeway junctions was examined. The model gives the probability that breakdown will occur at given ramp and freeway flows and is based on ramp-vehicle cluster occurrence. Simulation of a data collection effort was conducted to establish the data requirements for model validation. It was concluded that the amount of data available was not adequate for precise validation of the probabilistic model.

Journal ArticleDOI
TL;DR: This work shows how to combine the grand tour and projection pursuit into a dynamic graphical tool for exploratory data analysis, called a projection pursuit guided tour, which assists in clustering data when clusters are oddly shaped and in finding general low-dimensional structure in high-dimensional, and in particular, sparse data.
Abstract: The grand tour and projection pursuit are two methods for exploring multivariate data. We show how to combine them into a dynamic graphical tool for exploratory data analysis, called a projection pursuit guided tour. This tool assists in clustering data when clusters are oddly shaped and in finding general low-dimensional structure in high-dimensional, and in particular, sparse data. An example shows that the method, which is projection-based, can be quite powerful in situations that may cause grief for methods based on kernel smoothing. The projection pursuit guided tour is also useful for comparing and developing projection pursuit indexes and illustrating some types of asymptotic results.

Journal ArticleDOI
TL;DR: Unsupervised clustering algorithms are described that use the surface density measure and other measures to determine the optimum number of shell clusters automatically, and it is shown through theoretical derivations that surface density is relatively invariant to size and partiality of the clusters.
Abstract: Shell clustering algorithms are ideally suited for computer vision tasks such as boundary detection and surface approximation, particularly when the boundaries have jagged or scattered edges and when the range data is sparse. This is because shell clustering is insensitive to local aberrations, it can be performed directly in image space, and unlike traditional approaches it does not assume dense data and does not use additional features such as curvatures and surface normals. The shell clustering algorithms introduced in Part I of this paper assume that the number of clusters is known, however, which is not the case in many boundary detection and surface approximation applications. This problem can be overcome by considering cluster validity. We introduce a validity measure called surface density which is explicitly meant for the type of applications considered in this paper, and we show through theoretical derivations that surface density is relatively invariant to size and partiality (incompleteness) of the clusters. We describe unsupervised clustering algorithms that use the surface density measure and other measures to determine the optimum number of shell clusters automatically, and illustrate the application of the proposed algorithms to boundary detection in the case of intensity images and to surface approximation in the case of range images.

Journal ArticleDOI
TL;DR: This paper shows how to reformulate some clustering criteria so that specialized algorithms can be replaced by general optimization routines found in commercially available software and proves that the original and reformulated versions of each criterion are fully equivalent.
Abstract: Various hard, fuzzy and possibilistic clustering criteria (objective functions) are useful as bases for a variety of pattern recognition problems. At present, many of these criteria have customized individual optimization algorithms. Because of the specialized nature of these algorithms, experimentation with new and existing criteria can be very inconvenient and costly in terms of development and implementation time. This paper shows how to reformulate some clustering criteria so that specialized algorithms can be replaced by general optimization routines found in commercially available software. We prove that the original and reformulated versions of each criterion are fully equivalent. Finally, two numerical examples are given to illustrate reformulation.
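As a hedged sketch of what such a reformulation looks like for the fuzzy c-means criterion, the code below eliminates the membership matrix analytically, leaving a function of the prototypes alone that can be handed to a general-purpose optimizer; the criterion shown, R_m(V) = sum_k (sum_i d_ik^(2/(1-m)))^(1-m), follows the standard FCM reformulation, while the optimizer choice and function names are illustrative assumptions rather than the paper's prescription.

```python
import numpy as np
from scipy.optimize import minimize

def reformulated_fcm(data, c, m=2.0, seed=0):
    """Optimize the membership-free fuzzy c-means criterion R_m(V) with a
    generic routine instead of the customized alternating algorithm."""
    n, p = data.shape

    def R(flat_v):
        V = flat_v.reshape(c, p)
        # squared distances d_ik^2 between prototype i and data point k
        d2 = ((data[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)
        inner = np.sum(d2 ** (1.0 / (1.0 - m)), axis=0)  # sum over prototypes
        return np.sum(inner ** (1.0 - m))                 # sum over data points

    rng = np.random.default_rng(seed)
    v0 = data[rng.choice(n, size=c, replace=False)].ravel()
    return minimize(R, v0, method="Nelder-Mead").x.reshape(c, p)
```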

Journal ArticleDOI
TL;DR: An enhancement of the traditional k-means algorithm that approximates an optimal clustering solution with an efficient adaptive learning rate, which renders it usable even in situations where the statistics of the problem task varies slowly with time.
Abstract: Adaptive k-means clustering algorithms have been used in several artificial neural network architectures, such as radial basis function networks or feature-map classifiers, for a competitive partitioning of the input domain. This paper presents an enhancement of the traditional k-means algorithm. It approximates an optimal clustering solution with an efficient adaptive learning rate, which renders it usable even in situations where the statistics of the problem task varies slowly with time. This modification is based on the optimality criterion for the k-means partition stating that: all the regions in an optimal k-means partition have the same variations if the number of regions in the partition is large and the underlying distribution for generating input patterns is smooth. The goal of equalizing these variations is introduced in the competitive function that assigns each new pattern vector to the "appropriate" region. To evaluate the optimal k-means algorithm, the authors first compare it to other k-means variants on several simple tutorial examples, and then evaluate it on a practical application: vector quantization of image data.
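A minimal online k-means sketch is given below to show the general shape of such adaptive algorithms: each incoming pattern moves the nearest centroid toward it by a decaying learning rate. The paper's specific variance-equalizing competitive function and learning-rate schedule are not reproduced; the count-based rate here is only an illustrative placeholder.

```python
import numpy as np

def online_kmeans(stream, k, dim, seed=0):
    """Online (adaptive) k-means sketch: the nearest centroid is nudged toward
    each incoming pattern vector with a per-cluster decaying learning rate."""
    rng = np.random.default_rng(seed)
    centroids = rng.normal(size=(k, dim))
    counts = np.zeros(k)
    for x in stream:
        x = np.asarray(x, dtype=float)
        j = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
        counts[j] += 1
        eta = 1.0 / counts[j]                # simple decaying rate (placeholder)
        centroids[j] += eta * (x - centroids[j])
    return centroids
```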

Proceedings Article
01 Jan 1995
TL;DR: In this article, the authors describe an efficient method for obtaining word classes for class language models, which employs an exchange algorithm using the criterion of perplexity improvement, and experimental results on large text corpora of about 1, 4, 39 and 241 million words.
Abstract: In this paper, we describe an efficient method for obtaining word classes for class language models. The method employs an exchange algorithm using the criterion of perplexity improvement. The novel contributions of this paper are the extension of the class bigram perplexity criterion to the class trigram perplexity criterion, the description of an efficient implementation for speeding up the clustering process, the detailed computational complexity analysis of the clustering algorithm, and, finally, experimental results on large text corpora of about 1, 4, 39 and 241 million words including examples of word classes, test corpus perplexities in comparison to word language models, and speech recognition results.

Book ChapterDOI
06 Aug 1995
TL;DR: This paper addresses the task of class identification in spatial databases using clustering techniques using a well-known spatial access method, the R*-tree, and presents several strategies for focusing: selecting representatives from a spatial database, focusing on the relevant clusters and retrieving all objects of a given cluster.
Abstract: Both the number and the size of spatial databases are rapidly growing because of the large amount of data obtained from satellite images, X-ray crystallography or other scientific equipment. Therefore, automated knowledge discovery becomes more and more important in spatial databases. So far, most of the methods for knowledge discovery in databases (KDD) have been based on relational database systems. In this paper, we address the task of class identification in spatial databases using clustering techniques. We put special emphasis on the integration of the discovery methods with the DB interface, which is crucial for the efficiency of KDD on large databases. The key to this integration is the use of a well-known spatial access method, the R*-tree. The focusing component of a KDD system determines which parts of the database are relevant for the knowledge discovery task. We present several strategies for focusing: selecting representatives from a spatial database, focusing on the relevant clusters and retrieving all objects of a given cluster. We have applied the proposed techniques to real data from a large protein database used for predicting protein-protein docking. A performance evaluation on this database indicates that clustering on large spatial databases can be performed both efficiently and effectively.

Journal ArticleDOI
TL;DR: In this paper, various analytical approximation methods for following the evolution of cosmological density perturbations into the strong (i.e., nonlinear) clustering regime are discussed.

Journal ArticleDOI
01 Apr 1995
TL;DR: The Navigational View Builder is described, a tool which allows the user to interactively create useful visualizations of the information space which uses four strategies to form effective views: binding, clustering, filtering and hierarchization.
Abstract: Overview diagrams are one of the best tools for orientation and navigation in hypermedia systems. However, constructing effective overview diagrams is a challenging task. This paper describes the Navigational View Builder, a tool which allows the user to interactively create useful visualizations of the information space. It uses four strategies to form effective views. These are binding, clustering, filtering and hierarchization. These strategies use a combination of structural and content analysis of the underlying space for forming the visualizations. This paper discusses these strategies and shows how they can be applied for forming visualizations for the World-Wide Web.

Journal ArticleDOI
TL;DR: Simulation study shows that the proposed 'general' test outperformed the average distance method of Whittemore et al. in most of the cluster models considered.
Abstract: This paper proposes a class of tests applicable to the detection of two types of disease clustering: 'focused' and 'general' clustering. The former assesses the clustering of observed cases around a fixed point and the latter does not have any prior information on the centre of clustering. The proposed test for 'general' clustering is a generalization of the index for temporal clustering proposed by Tango in that it adjusts for differences in population densities and also in population distributions among categories of the confounders such as age and sex. A simulation study shows that the proposed 'general' test outperformed the average distance method of Whittemore et al. in most of the cluster models considered.

Posted Content
TL;DR: In this paper, the first step in the analysis, the computation of linguistic distance between each pair of sites, can be computed as Levenshtein distance between phonetic strings, which correlates closely with the much more laborious technique of determining and counting isoglosses and is more accurate than the more familiar metric of computing Hamming distance based on whether vocabulary entries match.
Abstract: Dialect groupings can be discovered objectively and automatically by cluster analysis of phonetic transcriptions such as those found in a linguistic atlas. The first step in the analysis, the computation of linguistic distance between each pair of sites, can be computed as Levenshtein distance between phonetic strings. This correlates closely with the much more laborious technique of determining and counting isoglosses, and is more accurate than the more familiar metric of computing Hamming distance based on whether vocabulary entries match. In the actual clustering step, traditional agglomerative clustering works better than the top-down technique of partitioning around medoids. When agglomerative clustering of phonetic string comparison distances is applied to Gaelic, reasonable dialect boundaries are obtained, corresponding to national and (within Ireland) provincial boundaries.
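The per-site-pair distance the abstract relies on is the classic dynamic-programming edit distance; a self-contained sketch is shown below (the matrix of these distances over all site pairs is then fed to an ordinary agglomerative clustering routine).

```python
def levenshtein(a, b):
    """Edit distance between two phonetic strings: the minimum number of
    insertions, deletions and substitutions needed to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + cost))  # substitution
        prev = cur
    return prev[-1]
```

For example, levenshtein("kat", "cart") is 2: one substitution and one insertion.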

Book ChapterDOI
03 Apr 1995
TL;DR: A unifying definition and a classification scheme for existing VB matching criteria and a new matching criterion: the entropy of the grey-level scatter-plot, which requires no segmentation or feature extraction and no a priori knowledge of photometric model parameters.
Abstract: In this paper, 3D voxel-similarity-based (VB) registration algorithms that optimize a feature-space clustering measure are proposed to combine the segmentation and registration process. We present a unifying definition and a classification scheme for existing VB matching criteria and propose a new matching criterion: the entropy of the grey-level scatter-plot. This criterion requires no segmentation or feature extraction and no a priori knowledge of photometric model parameters. The effects of practical implementation issues concerning grey-level resampling, scatter-plot binning, Parzen windowing and resampling frequencies are discussed in detail and evaluated using real world data (CT and MRI).
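The proposed criterion is essentially the Shannon entropy of the joint intensity histogram of the two superimposed images; a minimal sketch is given below, with the bin count and function name as illustrative assumptions (the resampling and Parzen-windowing issues the paper evaluates are not handled here).

```python
import numpy as np

def scatter_plot_entropy(img_a, img_b, bins=64):
    """Entropy of the 2-D grey-level scatter-plot (joint histogram) of two
    registered images; a tighter clustering of paired intensities, i.e. a
    lower entropy, indicates better alignment."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))
```

Registration then amounts to searching over geometric transformations of one image for the pose that minimizes this entropy.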

Journal ArticleDOI
TL;DR: It is shown that the accurate distribution of the energy emitted or received at the cluster level can produce even better results than isotropic clustering at a marginal cost.
Abstract: The paper presents a new radiosity algorithm that allows the simultaneous computation of energy exchanges between surface elements, scattering volume distributions, and groups of surfaces, or object clusters. The new technique is based on a hierarchical formulation of the zonal method, and efficiently integrates volumes and surfaces. In particular no initial linking stage is needed, even for inhomogeneous volumes, thanks to the construction of a global spatial hierarchy. An analogy between object clusters and scattering volumes results in a powerful clustering radiosity algorithm, with no initial linking between surfaces and fast computation of average visibility information through a cluster. We show that the accurate distribution of the energy emitted or received at the cluster level can produce even better results than isotropic clustering at a marginal cost. The resulting algorithm is fast and, more importantly, truly progressive as it allows the quick calculation of approximate solutions with a smooth convergence towards very accurate simulations.

Journal ArticleDOI
TL;DR: In this paper, the authors examined whether there is a substantial additional payoff to be derived from using mathematical optimization techniques to globally define a set of mini-clusters and presented a new approximate method to mini clustering that involves solving a multi-vehicle pick-up and delivery problem with time windows by column generation.
Abstract: This paper examines whether there is a substantial additional payoff to be derived from using mathematical optimization techniques to globally define a set of mini-clusters. Specifically, we present a new approximate method to mini-clustering that involves solving a multi-vehicle pick-up and delivery problem with time windows by column generation. To solve this problem we have enhanced an existing optimal algorithm in several ways. First, we present an original network design based on lists of neighboring transportation requests. Second, we have developed a specialized initialization procedure which reduces the processing time by nearly 40%. Third, the algorithm was easily generalized to multi-dimensional capacity. Finally, we have also developed a heuristic to reduce the size of the network, while incurring only small losses in solution quality. This allows the application of our approach to much larger problems. To be able to compare the results of optimization-based and local heuristic mini-clustering,...

Proceedings ArticleDOI
20 Mar 1995
TL;DR: This paper introduces a structure strength function as clustering criterion, which is valid for any membership assignments, thereby being capable of determining the plausible number of clusters according to the authors' subjective requisition.
Abstract: In this paper, we propose a new approach to fuzzy clustering by means of a maximum-entropy inference (MEI) method. The resulting formulas have a better form and clearer physical meaning than those obtained by means of the fuzzy c-means (FCM) method. In order to solve the cluster validity problem, we introduce a structure strength function as clustering criterion, which is valid for any membership assignments, thereby being capable of determining the plausible number of clusters according to our subjective requisition. With the proposed structure strength function, we also discuss global minimum problem in terms of simulated annealing. Finally, we simulate a numerical example to demonstrate the approach discussed, and compare our results with those obtained by the traditional approaches. >