Topic

Dunn index

About: Dunn index is a research topic. Over its lifetime, 150 publications have been published within this topic, receiving 24,021 citations.
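For reference, the standard definition: for a partition of the data into clusters C_1, ..., C_k, the Dunn index is the ratio of the smallest between-cluster separation to the largest within-cluster diameter, so higher values indicate compact, well-separated clusters. Variants (generalized Dunn indices) swap in other separation and diameter measures.

```latex
DI = \frac{\displaystyle \min_{1 \le i < j \le k} \delta(C_i, C_j)}
          {\displaystyle \max_{1 \le m \le k} \Delta(C_m)},
\qquad
\delta(C_i, C_j) = \min_{x \in C_i,\; y \in C_j} d(x, y),
\qquad
\Delta(C_m) = \max_{x, y \in C_m} d(x, y)
```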


Papers
Journal ArticleDOI
01 Jul 2020
TL;DR: A formalization of the medical data preprocessing stage is proposed to find personalized solutions based on current standards and pharmaceutical protocols, and to determine deviations of patient parameters from the group's normative and average parameters.
Abstract: The study was conducted by applying machine learning and data mining methods to treatment personalization, which allows individual patient characteristics to be investigated. The personalization method was built on clustering and association rules. Determining the average distance between instances was suggested in order to find the optimal performance metrics. A formalization of the medical data preprocessing stage was proposed in order to find personalized solutions based on current standards and pharmaceutical protocols. The patient data model was built using time-dependent and time-independent parameters. Personalized treatment is usually based on the decision tree method, an approach that requires significant computation time and cannot be parallelized. It was therefore proposed to group patients by condition and to determine deviations of parameters from the group's normative and average parameters. The novelty of the paper is a new clustering method built from an ensemble of cluster algorithms, and the use of a new distance measure whose Hopkins statistic was 0.13 lower than for the k-means method. The Dunn index was 0.03 higher than for the BIRCH (balanced iterative reducing and clustering using hierarchies) algorithm. The next stage was mining association rules separately for each cluster, which allows a personalized approach to treatment to be created for each patient based on long-term monitoring. The correctness of the proposed medical decisions, as assessed by experts, is 86%.

8 citations
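To make the Dunn index figures reported in these papers concrete, here is a minimal Python sketch of the index under the standard definition (Euclidean distance, single-linkage separation, complete diameter). The synthetic data and the use of scikit-learn's BIRCH as a stand-in baseline are assumptions, not the paper's setup:

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import Birch
from sklearn.datasets import make_blobs

def dunn_index(X, labels):
    """Smallest inter-cluster distance divided by largest intra-cluster diameter."""
    clusters = [X[labels == c] for c in np.unique(labels)]
    # Largest within-cluster diameter (complete diameter).
    max_diam = max(cdist(c, c).max() for c in clusters)
    # Smallest distance between points of different clusters (single linkage).
    min_sep = min(cdist(ci, cj).min()
                  for i, ci in enumerate(clusters)
                  for cj in clusters[i + 1:])
    return min_sep / max_diam

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # synthetic stand-in data
print(dunn_index(X, Birch(n_clusters=3).fit_predict(X)))
```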

Journal ArticleDOI
TL;DR: This work adopts the sum of squared error (SSE) approach and the Dunn index to measure cluster quality, and performs experiments on real-world crime data to identify spatiotemporal crime clusters.
Abstract: Various sources generate large volumes of spatiotemporal data of different types, including crime events. Analyzing these data is important for detecting crime spots and predicting future events. Crime events are spatiotemporal in nature; therefore, a distance function is defined for spatiotemporal events and used in the Fuzzy C-Means algorithm for crime analysis. This distance function accounts for both the spatial and temporal components of spatiotemporal data. We adopt the sum of squared error (SSE) approach and the Dunn index to measure the quality of clusters. We also perform experiments on real-world crime data to identify spatiotemporal crime clusters.

8 citations
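The paper's key ingredient is a distance function covering both components of a spatiotemporal event. Below is a minimal sketch of one plausible form, a weighted sum of spatial and temporal gaps; the weighting scheme and the (x, y, t) event layout are illustrative assumptions, not the authors' definition:

```python
import numpy as np

def spatiotemporal_distance(e1, e2, w_space=1.0, w_time=1.0):
    """Combined distance between two (x, y, t) crime events."""
    spatial = np.hypot(e1[0] - e2[0], e1[1] - e2[1])  # Euclidean distance in space
    temporal = abs(e1[2] - e2[2])                     # absolute gap in time units
    return w_space * spatial + w_time * temporal
```

In a Fuzzy C-Means run, such a function would replace the default Euclidean metric when computing memberships; the weights control how much temporal proximity counts relative to spatial proximity.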

Journal ArticleDOI
TL;DR: This research showed that the outliers found by DBSCAN and K-Means in cluster 1 are 100% identical.
Abstract: The aim of this study is to discover outliers in customer data in order to understand customer behaviour. Customer behaviour is determined with RFM (Recency, Frequency, and Monetary) models, using the K-Means and DBSCAN algorithms to cluster the customer data. The study involves six steps. The first step is determining the best number of clusters with the Dunn index (DN) validation method for each algorithm. Based on the Dunn index, the best number of clusters was 2, with a DN value of 1.19 for DBSCAN (obtained with an epsilon of 0.2 and minPts of 3) and 1.31 for K-Means. The next step was to cluster the dataset with the DBSCAN and K-Means algorithms using the best cluster count of 2. The DBSCAN algorithm produced 37 outliers and the K-Means algorithm produced 63 outliers (26 in cluster 1 and 37 in cluster 2). This research showed that the outliers found by DBSCAN and K-Means in cluster 1 are 100% identical, while the overall outlier similarity is 67%. The outliers indicate customers whose spending frequency is low but whose recency and monetary values are high.

8 citations
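A minimal scikit-learn sketch of the workflow described above: scale RFM features, cluster with K-Means and DBSCAN, and read DBSCAN's noise label (-1) as the outlier set. The random stand-in data and the scaling step are assumptions; epsilon = 0.2 and minPts = 3 follow the values reported in the abstract:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, DBSCAN

rfm = np.random.rand(500, 3)          # stand-in for (recency, frequency, monetary) columns
X = StandardScaler().fit_transform(rfm)

km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
db_labels = DBSCAN(eps=0.2, min_samples=3).fit_predict(X)

db_outliers = np.where(db_labels == -1)[0]  # DBSCAN marks noise points with label -1
print(f"DBSCAN flagged {len(db_outliers)} outliers")
```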

Book ChapterDOI
01 Jan 2020
TL;DR: In this article, the impact of applying dimensionality reduction during the data transformation phase of the clustering process is investigated for three of the most common clustering algorithms: k-means, clustering large applications (CLARA), and agglomerative hierarchical clustering (AGNES).
Abstract: With the huge volume of data available as input, modern-day statistical analysis leverages clustering techniques to limit the volume of data to be processed. These input data are mainly sourced from social media channels and typically have high dimensionality due to the diverse features they represent. This is normally referred to as the curse of dimensionality, as it makes the clustering process highly computationally intensive and less efficient. Dimensionality reduction techniques have been proposed as a solution to this issue. This paper covers an empirical analysis of the impact of applying dimensionality reduction during the data transformation phase of the clustering process. We measured the impact in terms of clustering quality and clustering performance for three of the most common clustering algorithms: k-means, clustering large applications (CLARA), and agglomerative hierarchical clustering (AGNES). Clustering quality is compared using four internal evaluation criteria, namely the Silhouette index, Dunn index, Calinski-Harabasz index, and Davies-Bouldin index, and average execution time is used as the measure of clustering performance.

8 citations
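A sketch of one step of this experiment in Python, assuming PCA as the dimensionality reduction and k-means as the clusterer; scikit-learn ships three of the four internal indices used, while the Dunn index (like CLARA and AGNES, which are commonly R routines) would need a separate implementation such as the one sketched earlier:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score,
                             calinski_harabasz_score,
                             davies_bouldin_score)

X = np.random.rand(1000, 50)                  # stand-in for high-dimensional input data
X_red = PCA(n_components=5).fit_transform(X)  # dimensionality reduction in the transform phase

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_red)
print("silhouette:", silhouette_score(X_red, labels))
print("calinski-harabasz:", calinski_harabasz_score(X_red, labels))
print("davies-bouldin:", davies_bouldin_score(X_red, labels))
```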

Journal ArticleDOI
03 Mar 2020
TL;DR: This study provides practical evaluation frameworks for assessing clustering results on gene expression cancer datasets and determines that, among the four clustering algorithms compared, PAM is best for the Affymetrix datasets and DIANA is best for the cDNA datasets.
Abstract: Clustering plays a particularly fundamental role in exploring data, creating predictions, and detecting anomalies in the data. Clusters that contain parallel, identical characteristics in a dataset are grouped using iterative algorithms. As real-world data grows day by day, the challenge of perceiving and interpreting the resulting mass of data, which often consists of millions of measurements, is compounded by the intricacy of the huge number of genes in biological networks. To address this challenge, we use clustering algorithms. In this study, we provide a comparative study of the four most popular clustering algorithms, K-Means, PAM, Agglomerative Hierarchical, and DIANA, evaluated on eight real cancer gene expression datasets (four Affymetrix and four cDNA) and a simulated dataset. The comparison is based upon seven popular cluster validity indices: Average Silhouette Index, Corrected Rand Index, Variation of Information, Dunn Index, Calinski-Harabasz Index, Separation Index, and Pearson Gamma. We determine that, among these four clustering algorithms, PAM is best for the Affymetrix datasets and DIANA is best for the cDNA datasets. This study provides practical evaluation frameworks for assessing clustering results on gene expression cancer datasets.

7 citations
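A sketch of the comparison protocol on simulated data, using the two of the four algorithms available in scikit-learn and two of the seven indices (the Corrected Rand Index corresponds to scikit-learn's adjusted_rand_score); PAM and DIANA are typically run from R's cluster package and are omitted here:

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score, silhouette_score

# Simulated data with known ground-truth labels, standing in for gene expression sets.
X, truth = make_blobs(n_samples=400, centers=4, n_features=20, random_state=1)

for name, algo in [("K-Means", KMeans(n_clusters=4, n_init=10, random_state=1)),
                   ("Agglomerative", AgglomerativeClustering(n_clusters=4))]:
    labels = algo.fit_predict(X)
    print(name,
          "ARI:", round(adjusted_rand_score(truth, labels), 3),
          "silhouette:", round(silhouette_score(X, labels), 3))
```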


Network Information
Related Topics (5)
Feature selection: 41.4K papers, 1M citations (70% related)
Support vector machine: 73.6K papers, 1.7M citations (69% related)
Genetic algorithm: 67.5K papers, 1.2M citations (68% related)
Cluster analysis: 146.5K papers, 2.9M citations (68% related)
Web service: 57.6K papers, 989K citations (66% related)
Performance Metrics
No. of papers in the topic in previous years:
Year: Papers
2021: 20
2020: 28
2019: 17
2018: 13
2017: 10
2016: 11