Author

P. Prabhavathy

Bio: P. Prabhavathy is an academic researcher from VIT University. The author has contributed to research in topics: Spatial analysis & Computer science. The author has an h-index of 3 and has co-authored 12 publications receiving 37 citations.

Papers
Book ChapterDOI
01 Jan 2013
TL;DR: The proposed idea is to build an Early Warning Landslide Susceptibility Model (EWLSM) to predict the possibility of landslides in the Nilgiris district of Tamil Nadu; the comparison shows that the Bayesian classifier is more accurate than the SVM classifier in landslide analysis.
Abstract: Landslides cause huge damage to human life, infrastructure, and agricultural lands. Landslide susceptibility assessment is required for disaster management and for planning development activities in mountainous regions. The extent of the damage could be reduced or minimized if a long-term early warning system predicting landslide-prone areas were in place; such a system is needed to predict the occurrence of landslides in advance. Landslides are triggered by many factors, such as rainfall, land use, soil type, and slope. The proposed idea is to build an Early Warning Landslide Susceptibility Model (EWLSM) to predict the possibility of landslides in the Nilgiris district of Tamil Nadu. The early-warning landslide susceptibility model is built through a data-mining classification approach using the important factors that trigger a landslide. In this study, we also compare the classifiers and show that the Bayesian classifier is more accurate than the SVM classifier in landslide analysis.
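The Bayesian-vs-SVM comparison above can be sketched with scikit-learn. The data below is a synthetic stand-in (the paper's Nilgiris dataset and exact feature engineering are not public here), so the factor columns and label rule are illustrative assumptions only:

```python
# Hedged sketch: compare a Bayesian (naive Bayes) classifier against an SVM
# on synthetic stand-ins for the triggering factors named in the abstract.
# The feature construction and label rule below are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.normal(120, 40, n),   # rainfall (mm), hypothetical
    rng.uniform(0, 60, n),    # slope (degrees), hypothetical
    rng.integers(0, 4, n),    # soil type, label-encoded
    rng.integers(0, 3, n),    # land use, label-encoded
])
y = ((X[:, 0] > 120) & (X[:, 1] > 30)).astype(int)  # toy "landslide" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scores = {}
for name, clf in [("Bayes", GaussianNB()), ("SVM", SVC())]:
    clf.fit(X_tr, y_tr)
    scores[name] = accuracy_score(y_te, clf.predict(X_te))
print(scores)
```

Which classifier wins depends entirely on the data; the paper reports the Bayesian classifier ahead on its real inventory.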

19 citations

Proceedings ArticleDOI
04 Jul 2019
TL;DR: The focus of this paper is to propose graph-based unsupervised machine learning methods for edge-anomaly and node-anomaly detection in social network data.
Abstract: In the last decade, online social network analysis has become an interesting area of research. By studying and analyzing user activity, user interaction patterns can be identified and anomalies within a user community captured. Detecting such users can help identify malicious individuals such as automated bots, fake accounts, spammers, sexual predators, and fraudsters. Anomaly detection (finding outliers, deviant patterns, exceptions, abnormal data points, and malicious users) is an important task in social network analysis. The major hurdle in social network anomaly detection is identifying irregular patterns in data that behave significantly differently from regular patterns. The focus of this paper is to propose graph-based unsupervised machine learning methods for edge-anomaly and node-anomaly detection in social network data.
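The abstract does not spell out the paper's concrete detector, so the sketch below shows only a generic unsupervised baseline for node anomalies: flag nodes whose degree is a statistical outlier (z-score above 3) in a toy social graph with one planted hub-like node:

```python
# Generic node-anomaly baseline (an assumption, not the paper's exact method):
# build a sparse random "social" graph, plant one hub-like node 0 (e.g. a bot
# that friends almost everyone), and flag nodes whose degree z-score exceeds 3.
import random
import statistics

random.seed(1)
nodes = list(range(50))
edges = set()
for _ in range(80):                      # sparse background friendships
    u, v = random.sample(nodes, 2)
    edges.add((min(u, v), max(u, v)))
for v in range(1, 40):                   # node 0 behaves like a bot/hub
    edges.add((0, v))

degree = {n: 0 for n in nodes}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

mu = statistics.mean(degree.values())
sigma = statistics.pstdev(degree.values())
anomalies = [n for n in nodes if (degree[n] - mu) / sigma > 3]
print(anomalies)  # the planted hub stands out
```

Real systems replace the degree score with richer graph features (ego-net density, community membership), but the outlier-scoring shape stays the same.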

7 citations

Journal ArticleDOI
TL;DR: A new distance-based approach is developed to mine co-location patterns from spatial data using the concept of proximity neighborhoods, and a new interest measure, the participation index, is used for spatial co-location patterns as it possesses an anti-monotone property.
Abstract: Spatial co-location patterns are subsets of Boolean spatial features whose instances are often located in close geographic proximity. Co-location rules can be identified by spatial statistics or by data mining approaches. Among data mining methods, association-rule-based approaches can be used, which are further divided into transaction-based approaches and distance-based approaches. Transaction-based approaches focus on defining transactions over space so that an Apriori algorithm can be used; however, the natural notion of a transaction is absent in spatial data sets, which are embedded in continuous geographic space. A new distance-based approach is developed to mine co-location patterns from spatial data using the concept of proximity neighborhoods. A new interest measure, the participation index, is used for spatial co-location patterns as it possesses an anti-monotone property. An algorithm to discover co-location patterns is designed, which generates candidate co-locations and their table instances. Finally, the co-location rules are generated to identify the patterns.
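The participation index can be illustrated on a tiny made-up instance set for a two-feature co-location {A, B}: a table instance is any A-B pair within the distance threshold, each feature's participation ratio is the fraction of its instances appearing in that table, and the index is the minimum ratio (which is what makes it anti-monotone):

```python
# Illustrative sketch with made-up points; the coordinates and threshold are
# assumptions, but the participation-index computation follows the definition.
from itertools import product
from math import dist

points = {
    "A": [(0, 0), (1, 0), (10, 10)],  # instances of feature A
    "B": [(0, 1)],                    # instances of feature B
}
d = 1.5  # neighbourhood distance threshold

# Table instances of candidate co-location {A, B}: proximate A-B pairs
table = [(i, j)
         for (i, a), (j, b) in product(enumerate(points["A"]),
                                       enumerate(points["B"]))
         if dist(a, b) <= d]

# Participation ratio of each feature, then the (anti-monotone) index
pr_A = len({i for i, _ in table}) / len(points["A"])
pr_B = len({j for _, j in table}) / len(points["B"])
pi = min(pr_A, pr_B)
print(table, pr_A, pr_B, pi)  # two of the three A instances participate
```

Here pr_A = 2/3 (the instance at (10, 10) has no nearby B), pr_B = 1, so the participation index of {A, B} is 2/3.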

7 citations

Journal Article
TL;DR: A covering-based rough fuzzy set clustering approach is proposed to resolve the uncertainty of sequence data; it uses a covering-based similarity measure, which gives better results than a rough set approach using set and sequence similarity measures.
Abstract: Clustering is categorised as hard or soft in nature. Soft clusters may have fuzzy or rough boundaries. Rough clustering can help researchers discover overlapping clusters in applications such as web mining and text mining. The rough set approach is a very useful tool for handling unclear and ambiguous data. However, because rough sets rely on the equivalence relation property, they remain rigid, which is unreliable and inefficient for real-time applications where datasets may be very large. In this paper, we provide a solution to this problem with a covering-based rough set approach. A covering-based rough set is an extension of the rough set approach in which the equivalence relation has been relaxed: the method is based on coverings rather than partitions. This makes it more flexible than rough sets and more convenient for dealing with complex applications. Clustering sequential data is one of the vital research tasks. We use a covering-based similarity measure, which gives better results than a rough set approach using set and sequence similarity measures. In this paper, a covering-based rough fuzzy set clustering approach is proposed to resolve the uncertainty of sequence data.
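A covering differs from a partition in that its blocks may overlap. The sketch below (with made-up blocks) shows how a target set is bracketed by covering-based lower and upper approximations; the gap between them carries the uncertainty the abstract refers to:

```python
# Minimal sketch with hypothetical blocks: a covering's blocks may overlap,
# unlike the equivalence classes of a classical rough-set partition.
universe = set(range(8))
covering = [{0, 1, 2}, {2, 3, 4}, {4, 5}, {5, 6, 7}]
assert set().union(*covering) == universe  # blocks jointly cover the universe

X = {1, 2, 3, 4}  # target set to approximate

# Lower approximation: union of blocks fully contained in X
lower = set().union(*(b for b in covering if b <= X))
# Upper approximation: union of blocks that intersect X
upper = set().union(*(b for b in covering if b & X))
print(lower, upper)  # the boundary region upper - lower is the uncertain part
```

Elements in the lower approximation certainly belong to X; elements outside the upper approximation certainly do not; the boundary in between is where rough (and rough fuzzy) clustering assigns tentative membership.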

3 citations


Cited by
Journal ArticleDOI
TL;DR: Analysis and comparison of the results show that all five landslide models performed well for landslide susceptibility assessment, but it has been observed that the SVM model has the best performance in comparison to other landslide models.
Abstract: Landslide susceptibility assessment of the Uttarakhand area of India has been done by applying five machine learning methods, namely Support Vector Machines (SVM), Logistic Regression (LR), Fisher's Linear Discriminant Analysis (FLDA), Bayesian Network (BN), and Naive Bayes (NB). Performance of these methods has been evaluated using the ROC curve and statistical index based methods. Analysis and comparison of the results show that all five landslide models performed well for landslide susceptibility assessment (AUC = 0.910-0.950). However, it has been observed that the SVM model (AUC = 0.950) has the best performance in comparison to the other landslide models, followed by the LR model (AUC = 0.922), the FLDA model (AUC = 0.921), the BN model (AUC = 0.915), and the NB model (AUC = 0.910). Machine learning methods, namely SVM, LR, FLDA, BN, and NB, have been evaluated and compared for landslide susceptibility assessment. Results indicate that all five models can be applied efficiently for landslide assessment and prediction. Analysis of the comparative results reaffirmed that the SVM model is one of the best methods.
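The AUC-based ranking can be reproduced in outline with scikit-learn on synthetic stand-in data. The Uttarakhand inventory is not available here, and scikit-learn ships no Bayesian Network estimator, so only four of the five models appear and the scores will not match the paper's:

```python
# Sketch of the AUC comparison on synthetic data; the values produced are
# illustrative only, not the paper's results from the real inventory.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVC(probability=True, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "FLDA": LinearDiscriminantAnalysis(),
    "NB": GaussianNB(),
}
auc = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # ROC AUC needs a score, not a hard label, hence predict_proba
    auc[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(sorted(auc.items(), key=lambda kv: -kv[1]))  # best model first
```

Ranking by AUC rather than accuracy is what the paper does, since AUC is threshold-independent and robust to class imbalance in susceptibility mapping.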

363 citations

Journal ArticleDOI
Yu Huang1, Lu Zhao1
01 Jun 2018-Catena
TL;DR: A review of landslide susceptibility mapping using SVM, a machine learning algorithm that can predict from a small number of samples and has been widely used in recent years, including its strengths and weaknesses.
Abstract: Landslides are natural phenomena that can cause great loss of life and damage to property. A landslide susceptibility map is a useful tool to help with land management in landslide-prone areas. A support vector machine (SVM) is a machine learning algorithm that uses a small number of samples for prediction and has been widely used in recent years. This paper presents a review of landslide susceptibility mapping using SVM. It presents the basic concept of SVM and its application in landslide susceptibility assessment and mapping. Then it compares the SVM method with four other methods (analytic hierarchy process, logistic regression, artificial neural networks and random forests) used in landslide susceptibility mapping. The application of SVM in landslide susceptibility assessment and mapping is discussed and suggestions for future research are presented. Compared with some of the methods commonly used in landslide susceptibility assessment and mapping, SVM has its strengths and weaknesses owing to its unique theoretical basis. The combination of SVM and other techniques may yield better performance in landslide susceptibility assessment and mapping. A high-quality informative database is essential and classification of landslide types prior to landslide susceptibility assessment is important to help improve model performance.

328 citations

Journal ArticleDOI
TL;DR: This study examines big data in DM to present main contributions, gaps, challenges and future research agenda, and shows a classification of publications, an analysis of the trends and the impact of published research in the DM context.
Abstract: The era of big data and analytics is opening up new possibilities for disaster management (DM). Due to its ability to visualize, analyze and predict disasters, big data is changing humanitarian operations and crisis management dramatically. Yet the relevant literature is diverse and fragmented, which calls for a review in order to ascertain its development. A number of publications have dealt with the subject of big data and its applications for minimizing disasters. Based on a systematic literature review, this study examines big data in DM to present main contributions, gaps, challenges and a future research agenda. The study presents the findings in terms of yearly distribution, main journals, and most cited papers. The findings also show a classification of publications, an analysis of the trends and the impact of published research in the DM context. Overall, the study contributes to a better understanding of the importance of big data in disaster management.

211 citations

Journal ArticleDOI
TL;DR: An extensive and in-depth literature study on current techniques for disaster prediction, detection and management has been done and the results are summarized according to various types of disasters.

120 citations

Journal ArticleDOI
TL;DR: In this paper, an integrated methodology based on a chi-squared automatic interaction detection (CHAID) model combined with the analytic hierarchy process (AHP) for pair-wise comparison was used to assess medium-scale landslide susceptibility in a catchment in the Inje region of South Korea.
Abstract: This article uses an integrated methodology based on a chi-squared automatic interaction detection (CHAID) model combined with the analytic hierarchy process (AHP) for pair-wise comparison to assess medium-scale landslide susceptibility in a catchment in the Inje region of South Korea. An inventory of 3596 landslide locations was collected using remote sensing, and a random sample comprising 30% of these was used to validate the model. The remaining portion (70%) was processed by the nearest-neighbour index (NNI) technique and used for extracting the cluster patterns at each location. These data were used for model training purposes. Ten landslide-conditioning factors (independent variables) representing four main domains, namely (1) topology, (2) geology, (3) hydrology, and (4) land cover, were used to produce two landslide-susceptibility maps. The first landslide-susceptibility map (LSM1) was produced by overlaying the terminal nodes of the CHAID result tree. The second landslide-susceptibility map (LSM2) was produced using the overlay result of AHP pair-wise comparisons of the CHAID terminal nodes. The prediction-rate curve results were better with LSM2 (area under the prediction curve, AUC = 0.80) than with LSM1 (AUC = 0.76). The results confirmed that the integrated hybrid model has superior prediction performance and reliability, and it is recommended for future use in medium-scale landslide-susceptibility mapping.
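The AHP pair-wise comparison step can be sketched in a few lines: priorities come from the principal eigenvector of a reciprocal comparison matrix (Saaty's method). The 3x3 matrix below is a hypothetical example, not the paper's actual factor judgements:

```python
# Saaty-style AHP priorities from a hypothetical pair-wise comparison matrix;
# A[i][j] states how much more important factor i is than factor j (1-9 scale),
# with A[j][i] = 1 / A[i][j] by construction.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.real))                 # principal eigenvalue index
w = np.abs(vecs[:, k].real)
w = w / w.sum()                               # normalised priority weights
ci = (vals.real[k] - len(A)) / (len(A) - 1)   # consistency index
cr = ci / 0.58                                # consistency ratio (RI = 0.58, n = 3)
print(np.round(w, 3), round(cr, 3))           # judgements usable if CR < 0.1
```

In the paper these priorities weight the CHAID terminal nodes before the overlay that produces LSM2.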

90 citations