Institution

Sir Padampat Singhania University

Education · Udaipur, India
About: Sir Padampat Singhania University is an education organization based in Udaipur, India. It is known for its research contributions in the topics of Diesel fuel and Encryption. The organization has 124 authors who have published 228 publications receiving 2,066 citations. The organization is also known as SPSU.


Papers
Proceedings ArticleDOI
01 Dec 2018
TL;DR: BBMO is found to yield the minimum operating cost for the forty-generating-unit system, and its convergence rate is also very fast compared with the other methods considered.
Abstract: A new nature-inspired algorithm that simulates the mating behavior of bumble bees, the Bumble Bees Mating Optimization (BBMO) algorithm, is proposed in this work for optimization of economic load dispatch. Economic dispatch is a method of scheduling the generating units to meet the load demand at minimum fuel cost. The proposed BBMO works in three different modes, namely the queen, the workers, and the drones (males). To evaluate its performance, this study considers a case study of forty-generating-unit data. The case study data are also tested with other algorithms, namely ant colony optimization, particle swarm optimization, and a genetic algorithm, alongside BBMO. The performance of all the algorithms considered in this work is compared, and it is found that BBMO obtains the minimum operating cost of the forty-generating-unit system. The convergence rate of BBMO is also very fast compared with the other methods considered.
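The economic dispatch objective that all of the compared algorithms minimize can be summarized as a total fuel cost subject to a power-balance constraint and unit limits. The sketch below is a minimal, illustrative formulation with made-up quadratic cost coefficients for three hypothetical units; it is not the forty-unit case study data or the BBMO implementation from the paper.

# A minimal sketch of the economic load dispatch objective that algorithms
# such as BBMO, PSO, ACO, and GA minimize. The quadratic cost coefficients
# and the three-unit data below are illustrative placeholders, not the
# forty-unit case study used in the paper.
from typing import List, Tuple

# (a, b, c) coefficients of the quadratic fuel-cost curve a*P^2 + b*P + c,
# plus (P_min, P_max) operating limits for each hypothetical unit.
UNITS: List[Tuple[float, float, float, float, float]] = [
    (0.008, 7.0, 200.0, 10.0, 85.0),
    (0.009, 6.3, 180.0, 10.0, 80.0),
    (0.007, 6.8, 140.0, 10.0, 70.0),
]
LOAD_DEMAND = 150.0  # MW, illustrative


def dispatch_cost(outputs: List[float], penalty: float = 1e3) -> float:
    """Total fuel cost of a candidate dispatch, with penalties for
    violating the power-balance constraint and unit limits."""
    cost = 0.0
    for (a, b, c, p_min, p_max), p in zip(UNITS, outputs):
        cost += a * p * p + b * p + c
        # Penalize generation outside the unit's operating range.
        cost += penalty * (max(0.0, p_min - p) + max(0.0, p - p_max))
    # Penalize any mismatch between total generation and load demand.
    cost += penalty * abs(sum(outputs) - LOAD_DEMAND)
    return cost


if __name__ == "__main__":
    candidate = [60.0, 55.0, 35.0]  # one candidate dispatch (sums to 150 MW)
    print(f"Cost of candidate dispatch: {dispatch_cost(candidate):.2f}")

Any population-based metaheuristic, BBMO included, would evaluate candidate dispatches with a cost function of this general shape and keep the cheapest feasible one.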
Journal ArticleDOI
TL;DR: In this article, the performance of kNN based on the Canberra distance metric is measured on different datasets; the proposed Modified Euclidean-Canberra Blend Distance (MECBD) metric is then applied to the kNN algorithm, improving class-prediction efficiency on the same datasets as measured by accuracy, precision, recall, and F1-score for different values of k.
Abstract: In today’s world, many datasets are available to which regression or classification algorithms of machine learning are applied. One such classification algorithm is k-nearest neighbor (kNN), which computes distances among the rows of a dataset. The performance of kNN depends on the value of k, the number of neighboring elements considered, and on the distance metric used. Many distance metrics have been used in the literature, one of them being the Canberra distance. In this paper, the performance of kNN based on the Canberra distance metric is measured on different datasets; the proposed Modified Euclidean-Canberra Blend Distance (MECBD) metric is then applied to the kNN algorithm, which improves class-prediction efficiency on the same datasets as measured by accuracy, precision, recall, and F1-score for different values of k. Use of the MECBD metric improves accuracy from 80.4% to 90.3%, from 80.6% to 85.4%, and from 70.0% to 77.0% on the datasets used. ROC curves and AUC values for k = 5 also show the improvement in the kNN model’s predictions: the AUC increases from 0.873 to 0.958 for the Spine (2 classes) dataset; from 0.857 to 0.940, 0.983 to 0.983 (no change), and 0.910 to 0.957 for the DH, SL, and NO classes of the Spine (3 classes) dataset; and from 0.651 to 0.742 for Haberman’s dataset.
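The sketch below shows how a Canberra-based metric, and a simple Euclidean-Canberra blend, can be plugged into a kNN classifier as a custom distance callable. The blend is an illustrative equal-weight combination (the abstract does not give the exact MECBD formula), and the dataset is a stand-in rather than the Spine or Haberman’s data used in the paper.

# A minimal sketch of plugging the Canberra distance, and a simple
# Euclidean-Canberra blend, into a kNN classifier. The blend shown here is
# an illustrative combination; the paper's exact MECBD formula is not
# reproduced from the abstract.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


def canberra(x: np.ndarray, y: np.ndarray) -> float:
    """Standard Canberra distance; terms with a zero denominator contribute 0."""
    num = np.abs(x - y)
    den = np.abs(x) + np.abs(y)
    return float(np.sum(np.divide(num, den, out=np.zeros_like(num), where=den > 0)))


def euclid_canberra_blend(x: np.ndarray, y: np.ndarray, alpha: float = 0.5) -> float:
    """Hypothetical blend: alpha * Euclidean + (1 - alpha) * Canberra."""
    return alpha * float(np.linalg.norm(x - y)) + (1 - alpha) * canberra(x, y)


if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for name, metric in [("canberra", canberra), ("blend", euclid_canberra_blend)]:
        # Callable metrics require the brute-force neighbor search.
        knn = KNeighborsClassifier(n_neighbors=5, metric=metric, algorithm="brute")
        knn.fit(X_tr, y_tr)
        print(name, f"accuracy = {knn.score(X_te, y_te):.3f}")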
Proceedings ArticleDOI
14 Nov 2014
TL;DR: A static index pruning method for phrase queries, based on the cohesive similarity between terms, creates an effective pruned index and can also be applied to a standard inverted index for phrase queries.
Abstract: This paper proposes a static index pruning method for phrase queries based on the cohesive similarity between terms. The co-occurrence between terms is modeled by the terms' cohesiveness within a document. The less relevant terms are pruned away while ensuring that the top-k results do not change. The proposed method creates an effective pruned index. The method also considers term proximity, based on term frequency and the terms' informativeness. Experiments were conducted on a number of standard text collections, and analysis of the results shows promising performance, comparable with existing static pruning methods. An advantage of the proposed approach is that it can also be applied to a standard inverted index for phrase queries.
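As a rough illustration of static index pruning in general, the toy sketch below drops posting lists for terms whose cohesion score falls under a threshold. The cohesion measure here is a simple document-co-occurrence (Jaccard) proxy invented for the example; it is not the paper's cohesive-similarity and term-proximity model, and it does not guarantee unchanged top-k results.

# A toy illustration of static index pruning: postings for terms whose
# cohesion score falls below a threshold are dropped from the inverted
# index. The cohesion score is a simple document-co-occurrence proxy,
# not the measure defined in the paper.
from collections import defaultdict
from typing import Dict, Set

DOCS: Dict[int, str] = {
    1: "static index pruning for phrase queries",
    2: "phrase queries over an inverted index",
    3: "pruning keeps top results unchanged",
}


def build_index(docs: Dict[int, str]) -> Dict[str, Set[int]]:
    """Build a term -> set-of-document-ids inverted index."""
    index: Dict[str, Set[int]] = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.split():
            index[term].add(doc_id)
    return index


def cohesion(term: str, index: Dict[str, Set[int]]) -> float:
    """Average Jaccard overlap between this term's postings and the others'."""
    postings = index[term]
    others = [t for t in index if t != term]
    scores = [len(postings & index[t]) / len(postings | index[t]) for t in others]
    return sum(scores) / len(scores) if scores else 0.0


def prune(index: Dict[str, Set[int]], threshold: float) -> Dict[str, Set[int]]:
    """Keep only terms whose cohesion score meets the threshold."""
    return {t: p for t, p in index.items() if cohesion(t, index) >= threshold}


if __name__ == "__main__":
    idx = build_index(DOCS)
    pruned = prune(idx, threshold=0.15)
    print("kept terms:", sorted(pruned))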

Authors

Showing all 136 results

Network Information
Related Institutions (5)
Amrita Vishwa Vidyapeetham
11K papers, 76.1K citations

83% related

Thapar University
8.5K papers, 130.3K citations

83% related

National Institute of Technology, Rourkela
10.7K papers, 150.1K citations

81% related

National Institute of Technology, Durgapur
5.7K papers, 63.4K citations

81% related

VIT University
24.4K papers, 261.8K citations

81% related

Performance
Metrics
No. of papers from the Institution in previous years
Year    Papers
2023    9
2022    10
2021    34
2020    37
2019    34
2018    18