Institution
Sir Padampat Singhania University
Education • Udaipur, India
About: Sir Padampat Singhania University is an education organization based in Udaipur, India. It is known for its research contributions in the topics of Diesel fuel and Encryption. The organization has 124 authors who have published 228 publications receiving 2066 citations. The organization is also known as SPSU.
Topics: Diesel fuel, Encryption, Ionization, The Internet, Computer science
Papers
01 Dec 2018 • TL;DR: It is found that the minimum operating cost of the forty-generating-unit system is achieved by BBMO, and the convergence rate of BBMO is also very fast compared to the other considered methods.
Abstract: A new nature-inspired algorithm that simulates the mating behavior of bumble bees, the Bumble Bees Mating Optimization (BBMO) algorithm, is proposed in this work for optimization of economic load dispatch. Economic dispatch is a method to schedule the generating units to fulfill the load demand at minimum fuel cost. The proposed bumble bees mating optimization (BBMO) works in three different modes, namely the queen, the workers, and the drones (males). For the evaluation of performance, this study considers a case study of forty-generating-unit data. The case study data is tested on various algorithms, including ant colony optimization, particle swarm optimization, and the genetic algorithm, along with BBMO. The performance of all algorithms considered in this work is compared, and it is found that the minimum operating cost of the forty-generating-unit system is achieved by BBMO. The convergence rate of BBMO is also very fast compared to the other considered methods.
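The economic dispatch objective that such metaheuristics minimize can be sketched minimally as follows. The quadratic fuel-cost curve is a common textbook model; the coefficients and the two-unit setup below are hypothetical, not the paper's forty-unit test data.

```python
def dispatch_cost(powers, coeffs):
    """Total fuel cost for a candidate dispatch.

    Each unit i has a quadratic cost curve a + b*P + c*P^2 (a standard model).
    """
    return sum(a + b * p + c * p * p for p, (a, b, c) in zip(powers, coeffs))

def demand_violation(powers, demand):
    """Power-balance constraint: total generation must meet the load demand."""
    return abs(sum(powers) - demand)

# Two-unit toy example with made-up coefficients.
coeffs = [(100.0, 2.0, 0.01), (120.0, 1.8, 0.02)]
powers = [60.0, 40.0]
print(dispatch_cost(powers, coeffs))        # total fuel cost of this dispatch
print(demand_violation(powers, 100.0))      # 0.0 -> demand of 100 MW is met
```

An optimizer like BBMO would evolve candidate `powers` vectors, penalizing demand violations while minimizing the cost.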
••
TL;DR: In this article, the performance of kNN based on the Canberra distance metric is measured on different datasets. The proposed metric, the Modified Euclidean-Canberra Blend Distance (MECBD), is then applied to the kNN algorithm, improving class prediction on the same datasets as measured by accuracy, precision, recall, and F1-score for different values of k.
Abstract: In today’s world, different datasets are available to which regression or classification algorithms of machine learning are applied. One of the classification algorithms is k-nearest neighbor (kNN), which computes distances among rows in a dataset. The performance of kNN depends on the value of K and on the distance metric used, where K is the total count of neighboring elements. Many different distance metrics have been used by researchers in the literature; one of them is the Canberra distance metric. In this paper, the performance of kNN based on the Canberra distance metric is measured on different datasets. Further, the proposed metric, namely the Modified Euclidean-Canberra Blend Distance (MECBD), is applied to the kNN algorithm, which improved class prediction on the same datasets as measured by accuracy, precision, recall, and F1-score for different values of k. This study shows that using the MECBD metric improved accuracy from 80.4% to 90.3%, from 80.6% to 85.4%, and from 70.0% to 77.0% on the datasets used. Also, ROC curves and AUC values for k = 5 were computed to demonstrate the improvement in kNN model prediction, showing increased AUC values across datasets: from 0.873 to 0.958 for the Spine (2 classes) dataset; from 0.857 to 0.940, 0.983 to 0.983 (no change), and 0.910 to 0.957 for the DH, SL, and NO classes of the Spine (3 classes) dataset; and from 0.651 to 0.742 for the Haberman’s dataset.
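The baseline the paper starts from can be sketched as a plain kNN classifier under the Canberra distance. The proposed MECBD metric blends Euclidean and Canberra distances; its exact formula is not reproduced here, so only standard Canberra is shown, and the toy data is made up for illustration.

```python
from collections import Counter

def canberra(x, y):
    """Canberra distance: sum(|x_i - y_i| / (|x_i| + |y_i|)), skipping 0/0 terms."""
    total = 0.0
    for a, b in zip(x, y):
        denom = abs(a) + abs(b)
        if denom:
            total += abs(a - b) / denom
    return total

def knn_predict(train, labels, query, k=3, dist=canberra):
    """Majority vote among the k nearest training points under `dist`."""
    ranked = sorted(range(len(train)), key=lambda i: dist(train[i], query))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

X = [(1.0, 2.0), (1.2, 1.9), (8.0, 9.0), (8.5, 8.7)]
y = ["low", "low", "high", "high"]
print(knn_predict(X, y, (1.1, 2.1), k=3))  # the two nearest points are "low"
```

Swapping `dist` for a blended metric (as MECBD does) is a one-line change, which is what makes kNN a convenient testbed for distance-metric research.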
••
14 Nov 2014 • TL;DR: A static index pruning method for phrase queries, based on the cohesive similarity between terms, creates an effective pruned index and can also be applied to a standard inverted index for phrase queries.
Abstract: This paper proposes a static index pruning method for phrase queries based on the cohesive similarity between terms. The co-occurrence between terms is modeled by each term's cohesiveness within a document. Less relevant terms get pruned away while ensuring that the top-k results do not change. The proposed method creates an effective pruned index. The method also considers term proximity based on term frequency and term informativeness. The experiments were conducted on a number of different standard text collections, and analysis shows promising results, comparable with existing static pruning methods. An advantage of the proposed approach is that it can also be applied to a standard inverted index for phrase queries.
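The pruning idea can be illustrated with a simplified sketch: each term's posting list keeps only its k highest-scoring documents. The paper scores terms by within-document cohesiveness; plain per-document scores stand in for that here, and the index contents are invented for the example.

```python
import heapq

def prune_index(index, k):
    """Static pruning sketch: index maps {term: {doc_id: score}};
    returns the same shape with at most k postings kept per term."""
    pruned = {}
    for term, postings in index.items():
        # Keep the k postings with the highest scores for this term.
        top = heapq.nlargest(k, postings.items(), key=lambda kv: kv[1])
        pruned[term] = dict(top)
    return pruned

index = {
    "query":  {"d1": 5, "d2": 1, "d3": 3},
    "phrase": {"d1": 2, "d4": 4},
}
print(prune_index(index, 2))  # "query" drops its lowest-scored posting, d2
```

The paper's contribution lies in the scoring (cohesive similarity and term proximity) and in guaranteeing the top-k results are preserved; this sketch only shows the mechanical pruning step.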
Authors
Name | H-index | Papers | Citations |
---|---|---|---|
Naveen Kumar | 21 | 187 | 2525 |
Anshita Gupta | 20 | 94 | 1126 |
Deepak Khazanchi | 19 | 109 | 1752 |
Yashvir Singh | 17 | 134 | 1036 |
Vinod Patidar | 17 | 60 | 2918 |
K.K. Sud | 16 | 32 | 2750 |
Sanjeev Kumar Raghuwanshi | 15 | 180 | 1118 |
Bibhas Chandra | 13 | 44 | 703 |
Ghanshyam Purohit | 10 | 51 | 610 |
Kamaljit I. Lakhtaria | 10 | 32 | 333 |
Kamal Kumar Agrawal | 9 | 13 | 209 |
Vineet Chouhan | 8 | 27 | 211 |
Shilpi Birla | 7 | 44 | 173 |
Shubham Goswami | 7 | 27 | 170 |
Pallavi Dwivedi | 7 | 9 | 271 |