Author

Anand J. Kulkarni

Bio: Anand J. Kulkarni is an academic researcher from Symbiosis International University. The author has contributed to research on the topics of Metaheuristics and Multi-agent systems, has an h-index of 19, and has co-authored 86 publications receiving 1014 citations. Previous affiliations of Anand J. Kulkarni include the University of Windsor and the Maharashtra Institute of Technology.


Papers
Journal ArticleDOI
TL;DR: Results indicate that SELO performs comparably to the other algorithms tested, which motivates the authors to further establish the effectiveness of this metaheuristic on purposeful, real-world problems.

145 citations

Proceedings ArticleDOI
13 Oct 2013
TL;DR: A novel concept of Cohort Intelligence (CI) is presented, in which every candidate self-supervises its behavior and adapts to the behavior of the candidate it intends to follow; this drives every candidate to improve/evolve its own behavior and, eventually, that of the entire cohort.
Abstract: By virtue of the collective and interdependent behavior of its candidates, a swarm organizes itself to achieve a particular task. Similarly, inspired by the natural and social tendency to learn from one another, a novel concept of Cohort Intelligence (CI) is presented. The learning refers to a cohort candidate's effort to self-supervise its behavior and adapt to the behavior of another candidate it intends to follow. This drives every candidate to improve/evolve its own behavior and, eventually, that of the entire cohort. The approach is validated by solving four test problems. The advantages and limitations are also discussed.
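The follow-and-resample mechanism described in the abstract can be sketched in a few lines. This is a minimal illustrative reading of CI, not the paper's implementation: the cohort size, the roulette-wheel weighting, and the interval-reduction factor `r` are assumed parameter choices for the sketch.

```python
import random

def cohort_intelligence(f, dim, bounds, n_candidates=5, iters=100, r=0.9):
    # Each candidate picks a peer to follow by roulette-wheel selection
    # (lower cost -> more likely to be followed), then re-samples its
    # qualities from a shrinking interval around that peer's qualities.
    lo, hi = bounds
    width = hi - lo
    cohort = [[random.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_candidates)]
    best = min(cohort, key=f)
    for _ in range(iters):
        scores = [f(x) for x in cohort]
        weights = [1.0 / (1e-12 + s) for s in scores]
        cohort = [
            [min(hi, max(lo, q + random.uniform(-width / 2, width / 2)))
             for q in random.choices(cohort, weights=weights)[0]]
            for _ in range(n_candidates)
        ]
        best = min(cohort + [best], key=f)
        width *= r  # shrink the sampling interval each learning attempt
    return best

sphere = lambda x: sum(q * q for q in x)
best = cohort_intelligence(sphere, dim=2, bounds=(-5.0, 5.0))
```

The shrinking sampling interval plays the role of the candidates' gradually narrowing behavior range: early iterations explore widely, later ones refine around the behaviors the cohort has learned to follow.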

105 citations

Journal ArticleDOI
TL;DR: This paper presents an efficient hybrid evolutionary data clustering algorithm, referred to as K-MCI, which combines K-means with Modified Cohort Intelligence; its performance is compared with other well-known algorithms.
Abstract: Clustering is an important and popular technique in data mining. It partitions a set of objects such that objects in the same cluster are more similar to one another than to objects in different clusters, according to certain predefined criteria. K-means is a simple yet efficient method for data clustering. However, K-means has a tendency to converge to local optima and depends on the initial values of the cluster centers. Many heuristic algorithms have been introduced to overcome this local-optima problem, but they too suffer from several shortcomings. In this paper, we present an efficient hybrid evolutionary data clustering algorithm, referred to as K-MCI, which combines K-means with Modified Cohort Intelligence. The proposed algorithm is tested on several standard data sets from the UCI Machine Learning Repository, and its performance is compared with other well-known algorithms such as K-means, K-means++, cohort intelligence (CI), modified cohort intelligence (MCI), genetic algorithm (GA), simulated annealing (SA), tabu search (TS), ant colony optimization (ACO), honey bee mating optimization (HBMO), and particle swarm optimization (PSO). The simulation results are very promising in terms of solution quality and convergence speed.
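The init-dependence the abstract points to is easy to demonstrate on a toy data set. The sketch below is plain Lloyd's K-means, not K-MCI; the four-point rectangle is an assumed example where two different initializations converge to two different stable partitions.

```python
def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, centers, iters=20):
    # Lloyd's algorithm: converges to a local optimum that depends
    # entirely on the initial centers -- the weakness K-MCI targets
    # by letting the metaheuristic search over the center positions.
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: dist2(p, centers[j]))
            clusters[i].append(p)
        centers = [tuple(sum(v) / len(c) for v in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    sse = sum(min(dist2(p, c) for c in centers) for p in points)
    return centers, sse

pts = [(0, 0), (0, 1), (4, 0), (4, 1)]
_, sse_good = kmeans(pts, centers=[(0, 0), (4, 1)])  # settles on left/right split
_, sse_bad = kmeans(pts, centers=[(0, 0), (0, 1)])   # stuck on top/bottom split
```

Both runs reach a fixed point, but the second initialization is trapped in a partition with a much larger sum of squared errors, which is exactly the situation a global search over center positions is meant to escape.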

93 citations

Journal ArticleDOI
TL;DR: An emerging technique inspired by the natural and social tendency of individuals to learn from one another, referred to as Cohort Intelligence (CI), is tested by solving an NP-hard combinatorial problem, the Knapsack Problem (KP).
Abstract: The previous chapters discussed the Cohort Intelligence (CI) algorithm and its applicability to several unconstrained and constrained problems. In addition, CI was applied to several clustering problems. This validated the learning and self-supervising behavior of the cohort. This chapter further tests the ability of CI by solving an NP-hard combinatorial problem, the Knapsack Problem (KP). Several cases of the 0–1 KP are solved, and the effect of various parameters on solution quality is discussed, along with the advantages and limitations of the CI methodology.
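For reference, the 0–1 KP instances solved here have a well-known exact formulation: maximize total value subject to a weight capacity, with each item taken at most once. The sketch below is the standard dynamic-programming solution on a tiny assumed instance, useful as a ground truth against which a stochastic method like CI can be checked; it is not the CI procedure itself.

```python
def knapsack_01(values, weights, capacity):
    # dp[c] = best total value achievable with capacity c; iterating
    # capacities downwards ensures each item is used at most once
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# assumed toy instance: three items, capacity 50
best_value = knapsack_01([60, 100, 120], [10, 20, 30], 50)
```

The DP runs in O(n * capacity) time, which is pseudo-polynomial; it is the capacity-scaling behavior, together with the exponential 0/1 search space, that makes KP a meaningful NP-hard benchmark for metaheuristics.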

88 citations

Journal ArticleDOI
01 Jun 2010
TL;DR: The theory of Collective Intelligence (COIN) is discussed using a modified version of Probability Collectives (PC) to achieve the global goal; optimal results for the Rosenbrock function and both MDMTSP test cases are obtained at reasonable computational cost.
Abstract: Complex systems generally have many components, and such systems cannot be understood only by knowing the individual components and their behavior. This is because any move by one component affects the subsequent decisions/moves of the others, and so on. As the number of components grows, complexity grows exponentially, so the entire system is best seen as a collection of subsystems, i.e., a Multi-Agent System (MAS). The major challenge is to make these agents work in a coordinated way, optimizing their local utilities while contributing maximally to the optimization of the global objective. This paper discusses the theory of Collective Intelligence (COIN) using a modified version of Probability Collectives (PC) to achieve the global goal. The paper demonstrates this approach by optimizing the Rosenbrock function, in which the coupled variables are treated as autonomous agents working collectively to achieve the function optimum. To demonstrate the PC approach on combinatorial optimization problems, two test cases of the Multi-Depot Multiple Traveling Salesmen Problem (MDMTSP) with 3 depots, 3 vehicles, and 15 nodes are solved. In these cases, the vehicles are considered autonomous agents collectively searching for the minimum-cost path. PC is combined with insertion, elimination, and swapping heuristics. Optimal results for the Rosenbrock function and both MDMTSP test cases are obtained at reasonable computational cost.
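The continuous benchmark used here, the Rosenbrock function, is worth stating explicitly, since its coupled terms are what let the paper cast each variable as an agent whose move affects its neighbors. This sketch gives only the benchmark, not the PC machinery:

```python
def rosenbrock(x):
    # f(x) = sum_i [ 100*(x[i+1] - x[i]^2)^2 + (1 - x[i])^2 ]
    # Global minimum f = 0 at x = (1, 1, ..., 1). Each term couples
    # x[i] and x[i+1], so no variable can be optimized in isolation --
    # the coordination problem the agent-based PC formulation targets.
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))
```

The long, narrow curved valley around the optimum is what makes the function a standard stress test: greedy per-variable moves make rapid early progress down the valley walls but then crawl along the valley floor unless the variables coordinate.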

72 citations


Cited by
Journal ArticleDOI
TL;DR: The statistical results and comparisons show that the HHO algorithm provides very promising and occasionally competitive results compared to well-established metaheuristic techniques.

2,871 citations

Journal ArticleDOI
TL;DR: Experimental results comparing AO with well-known metaheuristic methods show the superiority of the developed AO algorithm.

989 citations

Journal ArticleDOI
TL;DR: The comparison results on the benchmark functions suggest that MRFO is far superior to its competitors, and the real-world engineering applications show the merits of this algorithm in tackling challenging problems in terms of computational cost and solution precision.

519 citations

Journal ArticleDOI
TL;DR: Binary variants of the recent Grasshopper Optimisation Algorithm are proposed and employed to select the optimal feature subset for classification within a wrapper-based framework; comparative results show the superior performance of the BGOA and BGOA-M methods over similar techniques in the literature.
Abstract: Feature Selection (FS) is a challenging machine-learning task that aims to reduce the number of features by removing irrelevant, redundant, and noisy data while maintaining an acceptable level of classification accuracy. FS can be considered an optimisation problem, and because the problem is difficult and has a large number of local solutions, stochastic optimisation algorithms are promising techniques for solving it. As a seminal attempt, binary variants of the recent Grasshopper Optimisation Algorithm (GOA) are proposed in this work and employed to select the optimal feature subset for classification purposes within a wrapper-based framework. Two mechanisms are employed to design a binary GOA: the first is based on Sigmoid and V-shaped transfer functions (indicated by BGOA-S and BGOA-V, respectively), while the second uses a novel technique that incorporates the best solution obtained so far. In addition, a mutation operator is employed to enhance the exploration phase of the BGOA algorithm (BGOA-M). The proposed methods are evaluated on 25 standard UCI datasets and compared with 8 well-regarded metaheuristic wrapper-based approaches and 6 well-known filter-based approaches (e.g., correlation FS). The comparative results show the superior performance of the BGOA and BGOA-M methods compared to other similar techniques in the literature.
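The S-shaped and V-shaped transfer functions mentioned in the abstract are the standard way to map a continuous optimizer onto binary feature masks. The sketch below shows the commonly used forms of the two families; the exact functions and update rules in the paper may differ, and the names here are illustrative.

```python
import math
import random

def s_transfer(step):
    # S-shaped (sigmoid) transfer: maps a continuous step to the
    # probability that the corresponding bit is set to 1 (BGOA-S style)
    return 1.0 / (1.0 + math.exp(-step))

def v_transfer(step):
    # V-shaped transfer: maps the step magnitude to the probability
    # that the current bit is flipped (BGOA-V style)
    return abs(math.tanh(step))

def binarise_v(position, current_bits):
    # apply the V-shaped rule: large |step| -> likely flip, small -> keep
    new_bits = []
    for step, bit in zip(position, current_bits):
        if random.random() < v_transfer(step):
            new_bits.append(1 - bit)  # flip the feature in/out of the subset
        else:
            new_bits.append(bit)      # keep the current selection
    return new_bits
```

The design difference matters for search behavior: S-shaped functions set bits from scratch each step (more exploratory), while V-shaped functions only flip bits in proportion to step magnitude, preserving the current subset when the optimizer's movement is small.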

318 citations

Journal ArticleDOI
TL;DR: The results show that PO outperforms all other algorithms, and consistency in performance on such a comprehensive suite of benchmark functions proves the versatility of the algorithm.
Abstract: This paper proposes a novel global optimization algorithm called Political Optimizer (PO), inspired by the multi-phased process of politics. PO is a mathematical mapping of all the major phases of politics: constituency allocation, party switching, election campaigns, inter-party elections, and parliamentary affairs. The proposed algorithm assigns each solution a dual role by logically dividing the population into political parties and constituencies, which lets each candidate update its position with respect to both the party leader and the constituency winner. Moreover, a novel position-updating strategy called the recent past-based position updating strategy (RPPUS) is introduced, which mathematically models how politicians learn from the previous election. The proposed algorithm is benchmarked on 50 unimodal, multimodal, and fixed-dimensional functions against 15 state-of-the-art algorithms. Experiments show that PO has excellent convergence speed with good exploration capability in early iterations; the root cause of this behavior is the incorporation of RPPUS and the logical division of the population that assigns a dual role to each candidate solution. Under the Wilcoxon rank-sum test, PO demonstrates statistically significant performance over the other algorithms. The results show that PO outperforms all other algorithms, and its consistency across such a comprehensive suite of benchmark functions demonstrates the versatility of the algorithm. Furthermore, experiments demonstrate that PO is invariant to function shifting and performs consistently in very high-dimensional search spaces. Finally, applicability to real-world problems is demonstrated by efficiently solving four engineering optimization problems.

251 citations