scispace - formally typeset
Author

Prasanta K. Jana

Bio: Prasanta K. Jana is an academic researcher from the Indian Institutes of Technology. The author has contributed to research in topics: Wireless sensor network & Key distribution in wireless sensor networks. The author has an h-index of 35 and has co-authored 169 publications receiving 4135 citations. Previous affiliations of Prasanta K. Jana include National Institute of Technology Sikkim & Indian Institute of Technology Dhanbad.


Papers
Journal ArticleDOI
TL;DR: This paper presents Linear/Nonlinear Programming (LP/NLP) formulations of these problems, followed by two proposed algorithms for the same based on particle swarm optimization (PSO); the results are compared with existing algorithms to demonstrate their superiority.

411 citations

Journal ArticleDOI
TL;DR: An energy efficient cluster head selection algorithm which is based on particle swarm optimization (PSO) called PSO-ECHS is proposed with an efficient scheme of particle encoding and fitness function and the results are compared with some existing algorithms to demonstrate the superiority of the proposed algorithm.
Abstract: Clustering has been proven to be one of the most efficient techniques for saving energy in wireless sensor networks (WSNs). However, in a hierarchical cluster-based WSN, cluster heads (CHs) consume more energy due to the extra overhead of receiving and aggregating data from their member sensor nodes and transmitting the aggregated data to the base station. Therefore, the proper selection of CHs plays a vital role in conserving the energy of sensor nodes and prolonging the lifetime of WSNs. In this paper, we propose an energy-efficient cluster head selection algorithm based on particle swarm optimization (PSO), called PSO-ECHS. The algorithm is developed with an efficient scheme of particle encoding and fitness function. For the energy efficiency of the proposed PSO approach, we consider various parameters such as intra-cluster distance, sink distance and residual energy of sensor nodes. We also present cluster formation, in which non-cluster-head sensor nodes join their CHs based on a derived weight function. The algorithm is tested extensively on various scenarios of WSNs, varying the number of sensor nodes and CHs. The results are compared with some existing algorithms to demonstrate the superiority of the proposed algorithm.
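The published particle encoding and fitness function are not reproduced in the abstract, but the general idea can be sketched. The following is a minimal, hypothetical PSO for cluster head selection, not the actual PSO-ECHS: each particle holds one weight per node, the top-k weights decode to candidate CHs, and the fitness combines intra-cluster distance, sink distance and residual energy as the abstract describes. All coordinates, coefficients and function names here are illustrative assumptions.

```python
import math
import random

random.seed(1)

# Hypothetical WSN: random node positions, residual energies, and a sink.
NODES = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
ENERGY = [random.uniform(0.5, 2.0) for _ in range(30)]
SINK = (50.0, 50.0)
NUM_CH = 3  # number of cluster heads to select

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def decode(particle):
    # Particle encoding (assumed): one weight per node; the NUM_CH
    # highest-weighted nodes become the candidate cluster heads.
    order = sorted(range(len(NODES)), key=lambda i: -particle[i])
    return order[:NUM_CH]

def fitness(particle):
    chs = decode(particle)
    # Intra-cluster distance: each non-CH node joins its nearest CH.
    intra = sum(min(dist(NODES[i], NODES[c]) for c in chs)
                for i in range(len(NODES)) if i not in chs)
    # Sink distance and residual energy of the chosen CHs.
    sink_d = sum(dist(NODES[c], SINK) for c in chs)
    energy = sum(ENERGY[c] for c in chs)
    # Lower is better: penalize distances, reward residual energy
    # (the 10.0 weight is an arbitrary illustrative choice).
    return intra + sink_d - 10.0 * energy

def pso(num_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    dim = len(NODES)
    xs = [[random.random() for _ in range(dim)] for _ in range(num_particles)]
    vs = [[0.0] * dim for _ in range(num_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [fitness(x) for x in xs]
    g = pbest[min(range(num_particles), key=lambda i: pbest_f[i])][:]
    g_f = min(pbest_f)
    for _ in range(iters):
        for i in range(num_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Standard velocity update: inertia + cognitive + social terms.
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (g[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = fitness(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], f
                if f < g_f:
                    g, g_f = xs[i][:], f
    return decode(g), g_f

chs, best = pso()
print("chosen cluster heads:", chs)
```

The top-k decoding keeps every particle feasible (always exactly NUM_CH distinct CHs) without any repair step, which is one common way to map continuous PSO positions onto a discrete selection problem.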

322 citations

Journal ArticleDOI
TL;DR: The proposed GA-based load-balanced clustering algorithm for WSNs is shown to perform well for both equal and unequal sensor-node loads, with a fast rate of convergence.
Abstract: Clustering sensor nodes is an effective topology control method for reducing the energy consumption of sensor nodes and maximizing the lifetime of Wireless Sensor Networks (WSNs). However, in a cluster-based WSN, the leaders (cluster heads) bear extra load for various activities such as data collection, data aggregation and communication of the aggregated data to the base station. Therefore, balancing the load of the cluster heads is a challenging issue for the long-run operation of WSNs. Load-balanced clustering is known to be an NP-hard problem for a WSN with unequal sensor-node loads. The Genetic Algorithm (GA) is one of the most popular evolutionary approaches and can be applied to find a fast and efficient solution to such a problem. In this paper, we propose a novel GA-based load-balanced clustering algorithm for WSNs. The proposed algorithm is shown to perform well for both equal and unequal loads of the sensor nodes. We perform extensive simulation of the proposed method and compare the results with some evolutionary approaches and other related clustering algorithms. The results demonstrate that the proposed algorithm performs better than all such algorithms in terms of various performance metrics such as load balancing, execution time, energy consumption, number of active sensor nodes, number of active cluster heads and rate of convergence.
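The GA formulation itself is not given in the abstract, but the shape of such an algorithm can be sketched. Below is a minimal, hypothetical GA for load-balanced clustering: a chromosome assigns each node to one of k cluster heads, and the fitness is the standard deviation of the per-CH load (lower is more balanced). The loads, operators and parameters are illustrative assumptions, not the published algorithm.

```python
import random
import statistics

random.seed(2)

NUM_NODES = 40
NUM_CH = 4
# Hypothetical unequal traffic loads generated by the sensor nodes.
LOAD = [random.randint(1, 10) for _ in range(NUM_NODES)]

def fitness(chrom):
    # Load imbalance across cluster heads: lower is better.
    loads = [0] * NUM_CH
    for node, ch in enumerate(chrom):
        loads[ch] += LOAD[node]
    return statistics.pstdev(loads)

def crossover(a, b):
    # Single-point crossover between two parent assignments.
    cut = random.randrange(1, NUM_NODES)
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.05):
    # Reassign each node to a random CH with small probability.
    return [random.randrange(NUM_CH) if random.random() < rate else g
            for g in chrom]

def ga(pop_size=30, gens=100):
    pop = [[random.randrange(NUM_CH) for _ in range(NUM_NODES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]  # elitist truncation selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=fitness)

best = ga()
print("load imbalance (std dev):", round(fitness(best), 3))
```

Encoding the clustering as a node-to-CH assignment vector keeps crossover and mutation trivially valid (every child is a legal clustering), which is one reason this representation is common for GA-based clustering sketches.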

223 citations

Journal ArticleDOI
01 Dec 2014
TL;DR: This work proposes a novel differential evolution (DE) based clustering algorithm for WSNs to prolong lifetime of the network by preventing faster death of the highly loaded CHs and incorporates a local improvement phase to the traditional DE for faster convergence and better performance.
Abstract: The proposed work is a novel DE-based clustering scheme for WSNs. The algorithm incorporates an additional step to enhance performance. Experimental results demonstrate its superiority over existing algorithms. The performance is shown in terms of network lifetime, energy consumption, etc. Clustering is an efficient topology control method that balances the traffic load of the sensor nodes and improves the overall scalability and lifetime of wireless sensor networks (WSNs). However, in a cluster-based WSN, the cluster heads (CHs) consume more energy due to the extra workload of receiving the sensed data, aggregating it and transmitting the aggregated data to the base station. Moreover, improper formation of clusters can leave some CHs overloaded with a high number of sensor nodes. This overload may lead to the quick death of the CHs, which partitions the network and thereby degrades the overall performance of the WSN. It is worthwhile to note that the computational complexity of finding an optimal clustering for a large-scale WSN by a brute-force approach is very high. In this paper, we propose a novel differential evolution (DE) based clustering algorithm for WSNs that prolongs the network lifetime by preventing the faster death of highly loaded CHs. We incorporate a local improvement phase into the traditional DE for faster convergence and better performance of our proposed algorithm. We perform extensive simulation of the proposed algorithm, and the experimental results demonstrate its efficiency.
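The abstract's key idea, DE plus a local improvement phase to relieve overloaded CHs, can be sketched as follows. This is a hypothetical illustration, not the published algorithm: DE evolves continuous vectors that decode to node-to-CH assignments, the objective is the load gap between the most- and least-loaded CH, and the local improvement greedily moves one node off the most-loaded CH before selection.

```python
import random

random.seed(3)

NUM_NODES, NUM_CH = 30, 3
# Hypothetical per-node traffic loads.
LOAD = [random.randint(1, 9) for _ in range(NUM_NODES)]

def decode(vec):
    # Round each continuous gene into a cluster-head index, clamped to range.
    return [min(NUM_CH - 1, max(0, int(g))) for g in vec]

def imbalance(vec):
    # Objective: gap between the heaviest and lightest CH load.
    loads = [0] * NUM_CH
    for node, ch in enumerate(decode(vec)):
        loads[ch] += LOAD[node]
    return max(loads) - min(loads)

def local_improve(vec):
    # Local improvement phase (the "additional step" beyond classical DE):
    # greedily move one node from the most-loaded CH to the least-loaded one.
    assign = decode(vec)
    loads = [0] * NUM_CH
    for node, ch in enumerate(assign):
        loads[ch] += LOAD[node]
    hi, lo = loads.index(max(loads)), loads.index(min(loads))
    for node, ch in enumerate(assign):
        if ch == hi:
            vec = vec[:]
            vec[node] = float(lo)
            break
    return vec

def de(pop_size=20, gens=80, F=0.8, CR=0.9):
    dim = NUM_NODES
    pop = [[random.uniform(0, NUM_CH) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)
            # DE mutation + binomial crossover to build the trial vector.
            trial = [a[d] + F * (b[d] - c[d])
                     if (random.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            trial = local_improve(trial)  # refine before greedy selection
            if imbalance(trial) <= imbalance(pop[i]):
                pop[i] = trial
    return min(pop, key=imbalance)

best = de()
print("final load imbalance:", imbalance(best))
```

Applying the greedy move to every trial vector is one simple way to hybridize DE with local search; it trades extra per-generation work for the faster convergence the abstract claims.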

193 citations

Journal ArticleDOI
TL;DR: The proposed energy-aware routing algorithm is based on a clever strategy for cluster head (CH) selection that uses the residual energy of the CHs and the intra-cluster distance for cluster formation, and it achieves constant message complexity and linear time complexity.

176 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
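The mail-filtering example in the fourth category can be made concrete with a tiny learned filter. The sketch below is a minimal naive Bayes word scorer trained on hypothetical examples of messages a user rejected and kept; all messages and names are invented for illustration, and this is not any particular production filter.

```python
import math
from collections import Counter

# Toy training data standing in for a user's past accept/reject decisions.
REJECTED = ["win money now", "free money offer", "win free prize"]
KEPT = ["project meeting today", "lunch meeting tomorrow", "project report draft"]

def word_counts(messages):
    counts = Counter()
    for m in messages:
        counts.update(m.split())
    return counts

spam_counts, ham_counts = word_counts(REJECTED), word_counts(KEPT)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def spam_score(message):
    # Log-likelihood ratio with add-one smoothing; positive means "reject".
    score = 0.0
    for w in message.split():
        p_spam = (spam_counts[w] + 1) / (spam_total + len(vocab))
        p_ham = (ham_counts[w] + 1) / (ham_total + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("free money"))
print(spam_score("project lunch"))
```

As the abstract argues, the rules here are never written by hand: retraining on new accept/reject examples updates the counts, and with them the filter, automatically.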

13,246 citations

Journal ArticleDOI
TL;DR: The authors find that it is high time to provide a critical review of the latest published literature on DE and to point out some important future avenues of research.
Abstract: Differential Evolution (DE) is arguably one of the most powerful and versatile evolutionary optimizers for continuous parameter spaces in recent times. Almost 5 years have passed since the first comprehensive survey article on DE was published by Das and Suganthan in 2011. Several developments have been reported on various aspects of the algorithm in these 5 years, and research on and with DE has now reached an impressive state. Considering the huge progress of research with DE and its applications in diverse domains of science and technology, we find that it is high time to provide a critical review of the latest published literature and to point out some important future avenues of research. The purpose of this paper is to summarize and organize the information on these current developments in DE. Beginning with a comprehensive foundation of the basic DE family of algorithms, we proceed through the recent proposals on parameter adaptation of DE, DE-based single-objective global optimizers, DE adopted for various optimization scenarios including constrained, large-scale, multi-objective, multi-modal and dynamic optimization, hybridization of DE with other optimizers, and also the multi-faceted literature on applications of DE. The paper also presents a dozen interesting open problems and future research issues on DE.
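The "basic DE family" the survey builds on is usually summarized by the classic DE/rand/1/bin scheme. The sketch below shows its three steps, difference-vector mutation, binomial crossover and greedy selection, on the standard sphere benchmark; the population size, scale factor F and crossover rate CR are typical textbook settings, not values taken from the survey.

```python
import random

random.seed(4)

def sphere(x):
    # Classical continuous benchmark: global minimum 0 at the origin.
    return sum(v * v for v in x)

def de_rand_1_bin(f, dim=5, pop_size=30, gens=200, F=0.5, CR=0.9):
    pop = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: perturb one random vector with a scaled difference
            # of two others ("rand/1").
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
            # Binomial crossover ("bin"): mix mutant and target gene by gene,
            # guaranteeing at least one mutant gene via j_rand.
            j_rand = random.randrange(dim)
            trial = [mutant[d] if (random.random() < CR or d == j_rand)
                     else pop[i][d] for d in range(dim)]
            # Greedy selection: keep whichever vector scores better.
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

best = de_rand_1_bin(sphere)
print("best objective value:", sphere(best))
```

The parameter-adaptation and hybrid variants the survey catalogs all modify one of these three steps (how F and CR are chosen, how the base and difference vectors are picked, or what follows selection), so this loop is the reference point for the whole family.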

1,265 citations

Journal ArticleDOI
TL;DR: This survey presents a comprehensive investigation of PSO, including its modifications, extensions, and applications to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology.
Abstract: Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presented a comprehensive investigation of PSO. On one hand, we provided advances with PSO, including its modifications (including quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topology (as fully connected, von Neumann, ring, star, random, etc.), hybridization (with genetic algorithm, simulated annealing, Tabu search, artificial immune system, ant colony algorithm, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we offered a survey on applications of PSO to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey would be beneficial for the researchers studying PSO algorithms.

836 citations

MonographDOI
01 Jan 2016
TL;DR: In this article, a comprehensive introduction to parallel computing is provided, discussing theoretical issues such as the fundamentals of concurrent processes, models of parallel and distributed computing, and metrics for evaluating and comparing parallel algorithms, as well as practical issues, including methods of designing and implementing shared-and distributed-memory programs, and standards for parallel program implementation.
Abstract: The constantly increasing demand for more computing power can seem impossible to keep up with. However, multicore processors capable of performing computations in parallel allow computers to tackle ever larger problems in a wide variety of applications. This book provides a comprehensive introduction to parallel computing, discussing theoretical issues such as the fundamentals of concurrent processes, models of parallel and distributed computing, and metrics for evaluating and comparing parallel algorithms, as well as practical issues, including methods of designing and implementing shared- and distributed-memory programs, and standards for parallel program implementation, in particular MPI and OpenMP interfaces. Each chapter presents the basics in one place followed by advanced topics, allowing novices and experienced practitioners to quickly find what they need. A glossary and more than 80 exercises with selected solutions aid comprehension. The book is recommended as a text for advanced undergraduate or graduate students and as a reference for practitioners.

572 citations