Author

Indrajit Bhattacharya

Bio: Indrajit Bhattacharya is an academic researcher from Kalyani Government Engineering College. The author has contributed to research in topics: Wireless sensor network & Routing protocol. The author has an h-index of 8 and has co-authored 37 publications receiving 257 citations.

Papers
Journal ArticleDOI
TL;DR: PMT (Pixel Matching Technique) is used to verify a user's signature against the sample signature stored in the database, and the performance of the proposed method is compared with the existing ANN (Artificial Neural Network) back-propagation method and the SVM (Support Vector Machine) technique.

58 citations
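The entry above gives no implementation details for the pixel matching step; the following is a minimal illustrative sketch of pixel-matching-style verification, assuming pre-processed, binarized signature images of equal size and a hypothetical acceptance threshold, neither of which is specified in the TL;DR.

```python
import numpy as np

def pixel_match_score(sample: np.ndarray, candidate: np.ndarray) -> float:
    """Fraction of pixels that agree between two binarized signature images.

    Both images are assumed to be pre-processed (cropped, scaled to the same
    shape and thresholded to 0/1) before matching.
    """
    if sample.shape != candidate.shape:
        raise ValueError("images must be normalised to the same size first")
    return float(np.mean(sample == candidate))

def verify_signature(sample: np.ndarray, candidate: np.ndarray,
                     threshold: float = 0.9) -> bool:
    """Accept the candidate if enough pixels match the stored sample.

    The threshold value is illustrative, not taken from the paper.
    """
    return pixel_match_score(sample, candidate) >= threshold
```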

Journal ArticleDOI
TL;DR: This paper proposes a reader placement technique, based on Particle Swarm Optimization (PSO), for a departmental store equipped with an RFID network; the technique finds the minimal number of readers, along with their positions, that provides 100% coverage of tagged items.
Abstract: An RFID network consists of a set of tags and readers. The cost, and the number of tags covered, depend directly on the number of readers. So, finding the optimal number of readers and their positions to cover all tags is one of the most important issues in an RFID network. In this paper, we propose a reader placement technique for a departmental store equipped with an RFID network, using Particle Swarm Optimization (PSO). The proposed algorithm finds the minimal number of readers, along with their positions, that achieves 100% coverage of tagged items. Simulation results also show the algorithm's effectiveness in reaching the optimal solution.

55 citations
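The abstract above does not spell out the PSO encoding or objective. A common formulation, sketched below under assumed parameters (read range, area size, swarm settings), encodes the (x, y) coordinates of a fixed number of readers in each particle, maximizes tag coverage, and increases the reader count until full coverage is reached.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage(readers: np.ndarray, tags: np.ndarray, read_range: float) -> float:
    """Fraction of tags lying within read_range of at least one reader."""
    d = np.linalg.norm(tags[:, None, :] - readers[None, :, :], axis=-1)
    return float(np.mean(d.min(axis=1) <= read_range))

def pso_place_readers(tags, k, read_range, area=100.0,
                      n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Global-best PSO over the concatenated (x, y) positions of k readers."""
    dim = 2 * k
    x = rng.uniform(0, area, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_fit = np.array([coverage(p.reshape(k, 2), tags, read_range) for p in x])
    gbest, gbest_fit = pbest[pbest_fit.argmax()].copy(), pbest_fit.max()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0, area)
        fit = np.array([coverage(p.reshape(k, 2), tags, read_range) for p in x])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = x[better], fit[better]
        if fit.max() > gbest_fit:
            gbest, gbest_fit = x[fit.argmax()].copy(), fit.max()
    return gbest.reshape(k, 2), gbest_fit

# Grow the reader count until a full-coverage layout is found.
tags = rng.uniform(0, 100.0, (50, 2))
for k in range(1, 11):
    layout, cov = pso_place_readers(tags, k, read_range=20.0)
    if cov == 1.0:
        print(f"{k} readers suffice; positions:\n{layout}")
        break
else:
    print("no full-coverage layout found with up to 10 readers")
```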

Journal ArticleDOI
TL;DR: A probabilistic model is adopted that combines prominent features of rumor propagation to extract tweets exhibiting at least one such feature; around 70% of the total endorsed belief rumors are detected by the proposed model, which is superior to other techniques.
Abstract: The use of online social media for post-disaster situation analysis has recently become popular. However, utilizing information posted on social media carries some potential hazards, one of which is rumor. For instance, on Twitter, thousands of verified and non-verified users post tweets to convey information, and not all of it is genuine. Some tweets contain fraudulent and unverified information about different facts or incidents; such information is termed a rumor. Identification of such rumor tweets at an early stage in the aftermath of a disaster is the main focus of the current work. To this end, a probabilistic model is adopted that combines prominent features of rumor propagation. Each feature has been coded individually in order to extract tweets that have at least one rumor propagation feature. In addition, content-based analysis has been performed to quantify the contribution of the extracted tweets in terms of their probability of being a rumor. The proposed model has been tested over a large set of tweets posted during the 2015 Chennai Floods. The proposed model and four other popular baseline rumor detection techniques have been compared against human-annotated real rumor data to check their efficiency in terms of (i) detection of belief rumors and (ii) accuracy at an early stage. It has been observed that around 70% of the total endorsed belief rumors are detected by the proposed model, which is superior to the other techniques. Finally, the proposed technique also achieved an accuracy of 0.9904 for the considered disaster scenario, which is better than the other methods.

41 citations
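The abstract does not enumerate the rumor propagation features or how they are combined; the sketch below is a hypothetical illustration of the general idea, with made-up features (unverified source, hearsay wording, and so on) and equal weights standing in for the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    user_verified: bool
    retweet_count: int

# Hypothetical propagation features standing in for the paper's feature set.
FEATURES = {
    "unverified_source": lambda t: not t.user_verified,
    "question_or_doubt": lambda t: "?" in t.text,
    "hearsay_wording":   lambda t: any(w in t.text.lower()
                                       for w in ("reportedly", "rumour", "heard that")),
    "high_retweet":      lambda t: t.retweet_count > 100,
}
WEIGHTS = {name: 0.25 for name in FEATURES}   # illustrative equal weights

def rumor_probability(tweet: Tweet) -> float:
    """Combine the features that fire into a single score in [0, 1].

    Tweets with no propagation feature at all score 0 and would be dropped,
    mirroring the "at least one feature" extraction step in the abstract.
    """
    fired = [name for name, f in FEATURES.items() if f(tweet)]
    return min(1.0, sum(WEIGHTS[n] for n in fired)) if fired else 0.0

sample = Tweet("Heard that the dam has burst?", user_verified=False, retweet_count=250)
print(rumor_probability(sample))   # all four features fire -> 1.0
```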

Journal ArticleDOI
TL;DR: A novel DTN routing protocol is proposed; in the Opportunistic Networks Environment (ONE) simulator, it is found to reduce message delivery latency and improve message delivery ratio while incurring only a small overhead.
Abstract: Network architecture based on an opportunistic Delay Tolerant Network (DTN) is well suited to post-disaster scenarios, where the controlling point of relief work is a fixed location, such as a local school building or a hospital, whose position is known to everyone. In this work, a 4-tier network architecture for post-disaster relief and situation analysis is proposed. The disaster-struck area is divided into clusters known as Shelter Points (SP). The architecture consists of mobile Relief Workers (RW) at tier 1 and Throw Boxes (TB) at tier 2, placed at fixed locations within SPs. Data Mules (DM) such as vehicles and boats operate at tier 3 and provide inter-SP connectivity. The Master Control Station (MCS) is placed at tier 4. The RWs are provided with smartphones that act as mobile nodes. The mobile nodes collect information from the disaster incident area and send that information to the TB of their SP, using DTN as the communication technology. The messages are then forwarded to the MCS via the DMs. Based on this architecture, a novel DTN routing protocol is proposed. The routing strategy works by tracking the recent direction of movement of mobile nodes, measuring their consecutive distances from the destination at two different instants. If a node moves away from the destination, it is very unlikely to carry its messages towards the destination. For a node, the fittest node among all its neighbours is selected as the next hop. The fittest node is selected using parameters such as past history of successful delivery and delivery latency, current direction of movement and the node's recent proximity to the destination. Issues related to routing, such as fitness of a node for message delivery, buffer management, packet drop and node energy, have been considered. The routing protocol has been implemented in the Opportunistic Networks Environment (ONE) simulator with customized mobility models and compared with existing standard DTN routing protocols for efficiency. It is found to reduce message delivery latency and improve message delivery ratio while incurring a small overhead.

25 citations
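As a rough illustration of the routing strategy described above, the sketch below scores candidate next hops using the parameters the abstract names (direction of movement inferred from two consecutive distances to the destination, delivery history, latency and proximity); the weighting and the exact form of each term are assumptions, not the paper's definitions.

```python
import math
from dataclasses import dataclass

@dataclass
class NeighbourState:
    """State a neighbour would advertise during an encounter (illustrative)."""
    pos_prev: tuple      # position at the earlier instant
    pos_now: tuple       # position at the later instant
    deliveries: int      # past successful deliveries towards this destination
    avg_latency: float   # average delivery latency in seconds

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def moving_towards(n: NeighbourState, dest) -> bool:
    """Direction heuristic from the abstract: compare the distances to the
    destination measured at two consecutive instants."""
    return dist(n.pos_now, dest) < dist(n.pos_prev, dest)

def fitness(n: NeighbourState, dest) -> float:
    """Mix of the parameters named in the abstract: delivery history, latency,
    direction of movement and proximity. Weights are assumed, not the paper's."""
    if not moving_towards(n, dest):
        return 0.0   # a node moving away is unlikely to carry the message onward
    proximity = 1.0 / (1.0 + dist(n.pos_now, dest))
    history = n.deliveries / (1.0 + n.deliveries)
    latency = 1.0 / (1.0 + n.avg_latency)
    return 0.4 * proximity + 0.4 * history + 0.2 * latency

def next_hop(neighbours: dict, dest):
    """Hand the message to the fittest neighbour, or keep it if none qualifies."""
    scored = {nid: fitness(state, dest) for nid, state in neighbours.items()}
    best = max(scored, key=scored.get, default=None)
    return best if best is not None and scored[best] > 0 else None

neighbours = {
    "rw1": NeighbourState((50, 50), (40, 40), deliveries=3, avg_latency=120.0),
    "rw2": NeighbourState((30, 30), (45, 45), deliveries=9, avg_latency=60.0),
}
print(next_hop(neighbours, dest=(0, 0)))   # "rw1": rw2 is moving away from the destination
```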

Journal ArticleDOI
TL;DR: This work presents a comprehensive study of integrated sensor-based systems designed to achieve energy efficiency and prolong network lifetime.
Abstract: Small-size sensor nodes are used as the basic components for collecting and sending data in ad hoc mode in a wireless sensor network (WSN). Such a network is generally used to collect and process data from regions where human movement is rare. The sensor nodes are deployed in such a region to collect data over an ad hoc network where, at any time, an unusual situation may occur or no fixed network is available to provide a transmission path. The location may be very remote or a disaster-prone area; after a disaster, most often no fixed network remains operational. In that scenario, an ad hoc sensor network is one of the reliable means of collecting and transmitting data from the region, and it can also support geo-informatic systems. WSNs can be used to handle disaster management manually as well as through an automated system. The main problem for any activity using sensor nodes is that the nodes are highly battery hungry. Efficient power utilization is required to extend network lifetime by reducing data traffic in the WSN. For this reason, efficient intelligent software and hardware techniques are required to make the best use of limited resources in terms of energy, computation and storage. One of the most suitable approaches is a data aggregation protocol, which can reduce the communication cost and thereby extend the lifetime of the sensor network. These techniques can be implemented in different ways, but not all are useful in the same application scenarios. More specifically, data can be collected dynamically using rendezvous points (RPs); intelligent neural-network-based techniques can be used for cluster formation, and the ant colony optimization algorithm can be used to fix the targeted base station. In this work, we present a comprehensive study of such energy-efficient integrated sensor-based systems in order to achieve energy efficiency and prolong network lifetime.

21 citations
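To make the energy argument behind cluster-based data aggregation concrete, the sketch below compares per-round transmission energy with and without aggregation at a cluster head or rendezvous point, using the standard first-order radio model with illustrative constants rather than values from the surveyed protocols.

```python
# First-order radio model constants (illustrative, not from the surveyed work).
E_ELEC = 50e-9        # J/bit spent in transmitter/receiver electronics
E_AMP = 100e-12       # J/bit/m^2 spent in the transmit amplifier

def tx_energy(bits: int, distance: float) -> float:
    return bits * (E_ELEC + E_AMP * distance ** 2)

def rx_energy(bits: int) -> float:
    return bits * E_ELEC

def round_energy_direct(n_nodes, bits, d_to_bs):
    """Every node sends its own reading straight to the base station."""
    return n_nodes * tx_energy(bits, d_to_bs)

def round_energy_aggregated(n_nodes, bits, d_to_head, d_to_bs):
    """Members send to the cluster head / RP, which aggregates and forwards once."""
    members = n_nodes - 1
    return (members * tx_energy(bits, d_to_head)
            + members * rx_energy(bits)          # head receives member packets
            + tx_energy(bits, d_to_bs))          # head sends one aggregated packet

# 20 nodes, 4000-bit packets, base station 200 m away, cluster head 25 m away.
print(round_energy_direct(20, 4000, 200.0))          # ~0.32 J per round
print(round_energy_aggregated(20, 4000, 25.0, 200.0))  # ~0.03 J per round
```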


Cited by
01 Jan 2003
TL;DR: The Colorwave algorithm is presented: a simple, distributed, on-line algorithm for the reader collision problem in radio frequency identification (RFID) systems whose dynamic nature enables the system to adapt automatically to changes in the system and in its operating environment.
Abstract: We present the Colorwave algorithm, a simple, distributed, on-line algorithm for the reader collision problem in radio frequency identification (RFID) systems. RFID systems are increasingly being used in applications, such as supply chain management, that require RFID readers to operate in close proximity to one another. Readers physically located near one another may interfere with one another's operation. Such reader collisions must be minimized to ensure the correct operation of the RFID system. The Colorwave algorithm yields on-line solutions that are near the optimal static solutions. The dynamic nature of the algorithm enables the RFID system to adapt automatically to changes in the system and in its operating environment.

294 citations
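A highly simplified simulation of the distributed colouring idea described above is sketched below: each reader holds a colour (time slot), colliding readers re-roll their colour, and displaced neighbours are "kicked" into re-rolling as well. The variable-maximum part of Colorwave (growing or shrinking the colour range from observed collision rates) is omitted, and the topology and parameters are illustrative.

```python
import random
random.seed(1)

def colorwave_round(colors, neighbours, max_colors):
    """One pass over the readers; returns the number of collisions observed."""
    collisions = 0
    for reader, nbrs in neighbours.items():
        if any(colors[n] == colors[reader] for n in nbrs):
            collisions += 1
            new = random.randrange(max_colors)
            colors[reader] = new
            for n in nbrs:                      # the "kick": displaced neighbours re-roll
                if colors[n] == new:
                    colors[n] = random.randrange(max_colors)
    return collisions

# Four readers in a line: each interferes only with its immediate neighbours.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
colors = {r: random.randrange(3) for r in neighbours}
for rnd in range(20):
    if colorwave_round(colors, neighbours, max_colors=3) == 0:
        print(f"collision-free after {rnd} rounds: {colors}")
        break
```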

Journal ArticleDOI
TL;DR: This paper presents an extensive review of the state-of-the-art solutions for enhancing security and privacy in D2D communication, identifies lessons to be learned from existing studies and derives a set of "best practices."
Abstract: Device-to-device (D2D) communication presents a new paradigm in mobile networking to facilitate data exchange between physically proximate devices. The development of D2D is driven by mobile operators to harvest short range communications for improving network performance and supporting proximity-based services. In this paper, we investigate two fundamental and interrelated aspects of D2D communication, security and privacy, which are essential for the adoption and deployment of D2D. We present an extensive review of the state-of-the-art solutions for enhancing security and privacy in D2D communication. By summarizing the challenges, requirements, and features of different proposals, we identify lessons to be learned from existing studies and derive a set of “best practices.” The primary goal of our work is to equip researchers and developers with a better understanding of the underlying problems and the potential solutions for D2D security and privacy. To inspire follow-up research, we identify open problems and highlight future directions with regard to system and communication design. To the best of our knowledge, this is the first comprehensive review to address the fundamental security and privacy issues in D2D communication.

251 citations

01 Jan 2017
TL;DR: This research study explores the Global Positioning System (GPS), its history, and the process of discovery needed to create the most accurate GPS possible, as well as the contemporary applications of GPS technology.
Abstract: This research study explores the Global Positioning System (GPS), its history, the process of discovery needed to create the most accurate GPS possible, and the contemporary applications of GPS technology. Starting with the first satellite in space, GPS has been a work in progress. Originally pursued by the military to improve military tactics, GPS has become integrated into the everyday lives of millions of people around the world. How GPS determines location is a dichotomy: simple in theory, complex in application. Many factors go into providing a consistent, accurate location. The orbital planes the satellites are placed in provide 24/7 global coverage, the L-band frequencies used were chosen specifically for their characteristics, and the multiple atomic clocks installed on each satellite provide accuracy down to nanoseconds, which is essential to GPS accuracy. The applications of GPS are far reaching, and more are continually being discovered. As far as GPS technology has progressed, there are still several factors that degrade the signal and are a challenge to overcome. Many of these challenges can be corrected efficiently; others, such as scintillation and total electron content variability in the ionosphere, create major hurdles. Fortunately, there are many programs that aid in correcting these problems. The History of GPS: According to R. Saunders' article "A Short History of GPS Development", the Global Positioning System has a long history of trial and error, refinement and improvement. Its purpose has shifted from being a military strategic asset to a commonplace tool for the general public, used in travel, farming, and even banking. The beginning of GPS, introduced with a simple idea, can be traced back to the Soviet Union in the late 1950s. In 1957, the Soviet Union made history by successfully launching the first satellite into space. To track the satellite Sputnik, physicists and scientists at Johns Hopkins University's Applied Physics Laboratory listened to the beeps Sputnik's signals produced. They noticed that the beeps exhibited a Doppler effect (Doppler shift) as the satellite passed by, much like the way a siren's pitch seems to change as a fire truck approaches and then passes. The change in timing between the beeps let the scientists determine Sputnik's location, which led to the idea of reversing the process to give a location on the Earth. Using radio frequencies to determine location in a two-dimensional plane had been around since WWII, but using satellites would push this technology into the three-dimensional realm. The United States Navy, Army, and Air Force all began developing their own GPS satellites in the 1960s, but this was no small task. In the early 1960s, the Navy launched its first Transit satellite. The failure of this satellite, however, was due to

248 citations
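As a back-of-the-envelope illustration of why the abstract stresses nanosecond-level clock accuracy, the snippet below converts clock error into ranging error at the speed of light; the orbital altitude and error values are round illustrative numbers, not figures from the article.

```python
# Ranges in GPS are measured as signal travel time multiplied by the speed of
# light, so every nanosecond of clock error maps to roughly 30 cm of range error.

C = 299_792_458.0          # speed of light, m/s
ORBIT_ALTITUDE_M = 20_200e3  # approximate GPS orbital altitude, metres

travel_time_s = ORBIT_ALTITUDE_M / C
print(f"one-way travel time ~ {travel_time_s * 1e3:.1f} ms")

for clock_error_ns in (1, 10, 100, 1000):
    range_error_m = C * clock_error_ns * 1e-9
    print(f"{clock_error_ns:>5} ns clock error -> {range_error_m:7.2f} m ranging error")
```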

Journal ArticleDOI
TL;DR: A holistic view is put forward of how information is being weaponized to fulfil malicious motives and to force a biased user perception of a person, event or firm.
Abstract: The Internet and social media have become a widespread, large-scale and easy-to-use platform for real-time information dissemination. They have become an open stage for discussion, ideology expression, knowledge dissemination, and the sharing of emotions and sentiment. This platform is gaining tremendous traction and a huge user base from all sections and age groups of society. The matter of concern is to what extent the content circulating on these platforms every second, changing the mindset, perceptions and lives of billions of people, is verified, authenticated and up to standard. This paper puts forward a holistic view of how information is being weaponized to fulfil malicious motives and to force a biased user perception of a person, event or firm. Further, a taxonomy is provided for the classification of malicious information content at different stages, together with the prevalent technologies for coping with this issue at the origin, propagation, detection and containment stages. We also identify research gaps and possible future research directions so that web information content can be more reliable and safer to use for decision making as well as for knowledge sharing.

247 citations

Journal ArticleDOI
TL;DR: A novel technique based on the idea of best-feature selection is introduced in this article for an offline signature verification system, evaluated using three accuracy measures: FAR, FRR and AER.

130 citations
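The TL;DR above names three standard verification error measures without defining them; the sketch below computes them in the usual way, assuming AER is the average of FAR and FRR (a common convention that the TL;DR does not confirm).

```python
def far(false_accepts: int, total_forgeries: int) -> float:
    """False Acceptance Rate: fraction of forged signatures wrongly accepted."""
    return false_accepts / total_forgeries

def frr(false_rejects: int, total_genuine: int) -> float:
    """False Rejection Rate: fraction of genuine signatures wrongly rejected."""
    return false_rejects / total_genuine

def aer(far_value: float, frr_value: float) -> float:
    """Average Error Rate, taken here as the mean of FAR and FRR."""
    return (far_value + frr_value) / 2

f_a = far(false_accepts=3, total_forgeries=100)    # 3% of forgeries slip through
f_r = frr(false_rejects=5, total_genuine=100)      # 5% of genuine signatures rejected
print(f"FAR={f_a:.2%}  FRR={f_r:.2%}  AER={aer(f_a, f_r):.2%}")
```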