
Showing papers on "Wireless sensor network published in 2021"


Journal ArticleDOI
TL;DR: This paper provides a tutorial overview of IRS-aided wireless communications, elaborating its reflection and channel models, hardware architecture and practical constraints, as well as various appealing applications in wireless networks.
Abstract: Intelligent reflecting surface (IRS) is an enabling technology to engineer the radio signal propagation in wireless networks. By smartly tuning the signal reflection via a large number of low-cost passive reflecting elements, IRS is capable of dynamically altering wireless channels to enhance the communication performance. It is thus expected that the new IRS-aided hybrid wireless network comprising both active and passive components will be highly promising to achieve a sustainable capacity growth cost-effectively in the future. Despite its great potential, IRS faces new challenges to be efficiently integrated into wireless networks, such as reflection optimization, channel estimation, and deployment from communication design perspectives. In this paper, we provide a tutorial overview of IRS-aided wireless communications to address the above issues, and elaborate its reflection and channel models, hardware architecture and practical constraints, as well as various appealing applications in wireless networks. Moreover, we highlight important directions worthy of further investigation in future work.
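The co-phasing idea behind IRS reflection optimization can be sketched in a few lines; all channel coefficients below are synthetic stand-ins, and the closed-form phase rule is the standard single-user alignment, not the paper's full design:

```python
import cmath
import random

random.seed(0)

def rayleigh():
    # Synthetic complex channel coefficient (illustrative only).
    return complex(random.gauss(0, 1), random.gauss(0, 1)) / 2 ** 0.5

N = 64                              # number of passive reflecting elements
h_d = rayleigh()                    # direct BS-user channel
g = [rayleigh() for _ in range(N)]  # BS-IRS channels
h = [rayleigh() for _ in range(N)]  # IRS-user channels

# Co-phasing rule: choose each reflection phase so every reflected path
# adds constructively with the direct path.
theta = [cmath.phase(h_d) - cmath.phase(g[i] * h[i]) for i in range(N)]

def gain(phases):
    total = h_d + sum(g[i] * cmath.exp(1j * phases[i]) * h[i] for i in range(N))
    return abs(total) ** 2

random_phases = [random.uniform(0, 2 * cmath.pi) for _ in range(N)]
print(gain(theta) > gain(random_phases))  # coherent combining wins
```

With aligned phases the received power equals (|h_d| + Σ|g_i h_i|)², the maximum achievable by phase tuning alone.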

1,325 citations


Book
27 Aug 2021
TL;DR: In this paper, the authors present the development of wireless sensor networks, including the physical layer, the data link layer, medium access control, and the network layer, as well as some network design examples.
Abstract: Table of contents:
Introduction to Wireless Sensor Networks: Applications and Motivation; Network Performance Objectives; Contributions of this Book; Organization of this Book
The Development of Wireless Sensor Networks: Early Wireless Networks; Wireless Data Networks; Wireless Sensor and Related Networks; Conclusion
The Physical Layer: Some Physical Layer Examples; A Practical Physical Layer for Wireless Sensor Networks; Simulations and Results; Conclusion
The Data Link Layer: Medium Access Control Techniques; The Mediation Device; System Analysis and Simulation; Conclusion
The Network Layer: Some Network Design Examples; A Wireless Sensor Network Design Employing a Cluster Tree Architecture; Simulations Results; Conclusion
Practical Implementation Issues: The Partitioning Decision; Transducer Interfaces; Time Base Accuracy and Average Power Consumption; Conclusion
Power Management: Power Sources; Loads; Voltage Converters and Regulators; Power Management Strategy; Conclusion
Antennas and the Definition of RF Performance: Antennas; RF Performance Definition and Measurement; Conclusion
Electromagnetic Compatibility: EMC: The Problem; Examples of Self-Interference; The Physics Associated with EMC Problems; Principles of Proper Layout; The Layout Process; Detective/Corrective Techniques; Conclusion
Electrostatic Discharge: The Problem; Physical Properties of the Electrostatic Discharge; The Effects of ESD on Integrated Circuits; Modeling and Test Standards; Product Design to Minimize ESD Problems; Conclusion
Wireless Sensor Network Standards: The IEEE 802.15.4 Low-Rate WPAN Standard; The ZigBee Alliance; The IEEE 1451.5 Wireless Smart Transducer Interface Standard
Summary and Opportunities for Future Development: Summary; Opportunities for Future Development
Appendix A--Signal Processing Worksystem (SPW)
Appendix B--WinneuRFon: Motivation; System Requirements; Supported Features; Current Status and Achievement; Simulation Method and More Potential Functionalities; Proposal for Future Work; Summary
Appendix C--The MC13192: An Example Wireless Sensor Network Transceiver Integrated Circuit (IC)

590 citations


Journal ArticleDOI
TL;DR: An iterative algorithm is proposed where, at every step, closed-form solutions for time allocation, bandwidth allocation, power control, computation frequency, and learning accuracy are derived; it can reduce energy consumption by up to 59.5% compared to the conventional FL method.
Abstract: In this paper, the problem of energy efficient transmission and computation resource allocation for federated learning (FL) over wireless communication networks is investigated. In the considered model, each user exploits limited local computational resources to train a local FL model with its collected data and, then, sends the trained FL model to a base station (BS) which aggregates the local FL models and broadcasts the result back to all of the users. Since FL involves an exchange of a learning model between users and the BS, both computation and communication latencies are determined by the learning accuracy level. Meanwhile, due to the limited energy budget of the wireless users, both local computation energy and transmission energy must be considered during the FL process. This joint learning and communication problem is formulated as an optimization problem whose goal is to minimize the total energy consumption of the system under a latency constraint. To solve this problem, an iterative algorithm is proposed where, at every step, closed-form solutions for time allocation, bandwidth allocation, power control, computation frequency, and learning accuracy are derived. Since the iterative algorithm requires an initial feasible solution, we construct a completion-time minimization problem and propose a bisection-based algorithm to obtain its optimal solution, which is a feasible solution to the original energy minimization problem. Numerical results show that the proposed algorithms can reduce energy consumption by up to 59.5% compared to the conventional FL method.
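The bisection step the abstract mentions relies only on a monotone feasibility check; a minimal sketch with made-up per-device timing numbers (not the paper's model):

```python
def bisect_min_feasible(feasible, lo, hi, tol=1e-6):
    """Smallest t in [lo, hi] with feasible(t) True, assuming monotonicity:
    once feasible at t, feasible for all larger t (illustrative helper)."""
    assert feasible(hi), "upper bound must be feasible"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Toy stand-in for the completion-time check: a deadline T is feasible
# when every device can finish computation + transmission within T.
comp = [0.8, 1.1, 0.6]   # hypothetical per-device computation times (s)
tx   = [0.3, 0.2, 0.5]   # hypothetical per-device transmission times (s)
feasible = lambda T: all(c + t <= T for c, t in zip(comp, tx))

T_min = bisect_min_feasible(feasible, 0.0, 10.0)
print(round(T_min, 3))  # ≈ 1.3 (slowest device: 1.1 + 0.2)
```

The returned completion time then serves as a feasible starting point for the energy minimization, exactly the role bisection plays in the abstract.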

365 citations


Journal ArticleDOI
TL;DR: A comprehensive overview of the state-of-the-art on RISs, with focus on their operating principles, performance evaluation, beamforming design and resource management, applications of machine learning to RIS-enhanced wireless networks, as well as the integration of RISs with other emerging technologies.
Abstract: Reconfigurable intelligent surfaces (RISs), also known as intelligent reflecting surfaces (IRSs) or large intelligent surfaces (LISs), have received significant attention for their potential to enhance the capacity and coverage of wireless networks by smartly reconfiguring the wireless propagation environment. Therefore, RISs are considered a promising technology for the sixth-generation (6G) of communication networks. In this context, we provide a comprehensive overview of the state-of-the-art on RISs, with focus on their operating principles, performance evaluation, beamforming design and resource management, applications of machine learning to RIS-enhanced wireless networks, as well as the integration of RISs with other emerging technologies. We describe the basic principles of RISs both from physics and communications perspectives, based on which we present performance evaluation of multiantenna assisted RIS systems. In addition, we systematically survey existing designs for RIS-enhanced wireless networks encompassing performance analysis, information theory, and performance optimization perspectives. Furthermore, we survey existing research contributions that apply machine learning for tackling challenges in dynamic scenarios, such as random fluctuations of wireless channels and user mobility in RIS-enhanced wireless networks. Last but not least, we identify major issues and research opportunities associated with the integration of RISs and other emerging technologies for applications to next-generation networks. (Without loss of generality, the name RIS is used in the remainder of the paper.)

343 citations


Journal ArticleDOI
TL;DR: Experimental results show that the intelligent module provides energy-efficient, secured transmission with low computational time and a reduced bit error rate, a key requirement for the intelligent manufacturing of VSNs.
Abstract: Due to technological advancement, smart visual sensing must meet requirements for data-transfer capacity, energy efficiency, security, and computational efficiency. High-quality image transmission in visual sensor networks (VSNs) consumes considerable space and energy, incurs transmission delay, and may be exposed to various security threats. Image compression is a key phase of visual sensing systems that needs to be effective. This motivates us to propose a fast and efficient intelligent image-transmission module that achieves energy efficiency, minimum delay, and efficient bandwidth utilization. Compressive sensing (CS) is introduced to compress the image rapidly, reducing energy consumption and time while using bandwidth efficiently. However, CS alone cannot provide security against different kinds of threats. Several methods have been introduced over the last decade to address security challenges in the CS domain, but efficiency remains a key requirement for the intelligent manufacturing of VSNs. Furthermore, the random variables selected for CS suffer from degraded image-recovery quality due to the accumulation of noise. Addressing these challenges, this paper introduces a novel one-way image-transmission module for multiple-input multiple-output systems that provides secure and energy-efficient transmission with the CS model. Secure transmission in the CS domain is achieved using a security matrix, called the compressed secured matrix, with perfect reconstruction via random-matrix measurement in CS. Experimental results show that the intelligent module provides energy-efficient, secured transmission with low computational time as well as a reduced bit error rate.
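The acquisition-time compression that CS provides can be illustrated with a toy measurement y = Φx; the signal, sparsity level, and Gaussian sensing matrix below are illustrative, and reconstruction (which requires a sparse-recovery solver) is omitted:

```python
import random

random.seed(1)
N, M, K = 64, 16, 3          # signal length, measurements, sparsity

# K-sparse signal (illustrative stand-in for a compressible image block)
x = [0.0] * N
for idx in random.sample(range(N), K):
    x[idx] = random.uniform(1, 5)

# Random Gaussian sensing matrix: compression happens at acquisition time,
# so only M << N values need energy for storage and transmission.
phi = [[random.gauss(0, 1) / M ** 0.5 for _ in range(N)] for _ in range(M)]
y = [sum(phi[m][n] * x[n] for n in range(N)) for m in range(M)]

print(len(x), len(y))  # 64 16: a 4x reduction before any transmission
```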

262 citations


Journal ArticleDOI
TL;DR: A Corvus corone two-way image-transmission module is proposed that provides energy efficiency with the CS model, secure transmission through an inbuilt security matrix under CS (named the compressed secured matrix), and faultless reconstruction with random-matrix measurement under CS.
Abstract: The manufacture of intelligent and secure visual data transmission over wireless sensor networks is a key requirement for many applications today. For two-way transmission over a wireless channel, the image must be compatible with the channel characteristics such as bandwidth, energy efficiency, time consumption, and security, because an image occupies substantial storage and takes a long time to transmit, leaving it exposed to cipher attacks. Moreover, the secondary compression process that follows acquisition consumes additional time. To resolve these issues, compressive sensing (CS) has emerged, which compresses the image at sensing time in a speedy manner, reducing time consumption and saving bandwidth, but it fails to provide secure transmission. Several studies have attempted to resolve the security problems of CS by providing security as a secondary process. Addressing these issues, this paper proposes the Corvus corone two-way image-transmission module, which provides energy efficiency with the CS model, secure transmission through an inbuilt security matrix under CS (named the compressed secured matrix), and faultless reconstruction with random-matrix measurement under CS. Experimental results show that the intelligent module gives energy-efficient, secured transmission with lower computational time and a decreased bit error rate.

252 citations


Journal ArticleDOI
01 Jan 2021
TL;DR: The Butterfly Optimization Algorithm (BOA) is employed to choose an optimal cluster head from a group of nodes, and the outputs of the proposed methodology are compared with the traditional approaches LEACH and DEEC as well as several existing methods.
Abstract: Wireless Sensor Networks (WSNs) consist of a large number of spatially distributed sensor nodes connected through the wireless medium to monitor and record physical information from the environment. The nodes of a WSN are battery powered, so after a certain period they lose their entire energy. This energy constraint affects the lifetime of the network. The objective of this study is to minimize the overall energy consumption and to maximize the network lifetime. At present, clustering and routing algorithms are widely used in WSNs to enhance the network lifetime. In this study, the Butterfly Optimization Algorithm (BOA) is employed to choose an optimal cluster head from a group of nodes. The cluster head selection is optimized by the residual energy of the nodes, distance to the neighbors, distance to the base station, node degree, and node centrality. The route between the cluster head and the base station is identified using Ant Colony Optimization (ACO), which selects the optimal route based on distance, residual energy, and node degree. The performance of this proposed methodology is analyzed in terms of alive nodes, dead nodes, energy consumption, and data packets received by the BS. The outputs of the proposed methodology are compared with the traditional approaches LEACH and DEEC and with the existing methods FUCHAR, CRHS, BERA, CPSO, ALOC, and FLION. For example, the proposed methodology retains 200 alive nodes at 1500 iterations, which is higher than the CRHS and BERA methods.
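A weighted-fitness cluster head selection of the kind described can be sketched as follows; the weights, normalizations, and node values are hypothetical, and the BOA search is replaced by a direct argmax for brevity:

```python
# Hypothetical weighted fitness combining the five CH-selection factors
# from the abstract; the weights and node values are made up.
W = dict(energy=0.35, d_neigh=0.2, d_bs=0.2, degree=0.15, centrality=0.1)

nodes = {
    "n1": dict(energy=0.9, d_neigh=20, d_bs=80, degree=6, centrality=0.7),
    "n2": dict(energy=0.5, d_neigh=15, d_bs=60, degree=4, centrality=0.9),
    "n3": dict(energy=0.8, d_neigh=35, d_bs=120, degree=8, centrality=0.5),
}

def fitness(n):
    # Higher energy/degree/centrality is better; shorter distances are better.
    return (W["energy"] * n["energy"]
            + W["degree"] * n["degree"] / 10
            + W["centrality"] * n["centrality"]
            - W["d_neigh"] * n["d_neigh"] / 100
            - W["d_bs"] * n["d_bs"] / 200)

cluster_head = max(nodes, key=lambda k: fitness(nodes[k]))
print(cluster_head)  # → n1 under these made-up weights
```

In the paper, BOA explores candidate cluster heads instead of this exhaustive argmax, but the fitness it optimizes combines the same five factors.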

174 citations


Journal ArticleDOI
TL;DR: This survey presents a comprehensive analysis of the exploitation of network slicing in IoT realisation and discusses the role of other emerging technologies and concepts, such as blockchain and Artificial Intelligence/Machine Learning (AI/ML) in network slicing and IoT integration.
Abstract: Internet of Things (IoT) is an emerging technology that makes people’s lives smart by conquering a plethora of diverse application and service areas. In the near future, fifth-generation (5G) wireless networks will provide the connectivity for this IoT ecosystem. 5G has been carefully designed to facilitate the exponential growth of the IoT field. Network slicing is one of the key technologies in the 5G architecture that has the ability to divide the physical network into multiple logical networks (i.e., slices) with different network characteristics. Therefore, network slicing is also a key enabler of the realisation of IoT in 5G. Network slicing can satisfy the various networking demands of heterogeneous IoT applications via dedicated slices. In this survey, we present a comprehensive analysis of the exploitation of network slicing in IoT realisation. We discuss network slicing utilisation in different IoT application scenarios, along with the technical challenges that can be solved via network slicing. Furthermore, integration challenges and open research problems related to network slicing in IoT realisation are also discussed in this paper. Finally, we discuss the role of other emerging technologies and concepts, such as blockchain and Artificial Intelligence/Machine Learning (AI/ML), in network slicing and IoT integration.
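The slice-per-application idea can be captured in a small lookup; the slice characteristics and app-to-slice mapping below are illustrative only, not taken from any 3GPP specification:

```python
# Illustrative mapping of IoT application classes to 5G slice types and
# hypothetical per-slice characteristics (values are not from a standard).
SLICES = {
    "eMBB":  {"peak_rate_mbps": 1000, "latency_ms": 10,  "density_per_km2": 10_000},
    "URLLC": {"peak_rate_mbps": 50,   "latency_ms": 1,   "density_per_km2": 1_000},
    "mMTC":  {"peak_rate_mbps": 1,    "latency_ms": 100, "density_per_km2": 1_000_000},
}

APP_TO_SLICE = {
    "video_surveillance": "eMBB",
    "industrial_control": "URLLC",
    "smart_metering":     "mMTC",
}

def slice_for(app):
    """Resolve an IoT application class to its dedicated slice profile."""
    return SLICES[APP_TO_SLICE[app]]

print(slice_for("smart_metering")["latency_ms"])  # relaxed latency, huge density
```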

173 citations


Journal ArticleDOI
TL;DR: In this paper, the secure distributed set-membership filtering problem for general nonlinear systems over wireless sensor networks is investigated for the purpose of getting close to practical wireline networks.
Abstract: In this paper, the secure distributed set-membership filtering problem is investigated for general nonlinear system over wireless sensor networks. For the purpose of getting close to practical wire...

153 citations


Journal ArticleDOI
TL;DR: This article investigates the unmanned aerial vehicle (UAV)-assisted wireless powered Internet-of-Things system, where a UAV takes off from a data center, flies to each of the ground sensor nodes (SNs) in order to transfer energy and collect data from the SNs, and then returns to the data center.
Abstract: This article investigates the unmanned aerial vehicle (UAV)-assisted wireless powered Internet-of-Things system, where a UAV takes off from a data center, flies to each of the ground sensor nodes (SNs) in order to transfer energy and collect data from the SNs, and then returns to the data center. For such a system, an optimization problem is formulated to minimize the average Age of Information (AoI) of the data collected from all ground SNs. Since the average AoI depends on the UAV’s trajectory and on the time required for energy harvesting (EH) and data collection at each SN, these factors need to be optimized jointly. Moreover, instead of the traditional linear EH model, we employ a nonlinear model because the behavior of the EH circuits is nonlinear by nature. To solve this nonconvex problem, we propose to decompose it into two subproblems, i.e., a joint energy transfer and data collection time allocation problem and a UAV trajectory planning problem. For the first subproblem, we prove that it is convex and give an optimal solution by using Karush–Kuhn–Tucker (KKT) conditions. This solution is used as the input for the second subproblem, which we solve by designing dynamic programming (DP) and ant colony (AC) heuristic algorithms. The simulation results show that the DP-based algorithm obtains the minimal average AoI of the system, and the AC-based heuristic finds solutions with near-optimal average AoI. The results also reveal that the average AoI increases as the flying altitude of the UAV increases, and linearly with the size of the collected data at each ground SN.
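The DP over visiting orders is Held-Karp-style; the sketch below minimizes tour length as a proxy objective (the paper optimizes average AoI jointly with EH/collection times), with made-up SN positions:

```python
from itertools import combinations
import math

# Hypothetical data-center and sensor-node positions (metres).
depot = (0, 0)
sns = [(100, 50), (40, 120), (150, 150), (80, 30)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Held-Karp DP over visiting orders: cost[(S, i)] = shortest path from the
# depot visiting subset S of SNs and ending at SN i. Tour length is only a
# proxy here; the paper's objective is the average AoI of collected data.
n = len(sns)
cost = {(frozenset([i]), i): dist(depot, sns[i]) for i in range(n)}
for size in range(2, n + 1):
    for S in map(frozenset, combinations(range(n), size)):
        for i in S:
            cost[(S, i)] = min(cost[(S - {i}, j)] + dist(sns[j], sns[i])
                               for j in S - {i})

full = frozenset(range(n))
best = min(cost[(full, i)] + dist(sns[i], depot) for i in range(n))
print(round(best, 1))  # length of the shortest visiting tour
```

The exponential state space limits this exact DP to small numbers of SNs, which is why the paper pairs it with an ant-colony heuristic for larger instances.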

138 citations


Journal ArticleDOI
TL;DR: This work designs novel scheduling and resource allocation policies that decide on the subset of the devices to transmit at each round, and how the resources should be allocated among the participating devices, not only based on their channel conditions, but also on the significance of their local model updates.
Abstract: We study federated learning (FL) at the wireless edge, where power-limited devices with local datasets collaboratively train a joint model with the help of a remote parameter server (PS). We assume that the devices are connected to the PS through a bandwidth-limited shared wireless channel. At each iteration of FL, a subset of the devices are scheduled to transmit their local model updates to the PS over orthogonal channel resources, while each participating device must compress its model update to accommodate its link capacity. We design novel scheduling and resource allocation policies that decide on the subset of the devices to transmit at each round, and how the resources should be allocated among the participating devices, not only based on their channel conditions, but also on the significance of their local model updates. We then establish convergence of a wireless FL algorithm with device scheduling, where devices have limited capacity to convey their messages. The results of numerical experiments show that the proposed scheduling policy, based on both the channel conditions and the significance of the local model updates, provides a better long-term performance than scheduling policies based only on either of the two metrics individually. Furthermore, we observe that when the data is independent and identically distributed (i.i.d.) across devices, selecting a single device at each round provides the best performance, while when the data distribution is non-i.i.d., scheduling multiple devices at each round improves the performance. This observation is verified by the convergence result, which shows that the number of scheduled devices should increase for a less diverse and more biased data distribution.
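A scheduling policy that blends channel conditions with update significance can be sketched as a scored top-k selection; the metrics, normalizations, and the weight alpha are all made up for illustration:

```python
# Hypothetical per-device metrics: channel quality (SNR) and the norm of
# the local model update (a proxy for its significance).
devices = {
    "d1": {"snr_db": 18.0, "update_norm": 0.9},
    "d2": {"snr_db": 25.0, "update_norm": 0.2},
    "d3": {"snr_db": 10.0, "update_norm": 1.5},
    "d4": {"snr_db": 22.0, "update_norm": 1.1},
}

def score(d, alpha=0.5):
    # Blend of channel condition and update significance; alpha is made up.
    return alpha * d["snr_db"] / 30 + (1 - alpha) * d["update_norm"] / 2

def schedule(devices, k):
    """Pick the k devices to transmit this round (one simple policy sketch)."""
    return sorted(devices, key=lambda name: score(devices[name]), reverse=True)[:k]

print(schedule(devices, 2))  # ['d4', 'd3']: good channel AND significant update
```

Scheduling only by SNR would pick d2 despite its tiny update; scheduling only by update norm would pick d3 despite its weak channel. The blended score trades the two off, which is the abstract's central point.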

Journal ArticleDOI
TL;DR: The proposed work makes use of a hybrid metaheuristic algorithm, namely, the Whale Optimization Algorithm (WOA) with Simulated Annealing (SA), and is compared with several state-of-the-art optimization algorithms such as the Artificial Bee Colony algorithm, Genetic Algorithm, Adaptive Gravitational Search algorithm, and WOA.
Abstract: Recently, the Internet of Things (IoT) has been used in several fields such as smart cities, agriculture, weather forecasting, smart grids, and waste management. Even though IoT has huge potential in several applications, there are some areas for improvement. In the current work, we have concentrated on minimizing the energy consumption of sensors in the IoT network, which will lead to an increase in the network lifetime. In this work, to optimize the energy consumption, the most appropriate Cluster Head (CH) is chosen in the IoT network. The proposed work makes use of a hybrid metaheuristic algorithm, namely, the Whale Optimization Algorithm (WOA) with Simulated Annealing (SA). To select the optimal CH in the clusters of the IoT network, several performance metrics such as the number of alive nodes, load, temperature, residual energy, and cost function have been used. The proposed approach is then compared with several state-of-the-art optimization algorithms such as the Artificial Bee Colony algorithm, the Genetic Algorithm, the Adaptive Gravitational Search algorithm, and WOA. The results prove the superiority of the proposed hybrid approach over existing approaches.
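The SA component of such a hybrid reduces to the Metropolis acceptance rule plus a cooling schedule; the 1-D toy cost below stands in for the CH-selection cost function, and the random candidate move stands in for the whale-position update:

```python
import math
import random

random.seed(3)

def sa_accept(current_cost, candidate_cost, temperature):
    """Metropolis acceptance rule used by the SA refinement stage:
    always keep improvements, sometimes keep worse solutions early on."""
    if candidate_cost <= current_cost:
        return True
    return random.random() < math.exp((current_cost - candidate_cost) / temperature)

# Toy 1-D cost landscape standing in for the CH-selection cost function.
cost = lambda x: (x - 2) ** 2 + 0.5 * math.sin(5 * x)

x, T = 10.0, 5.0
for _ in range(2000):
    cand = x + random.uniform(-0.5, 0.5)   # in WOA-SA this comes from a whale move
    if sa_accept(cost(x), cost(cand), T):
        x = cand
    T = max(T * 0.995, 1e-3)               # geometric cooling schedule

print(round(x, 1))  # best solution found by the annealed walk
```

Early on, high temperature lets the search escape local minima of the ripple term; as T cools, acceptance of worse moves vanishes and the walk settles near the global minimum.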

Journal ArticleDOI
TL;DR: This article performs a comprehensive review of the TL algorithms used in different wireless communication fields, such as base stations/access points switching, indoor wireless localization and intrusion detection in wireless networks, etc.
Abstract: In the coming 6G communications, network densification, high throughput, positioning accuracy, energy efficiency, and many other key performance indicator requirements are becoming increasingly strict. In the future, improving work efficiency while saving costs is one of the foremost research directions in wireless communications, and being able to learn from experience is an important way to approach this vision. Transfer learning (TL) encourages new tasks/domains to learn from experienced tasks/domains, helping new tasks become faster and more efficient. TL can help save energy and improve efficiency by exploiting the correlation and similarity information between different tasks in many fields of wireless communications. Therefore, applying TL to future 6G communications is a very valuable topic, and TL has already achieved some good results in wireless communications. To advance the development of TL applied in 6G communications, this article performs a comprehensive review of the TL algorithms used in different wireless communication fields, such as base station/access point switching, indoor wireless localization, and intrusion detection in wireless networks. Moreover, the future research directions of the mutual relationship between TL and 6G communications are discussed in detail, and challenges and open issues in integrating TL into 6G are proposed at the end. This article is intended to help readers understand the past, present, and future of TL and wireless communications.

Journal ArticleDOI
TL;DR: A comprehensive overview of the latest research efforts on integrating UAVs into cellular networks, with an emphasis on how to exploit advanced techniques to meet the diversified service requirements of next-generation wireless systems is provided.
Abstract: Due to the advancements in cellular technologies and the dense deployment of cellular infrastructure, integrating unmanned aerial vehicles (UAVs) into the fifth-generation (5G) and beyond cellular networks is a promising solution to achieve safe UAV operation as well as enabling diversified applications with mission-specific payload data delivery. In particular, 5G networks need to support three typical usage scenarios, namely, enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications (mMTC). On the one hand, UAVs can be leveraged as cost-effective aerial platforms to provide ground users with enhanced communication services by exploiting their high cruising altitude and controllable maneuverability in three-dimensional (3D) space. On the other hand, providing such communication services simultaneously for both UAV and ground users poses new challenges due to the need for ubiquitous 3D signal coverage as well as the strong air-ground network interference. Besides the requirement of high-performance wireless communications, the ability to support effective and efficient sensing as well as network intelligence is also essential for 5G-and-beyond 3D heterogeneous wireless networks with coexisting aerial and ground users. In this paper, we provide a comprehensive overview of the latest research efforts on integrating UAVs into cellular networks, with an emphasis on how to exploit advanced techniques (e.g., intelligent reflecting surface, short packet transmission, energy harvesting, joint communication and radar sensing, and edge intelligence) to meet the diversified service requirements of next-generation wireless systems. Moreover, we highlight important directions for further investigation in future work.

Journal ArticleDOI
TL;DR: The dynamic and energy‐efficient clustering for energy hole mitigation (DECEM) is proposed and the simulation experiments reveal that DECEM has enhanced stability period by 5% and 31% as compared to the MEEC and IDHR protocols, respectively.

Journal ArticleDOI
TL;DR: The results showed that the proposed GWOSVM-IDS with seven wolves outperforms the other proposed and comparative algorithms.
Abstract: Intrusion in wireless sensor networks (WSNs) aims to degrade or even eliminate the capability of these networks to provide their functions. In this paper, an enhanced intrusion detection system (IDS) is proposed using a modified binary grey wolf optimizer with support vector machine (GWOSVM-IDS). The GWOSVM-IDS was run with 3 wolves, 5 wolves, and 7 wolves to find the best number of wolves. The proposed method aims to increase intrusion detection accuracy and detection rate and to reduce processing time in the WSN environment by decreasing the false alarm rate and the number of features produced by IDSs in the WSN environment. The NSL-KDD'99 dataset is used to demonstrate the performance of the proposed method and compare it with other existing methods. The proposed methods are evaluated in terms of accuracy, number of features, execution time, false alarm rate, and detection rate. The results showed that the proposed GWOSVM-IDS with seven wolves outperforms the other proposed and comparative algorithms.
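A binary grey wolf step of the kind used for feature selection can be sketched on a toy fitness; the fitness function below stands in for SVM accuracy on NSL-KDD, and the simplified update tracks only the alpha wolf rather than the full alpha/beta/delta hierarchy:

```python
import math
import random

random.seed(7)

# Toy fitness standing in for SVM detection accuracy: features 0-4 are
# "informative", the rest only add cost (the real paper scores an SVM
# on NSL-KDD features instead).
N_FEATURES = 12
def fitness(mask):
    gain = sum(mask[:5])              # reward informative features
    cost = 0.2 * sum(mask[5:])        # penalise useless ones
    return gain - cost

def sigmoid(x):
    return 1 / (1 + math.exp(-10 * (x - 0.5)))

wolves = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(7)]
for t in range(50):
    a = 2 - 2 * t / 50                           # exploration factor decays
    alpha = max(wolves, key=fitness)             # best wolf leads the pack
    for w in wolves:
        if w is alpha:
            continue
        for j in range(N_FEATURES):
            # Pull each bit toward the alpha's bit, then re-binarise.
            A = a * (2 * random.random() - 1)
            d = abs(2 * random.random() * alpha[j] - w[j])
            pos = alpha[j] - A * d
            w[j] = 1 if random.random() < sigmoid(pos) else 0

best = max(wolves, key=fitness)
print(fitness(best), sum(best))
```

Because the alpha is never perturbed during its own iteration, the best fitness in the pack is non-decreasing, mirroring the elitism that makes GWO-style searches converge.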

Journal ArticleDOI
TL;DR: The proposed ODSD framework has exceptional benefits for real-time applications while maintaining the security of the dynamic storage of data.
Abstract: The integration of Industry 4.0 IoT networks with blockchain architecture provides a decentralized, distributed ledger mechanism for recording multi-user transactions. Blockchain requires a data storage system designed to be secure, reliable, and fully transparent, and has emerged as a preferred IoT-based digital storage option for WSNs. In this paper, blockchain technology is used to construct a node-recognition system based on data storage for WSNs. The data storage process must be secure and traceable for forensics and decision making. The primary aims of dynamic data security are therefore to reject exploitation by unauthorized users and to evaluate mechanisms for tracing and evidencing the system's data operations under the stochastic state of the model. The contributions are as follows: (1) a mathematical method for secure dynamic data storage is built through distributed node cooperation in the IoT industry; (2) the ownership-transition feature and the dynamic data storage system architecture are configured; (3) the proposed distributed storage architecture for blockchain-based WSNs substantially reduces the storage overhead of each node without affecting data integrity; and (4) the latency of data reconstruction in the distributed storage system is minimized, with an effective and scalable algorithm proposed for optimizing the storage-latency issue. In addition, the system implements verified proof of data possession, replacing the proof used in digital-currency mining, to store new data blocks, which dramatically reduces the required computational capacity. The proposed ODSD framework offers exceptional benefits for real-time applications while maintaining the security of dynamic data storage.

Journal ArticleDOI
TL;DR: A comprehensive survey on Intrusion Detection System (IDS) for IoT is presented and various IDS placement strategies and IDS analysis strategies in IoT architecture are discussed, along with Machine Learning (ML) and Deep Learning techniques for detecting attacks in IoT networks.
Abstract: Internet of Things (IoT) is a widely accepted technology in both the industrial and academic fields. The objective of IoT is to combine the physical environment with the cyber world and create one big intelligent network. This technology has been applied to various application domains such as smart homes, smart cities, healthcare applications, wireless sensor networks, cloud environments, enterprise networks, web applications, and smart grid technologies. These emerging applications in a variety of domains raise many security issues such as protecting devices and networks, attacks in IoT networks, and managing resource-constrained IoT networks. To address the scalability and resource-constrained security issues, many security solutions have been proposed for IoT, such as web application firewalls and intrusion detection systems. In this paper, a comprehensive survey on Intrusion Detection Systems (IDS) for IoT is presented, covering the years 2015–2019. We discuss various IDS placement strategies and IDS analysis strategies in the IoT architecture. The paper discusses various intrusions in IoT, along with Machine Learning (ML) and Deep Learning (DL) techniques for detecting attacks in IoT networks. The paper also discusses security issues and challenges in IoT.

Journal ArticleDOI
TL;DR: A general machine-learning-based architecture for sensor validation built upon a series of neural-network estimators and a classifier is proposed, which aims at detecting anomalies in measurements from sensors, identifying the faulty ones and accommodating them with appropriate estimated data, thus paving the way to reliable digital twins.
Abstract: Sensor technologies empower Industry 4.0 by enabling integration of in-field and real-time raw data into digital twins. However, sensors might be unreliable due to inherent issues and/or environmental conditions. This article aims at detecting anomalies in measurements from sensors, identifying the faulty ones and accommodating them with appropriate estimated data, thus paving the way to reliable digital twins. More specifically, we propose a general machine-learning-based architecture for sensor validation built upon a series of neural-network estimators and a classifier. Estimators correspond to virtual sensors of all unreliable sensors (to reconstruct normal behaviour and replace the isolated faulty sensor within the system), whereas the classifier is used for detection and isolation tasks. A comprehensive statistical analysis on three different real-world data-sets is conducted and the performance of the proposed architecture validated under hard and soft synthetically-generated faults.
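The estimator-plus-classifier pattern can be reduced to a residual test; the linear "virtual sensor" and the threshold below are placeholders for the paper's neural-network estimators and classifier:

```python
# Minimal residual-check sketch: a "virtual sensor" predicts what a sensor
# should read from correlated neighbours; a large residual flags a fault.
# The linear predictor stands in for the paper's neural-network estimators.

def virtual_sensor(neighbours):
    # Hypothetical physical relation: the monitored value is roughly the
    # mean of two correlated neighbouring sensors.
    return sum(neighbours) / len(neighbours)

def validate(reading, neighbours, threshold=2.0):
    estimate = virtual_sensor(neighbours)
    residual = abs(reading - estimate)
    if residual > threshold:
        # Fault detected: accommodate by substituting the estimate,
        # keeping the digital twin fed with plausible data.
        return estimate, "faulty: replaced by virtual sensor"
    return reading, "ok"

print(validate(21.3, [21.0, 21.4]))   # healthy reading passes through
print(validate(55.0, [21.0, 21.4]))   # stuck/spiked sensor accommodated
```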

Journal ArticleDOI
TL;DR: This paper reviews the literature with specific attention to aspects of wireless networking for the preservation of energy and aggregation of data in IoT-WSN systems.

Book ChapterDOI
01 Jan 2021
TL;DR: A review and taxonomy of some of the significant forest fire detection techniques encountered in the literature so far is reported, and a comprehensive tabular study of state-of-the-art techniques is given, which will help in the appropriate selection of methods for real-time forest fire detection.
Abstract: Recently reported technological growth in wireless sensor network (WSN) has extended its application in various disastrous applications. One of the most concerned issues is the forest fires occurring across the globe. Every year thousands of hectares of forest are burnt in the forest fires occurring due to one or the other reasons. Although numerous attempts have been made for the detection of forest fires at the earliest, there is still scope for the utilization of optimum technique for the same. This paper aims to report a review of taxonomy of some of the significant forest fire detection techniques encountered in the literature so far. Moreover, scenario of the forest fires prevailing in India is also discussed. In this paper, the comprehensive tabular study of the state-of-art techniques is given which will help in the appropriate selection of methods to be employed for the real-time detection of forest fire.

Journal ArticleDOI
TL;DR: In this paper, an improved identity-based encryption algorithm (IIBE) is proposed, which can effectively simplify the key generation process, reduce the network traffic, and improve the network security.
Abstract: Wireless sensor networks (WSNs) have problems such as limited power, weak computing power, poor communication ability, and vulnerability to attack. However, the existing encryption methods cannot effectively solve these problems when applied to WSNs. To this end, according to WSN characteristics and based on the identity-based encryption idea, an improved identity-based encryption algorithm (IIBE) is proposed, which can effectively simplify the key generation process, reduce network traffic, and improve network security. The design idea of this algorithm lies between traditional public key encryption and identity-based public key encryption. Compared with traditional public key encryption, the algorithm does not need a public key certificate and avoids certificate management. Compared with identity-based public key encryption, the algorithm addresses the key escrow and key revocation problems. The results of actual network distribution experiments demonstrate that IIBE has low energy consumption and high security, making it suitable for application in WSNs with high security requirements.
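The identity-based workflow the abstract builds on (Setup/Extract/Encrypt/Decrypt, with no certificate needed because the identity string acts as the public key) can be sketched as a toy. This is emphatically NOT the IIBE scheme, nor secure cryptography: the XOR-with-hash "cipher" and the PKG master secret below are illustrative placeholders only.

```python
import hashlib

# Toy illustration of the IBE workflow. INSECURE by design: a real scheme
# uses pairing-based math, and the sender derives a *public* value from the
# identity rather than calling the key generation centre (PKG).
MASTER_SECRET = b"pkg-master-secret"  # held only by the PKG (hypothetical)

def extract(identity: str) -> bytes:
    """PKG derives a node's private key from its identity string (Extract)."""
    return hashlib.sha256(MASTER_SECRET + identity.encode()).digest()

def _keystream(key: bytes, length: int) -> bytes:
    return (key * (length // len(key) + 1))[:length]

def encrypt(identity: str, plaintext: bytes) -> bytes:
    """Sender needs only the receiver's identity, no certificate (Encrypt).
    (Toy shortcut: we reuse extract(); real IBE never gives the sender
    the private key.)"""
    stream = _keystream(extract(identity), len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, stream))

def decrypt(private_key: bytes, ciphertext: bytes) -> bytes:
    stream = _keystream(private_key, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, stream))

ct = encrypt("node-17@wsn", b"temperature=21.4")
pt = decrypt(extract("node-17@wsn"), ct)
```

The point the abstract makes survives the toy: the sensor's identity replaces the public key certificate, which removes certificate management, at the cost of the key escrow and revocation issues that IIBE then targets.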

Journal ArticleDOI
22 Sep 2021-Energies
TL;DR: This paper presents a methodology of an energy-efficient clustering algorithm for collecting and transmitting data based on the Optimized Low-Energy Adaptive Clustering Hierarchy (LEACH) protocol, and the network’s lifetime is enhanced as it also maximizes the residual energy of nodes.
Abstract: A Flying Ad-hoc Network (FANET) comprises many sensor nodes with limited processing speed and storage capacity, as each is a small battery-driven device with a limited amount of energy. One of the primary roles of a sensor node is to store and transmit the collected information to the base station (BS). Thus, the life span of the network is the main criterion for the efficient design of a FANET, as sensor nodes always have limited resources. In this paper, we present a methodology for an energy-efficient clustering algorithm for collecting and transmitting data based on an optimized Low-Energy Adaptive Clustering Hierarchy (LEACH) protocol. The selection of the cluster head (CH) is grounded in a new optimized threshold function. In contrast, standard LEACH is a hierarchical routing protocol that randomly selects cluster-head nodes in rounds, which results in an increased cluster-head count and also causes more rapid power consumption. We circumvent these limitations by improving the LEACH protocol. Our proposed algorithm diminishes the energy used for data transmission in the routing protocol, and the network's lifetime is enhanced as the algorithm also maximizes the residual energy of nodes. Experimental results obtained in MATLAB show better performance than the existing LEACH and Centralized Low-Energy Adaptive Clustering Hierarchy (LEACH-C) protocols in terms of energy efficiency per node and packet delivery ratio, with less energy utilization. In addition, the First Node Death (FND) is also improved when compared to the LEACH and LEACH-C protocols.
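The threshold mechanism being optimized here is the classic LEACH cluster-head election: each round, nodes that have not yet served as CH in the current epoch draw a uniform random number and become CH if it falls below T(n) = p / (1 - p·(r mod 1/p)). A minimal sketch of that baseline (the paper's optimized threshold function is not reproduced):

```python
import random

def leach_threshold(p: float, r: int) -> float:
    """Classic LEACH threshold T(n) for round r, with desired CH fraction p.
    Grows toward 1 as the epoch of 1/p rounds progresses, so every node
    eventually serves as cluster head once per epoch."""
    return p / (1 - p * (r % int(1 / p)))

def elect_cluster_heads(nodes, p, r, rng=random.random):
    """Eligible nodes draw a uniform number and become CH if it is below T(n)."""
    t = leach_threshold(p, r)
    heads = []
    for node in nodes:
        if not node["was_ch"] and rng() < t:
            node["was_ch"] = True
            heads.append(node["id"])
    if r % int(1 / p) == int(1 / p) - 1:  # epoch ends: all nodes eligible again
        for node in nodes:
            node["was_ch"] = False
    return heads

demo_nodes = [{"id": i, "was_ch": False} for i in range(5)]
demo_heads = elect_cluster_heads(demo_nodes, p=0.2, r=0)
```

Because the random draw ignores residual energy, classic LEACH can pick nearly-drained nodes as CH; threshold functions like the paper's fold energy terms into T(n) to avoid exactly that.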

Journal ArticleDOI
TL;DR: Compared to state-of-art IoT-based farming methods, the CL-IoT reduces energy consumption, communication overhead, and end-to-end delay up to a certain extent and maximizes the network throughput.
Abstract: The Internet of Things (IoT) for intelligent manufacturing and smart farming has gained significant attention from researchers aiming to automate various farming applications, an area called Smart Farming (SF). Sensors and actuators are deployed across the farm, through which farmers receive periodic farm information related to temperature, soil moisture, light intensity, water usage, etc. Clustering-based methods are proven energy-efficient solutions for Wireless Sensor Networks (WSNs). However, considering the long-distance communications and scalable networks of IoT-enabled SF, present clustering solutions are not feasible and incur higher delay and latency for various SF applications. To address the requirements of SF applications, an efficient and scalable protocol for remote monitoring and decision making on farms in rural regions, called the CL-IoT protocol, is proposed. Cross-layer-based clustering and routing algorithms have been designed to reduce network communication delay, latency, and energy consumption. A cross-layer-based optimal Cluster Head (CH) selection solution is proposed to overcome the energy asymmetry problem in WSNs. Parameters from different layers of each sensor, such as the physical, medium access control (MAC), and network layers, are used to evaluate and select the optimal CH and to enable efficient data transmission. A nature-inspired algorithm is proposed with a novel probabilistic decision rule that functions as a fitness function to discover the optimal route for data transmission. The performance of the CL-IoT protocol is analyzed using NS2 by considering energy-efficiency, computational-efficiency, and QoS-efficiency factors. Compared to state-of-the-art IoT-based farming methods, CL-IoT reduces energy consumption, communication overhead, and end-to-end delay to a certain extent and maximizes network throughput.
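The cross-layer CH selection idea, one parameter per protocol layer combined into a single score, can be sketched as follows. The weights, parameters, and normalization below are hypothetical illustrations; the abstract does not give CL-IoT's actual fitness function.

```python
# Hypothetical weighted fitness for cross-layer cluster-head selection.
# Each term stands in for one layer: residual energy (physical), queue
# occupancy (MAC), and distance to the sink (network). Weights are guesses.
def ch_fitness(residual_energy, dist_to_sink, queue_occupancy,
               w_e=0.5, w_d=0.3, w_q=0.2, e_max=1.0, d_max=100.0):
    """Higher is better: prefer high energy, a short path to the sink,
    and a lightly loaded MAC queue."""
    return (w_e * residual_energy / e_max
            + w_d * (1 - dist_to_sink / d_max)
            + w_q * (1 - queue_occupancy))

def select_ch(nodes):
    """Pick the node with the best cross-layer score as cluster head."""
    return max(nodes, key=lambda n: ch_fitness(**n["params"]))["id"]

nodes = [
    {"id": "A", "params": dict(residual_energy=0.9, dist_to_sink=40, queue_occupancy=0.2)},
    {"id": "B", "params": dict(residual_energy=0.4, dist_to_sink=10, queue_occupancy=0.1)},
]
best = select_ch(nodes)  # node A wins on energy despite the longer sink distance
```

Folding energy into the score is what addresses the "energy asymmetry" the abstract mentions: without it, the same well-placed but drained node keeps winning the CH role.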

Journal ArticleDOI
TL;DR: This paper introduces an advanced approach for CH selection using a modified Rider Optimization Algorithm (ROA) based on the intra-distance and inter-distance between the CH and nodes in IoT and wireless sensor networks.

Journal ArticleDOI
TL;DR: In this article, the authors presented an optimized energy-efficient and secure Blockchain-based software-defined IoT framework for smart networks, which ensures efficient cluster-head selection and secure network communication via the identification and isolation of rogue switches.
Abstract: Software-Defined Networking (SDN) and Blockchain are leading technologies used worldwide to establish safe network communication and to build secure network infrastructures. They provide a robust and reliable platform to address threats and face challenges such as security, privacy, flexibility, scalability, and confidentiality. Driven by these assumptions, this paper presents an optimized energy-efficient and secure Blockchain-based software-defined IoT framework for smart networks. Indeed, SDN and Blockchain technologies have proven able to suitably manage resource utilization and to develop secure network communication across the IoT ecosystem. However, there is a lack of research works that present a comprehensive definition of such a framework that can meet the requirements of the IoT ecosystem (i.e., efficient energy utilization and reduced end-to-end delay). Therefore, in this research, we present a layered hierarchical architecture for the deployment of a distributed yet efficient Blockchain-enabled SDN-IoT framework that ensures efficient cluster-head selection and secure network communication via the identification and isolation of rogue switches. Besides, the Blockchain-enabled flow-rules record keeps track of the rules enforced in the switches and maintains consistency within the controller cluster. Finally, we assess the performance of the proposed framework in a simulation environment and show that it can achieve optimized energy utilization, end-to-end delay, and throughput compared to the considered baselines, thus achieving efficiency and security in the smart network.

Journal ArticleDOI
TL;DR: This article proposes to leverage the intelligent reflecting surface (IRS) that is capable of dynamically reconfiguring the propagation environment to drastically enhance the efficiency of both downlink EB and uplink AirComp in IoT networks and demonstrates the performance gains of the proposed algorithm over the baseline methods.
Abstract: Fast wireless data aggregation and efficient battery recharging are two critical design challenges of Internet-of-Things (IoT) networks. Over-the-air computation (AirComp) and energy beamforming (EB) turn out to be two promising techniques that can address these two challenges, necessitating the design of wireless-powered AirComp. However, due to severe channel propagation, the energy harvested by IoT devices may not be sufficient to support AirComp. In this article, we propose to leverage the intelligent reflecting surface (IRS) that is capable of dynamically reconfiguring the propagation environment to drastically enhance the efficiency of both downlink EB and uplink AirComp in IoT networks. Due to the coupled problems of downlink EB and uplink AirComp, we further propose the joint design of energy and aggregation beamformers at the access point, downlink/uplink phase-shift matrices at the IRS, and transmit power at the IoT devices, to minimize the mean-squared error (MSE), which quantifies the AirComp distortion. However, the formulated problem is a highly intractable nonconvex quadratic programming problem. To solve this problem, we first obtain the closed-form expressions of the energy beamformer and the device transmit power, and then develop an alternating optimization framework based on difference-of-convex programming to design the aggregation beamformers and IRS phase-shift matrices. Simulation results demonstrate the performance gains of the proposed algorithm over the baseline methods and show that deploying an IRS can significantly reduce the MSE of AirComp.
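The over-the-air superposition that AirComp relies on can be sketched in a few lines of NumPy. The channels are synthetic, the precoder is simple channel inversion, and the IRS phases are set by a greedy heuristic; the paper's alternating-optimization/DC-programming design and the energy-beamforming side are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 4, 16  # IoT devices, IRS reflecting elements (hypothetical sizes)

# Synthetic channels: direct device->AP, device->IRS, and IRS->AP links.
h_d = rng.normal(size=K) + 1j * rng.normal(size=K)
r = rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))
g = rng.normal(size=N) + 1j * rng.normal(size=N)

def effective_channels(theta):
    """h_k = h_d,k + g^T diag(e^{j*theta}) r_k: direct path plus IRS reflection."""
    return h_d + (g * np.exp(1j * theta)) @ r

# Greedy heuristic: align every IRS reflection with device 0's direct path.
theta = -np.angle(g * r[:, 0] / h_d[0])

x = rng.normal(size=K)        # local values the AP wants to sum over the air
h = effective_channels(theta)
tx = x / h                    # channel-inversion precoding (idealized, no power cap)
noise = 0.01 * (rng.normal() + 1j * rng.normal())
y = h @ tx + noise            # signals superimpose in the multiple-access channel
estimate = y.real             # approximates sum(x) in one channel use
```

The AP receives the sum of all K values in a single transmission rather than K separate uplink slots; the MSE the paper minimizes measures how far `estimate` strays from the true sum once realistic power limits and channel errors are included.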

Journal ArticleDOI
TL;DR: This article conducts a survey of this field based on the objectives of clustering, such as reducing energy consumption and load balancing, as well as the network properties relevant for efficient clustering in IoT, such as network heterogeneity and mobility.
Abstract: Many Internet of Things (IoT) networks are created as an overlay over traditional ad-hoc networks such as Zigbee. Moreover, IoT networks can resemble ad-hoc networks over networks that support device-to-device (D2D) communication, e.g., D2D-enabled cellular networks and WiFi-Direct. In these ad-hoc types of IoT networks, efficient topology management is a crucial requirement, and in particular in massive scale deployments. Traditionally, clustering has been recognized as a common approach for topology management in ad-hoc networks, e.g., in Wireless Sensor Networks (WSNs). Topology management in WSNs and ad-hoc IoT networks has many design commonalities as both need to transfer data to the destination hop by hop. Thus, WSN clustering techniques can presumably be applied for topology management in ad-hoc IoT networks. This requires a comprehensive study on WSN clustering techniques and investigating their applicability to ad-hoc IoT networks. In this article, we conduct a survey of this field based on the objectives for clustering, such as reducing energy consumption and load balancing, as well as the network properties relevant for efficient clustering in IoT, such as network heterogeneity and mobility. Beyond that, we investigate the advantages and challenges of clustering when IoT is integrated with modern computing and communication technologies such as Blockchain, Fog/Edge computing, and 5G. This survey provides useful insights into research on IoT clustering, allows broader understanding of its design challenges for IoT networks, and sheds light on its future applications in modern technologies integrated with IoT.

Journal ArticleDOI
TL;DR: This study investigates the feasibility of using edge computing for smart parking surveillance tasks, specifically parking occupancy detection using a real-time video feed; the final detection method achieves over 95% accuracy in real-world scenarios with high efficiency and reliability.
Abstract: Cloud computing has been a mainstream computing service for years. Recently, with rapid urbanization, massive video surveillance data are produced at an unprecedented speed. A traditional solution to this big data problem would require a large amount of computing and storage resources. With the advances in the Internet of Things (IoT), artificial intelligence, and communication technologies, edge computing offers a new solution by processing all or part of the data locally at the edge of a surveillance system. In this study, we investigate the feasibility of using edge computing for smart parking surveillance tasks, specifically parking occupancy detection using a real-time video feed. The system processing pipeline is carefully designed with consideration of flexibility, online surveillance, data transmission, detection accuracy, and system reliability. It enables artificial intelligence at the edge by implementing an enhanced Single Shot multibox Detector (SSD). A few more algorithms are developed, either locally at the edge of the system or on the centralized data server, targeting optimal system efficiency and accuracy. Thorough field tests were conducted in the Angle Lake parking garage for three months. The experimental results are promising: the final detection method achieves over 95% accuracy in real-world scenarios with high efficiency and reliability. The proposed smart parking surveillance system is a critical component of smart cities and can be a solid foundation for future applications in intelligent transportation systems.
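Downstream of the SSD detector, occupancy can be decided by overlapping detected vehicle boxes with known stall regions. The abstract does not specify this step, so the IoU-threshold logic below is a plausible sketch, not the paper's algorithm, and the coordinates and threshold are invented.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def occupancy(spots, detections, threshold=0.3):
    """Mark a stall occupied if any detected vehicle overlaps it enough."""
    return [any(iou(s, d) >= threshold for d in detections) for s in spots]

spots = [(0, 0, 10, 10), (12, 0, 22, 10)]   # two stall regions in image coords
detections = [(1, 1, 9, 9)]                  # one vehicle box from the detector
occ = occupancy(spots, detections)           # stall 0 occupied, stall 1 free
```

Running this per frame at the edge means only the compact occupancy vector, not the video, needs to leave the camera, which is the data-transmission saving the study is after.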

Journal ArticleDOI
TL;DR: This article investigates the analog gradient aggregation (AGA) solution, which exploits analog over-the-air transmission to overcome the communication bottleneck in wireless federated learning, and proposes a novel design of both the transceiver and the learning algorithm whose performance gains over baseline methods are demonstrated.
Abstract: This article investigates the analog gradient aggregation (AGA) solution to overcome the communication bottleneck for wireless federated learning applications by exploiting the idea of analog over-the-air transmission. Despite the various advantages, this special transmission solution also brings new challenges to both transceiver design and learning algorithm design due to the nonstationary local gradients and the time-varying wireless channels in different communication rounds. To address these issues, we propose a novel design of both the transceiver and learning algorithm for the AGA solution. In particular, the parameters in the transceiver are optimized with the consideration of the nonstationarity in the local gradients based on a simple feedback variable. Moreover, a novel learning rate design is proposed for the stochastic gradient descent algorithm, which is adaptive to the quality of the gradient estimation. Theoretical analyses are provided on the convergence rate of the proposed AGA solution. Finally, the effectiveness of the proposed solution is confirmed by two separate experiments based on linear regression and a shallow neural network. The simulation results verify that the proposed solution outperforms various state-of-the-art baseline schemes with a much faster convergence speed.
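The core of analog gradient aggregation, local gradients superimposing in the multiple-access channel so the server receives their sum in one shot, can be sketched under idealized assumptions. The channels are synthetic and the precoder is plain channel inversion with no power limit; the paper's feedback-based transceiver optimization and adaptive learning rate are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
K, d = 5, 8  # devices and gradient dimension (hypothetical)

grads = rng.normal(size=(K, d))                    # local gradients
h = rng.normal(size=K) + 1j * rng.normal(size=K)   # per-device uplink channels

# Channel-inversion precoding: each device pre-compensates its own channel
# so the analog transmissions add up coherently at the server.
eta = 1.0                                  # common power-control scalar
tx = eta * grads / h[:, None]              # per-device precoded signals
noise = 0.01 * (rng.normal(size=d) + 1j * rng.normal(size=d))
y = (h[:, None] * tx).sum(axis=0) + noise  # over-the-air superposition

grad_estimate = y.real / (eta * K)         # noisy estimate of the average gradient
true_avg = grads.mean(axis=0)
```

One channel use delivers the averaged gradient regardless of K, which is the communication saving over sending K separate digital updates; the noise term in `grad_estimate` is what motivates the paper's estimation-quality-adaptive learning rate.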