
Showing papers by "Sudeep Tanwar published in 2023"


Journal ArticleDOI
TL;DR: In this article, a case study on Metaverse-assisted Real Estate Management (REM) is presented, where the Metaverse governs a Buyer-Broker-Seller (BBS) architecture for land registrations.
Abstract: The Metaverse allows the integration of physical and digital versions of users, processes, and environments where entities communicate, transact, and socialize. With the shift towards Extended Reality (XR) technologies, the Metaverse is envisioned to support a wide range of applicative verticals. It will support a seamless mix of physical and virtual worlds (realities) and, thus, will be a game changer for the Future Internet, built on the Semantic Web framework. The Metaverse will be ably assisted by the convergence of emerging wireless communication networks (such as Fifth-Generation and Beyond networks) or Sixth-Generation (6G) networks, Blockchain (BC), Web 3.0, Artificial Intelligence (AI), and Non-Fungible Tokens (NFTs). It has the potential for convergence in diverse industrial applications such as digital twins, telehealth care, connected vehicles, virtual education, social networks, and financial applications. Recent studies on the Metaverse have focused on explaining its key components, but a systematic study of the Metaverse in terms of industrial applications has not yet been performed. Owing to this gap, this survey presents the salient features and assistive Metaverse technologies. We discuss a high-level and generic Metaverse framework for modern industrial cyberspace and discuss the potential challenges and future directions of the Metaverse’s realization. A case study on Metaverse-assisted Real Estate Management (REM) is presented, where the Metaverse governs a Buyer–Broker–Seller (BBS) architecture for land registrations. We discuss the performance evaluation of the current land registration ecosystem in terms of cost evaluation, trust probability, and mining cost on the BC network. The obtained results show the viability of the Metaverse in REM setups.

9 citations


Journal ArticleDOI
TL;DR: In this article, an artificial intelligence-based system model was proposed to detect the malicious user trying to compromise the IoT environment using a binary classification problem, and the proposed system model is evaluated by considering different assessment measures that comprise the training accuracy, training loss, classification measures (precision, recall, and F1 score), and receiver operating characteristic (ROC) curve.
Abstract: The Internet of Things (IoT) is a key enabler technology that recently received significant attention from the scientific community across the globe. It helps transform everyone's life by connecting physical and virtual devices with each other to offer staggering benefits, such as automation and control, higher productivity, real-time information access, and improved efficiency. However, IoT devices and their accumulated data are susceptible to various security threats and vulnerabilities, such as data integrity, denial-of-service, interception, and information disclosure attacks. In recent years, the IoT with blockchain technology has seen rapid growth, where smart contracts play an essential role in validating IoT data. However, these smart contracts can be vulnerable and degrade the performance of IoT applications. Hence, besides offering indispensable features to ease human lives, there is also a need to confront IoT environment security attacks, especially data integrity attacks. Toward this aim, this paper proposes an artificial intelligence-based system model with a dual objective. It first detects the malicious user trying to compromise the IoT environment using a binary classification problem. Further, blockchain technology is utilized to offer tamper-proof storage to store non-malicious IoT data. However, a malicious user can exploit the blockchain-based smart contract to deteriorate the performance of the IoT environment. For that, this paper utilizes deep learning algorithms to classify malicious and non-malicious smart contracts. The proposed system model offers an end-to-end security pipeline through which the IoT data are disseminated to the recipient. Lastly, the proposed system model is evaluated by considering different assessment measures that comprise the training accuracy, training loss, classification measures (precision, recall, and F1 score), and receiver operating characteristic (ROC) curve.
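The classification measures named in the abstract (precision, recall, and F1 score) can be computed directly from the confusion counts of the binary malicious/non-malicious classifier. The function below is an illustrative sketch, not the paper's implementation:

```python
# Minimal sketch: precision, recall, and F1 for a binary classifier,
# treating label 1 as "malicious" and label 0 as "non-malicious".

def classification_measures(y_true, y_pred):
    """Return (precision, recall, f1) from paired true/predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

These are the same quantities a library such as scikit-learn would report; the hand-rolled version simply makes the definitions explicit.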

4 citations


Journal ArticleDOI
TL;DR: In this article, the authors employed a long short-term memory based AI model on the edge servers to classify the machines' malicious and nonmalicious message requests and forwarded them to the onion routing (OR) network.
Abstract: Machine-to-machine (M2M) communication in the Industrial Internet of Things is still in its infancy, as the information exchange between machines is hindered by various modern security challenges and threats. An attacker can leverage the M2M communication by exploiting it with resource exhaustion, data integrity, and injection attacks. In this article, to address the aforementioned security issues, we first employed a long short-term memory based AI model on the edge servers to classify the machines' malicious and nonmalicious message requests and forwarded them to the onion routing (OR) network. Then, to enhance the security and reliability of the conventional OR network, we have associated it with blockchain technology by incorporating two additional fields along with the original message requests, i.e., a verifying token and a time to live that validate the incoming message requests. Additionally, the OR network, along with blockchain, is simulated inside a discrete simulator, i.e., a shadow simulator. Finally, the performance of the proposed system is evaluated with different performance metrics, such as F1 score, precision, recall, and false-negative rate. The empirical results show that the proposed OR network outperforms the conventional OR in terms of throughput, decryption time (computationally inexpensive), and OR circuit compromise rate.
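The two extra fields the article attaches to each message request, a verifying token and a time to live, can be sketched as follows. The field names, the hash construction, and the shared secret are illustrative assumptions; the article's exact blockchain-backed validation is not specified in the abstract:

```python
# Hedged sketch: a message request carries a verifying token (here a
# SHA-256 digest over payload + shared secret) and an expiry timestamp.
import hashlib
import time

def make_request(payload, secret, ttl_s=30, now=None):
    """Build a request with a verifying token and a time-to-live deadline."""
    now = time.time() if now is None else now
    token = hashlib.sha256((payload + secret).encode()).hexdigest()
    return {"payload": payload, "token": token, "expires": now + ttl_s}

def validate(request, secret, now=None):
    """Accept only unexpired requests whose token matches the payload."""
    now = time.time() if now is None else now
    expected = hashlib.sha256(
        (request["payload"] + secret).encode()).hexdigest()
    return request["token"] == expected and now <= request["expires"]
```

A relay would drop any request that fails either check before forwarding it into the OR circuit.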

3 citations


Journal ArticleDOI
22 Apr 2023-Sensors
TL;DR: In this article, the authors proposed a Game-o-Meta scheme, which integrates federated learning in the game metaverses with GP data being trained on local devices only, and the proposed scheme was compared against traditional schemes based on parameters such as GP task offloading, GP avatar rendering latency, and GS availability.
Abstract: The aim of the peer-to-peer (P2P) decentralized gaming industry has shifted towards realistic gaming environment (GE) support for game players (GPs). Recent innovations in the metaverse have motivated the gaming industry to look beyond augmented reality and virtual reality engines, which improve the reality of virtual game worlds. In gaming metaverses (GMs), GPs can play, socialize, and trade virtual objects in the GE. On game servers (GSs), the collected GM data are analyzed by artificial intelligence models to personalize the GE according to the GP. However, communication with GSs suffers from high-end latency, bandwidth concerns, and issues regarding the security and privacy of GP data, which pose a severe threat to the emerging GM landscape. Thus, we proposed a scheme, Game-o-Meta, that integrates federated learning in the GE, with GP data being trained on local devices only. We envisioned the GE over a sixth-generation tactile internet service to address the bandwidth and latency issues and assure real-time haptic control. In the GM, the GP’s game tasks are collected and trained on the GS, and then a pre-trained model is downloaded by the GP, which is trained using local data. The proposed scheme was compared against traditional schemes based on parameters such as GP task offloading, GP avatar rendering latency, and GS availability. The results indicated the viability of the proposed scheme.

2 citations


Journal ArticleDOI
TL;DR: The authors propose SanJeeVni, a blockchain (BC)-assisted UAV vaccine distribution scheme at the backdrop of sixth-generation (6G) enhanced ultra-reliable low-latency communication (6G-eRLLC), which indicates the scheme efficacy in practical setups.
Abstract: Recently, unmanned aerial vehicles (UAVs) have been deployed in the Novel Coronavirus Disease-2019 (COVID-19) vaccine distribution process. To address the issues of fake vaccine distribution and real-time massive UAV monitoring and control at nodal centers (NCs), the authors propose SanJeeVni, a blockchain (BC)-assisted UAV vaccine distribution scheme at the backdrop of sixth-generation (6G) enhanced ultra-reliable low-latency communication (6G-eRLLC). The scheme considers user registration, vaccine request, and distribution through a public Solana BC setup, which assures a scalable transaction rate. Based on vaccine requests at production setups, UAV swarms are triggered with vaccine delivery to NCs. An intelligent edge offloading scheme is proposed to support UAV coordinates and routing path setups. The scheme is compared against fifth-generation (5G) ultra-reliable low-latency communication (uRLLC). In the simulation, we achieve an 86% improvement in service latency, a 12.2% reduction in UAV energy consumption with 76.25% more UAV coverage in 6G-eRLLC, and a significant improvement of approximately 199.76% in storage cost against the Ethereum network, which indicates the scheme's efficacy in practical setups.

1 citation


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a contactless camera-based attendance system with the equipped functionalities of anti-spoofing, which can detect liveness, so fake attendance marking is eliminated.

1 citation


Journal ArticleDOI
TL;DR: In this article, the authors introduced the concept of gradient encryption in federated learning (FL), which preserves the users' privacy without additional computation requirements, and the computational power present in the edge devices helps to fine-tune the local model and encrypt the input data to preserve privacy without any drop in performance.
Abstract: Autonomous vehicles (AVs) are getting popular because of their usage in a wide range of applications like delivery systems, self-driving taxis, and ambulances. AVs utilize the power of machine learning (ML) and deep learning (DL) algorithms to improve their self-driving learning experiences. The sudden surge in the number of AVs raises the need for a distributed learning ecosystem to optimize their self-driving experiences at a rapid pace. Toward this goal, federated learning (FL) is beneficial, as it can create a distributed learning environment for AVs. However, traditional FL transfers the raw input data directly to a server, which leads to privacy concerns among the end-users. The concept of blockchain helps to protect privacy, but it requires additional computational infrastructure. The extra infrastructure increases the operational cost for the company handling and maintaining the AVs. Motivated by this, in this paper, the authors introduced the concept of gradient encryption in FL (GeFL), which preserves the users' privacy without additional computation requirements. The computational power present in the edge devices helps to fine-tune the local model and encrypt the input data to preserve privacy without any drop in performance. For performance evaluation, the authors have built a German traffic sign recognition system using a convolutional neural network (CNN) algorithm-based classification system and GeFL. The simulation process is carried out over a wide range of input parameters to analyze the performance at scale. Simulation results show that GeFL outperforms conventional FL-based algorithms in terms of accuracy, i.e., 2% higher. Also, the amount of data transferred among the devices in the network is nearly three times less in GeFL compared to traditional FL.
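The abstract does not spell out the encryption scheme GeFL uses, so the sketch below shows one standard way to realize privacy-preserving aggregation cheaply: zero-sum additive masking, where each client perturbs its update with a mask and the masks cancel in the server-side aggregate. This is an illustrative assumption, not the paper's method:

```python
# Hedged sketch: each client adds a random mask to its gradient; the masks
# are constructed to sum to zero, so the server recovers only the average
# gradient, never any individual client's update.
import random

def masked_updates(client_grads, seed=42):
    """Return masked copies of client gradients; masks sum to zero."""
    rng = random.Random(seed)
    n, d = len(client_grads), len(client_grads[0])
    masks = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(n - 1)]
    # Last mask cancels all the others, coordinate by coordinate.
    masks.append([-sum(m[j] for m in masks) for j in range(d)])
    return [[g + m for g, m in zip(grad, mask)]
            for grad, mask in zip(client_grads, masks)]

def aggregate(updates):
    """FedAvg-style mean; the masks cancel, leaving the true average."""
    n, d = len(updates), len(updates[0])
    return [sum(u[j] for u in updates) / n for j in range(d)]
```

The server sees only perturbed vectors, yet the aggregated model update is identical to plain FedAvg.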

1 citation


Journal ArticleDOI
TL;DR: In this article, the authors present a literature survey of possible threats from the perspective of different application areas and review the most recent defensive algorithms and strategies used to guard against security and privacy threats in those areas.
Abstract: Machine learning (ML) and deep learning (DL) models are popular in many areas, from business, medicine, industries, healthcare, transportation, smart cities, and many more. However, the conventional centralized training techniques may not apply to upcoming distributed applications, which require high accuracy and quick response time. This is mainly due to limited storage and performance bottleneck problems on the centralized servers during the execution of various ML- and DL-based models. In contrast, federated learning (FL) is a developing approach to training ML models in a collaborative and distributed manner. It allows the full potential exploitation of these models with unlimited data and distributed computing power. In FL, edge computing devices collaborate to train a global model on their private data and computational power without sharing their private data on the network, thereby offering privacy preservation by default. But the distributed nature of FL faces various challenges related to data heterogeneity, client mobility, scalability, and seamless data aggregation. Moreover, the communication channels, clients, and central servers are also vulnerable to attacks, which may introduce various security threats. Thus, a structured vulnerability and risk assessment is needed to deploy FL successfully in real-life scenarios. Furthermore, the scope of FL is expanding in terms of its application areas, with each area facing different threats. In this paper, we analyze various vulnerabilities present in the FL environment and present a literature survey of possible threats from the perspective of different application areas. Also, we review the most recent defensive algorithms and strategies used to guard against security and privacy threats in those areas. For a systematic coverage of the topic, we considered various applications under four main categories: space, air, ground, and underwater communications. We also compared the proposed methodologies regarding the underlying approach, base model, datasets, evaluation metrics, and achievements. Lastly, the future directions and existing drawbacks of the various approaches are discussed in detail.

1 citation


Journal ArticleDOI
TL;DR: In this paper, explainable artificial intelligence (XAI) was used to extract essential features from the CC fraud dataset, which improved the performance of the LSTM model and showed promising results in detecting CC fraud patterns.
Abstract: Credit card (CC) fraud has been a persistent problem and has affected financial organizations. Traditional machine learning (ML) algorithms are ineffective owing to the increased attack space, and techniques such as long short-term memory (LSTM) have shown promising results in detecting CC fraud patterns. However, owing to the black-box nature of the LSTM model, its decision-making process is hard to interpret. Thus, in this paper, we propose a scheme, RaKShA, which uses explainable artificial intelligence (XAI) to help understand and interpret the behavior of black-box models. XAI is formally used to interpret these black-box models; however, we used XAI to extract essential features from the CC fraud dataset, consequently improving the performance of the LSTM model. The XAI was integrated with LSTM to form an explainable LSTM (X-LSTM) model. The proposed approach takes preprocessed data and feeds it to the XAI model, which computes the variable importance plot for the dataset, simplifying the feature selection. Then, the data are presented to the LSTM model, and the output classification is stored in a smart contract (SC), ensuring no tampering with the results. The final data are stored on the blockchain (BC), which forms trusted and chronological ledger entries. We have considered two open-source CC datasets. We obtain an accuracy of 99.8% with our proposed X-LSTM model over 50 epochs, compared to 85% without XAI (simple LSTM model). We present the gas fee requirements, IPFS bandwidth, and the fraud detection contract specification in blockchain metrics. The results indicate the practical viability of our scheme in real financial CC spending and lending setups.
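The feature-selection step, using an XAI-style importance score to pick the columns fed to the LSTM, can be sketched as below. The paper computes a variable importance plot; here, permutation importance with a caller-supplied scorer stands in for it, which is an assumption on our part:

```python
# Illustrative sketch: score each feature by how much a model's score drops
# when that feature's column is shuffled, then keep the top-k features.
import random

def permutation_importance(score_fn, X, y, n_features, seed=0):
    """Importance of feature j = baseline score - score with column j shuffled."""
    rng = random.Random(seed)
    base = score_fn(X, y)
    importances = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)
        Xp = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
        importances.append(base - score_fn(Xp, y))
    return importances

def select_features(X, importances, k):
    """Keep the k highest-importance columns for the downstream model."""
    keep = sorted(range(len(importances)),
                  key=lambda j: importances[j], reverse=True)[:k]
    keep.sort()
    return [[row[j] for j in keep] for row in X], keep
```

The reduced matrix (and the kept column indices) would then be handed to the LSTM in place of the full feature set.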

1 citation


Journal ArticleDOI
01 Mar 2023-Sensors
TL;DR: The authors proposed a deterministic finite automaton (DFA) based lemmatization technique for the Gujarati language to transform inflected words into their root words and then inferred the set of topics from this lemmatized corpus of Gujarati text.
Abstract: Topic modeling is a machine learning algorithm based on statistics that follows unsupervised machine learning techniques for mapping a high-dimensional corpus to a low-dimensional topical subspace, but the quality of the resulting topics leaves room for improvement. A topic model's topic is expected to be interpretable as a concept, i.e., correspond to human understanding of a topic occurring in texts. While discovering corpus themes, inference constantly uses the vocabulary, whose size impacts topic quality. The corpus contains inflectional forms. Since words frequently appear in the same sentence and are likely to have a latent topic, practically all topic models rely on co-occurrence signals between various terms in the corpus. The topics get weaker because of the abundance of distinct tokens in languages with extensive inflectional morphology. Lemmatization is often used to preempt this problem. Gujarati is one of the morphologically rich languages, as a word may have several inflectional forms. This paper proposes a deterministic finite automaton (DFA) based lemmatization technique for the Gujarati language to transform inflected words into their root words. The set of topics is then inferred from this lemmatized corpus of Gujarati text. We employ statistical divergence measurements to identify semantically less coherent (overly general) topics. The results show that the lemmatized Gujarati corpus learns more interpretable and meaningful topics than unlemmatized text. Finally, results show that lemmatization decreases the vocabulary size by 16% and improves the semantic coherence for all three measurements—Log Conditional Probability, Pointwise Mutual Information, and Normalized Pointwise Mutual Information—from −9.39 to −7.49, −6.79 to −5.18, and −0.23 to −0.17, respectively.
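A DFA-based suffix stripper can be sketched as a trie over reversed suffixes, which is exactly a DFA that consumes the word right to left. The suffix inventory below is purely illustrative (transliterated placeholders); the paper defines the actual Gujarati inflectional suffixes and DFA states:

```python
# Sketch: build a DFA (trie over reversed suffixes) and strip the longest
# matching suffix to recover the stem. Suffixes here are illustrative only.

def build_dfa(suffixes):
    """Trie over reversed suffixes == DFA consuming the word right-to-left."""
    root = {}
    for s in suffixes:
        node = root
        for ch in reversed(s):
            node = node.setdefault(ch, {})
        node["$"] = len(s)  # accepting state records the suffix length
    return root

def lemmatize(word, dfa):
    """Strip the longest suffix the DFA accepts; return the stem."""
    node, best = dfa, 0
    for ch in reversed(word):
        if ch not in node:
            break
        node = node[ch]
        best = max(best, node.get("$", 0))
    # Never strip the entire word down to an empty stem.
    return word[:len(word) - best] if best < len(word) else word

dfa = build_dfa(["o", "ne", "thi"])  # illustrative suffix list
```

Longest-match semantics fall out of tracking the deepest accepting state seen while scanning.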

Journal ArticleDOI
TL;DR: In this article, the authors proposed an efficient resource allocation scheme incorporating an artificial intelligence (AI) algorithm-based autoencoder that overcomes information ambiguity when identical weights occur in the data rate matrix of the Hungarian algorithm.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a reverse auction and blockchain-based energy trading (ET) approach by adopting the smart grid, where EVs can be classified as prosumers or consumers during ET in the proposed approach.
Abstract: This paper proposes a reverse auction and blockchain-based electric vehicle (EV) energy trading (ET) approach by adopting the smart grid. EVs can be classified as prosumers or consumers during ET in the proposed approach. We further introduce an Interplanetary File System (IPFS)-based cost-efficient ET approach considering the sixth-generation (6G) wireless network to address the scalability and response time issues of the data transactions. Furthermore, the proposed approach leverages a reverse auction mechanism for optimal ET for EVs. The reverse auction mechanism maximizes the profit for EVs participating in the ET scheme using the smart grid. Based on the reverse auction mechanism, a consumer can choose to trade energy with a prosumer or the smart grid based on solar panel efficiency. Finally, the proposed approach is simulated considering various performance metrics, such as transaction efficiency, profit for consumers, and convergence, to enable an optimal and efficient ET system.
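The core matching idea of a reverse auction, the buyer awards the trade to the lowest ask, with the smart grid as the fallback seller, can be sketched as below. The ask format, `grid_price`, and the capacity check are illustrative assumptions, not the paper's exact mechanism:

```python
# Hedged sketch of reverse-auction matching: a consumer buys from the
# prosumer with the lowest ask that can cover its demand, falling back to
# the smart grid tariff when no prosumer undercuts it.

def reverse_auction(demand_kwh, asks, grid_price):
    """asks: {prosumer_id: (price_per_kwh, capacity_kwh)} -> (seller, cost)."""
    best_id, best_price = "grid", grid_price
    for pid, (price, capacity) in asks.items():
        if capacity >= demand_kwh and price < best_price:
            best_id, best_price = pid, price
    return best_id, best_price * demand_kwh
```

Because sellers compete downward on price, the consumer's cost is minimized, which mirrors the profit/cost optimization the abstract describes.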

Journal ArticleDOI
TL;DR: In this article, the authors present a survey on performance evaluation metrics for various IoT applications and graphically present the comparison of multiple parameters across the domains of security, energy efficiency, data storage, and network performance.

Journal ArticleDOI
TL;DR: In this article, a blockchain and AI-envisioned secure and trusted framework (HEART) is proposed to classify wearable devices as malicious or non-malicious, along with a smart contract that allows only the data of patients whose wearable devices are classified as non-malicious onto the public blockchain network.
Abstract: Over the last few decades, the healthcare industry has continuously grown, with hundreds of thousands of patients obtaining treatment remotely using smart devices. Data security becomes a prime concern with such a massive increase in the number of patients. Numerous attacks on healthcare data have recently been identified that can put the patient's identity at stake. For example, the private data of millions of patients have been published online, posing a severe risk to patients' data privacy. With the advent of Industry 4.0, medical practitioners can digitally assess the patient's condition and administer prompt prescriptions. However, wearable devices are also vulnerable to numerous security threats, such as session hijacking, data manipulation, and spoofing attacks. Attackers can tamper with the patient's wearable device and relay the tampered data to the concerned doctor. This can put the patient's life at high risk. Since blockchain is a transparent and immutable decentralized system, it can be utilized for securely storing patients' wearable data. Artificial Intelligence (AI), on the other hand, utilizes different machine learning techniques to classify malicious data from an oncoming stream of patients' wearable data. An amalgamation of these two technologies would make tampering with the patient's data extremely difficult. To mitigate the aforementioned issues, this paper proposes a blockchain and AI-envisioned secure and trusted framework (HEART). Here, a Long Short-Term Memory (LSTM) model is used to classify wearable devices as malicious or non-malicious. Then, we design a smart contract that allows only the data of patients whose wearable devices are classified as non-malicious onto the public blockchain network. This information is then accessible to all involved in the patient's care.
We then evaluate the HEART’s performance considering various evaluation metrics such as accuracy, recall, precision, scalability, and network latency. On the training and testing sets, the model achieves accuracies of 93% and 92.92%, respectively.
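HEART's gating rule, only readings from devices the classifier deems non-malicious reach the public chain, can be sketched as follows. The classifier is abstracted to a callable (in the paper it is an LSTM), and the in-memory `chain` list stands in for the blockchain; both are illustrative assumptions:

```python
# Hedged sketch: route each device reading through the classifier; only
# non-malicious readings are appended to the (stand-in) blockchain, and the
# rest are reported as rejected.

def gate_to_chain(readings, classify, chain):
    """readings: {device_id: data}; classify(data) -> 'malicious' | 'non-malicious'.
    Appends accepted records to `chain`; returns the rejected device ids."""
    rejected = []
    for device, data in readings.items():
        if classify(data) == "non-malicious":
            chain.append({"device": device, "data": data})
        else:
            rejected.append(device)
    return rejected
```

In a real deployment the append would be a smart-contract call rather than a list append, but the control flow is the same.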

Journal ArticleDOI
TL;DR: In this article, the authors proposed a Cuckoo Search-Kekre Fast Codebook Generation (CS-KFCG) algorithm for image compression, which improves the speed of codebook generation by using a Flight Dissemination Function (FDF).
Abstract: For constructing the best local codebook for image compression, there are many Vector Quantization (VQ) procedures, but the simplest VQ procedure is the Linde–Buzo–Gray (LBG) procedure. Techniques such as the Gaussian Dissemination Function (GDF) are used for the searching process in generating a global codebook for particle swarm optimization (PSO), honeybee mating optimization (HBMO), and Firefly (FA) procedures. However, when particle velocity is very high, FA encounters a problem when brighter fireflies are trivial, and PSO suffers uncertainty in merging. A novel procedure, Cuckoo Search–Kekre Fast Codebook Generation (CS-KFCG), is proposed that enhances the Cuckoo Search–Linde–Buzo–Gray (CS-LBG) codebook by implementing a Flight Dissemination Function (FDF), which yields more speed than other state-of-the-art algorithms with appropriate mutation expectations for the overall codebook. Also, CS-KFCG generates a high Peak Signal-to-Noise Ratio (PSNR), albeit with a longer runtime, and a better acceptability rate.

Journal ArticleDOI
TL;DR: In this paper, the authors developed an object detection and identification framework to bolster public safety, which consists of an unmanned aerial vehicle (UAV) utilized for data collection that constantly monitors and captures the images of designated areas.
Abstract: In this paper, we developed an object detection and identification framework to bolster public safety. Before developing the proposed framework, several existing frameworks were analyzed. The other models were carefully observed for their strengths and weaknesses based on the machine learning and deep learning algorithms they operate on. All of this was kept in mind during the development of the proposed model. The proposed framework consists of an unmanned aerial vehicle (UAV) utilized for data collection that constantly monitors and captures images of the designated areas. A convolutional neural network (CNN) model is developed that recognizes a threat and identifies various handheld objects, such as guns and knives, which facilitate criminals to commit crimes. The proposed CNN model comprises 16 layers with input, convolutional, dense, max-pool, and flattened layers of different dimensions. For that, a benchmarked weapon-detection dataset, small objects handled similarly to a weapon (SOHAs), is used. It comprises six classes of 8945 images, with 5947 used for training, 1699 used for testing, and 849 used for validation. Once the CNN model accomplishes the object identification and classification, that is, whether the person is criminal or non-criminal, the criminal data are forwarded to various law enforcement agencies and the non-criminal data are fed back to the CNN model to improve its accuracy. As a result, the proposed CNN model outperforms several pre-trained models with an accuracy of 0.8352 and a validation accuracy of 0.7758. In addition, the proposed model gives a minimal loss of 0.83 with a validation loss of 0.97. The proposed framework decreases the burden on crime-fighting agencies and increases the accuracy of crime detection. Additionally, it ensures fairness and operates at a meager computational cost compared to similar pre-trained models.

Journal ArticleDOI
TL;DR: In this article, a context-aware reliable routing protocol that integrates k-means clustering and support vector machine (SVM) is proposed to promote reliable routing in VANETs.
Abstract: The Vehicular Ad-hoc Network (VANET) is an innovative technology that allows vehicles to connect with neighboring roadside structures to deliver intelligent transportation applications. To deliver safe communication among vehicles, a reliable routing approach is required. Due to excessive mobility and frequent variation in network topology, establishing reliable routing for VANETs is challenging. In VANETs, transmission links are extremely susceptible to interruption; as a result, the routing efficiency of these constantly evolving networks requires special attention. To promote reliable routing in VANETs, we propose a novel context-aware reliable routing protocol that integrates k-means clustering and a support vector machine (SVM) in this paper. The k-means clustering divides the routes into two clusters, named GOOD and BAD. The cluster with high mean square error (MSE) is labelled BAD, and the cluster with low MSE is labelled GOOD. After training the routing data with the SVM, the performance of each route from source to target is improved in terms of Packet Delivery Ratio (PDR), throughput, and End-to-End Delay (E2E). The proposed protocol achieves improved routing efficiency with these changes.
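The route-labelling step, 2-means clustering on per-route MSE, with the low-MSE cluster labelled GOOD, can be sketched in a few lines. Assuming route quality is summarized by a single MSE value per route (the abstract does not give the full feature set), a 1-D two-cluster k-means suffices:

```python
# Sketch of the GOOD/BAD route labelling: two-centroid 1-D k-means over
# per-route MSE values; routes near the low centroid are labelled GOOD.

def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means; returns (low_centroid, high_centroid)."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        near_lo = [v for v in values if abs(v - lo) <= abs(v - hi)]
        near_hi = [v for v in values if abs(v - lo) > abs(v - hi)]
        if near_lo:
            lo = sum(near_lo) / len(near_lo)
        if near_hi:
            hi = sum(near_hi) / len(near_hi)
    return lo, hi

def label_routes(route_mse):
    """route_mse: {route_id: mse} -> {route_id: 'GOOD' | 'BAD'}."""
    lo, hi = kmeans_1d(list(route_mse.values()))
    return {r: ("GOOD" if abs(m - lo) <= abs(m - hi) else "BAD")
            for r, m in route_mse.items()}
```

These labels would then serve as training targets for the SVM stage the abstract describes.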


Proceedings ArticleDOI
15 Jul 2023
TL;DR: In this article, a downlink mMIMO NOMA cooperative system is proposed to reduce battery expenditure and increase the cell-edge user's energy efficiency and sum rate in a 5G cellular system.
Abstract: With the development of the Internet of Things (IoT), the number of devices will increase tremendously, demanding more wireless communication resources. It has been shown in the literature that non-orthogonal multiple access (NOMA) offers high multiplexing gains due to the simultaneous transfer of signals, and massive multiple-input multiple-output (mMIMO) offers high spectrum efficiency due to high antenna gain. Therefore, a downlink mMIMO NOMA cooperative system is considered in this paper. Users at the cell edge in a 5G cellular system generally suffer from poor signal quality, as they are far away from the base station (BS) and expend high battery power to decode the signals superimposed through NOMA. Thus, this paper uses a cooperative relay system and proposes the mMIMO NOMA double-mode model to reduce battery expenditure and increase the cell-edge user's energy efficiency and sum rate. In the mMIMO NOMA double-mode model, two modes of operation are defined. Depending on the relay's battery level, these modes are chosen to utilize the system's energy efficiency. Comprehensive numerical results show the improvement in the proposed system's average sum rate and average energy efficiency compared with a conventional system. In a cooperative NOMA system, the BS transmits a signal to a relay, and the relay forwards the signal to a cluster of users. This cluster formation depends on the user positions and geographical restrictions concerning the relay equipment. Therefore, it is vital to form user clusters for efficient and simultaneous transmission. This paper also presents a novel method for efficient cluster formation.
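The abstract says clusters depend on user positions relative to the relays; a simple baseline for that idea is nearest-relay assignment. The coordinate format and the Euclidean criterion below are illustrative assumptions, not the paper's novel cluster-formation method:

```python
# Hedged sketch: assign each user to the geographically nearest relay,
# forming the per-relay clusters that receive the forwarded NOMA signal.

def form_clusters(users, relays):
    """users/relays: {id: (x, y)} -> {relay_id: [user_id, ...]}."""
    clusters = {r: [] for r in relays}
    for uid, (ux, uy) in users.items():
        nearest = min(
            relays,
            key=lambda r: (relays[r][0] - ux) ** 2 + (relays[r][1] - uy) ** 2,
        )
        clusters[nearest].append(uid)
    return clusters
```

A production scheme would also respect relay capacity and the geographical restrictions the abstract mentions; this sketch captures only the proximity criterion.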

Journal ArticleDOI
TL;DR: In this article, a case study of a Hyperledger-driven IoT-enabled scientific publishing system (SPS) is proposed to address the limitations of the traditional SPS, which serves the dual purpose of low-powered computational tagging of manuscripts as smart objects, and also supports rewarding and completing the verification of transactions by peers without involving a third party.

Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: In this paper, the authors presented a case study on public safety applications that utilizes the essential characteristics of artificial intelligence (AI), blockchain, and a 6G network to handle data integrity attacks on crime data.
Abstract: Mobile applications have rapidly grown over the past few decades to offer futuristic applications, such as autonomous vehicles, smart farming, and smart city. Such applications require ubiquitous, real-time, and secure communications to deliver services quickly. Toward this aim, sixth-generation (6G) wireless technology offers superior performance with high reliability, enhanced transmission rate, and low latency. However, managing the resources of the aforementioned applications is highly complex in the precarious network. An adversary can perform various network-related attacks (i.e., data injection or modification) to jeopardize the regular operation of the smart applications. Therefore, incorporating blockchain technology in the smart application can be a prominent solution to tackle security, reliability, and data-sharing privacy concerns. Motivated by the same, we presented a case study on public safety applications that utilizes the essential characteristics of artificial intelligence (AI), blockchain, and a 6G network to handle data integrity attacks on the crime data. The case study is assessed using various performance parameters by considering blockchain scalability, packet drop ratio, and training accuracy. Lastly, we explored different research challenges of adopting blockchain in the 6G wireless network.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a convolutional neural network and bi-directional gated recurrent unit (CNN + BiGRU) attention-based architecture for the classification of heartbeat sounds.
Abstract: Cardiovascular diseases (CVDs) are a significant cause of death worldwide. CVDs can be prevented by diagnosing heartbeat sounds and using other conventional techniques early to reduce the harmful effects caused by CVDs. However, it is still challenging to segment, extract features from, and predict heartbeat sounds in elderly people. The inception of deep learning (DL) algorithms has helped detect various types of heartbeat sounds at an early stage. Motivated by this, we proposed an intelligent architecture categorizing heartbeat sounds into normal and murmur classes for elderly people. We have used a standard heartbeat dataset with heartbeat class labels, i.e., normal and murmur. Furthermore, it is augmented and preprocessed by normalization and standardization to significantly reduce computational power and time. The proposed convolutional neural network and bi-directional gated recurrent unit (CNN + BiGRU) attention-based architecture for the classification of heartbeat sounds achieves an accuracy of 90% compared to baseline approaches. Hence, the proposed novel CNN + BiGRU attention-based architecture is superior to other DL models for heartbeat sound classification.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a biometric-based secure and lightweight message dissemination scheme (BiOIoV) for the IoV ecosystem to mitigate various security attacks.
Abstract: The Internet of Vehicles (IoV) allows vehicles to disseminate various messages to neighbouring vehicles; these messages include traffic data, location sharing, driving assistance, road safety, and accident statistics. However, these messages are exchanged over an insecure channel, which exposes the resource-limited IoV network to several problems, such as security attacks, user privacy leakage, and high computational and transmission overhead, alongside the challenges of dynamic topology, vehicle mobility, throughput, and authentication. Considering these issues, in this paper, we propose a biometric-based secure and lightweight message dissemination scheme (BiOIoV) for the IoV ecosystem to mitigate various security attacks. We assess the security and privacy defenses of BiOIoV against various security threats through formal security analysis using Scyther and ProVerif 2.03, and we also present security proofs of BiOIoV. We then evaluate the performance of the proposed scheme against cutting-edge approaches. Results show the efficacy of the proposed scheme compared to state-of-the-art approaches in terms of communication overhead, computation cost, and energy consumption.
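The core requirement above, lightweight authentication of disseminated messages over an insecure channel, can be illustrated with a keyed-hash tag; note this is a generic HMAC sketch under an assumed pre-shared key, not the biometric key derivation that BiOIoV itself uses:

```python
import hashlib
import hmac

def tag_message(key: bytes, msg: bytes) -> bytes:
    """Attach a keyed SHA-256 tag so receivers can verify origin and integrity."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_message(key: bytes, msg: bytes, tag: bytes) -> bool:
    """Constant-time comparison guards against timing side channels."""
    return hmac.compare_digest(tag_message(key, msg), tag)
```

A single hash computation per message keeps the computation and communication overhead low, which is the property lightweight dissemination schemes in resource-limited IoV networks are optimized for.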

Journal ArticleDOI
TL;DR: In this article, a zero-trust architecture is proposed to provide end-to-end security for Decentralized Oracle Network (DON)-based applications; it offers one level of indirection to the execution of the actual smart contracts.

Journal ArticleDOI
TL;DR: In this paper, a mapping study was conducted across different scientific databases to identify the existing challenges in healthcare management systems and to analyze existing blockchain-based healthcare applications; the paper also provides insights into the research challenges of blockchain and proposes a solution taxonomy through comparative analysis.
Abstract: Blockchain technology was introduced through Bitcoin; research has since continuously extended its applications to different sectors, proving blockchain to be a versatile technology that expands into non-financial use cases. In the healthcare industry, blockchain is expected to have significant effects. Although research in this area is relatively new, it is developing quickly; thus, researchers in computer science and healthcare information technology, as well as practitioners, must continually keep pace with research progress. This study presents an exhaustive, in-depth examination of blockchain as a technology from all possible perspectives and of its adoption in the healthcare sector. A mapping study has been conducted across different scientific databases to identify the existing challenges in healthcare management systems and to analyze existing blockchain-based healthcare applications. Though blockchain has inherent features such as a distributed ledger, encryption, consensus, and immutability, its adoption in healthcare still faces challenges. This paper also provides insights into the research challenges of blockchain and proposes a solution taxonomy through comparative analysis.

Journal ArticleDOI
TL;DR: In this paper, a fusion of AI and a coalition game for secure resource allocation in non-orthogonal multiple access (NOMA)-based cooperative D2D communication is presented.
Abstract: Device-to-device (D2D) communication offers a low-cost paradigm where two devices in close proximity can communicate without needing a base station (BS). It significantly improves radio resource allocation, channel gain, communication latency, and energy efficiency, and offers cooperative communication to enhance weak users' network coverage. Cellular mobile users (CMUs) share spectral resources (e.g., power, channel, and spectrum) with D2D mobile users (DMUs), improving spectral efficiency. However, the reuse of radio resources causes various interferences, such as intercell and intracell interference, that degrade the performance of overall D2D communication. To overcome these issues, this paper presents a fusion of AI and a coalition game for secure resource allocation in non-orthogonal multiple access (NOMA)-based cooperative D2D communication. Here, NOMA uses the successive interference cancellation (SIC) technique to reduce the severe impact of interference in D2D systems. Further, we utilize a coalition game-theoretic model that efficiently and securely allocates resources between CMUs and DMUs. However, in the coalition game, all DMUs participate in obtaining resources from CMUs, which increases the computational overhead of the overall system. To address this, we employ artificial intelligence (AI) classifiers that bifurcate the DMUs based on their channel quality parameters, such as reference signal received power (RSRP), received signal strength indicator (RSSI), signal-to-noise ratio (SNR), and channel quality indicator (CQI). Only DMUs with better channel quality parameters are forwarded into the coalition game, thus reducing the computational overhead of the overall D2D communication. The performance of the proposed scheme is evaluated using various statistical metrics, for example, precision score, accuracy, recall, F1 score, overall sum rate, and secrecy capacity, where an accuracy of 99.38% is achieved while selecting DMUs for D2D communication.
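The bifurcation step, admitting only DMUs with adequate channel quality into the coalition game, can be sketched with a simple threshold filter; the thresholds below are hypothetical stand-ins for the trained AI classifier's decision boundary:

```python
# Illustrative cut-offs on the four channel-quality parameters (hypothetical
# values; the paper learns this decision boundary with an AI classifier).
THRESHOLDS = {"rsrp": -100.0, "rssi": -85.0, "snr": 10.0, "cqi": 7}

def eligible_dmus(dmus):
    """Keep only DMUs whose RSRP, RSSI, SNR, and CQI all clear the thresholds,
    so that only they enter the coalition game for resource allocation."""
    def good(d):
        return all(d[param] >= limit for param, limit in THRESHOLDS.items())
    return [d for d in dmus if good(d)]
```

Filtering before the game shrinks the set of coalition participants, which is exactly where the claimed reduction in computational overhead comes from.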

Journal ArticleDOI
TL;DR: In this article, the authors designed an extensive blockchain-based healthcare system (MyEasyHealthcare) with reduced gas consumption, transaction cost, execution cost, and bandwidth utilization, along with enhanced security at three levels.
Abstract: Blockchain systems have seen vast growth due to their immense potential for developing secure applications in education, healthcare, and other domains. Healthcare systems are extensively researched to provide convenience to human life. With the exponential growth in healthcare systems and devices, patient data security and privacy issues are becoming primary concerns. Blockchain is emerging as a solution to secure healthcare records, but it faces certain shortcomings, such as transaction time, execution time, gas consumption, and bandwidth utilization. The current article designs an extensive blockchain-based healthcare system (MyEasyHealthcare) with reduced gas consumption, transaction cost, execution cost, and bandwidth utilization, along with security enhanced at three levels. At the first level, professionals and patients are registered, which provides identity and access management. Secondly, authorization is required for each registered entity from the owners. Lastly, the third level establishes a doctor-patient relationship in which the hospital's owner assigns a patient to a particular doctor. The data is protected from the outer world and is shared only between the doctor and the patient. Moreover, to cover the majority of hospital management tasks, the developed system incorporates a smart contract that records seven different parameters for patient diagnosis by a physician and 15 different parameters by a pathologist. The designed system is evaluated for gas consumption, transaction cost, execution cost, and bandwidth utilization by simulating/executing the written smart contract on the InterPlanetary File System (IPFS) and Remix to check the feasibility of the developed system (MyEasyHealthcare) for the real world; the results confirm that the proposed system is useful in practice.
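The three-level access model described above (registration, owner authorization, doctor-patient assignment) can be sketched in plain Python; this is an illustrative state machine, not the actual Solidity smart contract, and all names are hypothetical:

```python
class MyEasyHealthcareSketch:
    """Illustrative three-level access control mirroring the described flow."""

    def __init__(self, owner):
        self.owner = owner
        self.registered = set()   # level 1: identity registration
        self.authorized = set()   # level 2: approval by the owner
        self.assignments = {}     # level 3: patient -> assigned doctor

    def register(self, entity):
        self.registered.add(entity)

    def authorize(self, caller, entity):
        # Only the owner may authorize, and only registered entities qualify.
        if caller == self.owner and entity in self.registered:
            self.authorized.add(entity)

    def assign(self, caller, patient, doctor):
        # The owner binds a patient to one doctor (level 3 relationship).
        if caller == self.owner and {patient, doctor} <= self.authorized:
            self.assignments[patient] = doctor

    def can_view(self, viewer, patient):
        # Records stay between the patient and the assigned doctor only.
        return viewer == patient or self.assignments.get(patient) == viewer
```

Each method gates the next level, so a record is reachable only after all three checks have been satisfied, which mirrors how the contract keeps data between doctor and patient.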

Journal ArticleDOI
01 Feb 2023-Sensors
TL;DR: In this article, a support vector machine-based red deer algorithm (SVM-RDA) was proposed to reduce the spectrum handoff delay and power consumption in cognitive radio networks.
Abstract: A cognitive radio network (CRN) is an intelligent network that can detect unoccupied spectrum space without interfering with the primary user (PU). Spectrum scarcity arises from static channel allocation, which the CRN addresses. Spectrum handoff management is a critical problem that must be addressed in the CRN to ensure uninterrupted connections and profitable use of unallocated spectrum space for secondary users (SUs). Spectrum handoff (SHO) has some disadvantages, i.e., communication delay and power consumption; to overcome these drawbacks, reducing the number of handoffs should be a priority. This study proposes the use of dynamic spectrum access (DSA) to check for available channels for the SU during handoff using a machine-learning-based metaheuristic algorithm. The simulation results show that the proposed "support vector machine-based red deer algorithm" (SVM-RDA) is resilient and has low complexity. The experimental setup evaluates the number of handoffs, unsuccessful handoffs, handoff delay, throughput, signal-to-noise ratio (SNR), SU bandwidth, and total spectrum bandwidth. This study provides improved system performance during SHO: the inferred technique anticipates handoff delay and minimizes the number of handoffs. The results show that the recommended method makes better predictions with fewer handoffs compared to the other three approaches.
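The DSA decision the abstract describes, keeping the SU on its channel when the PU is absent and otherwise handing off to the most promising idle channel, can be sketched as below; the channel fields and the predicted-idle-time heuristic are assumptions, standing in for the SVM-RDA's learned prediction:

```python
def select_channel(channels, current):
    """Return (chosen_channel, handoff_happened).

    Stay on the current channel when the PU is absent (no handoff cost);
    otherwise hand off to the idle channel with the longest predicted idle
    time, a stand-in for the SVM-RDA's availability prediction.
    """
    if channels.get(current, {}).get("pu_active") is False:
        return current, False  # PU absent: keep the channel, avoid a handoff
    idle = {c: info for c, info in channels.items() if not info["pu_active"]}
    if not idle:
        return None, False  # no idle channel: the SU must wait
    best = max(idle, key=lambda c: idle[c]["pred_idle_time"])
    return best, True
```

Preferring the channel expected to stay idle longest is what keeps the handoff count, and hence delay and power consumption, low.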

Journal ArticleDOI
TL;DR: In this article, a limited access encryption algorithm incorporating federated learning (LEAF) framework is presented, i.e., an encryption technique that solves privacy issues with the help of edge-enabled AI models.
Abstract: Over the last decades, the healthcare industry has been revolutionized, especially after the Covid-19 surge. Various artificial intelligence approaches have also been explored during this era for their applicability in healthcare. However, traditional AI techniques and algorithms are prone to overfitting, with minimal robustness to unseen or untrained data, so new techniques are required to counteract these issues. Federated learning (FL) can help build specific AI services for a network of hospitals with less overfitting and more robust models. However, with the inclusion of FL, user privacy becomes the biggest obstacle, making real-world use of FL a grand challenge. Most solutions presented in the literature use blockchain technology to mitigate these issues; blockchain prevents third-party systems from penetrating the decision process, but the network devices can still access shared data. Moreover, blockchain implementation requires new paradigms and infrastructure with an additional overhead cost. Motivated by these facts, this paper presents a limited access encryption algorithm incorporating FL (LEAF) framework, i.e., an encryption technique that solves privacy issues with the help of edge-enabled AI models. The proposed LEAF framework preserves user privacy and minimizes overhead costs. The authors have evaluated the performance of the LEAF framework using extensive simulations and achieved superior results: the accuracy of the proposed LEAF framework is 3% higher than that of traditional centralized and FL-based systems, without compromising user privacy. In the best scenario, the framework's encryption process also compresses the data size by 4-5 times.
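The compress-then-encrypt behaviour attributed to LEAF's encryption process can be illustrated with a generic sketch; the SHA-256 counter-mode keystream below is an illustrative stand-in, not the LEAF cipher, and the compression ratio depends entirely on how redundant the serialized model update is:

```python
import hashlib
import itertools
import zlib

def _keystream(key: bytes):
    """SHA-256 counter-mode keystream (illustrative, not the LEAF cipher)."""
    for ctr in itertools.count():
        yield from hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()

def protect(key: bytes, data: bytes) -> bytes:
    """Compress first (shrinks redundant model updates), then encrypt."""
    packed = zlib.compress(data, level=9)
    return bytes(b ^ k for b, k in zip(packed, _keystream(key)))

def recover(key: bytes, blob: bytes) -> bytes:
    """Decrypt with the same keystream, then decompress."""
    packed = bytes(b ^ k for b, k in zip(blob, _keystream(key)))
    return zlib.decompress(packed)
```

Compressing before encrypting matters: ciphertext looks random and is incompressible, so the order of the two steps is what allows the transmitted update to shrink at all.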

Journal ArticleDOI
TL;DR: In this paper, the authors present the integration of suitable low-powered consensus protocols and smart contract design to assess and validate blockchain-IoT ecosystems, and present a case study of a smart contract-based blockchain-driven ecosystem with a comparative analysis of mining cost and latency.
Abstract: Recently, Internet-of-Things (IoT) based applications have shifted from centralized infrastructures to decentralized ecosystems, owing to the security and privacy limitations of user data. The shift has opened new doors for intruders to launch distributed attacks in diverse IoT scenarios that jeopardize application environments. Moreover, as heterogeneous and autonomous networks communicate, the attacks intensify, which justifies the requirement of trust as a key policy. Recently, blockchain-based IoT solutions have been proposed that address trust limitations by maintaining data consistency, immutability, and chronology in IoT environments. However, IoT ecosystems are resource-constrained, with low bandwidth and the finite computing power of sensor nodes. Thus, the inclusion of blockchain requires an effective policy design for consensus and smart contract environments in heterogeneous IoT applications. Recent studies have presented blockchain as a potential solution for IoT, but an effective view of consensus and smart contract design that meets end-application requirements remains an open problem. Motivated by this, the survey presents the integration of suitable low-powered consensus protocols and smart contract design to assess and validate blockchain-IoT ecosystems. We present the emerging communication and security aspects of blockchain-IoT, along with the performance issues of consensus protocols, interoperability, and implementation platforms. A case study of a smart contract-based blockchain-driven ecosystem is presented with a comparative analysis of mining cost and latency, which shows its suitability in real-world setups. We also highlight attacks on blockchain-IoT, open issues, potential findings, and future directions. The survey intends to drive novel solutions for future consensus and safe smart contract designs to support applicative IoT ecosystems.
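One family of low-powered consensus protocols the survey covers, authority-based schemes, replaces energy-hungry mining puzzles with a fixed proposer schedule; a minimal round-robin Proof-of-Authority sketch (one of several possible low-power designs, not a protocol the survey prescribes) looks like this:

```python
def poa_round_robin(validators, height):
    """Proof-of-Authority with round-robin leader selection: the proposer for
    a given block height is fixed in advance, so constrained sensor nodes
    spend no energy on mining puzzles."""
    return validators[height % len(validators)]

def valid_proposer(validators, height, proposer) -> bool:
    """Peers reject any block whose proposer is out of turn."""
    return proposer == poa_round_robin(validators, height)
```

Because validity is a single modular lookup rather than a hash-rate contest, this style of consensus fits the low-bandwidth, finite-compute constraints the survey identifies for IoT nodes.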