scispace - formally typeset

Showing papers by "Mohammad Reza Khosravi published in 2022"


Journal ArticleDOI
TL;DR: A service offloading (SOL) method based on deep reinforcement learning is proposed for DT-empowered IoV in edge computing; SOL leverages a deep Q-network (DQN), which combines the value-function approximation of deep learning with reinforcement learning.
Abstract: With the potential to support computing-intensive applications, edge computing is combined with the digital twin (DT)-empowered Internet of Vehicles (IoV) to enhance intelligent transportation capabilities. By updating digital twins of vehicles and offloading services to edge computing devices (ECDs), the insufficiency of vehicles’ computational resources can be complemented. However, owing to the computational intensity of DT-empowered IoV, an ECD can become overloaded under excessive service requests, which degrades the quality of service (QoS). To address this problem, this article analyzes a multiuser offloading system in which QoS is reflected through the response time of services. A service offloading (SOL) method based on deep reinforcement learning is then proposed for DT-empowered IoV in edge computing. To obtain optimized offloading decisions, SOL leverages a deep Q-network (DQN), which combines the value-function approximation of deep learning with reinforcement learning. Finally, experiments against comparative methods indicate that SOL is effective and adaptable in diverse environments.

107 citations
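The value-based offloading decision described in this abstract can be illustrated with a toy sketch. Everything below is an assumption for illustration only: the latency model, the three-ECD setup, and a tabular Q function standing in for the paper's deep Q-network.

```python
import random

# Toy one-step offloading model (all numbers are illustrative): a service
# request observes the queue lengths of three edge computing devices
# (ECDs) and must pick one; response time grows with the chosen ECD's
# queue. A tabular Q function stands in for the paper's deep Q-network.
random.seed(0)
N_ECDS, ALPHA = 3, 0.5

def response_time(queues, ecd):
    return 1.0 + queues[ecd]          # assumed toy latency model

Q = {}                                # state (queue tuple) -> action values
def q(state):
    return Q.setdefault(state, [0.0] * N_ECDS)

# Training: sample random queue snapshots, try random offloading actions,
# and move Q(s, a) toward the observed reward (negative response time).
for _ in range(5000):
    state = tuple(random.randrange(3) for _ in range(N_ECDS))
    action = random.randrange(N_ECDS)
    reward = -response_time(state, action)
    q(state)[action] += ALPHA * (reward - q(state)[action])

def offload(queues):
    """Greedy offloading decision from the learned action values."""
    vals = q(tuple(queues))
    return vals.index(max(vals))
```

After training, the greedy policy routes each request to the least-loaded ECD, which is the behavior the response-time reward encodes.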


Journal ArticleDOI
TL;DR: This work puts forward an accurate LSH (locality-sensitive hashing)-based traffic flow prediction approach with the ability to protect privacy, and demonstrates the feasibility of the proposal in terms of prediction accuracy and efficiency while guaranteeing sensor data privacy.
Abstract: With the continuous increase in city size and volume, traffic-related urban units (e.g., vehicles, roads, and buildings) are emerging rapidly, which places a heavy burden on the scientific traffic control of smart cities. In this situation, it is becoming a necessity to utilize the sensor data from the massive number of cameras deployed at city crossings for accurate traffic flow prediction. However, traffic sensor data are often distributed and stored by different organizations or parties with zero trust, which significantly impedes multi-party sensor data sharing due to privacy concerns. It therefore requires challenging efforts to balance the tradeoff between data sharing and data privacy to enable cross-organization traffic data fusion and prediction. In light of this challenge, we put forward an accurate LSH (locality-sensitive hashing)-based traffic flow prediction approach with the ability to protect privacy. Finally, through a series of experiments on a real-world traffic dataset, we demonstrate the feasibility of our proposal in terms of prediction accuracy and efficiency while guaranteeing sensor data privacy.

30 citations
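The privacy mechanism can be sketched with classic random-hyperplane LSH (an assumption for illustration: the abstract does not spell out the exact hash family used). Each party shares only a bit signature of its traffic feature vector, and signature agreement approximates the similarity of the underlying records.

```python
import random

# Random-hyperplane LSH sketch (an assumption: the abstract does not
# spell out the exact hash family). Each party shares only the bit
# signature of its traffic feature vector, never the raw counts.
random.seed(1)
DIM, N_BITS = 4, 32
PLANES = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_BITS)]

def signature(vec):
    """One bit per hyperplane: which side of the plane the vector lies on."""
    return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                 for plane in PLANES)

def similarity(sig_a, sig_b):
    """Fraction of matching bits approximates angular similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / N_BITS

# Two similar traffic records (vehicles/hour over 4 intervals) produce
# near-identical signatures; a very different record does not.
rush  = signature([120, 380, 400, 390])
rush2 = signature([118, 375, 405, 385])
spike = signature([400, 10, 8, 5])
```

Because only signatures cross organizational boundaries, similar historical records can be found and aggregated for prediction without exposing raw sensor values.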


Journal ArticleDOI
TL;DR: A smart and sustainable conceptual framework is introduced that leverages cloud computing, IoT devices, and artificial intelligence to process data and obtain necessary information; it provides digital analytics and saves results in decentralized cloud repositories through blockchain technology to support various applications.
Abstract: Advancements in digital technologies, such as the Internet of Things (IoT), fog/edge/cloud computing, and cyber-physical systems, have revolutionized a broad spectrum of smart city applications. The significant contributions and rapid development of advanced artificial intelligence-based technologies and approaches, such as machine learning and deep learning, which are applied to extract accurate information from extensive data, play a pivotal role in IoT applications. Moreover, the fast adoption of blockchain technology also plays a significant role in the development of the new digital smart city ecosystem. Thus, the convergence of artificial intelligence and blockchain technology revolutionizes smart city infrastructures to establish sustainable ecosystems for IoT applications. Nevertheless, these advancements and technological improvements also provide both opportunities and challenges for developing sustainable IoT applications. This paper aims to examine the convergence of blockchain technology and artificial intelligence as a unique driver of technological transformation in intelligent and sustainable IoT applications. We mainly discuss the advantages of blockchain technology that might promote the advancement and development of sustainable IoT applications. On the basis of this discussion, we introduce a smart and sustainable conceptual framework that leverages cloud computing, IoT devices, and artificial intelligence to process data and obtain necessary information. The system provides digital analytics and saves results in decentralized cloud repositories through blockchain technology to support various applications. Moreover, the layer-based architecture allows a sustainable incentive structure, which can help secure and protect smart city applications. We review the enhanced solutions, summing up the key points that can be applied to build various artificial intelligence and blockchain-based systems. We also discuss the issues that remain open and our future research goals, which can introduce new ideas and future guidelines for sustainable IoT applications.

27 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the different aspects of multimedia computing in Video Synthetic Aperture Radar (Video-SAR) as a new mode of radar imaging for real-time remote sensing and surveillance.

22 citations


Journal ArticleDOI
TL;DR: The results revealed that the proposed feature selection method (RF-GWO) outperformed other state-of-the-art methods, achieving overall accuracy, precision, and sensitivity rates of 98.74%, 98.88%, and 98.63%, respectively.
Abstract: Colorectal cancer (CRC) is one of the most common malignant cancers worldwide. To reduce cancer mortality, early diagnosis and treatment are essential, leading to greater improvement in patients' outcomes and survival length. In this paper, a hybrid feature selection technique (RF-GWO) based on the random forest (RF) algorithm and gray wolf optimization (GWO) is proposed for handling high-dimensional and redundant datasets for early diagnosis of CRC. Feature selection aims to select the minimal, most relevant subset of features out of a vast amount of complex, noisy data to reach high classification accuracy. GWO and the RF algorithm were utilized to find the most suitable features in histological images of the human colorectal cancer dataset. Then, based on the best-selected features, an artificial neural network (ANN) classifier was applied to classify multiclass texture in colorectal cancer. A comparison between GWO and another optimizer, particle swarm optimization (PSO), was also conducted to determine which technique is the most successful in enhancing the RF algorithm. Furthermore, it is crucial to select an optimizer capable of removing redundant features and attaining the optimal feature subset, and therefore achieving high CRC classification performance in terms of accuracy, precision, and sensitivity. The Heidelberg University Medical Center Pathology archive was used for the performance check of the proposed method, which was found to outperform benchmark approaches. The results revealed that the proposed feature selection method (RF-GWO) outperformed other state-of-the-art methods, achieving overall accuracy, precision, and sensitivity rates of 98.74%, 98.88%, and 98.63%, respectively.

13 citations
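The wrapper idea (grey wolf optimization searching over binary feature masks, scored by a simple classifier) can be sketched as below. The synthetic data, the nearest-centroid fitness, and all constants are illustrative assumptions, not the paper's actual histology setup.

```python
import random

random.seed(2)
# Synthetic stand-in for the paper's histology features (assumption:
# here features 0 and 1 separate two classes; features 2-5 are noise).
N, DIM = 60, 6
X, y = [], []
for i in range(N):
    label = i % 2
    X.append([random.gauss(4.0 * label if j < 2 else 0.0, 1.0)
              for j in range(DIM)])
    y.append(label)

def fitness(mask):
    """Nearest-centroid training accuracy minus a per-feature penalty."""
    feats = [j for j in range(DIM) if mask[j]]
    if not feats:
        return 0.0
    cents = {c: [sum(X[i][j] for i in range(N) if y[i] == c) / (N // 2)
                 for j in feats] for c in (0, 1)}
    def predict(row):
        dist = {c: sum((row[j] - cents[c][k]) ** 2
                       for k, j in enumerate(feats)) for c in (0, 1)}
        return min(dist, key=dist.get)
    acc = sum(predict(X[i]) == y[i] for i in range(N)) / N
    return acc - 0.01 * len(feats)

# Simplified grey wolf optimizer: positions live in [0, 1]^DIM and a
# coordinate > 0.5 means "select that feature".
wolves = [[random.random() for _ in range(DIM)] for _ in range(8)]
best_mask, best_fit = None, -1.0
for it in range(30):
    a = 2.0 * (1 - it / 30)                       # a decays from 2 to 0
    ranked = sorted(wolves, key=lambda w: -fitness([v > 0.5 for v in w]))
    alpha, beta, delta = (list(w) for w in ranked[:3])
    f = fitness([v > 0.5 for v in alpha])
    if f > best_fit:
        best_fit, best_mask = f, [v > 0.5 for v in alpha]
    for w in wolves:                              # encircling update
        for j in range(DIM):
            pos = 0.0
            for leader in (alpha, beta, delta):
                A = 2 * a * random.random() - a
                C = 2 * random.random()
                pos += leader[j] - A * abs(C * leader[j] - w[j])
            w[j] = min(1.0, max(0.0, pos / 3))
```

The per-feature penalty plays the role of the redundancy-removal objective: of two masks with equal accuracy, the smaller subset wins.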


Journal ArticleDOI
TL;DR: This paper intends to create an optimized algorithm for implementing an automatic license plate recognition system, and employs a neural network to extract the plate characters automatically.
Abstract: Vehicles play a vital role in modern intelligent transportation systems (ITS), and plate characters provide a standard means of identification for any vehicle. To serve this purpose, an automatic license plate recognition system is studied. In this paper, we develop an optimized algorithm for implementing such a scheme through several challenging stages. The first step is the determination of the plate location. In the second phase, an initial enhancement suppresses likely noise using a Gaussian function as an appropriate filter for this target. The subsequent steps are finding the edges of the images, enhancing the modified pictures, and selecting the exact position of the plate. Afterward, correction of tilt and plate rotation and extraction of the plate characters are two essential steps. Finally, the last stage employs a neural network to extract the plate characters automatically.

13 citations
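The early pre-processing steps just described (Gaussian noise suppression, then edge detection for plate localization) can be sketched as below; the tiny synthetic "image" and 3×3 kernels are illustrative assumptions, not the paper's data.

```python
# Sketch of the pre-processing pipeline described above (Gaussian noise
# suppression, then edge detection for plate localization). The tiny
# synthetic "image" and 3x3 kernels are illustrative assumptions.
H, W = 16, 20
img = [[0.0] * W for _ in range(H)]
for r in range(4, 12):                # a bright rectangle standing in
    for c in range(5, 15):            # for a license plate region
        img[r][c] = 1.0

GAUSS = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]    # Gaussian kernel (sum 16)
SX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]    # Sobel horizontal gradient
SY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]    # Sobel vertical gradient

def convolve(image, kernel, scale):
    out = [[0.0] * W for _ in range(H)]
    for r in range(1, H - 1):
        for c in range(1, W - 1):
            out[r][c] = sum(kernel[i][j] * image[r - 1 + i][c - 1 + j]
                            for i in range(3) for j in range(3)) / scale
    return out

blurred = convolve(img, GAUSS, 16)           # step 1: denoise
gx = convolve(blurred, SX, 1)                # step 2: edge responses
gy = convolve(blurred, SY, 1)
edges = [[(gx[r][c] ** 2 + gy[r][c] ** 2) ** 0.5 for c in range(W)]
         for r in range(H)]

# Strong responses cluster on the rectangle's border, which is what the
# plate-localization step thresholds on.
strong = sum(1 for r in range(H) for c in range(W) if edges[r][c] > 1.0)
```

The gradient magnitude is flat inside the plate and peaks on its border, which is why edge detection after smoothing is a natural localization step.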


Journal ArticleDOI
TL;DR: In this paper, a high-performance electro-absorption optical modulator based on the epsilon-near-zero (ENZ) effect is proposed, in which an external voltage is applied across the graphene layers to change the carrier concentration in the indium tin oxide (ITO) layers.
Abstract: A high-performance electro-absorption optical modulator based on the epsilon-near-zero (ENZ) effect is proposed. The structure consists of a waveguide with a silicon (Si) core over which a stack of graphene/HfO2/graphene/ITO/HfO2/graphene is grown, covered by a Si cladding. An external voltage is applied across the graphene layers to change the carrier concentration in the indium tin oxide (ITO) layers. Using a self-consistent theory, the required voltage to reach the ENZ points in the ITO layers is calculated to be up to 3.42 V for an ITO thickness of 5 nm. The operation of the modulator is investigated using a three-dimensional finite-difference time-domain (FDTD) method, resulting in a modulation depth as high as 5.23 dB/μm (5.36 dB/μm) at a wavelength of 1.55 μm for the TE (TM) polarization, which ensures the polarization insensitivity of the proposed modulator. The insertion loss of the modulator is calculated to be on the order of 2.5 × 10⁻³ dB/μm, which yields a figure of merit (FOM) of more than 1800. The outstanding features of the proposed modulator are mainly attributed to using the Si cladding layer instead of a metal cladding. Furthermore, in contrast to previously studied structures with metal electrodes, the graphene layers significantly reduce the insertion loss.

12 citations
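The reported figure of merit can be sanity-checked directly from the numbers in the abstract (FOM = modulation depth / insertion loss):

```python
# Values quoted in the abstract above.
modulation_depth_te = 5.23    # dB/um, modulation depth for TE polarization
insertion_loss = 2.5e-3       # dB/um, insertion loss (order of magnitude)

# Figure of merit = modulation depth / insertion loss.
fom = modulation_depth_te / insertion_loss   # ~2090, i.e. "more than 1800"
```

5.23 / 0.0025 ≈ 2092, consistent with the claimed FOM of "more than 1800".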


Journal ArticleDOI
TL;DR: In this paper, a deep reinforcement learning-based multi-objective edge server placement strategy, named DESP, is fully explored to promote the coverage rate and workload balancing and to reduce the average delay of finishing tasks in the IoV.

11 citations


Journal ArticleDOI
TL;DR: This study proposes and discusses an easy-to-implement energy-efficient location-based opportunistic routing protocol (EELORP) that can work efficiently for various applications of UASN-assisted Internet of Underwater Things (IoUTs) platforms with reduced delay.
Abstract: In underwater acoustic sensor networks (UASNs), the reliable transfer of data from source nodes located underwater to destination nodes at the surface through a network of intermediate nodes is a significant challenge due to various unique characteristics of UASNs, such as the continuous mobility of sensor nodes, increased propagation delay, restricted energy, and heightened interference. Recently, location-based opportunistic routing protocols have shown potential by providing commendable quality of service (QoS) in the underwater environment. This study initially reviews the latest location-based opportunistic routing protocols proposed for UASNs and discusses their possible limitations and challenges. Most existing works focus either on improving QoS or on energy efficiency, and the few hybrid protocols that address both are too complex, with increased overhead, and lack techniques to overcome communication voids. Further, this study proposes and discusses an easy-to-implement energy-efficient location-based opportunistic routing protocol (EELORP) that can work efficiently for various applications of UASN-assisted Internet of Underwater Things (IoUT) platforms with reduced delay. We simulate the protocol in Aqua-Sim, and the results obtained show better performance than existing protocols in terms of QoS and energy efficiency.

7 citations
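The core of location-based opportunistic forwarding can be sketched as below. The scoring formula is a hypothetical illustration (the abstract does not publish EELORP's exact metric): rank each neighbor by how much closer it moves the packet toward the surface sink, weighted by residual energy.

```python
# Hypothetical scoring sketch (the abstract does not publish EELORP's
# exact formula): location-based opportunistic routing ranks each
# neighbor by how much closer it moves the packet toward the surface
# sink, weighted by residual energy, then forwards to the best ones.
def score(node_depth, neighbor):
    """neighbor = (depth_m, residual_energy in [0, 1]); higher is better."""
    progress = node_depth - neighbor[0]       # metres gained toward surface
    if progress <= 0:
        return float("-inf")                  # never route away from sink
    return progress * neighbor[1]             # energy-weighted advancement

def candidate_set(node_depth, neighbors, k=2):
    """Pick the k best positive-progress neighbors as forwarders."""
    ranked = sorted(neighbors, key=lambda nb: score(node_depth, nb),
                    reverse=True)
    return [nb for nb in ranked if score(node_depth, nb) > 0][:k]

# A node at 500 m depth with four neighbors: (depth, residual energy).
neighbors = [(450, 0.9), (400, 0.2), (480, 1.0), (520, 1.0)]
chosen = candidate_set(500, neighbors)        # deeper neighbor excluded
```

Excluding neighbors with non-positive progress is one simple way such protocols avoid routing into communication voids below the current node.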


Journal ArticleDOI
TL;DR: A support vector machine with grey wolf optimizer (SVM-GWO) hybrid regression model is presented to predict the wear rates of UHMWPE from published pin-on-disc wear data typical of prosthetic hip implant studies; it proved to be a reliable and robust model.
Abstract: One of the greatest challenges in joint arthroplasty is to enhance the wear resistance of ultrahigh molecular weight polyethylene (UHMWPE), one of the most successful polymers used as acetabular bearings in total hip joint prostheses. In order to improve UHMWPE wear rates, it is necessary to develop efficient methods to predict its wear rates in various conditions, and therefore help in improving its wear resistance and mechanical properties and increasing its life span inside the body. This article presents a support vector machine with grey wolf optimizer (SVM-GWO) hybrid regression model to predict the wear rates of UHMWPE based on published polyethylene data from pin-on-disc (PoD) wear experiments typically performed in the field of prosthetic hip implants. The dataset was an aggregate of 29 different PoD UHMWPE datasets collected from the Google Scholar and PubMed databases, and it consisted of 129 data points. Shapley additive explanations (SHAP) values were used to interpret the presented model, identifying the most important and decisive parameters that affect the wear rates of UHMWPE and, therefore, predicting its wear behavior inside the body under different conditions. The results revealed that radiation dose had the highest impact on the model's prediction, where high radiation doses had a negative impact on the model output. The pronounced effect of irradiation dose and surface roughness on the wear rates of polyethylene was clear in the results: when average disc surface roughness Ra values were below 0.05 μm and irradiation doses were above 95 kGy, the predicted wear rate was 0 mg/MC. The proposed model proved to be reliable and robust for predicting wear rates and prioritizing the factors that most significantly affect them, and it can help materials engineers to further design polyethylene acetabular linings by improving wear resistance and minimizing the necessity for wear experiments.

6 citations


Journal ArticleDOI
TL;DR: Through the improved ARIMA model, the proposed approach can capture the underlying pattern of how healthcare data change over time and accurately predict missing data; experiments conducted on the WISDM dataset show that the MHDP SVD_ARIMA approach is effective and efficient in predicting missing healthcare data.
Abstract: Healthcare uses state-of-the-art technologies (such as wearable devices, blood glucose meters, and electrocardiographs), which results in the generation of large amounts of data. Healthcare data are essential in patient management and play a critical role in transforming healthcare services, medical scheme design, and scientific research. Missing data is a challenging problem in healthcare due to system failures and untimely filing, resulting in inaccurate diagnoses and treatment anomalies. Therefore, there is a need to accurately predict and impute missing data, as only complete data can provide a scientific and comprehensive basis for patients, doctors, and researchers. However, traditional approaches in this paradigm often neglect the effect of the time factor on forecasting results. This paper proposes a time-aware missing healthcare data prediction approach based on the autoregressive integrated moving average (ARIMA) model. We combine a truncated singular value decomposition (SVD) with the ARIMA model to improve its prediction efficiency and to remove data redundancy and noise. Through the improved ARIMA model, our proposed approach (named MHDP SVD_ARIMA) can capture the underlying pattern of how healthcare data change over time and accurately predict missing data. The experiments conducted on the WISDM dataset show that the MHDP SVD_ARIMA approach is effective and efficient in predicting missing healthcare data.
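The two stages (truncated SVD for denoising, then a time-series forecast for the missing reading) can be sketched as below. The synthetic series, the rank-1 truncation, and the AR(1) stand-in for full ARIMA are all illustrative assumptions, not the paper's WISDM setup.

```python
import numpy as np

# Sketch of the two stages (truncated SVD to denoise, then an AR-style
# forecast for a missing reading). The synthetic series, the rank-1
# truncation, and the AR(1) stand-in for full ARIMA are all assumptions.
rng = np.random.default_rng(0)

# 8 days x 24 hourly readings: one smooth daily pattern plus sensor
# noise, so the matrix is approximately rank 1.
hours = np.arange(24)
pattern = 50 + 30 * np.sin(2 * np.pi * hours / 24)
data = np.vstack([pattern + rng.normal(0, 2, 24) for _ in range(8)])

# Stage 1: truncated SVD keeps only the dominant component (denoising
# and redundancy removal).
U, s, Vt = np.linalg.svd(data, full_matrices=False)
denoised = s[0] * np.outer(U[:, 0], Vt[0])

# Stage 2: fit AR(1) by least squares on the denoised series and predict
# the final "missing" reading (a crude stand-in for full ARIMA; the
# forecast lands within a few units of the truth on this toy series).
series = denoised.ravel()
x, z = series[:-2], series[1:-1]
phi = float(np.dot(x - x.mean(), z - z.mean())
            / np.dot(x - x.mean(), x - x.mean()))
pred = z.mean() + phi * (series[-2] - x.mean())
truth = series[-1]
```

The dominant singular value towers over the rest, which is what makes the rank-1 truncation an effective noise filter before the forecasting step.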

DOI
01 Jan 2022
TL;DR: Wang et al. proposed a privacy-aware and accurate missing traffic flow prediction approach based on a time-aware Locality-Sensitive Hashing technique and evaluated it through a set of experiments on a real traffic dataset.
Abstract: With the continuous development of IoT, numerous sensors are installed on the roadside to monitor traffic conditions in real time. The traffic data continuously generated by these sensors make traffic management feasible. However, data loss may occur due to inevitable sensor failures, impeding traffic managers from understanding traffic dynamics clearly. In this situation, it is becoming a necessity to predict missing traffic flow accurately for effective traffic management. Furthermore, traffic sensor data are often distributed and stored by different agencies, which significantly inhibits multi-party sensor data sharing due to privacy concerns. Therefore, balancing the tradeoff between data sharing and vehicle privacy has become a major obstacle. In light of these challenges, we propose a privacy-aware and accurate missing traffic flow prediction approach based on a time-aware Locality-Sensitive Hashing technique. Finally, we deploy a set of experiments on a real traffic dataset. The experimental reports demonstrate the feasibility of our proposal in terms of traffic flow prediction accuracy and efficiency while guaranteeing sensor data privacy.

Journal ArticleDOI
TL;DR: In this article, a novel and intelligent method is proposed that uses a reference history of flows to assign an importance degree to each table entry; it exploits the popularity of the traffic flows in the table to select the intended flow for replacement.
Abstract: Software-defined networks have been developed to allow the entire network to be managed as a programmable entity. As a well-known protocol in this field, OpenFlow installs the packet forwarding rules for the distinct packets of Big Data flows (known as flow entries) in the flow tables of network switches in order to implement the desired management policies. Despite their high speed, flow tables have limited capacity to store the information of Big Data flows. With an inefficient policy for replacing the entries of the flow table, the absence of flow entries corresponding to incoming packets in the flow table of a switch increases both the number of references to the controller for forwarding those packets and the packet forwarding delay. The underlying idea of the proposed method is to use the popularity of the traffic flows in the table to select the flow to be replaced. For the replacement of flow table entries, a novel and intelligent method is proposed in this research that uses a reference history of flows to assign an importance degree to each table entry. Comparison of the simulation results confirms the superiority of the method in reducing the controller's overhead.
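The replacement idea can be sketched as below. The details are assumptions for illustration: the paper derives an "importance degree" from a reference history, while here a simple hit counter plays that role, with the least-popular entry evicted when the flow table is full.

```python
# Sketch of popularity-based flow entry replacement (an assumption: a
# plain hit counter stands in for the paper's history-based importance
# degree). Evicting the least-popular entry keeps hot flows resident,
# which reduces misses and hence references to the controller.
CAPACITY = 3

class FlowTable:
    def __init__(self):
        self.entries = {}        # flow id -> (action, popularity count)

    def lookup(self, flow):
        if flow in self.entries:
            action, hits = self.entries[flow]
            self.entries[flow] = (action, hits + 1)
            return action        # table hit: forward directly
        return None              # miss: the switch must ask the controller

    def install(self, flow, action):
        if flow not in self.entries and len(self.entries) >= CAPACITY:
            # Evict the least-popular entry (lowest reference count).
            victim = min(self.entries, key=lambda f: self.entries[f][1])
            del self.entries[victim]
        self.entries[flow] = (action, 0)

table = FlowTable()
for f in ("a", "b", "c"):
    table.install(f, "fwd-" + f)
for _ in range(5):
    table.lookup("a")             # "a" becomes the most popular flow
table.lookup("b")
table.install("d", "fwd-d")       # table full: evicts "c" (zero hits)
```

After the final install, the unpopular flow "c" is gone while the frequently referenced "a" survives, which is exactly the behavior a popularity-driven policy aims for.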


Journal ArticleDOI
TL;DR: In this article, the authors propose a new intelligent solution that protects the main servers at the network edge by using alternative edge servers and agents; the simulation results show that the proposed mechanism considerably increases the processing efficiency at the edge servers while simultaneously providing the required security against Internet attacks.
Abstract: In a distributed denial-of-service attack, a large volume of packets is sent to the victim server to prevent service delivery and make it unavailable to valid users and clients at the network edge. In this paper, we use content delivery networks to propose a new intelligent solution that protects the main servers at the network edge by using alternative edge servers and agents. Thus, Internet attacks on the main servers can be prevented. The simulation results show that the proposed mechanism considerably increases the processing efficiency at the edge servers while simultaneously providing the required security against Internet attacks.

Journal ArticleDOI
TL;DR: In this article, a novel two-step method for detecting falls in thermal image frames using deep learning (DL) is presented: tracking tools first locate people, and a modified deep transfer learning technique then classifies the resulting trajectories to detect falls.
Abstract: People's need for healthcare capacity has become increasingly critical as the elderly population continues to grow in most communities. Approximately 25–47% of seniors fall annually, and early detection of poor balance can significantly reduce their risk. Automated fall detection with big data analytics is key to maintaining the safety of the elderly in smart cities. However, visible image systems (VIS) in smart buildings may compromise the privacy of seniors when enabling technologies for intelligent big data analytics (IBDA). Thermal imaging (TI) is less obtrusive than visual imaging and can be used in combination with machine vision to perform a wide range of IBDA. In this study, we present a novel two-step method for detecting falls in TI frames using deep learning (DL). In the first step, tracking tools are used to locate people. A novel modified deep transfer learning (TL) technique is then used to classify the trajectory created by the tracking approach for people who are at risk of falling. Fall detection by the IBDA will be connected to the Internet of Medical Things (IoMT) and used as smart technology in the process of big data-assisted pervasive surveillance and health analytics. According to an analysis of the publicly available thermal fall dataset, our method outperforms traditional fall detection methods, with an average error of less than 3%. Additionally, IoMT platforms facilitate data processing, real-time monitoring, and healthcare management. Our smart scheme for using big data analytics to enable intelligent decisions is compatible with various spaces and provides a comfortable and safe environment for current and future elderly people.

Journal ArticleDOI
TL;DR: In this article, a tuple space flow classification algorithm is parallelized on a CPU cluster using MPI and OpenMP under different scenarios; the maximum flow classification speed is achieved when the sum of processes and threads does not outnumber the CPU cores.
Abstract: The considerable move towards the use of renewable energy resources has been enabled by the digitization of energy systems with the help of virtual power plants (VPPs). However, because this move coincides with the introduction of new information and communication technologies, joining these systems raises concerns about the privacy of personal data. The only real-world approach widely used in this case is to anonymize or pseudonymize the information associated with individuals in the data received from distributed measurement devices. In this paper, we propose classifying the received data packets into different flows and assigning a different access level to each flow, which makes the data pseudonymous. Before this step, the received data, which arrive in different formats, are unified. To implement this idea, a tuple space flow classification algorithm is parallelized on a CPU cluster using MPI and OpenMP under different scenarios. The CPU cluster consists of one head node and two computational nodes for the packet classification operations. Two scenarios are used to run the algorithm in parallel: the first uses MPI alone, and the second uses a combination of the MPI and OpenMP libraries. According to our results, the increase in the number of processor cores is linearly correlated with the increase in classification speed. Furthermore, while MPI uses more memory than OpenMP, it helps to achieve a higher classification speed. In the combined method, the maximum flow classification speed is achieved when the number of processes and threads equals the number of processor cores; in other words, when the sum of processes and threads does not outnumber the CPU cores, the least classification time and memory usage can be achieved.
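The tuple space search itself can be sketched serially as below (the paper distributes these probes over MPI processes and OpenMP threads; the rules and addresses here are invented for illustration, and priority resolution among multiple matches is omitted).

```python
# Sketch of tuple-space packet classification (serial version; the
# paper parallelizes this search with MPI/OpenMP across cluster nodes).
# Rules sharing the same (src prefix length, dst prefix length) tuple
# go into one exact-match hash table, so a lookup probes one table per
# tuple. Priority resolution among multiple matches is omitted.
from collections import defaultdict

def prefix_key(addr, plen):
    """Keep the top `plen` bits of a 32-bit address."""
    return addr >> (32 - plen) if plen else 0

tuples = defaultdict(dict)   # (src_len, dst_len) -> {key: action}

def add_rule(src, src_len, dst, dst_len, action):
    key = (prefix_key(src, src_len), prefix_key(dst, dst_len))
    tuples[(src_len, dst_len)][key] = action

def classify(src, dst):
    """Probe every tuple's hash table; these independent probes are
    what gets distributed over processes/threads in the parallel runs."""
    for (sl, dl), table in tuples.items():
        key = (prefix_key(src, sl), prefix_key(dst, dl))
        if key in table:
            return table[key]
    return "default"

A = lambda a, b, c, d: (a << 24) | (b << 16) | (c << 8) | d
add_rule(A(10, 0, 0, 0), 8, A(192, 168, 0, 0), 16, "flow-1")
add_rule(A(10, 1, 0, 0), 16, A(0, 0, 0, 0), 0, "flow-2")
```

Because each tuple's table is probed independently, the per-tuple probes partition naturally across cores, which is what makes the MPI/OpenMP parallelization in the paper straightforward.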


Journal ArticleDOI
TL;DR: In this paper, an approach to stress detection for smart homes based on metaheuristic fuzzy inference system-based learning (fMFiS-L) and emotion recognition is presented.

Journal ArticleDOI
TL;DR: In this paper, a pipeline-based micro-core used in network processors to classify packets is proposed; it has a power consumption of 1.294 W, and its throughput at a frequency of 233 MHz exceeds 147 Gbps.

Proceedings ArticleDOI
06 Nov 2022
TL;DR: In this paper, the essentiality and importance of CiteScore, the most recent Scopus indicator, is addressed for evaluating periodical publications in the area of computer science and communications (CSC) indexed and abstracted by Scopus (mainly including journals and magazines).
Abstract: This paper addresses the essentiality and importance of CiteScore, the most recent Scopus indicator, for evaluating periodical publications in the area of computer science and communications (CSC) indexed and abstracted by Scopus (mainly including journals and magazines). Currently, Scopus's older indicator, SJR (the SCImago Journal Rank), is the most common way of calculating scientometric quartiles through Scopus. The study shows that CiteScore is the best alternative to SJR for finding Scopus-related quartiles for CSC publications. In fact, it may be the best rival to the IF-based quartiles (as the representative of WoS), at least for CSC journals.

Journal ArticleDOI
TL;DR: In this paper, two algorithms are presented for reducing power consumption during TCAM memory updates; the key idea is to reduce the search range as well as the number of displacements while inserting and deleting rules in TCAM.
Abstract: Classification is a fundamental processing task in advanced network systems. This technique is exploited in 5G/6G wireless sensor networks, where flow-based processing of internet packets is in high demand from intelligent applications that analyze big volumes of data in a limited time. In this process, the input packets are classified into specific streams by matching them against a set of filters. Ternary content-addressable memory (TCAM) is used in hardware implementations of internet packet classification. Thanks to its parallel search capability, this memory makes hardware classifiers much faster than software-based ones, but as the number of rules stored in its layers grows, the power required for searching, inserting, and deleting rules increases. Various architectures have been proposed to address this problem, but none of them offers a plan to reduce power consumption while the rules in TCAM memory are being updated. In this paper, two algorithms are presented for reducing power consumption during TCAM memory updates. The key idea in the proposed algorithms is to reduce the search range as well as the number of displacements while inserting and deleting rules in TCAM. Implementation and evaluation of the proposed methods show a reduction of more than 50% in the number of TCAM accesses for both algorithms, as well as a shorter update time for the second algorithm compared to the first, which confirms the efficiency of both methods.
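The displacement-reduction idea can be sketched with a simple software model. This is an illustrative assumption, not the paper's exact scheme: TCAM entries are kept in priority order, and an insert shifts existing entries toward whichever end of the occupied range is nearer, so fewer entries are rewritten.

```python
import bisect

# Illustrative model (an assumption, not the paper's exact scheme):
# TCAM entries are kept in priority order, and an insert shifts the
# existing entries toward whichever end of the occupied range is
# nearer, so the number of displaced entries -- a proxy for the write
# power spent on an update -- is minimized.
class Tcam:
    def __init__(self):
        self.rules = []          # rule priorities, kept in sorted order
        self.moves = 0           # displaced entries (power proxy)

    def insert(self, priority):
        i = bisect.bisect(self.rules, priority)
        # Shifting left moves i entries; shifting right moves the rest.
        self.moves += min(i, len(self.rules) - i)
        self.rules.insert(i, priority)

    def delete(self, priority):
        i = bisect.bisect_left(self.rules, priority)
        # Close the gap from whichever side has fewer entries.
        self.moves += min(i, len(self.rules) - i - 1)
        del self.rules[i]

tcam = Tcam()
naive_moves = 0                  # baseline: always shift one direction
for p in (50, 10, 90, 20, 80, 30, 70, 40, 60):
    i = bisect.bisect(tcam.rules, p)
    naive_moves += len(tcam.rules) - i   # naive: shift everything right
    tcam.insert(p)
```

On this insertion sequence the two-direction policy performs 12 displacements versus 16 for the one-direction baseline, illustrating how choosing the shorter shift cuts update work.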

Proceedings ArticleDOI
06 Nov 2022
TL;DR: In this article , the authors used the Markov chain approach to determine the most accurate simulation runs and timing parameters of the EV and charging station in the car-sharing scenario and determined the behavior of consumers in standby and charge queue length.
Abstract: As environmental challenges have grown with global modernization, technology is crucial in addressing pertinent problems by lowering pollution in many regions. One of the most crucial topics is intelligent transportation, which may be advanced by substituting fossil-fuel vehicles, which have increased megacity pollution in recent years, with Electric Vehicles (EVs). This research models the EV and charging station in a car-sharing scenario using the Markov chain approach to determine the most accurate simulation runs and timing parameters. In addition, by separating the region into commercial and residential zones, we attempt to determine the behavior of consumers in terms of standby time and charge queue length. The findings demonstrate the simulation's effectiveness, as they are an accurate reflection of actual behavior.
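The Markov chain approach can be sketched on a toy model of one charging point. The states and transition probabilities below are invented for illustration (the paper calibrates its chain from the car-sharing scenario); the stationary distribution then gives long-run utilization and queueing behavior.

```python
# Toy Markov model of one charging point (transition probabilities are
# invented for illustration; the paper calibrates its chain from the
# car-sharing scenario). States: 0 = idle, 1 = charging, 2 = EV queued.
P = [
    [0.6, 0.4, 0.0],   # idle -> stays idle / an EV arrives and charges
    [0.3, 0.5, 0.2],   # charging -> finishes / continues / one EV queues
    [0.0, 0.7, 0.3],   # queued -> head of queue starts charging / waits
]

def stationary(P, steps=200):
    """Power iteration: pi_{t+1} = pi_t P converges to the stationary
    distribution for this irreducible, aperiodic chain."""
    pi = [1.0, 0.0, 0.0]
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
    return pi

pi = stationary(P)
utilization = pi[1]      # long-run fraction of time the point charges
queue_frac = pi[2]       # long-run fraction of time an EV is waiting
```

Solving pi P = pi by hand for this chain gives pi = (21/57, 28/57, 8/57), so the charging point is busy about 49% of the time and has a waiting EV about 14% of the time; the power iteration converges to the same values.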

OtherDOI
Mathematische Nachrichten, Volume 295, Issue 1: Issue Information. First published: 31 January 2022, https://doi.org/10.1002/mana.202210012. No abstract is available for this article.


Journal ArticleDOI
TL;DR: This work uses a TS-based clustering algorithm to obtain clustering subsets that enhance the detection capability for Android malware, and proposes a novel framework, TSDroid, which uses the lifecycle of APIs as the temporal metric and the sizes of apps as the spatial metric.
Abstract: In the era of tremendous growth in smart healthcare, plenty of smart devices facilitate cognitive computing for purposes such as lower cost and smarter diagnostics. The Android system has been widely adopted as the main operating system in the field of IoMT. However, Android malware is becoming a major security concern for healthcare, posing serious threats to medical software assets such as leakage of private information and abuse of critical operations. Unfortunately, existing methods focus on building sustainable classification models without fully considering the system API, which is the key to model aging. In contrast to traditional methods, we apply the lifecycle of APIs as a temporal metric. In addition to this temporal view, the sizes of apps are utilized as a spatial metric in the spatial view. On this basis, we first discuss the temporal and spatial metrics together in terms of clustering, and then propose our novel framework, TSDroid. In this framework, we use a TS-based clustering algorithm to obtain clustering subsets that enhance the detection capability. We have carried out experimental verification on three existing state-of-the-art methods (i.e., Drebin, HinDroid, and DroidEvolver) and obtained good improvements with our framework.
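The temporal/spatial clustering step can be illustrated with a minimal sketch. The feature definitions below (midpoint of the API-level span as the temporal metric, APK size as the spatial metric) and the plain k-means routine are our assumptions for illustration; TSDroid's actual metric definitions and clustering algorithm may differ:

```python
import random
from math import dist  # Euclidean distance (Python 3.8+)

def ts_features(app):
    """Temporal metric: midpoint of the Android API levels spanned by the
    app's calls (a rough proxy for API lifecycle); spatial metric: APK
    size in MB. Both definitions are illustrative, not TSDroid's exact ones."""
    temporal = (app["api_min_level"] + app["api_max_level"]) / 2
    spatial = app["apk_size_mb"]
    return (temporal, spatial)

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means over (temporal, spatial) pairs; returns the final
    centers and the clustering subsets."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each app to its nearest center.
            clusters[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        # Recompute centers; keep the old center if a cluster emptied.
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters
```

Usage would be `points = [ts_features(a) for a in apps]` followed by `kmeans(points, k)`; a separate detector (e.g., one Drebin-style model per subset) would then be trained on each clustering subset, which is where the framework's detection gains come from.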

Journal ArticleDOI
TL;DR: In this article , the authors focus on artificial intelligence Internet of Things for medical things (AIoMT), with particular emphasis on medical sensors for remote patient monitoring and body area interfacing.
Abstract: The papers in this special section focus on the artificial intelligence Internet of Things for medical things (AIoMT), with particular emphasis on medical sensors for remote patient monitoring and body area interfacing. The section examines issues involving design and implementation, practical use, measurements, and patient monitoring.


Journal ArticleDOI
TL;DR: In this article, the design and simulation of a Si3N4 waveguide with embedded ABC-metamaterial for generating the second harmonic (SH) of a 1.55 μm wave is presented.
Abstract: ABC-metamaterial, structured by repeated atomic layer deposition of a stack of three dielectric materials, opens a new possibility for creating second-order nonlinearity in CMOS-compatible photonics platforms. Here, the design and simulation of a Si3N4 waveguide with embedded ABC-metamaterial for generating the second harmonic (SH) of a 1.55 μm wave is presented. High-overlap-integral modal phase matching between the fundamental mode at the fundamental frequency and the second-order mode at the SH is attained by properly engineering the waveguide geometry. Numerical analyses show an absolute efficiency of 0.013% for 10 mW pump power in the straight waveguide, which can be enhanced to at least 13.4% by employing a microring resonator.
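The modal phase matching referred to in the abstract is the standard requirement that the second-order SH mode accumulate phase at the same rate as the fundamental pump mode; in generic textbook notation (symbols are ours, not the paper's):

```latex
% Modal phase matching for second-harmonic generation (SHG):
% the SH propagation constant must be twice the pump's, which is
% equivalent to matching the effective indices of the two modes.
\beta^{(2)}(2\omega) \;=\; 2\,\beta^{(1)}(\omega)
\quad\Longleftrightarrow\quad
n_{\mathrm{eff}}^{(2)}(2\omega) \;=\; n_{\mathrm{eff}}^{(1)}(\omega)
```

Since $\beta = n_{\mathrm{eff}}\,\omega / c$, tuning the waveguide cross-section until the second-order mode at $2\omega$ has the same effective index as the fundamental mode at $\omega$ is what "properly engineering the waveguide geometry" amounts to.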

Journal ArticleDOI
04 May 2022
TL;DR: A novel feature selection framework is implemented on the Hadoop and Apache Spark platform and compared with existing feature selection models from the literature; the results show that the proposed model performs well in terms of scalability and accuracy.
Abstract: In data analysis, data scientists usually focus on the size of the data rather than on feature selection. Owing to the extreme growth of Internet resources, data are growing exponentially with ever more features, which leads to big-data dimensionality problems. The high volume of features contains much redundant data, which may reduce classification accuracy. In this context, feature selection attracts the research community seeking to identify and remove irrelevant features with greater scalability and accuracy. To this end, this study presents a novel feature selection framework implemented on the Hadoop and Apache Spark platform. The proposed model combines rough sets with a differential evolution (DE) algorithm: rough sets are used to find a minimal feature subset, but since they do not consider the degree of overlap in the data, the DE algorithm is used to find the most optimal features. The proposed model is studied with Random Forest and Naive Bayes classifiers on five well-known data sets and compared with existing feature selection models from the literature. The results show that the proposed model performs well in terms of scalability and accuracy.
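The DE stage of such a framework can be sketched generically. The DE/rand/1/bin routine below evolves real vectors in [0, 1] and thresholds them at 0.5 into feature masks, a common way of applying DE to feature selection; all names and the fitness interface are our assumptions, not the paper's Spark implementation:

```python
import random

def de_feature_select(n_features, fitness, pop_size=20, gens=40,
                      F=0.5, CR=0.9, seed=0):
    """DE/rand/1/bin over real vectors in [0, 1]; component j selects
    feature j when it exceeds 0.5. `fitness` scores a boolean mask
    (higher is better). Returns the best mask and its score."""
    rng = random.Random(seed)
    mask = lambda v: [x > 0.5 for x in v]
    pop = [[rng.random() for _ in range(n_features)] for _ in range(pop_size)]
    score = [fitness(mask(v)) for v in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: v = a + F * (b - c) with three distinct peers.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(n_features)  # forced crossover position
            trial = [min(1.0, max(0.0, pop[a][j] + F * (pop[b][j] - pop[c][j])))
                     if rng.random() < CR or j == jrand else pop[i][j]
                     for j in range(n_features)]
            s = fitness(mask(trial))
            if s >= score[i]:  # greedy one-to-one selection
                pop[i], score[i] = trial, s
    best = max(range(pop_size), key=score.__getitem__)
    return mask(pop[best]), score[best]
```

In the paper's setting, `fitness` would presumably be the cross-validated accuracy of a Random Forest or Naive Bayes classifier on the masked features, restricted to the candidate subset produced by the rough-set stage, minus a small penalty on the number of selected features.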