
Showing papers in "IEEE Access in 2021"


Journal ArticleDOI
TL;DR: In this article, a novel LIS architecture based on sparse channel sensors is proposed, where all the LIS elements are passive except for a few elements that are connected to the baseband.
Abstract: Employing large intelligent surfaces (LISs) is a promising solution for improving the coverage and rate of future wireless systems. These surfaces comprise massive numbers of nearly-passive elements that interact with the incident signals, for example by reflecting them, in a smart way that improves the wireless system performance. Prior work focused on the design of the LIS reflection matrices assuming full channel knowledge. Estimating these channels at the LIS, however, is a key challenging problem. With the massive number of LIS elements, channel estimation or reflection beam training will be associated with (i) huge training overhead if all the LIS elements are passive (not connected to a baseband) or with (ii) prohibitive hardware complexity and power consumption if all the elements are connected to the baseband through a fully-digital or hybrid analog/digital architecture. This paper proposes efficient solutions for these problems by leveraging tools from compressive sensing and deep learning. First, a novel LIS architecture based on sparse channel sensors is proposed. In this architecture, all the LIS elements are passive except for a few elements that are active (connected to the baseband). We then develop two solutions that design the LIS reflection matrices with negligible training overhead. In the first approach, we leverage compressive sensing tools to construct the channels at all the LIS elements from the channels seen only at the active elements. In the second approach, we develop a deep-learning based solution where the LIS learns how to interact with the incident signal given the channels at the active elements, which represent the state of the environment and transmitter/receiver locations. We show that the achievable rates of the proposed solutions approach the upper bound, which assumes perfect channel knowledge, with negligible training overhead and with only a few active elements, making them promising for future LIS systems.
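For intuition, the compressive-sensing step can be pictured as follows: the channels measured at the few active elements act as compressed samples of a full-LIS channel that is sparse in a known dictionary (the angular/DFT domain in the paper), from which the full channel is recovered with a sparse solver. The sketch below is illustrative only; the dimensions, the random orthonormal basis standing in for the angular dictionary, and the use of orthogonal matching pursuit are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
N, M, S = 256, 16, 3                      # LIS elements, active elements, sparsity level

# A random orthonormal basis stands in for the angular (DFT) dictionary.
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
x = np.zeros(N)
x[rng.choice(N, S, replace=False)] = rng.standard_normal(S)
h_full = Q @ x                            # full-LIS channel, sparse in the chosen basis

active = rng.choice(N, M, replace=False)  # the few elements connected to the baseband
y = h_full[active]                        # channels actually measured

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=S, fit_intercept=False)
omp.fit(Q[active, :], y)                  # sparse recovery from M << N measurements
h_hat = Q @ omp.coef_                     # reconstructed channel at all N elements

print("relative error:", np.linalg.norm(h_hat - h_full) / np.linalg.norm(h_full))
```

In the paper's deep-learning alternative, the recovery step is replaced by a network that maps the active-element channels directly to a reflection configuration.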

271 citations


Journal ArticleDOI
TL;DR: In this paper, a review of deep learning based systems for the detection of the new coronavirus (COVID-19) outbreak has been presented, which can be potentially further utilized to combat the outbreak.
Abstract: The novel coronavirus (COVID-19) outbreak has raised a calamitous situation all over the world and has become one of the most acute and severe ailments in the past hundred years. The prevalence rate of COVID-19 is rapidly rising every day throughout the globe. Although no vaccines for this pandemic have been discovered yet, deep learning techniques have proved themselves to be a powerful tool in the arsenal used by clinicians for the automatic diagnosis of COVID-19. This paper aims to overview the recently developed systems based on deep learning techniques using different medical imaging modalities like Computed Tomography (CT) and X-ray. This review specifically discusses the systems developed for COVID-19 diagnosis using deep learning techniques and provides insights on well-known data sets used to train these networks. It also highlights the data partitioning techniques and various performance measures developed by researchers in this field. A taxonomy is drawn to categorize the recent works for proper insight. Finally, we conclude by addressing the challenges associated with the use of deep learning methods for COVID-19 detection and probable future trends in this research area. The aim of this paper is to facilitate experts (medical or otherwise) and technicians in understanding the ways deep learning techniques are used in this regard and how they can be potentially further utilized to combat the outbreak of COVID-19.

56 citations


Journal ArticleDOI
TL;DR: In this article, the authors focus on convergent 6G communication, localization and sensing systems by identifying key technology enablers, discussing their underlying challenges, implementation issues, and recommending potential solutions.
Abstract: Herein, we focus on convergent 6G communication, localization and sensing systems by identifying key technology enablers, discussing their underlying challenges, implementation issues, and recommending potential solutions. Moreover, we discuss exciting new opportunities for integrated localization and sensing applications, which will disrupt traditional design principles and revolutionize the way we live, interact with our environment, and do business. Regarding potential enabling technologies, 6G will continue to develop towards even higher frequency ranges, wider bandwidths, and massive antenna arrays. In turn, this will enable sensing solutions with very fine range, Doppler, and angular resolutions, as well as localization to cm-level degree of accuracy. Besides, new materials, device types, and reconfigurable surfaces will allow network operators to reshape and control the electromagnetic response of the environment. At the same time, machine learning and artificial intelligence will leverage the unprecedented availability of data and computing resources to tackle the biggest and hardest problems in wireless communication systems. As a result, 6G will be truly intelligent wireless systems that will provide not only ubiquitous communication but also empower high accuracy localization and high-resolution sensing services. They will become the catalyst for this revolution by bringing about a unique new set of features and service capabilities, where localization and sensing will coexist with communication, continuously sharing the available resources in time, frequency, and space. This work concludes by highlighting foundational research challenges, as well as implications and opportunities related to privacy, security, and trust.
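The resolution claims can be anchored with the standard first-order radar relations (textbook approximations, not equations taken from this article): range resolution improves with bandwidth B, and velocity (Doppler) resolution with carrier wavelength λ and observation time T_obs.

```latex
\Delta r = \frac{c}{2B}, \qquad \Delta v = \frac{\lambda}{2\,T_{\mathrm{obs}}};
\quad \text{e.g. } B = 1\,\text{GHz} \Rightarrow \Delta r = \frac{3\times 10^{8}}{2\times 10^{9}} = 0.15\ \text{m}, \qquad
f_c = 100\,\text{GHz},\ T_{\mathrm{obs}} = 10\,\text{ms} \Rightarrow \Delta v = \frac{3\times 10^{-3}}{2\times 10^{-2}} = 0.15\ \text{m/s}.
```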

42 citations


Journal ArticleDOI
TL;DR: In this paper, a survey comprehensively reviews over 200 reports covering robotic systems that have emerged or have been repurposed during the past several months, to provide insights to both academia and industry.
Abstract: As a result of the difficulties brought by COVID-19 and its associated lockdowns, many individuals and companies have turned to robots in order to overcome the challenges of the pandemic. Compared with traditional human labor, robotic and autonomous systems have advantages such as an intrinsic immunity to the virus and an inability for human-robot-human spread of any disease-causing pathogens, though there are still many technical hurdles for the robotics industry to overcome. This survey comprehensively reviews over 200 reports covering robotic systems which have emerged or have been repurposed during the past several months, to provide insights to both academia and industry. In each chapter, we cover both the advantages and the challenges for each robot, finding that robotic systems are overall apt solutions for dealing with many of the problems brought on by COVID-19, including: diagnosis, screening, disinfection, surgery, telehealth, care, logistics, manufacturing and broader interpersonal problems unique to the lockdowns of the pandemic. By discussing the potential new robot capabilities and the fields to which they have been applied, we expect the robotics industry to take a leap forward due to this unexpected pandemic.

41 citations


Journal ArticleDOI
TL;DR: A comprehensive comparison with various state-of-the-art methods reveals the importance of benchmarking the deep learning methods for automated real-time polyp identification and delineations that can potentially transform current clinical practices and minimise miss-detection rates.
Abstract: Computer-aided detection, localisation, and segmentation methods can help improve colonoscopy procedures. Even though many methods have been built to tackle automatic detection and segmentation of polyps, benchmarking of state-of-the-art methods still remains an open problem. This is due to the increasing number of researched computer vision methods that can be applied to polyp datasets. Benchmarking of novel methods can provide a direction to the development of automated polyp detection and segmentation tasks. Furthermore, it ensures that the produced results in the community are reproducible and provide a fair comparison of developed methods. In this paper, we benchmark several recent state-of-the-art methods using Kvasir-SEG, an open-access dataset of colonoscopy images for polyp detection, localisation, and segmentation, evaluating both method accuracy and speed. Whilst most methods in the literature have competitive performance in terms of accuracy, we show that the proposed ColonSegNet achieved a better trade-off, with an average precision of 0.8000 and a mean IoU of 0.8100 at the fastest speed of 180 frames per second for the detection and localisation task. Likewise, the proposed ColonSegNet achieved a competitive dice coefficient of 0.8206 and the best average speed of 182.38 frames per second for the segmentation task. Our comprehensive comparison with various state-of-the-art methods reveals the importance of benchmarking the deep learning methods for automated real-time polyp identification and delineations that can potentially transform current clinical practices and minimise miss-detection rates.
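For reference, the mean IoU and dice coefficient quoted above are the standard overlap metrics between a predicted polyp mask and the ground truth; a minimal sketch (illustrative only, not the benchmark's evaluation code):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    return 2.0 * np.logical_and(pred, target).sum() / denom if denom else 1.0

# toy example: a 2x2 ground-truth region and a slightly larger prediction
gt   = np.zeros((4, 4), dtype=int); gt[1:3, 1:3] = 1
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:4] = 1
print(iou(pred, gt), dice(pred, gt))   # 0.666..., 0.8
```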

41 citations


Journal ArticleDOI
TL;DR: In this article, a systematic literature review of contrastive and counterfactual explanations of artificial intelligence algorithms is presented, which provides readers with a thorough and reproducible analysis of the interdisciplinary research field under study.
Abstract: A number of algorithms in the field of artificial intelligence offer poorly interpretable decisions. To disclose the reasoning behind such algorithms, their output can be explained by means of so-called evidence-based (or factual) explanations. Alternatively, contrastive and counterfactual explanations justify why the output of the algorithms is not any different and how it could be changed, respectively. It is of crucial importance to bridge the gap between theoretical approaches to contrastive and counterfactual explanation and the corresponding computational frameworks. In this work we conduct a systematic literature review which provides readers with a thorough and reproducible analysis of the interdisciplinary research field under study. We first examine theoretical foundations of contrastive and counterfactual accounts of explanation. Then, we report the state-of-the-art computational frameworks for contrastive and counterfactual explanation generation. In addition, we analyze how grounded such frameworks are on the insights from the inspected theoretical approaches. As a result, we highlight a variety of properties of the approaches under study and reveal a number of shortcomings thereof. Moreover, we define a taxonomy regarding both theoretical and practical approaches to contrastive and counterfactual explanation.

40 citations


Journal ArticleDOI
TL;DR: In this paper, an effective sarcasm identification framework for social media data is presented by pursuing the paradigms of neural language models and deep neural networks; sarcasm detection on text documents is one of the most challenging tasks in NLP.
Abstract: Sarcasm identification on text documents is one of the most challenging tasks in natural language processing (NLP) and has become an essential research direction due to its prevalence on social media data. The purpose of our research is to present an effective sarcasm identification framework on social media data by pursuing the paradigms of neural language models and deep neural networks. To represent text documents, we introduce an inverse gravity moment based term-weighted word embedding model with trigrams. In this way, critical words/terms have higher values by keeping the word-ordering information. In our model, we present a three-layer stacked bidirectional long short-term memory architecture to identify sarcastic text documents. For the evaluation task, the presented framework has been evaluated on three sarcasm identification corpora. In the empirical analysis, three neural language models (i.e., word2vec, fastText and GloVe), two unsupervised term weighting functions (i.e., term-frequency, and TF-IDF) and eight supervised term weighting functions (i.e., odds ratio, relevance frequency, balanced distributional concentration, inverse question frequency-question frequency-inverse category frequency, short text weighting, inverse gravity moment, regularized entropy and inverse false negative-true positive-inverse category frequency) have been evaluated. For the sarcasm identification task, the presented model yields promising results with a classification accuracy of 95.30%.
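A minimal sketch of the three-layer stacked bidirectional LSTM classifier described above, assuming pre-computed term-weighted embedding sequences as input; the layer widths, sequence length, and sigmoid output head are illustrative choices rather than the paper's exact configuration:

```python
import tensorflow as tf

seq_len, emb_dim = 100, 300   # assumed sequence length and embedding dimension

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len, emb_dim)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # sarcastic vs. not sarcastic
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```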

38 citations


Journal ArticleDOI
TL;DR: A brief overview of the added features and key performance indicators of 5G NR is presented and a next-generation wireless communication architecture that acts as the platform for migration towards beyond 5G/6G networks is proposed.
Abstract: Nowadays, 5G is in its initial phase of commercialization. The 5G network will revolutionize the existing wireless network with its enhanced capabilities and novel features. 5G New Radio (5G NR), referred to as the global standard for 5G, is presently being developed under the 3rd Generation Partnership Project (3GPP) and can operate over a wide range of frequency bands, from below 6 GHz to mmWave (100 GHz). 3GPP mainly focuses on the three major use cases of 5G NR, which comprise Ultra-Reliable and Low Latency Communication (uRLLC), Massive Machine Type Communication (mMTC), and Enhanced Mobile Broadband (eMBB). For meeting the targets of 5G NR, multiple features like scalable numerology, flexible spectrum, forward compatibility, and ultra-lean design are added as compared to the LTE systems. This paper presents a brief overview of the added features and key performance indicators of 5G NR. The issues related to the adaptation of higher modulation schemes and inter-RAT handover synchronization are well addressed in this paper. With the consideration of these challenges, a next-generation wireless communication architecture is proposed. The architecture acts as the platform for migration towards beyond 5G/6G networks. Along with this, various technologies and applications of 6G networks are also overviewed in this paper. The 6G network will incorporate Artificial Intelligence (AI) based services, edge computing, quantum computing, optical wireless communication, hybrid access, and tactile services. For enabling these diverse services, a virtualized network slicing based architecture of 6G is proposed. Various ongoing projects on 6G and its technologies are also listed in this paper.

34 citations


Journal ArticleDOI
TL;DR: This work establishes a foundation of dynamic networks with consistent, detailed terminology and notation and presents a comprehensive survey of dynamic graph neural network models using the proposed terminology.
Abstract: Dynamic networks are used in a wide range of fields, including social network analysis, recommender systems and epidemiology. Representing complex networks as structures changing over time allows network models to leverage not only structural but also temporal patterns. However, as dynamic network literature stems from diverse fields and makes use of inconsistent terminology, it is challenging to navigate. Meanwhile, graph neural networks (GNNs) have gained a lot of attention in recent years for their ability to perform well on a range of network science tasks, such as link prediction and node classification. Despite the popularity of graph neural networks and the proven benefits of dynamic network models, there has been little focus on graph neural networks for dynamic networks. To address the challenges resulting from the fact that this research crosses diverse fields, as well as to survey dynamic graph neural networks, this work is split into two main parts. First, to address the ambiguity of the dynamic network terminology, we establish a foundation of dynamic networks with consistent, detailed terminology and notation. Second, we present a comprehensive survey of dynamic graph neural network models using the proposed terminology.

31 citations


Journal ArticleDOI
TL;DR: This work develops an algorithm that learns collision avoidance among a variety of heterogeneous, non-communicating, dynamic agents without assuming they follow any particular behavior rules and extends the previous work by introducing a strategy using Long Short-Term Memory (LSTM) that enables the algorithm to use observations of an arbitrary number of other agents.
Abstract: Collision avoidance algorithms are essential for safe and efficient robot operation among pedestrians. This work proposes using deep reinforcement learning (RL) as a framework to model the complex interactions and cooperation with nearby, decision-making agents, such as pedestrians and other robots. Existing RL-based works assume homogeneity of agent properties, use specific motion models over short timescales, or lack a principled method to handle a large, possibly varying number of agents. Therefore, this work develops an algorithm that learns collision avoidance among a variety of heterogeneous, non-communicating, dynamic agents without assuming they follow any particular behavior rules. It extends our previous work by introducing a strategy using Long Short-Term Memory (LSTM) that enables the algorithm to use observations of an arbitrary number of other agents, instead of a small, fixed number of neighbors. The proposed algorithm is shown to outperform a classical collision avoidance algorithm and another deep RL-based algorithm, and scales with the number of agents better (fewer collisions, shorter time to goal) than our previously published learning-based approach. Analysis of the LSTM provides insights into how observations of nearby agents affect the hidden state and quantifies the performance impact of various agent ordering heuristics. The learned policy generalizes to several applications beyond the training scenarios: formation control (arrangement into letters), demonstrations on a fleet of four multirotors, and on a fully autonomous robotic vehicle capable of traveling at human walking speed among pedestrians.
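The LSTM idea can be sketched as follows: each nearby agent's observation becomes one step of a (padded and masked) sequence, and the LSTM's final hidden state serves as a fixed-size summary of an arbitrary number of neighbors that is concatenated with the ego state before the policy/value head. Feature sizes, the masking value, and the value-head output below are assumptions of this sketch, not the authors' network.

```python
import tensorflow as tf

max_agents, obs_dim, ego_dim = 20, 7, 6   # padded neighbor slots, per-agent features

neighbors = tf.keras.layers.Input(shape=(max_agents, obs_dim))   # zero-padded neighbor observations
ego       = tf.keras.layers.Input(shape=(ego_dim,))              # ego-agent state

masked  = tf.keras.layers.Masking(mask_value=0.0)(neighbors)     # ignore padded slots
summary = tf.keras.layers.LSTM(64)(masked)                       # fixed-size neighbor summary

joint  = tf.keras.layers.Concatenate()([summary, ego])
hidden = tf.keras.layers.Dense(128, activation="relu")(joint)
value  = tf.keras.layers.Dense(1)(hidden)                        # e.g. a value head for RL

model = tf.keras.Model(inputs=[neighbors, ego], outputs=value)
model.summary()
```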

30 citations


Journal ArticleDOI
TL;DR: In this paper, a mobile app-based intelligent portable healthcare (pHealth) tool, called iWorkSafe, is presented to assist industries in detecting possible suspects for COVID-19 infection among their employees who may need primary care.
Abstract: The recent outbreak of the novel Coronavirus Disease (COVID-19) has given rise to diverse health issues due to its high transmission rate and limited treatment options. Almost the whole world, at some point of time, was placed in lock-down in an attempt to stop the spread of the virus, with resulting psychological and economic sequela. As countries start to ease lock-down measures and reopen industries, ensuring a healthy workplace for employees has become imperative. Thus, this paper presents a mobile app-based intelligent portable healthcare (pHealth) tool, called iWorkSafe, to assist industries in detecting possible suspects for COVID-19 infection among their employees who may need primary care. Developed mainly for low-end Android devices, the iWorkSafe app hosts a fuzzy neural network model that integrates data of employees’ health status from the industry’s database, proximity and contact tracing data from the mobile devices, and user-reported COVID-19 self-test data. Using the built-in Bluetooth low energy sensing technology and K Nearest Neighbor and K-means techniques, the app is capable of tracking users’ proximity and tracing contact with other employees. Additionally, it uses a logistic regression model to calculate the COVID-19 self-test score and a Bayesian Decision Tree model for checking real-time health condition from an intelligent e-health platform for further clinical attention of the employees. Rolled out in an apparel factory on 12 employees as a test case, the pHealth tool generates an alert to maintain social distancing among employees inside the industry. In addition, the app helps employees to estimate the risk of possible COVID-19 infection based on the collected data, and the score was found to be effective in estimating the personal health condition of the app user.

Journal ArticleDOI
TL;DR: In this article, a distributed switched consensus control algorithm for a group of robot manipulators is proposed to handle abrupt parameter jumps and changes in the directed communication topology during the control of networked manipulators, and a unified analysis methodology is developed to perform convergence analysis for the closed-loop system using Lyapunov stability theory.
Abstract: To handle the abrupt occurrence of parameter jumps and changes in directed communication topologies during the control of networked manipulators, in this paper distributed switched consensus control algorithms are formulated for a group of robot manipulators to realize cooperative consensus performance. In fact, networked Lagrange systems are modeled as switched systems regarding the different parameters and topologies. Namely, the dynamic models switch when the system parameters or the topology structures change. The consensus control strategy is constructed by resorting to the (improved) average dwell time (ADT) method and the sliding-mode control technique, and a unified analysis methodology is developed to perform the convergence analysis for the closed-loop system by Lyapunov stability theory. The main contribution of this paper is the development of a systematically adaptive consensus algorithm by simultaneously considering shifting parameters and a switching communication network (as two unavoidable key factors) in the process of communication interaction among robots. A distinctive feature of the developed consensus protocol is to introduce the directed network topology characterizing the local communication interaction among robots, which is especially suitable for representing the structures and features of realistic cooperative multi-robotic systems. Accordingly, the developed consensus tracking strategy for manipulators possesses prominent advantages, including robustness, stability and effectiveness, over existing approaches concentrated on single-robot counterparts. Finally, numerical simulations of two-link manipulators are performed to illustrate the effectiveness of the obtained control algorithm.


Journal ArticleDOI
TL;DR: In this article, the authors provide a comprehensive overview of the cyber-physical systems (CPS) security landscape with an emphasis on cyber-physical energy systems (CPES), and demonstrate a threat modeling methodology to accurately represent the CPS elements, their interdependencies, as well as the possible attack entry points and system vulnerabilities.
Abstract: Cyber-physical systems (CPS) are interconnected architectures that employ analog and digital components as well as communication and computational resources for their operation and interaction with the physical environment. CPS constitute the backbone of enterprise (e.g., smart cities), industrial (e.g., smart manufacturing), and critical infrastructure (e.g., energy systems). Thus, their vital importance, interoperability, and plurality of computing devices make them prominent targets for malicious attacks aiming to disrupt their operations. Attacks targeting cyber-physical energy systems (CPES), given their mission-critical nature within the power grid infrastructure, can lead to disastrous consequences. The security of CPES can be enhanced by leveraging testbed capabilities in order to replicate and understand power systems operating conditions, discover vulnerabilities, develop security countermeasures, and evaluate grid operation under fault-induced or maliciously constructed scenarios. Adequately modeling and reproducing the behavior of CPS could be a challenging task. In this paper, we provide a comprehensive overview of the CPS security landscape with an emphasis on CPES. Specifically, we demonstrate a threat modeling methodology to accurately represent the CPS elements, their interdependencies, as well as the possible attack entry points and system vulnerabilities. Leveraging the threat model formulation, we present a CPS framework designed to delineate the hardware, software, and modeling resources required to simulate the CPS and construct high-fidelity models that can be used to evaluate the system’s performance under adverse scenarios. The system performance is assessed using scenario-specific metrics, while risk assessment enables the system vulnerability prioritization factoring the impact on the system operation. The overarching framework for modeling, simulating, assessing, and mitigating attacks in a CPS is illustrated using four representative attack scenarios targeting CPES. The key objective of this paper is to demonstrate a step-by-step process that can be used to enact in-depth cybersecurity analyses, thus leading to more resilient and secure CPS.

Journal ArticleDOI
TL;DR: A narrative literature review examines the numerous developments and breakthroughs in the U-net architecture and provides observations on recent trends, and discusses the many innovations that have advanced in deep learning and how these tools facilitate U-nets.
Abstract: U-net is an encoder-decoder convolutional architecture developed primarily for image segmentation tasks. Its performance and architectural simplicity have given U-net a high utility within the medical imaging community and have resulted in extensive adoption of U-net as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use in nearly all major image modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, there have been instances of the use of U-net in other applications. Given that U-net’s potential is still increasing, this narrative literature review examines the numerous developments and breakthroughs in the U-net architecture and provides observations on recent trends. We also discuss the many innovations that have advanced deep learning and how these tools facilitate U-net. In addition, we review the different image modalities and application areas that have been enhanced by U-net.
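For readers new to the architecture, the sketch below shows the defining U-net pattern the review builds on: a contracting path, an expanding path, and skip connections that concatenate encoder features into the decoder, in a deliberately tiny two-level form with illustrative channel counts.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

inputs = layers.Input(shape=(128, 128, 1))

c1 = conv_block(inputs, 16)                 # encoder level 1
p1 = layers.MaxPooling2D()(c1)
c2 = conv_block(p1, 32)                     # encoder level 2
p2 = layers.MaxPooling2D()(c2)

b = conv_block(p2, 64)                      # bottleneck

u2 = layers.UpSampling2D()(b)               # decoder level 2
c3 = conv_block(layers.Concatenate()([u2, c2]), 32)   # skip connection from c2
u1 = layers.UpSampling2D()(c3)              # decoder level 1
c4 = conv_block(layers.Concatenate()([u1, c1]), 16)   # skip connection from c1

outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)   # per-pixel segmentation mask
model = tf.keras.Model(inputs, outputs)
model.summary()
```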

Journal ArticleDOI
TL;DR: In this paper, a matching decision method for manufacturing service resources is proposed based on multidimensional information fusion, where information entropy and rough set theory are applied to classify the importance of manufacturing service tasks, while the matching capability is analyzed by using a hybrid collaborative filtering algorithm.
Abstract: With the development of specialization, coordination and intelligence in the manufacturing service process, the issues of how to quickly extract potential resources or capabilities for distributed manufacturing service requirements, and how to carry out resource matching for manufacturing service requirements with correlated mapping characteristics, have become critical issues to be addressed in the cloud manufacturing environment. Through the combination of the characteristics of relevance, synergy and diversity of manufacturing service tasks on the intelligent cloud platform, a matching decision method for manufacturing service resources is proposed in this paper based on multidimensional information fusion. On the basis of integrating multidimensional information data in cloud manufacturing resources, information entropy and rough set theory are applied to classify the importance of manufacturing service tasks, while the matching capability is analyzed by using a hybrid collaborative filtering (HCF) algorithm. Then, the information of function attribute, reliability and preference is employed to match and push manufacturing service resources or capabilities actively, so as to realize the matching decision of manufacturing service resources with precise quality, stable service and maximum efficiency. Finally, a case study of the resource matching decision for body & chassis manufacturing services in a new energy automobile enterprise is presented, in which the experimental results show that the proposed approach is more accurate and effective compared with other recommendation algorithms.
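The information-entropy part of the grading can be illustrated with the standard entropy-weight calculation: attributes whose values vary more across candidates carry more information and receive larger weights. The toy decision matrix and attribute roles below are made up for illustration; the paper's rough-set refinement is not reproduced.

```python
import numpy as np

X = np.array([[0.8, 120.0, 3.0],      # rows: candidate tasks/resources
              [0.6,  90.0, 5.0],      # cols: attributes (e.g. quality, cost, time)
              [0.9, 150.0, 2.0],
              [0.7, 110.0, 4.0]])

P = X / X.sum(axis=0)                           # normalize each attribute column
k = 1.0 / np.log(X.shape[0])
E = -k * np.sum(P * np.log(P + 1e-12), axis=0)  # entropy per attribute
w = (1.0 - E) / np.sum(1.0 - E)                 # entropy weights

print("attribute weights:", np.round(w, 3))
```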

Journal ArticleDOI
TL;DR: In this paper, the use of a modified neural network algorithm (MNNA) is proposed as a novel adaptive tuning algorithm to optimize the controller gains and a new mathematical modulation is introduced to promote the exploration manner of NNA without initial parameters.
Abstract: Tuning a robot actuator to follow a predefined trajectory presents many challenges on account of parameter uncertainties and model nonlinearity. Furthermore, the controller gains require proper optimization to achieve good performance. In this paper, the use of a modified neural network algorithm (MNNA) is proposed as a novel adaptive tuning algorithm to optimize the controller gains. In addition, a new mathematical modulation is introduced to promote the exploration manner of the NNA without initial parameters. Specifically, the modulation is formed by using a polynomial mutation. The proposed algorithm is applied to select the proportional integral derivative (PID) controller gains of a robot manipulator's arms in lieu of conventional procedures based on designer expertise. Another vital contribution is formulating a new performance index that guarantees to improve the settling time and the overshoot of every arm output simultaneously. The proposed algorithm is evaluated against different intelligent techniques in the literature, including the genetic algorithm (GA) and the cuckoo search algorithm (CSA) with PID controllers, where its superiority in following various trajectories is demonstrated. To affirm the robustness and efficiency of the proposed algorithm, several trajectories and parameter uncertainties are considered for assessing the response of a robotic manipulator.
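A minimal sketch of the polynomial mutation operator mentioned above, applied to a candidate PID gain vector [Kp, Ki, Kd]; the distribution index, mutation rate, bounds, and gains are illustrative values, not the paper's settings.

```python
import numpy as np

def polynomial_mutation(x, lower, upper, eta=20.0, rate=0.5, rng=None):
    """Mutate each coordinate of x within [lower, upper] with probability `rate`."""
    rng = np.random.default_rng() if rng is None else rng
    x = x.copy()
    for i in range(len(x)):
        if rng.random() < rate:
            u = rng.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            x[i] = np.clip(x[i] + delta * (upper[i] - lower[i]), lower[i], upper[i])
    return x

gains = np.array([2.0, 0.5, 0.1])                  # candidate [Kp, Ki, Kd]
lo, hi = np.array([0.0, 0.0, 0.0]), np.array([10.0, 5.0, 1.0])
print(polynomial_mutation(gains, lo, hi))
```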

Journal ArticleDOI
TL;DR: In this article, an adaptive neuro-fuzzy inference system (ANFIS) is proposed for blade pitch control of wind energy conversion systems (WECS) instead of the conventional controllers.
Abstract: Wind speed fluctuations and load demand variations represent major challenges for wind energy conversion systems (WECS). In addition, inefficient measuring devices and environmental impacts (e.g., temperature, humidity, and noise signals) affect the system equipment, leading to increased system uncertainty. Moreover, the time delay of the communication channels can create a gap between the transmitted control signal and the WECS, causing instability in WECS operation. To tackle these issues, this paper proposes an adaptive neuro-fuzzy inference system (ANFIS) as an effective control technique for blade pitch control of the WECS instead of the conventional controllers. However, the ANFIS requires a suitable dataset for training and testing to adjust its membership functions in order to provide effective performance. In this regard, this paper also suggests an effective strategy to prepare a sufficient dataset for training and testing of the ANFIS controller. Specifically, a new optimization algorithm named the mayfly optimization algorithm (MOA) is developed to find the optimal parameters of the proportional integral derivative (PID) controller, thereby generating the optimal dataset for training and testing of the ANFIS controller. To demonstrate the advantages of the proposed technique, it is compared with three different algorithms in the literature. Another contribution is that a new time-domain index named the figure of demerit is established to confirm the simultaneous minimization of settling time and maximum overshoot. Numerous test scenarios are performed to confirm the effectiveness and robustness of the proposed ANFIS-based technique. The robustness of the proposed method is verified based on frequency-domain conditions that are derived from the Hermite–Biehler theorem. The results emphasize that the proposed controller provides superior performance against wind speed fluctuations, load demand variations, system parameter uncertainties, and the time delay of the communication channels.

Journal ArticleDOI
TL;DR: In this paper, the authors evaluated PERT-based critical paths of new service development (NSD) process for renewable energy investment projects and concluded that solar energy is the most appropriate renewable energy alternative.
Abstract: The purpose of this study is to evaluate PERT-based critical paths of the new service development (NSD) process for renewable energy investment projects. In this context, a novel three-stage model has been proposed. In the first stage, 10 different steps in the NSD process are weighted using a 2-tuple hesitant interval-valued spherical fuzzy (IVSF) DEMATEL approach. In the second stage, 26 different critical paths for the NSD process are identified. Moreover, the third stage includes ranking the renewable energy alternatives by path scenarios with 2-tuple hesitant IVSF TOPSIS. The findings demonstrate that idea screening and the formation of a cross-functional team are the most significant criteria for the NSD process of renewable energy investments. Additionally, when considering all activities of the NSD process, it is concluded that solar energy is the most appropriate renewable energy alternative. This result also holds when considering the longest path by activity number, the longest path by duration, and the shortest path by activity number. However, it is also determined that geothermal energy is the most ideal type of renewable energy to invest in when considering the shortest path by duration. Therefore, it is obvious that investors should primarily give importance to generating new products for solar energy projects. In this way, it will be easier for them to achieve efficiency in their investments. On the other hand, if there is a time constraint or a positive result is expected from the project in a short time, geothermal energy is the most suitable renewable energy type to invest in.

Journal ArticleDOI
Faisal Jamil, Naeem Iqbal, Imran, Shabir Ahmad, Do-Hyeun Kim
TL;DR: In this paper, a blockchain-based predictive energy trading platform is proposed to provide real-time support, day-ahead controlling, and generation scheduling of distributed energy resources in smart microgrids.
Abstract: It is expected that peer-to-peer energy trading will constitute a significant share of research in upcoming generation power systems due to the rising demand for energy in smart microgrids. However, the on-demand use of energy is considered a big challenge to achieve the optimal cost for households. This paper proposes a blockchain-based predictive energy trading platform to provide real-time support, day-ahead controlling, and generation scheduling of distributed energy resources. The proposed blockchain-based platform consists of two modules: a blockchain-based energy trading module and a smart contract enabled predictive analytics module. The blockchain module provides peers with real-time energy consumption monitoring, easy energy trading control, a reward model, and unchangeable energy trading transaction logs. The smart contract enabled predictive analytics module aims to build a prediction model based on historical energy consumption data to predict short-term energy consumption. This paper uses real energy consumption data acquired from the Jeju province energy department, the Republic of Korea. This study aims to achieve optimal power flow and energy crowdsourcing, supporting energy trading among the consumer and prosumer. Energy trading is based on day-ahead, real-time control, and scheduling of distributed energy resources to meet the smart grid’s load demand. Moreover, we use data mining techniques to perform time-series analysis to extract and analyze underlying patterns from the historical energy consumption data. The time-series analysis supports energy management to devise better future decisions to plan and manage energy resources effectively. To evaluate the proposed predictive model’s performance, we have used several statistical measures, such as mean square error and root mean square error, on various machine learning models, namely recurrent neural networks and the like. Moreover, we also evaluate the blockchain platform’s effectiveness through Hyperledger Caliper in terms of latency, throughput, and resource utilization. Based on the experimental results, the proposed model is effectively used for energy crowdsourcing between the prosumer and consumer to attain service quality.
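The forecasting evaluation can be sketched with the two quoted measures; the toy consumption series and the naive persistence forecaster below merely stand in for the paper's recurrent-network models.

```python
import numpy as np

consumption = np.array([3.1, 3.4, 3.0, 3.6, 3.8, 3.5, 3.9, 4.1])   # kWh per interval (toy data)
actual   = consumption[1:]
forecast = consumption[:-1]          # persistence baseline: predict the previous value

mse  = np.mean((actual - forecast) ** 2)
rmse = np.sqrt(mse)
print(f"MSE={mse:.4f}  RMSE={rmse:.4f}")
```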

Journal ArticleDOI
TL;DR: In this article, the authors proposed a model that incorporates different methods to achieve effective prediction of heart disease, which used efficient Data Collection, Data Pre-processing and Data Transformation methods to create accurate information for the training model.
Abstract: Cardiovascular diseases (CVD) are among the most common serious illnesses affecting human health. CVDs may be prevented or mitigated by early diagnosis, and this may reduce mortality rates. Identifying risk factors using machine learning models is a promising approach. We would like to propose a model that incorporates different methods to achieve effective prediction of heart disease. For our proposed model to be successful, we have used efficient Data Collection, Data Pre-processing and Data Transformation methods to create accurate information for the training model. We have used a combined dataset (Cleveland, Long Beach VA, Switzerland, Hungarian and Statlog). Suitable features are selected by using the Relief and Least Absolute Shrinkage and Selection Operator (LASSO) techniques. New hybrid classifiers like Decision Tree Bagging Method (DTBM), Random Forest Bagging Method (RFBM), K-Nearest Neighbors Bagging Method (KNNBM), AdaBoost Boosting Method (ABBM), and Gradient Boosting Boosting Method (GBBM) are developed by integrating the traditional classifiers with bagging and boosting methods, which are used in the training process. We have also instrumented some machine learning algorithms to calculate the Accuracy (ACC), Sensitivity (SEN), Error Rate, Precision (PRE) and F1 Score (F1) of our model, along with the Negative Predictive Value (NPR), False Positive Rate (FPR), and False Negative Rate (FNR). The results are shown separately to provide comparisons. Based on the result analysis, we can conclude that our proposed model produced the highest accuracy while using RFBM and Relief feature selection methods (99.05%).
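A rough sketch of the pipeline's flavor using scikit-learn: embedded (LASSO-style) feature selection followed by a bagging ensemble of tree learners, with a bagged random forest standing in for the paper's RFBM. The synthetic data, hyper-parameters, and the omission of the Relief selector are assumptions of this sketch, not the authors' setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=600, n_features=13, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    # L1-penalized selection as a stand-in for LASSO-based feature selection
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    # bagging ensemble wrapped around a random forest base learner
    BaggingClassifier(RandomForestClassifier(n_estimators=50), n_estimators=10),
)
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```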

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new PAD technique based on three image representation approaches combining local and global information of the fingerprint, which can correctly discriminate bona fide from attack presentations in the aforementioned scenarios.
Abstract: Fingerprint-based biometric systems have experienced a large development in the past. In spite of many advantages, they are still vulnerable to attack presentations (APs). Therefore, the task of determining whether a sample stems from a live subject (i.e., bona fide) or from an artificial replica is a mandatory requirement which has recently received considerable attention. Nowadays, when the materials for the fabrication of the Presentation Attack Instruments (PAIs) have been used to train the Presentation Attack Detection (PAD) methods, the PAIs can be successfully identified in most cases. However, current PAD methods still face difficulties detecting PAIs built from unknown materials and/or unknown recipes, or acquired using different capture devices. To tackle this issue, we propose a new PAD technique based on three image representation approaches combining local and global information of the fingerprint. By transforming these representations into a common feature space, we can correctly discriminate bona fide from attack presentations in the aforementioned scenarios. The experimental evaluation of our proposal over the LivDet 2011 to 2019 databases yielded error rates outperforming the top state-of-the-art results by up to 72% in the most challenging scenarios. In addition, the best representation achieved the best results in the LivDet 2019 competition (overall accuracy of 96.17%).

Journal ArticleDOI
TL;DR: In this article, a fuzzy logic based algorithm for varying the step size of the incremental conductance (INC) maximum power point tracking (MPPT) method for PV is proposed, where a variable voltage step size is estimated according to the degree of ascent or descent of the power-voltage relation.
Abstract: Recently, solar energy has been intensively employed in power systems, especially using photovoltaic (PV) generation units. In this regard, this paper proposes a novel design of a fuzzy logic based algorithm for varying the step size of the incremental conductance (INC) maximum power point tracking (MPPT) method for PV. In the proposed method, a variable voltage step size is estimated according to the degree of ascent or descent of the power-voltage relation. For this purpose, a novel unique treatment is proposed based on introducing five effective regions around the point of maximum PV power. To vary the step size of the duty cycle, a fuzzy logic system is developed according to the locations of the fuzzy inputs regarding the five regions. The developed fuzzy inputs are inspired by the slope of the power-voltage relation, namely the current-voltage ratio and its derivatives, whereas appropriate membership functions and fuzzy rules are designed. The benefit of the proposed method is that the MPPT efficiency is improved by varying the step size of the incremental conductance method, thanks to the effective coordination between the proposed fuzzy logic based algorithm and the INC method. The output DC power of the PV array and the tracking speed are presented as indices for illustrating the improvement achieved in MPPT. The proposed method is verified and tested through the simulation of a grid-connected PV system model. The simulation results reveal a valuable improvement in static and dynamic responses over that of the traditional INC method with the variation of the environmental conditions. Further, it enhances the output DC power and reduces the convergence time to reach the steady-state condition under intermittent environmental conditions.
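For context, the fixed-step incremental-conductance rule that the fuzzy system refines can be sketched as below: at the maximum power point dI/dV = -I/V, and the duty-cycle correction is scaled by how far the operating point is from that condition (a boost converter, where raising the duty cycle lowers the PV voltage, is assumed). The scaling constant is illustrative, and the fuzzy adjustment of the step, which is the paper's actual contribution, is not reproduced.

```python
def inc_mppt_step(v, i, v_prev, i_prev, duty, k=0.01, d_min=0.05, d_max=0.95):
    """One update of a variable-step INC MPPT loop (boost-converter sign convention)."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:                       # voltage unchanged: react to current change only
        error = di
    else:
        error = i + v * (di / dv)     # = dP/dV; zero at the maximum power point
    duty -= k * error                 # variable step: larger |error| -> larger correction
    return min(max(duty, d_min), d_max)

# one illustrative update
print(inc_mppt_step(v=30.2, i=7.9, v_prev=30.0, i_prev=8.0, duty=0.55))
```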

Journal ArticleDOI
TL;DR: In this paper, the major design aspects of such a cellular joint communication and sensing (JCAS) system are discussed, and an analysis of the choice of the waveform that points towards choosing the one that is best suited for communication also for radar sensing is presented.
Abstract: The 6G vision of creating authentic digital twin representations of the physical world calls for new sensing solutions to compose multi-layered maps of our environments. Radio sensing using the mobile communication network as a sensor has the potential to become an essential component of the solution. With the evolution of cellular systems to mmWave bands in 5G and potentially sub-THz bands in 6G, small cell deployments will begin to dominate. Large bandwidth systems deployed in small cell configurations provide an unprecedented opportunity to employ the mobile network for sensing. In this paper, we focus on the major design aspects of such a cellular joint communication and sensing (JCAS) system. We present an analysis of the choice of the waveform that points towards choosing the one that is best suited for communication also for radar sensing. We discuss several techniques for efficiently integrating the sensing capability into the JCAS system, some of which are applicable with NR air-interface for evolved 5G systems. Specifically, methods for reducing sensing overhead by appropriate sensing signal design or by configuring separate numerologies for communications and sensing are presented. Sophisticated use of the sensing signals is shown to reduce the signaling overhead by a factor of 2.67 for an exemplary road traffic monitoring use case. We then present a vision for future advanced JCAS systems building upon distributed massive MIMO and discuss various other research challenges for JCAS that need to be addressed in order to pave the way towards natively integrated JCAS in 6G.

Journal ArticleDOI
TL;DR: Based on the DeepFM model, the authors predict the incidence of hepatitis in each sample in the structured disease prediction data of the 2020 Artificial Intelligence Challenge Preliminary Competition, and make minor improvements and parameter adjustments to DeepFM.
Abstract: In recent years, with the increase in computing power, Deep Learning has come to be favored. Its ability to learn non-linear feature combinations has achieved, in almost every field, results that traditional machine learning cannot reach. The application of Deep Learning has also driven the advancement of the Factorization Machine (FM) in the field of recommendation systems, because Deep Learning and FM can learn high-order and low-order feature combinations, respectively, and FM’s hidden vector system enables it to learn information from sparse data. Their integration has attracted the attention of many scholars. They have researched many classic models such as Factorization-supported Neural Network (FNN), Product-based Neural Networks (PNN), Inner PNN (IPNN), Wide&Deep, Deep&Cross, DeepFM, etc. for the Click-Through-Rate (CTR) problem, and their performance is getting better and better. This kind of model is also suitable for agriculture, meteorology, disease prediction and other fields due to the above advantages. Based on the DeepFM model, we predict the incidence of hepatitis in each sample in the structured disease prediction data of the 2020 Artificial Intelligence Challenge Preliminary Competition, and make minor improvements and parameter adjustments to DeepFM. Compared with other models, the improved DeepFM has excellent performance in AUC. This research can be applied to electronic medical records to reduce the workload of doctors and let doctors focus on the samples with higher predicted incidence rates. For some changing data, such as blood pressure, height, weight, cholesterol, etc., we can introduce the Internet of Medical Things (IoMT). IoMT sensors can be used to transmit such data to ensure that the disease can be predicted in time. With IoMT incorporated, a healthcare system is formed that is superior in forecasting and time performance.
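The FM half of DeepFM can be illustrated by its second-order interaction term, which reduces the pairwise products of feature embeddings to a "square of sum minus sum of squares" expression; the field count and embedding size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
num_fields, emb_dim = 8, 4
x = rng.random(num_fields)                       # feature values (e.g. scaled/one-hot entries)
V = rng.standard_normal((num_fields, emb_dim))   # learned embedding vector per feature

weighted = V * x[:, None]                        # v_i * x_i
square_of_sum = weighted.sum(axis=0) ** 2
sum_of_square = (weighted ** 2).sum(axis=0)
fm_interaction = 0.5 * (square_of_sum - sum_of_square).sum()

print("second-order FM term:", fm_interaction)
```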

Journal ArticleDOI
TL;DR: An extensive survey on the use of blockchain and AI for combating coronavirus (COVID-19) epidemics is presented based on the rapidly emerging literature, and a new conceptual architecture which integrates blockchain and AI specifically for fighting COVID-19 is introduced.
Abstract: The beginning of 2020 has seen the emergence of coronavirus outbreak caused by a novel virus called SARS-CoV-2. The sudden explosion and uncontrolled worldwide spread of COVID-19 show the limitations of existing healthcare systems in timely handling public health emergencies. In such contexts, innovative technologies such as blockchain and Artificial Intelligence (AI) have emerged as promising solutions for fighting coronavirus epidemic. In particular, blockchain can combat pandemics by enabling early detection of outbreaks, ensuring the ordering of medical data, and ensuring reliable medical supply chain during the outbreak tracing. Moreover, AI provides intelligent solutions for identifying symptoms caused by coronavirus for treatments and supporting drug manufacturing. Therefore, we present an extensive survey on the use of blockchain and AI for combating COVID-19 epidemics. First, we introduce a new conceptual architecture which integrates blockchain and AI for fighting COVID-19. Then, we survey the latest research efforts on the use of blockchain and AI for fighting COVID-19 in various applications. The newly emerging projects and use cases enabled by these technologies to deal with coronavirus pandemic are also presented. A case study is also provided using federated AI for COVID-19 detection. Finally, we point out challenges and future directions that motivate more research efforts to deal with future coronavirus-like epidemics.

Journal ArticleDOI
TL;DR: In this article, a new application of the Forensic-Based Investigation Algorithm (FBIA), which is a new meta-heuristic optimization technique, is introduced to accurately extract the electrical parameters of different PV models.
Abstract: The accurate parameter extraction of photovoltaic (PV) modules is pivotal for determining and optimizing the energy output of PV systems into electric power networks. Consequently, a Photovoltaic Single-Diode Model (PVSDM), Double-Diode Model (PVDDM), and Triple-Diode Model (PVTDM) are demonstrated to account for the PV losses. This article introduces a new application of the Forensic-Based Investigation Algorithm (FBIA), which is a new meta-heuristic optimization technique, to accurately extract the electrical parameters of different PV models. The FBIA is inspired by the suspect investigation, location, and pursuit processes that are used by police officers. The FBIA has two phases, which are the investigation phase, carried out by the investigators team, and the pursuit phase, carried out by the police agents team. The validity of the FBIA for PVSDM, PVDDM, and PVTDM is assessed through numerical analysis executed under diverse values of solar irradiance and temperature. The optimal five, seven, and nine parameters of PVSDM, PVDDM, and PVTDM, respectively, are obtained using the FBIA and compared with those produced by various optimization techniques. The numerical results are compared for the commercial Photowatt-PWP 201 polycrystalline and Kyocera KC200GT modules. The efficacy of the FBIA for the three models is verified by comparing its standard deviation error with that obtained from various optimization techniques recently proposed in 2020, namely the Jellyfish Search (JFS) optimizer, Manta Ray Foraging Optimizer (MRFO), Marine Predators Algorithm (MPA), Equilibrium Optimizer (EO), and Heap-Based Optimizer (HBO). The standard deviations of the fitness values over 30 runs are found to be less than 1×10⁻⁶ for the three models, which makes the FBIA results extremely consistent. Therefore, FBIA is foreseen to be a competitive technique for PV module parameter extraction.
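The extraction objective for the single-diode case can be sketched as the RMSE of the model's current residual over measured current-voltage pairs; the parameter guess and the two toy measurements below are illustrative, not fitted values from the paper.

```python
import numpy as np

def sdm_residual(params, V, I, Vt=0.0259):       # Vt: thermal voltage at ~300 K
    """Residual of I = Iph - I0*(exp((V+I*Rs)/(n*Vt)) - 1) - (V+I*Rs)/Rsh on measured data."""
    Iph, I0, Rs, Rsh, n = params
    I_model = Iph - I0 * np.expm1((V + I * Rs) / (n * Vt)) - (V + I * Rs) / Rsh
    return I_model - I

def rmse(params, V, I):
    return np.sqrt(np.mean(sdm_residual(params, V, I) ** 2))

V_meas = np.array([0.10, 0.45])                   # volts (toy measurements)
I_meas = np.array([0.75, 0.60])                   # amps
params = (0.76, 3e-7, 0.035, 55.0, 1.5)           # Iph, I0, Rs, Rsh, n (initial guess)
print("RMSE:", rmse(params, V_meas, I_meas))      # the quantity a metaheuristic would minimize
```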

Journal ArticleDOI
TL;DR: In this article, a multi-objective slime mould algorithm (MOSMA) is proposed to solve the problem of multiobjective optimization problems in industrial environment by incorporating the optimal food path using the positive negative feedback system.
Abstract: This paper proposes a multi-objective Slime Mould Algorithm (MOSMA), a multi-objective variant of the recently-developed Slime Mould Algorithm (SMA), for handling multi-objective optimization problems in industry. Recently, for handling optimization problems, several meta-heuristic and evolutionary optimization techniques have been suggested for the optimization community. These methods tend to suffer from low-quality solutions on multi-objective optimization (MOO) problems, failing both to estimate the Pareto optimal solutions accurately and to spread the solutions across all objectives. The SMA method follows the logic gained from the oscillation behaviors of slime mould in laboratory experiments. The SMA algorithm shows a powerful performance compared to other well-established methods, and it is designed by incorporating the optimal food path using the positive-negative feedback system. The proposed MOSMA algorithm employs the same underlying SMA mechanisms for convergence combined with an elitist non-dominated sorting approach to estimate Pareto optimal solutions. As an a posteriori method, the multi-objective formulation is maintained in the MOSMA, and a crowding distance operator is utilized to increase the coverage of optimal solutions across all objectives. To verify and validate the performance of MOSMA, 41 different case studies, including unconstrained, constrained, and real-world engineering design problems, are considered. The performance of the MOSMA is compared with Multiobjective Symbiotic-Organism Search (MOSOS), Multi-objective Evolutionary Algorithm Based on Decomposition (MOEA/D), and Multiobjective Water-Cycle Algorithm (MOWCA) in terms of different performance metrics, such as Generational Distance (GD), Inverted Generational Distance (IGD), Maximum Spread (MS), Spacing, and Run-time. The simulation results demonstrated the superiority of the proposed algorithm in realizing high-quality solutions to all multi-objective problems, including linear, nonlinear, continuous, and discrete Pareto optimal fronts. The results indicate the effectiveness of the proposed algorithm in solving complicated multi-objective problems. This research will be backed up with extra online service and guidance for the paper’s source code at https://premkumarmanoharan.wixsite.com/mysite and https://aliasgharheidari.com/SMA.html . Also, the source code of SMA is shared with the public at https://aliasgharheidari.com/SMA.html .
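The crowding-distance operator mentioned above can be sketched as in NSGA-II-style methods: for each objective, the points of a non-dominated front are sorted and each point is credited with the normalized gap between its neighbors, with boundary points kept unconditionally. The toy front below is illustrative.

```python
import numpy as np

def crowding_distance(F):
    """F: (n_points, n_objectives) objective values of one non-dominated front."""
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        span = F[order[-1], j] - F[order[0], j]
        d[order[0]] = d[order[-1]] = np.inf          # boundary points always kept
        if span > 0:
            d[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return d

front = np.array([[1.0, 9.0], [2.0, 7.0], [4.0, 4.0], [7.0, 2.0], [9.0, 1.0]])
print(crowding_distance(front))
```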

Journal ArticleDOI
TL;DR: A potential fog-cloud combined IoT platform that can be used in the systematic and intelligent COVID-19 prevention and control, which involves five interventions including COVID-19 Symptom Diagnosis, Quarantine Monitoring, Contact Tracing & Social Distancing, COVID-19 Outbreak Forecasting, and SARS-CoV-2 Mutation Tracking is demonstrated.
Abstract: As a result of the worldwide transmission of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), coronavirus disease 2019 (COVID-19) has evolved into an unprecedented pandemic. Currently, with unavailable pharmaceutical treatments and low vaccination rates, this novel coronavirus has a great impact on public health, human society, and the global economy, an impact which is likely to last for many years. One of the lessons learned from the COVID-19 pandemic is that a long-term system with non-pharmaceutical interventions for preventing and controlling new infectious diseases should be implemented. The Internet of Things (IoT) platform is well suited to achieving this goal, due to its ubiquitous sensing ability and seamless connectivity. IoT technology is changing our lives through smart healthcare, smart home, and smart city, which aims to build a more convenient and intelligent community. This paper presents how the IoT could be incorporated into the epidemic prevention and control system. Specifically, we demonstrate a potential fog-cloud combined IoT platform that can be used in the systematic and intelligent COVID-19 prevention and control, which involves five interventions including COVID-19 Symptom Diagnosis, Quarantine Monitoring, Contact Tracing & Social Distancing, COVID-19 Outbreak Forecasting, and SARS-CoV-2 Mutation Tracking. We investigate and review the state-of-the-art literature on these five interventions to present the capabilities of IoT in countering the current COVID-19 pandemic or future infectious disease epidemics.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the blockchain concept and relevant factors that provide a detailed analysis of potential security attacks, and present existing solutions that can be deployed as countermeasures to such attacks.
Abstract: Blockchain technology is becoming increasingly attractive to the next generation, as it is uniquely suited to the information era. Blockchain technology can also be applied to the Internet of Things (IoT). The advancement of IoT technology in various domains has led to substantial progress in distributed systems. The blockchain concept requires a decentralized data management system for storing and sharing the data and transactions in the network. This paper discusses the blockchain concept and relevant factors that provide a detailed analysis of potential security attacks and presents existing solutions that can be deployed as countermeasures to such attacks. This paper also includes blockchain security enhancement solutions by summarizing key points that can be exploited to develop various blockchain systems and security tools that counter security vulnerabilities. Finally, the paper discusses open issues and future research directions relating to blockchain-IoT systems.