
Showing papers in "IEEE Access in 2021"


Journal ArticleDOI
TL;DR: Point Transformer as mentioned in this paper is a deep neural network that operates directly on unordered and unstructured point sets to extract local and global features and relate both representations by introducing the local-global attention mechanism.
Abstract: In this work, we present Point Transformer, a deep neural network that operates directly on unordered and unstructured point sets. We design Point Transformer to extract local and global features and relate both representations by introducing the local-global attention mechanism, which aims to capture spatial point relations and shape information. For that purpose, we propose SortNet, as part of the Point Transformer, which induces input permutation invariance by selecting points based on a learned score. The output of Point Transformer is a sorted and permutation invariant feature list that can directly be incorporated into common computer vision applications. We evaluate our approach on standard classification and part segmentation benchmarks to demonstrate competitive results compared to the prior work.
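
To make the score-based selection idea concrete, here is a minimal PyTorch sketch of a SortNet-style top-k selector (illustrative only, not the authors' code; the scoring MLP, the value of k, and the dimensions are assumptions):

```python
import torch
import torch.nn as nn

class ScoreTopK(nn.Module):
    """Toy SortNet-style selector: score each point with a small MLP,
    keep the k highest-scoring points, and return them ordered by score,
    so the output does not depend on the input point ordering."""
    def __init__(self, in_dim, k):
        super().__init__()
        self.k = k
        self.score = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, points):                       # points: (B, N, in_dim), unordered
        s = self.score(points).squeeze(-1)           # (B, N) learned scores
        top = torch.topk(s, self.k, dim=1)           # indices of the k highest scores
        idx = top.indices.unsqueeze(-1).expand(-1, -1, points.size(-1))
        selected = torch.gather(points, 1, idx)      # (B, k, in_dim), sorted by score
        return selected

# Usage: permuting the N input points leaves the selected, score-sorted set unchanged.
x = torch.randn(2, 1024, 3)
out = ScoreTopK(in_dim=3, k=64)(x)
print(out.shape)  # torch.Size([2, 64, 3])
```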

581 citations


Journal ArticleDOI
TL;DR: A narrative literature review examines the numerous developments and breakthroughs in the U-net architecture and provides observations on recent trends, and discusses the many innovations that have advanced deep learning and how these tools facilitate U-net.
Abstract: U-net is an image segmentation architecture developed primarily for biomedical image segmentation tasks. Its design has given U-net high utility within the medical imaging community and has resulted in extensive adoption of U-net as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use across nearly all major imaging modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, it has also been used in other applications. Given that U-net’s potential is still increasing, this narrative literature review examines the numerous developments and breakthroughs in the U-net architecture and provides observations on recent trends. We also discuss the many innovations that have advanced deep learning more broadly and how these tools facilitate U-net. In addition, we review the different imaging modalities and application areas that have been enhanced by U-net.

425 citations


Journal ArticleDOI
TL;DR: In this article, a novel LIS architecture based on sparse channel sensors is proposed, where all the LIS elements are passive except for a few elements that are connected to the baseband.
Abstract: Employing large intelligent surfaces (LISs) is a promising solution for improving the coverage and rate of future wireless systems. These surfaces comprise massive numbers of nearly-passive elements that interact with the incident signals, for example by reflecting them, in a smart way that improves the wireless system performance. Prior work focused on the design of the LIS reflection matrices assuming full channel knowledge. Estimating these channels at the LIS, however, is a key challenging problem. With the massive number of LIS elements, channel estimation or reflection beam training will be associated with (i) huge training overhead if all the LIS elements are passive (not connected to a baseband) or with (ii) prohibitive hardware complexity and power consumption if all the elements are connected to the baseband through a fully-digital or hybrid analog/digital architecture. This paper proposes efficient solutions for these problems by leveraging tools from compressive sensing and deep learning. First, a novel LIS architecture based on sparse channel sensors is proposed. In this architecture, all the LIS elements are passive except for a few elements that are active (connected to the baseband). We then develop two solutions that design the LIS reflection matrices with negligible training overhead. In the first approach, we leverage compressive sensing tools to construct the channels at all the LIS elements from the channels seen only at the active elements. In the second approach, we develop a deep-learning based solution where the LIS learns how to interact with the incident signal given the channels at the active elements, which represent the state of the environment and transmitter/receiver locations. We show that the achievable rates of the proposed solutions approach the upper bound, which assumes perfect channel knowledge, with negligible training overhead and with only a few active elements, making them promising for future LIS systems.
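
As a toy illustration of the first (compressive sensing) approach, the sketch below recovers a sparse channel over all LIS elements from observations at a few active elements using orthogonal matching pursuit; the real-valued random dictionary and dimensions are placeholders, not the paper's array-response model:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
N, M, S = 256, 16, 3          # LIS elements, active (sensing) elements, channel sparsity

# Toy real-valued sparsifying dictionary (the paper would use array-response vectors).
D = rng.standard_normal((N, N)) / np.sqrt(N)
x_sparse = np.zeros(N)
x_sparse[rng.choice(N, S, replace=False)] = rng.standard_normal(S)
h_full = D @ x_sparse                        # channel at all N elements

active = rng.choice(N, M, replace=False)     # indices of the few active elements
y = h_full[active]                           # channel observed only at active elements

# Recover the sparse representation from the sub-sampled dictionary, then
# reconstruct the channel at every LIS element.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=S, fit_intercept=False).fit(D[active, :], y)
h_hat = D @ omp.coef_

print("relative error:", np.linalg.norm(h_hat - h_full) / np.linalg.norm(h_full))
```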

405 citations


Journal ArticleDOI
TL;DR: In this article, the authors focus on convergent 6G communication, localization and sensing systems by identifying key technology enablers, discussing their underlying challenges, implementation issues, and recommending potential solutions.
Abstract: Herein, we focus on convergent 6G communication, localization and sensing systems by identifying key technology enablers, discussing their underlying challenges, implementation issues, and recommending potential solutions. Moreover, we discuss exciting new opportunities for integrated localization and sensing applications, which will disrupt traditional design principles and revolutionize the way we live, interact with our environment, and do business. Regarding potential enabling technologies, 6G will continue to develop towards even higher frequency ranges, wider bandwidths, and massive antenna arrays. In turn, this will enable sensing solutions with very fine range, Doppler, and angular resolutions, as well as localization with cm-level accuracy. Besides, new materials, device types, and reconfigurable surfaces will allow network operators to reshape and control the electromagnetic response of the environment. At the same time, machine learning and artificial intelligence will leverage the unprecedented availability of data and computing resources to tackle the biggest and hardest problems in wireless communication systems. As a result, 6G networks will be truly intelligent wireless systems that will provide not only ubiquitous communication but also high-accuracy localization and high-resolution sensing services. They will become the catalyst for this revolution by bringing about a unique new set of features and service capabilities, where localization and sensing will coexist with communication, continuously sharing the available resources in time, frequency, and space. This work concludes by highlighting foundational research challenges, as well as implications and opportunities related to privacy, security, and trust.

224 citations


Journal ArticleDOI
TL;DR: In this paper, the major design aspects of such a cellular joint communication and sensing (JCAS) system are discussed, and an analysis of the waveform choice is presented, which points towards using the waveform best suited for communication for radar sensing as well.
Abstract: The 6G vision of creating authentic digital twin representations of the physical world calls for new sensing solutions to compose multi-layered maps of our environments. Radio sensing using the mobile communication network as a sensor has the potential to become an essential component of the solution. With the evolution of cellular systems to mmWave bands in 5G and potentially sub-THz bands in 6G, small cell deployments will begin to dominate. Large bandwidth systems deployed in small cell configurations provide an unprecedented opportunity to employ the mobile network for sensing. In this paper, we focus on the major design aspects of such a cellular joint communication and sensing (JCAS) system. We present an analysis of the waveform choice, which points towards using the waveform best suited for communication for radar sensing as well. We discuss several techniques for efficiently integrating the sensing capability into the JCAS system, some of which are applicable with the NR air-interface for evolved 5G systems. Specifically, methods for reducing sensing overhead by appropriate sensing signal design or by configuring separate numerologies for communications and sensing are presented. Sophisticated use of the sensing signals is shown to reduce the signaling overhead by a factor of 2.67 for an exemplary road traffic monitoring use case. We then present a vision for future advanced JCAS systems building upon distributed massive MIMO and discuss various other research challenges for JCAS that need to be addressed in order to pave the way towards natively integrated JCAS in 6G.

223 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a systematic review of ML applications in the field of agriculture, focusing on prediction of soil parameters such as organic carbon and moisture content, crop yield prediction, disease and weed detection in crops and species detection.
Abstract: Agriculture plays a vital role in the economic growth of any country. With population growth, frequent changes in climatic conditions and limited resources, it becomes a challenging task to fulfil the food requirements of the present population. Precision agriculture, also known as smart farming, has emerged as an innovative tool to address current challenges in agricultural sustainability. The mechanism that drives this cutting-edge technology is machine learning (ML), which gives machines the ability to learn without being explicitly programmed. ML together with IoT (Internet of Things) enabled farm machinery are key components of the next agricultural revolution. In this article, the authors present a systematic review of ML applications in the field of agriculture. The areas covered are prediction of soil parameters such as organic carbon and moisture content, crop yield prediction, disease and weed detection in crops, and species detection. ML combined with computer vision is reviewed for the classification of different sets of crop images in order to monitor crop quality and assess yield. This approach can also be extended to enhance livestock production by predicting fertility patterns, diagnosing eating disorders, and classifying cattle behaviour with ML models using data collected by collar sensors, etc. Intelligent irrigation, including drip irrigation, and intelligent harvesting techniques, which greatly reduce the need for human labour, are also reviewed. This article demonstrates how knowledge-based agriculture can improve sustainable productivity and product quality.

214 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present the current trends and challenges for the detection of plant leaf disease using deep learning and advanced imaging techniques, and discuss some of the current challenges and problems that need to be resolved.
Abstract: Deep learning is a branch of artificial intelligence. In recent years, thanks to its capacity for automatic learning and feature extraction, it has attracted wide attention from both academia and industry, and it has been widely used in image and video processing, voice processing, and natural language processing. It has also become a research hotspot in the field of agricultural plant protection, for tasks such as plant disease recognition and pest range assessment. Applying deep learning to plant disease recognition can avoid the disadvantages caused by manually selecting disease-spot features, make plant disease feature extraction more objective, and improve research efficiency and the speed of technology transfer. This review summarizes the research progress of deep learning technology in the field of crop leaf disease identification in recent years. We present the current trends and challenges for the detection of plant leaf disease using deep learning and advanced imaging techniques, and we hope that this work will be a valuable resource for researchers who study the detection of plant diseases and insect pests. We also discuss some of the current challenges and open problems that need to be resolved.

198 citations


Journal ArticleDOI
TL;DR: A brief overview of the added features and key performance indicators of 5G NR is presented and a next-generation wireless communication architecture that acts as the platform for migration towards beyond 5G/6G networks is proposed.
Abstract: Nowadays, 5G is in its initial phase of commercialization. The 5G network will revolutionize the existing wireless network with its enhanced capabilities and novel features. 5G New Radio (5G NR), the global standard for 5G being developed by the 3rd Generation Partnership Project (3GPP), can operate over a wide range of frequency bands, from below 6 GHz up to mmWave (100 GHz). 3GPP mainly focuses on three major use cases of 5G NR: Ultra-Reliable and Low Latency Communication (uRLLC), Massive Machine Type Communication (mMTC), and Enhanced Mobile Broadband (eMBB). To meet the targets of 5G NR, multiple features such as scalable numerology, flexible spectrum, forward compatibility, and ultra-lean design have been added compared to LTE systems. This paper presents a brief overview of the added features and key performance indicators of 5G NR. The issues related to the adaptation of higher modulation schemes and inter-RAT handover synchronization are also addressed. With these challenges in mind, a next-generation wireless communication architecture is proposed, which acts as a platform for migration towards beyond-5G/6G networks. Along with this, various technologies and applications of 6G networks are also overviewed in this paper. The 6G network will incorporate Artificial Intelligence (AI) based services, edge computing, quantum computing, optical wireless communication, hybrid access, and tactile services. For enabling these diverse services, a virtualized network-slicing-based architecture of 6G is proposed. Various ongoing projects on 6G and its technologies are also listed in this paper.
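
For reference, the scalable numerology mentioned above follows the standard 3GPP NR relation: the subcarrier spacing is 15 kHz x 2^mu, with a correspondingly shorter slot. A quick sketch:

```python
# 5G NR scalable numerology: subcarrier spacing scales as 15 kHz * 2**mu,
# and the slot duration shrinks accordingly (14 OFDM symbols per slot, normal CP).
for mu in range(5):                       # mu = 0..4 covers sub-6 GHz up to mmWave bands
    scs_khz = 15 * 2**mu                  # subcarrier spacing in kHz
    slot_ms = 1.0 / 2**mu                 # slot duration in ms (1 ms subframe / 2**mu slots)
    print(f"mu={mu}: {scs_khz:>3} kHz subcarrier spacing, {slot_ms:.4f} ms slot")
```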

189 citations


Journal ArticleDOI
TL;DR: In this article, an extensive literature review is presented on solving the feature selection problem using metaheuristic algorithms developed in the last ten years (2009-2019), and a categorical list of more than a hundred metaheuristic algorithms is provided.
Abstract: Feature selection is a critical and prominent task in machine learning. The main aim of the feature selection problem is to reduce the dimension of the feature set while maintaining performance accuracy. Various methods have been developed to classify datasets; however, metaheuristic algorithms have attracted great attention for solving numerous optimization problems. Therefore, this paper presents an extensive literature review on solving the feature selection problem using metaheuristic algorithms developed in the last ten years (2009-2019). Further, metaheuristic algorithms have been classified into four categories based on their behaviour, and a categorical list of more than a hundred metaheuristic algorithms is presented. To solve the feature selection problem, only binary variants of metaheuristic algorithms have been reviewed, and a detailed description of each is given under its corresponding category. The reviewed metaheuristic algorithms for feature selection are tabulated with their binary variant, the classifier used, the datasets, and the evaluation metrics. After reviewing the papers, challenges and issues in obtaining the best feature subset using different metaheuristic algorithms are also identified. Finally, some research gaps are highlighted for researchers who want to pursue research in developing or modifying metaheuristic algorithms for classification. As an application, a case study is presented in which datasets are adopted from the UCI repository and numerous metaheuristic algorithms are employed to obtain the optimal feature subset.
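
As a minimal illustration of a binary (wrapper) metaheuristic for feature selection, the sketch below searches over a 0/1 feature mask scored by classifier cross-validation; the simple bit-flip hill climbing, dataset, and classifier are stand-ins for the binary GA/PSO-style variants surveyed:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(42)

def fitness(mask):
    """Wrapper fitness: classifier accuracy on the selected columns, lightly penalised by subset size."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask == 1], y, cv=5).mean()
    return acc - 0.01 * mask.mean()

# A minimal binary metaheuristic: keep a current mask and accept bit-flip
# mutations that improve fitness (a stand-in for binary GA/PSO/GWO variants).
mask = rng.integers(0, 2, X.shape[1])
best = fitness(mask)
for _ in range(100):
    cand = mask.copy()
    flip = rng.integers(0, X.shape[1], size=2)   # flip a couple of random bits
    cand[flip] ^= 1
    f = fitness(cand)
    if f > best:
        mask, best = cand, f

print("selected features:", int(mask.sum()), "fitness:", round(best, 4))
```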

182 citations


Journal ArticleDOI
TL;DR: In this paper, an effective sarcasm identification framework for social media data is presented by pursuing the paradigms of neural language models and deep neural networks, sarcasm detection on text documents being one of the most challenging tasks in NLP.
Abstract: Sarcasm identification on text documents is one of the most challenging tasks in natural language processing (NLP) and has become an essential research direction due to the prevalence of sarcasm in social media data. The purpose of our research is to present an effective sarcasm identification framework for social media data by pursuing the paradigms of neural language models and deep neural networks. To represent text documents, we introduce an inverse-gravity-moment-based term-weighted word embedding model with trigrams, so that critical words/terms receive higher weights while word-ordering information is preserved. In our model, we present a three-layer stacked bidirectional long short-term memory architecture to identify sarcastic text documents. The presented framework has been evaluated on three sarcasm identification corpora. In the empirical analysis, three neural language models (i.e., word2vec, fastText and GloVe), two unsupervised term weighting functions (i.e., term frequency and TF-IDF) and eight supervised term weighting functions (i.e., odds ratio, relevance frequency, balanced distributional concentration, inverse question frequency-question frequency-inverse category frequency, short text weighting, inverse gravity moment, regularized entropy and inverse false negative-true positive-inverse category frequency) have been evaluated. For the sarcasm identification task, the presented model yields promising results with a classification accuracy of 95.30%.
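
A minimal Keras sketch of a three-layer stacked bidirectional LSTM classifier of the kind described; the vocabulary size, embedding dimension, and layer widths are placeholders, and the paper's IGM-weighted embeddings would replace the plain Embedding layer:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, embed_dim, max_len = 20000, 200, 100   # placeholders, not the paper's values

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, embed_dim),                         # would be initialised with
                                                                     # IGM-weighted embeddings
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),   # stacked BiLSTM layer 1
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),    # stacked BiLSTM layer 2
    layers.Bidirectional(layers.LSTM(32)),                           # stacked BiLSTM layer 3
    layers.Dense(1, activation="sigmoid"),                           # sarcastic vs. not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```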

182 citations


Journal ArticleDOI
TL;DR: In this article, a systematic literature review of contrastive and counterfactual explanations of artificial intelligence algorithms is presented, which provides readers with a thorough and reproducible analysis of the interdisciplinary research field under study.
Abstract: A number of algorithms in the field of artificial intelligence offer poorly interpretable decisions. To disclose the reasoning behind such algorithms, their output can be explained by means of so-called evidence-based (or factual) explanations. Alternatively, contrastive and counterfactual explanations justify why the output of the algorithms is not any different and how it could be changed, respectively. It is of crucial importance to bridge the gap between theoretical approaches to contrastive and counterfactual explanation and the corresponding computational frameworks. In this work we conduct a systematic literature review which provides readers with a thorough and reproducible analysis of the interdisciplinary research field under study. We first examine theoretical foundations of contrastive and counterfactual accounts of explanation. Then, we report the state-of-the-art computational frameworks for contrastive and counterfactual explanation generation. In addition, we analyze how grounded such frameworks are on the insights from the inspected theoretical approaches. As a result, we highlight a variety of properties of the approaches under study and reveal a number of shortcomings thereof. Moreover, we define a taxonomy regarding both theoretical and practical approaches to contrastive and counterfactual explanation.

Journal ArticleDOI
TL;DR: In this paper, a review of deep learning based systems for the detection of the new coronavirus (COVID-19) outbreak has been presented, which can be potentially further utilized to combat the outbreak.
Abstract: The novel coronavirus (COVID-19) outbreak has raised a calamitous situation all over the world and has become one of the most acute and severe ailments of the past hundred years. The prevalence of COVID-19 is rapidly rising every day throughout the globe. Although no vaccines for this pandemic have been discovered yet, deep learning techniques have proved themselves to be a powerful tool in the arsenal used by clinicians for the automatic diagnosis of COVID-19. This paper aims to review the recently developed systems based on deep learning techniques using different medical imaging modalities such as Computed Tomography (CT) and X-ray. It specifically discusses the systems developed for COVID-19 diagnosis using deep learning techniques and provides insights into the well-known datasets used to train these networks. It also highlights the data partitioning techniques and various performance measures developed by researchers in this field. A taxonomy is drawn to categorize the recent works for proper insight. Finally, we conclude by addressing the challenges associated with the use of deep learning methods for COVID-19 detection and probable future trends in this research area. The aim of this paper is to facilitate experts (medical or otherwise) and technicians in understanding how deep learning techniques are used in this regard and how they can be further utilized to combat the COVID-19 outbreak.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a model that incorporates different methods to achieve effective prediction of heart disease, which used efficient Data Collection, Data Pre-processing and Data Transformation methods to create accurate information for the training model.
Abstract: Cardiovascular diseases (CVD) are among the most common serious illnesses affecting human health. CVDs may be prevented or mitigated by early diagnosis, which can reduce mortality rates. Identifying risk factors using machine learning models is a promising approach. We propose a model that incorporates different methods to achieve effective prediction of heart disease. For the proposed model to be successful, we have used efficient Data Collection, Data Pre-processing and Data Transformation methods to create accurate information for the training model. We have used a combined dataset (Cleveland, Long Beach VA, Switzerland, Hungarian and Statlog). Suitable features are selected using the Relief and Least Absolute Shrinkage and Selection Operator (LASSO) techniques. New hybrid classifiers, namely the Decision Tree Bagging Method (DTBM), Random Forest Bagging Method (RFBM), K-Nearest Neighbors Bagging Method (KNNBM), AdaBoost Boosting Method (ABBM), and Gradient Boosting Boosting Method (GBBM), are developed by integrating the traditional classifiers with bagging and boosting methods and are used in the training process. We also compute the Accuracy (ACC), Sensitivity (SEN), Error Rate, Precision (PRE) and F1 Score (F1) of our model, along with the Negative Predictive Value (NPR), False Positive Rate (FPR), and False Negative Rate (FNR). The results are shown separately to provide comparisons. Based on the result analysis, we conclude that our proposed model produced the highest accuracy, 99.05%, when using RFBM with Relief feature selection.
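
A hedged scikit-learn sketch of the general recipe (LASSO-based feature selection feeding bagged or boosted hybrids such as an RFBM-style random-forest bagging model); the data is synthetic and the hyperparameters are illustrative, not the paper's:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, AdaBoostClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the combined heart-disease dataset.
X, y = make_classification(n_samples=1000, n_features=13, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# LASSO-based feature selection (the paper also uses Relief).
selector = SelectFromModel(Lasso(alpha=0.01)).fit(X_tr, y_tr)
X_tr_s, X_te_s = selector.transform(X_tr), selector.transform(X_te)

# RFBM-style hybrid: a random forest wrapped in a bagging meta-estimator.
rfbm = BaggingClassifier(RandomForestClassifier(n_estimators=50, random_state=0),
                         n_estimators=10, random_state=0).fit(X_tr_s, y_tr)
abbm = AdaBoostClassifier(random_state=0).fit(X_tr_s, y_tr)   # ABBM-style boosted model

for name, clf in [("RFBM", rfbm), ("ABBM", abbm)]:
    print(name, "accuracy:", round(accuracy_score(y_te, clf.predict(X_te_s)), 4))
```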

Journal ArticleDOI
TL;DR: A comprehensive comparison with various state-of-the-art methods reveals the importance of benchmarking the deep learning methods for automated real-time polyp identification and delineations that can potentially transform current clinical practices and minimise miss-detection rates.
Abstract: Computer-aided detection, localisation, and segmentation methods can help improve colonoscopy procedures. Even though many methods have been built to tackle automatic detection and segmentation of polyps, benchmarking of state-of-the-art methods still remains an open problem. This is due to the increasing number of researched computer vision methods that can be applied to polyp datasets. Benchmarking of novel methods can provide a direction for the development of automated polyp detection and segmentation tasks. Furthermore, it ensures that the results produced in the community are reproducible and provide a fair comparison of developed methods. In this paper, we benchmark several recent state-of-the-art methods using Kvasir-SEG, an open-access dataset of colonoscopy images for polyp detection, localisation, and segmentation, evaluating both method accuracy and speed. Whilst most methods in the literature have competitive accuracy, we show that the proposed ColonSegNet achieved a better trade-off, with an average precision of 0.8000, a mean IoU of 0.8100, and the fastest speed of 180 frames per second for the detection and localisation task. Likewise, the proposed ColonSegNet achieved a competitive dice coefficient of 0.8206 and the best average speed of 182.38 frames per second for the segmentation task. Our comprehensive comparison with various state-of-the-art methods reveals the importance of benchmarking deep learning methods for automated real-time polyp identification and delineation, which can potentially transform current clinical practices and minimise miss-detection rates.
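
For reference, the Dice coefficient and IoU used to score segmentation masks in such benchmarks can be computed as in the following NumPy sketch (not the benchmark's evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """IoU (Jaccard) = |A∩B| / |A∪B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy example on random 256x256 polyp masks.
rng = np.random.default_rng(0)
gt = rng.integers(0, 2, (256, 256))
pr = gt.copy()
pr[:32] = 1 - pr[:32]        # corrupt part of the prediction
print("Dice:", round(dice_coefficient(pr, gt), 4), "IoU:", round(iou(pr, gt), 4))
```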

Journal ArticleDOI
Weihao Weng, Xin Zhu
TL;DR: In this article, the authors propose to enlarge receptive fields by increasing the kernel sizes of convolutional layers in steps (e.g., from $3\times 3$ to $7\times 7$ and then $15\times 15$) instead of downsampling.
Abstract: Encoder–decoder networks are state-of-the-art approaches to biomedical image segmentation, but they have two problems: the widely used pooling operations may discard spatial information, and low-level semantics are therefore lost. Feature fusion methods can mitigate these problems, but feature maps of different scales cannot be easily fused because down- and upsampling change the spatial resolution of the feature maps. To address these issues, we propose INet, which enlarges receptive fields by increasing the kernel sizes of convolutional layers in steps (e.g., from $3\times 3$ to $7\times 7$ and then $15\times 15$) instead of downsampling. Inspired by the Inception module, INet extracts features with kernels of different sizes by concatenating the output feature maps of all preceding convolutional layers. We also find that the large kernel makes the network feasible for biomedical image segmentation. In addition, INet uses two overlapping max-poolings, i.e., max-poolings with stride 1, to extract the sharpest features. Fixed-size and fixed-channel feature maps enable INet to concatenate feature maps and add multiple shortcuts across layers. In this way, INet can recover low-level semantics by concatenating the feature maps of all preceding layers and expedite training by adding multiple shortcuts. Because INet has additional residual shortcuts, we compare INet with a UNet system that also has residual shortcuts (ResUNet). To confirm INet as a backbone architecture for biomedical image segmentation, we implement dense connections on INet (called DenseINet) and compare it to a DenseUNet system with residual shortcuts (ResDenseUNet). INet and DenseINet require 16.9% and 37.6% fewer parameters than ResUNet and ResDenseUNet, respectively. In comparison with six encoder–decoder approaches on nine public datasets, INet and DenseINet demonstrate efficient improvements in biomedical image segmentation. INet outperforms DeepLabV3, which implements atrous convolution instead of downsampling to increase receptive fields. INet also outperforms two recent methods (HRNet and MS-NAS) that maintain high-resolution representations and repeatedly exchange information across resolutions.
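
A minimal PyTorch sketch of the core idea (stride-1 convolutions with growing kernel sizes whose same-size outputs are concatenated, plus stride-1 "overlapping" max-pooling); the channel counts are assumptions, and this is not the authors' implementation:

```python
import torch
import torch.nn as nn

class GrowingKernelBlock(nn.Module):
    """Enlarge the receptive field by increasing the kernel size (3 -> 7 -> 15)
    at stride 1 instead of downsampling; 'same'-style padding keeps every feature
    map the same spatial size so all outputs can be concatenated."""
    def __init__(self, in_ch, ch=32, kernels=(3, 7, 15)):
        super().__init__()
        self.convs = nn.ModuleList()
        c = in_ch
        for k in kernels:
            self.convs.append(nn.Sequential(
                nn.Conv2d(c, ch, kernel_size=k, stride=1, padding=k // 2),
                nn.ReLU(inplace=True)))
            c = ch
        self.pool = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)  # overlapping max-pool

    def forward(self, x):
        outs = []
        for conv in self.convs:
            x = conv(x)
            outs.append(x)
        x = torch.cat(outs, dim=1)       # concatenate all preceding feature maps
        return self.pool(x)              # sharpen features without changing resolution

y = GrowingKernelBlock(in_ch=1)(torch.randn(1, 1, 64, 64))
print(y.shape)   # torch.Size([1, 96, 64, 64])
```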

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed the heart failure survivors from the dataset of 299 patients admitted in hospital and found significant features and effective data mining techniques that can boost the accuracy of cardiovascular patient's survivor prediction.
Abstract: Cardiovascular disease is a substantial cause of mortality and morbidity in the world. In clinical data analytics, predicting heart disease survival is a great challenge. Data mining transforms the huge amounts of raw data generated by the health industry into useful information that can help in making informed decisions. Various studies have shown that significant features play a key role in improving the performance of machine learning models. This study analyzes heart failure survival in a dataset of 299 patients admitted to hospital. The aim is to find significant features and effective data mining techniques that can boost the accuracy of predicting cardiovascular patients’ survival. To predict patients’ survival, this study employs nine classification models: Decision Tree (DT), Adaptive Boosting classifier (AdaBoost), Logistic Regression (LR), Stochastic Gradient Descent classifier (SGD), Random Forest (RF), Gradient Boosting classifier (GBM), Extra Tree Classifier (ETC), Gaussian Naive Bayes classifier (G-NB) and Support Vector Machine (SVM). The class imbalance problem is handled by the Synthetic Minority Oversampling Technique (SMOTE). Furthermore, machine learning models are trained on the highest-ranked features selected by RF. The results are compared with those obtained using the full set of features. Experimental results demonstrate that ETC outperforms the other models and achieves an accuracy of 0.9262 with SMOTE in predicting heart patients’ survival.
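
A hedged scikit-learn/imbalanced-learn sketch of the described pipeline (SMOTE oversampling, RF-based feature ranking, Extra Trees classifier); synthetic data stands in for the 299-patient dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE

# Synthetic imbalanced stand-in for the 299-patient heart-failure dataset.
X, y = make_classification(n_samples=299, n_features=12, weights=[0.68, 0.32], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=1)

# 1) Balance the minority class with SMOTE (training split only).
X_res, y_res = SMOTE(random_state=1).fit_resample(X_tr, y_tr)

# 2) Rank features with a random forest and keep the top ones.
rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_res, y_res)
top = np.argsort(rf.feature_importances_)[::-1][:6]

# 3) Train the Extra Trees classifier on the selected features.
etc = ExtraTreesClassifier(n_estimators=200, random_state=1).fit(X_res[:, top], y_res)
print("ETC accuracy:", round(accuracy_score(y_te, etc.predict(X_te[:, top])), 4))
```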

Journal ArticleDOI
TL;DR: In this paper, a detailed review of the planning, operation, and control of DC microgrids is presented, which explicitly helps readers understand existing developments on DC microgrid planning and operation, as well as identify the need for additional research in order to further contribute to the topic.
Abstract: In recent years, DC microgrids have been gaining importance due to the wide utilization of direct current (DC) power sources such as solar photovoltaic (PV) and fuel cells, different DC loads, and the high-level integration of energy storage systems such as batteries and supercapacitors. Furthermore, unlike conventional AC systems, DC microgrids do not have issues such as synchronization, harmonics, reactive power control, and frequency control. However, the incorporation of different distributed generators such as PV, wind, and fuel cells, together with loads and energy storage devices on the common DC bus, complicates the control of the DC bus voltage as well as power sharing. In order to ensure the secure and safe operation of DC microgrids, different control techniques, such as centralized, decentralized, distributed, multilevel, and hierarchical control, are presented. The optimal planning of DC microgrids has an impact on operation and control algorithms; thus, coordination among them is required. A detailed review of the planning, operation, and control of DC microgrids is missing in the existing literature. Thus, this article documents developments in the planning, operation, and control of DC microgrids covered in research over the past 15 years. DC microgrid planning, operation, and control challenges and opportunities are discussed. Different planning, control, and operation methods are well documented with their advantages and disadvantages to provide an excellent foundation for industry personnel and researchers. Power-sharing and energy management operation, control, and planning issues are summarized for both grid-connected and islanded DC microgrids. Also, key research areas in DC microgrid planning, operation, and control are identified in order to adopt cutting-edge technologies. This review explicitly helps readers understand existing developments in DC microgrid planning, operation, and control, as well as identify the need for additional research in order to further contribute to the topic.

Journal ArticleDOI
TL;DR: In this article, a new technique is proposed to forecast short-term electrical load, which is based on the integration of convolutional neural network (CNN) and long shortterm memory (LSTM) network.
Abstract: In this study, a new technique is proposed to forecast short-term electrical load. Load forecasting is an integral part of power system planning and operation. Precise forecasting of load is essential for unit commitment, capacity planning, network augmentation and demand-side management. Load forecasting can be generally categorized into three classes: short-term, mid-term and long-term. Short-term forecasting is usually done to predict load for the next few hours to a few weeks. In the literature, various methodologies such as regression analysis, machine learning approaches, deep learning methods and artificial intelligence systems have been used for short-term load forecasting. However, existing techniques may not always provide high accuracy in short-term load forecasting. To overcome this challenge, a new approach is proposed in this paper for short-term load forecasting. The developed method is based on the integration of a convolutional neural network (CNN) and a long short-term memory (LSTM) network. The method is applied to the Bangladesh power system to provide short-term forecasts of electrical load. The effectiveness of the proposed technique is validated by comparing the forecasting errors with those of some existing approaches, such as the long short-term memory network, the radial basis function network and the extreme gradient boosting algorithm. It is found that the proposed strategy results in higher precision and accuracy in short-term load forecasting.
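
A minimal Keras sketch of a CNN-LSTM short-term load forecaster of the kind described (Conv1D feature extraction feeding an LSTM that predicts the next hour's load); the synthetic series, window length, and layer sizes are illustrative, not the paper's configuration:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Toy hourly-load series; the real model would use the Bangladesh load data.
t = np.arange(5000)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + np.random.default_rng(0).normal(0, 2, t.size)

window = 48                                    # use the past 48 hours
X = np.stack([load[i:i + window] for i in range(len(load) - window)])[..., None]
y = load[window:]

model = models.Sequential([
    layers.Input(shape=(window, 1)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),   # local temporal features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                        # longer-range dependencies
    layers.Dense(1),                                        # next-hour load
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=2, batch_size=64, validation_split=0.1, verbose=0)
print("MAE:", round(model.evaluate(X, y, verbose=0)[1], 3))
```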

Journal ArticleDOI
TL;DR: A survey comprehensively reviews over 200 reports covering robotic systems which have emerged or have been repurposed during the past several months, to provide insights to both academia and industry as mentioned in this paper.
Abstract: As a result of the difficulties brought by COVID-19 and its associated lockdowns, many individuals and companies have turned to robots in order to overcome the challenges of the pandemic. Compared with traditional human labor, robotic and autonomous systems have advantages such as an intrinsic immunity to the virus and an inability for human-robot-human spread of any disease-causing pathogens, though there are still many technical hurdles for the robotics industry to overcome. This survey comprehensively reviews over 200 reports covering robotic systems which have emerged or have been repurposed during the past several months, to provide insights to both academia and industry. In each chapter, we cover both the advantages and the challenges for each robot, finding that robotics systems are overall apt solutions for dealing with many of the problems brought on by COVID-19, including: diagnosis, screening, disinfection, surgery, telehealth, care, logistics, manufacturing and broader interpersonal problems unique to the lockdowns of the pandemic. By discussing the potential new robot capabilities and fields they applied to, we expect the robotics industry to take a leap forward due to this unexpected pandemic.

Journal ArticleDOI
TL;DR: This work establishes a foundation of dynamic networks with consistent, detailed terminology and notation and presents a comprehensive survey of dynamic graph neural network models using the proposed terminology.
Abstract: Dynamic networks are used in a wide range of fields, including social network analysis, recommender systems and epidemiology. Representing complex networks as structures that change over time allows network models to leverage not only structural but also temporal patterns. However, as the dynamic network literature stems from diverse fields and makes use of inconsistent terminology, it is challenging to navigate. Meanwhile, graph neural networks (GNNs) have gained a lot of attention in recent years for their ability to perform well on a range of network science tasks, such as link prediction and node classification. Despite the popularity of graph neural networks and the proven benefits of dynamic network models, there has been little focus on graph neural networks for dynamic networks. To address the challenges resulting from the fact that this research crosses diverse fields, as well as to survey dynamic graph neural networks, this work is split into two main parts. First, to address the ambiguity of the dynamic network terminology, we establish a foundation of dynamic networks with consistent, detailed terminology and notation. Second, we present a comprehensive survey of dynamic graph neural network models using the proposed terminology.

Journal ArticleDOI
TL;DR: In this article, the authors provide an extensive review of NSGA-II for selected combinatorial optimization problems viz. assignment problem, allocation problem, travelling salesman problem, vehicle routing problem, scheduling problem, and knapsack problem.
Abstract: This paper provides an extensive review of the popular multi-objective optimization algorithm NSGA-II for selected combinatorial optimization problems viz. assignment problem, allocation problem, travelling salesman problem, vehicle routing problem, scheduling problem, and knapsack problem. It is identified that based on the manner in which NSGA-II has been implemented for solving the aforementioned group of problems, there can be three categories: Conventional NSGA-II, where the authors have implemented the basic version of NSGA-II, without making any changes in the operators; the second one is Modified NSGA-II, where the researchers have implemented NSGA-II after making some changes into it and finally, Hybrid NSGA-II variants, where the researchers have hybridized the conventional and modified NSGA-II with some other technique. The article analyses the modifications in NSGA-II and also discusses the various performance assessment techniques used by the researchers, i.e., test instances, performance metrics, statistical tests, case studies, benchmarking with other state-of-the-art algorithms. Additionally, the paper also provides a brief bibliometric analysis based on the work done in this study.
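
For readers unfamiliar with NSGA-II, the fast non-dominated sorting step at its core can be sketched as follows (pure Python, minimization of all objectives assumed; this is not any specific reviewed variant):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(points):
    """Return indices grouped into Pareto fronts F1, F2, ... as in NSGA-II."""
    n = len(points)
    S = [[] for _ in range(n)]          # solutions dominated by i
    dom_count = [0] * n                 # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                S[i].append(j)
            elif dominates(points[j], points[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        k += 1
        fronts.append(nxt)
    return fronts[:-1]                  # drop the trailing empty front

# Two-objective toy population.
pop = [(1, 5), (2, 2), (3, 1), (4, 4), (2, 6)]
print(fast_non_dominated_sort(pop))     # [[0, 1, 2], [4, 3]]
```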

Journal ArticleDOI
TL;DR: In this article, a novel anomaly-based IDS (Intrusion Detection System) using machine learning techniques to detect and classify attacks in IoT networks is proposed, where a convolutional neural network model is used to create a multiclass classification model.
Abstract: The growing development of IoT (Internet of Things) devices creates a large attack surface for cybercriminals to conduct potentially more destructive cyberattacks; as a result, the security industry has seen an exponential increase in such attacks. Many of these attacks have effectively accomplished their malicious goals because intruders conduct them using novel and innovative techniques. An anomaly-based IDS (Intrusion Detection System) uses machine learning techniques to detect and classify attacks in IoT networks. In the presence of unpredictable network technologies and various intrusion methods, traditional machine learning techniques appear inefficient. In many research areas, deep learning methods have shown their ability to identify anomalies accurately. Convolutional neural networks are an excellent alternative for anomaly detection and classification due to their ability to automatically capture the main characteristics of the input data and their effectiveness in performing faster computations. In this paper, we design and develop a novel anomaly-based intrusion detection model for IoT networks. First, a convolutional neural network model is used to create a multiclass classification model. The proposed model is then implemented using convolutional neural networks in 1D, 2D, and 3D. The proposed convolutional neural network model is validated using the BoT-IoT, IoT Network Intrusion, MQTT-IoT-IDS2020, and IoT-23 intrusion detection datasets. Transfer learning is then used to implement binary and multiclass classification using the pre-trained multiclass convolutional neural network model. Our proposed binary and multiclass classification models have achieved high accuracy, precision, recall, and F1 score compared to existing deep learning implementations.
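
A minimal Keras sketch of the 1D-CNN flavour of such a multiclass intrusion classifier; the feature count, class count, and random data are placeholders rather than the BoT-IoT or IoT-23 setups:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

n_features, n_classes = 40, 5            # placeholders for flow features / attack classes
X = np.random.rand(2000, n_features, 1).astype("float32")
y = np.random.randint(0, n_classes, 2000)

model = models.Sequential([
    layers.Input(shape=(n_features, 1)),
    layers.Conv1D(64, 3, activation="relu"),
    layers.Conv1D(64, 3, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(n_classes, activation="softmax"),   # multiclass attack labels
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```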

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a deep attention convolutional neural network (CNN) for scene classification in remote sensing, which computes a new feature map as a weighted average of the original feature maps.
Abstract: Scene classification is a highly useful task in Remote Sensing (RS) applications. Many efforts have been made to improve the accuracy of RS scene classification. Scene classification is a challenging problem, especially for large datasets with tens of thousands of images with a large number of classes and taken under different circumstances. One problem that is observed in scene classification is the fact that for a given scene, only one part of it indicates which class it belongs to, whereas the other parts are either irrelevant or they actually tend to belong to another class. To address this issue, this paper proposes a deep attention Convolutional Neural Network (CNN) for scene classification in remote sensing. CNN models use successive convolutional layers to learn feature maps from larger and larger regions (or receptive fields) of the scene. The attention mechanism computes a new feature map as a weighted average of these original feature maps. In particular, we propose a solution, named EfficientNet-B3-Attn-2, based on the pre-trained EfficientNet-B3 CNN enhanced with an attention mechanism. A dedicated branch is added to layer 262 of the network, to compute the required weights. These weights are learned automatically by training the whole CNN model end-to-end using the backpropagation algorithm. In this way, the network learns to emphasize important regions of the scene and suppress the regions that are irrelevant to the classification. We tested the proposed EfficientNet-B3-Attn-2 on six popular remote sensing datasets, namely UC Merced, KSA, OPTIMAL-31, RSSCN7, WHU-RS19, and AID datasets, showing its strong capabilities in classifying RS scenes.
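
A minimal PyTorch sketch of the stated idea, i.e., forming a new descriptor as a weighted average of spatial feature vectors using learned attention weights; it is not the EfficientNet-B3-Attn-2 branch itself, and the shapes are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPool(nn.Module):
    """New descriptor = weighted average of the spatial feature vectors.
    A 1x1 conv scores each location; softmax over H*W gives the weights,
    so informative regions of the scene are emphasised."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, fmap):                      # fmap: (B, C, H, W)
        b, c, h, w = fmap.shape
        w_att = F.softmax(self.score(fmap).view(b, 1, h * w), dim=-1)   # (B, 1, H*W)
        feats = fmap.view(b, c, h * w)                                   # (B, C, H*W)
        return (feats * w_att).sum(dim=-1)                               # (B, C)

fmap = torch.randn(2, 256, 32, 32)     # stand-in for an intermediate CNN feature map
pooled = AttentionPool(256)(fmap)
logits = nn.Linear(256, 10)(pooled)    # 10 scene classes (placeholder)
print(pooled.shape, logits.shape)
```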

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the blockchain concept and relevant factors that provide a detailed analysis of potential security attacks and presents existing solutions that can be deployed as countermeasures to such attacks.
Abstract: Blockchain technology is becoming increasingly attractive to the next generation, as it is uniquely suited to the information era. Blockchain technology can also be applied to the Internet of Things (IoT). The advancement of IoT technology in various domains has led to substantial progress in distributed systems. The blockchain concept requires a decentralized data management system for storing and sharing the data and transactions in the network. This paper discusses the blockchain concept and the relevant factors, providing a detailed analysis of potential security attacks and presenting existing solutions that can be deployed as countermeasures to such attacks. The paper also includes blockchain security enhancement solutions, summarizing key points that can be exploited to develop various blockchain systems and security tools that counter security vulnerabilities. Finally, the paper discusses open issues and future research directions for blockchain-IoT systems.

Journal ArticleDOI
TL;DR: In this article, the authors present an Ethereum blockchain-based approach leveraging smart contracts and decentralized off-chain storage for efficient product traceability in the healthcare supply chain, which guarantees data provenance, eliminates the need for intermediaries and provides a secure, immutable history of transactions to all stakeholders.
Abstract: Healthcare supply chains are complex structures spanning across multiple organizational and geographical boundaries, providing critical backbone to services vital for everyday life. The inherent complexity of such systems can introduce impurities including inaccurate information, lack of transparency and limited data provenance. Counterfeit drugs is one consequence of such limitations within existing supply chains which not only has serious adverse impact on human health but also causes severe economic loss to the healthcare industry. Consequently, existing studies have emphasized the need for a robust, end-to-end track and trace system for pharmaceutical supply chains. Therein, an end-to-end product tracking system across the pharmaceutical supply chain is paramount to ensuring product safety and eliminating counterfeits. Most existing track and trace systems are centralized leading to data privacy, transparency and authenticity issues in healthcare supply chains. In this article, we present an Ethereum blockchain-based approach leveraging smart contracts and decentralized off-chain storage for efficient product traceability in the healthcare supply chain. The smart contract guarantees data provenance, eliminates the need for intermediaries and provides a secure, immutable history of transactions to all stakeholders. We present the system architecture and detailed algorithms that govern the working principles of our proposed solution. We perform testing and validation, and present cost and security analysis of the system to evaluate its effectiveness to enhance traceability within pharmaceutical supply chains.
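
As a conceptual Python sketch (not the authors' Solidity contract), the following hash-linked custody log captures the traceability idea: bulky documents stay off-chain and only their hashes are anchored, and each handoff references the previous entry so any tampering is detectable:

```python
import hashlib, json, time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class TraceChain:
    """Hash-linked custody log: each handoff references the hash of the previous
    entry, so tampering breaks the chain (the role the immutable transaction
    history of a smart contract plays on Ethereum)."""
    def __init__(self):
        self.entries = []

    def record_handoff(self, batch_id, sender, receiver, off_chain_doc: bytes):
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "batch_id": batch_id, "from": sender, "to": receiver,
            "doc_hash": sha256(off_chain_doc),   # only the document hash is anchored
            "prev_hash": prev, "timestamp": time.time(),
        }
        entry["entry_hash"] = sha256(json.dumps(entry, sort_keys=True).encode())
        self.entries.append(entry)
        return entry["entry_hash"]

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            if e["prev_hash"] != prev or sha256(json.dumps(body, sort_keys=True).encode()) != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True

chain = TraceChain()
chain.record_handoff("LOT-42", "manufacturer", "distributor", b"certificate-of-analysis.pdf")
chain.record_handoff("LOT-42", "distributor", "pharmacy", b"shipping-manifest.pdf")
print("chain valid:", chain.verify())
```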

Journal ArticleDOI
TL;DR: A comprehensive survey of IoT-and IoMT-based edge-intelligent smart health care, mainly focusing on journal articles published between 2014 and 2020, is presented in this article.
Abstract: Smart health care is an important aspect of connected living. Health care is one of the basic pillars of human need, and smart health care is projected to produce several billion dollars in revenue in the near future. There are several components of smart health care, including the Internet of Things (IoT), the Internet of Medical Things (IoMT), medical sensors, artificial intelligence (AI), edge computing, cloud computing, and next-generation wireless communication technology. Many papers in the literature deal with smart health care or health care in general. Here, we present a comprehensive survey of IoT- and IoMT-based edge-intelligent smart health care, mainly focusing on journal articles published between 2014 and 2020. We survey this literature by addressing research questions on IoT and IoMT, AI, edge and cloud computing, security, and medical signal fusion. We also address current research challenges and offer some future research directions.

Journal ArticleDOI
TL;DR: 10 popular supervised and unsupervised ML algorithms for identifying effective and efficient ML–AIDS of networks and computers are applied and the true positive and negative rates, accuracy, precision, recall, and F-Score of 31 ML-AIDS models are evaluated.
Abstract: An intrusion detection system (IDS) is an important protection instrument for detecting complex network attacks. Various machine learning (ML) or deep learning (DL) algorithms have been proposed for implementing anomaly-based IDS (AIDS). Our review of the AIDS literature identifies some issues in related work, including the randomness of the selected algorithms, parameters, and testing criteria, the application of old datasets, or shallow analyses and validation of the results. This paper comprehensively reviews previous studies on AIDS by using a set of criteria with different datasets and types of attacks to set benchmarking outcomes that can reveal the suitable AIDS algorithms, parameters, and testing criteria. Specifically, this paper applies 10 popular supervised and unsupervised ML algorithms for identifying effective and efficient ML-AIDS of networks and computers. The supervised ML algorithms include the artificial neural network (ANN), decision tree (DT), k-nearest neighbor (k-NN), naive Bayes (NB), random forest (RF), support vector machine (SVM), and convolutional neural network (CNN) algorithms, whereas the unsupervised ML algorithms include the expectation-maximization (EM), k-means, and self-organizing maps (SOM) algorithms. Several models of these algorithms are introduced, and the tuning and training parameters of each algorithm are examined to achieve an optimal classifier evaluation. Unlike previous studies, this study evaluates the performance of AIDS by measuring the true positive and negative rates, accuracy, precision, recall, and F-score of 31 ML-AIDS models. The training and testing times of the ML-AIDS models are also considered in measuring their performance efficiency, given that time complexity is an important factor in AIDSs. The ML-AIDS models are tested using the recent and highly unbalanced multiclass CICIDS2017 dataset that involves real-world network attacks. In general, the k-NN-AIDS, DT-AIDS, and NB-AIDS models obtain the best results and show a greater capability in detecting web attacks compared with other models, which demonstrate irregular and inferior results.
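
A scikit-learn sketch of the per-model scoring described (accuracy, precision, recall, F-score, plus FPR, FNR, and NPV derived from the confusion matrix); toy binary labels stand in for a model's CICIDS2017 predictions:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

# Toy binary labels standing in for one ML-AIDS model's CICIDS2017 predictions.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = np.where(rng.random(1000) < 0.9, y_true, 1 - y_true)   # ~90% agreement

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))        # true positive rate
print("F-score  :", f1_score(y_true, y_pred))
print("FPR      :", fp / (fp + tn))                       # false positive rate
print("FNR      :", fn / (fn + tp))                       # false negative rate
print("NPV      :", tn / (tn + fn))                       # negative predictive value
```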

Journal ArticleDOI
TL;DR: In this paper, a new deep learning (DL) model based on the transfer-learning (TL) technique is developed to efficiently assist in the automatic detection and diagnosis of the BC suspected area based on two techniques namely 80-20 and cross-validation.
Abstract: Breast cancer (BC) is one of the primary causes of cancer death among women. Early detection of BC allows patients to receive appropriate treatment, thus increasing the possibility of survival. In this work, a new deep-learning (DL) model based on the transfer-learning (TL) technique is developed to efficiently assist in the automatic detection and diagnosis of the suspected BC area, evaluated with two schemes, namely an 80-20 split and cross-validation. DL architectures are modeled to be problem-specific, whereas TL uses the knowledge gained while solving one problem in another relevant problem. In the proposed model, features are extracted from the Mammographic Image Analysis Society (MIAS) dataset using pre-trained convolutional neural network (CNN) architectures such as Inception V3, ResNet50, Visual Geometry Group networks (VGG)-19, VGG-16, and Inception-ResNet-V2. Six evaluation metrics, namely accuracy, sensitivity, specificity, precision, F-score, and area under the ROC curve (AUC), have been chosen to evaluate the performance of the proposed model. Experimental results show that TL of the VGG16 model is powerful for BC diagnosis, classifying mammogram breast images with an overall accuracy, sensitivity, specificity, precision, F-score, and AUC of 98.96%, 97.83%, 99.13%, 97.35%, 97.66%, and 0.995, respectively, for the 80-20 method, and 98.87%, 97.27%, 98.2%, 98.84%, 98.04%, and 0.993 for the 10-fold cross-validation method.
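
A hedged Keras sketch of the VGG16 transfer-learning setup described (frozen ImageNet convolutional base plus a new classification head); the input size and head layers are illustrative, not the paper's exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Frozen ImageNet-pretrained convolutional base.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # benign vs. malignant mammogram patch
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.summary()
```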

Journal ArticleDOI
TL;DR: In this article, the authors have compared the existing algorithms in terms of implementation cost, hardware and software performances and attack resistance properties and discussed the demand and a direction for new research in the area of lightweight cryptography to optimize balance amongst cost, performance and security.
Abstract: IoT is becoming more common and popular due to its wide range of applications in various domains. IoT devices collect data from the real environment and transfer it over networks. There are many challenges in deploying IoT in the real world, with devices varying from tiny sensors to servers. Security is considered the number one challenge in IoT deployments, as most IoT devices are physically accessible in the real world and many of them are limited in resources (such as energy, memory, processing power and even physical space). In this paper, we focus on these resource-constrained IoT devices (such as RFID tags, sensors, smart cards, etc.), as securing them in such circumstances is a challenging task. Communication from such devices can be secured by means of lightweight cryptography, a lighter version of cryptography. More than fifty lightweight cryptography (plain encryption) algorithms are available in the market, each with a focus on a specific application(s), and another 57 algorithms have recently been submitted by researchers to the NIST competition. To provide a holistic view of the area, in this paper we compare the existing algorithms in terms of implementation cost, hardware and software performance and attack-resistance properties. We also discuss the demand for, and a direction of, new research in the area of lightweight cryptography to optimize the balance amongst cost, performance and security.
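
As an illustration of how software performance (one of the comparison axes above) can be measured, the sketch below times a cipher's throughput with timeit; PyCryptodome's AES is used only as a readily available baseline, since lightweight ciphers such as PRESENT or SPECK are not in standard Python libraries:

```python
import timeit
from Crypto.Cipher import AES          # PyCryptodome; baseline cipher, not a lightweight one
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)             # AES-128
data = get_random_bytes(1024 * 16)     # 16 KiB payload, a multiple of the 16-byte block size

def encrypt_once():
    # A fresh ECB cipher object per call keeps the measurement self-contained
    # (real deployments should use an authenticated mode such as GCM).
    AES.new(key, AES.MODE_ECB).encrypt(data)

runs = 200
seconds = timeit.timeit(encrypt_once, number=runs)
throughput = runs * len(data) / seconds / 1e6
print(f"AES-128 software throughput: {throughput:.1f} MB/s")
```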

Journal ArticleDOI
TL;DR: A review of the status of Resonant Inductive Wireless Power Transfer charging technology is presented in this paper, also highlighting the present status and the future of the wireless EV market.
Abstract: Consider a future scenario in which a driverless Electric Vehicle (EV) needs an automatic charging system that works without human intervention. In this regard, there is a requirement for a fully automatable, fast, safe, cost-effective, and reliable charging infrastructure that provides a profitable business model and fast adoption in electrified transportation systems. These qualities can be realized through wireless charging systems. Wireless Power Transfer (WPT) is a futuristic technology with the advantages of flexibility, convenience, safety, and the capability of becoming fully automated. Among WPT methods, resonant inductive wireless charging has gained more attention than other wireless power transfer methods due to its high efficiency and easy maintenance. This article presents a review of the status of Resonant Inductive Wireless Power Transfer charging technology, also highlighting the present status and the future of the wireless EV market. First, the paper delivers a brief history and throws light on wireless charging methods, highlighting their pros and cons. Then, the paper provides a comparative review of the different types of inductive pads, rails, and compensation technologies developed so far. The static and dynamic charging techniques and their characteristics are also illustrated. The role and importance of power electronics and the converter types used in various applications are discussed. Batteries and their management systems, as well as various problems involved in WPT, are also addressed. Different aspects such as cyber security, economic effects, health and safety, foreign object detection, and the impact on the distribution grid are explored. Prospects and challenges involved in wireless charging systems are also highlighted in this work. We believe that this work could help further the research and development of WPT systems.
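
Two standard resonant-inductive relations underpinning this technology can be sketched numerically: the resonance frequency f0 = 1/(2*pi*sqrt(L*C)) of the compensated coil and the maximum link efficiency set by the coupling coefficient k and the coil quality factors; the component values below are illustrative, not taken from the paper:

```python
import math

# Illustrative coil/compensation values (not from the paper).
L = 120e-6          # coil inductance, H
C = 24e-9           # compensation capacitance, F
k = 0.2             # magnetic coupling coefficient between the pads
Q1 = Q2 = 300       # loaded quality factors of the primary and secondary coils

f0 = 1 / (2 * math.pi * math.sqrt(L * C))            # resonance frequency of the L-C tank

# Standard figure of merit for a resonant inductive link and the resulting
# maximum achievable link efficiency.
fom = k**2 * Q1 * Q2
eta_max = fom / (1 + math.sqrt(1 + fom))**2

print(f"resonance frequency: {f0/1e3:.1f} kHz")
print(f"k^2*Q1*Q2 = {fom:.0f}, max link efficiency ~ {eta_max*100:.1f}%")
```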