
Showing papers in "Electronics in 2021"


Journal ArticleDOI
TL;DR: This work provides an overview of the most relevant evaluation methods used in object detection competitions, highlighting their peculiarities, differences, and advantages, and provides a novel open-source toolkit supporting different annotation formats and 15 performance metrics, making it easy for researchers to evaluate the performance of their detection algorithms in most known datasets.
Abstract: Recent outstanding results of supervised object detection in competitions and challenges are often associated with specific metrics and datasets. The evaluation of such methods applied in different contexts have increased the demand for annotated datasets. Annotation tools represent the location and size of objects in distinct formats, leading to a lack of consensus on the representation. Such a scenario often complicates the comparison of object detection methods. This work alleviates this problem along the following lines: (i) It provides an overview of the most relevant evaluation methods used in object detection competitions, highlighting their peculiarities, differences, and advantages; (ii) it examines the most used annotation formats, showing how different implementations may influence the assessment results; and (iii) it provides a novel open-source toolkit supporting different annotation formats and 15 performance metrics, making it easy for researchers to evaluate the performance of their detection algorithms in most known datasets. In addition, this work proposes a new metric, also included in the toolkit, for evaluating object detection in videos that is based on the spatio-temporal overlap between the ground-truth and detected bounding boxes.
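
As a practical aside, nearly all of the surveyed measures reduce to intersection over union (IoU) between boxes. Below is a minimal Python sketch of plain IoU plus one plausible formalization of a spatio-temporal "tube" overlap (total intersection area over total union area across frames); the abstract does not give the paper's exact video-metric definition, so the tube_iou variant is an illustrative assumption, not the authors' metric.

```python
def _inter_union(a, b):
    """Intersection and union areas of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter, area(a) + area(b) - inter

def iou(a, b):
    inter, union = _inter_union(a, b)
    return inter / union if union > 0 else 0.0

def tube_iou(track_a, track_b):
    """Spatio-temporal overlap of two tracks ({frame: box} dicts): summed
    intersection over summed union across all frames; a frame where only one
    track has a box contributes pure union."""
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    inter_sum = union_sum = 0.0
    for f in set(track_a) | set(track_b):
        a, b = track_a.get(f), track_b.get(f)
        if a and b:
            i, u = _inter_union(a, b)
            inter_sum, union_sum = inter_sum + i, union_sum + u
        else:
            union_sum += area(a or b)
    return inter_sum / union_sum if union_sum > 0 else 0.0
```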

246 citations


Journal ArticleDOI
TL;DR: A comprehensive overview of methods proposed in the current literature for the evaluation of ML explanations is presented, finding that the quantitative metrics for both model-based and example-based explanations are primarily used to evaluate the parsimony/simplicity of interpretability, and subjective measures have been embraced as the focal point for the human-centered evaluation of explainable systems.
Abstract: The most successful Machine Learning (ML) systems remain complex black boxes to end-users, and even experts are often unable to understand the rationale behind their decisions. The lack of transparency of such systems can have severe consequences, or lead to poor use of limited, valuable resources, in medical diagnosis, financial decision-making, and other high-stakes domains. Therefore, the issue of ML explanation has experienced a surge in interest from the research community across application domains. While numerous explanation methods have been explored, evaluations are needed to quantify the quality of explanation methods: to determine whether, and to what extent, the offered explainability achieves the defined objective, and to compare the available explanation methods so as to suggest the best-suited one for a specific task. This survey paper presents a comprehensive overview of methods proposed in the current literature for the evaluation of ML explanations. We identify properties of explainability from a review of definitions of explainability, and use these properties as the objectives that evaluation metrics should achieve. The survey found that quantitative metrics for both model-based and example-based explanations are primarily used to evaluate the parsimony/simplicity of interpretability, while quantitative metrics for attribution-based explanations are primarily used to evaluate the soundness/fidelity of explainability. The survey also showed that subjective measures, such as trust and confidence, have been embraced as the focal point for human-centered evaluation of explainable systems. The paper concludes that the evaluation of ML explanations is a multidisciplinary research topic, and that no single implementation of evaluation metrics can be applied to all explanation methods.

195 citations


Journal ArticleDOI
TL;DR: A small-object detection layer is added to improve the model’s ability to detect small defects; the resulting model meets real-time detection requirements and provides a robust strategy for kiwifruit flaw detection systems.
Abstract: Defect detection is the most important step in the postharvest processing of kiwifruit. However, some small defects are difficult to detect, and the accuracy and speed of existing detection algorithms struggle to meet the requirements of real-time detection. To solve these problems, we developed a defect detection model based on YOLOv5 that detects defects accurately and quickly. The main contributions of this research are as follows: (1) a small-object detection layer is added to improve the model’s ability to detect small defects; (2) we weight the importance of different channels by embedding an SELayer; (3) the CIoU loss function is introduced to make bounding-box regression more accurate; (4) without increasing the training cost, we train our model using transfer learning and the CosineAnnealing learning-rate schedule to improve the results. The experimental results show that the overall performance of the improved network, YOLOv5-Ours, is better than that of the original and other mainstream detection algorithms. The mAP@0.5 of YOLOv5-Ours reached 94.7%, an improvement of nearly 9% over the original algorithm. Our model takes only 0.1 s to process a single image, which demonstrates its effectiveness. Therefore, YOLOv5-Ours meets the requirements of real-time detection and provides a robust strategy for kiwifruit flaw detection systems.
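
For reference, the CIoU loss named in contribution (3) augments IoU with a center-distance penalty and an aspect-ratio consistency term (Zheng et al.). A minimal, unbatched Python sketch follows; production YOLOv5 code uses a vectorized tensor version, so this is illustrative only.

```python
import math

def ciou_loss(box_p, box_g):
    """Complete-IoU loss for predicted/ground-truth boxes (x1, y1, x2, y2)."""
    # plain IoU
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    iou = inter / (area_p + area_g - inter + 1e-9)
    # squared center distance over squared diagonal of the enclosing box
    cpx, cpy = (box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2
    cgx, cgy = (box_g[0] + box_g[2]) / 2, (box_g[1] + box_g[3]) / 2
    cw = max(box_p[2], box_g[2]) - min(box_p[0], box_g[0])
    ch = max(box_p[3], box_g[3]) - min(box_p[1], box_g[1])
    rho2 = (cpx - cgx) ** 2 + (cpy - cgy) ** 2
    c2 = cw ** 2 + ch ** 2 + 1e-9
    # aspect-ratio consistency term
    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wg, hg = box_g[2] - box_g[0], box_g[3] - box_g[1]
    v = (4 / math.pi ** 2) * (math.atan(wg / hg) - math.atan(wp / hp)) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - (iou - rho2 / c2 - alpha * v)
```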

140 citations


Journal ArticleDOI
TL;DR: The disease-identification accuracy results show that the deep CNN model is promising, can greatly improve the efficiency of disease identification, and may have potential for disease detection in real-time agricultural systems.
Abstract: The timely identification and early prevention of crop diseases are essential for improving production. In this paper, deep convolutional-neural-network (CNN) models are implemented to identify and diagnose diseases in plants from their leaves, since CNNs have achieved impressive results in the field of machine vision. Standard CNN models require a large number of parameters and incur a high computation cost. In this paper, we replaced standard convolution with depthwise-separable convolution, which reduces the parameter count and computation cost. The implemented models were trained on an open dataset consisting of 14 different plant species and 38 categorical classes of diseased and healthy plant leaves. To evaluate the performance of the models, different parameters such as batch size, dropout, and number of epochs were varied. The implemented models achieved disease-classification accuracy rates of 98.42%, 99.11%, 97.02%, and 99.56% using InceptionV3, InceptionResNetV2, MobileNetV2, and EfficientNetB0, respectively, which are greater than those of traditional handcrafted-feature-based approaches. In comparison with other deep-learning models, the implemented models achieved better accuracy and required less training time. Moreover, the MobileNetV2 architecture is compatible with mobile devices using the optimized parameters. The accuracy results in the identification of diseases show that the deep CNN model is promising, can greatly impact the efficient identification of diseases, and may have potential for disease detection in real-time agricultural systems.
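
The parameter saving from swapping standard convolution for depthwise-separable convolution is easy to see concretely. A small Keras sketch (the layer sizes are illustrative, not the paper's exact configuration):

```python
from tensorflow.keras import layers, models

# A standard k x k convolution needs k*k*Cin*Cout weights; a depthwise-
# separable one needs k*k*Cin (depthwise) + Cin*Cout (pointwise).
standard = models.Sequential([
    layers.Conv2D(64, 3, padding="same", use_bias=False,
                  input_shape=(224, 224, 32)),
])
separable = models.Sequential([
    layers.SeparableConv2D(64, 3, padding="same", use_bias=False,
                           input_shape=(224, 224, 32)),
])
print(standard.count_params())   # 3*3*32*64      = 18432
print(separable.count_params())  # 3*3*32 + 32*64 =  2336
```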

125 citations


Journal ArticleDOI
TL;DR: In this paper, the authors focus mainly on the primary taxonomy and newly released deep CNN architectures, and divide numerous recent developments in CNN architectures into eight groups: spatial exploitation, multi-path, depth, breadth, dimension, channel boosting, feature-map exploitation, and attention-based CNN.
Abstract: Computer vision is becoming an increasingly prominent field within image processing. With the emergence of computer vision applications, there is a significant demand to recognize objects automatically. Deep CNNs (convolutional neural networks) have benefited the computer vision community by producing excellent results in video processing, object recognition, picture classification and segmentation, natural language processing, speech recognition, and many other fields. Furthermore, the availability of large amounts of data and readily available hardware has opened new avenues for CNN study. Several inspirational concepts for the progress of CNNs have been investigated, including alternative activation functions, regularization, parameter optimization, and architectural advances. Architectural innovations in particular have yielded tremendous enhancements in the capacity of deep CNNs. Significant emphasis has been given to leveraging channel and spatial information, with a depth of architecture and information processing via multi-path. This survey paper focuses mainly on the primary taxonomy and newly released deep CNN architectures, dividing numerous recent developments in CNN architectures into eight groups: spatial exploitation, multi-path, depth, breadth, dimension, channel boosting, feature-map exploitation, and attention-based CNNs. The main contribution of this manuscript is a comparison of the various architectural evolutions of CNNs in terms of their architectural changes, strengths, and weaknesses. In addition, it includes an explanation of the CNN’s components, the strengths and weaknesses of various CNN variants, research gaps and open challenges, CNN applications, and future research directions.

107 citations


Journal ArticleDOI
TL;DR: This work describes the state of the art of one subset of these algorithms, the deep reinforcement learning (DRL) techniques, most of which are designed to ensure stable and smooth UAV navigation by training in computer-simulated environments.
Abstract: Unmanned Aerial Vehicles (UAVs) are increasingly being used in many challenging and diversified applications, spanning both civilian and military fields: infrastructure inspection, traffic patrolling, remote sensing, mapping, surveillance, human and animal rescue, environment monitoring, and Intelligence, Surveillance, Target Acquisition, and Reconnaissance (ISTAR) operations, to name a few. However, the use of UAVs in these applications requires a substantial level of autonomy. In other words, UAVs should have the ability to accomplish planned missions in unexpected situations without requiring human intervention. To ensure this level of autonomy, many artificial intelligence algorithms have been designed, targeting the guidance, navigation, and control (GNC) of UAVs. In this paper, we describe the state of the art of one subset of these algorithms: the deep reinforcement learning (DRL) techniques. We describe them in detail and identify the current limitations in this area. We note that most of these DRL methods were designed to ensure stable and smooth UAV navigation by training in computer-simulated environments, and that further research efforts are needed to address the challenges that restrain their deployment in real-life scenarios.

98 citations


Journal ArticleDOI
TL;DR: A novel pipeline strategy is introduced in which training of the dense layer(s) is followed by successively tuning each of the pre-trained DCNN blocks, gradually improving FER accuracy to a higher level.
Abstract: Human facial emotion recognition (FER) has attracted the attention of the research community for its promising applications. Mapping different facial expressions to their respective emotional states is the main task in FER. Classical FER consists of two major steps: feature extraction and emotion recognition. Currently, Deep Neural Networks, especially the Convolutional Neural Network (CNN), are widely used in FER by virtue of their inherent ability to extract features from images. Several works have been reported on CNNs with only a few layers to resolve FER problems. However, standard shallow CNNs with straightforward learning schemes have limited feature extraction capability for capturing emotion information from high-resolution images. A notable drawback of most existing methods is that they consider only frontal images (i.e., they ignore profile views for convenience), although profile views taken from different angles are important for a practical FER system. For developing a highly accurate FER system, this study proposes very Deep CNN (DCNN) modeling through a Transfer Learning (TL) technique, in which a pre-trained DCNN model is adopted by replacing its dense upper layer(s) with layers compatible with FER, and the model is fine-tuned with facial emotion data. A novel pipeline strategy is introduced, in which training of the dense layer(s) is followed by successively tuning each of the pre-trained DCNN blocks, gradually improving FER accuracy. The proposed FER system is verified on eight different pre-trained DCNN models (VGG-16, VGG-19, ResNet-18, ResNet-34, ResNet-50, ResNet-152, Inception-v3 and DenseNet-161) and the well-known KDEF and JAFFE facial image datasets. FER is very challenging even for frontal views alone, and FER on the KDEF dataset poses further challenges due to the diversity of images with both profile and frontal views. The proposed method achieved remarkable accuracy on both datasets with the pre-trained models. Using 10-fold cross-validation, the best FER accuracies achieved with DenseNet-161 on the test sets of KDEF and JAFFE are 96.51% and 99.52%, respectively. The evaluation results reveal the superiority of the proposed FER system over existing ones in terms of emotion detection accuracy. Moreover, the performance achieved on the KDEF dataset with profile views is promising, as it clearly demonstrates the proficiency required for real-life applications.
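
The staged fine-tuning pipeline can be pictured concretely. A minimal Keras sketch under stated assumptions: VGG-16 as the backbone, seven emotion classes, and placeholder train_ds/val_ds datasets and epoch counts (none of these specifics come from the paper):

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False,
             pooling="avg", input_shape=(224, 224, 3))
model = models.Sequential([
    base,
    layers.Dense(7, activation="softmax"),   # 7 basic emotion classes
])

# Stage 1: train only the replacement dense head.
base.trainable = False
model.compile(optimizers.Adam(1e-3), "categorical_crossentropy", ["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Later stages: unfreeze one pre-trained block at a time, deepest first,
# re-compiling with a smaller learning rate before each round of tuning.
unfrozen = []
for block in ["block5", "block4", "block3"]:
    unfrozen.append(block)
    base.trainable = True
    for layer in base.layers:
        layer.trainable = any(layer.name.startswith(b) for b in unfrozen)
    model.compile(optimizers.Adam(1e-5), "categorical_crossentropy", ["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=5)
```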

94 citations


Journal ArticleDOI
TL;DR: A novel framework for multi-class wearable user identification, based on recognizing human behavior with deep learning models, is presented, and its effectiveness is demonstrated.
Abstract: Currently, a significant amount of interest is focused on research in the field of Human Activity Recognition (HAR) as a result of the wide variety of its practical uses in real-world applications, such as biometric user identification, health monitoring of the elderly, and surveillance by authorities. The widespread use of wearable sensor devices and the Internet of Things (IoT) has made HAR a significant subject in the areas of mobile and ubiquitous computing. In recent years, deep learning has become the most widely used inference and problem-solving approach in HAR systems. Nevertheless, major challenges remain in applying HAR to biometric user identification, in which various human behaviors can be regarded as types of biometric qualities and used for identifying people. In this research study, a novel framework for multi-class wearable user identification, based on recognizing human behavior with deep learning models, is presented. To obtain rich information about users while they perform various activities, sensory data from the tri-axial gyroscopes and tri-axial accelerometers of the wearable devices are used. Additionally, a set of experiments was conducted to validate this work and demonstrate the proposed framework’s effectiveness. The results for the two basic models, namely the Convolutional Neural Network (CNN) and the Long Short-Term Memory (LSTM) network, showed that the highest accuracy across all users was 91.77% and 92.43%, respectively, both acceptable levels for biometric user identification.
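
As a sketch of what such a model looks like in practice, here is a minimal Keras LSTM classifier over fixed-length windows of six-channel inertial data (tri-axial accelerometer plus gyroscope). The window length, layer sizes, and dummy data are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from tensorflow.keras import layers, models

# Windows of 128 time steps x 6 channels classified into one of n_users
# identities; all sizes and the random data are placeholders.
n_users, window, channels = 10, 128, 6
model = models.Sequential([
    layers.LSTM(64, input_shape=(window, channels)),
    layers.Dropout(0.3),
    layers.Dense(n_users, activation="softmax"),
])
model.compile("adam", "sparse_categorical_crossentropy", ["accuracy"])

x = np.random.randn(32, window, channels).astype("float32")  # dummy batch
y = np.random.randint(0, n_users, size=32)
model.fit(x, y, epochs=1, verbose=0)
```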

92 citations


Journal ArticleDOI
TL;DR: A deep learning-based intrusion detection system for DDoS attacks is proposed, based on three models, namely convolutional neural networks, deep neural networks, and recurrent neural networks.
Abstract: Smart Agriculture, or the Agricultural Internet of Things, consists of integrating advanced technologies (e.g., NFV, SDN, 5G/6G, Blockchain, IoT, Fog, Edge, and AI) into existing farm operations to improve the quality and productivity of agricultural products. The convergence of Industry 4.0 and Intelligent Agriculture provides new opportunities for migration from factory agriculture to the next generation, known as Agriculture 4.0. However, since thousands of IoT-based devices are deployed in open fields, Agriculture 4.0 faces many new threats. Security researchers are working on this topic to ensure the safety of such systems, since an adversary can launch many cyber attacks, such as a DDoS attack that makes a service unavailable, followed by false data injection that reports the agricultural equipment as safe when in reality it has been stolen. In this paper, we propose a deep learning-based intrusion detection system for DDoS attacks based on three models, namely convolutional neural networks, deep neural networks, and recurrent neural networks. Each model’s performance is studied for two classification types (binary and multiclass) using two new real traffic datasets, namely the CIC-DDoS2019 dataset and the TON_IoT dataset, which contain different types of DDoS attacks.

82 citations


Journal ArticleDOI
TL;DR: A state-of-the-art review of electric vehicle technology, charging methods, standards, and optimization techniques is presented and several recommendations are put forward for future research.
Abstract: This paper presents a state-of-the-art review of electric vehicle technology, charging methods, standards, and optimization techniques. The essential characteristics of Hybrid Electric Vehicle (HEV) and Electric Vehicle (EV) are first discussed. Recent research on EV charging methods such as Battery Swap Station (BSS), Wireless Power Transfer (WPT), and Conductive Charging (CC) are then presented. This is followed by a discussion of EV standards such as charging levels and their configurations. Next, some of the most used optimization techniques for the sizing and placement of EV charging stations are analyzed. Finally, based on the insights gained, several recommendations are put forward for future research.

75 citations


Journal ArticleDOI
TL;DR: The Properly Wearing Masked Face Detection Dataset (PWMFD), which includes 9205 images of mask-wearing samples in three categories, is proposed, along with Squeeze and Excitation (SE)-YOLOv3, a mask detector with a relatively balanced trade-off between effectiveness and efficiency.
Abstract: The rapid outbreak of COVID-19 has caused serious harm and infected tens of millions of people worldwide. Since there is no specific treatment, wearing masks has become an effective method of preventing the transmission of COVID-19 and is required in most public areas, which has also led to a growing demand for automatic real-time mask detection services to replace manual reminding. However, few studies on face mask detection have been conducted, and it is urgent to improve the performance of mask detectors. In this paper, we propose the Properly Wearing Masked Face Detection Dataset (PWMFD), which includes 9205 images of mask-wearing samples in three categories. Moreover, we propose Squeeze and Excitation (SE)-YOLOv3, a mask detector with a relatively balanced trade-off between effectiveness and efficiency. We integrated the attention mechanism by introducing the SE block into Darknet53 to obtain the relationships among channels, so that the network can focus more on important features. We adopted GIoU loss, which better describes the spatial difference between predicted and ground-truth boxes, to improve the stability of bounding box regression. Focal loss was utilized to address the extreme foreground-background class imbalance. In addition, we applied corresponding image augmentation techniques to further improve the robustness of the model on this specific task. Experimental results showed that SE-YOLOv3 outperformed YOLOv3 and other state-of-the-art detectors on PWMFD, achieving an 8.6% higher mAP than YOLOv3 while maintaining a comparable detection speed.
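
The SE ("squeeze-and-excitation") block mentioned here is a small, self-contained channel-attention module (Hu et al.). A minimal Keras rendition is below; the paper inserts SE into Darknet53, so treat this framework and placement as illustrative rather than the authors' exact implementation:

```python
from tensorflow.keras import layers

def se_block(x, reduction=16):
    """Squeeze-and-Excitation: global-average-pool each channel ("squeeze"),
    learn per-channel gates via a bottleneck MLP ("excitation"), then rescale
    the feature map channel-wise."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)                    # (B, C)
    s = layers.Dense(channels // reduction, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)       # gates in [0, 1]
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])                          # recalibrated map

# Usage inside any functional-API backbone:
# x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
# x = se_block(x)
```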

Journal ArticleDOI
TL;DR: This is the first in-depth literature survey of ML techniques in the field of low-power WSN-IoT for smart cities; it shows that supervised learning algorithms have been the most widely used for smart city applications, compared to reinforcement learning and unsupervised learning.
Abstract: Artificial intelligence (AI) and machine learning (ML) techniques have huge potential to efficiently manage the automated operation of internet of things (IoT) nodes deployed in smart cities. In smart cities, the major IoT applications are smart traffic monitoring, smart waste management, smart buildings, and patient healthcare monitoring. Small IoT nodes based on the low-power Bluetooth (IEEE 802.15.1) and wireless sensor network (WSN) (IEEE 802.15.4) standards are generally used to transmit data to a remote location through gateways. WSN-based IoT (WSN-IoT) design problems include network coverage and connectivity issues, energy consumption, bandwidth requirements, network lifetime maximization, communication protocols, and state-of-the-art infrastructure. In this paper, the authors propose machine learning methods as an optimization tool for regular WSN-IoT nodes deployed in smart city applications. To the authors’ knowledge, this is the first in-depth literature survey of ML techniques in the field of low-power WSN-IoT for smart cities. The results of this survey show that supervised learning algorithms have been the most widely used (61%), compared to reinforcement learning (27%) and unsupervised learning (12%), for smart city applications.

Journal ArticleDOI
TL;DR: In this article, an extensive review of optimization techniques for artificial neural networks (ANNs) is presented, covering well-known techniques such as the genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), and backtracking search algorithm (BSA).
Abstract: In the last few years, intensive research has been done to enhance artificial intelligence (AI) using optimization techniques. In this paper, we present an extensive review of optimization techniques for artificial neural networks (ANNs), covering well-known methods, e.g., the genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), and backtracking search algorithm (BSA), as well as more recently developed techniques, e.g., the lightning search algorithm (LSA) and whale optimization algorithm (WOA), and many more. All of these techniques are population-based algorithms in which the initial population is randomly created. Input parameters are initialized within a specified range, and the algorithms can then search for optimal solutions. This paper emphasizes enhancing neural networks via optimization algorithms by tuning their parameters or training parameters to obtain the best network structure for solving problems effectively. The paper includes results for improving ANN performance with the PSO, GA, ABC, and BSA optimization techniques, respectively, used to search for optimal parameters, e.g., the number of neurons in the hidden layers and the learning rate. The obtained neural network is used for solving energy management problems in a virtual power plant system.
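
To make the ANN-plus-optimizer pairing concrete, here is a self-contained Python sketch of textbook global-best PSO tuning two of the hyperparameters named in the abstract (learning rate and hidden-layer width) for a small scikit-learn MLP on synthetic data. The swarm size, iteration count, bounds, and dataset are all illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def fitness(p):
    """Validation error of an MLP with PSO-chosen hyperparameters:
    p[0] = log10(learning rate), p[1] = hidden-layer width."""
    clf = MLPClassifier(hidden_layer_sizes=(max(2, int(p[1])),),
                        learning_rate_init=10 ** p[0],
                        max_iter=200, random_state=0)
    clf.fit(X_tr, y_tr)
    return 1.0 - clf.score(X_va, y_va)

# Textbook global-best PSO: inertia plus cognitive/social pulls.
rng = np.random.default_rng(0)
lo, hi = np.array([-4.0, 2.0]), np.array([-1.0, 64.0])
pos = rng.uniform(lo, hi, size=(8, 2))
vel = np.zeros_like(pos)
pbest, pcost = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pcost.argmin()]
for _ in range(10):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    cost = np.array([fitness(p) for p in pos])
    improved = cost < pcost
    pbest[improved], pcost[improved] = pos[improved], cost[improved]
    gbest = pbest[pcost.argmin()]
print("best lr=%.4g, hidden=%d" % (10 ** gbest[0], int(gbest[1])))
```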

Journal ArticleDOI
TL;DR: This work proposes a novel solution to attack detection problems in industrial control systems based on measurement data in the supervisory control and data acquisition (SCADA) system, called the measurement intrusion detection system (MIDS), which enables the system to detect any abnormal activity even if the attacker tries to conceal it in the system’s control layer.
Abstract: Attack detection problems in industrial control systems (ICSs) are commonly approached as network traffic monitoring schemes for detecting abnormal activities. However, a network-based intrusion detection system can be deceived by attackers who imitate the system’s normal activity. In this work, we propose a novel solution to this problem based on measurement data in the supervisory control and data acquisition (SCADA) system. The proposed approach, called the measurement intrusion detection system (MIDS), enables the system to detect any abnormal activity even if the attacker tries to conceal it in the system’s control layer. A supervised machine learning model is generated to classify normal and abnormal activities in an ICS and evaluate the MIDS performance. A hardware-in-the-loop (HIL) testbed is developed to simulate the power generation units and generate the attack dataset. We applied several machine learning models to the dataset, which showed remarkable performance in detecting the dataset’s anomalies, especially stealthy attacks. The results show that the random forest performs better than the other classifier algorithms in detecting anomalies based on the measured data from the testbed.
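
As a baseline reproduction of the approach's final step, the sketch below trains a random-forest classifier on measurement-style features. The feature count, random data, and labels are placeholders for the paper's HIL testbed data, which the abstract does not publish:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Stand-in for SCADA process measurements (e.g., voltages, frequencies,
# power readings) with a normal/attack label per row; purely illustrative.
X = np.random.randn(1000, 12)           # 12 measurement channels
y = np.random.randint(0, 2, size=1000)  # 0 = normal, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```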

Journal ArticleDOI
TL;DR: This article studies the different uses of sentiment analysis in the detection of fake news, with a discussion of the most relevant elements and shortcomings, and the requirements that should be met in the near future, such as multilingualism, explainability, mitigation of biases, or treatment of multimedia elements.
Abstract: In recent years, we have witnessed a rise in fake news, i.e., provably false pieces of information created with the intention of deception. The dissemination of this type of news poses a serious threat to cohesion and social well-being, since it fosters political polarization and the distrust of people with respect to their leaders. The huge amount of news that is disseminated through social media makes manual verification unfeasible, which has promoted the design and implementation of automatic systems for fake news detection. The creators of fake news use various stylistic tricks to promote the success of their creations, with one of them being to excite the sentiments of the recipients. This has led sentiment analysis, the part of text analytics in charge of determining the polarity and strength of sentiments expressed in a text, to be used in fake news detection approaches, either as a basis of the system or as a complementary element. In this article, we study the different uses of sentiment analysis in the detection of fake news, with a discussion of the most relevant elements and shortcomings, and the requirements that should be met in the near future, such as multilingualism, explainability, mitigation of biases, or treatment of multimedia elements.

Journal ArticleDOI
TL;DR: The main research directions include the coupling of decision-making with augmented reality for seamless interfacing that combines the real and virtual worlds of manufacturing operators, and methods and techniques for addressing the uncertainty of data from emerging Internet of Things devices.
Abstract: Decision-making for manufacturing and maintenance operations is benefiting from the advanced sensor infrastructure of Industry 4.0, enabling the use of algorithms that analyze data, predict emerging situations, and recommend mitigating actions. The current paper reviews the literature on data-driven decision-making in maintenance and outlines directions for future research towards data-driven decision-making for Industry 4.0 maintenance applications. The main research directions include the coupling of decision-making with augmented reality for seamless interfacing that combines the real and virtual worlds of manufacturing operators; methods and techniques for addressing the uncertainty of data from emerging Internet of Things (IoT) devices; integration of maintenance decision-making with other operations such as scheduling and planning; utilization of the cloud continuum for optimal deployment of decision-making services; the capability of decision-making methods to cope with big data; incorporation of advanced security mechanisms; and coupling decision-making with simulation software, autonomous robots, and other additive manufacturing initiatives.

Journal ArticleDOI
TL;DR: In this paper, a review of trajectory planning for the three most important autonomous vehicle domains (ground, aerial, and underwater) is presented, shedding light on trajectory planning, its optimization, and various issues in a summarized way.
Abstract: In this paper, a review of trajectory planning for the three most important autonomous vehicle domains (ground, aerial, and underwater) is presented, shedding light on trajectory planning, its optimization, and various open issues in a summarized way. This kind of extensive review is not often seen in the literature, so an effort has been made to fill the gap for readers interested in path planning. Moreover, optimization techniques suitable for ground, aerial, and underwater vehicles are also part of this review. The paper covers numerical and bio-inspired techniques, and their hybridization with each other, for each of the domains mentioned. The paper provides a consolidated platform in which plenty of the available research on ground autonomous vehicles and their trajectory optimization, extended to aerial and underwater vehicles, is documented.

Journal ArticleDOI
TL;DR: The experimental results showed that the proposed routing protocol adapts to dynamic changes in the communication network, such as obstacles and shadows, and achieved better performance in data transmission in terms of throughput, packet delivery ratio, end-to-end delay, and routing overhead.
Abstract: In recent times, visible light communication is an emerging technology that supports high-speed data communication for wireless communication systems. However, the performance of a visible light communication system is impaired by inter-symbol interference, the time-dispersive nature of the channel, and the nonlinear features of the light-emitting diode, which significantly reduce the bit error rate performance. To address these problems, many environments offer a rich infrastructure of light sources for end-to-end communication. In this research paper, an effective routing protocol based on a modified grasshopper optimization algorithm is proposed to reduce communication interruptions and to provide alternative routes in the network without requiring prior topology knowledge. The proposed routing protocol is implemented and analyzed in the MATLAB environment. The experimental results showed that the proposed routing protocol adapts to dynamic changes in the communication network, such as obstacles and shadows. Hence, the proposed protocol achieved better performance in data transmission in terms of throughput, packet delivery ratio, end-to-end delay, and routing overhead. In addition, the performance was analyzed by varying the number of nodes (50, 100, 250, and 500). From the experimental analysis, the proposed routing protocol achieved a maximum of 16.69% and a minimum of 2.20% improvement in packet delivery ratio, and reduced end-to-end delay by 0.80 milliseconds, compared to the existing optimization algorithms.

Journal ArticleDOI
TL;DR: The detailed experimental analysis revealed that the proposed model efficiently exploits mixed cryptocurrency data, reduces overfitting, and decreases the computational cost in comparison with traditional fully-connected deep neural networks.
Abstract: Nowadays, cryptocurrencies are established and widely recognized as an alternative method of exchange. They have infiltrated most financial transactions, and as a result, cryptocurrency trading is generally considered one of the most popular and promising types of profitable investment. Nevertheless, this constantly growing financial market is characterized by significant volatility and strong price fluctuations over short time periods; therefore, the development of an accurate and reliable forecasting model is considered essential for portfolio management and optimization. In this research, we propose a multiple-input deep neural network model for the prediction of cryptocurrency price and movement. The proposed forecasting model takes as inputs data from different cryptocurrencies and handles them independently in order to exploit useful information from each cryptocurrency separately. An extensive empirical study was performed using three consecutive years of data from the three cryptocurrencies with the highest market capitalization, i.e., Bitcoin (BTC), Ethereum (ETH), and Ripple (XRP). The detailed experimental analysis revealed that the proposed model efficiently exploits mixed cryptocurrency data, reduces overfitting, and decreases the computational cost in comparison with traditional fully-connected deep neural networks.
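
A multiple-input model of this general shape is straightforward to express in Keras: one branch per coin, merged before the output head. The branch design below (1-D convolutions, 30-step windows, a single regression output) is a guess for illustration; the abstract does not specify the architecture:

```python
from tensorflow.keras import layers, models

def branch(name, steps=30):
    """One input branch per coin; window length and layers are assumptions."""
    inp = layers.Input((steps, 1), name=name)
    h = layers.Conv1D(32, 3, activation="relu")(inp)
    h = layers.GlobalAveragePooling1D()(h)
    return inp, h

inputs, feats = zip(*[branch(n) for n in ("btc", "eth", "xrp")])
merged = layers.concatenate(list(feats))      # fuse per-coin features
hidden = layers.Dense(32, activation="relu")(merged)
output = layers.Dense(1, name="next_price")(hidden)
model = models.Model(list(inputs), output)
model.compile("adam", "mse")
model.summary()
```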

Journal ArticleDOI
TL;DR: A review of recent developments in SER is provided and the impact of various attention mechanisms on SER performance is examined, with an overall comparison of system accuracies performed on the widely used IEMOCAP benchmark database.
Abstract: Emotions are an integral part of human interactions and are significant factors in determining user satisfaction or customer opinion. Speech emotion recognition (SER) modules also play an important role in the development of human–computer interaction (HCI) applications. A tremendous number of SER systems have been developed over the last decades. Attention-based deep neural networks (DNNs) have been shown to be suitable tools for mining information that is unevenly distributed in time across multimedia content. The attention mechanism has recently been incorporated into DNN architectures to also emphasize emotionally salient information. This paper provides a review of recent developments in SER and examines the impact of various attention mechanisms on SER performance. An overall comparison of system accuracies is performed on the widely used IEMOCAP benchmark database.
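
For readers unfamiliar with how attention enters an SER pipeline, a common pattern is attention pooling: score each frame of the encoded utterance, softmax the scores, and take the weighted mean, so emotionally salient frames dominate the utterance vector. A minimal TensorFlow sketch of that generic pattern (not any specific system from the review):

```python
import tensorflow as tf
from tensorflow.keras import layers

class AttentionPooling(layers.Layer):
    """Additive attention over time: score every frame, softmax the scores,
    and return the attention-weighted mean of the frame features."""
    def build(self, input_shape):
        self.w = self.add_weight(name="score_vector",
                                 shape=(int(input_shape[-1]), 1),
                                 initializer="glorot_uniform")
    def call(self, h):                        # h: (batch, time, features)
        alpha = tf.nn.softmax(tf.matmul(h, self.w), axis=1)  # frame weights
        return tf.reduce_sum(alpha * h, axis=1)              # (batch, features)

# Typical placement after a recurrent encoder:
# x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
# utterance_vector = AttentionPooling()(x)
```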

Journal ArticleDOI
TL;DR: This paper presents a comprehensive survey of meta-heuristic optimization algorithms for text clustering applications, highlights their main procedures, advantages, and disadvantages, and recommends potential future research paths.
Abstract: This paper presents a comprehensive survey of meta-heuristic optimization algorithms for text clustering applications and highlights their main procedures. These Artificial Intelligence (AI) algorithms are recognized as promising swarm intelligence methods due to their successful ability to solve machine learning problems, especially text clustering problems. This paper reviews all of the relevant literature on meta-heuristic-based text clustering applications, including many variants, such as basic, modified, hybridized, and multi-objective methods. In addition, the main procedures of text clustering are described and critically discussed. The review reports the advantages and disadvantages of these methods and recommends potential future research paths. The main keywords that have been considered in this paper are text, clustering, meta-heuristic, optimization, and algorithm.

Journal ArticleDOI
TL;DR: This review article is intended to be a preface to the Special Issue on Voltage Stability of Microgrids in Power Systems and presents a comprehensive review of the literature on voltage stability of power systems with a relatively high percentage of IBGs in the generation mix of the system.
Abstract: The main purpose of developing microgrids (MGs) is to facilitate the integration of renewable energy sources (RESs) into the power grid. RESs are normally connected to the grid via power electronic inverters. As various types of RESs are increasingly being connected to the electrical power grid, power systems of the near future will have more inverter-based generators (IBGs) instead of synchronous machines. Since IBGs have significant differences in their characteristics compared to synchronous generators (SGs), particularly concerning their inertia and capability to provide reactive power, their impacts on system dynamics differ from those of SGs. In particular, system stability analysis will require new approaches. As such, research is currently being conducted on the stability of power systems that include IBGs. This review article is intended as a preface to the Special Issue on Voltage Stability of Microgrids in Power Systems. It presents a comprehensive review of the literature on the voltage stability of power systems with a relatively high percentage of IBGs in the generation mix. As research in this field is developing rapidly, there will certainly be many more new developments in this area by the time this article is published and beyond. Other articles in this Special Issue highlight further important aspects of the voltage stability of microgrids.

Journal ArticleDOI
TL;DR: This paper provides a comprehensive review of state-of-the-art ML-based data-driven fault detection/diagnosis techniques, offering a ready reference and direction to the research community aiming to develop an accurate, reliable, adaptive, and easy-to-implement fault diagnosis strategy for LIB systems.
Abstract: Fault detection/diagnosis has become a crucial function of the battery management system (BMS) due to the increasing application of lithium-ion batteries (LIBs) in highly sophisticated and high-power applications to ensure the safe and reliable operation of the system. The application of Machine Learning (ML) in the BMS of LIB has long been adopted for efficient, reliable, accurate prediction of several important states of LIB such as state of charge, state of health and remaining useful life. Inspired by some of the promising features of ML-based techniques over the conventional LIB fault detection/diagnosis methods such as model-based, knowledge-based and signal processing-based techniques, ML-based data-driven methods have been a prime research focus in the last few years. This paper provides a comprehensive review exclusively on the state-of-the-art ML-based data-driven fault detection/diagnosis techniques to provide a ready reference and direction to the research community aiming towards developing an accurate, reliable, adaptive and easy to implement fault diagnosis strategy for the LIB system. Current issues of existing strategies and future challenges of LIB fault diagnosis are also explained for better understanding and guidance.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed DNN-IoT-BA method provides lower energy consumption and latency than conventional methods to support real-time traffic conditions.
Abstract: In this paper, Deep Neural Networks (DNN) with Bat Algorithms (BA) offer a dynamic form of traffic control in Vehicular Ad hoc Networks (VANETs). The former is used to route vehicles across highly congested paths to enhance efficiency with lower average latency. The latter is combined with the Internet of Things (IoT) and operates across the VANET to analyze the traffic congestion status between network nodes. The experimental analysis tests the effectiveness of DNN-IoT-BA against various machine and deep learning algorithms in VANETs. DNN-IoT-BA is validated through various network metrics, such as packet delivery ratio, latency and packet error rate. The simulation results show that the proposed method provides lower energy consumption and latency than conventional methods to support real-time traffic conditions.

Journal ArticleDOI
TL;DR: Low-power, high-speed hardware architectures are presented for the efficient field programmable gate array (FPGA) implementation of the advanced encryption standard (AES) algorithm to provide data security, with a modified positive polarity Reed-Muller (MPPRM) architecture inserted to reduce hardware requirements.
Abstract: Nowadays, a huge amount of digital data is frequently exchanged among different embedded devices over wireless communication technologies. Data security is considered an important parameter for avoiding information loss and preventing cyber-crimes. This research article details low-power, high-speed hardware architectures for the efficient field programmable gate array (FPGA) implementation of the advanced encryption standard (AES) algorithm to provide data security. This work does not depend on Look-Up Tables (LUTs) for the implementation of the SubBytes and InvSubBytes transformations of AES encryption and decryption; instead, the new architecture uses combinational logic circuits to implement the SubBytes and InvSubBytes transformations. Due to the elimination of LUTs, unwanted delays are eliminated in this architecture, and a subpipelining structure is introduced to improve the speed of the AES algorithm. Here, a modified positive polarity Reed-Muller (MPPRM) architecture is inserted to reduce the total hardware requirements, and comparisons are made with different implementations. With the MPPRM architecture introduced in the SubBytes stage, efficient MixColumns and InvMixColumns architectures suited to subpipelined round units are added. The performance of the proposed AES-MPPRM architecture is analyzed in terms of the number of slice registers, flip-flops, slice LUTs, logical elements, slices, bonded IOBs, operating frequency, and delay, against five different AES architectures: LAES, AES-CTR, AES-CFA, AES-BSRD, and AES-EMCBE. The LUT count of the AES-MPPRM architecture designed on the Spartan-6 is reduced by up to 15.45% when compared to AES-BSRD.
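
The key idea, computing SubBytes algebraically instead of storing a 256-entry table, can be demonstrated in software. The Python sketch below derives each S-box value as the multiplicative inverse in GF(2^8) followed by the AES affine transformation; it illustrates the mathematics behind a combinational (LUT-free) SubBytes, not the paper's hardware design:

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def sub_byte(x):
    """One SubBytes value, computed instead of stored: multiplicative inverse
    in GF(2^8) (x^254, since x^255 = 1 for nonzero x) followed by the AES
    affine transformation with constant 0x63."""
    inv, base, e = (1, x, 254) if x else (0, 0, 0)
    while e:                       # square-and-multiply exponentiation
        if e & 1:
            inv = gf_mul(inv, base)
        base = gf_mul(base, base)
        e >>= 1
    y = 0
    for i in range(8):  # b'_i = b_i ^ b_(i+4) ^ b_(i+5) ^ b_(i+6) ^ b_(i+7) ^ c_i
        bit = ((inv >> i) ^ (inv >> ((i + 4) % 8)) ^ (inv >> ((i + 5) % 8))
               ^ (inv >> ((i + 6) % 8)) ^ (inv >> ((i + 7) % 8))
               ^ (0x63 >> i)) & 1
        y |= bit << i
    return y

assert sub_byte(0x00) == 0x63 and sub_byte(0x53) == 0xED  # FIPS-197 test values
```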

Journal ArticleDOI
TL;DR: This work presents a new algorithm for image encryption using a hyperchaotic system and Fibonacci Q-matrix, which achieved an excellent security level and outperformed the existing image encryption algorithms.
Abstract: In the age of Information Technology, daily life requires the transmission of millions of images between users, and securing these images is essential. Digital image encryption is a well-known technique for securing image content. In image encryption techniques, digital images are converted into noise images using secret keys, and restoring them to their originals requires the same keys. Most image encryption techniques depend on two steps: confusion and diffusion. In this work, a new algorithm is presented for image encryption using a hyperchaotic system and a Fibonacci Q-matrix. In this algorithm, the original image is confused using numbers randomly generated by a six-dimensional hyperchaotic system. Then, the permuted image is diffused using the Fibonacci Q-matrix. The proposed image encryption algorithm was tested using noise and data-cut attacks, histogram analysis, keyspace analysis, and sensitivity analysis. Moreover, the proposed algorithm’s performance was compared with several existing algorithms using entropy, correlation coefficients, and robustness against attack. The proposed algorithm achieved an excellent security level and outperformed the existing image encryption algorithms.
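
The diffusion half of the scheme is easy to illustrate: powers of the Fibonacci Q-matrix have determinant ±1, so multiplying pixel pairs by Q^n mod 256 is invertible and spreads each pixel's value into its neighbor. The sketch below shows only this Q-matrix diffusion on dummy data (an even pixel count and the exponent n = 10 are assumptions); the paper's confusion step with the six-dimensional hyperchaotic system is omitted:

```python
import numpy as np

def fib_pair(n):
    """Return (F(n), F(n+1)) by simple iteration."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a, b

def q_diffuse(img, n=10):
    """Diffuse a uint8 image by multiplying consecutive pixel pairs with
    Q^n mod 256, where Q = [[1, 1], [1, 0]] is the Fibonacci Q-matrix and
    Q^n = [[F(n+1), F(n)], [F(n), F(n-1)]]. det(Q^n) = (-1)^n is a unit
    mod 256, so the step is invertible and decryption can undo it.
    Assumes an even number of pixels."""
    fn, fn1 = fib_pair(n)
    Q = np.array([[fn1, fn], [fn, fn1 - fn]], dtype=np.int64) % 256
    pairs = img.astype(np.int64).reshape(-1, 2)
    return ((pairs @ Q) % 256).astype(np.uint8).reshape(img.shape)

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)  # dummy "image"
enc = q_diffuse(img)
```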

Journal ArticleDOI
TL;DR: In this article, a single-layer MIMO antenna for 5G 28 GHz frequency band applications is proposed and investigated; it operates in the Ka-band, the most desirable frequency band for mm-wave communication.
Abstract: In this paper, a novel single-layer Multiple Input–Multiple Output (MIMO) antenna for Fifth-Generation (5G) 28 GHz frequency band applications is proposed and investigated. The proposed MIMO antenna operates in the Ka-band, which is the most desirable frequency band for 5G mm-wave communication. The dielectric material in the proposed antenna design is Rogers 5880, with a relative permittivity of 2.2, a thickness of 0.787 mm, and a loss tangent of 0.0009. Each element of the proposed MIMO configuration consists of a triplet of circular rings surrounded by an infinity-shaped shell. The simulated gain achieved by the proposed design is 6.1 dBi, while the measured gain is 5.5 dBi. Furthermore, the measured and simulated antenna efficiencies are 90% and 92%, respectively. One of the MIMO performance metrics, the Envelope Correlation Coefficient (ECC), is also analyzed and found to be less than 0.16 over the entire operating bandwidth. The proposed MIMO design operates efficiently with a low ECC, good efficiency, and a satisfactory gain, showing that it is a potential candidate for mm-wave communication.
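
For context, the ECC of a two-element MIMO antenna is commonly estimated from the measured S-parameters using the standard formula below (valid under a lossless-antenna assumption; the abstract does not state which estimation method the authors used):

```latex
\rho_e =
\frac{\lvert S_{11}^{*}S_{12} + S_{21}^{*}S_{22}\rvert^{2}}
     {\left(1-\lvert S_{11}\rvert^{2}-\lvert S_{21}\rvert^{2}\right)
      \left(1-\lvert S_{12}\rvert^{2}-\lvert S_{22}\rvert^{2}\right)}
```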

Journal ArticleDOI
TL;DR: Experimental results show that the hybrid DL model CNN-BiGRU outperformed the other DL models, achieving the highest recognition accuracy in every scenario across a variety of performance indicators, including accuracy, F1-score, and the confusion matrix.
Abstract: Sensor-based human activity recognition (S-HAR) has become an important and high-impact topic of research within human-centered computing. In the last decade, successful applications of S-HAR have been presented through fruitful academic research and industrial applications, including healthcare monitoring, smart home control, and daily sport tracking. However, the growing requirement of many current applications to recognize complex human activities (CHA), as opposed to simple human activities (SHA), has begun to attract the attention of the HAR research field. Work on S-HAR has shown that deep learning (DL), a type of machine learning based on complicated artificial neural networks, achieves a significant degree of recognition efficiency. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are two different types of DL methods that have been successfully applied to the S-HAR challenge in recent years. In this paper, we focused on four RNN-based DL models (LSTMs, BiLSTMs, GRUs, and BiGRUs) performing complex activity recognition tasks. The efficiency of four hybrid DL models that combine convolutional layers with the RNN-based models was also studied. Experimental studies on the UTwente dataset demonstrated that the suggested hybrid RNN-based models achieved a high level of recognition performance on a variety of performance indicators, including accuracy, F1-score, and the confusion matrix. The experimental results show that the hybrid DL model CNN-BiGRU outperformed the other DL models, with a high accuracy of 98.89% when using only complex activity data. Moreover, the CNN-BiGRU model also achieved the highest recognition performance in the other scenarios (99.44% using only simple activity data and 98.78% with a combination of simple and complex activities).
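
The winning hybrid is architecturally simple: convolutional layers extract local motion features, and a bidirectional GRU models their temporal dependencies. A minimal Keras sketch of a CNN-BiGRU of this kind (window length, channel count, and layer sizes are illustrative, not the paper's exact settings):

```python
from tensorflow.keras import layers, models

# 128-step windows of 6 inertial channels into 13 activity classes; all
# sizes are placeholders rather than the paper's configuration.
model = models.Sequential([
    layers.Conv1D(64, 5, activation="relu", input_shape=(128, 6)),
    layers.MaxPooling1D(2),                     # local motion features
    layers.Bidirectional(layers.GRU(64)),       # temporal context, both ways
    layers.Dense(13, activation="softmax"),
])
model.compile("adam", "sparse_categorical_crossentropy", ["accuracy"])
model.summary()
```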

Journal ArticleDOI
TL;DR: This paper provides a systematic and comprehensive survey that reviews the latest research efforts focused on machine learning (ML) based performance improvement of wireless networks, while considering all layers of the protocol stack (PHY, MAC and network).
Abstract: This paper presents a systematic and comprehensive survey that reviews the latest research efforts focused on machine learning (ML) based performance improvement of wireless networks, considering all layers of the protocol stack: PHY, MAC, and network. First, the related work and paper contributions are discussed, followed by the necessary background on data-driven approaches and machine learning to help non-machine-learning experts understand all of the discussed techniques. Then, a comprehensive review is presented of works employing ML-based approaches to optimize wireless communication parameter settings and achieve improved network quality-of-service (QoS) and quality-of-experience (QoE). We first categorize these works into radio analysis, MAC analysis, and network prediction approaches, with subcategories within each. Finally, open challenges and broader perspectives are discussed.

Journal ArticleDOI
TL;DR: This paper introduces the development and application of PPIR technology, followed by its classification and analysis, and presents the theory of four types of deep learning methods and their applications in PPIR.
Abstract: Plant phenotypic image recognition (PPIR) is an important branch of smart agriculture. In recent years, deep learning has achieved significant breakthroughs in image recognition. Consequently, PPIR technology that is based on deep learning is becoming increasingly popular. First, this paper introduces the development and application of PPIR technology, followed by its classification and analysis. Second, it presents the theory of four types of deep learning methods and their applications in PPIR. These methods include the convolutional neural network, deep belief network, recurrent neural network, and stacked autoencoder, and they are applied to identify plant species, diagnose plant diseases, etc. Finally, the difficulties and challenges of deep learning in PPIR are discussed.