Showing papers in "Electronics in 2020"
TL;DR: This paper reviews the latest studies that have employed deep learning to solve sentiment analysis problems, such as sentiment polarity, and models using term frequency-inverse document frequency and word embedding have been applied to a series of datasets.
Abstract: The study of public opinion can provide us with valuable information. The analysis of sentiment on social networks, such as Twitter or Facebook, has become a powerful means of learning about the users’ opinions and has a wide range of applications. However, the efficiency and accuracy of sentiment analysis are being hindered by the challenges encountered in natural language processing (NLP). In recent years, it has been demonstrated that deep learning models are a promising solution to the challenges of NLP. This paper reviews the latest studies that have employed deep learning to solve sentiment analysis problems, such as sentiment polarity. Models using term frequency-inverse document frequency (TF-IDF) and word embedding have been applied to a series of datasets. Finally, a comparative study has been conducted on the experimental results obtained for the different models and input features.
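The TF-IDF weighting that this review compares against word embeddings can be illustrated with a minimal sketch. The toy corpus below is invented for illustration, and real library implementations (e.g., scikit-learn's TfidfVectorizer) add smoothing and normalization on top of this basic formula.

```python
import math
from collections import Counter

def tfidf(corpus):
    """TF-IDF weights for a tokenized corpus:
    weight(t, d) = tf(t, d) * log(N / df(t)),
    where N is the corpus size and df(t) is the number of documents containing t."""
    n_docs = len(corpus)
    df = Counter()
    for doc in corpus:
        df.update(set(doc))  # count each term once per document
    return [{t: c * math.log(n_docs / df[t]) for t, c in Counter(doc).items()}
            for doc in corpus]

docs = [["great", "phone"], ["bad", "phone"], ["great", "battery"]]
w = tfidf(docs)
# "phone" occurs in 2 of 3 documents, so its weight in doc 0 is log(3/2)
```

Terms that appear in every document get weight zero, which is exactly why TF-IDF down-weights uninformative words before classification.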
TL;DR: Variants of the k-means algorithms including their recent developments are discussed, where their effectiveness is investigated based on the experimental analysis of a variety of datasets.
Abstract: The k-means clustering algorithm is considered one of the most powerful and popular data mining algorithms in the research community. However, despite its popularity, the algorithm has certain limitations, including problems associated with random initialization of the centroids, which leads to unexpected convergence. Additionally, the algorithm requires the number of clusters to be defined beforehand and struggles with non-spherical cluster shapes and outliers. A fundamental problem of the k-means algorithm is its inability to handle various data types. This paper provides a structured and synoptic overview of research conducted on the k-means algorithm to overcome such shortcomings. Variants of the k-means algorithm, including their recent developments, are discussed, and their effectiveness is investigated through experimental analysis on a variety of datasets. The detailed experimental analysis, along with a thorough comparison among different k-means clustering algorithms, differentiates our work from other existing survey papers. Furthermore, it outlines a clear and thorough understanding of the k-means algorithm along with its different research directions.
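The random-initialization weakness the survey discusses is easy to see in the plain Lloyd iteration itself. The sketch below is a minimal numpy implementation with randomly chosen initial centroids; the toy two-blob dataset is invented for illustration.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's k-means with random centroid initialization --
    the initialization scheme whose sensitivity the survey discusses."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assignment step: nearest centroid per point
        labels = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
        # update step: recompute each centroid (keep the old one if a cluster empties)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),   # blob near (0, 0)
               rng.normal(5.0, 0.1, (20, 2))])  # blob near (5, 5)
centroids, labels = kmeans(X, k=2)
```

Rerunning with a different `seed` can converge to a different local optimum, which is exactly the behavior that motivates the smarter-seeding variants covered in the survey.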
Abstract: This research is supported by the Employment Based Postgraduate Program of the Irish Research Council (IRC) Project ID: EBPPG/2015/185 and partially funded under the SFI Strategic Partnership Program by Science Foundation Ireland (SFI) and FotoNation Ltd. (Xperi Corporation) Project ID: 13/SPP/I2868 on Next Generation Imaging for Smartphone and Embedded Platforms.
TL;DR: A hybrid principal component analysis (PCA)-firefly based machine learning model to classify intrusion detection system (IDS) datasets and experimental results confirm the fact that the proposed model performs better than the existing machine learning models.
Abstract: The enormous popularity of the internet across all spheres of human life has introduced various risks of malicious attacks in the network. Malicious activities can proliferate effortlessly over the network, which has led to the emergence of intrusion detection systems. The patterns of the attacks are also dynamic, which necessitates efficient classification and prediction of cyber attacks. In this paper, we propose a hybrid principal component analysis (PCA)-firefly based machine learning model to classify intrusion detection system (IDS) datasets. The dataset used in the study is collected from Kaggle. The model first performs One-Hot encoding for the transformation of the IDS datasets. The hybrid PCA-firefly algorithm is then used for dimensionality reduction. The XGBoost algorithm is implemented on the reduced dataset for classification. A comprehensive evaluation of the model against state-of-the-art machine learning approaches is conducted to justify the superiority of our proposed approach. The experimental results confirm that the proposed model performs better than the existing machine learning models.
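The PCA stage of the described pipeline (One-Hot encoding, PCA-firefly reduction, XGBoost classification) can be sketched with numpy's SVD. This is a stand-in for the PCA step only; the firefly refinement and the XGBoost classifier are not shown, and the random data below is illustrative, not the Kaggle IDS dataset.

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its top principal components via SVD -- a minimal
    stand-in for the dimensionality-reduction stage of the paper's pipeline."""
    Xc = X - X.mean(axis=0)          # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # scores in the reduced space

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))  # e.g., 100 one-hot-encoded network records
Z = pca(X, 3)                  # reduced dataset fed to the classifier
```

The projected columns are uncorrelated by construction, which is what lets a downstream classifier work on far fewer features.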
TL;DR: The proposed model is evaluated against the prevalent machine learning models and the results justify the superiority of the proposed model in terms of Accuracy, Precision, Recall, Sensitivity and Specificity.
Abstract: Diabetic retinopathy is a major cause of vision loss and blindness affecting millions of people across the globe. Although there are established screening methods for detecting the disease, such as fluorescein angiography and optical coherence tomography, in the majority of cases patients remain ignorant of these tests and fail to undertake them at an appropriate time. Early detection of the disease plays an extremely important role in preventing the vision loss that results when diabetes mellitus remains untreated for a prolonged period. Various machine learning and deep learning approaches have been applied to diabetic retinopathy datasets for classification and prediction of the disease, but the majority of them have neglected data pre-processing and dimensionality reduction, leading to biased results. The dataset used in the present study is a diabetic retinopathy dataset collected from the UCI machine learning repository. First, the raw dataset is normalized using the StandardScaler technique, and then Principal Component Analysis (PCA) is used to extract the most significant features in the dataset. Further, the firefly algorithm is implemented for dimensionality reduction. This reduced dataset is fed into a Deep Neural Network model for classification. The results generated from the model are evaluated against the prevalent machine learning models, and the results justify the superiority of the proposed model in terms of Accuracy, Precision, Recall, Sensitivity and Specificity.
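The firefly algorithm used here (and in the preceding intrusion-detection paper) is a metaheuristic whose core is a movement rule: each candidate moves toward every brighter (fitter) candidate with an attractiveness that decays with distance. The sketch below shows one such step; the parameters `beta0`, `gamma`, `alpha` and the toy positions and brightness values are assumptions for illustration, not values from the paper.

```python
import numpy as np

def firefly_step(pos, brightness, beta0=1.0, gamma=1.0, alpha=0.05, rng=None):
    """One movement step of the firefly metaheuristic: fireflies move toward
    brighter ones with attractiveness beta0 * exp(-gamma * r^2), plus a small
    random perturbation for exploration."""
    rng = rng or np.random.default_rng(0)
    new = pos.copy()
    for i in range(len(pos)):
        for j in range(len(pos)):
            if brightness[j] > brightness[i]:
                r2 = np.sum((pos[j] - pos[i]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                new[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(pos.shape[1]) - 0.5)
    return new

pos = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
bright = np.array([0.1, 0.5, 0.9])  # hypothetical fitness values
moved = firefly_step(pos, bright)
```

The brightest firefly stays put (nothing outshines it), so the swarm gradually concentrates around the best candidate solutions.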
TL;DR: An IoT agriculture framework has been presented that contextualizes the representation of a wide range of current solutions in the field of agriculture, and open issues and challenges have been presented to provide researchers with promising future directions in the domain of IoT agriculture.
Abstract: The growing demand for food in terms of quality and quantity has increased the need for industrialization and intensification in the agriculture field. Internet of Things (IoT) is a highly promising technology that is offering many innovative solutions to modernize the agriculture sector. Research institutions and scientific groups are continuously working to deliver solutions and products using IoT to address different domains of agriculture. This paper presents a systematic literature review (SLR) by conducting a survey of IoT technologies and their current utilization in different application domains of the agriculture sector. The underlying SLR has been compiled by reviewing research articles published in well-reputed venues between 2006 and 2019. A total of 67 papers were carefully selected through a systematic process and classified accordingly. The primary objective of this systematic study is the collection of all relevant research on IoT agricultural applications, sensors/devices, communication protocols, and network types. Furthermore, it also discusses the main issues and challenges that are being investigated in the field of agriculture. Moreover, an IoT agriculture framework has been presented that contextualizes the representation of a wide range of current solutions in the field of agriculture. Similarly, country policies for IoT-based agriculture have also been presented. Lastly, open issues and challenges have been presented to provide researchers with promising future directions in the domain of IoT agriculture.
TL;DR: This work develops a DL-based intrusion model based on a Convolutional Neural Network (CNN), evaluates its performance through comparison with a Recurrent Neural Network (RNN), and suggests the optimal CNN design for better performance through numerous experiments.
Abstract: As cyberattacks become more intelligent, it is challenging to detect advanced attacks in a variety of fields including industry, national defense, and healthcare. Traditional intrusion detection systems are no longer enough to detect these advanced attacks with unexpected patterns. Attackers bypass known signatures and pretend to be normal users. Deep learning is an alternative to solving these issues. Deep Learning (DL)-based intrusion detection does not require a lot of attack signatures or the list of normal behaviors to generate detection rules. DL defines intrusion features by itself through training on empirical data. We develop a DL-based intrusion model especially focusing on denial of service (DoS) attacks. For the intrusion dataset, we use the KDD CUP 1999 dataset (KDD), the most widely used dataset for the evaluation of intrusion detection systems (IDS). KDD consists of four attack categories: DoS, user to root (U2R), remote to local (R2L), and probing. Numerous KDD studies have employed machine learning to classify the dataset into the four categories, or into two categories such as attack and benign. Rather than focusing on the broad categories, we focus on the various attacks belonging to the same category. Unlike the other categories of KDD, the DoS category has enough samples for training each attack. In addition to KDD, we use CSE-CIC-IDS2018, the most up-to-date IDS dataset. CSE-CIC-IDS2018 consists of more advanced DoS attacks than KDD. In this work, we focus on the DoS category of both datasets and develop a DL model for DoS detection. We develop our model based on a Convolutional Neural Network (CNN) and evaluate its performance through comparison with a Recurrent Neural Network (RNN). Furthermore, we suggest the optimal CNN design for better performance through numerous experiments.
TL;DR: The proposed method achieves better performance than the YOLOv3 method in terms of recall, mean average precision, and F1-score and has faster convergence speed for initializing the width and height of the predicted bounding boxes.
Abstract: The ‘You Only Look Once’ v3 (YOLOv3) method is among the most widely used deep learning-based object detection methods. It uses the k-means cluster method to estimate the initial width and height of the predicted bounding boxes. With this method, the estimated width and height are sensitive to the initial cluster centers, and the processing of large-scale datasets is time-consuming. In order to address these problems, a new cluster method for estimating the initial width and height of the predicted bounding boxes has been developed. Firstly, it randomly selects a couple of width and height values as one initial cluster center separate from the width and height of the ground truth boxes. Secondly, it constructs Markov chains based on the selected initial cluster and uses the final points of every Markov chain as the other initial centers. In the construction of Markov chains, the intersection-over-union method is used to compute the distance between the selected initial clusters and each candidate point, instead of the square root method. Finally, this method can be used to continually update the cluster center with each new set of width and height values, which are only a part of the data selected from the datasets. Our simulation results show that the new method has faster convergence speed for initializing the width and height of the predicted bounding boxes and that it can select more representative initial widths and heights of the predicted bounding boxes. Our proposed method achieves better performance than the YOLOv3 method in terms of recall, mean average precision, and F1-score.
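The intersection-over-union distance that the paper substitutes for the Euclidean (square-root) metric follows the standard YOLO anchor-clustering convention: boxes are compared by width and height only, as if aligned at a common corner. A minimal sketch, with illustrative box sizes:

```python
import numpy as np

def iou_wh(box, anchors):
    """IoU between one (w, h) box and an array of (w, h) anchors, with all
    boxes assumed to share the same top-left corner (YOLO anchor convention)."""
    inter = np.minimum(box[0], anchors[:, 0]) * np.minimum(box[1], anchors[:, 1])
    union = box[0] * box[1] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def iou_distance(box, anchors):
    """Cluster distance used in place of the Euclidean metric: 1 - IoU."""
    return 1.0 - iou_wh(box, anchors)

# a 10x20 ground-truth box against two candidate cluster centers
d = iou_distance(np.array([10.0, 20.0]), np.array([[10.0, 20.0], [20.0, 10.0]]))
```

An identical box has distance 0, while a box of the same area but swapped aspect ratio is penalized, which is why this metric yields more representative anchor widths and heights than raw Euclidean distance.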
TL;DR: In this paper, a 4-port multiple-input-multiple-output (MIMO) antenna array operating in the mm-wave band for 5G applications is presented, where an identical two-element array excited by the feed network based on a T-junction power combiner/divider is introduced, while the ground plane is made defected with rectangular, circular and a zigzag-shaped slotted structure to enhance the radiation characteristics of the antenna.
Abstract: We present a 4-port Multiple-Input-Multiple-Output (MIMO) antenna array operating in the mm-wave band for 5G applications. An identical two-element array excited by the feed network based on a T-junction power combiner/divider is introduced in the reported paper. The array elements are rectangular-shaped slotted patch antennas, while the ground plane is made defected with rectangular, circular, and a zigzag-shaped slotted structure to enhance the radiation characteristics of the antenna. To validate the performance, the MIMO structure is fabricated and measured. The simulated and measured results are in good coherence. The proposed structure can operate in a 25.5–29.6 GHz frequency band supporting the impending mm-wave 5G applications. Moreover, the peak gain attained for the operating frequency band is 8.3 dBi. Additionally, to obtain high isolation between antenna elements, the polarization diversity is employed between the adjacent radiators, resulting in a low Envelope Correlation Coefficient (ECC). Other MIMO performance metrics such as the Channel Capacity Loss (CCL), Mean Effective Gain (MEG), and Diversity gain (DG) of the proposed structure are analyzed, and the results indicate the suitability of the design as a potential contender for imminent mm-wave 5G MIMO applications.
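The low ECC reported for the MIMO design can be related to S-parameters through the standard two-port approximation (valid under a lossless-antenna assumption). This is a sketch of that textbook formula; the numeric inputs below are illustrative, not measured data from the paper.

```python
import numpy as np

def ecc_from_s(s11, s21, s12, s22):
    """Envelope correlation coefficient of a two-port MIMO antenna from
    S-parameters, using the standard lossless-antenna approximation."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s22) ** 2 - abs(s12) ** 2)
    return num / den

# well-matched ports (|S11|, |S22| small) with good isolation (|S12|, |S21| small)
ecc = ecc_from_s(0.1, 0.05, 0.05, 0.1)
```

Small reflection and coupling coefficients drive the numerator toward zero, which is how the polarization diversity between adjacent radiators yields the low ECC the paper reports.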
TL;DR: The history of face recognition technology, the current state-of-the-art methodologies, and future directions are presented, specifically on the most recent databases, 2D and 3D face recognition methods.
Abstract: Face recognition is one of the most active research fields of computer vision and pattern recognition, with many practical and commercial applications including identification, access control, forensics, and human-computer interactions. However, identifying a face in a crowd raises serious questions about individual freedoms and poses ethical issues. Significant methods, algorithms, approaches, and databases have been proposed over recent years to study constrained and unconstrained face recognition. 2D approaches reached some degree of maturity and reported very high rates of recognition. This performance is achieved in controlled environments where the acquisition parameters are controlled, such as lighting, angle of view, and camera–subject distance. However, if the ambient conditions (e.g., lighting) or the facial appearance (e.g., pose or facial expression) change, this performance will degrade dramatically. 3D approaches were proposed as an alternative solution to the problems mentioned above. The advantage of 3D data lies in its invariance to pose and lighting conditions, which has enhanced the efficiency of recognition systems. 3D data, however, is somewhat sensitive to changes in facial expressions. This review presents the history of face recognition technology, the current state-of-the-art methodologies, and future directions. We specifically concentrate on the most recent databases and on 2D and 3D face recognition methods. In addition, we pay particular attention to deep learning approaches, as they represent the current state of the art in this field. Open issues are examined and potential directions for research in facial recognition are proposed in order to provide the reader with a point of reference for topics that deserve consideration.
TL;DR: Compared to other technologies, RRAM is the most promising approach, applicable as high-density memory, storage-class memory, and neuromorphic computing, as well as in hardware security.
Abstract: Emerging nonvolatile memory (eNVM) devices are pushing the limits of emerging applications beyond the scope of silicon-based complementary metal oxide semiconductors (CMOS). Among several alternatives, phase change memory, spin-transfer torque random access memory, and resistive random-access memory (RRAM) are major emerging technologies. This review explains all varieties of prototype and eNVM devices, their challenges, and their applications. A performance comparison shows that it is difficult to achieve a “universal memory” which can fulfill all requirements. Compared to other emerging alternative devices, RRAM technology is showing promise with its highly scalable, cost-effective, simple two-terminal structure, low-voltage and ultra-low-power operation capabilities, high-speed switching with high endurance, long retention, and the possibility of three-dimensional integration for high-density applications. More precisely, this review explains the journey and device engineering of RRAM with various architectures. The challenges in different prototype and eNVM devices are discussed, along with conventional and novel application areas. Compared to other technologies, RRAM is the most promising approach, applicable as high-density memory, storage-class memory, and neuromorphic computing, as well as in hardware security. In the post-CMOS era, a more efficient, intelligent, and secure computing system can be designed with the help of eNVM devices.
TL;DR: The study reveals that electro-mechanical scanning is the most prominent technology in use today and solid-state LiDAR systems are expected to fill in the gap in ADAS applications despite the low technology readiness in comparison to MEMS scanners.
Abstract: In recent years, light detection and ranging (LiDAR) technology has gained huge popularity in various applications such as navigation, robotics, remote sensing, and advanced driving assistance systems (ADAS). This popularity is mainly due to the improvements in LiDAR performance in terms of range detection, accuracy, and power consumption, as well as physical features such as dimension and weight. Although a number of studies on LiDAR technology have been published earlier, not much has been reported on state-of-the-art LiDAR scanning mechanisms. The aim of this article is to review the scanning mechanisms employed in LiDAR technology, from past research works to current commercial products. The review highlights four commonly used mechanisms in LiDAR systems: opto-mechanical, electromechanical, micro-electromechanical systems (MEMS), and solid-state scanning. The study reveals that electro-mechanical scanning is the most prominent technology in use today. The commercially available 1D time-of-flight (TOF) LiDAR instrument is currently the most attractive option for conversion from a 1D to a 3D LiDAR system, provided that a low scanning rate is not an issue. As for applications with low size, weight, and power (SWaP) requirements, MEMS scanning is found to be the better alternative. MEMS scanning is by far the more mature technology compared to solid-state scanning and is currently given great emphasis to increase its robustness for fulfilling the requirements of ADAS applications. Finally, solid-state LiDAR systems are expected to fill the gap in ADAS applications despite their low technology readiness in comparison to MEMS scanners. However, since solid-state scanning is believed to have superior robustness, field of view (FOV), and scanning rate potential, great efforts are being made by both academia and industry to further develop this technology.
TL;DR: This paper systematically introduces the existing state-of-the-art approaches and a variety of applications that benefit from these methods in knowledge graph embedding and introduces the advanced models that utilize additional semantic information to improve the performance of the original methods.
Abstract: A knowledge graph (KG), also known as a knowledge base, is a particular kind of network structure in which nodes represent entities and edges represent relations. However, with the explosion of network volume, the problem of data sparsity, which makes large-scale KG systems difficult to compute with and manage, has become more significant. To alleviate the issue, knowledge graph embedding is proposed to embed the entities and relations of a KG into a low-dimensional, dense, and continuous feature space, and to endow the resulting model with abilities of knowledge inference and fusion. In recent years, many researchers have paid much attention to this approach, and in this paper we systematically introduce the existing state-of-the-art approaches and a variety of applications that benefit from these methods. In addition, we discuss future prospects for the development of techniques and application trends. Specifically, we first introduce the embedding models that only leverage the information of observed triplets in the KG. We illustrate the overall framework and specific ideas and compare the advantages and disadvantages of such approaches. Next, we introduce the advanced models that utilize additional semantic information to improve the performance of the original methods. We divide the additional information into two categories: textual descriptions and relation paths. The extension approaches in each category are described, following the same classification criteria as those defined for the triplet fact-based models. We then describe two experiments for comparing the performance of the listed methods and mention some broader domain tasks such as question answering, recommender systems, and so forth. Finally, we collect several hurdles that need to be overcome and provide a few future research directions for knowledge graph embedding.
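The triplet-fact-based models the survey opens with can be illustrated by TransE, the canonical translation-based scorer: a triplet (h, r, t) is plausible when the tail embedding lies near h + r. The 3-d embeddings below are hand-picked toy values, not trained vectors.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility score ||h + r - t||: lower means more plausible,
    since the relation is modeled as a translation from head to tail."""
    return np.linalg.norm(h + r - t)

# toy 3-d embeddings chosen so that paris + capital_of == france
paris      = np.array([1.0, 0.0, 0.0])
france     = np.array([1.0, 1.0, 0.0])
capital_of = np.array([0.0, 1.0, 0.0])

good = transe_score(paris, capital_of, france)  # plausible triplet
bad  = transe_score(france, capital_of, paris)  # reversed, implausible
```

Training pushes scores of observed triplets below those of corrupted ones, which is what gives the embedding space its inference ability.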
TL;DR: A novel blockchain and machine learning-based drug supply chain management and recommendation system (DSCMR) that is deployed using Hyperledger Fabric and trained on the well-known, publicly available drug reviews dataset provided by UCI, an open-source machine learning repository.
Abstract: Over the last decade, pharmaceutical companies have been facing difficulties in tracking their products during the supply chain process, allowing counterfeiters to introduce fake medicines into the market. Counterfeit drugs are regarded as a very big challenge for the pharmaceutical industry worldwide. As indicated by the statistics, a yearly business loss of around $200 billion is reported by US pharmaceutical companies due to these counterfeit drugs. These drugs may not help patients recover from the disease and can have many other dangerous side effects. According to a World Health Organization (WHO) survey report, in under-developed countries every tenth drug used by consumers is counterfeit and of low quality. Hence, a system that can trace and track drug delivery at every phase is needed to solve the counterfeiting problem. The blockchain has the full potential to handle and track the supply chain process very efficiently. In this paper, we have proposed and implemented a novel blockchain and machine learning-based drug supply chain management and recommendation system (DSCMR). Our proposed system consists of two main modules: blockchain-based drug supply chain management and a machine learning-based drug recommendation system for consumers. In the first module, the drug supply chain management system is deployed using Hyperledger Fabric, which is capable of continuously monitoring and tracking the drug delivery process in the smart pharmaceutical industry. In the second module, N-gram and LightGBM models are used to recommend the top-rated or best medicines to the customers of the pharmaceutical industry. These models were trained on the well-known, publicly available drug reviews dataset provided by UCI, an open-source machine learning repository. Moreover, the machine learning module is integrated with the blockchain system with the help of a REST API.
Finally, we also perform several tests to check the efficiency and usability of our proposed system.
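The N-gram features used by the recommendation module can be illustrated with a minimal sketch. The tokenization (simple whitespace split) and the sample review are assumptions for illustration; the paper's exact feature pipeline is not specified in the abstract.

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list, the basic unit of
    N-gram text features for review classification."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

review = "this drug worked very well".split()
bigrams = ngrams(review, 2)
# [('this', 'drug'), ('drug', 'worked'), ('worked', 'very'), ('very', 'well')]
```

Counting such bigrams across the review corpus yields the sparse features that a gradient-boosting model like LightGBM can rank medicines with.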
TL;DR: The experiment indicates that deep learning algorithms are suitable for intrusion detection in IoT network environment.
Abstract: With the popularity of Internet of Things (IoT) technology, the security of the IoT network has become an important issue. Traditional intrusion detection systems have their limitations when applied to the IoT network due to resource constraints and complexity. This research focuses on the design, implementation and testing of an intrusion detection system which uses a hybrid placement strategy based on a multi-agent system, blockchain and deep learning algorithms. The system consists of the following modules: data collection, data management, analysis, and response. The Network Security Laboratory–Knowledge Discovery and Data Mining (NSL-KDD) dataset is used to test the system. The results demonstrate the efficiency of deep learning algorithms when detecting attacks from the transport layer. The experiment indicates that deep learning algorithms are suitable for intrusion detection in the IoT network environment.
TL;DR: This study aims to present a comprehensive review of IoT systems-related technologies, protocols, architecture and threats emerging from compromised IoT devices along with providing an overview of intrusion detection models.
Abstract: The Internet of Things (IoT) is poised to impact several aspects of our lives with its fast proliferation in many areas such as wearable devices, smart sensors and home appliances. IoT devices are characterized by their connectivity, pervasiveness and limited processing capability. The number of IoT devices in the world is increasing rapidly, and it is expected that there will be 50 billion devices connected to the Internet by the end of the year 2020. This explosion of IoT devices, whose numbers can be increased far more easily than those of desktop computers, has led to a spike in IoT-based cyber-attack incidents. To alleviate this challenge, there is a requirement to develop new techniques for detecting attacks initiated from compromised IoT devices. Machine and deep learning techniques are in this context the most appropriate detective control approach against attacks generated from IoT devices. This study aims to present a comprehensive review of IoT systems-related technologies, protocols, architecture and threats emerging from compromised IoT devices, along with providing an overview of intrusion detection models. This work also covers the analysis of various machine learning and deep learning-based techniques suitable for detecting cyber-attacks related to IoT systems.
TL;DR: In this paper, a review of the recent advances in the anode and cathode materials for the next-generation Li-ion batteries is presented, focusing on the electrode materials, such as carbon-based, semiconductor/metal, metal oxides/nitrides/phosphides/sulfides.
Abstract: In the context of the constant growth in the utilization of Li-ion batteries, there has been a great surge in the quest for electrode materials, while predominant usage leads to large numbers of Li-ion batteries being retired. This review focuses on the recent advances in anode and cathode materials for next-generation Li-ion batteries. To achieve the higher power and energy demands of Li-ion batteries in future energy storage applications, the selection of the electrode materials plays a crucial role. The electrode materials, such as carbon-based, semiconductor/metal, and metal oxides/nitrides/phosphides/sulfides, determine appreciable properties of Li-ion batteries such as greater specific surface area, a minimal distance of diffusion, and higher conductivity. Various classifications of the anode materials, such as intercalation/de-intercalation, alloy/de-alloy, and various conversion materials, are illustrated lucidly. Further, cathode materials such as nickel-rich LiNixCoyMnzO2 (NCM) are discussed. NCM members such as NCM333 and NCM523, which enabled the advance to NCM622 and NCM811, are reported. Nanostructured materials have bridged the gap in the realization of next-generation Li-ion batteries. The synthesis, performance, and reaction mechanisms of Li-ion battery electrode nanostructures were considered with great care. The serious effects of Li-ion battery disposal need to be cut significantly to reduce the detrimental effect on the environment. Hence, the recycling of spent Li-ion batteries has gained much attention in recent years. Various recycling techniques and their effect on the electroactive materials are illustrated. The key areas covered in this review are anode and cathode materials and recent advances, along with their recycling techniques. In light of the crucial points covered in this review, it constitutes a suitable reference for engineers, researchers, and designers in energy storage applications.
TL;DR: Studies show that the HIDS achieves a higher detection rate and lower false-alarm rates compared to SIDS and AIDS.
Abstract: Cyberattacks are becoming increasingly sophisticated, necessitating efficient intrusion detection mechanisms to monitor computer resources and generate reports on anomalous or suspicious activities. Many Intrusion Detection Systems (IDSs) use a single classifier for identifying intrusions. Single-classifier IDSs are unable to achieve high accuracy and low false-alarm rates due to the polymorphic, metamorphic, and zero-day behaviors of malware. In this paper, a Hybrid IDS (HIDS) is proposed by combining the C5 decision tree classifier and a One-Class Support Vector Machine (OC-SVM). The HIDS combines the strengths of a Signature-based Intrusion Detection System (SIDS) and an Anomaly-based Intrusion Detection System (AIDS). The SIDS was developed based on the C5.0 decision tree classifier, and the AIDS was developed based on the one-class Support Vector Machine (SVM). This framework aims to identify both well-known intrusions and zero-day attacks with high detection accuracy and low false-alarm rates. The proposed HIDS is evaluated using the benchmark datasets, namely, the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD) and Australian Defence Force Academy (ADFA) datasets. Studies show that the performance of the HIDS is enhanced compared to SIDS and AIDS in terms of detection rate and false-alarm rate.
TL;DR: This research proposes lightweight deep learning models that classify the erythrocytes into three classes: circular (normal), elongated (sickle cells), and other blood content, which are different in the number of layers and learnable filters.
Abstract: Sickle cell anemia, which is also called sickle cell disease (SCD), is a hematological disorder that causes occlusion in blood vessels, leading to painful episodes and even death. The key function of red blood cells (erythrocytes) is to supply all the parts of the human body with oxygen. Red blood cells (RBCs) form a crescent or sickle shape when sickle cell anemia affects them. This abnormal shape makes it difficult for sickle cells to move through the bloodstream, hence decreasing the oxygen flow. The precise classification of RBCs is the first step toward accurate diagnosis, which aids in evaluating the danger level of sickle cell anemia. Manual classification of erythrocytes requires immense time, and errors may be made throughout the classification stage. Traditional computer-aided techniques, which have been employed for erythrocyte classification, are based on handcrafted features, and their performance relies on the selected features. They are also very sensitive to different sizes, colors, and complex shapes. However, microscopy images of erythrocytes are very complex in shape, with different sizes. To this end, this research proposes lightweight deep learning models that classify the erythrocytes into three classes: circular (normal), elongated (sickle cells), and other blood content. These models differ in the number of layers and learnable filters. The available datasets of red blood cells with sickle cell disease are very small for training deep learning models. Therefore, addressing the lack of training data is the main aim of this paper. To tackle this issue and optimize the performance, the transfer learning technique is utilized. Transfer learning does not significantly affect performance on medical image tasks when the source domain is completely different from the target domain. In some cases, it can degrade the performance.
Hence, we have applied same-domain transfer learning, unlike other methods that used the ImageNet dataset for transfer learning. To minimize the overfitting effect, we have utilized several data augmentation techniques. Our model obtained state-of-the-art performance and outperformed the latest methods, achieving an accuracy of 99.54% with our model alone and 99.98% with our model plus a multiclass SVM classifier on the erythrocytesIDB dataset, and 98.87% on the collected dataset.
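The data augmentation step can be illustrated with a minimal sketch: the snippet below applies simple geometric transforms (flips and 90-degree rotations) to a toy image array. The paper's exact augmentation pipeline is not specified here, so this transform set is only an assumed, representative example.

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return simple augmented variants of a 2-D grayscale image:
    the original, horizontal/vertical flips, and 90-degree rotations."""
    variants = [image]
    variants.append(np.fliplr(image))   # horizontal flip
    variants.append(np.flipud(image))   # vertical flip
    for k in (1, 2, 3):                 # 90, 180, 270 degree rotations
        variants.append(np.rot90(image, k))
    return variants

# A toy 4x4 "cell image" stands in for a real microscopy patch.
img = np.arange(16, dtype=float).reshape(4, 4)
augmented = augment(img)
print(len(augmented))  # 6 variants per input image
```

Each input image yields six training samples, which is one common way to stretch a small medical imaging dataset before resorting to heavier synthetic augmentation.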
TL;DR: The paper analyses the directions in which decentralization and the use of smart contracts will take the IoMT in e-healthcare, proposes a novel architecture, and outlines the advantages, challenges, and future trends related to the integration of all three.
Abstract: The concept of Blockchain has penetrated a wide range of scientific areas, and its use is expected to rise exponentially in the near future. Executing short scripts of predefined code, called smart contracts, on a Blockchain can eliminate the need for intermediaries and can also increase the scale and speed at which contracts are executed. In this paper, we discuss the concept of Blockchain along with smart contracts and their applicability to the Internet of Medical Things (IoMT) in the e-healthcare domain. The paper analyses the directions in which decentralization and the use of smart contracts will take the IoMT in e-healthcare, proposes a novel architecture, and also outlines the advantages, challenges, and future trends related to the integration of all three. The proposed architecture shows its effectiveness in terms of average packet delivery ratio, average latency, and average energy efficiency when compared with traditional approaches.
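As a concrete illustration of "short scripts of predefined code" executing without an intermediary, the sketch below models a toy payment contract whose outcome is recorded in a hash-linked block. The names (`payment_contract`, the patient/clinic scenario) are hypothetical and drastically simplified relative to any real Blockchain platform; this is only a sketch of the concept.

```python
import hashlib
import json

class Block:
    """A toy ledger entry: its hash covers its data plus the previous hash."""
    def __init__(self, data, prev_hash):
        self.data = data
        self.prev_hash = prev_hash
        self.hash = hashlib.sha256(
            (json.dumps(data, sort_keys=True) + prev_hash).encode()
        ).hexdigest()

def payment_contract(state, payer, payee, amount):
    """A 'smart contract': predefined code that transfers funds
    automatically once its condition (sufficient balance) is met."""
    if state.get(payer, 0) >= amount:
        state[payer] -= amount
        state[payee] = state.get(payee, 0) + amount
        return True
    return False

# Execute the contract and record the outcome on a toy chain.
state = {"patient": 100, "clinic": 0}
executed = payment_contract(state, "patient", "clinic", 30)
genesis = Block({"event": "genesis"}, "0")
block = Block({"executed": executed, "state": state}, genesis.hash)
print(state, executed)
```

Because the contract's logic and outcome are hashed into the chain, neither party needs a third party to attest that the transfer happened as coded.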
TL;DR: A hybrid model of parallel convolutional layers and residual links is utilized to classify hematoxylin–eosin-stained breast biopsy images into four classes: invasive carcinoma, in-situ carcinoma, benign tumor and normal tissue.
Abstract: Breast cancer is a significant factor in female mortality. An early cancer diagnosis leads to a reduction in the breast cancer death rate. With the help of a computer-aided diagnosis system, the efficiency of cancer diagnosis increases and its cost is reduced. Traditional breast cancer classification techniques are based on handcrafted features, and their performance relies upon the chosen features. They are also very sensitive to different sizes and complex shapes, yet histopathological breast cancer images are very complex in shape. Currently, deep learning models have become an alternative solution for diagnosis and have overcome the drawbacks of classical classification techniques. Although deep learning has performed well in various tasks of computer vision and pattern recognition, it still has some challenges. One of the main challenges is the lack of training data. To address this challenge and optimize the performance, we have utilized transfer learning, in which a deep learning model is first trained on one task and then fine-tuned for another. We have employed transfer learning in two ways: training our proposed model first on a same-domain dataset and then on the target dataset, and training our model on a different-domain dataset and then on the target dataset. We have empirically shown that same-domain transfer learning optimized the performance. Our hybrid model of parallel convolutional layers and residual links is utilized to classify hematoxylin–eosin-stained breast biopsy images into four classes: invasive carcinoma, in-situ carcinoma, benign tumor and normal tissue. To reduce the effect of overfitting, we have augmented the images with different image processing techniques.
The proposed model achieved state-of-the-art performance, and it outperformed the latest methods by achieving a patch-wise classification accuracy of 90.5%, and an image-wise classification accuracy of 97.4% on the validation set. Moreover, we have achieved an image-wise classification accuracy of 96.1% on the test set of the microscopy ICIAR-2018 dataset.
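The two-stage idea above, pre-train on a related dataset and then fine-tune on the scarce target dataset, can be sketched at toy scale with a plain NumPy logistic regression. This only illustrates warm-starting from source-task weights; it is not the authors' convolutional model, and the synthetic data is an assumption for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w, steps, lr=0.5):
    """Plain gradient descent on the logistic loss."""
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def logloss(X, y, w):
    p = np.clip(sigmoid(X @ w), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Source task: plentiful data; target task: scarce data, similar decision rule.
true_w = np.array([2.0, -1.0])
Xs = rng.normal(size=(500, 2)); ys = (Xs @ true_w > 0).astype(float)
Xt = rng.normal(size=(20, 2));  yt = (Xt @ true_w > 0).astype(float)

w_src = train(Xs, ys, np.zeros(2), steps=200)   # stage 1: pre-train on source
loss_before = logloss(Xt, yt, w_src)
w_ft = train(Xt, yt, w_src.copy(), steps=50)    # stage 2: fine-tune on target
loss_after = logloss(Xt, yt, w_ft)
print(round(loss_before, 3), round(loss_after, 3))
```

The fine-tuned weights start near a good solution because the source and target tasks share structure, which is exactly the premise behind same-domain transfer learning.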
TL;DR: A comprehensive guideline for selecting an adequate outlier detection model for sensors in the IoT context for various applications is discussed, and a comprehensive review of the detection of sensor faults, anomalies, and outliers in the Internet of Things, along with the associated challenges, is presented.
Abstract: The Internet of Things (IoT) has gained significant recognition as a novel sensing paradigm for interacting with the physical world in this Industry 4.0 era. The IoT is being used in many diverse applications that are part of our lives, and it is growing to become the global digital nervous system. It is quite evident that in the near future hundreds of millions of individuals and businesses will deploy billions of smart sensors and advanced communication technologies, and these things will expand the boundaries of current systems. This will result in a potential change in the way we work, learn, innovate, live and entertain. The heterogeneous smart sensors within the Internet of Things are indispensable parts, which capture the raw data from the physical world by being the first port of contact. Often the sensors within the IoT are deployed or installed in harsh environments. This inevitably means that the sensors are prone to failure, malfunction, rapid attrition, malicious attacks, theft and tampering. All of these conditions cause the sensors within the IoT to produce unusual and erroneous readings, often known as outliers. Much of the current research has developed sensor outlier and fault detection models exclusively for Wireless Sensor Networks (WSNs), and adequate research has not been done so far in the context of the IoT. The operational framework of a wireless sensor network differs greatly from that of the IoT, so many of the existing models developed for WSNs cannot be used in the IoT for detecting outliers and faults. Sensor fault and outlier detection is crucial in the IoT to detect readings with a high probability of error or data corruption, thereby ensuring the quality of the data collected by sensors.
The data collected by sensors are initially pre-processed to be transformed into information, and when artificial intelligence (AI) and machine learning (ML) models are further used by the IoT, the information is further processed into applications and processes. Any faulty, erroneous, or corrupted sensor readings corrupt the trained models, which thereby produce abnormal processes or outliers that are significantly distinct from the normal behavioural processes of a system. In this paper, we present a comprehensive review of the detection of sensor faults, anomalies, and outliers in the Internet of Things, along with the associated challenges. A comprehensive guideline for selecting an adequate outlier detection model for sensors in the IoT context for various applications is discussed.
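As one concrete example of the lightweight detection rules such a guideline might cover, the sketch below flags outliers in a sensor stream using a median-absolute-deviation (MAD) test, which stays robust even when the faulty readings themselves would distort a mean/standard-deviation threshold. The rule and threshold are illustrative choices, not ones prescribed by the paper.

```python
import numpy as np

def mad_outliers(readings, threshold=3.5):
    """Flag readings whose modified z-score (based on the median absolute
    deviation) exceeds the threshold. Unlike a mean/std rule, the median
    statistics are barely affected by the outliers being hunted."""
    x = np.asarray(readings, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:                      # constant stream: nothing to flag
        return np.zeros(len(x), dtype=bool)
    modified_z = 0.6745 * (x - med) / mad
    return np.abs(modified_z) > threshold

# A temperature stream with one stuck/faulty reading at index 4.
stream = [21.0, 21.2, 20.9, 21.1, 85.0, 21.0, 20.8]
flags = mad_outliers(stream)
print(flags)  # only the 85.0 reading is flagged
```

Running such a rule at the gateway, before readings feed any AI/ML model, is one simple way to keep a single stuck sensor from corrupting downstream training.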
TL;DR: Different types of effective implementation techniques for reconfigurable antennas used in various wireless communication systems such as satellite, multiple-input multiple-output (MIMO), mobile terminals and cognitive radio communications are investigated.
Abstract: Due to the fast development of wireless communication technology, reconfigurable antennas with multimode and cognitive radio operation in modern high-data-rate wireless applications have attracted close attention from researchers. Reconfigurable antennas can provide various functions in operating frequency, beam pattern, polarization, etc. Dynamic tuning can be achieved by manipulating a certain switching mechanism through controlling electronic, mechanical, physical or optical switches. Among them, electronic switches are the most popular for constituting reconfigurable antennas due to their efficiency, reliability and ease of integration with microwave circuitry. In this paper, we review different implementation techniques for reconfigurable antennas. Different types of effective implementation techniques used in various wireless communication systems, such as satellite, multiple-input multiple-output (MIMO), mobile terminal and cognitive radio communications, have been investigated. Characteristics and fundamental properties of reconfigurable antennas are also examined.
TL;DR: This paper provides a comprehensive review of the state-of-the-art artificial intelligence techniques to support various applications in a distributed smart grid and discusses how artificial intelligence and market liberalization can potentially help to increase the overall social welfare of the grid.
Abstract: The power system worldwide is going through a revolutionary transformation due to integration with various distributed components, including advanced metering infrastructure, communication infrastructure, distributed energy resources, and electric vehicles, to improve the reliability, energy efficiency, management, and security of the future power system. These components are becoming more tightly integrated with the IoT. They are expected to generate a vast amount of data to support various applications in the smart grid, such as distributed energy management, generation forecasting, grid health monitoring, fault detection, home energy management, etc. With these new components and information, artificial intelligence techniques can be applied to automate and further improve the performance of the smart grid. In this paper, we provide a comprehensive review of the state-of-the-art artificial intelligence techniques to support various applications in a distributed smart grid. In particular, we discuss how artificial intelligence techniques are applied to support the integration of renewable energy resources, the integration of energy storage systems, demand response, management of the grid and home energy, and security. As the smart grid involves various actors, such as energy producers, markets, and consumers, we also discuss how artificial intelligence and market liberalization can potentially help to increase the overall social welfare of the grid. Finally, we outline further research challenges for large-scale integration and orchestration of automated distributed devices to realize a truly smart grid.
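Generation and load forecasting, one of the applications listed above, can be illustrated with a minimal autoregressive baseline fitted by ordinary least squares. Real smart-grid forecasters use far richer models and features, so treat this as a sketch on synthetic data with an assumed daily cycle.

```python
import numpy as np

def fit_ar(series, lags):
    """Least-squares autoregressive model: predict y[t] from the
    previous `lags` values plus an intercept."""
    X = np.array([series[t - lags:t] for t in range(lags, len(series))])
    y = series[lags:]
    X1 = np.column_stack([X, np.ones(len(X))])   # append intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

def predict_next(series, coef, lags):
    """One-step-ahead forecast from the last `lags` observations."""
    return np.append(series[-lags:], 1.0) @ coef

# Synthetic hourly load with a clean 24-hour cycle over two weeks.
t = np.arange(24 * 14, dtype=float)
load = 100 + 30 * np.sin(2 * np.pi * t / 24)
coef = fit_ar(load, lags=24)
forecast = predict_next(load, coef, lags=24)
print(round(float(forecast), 4))
```

With 24 lags the model can represent the daily periodicity exactly, so on this noiseless series the one-step forecast recovers the true next value; on real meter data the same pipeline would leave a residual to be modeled further.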
TL;DR: The objective of this paper is to survey the research challenges associated with multi-tasking within the deep reinforcement learning arena and present the state-of-the-art approaches by comparing and contrasting recent solutions, namely DISTRAL, IMPALA and PopArt, that aim to address core challenges such as scalability, distraction dilemma, partial observability, catastrophic forgetting and negative knowledge transfer.
Abstract: Driven by recent technological advancements within the field of artificial intelligence research, deep learning has emerged as a promising representation learning technique across all of the machine learning classes, especially within the reinforcement learning arena. This new direction has given rise to the evolution of a new technological domain named deep reinforcement learning, which combines the representational learning power of deep learning with existing reinforcement learning methods. Undoubtedly, the inception of deep reinforcement learning has played a vital role in optimizing the performance of reinforcement learning-based intelligent agents with model-free approaches. Although these methods could improve the performance of agents to a great extent, they were mainly limited to systems that adopted reinforcement learning algorithms focused on learning a single task. At the same time, these approaches were found to be relatively data-inefficient, particularly when reinforcement learning agents needed to interact with more complex and rich data environments. This is primarily due to the limited applicability of deep reinforcement learning algorithms to many scenarios across related tasks from the same environment. The objective of this paper is to survey the research challenges associated with multi-tasking within the deep reinforcement learning arena and present the state-of-the-art approaches by comparing and contrasting recent solutions, namely DISTRAL (DIStill & TRAnsfer Learning), IMPALA (Importance Weighted Actor-Learner Architecture) and PopArt, that aim to address core challenges such as scalability, distraction dilemma, partial observability, catastrophic forgetting and negative knowledge transfer.
TL;DR: This research is the first to implement stacked autoencoders by using DAEs and AEs for feature learning in DL, and it demonstrates that the proposed DL model can extract high-level features not only from the training data but also from unseen data.
Abstract: The electrocardiogram (ECG) is a widely used, noninvasive test for analyzing arrhythmia. However, the ECG signal is prone to contamination by different kinds of noise. Such noise may cause deformation of the ECG heartbeat waveform, leading cardiologists to mislabel or misinterpret heartbeats due to varying types of artifacts and interference. To address this problem, some previous studies propose a computerized technique based on machine learning (ML) to distinguish between normal and abnormal heartbeats. Unfortunately, ML works on a handcrafted, feature-based approach and lacks feature representation. To overcome such drawbacks, deep learning (DL) is proposed in the pre-training and fine-tuning phases to produce an automated feature representation for multi-class classification of arrhythmia conditions. In the pre-training phase, stacked denoising autoencoders (DAEs) and autoencoders (AEs) are used for feature learning; in the fine-tuning phase, deep neural networks (DNNs) are implemented as a classifier. To the best of our knowledge, this research is the first to implement stacked autoencoders by using DAEs and AEs for feature learning in DL. Experiments are conducted on PhysioNet's well-known MIT-BIH Arrhythmia Database, as well as the MIT-BIH Noise Stress Test Database (NSTDB). Only four records are used from the NSTDB dataset: 118 24 dB, 118 −6 dB, 119 24 dB, and 119 −6 dB, with two levels of signal-to-noise ratio (SNR) at 24 dB and −6 dB. In the validation process, six models are compared to select the best DL model. For all fine-tuned hyperparameters, the best model of ECG heartbeat classification achieves an accuracy, sensitivity, specificity, precision, and F1-score of 99.34%, 93.83%, 99.57%, 89.81%, and 91.44%, respectively. As the results demonstrate, the proposed DL model can extract high-level features not only from the training data but also from unseen data. Such a model has good application prospects in clinical practice.
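The denoising-autoencoder pre-training idea, learning to reconstruct a clean signal from its noise-corrupted version, can be sketched in plain NumPy with a single linear encoder/decoder pair. The paper's stacked DAEs use deeper nonlinear layers and real ECG heartbeats, so this synthetic, linear version is purely an illustration of the training objective.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy periodic "heartbeat" signals: clean targets, noise-corrupted inputs.
t = np.linspace(0, 1, 32)
clean = np.array([np.sin(2 * np.pi * (2 + k % 5) * t) for k in range(50)])
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# Single linear encoder/decoder trained to map noisy -> clean.
d, h = clean.shape[1], 8
W1 = 0.1 * rng.normal(size=(d, h))   # encoder weights
W2 = 0.1 * rng.normal(size=(h, d))   # decoder weights
lr = 0.05

def loss():
    return np.mean((noisy @ W1 @ W2 - clean) ** 2)

loss_start = loss()
for _ in range(500):
    code = noisy @ W1                      # encode the noisy signal
    recon = code @ W2                      # decode back to signal space
    g = 2 * (recon - clean) / recon.size   # dLoss/dRecon
    gW2 = code.T @ g
    gW1 = noisy.T @ (g @ W2.T)
    W2 -= lr * gW2
    W1 -= lr * gW1
final_loss = loss()
print(round(loss_start, 4), round(final_loss, 4))
```

Because the reconstruction target is the clean signal, the learned code must capture noise-invariant structure; stacking several such layers and then attaching a classifier head is the pre-train/fine-tune recipe the abstract describes.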
TL;DR: A compact tree shape planar quad element MIMO antenna bearing a wide bandwidth for 5G communication operating in the millimeter-wave spectrum is proposed in this article, where the radiating element of the proposed design contains four different arcs to achieve the wide bandwidth response.
Abstract: A compact tree shape planar quad element Multiple Input Multiple Output (MIMO) antenna bearing a wide bandwidth for 5G communication operating in the millimeter-wave spectrum is proposed. The radiating element of the proposed design contains four different arcs to achieve the wide bandwidth response. Each radiating element is backed by a 1.57 mm thick Rogers-5880 substrate material, having a loss tangent and relative dielectric constant of 0.0009 and 2.2, respectively. The measured impedance bandwidth of the proposed quad element MIMO antenna system, based on the −10 dB criterion, is from 23 GHz to 40 GHz with a port isolation of greater than 20 dB. The measured radiation patterns are presented at 28 GHz, 33 GHz and 38 GHz with a maximum total gain of 10.58, 8.87 and 11.45 dB, respectively. The high gain of the proposed antenna further helps to overcome the atmospheric attenuation faced by the higher frequencies. In addition, the measured total efficiency of the proposed MIMO antenna is observed to be above 70% for the millimeter-wave frequencies. Furthermore, the MIMO key performance metrics such as Mean Effective Gain (MEG) and Envelope Correlation Coefficient (ECC) are analyzed and found to conform to the required standards of MEG < 3 dB and ECC < 0.5. A prototype of the proposed quad element MIMO antenna system is fabricated and measured. The experimental results validate the simulation design process conducted with Computer Simulation Technology (CST) software.
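The Envelope Correlation Coefficient mentioned above is commonly estimated from the S-parameters of an antenna pair using a standard closed form (valid for lossless antennas). The snippet below evaluates that formula on assumed, illustrative S-parameter values, not the paper's measured data.

```python
import numpy as np

def ecc_from_s(s11, s21, s12, s22):
    """Envelope correlation coefficient of a two-port MIMO antenna pair
    from complex S-parameters (lossless-antenna approximation)."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2)
           * (1 - abs(s12) ** 2 - abs(s22) ** 2))
    return num / den

# Hypothetical well-matched (|S11| small) and well-isolated (|S21| small)
# element pair, as targeted by the design above.
ecc = ecc_from_s(0.1 + 0.05j, 0.05 - 0.02j, 0.05 - 0.02j, 0.1 + 0.05j)
print(ecc)  # far below the 0.5 acceptance limit
```

Good matching and high port isolation drive the numerator toward zero, which is why the paper's >20 dB isolation comfortably satisfies the ECC < 0.5 criterion.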
TL;DR: This paper presents a blockchain-based system to guarantee the trustworthiness of stored recordings, allowing authorities to validate whether or not a video has been altered, and it diminishes the risk of copyright encroachment for law enforcement agencies and client users by securing possession and identity.
Abstract: The video created by surveillance cameras plays a crucial role in crime prevention and investigation in smart cities. The closed-circuit television (CCTV) camera is essential for a range of public uses in a smart city; combined with Internet of Things (IoT) technologies, such cameras can turn into smart sensors that help to ensure safety and security. However, the authenticity of the camera itself raises issues in establishing the integrity and suitability of its data. In this paper, we present a blockchain-based system to guarantee the trustworthiness of the stored recordings, allowing authorities to validate whether or not a video has been altered. It helps to discriminate fake videos from original ones and to make sure that surveillance cameras are authentic. Since the distributed ledger of the blockchain records the metadata of the CCTV video as well, it obstructs any chance of forging the data. This immutable ledger diminishes the risk of copyright encroachment for law enforcement agencies and client users by securing possession and identity.
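The tamper-evidence property described above can be sketched with a minimal hash chain: each ledger entry stores a SHA-256 hash over its video metadata plus the previous entry's hash, so altering any stored record invalidates every later link. The record fields and camera names here are hypothetical, and a real deployment would add digital signatures and distributed consensus on top.

```python
import hashlib
import json

def record_hash(metadata, prev_hash):
    payload = json.dumps(metadata, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger, metadata):
    """Append a metadata record, chaining it to the previous entry's hash."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"meta": metadata, "prev": prev,
                   "hash": record_hash(metadata, prev)})

def verify(ledger):
    """Recompute every link; any edited record breaks the chain."""
    prev = "0" * 64
    for rec in ledger:
        if rec["prev"] != prev or rec["hash"] != record_hash(rec["meta"], prev):
            return False
        prev = rec["hash"]
    return True

ledger = []
append(ledger, {"camera": "cam-01", "clip_id": "a3f0", "timestamp": 1})
append(ledger, {"camera": "cam-01", "clip_id": "b710", "timestamp": 2})
intact = verify(ledger)                      # True: chain is consistent
ledger[0]["meta"]["clip_id"] = "forged"      # tamper with a stored recording
tampered = verify(ledger)                    # False: alteration detected
print(intact, tampered)
```

Authorities holding the ledger can thus detect after the fact whether a submitted clip's metadata still matches what the camera originally committed.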
TL;DR: An insight into the open-data resources pertinent to the study of the spread of the Covid-19 pandemic and its control is provided and the present limitations and difficulties encountered are described.
Abstract: We provide an insight into the open-data resources pertinent to the study of the spread of the Covid-19 pandemic and its control. We identify the variables required to analyze fundamental aspects like seasonal behavior, regional mortality rates, and effectiveness of government measures. Open-data resources, along with data-driven methodologies, provide many opportunities to improve the response of the different administrations to the virus. We describe the present limitations and difficulties encountered in most of the open-data resources. To facilitate the access to the main open-data portals and resources, we identify the most relevant institutions, on a global scale, providing Covid-19 information and/or auxiliary variables (demographics, mobility, etc.). We also describe several open resources to access Covid-19 datasets at a country-wide level (i.e., China, Italy, Spain, France, Germany, US, etc.). To facilitate the rapid response to the study of the seasonal behavior of Covid-19, we enumerate the main open resources in terms of weather and climate variables. We also assess the reusability of some representative open-data sources.
TL;DR: In this paper, a multi-channel pre-trained ResNet architecture is presented to facilitate the diagnosis of COVID-19 from chest X-rays, and three ResNet-based models are retrained to classify X-rays on a one-against-all basis as (a) normal or diseased, (b) pneumonia or non-pneumonia, and (c) COVID-19 or non-COVID-19 individuals.
Abstract: The 2019 novel coronavirus (COVID-19) has spread rapidly all over the world. The standard test for screening COVID-19 patients is the polymerase chain reaction test. As this method is time consuming, chest X-rays may be considered as an alternative for quick screening. However, specialization is required to read COVID-19 chest X-ray images, as their features vary. To address this, we present a multi-channel pre-trained ResNet architecture to facilitate the diagnosis of COVID-19 from chest X-rays. Three ResNet-based models were retrained to classify X-rays on a one-against-all basis as (a) normal or diseased, (b) pneumonia or non-pneumonia, and (c) COVID-19 or non-COVID-19 individuals. Finally, these three models were ensembled and fine-tuned using X-rays from 1579 normal, 4245 pneumonia, and 184 COVID-19 individuals to classify normal, pneumonia, and COVID-19 cases in a one-against-one framework. Our results show that the ensemble model is more accurate than the single model as it extracts more relevant semantic features for each class. The method provides a precision of 94% and a recall of 100%. It could potentially help clinicians in screening patients for COVID-19, thus facilitating immediate triaging and treatment for better outcomes.
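The ensembling of three one-against-all binary models into a single multi-class decision can be sketched as follows: stack each model's positive-class score per image and take the arg-max. The per-image scores below are hypothetical, and the paper's ensemble additionally fine-tunes the combined network rather than just voting.

```python
import numpy as np

def ensemble_predict(probs_normal, probs_pneumonia, probs_covid):
    """Combine three one-against-all binary models by stacking their
    positive-class probabilities and taking the arg-max per image."""
    stacked = np.column_stack([probs_normal, probs_pneumonia, probs_covid])
    return stacked.argmax(axis=1)  # 0 = normal, 1 = pneumonia, 2 = COVID-19

# Hypothetical per-image positive-class scores from the three models.
p_norm = np.array([0.9, 0.2, 0.1])
p_pneu = np.array([0.3, 0.8, 0.2])
p_cov  = np.array([0.1, 0.3, 0.7])
preds = ensemble_predict(p_norm, p_pneu, p_cov)
print(preds)  # [0 1 2]
```

Each binary specialist only has to separate its own class from the rest, and the arg-max arbitrates between them, which is the sense in which the ensemble "extracts more relevant semantic features for each class".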