
Showing papers in "IEEE Access in 2018"


Journal ArticleDOI
Amina Adadi1, Mohammed Berrada1
TL;DR: This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI; it reviews the existing approaches to the topic, discusses surrounding trends, and presents major research trajectories.
Abstract: At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread adoption of artificial intelligence (AI) in our daily life, which contributes to accelerating the shift towards a more algorithmic society. However, even with such unprecedented advancements, a key impediment to the use of AI-based systems is that they often lack transparency. Indeed, the black-box nature of these systems allows powerful predictions, but those predictions cannot be directly explained. This issue has triggered a new debate on explainable AI (XAI), a research field that holds substantial promise for improving the trust and transparency of AI-based systems. It is recognized as the sine qua non for AI to continue making steady progress without disruption. This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI. Through the lens of the literature, we review the existing approaches regarding the topic, discuss trends surrounding its sphere, and present major research trajectories.

2,258 citations


Journal ArticleDOI
TL;DR: A comprehensive survey on adversarial attacks on deep learning in computer vision can be found in this paper, where the authors review the works that design adversarial attacks, analyze the existence of such attacks, and propose defenses against them.
Abstract: Deep learning is at the heart of the current rise of artificial intelligence. In the field of computer vision, it has become the workhorse for applications ranging from self-driving cars to surveillance and security. While deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models. Adversarial attacks pose a serious threat to the success of deep learning in practice. This fact has recently led to a large influx of contributions in this direction. This paper presents the first comprehensive survey on adversarial attacks on deep learning in computer vision. We review the works that design adversarial attacks, analyze the existence of such attacks, and propose defenses against them. To emphasize that adversarial attacks are possible in practical conditions, we separately review the contributions that evaluate adversarial attacks in real-world scenarios. Finally, drawing on the reviewed literature, we provide a broader outlook of this research direction.
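To make the notion of a "subtle perturbation" concrete, the sketch below applies a fast-gradient-sign-style step (one of the classic attack families covered by such surveys) to a toy logistic-regression classifier. It is an illustration only, not code from the paper; the model, weights, and epsilon are arbitrary assumptions.

```python
# Illustrative sketch (not from the paper): a fast-gradient-sign-style
# perturbation against a toy logistic-regression "classifier".
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=64), 0.1          # toy model parameters (assumed)
x, y = rng.normal(size=64), 1.0          # one input and its true label

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # sigmoid score

# Gradient of the cross-entropy loss with respect to the *input* x.
grad_x = (predict(x) - y) * w

# Adversarial example: a small step in the direction of sign(gradient).
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

print("clean score:", round(float(predict(x)), 3))
print("adversarial score:", round(float(predict(x_adv)), 3))
```

The same idea scales to deep networks: the gradient is taken through the whole model, and the per-pixel step stays small enough to be imperceptible.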

1,542 citations


Journal ArticleDOI
TL;DR: A comprehensive survey analyzing how edge computing improves the performance of IoT networks; it considers security issues in edge computing, evaluating the availability, integrity, and confidentiality of the security strategies of each group, and proposes a framework for the security evaluation of IoT networks with edge computing.
Abstract: The Internet of Things (IoT) now permeates our daily lives, providing important measurement and collection tools to inform our every decision. Millions of sensors and devices are continuously producing data and exchanging important messages via complex networks supporting machine-to-machine communications and monitoring and controlling critical smart-world infrastructures. As a strategy to mitigate the escalation in resource congestion, edge computing has emerged as a new paradigm to solve IoT and localized computing needs. Compared with the well-known cloud computing, edge computing will migrate data computation or storage to the network “edge,” near the end users. Thus, a number of computation nodes distributed across the network can offload the computational stress away from the centralized data center, and can significantly reduce the latency in message exchange. In addition, the distributed structure can balance network traffic and avoid the traffic peaks in IoT networks, reducing the transmission latency between edge/cloudlet servers and end users, as well as reducing response times for real-time IoT applications in comparison with traditional cloud services. Furthermore, by transferring computation and communication overhead from nodes with limited battery supply to nodes with significant power resources, the system can extend the lifetime of the individual nodes. In this paper, we conduct a comprehensive survey, analyzing how edge computing improves the performance of IoT networks. We categorize edge computing into different groups based on architecture, and study their performance by comparing network latency, bandwidth occupation, energy consumption, and overhead. In addition, we consider security issues in edge computing, evaluating the availability, integrity, and the confidentiality of security strategies of each group, and propose a framework for security evaluation of IoT networks with edge computing. Finally, we compare the performance of various IoT applications (smart city, smart grid, smart transportation, and so on) in edge computing and traditional cloud computing architectures.
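The latency argument above can be made concrete with a toy back-of-the-envelope model. The sketch below is not from the paper; the distances, payload size, link rate, and processing times are assumed figures chosen only to show how proximity shrinks the propagation term.

```python
# Toy latency model (illustrative only; all numbers are assumptions):
# end-to-end delay = propagation + transmission + processing, compared for a
# distant cloud data center and a nearby edge/cloudlet node.
def round_trip_ms(distance_km, payload_kb, link_mbps, processing_ms):
    propagation = 2 * distance_km / 200_000 * 1000      # ~200,000 km/s in fiber
    transmission = payload_kb * 8 / (link_mbps * 1000) * 1000
    return propagation + transmission + processing_ms

cloud = round_trip_ms(distance_km=2000, payload_kb=50, link_mbps=50, processing_ms=5)
edge  = round_trip_ms(distance_km=5,    payload_kb=50, link_mbps=50, processing_ms=10)
print(f"cloud: {cloud:.1f} ms   edge: {edge:.1f} ms")
```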

1,008 citations


Journal ArticleDOI
TL;DR: A comprehensive review of emerging and enabling technologies, with a main focus on 5G mobile networks, which are envisaged to support the exponential traffic growth required to enable the IoT.
Abstract: The Internet of Things (IoT) is a promising technology that aims to revolutionize and connect the global world via heterogeneous smart devices through seamless connectivity. The current demand for machine-type communications (MTC) has resulted in a variety of communication technologies with diverse service requirements to achieve the modern IoT vision. More recent cellular standards like long-term evolution (LTE) have been introduced for mobile devices but are not well suited for low-power and low data rate devices such as IoT devices. To address this, a number of IoT standards are emerging. The fifth generation (5G) mobile network, in particular, aims to address the limitations of previous cellular standards and be a potential key enabler for the future IoT. In this paper, the state of the art of IoT application requirements along with their associated communication technologies is surveyed. In addition, the third generation partnership project cellular-based low-power wide area solutions to support and enable the new service requirements for massive to critical IoT use cases are discussed in detail, including extended coverage global system for mobile communications for the Internet of Things, enhanced machine-type communications, and narrowband-Internet of Things. Furthermore, 5G new radio enhancements for new service requirements and enabling technologies for the IoT are introduced. This paper presents a comprehensive review of emerging and enabling technologies, with a main focus on 5G mobile networks, which are envisaged to support the exponential traffic growth required to enable the IoT. The challenges and open research directions pertinent to the deployment of massive to critical IoT applications, including the need for an efficient context-aware congestion control mechanism, are also presented.

951 citations


Journal ArticleDOI
TL;DR: This review introduces the machine learning algorithms as applied to medical image analysis, focusing on convolutional neural networks, and emphasizing clinical aspects of the field, covering key research areas and applications of medical image classification, localization, detection, segmentation, and registration.
Abstract: The tremendous success of machine learning algorithms at image recognition tasks in recent years intersects with a time of dramatically increased use of electronic medical records and diagnostic imaging. This review introduces machine learning algorithms as applied to medical image analysis, focusing on convolutional neural networks and emphasizing clinical aspects of the field. The advantage of machine learning in an era of medical big data is that significant hierarchical relationships within the data can be discovered algorithmically without laborious hand-crafting of features. We cover key research areas and applications of medical image classification, localization, detection, segmentation, and registration. We conclude by discussing research obstacles, emerging trends, and possible future directions.

941 citations


Journal ArticleDOI
Qinglin Qi1, Fei Tao1
TL;DR: The similarities and differences between big data and digital twin are compared from the general and data perspectives and how they can be integrated to promote smart manufacturing are discussed.
Abstract: With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis of manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves the way for the cyber-physical integration of manufacturing, which remains an important bottleneck for achieving smart manufacturing. In this paper, big data and the digital twin in manufacturing are reviewed, including their concepts as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and the digital twin are compared from the general and data perspectives. Since big data and the digital twin can be complementary, how they can be integrated to promote smart manufacturing is discussed.

856 citations


Journal ArticleDOI
TL;DR: This paper proposes augmenting fully convolutional networks with long short-term memory recurrent neural network (LSTM RNN) sub-modules for time series classification, together with an attention mechanism and a refinement procedure to enhance the performance of trained models.
Abstract: Fully convolutional neural networks (FCNs) have been shown to achieve state-of-the-art performance on the task of classifying time series sequences. We propose the augmentation of fully convolutional networks with long short-term memory recurrent neural network (LSTM RNN) sub-modules for time series classification. Our proposed models significantly enhance the performance of fully convolutional networks with a nominal increase in model size and require minimal preprocessing of the data set. The proposed long short-term memory fully convolutional network (LSTM-FCN) achieves state-of-the-art performance compared with other approaches. We also explore the use of an attention mechanism to improve time series classification with the attention long short-term memory fully convolutional network (ALSTM-FCN). The attention mechanism allows one to visualize the decision process of the LSTM cell. Furthermore, we propose refinement as a method to enhance the performance of trained models. An overall analysis of the performance of our model is provided and compared with other techniques.
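A minimal PyTorch sketch of the LSTM-FCN idea follows: a fully convolutional branch and an LSTM branch run in parallel on the same series, and their features are concatenated for classification. The layer sizes, the 80% dropout, and the single-time-step ("dimension shuffled") LSTM input are illustrative assumptions based on the general description, not the authors' exact configuration.

```python
# Minimal sketch of an LSTM-FCN: parallel FCN and LSTM branches, concatenated.
import torch
import torch.nn as nn

class LSTMFCN(nn.Module):
    def __init__(self, series_len, n_classes, lstm_hidden=128):
        super().__init__()
        self.fcn = nn.Sequential(                       # temporal convolution branch
            nn.Conv1d(1, 128, 8, padding=4), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, 5, padding=2), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, 3, padding=1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                    # global average pooling
        )
        # LSTM branch: the whole series is fed as one time step of length-T features.
        self.lstm = nn.LSTM(input_size=series_len, hidden_size=lstm_hidden,
                            batch_first=True)
        self.dropout = nn.Dropout(0.8)
        self.fc = nn.Linear(128 + lstm_hidden, n_classes)

    def forward(self, x):                               # x: (batch, 1, series_len)
        conv_feat = self.fcn(x).squeeze(-1)             # (batch, 128)
        _, (h, _) = self.lstm(x)                        # h: (1, batch, lstm_hidden)
        lstm_feat = self.dropout(h[-1])                 # (batch, lstm_hidden)
        return self.fc(torch.cat([conv_feat, lstm_feat], dim=1))

logits = LSTMFCN(series_len=140, n_classes=5)(torch.randn(8, 1, 140))
print(logits.shape)                                     # torch.Size([8, 5])
```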

851 citations


Journal ArticleDOI
TL;DR: A thorough review on how to adapt blockchain to the specific needs of IoT in order to develop Blockchain-based IoT (BIoT) applications is presented, and some recommendations are enumerated with the aim of guiding future BIoT researchers and developers on some of the issues that will have to be tackled before deploying the next generation of BIoT applications.
Abstract: The paradigm of Internet of Things (IoT) is paving the way for a world, where many of our daily objects will be interconnected and will interact with their environment in order to collect information and automate certain tasks. Such a vision requires, among other things, seamless authentication, data privacy, security, robustness against attacks, easy deployment, and self-maintenance. Such features can be brought by blockchain, a technology born with a cryptocurrency called Bitcoin. In this paper, a thorough review on how to adapt blockchain to the specific needs of IoT in order to develop Blockchain-based IoT (BIoT) applications is presented. After describing the basics of blockchain, the most relevant BIoT applications are described with the objective of emphasizing how blockchain can impact traditional cloud-centered IoT applications. Then, the current challenges and possible optimizations are detailed regarding many aspects that affect the design, development, and deployment of a BIoT application. Finally, some recommendations are enumerated with the aim of guiding future BIoT researchers and developers on some of the issues that will have to be tackled before deploying the next generation of BIoT applications.
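The "basics of blockchain" the review describes reduce to an append-only, hash-chained, tamper-evident log. The standard-library sketch below is an illustration of that structure only (no consensus, networking, or cryptocurrency), not code from the paper.

```python
# Stdlib-only illustration of a tamper-evident hash chain (not from the paper).
import hashlib, json, time

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, payload):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "time": time.time(),
                  "payload": payload, "prev_hash": prev})

def verify(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"device": "sensor-42", "reading": 21.5})
append_block(chain, {"device": "sensor-42", "reading": 21.7})
print(verify(chain))                      # True
chain[0]["payload"]["reading"] = 99.9
print(verify(chain))                      # False: tampering breaks the chain
```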

755 citations


Journal ArticleDOI
TL;DR: A hierarchical architecture of the smart factory was proposed first, and then the key technologies were analyzed from the aspects of the physical resource layer, the network layer, and the data application layer; a candy packing line verification showed that overall equipment effectiveness is significantly improved.
Abstract: Given the current structure of the digital factory, it is necessary to build the smart factory to upgrade the manufacturing industry. The smart factory combines physical and cyber technologies and deeply integrates previously independent discrete systems, making the involved technologies more complex and precise than they are now. In this paper, a hierarchical architecture of the smart factory was proposed first, and then the key technologies were analyzed from the aspects of the physical resource layer, the network layer, and the data application layer. In addition, we discussed the major issues and potential solutions for key emerging technologies, such as the Internet of Things (IoT), big data, and cloud computing, which are embedded in the manufacturing process. Finally, a candy packing line was used to verify the key technologies of the smart factory, which showed that overall equipment effectiveness is significantly improved.
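Overall equipment effectiveness (OEE), the metric used to evaluate the candy packing line, is conventionally the product of availability, performance, and quality. The snippet below shows that standard decomposition with made-up sample figures; it is not data from the study.

```python
# Standard OEE decomposition: availability x performance x quality.
# The sample figures are illustrative only, not from the candy packing line study.
def oee(planned_min, downtime_min, ideal_rate_per_min, total_count, good_count):
    run_time = planned_min - downtime_min
    availability = run_time / planned_min
    performance = total_count / (ideal_rate_per_min * run_time)
    quality = good_count / total_count
    return availability * performance * quality

print(f"OEE = {oee(480, 45, 60, 22000, 21300):.1%}")   # ~73.9%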

736 citations


Journal ArticleDOI
TL;DR: This survey report describes key literature surveys on machine learning (ML) and deep learning (DL) methods for network analysis of intrusion detection and provides a brief tutorial description of each ML/DL method.
Abstract: With the development of the Internet, cyber-attacks are changing rapidly and the cyber security situation is not optimistic. This survey report describes key literature surveys on machine learning (ML) and deep learning (DL) methods for network analysis of intrusion detection and provides a brief tutorial description of each ML/DL method. Papers representing each method were indexed, read, and summarized based on their temporal or thermal correlations. Because data are so important in ML/DL methods, we describe some of the commonly used network datasets used in ML/DL, discuss the challenges of using ML/DL for cybersecurity and provide suggestions for research directions.

676 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study the potential advantages of allowing for non-orthogonal sharing of RAN resources in uplink communications from a set of eMBB, mMTC, and URLLC devices to a common base station.
Abstract: The grand objective of 5G wireless technology is to support three generic services with vastly heterogeneous requirements: enhanced mobile broadband (eMBB), massive machine-type communications (mMTCs), and ultra-reliable low-latency communications (URLLCs). Service heterogeneity can be accommodated by network slicing, through which each service is allocated resources to provide performance guarantees and isolation from the other services. Slicing of the radio access network (RAN) is typically done by means of orthogonal resource allocation among the services. This paper studies the potential advantages of allowing for non-orthogonal sharing of RAN resources in uplink communications from a set of eMBB, mMTC, and URLLC devices to a common base station. The approach is referred to as heterogeneous non-orthogonal multiple access (H-NOMA), in contrast to the conventional NOMA techniques that involve users with homogeneous requirements and hence can be investigated through a standard multiple access channel. The study devises a communication-theoretic model that accounts for the heterogeneous requirements and characteristics of the three services. The concept of reliability diversity is introduced as a design principle that leverages the different reliability requirements across the services in order to ensure performance guarantees with non-orthogonal RAN slicing. This paper reveals that H-NOMA can lead, in some regimes, to significant gains in terms of performance tradeoffs among the three generic services as compared to orthogonal slicing.
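A textbook-style two-user uplink comparison helps illustrate why non-orthogonal sharing can outperform orthogonal slicing; note this is a simplified Shannon-rate illustration under assumed SNRs, not the paper's H-NOMA model with heterogeneous reliability requirements.

```python
# Illustrative two-user uplink comparison (not the paper's H-NOMA model):
# orthogonal slicing vs. non-orthogonal access with successive interference
# cancellation (SIC). SNR values are arbitrary assumptions.
from math import log2

snr1, snr2 = 10.0, 2.0          # received SNRs of the two users (linear scale)

# Orthogonal: each user gets half of the time/frequency resources.
r1_oma = 0.5 * log2(1 + snr1)
r2_oma = 0.5 * log2(1 + snr2)

# Non-orthogonal: both transmit together; user 1 is decoded first (treating
# user 2 as noise), then cancelled before user 2 is decoded interference-free.
r1_noma = log2(1 + snr1 / (1 + snr2))
r2_noma = log2(1 + snr2)

print(f"OMA : {r1_oma:.2f} + {r2_oma:.2f} = {r1_oma + r2_oma:.2f} bit/s/Hz")
print(f"NOMA: {r1_noma:.2f} + {r2_noma:.2f} = {r1_noma + r2_noma:.2f} bit/s/Hz")
```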

Journal ArticleDOI
TL;DR: An in-depth analysis of the majority of the deep neural networks (DNNs) proposed in the state of the art for image recognition, giving a complete view of which solutions have been explored so far and which research directions are worth exploring in the future.
Abstract: This paper presents an in-depth analysis of the majority of the deep neural networks (DNNs) proposed in the state of the art for image recognition. For each DNN, multiple performance indices are observed, such as recognition accuracy, model complexity, computational complexity, memory usage, and inference time. The behavior of such performance indices and some combinations of them are analyzed and discussed. To measure the indices, we experiment with the use of DNNs on two different computer architectures: a workstation equipped with an NVIDIA Titan X Pascal and an embedded system based on an NVIDIA Jetson TX1 board. This experimentation allows a direct comparison between DNNs running on machines with very different computational capacities. This paper is useful for researchers who want a complete view of which solutions have been explored so far and which research directions are worth exploring in the future, and for practitioners who need to select the DNN architecture(s) that best fit the resource constraints of practical deployments and applications. To complete this work, all the DNNs, as well as the software used for the analysis, are available online.
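The kind of measurement reported (parameter count, inference time on a given device) can be sketched in a few lines of PyTorch. ResNet-18 is used here only as a stand-in model, and the batch size and timing loop are arbitrary choices; this is not the paper's benchmarking code.

```python
# Minimal sketch: count parameters and time a forward pass of a DNN on the
# locally available device (CPU or GPU). Illustrative settings only.
import time
import torch
import torchvision

model = torchvision.models.resnet18().eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
x = torch.randn(1, 3, 224, 224, device=device)

params_m = sum(p.numel() for p in model.parameters()) / 1e6

with torch.no_grad():
    for _ in range(5):                      # warm-up iterations
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(50):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed_ms = (time.perf_counter() - start) / 50 * 1000

print(f"{params_m:.1f} M parameters, {elapsed_ms:.1f} ms per forward pass on {device}")
```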

Journal ArticleDOI
TL;DR: The results of the evaluation show that performance is improved by reducing the induced delay, reducing the response time, increasing throughput, and detecting real-time attacks in the IoT network with low performance overheads.
Abstract: The recent expansion of the Internet of Things (IoT) and the consequent explosion in the volume of data produced by smart devices have led to the outsourcing of data to designated data centers. However, centralized data centers, such as cloud storage, cannot manage these huge data stores in an efficient way. There are many challenges that must be addressed in the traditional network architecture due to the rapid growth in the diversity and number of devices connected to the internet, which is not designed to provide high availability, real-time data delivery, scalability, security, resilience, and low latency. To address these issues, this paper proposes a novel blockchain-based distributed cloud architecture with software defined networking (SDN)-enabled controller fog nodes at the edge of the network to meet the required design principles. The proposed model is a distributed cloud architecture based on blockchain technology, which provides low-cost, secure, and on-demand access to the most competitive computing infrastructures in an IoT network. By creating a distributed cloud infrastructure, the proposed model enables cost-effective high-performance computing. Furthermore, to bring computing resources to the edge of the IoT network and allow low-latency access to large amounts of data in a secure manner, we provide a secure distributed fog node architecture that uses SDN and blockchain techniques. Fog nodes are distributed fog computing entities that allow the deployment of fog services, and are formed by multiple computing resources at the edge of the IoT network. We evaluated the performance of our proposed architecture and compared it with existing models using various performance measures. The results of our evaluation show that performance is improved by reducing the induced delay, reducing the response time, increasing throughput, and detecting real-time attacks in the IoT network with low performance overheads.

Journal ArticleDOI
Rui Xiong1, Jiayi Cao1, Quanqing Yu1, Hongwen He1, Fengchun Sun1 
TL;DR: The review presents the key feedback factors that are indispensable for accurate estimation of battery SoC, and presents the possible recommendations for the development of next generation of smart SoC estimation and battery management systems for electric vehicles and battery energy storage system.
Abstract: Battery technology is the bottleneck of electric vehicles (EVs). It is important, both in theory and in practical application, to do research on the modeling and state estimation of batteries, which is essential to optimizing energy management, extending the life cycle, reducing cost, and safeguarding the safe application of batteries in EVs. However, batteries, which have strongly time-varying and nonlinear characteristics, are further influenced by random factors such as driving loads and operating conditions in EV applications, so real-time, accurate estimation of their state is challenging. The estimation methodologies for battery state-of-charge (SoC) are classified and systematically discussed in terms of estimation method/algorithm, advantages, drawbacks, and estimation error. In particular, for battery packs, which inevitably exhibit inconsistency in cell capacity, resistance, and voltage, advanced methods based on representative cell selection and bias correction are described and discussed. The review also presents the key feedback factors that are indispensable for accurate estimation of battery SoC, which helps ensure SoC estimation accuracy and supports the choice of an appropriate method for developing a reliable and safe battery management system and energy management strategy for EVs. Finally, the paper highlights a number of key factors and challenges and presents possible recommendations for the development of the next generation of smart SoC estimation and battery management systems for electric vehicles and battery energy storage systems.
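For orientation, the simplest SoC estimator in the surveyed taxonomy is coulomb counting (ampere-hour integration); the model-based and filter-based methods reviewed above exist largely to correct its drift using voltage feedback. The sketch below shows only that baseline, with made-up current values.

```python
# Coulomb counting (ampere-hour integration), the simplest SoC estimator.
# Values are illustrative; real systems add voltage-based drift correction.
def coulomb_count(soc0, currents_a, dt_s, capacity_ah):
    """Integrate current over time; positive current = discharge."""
    soc = soc0
    for i in currents_a:
        soc -= i * dt_s / (capacity_ah * 3600.0)
    return soc

currents = [20.0] * 600 + [-10.0] * 300       # 10 min discharge, then 5 min charge
print(f"SoC: {coulomb_count(0.80, currents, dt_s=1.0, capacity_ah=50.0):.3f}")
```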

Journal ArticleDOI
TL;DR: A novel action recognition method by processing the video data using convolutional neural network (CNN) and deep bidirectional LSTM (DB-LSTM) network that is capable of learning long term sequences and can process lengthy videos by analyzing features for a certain time interval.
Abstract: Recurrent neural network (RNN) and long short-term memory (LSTM) have achieved great success in processing sequential multimedia data and yielded the state-of-the-art results in speech recognition, digital signal processing, video processing, and text data analysis. In this paper, we propose a novel action recognition method by processing the video data using convolutional neural network (CNN) and deep bidirectional LSTM (DB-LSTM) network. First, deep features are extracted from every sixth frame of the videos, which helps reduce the redundancy and complexity. Next, the sequential information among frame features is learnt using DB-LSTM network, where multiple layers are stacked together in both forward pass and backward pass of DB-LSTM to increase its depth. The proposed method is capable of learning long term sequences and can process lengthy videos by analyzing features for a certain time interval. Experimental results show significant improvements in action recognition using the proposed method on three benchmark data sets including UCF-101, YouTube 11 Actions, and HMDB51 compared with the state-of-the-art action recognition methods.
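The pipeline described above (per-frame CNN features from every sixth frame, fed to a stacked bidirectional LSTM) can be sketched as follows. A tiny custom CNN stands in for the paper's deep feature extractor, and all layer sizes are illustrative assumptions.

```python
# Sketch of a CNN + deep bidirectional LSTM action-recognition pipeline.
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, n_classes, feat_dim=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                      # per-frame feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, clip):                           # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)                       # (B, T, 2*hidden)
        return self.fc(out[:, -1])                     # classify from last step

video = torch.randn(2, 60, 3, 112, 112)               # 2 clips, 60 frames each
clip = video[:, ::6]                                   # keep every sixth frame
print(CNNBiLSTM(n_classes=101)(clip).shape)            # torch.Size([2, 101])
```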

Journal ArticleDOI
TL;DR: This review will hopefully lead to increasing efforts toward the development of an advanced Li-ion battery in terms of economics, longevity, specific power, energy density, safety, and performance in vehicle applications.
Abstract: A variety of rechargeable batteries are now available in world markets for powering electric vehicles (EVs). The lithium-ion (Li-ion) battery is considered the best among all battery types and cells because of its superior characteristics and performance. The positive environmental impacts and recycling potential of lithium batteries have influenced the development of new research for improving Li-ion battery technologies. However, cost reduction, safe operation, and mitigation of negative ecological impacts are now a common concern for advancement. This paper provides a comprehensive study on the state of the art of Li-ion batteries, including the fundamentals, structures, and overall performance evaluations of different types of lithium batteries. A study on a battery management system for Li-ion battery storage in EV applications is presented, which includes cell condition monitoring, charge and discharge control, state estimation, protection and equalization, temperature control and heat management, and battery fault diagnosis and assessment, aimed at enhancing the overall performance of the system. It is observed that Li-ion batteries are becoming very popular in vehicle applications due to price reductions, light weight, and high power density. However, the management of the charging and discharging processes, CO2 and greenhouse gas emissions, health effects, and recycling and refurbishing processes have still not been resolved satisfactorily. Consequently, this review focuses on the many factors, challenges, and problems and provides recommendations for sustainable battery manufacturing for future EVs. This review will hopefully lead to increasing efforts toward the development of an advanced Li-ion battery in terms of economics, longevity, specific power, energy density, safety, and performance in vehicle applications.

Journal ArticleDOI
TL;DR: This review focuses on the fundamentals of hyperspectral image analysis and its modern applications such as food quality and safety assessment, medical diagnosis and image guided surgery, forensic document examination, defense and homeland security, remote sensing applications such as precision agriculture and water resource management, and material identification and mapping of artworks.
Abstract: Over the past three decades, significant developments have been made in hyperspectral imaging due to which it has emerged as an effective tool in numerous civil, environmental, and military applications. Modern sensor technologies are capable of covering large surfaces of earth with exceptional spatial, spectral, and temporal resolutions. Due to these features, hyperspectral imaging has been effectively used in numerous remote sensing applications requiring estimation of physical parameters of many complex surfaces and identification of visually similar materials having fine spectral signatures. In the recent years, ground based hyperspectral imaging has gained immense interest in the research on electronic imaging for food inspection, forensic science, medical surgery and diagnosis, and military applications. This review focuses on the fundamentals of hyperspectral image analysis and its modern applications such as food quality and safety assessment, medical diagnosis and image guided surgery, forensic document examination, defense and homeland security, remote sensing applications such as precision agriculture and water resource management and material identification and mapping of artworks. Moreover, recent research on the use of hyperspectral imaging for examination of forgery detection in questioned documents, aided by deep learning, is also presented. This review can be a useful baseline for future research in hyperspectral image analysis.

Journal ArticleDOI
TL;DR: This paper studies the data storage and sharing scheme for decentralized storage systems and proposes a framework that combines the decentralized storage system interplanetary file system, the Ethereum blockchain, and ABE technology, and solves the problem that the cloud server may not return all of the results searched or return wrong results.
Abstract: In traditional cloud storage systems, attribute-based encryption (ABE) is regarded as an important technology for solving the problem of data privacy and fine-grained access control. However, in all ABE schemes, the private key generator has the ability to decrypt all data stored in the cloud server, which may bring serious problems such as key abuse and privacy data leakage. Meanwhile, the traditional cloud storage model runs in a centralized storage manner, so a single point of failure may lead to the collapse of the system. With the development of blockchain technology, the decentralized storage mode has entered the public view. The decentralized storage approach can solve the problem of single point of failure in traditional cloud storage systems and enjoys a number of advantages over centralized storage, such as low price and high throughput. In this paper, we study the data storage and sharing scheme for decentralized storage systems and propose a framework that combines the decentralized storage system InterPlanetary File System, the Ethereum blockchain, and ABE technology. In this framework, the data owner has the ability to distribute secret keys for data users and encrypt shared data by specifying an access policy, and the scheme achieves fine-grained access control over data. At the same time, based on smart contracts on the Ethereum blockchain, the keyword search function on the ciphertext of the decentralized storage systems is implemented, which solves the problem that the cloud server may not return all of the results searched or may return wrong results in traditional cloud storage systems. Finally, we simulated the scheme on a Linux system and the Ethereum official test network Rinkeby, and the experimental results show that our scheme is feasible.
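The division of labor in such frameworks is: ciphertext lives in a content-addressed off-chain store, while only its hash and the access policy are anchored on a ledger. The sketch below is a heavily simplified, standard-library simulation of that idea; real IPFS, Ethereum, and ABE are not used, and all names here are hypothetical rather than the paper's implementation.

```python
# Simplified simulation: content-addressed off-chain store (IPFS-like) plus an
# append-only ledger (blockchain-like) holding only the hash and access policy.
# ABE-encrypted data is represented by opaque bytes.
import hashlib

off_chain_store = {}      # content hash -> ciphertext bytes
ledger = []               # append-only list of {cid, policy} records

def publish(ciphertext: bytes, access_policy: str) -> str:
    cid = hashlib.sha256(ciphertext).hexdigest()       # content identifier
    off_chain_store[cid] = ciphertext
    ledger.append({"cid": cid, "policy": access_policy})
    return cid

def fetch(cid: str) -> bytes:
    data = off_chain_store[cid]
    assert hashlib.sha256(data).hexdigest() == cid     # integrity check
    return data

cid = publish(b"<ABE-encrypted payload>", "role:doctor AND dept:cardiology")
print(cid[:16], len(fetch(cid)), "bytes, policy:", ledger[-1]["policy"])
```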

Journal ArticleDOI
TL;DR: The EduCTX platform represents the basis of the EduCTX initiative, which anticipates that various HEIs will join forces in order to create a globally efficient, simplified, and ubiquitous environment that avoids language and administrative barriers.
Abstract: Blockchain technology enables the creation of a decentralized environment, where transactions and data are not under the control of any third-party organization. Any transaction ever completed is recorded in a public ledger in a verifiable and permanent way. Based on blockchain technology, we propose a global higher education credit platform, named EduCTX. This platform is based on the concept of the European Credit Transfer and Accumulation System (ECTS). It constitutes a globally trusted, decentralized higher education credit and grading system that can offer a globally unified viewpoint for students and higher education institutions (HEIs), as well as for other potential stakeholders, such as companies, institutions, and organizations. As a proof of concept, we present a prototype implementation of the environment, based on the open-source Ark Blockchain Platform. Based on a globally distributed peer-to-peer network, EduCTX will process, manage, and control ECTX tokens, which represent credits that students gain for completed courses, analogous to ECTS credits. HEIs are the peers of the blockchain network. The platform is a first step toward a more transparent and technologically advanced form of higher education systems. The EduCTX platform represents the basis of the EduCTX initiative, which anticipates that various HEIs will join forces in order to create a globally efficient, simplified, and ubiquitous environment that avoids language and administrative barriers. Therefore, we invite and encourage HEIs to join the EduCTX initiative and the EduCTX blockchain network.

Journal ArticleDOI
TL;DR: The proposed hybrid security model for securing the diagnostic text data in medical images proved its ability to hide the confidential patient’s data into a transmitted cover image with high imperceptibility, capacity, and minimal deterioration in the received stego-image.
Abstract: Due to the significant advancement of the Internet of Things (IoT) in the healthcare sector, the security and the integrity of medical data have become big challenges for healthcare services applications. This paper proposes a hybrid security model for securing the diagnostic text data in medical images. The proposed model is developed by integrating either a 2-D discrete wavelet transform 1 level (2D-DWT-1L) or a 2-D discrete wavelet transform 2 level (2D-DWT-2L) steganography technique with a proposed hybrid encryption scheme. The proposed hybrid encryption scheme is built using a combination of the Advanced Encryption Standard and the Rivest, Shamir, and Adleman algorithms. The proposed model starts by encrypting the secret data; then it hides the result in a cover image using 2D-DWT-1L or 2D-DWT-2L. Both color and gray-scale images are used as cover images to conceal different text sizes. The performance of the proposed system was evaluated based on six statistical parameters: the peak signal-to-noise ratio (PSNR), mean square error (MSE), bit error rate (BER), structural similarity (SSIM), structural content (SC), and correlation. The PSNR values varied from 50.59 to 57.44 for color images and from 50.52 to 56.09 for gray-scale images. The MSE values varied from 0.12 to 0.57 for the color images and from 0.14 to 0.57 for the gray-scale images. The BER values were zero for both image types, while the SSIM, SC, and correlation values were one for both. Compared with the state-of-the-art methods, the proposed model proved its ability to hide confidential patient data in a transmitted cover image with high imperceptibility, capacity, and minimal deterioration in the received stego-image.
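The two basic image-quality metrics quoted above (MSE and PSNR) are straightforward to compute; the sketch below uses NumPy on random placeholder arrays with a toy least-significant-bit change standing in for the wavelet-domain embedding, so the numbers are not those of the paper.

```python
# MSE and PSNR between a cover image and a stego image (illustrative arrays).
import numpy as np

def mse(cover, stego):
    return np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)

def psnr(cover, stego, max_val=255.0):
    m = mse(cover, stego)
    return float("inf") if m == 0 else 10 * np.log10(max_val ** 2 / m)

cover = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
stego = cover.copy()
stego[::8, ::8] ^= 1                    # toy embedding: flip a few LSBs
print(f"MSE = {mse(cover, stego):.4f}, PSNR = {psnr(cover, stego):.2f} dB")
```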

Journal ArticleDOI
TL;DR: A thorough investigation of deep learning in its applications and mechanisms is sought, as a categorical collection of state of the art in deep learning research, to provide a broad reference for those seeking a primer on deep learning and its various implementations, platforms, algorithms, and uses in a variety of smart-world systems.
Abstract: Deep learning has exploded in the public consciousness, primarily as predictive and analytical products suffuse our world, in the form of numerous human-centered smart-world systems, including targeted advertisements, natural language assistants and interpreters, and prototype self-driving vehicle systems. Yet to most, the underlying mechanisms that enable such human-centered smart products remain obscure. In contrast, researchers across disciplines have been incorporating deep learning into their research to solve problems that could not have been approached before. In this paper, we seek to provide a thorough investigation of deep learning in its applications and mechanisms. Specifically, as a categorical collection of state of the art in deep learning research, we hope to provide a broad reference for those seeking a primer on deep learning and its various implementations, platforms, algorithms, and uses in a variety of smart-world systems. Furthermore, we hope to outline recent key advancements in the technology, and provide insight into areas, in which deep learning can improve investigation, as well as highlight new areas of research that have yet to see the application of deep learning, but could nonetheless benefit immensely. We hope this survey provides a valuable reference for new deep learning practitioners, as well as those seeking to innovate in the application of deep learning.

Journal ArticleDOI
TL;DR: This paper proposes a novel IDS called the hierarchical spatial-temporal features-based intrusion detection system (HAST-IDS), which first learns the low-level spatial features of network traffic using deep convolutional neural networks (CNNs) and then learns high-level temporal features using long short-term memory networks.
Abstract: The development of an anomaly-based intrusion detection system (IDS) is a primary research direction in the field of intrusion detection. An IDS learns normal and anomalous behavior by analyzing network traffic and can detect unknown and new attacks. However, the performance of an IDS is highly dependent on feature design, and designing a feature set that can accurately characterize network traffic is still an ongoing research issue. Anomaly-based IDSs also have the problem of a high false alarm rate (FAR), which seriously restricts their practical applications. In this paper, we propose a novel IDS called the hierarchical spatial-temporal features-based intrusion detection system (HAST-IDS), which first learns the low-level spatial features of network traffic using deep convolutional neural networks (CNNs) and then learns high-level temporal features using long short-term memory networks. The entire process of feature learning is completed by the deep neural networks automatically; no feature engineering techniques are required. The automatically learned traffic features effectively reduce the FAR. The standard DARPA1998 and ISCX2012 data sets are used to evaluate the performance of the proposed system. The experimental results show that the HAST-IDS outperforms other published approaches in terms of accuracy, detection rate, and FAR, which successfully demonstrates its effectiveness in both feature learning and FAR reduction.
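The two-stage idea behind HAST-IDS (a CNN extracting spatial features from each packet's raw bytes, then an LSTM modeling the packet sequence within a flow) can be sketched as follows. All sizes and the input representation are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a spatial (per-packet CNN) + temporal (per-flow LSTM) detector.
import torch
import torch.nn as nn

class HASTLike(nn.Module):
    def __init__(self, n_classes, feat_dim=64, hidden=128):
        super().__init__()
        self.packet_cnn = nn.Sequential(               # spatial features per packet
            nn.Conv1d(1, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        self.flow_lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, flows):                          # (B, n_packets, packet_bytes)
        b, p, l = flows.shape
        feats = self.packet_cnn(flows.reshape(b * p, 1, l)).view(b, p, -1)
        _, (h, _) = self.flow_lstm(feats)              # temporal features per flow
        return self.fc(h[-1])

flows = torch.rand(4, 10, 100)                         # 4 flows, 10 packets of 100 bytes
print(HASTLike(n_classes=5)(flows).shape)              # torch.Size([4, 5])
```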

Journal ArticleDOI
TL;DR: An advanced ESS is required with regard to capacity, protection, control interface, energy management, and characteristics to enhance the performance of ESS in MG applications, with the aim of developing a cost-effective and efficient ESS model with a prolonged life cycle for sustainable MG implementation.
Abstract: A microgrid (MG) is a local entity that consists of distributed energy resources (DERs) to achieve local power reliability and sustainable energy utilization. The MG concept and renewable energy technologies integrated with energy storage systems (ESS) have gained increasing interest and popularity because energy can be stored at off-peak hours and supplied at peak hours. However, existing ESS technology faces challenges in storing energy due to various issues, such as charging/discharging, safety, reliability, size, cost, life cycle, and overall management. Thus, an advanced ESS is required with regard to capacity, protection, control interface, energy management, and characteristics to enhance the performance of ESS in MG applications. This paper comprehensively reviews the types of ESS technologies and ESS structures along with their configurations, classifications, features, energy conversion, and evaluation process. Moreover, details on the advantages and disadvantages of ESS in MG applications are analyzed based on the process of energy formation, material selection, power transfer mechanism, capacity, efficiency, and cycle period. Existing reviews critically demonstrate the current technologies for ESS in MG applications. However, the optimum management of ESSs for efficient MG operation remains a challenge in modern power system networks. This review also highlights the key factors, issues, and challenges with possible recommendations for the further development of ESS in future MG applications. All the highlighted insights of this review significantly contribute to the increasing effort toward the development of a cost-effective and efficient ESS model with a prolonged life cycle for sustainable MG implementation.

Journal ArticleDOI
TL;DR: This paper gives a systematic survey of clustering with deep learning in views of architecture and introduces the preliminary knowledge for better understanding of this field.
Abstract: Clustering is a fundamental problem in many data-driven application domains, and clustering performance highly depends on the quality of data representation. Hence, linear and non-linear feature transformations have been extensively used to learn a better data representation for clustering. In recent years, many works have focused on using deep neural networks to learn a clustering-friendly representation, resulting in a significant increase in clustering performance. In this paper, we give a systematic survey of clustering with deep learning from the viewpoint of architecture. Specifically, we first introduce the preliminary knowledge needed for a better understanding of this field. Then, a taxonomy of clustering with deep learning is proposed and some representative methods are introduced. Finally, we propose some interesting future opportunities for clustering with deep learning and give some concluding remarks.

Journal ArticleDOI
TL;DR: Well-known machine learning techniques, namely, SVM, random forest, and extreme learning machine (ELM) are applied and the results indicate that ELM outperforms other approaches in intrusion detection mechanisms.
Abstract: Intrusion detection is a fundamental part of security tools, such as adaptive security appliances, intrusion detection systems, intrusion prevention systems, and firewalls. Various intrusion detection techniques are used, but their performance is an issue. Intrusion detection performance depends on accuracy, which needs to improve to decrease false alarms and to increase the detection rate. To resolve concerns about performance, multilayer perceptron, support vector machine (SVM), and other techniques have been used in recent work. Such techniques have limitations and are not efficient for use on large data sets, such as system and network data. Because an intrusion detection system is used to analyze huge traffic data, an efficient classification technique is necessary to overcome this issue. This problem is considered in this paper. Well-known machine learning techniques, namely SVM, random forest, and extreme learning machine (ELM), are applied. These techniques are well known because of their capability in classification. The NSL-KDD (knowledge discovery and data mining) data set is used, which is considered a benchmark in the evaluation of intrusion detection mechanisms. The results indicate that ELM outperforms the other approaches.
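An extreme learning machine is simply a single hidden layer with random, untrained weights followed by a least-squares output layer, which is why it trains quickly on large data sets. The sketch below shows that structure on synthetic data standing in for NSL-KDD; it is an illustration, not the paper's experiment.

```python
# Minimal NumPy extreme learning machine (ELM): random hidden layer plus a
# least-squares readout, trained in a single linear solve. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                        # 1000 samples, 20 features
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)   # synthetic binary labels

def elm_train(X, y, n_hidden=200):
    W = rng.normal(size=(X.shape[1], n_hidden))        # random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                             # hidden activations
    T = np.eye(2)[y]                                   # one-hot targets
    beta = np.linalg.pinv(H) @ T                       # least-squares readout
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

W, b, beta = elm_train(X[:800], y[:800])
acc = np.mean(elm_predict(X[800:], W, b, beta) == y[800:])
print(f"held-out accuracy: {acc:.2f}")
```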

Journal ArticleDOI
TL;DR: This survey will help the industry and research community synthesize and identify the requirements for Fog computing and present some open issues, which will determine the future research direction for the Fog computing paradigm.
Abstract: Emerging technologies such as the Internet of Things (IoT) require latency-aware computation for real-time application processing. In IoT environments, connected things generate a huge amount of data, which is generally referred to as big data. Data generated from IoT devices are generally processed in a cloud infrastructure because of the on-demand services and scalability features of the cloud computing paradigm. However, processing IoT application requests on the cloud exclusively is not an efficient solution for some IoT applications, especially time-sensitive ones. To address this issue, Fog computing, which resides between the cloud and IoT devices, was proposed. In general, in the Fog computing environment, IoT devices are connected to Fog devices. These Fog devices are located in close proximity to users and are responsible for intermediate computation and storage. Key challenges in running IoT applications in a Fog computing environment include resource allocation and task scheduling. Fog computing research is still in its infancy, and a taxonomy-based investigation into the requirements of Fog infrastructure, platform, and applications mapped to current research is still required. This survey will help the industry and research community synthesize and identify the requirements for Fog computing. This paper starts with an overview of Fog computing, in which the definition of Fog computing, research trends, and the technical differences between Fog and cloud are reviewed. Then, we investigate numerous proposed Fog computing architectures and describe the components of these architectures in detail. From this, the role of each component is defined, which will help in the deployment of Fog computing. Next, a taxonomy of Fog computing is proposed by considering the requirements of the Fog computing paradigm. We also discuss existing research works and gaps in resource allocation and scheduling, fault tolerance, simulation tools, and Fog-based microservices. Finally, by addressing the limitations of current research works, we present some open issues, which will determine the future research direction for the Fog computing paradigm.

Journal ArticleDOI
TL;DR: To guarantee the validity of EHRs encapsulated in blockchain, this paper presents an attribute-based signature scheme with multiple authorities, in which a patient endorses a message according to the attribute while disclosing no information other than the evidence that he has attested to it.
Abstract: Electronic Health Records (EHRs) are entirely controlled by hospitals instead of patients, which complicates seeking medical advice from different hospitals. Patients face a critical need to focus on the details of their own healthcare and to restore management of their own medical data. The rapid development of blockchain technology promotes population healthcare, including medical records as well as patient-related data. This technology provides patients with comprehensive, immutable records and access to EHRs free from service providers and treatment websites. In this paper, to guarantee the validity of EHRs encapsulated in blockchain, we present an attribute-based signature scheme with multiple authorities, in which a patient endorses a message according to the attribute while disclosing no information other than the evidence that he has attested to it. Furthermore, there are multiple authorities, without a trusted single or central one, to generate and distribute public/private keys of the patient, which avoids the escrow problem and conforms to the mode of distributed data storage in the blockchain. By sharing secret pseudorandom function seeds among authorities, this protocol resists collusion attacks mounted by up to $N-1$ of the $N$ authorities. Under the computational bilinear Diffie-Hellman assumption, we also formally demonstrate that, in terms of unforgeability and the perfect privacy of the attribute signer, this attribute-based signature scheme is secure in the random oracle model. The comparison shows the efficiency and properties of the proposed method relative to methods proposed in other studies.

Journal ArticleDOI
TL;DR: The background of intrusion detection and blockchain is introduced, the applicability of blockchain to intrusion detection is discussed, and open challenges in this direction are identified.
Abstract: With the purpose of identifying cyber threats and possible incidents, intrusion detection systems (IDSs) are widely deployed in various computer networks. In order to enhance the detection capability of a single IDS, collaborative intrusion detection networks (or collaborative IDSs) have been developed, which allow IDS nodes to exchange data with each other. However, data and trust management still remain two challenges for current detection architectures, which may degrade the effectiveness of such detection systems. In recent years, blockchain technology has shown its adaptability in many fields, such as supply chain management, international payment, interbanking, and so on. As blockchain can protect the integrity of data storage and ensure process transparency, it has a potential to be applied to intrusion detection domain. Motivated by this, this paper provides a review regarding the intersection of IDSs and blockchains. In particular, we introduce the background of intrusion detection and blockchain, discuss the applicability of blockchain to intrusion detection, and identify open challenges in this direction.

Journal ArticleDOI
TL;DR: This paper comprehensively survey the body of existing research on I-IoT, and proposes a three-dimensional framework to explore the existing research space and investigate the adoption of some representative networking technologies, including 5G, machine-to-machine communication, and software-defined networking.
Abstract: The vision of Industry 4.0, otherwise known as the fourth industrial revolution, is the integration of massively deployed smart computing and network technologies in industrial production and manufacturing settings for the purposes of automation, reliability, and control, implicating the development of an Industrial Internet of Things (I-IoT). Specifically, I-IoT is devoted to adopting the IoT to enable the interconnection of anything, anywhere, and at any time in the manufacturing system context to improve the productivity, efficiency, safety, and intelligence. As an emerging technology, I-IoT has distinct properties and requirements that distinguish it from consumer IoT, including the unique types of smart devices incorporated, network technologies and quality-of-service requirements, and strict needs of command and control. To more clearly understand the complexities of I-IoT and its distinct needs and to present a unified assessment of the technology from a systems’ perspective, in this paper, we comprehensively survey the body of existing research on I-IoT. Particularly, we first present the I-IoT architecture, I-IoT applications (i.e., factory automation and process automation), and their characteristics. We then consider existing research efforts from the three key system aspects of control, networking, and computing. Regarding control, we first categorize industrial control systems and then present recent and relevant research efforts. Next, considering networking, we propose a three-dimensional framework to explore the existing research space and investigate the adoption of some representative networking technologies, including 5G, machine-to-machine communication, and software-defined networking. Similarly, concerning computing, we again propose a second three-dimensional framework that explores the problem space of computing in I-IoT and investigate the cloud, edge, and hybrid cloud and edge computing platforms. Finally, we outline particular challenges and future research needs in control, networking, and computing systems, as well as for the adoption of machine learning in an I-IoT context.

Journal ArticleDOI
TL;DR: Two improved models based on deep learning that are used to train and test nine kinds of maize leaf images are obtained by adjusting the parameters, changing the pooling combinations, adding dropout operations and rectified linear unit functions, and reducing the number of classifiers.
Abstract: In the field of agricultural information, the automatic identification and diagnosis of maize leaf diseases is highly desired. To improve the identification accuracy of maize leaf diseases and reduce the number of network parameters, improved GoogLeNet and Cifar10 models based on deep learning are proposed for leaf disease recognition in this paper. Two improved models that are used to train and test nine kinds of maize leaf images are obtained by adjusting the parameters, changing the pooling combinations, adding dropout operations and rectified linear unit functions, and reducing the number of classifiers. In addition, the number of parameters of the improved models is significantly smaller than that of the VGG and AlexNet structures. In the recognition of eight kinds of maize leaf diseases, the GoogLeNet model achieves a top-1 average identification accuracy of 98.9%, and the Cifar10 model achieves an average accuracy of 98.8%. The improved methods increase the accuracy of maize leaf disease identification and reduce the number of convergence iterations, which effectively improves model training and recognition efficiency.