
Showing papers in "IEEE Access in 2017"


Journal ArticleDOI

[...]

TL;DR: The experimental results show that RNN-IDS is well suited to building a high-accuracy classification model and that its performance is superior to that of traditional machine learning classification methods in both binary and multiclass classification.
Abstract: Intrusion detection plays an important role in ensuring information security, and the key technology is to accurately identify various attacks in the network. In this paper, we explore how to model an intrusion detection system based on deep learning, and we propose a deep learning approach for intrusion detection using recurrent neural networks (RNN-IDS). Moreover, we study the performance of the model in binary classification and multiclass classification, as well as the impact of the number of neurons and of different learning rates on the performance of the proposed model. We compare its performance with that of J48, artificial neural networks, random forests, support vector machines, and other machine learning methods proposed by previous researchers on the benchmark data set. The experimental results show that RNN-IDS is well suited to building a high-accuracy classification model and that its performance is superior to that of traditional machine learning classification methods in both binary and multiclass classification. The RNN-IDS model improves the accuracy of intrusion detection and provides a new research method for intrusion detection.

714 citations
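The recurrent state update at the core of an RNN-IDS-style classifier can be sketched in a few lines. This is a minimal illustration, not the paper's model: the single-unit cell, the weights, and the flow feature values below are all hypothetical stand-ins for a trained network over benchmark intrusion-detection features.

```python
import math

# Minimal sketch of a recurrent binary classifier in the spirit of
# RNN-IDS. All weights and feature values are illustrative; the actual
# model is a trained multi-unit RNN over benchmark data set features.

def rnn_binary_classify(seq, w_in=0.5, w_rec=0.9, w_out=1.5, bias=-0.7):
    """Run a one-unit recurrent cell over a feature sequence and
    return P(attack) from a sigmoid readout of the final state."""
    h = 0.0
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)   # recurrent state update
    logit = w_out * h + bias
    return 1.0 / (1.0 + math.exp(-logit))     # sigmoid readout

normal_flow = [0.1, 0.0, 0.2]   # low-intensity feature values
attack_flow = [0.9, 1.0, 0.8]   # sustained high-intensity values

p_normal = rnn_binary_classify(normal_flow)
p_attack = rnn_binary_classify(attack_flow)
print(p_normal < 0.5 < p_attack)   # True: the cell separates the two flows
```

Because the hidden state carries information across time steps, the cell responds to sustained patterns in the flow rather than to any single feature value.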


Journal ArticleDOI

[...]

TL;DR: This survey makes an exhaustive review on the state-of-the-art research efforts on mobile edge networks, including definition, architecture, and advantages, and presents a comprehensive survey of issues on computing, caching, and communication techniques at the network edge.
Abstract: With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to the heavy burden on the backhaul links and long latency. Therefore, new architectures, which bring network functions and contents to the network edge, are proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we make an exhaustive review of the state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including definition, architecture, and advantages. Next, a comprehensive survey of issues on computing, caching, and communication techniques at the network edge is presented. The applications and use cases of mobile edge networks are discussed. Subsequently, the key enablers of mobile edge networks, such as cloud technology, SDN/NFV, and smart devices are discussed. Finally, open research challenges and future directions are presented as well.

620 citations


Journal ArticleDOI

[...]

TL;DR: The proposed MeDShare system is blockchain-based and provides data provenance, auditing, and control for shared medical data in cloud repositories among big data entities and employs smart contracts and an access control mechanism to effectively track the behavior of the data.
Abstract: The dissemination of patients’ medical records results in diverse risks to patients’ privacy, as malicious activities on these records cause severe damage to the reputation, finances, and other interests of all parties related directly or indirectly to the data. Current methods to effectively manage and protect medical records have proved insufficient. In this paper, we propose MeDShare, a system that addresses the issue of medical data sharing among medical big data custodians in a trust-less environment. The system is blockchain-based and provides data provenance, auditing, and control for shared medical data in cloud repositories among big data entities. MeDShare monitors entities that access data for malicious use from a data custodian system. In MeDShare, data transitions and sharing from one entity to the other, along with all actions performed on the MeDShare system, are recorded in a tamper-proof manner. The design employs smart contracts and an access control mechanism to effectively track the behavior of the data and revoke access to offending entities on detection of violation of permissions on data. The performance of MeDShare is comparable to current cutting-edge solutions for data sharing among cloud service providers. By implementing MeDShare, cloud service providers and other data guardians will be able to achieve data provenance and auditing while sharing medical data with entities such as research and medical institutions with minimal risk to data privacy.

521 citations
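The tamper-proof recording that MeDShare relies on can be illustrated with a simple hash chain: each access action commits to the hash of the previous record, so altering any past entry breaks every later link. This is a toy stand-in for the blockchain and smart contracts the paper actually uses; all actor names are illustrative.

```python
import hashlib

# Toy hash-chained action log illustrating tamper-evident recording.
# Each entry's hash covers the previous entry's hash, so editing any
# past record invalidates the whole chain from that point on.

def append(log, actor, action):
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"actor": actor, "action": action, "prev": prev}
    payload = (entry["actor"] + entry["action"] + entry["prev"]).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify(log):
    prev = "0" * 64
    for e in log:
        payload = (e["actor"] + e["action"] + e["prev"]).encode()
        if e["prev"] != prev or hashlib.sha256(payload).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

log = []
append(log, "research_inst", "read record #17")
append(log, "cloud_provider", "share record #17")
print(verify(log))                        # True: chain intact
log[0]["action"] = "delete record #17"    # tampering attempt
print(verify(log))                        # False: tampering detected
```

A blockchain adds distribution and consensus on top of this basic chaining, so no single custodian can rewrite the history even with access to all records.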


Journal ArticleDOI

[...]

TL;DR: This paper streamline machine learning algorithms for effective prediction of chronic disease outbreak in disease-frequent communities by proposing a new convolutional neural network (CNN)-based multimodal disease risk prediction algorithm using structured and unstructured data from hospital.
Abstract: With big data growth in biomedical and healthcare communities, accurate analysis of medical data benefits early disease detection, patient care, and community services. However, analysis accuracy is reduced when the medical data are incomplete. Moreover, different regions exhibit unique characteristics of certain regional diseases, which may weaken the prediction of disease outbreaks. In this paper, we streamline machine learning algorithms for the effective prediction of chronic disease outbreaks in disease-frequent communities. We evaluate the modified prediction models on real-life hospital data collected from central China in 2013–2015. To overcome the difficulty of incomplete data, we use a latent factor model to reconstruct the missing data. We experiment on a regional chronic disease of cerebral infarction. We propose a new convolutional neural network (CNN)-based multimodal disease risk prediction algorithm using structured and unstructured data from the hospital. To the best of our knowledge, none of the existing work has focused on both data types in the area of medical big data analytics. Compared with several typical prediction algorithms, the prediction accuracy of our proposed algorithm reaches 94.8%, with a convergence speed faster than that of the CNN-based unimodal disease risk prediction algorithm.

512 citations
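The latent-factor idea used above to reconstruct missing hospital data can be sketched as small-scale matrix completion: factor the incomplete matrix D as U·V over the observed entries and read imputed values from the product. The dimensions, learning rate, and toy data below are hypothetical; the paper does not specify this exact model.

```python
import random

# Sketch of latent-factor imputation: fit D ≈ U·V by SGD over observed
# entries only, then impute the missing entry from the learned factors.
# Toy data follows the pattern d[i][j] = (i+1)(j+1)/4; entry (2, 3) is
# withheld, so its true pattern value 3.0 must come from the factors.

random.seed(0)
n_rows, n_cols, k = 4, 4, 2
obs = {(i, j): (i + 1) * (j + 1) / 4.0
       for i in range(n_rows) for j in range(n_cols) if (i, j) != (2, 3)}

U = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_rows)]
V = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_cols)]

def pred(i, j):
    return sum(U[i][f] * V[j][f] for f in range(k))

for _ in range(2000):                      # SGD over observed entries
    for (i, j), d in obs.items():
        err = d - pred(i, j)
        for f in range(k):
            u, v = U[i][f], V[j][f]
            U[i][f] += 0.02 * err * v
            V[j][f] += 0.02 * err * u

print(round(pred(2, 3), 2))   # imputed value; the true pattern gives 3.0
```

The same mechanics scale to real incomplete medical records, where most entries are missing and the factors capture latent patient and feature profiles.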


Journal ArticleDOI

[...]

TL;DR: Three forms of IM are investigated: spatial modulation, channel modulation and orthogonal frequency division multiplexing (OFDM) with IM, which consider the transmit antennas of a multiple-input multiple-output system, the radio frequency mirrors mounted at a transmit antenna and the subcarriers of an OFDM system for IM techniques, respectively.
Abstract: What is index modulation (IM)? This is an interesting question that we have started to hear more and more frequently over the past few years. The aim of this paper is to answer this question in a comprehensive manner by covering not only the basic principles and emerging variants of IM, but also reviewing the most recent as well as promising advances in this field toward the application scenarios foreseen in next-generation wireless networks. More specifically, we investigate three forms of IM: spatial modulation, channel modulation and orthogonal frequency division multiplexing (OFDM) with IM, which consider the transmit antennas of a multiple-input multiple-output system, the radio frequency mirrors (parasitic elements) mounted at a transmit antenna and the subcarriers of an OFDM system for IM techniques, respectively. We present the up-to-date advances in these three promising frontiers and discuss possible future research directions for IM-based schemes toward low-complexity, spectrum- and energy-efficient next-generation wireless networks.

510 citations


Journal ArticleDOI

[...]

TL;DR: The state-of-the-art of data mining and analytics is reviewed through eight unsupervised learning and ten supervised learning algorithms, as well as the application status of semi-supervised learning algorithms.
Abstract: Data mining and analytics have played an important role in knowledge discovery and decision making/support in the process industry over the past several decades. As a computational engine for data mining and analytics, machine learning serves as a basic tool for information extraction, data pattern recognition, and prediction. From the perspective of machine learning, this paper provides a review of existing data mining and analytics applications in the process industry over the past several decades. The state-of-the-art of data mining and analytics is reviewed through eight unsupervised learning and ten supervised learning algorithms, as well as the application status of semi-supervised learning algorithms. Several perspectives are highlighted and discussed for future research on data mining and analytics in the process industry.

483 citations


Journal ArticleDOI

[...]

TL;DR: The state-of-the-art research efforts directed toward big IoT data analytics are investigated, the relationship between big data analytics and IoT is explained, and several opportunities brought by data analytics in IoT paradigm are discussed.
Abstract: Voluminous amounts of data have been produced over the past decade as the miniaturization of Internet of things (IoT) devices has increased. However, such data are not useful without analytic power. Numerous big data, IoT, and analytics solutions have enabled people to obtain valuable insight into the large volumes of data generated by IoT devices. However, these solutions are still in their infancy, and the domain lacks a comprehensive survey. This paper investigates the state-of-the-art research efforts directed toward big IoT data analytics. The relationship between big data analytics and IoT is explained. Moreover, this paper adds value by proposing a new architecture for big IoT data analytics. Furthermore, big IoT data analytic types, methods, and technologies for big data mining are discussed. Numerous notable use cases are also presented. Several opportunities brought by data analytics in the IoT paradigm are then discussed. Finally, open research challenges, such as privacy, big data mining, visualization, and integration, are presented as future research directions.

467 citations


Journal ArticleDOI

[...]

TL;DR: A standard model for application in future IoT healthcare systems is proposed, and the state-of-the-art research relating to each area of the model is presented, evaluating their strengths, weaknesses, and overall suitability for a wearable IoT healthcare system.
Abstract: Internet of Things (IoT) technology has attracted much attention in recent years for its potential to alleviate the strain on healthcare systems caused by an aging population and a rise in chronic illness. Standardization is a key issue limiting progress in this area, and thus this paper proposes a standard model for application in future IoT healthcare systems. This survey paper then presents the state-of-the-art research relating to each area of the model, evaluating their strengths, weaknesses, and overall suitability for a wearable IoT healthcare system. Challenges that healthcare IoT faces, including security, privacy, wearability, and low-power operation, are presented, and recommendations are made for future research directions.

449 citations


Journal ArticleDOI

[...]

Fei Tao1, Meng Zhang1
TL;DR: A novel concept of digital twin shop-floor (DTS) based on digital twin is explored and its four key components are discussed, including physical shop-floor, virtual shop-floor, shop-floor service system, and shop-floor digital twin data.
Abstract: With the developments and applications of the new information technologies, such as cloud computing, Internet of Things, big data, and artificial intelligence, a smart manufacturing era is coming. At the same time, various national manufacturing development strategies have been put forward, such as Industry 4.0 , Industrial Internet , manufacturing based on Cyber-Physical System , and Made in China 2025 . However, one of the specific challenges in achieving smart manufacturing with these strategies is how to converge the manufacturing physical world and the virtual world, so as to realize a series of smart operations in the manufacturing process, including smart interconnection, smart interaction, smart control and management, etc. In this context, as a basic unit of manufacturing, the shop-floor is required to achieve interaction and convergence between physical and virtual spaces, which is not only an imperative demand of smart manufacturing but also its own evolving trend. Accordingly, a novel concept of digital twin shop-floor (DTS) based on digital twin is explored and its four key components are discussed, including physical shop-floor, virtual shop-floor, shop-floor service system, and shop-floor digital twin data. Moreover, the operation mechanisms and implementing methods for DTS are studied, and key technologies as well as challenges ahead are investigated, respectively.

406 citations


Journal ArticleDOI

[...]

TL;DR: This paper compiles, summarizes, and organizes machine learning challenges with Big Data, highlighting the cause–effect relationship by organizing challenges according to Big Data Vs or dimensions that instigated the issue: volume, velocity, variety, or veracity.
Abstract: The Big Data revolution promises to transform how we live, work, and think by enabling process optimization, empowering insight discovery and improving decision making. The realization of this grand potential relies on the ability to extract value from such massive data through data analytics; machine learning is at its core because of its ability to learn from data and provide data-driven insights, decisions, and predictions. However, traditional machine learning approaches were developed in a different era, and thus are based upon multiple assumptions, such as the data set fitting entirely into memory, which unfortunately no longer holds true in this new context. These broken assumptions, together with the Big Data characteristics, are creating obstacles for the traditional techniques. Consequently, this paper compiles, summarizes, and organizes machine learning challenges with Big Data. In contrast to other research that discusses challenges, this work highlights the cause–effect relationship by organizing challenges according to the Big Data Vs or dimensions that instigated the issue: volume, velocity, variety, or veracity. Moreover, emerging machine learning approaches and techniques are discussed in terms of how they are capable of handling the various challenges with the ultimate objective of helping practitioners select appropriate solutions for their use cases. Finally, a matrix relating the challenges and approaches is presented. Through this process, this paper provides a perspective on the domain, identifies research gaps and opportunities, and provides a strong foundation and encouragement for further research in the field of machine learning with Big Data.

382 citations


Journal ArticleDOI

[...]

TL;DR: This paper provides an overview of existing security and privacy concerns, particularly for fog computing, and highlights ongoing research effort, open challenges, and research trends in privacy and security issues for fog computing.
Abstract: The fog computing paradigm extends the storage, networking, and computing facilities of cloud computing toward the edge of the networks while offloading the cloud data centers and reducing service latency to the end users. However, the characteristics of fog computing give rise to new security and privacy challenges. The existing security and privacy measures for cloud computing cannot be directly applied to fog computing due to its features, such as mobility, heterogeneity, and large-scale geo-distribution. This paper provides an overview of existing security and privacy concerns, particularly for fog computing. Afterward, this survey highlights ongoing research effort, open challenges, and research trends in privacy and security issues for fog computing.

Journal ArticleDOI

[...]

TL;DR: This paper presents a digital twin architecture reference model for the cloud-based CPS, C2PS, where the model helps in identifying various degrees of basic and hybrid computation-interaction modes in this paradigm.
Abstract: Cyber-physical system (CPS) is a new trend in Internet-of-Things related research works, where physical systems act as the sensors to collect real-world information and communicate it to the computation modules (i.e. the cyber layer), which further analyze and notify the findings to the corresponding physical systems through a feedback loop. Contemporary researchers recommend integrating cloud technologies into the CPS cyber layer to ensure the scalability of storage, computation, and cross-domain communication capabilities. Though there exist a few descriptive models of the cloud-based CPS architecture, it is important to analytically describe the key CPS properties: computation, control, and communication. In this paper, we present a digital twin architecture reference model for the cloud-based CPS, C2PS, in which we analytically describe the key properties of the C2PS. The model helps in identifying various degrees of basic and hybrid computation-interaction modes in this paradigm. We have designed the C2PS smart interaction controller using a Bayesian belief network, so that the system dynamically considers current contexts. The composition of a fuzzy rule base with the Bayes network further equips the system with reconfiguration capability. We also describe analytically how C2PS subsystem communications can generate even more complex systems-of-systems. Later, we present a telematics-based prototype driving assistance application for the vehicular domain of C2PS, VCPS, to demonstrate the efficacy of the architecture reference model.

Journal ArticleDOI

[...]

TL;DR: The state-of-the-art dc microgrid technology that covers ac interfaces, architectures, possible grounding schemes, power quality issues, and communication systems is presented.
Abstract: To meet the fast-growing energy demand and, at the same time, tackle environmental concerns resulting from conventional energy sources, renewable energy sources are getting integrated in power networks to ensure reliable and affordable energy for the public and industrial sectors. However, the integration of renewable energy in the ageing electrical grids can result in new risks/challenges, such as security of supply, base load energy capacity, seasonal effects, and so on. Recent research and development in microgrids have proved that microgrids, which are fueled by renewable energy sources and managed by smart grids (use of smart sensors and smart energy management systems), can offer higher reliability and more efficient energy systems in a cost-effective manner. Further improvement in the reliability and efficiency of electrical grids can be achieved by utilizing dc distribution in microgrid systems. The dc microgrid is an attractive technology in the modern electrical grid system because of its natural interface with renewable energy sources, electric loads, and energy storage systems. In the recent past, an increase in research work has been observed in the area of dc microgrids, which brings this technology closer to practical implementation. This paper presents the state-of-the-art dc microgrid technology, covering ac interfaces, architectures, possible grounding schemes, power quality issues, and communication systems. The advantages of dc grids can be harvested in many applications to improve their reliability and efficiency. This paper also discusses the benefits and challenges of using dc grid systems in several applications. Finally, it highlights the urgent need for standardization of dc microgrid technology and presents recent updates in this area.

Journal ArticleDOI

[...]

TL;DR: A comprehensive review of the SDWSN literature is presented, which delves into some of the challenges facing this paradigm, as well as the majorSDWSN design requirements that need to be considered to address these challenges.
Abstract: Software defined networking (SDN) brings about innovation, simplicity in network management, and configuration in network computing. Traditional networks often lack the flexibility to bring into effect instant changes because of the rigidity of the network and also the over dependence on proprietary services. SDN decouples the control plane from the data plane, thus moving the control logic from the node to a central controller. A wireless sensor network (WSN) is a great platform for low-rate wireless personal area networks with little resources and short communication ranges. However, as the scale of WSN expands, it faces several challenges, such as network management and heterogeneous-node networks. The SDN approach to WSNs seeks to alleviate most of the challenges and ultimately foster efficiency and sustainability in WSNs. The fusion of these two models gives rise to a new paradigm: Software defined wireless sensor networks (SDWSN). The SDWSN model is also envisioned to play a critical role in the looming Internet of Things paradigm. This paper presents a comprehensive review of the SDWSN literature. Moreover, it delves into some of the challenges facing this paradigm, as well as the major SDWSN design requirements that need to be considered to address these challenges.

Journal ArticleDOI

[...]

TL;DR: The proposed Lightweight Privacy-preserving data aggregation scheme, called LPDA, is characterized by employing the homomorphic Paillier encryption, Chinese Remainder Theorem, and one-way hash chain techniques to not only aggregate hybrid IoT devices’ data into one, but also early filter injected false data at the network edge.
Abstract: Fog computing-enhanced Internet of Things (IoT) has recently received considerable attention, as the fog devices deployed at the network edge can not only provide low latency and location awareness but also improve real-time performance and quality of services in IoT application scenarios. Privacy-preserving data aggregation is one of the typical fog computing applications in IoT, and many privacy-preserving data aggregation schemes have been proposed in recent years. However, most of them only support data aggregation for homogeneous IoT devices, and cannot aggregate hybrid IoT devices’ data into one in some real IoT applications. To address this challenge, in this paper, we present a lightweight privacy-preserving data aggregation scheme, called Lightweight Privacy-preserving Data Aggregation (LPDA), for fog computing-enhanced IoT. The proposed LPDA is characterized by employing the homomorphic Paillier encryption, Chinese Remainder Theorem, and one-way hash chain techniques to not only aggregate hybrid IoT devices’ data into one, but also filter injected false data early at the network edge. Detailed security analysis shows LPDA is secure and privacy-enhanced with differential privacy techniques. In addition, extensive performance evaluations are conducted, and the results indicate LPDA is indeed lightweight in fog computing-enhanced IoT.
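The Chinese Remainder Theorem step that lets a scheme like LPDA pack several device types' sums into a single aggregate can be sketched directly. The sketch below omits the Paillier encryption layer entirely and uses hypothetical moduli; it only shows how pairwise-coprime moduli, each larger than any possible per-type sum, let one integer carry all the sums at once.

```python
from math import prod

# CRT packing sketch: build one integer x such that x mod m_t equals
# the plaintext sum for device type t. In LPDA this packing happens
# inside homomorphic ciphertexts; encryption is omitted here, and the
# prime moduli below are illustrative.

def crt_pack(type_sums, moduli):
    """Return x in [0, prod(moduli)) with x % m == s for each (s, m)."""
    M = prod(moduli)
    x = 0
    for s, m in zip(type_sums, moduli):
        Mi = M // m
        x += s * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse
    return x % M

moduli = [10007, 10009, 10037]           # one coprime modulus per device type
type_sums = [123, 4567, 89]              # per-type plaintext sums
packed = crt_pack(type_sums, moduli)
unpacked = [packed % m for m in moduli]  # control center recovers each sum
print(unpacked)                          # [123, 4567, 89]
```

Recovery is just a modular reduction per type, which is why a single aggregated value suffices for hybrid devices. (The three-argument `pow` with a negative exponent requires Python 3.8+.)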

Journal ArticleDOI

[...]

TL;DR: The background and state-of-the-art of the narrow-band Internet of Things (NB-IoT) are reviewed, and five intelligent applications are analyzed, including smart cities, smart buildings, intelligent environment monitoring, intelligent user services, and smart metering.
Abstract: In this paper, we review the background and state-of-the-art of the narrow-band Internet of Things (NB-IoT). We first introduce NB-IoT general background, development history, and standardization. Then, we present NB-IoT features through the review of current national and international studies on NB-IoT technology, where we focus on basic theories and key technologies, i.e., connection count analysis theory, delay analysis theory, coverage enhancement mechanism, ultra-low power consumption technology, and coupling relationship between signaling and data. Subsequently, we compare several performances of NB-IoT and other wireless and mobile communication technologies in aspects of latency, security, availability, data transmission rate, energy consumption, spectral efficiency, and coverage area. Moreover, we analyze five intelligent applications of NB-IoT, including smart cities, smart buildings, intelligent environment monitoring, intelligent user services, and smart metering. Finally, we summarize security requirements of NB-IoT, which need to be solved urgently. These discussions aim to provide a comprehensive overview of NB-IoT, which can help readers to understand clearly the scientific problems and future research directions of NB-IoT.

Journal ArticleDOI

[...]

TL;DR: This paper proposes a novel product ownership management system (POMS) of RFID-attached products for anti-counterfeits that can be used in the post supply chain and implements a proof-of-concept experimental system employing a blockchain-based decentralized application platform, Ethereum.
Abstract: For more than a decade now, radio frequency identification (RFID) technology has been quite effective in providing anti-counterfeits measures in the supply chain. However, the genuineness of RFID tags cannot be guaranteed in the post supply chain, since these tags can be rather easily cloned in the public space. In this paper, we propose a novel product ownership management system (POMS) of RFID-attached products for anti-counterfeits that can be used in the post supply chain. For this purpose, we leverage the idea of Bitcoin ’s blockchain that anyone can check the proof of possession of balance. With the proposed POMS, a customer can reject the purchase of counterfeits even with genuine RFID tag information, if the seller does not possess their ownership. We have implemented a proof-of-concept experimental system employing a blockchain-based decentralized application platform, Ethereum , and evaluated its cost performance. Results have shown that, typically, the cost of managing the ownership of a product with up to six transfers is less than U.S. $1.
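The ownership check at the heart of POMS can be illustrated with a minimal registry: a transfer succeeds only when the seller currently owns the tagged product, so a cloned tag does not help a counterfeiter who never held ownership. This in-memory class is a stand-in for the Ethereum contract; the class, method, and tag names are illustrative.

```python
# Toy ownership registry mimicking the POMS contract logic: minting
# registers a product to its manufacturer, and transfers are rejected
# unless the seller is the current recorded owner.

class OwnershipRegistry:
    def __init__(self):
        self.owner_of = {}

    def mint(self, tag_id, manufacturer):
        assert tag_id not in self.owner_of, "tag already registered"
        self.owner_of[tag_id] = manufacturer

    def transfer(self, tag_id, seller, buyer):
        if self.owner_of.get(tag_id) != seller:
            return False          # seller lacks ownership: reject the sale
        self.owner_of[tag_id] = buyer
        return True

reg = OwnershipRegistry()
reg.mint("TAG-001", "manufacturer")
print(reg.transfer("TAG-001", "manufacturer", "retailer"))   # True
# A counterfeiter clones TAG-001's data but was never its owner:
print(reg.transfer("TAG-001", "counterfeiter", "victim"))    # False
```

Putting this registry on a public blockchain is what lets any customer query ownership before buying, without trusting the seller's own records.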

Journal ArticleDOI

[...]

TL;DR: Smart augmentation works by creating a network that learns to generate augmented data during the training process of a target network in a way that reduces that network's loss, making it possible to learn augmentations that minimize the target network's error.
Abstract: A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks. There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call smart augmentation, and we show how to use it to increase accuracy and reduce overfitting on a target network. Smart augmentation works by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that network's loss. This allows us to learn augmentations that minimize the error of that network. Smart augmentation has shown the potential to increase accuracy by demonstrably significant margins on all data sets tested. In addition, it has shown the potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.
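The core data flow of smart augmentation is combining two same-class samples into a new training sample for the target network. In the paper that combination comes from a learned augmenter network trained through the target network's loss; in the sketch below a fixed blend weight stands in for that learned mapping, and the feature vectors and `alpha` value are purely illustrative.

```python
# Sketch of the sample-combination step in smart augmentation: two
# same-class samples are merged element-wise into an augmented sample.
# In the actual method a learned network produces the combination and
# is trained end-to-end to reduce the target network's loss; the fixed
# alpha here is an illustrative stand-in.

def blend(sample_a, sample_b, alpha=0.6):
    """Element-wise convex combination of two same-class samples."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(sample_a, sample_b)]

class_a_sample_1 = [0.2, 0.8, 0.5]   # toy feature vectors from one class
class_a_sample_2 = [0.4, 0.6, 0.9]

augmented = blend(class_a_sample_1, class_a_sample_2)
print([round(v, 2) for v in augmented])   # [0.28, 0.72, 0.66]
```

Because the augmented sample stays within the convex hull of its class examples, it plausibly belongs to the same class, which is what makes such generated samples useful extra training data.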

Journal ArticleDOI

[...]

TL;DR: A new NTC technique based on a combination of deep learning models, applicable to IoT traffic, provides better detection results than alternative algorithms without requiring the feature engineering that is usual when applying other models.
Abstract: A network traffic classifier (NTC) is an important part of current network monitoring systems, whose task is to infer the network service that is currently used by a communication flow (e.g., HTTP and SIP). The detection is based on a number of features associated with the communication flow, for example, source and destination ports and bytes transmitted per packet. NTC is important, because much information about a current network flow can be learned and anticipated just by knowing its network service (required latency, traffic volume, and possible duration). This is of particular interest for the management and monitoring of Internet of Things (IoT) networks, where NTC will help to segregate traffic and behavior of heterogeneous devices and services. In this paper, we present a new technique for NTC based on a combination of deep learning models that can be used for IoT traffic. We show that a recurrent neural network (RNN) combined with a convolutional neural network (CNN) provides the best detection results. The natural domain for a CNN, which is image processing, has been extended to NTC in an easy and natural way. We show that the proposed method provides better detection results than alternative algorithms without requiring any feature engineering, which is usual when applying other models. A complete study is presented on several architectures that integrate a CNN and an RNN, including the impact of the features chosen and of the length of the network flows used for training.

Journal ArticleDOI

[...]

TL;DR: This paper provides a systematic literature review, spanning more than three decades, of clustering algorithms and their applicability and usability in the context of EDM.
Abstract: Presently, educational institutions compile and store huge volumes of data, such as student enrolment and attendance records, as well as their examination results. Mining such data yields stimulating information that serves its handlers well. Rapid growth in educational data points to the fact that distilling massive amounts of data requires a more sophisticated set of algorithms. This issue led to the emergence of the field of educational data mining (EDM). Traditional data mining algorithms cannot be directly applied to educational problems, as they may have a specific objective and function. This implies that a preprocessing algorithm has to be enforced first and only then can some specific data mining methods be applied to the problems. One such preprocessing algorithm in EDM is clustering. Many studies on EDM have focused on the application of various data mining algorithms to educational attributes. Therefore, this paper provides a systematic literature review spanning more than three decades (1983–2016) on clustering algorithms and their applicability and usability in the context of EDM. Future insights are outlined based on the literature reviewed, and avenues for further research are identified.
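The clustering preprocessing step described above can be sketched with a minimal k-means over toy student records. The (attendance, exam score) data, the deterministic initial centroids, and the two-group structure are all illustrative; real EDM studies apply many clustering variants to far richer attributes.

```python
# Minimal k-means sketch of the clustering step common in EDM: group
# students by (attendance rate, exam score). Data and the deterministic
# initialization are illustrative.

def sq_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, init, iters=10):
    centroids = list(init)
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:                   # assignment step: nearest centroid
            i = min(range(len(centroids)),
                    key=lambda c: sq_dist(p, centroids[c]))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):  # update step: cluster means
            if cl:
                centroids[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centroids, clusters

students = [(0.9, 85), (0.95, 90), (0.85, 88),   # high attendance / high score
            (0.3, 40), (0.25, 35), (0.4, 45)]    # low attendance / low score
centroids, clusters = kmeans(students, init=[students[0], students[3]])
print(sorted(len(c) for c in clusters))   # [3, 3]: two clear groups
```

The resulting group labels then feed downstream EDM methods, e.g. building separate performance predictors per student cluster.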

Journal ArticleDOI

[...]

TL;DR: In this paper, the authors investigated the application of NOMA with successive interference cancellation (SIC) in downlink multiuser multiple-input multiple-output (MIMO) cellular systems, where the total number of receive antennas at user equipment (UE) ends in a cell is more than the number of transmit antennas at the BS.
Abstract: We investigate the application of non-orthogonal multiple access (NOMA) with successive interference cancellation (SIC) in downlink multiuser multiple-input multiple-output (MIMO) cellular systems, where the total number of receive antennas at user equipment (UE) ends in a cell is more than the number of transmit antennas at the base station (BS). We first dynamically group the UE receive antennas into a number of clusters equal to or more than the number of BS transmit antennas. A single beamforming vector is then shared by all the receive antennas in a cluster. We propose a linear beamforming technique in which all the receive antennas can significantly cancel the inter-cluster interference. On the other hand, the receive antennas in each cluster are scheduled on the power-domain NOMA basis with SIC at the receiver ends. For inter-cluster and intra-cluster power allocation, we provide dynamic power allocation solutions with the objective of maximizing the overall cell capacity. An extensive performance evaluation is carried out for the proposed MIMO-NOMA system and the results are compared with those for conventional orthogonal multiple access (OMA)-based MIMO systems and other existing MIMO-NOMA solutions. The numerical results quantify the capacity gain of the proposed MIMO-NOMA model over MIMO-OMA and other existing MIMO-NOMA solutions.
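The power-domain NOMA-with-SIC scheduling described above can be sketched for a two-user cluster: the weak user treats the strong user's signal as noise, while the strong user first subtracts the weak user's signal via SIC and then decodes its own interference-free. The channel gains, power split, and noise level below are illustrative; the paper's dynamic allocation solves for these quantities.

```python
import math

# Two-user power-domain NOMA rate sketch with SIC (illustrative
# normalized channel gains, total power, and noise; the paper's dynamic
# power allocation chooses a_weak to maximize cell capacity).

def noma_rates(p_total, g_strong, g_weak, a_weak, noise=1.0):
    a_strong = 1.0 - a_weak
    # Weak user decodes its own signal, treating the strong user's
    # superimposed signal as interference.
    r_weak = math.log2(1 + a_weak * p_total * g_weak /
                       (a_strong * p_total * g_weak + noise))
    # Strong user removes the weak user's signal via SIC first,
    # then decodes its own signal interference-free.
    r_strong = math.log2(1 + a_strong * p_total * g_strong / noise)
    return r_strong, r_weak

r_s, r_w = noma_rates(p_total=10.0, g_strong=4.0, g_weak=0.5, a_weak=0.8)
print(round(r_s, 2), round(r_w, 2))   # 3.17 1.58 (bits/s/Hz)
```

Giving the larger power fraction to the weak (low-gain) user is the standard NOMA choice: it keeps the weak user's rate usable while SIC lets the strong user pay almost no penalty.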

Journal ArticleDOI

[...]

TL;DR: It is pointed out that the integration of the FC and IoE paradigms may give rise to opportunities for new applications in the realms of the IoE, Smart City, Industry 4.0, and Big Data Streaming while introducing new open issues.
Abstract: Fog computing (FC) and Internet of Everything (IoE) are two emerging technological paradigms that, to date, have been considered stand-alone. However, because of their complementary features, we expect that their integration can foster a number of computing- and network-intensive pervasive applications under the incoming realm of the future Internet. Motivated by this consideration, the goal of this position paper is fivefold. First, we review the technological attributes and platforms proposed in the current literature for the stand-alone FC and IoE paradigms. Second, by leveraging some use cases as illustrative examples, we point out that the integration of the FC and IoE paradigms may give rise to opportunities for new applications in the realms of the IoE, Smart City, Industry 4.0, and Big Data Streaming, while introducing new open issues. Third, we propose a novel technological paradigm, the Fog of Everything (FoE) paradigm, that integrates FC and IoE, and then we detail the main building blocks and services of the corresponding technological platform and protocol stack. Fourth, as a proof of concept, we present the simulated energy-delay performance of a small-scale FoE prototype, namely, the V-FoE prototype, and compare it with that of a benchmark technological platform, the V-D2D one, which exploits only device-to-device links to establish inter-thing “ad hoc” communication. Last, we position the proposed FoE paradigm with respect to a spectrum of seemingly related recent research projects.

Journal ArticleDOI

[...]

TL;DR: An overview of the architecture of the LEO satellite constellation-based IoT is provided, covering the following topics: LEO satellite constellation structure, efficient spectrum allocation, heterogeneous network compatibility, and access and routing protocols.
Abstract: Internet of Things (IoT) is one of the evolutionary directions of the Internet. This paper focuses on low earth orbit (LEO) satellite constellation-based IoT services because of their irreplaceable functions. In many special applications, IoT devices are distributed in remote areas (e.g., desert, ocean, and forest) or placed in extreme terrain, where they cannot have direct terrestrial network access and can only be covered by satellite. Compared with traditional geostationary earth orbit (GEO) systems, a LEO satellite constellation has the advantages of low propagation delay, small propagation loss, and global coverage. Furthermore, revisions of existing IoT protocols are necessary to enhance the compatibility of the LEO satellite constellation-based IoT with terrestrial IoT systems. In this paper, we provide an overview of the architecture of the LEO satellite constellation-based IoT, including the following topics: LEO satellite constellation structure, efficient spectrum allocation, heterogeneous network compatibility, and access and routing protocols.
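The propagation-delay advantage of LEO over GEO quoted above is easy to quantify. The sketch below computes the idealized one-way delay for a satellite directly overhead; the altitudes are nominal values we chose, not figures from the paper.

```python
C = 299_792_458  # speed of light in vacuum, m/s

def one_way_delay_ms(altitude_km):
    """Idealized one-way propagation delay (ms) for a satellite at
    zenith, ignoring slant range, atmosphere, and processing delays."""
    return altitude_km * 1_000 / C * 1_000

geo_ms = one_way_delay_ms(35_786)  # geostationary altitude
leo_ms = one_way_delay_ms(550)     # a typical LEO altitude
```

This gives roughly 119 ms one-way for GEO versus under 2 ms for a 550 km LEO satellite — about a 60x difference before any link-layer overhead is counted.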

Journal ArticleDOI

[...]

TL;DR: A conceptual smart pre-copy live migration approach is presented for VM migration that can estimate the downtime after each iteration to determine whether to proceed to the stop-and-copy stage during a system failure or an attack on a fog computing node.
Abstract: Fog computing, an extension of cloud computing services to the edge of the network to decrease latency and network congestion, is a relatively recent research trend. Although both cloud and fog offer similar resources and services, the latter is characterized by low latency with a wider spread and geographically distributed nodes to support mobility and real-time interaction. In this paper, we describe the fog computing architecture and review its different services and applications. We then discuss security and privacy issues in fog computing, focusing on service and resource availability. Virtualization is a vital technology in both fog and cloud computing that enables virtual machines (VMs) to coexist in a physical server (host) to share resources. These VMs could be subject to malicious attacks, or the physical server hosting them could experience a system failure, either of which results in the unavailability of services and resources. Therefore, a conceptual smart pre-copy live migration approach is presented for VM migration. Using this approach, we can estimate the downtime after each iteration to determine whether to proceed to the stop-and-copy stage during a system failure or an attack on a fog computing node. This minimizes both the downtime and the migration time to guarantee resource and service availability to the end users of fog computing. Last, future research directions are outlined.
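The per-iteration downtime estimate and stop-and-copy decision described above can be sketched as a simple loop. This is a generic pre-copy model with made-up page counts and rates, not the authors' smart migration algorithm.

```python
def precopy_migration(total_pages, dirty_rate, bandwidth,
                      downtime_target, max_rounds=30):
    """Generic pre-copy loop: each round re-sends the pages dirtied
    while the previous round was in flight; the estimated downtime is
    the time needed to flush the remaining dirty set during the final
    stop-and-copy stage. Rates are in pages per second."""
    to_send = total_pages
    rounds = 0
    est_downtime = to_send / bandwidth
    while rounds < max_rounds:
        transfer_time = to_send / bandwidth
        rounds += 1
        # Pages dirtied while this round was being transferred.
        to_send = min(total_pages, dirty_rate * transfer_time)
        est_downtime = to_send / bandwidth
        if est_downtime <= downtime_target:
            break  # proceed to stop-and-copy
    return rounds, est_downtime
```

For instance, with 10,000 pages, a dirty rate of 100 pages/s, and 1,000 pages/s of bandwidth, three rounds push the estimated downtime below a 50 ms target; if the dirty rate exceeded the bandwidth, the loop would hit `max_rounds` instead, which is why a migration decision needs the per-iteration estimate.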

Journal ArticleDOI

[...]

TL;DR: This survey discusses advances in tracking and registration, since their functionality is crucial to any MAR application, as well as the network connectivity of the devices that run MAR applications and its importance to application performance.
Abstract: The boom in the capabilities and features of mobile devices, like smartphones, tablets, and wearables, combined with ubiquitous and affordable Internet access and advances in the areas of cooperative networking, computer vision, and mobile cloud computing, has transformed mobile augmented reality (MAR) from science fiction into a reality. Although mobile devices are computationally more constrained than traditional computers, they have a multitude of sensors that can be used for the development of more sophisticated MAR applications, and they can be assisted by remote servers for the execution of their intensive parts. In this paper, after introducing the reader to the basics of MAR, we present a categorization of the application fields together with some representative examples. Next, we introduce the reader to the user interface and experience in MAR applications and continue with the core system components of MAR systems. After that, we discuss advances in tracking and registration, since their functionality is crucial to any MAR application, as well as the network connectivity of the devices that run MAR applications and its importance to application performance. We continue with the importance of data management in MAR systems and with system performance and sustainability, and before we conclude this survey, we present existing challenging problems.

Journal Article

[...]

TL;DR: In this paper, an orthogonal AMP (OAMP) algorithm based on de-correlated linear estimation (LE) and divergence-free non-linear estimation (NLE) is proposed.
Abstract: Approximate message passing (AMP) is a low-cost iterative signal recovery algorithm for linear system models. When the system transform matrix has independent identically distributed (IID) Gaussian entries, the performance of AMP can be asymptotically characterized by a simple scalar recursion called state evolution (SE). However, SE may become unreliable for other matrix ensembles, especially for ill-conditioned ones. This imposes limits on the applications of AMP. In this paper, we propose an orthogonal AMP (OAMP) algorithm based on de-correlated linear estimation (LE) and divergence-free non-linear estimation (NLE). The Onsager term in standard AMP vanishes as a result of the divergence-free constraint on NLE. We develop an SE procedure for OAMP and show numerically that the SE for OAMP is accurate for general unitarily-invariant matrices, including IID Gaussian matrices and partial orthogonal matrices. We further derive optimized options for OAMP and show that the corresponding SE fixed point coincides with the optimal performance obtained via the replica method. Our numerical results demonstrate that OAMP can be advantageous over AMP, especially for ill-conditioned matrices.
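The two-stage structure the abstract describes can be summarized as follows. This is our paraphrase in generic AMP notation for recovering $x$ from $y = Ax + n$; the symbols $s_t$, $r_t$, $W_t$, $\eta_t$ are our own labels and not necessarily the paper's exact equations.

```latex
% LE with a de-correlated matrix, NLE with a divergence-free denoiser.
\begin{aligned}
\text{LE:}\quad  & r_t = s_t + W_t\,(y - A s_t),
  \qquad \tfrac{1}{N}\operatorname{tr}(W_t A) = 1
  \quad\text{(de-correlated)},\\[2pt]
\text{NLE:}\quad & s_{t+1} = \eta_t(r_t),
  \qquad \eta_t(r) = C_t\!\left(\hat{\eta}_t(r)
    - \langle \hat{\eta}_t' \rangle\, r\right)
  \quad\text{(divergence-free: } \langle \eta_t' \rangle = 0\text{)}.
\end{aligned}
```

Because the denoiser's average divergence is forced to zero, the Onsager correction term of standard AMP vanishes, which is the property the abstract refers to.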

Journal ArticleDOI

[...]

TL;DR: The aim of this paper is to review literature on data fusion for IoT with a particular focus on mathematical methods (including probabilistic methods, artificial intelligence, and theory of belief) and specific IoT environments (distributed, heterogeneous, nonlinear, and object tracking environments).
Abstract: The Internet of Things (IoT) is set to become one of the key technological developments of our times provided we are able to realize its full potential. The number of objects connected to IoT is expected to reach 50 billion by 2020 due to the massive influx of diverse objects emerging progressively. IoT, hence, is expected to be a major producer of big data. Sharing and collaboration of data and other resources would be the key for enabling sustainable ubiquitous environments, such as smart cities and societies. A timely fusion and analysis of big data, acquired from IoT and other sources, to enable highly efficient, reliable, and accurate decision making and management of ubiquitous environments would be a grand future challenge. Computational intelligence would play a key role in this challenge. A number of surveys exist on data fusion. However, these are mainly focused on specific application areas or classifications. The aim of this paper is to review literature on data fusion for IoT with a particular focus on mathematical methods (including probabilistic methods, artificial intelligence, and theory of belief) and specific IoT environments (distributed, heterogeneous, nonlinear, and object tracking environments). The opportunities and challenges for each of the mathematical methods and environments are given. Future developments, including emerging areas that would intrinsically benefit from data fusion and IoT, autonomous vehicles, deep learning for data fusion, and smart cities, are discussed.
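Among the probabilistic methods such a review covers, the simplest building block is inverse-variance weighting of independent estimates. The sketch below shows this scalar case; the sensor readings are invented numbers for illustration only.

```python
def fuse(estimates):
    """Inverse-variance (precision-weighted) fusion of independent
    noisy estimates of the same quantity -- the scalar core of
    probabilistic data fusion methods such as the Kalman update.
    `estimates` is a list of (value, variance) pairs."""
    precision = sum(1.0 / v for _, v in estimates)
    value = sum(x / v for x, v in estimates) / precision
    return value, 1.0 / precision

# Two sensors reading the same temperature: the less noisy sensor
# dominates, and the fused variance is below either input variance.
fused_value, fused_var = fuse([(10.0, 4.0), (12.0, 1.0)])
```

Note the fused variance (0.8) is smaller than that of either sensor alone — the formal sense in which fusing IoT data sources yields a more reliable estimate than any single source.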

Journal ArticleDOI

[...]

TL;DR: This work provides a point of departure for future researchers who will be required to solve the problem of wireless convergence by presenting the applications, topologies, levels of system integration, the current state of the art, and outlines of future information-centric systems.
Abstract: Wireless media, such as RF, optical, or acoustical, provide finite resources for the purposes of remote sensing (such as radar) and data communications. Often, these two functions are at odds with one another and compete for these resources. Applications for wireless technology are growing rapidly, and RF convergence is already presenting itself as a requirement as both consumer and military system requirements evolve. The broad solution space of this complex problem encompasses the cooperation or codesign of systems with both sensing and communications functions. By jointly considering the systems during the design phase, rather than perpetuating a notion of mutual interference, the performance of both systems can be improved. We provide a point of departure for future researchers who will be required to solve this problem by presenting the applications, topologies, levels of system integration, the current state of the art, and outlines of future information-centric systems.

Journal ArticleDOI

[...]

TL;DR: A new signature-based authenticated key establishment scheme for the IoT environment is presented; it provides more functionality features, and its computational and communication costs are comparable with those of other existing approaches.
Abstract: Internet of Things (IoT) is a network of all devices that can be accessed through the Internet. These devices can be remotely accessed and controlled using the existing network infrastructure, thus allowing a direct integration of computing systems with the physical world. This also reduces human involvement along with improving accuracy and efficiency, resulting in economic benefit. The devices in IoT facilitate people's day-to-day lives. However, the IoT faces enormous security and privacy threats due to its heterogeneous and dynamic nature. Authentication is one of the most challenging security requirements in the IoT environment, where a user (external party) can directly access information from the devices, provided that mutual authentication between the user and the devices takes place. In this paper, we present a new signature-based authenticated key establishment scheme for the IoT environment. The proposed scheme is tested for security with the help of the widely used Burrows–Abadi–Needham logic, an informal security analysis, and a formal security verification using the broadly accepted Automated Validation of Internet Security Protocols and Applications (AVISPA) tool. The proposed scheme is also implemented using the widely accepted NS2 simulator, and the simulation results demonstrate its practicability. Finally, the proposed scheme provides more functionality features, and its computational and communication costs are comparable with those of other existing approaches.
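The general sign-then-verify-then-derive pattern behind signature-based key establishment can be sketched with a deliberately insecure textbook-RSA toy. The key sizes, names, and message flow here are our own illustration and do not reproduce the paper's scheme or its BAN-logic/AVISPA analysis; both parties also share one toy key pair purely to keep the sketch short.

```python
import hashlib

# Textbook RSA over tiny primes -- insecure by design, used only to
# show the sign/verify handshake pattern. P, Q, E, D are toy values.
P, Q = 61, 53
N = P * Q            # modulus (3233)
E = 17               # public exponent
D = 413              # private exponent: E*D == 1 (mod lcm(P-1, Q-1))

def h(msg: bytes) -> int:
    """Hash reduced into the RSA modulus."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(msg: bytes) -> int:
    return pow(h(msg), D, N)

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, E, N) == h(msg)

def establish_key(nonce_user: bytes, nonce_device: bytes):
    """Each party signs its nonce; if both signatures verify, both
    sides derive the same session key from the exchanged nonces."""
    sig_u, sig_d = sign(nonce_user), sign(nonce_device)
    if not (verify(nonce_user, sig_u) and verify(nonce_device, sig_d)):
        return None  # mutual authentication failed
    return hashlib.sha256(nonce_user + nonce_device).hexdigest()
```

A real deployment would give each party its own certified key pair and use a standardized signature over an elliptic curve; the point of the toy is only the flow: sign nonces, verify both signatures, then derive the shared session key.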

Journal ArticleDOI

[...]

TL;DR: A review of the main safety systems that have been proposed and applied in industrial robotic environments that contribute to the achievement of safe collaborative human–robot work is presented.
Abstract: After many years of rigid conventional production procedures, industrial manufacturing is going through a process of change toward flexible and intelligent manufacturing, the so-called Industry 4.0. In this context, human–robot collaboration has an important role in smart factories, since it contributes to the achievement of higher productivity and greater efficiency. However, this evolution means breaking with established safety procedures, as the separation of workspaces between robot and human is removed. These changes have been reflected in safety standards related to industrial robotics over the last decade and have led to the development of a wide field of research focusing on the prevention of human–robot impacts and/or the minimization of related risks or their consequences. This paper presents a review of the main safety systems that have been proposed and applied in industrial robotic environments and that contribute to the achievement of safe collaborative human–robot work. Additionally, a review is provided of the current regulations along with new concepts that have been introduced in them. The discussion presented in this paper includes multi-disciplinary approaches, such as techniques for the estimation and evaluation of injuries in human–robot collisions, mechanical and software devices designed to minimize the consequences of human–robot impact, impact detection systems, and strategies to prevent collisions or minimize their consequences when they occur.