
Showing papers by Beijing University of Posts and Telecommunications, published in 2019


Proceedings ArticleDOI
13 May 2019
TL;DR: A heterogeneous graph neural network based on hierarchical attention, including node-level and semantic-level attentions, is proposed; it generates node embeddings by aggregating features from meta-path based neighbors in a hierarchical manner.
Abstract: Graph neural network, as a powerful graph representation technique based on deep learning, has shown superior performance and attracted considerable research interest. However, heterogeneous graphs, which contain different types of nodes and links, have not been fully considered in graph neural network research. The heterogeneity and rich semantic information bring great challenges for designing a graph neural network for heterogeneous graphs. Recently, one of the most exciting advancements in deep learning is the attention mechanism, whose great potential has been well demonstrated in various areas. In this paper, we first propose a novel heterogeneous graph neural network based on hierarchical attention, including node-level and semantic-level attentions. Specifically, the node-level attention aims to learn the importance between a node and its meta-path based neighbors, while the semantic-level attention is able to learn the importance of different meta-paths. With the importance learned at both the node level and the semantic level, the importance of nodes and meta-paths can be fully considered. The proposed model can then generate node embeddings by aggregating features from meta-path based neighbors in a hierarchical manner. Extensive experimental results on three real-world heterogeneous graphs not only show the superior performance of our proposed model over state-of-the-art methods, but also demonstrate its potentially good interpretability for graph analysis.
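The two-level hierarchical aggregation described in the abstract can be sketched with NumPy as below; the toy feature vectors and the attention parameters `w` and `q` are illustrative assumptions, not the paper's learned weights.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def node_level_attention(h_node, h_neighbors, w):
    # Score each meta-path based neighbor against the target node,
    # then aggregate neighbor features by the normalized weights.
    scores = np.array([w @ np.concatenate([h_node, h_nb]) for h_nb in h_neighbors])
    alpha = softmax(scores)
    return alpha @ h_neighbors  # meta-path specific node embedding

def semantic_level_attention(path_embeddings, q):
    # Weight each meta-path specific embedding by the learned
    # importance of its meta-path, then fuse them.
    z = np.stack(path_embeddings)
    beta = softmax(z @ q)
    return beta @ z  # final node embedding
```

Stacking the two functions gives the hierarchical, two-stage aggregation the abstract describes: node-level attention per meta-path, then semantic-level fusion across meta-paths.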

1,467 citations


Journal ArticleDOI
TL;DR: A novel heterogeneous network embedding based approach for HIN based recommendation, called HERec, is proposed; experiments show its capability on the cold-start problem and reveal that the transformed embedding information from HINs can improve recommendation performance.
Abstract: Due to the flexibility in modelling data heterogeneity, heterogeneous information network (HIN) has been adopted to characterize complex and heterogeneous auxiliary data in recommender systems, called HIN based recommendation. It is challenging to develop effective methods for HIN based recommendation in both extracting and exploiting the information in HINs. Most HIN based recommendation methods rely on path based similarity, which cannot fully mine latent structure features of users and items. In this paper, we propose a novel heterogeneous network embedding based approach for HIN based recommendation, called HERec. To embed HINs, we design a meta-path based random walk strategy to generate meaningful node sequences for network embedding. The learned node embeddings are first transformed by a set of fusion functions, and subsequently integrated into an extended matrix factorization (MF) model. The extended MF model together with the fusion functions is jointly optimized for the rating prediction task. Extensive experiments on three real-world datasets demonstrate the effectiveness of the HERec model. Moreover, we show the capability of the HERec model for the cold-start problem, and reveal that the transformed embedding information from HINs can improve the recommendation performance.
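The meta-path based random walk strategy can be sketched as follows; the toy user-movie graph, the node-type tags, and the `U-M` meta-path are illustrative assumptions:

```python
import random

def metapath_random_walk(neighbors, node_type, start, metapath, walk_len, rng=random):
    # Walk that only steps to neighbors whose type matches the next
    # slot of the (cyclically repeated) meta-path, so the generated
    # node sequences carry the meta-path's semantics.
    walk = [start]
    pos = 0  # position of the start node's type in the meta-path
    while len(walk) < walk_len:
        nxt_type = metapath[(pos + 1) % len(metapath)]
        candidates = [n for n in neighbors[walk[-1]] if node_type[n] == nxt_type]
        if not candidates:
            break
        walk.append(rng.choice(candidates))
        pos += 1
    return walk
```

The resulting node sequences can then be fed to a skip-gram style model to learn the node embeddings that HERec fuses into its extended MF model.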

768 citations


Journal ArticleDOI
TL;DR: This paper constitutes the first holistic tutorial on the development of ANN-based ML techniques tailored to the needs of future wireless networks and overviews how artificial neural networks (ANNs)-based ML algorithms can be employed for solving various wireless networking problems.
Abstract: In order to effectively provide ultra reliable low latency communications and pervasive connectivity for Internet of Things (IoT) devices, next-generation wireless networks can leverage intelligent, data-driven functions enabled by the integration of machine learning (ML) notions across the wireless core and edge infrastructure. In this context, this paper provides a comprehensive tutorial that overviews how artificial neural network (ANN)-based ML algorithms can be employed for solving various wireless networking problems. For this purpose, we first present a detailed overview of a number of key types of ANNs, including recurrent, spiking, and deep neural networks, that are pertinent to wireless networking applications. For each type of ANN, we present the basic architecture as well as specific examples that are particularly important and relevant to wireless network design. Such ANN examples include echo state networks, liquid state machines, and long short-term memory. Then, we provide an in-depth overview of the variety of wireless communication problems that can be addressed using ANNs, ranging from communication using unmanned aerial vehicles to virtual reality applications over wireless networks as well as edge computing and caching. For each individual application, we present the main motivation for using ANNs along with the associated challenges, and we also provide a detailed example for a use case scenario and outline future works that can be addressed using ANNs. In a nutshell, this paper constitutes the first holistic tutorial on the development of ANN-based ML techniques tailored to the needs of future wireless networks.

666 citations


Journal ArticleDOI
TL;DR: Simulation results reveal that the proposed system is effective and feasible in collecting, calculating, and storing trust values in vehicular networks.
Abstract: Vehicular networks enable vehicles to generate and broadcast messages in order to improve traffic safety and efficiency. However, due to the nontrusted environments, it is difficult for vehicles to evaluate the credibilities of received messages. In this paper, we propose a decentralized trust management system in vehicular networks based on blockchain techniques. In this system, vehicles can validate the received messages from neighboring vehicles using a Bayesian Inference Model. Based on the validation result, the vehicle will generate a rating for each message source vehicle. With the ratings uploaded from vehicles, roadside units (RSUs) calculate the trust value offsets of involved vehicles and pack these data into a “block.” Then, each RSU will try to add its “block” to the trust blockchain, which is maintained by all the RSUs. By employing the joint proof-of-work (PoW) and proof-of-stake (PoS) consensus mechanism, the greater the total offset value (stake) in a block, the more easily its RSU can find the nonce for the hash function (PoW). In this way, all RSUs collaboratively maintain an updated, reliable, and consistent trust blockchain. Simulation results reveal that the proposed system is effective and feasible in collecting, calculating, and storing trust values in vehicular networks.
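The joint PoW/PoS rule — the larger the total offset value (stake) packed into a block, the easier its hash puzzle — can be sketched as below. The concrete difficulty schedule (20 base bits, 2 bits eased per unit of stake) is an illustrative assumption, not the paper's parameterization.

```python
import hashlib

def mine_block(block_data, stake, base_bits=20, easing_bits_per_stake=2,
               max_nonce=5_000_000):
    # Find a nonce whose SHA-256 hash clears a stake-dependent target.
    # More stake -> fewer required leading zero bits -> a valid nonce
    # is found sooner, as in the joint PoW/PoS mechanism described above.
    difficulty = max(1, base_bits - easing_bits_per_stake * stake)
    target = 2 ** (256 - difficulty)
    for nonce in range(max_nonce):
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, difficulty
    raise RuntimeError("no nonce found within max_nonce")
```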

650 citations


Book ChapterDOI
Matej Kristan1, Ales Leonardis2, Jiří Matas3, Michael Felsberg4, +155 more (47 institutions)
23 Jan 2019
TL;DR: The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative; results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in the recent years.
Abstract: The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in the recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis and a “real-time” experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking subchallenge has been introduced to the set of standard VOT sub-challenges. The new subchallenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both standard short-term and the new long-term tracking subchallenges. Performance of the tested trackers typically by far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net).

639 citations


Posted Content
TL;DR: An Adaptive Training Sample Selection (ATSS) method is proposed to automatically select positive and negative samples according to statistical characteristics of objects; it significantly improves the performance of anchor-based and anchor-free detectors and bridges the gap between them.
Abstract: Object detection has been dominated by anchor-based detectors for several years. Recently, anchor-free detectors have become popular due to the proposal of FPN and Focal Loss. In this paper, we first point out that the essential difference between anchor-based and anchor-free detection is actually how to define positive and negative training samples, which leads to the performance gap between them. If they adopt the same definition of positive and negative samples during training, there is no obvious difference in the final performance, regardless of whether they regress from a box or a point. This shows that how to select positive and negative training samples is important for current object detectors. Then, we propose an Adaptive Training Sample Selection (ATSS) to automatically select positive and negative samples according to statistical characteristics of objects. It significantly improves the performance of anchor-based and anchor-free detectors and bridges the gap between them. Finally, we discuss the necessity of tiling multiple anchors per location on the image to detect objects. Extensive experiments conducted on MS COCO support our aforementioned analysis and conclusions. With the newly introduced ATSS, we improve state-of-the-art detectors by a large margin to 50.7% AP without introducing any overhead. The code is available at this https URL
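The statistical rule at the core of ATSS — a candidate anchor is positive when its IoU with the object is at least the mean plus one standard deviation of the candidate anchors' IoUs — can be sketched as below. This sketch omits the full method's per-pyramid-level top-k candidate picking and the center-inside-object check.

```python
import numpy as np

def atss_positive_mask(candidate_ious):
    # Per-object adaptive threshold: mean + standard deviation of the
    # candidate anchors' IoUs with that object. Anchors at or above the
    # threshold become positive training samples.
    ious = np.asarray(candidate_ious, dtype=float)
    threshold = ious.mean() + ious.std()
    return ious >= threshold
```

Because the threshold adapts per object, well-covered objects demand high IoU from their positives while hard, poorly-covered objects still receive some.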

564 citations


Proceedings ArticleDOI
25 Jul 2019
TL;DR: The core idea is to capture the normal patterns of multivariate time series by learning their robust representations with key techniques such as stochastic variable connection and planar normalizing flow, reconstruct input data by the representations, and use the reconstruction probabilities to determine anomalies.
Abstract: Industry devices (i.e., entities) such as server machines, spacecrafts, engines, etc., are typically monitored with multivariate time series, whose anomaly detection is critical for an entity's service quality management. However, due to the complex temporal dependence and stochasticity of multivariate time series, their anomaly detection remains a big challenge. This paper proposes OmniAnomaly, a stochastic recurrent neural network for multivariate time series anomaly detection that works robustly for various devices. Its core idea is to capture the normal patterns of multivariate time series by learning their robust representations with key techniques such as stochastic variable connection and planar normalizing flow, reconstruct input data from the representations, and use the reconstruction probabilities to determine anomalies. Moreover, for a detected entity anomaly, OmniAnomaly can provide interpretations based on the reconstruction probabilities of its constituent univariate time series. The evaluation experiments are conducted on two public datasets from aerospace and a new server machine dataset (collected and released by us) from an Internet company. OmniAnomaly achieves an overall F1-Score of 0.86 on three real-world datasets, significantly outperforming the best performing baseline method by 0.09. The interpretation accuracy for OmniAnomaly is up to 0.89.
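The decision and interpretation steps described above can be sketched as follows: an observation is flagged when its reconstruction probability (log-likelihood) under the learned model falls below a threshold, and the worst-reconstructed constituent series explain the anomaly. The scores and threshold below are illustrative stand-ins for the model's actual outputs.

```python
import numpy as np

def flag_anomalies(recon_log_prob, threshold):
    # Low reconstruction log-probability => the observation deviates
    # from the learned normal patterns and is flagged as anomalous.
    scores = np.asarray(recon_log_prob, dtype=float)
    return np.flatnonzero(scores < threshold)

def interpret(per_series_log_prob):
    # Rank constituent univariate series from worst- to best-reconstructed;
    # the leading ones serve as the interpretation of the entity anomaly.
    return np.argsort(np.asarray(per_series_log_prob, dtype=float))
```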

541 citations


Journal ArticleDOI
TL;DR: A comprehensive survey on the literature involving blockchain technology applied to smart cities, from the perspectives of smart citizen, smart healthcare, smart grid, smart transportation, supply chain management, and others is provided.
Abstract: In recent years, the rapid urbanization of the world's population has caused many economic, social, and environmental problems, which significantly affect people's living conditions and quality of life. The concept of “smart city” brings opportunities to solve these urban problems. The objectives of smart cities are to make the best use of public resources, provide high-quality services to the citizens, and improve people's quality of life. Information and communication technology plays an important role in the implementation of smart cities. Blockchain as an emerging technology has many good features, such as trust-free operation, transparency, pseudonymity, democracy, automation, decentralization, and security. These features of blockchain are helpful to improve smart city services and promote the development of smart cities. In this paper, we provide a comprehensive survey on the literature involving blockchain technology applied to smart cities. First, the related works and background knowledge are introduced. Then, we review how blockchain technology is applied in the realm of smart cities, from the perspectives of smart citizen, smart healthcare, smart grid, smart transportation, supply chain management, and others. Finally, some challenges and broader perspectives are discussed.

472 citations


Journal ArticleDOI
TL;DR: This paper provides a comprehensive survey on the literature involving machine learning algorithms applied to SDN, from the perspective of traffic classification, routing optimization, quality of service/quality of experience prediction, resource management and security.
Abstract: In recent years, with the rapid development of current Internet and mobile communication technologies, the infrastructure, devices and resources in networking systems are becoming more complex and heterogeneous. In order to efficiently organize, manage, maintain and optimize networking systems, more intelligence needs to be deployed. However, due to the inherently distributed nature of traditional networks, machine learning techniques are hard to apply and deploy for controlling and operating networks. Software defined networking (SDN) brings us new chances to provide intelligence inside the networks. The capabilities of SDN (e.g., logically centralized control, global view of the network, software-based traffic analysis, and dynamic updating of forwarding rules) make it easier to apply machine learning techniques. In this paper, we provide a comprehensive survey on the literature involving machine learning algorithms applied to SDN. First, the related works and background knowledge are introduced. Then, we present an overview of machine learning algorithms. In addition, we review how machine learning algorithms are applied in the realm of SDN, from the perspectives of traffic classification, routing optimization, quality of service/quality of experience prediction, resource management and security. Finally, challenges and broader perspectives are discussed.

436 citations


Journal ArticleDOI
TL;DR: A new deep locality-preserving convolutional neural network (DLP-CNN) method that aims to enhance the discriminative power of deep features by preserving the locality closeness while maximizing the inter-class scatter is proposed.
Abstract: Facial expression is central to human experience, but most previous databases and studies are limited to posed facial behavior under controlled conditions. In this paper, we present a novel facial expression database, Real-world Affective Face Database (RAF-DB), which contains approximately 30,000 facial images with uncontrolled poses and illumination from thousands of individuals of diverse ages and races. During the crowdsourcing annotation, each image is independently labeled by approximately 40 annotators. An expectation-maximization algorithm is developed to reliably estimate the emotion labels, which reveals that real-world faces often express compound or even mixture emotions. A cross-database study between RAF-DB and the CK+ database further indicates that the action units of real-world emotions are much more diverse than, or even deviate from, those of laboratory-controlled emotions. To address the recognition of multi-modal expressions in the wild, we propose a new deep locality-preserving convolutional neural network (DLP-CNN) method that aims to enhance the discriminative power of deep features by preserving the locality closeness while maximizing the inter-class scatter. Benchmark experiments on 7-class basic expressions and 11-class compound expressions, as well as additional experiments on the CK+, MMI, and SFEW 2.0 databases, show that the proposed DLP-CNN outperforms the state-of-the-art handcrafted features and deep learning-based methods for expression recognition in the wild. To promote further study, we have made the RAF database, benchmarks, and descriptor encodings publicly available to the research community.

429 citations


Posted Content
TL;DR: Simulation results show that the proposed joint federated learning and communication framework can improve the identification accuracy by up to 1.4%, 3.5% and 4.1%, respectively, compared to an optimal user selection algorithm with random resource allocation and a wireless optimization algorithm that minimizes the sum packet error rates of all users while being agnostic to the FL parameters.
Abstract: In this paper, the problem of training federated learning (FL) algorithms over a realistic wireless network is studied. In particular, in the considered model, wireless users execute an FL algorithm while training their local FL models using their own data and transmitting the trained local FL models to a base station (BS) that will generate a global FL model and send it back to the users. Since all training parameters are transmitted over wireless links, the quality of the training will be affected by wireless factors such as packet errors and the availability of wireless resources. Meanwhile, due to the limited wireless bandwidth, the BS must select an appropriate subset of users to execute the FL algorithm so as to build a global FL model accurately. This joint learning, wireless resource allocation, and user selection problem is formulated as an optimization problem whose goal is to minimize an FL loss function that captures the performance of the FL algorithm. To address this problem, a closed-form expression for the expected convergence rate of the FL algorithm is first derived to quantify the impact of wireless factors on FL. Then, based on the expected convergence rate of the FL algorithm, the optimal transmit power for each user is derived, under a given user selection and uplink resource block (RB) allocation scheme. Finally, the user selection and uplink RB allocation is optimized so as to minimize the FL loss function. Simulation results show that the proposed joint federated learning and communication framework can reduce the FL loss function value by up to 10% and 16%, respectively, compared to: 1) An optimal user selection algorithm with random resource allocation and 2) a standard FL algorithm with random user selection and resource allocation.
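A toy sketch of the training loop described above — selected users train locally, uploads may be lost to wireless packet errors, and the BS averages whatever arrives — assuming a simple linear-regression model and hypothetical delivery flags in place of the paper's wireless channel model:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    # One user's local FL step: gradient descent on its own data
    # for a linear-regression loss ||Xw - y||^2 / n.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, users, selected, delivered):
    # The BS aggregates only the selected users whose trained local
    # models survived the wireless link (no packet error).
    updates = [local_update(w_global, X, y)
               for i, (X, y) in enumerate(users)
               if i in selected and delivered[i]]
    return np.mean(updates, axis=0) if updates else w_global
```

User selection and the delivery flags are exactly the levers the paper optimizes jointly with uplink resource allocation; here they are simply given as inputs.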

Proceedings ArticleDOI
Matej Kristan1, Amanda Berg2, Linyu Zheng3, Litu Rout4, +176 more (43 institutions)
01 Oct 2019
TL;DR: The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative; results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in the recent years.
Abstract: The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in the recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis as well as the standard VOT methodology for long-term tracking analysis. The VOT2019 challenge was composed of five challenges focusing on different tracking domains: (i) VOT-ST2019 challenge focused on short-term tracking in RGB, (ii) VOT-RT2019 challenge focused on "real-time" short-term tracking in RGB, (iii) VOT-LT2019 focused on long-term tracking, namely coping with target disappearance and reappearance. Two new challenges have been introduced: (iv) VOT-RGBT2019 challenge focused on short-term tracking in RGB and thermal imagery and (v) VOT-RGBD2019 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2019, VOT-RT2019 and VOT-LT2019 datasets were refreshed while new datasets were introduced for VOT-RGBT2019 and VOT-RGBD2019. The VOT toolkit has been updated to support both standard short-term and long-term tracking as well as tracking with multi-channel imagery. Performance of the tested trackers typically by far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.

Journal ArticleDOI
TL;DR: Blockchain technologies that can potentially address the critical challenges arising from the IoT, and hence suit IoT applications, are identified, and potential adaptations and enhancements to blockchain consensus protocols and data structures are elaborated.

Journal ArticleDOI
TL;DR: In this paper, the authors present a survey of the recent advances of ML in wireless communication, classified as: resource management in the MAC layer, networking and mobility management in the network layer, and localization in the application layer.
Abstract: As a key technique for enabling artificial intelligence, machine learning (ML) is capable of solving complex problems without explicit programming. Motivated by its successful applications to many practical tasks like image recognition, both industry and the research community have advocated the applications of ML in wireless communication. This paper comprehensively surveys the recent advances of the applications of ML in wireless communication, which are classified as: resource management in the MAC layer, networking and mobility management in the network layer, and localization in the application layer. The applications in resource management further include power control, spectrum management, backhaul management, cache management, beamformer design, and computation resource management, while ML-based networking focuses on the applications in clustering, base station switching control, user association, and routing. Moreover, the literature on each aspect is organized according to the adopted ML techniques. In addition, several conditions for applying ML to wireless communication are identified to help readers decide whether to use ML and which kind of ML techniques to use. Traditional approaches are also summarized together with their performance comparison with ML-based approaches, based on which the motivations of the surveyed works to adopt ML are clarified. Given the extensiveness of the research area, challenges and unresolved issues are presented to facilitate future studies. Specifically, ML-based network slicing, infrastructure updates to support ML-based paradigms, open data sets and platforms for researchers, theoretical guidance for ML implementation, and so on are discussed.

Journal ArticleDOI
TL;DR: The findings suggest that financial development increases ecological footprint; economic growth, energy consumption, foreign direct investment (FDI), and urbanization also pollute the environment by increasing ecological footprint.
Abstract: This work aims to contribute to the existing literature by investigating the impact of financial development on ecological footprint. To achieve this goal, we employ the Driscoll-Kraay panel regression model for a panel of 59 Belt and Road countries over the period from 1990 to 2016. The findings suggest that financial development increases ecological footprint. Moreover, economic growth, energy consumption, foreign direct investment (FDI), and urbanization pollute the environment by increasing ecological footprint. In addition, several diagnostic tests have been applied to confirm the reliability and validity of the results. Based on the outcome of the study, various policy implications are proposed for Belt and Road countries to minimize the ecological footprint.


Journal ArticleDOI
TL;DR: This paper formulates the edge server placement problem in mobile edge computing environments for smart cities as a multi-objective constraint optimization problem, placing edge servers in strategic locations with the objective of balancing the workloads of edge servers and minimizing the access delay between mobile users and edge servers.

Proceedings ArticleDOI
01 Oct 2019
TL;DR: A Mixed High-Order Attention Network (MHN) is proposed to further enhance the discrimination and richness of attention knowledge in an explicit manner; it captures the subtle differences among pedestrians and produces discriminative attention proposals.
Abstract: Attention has become more attractive in person re-identification (ReID) as it is capable of biasing the allocation of available resources towards the most informative parts of an input signal. However, state-of-the-art works concentrate only on coarse or first-order attention design, e.g. spatial and channels attention, while rarely exploring higher-order attention mechanism. We take a step towards addressing this problem. In this paper, we first propose the High-Order Attention (HOA) module to model and utilize the complex and high-order statistics information in attention mechanism, so as to capture the subtle differences among pedestrians and to produce the discriminative attention proposals. Then, rethinking person ReID as a zero-shot learning problem, we propose the Mixed High-Order Attention Network (MHN) to further enhance the discrimination and richness of attention knowledge in an explicit manner. Extensive experiments have been conducted to validate the superiority of our MHN for person ReID over a wide variety of state-of-the-art methods on three large-scale datasets, including Market-1501, DukeMTMC-ReID and CUHK03-NP. Code is available at http://www.bhchen.cn.

Journal ArticleDOI
TL;DR: A novel spatially variant recurrent neural network (RNN) is proposed as an edge stream to model edge details, with the guidance of another auto-encoder, to enhance the visibility of degraded images.
Abstract: Camera sensors often fail to capture clear images or videos in a poorly lit environment. In this paper, we propose a trainable hybrid network to enhance the visibility of such degraded images. The proposed network consists of two distinct streams to simultaneously learn the global content and the salient structures of the clear image in a unified network. More specifically, the content stream estimates the global content of the low-light input through an encoder–decoder network. However, the encoder in the content stream tends to lose some structure details. To remedy this, we propose a novel spatially variant recurrent neural network (RNN) as an edge stream to model edge details, with the guidance of another auto-encoder. The experimental results show that the proposed network favorably performs against the state-of-the-art low-light image enhancement algorithms.

Journal ArticleDOI
TL;DR: The Panoptic Studio system and method are the first to reconstruct the full body motion of more than five people engaged in social interactions without using markers, and the impact of the number of views in achieving this goal is empirically demonstrated.
Abstract: We present an approach to capture the 3D motion of a group of people engaged in a social interaction. The core challenges in capturing social interactions are: (1) occlusion is functional and frequent; (2) subtle motion needs to be measured over a space large enough to host a social group; (3) human appearance and configuration variation is immense; and (4) attaching markers to the body may prime the nature of interactions. The Panoptic Studio is a system organized around the thesis that social interactions should be measured through the integration of perceptual analyses over a large variety of view points. We present a modularized system designed around this principle, consisting of integrated structural, hardware, and software innovations. The system takes, as input, 480 synchronized video streams of multiple people engaged in social activities, and produces, as output, the labeled time-varying 3D structure of anatomical landmarks on individuals in the space. Our algorithm is designed to fuse the “weak” perceptual processes in the large number of views by progressively generating skeletal proposals from low-level appearance cues, and a framework for temporal refinement is also presented, which associates body parts with a reconstructed dense 3D trajectory stream. Our system and method are the first to reconstruct the full body motion of more than five people engaged in social interactions without using markers. We also empirically demonstrate the impact of the number of views in achieving this goal.

Journal ArticleDOI
TL;DR: The findings disclose that globalization is not a significant determinant of the ecological footprint; however, it significantly increases the ecological carbon footprint.
Abstract: This study investigates the relationship between globalization and the ecological footprint for Malaysia from 1971 to 2014. The results of the Bayer and Hanck cointegration test and the ARDL bound test show the existence of cointegration among variables. The findings disclose that globalization is not a significant determinant of the ecological footprint; however, it significantly increases the ecological carbon footprint. Energy consumption and economic growth stimulate the ecological footprint and carbon footprint in Malaysia. Population density reduces the ecological footprint and carbon footprint. Further, financial development mitigates the ecological footprint. The causality results disclose the feedback hypothesis between energy consumption and economic growth in the long run and short run.

Proceedings ArticleDOI
25 Jul 2019
TL;DR: A metapath-guided heterogeneous Graph Neural Network is proposed to learn the embeddings of objects in intent recommendation, modeled as a Heterogeneous Information Network; offline experiments on real large-scale data show the superior performance of the proposed MEIRec compared to representative methods.
Abstract: With the prevalence of mobile e-commerce nowadays, a new type of recommendation service, called intent recommendation, is widely used in many mobile e-commerce Apps, such as Taobao and Amazon. Different from traditional query recommendation and item recommendation, intent recommendation automatically recommends user intent according to user historical behaviors, without any input, when users open the App. Intent recommendation has become very popular in the past two years, because it reveals users' latent intents and avoids tedious input on mobile phones. Existing methods used in industry usually need laborious feature engineering. Moreover, they only utilize attribute and statistic information of users and queries, and fail to take full advantage of rich interaction information in intent recommendation, which may result in limited performance. In this paper, we propose to model the complex objects and rich interactions in intent recommendation as a Heterogeneous Information Network. Furthermore, we present a novel Metapath-guided Embedding method for Intent Recommendation (called MEIRec). In order to fully utilize rich structural information, we design a metapath-guided heterogeneous Graph Neural Network to learn the embeddings of objects in intent recommendation. In addition, in order to alleviate the huge number of learned embedding parameters, we propose a uniform term embedding mechanism, in which the embeddings of objects are composed in the same term embedding space. Offline experiments on real large-scale data show the superior performance of the proposed MEIRec, compared to representative methods. Moreover, the results of online experiments on the Taobao e-commerce platform show that MEIRec not only gains a performance improvement of 1.54% on the CTR metric, but also attracts up to 2.66% of new users to search queries.
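The uniform term embedding mechanism — every object (user, item, or query) is represented in one shared term-embedding space, so the parameter count grows with the term vocabulary rather than with the huge number of objects — can be sketched as follows; the toy table and terms are illustrative:

```python
import numpy as np

def object_embedding(term_table, terms):
    # An object's embedding is the mean of its terms' vectors,
    # all drawn from a single shared term-embedding table, so any
    # user, item, or query reuses the same parameters.
    vecs = [term_table[t] for t in terms if t in term_table]
    if not vecs:
        dim = len(next(iter(term_table.values())))
        return np.zeros(dim)
    return np.mean(vecs, axis=0)
```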

Journal ArticleDOI
TL;DR: In this paper, the authors introduce the basic concept of blockchain and illustrate why a consensus mechanism plays an indispensable role in a blockchain-enabled IoT system, then discuss the main ideas of two famous consensus mechanisms, PoW and PoS, and list their limitations in IoT.
Abstract: Blockchain has been regarded as a promising technology for IoT, since it provides significant solutions for decentralized networks that can address trust and security concerns, high maintenance cost problems, and so on. The decentralization provided by blockchain can be largely attributed to the use of a consensus mechanism, which enables peer-to-peer trading in a distributed manner without the involvement of any third party. This article starts by introducing the basic concept of blockchain and illustrating why a consensus mechanism plays an indispensable role in a blockchain-enabled IoT system. Then we discuss the main ideas of two famous consensus mechanisms, PoW and PoS, and list their limitations in IoT. Next, two mainstream DAG-based consensus mechanisms, the Tangle and Hashgraph, are reviewed to show why DAG consensus is more suitable for IoT systems than PoW and PoS. Potential issues and challenges of DAG-based consensus mechanisms to be addressed in the future are discussed in the last section.
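The structural difference between DAG consensus and chain-based consensus is easy to see in miniature: in a Tangle-style ledger, each new transaction approves up to two existing "tips" (transactions not yet approved by anyone), so the ledger branches instead of forming a single chain. The sketch below uses uniform tip selection and batched arrivals as simplifying assumptions; the real Tangle uses a weighted random walk.

```python
import random

random.seed(1)

# Minimal Tangle-style DAG ledger: each new transaction approves up to
# two tips (transactions not yet approved by anyone). Uniform sampling
# stands in for the Tangle's weighted random walk, and batches model
# transactions arriving concurrently.
approvals = {0: []}        # tx id -> list of txs it approves (0 = genesis)
approved = set()           # txs with at least one approver

def tips():
    return [tx for tx in approvals if tx not in approved]

def attach_batch(new_txs):
    current = tips()       # all txs in a batch see the same tip set
    for tx in new_txs:
        approvals[tx] = random.sample(current, k=min(2, len(current)))
    approved.update(t for tx in new_txs for t in approvals[tx])

attach_batch([1, 2])       # both approve the genesis transaction
attach_batch([3, 4, 5])
attach_batch([6, 7])

print(sorted(tips()))      # current unapproved frontier of the DAG
```

Because approving others' transactions replaces block mining, issuers contribute validation work themselves, which is why such designs avoid PoW's energy cost and suit constrained IoT devices.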

Journal ArticleDOI
TL;DR: Simulation results show that the proposed framework can effectively improve the performance of blockchain-enabled IIoT systems and adapt well to the dynamics of the IIoT.
Abstract: Recent advances in the industrial Internet of things (IIoT) provide plenty of opportunities for various industries. To address the security and efficiency issues of the massive IIoT data, blockchain is widely considered as a promising solution to enable data storing/processing/sharing in a secure and efficient way. To meet the high throughput requirement, this paper proposes a novel deep reinforcement learning (DRL)-based performance optimization framework for blockchain-enabled IIoT systems, the goals of which are threefold: 1) providing a methodology for evaluating the system from the aspects of scalability, decentralization, latency, and security; 2) improving the scalability of the underlying blockchain without affecting the system's decentralization, latency, and security; and 3) designing a modular blockchain for IIoT systems, where the block producers, consensus algorithm, block size, and block interval can be selected/adjusted using the DRL technique. Simulation results show that our proposed framework can effectively improve the performance of blockchain-enabled IIoT systems and adapt well to the dynamics of the IIoT.
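The paper's DRL agent selects block producers, consensus algorithm, block size, and block interval. A toy version of just the last two can be sketched as an epsilon-greedy bandit that learns which (block size, block interval) pair maximizes throughput under a latency limit. The action set, reward model, and all numbers below are assumptions for illustration, not the paper's actual formulation.

```python
import random

random.seed(0)

# Candidate actions: (block size in MB, block interval in s).
# Reward = transactions per second, zeroed out if an assumed toy
# propagation-delay model violates the latency limit.
ACTIONS = [(1, 10), (2, 10), (4, 10), (4, 5), (8, 5)]
TX_SIZE_KB, LATENCY_LIMIT_S = 0.5, 8.0

def reward(size_mb, interval_s):
    tps = size_mb * 1024 / TX_SIZE_KB / interval_s  # transactions per second
    propagation = 0.8 * size_mb                     # toy propagation delay (s)
    return tps if propagation <= LATENCY_LIMIT_S else 0.0

q = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}
for _ in range(500):
    # Epsilon-greedy: explore 20% of the time, otherwise exploit.
    a = random.choice(ACTIONS) if random.random() < 0.2 else max(q, key=q.get)
    counts[a] += 1
    q[a] += (reward(*a) - q[a]) / counts[a]         # incremental mean update

best = max(q, key=q.get)
print(best, q[best])
```

A full DRL treatment would condition on system state (node count, network conditions) and use a deep policy, but the trade-off the agent navigates, bigger/faster blocks raise throughput until propagation delay hurts latency and security, is the same.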

Journal ArticleDOI
TL;DR: A comprehensive survey of deep learning-based human pose estimation methods that analyzes the methodologies employed, and summarizes and discusses recent works with a methodology-based taxonomy.


Journal ArticleDOI
TL;DR: In this article, a review summarizes the applications of Na metal anodes before providing an in-depth review of research efforts to address their key challenges, including electrolyte optimization, artificial solid electrolyte interphases, and electrode structure design.

Journal ArticleDOI
TL;DR: In this treatise, a cloud computing service is introduced into the blockchain platform to help offload computational tasks from the IIoT network itself, and a multiagent reinforcement learning algorithm is conceived to search for the near-optimal policy.
Abstract: The past few years have witnessed compelling applications of the blockchain technique in our daily life, ranging from the financial market to health care. Considering the integration of the blockchain technique and the industrial Internet of Things (IIoT), blockchain may act as a distributed ledger for beneficially establishing a decentralized autonomous trading platform for IIoT networks. However, power and computation constraints prevent IoT devices from directly participating in the proof-of-work process. As a remedy, in this treatise, a cloud computing service is introduced into the blockchain platform to help offload computational tasks from the IIoT network itself. In addition, we study the resource management and pricing problem between the cloud provider and the miners. More explicitly, we model the interaction between the cloud provider and the miners as a Stackelberg game, where the leader, i.e., the cloud provider, sets the price first, and the miners then act as the followers. Moreover, in order to find the Nash equilibrium of the proposed Stackelberg game, a multiagent reinforcement learning algorithm is conceived to search for the near-optimal policy. Finally, extensive simulations are conducted to evaluate our proposed algorithm in comparison to some state-of-the-art schemes.
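The Stackelberg structure described above, a leader posts a price and followers best-respond, can be made concrete with a deliberately simple model: each miner's utility is quadratic in purchased computing power, giving a closed-form best response, and the provider's optimal price is then found by grid search. The utility function, parameter values, and number of miners are illustrative assumptions, not the paper's exact game.

```python
import numpy as np

# Toy Stackelberg pricing game: the cloud provider (leader) posts a unit
# price p for computing power; each of N identical miners (followers)
# best-responds with the demand x*(p) that maximizes the quadratic
# utility u(x) = (R - p) * x - c * x**2 / 2.
R, c, N = 10.0, 2.0, 5   # miner reward rate, cost curvature, number of miners

def follower_demand(p):
    return max(0.0, (R - p) / c)        # argmax of u(x), clipped at zero

def leader_revenue(p):
    return p * N * follower_demand(p)   # leader anticipates followers' response

# Leader's problem solved numerically; the analytic optimum is p* = R / 2.
prices = np.linspace(0.0, R, 10001)
p_star = prices[np.argmax([leader_revenue(p) for p in prices])]
print(p_star, follower_demand(p_star))
```

Backward induction is the key design choice: the leader optimizes over the followers' already-solved best responses, which is exactly what the learning algorithm in the paper has to approximate when no closed form exists.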

Journal ArticleDOI
TL;DR: The scalability issue is discussed from the perspectives of throughput, storage, and networking, and existing enabling technologies for scalable blockchain systems are presented.
Abstract: In the past decade, crypto-currencies such as Bitcoin and Litecoin have developed rapidly. Blockchain, as the underlying technology of these digital crypto-currencies, has attracted great attention from academia and industry. Blockchain has many good features, such as trustlessness, transparency, anonymity, democracy, automation, decentralization, and security. Despite these promising features, scalability remains a key barrier to the wide adoption of blockchain technology in real business environments. In this article, we focus on the scalability issue and provide a brief survey of recent studies on scalable blockchain systems. We first discuss the scalability issue from the perspectives of throughput, storage, and networking. Then, existing enabling technologies for scalable blockchain systems are presented. We also discuss some research challenges and future research directions for scalable blockchain systems.
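The throughput perspective mentioned above reduces to a back-of-envelope bound: transactions per second are capped by block size divided by average transaction size divided by block interval. The figures below are common approximations for Bitcoin, used only to illustrate why its throughput sits in the single digits.

```python
# Throughput bound for a chain that produces one block of fixed size per
# interval. Numbers are rough Bitcoin approximations, for illustration only.
BLOCK_SIZE_BYTES = 1_000_000   # ~1 MB block
AVG_TX_BYTES = 250             # typical transaction size
BLOCK_INTERVAL_S = 600         # one block every ~10 minutes

def max_tps(block_size, tx_size, interval):
    """Upper bound on transactions per second."""
    return block_size / tx_size / interval

print(round(max_tps(BLOCK_SIZE_BYTES, AVG_TX_BYTES, BLOCK_INTERVAL_S), 2))  # 6.67
```

The bound makes the survey's design space visible: raising throughput means bigger blocks, shorter intervals, smaller transactions, or moving work off-chain, each with its own networking and storage costs.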

Journal ArticleDOI
TL;DR: This proof-of-principle chip-based CV-QKD system is capable of producing a secret key rate of 0.14 kbps (under collective attack) over a simulated distance of 100 km in fibre, offering new possibilities for low-cost, scalable and portable quantum networks.
Abstract: Quantum key distribution (QKD) is a quantum communication technology that promises unconditional communication security. High-performance and cost-effective QKD systems are essential for the establishment of quantum communication networks1–3. By integrating all the optical components (except the laser source) on a silicon photonic chip, we have realized a stable, miniaturized and low-cost system for continuous-variable QKD (CV-QKD) that is compatible with the existing fibre optical communication infrastructure4. Here, the integrated silicon photonic chip is demonstrated for CV-QKD. It implements the widely studied Gaussian-modulated coherent state protocol that encodes continuous distributed information on the quadrature of laser light5,6. Our proof-of-principle chip-based CV-QKD system is capable of producing a secret key rate of 0.14 kbps (under collective attack) over a simulated distance of 100 km in fibre, offering new possibilities for low-cost, scalable and portable quantum networks. A sender and a receiver for continuous-variable quantum key distribution are packed onto separate silicon photonic chips. By using an external 1,550-nm laser, a secret key rate of 0.14 kbps is transmitted over a simulated distance of 100 km in fibre.
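The classical skeleton of the Gaussian-modulated coherent state protocol mentioned above can be sketched numerically: Alice encodes Gaussian-distributed values on the X and P quadratures, and Bob homodyne-detects a randomly chosen quadrature, modeled here as Alice's value attenuated by the channel plus Gaussian noise. The channel and noise model (shot-noise units, fixed transmittance and excess noise) is a simplifying assumption, and all security analysis is omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

# Classical sketch of Gaussian-modulated coherent-state CV-QKD:
# Alice draws quadrature values from a zero-mean Gaussian; Bob's
# homodyne outcome is modeled as sqrt(T) * (Alice's value) plus
# shot noise and excess noise. Illustrative parameters only.
N = 100_000          # number of pulses
V_MOD = 4.0          # Alice's modulation variance (shot-noise units)
T, XI = 0.5, 0.1     # channel transmittance and excess noise (assumed)

alice_x = rng.normal(0.0, np.sqrt(V_MOD), N)
alice_p = rng.normal(0.0, np.sqrt(V_MOD), N)
basis = rng.integers(0, 2, N)                  # 0 -> measure X, 1 -> measure P
sent = np.where(basis == 0, alice_x, alice_p)  # the quadrature Bob keeps
noise = rng.normal(0.0, np.sqrt(1.0 + T * XI), N)
bob = np.sqrt(T) * sent + noise

# After Bob announces which quadrature he measured, the retained pairs
# are correlated; error reconciliation and privacy amplification would
# then distil a secret key from this correlation.
corr = np.corrcoef(sent, bob)[0, 1]
print(round(corr, 3))
```

The point of the sketch is the information structure: the raw key material is a pair of correlated Gaussian variables, and the achievable secret key rate depends on how strong that correlation is relative to what an eavesdropper could hold.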