scispace - formally typeset

Showing papers on "Heterogeneous network published in 2019"


Journal ArticleDOI
TL;DR: A novel heterogeneous network embedding based approach for HIN-based recommendation, called HERec, is proposed; experiments show the capability of the HERec model for the cold-start problem and reveal that the transformed embedding information from HINs can improve recommendation performance.
Abstract: Due to its flexibility in modelling data heterogeneity, the heterogeneous information network (HIN) has been adopted to characterize complex and heterogeneous auxiliary data in recommender systems, an approach called HIN-based recommendation. It is challenging to develop effective methods for HIN-based recommendation, in both the extraction and the exploitation of information from HINs. Most HIN-based recommendation methods rely on path-based similarity, which cannot fully mine the latent structural features of users and items. In this paper, we propose a novel heterogeneous network embedding based approach for HIN-based recommendation, called HERec. To embed HINs, we design a meta-path based random walk strategy to generate meaningful node sequences for network embedding. The learned node embeddings are first transformed by a set of fusion functions, and subsequently integrated into an extended matrix factorization (MF) model. The extended MF model and the fusion functions are jointly optimized for the rating prediction task. Extensive experiments on three real-world datasets demonstrate the effectiveness of the HERec model. Moreover, we show the capability of the HERec model for the cold-start problem, and reveal that the transformed embedding information from HINs can improve the recommendation performance.
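A meta-path based random walk like the one HERec describes can be sketched in a few lines. This is a generic illustration, not the authors' implementation: the graph layout, type labels, and the `metapath` argument (one period of the meta-path, e.g. `['U', 'I']` for a U-I-U-I… walk over users and items) are assumptions for the example.

```python
import random

def metapath_random_walk(graph, node_types, metapath, start, walk_length, rng=random):
    """Generate one meta-path guided random walk.

    graph: dict mapping node -> list of neighbour nodes
    node_types: dict mapping node -> type label (e.g. 'U' for user, 'I' for item)
    metapath: one period of the meta-path, e.g. ['U', 'I'] for U-I-U-I...
    """
    walk = [start]
    while len(walk) < walk_length:
        cur = walk[-1]
        # the next node must match the next type in the cyclic meta-path
        next_type = metapath[len(walk) % len(metapath)]
        candidates = [n for n in graph[cur] if node_types[n] == next_type]
        if not candidates:
            break  # dead end: no neighbour of the required type
        walk.append(rng.choice(candidates))
    return walk
```

Constraining each step to the next type in the meta-path is what makes the resulting node sequences meaningful for heterogeneous network embedding, as opposed to a plain random walk that mixes node types arbitrarily.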

768 citations


Posted Content
TL;DR: This work proposes q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by fair resource allocation in wireless networks that encourages a more fair accuracy distribution across devices in federated networks.
Abstract: Federated learning involves training statistical models in massive, heterogeneous networks. Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the devices. In this work, we propose q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by fair resource allocation in wireless networks that encourages a more fair (specifically, a more uniform) accuracy distribution across devices in federated networks. To solve q-FFL, we devise a communication-efficient method, q-FedAvg, that is suited to federated networks. We validate both the effectiveness of q-FFL and the efficiency of q-FedAvg on a suite of federated datasets with both convex and non-convex models, and show that q-FFL (along with q-FedAvg) outperforms existing baselines in terms of the resulting fairness, flexibility, and efficiency.
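The q-FFL objective reweights per-device losses, so devices with higher loss contribute more as q grows. Below is a minimal sketch of the objective value; the function name and argument layout are illustrative, not taken from the paper's code.

```python
import numpy as np

def q_ffl_objective(losses, weights, q):
    """q-FFL objective: sum_k p_k * F_k(w)^(q+1) / (q+1).

    losses: per-device loss values F_k(w), assumed non-negative
    weights: device weights p_k (e.g. each device's fraction of samples)
    q: fairness knob; q = 0 recovers the usual weighted average loss,
       larger q up-weights devices with higher loss.
    """
    losses = np.asarray(losses, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * losses ** (q + 1) / (q + 1)))
```

Setting q = 0 gives back the standard federated objective; increasing q pushes the optimizer toward a more uniform accuracy distribution across devices.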

298 citations


Proceedings ArticleDOI
Sungsoo Ahn1, Shell Xu Hu, Andreas Damianou2, Neil D. Lawrence2, Zhenwen Dai2 
15 Jun 2019
TL;DR: In this article, the authors propose an information-theoretic framework for knowledge transfer which formulates knowledge transfer as maximizing the mutual information between the teacher and the student networks, and compare their method with existing knowledge transfer methods on both knowledge distillation and transfer learning tasks and show that their method consistently outperforms existing methods.
Abstract: Transferring knowledge from a teacher neural network pretrained on the same or a similar task to a student neural network can significantly improve the performance of the student neural network. Existing knowledge transfer approaches match the activations or the corresponding hand-crafted features of the teacher and the student networks. We propose an information-theoretic framework for knowledge transfer which formulates knowledge transfer as maximizing the mutual information between the teacher and the student networks. We compare our method with existing knowledge transfer methods on both knowledge distillation and transfer learning tasks and show that our method consistently outperforms existing methods. We further demonstrate the strength of our method on knowledge transfer across heterogeneous network architectures by transferring knowledge from a convolutional neural network (CNN) to a multi-layer perceptron (MLP) on CIFAR-10. The resulting MLP significantly outperforms state-of-the-art methods and achieves performance similar to that of a CNN with a single convolutional layer.
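The mutual-information formulation can be made concrete with a variational lower bound: the student learns to predict the teacher's activations under a Gaussian with learned variance, and minimizing the resulting negative log-likelihood maximizes a lower bound on the mutual information up to a constant. The NumPy sketch below is a simplified illustration of that idea, not the authors' implementation.

```python
import numpy as np

def vid_loss(teacher_feat, student_pred, log_var):
    """Variational lower-bound loss for knowledge transfer.

    teacher_feat: teacher activations t, shape (batch, dim)
    student_pred: student's prediction mu(s) of t, shape (batch, dim)
    log_var: per-dimension log-variance of the Gaussian q(t | s), shape (dim,)

    Returns the mean negative log-likelihood (up to additive constants) of
    the teacher features under q(t | s); minimizing it maximizes a lower
    bound on I(t; s).
    """
    var = np.exp(log_var)
    nll = 0.5 * (log_var + (teacher_feat - student_pred) ** 2 / var)
    return float(np.mean(np.sum(nll, axis=1)))
```

The learned per-dimension variance lets the student down-weight teacher dimensions it cannot predict, which is what distinguishes this from a plain squared-error activation match.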

298 citations


Journal ArticleDOI
08 Jan 2019
TL;DR: This Review describes advances in the statistical physics of complex networks and provides a reference for the state of the art in theoretical network modelling and applications to real-world systems for pattern detection and network reconstruction.
Abstract: In the past 15 years, statistical physics has been successful as a framework for modelling complex networks. On the theoretical side, this approach has unveiled a variety of physical phenomena, such as the emergence of mixed distributions and ensemble non-equivalence, that are observed in heterogeneous networks but not in homogeneous systems. At the same time, thanks to the deep connection between the principle of maximum entropy and information theory, statistical physics has led to the definition of null models for networks that reproduce features of real-world systems but that are otherwise as random as possible. We review here the statistical physics approach and the null models for complex networks, focusing in particular on analytical frameworks that reproduce local network features. We show how these models have been used to detect statistically significant structural patterns in real-world networks and to reconstruct the network structure in cases of incomplete information. We further survey the statistical physics models that reproduce more complex, semilocal network features using Markov chain Monte Carlo sampling, as well as models of generalized network structures, such as multiplex networks, interacting networks and simplicial complexes. This Review describes advances in the statistical physics of complex networks and provides a reference for the state of the art in theoretical network modelling and applications to real-world systems for pattern detection and network reconstruction.
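As a concrete example of a degree-constrained null model, the Chung-Lu construction connects nodes i and j independently with probability roughly proportional to the product of their target degrees. The sketch below is a textbook illustration (using the common min(1, k_i·k_j / 2m) truncation), not a specific model from the Review.

```python
import random

def chung_lu_sample(degrees, rng=random):
    """Sample one undirected graph from a Chung-Lu style null model.

    Edge (i, j) is present independently with probability
    min(1, k_i * k_j / (2m)), so expected degrees approximate the
    target sequence while the graph is otherwise as random as possible.
    Returns the sampled edge set as pairs (i, j) with i < j.
    """
    n = len(degrees)
    two_m = sum(degrees)  # 2m = sum of target degrees
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            p = min(1.0, degrees[i] * degrees[j] / two_m)
            if rng.random() < p:
                edges.add((i, j))
    return edges
```

Comparing a statistic measured on a real network against its distribution over many such samples is the basic recipe for detecting statistically significant structural patterns.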

249 citations


Proceedings ArticleDOI
25 Jul 2019
TL;DR: A theoretical analysis of the proposed framework shows its connection with previous works and proves its greater expressiveness; results of offline A/B tests on product recommendation further confirm the effectiveness and efficiency of the framework in practice.
Abstract: Network embedding (or graph embedding) has been widely used in many real-world applications. However, existing methods mainly focus on networks with single-typed nodes/edges and cannot scale well to handle large networks. Many real-world networks consist of billions of nodes and edges of multiple types, and each node is associated with different attributes. In this paper, we formalize the problem of embedding learning for the Attributed Multiplex Heterogeneous Network and propose a unified framework to address this problem. The framework supports both transductive and inductive learning. We also give a theoretical analysis of the proposed framework, showing its connection with previous works and proving its greater expressiveness. We conduct systematic evaluations for the proposed framework on four different genres of challenging datasets: Amazon, YouTube, Twitter, and Alibaba. Experimental results demonstrate that with the learned embeddings from the proposed framework, we can achieve statistically significant improvements (e.g., 5.99-28.23% lift by F1 scores; p << 0.01, t-test) over previous state-of-the-art methods for link prediction. The framework has also been successfully deployed on the recommendation system of a worldwide leading e-commerce company, Alibaba Group. Results of the offline A/B tests on product recommendation further confirm the effectiveness and efficiency of the framework in practice.

225 citations


Posted Content
TL;DR: An information-theoretic framework for knowledge transfer is proposed which formulates knowledge transfer as maximizing the mutual information between the teacher and the student networks and which consistently outperforms existing methods.
Abstract: Transferring knowledge from a teacher neural network pretrained on the same or a similar task to a student neural network can significantly improve the performance of the student neural network. Existing knowledge transfer approaches match the activations or the corresponding hand-crafted features of the teacher and the student networks. We propose an information-theoretic framework for knowledge transfer which formulates knowledge transfer as maximizing the mutual information between the teacher and the student networks. We compare our method with existing knowledge transfer methods on both knowledge distillation and transfer learning tasks and show that our method consistently outperforms existing methods. We further demonstrate the strength of our method on knowledge transfer across heterogeneous network architectures by transferring knowledge from a convolutional neural network (CNN) to a multi-layer perceptron (MLP) on CIFAR-10. The resulting MLP significantly outperforms state-of-the-art methods and achieves performance similar to that of a CNN with a single convolutional layer.

212 citations


Journal ArticleDOI
TL;DR: A two-stage joint hovering altitude and power control solution for the resource allocation problem in UAV networks considering the inevitable cross-tier interference from space-air-ground heterogeneous networks is proposed.
Abstract: Unmanned aerial vehicles (UAVs) have been widely used in both military and civilian applications. Equipped with diverse communication payloads, UAVs cooperating with satellites and base stations constitute a space-air-ground three-tier heterogeneous network, which is beneficial both for providing seamless coverage and for improving the capacity of increasingly prosperous Internet of Things networks. However, cross-tier interference may be inevitable among these tightly embraced heterogeneous networks when they share the same spectrum. The power association problem in satellite, UAV, and macrocell three-tier networks thus becomes a critical issue. In this paper, we propose a two-stage joint hovering altitude and power control solution for the resource allocation problem in UAV networks, considering the inevitable cross-tier interference from space-air-ground heterogeneous networks. Furthermore, Lagrange dual decomposition and the concave-convex procedure are used to solve this problem, followed by a low-complexity greedy search algorithm. Finally, simulation results show the effectiveness of our proposed two-stage joint optimization algorithm in terms of the UAV network’s total throughput.

209 citations


Journal ArticleDOI
TL;DR: A fundamental tradeoff between energy consumption and service delay when provisioning mobile services in vehicular networks is explored, and a novel model is proposed to depict users’ willingness to contribute their resources to the public.
Abstract: In the past decade, network data communication has experienced rapid growth, which has led to explosive congestion in heterogeneous networks. Moreover, emerging industrial applications, such as automatic driving, put forward higher requirements on both networks and devices. However, running computation-intensive industrial applications locally is constrained by the limited resources of devices. Correspondingly, fog computing has recently emerged to reduce the congestion of content-centric networks. It has proven to be a good way, in industry and traffic, to reduce network delay and processing time. In addition, device-to-device offloading is viewed as a promising paradigm for transmitting network data in mobile environments, especially for autodriving vehicles. In this paper, jointly taking both the network traffic and the computation workload of industrial traffic into consideration, we explore a fundamental tradeoff between energy consumption and service delay when provisioning mobile services in vehicular networks. In particular, when the available resource in mobile vehicles becomes a bottleneck, we propose a novel model to depict users’ willingness to contribute their resources to the public. We then formulate a cost minimization problem by exploiting the framework of the Markov decision process (MDP) and propose a dynamic reinforcement learning scheduling algorithm and a deep dynamic scheduling algorithm to solve the offloading decision problem. By adopting different mobile trajectory traces, we conduct extensive simulations to evaluate the performance of the proposed algorithms. The results show that our proposed algorithms outperform other benchmark schemes in mobile edge networks.

170 citations


Journal ArticleDOI
TL;DR: This study extended the RWR algorithm to multiplex and heterogeneous networks, and devised a leave‐one‐out cross‐validation strategy to evaluate the algorithm’s ability to predict disease‐associated genes.
Abstract: Motivation Recent years have witnessed an exponential growth in the number of identified interactions between biological molecules. These interactions are usually represented as large and complex networks, calling for the development of appropriate tools to exploit the functional information they contain. Random walk with restart (RWR) is the state-of-the-art guilt-by-association approach. It explores the network vicinity of gene/protein seeds to study their functions, based on the premise that nodes related to similar functions tend to lie close to each other in the networks. Results In this study, we extended the RWR algorithm to multiplex and heterogeneous networks. The walk can now explore different layers of physical and functional interactions between genes and proteins, such as protein-protein interactions and co-expression associations. In addition, the walk can also jump to a network containing different sets of edges and nodes, such as phenotype similarities between diseases. We devised a leave-one-out cross-validation strategy to evaluate the algorithm’s ability to predict disease-associated genes. We demonstrate the improved performance of the multiplex-heterogeneous RWR compared to several random walks on monoplex or heterogeneous networks. Overall, our framework is able to leverage the different interaction sources to outperform current approaches. Finally, we applied the algorithm to predict candidate genes for the Wiedemann-Rautenstrauch syndrome, and to explore the network vicinity of the SHORT syndrome. Availability and implementation The source code is available on GitHub at: https://github.com/alberto-valdeolivas/RWR-MH. In addition, an R package is freely available through Bioconductor at: http://bioconductor.org/packages/RandomWalkRestartMH/. Supplementary information Supplementary data are available at Bioinformatics online.
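The core RWR iteration underlying such methods is compact: repeatedly mix one step of the walk with a restart to the seed distribution until the proximity vector converges. The sketch below is the monoplex case; the multiplex-heterogeneous extension of the paper replaces W with a supra-transition matrix spanning layers and networks.

```python
import numpy as np

def random_walk_restart(W, seeds, r=0.7, tol=1e-10, max_iter=1000):
    """Random walk with restart on a column-stochastic transition matrix.

    W: (n, n) transition matrix with columns summing to 1
    seeds: indices of seed nodes; the restart mass is spread evenly over them
    r: restart probability
    Returns the converged proximity vector p (entries sum to 1); higher
    entries mean nodes closer to the seeds, the guilt-by-association score.
    """
    n = W.shape[0]
    p0 = np.zeros(n)
    p0[list(seeds)] = 1.0 / len(seeds)
    p = p0.copy()
    for _ in range(max_iter):
        # one walk step, mixed with a jump back to the seeds
        p_next = (1 - r) * W @ p + r * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p
```

Ranking candidate genes then amounts to sorting the non-seed entries of p.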

159 citations


Journal ArticleDOI
TL;DR: This review paper clearly presents how OWC technologies, such as visible light communication, light fidelity, optical camera communication, and free space optics communication, will be an effective solution for successful deployment of 5G/6G and IoT systems.
Abstract: The upcoming fifth- and sixth-generation (5G and 6G, respectively) communication systems are expected to bring enormous advances compared to the existing fourth-generation communication system. Important and common issues related to the service quality of 5G and 6G communication systems include high capacity, massive connectivity, low latency, high security, low-energy consumption, high quality of experience, and reliable connectivity. Of course, 6G communication will provide several-fold improved performance compared to 5G communication regarding these issues. The Internet of Things (IoT) based on the tactile internet will also be an essential part of 5G-and-beyond (5GB) (e.g., 5G and 6G) communication systems. Accordingly, 5GB wireless networks will face numerous challenges in supporting the wide varieties of heterogeneous traffic and in satisfying the mentioned service-quality-related parameters. Optical wireless communication (OWC), along with many other wireless technologies, is a promising candidate for serving the demands of 5GB communication systems. This review paper clearly presents how OWC technologies, such as visible light communication, light fidelity, optical camera communication, and free space optics communication, will be an effective solution for successful deployment of 5G/6G and IoT systems.

156 citations


Proceedings ArticleDOI
TL;DR: Wang et al. propose a unified framework for embedding learning on the Attributed Multiplex Heterogeneous Network, supporting both transductive and inductive learning.
Abstract: Network embedding (or graph embedding) has been widely used in many real-world applications. However, existing methods mainly focus on networks with single-typed nodes/edges and cannot scale well to handle large networks. Many real-world networks consist of billions of nodes and edges of multiple types, and each node is associated with different attributes. In this paper, we formalize the problem of embedding learning for the Attributed Multiplex Heterogeneous Network and propose a unified framework to address this problem. The framework supports both transductive and inductive learning. We also give a theoretical analysis of the proposed framework, showing its connection with previous works and proving its greater expressiveness. We conduct systematic evaluations for the proposed framework on four different genres of challenging datasets: Amazon, YouTube, Twitter, and Alibaba. Experimental results demonstrate that with the learned embeddings from the proposed framework, we can achieve statistically significant improvements (e.g., 5.99-28.23% lift by F1 scores; p << 0.01, t-test) over previous state-of-the-art methods for link prediction. The framework has also been successfully deployed on the recommendation system of a worldwide leading e-commerce company, Alibaba Group. Results of the offline A/B tests on product recommendation further confirm the effectiveness and efficiency of the framework in practice.

Journal ArticleDOI
TL;DR: This paper aims to maximize the entire system energy efficiency, including the macrocell and small cells, in a NOMA HetNet via subchannel and power allocation, using convex relaxation and dual-decomposition techniques.
Abstract: Non-orthogonal multiple access (NOMA) has been considered a key technology in fifth-generation mobile communication networks due to its superior spectrum efficiency. Since heterogeneous networks have emerged to satisfy users’ explosive data rate requirements and the large connectivity of the mobile Internet, implementing the NOMA policy in heterogeneous networks (HetNets) has become an inevitable trend to enhance 5G system throughput and spectrum efficiency. In this paper, we aim to maximize the entire system energy efficiency, including the macrocell and small cells, in a NOMA HetNet via subchannel allocation and power allocation. By considering the co-channel interference and cross-tier interference, the energy-efficient resource allocation problem is formulated as a mixed-integer nonconvex optimization problem. It is challenging to obtain the optimal solution; therefore, a suboptimal algorithm is proposed to alternately optimize the macrocell and small-cell resource allocation. Specifically, convex relaxation and dual-decomposition techniques are exploited to optimize the subchannel allocation and power allocation. Moreover, optimal closed-form power allocation expressions are derived for small cell and macrocell user equipments by the Lagrangian approach. Simulation results show that the proposed algorithms can converge within ten iterations and attain higher system energy efficiency than the reference schemes.

Journal ArticleDOI
TL;DR: An enhanced clustering hierarchy approach has been proposed to achieve energy efficiency in WSNs by using a sleeping-waking mechanism for overlapping and neighboring nodes, so that data redundancy is minimized and network lifetime is maximized.
Abstract: In order to gather data more efficiently, a clustering hierarchy algorithm is used for data communication in wireless sensor networks (WSNs). This algorithm is one of the major techniques to improve the energy efficiency in WSNs and it provides an effective manner to maximize the lifetime of WSNs. Hierarchical protocols based on clustering hierarchy are proposed to save energy of WSNs, in which the nodes with higher remaining energy can be used to collect data and transmit it to a base station. However, most of the previous approaches based on clustering hierarchy have not considered the redundant data collected by adjacent nodes or nodes that overlap each other. In this paper, an enhanced clustering hierarchy (ECH) approach has been proposed to achieve energy efficiency in WSNs by using a sleeping-waking mechanism for overlapping and neighboring nodes. Thus, data redundancy is minimized and network lifetime is maximized. In contrast to previous hierarchical routing protocols, where all nodes are required for collecting and transmitting data, the proposed approach requires only the waking nodes to do these tasks, which are the main sources of energy consumption in WSNs. We implement the ECH approach in homogeneous and heterogeneous networks. Results of the simulation show its effectiveness.
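One simple way to realize a sleeping-waking mechanism for overlapping nodes is a greedy schedule: visit nodes in decreasing residual energy and put a node to sleep if an already-awake node covers the same area. This is a hypothetical sketch for illustration, not the ECH algorithm from the paper.

```python
def schedule_sleep(nodes, overlap_dist):
    """Greedy sleep/wake schedule for redundant sensor nodes (illustrative).

    nodes: list of (node_id, x, y, residual_energy) tuples
    overlap_dist: distance below which two nodes' readings are redundant

    Nodes are visited from highest to lowest residual energy; a node stays
    awake only if no already-awake node overlaps it, so redundant
    neighbours are put to sleep and their energy is preserved.
    Returns the set of node ids scheduled to stay awake.
    """
    awake = []
    for nid, x, y, energy in sorted(nodes, key=lambda n: -n[3]):
        overlapped = any(
            (x - ax) ** 2 + (y - ay) ** 2 <= overlap_dist ** 2
            for _, ax, ay in awake
        )
        if not overlapped:
            awake.append((nid, x, y))
    return {nid for nid, _, _ in awake}
```

Preferring high-energy nodes for duty is what spreads the load and extends network lifetime; re-running the schedule each round rotates the awake set as energies drain.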

Journal ArticleDOI
TL;DR: A novel hierarchical network architecture enabled by software defined networking is proposed, which integrates cross-layer high and low altitude platforms into conventional terrestrial cellular networks to inject additional capacity and expand the coverage for underserved areas in a flexible, seamless, and cost-effective manner.
Abstract: UAVs are expected to be an important complementary component for 5G (and beyond) communication systems to achieve the goal of global access to the Internet for all. To fully exploit the benefits of the distinct features of various UAVs, this article proposes a novel hierarchical network architecture enabled by software defined networking, which integrates cross-layer high and low altitude platforms into conventional terrestrial cellular networks to inject additional capacity and expand the coverage for underserved areas in a flexible, seamless, and cost-effective manner. Specifically, we first present a comprehensive comparison and review of different types of UAVs for communication services. Then, we propose an integrated air-ground heterogeneous network architecture and outline its characteristics and potential advantages. Next, several key enabling techniques for the integrated system are discussed in detail. In addition, we identify the potential application scenarios where the system can further enhance the performance of traditional terrestrial networks, followed by a case study to demonstrate the effectiveness of the proposed architecture. Finally, the discussions on challenges and open research issues are given.

Posted Content
TL;DR: A systematic and in-depth survey of the ML- and DL-based resource management mechanisms in cellular wireless and IoT networks and the techniques specifically designed for HetNets, MIMO and D2D communications, and NOMA networks.
Abstract: Internet-of-Things (IoT) refers to a massively heterogeneous network formed through smart devices connected to the Internet. In the wake of disruptive IoT with a huge amount and variety of data, Machine Learning (ML) and Deep Learning (DL) mechanisms will play a pivotal role to bring intelligence to the IoT networks. Among other aspects, ML and DL can play an essential role in addressing the challenges of resource management in large-scale IoT networks. In this article, we conduct a systematic and in-depth survey of the ML- and DL-based resource management mechanisms in cellular wireless and IoT networks. We start with the challenges of resource management in cellular IoT and low-power IoT networks, review the traditional resource management mechanisms for IoT networks, and motivate the use of ML and DL techniques for resource management in these networks. Then, we provide a comprehensive survey of the existing ML- and DL-based resource allocation techniques in wireless IoT networks and also techniques specifically designed for HetNets, MIMO and D2D communications, and NOMA networks. To this end, we also identify the future research directions in using ML and DL for resource allocation and management in IoT networks.

Journal ArticleDOI
08 Jun 2019-Sensors
TL;DR: The aim of this survey is to provide a comprehensive overview of the design changes brought in the NB-IoT standardization along with the detailed research developments from the perspectives of Physical and MAC layers.
Abstract: Narrowband internet of things (NB-IoT) is a recent cellular radio access technology based on Long-Term Evolution (LTE) introduced by the Third-Generation Partnership Project (3GPP) for Low-Power Wide-Area Networks (LPWAN). The main aim of NB-IoT is to support massive machine-type communication (mMTC) and enable low-power, low-cost, and low-data-rate communication. NB-IoT is based on the LTE design with some changes to meet the mMTC requirements. For example, in the physical (PHY) layer only single-antenna and low-order modulations are supported, and in the Medium Access Control (MAC) layer only one physical resource block is allocated for resource scheduling. The aim of this survey is to provide a comprehensive overview of the design changes brought in the NB-IoT standardization along with the detailed research developments from the perspectives of the Physical and MAC layers. The survey also includes an overview of Evolved Packet Core (EPC) changes to support the Service Capability Exposure Function (SCEF) to manage both IP and non-IP data packets through the Control Plane (CP) and User Plane (UP), as well as the possible deployment scenarios of NB-IoT in future Heterogeneous Wireless Networks (HetNets). Finally, existing and emerging research challenges in this direction are presented to motivate future research activities.

Journal ArticleDOI
TL;DR: In this paper, the authors developed a stochastic geometry-based analytical framework for a millimeter wave (mm-wave) two-tier HetNet with integrated access and backhaul (IAB) where only the macro BSs (MBSs) have fiber access to the core network and the small cell BSs are wirelessly backhauled by the MBSs over mm-wave links.
Abstract: With the emergence of integrated access and backhaul (IAB) in the fifth generation (5G) of cellular networks, backhaul is no longer just a passive capacity constraint in cellular network design. In fact, this tight integration of access and backhaul is one of the key ways in which 5G millimeter wave (mm-wave) heterogeneous cellular networks (HetNets) differ from traditional settings where the backhaul network was designed independently from the radio access network (RAN). With the goal of elucidating key design trends for this new paradigm, we develop a stochastic geometry-based analytical framework for a mm-wave two-tier HetNet with IAB where only the macro BSs (MBSs) have fiber access to the core network and the small cell BSs (SBSs) are wirelessly backhauled by the MBSs over mm-wave links. For this network, we derive the downlink rate coverage probability for two types of resource allocations at the MBS: 1) integrated resource allocation (IRA): where the total bandwidth (BW) is dynamically split between access and backhaul, and 2) orthogonal resource allocation (ORA): where a static partition is defined for the access and backhaul communications. Our analysis concretely demonstrates that offloading users from the MBSs to SBSs may not provide similar rate improvements in an IAB setting as it would in a HetNet with fiber-backhauled SBSs. Our analysis also shows that it is not possible to improve the user rate in an IAB setting by simply densifying the SBSs due to the bottleneck on the rate of wireless backhaul links between MBS and SBS.

Journal ArticleDOI
TL;DR: In this paper, two kinds of event-based protocols based on local sampled information are designed, without the need to solve any matrix equation or inequality, which guarantee the achievement of consensus and the exclusion of Zeno behaviors for jointly connected undirected switching graphs.
Abstract: This paper investigates the distributed event-based consensus problem of switching networks satisfying the jointly connected condition. Both the state consensus of homogeneous linear networks and the output consensus of heterogeneous networks are studied. Two kinds of event-based protocols based on local sampled information are designed, without the need to solve any matrix equation or inequality. Theoretical analysis indicates that the proposed event-based protocols guarantee the achievement of consensus and the exclusion of Zeno behaviors for jointly connected undirected switching graphs. These protocols, relying on no global knowledge of the network topology and independent of switching rules, can be devised and utilized in a completely distributed manner. They are able to avoid continuous information exchanges for either controllers’ updating or triggering functions’ monitoring, which ensures the feasibility of the presented protocols.
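A toy discrete-time sketch conveys the event-based idea: each agent rebroadcasts its state only when it drifts from the last broadcast value by more than a threshold, and the consensus protocol runs on the sampled (broadcast) values only. This single-integrator example is purely illustrative; it omits the paper's switching topologies and Zeno-exclusion analysis.

```python
import numpy as np

def event_triggered_consensus(A, x0, steps=200, dt=0.05, threshold=0.01):
    """Event-based average consensus sketch (discrete time).

    A: (n, n) symmetric adjacency matrix of an undirected graph
    x0: initial agent states
    Each agent i keeps the value xhat_i it last broadcast and rebroadcasts
    only when |x_i - xhat_i| exceeds the threshold; the protocol uses only
    these sampled values, avoiding continuous information exchange.
    Returns the final states and the total number of broadcast events.
    """
    x = np.array(x0, dtype=float)
    xhat = x.copy()  # last broadcast values
    events = 0
    for _ in range(steps):
        trig = np.abs(x - xhat) > threshold  # per-agent event condition
        xhat[trig] = x[trig]
        events += int(trig.sum())
        # u_i = sum_j a_ij * (xhat_j - xhat_i), from broadcast values only
        u = (A * (xhat[None, :] - xhat[:, None])).sum(axis=1)
        x = x + dt * u
    return x, events
```

Because updates fire only on threshold crossings, the number of broadcasts stays well below the number a continuously-communicating protocol would need, at the cost of a small steady-state disagreement on the order of the threshold.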

Journal ArticleDOI
TL;DR: This paper considers a setting where MUs can offload their computations to the MEC server through a small cell base station (SBS); the SBS connects to the macro BS through a wireless backhaul, and the computation resource at the MEC server is shared among offloading MUs.
Abstract: Considered a key technology in 5G networks, mobile edge computing (MEC) can support intensive computation for energy-constrained and computation-limited mobile users (MUs) by offloading various computation and service functions to the edge of mobile networks. In addition to MEC, wireless heterogeneous networks will play an important role in providing high transmission capacity for MUs in 5G, where wireless backhaul is a cost-effective and viable solution to the expensive backhaul deployment issue. In this paper, we consider a setting where MUs can offload their computations to the MEC server through a small cell base station (SBS), the SBS connects to the macro BS through a wireless backhaul, and the computation resource at the MEC server is shared among offloading MUs. First, we formulate a joint optimization problem with the goal of minimizing the system-wide computation overhead. This is a mixed-integer problem, and it is hard to derive the optimal solution. To solve this problem, we propose to decompose it into two subproblems, namely the offloading decision subproblem and the joint backhaul bandwidth and computation resource allocation subproblem. An algorithm, namely JOBCA, is proposed to obtain a feasible solution to the original problem by solving the two subproblems iteratively. Finally, numerical experiments are conducted to verify the performance improvement of the proposed algorithm over two baseline algorithms and its close performance compared with the centralized exhaustive search.

Proceedings ArticleDOI
30 Jan 2019
TL;DR: A Semantic-aware Heterogeneous Network Embedding model (SHNE) is developed that performs joint optimization of heterogeneous SkipGram and deep semantic encoding for capturing both heterogeneous structural closeness and unstructured semantic relations among all nodes, as a function of node content, that exist in the network.
Abstract: Representation learning in heterogeneous networks faces challenges due to heterogeneous structural information of multiple types of nodes and relations, and also due to the unstructured attribute or content (e.g., text) associated with some types of nodes. While many recent works have studied homogeneous, heterogeneous, and attributed network embedding, few works have collectively solved these challenges in heterogeneous networks. In this paper, we address them by developing a Semantic-aware Heterogeneous Network Embedding model (SHNE). SHNE performs joint optimization of heterogeneous SkipGram and deep semantic encoding for capturing both heterogeneous structural closeness and unstructured semantic relations among all nodes, as a function of node content, that exist in the network. Extensive experiments demonstrate that SHNE outperforms state-of-the-art baselines in various heterogeneous network mining tasks, such as link prediction, document retrieval, node recommendation, relevance search, and class visualization.

Journal ArticleDOI
Ping Xuan1, Shuxiang Pan1, Tiangang Zhang1, Yong Liu1, Hao Sun1 
30 Aug 2019-Cells
TL;DR: A novel method based on the graph convolutional network and Convolutional neural network, referred to as GCNLDA, to infer disease-related lncRNA candidates and had superior performance against state-of-the-art prediction methods.
Abstract: Aberrant expression of long non-coding RNAs (lncRNAs) is often associated with diseases, and identification of disease-related lncRNAs is helpful for elucidating complex pathogenesis. Recent methods for predicting associations between lncRNAs and diseases integrate their pertinent heterogeneous data. However, they fail to deeply integrate the topological information of the heterogeneous network comprising lncRNAs, diseases, and miRNAs. We propose a novel method based on the graph convolutional network and the convolutional neural network, referred to as GCNLDA, to infer disease-related lncRNA candidates. First, the heterogeneous network containing lncRNA, disease, and miRNA nodes is constructed. The embedding matrix of an lncRNA-disease node pair is constructed according to various biological premises about lncRNAs, diseases, and miRNAs. A new framework based on a graph convolutional network and a convolutional neural network is developed to learn network and local representations of the lncRNA-disease pair. On the left side of the framework, an autoencoder based on graph convolution deeply integrates topological information within the heterogeneous lncRNA-disease-miRNA network. Moreover, as different node features make discriminative contributions to association prediction, an attention mechanism at the node-feature level is constructed. The left side learns the network representation of the lncRNA-disease pair. The convolutional neural networks on the right side of the framework learn the local representation of the lncRNA-disease pair by focusing on the similarities, associations, and interactions that are related only to the pair. Compared to several state-of-the-art prediction methods, GCNLDA had superior performance. Case studies on stomach cancer, osteosarcoma, and lung cancer confirmed that GCNLDA effectively discovers potential lncRNA-disease associations.
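The graph-convolution step that GCNLDA builds on propagates node features over a normalized adjacency matrix. A minimal sketch of one such propagation step follows, on a toy three-node lncRNA-disease-miRNA graph with made-up features and the learned weight matrix omitted; this shows only the generic GCN rule, not GCNLDA's architecture.

```python
# One graph-convolution propagation step: H' = D^-1/2 (A + I) D^-1/2 H
# (trainable weights and nonlinearity omitted for brevity).

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def gcn_layer(adj, feats):
    """Propagate features over the symmetrically normalized adjacency
    with self-loops added, as in a standard GCN layer."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    deg = [sum(row) for row in a_hat]
    norm = [[a_hat[i][j] / (deg[i] ** 0.5 * deg[j] ** 0.5) for j in range(n)]
            for i in range(n)]
    return matmul(norm, feats)

# Toy heterogeneous graph: node 0 = lncRNA, 1 = disease, 2 = miRNA,
# with the lncRNA linked to both other nodes.
adj = [[0, 1, 1],
       [1, 0, 0],
       [1, 0, 0]]
feats = [[1.0], [0.0], [0.0]]
h1 = gcn_layer(adj, feats)
```

After one step, the lncRNA's feature mass has spread to its disease and miRNA neighbors, which is the "topological integration" the abstract refers to, applied repeatedly with learned weights.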

Proceedings ArticleDOI
03 Nov 2019
TL;DR: This paper systematically formalizes the meta-path guided random walk as a higher-order Markov chain process, and presents a heterogeneous personalized spacey random walk to efficiently and effectively attain the expected stationary distribution among nodes.
Abstract: Heterogeneous information network (HIN) embedding has gained increasing interest recently. However, current random-walk based HIN embedding methods have paid little attention to the higher-order Markov chain nature of meta-path guided random walks, especially to the stationarity issue. In this paper, we systematically formalize the meta-path guided random walk as a higher-order Markov chain process, and present a heterogeneous personalized spacey random walk to efficiently and effectively attain the expected stationary distribution among nodes. We then propose a generalized scalable framework that leverages the heterogeneous personalized spacey random walk to learn embeddings for multiple types of nodes in an HIN guided by a meta-path, a meta-graph, and a meta-schema, respectively. We conduct extensive experiments on several heterogeneous networks and demonstrate that our methods substantially outperform the existing state-of-the-art network embedding algorithms.
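The meta-path guided walk that the paper formalizes constrains each step to the node type prescribed by the meta-path. The sketch below shows that basic mechanism on an invented toy bibliographic HIN; it is the plain guided walk only, without the spacey correction the paper contributes.

```python
# Plain meta-path guided random walk on a toy HIN; graph, typing, and
# meta-path are illustrative, not from the paper.
import random

def metapath_walk(graph, types, start, metapath, length, seed=0):
    """At each step, move only to neighbors whose type matches the next
    symbol of the (cyclic) meta-path, e.g. A-P-A for author-paper-author."""
    rng = random.Random(seed)
    walk = [start]
    for step in range(1, length):
        want = metapath[step % len(metapath)]
        cands = [v for v in graph[walk[-1]] if types[v] == want]
        if not cands:
            break  # no type-conforming neighbor: the walk terminates early
        walk.append(rng.choice(cands))
    return walk

# Toy bibliographic HIN: authors a1, a2 and papers p1, p2.
graph = {"a1": ["p1"], "a2": ["p1", "p2"], "p1": ["a1", "a2"], "p2": ["a2"]}
types = {"a1": "A", "a2": "A", "p1": "P", "p2": "P"}
walk = metapath_walk(graph, types, "a1", ["A", "P"], length=6)
```

Because each step conditions on the position within the meta-path, the process is a higher-order Markov chain over nodes, which is exactly the stationarity subtlety the spacey random walk is designed to handle.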

Journal ArticleDOI
Wenzhong Li1, Han Zhang1, Shaohua Gao1, Chaojing Xue1, Xiaoliang Wang1, Sanglu Lu1 
TL;DR: This paper proposes a learning-based multipath congestion control approach called SmartCC, which adopts an asynchronous reinforcement learning framework to learn a set of congestion rules, together with a hierarchical tile coding algorithm for state aggregation and a function estimation approach for Q-learning that can derive the optimal policy efficiently.
Abstract: The Multipath TCP (MPTCP) protocol has been standardized by the IETF as an extension of conventional TCP, which enables multi-homed devices to establish multiple paths for simultaneous data transmission. Congestion control is a fundamental mechanism for the design and implementation of MPTCP. Due to the diverse QoS characteristics of heterogeneous links, existing multipath congestion control mechanisms suffer from a number of performance problems such as bufferbloat, suboptimal bandwidth usage, etc. In this paper, we propose a learning-based multipath congestion control approach called SmartCC to deal with the diversity of multiple communication paths in heterogeneous networks. SmartCC adopts an asynchronous reinforcement learning framework to learn a set of congestion rules, which allows the sender to observe the environment and take actions to adjust the subflows' congestion windows adaptively to fit different network situations. To deal with the problem of infinite states in a high-dimensional space, we propose a hierarchical tile coding algorithm for state aggregation and a function estimation approach for Q-learning, which can derive the optimal policy efficiently. Due to the asynchronous design of SmartCC, the processes of model training and execution are decoupled, and the learning process does not introduce extra delay or overhead into the decision-making process of MPTCP congestion control. We conduct extensive experiments for performance evaluation, which show that SmartCC improves the aggregate throughput significantly and outperforms state-of-the-art mechanisms on a variety of performance metrics.
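Tile coding, which SmartCC uses for state aggregation, discretizes a continuous state with several overlapping grids so that nearby states share most of their active tiles. The sketch below illustrates the generic technique; the state variables and tiling offsets are assumptions for demonstration, not SmartCC's exact discretization.

```python
# Illustrative tile coding for state aggregation: several shifted grids
# over a continuous state vector (e.g. RTT, throughput -- hypothetical).

def tile_indices(state, n_tilings=4, tile_width=1.0):
    """Return one active tile per tiling. Each tiling is shifted by a
    fraction of the tile width, so nearby states share most (but not
    all) of their active tiles, giving coarse generalization."""
    active = []
    for t in range(n_tilings):
        offset = t * tile_width / n_tilings
        coords = tuple(int((s + offset) // tile_width) for s in state)
        active.append((t, coords))
    return active

# Two nearby states activate overlapping tile sets.
a = tile_indices([2.3, 0.7])
b = tile_indices([2.6, 0.7])
shared = len(set(a) & set(b))  # most tiles coincide, a few differ
```

A Q-function approximated as a sum of per-tile weights then generalizes across nearby congestion states while still distinguishing them, which is why tile coding tames the "infinite states" problem mentioned above.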

Journal ArticleDOI
TL;DR: A network embedding method, Mashup, was adopted to extract essential and informative drug features from several drug heterogeneous networks, representing different properties of drugs, and was superior to the models incorporating other machine learning algorithms and one previous model.
Abstract: The side effects of drugs are not only harmful to humans but also the major reason for withdrawing approved drugs, bringing greater risks for pharmaceutical companies. However, detecting the side effects of a given drug via traditional experiments is time-consuming and expensive. In recent years, several computational methods have been proposed to predict the side effects of drugs. However, most of these methods cannot effectively integrate the heterogeneous properties of drugs. In this study, we adopted a network embedding method, Mashup, to extract essential and informative drug features from several drug heterogeneous networks representing different properties of drugs. For side effects, a network was also built, from which side effect features were extracted. These features can capture essential information about drugs and side effects at the network level. Drug and side effect features were combined to represent each pair of drug and side effect, which was deemed a sample in this study. Furthermore, they were fed into a random forest (RF) algorithm to construct the prediction model, called the RF network model. The RF network model was evaluated by several tests. The average Matthews correlation coefficient on the balanced and unbalanced datasets was 0.640 and 0.641, respectively. The RF network model was superior to models incorporating other machine learning algorithms and one previous model. Finally, we also investigated the influence of two feature dimension parameters on the RF network model and found that our model is not very sensitive to these parameters.
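The Matthews correlation coefficient reported above is a standard balanced measure for binary classifiers and can be computed directly from the confusion matrix; the helper below is generic and not tied to the paper's RF network model.

```python
# Matthews correlation coefficient from binary confusion-matrix counts.
import math

def mcc(tp, tn, fp, fn):
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)).
    Returns 0.0 when any marginal is empty (undefined denominator)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

MCC ranges from -1 to +1 and, unlike accuracy, stays informative on unbalanced datasets, which is why the study reports it for both balanced and unbalanced settings.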

Journal ArticleDOI
08 Nov 2019-Sensors
TL;DR: Different paradigms for wireless access networks such as millimetre-wave communications, Long-Term Evolution in unlicensed spectrum, ultra-dense heterogeneous networks, device-to-device communications and massive multiple-input multiple-output communications have been analysed as possible technologies for improvement of wireless networks energy efficiency.
Abstract: Although information and communications technologies (ICTs) have the potential to enable powerful social, economic and environmental benefits, ICT systems make a non-negligible contribution to world electricity consumption and carbon dioxide (CO2) footprint. This contribution will persist, since the increased demand for users' connectivity and an explosion of traffic volumes necessitate continuous expansion of current ICT services and deployment of new infrastructures and technologies which must ensure the expected user experience and performance. In this paper, analyses of the costs of the global annual energy consumption of telecommunication networks, an estimation of the ICT sector's CO2 footprint contribution, and predictions of the energy consumption of all connected user-related devices and equipment in the period 2011–2030 are presented. Since the presented estimations of network energy consumption trends for the main communication sectors by 2030 show that the highest contributions to global energy consumption will come from wireless access networks and data centres (DCs), the rest of the paper analyses technologies and concepts which can contribute to energy-efficiency improvements in these two sectors. More specifically, different paradigms for wireless access networks such as millimetre-wave communications, Long-Term Evolution in unlicensed spectrum, ultra-dense heterogeneous networks, device-to-device communications and massive multiple-input multiple-output communications have been analysed as possible technologies for the improvement of wireless network energy efficiency. Additionally, approaches related to DC resource management, DC power management, green DC monitoring and thermal management in DCs have been discussed as promising approaches to the improvement of DC power usage efficiency. For each of the analysed technologies, future research challenges and open issues have been summarised and discussed. Lastly, an overview of the accepted papers in the Special Issue dedicated to green, energy-efficient and sustainable networks is presented.

Journal ArticleDOI
TL;DR: A subspace-based local identification approach using local observations is presented, which consists of subspace intersection operations in both the temporal and spatial domains for the identification of discrete-time LTI systems that are interconnected by their unmeasurable states.

Journal ArticleDOI
TL;DR: In this article, the authors present a comprehensive literature review on applications of economic and pricing theory for resource management in the evolving fifth generation (5G) wireless networks, including user association, spectrum allocation, and interference and power management.
Abstract: This paper presents a comprehensive literature review on applications of economic and pricing theory for resource management in the evolving fifth generation (5G) wireless networks. The 5G wireless networks are envisioned to overcome the existing limitations of cellular networks in terms of data rate, capacity, latency, energy efficiency, spectrum efficiency, coverage, reliability, and cost per information transfer. To achieve these goals, 5G systems will adopt emerging technologies such as massive multiple-input multiple-output, mmWave communications, and dense heterogeneous networks. However, 5G involves multiple entities and stakeholders that may have different objectives, e.g., high data rate, low latency, utility maximization, and revenue/profit maximization. This poses a number of challenges for the resource management design of 5G. While traditional solutions may be neither efficient nor applicable, economic and pricing models have recently been developed and adopted as useful tools to achieve these objectives. In this paper, we review economic and pricing approaches proposed to address resource management issues in 5G wireless networks, including user association, spectrum allocation, and interference and power management. Furthermore, we present applications of economic and pricing models for wireless caching and mobile data offloading. Finally, we highlight important challenges, open issues and future research directions of applying economic and pricing models to 5G wireless networks.

Journal ArticleDOI
TL;DR: This paper describes the Tactile Internet's human-in-the-loop-centric design principles and haptic communications models, and elaborate on the development of decentralized cooperative dynamic bandwidth allocation algorithms for end-to-end resource coordination in fiber-wireless (FiWi) access networks.
Abstract: Historically, research efforts in optical networks have focused on the goal of continuously increasing capacity rather than on lowering end-to-end latency. This slowly started to change in the access environment with post-Next-Generation Passive Optical Network 2 research. The emphasis on latency grew in importance with the introduction of 5G ultra-reliable and low-latency communication requirements. In this paper, we focus on the emerging Tactile Internet as one of the most interesting 5G low-latency applications enabling novel immersive experiences. After describing the Tactile Internet's human-in-the-loop-centric design principles and haptic communications models, we elaborate on the development of decentralized cooperative dynamic bandwidth allocation algorithms for end-to-end resource coordination in fiber-wireless (FiWi) access networks. We then use machine learning in the context of FiWi enhanced heterogeneous networks to decouple haptic feedback from the impact of extensive propagation delays. This enables humans to perceive remote task environments in time at a 1-ms granularity.

Journal ArticleDOI
TL;DR: This paper discusses the issues related to interference management for 5G network from the perspective of Heterogeneous Network and Device-to-Device communication, by using enabling techniques, such as Inter-cell Interference Coordination, Coordinated Multipoint, and Coordinated Scheduling.
Abstract: The future wireless Fifth Generation (5G) communication network requires higher bandwidth in order to achieve greater data rates. It will be largely characterized by small cell deployments, typically with cell radii of at most 200 meters. The implementation of small-cell networks delivers various advantages such as high data rates and low signal delay. However, it also suffers from various issues such as inter-cell, intra-cell, and inter-user interference. This paper discusses the issues related to interference management for the 5G network from the perspective of Heterogeneous Networks and Device-to-Device communication, using enabling techniques such as Inter-cell Interference Coordination, Coordinated Multipoint, and Coordinated Scheduling. Furthermore, several pertinent issues have been critically reviewed, focusing on their methodologies, advantages and limitations along with future work. Future directions proposed by the 3rd Generation Partnership Project for interference mitigation have also been outlined. This review will act as a guide for researchers to comprehend various existing and emerging interference mitigation techniques for further exploration and smooth implementation of the 5G wireless network.

Journal ArticleDOI
TL;DR: A mobile positioning method that adopts recurrent neural network algorithms to analyze the received signal strength indications from heterogeneous networks for estimating the locations of mobile stations, and finds that lower location errors can be obtained by recurrent neural networks with multiple consecutive timestamps.
Abstract: This study proposes a mobile positioning method that adopts recurrent neural network algorithms to analyze the received signal strength indications from heterogeneous networks (e.g., cellular networks and Wi-Fi networks) for estimating the locations of mobile stations. Recurrent neural networks with multiple consecutive timestamps can be applied to extract the features of time series data to improve location estimation. In the practical experimental environment, there were 4525 records, 59 different base stations, and 582 different Wi-Fi access points detected at Fuzhou University in China. Lower location errors can be obtained by recurrent neural networks with multiple consecutive timestamps (e.g., two timestamps and three timestamps); from the experimental results, it can be observed that the average location estimation error was 9.19 m for the proposed mobile positioning method with two timestamps.
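Feeding "multiple consecutive timestamps" to a recurrent model amounts to windowing the RSSI stream into short overlapping sequences. The sketch below shows only that preprocessing step, with fabricated readings; the window size of 2 mirrors the two-timestamp setting above, and the actual RNN is omitted.

```python
# Group a stream of RSSI vectors into overlapping multi-timestamp windows;
# an RNN would consume each window as one input sequence.

def make_sequences(rssi_readings, timestamps=2):
    """Return all windows of `timestamps` consecutive RSSI vectors."""
    return [rssi_readings[i:i + timestamps]
            for i in range(len(rssi_readings) - timestamps + 1)]

# Each vector: signal strengths (dBm) from three detected transmitters
# (values fabricated for illustration).
readings = [[-70, -85, -90], [-72, -84, -91], [-71, -86, -89]]
windows = make_sequences(readings, timestamps=2)
```

Each window pairs the current reading with its predecessor, letting the recurrent layers exploit short-term temporal correlation in the signal strengths rather than estimating from a single snapshot.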