
Showing papers on "Heterogeneous network published in 2022"


Journal ArticleDOI
TL;DR: This work proposes a three-channel framework together with a novel Heterogeneous Edge-enhanced graph ATtention network (HEAT) that can realize simultaneous trajectory predictions for multiple agents under complex traffic situations, and achieve state-of-the-art performance with respect to prediction accuracy.
Abstract: Simultaneous trajectory prediction for multiple heterogeneous traffic participants is essential for safe and efficient operation of connected automated vehicles under complex driving situations. Two main challenges for this task are to handle the varying number of heterogeneous target agents and jointly consider multiple factors that would affect their future motions. This is because different kinds of agents have different motion patterns, and their behaviors are jointly affected by their individual dynamics, their interactions with surrounding agents, as well as the traffic infrastructures. A trajectory prediction method handling these challenges will benefit the downstream decision-making and planning modules of autonomous vehicles. To meet these challenges, we propose a three-channel framework together with a novel Heterogeneous Edge-enhanced graph ATtention network (HEAT). Our framework is able to deal with the heterogeneity of the target agents and traffic participants involved. Specifically, agents’ dynamics are extracted from their historical states using type-specific encoders. The inter-agent interactions are represented with a directed edge-featured heterogeneous graph and processed by the designed HEAT network to extract interaction features. In addition, the map features are shared across all agents by introducing a selective gate mechanism. Finally, the trajectories of multiple agents are predicted simultaneously. Validations using both urban and highway driving datasets show that the proposed model can realize simultaneous trajectory predictions for multiple agents under complex traffic situations and achieve state-of-the-art performance with respect to prediction accuracy. The achieved final displacement error (FDE@3sec) is 0.66 meters under urban driving, demonstrating the feasibility and effectiveness of the proposed approach.
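The edge-enhanced attention idea at the core of this approach can be illustrated with a minimal, hedged sketch (this is not the authors' HEAT implementation): an attention score is computed from the target node, the source node, and the directed edge feature, normalised over each node's incoming neighborhood, and used to weight the aggregation. The function names, feature sizes, and random data below are illustrative placeholders.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def edge_enhanced_attention(h, edge_feat, edges, W, a):
    """Toy edge-featured graph attention layer (illustrative, not the paper's HEAT).

    h         : (N, F) node features
    edge_feat : dict (i, j) -> (Fe,) feature of the directed edge j -> i
    edges     : list of (i, j) meaning node j sends a message to node i
    W         : (F, D) shared node projection
    a         : (2*D + Fe,) attention vector
    """
    z = h @ W                                   # project node features
    out = np.zeros_like(z)
    for i in range(h.shape[0]):
        nbrs = [j for (dst, j) in edges if dst == i]
        if not nbrs:
            out[i] = z[i]
            continue
        # score each incoming edge from [target, source, edge] features
        scores = np.array([
            np.tanh(np.concatenate([z[i], z[j], edge_feat[(i, j)]]) @ a)
            for j in nbrs
        ])
        alpha = softmax(scores)                 # normalise over the neighborhood
        out[i] = sum(w * z[j] for w, j in zip(alpha, nbrs))
    return out

# tiny usage example with random data
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 16))
a = rng.normal(size=(2 * 16 + 3,))
edges = [(0, 1), (0, 2), (1, 3)]
edge_feat = {e: rng.normal(size=3) for e in edges}
print(edge_enhanced_attention(h, edge_feat, edges, W, a).shape)  # (4, 16)
```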

62 citations


Journal ArticleDOI
TL;DR: An iterative block coordinate descent-based algorithm which exploits the semi-definite relaxation, the S-procedure, and the singular value decomposition method is developed and results reveal that the proposed algorithm outperforms existing algorithms in terms of fairness, EE, and outage probability.
Abstract: The energy efficiency (EE) of femtocells is always limited by the surrounding radio environments in heterogeneous networks (HetNets), such as walls and obstacles. In this paper, we propose to deploy reconfigurable intelligent surfaces (RISs) to improve the EE of femtocells. However, perfect channel state information is more difficult to obtain due to the passive characteristics of RISs and non-cooperative relationship between different tiers. Besides, the low-cost transceivers and reflecting units suffer nontrivial hardware impairments (HWIs) due to the hardware limitations of practical systems. To this end, we investigate a realistic robust beamforming design based on max-min fairness for an RIS-aided HetNet under channel uncertainties and residual HWIs. The joint optimization of transmit beamforming vectors of femto base stations (FBSs) and the phase-shift matrices of RISs is formulated as a non-convex problem to maximize the minimum EE of the femtocell subject to the constraints of the maximum transmit power of FBSs, the quality of service of users, and unit modulus phase-shift constraints of RISs. We develop an iterative block coordinate descent-based algorithm which exploits the semi-definite relaxation, the S-procedure, and the singular value decomposition method. Simulation results reveal that the proposed algorithm outperforms existing algorithms in terms of fairness, EE, and outage probability.
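The max-min energy-efficiency structure described above can be summarised in a generic form; the expression below omits the channel-uncertainty (S-procedure) and hardware-impairment terms of the actual robust formulation, and the symbols (w_k for the FBS beamformer of femto user k, Θ for the RIS phase-shift matrix, R_k for the achievable rate, η for amplifier efficiency, P_c for circuit power) are illustrative rather than the paper's notation:

```latex
\max_{\{\mathbf{w}_k\},\,\boldsymbol{\Theta}}\ \min_{k}\
\frac{R_k\!\left(\{\mathbf{w}_k\},\boldsymbol{\Theta}\right)}
     {\eta^{-1}\lVert\mathbf{w}_k\rVert^2 + P_{\mathrm{c}}}
\qquad \text{s.t.}\qquad
\sum_{k}\lVert\mathbf{w}_k\rVert^2 \le P_{\max},\qquad
R_k \ge R_k^{\min}\ \ \forall k,\qquad
\lvert\theta_n\rvert = 1\ \ \forall n .
```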

58 citations


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a digital forensics tool to protect end users in 5G heterogeneous networks, which is built based on deep learning and can realize the detection of attacks via classification.
Abstract: The upcoming 5G heterogeneous networks (HetNets) have attracted much attention worldwide. Large amounts of high-velocity data can be transported by using the bandwidth spectrum of HetNets, yielding both great benefits and several concerning issues. In particular, great harm to our community could occur if the main visual information channels, such as images and videos, are maliciously attacked and uploaded to the Internet, where they can be spread quickly. Therefore, we propose a novel framework as a digital forensics tool to protect end users. It is built based on deep learning and can realize the detection of attacks via classification. Compared with conventional methods, and as justified by our experiments, the data collection efficiency, robustness, and detection performance of the proposed model are all improved. In addition, assisted by 5G HetNets, our proposed framework makes it possible to provide high-quality real-time forensics services on edge consumer devices such as cell phones and laptops, which brings substantial practical value. Some discussions are also carried out to outline potential future threats.

42 citations


Journal ArticleDOI
TL;DR: In this paper, an elastic cell-zooming algorithm based on the quality of service and traffic loads of end-users is proposed, which adaptively adjusts the transmission power of small cells in order to reduce energy consumption.
Abstract: Long-term evolution advanced (LTE-A) heterogeneous networks have been observed to offer reliable and service-differentiated communication, thereby enabling numerous mobile applications such as smart meters, remote sensors, and vehicular applications. This fact envisions the trend of Internet of Things (IoT) underlaying heterogeneous small cell networks. On this basis, this paper proposes an energy-efficient framework for such a scenario, where multitier heterogeneous small cell networks provide wireless connection and seamless coverage for mobile users and IoT nodes. In our proposed framework, an elastic cell-zooming algorithm based on the quality of service and traffic loads of end-users is performed by adaptively adjusting the transmission power of small cells in order to reduce energy consumption. In addition, aiming at the high energy efficiency of IoT underlaying small cell networks, a clustering-based IoT structure is used, where a SWIPT-CH selection algorithm is proposed to maximize the average residual energy of IoT nodes and to mitigate resource competition between IoT nodes and mobile users. Extensive simulations demonstrate that our proposed framework can significantly enhance the energy efficiency for IoT underlaying small cell networks with guaranteed outage probability.
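A minimal sketch of the cell-zooming idea (illustrative, not the paper's algorithm): each small cell scales its transmit power with its current load and the QoS margin of its attached users, within hardware bounds. The thresholds, step size, and function name are assumptions.

```python
def zoom_cell(p_tx_dbm, load, qos_margin_db,
              p_min_dbm=10.0, p_max_dbm=30.0,
              low_load=0.3, high_load=0.8, step_db=1.0):
    """Elastic cell zooming: shrink (zoom in) lightly loaded cells to save
    energy, expand (zoom out) heavily loaded ones. Thresholds are illustrative."""
    if load < low_load and qos_margin_db > step_db:
        p_tx_dbm -= step_db          # zoom in: reduce power, save energy
    elif load > high_load:
        p_tx_dbm += step_db          # zoom out: absorb more traffic
    return min(max(p_tx_dbm, p_min_dbm), p_max_dbm)

# example: a lightly loaded cell with 5 dB of QoS margin lowers its power
print(zoom_cell(p_tx_dbm=24.0, load=0.2, qos_margin_db=5.0))  # 23.0
```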

35 citations


Journal ArticleDOI
TL;DR: An intelligent-driven green resource allocation mechanism for the IIoT under 5G heterogeneous networks is proposed, which can achieve better performance than other traditional deep learning (DL) methods and maintain service quality above accepted levels.
Abstract: The Industrial Internet of Things (IIoT) is one of the important applications under the 5G massive machine type of communication (mMTC) scenario. To ensure the high reliability of IIoT services, it is necessary to apply an efficient resource allocation method under the dynamic and complex environment. In view of the absence of energy-efficient resource management architecture for the entire network, this article proposes an intelligent-driven green resource allocation mechanism for the IIoT under 5G heterogeneous networks. First, an intelligent end-to-end self-organizing resource allocation framework for IIoT service is given. Next, an energy-efficient resource allocation model within the framework is proposed. It is then solved by an intelligent mechanism with the asynchronous advantage actor critic driven deep reinforcement learning algorithm. Through the comparison analysis of different methods and rewards under IIoT scenarios with proper parameters setting, the proposed method can achieve better performance than other traditional deep learning (DL) methods and maintain service quality above accepted levels as well.
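The actor-critic update behind this mechanism can be sketched with a synchronous advantage actor-critic (A2C) loss in PyTorch; this is a generic simplification of the asynchronous (A3C) scheme named above, and the network sizes, state/action dimensions, and coefficients are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Tiny shared-trunk actor-critic: a policy over resource-allocation actions
    plus a state-value head (dimensions are illustrative)."""
    def __init__(self, state_dim=16, n_actions=8, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_actions)
        self.value = nn.Linear(hidden, 1)

    def forward(self, s):
        z = self.trunk(s)
        return self.policy(z), self.value(z).squeeze(-1)

def a2c_loss(model, states, actions, returns, value_coef=0.5, entropy_coef=0.01):
    """Policy-gradient loss with an advantage baseline and entropy bonus."""
    logits, values = model(states)
    dist = torch.distributions.Categorical(logits=logits)
    advantage = returns - values.detach()          # A(s,a) = R - V(s)
    policy_loss = -(dist.log_prob(actions) * advantage).mean()
    value_loss = (returns - values).pow(2).mean()
    entropy = dist.entropy().mean()
    return policy_loss + value_coef * value_loss - entropy_coef * entropy

# toy usage with random rollout data
model = ActorCritic()
states = torch.randn(32, 16)
actions = torch.randint(0, 8, (32,))
returns = torch.randn(32)
loss = a2c_loss(model, states, actions, returns)
loss.backward()
```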

29 citations


Journal ArticleDOI
TL;DR: Based on virtual network architecture and deep reinforcement learning (DRL), the authors model SAGIN’s heterogeneous resource orchestration as a multi-domain virtual network embedding (VNE) problem.
Abstract: Traditional ground wireless communication networks cannot provide high-quality services for artificial intelligence (AI) applications such as intelligent transportation systems (ITS) due to deployment, coverage and capacity issues. The space-air-ground integrated network (SAGIN) has become a research focus in the industry. Compared with traditional wireless communication networks, SAGIN is more flexible and reliable, and it has wider coverage and higher quality of seamless connection. However, due to its inherent heterogeneity, time-varying and self-organizing characteristics, the deployment and use of SAGIN still face huge challenges, among which the orchestration of heterogeneous resources is a key issue. Based on virtual network architecture and deep reinforcement learning (DRL), we model SAGIN’s heterogeneous resource orchestration as a multi-domain virtual network embedding (VNE) problem, and propose a SAGIN cross-domain VNE algorithm. We model the different network segments of SAGIN, and set the network attributes according to the actual situation of SAGIN and user needs. In the DRL setup, the agent is realized by a five-layer policy network. We build a feature matrix based on network attributes extracted from SAGIN and use it as the agent training environment. Through training, the probability of each underlying node being embedded can be derived. In the test phase, we complete the embedding process of virtual nodes and links in turn based on this probability. Finally, we verify the effectiveness of the algorithm from both the training and testing perspectives.
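A minimal sketch of the node-embedding step (illustrative only): a small policy maps each candidate substrate node's feature vector to a score, a softmax turns the scores into embedding probabilities, and virtual nodes are placed by sampling without reuse. The five-layer architecture and feature set of the paper are not reproduced here; the linear scoring function and feature columns are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def node_scores(feature_matrix, weights):
    """Score each substrate node from its feature row (toy linear 'policy')."""
    return feature_matrix @ weights

def embed_virtual_nodes(feature_matrix, weights, n_virtual):
    """Place one virtual node per step by sampling the softmax over substrate
    node scores, without reusing a substrate node."""
    scores = node_scores(feature_matrix, weights)
    placement = []
    available = list(range(feature_matrix.shape[0]))
    for _ in range(n_virtual):
        s = scores[available]
        p = np.exp(s - s.max()); p /= p.sum()          # embedding probabilities
        choice = rng.choice(len(available), p=p)
        placement.append(available.pop(choice))
    return placement

# toy substrate with 6 nodes, 4 features (e.g., CPU, bandwidth, delay, domain)
features = rng.random((6, 4))
weights = rng.normal(size=4)
print(embed_virtual_nodes(features, weights, n_virtual=3))
```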

23 citations


Journal ArticleDOI
TL;DR: An adaptive particle swarm optimization (PSO) ensemble with genetic mutation-based routing is proposed to select control nodes for IoT-based software-defined WSN, and simulation results show that the proposed algorithm outperforms other algorithms under different arrangements of the network.
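Since only the summary is shown for this entry, the following is a generic sketch of particle swarm optimization with a genetic-style mutation step, the combination the TL;DR describes; the fitness function, bounds, and hyperparameters are placeholders, not the paper's control-node routing objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_with_mutation(fitness, dim, n_particles=20, iters=100,
                      w=0.7, c1=1.5, c2=1.5, p_mut=0.1, bounds=(0.0, 1.0)):
    """Minimise `fitness` with PSO; random resets act as genetic mutation."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        mutate = rng.random(x.shape) < p_mut           # genetic-style mutation
        x[mutate] = rng.uniform(lo, hi, mutate.sum())
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# toy fitness: pick control-node "weights" that minimise a quadratic cost
best, cost = pso_with_mutation(lambda p: np.sum((p - 0.3) ** 2), dim=5)
print(best.round(2), round(cost, 4))
```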

22 citations


Journal ArticleDOI
TL;DR: This technique integrates three common Multi-Attribute Decision-Making techniques, notably the Fuzzy Analytic Hierarchy Process (FAHP), Entropy, and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), to take into consideration user preferences for every prospective network as well as the real scenario of heterogeneous networks.
Abstract: Mobile workstations are frequently used in the challenging environments of heterogeneous networks. Users must move between various networks for a myriad of purposes, which requires vertical handover. During handover, it is critical for the mobile station to quickly pick the most appropriate network from all identified alternative connections, avoiding the ping-pong effect to the greatest extent feasible. Based on a combination of network characteristics as well as user preference, this study offers a heterogeneous network selection method. This technique integrates three common Multi-Attribute Decision-Making (MADM) techniques, notably the Fuzzy Analytic Hierarchy Process (FAHP), Entropy, and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), to take into consideration user preferences for every prospective network as well as the real scenario of heterogeneous networks. For different traffic classes, FAHP is first utilized to determine the weights of network parameters and the utility values of the available options. Next, the entropy method and TOPSIS are utilized to obtain the objective weights of the network factors and the utility values of the different options. The most suitable network, whose utility value is the largest and exceeds that of the mobile station's current network, is chosen to provide access, using the utility value of each prospective network as a threshold. The suggested method not only avoids the one-sided character of any single algorithm but also dynamically adjusts the contribution of each method to the final decision based on real needs. The proposed model was compared to three existing hybrid methods. The results showed that it could precisely choose the optimal network connection and significantly reduce the number of vertical handoffs. It also provides the requisite Quality of Service (QoS) and Quality of Experience (QoE) in terms of the quantitative benefits of vertical handovers.
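The entropy-weighting and TOPSIS steps are standard and can be sketched as follows; the FAHP weighting and the paper's rule for blending subjective and objective weights are omitted, and the attribute matrix (rows are candidate networks, columns are criteria such as bandwidth, delay, cost) is purely illustrative.

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from Shannon entropy of the normalised matrix."""
    P = X / X.sum(axis=0)
    P = np.where(P == 0, 1e-12, P)              # avoid log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    d = 1 - e                                   # degree of divergence
    return d / d.sum()

def topsis(X, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    benefit[j] is True if criterion j is better when larger."""
    R = X / np.sqrt((X ** 2).sum(axis=0))       # vector normalisation
    V = R * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)              # higher = better network

# candidate networks x criteria: [bandwidth (Mb/s), delay (ms), cost]
X = np.array([[100.0, 30.0, 5.0],
              [ 50.0, 10.0, 2.0],
              [150.0, 60.0, 8.0]])
benefit = np.array([True, False, False])        # only bandwidth is a benefit
w = entropy_weights(X)
print(topsis(X, w, benefit))                    # utility score per network
```

In the full method, the FAHP-derived subjective weights for each traffic class would be combined with these entropy weights before the TOPSIS ranking.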

18 citations


Journal ArticleDOI
TL;DR: In this paper, a multi-agent double deep Q network (DDQN)-based approach was proposed to jointly optimize the beamforming vectors and power splitting ratio in multi-user multiple-input single-output (MU-MISO) simultaneous wireless information and power transfer (SWIPT)-enabled heterogeneous networks (HetNets).
Abstract: This paper proposes a multi-agent double deep Q network (DDQN)-based approach to jointly optimize the beamforming vectors and power splitting (PS) ratio in multi-user multiple-input single-output (MU-MISO) simultaneous wireless information and power transfer (SWIPT)-enabled heterogeneous networks (HetNets), where a macro base station (MBS) and several femto base stations (FBSs) serve multiple macro user equipments (MUEs) and femto user equipments (FUEs). The PS receiver architecture is deployed at FUEs. An optimization problem is formulated to maximize the achievable sum information rate of FUEs under the constraints of the achievable information rate requirements of MUEs and FUEs and the energy harvesting (EH) requirements of FUEs. Since the optimization problem is challenging to handle due to the high dimension and time-varying environment, an efficient multi-agent DDQN-based algorithm is presented, which is trained in a centralized manner and runs in a distributed manner, where two sets of deep neural network parameters are jointly updated and trained to tackle the problem and avoid overestimation. To facilitate the presented multi-agent DDQN-based algorithm, the action space, the state space and the reward function are designed, where the codebook matrix is employed to deal with the complex transmit beamforming vectors. Simulation results validate the proposed algorithm. Notable performance gains are achieved by the proposed algorithm due to considering the beam directions in the action space and the adaptability to the Doppler frequency shifts. Besides, the proposed algorithm is shown to be superior to other benchmark ones numerically.
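The key double-DQN ingredient, decoupling action selection from action evaluation to curb overestimation, can be shown in a few lines. This is a generic sketch; the codebook-based action space, reward design, and multi-agent centralized training of the paper are not reproduced, and the sample Q-values are made up.

```python
import numpy as np

def ddqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Double DQN target: the online network picks the next action,
    the target network evaluates it."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))          # selection by online net
    return reward + gamma * q_target_next[a_star]   # evaluation by target net

# toy example: 4 candidate beamforming/power-splitting actions
q_online_next = np.array([0.2, 1.3, 0.7, 0.1])
q_target_next = np.array([0.3, 0.9, 1.1, 0.0])
print(ddqn_target(reward=0.5, q_online_next=q_online_next,
                  q_target_next=q_target_next))     # 0.5 + 0.99 * 0.9
```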

16 citations


Proceedings ArticleDOI
12 Aug 2022
TL;DR: The proposed Multiplex Heterogeneous Graph Convolutional Network (MHGCN) can automatically learn the useful heterogeneous meta-path interactions of different lengths in multiplex heterogeneous networks through multi-layer convolution aggregation and effectively integrates both multi-relation structural signals and attribute semantics into the learned node embeddings with both unsupervised and semi-supervised learning paradigms.
Abstract: Heterogeneous graph convolutional networks have gained great popularity in tackling various network analytical tasks on heterogeneous network data, ranging from link prediction to node classification. However, most existing works ignore the relation heterogeneity with multiplex network between multi-typed nodes and different importance of relations in meta-paths for node embedding, which can hardly capture the heterogeneous structure signals across different relations. To tackle this challenge, this work proposes a Multiplex Heterogeneous Graph Convolutional Network (MHGCN) for heterogeneous network embedding. Our MHGCN can automatically learn the useful heterogeneous meta-path interactions of different lengths in multiplex heterogeneous networks through multi-layer convolution aggregation. Additionally, we effectively integrate both multi-relation structural signals and attribute semantics into the learned node embeddings with both unsupervised and semi-supervised learning paradigms. Extensive experiments on five real-world datasets with various network analytical tasks demonstrate the significant superiority of MHGCN against state-of-the-art embedding baselines in terms of all evaluation metrics. The source code of our method is available at: https://github.com/NSSSJSS/MHGCN.
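The core aggregation, summing relation-specific graph convolutions with per-relation importance weights so that useful meta-path combinations emerge as layers are stacked, can be sketched as a toy forward pass. This is illustrative only: training, MHGCN's exact normalisation, and its decoder are omitted, and the relation weights here are fixed instead of learned.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalisation D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def multiplex_gcn_layer(H, adjs, W, beta):
    """One layer: weighted sum over relation-specific propagations.
    adjs: list of (N, N) adjacency matrices, one per relation type.
    beta: per-relation importance weights (learnable in the real model)."""
    A_fused = sum(b * normalize_adj(A) for b, A in zip(beta, adjs))
    return np.maximum(A_fused @ H @ W, 0.0)      # ReLU

# toy multiplex graph: 5 nodes, 2 relation types
rng = np.random.default_rng(0)
A1 = (rng.random((5, 5)) > 0.6).astype(float); A1 = np.maximum(A1, A1.T)
A2 = (rng.random((5, 5)) > 0.6).astype(float); A2 = np.maximum(A2, A2.T)
H = rng.normal(size=(5, 4))
W = rng.normal(size=(4, 8))
H1 = multiplex_gcn_layer(H, [A1, A2], W, beta=[0.7, 0.3])
H2 = multiplex_gcn_layer(H1, [A1, A2], rng.normal(size=(8, 8)), beta=[0.7, 0.3])
print(H2.shape)   # stacking layers mixes longer meta-path interactions
```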

15 citations


Journal ArticleDOI
TL;DR: A systematic, in-depth, and comprehensive survey of the applications of DRL techniques in RRAM for next generation wireless networks to guide and stimulate more research endeavors towards building efficient and fine-grained DRL-based RRAM schemes for future wireless networks.
Abstract: Next generation wireless networks are expected to be extremely complex due to their massive heterogeneity in terms of the types of network architectures they incorporate, the types and numbers of smart IoT devices they serve, and the types of emerging applications they support. In such large-scale and heterogeneous networks (HetNets), radio resource allocation and management (RRAM) becomes one of the major challenges encountered during system design and deployment. In this context, emerging Deep Reinforcement Learning (DRL) techniques are expected to be one of the main enabling technologies to address the RRAM in future wireless HetNets. In this paper, we conduct a systematic, in-depth, and comprehensive survey of the applications of DRL techniques in RRAM for next generation wireless networks. Towards this, we first overview the existing traditional RRAM methods and identify their limitations that motivate the use of DRL techniques in RRAM. Then, we provide a comprehensive review of the most widely used DRL algorithms to address RRAM problems, including the value- and policy-based algorithms. The advantages, limitations, and use-cases for each algorithm are provided. We then conduct a comprehensive and in-depth literature review and classify existing related works based on both the radio resources they are addressing and the type of wireless networks they are investigating. To this end, we carefully identify the types of DRL algorithms utilized in each related work, the elements of these algorithms, and the main findings of each related work. Finally, we highlight important open challenges and provide insights into several future research directions in the context of DRL-based RRAM. This survey is intentionally designed to guide and stimulate more research endeavors towards building efficient and fine-grained DRL-based RRAM schemes for future wireless networks.

Journal ArticleDOI
TL;DR: An overview of the interference issues relating to the B5G networks from the perspective of HetNets, D2D, Ultra-Dense Networks (UDNs), and Unmanned Aerial Vehicles (UAVs) is provided.
Abstract: Beyond Fifth Generation (B5G) networks are expected to be the most efficient cellular wireless networks with greater capacity, lower latency, and higher speed than the current networks. Key enabling technologies, such as millimeter-wave (mm-wave), beamforming, Massive Multiple-Input Multiple-Output (M-MIMO), Device-to-Device (D2D), Relay Node (RN), and Heterogeneous Networks (HetNets) are essential to enable the new network to keep growing. In the forthcoming wireless networks with massive random deployment, frequency re-use strategies and multiple low power nodes, severe interference issues will impact the system. Consequently, interference management represents the main challenge for future wireless networks, commonly referred to as B5G. This paper provides an overview of the interference issues relating to the B5G networks from the perspective of HetNets, D2D, Ultra-Dense Networks (UDNs), and Unmanned Aerial Vehicles (UAVs). Furthermore, the existing interference mitigation techniques are discussed by reviewing the latest relevant studies with a focus on their methods, advantages, limitations, and future directions. Moreover, the open issues and future directions to reduce the effects of interference are also presented. The findings of this work can act as a guide to better understand the current and developing methodologies to mitigate the interference issues in B5G networks.

Journal ArticleDOI
TL;DR: In this article, the authors explored the energy efficiency of 5G mobile networks from the energy consumption and network power efficiency perspective, considering varying high-volume traffic load, the number of antennas, varying bandwidth, and varying density of low power nodes (LPNs).

Journal ArticleDOI
01 Feb 2022-Sensors
TL;DR: This paper proposes a heterogeneous network model for IoV and service-oriented network optimization that focuses on three key networking entities: vehicular cloud, heterogeneous communication, and smart use cases as clients.
Abstract: Heterogeneous vehicular communication on the Internet of connected vehicle (IoV) environment is an emerging research theme toward achieving smart transportation. It is an evolution of the existing vehicular ad hoc network architecture due to the increasingly heterogeneous nature of the various existing networks in road traffic environments that need to be integrated. The existing literature on vehicular communication is lacking in the area of network optimization for heterogeneous network environments. In this context, this paper proposes a heterogeneous network model for IoV and service-oriented network optimization. The network model focuses on three key networking entities: vehicular cloud, heterogeneous communication, and smart use cases as clients. Most traffic-related data–oriented computations are performed at cloud servers for making intelligent decisions. The connection component enables handoff-centric network communication in heterogeneous vehicular environments. The use-case-oriented smart traffic services are implemented as clients for the network model. The model is tested for various service-oriented metrics in heterogeneous vehicular communication environments with the aim of affirming several service benefits. Future challenges and issues in heterogeneous IoV environments are also highlighted.

Journal ArticleDOI
TL;DR: DSG-DTI as mentioned in this paper uses a heterogeneous graph autoencoder and heterogeneous attention network-based matrix completion to predict drug-target interactions and can generalize to newly registered drugs and targets with slight performance degradation.
Abstract: Drug target interaction prediction is a crucial stage in drug discovery. However, brute-force search over a compound database is financially infeasible. We have witnessed an increasing number of measured drug-target interaction records in recent years, and the rich drug/protein-related information allows the usage of graph machine learning. Despite the advances in deep learning-enabled drug-target interaction prediction, there are still open challenges: (1) the rich and complex relationships between drugs and proteins remain to be fully explored; (2) the intermediate nodes are not calibrated in the heterogeneous graph. To tackle the above issues, this paper proposes a framework named DSG-DTI. Specifically, DSG-DTI consists of a heterogeneous graph autoencoder and heterogeneous attention network-based matrix completion. Our framework ensures that the known types of nodes (e.g., drugs, targets, side effects, diseases) are precisely embedded into a high-dimensional space with our pretraining strategy. Also, the attention-based heterogeneous graph matrix completion achieves highly competitive results via effective extraction of long-range dependencies. We verify our model on two public benchmarks. The results on these two publicly available benchmarks show that the proposed scheme effectively predicts drug-target interactions and can generalize to newly registered drugs and targets with only slight performance degradation, outperforming the other baselines in accuracy.
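The matrix-completion idea, scoring unobserved drug-target pairs from learned embeddings, reduces to an inner-product decoder. The sketch below is hedged: the heterogeneous graph autoencoder and attention encoder of DSG-DTI are not reproduced, and the embeddings and the known-interaction mask are random stand-ins.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def complete_interaction_matrix(drug_emb, target_emb):
    """Predict the full drug x target interaction matrix from embeddings
    with an inner-product decoder (illustrative)."""
    return sigmoid(drug_emb @ target_emb.T)

def top_candidates(scores, known_mask, k=3):
    """Rank unobserved pairs (where known_mask is False) by predicted score."""
    masked = np.where(known_mask, -np.inf, scores)
    flat = np.argsort(masked, axis=None)[::-1][:k]
    return [np.unravel_index(i, scores.shape) for i in flat]

rng = np.random.default_rng(0)
drug_emb = rng.normal(size=(6, 16))     # 6 drugs, 16-dim embeddings
target_emb = rng.normal(size=(4, 16))   # 4 protein targets
known = rng.random((6, 4)) > 0.8        # already-measured interactions
scores = complete_interaction_matrix(drug_emb, target_emb)
print(top_candidates(scores, known))    # most promising unmeasured pairs
```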

Journal ArticleDOI
TL;DR: In this paper, the authors investigated secure user association, power and sub-carrier allocation for secrecy rate maximization in N-tier HetNets using the DU-De strategy.
Abstract: Coverage, capacity and throughput can be enhanced significantly by offering a hybrid solution consisting of microwave (μW) high power base stations (HPBs) underlaid with millimetre wave (mW) low power base stations (LPBs), augmented by a downlink uplink decoupled (DU-De) user association strategy, in N-tier heterogeneous networks (HetNets). The diversity present in HetNets paves the way for malicious eavesdroppers to wiretap the channels of legitimate users. However, secure user association together with power and μW and mW sub-carrier allocation employing the DU-De strategy has not been investigated in the past. We formulate mathematical models for the DU-De strategy and the downlink uplink coupled (DU-Co) strategy to investigate secure user association and the allocation of power and sub-carriers in the μW and mW bands for secrecy rate maximization in N-tier HetNets. The formulated problems are complex, challenging and NP-hard. We use an ɛ-optimal algorithm to solve the formulated problems and achieve an ɛ-optimal solution. Extensive simulation results in terms of secure user association and average secrecy rate show the effectiveness of the DU-De strategy over the DU-Co strategy in HetNets.
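The secrecy rate being maximized here follows the standard physical-layer-security definition for a legitimate user u overheard by an eavesdropper e; the generic form below does not reproduce the paper's N-tier SINR expressions or its DU-De association variables:

```latex
R_{\mathrm{sec}} = \Big[\log_2\!\big(1+\mathrm{SINR}_{u}\big) - \log_2\!\big(1+\mathrm{SINR}_{e}\big)\Big]^{+},
\qquad [x]^{+} \triangleq \max(x,0).
```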

Journal ArticleDOI
TL;DR: In this paper, a coordinated user association and spectrum allocation scheme based on non-cooperative game theory is proposed for novel multi-tier HetNets with disparate spectrum (microwave and millimeter wave).
Abstract: Dense deployment of small cells operating on different frequency bands based on multiple technologies provides a fundamental way to face the imminent thousand-fold traffic augmentation. This heterogeneous network (HetNet) architecture enables efficient traffic offloading among different tiers and technologies. However, research on multi-tier HetNets where various tiers share the same microwave spectrum has been well-addressed over the past years. Therefore, our work is targeted towards novel multi-tier HetNets with disparate spectrum (microwave and millimeter wave). In fact, despite the huge capacity brought by millimeter-wave technology, the latter will fail to provide universal coverage, especially indoors, and so mmWave will inevitably co-exist with a traditional sub-6GHz cellular network. In this work, we propose coordinated user association and spectrum allocation by resorting to non-cooperative game theory. In fact, in such an arduous context, efficient distributed solutions are imperative. Extensive simulation results show the superiority of our coordinated approach in comparison with state-of-the-art heuristics. Moreover, we evaluate the impact of various network parameters, such as mmWave density, cell load, and user distribution and density, offering valuable guidelines for practical 5G HetNet design. Finally, we assess the benefit brought by massive MIMO for mmWave in such a highly heterogeneous setting.
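A minimal sketch of distributed, game-theoretic user association (illustrative, not the paper's game): each user iteratively best-responds by picking the base station that maximizes a load-aware utility, and the process stops when no user wants to switch. The rate matrix and utility form are assumptions.

```python
import numpy as np

def best_response_association(rate, max_iters=100):
    """rate[u, b]: achievable rate of user u on BS b if served alone.
    The utility divides the rate by the BS load, so users avoid crowded cells."""
    n_users, n_bs = rate.shape
    assoc = np.zeros(n_users, dtype=int)          # start: everyone on BS 0
    for _ in range(max_iters):
        changed = False
        load = np.bincount(assoc, minlength=n_bs)
        for u in range(n_users):
            load[assoc[u]] -= 1                   # remove own contribution
            utility = rate[u] / (load + 1)        # load-aware utility
            best = int(np.argmax(utility))
            if best != assoc[u]:
                assoc[u], changed = best, True
            load[assoc[u]] += 1
        if not changed:                           # no user wants to deviate
            break
    return assoc

rng = np.random.default_rng(0)
rates = rng.uniform(1, 10, size=(8, 3))           # 8 users, 3 BSs (e.g., mmWave/sub-6)
print(best_response_association(rates))
```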

Journal ArticleDOI
TL;DR: An intelligent radio access network (RAN) architecture for the integrated 6G network is presented, which targets balancing the computation loads and fronthaul burden and achieving service-awareness for heterogeneous and distributed requests from users.
Abstract: The integration of space–air–ground–sea networking in 6G, which is expected to not only achieve seamless coverage but also offer service-aware access and transmission, has introduced many new challenges for current mobile communications systems. Service awareness requires the 6G network to be aware of the demands of a diverse range of services as well as the occupation, utilization, and variation of network resources, which will enable the capability of deriving more intelligent and effective solutions for complicated heterogeneous resource configuration. Following this trend, this article investigates potential techniques that may improve service-aware radio access using the heterogeneous 6G network. We start with a discussion on the evolution of cloud-based RAN architectures from 5G to 6G, and then we present an intelligent radio access network (RAN) architecture for the integrated 6G network, which targets balancing the computation loads and fronthaul burden and achieving service-awareness for heterogeneous and distributed requests from users. In order for the service-aware access and transmissions to be equipped for future heterogeneous 6G networks, we analyze the challenges and potential solutions for the heterogeneous resource configuration, including a tightly coupled cross-layer design, resource service-aware sensing and allocation, transmission over multiple radio access technologies (RAT), and user socialization for cloud extension. Finally, we briefly explore some promising and crucial research topics on service-aware radio access for 6G networks.

Journal ArticleDOI
TL;DR: In this article, the authors proposed an innovative multi-user cost-efficient crowd-assisted delivery and computing (MEC-DC) framework, which leverages mobile edge computing and end-user resources to support high performance VR content delivery over 5G and beyond heterogeneous networks (5G-HetNets).
Abstract: The latest evolution of wireless communications enables users to access rich Virtual Reality (VR) services via the Internet, including while on the move. However, providing a premium immersive experience for a massive number of concurrent users with various device configurations is a significant challenge due to the ultra-high data rate and ultra-low delay requirements of live VR services. This paper introduces an innovative multi-user cost-efficient crowd-assisted delivery and computing (MEC-DC) framework, which leverages mobile edge computing and end-user resources to support high performance VR content delivery over 5G-and-beyond heterogeneous networks (5G-HetNets). The proposed MEC-DC framework is based on three main solutions. The first is a novel buffer-nadir-based multicast (BNM) mechanism for VR transmissions over 5G-HetNets. BNM ensures a smooth and synchronized user viewing experience by maximizing the average playback buffer-nadir of all participants with stochastic optimization. The second and third are practical distributed algorithms: the cost-efficient multicast-aware transcoding offloading (MATO) and crowd-assisted delivery algorithm (CAD), which jointly optimize multicast delivery and video transcoding. The algorithms' optimality and complexity were investigated. The proposed MATO-CAD solution was evaluated with real datasets, trace-driven numerical simulations, and prototype-based experiments. The trace-driven experimental results showed that the proposed solution provides an 18% throughput improvement, the lowest delay and the best playback freeze ratio in comparison with three other state-of-the-art solutions.

Proceedings ArticleDOI
18 Feb 2022
TL;DR: A unified framework covering most HGNNs is proposed, consisting of three components: heterogeneous linear transformation, heterogeneous graph transformation, and heterogeneous message passing layer, and a platform Space4HGNN is built, which offers modularized components, reproducible implementations, and standardized evaluation for HGNNs.
Abstract: Heterogeneous Graph Neural Network (HGNN) has been successfully employed in various tasks, but we cannot accurately know the importance of different design dimensions of HGNNs due to diverse architectures and applied scenarios. Besides, in the research community of HGNNs, implementing and evaluating various tasks still need much human effort. To mitigate these issues, we first propose a unified framework covering most HGNNs, consisting of three components: heterogeneous linear transformation, heterogeneous graph transformation, and heterogeneous message passing layer. Then we build a platform Space4HGNN by defining a design space for HGNNs based on the unified framework, which offers modularized components, reproducible implementations, and standardized evaluation for HGNNs. Finally, we conduct experiments to analyze the effect of different designs. With the insights found, we distill a condensed design space and verify its effectiveness.
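The three components of the unified framework can be sketched as a toy forward pass (illustrative, not Space4HGNN's API): type-specific linear transformations map every node type into a shared space, a heterogeneous graph transformation (here simply one adjacency matrix per relation) defines the message routes, and a message passing layer aggregates over relations. The author/paper toy schema, matrices, and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) heterogeneous linear transformation: per node type, project to a shared dim
feat = {"author": rng.normal(size=(3, 5)), "paper": rng.normal(size=(4, 7))}
W_type = {t: rng.normal(size=(x.shape[1], 8)) for t, x in feat.items()}
h = {t: x @ W_type[t] for t, x in feat.items()}          # all types now 8-dim

# 2) heterogeneous graph transformation: one adjacency per relation (src -> dst)
#    writes[a, p] = 1 means author a wrote paper p
writes = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 1, 1]], dtype=float)
relations = [("author", "paper", writes), ("paper", "author", writes.T)]

# 3) heterogeneous message passing: mean-aggregate neighbors per relation,
#    then add the per-relation messages into the destination type
def message_passing(h, relations):
    out = {t: x.copy() for t, x in h.items()}
    for src, dst, A in relations:
        deg = A.sum(axis=0, keepdims=True).clip(min=1.0)  # in-degree of dst nodes
        out[dst] = out[dst] + (A.T @ h[src]) / deg.T      # mean over src neighbors
    return {t: np.maximum(x, 0.0) for t, x in out.items()}  # ReLU

h1 = message_passing(h, relations)
print(h1["author"].shape, h1["paper"].shape)   # (3, 8) (4, 8)
```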

Journal ArticleDOI
TL;DR: In this article, a DRL model was proposed for solving the network selection problem with the aim of optimizing medical data delivery over heterogeneous health systems, and an optimization model was formulated to minimize the transmission energy consumption and latency, while meeting diverse applications' quality of service (QoS) requirements.
Abstract: Smart health systems improve our quality of life by integrating diverse information and technologies into health and medical practices. Such technologies can significantly improve the existing health services. However, reliability, latency, and limited network resources are among the many challenges hindering the realization of smart health systems. Thus, in this paper, we leverage the dense heterogeneous network (HetNet) architecture over 5G networks to enhance network capacity and provide seamless connectivity for smart health systems. However, network selection in HetNets is still a challenging problem that needs to be addressed. Inspired by the success of Deep Reinforcement Learning (DRL) in solving complicated control problems, we present a novel DRL model for solving the network selection problem with the aim of optimizing medical data delivery over heterogeneous health systems. Specifically, we formulate an optimization model that integrates the network selection problem with adaptive compression, at the network edge, to minimize the transmission energy consumption and latency, while meeting diverse applications’ Quality of Service (QoS) requirements. Our experimental results show that the proposed DRL-based model can minimize the energy consumption and latency compared to greedy techniques, while meeting different users’ demands in highly dynamic environments.
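A compact sketch of the kind of objective such a DRL agent could optimise, combining transmission energy, latency, and a QoS-violation penalty for a given (network, compression level) choice. The weights, the linear compression model, and the numbers are placeholders, not the paper's formulation.

```python
def network_selection_reward(energy_j, latency_ms, latency_budget_ms,
                             alpha=0.6, beta=0.4, penalty=10.0):
    """Negative weighted cost of a (network, compression level) choice,
    with an extra penalty if the medical application's deadline is missed."""
    cost = alpha * energy_j + beta * latency_ms
    if latency_ms > latency_budget_ms:          # QoS violation
        cost += penalty
    return -cost

def apply_compression(size_mb, rate_mbps, energy_per_mb_j, ratio):
    """Adaptive edge compression: a higher ratio shrinks the payload,
    trading a little compute for less transmission energy and latency."""
    sent_mb = size_mb * (1.0 - ratio)
    latency_ms = 1000.0 * sent_mb * 8 / rate_mbps
    energy_j = sent_mb * energy_per_mb_j
    return energy_j, latency_ms

# example: compare two access networks for a 5 MB record with 40% compression
for name, rate, e_per_mb in [("wifi", 50.0, 0.02), ("cellular", 20.0, 0.05)]:
    e, l = apply_compression(5.0, rate, e_per_mb, ratio=0.4)
    print(name, round(network_selection_reward(e, l, latency_budget_ms=800), 3))
```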

Journal ArticleDOI
TL;DR: In this paper, the authors provide a survey on the intelligent load balancing models that have been developed in HetNets, including those based on machine learning (ML) technology.
Abstract: The massive growth of mobile users and the essential need for high communication service quality necessitate the deployment of ultra-dense heterogeneous networks (HetNets) consisting of macro, micro, pico and femto cells. Each cell type provides different cell coverage and distinct system capacity in HetNets. This leads to the pressing need to balance loads between cells, especially with the random distribution of users in numerous mobility directions. This paper provides a survey on the intelligent load balancing models that have been developed in HetNets, including those based on machine learning (ML) technology. The survey provides a guideline and a roadmap for developing cost-effective, flexible and intelligent load balancing models in future HetNets. An overview of the generic problem of load balancing is also presented. The concept of load balancing is first introduced, and its purpose, functionality and evaluation criteria are then explained. Besides, a basic load balancing model and its operational procedure are described. A comprehensive literature review is then conducted, including techniques and solutions for addressing the load balancing problem. The key performance indicators (KPIs) used in the evaluation of load balancing models in HetNets are presented, along with the relationship of load balancing with coverage and capacity optimisation (CCO) and mobility robustness optimisation (MRO). A comprehensive literature review of ML-driven load balancing solutions is specifically accomplished to show the historical development of load balancing models. Finally, the current challenges in implementing these models are explained, as well as the future operational aspects of load balancing.

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors developed a heterogeneous graph neural network model, named HGDTI, which includes a learning phase of network node embedding and a training phase of DTI classification.
Abstract: In new drug discovery research, traditional wet-lab experiments take a long time. Predicting drug-target interactions (DTIs) in silico can greatly narrow the search scope for candidate medications. A well-designed algorithmic model can be more effective in revealing the potential connections between drugs and targets in the bioinformatics network composed of drugs, proteins and other related data. In this work, we have developed a heterogeneous graph neural network model, named HGDTI, which includes a learning phase of network node embedding and a training phase of DTI classification. This method first obtains the molecular fingerprint information of drugs and the pseudo amino acid composition information of proteins, then extracts the initial features of nodes through a Bi-LSTM, and uses the attention mechanism to aggregate heterogeneous neighbors. In several comparative experiments, HGDTI significantly outperforms other state-of-the-art DTI prediction models, and negative sampling is employed to further optimize the predictive power of the model. In addition, we have demonstrated the robustness of HGDTI through heterogeneous network content reduction tests, and its rationality through other comparative experiments. These results indicate that HGDTI can utilize heterogeneous information to capture the embeddings of drugs and targets, and provide assistance for drug development. For the convenience of related researchers, a user-friendly web server has been established at http://bioinfo.jcu.edu.cn/hgdti .

Journal ArticleDOI
TL;DR: DTIHNC as mentioned in this paper integrates heterogeneous graph attention operations to update the embedding of a node based on information in its 1-hop neighbors and, for multi-hop neighbor information, uses a random walk with restart aware graph attention to integrate more information through a larger neighborhood region.
Abstract: Accurate identification of drug-target interactions (DTIs) plays a crucial role in drug discovery. Compared with traditional experimental methods that are labor-intensive and time-consuming, computational methods have become more and more popular in recent years. Conventional computational methods typically view heterogeneous networks that integrate diverse drug-related and target-related datasets without fully exploring drug and target similarities. In this paper, we propose a new method, named DTIHNC, for Drug-Target Interaction identification, which integrates Heterogeneous Networks and Cross-modal similarities calculated from relations between drugs, proteins, diseases and side effects. Firstly, the low-dimensional features of drugs, proteins, diseases and side effects are obtained from the original features by a denoising autoencoder. Then, we construct a heterogeneous network across drug, protein, disease and side-effect nodes. In the heterogeneous network, we exploit heterogeneous graph attention operations to update the embedding of a node based on information in its 1-hop neighbors, and for multi-hop neighbor information, we propose a random walk with restart aware graph attention to integrate more information through a larger neighborhood region. Next, we calculate cross-modal drug and protein similarities from cross-scale relations between drugs, proteins, diseases and side effects. Finally, a multiple-layer convolutional neural network deeply integrates the similarity information of drugs and proteins with the embedding features obtained from the heterogeneous graph attention network. Experiments have demonstrated its effectiveness and better performance than state-of-the-art methods. Datasets and a stand-alone package are provided on Github at https://github.com/ningq669/DTIHNC.
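The random-walk-with-restart score that widens the attention neighborhood is a standard iteration; a minimal sketch follows (illustrative only, the coupling with graph attention in DTIHNC is not reproduced, and the toy adjacency simply flattens drug/protein/disease/side-effect nodes into one graph).

```python
import numpy as np

def rwr_scores(A, seed, restart=0.3, tol=1e-8, max_iter=1000):
    """Random walk with restart on adjacency A from a seed node.
    Returns steady-state visiting probabilities (multi-hop proximity)."""
    n = A.shape[0]
    W = A / A.sum(axis=0, keepdims=True).clip(min=1e-12)   # column-stochastic
    p0 = np.zeros(n); p0[seed] = 1.0
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * (W @ p) + restart * p0
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

# toy heterogeneous graph flattened into one adjacency matrix
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
print(rwr_scores(A, seed=0).round(3))   # higher score = stronger multi-hop relevance
```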

Journal ArticleDOI
TL;DR: RHGT as discussed by the authors constructs a drug-gene-disease interactive network based on biological data, and then proposes a three-level network embedding model, which learns network embeddings at fine-grained subtype-level, node-level and coarse-grained edge-level, respectively.
Abstract: Drug repurposing refers to the discovery of new medical indications for existing chemical drugs, which has great pharmaceutical significance. Recently, large-scale biological datasets are increasingly available, and many graph neural network (GNN) based methods for drug repurposing have been developed. These methods often deem drug repurposing as a link prediction problem, which mines features of biological data to identify drug–disease associations (i.e., drug–disease links). Due to the heterogeneity of the data, we need to deeply explore the heterogeneous information of the biological network for drug repurposing. In this paper, we propose a Relation-aware Heterogeneous Graph Transformer (RHGT) model to capture heterogeneous information for drug repurposing. We first construct a drug–gene–disease interactive network based on biological data, and then propose a three-level network embedding model, which learns network embeddings at the fine-grained subtype level, node level and coarse-grained edge level, respectively. The output of the subtype level is the input of the node level and edge level, and the output of the node level is the input of the edge level. We obtain edge embeddings at the edge level, which integrate edge-type embeddings and node embeddings. We deem that in this way, the characteristics of the drug–gene–disease interactive network can be captured more comprehensively. Finally, we identify drug–disease associations (i.e., drug–disease links) based on the relationship between drug–gene edge embeddings and gene–disease edge embeddings. Experimental results show that our model performs better than other state-of-the-art graph neural network methods, which validates the effectiveness of the proposed model. • A novel neural model, called RHGT, is proposed for drug repurposing. • RHGT characterizes the heterogeneity of the network at the node level and edge level. • A fine-grained method is developed to learn edge-type embeddings. • RHGT achieves state-of-the-art performance on the CTD and TTD datasets.

Journal ArticleDOI
TL;DR: In this paper, a comprehensive review of handover management in future mobile ultra-dense HetNets is presented to highlight its contribution to providing seamless connectivity during user mobility.

Journal ArticleDOI
01 Oct 2022-Sensors
TL;DR: A comprehensive review of existing standards and enabling technologies is proposed, a taxonomy is defined to classify the different elements characterizing multi-connectivity in 5G and future networks, and lessons common to these different contexts are presented.
Abstract: To manage a growing number of users and an ever-increasing demand for bandwidth, future 5th Generation (5G) cellular networks will combine different radio access technologies (cellular, satellite, and WiFi, among others) and different types of equipment (pico-cells, femto-cells, small-cells, macro-cells, etc.). Multi-connectivity is an emerging paradigm aiming to leverage this heterogeneous architecture. To achieve this, multi-connectivity proposes to enable UE to simultaneously use component carriers from different and heterogeneous network nodes: base stations, WiFi access points, etc. This could offer many benefits in terms of quality of service, energy efficiency, fairness, mobility, and spectrum and interference management. Therefore, this survey aims to present an overview of multi-connectivity in 5G networks and beyond. To do so, a comprehensive review of existing standards and enabling technologies is proposed. Then, a taxonomy is defined to classify the different elements characterizing multi-connectivity in 5G and future networks. Thereafter, existing research works using multi-connectivity to improve the quality of service, energy efficiency, fairness, mobility management, and spectrum and interference management are analyzed and compared. In addition, lessons common to these different contexts are presented. Finally, open challenges for multi-connectivity in 5G networks and beyond are discussed.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a novel algorithm called opportune context-aware network selection (OCANS), which dynamically and automatically takes into account customer context when deciding on the best network, to achieve maximum user satisfaction.