
Showing papers on "Network topology published in 2020"


Journal ArticleDOI
TL;DR: Network representation learning as discussed by the authors is a new learning paradigm to embed network vertices into a low-dimensional vector space, by preserving network topology structure, vertex content, and other side information.
Abstract: With the widespread use of information technologies, information networks are becoming increasingly popular to capture complex relationships across various disciplines, such as social networks, citation networks, telecommunication networks, and biological networks. Analyzing these networks sheds light on different aspects of social life such as the structure of societies, information diffusion, and communication patterns. In reality, however, the large scale of information networks often makes network analytic tasks computationally expensive or intractable. Network representation learning has been recently proposed as a new learning paradigm to embed network vertices into a low-dimensional vector space, by preserving network topology structure, vertex content, and other side information. This facilitates the original network to be easily handled in the new vector space for further analysis. In this survey, we perform a comprehensive review of the current literature on network representation learning in the data mining and machine learning field. We propose new taxonomies to categorize and summarize the state-of-the-art network representation learning techniques according to the underlying learning mechanisms, the network information intended to preserve, as well as the algorithmic designs and methodologies. We summarize evaluation protocols used for validating network representation learning including published benchmark datasets, evaluation methods, and open source algorithms. We also perform empirical studies to compare the performance of representative algorithms on common datasets, and analyze their computational complexity. Finally, we suggest promising research directions to facilitate future study.
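For orientation, one representative technique in the family this survey covers is random-walk-based embedding. A minimal DeepWalk-style sketch (illustrative hyperparameters and a toy graph; assumes the networkx and gensim packages) looks like this:

    import random
    import networkx as nx
    from gensim.models import Word2Vec

    def random_walks(graph, num_walks=10, walk_length=40):
        """Truncated random walks, later treated as 'sentences' for skip-gram."""
        walks = []
        for _ in range(num_walks):
            for node in graph.nodes():
                walk = [node]
                while len(walk) < walk_length:
                    neighbors = list(graph.neighbors(walk[-1]))
                    if not neighbors:
                        break
                    walk.append(random.choice(neighbors))
                walks.append([str(n) for n in walk])
        return walks

    G = nx.karate_club_graph()                         # toy social network
    model = Word2Vec(random_walks(G), vector_size=64,  # 64-d embedding space
                     window=5, sg=1, min_count=0)
    vec = model.wv["0"]                                # embedding of vertex 0

Vertices that co-occur on short walks end up close in the vector space, which is one concrete way the "preserve network topology structure" goal above is realized.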

494 citations


Journal ArticleDOI
TL;DR: This review summarizes the fundamentals as well as major progress by focusing on the chemistry to design structures including principle, synthetic strategies and control methods, and proposes major fundamental issues to be addressed in chemistry and future directions from physics, materials and application perspectives.
Abstract: A new approach has been developed to design organic polymers using topology diagrams. This strategy enables covalent integration of organic units into ordered topologies and creates a new polymer form, that is, covalent organic frameworks. This is a breakthrough in chemistry because it sets a molecular platform for synthesizing polymers with predesignable primary and high-order structures, which has been a central aim for over a century but unattainable with traditional design principles. This new field has its own features that are distinct from conventional polymers. This Review summarizes the fundamentals as well as major progress by focusing on the chemistry used to design structures, including the principles, synthetic strategies, and control methods. We scrutinize built-in functions that are specific to the structures by revealing various interplays and mechanisms involved in the expression of function. We propose major fundamental issues to be addressed in chemistry as well as future directions from physics, materials, and application perspectives.

328 citations


Journal ArticleDOI
TL;DR: A novel distributed-reference-observer-based fault-tolerant tracking control approach is established, under which the global tracking errors are proved to be asymptotically convergent in the presence of actuator failures.
Abstract: In this paper, for linear leader–follower networks with multiple heterogeneous actuator faults, including partial loss of effectiveness fault and actuator bias fault, a cooperative fault-tolerant control (CFTC) approach is developed. Assume that the interaction network topology among all nodes is a switching directed graph. To address the difficulty of designing the distributed compensation control laws under the time-varying asymmetrical network structure, a novel distributed-reference-observer-based fault-tolerant tracking control approach is established, under which the global tracking errors are proved to be asymptotically convergent in the presence of actuator failures. First, by constructing a group of distributed reference observers based on neighborhood state information, all followers can estimate the leader’s state trajectories directly. Second, a decentralized adaptive fault-tolerant tracking controller via local estimation is designed to achieve the global synchronization. Furthermore, the reliable coordination problem under switching directed topology with intermittent communications is solved by utilizing the presented CFTC approach. Finally, the effectiveness of the proposed coordination control protocol is illustrated by its applications to a networked aircraft system.

220 citations


Proceedings Article
20 Jul 2020
TL;DR: MCUNet, a framework that jointly designs the efficient neural architecture (TinyNAS) and the lightweight inference engine (TinyEngine), enabling ImageNet-scale inference on microcontrollers, is proposed, suggesting that the era of always-on tiny machine learning on IoT devices has arrived.
Abstract: Machine learning on tiny IoT devices based on microcontroller units (MCU) is appealing but challenging: the memory of microcontrollers is 2-3 orders of magnitude smaller than even that of mobile phones. We propose MCUNet, a framework that jointly designs the efficient neural architecture (TinyNAS) and the lightweight inference engine (TinyEngine), enabling ImageNet-scale inference on microcontrollers. TinyNAS adopts a two-stage neural architecture search approach that first optimizes the search space to fit the resource constraints, then specializes the network architecture in the optimized search space. TinyNAS can automatically handle diverse constraints (i.e., device, latency, energy, memory) under low search costs. TinyNAS is co-designed with TinyEngine, a memory-efficient inference library to expand the search space and fit a larger model. TinyEngine adapts the memory scheduling according to the overall network topology rather than layer-wise optimization, reducing the memory usage by 4.8x, and accelerating the inference by 1.7-3.3x compared to TF-Lite Micro and CMSIS-NN. MCUNet is the first to achieve >70% ImageNet top-1 accuracy on an off-the-shelf commercial microcontroller, using 3.5x less SRAM and 5.7x less Flash compared to quantized MobileNetV2 and ResNet-18. On visual and audio wake words tasks, MCUNet achieves state-of-the-art accuracy and runs 2.4-3.4x faster than MobileNetV2 and ProxylessNAS-based solutions with 3.7-4.1x smaller peak SRAM. Our study suggests that the era of always-on tiny machine learning on IoT devices has arrived. Code and models can be found here: this https URL.
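The two-stage idea can be sketched in a hedged, schematic form. This is not the authors' code; estimate_peak_sram_kb and evaluate below are invented stand-ins for a real memory profiler and accuracy predictor:

    import itertools
    import random

    WIDTHS = [0.25, 0.5, 0.75, 1.0]       # width multipliers (placeholder grid)
    RESOLUTIONS = [96, 128, 160, 192]     # input resolutions (placeholder grid)
    SRAM_BUDGET_KB = 100                  # hypothetical MCU activation budget

    def estimate_peak_sram_kb(width, resolution):
        # Toy proxy: peak activation memory grows with width * resolution^2.
        return width * resolution ** 2 * 4 / 1024

    # Stage 1: optimize the search space -- keep only configurations that fit.
    feasible = [(w, r) for w, r in itertools.product(WIDTHS, RESOLUTIONS)
                if estimate_peak_sram_kb(w, r) <= SRAM_BUDGET_KB]

    # Stage 2: specialize within the pruned space (random scoring stands in
    # for the paper's one-shot/evolutionary architecture search).
    def evaluate(cfg):
        return random.random()            # placeholder for predicted accuracy

    best_width, best_resolution = max(feasible, key=evaluate)

The point of the first stage is that the expensive second-stage search never wastes evaluations on architectures that cannot run on the target device.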

170 citations


Journal ArticleDOI
TL;DR: In this article, the authors presented a fully connected quantum communication network on a city-wide scale without active switching or trusted nodes and demonstrated simultaneous and secure connections between all 28 pairings of eight users.
Abstract: Quantum communication is rapidly gaining popularity due to its high security and technological maturity. However, most implementations are limited to just two communicating parties (users). Quantum communication networks aim to connect a multitude of users. Here, we present a fully connected quantum communication network on a city-wide scale without active switching or trusted nodes. We demonstrate simultaneous and secure connections between all 28 pairings of eight users. Our novel network topology is easily scalable to many users, allows traffic management features, and minimizes the infrastructure as well as the user hardware needed.

151 citations


Journal ArticleDOI
TL;DR: The main purpose of the addressed filtering problem is to design a set of distributed filters such that, in the simultaneous presence of the RR transmission protocol, the multirate mechanism, and the bounded noises, there exists a certain ellipsoid that includes all possible error states at each time instant.
Abstract: In this paper, the distributed set-membership filtering problem is dealt with for a class of time-varying multirate systems in sensor networks with a communication protocol. To relieve the communication burden, the Round-Robin (RR) protocol is exploited to orchestrate the transmission order, under which each sensor node only broadcasts partial information to both the corresponding local filter and its neighboring nodes. In order to meet the practical transmission requirements as well as reduce communication cost, the multirate strategy is proposed to govern the sampling/update rate of the plant, the sensors, and the filters. By means of the lifting technique, the augmented filtering error system is established with a unified sampling rate. The main purpose of the addressed filtering problem is to design a set of distributed filters such that, in the simultaneous presence of the RR transmission protocol, the multirate mechanism, and the bounded noises, there exists a certain ellipsoid that includes all possible error states at each time instant. Then, the desired distributed filter gains are obtained by minimizing such an ellipsoid in the sense of the minimum trace of the weighted matrix. The proposed resource-efficient filtering algorithm is of a recursive form, thereby facilitating online implementation. A numerical simulation example is given to demonstrate the effectiveness of the proposed protocol-based distributed filter design method.
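The "certain ellipsoid" here is the standard set-membership construct. In common notation (a paraphrase, not the paper's exact symbols), the requirement is that the filtering error e_k always stays inside

    \mathcal{E}_k = \{\, e : e^\top P_k^{-1} e \le 1 \,\}

for all admissible bounded noises, and the filter gains are obtained by minimizing \mathrm{tr}(W P_k), the weighted-trace measure of the ellipsoid's size mentioned in the abstract.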

150 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a novel network model based on Graph Neural Network (GNN) that is able to understand the complex relationship between topology, routing, and input traffic to produce accurate estimates of the per-source/destination per-packet delay distribution and loss.
Abstract: Network modeling is a key enabler to achieve efficient network operation in future self-driving Software-Defined Networks. However, we still lack functional network models able to produce accurate predictions of Key Performance Indicators (KPI) such as delay, jitter or loss at limited cost. In this paper we propose RouteNet, a novel network model based on Graph Neural Network (GNN) that is able to understand the complex relationship between topology, routing, and input traffic to produce accurate estimates of the per-source/destination per-packet delay distribution and loss. RouteNet leverages the ability of GNNs to learn and model graph-structured information and as a result, our model is able to generalize over arbitrary topologies, routing schemes and traffic intensity. In our evaluation, we show that RouteNet is able to accurately predict the delay distribution (mean delay and jitter) and loss even for topologies, routing schemes and traffic unseen during training (worst case MRE = 15.4%). Also, we present several use cases where we leverage the KPI predictions of our GNN model to achieve efficient routing optimization and network planning.
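To make the message-passing idea concrete, here is a heavily simplified numpy sketch of alternating path/link state updates. Random weights stand in for the trained GRUs, and the real model ends with a per-path delay-distribution readout:

    import numpy as np

    def routenet_step(link_state, paths, T=4):
        # link_state: (n_links, d) initial per-link features (e.g. capacity).
        # paths: list of link-id lists, one per source-destination path.
        rng = np.random.default_rng(0)
        n_links, d = link_state.shape
        W = rng.normal(size=(d, d)) / np.sqrt(d)  # stand-in for trained weights
        path_state = np.zeros((len(paths), d))
        for _ in range(T):
            # Path update: each path folds in the states of its links.
            for p, links in enumerate(paths):
                path_state[p] = np.tanh(link_state[links].sum(axis=0) @ W)
            # Link update: each link aggregates messages from crossing paths.
            msg = np.zeros_like(link_state)
            for p, links in enumerate(paths):
                msg[links] += path_state[p]
            link_state = np.tanh(msg @ W)
        return path_state  # in the real model, fed to a delay/jitter/loss readout

    toy_paths = [[0, 1], [1, 2], [0, 2]]   # link ids traversed by each path
    out = routenet_step(np.full((3, 8), 0.1), toy_paths)

Because the update rules are defined per link and per path rather than for a fixed graph, the same trained weights apply to topologies never seen in training, which is the generalization property claimed above.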

145 citations


Proceedings ArticleDOI
30 Jul 2020
TL;DR: Using real topologies and traffic characteristics, it is shown that PINT concurrently enables applications such as congestion control, path tracing, and computing tail latencies, using only sixteen bits per packet, with performance comparable to the state of the art.
Abstract: Commodity network devices support adding in-band telemetry measurements into data packets, enabling a wide range of applications, including network troubleshooting, congestion control, and path tracing. However, including such information on packets adds significant overhead that impacts both flow completion times and application-level performance. We introduce PINT, an in-band network telemetry framework that bounds the amount of information added to each packet. PINT encodes the requested data on multiple packets, allowing per-packet overhead limits that can be as low as one bit. We analyze PINT and prove performance bounds, including cases when multiple queries are running simultaneously. PINT is implemented in P4 and can be deployed on network devices. Using real topologies and traffic characteristics, we show that PINT concurrently enables applications such as congestion control, path tracing, and computing tail latencies, using only sixteen bits per packet, with performance comparable to the state of the art.
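One way to see how per-packet overhead can stay bounded regardless of path length — much simpler than PINT's actual global-hashing encoding in P4 — is reservoir sampling, where each hop overwrites the single carried record with probability 1/i:

    import random

    def forward(packet, switch_id):
        """Each switch keeps at most one hop record per packet (reservoir sampling)."""
        packet["hop_count"] += 1
        if random.random() < 1.0 / packet["hop_count"]:
            packet["record"] = switch_id   # uniformly likely over all hops

    packet = {"hop_count": 0, "record": None}
    for sw in ["s1", "s2", "s3", "s4"]:
        forward(packet, sw)
    # Aggregating the records of many packets of a flow reconstructs its path,
    # while each individual packet carries only a constant-size record.

The telemetry information is thus spread across packets instead of accumulating inside each one, which is the core trade-off PINT formalizes and bounds.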

139 citations


Journal ArticleDOI
TL;DR: Evaluation results show that DDQN-VNFPA achieves improved network performance in terms of the reject number and reject ratio of Service Function Chain Requests, throughput, end-to-end delay, VNFI running time and load balancing compared with algorithms in the existing literature.
Abstract: The emerging paradigm - Software-Defined Networking (SDN) and Network Function Virtualization (NFV) - makes it feasible and scalable to run Virtual Network Functions (VNFs) in commercial-off-the-shelf devices, which provides a variety of network services with reduced cost. Benefitting from centralized network management, a wealth of information about network devices, traffic and resources can be collected in SDN/NFV-enabled networks. Using powerful machine learning tools, algorithms can be designed in a customized way according to the collected information to efficiently optimize network performance. In this paper, we study the VNF placement problem in SDN/NFV-enabled networks, which is naturally formulated as a Binary Integer Programming (BIP) problem. Using deep reinforcement learning, we propose a Double Deep Q Network-based VNF Placement Algorithm (DDQN-VNFPA). Specifically, DDQN determines the optimal solution from a prohibitively large solution space and DDQN-VNFPA then places/releases VNF Instances (VNFIs) following a threshold-based policy. We evaluate DDQN-VNFPA with trace-driven simulations on a real-world network topology. Evaluation results show that DDQN-VNFPA achieves improved network performance in terms of the reject number and reject ratio of Service Function Chain Requests (SFCRs), throughput, end-to-end delay, VNFI running time and load balancing compared with algorithms in the existing literature.
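The "double" in DDQN refers to decoupling action selection from action evaluation in the bootstrap target, which curbs Q-value overestimation. A generic PyTorch sketch of that target (the paper's placement-specific state/action encodings are omitted) is:

    import torch

    def double_dqn_target(reward, next_state, done, online_net, target_net,
                          gamma=0.99):
        """Bootstrap target: online net selects the action, target net scores it."""
        with torch.no_grad():
            next_action = online_net(next_state).argmax(dim=1, keepdim=True)
            next_q = target_net(next_state).gather(1, next_action).squeeze(1)
            return reward + gamma * (1.0 - done) * next_q

The online network is then trained to regress its Q-values toward this target, while the target network is only periodically synchronized with the online one.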

134 citations


Journal ArticleDOI
TL;DR: The results indicate that the proposed scheme can significantly improve the credibility and efficiency of MEC collaboration; it adopts an accommodative Bloom filter as a carrier for multidomain collaborative routing consensus without exposing topology privacy.
Abstract: Mobile edge computing (MEC) sinks computing power to the edge of networks and integrates mobile access networks and Internet services in 5G and beyond. With the continuous development of services, privacy protection is extremely important in a heterogeneous MEC system for multiserver collaboration. However, most of the existing schemes only consider the privacy of users or services rather than the privacy of the network topology. For the purpose of topology privacy protection, this article employs blockchain to construct heterogeneous MEC systems and adopts an accommodative Bloom filter as a carrier for multidomain collaborative routing consensus without exposing topology privacy. Blockchain is used to implement multiplex mutual trust networking and collaborative routing verification through the membership service and consensus mechanism. Experiments are conducted to evaluate the feasibility and performance of our scheme. The results indicate that the proposed scheme can significantly improve the credibility and efficiency of MEC collaboration.

129 citations


Journal ArticleDOI
TL;DR: This work proposes convolutional networks with adaptive inference graphs (ConvNet-AIG) that adaptively define their network topology conditioned on the input image, and observes that ConvNet-AIG shows a higher robustness than ResNets, complementing other known defense mechanisms.
Abstract: Do convolutional networks really need a fixed feed-forward structure? What if, after identifying the high-level concept of an image, a network could move directly to a layer that can distinguish fine-grained differences? Currently, a network would first need to execute sometimes hundreds of intermediate layers that specialize in unrelated aspects. Ideally, the more a network already knows about an image, the better it should be at deciding which layer to compute next. In this work, we propose convolutional networks with adaptive inference graphs (ConvNet-AIG) that adaptively define their network topology conditioned on the input image. Following a high-level structure similar to residual networks (ResNets), ConvNet-AIG decides for each input image on the fly which layers are needed. In experiments on ImageNet we show that ConvNet-AIG learns distinct inference graphs for different categories. Both ConvNet-AIG with 50 and 101 layers outperform their ResNet counterparts, while using 20% and 38% fewer computations, respectively. By grouping parameters into layers for related classes and only executing relevant layers, ConvNet-AIG improves both efficiency and overall classification quality. Lastly, we also study the effect of adaptive inference graphs on the susceptibility towards adversarial examples. We observe that ConvNet-AIG shows a higher robustness than ResNets, complementing other known defense mechanisms.
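A toy sketch of the per-layer gating idea follows (illustrative only; the paper trains its gates with a Gumbel-softmax relaxation rather than the hard threshold used here, and operates on convolutional feature maps rather than vectors):

    import torch
    import torch.nn as nn

    class GatedResidual(nn.Module):
        """Toy adaptive layer: executed only if a tiny gate deems it relevant."""
        def __init__(self, dim):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                      nn.Linear(dim, dim))
            self.gate = nn.Linear(dim, 1)          # relevance score from the input

        def forward(self, x):                      # x: (dim,) single-sample toy
            if torch.sigmoid(self.gate(x)) < 0.5:  # skip layers irrelevant to x
                return x
            return x + self.body(x)

    y = GatedResidual(16)(torch.randn(16))

Because skipped layers cost nothing at inference time, the network's effective depth, and hence its compute, varies per input.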

Journal ArticleDOI
TL;DR: A stochastic load balancing scheme which aims to provide probabilistic guarantee against the resource overloading with virtual machine migration, while minimizing the total migration overhead and the migration cost is proposed.
Abstract: Cloud computing offers a cost-effective and elastic computing paradigm that facilitates large scale data storage and analytics. By deploying virtualization technologies in the datacenter, cloud enables efficient resource management and isolation for various big data applications. Since the hotspots (i.e., overloaded machines) can degrade the performance of these applications, virtual machine migration has been utilized to perform load balancing in the datacenters to eliminate hotspots and guarantee Service Level Agreements (SLAs). However, the previous load balancing schemes make migration decisions based on deterministic resource demand estimation and workload characterization, without considering their stochastic properties. By studying real world traces, we show that the resource demand and workload of virtual machines are highly dynamic and bursty, which can cause these schemes to make inefficient migrations for load balancing. To address this problem, in this paper we propose a stochastic load balancing scheme which aims to provide probabilistic guarantee against the resource overloading with virtual machine migration, while minimizing the total migration overhead. Our scheme effectively addresses the prediction of the distribution of resource demand and the multidimensional resource requirements with stochastic characterization. Moreover, as opposed to the previous works that measure the migration cost without considering the network topology, our scheme explicitly takes into account the distance between the source physical machine and the destination physical machine for a virtual machine migration. The trace-driven experiments show that our scheme outperforms the previous schemes in terms of SLA violation and the migration cost.
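The "probabilistic guarantee against resource overloading" can be written as a chance constraint. Under a Gaussian assumption on aggregated demand (a common simplification, not necessarily the paper's exact model), it reduces to a deterministic check:

    \Pr\Big[\sum_i d_i > C\Big] \le \varepsilon
    \quad\Longleftarrow\quad
    \sum_i \mu_i + z_{1-\varepsilon} \sqrt{\sum_i \sigma_i^2} \le C

where \mu_i and \sigma_i^2 are the mean and variance of VM i's demand on a physical machine of capacity C, and z_{1-\varepsilon} is the standard normal quantile. A migration target is then admissible only if it still satisfies this inequality after receiving the VM, which is how bursty demand is handled without deterministic over-provisioning.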

Journal ArticleDOI
TL;DR: A unified sensor measurement transmission model is put forward to account for the simultaneous presence of deception attacks and various network-induced constraints, and delicate secure distributed filters are constructed by admitting the corrupted sensor measurement.
Abstract: This paper is concerned with secure ℓ1-gain performance analysis and distributed finite-time filter design for a positive discrete-time linear system over a sensor network in the presence of deception attacks. A group of intercommunicating sensors is densely deployed to measure, gather, and process the output of the positive system. Each sensor is capable of sharing its measurement with its neighboring sensors in accordance with a prescribed network topology while suffering from random communication link failure. Meanwhile, the aggregated measurement on each sensor during network transmission is corrupted by stochastic deception attacks which compromise the sensor’s measurement integrity. First, a unified sensor measurement transmission model is put forward to account for the simultaneous presence of deception attacks and various network-induced constraints. Second, delicate secure distributed filters are constructed by admitting the corrupted sensor measurement. Third, theoretical analysis on finite-time ℓ1-gain boundedness of the filtering error system and design of desired positive filters are carried out. The solution to the filter gain parameters is characterized by a set of linear programming inequalities. Finally, the effectiveness of the obtained results is verified through the secure monitoring of power distribution in the smart grid.

Journal ArticleDOI
TL;DR: A numerical method is proposed to identify the topology and estimate line parameters without voltage angle information; it provides an accurate estimation of both based on limited measurement samples.
Abstract: The energy management system becomes increasingly indispensable with the extensive penetration of new players in the distribution networks, such as renewable energy, storage, and controllable load. Also, the operation optimization of the active distribution system requires information on operation state monitoring. Smart measuring equipment enables the topology identification and branch line parameters estimation from a data-driven perspective. Nevertheless, many current methods require the nodal voltage angles measured by phasor measurement units (PMUs), which might be unrealistic for conventional distribution networks. This paper proposes a numerical method to identify the topology and estimate line parameters without the information of voltage angles. We propose a two-step framework: the first step applies a data-driven regression method to provide a preliminary estimation of the topology and line parameters; the second step utilizes a joint data-and-model-driven method, i.e., a specialized Newton-Raphson iteration and power flow equations, to calculate the line parameters, recover the voltage angles and further correct the topology. We test the method on IEEE 33-bus and 123-bus looped networks with load data from 1000 users in Ireland. The results demonstrate that the proposed method can provide an accurate estimation of the topology and line parameters based on limited samples of measurement without voltage angles.
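The regression step can be related to the linearized DistFlow equation (a standard approximation; notation mine, not necessarily the paper's), which links the squared voltage magnitudes of adjacent buses without any angle information:

    v_j \approx v_i - 2\,(r_{ij} P_{ij} + x_{ij} Q_{ij})

where v denotes squared voltage magnitude, P_{ij} and Q_{ij} the branch power flows, and r_{ij}, x_{ij} the line parameters. Regressing measured voltage drops against flows yields candidate r, x values, and (near-)zero coefficients indicate absent branches, which is exactly the kind of preliminary topology-plus-parameter estimate the first step produces.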

Journal ArticleDOI
TL;DR: This article is concerned with the distributed recursive filtering issue for stochastic discrete time-varying systems subjected to both state saturations and round-robin protocols over sensor networks, and finds that by using a matrix simplification technique, the sensor network topology’s sparseness issue can be tackled.
Abstract: This article is concerned with the distributed recursive filtering issue for stochastic discrete time-varying systems subjected to both state saturations and round-robin (RR) protocols over sensor networks. The phenomenon of state saturation is considered to better describe practical engineering. The RR protocol is introduced to mitigate a network burden by determining which component of the sensor node has access to the network at each transmission instant. The purpose of the issue under consideration is to construct a distributed recursive filter such that a certain filtering error covariance’s upper bound can be found and the corresponding filter parameters’ explicit expression is given with both state saturations and RR protocols. By taking advantage of matrix difference equations, a filtering error covariance’s upper bound can be presented and then be minimized by appropriately designing filter parameters. In particular, by using a matrix simplification technique, the sensor network topology’s sparseness issue can be tackled. Finally, the feasibility for the addressed filtering scheme is demonstrated by an example.

Journal ArticleDOI
TL;DR: In this paper, an artificial neural network (ANN) approach is used to determine the thermal conductivity of Al2O3 -Cu / EG with an equal volume (50:50).
Abstract: An artificial neural network (ANN) approach is used to determine the thermal conductivity of Al2O3 – Cu / EG with an equal volume (50:50). For this purpose, a mixture of Al2O3 and Cu (50:50) nanoparticles are added in to EG at various concentrations of 0.125 to 2.0 at T=25 to T=50 °C. The method of two-step approach is applied to add nanoparticles through the base fluid. Moreover, the feedforward multilayer perceptron of NN is examined to simulate the thermal conduction coefficient of Al2O3 – Cu nanofluid. So that, more than thirty six measured points are achieved through the experiments; while twenty five ones are chosen for ANN and eleven remained ones are applied to validate the network. It is seen that the ANN proposed approach can present the thermal conduction coefficient of hybrid nanofluids with suitable accuracy and good agreement with those of available empirical data.

Journal ArticleDOI
TL;DR: The main purpose is to design a novel memory sampled-data control scheme to ensure the synchronization of the master-slave system, such that the error system achieves stochastic stability and satisfies an extended dissipative performance index via a constructed Lyapunov function.
Abstract: The work is concerned with the synchronization issue of complex dynamic networks subject to a semi-Markov process. The semi-Markov process is used to describe the switching among different modes of the network topology. Meanwhile, a constant signal transmission delay is considered in the sampled-data controller when dealing with the synchronization problem. The main purpose is to design a novel memory sampled-data control scheme to ensure the synchronization of the master-slave system. With the help of some improved integral inequality techniques, several sufficient conditions are obtained to ensure that the error system achieves stochastic stability and satisfies an extended dissipative performance index, by constructing a Lyapunov function. Finally, two simulation examples are given to verify the validity and superiority of the designed memory sampled-data controller.

Journal ArticleDOI
TL;DR: A model of the three-layer heterogeneous satellite network is constructed, a low-complexity method for calculating the capacity between satellites is proposed, and a long-term optimal capacity allocation algorithm is proposed to optimize the long-term utility of the system.
Abstract: The development of satellite networks is drawing much more attention in recent years due to the wide coverage ability. Composed of geosynchronous orbit (GEO), medium earth orbit (MEO), and low earth orbit (LEO) satellites, the satellite network is a three-layer heterogeneous network of high complexity, for which comprehensive theoretical analysis is still missing. In this paper, we investigate the problem of capacity management in the three-layer heterogeneous satellite network. We first construct the model of the network and propose a low-complexity method for calculating the capacity between satellites. Based on the time structure of the time expanded graph, the searching space is greatly reduced compared to traditional augmenting path searching strategies, which can significantly reduce the computing complexity. Then, based on Q-learning, we propose a long-term optimal capacity allocation algorithm to optimize the long-term utility of the system. In order to reduce the storage and computing complexity, a low-complexity learning framework is constructed that takes the properties of satellite systems into account. Finally, we analyze the capacity performance of the three-layer heterogeneous satellite network and also evaluate the proposed algorithms with numerical results.
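The long-term optimization rests on the standard Q-learning update; the generic tabular form is (the paper's state and action encodings for satellite capacity are more elaborate):

    Q(s_t, a_t) \leftarrow Q(s_t, a_t)
        + \alpha \big[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \big]

where \alpha is the learning rate and \gamma is the discount factor that weights long-term utility against immediate reward.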

Proceedings ArticleDOI
23 Aug 2020
TL;DR: A novel framework that considers multi-level graph convolutions on both local network structure and hypergraph structure in a unified manner is proposed; it overcomes the data insufficiency problem of existing work and does not necessarily rely on user demographic information.
Abstract: Cross-platform account matching plays a significant role in social network analytics, and is beneficial for a wide range of applications. However, existing methods either heavily rely on high-quality user generated content (including user profiles) or suffer from a data insufficiency problem if only focusing on network topology, which brings researchers into an insoluble dilemma of model selection. In this paper, to address this problem, we propose a novel framework that considers multi-level graph convolutions on both local network structure and hypergraph structure in a unified manner. The proposed method overcomes the data insufficiency problem of existing work and does not necessarily rely on user demographic information. Moreover, to adapt the proposed method to be capable of handling large-scale social networks, we propose a two-phase space reconciliation mechanism to align the embedding spaces in both network partitioning based parallel training and account matching across different social networks. Extensive experiments have been conducted on two large-scale real-life social networks. The experimental results demonstrate that the proposed method outperforms the state-of-the-art models by a large margin.

Posted Content
Fanfan Ye, Shiliang Pu, Qiaoyong Zhong, Chao Li, Di Xie, Huiming Tang
TL;DR: This paper proposes Dynamic GCN, in which a novel convolutional neural network named Context-encoding Network (CeN) is introduced to learn skeleton topology automatically, and achieves state-of-the-art performance on three large-scale benchmarks, namely NTU-RGB+D, NTU-RGB+D 120 and Skeleton-Kinetics.
Abstract: Graph Convolutional Networks (GCNs) have attracted increasing interest for the task of skeleton-based action recognition. The key lies in the design of the graph structure, which encodes skeleton topology information. In this paper, we propose Dynamic GCN, in which a novel convolutional neural network named Context-encoding Network (CeN) is introduced to learn skeleton topology automatically. In particular, when learning the dependency between two joints, contextual features from the rest of the joints are incorporated in a global manner. CeN is extremely lightweight yet effective, and can be embedded into a graph convolutional layer. By stacking multiple CeN-enabled graph convolutional layers, we build Dynamic GCN. Notably, as a merit of CeN, dynamic graph topologies are constructed for different input samples as well as graph convolutional layers of various depths. Besides, three alternative context modeling architectures are well explored, which may serve as a guideline for future research on graph topology learning. CeN brings only ~7% extra FLOPs for the baseline model, and Dynamic GCN achieves better performance with 2x-4x fewer FLOPs than existing methods. By further combining static physical body connections and motion modalities, we achieve state-of-the-art performance on three large-scale benchmarks, namely NTU-RGB+D, NTU-RGB+D 120 and Skeleton-Kinetics.

Journal ArticleDOI
TL;DR: The proposed hi-GCN method performs graph embedding learning from a hierarchical perspective, considering both the structure of each individual brain network and the subjects' correlations in the global population network, and can thus capture the most essential embedding features to improve the classification performance of disease diagnosis.

Proceedings ArticleDOI
14 Jun 2020
TL;DR: In this paper, a coarse-to-fine neural architecture search (C2FNAS) was proposed to automatically search a 3D segmentation network from scratch without inconsistency on network size or input size.
Abstract: 3D convolutional neural networks (CNNs) have proved very successful in parsing organs or tumours in 3D medical images, but it remains sophisticated and time-consuming to choose or design proper 3D networks given different task contexts. Recently, Neural Architecture Search (NAS) has been proposed to solve this problem by searching for the best network architecture automatically. However, the inconsistency between search stage and deployment stage often exists in NAS algorithms due to memory constraints and large search spaces, which could become more serious when applying NAS to some memory- and time-consuming tasks, such as 3D medical image segmentation. In this paper, we propose a coarse-to-fine neural architecture search (C2FNAS) to automatically search a 3D segmentation network from scratch without inconsistency on network size or input size. Specifically, we divide the search procedure into two stages: 1) the coarse stage, where we search the macro-level topology of the network, i.e. how each convolution module is connected to other modules; 2) the fine stage, where we search at micro-level for operations in each cell based on the previously searched macro-level topology. The coarse-to-fine manner divides the search procedure into two consecutive stages and meanwhile resolves the inconsistency. We evaluate our method on 10 public datasets from the Medical Segmentation Decathlon (MSD) challenge, and achieve state-of-the-art performance with the network searched using one dataset, which demonstrates the effectiveness and generalization of our searched models.

Journal ArticleDOI
TL;DR: A deep reinforcement learning (DRL) based scheme with experience replay and target network is proposed as a solution that can efficiently handle complex and dynamic SFC embedding scenarios in IoT.
Abstract: The Internet of things (IoT) is becoming more and more flexible and economical with the advancement in information and communication technologies. However, IoT networks will be ultra-dense with the explosive growth of IoT devices. Network function virtualization (NFV) emerges to provide flexible network frameworks and efficient resource management for the performance of IoT networks. In NFV-enabled IoT infrastructure, service function chain (SFC) is an ordered combination of virtual network functions (VNFs) that are related to each other based on the logic of IoT applications. However, the embedding process of SFC to IoT networks is becoming a big challenge due to the dynamic nature of IoT networks and the abundance of IoT terminals. In this paper, we decompose the complex VNFs into smaller virtual network function components (VNFCs) to make more effective decisions since VNF nodes and IoT network devices are usually heterogeneous. In addition, a deep reinforcement learning (DRL) based scheme with experience replay and target network is proposed as a solution that can efficiently handle complex and dynamic SFC embedding scenarios in IoT. Our simulations consider different types of IoT network topologies. The simulation results present the efficiency of the proposed dynamic SFC embedding scheme.

Journal ArticleDOI
TL;DR: The role of graph convolutional filters in GNNs is discussed and it is shown that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
Abstract: Network data can be conveniently modeled as a graph signal, where data values are assigned to nodes of a graph that describes the underlying network topology. Successful learning from network data is built upon methods that effectively exploit this graph structure. In this article, we leverage graph signal processing (GSP) to characterize the representation space of graph neural networks (GNNs). We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology. These two properties offer insight about the workings of GNNs and help explain their scalability and transferability properties, which, coupled with their local and distributed nature, make GNNs powerful tools for learning in physical networks. We also introduce GNN extensions using edge-varying and autoregressive moving average (ARMA) graph filters and discuss their properties. Finally, we study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
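The permutation-equivariance property invoked here admits a one-line statement. For a graph filter H(S) = \sum_k h_k S^k built on a graph shift operator S (standard GSP notation, mine rather than the article's), and any permutation matrix P:

    H(P^\top S P)\,(P^\top X) = P^\top H(S)\, X

which follows because (P^\top S P)^k = P^\top S^k P and P P^\top = I. Relabeling the nodes simply relabels the output, which is the formal sense in which architectures built from such filters are independent of node ordering.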

Proceedings ArticleDOI
30 Jul 2020
TL;DR: Evaluation results show that the proposed algorithm Q-CAST increases the number of successful long-distance entanglements by a big margin compared to other methods, and may encourage more network researchers to study the entanglement routing problem.
Abstract: Quantum entanglement enables important computing applications such as quantum key distribution. Based on quantum entanglement, quantum networks are built to provide long-distance secret sharing between two remote communication parties. Establishing a multi-hop quantum entanglement exhibits a high failure rate, and existing quantum networks rely on trusted repeater nodes to transmit quantum bits. However, when the scale of a quantum network increases, it requires end-to-end multi-hop quantum entanglements in order to deliver secret bits without letting the repeaters know the secret bits. This work focuses on the entanglement routing problem, whose objective is to build long-distance entanglements via untrusted repeaters for concurrent source-destination pairs through multiple hops. Different from existing work that analyzes the traditional routing techniques on special network topologies, we present a comprehensive entanglement routing model that reflects the differences between quantum networks and classical networks as well as a new entanglement routing algorithm that utilizes the unique properties of quantum networks. Evaluation results show that the proposed algorithm Q-CAST increases the number of successful long-distance entanglements by a big margin compared to other methods. The model and simulator developed by this work may encourage more network researchers to study the entanglement routing problem.
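The high failure rate that motivates this work compounds along a path. Under a deliberately simplified model (mine, not the paper's full recursive width-aware formula), with per-link entanglement success probability p and per-repeater swapping success probability q, an h-hop end-to-end entanglement succeeds with probability

    P_{\text{e2e}} = p^{h}\, q^{\,h-1}

which decays exponentially in the hop count; routing metrics like the one used by Q-CAST are designed around exactly this decay, trading path length against path width (parallel channels per link).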

Journal ArticleDOI
TL;DR: In this article, the authors advocate a new learning paradigm called fog learning, which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers, and consider a multi-layer hybrid learning framework consisting of heterogeneous devices with various proximities.
Abstract: Machine learning (ML) tasks are becoming ubiquitous in today's network applications. Federated learning has emerged recently as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data. There are several challenges with employing conventional federated learning in contemporary networks, due to the significant heterogeneity in compute and communication capabilities that exist across devices. To address this, we advocate a new learning paradigm called fog learning, which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers. Fog learning enhances federated learning along three major dimensions: network, heterogeneity, and proximity. It considers a multi-layer hybrid learning framework consisting of heterogeneous devices with various proximities. It accounts for the topology structures of the local networks among the heterogeneous nodes at each network layer, orchestrating them for collaborative/cooperative learning through device-to-device communications. This migrates from star network topologies used for parameter transfers in federated learning to more distributed topologies at scale. We discuss several open research directions toward realizing fog learning.

Proceedings ArticleDOI
Fanfan Ye, Shiliang Pu, Qiaoyong Zhong, Chao Li, Di Xie, Huiming Tang
12 Oct 2020
TL;DR: In this paper, a novel convolutional neural network named Context Encoding Network (CeN) is introduced to learn skeleton topology automatically, where contextual features from the rest joints are incorporated in a global manner.
Abstract: Graph Convolutional Networks (GCNs) have attracted increasing interest for the task of skeleton-based action recognition. The key lies in the design of the graph structure, which encodes skeleton topology information. In this paper, we propose Dynamic GCN, in which a novel convolutional neural network named Context-encoding Network (CeN) is introduced to learn skeleton topology automatically. In particular, when learning the dependency between two joints, contextual features from the rest of the joints are incorporated in a global manner. CeN is extremely lightweight yet effective, and can be embedded into a graph convolutional layer. By stacking multiple CeN-enabled graph convolutional layers, we build Dynamic GCN. Notably, as a merit of CeN, dynamic graph topologies are constructed for different input samples as well as graph convolutional layers of various depths. Besides, three alternative context modeling architectures are well explored, which may serve as a guideline for future research on graph topology learning. CeN brings only ~7% extra FLOPs for the baseline model, and Dynamic GCN achieves better performance with 2x-4x fewer FLOPs than existing methods. By further combining static physical body connections and motion modalities, we achieve state-of-the-art performance on three large-scale benchmarks, namely NTU-RGB+D, NTU-RGB+D 120 and Skeleton-Kinetics.

Journal ArticleDOI
TL;DR: The proposed CapsNet-based technique can extract the desired features from image data sets and provides fully automatic tumor classification with 92.65% accuracy.
Abstract: Visual evaluation of many magnetic resonance images is a difficult task. Therefore, computer-assisted brain tumor classification techniques have been proposed, but these techniques have several drawbacks or limitations. Capsule-based neural networks are new approaches that can preserve spatial relationships of learned features using a dynamic routing algorithm. In this way, not only does the performance of tumor recognition increase, but sampling efficiency and generalisation capability also improve. Therefore, in this work, a Capsule Network (CapsNet) is used to achieve fully automated classification of tumors from brain magnetic resonance images. The three most prevalent types of tumors (pituitary, glioma and meningioma) are handled. The main contributions in this paper are as follows: 1) A comprehensive review of CapsNet-based methods is presented. 2) A new CapsNet topology is designed by using Sobolev-gradient-based optimisation, expectation-maximisation-based dynamic routing and tumor boundary information. 3) The network topology is applied to categorise three types of brain tumors. 4) Comparative evaluations against the results obtained by other methods are performed. According to the experimental results, the proposed CapsNet-based technique can extract the desired features from image data sets and provides tumor classification automatically with 92.65% accuracy.
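For context, the routing-by-agreement loop underlying capsule networks, in its classic Sabour et al. form (the paper itself uses an expectation-maximisation variant), can be sketched as:

    import numpy as np

    def squash(s, axis=-1, eps=1e-9):
        """Nonlinearity that keeps vector orientation, bounds length in [0, 1)."""
        n2 = (s ** 2).sum(axis=axis, keepdims=True)
        return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

    def dynamic_routing(u_hat, iterations=3):
        # u_hat: (num_lower, num_upper, dim) prediction vectors from lower
        # capsules to upper capsules; routing logits b start at zero.
        b = np.zeros(u_hat.shape[:2])
        for _ in range(iterations):
            c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coeffs
            v = squash((c[..., None] * u_hat).sum(axis=0))        # upper capsules
            b = b + (u_hat * v[None]).sum(axis=-1)                # agreement update
        return v

    v = dynamic_routing(np.random.randn(32, 10, 16))   # toy capsule sizes

Lower capsules whose predictions agree with an upper capsule's output get their coupling strengthened, which is the mechanism that preserves the spatial part-whole relationships mentioned above.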

Journal ArticleDOI
TL;DR: This paper investigates the distributed containment control problem for a class of general second-order multiagent systems with switched dynamics, which is composed of a continuous-time (CT) subsystem and a discrete- time (DT) subsystem.
Abstract: This paper investigates the distributed containment control problem for a class of general second-order multiagent systems with switched dynamics, which is composed of a continuous-time (CT) subsystem and a discrete-time (DT) subsystem. For this switched multiagent system under fixed directed topology, a distributed containment control protocol is proposed for each follower based on the relative local measurements of neighboring followers and leaders. Some necessary and sufficient conditions are derived under the condition that the network topology contains a directed spanning forest, and these conditions ensure that the general second-order containment control problem can be solved under arbitrary CT–DT switching. If the general second-order system is reduced to the double integrator system, some simpler containment conditions are presented. Furthermore, the similar results are also obtained under switching directed topology. Finally, some simulation examples are presented to show the efficiency of the theoretical results.

Journal ArticleDOI
TL;DR: Some sufficient conditions for the exponential stability of the cyber-physical delayed-switching system are developed, which guarantees the robustness of the proposed strategy against the communication delays and dynamically changing interaction topologies.
Abstract: In this paper, a robust neighbor-based distributed cooperative control strategy is proposed for dc cyber-physical microgrids, considering communication delays and slow switching topologies. The proposed robust control strategy can synchronize the voltages of a dc microgrid to the desired value while achieving the optimal load sharing for minimizing distributed energy resources’ (DERs) generation cost to achieve their economic operation at the same layer via a sparse communication network considering communication delays and slow switching topologies synchronously. The continuous interaction of physical–electrical and cyber networks generally exacerbates the occurrence of communication delays. Moreover, the arbitrary switching topologies could destroy the system’s transient characteristics at the switching time instants. To further quantify these impacts on the system stability, the communication delay and average switching dwell-time-dependent control conditions for the proposed control strategy are proved based on the Lyapunov–Krasovskii theory. Some sufficient conditions for the exponential stability of the cyber-physical delayed-switching system are developed, which guarantees the robustness of the proposed strategy against the communication delays and dynamically changing interaction topologies. The proposed control protocols are shown to be fully distributed and implemented through a sparse communication network. Finally, several cases on a modified IEEE 34-bus test network are investigated which demonstrate the effectiveness and performance of the results.