
Showing papers on "Network topology published in 2018"


Journal ArticleDOI
TL;DR: This Review surveys important aspects of communication dynamics in brain networks and proposes that communication dynamics may act as potential generative models of effective connectivity and can offer insight into the mechanisms by which brain networks transform and process information.
Abstract: Neuronal signalling and communication underpin virtually all aspects of brain activity and function. Network science approaches to modelling and analysing the dynamics of communication on networks have proved useful for simulating functional brain connectivity and predicting emergent network states. This Review surveys important aspects of communication dynamics in brain networks. We begin by sketching a conceptual framework that views communication dynamics as a necessary link between the empirical domains of structural and functional connectivity. We then consider how different local and global topological attributes of structural networks support potential patterns of network communication, and how the interactions between network topology and dynamic models can provide additional insights and constraints. We end by proposing that communication dynamics may act as potential generative models of effective connectivity and can offer insight into the mechanisms by which brain networks transform and process information.

592 citations


Journal ArticleDOI
17 Apr 2018
TL;DR: This paper presents an overview of recent work in decentralized optimization and surveys the state-of-the-art algorithms and their analyses tailored to these different scenarios, highlighting the role of the network topology.
Abstract: In decentralized optimization, nodes cooperate to minimize an overall objective function that is the sum (or average) of per-node private objective functions. Algorithms interleave local computations with communication among all or a subset of the nodes. Motivated by a variety of applications (decentralized estimation in sensor networks, fitting models to massive data sets, and decentralized control of multirobot systems, to name a few), significant advances have been made toward the development of robust, practical algorithms with theoretical performance guarantees. This paper presents an overview of recent work in this area. In general, rates of convergence depend not only on the number of nodes involved and the desired level of accuracy, but also on the structure and nature of the network over which nodes communicate (e.g., whether links are directed or undirected, static or time varying). We survey the state-of-the-art algorithms and their analyses tailored to these different scenarios, highlighting the role of the network topology.
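For illustration, here is a minimal sketch (not from the paper) of the decentralized setting described above: each node holds a private quadratic objective, mixes its iterate with its neighbors according to weights derived from the network topology, and takes a local gradient step. The ring topology, Metropolis-style weights, and step size are illustrative assumptions.

```python
import numpy as np

# Toy decentralized gradient descent (DGD): n nodes jointly minimize
# sum_i (x - a_i)^2 over a ring topology; the optimum is mean(a).
n = 8
rng = np.random.default_rng(0)
a = rng.normal(size=n)          # private data defining f_i(x) = (x - a_i)^2

# Metropolis-style mixing weights for a ring (doubly stochastic).
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

x = np.zeros(n)                 # one local iterate per node
step = 0.05
for _ in range(300):
    grad = 2.0 * (x - a)        # local gradients, computed without sharing a_i
    x = W @ x - step * grad     # mix with neighbors, then descend locally

print("local iterates:", x.round(3))
print("global optimum:", round(float(a.mean()), 3))
```

With a constant step size the local iterates converge to a small neighborhood of the global minimizer; the mixing matrix is where the network topology enters the convergence rate.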

397 citations


Book ChapterDOI
08 Sep 2018
TL;DR: This work proposes convolutional networks with adaptive inference graphs (ConvNet-AIG), which adaptively define their network topology conditioned on the input image, and shows that ConvNet-AIG is more robust than ResNets, complementing other known defense mechanisms.
Abstract: Do convolutional networks really need a fixed feed-forward structure? What if, after identifying the high-level concept of an image, a network could move directly to a layer that can distinguish fine-grained differences? Currently, a network would first need to execute sometimes hundreds of intermediate layers that specialize in unrelated aspects. Ideally, the more a network already knows about an image, the better it should be at deciding which layer to compute next. In this work, we propose convolutional networks with adaptive inference graphs (ConvNet-AIG) that adaptively define their network topology conditioned on the input image. Following a high-level structure similar to residual networks (ResNets), ConvNet-AIG decides for each input image on the fly which layers are needed. In experiments on ImageNet we show that ConvNet-AIG learns distinct inference graphs for different categories. Both ConvNet-AIG with 50 and 101 layers outperform their ResNet counterparts, while using 20% and 33% less computation, respectively. By grouping parameters into layers for related classes and only executing relevant layers, ConvNet-AIG improves both efficiency and overall classification quality. Lastly, we also study the effect of adaptive inference graphs on the susceptibility towards adversarial examples. We observe that ConvNet-AIG shows a higher robustness than ResNets, complementing other known defense mechanisms.
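A minimal PyTorch sketch (not the authors' code) of the gating idea: each residual block carries a tiny gating head that looks at the block's input and decides, per example, whether to execute the block or fall back to the identity shortcut. The real ConvNet-AIG uses convolutional blocks and trains hard gates with a Gumbel-softmax estimator; the linear layers, threshold, and dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GatedResidualBlock(nn.Module):
    """Residual block that can be skipped per example (adaptive inference graph sketch)."""
    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.gate = nn.Linear(dim, 1)   # tiny gating head (stand-in for AIG's Gumbel-softmax gate)

    def forward(self, x):
        # Per-example decision: 1 -> execute the block, 0 -> identity shortcut only.
        keep = (torch.sigmoid(self.gate(x)) > 0.5).float()   # hard gate at inference time
        return x + keep * self.body(x)

net = nn.Sequential(*[GatedResidualBlock(32) for _ in range(4)])
out = net(torch.randn(8, 32))     # 8 "images" with feature dimension 32
print(out.shape)                  # torch.Size([8, 32])
```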

307 citations


Journal ArticleDOI
TL;DR: In this article, an improved real-coded genetic algorithm and an enhanced mixed integer linear programming (MILP) based method have been developed to schedule the unit commitment and economic dispatch of microgrid units.

288 citations


Journal ArticleDOI
TL;DR: A novel distributed event-triggered communication protocol based on state estimates of neighboring agents is proposed to solve the consensus problem of the leader-following systems and can greatly reduce the communication load of multiagent networks.
Abstract: In this paper, the leader-following consensus problem of high-order multiagent systems via event-triggered control is discussed. A novel distributed event-triggered communication protocol based on state estimates of neighboring agents is proposed to solve the consensus problem of the leader-following systems. We first investigate the consensus problem in a fixed topology, and then extend to the switching topologies. State estimates in fixed topology are only updated when the trigger condition is satisfied. However, state estimates in switching topologies are renewed with two cases: 1) the communication topology is switched or 2) the trigger condition is satisfied. Clearly, compared to continuous-time interaction, this protocol can greatly reduce the communication load of multiagent networks. Besides, the event-triggering function is constructed based on the local information and a new event-triggered rule is given. Moreover, “Zeno behavior” can be excluded. Finally, we give two examples to validate the feasibility and efficiency of our approach.
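To make the event-triggered idea concrete, here is a small simulation sketch (not from the paper) for single-integrator agents on a fixed topology: each agent rebroadcasts its state only when the gap between its last broadcast value and its true state exceeds a threshold, and the consensus update uses the broadcast estimates. The graph, threshold, and step size are illustrative assumptions; the paper treats high-order dynamics, a leader, and switching topologies.

```python
import numpy as np

# Event-triggered average consensus for 4 single-integrator agents.
A = np.array([[0, 1, 0, 1],      # adjacency of a fixed undirected ring
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A   # graph Laplacian

x = np.array([1.0, -2.0, 0.5, 3.0])     # true states
xhat = x.copy()                          # last broadcast states (what neighbors see)
dt, threshold, events = 0.01, 0.05, 0

for _ in range(2000):
    u = -L @ xhat                        # control uses broadcast estimates, not true states
    x = x + dt * u
    trigger = np.abs(x - xhat) > threshold   # per-agent triggering condition
    xhat[trigger] = x[trigger]               # broadcast only when triggered
    events += int(trigger.sum())

print("final states:", x.round(3), "| broadcast events:", events)
```

The event count stays far below the 8000 transmissions that continuous (every-step, every-agent) communication would require, which is the communication-load reduction the protocol targets.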

269 citations


Proceedings ArticleDOI
16 Apr 2018
TL;DR: In this article, an experience-driven approach is proposed that can learn to control a communication network well from its own experience rather than from an accurate mathematical model, and two new techniques, TE-aware exploration and actor-critic-based prioritized experience replay, are proposed to optimize the general DRL framework particularly for TE.
Abstract: Modern communication networks have become very complicated and highly dynamic, which makes them hard to model, predict and control. In this paper, we develop a novel experience-driven approach that can learn to control a communication network well from its own experience rather than from an accurate mathematical model, just as a human learns a new skill (such as driving, swimming, etc.). Specifically, we, for the first time, propose to leverage emerging Deep Reinforcement Learning (DRL) for enabling model-free control in communication networks; and present a novel and highly effective DRL-based control framework, DRL-TE, for a fundamental networking problem: Traffic Engineering (TE). The proposed framework maximizes a widely-used utility function by jointly learning the network environment and its dynamics, and making decisions under the guidance of powerful Deep Neural Networks (DNNs). We propose two new techniques, TE-aware exploration and actor-critic-based prioritized experience replay, to optimize the general DRL framework particularly for TE. To validate and evaluate the proposed framework, we implemented it in ns-3, and tested it comprehensively with both representative and randomly generated network topologies. Extensive packet-level simulation results show that 1) compared to several widely-used baseline methods, DRL-TE significantly reduces end-to-end delay and consistently improves the network utility, while offering better or comparable throughput; 2) DRL-TE is robust to network changes; and 3) DRL-TE consistently outperforms a state-of-the-art DRL method (for continuous control), Deep Deterministic Policy Gradient (DDPG), which, however, does not offer satisfactory performance.

260 citations


Journal ArticleDOI
TL;DR: This work studies a trajectory-based interaction time prediction algorithm to cope with an unstable network topology and high rate of disconnection in SIoVs and proposes a cooperative quality-aware system model, which focuses on a reliability assurance strategy and quality optimization method.
Abstract: Because of the enormous potential to guarantee road safety and improve driving experience, the social Internet of Vehicles (SIoV) is becoming a hot research topic in both academic and industrial circles. With the ever-increasing variety, quantity, and intelligence of on-board equipment, along with the ever-growing demand for service quality in automobiles, how to provide users with a range of security-related and user-oriented vehicular applications has become a significant question. This paper concentrates on the design of a service access system in SIoVs, which focuses on a reliability assurance strategy and quality optimization method. First, in view of the instability of vehicular devices, a dynamic access service evaluation scheme is investigated, which explores the potential relevance of vehicles by constructing their social relationships. Next, this work studies a trajectory-based interaction time prediction algorithm to cope with an unstable network topology and high rate of disconnection in SIoVs. Finally, a cooperative quality-aware system model is proposed for service access in SIoVs. Simulation results demonstrate the effectiveness of the proposed scheme.

254 citations


Journal ArticleDOI
TL;DR: In this paper, a delay-optimal cooperative edge caching scheme for large-scale user-centric mobile networks is proposed, in which the content placement and cluster size are optimized based on the stochastic information of network topology, traffic distribution, channel quality, and file popularity.
Abstract: With files proactively stored at base stations (BSs), mobile edge caching enables direct content delivery without remote file fetching, which can reduce the end-to-end delay while relieving backhaul pressure. To effectively utilize the limited cache size in practice, cooperative caching can be leveraged to exploit caching diversity, by allowing users served by multiple base stations under the emerging user-centric network architecture. This paper explores delay-optimal cooperative edge caching in large-scale user-centric mobile networks, where the content placement and cluster size are optimized based on the stochastic information of network topology, traffic distribution, channel quality, and file popularity. Specifically, a greedy content placement algorithm is proposed based on the optimal bandwidth allocation, which can achieve $(1-1/e)$-optimality with linear computational complexity. In addition, the optimal user-centric cluster size is studied, and a condition constraining the maximal cluster size is presented in explicit form, which reflects the tradeoff between caching diversity and spectrum efficiency. Extensive simulations are conducted for analysis validation and performance evaluation. Numerical results demonstrate that the proposed greedy content placement algorithm can reduce the average file transmission delay up to 45 percent compared with the non-cooperative and hit-ratio-maximal schemes. Furthermore, the optimal clustering is also discussed considering the influences of different system parameters.
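The $(1-1/e)$ guarantee mentioned above is the classical bound for greedily maximizing a monotone submodular objective under a cardinality budget. The sketch below is a simplified stand-in, not the paper's algorithm: users reach a random subset of BSs, a request is a hit if the file is cached on at least one reachable BS, and the cluster has a total budget of K cached copies. All sizes, popularities, and the reachability model are illustrative assumptions.

```python
import numpy as np
from itertools import product

# Greedy content placement sketch: expected hit ratio is monotone submodular
# in the set of (BS, file) placements, so greedy selection under a total
# budget of K copies is (1 - 1/e)-optimal.
rng = np.random.default_rng(1)
n_users, n_bs, n_files, K = 30, 4, 20, 12
reach = rng.random((n_users, n_bs)) < 0.5            # user u can be served by BS b
pop = 1.0 / np.arange(1, n_files + 1)                 # Zipf-like file popularities
pop /= pop.sum()

placement = np.zeros((n_bs, n_files), dtype=float)    # 1.0 -> file f cached at BS b

def hit_ratio(pl):
    covered = reach.astype(float) @ pl                 # (n_users, n_files) copy counts
    return float((pop * (covered > 0).mean(axis=0)).sum())

for _ in range(K):                                     # add the best copy, one at a time
    base, best_gain, best_bf = hit_ratio(placement), -1.0, None
    for b, f in product(range(n_bs), range(n_files)):
        if placement[b, f]:
            continue
        placement[b, f] = 1.0
        gain = hit_ratio(placement) - base             # marginal gain of this copy
        placement[b, f] = 0.0
        if gain > best_gain:
            best_gain, best_bf = gain, (b, f)
    placement[best_bf] = 1.0

print("greedy expected hit ratio:", round(hit_ratio(placement), 3))
```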

248 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review multi-agent system (MAS)-based distributed coordinated control strategies to balance power and energy, stabilize voltage and frequency, and achieve economic and coordinated operation among MGs and MGCs.
Abstract: The increasing integration of distributed renewable energy sources highlights the requirement to design various control strategies for microgrids (MGs) and microgrid clusters (MGCs). Multiagent system (MAS)-based distributed coordinated control strategies show benefits in balancing power and energy, stabilizing voltage and frequency, and achieving economic and coordinated operation among MGs and MGCs. However, the complex and diverse combinations of distributed generations (DGs) in MAS increase the complexity of system control and operation. In order to design the optimized configuration and control strategy using MAS, the topology models and mathematical models, such as the graph topology model, the noncooperative game model, the genetic algorithm, and the particle swarm optimization algorithm, are summarized. The merits and drawbacks of these control methods are compared. Moreover, since consensus is a vital problem in complex dynamical systems, the distributed MAS-based consensus protocols are systematically reviewed. On the other hand, the communication delay issue, which is inevitable in both low- and high-bandwidth communication networks, is crucial for maintaining the stability of MGs and MGCs with fixed and random delays. Various control strategies to compensate for the effect of communication delays have been reviewed, such as the neural network-based predictive control, the weighted average predictive control, the gain scheduling scheme, and synchronization schemes based on the multitimer model for the case of fixed communication delay, and the generalized predictive control, networked predictive control, model predictive control, Smith predictor, $H_{\infty}$-based control, and sliding mode control for the random communication delay scenarios. Furthermore, various control methods have been summarized to describe switching topologies in MAS with different objectives, such as the plug-in or plug-out of DGs in an MG, the plug-in or plug-out of MGs in an MGC, and multiagent-based energy coordination and the economic dispatch of the MGC. Finally, the future research directions of multiagent-based distributed coordinated control and optimization in MGs and MGCs are also presented.
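Since consensus protocols are central to the MAS-based coordination surveyed here, the following toy sketch (not from the paper) runs discrete-time average consensus among five DG agents exchanging a local quantity, e.g. an incremental-cost estimate, over a sparse communication topology. The graph, gain, and initial values are illustrative assumptions.

```python
import numpy as np

# Discrete-time average consensus among 5 DG agents:
#   x_i(k+1) = x_i(k) + eps * sum_j a_ij (x_j(k) - x_i(k))
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
eps = 0.25                       # eps < 1/max_degree guarantees convergence here

x0 = np.array([2.1, 1.4, 3.0, 0.5, 1.8])   # e.g. local incremental-cost estimates
x = x0.copy()
for _ in range(100):
    x = x - eps * (L @ x)        # each agent only uses its neighbors' values

print("agents agree on:", x.round(4), "| initial average:", round(float(x0.mean()), 4))
```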

246 citations


Journal ArticleDOI
18 Jul 2018-Nature
TL;DR: Cooperative self-assembly is used as a design principle to prepare a material that can be switched between two topological states and produces coherent changes in several network properties at once, including branch functionality, junction fluctuations, defect tolerance, shear modulus, stress-relaxation behaviour and self-healing.
Abstract: Polymer networks can have a range of desirable properties such as mechanical strength, wide compositional diversity between different materials, permanent porosity, convenient processability and broad solvent compatibility1,2. Designing polymer networks from the bottom up with new structural motifs and chemical compositions can be used to impart dynamic features such as malleability or self-healing, or to allow the material to respond to environmental stimuli3–8. However, many existing systems exhibit only one operational state that is defined by the material’s composition and topology3–6; or their responsiveness may be irreversible7,9,10 and limited to a single network property11,12 (such as stiffness). Here we use cooperative self-assembly as a design principle to prepare a material that can be switched between two topological states. By using networks of polymer-linked metal–organic cages in which the cages change shape and size on irradiation, we can reversibly switch the network topology with ultraviolet or green light. This photoswitching produces coherent changes in several network properties at once, including branch functionality, junction fluctuations, defect tolerance, shear modulus, stress-relaxation behaviour and self-healing. Topology-switching materials could prove useful in fields such as soft robotics and photo-actuators as well as providing model systems for fundamental polymer physics studies.

225 citations


Journal ArticleDOI
TL;DR: This tutorial paper reviews several machine learning concepts tailored to the optical networking industry and discusses algorithm choices, data and model management strategies, and integration into existing network control and management tools.
Abstract: Networks are complex interacting systems involving cloud operations, core and metro transport, and mobile connectivity all the way to video streaming and similar user applications. With localized and highly engineered operational tools, it is typical of these networks to take days to weeks for any changes, upgrades, or service deployments to take effect. Machine learning, a sub-domain of artificial intelligence, is highly suitable for complex system representation. In this tutorial paper, we review several machine learning concepts tailored to the optical networking industry and discuss algorithm choices, data and model management strategies, and integration into existing network control and management tools. We then describe four networking case studies in detail, covering predictive maintenance, virtual network topology management, capacity optimization, and optical spectral analysis.

Proceedings Article
03 Jul 2018
TL;DR: In this paper, a bidirectional tree-structured reinforcement learning meta-controller is proposed to modify the path topology of a given network while keeping the merits of reusing weights, and thus allowing efficiently designing effective structures with complex path topologies like Inception models.
Abstract: We introduce a new function-preserving transformation for efficient neural architecture search. This network transformation allows reusing previously trained networks and existing successful architectures, which improves sample efficiency. We aim to address the limitation of current network transformation operations that can only perform layer-level architecture modifications, such as adding (pruning) filters or inserting (removing) a layer, which fails to change the topology of connection paths. Our proposed path-level transformation operations enable the meta-controller to modify the path topology of the given network while keeping the merits of reusing weights, and thus allow efficiently designing effective structures with complex path topologies like Inception models. We further propose a bidirectional tree-structured reinforcement learning meta-controller to explore a simple yet highly expressive tree-structured architecture space that can be viewed as a generalization of multi-branch architectures. We experimented on image classification datasets with limited computational resources (about 200 GPU-hours), where we observed improved parameter efficiency and better test results (97.70% test accuracy on CIFAR-10 with 14.3M parameters and 74.6% top-1 accuracy on ImageNet in the mobile setting), demonstrating the effectiveness and transferability of our designed architectures.

Journal ArticleDOI
TL;DR: The PyPSA-Eur dataset as mentioned in this paper is the first open model dataset of the European power system at the transmission network level to cover the full ENTSO-E area, which contains 6001 lines (alternating current lines at and above 220 kV voltage level and all high voltage direct current lines), 3657 substations, a new open database of conventional power plants, time series for electrical demand and variable renewable generator availability, and geographic potentials for the expansion of wind and solar power.

Journal ArticleDOI
TL;DR: The proposed method involves the application of principal component analysis and its graph-theoretic interpretation to infer the steady-state network topology from smart meter energy measurements, and is demonstrated through simulation on randomly generated networks and on the IEEE-recognized Roy Billinton distribution test system.
Abstract: In a power distribution network, the network topology information is essential for an efficient operation. This network connectivity information is often not available at the low voltage (LV) level due to uninformed changes that happen from time to time. In this paper, we propose a novel data-driven approach to identify the underlying network topology for LV distribution networks, including the load phase connectivity, from time series of energy measurements. The proposed method involves the application of principal component analysis and its graph-theoretic interpretation to infer the steady-state network topology from smart meter energy measurements. The method is demonstrated through simulation on randomly generated networks and also on the IEEE-recognized Roy Billinton distribution test system.
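As a simplified stand-in for this kind of data-driven topology identification (not the paper's PCA procedure), the sketch below generates smart-meter-style time series in which each feeder-head measurement is the sum of its downstream meters plus noise, then assigns each meter to the feeder it correlates with most strongly. Network size, noise level, and the correlation-based grouping rule are illustrative assumptions.

```python
import numpy as np

# Two feeders each serve a group of smart meters; the feeder-head measurement
# is the sum of its meters' consumption plus noise. Recover which meter hangs
# under which feeder from the measurement time series alone.
rng = np.random.default_rng(0)
T, meters_per_feeder = 500, 4
true_parent = np.repeat([0, 1], meters_per_feeder)            # ground-truth topology
loads = rng.gamma(2.0, 1.0, size=(2 * meters_per_feeder, T))  # meter time series
feeders = np.vstack([loads[true_parent == k].sum(axis=0) for k in (0, 1)])
feeders += 0.1 * rng.normal(size=feeders.shape)               # measurement noise

# Correlate every meter with every feeder head and pick the best match.
corr = np.array([[np.corrcoef(m, f)[0, 1] for f in feeders] for m in loads])
inferred_parent = corr.argmax(axis=1)

print("recovered topology correctly:", bool((inferred_parent == true_parent).all()))
```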

Journal ArticleDOI
TL;DR: Simulation results of IEEE 30-bus and IEEE 57-bus test cases show that the key nodes with high electrical centrality can be effectively identified and that the resultant cascading failures eventually lead to a severe decrease in net-ability, verifying the correctness and effectiveness of the analysis.
Abstract: The analysis of blackouts, which can inevitably lead to catastrophic damage to power grids, helps to explore the nature of complex power grids but becomes difficult using conventional methods. This brief studies the vulnerability analysis and recognition of key nodes in power grids from a complex network perspective. Based on the ac power flow model and the network topology weighted with admittance, the cascading failure model is established first. The node electrical centrality is further pointed out, using complex network centrality theory, to identify the key nodes in power grids. To effectively analyze the behavior and verify the correctness of node electrical centrality, the net-ability and vulnerability index are introduced to describe the transfer ability and performance under normal operation and assess the vulnerability of the power system under cascading failures, respectively. Simulation results of IEEE 30-bus and IEEE 57-bus test cases show that the key nodes with high electrical centrality can be effectively identified, and the resultant cascading failures eventually lead to a severe decrease in net-ability, verifying the correctness and effectiveness of the analysis.
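As a rough illustration of topology-based vulnerability screening (only a topological proxy, not the paper's AC-power-flow-based electrical centrality), the sketch below builds an admittance-weighted graph with networkx and ranks buses by weighted betweenness centrality, using line impedance as the edge distance. The six-bus line data are invented for illustration.

```python
import networkx as nx

# Toy vulnerability screening: rank buses by betweenness centrality on a graph
# whose edge "distance" is the line impedance (inverse of admittance).
lines = [  # (from_bus, to_bus, admittance in arbitrary units)
    (1, 2, 5.0), (1, 4, 2.0), (2, 3, 4.0),
    (2, 4, 4.0), (3, 5, 3.0), (4, 5, 5.0), (5, 6, 2.0),
]
G = nx.Graph()
for u, v, y in lines:
    G.add_edge(u, v, admittance=y, impedance=1.0 / y)

centrality = nx.betweenness_centrality(G, weight="impedance")
for bus, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"bus {bus}: centrality {c:.3f}")
```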

Journal ArticleDOI
TL;DR: An ML classifier is investigated that predicts whether the bit error rate of unestablished lightpaths meets the required system threshold based on traffic volume, desired route, and modulation format.
Abstract: Predicting the quality of transmission (QoT) of a lightpath prior to its deployment is a step of capital importance for an optimized design of optical networks. Due to the continuous advances in optical transmission, the number of design parameters available to system engineers (e.g., modulation formats, baud rate, code rate, etc.) is growing dramatically, thus significantly increasing the alternative scenarios for lightpath deployment. As of today, existing (pre-deployment) estimation techniques for lightpath QoT belong to two categories: “exact” analytical models estimating physical-layer impairments, which provide accurate results but incur heavy computational requirements, and margined formulas, which are computationally faster but typically introduce high link margins that lead to underutilization of network resources. In this paper, we explore a third option, i.e., machine learning (ML), as ML techniques have already been successfully applied for optimization and performance prediction of complex systems where analytical models are hard to derive and/or numerical procedures impose high computational burden. We investigate an ML classifier that predicts whether the bit error rate of unestablished lightpaths meets the required system threshold based on traffic volume, desired route, and modulation format. The classifier is trained and tested on synthetic data and its performance is assessed over different network topologies and for various combinations of classification features. Results in terms of classifier accuracy are promising and motivate further investigation over real field data.
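To make the classification setup concrete, here is a small scikit-learn sketch on synthetic data (not the authors' dataset, features, or model): candidate lightpaths are described by route length, hop count, traffic volume, and modulation order, and a classifier predicts whether the BER stays below threshold. The feature-generating rule and threshold are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic QoT classification: predict whether a candidate lightpath's BER
# would meet the system threshold from simple pre-deployment features.
rng = np.random.default_rng(0)
n = 4000
length_km = rng.uniform(50, 3000, n)          # route length
n_links = rng.integers(1, 15, n)              # hops along the route
traffic = rng.uniform(10, 400, n)             # traffic volume (Gb/s)
mod_order = rng.choice([2, 4, 16, 64], n)     # modulation order (e.g. PSK/QAM)

# Invented ground-truth rule: longer, busier, higher-order-modulation paths fail more often.
score = 0.002 * length_km + 0.3 * n_links + 0.01 * traffic + 0.8 * np.log2(mod_order)
meets_threshold = (score + rng.normal(0, 1.0, n) < 9.0).astype(int)

X = np.column_stack([length_km, n_links, traffic, mod_order])
X_tr, X_te, y_tr, y_te = train_test_split(X, meets_threshold, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```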

Journal ArticleDOI
TL;DR: Energy-efficiency improvements in core networks obtained as a result of work carried out by the GreenTouch consortium over a five-year period are discussed and an experimental demonstration that illustrates the feasibility of energy-efficient content distribution in IP/WDM networks is implemented.
Abstract: In this paper, we discuss energy-efficiency improvements in core networks obtained as a result of work carried out by the GreenTouch consortium over a five-year period. A number of techniques that yield substantial energy savings in core networks were introduced, including (i) the use of improved network components with lower power consumption, (ii) putting idle components into sleep mode, (iii) optically bypassing intermediate routers, (iv) the use of mixed line rates, (v) placing resources for protection into a low power state when idle, (vi) optimization of the network physical topology, and (vii) the optimization of distributed clouds for content distribution and network equipment virtualization. These techniques are recommended as the main energy-efficiency improvement measures for 2020 core networks. A mixed integer linear programming optimization model combining all the aforementioned techniques was built to minimize energy consumption in the core network. We consider group 1 nations' traffic and place this traffic on a US continental network represented by the AT&T network topology. The projections of the 2020 equipment power consumption are based on two scenarios: a business as usual (BAU) scenario and a GreenTouch (GT) (i.e., BAU + GT) scenario. The results show that the 2020 BAU scenario improves the network energy efficiency by a factor of 4.23× compared with the 2010 network as a result of the reduction in the network equipment power consumption. Considering the 2020 BAU + GT network, the network equipment improvements alone reduce network power by a factor of 20× compared with the 2010 network. Including all the BAU + GT energy-efficiency techniques yields a total energy-efficiency improvement of 315×. We have also implemented an experimental demonstration that illustrates the feasibility of energy-efficient content distribution in IP/WDM networks.

Proceedings ArticleDOI
07 Aug 2018
TL;DR: This paper presents the five-year evolution of B4, Google's private software-defined WAN, and describes the techniques employed to incrementally move from offering best-effort content-copy services to carrier-grade availability, while concurrently scaling B4 to accommodate 100x more traffic.
Abstract: Private WANs are increasingly important to the operation of enterprises, telecoms, and cloud providers. For example, B4, Google's private software-defined WAN, is larger and growing faster than our connectivity to the public Internet. In this paper, we present the five-year evolution of B4. We describe the techniques we employed to incrementally move from offering best-effort content-copy services to carrier-grade availability, while concurrently scaling B4 to accommodate 100x more traffic. Our key challenge is balancing the tension introduced by hierarchy required for scalability, the partitioning required for availability, and the capacity asymmetry inherent to the construction and operation of any large-scale network. We discuss our approach to managing this tension: i) we design a custom hierarchical network topology for both horizontal and vertical software scaling, ii) we manage inherent capacity asymmetry in hierarchical topologies using a novel traffic engineering algorithm without packet encapsulation, and iii) we re-architect switch forwarding rules via two-stage matching/hashing to deal with asymmetric network failures at scale.

Journal ArticleDOI
Jiajia Liu, Yongpeng Shi, Lei Zhao, Yurui Cao, Wen Sun, Nei Kato
TL;DR: This paper first explores the satellite gateway placement problem to obtain the minimum average latency, and investigates a more challenging problem, i.e., the joint placement of controllers and gateways, for the maximum network reliability while satisfying the latency constraint.
Abstract: Leveraging the concept of software-defined network (SDN), the integration of terrestrial 5G and satellite networks brings us lots of benefits. The placement problem of controllers and satellite gateways is of fundamental importance for the design of such an SDN-enabled integrated network, especially for the network reliability and latency, since different placement schemes would produce various network performances. To the best of our knowledge, it is an entirely new problem. Toward this end, in this paper, we first explore the satellite gateway placement problem to obtain the minimum average latency. A simulated annealing based approximate solution (SAA) is developed for this problem, which is able to achieve a near-optimal latency. Based on the analysis of latency, we further investigate a more challenging problem, i.e., the joint placement of controllers and gateways, for the maximum network reliability while satisfying the latency constraint. A simulated annealing and clustering hybrid algorithm (SACA) is proposed to solve this problem. Extensive experiments based on real-world online network topologies have been conducted, and as validated by our numerical results, enumeration algorithms are able to produce optimal results but have extremely long running times, while SAA and SACA can achieve approximately optimal performance with much lower computational complexity.
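A compact sketch of the simulated-annealing idea applied to gateway placement (not the paper's SAA/SACA implementation): given a latency matrix between nodes, repeatedly propose swapping one selected gateway for an unselected node and accept the move with the usual Metropolis rule, minimizing the average node-to-nearest-gateway latency. The random topology, cooling schedule, and budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random node coordinates stand in for a real topology; latency ~ distance.
n_nodes, n_gateways = 40, 4
pos = rng.random((n_nodes, 2))
lat = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)   # pairwise latency matrix

def avg_latency(gw):
    return lat[:, list(gw)].min(axis=1).mean()    # each node uses its nearest gateway

# Simulated annealing over gateway subsets (Metropolis acceptance, geometric cooling).
current = set(rng.choice(n_nodes, n_gateways, replace=False))
cost = avg_latency(current)
best, best_cost, T = set(current), cost, 1.0
for _ in range(5000):
    out_node = rng.choice(list(current))
    in_node = rng.choice(list(set(range(n_nodes)) - current))
    candidate = (current - {out_node}) | {in_node}
    c = avg_latency(candidate)
    if c < cost or rng.random() < np.exp((cost - c) / T):   # accept worse moves with decaying prob.
        current, cost = candidate, c
        if c < best_cost:
            best, best_cost = set(candidate), c
    T *= 0.999                                                # cooling schedule

print("selected gateways:", sorted(int(g) for g in best), "| avg latency:", round(best_cost, 4))
```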

Journal ArticleDOI
TL;DR: This paper describes the different beamforming approaches in each network topology according to their design objectives, such as increasing the throughput, enhancing the energy transfer efficiency, and minimizing the total transmit power, paying special attention to exploiting physical layer security.
Abstract: Wireless energy harvesting (EH) is a promising solution to prolong the lifetime of power-constrained networks such as military and sensor networks. The high sensitivity of energy transfer to signal decay due to path loss and fading promotes multi-antenna techniques like beamforming as the candidate transmission scheme for EH networks. Exploiting beamforming in EH networks has gained overwhelming interest, and a lot of literature has appeared recently on this topic. The objective of this paper is to point out the state-of-the-art research activity on beamforming implementation in EH wireless networks. We first review the basic concepts and architecture of EH wireless networks. In addition, we also discuss the effects of the beamforming transmission scheme on system performance in EH wireless communication. Furthermore, we present a comprehensive survey of multi-antenna EH communications. We cover the supporting network architectures, such as broadcasting, relay, and cognitive radio networks, with the various beamforming deployments within the network architecture. We classify the different beamforming approaches in each network topology according to their design objectives, such as increasing the throughput, enhancing the energy transfer efficiency, and minimizing the total transmit power, paying special attention to exploiting physical layer security. We also survey major advances as well as open issues, challenges, and future research directions in multi-antenna EH communications.

Journal ArticleDOI
TL;DR: In this article, a distributed observer that guarantees asymptotic reconstruction of the state for the most general class of LTI systems, sensor network topologies, and sensor measurement structures is proposed.
Abstract: We consider the problem of distributed state estimation of a linear time-invariant (LTI) system by a network of sensors. We develop a distributed observer that guarantees asymptotic reconstruction of the state for the most general class of LTI systems, sensor network topologies, and sensor measurement structures. Our analysis builds upon the following key observation—a given node can reconstruct a portion of the state solely by using its own measurements and constructing appropriate Luenberger observers; hence, it only needs to exchange information with neighbors (via consensus dynamics) for estimating the portion of the state that is not locally detectable. This intuitive approach leads to a new class of distributed observers with several appealing features. Furthermore, by imposing additional constraints on the system dynamics and network topology, we show that it is possible to construct a simpler version of the proposed distributed observer that achieves the same objective while admitting a fully distributed design phase. Our general framework allows extensions to time-varying networks that result from communication losses, and scenarios including faults or attacks at the nodes.

Journal ArticleDOI
TL;DR: In this article, the authors investigated whether navigation is a parsimonious routing model for connectomics and found that human, mouse, and macaque connectomes can be successfully navigated with near-optimal efficiency (>80% of optimal efficiency for typical connection densities).
Abstract: Understanding the mechanisms of neural communication in large-scale brain networks remains a major goal in neuroscience. We investigated whether navigation is a parsimonious routing model for connectomics. Navigating a network involves progressing to the next node that is closest in distance to a desired destination. We developed a measure to quantify navigation efficiency and found that connectomes in a range of mammalian species (human, mouse, and macaque) can be successfully navigated with near-optimal efficiency (>80% of optimal efficiency for typical connection densities). Rewiring network topology or repositioning network nodes resulted in 45-60% reductions in navigation performance. We found that the human connectome cannot be progressively randomized or clusterized to result in topologies with substantially improved navigation performance (>5%), suggesting a topological balance between regularity and randomness that is conducive to efficient navigation. Navigation was also found to (i) promote a resource-efficient distribution of the information traffic load, potentially relieving communication bottlenecks, and (ii) explain significant variation in functional connectivity. Unlike commonly studied communication strategies in connectomics, navigation does not mandate assumptions about global knowledge of network topology. We conclude that the topology and geometry of brain networks are conducive to efficient decentralized communication.
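The navigation (greedy geometric routing) strategy described here is easy to state in code. The sketch below (not the authors' implementation or data) routes on a random spatial graph by always stepping to the neighbor closest in Euclidean distance to the target, and reports the fraction of node pairs that can be navigated successfully; the graph model and parameters are illustrative assumptions.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# A random geometric graph stands in for a spatially embedded connectome.
n = 200
G = nx.random_geometric_graph(n, radius=0.18, seed=0)
pos = {i: np.array(p) for i, p in nx.get_node_attributes(G, "pos").items()}

def navigate(src, dst, max_hops=50):
    """Greedy navigation: always move to the neighbor spatially closest to dst."""
    node, hops = src, 0
    while node != dst and hops < max_hops:
        nbrs = list(G.neighbors(node))
        if not nbrs:
            return None
        nxt = min(nbrs, key=lambda v: np.linalg.norm(pos[v] - pos[dst]))
        if np.linalg.norm(pos[nxt] - pos[dst]) >= np.linalg.norm(pos[node] - pos[dst]):
            return None                      # stuck in a local minimum: navigation fails
        node, hops = nxt, hops + 1
    return hops if node == dst else None

pairs = [(int(a), int(b)) for a, b in rng.integers(0, n, size=(500, 2)) if a != b]
hops = [navigate(s, t) for s, t in pairs]
success = [h for h in hops if h is not None]
print(f"navigation success ratio: {len(success) / len(pairs):.2f}, mean hops: {np.mean(success):.1f}")
```

Comparing the hop counts of successful navigation paths with shortest-path lengths is the kind of ratio the paper's navigation-efficiency measure formalizes.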

Journal ArticleDOI
TL;DR: In this article, the authors proposed an optimal planning model of distributed energy storage systems (ESSs) in active distribution networks incorporating soft open points and the reactive power capability of DGs, and the results demonstrate that the optimal distributed ESS planning obtained by the proposed model achieves a better economic solution.

Journal ArticleDOI
TL;DR: Some novel sufficient conditions are obtained for ensuring the exponential stability in mean square and the switching topology-dependent filters are derived such that an optimal disturbance rejection attenuation level can be guaranteed for the estimation disagreement of the filtering network.
Abstract: In this paper, the distributed $H_{\infty}$ state estimation problem is investigated for a class of filtering networks with time-varying switching topologies and packet losses. In the filter design, the time-varying switching topologies, partial information exchange between filters, the packet losses in transmission from the neighbor filters, and the channel noises are simultaneously considered. The considered topology evolves not only over time, but also by event switches, which are assumed to be subject to a nonhomogeneous Markov chain whose probability transition matrix is time-varying. Some novel sufficient conditions are obtained for ensuring the exponential stability in mean square, and the switching topology-dependent filters are derived such that an optimal $H_{\infty}$ disturbance rejection attenuation level can be guaranteed for the estimation disagreement of the filtering network. Finally, simulation examples are provided to demonstrate the effectiveness of the theoretical results.

Journal ArticleDOI
TL;DR: The proposed CNPA algorithm can remarkably reduce the maximum latency between controllers and their associated switches and the end-to-end latency of controllers.
Abstract: One grand challenge in software defined networking is to select appropriate locations for controllers to shorten the latency between controllers and switches in wide area networks. In the literature, the majority of approaches are focused on the reduction of packet propagation latency, but propagation latency is only one of the contributors of the overall latency between controllers and their associated switches. In this paper, we explore and investigate more possible contributors of the latency, including the end-to-end latency and the queuing latency of controllers. In order to decrease the end-to-end latency, the concept of network partition is introduced and a clustering-based network partition algorithm (CNPA) is then proposed to partition the network. The CNPA can guarantee that each partition is able to shorten the maximum end-to-end latency between controllers and switches. To further decrease the queuing latency of controllers, appropriate multiple controllers are then placed in the subnetworks. Extensive simulations are conducted under two real network topologies from the Internet Topology Zoo. The results verify that the proposed algorithm can remarkably reduce the maximum latency between controllers and their associated switches.
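As an illustration of the partition-then-place idea (a simplified stand-in, not the CNPA algorithm itself), the sketch below clusters switches with k-means on their coordinates and then places one controller per partition at the switch that minimizes the worst-case intra-partition latency. The coordinates, cluster count, and distance-as-latency model are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Switch coordinates stand in for a real WAN topology; latency ~ distance.
switches = rng.random((60, 2))
k = 4                                           # number of partitions / controllers

labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(switches)

controllers, worst = [], 0.0
for c in range(k):
    members = switches[labels == c]
    # Latency from every candidate controller location (a member switch) to all members.
    d = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=2)
    best = d.max(axis=1).argmin()               # minimize the worst-case latency in the partition
    controllers.append(members[best])
    worst = max(worst, d[best].max())

print(f"{k} controllers placed; worst switch-to-controller latency: {worst:.3f}")
```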

Journal ArticleDOI
TL;DR: In this article, the authors proposed a distributed algorithm with convergence assurance based on the alternating direction method of multipliers (ADMM) for minimizing the overall energy cost in a distribution network consisting of multiple MGs, with the practical operating constraints (e.g., power balance and the battery's operational constraints) explicitly incorporated.

Journal ArticleDOI
TL;DR: A distributed adaptive sliding mode control scheme for more realistic vehicular platooning is presented, which does not require the exact values of each entity in the topological matrix, and only needs to know the bounds of its eigenvalues.
Abstract: In a platoon control system, a fixed and symmetrical topology is quite rare because of adverse communication environments and continuously moving vehicles. This paper presents a distributed adaptive sliding mode control scheme for more realistic vehicular platooning. In this scheme, an adaptive mechanism is adopted to handle platoon parametric uncertainties, while a structural decomposition method deals with the coupling of interaction topology. A numerical algorithm based on linear matrix inequality is developed to place the poles of the sliding motion dynamics in the required area to balance quickness and smoothness. The proposed scheme allows the nodes to interact with each other via different types of topologies, e.g., either asymmetrical or symmetrical, either fixed or switching. Different from existing techniques, it does not require the exact values of each entity in the topological matrix, and only needs to know the bounds of its eigenvalues. The effectiveness of this proposed methodology is validated by bench tests under several conditions.

Journal ArticleDOI
TL;DR: This work uses linear network control theory to derive accurate closed-form expressions that relate the connectivity of a subset of structural connections (those linking driver nodes to non-driver nodes) to the minimum energy required to control networked systems.
Abstract: Networked systems display complex patterns of interactions between components. In physical networks, these interactions often occur along structural connections that link components in a hard-wired connection topology, supporting a variety of system-wide dynamical behaviors such as synchronization. While descriptions of these behaviors are important, they are only a first step towards understanding and harnessing the relationship between network topology and system behavior. Here, we use linear network control theory to derive accurate closed-form expressions that relate the connectivity of a subset of structural connections (those linking driver nodes to non-driver nodes) to the minimum energy required to control networked systems. To illustrate the utility of the mathematics, we apply this approach to high-resolution connectomes recently reconstructed from Drosophila, mouse, and human brains. We use these principles to suggest an advantage of the human brain in supporting diverse network dynamics with small energetic costs while remaining robust to perturbations, and to perform clinically accessible targeted manipulation of the brain's control performance by removing single edges in the network. Generally, our results ground the expectation of a control system's behavior in its network architecture, and directly inspire new directions in network analysis and design via distributed control.
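The minimum-energy quantity referred to here has a standard closed form in linear control theory: for $x(k+1) = A x(k) + B u(k)$, driving the state from the origin to a target $x_f$ in $T$ steps costs at least $x_f^\top W_T^{-1} x_f$, where $W_T = \sum_{k=0}^{T-1} A^k B B^\top (A^k)^\top$ is the finite-horizon controllability Gramian. The sketch below evaluates this on a small random network; the network, single driver node, and horizon are illustrative assumptions (the paper's contribution is closed-form approximations of such quantities in terms of the connection topology).

```python
import numpy as np

rng = np.random.default_rng(0)

# Small directed network with a single driver node.
n, T = 6, 12
A = rng.normal(scale=0.3, size=(n, n))                  # network (state-transition) matrix
A = A / (1.1 * np.abs(np.linalg.eigvals(A)).max())      # scale spectral radius below 1
B = np.zeros((n, 1)); B[0, 0] = 1.0                     # control enters at node 0 (driver node)

# Finite-horizon controllability Gramian: W_T = sum_{k=0}^{T-1} A^k B B^T (A^k)^T.
W = np.zeros((n, n))
Ak = np.eye(n)
for _ in range(T):
    W += Ak @ B @ B.T @ Ak.T
    Ak = Ak @ A

x_target = np.ones(n) / np.sqrt(n)                      # unit-norm target state
energy = float(x_target @ np.linalg.solve(W, x_target)) # minimum energy x^T W^{-1} x
print(f"minimum control energy to reach the target in {T} steps: {energy:.2f}")
```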

Journal ArticleDOI
TL;DR: A comprehensive survey of recent solutions for load balancing in data center networks is presented and the differences between data center load balancing mechanisms and traditional Internet traffic scheduling are analyzed.
Abstract: Data center networks usually employ the scale-out model to provide high bisection bandwidth for applications. A large amount of data is required to be transferred frequently between servers across multiple paths. However, traditional load balancing algorithms like equal-cost multi-path routing are not suitable for rapidly varying traffic in data center networks. Based on the special data center topologies and traffic characteristics, researchers have recently proposed some novel traffic scheduling mechanisms to balance traffic. In this paper, we present a comprehensive survey of recent solutions for load balancing in data center networks. First, recently proposed data center network topologies and the studies of traffic characteristics are introduced. Second, the definition of the load-balancing problem is described. Third, we analyze the differences between data center load balancing mechanisms and traditional Internet traffic scheduling. Then, we present an in-depth overview of recent data center load balancing mechanisms. Finally, we analyze the performance of these solutions and discuss future research directions.

Journal ArticleDOI
TL;DR: In this article, the authors introduce a model for the emergence of innovations, in which cognitive processes are described as random walks on the network of links among ideas or concepts, and an innovation corresponds to the first visit of a node.
Abstract: We introduce a model for the emergence of innovations, in which cognitive processes are described as random walks on the network of links among ideas or concepts, and an innovation corresponds to the first visit of a node. The transition matrix of the random walk depends on the network weights, while in turn the weight of an edge is reinforced by the passage of a walker. The presence of the network naturally accounts for the mechanism of the "adjacent possible," and the model reproduces both the rate at which novelties emerge and the correlations among them observed empirically. We show this by using synthetic networks and by studying real data sets on the growth of knowledge in different scientific disciplines. Edge-reinforced random walks on complex topologies offer a new modeling framework for the dynamics of correlated novelties and are another example of coevolution of processes and networks.
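The model is straightforward to simulate. The sketch below (illustrative graph and parameters, not the paper's networks or data) runs an edge-reinforced random walk on a small-world graph, reinforcing the weight of each traversed edge, and records the times of first visits ("innovations") so the growth of the number of discovered nodes can be examined.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Edge-reinforced random walk on a network of "concepts": the walker moves with
# probability proportional to edge weight, every traversed edge is reinforced,
# and an innovation is the first visit to a node.
G = nx.watts_strogatz_graph(500, k=6, p=0.1, seed=0)
for u, v in G.edges:
    G[u][v]["w"] = 1.0

node, visited, innovation_times = 0, {0}, [0]
steps, reinforcement = 20000, 0.5
for t in range(1, steps + 1):
    nbrs = list(G.neighbors(node))
    w = np.array([G[node][v]["w"] for v in nbrs])
    nxt = nbrs[rng.choice(len(nbrs), p=w / w.sum())]
    G[node][nxt]["w"] += reinforcement          # reinforce the edge just traversed
    if nxt not in visited:
        visited.add(nxt)
        innovation_times.append(t)              # a novelty: first visit of a node
    node = nxt

print(f"novelties discovered: {len(visited)} of {G.number_of_nodes()} nodes in {steps} steps")
```

Plotting the number of novelties against time on log-log axes is the natural way to inspect the Heaps-like growth rate the abstract refers to.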