
Showing papers on "Network topology published in 2017"


Proceedings ArticleDOI
21 Jul 2017
TL;DR: ResNeXt as discussed by the authors is a simple, highly modularized network architecture for image classification, which is constructed by repeating a building block that aggregates a set of transformations with the same topology.
Abstract: We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call cardinality (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.
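A minimal sketch of the aggregated-transformations idea, expressed as a grouped convolution in PyTorch; the channel widths and cardinality below are illustrative rather than the exact ResNeXt configuration.

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """Bottleneck block with aggregated transformations (cardinality = groups)."""
    def __init__(self, channels=256, bottleneck_width=128, cardinality=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, bottleneck_width, kernel_size=1, bias=False),
            nn.BatchNorm2d(bottleneck_width),
            nn.ReLU(inplace=True),
            # Grouped 3x3 convolution: 'cardinality' parallel transformations
            # with the same topology, aggregated into one output.
            nn.Conv2d(bottleneck_width, bottleneck_width, kernel_size=3,
                      padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(bottleneck_width),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck_width, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))   # residual connection around the block

x = torch.randn(2, 256, 56, 56)
y = ResNeXtBlock()(x)                        # same shape as the input
```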

7,183 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a methodology and set of validation criteria for the systematic creation of synthetic power system test cases, which do not correspond to any real grid and are free from confidentiality requirements.
Abstract: This paper presents a methodology and set of validation criteria for the systematic creation of synthetic power system test cases. The synthesized grids do not correspond to any real grid and are, thus, free from confidentiality requirements. The cases are built to match statistical characteristics found in actual power grids. First, substations are geographically placed on a selected territory, synthesized from public information about the underlying population and generation plants. A clustering technique is employed, which ensures the synthetic substations meet realistic proportions of load and generation, among other constraints. Next, a network of transmission lines is added. This paper describes several structural statistics to be used in characterizing real power system networks, including connectivity, Delaunay triangulation overlap, dc power flow analysis, and line intersection rate. The paper presents a methodology to generate synthetic line topologies with realistic parameters that satisfy these criteria. Then, the test cases can be augmented with additional complexities to build large, realistic cases. The methodology is illustrated in building a 2000 bus public test case that meets the criteria specified.

531 citations


Proceedings Article
06 Jul 2017
TL;DR: In this article, a dual path network (DPN) is proposed for image classification, which shares common features while maintaining the flexibility to explore new features through dual path architectures, achieving state-of-the-art performance on the ImageNet-1k, Places365 and PASCAL VOC datasets.
Abstract: In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of connection paths internally. By revealing the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Convolutional Network (DenseNet) within the HORNN framework, we find that ResNet enables feature re-usage while DenseNet enables new feature exploration, which are both important for learning good representations. To enjoy the benefits from both path topologies, our proposed Dual Path Network shares common features while maintaining the flexibility to explore new features through dual path architectures. Extensive experiments on three benchmark datasets, ImageNet-1k, Places365 and PASCAL VOC, clearly demonstrate superior performance of the proposed DPN over state-of-the-arts. In particular, on the ImageNet-1k dataset, a shallow DPN surpasses the best ResNeXt-101(64x4d) with 26% smaller model size, 25% less computational cost and 8% lower memory consumption, and a deeper DPN (DPN-131) further pushes the state-of-the-art single model performance with about 2 times faster training speed. Experiments on the Places365 large-scale scene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation dataset also demonstrate its consistently better performance than DenseNet, ResNet and the latest ResNeXt model over various applications.

475 citations


Journal ArticleDOI
TL;DR: Extensive simulations and analysis show the effectiveness and efficiency of the proposed framework, in which the blockchain structure performs better in terms of key transfer time than the structure with a central manager, while the dynamic scheme allows SMs to flexibly fit various traffic levels.
Abstract: As modern vehicle and communication technologies have advanced apace, people have begun to believe that the Intelligent Transportation System (ITS) will be achievable within a decade. ITS introduces information technology to the transportation infrastructures and aims to improve road safety and traffic efficiency. However, security is still a main concern in vehicular communication systems (VCSs). This can be addressed through secured group broadcast. Therefore, secure key management schemes are considered a critical technique for network security. In this paper, we propose a framework for providing secure key management within the heterogeneous network. The security managers (SMs) play a key role in the framework by capturing the vehicle departure information, encapsulating blocks to transport keys and then executing rekeying to vehicles within the same security domain. The first part of this framework is a novel network topology based on a decentralized blockchain structure. The blockchain concept is proposed to simplify the distributed key management in heterogeneous VCS domains. The second part of the framework uses a dynamic transaction collection period to further reduce the key transfer time during vehicle handovers. Extensive simulations and analysis show the effectiveness and efficiency of the proposed framework, in which the blockchain structure performs better in terms of key transfer time than the structure with a central manager, while the dynamic scheme allows SMs to flexibly fit various traffic levels.

466 citations


Journal ArticleDOI
TL;DR: OSMnx is presented, a new tool to make the collection of data and creation and analysis of street networks simple, consistent, automatable and sound from the perspectives of graph theory, transportation, and urban design.
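For context, a minimal OSMnx usage sketch; the place name is arbitrary, and depending on the OSMnx version the stats helper may live under ox.stats rather than the top-level namespace.

```python
import osmnx as ox

# Download a drivable street network from OpenStreetMap and inspect its topology.
G = ox.graph_from_place("Berkeley, California, USA", network_type="drive")
stats = ox.basic_stats(G)          # node/edge counts, street lengths, etc.
print(stats["n"], stats["m"])      # number of intersections and street segments
fig, ax = ox.plot_graph(G)         # quick visual check of the network
```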

413 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper takes advantage of the latest developments in deep learning to have an initial segmentation of the aerial images and proposes an algorithm that reasons about missing connections in the extracted road topology as a shortest path problem that can be solved efficiently.
Abstract: Creating road maps is essential for applications such as autonomous driving and city planning. Most approaches in industry focus on leveraging expensive sensors mounted on top of a fleet of cars. This results in very accurate estimates when exploiting a user in the loop. However, these solutions are very expensive and have small coverage. In contrast, in this paper we propose an approach that directly estimates road topology from aerial images. This provides us with an affordable solution with large coverage. Towards this goal, we take advantage of the latest developments in deep learning to have an initial segmentation of the aerial images. We then propose an algorithm that reasons about missing connections in the extracted road topology as a shortest path problem that can be solved efficiently. We demonstrate the effectiveness of our approach in the challenging TorontoCity dataset [23] and show very significant improvements over the state-of-the-art.
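A toy illustration of casting a missing road connection as a shortest-path problem over the segmentation output; the per-pixel cost and the grid Dijkstra below are assumptions for illustration, not the authors' exact formulation.

```python
import heapq
import numpy as np

def connect_endpoints(road_prob, start, goal):
    """Minimum-cost pixel path between two disconnected road endpoints.

    Each pixel costs -log(p_road), so the cheapest path follows pixels the
    segmentation believes are road. `road_prob` is a 2-D probability map;
    `start` and `goal` are (row, col) tuples.
    """
    h, w = road_prob.shape
    cost = -np.log(np.clip(road_prob, 1e-6, 1.0))
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue                        # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    # Walk back from the goal to recover the connecting path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```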

373 citations


Proceedings ArticleDOI
01 Sep 2017
TL;DR: This paper models network traffic as time-series, particularly transmission control protocol / internet protocol (TCP/IP) packets in a predefined time range with supervised learning methods such as multi-layer perceptron (MLP), CNN, CNN-recurrent neural network (CNN-RNN), CNN-long short-term memory ( CNN-LSTM) and CNN-gated recurrent unit (GRU), using millions of known good and bad network connections.
Abstract: Recently, convolutional neural network (CNN) architectures in deep learning have achieved significant results in the field of computer vision. To transfer this performance to the task of intrusion detection (ID) in cyber security, this paper models network traffic as time-series, particularly transmission control protocol / internet protocol (TCP/IP) packets in a predefined time range, with supervised learning methods such as multi-layer perceptron (MLP), CNN, CNN-recurrent neural network (CNN-RNN), CNN-long short-term memory (CNN-LSTM) and CNN-gated recurrent unit (CNN-GRU), using millions of known good and bad network connections. To measure the efficacy of these approaches, we evaluate them on the most widely used synthetic ID data set, KDDCup 99. To select the optimal network architecture, a comprehensive analysis of the MLP, CNN, CNN-RNN, CNN-LSTM and CNN-GRU variants with their topologies, network parameters and network structures is used. The models in each experiment are run for up to 1000 epochs with learning rates in the range [0.01-0.5]. CNN and its variant architectures perform significantly better than the classical machine learning classifiers. This is mainly because CNNs have the capability to extract high-level feature representations that capture the abstract form of low-level feature sets of network traffic connections.
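A minimal PyTorch sketch of the kind of 1-D CNN over per-connection feature vectors that such a comparison covers; the 41-feature input length (as in KDDCup 99) and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ConnCNN(nn.Module):
    """1-D CNN that classifies a TCP/IP connection record as good or bad."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),        # pool over the feature sequence
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                   # x: (batch, 1, seq_len)
        return self.classifier(self.features(x).squeeze(-1))

logits = ConnCNN()(torch.randn(8, 1, 41))   # (8, 2) scores per connection
```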

349 citations


Posted Content
TL;DR: This work reveals the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Convolutional Network (DenseNet) within the HORNN framework, and finds that ResNet enables feature re-usage while DenseNet enables new features exploration which are both important for learning good representations.
Abstract: In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of connection paths internally. By revealing the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Convolutional Network (DenseNet) within the HORNN framework, we find that ResNet enables feature re-usage while DenseNet enables new feature exploration, which are both important for learning good representations. To enjoy the benefits from both path topologies, our proposed Dual Path Network shares common features while maintaining the flexibility to explore new features through dual path architectures. Extensive experiments on three benchmark datasets, ImageNet-1k, Places365 and PASCAL VOC, clearly demonstrate superior performance of the proposed DPN over state-of-the-arts. In particular, on the ImageNet-1k dataset, a shallow DPN surpasses the best ResNeXt-101(64x4d) with 26% smaller model size, 25% less computational cost and 8% lower memory consumption, and a deeper DPN (DPN-131) further pushes the state-of-the-art single model performance with about 2 times faster training speed. Experiments on the Places365 large-scale scene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation dataset also demonstrate its consistently better performance than DenseNet, ResNet and the latest ResNeXt model over various applications.

342 citations


Journal ArticleDOI
TL;DR: This paper investigates the problem of network-based leader-following consensus of nonlinear multi-agent systems via distributed impulsive control by taking network-induced delays into account and derives a general consensus criterion.

270 citations


Proceedings ArticleDOI
25 Jun 2017
TL;DR: In this paper, it was shown that a preemptive Last Generated First Served (LGFS) policy results in smaller age processes at all nodes of the network (in a stochastic ordering sense) than any other causal policy.
Abstract: The problem of reducing the age-of-information has been extensively studied in single-hop networks. In this paper, we minimize the age-of-information in general multihop networks. If the packet transmission times over the network links are exponentially distributed, we prove that a preemptive Last Generated First Served (LGFS) policy results in smaller age processes at all nodes of the network (in a stochastic ordering sense) than any other causal policy. In addition, for arbitrary distributions of packet transmission times, the non-preemptive LGFS policy is shown to minimize the age processes at all nodes among all non-preemptive work-conserving policies (again in a stochastic ordering sense). It is surprising that such simple policies can achieve optimality of the joint distribution of the age processes at all nodes even under arbitrary network topologies, as well as arbitrary packet generation and arrival times. These optimality results not only hold for the age processes, but also for any non-decreasing functional of the age processes.
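To make the service order concrete, a small sketch of a Last Generated First Served queue (a max-heap keyed on generation time); the surrounding multihop scheduling simulator is left out, and this is a toy illustration rather than the paper's model.

```python
import heapq

class LGFSQueue:
    """Serve the packet with the latest generation time first (toy sketch)."""

    def __init__(self):
        self._heap = []                        # max-heap via negated generation times

    def arrive(self, gen_time, packet):
        heapq.heappush(self._heap, (-gen_time, packet))

    def peek(self):
        """Packet a preemptive LGFS server would currently be transmitting."""
        return self._heap[0][1] if self._heap else None

    def depart(self):
        """Remove the freshest packet once its transmission completes."""
        return heapq.heappop(self._heap)[1] if self._heap else None

q = LGFSQueue()
q.arrive(1.0, "p1"); q.arrive(3.0, "p3"); q.arrive(2.0, "p2")
assert q.depart() == "p3"                      # most recently generated goes first
```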

253 citations


Journal ArticleDOI
24 Jul 2017
TL;DR: The novel idea is to find a graph shift that, while being consistent with the provided spectral information, endows the network with certain desired properties such as sparsity, and develops efficient inference algorithms stemming from provably tight convex relaxations of natural nonconvex criteria.
Abstract: We address the problem of identifying the structure of an undirected graph from the observation of signals defined on its nodes. Fundamentally, the unknown graph encodes direct relationships between signal elements, which we aim to recover from observable indirect relationships generated by a diffusion process on the graph. The fresh look advocated here leverages concepts from convex optimization and stationarity of graph signals, in order to identify the graph shift operator (a matrix representation of the graph) given only its eigenvectors. These spectral templates can be obtained, e.g., from the sample covariance of independent graph signals diffused on the sought network. The novel idea is to find a graph shift that, while being consistent with the provided spectral information, endows the network with certain desired properties such as sparsity. To that end, we develop efficient inference algorithms stemming from provably tight convex relaxations of natural nonconvex criteria, particularizing the results for two shifts: the adjacency matrix and the normalized Laplacian. Algorithms and theoretical recovery conditions are developed not only when the templates are perfectly known, but also when the eigenvectors are noisy or when only a subset of them are given. Numerical tests showcase the effectiveness of the proposed algorithms in recovering synthetic and real-world networks.
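A hedged CVXPY sketch of the spectral-templates idea: among all shift operators diagonalized by the given eigenvectors, pick the sparsest adjacency-like one. The normalization constraint and the thresholding are common modeling choices for illustration, not necessarily the paper's exact formulations.

```python
import cvxpy as cp
import numpy as np

def infer_adjacency(V, tol=1e-8):
    """Sparse adjacency matrix whose eigenvectors are the columns of V."""
    n = V.shape[0]
    lam = cp.Variable(n)                    # eigenvalues are free variables
    S = V @ cp.diag(lam) @ V.T              # shift with the prescribed eigenvectors
    constraints = [
        cp.diag(S) == 0,                    # no self-loops
        S >= 0,                             # nonnegative adjacency weights
        cp.sum(S[0, :]) == 1,               # normalization to rule out S = 0
    ]
    prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(S))), constraints)  # l1 sparsity surrogate
    prob.solve()
    A = S.value
    A[np.abs(A) < tol] = 0                  # clean up numerical noise
    return A
```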

Journal ArticleDOI
TL;DR: Simulation results show that the proposed CG-based algorithm can approximate the performance of the ILP and outperform an existing benchmark in terms of the profit from service provisioning.
Abstract: Network function virtualization (NFV) is a promising technology to decouple the network functions from dedicated hardware elements, leading to the significant cost reduction in network service provisioning. As more and more users are trying to access their services wherever and whenever, we expect the NFV-related service function chains (SFCs) to be dynamic and adaptive, i.e., they can be readjusted to adapt to the service requests’ dynamics for better user experience. In this paper, we study how to optimize SFC deployment and readjustment in the dynamic situation. Specifically, we try to jointly optimize the deployment of new users’ SFCs and the readjustment of in-service users’ SFCs while considering the trade-off between resource consumption and operational overhead. We first formulate an integer linear programming (ILP) model to solve the problem exactly. Then, to reduce the time complexity, we design a column generation (CG) model for the optimization. Simulation results show that the proposed CG-based algorithm can approximate the performance of the ILP and outperform an existing benchmark in terms of the profit from service provisioning.

Journal ArticleDOI
TL;DR: This work compares two VM mobility modes, bulk and live migration, as a function of mobile cloud service requirements, determining that a high preference should be given to live migration, while bulk migrations seem to be a feasible alternative only for delay-stringent, tiny-disk services, such as augmented reality support, and only with further relaxation of network constraints.
Abstract: Major interest is currently given to the integration of clusters of virtualization servers, also referred to as 'cloudlets' or 'edge clouds', into the access network to allow higher performance and reliability in the access to mobile edge computing services. We tackle the edge cloud network design problem for mobile access networks. The model is such that the virtual machines (VMs) are associated with mobile users and are allocated to cloudlets. Designing an edge cloud network implies first determining where to install cloudlet facilities among the available sites, then assigning sets of access points, such as base stations, to cloudlets, while supporting VM orchestration and considering partial user mobility information, as well as the satisfaction of service-level agreements. We present link-path formulations supported by heuristics to compute solutions in reasonable time. We quantify the advantage of considering mobility for both users and VMs as up to 20% fewer users unsatisfied in their SLA, with only a small increase in the number of opened facilities. We compare two VM mobility modes, bulk and live migration, as a function of mobile cloud service requirements, determining that a high preference should be given to live migration, while bulk migrations seem to be a feasible alternative only for delay-stringent, tiny-disk services, such as augmented reality support, and only with further relaxation of network constraints.

Journal ArticleDOI
TL;DR: A Q-learning-based approach is proposed to identify critical attack sequences, with consideration of physical system behaviors, in order to analyze smart grid vulnerabilities that can be exploited by attacks on the network topology.
Abstract: Recent studies on sequential attack schemes revealed new smart grid vulnerabilities that can be exploited by attacks on the network topology. Traditional power system contingency analysis needs to be expanded to handle the complex risk of cyber-physical attacks. To analyze the transmission grid vulnerability under sequential topology attacks, this paper proposes a Q-learning-based approach to identify critical attack sequences with consideration of physical system behaviors. A realistic power flow cascading outage model is used to simulate the system behavior, where the attacker can use Q-learning to maximize the damage of a sequential topology attack toward system failures with the least attack effort. Case studies based on three IEEE test systems have demonstrated the learning ability and effectiveness of Q-learning-based vulnerability analysis.
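A minimal tabular Q-learning loop of the kind described above; `env` stands in for a hypothetical cascading-outage simulator with a Gym-like interface (reset/step/available_actions), which is an assumption for illustration only.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    """Learn state-action values for sequential topology attacks (sketch)."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            actions = env.available_actions(s)            # e.g., lines that can be tripped
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda a_: Q[(s, a_)]))   # epsilon-greedy
            s2, reward, done = env.step(a)                 # reward ~ damage per attack effort
            best_next = max((Q[(s2, a_)] for a_ in env.available_actions(s2)), default=0.0)
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            s = s2
    return Q
```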

Journal ArticleDOI
TL;DR: An algorithm based on the gradient push-sum method is proposed to solve the EDP in a distributed manner over communication networks potentially with time-varying topologies and communication delays.
Abstract: In power system operation, the economic dispatch problem (EDP) aims to minimize the total generation cost while meeting the demand and satisfying generator capacity limits. This paper proposes an algorithm based on the gradient push-sum method to solve the EDP in a distributed manner over communication networks potentially with time-varying topologies and communication delays. This paper shows that the proposed algorithm is guaranteed to solve the EDP if the time-varying directed communication network is uniformly jointly strongly connected. Moreover, the proposed algorithm is also able to handle arbitrarily large but bounded time-varying delays on communication links. Numerical simulations are used to illustrate and validate the proposed algorithm.
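A small sketch of the push-sum (ratio consensus) primitive that such an algorithm builds on; the directed ring and plain averaging shown here are illustrative, and the EDP algorithm above interleaves this step with local (sub)gradient updates on each generator's cost function.

```python
import numpy as np

def push_sum_average(x0, out_neighbors, iters=200):
    """Push-sum consensus over a directed graph: x_i / w_i -> average of x0."""
    n = len(x0)
    x = np.array(x0, dtype=float)
    w = np.ones(n)
    for _ in range(iters):
        x_new, w_new = np.zeros(n), np.zeros(n)
        for i in range(n):
            targets = out_neighbors[i] + [i]              # split among out-neighbors and self
            share_x, share_w = x[i] / len(targets), w[i] / len(targets)
            for j in targets:
                x_new[j] += share_x
                w_new[j] += share_w
        x, w = x_new, w_new
    return x / w                                          # each entry is about mean(x0)

# Example: a directed ring on 4 nodes; every entry converges to 2.5.
print(push_sum_average([1.0, 2.0, 3.0, 4.0], {0: [1], 1: [2], 2: [3], 3: [0]}))
```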

Journal ArticleDOI
TL;DR: This paper addresses the consensus problem for a continuous-time multiagent system (MAS) with Markovian network topologies and external disturbance with a proposed consensus protocol that relies only on group and partial modes and eliminates the need for complete knowledge of global modes.
Abstract: This paper addresses the consensus problem for a continuous-time multiagent system (MAS) with Markovian network topologies and external disturbance. Different from some existing results, global jumping modes of the Markovian network topologies are not required to be completely available for consensus protocol design. A network topology mode regulator (NTMR) is first developed to decompose unavailable global modes into several overlapping groups, where overlapping groups refer to the scenario that there exist commonly shared local modes between any two distinct groups. The NTMR schedules which group modes each agent may access at every time step. Then a new group mode-dependent distributed consensus protocol on the basis of relative measurement outputs of neighboring agents is delicately constructed. In this sense, the proposed consensus protocol relies only on group and partial modes and eliminates the need for complete knowledge of global modes. Sufficient conditions on the existence of desired distributed consensus protocols are derived to ensure consensus of the MAS with a prescribed $H_{\infty }$ performance level. Two examples are provided to show the effectiveness of the proposed consensus protocol.

Journal ArticleDOI
Jiaojiao Jiang, Sheng Wen, Shui Yu, Yang Xiang, Wanlei Zhou
TL;DR: The state-of-the-art in source identification techniques is reviewed and the pros and cons of current methods in this field are discussed and a series of experiments and comparisons based on various environment settings are provided.
Abstract: It has long been a significant but difficult problem to identify propagation sources based on limited knowledge of network structures and the varying states of network nodes. In practice, real cases can be locating the sources of rumors in online social networks and finding origins of a rolling blackout in smart grids. This paper reviews the state-of-the-art in source identification techniques and discusses the pros and cons of current methods in this field. Furthermore, in order to gain a quantitative understanding of current methods, we provide a series of experiments and comparisons based on various environment settings. Especially, our observation reveals considerable differences in performance by employing different network topologies, various propagation schemes, and diverse propagation probabilities. We therefore reach the following points for future work. First, current methods remain far from practice as their accuracy in terms of error distance ( ${\delta}$ ) is normally larger than three in most scenarios. Second, the majority of current methods are too time consuming to quickly locate the origins of propagation. In addition, we list five open issues of current methods exposed by the analysis, from the perspectives of topology, number of sources, number of networks, temporal dynamics, and complexity and scalability. Solutions to these open issues are of great academic and practical significance.

Journal ArticleDOI
TL;DR: A new cascade switch-ladder multilevel inverter topology is presented which can generate a large number of output voltage levels and requires fewer components than other structures.
Abstract: In this paper, a new cascade switch-ladder multilevel inverter topology is presented which can generate a large number of output voltage levels. First, a fundamental switch-ladder multilevel inverter structure is described. Then, the structure of the recommended cascade topology, based on the series connection of fundamental switch-ladder topologies, is presented. To generate the maximum number of levels with the minimum number of switching elements, dc sources, and voltage stress on switches, the proposed cascade topology is optimized. Comparison results show that the presented cascade topology requires fewer components. Also, the voltage rating of the switches is lower than in other structures. Experimental results for two topologies are analyzed to verify the performance of the proposed topology.

Proceedings ArticleDOI
07 Aug 2017
TL;DR: While RotorNet dynamically reconfigures its constituent circuit switches, it decouples switch configuration from traffic patterns, obviating the need for demand collection and admitting a fully decentralized control plane.
Abstract: The ever-increasing bandwidth requirements of modern datacenters have led researchers to propose networks based upon optical circuit switches, but these proposals face significant deployment challenges. In particular, previous proposals dynamically configure circuit switches in response to changes in workload, requiring network-wide demand estimation, centralized circuit assignment, and tight time synchronization between various network elements---resulting in a complex and unwieldy control plane. Moreover, limitations in the technologies underlying the individual circuit switches restrict both the rate at which they can be reconfigured and the scale of the network that can be constructed. We propose RotorNet, a circuit-based network design that addresses these two challenges. While RotorNet dynamically reconfigures its constituent circuit switches, it decouples switch configuration from traffic patterns, obviating the need for demand collection and admitting a fully decentralized control plane. At the physical layer, RotorNet relaxes the requirements on the underlying circuit switches---in particular by not requiring individual switches to implement a full crossbar---enabling them to scale to 1000s of ports. We show that RotorNet outperforms comparably priced Fat Tree topologies under a variety of workload conditions, including traces taken from two commercial datacenters. We also demonstrate a small-scale RotorNet operating in practice on an eight-node testbed.

Journal ArticleDOI
01 May 2017-Genetics
TL;DR: The concept of topology weighting is introduced: a method for quantifying relationships between taxa that are not necessarily monophyletic, and for visualizing how these relationships change across the genome, suitable for exploring relationships in almost any genomic dataset.
Abstract: We introduce the concept of topology weighting, a method for quantifying relationships between taxa that are not necessarily monophyletic, and visualizing how these relationships change across the genome. A given set of taxa can be related in a limited number of ways, but if each taxon is represented by multiple sequences, the number of possible topologies becomes very large. Topology weighting reduces this complexity by quantifying the contribution of each taxon topology to the full tree. We describe our method for topology weighting by iterative sampling of subtrees (Twisst), and test it on both simulated and real genomic data. Overall, we show that this is an informative and versatile approach, suitable for exploring relationships in almost any genomic dataset. Scripts to implement the method described are available at http://github.com/simonhmartin/twisst.

Journal ArticleDOI
TL;DR: The results show the ability of the proposed approach to maintain a stable string of realistic vehicles with different control-communication topologies, even in the presence of strong interference, delays, and fading conditions, providing higher comfort and safety for platoon drivers.
Abstract: Automated and coordinated vehicles' driving (platooning) is very challenging due to the multibody control complexity and the presence of unreliable time-varying wireless intervehicular communication (IVC). We propose a novel controller for vehicle platooning based on consensus and analytically demonstrate its stability and dynamic properties. Traditional approaches assume the logical control topology as a constraint fixed a priori , and the control law is designed consequently; our approach makes the control topology a design parameter that can be exploited to reconfigure the controller, depending on the needs and characteristics of the scenario. Furthermore, the controller automatically compensates outdated information caused by network losses and delays. The controller is implemented in Plexe , which is a state-of-the-art IVC and mobility simulator that includes basic building blocks for platooning. Analysis and simulations show the controller robustness and performance in several scenarios, including realistic propagation conditions with interference caused by other vehicles. We compare our approach against a controller taken from the literature, which is generally considered among the most performing ones. Finally, we test the proposed controller by implementing the real dynamics (engine, transmission, braking systems, etc.) of heterogeneous vehicles in Plexe and verifying that platoons remain stable and safe, regardless of real-life impairments that cannot be modeled in the analytic solution. The results show the ability of the proposed approach to maintain a stable string of realistic vehicles with different control-communication topologies, even in the presence of strong interference, delays, and fading conditions, providing higher comfort and safety for platoon drivers.

Journal ArticleDOI
TL;DR: The concept of controllability destructive nodes is proposed, which indicates that the difficulty in graphical characterization turns out to be the identification of the topology structures of controllability destructive nodes.
Abstract: Recently, graphical characterization of multiagent controllability has been studied extensively. A major effort in the study is to determine controllability directly from topology structures of communication graphs. In this paper, we propose the concept of controllability destructive nodes, which indicates that the difficulty in graphical characterization turns out to be the identification of topology structures of controllability destructive nodes. It is shown that each kind of double and triple controllability destructive nodes happens to have a uniform topology structure which can be defined similarly. The definition, however, is verified not to be applicable to the topology structures of quadruple controllability destructive (QCD) nodes. Even so, a design method is proposed to uncover topology structures of QCD nodes for graphs of any size, and a complete graphical characterization is presented for graphs consisting of five vertices. One advantage of the established complete graphical characterization is that the controllability of any graph with any selection of leaders can be determined directly from the identified/defined destructive topology structures. The results generate several necessary and sufficient graphical conditions for controllability. A key step in arriving at these results is the discovery of a relationship between the topology structure of the controllability destructive nodes and a corresponding eigenvector of the Laplacian matrix.
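For reference, a short numerical check of leader-follower controllability from the graph Laplacian (the standard Kalman-rank test on the follower subsystem); this is a generic check, not the paper's graphical characterization.

```python
import numpy as np
import networkx as nx

def is_leader_follower_controllable(G, leaders):
    """Kalman-rank test: followers evolve as x_f' = -L_ff x_f - L_fl u."""
    followers = [v for v in G.nodes if v not in leaders]
    L = nx.laplacian_matrix(G, nodelist=followers + list(leaders)).toarray().astype(float)
    nf = len(followers)
    L_ff, L_fl = L[:nf, :nf], L[:nf, nf:]
    # Controllability matrix [B, AB, ..., A^(nf-1) B] with A = -L_ff, B = L_fl.
    C = np.hstack([np.linalg.matrix_power(-L_ff, k) @ L_fl for k in range(nf)])
    return np.linalg.matrix_rank(C) == nf

# Example: a path graph 0-1-2-3 with node 0 as the single leader is controllable.
print(is_leader_follower_controllable(nx.path_graph(4), leaders=[0]))   # True
```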

Journal ArticleDOI
TL;DR: It is reported that the transfer of diverse, task-rule information in distributed brain regions can be predicted based on estimated activity flow through resting-state network connections, and that these task-rule information transfers are coordinated by global hub regions within cognitive control networks.
Abstract: Resting-state network connectivity has been associated with a variety of cognitive abilities, yet it remains unclear how these connectivity properties might contribute to the neurocognitive computations underlying these abilities. We developed a new approach-information transfer mapping-to test the hypothesis that resting-state functional network topology describes the computational mappings between brain regions that carry cognitive task information. Here, we report that the transfer of diverse, task-rule information in distributed brain regions can be predicted based on estimated activity flow through resting-state network connections. Further, we find that these task-rule information transfers are coordinated by global hub regions within cognitive control networks. Activity flow over resting-state connections thus provides a large-scale network mechanism for cognitive task information transfer and global information coordination in the human brain, demonstrating the cognitive relevance of resting-state network topology.
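A minimal numpy sketch of the activity-flow prediction step that information transfer mapping builds on; array shapes and variable names are illustrative.

```python
import numpy as np

def activity_flow_predict(task_act, rest_fc):
    """Predict each region's task activity from all other regions.

    task_act: (n_regions,) task activations; rest_fc: (n_regions, n_regions)
    resting-state functional connectivity used as the flow weights.
    """
    n = task_act.shape[0]
    pred = np.zeros(n)
    for j in range(n):
        others = np.arange(n) != j
        pred[j] = task_act[others] @ rest_fc[others, j]   # connectivity-weighted flow into j
    return pred

# Transfer is then assessed by comparing predicted and observed activity patterns,
# e.g., with a correlation across regions or task conditions.
```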

Journal ArticleDOI
TL;DR: The benefits of using UAVs for this function include significantly decreased sensor node energy consumption, lower interference, and considerably increased flexibility in controlling the density of the deployed nodes, since the need for the multihop approach to sensor-to-sink communication is either eliminated or significantly reduced.

Posted Content
TL;DR: In decentralized optimization, nodes cooperate to minimize an overall objective function that is the sum (or average) of per-node private objective functions as discussed by the authors, where nodes interleave local computations with communication among all or a subset of the nodes.
Abstract: In decentralized optimization, nodes cooperate to minimize an overall objective function that is the sum (or average) of per-node private objective functions. Algorithms interleave local computations with communication among all or a subset of the nodes. Motivated by a variety of applications---distributed estimation in sensor networks, fitting models to massive data sets, and distributed control of multi-robot systems, to name a few---significant advances have been made towards the development of robust, practical algorithms with theoretical performance guarantees. This paper presents an overview of recent work in this area. In general, rates of convergence depend not only on the number of nodes involved and the desired level of accuracy, but also on the structure and nature of the network over which nodes communicate (e.g., whether links are directed or undirected, static or time-varying). We survey the state-of-the-art algorithms and their analyses tailored to these different scenarios, highlighting the role of the network topology.
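As a concrete illustration of the class of methods surveyed, a minimal decentralized gradient descent sketch over an undirected, static topology; the Metropolis mixing matrix and the quadratic objectives below are assumptions, and with a constant step the iterates only reach a neighborhood of the optimum.

```python
import numpy as np

def decentralized_gradient_descent(grads, W, x0, step=0.05, iters=500):
    """Each node mixes with its neighbors via W, then takes a local gradient step."""
    X = np.tile(np.asarray(x0, dtype=float), (len(grads), 1))      # one iterate per node
    for _ in range(iters):
        X = W @ X                                                  # consensus (mixing) step
        X = X - step * np.array([g(x) for g, x in zip(grads, X)])  # local gradient steps
    return X.mean(axis=0)

# Three nodes on a path graph, each with a private quadratic f_i(x) = 0.5*(x - a_i)^2;
# the network-wide optimum is the mean of the a_i.
grads = [lambda x, a=a: x - a for a in (1.0, 2.0, 6.0)]
W = np.array([[2/3, 1/3, 0.0],      # Metropolis weights: doubly stochastic and
              [1/3, 1/3, 1/3],      # supported only on the path-graph edges
              [0.0, 1/3, 2/3]])
print(decentralized_gradient_descent(grads, W, x0=[0.0]))          # close to [3.0]
```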

Journal ArticleDOI
TL;DR: This paper proposes a methodology that utilizes new data from sensor-equipped DER devices to obtain the distribution grid topology and presents a graphical model to describe the probabilistic relationship among different voltage measurements.
Abstract: Distributed energy resources (DERs), such as photovoltaic, wind, and gas generators, are connected to the grid more than ever before, which introduces tremendous changes in the distribution grid. Due to these changes, it is important to understand where these DERs are connected in order to sustainably operate the distribution grid. But the exact distribution system topology is difficult to obtain due to frequent distribution grid reconfigurations and insufficient knowledge about new components. In this paper, we propose a methodology that utilizes new data from sensor-equipped DER devices to obtain the distribution grid topology. Specifically, a graphical model is presented to describe the probabilistic relationship among different voltage measurements. With power flow analysis, a mutual information-based identification algorithm is proposed to deal with tree and partially meshed networks. Simulation results show highly accurate connectivity identification in the IEEE standard distribution test systems and Electric Power Research Institute test systems.
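A hedged sketch of the mutual-information intuition behind such identification, for the purely radial case: score bus pairs by a Gaussian mutual-information approximation and take a maximum spanning tree (Chow-Liu). Partially meshed feeders need the additional steps described in the paper.

```python
import numpy as np
import networkx as nx

def infer_radial_topology(voltages):
    """Recover a tree topology from a (samples x buses) voltage-magnitude array."""
    rho = np.corrcoef(voltages, rowvar=False)       # pairwise correlation between buses
    n = rho.shape[0]
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            # Gaussian approximation: MI(i, j) = -0.5 * log(1 - rho_ij^2).
            mi = -0.5 * np.log(1.0 - min(rho[i, j] ** 2, 0.999999))
            G.add_edge(i, j, weight=mi)
    return list(nx.maximum_spanning_tree(G).edges())
```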

Journal ArticleDOI
TL;DR: This paper proposes a redundant VM placement optimization approach to enhancing the reliability of cloud services and shows that the proposed approach outperforms four other representative methods in network resource consumption in the service recovery stage.
Abstract: With rapid adoption of the cloud computing model, many enterprises have begun deploying cloud-based services. Failures of virtual machines (VMs) in clouds have caused serious quality assurance issues for those services. VM replication is a commonly used technique for enhancing the reliability of cloud services. However, when determining the VM redundancy strategy for a specific service, many state-of-the-art methods ignore the huge network resource consumption issue that could be experienced when the service is in failure recovery mode. This paper proposes a redundant VM placement optimization approach to enhancing the reliability of cloud services. The approach employs three algorithms. The first algorithm selects an appropriate set of VM-hosting servers from a potentially large set of candidate host servers based upon the network topology. The second algorithm determines an optimal strategy to place the primary and backup VMs on the selected host servers with k-fault-tolerance assurance. Lastly, a heuristic is used to address the task-to-VM reassignment optimization problem, which is formulated as finding a maximum weight matching in bipartite graphs. The evaluation results show that the proposed approach outperforms four other representative methods in network resource consumption in the service recovery stage.
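The last step above, task-to-VM reassignment as a maximum-weight bipartite matching, can be prototyped with a standard assignment solver; the weight matrix here is a made-up example (e.g., expected utility of running task t on VM v).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

weights = np.array([[5.0, 1.0, 3.0],
                    [2.0, 4.0, 6.0],
                    [8.0, 7.0, 2.0]])            # rows: tasks, columns: VMs

# SciPy's Hungarian-algorithm solver minimizes cost, so negate to maximize weight.
tasks, vms = linear_sum_assignment(-weights)
print(list(zip(tasks, vms)), weights[tasks, vms].sum())   # [(0, 0), (1, 2), (2, 1)] 18.0
```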

Journal Article
TL;DR: In this paper, a temporal point process model, Coevolve, is proposed to simulate interleaved diffusion and network events, allowing the intensity of one process to be modulated by that of the other.
Abstract: Information diffusion in online social networks is affected by the underlying network topology, but it also has the power to change it. Online users are constantly creating new links when exposed to new information sources, and in turn these links are altering the way information spreads. However, these two highly intertwined stochastic processes, information diffusion and network evolution, have been predominantly studied separately, ignoring their co-evolutionary dynamics. We propose a temporal point process model, Coevolve, for such joint dynamics, allowing the intensity of one process to be modulated by that of the other. This model allows us to efficiently simulate interleaved diffusion and network events, and generate traces obeying common diffusion and network patterns observed in real-world networks such as Twitter. Furthermore, we also develop a convex optimization framework to learn the parameters of the model from historical diffusion and network evolution traces. We experimented with both synthetic data and data gathered from Twitter, and show that our model provides a good fit to the data as well as more accurate predictions than alternatives.

Journal ArticleDOI
TL;DR: The proposed methodology proves capable of providing a promising solution for drug-target prediction based on topological similarity with a heterogeneous network, and may be readily re-purposed and adapted into existing similarity-based methodologies.
Abstract: Motivation: A heterogeneous network topology possessing abundant interactions between biomedical entities has yet to be utilized in similarity-based methods for predicting drug-target associations based on the array of varying features of drugs and their targets. Deep learning reveals features of vertices of a large network that can be adapted in accommodating the similarity-based solutions to provide a flexible method of drug-target prediction. Results: We propose a similarity-based drug-target prediction method that enhances existing association discovery methods by using a topology-based similarity measure. DeepWalk, a deep learning method, is adopted in this study to calculate the similarities within Linked Tripartite Network (LTN), a heterogeneous network generated from biomedical linked datasets. This proposed method shows promising results for drug-target association prediction: 98.96% AUC ROC score with a 10-fold cross-validation and 99.25% AUC ROC score with a Monte Carlo cross-validation with LTN. By utilizing DeepWalk, we demonstrate that: (i) this method outperforms other existing topology-based similarity computation methods, (ii) the performance is better for tripartite than for bipartite networks and (iii) the measure of similarity using network topology outperforms the ones derived from chemical structure (drugs) or genomic sequence (targets). Our proposed methodology proves capable of providing a promising solution for drug-target prediction based on topological similarity with a heterogeneous network, and may be readily re-purposed and adapted into existing similarity-based methodologies. Availability and implementation: The proposed method has been developed in JAVA and it is available, along with the data, at the following URL: https://github.com/zongnansu1982/drug-target-prediction . Contact: nazong@ucsd.edu. Supplementary information: Supplementary data are available at Bioinformatics online.

Journal ArticleDOI
TL;DR: This article surveys the state-of-the-art solutions for controller placement in SDN, draws a taxonomy based on their objectives, and proposes a new approach to minimize the packet propagation latency between controllers and switches.
Abstract: Recently, a variety of solutions have been proposed to tackle the controller placement problem in SDN. The objectives include minimizing the latency between controllers and their associated switches, enhancing reliability and resilience of the network, and minimizing deployment cost and energy consumption. In this article, we first survey the state-of-the-art solutions and draw a taxonomy based on their objectives, and then propose a new approach to minimize the packet propagation latency between controllers and switches. In order to encourage future research, we also identify the ongoing research challenges and open issues relevant to this problem.
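A hedged sketch of one simple latency-minimizing placement heuristic of the kind such work builds on (greedy k-median over shortest-path latencies); it is illustrative only and not a specific algorithm from the surveyed literature.

```python
import networkx as nx

def greedy_controller_placement(G, k, weight="latency"):
    """Greedily place k controllers to minimize total switch-to-controller latency."""
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight=weight))
    placed = []
    for _ in range(k):
        # Add the candidate node that most reduces the summed latency from every
        # switch to its nearest controller.
        best = min(
            (c for c in G.nodes if c not in placed),
            key=lambda c: sum(min(dist[s][p] for p in placed + [c]) for s in G.nodes),
        )
        placed.append(best)
    return placed

# Example on a small connected topology with unit-latency links.
G = nx.connected_watts_strogatz_graph(20, 4, 0.3, seed=1)
nx.set_edge_attributes(G, 1.0, "latency")
print(greedy_controller_placement(G, k=3))
```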