
Showing papers on "Node (networking) published in 2017"


Proceedings ArticleDOI
04 Aug 2017
TL;DR: Two scalable representation learning models, namely metapath2vec and metapath2vec++, are developed that not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, but also discern the structural and semantic correlations between diverse network objects.
Abstract: We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.

1,794 citations
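To make the core mechanism concrete, here is a minimal sketch of a meta-path-guided random walk. The toy author/paper graph, the node names, and the "A-P-A" co-authorship meta-path are invented for illustration; the heterogeneous skip-gram training that metapath2vec runs on these walks is omitted.

```python
import random

# Toy heterogeneous bibliographic graph: node -> list of (neighbor, type).
# "a*" are authors (type A), "p*" are papers (type P); all values invented.
graph = {
    "a1": [("p1", "P"), ("p2", "P")],
    "a2": [("p1", "P")],
    "a3": [("p2", "P")],
    "p1": [("a1", "A"), ("a2", "A")],
    "p2": [("a1", "A"), ("a3", "A")],
}

def metapath_walk(graph, start, metapath, length, rng):
    """Random walk that only follows neighbors whose type matches the next
    symbol of the (cyclic) meta-path, e.g. "AP" cycling encodes A-P-A-P-..."""
    walk = [start]
    for i in range(1, length):
        wanted = metapath[i % len(metapath)]  # next node type on the meta-path
        candidates = [n for n, t in graph[walk[-1]] if t == wanted]
        if not candidates:
            break
        walk.append(rng.choice(candidates))
    return walk

rng = random.Random(0)
walk = metapath_walk(graph, "a1", "AP", 7, rng)
print(" -> ".join(walk))
```

The walk alternates author and paper nodes by construction, which is exactly the heterogeneous neighborhood the paper's skip-gram model then consumes.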


Proceedings ArticleDOI
04 Aug 2017
TL;DR: Struc2vec, a novel and flexible framework for learning latent representations for the structural identity of nodes, is presented, which improves performance on classification tasks that depend more on structural identity.
Abstract: Structural identity is a concept of symmetry in which network nodes are identified according to the network structure and their relationship to other nodes. Structural identity has been studied in theory and practice over the past decades, but only recently has it been addressed with representational learning techniques. This work presents struc2vec, a novel and flexible framework for learning latent representations for the structural identity of nodes. struc2vec uses a hierarchy to measure node similarity at different scales, and constructs a multilayer graph to encode structural similarities and generate structural context for nodes. Numerical experiments indicate that state-of-the-art techniques for learning node representations fail in capturing stronger notions of structural identity, while struc2vec exhibits much superior performance in this task, as it overcomes limitations of prior approaches. As a consequence, numerical experiments indicate that struc2vec improves performance on classification tasks that depend more on structural identity.

988 citations
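The base ingredient of struc2vec is comparing ordered degree sequences of nodes' neighborhoods, independent of where the nodes sit in the graph. The sketch below does only the innermost layer (immediate neighbors) with a simple elementwise cost; struc2vec itself compares ring after ring with dynamic time warping and stacks the results into a multilayer graph. Graph and cost are illustrative.

```python
# Two disconnected stars: nodes 0 and 4 are hubs that never share an edge,
# yet they are structurally identical. Toy graph, invented for illustration.
graph = {
    0: [1, 2, 3], 1: [0], 2: [0], 3: [0],
    4: [5, 6, 7], 5: [4], 6: [4], 7: [4],
}

def degree_seq(graph, u):
    """Ordered degree sequence of u's immediate neighbors."""
    return sorted(len(graph[v]) for v in graph[u])

def ring_distance(a, b):
    """Toy structural distance between two degree sequences. struc2vec uses
    dynamic time warping to handle unequal lengths; here zip() truncates."""
    return sum(max(x, y) / min(x, y) - 1.0 for x, y in zip(a, b))

d_hubs = ring_distance(degree_seq(graph, 0), degree_seq(graph, 4))
d_hub_leaf = ring_distance(degree_seq(graph, 0), degree_seq(graph, 1))
print(f"hub vs hub: {d_hubs}, hub vs leaf: {d_hub_leaf}")
```

The two hubs get distance 0 despite being in different components, which is the notion of structural identity that proximity-based embeddings miss.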


Journal ArticleDOI
TL;DR: In this paper, the authors study how to optimally manage the freshness of information updates sent from a source node to a destination via a channel and develop efficient algorithms to find the optimal update policy among all causal policies and establish sufficient and necessary conditions for the optimality of the zero-wait policy.
Abstract: In this paper, we study how to optimally manage the freshness of information updates sent from a source node to a destination via a channel. A proper metric for data freshness at the destination is the age-of-information , or simply age , which is defined as how old the freshest received update is, since the moment that this update was generated at the source node (e.g., a sensor). A reasonable update policy is the zero-wait policy, i.e., the source node submits a fresh update once the previous update is delivered, which achieves the maximum throughput and the minimum delay. Surprisingly, this zero-wait policy does not always minimize the age. This counter-intuitive phenomenon motivates us to study how to optimally control information updates to keep the data fresh and to understand when the zero-wait policy is optimal. We introduce a general age penalty function to characterize the level of dissatisfaction on data staleness and formulate the average age penalty minimization problem as a constrained semi-Markov decision problem with an uncountable state space. We develop efficient algorithms to find the optimal update policy among all causal policies and establish sufficient and necessary conditions for the optimality of the zero-wait policy. Our investigation shows that the zero-wait policy is far from the optimum if: 1) the age penalty function grows quickly with respect to the age; 2) the packet transmission times over the channel are positively correlated over time; or 3) the packet transmission times are highly random (e.g., following a heavy-tail distribution).

857 citations
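The counter-intuitive result can be reproduced numerically. The sketch below simulates a generate-at-will source with i.i.d. transmission times of 0 or 2 (a toy highly-random channel, not a distribution from the paper) and compares the zero-wait policy against a policy that waits after an instantaneous delivery; with this kind of randomness, waiting lowers the time-average age.

```python
import random

def average_age(policy_wait, n_updates=200_000, seed=1):
    """Time-average age-of-information for a generate-at-will source.
    Transmission times Y are i.i.d., equal to 0 or 2 with probability 1/2
    each (illustrative). policy_wait(y) is how long the source waits after
    a delivery whose transmission took y before submitting the next update."""
    rng = random.Random(seed)
    area = 0.0
    total = 0.0
    y_prev = rng.choice((0.0, 2.0))  # service time of the first delivery
    for _ in range(n_updates):
        y_next = rng.choice((0.0, 2.0))
        d = policy_wait(y_prev) + y_next  # time between consecutive deliveries
        # age rises linearly from y_prev to y_prev + d over this interval,
        # so the area under the age curve is a trapezoid
        area += (y_prev + y_prev + d) / 2.0 * d
        total += d
        y_prev = y_next
    return area / total

zero_wait = average_age(lambda y: 0.0)
wait_after_fast = average_age(lambda y: 1.0 if y == 0.0 else 0.0)
print(f"zero-wait: {zero_wait:.3f}, wait-after-fast delivery: {wait_after_fast:.3f}")
```

Analytically, zero-wait gives average age 2.0 here, while waiting one unit after each instantaneous delivery gives 11/6 ≈ 1.83, matching the paper's point that zero-wait is suboptimal when transmission times are highly random.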


Journal ArticleDOI
TL;DR: A high-level view of a UAV-based integrative IoT platform for the delivery of IoT services from great height, along with the overall system orchestrator, is presented and how UAVs can be used for crowd surveillance based on face recognition is demonstrated.
Abstract: Unmanned aerial vehicles are gaining a lot of popularity among an ever-growing community of amateurs as well as service providers. Emerging technologies, such as LTE 4G/5G networks and mobile edge computing, will widen the use case scenarios of UAVs. In this article, we discuss the potential of UAVs, equipped with IoT devices, in delivering IoT services from great heights. A high-level view of a UAV-based integrative IoT platform for the delivery of IoT services from great height, along with the overall system orchestrator, is presented in this article. As an envisioned use case of the platform, the article demonstrates how UAVs can be used for crowd surveillance based on face recognition. To evaluate the use case, we study the offloading of video data processing to a MEC node compared to the local processing of video data onboard UAVs. For this, we developed a testbed consisting of a local processing node and one MEC node. To perform face recognition, the Local Binary Pattern Histogram method from the Open Source Computer Vision (OpenCV) library is used. The obtained results demonstrate the efficiency of the MEC-based offloading approach in saving the scarce energy of UAVs, reducing the processing time of recognition, and promptly detecting suspicious persons.

613 citations


Posted Content
TL;DR: This paper studies a D-PSGD algorithm and provides the first theoretical analysis that indicates a regime in which decentralized algorithms might outperform centralized algorithms for distributed stochastic gradient descent.
Abstract: Most distributed machine learning systems nowadays, including TensorFlow and CNTK, are built in a centralized fashion. One bottleneck of centralized algorithms lies in the high communication cost on the central node. Motivated by this, we ask: can decentralized algorithms be faster than their centralized counterparts? Although decentralized PSGD (D-PSGD) algorithms have been studied by the control community, existing analysis and theory do not show any advantage over centralized PSGD (C-PSGD) algorithms, simply assuming the application scenario where only the decentralized network is available. In this paper, we study a D-PSGD algorithm and provide the first theoretical analysis that indicates a regime in which decentralized algorithms might outperform centralized algorithms for distributed stochastic gradient descent. This is because D-PSGD has comparable total computational complexity to C-PSGD but requires much less communication cost on the busiest node. We further conduct an empirical study to validate our theoretical analysis across multiple frameworks (CNTK and Torch), different network configurations, and computation platforms with up to 112 GPUs. On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than its well-optimized centralized counterparts.

582 citations
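A minimal sketch of the D-PSGD pattern: each worker takes a local stochastic gradient step, then averages only with its ring neighbors through a doubly-stochastic mixing step, with no central server. The ring topology, quadratic losses, and all constants below are invented for illustration; D-PSGD's actual analysis covers general mixing matrices and nonconvex objectives.

```python
import random

# n workers on a ring, each holding a local quadratic loss
# f_i(x) = 0.5 * (x - a_i)^2; the global optimum is the mean of the a_i.
# Weights of 1/3 to self and each ring neighbor form a doubly-stochastic
# mixing matrix, as D-PSGD assumes.
n = 8
a = [float(i) for i in range(n)]  # local data targets: 0..7 (illustrative)
x = [0.0] * n                     # each worker's model copy
lr = 0.1
rng = random.Random(0)

for step in range(300):
    # 1) local stochastic gradient step (exact gradient plus small noise)
    grads = [(x[i] - a[i]) + rng.gauss(0.0, 0.01) for i in range(n)]
    x = [x[i] - lr * grads[i] for i in range(n)]
    # 2) gossip averaging with ring neighbors only (no central node)
    x = [(x[i - 1] + x[i] + x[(i + 1) % n]) / 3.0 for i in range(n)]

mean_a = sum(a) / n
print([round(v, 2) for v in x], "global optimum:", mean_a)
```

The workers end up clustered around the global optimum 3.5 even though no node ever sees more than its two neighbors' models, which is the communication pattern that keeps the busiest node cheap.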


Journal ArticleDOI
TL;DR: In this paper, a convolutional neural network is used to predict the coefficients of a locally affine model in bilateral space, which is then applied to the full-resolution image.
Abstract: Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher.

510 citations
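The model's final operation is a locally affine color transform: each full-resolution pixel is mapped through a 3x4 affine matrix sliced from the low-resolution bilateral grid. The sketch below applies one such matrix to one pixel; the grid, the edge-preserving slicing, and the CNN that predicts the coefficients are all omitted, and the "warming" matrix is an invented example.

```python
def apply_affine(px, M):
    """Apply a 3x4 affine color transform: M has 3 rows of
    [a_r, a_g, a_b, bias]; px is an (r, g, b) pixel in [0, 1]."""
    r, g, b = px
    return tuple(row[0] * r + row[1] * g + row[2] * b + row[3] for row in M)

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
# Toy "warming" edit: boost red, dampen blue (values illustrative).
warm = [[1.1, 0, 0, 0.02], [0, 1.0, 0, 0.0], [0, 0, 0.9, 0.0]]

gray = (0.5, 0.5, 0.5)
print(apply_affine(gray, identity))
print(apply_affine(gray, warm))
```

Because the predicted coefficients vary smoothly across the bilateral grid, nearby pixels with similar intensity get similar matrices, which is what lets a low-resolution network drive a full-resolution, edge-aware enhancement.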


Proceedings Article
01 Dec 2017
TL;DR: In this article, the authors study a decentralized stochastic gradient descent (D-PSGD) algorithm and provide the first theoretical analysis that indicates a regime in which decentralized algorithms might outperform centralized algorithms for distributed Stochastic Gradient Descent.
Abstract: Most distributed machine learning systems nowadays, including TensorFlow and CNTK, are built in a centralized fashion. One bottleneck of centralized algorithms lies in the high communication cost on the central node. Motivated by this, we ask: can decentralized algorithms be faster than their centralized counterparts? Although decentralized PSGD (D-PSGD) algorithms have been studied by the control community, existing analysis and theory do not show any advantage over centralized PSGD (C-PSGD) algorithms, simply assuming the application scenario where only the decentralized network is available. In this paper, we study a D-PSGD algorithm and provide the first theoretical analysis that indicates a regime in which decentralized algorithms might outperform centralized algorithms for distributed stochastic gradient descent. This is because D-PSGD has comparable total computational complexity to C-PSGD but requires much less communication cost on the busiest node. We further conduct an empirical study to validate our theoretical analysis across multiple frameworks (CNTK and Torch), different network configurations, and computation platforms with up to 112 GPUs. On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than its well-optimized centralized counterparts.

489 citations


Proceedings ArticleDOI
TL;DR: Struc2vec as mentioned in this paper uses a hierarchy to measure node similarity at different scales, and constructs a multilayer graph to encode structural similarities and generate structural context for nodes, which improves performance on classification tasks that depend more on structural identity.
Abstract: Structural identity is a concept of symmetry in which network nodes are identified according to the network structure and their relationship to other nodes. Structural identity has been studied in theory and practice over the past decades, but only recently has it been addressed with representational learning techniques. This work presents struc2vec, a novel and flexible framework for learning latent representations for the structural identity of nodes. struc2vec uses a hierarchy to measure node similarity at different scales, and constructs a multilayer graph to encode structural similarities and generate structural context for nodes. Numerical experiments indicate that state-of-the-art techniques for learning node representations fail in capturing stronger notions of structural identity, while struc2vec exhibits much superior performance in this task, as it overcomes limitations of prior approaches. As a consequence, numerical experiments indicate that struc2vec improves performance on classification tasks that depend more on structural identity.

472 citations


Proceedings ArticleDOI
02 Feb 2017
TL;DR: A novel Label informed Attributed Network Embedding (LANE) framework that can smoothly incorporate label information into the attributed network embedding while preserving their correlations is proposed and achieves significantly better performance compared with the state-of-the-art embedding algorithms.
Abstract: Attributed network embedding aims to seek low-dimensional vector representations for nodes in a network, such that the original network topological structure and node attribute proximity can be preserved in the vectors. These learned representations have been demonstrated to be helpful in many learning tasks such as network clustering and link prediction. While existing algorithms operate in an unsupervised manner, nodes in many real-world attributed networks are often associated with abundant label information, which is potentially valuable in seeking more effective joint vector representations. In this paper, we investigate how labels can be modeled and incorporated to improve attributed network embedding. This is a challenging task since label information could be noisy and incomplete. In addition, labels are completely distinct from the geometrical structure and node attributes. The bewildering combination of heterogeneous information makes the joint vector representation learning more difficult. To address these issues, we propose a novel Label informed Attributed Network Embedding (LANE) framework. It can smoothly incorporate label information into the attributed network embedding while preserving their correlations. Experiments on real-world datasets demonstrate that the proposed framework achieves significantly better performance compared with the state-of-the-art embedding algorithms.

467 citations


Journal ArticleDOI
TL;DR: The authors prove that no algorithm can uniquely solve community detection and establish a general No Free Lunch theorem for community detection, which implies that no algorithm can be optimal for all possible community detection tasks.
Abstract: Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system's components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks' links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures.

447 citations
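One standard way to quantify how well node metadata align with detected communities, in the spirit of (though much simpler than) the paper's statistical techniques, is normalized mutual information between the two partitions. The 8-node labelings below are invented to show that metadata can be informative yet far from "ground truth".

```python
from collections import Counter
from math import log

def nmi(labels_a, labels_b):
    """Normalized mutual information between two partitions of the same
    node set: MI(A, B) / sqrt(H(A) * H(B)), computed from label counts."""
    n = len(labels_a)
    pa = Counter(labels_a)
    pb = Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    mi = sum(c / n * log((c / n) / ((pa[a] / n) * (pb[b] / n)))
             for (a, b), c in joint.items())
    ha = -sum(c / n * log(c / n) for c in pa.values())
    hb = -sum(c / n * log(c / n) for c in pb.values())
    if ha == 0.0 or hb == 0.0:
        return 0.0
    return mi / ((ha * hb) ** 0.5)

communities = [0, 0, 0, 0, 1, 1, 1, 1]  # partition found by some algorithm
metadata    = [0, 0, 0, 1, 1, 1, 0, 1]  # observed node attribute (toy)
print(f"NMI(communities, metadata)    = {nmi(communities, metadata):.3f}")
print(f"NMI(communities, communities) = {nmi(communities, communities):.3f}")
```

A low but nonzero NMI is exactly the situation the paper warns about: the metadata correlate with structure, but treating them as ground truth would unfairly penalize the algorithm.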


Journal ArticleDOI
TL;DR: In this paper, the authors derived the downlink coverage probability of a reference receiver located at an arbitrary position on the ground assuming Nakagami-$m$ fading for all wireless links.
Abstract: In this paper, we consider a finite network of unmanned aerial vehicles serving a given region. Modeling this network as a uniform binomial point process, we derive the downlink coverage probability of a reference receiver located at an arbitrary position on the ground assuming Nakagami-$m$ fading for all wireless links. The reference receiver is assumed to connect to its closest transmitting node as is usually the case in cellular systems. After deriving the distribution of distances from the reference receiver to the serving and interfering nodes, we derive an exact expression for downlink coverage probability in terms of the derivative of the Laplace transform of the interference power distribution. In the downlink of this system, it is not unusual to encounter scenarios in which the line-of-sight component is significantly stronger than the reflected multipath components. To emulate such scenarios, we also derive the coverage probability in the absence of fading from the results of Nakagami-$m$ fading by taking the limit $m \to \infty$. Using asymptotic expansion of the incomplete gamma function, we concretely show that this limit reduces to a redundant condition. Consequently, we derive an accurate coverage probability approximation for this case using a dominant-interferer-based approach in which the effect of the dominant interferer is exactly captured and the residual interference from other interferers is carefully approximated. We then derive the bounds of the approximate coverage probability using the Berry-Esseen theorem. Our analyses reveal several useful trends in coverage probability as a function of the height of the transmitting nodes and the location of the reference receiver on the ground.
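A Monte-Carlo sketch of the paper's setup: a fixed number of aerial transmitters at a common height, uniformly placed in a disk (a binomial point process), with the ground receiver served by the closest one and Nakagami-$m$ fading on every link (power gain distributed as Gamma(m, 1/m), which has unit mean). All numbers (height, radius, path-loss exponent, SIR threshold) are illustrative, not the paper's; the receiver is placed at the disk center for simplicity, whereas the paper handles arbitrary positions.

```python
import math
import random

def coverage_prob(n=5, radius=100.0, height=50.0, alpha=3.0, m=3.0,
                  sir_db=0.0, trials=20_000, seed=0):
    """Estimate P(SIR > threshold) for a receiver at the disk center,
    served by the closest of n uniformly placed aerial transmitters."""
    rng = random.Random(seed)
    thr = 10 ** (sir_db / 10)
    covered = 0
    for _ in range(trials):
        # uniform points in a disk: squared horizontal distance is uniform
        # on [0, radius^2]; add height^2 for the 3-D link distance squared
        d2 = sorted(rng.random() * radius**2 + height**2 for _ in range(n))
        # Gamma(m, scale=1/m) has unit mean, i.e. Nakagami-m power fading
        fade = [rng.gammavariate(m, 1.0 / m) for _ in range(n)]
        sig = fade[0] * d2[0] ** (-alpha / 2)           # closest node serves
        interf = sum(f * d ** (-alpha / 2) for f, d in zip(fade[1:], d2[1:]))
        covered += sig > thr * interf
    return covered / trials

print(f"coverage probability ~ {coverage_prob():.3f}")
```

Sweeping `height` or `n` in this sketch reproduces the qualitative trends the paper studies analytically, without the Laplace-transform machinery.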

Proceedings ArticleDOI
06 Nov 2017
TL;DR: This paper studies an important yet largely under-explored setting of graph embedding, i.e., embedding communities instead of each individual nodes, and proposes a novel community embedding framework that jointly solves the three tasks together.
Abstract: In this paper, we study an important yet largely under-explored setting of graph embedding, i.e., embedding communities instead of each individual nodes. We find that community embedding is not only useful for community-level applications such as graph visualization, but also beneficial to both community detection and node classification. To learn such embedding, our insight hinges upon a closed loop among community embedding, community detection and node embedding. On the one hand, node embedding can help improve community detection, which outputs good communities for fitting better community embedding. On the other hand, community embedding can be used to optimize the node embedding by introducing a community-aware high-order proximity. Guided by this insight, we propose a novel community embedding framework that jointly solves the three tasks together. We evaluate such a framework on multiple real-world datasets, and show that it improves graph visualization and outperforms state-of-the-art baselines in various application tasks, e.g., community detection and node classification.

Journal ArticleDOI
TL;DR: It is proved that asymptotic stability of such a DMPC can be achieved through an explicit sufficient condition on the weights of the cost functions, by using the sum of local cost functions as a Lyapunov candidate.
Abstract: This paper presents a distributed model predictive control (DMPC) algorithm for heterogeneous vehicle platoons with unidirectional topologies and a priori unknown desired set point. The vehicles (or nodes) in a platoon are dynamically decoupled but constrained by spatial geometry. Each node is assigned a local open-loop optimal control problem only relying on the information of neighboring nodes, in which the cost function is designed by penalizing on the errors between the predicted and assumed trajectories. Together with this penalization, an equality-based terminal constraint is proposed to ensure stability, which enforces the terminal states of each node in the predictive horizon equal to the average of its neighboring states. By using the sum of local cost functions as a Lyapunov candidate, it is proved that asymptotic stability of such a DMPC can be achieved through an explicit sufficient condition on the weights of the cost functions. Simulations with passenger cars demonstrate the effectiveness of the proposed DMPC.

Journal ArticleDOI
TL;DR: This paper introduces a decomposition framework to model, analyze, and design the platoon system, and the basis of typical distributed control techniques is presented, including linear consensus control, distributed robust control, distributing sliding mode control, and distributed model predictive control.
Abstract: The platooning of connected and automated vehicles (CAVs) is expected to have a transformative impact on road transportation, e.g., enhancing highway safety, improving traffic utility, and reducing fuel consumption. Requiring only local information, distributed control schemes are scalable approaches to the coordination of multiple CAVs without using centralized communication and computation. From the perspective of multi-agent consensus control, this paper introduces a decomposition framework to model, analyze, and design the platoon system. In this framework, a platoon is naturally decomposed into four interrelated components, i.e., 1) node dynamics, 2) information flow network, 3) distributed controller, and 4) geometry formation. The classic model of each component is summarized according to the results of the literature survey; four main performance metrics, i.e., internal stability, stability margin, string stability, and coherence behavior, are discussed in the same fashion. Also, the basis of typical distributed control techniques is presented, including linear consensus control, distributed robust control, distributed sliding mode control, and distributed model predictive control.

Journal ArticleDOI
TL;DR: A lightweight anonymous mutual authentication and key agreement scheme for centralized two-hop WBANs is proposed, which allows sensor nodes attached to the patient’s body to authenticate with the local server/hub node and establish a session key in an anonymous and unlinkable manner.

Journal ArticleDOI
TL;DR: A modified Stable Election Protocol (SEP), named Prolong-SEP (P-SEP), is presented to prolong the stable period of Fog-supported sensor networks by maintaining balanced energy consumption.
Abstract: Energy efficiency is one of the main issues that will drive the design of fog-supported wireless sensor networks (WSNs). Indeed, the behavior of such networks becomes very unstable under node heterogeneity and/or node failure. In WSNs, clusters are dynamically built up by neighbor nodes to save energy and prolong the network lifetime. One of the nodes plays the role of Cluster Head (CH) and is responsible for transferring data among the neighboring sensors. Due to the pervasive use of WSNs, finding an energy-efficient policy to select CHs has become increasingly important. Given these motivations, in this paper, a modified Stable Election Protocol (SEP), named Prolong-SEP (P-SEP), is presented to prolong the stable period of fog-supported sensor networks by maintaining balanced energy consumption. P-SEP enables uniform node distribution, a new CH selection policy, and a prolonged stable period of the system, especially before the failure of the first node. P-SEP considers two levels of node heterogeneity: advanced and normal nodes. In P-SEP, both advanced and normal nodes have the opportunity to become CHs. The performance of the proposed approach is evaluated by varying the various parameters of the network in comparison with other state-of-the-art cluster-based routing protocols. The simulation results point out that, by varying the initial energy and node heterogeneity parameters, the network lifetime of P-SEP improved by 31%, 29%, 20%, and 40% in comparison with SEP, Low-Energy Adaptive Clustering Hierarchy with Deterministic Cluster-Head Selection (LEACH-DCHS), Modified SEP (M-SEP), and an efficient modified SEP (EM-SEP), respectively.
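The SEP family's handling of two-level heterogeneity can be sketched with the classic SEP election probabilities, which weight advanced nodes (with fraction m and extra-energy factor alpha) more heavily while keeping the overall expected CH fraction at p_opt. The parameter values and the single simulated round below are illustrative; P-SEP's specific CH-selection refinements are not reproduced here.

```python
import random

p_opt = 0.1          # desired overall fraction of cluster heads per round
m, alpha = 0.2, 1.0  # 20% advanced nodes with 100% extra initial energy

# Classic SEP weighted election probabilities for normal / advanced nodes.
p_normal = p_opt / (1 + alpha * m)
p_advanced = p_opt * (1 + alpha) / (1 + alpha * m)

def threshold(p, r):
    """SEP/LEACH election threshold in round r for an eligible node with
    election probability p; the epoch length is 1/p rounds."""
    return p / (1 - p * (r % round(1 / p)))

rng = random.Random(0)
heads, trials = 0, 100_000
for _ in range(trials):
    p = p_advanced if rng.random() < m else p_normal
    # round 0 of an epoch, where the threshold equals p itself
    if rng.random() < threshold(p, 0):
        heads += 1
print(f"p_normal={p_normal:.4f}, p_advanced={p_advanced:.4f}, "
      f"elected CH fraction ~ {heads / trials:.3f}")
```

By construction, (1 - m) * p_normal + m * p_advanced = p_opt, so the network still elects about 10% CHs per round while advanced nodes shoulder proportionally more of the CH duty.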

Journal ArticleDOI
23 May 2017-Sensors
TL;DR: This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments, and determines the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node.
Abstract: LoRa is a long-range, low power, low bit rate and single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The amount of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data.
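The pure-Aloha baseline the study compares against has a closed form: with offered load G (packet starts per packet airtime), the success probability is e^(-2G), since a packet is lost if any other transmission starts within one airtime on either side. The Monte-Carlo check below uses arbitrary illustrative numbers, not the paper's LoRa airtimes or duty cycles.

```python
import math
import random

def aloha_success_sim(G, n_packets=50_000, seed=0):
    """Simulate pure Aloha with unit packet airtime: n_packets start times
    scattered uniformly at rate G per unit time; a packet succeeds iff no
    other start falls within +/- 1 airtime of its own start."""
    rng = random.Random(seed)
    horizon = n_packets / G
    starts = sorted(rng.uniform(0.0, horizon) for _ in range(n_packets))
    ok = 0
    for i, s in enumerate(starts):
        left_clear = i == 0 or s - starts[i - 1] >= 1.0
        right_clear = i == n_packets - 1 or starts[i + 1] - s >= 1.0
        ok += left_clear and right_clear
    return ok / n_packets

G = 0.5
print(f"simulated: {aloha_success_sim(G):.3f}  theory e^(-2G): {math.exp(-2 * G):.3f}")
```

As a sanity check against the abstract's figure, e^(-2G) = 0.1 (i.e. ~90% losses) corresponds to an offered load of about G ≈ 1.15 packet airtimes, so the claimed pure-Aloha loss rate implies a heavily loaded channel.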

Proceedings ArticleDOI
06 Nov 2017
TL;DR: DANE first provides an offline method for a consensus embedding and then leverages matrix perturbation theory to maintain the freshness of the end embedding results in an online manner, and performs extensive experiments to corroborate the effectiveness and efficiency of the proposed framework.
Abstract: Network embedding leverages the node proximity manifested to learn a low-dimensional node vector representation for each node in the network. The learned embeddings could advance various learning tasks such as node classification, network clustering, and link prediction. Most, if not all, of the existing works, are overwhelmingly performed in the context of plain and static networks. Nonetheless, in reality, network structure often evolves over time with addition/deletion of links and nodes. Also, a vast majority of real-world networks are associated with a rich set of node attributes, and their attribute values are also naturally changing, with the emerging of new content patterns and the fading of old content patterns. These changing characteristics motivate us to seek an effective embedding representation to capture network and attribute evolving patterns, which is of fundamental importance for learning in a dynamic environment. To our best knowledge, we are the first to tackle this problem with the following two challenges: (1) the inherently correlated network and node attributes could be noisy and incomplete, it necessitates a robust consensus representation to capture their individual properties and correlations; (2) the embedding learning needs to be performed in an online fashion to adapt to the changes accordingly. In this paper, we tackle this problem by proposing a novel dynamic attributed network embedding framework - DANE. In particular, DANE first provides an offline method for a consensus embedding and then leverages matrix perturbation theory to maintain the freshness of the end embedding results in an online manner. We perform extensive experiments on both synthetic and real attributed networks to corroborate the effectiveness and efficiency of the proposed framework.

Proceedings ArticleDOI
TL;DR: In this paper, the authors propose a dynamic attributed network embedding framework (DANE), which first provides an offline method for a consensus embedding and then leverages matrix perturbation theory to maintain the freshness of the end embedding results in an online manner.
Abstract: Network embedding leverages the node proximity manifested to learn a low-dimensional node vector representation for each node in the network. The learned embeddings could advance various learning tasks such as node classification, network clustering, and link prediction. Most, if not all, of the existing works, are overwhelmingly performed in the context of plain and static networks. Nonetheless, in reality, network structure often evolves over time with addition/deletion of links and nodes. Also, a vast majority of real-world networks are associated with a rich set of node attributes, and their attribute values are also naturally changing, with the emerging of new content patterns and the fading of old content patterns. These changing characteristics motivate us to seek an effective embedding representation to capture network and attribute evolving patterns, which is of fundamental importance for learning in a dynamic environment. To our best knowledge, we are the first to tackle this problem with the following two challenges: (1) the inherently correlated network and node attributes could be noisy and incomplete, it necessitates a robust consensus representation to capture their individual properties and correlations; (2) the embedding learning needs to be performed in an online fashion to adapt to the changes accordingly. In this paper, we tackle this problem by proposing a novel dynamic attributed network embedding framework - DANE. In particular, DANE first provides an offline method for a consensus embedding and then leverages matrix perturbation theory to maintain the freshness of the end embedding results in an online manner. We perform extensive experiments on both synthetic and real attributed networks to corroborate the effectiveness and efficiency of the proposed framework.

Journal ArticleDOI
21 Dec 2017-Energies
TL;DR: A black start strategy for microgrids based on parallel restoration is proposed, in which the black start capability of distributed generations is assessed via the variation coefficient method, realizing whole-network optimization of the reconstructed network.
Abstract: The black start capability is vital for microgrids, which can potentially improve the reliability of the power grid. This paper proposes a black start strategy for microgrids based on a parallel restoration strategy. Considering the characteristics of distributed generations (DGs), an evaluation model, which is used to assess the black start capability of DGs, is established by adopting the variation coefficient method. Thus, the DGs with good black start capability, which are selected by a diversity sequence method, are restored first in parallel under the constraints of DGs and network. During the selection process of recovery paths, line weight and node importance degree are proposed under the consideration of the node topological importance and the load importance as well as the backbone network restoration time. Therefore, the whole optimization of the reconstructed network is realized. Finally, the simulation results verify the feasibility and effectiveness of the strategy.

Proceedings ArticleDOI
03 Apr 2017
TL;DR: In this article, an end-to-end deep learning approach was proposed to predict the future size of cascades in a social network, without hand-crafted features or heuristics.
Abstract: Information cascades, effectively facilitated by most social network platforms, are recognized as a major factor in almost every social success and disaster in these networks. Can cascades be predicted? While many believe that they are inherently unpredictable, recent work has shown that some key properties of information cascades, such as size, growth, and shape, can be predicted by a machine learning algorithm that combines many features. These predictors all depend on a bag of hand-crafted features to represent the cascade network and the global network structures. Such features, always carefully and sometimes mysteriously designed, are not easy to extend or to generalize to a different platform or domain. Inspired by the recent successes of deep learning in multiple data mining tasks, we investigate whether an end-to-end deep learning approach could effectively predict the future size of cascades. Such a method automatically learns the representation of individual cascade graphs in the context of the global network structure, without hand-crafted features or heuristics. We find that node embeddings fall short of predictive power, and it is critical to learn the representation of a cascade graph as a whole. We present algorithms that learn the representation of cascade graphs in an end-to-end manner, which significantly improve the performance of cascade prediction over strong baselines including feature based methods, node embedding methods, and graph kernel methods. Our results also provide interesting implications for cascade prediction in general.
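One common way to feed whole cascade graphs to a sequence model, loosely in the spirit of such end-to-end approaches, is to sample random walks over the cascade graph and let a downstream encoder (e.g. a GRU) learn the representation. The sampling sketch below is illustrative, not the paper's exact pipeline.

```python
import random

def cascade_walks(adj, n_walks, walk_len, rng):
    # Sample fixed-length random walks from a cascade graph. The resulting
    # sequences of node ids can be embedded and fed to a sequence encoder,
    # so the cascade representation is learned end to end rather than
    # summarised by hand-crafted features.
    nodes = list(adj)
    walks = []
    for _ in range(n_walks):
        v = rng.choice(nodes)
        walk = [v]
        for _ in range(walk_len - 1):
            if not adj[v]:          # dead end: stop this walk early
                break
            v = rng.choice(adj[v])
            walk.append(v)
        walks.append(walk)
    return walks

# Toy who-reshared-from-whom cascade DAG.
cascade = {0: [1, 2], 1: [3], 2: [], 3: []}
walks = cascade_walks(cascade, n_walks=4, walk_len=3, rng=random.Random(0))
print(len(walks))  # 4
```

The key point from the abstract survives even in this toy form: the unit being represented is the cascade graph as a whole (a bag of walks over it), not individual node embeddings.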

Journal ArticleDOI
TL;DR: Simulation results on sample networks reveal just how relevant the centrality of initiator nodes is to the subsequent development of an information cascade; the spreading influence of a node is defined as the fraction of nodes activated as a result of that node's initial activation.
Abstract: Information cascades are important dynamical processes in complex networks. An information cascade can describe the spreading dynamics of rumour, disease, memes, or marketing campaigns, which initially start from a node or a set of nodes in the network. If conditions are right, information cascades rapidly encompass large parts of the network, thus leading to epidemic spreading. Certain network topologies are particularly conducive to epidemics, while others decelerate and even prohibit rapid information spreading. Here we review models that describe information cascades in complex networks, with an emphasis on the role and consequences of node centrality. In particular, we present simulation results on sample networks that reveal just how relevant the centrality of initiator nodes is to the subsequent development of an information cascade, and we define the spreading influence of a node as the fraction of nodes that are activated as a result of the initial activation of that node. A systematic review of existing results shows that some centrality measures, such as the degree and betweenness, are positively correlated with the spreading influence, while other centrality measures, such as eccentricity and the information index, are negatively correlated. A positive correlation implies that choosing a node with the highest centrality value will activate the largest number of nodes, while a negative correlation implies that the node with the lowest centrality value will have the same effect. We discuss possible applications of these results, and we emphasize how information cascades can help us identify nodes with the highest spreading capability in complex networks.
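The spreading-influence definition above can be estimated directly by Monte Carlo simulation. A minimal independent-cascade sketch follows; the activation probability `p` and the toy star graph are arbitrary illustrative choices.

```python
import random

def independent_cascade(adj, seed, p, rng):
    # One run of the independent-cascade model from a single seed node:
    # each newly activated node gets one chance to activate each inactive
    # neighbour with probability p. Returns the set of activated nodes.
    active, frontier = {seed}, [seed]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def spreading_influence(adj, seed, p=0.3, runs=200):
    # Fraction of nodes activated by seeding `seed`, averaged over runs.
    rng = random.Random(42)
    total = sum(len(independent_cascade(adj, seed, p, rng)) for _ in range(runs))
    return total / (runs * len(adj))

# Toy star graph: the hub (node 0) should out-spread any leaf,
# consistent with degree being positively correlated with influence.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(spreading_influence(adj, 0) > spreading_influence(adj, 4))  # True
```

Repeating this estimate for every node and correlating it with a centrality measure is exactly the kind of experiment the review summarises.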

Proceedings ArticleDOI
01 May 2017
TL;DR: A distributed optimization algorithm based on distributed alternating direction method of multipliers (ADMM) via variable splitting is proposed to achieve the optimal workload allocation solution that maximizes users' QoE under the given power efficiency.
Abstract: This paper studies the workload offloading problem for fog computing networks in which a set of fog nodes can offload part or all the workload originally targeted to the cloud data centers to further improve the quality-of-experience (QoE) of users. We investigate two performance metrics for fog computing networks: users' QoE and fog nodes' power efficiency. We observe a fundamental tradeoff between these two metrics for fog computing networks. We then consider cooperative fog computing networks in which multiple fog nodes can help each other to jointly offload workload from cloud data centers. We propose a novel cooperation strategy referred to as offload forwarding, in which each fog node, instead of always relying on cloud data centers to process its unprocessed workload, can also forward part or all of its unprocessed workload to its neighboring fog nodes to further improve the QoE of its users. A distributed optimization algorithm based on distributed alternating direction method of multipliers (ADMM) via variable splitting is proposed to achieve the optimal workload allocation solution that maximizes users' QoE under the given power efficiency. We consider a fog computing platform that is supported by a wireless infrastructure as a case study to verify the performance of our proposed framework. Numerical results show that our proposed approach significantly improves the performance of fog computing networks.
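A toy version of ADMM-based workload sharing is sketched below. The quadratic per-node cost and the single total-workload constraint are simplifying assumptions; the paper's model (QoE, power efficiency, forwarding between neighbours) is considerably richer. The updates follow the standard ADMM "sharing" form, in which each x-update uses only local data and the averages play the coordination role.

```python
import numpy as np

def admm_share_workload(c, W, rho=1.0, iters=300):
    # ADMM (sharing form) for:
    #     min  sum_i (c_i / 2) * x_i**2    s.t.  sum_i x_i = W
    # i.e. split total workload W across fog nodes whose processing cost
    # grows quadratically with assigned load (c_i models node speed).
    c = np.asarray(c, dtype=float)
    n = len(c)
    x = np.zeros(n)
    u = 0.0              # scaled dual variable (scalar, shared)
    z_bar = W / n        # the z-update collapses to this constant here
    for _ in range(iters):
        # Local x-updates (each node solves a 1-D quadratic problem).
        x = rho * (x - x.mean() + z_bar - u) / (c + rho)
        # Dual update driven by the constraint residual.
        u = u + x.mean() - z_bar
    return x

# Three fog nodes, total workload 7: faster nodes (small c) take more.
x = admm_share_workload([1.0, 2.0, 4.0], W=7.0)
print(np.round(x, 3))  # converges to the KKT solution [4, 2, 1]
```

At the optimum, the marginal cost `c_i * x_i` is equalised across nodes, which is the same stationarity structure a richer QoE/power trade-off would exhibit.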

Journal ArticleDOI
TL;DR: In this article, a feedback-based framework is proposed to steer a system toward any of its natural long-term dynamic behaviors, regardless of the specific functional forms and system parameters.
Abstract: What can we learn about controlling a system solely from its underlying network structure? Here we adapt a recently developed framework for control of networks governed by a broad class of nonlinear dynamics that includes the major dynamic models of biological, technological, and social processes. This feedback-based framework provides realizable node overrides that steer a system toward any of its natural long-term dynamic behaviors, regardless of the specific functional forms and system parameters. We use this framework on several real networks, identify the topological characteristics that underlie the predicted node overrides, and compare its predictions to those of structural controllability in control theory. Finally, we demonstrate this framework's applicability in dynamic models of gene regulatory networks and identify nodes whose override is necessary for control in the general case but not in specific model instances.

Journal ArticleDOI
TL;DR: This paper is concerned with the state estimation problem for a class of nonlinear dynamical networks with time-varying delays subject to the round-robin protocol, and designs an estimator, such that the estimation error is exponentially ultimately bounded with a certain asymptotic upper bound in mean squaresubject to the process noise and exogenous disturbance.
Abstract: This paper is concerned with the state estimation problem for a class of nonlinear dynamical networks with time-varying delays subject to the round-robin protocol. The communication between the state estimator and the nodes of the dynamical networks is implemented through a shared constrained network, in which only one node is allowed to send data at each time instant. The round-robin protocol is utilized to orchestrate the transmission order of nodes. By using a switch-based approach, the dynamics of the estimation error is modeled by a periodic parameter-switching system with time-varying delays. The purpose of the problem addressed is to design an estimator, such that the estimation error is exponentially ultimately bounded with a certain asymptotic upper bound in mean square subject to the process noise and exogenous disturbance. Furthermore, such a bound is subsequently minimized by the designed estimator parameters. A novel Lyapunov-like functional is employed to deal with the dynamics analysis issue of the estimation error. Sufficient conditions are established to guarantee the ultimate boundedness of the estimation error in mean square by applying the stochastic analysis approach. Then, the desired estimator gains are characterized by solving a convex problem. Finally, a numerical example is given to illustrate the effectiveness of the estimator design scheme.
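The communication constraint that drives this design is easy to state in code: at each instant only node `t mod n` transmits, and the estimator holds the last received value for every other node. The sketch below shows only that measurement-access pattern, not the paper's estimator or its boundedness analysis.

```python
import numpy as np

def rr_measurement_update(y_held, y_meas, t, n_nodes):
    # Round-robin protocol: only node (t mod n_nodes) gets network access
    # at instant t; the estimator keeps a zero-order hold on the last
    # received output of every other node.
    i = t % n_nodes
    y_held = y_held.copy()
    y_held[i] = y_meas[i]
    return y_held

# Three nodes with (constant) true outputs [1, 2, 3]: after one full
# round-robin period every held value has been refreshed exactly once.
y_held = np.zeros(3)
for t in range(3):
    y_held = rr_measurement_update(y_held, np.array([1.0, 2.0, 3.0]), t, 3)
print(y_held)  # [1. 2. 3.]
```

The periodicity of `t mod n_nodes` is what lets the error dynamics be modelled as a periodic parameter-switching system, as described above.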

Journal ArticleDOI
TL;DR: A WP-BackCom network is modeled as a random Poisson cluster process in the horizontal plane where PBs are Poisson distributed and active ad hoc pairs of backscatter communication nodes with fixed separation distances form random clusters centered at PBs.
Abstract: Future Internet-of-Things (IoT) will connect billions of small computing devices embedded in the environment and support their device-to-device (D2D) communication. Powering the massive number of embedded devices is a key challenge of designing IoT, since batteries increase the devices’ form factors and battery recharging/replacement is difficult. To tackle this challenge, we propose a novel network architecture that enables D2D communication between passive nodes by integrating wireless power transfer and backscatter communication, which is called a wirelessly powered backscatter communication (WP-BackCom) network. In this network, standalone power beacons (PBs) are deployed for wirelessly powering nodes by beaming unmodulated carrier signals to targeted nodes. Provisioned with a backscatter antenna, a node transmits data to an intended receiver by modulating and reflecting a fraction of a carrier signal. Such transmission by backscatter consumes orders-of-magnitude less power than a traditional radio. Thereby, the dense deployment of low-complexity PBs with high transmission power can power a large-scale IoT. In this paper, a WP-BackCom network is modeled as a random Poisson cluster process in the horizontal plane where PBs are Poisson distributed and active ad hoc pairs of backscatter communication nodes with fixed separation distances form random clusters centered at PBs. The backscatter nodes can harvest energy from and backscatter carrier signals transmitted by PBs. Furthermore, the transmission power of each node depends on the distance from the associated PB. Applying stochastic geometry, the network coverage probability and transmission capacity are derived and optimized as functions of backscatter parameters, including backscatter duty cycle, reflection coefficient, and the PB density. The effects of the parameters on network performance are quantified.
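The Poisson cluster layout and link budget can be prototyped with a short Monte Carlo sketch. The disc-shaped clusters, all parameter values, and the interference-free unbounded path-loss budget below are simplifying assumptions, not the paper's stochastic-geometry derivation.

```python
import numpy as np

def sample_wpbackcom(pb_density, side, disc_r, d_pair, rng):
    # Power beacons (PBs): homogeneous PPP on a side x side square.
    # Each PB anchors one backscatter Tx uniformly inside a disc of
    # radius disc_r around it; the paired Rx sits at the fixed
    # separation d_pair from the Tx (a Poisson cluster process).
    n_pb = rng.poisson(pb_density * side * side)
    pbs = rng.uniform(0, side, size=(n_pb, 2))
    r = disc_r * np.sqrt(rng.uniform(0, 1, n_pb))   # uniform over disc area
    a = rng.uniform(0, 2 * np.pi, n_pb)
    tx = pbs + r[:, None] * np.c_[np.cos(a), np.sin(a)]
    b = rng.uniform(0, 2 * np.pi, n_pb)
    rx = tx + d_pair * np.c_[np.cos(b), np.sin(b)]
    return pbs, tx, rx

def coverage_probability(p_pb, pbs, tx, rx, refl, alpha, p_min):
    # Backscatter link budget: PB->Tx power-law attenuation, reflection
    # coefficient at the Tx, then Tx->Rx attenuation. Coverage = fraction
    # of pairs whose backscattered power clears the receiver threshold.
    d1 = np.linalg.norm(tx - pbs, axis=1)
    d2 = np.linalg.norm(rx - tx, axis=1)
    p_rx = p_pb * refl * d1 ** -alpha * d2 ** -alpha
    return (p_rx >= p_min).mean()

rng = np.random.default_rng(7)
pbs, tx, rx = sample_wpbackcom(0.01, 100.0, disc_r=5.0, d_pair=2.0, rng=rng)
cov = coverage_probability(1.0, pbs, tx, rx, refl=0.3, alpha=3.0, p_min=1e-3)
print(cov)
```

Sweeping `refl`, the duty cycle, or `pb_density` in such a simulation mirrors the parameter optimisation the paper carries out analytically.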

Journal ArticleDOI
TL;DR: A genetic algorithm-based, self-organizing network clustering (GASONeC) method that provides a framework to dynamically optimize wireless sensor node clusters and greatly extends the network life and the improvement up to 43.44 %.
Abstract: The dynamic nature of wireless sensor networks (WSNs) and the numerous possible cluster configurations make searching for an optimal network structure on-the-fly an open challenge. To address this problem, we propose a genetic algorithm-based, self-organizing network clustering (GASONeC) method that provides a framework to dynamically optimize wireless sensor node clusters. In GASONeC, the residual energy, the expected energy expenditure, the distance to the base station, and the number of nodes in the vicinity are employed in the search for an optimal, dynamic network structure. Balancing these factors is the key to organizing nodes into appropriate clusters and designating a surrogate node as cluster head. Compared to state-of-the-art methods, GASONeC greatly extends the network life, with an improvement of up to 43.44 %. Node density greatly affects network longevity: as the distance between nodes increases, network life is usually shortened. In addition, when the base station is placed far from the sensor field, it is preferable to form more clusters to conserve energy. The overall average run time of GASONeC is 0.58 s with a standard deviation of 0.05.
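A minimal GA skeleton for cluster-head selection in this spirit is sketched below. The bit-string encoding (bit i = node i is a cluster head) and the toy fitness combining residual energy with distance to the base station are illustrative, not GASONeC's exact formulation.

```python
import random

def fitness(mask, energy, dist_bs):
    # Toy fitness: reward energetic cluster heads, penalise heads far
    # from the base station; an empty head set is worthless.
    heads = [i for i, b in enumerate(mask) if b]
    if not heads:
        return 0.0
    mean_e = sum(energy[i] for i in heads) / len(heads)
    mean_d = sum(dist_bs[i] for i in heads) / len(heads)
    return mean_e / (1.0 + mean_d)

def ga_generation(pop, energy, dist_bs, rng, p_mut=0.1):
    # One generation: binary tournament selection, one-point crossover,
    # single bit-flip mutation.
    def tourney():
        a, b = rng.sample(pop, 2)
        return a if fitness(a, energy, dist_bs) >= fitness(b, energy, dist_bs) else b
    nxt = []
    while len(nxt) < len(pop):
        p1, p2 = tourney(), tourney()
        cut = rng.randrange(1, len(p1))
        child = p1[:cut] + p2[cut:]
        if rng.random() < p_mut:
            j = rng.randrange(len(child))
            child[j] = 1 - child[j]
        nxt.append(child)
    return nxt

rng = random.Random(1)
n = 8  # sensor nodes
energy = [rng.random() for _ in range(n)]
dist_bs = [rng.uniform(1.0, 10.0) for _ in range(n)]
pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(10)]
for _ in range(30):
    pop = ga_generation(pop, energy, dist_bs, rng)
best = max(pop, key=lambda m: fitness(m, energy, dist_bs))
print(best)  # best cluster-head assignment found
```

In a deployed WSN this search would be re-run (or warm-started) each round so the clustering tracks residual energy as nodes drain, which is the "self-organizing" aspect of the method.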

Journal ArticleDOI
19 Apr 2017-Neuron
TL;DR: It is found that node degree accurately predicted observed deficits in memory consolidation, with silencing of highly connected hubs producing the largest impairments.

Journal ArticleDOI
TL;DR: Simulation results demonstrate that the UCNPD protocol can efficiently slow the rate of node death, prolong the network lifetime, and balance the energy dissipation across all nodes.

Journal ArticleDOI
TL;DR: An extensive experimental study of the reliability of LoRa, one of the most promising long-range low-power wireless technologies to date, focuses on the impact of physical layer settings on the effective data rate and energy efficiency of communications.
Abstract: Recent technological innovations allow compact radios to transmit over long distances with minimal energy consumption and could drastically affect the way Internet of Things (IoT) technologies communicate in the near future. By extending the communication range of links, it is indeed possible to reduce the network diameter to a point that each node can communicate with almost every other node in the network directly. This drastically simplifies communication, removing the need of routing, and significantly reduces the overhead of data collection. Long-range low-power wireless technology, however, is still in its infancy, and it is yet unclear (i) whether it is sufficiently reliable to complement existing short-range and cellular technologies and (ii) which radio settings can sustain a high delivery rate while maximizing energy-efficiency. To shed light on this matter, this paper presents an extensive experimental study of the reliability of LoRa, one of the most promising long-range low-power wireless technologies to date. We focus our evaluation on the impact of physical layer settings on the effective data rate and energy efficiency of communications. Our results show that it is often not worth tuning parameters, thereby reducing the data rate in order to maximize the probability of successful reception, especially on links at the edge of their communication range. Furthermore, we study the impact of environmental factors on the performance of LoRa, and show that higher temperatures significantly decrease the received signal strength and may drastically affect packet reception.
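The physical-layer trade-off the study evaluates follows from the nominal LoRa bit-rate formula, Rb = SF · (BW / 2^SF) · 4/(4 + CR): raising the spreading factor improves sensitivity but cuts the data rate roughly in half per step.

```python
def lora_bitrate(sf, bw_hz, cr_denom):
    # Nominal LoRa bit rate: SF bits per symbol, a symbol rate of
    # BW / 2**SF, and a forward-error-correction coding rate of
    # 4 / (4 + cr_denom), with cr_denom in 1..4 (CR 4/5 .. 4/8).
    return sf * (bw_hz / (2 ** sf)) * 4.0 / (4 + cr_denom)

# SF7..SF12 at 125 kHz, CR 4/5: higher SF -> longer range, lower rate.
for sf in range(7, 13):
    print(sf, round(lora_bitrate(sf, 125_000, 1)))
# SF7 yields about 5469 b/s, SF12 only about 293 b/s - an ~18x spread,
# which is why "maximize reception probability" settings are costly.
```

This is exactly why the paper's finding matters in practice: defaulting to the most robust settings (high SF, low CR) sacrifices an order of magnitude of throughput, often for little reliability gain.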