
Showing papers on "Node (networking) published in 2018"


Journal ArticleDOI
TL;DR: The results of the evaluation show that performance is improved by reducing the induced delay, reducing the response time, increasing throughput, and enabling the detection of real-time attacks in the IoT network with low performance overheads.
Abstract: The recent expansion of the Internet of Things (IoT) and the consequent explosion in the volume of data produced by smart devices have led to the outsourcing of data to designated data centers. However, centralized data centers, such as cloud storage, are not an auspicious way to manage these huge data stores. The traditional network architecture faces many challenges due to the rapid growth in the diversity and number of devices connected to the internet, as it was not designed to provide high availability, real-time data delivery, scalability, security, resilience, and low latency. To address these issues, this paper proposes a novel blockchain-based distributed cloud architecture with software-defined networking (SDN) enabled controller fog nodes at the edge of the network to meet the required design principles. The proposed model is a distributed cloud architecture based on blockchain technology, which provides low-cost, secure, and on-demand access to the most competitive computing infrastructures in an IoT network. By creating a distributed cloud infrastructure, the proposed model enables cost-effective high-performance computing. Furthermore, to bring computing resources to the edge of the IoT network and allow low-latency access to large amounts of data in a secure manner, we provide a secure distributed fog node architecture that uses SDN and blockchain techniques. Fog nodes are distributed fog computing entities that allow the deployment of fog services, and are formed by multiple computing resources at the edge of the IoT network. We evaluated the performance of our proposed architecture and compared it with existing models using various performance measures. The results of our evaluation show that performance is improved by reducing the induced delay, reducing the response time, increasing throughput, and enabling the detection of real-time attacks in the IoT network with low performance overheads.

549 citations


Proceedings Article
03 Jul 2018
TL;DR: In this paper, the authors explore an architecture called jumping knowledge (JK) networks that flexibly leverages, for each node, different neighborhood ranges to enable better structure-aware representation.
Abstract: Recent deep learning approaches for representation learning on graphs follow a neighborhood aggregation procedure. We analyze some important properties of these models, and propose a strategy to overcome those. In particular, the range of "neighboring" nodes that a node's representation draws from strongly depends on the graph structure, analogous to the spread of a random walk. To adapt to local neighborhood properties and tasks, we explore an architecture -- jumping knowledge (JK) networks -- that flexibly leverages, for each node, different neighborhood ranges to enable better structure-aware representation. In a number of experiments on social, bioinformatics and citation networks, we demonstrate that our model achieves state-of-the-art performance. Furthermore, combining the JK framework with models like Graph Convolutional Networks, GraphSAGE and Graph Attention Networks consistently improves those models' performance.
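
To make the layer-aggregation idea concrete, here is a minimal numpy sketch of JK-style concatenation aggregation. It is a simplified stand-in for the paper's learned models; the propagation rule and activation are common GCN-style choices, not the authors' exact architecture:

```python
import numpy as np

def jumping_knowledge_concat(adj, features, num_layers=3):
    """Sketch of JK-style aggregation: keep each layer's representation
    and combine them per node at the end (here: concatenation)."""
    # Row-normalized adjacency with self-loops, a common GCN-style propagator.
    a_hat = adj + np.eye(adj.shape[0])
    a_hat = a_hat / a_hat.sum(axis=1, keepdims=True)

    layer_reps, h = [], features
    for _ in range(num_layers):
        h = np.tanh(a_hat @ h)       # one round of neighborhood aggregation
        layer_reps.append(h)         # "jumping" connection to the output

    # Each node draws on all neighborhood ranges, not just the last layer's.
    return np.concatenate(layer_reps, axis=1)

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.random.default_rng(0).normal(size=(3, 4))
print(jumping_knowledge_concat(adj, x).shape)  # (3, 12)
```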

423 citations


Journal ArticleDOI
TL;DR: In this article, the authors utilized queuing theory to conduct a thorough study of the energy consumption, execution delay, and payment cost of offloading processes in a fog computing system, where three queuing models were applied, respectively, to the MD, fog, and cloud centers, and the data rate and power consumption of the wireless link were explicitly considered.
Abstract: The fog computing system is an emergent architecture for providing computing, storage, control, and networking capabilities for realizing the Internet of Things. In a fog computing system, the mobile devices (MDs) can offload their data or computationally expensive tasks to the fog node within their proximity, instead of the distant cloud. Although offloading can reduce energy consumption at the MDs, it may also incur a larger execution delay, including transmission time between the MDs and the fog/cloud servers, and waiting and execution time at the servers. Therefore, how to balance the energy consumption and delay performance is of research importance. Moreover, based on the energy consumption and delay, how to design a cost model for the MDs to enjoy the fog and cloud services is also important. In this paper, we utilize queuing theory to conduct a thorough study of the energy consumption, execution delay, and payment cost of offloading processes in a fog computing system. Specifically, three queuing models are applied, respectively, to the MD, fog, and cloud centers, and the data rate and power consumption of the wireless link are explicitly considered. Based on the theoretical analysis, a multiobjective optimization problem is formulated with a joint objective to minimize the energy consumption, execution delay, and payment cost by finding the optimal offloading probability and transmit power for each MD. Extensive simulation studies are conducted to demonstrate the effectiveness of the proposed scheme, and superior performance over several existing schemes is observed.
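
A toy numeric illustration of the tradeoff the abstract describes, assuming M/M/1 queues and hypothetical parameter values (none are from the paper): offloading a larger fraction of tasks lowers local energy and queueing delay but adds transmission delay and fog-server load.

```python
# Toy illustration of the energy/delay tradeoff under M/M/1 assumptions.
# All parameter values are hypothetical, not from the paper.
def mm1_delay(arrival_rate, service_rate):
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

def offload_cost(p, lam=5.0, mu_local=8.0, mu_fog=20.0,
                 tx_delay=0.05, e_local=1.0, e_tx=0.3, w_energy=0.5):
    """Weighted energy+delay cost when a fraction p of tasks is offloaded."""
    local_delay = mm1_delay((1 - p) * lam, mu_local)
    fog_delay = tx_delay + mm1_delay(p * lam, mu_fog)
    delay = (1 - p) * local_delay + p * fog_delay
    energy = (1 - p) * lam * e_local + p * lam * e_tx
    return w_energy * energy + (1 - w_energy) * delay

best = min((offload_cost(p / 100), p / 100) for p in range(100))
print("best offloading probability:", best[1])
```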

398 citations


Journal ArticleDOI
TL;DR: This paper proposes a generic Attributed Social Network Embedding framework (ASNE), which learns representations for social actors by preserving both the structural proximity and attribute proximity, and shows significant gains on the tasks of link prediction and node classification.
Abstract: Embedding network data into a low-dimensional vector space has shown promising performance for many real-world applications, such as node classification and entity retrieval. However, most existing methods focused only on leveraging network structure. For social networks, besides the network structure, there also exists rich information about social actors, such as user profiles of friendship networks and textual content of citation networks. This rich attribute information of social actors reveals the homophily effect, exerting a huge impact on the formation of social networks. In this paper, we explore the rich evidence source of attributes in social networks to improve network embedding. We propose a generic Attributed Social Network Embedding framework (ASNE), which learns representations for social actors (i.e., nodes) by preserving both the structural proximity and attribute proximity. While the structural proximity captures the global network structure, the attribute proximity accounts for the homophily effect. To justify our proposal, we conduct extensive experiments on four real-world social networks. Compared to the state-of-the-art network embedding approaches, ASNE can learn more informative representations, achieving substantial gains on the tasks of link prediction and node classification. Specifically, ASNE significantly outperforms node2vec with an 8.2 percent relative improvement on the link prediction task, and a 12.7 percent gain on the node classification task.
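
A minimal sketch of the core idea, assuming a structural embedding is already available (e.g., from a method like node2vec) and using a random projection as a stand-in for ASNE's learned attribute weights:

```python
import numpy as np

def asne_style_input(struct_emb, attributes, attr_proj_dim=8, seed=0):
    """Sketch of ASNE's core idea: each node's representation combines a
    free structural embedding with a projection of its attributes.
    The random projection stands in for the learned weight matrix."""
    rng = np.random.default_rng(seed)
    w_attr = rng.normal(size=(attributes.shape[1], attr_proj_dim))
    attr_emb = attributes @ w_attr           # attribute-proximity channel
    return np.concatenate([struct_emb, attr_emb], axis=1)

struct = np.random.default_rng(1).normal(size=(5, 16))   # e.g. from node2vec
attrs = np.random.default_rng(2).integers(0, 2, size=(5, 30)).astype(float)
print(asne_style_input(struct, attrs).shape)  # (5, 24)
```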

380 citations


Proceedings Article
15 Feb 2018
TL;DR: Graph2Gauss is proposed, an approach that can efficiently learn versatile node embeddings on large-scale (attributed) graphs that show strong performance on tasks such as link prediction and node classification; the benefits of modeling uncertainty are also demonstrated.
Abstract: Methods that learn representations of nodes in a graph play a critical role in network analysis since they enable many downstream learning tasks. We propose Graph2Gauss - an approach that can efficiently learn versatile node embeddings on large scale (attributed) graphs that show strong performance on tasks such as link prediction and node classification. Unlike most approaches that represent nodes as point vectors in a low-dimensional continuous space, we embed each node as a Gaussian distribution, allowing us to capture uncertainty about the representation. Furthermore, we propose an unsupervised method that handles inductive learning scenarios and is applicable to different types of graphs: plain/attributed, directed/undirected. By leveraging both the network structure and the associated node attributes, we are able to generalize to unseen nodes without additional training. To learn the embeddings we adopt a personalized ranking formulation w.r.t. the node distances that exploits the natural ordering of the nodes imposed by the network structure. Experiments on real world networks demonstrate the high performance of our approach, outperforming state-of-the-art network embedding methods on several different tasks. Additionally, we demonstrate the benefits of modeling uncertainty - by analyzing it we can estimate neighborhood diversity and detect the intrinsic latent dimensionality of a graph.
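
The uncertainty-aware part can be illustrated directly: each node is a Gaussian with a mean and a (diagonal) variance, and an asymmetric KL divergence serves as the dissimilarity between nodes. A small numpy sketch with toy values:

```python
import numpy as np

def kl_diag_gaussians(mu1, var1, mu2, var2):
    """KL( N(mu1, diag var1) || N(mu2, diag var2) ): an asymmetric
    dissimilarity between Gaussian node embeddings."""
    d = mu1.shape[0]
    return 0.5 * (np.sum(var1 / var2)
                  + np.sum((mu2 - mu1) ** 2 / var2)
                  - d
                  + np.sum(np.log(var2)) - np.sum(np.log(var1)))

# Each node is an embedding with a mean and an elementwise variance,
# so uncertainty about the representation is explicit.
mu_a, var_a = np.zeros(4), np.ones(4)
mu_b, var_b = np.ones(4), 2 * np.ones(4)
print(kl_diag_gaussians(mu_a, var_a, mu_b, var_b))
```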

364 citations


Journal ArticleDOI
TL;DR: An Enhanced Power Efficient Gathering in Sensor Information Systems (EPEGASIS) algorithm is proposed to alleviate the hot spots problem from four aspects: the optimal communication distance is determined, a threshold value is set to protect dying nodes, mobile sink technology is used to balance energy consumption among nodes, and nodes adjust their communication range according to their distance to the sink; extensive experiments confirm the improvements.
Abstract: Energy efficiency has been a hot research topic for many years, and many routing algorithms have been proposed to improve energy efficiency and to prolong lifetime for wireless sensor networks (WSNs). Since nodes close to the sink usually need to consume more energy to forward data of their neighbours to the sink, they will exhaust energy more quickly. These nodes are called hot spot nodes, and we call this phenomenon the hot spot problem. In this paper, an Enhanced Power Efficient Gathering in Sensor Information Systems (EPEGASIS) algorithm is proposed to alleviate the hot spots problem from four aspects. Firstly, the optimal communication distance is determined to reduce the energy consumption during transmission. Then a threshold value is set to protect the dying nodes, and mobile sink technology is used to balance the energy consumption among nodes. Next, each node can adjust its communication range according to its distance to the sink node. Finally, extensive experiments have been performed to show that our proposed EPEGASIS performs better in terms of lifetime, energy consumption, and network latency.
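
The "optimal communication distance" aspect can be made concrete under the standard first-order radio model common in the WSN literature (the constants below are typical textbook values, not taken from this paper): minimizing the relaying energy per meter yields a characteristic hop distance.

```python
import math

# First-order radio model (standard WSN assumption; parameter values are
# typical of the literature, not taken from the EPEGASIS paper itself).
E_ELEC = 50e-9      # J/bit spent by TX/RX electronics
EPS_AMP = 100e-12   # J/bit/m^2 amplifier energy, free-space exponent 2

def energy_per_bit_per_meter(d):
    """Relaying cost per meter at hop length d: one transmit + one receive."""
    return (2 * E_ELEC + EPS_AMP * d ** 2) / d

# Setting the derivative to zero gives the optimal hop distance.
d_opt = math.sqrt(2 * E_ELEC / EPS_AMP)
print(f"optimal hop distance ~ {d_opt:.1f} m")
print(f"cost at d_opt: {energy_per_bit_per_meter(d_opt):.3e} J/bit/m")
```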

250 citations


Proceedings ArticleDOI
19 Jul 2018
TL;DR: This paper proposes a novel approach, NetWalk, for anomaly detection in dynamic networks by learning network representations which can be updated dynamically as the network evolves, and employs a clustering-based technique to incrementally and dynamically detect network anomalies.
Abstract: Massive and dynamic networks arise in many practical applications such as social media, security and public health. Given an evolutionary network, it is crucial to detect structural anomalies, such as vertices and edges whose "behaviors" deviate from the underlying majority of the network, in a real-time fashion. Recently, network embedding has proven a powerful tool in learning the low-dimensional representations of vertices in networks that can capture and preserve the network structure. However, most existing network embedding approaches are designed for static networks, and thus may not be perfectly suited for a dynamic environment in which the network representation has to be constantly updated. In this paper, we propose a novel approach, NetWalk, for anomaly detection in dynamic networks by learning network representations which can be updated dynamically as the network evolves. We first encode the vertices of the dynamic network to vector representations by clique embedding, which jointly minimizes the pairwise distance of vertex representations of each walk derived from the dynamic networks, with the deep autoencoder reconstruction error serving as a global regularization. The vector representations can be computed with constant space requirements using reservoir sampling. On the basis of the learned low-dimensional vertex representations, a clustering-based technique is employed to incrementally and dynamically detect network anomalies. Compared with existing approaches, NetWalk has several advantages: 1) the network embedding can be updated dynamically, 2) streaming network nodes and edges can be encoded efficiently with constant memory space usage, 3) it is flexible enough to be applied to different types of networks, and 4) network anomalies can be detected in real-time. Extensive experiments on four real datasets demonstrate the effectiveness of NetWalk.
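
The constant-memory claim rests on reservoir sampling; here is the classic Algorithm R, which streaming approaches of this kind can use to hold a uniform sample of edges in O(k) space (a generic sketch, not the authors' code):

```python
import random

def reservoir_sample(stream, k, seed=42):
    """Classic reservoir sampling (Algorithm R): a uniform sample of size k
    from a stream of unknown length, using O(k) memory."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)          # inclusive bounds
            if j < k:
                reservoir[j] = item        # replace with decreasing probability
    return reservoir

edges = ((u, u + 1) for u in range(10_000))  # a toy edge stream
print(reservoir_sample(edges, k=5))
```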

245 citations


Journal ArticleDOI
30 Jun 2018-Sensors
TL;DR: This paper describes an energy consumption model based on LoRa and LoRaWAN, which allows estimating the consumed power of each sensor node element and can be used to compare different LoRaWAN modes to find the best sensor node design to achieve its energy autonomy.
Abstract: Energy efficiency is the key requirement to maximize sensor node lifetime. Sensor nodes are typically powered by a battery source that has a finite lifetime. Most Internet of Things (IoT) applications require sensor nodes to operate reliably for an extended period of time. To design an autonomous sensor node, it is important to model its energy consumption for different tasks. Each task draws a certain amount of power for a period of time. To optimize the sensor node's energy consumption while keeping a long communication range, Low Power Wide Area Network (LPWAN) technology is considered. This paper describes an energy consumption model based on LoRa and LoRaWAN, which allows estimating the consumed power of each sensor node element. The definition of the different node units is first introduced. Then, a full energy model for communicating sensors is proposed. This model can be used to compare different LoRaWAN modes to find the best sensor node design to achieve energy autonomy.
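
The bookkeeping behind such a model is simple in outline: each task contributes P·t joules, and the per-cycle energy determines battery lifetime. A hedged sketch with placeholder currents and durations (not the paper's measured values):

```python
# Per-cycle energy bookkeeping for a duty-cycled LoRa node: E = sum(P_i * t_i).
# All currents and durations below are placeholders, not the paper's numbers.
V = 3.3                      # supply voltage (V)
BATTERY_MAH = 2400           # battery capacity (mAh)

cycle = [                    # (phase, current in A, duration in s)
    ("wake+measure", 0.010, 0.5),
    ("lora_tx",      0.120, 0.1),   # strongly depends on SF / TX power mode
    ("rx_windows",   0.011, 0.2),
    ("sleep",        2e-6,  59.2),
]

energy_per_cycle = sum(V * i * t for _, i, t in cycle)   # joules
period = sum(t for _, _, t in cycle)                     # seconds

battery_joules = BATTERY_MAH / 1000 * 3600 * V
lifetime_days = battery_joules / energy_per_cycle * period / 86400
print(f"{energy_per_cycle * 1000:.1f} mJ per cycle, ~{lifetime_days:.0f} days")
```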

230 citations


Proceedings ArticleDOI
Yuan Zuo, Guannan Liu, Hao Lin, Jia Guo, Xiaoqian Hu, Junjie Wu
19 Jul 2018
TL;DR: Experiments on three large-scale real-life networks demonstrate that the embeddings learned from the proposed HTNE model achieve better performance than state-of-the-art methods in various tasks including node classification, link prediction, and embedding visualization.
Abstract: Given the rich real-life applications of network mining as well as the surge of representation learning in recent years, network embedding has become the focal point of increasing research interest in both academic and industrial domains. Nevertheless, the complete temporal formation process of networks, characterized by sequential interactive events between nodes, has seldom been modeled in existing studies, which calls for further research on the so-called temporal network embedding problem. In light of this, in this paper, we introduce the concept of the neighborhood formation sequence to describe the evolution of a node, where temporal excitation effects exist between neighbors in the sequence, and thus we propose a Hawkes process based Temporal Network Embedding (HTNE) method. HTNE integrates the Hawkes process into network embedding so as to capture the influence of historical neighbors on current neighbors. In particular, the interactions of low-dimensional vectors are fed into the Hawkes process as the base rate and temporal influence, respectively. In addition, an attention mechanism is integrated into HTNE to better determine the influence of historical neighbors on the current neighbors of a node. Experiments on three large-scale real-life networks demonstrate that the embeddings learned from the proposed HTNE model achieve better performance than state-of-the-art methods in various tasks including node classification, link prediction, and embedding visualization. In particular, temporal recommendation based on the arrival rate inferred from node embeddings shows the excellent predictive power of the proposed model.
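
The excitation mechanism is the standard Hawkes conditional intensity, where past events raise the arrival rate of new ones with exponentially decaying influence. A minimal sketch with hypothetical parameters:

```python
import math

def hawkes_intensity(t, history, base_rate=0.2, alpha=0.8, delta=1.0):
    """Conditional intensity lambda(t) = mu + sum_i alpha * exp(-delta (t - t_i)).
    Past events (historical neighbors) excite the arrival of new ones, with
    influence decaying over time: the core mechanism HTNE builds on."""
    return base_rate + sum(alpha * math.exp(-delta * (t - ti))
                           for ti in history if ti < t)

events = [0.5, 1.1, 1.2, 3.0]   # times at which a node acquired neighbors
for t in (1.0, 2.0, 4.0):
    print(t, round(hawkes_intensity(t, events), 3))
```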

221 citations


Journal ArticleDOI
21 Dec 2018-Sensors
TL;DR: A hybrid wearable sensor network system towards the Internet of Things (IoT) connected safety and health monitoring applications aimed at improving safety in the outdoor workplace is presented.
Abstract: This paper presents a hybrid wearable sensor network system towards Internet of Things (IoT) connected safety and health monitoring applications. The system is aimed at improving safety in the outdoor workplace. The proposed system consists of a wearable body area network (WBAN) to collect user data and a low-power wide-area network (LPWAN) to connect the WBAN with the Internet. The wearable sensors in the WBAN are used to measure the environmental conditions around the subject using a Safe Node and to monitor the vital signs of the subject using a Health Node. A standalone local server (gateway), which can process the raw sensor signals, display the environmental and physiological data, and trigger an alert if any emergency circumstance is detected, is designed within the proposed network. To connect the gateway with the Internet, an IoT cloud server is implemented to provide more functionalities, such as web monitoring and mobile applications.

193 citations


Posted Content
TL;DR: Chameleon combines the best aspects of generic SFE protocols with the ones that are based upon additive secret sharing, and improves the efficiency of mining and classification of encrypted data for algorithms based upon heavy matrix multiplications.
Abstract: We present Chameleon, a novel hybrid (mixed-protocol) framework for secure function evaluation (SFE) which enables two parties to jointly compute a function without disclosing their private inputs. Chameleon combines the best aspects of generic SFE protocols with the ones that are based upon additive secret sharing. In particular, the framework performs linear operations in the ring $\mathbb{Z}_{2^l}$ using additively secret shared values and nonlinear operations using Yao's Garbled Circuits or the Goldreich-Micali-Wigderson protocol. Chameleon departs from the common assumption of additive or linear secret sharing models where three or more parties need to communicate in the online phase: the framework allows two parties with private inputs to communicate in the online phase under the assumption of a third node generating correlated randomness in an offline phase. Almost all of the heavy cryptographic operations are precomputed in an offline phase which substantially reduces the communication overhead. Chameleon is both scalable and significantly more efficient than the ABY framework (NDSS'15) it is based on. Our framework supports signed fixed-point numbers. In particular, Chameleon's vector dot product of signed fixed-point numbers improves the efficiency of mining and classification of encrypted data for algorithms based upon heavy matrix multiplications. Our evaluation of Chameleon on a 5 layer convolutional deep neural network shows 133x and 4.2x faster executions than Microsoft CryptoNets (ICML'16) and MiniONN (CCS'17), respectively.
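
The "third node generating correlated randomness in an offline phase" pattern is classically realized with Beaver multiplication triples; the following self-contained sketch shows the two-party online phase over Z_{2^l} (an illustration of the general technique, not Chameleon's implementation):

```python
import random

L = 2 ** 32  # the ring Z_{2^l} with l = 32
rng = random.Random(7)

def share(v):
    """Additively secret-share v between two parties in Z_{2^l}."""
    s0 = rng.randrange(L)
    return s0, (v - s0) % L

# Offline: a third node deals a correlated-randomness (Beaver) triple c = a*b.
a, b = rng.randrange(L), rng.randrange(L)
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(a * b % L)

# Online: only the two input-holding parties communicate.
x, y = 1234, 5678                      # private inputs (one per party)
x0, x1 = share(x); y0, y1 = share(y)
d = (x0 - a0 + x1 - a1) % L            # parties jointly open x - a
e = (y0 - b0 + y1 - b1) % L            # parties jointly open y - b
z0 = (d * e + d * b0 + e * a0 + c0) % L
z1 = (d * b1 + e * a1 + c1) % L
assert (z0 + z1) % L == (x * y) % L    # shares of the product, inputs hidden
print("secure product ok")
```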

Journal ArticleDOI
TL;DR: Simulation results prove that the bat algorithm with weighted harmonic centroid (WHCBA) strategy is superior to other algorithms and can save more energy compared to the standard LEACH protocol.

Proceedings ArticleDOI
Ziwei Zhang, Peng Cui, Xiao Wang, Jian Pei, Xuanrong Yao, Wenwu Zhu
19 Jul 2018
TL;DR: The eigen-decomposition reweighting theorem is theoretically proved, revealing the intrinsic relationship between proximities of different orders, and AROPE (arbitrary-order proximity preserved embedding), a novel network embedding method based on the SVD framework, is proposed.
Abstract: Network embedding has received increasing research attention in recent years. The existing methods show that the high-order proximity plays a key role in capturing the underlying structure of the network. However, two fundamental problems in preserving the high-order proximity remain unsolved. First, all the existing methods can only preserve fixed-order proximities, despite that proximities of different orders are often desired for distinct networks and target applications. Second, given a certain order proximity, the existing methods cannot guarantee accuracy and efficiency simultaneously. To address these challenges, we propose AROPE (arbitrary-order proximity preserved embedding), a novel network embedding method based on SVD framework. We theoretically prove the eigen-decomposition reweighting theorem, revealing the intrinsic relationship between proximities of different orders. With this theorem, we propose a scalable eigen-decomposition solution to derive the embedding vectors and shift them between proximities of arbitrary orders. Theoretical analysis is provided to guarantee that i) our method has a low marginal cost in shifting the embedding vectors across different orders, ii) given a certain order, our method can get the global optimal solutions, and iii) the overall time complexity of our method is linear with respect to network size. Extensive experimental results on several large-scale networks demonstrate that our proposed method greatly and consistently outperforms the baselines in various tasks including network reconstruction, link prediction and node classification.
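
The reweighting theorem is easy to state for a symmetric adjacency matrix A = UΛUᵀ: any polynomial proximity S = Σ_q w_q A^q shares the eigenvectors U, with eigenvalues F(λ) = Σ_q w_q λ^q, so shifting between orders only reweights eigenvalues. A small full-eigendecomposition sketch (the paper uses a scalable partial SVD instead):

```python
import numpy as np

def arope_style_embedding(adj, weights, dim):
    """Eigen-decomposition reweighting sketch: for symmetric A = U diag(l) U^T,
    the proximity S = sum_q w_q A^q has the same eigenvectors, with
    eigenvalues F(l) = sum_q w_q l^q. Changing proximity order therefore
    only reweights eigenvalues; no new factorization is needed."""
    lam, u = np.linalg.eigh(adj)
    f = sum(w * lam ** (q + 1) for q, w in enumerate(weights))
    top = np.argsort(-np.abs(f))[:dim]            # keep largest magnitudes
    return u[:, top] * np.sqrt(np.abs(f[top]))

rng = np.random.default_rng(0)
a = (rng.random((20, 20)) < 0.2).astype(float)
a = np.triu(a, 1); a = a + a.T                    # symmetric, no self-loops
emb_order2 = arope_style_embedding(a, weights=[1.0, 0.5], dim=4)
print(emb_order2.shape)  # (20, 4)
```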

Proceedings ArticleDOI
01 Jul 2018
TL;DR: In the proposed method, the architectures of CNNs are represented by directed acyclic graphs, in which each node represents a highly functional module, such as a convolutional block or a tensor operation, and each edge represents the connectivity of layers.
Abstract: The convolutional neural network (CNN), which is one of the deep learning models, has seen much success in a variety of computer vision tasks. However, designing CNN architectures still requires expert knowledge and a lot of trial and error. In this paper, we attempt to automatically construct CNN architectures for an image classification task based on Cartesian genetic programming (CGP). In our method, we adopt highly functional modules, such as convolutional blocks and tensor concatenation, as the node functions in CGP. The CNN structure and connectivity represented by the CGP encoding method are optimized to maximize the validation accuracy. To evaluate the proposed method, we constructed a CNN architecture for the image classification task with the CIFAR-10 dataset. The experimental result shows that the proposed method can automatically find a competitive CNN architecture compared with state-of-the-art models.

Proceedings ArticleDOI
19 Jul 2018
TL;DR: This work proposes a new approach named Deep Recursive Network Embedding (DRNE) to learn network embeddings with regular equivalence, and proposes a layer normalized LSTM to represent each node by aggregating the representations of their neighborhoods in a recursive way.
Abstract: Network embedding aims to preserve vertex similarity in an embedding space. Existing approaches usually define the similarity by direct links or common neighborhoods between nodes, i.e. structural equivalence. However, vertices which reside in different parts of the network may have similar roles or positions, i.e. regular equivalence, which is largely ignored by the literature of network embedding. Regular equivalence is defined in a recursive way: two regularly equivalent vertices have network neighbors which are also regularly equivalent. Accordingly, we propose a new approach named Deep Recursive Network Embedding (DRNE) to learn network embeddings with regular equivalence. More specifically, we propose a layer normalized LSTM to represent each node by aggregating the representations of its neighborhood in a recursive way. We theoretically prove that some popular and typical centrality measures which are consistent with regular equivalence are optimal solutions of our model. This is also demonstrated by empirical results showing that the learned node representations can well predict the indexes of regular equivalence and related centrality scores. Furthermore, the learned node representations can be directly used for end applications like structural role classification in networks, and the experimental results show that our method can consistently outperform centrality-based methods and other state-of-the-art network embedding methods.

Journal ArticleDOI
TL;DR: A blockchain-based data sharing system is proposed to tackle the issue of privacy of patients, which employs immutability and autonomy properties of the blockchain to sufficiently resolve challenges associated with access control and handle sensitive data.

Journal ArticleDOI
TL;DR: Simulation results of IEEE 30-bus and IEEE 57-bus test cases show that key nodes with high electrical centrality can be effectively identified, and that the resultant cascading failures eventually lead to a severe decrease in net-ability, verifying the correctness and effectiveness of the analysis.
Abstract: The analysis of blackouts, which can inevitably lead to catastrophic damage to power grids, helps to explore the nature of complex power grids but becomes difficult using conventional methods. This brief studies the vulnerability analysis and recognition of key nodes in power grids from a complex network perspective. Based on the ac power flow model and the network topology weighted with admittance, the cascading failure model is established first. The node electrical centrality is then introduced, using complex network centrality theory, to identify the key nodes in power grids. To effectively analyze the behavior and verify the correctness of node electrical centrality, the net-ability and vulnerability index are introduced to describe the transfer ability and performance under normal operation and to assess the vulnerability of the power system under cascading failures, respectively. Simulation results of IEEE 30-bus and IEEE 57-bus test cases show that the key nodes can be effectively identified by their high electrical centrality, and that the resultant cascading failures eventually lead to a severe decrease in net-ability, verifying the correctness and effectiveness of the analysis.

Journal ArticleDOI
Rongfei Fan, Jiannan Cui, Song Jin, Kai Yang, Jianping An
TL;DR: The problems of UAV node placement and communication resource allocation are investigated jointly for a UAV relaying system for the first time and the global optimal solution is achieved.
Abstract: Utilizing an unmanned aerial vehicle (UAV) as a relay is an effective technical solution for wireless communication between ground terminals that are far apart or obstructed. In this letter, the problems of UAV node placement and communication resource allocation are investigated jointly for a UAV relaying system for the first time. Multiple communication pairs on the ground, with one rotary-wing UAV serving as relay, are considered. Transmission power, bandwidth, transmission rate, and the UAV's position are optimized jointly to maximize the system throughput. An optimization problem is formulated, which is non-convex. The global optimal solution is achieved by transforming the formulated problem into a monotonic optimization problem.

Proceedings ArticleDOI
17 Oct 2018
TL;DR: In this paper, an unsupervised representation learning-based network alignment method is proposed to match nodes across different graphs; the underlying embedding formulation generalizes to multi-network problems, and the method scales to networks with millions of nodes each.
Abstract: Problems involving multiple networks are prevalent in many scientific and other domains. In particular, network alignment, or the task of identifying corresponding nodes in different networks, has applications across the social and natural sciences. Motivated by recent advancements in node representation learning for single-graph tasks, we propose REGAL (REpresentation learning-based Graph ALignment), a framework that leverages the power of automatically-learned node representations to match nodes across different graphs. Within REGAL we devise xNetMF, an elegant and principled node embedding formulation that uniquely generalizes to multi-network problems. Our results demonstrate the utility and promise of unsupervised representation learning-based network alignment in terms of both speed and accuracy. REGAL runs up to 30x faster in the representation learning stage than comparable methods, outperforms existing network alignment methods by 20 to 30% accuracy on average, and scales to networks with millions of nodes each.
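
Once both graphs' nodes are embedded in a shared space, the alignment step reduces to nearest-neighbor search. A sketch with scipy's k-d tree, using synthetic embeddings as a stand-in for xNetMF output:

```python
import numpy as np
from scipy.spatial import cKDTree

def align_by_embedding(emb_g1, emb_g2):
    """Greedy alignment: match each node of G1 to its nearest neighbor in
    G2's embedding space. The embeddings are placeholders for what xNetMF
    would produce in a shared space."""
    tree = cKDTree(emb_g2)
    dist, match = tree.query(emb_g1, k=1)
    return match, dist

rng = np.random.default_rng(0)
emb1 = rng.normal(size=(100, 16))
# G2: the same nodes, permuted and slightly perturbed.
emb2 = emb1[rng.permutation(100)] + 0.01 * rng.normal(size=(100, 16))
match, _ = align_by_embedding(emb1, emb2)
print("nodes matched:", len(match))
```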

Journal ArticleDOI
TL;DR: This survey reviews, classifies, and discusses several recent advances and results obtained for each variant, including theoretical complexity, exact solving algorithms, approximation schemes, and heuristic approaches, and proves new complexity results and derives some solving algorithms through relationships established between the different variants.

Posted Content
TL;DR: In this paper, the authors considered both the downlink and uplink UAV communications with a ground node, and formulated new problems to maximize the average secrecy rates by jointly optimizing the UAV's trajectory and the transmit power of the legitimate transmitter over a given flight period.
Abstract: Unmanned aerial vehicle (UAV) communication is anticipated to be widely applied in the forthcoming fifth-generation (5G) wireless networks, due to its many advantages such as low cost, high mobility, and on-demand deployment. However, the broadcast and line-of-sight (LoS) nature of air-to-ground wireless channels gives rise to a new challenge on how to realize secure UAV communications with the destined nodes on the ground. This paper aims to tackle this challenge by applying the physical layer security technique. We consider both the downlink and uplink UAV communications with a ground node, namely UAV-to-ground (U2G) and ground-to-UAV (G2U) communications, respectively, subject to a potential eavesdropper on the ground. In contrast to the existing literature on wireless physical layer security only with ground nodes at fixed or quasi-static locations, we exploit the high mobility of the UAV to proactively establish favorable and degraded channels for the legitimate and eavesdropping links, respectively, via its trajectory design. We formulate new problems to maximize the average secrecy rates of the U2G and G2U transmissions, respectively, by jointly optimizing the UAV's trajectory and the transmit power of the legitimate transmitter over a given flight period of the UAV. Although the formulated problems are non-convex, we propose iterative algorithms to solve them efficiently by applying the block coordinate descent and successive convex optimization methods. Specifically, the transmit power and UAV trajectory are each optimized with the other being fixed in an alternating manner, until the algorithms converge. Simulation results show that the proposed algorithms can improve the secrecy rates for both U2G and G2U communications, as compared to other benchmark schemes without power control and/or trajectory optimization.
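
The alternating structure of the proposed algorithms can be sketched generically: optimize one block of variables with the other fixed, then swap, until convergence. The toy objective and numeric-gradient updates below are illustrative only, not the paper's actual subproblems:

```python
def block_coordinate_descent(f, x0, y0, steps=200, lr=0.1, tol=1e-8):
    """Alternating-block skeleton: take a gradient step in block x with y
    fixed, then in y with x fixed, until the objective stops improving.
    Stands in for alternately optimizing transmit power and UAV trajectory."""
    x, y = float(x0), float(y0)
    prev = cur = f(x, y)
    for _ in range(steps):
        x -= lr * (f(x + 1e-5, y) - f(x - 1e-5, y)) / 2e-5  # block 1 update
        y -= lr * (f(x, y + 1e-5) - f(x, y - 1e-5)) / 2e-5  # block 2 update
        cur = f(x, y)
        if abs(prev - cur) < tol:
            break
        prev = cur
    return x, y, cur

# Toy coupled quadratic: the cross term makes the blocks interact.
obj = lambda x, y: (x - 1) ** 2 + (y + 2) ** 2 + 0.5 * x * y
print(block_coordinate_descent(obj, 5.0, 5.0))
```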

Journal ArticleDOI
TL;DR: Results show that the proposed DTRPP mechanism protects the data privacy effectively and has better performance on the average delay, the delivery rate and the loading rate when compared to traditional mechanisms.
Abstract: Malicious network nodes often cause problems for network and data privacy by distributing forged public keys. To address this issue, this paper proposes a dynamic trust relationships aware data privacy protection (DTRPP) mechanism for mobile crowd-sensing. In this mechanism, which combines key distribution with trust management, the trust value of a public key is evaluated according to both the number of supporters and the trust degree of the public key. The trust value is estimated from the accuracy of the public keys provided by encountering nodes. DTRPP achieves dynamic management of nodes and estimates the trust degree of the public key. In addition, by classifying traffic data into different types and selecting a proper relay node to forward the data according to data type, network resources are used more effectively by exploiting the trust degree and centrality of the relays. Extensive evaluations show that the proposed mechanism protects data privacy effectively and performs better in terms of average delay, delivery rate, and loading rate when compared to traditional mechanisms.

Journal ArticleDOI
TL;DR: An IoV-aided local traffic information collection architecture, a sink node selection scheme for the information influx, and an optimal traffic information transmission model are proposed, which show the efficiency and feasibility of the proposed models.
Abstract: In view of the emergence and rapid development of the Internet of Vehicles (IoV) and cloud computing, intelligent transport systems are beneficial in terms of enhancing the quality and interactivity of urban transportation services, reducing costs and resource wastage, and improving the traffic management capability. Efficient traffic management relies on the accurate and prompt acquisition as well as diffusion of traffic information. To achieve this, research is mostly focused on optimizing the mobility models and communication performance. However, considering the escalating scale of IoV networks, the interconnection of heterogeneous smart vehicles plays a critical role in enhancing the efficiency of traffic information collection and diffusion. In this paper, we commence by establishing a weighted and undirected graph model for IoV sensing networks and verify its time-invariant complex characteristics relying on a real-world taxi GPS dataset. Moreover, we propose an IoV-aided local traffic information collection architecture, a sink node selection scheme for the information influx, and an optimal traffic information transmission model. Our simulation results and theoretical analysis show the efficiency and feasibility of our proposed models.

Journal ArticleDOI
13 Feb 2018-Sensors
TL;DR: An autonomous obstacle avoidance dynamic path-planning method for a robotic manipulator based on an improved RRT algorithm, called Smoothly RRT (S-RRT), which can dramatically increase the sampling speed and efficiency of RRT and provides theoretical reference value for other types of robots' path planning.
Abstract: In a future intelligent factory, a robotic manipulator must work efficiently and safely in a Human-Robot collaborative and dynamic unstructured environment. Autonomous path planning is the most important issue which must be resolved first in the process of improving robotic manipulator intelligence. Among the path-planning methods, the Rapidly Exploring Random Tree (RRT) algorithm based on random sampling has been widely applied in dynamic path planning for a high-dimensional robotic manipulator, especially in a complex environment, because of its probabilistic completeness, perfect expansion, and faster exploring speed than other planning methods. However, the existing RRT algorithm has a limitation in path planning for a robotic manipulator in a dynamic unstructured environment. Therefore, an autonomous obstacle avoidance dynamic path-planning method for a robotic manipulator based on an improved RRT algorithm, called Smoothly RRT (S-RRT), is proposed. This method extends nodes toward a target direction and can dramatically increase the sampling speed and efficiency of RRT. A path optimization strategy based on the maximum curvature constraint is presented to generate a smooth and curved continuous executable path for a robotic manipulator. Finally, the correctness, effectiveness, and practicability of the proposed method are demonstrated and validated via a MATLAB static simulation and a Robot Operating System (ROS) dynamic simulation environment as well as a real autonomous obstacle avoidance experiment in a dynamic unstructured environment for a robotic manipulator. The proposed method not only has great practical engineering significance for a robotic manipulator's obstacle avoidance in an intelligent factory, but also provides theoretical reference value for other types of robots' path planning.
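
For reference, a minimal 2D RRT with goal-biased (directional) sampling, the kind of targeted node extension S-RRT builds on; obstacle checks and smoothing are omitted, and all parameters are hypothetical:

```python
import math
import random

def rrt_2d(start, goal, goal_bias=0.2, step=0.5, iters=2000, seed=1):
    """Minimal RRT on an obstacle-free plane with goal-biased sampling.
    Returns the tree as child -> parent links once the goal is reached."""
    rng = random.Random(seed)
    tree = {start: None}
    for _ in range(iters):
        # With probability goal_bias, steer toward the goal (directional bias).
        sample = goal if rng.random() < goal_bias else \
                 (rng.uniform(-10, 10), rng.uniform(-10, 10))
        nearest = min(tree, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d == 0:
            continue
        new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
               nearest[1] + step * (sample[1] - nearest[1]) / d)
        tree[new] = nearest
        if math.dist(new, goal) < step:
            return tree, new            # goal reached
    return tree, None

tree, last = rrt_2d(start=(0.0, 0.0), goal=(8.0, 8.0))
print("reached goal:", last is not None, "| tree size:", len(tree))
```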

Journal ArticleDOI
TL;DR: A mathematical model is used to construct network slice requests and map them to the infrastructure network, and a node importance metric is defined to rank the nodes in node mapping to efficiently utilize the limited physical resources.
Abstract: For fifth-generation wireless communication systems, network slicing has emerged as a key concept to meet the diverse requirements of various use cases. By slicing an infrastructure network into multiple dedicated logical networks, wireless networks can support a wide range of services. However, how to quickly deploy end-to-end slices is the main issue in a multi-domain wireless network infrastructure. In this paper, a mathematical model is used to construct network slice requests and map them to the infrastructure network. The mapping process consists of two steps: the placement of virtual network functions and the selection of the link paths chaining them. To efficiently utilize the limited physical resources, we pay attention to service-oriented deployment by offering different deployment policies for three typical slices: eMBB slices, mMTC slices, and uRLLC slices. Furthermore, we adopt complex network theory to obtain the topological information of the slices and the infrastructure network. With the topological information, we define a node importance metric to rank the nodes in node mapping. To evaluate the performance of the proposed deployment policies, extensive simulations have been conducted. The results show that our algorithm performs better in terms of resource efficiency and acceptance ratio. In addition, the average execution time of our algorithm grows linearly with the infrastructure network size.

Journal ArticleDOI
TL;DR: A novel approach is proposed to minimize the energy consumption of processing an application in MWSN while satisfying a certain completion time requirement, by introducing the concept of cooperation; the performance analysis shows the significant energy saving of the proposed solution.
Abstract: Advances in future computing to support emerging sensor applications are becoming more important, as is the need to better utilize computation and communication resources and make them energy efficient. As a result, it is predicted that intelligent devices and networks, including mobile wireless sensor networks (MWSN), will become the new interfaces to support future applications. In this paper, we propose a novel approach to minimize the energy consumption of processing an application in MWSN while satisfying a certain completion time requirement. Specifically, by introducing the concept of cooperation, the logics and related computation tasks can be optimally partitioned, offloaded and executed with the help of peer sensor nodes, thus the proposed solution can be treated as a joint optimization of computing and networking resources. Moreover, for a network with multiple mobile wireless sensor nodes, we propose energy-efficient cooperation node selection strategies that offer a tradeoff between fairness and energy consumption. Our performance analysis is supplemented by simulation results that show the significant energy saving of the proposed solution.

Journal ArticleDOI
TL;DR: An improved protocol is presented: a low-duty-cycle, energy-efficient MAC protocol for WSN that can be adaptively updated based on predicting nodes' wake-up times and can improve the adaptability of the network.
Abstract: In the medium access control (MAC) layer of WSN, scheduling nodes based on periodical listen/sleep is an effective way of saving node energy. To reduce the nodes' duty proportions without affecting data transmission, we improve protocols based on asynchronous MAC. This paper discusses a low-duty-cycle, energy-efficient MAC protocol for WSN that can be adaptively updated based on predicting nodes' wake-up times. We call it the AP-MAC protocol. In AP-MAC, the nodes do not wake up or send data in the same period; they wake up at random times according to a preset algorithm. In this way, the network can avoid problems such as collision and cross-talk caused by all nodes waking up at the same time, and save more energy. To ensure reliable transmission of network data, a node that sends data predicts the wake-up time of the receiving nodes, ensuring that the receiving nodes wake up in time and establish a connection with the sending node. At the same time, we add several adaptive update mechanisms to the network according to its dynamic changes. The experimental results show that the improved protocol not only saves network energy by effectively reducing the overall duty cycle of the network nodes and improving the reliable transmission of data, but also improves the adaptability of the network.
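
One way a sender can "predict" a receiver's randomized wake-up time is to derive wake-up offsets from a deterministic pseudo-random function of the node ID and cycle index, so any node can recompute them. This is a sketch of the general idea, not AP-MAC's actual algorithm:

```python
import hashlib

CYCLE_S = 2.0  # duty-cycle period in seconds (placeholder value)

def wake_offset(node_id, cycle_idx):
    """Deterministic pseudo-random wake-up offset within a cycle. Any node
    that knows node_id can compute it, so a sender can predict when the
    receiver wakes while wake-ups still look random across nodes."""
    digest = hashlib.sha256(f"{node_id}:{cycle_idx}".encode()).digest()
    frac = int.from_bytes(digest[:8], "big") / 2 ** 64
    return frac * CYCLE_S

# A sender predicts node 42's wake-up instants for the next few cycles
# and can defer transmission until just before each one.
for c in range(3):
    print(f"cycle {c}: node 42 wakes at +{wake_offset(42, c):.3f}s")
```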

Journal ArticleDOI
TL;DR: The proposed enhanced protocol, called Node Ranked–LEACH, improves the total network lifetime based on a node rank algorithm and performs well in terms of network lifetime and energy consumption compared with previous versions of the LEACH protocol.
Abstract: In a wireless sensor network, a large number of sensor nodes are distributed to cover a certain area. A sensor node is small in size, with restricted processing power, memory, and limited battery life. Because of the restricted battery power, a wireless sensor network needs to extend the system lifetime by reducing energy consumption. Clustering-based protocols balance the use of energy by giving all nodes a chance to become a cluster head. In this paper, we concentrate on recent hierarchical routing protocols, which build on the LEACH protocol to enhance its performance and increase the lifetime of the wireless sensor network. Our enhanced protocol, called Node Ranked–LEACH, is therefore proposed. The proposed protocol improves the total network lifetime based on a node rank algorithm, which depends on both the path cost and the number of links between nodes to select the cluster head of each cluster. This enhancement reflects the real weight a specific node needs to succeed as a cluster head. The proposed algorithm overcomes the random selection process, which leads to unexpected failure of some cluster heads in other LEACH versions, and it gives good performance in terms of network lifetime and energy consumption compared with previous versions of the LEACH protocol.
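
A hedged sketch of a rank score built from the two factors the abstract names, link count and path cost; the weighting and normalization below are assumptions for illustration, not the paper's formula:

```python
def node_rank(neighbors, path_cost, w_links=0.6, w_cost=0.4):
    """Illustrative cluster-head score from the two factors the paper names:
    number of links (more is better) and path cost (less is better).
    The weights and normalization are assumptions, not the paper's formula."""
    max_links = max(len(v) for v in neighbors.values())
    max_cost = max(path_cost.values())
    return {
        n: w_links * len(neighbors[n]) / max_links
           + w_cost * (1 - path_cost[n] / max_cost)
        for n in neighbors
    }

neighbors = {"a": ["b", "c", "d"], "b": ["a", "c"], "c": ["a", "b"], "d": ["a"]}
path_cost = {"a": 1.0, "b": 2.5, "c": 2.0, "d": 4.0}  # e.g. cost to the sink
ranks = node_rank(neighbors, path_cost)
print("cluster head:", max(ranks, key=ranks.get))  # 'a'
```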

Proceedings ArticleDOI
01 Jul 2018
TL;DR: ZoKrates is introduced, a toolbox to specify, integrate and deploy off-chain computations, which hides significant complexity inherent to zero-knowledge proofs, provides a more familiar and higher level of programming abstractions to developers and enables circuit integration, hence fostering adoption.
Abstract: Scalability and privacy are two challenges for today's blockchain systems. Processing transactions at every node in the system limits the system's ability to scale. Furthermore, the requirement to publish all corporate or individual information for processing at every node, essentially making the data public, is, despite all other advantages, often considered a major obstacle to blockchain adoption. In this paper, we make two main contributions to address these two problems: (i) To increase efficiency, we propose a processing model which applies non-interactive proofs to off-chain computations, thereby reducing on-chain computational efforts to the verification of correctness of execution rather than the execution itself. Due to the verifiable computation scheme's zero-knowledge property, private information used in the off-chain computation does not have to become public to verify correctness. (ii) We introduce ZoKrates, a toolbox to specify, integrate and deploy such off-chain computations. It consists of a domain-specific language, a compiler, and generators for proofs and verification Smart Contracts. ZoKrates hides significant complexity inherent to zero-knowledge proofs, provides a more familiar and higher level of programming abstractions to developers and enables circuit integration, hence fostering adoption.

Journal ArticleDOI
TL;DR: In this article, a distributed observer that guarantees asymptotic reconstruction of the state for the most general class of LTI systems, sensor network topologies, and sensor measurement structures is proposed.
Abstract: We consider the problem of distributed state estimation of a linear time-invariant (LTI) system by a network of sensors. We develop a distributed observer that guarantees asymptotic reconstruction of the state for the most general class of LTI systems, sensor network topologies, and sensor measurement structures. Our analysis builds upon the following key observation—a given node can reconstruct a portion of the state solely by using its own measurements and constructing appropriate Luenberger observers; hence, it only needs to exchange information with neighbors (via consensus dynamics) for estimating the portion of the state that is not locally detectable. This intuitive approach leads to a new class of distributed observers with several appealing features. Furthermore, by imposing additional constraints on the system dynamics and network topology, we show that it is possible to construct a simpler version of the proposed distributed observer that achieves the same objective while admitting a fully distributed design phase. Our general framework allows extensions to time-varying networks that result from communication losses, and scenarios including faults or attacks at the nodes.
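
A toy numpy simulation of the pattern: each node runs a Luenberger correction on the state component it can measure and a consensus step toward its neighbor's estimate, so both reconstruct the full state. The system and gains are hand-picked for this sketch, not produced by the paper's design procedure:

```python
import numpy as np

# Two sensor nodes, each measuring only one state component, reconstruct
# the full state via local Luenberger correction + consensus.
A = np.diag([0.8, 1.0])          # component 1 is a constant "bias" state
C = [np.array([[1.0, 0.0]]),     # node 0 sees only component 0
     np.array([[0.0, 1.0]])]     # node 1 sees only component 1
L = [np.array([[0.3], [0.0]]),   # local observer gains (hand-tuned)
     np.array([[0.0], [0.3]])]
GAMMA = 0.3                      # consensus weight between the two neighbors

rng = np.random.default_rng(0)
x = np.array([1.0, -2.0])
xhat = [rng.normal(size=2), rng.normal(size=2)]

for _ in range(60):
    y = [C[i] @ x for i in range(2)]
    new = []
    for i in range(2):
        j = 1 - i                                        # the only neighbor
        blend = (1 - GAMMA) * xhat[i] + GAMMA * xhat[j]  # consensus on estimates
        innov = (L[i] @ (y[i] - C[i] @ xhat[i])).ravel() # local correction
        new.append(A @ blend + innov)
    xhat, x = new, A @ x

# Without consensus, node 0 could never track the constant component 1.
print("estimation errors:", [float(np.linalg.norm(x - e)) for e in xhat])
```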