
Showing papers on "Transmission delay published in 2018"


Journal ArticleDOI
TL;DR: A systematic survey of state-of-the-art caching techniques recently developed for cellular networks, including macro-cellular networks, heterogeneous networks, device-to-device networks, cloud-radio access networks, and fog-radio access networks.
Abstract: Mobile data traffic is currently growing exponentially, and these rapid increases have caused the backhaul data rate requirements to become the major bottleneck to reducing costs and raising revenue for operators. To address this problem, caching techniques have attracted significant attention, since they can effectively reduce the backhaul traffic by eliminating duplicate transmission of popular content. In addition, other system performance metrics can also be improved through caching techniques, e.g., spectrum efficiency, energy efficiency, and transmission delay. In this paper, we provide a systematic survey of the state-of-the-art caching techniques that were recently developed in cellular networks, including macro-cellular networks, heterogeneous networks, device-to-device networks, cloud-radio access networks, and fog-radio access networks. In particular, we give a tutorial on the fundamental caching techniques and introduce caching algorithms from three aspects, i.e., content placement, content delivery, and joint placement and delivery. We provide comprehensive comparisons among different algorithms in terms of different performance metrics, including throughput, backhaul cost, power consumption, and network delay. Finally, we summarize the main research achievements in different networks, and highlight the main challenges and potential research directions.
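
As a concrete point of reference for the content-placement algorithms such surveys compare, the sketch below shows the simplest popularity-based baseline: cache the most requested items first until storage runs out. The Zipf popularity model, catalogue size, and cache size are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def zipf_popularity(num_contents: int, alpha: float = 0.8) -> np.ndarray:
    """Assumed Zipf request popularity over a content catalogue."""
    ranks = np.arange(1, num_contents + 1)
    weights = ranks ** (-alpha)
    return weights / weights.sum()

def greedy_placement(popularity: np.ndarray, cache_slots: int) -> set:
    """Most-popular-first placement: a common baseline, not any specific paper's scheme."""
    order = np.argsort(popularity)[::-1]
    return set(order[:cache_slots].tolist())

popularity = zipf_popularity(num_contents=1000)
cached = greedy_placement(popularity, cache_slots=50)
hit_rate = popularity[list(cached)].sum()
print(f"expected cache hit rate: {hit_rate:.3f}")
```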

226 citations


Journal ArticleDOI
TL;DR: In this article, the problem of resource management for a network of wireless virtual reality (VR) users communicating over small cell networks (SCNs) is studied. To capture the VR users' quality-of-service (QoS) in SCNs, a novel VR model based on multi-attribute utility theory is proposed.
Abstract: In this paper, the problem of resource management is studied for a network of wireless virtual reality (VR) users communicating over small cell networks (SCNs). In order to capture the VR users' quality-of-service (QoS) in SCNs, a novel VR model, based on multi-attribute utility theory, is proposed. This model jointly accounts for VR metrics, such as tracking accuracy, processing delay, and transmission delay. In this model, the small base stations (SBSs) act as the VR control centers that collect the tracking information from VR users over the cellular uplink. Once this information is collected, the SBSs will then send the 3-D images and accompanying audio to the VR users over the downlink. Therefore, the resource allocation problem in VR wireless networks must jointly consider both the uplink and downlink. This problem is then formulated as a noncooperative game, and a distributed algorithm based on the machine learning framework of echo state networks (ESNs) is proposed to find the solution of this game. The proposed ESN algorithm enables the SBSs to predict the VR QoS of each SBS and is guaranteed to converge to a mixed-strategy Nash equilibrium. The analytical result shows that each user's VR QoS jointly depends on both VR tracking accuracy and wireless resource allocation. Simulation results show that the proposed algorithm yields significant gains, in terms of VR QoS utility, that reach up to 22.2% and 37.5%, respectively, compared with Q-learning and a baseline proportional fair algorithm. The results also show that the proposed algorithm has a faster convergence time than Q-learning and can guarantee low delays for VR services.
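
The abstract only names echo state networks as the learning tool; the minimal numpy sketch below shows the generic ESN recipe (fixed random reservoir, ridge-regression readout) on synthetic data. The reservoir size, spectral radius, and the synthetic "QoS" target are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions (illustrative choices, not from the paper).
n_in, n_res, T = 3, 100, 500

# Fixed random input and reservoir weights; scale reservoir to spectral radius 0.9.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

# Synthetic "tracking information" inputs and a synthetic QoS target to predict.
U = rng.normal(size=(T, n_in))
y = np.tanh(U @ np.array([0.5, -0.3, 0.2])) + 0.05 * rng.normal(size=T)

# Run the reservoir and collect states.
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W_in @ U[t] + W @ x)
    states[t] = x

# Ridge-regression readout: the only trained part of an echo state network.
lam = 1e-2
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res), states.T @ y)
pred = states @ W_out
print("training MSE:", float(np.mean((pred - y) ** 2)))
```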

218 citations


Journal ArticleDOI
TL;DR: A blockchain-based security architecture for distributed cloud storage is proposed, in which users divide their own files into encrypted data chunks and upload those chunks randomly to the P2P network nodes that provide free storage capacity.
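
The sketch below illustrates only the chunk-and-encrypt step described in the TL;DR; it is a generic stand-in, not the paper's scheme. Fernet from the `cryptography` package is an assumed cipher choice, the chunk size is arbitrary, and the per-chunk digest merely hints at what a blockchain ledger could record; the ledger and P2P upload themselves are omitted.

```python
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

def chunk_and_encrypt(data: bytes, chunk_size: int = 64 * 1024):
    """Split a file into fixed-size chunks, encrypt each, and record its digest.

    Fernet is just a convenient stand-in cipher; the paper does not specify it.
    The digests are the kind of integrity record a blockchain could anchor.
    """
    key = Fernet.generate_key()
    cipher = Fernet(key)
    records = []
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        encrypted = cipher.encrypt(chunk)
        records.append({
            "index": offset // chunk_size,
            "ciphertext": encrypted,                          # uploaded to a P2P node
            "digest": hashlib.sha256(encrypted).hexdigest(),  # anchored on-chain
        })
    return key, records

key, records = chunk_and_encrypt(b"example file contents" * 10000)
print(len(records), "encrypted chunks ready for random placement on peers")
```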

155 citations


Journal ArticleDOI
TL;DR: Some novel sufficient conditions are obtained for ensuring the exponential stability in mean square and the switching topology-dependent filters are derived such that an optimal disturbance rejection attenuation level can be guaranteed for the estimation disagreement of the filtering network.
Abstract: In this paper, the distributed ${H_{\infty }}$ state estimation problem is investigated for a class of filtering networks with time-varying switching topologies and packet losses. In the filter design, the time-varying switching topologies, partial information exchange between filters, the packet losses in transmission from the neighbor filters, and the channel noises are simultaneously considered. The considered topology evolves not only over time, but also by event switches, which are assumed to be subject to a nonhomogeneous Markov chain whose probability transition matrix is time-varying. Some novel sufficient conditions are obtained for ensuring the exponential stability in mean square, and the switching topology-dependent filters are derived such that an optimal ${H_{\infty }}$ disturbance rejection attenuation level can be guaranteed for the estimation disagreement of the filtering network. Finally, simulation examples are provided to demonstrate the effectiveness of the theoretical results.

127 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a framework for cross-layer optimization to ensure ultra-high reliability and ultra-low latency in radio access networks, where both transmission delay and queueing delay are considered.
Abstract: In this paper, we propose a framework for cross-layer optimization to ensure ultra-high reliability and ultra-low latency in radio access networks, where both transmission delay and queueing delay are considered. With short transmission time, the blocklength of channel codes is finite, and the Shannon capacity cannot be used to characterize the maximal achievable rate with given transmission error probability. With randomly arrived packets, some packets may violate the queueing delay. Moreover, since the queueing delay is shorter than the channel coherence time in typical scenarios, the required transmit power to guarantee the queueing delay and transmission error probability will become unbounded even with spatial diversity. To ensure the required quality-of-service (QoS) with finite transmit power, a proactive packet dropping mechanism is introduced. Then, the overall packet loss probability includes the transmission error probability, the queueing delay violation probability, and the packet dropping probability. We optimize the packet dropping policy, power allocation policy, and bandwidth allocation policy to minimize the transmit power under the QoS constraint. The optimal solution is obtained, which depends on both channel and queue state information. Simulation and numerical results validate our analysis, and show that setting the three packet loss probabilities as equal causes marginal power loss.
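
The abstract's point that the Shannon capacity no longer applies at short blocklengths is usually made precise with the finite-blocklength normal approximation; the sketch below restates that standard result and the additive decomposition of the overall loss probability that the abstract describes. The notation is mine and may differ from the paper's.

```latex
% Normal approximation of the maximal achievable rate (bits per channel use)
% with blocklength n, error probability \epsilon, and SNR \gamma (Polyanskiy et al.):
R(n,\epsilon) \approx \log_2(1+\gamma) - \sqrt{\tfrac{V}{n}}\, Q^{-1}(\epsilon)\,\log_2 e,
\qquad V = 1 - \frac{1}{(1+\gamma)^2}.

% Overall packet loss seen by the application, per the abstract's three components
% (transmission error, queueing-delay violation, proactive dropping):
\varepsilon_{\mathrm{total}} \approx \varepsilon_{\mathrm{tx}}
  + \varepsilon_{\mathrm{queue}} + \varepsilon_{\mathrm{drop}}.
```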

126 citations


Proceedings ArticleDOI
16 Mar 2018
TL;DR: A new hierarchical 5G Next generation VANET architecture is proposed to integrate the centralization and flexibility of Software Defined Networking and Cloud-RAN, with 5G communication technologies, to effectively allocate resources with a global view.
Abstract: The growth of the technical revolution towards 5G Next generation networks is expected to meet the various communication requirements of future Intelligent Transportation Systems (ITS). Motivated by consumer needs for a variety of ITS applications, bandwidth, high speed and ubiquity, researchers are currently exploring different network architectures and techniques which could be employed in Next generation ITS. To provide flexible network management, control and high resource utilization in Vehicular Ad-hoc Networks (VANETs) on a large scale, a new hierarchical 5G Next generation VANET architecture is proposed. The key idea of this holistic architecture is to integrate the centralization and flexibility of Software Defined Networking (SDN) and Cloud-RAN (CRAN) with 5G communication technologies, to effectively allocate resources with a global view. Moreover, a fog computing framework (comprising zones and clusters) has been proposed at the edge, to avoid frequent handovers between vehicles and RSUs. The transmission delay, throughput and control overhead on the controller are analyzed and compared with other architectures. Simulation results indicate reduced transmission delay and minimized control overhead on the controllers. Moreover, the throughput of the proposed system is also improved.

112 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel spider-web-like transmission mechanism for emergency data (TMED) in vehicular ad hoc networks, in which a spider-web-like transmission model combining geographic information systems and electronic maps is established; simulations show that TMED outperforms previously proposed protocols.
Abstract: The vehicular ad hoc network (VANET) is an emerging mobile ad hoc network, which is an important component of the Internet of Things and has been widely applied in intelligent transportation systems in recent years. For large-scale VANETs, it is important to design efficient transmission schemes for time-critical emergency data. The greedy perimeter stateless routing (GPSR) protocol is a typical geographic-based routing protocol and the greedy perimeter coordinator routing (GPCR) protocol is a typical map-based one, but neither considers the QoS of the transmission link, and hence they are not suitable for emergency data transmission. Some bioinspired protocols and situation-aware protocols are more suitable for emergency situations; however, they have limitations in terms of computational complexity and convergence rate, which can cause large time delays. In this paper, we propose a novel spider-web-like transmission mechanism for emergency data (TMED) in vehicular ad hoc networks, in which a spider-web-like transmission model combining geographic information systems and electronic maps is established. In this mechanism, request-spiders and confirmed-spiders are sent out to obtain the transmission path from a source vehicle to a destination vehicle, improving the packet delivery ratio and the average transmission delay of emergency data. TMED combines a dynamic multi-priority message queue management method with a restricted greedy forwarding strategy based on position prediction to significantly reduce the end-to-end delay of packets. We use SUMO and NS2 to simulate TMED in a city scenario and compare its performance with GPSR, GPCR, and ACAR based on the packet delivery ratio, average transmission delay, and routing overhead. The simulation results show that TMED outperforms these previously proposed protocols.
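
The sketch below is a minimal stand-in for the two ingredients the abstract names, a multi-priority message queue and restricted greedy forwarding toward a predicted position. The priority levels, the linear position-prediction model, the radio range, and all coordinates are illustrative assumptions, not TMED's actual parameters.

```python
import heapq
import math

class PriorityQueue:
    """Lower priority value = more urgent (e.g. 0 = emergency data)."""
    def __init__(self):
        self._heap, self._seq = [], 0
    def push(self, priority, message):
        heapq.heappush(self._heap, (priority, self._seq, message))
        self._seq += 1
    def pop(self):
        return heapq.heappop(self._heap)[2]

def predicted_position(pos, velocity, dt):
    """Linear position prediction used to score candidate next hops."""
    return (pos[0] + velocity[0] * dt, pos[1] + velocity[1] * dt)

def greedy_next_hop(current, destination, neighbors, radio_range=250.0, dt=1.0):
    """Pick the in-range neighbor whose predicted position is closest to the destination."""
    best, best_dist = None, math.dist(current, destination)
    for node_id, (pos, vel) in neighbors.items():
        future = predicted_position(pos, vel, dt)
        if math.dist(current, future) <= radio_range:
            d = math.dist(future, destination)
            if d < best_dist:
                best, best_dist = node_id, d
    return best  # None => no progress possible (a recovery mode would be needed)

q = PriorityQueue()
q.push(0, "emergency: collision ahead")
q.push(2, "routine beacon")
neighbors = {"v1": ((120.0, 40.0), (10.0, 0.0)), "v2": ((60.0, 10.0), (-5.0, 0.0))}
print(q.pop(), "->", greedy_next_hop((0.0, 0.0), (500.0, 50.0), neighbors))
```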

82 citations


Journal ArticleDOI
TL;DR: A tensor-based big data management scheme is proposed for the dimensionality reduction problem of big data generated by various smart devices; it minimizes the transmission delay incurred when moving the dimensionally reduced data between different nodes.
Abstract: Smart grid (SG) is an integration of the traditional power grid with advanced information and communication infrastructure for bidirectional energy flow between the grid and end users. A huge amount of data is being generated by the various smart devices deployed in SG systems. Such massive data generation may lead to various challenges for the networking infrastructure deployed between users and the grid. Hence, an efficient data transmission technique is required for providing the desired QoS to the end users in this environment. Generally, the data generated by smart devices in SG have high dimensions in the form of multiple heterogeneous attributes, whose values change with time. The high dimensionality of the data may affect the performance of most of the designed solutions in this environment. Most of the existing schemes reported in the literature involve complex operations for the data dimensionality reduction problem, which may deteriorate the performance of any implemented solution. To address these challenges, in this paper, a tensor-based big data management scheme is proposed for the dimensionality reduction problem of big data generated from various smart devices. In the proposed scheme, first the Frobenius norm is applied on high-order tensors (used for data representation) to minimize the reconstruction error of the reduced tensors. Then, an empirical probability-based control algorithm is designed to estimate an optimal path to forward the reduced data using software-defined networks for minimization of the network load and effective bandwidth utilization. The proposed scheme minimizes the transmission delay incurred during the movement of the dimensionally reduced data between different nodes. The efficacy of the proposed scheme has been evaluated using extensive simulations carried out on data traces using 'R' programming and Matlab. The big data traces considered for evaluation consist of more than two million entries (2,075,259) collected at a one-minute sampling rate and having heterogeneous features such as voltage, energy, frequency, and electric signals. Moreover, a comparative study for different data traces and a real SG testbed is also presented to prove the efficacy of the proposed scheme. The results obtained depict the effectiveness of the proposed scheme with respect to parameters such as network delay, accuracy, and throughput.
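
The paper minimizes the Frobenius-norm reconstruction error of reduced high-order tensors; the sketch below illustrates the same idea in its simplest form, a truncated SVD of a mode-1 matricization (which is Frobenius-optimal for matrices). This is a generic illustration, not the paper's algorithm, and the synthetic low-rank "smart meter" tensor and its dimensions are assumptions.

```python
import numpy as np

def truncate_mode1(tensor: np.ndarray, rank: int):
    """Reduce a 3-way tensor along mode 1 via truncated SVD of its unfolding."""
    d0, d1, d2 = tensor.shape
    unfolding = tensor.reshape(d0, d1 * d2)            # mode-1 matricization
    U, s, Vt = np.linalg.svd(unfolding, full_matrices=False)
    reduced = np.diag(s[:rank]) @ Vt[:rank]            # compressed representation
    reconstructed = (U[:, :rank] @ reduced).reshape(d0, d1, d2)
    rel_err = np.linalg.norm(tensor - reconstructed) / np.linalg.norm(tensor)
    return reduced, rel_err

rng = np.random.default_rng(1)
# Synthetic readings with approximately rank-5 structure (time x days x features).
low_rank = rng.normal(size=(96, 5)) @ rng.normal(size=(5, 360))
readings = (low_rank + 0.01 * rng.normal(size=(96, 360))).reshape(96, 30, 12)
compressed, err = truncate_mode1(readings, rank=5)
print("compressed shape:", compressed.shape, "relative Frobenius error:", round(err, 4))
```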

77 citations


Journal ArticleDOI
TL;DR: Experiments show that the cognitive engine-based security strategy deployment put forth in this paper outperforms other schemes.
Abstract: For the deployment of a security strategy, corresponding forwarding rules for switches can be given according to different traffic conditions. However, due to the lack of global cognitive control over security strategy deployment in the traditional Internet of Vehicles (IoV), it is quite difficult to realize a global, optimized security strategy deployment scheme that meets the security requirements of different traffic conditions. On the basis of the traditional IoV, a cognitive engine is added in the cognitive IoV (CIoV) to enhance the intelligence of the traditional IoV. For CIoV, and taking restrictions on transmission delay into consideration, the security strategy deployment for switches on the core network is formulated in this paper, so that not only are the secure transmission rules satisfied, but the transmission delay is also minimized. To be specific, the path selection of switches is modeled as a 0-1 programming problem, and that optimization problem is proved to be nonconvex. We then convert it into a convex optimization problem via the log-det heuristic algorithm, thus giving a path selection scheme that meets the security requirements with the lowest overall delay. Experiments show that the cognitive engine-based security strategy deployment put forth in this paper outperforms other schemes.

71 citations


Journal ArticleDOI
13 Jun 2018-Sensors
TL;DR: This paper proposes a smart collaborative routing protocol, Geographic energy aware routing and Inspecting Node (GIN), to guarantee reliable data exchange, and demonstrates that the proposed protocol outperforms the others.
Abstract: It is difficult for current routing protocols to meet the need for reliable data diffusion in Internet of Things (IoT) deployments. Due to the random placement, limited resources, and unattended features of existing sensor nodes, wireless transmissions are easily exposed to unauthorized users, which makes them a vulnerable target for various malicious attacks, such as wormhole and Sybil attacks. Schemes based on geographic location are suitable candidates to defend against them. This paper therefore proposes a smart collaborative routing protocol, Geographic energy aware routing and Inspecting Node (GIN), to guarantee reliable data exchange. The proposed protocol integrates directed diffusion routing, Greedy Perimeter Stateless Routing (GPSR), and an inspecting node mechanism. We first discuss current wireless routing protocols from three diverse perspectives (improving the transmission rate, shortening the transmission range, and reducing transmission consumption). Then, the details of GIN, including the model establishment and implementation processes, are presented by means of theoretical analysis. By leveraging game theory, the inspecting node is elected to monitor network behaviors. Thirdly, we evaluate the network performance, in terms of transmission delay, packet loss ratio, and throughput, of GIN against three traditional schemes (i.e., Flooding, GPSR, and GEAR). The simulation results illustrate that the proposed protocol outperforms the others.

65 citations


Proceedings ArticleDOI
04 Jun 2018
TL;DR: DeepCache, a deep-learning-based solution that understands the request patterns in individual base stations and accordingly makes intelligent cache decisions, is developed, along with a cooperation strategy for nearby base stations to collectively serve user requests.
Abstract: The emerging 5G mobile networking promises ultra-high network bandwidth and ultra-low communication latency, yet fetching content from remote servers still incurs long delays (often beyond 100 ms) due to their store-and-forward design and the physical barrier of signal propagation speed, not to mention the congestion that frequently happens. Caching is known to be effective to bridge this speed gap, and it has become a critical component in the 5G deployment as well. Besides storage, 5G base stations (BSs) will also be equipped with strong computing modules, offering mobile edge computing (MEC) capability. This paper explores the potential of edge computing for improving cache performance, and we envision a learning-based framework that facilitates smart caching beyond simple frequency- and time-based replacement strategies, as well as cooperation among base stations. Within this framework, we develop DeepCache, a deep-learning-based solution to understand the request patterns in individual base stations and accordingly make intelligent cache decisions. Using mobile video, one of the most popular applications with high traffic demand, as a case study, we further develop a cooperation strategy for nearby base stations to collectively serve user requests. Experimental results on a real-world dataset show that using the collaborative DeepCache algorithm, the overall transmission delay is reduced by 14%∼22%, with a backhaul data traffic saving of 15%∼23%.
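
DeepCache itself uses a deep model to predict request patterns; the sketch below deliberately replaces that predictor with a simple exponentially weighted request-rate estimate, purely to show the control loop the abstract describes: predict near-future popularity, then evict the least promising cached item. The decay factor, capacity, and request trace are illustrative assumptions.

```python
from collections import defaultdict

class PredictiveCache:
    """Popularity-prediction-driven cache (a stand-in for DeepCache's learned predictor)."""

    def __init__(self, capacity: int, decay: float = 0.9):
        self.capacity = capacity
        self.decay = decay
        self.score = defaultdict(float)   # predicted near-future popularity
        self.cache = set()

    def request(self, item: str) -> bool:
        # Age every popularity estimate, then credit the requested item.
        for key in list(self.score):
            self.score[key] *= self.decay
        self.score[item] += 1.0
        hit = item in self.cache
        if not hit:
            if len(self.cache) >= self.capacity:
                # Evict the item predicted to be least popular.
                victim = min(self.cache, key=lambda k: self.score[k])
                self.cache.discard(victim)
            self.cache.add(item)
        return hit

cache = PredictiveCache(capacity=2)
trace = ["a", "b", "a", "c", "a", "b", "a"]
hits = sum(cache.request(x) for x in trace)
print(f"hit ratio: {hits / len(trace):.2f}")
```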

Journal ArticleDOI
TL;DR: A bus-trajectory-based street-centric routing algorithm, which uses buses as the main relays to deliver messages, together with a bus-based forwarding strategy with ant colony optimization to find a reliable and steady multihop link between two relay buses in order to decrease the end-to-end delay.
Abstract: This paper focuses on the routing algorithm for communications between vehicles and places in urban vehicular ad hoc networks. As one of the basic transportation facilities in an urban setting, buses periodically run along their fixed routes and cover many city streets. The trajectories of bus lines can be seen as a submap of a city. Based on the characteristics of bus networks, we propose a bus-trajectory-based street-centric (BTSC) routing algorithm, which uses buses as the main relays to deliver messages. In BTSC, we build a routing graph based on the trajectories of bus lines by analyzing the probability of buses appearing on every street. We propose two novel concepts, i.e., the probability of street consistency and the probability of path consistency, which are used as metrics to determine routing paths for message delivery. This aims to choose the best path, with a higher density of buses and a lower probability of the transmission direction deviating from the routing path. In order to improve the bus forwarding opportunity, we design a bus-based forwarding strategy with ant colony optimization to find a reliable and steady multihop link between two relay buses, in order to decrease the end-to-end delay. BTSC makes improvements in both the selection of routing paths and the message forwarding strategy. Simulation results show that our proposed routing algorithm has better performance in terms of the transmission ratio, transmission delay, and adaptability to different networks.

Journal ArticleDOI
TL;DR: A comprehensive performance analysis demonstrates that the delay Differentiated Services based Data Routing scheme has obvious advantages in improving network performance compared to previous studies: it reduces the transmission latency of delay-sensitive data and improves network energy utilization, while guaranteeing that the network lifetime is not lower than in previous studies.
Abstract: Energy-efficient data gathering techniques play a crucial role in promoting the development of smart portable devices as well as the smart-sensor-based Internet of Things (IoT). For data gathering, different applications require different delay constraints; therefore, a delay Differentiated Services based Data Routing (DSDR) scheme is proposed to address the delay-differentiated service constraint that is missing from previous data gathering studies. The DSDR scheme has three advantages. First, DSDR greatly reduces transmission delay by establishing energy-efficient routing paths (E2RPs): multiple E2RPs are established at different locations of the network to forward data, and the duty cycles of nodes on the E2RPs are increased to 1, so data are forwarded along the E2RPs without sleeping delay, which greatly reduces transmission latency. Secondly, DSDR intelligently chooses the transmission method according to data urgency: a direct-forwarding strategy is adopted for delay-sensitive data to ensure minimum end-to-end delay, while a wait-forwarding method is adopted for delay-tolerant data to perform data fusion and reduce energy consumption. Finally, DSDR makes full use of the residual energy and improves the effective energy utilization: the E2RPs are built in regions with adequate residual energy and are periodically rotated to equalize the energy consumption of the network. A comprehensive performance analysis demonstrates that the DSDR scheme has obvious advantages in improving network performance compared to previous studies: it reduces the transmission latency of delay-sensitive data by 44.31%, reduces the transmission latency of delay-tolerant data by 25.65%, and improves network energy utilization by 30.61%, while also guaranteeing that the network lifetime is not lower than in previous studies.

Journal ArticleDOI
TL;DR: Simulation results indicate the proposed routing algorithm's superiority in terms of residual energy, along with considerable improvement in network lifetime and a significant reduction in delay, compared with the existing PEGASIS protocol and the optimised PEG-ACO chain, respectively.

Journal ArticleDOI
TL;DR: An adjustable duty cycle based fast disseminate (ADCFD) scheme is proposed for minimum-transmission broadcast (MTB) in a smart wireless software-defined network; the number of transmissions in the ADCFD scheme is reduced while retaining the network lifetime.
Abstract: Program codes, as one kind of big data, should be disseminated to all sensor nodes in a wireless software-defined smart network (WSDSN) quickly. Due to the limited energy of sensor nodes, they adopt an asynchronous duty-cycle model to save energy. However, neighbor nodes in the sleep state cannot receive program codes, resulting in a longer transmission delay for spreading the codes. In this paper, an adjustable duty cycle based fast disseminate (ADCFD) scheme is proposed for minimum-transmission broadcast (MTB) in a smart wireless software-defined network. In the ADCFD scheme, the duty cycles of nodes are adjusted so that program codes are received in a timely manner; thus, the number of transmissions and the emergency transmission delay are reduced. The theoretical analysis and experimental results show that, compared to previous broadcast schemes, the number of transmissions in the ADCFD scheme is reduced by 44.776%–118.519%, and the delay for disseminating program codes is reduced by 17.895%–107.527%, while the network lifetime is retained.

Posted Content
TL;DR: In this paper, a novel framework is proposed to optimize a platoon's operation while jointly considering the delay of the wireless network and the stability of the vehicle's control system, and the control parameters are optimized to maximize the derived wireless system reliability.
Abstract: Autonomous vehicular platoons will play an important role in improving on-road safety in tomorrow's smart cities. Vehicles in an autonomous platoon can exploit vehicle-to-vehicle (V2V) communications to collect information, such as velocity and acceleration, from surrounding vehicles so as to maintain the target velocity and inter-vehicle distance. However, due to the dynamic on-vehicle data processing rate and the uncertainty of the wireless channel, V2V communications within a platoon will experience a delay. Such delay can impair the vehicles' ability to stabilize the operation of the platoon. In this paper, a novel framework is proposed to optimize a platoon's operation while jointly considering the delay of the wireless network and the stability of the vehicle's control system. First, stability analysis for the control system is performed and the maximum wireless system delay requirements which can prevent the instability of the control system are derived. Then, delay analysis is conducted to determine the end-to-end delay, including queuing, processing, and transmission delay, for the V2V link in the wireless network. Subsequently, using the derived delay, a lower bound and an approximated expression of the reliability of the wireless system, defined as the probability that the wireless system meets the control system's delay needs, are derived. Then, the control parameters are optimized to maximize the derived wireless system reliability. Simulation results corroborate the analytical derivations and study the impact of parameters, such as the platoon size, on the reliability performance of the vehicular platoon. More importantly, the simulation results disclose the benefits of integrating control system and wireless network design while providing guidelines for designing autonomous platoons so as to realize the required wireless network reliability and control system stability.
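
Reliability here, as the abstract defines it, is the probability that the end-to-end V2V delay (queuing plus processing plus transmission) stays within the control system's requirement. The Monte Carlo sketch below estimates that probability; the exponential delay components and the millisecond-scale parameter values are illustrative assumptions only, whereas the paper derives the distribution analytically from its queueing and channel model.

```python
import numpy as np

def estimate_reliability(max_delay_s: float,
                         mean_queue_s: float = 2e-3,
                         mean_proc_s: float = 1e-3,
                         mean_tx_s: float = 1.5e-3,
                         samples: int = 200_000,
                         seed: int = 0) -> float:
    """Monte Carlo estimate of P(queuing + processing + transmission <= max_delay)."""
    rng = np.random.default_rng(seed)
    total = (rng.exponential(mean_queue_s, samples)
             + rng.exponential(mean_proc_s, samples)
             + rng.exponential(mean_tx_s, samples))
    return float(np.mean(total <= max_delay_s))

# Example: a control loop that tolerates at most 10 ms of V2V delay.
print("estimated reliability:", estimate_reliability(max_delay_s=0.010))
```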

Proceedings ArticleDOI
20 May 2018
TL;DR: This paper jointly addresses user scheduling and content caching issues with the purpose of minimizing the average transmission delay in the mobile edge network.
Abstract: Caching popular contents at the edge of mobile networks is an effective technology to relieve the burden on backhaul links and reduce the average transmission delay. However, the effectiveness of content caching depends on the cache-hit probability, and thus the content caching strategy becomes an important topic. This paper jointly addresses the user scheduling and content caching issues with the purpose of minimizing the average transmission delay in the mobile edge network. Since network states are unknown random variables, reinforcement learning (RL) is adopted to learn the optimal stochastic policy through interactions with the network environment. The actor of the RL agent uses the Gibbs distribution as the probabilistic caching policy, and the parameters of the policy are updated with the gradient ascent method. The critic of the RL agent uses a deep neural network to approximate the value function and helps the actor estimate the gradient of the policy. The techniques of experience replay and a fixed target network are used to guarantee convergence and stability of the learning algorithm. Simulation results illustrate the superior performance of the proposed approach.
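
The sketch below shows only the actor side of the idea: a Gibbs (softmax) distribution over contents updated by policy gradient, with cache hits as reward. It is a simplified REINFORCE-style stand-in without the paper's critic network, experience replay, or target network, and the catalogue size, cache size, and synthetic popularity are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

num_contents, cache_size, episodes = 20, 3, 3000
popularity = rng.dirichlet(np.ones(num_contents))   # unknown to the learner
theta = np.zeros(num_contents)                       # Gibbs-policy parameters
lr = 0.05

def gibbs_policy(theta):
    """Softmax (Gibbs) distribution over contents."""
    z = np.exp(theta - theta.max())
    return z / z.sum()

for _ in range(episodes):
    probs = gibbs_policy(theta)
    # Cache the contents sampled by the stochastic policy (without replacement).
    cached = rng.choice(num_contents, size=cache_size, replace=False, p=probs)
    request = rng.choice(num_contents, p=popularity)
    reward = 1.0 if request in cached else 0.0       # hit => no backhaul delay
    # Approximate score-function update (the paper uses an actor-critic instead).
    grad = -probs
    grad[cached] += 1.0 / cache_size
    theta += lr * reward * grad

probs = gibbs_policy(theta)
print("top cached contents:", np.argsort(probs)[::-1][:cache_size])
print("true most popular:  ", np.argsort(popularity)[::-1][:cache_size])
```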

Journal ArticleDOI
TL;DR: The goal is to adaptively reduce the amount of data that needs to be transmitted in order to efficiently communicate and possibly store information, while maintaining the application's quality-of-service (QoS) requirements.
Abstract: The emergence of Internet of Things (IoT) applications and rapid advances in wireless communication technologies have motivated a paradigm shift in the development of viable applications such as mobile-health (m-health). These applications boost the opportunity for ubiquitous real-time monitoring using different data types such as electroencephalography (EEG), electrocardiography (ECG), etc. However, many remote monitoring applications require continuous sensing of different signals and vital signs, which generates large volumes of real-time data that need to be processed, recorded, and transmitted. Thus, designing efficient transceivers is crucial to reducing transmission delay and energy by leveraging data reduction techniques. In this context, we propose an efficient data-specific transceiver design that leverages the inherent characteristics of the generated data at the physical layer to reduce the transmitted data size without significant overheads. The goal is to adaptively reduce the amount of data that needs to be transmitted in order to efficiently communicate and possibly store information, while maintaining the application's quality-of-service (QoS) requirements. Our results show the excellent performance of the proposed design in terms of data reduction gain, signal distortion, and low complexity, and the advantages it exhibits with respect to state-of-the-art techniques, since we could obtain about a 50% compression ratio at 0% distortion and sample error rate.

Journal ArticleDOI
Wenhao Zong, Changzhu Zhang, Zhuping Wang, Jin Zhu, Qijun Chen
TL;DR: A practical hardware and software framework is proposed to reveal the external configuration and internal mechanism of an autonomous vehicle—a typical intelligent system—and the performance of project cocktail is shown to be considerably better in terms of transmission delay and throughput.
Abstract: Architecture design is one of the most important problems for an intelligent system. In this paper, a practical framework of hardware and software is proposed to reveal the external configuration and internal mechanism of an autonomous vehicle—a typical intelligent system. The main contributions of this paper are as follows. First, we compare the advantages and disadvantages of three typical sensor plans and introduce a general autopilot for a vehicle. Second, we introduce a software architecture for an autonomous vehicle. The perception and planning performances are improved with the help of two inner loops of simultaneous localization and mapping. An algorithm to enlarge the detection range of the sensors is proposed by adding an inner loop to the perception system. A practical feedback to restrain mutations of two adjacent planning periods is also realized by the other inner loop. Third, a cross-platform virtual server (named project cocktail) for data transmission and exchange is presented in detail. Through comparisons with the robot operating system, the performance of project cocktail is proven to be considerably better in terms of transmission delay and throughput. Finally, a report on an autonomous driving test implemented using the proposed architecture is presented, which shows the effectiveness, flexibility, stability, and low-cost of the overall autonomous driving system.

Journal ArticleDOI
TL;DR: This paper proposes an a posteriori caching mechanism rather than the mainstream a priori approach, in which the content placement strategy is determined based on the assumed identical distribution of content popularity and user preference before the theoretical analysis of the placement gain is validated.
Abstract: Caching content at the edge of a network can effectively localize traffic, reduce network latency, and improve network throughput. In this paper, we propose an a posteriori caching mechanism rather than the mainstream a priori approach, in which the content placement strategy is determined based on the assumed identical distribution of content popularity and user preference before the theoretical analysis of the placement gain is validated. We primarily investigate the optimal caching strategy subject to the constraint of storage capacity in a heterogeneous network, where a macro base station (MBS), small base stations (SBSs), and user terminals (UTs) are integrated for proactive content storage in the physical layer. To maximize the local hit rate while reducing the transmission delay, we first analyze the optimal strategy by converting content placement into a 0–1 knapsack problem and address the optimization problem using the method of Lagrangian multipliers. We then determine the request probability for each user by exploiting the context information of the user's request history, user similarity, and social ties to achieve reasonably well-optimized caching performance. The caching policy is further optimized into a low-complexity heuristic algorithm with the knowledge of the request probability and the optimal copy volumes. Finally, the simulation results show that the proposed cooperative caching algorithm improves the performance metrics in terms of hit rate and transmission delay under different benchmarks.
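
Since the abstract casts content placement as a 0–1 knapsack problem, the sketch below solves a toy instance with the textbook dynamic program: choose which contents to cache under a storage budget so that the popularity-weighted hit gain is maximized. The content sizes and gains are made-up illustrative numbers, and the paper itself uses Lagrangian multipliers plus a low-complexity heuristic rather than this DP.

```python
def knapsack_placement(sizes, values, capacity):
    """Textbook 0-1 knapsack DP: pick contents to cache under a storage budget."""
    n = len(sizes)
    best = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if sizes[i - 1] <= c:
                cand = best[i - 1][c - sizes[i - 1]] + values[i - 1]
                if cand > best[i][c]:
                    best[i][c] = cand
    # Backtrack to recover which contents are cached.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            chosen.append(i - 1)
            c -= sizes[i - 1]
    return best[n][capacity], sorted(chosen)

# Illustrative catalogue: (size in storage units, expected hit gain).
sizes = [4, 2, 3, 5, 1]
values = [0.30, 0.25, 0.20, 0.15, 0.10]
gain, cached = knapsack_placement(sizes, values, capacity=7)
print("cached content indices:", cached, "expected hit gain:", round(gain, 2))
```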

Journal ArticleDOI
TL;DR: A class of nonlinear interconnected NNs with transmission delay and random impulse effect is first formulated and analyzed, and a randomized broadcast impulsive coupling scheme is integrated into the protocol design to make network protocols more flexible.
Abstract: Inspired by security applications in the industrial Internet of things, this paper focuses on the usage of impulsive neural network (NN) synchronization technique for intelligent image protection against illegal swiping and abuse. A class of nonlinear interconnected NNs with transmission delay and random impulse effect is first formulated and analyzed in this paper. In order to make network protocols more flexible, a randomized broadcast impulsive coupling scheme is integrated into the protocol design. Impulsive synchronization criteria are then derived for the chaotic NNs in the presence of nonlinear protocol and random broadcast impulse, with the impulse effect discussed. Illustrative examples are provided to verify the developed impulsive synchronization results and to show its potential application in image encryption and decryption.

Journal ArticleDOI
TL;DR: This study considers the network-based H∞ state estimation problem for neural networks where transmitted measurements suffer from the sampling effect, external disturbance, network-induced delay, and packet dropout as network constraints.

Journal ArticleDOI
TL;DR: A learning-based synchronous (LS) approach based on forwarding nodes is proposed to reduce the delay for IIoTs and improve network performance by reducing conflicts between simultaneous data transmissions.
Abstract: The Industrial Internet of Things (IIoTs) is creating a new world which incorporates machine learning, sensor data, and machine-to-machine (M2M) communications. In IIoTs, the transmission delay is one of the pivotal performance metrics, because dilatory communication will cause heavy losses to industrial applications. In this paper, a learning-based synchronous (LS) approach based on forwarding nodes is proposed to reduce the delay for IIoTs. In an asynchronous Media Access Control protocol, when senders need to send data, they always have to wait for their corresponding receiver to wake up; thus, the delay is greater than in a synchronous network. However, the synchronization cost of the whole network is enormous, and it is difficult to maintain. Therefore, the LS mechanism uses a partial synchronization approach to reduce synchronization costs while effectively reducing delay. In the LS approach, instead of synchronizing the nodes in the entire network, only sender nodes and part of the nodes in their forwarding node sets are synchronized by self-learning methods, and accurate synchronization is not required. Thus, the delay can be effectively reduced at low cost. Secondly, the nodes near the sink maintain the original duty cycle, while the nodes in regions away from the sink use their remaining energy to perform synchronization operations, so as not to damage the network lifetime. Finally, because the synchronization in this paper is based on different synchronization periods among different nodes, it can improve the network performance by reducing conflicts between simultaneous data transmissions. The theoretical analysis results show that, compared with the previous approach FFSC, the LS approach can reduce the end-to-end delay by 5.13–11.64% and increase the energy efficiency by 14.29–17.53% under the same lifetime, with more balanced energy utilization.

Journal ArticleDOI
TL;DR: A comprehensive performance analysis demonstrates that the HSBP approach has obvious advantages in improving network performance compared with previous studies; it reduces transmission delay by 48.10% and improves energy utilization by 38.21% while guaranteeing the same network lifetime.
Abstract: Quickly and efficiently transmitting data to the sink via intelligent routing is an important issue in wireless sensor networks. In previous schemes, the "energy hole" phenomenon has made it difficult to optimize energy and delay simultaneously. Thus, a smart High-Speed Backbone Path (HSBP) construction approach is proposed in this paper. In the HSBP approach, several High-speed Backbone Paths (HBPs) are established at different locations of the network, and the duty cycles of nodes on the HBPs are increased to 1; therefore, the data are forwarded by HBPs without sleeping delay, which greatly reduces transmission latency. Furthermore, the HBPs are built in regions with adequate residual energy, and they are switched periodically; thus, more nodes can be utilized to equalize the energy consumption. A comprehensive performance analysis demonstrates that the HSBP approach has obvious advantages in improving network performance compared with previous studies; it reduces transmission delay by 48.10% and improves energy utilization by 38.21% while guaranteeing the same network lifetime.

Journal ArticleDOI
Jinhuan Zhang1, Peng Hu2, Xie Fang1, Jun Long1, An He1 
TL;DR: A novel ring-based in-network data aggregation scheme that adaptively unicasts a variable number of aggregated packet copies continuously in a window, according to the requested transmission reliability and the imbalance of the nodes' energy cost.
Abstract: Data aggregation can reduce the data transmission between nodes, and thus save energy and extend the life of the network. Many related studies on in-network data aggregation consider generalized maximum functions. For the case in which the original packets of $N$ nodes are aggregated into $M$ ($1 \le M < N$) packets, it is a challenge to improve the energy efficiency and reduce the transmission delay while guaranteeing transmission reliability. In this paper, a novel ring-based in-network data aggregation scheme is proposed for this problem. The network is partitioned into rings, and the data aggregation is executed ring by ring from the outside to the inside. To ensure transmission reliability, the source or intermediate aggregating node unicasts multiple aggregated packet copies to its next-hop node in the inner ring with the maximum residual energy. The reliability is higher with more unicast packet copies; however, sending more packet copies leads to additional energy cost. Besides, nodes close to the sink tend to relay a larger volume of data packets, so their energy is depleted more quickly than that of nodes far from the sink. Meanwhile, the nodes close to the sink need to relay the aggregated packets, which contain more information; if the number of packet copies is too small, packet loss will greatly worsen the transmission reliability. Based on this, the number of unicast packet copies is adaptively adjusted through fuzzy logic. The proposed scheme adaptively unicasts a variable number of aggregated packet copies continuously in a window, according to the requested transmission reliability and the imbalance of the nodes' energy cost. Our analysis and simulation results show the effectiveness of the proposed scheme.
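
A crude closed-form stand-in for the copy-count adjustment: if each copy is delivered independently with probability p, then k copies succeed with probability 1 - (1-p)^k, so the smallest k meeting the requested reliability can be chosen. The paper's fuzzy-logic controller additionally weighs residual-energy imbalance, which this sketch ignores; the numbers below are illustrative.

```python
def min_copies(per_copy_success: float, target_reliability: float, max_copies: int = 10) -> int:
    """Smallest k with 1 - (1 - p)^k >= target; a stand-in for the fuzzy-logic tuner."""
    for k in range(1, max_copies + 1):
        if 1.0 - (1.0 - per_copy_success) ** k >= target_reliability:
            return k
    return max_copies

# Example: lossy inner-ring link (p = 0.7 per copy), 99% reliability requested.
k = min_copies(per_copy_success=0.7, target_reliability=0.99)
print("unicast", k, "aggregated-packet copies")  # 1 - 0.3**4 = 0.9919 -> k = 4
```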

Journal ArticleDOI
TL;DR: It is shown that the proposed GABR scheme is significantly improved compared with protocols of the intersection based routing (IBR) and connectivity aware routing (CAR) in terms of transmission delay and packet loss rate.
Abstract: A genetic algorithm (GA) based QoS perception routing protocol (GABR) is proposed to guarantee the quality of service (QoS) affected by broken links between vehicles and the failure of packet transmission in a vehicular ad hoc network (VANET). With the observation that all improvable paths are probed by the intersection based routing protocol, the GA is utilized to optimize over the globally available paths that satisfy the QoS requirement. Moreover, by means of the numerical results, it is shown that the proposed scheme is significantly improved compared with the intersection based routing (IBR) and connectivity aware routing (CAR) protocols in terms of transmission delay and packet loss rate.

Journal ArticleDOI
31 Aug 2018-Sensors
TL;DR: An energy conserving and transmission radius adaptive (ECTRA) scheme is proposed to reduce the cost and optimize the performance of solar-based EHWSNs, and it shows clear advantages compared with the traditional method.
Abstract: In energy harvesting wireless sensor networks (EHWSNs), the energy tension of the network can be relieved by harvesting energy from the surrounding environment, but the cost of the hardware cannot be ignored. Therefore, how to minimize the cost of energy harvesting hardware so as to reduce the network deployment cost, and further optimize the network performance, is still a challenging issue in EHWSNs. In this paper, an energy conserving and transmission radius adaptive (ECTRA) scheme is proposed to reduce the cost and optimize the performance of solar-based EHWSNs. The ECTRA scheme has two main innovations. Firstly, an energy conserving approach is proposed to conserve energy and avoid outage for the nodes in hotspots, which are the bottleneck of the whole network. The novelty of this scheme is adaptively rotating the transmission radius. In this way, the nodes with maximum energy consumption are rotated, balancing energy consumption between nodes and reducing the maximum energy consumption in the network, and therefore the required battery storage capacity of nodes and the cost of hardware. Secondly, the ECTRA scheme selects a larger transmission radius for rotation when the node can absorb enough energy from the surroundings. The advantages of this method are: (a) reducing the energy consumption of nodes in near-sink areas, thereby reducing the maximum energy consumption and allowing the nodes of the hotspot area to conserve energy and avoid outage, so the network deployment costs can be further reduced; and (b) reducing the network delay, since when a larger transmission radius is used to transmit data, fewer hops are needed for a data packet to reach the sink. The theoretical analyses show the following advantages compared with the traditional method. Firstly, the ECTRA scheme can effectively reduce deployment costs by 29.58% without affecting the network performance, as shown in the experimental analysis; secondly, the ECTRA scheme can effectively reduce the network data transmission delay by 44–71%; thirdly, the ECTRA scheme shows a better balance in energy consumption, and the maximum energy consumption is reduced by 27.89%; and lastly, the energy utilization rate is effectively improved by 30.09–55.48%.

Journal ArticleDOI
TL;DR: An efficient broadcast protocol to disseminate data in mobile IoT networks is proposed that can improve the success ratio of packet delivery by 13%~28% with an end-to-end transmission delay and network overhead similar to those of the most state-of-the-art approaches.
Abstract: The recent trend in implementing Internet of Things (IoT) applications is to transmit sensing data to a powerful data center and try to discover the valuable knowledge behind "Big Data" with various intelligent but resource-consuming algorithms. However, from discussions with industrial companies, it is understood that disseminating real-time sensing data directly to nearby network-edge applications would produce a more economical design and lower service latency for some important smart city applications. Therefore, this paper proposes an efficient broadcast protocol to disseminate data in mobile IoT networks. The proposed protocol exploits the neighbor knowledge of mobile nodes to determine a rebroadcast delay that prioritizes different packet broadcasts according to their profits. An adaptive connectivity factor is also introduced to make the proposed protocol adaptive to the node density of different network parts. By combining the neighbor knowledge of nodes and the adaptive connectivity factor, a reasonable probability is calculated to determine whether a packet should be rebroadcast to other nodes or be discarded to prevent redundant packet broadcasts. Extensive simulation results have validated that this protocol can improve the success ratio of packet delivery by 13%~28% with an end-to-end transmission delay and network overhead similar to those of the most state-of-the-art approaches.

Journal ArticleDOI
TL;DR: A high-energy node priority clustering algorithm, in which a cluster head is selected according to the remaining energy of sensor nodes and the geometric distance among them, is proposed; and, in order to improve the efficiency of data collection, ant colony optimization is used to find the shortest path for the autonomous underwater vehicle.
Abstract: Underwater wireless sensor networks (UWSNs) based on magnetic induction (MI) have recently been proposed as a promising candidate for underwater networking due to benefits such as small transmission delay, low vulnerability to environmental changes, negligible multipath fading, and high bandwidth. Most UWSN applications are location dependent and, thus, localization plays an important role in obtaining sensor positions. In this paper, we first study an MI-based monitoring network in a shallow sea, then focus on how to design an optimal node deployment strategy and a clustering algorithm to prolong the network lifetime of a 3D-UWSN by reducing the network energy consumption. Using the Voronoi diagram, we propose a high-energy node priority clustering algorithm, in which a cluster head is selected according to the remaining energy of the sensor nodes and the geometric distance among them. Moreover, in order to improve the efficiency of data collection, we use ant colony optimization to find the shortest path for the autonomous underwater vehicle. The simulation results show that the proposed approach outperforms other conventional protocols in certain scenarios.

Journal ArticleDOI
TL;DR: This work proposes to integrate Q-learning into the exploration process of an efficient adaptive slot scheduling scheme, which quickly approaches an approximately optimal sequence as frames are executed, owing to the convergence of the scheduling.
Abstract: Reducing the transmission delay and maximizing the sensor lifetime are perennial research topics in the domain of wireless sensor networks (WSNs). Setting aside the influence of the routing protocol on the transmission direction of data packets, the MAC protocol, which controls the timing of transmission and reception, is also an important factor in communication performance. Many existing works attempt to address these problems using time slot scheduling policies. However, most of them exploit global network knowledge to construct a stationary schedule, which conflicts with the dynamic and scalable nature of WSNs. In order to realize distributed computation and self-learning, we propose to integrate Q-learning into the exploration process of an efficient adaptive slot scheduling scheme. Owing to its convergence, the schedule quickly approaches an approximately optimal sequence as frames are executed. The corresponding simulations validate the feasibility and high efficiency of the proposed method.
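
The sketch below shows the core Q-learning update the abstract alludes to, applied to the simplest possible slot-selection setting: a node learns which slot avoids collisions with its neighbors. The frame length, fixed busy slots, reward values, and one-shot (state-less) formulation are illustrative assumptions, not the paper's protocol.

```python
import random

# Minimal tabular Q-learning for transmit-slot selection.
NUM_SLOTS, EPISODES = 8, 5000
ALPHA, GAMMA, EPSILON = 0.1, 0.0, 0.1   # single-step problem => discount 0
q_values = [0.0] * NUM_SLOTS

def neighbor_busy_slots():
    """Stand-in environment: two neighbors occupy fixed slots 2 and 5."""
    return {2, 5}

random.seed(0)
for _ in range(EPISODES):
    # Epsilon-greedy slot choice.
    if random.random() < EPSILON:
        slot = random.randrange(NUM_SLOTS)
    else:
        slot = max(range(NUM_SLOTS), key=lambda s: q_values[s])
    reward = 1.0 if slot not in neighbor_busy_slots() else -1.0  # collision penalty
    # Q-learning update (no successor state in this one-shot formulation).
    q_values[slot] += ALPHA * (reward + GAMMA * max(q_values) - q_values[slot])

print("learned collision-free slot:", max(range(NUM_SLOTS), key=lambda s: q_values[s]))
```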