
Showing papers on "Transmission delay published in 2015"


Journal ArticleDOI
TL;DR: This work proposes a cross-layer distributed algorithm, interference-based topology control for delay-constrained (ITCD) MANETs, that, unlike previous work, considers both the interference constraint and the delay constraint.
Abstract: As the foundation of routing, topology control should minimize the interference among nodes and increase network capacity. With the development of mobile ad hoc networks (MANETs), there is a growing requirement for quality of service (QoS) in terms of delay. To meet the delay requirement, it is important to consider topology control in a delay-constrained environment, which is at odds with the objective of minimizing interference. In this paper, we focus on the delay-constrained topology control problem and take delay and interference into account jointly. Unlike previous work, we propose a cross-layer distributed algorithm, interference-based topology control for delay-constrained (ITCD) MANETs, that considers both the interference constraint and the delay constraint. The transmission delay, contention delay, and queuing delay are all taken into account. Moreover, the impact of node mobility on the interference-based topology control algorithm is investigated, and unstable links are removed from the topology. Simulation results show that ITCD can reduce delay and improve performance effectively in delay-constrained mobile ad hoc networks.

233 citations


Proceedings ArticleDOI
17 Aug 2015
TL;DR: Through a combination of simulations, empirical evaluations using cellular network traces, and real-world evaluations against standard TCP flavors and state of the art protocols like Sprout, it is shown that Verus outperforms these protocols in cellular channels.
Abstract: Legacy congestion controls including TCP and its variants are known to perform poorly over cellular networks due to highly variable capacities over short time scales, self-inflicted packet delays, and packet losses unrelated to congestion. To cope with these challenges, we present Verus, an end-to-end congestion control protocol that uses delay measurements to react quickly to the capacity changes in cellular networks without explicitly attempting to predict the cellular channel dynamics. The key idea of Verus is to continuously learn a delay profile that captures the relationship between end-to-end packet delay and outstanding window size over short epochs and uses this relationship to increment or decrement the window size based on the observed short-term packet delay variations. While the delay-based control is primarily for congestion avoidance, Verus uses standard TCP features including multiplicative decrease upon packet loss and slow start. Through a combination of simulations, empirical evaluations using cellular network traces, and real-world evaluations against standard TCP flavors and state of the art protocols like Sprout, we show that Verus outperforms these protocols in cellular channels. In comparison to TCP Cubic, Verus achieves an order of magnitude (> 10x) reduction in delay over 3G and LTE networks while achieving comparable throughput (sometimes marginally higher). In comparison to Sprout, Verus achieves up to 30% higher throughput in rapidly changing cellular networks.
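The delay-profile idea can be made concrete with a toy window-update rule. This is only an illustration of the general delay-reactive principle, not the authors' Verus algorithm; all parameter names and values here are invented:

```python
def verus_like_update(cwnd, delay, delay_min, loss,
                      incr=1.0, mult_decr=0.5, delay_thresh=2.0):
    """Toy delay-reactive window update: halve on loss (standard TCP
    multiplicative decrease), back off when short-term delay inflates
    well above the observed minimum, otherwise probe for capacity.
    All parameters are illustrative, not taken from the paper."""
    if loss:
        return max(1.0, cwnd * mult_decr)
    if delay > delay_thresh * delay_min:   # delay profile signals queue buildup
        return max(1.0, cwnd - incr)
    return cwnd + incr

# Window grows while delay stays near its floor, shrinks once it inflates:
w = 10.0
w = verus_like_update(w, delay=50, delay_min=40, loss=False)   # -> 11.0
w = verus_like_update(w, delay=120, delay_min=40, loss=False)  # -> 10.0
w = verus_like_update(w, delay=120, delay_min=40, loss=True)   # -> 5.0
```

The real protocol learns a continuous delay profile over epochs rather than using a fixed threshold, but the control direction (delay up, window down) is the same.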

180 citations


Journal ArticleDOI
TL;DR: A co-design method for the H∞ controller and the event-triggered scheme is proposed and the effectiveness and potential of the theoretic results obtained are illustrated by a simulation example.
Abstract: The problem of H∞ control for networked Markovian jump systems under an event-triggered scheme is studied in this paper. In order to reduce the utilization of limited network bandwidth, a dynamic discrete event-triggered scheme to choose the transmitted data is designed. A Markovian jump time-delay system model is employed to describe the event-triggered scheme and network-related behavior, such as transmission delay, data packet dropout, and disorder. Furthermore, a sufficient condition is derived to guarantee that the resulting closed-loop system is stable and has a prescribed performance index. A co-design method for the H∞ controller and the event-triggered scheme is then proposed. The effectiveness and potential of the theoretical results are illustrated by a simulation example. Copyright © 2014 John Wiley & Sons, Ltd.

125 citations


Journal ArticleDOI
TL;DR: It is shown that the NCS is stabilizable iff the network-induced delay and the packet dropout rate satisfy some simple algebraic inequalities, and an existence theorem for the maximum packet dropout rate is proposed.

119 citations


Journal ArticleDOI
TL;DR: In this paper, power packet routers are designed and experimentally verified for realizing a networked power packet distribution system, and the results successfully clarify the feasibility of the power packet distribution network.
Abstract: A power packet dispatching system is expected to be one of the advanced power distribution systems for controlling electric power, providing energy on demand, and reducing wasted energy consumption. In this paper, power packet routers are designed and experimentally verified for realizing a networked power packet distribution system. While the previously developed router directly forwards the power packet to a load, the new router forwards the packet to the other router with an information tag reattached to the power payload. In addition, the new router can adjust the starting time for forwarding the received power packet to the other site, thus utilizing storage capacity integrated into the router. The results successfully clarify the feasibility of the power packet distribution network.

105 citations


Journal ArticleDOI
21 Sep 2015
TL;DR: This work proposes a model to address the multiple node case of OpenFlow by approximating the data plane as an open Jackson network with the controller also modeled as an M/M/1 queue.
Abstract: OpenFlow (OF) is one of the most widely used protocols for controller-to-switch communication in a software-defined network (SDN). Performance analysis of OF-based SDN using analytical models is both highly desirable and challenging. There already exists an elegant analytical model based on M/M/1 queues to estimate the packet sojourn time and probability of lost packets for the case in which a controller is responsible for only a single node in the data plane. However, the literature falls short for the multiple-node case, i.e., when there is more than one node in the data plane. In this work, we propose a model to address this challenge by approximating the data plane as an open Jackson network, with the controller also modeled as an M/M/1 queue. The model is then used to evaluate the system with respect to metrics such as how much time a packet spends on average in an OF-based network and how much data we can pump into the network given average delay requirements. Finally, the PDF and CDF of the time spent by a packet in an OF-based SDN over a given path are derived.
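The Jackson-network approximation gives a very simple recipe for mean path delay: in an open Jackson network each node behaves like an independent M/M/1 queue, so per-node sojourn times just add. A minimal sketch, with illustrative rates:

```python
def jackson_path_delay(arrival_rates, service_rates):
    """Mean sojourn time along a path in an open Jackson network: each
    node behaves like an independent M/M/1 queue, so the per-node mean
    sojourn time is 1/(mu - lambda) and the path delay is the sum.
    arrival_rates are the per-node aggregate rates obtained from the
    traffic equations; every node must be stable (lambda < mu)."""
    total = 0.0
    for lam, mu in zip(arrival_rates, service_rates):
        assert lam < mu, "node must be stable (lambda < mu)"
        total += 1.0 / (mu - lam)
    return total

# Two switches plus the controller on a packet's path (rates in pkts/ms,
# values purely illustrative):
print(jackson_path_delay([8.0, 8.0, 2.0], [10.0, 10.0, 4.0]))  # 1.5 ms total
```

The same decomposition is what makes the per-path PDF/CDF tractable: the path delay is a sum of independent exponential sojourn times.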

93 citations


Journal ArticleDOI
TL;DR: This paper proposes a deployment strategy for multiple types of requirements to solve the problem of deterministic, grid-based deployment; the strategy consists of three deployment algorithms, each targeting a different deployment objective.
Abstract: Node deployment is one of the most crucial issues in wireless sensor networks, and it is of realistic significance to complete the deployment task with multiple types of application requirements. In this paper, we propose a deployment strategy for multiple types of requirements to solve the problem of deterministic and grid-based deployment. This deployment strategy consists of three deployment algorithms, which are for different deployment objectives. First, instead of general random search, we put forward a deterministic search mechanism and the related cost-based deployment algorithm, in which nodes are assigned to different groups which are connected by near-shortest paths, and realize significant reduction of path length and deployment cost. Second, rather than ordinary nondirection deployment, we present a notion of counterflow and the related delay-based deployment algorithm, in which the profit of deployment cost and loss of transmission delay are evaluated, and achieve much diminishing of transmission path length and transmission delay. Third, instead of conventional uneven deployment based on the distances to the sink, we propose a concept of node load level and the related lifetime-based deployment algorithm, in which node distribution is determined by the actual load levels and extra nodes are deployed only where really necessary. This contributes to great improvement of network lifetime. Last, extensive simulations are used to test and verify the effectiveness and superiority of our findings.

87 citations


Patent
16 Dec 2015
TL;DR: In this paper, a method for load balancing in a software-defined networking (SDN) system includes, upon receiving a packet, determining whether a matching entry for the packet in a server distribution table contains both a current and a new server selection.
Abstract: In one embodiment, a method for load balancing in a software-defined networking (SDN) system includes, upon receiving a packet, determining whether a matching entry for the packet in a server distribution table contains both a current and a new server selection. If the matching entry contains both, it is determined whether there is a matching entry for the packet in a transient flow table, where the transient flow table maintains server selections while at least one of the plurality of servers is reconfigured. Upon determining that there is no matching entry for the packet in the transient flow table, the method determines whether the packet is the first packet of a traffic flow. If the packet is the first packet of a traffic flow, the packet is forwarded according to the new server selection of the matching entry in the server distribution table, and the transient flow table is updated.
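The two-table lookup can be sketched as follows. The table layouts, names, and the handling of mid-flow packets are my own illustration of the idea, not the patent's claims:

```python
def select_server(key, dist_table, transient_table, is_first_packet):
    """Sketch of a two-table server selection: dist_table maps a flow key
    to a dict that may hold 'current' and 'new' server selections;
    transient_table pins in-flight flows to a server while servers are
    being reconfigured."""
    entry = dist_table[key]
    if "new" not in entry:              # no reconfiguration in progress
        return entry["current"]
    if key in transient_table:          # flow already pinned during transition
        return transient_table[key]
    # First packet of a new flow goes to the new server and is remembered;
    # sending mid-flow packets to the current server is my assumption.
    chosen = entry["new"] if is_first_packet else entry["current"]
    transient_table[key] = chosen
    return chosen

dist = {"f1": {"current": "serverA", "new": "serverB"}}
transient = {}
print(select_server("f1", dist, transient, is_first_packet=True))   # serverB
print(select_server("f1", dist, transient, is_first_packet=False))  # serverB (pinned)
```

The transient table is what keeps established flows from being split across two servers mid-reconfiguration.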

76 citations


Journal ArticleDOI
TL;DR: The fundamental tradeoffs between total energy consumption and overall delay in a BS with sleep mode operations by queueing models are characterized and cast light on designing BS sleeping and wake-up control policies that aim to save energy while maintaining acceptable quality of service.
Abstract: Base station (BS) sleeping is one of the effective ways to save energy in cellular networks, but it may lead to longer delays for customers. The fundamental question then arises: how much energy can be traded off against a tolerable delay? In this paper, we characterize the fundamental tradeoffs between total energy consumption and overall delay in a BS with sleep-mode operations using queueing models. Here, the BS total energy consumption includes not only the transmit power but also the basic power (for baseband processing, the power amplifier, etc.) and the switch-over power of the BS working mode, and the overall delay includes not only transmission delay but also queueing delay. Specifically, the BS is modeled as an M/G/1 vacation queue with setup and close-down times, where the BS enters sleep mode if no customers arrive during the close-down (hysteretic) time after the queue becomes empty. When asleep, the BS stays in sleep mode until the queue builds up to $N$ customers during the sleep period ($N$-policy). Several closed-form formulas are derived to demonstrate the tradeoffs between energy consumption and mean delay for different wake-up policies by varying the close-down time, setup time, and the parameter $N$. It is shown that the relationship between energy consumption and mean delay is linear in the mean close-down time but nonlinear in $N$. The explicit relationship between total power consumption and average delay for varying service rate is also analyzed theoretically, indicating that sacrificing delay cannot always be traded for energy savings. In other words, a larger $N$ may lead to lower energy consumption, but there exists an optimal $N^{\ast}$ that minimizes the mean delay and energy consumption at the same time. We also investigate the maximum delay (delay bound) for a given percentage of service and find that the delay bound is nearly linear in the mean delay in the cases tested.
Therefore, similar tradeoffs exist between energy consumption and the delay bound. In summary, the closed-form energy–delay tradeoffs cast light on designing BS sleeping and wake-up control policies that aim to save energy while maintaining acceptable quality of service.
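The delay side of the $N$-policy tradeoff can be illustrated with a simpler textbook model than the paper's M/G/1 vacation queue: for an M/M/1 queue under the classical $N$-policy, the well-known decomposition result adds $(N-1)/(2\lambda)$ of extra wait on top of the plain M/M/1 sojourn time. A sketch under that simplification:

```python
def n_policy_mean_delay(lam, mu, n):
    """Mean sojourn time of an M/M/1 queue under the classical N-policy
    (server sleeps until N customers accumulate): the textbook
    decomposition adds (N - 1) / (2 * lam) of extra wait on top of the
    ordinary M/M/1 sojourn time 1 / (mu - lam). This is a simpler model
    than the paper's M/G/1 vacation queue with setup and close-down
    times, used only to make the delay cost of larger N concrete."""
    assert lam < mu, "queue must be stable"
    return 1.0 / (mu - lam) + (n - 1) / (2.0 * lam)

# Larger N lets the BS sleep longer (saving switch-over energy) but
# inflates the mean delay:
print(n_policy_mean_delay(0.5, 1.0, 1))  # 2.0  (plain M/M/1)
print(n_policy_mean_delay(0.5, 1.0, 5))  # 6.0  (2.0 + 4/(2*0.5))
```

This already shows why the energy-delay relationship is nonlinear in $N$: the extra delay grows with $N$ while the per-cycle energy saving saturates.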

73 citations


Journal ArticleDOI
TL;DR: A new packet reordering method is presented to deal with packet disordering and to choose the newest control input; a sufficient condition for the NCS to be exponentially stable is derived using the average dwell-time method.

73 citations


Journal ArticleDOI
TL;DR: This paper derives the exact closed-form expression for the distribution function of the energy harvested over a given number of channel coherence times on Rayleigh fading channels, taking hardware limitations such as energy-harvesting sensitivity and harvesting efficiency into account.
Abstract: RF energy harvesting is a promising solution for providing convenient and perpetual energy to low-power wireless sensor networks. In this paper, we investigate the performance of overlaid wireless sensor transmission powered by RF energy harvested from an existing wireless system. Specifically, we derive the exact closed-form expression for the distribution function of the energy harvested over a given number of channel coherence times on Rayleigh fading channels, taking into account hardware limitations such as energy-harvesting sensitivity and harvesting efficiency. We also obtain the exact distribution of the number of coherence times needed to fully charge the sensor. Based on these analytical results, we analyze the average packet delay and packet loss probability of sensor transmission subject to interference from the existing system, for delay-insensitive and delay-sensitive traffic, respectively. Finally, we investigate the optimal design of the energy storage capacity of the sensor nodes to minimize the average packet transmission delay for delay-insensitive traffic under two candidate transmission strategies.

Journal ArticleDOI
TL;DR: The architecture of the torus-topology OPS and agile OCS intra-DC network is presented, together with a new flow-management concept in which an instantaneous on-demand optical path, the so-called Express Path, is established; the power consumption and throughput of a conventional fat-tree topology are compared with the N-dimensional torus topology.
Abstract: We review our work on an intra-data center (DC) network based on co-deployment of optical packet switching (OPS) and optical circuit switching (OCS), conducted within the framework of a five-year national R&D program in Japan (through March 2016). First, preceding work relevant to optical switching technologies in intra-DC networks is briefly reviewed. Next, we present the architecture of our torus-topology OPS and agile OCS intra-DC network, together with a new flow-management concept in which an instantaneous on-demand optical path, the so-called Express Path, is established. Then our hybrid optoelectronic packet router (HOPR), which handles 100 Gbps (25 Gbps × 4 wavelengths) optical packets, and its enabling device and sub-system technologies are presented. HOPR targets a high energy efficiency of 0.09 W/Gbps and latency in the 100 ns regime. Next, we describe the contention-resolution strategies in the OPS and agile OCS network and present a performance analysis with simulation results, followed by a discussion of the power consumption of intra-DC networks. We compare the power consumption and throughput of a conventional fat-tree topology with the N-dimensional torus topology. Finally, for further power saving, we propose a new scheme that shuts off HOPR buffers according to server operation status.

Journal ArticleDOI
TL;DR: A homomorphic linear authenticator (HLA) based public auditing architecture is developed that allows the detector to verify the truthfulness of the packet loss information reported by nodes, and a packet-block-based mechanism is proposed, which allows one to trade detection accuracy for lower computation complexity.
Abstract: Link error and malicious packet dropping are two sources for packet losses in multi-hop wireless ad hoc network. In this paper, while observing a sequence of packet losses in the network, we are interested in determining whether the losses are caused by link errors only, or by the combined effect of link errors and malicious drop. We are especially interested in the insider-attack case, whereby malicious nodes that are part of the route exploit their knowledge of the communication context to selectively drop a small amount of packets critical to the network performance. Because the packet dropping rate in this case is comparable to the channel error rate, conventional algorithms that are based on detecting the packet loss rate cannot achieve satisfactory detection accuracy. To improve the detection accuracy, we propose to exploit the correlations between lost packets. Furthermore, to ensure truthful calculation of these correlations, we develop a homomorphic linear authenticator (HLA) based public auditing architecture that allows the detector to verify the truthfulness of the packet loss information reported by nodes. This construction is privacy preserving, collusion proof, and incurs low communication and storage overheads. To reduce the computation overhead of the baseline scheme, a packet-block-based mechanism is also proposed, which allows one to trade detection accuracy for lower computation complexity. Through extensive simulations, we verify that the proposed mechanisms achieve significantly better detection accuracy than conventional methods such as a maximum-likelihood based detection.

Journal ArticleDOI
TL;DR: Based on the Lyapunov stability theory, a feedback controller is designed for achieving synchronization between two coupled networks with time-varying delays in finite time.

Journal ArticleDOI
05 Jan 2015-Energies
TL;DR: This paper provides a mathematical analysis of a fundamental problem in computer science related to the stability of the “join” synchronisation primitive and provides the explicit expression for the joint probability distribution of the number of energy and data packets in the system.
Abstract: We consider a wireless sensor node that gathers energy through harvesting and reaps data through sensing. The node has a wireless transmitter that sends out a data packet whenever there is at least one “energy packet” and one “data packet”, where an energy packet represents the amount of accumulated energy at the node that can allow the transmission of a data packet. We show that such a system is unstable when both the energy storage space and the data backlog buffer approach infinity, and we obtain the stable stationary solution when both buffers are finite. We then show that if a single energy packet is not sufficient to transmit a data packet, there are conditions under which the system is stable, and we provide the explicit expression for the joint probability distribution of the number of energy and data packets in the system. Since the two flows of energy and data can be viewed as flows that are instantaneously synchronised, this paper also provides a mathematical analysis of a fundamental problem in computer science related to the stability of the “join” synchronisation primitive.
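The "join" dynamics are easy to reproduce in a toy discrete-time simulation; this is an illustrative sketch, not the paper's analytical model, and all parameter values are invented:

```python
import random

def join_queue_sim(p_energy, p_data, capacity, steps, seed=1):
    """Toy discrete-time simulation of the 'join' primitive: each slot an
    energy packet arrives with probability p_energy and a data packet
    with probability p_data; a transmission fires whenever both queues
    are non-empty, consuming one of each. Buffers are capped at
    `capacity`, matching the paper's observation that the
    infinite-buffer system is unstable."""
    rng = random.Random(seed)
    energy = data = sent = 0
    for _ in range(steps):
        energy = min(capacity, energy + (rng.random() < p_energy))
        data = min(capacity, data + (rng.random() < p_data))
        if energy > 0 and data > 0:
            energy, data, sent = energy - 1, data - 1, sent + 1
    return sent, energy, data

# With equal arrival rates the two backlogs wander like a random walk but
# stay within the finite caps:
sent, e_left, d_left = join_queue_sim(0.5, 0.5, capacity=10, steps=10000)
```

With infinite buffers, the difference between the two queue lengths is a null-recurrent random walk, which is the instability the paper formalizes; the finite caps restore a proper stationary distribution.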

Proceedings ArticleDOI
01 Dec 2015
TL;DR: It is shown that the NCS is stabilizable iff the generalized Lyapunov equation has a positive solution, which is in accordance with the classical result for a delay-free system.
Abstract: This paper investigates the mean-square stabilization problem for discrete-time networked control systems (NCSs). Different from most previous studies, we assume transmission delay and data packet dropout may occur simultaneously. The stabilization for such NCSs remains challenging because of the fundamental difficulty in stochastic control. The contributions of this paper are threefold. First, we present two different necessary and sufficient stabilizing conditions in terms of the unique positive solution to delay-dependent algebraic Riccati equation (DARE) or delay-dependent Lyapunov equation (DLE). Second, the maximum packet dropout rate can be calculated with a proposed optimization algorithm. Third, the stabilizing solution to developed DARE is investigated for its existence and uniqueness. We show the existence condition in terms of the Lyapunov operator and the unobservable mean-square eigenvalue, under which the general DARE has a unique stabilizing solution.

Journal ArticleDOI
Minming Ni, Lei Zheng, Fei Tong, Jianping Pan, Lin Cai
TL;DR: The concept of guard distance is introduced to build a proper system model for enabling multiple concurrent D2D pairs in the same cell, and to derive bounds on the maximum throughput improvement provided by D2D communications in a cell.
Abstract: Device-to-device (D2D) communications in cellular networks are promising technologies for improving network throughput, spectrum efficiency, and transmission delay. In this paper, we first introduce the concept of guard distance to explore a proper system model for enabling multiple concurrent D2D pairs in the same cell. Considering the Signal to Interference Ratio (SIR) requirements for both macro-cell and D2D communications, a geometrical method is proposed to obtain the guard distances from a D2D user equipment (DUE) to the base station (BS), to the transmitting cellular user equipment (CUE), and to other communicating D2D pairs, respectively, when the uplink resource is reused. By utilizing the guard distances, we then derive the bounds of the maximum throughput improvement provided by D2D communications in a cell. Extensive simulations are conducted to demonstrate the impact of different parameters on the optimal maximum throughput. We believe that the obtained results can provide useful guidelines for the deployment of future cellular networks with underlaying D2D communications.
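A back-of-the-envelope version of the guard-distance idea: under a pure path-loss model with equal transmit powers (my simplification, not the paper's geometric construction), an SIR threshold translates directly into a minimum interferer distance.

```python
def guard_distance(d_link, sir_threshold_db, alpha=4.0):
    """Guard distance under a pure path-loss model with equal transmit
    powers (an illustrative simplification): an interferer at distance
    d_i from the receiver of a link of length d_link gives
    SIR = (d_i / d_link)**alpha, so meeting a threshold gamma requires
    d_i >= d_link * gamma**(1 / alpha)."""
    gamma = 10.0 ** (sir_threshold_db / 10.0)   # dB -> linear
    return d_link * gamma ** (1.0 / alpha)

# A 50 m D2D link needing 10 dB SIR with path-loss exponent 4:
print(round(guard_distance(50.0, 10.0), 1))  # 88.9 m
```

The paper's construction additionally accounts for unequal powers and the three distinct guard distances (to the BS, to the CUE, and to other D2D pairs), but the scaling with the SIR threshold and path-loss exponent is the same.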

Patent
22 Dec 2015
TL;DR: In this article, a method for forwarding packets in a network device is disclosed, which consists of receiving a packet and mapping the packet to a bucket, where the bucket is associated with a packet processing thread from a plurality of packet processing threads, and determining whether the packet processing thread is oversubscribed.
Abstract: A method for forwarding packets in a network device is disclosed. The method comprises receiving a packet; mapping the packet to a bucket, where the bucket is associated with a packet processing thread from a plurality of packet processing threads; and determining whether the packet processing thread is oversubscribed. The method continues with, in response to determining that the packet processing thread is not oversubscribed, mapping the packet to the packet processing thread; and in response to determining that the packet processing thread is oversubscribed, the method comprises distributing the packet to one of the plurality of packet processing threads based on a predefined load balancing scheme, processing the packet in the one of the plurality of packet processing threads, and forwarding the packet according to a predetermined order, where the predetermined order is based on a position of the packet relative to other packets at their receipt.
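The bucket/oversubscription logic might look like the following sketch. All names are mine, and the least-loaded spill rule is just a stand-in for the patent's unspecified "predefined load balancing scheme":

```python
def dispatch(flow_id, threads, load, capacity):
    """Sketch of bucket-based packet dispatch: a flow hashes to a home
    bucket/thread; if that thread is oversubscribed (load >= capacity),
    the packet spills to the least-loaded thread instead. Reordering
    protection for spilled packets is omitted here."""
    home = hash(flow_id) % len(threads)          # flow's home bucket
    if load[home] < capacity:                    # home thread has headroom
        return threads[home]
    least = min(range(len(threads)), key=lambda i: load[i])
    return threads[least]                        # spill to least-loaded

threads = ["t0", "t1", "t2"]
print(dispatch("flow-42", threads, load=[4, 4, 0], capacity=3))  # t2
```

The patent's final step, forwarding spilled packets "according to a predetermined order," is the reordering protection that a real implementation would add on top of this.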

Patent
01 Sep 2015
TL;DR: In this article, a method for automatically detecting a packet mode in a wireless communication system supporting a multiple transmission mode includes: acquiring at least one of data rate information, packet length information and channel bandwidth information from a transmitted frame; and determining the packet mode on the basis of the phase rotation check result of a symbol transmitted after a signal field signal.
Abstract: A method for automatically detecting a packet mode in a wireless communication system supporting a multiple transmission mode includes: acquiring at least one of data rate information, packet length information and channel bandwidth information from a transmitted frame; and determining the packet mode on the basis of the phase rotation check result of a symbol transmitted after a signal field signal and at least one of the data rate information, the packet length information and the channel bandwidth information acquired from the transmitted frame.

Proceedings ArticleDOI
07 Jul 2015
TL;DR: A new MPTCP packet scheduler is proposed that freezes the slow path temporarily when the delay difference between the slow and fast paths is significant, so that the small amount of data can be transmitted quickly via the fast path.
Abstract: Multipath TCP (MPTCP) is an emerging transport protocol, as it can greatly improve application throughput by utilizing multiple network interfaces at the same time, e.g., both WiFi and 3G/LTE. While MPTCP is generally beneficial for long-lived flows, it performs worse than single-path TCP (SPTCP) over the best path when the flow size is small, e.g., only hundreds of KB. In this case, it is better to use only the fastest path, since delay matters much more than network bandwidth for such small data delivery. The problem is that the existing default MPTCP packet scheduler may choose a slow path if the congestion window of the fast path is not available, resulting in a long flow completion time. To avoid this problem, we propose a new MPTCP packet scheduler that temporarily freezes the slow path when the delay difference between the slow and fast paths is significant, so that small amounts of data can be transmitted quickly via the fast path. We implement the proposed scheduler in the MPTCP Linux kernel, evaluate it on our testbed, and compare it with the default packet scheduler. The experiments confirm that the proposed scheme significantly reduces the flow completion time for short flows.
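The scheduling decision can be sketched as follows; `freeze_ratio` is my illustrative knob for "the delay difference is significant," not a parameter from the paper:

```python
def pick_subflow(subflows, freeze_ratio=2.0):
    """Sketch of the freezing idea: the default scheduler takes the
    lowest-RTT subflow that has congestion-window space; the fix instead
    returns None (i.e., wait for the fast path to open) when the only
    available subflows are much slower than the overall-fastest one."""
    fastest_rtt = min(s["rtt"] for s in subflows)
    available = [s for s in subflows if s["cwnd_free"] > 0]
    if not available:
        return None                               # nothing sendable yet
    best = min(available, key=lambda s: s["rtt"])
    if best["rtt"] > freeze_ratio * fastest_rtt:  # slow path is frozen
        return None
    return best

wifi = {"name": "wifi", "rtt": 30, "cwnd_free": 0}  # fast path, window full
lte = {"name": "lte", "rtt": 90, "cwnd_free": 5}    # slow path, window open
print(pick_subflow([wifi, lte]))  # None: freeze LTE, wait for WiFi
```

The default scheduler would have sent on LTE here; for a short flow, waiting one WiFi RTT usually finishes sooner than committing hundreds of KB to the 90 ms path.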

Journal ArticleDOI
TL;DR: The use of residual energy and transmission delay as a routing metric in the next-hop selection process for the RPL protocol is presented, and an objective function for this metric based on ant colony optimization (ACO) is designed.
Abstract: Energy conservation, while ensuring an adequate level of service, is a major concern in Low-power and Lossy Networks (LLNs), because nodes are typically deployed once and not replaced in case of failure. Several efforts have recently led to the standardization of a routing protocol for LLNs; the standard provides several criteria that can be used as a routing metric. The RoLL working group of the IETF developed RPL, a recently standardized routing protocol for 6LoWPAN sensor networks (IPv6 over IEEE 802.15.4) (Ko et al., 2011). Use of this protocol could become common and standard in IPv6 sensor networks in the future. Most implementations of the protocol use the expected transmission count (ETX) as the metric and focus on link reliability. In this paper we present the use of residual energy and transmission delay as the routing metric in the next-hop selection process for the RPL protocol. We design an objective function for this metric based on ant colony optimization (ACO), and then compare experimental results against ETX-based RPL.
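A toy version of a combined energy/delay next-hop ranking, for intuition only: the fixed weights and max-normalization here are my illustrative choices, whereas the paper builds an ACO-based objective function with pheromone updates instead.

```python
def next_hop(neighbors, w_energy=0.5, w_delay=0.5):
    """Toy next-hop ranking that rewards residual energy and penalizes
    transmission delay, both normalized by the neighborhood maximum.
    Higher score = more attractive parent."""
    e_max = max(n["energy"] for n in neighbors)
    d_max = max(n["delay"] for n in neighbors)

    def score(n):
        return w_energy * n["energy"] / e_max - w_delay * n["delay"] / d_max

    return max(neighbors, key=score)

a = {"id": "A", "energy": 90, "delay": 20}  # more energy, slower link
b = {"id": "B", "energy": 60, "delay": 5}   # less energy, faster link
print(next_hop([a, b])["id"])  # B: its delay advantage outweighs A's energy
```

In the ACO formulation, the weights are effectively learned online through pheromone reinforcement rather than fixed a priori as they are here.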

Journal ArticleDOI
TL;DR: This paper identifies important factors from the data trace and shows that the important factors are not necessarily the same with those in the Internet, and proposes a delay model to capture those factors.
Abstract: We present a comprehensive delay performance measurement and analysis in a large-scale wireless sensor network. We build a lightweight delay measurement system and present a robust method to calculate per-packet delay. We show that the method can identify incorrect delays and recover them with bounded error. Through analysis of delay and other system metrics, we seek to answer the following fundamental questions: What are the spatial and temporal characteristics of delay performance in a real network? What are the most important impacting factors, and is there any practical model to capture them? What are the implications for protocol designs? In this paper, we identify important factors from the data trace and show that they are not necessarily the same as those in the Internet. Furthermore, we propose a delay model to capture those factors. We revisit several prevalent protocol designs such as the Collection Tree Protocol, opportunistic routing, and Dynamic Switching-based Forwarding, and show that our model and analysis are useful for practical protocol designs.

Journal ArticleDOI
TL;DR: A Residual Energy based Reliable Multicast Routing Protocol (RERMR) is proposed to attain longer network lifetime and increased packet delivery and forwarding rates; a reliable-path criterion is estimated to choose the best reliable path among all available paths.

Journal ArticleDOI
TL;DR: This work proposes a holistic model for intradomain networks to characterize the network performance of routing contents to clients and the network cost incurred by globally coordinating the in-network storage capability, and derives the optimal strategy for provisioning the storage capability that optimizes the overall network performance and cost.
Abstract: In content-centric networks, it is challenging how to optimally provision in-network storage to cache contents, to balance the tradeoffs between the network performance and the provisioning cost. To address this problem, we first propose a holistic model for intradomain networks to characterize the network performance of routing contents to clients and the network cost incurred by globally coordinating the in-network storage capability. We then derive the optimal strategy for provisioning the storage capability that optimizes the overall network performance and cost, and analyze the performance gains via numerical evaluations on real network topologies. Our results reveal interesting phenomena; for instance, different ranges of the Zipf exponent can lead to opposite optimal strategies, and the tradeoffs between the network performance and the provisioning cost have great impacts on the stability of the optimal strategy. We also demonstrate that the optimal strategy can achieve significant gain on both the load reduction at origin servers and the improvement on the routing performance. Moreover, given an optimal coordination level $\ell^\ast$ , we design a routing-aware content placement (RACP) algorithm that runs on a centralized server. The algorithm computes and assigns contents to each CCN router to store, which can minimize the overall routing cost, e.g., transmission delay or hop counts, to deliver contents to clients. By conducting extensive simulations using a large-scale trace dataset collected from a commercial 3G network in China, our results demonstrate that our caching scheme can achieve 4% to 22% latency reduction on average over the state-of-the-art caching mechanisms.
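The role of the Zipf exponent can be illustrated with the standard back-of-the-envelope hit-ratio calculation (this is textbook Zipf arithmetic, not the paper's provisioning model): caching the most popular items captures the head of the popularity curve.

```python
def zipf_hit_ratio(n_contents, cache_size, s):
    """Hit ratio when the cache_size most popular of n_contents items are
    cached and requests follow a Zipf law with exponent s: the top-k
    request mass is H_k(s) / H_N(s), where H_n(s) = sum_{i=1..n} i**(-s)."""
    def h(n):
        return sum(i ** -s for i in range(1, n + 1))
    return h(cache_size) / h(n_contents)

# Caching 10% of a 1000-item catalog; a steeper Zipf exponent concentrates
# requests on the head of the popularity curve and raises the hit ratio:
print(zipf_hit_ratio(1000, 100, 0.8) < zipf_hit_ratio(1000, 100, 1.2))  # True
```

This sensitivity to the exponent is exactly why the paper finds that different ranges of the Zipf exponent can flip the optimal provisioning strategy.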

Posted Content
TL;DR: LMIMO-MBM provides a promising alternative to MIMO and Massive MIMO for the realization of 5G wireless networks; it relaxes the need for complex FEC structures and thereby minimizes transmission delay.
Abstract: The idea of Media-based Modulation (MBM), is based on embedding information in the variations of the transmission media (channel state). This is in contrast to legacy wireless systems where data is embedded in a Radio Frequency (RF) source prior to the transmit antenna. MBM offers several advantages vs. legacy systems, including "additivity of information over multiple receive antennas", and "inherent diversity over a static fading channel". MBM is particularly suitable for transmitting high data rates using a single transmit and multiple receive antennas (Single Input-Multiple Output Media-Based Modulation, or SIMO-MBM). However, complexity issues limit the amount of data that can be embedded in the channel state using a single transmit unit. To address this shortcoming, the current article introduces the idea of Layered Multiple Input-Multiple Output Media-Based Modulation (LMIMO-MBM). Relying on a layered structure, LMIMO-MBM can significantly reduce both hardware and algorithmic complexities, as well as the training overhead, vs. SIMO-MBM. Simulation results show excellent performance in terms of Symbol Error Rate (SER) vs. Signal-to-Noise Ratio (SNR). For example, a $4\times 16$ LMIMO-MBM is capable of transmitting $32$ bits of information per (complex) channel-use, with SER $ \simeq 10^{-5}$ at $E_b/N_0\simeq -3.5$dB (or SER $ \simeq 10^{-4}$ at $E_b/N_0=-4.5$dB). This performance is achieved using a single transmission and without adding any redundancy for Forward-Error-Correction (FEC). This means, in addition to its excellent SER vs. energy/rate performance, MBM relaxes the need for complex FEC structures, and thereby minimizes the transmission delay. Overall, LMIMO-MBM provides a promising alternative to MIMO and Massive MIMO for the realization of 5G wireless networks.

Journal ArticleDOI
TL;DR: Analytical and simulation results show that a network based on the proposed MAC protocol has greater throughput than traditional methods and, as expected, lower transmission delay, further enhancing its superiority.
Abstract: In a machine-to-machine network, throughput performance plays a very important role. Recently, energy harvesting technology has shown great potential for improving network throughput, as it can provide consistent energy for wireless devices to transmit data. Motivated by this, an efficient energy harvesting-based medium access control (MAC) protocol is designed in this paper. In this protocol, devices first harvest energy adaptively and then contend for transmission opportunities with priorities related to their energy levels. A new model is then proposed to obtain the optimal throughput of the network, together with a corresponding hybrid differential evolution algorithm, where the decision variables are the energy-harvesting time, contending time, and contending probability. Analytical and simulation results show that a network based on the proposed MAC protocol has greater throughput than traditional methods. In addition, as expected, our scheme has lower transmission delay, further enhancing its superiority.
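The optimization step can be sketched with a plain differential-evolution loop (DE/rand/1/bin, not the paper's hybrid variant) over the three decision variables the abstract names. The throughput model below is an invented toy stand-in, used only to exercise the optimizer: it rewards harvesting enough energy, penalizes time spent not transmitting, and uses a slotted-contention success probability.

```python
import numpy as np

rng = np.random.default_rng(1)

# Decision vector x = (harvest time t_h, contend time t_c, contend prob p).
# All constants below are assumptions for illustration.
n_dev, frame = 10, 1.0  # number of devices, frame length

def throughput(x):
    th, tc, p = x
    if th + tc >= frame:
        return 0.0
    energy = min(1.0, th / 0.2)                    # harvested energy saturates
    success = n_dev * p * (1 - p) ** (n_dev - 1)   # exactly-one-winner probability
    return (frame - th - tc) * energy * success    # data time x success x energy

lo = np.array([0.0, 0.01, 0.01])
hi = np.array([0.9, 0.5, 0.99])
pop = lo + rng.random((20, 3)) * (hi - lo)         # initial population

for _ in range(200):                               # generations
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.clip(a + 0.8 * (b - c), lo, hi)  # mutation, F = 0.8
        mask = rng.random(3) < 0.9                  # binomial crossover, CR = 0.9
        cand = np.where(mask, trial, pop[i])
        if throughput(cand) > throughput(pop[i]):   # greedy selection
            pop[i] = cand

best = max(pop, key=throughput)
print("best (t_h, t_c, p):", np.round(best, 3))
```

For ten contending devices the success term peaks near p = 1/10, so the optimizer should settle on a small contending probability, which matches the intuition that heavier contention wastes the airtime the harvesting phase paid for.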

Journal ArticleDOI
TL;DR: A new model for analysing the connectivity probability, average hop count and one-hop delay of multi-hop safety-related message broadcasting in V2V communications is built, taking into account the following factors: propagation distance, one-hop transmission range, distribution of vehicles, vehicle density, average length of vehicles and minimum safe distance between vehicles.
Abstract: In vehicle-to-vehicle (V2V) communications, low delay and long propagation distance are very important for multi-hop safety-related message broadcasting. Most earlier studies focused on one-hop broadcasting, while little attention has been paid to multi-hop delay and propagation distance. In this study, a new model for analysing the connectivity probability, average hop count and one-hop delay of multi-hop safety-related message broadcasting in V2V communications is built, taking into account the following factors: propagation distance, one-hop transmission range, distribution of vehicles, vehicle density, average length of vehicles and minimum safe distance between vehicles. Simulation results demonstrate that the proposed model provides better performance in terms of multi-hop delay and that there exists an optimal one-hop transmission range that minimises the multi-hop delay. A new scheme is then proposed to track the optimal one-hop transmission range using a Genetic Algorithm. With this scheme, vehicles are allowed to adjust the one-hop transmission range based on vehicle density to reduce the multi-hop delay. The proposed scheme is validated by simulations using realistic vehicular traces.
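The trade-off behind the optimal one-hop range, and a GA searching for it, can be sketched as follows. The delay model is our own invention for illustration: a longer range means fewer hops, but each hop costs more because more vehicles contend within range; all constants are assumed.

```python
import math
import random

random.seed(2)

# Toy multi-hop delay model: hop count falls as ~L/r while per-hop
# contention delay grows with the number of vehicles in range (density * r).
L, density, t0, beta = 2000.0, 0.05, 1.0, 0.02  # assumed constants

def delay(r):
    hops = math.ceil(L / r)
    return hops * (t0 + beta * (density * r) ** 2)  # hops x per-hop delay

# Simple real-valued GA over the one-hop transmission range r (metres).
pop = [random.uniform(50, 1000) for _ in range(30)]
for _ in range(100):
    pop.sort(key=delay)
    parents = pop[:10]                            # truncation selection
    children = []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        child = 0.5 * (a + b)                     # blend crossover
        child += random.gauss(0, 20)              # Gaussian mutation
        children.append(min(1000.0, max(50.0, child)))
    pop = parents + children

best = min(pop, key=delay)
print("range:", round(best, 1), "delay:", round(delay(best), 2))
```

Under this model the continuous relaxation has an interior minimum (hop term falls as 1/r, contention term grows linearly in r), which is the "optimal one-hop transmission range" phenomenon the abstract describes; adjusting r with density amounts to re-running the search as `density` changes.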

Proceedings ArticleDOI
08 Jun 2015
TL;DR: Numerical results indicate that the proposed scheme outperforms other resource allocation schemes and improves system rewards as well as the obtained experience by the vehicular users.
Abstract: In the era of the Internet-of-Vehicles (IoV), all components in an Intelligent Transportation System (ITS) can be connected to improve traffic safety and transportation efficiency. In order to maximize the utilization of resources, e.g., computation, communication and storage resources, the cloud computing technique can be integrated into vehicular networks, and a cloud-assisted vehicular network can be formed for effective resource management. In this paper, the resource allocation problem in the cloud-assisted vehicular network architecture is investigated. Each cloud in the architecture has its own specific features: the remote cloud has sufficient resources but suffers high end-to-end delay, while the local cloud and vehicular cloud have limited resources but attain lower transmission delay. The problem of maximizing the system's expected average reward is formulated as a Semi-Markov Decision Process (SMDP). Consequently, the corresponding optimal scheme is obtained by solving the SMDP via an iteration algorithm. The proposed scheme provides guidelines on which cloud a request should be admitted to and how many resources need to be allocated. Numerical results indicate that the proposed scheme outperforms other resource allocation schemes and improves system rewards as well as the experience obtained by the vehicular users.
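The flavour of the iteration algorithm can be shown with a much-simplified model: a discounted MDP rather than the paper's average-reward SMDP, with one decision per arriving request. Serving locally earns a higher reward (low delay) but occupies one of C local units; forwarding to the remote cloud earns less (WAN delay) but uses no local capacity. All rewards and rates are assumptions.

```python
import numpy as np
from math import comb

# State s = number of busy local-cloud units (0..C).  Each epoch one request
# arrives; action "local" (if s < C) or "remote".  Between epochs every busy
# unit completes with probability mu.
C, mu, gamma = 3, 0.5, 0.9
r = {"local": 5.0, "remote": 2.0}  # assumed per-request rewards

def next_dist(s, action):
    """Distribution over busy-unit counts after serving + binomial departures."""
    s2 = s + 1 if action == "local" else s
    return {s2 - d: comb(s2, d) * mu**d * (1 - mu) ** (s2 - d)
            for d in range(s2 + 1)}

def q(s, action, V):
    return r[action] + gamma * sum(p * V[n] for n, p in next_dist(s, action).items())

V = np.zeros(C + 1)
for _ in range(300):  # value iteration to a (near) fixed point
    V = np.array([max(q(s, a, V)
                      for a in (("local", "remote") if s < C else ("remote",)))
                  for s in range(C + 1)])

# Greedy policy: which cloud should admit the request in each state.
policy = ["local" if s < C and q(s, "local", V) >= q(s, "remote", V) else "remote"
          for s in range(C + 1)]
print(policy, np.round(V, 2))
```

The resulting policy plays the role of the abstract's "guidelines": it says, per occupancy level, whether an arriving request should go to the local or the remote cloud. The paper's actual formulation additionally handles variable sojourn times and resource quantities, which this sketch omits.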

Patent
05 Mar 2015
TL;DR: In this article, a delay circuit provides a quadrature-delayed strobe, a tightly controlled DLL and write/read leveling delay lines by using the same physical delay line pair.
Abstract: A delay circuit provides a quadrature-delayed strobe, a tightly controlled quadrature DLL and write/read leveling delay lines by using the same physical delay line pair. By multiplexing different usage models, the need for multiple delay lines is reduced to only two delay lines per byte. As a result, the delay circuit provides substantial savings in terms of layout area and power.

Journal ArticleDOI
TL;DR: A data transmission scheme is proposed that addresses the constraints of multihop communication in wireless sensor networks; the selection of the agent node maximizes throughput while minimizing transmission delay in the network.
Abstract: Multihop communication in a wireless sensor network (WSN) brings new challenges in reliable data transmission. Recent work shows that collecting data from sensor nodes using a mobile sink minimizes multihop data transmission and improves energy efficiency. However, due to its continuous movement, a mobile sink has limited communication time to collect data from sensor nodes, which results in rapid depletion of the nodes' energy. Therefore, we propose a data transmission scheme that addresses these constraints. The proposed scheme first determines group-based regions from the localization information of the sensor nodes and the predefined trajectory information of the mobile sink. After the group regions in the network are determined, master nodes are selected. The master nodes transmit their data directly to the mobile sink upon its arrival at their group region through a restricted flooding scheme. In addition, the agent-node concept is introduced for swapping the role of the master node in each group region. When a master node's energy consumption reaches a certain threshold, the neighboring node with the second-highest residual energy is selected as the agent node. The mathematical analysis shows that the selection of the agent node maximizes throughput while minimizing transmission delay in the network.
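The master/agent rotation described above can be sketched in a few lines (node names, energy values and the threshold are our own, not the paper's): the highest-energy node in a region serves as master, and once its residual energy drops below the threshold the standby agent, the node with the second-highest residual energy, takes over.

```python
THRESHOLD = 2  # assumed residual-energy threshold (arbitrary units)

def pick_master_and_agent(nodes):
    """nodes: dict node_id -> residual energy; returns (master, agent)."""
    ranked = sorted(nodes, key=nodes.get, reverse=True)
    return ranked[0], ranked[1]

def transmit_round(nodes, master, agent, cost=1):
    nodes[master] -= cost             # master pays the direct-to-sink transmit cost
    if nodes[master] < THRESHOLD:     # role handover to the standby agent
        master = agent
        agent = pick_master_and_agent(nodes)[1]
    return master, agent

region = {"A": 9, "B": 8, "C": 5}
master, agent = pick_master_and_agent(region)
for _ in range(10):
    master, agent = transmit_round(region, master, agent)
print(master, region)  # the master role has rotated from A to B
```

Rotating the role this way is what spreads the long-range transmission cost across the group and avoids draining a single node, which is the load-balancing effect the abstract's analysis quantifies.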