Proceedings ArticleDOI

End-to-end congestion control in wireless mesh networks using a neural network

28 Mar 2011 - pp. 677-682
TL;DR: A novel neural-network-based congestion control technique for reliable data transfer over wireless mesh networks is proposed; it achieves significantly higher total network throughput and lower average energy consumption per bit than the congestion control techniques used in other variants of TCP.
Abstract: Maintaining the performance of reliable transport protocols, such as TCP, over wireless mesh networks is a challenging problem due to the unique characteristics of wireless mesh networks, such as the lossy nature of the communication medium, the absence of a base station, and the similarity in traffic patterns experienced by neighboring mesh nodes. One of the reasons for the poor performance of conventional TCP variants over wireless mesh networks is that their congestion control mechanisms do not explicitly account for these unique characteristics. To address this problem, this paper proposes a novel neural network based congestion control technique for reliable data transfer over wireless mesh networks. We analyze the proposed congestion control technique in detail and incorporate it into TCP to create a variant that we name intelligent TCP, or iTCP. We evaluate the performance of iTCP using ns-2 simulations. Our results demonstrate that the proposed technique achieves significantly higher total network throughput and lower average energy consumption per bit than the congestion control techniques used in other variants of TCP.
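The abstract does not detail the neural network's inputs or architecture. As a rough, hypothetical sketch of the general idea (a small pre-trained feed-forward network mapping recent congestion signals to a multiplicative congestion-window adjustment), consider the following; the feature set (an RTT ratio and a loss rate), the single hidden layer, and all names are illustrative assumptions, not the authors' design.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class CwndPredictor:
    """Hypothetical single-hidden-layer feed-forward NN that maps
    congestion signals to a multiplicative cwnd adjustment."""

    def __init__(self, w_hidden, b_hidden, w_out, b_out):
        # Pre-trained weights: w_hidden is a list of rows (one per
        # hidden unit); w_out weights the hidden activations.
        self.w_hidden, self.b_hidden = w_hidden, b_hidden
        self.w_out, self.b_out = w_out, b_out

    def adjust_factor(self, features):
        hidden = [sigmoid(sum(w * x for w, x in zip(row, features)) + b)
                  for row, b in zip(self.w_hidden, self.b_hidden)]
        out = sigmoid(sum(w * h for w, h in zip(self.w_out, hidden)) + self.b_out)
        return 0.5 + out  # map the (0, 1) output to a (0.5, 1.5) scaling factor

def on_ack(cwnd, predictor, rtt_ratio, loss_rate):
    # Instead of a fixed additive-increase rule, scale cwnd by the
    # NN's prediction computed from the latest congestion signals.
    return max(1.0, cwnd * predictor.adjust_factor([rtt_ratio, loss_rate]))
```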
Citations
Journal ArticleDOI
TL;DR: This paper proposes a novel artificial intelligence based congestion control technique for reliable data transfer over WMNs; the synergy with artificial intelligence is established by exploiting a carefully designed neural network (NN) in the congestion control mechanism, which is incorporated into TCP to create a new variant named intelligent TCP, or iTCP.
Abstract: Maintaining the performance of reliable transport protocols, such as the transmission control protocol (TCP), over wireless mesh networks (WMNs) is a challenging problem due to the unique characteristics of data transmission over WMNs: multi-hop communication over lossy and non-deterministic wireless media, data transmission in the absence of a base station, similar traffic patterns over neighboring mesh nodes, etc. One of the reasons for the poor performance of conventional TCP variants over WMNs is that their congestion control mechanisms do not explicitly account for these unique characteristics. To address this problem, this paper proposes a novel artificial intelligence based congestion control technique for reliable data transfer over WMNs. The synergy with artificial intelligence is established by exploiting a carefully designed neural network (NN) in the congestion control mechanism. We analyze the proposed NN based congestion control technique in detail and incorporate it into TCP to create a new variant that we name intelligent TCP, or iTCP. We evaluate the performance of iTCP using both ns-2 simulations and real testbed experiments. Our evaluation results demonstrate that the proposed congestion control technique achieves significantly higher total network throughput and lower average energy consumption per transmitted bit than the congestion control techniques used in other TCP variants.

26 citations

Journal ArticleDOI
TL;DR: A neighborhood-aware and overhead-free congestion control scheme (NICC) is proposed that solves the starvation problem without consuming the scarce bandwidth of WMNs; its performance in terms of starvation avoidance and bandwidth efficiency is demonstrated through extensive simulations.
Abstract: It has been reported that the IEEE 802.11 MAC protocol and TCP congestion control are highly problematic in terms of flow starvation in wireless mesh networks (WMNs). However, the economic advantages of IEEE 802.11 make it the commonly used MAC protocol in WMNs, so solving starvation at the transport layer seems more appropriate. Indeed, the main cause of starvation in TCP is that congestion is managed as a link-based problem. Since bandwidth is a spatially shared resource in WMNs, however, congestion is a neighborhood phenomenon that should be handled through mutual cooperation within a congested neighborhood. Such cooperation considerably consumes the already scarce bandwidth of WMNs, causing more congestion. In this paper, we propose a neighborhood-aware and overhead-free congestion control scheme (NICC) that solves the starvation problem without impacting the scarce bandwidth of WMNs. NICC makes use of some underexploited fields in the IEEE 802.11 frame header, without modifying the standard frame size, to provide an overhead-free multi-bit congestion feedback. Being overhead-free, this feedback allows neighborhood cooperation without generating control overhead; being multi-bit, it gives source nodes a fine-grained indication of the degree of congestion, enabling accurate rate control. The performance of NICC in terms of starvation avoidance and bandwidth efficiency is demonstrated through extensive simulations.
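NICC's central trick is piggybacking a multi-bit congestion level in spare bits of the standard 802.11 frame header, so the frame size never changes. The paper's exact field layout is not reproduced in this summary; the sketch below is a hypothetical illustration of packing and unpacking a 3-bit congestion level, with the bit width and position assumed.

```python
CONGESTION_BITS = 3    # assumed feedback width: 8 congestion levels
CONGESTION_SHIFT = 4   # assumed position of the spare bits in a header byte
CONGESTION_MASK = ((1 << CONGESTION_BITS) - 1) << CONGESTION_SHIFT

def encode_congestion(header_byte: int, level: int) -> int:
    """Pack a congestion level (0-7) into unused bits of an existing
    header byte; the frame size and the remaining bits are untouched."""
    assert 0 <= level < (1 << CONGESTION_BITS)
    return (header_byte & ~CONGESTION_MASK) | (level << CONGESTION_SHIFT)

def decode_congestion(header_byte: int) -> int:
    """Recover the neighborhood congestion level from a received frame."""
    return (header_byte & CONGESTION_MASK) >> CONGESTION_SHIFT
```

Because the feedback rides inside frames that are transmitted anyway, neighbors learn the congestion degree without any dedicated control messages, which is what makes the scheme overhead-free.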

18 citations


Cites background or methods from "End-to-end congestion control in wi..."

  • ...The authors in [7] propose iTCP, in which cwnd is adapted using a neural-based model [25]....

  • ...A number of TCP-based approaches to improve the performance of TCP in WMNs are presented in [7], [8], [24]....

  • ...However, most of existing schemes [7], [8] do not explicitly consider congestion as a neighborhood phenomenon; indeed, competing flows are treated separately, eventually leading to starvation of some flows....

Proceedings ArticleDOI
07 Aug 2012
TL;DR: This paper formulates the congestion control problem, maps it to the restless multi-armed bandit problem, a well-known decision problem in the literature, and proposes three myopic policies to achieve a near-optimal solution for the mapped problem.
Abstract: Congestion control in multi-hop infrastructure wireless mesh networks is both an important and a unique problem. It is unique because it has two prominent causes of failed transmissions that are difficult to tease apart: the lossy nature of the wireless medium and the high extent of congestion around gateways in the network. The concurrent presence of these two causes limits the applicability of existing congestion control mechanisms proposed for wireless networks, which mainly focus on the former cause and ignore the latter. Therefore, in this paper we design an end-to-end congestion control mechanism for infrastructure wireless mesh networks. We formulate the congestion control problem and map it to the restless multi-armed bandit problem, a well-known decision problem in the literature. We then propose three myopic policies to achieve a near-optimal solution for the mapped problem, since no optimal solution is known for this problem. We perform a comparative evaluation through ns-2 simulation and a real testbed experiment against a wireline TCP variant and a wireless TCP protocol. The evaluation reveals that our proposed mechanism can achieve up to 52% higher network throughput and 34% lower average energy consumption per transmitted bit compared to the other end-to-end congestion control variants.
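The summary does not specify the three policies, but the defining property of a myopic policy in a restless bandit is easy to state: at each decision epoch, play the arm with the highest expected immediate reward under the current belief, while every arm's state keeps evolving whether played or not. The sketch below illustrates that generic rule; the two-state channel model, the belief update, and all names are assumptions for illustration only.

```python
def myopic_policy(beliefs, reward):
    """Generic myopic (greedy one-step) rule for a restless bandit:
    choose the arm whose expected immediate reward is largest.

    beliefs: per-arm probability that the arm is in the 'good' state
             (e.g., an uncongested path toward a gateway).
    reward:  maps that probability to an expected one-step reward.
    """
    return max(range(len(beliefs)), key=lambda arm: reward(beliefs[arm]))

def update_belief(p_good, p_stay_good, p_become_good):
    """Arms are 'restless': each belief evolves every epoch under an
    assumed two-state Markov chain, even when the arm is not played."""
    return p_good * p_stay_good + (1.0 - p_good) * p_become_good
```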

16 citations


Cites background from "End-to-end congestion control in wi..."

  • ...The reason behind such outcomes is the conservative approach adopted in our proposed myopic policies to guard against a high extent of congestion in infrastructure WMNs....

  • ...Nonetheless, iTCP [19] proposes the only end-to-end congestion control mechanism for WMNs....

Proceedings ArticleDOI
29 Nov 2012
TL;DR: NICC is proposed as a neighborhood-based and overhead-free congestion control protocol that aims to avoid starvation without straining bandwidth resources; it applies a lightweight optimization of some underexploited fields in the IEEE 802.11 frame header to provide implicit multi-bit congestion feedback.
Abstract: Severe unfairness and even complete starvation may occur when using TCP-like congestion control in IEEE 802.11-based Wireless Mesh Networks (WMNs). Indeed, IEEE 802.11 is inherently unfair; however, economies of scale make it the commonly used MAC protocol in WMNs. Moreover, TCP-like protocols do not account for link interdependency within a neighborhood. In WMNs, congestion should be handled mutually, using explicit coordination among neighboring contending links. Furthermore, the set of flows that should be regulated to control congestion must include all those traversing a congested neighborhood. However, neighborhood coordination and flow notification significantly consume the already scarce bandwidth. In this paper, we propose NICC, a neighborhood-based and overhead-free congestion control protocol that aims to avoid starvation without straining bandwidth resources. Rather than treating IEEE 802.11 as a handicap, NICC applies a lightweight optimization of some underexploited fields in the 802.11 frame header to provide implicit multi-bit congestion feedback. Such feedback ensures accurate rate control without inducing additional overhead. The effectiveness of NICC in terms of starvation avoidance and bandwidth efficiency is demonstrated through in-depth simulations.

9 citations


Cites background from "End-to-end congestion control in wi..."

  • ...However, most of the proposed protocols [8][9] do not explicitly recognize congestion as a neighborhood phenomenon so that competing flows are treated independently and fairness is not addressed....

Journal ArticleDOI
TL;DR: This work proposes a simple yet effective mechanism to detect and reduce channel-caused packet losses by adjusting the retry limit parameter of the IEEE 802.11 protocol while taking into account the delay requirement of the traffic.
Abstract: In wireless networks such as those based on IEEE 802.11, packet losses due to fading and interference are often misinterpreted as indications of congestion by the congestion control protocol at higher layers, causing an unnecessary decrease in the data sending rate. For delay-constrained applications such as video telephony, packet losses may result in excessive artifacts or freezes in the decoded video. We propose a simple yet effective mechanism to detect and reduce channel-caused packet losses by adjusting the retry limit parameter of the IEEE 802.11 protocol while taking into account the delay requirement of the traffic. Since the retry limit is left configurable in the IEEE 802.11 standard and does not require cross-layer coordination, our scheme can be easily implemented and incrementally deployed. Experimental results of applying the proposed scheme to a WebRTC-based real-time video communication prototype show significant performance gain compared to the case where the retry limit is configured statically.
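As a sketch of the core idea (choosing the largest retry limit whose worst-case retransmission delay still fits the application's delay budget), consider the following; the per-attempt delay model, the default and cap values, and all names are assumptions, not the authors' exact formulation.

```python
def max_retry_limit(delay_budget_ms, per_attempt_delay_ms,
                    default_limit=7, cap=15):
    """Pick the largest 802.11 retry limit whose worst-case total
    retransmission delay still fits within the delay budget.

    per_attempt_delay_ms: assumed average cost of one (re)transmission
    attempt including backoff; a real implementation would estimate
    this from channel measurements.
    """
    affordable = int(delay_budget_ms // per_attempt_delay_ms)
    # Never go below the assumed default; cap to keep the channel fair.
    return max(default_limit, min(affordable, cap))
```

A larger retry limit lets the MAC layer absorb channel-caused losses locally, so they never reach (and are never misread by) the congestion controller, while the budget check keeps the added retransmission delay acceptable for real-time video.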

4 citations


Cites methods from "End-to-end congestion control in wi..."

  • ...However, most of the proposed protocols make use of the detection results in order to perform congestion control [2][3][4][5][6][7][8] or rate adaptation [9][12] at the MAC layer....

References
Journal ArticleDOI
TL;DR: A model of a system having a large number of simple equivalent components, based on aspects of neurobiology but readily adapted to integrated circuits, produces a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size.
Abstract: Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.
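As a concrete illustration of the content-addressable memory the model describes, the sketch below implements the classic Hopfield recall dynamics: Hebbian outer-product training followed by asynchronous threshold updates. This is the standard textbook formulation, not code from the paper.

```python
import random

def train(patterns):
    """Hebbian rule: w[i][j] = sum over stored patterns of s_i * s_j,
    with a zero diagonal. Patterns are lists of +1/-1 unit states."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, probe, sweeps=10):
    """Asynchronous updates: repeatedly pick a random unit and align it
    with its local field until the state settles; starting from a
    sufficiently large subpart of a memory, the full memory emerges."""
    n = len(probe)
    state = list(probe)
    for _ in range(sweeps * n):
        i = random.randrange(n)
        field = sum(w[i][j] * state[j] for j in range(n))
        state[i] = 1 if field >= 0 else -1
    return state
```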

16,652 citations

Journal ArticleDOI
01 Aug 1988
TL;DR: The measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet; among the seven new algorithms are a clamped retransmit backoff recently developed by Phil Karn of Bell Communications Research, described in [KP87], and fast retransmit, described in a soon-to-be-published RFC.
Abstract: In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and three IMP hops) dropped from 32 Kbps to 40 bps. Mike Karels and I were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. We wondered, in particular, if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was "yes".

Since that time, we have put seven new algorithms into the 4BSD TCP:

(i) round-trip-time variance estimation
(ii) exponential retransmit timer backoff
(iii) slow-start
(iv) more aggressive receiver ack policy
(v) dynamic window sizing on congestion
(vi) Karn's clamped retransmit backoff
(vii) fast retransmit

Our measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet.

This paper is a brief description of (i) - (v) and the rationale behind them. (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research, described in [KP87]. (vii) is described in a soon-to-be-published RFC.

Algorithms (i) - (v) spring from one observation: The flow on a TCP connection (or ISO TP-4 or Xerox NS SPP connection) should obey a 'conservation of packets' principle. And, if this principle were obeyed, congestion collapse would become the exception rather than the rule. Thus congestion control involves finding places that violate conservation and fixing them.

By 'conservation of packets' I mean that for a connection 'in equilibrium', i.e., running stably with a full window of data in transit, the packet flow is what a physicist would call 'conservative': A new packet isn't put into the network until an old packet leaves. The physics of flow predicts that systems with this property should be robust in the face of congestion. Observation of the Internet suggests that it was not particularly robust. Why the discrepancy?

There are only three ways for packet conservation to fail:

1. The connection doesn't get to equilibrium, or
2. A sender injects a new packet before an old packet has exited, or
3. The equilibrium can't be reached because of resource limits along the path.

In the following sections, we treat each of these in turn.
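Two of the listed algorithms, slow-start and dynamic window sizing on congestion, became the core of TCP congestion control and are the baseline the iTCP work above modifies. A minimal sketch of their standard behavior follows; this is the well-known textbook form, not code from the paper.

```python
def on_ack(cwnd, ssthresh):
    """Grow the window: exponentially while below the slow-start
    threshold, then roughly one segment per round-trip time."""
    if cwnd < ssthresh:
        return cwnd + 1.0, ssthresh        # slow-start: +1 segment per ACK
    return cwnd + 1.0 / cwnd, ssthresh     # congestion avoidance

def on_congestion(cwnd, ssthresh):
    """On a loss signal, remember half the current window as the new
    threshold and restart from a one-segment window."""
    return 1.0, max(cwnd / 2.0, 2.0)
```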

5,620 citations


"End-to-end congestion control in wi..." refers background in this paper

  • ...As a result of these differences, congestion control methods used in TCP variants [8], [14], [5], [11], [3] that were originally proposed for the wired Internet are not well-suited for direct use in WMNs....

  • ...TCP Tahoe [8] uses slow start and congestion avoidance for congestion control in the Internet....

Book ChapterDOI
01 May 1999
TL;DR: A model of a system having a large number of simple equivalent components, based on aspects of neurobiology but readily adapted to integrated circuits, produces a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size.
Abstract: Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.

2,865 citations


"End-to-end congestion control in wi..." refers background in this paper

  • ...Neural Networks (NNs) [7] are simplified mathematical or computational models that emulate biological brains....

Journal ArticleDOI
TL;DR: The three key techniques employed by Vegas are described, and the results of a comprehensive experimental performance study, using both simulations and measurements on the Internet, of the Vegas and Reno implementations of TCP are presented.
Abstract: Vegas is an implementation of TCP that achieves between 37 and 71% better throughput on the Internet, with one-fifth to one-half the losses, as compared to the implementation of TCP in the Reno distribution of BSD Unix. This paper motivates and describes the three key techniques employed by Vegas, and presents the results of a comprehensive experimental performance study, using both simulations and measurements on the Internet, of the Vegas and Reno implementations of TCP.
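The best known of Vegas' techniques is its congestion-avoidance rule: compare the expected throughput (window over the minimum observed RTT) with the actual throughput (window over the current RTT) and keep the difference, expressed in segments queued in the network, between two thresholds. The sketch below shows this widely documented rule; the alpha/beta values are the commonly cited defaults, assumed here.

```python
def vegas_adjust(cwnd, base_rtt, current_rtt, alpha=1.0, beta=3.0):
    """TCP Vegas congestion avoidance: estimate how many segments the
    connection has queued in the network and steer that surplus into
    the [alpha, beta] band."""
    expected = cwnd / base_rtt                # throughput if nothing queued
    actual = cwnd / current_rtt               # throughput actually achieved
    queued = (expected - actual) * base_rtt   # surplus, in segments
    if queued < alpha:
        return cwnd + 1.0   # under-utilizing the path: grow linearly
    if queued > beta:
        return cwnd - 1.0   # queue building up: back off before loss
    return cwnd
```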

1,602 citations


"End-to-end congestion control in wi..." refers background in this paper

  • ...Finally, based on our simulations and the above two figures, we found TCP Vegas to be the most efficient and TCP Fack to be the most stable among the other variants of TCP....

  • ...The table shows that iTCP results in an increased total network throughput and decreased average energy consumption per bit compared to TCP Vegas and TCP Fack....

  • ...Table III summarizes the performance improvement using iTCP in comparison to TCP Fack and TCP Vegas....

  • ...As a result of these differences, congestion control methods used in TCP variants [8], [14], [5], [11], [3] that were originally proposed for the wired Internet are not well-suited for direct use in WMNs....

  • ...We compare the performance of iTCP in random topologies against the most efficient (TCP Vegas) and the most stable (TCP Fack) variants found from the results in the grid topology....

01 Apr 2004
TL;DR: The purpose of this document is to advance NewReno TCP's Fast Retransmit and Fast Recovery algorithms in RFC 2582 from Experimental to Standards Track status.
Abstract: The purpose of this document is to advance NewReno TCP's Fast Retransmit and Fast Recovery algorithms in RFC 2582 from Experimental to Standards Track status.
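NewReno's defining change is how fast recovery handles partial ACKs: an ACK that advances past some, but not all, of the data outstanding when loss was detected signals another hole, so the sender retransmits immediately and stays in recovery instead of waiting for a timeout. A minimal sketch of that well-documented behavior follows; the simplified window deflation and all names are assumptions, not text from the RFC.

```python
def on_ack_in_fast_recovery(ack_seq, recover_seq, cwnd, ssthresh):
    """Return (new_cwnd, still_in_recovery) for an ACK received during
    NewReno fast recovery. recover_seq is the highest sequence number
    sent when loss was first detected."""
    if ack_seq >= recover_seq:
        # Full ACK: all data outstanding at loss detection is covered;
        # deflate the window (simplified) and leave fast recovery.
        return min(ssthresh, cwnd), False
    # Partial ACK: another segment was lost; the sender retransmits the
    # segment starting at ack_seq (not shown) and remains in recovery.
    return cwnd, True
```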

1,602 citations


"End-to-end congestion control in wi..." refers background in this paper

  • ...TCP Sack [6] modifies TCP Newreno by incorporating selective acknowledgements....

  • ...As a result of these differences, congestion control methods used in TCP variants [8], [14], [5], [11], [3] that were originally proposed for the wired Internet are not well-suited for direct use in WMNs....

  • ...TCP Newreno [5] improves TCP Reno by remaining in fast recovery until all data outstanding at the time fast recovery was entered has been acknowledged....