
Showing papers on "Throughput" published in 2003


Proceedings ArticleDOI
14 Sep 2003
TL;DR: Measurements taken from a 29-node 802.11b test-bed demonstrate the poor performance of minimum hop-count, illustrate the causes of that poor performance, and confirm that ETX improves performance.
Abstract: This paper presents the expected transmission count metric (ETX), which finds high-throughput paths on multi-hop wireless networks. ETX minimizes the expected total number of packet transmissions (including retransmissions) required to successfully deliver a packet to the ultimate destination. The ETX metric incorporates the effects of link loss ratios, asymmetry in the loss ratios between the two directions of each link, and interference among the successive links of a path. In contrast, the minimum hop-count metric chooses arbitrarily among the different paths of the same minimum length, regardless of the often large differences in throughput among those paths, and ignoring the possibility that a longer path might offer higher throughput. This paper describes the design and implementation of ETX as a metric for the DSDV and DSR routing protocols, as well as modifications to DSDV and DSR which allow them to use ETX. Measurements taken from a 29-node 802.11b test-bed demonstrate the poor performance of minimum hop-count, illustrate the causes of that poor performance, and confirm that ETX improves performance. For long paths the throughput improvement is often a factor of two or more, suggesting that ETX will become more useful as networks grow larger and paths become longer.

3,656 citations
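
To make the metric concrete, here is a minimal Python sketch of the ETX computation described in the abstract. The forward and reverse delivery ratios (df, dr) are assumed to be measured externally, e.g., with periodic broadcast probes, and the example path values are made up:

```python
# Minimal sketch of the ETX metric described above.
# df: forward delivery ratio, dr: reverse delivery ratio of a link,
# both typically measured with periodic broadcast probes.

def link_etx(df: float, dr: float) -> float:
    """Expected number of transmissions (incl. retransmissions) on one link.

    A data frame succeeds only if the frame (prob. df) and its link-layer
    ACK (prob. dr) both arrive, so the expected transmission count is
    1 / (df * dr).
    """
    if df <= 0 or dr <= 0:
        return float("inf")  # unusable link
    return 1.0 / (df * dr)

def path_etx(links: list[tuple[float, float]]) -> float:
    """ETX of a path is the sum of the ETX values of its links."""
    return sum(link_etx(df, dr) for df, dr in links)

# A 3-hop path of decent links can beat a 2-hop path of lossy links:
print(path_etx([(0.9, 0.9), (0.8, 0.9), (0.9, 0.9)]))  # ~3.86
print(path_etx([(0.5, 0.5), (0.5, 0.5)]))              # 8.0
```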


Journal ArticleDOI
TL;DR: Under certain mild conditions, this scheme is found to be throughput-wise asymptotically optimal for both high and low signal-to-noise ratio (SNR), and some numerical results are provided for the ergodic throughput of the simplified zero-forcing scheme in independent Rayleigh fading.
Abstract: A Gaussian broadcast channel (GBC) with r single-antenna receivers and t antennas at the transmitter is considered. Both transmitter and receivers have perfect knowledge of the channel. Despite its apparent simplicity, this model is, in general, a nondegraded broadcast channel (BC), for which the capacity region is not fully known. For the two-user case, we find a special case of Marton's (1979) region that achieves optimal sum-rate (throughput). In brief, the transmitter decomposes the channel into two interference channels, where interference is caused by the other user signal. Users are successively encoded, such that encoding of the second user is based on the noncausal knowledge of the interference caused by the first user. The crosstalk parameters are optimized such that the overall throughput is maximized and, surprisingly, this is shown to be optimal over all possible strategies (not only with respect to Marton's achievable region). For the case of r>2 users, we find a somewhat simpler choice of Marton's region based on ordering and successively encoding the users. For each user i in the given ordering, the interference caused by users j>i is eliminated by zero forcing at the transmitter, while interference caused by users j<i is pre-canceled by coding against the known interference. Under certain mild conditions, this scheme is throughput-wise asymptotically optimal for both high and low SNR, and numerical results are provided for the ergodic throughput of the simplified zero-forcing scheme in independent Rayleigh fading.

2,616 citations
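
The ordered zero-forcing scheme sketched in the abstract admits a compact numerical illustration via a triangular factorization of the channel matrix. The sketch below assumes an arbitrary random channel, equal power split, and unit noise, none of which come from the paper:

```python
import numpy as np

# Illustrative sketch of zero-forcing with successive (dirty-paper style)
# encoding for the ordered-user scheme described above. Channel, power
# split, and noise level are arbitrary assumptions for the example.

rng = np.random.default_rng(0)
r, t = 3, 3                      # single-antenna users, transmit antennas
H = rng.normal(size=(r, t)) + 1j * rng.normal(size=(r, t))

# LQ factorization: H = L Q^H with L lower triangular, columns of Q
# orthonormal, obtained from the QR decomposition of H^H.
Q, R = np.linalg.qr(H.conj().T)
L = R.conj().T                   # lower triangular

# Precoding x = Q u turns the channel into y = L u + n: user i sees
# interference only from earlier users j < i, which the transmitter knows
# and pre-cancels, so the effective per-user gain is |L[i, i]|^2.
noise = 1.0
power = np.ones(r)               # equal power split (assumption)
rates = np.log2(1 + power * np.abs(np.diag(L)) ** 2 / noise)
print("per-user rates:", rates, "throughput:", rates.sum())
```

Reordering the rows of H changes the diagonal gains, which is why the choice of user ordering matters in such schemes.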


Proceedings ArticleDOI
14 Sep 2003
TL;DR: It is shown that the routes derived from the analysis often yield noticeably better throughput than the default shortest path routes even in the presence of uncoordinated packet transmissions and MAC contention, suggesting that there is opportunity for achieving throughput gains by employing an interference-aware routing protocol.
Abstract: In this paper, we address the following question: given a specific placement of wireless nodes in physical space and a specific traffic workload, what is the maximum throughput that can be supported by the resulting network? Unlike previous work that has focused on computing asymptotic performance bounds under assumptions of homogeneity or randomness in the network topology and/or workload, we work with any given network and workload specified as inputs. A key issue impacting performance is wireless interference between neighboring nodes. We model such interference using a conflict graph, and present methods for computing upper and lower bounds on the optimal throughput for the given network and workload. To compute these bounds, we assume that packet transmissions at the individual nodes can be finely controlled and carefully scheduled by an omniscient and omnipotent central entity, which is unrealistic. Nevertheless, using ns-2 simulations, we show that the routes derived from our analysis often yield noticeably better throughput than the default shortest path routes even in the presence of uncoordinated packet transmissions and MAC contention. This suggests that there is opportunity for achieving throughput gains by employing an interference-aware routing protocol.

1,828 citations
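
Here is a toy Python construction of the conflict graph described above; the node positions, link set, and interference range are made-up inputs chosen so that one pair of links can be active simultaneously:

```python
import itertools, math

# Vertices of the conflict graph are wireless links; two links conflict
# if any endpoint of one lies within interference range of an endpoint
# of the other (a simple protocol-model assumption).

nodes = {"a": (0, 0), "b": (1, 0), "c": (2, 0), "d": (3, 0), "e": (4, 0)}
links = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]
INTERFERENCE_RANGE = 1.5

def dist(u, v):
    (x1, y1), (x2, y2) = nodes[u], nodes[v]
    return math.hypot(x1 - x2, y1 - y2)

def conflict(l1, l2):
    return any(dist(u, v) <= INTERFERENCE_RANGE for u in l1 for v in l2)

conflict_edges = [(l1, l2) for l1, l2 in itertools.combinations(links, 2)
                  if conflict(l1, l2)]
# ("a","b") and ("d","e") do not conflict, so they may share a time slot.
# Independent sets of this graph give schedulable link sets (lower bounds
# on optimal throughput); clique constraints give upper bounds.
print(conflict_edges)
```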


Proceedings ArticleDOI
09 Jul 2003
TL;DR: The performance of IEEE 802.11b wireless local area networks is analyzed theoretically by deriving simple expressions for the useful throughput; these expressions are validated by means of simulation and compared with several performance measurements.
Abstract: The performance of the IEEE 802.11b wireless local area networks is analyzed. We have observed that when some mobile hosts use a lower bit rate than the others, the performance of all hosts is considerably degraded. Such a situation is a common case in wireless local area networks in which a host far away from an access point is subject to significant signal fading and interference. To cope with this problem, the host changes its modulation type, which degrades its bit rate to some lower value. Typically, 802.11b products degrade the bit rate from 11 Mb/s to 5.5, 2, or 1 Mb/s when repeated unsuccessful frame transmissions are detected. In such a case, a host transmitting for example at 1 Mb/s reduces the throughput of all other hosts transmitting at 11 Mb/s to a low value below 1 Mb/s. The basic CSMA/CA channel access method is at the root of this anomaly: it guarantees an equal long term channel access probability to all hosts. When one host captures the channel for a long time because its bit rate is low, it penalizes other hosts that use the higher rate. We analyze the anomaly theoretically by deriving simple expressions for the useful throughput, validate them by means of simulation, and compare with several performance measurements.

1,273 citations
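
A back-of-the-envelope model of the anomaly is easy to write down: CSMA/CA gives every host the same long-run chance to send a frame, so channel time is dominated by the slowest sender. The frame size and per-frame overhead below are illustrative assumptions, not the paper's calibrated expressions:

```python
# Each host sends one frame per "round" of channel accesses; every host
# then gets the same useful throughput, the frame size divided by the
# total round duration.

FRAME_BITS = 12000          # 1500-byte frames (assumption)
OVERHEAD_S = 0.0008         # per-frame MAC/PHY overhead (assumption)

def per_host_throughput(rates_mbps):
    round_time = sum(OVERHEAD_S + FRAME_BITS / (r * 1e6) for r in rates_mbps)
    return FRAME_BITS / round_time / 1e6  # Mb/s per host

print(per_host_throughput([11, 11, 11, 11]))  # ~1.6 Mb/s each
print(per_host_throughput([11, 11, 11, 1]))   # ~0.65 Mb/s each: the anomaly
```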


Journal ArticleDOI
TL;DR: This article reviews the recent bandwidth estimation literature, focusing on underlying techniques and methodologies as well as open source bandwidth measurement tools.
Abstract: In a packet network, the terms bandwidth and throughput often characterize the amount of data that the network can transfer per unit of time. Bandwidth estimation is of interest to users wishing to optimize end-to-end transport performance, overlay network routing, and peer-to-peer file distribution. Techniques for accurate bandwidth estimation are also important for traffic engineering and capacity planning support. Existing bandwidth estimation tools measure one or more of three related metrics: capacity, available bandwidth, and bulk transfer capacity. Currently available bandwidth estimation tools employ a variety of strategies to measure these metrics. In this survey we review the recent bandwidth estimation literature focusing on underlying techniques and methodologies as well as open source bandwidth measurement tools.

845 citations
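
The relationship between the first two metrics has a standard formulation: the capacity of a path is set by its narrow link, while the available bandwidth is set by its tight link. A small Python sketch with made-up hop values:

```python
# A path described as per-hop (capacity, utilization) pairs. Bulk
# transfer capacity (BTC) is the throughput a single TCP connection
# would obtain and must be measured by actually running a transfer,
# so it is only noted here.

hops = [(100.0, 0.20), (10.0, 0.50), (1000.0, 0.10)]  # (Mb/s, fraction busy)

capacity = min(c for c, _ in hops)            # narrow link: 10 Mb/s
avail_bw = min(c * (1 - u) for c, u in hops)  # tight link: 5 Mb/s
print(capacity, avail_bw)
```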


Journal ArticleDOI
TL;DR: This work describes an end-to-end methodology, called self-loading periodic streams (SLoPS), for measuring avail-bw, and uses pathload, a nonintrusive tool, to evaluate the variability ("dynamics") of the avail- bw in Internet paths.
Abstract: The available bandwidth (avail-bw) in a network path is of major importance in congestion control, streaming applications, quality-of-service verification, server selection, and overlay networks. We describe an end-to-end methodology, called self-loading periodic streams (SLoPS), for measuring avail-bw. The basic idea in SLoPS is that the one-way delays of a periodic packet stream show an increasing trend when the stream's rate is higher than the avail-bw. We have implemented SLoPS in a tool called pathload. The accuracy of the tool has been evaluated with both simulations and experiments over real-world Internet paths. Pathload is nonintrusive, meaning that it does not cause significant increases in the network utilization, delays, or losses. We used pathload to evaluate the variability ("dynamics") of the avail-bw in Internet paths. The avail-bw becomes significantly more variable in heavily utilized paths, as well as in paths with limited capacity (probably due to a lower degree of statistical multiplexing). We finally examine the relation between avail-bw and TCP throughput. A persistent TCP connection can be used to measure roughly the avail-bw in a path, but TCP saturates the path and increases significantly the path delays and jitter.

765 citations
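
The SLoPS iteration lends itself to a schematic sketch: probe at rate R, declare R greater than the avail-bw when the one-way delays of the periodic stream show an increasing trend, and binary-search on R. The trend test and `send_stream` below are stand-ins for pathload's actual machinery, not its implementation:

```python
def delays_increasing(delays, threshold=0.6):
    """Crude trend test: fraction of consecutive increases (a stand-in
    for pathload's pairwise/percentile trend statistics)."""
    ups = sum(1 for a, b in zip(delays, delays[1:]) if b > a)
    return ups / max(len(delays) - 1, 1) > threshold

def estimate_avail_bw(send_stream, lo=0.0, hi=1000.0, resolution=1.0):
    """Binary search on the probing rate (Mb/s). send_stream(rate) is
    assumed to send one periodic stream and return its one-way delays."""
    while hi - lo > resolution:
        rate = (lo + hi) / 2
        if delays_increasing(send_stream(rate)):
            hi = rate   # stream self-loaded the path: rate > avail-bw
        else:
            lo = rate   # no increasing trend: rate < avail-bw
    return (lo + hi) / 2
```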


Proceedings ArticleDOI
09 Jul 2003
TL;DR: This work considers two different routing strategies and studies the scaling behavior of the throughput capacity of a hybrid network, finding that if m grows asymptotically slower than √n, the benefit of adding base stations on capacity is insignificant; however, if m grows faster than √n, the throughput capacity increases linearly with the number of base stations, providing an effective improvement over a pure ad hoc network.
Abstract: This paper studies the throughput capacity of hybrid wireless networks. A hybrid network is formed by placing a sparse network of base stations in an ad hoc network. These base stations are assumed to be connected by a high-bandwidth wired network and act as relays for wireless nodes. They are neither data sources nor data receivers. Hybrid networks present a tradeoff between traditional cellular networks and pure ad hoc networks in that data may be forwarded in a multihop fashion or through the infrastructure. It has been shown that the capacity of a random ad hoc network does not scale well with the number of nodes in the system. In this work, we consider two different routing strategies and study the scaling behavior of the throughput capacity of a hybrid network. Analytical expressions of the throughput capacity are obtained. For a hybrid network of n nodes and m base stations, the results show that if m grows asymptotically slower than √n, the benefit of adding base stations on capacity is insignificant. However, if m grows faster than √n, the throughput capacity increases linearly with the number of base stations, providing an effective improvement over a pure ad hoc network. Therefore, in order to achieve nonnegligible capacity gain, the investment in the wired infrastructure should be high enough.

571 citations


Proceedings ArticleDOI
09 Jul 2003
TL;DR: This paper studies TCP performance over multihop wireless networks that use the IEEE 802.11 protocol as the access method and proposes two techniques, link RED and adaptive pacing, through which it is able to improve TCP throughput by 5% to 30% in various simulated topologies.
Abstract: This paper studies TCP performance over multihop wireless networks that use the IEEE 802.11 protocol as the access method. Our analysis and simulations show that, given a specific network topology and flow patterns, there exists a TCP window size W*, at which TCP achieves best throughput via improved spatial channel reuse. However, TCP does not operate around W*, and typically grows its average window size much larger; this leads to decreased throughput and increased packet loss. The TCP throughput reduction can be explained by its loss behavior. Our results show that network overload is mainly signified by wireless link contention in multihop wireless networks. As long as the buffer size at each node is reasonably large (say, larger than 10 packets), buffer overflow-induced packet loss is rare and packet drops due to link-layer contention dominate. Link-layer drops offer the first sign for network overload. We further show that multihop wireless links collectively exhibit graceful drop behavior: as the offered load increases, the link contention drop probability also increases, but saturates eventually. In general, the link drop probability is insufficient to stabilize the average TCP window size around W*. Consequently, TCP suffers from reduced throughput due to reduced spatial reuse. We further propose two techniques, link RED and adaptive pacing, through which we are able to improve TCP throughput by 5% to 30% in various simulated topologies. Some simulation results are also validated by real hardware experiments.

570 citations
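
As a rough illustration of operating near W*: for chain topologies where the interference range is about twice the transmission range, only about one node in four can transmit at a time, which puts W* in the vicinity of a quarter of the hop count. The clamp below is a sketch of that rule of thumb, not the paper's link RED or adaptive pacing mechanisms:

```python
# Cap TCP's congestion window near the spatial-reuse optimum W* instead
# of letting it grow until link-layer contention drops force it down.
# The h/4 heuristic assumes a chain with interference range ~2x the
# transmission range.

def optimal_window(path_hops: int) -> int:
    return max(1, round(path_hops / 4))

def clamp_cwnd(cwnd: float, path_hops: int) -> float:
    return min(cwnd, optimal_window(path_hops))

print(optimal_window(8))   # 2 segments in flight on an 8-hop chain
```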


Journal ArticleDOI
TL;DR: This work proposes and studies a novel end-to-end congestion control mechanism called Veno that is simple and effective for dealing with random packet loss in wireless access networks and can achieve significant throughput improvements without adversely affecting other concurrent TCP connections, including other concurrent Reno connections.
Abstract: Wireless access networks in the form of wireless local area networks, home networks, and cellular networks are becoming an integral part of the Internet. Unlike wired networks, random packet loss due to bit errors is not negligible in wireless networks, and this causes significant performance degradation of transmission control protocol (TCP). We propose and study a novel end-to-end congestion control mechanism called TCP Veno that is simple and effective for dealing with random packet loss. A key ingredient of Veno is that it monitors the network congestion level and uses that information to decide whether packet losses are likely to be due to congestion or random bit errors. Specifically: (1) it refines the multiplicative decrease algorithm of TCP Reno-the most widely deployed TCP version in practice-by adjusting the slow-start threshold according to the perceived network congestion level rather than a fixed drop factor and (2) it refines the linear increase algorithm so that the connection can stay longer in an operating region in which the network bandwidth is fully utilized. Based on extensive network testbed experiments and live Internet measurements, we show that Veno can achieve significant throughput improvements without adversely affecting other concurrent TCP connections, including other concurrent Reno connections. In typical wireless access networks with 1% random packet loss rate, throughput improvement of up to 80% can be demonstrated. A salient feature of Veno is that it modifies only the sender-side protocol of Reno without changing the receiver-side protocol stack.

530 citations
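
A sketch of Veno's loss-discrimination rule as described above: estimate the connection's backlog in the network queues and treat a loss as random (bit-error) only when the backlog is small. The backlog estimate follows the usual Vegas-style formula; the constants (beta = 3 packets, decrease factor 0.8) are commonly cited Veno choices and should be treated as assumptions here:

```python
BETA = 3  # packets queued in the network before we call it congestion

def backlog(cwnd, base_rtt, rtt):
    """N = Actual * (RTT - BaseRTT): this flow's packets sitting in queues."""
    return (cwnd / rtt) * (rtt - base_rtt)

def on_loss(cwnd, base_rtt, rtt):
    if backlog(cwnd, base_rtt, rtt) < BETA:
        return cwnd * 0.8   # likely random loss: back off gently
    return cwnd / 2         # likely congestive loss: Reno's halving

def on_ack(cwnd, base_rtt, rtt, ack_count):
    # Refined additive increase: once the available bandwidth looks fully
    # used (N >= BETA), grow only every other round (approximated here by
    # every other ACK) to linger in the efficient operating region.
    if backlog(cwnd, base_rtt, rtt) < BETA or ack_count % 2 == 0:
        return cwnd + 1.0 / cwnd
    return cwnd
```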


Proceedings ArticleDOI
14 Sep 2003
TL;DR: This paper proposes the Unified Cellular and Ad-Hoc Network (UCAN) architecture for enhancing cell throughput while maintaining fairness, and refines the 3G base station scheduling algorithm so that the throughput gains of active clients are distributed in proportion to their average channel rate.
Abstract: In third-generation (3G) wireless data networks, mobile users experiencing poor channel quality usually have low data-rate connections with the base-station. Providing service to low data-rate users is required for maintaining fairness, but at the cost of reducing the cell's aggregate throughput. In this paper, we propose the Unified Cellular and Ad-Hoc Network (UCAN) architecture for enhancing cell throughput, while maintaining fairness. In UCAN, a mobile client has both a 3G cellular link and IEEE 802.11-based peer-to-peer links. The 3G base station forwards packets for destination clients with poor channel quality to proxy clients with better channel quality. The proxy clients then use an ad-hoc network composed of other mobile clients and IEEE 802.11 wireless links to forward the packets to the appropriate destinations, thereby improving cell throughput. We refine the 3G base station scheduling algorithm so that the throughput gains of active clients are distributed in proportion to their average channel rate, thereby maintaining fairness. With the UCAN architecture in place, we propose novel greedy and on-demand protocols for proxy discovery and ad-hoc routing that explicitly leverage the existence of the 3G infrastructure to reduce complexity and improve reliability. We further propose a secure crediting mechanism to motivate users to participate in relaying packets for others. Through extensive simulations with HDR and IEEE 802.11b, we show that the UCAN architecture can improve an individual user's throughput by up to 310% and the aggregate throughput of the HDR downlink by up to 60%.

509 citations


Proceedings ArticleDOI
11 May 2003
TL;DR: A novel link adaptation algorithm is presented, which aims to improve the system throughput by adapting the transmission rate to the current link condition; it is shown that the proposed algorithm closely approximates the ideal case with perfect knowledge of the channel and receiver conditions.
Abstract: IEEE 802.11 wireless local area network (WLAN) physical layers (PHYs) support multiple transmission rates. The PHY rate to be used for a particular frame transmission is solely determined by the transmitting station. The transmitting rate should be chosen in an adaptive manner since the wireless channel condition varies over time due to such factors as station mobility, time-varying interference, and location-dependent errors. In this paper, we present a novel link adaptation algorithm, which aims to improve the system throughput by adapting the transmission rate to the current link condition. Our algorithm is simply based on the received signal strength measured from the received frames, and hence it does not require any changes in the current IEEE 802.11 WLAN medium access control (MAC) protocol. Based on the simulation and its comparison with a numerical analysis, it is shown that the proposed algorithm closely approximates the ideal case with the perfect knowledge about the channel and receiver conditions.
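
In the spirit of the algorithm above, a sender can map the signal strength measured on frames received from its peer to the highest 802.11b PHY rate expected to succeed. The dBm thresholds below are illustrative assumptions, not the paper's calibrated values:

```python
# Received-signal-strength based rate selection sketch: pick the highest
# PHY rate whose (assumed) RSS threshold is met; otherwise fall back to
# the most robust rate.

RATE_THRESHOLDS_DBM = [   # (minimum RSS in dBm, rate in Mb/s), best first
    (-82.0, 11.0),
    (-87.0, 5.5),
    (-90.0, 2.0),
]
FALLBACK_RATE = 1.0

def pick_rate(rss_dbm: float) -> float:
    for threshold, rate in RATE_THRESHOLDS_DBM:
        if rss_dbm >= threshold:
            return rate
    return FALLBACK_RATE

print(pick_rate(-80))  # 11.0
print(pick_rate(-89))  # 2.0
```

Because the decision uses only locally measured signal strength, no change to the 802.11 MAC is needed, which is the algorithm's main practical appeal.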

Journal ArticleDOI
01 Jan 2003
TL;DR: Experimental evidence from two wireless test-beds shows that there are usually multiple minimum hop-count paths, many of which have poor throughput, and suggests that more attention be paid to link quality when choosing ad hoc routes.
Abstract: Existing wireless ad hoc routing protocols typically find routes with the minimum hop-count. This paper presents experimental evidence from two wireless test-beds which shows that there are usually multiple minimum hop-count paths, many of which have poor throughput. As a result, minimum-hop-count routing often chooses routes that have significantly less capacity than the best paths that exist in the network. Much of the reason for this is that many of the radio links between nodes have loss rates low enough that the routing protocol is willing to use them, but high enough that much of the capacity is consumed by retransmissions. These observations suggest that more attention be paid to link quality when choosing ad hoc routes; the paper presents measured link characteristics likely to be useful in devising a better path quality metric.

Journal ArticleDOI
TL;DR: This work generalizes the zero-forcing beamforming technique to the case of multiple receive antennas, uses it as the baseline for the packet data throughput evaluation, and examines the long-term average throughputs that can be achieved using the proportionally fair scheduling algorithm.
Abstract: Recently, the capacity region of a multiple-input multiple-output (MIMO) Gaussian broadcast channel, with Gaussian codebooks and known-interference cancellation through dirty paper coding, was shown to equal the union of the capacity regions of a collection of MIMO multiple-access channels. We use this duality result to evaluate the system capacity achievable in a cellular wireless network with multiple antennas at the base station and multiple antennas at each terminal. Some fundamental properties of the rate region are exhibited and algorithms for determining the optimal weighted rate sum and the optimal covariance matrices for achieving a given rate vector on the boundary of the rate region are presented. These algorithms are then used in a simulation study to determine potential capacity enhancements to a cellular system through known-interference cancellation. We study both the circuit data scenario in which each user requires a constant data rate in every frame and the packet data scenario in which users can be assigned a variable rate in each frame so as to maximize the long-term average throughput. In the case of circuit data, the outage probability as a function of the number of active users served at a given rate is determined through simulations. For the packet data case, long-term average throughputs that can be achieved using the proportionally fair scheduling algorithm are determined. We generalize the zero-forcing beamforming technique to the multiple receive antennas case and use this as the baseline for the packet data throughput evaluation.

Proceedings ArticleDOI
09 Jul 2003
TL;DR: A theoretical framework is considered and a routing algorithm is proposed which exploits the patterns in the mobility of nodes to provide guarantees on the delay; the throughput achieved by the algorithm is only a poly-logarithmic factor off from the optimal.
Abstract: Network throughput and packet delay are two important parameters in the design and the evaluation of routing protocols for ad-hoc networks. While mobility has been shown to increase the capacity of a network, it is not clear whether the delay can be kept low without trading off the throughput. We consider a theoretical framework and propose a routing algorithm which exploits the patterns in the mobility of nodes to provide guarantees on the delay. Moreover, the throughput achieved by the algorithm is only a poly-logarithmic factor off from the optimal. The algorithm itself is fairly simple. In order to analyze its feasibility and the performance guarantee, we used various techniques of probabilistic analysis of algorithms. The approach taken in this paper could be applied to the analyses of some other routing algorithms for mobile ad hoc networks proposed in the literature.

Proceedings ArticleDOI
09 Jul 2003
TL;DR: This paper identifies four different regions of TCP unfairness that depend on the buffer availability at the base station, with some regions exhibiting significant unfairness: throughput ratios of over 10 between upstream and downstream TCP flows.
Abstract: As local area wireless networks based on the IEEE 802.11 standard see increasing public deployment, it is important to ensure that access to the network by different users remains fair. While fairness issues in 802.11 networks have been studied before, this paper is the first to focus on TCP fairness in 802.11 networks in the presence of both mobile senders and receivers. In this paper, we evaluate extensively through analysis, simulation, and experimentation the interaction between the 802.11 MAC protocol and TCP. We identify four different regions of TCP unfairness that depend on the buffer availability at the base station, with some regions exhibiting significant unfairness: throughput ratios of over 10 between upstream and downstream TCP flows. We also propose a simple solution that can be implemented at the base station above the MAC layer that ensures that different TCP flows share the 802.11 bandwidth equitably irrespective of the buffer availability at the base station.

Proceedings ArticleDOI
09 Jul 2003
TL;DR: Simulation studies using the proposed extensible on-demand power management framework with the dynamic source routing protocol show a reduction in energy consumption of nearly 50% when compared to a network without power management under both long-lived CBR traffic and on-off traffic loads, with comparable throughput and latency.
Abstract: Battery power is an important resource in ad hoc networks. It has been observed that in ad hoc networks, energy consumption does not reflect the communication activities in the network. Many existing energy conservation protocols based on electing a routing backbone for global connectivity are oblivious to traffic characteristics. In this paper, we propose an extensible on-demand power management framework for ad hoc networks that adapts to traffic load. Nodes maintain soft-state timers that determine power management transitions. By monitoring routing control messages and data transmission, these timers are set and refreshed on-demand. Nodes that are not involved in data delivery may go to sleep as supported by the MAC protocol. This soft state is aggregated across multiple flows and its maintenance requires no additional out-of-band messages. We implement a prototype of our framework in the ns-2 simulator that uses the IEEE 802.11 MAC protocol. Simulation studies using our scheme with the dynamic source routing protocol show a reduction in energy consumption of nearly 50% when compared to a network without power management under both long-lived CBR traffic and on-off traffic loads, with comparable throughput and latency. Preliminary results also show that it outperforms existing routing backbone election approaches.
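
The soft-state timer idea sketches naturally in a few lines: seeing routing control or data traffic refreshes a keep-awake timer, and a node may ask its MAC to sleep only when every timer has expired. The timeout values below are illustrative assumptions:

```python
import time

ROUTE_TIMEOUT_S = 5.0   # keep-awake after routing control traffic (assumed)
DATA_TIMEOUT_S = 2.0    # keep-awake after forwarding data (assumed)

class PowerManager:
    """On-demand soft state: no extra out-of-band messages, just timers
    refreshed by the traffic the node already sees."""

    def __init__(self):
        self.expiry = {}  # event kind -> absolute expiry time

    def refresh(self, kind: str, timeout: float):
        self.expiry[kind] = time.monotonic() + timeout

    def on_routing_control(self):   # e.g. a DSR route request/reply overheard
        self.refresh("route", ROUTE_TIMEOUT_S)

    def on_data_packet(self):       # node sits on an active forwarding path
        self.refresh("data", DATA_TIMEOUT_S)

    def may_sleep(self) -> bool:
        # True when all timers have lapsed (or none were ever set).
        now = time.monotonic()
        return all(t <= now for t in self.expiry.values())
```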

Proceedings ArticleDOI
09 Jul 2003
TL;DR: The results reveal that, in comparison with a general single-path routing protocol, the multipath routing mechanism creates more overhead but performs better in terms of congestion and capacity, provided that the route length is within a certain derivable upper bound.
Abstract: Research on multipath routing protocols to provide improved throughput and route resilience as compared with single-path routing has been explored in detail in the context of wired networks. However, the multipath routing mechanism has not been explored thoroughly in the domain of ad hoc networks. In this paper, we analyze and compare reactive single-path and multipath routing with load balance mechanisms in ad hoc networks, in terms of overhead, traffic distribution, and connection throughput. The results reveal that, in comparison with a general single-path routing protocol, the multipath routing mechanism creates more overhead but performs better in terms of congestion and capacity, provided that the route length is within a certain derivable upper bound. The analytical results are further confirmed by simulation.

Journal ArticleDOI
TL;DR: This paper evaluates different error control and adaptation mechanisms available in the different layers for robust transmission of video, namely MAC retransmission strategy, application-layer forward error correction, bandwidth-adaptive compression using scalable coding, and adaptive packetization strategies, and proposes a novel adaptive cross-layer protection strategy.
Abstract: Robust streaming of video over 802.11 wireless local area networks poses many challenges, including coping with bandwidth variations, data losses, and heterogeneity of the receivers. Currently, each network layer (including physical layer, media access control (MAC), transport, and application layers) provides a separate solution to these challenges by providing its own optimized adaptation and protection mechanisms. However, this layered strategy does not always result in an optimal overall performance for the transmission of video. Moreover, certain protection strategies can be implemented simultaneously in several layers and, hence, the optimal choices from the application and complexity perspective need to be identified. In this paper, we evaluate different error control and adaptation mechanisms available in the different layers for robust transmission of video, namely MAC retransmission strategy, application-layer forward error correction, bandwidth-adaptive compression using scalable coding, and adaptive packetization strategies. Subsequently, we propose a novel adaptive cross-layer protection strategy for enhancing the robustness and efficiency of scalable video transmission by performing tradeoffs between throughput, reliability, and delay depending on the channel conditions and application requirements. The results obtained using the proposed adaptive cross-layer protection strategies show a significantly improved visual performance for the transmitted video over a variety of channel conditions.

Proceedings ArticleDOI
Sem Borst1
09 Jul 2003
TL;DR: This paper shows that in certain cases the user-level performance may be evaluated by means of a multiclass Processor-Sharing model where the total service rate varies with the total number of users, and shows that, in the presence of channel variations, greedy, myopic strategies which maximize throughput in a static scenario may result in sub-optimal throughput performance for a dynamic user configuration and cause potential instability effects.
Abstract: Channel-aware scheduling strategies, such as the Proportional Fair algorithm for the CDMA 1xEV-DO system, provide an effective mechanism for improving throughput performance in wireless data networks by exploiting channel fluctuations. The performance of channel-aware scheduling algorithms has mostly been explored at the packet level for a static user population, often assuming infinite backlogs. In the present paper, we focus on the performance at the flow level in a dynamic setting with random finite-size service demands. We show that in certain cases the user-level performance may be evaluated by means of a multiclass Processor-Sharing model where the total service rate varies with the total number of users. The latter model provides explicit formulas for the distribution of the number of active users of the various classes, the mean response times, the blocking probabilities, and the mean throughput. In addition we show that, in the presence of channel variations, greedy, myopic strategies which maximize throughput in a static scenario may result in sub-optimal throughput performance for a dynamic user configuration and cause potential instability effects.
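
For reference, the Proportional Fair rule discussed above fits in a few lines: serve the user with the largest ratio of instantaneous feasible rate to exponentially smoothed past throughput. The smoothing constant is an assumed typical value:

```python
ALPHA = 1.0 / 1000   # EWMA smoothing constant, in slots (assumption)

def pf_schedule(inst_rates, avg_rates):
    """Pick the user maximizing r_i / T_i, then update every user's
    smoothed throughput (non-served users contribute zero this slot)."""
    user = max(range(len(inst_rates)),
               key=lambda i: inst_rates[i] / max(avg_rates[i], 1e-9))
    for i in range(len(avg_rates)):
        served = inst_rates[i] if i == user else 0.0
        avg_rates[i] = (1 - ALPHA) * avg_rates[i] + ALPHA * served
    return user
```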

Proceedings ArticleDOI
09 Jul 2003
TL;DR: A decentralized medium access control (MAC) protocol, where each user only has knowledge of its own channel gain, is considered; it is proved that a variation of channel-aware ALOHA is stable for any total arrival rate in a memoryless channel, given that users can estimate the backlog.
Abstract: Multiuser diversity refers to a type of diversity present across different users in a fading environment. This diversity can be exploited by scheduling transmissions so that users transmit when their channel conditions are favorable. Using such an approach leads to a system capacity that increases with the number of users. However, such scheduling requires centralized control. In this paper, we consider a decentralized medium access control (MAC) protocol, where each user only has knowledge of its own channel gain. We consider a variation of the ALOHA protocol, channel-aware ALOHA; using this protocol we show that users can still exploit multiuser diversity gains. First we consider a backlogged model, where each user always has packets to send. In this case we show that the total system throughput increases at the same rate as in a system with a centralized scheduler. Asymptotically, the fraction of throughput lost due to the random access protocol is shown to be 1/e. We also consider a splitting algorithm, where the splitting sequence depends on the users' channel gains; this algorithm is shown to approach the throughput of an optimal centralized scheme. Next we consider a system with an infinite user population and random arrivals. In this case, it is proved that a variation of channel-aware ALOHA is stable for any total arrival rate in a memoryless channel, given that users can estimate the backlog. Extensions for channels with memory are also discussed.
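
A simulation sketch of channel-aware ALOHA under the backlogged model above: each of n users transmits only when its own gain exceeds the 1 - 1/n quantile, so on average one user transmits per slot, and a slot succeeds when exactly one user transmits. Rayleigh (exponential-power) fading is an assumed channel model:

```python
import math, random

def simulate(n_users=50, slots=20_000):
    threshold = math.log(n_users)   # P(g > ln n) = 1/n for Exp(1) gains
    successes = 0
    for _ in range(slots):
        senders = sum(1 for _ in range(n_users)
                      if random.expovariate(1.0) > threshold)
        if senders == 1:            # collision channel: one sender wins
            successes += 1
    return successes / slots

print(simulate())  # tends to 1/e ~ 0.37 successful slots for large n
```

Each user needs only its own gain and the (known or estimated) user count, which is what makes the scheme decentralized while still favoring users with good channels.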

Proceedings ArticleDOI
30 Mar 2003
TL;DR: The extensive simulation studies show that the FCR algorithm could significantly improve the performance of the IEEE 802.11 MAC protocol if the efficient collision resolution algorithm is used and that the fairly scheduled FCR (FS-FCR) algorithm could simultaneously achieve high throughput performance and a high degree of fairness.
Abstract: Design of efficient medium access control (MAC) protocols with both high throughput and a high degree of fairness is a major focus in distributed contention-based MAC protocol research. In this paper, we propose a novel and efficient contention-based MAC protocol for wireless local area networks, namely, the fast collision resolution (FCR) algorithm. This algorithm is developed based on the following innovative ideas: to speed up the collision resolution, we actively redistribute the backoff timers for all active nodes; to reduce the average number of idle slots, we use smaller contention window sizes for nodes with successful packet transmissions and reduce the backoff timers exponentially fast when a fixed number of consecutive idle slots are detected. We show that the proposed FCR algorithm provides high throughput performance and low latency in wireless LANs. The extensive simulation studies show that the FCR algorithm could significantly improve the performance of the IEEE 802.11 MAC protocol if our efficient collision resolution algorithm is used and that the fairly scheduled FCR (FS-FCR) algorithm could simultaneously achieve high throughput performance and a high degree of fairness.
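
The two FCR ideas stated in the abstract sketch directly into backoff logic: shrink the contention window to its minimum after a success, and count the backoff timer down exponentially fast while consecutive idle slots are observed. The window sizes and idle-slot trigger below are assumptions, not the paper's parameters:

```python
import random

CW_MIN, CW_MAX = 8, 2048
IDLE_TRIGGER = 4          # consecutive idle slots before fast reduction

class FcrBackoff:
    def __init__(self):
        self.cw = CW_MIN
        self.timer = random.randrange(self.cw)
        self.idle_run = 0

    def on_success(self):
        self.cw = CW_MIN                  # successful senders restart small
        self.timer = random.randrange(self.cw)

    def on_collision(self):
        self.cw = min(self.cw * 2, CW_MAX)
        self.timer = random.randrange(self.cw)

    def on_idle_slot(self):
        self.idle_run += 1
        if self.idle_run >= IDLE_TRIGGER:
            self.timer //= 2              # exponentially fast count-down
        else:
            self.timer -= 1
        self.timer = max(self.timer, 0)

    def on_busy_slot(self):
        self.idle_run = 0                 # fast count-down only while idle
```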

Proceedings ArticleDOI
09 Jul 2003
TL;DR: Compared to the IEEE 802.11 approach, the proposed protocol achieves a significant increase in the channel utilization and end-to-end network throughput, and a significant decrease in the total energy consumption.
Abstract: In this paper, we propose a comprehensive solution for power control in mobile ad hoc networks (MANETs). Our solution emphasizes the interplay between the MAC and network layers, whereby the MAC layer indirectly influences the selection of the next-hop by properly adjusting the power of route request packets. This is done while maintaining network connectivity. Directional and channel-gain information obtained mainly from overheard RTS and CTS packets is used to dynamically construct the network topology. By properly estimating the required transmission power for data packets, our protocol allows for interference-limited simultaneous transmissions to take place in the neighborhood of a receiving node. Simulation results indicate that compared to the IEEE 802.11 approach, the proposed protocol achieves a significant increase in the channel utilization and end-to-end network throughput, and a significant decrease in the total energy consumption.

Proceedings ArticleDOI
01 Dec 2003
TL;DR: This work considers a TDMA cellular multihop network where relaying - via wireless terminals that have a good communication link to the base station - is used as a coverage enhancement technique and investigates the effects of relaying node selection strategies and maximum relayer transmit power level on coverage.
Abstract: We consider a TDMA cellular multihop network where relaying - via wireless terminals that have a good communication link to the base station - is used as a coverage enhancement technique. Provided that the subscriber density is not very low, relaying via wireless terminals can have a significant impact on coverage, capacity, and throughput. This is mainly due to the fact that the signals only have to travel through shorter distances and/or improved paths. In this work, we investigate the effects of relaying node selection strategies (essentially a routing issue) and maximum relayer transmit power level on coverage. Our simulation results show that with a very modest level of relaying node transmit power and with some moderate intelligence incorporated in the relaying node selection scheme, the (high data rate) coverage can be improved significantly through two-hop relaying without consuming any additional bandwidth.

Proceedings ArticleDOI
25 Aug 2003
TL;DR: This paper considers how optics can be used to scale capacity and reduce power in a router, and describes two different implementations based on technology available within the next three years.
Abstract: Routers built around a single-stage crossbar and a centralized scheduler do not scale, and (in practice) do not provide the throughput guarantees that network operators need to make efficient use of their expensive long-haul links. In this paper we consider how optics can be used to scale capacity and reduce power in a router. We start with the promising load-balanced switch architecture proposed by C.-S. Chang. This approach eliminates the scheduler, is scalable, and guarantees 100% throughput for a broad class of traffic. But several problems need to be solved to make this architecture practical: (1) Packets can be mis-sequenced, (2) Pathological periodic traffic patterns can make throughput arbitrarily small, (3) The architecture requires a rapidly configuring switch fabric, and (4) It does not work when linecards are missing or have failed. In this paper we solve each problem in turn, and describe new architectures that include our solutions. We motivate our work by designing a 100 Tb/s packet-switched router arranged as 640 linecards, each operating at 160 Gb/s. We describe two different implementations based on technology available within the next three years.

Proceedings ArticleDOI
09 Jul 2003
TL;DR: It is demonstrated that in the case of asymmetric traffic distribution, where load imbalance is most pronounced, significant throughput gains can be obtained while the gains in the symmetric case are modest.
Abstract: Third generation code-division multiple access (CDMA) systems propose to provide packet data service through a high speed shared channel with intelligent and fast scheduling at the base-stations. In the current approach base-stations schedule independently of other base-stations. We consider scheduling schemes in which scheduling decisions are made jointly for a cluster of cells thereby enhancing performance through interference avoidance and dynamic load balancing. We consider algorithms that assume complete knowledge of the channel quality information from each of the base-stations to the terminals at the centralized scheduler as well as a two-tier scheduling strategy that assumes only the knowledge of the long term channel conditions at the centralized scheduler. We demonstrate that in the case of asymmetric traffic distribution, where load imbalance is most pronounced, significant throughput gains can be obtained while the gains in the symmetric case are modest. Since the load balancing is achieved through centralized scheduling, our scheme can adapt to time-varying traffic patterns dynamically.

Journal ArticleDOI
TL;DR: This article summarizes in a systematic way the main OBS design parameters and the solutions that have been proposed in the open literature and shows how the framework achieves high traffic throughput and high resource utilization.
Abstract: Optical burst switching is a promising solution for all-optical WDM networks. It combines the benefits of optical packet switching and wavelength routing while taking into account the limitations of the current all-optical technology. In OBS, the user data is collected at the edge of the network, sorted based on a destination address, and grouped into variable sized bursts. Prior to transmitting a burst, a control packet is created and immediately sent toward the destination in order to set up a bufferless optical path for its corresponding burst. After an offset delay time, the data burst itself is transmitted without waiting for a positive acknowledgment from the destination node. The OBS framework has been widely studied in the past few years because it achieves high traffic throughput and high resource utilization. However, despite the OBS trademarks such as dynamic connection setup or strong separation between data and control, there are many differences in the published OBS architectures. In this article we summarize in a systematic way the main OBS design parameters and the solutions that have been proposed in the open literature.
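
The edge-node behavior described above (assemble packets into a burst, send the control packet ahead, launch the burst after an offset without waiting for an acknowledgment) sketches as follows. The size/timer thresholds and the per-hop control processing delay used for the offset are typical OBS design choices, assumed here rather than taken from any one architecture:

```python
MAX_BURST_BYTES = 64_000     # size-triggered assembly threshold (assumed)
MAX_WAIT_S = 0.001           # timer-triggered assembly threshold (assumed)
PER_HOP_PROC_S = 0.000_01    # control-packet processing time per core node

class BurstAssembler:
    def __init__(self, destination, hops):
        self.destination = destination
        self.offset = hops * PER_HOP_PROC_S  # offset covers path setup
        self.buffer, self.size, self.first_arrival = [], 0, None

    def add_packet(self, packet, now):
        if self.first_arrival is None:
            self.first_arrival = now
        self.buffer.append(packet)
        self.size += len(packet)
        if self.size >= MAX_BURST_BYTES or now - self.first_arrival >= MAX_WAIT_S:
            return self.flush(now)
        return None

    def flush(self, now):
        # Control packet leaves immediately to reserve a bufferless path;
        # the data burst follows after the offset, with no positive
        # acknowledgment from the destination (tell-and-go).
        control = (self.destination, self.size, now)
        burst = (self.buffer, now + self.offset)
        self.buffer, self.size, self.first_arrival = [], 0, None
        return control, burst
```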

Proceedings ArticleDOI
09 Jul 2003
TL;DR: This paper develops a framework for opportunistic scheduling over multiple wireless channels that transforms selection of the best users and rates from a complex general optimization problem into a decoupled and tractable formulation.
Abstract: Emerging spread spectrum high-speed data networks utilize multiple channels via orthogonal codes or frequency-hopping patterns such that multiple users can transmit concurrently. In this paper, we develop a framework for opportunistic scheduling over multiple wireless channels. With a realistic channel model, any subset of users can be selected for data transmission at any time, albeit with different throughputs and system resource requirements. We first transform selection of the best users and rates from a complex general optimization problem into a decoupled and tractable formulation: a multiuser scheduling problem that maximizes total system throughput and a control-update problem that ensures long-term deterministic or probabilistic fairness constraints. We then design and evaluate practical schedulers that approximate these objectives.

Book ChapterDOI
01 Sep 2003
TL;DR: In this paper, a scalable, low-latency architecture is proposed to tackle the fan-out, match, and encoding bottlenecks and achieve operating frequencies in excess of 340 MHz for fast Virtex devices.
Abstract: Intrusion Detection Systems such as Snort scan incoming packets for evidence of security threats. The most computation-intensive part of these systems is a text search against hundreds of patterns, and must be performed at wire-speed. FPGAs are particularly well suited for this task and several such systems have been proposed. In this paper we expand on previous work, in order to achieve and exceed a processing bandwidth of 11 Gb/s. We employ a scalable, low-latency architecture, and use extensive fine-grain pipelining to tackle the fan-out, match, and encode bottlenecks and achieve operating frequencies in excess of 340 MHz for fast Virtex devices. To increase throughput, we use multiple comparators and allow for parallel matching of multiple search strings. We evaluate the area and latency cost of our approach and find that the match cost per search pattern character is between 4 and 5 logic cells.

Journal ArticleDOI
TL;DR: This work discusses the trade-offs between achieving capacity and energy consumption, how transport capacity might be affected by considering in-network processing, and the implications of this study for the design of practical protocols for large-scale data-gathering wireless sensor networks.

Journal ArticleDOI
TL;DR: It is shown through simulation that PARO is capable of outperforming traditional broadcast-based routing protocols (e.g., MANET routing protocols) due to its energy conserving point-to-point on-demand design.
Abstract: This paper introduces PARO, a dynamic power controlled routing scheme that helps to minimize the transmission power needed to forward packets between wireless devices in ad hoc networks. Using PARO, one or more intermediate nodes called "redirectors" elect to forward packets on behalf of source-destination pairs, thus reducing the aggregate transmission power consumed by wireless devices. PARO is applicable to a number of networking environments including wireless sensor networks, home networks, and mobile ad hoc networks. In this paper, we present the detailed design of PARO and evaluate the protocol using simulation and experimentation. We show through simulation that PARO is capable of outperforming traditional broadcast-based routing protocols (e.g., MANET routing protocols) due to its energy conserving point-to-point on-demand design. We discuss our experiences from an implementation of the protocol in an experimental wireless testbed using off-the-shelf radio technology. We also evaluate the impact of dynamic power controlled routing on traditional network performance metrics such as end-to-end delay and throughput.
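
The redirector election at the heart of PARO reduces to a simple comparison: a candidate node C offers to relay between A and B when two short hops need less total transmit power than the direct hop. The sketch below assumes a power-law path-loss model (transmit power proportional to distance^alpha, with alpha = 4), which is an illustrative assumption rather than PARO's measured-power mechanism:

```python
import math

ALPHA = 4.0  # path-loss exponent (assumed)

def tx_power(src, dst):
    """Relative transmit power needed to reach dst from src."""
    return math.dist(src, dst) ** ALPHA

def saves_power(a, b, c):
    """Would routing A -> C -> B cost less transmit power than A -> B?"""
    return tx_power(a, c) + tx_power(c, b) < tx_power(a, b)

a, b, c = (0.0, 0.0), (10.0, 0.0), (5.0, 1.0)
print(saves_power(a, b, c))  # True: a near-midpoint redirector halves each hop
```

With a super-linear path-loss exponent, many short hops are cheaper in transmit power than one long hop, which is why inserting redirectors reduces the aggregate power even though it adds forwarding steps.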