
Showing papers by Sonia Fahmy published in 2001


Journal ArticleDOI
TL;DR: The paper focuses on application-level techniques, including methods based on compression algorithm features, layered encoding, rate shaping, adaptive error control, and bandwidth smoothing.
Abstract: Though the integrated services model and resource reservation protocol (RSVP) provide support for quality of service, in the current Internet only best-effort traffic is widely supported. New high-speed technologies such as ATM (asynchronous transfer mode), gigabit Ethernet, fast Ethernet, and frame relay have spurred higher user expectations. These technologies are expected to support real-time applications such as video-on-demand, Internet telephony, distance education and video-broadcasting. Towards this end, networking methods such as service classes and quality of service models are being developed. Today's Internet is a heterogeneous networking environment. In such an environment, resources available to multimedia applications vary. To adapt to the changes in network conditions, both networking techniques and application-layer techniques have been proposed. In this paper, we focus on the application-level techniques, including methods based on compression algorithm features, layered encoding, rate shaping, adaptive error control, and bandwidth smoothing. We also discuss operating system methods to support adaptive multimedia. Throughout the paper, we discuss how feedback from lower networking layers can be used by these application-level adaptation schemes to deliver the highest quality content.
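One of the adaptation ideas mentioned in the abstract, layered encoding driven by feedback from lower layers, can be sketched roughly as follows. The layer rates, the headroom factor, and the add/drop policy are illustrative assumptions, not the schemes surveyed in the paper.

```python
# Illustrative sketch: a receiver adapts the number of layered-encoding layers
# it subscribes to, based on bandwidth feedback from lower layers.
# Layer rates, headroom factor, and policy are assumptions, not the paper's.

LAYER_RATES_KBPS = [64, 64, 128, 256]   # incremental rate of base + enhancement layers

def cumulative_rate(num_layers):
    """Total rate consumed when subscribed to the first num_layers layers."""
    return sum(LAYER_RATES_KBPS[:num_layers])

def adapt_layers(current_layers, estimated_bw_kbps, add_headroom=1.25):
    """Drop layers that no longer fit; add one layer only if it fits with headroom."""
    while current_layers > 1 and cumulative_rate(current_layers) > estimated_bw_kbps:
        current_layers -= 1
    if (current_layers < len(LAYER_RATES_KBPS) and
            cumulative_rate(current_layers + 1) * add_headroom <= estimated_bw_kbps):
        current_layers += 1
    return current_layers

layers = 1
for _ in range(4):                       # called once per feedback interval
    layers = adapt_layers(layers, estimated_bw_kbps=400)
print(layers)                            # settles at 3 layers (64 + 64 + 128 = 256 kb/s)
```

The headroom factor adds hysteresis so the receiver does not oscillate between adding and dropping the same enhancement layer on every feedback interval.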

96 citations


Journal ArticleDOI
TL;DR: A firewall dataflow model is created, composed of discrete processing stages that reflect the processing characteristics of a given firewall; it provides a more complete view of what happens inside a firewall, beyond the filtering and any other rules that the administrator may have established.
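The listing gives no detail of the model itself, so the following is only a rough sketch of the general idea of a firewall as a pipeline of discrete processing stages; the stage names and their order are hypothetical, not the stages defined in the paper.

```python
# Rough sketch: a firewall modeled as a pipeline of discrete processing stages.
# The stages named here (ingress checks, rule matching, logging) are hypothetical.

def ingress_checks(pkt):
    """Drop obviously malformed packets before any rule is consulted."""
    return pkt if pkt.get("valid_header", True) else None

def rule_match(pkt, rules):
    """Apply the administrator's filtering rules; None means the packet is dropped."""
    for rule in rules:
        if rule["proto"] == pkt["proto"] and rule["dst_port"] == pkt["dst_port"]:
            return pkt if rule["action"] == "accept" else None
    return None                      # default deny

def log_stage(pkt):
    """Stages other than filtering (e.g. accounting/logging) also cost processing time."""
    print("forwarded:", pkt)
    return pkt

def firewall(pkt, rules):
    """Run the packet through each stage in order, stopping as soon as one drops it."""
    for stage in (ingress_checks, lambda p: rule_match(p, rules), log_stage):
        pkt = stage(pkt)
        if pkt is None:
            return None
    return pkt

rules = [{"proto": "tcp", "dst_port": 80, "action": "accept"}]
firewall({"proto": "tcp", "dst_port": 80, "valid_header": True}, rules)
```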

43 citations



Journal ArticleDOI
TL;DR: A performance analysis of TCP over satellite-ATM links using a best-effort service, the ATM unspecified bit rate (UBR) service, shows that the relative impacts of buffer management, TCP policies, and rate guarantees on TCP performance depend heavily on the latency of the network.
Abstract: Future broadband satellite networks will support a variety of service types. Many such systems are being designed with ATM or ATM-like technology. A majority of Internet applications use TCP for data transfer. As a result, these systems must efficiently transport TCP traffic and provide service guarantees to such traffic. Several mechanisms have been presented in recent literature to improve TCP performance. Most of these can be categorized as either TCP enhancements or network-based buffer management techniques. Providing minimum rate guarantees to TCP traffic has also been suggested as a way to improve its performance in the presence of higher priority traffic sharing the link. However, the relative performance of the TCP enhancements versus the buffer management schemes has not been analyzed for long-latency networks. In this paper, we address three issues. First, we present a performance analysis of TCP over satellite-ATM links using a best-effort service, the ATM unspecified bit rate (UBR) service. This analysis shows that the relative impacts of buffer management, TCP policies, and rate guarantees on TCP performance depend heavily on the latency of the network. Second, we show through simulations that the buffer size required in the network for high TCP performance is proportional to the delay-bandwidth product of the network. Third, we propose a buffer management scheme called differential fair buffer allocation (DFBA) and show how it is used to implement a service that provides minimum rate guarantees to TCP traffic. An example of such a service is the ATM guaranteed frame rate (GFR) service, which is being standardized by the ATM Forum and the ITU. Copyright © 2001 John Wiley & Sons, Ltd.
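The second result, buffer size proportional to the delay-bandwidth product, can be illustrated with a small sketch. The proportionality constant and the GEO link parameters below are assumptions for illustration, not the paper's measured values.

```python
# Sketch of the buffer-sizing rule stated above: switch buffer proportional to
# the delay-bandwidth product. The proportionality constant (0.5) and the GEO
# round-trip time are illustrative assumptions.

def delay_bandwidth_product_bytes(rtt_s, link_rate_bps):
    """Bytes 'in flight' on the path: round-trip time times link rate."""
    return rtt_s * link_rate_bps / 8.0

def required_buffer_bytes(rtt_s, link_rate_bps, proportionality=0.5):
    """Buffer sized as a fraction or multiple of the delay-bandwidth product."""
    return proportionality * delay_bandwidth_product_bytes(rtt_s, link_rate_bps)

# Example: a GEO satellite hop (~550 ms RTT) feeding a 155 Mb/s (OC-3) ATM link.
dbp = delay_bandwidth_product_bytes(rtt_s=0.55, link_rate_bps=155e6)
buf = required_buffer_bytes(rtt_s=0.55, link_rate_bps=155e6)
print(f"delay-bandwidth product ~ {dbp / 1e6:.1f} MB, buffer ~ {buf / 1e6:.1f} MB")
```

The point of the rule is that the same buffer that works well for a terrestrial LAN is far too small once the round-trip time stretches to satellite latencies.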

14 citations


01 Jan 2001
TL;DR: The Transmission Control Protocol is a reliable, connection-oriented stream protocol in the Internet Protocol suite; each connection starts with a 3-way handshake.
Abstract: The Transmission Control Protocol (TCP) is a reliable connection-oriented stream protocol in the Internet Protocol suite. A TCP connection is like a virtual circuit between two computers, conceptually very much like a telephone connection. To maintain this virtual circuit, TCP at each end needs to store information on the current status of the connection, e.g., the last byte sent. TCP is called connection-oriented because it starts with a 3-way handshake, and because it maintains this state information for each connection. TCP is called a stream protocol because it works with units of bytes.
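A minimal sketch of the per-connection state and the client side of the 3-way handshake described above; the field names follow common TCP conventions (SND.NXT, SND.UNA, RCV.NXT), but the code is an illustration, not an implementation from the paper.

```python
# Minimal sketch of the per-connection state TCP keeps, and the client-side
# 3-way handshake transitions. A real TCP tracks many more variables.

from dataclasses import dataclass

@dataclass
class TcpConnectionState:
    state: str = "CLOSED"      # CLOSED -> SYN_SENT -> ESTABLISHED (client side)
    snd_nxt: int = 0           # next sequence number to send
    snd_una: int = 0           # oldest unacknowledged sequence number
    rcv_nxt: int = 0           # next byte expected from the peer

def active_open(conn, initial_seq):
    """Client sends SYN: first step of the 3-way handshake."""
    conn.state = "SYN_SENT"
    conn.snd_una = initial_seq
    conn.snd_nxt = initial_seq + 1     # the SYN consumes one sequence number

def on_syn_ack(conn, peer_seq, ack):
    """Client receives SYN+ACK and replies with ACK: the handshake completes."""
    if conn.state == "SYN_SENT" and ack == conn.snd_nxt:
        conn.rcv_nxt = peer_seq + 1
        conn.snd_una = ack
        conn.state = "ESTABLISHED"

conn = TcpConnectionState()
active_open(conn, initial_seq=1000)
on_syn_ack(conn, peer_seq=5000, ack=1001)
print(conn.state)   # ESTABLISHED
```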

11 citations


Proceedings ArticleDOI
15 Oct 2001
TL;DR: Simulation results indicate that this adaptive conditioner improves the throughput of data-intensive applications such as large FTP transfers, and achieves low packet delays and response times for Telnet and WWW traffic.
Abstract: We design and evaluate an adaptive traffic conditioner to improve application performance over the differentiated services assured forwarding per-hop behavior. The conditioner is adaptive because the marking algorithm changes based upon the current number of flows traversing an edge router. If there are a small number of flows, the conditioner maintains and uses state information to intelligently protect critical TCP packets. On the other hand, if there are many flows going through the edge router, the conditioner uses only the flow characteristics indicated in the TCP packet headers to mark packets, without requiring per-flow state. Simulation results indicate that this adaptive conditioner improves the throughput of data-intensive applications such as large FTP transfers, and achieves low packet delays and response times for Telnet and WWW traffic.
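The adaptive marking decision can be sketched as follows; the flow-count threshold and the heuristics for "critical" packets are assumptions for illustration, not the paper's exact rules.

```python
# Illustrative sketch of an adaptive conditioner: with few flows it keeps
# per-flow state to protect critical TCP packets; with many flows it marks
# statelessly from TCP header fields. Threshold and heuristics are assumptions.

FLOW_THRESHOLD = 100          # assumed cut-over point between the two modes
flow_table = {}               # per-flow state, only maintained in stateful mode

def mark_packet(pkt, active_flow_count):
    """Return a DiffServ AF drop precedence: 'green' (protect) or 'yellow'."""
    if active_flow_count <= FLOW_THRESHOLD:
        # Stateful mode: remember what has been seen for this flow and protect
        # packets that are expensive to lose (here, suspected retransmissions).
        state = flow_table.setdefault(pkt["flow_id"], {"highest_seq": 0})
        is_retransmission = pkt["seq"] < state["highest_seq"]
        state["highest_seq"] = max(state["highest_seq"], pkt["seq"])
        return "green" if is_retransmission else "yellow"
    # Stateless mode: use only what the TCP header reveals, e.g. protect
    # SYNs and pure ACKs that keep the feedback loop alive.
    if pkt["flags"] & 0x02 or pkt["payload_len"] == 0:   # SYN or pure ACK
        return "green"
    return "yellow"

print(mark_packet({"flow_id": 1, "seq": 100, "flags": 0x10, "payload_len": 0},
                  active_flow_count=500))   # 'green': pure ACK, stateless mode
```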

5 citations


01 Jan 2001
TL;DR: The results indicate that pgmcc is robust, but may need some modifications, such as an algorithm for dynamically determining the timeout and better handling of switches among receiver representatives.
Abstract: Fairness to current Internet traffic, particularly TCP, is one of the important requirements for deploying multicast protocols. In this paper, we investigate the fairness of the multicast congestion control protocol "pgmcc," implemented on top of the PGM multicast protocol. Pgmcc is one of the most promising multicast congestion control proposals, but it has not yet been extensively stress-tested in the literature. Two sets of experiments are conducted in this paper. In the first set of experiments, we examine the effect of feedback aggregation on pgmcc. In the second set of experiments, we investigate the performance of pgmcc when competing with bursty TCP and UDP flows in scenarios with multiple time-varying bottlenecks and round-trip times. Our results indicate that pgmcc is robust, but may need some modifications, such as an algorithm for dynamically determining the timeout and better handling of switches among receiver representatives.
Keywords: multicast, congestion control, pgmcc, fairness, feedback aggregation
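For context, pgmcc-style congestion control tracks a representative receiver (the "acker"). The sketch below shows one plausible way such a representative could be selected using a simplified TCP-friendly throughput estimate; the formula and field names are illustrative assumptions rather than this paper's implementation.

```python
# Sketch: selecting the representative receiver ("acker") as the one with the
# lowest estimated TCP-friendly throughput, T ~ 1 / (RTT * sqrt(p)).
# An illustrative simplification, not this paper's exact mechanism.

import math

def estimated_throughput(rtt_s, loss_rate):
    """Simplified TCP-friendly throughput estimate (constant factors ignored)."""
    return 1.0 / (rtt_s * math.sqrt(max(loss_rate, 1e-6)))

def select_acker(receiver_reports):
    """Pick the worst receiver: the one the window-based control should track."""
    return min(receiver_reports,
               key=lambda r: estimated_throughput(r["rtt_s"], r["loss_rate"]))

reports = [
    {"id": "r1", "rtt_s": 0.040, "loss_rate": 0.001},
    {"id": "r2", "rtt_s": 0.300, "loss_rate": 0.020},   # long, lossy path
]
print(select_acker(reports)["id"])   # 'r2' becomes the acker
```

Feedback aggregation matters here because the loss and RTT reports that drive this selection can be delayed or summarized inside the network, which is exactly what the first set of experiments probes.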

5 citations