
Showing papers by "Nirwan Ansari published in 2004"


Journal Article•DOI•
TL;DR: Simulations show that in a congestion-free network with a 1% random wireless packet loss rate, TCP-Jersey achieves 17% and 85% goodput improvements over TCP-Westwood and TCP-Reno, respectively; in a congested network where a TCP flow competes with VoIP flows, it achieves 9% and 76% improvements while maintaining fair and friendly behavior toward other TCP flows.
Abstract: Improving the performance of the transmission control protocol (TCP) in wireless Internet protocol (IP) communications has been an active research area. The performance degradation of TCP in wireless and wired-wireless hybrid networks is mainly due to its inability to differentiate packet losses caused by network congestion from losses caused by wireless link errors. In this paper, we propose a new TCP scheme, called TCP-Jersey, which is capable of distinguishing the wireless packet losses from the congestion packet losses, and reacting accordingly. TCP-Jersey consists of two key components, the available bandwidth estimation (ABE) algorithm and the congestion warning (CW) router configuration. ABE is a TCP sender side addition that continuously estimates the bandwidth available to the connection and guides the sender to adjust its transmission rate when the network becomes congested. CW is a configuration of network routers such that routers alert end stations by marking all packets when there is a sign of an incipient congestion. The marking of packets by the CW configured routers helps the sender of the TCP connection to effectively differentiate packet losses caused by network congestion from those caused by wireless link errors. This paper describes the design of TCP-Jersey, and presents results from experiments using the NS-2 network simulator. Results from simulations show that in a congestion-free network with a 1% random wireless packet loss rate, TCP-Jersey achieves 17% and 85% improvements in goodput over TCP-Westwood and TCP-Reno, respectively; in a congested network where a TCP flow competes with VoIP flows, with a 1% random wireless packet loss rate, TCP-Jersey achieves 9% and 76% improvements in goodput over TCP-Westwood and TCP-Reno, respectively. Our experiments with multiple TCP flows show that TCP-Jersey maintains fair and friendly behavior with respect to other TCP flows.
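The loss-differentiation idea described above can be sketched in a few lines. This is an illustrative simplification, not TCP-Jersey's actual implementation: the function names, the "CW mark seen recently" signal, and the ABE-rate-to-window conversion are all assumptions made for the sketch.

```python
def classify_loss(cw_marked_recently: bool) -> str:
    """If congestion-warning (CW) marks were seen on recent ACKs, attribute
    a detected loss to congestion; otherwise to a wireless link error."""
    return "congestion" if cw_marked_recently else "wireless"

def react(cwnd: float, loss_kind: str, abe_rate_bps: float,
          rtt_s: float, mss_bytes: int) -> float:
    """Adjust the congestion window after a loss (illustrative only).
    On congestion, fall back to the ABE-estimated rate expressed in
    segments; on a wireless loss, keep the current window."""
    if loss_kind == "congestion":
        # rate * RTT gives the bytes in flight the path can sustain
        return max(1.0, abe_rate_bps * rtt_s / (8 * mss_bytes))
    return cwnd  # wireless error: no congestion, so do not back off
```

The key point is the second branch: a plain TCP-Reno sender would halve `cwnd` on any loss, which is exactly the behavior that penalizes wireless links.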

238 citations


Journal Article•DOI•
TL;DR: The fundamentals and algorithms of the state of the art in M-D interleaving - the t-interleaved array approach by Blaum, Bruck and Vardy, and the successive packing approach by Shi and Zhang - are presented and analyzed, and a performance comparison between the approaches is made.
Abstract: To ensure data fidelity, a number of random error correction codes (ECCs) have been developed. ECC is, however, not efficient in combating bursts of errors, i.e., a group of consecutive (in the one-dimensional (1-D) case) or connected (in the two- and three-dimensional (2-D and 3-D) cases) erroneous code symbols owing to the bursty nature of errors. Interleaving is a process to rearrange code symbols so as to spread bursts of errors over multiple code-words that can be corrected by ECCs. By converting bursts of errors into random-like errors, interleaving thus becomes an effective means to combat error bursts. In this article, we first illustrate the philosophy of interleaving by introducing a 1-D block interleaving technique. Then multi-dimensional (M-D) bursts of errors and optimality of interleaving are defined. The fundamentals and algorithms of the state of the art of M-D interleaving - the t-interleaved array approach by Blaum, Bruck and Vardy and the successive packing approach by Shi and Zhang - are presented and analyzed. In essence, a t-interleaved array is constructed by closely tiling a building block, which is solely determined by the burst size t. Therefore, the algorithm needs to be implemented each time for a different burst size in order to maintain either the error burst correction capability or optimality. Since the size of error bursts is usually not known in advance, the application of the technique is somewhat limited. The successive packing algorithm, based on the concept of a 2 × 2 basis array, only needs to be implemented once for a given square 2-D array, and yet it remains optimal for a set of bursts of errors having different sizes. The performance comparison between different approaches is made. Future research on the successive packing approach is discussed.
Finally, applications of 2-D/3-D successive packing interleaving in enhancing the robustness of image/video data hiding are presented as examples of practical utilization of interleaving.
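The 1-D block interleaving that opens the article can be illustrated with a minimal row-write/column-read permutation. This is the generic textbook interleaver, not the t-interleaved-array or successive-packing algorithms the article analyzes:

```python
def interleave(symbols, depth):
    """1-D block interleaving: write `symbols` row by row into `depth`
    rows, then read column by column. Symbols adjacent within a codeword
    (row) end up `depth` positions apart in the channel stream, so a
    channel burst of length <= depth hits at most one symbol per codeword."""
    assert len(symbols) % depth == 0
    cols = len(symbols) // depth
    return [symbols[r * cols + c] for c in range(cols) for r in range(depth)]

def deinterleave(symbols, depth):
    """Inverse permutation: write column-wise, read row-wise."""
    assert len(symbols) % depth == 0
    cols = len(symbols) // depth
    out = [None] * len(symbols)
    for idx, s in enumerate(symbols):
        c, r = divmod(idx, depth)
        out[r * cols + c] = s
    return out
```

For example, with `depth=3` a burst corrupting three consecutive channel positions lands on three different codewords after de-interleaving, where a random-error ECC can correct each single error.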

101 citations


Proceedings Article•DOI•
27 Jun 2004
TL;DR: This work proposes a novel robust lossless data hiding technique, which does not generate salt-and-pepper noise, and has been successfully applied to many commonly used images, thus demonstrating its generality.
Abstract: Recently, among various data hiding techniques, a new subset, lossless data hiding, has drawn tremendous interest. Most existing lossless data hiding algorithms are, however, fragile in the sense that they can be defeated when compression or other small alteration is applied to the marked image. The method of C. De Vleeschouwer et al. (see IEEE Trans. Multimedia, vol.5, p.97-105, 2003) is the only existing semi-fragile lossless data hiding technique (also referred to as robust lossless data hiding), which is robust against high quality JPEG compression. We first point out that this technique has a fatal problem: salt-and-pepper noise caused by using modulo 256 addition. We then propose a novel robust lossless data hiding technique, which does not generate salt-and-pepper noise. This technique has been successfully applied to many commonly used images (including medical images, more than 1000 images in the CorelDRAW database, and JPEG2000 test images), thus demonstrating its generality. The experimental results show that the visual quality, payload and robustness are acceptable. In addition to medical and law enforcement fields, it has been applied to authenticate losslessly compressed JPEG2000 images.

86 citations


Journal Article•DOI•
TL;DR: An overview of various features of BWA systems toward realizing a high level of scalability to support a potentially fast expanding network is presented.
Abstract: Fixed broadband wireless access systems, such as the local multipoint distribution service, use an open system architecture that supports a scalable solution for the Internet services over IEEE 802.16 wireless networks. This article presents an overview of various features of BWA systems toward realizing a high level of scalability to support a potentially fast expanding network. This is achieved by optimizing various network resources, which include utilizing the available bandwidth efficiently, making a minor enhancement to an existing system that minimizes the disruption to network services during the network expansion process, and combining the benefits of different features to increase network capacity.

79 citations


Patent•
03 Dec 2004
TL;DR: In this patent, a method is proposed to identify at least two subsets of pixels within a block of an image and to form a plurality of pixel groups from those subsets, each pixel group having at least one pixel from each of the at least two subsets.
Abstract: A method including identifying at least two subsets of pixels within a block of an image; forming a plurality of pixel groups from the at least two subsets of pixels, each pixel group having at least one pixel from a first of the at least two subsets and at least one pixel from a second of the at least two subsets; producing a plurality of difference values, each pixel group providing one of said difference values, each difference value being based on differences between pixel values of pixels within one of the pixel groups; and modifying pixel values of pixels in less than all of the at least two subsets, thereby embedding a bit value into the block.

40 citations


Journal Article•DOI•
TL;DR: It is shown that the computational complexity of computing the subnet topology over which link state information is distributed is the same as that of computing the minimum spanning tree.
Abstract: Distributing link state information may place a heavy burden on the network resource. In this letter, based on the tree-based reliable topology (TRT), we propose a simple but efficient and reliable scheme for disseminating link state information. We show that the computational complexity of computing the subnet topology over which link state information is distributed is the same as that of computing the minimum spanning tree.

38 citations


Journal Article•DOI•
TL;DR: Simulation results for different fiber-wavelength configurations conform closely to the numerical results from the proposed model, demonstrating its feasibility for estimating the blocking performance of multifiber WDM optical networks.
Abstract: In this letter, we propose a computational model for calculating blocking probabilities of multifiber wavelength division multiplexing (WDM) optical networks. We first derive the blocking probability of a fiber based on a Markov chain, from which the blocking probability of a link is derived by means of conditional probabilities. The blocking probability of a lightpath can be computed by a recursive formula. Finally, the network-wide blocking probability can be expressed as the ratio of the total blocked load versus the total offered load. Simulation results for different fiber-wavelength configurations conform closely to the numerical results based on our proposed model, thus demonstrating the feasibility of our proposed model for estimating the blocking performance of multifiber WDM optical networks.
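A single fiber carrying W wavelengths under Poisson call arrivals behaves as a classic loss system, whose blocking probability follows from a birth-death Markov chain via the standard Erlang B recursion. The sketch below is an illustrative stand-in for the letter's fiber-level derivation, not its exact model (which further composes fiber, link, and lightpath blocking via conditional probabilities):

```python
def erlang_b(offered_load, wavelengths):
    """Blocking probability of an M/M/c/c loss system (c = wavelengths
    on one fiber), computed by the numerically stable Erlang B recursion:
    B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1)) for offered load a."""
    b = 1.0
    for n in range(1, wavelengths + 1):
        b = offered_load * b / (n + offered_load * b)
    return b
```

With one erlang offered to a single-wavelength fiber, half the calls are blocked; adding wavelengths drives blocking down rapidly, which is the multiplexing gain the multifiber model quantifies.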

22 citations


Journal Article•DOI•
TL;DR: A tight lower bound is derived on the worst-case computational complexities of the optimal comparison-based solutions to AHSP, the All Hops Shortest Paths problem.
Abstract: In this letter, we introduce and investigate a new problem referred to as the All Hops Shortest Paths (AHSP) problem. The AHSP problem involves selecting, for all hop counts, the shortest paths from a given source to any other node in a network. We derive a tight lower bound on the worst-case computational complexities of the optimal comparison-based solutions to AHSP.
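The AHSP problem itself admits a straightforward Bellman-Ford-style dynamic program. This is a naive illustrative solution for intuition only; the letter's contribution is a lower bound on optimal comparison-based solutions, not this algorithm:

```python
def all_hops_shortest_paths(n, edges, src):
    """d[h][v] = least cost of a walk from `src` to node v using at most
    h hops, for every hop count h = 0..n-1. `edges` is a list of directed
    (u, v, weight) triples over nodes 0..n-1."""
    INF = float("inf")
    d = [[INF] * n]
    d[0][src] = 0.0
    for h in range(1, n):
        row = d[h - 1][:]              # keep the best path with fewer hops
        for u, v, w in edges:
            if d[h - 1][u] + w < row[v]:
                row[v] = d[h - 1][u] + w
        d.append(row)
    return d
```

Reading down column v gives the shortest-path cost to v for every hop budget, which is exactly the per-hop-count answer AHSP asks for.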

20 citations


Journal Article•DOI•
TL;DR: This paper forms the concept of extrema equivalence for estimating the complexity of a function, and proposes a Mini-max initialization method to select the initial values of the weights for the network that is proven to greatly speed up training.

19 citations


Proceedings Article•DOI•
19 Apr 2004
TL;DR: The low complexity distributed bandwidth allocation (LCDBA) algorithm is proposed to allocate bandwidth fairly to RPR nodes with a very low computational complexity, O(1); it converges to the exact max-min fairness in a few round trip times with no oscillation at the steady state.
Abstract: The resilient packet ring (RPR), defined under IEEE 802.17, has been proposed as a high-speed backbone technology for metropolitan area networks. RPR is introduced to mitigate the underutilization and unfairness problems associated, respectively, with the current SONET and Ethernet technologies. The key performance objectives of RPR are to achieve high bandwidth utilization, optimum spatial reuse on the dual rings, and fairness. The challenge is to design an algorithm that can react dynamically to the traffic in achieving these objectives. Previous attempts have critical limitations, such as oscillation of the allocated bandwidth or high computational complexity. We propose the low complexity distributed bandwidth allocation (LCDBA) algorithm to allocate bandwidth fairly to RPR nodes with a very low computational complexity, O(1). It converges to the exact max-min fairness in a few round trip times with no oscillation at the steady state.
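Max-min fairness, the objective LCDBA is stated to converge to, can be made concrete with the classic water-filling computation on a single shared link. This is a sketch of the fairness objective only, not the LCDBA algorithm (which reaches the same allocation distributively in O(1) per node):

```python
def max_min_fair(capacity, demands):
    """Max-min fair allocation of `capacity` among flows with the given
    demands: repeatedly grant each unsatisfied flow an equal share;
    flows demanding less than the share are satisfied and their leftover
    capacity is redistributed."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    cap = capacity
    while active:
        share = cap / len(active)
        sat = {i for i in active if demands[i] <= share}
        if not sat:                      # no one fits: split cap equally
            for i in active:
                alloc[i] = share
            return alloc
        for i in sat:                    # satisfy small demands in full
            alloc[i] = demands[i]
            cap -= demands[i]
        active -= sat
    return alloc
```

For demands (2, 3, 8) on a link of capacity 10, the two small flows get their full demand and the large flow absorbs the remaining 5 — no flow can gain without shrinking a smaller allocation.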

18 citations


Journal Article•DOI•
TL;DR: It is proved that the proposed Adaptive Queue Management scheme can guarantee local stability for a network with an arbitrary topology and heterogeneous round-trip times for TCP users.
Abstract: In this letter, we propose a new Adaptive Queue Management scheme based on the virtual queue size and the aggregate flow rate. Our proposal is tailored for the widely deployed TCP Reno and does not require modifications to TCP end users. We prove that our scheme can guarantee local stability for a network with an arbitrary topology and heterogeneous round-trip times for TCP users.

Proceedings Article•DOI•
29 Nov 2004
TL;DR: This paper develops a simple core-stateless proportional fair queuing algorithm (CSPFQ) for the assured forwarding (AF) traffic in DiffServ networks and proves analytically and demonstrates through simulations that the algorithm can achieve proportional fair bandwidth allocation among competing flows without requiring routers to estimate flows' fair share rates.
Abstract: Proportional fair queuing is to ensure that a flow passing through the network only consumes a fair share of the network resource that is proportional to its committed rate or other service level agreement (SLA). It is of great importance in Differentiated Services (DiffServ) networks as well as other price incentive network services. In this paper, we propose a simple core-stateless proportional fair queuing algorithm (CSPFQ) for the assured forwarding (AF) traffic in DiffServ networks. We first develop our algorithm based on a fluid model analysis and then extend it to a realizable packet level algorithm. We prove analytically and demonstrate through simulations that our algorithm can achieve proportional fair bandwidth allocation among competing flows without requiring routers to estimate flows' fair share rates. Our simulation results also demonstrate that our algorithm outperforms the weighted core-stateless fair queuing (WC-SFQ) in terms of proportional fairness.
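The proportional-fairness objective CSPFQ targets is simply bandwidth in proportion to committed rate. The sketch below states that objective; it is not the CSPFQ algorithm itself, which achieves the same shares at the packet level without per-flow state in core routers:

```python
def proportional_fair_shares(capacity, committed_rates):
    """Bandwidth shares proportional to each flow's committed (SLA) rate:
    flow i receives capacity * r_i / sum(r). Illustrative names only."""
    total = sum(committed_rates)
    return [capacity * r / total for r in committed_rates]
```

So on a 100 Mb/s link, flows with committed rates in a 1:3 ratio receive 25 and 75 Mb/s, regardless of how aggressively each sends.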

Journal Article•DOI•
27 Dec 2004
TL;DR: Simulation and analytical results demonstrate that the proposed QoS-guaranteed bandwidth allocation scheme provides guaranteed delay and achieves high bandwidth utilisation for a given resource utilisation.
Abstract: Variable bit rate (VBR) video traffic exhibits high burstiness and long range dependence properties, which, in conjunction with the stringent quality of service (QoS) requirements, pose a great challenge in transporting video traffic over a communication network. The authors propose a QoS guaranteed bandwidth allocation for a given resource utilisation. Simulation and analytical results demonstrate that this scheme provides guaranteed delay and achieves high bandwidth utilisation.

Proceedings Article•DOI•
16 Nov 2004
TL;DR: A novel way is proposed to make alerts machine understandable by converting raw alerts into uniform streams, correlating the streams, and extracting attack scenario knowledge.
Abstract: The increasing use of intrusion detection systems and a relatively high false alarm rate can lead to a huge volume of alerts. This makes it very difficult for security administrators to analyze and detect network attacks. Our solution for this problem is to make the alerts machine understandable. We propose a novel way to convert the raw alerts into machine understandable uniform streams, correlate the streams, and extract the attack scenario knowledge. A modified case grammar, the principal-subordinate consequence tagging case grammar, and the 2-atom alert semantic network are used to generate the attack scenario classes. Alert mutual information is also applied to calculate the alert semantic context window size. Based on the alert context, the attack scenario instances are extracted and the attack scenario descriptions are forwarded to the security administrator.

Proceedings Article•DOI•
29 Nov 2004
TL;DR: A high-performance algorithm, the dual iterative all hops k-shortest paths (DIAHKP) algorithm, is proposed; it achieves a 100% success ratio in finding the delay constrained least cost (DCLC) path with very low average computational complexity.
Abstract: We introduce an iterative all hops k-shortest paths (IAHKP) algorithm that is capable of iteratively computing all hops k-shortest path (AHKP) from a source to a destination. Based on IAHKP, a high performance algorithm, dual iterative all hops k-shortest paths (DIAHKP) algorithm, is proposed. It can achieve 100% success ratio in finding the delay constrained least cost (DCLC) path with very low average computational complexity. The underlying concept is that since DIAHKP is a k-shortest-paths-based solution to DCLC, implying that its computational complexity increases with k, we can minimize its computational complexity by adaptively minimizing k, while achieving 100% success ratio in finding the optimal feasible path. Through extensive analysis and simulations, we show that DIAHKP is highly effective and flexible. By setting a very small upper bound to k (k=1,2), DIAHKP still can achieve very satisfactory performance. With only an average computational complexity of twice that of the standard Bellman-Ford algorithm, DIAHKP achieves 100% success ratio in finding the optimal feasible path in the typical 32-node network.
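The final selection step of a k-shortest-paths-based DCLC solution can be sketched as follows. This is an assumed simplification: it takes candidate (cost, delay) pairs as given - e.g. the all-hops k-shortest paths an IAHKP-style routine would emit - and only shows how the delay constraint filters them, not the iterative path computation itself:

```python
def dclc_select(candidate_paths, delay_bound):
    """Among candidate (cost, delay) paths, return the least-cost one
    whose delay meets the bound, or None if no candidate is feasible."""
    feasible = [p for p in candidate_paths if p[1] <= delay_bound]
    return min(feasible) if feasible else None
```

DIAHKP's insight is then about controlling k: the fewer candidate paths that must be generated before a feasible one appears, the lower the overall complexity, so k is grown adaptively rather than fixed.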

Proceedings Article•DOI•
29 Nov 2004
TL;DR: This paper proposes a new mechanism to defend against distributed denial of service (DDoS) attacks using path information rather than IP address information; the scheme is based on the four color theorem, which allows color reuse so that even if some portions of the map have more than four neighbors, four colors still suffice to mark all their borders.
Abstract: In this paper, we propose a new mechanism to defend against distributed denial of service (DDoS) attacks with path information rather than IP address information. Instead of the complete binary tree model, our proposal is based on the four color theorem. The salient feature of the theorem is that it allows color reuse so that even if some portions of the map have more than 4 neighbors, 4 colors are still sufficient to mark all their borders. This idea of reuse is very important because some routers have many interfaces and the length of the ID field in the header of an IP packet, where the marking information is embedded, is very limited. Furthermore, our marking scheme takes the Internet hierarchy into account, and greatly relaxes the limitation on the number of interfaces of routers, thus making the scheme more practical. Simulation results have validated our design.

Proceedings Article•DOI•
28 Apr 2004
TL;DR: A low complexity fairness algorithm (LCFA) is proposed to allocate bandwidth fairly to RPR nodes with a very low computational complexity, O(1), and hardware requirements similar to those of the RPR fairness algorithm.
Abstract: The resilient packet ring (RPR), defined under IEEE 802.17, has been proposed as a high-speed backbone technology for metropolitan area networks. RPR is introduced to mitigate the underutilization and unfairness problems associated with the current technologies SONET and Ethernet, respectively. The key performance objectives of RPR are to achieve high bandwidth utilization, optimum spatial reuse on the dual rings, and fairness. The challenge is to design an algorithm that can react dynamically to the traffic in achieving these objectives. The RPR fairness algorithm (J. Kao et al., January 2002) is comparatively simple, but it poses some critical limitations that require further investigation and remedy. One of the major problems is that the amount of bandwidth allocated by the algorithm oscillates severely under unbalanced traffic scenarios. These oscillations present a barrier to achieving spatial reuse and high bandwidth utilization. We propose a low complexity fairness algorithm (LCFA) that allocates bandwidth fairly to RPR nodes with a very low computational complexity, O(1), and hardware requirements similar to those of the RPR fairness algorithm.

Proceedings Article•DOI•
29 Nov 2004
TL;DR: Results demonstrate that the proposed mechanism offers better service differentiation capability and a lower request dropping probability than Intserv-over-Diffserv schemes while maintaining the simplicity of the Diffserv network model.
Abstract: In this paper, a novel concept of decoupling the end-to-end QoS provisioning from the service provisioning at routers in the Diffserv network is proposed to enhance the QoS granularity offered in the Diffserv model and improve both the network resource utilization and user benefits. To realize the concept, we implement a new endpoint admission control, referred to as explicit endpoint admission control, with the service vector concept at the user side, which allows a data flow to choose different services at different routers. At the router side, we propose a new packet marking scheme, by which the end host can obtain the performance of each service class at each router and determine the service vector. The achievable performance of the proposed approach is studied and the corresponding results demonstrate that the proposed mechanism can have better service differentiation capability and lower request dropping probability than the Intserv over Diffserv schemes while it still maintains the simplicity feature of the Diffserv network model.

Journal Article•DOI•
TL;DR: In this letter, based on information theory, a theoretical framework for the optimal link-state update is presented, upon which efficient link-state update policies may be developed.
Abstract: In this letter, based on information theory, we present a theoretical framework for the optimal link-state update, upon which efficient link-state update policies may be developed.

Patent•
12 Apr 2004
TL;DR: In this patent, methods and apparatus are provided for encoding a pixel domain image with hidden data by modifying the histogram of the image to make space for the hidden data.
Abstract: Methods and apparatus are provided for encoding (118) a pixel domain image with hidden data (120) by modifying the histogram (112) of the pixel domain image to make space for such hidden data.
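A textbook histogram-shifting embedder illustrates the "modify the histogram to make space" idea. This is a generic sketch in the spirit of the patent's claim, not the patented method itself; it assumes 8-bit pixels and an empty histogram bin somewhere to the right of the peak bin:

```python
from collections import Counter

def hist_shift_embed(pixels, bits):
    """Shift histogram bins between the peak and the nearest empty bin
    right by one, vacating the bin next to the peak; then embed one bit
    per peak-valued pixel (bit 0: keep value, bit 1: move to peak+1)."""
    hist = Counter(pixels)
    peak = max(hist, key=hist.get)
    zero = next(v for v in range(peak + 1, 256) if hist.get(v, 0) == 0)
    out, it = [], iter(bits)
    for p in pixels:
        if peak < p < zero:
            out.append(p + 1)          # shift to vacate bin peak+1
        elif p == peak:
            out.append(p + next(it, 0))  # embed one bit at the peak
        else:
            out.append(p)
    return out, peak, zero
```

Extraction reverses the process: pixels equal to the peak carry bit 0, pixels equal to peak+1 carry bit 1, and shifted pixels move back down, restoring the original image losslessly.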

Proceedings Article•DOI•
20 Jun 2004
TL;DR: This paper addresses the burstification interval scaling problem when the timer-based burstification mechanisms, including the periodic and the nonperiodic alternatives, are employed and develops an analytical model to evaluate the burst delay at the edge node of the OBS-enabled WDM backbone.
Abstract: This paper addresses the burstification interval scaling problem when the timer-based burstification mechanisms, including the periodic and the nonperiodic alternatives, are employed. We investigate the impact of the burstification interval on the burst traffic characteristics in terms of the data burst inter-arrival time and the data burst length, respectively. An analytical model is developed to evaluate the burst delay at the edge node of the OBS-enabled WDM backbone. Numerical and simulation results have justified our analysis.
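The periodic timer-based assembly the paper studies can be sketched minimally: a burst is sealed at every timer expiry regardless of how much data has accumulated. The representation of packets as (arrival_time, size) pairs and the skipping of empty intervals are assumptions of this sketch:

```python
def timer_burstify(packets, interval):
    """Periodic timer-based burst assembly. `packets` is a time-ordered
    list of (arrival_time, size); every `interval` seconds the pending
    packets are sealed into one burst (empty intervals yield no burst)."""
    bursts, current, deadline = [], [], interval
    for t, size in packets:
        while t >= deadline:           # timer fired (possibly repeatedly)
            if current:
                bursts.append(current)
                current = []
            deadline += interval
        current.append((t, size))
    if current:
        bursts.append(current)
    return bursts
```

The burstification-interval trade-off is visible directly: a longer `interval` yields longer bursts and larger inter-arrival times, but adds up to one full interval of edge delay for the first packet of each burst.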

Journal Article•DOI•
27 Dec 2004
TL;DR: The authors propose a fluid hose-modelled VPN, and based on this model, an idealised fluid fair bandwidth allocation scheme is developed, which achieves two goals: maximising the overall throughput of the VPN; and providing a mechanism that enables the VPN customers to allocate the bandwidth according to their own requirements, thus achieving the predictable QoS performance.
Abstract: The virtual private network (VPN) provides customers with predictable and secure network connections over a shared network infrastructure. The recently proposed hose model for VPNs has desirable properties in terms of greater flexibility and better multiplexing gain. However, the 'classic' fair bandwidth allocation scheme introduces the issue of low overall utilisation in this model; furthermore, when the VPN links are established, the VPN customers cannot manage their VPN resources by themselves dynamically. The authors propose a fluid hose-modelled VPN, and based on this model they develop an idealised fluid fair bandwidth allocation scheme to improve the performance of the VPN. With the proposed scheme, they achieve two goals: maximising the overall throughput of the VPN; and providing a mechanism that enables the VPN customers to allocate the bandwidth according to their own requirements, thus achieving the predictable QoS performance. Based on deficit round robin (DRR), a transmission scheduling scheme for output buffer switches, a novel scheme, two-dimensional deficit round robin (2-D DRR), is developed to approximate/realise the idealised fluid fair bandwidth allocation scheme for the hose-modelled VPN. The simulation results also show that the 2-D DRR can improve the overall throughput without compromising fairness and implementation complexity.
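Standard deficit round robin - the building block the article's 2-D DRR extends - can be sketched as follows; the queue representation and parameter names are illustrative:

```python
from collections import deque

def drr_schedule(queues, quantum, rounds):
    """Deficit round robin: each backlogged queue accrues `quantum` bytes
    of credit per round and transmits head-of-line packets while its
    credit covers them; unused credit carries over, idle queues forfeit
    theirs. `queues` is a list of deques of (name, size) packets."""
    deficits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0        # idle queues keep no credit
                continue
            deficits[i] += quantum
            while q and q[0][1] <= deficits[i]:
                pkt = q.popleft()
                deficits[i] -= pkt[1]
                sent.append(pkt)
    return sent
```

The carried-over deficit is what gives DRR its O(1) fairness: a queue whose packet is too big this round is not starved, since its credit accumulates until the packet fits. 2-D DRR applies this accounting along two dimensions to honor the hose-modelled VPN's per-customer shares.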

Journal Article•DOI•
TL;DR: An enhanced address resolution protocol is proposed to reduce the call setup latency and the signaling overhead associated with the address probing procedure, a burst-based transmission mechanism is adopted to improve the network throughput and resource utilization efficiency, and a wavelength allocation algorithm is investigated to provide flexible bandwidth multiplexing with fairness and high scalability.
Abstract: This paper focuses on the control architecture and the enabling technologies for the Ethernet-supported Internet protocol-over-wavelength-division-multiplexing metropolitan area networks. We present the general architecture of an access node of such networks and propose solutions to facilitate the essential system functionalities. The aim is to render the flexible and high-capacity metropolitan network, which provides service provisioning improvement and resource utilization efficiency for the packet-dominated data traffic. Specifically, an enhanced address resolution protocol is proposed to reduce the call setup latency and the signaling overhead associated with the address probing procedure, a burst-based transmission mechanism is adopted to improve the network throughput and resource utilization efficiency, and a wavelength allocation algorithm is investigated to provide flexible bandwidth multiplexing with fairness and high scalability. Theoretical analysis and simulations are conducted to evaluate the performance of our algorithms, demonstrating that the proposed architecture and technologies deliver substantial transport performance improvement with efficient network resource utilization.

Proceedings Article•DOI•
10 Jun 2004
TL;DR: A robust variant packet sending-interval link padding algorithm for bursty traffic is proposed, and principal component analysis is used to test the performance of the algorithm.
Abstract: Preventing networks from being attacked has become a critical issue for network administrators and researchers. Even systems that use encryption are still vulnerable to traffic analysis attacks. Attackers can launch catastrophic distributed denial of service attacks based on the critical link information derived from traffic analysis. Link padding can be used to defend against such traffic analysis attacks. In this paper, we propose a robust variant packet sending-interval link padding algorithm for bursty traffic. The histogram feature vector method is used to simulate the traffic analysis attack and principal component analysis is used to test the performance of the algorithm.


Book Chapter•DOI•
01 Jul 2004