Author

J.S. Hansen

Bio: J.S. Hansen is an academic researcher from the University of Copenhagen. The author has contributed to research on the topics of TCP acceleration and mobile computing, has an h-index of 6, and has co-authored 13 publications receiving 275 citations. Previous affiliations of J.S. Hansen include Lincoln University (Pennsylvania) and the French Institute for Research in Computer Science and Automation.

Papers
Journal ArticleDOI
TL;DR: Introduces connection splicing, an optimization technique that can be applied to a TCP forwarder and improves TCP forwarding performance by a factor of two to four, making it competitive with IP router performance on the same hardware.
Abstract: A TCP forwarder is a network node that establishes and forwards data between a pair of TCP connections. An example of a TCP forwarder is a firewall that places a proxy between a TCP connection to an external host and a TCP connection to an internal host, controlling access to a resource on the internal host. Once the proxy approves the access, it simply forwards data from one connection to the other. We use the term TCP forwarding to describe indirect TCP communication via a proxy in general. This paper characterizes the behavior of TCP forwarding, and illustrates the role TCP forwarding plays in common network services like firewalls and HTTP proxies. We then introduce an optimization technique, called connection splicing, that can be applied to a TCP forwarder, and report the results of a performance study designed to evaluate its impact. Connection splicing improves TCP forwarding performance by a factor of two to four, making it competitive with IP router performance on the same hardware.
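The core of connection splicing is that, once the proxy approves the access, payload bytes no longer need to pass through user space; the forwarder can instead rewrite each segment's header so the two TCP connections behave as one and forward it at the IP layer. The sketch below is a toy illustration of the sequence-number translation such a spliced forwarder performs (the packet representation and function name are invented for illustration, not the paper's API):

```python
def splice_translate(seg, seq_delta, ack_delta):
    """Rewrite the sequence/ack numbers of a forwarded TCP segment.

    In a spliced forwarder, the segment arriving on one connection is
    mapped into the other connection's sequence-number space by adding
    fixed offsets computed once at splice time. All arithmetic is
    modulo 2**32, the size of TCP's sequence-number space.
    """
    MOD = 2 ** 32
    return {
        "seq": (seg["seq"] + seq_delta) % MOD,
        "ack": (seg["ack"] + ack_delta) % MOD,
        "payload": seg["payload"],  # payload is forwarded unchanged
    }
```

The point of the sketch is that forwarding becomes a constant amount of header arithmetic per segment, much like IP routing, rather than two full passes through the socket layer.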

195 citations

Journal ArticleDOI
TL;DR: The authors present the design of a communication layer for mobile computing that dynamically adapts to changes in network connections, allowing existing TCP/IP-based applications to be used in a mobile environment without modification.
Abstract: Most network applications assume continuous connectivity-an assumption that does not "migrate" to wireless environments. The authors present the design of a communication layer for mobile computing that dynamically adapts to changes in network connections. Our work was part of AMIGOS (Advanced Mobile Integration in General Operating Systems), a collaboration between researchers at the University of Copenhagen in Denmark and the University of Minho in Portugal. The AMIGOS project provides transparent support for semi-connected operations on mobile computers running a standard operating system; the project home page is at http://www.econ.cbs.dk/people/birger/AMIGOS/. Briefly, our design lets a mobile user connect a mobile host to a LAN, then disconnect the host from it. The user can then reconnect, for example, via a Global System for Mobile Communications (GSM) cellular modem without losing TCP/IP connections. We want to allow existing TCP/IP-based applications to be used in a mobile environment, without application modifications.
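The essential idea of such a layer is to decouple the application's notion of a connection from the underlying link: while the link is down, writes are queued, and on reconnect they are replayed, so the application never observes the outage. The class below is a hypothetical sketch of that behavior (not the AMIGOS implementation), with the real socket I/O abstracted away:

```python
class ResilientChannel:
    """Toy session layer that survives link disconnection.

    While connected, sends are delivered immediately. While the link
    is down, sends are queued; reconnect() flushes the queue in FIFO
    order, preserving the byte-stream ordering TCP applications expect.
    """

    def __init__(self):
        self.connected = True
        self.pending = []    # writes buffered during disconnection
        self.delivered = []  # stands in for the real network send

    def send(self, data):
        if self.connected:
            self.delivered.append(data)
        else:
            self.pending.append(data)

    def disconnect(self):
        self.connected = False

    def reconnect(self):
        # E.g. the user switched from the LAN to a GSM modem.
        self.connected = True
        while self.pending:
            self.delivered.append(self.pending.pop(0))
```

A real implementation must also re-establish or preserve the transport endpoint identity across the address change, which is the hard part the paper's layer addresses.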

28 citations

Proceedings ArticleDOI
23 Sep 2002
TL;DR: Describes software support for sharing disks in clusters, where the disks are distributed across the nodes of the cluster and combined into a high-performance storage system.
Abstract: In many clusters today, the local disks of a node are only used sporadically. This paper describes the software support for sharing of disks in clusters, where the disks are distributed across the nodes in the cluster, thereby allowing them to be combined into a high-performance storage system. Compared to centralized storage servers, such an architecture allows the total I/O capacity of the cluster to scale up with the number of nodes and disks. Additionally, our software allows customizing the functionality of the remote disk access using a library of code modules. A prototype has been implemented on a cluster connected by a Scalable Coherent Interface (SCI) network and performance measurements using both raw device access and a distributed file system show that the performance is comparable to dedicated storage systems and that the overhead of the framework is moderate even during high load. Thus, the prospects are that clusters sharing disks distributed among the nodes will allow both the application processing power and total I/O capacity of the cluster to scale up with the number of nodes.
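One reason such a design scales is that a logical volume can be striped across the nodes' local disks, so both capacity and aggregate bandwidth grow with the node count. The helper below is a generic round-robin striping calculation (an assumption for illustration; the paper does not specify this particular layout):

```python
def locate_block(logical_block, num_nodes, stripe_size=1):
    """Map a logical block number to (node, local block) under
    round-robin striping: consecutive stripes of `stripe_size`
    blocks rotate across the nodes' local disks.
    """
    stripe = logical_block // stripe_size
    node = stripe % num_nodes
    local = (stripe // num_nodes) * stripe_size + logical_block % stripe_size
    return node, local
```

Because neighboring stripes land on different nodes, a large sequential read fans out across the cluster's disks in parallel, which is what lets total I/O capacity scale with the number of nodes.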

17 citations

Proceedings Article
11 Aug 1997
TL;DR: Performance evaluation of the multiprocessor prototype shows that a two-level network interface servicing scheme (interrupts during low network loads for low latency, polling threads during high loads) can improve performance when used carefully.
Abstract: The use of high performance networking technologies, e.g., ATM networks, demands much from both operating systems and processors. During high network loads, user threads may be starved because the processor spends all its time handling interrupts. To alleviate this problem, we propose using a two-level network interface servicing scheme that uses interrupts during low network loads to provide low latency, and polling threads during high network loads to avoid user thread starvation. In this paper, we examine the use of such a scheme on dual-processor workstations running Windows NT connected by an ATM network. Performance evaluation of our multiprocessor prototype implementation shows that our two-level scheme can improve performance when used carefully.
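The switch between the two servicing levels is essentially a mode decision driven by observed load, and it needs hysteresis so the system does not flap between modes near the threshold. The function below sketches one plausible policy (the thresholds and the hysteresis scheme are assumptions for illustration, not taken from the paper):

```python
def service_modes(loads, high=1000, low=100):
    """Choose interrupt vs. polling servicing per measurement interval.

    `loads` is a list of packets seen per interval. The scheme starts
    in interrupt mode (low latency), switches to polling when load
    exceeds `high` (to stop interrupt livelock from starving user
    threads), and only returns to interrupts once load drops below
    `low` - the gap between the thresholds provides hysteresis.
    """
    mode, out = "interrupt", []
    for pkts in loads:
        if mode == "interrupt" and pkts > high:
            mode = "polling"
        elif mode == "polling" and pkts < low:
            mode = "interrupt"
        out.append(mode)
    return out
```

On a dual-processor machine, the polling thread can be pinned to one CPU so the other remains available to user threads, which is the configuration the paper evaluates.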

7 citations

Proceedings ArticleDOI
09 May 2005
TL;DR: This work proposes separating control and data transfer traffic by accessing data through a DSM-like cluster-wide shared buffer space and only including buffer references in the control messages, using a generic API for accessing buffers.
Abstract: Efficient memory allocation and data transfer for cluster-based data-intensive applications is a difficult task. Both changes in cluster interconnects and application workloads usually require tuning of the application and network code. We propose separating control and data transfer traffic by accessing data through a DSM-like cluster-wide shared buffer space and only including buffer references in the control messages. Using a generic API for accessing buffers allows for tuning data transfer without changing the application code. A prototype, implemented in the context of a distributed storage system, has been validated with several networking technologies, showing that such a framework can combine performance and flexibility.
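The key abstraction is that a control message never carries payload, only a reference into a cluster-wide buffer space; the transport behind the buffer space can then be swapped or retuned without touching application code. A minimal sketch of that interface, with invented names and the networking abstracted away:

```python
class SharedBufferSpace:
    """Toy model of a cluster-wide shared buffer space.

    Data is placed in buffers; control messages carry only small
    integer references. How the buffer contents actually move between
    nodes (RDMA, message passing, ...) is hidden behind this API,
    which is what makes the transfer path tunable independently of
    the application.
    """

    def __init__(self):
        self._bufs = {}
        self._next_ref = 0

    def alloc(self, data):
        ref = self._next_ref
        self._next_ref += 1
        self._bufs[ref] = data
        return ref  # only this reference travels in a control message

    def fetch(self, ref):
        return self._bufs[ref]

    def release(self, ref):
        del self._bufs[ref]
```

In the paper's distributed storage setting, a write request's control message would carry the block metadata plus a buffer reference, and the storage node pulls the block contents through the buffer API.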

7 citations


Cited by
01 Mar 2006
TL;DR: The Datagram Congestion Control Protocol is a transport protocol that provides bidirectional unicast connections of congestion-controlled unreliable datagrams that is suitable for applications that transfer fairly large amounts of data.
Abstract: The Datagram Congestion Control Protocol (DCCP) is a transport protocol that provides bidirectional unicast connections of congestion-controlled unreliable datagrams. DCCP is suitable for applications that transfer fairly large amounts of data and that can benefit from control over the tradeoff between timeliness and reliability. [STANDARDS-TRACK]
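The "tradeoff between timeliness and reliability" means that, over an unreliable transport like DCCP, the application decides what to do about lost or late data instead of the transport retransmitting everything. A toy example of that choice for a media sender (invented helper, not part of any DCCP API):

```python
def frames_to_send(queue, now, max_age):
    """Pick which queued media frames are still worth sending.

    `queue` holds (timestamp, frame) pairs. Over an unreliable
    datagram transport, frames older than `max_age` are simply
    skipped rather than retransmitted: a late frame is useless to
    the receiver, so timeliness wins over reliability.
    """
    return [frame for (ts, frame) in queue if now - ts <= max_age]
```

Congestion control still applies: DCCP tells the sender how fast it may transmit, while the application chooses *which* data fills that budget.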

714 citations

Journal ArticleDOI
TL;DR: This article classifies and describes the main mechanisms for splitting the traffic load among the server nodes, discussing both the alternative architectures and the load sharing policies.
Abstract: The overall increase in traffic on the World Wide Web is augmenting user-perceived response times from popular Web sites, especially in conjunction with special events. System platforms that do not replicate information content cannot provide the needed scalability to handle large traffic volumes and to match rapid and dramatic changes in the number of clients. The need to improve the performance of Web-based services has produced a variety of novel content delivery architectures. This article will focus on Web system architectures that consist of multiple server nodes distributed on a local area, with one or more mechanisms to spread client requests among the nodes. After years of continual proposals of new system solutions, routing mechanisms, and policies (the first dated back to 1994 when the NCSA Web site had to face the first million of requests per day), many problems concerning multiple server architectures for Web sites have been solved. Other issues remain to be addressed, especially at the network application layer, but the main techniques and methodologies for building scalable Web content delivery architectures placed in a single location are settled now. This article classifies and describes the main mechanisms to split the traffic load among the server nodes, discussing both the alternative architectures and the load sharing policies. To this end, it focuses on architectures, internal routing mechanisms, and request dispatching algorithms for designing and implementing scalable Web-server systems under the control of one content provider. It also identifies some of the open research issues associated with the use of distributed systems for highly accessed Web sites.
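The simplest dispatching policy in the space the survey classifies is stateless round-robin: the dispatcher spreads incoming requests across the server nodes in rotation, with no knowledge of per-node load. A minimal sketch (with weighted variants, least-connections, and content-aware routing being the more sophisticated policies the article covers):

```python
def round_robin_dispatch(requests, servers):
    """Assign each incoming request to a server node in rotation.

    Stateless and O(1) per request, but blind to differences in
    request cost or current node load, which is why load-aware
    policies usually perform better under skewed workloads.
    """
    return [servers[i % len(servers)] for i in range(len(requests))]
```

The architectural question the survey pairs with this (DNS-based, dispatcher-based at layer 4 or layer 7, or server-based redirection) determines *where* this assignment decision is made and how much request information is visible when making it.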

525 citations

Proceedings ArticleDOI
21 Oct 2001
TL;DR: It is shown it is possible to combine an IXP1200 development board and a PC to build an inexpensive router that forwards minimum-sized packets at a rate of 3.47Mpps, nearly an order of magnitude faster than existing pure PC-based routers, and sufficient to support 1.77Gbps of aggregate link bandwidth.
Abstract: Recent efforts to add new services to the Internet have increased interest in software-based routers that are easy to extend and evolve. This paper describes our experiences using emerging network processors---in particular, the Intel IXP1200---to implement a router. We show it is possible to combine an IXP1200 development board and a PC to build an inexpensive router that forwards minimum-sized packets at a rate of 3.47Mpps. This is nearly an order of magnitude faster than existing pure PC-based routers, and sufficient to support 1.77Gbps of aggregate link bandwidth. At lesser aggregate line speeds, our design also allows the excess resources available on the IXP1200 to be used robustly for extra packet processing. For example, with 8 × 100Mbps links, 240 register operations and 96 bytes of state storage are available for each 64-byte packet. Using a hierarchical architecture we can guarantee line-speed forwarding rates for simple packets with the IXP1200, and still have extra capacity to process exceptional packets with the Pentium. Up to 310Kpps of the traffic can be routed through the Pentium to receive 1510 cycles of extra per-packet processing.
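The headline numbers are consistent with simple arithmetic: a forwarding rate in packets per second times the minimum packet size gives the aggregate bandwidth the router sustains. A small check of the figures quoted in the abstract:

```python
def aggregate_bandwidth(pps, pkt_bytes):
    """Aggregate link bandwidth in bits/s implied by a forwarding
    rate of `pps` packets per second at `pkt_bytes` bytes each.
    (Counts packet payload bits only, as the abstract does; framing
    overhead on the wire is not included.)
    """
    return pps * pkt_bytes * 8
```

At 3.47 Mpps and 64-byte minimum packets this gives roughly 1.78 Gbps, matching the 1.77 Gbps the paper reports; the same arithmetic underlies the per-packet cycle budget (at lower line rates, each 64-byte packet's slot leaves the quoted 240 register operations to spare on the IXP1200's microengines).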

251 citations

Patent
02 Nov 2001
TL;DR: In this paper, a method for accelerating TCP/UDP packet switching is proposed, which involves determining whether exception processing is necessary; if not, the packet is forwarded to a special stack for expedited processing.
Abstract: A method for accelerating TCP/UDP packet switching. The method involves determining whether exception processing is necessary; if not, the packet is forwarded to a special stack for expedited processing. Packets requiring exception processing are forwarded to the conventional stack.
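The method hinges on a cheap classification step at packet arrival: packets that need no exception processing take the expedited (fast) stack, everything else falls back to the conventional stack. The sketch below is a hypothetical version of that test; the specific exception conditions are illustrative assumptions, since the patent abstract does not enumerate them:

```python
def dispatch(packet):
    """Toy fast-path classifier for a TCP/UDP packet switch.

    Packets with unusual features (IP options, fragments, expiring
    TTL) need per-case handling and go to the conventional stack;
    the common case bypasses it via the expedited stack.
    """
    needs_exception = (
        packet.get("ip_options")       # options require per-option logic
        or packet.get("fragmented")    # fragments need reassembly state
        or packet.get("ttl", 64) <= 1  # expiring TTL triggers ICMP handling
    )
    return "conventional" if needs_exception else "fast"
```

This fast-path/slow-path split is the same structure the IXP1200 router above uses in hardware: guarantee line rate for simple packets, and route the exceptional minority through a slower, more general code path.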

200 citations

01 Jan 2014

197 citations