
Showing papers presented at "USENIX Security Symposium in 2001"


Proceedings Article•
13 Aug 2001
TL;DR: This article presents a new technique, called "backscatter analysis," that provides a conservative estimate of worldwide denial-of-service activity, and is believed to be the first to provide quantitative estimates of Internet-wide denial-of-service activity.
Abstract: In this paper, we seek to answer a simple question: "How prevalent are denial-of-service attacks in the Internet today?". Our motivation is to understand quantitatively the nature of the current threat as well as to enable longer-term analyses of trends and recurring patterns of attacks. We present a new technique, called "backscatter analysis", that provides an estimate of worldwide denial-of-service activity. We use this approach on three week-long datasets to assess the number, duration and focus of attacks, and to characterize their behavior. During this period, we observe more than 12,000 attacks against more than 5,000 distinct targets, ranging from well known e-commerce companies such as Amazon and Hotmail to small foreign ISPs and dial-up connections. We believe that our work is the only publicly available data quantifying denial-of-service activity in the Internet.

1,444 citations
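
The estimation trick in the abstract is simple to state: a victim of a spoofed-source flood answers to random addresses, so a monitor covering a known slice of the address space sees a predictable fraction of those answers. Below is a minimal sketch of that inference, assuming hypothetical packet records and a /8-sized telescope; it is illustrative only, not the authors' measurement code.

```python
# Sketch of backscatter rate estimation over hypothetical packet records.
from collections import defaultdict

MONITORED = 2 ** 24             # a /8 telescope: 2^24 of the 2^32 IPv4 addresses
SCALE = 2 ** 32 / MONITORED     # each victim reply lands here with prob. 1/256

# Unsolicited packet types that plausibly answer spoofed attack traffic.
BACKSCATTER_TYPES = {"TCP-RST", "TCP-SYNACK", "ICMP-UNREACH", "ICMP-ECHOREPLY"}

def estimate_attack_rates(packets, window_seconds):
    """packets: iterable of (timestamp, src_ip, pkt_type) seen by the telescope.
    The source of a backscatter packet is the attack's victim; scaling the
    observed count estimates the victim's total reply (hence attack) rate."""
    per_victim = defaultdict(int)
    for _, src_ip, pkt_type in packets:
        if pkt_type in BACKSCATTER_TYPES:
            per_victim[src_ip] += 1
    return {victim: count * SCALE / window_seconds
            for victim, count in per_victim.items()}

trace = [(0.1, "203.0.113.7", "TCP-RST")] * 700      # toy one-minute trace
print(estimate_attack_rates(trace, window_seconds=60))
```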


Proceedings Article•
13 Aug 2001
TL;DR: Improved methods for information hiding are presented, along with an a priori estimate of the amount of data that can be hidden in an image while still maintaining frequency-count-based statistics.
Abstract: The main purpose of steganography is to hide the occurrence of communication. While most methods in use today are invisible to an observer's senses, mathematical analysis may reveal statistical anomalies in the stego medium. These discrepancies expose the fact that hidden communication is happening. This paper presents improved methods for information hiding. One method uses probabilistic embedding to minimize modifications to the cover medium. Another method employs error-correcting codes, which allow the embedding process to choose which bits to modify in a way that decreases the likelihood of being detected. In addition, we can hide multiple data sets in the same cover medium to provide plausible deniability. To prevent detection by statistical tests, we preserve the statistical properties of the cover medium. After applying a correcting transform to an image, statistical steganalysis is no longer able to detect the presence of steganography. We present an a priori estimate to determine the amount of data that can be hidden in the image while still being able to maintain frequency count based statistics. This way, we can quickly choose an image in which a message of a given size can be hidden safely. To evaluate the effectiveness of our approach, we present statistical tests for the JPEG image format and explain how our new method defeats them.

655 citations
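
The "statistical tests" the abstract refers to exploit frequency counts: embedding random bits into least-significant bits drives the counts of value pairs (2k, 2k+1) toward each other. A minimal sketch of such a pairs-of-values chi-square test follows; it illustrates the class of detector the paper's methods are designed to defeat, not the paper's own test suite.

```python
# Pairs-of-values chi-square sketch: LSB embedding equalizes counts of
# (2k, 2k+1), so an unusually small statistic suggests embedded data.
import random
from collections import Counter

def chi_square_pov(samples):
    counts = Counter(samples)
    chi2, pairs = 0.0, 0
    for k in range(0, 256, 2):
        a, b = counts[k], counts[k + 1]
        expected = (a + b) / 2.0
        if expected > 0:
            chi2 += (a - expected) ** 2 / expected + (b - expected) ** 2 / expected
            pairs += 1
    return chi2, pairs

cover = [random.choice(range(0, 256, 2)) for _ in range(10_000)]  # skewed cover
stego = [v | random.getrandbits(1) for v in cover]                # random LSBs
print(chi_square_pov(cover))   # large statistic (~sample size): pairs unbalanced
print(chi_square_pov(stego))   # small statistic (~pair count): embedding suspected
```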


Proceedings Article•
13 Aug 2001
TL;DR: A statistical study of users' typing patterns is performed and it is shown that these patterns reveal information about the keys typed, and that timing leaks open a new set of security risks, and hence caution must be taken when designing this type of protocol.
Abstract: SSH is designed to provide a secure channel between two hosts. Despite the encryption and authentication mechanisms it uses, SSH has two weaknesses: First, the transmitted packets are padded only to an eight-byte boundary (if a block cipher is in use), which reveals the approximate size of the original data. Second, in interactive mode, every individual keystroke that a user types is sent to the remote machine in a separate IP packet immediately after the key is pressed, which leaks the interkeystroke timing information of users' typing. In this paper, we show how these seemingly minor weaknesses result in serious security risks. First we show that even very simple statistical techniques suffice to reveal sensitive information such as the length of users' passwords or even root passwords. More importantly, we further show that by using more advanced statistical techniques on timing information collected from the network, the eavesdropper can learn significant information about what users type in SSH sessions. In particular, we perform a statistical study of users' typing patterns and show that these patterns reveal information about the keys typed. By developing a Hidden Markov Model and our key sequence prediction algorithm, we can predict key sequences from the interkeystroke timings. We further develop an attacker system, Herbivore, which tries to learn users' passwords by monitoring SSH sessions. By collecting timing information on the network, Herbivore can speed up exhaustive search for passwords by a factor of 50. We also propose some countermeasures. In general our results apply not only to SSH, but also to a general class of protocols for encrypting interactive traffic. We show that timing leaks open a new set of security risks, and hence caution must be taken when designing this type of protocol.

573 citations
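
The attack rests on a measurable fact: the gap between two keystrokes depends on which key pair was typed. The toy below ranks candidate key pairs for one observed gap under per-pair Gaussian latency models with invented parameters; the paper's actual system layers a Hidden Markov Model and an n-best search on top of this idea.

```python
# Rank candidate digraphs for an observed inter-keystroke latency.
import math

def gaussian_logpdf(x, mean, std):
    return -0.5 * math.log(2 * math.pi * std * std) - (x - mean) ** 2 / (2 * std * std)

# (mean_ms, std_ms) per key pair, as would be learned from typing traces.
DIGRAPH_MODEL = {("a", "s"): (95, 20), ("a", "l"): (160, 35), ("p", "q"): (220, 40)}

def rank_digraphs(observed_ms):
    scored = [(gaussian_logpdf(observed_ms, mean, std), pair)
              for pair, (mean, std) in DIGRAPH_MODEL.items()]
    return sorted(scored, reverse=True)

for score, pair in rank_digraphs(100):     # one observed 100 ms gap
    print(pair, round(score, 2))           # ('a', 's') scores best
```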


Proceedings Article•
13 Aug 2001
TL;DR: The traffic normalizer as discussed by the authors sits directly in the path of traffic into a site and patches up the packet stream to eliminate potential ambiguities before the traffic is seen by the monitor, removing evasion opportunities.
Abstract: A fundamental problem for network intrusion detection systems is the ability of a skilled attacker to evade detection by exploiting ambiguities in the traffic stream as seen by the monitor. We discuss the viability of addressing this problem by introducing a new network forwarding element called a traffic normalizer. The normalizer sits directly in the path of traffic into a site and patches up the packet stream to eliminate potential ambiguities before the traffic is seen by the monitor, removing evasion opportunities. We examine a number of tradeoffs in designing a normalizer, emphasizing the important question of the degree to which normalizations undermine end-to-end protocol semantics. We discuss the key practical issues of "cold start" and attacks on the normalizer, and develop a methodology for systematically examining the ambiguities present in a protocol based on walking the protocol's header. We then present norm, a publicly available user-level implementation of a normalizer that can normalize a TCP traffic stream at 100,000 pkts/sec in memory-to-memory copies, suggesting that a kernel implementation using PC hardware could keep pace with a bidirectional 100 Mbps link with sufficient headroom to weather a high-speed flooding attack of small packets.

494 citations
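
One concrete ambiguity the abstract alludes to is inconsistent TCP retransmissions: an attacker sends two different payloads for the same sequence range and lets the monitor and the end host pick different copies. A normalizer can remove the ambiguity by remembering forwarded bytes and rewriting any retransmission to match. The sketch below shows only that single normalization, with a naive in-memory byte map; norm itself covers many protocols and manages state far more carefully.

```python
# Force retransmitted TCP bytes to equal the copy already forwarded.
class TcpStreamNormalizer:
    def __init__(self):
        self.seen = {}                       # absolute seq number -> forwarded byte

    def normalize(self, seq, payload):
        out = bytearray(payload)
        for i, byte in enumerate(payload):
            pos = seq + i
            if pos in self.seen:
                out[i] = self.seen[pos]      # rewrite to the first copy
            else:
                self.seen[pos] = byte
        return bytes(out)

n = TcpStreamNormalizer()
print(n.normalize(1000, b"GET /index"))      # first copy passes through
print(n.normalize(1004, b"evil"))            # overlap rewritten to b"/ind"
```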


Proceedings Article•
13 Aug 2001
TL;DR: A new system for automatically detecting format string security vulnerabilities in C programs using a constraint-based type-inference engine and new techniques for presenting the results of such an analysis to the user in a form that makes bugs easier to find and to fix are presented.
Abstract: We present a new system for automatically detecting format string security vulnerabilities in C programs using a constraint-based type-inference engine. We describe new techniques for presenting the results of such an analysis to the user in a form that makes bugs easier to find and to fix. The system has been implemented and tested on several real-world software packages. Our tests show that the system is very effective, detecting several bugs previously unknown to the authors and exhibiting a low rate of false positives in almost all cases. Many of our techniques are applicable to additional classes of security vulnerabilities, as well as other type- and constraint-based systems.

425 citations
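
The system infers, roughly, a "tainted" qualifier for user-controlled strings and reports any value that can flow into a format-string position. The toy below mimics that flow analysis over a made-up three-address program representation; the real tool solves constraints over C types rather than interpreting statements.

```python
# Toy taint propagation: flag tainted values used as printf formats.
def analyze(statements):
    tainted, warnings = set(), []
    for stmt in statements:
        if stmt[0] == "taint-source":            # e.g. s = getenv(...)
            tainted.add(stmt[1])
        elif stmt[0] == "assign":                # dst = src
            _, dst, src = stmt
            if src in tainted:
                tainted.add(dst)
        elif stmt[0] == "printf-format":         # printf(fmt, ...)
            if stmt[1] in tainted:
                warnings.append(f"tainted format string: {stmt[1]}")
    return warnings

prog = [("taint-source", "user"),
        ("assign", "fmt", "user"),
        ("printf-format", "fmt")]
print(analyze(prog))                             # ['tainted format string: fmt']
```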


Report•DOI•
13 Aug 2001
TL;DR: This work proposes a heuristic and a data-structure that network devices (such as routers) can use to detect (and eliminate) denial-of-service bandwidth attacks.
Abstract: A denial-of-service bandwidth attack is an attempt to disrupt an online service by generating a traffic overload that clogs links or causes routers near the victim to crash. We propose a heuristic and a data-structure that network devices (such as routers) can use to detect (and eliminate) such attacks. With our method, each network device maintains a data-structure, MULTOPS, that monitors certain traffic characteristics. MULTOPS (MUlti-Level Tree for Online Packet Statistics) is a tree of nodes that contains packet rate statistics for subnet prefixes at different aggregation levels. The tree expands and contracts within a fixed memory budget. A network device using MULTOPS detects ongoing bandwidth attacks by the significant, disproportional difference between packet rates going to and coming from the victim or the attacker. MULTOPS-equipped routing software running on an off-the-shelf 700 MHz Pentium III PC can process up to 340,000 packets per second.

412 citations
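
MULTOPS's core observation is that legitimate two-way traffic is roughly proportional in both directions, while flood traffic is not. A compressed sketch of the data structure follows: per-prefix to/from counters in a tree that grows a level when a node gets busy. The thresholds, and the omission of contraction and aging, are simplifications of this sketch rather than the paper's parameters.

```python
# Per-prefix packet counters that expand under load and flag imbalance.
EXPAND_AT = 1000      # expand a node after this many packets (made-up value)
RATIO = 10            # to/from imbalance treated as suspicious (made-up value)

class Node:
    def __init__(self, depth):
        self.depth, self.to, self.fro, self.children = depth, 0, 0, {}

    def record(self, ip_octets, direction):
        self.to += direction == "to"
        self.fro += direction == "from"
        if self.depth < 4 and self.to + self.fro >= EXPAND_AT:
            child = self.children.setdefault(ip_octets[self.depth],
                                             Node(self.depth + 1))
            child.record(ip_octets, direction)

    def suspicious(self):
        return self.to > RATIO * max(self.fro, 1)

root = Node(0)
for _ in range(5000):
    root.record((10, 0, 0, 1), "to")    # flood toward 10.0.0.1, no replies
print(root.suspicious(), root.children[10].suspicious())   # True True
```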


Proceedings Article•
13 Aug 2001
TL;DR: This paper describes an implementation, extending the LCLint annotation-assisted static checking tool, of a new approach to mitigating buffer overflow vulnerabilities by detecting likely vulnerabilities through an analysis of the program source code.
Abstract: Buffer overflow attacks may be today's single most important security threat. This paper presents a new approach to mitigating buffer overflow vulnerabilities by detecting likely vulnerabilities through an analysis of the program source code. Our approach exploits information provided in semantic comments and uses lightweight and efficient static analyses. This paper describes an implementation of our approach that extends the LCLint annotation-assisted static checking tool. Our tool is as fast as a compiler and nearly as easy to use. We present experience using our approach to detect buffer overflow vulnerabilities in two security-sensitive programs.

376 citations
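
The "semantic comments" the abstract mentions declare buffer bounds (in LCLint's case, C annotations such as maxSet and maxRead), and the checker flags writes whose bound can exceed the declared capacity. Here is a loose Python rendering of that constraint check, with invented declarations; the actual tool analyzes annotated C source.

```python
# Loose rendering of a bounds-annotation check: strcpy(dst, src) is safe
# only if the most that can be read from src fits in dst's declared capacity.
DECLS = {"buf": 64, "name": 16}           # declared capacities (maxSet analogue)

def check_strcpy(dst, src_maxread):
    if src_maxread > DECLS[dst]:
        return f"possible overflow: up to {src_maxread} bytes into {dst}[{DECLS[dst]}]"
    return "ok"

print(check_strcpy("buf", 63))     # ok
print(check_strcpy("name", 100))   # possible overflow flagged
```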


Proceedings Article•
13 Aug 2001
TL;DR: StackGhost advances exploit prevention in that it protects every application run on the system without the application's knowledge and without requiring source or binary modification.
Abstract: Conventional security exploits have relied on overwriting the saved return pointer on the stack to hijack the path of execution. Under Sun Microsystems' SPARC processor architecture, we were able to implement a kernel modification to transparently and automatically guard applications' return pointers. Our implementation, called StackGhost, runs under OpenBSD 2.8 and acts as a ghost in the machine. StackGhost advances exploit prevention in that it protects every application run on the system without the application's knowledge and without requiring source or binary modification. We document several of the methods devised to preserve the sanctity of the system and explore the performance ramifications of StackGhost.

270 citations
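
One of StackGhost's mechanisms is an XOR cookie: the kernel scrambles each return address as it is spilled to the stack and unscrambles it as it is restored, so a raw overwrite decodes to a useless address. The few lines below just demonstrate that invariant; the real mechanism lives in SPARC register-window spill/fill trap handlers, and the constants here are invented.

```python
# Demonstrate the XOR-cookie invariant on saved return addresses.
import secrets

COOKIE = secrets.randbits(32)

def spill(return_addr):          # on the way to the stack: scrambled
    return return_addr ^ COOKIE

def fill(stored_value):          # on the way back: unscrambled
    return stored_value ^ COOKIE

legit = 0x0001_2340
assert fill(spill(legit)) == legit     # honest round trips are invisible
overwritten = 0xdead_beef              # attacker writes a raw address
print(hex(fill(overwritten)))          # decodes to garbage, not 0xdeadbeef
```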


Proceedings Article•
Drew Dean1, Adam Stubblefield2•
13 Aug 2001
TL;DR: Measurements of CPU load and latency when the modified library is used to protect a secure webserver show that client puzzles are a viable method for protecting SSL servers from SSL-based denial-of-service attacks.
Abstract: Client puzzles are commonly proposed as a solution to denial-of-service attacks. However, very few implementations of the idea actually exist, and there are a number of subtle details in the implementation. In this paper, we describe our implementation of a simple and backwards compatible client puzzle extension to TLS. We also present measurements of CPU load and latency when our modified library is used to protect a secure webserver. These measurements show that client puzzles are a viable method for protecting SSL servers from SSL-based denial-of-service attacks.

269 citations
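
The puzzle idea is asymmetric work: the server hands out a fresh nonce and a difficulty, the client burns CPU finding a matching solution, and the server verifies in one hash. A minimal hash-puzzle sketch in that spirit follows; the framing, hash choice, and parameters are illustrative rather than the paper's TLS extension format.

```python
# Hash puzzle: find a suffix whose SHA-256 with the nonce has k leading 0 bits.
import hashlib, itertools, secrets

def leading_zero_bits(digest):
    value, total = int.from_bytes(digest, "big"), len(digest) * 8
    return total if value == 0 else total - value.bit_length()

def solve(nonce, bits):                      # client: ~2**bits hashes of work
    for counter in itertools.count():
        answer = counter.to_bytes(8, "big")
        if leading_zero_bits(hashlib.sha256(nonce + answer).digest()) >= bits:
            return answer

def verify(nonce, answer, bits):             # server: a single hash
    return leading_zero_bits(hashlib.sha256(nonce + answer).digest()) >= bits

nonce = secrets.token_bytes(16)              # fresh per connection attempt
answer = solve(nonce, bits=16)
print(verify(nonce, answer, bits=16))        # True
```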


Proceedings Article•
13 Aug 2001
TL;DR: This paper presents a new approach to fast certificate revocation centered around the concept of an on-line semi-trusted mediator (SEM) and shows that threshold cryptography is practical for certificate revocation.
Abstract: We present a new approach to fast certificate revocation centered around the concept of an on-line semi-trusted mediator (SEM). The use of a SEM in conjunction with a simple threshold variant of the RSA cryptosystem (mediated RSA) offers a number of practical advantages over current revocation techniques. Our approach simplifies validation of digital signatures and enables certificate revocation within legacy systems. It also provides immediate revocation of all security capabilities. This paper discusses both the architecture and implementation of our approach as well as performance and compatibility with the existing infrastructure. Our results show that threshold cryptography is practical for certificate revocation.

262 citations
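
Mediated RSA splits the private exponent additively so that neither the user nor the SEM can sign alone, and revocation amounts to the SEM withholding its half. The sketch below shows the arithmetic with deliberately tiny, insecure primes; real mRSA uses full-size moduli and never exposes the shares like this.

```python
# Additive split of an RSA private exponent: half signatures multiply to a
# full signature, so the SEM can revoke by refusing to produce its half.
import secrets

p, q, e = 999983, 1000003, 65537            # toy primes, insecure sizes
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                          # Python 3.8+ modular inverse

d_user = secrets.randbelow(phi)
d_sem = (d - d_user) % phi                   # d_user + d_sem = d (mod phi)

msg = 123456789
half_user = pow(msg, d_user, n)
half_sem = pow(msg, d_sem, n)                # withheld once the cert is revoked
signature = (half_user * half_sem) % n

assert signature == pow(msg, d, n)           # halves combine to a full signature
assert pow(signature, e, n) == msg           # verifies under the public key
```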


Proceedings Article•
13 Aug 2001
TL;DR: This paper describes the format bug problem and presents FormatGuard, a small patch to glibc that provides general protection against format bugs and is effective in protecting several real programs with format vulnerabilities against live exploits.
Abstract: In June 2000, a major new class of vulnerabilities called "format bugs" was discovered when a vulnerability in WU-FTP appeared that acted almost like a buffer overflow, but wasn't. Since then, dozens of format string vulnerabilities have appeared. This paper describes the format bug problem, and presents FormatGuard: our proposed solution. FormatGuard is a small patch to glibc that provides general protection against format bugs. We show that FormatGuard is effective in protecting several real programs with format vulnerabilities against live exploits, and we show that FormatGuard imposes minimal compatibility and performance costs.
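
FormatGuard's check is a counting argument: if the format string consumes more arguments than the call site supplied, something is wrong. In C this is done with preprocessor argument counting plus a patched glibc; the Python analogue below conveys only the rule, with a deliberately crude directive matcher.

```python
# Reject formatting when the string demands more arguments than supplied.
import re

DIRECTIVE = re.compile(r"%[^%]")     # crude: any % not escaping another %

def guarded_printf(fmt, *args):
    wanted = len(DIRECTIVE.findall(fmt))
    if wanted > len(args):
        raise ValueError(f"format wants {wanted} args, got {len(args)}: likely attack")
    print(fmt % args)

guarded_printf("hello %s", "world")          # fine
try:
    guarded_printf("%x %x %x %n")            # classic probe: rejected
except ValueError as err:
    print(err)
```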

Proceedings Article•
Peter Gutmann1•
13 Aug 2001
TL;DR: This work extends the brief coverage of this area given in the earlier paper by providing the technical background information necessary to understand remanence issues in semiconductor devices.
Abstract: A paper published in 1996 examined the problems involved in truly deleting data from magnetic storage media and also mentioned that similar problems affect data held in semiconductor memory. This work extends the brief coverage of this area given in the earlier paper by providing the technical background information necessary to understand remanence issues in semiconductor devices. Data remanence problems affect not only obvious areas such as RAM and non-volatile memory cells but can also occur in other areas of the device through hot-carrier effects (which change the characteristics of the semiconductors in the device), electromigration (which physically alters the device itself), and various other effects which are examined alongside the more obvious memory-cell remanence problems. The paper concludes with some design and device usage guidelines which can be useful in reducing remanence effects.

Proceedings Article•
13 Aug 2001
TL;DR: This work proposes a set of hints for designing a secure client authentication scheme and presents the design and analysis of a simple authentication scheme secure against forgeries by the interrogative adversary; in conjunction with SSL, the scheme is secure against forgeries by the active adversary.
Abstract: Client authentication has been a continuous source of problems on the Web. Although many well-studied techniques exist for authentication, Web sites continue to use extremely weak authentication schemes, especially in non-enterprise environments such as store fronts. These weaknesses often result from careless use of authenticators within Web cookies. Of the twenty-seven sites we investigated, we weakened the client authentication on two systems, gained unauthorized access on eight, and extracted the secret key used to mint authenticators from one. We provide a description of the limitations, requirements, and security models specific to Web client authentication. This includes the introduction of the interrogative adversary, a surprisingly powerful adversary that can adaptively query a Web site. We propose a set of hints for designing a secure client authentication scheme. Using these hints, we present the design and analysis of a simple authentication scheme secure against forgeries by the interrogative adversary. In conjunction with SSL, our scheme is secure against forgeries by the active adversary.
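
The paper's design hints culminate in a simple authenticator recipe of the form exp=t&data=s&digest=MAC(exp=t&data=s), which an adaptively querying adversary cannot mint or extend without the server key. The sketch below renders that recipe with HMAC-SHA256; the key handling and data encoding are simplified assumptions here.

```python
# Unforgeable, expiring cookie authenticator in the paper's recommended shape.
import hashlib, hmac, time

SERVER_KEY = b"server-secret-key"    # in practice: random, rotated, never client-side

def mint(data, lifetime_s=3600):
    payload = f"exp={int(time.time()) + lifetime_s}&data={data}"
    digest = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}&digest={digest}"

def check(cookie):
    payload, _, digest = cookie.rpartition("&digest=")
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(digest, expected):
        return False                                  # forged or tampered
    exp = int(payload.split("&")[0].removeprefix("exp="))
    return time.time() < exp                          # expired cookies rejected

c = mint("user=alice")
print(check(c), check(c.replace("alice", "bob")))     # True False
```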

Proceedings Article•
13 Aug 2001
TL;DR: The design and architecture of the Lumeta Firewall Analyzer (LFA) system are described. LFA improves upon Fang in many ways, including that human interaction is limited to providing the firewall configuration, and that LFA automatically issues the "interesting" queries and displays the outputs of all of them, in a way that highlights the risks without cluttering the high-level view.
Abstract: Practically every corporation that is connected to the Internet has at least one firewall, and often many more. However, the protection that these firewalls provide is only as good as the policy they are configured to implement. Therefore, testing, auditing, or reverse-engineering existing firewall configurations should be important components of every corporation's network security practice. Unfortunately, this is easier said than done. Firewall configuration files are written in notoriously hard-to-read languages, using vendor-specific GUIs. A tool that is sorely missing in the arsenal of firewall administrators and auditors is one that will allow them to analyze the policy on a firewall. The first passive, analytical, firewall analysis system was the Fang prototype system [MWZ00]. This was the starting point for the new Lumeta Firewall Analyzer (LFA) system. LFA improves upon Fang in many ways. The most significant improvements are that human interaction is limited to providing the firewall configuration, and that LFA automatically issues the "interesting" queries and displays the outputs of all of them, in a way that highlights the risks without cluttering the high-level view. This solves a major usability problem we found with Fang, namely, that users do not know which queries to issue. The input to the LFA consists of the firewall's routing table, and the firewall's configuration files. The LFA parses these various low-level, vendor-specific, files, and simulates the firewall's behavior against all the packets it could possibly receive. The simulation is done completely offline, without sending any packets. The administrator gets a comprehensive report showing which types of traffic the firewall allows to enter from the Internet into the customer's intranet and which types of traffic are allowed out of the intranet. The LFA's report is presented as a set of explicit web pages, which are rich with links and cross references to further detail (allowing for easy drill-down). This paper describes the design and architecture of the LFA.
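
At the heart of the report generation is an offline simulation: every class of packet is pushed through the parsed rule base and the verdicts are aggregated, with no packets ever sent. The fragment below shows that idea at miniature scale on an invented first-match rule list; LFA itself parses vendor-specific configurations and covers the full packet space.

```python
# First-match simulation of a tiny, invented firewall policy.
RULES = [   # (action, src zone, dst zone, dst port); "*" is a wildcard
    ("allow", "*", "dmz-web", 80),
    ("allow", "intranet", "*", "*"),
    ("deny",  "*", "*", "*"),
]

def decide(src, dst, port):
    for action, r_src, r_dst, r_port in RULES:
        if r_src in ("*", src) and r_dst in ("*", dst) and r_port in ("*", port):
            return action                      # first matching rule wins
    return "deny"

for probe in [("internet", "dmz-web", 80), ("internet", "intranet", 139)]:
    print(probe, "->", decide(*probe))         # allow, then deny
```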

Proceedings Article•
13 Aug 2001
TL;DR: The Secure Digital Music Initiative recently held a challenge to test the strength of four watermarking technologies and two other security technologies; the authors accepted the challenge and explored the inner workings of the technologies.
Abstract: The Secure Digital Music Initiative is a consortium of parties interested in preventing piracy of digital music, and to this end they are developing architectures for content protection on untrusted platforms. SDMI recently held a challenge to test the strength of four watermarking technologies, and two other security technologies. No documentation explained the implementations of the technologies, and neither watermark embedding nor detecting software was directly accessible to challenge participants. We nevertheless accepted the challenge, and explored the inner workings of the technologies. We report on our results here.

Proceedings Article•
13 Aug 2001
TL;DR: This paper shows the difficulties in applying existing group key management techniques to the problem of secure event distribution, and proposes a number of approaches to reduce the number of encryptions and increase message throughput.
Abstract: Content-based publish-subscribe systems are an emerging paradigm for building a range of distributed applications. A specific problem in content-based systems is the secure distribution of events to clients subscribing to those events. In content-based systems, every event can potentially have a different set of interested subscribers. To provide a confidentiality guarantee, we would like to encrypt messages so that only interested subscribers can read the message. In the worst case, for n clients, there can be 2^n subgroups, and each event can go to a potentially different subgroup. A major problem is managing subgroup keys so that the number of encryptions required per event can be kept low. We first show the difficulties in applying existing group key management techniques to the problem. We then propose and compare a number of approaches to reduce the number of encryptions and to increase message throughput. We present an analytical evaluation of the described algorithms as well as simulation results.
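
One way to keep per-event encryptions low, in the spirit of the approaches compared above, is to cache a symmetric key per distinct subscriber set: the first event to a subgroup pays for key establishment, and repeats cost one encryption. The names and API below are invented for illustration.

```python
# Cache one symmetric key per distinct subscriber set.
import secrets

class SubgroupKeyCache:
    def __init__(self):
        self.keys = {}                           # frozenset(subscribers) -> key

    def key_for(self, subscribers):
        group = frozenset(subscribers)
        fresh = group not in self.keys
        if fresh:
            self.keys[group] = secrets.token_bytes(16)   # delivered to members once
        return self.keys[group], fresh

cache = SubgroupKeyCache()
k1, fresh1 = cache.key_for({"alice", "bob"})
k2, fresh2 = cache.key_for({"bob", "alice"})     # same subgroup: cache hit
print(fresh1, fresh2, k1 == k2)                  # True False True
```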

Proceedings Article•
13 Aug 2001
TL;DR: This project adds support to the Linux kernel for asynchronous secure deletion of file data and meta-data, arguing that user-level secure deletion tools are inadequate in many respects and that synchronous deletion facilities are too time consuming to be acceptable to users.
Abstract: Security conscious users of file systems require that deleted information and its associated meta-data are no longer accessible on the underlying physical disk. Existing file system implementations only reset the file system data structures to reflect the removal of data, leaving both the actual data and its associated meta-data on the physical disk. Even when this information has been overwritten, it may remain visible to advanced probing techniques such as magnetic force microscopy or magnetic force scanning tunneling microscopy. Our project addresses this problem by adding support to the Linux kernel for asynchronous secure deletion of file data and meta-data. We provide an implementation for the Ext2 file system; other file systems can be accommodated easily. An asynchronous overwriting process sacrifices immediate security but ultimately provides a far more usable and complete secure deletion facility. We justify our design by arguing that user-level secure deletion tools are inadequate in many respects and that synchronous deletion facilities are too time consuming to be acceptable to users. Further, we contend that encrypting file information, either using manual tools or an encrypted file system, is not a sufficient solution to alleviate the need for secure data deletion.
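
The asynchronous design can be pictured as a background shredder: deletion returns immediately while overwrite passes happen later. The user-level toy below conveys only that queue-and-overwrite shape; the paper argues the real facility belongs in the kernel, where it can also reach meta-data and freed blocks.

```python
# Background queue that overwrites file contents before unlinking.
import os, queue, tempfile, threading

work = queue.Queue()

def shredder():
    while True:
        path = work.get()
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(3):               # multiple passes, per the threat model
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())
        os.unlink(path)
        work.task_done()

threading.Thread(target=shredder, daemon=True).start()

def secure_delete(path):
    work.put(path)          # returns at once; overwriting happens asynchronously

path = os.path.join(tempfile.gettempdir(), "secret.txt")
with open(path, "wb") as f:
    f.write(b"top secret")
secure_delete(path)
work.join()
```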

Proceedings Article•
13 Aug 2001
TL;DR: RaceGuard is presented: a kernel enhancement that detects attempts to exploit temporary file race vulnerabilities, and does so with sufficient speed and precision that the attack can be halted before it takes effect.
Abstract: Temporary file race vulnerabilities occur when privileged programs attempt to create temporary files in an unsafe manner. "Unsafe" means "non-atomic with respect to an attacker's activities." There is no portable standard for safely (atomically) creating temporary files, and many operating systems have no safe temporary file creation at all. As a result, many programs continue to use unsafe means to create temporary files, resulting in widespread vulnerabilities. This paper presents RaceGuard: a kernel enhancement that detects attempts to exploit temporary file race vulnerabilities, and does so with sufficient speed and precision that the attack can be halted before it takes effect. RaceGuard has been implemented, tested, and measured. We show that RaceGuard is effective at stopping temporary file race attacks, preserves compatibility (no legitimate software is broken), and preserves performance (overhead is minimal).
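
RaceGuard's detection rule can be stated in two steps: remember any name a process probed and found absent, and abort a subsequent creation that finds the name suddenly present. The toy below simulates that state machine in user space with an invented API; the real mechanism instruments stat() and open() inside the kernel, per process.

```python
# Simulate RaceGuard's probe-then-create race detection.
class RaceGuard:
    def __init__(self, fs):
        self.fs, self.probed_absent = fs, set()

    def stat(self, name):
        if name not in self.fs:
            self.probed_absent.add(name)       # temp-file creation pattern begins
        return name in self.fs

    def open_create(self, name):
        if name in self.probed_absent and name in self.fs:
            raise PermissionError(f"race attack on {name}: created after probe")
        self.probed_absent.discard(name)
        self.fs.add(name)

fs = set()
guard = RaceGuard(fs)
guard.stat("/tmp/victim.tmp")                  # program sees: no such file
fs.add("/tmp/victim.tmp")                      # attacker wins the race
try:
    guard.open_create("/tmp/victim.tmp")
except PermissionError as err:
    print(err)                                 # attack halted before taking effect
```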

Proceedings Article•
13 Aug 2001
TL;DR: This paper describes the design, implementation, and performance of a system that provides controlled access to Kerberized services through a browser, with a single sign-on that produces both Kerberos and public key credentials.
Abstract: Kerberos, a widely used network authentication mechanism, is integrated into numerous applications: UNIX and Windows 2000 login, AFS, Telnet, and SSH to name a few. Yet, Web applications rely on SSL to establish authenticated and secure connections. SSL provides strong authentication by using certificates and public key challenge-response authentication. The expansion of the Internet requires each system to leverage the strength of the other, which suggests the importance of interoperability between them. This paper describes the design, implementation, and performance of a system that provides controlled access to Kerberized services through a browser. This system provides a single sign-on that produces both Kerberos and public key credentials. The Web server uses a plugin that translates public key credentials to Kerberos credentials. The Web server's subsequent authenticated actions taken on a user's behalf are limited in time and scope. Performance measurements show how the overhead introduced by credential translation is amortized over the login session.

Proceedings Article•
13 Aug 2001
TL;DR: The presented research provides detail into specific scenarios, weaknesses, and mitigation recommendations related to data protection, malicious code, virus storage, and virus propagation related to the Palm OS and its supporting hardware platform.
Abstract: Portable devices, such as Personal Digital Assistants (PDAs), are particularly vulnerable to malicious code threats due to their widespread implementation and current lack of a security framework. Although well known in the security industry to be insecure, PDAs are ubiquitous in enterprise environments and are being used for such applications as one-time-password generation, storage of medical and company confidential information, and e-commerce. It is not enough to assume all users are conscious of computer security and it is crucial to understand the risks of using portable devices in a security infrastructure. Furthermore, it is not possible to employ a secure application on top of an insecure foundation. Palm operating system (OS) devices own nearly 80 percent of the global handheld computing market [11]. It is because of this that the design of the Palm OS and its supporting hardware platform were analyzed. The presented research provides detail into specific scenarios, weaknesses, and mitigation recommendations related to data protection, malicious code, virus storage, and virus propagation. Additionally, this work can be used as a model by users and developers to gain a deeper understanding of the additional security risks that these and other portable devices introduce.

Proceedings Article•
13 Aug 2001
TL;DR: This paper presents Capability File Names, a new access control mechanism, in which self-certifying file names are used as sparse capabilities that allow a user ubiquitous access to his files and enable him to delegate this right to a dynamic group of remote users.
Abstract: The ability to access and share information over the Internet has introduced the need for new flexible, dynamic and fine-grained access control mechanisms. None of the current mechanisms for sharing information - distributed file systems and the web - offer adequate support for sharing in a large and highly dynamic group of users. Distributed file systems lack the ability to share information with unauthenticated users, and the web lacks fine grained access controls, i.e., the ability to grant individual users access to selected files. In this paper we present Capability File Names, a new access control mechanism, in which self-certifying file names are used as sparse capabilities that allow a user ubiquitous access to his files and enable him to delegate this right to a dynamic group of remote users. Encoding the capability in the file name has two major advantages: it is self-supporting and it ensures full compatibility with existing programs. Capability file names have been implemented in a new file system called CapaFS. CapaFS separates user identification from authorisation, thus allowing users to share selected files with remote users without the intervention of a system administrator. The implementation of CapaFS is described and evaluated in this paper.
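
A capability file name bundles the server's identity, the rights, and an unguessable token into the path itself, so possession of the string is the authorization. The encoding below is invented to make that concrete (CapaFS's actual name format differs); the server validates names with a secret only it holds.

```python
# Invented capability-file-name encoding: the name itself authorizes access.
import base64, hashlib, hmac

SERVER_SECRET = b"capafs-server-secret"        # known only to the file server

def make_capability_name(server, path, rights):
    token = hmac.new(SERVER_SECRET, f"{path}|{rights}".encode(),
                     hashlib.sha256).digest()
    cap = base64.urlsafe_b64encode(token[:16]).decode().rstrip("=")
    return f"/capafs/{server}/{rights}/{cap}/{path.lstrip('/')}"

def server_check(name):
    _, _, server, rights, cap, path = name.split("/", 5)
    token = hmac.new(SERVER_SECRET, f"/{path}|{rights}".encode(),
                     hashlib.sha256).digest()
    good = base64.urlsafe_b64encode(token[:16]).decode().rstrip("=")
    return hmac.compare_digest(cap, good)

name = make_capability_name("files.example.org", "/home/alice/paper.tex", "rw")
print(server_check(name))                          # True: capability accepted
print(server_check(name.replace("/rw/", "/r/")))   # False: rights tampered
```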

Proceedings Article•
13 Aug 2001
TL;DR: The enhancement advocated for allowing PDM to avoid storing a password-equivalent at the server is less expensive than existing schemes, and the approach can be used as a more efficient (at the server) variant of augmented EKE and SPEKE than the currently published schemes.
Abstract: In this paper we present PDM (Password Derived Moduli), a new approach to strong password-based protocols usable either for mutual authentication or for downloading security information such as the user's private key. We describe how the properties desirable for strong password mutual authentication differ from the properties desirable for credentials download. In particular, a protocol used solely for credentials download can be simpler and less expensive than one used for mutual authentication, since some properties (such as authentication of the server) are not necessary for credentials download. The features necessary for mutual authentication can be easily added to a credentials download protocol, but many of the protocols designed for mutual authentication are not as desirable for use in credentials download as protocols like PDM and basic EKE and SPEKE, because they are unnecessarily expensive when used for that purpose. PDM's performance is vastly more expensive at the client than any of the protocols in the literature, but it is more efficient at the server. We claim that performance at the server, since a server must handle a large and potentially unpredictable number of clients, is more important than performance at the client, assuming that client performance is "good enough". We describe PDM for credentials download, and then show how to enhance it to have the properties desirable for mutual authentication. In particular, the enhancement we advocate for allowing PDM to avoid storing a password-equivalent at the server is less expensive than existing schemes, and our approach can be used as a more efficient (at the server) variant of augmented EKE and SPEKE than the currently published schemes. PDM is important because it is a very different approach to the problem than any in the literature, we believe it to be unencumbered by patents, and it can be a lot less expensive at the server than existing schemes.
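
PDM's defining move is deriving the protocol modulus from the password itself, which is why the client-side cost dominates: both sides expand the password deterministically and the client searches for a usable prime. The toy below keeps only that expand-and-search shape, with small ordinary primes; PDM proper uses large moduli with special structure, making the search far more expensive.

```python
# Deterministically derive a prime modulus from a password and salt.
import hashlib, random

def is_probable_prime(n, rounds=40):           # standard Miller-Rabin
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def password_derived_modulus(password, salt, bits=128):
    seed = hashlib.sha256(password.encode() + salt).digest()
    candidate = int.from_bytes(seed, "big") >> (256 - bits)
    candidate |= (1 << (bits - 1)) | 1         # full size, odd
    while not is_probable_prime(candidate):    # the deliberately costly search
        candidate += 2
    return candidate

print(hex(password_derived_modulus("correct horse", b"alice@example.com")))
```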

Proceedings Article•
Naomaru Itoi1•
13 Aug 2001
TL;DR: SC-CFS, a file system that encrypts files and takes advantage of a smartcard for per-file key generation, counters password guessing attacks and minimizes the damage caused by physical attack and bug exploitation.
Abstract: Storing information securely is one of the most important roles expected of computer systems, but it is difficult to achieve with current commodity computers. The computers may yield secrets through physical breach, software bug exploitation, or password guessing attacks. Even file systems that provide strong security, such as the cryptographic file system, are not perfect against these attacks. We have developed SC-CFS, a file system that encrypts files and takes advantage of a smartcard for per-file key generation. SC-CFS counters password guessing attacks, and minimizes the damage caused by physical attack and bug exploitation. The performance of the system is not yet satisfactory, taking 300 ms to access a file.
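
Per-file key generation on a card limits blast radius: the master secret never leaves the card, and each file gets its own derived key. The derivation below is a guess for illustration (HMAC of the file identifier under a card-held master), not necessarily SC-CFS's actual construction.

```python
# Card-held master secret; one derived key per file identifier.
import hashlib, hmac

CARD_MASTER = b"never-leaves-the-smartcard"    # illustrative stand-in

def card_derive_file_key(file_id):             # would run on the card in SC-CFS
    return hmac.new(CARD_MASTER, file_id.encode(), hashlib.sha256).digest()[:16]

k1 = card_derive_file_key("inode:1021")
k2 = card_derive_file_key("inode:1022")
print(k1 != k2)     # True: compromising one file key exposes only that file
```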

Proceedings Article•
13 Aug 2001
TL;DR: The results of this real-world systems exercise in hardware cryptographic acceleration bring short-DES performance close to 3 megabytes/second, and demonstrate the importance, when designing specialty hardware, of not overlooking the software aspects governing how a device can be used.
Abstract: Over the last several years, our research team built a commercially-offered secure coprocessor that, besides other features, offers high-speed DES: over 20 megabytes/second. However, it obtains these speeds only on operations with large data lengths. For DES operations on short data (e.g., 8-80 bytes), our commercial offering was benchmarked at less than 2 kilobytes/second. The programmability of our device enabled us to investigate this issue, identify and address a series of bottlenecks that were not initially apparent, and ultimately bring our short-DES performance close to 3 megabytes/second. This paper reports the results of this real-world systems exercise in hardware cryptographic acceleration, and demonstrates the importance, when designing specialty hardware, of not overlooking the software aspects governing how a device can be used.


Proceedings Article•DOI•
Peter M. Gleitz1, Steven M. Bellovin1•
13 Aug 2001
TL;DR: A new method, called Transient Addressing for Related Processes (TARP), whereby hosts temporarily employ and subsequently discard IPv6 addresses in servicing a client host's network requests is proposed.
Abstract: Traditionally, hosts have tended to assign relatively few network addresses to an interface for extended periods. Encouraged by the new abundance of addressing possibilities provided by IPv6, we propose a new method, called Transient Addressing for Related Processes (TARP), whereby hosts temporarily employ and subsequently discard IPv6 addresses in servicing a client host's network requests. The method provides certain security advantages and neatly finesses some well-known firewall problems caused by dynamic port negotiation used in a variety of application protocols. A prototype implementation exists as a small set of KAME/BSD kernel enhancements and allows socket programmers and applications nearly transparent access to TARP addressing's advantages.
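
TARP's premise is that IPv6's address abundance lets a host treat addresses somewhat like ephemeral ports: mint one per task, answer on it, then discard it. The fragment below only mints such transient addresses from a site prefix (the documentation prefix 2001:db8::/64 is a stand-in); the actual prototype plumbs this through kernel socket handling.

```python
# Mint transient IPv6 addresses: site prefix + random 64-bit interface ID.
import ipaddress, secrets

SITE_PREFIX = ipaddress.IPv6Network("2001:db8:1:2::/64")   # stand-in prefix

def transient_address():
    suffix = secrets.randbits(64)
    return ipaddress.IPv6Address(int(SITE_PREFIX.network_address) | suffix)

addr = transient_address()       # use for one transaction, then discard
print(addr)
```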
