
Showing papers by "Craig Partridge published in 2003"


Proceedings ArticleDOI
25 Aug 2003
TL;DR: It is argued that cognitive techniques, rather than traditional algorithmic approaches, are best suited to meeting the uncertainties and complexity of the objective of network research.
Abstract: We propose a new objective for network research: to build a fundamentally different sort of network that can assemble itself given high level instructions, reassemble itself as requirements change, automatically discover when something goes wrong, and automatically fix a detected problem or explain why it cannot do so. We further argue that to achieve this goal, it is not sufficient to improve incrementally on the techniques and algorithms we know today. Instead, we propose a new construct, the Knowledge Plane, a pervasive system within the network that builds and maintains high-level models of what the network is supposed to do, in order to provide services and advice to other elements of the network. The Knowledge Plane is novel in its reliance on the tools of AI and cognitive systems. We argue that cognitive techniques, rather than traditional algorithmic approaches, are best suited to meeting the uncertainties and complexity of our objective.

635 citations


Posted Content
TL;DR: No definitive analyses exist on the impact of September 11 on the Internet, though a few conflicting anecdotal reports about its performance that day — such as several presentations at NANOG indicating relatively little effect and press accounts suggesting that the impact was severe — have appeared.
Abstract: Although secondary to the human tragedy resulting from the September 11, 2001, attacks on the World Trade Center and the Pentagon, telecommunications issues were significant that day both in terms of damage (physical as well as functional) and of mounting response and recovery efforts. The Internet has come to be a major component of the nation’s (and the world’s) communications and information infrastructure. People rely on it for business, social, and personal activities of many kinds, and government depends on it for communications and transactions with the media and the public. Thus there is interest in how the Internet performed and was used on September 11. Unlike the situation with longer-standing telecommunications services (notably the public telephone network), there are few regulations, policies, or practices related to the Internet’s functioning in emergency situations. Nor are there many publicly available data to help policy makers or the industry itself assess the Internet’s performance — either on a continuing basis or in the aftermath of a crisis. No regular system exists for reporting failures and outages, nor is there agreement on metrics of performance. Some experiences are shared informally among network operators or in forums such as the North American Network Operators Group (NANOG), but that information is not readily accessible for national planning or research purposes. The decentralized architecture of the Internet — although widely characterized as one of the Internet’s strengths — further compounds the difficulty of collecting comprehensive data about how the Internet is performing. It is therefore unsurprising that no definitive analyses exist on the impact of September 11 on the Internet, though a few conflicting anecdotal reports about its performance that day — such as several presentations at NANOG indicating relatively little effect and press accounts suggesting that the impact was severe — have appeared.

57 citations



Patent
14 Nov 2003
TL;DR: In this article, the identification field of the IP header of each transmitted packet is supplemented with at least one bit from another field of the header, and the probability of random collisions may also be reduced by ensuring that packets sent from a transmitting IPsec node to a receiving IPsec node are not fragmented.
Abstract: Embodiments of the invention reduce the probability of success of a DOS attack on a node receiving packets by decreasing the probability of random collisions of packets sent by a malicious user with those sent by honest users. The probability of random collisions may be reduced in one class of embodiments of the invention by supplementing the identification field of the IP header of each transmitted packet with at least one bit from another field of the header. The probability of random collisions may be reduced in another class of embodiments of the invention by ensuring that packets sent from a transmitting IPsec node to a receiving IPsec node are not fragmented.
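The first class of embodiments can be sketched in a few lines: widening the 16-bit IP Identification field with bits borrowed from another header field shrinks the chance that a packet forged by an attacker randomly collides with an honest in-flight packet. This is an illustrative sketch only; the choice of donor field (low-order TTL bits here) and the bit count are assumptions, since the abstract specifies only "at least one bit from another field of the header".

```python
def extended_id(identification: int, ttl: int, extra_bits: int = 2) -> int:
    """Concatenate `extra_bits` low-order TTL bits onto the 16-bit ID field."""
    assert 0 <= identification < 1 << 16
    mask = (1 << extra_bits) - 1
    return (identification << extra_bits) | (ttl & mask)

def random_collision_probability(id_bits: int) -> float:
    """Chance one forged packet matches one specific in-flight packet's ID."""
    return 1 / (1 << id_bits)

# A plain 16-bit ID space gives 1-in-65536 odds per forged packet;
# two borrowed bits quarter that, since each extra bit halves the odds.
p16 = random_collision_probability(16)
p18 = random_collision_probability(18)
print(p16 / p18)  # 4.0
```

The second class of embodiments sidesteps the problem differently: if IPsec peers avoid fragmentation entirely, the Identification field never needs to be matched at reassembly, so collisions on it stop mattering.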

14 citations


Proceedings ArticleDOI
08 Mar 2003
TL;DR: Results show that under this analysis, traces from both wireless and wire-line networks leak useful information about the properties of the network and applications under examination, even when the actual packets are encrypted or attempts are made to mask the traffic timing.
Abstract: Recent studies have shown that signal-processing techniques are quite valuable for the modeling and analysis of modern networks and network traffic [1] [2]. However, to date most of these studies have focused on characterizing the multi-scale and long-memory stochastic nature of single streams or traces of non-encrypted network traffic. The key approach used has been to transform traces of packet arrival times and/or packet size into encoded time signals, which then allow analysts to perform standard statistical and time-frequency-scale signal analyses. In this paper we summarize some of our results which show that under this analysis, traces from both wireless and wire-line networks leak useful information about the properties of the network and applications under examination, even when the actual packets are encrypted or attempts are made to mask the traffic timing. Furthermore, when multiple signal techniques are used between individual time streams, even more information about the underlying routing and flows can be uncovered.
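The encoding step described above can be illustrated with a minimal sketch, not the authors' code: bin packet arrival timestamps into a count-per-interval time signal, then take a discrete Fourier transform, whose off-zero peaks expose periodic structure in the traffic. The synthetic trace (one packet every 10 ms) and the 5 ms bin width are assumptions chosen for demonstration.

```python
import cmath

def bin_arrivals(timestamps_us, duration_us, bin_width_us):
    """Encode integer-microsecond arrival times as a packets-per-bin signal."""
    n_bins = duration_us // bin_width_us
    counts = [0] * n_bins
    for t in timestamps_us:
        counts[t // bin_width_us] += 1
    return counts

def dft_magnitudes(signal):
    """Naive DFT magnitudes; peaks away from k=0 reveal periodic traffic."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(signal)))
            for k in range(n // 2 + 1)]

# One packet every 10 ms for one second, binned at 5 ms resolution.
trace = [i * 10_000 for i in range(100)]                      # microseconds
signal = bin_arrivals(trace, duration_us=1_000_000, bin_width_us=5_000)
spectrum = dft_magnitudes(signal)
peak_k = max(range(1, len(spectrum)), key=lambda k: spectrum[k])
print(peak_k)  # 100: i.e. 100 cycles over 200 bins, a 10 ms period
```

The point of the example is that the spectral peak depends only on packet timing, so it survives payload encryption, which is the leakage the paper examines.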

13 citations


Proceedings ArticleDOI
22 Apr 2003
TL;DR: SPIE, the Source Path Isolation Engine, is presented, a hash-based technique for IP traceback that generates audit trails for traffic within the network, and can trace the origin of a single IP packet delivered by the network in the recent past.
Abstract: The design of the Internet protocol makes it difficult to reliably identify the originator of an IP packet. IP traceback techniques have been developed to determine the source of large packet flows, but, to date, no system has been presented to track individual packets in an efficient, scalable fashion. We present SPIE, the Source Path Isolation Engine, a hash-based technique for IP traceback that generates audit trails for traffic within the network, and can trace the origin of a single IP packet delivered by the network in the recent past.
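The hash-based auditing idea can be sketched as follows, as a simplified illustration rather than the published SPIE design: each router records a compact digest of every forwarded packet in a Bloom filter, so a victim can later ask "did this packet transit you?" without routers storing full packets. Filter size, hash count, and digesting the whole byte string (rather than only invariant header fields plus a payload prefix) are simplifying assumptions here.

```python
import hashlib

class PacketDigestTable:
    """Bloom filter of packet digests kept by one router for a time window."""

    def __init__(self, n_bits: int = 1 << 16, n_hashes: int = 3):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, packet: bytes):
        # Derive k independent bit positions from salted SHA-256 digests.
        for salt in range(self.n_hashes):
            h = hashlib.sha256(bytes([salt]) + packet).digest()
            yield int.from_bytes(h[:4], "big") % self.n_bits

    def record(self, packet: bytes) -> None:
        """Audit a forwarded packet in constant space per packet."""
        for pos in self._positions(packet):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def seen(self, packet: bytes) -> bool:
        """True if the packet may have transited this router.

        Bloom filters admit false positives but never false negatives,
        so a packet that was forwarded is always found.
        """
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(packet))

router = PacketDigestTable()
router.record(b"attack-packet-bytes")
print(router.seen(b"attack-packet-bytes"))  # True
```

Traceback then proceeds hop by hop: starting at the victim's router, query each upstream neighbor's table and follow the routers that answer yes back toward the packet's origin.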

9 citations