
Showing papers on "The Internet" published in 1992


Proceedings Article
01 Apr 1992
TL;DR: This document describes the MD5 message-digest algorithm, which takes as input a message of arbitrary length and produces as output a 128-bit "fingerprint" or "message digest" of the input.
Abstract: This document describes the MD5 message-digest algorithm. The algorithm takes as input a message of arbitrary length and produces as output a 128-bit "fingerprint" or "message digest" of the input. This memo provides information for the Internet community. It does not specify an Internet standard.

3,514 citations
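Not part of the RFC text above, but as a quick sketch of the interface it describes: Python's standard hashlib module exposes MD5, mapping a message of arbitrary length to a 128-bit digest. (MD5 has long since been broken for security purposes; the snippet only illustrates the input/output contract.)

```python
import hashlib

# MD5 maps an arbitrary-length message to a fixed 128-bit (16-byte) digest.
message = b"The quick brown fox jumps over the lazy dog"
md5 = hashlib.md5(message)

print(len(md5.digest()) * 8)   # 128 bits, regardless of input length
print(md5.hexdigest())         # 9e107d9d372bb6826bd81d3542a419d6
```

Hashing a longer or shorter message produces a digest of exactly the same size, which is what makes it usable as a compact "fingerprint".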


01 Mar 1992
TL;DR: This document describes the Network Time Protocol (NTP), specifies its formal structure, and summarizes information useful for its implementation.
Abstract: This document describes the Network Time Protocol (NTP), specifies its formal structure and summarizes information useful for its implementation. [STANDARDS-TRACK]

1,057 citations


Journal ArticleDOI
TL;DR: Describes the aims, data model, and protocols needed to implement the "web" and compares them with various contemporary systems.
Abstract: The World‐Wide Web (W3) initiative is a practical project designed to bring a global information universe into existence using available technology. This article describes the aims, data model, and protocols needed to implement the “web” and compares them with various contemporary systems.

595 citations
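As a loose, hypothetical illustration of how simple the web's request protocol was (this sketch is not from the article; the host and path are placeholders, and it uses the later HTTP/1.0 form rather than the original one-line GET), fetching a document amounts to opening a TCP connection, sending a short request, and reading the reply:

```python
import socket

HOST, PORT, PATH = "example.com", 80, "/"   # placeholder host and path

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    # One short, human-readable request per document.
    sock.sendall(f"GET {PATH} HTTP/1.0\r\nHost: {HOST}\r\n\r\n".encode("ascii"))

    chunks = []
    while True:                      # read until the server closes the connection
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)

print(b"".join(chunks)[:200])        # status line, headers, start of the document
```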


Book
02 Jan 1992
TL;DR: The TCP/IP protocol suite was first proposed fifteen years ago and has been used widely in military and commercial systems, but existing papers and specifications make it difficult to deduce why the protocols are designed as they are; this paper captures some of the early reasoning that shaped them.
Abstract: The Internet protocol suite, TCP/IP, was first proposed fifteen years ago. It was developed by the Defense Advanced Research Projects Agency (DARPA), and has been used widely in military and commercial systems. While there have been papers and specifications that describe how the protocols work, it is sometimes difficult to deduce from these why the protocol is as it is. For example, the Internet protocol is based on a connectionless or datagram mode of service. The motivation for this has been greatly misunderstood. This paper attempts to capture some of the early reasoning which shaped the Internet protocols.

536 citations


01 Mar 1992
TL;DR: The Network Time Protocol provides the mechanisms to synchronize time and coordinate time distribution in a large, diverse internet operating at rates from mundane to lightwave.
Abstract: This document describes the Network Time Protocol (NTP), specifies its formal structure and summarizes information useful for its implementation. NTP provides the mechanisms to synchronize time and coordinate time distribution in a large, diverse internet operating at rates from mundane to lightwave. It uses a returnable-time design in which a distributed subnet of time servers operating in a self-organizing, hierarchical-master-slave configuration synchronizes local clocks within the subnet and to national time standards via wire or radio. The servers can also redistribute reference time via local routing algorithms and time daemons.

321 citations
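To make the client side of this concrete, here is a minimal SNTP-style exchange (a rough sketch, not the protocol machinery or reference implementation from the RFC; the server name is just a placeholder): send a 48-byte request to UDP port 123 and decode the server's transmit timestamp, whose seconds field counts from 1 January 1900.

```python
import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"    # placeholder; any reachable NTP server works
NTP_TO_UNIX = 2208988800       # seconds between the 1900 and 1970 epochs

# First byte 0x1B = leap indicator 0, version 3, mode 3 (client request).
request = b"\x1b" + 47 * b"\x00"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(request, (NTP_SERVER, 123))
    response, _ = sock.recvfrom(512)

# The transmit timestamp's integer seconds occupy bytes 40-43 of the reply.
seconds_since_1900 = struct.unpack("!I", response[40:44])[0]
print("server time:", time.ctime(seconds_since_1900 - NTP_TO_UNIX))
```

The full protocol adds originate/receive timestamps, offset and delay filtering, and the hierarchical server subnet described in the abstract; this sketch only shows the wire format of a single query.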


01 Jul 1992
TL;DR: This memo changes and clarifies some aspects of the semantics of the Type of Service octet in the Internet Protocol (IP) header.
Abstract: This memo changes and clarifies some aspects of the semantics of the Type of Service octet in the Internet Protocol (IP) header. [STANDARDS-TRACK]

268 citations
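For a sense of how the Type of Service octet is exposed to applications, here is a small, platform-dependent sketch (setting IP_TOS works on Linux, for example; the constants follow the RFC 1349 TOS-field values rather than the later DiffServ redefinition of the octet):

```python
import socket

# TOS-field values from RFC 1349 (the bits below the 3-bit precedence field).
IPTOS_LOWDELAY    = 0x10   # minimize delay
IPTOS_THROUGHPUT  = 0x08   # maximize throughput
IPTOS_RELIABILITY = 0x04   # maximize reliability
IPTOS_MINCOST     = 0x02   # minimize monetary cost

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    # Ask the IP layer to mark this socket's datagrams as low-delay traffic.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_LOWDELAY)
    print("TOS octet:", hex(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))
finally:
    sock.close()
```

Whether routers actually honor the marking is up to the network; the memo's purpose is to pin down what each value is supposed to mean.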


01 Jan 1992
TL;DR: This study attempts to characterize the dynamics of Internet workload from an end-point perspective and concludes that efficient congestion control is still a very difficult problem in large internetworks.
Abstract: Dynamics of Internet load are investigated using statistics of round-trip delays, packet losses and out-of-order sequence of acknowledgments. Several segments of the Internet are studied. They include a regional network (the John von Neumann Center Network), a segment of the NSFNet backbone and a cross-country network consisting of regional and backbone segments. Issues addressed include: (a) dominant time scales in network workload; (b) the relationship between packet loss and different statistics of round-trip delay (average, minimum, maximum and standard deviation); (c) the relationship between out-of-sequence acknowledgments and different statistics of delay; (d) the distribution of delay; (e) a comparison of results across different network segments (regional, backbone and cross-country); and (f) a comparison of results across time for a specific network segment. This study attempts to characterize the dynamics of Internet workload from an end-point perspective. A key conclusion from the data is that efficient congestion control is still a very difficult problem in large internetworks. Nevertheless, there are interesting signals of congestion that may be inferred from the data. Examples include (a) the presence of slow oscillation components in smoothed network delay, (b) an increase in conditional expected loss and conditional out-of-sequence acknowledgments as a function of various statistics of delay, and (c) a change in delay distribution parameters as a function of load, while the distribution itself remains the same. The results have potential application in heuristic algorithms and analytical approximations for congestion control. (University of Pennsylvania, Department of Computer and Information Science, Technical Report MS-CIS-92-83, "On the Dynamics and Significance of Low Frequency Components of Internet Load," available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/300.)

262 citations
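The study works from end-point measurements of round-trip delay and loss. As a very rough, hypothetical illustration of that style of probing (the paper analyzed traces of acknowledgment behavior, not a script like this; the host, port, and sample count are placeholders), one can gather a few RTT samples and compute the same summary statistics:

```python
import socket
import statistics
import time

HOST, PORT, SAMPLES = "example.com", 80, 10   # placeholder probe target

rtts = []
for _ in range(SAMPLES):
    start = time.monotonic()
    try:
        # Crude end-point probe: time TCP connection setup as an RTT estimate.
        with socket.create_connection((HOST, PORT), timeout=5):
            rtts.append(time.monotonic() - start)
    except OSError:
        pass          # treat a failed probe as a loss
    time.sleep(0.5)

if rtts:
    ms = [r * 1000 for r in rtts]
    print(f"min {min(ms):.1f}  avg {statistics.mean(ms):.1f}  "
          f"max {max(ms):.1f}  stdev {statistics.pstdev(ms):.1f}  (ms), "
          f"{SAMPLES - len(rtts)} lost")
```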


Book
01 Jan 1992
TL;DR: A guide to the theory and application of selling strategies and tools, including the use of cell phones, presentation software, and other technologies in the marketplace.
Abstract: A guide to the theory and application of selling strategies and tools. Topics covered include the use of cell phones, presentation software and other technologies in the market place. This updated edition also has coverage of the Internet and more global examples.

223 citations


Journal ArticleDOI
01 Jul 1992
TL;DR: This experiment was not only the first sizeable audio multicast over a packet network, but was also significant for the size of the IP multicast network topology itself.
Abstract: At the March 1992 meeting of the Internet Engineering Task Force (IETF) in San Diego, live audio from several sessions of the meeting was "audiocast" using multicast packet transmission from the IETF site over the Internet to participants at 20 sites on three continents spanning 16 time zones. This experiment was not only the first sizeable audio multicast over a packet network, but was also significant for the size of the IP multicast network topology itself.

220 citations
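The mechanism underneath the audiocast is ordinary IP multicast: hosts join a group address and receive UDP datagrams sent to it. A minimal receiver sketch (the group address and port here are placeholders, and everything audio-specific is omitted) looks like this:

```python
import socket
import struct

GROUP, PORT = "224.2.127.254", 9875   # placeholder multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group on the default interface (struct ip_mreq: group + interface).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

for _ in range(5):                    # print a few packets, then stop
    data, sender = sock.recvfrom(2048)
    print(f"{len(data)}-byte packet from {sender[0]}")

sock.close()
```

The sender's job is symmetric: it transmits datagrams to the group address, and the multicast routing topology delivers copies to every site that has joined.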


01 Feb 1992
TL;DR: This memo describes a protocol for reliable transport that utilizes the multicast capability of applicable lower layer networking architectures that permits an arbitrary number of transport providers to perform realtime collaborations without requiring networking clients to possess detailed knowledge of the population or geographical dispersion of the participating members.
Abstract: This memo describes a protocol for reliable transport that utilizes the multicast capability of applicable lower layer networking architectures. The transport definition permits an arbitrary number of transport providers to perform realtime collaborations without requiring networking clients (i.e., applications) to possess detailed knowledge of the population or geographical dispersion of the participating members. It is not specific to any particular network architecture, but it does implicitly require some form of multicasting (or broadcasting) at the data link level, as well as some means of communicating that capability up through the layers to the transport. This memo provides information for the Internet community. It does not specify an Internet standard.

157 citations


Journal Article
TL;DR: This paper presents a taxonomy of approaches to resource discovery, and uses this taxonomy to compare a number of resource discovery systems, and examine several gateways between existing systems.
Abstract: In the past several years, the number and variety of resources available on the Internet have increased dramatically. With this increase, many new systems have been developed that allow users to search for and access these resources. As these systems begin to interconnect with one another through "information gateways", the conceptual relationships between the systems come into question. Understanding these relationships is important, because they address the degree to which the systems can be made to interoperate seamlessly, without the need for users to learn the details of each system. In this paper we present a taxonomy of approaches to resource discovery. The taxonomy provides insights into the interrelated problems of organizing, browsing, and searching for information. Using this taxonomy, we compare a number of resource discovery systems, and examine several gateways between existing systems.

01 Sep 1992
TL;DR: A flow specification (or "flow spec") is a data structure used by internetwork hosts to request special services of the internetwork, often guarantees about how the internetwork will handle some of the hosts' traffic.
Abstract: A flow specification (or "flow spec") is a data structure used by internetwork hosts to request special services of the internetwork, often guarantees about how the internetwork will handle some of the hosts' traffic. In the future, hosts are expected to have to request such services on behalf of distributed applications such as multimedia conferencing.
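To suggest what such a data structure might look like in practice, here is a purely illustrative sketch; the field names and units are hypothetical and do not reproduce the memo's actual flow spec layout:

```python
from dataclasses import dataclass

@dataclass
class FlowSpec:
    """Illustrative flow specification (field names are hypothetical,
    not the memo's actual format)."""
    peak_rate_bps: int      # fastest rate at which the source will send
    mean_rate_bps: int      # long-term average sending rate
    max_packet_bytes: int   # largest datagram belonging to the flow
    max_delay_ms: int       # latest acceptable end-to-end delay
    loss_sensitive: bool    # whether the application is sensitive to loss

# A host might hand something like this to the internetwork when requesting
# service for a packet-audio stream in a multimedia conference.
audio_flow = FlowSpec(peak_rate_bps=64_000, mean_rate_bps=32_000,
                      max_packet_bytes=512, max_delay_ms=250,
                      loss_sensitive=False)
print(audio_flow)
```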

Journal ArticleDOI
01 Jul 1992
TL;DR: This paper confines the discussion to higher level issues: an overall connection architecture, a connection control protocol, and configuration management.
Abstract: What ingredients are needed to enable widespread personal teleconferencing over the Internet and NREN? The major focus of our work in the Multimedia Conferencing Project at ISI is on the design and implementation of protocols at a number of levels in the protocol stack: at the lower levels, real-time data communication services for the Internet in general; and at higher levels, a connection management architecture to facilitate connections among heterogeneous systems. In this paper, we confine our discussion to higher level issues: an overall connection architecture, a connection control protocol, and configuration management. Keywords: multimedia, teleconferencing, connection architecture, connection management, configuration management, connection control protocol.

Proceedings ArticleDOI
01 Oct 1992
TL;DR: This paper explores the performance of DNS based on two 24-hour traces of traffic destined to one of these root name servers, and calls for a fundamental change in the way name servers and distributed applications are specified and implemented.
Abstract: Over a million computers implement the Internet's Domain Name System, or DNS, making it the world's most distributed database and the Internet's most significant source of wide-area RPC-like traffic. Last year, over eight percent of the packets and four percent of the bytes that traversed the NSFnet were due to DNS. We estimate that a third of this wide-area DNS traffic was destined to seven root name servers. This paper explores the performance of DNS based on two 24-hour traces of traffic destined to one of these root name servers. It considers the effectiveness of name caching and retransmission timeout calculation, shows how algorithms to increase DNS's resiliency lead to disastrous behavior when servers fail or when certain implementation faults are triggered, explains the paradoxically high fraction of wide-area DNS packets, and evaluates the impact of flaws in various implementations of DNS. It shows that negative caching would improve DNS performance only marginally in an internet of correctly implemented name servers. It concludes by calling for a fundamental change in the way we specify and implement future name servers and distributed applications.
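To make the "RPC-like" character of this traffic concrete, here is a small hand-rolled lookup (a present-day sketch rather than the paper's trace methodology; the resolver address is just a public example, not a root server): the query is one UDP datagram and the answer normally comes back as one datagram, which is exactly the wide-area request/response pattern the traces measure.

```python
import socket
import struct

def build_query(name: str, query_id: int = 0x1234) -> bytes:
    """Build a minimal DNS query for an A record, recursion desired."""
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split("."))
    return header + qname + b"\x00" + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN

RESOLVER = ("8.8.8.8", 53)   # placeholder public resolver

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(build_query("example.com"), RESOLVER)
    response, _ = sock.recvfrom(512)

answer_count = struct.unpack("!H", response[6:8])[0]   # ANCOUNT field
print(f"{len(response)}-byte response carrying {answer_count} answer record(s)")
```

Caching, including the negative caching the paper evaluates, exists precisely so that most lookups never have to cross the wide area at all.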

Book
02 Jan 1992
TL;DR: A standard suite of protocols supports an architecture of multiple packet-switched networks interconnected by gateways; the DARPA experimental internet system consists of satellite, terrestrial, radio, and local networks, all interconnected through a system of gateways and a set of common protocols.
Abstract: THE MILITARY requirement for computer communications between heterogeneous computers on heterogeneous networks has driven the development of a standard suite of protocols to permit such communications to take place in a robust and flexible manner. These protocols support an architecture consisting of multiple packet switched networks interconnected by gateways. The DARPA experimental internet system consists of satellite, terrestrial, radio, and local networks, all interconnected through a system of gateways and a set of common protocols.

01 Apr 1992
TL;DR: This document describes the MD2 message-digest algorithm, which takes as input a message of arbitrary length and produces as output a 128-bit "fingerprint" or "message digest" of the input.
Abstract: This document describes the MD2 message-digest algorithm. The algorithm takes as input a message of arbitrary length and produces as output a 128-bit "fingerprint" or "message digest" of the input. This memo provides information for the Internet community. It does not specify an Internet standard.

Book
11 Aug 1992
TL;DR: This new second edition of TCP/IP Network Administration discusses advanced routing protocols (RIPv2, OSPF, and BGP) and the gated software package that implements them and is a command and syntax reference for several important packages, including gated, pppd, named, dhcpd, and sendmail.
Abstract: TCP/IP Network Administration, Third Edition is a complete guide to setting up and running a TCP/IP network for administrators of networks of systems or users of home systems that access the Internet. It starts with the fundamentals: what the protocols do and how they work, how addresses and routing are used to move data through the network, and how to set up your network connection. Beyond basic setup, this new second edition discusses advanced routing protocols (RIPv2, OSPF, and BGP) and the gated software package that implements them. It also provides a tutorial on how to configure important network services, including PPP, SLIP, sendmail, Domain Name Service (DNS), BOOTP and DHCP configuration servers, and some simple setups for NIS and NFS. There are also chapters on troubleshooting and security. In addition, this book is a command and syntax reference for several important packages, including gated, pppd, named, dhcpd, and sendmail.

Proceedings ArticleDOI
01 May 1992
TL;DR: The authors explore a variety of possibilities for adapting the wireless environment to that of IP, describing several alternatives, each making use of a different combination of the addressing and routing features offered by IP.
Abstract: IP is the basic protocol in the Internet. The authors explore a variety of possibilities to adapt the wireless environment to that of IP. They describe the requirements and show how these can be accommodated using the existing IP. At the heart of the problem is the lack of a capability in the current IP routing services to track topological changes. Several alternatives are described, each making use of a different combination of the addressing and routing features offered by IP, and the tradeoffs among them are compared.


Book
01 Sep 1992
TL;DR: The second edition of an introduction to the Internet, the international network that includes virtually every major computer site in the world, aimed at researchers, students, or anyone who likes electronic mail.
Abstract: This is the second edition of an introduction to the Internet, the international network that includes virtually every major computer site in the world. The Internet is a resource of almost unimaginable wealth. In addition to electronic mail and news services, thousands of public archives, databases, and other special services are available: everything from space flight announcements to ski reports, remote login, and network news. The Whole Internet pays special attention to some new tools for helping you find information, like the World-Wide Web and its multimedia browser, Mosaic. Aimed at researchers, students, or just people who like electronic mail, this book should help readers explore what's possible. It also includes a pull-out quick-reference card.

Journal ArticleDOI
TL;DR: Peter G. Neumann, SRI International EL243, Menlo Park CA 94025-3493 (e-mail Neumann@csl.sri.com)
Abstract: Copyright 2004, Peter G. Neumann, SRI International EL243, Menlo Park CA 94025-3493 (e-mail Neumann@csl.sri.com; http://www.CSL.sri.com/neumann; telephone 1-650-859-2375; fax 1-650-859-2844): Editor, ACM SIGSOFT Software Engineering Notes, 1976–93, Assoc.Ed., 1994–; Chairman, ACM Committee on Computers and Public Policy (CCPP); Moderator of the Risks Forum (comp.risks); cofounder with Lauren Weinstein of People For Internet Responsibility (http://www.pfir.org).

Journal Article
TL;DR: This architecture implements the replicas as a weak-consistency process group, which provides good scalability and availability, handles portable computer systems, and minimizes the effect of users on each other.
Abstract: Services provided on wide-area networks like the Internet present several challenges. The reliability, performance, and scalability expected of such services often require that they be implemented using multiple, replicated servers. One possible architecture implements the replicas as a weak-consistency process group. This architecture provides good scalability and availability, handles portable computer systems, and minimizes the effect of users on each other. The key principles in this architecture are component independence, a process group protocol that provides small summaries of database contents, caching of database "slices", and the quorum multicast client-to-server communication protocol. A distributed bibliographic database system serves as an example.

Book
01 Nov 1992
TL;DR: A survey of the Internet's historical evolution, core technologies, and operational infrastructure, closing with future directions, including the billion-node Internet, and an annotated bibliography.
Abstract: I. INTRODUCTION: Historical Evolution; Globalization of the Internet; Evolving the System. II. TECHNOLOGIES: Core Protocols; Routing Protocols; Main Applications; A Practical Perspective on Routers; A Practical Perspective on Host Networking; Architectural Security; Creating New Applications. III. INFRASTRUCTURE: Directory Services; Network Management; Tools for an Internet Backbone; Tools for an Internet Component IP; Network Performance; Operational Security. IV. DIRECTIONS: The Billion-Node Internet; Internet Evolution and Future Directions. Annotated Bibliography. Index.

01 Mar 1992
TL;DR: This memo documents the process currently used for the standardization of Internet protocols and procedures.
Abstract: This memo documents the process currently used for the standardization of Internet protocols and procedures. [STANDARDS-TRACK]

01 Oct 1992
TL;DR: This document, which has been reviewed by the Federal Engineering Planning Group (FEPG), the co-chairs of the Intercontinental Engineering Planning Group (IEPG), and Reseaux IP Europeens (RIPE), presents recommendations for management of the IP address space.
Abstract: This document has been reviewed by the Federal Engineering Planning Group (FEPG) on behalf of the Federal Networking Council (FNC), the co-chairs of the Intercontinental Engineering Planning Group (IEPG), and the Reseaux IP Europeens (RIPE). There was general consensus by those groups to support the recommendations proposed in this document for management of the IP address space.

01 Jan 1992
TL;DR: This document illustrates the growth of the Internet by examination of entries in the Domain Name System (DNS) and pre-DNS host table data, which were retrieved from system archive tapes.
Abstract: This document illustrates the growth of the Internet by examination of entries in the Domain Name System (DNS) and pre-DNS host tables. DNS entries are collected by a program called ZONE, which searches the Internet and retrieves data from all known domains. Pre-DNS host table data were retrieved from system archive tapes. Various statistics are presented on the number of hosts and domains.

Journal ArticleDOI
01 Oct 1992
TL;DR: SUIT is a C subroutine library which provides an external control UIMS, an interactive layout editor, and a set of standard "widgets," such as sliders, buttons, and check boxes; SUIT-based applications run transparently across the Macintosh, DOS, and UNIX/X platforms.
Abstract: In recent years, the computer science community has realized the advantages of GUIs (Graphical User Interfaces). Because high-quality GUIs are difficult to build, support tools such as UIMSs, UI Toolkits, and Interface Builders have been developed. Although these tools are powerful, they typically make two assumptions: first, that the programmer has some familiarity with the GUI model, and second, that he is willing to invest several weeks becoming proficient with the tool. These tools typically operate only on specific platforms, such as DOS, the Macintosh, or UNIX/X-windows. The existing tools are beyond the reach of most undergraduate computer science majors, or professional programmers who wish to quickly build GUIs without investing the time to become specialists in GUI design. For this class of users, we developed SUIT, the Simple User Interface Toolkit. SUIT is an attempt to distill the fundamental components of an interface builder and GUI toolkit, and to explain those concepts with the tool itself, all in a short period of time. We have measured that college juniors with no previous GUI programming experience can use SUIT productively after less than three hours. SUIT is a C subroutine library which provides an external control UIMS, an interactive layout editor, and a set of standard "widgets," such as sliders, buttons, and check boxes. SUIT-based applications run transparently across the Macintosh, DOS, and UNIX/X platforms. SUIT has been exported to hundreds of external sites on the Internet. This paper describes SUIT's architecture, the design decisions we made during its development, and the lessons we learned from extensive observations of over 120 users.

Journal ArticleDOI
TL;DR: Over a million computers implement the Internet's Domain Name System, or DNS, making it the world's most distributed database and the Internet's most significant source of wide-area RPC-like traffic.
Abstract: Over a million computers implement the Internet's Domain Name System, or DNS, making it the world's most distributed database and the Internet's most significant source of wide-area RPC-like traffic...


Book
01 Jan 1992
TL;DR: Nesheim's underground Silicon Valley bestseller incorporates twenty-three case studies of successful start-ups, including tables of wealth showing how much money founders and investors realized from each venture.
Abstract: This revised and updated edition of Nesheim's underground Silicon Valley bestseller incorporates twenty-three case studies of successful start-ups, including tables of wealth showing how much money founders and investors realized from each venture. The phenomenal success of the initial public offerings (IPOs) of many new internet companies obscures the fact that fewer than six out of 1 million business plans submitted to venture capital firms will ever reach the IPO stage. Many fail, according to start-up expert John Nesheim, because the entrepreneurs did not have access to the invaluable lessons that come from studying the real-world venture experiences of successful companies. Now they do. Acclaimed by entrepreneurs the world over, this practical handbook is filled with hard-to-find information and guidance covering every key phase of a start-up, from idea to IPO: how to create a winning business plan, how to value the firm, how venture capitalists work, how they make their money, where to find alternative sources of funding, how to select a good lawyer, and how to protect intellectual property. Nesheim aims to improve the odds of success for first-time high-tech entrepreneurs, and offers an insider's perspective from firsthand experience on one of the toughest challenges they face -- convincing venture capitalists or investment banks to provide financing. This complete, classic reference tool is essential reading for first-time high-tech entrepreneurs, and entrepreneurs already involved in a start-up who want to increase their chances of success to rise to the top.