
Showing papers on "The Internet" published in 1988


Book
01 Jan 1988
TL;DR: The fifth edition of this best-selling text continues to provide a comprehensive source of material on the principles and practice of distributed computer systems and the exciting new developments based on them, using a wealth of modern case studies to illustrate their design and development.
Abstract: Broad and up-to-date coverage of the principles and practice in the fast-moving area of Distributed Systems. Distributed Systems provides students of computer science and engineering with the skills they will need to design and maintain software for distributed applications. It will also be invaluable to software engineers and systems designers wishing to understand new and future developments in the field. From mobile phones to the Internet, our lives depend increasingly on distributed systems linking computers and other devices together in a seamless and transparent way. The fifth edition of this best-selling text continues to provide a comprehensive source of material on the principles and practice of distributed computer systems and the exciting new developments based on them, using a wealth of modern case studies to illustrate their design and development. The depth of coverage will enable readers to evaluate existing distributed systems and design new ones.

2,406 citations


Book
01 Jan 1988
TL;DR: This book describes the security pitfalls inherent in many important computing tasks today and points out where existing controls are inadequate and serious consideration must be given to the risk present in the computing situation.
Abstract: From the Book: PREFACE: When the first edition of this book was published in 1989, viruses and other forms of malicious code were fairly uncommon, the Internet was used largely by just computing professionals, a Clipper was a sailing ship, and computer crime was seldom a headline topic in daily newspapers. In that era most people were unconcerned about--even unaware of--how serious the threat to security in the use of computers is. The use of computers has spread at a rate completely unexpected back then. Now you can bank by computer, order and pay for merchandise, and even commit to contracts by computer. And the uses of computers in business have similarly increased both in volume and in richness. Alas, the security threats to computing have also increased significantly. Why Read This Book? Are your data and programs at risk? If you answer "yes" to any of the following questions, you have a potential security risk. Have you acquired any new programs within the last year? Do you use your computer to communicate electronically with other computers? Do you ever receive programs or data from other people? Is there any significant program or data item of which you do not have a second copy? Relax; you are not alone. Most computer users have a security risk. Being at risk does not mean you should stop using computers. It does mean you should learn more about the risk you face, and how to control that risk. Users and managers of large mainframe computing systems of the 1960s and 1970s developed computer security techniques that were reasonably effective against the threats of that era. However, two factors have made those security procedures outdated: Personal computer use. Vast numbers of people have become dedicated users of personal computing systems, both for business and pleasure. We try to make applications "user friendly" so that computers can be used by people who know nothing of hardware or programming, just as people who can drive a car do not need to know how to design an engine. Users may not be especially conscious of the security threats involved in computer use; even users who are aware may not know what to do to reduce their risk. Networked remote-access systems. Machines are being linked in large numbers. The Internet and its cousin, the World-Wide Web, seem to double every year in number of users. A user of a mainframe computer may not realize that access to the same machine is allowed to people throughout the world from an almost uncountable number of computing systems. Every computing professional must understand the threats and the countermeasures currently available in computing. This book addresses that need. This book is designed for the student or professional in computing. Beginning at a level appropriate for an experienced computer user, this book describes the security pitfalls inherent in many important computing tasks today. Then, the book explores the controls that can check these weaknesses. The book also points out where existing controls are inadequate and serious consideration must be given to the risk present in the computing situation. Uses of This Book The chapters of this book progress in an orderly manner. After an introduction, the topic of encryption, the process of disguising something written to conceal its meaning, is presented as the first tool in computer security. The book continues through the different kinds of computing applications, their weaknesses, and their controls.
The application areas include general programs, operating systems, data base management systems, remote access computing, and multicomputer networks. These sections begin with a definition of the topic, continue with a description of the relationship of security to the topic, and conclude with a statement of the current state of the art of computer security research related to the topic. The book concludes with an examination of risk analysis and planning for computer security, and a study of the relationship of law and ethics to computer security. Background required to appreciate the book is an understanding of programming and computer systems. Someone who is a senior or graduate student in computer science or a professional who has been in the field for a few years would have the appropriate level of understanding. Although some facility with mathematics is useful, all necessary mathematical background is developed in the book. Similarly, the necessary material on design of software systems, operating systems, data bases, or networks is given in the relevant chapters. One need not have a detailed knowledge of these areas before reading this book. The book is designed to be a textbook for a one- or two-semester course in computer security. The book functions equally well as a reference for a computer professional. The introduction and the chapters on encryption are fundamental to the understanding of the rest of the book. After studying those pieces, however, the reader can study any of the later chapters in any order. Furthermore, many chapters follow the format of introduction, then security aspects of the topic, then current work in the area. Someone who is interested more in background than in current work can stop in the middle of one chapter and go on to the next. This book has been used in classes throughout the world. Roughly half of the book can be covered in a semester. Therefore, an instructor can design a one-semester course that considers some of the topics of greater interest. What Does This Book Contain? This is the revised edition of Security in Computing. It is based largely on the previous version, with many updates to cover newer topics in computer security. Among the salient additions to the new edition are these items: Viruses, worms, Trojan horses, and other malicious code. Complete new section (first half of Chapter 5) including sources of these kinds of code, how they are written, how they can be detected and/or prevented, and several actual examples. Firewalls. Complete new section (end of Chapter 9) describing what they do, how they work, how they are constructed, and what degree of protection they provide. Private e-mail. Complete new section (middle of Chapter 9) explaining exposures in e-mail, kinds of protection available, PEM and PGP, key management, and certificates. Clipper, Capstone, Tessera, Mosaic, and key escrow. Several sections: in Chapter 3 as an encryption technology, in Chapter 4 as a key management protocol, and in Chapter 11 as a privacy and ethics issue. Trusted system evaluation. Extensive addition (in Chapter 7) including criteria from the United States, Europe, Canada, and the soon-to-be-released Common Criteria. Program development processes, including ISO 9000 and the SEI CMM. A major section in Chapter 5 gives comparisons between these methodologies. Guidance for administering PC, Unix, and networked environments.
In addition to these major changes, there are numerous small changes, ranging from wording changes to subtle notational changes for pedagogic reasons, to replacement, deletion, rearrangement, and expansion of sections. The focus of the book remains the same, however. This is still a book covering the complete subject of computer security. The target audience is college students (advanced undergraduates or graduate students) and professionals. A reader is expected to bring a background in general computing technology; some knowledge of programming, operating systems, and networking is expected, although advanced knowledge in those areas is not necessary. Mathematics is used as appropriate, although a student can ignore most of the mathematical foundation if he or she chooses. Acknowledgments Many people have contributed to the content and structure of this book. The following friends and colleagues have supplied thoughts, advice, challenges, criticism, and suggestions that have influenced my writing of this book: Lance Hoffman, Marv Schaefer, Dave Balenson, Terry Benzel, Curt Barker, Debbie Cooper, and Staffan Persson. Two people from outside the computer security community were very encouraging: Gene Davenport and Bruce Barnes. I apologize if I have forgotten to mention someone else; the oversight is accidental. Lance Hoffman deserves special mention. He used a preliminary copy of the book in a course at George Washington University. Not only did he provide me with suggestions of his own, but his students also supplied invaluable comments from the student perspective on sections that did and did not communicate effectively. I want to thank them for their constructive criticisms. Finally, if someone alleges to have written a book alone, distrust the person immediately. While an author is working 16-hour days on the writing of the book, someone else needs to see to all the other aspects of life, from simple things like food, clothing, and shelter, to complex things like social and family responsibilities. My wife, Shari Lawrence Pfleeger, took the time from her professional schedule so that I could devote my full energies to writing. Furthermore, she soothed me when the schedule inexplicably slipped, when the computer went down, when I had writer's block, or when some other crisis beset this project. On top of that, she reviewed the entire manuscript, giving the most thorough and constructive review this book has had. Her suggestions have improved the content, organization, readability, and overall quality of this book immeasurably. Therefore, it is with great pleasure that I dedicate this book to Shari, the other half of the team that caused this book to be written. Charles P. Pfleeger, Washington, DC

1,332 citations


Book
01 Jan 1988
TL;DR: An internationally best-selling, conceptual introduction to the TCP/IP protocols and Internetworking, this book interweaves a clear discussion of fundamentals and scientific principles with details and examples drawn from the latest technologies.
Abstract: An internationally best-selling, conceptual introduction to the TCP/IP protocols and internetworking, this book interweaves a clear discussion of fundamentals and scientific principles with details and examples drawn from the latest technologies. Leading author Douglas Comer covers layering and packet formats for all the Internet protocols, including TCP, IPv4, IPv6, DHCP, and DNS. In addition, the text explains new trends in Internet systems, including packet classification, Software Defined Networking (SDN), and mesh protocols used in the Internet of Things. The text is appropriate for individuals interested in learning more about TCP/IP protocols, Internet architecture, and current networking technologies, as well as engineers who build network systems. It is suitable for junior to graduate-level courses in Computer Networks, Data Networks, Network Protocols, and Internetworking.

1,320 citations
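
As a small taste of the packet-format material the abstract mentions, here is a sketch of the IPv4 fixed header as a C struct. The field names are illustrative, not drawn from the book, and real code must account for compiler padding and byte order rather than relying on a bare struct:

    #include <stdint.h>

    /* IPv4 fixed header, 20 bytes without options (RFC 791).
     * Multi-byte fields are big-endian on the wire. */
    struct ipv4_header {
        uint8_t  version_ihl;     /* version (4 bits) + header length in 32-bit words */
        uint8_t  tos;             /* type of service */
        uint16_t total_length;    /* header plus payload, in bytes */
        uint16_t identification;  /* id shared by fragments of one datagram */
        uint16_t flags_frag_off;  /* flags (3 bits) + fragment offset (13 bits) */
        uint8_t  ttl;             /* decremented at each hop */
        uint8_t  protocol;        /* payload protocol, e.g. 6 = TCP, 17 = UDP */
        uint16_t checksum;        /* Internet checksum over the header only */
        uint32_t src_addr;        /* source IPv4 address */
        uint32_t dst_addr;        /* destination IPv4 address */
        /* options follow when the header length exceeds 5 words */
    };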


Journal ArticleDOI
01 Aug 1988
TL;DR: This paper attempts to capture some of the early reasoning which shaped the Internet protocols.
Abstract: The Internet protocol suite, TCP/IP, was first proposed fifteen years ago. It was developed by the Defense Advanced Research Projects Agency (DARPA), and has been used widely in military and commercial systems. While there have been papers and specifications that describe how the protocols work, it is sometimes difficult to deduce from these why the protocol is as it is. For example, the Internet protocol is based on a connectionless or datagram mode of service. The motivation for this has been greatly misunderstood. This paper attempts to capture some of the early reasoning which shaped the Internet protocols.

1,042 citations


Book
01 Jan 1988
TL;DR: This book includes expanded coverage of data flow diagrams, data dictionaries, and process specifications, as it introduces examples of new software used by analysts and designers to manage projects, analyze and document systems, design new systems, and implement their plans.
Abstract: From the Publisher: OFTEN IMITATED, NEVER DUPLICATED... There is simply no other Systems Analysis and Design textbook as exciting as Kendall & Kendall'. Their dynamic, comprehensive presentation leaves no stone unturned. With plenty of review questions and problems, hypothetical consulting situations, an ongoing case study, and even an associated Internet-based case study, this book makes the concepts of the course understandable and motivating. You are given all the tools to learn how to become a successful systems analyst, ace your exams, and have fun doing it' YOU WANT TO LEARN HOW TO BE A SYSTEMS ANALYST AND SYSTEMS DESIGNER... This book includes expanded coverage of data flow diagrams, data dictionaries, and process specifications, as it introduces examples of new software used by analysts and designers to manage projects, analyze and document systems, design new systems, and implement their plans. YOU NEED CURRENT TOPICS IN SYSTEMS ANALYSIS AND DESIGN... The fifth edition of Systems Analysis and Design has new coverage of UML, wireless technologies, ERP, Web-based systems for e-commerce, and expanded coverage on RAD and GUI design. YOU WANT YOUR SYSTEMS ANALYSIS AND DESIGN TEXT TO GIVE YOU HANDS-ON EXPERIENCE... HyperCase—original, hypertext-based software created by the authors—is located on the text's Companion Web site: www.prenhall.com/kendall. This innovative software gives users firsthand experience with a business and organizational structure, allowing you to interview employees, observe office dynamics and practices, analyze prototypes, and review existing systems. The case, a businesssimulation called "Maple Ridge Engineering," is revisited throughout the text, with end-of-chapter exercises. All activities are based on real-life consulting experiences. YOU MUST HAVE INFORMATION ON HOW TO GET EXPERIENCE AS A SYSTEMS ANALYST... The authors include new coverage on analyzing e-commerce—including thorough new material of e-commerce Web site design—and more than 70 consulting opportunities, including new ones with an e-commerce focus. Students can evaluate the political, ethical, social, and economic implications of their systems design in these mini-cases. KENDALL & KENDALL, SYSTEMS ANALYSIS AND DESIGN, FIFTH EDITION giving you the tools to learn, practice, and perfect your skills in Systems Analysis and Design like no other text on the market ever has or ever will!

607 citations


Journal ArticleDOI
01 Aug 1988
TL;DR: The ideas behind the initial design of the DNS in 1983 are examined, the evolution of these ideas into the current implementations and usages is discussed, conspicuous surprises, successes, and shortcomings are noted, and its future evolution is predicted.
Abstract: The Domain Name System (DNS) provides name service for the DARPA Internet. It is one of the largest name services in operation today, serves a highly diverse community of hosts, users, and networks, and uses a unique combination of hierarchies, caching, and datagram access. This paper examines the ideas behind the initial design of the DNS in 1983, discusses the evolution of these ideas into the current implementations and usages, notes conspicuous surprises, successes and shortcomings, and attempts to predict its future evolution.

425 citations
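
One design point the abstract highlights is datagram access: a query and its response each fit in a single UDP packet, so no connection setup is needed. As a hedged illustration (dns_build_query is a hypothetical helper, not code from the paper), this sketch assembles the 12-byte header and one question of a DNS query for an A record; the result could be handed to sendto() on a UDP socket aimed at port 53:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Assemble a DNS query for an A record in buf and return its length.
     * Hypothetical helper for illustration; see RFC 1035 for the format. */
    size_t dns_build_query(uint8_t *buf, const char *name, uint16_t id)
    {
        uint8_t *p = buf;

        *p++ = id >> 8; *p++ = id & 0xff;   /* query id */
        *p++ = 0x01;    *p++ = 0x00;        /* flags: recursion desired */
        *p++ = 0x00;    *p++ = 0x01;        /* QDCOUNT = 1 question */
        memset(p, 0, 6); p += 6;            /* ANCOUNT, NSCOUNT, ARCOUNT = 0 */

        while (*name) {                     /* QNAME: length-prefixed labels */
            const char *dot = strchr(name, '.');
            size_t len = dot ? (size_t)(dot - name) : strlen(name);
            *p++ = (uint8_t)len;
            memcpy(p, name, len);
            p += len;
            name += len + (dot ? 1 : 0);
        }
        *p++ = 0;                           /* root label ends the name */

        *p++ = 0x00; *p++ = 0x01;           /* QTYPE = A */
        *p++ = 0x00; *p++ = 0x01;           /* QCLASS = IN */
        return (size_t)(p - buf);
    }

For a short name, the whole query is a few dozen bytes, comfortably one datagram, which is what lets resolvers avoid connection setup entirely.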


01 Aug 1988
TL;DR: This RFC is a re-release of RFC 1065, with a changed "Status of this Memo", plus a few minor typographical corrections.
Abstract: This RFC provides the common definitions for the structure and identification of management information for TCP/IP-based internets. In particular, together with its companion memos, which describe the initial management information base along with the initial network management protocol, these documents provide a simple, working architecture and system for managing TCP/IP-based internets and, in particular, the Internet. This memo specifies a draft standard for the Internet community. TCP/IP implementations in the Internet which are network manageable are expected to adopt and implement this specification.

413 citations
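
The "structure and identification" in question is the ASN.1 object-identifier tree; the Internet management subtree this SMI names is 1.3.6.1.2.1. As a loosely related sketch, not text from the RFC, here is the standard BER packing of an OID's arcs (oid_encode is a hypothetical helper):

    #include <stddef.h>
    #include <stdint.h>

    /* BER-encode an object identifier: the first two arcs share one
     * byte (40*arc0 + arc1); later arcs are base-128, high bit set on
     * all but the last byte.  E.g. 1.3.6.1.2.1 -> 2B 06 01 02 01. */
    size_t oid_encode(const uint32_t *arcs, size_t n, uint8_t *out)
    {
        size_t k = 0;
        out[k++] = (uint8_t)(arcs[0] * 40 + arcs[1]); /* sketch: assumes it fits one byte */
        for (size_t i = 2; i < n; i++) {
            uint32_t v = arcs[i];
            uint8_t tmp[5];
            int j = 0;
            do { tmp[j++] = v & 0x7f; v >>= 7; } while (v);
            while (j-- > 1)
                out[k++] = tmp[j] | 0x80;             /* continuation bytes */
            out[k++] = tmp[0];                        /* final byte, high bit clear */
        }
        return k;                                     /* bytes written */
    }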


01 Nov 1988
TL;DR: The Post Office Protocol - Version 3 (POP3) is intended to permit a workstation to dynamically access a maildrop on a server host in a useful fashion.
Abstract: The Post Office Protocol - Version 3 (POP3) is intended to permit a workstation to dynamically access a maildrop on a server host in a useful fashion. [STANDARDS-TRACK]

239 citations
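
To make the maildrop idea concrete, here is a rough C sketch of a POP3 session over TCP port 110 using the commands the protocol defines (USER, PASS, STAT, RETR, QUIT). The host name and credentials are placeholders, and error handling and multi-line response parsing are elided; this is a sketch of the exchange, not a robust client:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Send one command and print whatever the server answers first. */
    static void cmd(int fd, const char *line)
    {
        char reply[512];
        ssize_t n;
        write(fd, line, strlen(line));            /* POP3 commands end in CRLF */
        n = read(fd, reply, sizeof reply - 1);
        if (n > 0) { reply[n] = '\0'; fputs(reply, stdout); }
    }

    int main(void)
    {
        struct addrinfo hints = { 0 }, *res;
        char banner[512];
        ssize_t n;
        int fd;

        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("mail.example.com", "110", &hints, &res))  /* placeholder host */
            return 1;
        fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
            return 1;

        n = read(fd, banner, sizeof banner - 1);  /* server speaks first: +OK greeting */
        if (n > 0) { banner[n] = '\0'; fputs(banner, stdout); }

        cmd(fd, "USER alice\r\n");    /* placeholder mailbox name */
        cmd(fd, "PASS secret\r\n");   /* placeholder password */
        cmd(fd, "STAT\r\n");          /* message count and total size */
        cmd(fd, "RETR 1\r\n");        /* first message (multi-line, ends with ".") */
        cmd(fd, "QUIT\r\n");          /* commit deletions and close */
        close(fd);
        return 0;
    }

Every server reply begins with a status indicator, +OK or -ERR, which is what makes the protocol simple enough for a workstation to poll its maildrop on demand.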


Book
04 Jul 1988
TL;DR: Cost Studies of Buildings is a practical and easy-to-use guide to the cost management role in building construction, focusing on the importance of the costs of constructing projects during the different phases of the construction process.
Abstract: This practical guide to cost studies of buildings has been updated and revised throughout for the 6th edition. New developments in the RICS New Rules of Measurement (NRM) are incorporated throughout the book, in addition to new material on e-business, the internet, social media, building information modelling, sustainability, building resilience and carbon estimating. This trusted and easy-to-use guide to the cost management role focuses on the importance of the costs of constructing projects during the different phases of the construction process, features learning outcomes and self-assessment questions for each chapter, and addresses the requirements of international readers. From introductory data on the construction industry and the history of construction economics, to recommended methods for cost analysis and post-contract cost control, Cost Studies of Buildings is an ideal companion for anyone learning about cost management.

238 citations


Journal ArticleDOI
TL;DR: The TCP/IP protocol suite was first proposed fifteen years ago; it was developed by the Defense Advanced Research Projects Agency (DARPA) and has been used widely in military and commercial applications.
Abstract: The Internet protocol suite, TCP/IP, was first proposed fifteen years ago. It was developed by the Defense Advanced Research Projects Agency (DARPA), and has been used widely in military and commercial systems.

178 citations


01 Jan 1988
TL;DR: This memo summarizes techniques and algorithms for efficiently computing the Internet checksum; it is not a standard, but a set of useful implementation techniques.
Abstract: This memo discusses methods for efficiently computing the Internet checksum that is used by the standard Internet protocols IP, UDP, and TCP. An efficient checksum implementation is critical to good performance. As advances in implementation techniques streamline the rest of the protocol processing, the checksum computation becomes one of the limiting factors on TCP performance, for example. It is usually appropriate to carefully hand-craft the checksum routine, exploiting every machine-dependent trick possible; a fraction of a microsecond per TCP data byte can add up to significant CPU time savings overall.
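
A minimal sketch, not taken from the memo, of the baseline one's-complement checksum that the memo's techniques optimize (inet_checksum is an illustrative name). It sums 16-bit words, folds the carries back in, and complements the result:

    #include <stddef.h>
    #include <stdint.h>

    /* Internet checksum (RFC 1071 style): one's-complement sum of
     * 16-bit words with end-around carry, then complemented. */
    uint16_t inet_checksum(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint32_t sum = 0;

        while (len > 1) {                     /* sum 16-bit words */
            sum += ((uint32_t)p[0] << 8) | p[1];
            p += 2;
            len -= 2;
        }
        if (len == 1)                         /* odd trailing byte, zero-padded */
            sum += (uint32_t)p[0] << 8;

        while (sum >> 16)                     /* fold carries back in (end-around) */
            sum = (sum & 0xffff) + (sum >> 16);

        return (uint16_t)~sum;
    }

The memo's point is that this inner loop can dominate TCP processing, so real implementations unroll it, accumulate 32 or more bits at a time, and exploit machine-specific tricks rather than running this byte-pair loop.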

Journal ArticleDOI
TL;DR: The relationships between GIS and other activities having to do with geographic information are reviewed and the use of GIS in social and behavioral sciences is discussed as an increasingly essential component of the research infrastructure and as a tool for acquiring and communicating geographic knowledge.
Abstract: Geographic information systems (GISs) are defined as software systems. In this article, the relationships between GIS and other activities having to do with geographic information are reviewed. The use of GIS in social and behavioral sciences is discussed as an increasingly essential component of the research infrastructure and as a tool for acquiring and communicating geographic knowledge. Examples are used to discuss the importance of GIS across the social and behavioral sciences. Sources of data are reviewed, and GISs are discussed from the perspectives of client–server architectures, the Internet, Internet-based services, data archives, and digital libraries. GIS use is intimately related to the role of space in scientific explanation. The article ends with a discussion on the future of GIS.

Book
01 Dec 1988
TL;DR: Issues such as the level of interconnection, the role of gateways, naming and addressing, flow and congestion control, accounting and access control, and basic internet services are discussed in detail.
Abstract: This paper introduces the wide range of technical, legal, and political issues associated with interconnection of packet-switched data communication networks. Motivations for interconnection are given, desired user services are described, and a range of technical choices for achieving interconnection are compared. Issues such as the level of interconnection, the role of gateways, naming and addressing, flow and congestion control, accounting and access control, and basic internet services are discussed in detail. The CCITT X.25/X.75 packet-network interface recommendations are evaluated in terms of their applicability to network interconnection. Alternatives such as datagram operation and general host gateways are compared with the virtual circuit methods. Some observations on the regulatory aspects of interconnection are offered, and the paper concludes with a statement of open research problems and some tentative conclusions.

Journal ArticleDOI
TL;DR: This paper presents a structural overview of Profile's three major components: a confederation of attribute-based name servers, a name space abstraction that unifies the name servers and a user interface that integrates the name space with existing naming systems.
Abstract: Profile is a descriptive naming service used to identify users and organizations. This paper presents a structural overview of Profile's three major components: a confederation of attribute-based name servers, a name space abstraction that unifies the name servers, and a user interface that integrates the name space with existing naming systems. Each name server is an independent authority that allows clients to describe users and organizations with a multiplicity of attributes; the name space abstraction is a client program that implements a discipline for searching a sequence of name servers; and the interface provides a tool with which users build customized commands. Experience with an implementation in the DARPA/NSF Internet demonstrates that Profile is a feasible and effective mechanism for naming users and organizations in a large internet.

01 Feb 1988
TL;DR: The full function of VMTP, including support for security, real-time, asynchronous message exchanges, streaming, multicast and idempotency, provides a rich selection to the VMTP user level.
Abstract: This memo specifies the Versatile Message Transaction Protocol (VMTP) [Version 0.7 of 19-Feb-88], a transport protocol specifically designed to support the transaction model of communication, as exemplified by remote procedure call (RPC). The full function of VMTP, including support for security, real-time, asynchronous message exchanges, streaming, multicast and idempotency, provides a rich selection to the VMTP user level. Subsettability allows the VMTP module for particular clients and servers to be specialized and simplified to the services actually required. Examples of such simple clients and servers include PROM network bootload programs, network boot servers, data sensors and simple controllers, to mention but a few examples. This RFC describes a protocol proposed as a standard for the Internet community.

Journal ArticleDOI
TL;DR: The system described is intended to link institutions (such as universities or industrial research organizations) that have most of their computers connected to local area networks (LANs) to collaborate in reviewing and editing documents containing text, graphs and image objects.
Abstract: The system described is intended to link institutions (such as universities or industrial research organizations) that have most of their computers connected to local area networks (LANs). The main objective of the research is to bring users of remote systems together to collaborate in reviewing and editing documents containing text, graphs and image objects. The term workspace is used to denote a collection of such objects belonging to some application, and the software tools needed to access these objects. The major components of the system and the user interface are presented. A prototype implemented in C under 4.3BSD UNIX is discussed. The system is then related to and contrasted with other work on group collaboration. The suitability and shortcomings of UNIX as an operating system for building group collaboration software tools are assessed.

01 Apr 1988
TL;DR: This RFC is intended to convey to the Internet community and other interested parties the recommendations of the Internet Activities Board for the development of network management protocols for use in the TCP/IP environment.
Abstract: This RFC is intended to convey to the Internet community and other interested parties the recommendations of the Internet Activities Board (IAB) for the development of network management protocols for use in the TCP/IP environment. This memo does NOT, in and of itself, define or propose an Official Internet Protocol. It does reflect, however, the policy of the IAB with respect to further network management development in the short and long term.

Journal ArticleDOI
TL;DR: The automated network management (ANM) system provides an integrated set of tools for real-time monitoring, control, and analysis of internets consisting of diverse network entities such as internet gateways, packet-switching nodes, packet radio systems and hosts.
Abstract: A description is given of the automated network management (ANM) system, which assists the network operator and analyst in understanding and controlling complex internets. The ANM system provides an integrated set of tools for real-time monitoring, control, and analysis of internets consisting of diverse network entities such as internet gateways, packet-switching nodes, packet radio systems and hosts. It can reduce maintenance costs by providing capabilities such as fault isolation and alarm generation, so that the network operators can efficiently and effectively monitor and control networks. ANM also provides advanced data gathering, analysis, and presentation tools that enable the network analyst to understand better the behaviour of the network, and to enhance network performance.

Proceedings ArticleDOI
01 Aug 1988
TL;DR: The Fuzzball is an operating system and applications library designed for the PDP11 family of computers; this paper describes the Fuzzball and its applications, including its novel congestion avoidance/control and timekeeping mechanisms.
Abstract: The Fuzzball is an operating system and applications library designed for the PDP11 family of computers. It was intended as a development platform and research pipewrench for the DARPA/NSF Internet, but has occasionally escaped to earn revenue in commercial service. It was designed, implemented and evolved over a seventeen-year era spanning the development of the ARPANET and TCP/IP protocol suites and can today be found at Internet outposts from Hawaii to Italy standing watch for adventurous applications and enduring experiments. This paper describes the Fuzzball and its applications, including a description of its novel congestion avoidance/control and timekeeping mechanisms.

Journal ArticleDOI
TL;DR: Some of the dynamic effects that occur in large local area networks (LANs) are described and should apply, in a general way, to large networks using other protocols as well.
Abstract: Some of the dynamic effects that occur in large local area networks (LANs) are described. Effects are considered at three different levels: single-network segments, connections between adjacent segments, and problems that appear only in large networks. To a large extent, the discussion is based on recent experience within the internet community. However, the observations should apply, in a general way, to large networks using other protocols as well.

01 Mar 1988
TL;DR: This memo suggests proposed additions to the Internet Mail Protocol, RFC-822, for the Internet community, and requests discussion and suggestions for improvements.
Abstract: This memo suggests proposed additions to the Internet Mail Protocol, RFC-822, for the Internet community, and requests discussion and suggestions for improvements.

01 Nov 1988
TL;DR: This memo presents the results of a working group on High Bandwidth Networking, which concluded that high-bandwidth networks should be considered for high-speed mobile connections.
Abstract: This memo presents the results of a working group on High Bandwidth Networking. This RFC is for your information and you are encouraged to comment on the issues presented.

Book
01 Jan 1988
TL;DR: 1. The Practice of Writing, Using the Library and the Internet, and Papers Based on Original Research: Foundations for Revising.
Abstract: 1. The Practice of Writing. 2. Using the Library and the Internet. 3. Summaries and Reviews of Social Science Literature. 4. Papers Based on Original Research. 5. Library Research Papers. 6. Oral Presentations and Written Examinations. 7. Form. 8. Revising.

Journal ArticleDOI
TL;DR: HEMS, the high-level entity management system, is an internetwork management protocol designed to work with the TCP/IP protocol suite and provides database query language primitives that allow remote users to modify the database.
Abstract: HEMS, the high-level entity management system, is an internetwork management protocol designed to work with the TCP/IP protocol suite. HEMS expects each node on a network to support a virtual management database and provides database query language primitives that allow remote users to modify the database. A detailed overview of the system is presented.

Book
01 Jan 1988
TL;DR: Censorship and Selection is a comprehensive guide to protecting the freedom to read in schools, covering the legal issues in dispute and the legal precedents that have been set in recent cases relating to popular fiction (e.g., Stephen King, R. L. Stine, J. K. Rowling).
Abstract: Censorship! The word itself sparks debate, especially when the context is the public school. Since the publication of the second edition of this landmark book in 1993, wired classrooms, legal challenges, and societal shifts have changed the landscape for the free exchange of ideas. Completely revised and updated, this new edition remains the most comprehensive guide for protecting the freedom to read in schools: For school librarians and media specialists, teachers, and administrators, Reichman covers the different media (including books, school newspapers, and the Internet), the important court cases (including recent litigations involving Harry Potter, the Internet, and Huck Finn), the issues in dispute (including violence, religion, and profanity), and how the laws on the books can be incorporated into selection policies. An entire chapter is devoted to troubleshooting and answering the question of "What do we do if...?" Look no further for the best and most specific information on providing access and facing challenges to intellectual freedom. You'll find answers if you are asking questions like these: * What is the distinction between making selection decisions and censoring? * What are the legal constraints on schools offering electronic information sources and the Internet? * What rights and responsibilities does a school administration have when faced with censorship challenges? * What are the legal precedents that have been set in recent cases relating to popular fiction (e.g., Stephen King, R. L. Stine, J. K. Rowling)? Written by a long-time expert on the protections of the First Amendment and U.S. Constitution, the new Censorship and Selection will provide you with all of the need-to-knows for crafting a selection policy in the digital age.

Journal ArticleDOI
TL;DR: A description is given of the SGMP, its architecture, some of its uses and applications, the author's implementation experience, and some future directions for its development.
Abstract: The simple gateway monitoring protocol (SGMP) is a simple application-layer protocol that allows logically remote users to inspect or alter management information for a gateway. In the internet context, a gateway performs the functions of the classical packet switch. A description is given of the SGMP, its architecture, some of its uses and applications, the author's implementation experience, and some future directions for its development. The design philosophy leading to the SGMP is discussed.

Proceedings ArticleDOI
12 Dec 1988
TL;DR: It is found that a formal method can be defined and incorporated that yields a development process with noticeable positive effects, within the constraints of a traditional development process.
Abstract: Specific issues associated with the development of secure systems are described. The authors focus on what an application of a mathematically-based development method means, within the constraints of a traditional development process. They then describe their experiences in the development of a secure internet system, the Multinet Gateway System. The description outlines the solutions developed in response to some of those issues. Among the results obtained is that a formal method can be defined and incorporated that yields a development process with noticeable positive effects.

01 Jul 1988
TL;DR: A pair of IP options that can be used to learn the minimum MTU of a path through an internet is described, along with its possible uses.
Abstract: A pair of IP options that can be used to learn the minimum MTU of a path through an internet is described, along with its possible uses. This is a proposal for an Experimental protocol.

01 Jul 1988
TL;DR: This RFC suggests a method for personal computers and workstations to dynamically access mail from a mailbox server ("repository") and obsoletes RFC 1064, an Experimental Protocol for the Internet community.
Abstract: This RFC suggests a method for personal computers and workstations to dynamically access mail from a mailbox server ("repository"). It obsoletes RFC 1064. This RFC specifies an Experimental Protocol for the Internet community. Discussion and suggestions for improvement are requested. Please refer to the current edition of the "IAB Official Protocol Standards" for the standardization state and status of this protocol.

Journal ArticleDOI
TL;DR: DUNIX is an operating system that integrates several computers, connected by a packet switching network, into a single UNIX machine, which exhibits surprisingly high performance.
Abstract: DUNIX is an operating system that integrates several computers, connected by a packet switching network, into a single UNIX machine. As far as the users and their software can tell, the system is a single large computer running UNIX. This illusion is created by cooperation of the computers' kernels. The kernels' mode of operation is novel. The software is procedure call oriented. The code that implements a specific system call (e.g., open) does not know whether the object in question (the file) is local or remote. That uniformity makes the kernel small and easy to maintain. The system behaves gracefully under subcomponents' failures. Users which do not have objects (files, processes, tty, etc) in a given computer are not disturbed when that computer crashes. The system administrator may switch a disk from a "dead" computer to a healthy one, and remount the disk under the original path-name. After the switch, users may access files in that disk via the same old names. DUNIX exhibits surprisingly high performance. For a compilation benchmark, DUNIX is faster than 4.2 BSD, even if in the DUNIX case all the files in question are remote. Currently, in Bell Communications Research we have an installation running DUNIX over five DEC VAX computers connected by an Ethernet. This installation speaks TCP/IP and is on the Internet network.