Journal ArticleDOI

A Distributed Secure System

John Rushby, Brian Randell
01 Jul 1983-IEEE Computer (IEEE)-Vol. 16, Iss: 7, pp 55-67
TL;DR: The fundamental requirement is that no individual should see information classified above his clearance.
About: This article is published in IEEE Computer. The article was published on 1983-07-01 and is currently open access. It has received 132 citations till now. The article focuses on the topics: Information security & Cryptography.

Summary (3 min read)

Acronym Definitions

  • The trusted portion of a secure system is generally identified with a small operating system nucleus known as a security kernel; the rest of the operating system and all applications and user programs belong to the untrusted component.
  • This performance degradation can be minor when specialized applications are concerned, since the kernel can be tuned to the application, but general-purpose kernelized operating systems are three to ten times slower than their insecure counterparts.
  • Finally, and as the authors have argued elsewhere [3], security kernels for general-purpose operating systems tend to be complex, and their interactions with nonkernel trusted processes are also complex.
  • The authors' approach is to finesse the problems that have caused difficulty in the past by constructing a distributed secure system instead of a secure operating system.
  • The untrusted host machines each provide services to a single security partition and continue to run at full speed.

Principles and mechanisms for secure and distributed systems

  • The structure of all secure systems constructed or designed recently has been influenced by the concept of a reference monitor.
  • The real challenge is to find ways of structuring the system so that the separation provided by physical distribution is fully exploited to simplify the mechanisms of security enforcement without destroying the coherence of the overall system.
  • Because it is costly to provide physically separate systems for each security partition and reference monitor, the authors use physical separation only for the untrusted computing resources of their system and for the security processors that house its trusted components.
  • Unix United conforms to a design principle for distributed systems that the authors call the "recursive structuring principle".
  • Just as the operating system of an ordinary host machine can return an exception when asked to operate on a nonexistent file, so a specialized server that provides no file storage can always return exceptions when asked to perform file operations.

A securely partitioned distributed system

  • The authors will describe a secure Unix United system composed of standard Unix systems (and possibly some specialized servers that can masquerade as Unix) interconnected by a local area network, or LAN.
  • The consequence of not trusting the individual systems is that the unit of protection must be those systems themselves; thus, the authors will dedicate each to a fixed security partition.
  • The initial and very restrictive purpose of TNIUs is to permit communication only between machines belonging to the same security partition.
  • A corrupt host can therefore signal to a wiretapping accomplice by modulating the length of the prefix that successive messages have in common.
  • The careful use of CBC-mode encryption prevents information from leaking through channels that modulate message contents, but significant channels for information leakage still remain.

All techniques for introducing noise inevitably reduce the bandwidth available for legitimate communications and may increase the latency of message delivery.

  • (Presumably the source is fixed at the location of the corrupt host.)
  • Long messages must be broken into a number of separate message units; short ones, and the residue of long ones, must be padded to fill a whole unit.
  • When this is done, a wiretapper cannot observe the exact length of a message but can only estimate the number of message units that it occupies.
  • Two methods are available for securely separating the communications channels belonging to different security partitions.
  • The integrity of all message units accepted is thereby guaranteed: they cannot be forged, modified, or formed by splicing parts of different units together during transmission over the LAN. Consequently, TNIUs can trust the value of the security partition identifier embedded in each message unit and can (and must) reject units bearing a different identifier (a brief illustrative sketch follows this list).
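The bullets above describe how the TNIUs handle traffic: messages are broken into fixed-size units, each unit carries the sender's security partition identifier, and integrity protection stops units being forged, altered, or spliced in transit. The following is only an illustrative sketch of that behaviour, not the paper's TNIU design (which rests on DES encryption in CBC mode); the unit size, shared key, and function names are assumptions, and an HMAC from Python's standard library stands in for the DES-based protection.

import hmac, hashlib

UNIT_SIZE = 64                        # payload bytes per message unit (assumed)
LAN_KEY = b"key shared by the TNIUs only"

def make_units(message: bytes, partition: str):
    # Split into padded units, label each with the security partition,
    # and seal it with a keyed checksum so it cannot be forged or spliced.
    units = []
    for i in range(0, len(message), UNIT_SIZE):
        chunk = message[i:i + UNIT_SIZE].ljust(UNIT_SIZE, b"\x00")
        body = partition.encode().ljust(16, b"\x00") + chunk
        tag = hmac.new(LAN_KEY, body, hashlib.sha256).digest()
        units.append(body + tag)
    return units

def accept_unit(unit: bytes, my_partition: str):
    # Receiving TNIU: verify the seal, then reject any unit whose embedded
    # partition identifier differs from this machine's partition.
    body, tag = unit[:-32], unit[-32:]
    if not hmac.compare_digest(tag, hmac.new(LAN_KEY, body, hashlib.sha256).digest()):
        return None                              # forged or spliced unit
    if body[:16].rstrip(b"\x00").decode() != my_partition:
        return None                              # wrong security partition
    return body[16:]                             # padded payload

units = make_units(b"routine traffic", "Secret")
assert accept_unit(units[0], "Secret") is not None
assert accept_unit(units[0], "Top Secret") is None   # rejected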

A multilevel secure file store

  • The design introduced so far imposes a very restrictive security policy.
  • If the Secret-level user John of SUnix wishes to make his "paper" file available to the Top Secret user Brian, he does so by simply copying it into a directory that is subordinate to the SFS directory.
  • This machine could then encode the information received into a file that could subsequently and legitimately be retrieved by a Secret-level host.
  • Any attempt by a file storage machine to modify a file will be detected on its subsequent retrieval by the SFM when the recomputed checksum fails to match the one stored with the file.
  • Once clandestine information has been prevented from leaving a file storage machine, there is no longer any need to provide separate file storage machines for each security partition; the integrity checks performed by the SFM constitute the required separation mechanism.
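The last two items above credit the SFM with detecting any change a file storage machine makes to a stored file, by recomputing a checksum when the file is retrieved. Below is a minimal sketch of that check under stated assumptions: the checksum key is known only to the SFM, a dictionary stands in for the untrusted file storage machine, and the HMAC construction and all names are illustrative rather than taken from the paper.

import hmac, hashlib

SFM_KEY = b"checksum key held only by the SFM"
file_store = {}          # stands in for an untrusted file storage machine

def sfm_store(name: str, data: bytes):
    tag = hmac.new(SFM_KEY, name.encode() + data, hashlib.sha256).digest()
    file_store[name] = (data, tag)      # checksum travels with the file

def sfm_retrieve(name: str) -> bytes:
    data, tag = file_store[name]
    fresh = hmac.new(SFM_KEY, name.encode() + data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, fresh):
        raise ValueError("file was modified by the storage machine")
    return data

sfm_store("paper", b"draft text")
assert sfm_retrieve("paper") == b"draft text"
file_store["paper"] = (b"tampered", file_store["paper"][1])
# sfm_retrieve("paper") now raises ValueError: the modification is detected.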

The accessing and allocation of security partitions

  • A Secret-level user can send mail to a Top Secret user via the secure file system, but the recipient can only reply by leaving his Top Secret machine and logging in to one at the Secret level, or lower.
  • The Newcastle Connection software in the TTIU will then be able to contact its counterpart in a host machine belonging to the appropriate security partition, and the user will thereafter interact with that remote machine exactly as if he were connected to it directly.
  • With the exception of the file system, the local storage available to a host is all used for strictly temporary purposes and can simply be erased and reinitialized when the host changes security partitions.
  • In outline, the complete scenario for automatically changing the security partition in which a host operates is as follows.
  • The security mechanisms of the prototype will be provided by ordinary user processes in a standard Unix United system.


A distributed general-purpose computing system that
enforces a multilevel security policy can be created by
properly linking standard Unix systems and small trustworthy
security mechanisms.
A Distributed Secure System
John Rushby and Brian Randell
University of Newcastle upon Tyne
A secure system is one that can be trusted to keep secrets, and the important word here is
“trusted.” Individuals, governments, and institutions such as banks, hospitals, and other
commercial enterprises will only consign their secrets to a computer system if they can be
absolutely certain of confidentiality.
The problems of maintaining security are compounded because the sharing of secrets is
generally desired but only in a tightly controlled manner. In the simplest case, an individual
can choose other individuals or groups with whom he wishes to share his private information.
This type of controlled sharing is called discretionary security because it is permitted at the
discretion of the individual.
When the individuals concerned are members of an organization, however, that organization
may circumscribe their discretionary power to grant access to information by imposing a
mandatory security policy to safeguard the interests of the organization as a whole. The most
widely used scheme of this type is the multilevel security, or MLS, policy employed in
military and government environments[1]. Here, each individual is assigned a clearance
chosen from the four hierarchically ordered levels, Unclassified, Confidential, Secret, and Top
Secret, and each item of information is assigned a classification chosen from the same four
levels. The fundamental requirement is that no individual should see information classified
above his clearance.
The fewer the people who share a secret, the less the risk of its disclosure through accident
or betrayal to unauthorized persons. Consequently, the basic MLS policy is enhanced by the
use of compartments or categories designed to enforce “need-to-know” controls on the
sharing of sensitive information. Each individual’s clearance includes the set of compartments
of information to which he is permitted access, and the classification of information is
similarly extended to include the set of compartments to which it belongs. The combination of
a set of compartments and a clearance or classification is called a security partition. An
individual is permitted access to information only if his clearance level equals or exceeds the
classification of the information and if his set of compartments includes that of the
information. Thus an individual with a Secret-level clearance for the NATO and Atomic
compartments, abbreviated as a Secret(NATO, Atomic) clearance, may see information
classified as Secret(NATO) or Confidential(NATO, Atomic), but not that classified as Top
Secret(NATO) or Confidential(NATO, Crypto).
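The clearance/classification rule just stated can be written down directly. The sketch below is illustrative only: it orders the four levels, applies the level and compartment tests, and reproduces the Secret(NATO, Atomic) example from the preceding paragraph.

LEVELS = ["Unclassified", "Confidential", "Secret", "Top Secret"]

def may_see(clearance, classification):
    # Access is permitted only if the clearance level equals or exceeds the
    # classification level and the clearance's compartment set includes
    # every compartment of the classification.
    c_level, c_comps = clearance
    k_level, k_comps = classification
    return (LEVELS.index(c_level) >= LEVELS.index(k_level)
            and set(c_comps) >= set(k_comps))

clearance = ("Secret", {"NATO", "Atomic"})
assert may_see(clearance, ("Secret", {"NATO"}))
assert may_see(clearance, ("Confidential", {"NATO", "Atomic"}))
assert not may_see(clearance, ("Top Secret", {"NATO"}))
assert not may_see(clearance, ("Confidential", {"NATO", "Crypto"}))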
A multilevel secure system should enforce the policy outlined above; unfortunately,
conventional computer systems are quite incapable of doing so. In the first place, they
generally have no cognizance of the policy and therefore make no provision for enforcing it;
there is usually no way of marking the security classification to which a file, for example,
belongs. In the second place, experience shows that conventional systems are vulnerable to
outside penetration. Their protection mechanisms can always be broken by sufficiently skilled
and determined adversaries. Finally, and most worrisome of all, there is no assurance that the
system itself cannot be subverted by the insertion of “trap doors” into its own code or by the
infiltration of “Trojan horse” programs. In these cases, the enemy is located “inside the walls”
© 1983 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this
material for advertising or promotional purposes or for creating new collective works for resale or
redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be
obtained from the IEEE.

and the system’s protection mechanisms may be rendered worthless. This type of attack is
particularly insidious and hard to detect or counter because it can compromise security without
doing anything so flagrant as directly copying a Top Secret file into an Unclassified one. A
Trojan horse program with legitimate access to a Top Secret file can convey the information
therein to an Unclassified collaborator by “tapping it out” over clandestine communication
channels that depend on the modulation of some apparently innocuous but visible component
of the system state, such as the amount of disk space available.
Drastic measures have been adopted to overcome these deficiencies in the security
mechanisms of conventional systems. One approach is to dedicate the entire system to a
single security partition. Thus a system dedicated to Secret(NATO) operations would support
only information and users belonging to that single security partition. The principal
objection to this method of operation is that it fails to provide one of the main functions
required of a secure system: the controlled sharing of information between different security
partitions. Another drawback is the cost of providing separate systems for each security
partition. This problem can be mitigated to some extent by employing periods processing in
which a single system is dedicated to different security partitions at different times and is
cleared of all information belonging to one partition before it is reallocated to a different one.
Another crude method for coping with the security problems of ordinary systems is to
require all users to be cleared to the level of the most highly classified information that the
system contains. This is called “system high” operation. The rationale is that even if the
system has been subverted, it can reveal information only to those who can be trusted with it.
The disadvantage to this scheme is that it is very expensive (and counter to normal security
doctrines) to clear large numbers of people for highly classified information that they have no
real need to know. Furthermore, many excellent people may be unable or unwilling to obtain
the necessary clearances. This approach can also lead to the overclassification of information,
thereby reducing its availability unnecessarily.
Acronym Definitions
CBC: Cipher block chaining
DES: Data Encryption Standard
FARM: File access reference monitor
FIG: File integrity guarantor
IFS: Isolated file store
LAN: Local area network
MARI: Microelectronics Applications Research Institute
MLS: Multilevel security
RPC: Remote procedure call
RSRE: Royal Signals and Radar Establishment
SFM: Secure file manager
SFS: Secure file store
TNIU: Trustworthy network interface unit
TTIU: Trustworthy terminal interface unit
Several attempts have been made to construct truly secure systems for use in classified and
other sensitive environments. However, the builders of such systems face a new problem:
They must not only make their systems secure, but also convince those who will rely on them
that they are secure. A full general-purpose operating system is far too complex for anyone to
be able to guarantee this security. Accordingly, most efforts have focused on partitioning the
system into a small and simple trusted portion and a much larger and more complex untrusted
one. The system should be structured so that all security-relevant decisions and operations are
performed by the trusted portion in a way that makes the untrusted portion irrelevant to the
security of the overall system. It is then necessary to rigorously establish the properties
required of the trusted portion and prove that it does indeed possess them. Such proofs
constitute security verification; they use the techniques of formal program verification to
show that the system implementation (usually its formal specification) is consistent with a
mathematical model of the security properties required[1,2].
The trusted portion of a secure system is generally identified with a small operating system
nucleus known as a security kernel; the rest of the operating system and all applications and
user programs belong to the untrusted component. Certain difficulties attend the use of such
kernelized systems, however.
Because it provides an additional level of interpretation beneath the main operating system,
a security kernel necessarily imposes some performance degradation. This can be minor when
specialized applications are concerned, since the kernel can be tuned to the application, but
general-purpose kernelized operating systems are three to ten times slower than their insecure
counterparts. Also, the division of a conventional operating system into trusted and untrusted
components is a complex and expensive task that cannot easily accommodate changes and
enhancements to its base operating system. Consequently, kernelized systems often lag many
versions behind the conventional operating systems from which they are derived.
Finally, and as we have argued elsewhere[3], security kernels for general-purpose operating
systems tend to be complex, and their interactions with nonkernel trusted processes are also
complex. The result is that the verification of their security properties is neither as complete
nor as convincing as might be desired. None of these problems are arguments against security
kernels per se, which have proved very successful for certain limited and specialized
applications such as cryptographic processors and message systems[4]; but they do indicate
that security kernels are unlikely to prove satisfactory as the primary security mechanism for
general-purpose systems[5].
Our approach is to finesse the problems that have caused difficulty in the past by
constructing a distributed secure system instead of a secure operating system. Our system
combines a number of different security mechanisms to provide a general-purpose distributed
computing system that is not only demonstrably secure but also highly efficient,
cost-effective, and convenient to use. The approach involves interconnecting small,
specialized, provably trustworthy systems and a number of larger, untrusted host machines.
The latter each provide services to a single security partition and continue to run at full speed.
The trusted components mediate access to and communications between the untrusted hosts;
they also provide specialized services such as a multilevel secure file store and a means for
changing the security partition to which a given host belongs.
The most significant benefits of our approach to secure computing are that it requires no
modifications to the untrusted host machines and it allows them to provide their full
functionality and performance. Another benefit is that it enables the mechanisms of security
enforcement to be isolated, single purpose, and simple. We therefore believe that this
approach makes it possible to construct secure systems whose verification is more compelling
and whose performance, cost, and functionality are more attractive than in previous
approaches.
Principles and mechanisms for secure and distributed systems
The structure of all secure systems constructed or designed recently has been influenced by
the concept of a reference monitor. A reference monitor is a small, isolated, trustworthy
mechanism that controls the behavior of untrusted system components by mediating their
references to such external entities as data and other untrusted components. Each proposed
access is checked against a record of the accesses that the security policy authorizes for that
component.
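As a concrete picture of this mediation step, the sketch below checks every proposed access by an untrusted component against a record of the accesses the policy authorizes for it. The record's contents and all names are hypothetical; this is an illustration of the concept, not a mechanism from the paper.

AUTHORIZED = {
    "untrusted_editor": {("design.doc", "read"), ("design.doc", "write")},
    "print_spooler":    {("design.doc", "read")},
}

def mediate(component: str, obj: str, operation: str) -> bool:
    # The reference monitor permits an access only if the policy record
    # authorizes that component to perform that operation on that object.
    return (obj, operation) in AUTHORIZED.get(component, set())

assert mediate("print_spooler", "design.doc", "read")
assert not mediate("print_spooler", "design.doc", "write")   # denied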
It is implicit in the idea of a reference monitor, and utterly fundamental to its appreciation
and application, that information, programs in execution, users, and all other entities
belonging to different security classifications be kept totally separate from one another. All
channels for the flow of information between or among users and data of different security
classifications must be mediated by reference monitors. For their own protection, reference
monitors must also be kept separate from untrusted system components.
Our approach to the design of secure systems is based on these key notions of separation
and mediation. These are distinct logical concerns, and for ease of development and
verification, the mechanisms that realize them are best kept distinct also. We consider it a
weakness that many previous secure system designs confused these two issues and used a
single mechanism, a security kernel, to provide both. Once we recognize that separation is
distinct from mediation, we can consider a number of different mechanisms for providing it and
use each wherever it is most appropriate. In fact, our system uses four different separation
mechanisms: physical, temporal, logical, and cryptographical.
Physical separation is achieved by allocating physically different resources to each security
partition and function. Unfortunately, the structure of conventional centralized systems is
antithetical to this approach; centralized systems constitute a single resource that must be
shared by a number of users and functions. For secure operation, a security kernel is needed to
synthesize separate virtual resources from the shared resources actually available. This is not
only inimical to the efficiency of the system, but it requires complex mechanisms whose own
correctness is difficult to guarantee.
In contrast with traditional centralized systems, modern distributed systems are well suited
to the provision of physical separation. They necessarily comprise a number of physically
separated components, each with the potential for dedication to a single security level or a
single function. To achieve security, then, we must provide trustworthy reference monitors to
control communications between the distributed components and to perform other security-
critical operations. The real challenge is to find ways of structuring the system so that the
separation provided by physical distribution is fully exploited to simplify the mechanisms of
security enforcement without destroying the coherence of the overall system.
Because it is costly to provide physically separate systems for each security partition and
reference monitor, we use physical separation only for the untrusted computing resources
(hosts) of our system and for the security processors that house its trusted components.
Temporal separation allows the untrusted host machines to be used for activities in different
security partitions by separating those activities in time. The system state is reinitialized
between activities belonging to different security partitions.
The security processors can each support a number of different separation and reference
monitor functions, and also some untrusted support functions, by using a separation kernel to
provide logical separation between those functions. Experience indicates that separation
kernels (simple security kernels whose only function is to provide separation) can be
relatively small, uncomplicated, and fast, and verification seems simpler and potentially more
complete for them than it does for general-purpose security kernels[3].
Our fourth technique, cryptographic separation, uses encryption and related (checksum)
techniques to separate different uses of shared communications and storage media.
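Cipher block chaining (CBC), mentioned in the summary above as the mode the TNIUs apply to LAN traffic, is one such technique: the previous ciphertext block is folded into the encryption of the next, so every ciphertext block depends implicitly on all earlier plaintext and repeated plaintext patterns are masked. The toy sketch below shows only that chaining structure; XOR with a fixed key stands in for a real block cipher such as DES, and the block size, key, and names are assumptions, not a usable cipher.

BLOCK = 8

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    # Pad to a whole number of blocks, then chain: each "encrypted" block
    # is fed into the next, so identical plaintext blocks encrypt differently.
    plaintext = plaintext.ljust(-(-len(plaintext) // BLOCK) * BLOCK, b"\x00")
    out, prev = b"", iv
    for i in range(0, len(plaintext), BLOCK):
        block = xor(xor(plaintext[i:i + BLOCK], prev), key)   # toy "cipher"
        out += block
        prev = block
    return out

key, iv = b"8bytekey", b"initvect"
c = toy_cbc_encrypt(b"AAAAAAAAAAAAAAAA", key, iv)
assert c[:8] != c[8:16]   # two identical plaintext blocks, different ciphertext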
The four separation techniques provide the basis for our distributed secure system. This is a
heterogeneous system comprising both untrusted general-purpose systems and trusted
specialized components, and to be useful it must operate as a coherent whole. To this end, our
mechanisms for providing security are built on a distributed system called Unix United,
developed in the Computing Laboratory at the University of Newcastle upon Tyne[6]. A Unix
United system is composed of a (possibly large) set of interlinked standard Unix systems, or
systems that can masquerade as Unix at the kernel interface level, each with its own storage
and peripheral devices, accredited set of users, and system administrator. The naming structures
(for files, devices, commands, and directories) of each component Unix system are joined into
a single naming structure in which each Unix system is, to all intents and purposes, just a
directory. The result is that, subject to proper accreditation and appropriate access control,
each user on each Unix system can read or write any file, use any device, execute any command,
or inspect any directory regardless of which system it belongs to. The directory naming
structure of a Unix United system is set up to reflect the desired logical relationships between
its various machines and is quite independent of the routing of their physical
interconnections.
The simplest possible case of such a structure, incorporating just two Unix systems, named
unix1 and unix2, is shown in Figure 1. From unix1, and with the root (“/”) and current working
directory (“.”) as shown, one could copy the file “a” into the corresponding directory on the
other machine with the Unix shell command
cp a /../unix2/user/brian/a
(For those unfamiliar with Unix, the initial “/” symbol indicates that a path name starts at the
root directory rather than at the current working directory, and the “..” symbol is used to
indicate a parent directory.)
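As an illustration of this naming convention (and not of the Newcastle Connection's actual implementation), the sketch below splits a path of the kind used in the cp command above into the system it names and the path to be interpreted there. The local machine name and the function name are assumptions, and only the simple single-level "/../" case is handled.

LOCAL_SYSTEM = "unix1"      # assumed name of the local machine

def resolve(path: str):
    # A path that climbs above the local root via "/.." names another
    # component system; everything after the system name is interpreted there.
    if path.startswith("/../"):
        parts = path.split("/")          # ['', '..', 'unix2', 'user', ...]
        return parts[2], "/" + "/".join(parts[3:])
    return LOCAL_SYSTEM, path            # an ordinary local path

assert resolve("/../unix2/user/brian/a") == ("unix2", "/user/brian/a")
assert resolve("/user/john/a") == ("unix1", "/user/john/a")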

Figure 1: The naming structure of a simple Unix United system.
Figure 2: The Newcastle Connection.
This command is in fact a perfectly conventional use of the standard Unix shell command
interpreter and would have exactly the same effect if the naming structure shown had been set
up on a single machine and unix1 and unix2 had been conventional directories.
All the standard Unix facilities, whether invoked by shell commands or by system calls
within user programs, apply unchanged to Unix United, causing intermachine communication
as necessary. A user can therefore specify a directory on a remote machine as his current
working directory, request execution of a program held in a file on a remote machine, redirect
input and/or output, use files and peripheral devices on a remote machine, and set up pipelines
that cause parallel execution of communicating processes on different machines. Since these
are completely standard Unix facilities, a user need not be concerned that several machines are
involved.
Unix United conforms to a design principle for distributed systems that we call the
“recursive structuring principle”. This requires that each component of a distributed system be
functionally equivalent to the entire system. Applying this principle results in a system that
automatically provides network transparency and can be extended (or contracted) without
requiring any change to its user interface or to its external or internal program interfaces. The
principle may seem to preclude systems containing specialized components such as servers,
but this is not so. Any system interface must contain provisions for exception conditions to
be returned when a requested operation cannot be carried out. Just as the operating system of an
ordinary host machine can return an exception when asked to operate on a nonexistent file, so
a specialized server that provides no file storage can always return exceptions when asked to
perform file operations.
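A minimal sketch of this exception behaviour, with entirely hypothetical names, is shown below: the specialized server presents the full interface but answers every file operation with an error, while still providing the service it does implement.

import errno

class DisklessServer:
    # A specialized component with no file storage of its own.
    def open(self, path, mode="r"):
        raise OSError(errno.ENOENT, "no file storage on this server", path)

    def compute(self, job):
        return f"result of {job}"        # the service it does provide

server = DisklessServer()
print(server.compute("fft"))             # normal service
try:
    server.open("/user/brian/a")
except OSError as exc:
    print("file operation rejected:", exc)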
Unix United has been implemented without changing the standard Unix software in any way;
neither the Unix kernel nor any of its utility programs, not even the shell command
interpreter, have been reprogrammed. This has been accomplished by incorporating an
additional layer of software called the Newcastle Connection in each of the component Unix
systems. This layer of software sits on top of the resident Unix kernel; from above it is
functionally indistinguishable from the kernel, while from below it looks like a normal user
process. Its role is to filter out system calls that have to be redirected to another Unix system
and to accept system calls that have been directed to it from other systems. Communication

Citations
Book
01 May 1988

274 citations

Proceedings ArticleDOI
20 May 1991
TL;DR: The authors describe how some functions of distributed systems can be designed to tolerate intrusions, and a prototype of the persistent file server presented has been successfully developed and implemented as part of the Delta-4 project of the European ESPRIT program.
Abstract: An intrusion-tolerant distributed system is a system which is designed so that any intrusion into a part of the system will not endanger confidentiality, integrity and availability. This approach is suitable for distributed systems, because distribution enables isolation of elements so that an intrusion gives physical access to only a part of the system. In particular, the intrusion-tolerant authentication and authorization servers enable a consistent security policy to be implemented on a set of heterogeneous, untrusted sites, administered by untrusted (but nonconspiring) people. The authors describe how some functions of distributed systems can be designed to tolerate intrusions. A prototype of the persistent file server presented has been successfully developed and implemented as part of the Delta-4 project of the European ESPRIT program.

259 citations

Patent
11 Jul 2007
TL;DR: In this paper, a multi-level network security system for a computer host device coupled to at least one computer network is described, which includes a secure network interface unit (SNIU) contained within a communications stack of the computer device that operates at a user layer communications protocol.
Abstract: A multi-level network security system is disclosed for a computer host device coupled to at least one computer network. The system including a secure network interface Unit (SNIU) contained within a communications stack of the computer device that operates at a user layer communications protocol. The SNIU communicates with other like SNIU devices on the network by establishing an association, thereby creating a global security perimeter for end-to-end communications and wherein the network may be individually secure or non-secure without compromising security of communications within the global security perimeter. The SNIU includes a host/network interface for receiving messages sent between the computer device and network. The interface operative to convert the received messages to and from a format utilized by the network. A message parser for determining whether the association already exists with another SNIU device. A session manager coupled to said network interface for identifying and verifying the computer device requesting access to said network. The session manager also for transmitting messages received from the computer device when the message parser determines the association already exists. An association manager coupled to the host/network interface for establishing an association with other like SNIU devices when the message parser determines the association does not exist.

180 citations

Journal ArticleDOI
TL;DR: This paper provides an overview of the Multiple Independent Levels of Security and Safety (MILS) approach to high-assurance system design for security and safety critical embedded systems.
Abstract: High-assurance systems require a level of rigor, in both design and analysis, not typical of conventional systems. This paper provides an overview of the Multiple Independent Levels of Security and Safety (MILS) approach to high-assurance system design for security and safety critical embedded systems. MILS enables the development of a system using manageable units, each of which can be analysed separately, avoiding costly analysis required of more conventional designs. MILS is particularly well suited to embedded systems that must provide guaranteed safety or security properties.

177 citations

Journal ArticleDOI
TL;DR: The paper presents a connectionist realization of semantic networks, that is, it describes how knowledge about concepts, their properties, and the hierarchical relationship between them may be encoded as an interpreter-free massively parallel network of simple processing elements that can solve an interesting class of inheritance and recognition problems extremely fast—in time proportional to the depth of the conceptual hierarchy.

138 citations

References
Journal ArticleDOI
TL;DR: Use of encryption to achieve authenticated communication in computer networks is discussed and example protocols are presented for the establishment of authenticated connections, for the management of authenticated mail, and for signature verification and document integrity guarantee.
Abstract: Use of encryption to achieve authenticated communication in computer networks is discussed. Example protocols are presented for the establishment of authenticated connections, for the management of authenticated mail, and for signature verification and document integrity guarantee. Both conventional and public-key encryption algorithms are considered as the basis for protocols.

2,671 citations

Book
01 Jan 1982
TL;DR: The goal of this book is to introduce the mathematical principles of data security and to show how these principles apply to operating systems, database systems, and computer networks.
Abstract: From the Preface (See Front Matter for full Preface) Electronic computers have evolved from exiguous experimental enterprises in the 1940s to prolific practical data processing systems in the 1980s. As we have come to rely on these systems to process and store data, we have also come to wonder about their ability to protect valuable data. Data security is the science and study of methods of protecting data in computer and communication systems from unauthorized disclosure and modification. The goal of this book is to introduce the mathematical principles of data security and to show how these principles apply to operating systems, database systems, and computer networks. The book is for students and professionals seeking an introduction to these principles. There are many references for those who would like to study specific topics further. Data security has evolved rapidly since 1975. We have seen exciting developments in cryptography: public-key encryption, digital signatures, the Data Encryption Standard (DES), key safeguarding schemes, and key distribution protocols. We have developed techniques for verifying that programs do not leak confidential data, or transmit classified data to users with lower security clearances. We have found new controls for protecting data in statistical databases--and new methods of attacking these databases. We have come to a better understanding of the theoretical and practical limitations to security.

1,937 citations


"A Distributed Secure System" refers background or methods in this paper

  • ...... and that because the encrypted value of each block within a message unit is a complex function of all previous blocks, messages formed by splicing parts of different messages together will decrypt unintelligibly. In fact, this is not so. Although the encrypted value of each block produced by CBC-mode encryption depends implicitly on all prior plaintext blocks, it depends explicitly on only the immediately preceding ciphertext block[8]....

    [...]

  • ...Readers who wish to learn more about issues and techniques relating to computer security should consult the excellent book by D. E. Denning[8]....

    [...]

  • ...Trustworthy network interface units use the Data Encryption Standard, or DES[8], to protect information sent over the LAN....

    [...]

  • ...Clandestine communications channels based on plaintext patterns that persist into the ciphertext can be thwarted by employing a more elaborate mode of encryption called cipher block chaining, or CBC, which uses a feedback technique to mask such patterns by causing the encrypted value of each block to be a complex function of all previous blocks[8]....

    [...]

  • ...Although the basic principles of encryption management are well established[8], a tutorial outline of the issues and techniques as they affect our system may benefit readers to whom this material is new....

    [...]

Journal ArticleDOI
TL;DR: The need for formal security models is described, the structure and operation of military security controls are described, how automation has affected security problems is considered, and possible models that have been proposed and applied to date are surveyed.
Abstract: Efforts to build "secure" computer systems have now been underway for more than a decade. Many designs have been proposed, some prototypes have been constructed, and a few systems are approaching the production stage. A small number of systems are even operating in what the Department of Defense calls the "multilevel" mode: some information contained in these computer systems may have a classification higher than the clearance of some of the users of those systems. This paper reviews the need for formal security models, describes the structure and operation of military security controls, considers how automation has affected security problems, surveys models that have been proposed and applied to date, and suggests possible directions for future models.

470 citations

Journal ArticleDOI
01 Dec 1981
TL;DR: A new verification technique called 'proof of separability' which explicitly addresses the security relevant aspects of interrupt handling and other issues ignored by present methods is suggested.
Abstract: This paper reviews some of the difficulties that arise in the verification of kernelized secure systems and suggests new techniques for their resolution. It is proposed that secure systems should be conceived as distributed systems in which security is achieved partly through the physical separation of its individual components and partly through the mediation of trusted functions performed within some of those components. The purpose of a security kernel is simply to allow such a 'distributed' system to actually run within a single processor; policy enforcement is not the concern of a security kernel. This approach decouples verification of components which perform trusted functions from verification of the security kernel. This latter task may be accomplished by a new verification technique called 'proof of separability' which explicitly addresses the security relevant aspects of interrupt handling and other issues ignored by present methods.

459 citations

Journal ArticleDOI
TL;DR: In this paper, the authors describe a software subsystem that can be added to each of a set of physically interconnected UNIX or UNIX look-alike systems, so as to construct a distributed system which is functionally indistinguishable at both the user and the program level from a conventional single-processor UNIX system.
Abstract: In this paper we describe a software subsystem that can be added to each of a set of physically interconnected UNIX or UNIX look-alike systems, so as to construct a distributed system which is functionally indistinguishable at both the user and the program level from a conventional single-processor UNIX system. The techniques used are applicable to a variety and multiplicity of both local and wide area networks, and enable all issues of inter-processor communication, network protocols, etc., to be hidden. A brief account is given of experience with such a distributed system, which is currently operational on a set of PDP11s connected by a Cambridge Ring. The final sections compare our scheme to various precursor schemes and discuss its potential relevance to other operating systems.

195 citations