Showing papers in "IEEE Computer in 1983"
01 Aug 1983-IEEE Computer
TL;DR: As I talked with enthusiasts and examined the systems they used, I began to develop a model of the features that produced such delight, and the central ideas seemed to be visibility of the object of interest; rapid, reversible, incremental actions; and replacement of complex command language syntax by direct manipulation of the objects of interest.
Abstract: These feelings are not, of course, universal, but the amalgam does convey an image of the truly pleased user. As I talked with these enthusiasts and examined the systems they used, I began to develop a model of the features that produced such delight. The central ideas seemed to be visibility of the object of interest; rapid, reversible, incremental actions; and replacement of complex command language syntax by direct manipulation of the object of interest, hence the term "direct manipulation." Examples of direct manipulation systems …
TL;DR: For the last five years, the University of Michigan and the Environmental Research Institute of Michigan have conducted a unique series of studies that involve the processing of biomedical imagery on a highly parallel computer specifically designed for image processing, finding that quantification by automated image analysis not only increases diagnostic accuracy but also provides significant data not obtainable from qualitative analysis alone.
Abstract: … medical fields has led to the automated processing of pictorial data. Here, a device called the cytocomputer searches for genetic mutations. A computer revolution has occurred not only in technical fields but also in medicine, where vast amounts of information must be processed quickly and accurately. Nowhere is the need for image processing techniques more apparent than in clinical diagnosis or mass screening applications where data take the form of digital images. New high-resolution scanning techniques such as computed tomography, nuclear magnetic resonance, positron emission tomography, and digital radiography produce images containing immense amounts of relevant information for medical analysis. But as these scanning techniques become more vital to clinical diagnosis, the work for specialists who must visually examine the resultant images increases. In many cases, quantitative data in the form of measurements and counts are needed to supplement nonimage patient data, and the manual extraction of these data is a time-consuming and costly step in an otherwise automated process. Furthermore, subtle variants of shade and shape can be the earliest clues to a diagnosis, placing the additional burden of complete thoroughness on the examining specialist. For the last five years, the University of Michigan and the Environmental Research Institute of Michigan have conducted a unique series of studies that involve the processing of biomedical imagery on a highly parallel computer specifically designed for image processing. System designers have incorporated the requirements of extracting a verifiable answer from an image in a reasonable time into an integrated approach to hardware and software design. The system includes a parallel pipelined image processor, called a cytocomputer, and a high-level language specifically created for image processing, C-3PL, the cytocomputer parallel picture processing language.
These studies have involved a great many people from both the medical and engineering communities and have highlighted the interdisciplinary aspects of biomedical image processing. The methods have been tested in anatomy, developmental biology, nuclear medicine, cardiology, and transplant rejection. The general consensus is that quantification by automated image analysis not only increases diagnostic accuracy but also provides significant data not obtainable from qualitative analysis alone. One study in particular, on which descriptions in this article are based, involves a joint effort by the University of Michigan's human genetics and electrical and computer engineering departments and is supported by a grant from the National Cancer Institute. Basically, automated image analysis is being applied via sophisticated biochemical and computer techniques to derive …
TL;DR: In this article* the more common interpretations of IS-A are cataloged and some differences between systems that, on the surface, appear very similar are pointed out.
Abstract: The idea of IS-A was quite simple. Today, however, there are almost as many meanings for this inheritance link as there are knowledge-representation systems. Many systems for representing knowledge can be considered semantic networks, largely because they feature the notion of an explicit taxonomic hierarchy, a tree or lattice-like structure for categorizing classes of things in the world being represented. The backbone of the hierarchy is provided by some sort of "inheritance" link between the representational objects, known as "nodes" in some systems and as "frames" in others. This link, often called "IS-A" (also known as "IS," "SUPERC," "AKO," "SUBSET," etc.), has been perhaps the most stable element of semantic nets as they have evolved over the years. Unfortunately, this stability may be illusory. There are almost as many meanings for the IS-A link as there are knowledge-representation systems. In this article* we catalog the more common interpretations of IS-A and point out some differences between systems that, on the surface, appear very similar. Background. The idea of IS-A is quite simple. Early in the history of semantic nets, researchers observed that much representation of the world was concerned with the conceptual relations expressed in English sentences such as "John is a bachelor" and "A dog is a domesticated carnivorous mammal." That is, two predominant forms of statements handled by AI knowledge-representation systems were the predication, expressing that an individual (e.g., John) was of a certain type (e.g., …
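The ambiguity the article catalogs can be seen even in a programming language's own IS-A machinery. The sketch below (my own illustration in Python, not from the article) separates the two readings it mentions: IS-A as subset ("a dog is a mammal") versus IS-A as predication or membership ("John is a bachelor"):

```python
# Hypothetical sketch: two distinct readings of the IS-A link.
class Mammal: ...
class Dog(Mammal): ...          # IS-A as subclass/subset: every Dog is a Mammal

fido = Dog()                    # IS-A as predication: fido is a Dog

assert issubclass(Dog, Mammal)  # the subset reading
assert isinstance(fido, Dog)    # the membership reading
assert isinstance(fido, Mammal) # inheritance: membership propagates up the taxonomy
```

Conflating these two relations in one untyped "IS-A" link is exactly the kind of difference between superficially similar systems the article points out.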
TL;DR: The authors have developed a design strategy for avoiding these types of problems and have implemented a representation system based on it, called Krypton, which clearly distinguishes between definitional and factual information.
Abstract: A great deal of effort has focused on developing frame-based languages for knowledge representation. While the basic ideas of frame systems are straightforward, complications arise in their design and use. The authors have developed a design strategy for avoiding these types of problems and have implemented a representation system based on it. The system, called Krypton, clearly distinguishes between definitional and factual information. In particular, Krypton has two representation languages, one for forming descriptive terms and one for making statements about the world using these terms. Further, Krypton provides a functional view of a knowledge base, characterized in terms of what it can be asked or told, rather than in terms of the particular structures it uses to represent knowledge. 11 references.
01 Nov 1983-IEEE Computer
TL;DR: This special issue is about the methodologies and tools that may help to solve the VLSI design crisis of the 1980s, and the collection of articles presented, depicting all three approaches to VLSI design tools (each representing a different methodology in VLSI design), is briefly summarized.
Abstract: This special issue is about the methodologies and tools that may help us solve the VLSI design crisis of the 1980s. Basically, there are three approaches. The first school of thought believes that all design decisions should be made solely by the human designer, who has gained experience through good design practices in the last 10 or 20 years. This approach, called computer-aided design, gives the designer an "efficient paper and pencil" by providing graphic editors, design verification and simulation tools, and efficient databases. This approach is evolutionary and tends to be bottom up, since building blocks are first designed and then later used to realize higher level structures. The resulting design is of high quality, since humans are very good at optimizing designs. On the other hand, the human designer is slow and error-prone. Furthermore, creative designers optimize design by creating new design rules, thereby creating a demand for new verification tools and design description languages for documentation and communication between designers. After a brief review of the three approaches, the collection of articles presented, depicting all three approaches to VLSI design tools (each representing a different methodology in VLSI design), is briefly summarized.
TL;DR: Since these systems use a combination of artificial intelligence (AI) problem-solving and knowledge-representation techniques, information on these areas is also included.
Abstract: Artificial intelligence is no longer just theory. A variety of thinking systems are out of the laboratory and successfully solving problems using AI knowledge-representation techniques. 50 references.
TL;DR: The approach to the representation of commonsense knowledge described in this article is based on the idea that propositions characterizing commonsense knowledge are, for the most part, dispositions -- that is, propositions with implied fuzzy quantifiers.
Abstract: The approach to the representation of commonsense knowledge described in this article is based on the idea that propositions characterizing commonsense knowledge are, for the most part, dispositions -- that is, propositions with implied fuzzy quantifiers. To deal with dispositions systematically, the author uses fuzzy logic, the logic underlying approximate or fuzzy reasoning.
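As a sketch of the idea, a fuzzy quantifier such as "most" can be modeled as a membership function mapping a proportion onto a degree of truth in [0, 1]. The piecewise-linear function below is my own illustrative choice, not the author's formulation:

```python
# Hypothetical membership function for the fuzzy quantifier "most":
# fully false at or below 0.5, fully true at or above 0.9, linear in between.
def most(proportion: float) -> float:
    if proportion <= 0.5:
        return 0.0
    if proportion >= 0.9:
        return 1.0
    return (proportion - 0.5) / 0.4

# Scoring the disposition "most birds can fly" against a hypothetical
# proportion of flying birds:
birds_that_fly = 0.82
truth = most(birds_that_fly)   # degree to which the disposition holds, ~0.8
```

A disposition like "birds can fly" is then read as "most birds can fly," with the implicit quantifier made explicit and evaluated by such a function.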
TL;DR: Some of the controls for the inference problem in on-line, general-purpose database systems allowing both statistical and nonstatistical access are surveyed; the controls are divided into two categories: those that place restrictions on the set of allowable queries and those that add "noise" to the data or to the released statistics.
Abstract: The goal of statistical databases is to provide frequencies, averages, and other statistics about groups of persons (or organizations), while protecting the privacy of the individuals represented in the database. This objective is difficult to achieve, since seemingly innocuous statistics contain small vestiges of the data used to compute them. By correlating enough statistics, sensitive data about an individual can be inferred. As a simple example, suppose there is only one female professor in an electrical engineering department. If statistics are released for the total salary of all professors in the department and the total salary of all male professors, the female professor's salary is easily obtained by subtraction. The problem of protecting against such indirect disclosures of sensitive data is called the inference problem. Over the last several decades, census agencies have developed many techniques for controlling inferences in population surveys. These techniques are applied before data are released so that the distributed data are free from disclosure problems. The data are typically released either in the form of microstatistics, which are files of "sanitized" records, or in the form of macrostatistics, which are tables of counts, sums, and higher order statistics. Starting with a study by Hoffman and Miller, computer scientists began to look at the inference problem in on-line, general-purpose database systems allowing both statistical and nonstatistical access. A hospital database, for example, can give doctors direct access to a patient's medical records, while hospital administrators are permitted access only to statistical summaries of the records. Up until the late 1970's, most studies of the inference problem in these systems led to negative results; every conceivable control seemed to be easy to circumvent, to severely restrict the free flow of information, or to be intractable to implement.
Recently, the results have become more positive, since we are now discovering controls that can potentially keep security and information loss at acceptable levels for a reasonable cost. This article surveys some of the controls that have been studied, comparing them with respect to their security, information loss, and cost. The controls are divided into two categories: those that place restrictions on the set of allowable queries and those that add "noise" to the data or to the released statistics. The controls are described and further classified within the context of a lattice model.
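The female-professor example above can be made concrete with a few lines of code. The names and salaries below are hypothetical; the point is only that two individually "safe" aggregate statistics determine one sensitive value by subtraction:

```python
# Hypothetical toy data: exactly one female professor in the department.
salaries = {"Adams": 52000, "Baker": 48000, "Chen": 55000}
sexes    = {"Adams": "M",   "Baker": "M",   "Chen": "F"}

# Released statistic 1: total salary of all professors.
total_all = sum(salaries.values())
# Released statistic 2: total salary of all male professors.
total_male = sum(s for name, s in salaries.items() if sexes[name] == "M")

# Neither statistic names an individual, yet the lone female professor's
# salary falls out by subtraction:
inferred_female_salary = total_all - total_male
print(inferred_female_salary)  # 55000
```

Query-restriction controls block one of the two queries (e.g., by minimum query-set size); noise-based controls perturb the released totals so the difference no longer equals the exact salary.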
TL;DR: The fundamental requirement is that no individual should see information classified above his clearance.
TL;DR: The Scomp trusted operating program, or STOP, is a security-kernel-based, general-purpose operating system that provides a multilevel hierarchical file system, inter-process communication, security administrator functions, and operator commands.
Abstract: The Honeywell Secure Communications Processor supports a variety of specialized applications that require the processing of information with multilevel security attributes. A commercial hardware product, the Scomp system is a unique implementation of a hardware/software general-purpose operating system based on the security kernel concept. Scomp hardware supports a Multics-like, hardware-enforced ring mechanism, virtual memory, virtual I/O processing, page-fault recovery support, and performance mechanisms to aid in the implementation of an efficient operating system. The Scomp trusted operating program, or STOP, is a security-kernel-based, general-purpose operating system that provides a multilevel hierarchical file system, inter-process communication, security administrator functions, and operator commands. The idea for the Scomp system originated in a joint Honeywell-Air Force program called Project Guardian, which was an attempt to further enhance the security of Honeywell's Multics system. A secure front-end processor was needed that would use the security kernel approach to control communications access to Multics. Multics was designed to provide program and data sharing while simultaneously protecting against both program and data misuse. The system emphasizes information availability, applications implementation, database facilities, decentralized administrative control, simplified system operation, productivity, and growth. The Multics system uses the combination of hardware and software mechanisms to provide a dynamic multiuser environment. The Multics security mechanisms, considered far more advanced than those available in most large commercial systems, use access control lists, a hardware-enforced ring structure supporting eight rings, and the Access Isolation Mechanism that allows the definition of privilege independent of other controls.
Access control provided by these mechanisms is interpreted by software but enforced by hardware on each reference to information. The hardware implementation includes a demand-paged virtual memory capability that is invisible to the user programs. Although Project Guardian was never completed, the use of Multics features to provide multilevel security was pursued in a revised Scomp effort, a joint project of Honeywell Information Systems and the Department of Defense (specifically, the Naval Electronics Systems Command, or Navelex). In this implementation, the Scomp is a trusted minicomputer operating system using software verification techniques.* Originally the plan was to use the traditional approach to building a trusted operating system: namely, to build a security kernel and an emulator of an existing operating system to run on top of the kernel. This approach was taken by UCLA and Mitre in their early development programs and by Ford for KSOS-11. One conclusion drawn from these efforts was …
TL;DR: The basic techniques of image processing using two-dimensional arrays of processors, or cellular arrays, are reviewed, and various extensions and generalizations of the cellular array concept, along with their possible implementations and applications, are discussed.
Abstract: Array computers are not new to image processing, but more refined techniques have led to broader implementations. We can now construct arrays with up to 128 x 128 processors. Nearly 25 years ago, Unger suggested a two-dimensional array of processing elements as a natural computer architecture for image processing and recognition. Ideally, in this approach, each processor is responsible for one pixel, or one element of the image, with neighboring processors responsible for neighboring pixels. Thus, using hardwired communication between neighboring processors, local operations can be performed on the image, or local image features can be detected in parallel, with every processor simultaneously accessing its neighbors and computing the appropriate function for its neighborhood. Over the last two decades, several machines embodying this concept have been constructed. The Illiac III used a 36 x 36 processor array (the Illiac IV used only an 8 x 8 array) to analyze "events" in nuclear bubble-chamber images by examining 36 x 36 "windows" of the images. In later machines, such as the CLIP, DAP, and MPP, arrays of up to 128 x 128 processors were used and were applied blockwise to larger images. This article reviews the basic techniques of image processing using two-dimensional arrays of processors, or cellular arrays. It also discusses various extensions and generalizations of the cellular array concept and their possible implementations and applications. The term cellular array is used because these machines can be regarded as generalizations of bounded cellular automata, which have been studied extensively on a theoretical level. The relative merits of cellular arrays for image processing as compared to other architectures* are not discussed here; but they have been studied extensively for such purposes, on levels from theory to hardware.
A cellular array (Figure 1) is a two-dimensional array of processors, or cells, usually rectangular, each of which can directly communicate with its neighbors in the array. Here, for simplicity, I assume that each cell is connected to its four horizontal and vertical neighbors. Each cell on the borders of the array then has only three neighbors, and each cell in the four corners of the array has only two. I also assume that a cell can distinguish its neighbors; i.e., it can send a different message to each neighbor, and when it receives messages from them, it knows which message came from which neighbor. To use a cellular array for image processing, we give …
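The neighborhood scheme just described can be sketched in a few lines. This is hypothetical illustrative code (not C-3PL or any machine's instruction set): every cell applies the same local function to its own pixel and its four horizontal/vertical neighbors, conceptually all at once:

```python
# One parallel step of a cellular array: each cell computes local_fn of
# its own value and its 4-neighborhood. Border cells simply have fewer
# neighbors (3 on edges, 2 in corners), as in the text.
def step(image, local_fn):
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            nbrs = [image[rr][cc]
                    for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= rr < rows and 0 <= cc < cols]
            out[r][c] = local_fn(image[r][c], nbrs)
    return out

img = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
# Neighborhood maximum (a one-step dilation): the lone foreground pixel
# spreads into its 4-neighborhood.
dilated = step(img, lambda center, nbrs: max([center] + nbrs))
print(dilated)  # [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
```

On a real cellular array the double loop disappears: all cells evaluate `local_fn` simultaneously, so one image-wide local operation takes one step.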
TL;DR: The quality of designs produced by automatic synthesis programs is not yet adequate for production use, but their use as a computer aid permitting designer interaction is becoming a reality and promises further …
Abstract: … specification. Digital system design actually consists of many synthesis steps, each adding detail. The quality of designs produced by automatic synthesis programs is not yet adequate for production use. However, their use as a computer aid permitting designer interaction is becoming a reality and promises further …
TL;DR: The security kernel approach described here directly addresses the size and complexity problem by limiting the protection mechanism to a small portion of the system; it is based on the concept of the reference monitor, an abstract notion adapted from the models of Butler Lampson.
Abstract: Providing highly reliable protection for computerized information has traditionally been a game of wits. No sooner are security controls introduced into systems than penetrators find ways to circumvent them. Security kernel technology provides a conceptual base on which to build secure computer systems, thereby replacing this game of wits with a methodical design process. The kernel approach is equally applicable to all types of systems, from general-purpose, multiuser operating systems to special-purpose systems such as communication processors, wherever the protection of shared information is a concern. Most computer installations rely solely on a physical security perimeter, protecting the computer and its users by guards, dogs, and fences. Communications between the computer and remote devices may be encrypted to geographically extend the security perimeter, but if only physical security is used, all users can potentially access all information in the computer system. Consequently, all users must be trusted to the same degree. When the system contains sensitive information that only certain users should access, we must introduce some additional protection mechanisms. One solution is to give each class of users a separate machine. This solution is becoming increasingly less costly because of declining hardware prices, but it does not address the controlled sharing of information among users. Sharing information within a single computer requires internal controls to isolate sensitive information. Continual efforts are being made to develop reliable internal security controls solely through tenacity and hard work. Unfortunately, these attempts have been uniformly unsuccessful for a number of reasons. The first is that the operating system and utility software are typically large and complex. The second is that no one has precisely defined the security provided by the internal controls.
Finally, little has been done to ensure the correctness of the security controls that have been implemented. The security kernel approach described here directly addresses the size and complexity problem by limiting the protection mechanism to a small portion of the system. The second and third problems are addressed by clearly defining a security policy and then following a rigorous methodology that includes developing a mathematical model, constructing a precise specification of behavior, and coding in a high-level language. The security kernel approach is based on the concept of the reference monitor, an abstract notion adapted from the models of Butler Lampson. The reference monitor provides an underlying security theory for conceptualizing the idea of protection. In a reference monitor, all …
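The reference monitor idea, and the clearance rule stated in the previous entry's TL;DR, can be sketched schematically. This is a hypothetical toy policy for illustration, not the kernel's actual interface: every access request funnels through one small, total check:

```python
# A toy multilevel policy: a subject may read only objects classified
# at or below its clearance ("no read up").
LEVELS = {"unclassified": 0, "secret": 1, "top-secret": 2}

def can_read(subject_clearance: str, object_classification: str) -> bool:
    # The entire protection mechanism is this one small, auditable check;
    # that smallness is the point of the kernel approach.
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

print(can_read("secret", "unclassified"))  # True
print(can_read("secret", "top-secret"))    # False
```

Because all accesses pass through this single mediation point, verifying the system's security policy reduces to verifying one small function rather than an entire operating system.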
TL;DR: Silicon compilation is a new technique for integrated circuit design that has recently undergone intense development, scrutiny, and criticism; at the moment, there is even disagreement over the meaning of the term "silicon compilation" itself.
Abstract: Silicon compilation is a new technique for integrated circuit design that has recently undergone intense development, scrutiny, and criticism. At the moment, there is even disagreement over the meaning of the term "silicon compilation" itself. In this article, "silicon compilation" will refer to the automatic synthesis of an integrated circuit layout from a description of its behavior. Traditional integrated-circuit design techniques, on the other hand, are dependent on the structure, rather than the behavior, of the integrated circuit. Of course, design is not the only step in the production of useful ICs. Fabrication and testing are also important. In fact, advances in fabrication capability have actually outstripped advances in design, which is why design is a topic of such interest at present. Some testing issues (design for testability and automatic generation of test patterns, for example) are potentially part of the design process. Our current research is beginning to show progress in these areas, and there is reason to believe that silicon compiler design methodology is more amenable to test automation than handcrafted methods. The disparity between fabrication and design is such that while it is economical to fabricate a moderately complex circuit that is to be replicated in production about 1,000,000 times, it is not economical to design such a circuit unless it will be replicated at least 10,000 times; a major IC vendor will not accept commissions for designs without a commitment to a 10,000-unit production volume. Reducing the design cost of an IC by such a factor would probably reduce the minimum economic production volume to about 1000 units, and a good silicon compiler can probably reduce design cost by much more than that (see Figure 1).
TL;DR: This article on digital signature schemes is a survey of work done in the area since the concept was introduced in 1976.
Abstract: As paper gives way to electronic mail, a secure means for validating and authenticating messages is required. The answer could be one of several digital signature schemes. In the last few years, research in cryptography has provided various methods for generating digital signatures, both true and arbitrated. Some of these methods utilize conventional private-key cryptosystems such as the Data Encryption Standard (DES), while others are based on the so-called public-key approach. This article on digital signature schemes is a survey of work done in the area since the concept was introduced in 1976. For readers unfamiliar with modern cryptology several overview articles and a number of texts on the subject are noted among the list of references of this article.
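The public-key approach the survey covers can be illustrated with a minimal RSA-style "true" signature. The parameters, message, and helper names below are hypothetical and far too small to be secure; real systems use moduli hundreds of digits long plus padding schemes:

```python
import hashlib

# Toy RSA parameters (illustrative only, no security).
p, q = 61, 53
n = p * q          # 3233, public modulus
e = 17             # public exponent
d = 2753           # private exponent: e*d = 1 mod (p-1)*(q-1)

def digest(msg: bytes) -> int:
    # Hash the message, then reduce into the signing range of this toy modulus.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    return pow(digest(msg), d, n)            # only the private-key holder can do this

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, e, n) == digest(msg)     # anyone can check with (e, n)

sig = sign(b"pay Alice $10")
print(verify(b"pay Alice $10", sig))   # True
```

Because signing uses the private key and verification uses only the public key, the recipient can validate and authenticate the message without being able to forge one, which is exactly the property paper signatures provide.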
TL;DR: The Palladio environment is part of a growing trend toward creating integrated design environments and away from isolated design aids, and several commercial computer-aided engineering workstations have emerged, providing multiple-level, circuit-specification entry systems and integrated analysis aids.
Abstract: Palladio is a circuit design environment for experimenting with methodologies and knowledge-based, expert-system design aids. Its framework is based on several premises about circuit design: (1) circuit design is a process of incremental refinement; (2) it is an exploratory process in which design specifications and design goals coevolve; and (3) most important, circuit designers need an integrated design environment that provides compatible design tools ranging from simulators to layout generators, that permits specification of digital systems in compatible languages ranging anywhere from architectural to layout, and includes the means for explicitly representing, constructing, and testing such design tools and languages. The Palladio environment is part of a growing trend toward creating integrated design environments and away from isolated design aids. Recently several commercial computer-aided engineering (CAE) workstations have emerged, providing multiple-level, circuit-specification entry systems and integrated analysis aids. Integrated circuit designers have a special need for such workstations because of the complexity of large integrated circuits and the high costs of prototyping them.
TL;DR: The author discusses a number of issues that serve as research goals for discovering the principles of knowledge representation, using techniques and concepts evolved while developing the knowledge-representation system KL-one as illustrations.
Abstract: The author discusses a number of issues that serve as research goals for discovering the principles of knowledge representation, using techniques and concepts evolved while developing the knowledge-representation system KL-one as illustrations. The focus is on what constitutes a good representational system and a good set of representational primitives for dealing with an open-ended range of knowledge domains. Issues of interest include those problems that arise in attempting to construct intelligent computer programs that use knowledge to perform some task. 7 references.
TL;DR: Current scene analysis methodology is examined under two criteria: descriptive adequacy, the ability of a representational formalism to capture the essential visual properties of objects and the relationships among objects in the visual world, and procedural adequacy, the capability of the representation to support efficient processes of recognition and search.
Abstract: The central issue in artificial intelligence, the representation and use of knowledge, unifies areas as diverse as natural-language understanding, speech recognition, story understanding, planning, problem solving, and vision. This article focuses on how computational vision systems represent knowledge of the visual world. It examines current methodology under two criteria: descriptive adequacy, the ability of a representational formalism to capture the essential visual properties of objects and the relationships among objects in the visual world, and procedural adequacy, the capability of the representation to support efficient processes of recognition and search. A major theme in computational vision has been the distinction between the methodology of image analysis (or early vision) and scene analysis (or high-level vision). Briefly, image analysis can be characterized as the science of extracting from images useful descriptions of lines, regions, edges, and surface characteristics up to the level of Marr's 2½-D sketch. It is generally assumed that image analysis is domain independent and passive, that is, data driven. Scene analysis attempts to recognize visual objects and their configurations. It is viewed as domain dependent and goal driven, motivated by the necessity of identifying particular objects expected to be present in a scene. Although some may disagree, these distinctions should be seen not as a strict dichotomy but as a spectrum. Early vision exploits constraints that are usually valid in the particular visual world for which it has evolved (or been designed). Although early vision is predominantly data driven, high-level visual processes must be able to establish parameters for and control the attention of lower level processes. As we argue later, efficient scene analysis systems must combine goal-driven and data-driven recognition processes.
If that dichotomy is actually a spectrum, then establishing the exact boundary is not a research issue. In this article, we outline current scene analysis methodology (early vision is ably described elsewhere) and identify a number of its deficiencies. In response to these problems, some recent systems use schema-based knowledge representations. Examples taken from one called Mapsee2 illustrate our arguments.
TL;DR: Methods are described that are designed to supplement a deductive question-answering algorithm that is now operational that draws on a base of logical propositions organized as a semantic net.
Abstract: The development of a simple question-answering system is considered. In particular, methods are described that are designed to supplement a deductive question-answering algorithm that is now operational. The algorithm draws on a base of logical propositions organized as a semantic net. The net permits selective access to the contents of individual mental worlds and narratives, to sets of entities of any specified type, and to propositions involving any specified entity and classified under any specified topic. The problems involved in determining type, part-of, color, and time relationships are discussed. It is shown that much combinatory reasoning in a question-answering system can be short-circuited by the use of special graphical and geometrical methods. 13 references.
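The type-relationship lookup described above can be sketched as a transitive chase over an IS-A table. The data and function below are my own toy illustration, not the system's actual semantic net:

```python
# Hypothetical toy semantic net: each entity points to its immediate type.
ISA = {"canary": "bird", "bird": "animal", "animal": "thing"}

def is_a(entity: str, category: str) -> bool:
    # Follow the type chain upward until we hit the category or run out.
    while entity in ISA:
        entity = ISA[entity]
        if entity == category:
            return True
    return False

print(is_a("canary", "animal"))  # True: canary -> bird -> animal
print(is_a("animal", "bird"))    # False: the chain only goes upward
```

Part-of, color, and time relationships can be handled by analogous chains over their own link tables, which is how such "combinatory reasoning" gets short-circuited into simple graph traversals.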
TL;DR: This work describes what design data management is and how it can integrate design tools, and presents the architecture of a prototype design management system in which designs are organized into a richly interconnected data structure, using an object data model.
Abstract: A design data management system bridges the gap between design tools and database systems, creating an integrated tool environment. Although design databases have long been of interest, they have been largely ignored within the context of the new tools. Our purpose is to describe what design data management is and how it can integrate design tools. Our discussion focuses on unique data management requirements of design systems and conventional database facilities and their shortcomings for supporting design data. We also present the architecture of a prototype design management system in which designs are organized into a richly interconnected data structure, using an object data model. The structures of that system's storage, recovery, concurrency control, and design validation subsystems are also described as is a "browser" for interactively viewing design data.
01 Sep 1983-IEEE Computer
TL;DR: Networks are everywhere; they span continents and oceans; tie office buildings together with miles of wire, fiber, and other diverse media; reach into land, air, and space vehicles; and connect microcomputers as well as large mainframe computers.
Abstract: Armies of spiders could not weave a wider web: networks are everywhere. Networks span continents and oceans; tie office buildings together with miles of wire, fiber, and other diverse media; reach into land, air, and space vehicles; and connect microcomputers as well as large mainframe computers. Some networks are incredibly fast and others are pragmatically slow; some work better than others, and some do not work well at all. However, despite the present abundance, new networks are still being developed constantly to challenge the competition. If we had a way to interconnect various networks, many problems could be solved. For example, a user may want to communicate with a site that is not on the same public network as the host computer. Perhaps there are several hosts but no single network to which they will all connect. In some cases, the cost of connection will be a factor; connecting 100 hosts on a coaxial local net is more cost-effective than putting them all on a public net, but running 1000 miles of coaxial cable to the 101st host is absurd. In other cases, pragmatics or the basic laws of nature apply; for example, radio-based networks are about the only choice if mobility is needed. A network technology that supports a maximum of 256 hosts becomes a problem when you acquire the 257th. Given that all hosts cannot be put on a single network, the next best option is to interconnect networks.
TL;DR: The massively parallel processor's computing power and extreme flexibility will allow the development of new techniques for scene analysis - realtime scene analysis, for example, in which the sensor can interact with the scene as needed.
Abstract: A review of the massively parallel processor (MPP) is provided. The MPP, a single instruction, multiple data parallel computer with 16K processors being built for NASA by Goodyear Aerospace, can perform over six billion eight-bit adds and 1.8 billion eight-bit multiplies per second. Its SIMD architecture and immense computing power promise to make the MPP an extremely useful and exciting new tool for all types of pattern recognition and image processing applications. The SIMD parallelism can be used to directly calculate 16K statistical pattern recognition results simultaneously. Moreover, the 16K processors can be configured into a two-dimensional array to efficiently extract features from a 128x128 subimage in parallel. The parallel search capability of SIMD architectures can be used to search recognition and production rules in parallel, thus eliminating the need to sort them. This feature is particularly important if the rules are dynamically changing. Finally, the MPP's computing power and extreme flexibility will allow the development of new techniques for scene analysis - realtime scene analysis, for example, in which the sensor can interact with the scene as needed.
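The SIMD style described above, one instruction applied in lockstep across thousands of processing elements, can be illustrated in miniature. This is a software simulation of the idea only; the MPP applies such a step to 16K hardware processors at once.

```python
# A toy illustration of a SIMD step: a single instruction (an
# 8-bit add, wrapping modulo 256) applied simultaneously across
# an array of processing elements, one data element per processor.

def simd_add8(a, b):
    """One SIMD instruction: elementwise 8-bit add across all
    processing elements in lockstep."""
    assert len(a) == len(b)
    return [(x + y) & 0xFF for x, y in zip(a, b)]

# Two 128x128 "subimages" flattened into 16K-element vectors,
# mirroring the MPP's two-dimensional processor configuration.
a = [i & 0xFF for i in range(128 * 128)]
b = [200] * (128 * 128)
c = simd_add8(a, b)
print(c[0], c[100])  # 200 44  (44 = (100 + 200) mod 256)
```

At six billion eight-bit adds per second, the MPP performs such a 16K-element step in a few microseconds; the point of the sketch is only the lockstep data-parallel structure.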
TL;DR: The methods described in this article require the participants to execute communications algorithms, called protocols, which must satisfy certain properties in order to guard against cheating by either side.
Abstract: Alice lives in Atlanta and Bob lives in Detroit. They have never met, but they wish to play poker. After some negotiation, they decide to play cards over the telephone. The first problem that arises is how to deal the cards fairly. If, for instance, Bob deals to Alice, how will Alice know that Bob has not cheated? On the other hand, if Bob manages to somehow deal a fair hand to Alice, without looking at her cards, what will stop Alice from changing her hand to a more favorable one? The problem confronting Alice and Bob is very similar to problems confronting users of modern communications systems such as electronic funds transfer systems, military communication networks, and distributed database systems. Such systems operate by series of message exchanges, and the possibility always exists that one or more of the participants in the exchanges will cheat to gain some advantage, or that some external agent will interfere with normal communications. Security in this context refers to the ability of such a system to withstand attacks by determined cheaters or enemies. Although other methods have been proposed for withstanding such attacks, the methods we will describe in this article require the participants to execute communications algorithms, called protocols. What are the properties that Alice and Bob's protocol must maintain in order to guard against cheating by either side? The card game they play should have rules just like the ordinary game of poker, except that no cards are actually exchanged. Alice and Bob must know the cards in their own hand, but neither can have any information about the other's hand. The deal must distribute all possible hands with equal probability and should not allow the same card to appear in two hands simultaneously. A player should be able to discard from his own hand. Can two mutually suspicious participants play poker over the telephone? Certainly, if they are clever enough to institute a secure protocol.
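The core trick behind "mental poker" protocols of this kind is a commutative cipher: encryptions by Alice and Bob can be removed in either order, so cards can be shuffled and dealt with neither player seeing the other's hand. A sketch in the spirit of the SRA protocol, with toy-sized and deliberately insecure parameters:

```python
# Commutative encryption via modular exponentiation: E_k(c) = c^k mod p.
# Because exponents compose, E_a(E_b(c)) == E_b(E_a(c)), and either
# layer can be stripped first. Real protocols use large primes; the
# small prime here is for illustration only.

p = 2579  # small prime; decryption works via inverses modulo p - 1

def enc(card, key):
    return pow(card, key, p)

def dec(card, key):
    inv = pow(key, -1, p - 1)   # inverse exponent undoes enc
    return pow(card, inv, p)

alice_key, bob_key = 7, 11      # each key must be coprime to p - 1
card = 42

# Alice encrypts (and would shuffle); Bob encrypts on top. The
# layers can then be removed in either order.
both = enc(enc(card, alice_key), bob_key)
assert dec(dec(both, bob_key), alice_key) == card
assert dec(dec(both, alice_key), bob_key) == card
print("round trip ok")
```

In the full protocol, Alice encrypts and shuffles the whole deck before Bob picks ciphertexts for his hand, which is what keeps the deal fair without a trusted dealer.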
TL;DR: Authentication by the customary methods using symmetric ciphers can do nothing to resolve disputes arising from the dishonesty of either sender or receiver; a digital signature based on public-key cryptosystems was proposed as a solution to the dispute problem.
Abstract: Because of the increased cost-effectiveness of computer technology and its subsequent acceptance into the business world, computer-based message systems are likely to become the principal carriers of business correspondence. Unfortunately with the efficiency of these systems come new possibilities for crime based on interference with digital messages. But the same technology that poses the threat can be used to resist and perhaps entirely frustrate potential crimes. For some messages, a degree of privacy or secrecy is needed, which is possible with encryption. However, predicting the extent encryption will be used in electronic mail is difficult, since much depends on the cost and convenience of its applications. For nearly all messages, authenticity is a prime requirement. Authenticity implies that the message is genuine in two respects: its text has not changed since it left the sender and the identity of the sender is correctly represented in the text header or in the signature attached to the message. Neither of these authenticity indicators is sufficient by itself because an altered message from sender A is in no way different from a message appearing to come from A but in fact coming from an enemy. The technique of authentication, which is closely related to cryptography, normally uses the symmetric type of cipher, typified by the Data Encryption Standard, or DES, algorithm. This kind of authentication is seriously deficient because both the sender and receiver must know a secret key. The sender uses the key to generate an authenticator, and the receiver uses it to check the authenticator. With this key, the receiver can also generate authenticators and can therefore forge messages appearing to come from the sender. In other words, authentication can protect both sender and receiver against third-party enemies, but it cannot protect one against fraud committed by the other.
If A sends a message to B, for example, B might fraudulently claim to have received a different message. Supposing B takes some action in response to a genuine received message, A can still claim that B in fact forged the message. For these reasons, authentication by the customary methods using symmetric ciphers can do nothing to resolve disputes arising from the dishonesty of either sender or receiver. As a solution to the dispute problem, Diffie and Hellman proposed the use of a digital signature based on certain public-key cryptosystems (Figure 1). The sender of the message is responsible for generating the signature.
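The asymmetry that settles disputes can be shown with textbook RSA: only the holder of the private key can produce a valid signature, yet anyone holding the public key can check it, so the receiver cannot forge messages from the sender. Toy parameters only; real systems use large keys and sign a cryptographic hash of the message rather than the message itself.

```python
# Textbook RSA signatures with classic small demo parameters.
# Signing uses the private exponent d; verification uses only
# the public pair (n, e), so verifiers cannot forge signatures.

p, q = 61, 53
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def sign(m):
    return pow(m, d, n)              # requires the secret d

def verify(m, sig):
    return pow(sig, e, n) == m       # anyone can check

msg = 65
sig = sign(msg)
print(verify(msg, sig))        # True
print(verify(msg + 1, sig))    # False: any alteration is detected
```

This is exactly the dispute-resolving property the abstract describes: B can verify A's signature but cannot create one, so a signed message binds A alone.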
TL;DR: Past experience is summarized to guide developers in building computer systems that can be trusted to enforce military security rules.
Abstract: For more than a decade, government, industry, and academic centers have investigated techniques for developing computer systems that can be trusted to enforce military security rules. A good deal has been learned about the problem, and a number of approaches have been explored. But what useful advice can be given to those about to embark on a system development? If the application requires a trusted system, how should the developer proceed? The purpose of this article is to summarize past experience and to guide developers.
01 May 1983-IEEE Computer
TL;DR: This article summarizes the Project 802 subcommittee's focus on calculating the maximum mean data rate and offers the best available evidence, based on this study and related studies.
Abstract: In February 1980 a group of people met in San Francisco, CA, USA, to form Project 802: Local Area Network Standards, sponsored by the IEEE Computer Society. The subcommittee held two open meetings, each followed by circulation of a draft report for comment. All claims had to have enough evidence to allow independent verification. Such evidence included source code plus data for all simulation results, all numbers and formulas for analytic results, and complete experimental conditions and measurement procedures for data analysis on actual systems. This article summarizes the subcommittee's focus on calculating the maximum mean data rate. The best available evidence, based on this study and related studies, is as follows. Token passing via a ring is the least sensitive to workload, offers short delay under light load, and offers controlled delay under heavy load. Token passing via a bus has the greatest delay under light load, cannot carry as much traffic as a ring under heavy load, and is quite sensitive to the bus length (through the propagation time for energy to traverse the bus). Carrier sense collision detection offers the shortest delay under light load, is quite sensitive under heavy load to the workload, and is sensitive to the bus length (the shorter the bus the better it performs) and to message length (the longer the packet the better it does). While this evidence is currently being examined by those actually building local area networks, other independent testing of these plots for confirmation is sought and encouraged.
TL;DR: It is shown that suitably represented descriptions of structure and behavior set an important foundation and offer a unity of device description and simulation, since the descriptions themselves are runnable.
Abstract: The development of expert systems for troubleshooting digital electronics is considered. It is shown that suitably represented descriptions of structure and behavior set an important foundation. The authors' approach offers a unity of device description and simulation, since the descriptions themselves are runnable. Unsolved problems are noted. 10 references.
TL;DR: This survey of cellular logic computer architectures for pattern processing in image analysis concentrates on recent efforts and examines some newer architectures that combine logical and numerical computations.
Abstract: The genealogy of cellular logic computers reveals an interesting diversity of architectures, but it took the IC technology of the seventies to significantly expand their practical applications. Cellular logic computers, under development since the 1950's, are now in use for image processing in hundreds of laboratories worldwide. This survey of cellular logic computer architectures for pattern processing in image analysis concentrates on recent efforts and examines some newer architectures that combine logical and numerical computations. A logical (or "binary") image is one in which the value of each picture element is a single bit. Such images are black and white, and they are processed or modified by use of logical rather than numerical transforms. Boolean algebra provides the mathematics for such transforms. This does not mean that so-called "gray-level" images cannot be processed by the cellular logic computer, or CLC. Any gray-level image can be converted into a registered stack of binary images through multithresholding. After each member of the stack is processed logically, the stack can be returned to gray-level format by arithmetically summing the results. Whether the final output is generated faster or more economically by a CLC or by a computer system carrying out numerical computations depends upon (1) the number of binary images required in the stack, (2) the speed of thresholding, (3) the speed of the CLC itself, and (4) the speed of arithmetic recombination. Logical processing often has advantages over more traditional numerical methods in that multilevel, recursive logical transforms followed by arithmetic recombination have certain unique properties with respect to their use in image processing. Logical transforms can be considered as filters; many are constant phase, pass absolutely no signal beyond cutoff, and have a cutoff frequency that decreases inversely with the number of recursions.
Furthermore, logical transforms, when executed as convolution functions using small (say, 3 x 3) kernels, can be executed at ultra high speed (less than one nanosecond per convolution step) by doing all computations by table lookup and paralleling lookup tables as well as pipelining the computational steps. Neighborhood functions: Cellular logic computers are used for the digital computation of two-dimensional and, more recently, three-dimensional logical neighborhood functions in image processing. In general, a logical neighborhood function is one in which the output value of each picture element is a function of the original value of the element and the values of the directly adjacent neighbors of the …
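The multithresholding pipeline described above, threshold a gray-level image at every level, apply a logical neighborhood function to each binary image, then sum the stack arithmetically, can be sketched as follows. The 3x3 erosion stands in for any cellular logic transform; the tiny image is illustrative.

```python
# Gray-level image -> registered stack of binary images (one per
# threshold) -> logical transform of each -> arithmetic recombination.

def threshold_stack(img, levels):
    # one binary image per threshold level 1..levels
    return [[[1 if v >= t else 0 for v in row] for row in img]
            for t in range(1, levels + 1)]

def erode3x3(b):
    # logical AND over each pixel's 3x3 neighborhood (borders -> 0);
    # a purely Boolean neighborhood function
    h, w = len(b), len(b[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = min(b[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out

def recombine(stack):
    # return to gray-level format by summing the processed stack
    h, w = len(stack[0]), len(stack[0][0])
    return [[sum(b[y][x] for b in stack) for x in range(w)]
            for y in range(h)]

img = [[0, 0, 0, 0, 0],
       [0, 2, 2, 2, 0],
       [0, 2, 3, 2, 0],
       [0, 2, 2, 2, 0],
       [0, 0, 0, 0, 0]]
result = recombine([erode3x3(b) for b in threshold_stack(img, 3)])
print(result[2][2])  # 2: only the center survives erosion, at levels 1 and 2
```

In a CLC each erode3x3 step would be a single hardware pass (or one table lookup per pixel); the relative costs of thresholding, the logical passes, and the final summation are exactly the four factors the abstract enumerates.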
01 Mar 1983-IEEE Computer
TL;DR: "Superior communication" in the man-machine interface refers to an approximation of human communication, which means software that allows input and output in less esoteric formats.
Abstract: Similarly, "superior communication" in the man-machine interface refers to an approximation of human communication. Users want software that allows input and output in less esoteric formats. The operator should not have to descend to the level of the machine and engage in dialog at each and every step. Voice I/O and data processing in natural languages are sterling examples of superior communication. The rapid growth of software