
Showing papers in "IEEE Computer in 1996"


Journal ArticleDOI
TL;DR: This article explains why RBAC is receiving renewed attention as a method of security administration and review, describes a framework of four reference models developed to better understand RBAC, and discusses the use of RBAC to manage itself.
Abstract: Security administration of large systems is complex, but it can be simplified by a role-based access control approach. This article explains why RBAC is receiving renewed attention as a method of security administration and review, describes a framework of four reference models developed to better understand RBAC and categorizes different implementations, and discusses the use of RBAC to manage itself.
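
As a rough illustration of the core idea behind RBAC (the article's four reference models are considerably richer), here is a minimal Python sketch; all names and the hierarchy rule are hypothetical:

```python
# Minimal RBAC sketch: permissions attach to roles, and users acquire
# permissions only through role membership. Hypothetical names; the
# reference models add hierarchies and constraints beyond this core.

class Role:
    def __init__(self, name, permissions=(), parents=()):
        self.name = name
        self.permissions = set(permissions)
        self.parents = list(parents)     # junior roles in the hierarchy

    def all_permissions(self):
        perms = set(self.permissions)
        for parent in self.parents:      # inherit along the hierarchy
            perms |= parent.all_permissions()
        return perms

class User:
    def __init__(self, name):
        self.name = name
        self.roles = []

    def can(self, permission):
        return any(permission in r.all_permissions() for r in self.roles)

employee = Role("employee", {"read_docs"})
auditor = Role("auditor", {"view_logs"}, parents=[employee])

alice = User("alice")
alice.roles.append(auditor)
assert alice.can("read_docs")      # inherited from the junior role
assert not alice.can("write_docs")
```

Administration then reduces to assigning and revoking roles rather than editing per-user permission lists, which is the simplification the article argues for.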

5,418 citations


Journal ArticleDOI
TL;DR: This work describes an alternative, programmer-centric view of relaxed consistency models that characterizes them in terms of program behavior rather than the system optimizations most such models emphasize.
Abstract: The memory consistency model of a system affects performance, programmability, and portability. We aim to describe memory consistency models in a way that most computer professionals would understand. This is important if the performance-enhancing features being incorporated by system designers are to be correctly and widely used by programmers. Our focus is consistency models proposed for hardware-based shared memory systems. Most of these models emphasize the system optimizations they support, and we retain this system-centric emphasis. We also describe an alternative, programmer-centric view of relaxed consistency models that describes them in terms of program behavior, not system optimizations.
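
To see what a relaxed model permits that programmer intuition forbids, consider the classic store-buffer litmus test. The sketch below is not from the article, and Python is used only to show the two-thread program pattern the consistency-model literature analyzes (CPython itself will not exhibit the hardware reordering):

```python
# Store-buffer litmus test (illustrative sketch). Under sequential
# consistency the outcome r1 == r2 == 0 is impossible, but relaxed
# hardware models that let a store be delayed past a later load do
# permit it.
import threading

x = y = 0
r1 = r2 = 0

def thread1():
    global x, r1
    x = 1       # store
    r1 = y      # load; relaxed hardware may effectively reorder these

def thread2():
    global y, r2
    y = 1
    r2 = x

t1 = threading.Thread(target=thread1)
t2 = threading.Thread(target=thread2)
t1.start(); t2.start(); t1.join(); t2.join()
# Sequential consistency allows (r1, r2) in {(0,1), (1,0), (1,1)};
# relaxed models, and real CPUs, also allow (0, 0).
print(r1, r2)
```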

1,213 citations


Journal ArticleDOI
TL;DR: This work discusses the experience with parallel computing on networks of workstations using the TreadMarks distributed shared memory system, which allows processes to assume a globally shared virtual memory even though they execute on nodes that do not physically share memory.
Abstract: Shared memory facilitates the transition from sequential to parallel processing. Since most data structures can be retained, simply adding synchronization achieves correct, efficient programs for many applications. We discuss our experience with parallel computing on networks of workstations using the TreadMarks distributed shared memory system. DSM allows processes to assume a globally shared virtual memory even though they execute on nodes that do not physically share memory. We illustrate a DSM system consisting of N networked workstations, each with its own memory. The DSM software provides the abstraction of a globally shared memory, in which each processor can access any data item without the programmer having to worry about where the data is or how to obtain its value.

917 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe automatic parallelization techniques in the SUIF (Stanford University Intermediate Format) compiler that result in good multiprocessor performance for array-based numerical programs.
Abstract: This article describes automatic parallelization techniques in the SUIF (Stanford University Intermediate Format) compiler that result in good multiprocessor performance for array-based numerical programs. Parallelizing compilers for multiprocessors face many hurdles. However, SUIF's robust analysis and memory optimization techniques enabled speedups on three fourths of the NAS and SPECfp95 benchmark programs.
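
A toy illustration of the property such compilers must prove, sketched in Python rather than SUIF's Fortran/C input: a loop whose iterations touch disjoint data can be distributed across processors, while a loop-carried dependence cannot be naively parallelized.

```python
# Not SUIF; just the analysis idea. c[i] = a[i] * b[i] touches only
# its own elements, so iterations are independent and distributable.
from multiprocessing import Pool

def _mul(pair):
    x, y = pair
    return x * y

def parallelizable(a, b):
    with Pool() as pool:
        return pool.map(_mul, list(zip(a, b)))

def not_parallelizable(a):
    # b[i] = b[i-1] + a[i]: each iteration needs the previous result,
    # so naive distribution across processors is unsafe.
    b = [0] * len(a)
    for i in range(1, len(a)):
        b[i] = b[i - 1] + a[i]
    return b

if __name__ == "__main__":
    print(parallelizable([1, 2, 3], [4, 5, 6]))  # [4, 10, 18]
    print(not_parallelizable([1, 2, 3, 4]))      # [0, 2, 5, 9]
```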

592 citations


Journal ArticleDOI
TL;DR: Carnegie Mellon's Informedia Digital Video Library project will establish a large, on-line digital video library featuring full-content and knowledge-based search and retrieval; the work focuses on two corpora, science documentaries and broadcast news.
Abstract: Carnegie Mellon's Informedia Digital Video Library project will establish a large, on-line digital video library featuring full-content and knowledge-based search and retrieval. Intelligent, automatic mechanisms will be developed to populate the library. Search and retrieval from digital video, audio, and text libraries will take place via desktop computer over local, metropolitan, and wide-area networks. The project's approach applies several techniques for content-based searching and video-sequence retrieval. Content is conveyed in both the narrative (speech and language) and the image. Only by the collaborative interaction of image, speech, and natural language understanding technology is it possible to successfully populate, segment, index, and search diverse video collections with satisfactory recall and precision. This collaborative interaction approach uniquely compensates for problems of interpretation and search in error-ridden and ambiguous data sets. The authors have focused the work on two corpora. One is science documentaries and lectures; the other is broadcast news content with partial closed-captions. Further work will continue to improve the accuracy and performance of the underlying processing as well as explore performance issues related to Web-based access and interoperability with other digital video resources.

439 citations


Journal ArticleDOI
Thomas Ball1, Stephen G. Eick1
TL;DR: The invisible nature of software hides system complexity, particularly for large team-oriented projects, and four innovative visual representations of code have evolved to help solve this problem: line representation; pixel representation; file summary representation; and hierarchical representation.
Abstract: The invisible nature of software hides system complexity, particularly for large team-oriented projects. The authors have evolved four innovative visual representations of code to help solve this problem: line representation; pixel representation; file summary representation; and hierarchical representation. We first describe these four visual code representations and then discuss the interaction techniques for manipulating them. We illustrate our software visualization techniques through five case studies. The first three focus on software history and static software characteristics; the last two discuss execution behavior. The software library and its implementation are then described. Finally, we briefly review some related work and compare and contrast our different techniques for visualizing software.
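
As a rough sketch of the pixel-representation idea (hypothetical inputs; the authors' tools render real graphical displays), each source line is reduced to one glyph whose shade encodes a statistic such as line age:

```python
# Pixel-representation sketch: map each source line to one glyph whose
# "color" encodes how recently the line changed. Inputs are invented.
def pixel_view(line_ages, buckets="@#+-. "):
    # Newest lines render dense ('@'); the oldest nearly blank.
    oldest = max(line_ages) or 1
    return "".join(buckets[min(len(buckets) - 1,
                               age * len(buckets) // (oldest + 1))]
                   for age in line_ages)

ages = [0, 1, 1, 7, 30, 30, 90, 365]   # days since each line changed
print(pixel_view(ages))                # one glyph per source line
```

Rendering thousands of such strips side by side, one column per file, gives the whole-system overview the article describes.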

424 citations


Journal ArticleDOI
Alan Wood1
TL;DR: This work applies software reliability modeling to a subset of products in four major software releases at Tandem and reports what was learned.
Abstract: Critical business applications require reliable software, but developing reliable software is one of the most difficult problems facing the software industry. After the software is shipped, software vendors receive customer feedback about software reliability. However, by then it is too late; software vendors need to know whether their products are reliable before they are delivered to customers. Software reliability growth models help provide that information. Unfortunately, very little real data and models from commercial applications have been published, possibly because of the proprietary nature of the data. Over the past few years, the author and his colleagues at Tandem have experimented with software reliability growth models. At Tandem, a major software release consists of substantial modifications to many products and may contain several million lines of code. Major software releases follow a well-defined development process and involve a coordinated quality assurance effort. We applied software reliability modeling to a subset of products for four major releases. The article reports on what was learned.
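
The abstract does not name a particular model, but a representative software reliability growth model of the kind used in such studies is the Goel-Okumoto exponential model; the small worked sketch below assumes that model and invented parameters, not Tandem's data:

```python
# Goel-Okumoto exponential NHPP sketch (an assumption here; the
# article evaluates several growth models against release data).
import math

def expected_failures(t, a, b):
    """Mean cumulative failures by test time t: mu(t) = a(1 - e^(-bt)),
    where a is the total expected defect count and b the per-defect
    detection rate."""
    return a * (1.0 - math.exp(-b * t))

a, b = 100.0, 0.05   # hypothetical fitted parameters
t = 40.0             # weeks of system test
found = expected_failures(t, a, b)
print(f"expected found by week {t:.0f}: {found:.1f}, "
      f"expected residual: {a - found:.1f}")
```

Fitting such a curve to failure data collected during test is what lets a vendor estimate residual defects before shipping.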

394 citations


Journal ArticleDOI
TL;DR: Polaris, an experimental translator of conventional Fortran programs that targets machines such as the Cray T3D, is discussed; such translators would liberate programmers from the complexities of explicit, machine-oriented parallel programming.
Abstract: Parallel programming tools are limited, making effective parallel programming difficult and cumbersome. Compilers that translate conventional sequential programs into parallel form would liberate programmers from the complexities of explicit, machine-oriented parallel programming. The paper discusses parallel programming with Polaris, an experimental translator of conventional Fortran programs that targets machines such as the Cray T3D.
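
One transformation commonly credited to such translators, Polaris among them, is reduction recognition: an accumulation that looks like a loop-carried dependence parallelizes with private partial sums. A Python sketch of the concept only (Polaris operates on Fortran):

```python
# Reduction recognition sketch: s += a[i] depends on the previous
# iteration, yet it parallelizes with per-worker partial sums that
# are combined at the end.
from multiprocessing import Pool

def _partial_sum(chunk):
    s = 0.0
    for x in chunk:          # each worker accumulates privately
        s += x
    return s

def parallel_sum(a, workers=4):
    step = max(1, len(a) // workers)
    chunks = [a[i:i + step] for i in range(0, len(a), step)]
    with Pool(workers) as pool:
        return sum(pool.map(_partial_sum, chunks))  # combine partials

if __name__ == "__main__":
    print(parallel_sum([float(i) for i in range(1, 101)]))  # 5050.0
```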

350 citations


Journal ArticleDOI
TL;DR: The paper discusses some collaboratory prototypes and considers the sociology of collaboration; through such collaboratories, team members distributed across a widespread area can collaborate using the newest instruments and computing resources.
Abstract: The success of many complex scientific investigations hinges on bringing the capabilities of diverse individuals from multiple institutions together with state-of-the-art instrumentation. Computer scientists working with domain specialists have made progress on several fronts to create and integrate the tools required for Internet-based scientific collaboration. However, both technical and sociological challenges remain. The tools of computer-supported cooperative work are now being applied to such collaborations. Through immersive electronic interaction, team members distributed across a widespread area can collaborate, using the newest instruments and computing resources. The paper discusses some collaboratory prototypes and considers the sociology of collaboration.

298 citations


Journal ArticleDOI
TL;DR: The World Wide Web is simply defined as the universe of global network-accessible information, an abstract space within which people can interact, and it is chiefly populated by interlinked pages of text, images, and animations.
Abstract: The World Wide Web is simply defined as the universe of global network-accessible information. It is an abstract space within which people can interact, and it is chiefly populated by interlinked pages of text, images, and animations, with occasional sounds, videos, and three-dimensional worlds. The Web marks the end of an era of frustrating and debilitating incompatibility between computer systems. It has created an explosion of accessibility, with many potential social and economic impacts. The Web was designed to be a space within which people could work on a project. This was a powerful concept, in that: people who build a hypertext document of their shared understanding can refer to it at all times; people who join a project team can have access to a history of the team's activities, decisions, and so on; the work of people who leave a team can be captured for future reference; and a team's operations, if placed on the Web, can be machine-analyzed in a way that could not be done otherwise. The Web was originally supposed to be a personal information system and a tool for groups of all sizes, from a team of two to the entire world. People have rapidly developed new features for the Web because of its tremendous commercial potential. This has made the maintenance of global Web interoperability a continuous task. This has also created a number of areas into which research must continue.

212 citations


Journal ArticleDOI
TL;DR: Janus-II as discussed by the authors uses paraphrasing and interactive error correction to boost performance on spontaneous conversational human dialogue in limited domains with vocabularies of 3,000 or more words.
Abstract: As communication becomes increasingly automated and transnational, the need for rapid, computer-aided speech translation grows. The Janus-II system uses paraphrasing and interactive error correction to boost performance. Janus-II operates on spontaneous conversational human dialogue in limited domains with vocabularies of 3,000 or more words. Current experiments involve 10,000 to 40,000 word vocabularies. It now accepts English, German, Japanese, Spanish, and Korean input, which it translates into any other of these languages. Beyond translating syntactically well-formed speech or carefully structured human-to-machine speech utterances, Janus-II research has focused on the more difficult task of translating spontaneous conversational speech between humans. This naturally requires a suitable database and task domain.

Journal ArticleDOI
TL;DR: This article presents a general framework of a system of logical clocks in distributed systems and discusses three methods: scalar, vector and matrix, for implementing logical time in these systems.
Abstract: Causality is vital in distributed computations. Distributed systems can determine causality using logical clocks. Human beings use the concept of causality to plan, schedule, and execute an enterprise, or to determine a plan's feasibility. In daily life, we use global time to deduce causality from loosely synchronized clocks such as wrist watches and wall clocks. But in distributed computing systems, the rate of event occurrence is several orders of magnitude higher, and the event-execution time several orders of magnitude smaller; if the physical clocks in these systems are not synchronized precisely, the causality relation between events cannot be captured accurately. Moreover, distributed systems have no built-in physical time and can only approximate it. This article presents a general framework of a system of logical clocks in distributed systems and discusses three methods (scalar, vector, and matrix) for implementing logical time in these systems.
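
Of the three methods, vector clocks are the easiest to sketch. The sketch below follows the standard update rules (tick your own component on local events and sends; on receive, take the element-wise maximum of the two clocks, then tick):

```python
# Vector clock sketch for n processes: local events and sends tick a
# process's own slot; a received timestamp is merged element-wise
# before the receiver ticks its own slot.
class VectorClock:
    def __init__(self, n, pid):
        self.v, self.pid = [0] * n, pid

    def tick(self):
        self.v[self.pid] += 1
        return list(self.v)          # timestamp to attach to a message

    def receive(self, other):
        self.v = [max(a, b) for a, b in zip(self.v, other)]
        self.v[self.pid] += 1

def happened_before(u, v):
    # u -> v iff u <= v component-wise and u != v
    return all(a <= b for a, b in zip(u, v)) and u != v

p0, p1 = VectorClock(2, 0), VectorClock(2, 1)
m = p0.tick()            # p0 sends a message stamped [1, 0]
p1.receive(m)            # p1 is now at [1, 1]
assert happened_before([1, 0], p1.v)
```

Vector clocks capture causality exactly; the scalar method loses information, and the matrix method additionally tracks what each process knows about the others.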

Journal ArticleDOI
TL;DR: The author discusses some basic and advanced features of Java, including garbage collection, multithreading and application programming interfaces.
Abstract: Java is an object-oriented programming language with a syntax similar to C and C++, only simpler. Because Java is an interpreted language, the typical C or C++ compile-link-load-test-debug cycle is reduced. Java development environments actually let the entire software-development life cycle take place within a Web browser. The author discusses some basic and advanced features of Java, including garbage collection, multithreading and application programming interfaces.

Journal ArticleDOI
TL;DR: Digital watermarking has been proposed as a way to identify the source, creator, owner, distributor, or authorized consumer of a document or image and to permanently and unalterably mark the image so that the credit or assignment is beyond dispute.
Abstract: The Internet revolution is now in full swing, and commercial interests abound. As with other maturing media technologies, the focus is moving from technology to content, as commercial vendors and developers try to use network technology to deliver media products for profit. This shift inevitably raises questions about how to protect ownership rights. Digital watermarking has been proposed as a way to identify the source, creator, owner, distributor, or authorized consumer of a document or image. Its objective is to permanently and unalterably mark the image so that the credit or assignment is beyond dispute. In the event of illicit use, the watermark would facilitate the claim of ownership, the receipt of copyright revenues, or successful prosecution. Watermarking has also been proposed for tracing images that have been illicitly redistributed. In the past, the infeasibility of large-scale photocopying and distribution often limited copyright infringement, but modern digital networks make large-scale dissemination simple and inexpensive. Digital watermarking allows each image to be uniquely marked for every buyer. If that buyer makes an illicit copy, the copy itself identifies the buyer as the source.
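
The article surveys watermarking proposals rather than prescribing one; the simplest member of the family, naive least-significant-bit embedding, shows the mechanics. The sketch below is purely illustrative and easily defeated; serious schemes are far more robust and imperceptible:

```python
# Naive LSB watermark: hide one bit of a buyer-specific ID in each
# pixel's least-significant bit. Illustrative only.
def embed(pixels, bits):
    marked = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return marked + pixels[len(bits):]

def extract(pixels, nbits):
    return [p & 1 for p in pixels[:nbits]]

image = [200, 13, 76, 254, 9, 180]   # hypothetical 8-bit gray pixels
buyer_id = [1, 0, 1, 1]              # unique per purchaser
marked = embed(image, buyer_id)
assert extract(marked, 4) == buyer_id  # an illicit copy names the buyer
```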

Journal ArticleDOI
TL;DR: This roundtable brings together some preeminent experts in the field, asking them to address the question "What is hindering the use of formal methods in industry?"
Abstract: One of the most challenging tasks in software system design is to assure reliability, especially as these systems are increasingly used in sensitive and often life-critical environments such as medical systems, air traffic control, and space applications. Many claim that formal methods not only provide assurance of reliability but also have the potential to reduce costs. Although the literature contains many excellent examples of applications of formal methods for large, critical, or even business transaction systems, a large percentage of practitioners see formal methods as irrelevant to their daily work. Why? This roundtable brings together some preeminent experts in the field, asking them to address the question "What is hindering the use of formal methods in industry?"

Journal ArticleDOI
TL;DR: The algorithms in this research create individualized, subject-specific atlases of the head by transforming generalized atlases created from MRI scans and labeled by experts; an atlas is an annotated volume of images, charts, or tables that systematically illustrates an anatomical part.
Abstract: An anatomical atlas is an annotated volume of images, charts or tables that systematically illustrate an anatomical part. Atlas annotations often include structure names, descriptions, locations and functions, as well as other information specific to the atlas anatomy. Individualized digital atlases can be generated by using a computer to transform the shape of the atlas into the shape of images taken of the individual. Generalized electronic atlases of the head, created from MRI scans and labeled by experts, are currently available. The algorithms in this research create individualized, subject-specific atlases.
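
The shape transformation described can be sketched in its most stripped-down form: fit an affine map from atlas landmarks to matching landmarks in the subject's scan, then push atlas annotations through it. This toy version, with invented coordinates, stands in for the article's far more sophisticated algorithms:

```python
# Toy atlas individualization: least-squares affine map from atlas
# landmarks to matching subject landmarks, applied to atlas labels.
import numpy as np

atlas_pts   = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
subject_pts = np.array([[0.1, 0.2], [1.2, 0.3], [0.2, 1.4], [1.3, 1.5]])

A = np.hstack([atlas_pts, np.ones((4, 1))])          # homogeneous coords
T, *_ = np.linalg.lstsq(A, subject_pts, rcond=None)  # 3x2 affine params

def warp(points):
    return np.hstack([points, np.ones((len(points), 1))]) @ T

label = np.array([[0.5, 0.5]])   # an annotated point in atlas space
print(warp(label))               # its estimated location in the subject
```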

Journal ArticleDOI
J.D. Musa1
TL;DR: Software reliability engineered testing (SRET) is an AT&T best current practice; its AT&T origin in no way implies that SRET is limited to telecommunications systems.
Abstract: Software testing often results in delays to market and high cost without assuring product reliability. Software reliability engineered testing (SRET) is an AT&T best current practice; that it originated at AT&T in no way implies that SRET is limited to telecommunications systems. SRET is based on the AT&T best current practice of software reliability engineering, a designation that is rigorously reviewed; only five of 30 proposed best current practices were approved in 1991.

Journal ArticleDOI
TL;DR: The goal of the Alexandria Project is to build a distributed digital library for geographically referenced material from throughout the world; the project will provide user interfaces and on-line catalogs that support the formulation and evaluation of geographically constrained queries.
Abstract: Presently, maps, aerial photos, and other material referenced in geographic terms, such as by the names of communities that appear in the material, are largely inaccessible. Much of the information is only on paper or film, and can be found only in major research libraries. There is a need to make such material more widely available. The goal of the Alexandria Project, which is based at the University of California at Santa Barbara, is to build a distributed digital library for geographically referenced material from throughout the world. The Alexandria Digital Library is scheduled to be made available to the public in July 1996. At that time, Internet users will be able to access and extract information from the ADL's large collection. To accomplish its goal, ADL will provide user interfaces and on-line catalogs that support the formulation and evaluation of geographically constrained queries.
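
At its simplest, a geographically constrained query is a bounding-box intersection test over each item's spatial footprint; a hypothetical sketch (the ADL catalogs and query types are much richer):

```python
# Sketch of a geographically constrained query: return catalog items
# whose spatial footprint intersects the query box. Names are invented.
def intersects(a, b):
    # boxes given as (min_lon, min_lat, max_lon, max_lat)
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

catalog = [
    ("aerial photo, Santa Barbara", (-119.9, 34.3, -119.6, 34.5)),
    ("topo map, Denver",            (-105.1, 39.6, -104.8, 39.9)),
]
query = (-120.0, 34.0, -119.0, 35.0)   # user-drawn box over the coast
print([name for name, box in catalog if intersects(box, query)])
```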

Journal ArticleDOI
TL;DR: The paper discusses some of the Digital Library Initiative (DLI) projects, which are a good measure of the research into large-scale digital libraries and span a wide range of the major topics necessary to develop the National Information Infrastructure.
Abstract: In this era of the Internet and the World Wide Web, the long-time topic of digital libraries has suddenly become white hot. As the Internet expands, particularly the WWW, more people are recognizing the need to search indexed collections. The paper discusses some of the Digital Library Initiative (DLI) projects, which are a good measure of the research into large-scale digital libraries. They span a wide range of the major topics necessary to develop the National Information Infrastructure.

Journal ArticleDOI
TL;DR: The authors have implemented Passion on Intel's Paragon, Touchstone Delta, and iPSC/860 systems, and on the IBM SP system, and have made it publicly available through the World Wide Web.
Abstract: We have implemented Passion, a library for parallel input/output, on Intel's Paragon, Touchstone Delta, and iPSC/860 systems, and on the IBM SP system. We have also made it publicly available through the World Wide Web (http://www.cat.syr.edu/passion.html). We are in the process of porting the library to other machines and extending its functionality.

Journal ArticleDOI
TL;DR: The System Meter (SM) is a new software sizing approach based on the notion of system description that distinguishes between components to be developed and those to be reused, thus reflecting the idea of incremental functionality.
Abstract: Cost and time estimation are difficult problems in software development projects. Software metrics tackle this problem by assuming a statistical correlation between the size of a software project and the amount of effort typically required to realize it. To be useful in estimating cost, a size metric must take into account the inherent complexity of the system. Such metrics have been applied with varying degrees of success, but the nature of software development has been changing, and some of the assumptions behind the established cost-estimation techniques are slowly being invalidated. The System Meter (SM) is a new software sizing approach based on the notion of system description. System descriptions encompass all kinds of software artifacts, from requirement documents to final code. For each kind or level of artifacts, there is a corresponding flavor of SM. In our studies we used the first operational flavor, the SM at the preliminary analysis level, or Pre-SM. In contrast to the well-known Function Point (FP) metric, which is measurable after the more detailed but costly phase of domain analysis only, the SM explicitly takes OO concepts into account. It also distinguishes between components to be developed and those to be reused, thus reflecting the idea of incremental functionality. We present results of a field study of 36 projects developed using object technology. We measured both FP and the Pre-SM method in all 36 projects and compared their correlation to the development effort.
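
The statistical correlation such metrics rely on is typically captured by regressing effort against size, often as a power law fitted in log space. A sketch with invented data, not the study's 36 projects:

```python
# Fit effort = c * size^k by ordinary least squares in log space.
# Sizes and efforts below are invented for illustration.
import math

sizes   = [120, 340, 560, 900, 1500]   # e.g. System Meter measurements
efforts = [30, 70, 110, 160, 260]      # person-days

xs = [math.log(s) for s in sizes]
ys = [math.log(e) for e in efforts]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
k = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
c = math.exp(my - k * mx)
print(f"effort ~ {c:.2f} * size^{k:.2f}")  # usable for early estimates
```

The quality of such a fit (its correlation with actual effort) is exactly what the field study compares between Function Points and the Pre-SM.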

Journal ArticleDOI
S. Sparks1, K. Benner1, C. Faris1
TL;DR: This work has found that framework-based reuse offers many more benefits with the right management approach, and describes the lessons it learned when building the Knowledge-Based Software Assistant/Advanced Development Model.
Abstract: Reusing frameworks instead of libraries can cause subtle architectural changes in an application, calling for innovative management solutions. We relate our experience in managing the Knowledge-Based Software Assistant project and offer tips for buying, building and using frameworks. One of the promises of object-oriented software development is that organizations can get a significant return on development investment because the code is easier to reuse. Software project managers are often eager to take the OO plunge for that reason, but are uncertain about the management issues they will face. There is also the problem of choosing the best form of reuse. Library-based reuse, the traditional reuse form, is more popular than framework-based reuse, but we have found that framework-based reuse offers many more benefits with the right management approach. We describe the lessons we learned when building the Knowledge-Based Software Assistant/Advanced Development Model.

Journal ArticleDOI
TL;DR: The MESSENGERS system is a distributed system based on autonomous objects that combines powerful navigational capabilities found in other autonomous-object-based systems with efficient dynamic linking mechanisms supported by some new programming languages, like Java.
Abstract: Most existing distributed systems are structured as statically compiled processes communicating with each other via messages. The system's intelligence is embodied in the processes, while the messages contain simple, passive pieces of information. This is referred to as the communicating objects paradigm. In the autonomous objects paradigm, a message has its own identity and behavior. It decides at runtime where it wants to propagate and what tasks to perform there; the nodes become simply generic interpreters that enable messages to navigate and compute. In this scenario, an application's intelligence is embodied in and carried by messages as they propagate through the network. The autonomous objects paradigm is more flexible than the communicating objects paradigm because it allows developers to change the program's behavior after it has started to run. We based our system, MESSENGERS, on autonomous objects, and intended it for the composition and coordination of concurrent activities in a distributed environment. It combines powerful navigational capabilities found in other autonomous-object-based systems with efficient dynamic linking mechanisms supported by some new programming languages, like Java.
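
The autonomous objects paradigm can be sketched as messages that carry their own behavior and choose their next hop at runtime, with nodes reduced to generic interpreters. The sketch below shows the paradigm only, not the MESSENGERS language itself:

```python
# Autonomous-object sketch: a message carries behavior and state and
# decides at runtime where to go next; nodes are generic interpreters.
class Node:
    def __init__(self, name, neighbors=None):
        self.name, self.neighbors = name, neighbors or []

class Messenger:
    def __init__(self, behavior):
        self.behavior, self.state = behavior, {}

    def run(self, node):
        while node is not None:
            node = self.behavior(self, node)   # behavior picks next hop

def census(msg, node):
    # Count nodes along a path, hopping to the first unvisited neighbor.
    visited = msg.state.setdefault("visited", set())
    visited.add(node.name)
    nxt = next((n for n in node.neighbors if n.name not in visited), None)
    if nxt is None:
        print(f"visited {len(visited)} nodes")
    return nxt

a, b, c = Node("a"), Node("b"), Node("c")
a.neighbors, b.neighbors = [b], [a, c]
Messenger(census).run(a)    # -> visited 3 nodes
```

Changing the application here means injecting a new messenger, not recompiling the nodes, which is the flexibility the paradigm claims.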

Journal ArticleDOI
TL;DR: The article examines the market and technology trends affecting the testing of integrated circuits, with emphasis on the role of predesigned components (cores) and built-in self-test.
Abstract: The article examines the market and technology trends affecting the testing of integrated circuits, with emphasis on the role of predesigned components (cores) and built-in self-test. We explain manufacturing testing, as opposed to design testing, which happens before manufacturing, and online testing, which happens after.
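
A standard building block of built-in self-test is the linear feedback shift register that generates pseudorandom test patterns on-chip; below is a small sketch of that generic mechanism (the article discusses trends, not this particular circuit):

```python
# 4-bit LFSR sketch with feedback taps at bits 3 and 2 (polynomial
# x^4 + x^3 + 1), which is maximal-length: it cycles through all 15
# nonzero states, giving 15 distinct pseudorandom test patterns.
def lfsr_patterns(seed=0b1001, n=15):
    state, out = seed, []
    for _ in range(n):
        out.append(state)
        bit = ((state >> 3) ^ (state >> 2)) & 1   # feedback bit
        state = ((state << 1) | bit) & 0xF
    return out

print([f"{p:04b}" for p in lfsr_patterns()])
```

In BIST, the same structure, plus a signature register compacting the responses, lets a chip test itself without feeding every pattern through external pins.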

Journal ArticleDOI
TL;DR: The Digital Library Initiative project at the University of Illinois at Urbana-Champaign is developing the information infrastructure to effectively search technical documents on the Internet, constructing a large testbed of scientific literature, evaluating its effectiveness under significant use, and researching enhanced search technology.
Abstract: The Digital Library Initiative (DLI) project at the University of Illinois at Urbana-Champaign is developing the information infrastructure to effectively search technical documents on the Internet. The authors are constructing a large testbed of scientific literature, evaluating its effectiveness under significant use, and researching enhanced search technology. They are building repositories (organized collections) of indexed multiple-source collections and federating (merging and mapping) them by searching the material via multiple views of a single virtual collection. Developing widely usable Web technology is also a key goal. Improving Web search beyond full-text retrieval will require using document structure in the short term and document semantics in the long term. Their testbed efforts concentrate on journal articles from the scientific literature, with structure specified by the Standard Generalized Markup Language (SGML). Research efforts extract semantics from documents using the scalable technology of concept spaces based on context frequency. They then merge these efforts with traditional library indexing to provide a single Internet interface to indexes of multiple repositories.
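
Concept spaces rest on term co-occurrence statistics; here is a toy sketch of the core computation (the project's actual weighting, based on context frequency, is more elaborate):

```python
# Toy concept-space computation: count how often term pairs co-occur
# across documents; a term's strongest co-occurrence partners become
# its neighbors in the concept space.
from collections import Counter
from itertools import combinations

docs = [
    {"parallel", "compiler", "fortran"},
    {"parallel", "compiler", "speedup"},
    {"digital", "library", "search"},
]

cooc = Counter()
for terms in docs:
    for a, b in combinations(sorted(terms), 2):
        cooc[(a, b)] += 1

# Terms most associated with "compiler":
print([pair for pair, n in cooc.most_common() if "compiler" in pair][:3])
```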

Journal ArticleDOI
TL;DR: Although creeping requirements are troublesome, they are often a technical necessity; several threads of research and some emerging technologies aim either to clarify requirements earlier in development or to minimize the disruptive effect of changing requirements later.
Abstract: One of the most chronic problems in software development is the fact that application requirements are almost never stable and fixed. Frequent changes in requirements are not always caused by capricious clients (although sometimes they are). The root cause of requirements volatility is that many applications are attempting to automate domains that are only partly understood. As software design and development proceeds, the process of automation begins to expose these ill-defined situations. Therefore, although creeping requirements are troublesome, they are often a technical necessity. Several threads of research and some emerging technologies are aimed at either clarifying requirements earlier in development or minimizing the disruptive effect of changing requirements later.

Journal ArticleDOI
TL;DR: The authors used CORBA to implement information-access and payment protocols that provide the interface uniformity necessary for interoperability, while leaving implementers a large amount of leeway to optimize performance and to provide choices in service performance profiles.
Abstract: Information repositories are just one of many services tomorrow's digital libraries might offer. Other services include automated news summarization, trend analysis across news repositories, and copyright-related facilities. This distributed collection of services has the potential to be enormously helpful in performing information-intensive tasks. It could also turn such tasks into confusing, frustrating annoyances by forcing programmers and users to learn many interfaces and by confronting users with the bewildering details of fee-based services that were previously only accessible to professional librarians. The Stanford Digital Library project is addressing the problem of interoperability, which is particularly important because standardization efforts are lagging behind the development of digital library services. The authors used CORBA to implement information-access and payment protocols. These protocols provide the interface uniformity necessary for interoperability, while leaving implementers a large amount of leeway to optimize performance and to provide choices in service performance profiles. The authors' initial experience indicates that a distributed object framework does give clients and servers the flexibility to manage their communication and processing resources effectively.
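
Stripped of CORBA specifics, interface uniformity means every service implements one shared abstract interface, so clients learn a single protocol regardless of who provides the service. A rough Python analogy with hypothetical classes, not the project's actual IDL:

```python
# Sketch of interface uniformity: every repository implements one
# abstract search interface, so a client can federate any mix of them.
from abc import ABC, abstractmethod

class InfoAccess(ABC):
    """One uniform search interface for every repository."""
    @abstractmethod
    def search(self, query):
        ...

class NewsRepository(InfoAccess):
    def search(self, query):
        return [f"news story about {query}"]

class PatentRepository(InfoAccess):
    def search(self, query):
        return [f"patent mentioning {query}"]

def federated_search(query, services):
    # Clients combine services through the single shared interface.
    return [hit for s in services for hit in s.search(query)]

print(federated_search("digital libraries",
                       [NewsRepository(), PatentRepository()]))
```

Behind the uniform interface, each implementer is free to optimize performance however it likes, which is the leeway the abstract emphasizes.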

Journal ArticleDOI
TL;DR: The development team expects the scale and diversity of the project to test their technical ideas about distributed agents, interoperability, mediation, and economical resource allocation.
Abstract: The University of Michigan Digital Library (UMDL) project is creating an infrastructure for rendering library services over a digital network. When fully developed, the UMDL will provide a wealth of information sources and library services to students, researchers, and educators. Tasks are distributed among numerous specialized modules called agents. The three classes of agents are user interface agents, mediator agents, and collection interface agents. Complex tasks are accomplished by teams of specialized agents working together-for example, by interleaving various types of search. The UMDL is being deployed in three arenas: secondary-school science classrooms, the University of Michigan library, and space-science laboratories. The development team expects the scale and diversity of the project to test their technical ideas about distributed agents, interoperability, mediation, and economical resource allocation.

Journal ArticleDOI
TL;DR: A hybrid algorithm combines proposed solutions to address the problem of how to tell good sensor data from faulty data in a distributed system.
Abstract: Sensors that supply data to computer systems are inherently unreliable. When sensors are distributed, reliability is further compromised. How can a system tell good sensor data from faulty? A hybrid algorithm combines proposed solutions to address the problem.
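
A standard ingredient of such fault-tolerant fusion algorithms is Marzullo-style interval intersection: find the range of values consistent with at least n - f of n sensor intervals. The sketch below illustrates that building block, not the article's exact hybrid procedure:

```python
# Fault-tolerant fusion sketch: sweep interval endpoints and report the
# first region covered by at least n - f of the n sensor intervals.
def fuse(intervals, f):
    events = ([(lo, +1) for lo, hi in intervals]
              + [(hi, -1) for lo, hi in intervals])
    events.sort(key=lambda e: (e[0], -e[1]))   # opens before closes at ties
    need, depth, best = len(intervals) - f, 0, None
    for i, (x, d) in enumerate(events):
        depth += d
        if depth >= need and best is None:
            best = (x, events[i + 1][0])       # region up to next endpoint
    return best

readings = [(9.8, 10.4), (9.9, 10.3), (10.0, 10.6), (12.0, 12.5)]
print(fuse(readings, f=1))   # -> (10.0, 10.3), ignoring the faulty sensor
```

Even with one sensor reporting wildly (the 12.0 to 12.5 interval), the surviving overlap pins the true value down, which is the essence of telling good data from faulty.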

Journal ArticleDOI
TL;DR: Voxel-Man is an attempt to combine in a single framework a detailed spatial model enabling realistic visualization with a symbolic model of the human body to provide a variety of novel features for surgical education and training.
Abstract: A general digital model of human anatomy is very helpful both in supporting the process of anatomical segmentation and as a reference system for simulating surgical situations or even rehearsal of interventions. This article describes the data structure and implementation of such a model. Neither superhero nor crash-test dummy, Voxel-Man is an attempt to combine in a single framework a detailed spatial model enabling realistic visualization with a symbolic model of the human body. We show that although a general model does not correspond in detail to an individual patient, it does provide a variety of novel features for surgical education and training.