
Showing papers in "IEEE Computer in 1997"


Journal ArticleDOI
TL;DR: This work uses hardware methods to evaluate low-level error detection and masking mechanisms, and software methods to test higher level mechanisms to evaluate the dependability of computer systems.
Abstract: Fault injection is important to evaluating the dependability of computer systems. Researchers and engineers have created many novel methods to inject faults, which can be implemented in both hardware and software. The contrast between the hardware and software methods lies mainly in the fault injection points they can access, the cost and the level of perturbation. Hardware methods can inject faults into chip pins and internal components, such as combinational circuits and registers that are not software-addressable. On the other hand, software methods are convenient for directly producing changes at the software-state level. Thus, we use hardware methods to evaluate low-level error detection and masking mechanisms, and software methods to test higher level mechanisms. Software methods are less expensive, but they also incur a higher perturbation overhead because they execute software on the target system.
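
As a concrete illustration of the software-implemented side, the sketch below (a hypothetical example, not the authors' tooling) flips a single bit in a program's intermediate state and compares the outcome against a fault-free golden run, the basic experiment behind many software fault injection campaigns.

```python
import random

def flip_bit(value: int, width: int = 32) -> int:
    """Flip one randomly chosen bit of a `width`-bit integer."""
    return value ^ (1 << random.randrange(width))

def golden_run(x: int) -> int:
    # Fault-free computation: squaring with saturation at 255.
    return min(x * x, 255)

def faulty_run(x: int) -> int:
    # Same computation, but a single-bit fault is injected into the
    # intermediate value before the saturation (masking) step.
    return min(flip_bit(x * x), 255)

def experiment(trials: int = 10_000) -> float:
    """Fraction of injected faults that propagate to the output;
    faults absorbed by the saturation step are counted as masked."""
    propagated = 0
    for _ in range(trials):
        x = random.randrange(256)
        if faulty_run(x) != golden_run(x):
            propagated += 1
    return propagated / trials

if __name__ == "__main__":
    print(f"fault propagation rate: {experiment():.1%}")
```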

876 citations


Journal ArticleDOI
TL;DR: The most radical of the architectures that appear in this issue are Raw processors-highly parallel architectures with hundreds of very simple processors coupled to a small portion of the on-chip memory, allowing synthesis of complex operations directly in configured hardware.
Abstract: The most radical of the architectures that appear in this issue are Raw processors-highly parallel architectures with hundreds of very simple processors coupled to a small portion of the on-chip memory. Each processor, or tile, also contains a small bank of configurable logic, allowing synthesis of complex operations directly in configurable hardware. Unlike the others, this architecture does not use a traditional instruction set architecture. Instead, programs are compiled directly onto the Raw hardware, with all units told explicitly what to do by the compiler. The compiler even schedules most of the intertile communication. The real limitation to this architecture is the efficacy of the compiler. The authors demonstrate impressive speedups for simple algorithms that lend themselves well to this architectural model, but whether this architecture will be effective for future workloads is an open question.

696 citations


Journal ArticleDOI
TL;DR: The wearable personal imaging system described in this paper lets the wearer keep an eye on the screen while walking around and doing other things, in contrast to present-day laptops and personal digital assistants.
Abstract: Miniaturization of components has enabled systems that are wearable and nearly invisible, so that individuals can move about and interact freely, supported by their personal information domain. To explore such new concepts in imaging and lighting, I designed and built the wearable personal imaging system. My invention differed from present-day laptops and personal digital assistants in that I could keep an eye on the screen while walking around and doing other things. Just as computers have come to serve as organizational and personal information repositories, computer clothing, when worn regularly, could become a visual memory prosthetic and perception enhancer.

568 citations


Journal ArticleDOI
TL;DR: An effort to develop an integrated set of diagrammatic languages for modeling object-oriented systems that are intuitive and well-structured, yet fully executable and analyzable, enabling automatic synthesis of usable and efficient code in object-oriented languages such as C++.
Abstract: Statecharts, popular for modeling system behavior in the structural analysis paradigm, are part of a fully executable language set for modeling object-oriented systems. The languages form the core of the emerging Unified Modeling Language. The authors embarked on an effort to develop an integrated set of diagrammatic languages for object modeling, built around statecharts, and to construct a supporting tool that produces a fully executable model and allows automatic code synthesis. The language set includes two constructive modeling languages (languages containing the information needed to execute the model or translate it into executable code).
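
To give a feel for what a fully executable model means in practice, here is a minimal sketch in Python (illustrative only; it is not the authors' language set or code generator, and it omits statechart features such as hierarchy and concurrency): states, events, and transitions are explicit data, so the same model can be simulated directly or translated to code.

```python
# A minimal executable state machine, statechart-flavored but flat.
# States, events, and transitions are explicit data, so the model can
# be run directly or used as input to a code generator.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
    ("paused", "stop"): "idle",
}

def run(events, state="idle"):
    trace = [state]
    for event in events:
        # Unhandled events leave the state unchanged.
        state = TRANSITIONS.get((state, event), state)
        trace.append(state)
    return trace

print(run(["start", "pause", "start", "stop"]))
# ['idle', 'running', 'paused', 'running', 'idle']
```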

512 citations


Journal ArticleDOI
TL;DR: The paper considers the Multigraph Architecture framework for model-integrated computing developed at Vanderbilt's Measurement and Computing Systems Laboratory, which includes integrated, multiple-view models that capture information relevant to the system under design.
Abstract: Computers now control many critical systems in our lives, from the brakes on our cars to the avionics control systems on planes. Such computers wed physical systems to software, tightly integrating the two and generating complex component interactions unknown in earlier systems. Thus, it is imperative that we construct software and its associated physical system so they can evolve together. The paper discusses one approach that accomplishes this, called model-integrated computing, which works by extending the scope and use of models. It starts by defining the computational processes that a system must perform and develops models that become the backbone for the development of computer-based systems. In this approach, integrated, multiple-view models capture information relevant to the system under design. The paper considers the Multigraph Architecture framework for model-integrated computing developed at Vanderbilt's Measurement and Computing Systems Laboratory.

491 citations


Journal ArticleDOI
TL;DR: Presents the case for billion-transistor processor architectures that will consist of chip multiprocessors (CMPs): multiple (four to 16) simple, fast processors on one chip, and all processors share a larger level-two cache.
Abstract: Presents the case for billion-transistor processor architectures that will consist of chip multiprocessors (CMPs): multiple (four to 16) simple, fast processors on one chip. In their proposal, each processor is tightly coupled to a small, fast, level-one cache, and all processors share a larger level-two cache. The processors may collaborate on a parallel job or run independent tasks (as in the SMT proposal). The CMP architecture lends itself to simpler design, faster validation, cleaner functional partitioning, and higher theoretical peak performance. However, for this architecture to realize its performance potential, either programmers or compilers will have to make code explicitly parallel. Old ISAs will be incompatible with this architecture (although they could run slowly on one of the small processors).
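
The requirement to make code explicitly parallel can be as simple as partitioning independent work across threads or processes. The sketch below is a generic illustration of that idea, not code tied to any particular CMP proposal.

```python
from concurrent.futures import ProcessPoolExecutor

def work(chunk):
    # Independent per-chunk computation with no shared state, so the
    # chunks can run on separate processors of a chip multiprocessor.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Explicitly partition the work, then run the partitions in parallel.
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(work, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(range(1_000_000)))
```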

434 citations


Journal ArticleDOI
TL;DR: This work presents a survey of the techniques developed since the mid-1980s to implement replicated services, emphasizing the relationship between replication techniques and group communication.
Abstract: Replication handled by software on off-the-shelf hardware costs less than using specialized hardware. Although an intuitive concept, replication requires sophisticated techniques for successful implementation. Group communication provides an adequate framework. We present a survey of the techniques developed since the mid-1980s to implement replicated services, emphasizing the relationship between replication techniques and group communication.
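
The link between replication and group communication can be sketched in a few lines (a toy example that assumes a total-order broadcast primitive rather than any specific toolkit): if every replica applies the same deterministic operations in the same delivery order, the replicas cannot diverge.

```python
class Replica:
    """A deterministic state machine; all replicas apply the same
    operations in the same order, so their states never diverge."""
    def __init__(self):
        self.balance = 0

    def apply(self, op, amount):
        if op == "deposit":
            self.balance += amount
        elif op == "withdraw":
            self.balance -= amount

def total_order_broadcast(replicas, operations):
    # Stand-in for a group-communication layer that delivers every
    # message to every replica in the same total order.
    for op in operations:
        for replica in replicas:
            replica.apply(*op)

replicas = [Replica() for _ in range(3)]
total_order_broadcast(replicas, [("deposit", 100), ("withdraw", 30)])
print([r.balance for r in replicas])  # [70, 70, 70]
```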

414 citations


Journal ArticleDOI
TL;DR: Design by contract is the principle that the interfaces between modules of a software system-especially a mission-critical one-should be governed by precise specifications, and this lesson has not been heeded by such recent designs as IDL, Ada 95 or Java.
Abstract: Design by contract is the principle that the interfaces between modules of a software system-especially a mission-critical one-should be governed by precise specifications. The contracts cover mutual obligations (pre-conditions), benefits (post-conditions), and consistency constraints (invariants). Together, these properties are known as assertions, and are directly supported in some design and programming languages. A recent $500 million software error provides a sobering reminder that this principle is not just a pleasant academic ideal. On June 4, 1996, the maiden flight of the European Ariane 5 launcher crashed, about 40 seconds after takeoff. The rocket was uninsured. The French space agency, CNES (Centre National d'Etudes Spatiales), and the European Space Agency (ESA) immediately appointed an international inquiry board. The board made several recommendations with respect to software process improvement. There is a simple lesson to be learned from this event: reuse without a precise, rigorous specification mechanism is a risk of potentially disastrous proportions. It is regrettable that this lesson has not been heeded by such recent designs as IDL, Ada 95 or Java. None of these languages has built-in support for design by contract. Effective reuse requires design by contract. Without a precise specification attached to each reusable component, no one can trust a supposedly reusable component. Without a specification, it is probably safer to redo than to reuse.
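
Python has no built-in contract mechanism, but the idea can be approximated with assertions. The sketch below is illustrative only (it is not Eiffel's mechanism, and the class is hypothetical); it shows a precondition, a postcondition, and a class invariant on a simple account.

```python
class Account:
    """A toy account whose contract is expressed as assertions:
    a class invariant plus pre- and postconditions on withdraw()."""
    def __init__(self, balance=0):
        self.balance = balance
        self._check_invariant()

    def _check_invariant(self):
        assert self.balance >= 0, "invariant: balance must stay non-negative"

    def withdraw(self, amount):
        # Precondition: the caller's obligation.
        assert 0 < amount <= self.balance, "precondition violated"
        old_balance = self.balance
        self.balance -= amount
        # Postcondition: the supplier's obligation.
        assert self.balance == old_balance - amount, "postcondition violated"
        self._check_invariant()
        return self.balance

Account(50).withdraw(20)   # fine
Account(50).withdraw(80)   # raises AssertionError: precondition violated
```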

297 citations


Journal ArticleDOI
TL;DR: The author points out that it will soon be impossible to maintain one global clock over the entire chip, and sending signals across a billion-transistor processor may require as many as 20 cycles.
Abstract: The most important physical trend facing chip architects is the fact that on-chip wires are becoming much slower relative to logic as the on-chip devices shrink. The author points out that it will soon be impossible to maintain one global clock over the entire chip, and sending signals across a billion-transistor processor may require as many as 20 cycles.

257 citations


Journal ArticleDOI
TL;DR: An overview of electronic payment systems is provided, focusing on issues related to security; properly designed systems can actually provide better security than traditional means of payment, in addition to flexibility.
Abstract: The exchange of goods conducted face-to-face between two parties dates back to before the beginning of recorded history. Traditional means of payment have always had security problems, but now electronic payments retain the same drawbacks and add some risks. Unlike paper, digital "documents" can be copied perfectly and arbitrarily often, digital signatures can be produced by anybody who knows the secret cryptographic key, and a buyer's name can be associated with every payment, eliminating the anonymity of cash. Without new security measures, widespread electronic commerce is not viable. On the other hand, properly designed electronic payment systems can actually provide better security than traditional means of payments, in addition to flexibility. This article provides an overview of electronic payment systems, focusing on issues related to security.

250 citations


Journal ArticleDOI
K. Diefendorff, Pradeep Dubey
TL;DR: The authors predict high-performance, general-purpose processors will incorporate more media processing capabilities, eventually bringing about the demise of specialized media processors, except perhaps, in embedded applications.
Abstract: Workloads drive architecture design and will change in the next two decades. For high-performance, general-purpose processors, there is a consensus that multimedia will continue to grow in importance. The authors predict these processors will incorporate more media processing capabilities, eventually bringing about the demise of specialized media processors, except perhaps, in embedded applications. These enhanced general-purpose processor capabilities will arise from multimedia applications that require real-time response, continuous-media data types and significant fine-grained data parallelism.

Journal ArticleDOI
TL;DR: A real-time object structure that can flexibly yet accurately specify the temporal behavior of modeled subjects is described, which supports strong requirements-design traceability, the feasibility of thorough and cost-effective validation, and ease of maintenance.
Abstract: The market for real-time applications has grown considerably in recent years, and in response engineering methods have also improved. Today's techniques, while adequate for building moderately complex embedded applications, are inadequate for building the large, highly reliable, very complex real-time applications that are increasingly in demand. To build such large systems, engineering teams need a more uniform, integrated approach than is available today. Ideally, the development approach would make uniform the representations of both application environments and control systems as they proceed through various system engineering phases. The ideal representation (or modeling) scheme should be effective not only for abstracting system designs but also for representing the application environment. It should also be capable of manipulating logical values and temporal characteristics at varying degrees of accuracy. This ideal modeling scheme is not likely to be realized through conventional object models. Although they are natural building blocks for modular systems, conventional object models lack concrete mechanisms to represent the temporal behavior of complex, dynamic systems. This article describes a real-time object structure that can flexibly yet accurately specify the temporal behavior of modeled subjects. This approach supports strong requirements-design traceability, the feasibility of thorough and cost-effective validation, and ease of maintenance.

Journal ArticleDOI
TL;DR: The configurable computing community should focus on refining the emerging architectures, producing more effective software/hardware APIs, better tools for application development that incorporate the models of hardware reconfiguration, and effective benchmarking strategies.
Abstract: Configurable computing offers the potential of producing powerful new computing systems. Will current research overcome the dearth of commercial applicability to make such systems a reality? Unfortunately, no system to date has yet proven attractive or competitive enough to establish a commercial presence. We believe that ample opportunity exists for work in a broad range of areas. In particular, the configurable computing community should focus on refining the emerging architectures, producing more effective software/hardware APIs, better tools for application development that incorporate the models of hardware reconfiguration, and effective benchmarking strategies.

Journal ArticleDOI
TL;DR: The intelligent RAM or IRAM is proposed, which greatly increases the on-chip memory capacity by using DRAM technology instead of much less dense SRAM memory cells, and should allow cost-effective vector processors to reach performance levels much higher than those of traditional architectures.
Abstract: Members of the University of California, Berkeley, argue that the memory system will be the greatest inhibitor of performance gains in future architectures. Thus, they propose the intelligent RAM or IRAM. This approach greatly increases the on-chip memory capacity by using DRAM technology instead of much less dense SRAM memory cells. The resultant on-chip memory capacity coupled with the high bandwidths available on chip should allow cost-effective vector processors to reach performance levels much higher than those of traditional architectures. Although vector processors require explicit compilation, the authors claim that vector compilation technology is mature (having been used for decades in supercomputers), and furthermore, that future workloads will contain more heavily vectorizable components.

Journal ArticleDOI
TL;DR: Ten key causes of poor information quality, warning signs, and typical patches are described so that organisations can identify and address these problems before they have financial and legal consequences.
Abstract: Poor information quality can create chaos. Unless its root cause is diagnosed, efforts to address it are akin to patching potholes. The article describes ten key causes, warning signs, and typical patches. With this knowledge, organisations can identify and address these problems before they have financial and legal consequences.

Journal ArticleDOI
TL;DR: The PSTN network's high dependability indicates that the trade-off between dependability gains and complexity introduced by built-in self-test and recovery mechanisms can be positive.
Abstract: What makes a distributed system reliable? A study of failures in the US public switched telephone network (PSTN) shows that human intervention is one key to this large system's reliability. Software is not the weak link in the PSTN system's dependability. Extensive use of built-in self-test and recovery mechanisms in major system components (switches) contributed to software dependability and are significant design features in the PSTN. The network's high dependability indicates that the trade-off between dependability gains and complexity introduced by built-in self-test and recovery mechanisms can be positive. Likewise, the tradeoff between complex interactions and the loose coupling of system components has been positive, permitting quick human intervention in most system failures and resulting in an extremely reliable system.

Journal ArticleDOI
TL;DR: Eventually, one enterprising chip builder will deliver the first fault-tolerant microprocessor at a competitive price, and soon thereafter fault tolerance will be considered as indispensable to computers as immunity is to humans.
Abstract: After 30 years of study and practice in fault tolerance, high-confidence computing remains a costly privilege of several critical applications. It is time to explore ways to deliver high-confidence computing to all users. The speed of computing will ultimately be limited by the laws of physics, but the demand for affordable high-confidence computing will continue as long as people use computers to enhance the quality of their lives. Eventually, one enterprising chip builder will deliver the first fault-tolerant microprocessor at a competitive price, and soon thereafter fault tolerance will be considered as indispensable to computers as immunity is to humans. The remaining manufacturers will follow suit or go the way of the dinosaurs. Once again, Darwin will be proven right.

Journal ArticleDOI
TL;DR: Eco System is a cross industry effort to build a framework of frameworks, involving both e-commerce vendors and end users, and an industry roadmap and interoperability example that promotes open standards and helps technology vendors communicate with end users about product features.
Abstract: Robust electronic commerce will require several proprietary systems to interoperate. CommerceNet is proposing Eco System, a cross industry effort to build a framework of frameworks, involving both e-commerce vendors and end users. Eco System will consist of an extensible object oriented framework (class libraries, application programming interfaces and shared services) from which developers can assemble applications quickly from existing components. These applications could subsequently be reused in other applications. We are also developing a Common Business Language (CBL) that lets application agents communicate using messages and objects that model communications in the real business world. A network services architecture (protocols, APIs, and data formats) will insulate application agents from each other and from platform dependencies, while facilitating their interoperation. Functionally, Eco System fills three distinct roles. It is: a layer of middleware that facilitates agent interoperation through services such as authentication, billing, payment, and directories; an object oriented development environment that encourages the reuse of e-commerce modules (even modules that represent the product line of an entire company); and an industry roadmap and interoperability example that promotes open standards and helps technology vendors communicate with end users about product features.

Journal ArticleDOI
TL;DR: This work investigates design options for mobile user devices that are used in legally significant applications that need security for their prospective electronic commerce applications.
Abstract: The market for devices like mobile phones, multifunctional watches, and personal digital assistants is growing rapidly. Most of these mobile user devices need security for their prospective electronic commerce applications. While new technology has simplified many business and personal transactions, it has also opened the door to high-tech crime. We investigate design options for mobile user devices that are used in legally significant applications.

Journal ArticleDOI
TL;DR: The authors advocate a large, out-of-order-issue instruction window, clustered (separated) banks of functional units and hierarchical scheduling of ready instructions to provide a high-speed, implementable execution core that is capable of sustaining the necessary instruction throughput.
Abstract: Billion-transistor processors will be much as they are today, just bigger, faster and wider (issuing more instructions at once). The authors describe the key problems (instruction supply, data memory supply and an implementable execution core) that prevent current superscalar computers from scaling up to 16- or 32-instructions per issue. They propose using out-of-order fetching, multi-hybrid branch predictors and trace caches to improve the instruction supply. They predict that replicated first-level caches, huge on-chip caches and data value speculation will enhance the data supply. To provide a high-speed, implementable execution core that is capable of sustaining the necessary instruction throughput, they advocate a large, out-of-order-issue instruction window (2,000 instructions), clustered (separated) banks of functional units and hierarchical scheduling of ready instructions. They contend that the current uniprocessor model can provide sufficient performance and use a billion transistors effectively without changing the programming model or discarding software compatibility.

Journal ArticleDOI
TL;DR: This article describes the application of neural and fuzzy methods to three problems: recognition of handwritten words; recognition of numeric fields; and location of handwritten street numbers in address images.
Abstract: Handwriting recognition requires tools and techniques that recognize complex character patterns and represent imprecise, common-sense knowledge about the general appearance of characters, words and phrases. Neural networks and fuzzy logic are complementary tools for solving such problems. Neural networks, which are highly nonlinear and highly interconnected for processing imprecise information, can finely approximate complicated decision boundaries. Fuzzy set methods can represent degrees of truth or belonging. Fuzzy logic encodes imprecise knowledge and naturally maintains multiple hypotheses that result from the uncertainty and vagueness inherent in real problems. By combining the complementary strengths of neural and fuzzy approaches into a hybrid system, we can attain an increased recognition capability for solving handwriting recognition problems. This article describes the application of neural and fuzzy methods to three problems: recognition of handwritten words; recognition of numeric fields; and location of handwritten street numbers in address images.
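
A small sketch of the fuzzy side (generic fuzzy-set code with hypothetical features, not the authors' recognizer): a triangular membership function assigns a degree of truth to a measured feature, and the degrees from several features are combined to score a competing hypothesis.

```python
def triangular(x, a, b, c):
    """Degree of membership of x in a triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical features extracted for one character hypothesis, fuzzified.
aspect_ratio_is_tall = triangular(x=2.1, a=1.5, b=2.5, c=3.5)
has_single_stroke    = triangular(x=1.0, a=0.5, b=1.0, c=1.5)

# Combine degrees of truth with a t-norm (min) to score the hypothesis
# "this character is a '1'"; multiple hypotheses can coexist with
# different scores, reflecting the uncertainty in the input.
score_for_one = min(aspect_ratio_is_tall, has_single_stroke)
print(f"support for '1': {score_for_one:.2f}")
```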

Journal ArticleDOI
TL;DR: The Personal Software Process, developed by Watts Humphrey at the Software Engineering Institute, provides software engineers with a methodology for consistently and efficiently developing high quality products.
Abstract: Too often, software developers follow inefficient methods and procedures. The Personal Software Process, developed by Watts Humphrey at the Software Engineering Institute, provides software engineers with a methodology for consistently and efficiently developing high quality products. The value of PSP has been shown in three case studies-three industrial software groups have used PSP and have collected data to show its effectiveness. They are: Advanced Information Services, Inc., Motorola Paging Products Group, and Union Switch and Signal Inc. Each has trained several groups of engineers and measured the results of several projects that used PSP methods. In all cases, the projects were part of the companies' normal operations and not designed for this study. The three companies offered a variety of situations useful for demonstrating the versatility of PSP. The projects at Motorola and US&S involved software maintenance and enhancement, while those at AIS involved new product development and enhancement. Among the companies, application areas included commercial data processing, internal manufacturing support, communications product support, and real time process control. Work was done in C or C++.

Journal ArticleDOI
TL;DR: The authors investigated the effects of using formal methods to develop an air-traffic-control information system, designing the evaluation as part of overall project planning and carrying it out as software development progressed.
Abstract: Practitioners and researchers continue to seek methods and tools for improving software development processes and products. Candidate technologies promise increased productivity, better quality, lower cost, or enhanced customer satisfaction. We must test these methods and tools empirically and rigorously to determine any significant, quantifiable improvement. We tend to consider evaluation only after using the technology, which makes careful, quantitative analysis difficult if not impossible. However, when an evaluation is designed as part of overall project planning, and then carried out as software development progresses, the result can be a rich record of a tool's or technique's effectiveness. In this study, we investigated the effects of using formal methods to develop an air-traffic-control information system.

Journal ArticleDOI
TL;DR: The goals for this issue are to explore both the trends that will affect future architectures and the space of these architectures, and to make the case for a different billion-transistor architecture.
Abstract: In September 1997, Computer published a special issue on billion-transistor microprocessor architectures. Comparing that issue's predictions about the trends that would drive architectural developm...

Journal ArticleDOI
D. Dikel, D. Kane, S. Ornburn, W. Loftus, J. Wilson
TL;DR: To learn what factors determine the effective use of software architecture, the authors looked at Nortel (Northern Telecom), a company with nearly 20 years of experience developing complex software architecture for telecommunications product families and identified six principles that help reduce the complexity of an evolving family of products.
Abstract: Many organizations today are investing in software product-line architecture-for good reason: a well-executed architecture enables organizations to respond quickly to a redefined mission or to new and changing markets. It allows them to accelerate the introduction of new products and improve their quality, to reengineer legacy systems, and to manage and enhance the many product variations needed for international markets. However, technically excellent product line architectures do fail, often because they are not effectively used. Some are developed but never used; others lose value as product teams stop sharing the common architecture; still others achieve initial success, but fail to keep up with a rapidly growing product mix. Sometimes the architecture deterioration is not noticed at first, masked by what appears to be a productivity increase. To learn what factors determine the effective use of software architecture, the authors looked at Nortel (Northern Telecom), a company with nearly 20 years of experience developing complex software architecture for telecommunications product families. They identified six principles that help reduce the complexity of an evolving family of products and that support and maintain the effective use and integrity of the architecture.

Journal ArticleDOI
TL;DR: Not all failures are avoidable, but in many cases the system or system operator could have taken corrective action that would have prevented or mitigated the failure.
Abstract: Most people who use computers regularly have encountered a failure, either in the form of a software crash, disk failure, power loss, or bus error. In some instances these failures are no more than annoyances; in others they result in significant losses. The latter result will probably become more common than the former, as society’s dependence on automated systems increases. The ideal system would be perfectly reliable and never fail. This, of course, is impossible to achieve in practice: System builders have finite resources to devote to reliability and consumers will only pay so much for this feature. Over the years, the industry has used various techniques to best approximate the ideal scenario. The discipline of fault-tolerant and reliable computing deals with numerous issues pertaining to different aspects of system development, use, and maintenance. The expression “disaster waiting to happen” is often used to describe causes of failure that are seemingly well known, but have not been adequately accounted for in the system design. In these cases we need to learn from experience how to avoid failure. Not all failures are avoidable, but in many cases the system or system operator could have taken corrective action that would have prevented or mitigated the failure. The main reason we don’t prevent failures is our inability to learn from our mistakes. It often takes more than one occurrence of the same failure before corrective action is taken.

Journal ArticleDOI
Danny Lange, Y. Nakamura
TL;DR: The approach presented combines static information with actual execution information to produce views that summarize the relevant computation and focuses on reducing the search space for extracting dynamic program information and on creating visualizations that may improve a programmer's understanding of object behaviour in real world OO systems.
Abstract: Conventional program analysis and presentation techniques are insufficient when dealing with object oriented concepts, but tool developers have nevertheless found a way to obtain and visualize OO traces. The approach presented combines static information with actual execution information to produce views that summarize the relevant computation. In developing this approach, the authors focused on reducing the search space for extracting dynamic program information and on creating visualizations that may improve a programmer's understanding of object behaviour in real world OO systems. They applied the research prototype, Program Explorer, to a real project outside IBM. Although Program Explorer was originally designed for C++, a version for IBM's System Object Model (SOM) has demonstrated that the concepts are applicable to OO languages in general.

Journal ArticleDOI
TL;DR: Sync is described, a development framework that provides high-level primitives that enable programmers to create arbitrarily complex, synchronized, replicated data objects that enables applications to share changes at a granularity as small as updates to basic types and so enables better performance on low-bandwidth connections.
Abstract: Introducing the factors of wireless mobile systems into the development of collaborative applications complicates developers' lives significantly. Application frameworks targeted for coordinating wireless mobile applications simplify development. The authors describe Sync, a development framework that provides high-level primitives that enable programmers to create arbitrarily complex, synchronized, replicated data objects. Designed for wireless networks, Sync enables applications to share changes at a granularity as small as updates to basic types and so enables better performance on low-bandwidth connections.
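
The bandwidth argument can be illustrated with a toy replicated object (this is not Sync's actual API, whose primitives are richer): instead of shipping the whole object after every change, a replica logs small per-field deltas and transmits only those.

```python
class ReplicatedRecord:
    """Toy replicated object: mutations are logged as small deltas, so a
    low-bandwidth link only carries the fields that changed, not the
    whole object. (Illustrative only; Sync's real primitives differ.)"""
    def __init__(self, **fields):
        self.fields = dict(fields)
        self.pending = []          # deltas not yet sent to peers

    def set(self, key, value):
        self.fields[key] = value
        self.pending.append((key, value))

    def flush(self):
        deltas, self.pending = self.pending, []
        return deltas              # what actually crosses the wireless link

    def merge(self, deltas):
        for key, value in deltas:  # apply a peer's changes
            self.fields[key] = value

laptop = ReplicatedRecord(title="notes", body="...")
phone = ReplicatedRecord(title="notes", body="...")
laptop.set("title", "meeting notes")
phone.merge(laptop.flush())        # only ("title", ...) is transmitted
print(phone.fields["title"])       # meeting notes
```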

Journal ArticleDOI
TL;DR: The authors present a lightweight, iterative reverse-engineering technique and describe its application to Excel, a product that comprises about 1.2 million lines of C code.
Abstract: Reengineering large and complex software systems is often very costly. The article presents a reverse engineering technique and relates how a Microsoft engineer used it to aid an experimental reengineering of Excel-a product that comprises about 1.2 million lines of C code. The reflexion technique is designed to be lightweight and iterative. To use it, the user first defines a high-level structural model, then extracts a map of the source code and uses a set of computation tools to compare the two models. The approach lets software engineers effectively validate their high-level reasoning with information from the source code. The engineer in this case study-a developer with 10-plus years at Microsoft-specified and computed an initial reflexion model of Excel in a day and then spent four weeks iteratively refining it. He estimated that gaining the same degree of familiarity with the Excel source code might have taken up to two years with other available approaches. On the basis of this experience, the authors believe that the reflexion technique has practical applications.
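
The comparison step at the heart of the reflexion technique can be sketched in a few lines (a schematic illustration with made-up file and module names, not the tools used on Excel): source-level dependencies are lifted to high-level modules through a user-supplied map and then classified as convergences, divergences, or absences relative to the hypothesized model.

```python
# Hypothesized high-level model: the module-to-module dependencies the
# engineer believes exist.
model_edges = {("UI", "Engine"), ("Engine", "Storage")}

# The map from source files to high-level modules, written by the user.
module_of = {"sheet.c": "UI", "calc.c": "Engine", "file.c": "Storage"}

# Dependencies extracted from the source code (e.g., call-graph edges).
source_edges = [("sheet.c", "calc.c"), ("sheet.c", "file.c")]

# Lift source dependencies to the model level and classify them.
lifted = {(module_of[a], module_of[b]) for a, b in source_edges}
convergences = lifted & model_edges   # predicted and present
divergences  = lifted - model_edges   # present but not predicted
absences     = model_edges - lifted   # predicted but not found

print(convergences)  # {('UI', 'Engine')}
print(divergences)   # {('UI', 'Storage')}
print(absences)      # {('Engine', 'Storage')}
```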

Journal ArticleDOI
TL;DR: In developing the Patricia system, the developers had to overcome the problems of syntactically parsing natural language comments and syntactically analyzing identifiers-all prior to a semantic understanding of the comments and identifiers.
Abstract: Much object oriented code has been written without reuse in mind, making identification of useful components difficult. The Patricia (Program Analysis Tool for Reuse) system automatically identifies these components through understanding comments and identifiers. To understand a program, Patricia uses a unique heuristic approach, deriving information from the linguistic aspects of comments and identifiers and from other nonlinguistic aspects of OO code, such as a class hierarchy. In developing the Patricia system, we had to overcome the problems of syntactically parsing natural language comments and syntactically analyzing identifiers-all prior to a semantic understanding of the comments and identifiers. Another challenge was the semantic understanding phase, when the organization of the knowledge base and an inferencing scheme were developed.
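
One concrete sub-step, breaking identifiers into candidate words before any semantic analysis, can be sketched as follows (generic code, not Patricia's actual analyzer):

```python
import re

def split_identifier(name):
    """Break an identifier into candidate words: underscores and
    camel-case boundaries become word breaks, ready for lookup in a
    lexicon or knowledge base."""
    words = []
    for part in re.split(r"_+", name):
        words += re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+", part)
    return [w.lower() for w in words if w]

print(split_identifier("getCustomerAddress"))   # ['get', 'customer', 'address']
print(split_identifier("HTMLParser_init"))      # ['html', 'parser', 'init']
```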