
Showing papers on "Software" published in 1993


04 Oct 1993
TL;DR: In this paper, the authors provide an overview of economic analysis techniques and their applicability to software engineering and management, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation.
Abstract: This paper summarizes the current state of the art and recent trends in software engineering economics. It provides an overview of economic analysis techniques and their applicability to software engineering and management. It surveys the field of software cost estimation, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation.

5,899 citations
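
The survey above covers algorithmic cost models; as a concrete illustration of what such a model looks like, here is a minimal sketch of a COCOMO-style effort equation. The coefficients are the published Basic COCOMO "organic mode" values, and treating them as usable defaults (rather than locally calibrated values) is an assumption of this sketch, not a claim from the paper.

    # Minimal sketch of a COCOMO-style algorithmic cost model.
    # Coefficients are the classic Basic COCOMO "organic mode" values;
    # real estimates would be calibrated against local project data.

    def cocomo_effort(kloc: float, a: float = 2.4, b: float = 1.05,
                      adjustment: float = 1.0) -> float:
        """Estimated effort in person-months for `kloc` thousand delivered
        source lines, scaled by an effort adjustment factor derived from
        cost drivers."""
        return a * (kloc ** b) * adjustment

    def cocomo_schedule(effort_pm: float, c: float = 2.5, d: float = 0.38) -> float:
        """Estimated development time in months from effort (Basic COCOMO)."""
        return c * (effort_pm ** d)

    if __name__ == "__main__":
        effort = cocomo_effort(32)             # a hypothetical 32 KLOC project
        print(f"effort   ~ {effort:.1f} person-months")
        print(f"schedule ~ {cocomo_schedule(effort):.1f} months")

The exponent greater than one is the point such models make: effort grows faster than code size, which is why calibration and the research issues the paper surveys matter.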


Book
18 Feb 1993
TL;DR: In this paper, the authors present a set of programs that summarize data with histograms and other graphics, calculate measures of spatial continuity, provide smooth least-squares-type maps, and perform stochastic spatial simulation.
Abstract: Thirty-seven programs are presented that summarize data with histograms and other graphics, calculate measures of spatial continuity, provide smooth least-squares-type maps, and perform stochastic spatial simulation.

4,301 citations
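
Among the "measures of spatial continuity" mentioned above, the experimental semivariogram is the central one; the sketch below computes it for scattered 1-D data with plain numpy. It is only an illustration of the kind of calculation such programs perform, not a reproduction of any of the thirty-seven programs, and the synthetic data are invented.

    # Illustrative experimental semivariogram for scattered 1-D data:
    # gamma(h) = mean of 0.5 * (z_i - z_j)^2 over pairs whose separation
    # falls in the lag bin around h.
    import numpy as np

    def semivariogram(x, z, lag_width, n_lags):
        x, z = np.asarray(x, float), np.asarray(z, float)
        dist = np.abs(x[:, None] - x[None, :])           # pairwise separations
        sqdiff = 0.5 * (z[:, None] - z[None, :]) ** 2    # semivariance per pair
        gammas = []
        for k in range(1, n_lags + 1):
            lo, hi = (k - 0.5) * lag_width, (k + 0.5) * lag_width
            mask = (dist >= lo) & (dist < hi)
            gammas.append(sqdiff[mask].mean() if mask.any() else np.nan)
        return np.arange(1, n_lags + 1) * lag_width, np.array(gammas)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = np.sort(rng.uniform(0, 100, 200))
        z = np.sin(x / 15.0) + 0.2 * rng.standard_normal(x.size)
        for h, g in zip(*semivariogram(x, z, lag_width=5.0, n_lags=10)):
            print(f"h = {h:5.1f}   gamma = {g:.3f}")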


Book
01 Jan 1993
TL;DR: The third edition of the book, as mentioned in this paper, has been updated with new pedagogical features, new information and challenging exercises for the advanced student, and a complete index covering the material in both the book and the CD.
Abstract: What's New in the Third Edition, Revised Printing: the same great book gets better! This revised printing features all of the original content along with these additional features: Appendix A (Assemblers, Linkers, and the SPIM Simulator) has been moved from the CD-ROM into the printed book, and corrections and bug fixes have been incorporated. Third Edition features. New pedagogical features: Understanding Program Performance analyzes key performance issues from the programmer's perspective; Check Yourself questions help students assess their understanding of key points of a section; Computers in the Real World illustrates the diversity of applications of computing technology beyond traditional desktops and servers; For More Practice provides students with additional problems they can tackle; In More Depth presents new information and challenging exercises for the advanced student. New reference features: highlighted glossary terms and definitions appear on the book page, as bold-faced entries in the index, and as a separate and searchable reference on the CD; a complete index of the material in the book and on the CD appears in the printed index, and the CD includes a fully searchable version of the same index; Historical Perspectives and Further Readings have been updated and expanded to include the history of software R&D; the CD-Library provides materials collected from the web which directly support the text. In addition to thoroughly updating every aspect of the text to reflect the most current computing technology, the third edition uses the standard 32-bit MIPS32 as the primary teaching ISA, presents the assembler-to-HLL translations in both C and Java, and highlights the latest developments in architecture in Real Stuff sections: Intel IA-32, PowerPC 604, Google's PC cluster, Pentium P4, the SPEC CPU2000 benchmark suite for processors, the SPEC Web99 benchmark for web servers, the EEMBC benchmark for embedded systems, the AMD Opteron memory hierarchy, and AMD vs. IA-64. New support for distinct course goals: many of the adopters who have used our book throughout its two editions are refining their courses with a greater hardware or software focus.
We have provided new material to support these course goals. New material to support a Hardware Focus: using logic design conventions, designing with hardware description languages, advanced pipelining, designing with FPGAs, HDL simulators and tutorials, and Xilinx CAD tools. New material to support a Software Focus: how compilers work, how to optimize compilers, how to implement object-oriented languages, a MIPS simulator and tutorial, and history sections on programming languages, compilers, operating systems, and databases. On the CD: NEW: a search function to find content on both the CD-ROM and the printed text; CD-Bars: full-length sections that are introduced in the book and presented on the CD; CD-Appendixes: Appendices B-D; CD-Library: materials collected from the web which directly support the text; CD-Exercises: For More Practice provides exercises and solutions for self-study, and In More Depth presents new information and challenging exercises for the advanced or curious student; Glossary: terms that are defined in the text are collected in this searchable reference; Further Reading: references are organized by the chapter they support; Software: HDL simulators, MIPS simulators, and FPGA design tools; Tutorials: SPIM, Verilog, and VHDL; Additional Support: processor models, labs, homeworks, and an index covering the book and CD contents. Instructor support is provided on textbooks.elsevier.com: solutions to all the exercises, figures from the book in a number of formats, lecture slides prepared by the authors and other instructors, and lecture notes.

1,521 citations


Proceedings ArticleDOI
01 Dec 1993
TL;DR: It is demonstrated that for frequently communicating modules, implementing fault isolation in software rather than hardware can substantially improve end-to-end application performance.
Abstract: One way to provide fault isolation among cooperating software modules is to place each in its own address space. However, for tightly-coupled modules, this solution incurs prohibitive context switch overhead. In this paper, we present a software approach to implementing fault isolation within a single address space. Our approach has two parts. First, we load the code and data for a distrusted module into its own fault domain, a logically separate portion of the application's address space. Second, we modify the object code of a distrusted module to prevent it from writing or jumping to an address outside its fault domain. Both these software operations are portable and programming language independent. Our approach poses a tradeoff relative to hardware fault isolation: substantially faster communication between fault domains, at a cost of slightly increased execution time for distrusted modules. We demonstrate that for frequently communicating modules, implementing fault isolation in software rather than hardware can substantially improve end-to-end application performance.

1,370 citations
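
The sandboxing step described above forces every store or jump target of a distrusted module back into its fault domain. The snippet below mimics that address arithmetic on plain integers; it is a conceptual model only (the paper's actual technique rewrites object code), and the segment size and addresses are invented.

    # Conceptual model of software-fault-isolation address sandboxing.
    # A fault domain is a 2^n-byte region identified by its upper bits
    # (the segment id); forcing an address into the domain is two cheap
    # bit operations, which is why rewritten stores stay inexpensive.

    SEGMENT_BITS = 24                       # hypothetical 16 MB fault domain
    SEGMENT_MASK = (1 << SEGMENT_BITS) - 1  # keeps only the offset bits

    def sandbox(addr: int, segment_base: int) -> int:
        """Force `addr` to lie inside the fault domain starting at
        `segment_base` (assumed aligned to 2^SEGMENT_BITS)."""
        return segment_base | (addr & SEGMENT_MASK)

    if __name__ == "__main__":
        base = 0x0300_0000                  # hypothetical domain base
        inside = base + 0x1234
        outside = 0x0800_4321               # a store that would escape the domain
        print(hex(sandbox(inside, base)))   # unchanged: 0x3001234
        print(hex(sandbox(outside, base)))  # redirected into the domain: 0x3004321

An escaping reference is not trapped but silently redirected inside the domain, which preserves isolation while keeping the per-store cost to a couple of instructions.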


Journal ArticleDOI
Wei Li, Sallie M. Henry
TL;DR: This research concentrates on several object-oriented software metrics and the validation of these metrics with maintenance effort in two commercial systems.

1,111 citations


Journal ArticleDOI
TL;DR: The authors present a software-oriented approach to hardware-software partitioning which avoids restrictions on the software semantics as well as an iterative partitioning process based on hardware extraction controlled by a cost function.
Abstract: The authors present a software-oriented approach to hardware-software partitioning which avoids restrictions on the software semantics as well as an iterative partitioning process based on hardware extraction controlled by a cost function. This process is used in Cosyma, an experimental cosynthesis system for embedded controllers. As an example, the extraction of coprocessors for loops is demonstrated. Results are presented for several benchmark designs.

644 citations
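
The general idea of cost-function-driven hardware extraction can be illustrated with a toy greedy loop: starting from an all-software partition, repeatedly move to hardware the task that most improves a cost trading execution time against area. This is only a schematic of the idea, not the Cosyma algorithm, and the task names, timings, weights, and deadline below are invented.

    # Toy hardware/software partitioning by greedy hardware extraction.
    # Each task has a software time, a hardware time, and a hardware area;
    # the cost function penalises missing a deadline and spending area.

    TASKS = {                                  # hypothetical numbers
        "fir":  {"sw": 90, "hw": 10, "area": 40},
        "fft":  {"sw": 70, "hw": 15, "area": 55},
        "ctrl": {"sw": 20, "hw": 18, "area": 30},
    }
    DEADLINE, AREA_WEIGHT = 120, 0.5

    def cost(hw_set):
        time = sum(t["hw"] if n in hw_set else t["sw"] for n, t in TASKS.items())
        area = sum(TASKS[n]["area"] for n in hw_set)
        return max(0, time - DEADLINE) * 10 + AREA_WEIGHT * area

    def partition():
        hw = set()
        while len(hw) < len(TASKS):
            best_cost, best_task = min((cost(hw | {n}), n)
                                       for n in TASKS if n not in hw)
            if best_cost >= cost(hw):
                break                          # no extraction improves the cost
            hw.add(best_task)
        return hw

    if __name__ == "__main__":
        hw = partition()
        print("to hardware:", sorted(hw), " cost:", cost(hw))

With these made-up numbers the loop stops after extracting a single coprocessor, once the deadline penalty is gone and further extraction would only add area.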


Patent
19 Jan 1993
TL;DR: In this article, the authors propose a method to detect undesirable software entities, such as a computer virus, worm, or Trojan Horse, in a data processing system by detecting anomalous behavior that may indicate the presence of an undesirable software entity.
Abstract: A method includes the following component steps, or some functional subset of these steps: (A) periodic monitoring of a data processing system (10) for anomalous behavior that may indicate the presence of an undesirable software entity such as a computer virus, worm, or Trojan Horse; (B) automatic scanning for occurrences of known types of undesirable software entities and taking remedial action if they are discovered; (C) deploying decoy programs to capture samples of unknown types of computer viruses; (D) identifying machine code portions of the captured samples which are unlikely to vary from one instance of the virus to another; (E) extracting an identifying signature from the executable code portion and adding the signature to a signature database; (F) informing neighboring data processing systems on a network of an occurrence of the undesirable software entity; and (G) generating a distress signal, if appropriate, so as to call upon an expert to resolve difficult cases. A feature of this invention is the automatic execution of the foregoing steps in response to a detection of an undesired software entity, such as a virus or a worm, within a data processing system. The automatic extraction of the identifying signature, the addition of the signature to a signature data base, and the immediate use of the signature by a scanner provides protection from subsequent infections of the system, and also a network of systems, by the same or an altered form of the undesirable software entity.

585 citations
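
Steps (D)-(E) above extract an identifying byte signature that is unlikely to vary between instances and add it to a database used by a scanner. The sketch below shows just that scanning step in its simplest form (substring search over file contents); the signature names and byte patterns are made up for illustration and are not from the patent.

    # Minimal signature scanner in the spirit of steps (B) and (E) above:
    # the database maps a name to a byte pattern believed to be invariant
    # across instances of the undesirable software entity.
    SIGNATURES = {                             # hypothetical patterns
        "demo-virus-a": bytes.fromhex("deadbeef90909090"),
        "demo-worm-b":  b"\x0f\x31\xeb\xfe",
    }

    def scan_bytes(data: bytes):
        """Return the names of all signatures found in `data`."""
        return [name for name, pattern in SIGNATURES.items() if pattern in data]

    def scan_file(path: str):
        with open(path, "rb") as fh:
            return scan_bytes(fh.read())

    if __name__ == "__main__":
        sample = b"\x00" * 64 + bytes.fromhex("deadbeef90909090") + b"\x00" * 64
        print(scan_bytes(sample))              # ['demo-virus-a']

The interesting part of the patent is everything around this loop: choosing machine-code portions that do not vary between instances, and feeding newly extracted signatures back into the database automatically.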


Journal ArticleDOI
TL;DR: The authors demonstrate the feasibility of synthesizing heterogeneous systems by using timing constraints to delegate tasks between hardware and software so that performance requirements can be met.
Abstract: As system design grows increasingly complex, the use of predesigned components, such as general-purpose microprocessors, can simplify synthesized hardware. While the problems in designing systems that contain processors and application-specific integrated circuit chips are not new, computer-aided synthesis of such heterogeneous or mixed systems poses unique problems. The authors demonstrate the feasibility of synthesizing heterogeneous systems by using timing constraints to delegate tasks between hardware and software so that performance requirements can be met. System functionality is captured using the HardwareC hardware description language. The synthesis of an Ethernet-based network coprocessor is discussed as an example.

556 citations


Book
01 Jan 1993
TL;DR: The aim of this text is to teach the student what discrete event systems are about and how they differ from "classical systems"; describe the differences between various modelling approaches; and show how to simulate DES using commercially-available software or from first principles.
Abstract: The aim of this text is to teach the student what discrete event systems (DES) are about and how they differ from "classical systems"; describe the differences between various modelling approaches; and show how to simulate DES using commercially-available software or from first principles.

422 citations
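
Simulating a DES "from first principles", as the text puts it, usually comes down to a time-ordered event list and handlers that schedule future events. The sketch below runs a single-server queue that way; the arrival and service parameters are arbitrary and the example is not taken from the book.

    # First-principles discrete event simulation of a single-server queue:
    # a time-ordered event list, and handlers that schedule future events.
    import heapq, random

    def simulate(arrival_rate=0.8, service_rate=1.0, horizon=10_000.0, seed=1):
        rng = random.Random(seed)
        events = [(rng.expovariate(arrival_rate), "arrival")]   # (time, kind)
        queue_len, busy = 0, False
        area, last_t = 0.0, 0.0            # time-integral of the queue length
        while events:
            t, kind = heapq.heappop(events)
            if t > horizon:
                break
            area += queue_len * (t - last_t)
            last_t = t
            if kind == "arrival":
                heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
                if busy:
                    queue_len += 1
                else:
                    busy = True
                    heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
            else:                          # departure
                if queue_len > 0:
                    queue_len -= 1
                    heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
                else:
                    busy = False
        return area / last_t               # time-averaged number waiting

    if __name__ == "__main__":
        print(f"mean number waiting ~ {simulate():.2f}")

With exponential interarrival and service times the result can be checked against the M/M/1 formula, which is exactly the kind of comparison between modelling approaches the text is after.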


Journal ArticleDOI
TL;DR: This work affirms that the quantification of life-critical software reliability is infeasible using statistical methods, whether these methods are applied to standard software or fault-tolerant software.
Abstract: This work affirms that the quantification of life-critical software reliability is infeasible using statistical methods, whether these methods are applied to standard software or fault-tolerant software. The classical methods of estimating reliability are shown to lead to exorbitant amounts of testing when applied to life-critical software. Reliability growth models are examined and also shown to be incapable of overcoming the need for excessive amounts of testing. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultrareliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multiversion software experiments support this affirmation.

396 citations
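
The scale of the problem can be made concrete with a standard zero-failure testing calculation (a textbook illustration, not reproduced from this paper): assume a constant failure rate lambda, run t hours of failure-free testing, and ask what rate can be ruled out with confidence 1 - alpha.

    P(\text{no failure in } t \text{ hours}) = e^{-\lambda t},
    \qquad
    e^{-\lambda t} \le \alpha
    \;\Longrightarrow\;
    t \ \ge\ \frac{\ln(1/\alpha)}{\lambda}

    \text{For } \lambda = 10^{-9}\,\mathrm{hr}^{-1},\ \alpha = 0.01:
    \quad t \ \ge\ \ln(100)\times 10^{9}
    \ \approx\ 4.6\times 10^{9}\ \text{hours}
    \ \approx\ 5\times 10^{5}\ \text{years}

In other words, demonstrating an ultrareliability-level failure rate by testing alone would take on the order of half a million years of failure-free operation, which is the sense in which the required amounts of testing are called exorbitant.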


Book
04 Oct 1993
Abstract: The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.

Journal ArticleDOI
01 Jan 1993
TL;DR: It is shown how top-down decompositions of a subject system can be (re)constructed via bottom-up subsystem composition, which involves identifying groups of building blocks using composition operations based on software engineering principles such as low coupling and high cohesion.
Abstract: Reverse-engineering is the process of extracting system abstractions and design information out of existing software systems. This process involves the identification of software artefacts in a particular subject system, the exploration of how these artefacts interact with one another, and their aggregation to form more abstract system representations that facilitate program understanding. This paper describes our approach to creating higher-level abstract representations of a subject system, which involves the identification of related components and dependencies, the construction of layered subsystem structures, and the computation of exact interfaces among subsystems. We show how top-down decompositions of a subject system can be (re)constructed via bottom-up subsystem composition. This process involves identifying groups of building blocks (e.g., variables, procedures, modules, and subsystems) using composition operations based on software engineering principles such as low coupling and high cohesion. The result is an architecture of layered subsystem structures. The structures are manipulated and recorded using the Rigi system, which consists of a distributed graph editor and a parsing system with a central repository. The editor provides graph filters and clustering operations to build and explore subsystem hierarchies interactively. The paper concludes with a detailed, step-by-step analysis of a 30-module software system using Rigi.
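
A very small model of the bottom-up composition step described above: repeatedly merge the two clusters that are most strongly connected relative to their size, a crude stand-in for "low coupling and high cohesion". This is a schematic of the principle only, not the Rigi algorithm, and the module names and dependency weights are invented.

    # Toy bottom-up subsystem composition over a weighted dependency graph.
    from itertools import combinations

    WEIGHTS = {                                   # hypothetical reference counts
        frozenset({"parser", "lexer"}): 4, frozenset({"parser", "ast"}): 3,
        frozenset({"lexer", "ast"}): 2,
        frozenset({"ui", "widgets"}): 4, frozenset({"ui", "events"}): 3,
        frozenset({"widgets", "events"}): 2,
        frozenset({"parser", "ui"}): 1,           # weak cross-subsystem link
    }

    def coupling(a, b):
        """Total weight of edges crossing between clusters a and b."""
        return sum(w for e, w in WEIGHTS.items()
                   if len(e & a) == 1 and len(e & b) == 1)

    def compose(modules, target=2):
        clusters = [frozenset([m]) for m in sorted(modules)]
        while len(clusters) > target:
            a, b = max(combinations(clusters, 2),
                       key=lambda p: coupling(*p) / (len(p[0]) * len(p[1])))
            clusters = [c for c in clusters if c not in (a, b)] + [a | b]
        return clusters

    if __name__ == "__main__":
        modules = {m for e in WEIGHTS for m in e}
        for cluster in compose(modules):
            print(sorted(cluster))

On this invented graph the strongly connected compiler-like and UI-like modules end up in two subsystems joined only by the single weak dependency, which is the layered structure such composition operations aim to recover.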

Journal ArticleDOI
TL;DR: Prior empirical evidence linking software complexity to software maintenance costs is relatively weak, and several researchers have noted that such results must be applied cautiously to the large-scale commercial application systems that account for most software maintenance expenditures.
Abstract: While the link between the difficulty in understanding computer software and the cost of maintaining it is appealing, prior empirical evidence linking software complexity to software maintenance costs is relatively weak [21]. Many of the attempts to link software complexity to maintainability are based on experiments involving small pieces of code, or are based on analysis of software written by students. Such evidence is valuable, but several researchers have noted that such results must be applied cautiously to the large-scale commercial application systems that account for most software maintenance expenditures [13,17].

Patent
14 Dec 1993
TL;DR: In this paper, the authors propose a method, system and program for testing programmatic interfaces by systematically exploring valid call sequences defined by a collection of subroutines with data.
Abstract: A method, system and program for testing programmatic interfaces by systematically exploring valid call sequences defined by a collection of subroutines with a collection of data. The subject invention does not explicitly write unit tests. Instead, it provides the tools to develop rules which model and verify the correct operation of software while leaving the complexities of test selection, execution, sequencing, coverage, and verification to an automated system. The number of test scenarios generated by the system depends upon cumulative execution time and is reliably independent of human effort.
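
The idea of exploring valid call sequences under rules, rather than hand-writing unit tests, can be pictured with a toy driver: the rules say which call is legal in the current model state, and the driver walks random legal sequences and checks invariants against a reference model. The stack API, rules, and sequence lengths below are invented for illustration and are not from the patent.

    # Toy model-based test generation: explore random *valid* call
    # sequences against a reference model instead of hand-written tests.
    import random

    class Stack:                       # the "system under test"
        def __init__(self): self._items = []
        def push(self, x): self._items.append(x)
        def pop(self): return self._items.pop()
        def size(self): return len(self._items)

    RULES = {                          # call -> precondition on the model state
        "push": lambda model: True,
        "pop":  lambda model: len(model) > 0,
    }

    def run_sequence(length, seed):
        rng, sut, model = random.Random(seed), Stack(), []
        for step in range(length):
            call = rng.choice([c for c, pre in RULES.items() if pre(model)])
            if call == "push":
                value = rng.randint(0, 99)
                sut.push(value); model.append(value)
            else:
                assert sut.pop() == model.pop(), f"pop mismatch at step {step}"
            assert sut.size() == len(model), f"size mismatch at step {step}"

    if __name__ == "__main__":
        for seed in range(100):        # coverage grows with execution time
            run_sequence(length=200, seed=seed)
        print("100 random valid call sequences checked")

The number of scenarios checked scales with how long the driver runs, not with how many tests a person writes, which is the property the abstract emphasises.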

Journal ArticleDOI
TL;DR: The structure of the mm-array database and the implementation of a data analysis program are described, both of which make extensive use of Sybase, a commercial database management system with application development software.
Abstract: A relational database management system has been implemented on the Caltech millimeter-wave array for both real-time astronomical engineering data and post-processing calibration and analysis. This system provides high storage-efficiency for the data and on-line access to data from multiple observing seasons. The ability to access easily the full database enables more accurate calibration of the raw data and greatly facilitates the calibration process. In this article we describe both the structure of the mm-array database and the implementation of a data analysis program, both of which make extensive use of Sybase, a commercial database management system with application development software. This use of relational database technology in real-time astronomical data storage and calibration may serve as a prototype for similar systems at other observatories.

Patent
Stephen M. Platt1
22 Dec 1993
TL;DR: In this article, a method for remote installation of software over a computer network allows a user to interactively select each remote computer system for software installation, or to provide a file containing a list of all remote computer systems.
Abstract: A method for remote installation of software over a computer network allows a user to interactively select each remote computer system for software installation, or to provide a file containing a list of all remote computer systems. Before attempting to install the software, the method ensures that the remote system can be reached through the network, that the remote system has the capability of running processes remotely, that the remote system has all the commands necessary to perform the installation, that the remote system has the correct hardware and software to support the installation, and that sufficient disk space exists on the remote computer system for the installation. The method then combines all files that are being remotely installed into a single data stream, sends this single data stream over the network to the remote computer system, and separates the data stream into the original files on the remote system.
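
The final step above, combining all files into a single data stream, sending it, and separating it back into the original files, can be sketched with a simple length-prefixed format. The patent does not specify this encoding; the format and file names below are hypothetical and exist only to show the round trip.

    # Sketch of "combine files into one stream, then split them back":
    # each entry is [name length][name][data length][data].
    import io, struct

    def pack(files: dict) -> bytes:
        out = io.BytesIO()
        for name, data in files.items():
            encoded = name.encode()
            out.write(struct.pack(">I", len(encoded))); out.write(encoded)
            out.write(struct.pack(">I", len(data)));    out.write(data)
        return out.getvalue()

    def unpack(stream: bytes) -> dict:
        files, buf = {}, io.BytesIO(stream)
        while True:
            header = buf.read(4)
            if not header:
                return files
            name = buf.read(struct.unpack(">I", header)[0]).decode()
            (size,) = struct.unpack(">I", buf.read(4))
            files[name] = buf.read(size)

    if __name__ == "__main__":
        payload = {"bin/tool": b"\x7fELF...", "etc/tool.conf": b"port=7070\n"}
        assert unpack(pack(payload)) == payload
        print("round-trip ok:", list(unpack(pack(payload))))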

Patent
19 Jan 1993
TL;DR: In this article, a system is described for inserting code markers that produce indications, observable externally to the microprocessor on which the software runs, of the occurrence of events during the execution of the software.
Abstract: A system (Figure 10) for inserting code markers for observing indications (external to the microprocessor upon which the software operates) of the occurrence of an event in the execution of the software. Additional instructions or markers are added to the software to be debugged to produce simple, encoded, memory references to otherwise unused memory or I/O locations that will always be visible to a logic analyzer as bus cycles. Although the code markers cause a minimal intrusion in the underlying software, they make tracing events by a conventional logic analyzer much simpler and allow for performance evaluations in manners not heretofore possible. In particular, the inserted code markers provide a method of dynamically extracting information from a running host or real-time "black box" embedded system (902) under test using simple low intrusion print statements, encoded I/O writes on procedure entries and exits, and/or an interface to service calls and the like which writes out the passed parameters. Generally, the code markers are inserted at compile time or interactively during the debug session to make visible critical points in the code execution, such as function calls, task creation, semaphore operations and other resource usage so as to speed isolation of problems at test points during debugging. Performance analysis and event analysis use the code markers of the invention to cut through the ambiguities of microprocessor prefetch and cache operations. Because of these features, the invention is particularly advantageous for use by software design teams developing complex embedded host or real-time operating systems using multi-task operating systems and/or object oriented systems.

Journal ArticleDOI
TL;DR: A behavioral model of a class of mixed hardware-software systems is presented and a codesign methodology for such systems is defined.
Abstract: A behavioral model of a class of mixed hardware-software systems is presented. A codesign methodology for such systems is defined. The methodology includes hardware-software partitioning, behavioral synthesis, software compilation, and demonstration on a testbed consisting of a commercial central processing unit (CPU), field-programmable gate arrays, and programmable interconnections. Design examples that illustrate how certain characteristics of system behavior and constraints suggest hardware or software implementation are presented.


Book
28 Mar 1993
TL;DR: This guide introduces human-computer interaction (HCI), applying principles from psychology to the development and evaluation of interactive computer systems.
Abstract: About this Guide. Who Is This Guide For? How to Use This Guide. 1. Introduction to HCI. 2. The Human Element: Applying Psychology. 3. System Development. 4. System and Interface Features. 5. Software Tools and Prototyping. 6. Evaluation. 7. Future Trends.

Patent
22 Dec 1993
TL;DR: In this paper, a secure software rental system is described which enables a user in a remote location, using a personal computer and a modem, to connect to a central rental facility, transfer application software from the central rental facility to the remote computer, and execute the application software on the remote computer while electronically connected to the central rental facility.
Abstract: A system is disclosed for providing secure access and execution of application software stored on a first computer by a second computer using a communication device while a communication link is maintained between the first and second computers. More specifically, a secure software rental system is disclosed. The system enables a user in a remote location using a personal computer and a modem to connect to a central rental facility, transfer application software from the central rental facility to the remote computer, and execute the application software on the remote computer while electronically connected to the central rental facility. When the communication link between the central rental facility and the remote computer is interrupted or terminated, the application software no longer executes on the remote computer. This is accomplished by integrating header software with the application software. The application software stored on the central rental facility is integrated with the header software to provide a security feature. The use of header software allows the user to execute the application software only while the user is electronically connected to the central rental facility continuously. This prevents the user from copying the application software to a storage device of the remote computer, and subsequently executing the application software after interrupting or terminating the communications link between the central rental facility and the remote computer.

Patent
James W. Moore1
27 Oct 1993
TL;DR: In this article, a system and method for providing a reuser of a software reuse library with an indication of whether a software component from the reuse library is authentic and whether or not the software component has been modified.
Abstract: Disclosed is a system and method for providing a reuser of a software reuse library with an indication of whether or not a software component from the reuse library is authentic and whether or not the software component has been modified. The system and method disclosed provide a reuser with assurance that the software component retrieved was placed in the reuse library by the original publisher and has not been modified by a third party. The system and method disclosed use a hybrid cryptographic technique that combines a conventional or private key algorithm with a public key algorithm.
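
One way to picture the check described above: the publisher attaches a cryptographic tag to a digest of the component, and the reuser recomputes the digest and verifies the tag before trusting it. The sketch below uses an HMAC purely to keep the example self-contained; the patented scheme combines a private-key algorithm with a public-key signature, so this is a simplification, and the key and component bytes are invented.

    # Simplified integrity/authenticity check for a reusable component.
    import hashlib, hmac

    PUBLISHER_KEY = b"demo-publisher-key"        # hypothetical key material

    def publish(component: bytes) -> bytes:
        """Return the tag stored in the reuse library next to the component."""
        return hmac.new(PUBLISHER_KEY, hashlib.sha256(component).digest(),
                        hashlib.sha256).digest()

    def verify(component: bytes, tag: bytes) -> bool:
        """True only if the component is unmodified and tagged by the publisher."""
        return hmac.compare_digest(publish(component), tag)

    if __name__ == "__main__":
        code = b"def sort(xs): return sorted(xs)\n"
        tag = publish(code)
        print(verify(code, tag))                     # True
        print(verify(code + b"# tampered\n", tag))   # False

A public-key signature would let any reuser verify without holding a secret, which is the property the hybrid technique in the patent is after.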

Journal ArticleDOI
TL;DR: The mechanisms a process language should possess in order to support changes are discussed, along with how the proposed mechanisms can be used to model different policies for changing a software process model.
Abstract: Software processes are long-lived entities. Careful design and thorough validation of software process models are necessary to ensure the quality of the process. They do not prevent, however, process models from undergoing change. Change requests may occur in the context of reuse, i.e. statically, in order to support software process model customization. They can also occur dynamically, while software process models are being executed, in order to support timely reaction as data are gathered from the field during process enactment. We discuss the mechanisms a process language should possess in order to support changes. We illustrate the solution adopted in the context of the SPADE environment and discuss how the proposed mechanisms can be used to model different policies for changing a software process model.


Proceedings ArticleDOI
01 Sep 1993
TL;DR: It is shown that memory bandwidth is the primary limitation in performance of the decoder, not the computational complexity of the inverse discrete cosine transform as is commonly thought.
Abstract: The design and implementation of a software decoder for MPEG video bitstreams is described. The software has been ported to numerous platforms including PCs, workstations, and mainframe computers. Performance comparisons are given for several different bitstreams and platforms including a unique metric devised to compare price/performance across different platforms (percentage of required bit rate per dollar). We also show that memory bandwidth is the primary limitation in performance of the decoder, not the computational complexity of the inverse discrete cosine transform as is commonly thought.

Proceedings ArticleDOI
25 Apr 1993
TL;DR: ROCLAB is a public domain software package written in Microsoft QuickBASIC for PC microprocessors that computes ROC functions and their useful derived features for discrete and fuzzy class membership data.
Abstract: Receiver operating characteristic (ROC) methodology evaluates how well a decision strategy classifies retrospective dichotomous or fuzzy events. It also provides a rational basis for designing decision strategies for classifying prospective events. ROCLAB is a public domain software package written in Microsoft QuickBASIC for PC microprocessors that computes ROC functions and their useful derived features for discrete and fuzzy class membership data. Decision strategies that account for uncertainties related to prevalence, false classification costs, and fuzzy class membership are easily constructed with ROCLAB. ROC methodology is explained and ROCLAB features are demonstrated with examples from clinical medicine.
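
The core of what an ROC package computes for discrete class membership can be shown in a few lines: sweep a threshold over classifier scores, record true- and false-positive rates, and integrate for the area under the curve. This is not ROCLAB code, and the scores and labels below are made up.

    # Minimal ROC curve and AUC for dichotomous outcomes (illustrative only).
    def roc_points(scores, labels):
        """Return (fpr, tpr) pairs, one per threshold, scores sorted descending."""
        pairs = sorted(zip(scores, labels), reverse=True)
        pos = sum(labels); neg = len(labels) - pos
        tp = fp = 0
        points = [(0.0, 0.0)]
        for _, label in pairs:
            if label:
                tp += 1
            else:
                fp += 1
            points.append((fp / neg, tp / pos))
        return points

    def auc(points):
        """Trapezoidal area under the ROC curve."""
        return sum((x2 - x1) * (y1 + y2) / 2
                   for (x1, y1), (x2, y2) in zip(points, points[1:]))

    if __name__ == "__main__":
        scores = [0.9, 0.8, 0.7, 0.55, 0.5, 0.4, 0.3, 0.2]   # classifier outputs
        labels = [1,   1,   0,   1,    0,   1,   0,   0]     # true classes
        print(f"AUC = {auc(roc_points(scores, labels)):.3f}")

Prevalence, misclassification costs, and fuzzy membership then enter as weights on these same counts, which is where a package like ROCLAB adds value over the bare curve.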

Patent
29 Jan 1993
TL;DR: In this paper, the distributed software contains a plurality of entitlement verification triggers; each trigger is a single machine instruction in the object code that identifies the product number of the software module.
Abstract: Software is distributed without entitlement to run, while a separately distributed encrypted entitlement key enables execution of the software. The key includes the serial number of the computer for which the software is licensed, together with a plurality of entitlement bits indicating which software modules are entitled to run on the machine. A secure decryption mechanism contained on the computer fetches its serial number and uses it as a key to decrypt the entitlement information, which is then stored in a product lock table in memory. The distributed software contains a plurality of entitlement verification triggers. Each trigger is a single machine instruction in the object code, identifying a product number of the software module. When a trigger is encountered during execution, the computer checks the product lock table entry corresponding to the product number of the software. If the product is entitled to run, execution continues normally; otherwise execution is aborted. Because this verification involves only a single machine instruction, it can be done with virtually no impact to overall system performance. As a result, it is possible to place a substantial number of such entitlement verification triggers in the object code, making it virtually impossible for someone to alter the code by "patching" the triggers. The triggering instruction may alternatively perform some useful work in parallel with entitlement verification.
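
The runtime side of the scheme above, decrypting the entitlement key once, filling a product lock table, and having each trigger do a single cheap lookup, can be pictured as follows. The one-byte "decryption", serial number, and product numbers are invented placeholders for the secure mechanism the patent describes, so this is a conceptual sketch only.

    # Sketch of entitlement verification against a product lock table.

    MACHINE_SERIAL = 0x5A                  # hypothetical machine serial byte

    def build_lock_table(encrypted_key: bytes) -> set:
        """'Decrypt' the entitlement key with the machine serial and return
        the set of entitled product numbers (one byte per product here)."""
        return {b ^ MACHINE_SERIAL for b in encrypted_key}

    def check_entitlement(lock_table: set, product_number: int) -> None:
        """The work done by one entitlement verification trigger."""
        if product_number not in lock_table:
            raise SystemExit(f"product {product_number} not entitled; aborting")

    if __name__ == "__main__":
        # Entitles products 3 and 7 on this machine (placeholder encoding).
        key = bytes([3 ^ MACHINE_SERIAL, 7 ^ MACHINE_SERIAL])
        table = build_lock_table(key)
        check_entitlement(table, 7)        # passes silently
        print("product 7 entitled, continuing execution")
        check_entitlement(table, 9)        # aborts execution

Because the per-trigger work is a single table lookup, triggers can be scattered densely through the object code, which is what makes patching them out impractical.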

Proceedings ArticleDOI
01 May 1993
TL;DR: This paper presents the user-centred iterative design of software that supports collaborative writing that grew out of a study of how people write together that included a survey of writers and a laboratory study of writing teams linked by a variety of communications media.
Abstract: This paper presents the user-centred iterative design of software that supports collaborative writing. The design grew out of a study of how people write together that included a survey of writers and a laboratory study of writing teams linked by a variety of communications media. The resulting taxonomy of collaborative writing is summarized in the paper, followed by a list of design requirements for collaborative writing software suggested by the work. The paper describes two designs of the software. The first prototype supports synchronous writing and editing from workstations linked over local area and wide area networks. The second prototype also supports brainstorming, outlining, and document review, as well as asynchronous work. Lessons learned from the user testing and actual usage of the two systems are also presented.

Book
04 Oct 1993
TL;DR: This critique demonstrates that McCabe's cyclomatic complexity metric is based upon poor theoretical foundations and an inadequate model of software development, and for a large class of software it is no more than a proxy for, and in many cases is outperformed by, lines of code.
Abstract: McCabe's cyclomatic complexity metric (1976) is widely cited as a useful predictor of various software attributes such as reliability and development effort. This critique demonstrates that it is based upon poor theoretical foundations and an inadequate model of software development. The argument that the metric provides the developer with a useful engineering approximation is not borne out by the empirical evidence. Furthermore, it would appear that for a large class of software it is no more than a proxy for, and in many cases is outperformed by, lines of code.
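
For reference, the metric under critique is easy to state: for a control-flow graph with E edges, N nodes, and P connected components, v(G) = E - N + 2P, which for a single routine equals the number of decision points plus one. The tiny calculator below just restates that definition on an invented control-flow graph; it is not code from the critique.

    # McCabe's cyclomatic complexity from an explicit control-flow graph:
    # v(G) = E - N + 2P.  The example graph (an if/else inside a loop)
    # is invented and gives v(G) = 3.
    def cyclomatic_complexity(edges, num_components=1):
        nodes = {n for edge in edges for n in edge}
        return len(edges) - len(nodes) + 2 * num_components

    if __name__ == "__main__":
        cfg = [("entry", "loop_test"),
               ("loop_test", "if_test"), ("loop_test", "exit"),
               ("if_test", "then"), ("if_test", "else"),
               ("then", "loop_test"), ("else", "loop_test")]
        print(cyclomatic_complexity(cfg))   # 7 edges, 6 nodes, 1 component -> 3

Since every extra decision typically also adds lines of code, the two measures move together, which is the intuition behind the paper's "no more than a proxy for lines of code" finding.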

Patent
23 Sep 1993
TL;DR: A distributed system includes a non-distributed computing environment (DCE) computer system and at least one DCE computer system which are loosely coupled together through a communications network operating with a standard communications protocol as mentioned in this paper.
Abstract: A distributed system includes a non-distributed computing environment (DCE) computer system and at least one DCE computer system which are loosely coupled together through a communications network operating with a standard communications protocol. The non-DCE and DCE computer systems operate under the control of proprietary and UNIX based operating systems respectively. The non-DCE computer system further includes application client software for providing access to distributed DCE service components via a remote procedure call (RPC) mechanism obtained through application server software included on the DCE computer system. A minimum number of software components modules which comprise client RPC runtime component and an import API component included in the non-DCE and an Ally component on the DCE computer systems to operate in conjunction with the client and server software to provide access to DCE services by non-DCE user applications through the RPC mechanisms of both systems eliminating the need to port the DCE software service components onto the non-DCE computer system.