
Showing papers on "Software" published in 1993


04 Oct 1993
TL;DR: In this paper, the authors provide an overview of economic analysis techniques and their applicability to software engineering and management, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation.
Abstract: This paper summarizes the current state of the art and recent trends in software engineering economics. It provides an overview of economic analysis techniques and their applicability to software engineering and management. It surveys the field of software cost estimation, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation.
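
As a concrete taste of the algorithmic cost models surveyed here, Boehm's Basic COCOMO estimates effort and schedule as power functions of program size. The constants below are the commonly quoted ones for the "organic" project mode, given as an illustrative reference point rather than a result from this paper:

    E = 2.4\,(\mathrm{KDSI})^{1.05}\ \text{person-months}, \qquad T_{\mathrm{dev}} = 2.5\,E^{0.38}\ \text{months}

where KDSI is thousands of delivered source instructions. A 32-KDSI project, for example, gives E ≈ 2.4 × 38.1 ≈ 91 person-months and T_dev ≈ 2.5 × 91^0.38 ≈ 14 months.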

5,899 citations


Book
18 Feb 1993
TL;DR: In this paper, the authors present a set of programs that summarize data with histograms and other graphics, calculate measures of spatial continuity, provide smooth least-squares-type maps, and perform stochastic spatial simulation.
Abstract: Thirty-seven programs are presented that summarize data with histograms and other graphics, calculate measures of spatial continuity, provide smooth least-squares-type maps, and perform stochastic spatial simulation.
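
The "measures of spatial continuity" these programs compute are chiefly variograms. For reference (this is standard geostatistics, not code from the book), the experimental semivariogram at lag vector h averages squared differences over all N(h) data pairs separated by h:

    \gamma(h) = \frac{1}{2N(h)} \sum_{i=1}^{N(h)} \left[ z(x_i + h) - z(x_i) \right]^2

A low γ(h) at small |h| indicates strong short-range spatial continuity; kriging (the "smooth least-squares-type maps") weights neighboring samples using a model fitted to this curve.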

4,301 citations


Journal ArticleDOI
Mark D. Weiser
TL;DR: What is new and different about the computer science in ubiquitous computing is explained, and a series of examples drawn from various subdisciplines of computer science is outlined.
Abstract: Ubiquitous computing is the method of enhancing computer use by making many computers available throughout the physical environment, but making them effectively invisible to the user. Since we started this work at Xerox PARC in 1988, a number of researchers around the world have begun to work in the ubiquitous computing framework. This paper explains what is new and different about the computer science in ubiquitous computing. It starts with a brief overview of ubiquitous computing, and then elaborates through a series of examples drawn from various subdisciplines of computer science: hardware components (e.g. chips), network protocols, interaction substrates (e.g. software for screens and pens), applications, privacy, and computational methods. Ubiquitous computing offers a framework for new and exciting research across the spectrum of computer science.

2,662 citations


Book
01 Jan 1993
TL;DR: The third edition of this book has been updated with new pedagogical features, new information and challenging exercises for the advanced student, and a complete printed index covering the material in both the book and the CD.
Abstract: What's New in the Third Edition, Revised Printing

The same great book gets better! This revised printing features all of the original content along with these additional features:
- Appendix A (Assemblers, Linkers, and the SPIM Simulator) has been moved from the CD-ROM into the printed book
- Corrections and bug fixes

Third Edition features

New pedagogical features:
- Understanding Program Performance: analyzes key performance issues from the programmer's perspective
- Check Yourself Questions: help students assess their understanding of key points of a section
- Computers in the Real World: illustrates the diversity of applications of computing technology beyond traditional desktops and servers
- For More Practice: provides students with additional problems they can tackle
- In More Depth: presents new information and challenging exercises for the advanced student

New reference features:
- Highlighted glossary terms and definitions appear on the book page, as bold-faced entries in the index, and as a separate and searchable reference on the CD
- A complete index of the material in the book and on the CD appears in the printed index, and the CD includes a fully searchable version of the same index
- Historical Perspectives and Further Readings have been updated and expanded to include the history of software R&D
- CD-Library provides materials collected from the web which directly support the text

In addition to thoroughly updating every aspect of the text to reflect the most current computing technology, the third edition:
- Uses standard 32-bit MIPS32 as the primary teaching ISA
- Presents the assembler-to-HLL translations in both C and Java
- Highlights the latest developments in architecture in Real Stuff sections: Intel IA-32, PowerPC 604, Google's PC cluster, Pentium P4, the SPEC CPU2000 benchmark suite for processors, the SPEC Web99 benchmark for web servers, the EEMBC benchmark for embedded systems, the AMD Opteron memory hierarchy, and AMD vs. IA-64

New support for distinct course goals: many of the adopters who have used our book throughout its two editions are refining their courses with a greater hardware or software focus. We have provided new material to support these course goals.

New material to support a hardware focus:
- Using logic design conventions
- Designing with hardware description languages
- Advanced pipelining
- Designing with FPGAs
- HDL simulators and tutorials
- Xilinx CAD tools

New material to support a software focus:
- How compilers work
- How to optimize compilers
- How to implement object-oriented languages
- MIPS simulator and tutorial
- History sections on programming languages, compilers, operating systems, and databases

On the CD:
- NEW: search function to search for content on both the CD-ROM and the printed text
- CD-Bars: full-length sections that are introduced in the book and presented on the CD
- CD-Appendixes: Appendices B-D
- CD-Library: materials collected from the web which directly support the text
- CD-Exercises: For More Practice provides exercises and solutions for self-study; In More Depth presents new information and challenging exercises for the advanced or curious student
- Glossary: terms that are defined in the text are collected in this searchable reference
- Further Reading: references are organized by the chapter they support
- Software: HDL simulators, MIPS simulators, and FPGA design tools
- Tutorials: SPIM, Verilog, and VHDL
- Additional support: processor models, labs, homeworks, and an index covering the book and CD contents

Instructor support provided on textbooks.elsevier.com:
- Solutions to all the exercises
- Figures from the book in a number of formats
- Lecture slides prepared by the authors and other instructors
- Lecture notes

1,521 citations


Proceedings ArticleDOI
01 Dec 1993
TL;DR: It is demonstrated that for frequently communicating modules, implementing fault isolation in software rather than hardware can substantially improve end-to-end application performance.
Abstract: One way to provide fault isolation among cooperating software modules is to place each in its own address space. However, for tightly-coupled modules, this solution incurs prohibitive context switch overhead. In this paper, we present a software approach to implementing fault isolation within a single address space. Our approach has two parts. First, we load the code and data for a distrusted module into its own fault domain, a logically separate portion of the application's address space. Second, we modify the object code of a distrusted module to prevent it from writing or jumping to an address outside its fault domain. Both these software operations are portable and programming language independent. Our approach poses a tradeoff relative to hardware fault isolation: substantially faster communication between fault domains, at a cost of slightly increased execution time for distrusted modules. We demonstrate that for frequently communicating modules, implementing fault isolation in software rather than hardware can substantially improve end-to-end application performance.
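
The object-code rewrite comes down to forcing the target of every unsafe store or jump into the module's segment before it is used. A minimal sketch of the idea in C (the segment base, mask, and all names here are illustrative, not taken from the paper):

#include <stdint.h>

/* A fault domain: one contiguous, power-of-two-aligned segment of the
   address space, identified by its upper bits. */
#define SEG_BASE 0x20000000u  /* illustrative segment tag bits  */
#define SEG_MASK 0x000FFFFFu  /* offset bits within the segment */

/* Sandbox an address: keep only the offset bits, then force the upper
   bits to the fault domain's tag, so the access can only land inside
   the distrusted module's own segment. */
static inline uint32_t sandbox(uint32_t addr) {
    return (addr & SEG_MASK) | SEG_BASE;
}

/* What the rewriter conceptually turns "*p = v" into: */
static inline void guarded_store(uint32_t *p, uint32_t v) {
    *(uint32_t *)(uintptr_t)sandbox((uint32_t)(uintptr_t)p) = v;
}

In the paper's actual scheme the sandboxed address is kept in a dedicated register, so even a jump into the middle of the check sequence cannot produce an unchecked store.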

1,370 citations


Journal ArticleDOI
Wei Li, Sallie M. Henry
TL;DR: This research concentrates on several object-oriented software metrics and the validation of these metrics with maintenance effort in two commercial systems.

1,111 citations


Book
Bonnie Nardi
15 Jul 1993
TL;DR: A Small Matter of Programming asks why it has been so difficult for end users to command programming power and explores the problems of end-user-driven application development that must be solved to afford end users greater computational power.
Abstract: From the Publisher: A Small Matter of Programming asks why it has been so difficult for end users to command programming power and explores the problems of end-user-driven application development that must be solved to afford end users greater computational power. Drawing on empirical research on existing end user systems, the book analyzes cognitive, social, and technical issues of end user programming. In particular, it examines the importance of task-specific programming languages, visual application frameworks, and collaborative work practices for end user computing with the goal of helping the designers and programmers understand and better satisfy the needs of end users who want the capability to create, customize, and extend their applications software. The ideas in the book are based on the author's research on two successful end user programming systems - spreadsheets and CAD systems - as well as other empirical research. Nardi concentrates on broad issues in end user programming, especially end users' strengths and problems, introducing tools and techniques as they are related to higher-level user issues.

699 citations


Journal ArticleDOI
TL;DR: The authors present a software-oriented approach to hardware-software partitioning that avoids restrictions on the software semantics, together with an iterative partitioning process based on hardware extraction controlled by a cost function.
Abstract: The authors present a software-oriented approach to hardware-software partitioning that avoids restrictions on the software semantics, together with an iterative partitioning process based on hardware extraction controlled by a cost function. This process is used in Cosyma, an experimental cosynthesis system for embedded controllers. As an example, the extraction of coprocessors for loops is demonstrated. Results are presented for several benchmark designs.
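
The "iterative partitioning process ... controlled by a cost function" pattern can be sketched as a greedy loop: repeatedly move to hardware the block whose move best improves the cost, and stop when no move helps. Everything below (the cost form, the numbers, the names) illustrates the pattern only and is not Cosyma's implementation, which explores the design space with more sophisticated search:

#include <stdio.h>

#define NBLOCKS 6

/* Per-block estimates: software time, hardware time, hardware area. */
typedef struct { double t_sw, t_hw, area; int in_hw; } Block;

/* Cost: total execution time, penalized when the area budget is exceeded. */
static double cost(const Block *b, int n, double area_budget) {
    double t = 0, area = 0;
    for (int i = 0; i < n; i++) {
        t += b[i].in_hw ? b[i].t_hw : b[i].t_sw;
        if (b[i].in_hw) area += b[i].area;
    }
    return t + (area > area_budget ? 1e6 * (area - area_budget) : 0);
}

int main(void) {
    Block b[NBLOCKS] = {
        {50, 5, 30, 0}, {40, 8, 25, 0}, {30, 6, 20, 0},
        {20, 4, 15, 0}, {10, 3, 10, 0}, { 5, 2,  5, 0},
    };
    double budget = 60;                    /* area budget (arbitrary units) */
    for (;;) {                             /* greedy hardware extraction */
        double best = cost(b, NBLOCKS, budget);
        int pick = -1;
        for (int i = 0; i < NBLOCKS; i++) {
            if (b[i].in_hw) continue;
            b[i].in_hw = 1;                /* try moving block i to hardware */
            double c = cost(b, NBLOCKS, budget);
            b[i].in_hw = 0;
            if (c < best) { best = c; pick = i; }
        }
        if (pick < 0) break;               /* no move improves the cost */
        b[pick].in_hw = 1;
        printf("moved block %d to hardware, cost now %.1f\n", pick, best);
    }
    return 0;
}

The point of the pattern is that partitioning decisions fall out of cost evaluation rather than manual assignment.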

644 citations


Patent
19 Jan 1993
TL;DR: In this patent, the authors propose a method for detecting undesirable software entities, such as computer viruses, worms, or Trojan Horses, in a data processing system by monitoring for anomalous behavior that may indicate their presence.
Abstract: A method includes the following component steps, or some functional subset of these steps: (A) periodic monitoring of a data processing system (10) for anomalous behavior that may indicate the presence of an undesirable software entity such as a computer virus, worm, or Trojan Horse; (B) automatic scanning for occurrences of known types of undesirable software entities and taking remedial action if they are discovered; (C) deploying decoy programs to capture samples of unknown types of computer viruses; (D) identifying machine code portions of the captured samples which are unlikely to vary from one instance of the virus to another; (E) extracting an identifying signature from the executable code portion and adding the signature to a signature database; (F) informing neighboring data processing systems on a network of an occurrence of the undesirable software entity; and (G) generating a distress signal, if appropriate, so as to call upon an expert to resolve difficult cases. A feature of this invention is the automatic execution of the foregoing steps in response to a detection of an undesired software entity, such as a virus or a worm, within a data processing system. The automatic extraction of the identifying signature, the addition of the signature to a signature data base, and the immediate use of the signature by a scanner provides protection from subsequent infections of the system, and also a network of systems, by the same or an altered form of the undesirable software entity.
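
Step (B), scanning for occurrences of known undesirable software entities, is at bottom a search of executable images against a database of byte signatures like the ones step (E) extracts. A toy illustration in C (the signature bytes are made up; real scanners match many signatures at once with far more efficient multi-pattern algorithms):

#include <stdio.h>
#include <string.h>

/* A hypothetical signature: a byte string unlikely to vary from one
   instance of the virus to another, i.e. step (D)'s output. */
static const unsigned char sig[] = { 0xB8, 0x01, 0x13, 0xCD, 0x21 };

/* Return 1 if the signature occurs anywhere in the image. */
static int infected(const unsigned char *image, size_t n) {
    if (n < sizeof sig) return 0;
    for (size_t i = 0; i + sizeof sig <= n; i++)
        if (memcmp(image + i, sig, sizeof sig) == 0)
            return 1;
    return 0;
}

int main(void) {
    unsigned char clean[] = { 0x90, 0x90, 0xC3 };
    unsigned char bad[]   = { 0x90, 0xB8, 0x01, 0x13, 0xCD, 0x21, 0xC3 };
    printf("clean: %d\n", infected(clean, sizeof clean));  /* prints 0 */
    printf("bad:   %d\n", infected(bad, sizeof bad));      /* prints 1 */
    return 0;
}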

585 citations


Journal ArticleDOI
TL;DR: Common guidelines for organizing and interpreting unstructured data are provided and software programs for the microcomputer are presented as a way to facilitate the organization and interpretation of qualitative data.
Abstract: In the last several years there has been an increase in the amount of qualitative research using in-depth interviews and comprehensive content analyses in sport psychology. However, no explicit method has been provided to deal with the large amount of unstructured data. This article provides common guidelines for organizing and interpreting unstructured data. Two main operations are suggested and discussed: first, coding meaningful text segments, or creating tags, and second, regrouping similar text segments, or creating categories. Furthermore, software programs for the microcomputer are presented as a way to facilitate the organization and interpretation of qualitative data.

584 citations


Journal ArticleDOI
TL;DR: The Molecular Surface Package is a reimplementation, in C, of a set of earlier FORTRAN programs for computing analytical molecular surfaces, areas, volumes, polyhedra, and surface curvatures.

Journal ArticleDOI
TL;DR: The authors demonstrate the feasibility of synthesizing heterogeneous systems by using timing constraints to delegate tasks between hardware and software so that performance requirements can be met.
Abstract: As system design grows increasingly complex, the use of predesigned components, such as general-purpose microprocessors, can simplify synthesized hardware. While the problems in designing systems that contain processors and application-specific integrated circuit chips are not new, computer-aided synthesis of such heterogeneous or mixed systems poses unique problems. The authors demonstrate the feasibility of synthesizing heterogeneous systems by using timing constraints to delegate tasks between hardware and software so that performance requirements can be met. System functionality is captured using the HardwareC hardware description language. The synthesis of an Ethernet-based network coprocessor is discussed as an example.

Book
01 Jan 1993
TL;DR: The aim of this text is to teach the student what discrete event systems are about and how they differ from "classical systems"; describe the differences between various modelling approaches; and show how to simulate DES using commercially-available software or from first principles.
Abstract: The aim of this text is to teach the student what discrete event systems (DES) are about and how they differ from "classical systems"; describe the differences between various modelling approaches; and show how to simulate DES using commercially-available software or from first principles.
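
Simulating a DES "from first principles" usually means exactly two ingredients: a simulation clock and a time-ordered future-event list. A minimal sketch in C, flavored as a single-server queue with fixed interarrival and service times (all numbers illustrative):

#include <stdio.h>

#define MAXEV 64

typedef struct { double time; int type; } Event;  /* 0 = arrival, 1 = departure */

static Event ev[MAXEV];
static int nev = 0;

static void schedule(double t, int type) {
    ev[nev].time = t;
    ev[nev].type = type;
    nev++;
}

/* Pop the earliest pending event: the heart of event-driven simulation. */
static Event next_event(void) {
    int best = 0;
    for (int i = 1; i < nev; i++)
        if (ev[i].time < ev[best].time) best = i;
    Event e = ev[best];
    ev[best] = ev[--nev];
    return e;
}

int main(void) {
    double now = 0, service = 4.0;     /* fixed service time (illustrative) */
    int queue = 0, busy = 0;
    for (int i = 0; i < 5; i++)        /* five arrivals, 3 time units apart */
        schedule(3.0 * (i + 1), 0);
    while (nev > 0) {
        Event e = next_event();
        now = e.time;                  /* the clock jumps from event to event */
        if (e.type == 0) {             /* arrival */
            if (!busy) { busy = 1; schedule(now + service, 1); }
            else queue++;
        } else {                       /* departure */
            if (queue > 0) { queue--; schedule(now + service, 1); }
            else busy = 0;
        }
        printf("t=%5.1f  %s  queue=%d busy=%d\n",
               now, e.type ? "departure" : "arrival", queue, busy);
    }
    return 0;
}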

Journal ArticleDOI
TL;DR: This work affirms that the quantification of life-critical software reliability is infeasible using statistical methods, whether these methods are applied to standard software or fault-tolerant software.
Abstract: This work affirms that the quantification of life-critical software reliability is infeasible using statistical methods, whether these methods are applied to standard software or fault-tolerant software. The classical methods of estimating reliability are shown to lead to exorbitant amounts of testing when applied to life-critical software. Reliability growth models are examined and also shown to be incapable of overcoming the need for excessive amounts of testing. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultrareliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multiversion software experiments support this affirmation.
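
The "exorbitant amounts of testing" follow from elementary arithmetic. If failures arrive as a Poisson process with rate \lambda, a sketch of the standard argument runs:

    P(\text{no failure in } t \text{ hours}) = e^{-\lambda t}

so to reject \lambda \ge \lambda_0 with confidence 1 - \alpha one needs a failure-free run of length

    t \ge \frac{\ln(1/\alpha)}{\lambda_0}

For an ultrareliability target of \lambda_0 = 10^{-9} per hour and \alpha = 0.1, this demands t \ge 2.3 \times 10^9 failure-free hours, roughly 260,000 years of testing on a single copy.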

Book
04 Oct 1993
Abstract: The distributions and relationships derived from the change data collected during the development of a medium-scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software, yielding some interesting results.

Journal ArticleDOI
01 Jan 1993
TL;DR: It is shown how top-down decompositions of a subject system can be (re)constructed via bottom-up subsystem composition, which involves identifying groups of building blocks using composition operations based on software engineering principles such as low coupling and high cohesion.
Abstract: Reverse-engineering is the process of extracting system abstractions and design information out of existing software systems. This process involves the identification of software artefacts in a particular subject system, the exploration of how these artefacts interact with one another, and their aggregation to form more abstract system representations that facilitate program understanding. This paper describes our approach to creating higher-level abstract representations of a subject system, which involves the identification of related components and dependencies, the construction of layered subsystem structures, and the computation of exact interfaces among subsystems. We show how top-down decompositions of a subject system can be (re)constructed via bottom-up subsystem composition. This process involves identifying groups of building blocks (e.g., variables, procedures, modules, and subsystems) using composition operations based on software engineering principles such as low coupling and high cohesion. The result is an architecture of layered subsystem structures. The structures are manipulated and recorded using the Rigi system, which consists of a distributed graph editor and a parsing system with a central repository. The editor provides graph filters and clustering operations to build and explore subsystem hierarchies interactively. The paper concludes with a detailed, step-by-step analysis of a 30-module software system using Rigi.
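
One way to read "composition operations based on ... low coupling and high cohesion": starting from a module-level dependency graph, greedily merge the pair of clusters with the most references between them, so heavily interconnected modules end up in the same subsystem. A toy sketch of that composition in C (the dependency counts are invented; Rigi's actual operations are interactive and richer):

#include <stdio.h>

#define N 6

/* dep[i][j] = number of references from module i to module j (invented). */
static int dep[N][N] = {
    {0,4,3,0,0,0}, {2,0,5,0,0,0}, {1,3,0,1,0,0},
    {0,0,0,0,6,2}, {0,0,1,4,0,3}, {0,0,0,2,5,0},
};
static int cluster[N];                 /* cluster id of each module */

/* Total references between two clusters, counting both directions. */
static int between(int a, int b) {
    int s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            if (cluster[i] == a && cluster[j] == b)
                s += dep[i][j] + dep[j][i];
    return s;
}

int main(void) {
    for (int i = 0; i < N; i++) cluster[i] = i;
    for (int step = 0; step < N - 2; step++) {  /* stop at 2 subsystems */
        int ba = -1, bb = -1, best = 0;
        for (int a = 0; a < N; a++)
            for (int b = a + 1; b < N; b++) {
                int s = between(a, b);
                if (s > best) { best = s; ba = a; bb = b; }
            }
        if (ba < 0) break;             /* remaining clusters are uncoupled */
        for (int i = 0; i < N; i++)
            if (cluster[i] == bb) cluster[i] = ba;  /* merge b into a */
        printf("step %d: merged clusters %d and %d (%d refs)\n",
               step, ba, bb, best);
    }
    for (int i = 0; i < N; i++)
        printf("module %d -> subsystem %d\n", i, cluster[i]);
    return 0;
}

On this toy data the loop recovers the two tightly-knit groups {0,1,2} and {3,4,5}: high cohesion inside each subsystem, low coupling between them.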

Book ChapterDOI
TL;DR: Software for calculation of 6 diatom indices and diversity indices has been created for both Macintosh and IBM-compatible computers and runs with the data base Omnis 5 under Windows 3.0.
Abstract: Software for calculation of 6 diatom indices and diversity indices has been created for both Macintosh and IBM-compatible computers. This software runs with the data base Omnis 5 under Windows 3.0. It includes three taxonomic files: families, genera and species, and an inventories file. Four types of data inputs are possible. It is always possible to operate simulations and to carry out investigations with simple or combined characters. All files and results of investigations can be listed in different ways. This data base is compatible with both word processing and spreadsheet systems.

Journal ArticleDOI
TL;DR: Prior empirical evidence linking software complexity to software maintenance costs is relatively weak, and several researchers have noted that such results must be applied cautiously to the large-scale commercial application systems that account for most software maintenance expenditures.
Abstract: While the link between the difficulty in understanding computer software and the cost of maintaining it is appealing, prior empirical evidence linking software complexity to software maintenance costs is relatively weak [21]. Many of the attempts to link software complexity to maintainability are based on experiments involving small pieces of code, or are based on analysis of software written by students. Such evidence is valuable, but several researchers have noted that such results must be applied cautiously to the large-scale commercial application systems that account for most software maintenance expenditures [13,17].

Patent
14 Dec 1993
TL;DR: In this patent, the authors propose a method, system, and program for testing programmatic interfaces by systematically exploring valid call sequences defined by a collection of subroutines with a collection of data.
Abstract: A method, system and program for testing programmatic interfaces by systematically exploring valid call sequences defined by a collection of subroutines with a collection of data. The subject invention does not explicitly write unit tests. Instead, it provides the tools to develop rules which model and verify the correct operation of software while leaving the complexities of test selection, execution, sequencing, coverage, and verification to an automated system. The number of test scenarios generated by the system depends upon cumulative execution time and is reliably independent of human effort.
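
The pattern behind such systems: encode rules for which calls are legal in which states and what each call must return, then let a driver explore random legal sequences against a reference model until a time budget expires. A toy sketch in C for a small stack API (the API and the rules are invented for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* System under test: a tiny fixed-capacity stack API (invented). */
#define CAP 8
static int st[CAP], top = 0;
static int push(int v) { if (top == CAP) return -1; st[top++] = v; return 0; }
static int pop(int *v) { if (top == 0) return -1; *v = st[--top]; return 0; }

int main(void) {
    int model[CAP], depth = 0;              /* reference model of the state */
    long steps = 0;
    srand(12345);                           /* reproducible exploration */
    clock_t end = clock() + CLOCKS_PER_SEC; /* budget is time, not human effort */
    while (clock() < end) {
        steps++;
        if (rand() % 2 && depth < CAP) {    /* rule: push is legal here */
            int v = rand() % 1000;
            if (push(v) != 0) { puts("BUG: push failed"); return 1; }
            model[depth++] = v;
        } else if (depth > 0) {             /* rule: pop returns the last push */
            int v;
            if (pop(&v) != 0 || v != model[--depth]) {
                puts("BUG: pop mismatch");
                return 1;
            }
        } else {                            /* rule: pop on empty must fail */
            int v;
            if (pop(&v) != -1) { puts("BUG: pop on empty succeeded"); return 1; }
        }
    }
    printf("explored %ld calls with no rule violation\n", steps);
    return 0;
}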

ReportDOI
01 Jun 1993
TL;DR: This method for facilitating the systematic and repeatable identification of risks associated with the development of a software-dependent project was tested in active government-funded defense and civilian software development projects, both for its usefulness and for improving the method itself.
Abstract: This report describes a method for facilitating the systematic and repeatable identification of risks associated with the development of a software-dependent project. This method, derived from published literature and previous experience in developing software, was tested in active government-funded defense and civilian software development projects, both for its usefulness and for improving the method itself. Results of the field tests supported the claim that the described method is useful, usable, and efficient. The report concludes with some macro-level lessons learned from the field tests and a brief overview of future work in establishing risk management on a firm footing in software development projects.

Patent
Kenneth C. Kung
01 Jun 1993
TL;DR: In this patent, a multiple logon procedure (16) and a secure transport layer protocol are used with a user's communication software and network communication software to authenticate users in a distributed networked computing system.
Abstract: Apparatus and methods of authenticating users in a distributed networked computing system (10). The system (10) may comprise a central server (12) embodiment that includes a file (19) wherein IDs and encrypted passwords (30) are stored, or a distributed system embodiment where IDs and encrypted passwords (30) are stored in files (19) at each respective computer in the system (10). A multiple logon procedure (16) and secure transport layer protocol are used with a user's communication software and network communication software. When a user desires to use a particular computer (13), logon requests are processed by the multiple logon procedure (16) and it accesses the stored file (19) that contains the user's ID and encrypted password, decrypts the password (30), accesses the remote computer (13), and logs the user onto that computer (13). In the central server system all IDs and encrypted passwords (30) are stored on a single computer (the server (12)) that controls access to the entire distributed system (10). Once access is granted to a particular user, nonencrypted passwords (30) are transmitted to the remote computers (13), since the server (12) controls the entire system. In the distributed version, password files (19) are stored in all networked computers (13), and once a user logs on to a computer (11), if the user wishes to use services at a second computer (13), the authentication information is forwarded to the second computer (13) using the secure transport layer protocol to protect its integrity, and after receiving the authentication information, it is compared with authentication information for the same user stored in the second computer (13). If the authentication information matches, the user is logged onto the second computer (13).

Journal ArticleDOI
TL;DR: The structure of the mm-array database and the implementation of a data analysis program are described, both of which make extensive use of Sybase, a commercial database management system with application development software.
Abstract: A relational database management system has been implemented on the Caltech millimeter-wave array for both real-time astronomical engineering data and post-processing calibration and analysis. This system provides high storage efficiency for the data and on-line access to data from multiple observing seasons. The ability to access easily the full database enables more accurate calibration of the raw data and greatly facilitates the calibration process. In this article we describe both the structure of the mm-array database and the implementation of a data analysis program, both of which make extensive use of Sybase, a commercial database management system with application development software. This use of relational database technology in real-time astronomical data storage and calibration may serve as a prototype for similar systems at other observatories.

Patent
Stephen M. Platt
22 Dec 1993
TL;DR: In this patent, a method for remote installation of software over a computer network allows a user to interactively select each remote computer system for software installation, or to provide a file containing a list of all remote computer systems.
Abstract: A method for remote installation of software over a computer network allows a user to interactively select each remote computer system for software installation, or to provide a file containing a list of all remote computer systems. Before attempting to install the software, the method ensures that the remote system can be reached through the network, that the remote system has the capability of running processes remotely, that the remote system has all the commands necessary to perform the installation, that the remote system has the correct hardware and software to support the installation, and that sufficient disk space exists on the remote computer system for the installation. The method then combines all files that are being remotely installed into a single data stream, sends this single data stream over the network to the remote computer system, and separates the data stream into the original files on the remote system.
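
The "combine all files into a single data stream" step is essentially an archive format: a small header (name and size) per file, followed by the raw bytes, which the receiving side splits back into the original files. A minimal sketch in C (the header format is invented here; a production format would use fixed-size binary headers and checksums):

#include <stdio.h>
#include <stdlib.h>

/* Append one file to the stream: a tiny text header, then the raw bytes. */
static void pack(FILE *out, const char *name, const char *data, size_t n) {
    fprintf(out, "%s %zu\n", name, n);
    fwrite(data, 1, n, out);
}

/* Split the stream back into its component files (here, just print them). */
static void unpack(FILE *in) {
    char name[64];
    size_t n;
    while (fscanf(in, "%63s %zu", name, &n) == 2) {
        fgetc(in);                      /* consume the '\n' after the size */
        char *buf = malloc(n + 1);
        if (!buf || fread(buf, 1, n, in) != n) { free(buf); return; }
        buf[n] = '\0';
        printf("file %s (%zu bytes): %s\n", name, n, buf);
        free(buf);
    }
}

int main(void) {
    FILE *stream = tmpfile();           /* stands in for the network pipe */
    if (!stream) return 1;
    pack(stream, "install.sh", "echo hi\n", 8);
    pack(stream, "prog.bin", "BINARYDATA", 10);
    rewind(stream);
    unpack(stream);
    fclose(stream);
    return 0;
}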

Patent
19 Jan 1993
TL;DR: In this patent, a system (Figure 10) is described for inserting code markers that provide indications, observable externally to the microprocessor on which the software runs, of the occurrence of events during execution of the software.
Abstract: A system (Figure 10) for inserting code markers for observing indications (external to the microprocessor upon which the software operates) of the occurrence of an event in the execution of the software. Additional instructions or markers are added to the software to be debugged to produce simple, encoded, memory references to otherwise unused memory or I/O locations that will always be visible to a logic analyzer as bus cycles. Although the code markers cause a minimal intrusion in the underlying software, they make tracing events by a conventional logic analyzer much simpler and allow for performance evaluations in manners not heretofore possible. In particular, the inserted code markers provide a method of dynamically extracting information from a running host or real-time "black box" embedded system (902) under test using simple low intrusion print statements, encoded I/O writes on procedure entries and exits, and/or an interface to service calls and the like which writes out the passed parameters. Generally, the code markers are inserted at compile time or interactively during the debug session to make visible critical points in the code execution, such as function calls, task creation, semaphore operations and other resource usage so as to speed isolation of problems at test points during debugging. Performance analysis and event analysis use the code markers of the invention to cut through the ambiguities of microprocessor prefetch and cache operations. Because of these features, the invention is particularly advantageous for use by software design teams developing complex embedded host or real-time operating systems using multi-task operating systems and/or object oriented systems.
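
The core mechanism reduces to writing an event code to a reserved, otherwise-unused address that a logic analyzer is set to trigger on, which makes the event visible as a bus cycle. A sketch of what inserted markers might look like for an embedded target (the address and event codes are invented; on a cached processor the marker region must be uncacheable so the write actually reaches the bus):

#include <stdint.h>

/* Reserved, otherwise-unused location the logic analyzer triggers on. */
#define MARKER_PORT ((volatile uint16_t *)0x00FFF000u)  /* illustrative */

/* Emit an event code as a visible bus cycle; minimal intrusion. */
#define MARKER(code) (*MARKER_PORT = (uint16_t)(code))

#define EV_TASK_CREATE    0x0100
#define EV_SEM_TAKE       0x0200
#define EV_FUNC_ENTER(id) (0x1000 | (id))
#define EV_FUNC_EXIT(id)  (0x2000 | (id))

void service_request(void) {
    MARKER(EV_FUNC_ENTER(7));   /* entry of function 7, visible externally */
    /* ... the function's real work ... */
    MARKER(EV_FUNC_EXIT(7));    /* exit marker, so durations can be measured */
}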

Patent
20 Sep 1993
TL;DR: A registration system for licensing execution of digital data in a use mode is described, where the system includes local licensee unique ID generating means and remote licensee unique ID generating means.
Abstract: A registration system for licensing execution of digital data in a use mode, said digital data executable on a platform (12), said system including local licensee unique ID generating means (14) and remote licensee unique ID generating means (14), said system further including mode switching means operable on said platform which permits use of said digital data in said use mode on said platform only if a licensee unique ID generated by said local licensee unique ID generating means (14) has matched a licensee unique ID generated by said remote licensee unique ID generating means (14).
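
The scheme hinges on the local platform and the remote licensing authority deriving the same licensee unique ID from the same registration details, so a match shows both sides saw identical inputs. A toy sketch of the match logic in C (the FNV-1a hash is a stand-in; the patent does not specify the generating function):

#include <stdio.h>
#include <stdint.h>

/* Stand-in ID generator: both sides must compute the same function
   over the same registration details (FNV-1a, purely illustrative). */
static uint32_t licensee_id(const char *details) {
    uint32_t h = 2166136261u;
    for (; *details; details++) { h ^= (uint8_t)*details; h *= 16777619u; }
    return h;
}

int main(void) {
    const char *details = "user=J.Smith;product=42;serial=A17";
    uint32_t local  = licensee_id(details);   /* computed on the platform  */
    uint32_t remote = licensee_id(details);   /* computed by the licensor  */
    printf(local == remote ? "use mode enabled\n" : "demo mode only\n");
    return 0;
}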

Journal ArticleDOI
TL;DR: A behavioral model of a class of mixed hardware-software systems is presented and a codesign methodology for such systems is defined.
Abstract: A behavioral model of a class of mixed hardware-software systems is presented. A codesign methodology for such systems is defined. The methodology includes hardware-software partitioning, behavioral synthesis, software compilation, and demonstration on a testbed consisting of a commercial central processing unit (CPU), field-programmable gate arrays, and programmable interconnections. Design examples that illustrate how certain characteristics of system behavior and constraints suggest hardware or software implementation are presented.


Journal ArticleDOI
TL;DR: The evolution of the novel shared drawing medium ClearBoard, designed to seamlessly integrate an interpersonal space and a shared workspace, is described, from ClearBoard-1 to ClearBoard-2, which incorporates TeamPaint, a multiuser paint editor.
Abstract: We describe the evolution of the novel shared drawing medium ClearBoard, which was designed to seamlessly integrate an interpersonal space and a shared workspace. ClearBoard permits coworkers in two locations to draw with color markers or with electronic pens and software tools while maintaining direct eye contact and the ability to employ natural gestures. The ClearBoard design is based on the key metaphor of “talking through and drawing on a transparent glass window.” We describe the evolution from ClearBoard-1 (which enables shared video drawing) to ClearBoard-2 (which incorporates TeamPaint, a multiuser paint editor). Initial observations and findings gained through the experimental use of the prototype, including the feature of “gaze awareness,” are discussed. Further experiments are conducted with ClearBoard-0 (a simple mockup), ClearBoard-1, and an actual desktop as a control. In the settings we examined, the ClearBoard environment led to more eye contact and greater potential awareness of the collaborator's gaze direction than the traditional desktop environment.

Journal ArticleDOI
TL;DR: In this article, the effectiveness of the all-uses and all-edges test data adequacy criteria is discussed, and a large number of test sets was randomly generated for each of nine subject programs with subtle errors, and it was determined whether the test set exposed an error.
Abstract: An experiment comparing the effectiveness of the all-uses and all-edges test data adequacy criteria is discussed. The experiment was designed to overcome some of the deficiencies of previous software testing experiments. A large number of test sets was randomly generated for each of nine subject programs with subtle errors. For each test set, the percentages of executable edges and definition-use associations covered were measured, and it was determined whether the test set exposed an error. Hypothesis testing was used to investigate whether all-uses adequate test sets are more likely to expose errors than are all-edges adequate test sets. Logistic regression analysis was used to investigate whether the probability that a test set exposes an error increases as the percentage of definition-use associations or edges covered by it increases. Error exposing ability was shown to be strongly positively correlated to percentage of covered definition-use associations in only four of the nine subjects. Error exposing ability was also shown to be positively correlated to the percentage of covered edges in four different subjects, but the relationship was weaker.
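
For orientation: an edge is one branch of the control-flow graph, while a definition-use association pairs an assignment with a later read it can reach, so all-uses usually demands more of a test set than all-edges. A small C illustration (invented for this summary, not from the paper):

#include <stdio.h>

/* x is defined at D1 and D2 and used at U1 and U2. */
static int f(int a, int b) {
    int x;
    if (a > 0) x = 1;       /* D1 */
    else       x = -1;      /* D2 */
    if (b > 0) return x;    /* U1 */
    return -x;              /* U2 */
}

int main(void) {
    /* These two tests cover all four edges (all-edges adequate) but
       exercise only the associations D1-U1 and D2-U2... */
    printf("%d %d\n", f(1, 1), f(-1, -1));
    /* ...so all-uses additionally requires tests like these, which
       exercise D1-U2 and D2-U1. */
    printf("%d %d\n", f(1, -1), f(-1, 1));
    return 0;
}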

Book
28 Mar 1993
TL;DR: This guide introduces human-computer interaction (HCI), applying principles from psychology to the development and evaluation of interactive systems.
Abstract: About this Guide Who Is This Guide For? How to Use This Guide? 1. Introduction to HCI 2. The Human Element: Applying Psychology 3. System Development 4. System and Interface Features 5. Software Tools and Prototyping 6. Evaluation 7. Future trends