
Showing papers on "Software" published in 1995


Journal ArticleDOI
TL;DR: The asynchronous pipeline scheme provides other substantial advantages, including high flexibility, favorable processing speeds, choice of both all-in-memory and disk-bound processing, easy adaptation to different data formats, simpler software development and maintenance, and the ability to distribute processing tasks on multi-CPU computers and computer networks.
Abstract: The NMRPipe system is a UNIX software environment of processing, graphics, and analysis tools designed to meet current routine and research-oriented multidimensional processing requirements, and to anticipate and accommodate future demands and developments. The system is based on UNIX pipes, which allow programs running simultaneously to exchange streams of data under user control. In an NMRPipe processing scheme, a stream of spectral data flows through a pipeline of processing programs, each of which performs one component of the overall scheme, such as Fourier transformation or linear prediction. Complete multidimensional processing schemes are constructed as simple UNIX shell scripts. The processing modules themselves maintain and exploit accurate records of data sizes, detection modes, and calibration information in all dimensions, so that schemes can be constructed without the need to explicitly define or anticipate data sizes or storage details of real and imaginary channels during processing. The asynchronous pipeline scheme provides other substantial advantages, including high flexibility, favorable processing speeds, choice of both all-in-memory and disk-bound processing, easy adaptation to different data formats, simpler software development and maintenance, and the ability to distribute processing tasks on multi-CPU computers and computer networks.
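
As a loose illustration of the pipeline idea (NMRPipe itself chains its processing programs through UNIX pipes in shell scripts), here is a minimal Python sketch in which each stage is a generator consuming the previous stage's stream. The stage names, parameters, and synthetic data are invented for illustration, not actual NMRPipe functions:

```python
import numpy as np

def read_fid(path):
    """Yield 1-D slices of (synthetic) time-domain data; a stand-in for file input."""
    rng = np.random.default_rng(0)
    for _ in range(4):                      # pretend the file holds 4 slices
        yield rng.standard_normal(256) + 1j * rng.standard_normal(256)

def apodize(stream, lb=5.0, sw=10000.0):
    """Exponential line broadening applied to each slice in the stream."""
    for fid in stream:
        t = np.arange(fid.size) / sw
        yield fid * np.exp(-np.pi * lb * t)

def fourier_transform(stream):
    for fid in stream:
        yield np.fft.fftshift(np.fft.fft(fid))

# Compose the scheme the way a shell script composes programs with '|'.
spectrum = fourier_transform(apodize(read_fid("test.fid")))
for i, s in enumerate(spectrum):
    print(f"slice {i}: {s.size} points, max |S| = {abs(s).max():.2f}")
```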

13,804 citations


Journal Article
TL;DR: In this article, the authors propose that specialized elements of hardware and software, connected by wires, radio waves and infrared, will soon be so ubiquitous that no-one will notice their presence.
Abstract: Specialized elements of hardware and software, connected by wires, radio waves and infrared, will soon be so ubiquitous that no-one will notice their presence

5,041 citations


Journal ArticleDOI
TL;DR: GSLIB, as discussed by the authors, provides source code that can be used as a starting point for custom programs, advanced applications, and research; it is addressed to the reasonably advanced practitioner or researcher who needs powerful, flexible, documented programs that are not confined to user-friendly menus.
Abstract: Geostatistics can be defined as the statistical study of phenomena that fluctuate in space and time. Originally the objective of the field was to improve forecasts of ore grade and reserves, but the mathematical generality of the approach has led to the application of geostatistics to other areas, ranging from pest control to pollution monitoring. GSLIB provides a source code that can be used as a starting point for custom programs, advanced applications and research. GSLIB is addressed to the reasonably advanced practitioner or researcher who needs powerful, flexible and documented programs that are not confined to user-friendly menus. It offers the most advanced methods in the field, including co-kriging and several forms of conditional simulations, all developed in three dimensions, which can be run on any kind of computer.
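
For readers unfamiliar with the algorithms GSLIB packages, the NumPy sketch below shows the core of ordinary kriging: solving the kriging system for one estimation point under a spherical variogram. The variogram parameters and data points are arbitrary illustration values; GSLIB's own routines are Fortran and far more general.

```python
import numpy as np

def spherical_gamma(h, sill=1.0, a=10.0):
    """Spherical semivariogram (illustrative parameters, zero nugget)."""
    h = np.asarray(h, dtype=float)
    g = sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h >= a, sill, g)

def ordinary_krige(xy, z, x0):
    """Solve the ordinary-kriging system for one estimation point x0."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical_gamma(np.linalg.norm(xy - x0, axis=1))
    w = np.linalg.solve(A, b)        # n weights plus a Lagrange multiplier
    return w[:n] @ z

xy = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [4.0, 4.0]])
z = np.array([1.2, 0.8, 1.5, 1.1])
print("estimate at (2, 2):", ordinary_krige(xy, z, np.array([2.0, 2.0])))
```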

3,173 citations


Book
01 Feb 1995
TL;DR: The third edition of the LAPACK Users' Guide covers the contents, performance, and accuracy of LAPACK, provides guidance on installing routines and troubleshooting, and includes appendices on converting from LINPACK or EISPACK and a quick reference guide to the BLAS.
Abstract: Preface to the third edition Preface to the second edition Part 1. Guide. 1. Essentials 2. Contents of LAPACK 3. Performance of LAPACK 4. Accuracy and Stability 5. Documentation and Software Conventions 6. Installing LAPACK Routines 7. Troubleshooting Appendix A. Index of Driver and Computational Routines Appendix B. Index of Auxiliary Routines Appendix C. Quick Reference Guide to the BLAS Appendix D. Converting from LINPACK or EISPACK Appendix E. LAPACK Working Notes Part 2. Specifications of Routines. Bibliography Index by Keyword Index by Routine Name.
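
A minimal example of driving a LAPACK routine from a high-level language, using SciPy's low-level wrappers; dgesv is the double-precision linear-system driver documented in the Guide, and the matrix here is arbitrary:

```python
import numpy as np
from scipy.linalg.lapack import dgesv

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
lu, piv, x, info = dgesv(A, b)         # LU factorization + solve in one driver
print("solution:", x, "info:", info)   # info == 0 means success
```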

2,958 citations


Journal ArticleDOI
TL;DR: The 4+1 View Model organizes a description of a software architecture using five concurrent views, each of which addresses a specific set of concerns.
Abstract: The 4+1 View Model organizes a description of a software architecture using five concurrent views, each of which addresses a specific set of concerns. Architects capture their design decisions in four views and use the fifth view to illustrate and validate them. The logical view describes the design's object model when an object-oriented design method is used. To design an application that is very data driven, you can use an alternative approach to develop some other form of logical view, such as an entity-relationship diagram. The process view describes the design's concurrency and synchronization aspects. The physical view describes the mapping of the software onto the hardware and reflects its distributed aspect. The development view describes the software's static organization in its development environment.

2,177 citations


Journal ArticleDOI
Joseph Mitola1
TL;DR: A closer look at the canonical functional partitioning of channel coding into antenna, RF, IF, baseband, and bitstream segments and a brief treatment of the economics and likely future directions of software radio technology are provided.
Abstract: As communications technology continues its rapid transition from analog to digital, more functions of contemporary radio systems are implemented in software, leading toward the software radio. This article provides a tutorial review of software radio architectures and technology, highlighting benefits, pitfalls, and lessons learned. This includes a closer look at the canonical functional partitioning of channel coding into antenna, RF, IF, baseband, and bitstream segments. A more detailed look at the estimation of demand for critical resources is key. This leads to a discussion of affordable hardware configurations, the mapping of functions to component hardware, and related software tools. This article then concludes with a brief treatment of the economics and likely future directions of software radio technology.

2,002 citations


Book
13 Nov 1995
TL;DR: Biomedical and social science researchers who want to analyze survival data with SAS will find just what they need with this easy-to-read and comprehensive guide.
Abstract: Biomedical and social science researchers who want to analyze survival data with SAS will find just what they need with this easy-to-read and comprehensive guide. Written for the reader with a modest statistical background and minimal knowledge of SAS software, this book teaches many aspects of data input and manipulation. Numerous examples of SAS code and output make this an eminently practical resource, ensuring that even the uninitiated becomes a sophisticated user of survival analysis. The main topics presented include censoring, survival curves, Kaplan-Meier estimation, accelerated failure time models, Cox regression models, and discrete-time analysis. Also included are topics not usually covered, such as time-dependent covariates, competing risks, and repeated events.
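
The book's examples are SAS code; as a rough Python analogue (assuming the third-party lifelines package), the sketch below runs a Kaplan-Meier estimate and a Cox proportional-hazards regression on a small fabricated dataset:

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Fabricated data: durations in months; event=1 observed, 0 censored.
df = pd.DataFrame({
    "duration": [6, 13, 24, 25, 31, 37, 40, 45],
    "event":    [1,  1,  0,  1,  0,  1,  1,  0],
    "age":      [54, 61, 48, 70, 52, 66, 59, 63],
})

kmf = KaplanMeierFitter().fit(df["duration"], event_observed=df["event"])
print(kmf.survival_function_.tail(3))              # Kaplan-Meier estimate

cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
cph.print_summary()                                # Cox regression on age
```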

1,684 citations


Journal ArticleDOI
TL;DR: In this paper, statistical methods for the analysis of large-scale surveys using complex sample designs are investigated; the authors identify several recent methodological lines of inquiry which, taken together, provide a powerful and general statistical basis for complex sample, structural equation modeling analysis.
Abstract: Large-scale surveys using complex sample designs are frequently carried out by government agencies. The statistical analysis technology available for such data is, however, limited in scope. This study investigates and further develops statistical methods that could be used in software for the analysis of data collected under complex sample designs. First, it identifies several recent methodological lines of inquiry which taken together provide a powerful and general statistical basis for a complex sample, structural equation modeling analysis. Second, it extends some of this research to new situations of interest. A Monte Carlo study that empirically evaluates these techniques on simulated data comparable to those in large-scale complex surveys demonstrates that they work well in practice. Due to the generality of the approaches, the methods cover not only continuous normal variables but also continuous nonnormal variables and dichotomous variables. Two methods designed to take into account the complex sample structure were …

1,407 citations


Proceedings ArticleDOI
20 Aug 1995
TL;DR: STM, a novel software method for supporting flexible transactional programming of synchronization operations, is used to provide a general, highly concurrent method for translating sequential object implementations into non-blocking ones based on implementing a k-word compare&swap STM transaction.
Abstract: As we learn from the literature, flexibility in choosing synchronization operations greatly simplifies the task of designing highly concurrent programs. Unfortunately, existing hardware is inflexible and is at best on the level of a Load–Linked/Store–Conditional operation on a single word. Building on the hardware based transactional synchronization methodology of Herlihy and Moss, we offer software transactional memory (STM), a novel software method for supporting flexible transactional programming of synchronization operations. STM is non-blocking, and can be implemented on existing machines using only a Load–Linked/Store–Conditional operation. We use STM to provide a general highly concurrent method for translating sequential object implementations to non-blocking ones based on implementing a k-word compare&swap STM-transaction. Empirical evidence collected on simulated multiprocessor architectures shows that our method always outperforms the non-blocking translation methods in the style of Barnes, and outperforms Herlihy’s translation method for sufficiently large numbers of processors. The key to the efficiency of our software-transactional approach is that unlike Barnes style methods, it is not based on a costly “recursive helping” policy.
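
The paper's STM is non-blocking and built from Load-Linked/Store-Conditional; the Python sketch below is only a simplified stand-in (optimistic reads validated under a single commit lock, so it is blocking) meant to show the shape of an atomic k-word transaction, not the paper's algorithm:

```python
import threading

class TVar:
    """A transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value, self.version = value, 0

_commit_lock = threading.Lock()

def atomically(transaction):
    """Run transaction(reads, writes) until it commits without conflict."""
    while True:
        reads, writes = {}, {}
        transaction(reads, writes)       # records read versions and new values
        with _commit_lock:               # validate, then publish all writes at once
            if all(tv.version == v for tv, v in reads.items()):
                for tv, val in writes.items():
                    tv.value, tv.version = val, tv.version + 1
                return

# A 2-word transaction: atomically move 10 units from a to b.
a, b = TVar(100), TVar(0)

def transfer(reads, writes):
    reads[a], reads[b] = a.version, b.version
    writes[a], writes[b] = a.value - 10, b.value + 10

atomically(transfer)
print(a.value, b.value)                  # 90 10
```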

1,369 citations


Journal ArticleDOI
TL;DR: Since its inception, the software industry has been in crisis and problems with software systems are common and highly-publicized occurrences.
Abstract: Since its inception, the software industry has been in crisis. As Balzer noted 20 years ago, “[Software] is unreliable, delivered late, unresponsive to change, inefficient, and expensive … and has been for the past 20 years” [4]. In a survey of software contractors and government contract officers, over half of the respondents believed that calendar overruns, cost overruns, code that required in-house modifications before being usable, and code that was difficult to modify were common problems in the software projects they supervised [22]. Even today, problems with software systems are common and highly-publicized occurrences.

1,121 citations


Journal ArticleDOI
TL;DR: The authors sketch a model for defining architectures and present an implementation of the basic level of that model; the purpose is to support the abstractions used in practice by software designers.
Abstract: Architectures for software use rich abstractions and idioms to describe system components, the nature of interactions among the components, and the patterns that guide the composition of components into systems. These abstractions are higher level than the elements usually supported by programming languages and tools. They capture packaging and interaction issues as well as computational functionality. Well-established (if informal) patterns guide the architectural design of systems. We sketch a model for defining architectures and present an implementation of the basic level of that model. Our purpose is to support the abstractions used in practice by software designers. The implementation provides a testbed for experiments with a variety of system construction mechanisms. It distinguishes among different types of components and different ways these components can interact. It supports abstract interactions such as data flow and scheduling on the same footing as simple procedure call. It can express and check appropriate compatibility restrictions and configuration constraints. It accepts existing code as components, incurring no runtime overhead after initialization. It allows easy incorporation of specifications and associated analysis tools developed elsewhere. The implementation provides a base for extending the notation and validating the model.

Journal ArticleDOI
TL;DR: The scope and functionality of a versatile environment for testing small- and large-scale nonlinear optimization algorithms are discussed, along with tools to assist in building an interface between its standard input format (SIF) and other optimization packages.
Abstract: The purpose of this article is to discuss the scope and functionality of a versatile environment for testing small- and large-scale nonlinear optimization algorithms. Although many of these facilities were originally produced by the authors in conjunction with the software package LANCELOT, we believe that they will be useful in their own right and should be available to researchers for their development of optimization software. The tools can be obtained by anonymous ftp from a number of sources and may, in many cases, be installed automatically. The scope of a major collection of test problems written in the standard input format (SIF) used by the LANCELOT software package is described. Recognizing that most software was not written with the SIF in mind, we provide tools to assist in building an interface between this input format and other optimization packages. These tools provide a link between the SIF and a number of existing packages, including MINOS and OSL. Additionally, as each problem includes a specific classification that is designed to be useful in identifying particular classes of problems, facilities are provided to build and manage a database of this information. There is a Unix and C shell bias to many of the descriptions in the article, since, for the sake of simplicity, we do not illustrate everything in its fullest generality. We trust that the majority of potential users are sufficiently familiar with Unix that these examples will not lead to undue confusion.

Patent
10 May 1995
TL;DR: In this paper, an automated purchasing control system which can be customized for a corporate customer is presented, where different authorization tests can be established for each position in a hierarchy, with a particular position being required to pass not only its own test, but the test of elements higher in the hierarchical tree.
Abstract: An automated purchasing control system which can be customized for a corporate customer. The system (94) receives an authorization request over the phone lines from a remote point-of-sale terminal (98) and processes the request using unique software. The software has a database customized to a corporate user (70) to establish that company's hierarchical structure. Elements of the hierarchical structure are independently reconfigurable, so that a company can specify different hierarchical relationships in the software for authorization, billing and reporting purposes. Different authorization tests can be established for each position in a hierarchy, with a particular position being required to pass not only its own test, but the test of elements higher in the hierarchical tree.
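
The claim's cascading authorization test is easy to picture in code: a small sketch (all names and spending limits invented) in which a request must pass the test at its own position and at every ancestor in the hierarchical tree:

```python
class Position:
    """A node in the authorization hierarchy with a per-position test."""
    def __init__(self, name, limit, parent=None):
        self.name, self.limit, self.parent = name, limit, parent

    def authorize(self, amount):
        node = self
        while node is not None:              # walk up the hierarchical tree
            if amount > node.limit:
                return False, f"denied at {node.name}"
            node = node.parent
        return True, "approved"

company = Position("HQ", limit=100_000)
dept    = Position("Engineering", limit=20_000, parent=company)
buyer   = Position("J. Smith",    limit=2_500,  parent=dept)

print(buyer.authorize(1_800))    # (True, 'approved')
print(buyer.authorize(5_000))    # (False, 'denied at J. Smith')
```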

Journal ArticleDOI
TL;DR: The implications of reuse for software production are discussed, with an emphasis on the technical challenges, along with proposed models for the economic analysis of reuse approaches.
Abstract: Software productivity has been steadily increasing over the past 30 years, but not enough to close the gap between the demands placed on the software industry and what the state of the practice can deliver; nothing short of an order of magnitude increase in productivity will extricate the software industry from its perennial crisis. Several decades of intensive research in software engineering and artificial intelligence left few alternatives but software reuse as the (only) realistic approach to bring about the gains of productivity and quality that the software industry needs. In this paper, we discuss the implications of reuse on production, with an emphasis on the technical challenges. Software reuse involves building software that is reusable by design and building with reusable software. Software reuse includes reusing both the products of previous software projects and the processes deployed to produce them, leading to a wide spectrum of reuse approaches, from the building blocks (reusing products) approach, on one hand, to the generative or reusable processor (reusing processes), on the other. We discuss the implication of such approaches on the organization, control, and method of software development and discuss proposed models for their economic analysis. Software reuse benefits from methodologies and tools to: (1) build more readily reusable software and (2) locate, evaluate, and tailor reusable software, the last being critical for the building blocks approach. Both sets of issues are discussed in this paper, with a focus on application generators and OO development for the first and a thorough discussion of retrieval techniques for software components, component composition (or bottom-up design), and transformational systems for the second. We conclude by highlighting areas that, in our opinion, are worthy of further investigation.

Book
01 Mar 1995
TL;DR: In this paper, the authors present the results of simulations of 18 different test programs under 375 different models of available parallelism analysis, including branch prediction, register renaming and alias analysis.
Abstract: Growing interest in ambitious multiple-issue machines and heavily pipelined machines requires a careful examination of how much instruction-level parallelism exists in typical programs. Such an examination is complicated by the wide variety of hardware and software techniques for increasing the parallelism that can be exploited, including branch prediction, register renaming, and alias analysis. By performing simulations based on instruction traces, we can model techniques at the limits of feasibility and even beyond. This paper presents the results of simulations of 18 different test programs under 375 different models of available parallelism analysis. This paper replaces Technical Note TN-15, an earlier version of the same material.
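
The trace-driven methodology can be miniaturized: schedule each instruction of a dynamic trace in the earliest cycle after its producers (an idealized machine with no resource limits) and report available parallelism as trace length over critical-path depth. The toy trace below is invented; the paper's models add branch prediction, register renaming, and alias analysis on top of this core.

```python
trace = [  # (destination register, source registers)
    ("r1", []), ("r2", []), ("r3", ["r1", "r2"]),
    ("r4", ["r1"]), ("r5", ["r3", "r4"]), ("r6", ["r2"]),
]

ready = {}                       # register -> cycle its value becomes available
cycles = []
for dest, srcs in trace:
    issue = max((ready[s] for s in srcs), default=0)
    cycles.append(issue)
    ready[dest] = issue + 1      # unit latency

depth = max(cycles) + 1
print(f"parallelism = {len(trace)} / {depth} = {len(trace) / depth:.2f}")
```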

Journal ArticleDOI
TL;DR: A graphics-based software system that enables users to develop and analyze musculoskeletal models without programming and can enhance the productivity of investigators working on diverse problems in biomechanics.

Proceedings ArticleDOI
01 Jan 1995
TL;DR: A solution to this problem is presented that considers all paths implicitly by using integer linear programming; it is implemented in the program cinderella, which currently targets a popular embedded processor, the Intel i960.
Abstract: Embedded computer systems are characterized by the presence of a processor running application specific software. A large number of these systems must satisfy real-time constraints. This paper examines the problem of determining the bound on the running time of a given program on a given processor. An important aspect of this problem is determining the extreme case program paths. The state of the art solution here relies on an explicit enumeration of program paths. This runs out of steam rather quickly since the number of feasible program paths is typically exponential in the size of the program. We present a solution for this problem, which considers all paths implicitly by using integer linear programming. This solution is implemented in the program cinderella which currently targets a popular embedded processor - the Intel i960. The preliminary results of using this tool are presented here.
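
A miniature instance of the implicit path-enumeration idea, assuming SciPy's linprog with integrality support (SciPy >= 1.9): maximize total cycles over basic-block execution counts subject to structural flow constraints and a loop bound. The control-flow graph and costs are invented; cinderella derives them from real i960 binaries.

```python
import numpy as np
from scipy.optimize import linprog

# Invented 4-block CFG: 0 entry, 1 loop head, 2 loop body, 3 exit.
# x_i = execution count of block i; maximize sum(c_i * x_i) subject to
#   x0 = 1, x3 = 1, x1 = x0 + x2 (flow into the loop head),
#   x2 <= 10 * x0 (user-supplied loop bound).
c = np.array([2, 3, 8, 2])                    # cycles per block
A_eq = [[1, 0, 0, 0], [0, 0, 0, 1], [1, -1, 1, 0]]
b_eq = [1, 1, 0]
A_ub = [[-10, 0, 1, 0]]                       # x2 - 10*x0 <= 0
b_ub = [0]

res = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, integrality=np.ones(4), method="highs")
print("block counts:", res.x, "-> WCET bound:", round(-res.fun), "cycles")
```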

01 Jan 1995
TL;DR: In this article, an overview of state-of-the-art approaches in object-oriented technology as well as practical guidance for their use in software design is provided, covering forming class hierarchies and interaction relationships between objects.
Abstract: Provides an overview of state-of-the-art approaches in object-oriented technology as well as practical guidance for their use in software design. Covers forming class hierarchies and interaction relationships between objects, software architectures that allow for reuse of code and design, and documenting object-oriented design on an adequate abstraction level. Includes examples and a case study.

Journal ArticleDOI
TL;DR: The syntax and semantics of the subset of the Rapide language that is designed to satisfy general requirements for architecture definition languages are described, and the use of event pattern mappings to define the relationship between two architectures at different levels of abstraction is illustrated.
Abstract: This paper discusses general requirements for architecture definition languages, and describes the syntax and semantics of the subset of the Rapide language that is designed to satisfy these requirements. Rapide is a concurrent event-based simulation language for defining and simulating the behavior of system architectures. Rapide is intended for modelling the architectures of concurrent and distributed systems, both hardware and software, in order to represent the behavior of distributed systems in as much detail as possible. Rapide is designed to make the greatest possible use of event-based modelling by producing causal event simulations. When a Rapide model is executed it produces a simulation that shows not only the events that make up the model's behavior, and their timestamps, but also which events caused other events, and which events happened independently. The architecture definition features of Rapide are described: event patterns, interfaces, architectures and event pattern mappings. The use of these features to build causal event models of both static and dynamic architectures is illustrated by a series of simple examples from both software and hardware. Also we give a detailed example of the use of event pattern mappings to define the relationship between two architectures at different levels of abstraction. Finally, we discuss briefly how Rapide is related to other event-based languages.

Journal ArticleDOI
TL;DR: Two methods of machine learning are described and used to build estimators of software development effort from historical data; experiments indicate that these techniques are competitive with traditional estimators on one dataset, but also that they are sensitive to the data on which they are trained.
Abstract: Accurate estimation of software development effort is critical in software engineering. Underestimates lead to time pressures that may compromise full functional development and thorough testing of software. In contrast, overestimates can result in noncompetitive contract bids and/or over allocation of development resources and personnel. As a result, many models for estimating software development effort have been proposed. This article describes two methods of machine learning, which we use to build estimators of software development effort from historical data. Our experiments indicate that these techniques are competitive with traditional estimators on one dataset, but also illustrate that these methods are sensitive to the data on which they are trained. This cautionary note applies to any model-construction strategy that relies on historical data. All such models for software effort estimation should be evaluated by exploring model sensitivity on a variety of historical data.
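
In the article's spirit of learning an effort estimator from historical project data, here is a sketch with a scikit-learn regression tree standing in for the paper's learners; the tiny dataset (size in KLOC, team experience in years, effort in person-months) is fabricated purely for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Fabricated history: (size in KLOC, mean team experience in years) -> effort
X = np.array([[10, 2], [25, 5], [40, 3], [60, 8], [80, 4], [120, 6]])
y = np.array([24, 40, 95, 110, 230, 320])          # person-months

model = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print("predicted effort (50 KLOC, 5 yr):", model.predict([[50, 5]])[0])
```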

Journal ArticleDOI
R.G. Dromey1
TL;DR: The model supports building quality into software, definition of language-specific coding standards, systematically classifying quality defects, and the development of automated code auditors for detecting defects in software.
Abstract: A model for software product quality is defined; it has been formulated by associating a set of quality-carrying properties with each of the structural forms that are used to define the statements and statement components of a programming language. These quality-carrying properties are in turn linked to the high-level quality attributes of the International Standard for Software Product Evaluation ISO-9126. The model supports building quality into software, definition of language-specific coding standards, systematically classifying quality defects, and the development of automated code auditors for detecting defects in software.

01 Jan 1995
TL;DR: In this article, an approach that helps an engineer use a high-level model of the structure of an existing software system as a lens through which to see a model of that system's source code is presented.
Abstract: Software engineers often use high-level models (for instance, box and arrow sketches) to reason and communicate about an existing software system. One problem with high-level models is that they are almost always inaccurate with respect to the system's source code. We have developed an approach that helps an engineer use a high-level model of the structure of an existing software system as a lens through which to see a model of that system's source code. In particular, an engineer defines a high-level model and specifies how the model maps to the source. A tool then computes a software reflexion model that shows where the engineer's high-level model agrees with and where it differs from a model of the source. The paper provides a formal characterization of reflexion models, discusses practical aspects of the approach, and relates experiences of applying the approach and tools to a number of different systems. The illustrative example used in the paper describes the application of reflexion models to NetBSD, an implementation of Unix comprised of 250,000 lines of C code. In only a few hours, an engineer computed several reflexion models that provided him with a useful, global overview of the structure of the NetBSD virtual memory subsystem. The approach has also been applied to aid in the understanding and experimental reengineering of the Microsoft Excel spreadsheet product.

Proceedings ArticleDOI
01 Oct 1995
TL;DR: An approach is developed that helps an engineer use a high-level model of the structure of an existing software system as a lens through which to see a model of that system's source code.
Abstract: Software engineers often use high-level models (for instance, box and arrow sketches) to reason and communicate about an existing software system. One problem with high-level models is that they are almost always inaccurate with respect to the system's source code. We have developed an approach that helps an engineer use a high-level model of the structure of an existing software system as a lens through which to see a model of that system's source code. In particular, an engineer defines a high-level model and specifies how the model maps to the source. A tool then computes a software reflexion model that shows where the engineer's high-level model agrees with and where it differs from a model of the source. The paper provides a formal characterization of reflexion models, discusses practical aspects of the approach, and relates experiences of applying the approach and tools to a number of different systems. The illustrative example used in the paper describes the application of reflexion models to NetBSD, an implementation of Unix comprised of 250,000 lines of C code. In only a few hours, an engineer computed several reflexion models that provided him with a useful, global overview of the structure of the NetBSD virtual memory subsystem. The approach has also been applied to aid in the understanding and experimental reengineering of the Microsoft Excel spreadsheet product.
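
The computation at the heart of a reflexion model is small enough to sketch: map source-level dependencies up through the engineer's entity map and compare them against the hypothesized high-level edges. The entity and file names below are invented; the convergence/divergence/absence vocabulary follows the paper.

```python
# Invented example: high-level entities VM, Pager, FileSys, Disk; a map from
# source files to entities; and dependencies extracted from the source.
hypothesized = {("VM", "FileSys"), ("VM", "Pager"), ("Pager", "Disk")}
entity_of = {"vm_map.c": "VM", "pager.c": "Pager", "vnode.c": "FileSys"}
src_deps = {("vm_map.c", "vnode.c"), ("vm_map.c", "pager.c"),
            ("pager.c", "vnode.c")}

found = {(entity_of[a], entity_of[b]) for a, b in src_deps}
print("convergences:", hypothesized & found)    # predicted and present
print("divergences: ", found - hypothesized)    # present but not predicted
print("absences:    ", hypothesized - found)    # predicted but absent
```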

Patent
20 Mar 1995
TL;DR: In this article, a digital camera is used for capturing and storing images in a removable storage device, which is also preloaded with enhancement files for effecting the operation of the system.
Abstract: An electronic imaging system includes a digital electronic camera for capturing and storing images in a removable storage device, which is also preloaded with enhancement files for effecting the operation of the system. The camera includes an optical section for establishing the optical parameters of image capture, an image sensing section for electrically capturing the image, and a signal processing section for operating upon the electrically captured image prior to storage. The several sections of the camera are coordinated and controlled by a programmable processor, which is capable of receiving the enhancement files preloaded into the storage device. These files may contain software for updating the operating code of the camera, for modifying the electrically captured image in selected ways, for modifying camera operation in special situations, or for communicating non-captured image-like data, such as text and image overlays, to the camera.

Journal ArticleDOI
TL;DR: In this paper, a 3 × 2^4 partial factorial, randomized experimental design was used to evaluate the performance of Scenario-based and Checklist-based methods for software requirements inspection.
Abstract: Software requirements specifications (SRS) are often validated manually. One such process is inspection, in which several reviewers independently analyze all or part of the specification and search for faults. These faults are then collected at a meeting of the reviewers and author(s). Usually, reviewers use Ad Hoc or Checklist methods to uncover faults. These methods force all reviewers to rely on nonsystematic techniques to search for a wide variety of faults. We hypothesize that a Scenario-based method, in which each reviewer uses different, systematic techniques to search for different, specific classes of faults, will have a significantly higher success rate. We evaluated this hypothesis using a 3 × 2^4 partial factorial, randomized experimental design. Forty-eight graduate students in computer science participated in the experiment. They were assembled into sixteen three-person teams. Each team inspected two SRS using some combination of Ad Hoc, Checklist or Scenario methods. For each inspection we performed four measurements: (1) individual fault detection rate, (2) team fault detection rate, (3) percentage of faults first identified at the collection meeting (meeting gain rate), and (4) percentage of faults first identified by an individual, but never reported at the collection meeting (meeting loss rate). The experimental results are that (1) the Scenario method had a higher fault detection rate than either Ad Hoc or Checklist methods, (2) Scenario reviewers were more effective at detecting the faults their scenarios are designed to uncover, and were no less effective at detecting other faults than either Ad Hoc or Checklist reviewers, (3) Checklist reviewers were no more effective than Ad Hoc reviewers, and (4) collection meetings produced no net improvement in the fault detection rate: meeting gains were offset by meeting losses.
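
The four measurements reduce to set arithmetic over fault reports. The sketch below is one plausible reading of the definitions (fault IDs, the seeded-fault count, and the denominator convention are invented for illustration):

```python
reviewers = [{1, 2, 5}, {2, 3}, {1, 4}]        # faults found by each reviewer
meeting   = {1, 2, 3, 5, 7}                    # faults reported at the meeting
seeded    = 10                                 # known faults in the document

union = set().union(*reviewers)
all_found = union | meeting

individual_rates = [len(r) / seeded for r in reviewers]
team_rate = len(all_found) / seeded
meeting_gain = len(meeting - union) / len(all_found)   # first found at meeting
meeting_loss = len(union - meeting) / len(all_found)   # found, never reported
print(individual_rates, team_rate, meeting_gain, meeting_loss)
```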

Journal ArticleDOI
TL;DR: The Method of Layers (MOL) is proposed to provide performance estimates for distributed applications that contain one or more layers of software servers and uses the mean value analysis (MVA) linearizer algorithm as a subprogram to assist in predicting model performance measures.
Abstract: Distributed applications are being developed that contain one or more layers of software servers. Software processes within such systems suffer contention delays both for shared hardware and at the software servers. The responsiveness of these systems is affected by the software design, the threading level and number of instances of software processes, and the allocation of processes to processors. The Method of Layers (MOL) is proposed to provide performance estimates for such systems. The MOL uses the mean value analysis (MVA) linearizer algorithm as a subprogram to assist in predicting model performance measures.
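
The MVA building block the MOL relies on is compact. Below is the exact single-class Reiser-Lavenberg recursion (the linearizer used in the MOL is an approximate multi-class variant of this), with illustrative service demands:

```python
demands = [0.20, 0.10, 0.05]      # seconds of service demand per centre
N = 8                             # closed population of customers

Q = [0.0] * len(demands)          # mean queue lengths at population 0
for n in range(1, N + 1):
    R = [d * (1 + q) for d, q in zip(demands, Q)]   # residence time per centre
    X = n / sum(R)                                  # throughput at population n
    Q = [X * r for r in R]                          # Little's law per centre

print(f"throughput {X:.3f}/s, response time {sum(R) * 1000:.1f} ms")
```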

Journal ArticleDOI
TL;DR: The method seems to be effective in identifying a small number of code components that are unique to a particular program feature, though it may not find all components that make up the feature's delocalized plan.
Abstract: Maintainers of old code often need to discover where particular program features are implemented. This paper presents a method, called ‘software reconnaissance’ for answering this question through an analysis of the execution of different test cases. The method is quite easy to implement, requiring only a test coverage monitor, some simple tools, and a surprisingly small number of test cases. A statistical case study is presented that shows the kind of results that can be obtained on a typical medium-sized program. The method seems to be effective in identifying a small number of code components that are unique to a particular program feature, though it may not find all components that make up the feature's delocalized plan. A small protocol case study shows that professional programmers can learn to use the method quickly and can use the information that it produces. Software reconnaissance may be a simple but useful addition to the maintainer's tool kit in that it provides starting points for understanding a large program and a way of recovering some requirements traceability information from old code. For the researcher, it also provides a novel functionality ‘view’ of software that maps features to program components at different levels of precision.
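
The heart of software reconnaissance reduces to set operations on test-coverage data, as in this sketch of one common variant (test and component names invented): components executed by every test that exercises the feature, minus components executed by any test that does not.

```python
coverage = {                      # test case -> components it executed
    "t1_with_feature": {"parse", "validate", "render_bold", "emit"},
    "t2_with_feature": {"parse", "render_bold", "emit"},
    "t3_without":      {"parse", "validate", "emit"},
    "t4_without":      {"parse", "emit"},
}
with_feature = ["t1_with_feature", "t2_with_feature"]
without      = ["t3_without", "t4_without"]

executed_with = set.intersection(*(coverage[t] for t in with_feature))
executed_without = set().union(*(coverage[t] for t in without))
print("unique to feature:", executed_with - executed_without)  # {'render_bold'}
```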

Patent
06 Sep 1995
TL;DR: In this paper, a distributed computer system employs a license management system to account for software product usage, where a management policy having a variety of alternative styles and contexts is provided, and each licensed program upon start-up makes a call to a license server to check on whether usage is permitted, and the license server checks a database of the licenses, called product use authorizations.
Abstract: A distributed computer system employs a license management system to account for software product usage. A management policy having a variety of alternative styles and contexts is provided. Each licensed program upon start-up makes a call to a license server to check on whether usage is permitted, and the license server checks a database of the licenses, called product use authorizations, that it administers. If the particular use requested is permitted, a grant is returned to the requesting user node. The product use authorization is structured to define a license management policy allowing a variety of license alternatives by values called "style", "context", "duration" and "usage requirements determination method". The license administration may be delegated by the license server to a subsection of the organization, by creating another license management facility duplicating the main facility. The license server must receive a license document (a product use authorization) from an issuer of licenses, where a license document generator is provided. A mechanism is provided for one user node to make a call to use a software product located on another user node; this is referred to as a "calling card", by which a user node obtains permission to make a procedure call to use a program on another node.

Journal ArticleDOI
TL;DR: The modeling approach is applied to study the diffusion of two types of software in the United Kingdom and suggests that although six of every seven software users utilized pirated copies, these pirates were responsible for generating more than 80% of new software buyers, thereby significantly influencing the legal diffusion of the software.
Abstract: Software piracy by users has been identified as the worst problem facing the software industry today. Software piracy permits the shadow diffusion of a software parallel to its legal diffusion in t...

Proceedings ArticleDOI
01 Oct 1995
TL;DR: This work uses formal specifications to describe the behavior of software components and, hence, to determine whether two components match, and gives precise definitions of not just exact match, but, more relevantly, various flavors of relaxed match.
Abstract: Specification matching is a way to compare two software components. In the context of software reuse and library retrieval it can help determine whether one component can be substituted for another or how one can be modified to fit the requirements of the other. In the context of object-oriented programming, it can help determine when one type is a behavioral subtype of another. In the context of system interoperability, it can help determine whether the interfaces of two components mismatch. We use formal specifications to describe the behavior of software components, and hence, to determine whether two components match. We give precise definitions of not just exact match, but more relevantly, various flavors of relaxed match. These definitions capture the notions of generalization, specialization, substitutability, subtyping, and interoperability of software components. We write our formal specifications of components in terms of pre- and post-condition predicates. Thus, we rely on theorem proving to determine match and mismatch. We give examples from our implementation of specification matching using the Larch Prover.
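
The flavor of reasoning involved can be shown with a modern SMT solver standing in for the Larch Prover: one relaxed variant (often called plug-in match) asks that the library component's precondition be weaker, and its postcondition stronger, than the query's. The predicates below are invented for illustration.

```python
from z3 import Int, Implies, And, prove

x, r = Int("x"), Int("r")

# Query Q: given a positive x, return some r greater than x.
pre_Q, post_Q = x > 0, r > x
# Library component S: for any non-negative x, return r == x + 1.
pre_S, post_S = x >= 0, r == x + 1

prove(Implies(pre_Q, pre_S))                  # S's precondition is weaker
prove(Implies(And(pre_S, post_S), post_Q))    # S's postcondition is stronger
```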