
Showing papers on "Application software" published in 1988


Journal ArticleDOI
TL;DR: An outline is given of the process steps involved in the spiral model, an evolving risk-driven approach that provides a framework for guiding the software process, and its application to a software project is shown.
Abstract: A short description is given of software process models and the issues they address. An outline is given of the process steps involved in the spiral model, an evolving risk-driven approach that provides a framework for guiding the software process, and its application to a software project is shown. A summary is given of the primary advantages and implications involved in using the spiral model, and of the primary difficulties in using it at its current incomplete level of elaboration.
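The abstract names the cycle's steps without showing its control structure; the C caricature below may help fix the shape in mind. The scalar risk value, its decay, and the phase wording are invented here; this is a reading aid, not the paper's model.

#include <stdio.h>

int main(void)
{
    double risk = 0.9;                  /* invented scalar risk measure */
    for (int cycle = 1; risk > 0.2; cycle++) {
        printf("cycle %d: determine objectives, alternatives, constraints\n", cycle);
        printf("cycle %d: evaluate alternatives; resolve risk (%.2f)\n", cycle, risk);
        risk *= 0.5;                    /* risk drives how far each cycle goes */
        printf("cycle %d: develop and verify next-level product; plan next cycle\n", cycle);
    }
    puts("remaining risk acceptable: proceed toward delivery");
    return 0;
}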

5,055 citations


Journal ArticleDOI
J.C. Cleaveland1
TL;DR: The development of application generators, which offer increased productivity through customized reusable software, is addressed, and a method, called the dialogue-code generation system, for building application generators is described.
Abstract: The development of application generators, which offer increased productivity through customized reusable software, is addressed. Their advantages and drawbacks are first discussed. The author describes a method, called the dialogue-code generation system, for building application generators, and discusses its use for various projects. Unix tools for building application generators are briefly considered.
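To make the idea of an application generator concrete, here is a minimal, hypothetical sketch in C: a table of field specifications (standing in for a captured dialogue) drives the emission of boilerplate code. The spec format and the emitted code are invented for illustration and are not Cleaveland's dialogue-code notation.

#include <stdio.h>

struct field { const char *name; const char *ctype; };

/* A tiny "dialogue" captured as data: the record a user described. */
static const struct field spec[] = {
    { "id",    "long"   },
    { "name",  "char *" },
    { "price", "double" },
};

int main(void)
{
    size_t i, n = sizeof spec / sizeof spec[0];

    puts("struct record {");
    for (i = 0; i < n; i++)
        printf("    %s %s;\n", spec[i].ctype, spec[i].name);
    puts("};");
    /* Generate one setter per field from the same table. */
    for (i = 0; i < n; i++)
        printf("void set_%s(struct record *r, %s v) { r->%s = v; }\n",
               spec[i].name, spec[i].ctype, spec[i].name);
    return 0;
}

Running the generator prints a struct definition and its setters; adding a row to the table regenerates all of the boilerplate, which is the productivity argument in miniature.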

205 citations


Journal ArticleDOI
TL;DR: The AIS-5000 is a commercially available massively parallel processor designed to operate in an industrial environment; it has fine-grained parallelism, with up to 1024 processing elements arranged in a single-instruction, multiple-data (SIMD) architecture.
Abstract: The AIS-5000 is a commercially available massively parallel processor which was designed to operate in an industrial environment. It has fine-grained parallelism, with up to 1024 processing elements arranged in a single-instruction, multiple-data (SIMD) architecture. The processing elements are arranged in a one-dimensional chain that, for computer vision applications, can be as wide as the image itself. The overall architecture of the system is described. Various components of the system are discussed, including details of the processing elements, data I/O (input/output) pathways, and parallel memory organization. A virtual two-dimensional model for programming image-based algorithms for the system is also presented. Performance benchmarks are given for certain simple and complex functions.
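The "virtual two-dimensional model" can be pictured with a small sketch: one processing element per image column, with rows streamed through the chain one SIMD step at a time. The serial C below stands in for the lockstep hardware; the dimensions and the vertical-difference operation are invented for illustration.

#include <string.h>

#define W 8   /* image width = number of PEs in the chain (invented) */
#define H 8

/* Each "PE" keeps the previous row's pixel, so a vertical difference
 * can be formed as rows stream past, one row per SIMD step. */
void vertical_diff(const unsigned char img[H][W], int out[H][W])
{
    unsigned char prev[W];
    memset(prev, 0, sizeof prev);
    for (int y = 0; y < H; y++) {        /* one SIMD step per row   */
        for (int x = 0; x < W; x++) {    /* all PEs act in lockstep */
            out[y][x] = img[y][x] - prev[x];
            prev[x]   = img[y][x];
        }
    }
}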

87 citations


Proceedings ArticleDOI
27 Jun 1988
TL;DR: A unified architectural approach extends a well-known hardware fault-tolerant architecture without violating the fundamental hardware fault-tolerance design principles, and it provides a possible solution to the problem of correlated software errors.
Abstract: A computer architecture, FTP-AP, has been designed that can efficiently implement N-version fault-tolerant software and still tolerate random hardware failures with extremely high coverage. A unified architectural approach extends a well-known hardware fault-tolerant architecture without violating the fundamental hardware fault-tolerance design principles, and it provides a possible solution to the problem of correlated software errors.
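The software side of the N-version idea reduces to running independently developed versions of a computation and voting on their outputs. The C sketch below shows a tolerance-based majority voter; the stand-in versions, the tolerance, and the failure handling are invented and do not depict FTP-AP's hardware voting machinery.

#include <stdio.h>

#define N 3

typedef double (*version_fn)(double);

/* Three stand-in "versions" of the same computation (illustrative). */
static double v1(double x) { return x * x; }
static double v2(double x) { return x * x; }
static double v3(double x) { return x * x + 1e-3; } /* faulty version */

static int majority(const double r[N], double tol, double *out)
{
    for (int i = 0; i < N; i++) {
        int votes = 0;
        for (int j = 0; j < N; j++)
            if (r[j] > r[i] - tol && r[j] < r[i] + tol)
                votes++;
        if (2 * votes > N) { *out = r[i]; return 1; }
    }
    return 0; /* no majority: a correlated failure is suspected */
}

int main(void)
{
    version_fn v[N] = { v1, v2, v3 };
    double r[N], ans;
    for (int i = 0; i < N; i++)
        r[i] = v[i](3.0);
    if (majority(r, 1e-6, &ans))
        printf("voted result: %g\n", ans);
    return 0;
}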

84 citations


Journal ArticleDOI
TL;DR: ABE defines a virtual machine for module-oriented programming and a cooperative operating system that provides access to the capabilities of that virtual machine, and it provides a number of system design and development frameworks, which embody such programming metaphors as control flow, blackboards, and dataflow.
Abstract: The ABE multilevel architecture for developing intelligent systems addresses the key problems of intelligent systems engineering: large-scale applications and the reuse and integration of software components. ABE defines a virtual machine for module-oriented programming and a cooperative operating system that provides access to the capabilities of that virtual machine. On top of the virtual machine, ABE provides a number of system design and development frameworks, which embody such programming metaphors as control flow, blackboards, and dataflow. These frameworks support the construction of capabilities, including knowledge processing tools, which span a range from primitive modules to skeletal systems. Finally, applications can be built on skeletal systems. In addition, ABE supports the importation of existing software, including both conventional and knowledge processing tools.
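Of the metaphors listed, the blackboard is perhaps the least self-explanatory. The toy C below shows its shape: knowledge sources watch a shared store and post new entries when their triggering data appears. All names are invented; ABE's actual module and framework interfaces are not described in the abstract.

#include <stdio.h>
#include <string.h>

#define MAX 16
static const char *board[MAX];   /* the shared blackboard */
static int n;

static void post(const char *fact) { if (n < MAX) board[n++] = fact; }
static int  seen(const char *fact)
{
    for (int i = 0; i < n; i++)
        if (strcmp(board[i], fact) == 0) return 1;
    return 0;
}

/* Two "knowledge sources", each firing once off the blackboard. */
static void ks_parse(void)
{
    if (seen("raw-input") && !seen("parsed")) post("parsed");
}
static void ks_reason(void)
{
    if (seen("parsed") && !seen("answer")) post("answer");
}

int main(void)
{
    post("raw-input");
    for (int round = 0; round < 2; round++) { ks_parse(); ks_reason(); }
    for (int i = 0; i < n; i++) printf("%s\n", board[i]);
    return 0;
}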

48 citations


Journal ArticleDOI
TL;DR: The authors identify problems with present parallel processing systems and present Armstrong, a hardware-software system designed to address some of those problems; its performance is demonstrated on a real application, the computation of the 2-D discrete Fourier transform of an image.
Abstract: The authors identify problems with present parallel processing systems, and present Armstrong, a hardware-software system that is designed to address some of the problems discussed. They briefly describe the Armstrong hardware and discuss, in depth, the operating system software and the performance of the system on a real application, namely, the computation of the 2-D discrete Fourier transform of an image.
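The 2-D DFT is a natural parallel benchmark because it decomposes into independent 1-D DFTs along rows and then columns, each of which can go to a different processor. The C sketch below uses a direct O(N^2) 1-D transform; the size is illustrative and this is not Armstrong's code.

#include <complex.h>

#define N 8
static const double PI = 3.14159265358979323846;

static void dft1d(double complex v[N])
{
    double complex out[N];
    for (int k = 0; k < N; k++) {
        out[k] = 0;
        for (int t = 0; t < N; t++)
            out[k] += v[t] * cexp(-2.0 * I * PI * k * t / N);
    }
    for (int k = 0; k < N; k++) v[k] = out[k];
}

void dft2d(double complex img[N][N])
{
    double complex col[N];
    for (int y = 0; y < N; y++)          /* rows: independent tasks    */
        dft1d(img[y]);
    for (int x = 0; x < N; x++) {        /* columns: independent too   */
        for (int y = 0; y < N; y++) col[y] = img[y][x];
        dft1d(col);
        for (int y = 0; y < N; y++) img[y][x] = col[y];
    }
}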

47 citations


Proceedings ArticleDOI
01 Nov 1988
TL;DR: A distributed parallel processing system based on the Linda programming constructs has been implemented on a local area network and has achieved performance considerably faster than that of a Cray-1S.
Abstract: A distributed parallel processing system based on the Linda programming constructs has been implemented on a local area network of computers. This system allows a single application program to utilize many machines on the network simultaneously. Several applications have been implemented on the network at Sandia National Laboratories and have achieved performance considerably faster than that of a Cray-1S. Several collections of machines have been used, including up to eleven DEC VAXes, three Sun/3 workstations, and a PC.
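Linda's appeal is its small set of tuple-space constructs (out to deposit a tuple, in to withdraw a matching one) that let a single program span many machines. The single-process C toy below mimics the shape of a master/worker division of labor; tuples are simplified to (tag, value) pairs, and the real system matches tuples across a network rather than in one array.

#include <stdio.h>
#include <string.h>

#define MAX 32
struct tuple { char tag[16]; int val; };
static struct tuple space[MAX];   /* the "tuple space" */
static int count;

static void out(const char *tag, int val)
{
    strcpy(space[count].tag, tag);
    space[count++].val = val;
}

/* in(): withdraw one tuple whose tag matches; returns 0 if none. */
static int in(const char *tag, int *val)
{
    for (int i = 0; i < count; i++)
        if (strcmp(space[i].tag, tag) == 0) {
            *val = space[i].val;
            space[i] = space[--count];
            return 1;
        }
    return 0;
}

int main(void)
{
    int x, sum = 0;
    for (int i = 1; i <= 4; i++) out("task", i);   /* master posts work */
    while (in("task", &x)) out("result", x * x);   /* "workers" compute */
    while (in("result", &x)) sum += x;             /* master gathers    */
    printf("sum of squares: %d\n", sum);
    return 0;
}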

46 citations


Journal ArticleDOI
01 Aug 1988
TL;DR: A review is given of image-processing systems developed to date, highlighting the weak points of such systems and the trends that have dictated their evolution through the years, producing different generations of machines.
Abstract: A review of image-processing systems developed to date is given, highlighting the weak points of such systems and the trends that have dictated their evolution through the years, producing different generations of machines. Each generation may be characterized by its hardware architecture, its programmability features, and its respective application areas. The need for multiprocessing hierarchical systems is discussed, focusing on pyramidal architectures. Their computational paradigms, their virtual and physical implementations, and their programming and languages are discussed.
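The level-to-level step that pyramidal machines map onto their hardware layers is a resolution reduction. A canonical 2x2 averaging reduction is sketched below in C as one concrete instance of that step; the sizes are illustrative.

#define W 8
#define H 8

/* One pyramid level: each output pixel averages a 2x2 input block. */
void reduce(const unsigned char in[H][W], unsigned char out[H/2][W/2])
{
    for (int y = 0; y < H / 2; y++)
        for (int x = 0; x < W / 2; x++)
            out[y][x] = (in[2*y][2*x]   + in[2*y][2*x+1] +
                         in[2*y+1][2*x] + in[2*y+1][2*x+1]) / 4;
}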

45 citations


Journal ArticleDOI
TL;DR: In this paper, the authors reviewed recent progress in network security analysis and optimization in modern power system energy management systems, including state estimation, observability analysis, bad data processing, topology error detection, parameter error estimation, external network modelling, contingency selection, optimal power flow, corrective rescheduling and constrained economic dispatch.

39 citations


Journal ArticleDOI
TL;DR: The authors report on ESPRIT Project 401, building the Application Software Prototype Implementation System (ASPIS) for computer-aided software engineering (CASE), focusing on a knowledge-based assistant called the analysis assistant, the primary goal of which is to provide the user with domain-dependent suggestions and advice during a particular method phase.
Abstract: The authors report on ESPRIT Project 401, building the Application Software Prototype Implementation System (ASPIS) for computer-aided software engineering (CASE). ASPIS uses artificial intelligence techniques in a software-development environment. The goal is to encourage a more flexible and effective software-development life cycle, smoothing the transition between user needs, analysis, and design. The focus is on a knowledge-based assistant called the analysis assistant, the primary goal of which is to provide the user with domain-dependent suggestions and advice during a particular method phase.

38 citations


Proceedings ArticleDOI
14 Sep 1988
TL;DR: A model which separates application concerns from those at the configuration level is presented; it permits the formulation of general structural rules for change without the need to consider application state, and the specification of application actions without knowledge of the actual changes which might be introduced.
Abstract: The requirements needed to manage arbitrary changes to a system configuration are analyzed, and a model which separates application concerns from those at the configuration level is presented. This permits the formulation of general structural rules for change without the need to consider application state, as well as the specification of application actions without knowledge of the actual changes which might be introduced. The changes can be effected in such a way as to leave the modified system in a consistent state and cause minimal disturbance to the application during change. The model is applied to an example problem, the 'evolving philosophers' problem. The principles described in this model have been implemented and tested in the Conic environment for distributed systems.
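The separation the paper argues for can be caricatured in a few lines: the configuration layer drives the affected nodes to a passive state and only then applies the structural change, without ever inspecting application state. The states and names in the C sketch below are invented; Conic's actual change protocol is richer.

#include <stdio.h>

enum node_state { ACTIVE, PASSIVE };

struct node { const char *name; enum node_state st; };

/* Configuration layer: a link may be changed only when both ends are
 * passive, i.e. initiating no new transactions. */
static void passivate(struct node *n) { n->st = PASSIVE; }
static int  quiescent(const struct node *a, const struct node *b)
{
    return a->st == PASSIVE && b->st == PASSIVE;
}

int main(void)
{
    struct node a = { "a", ACTIVE }, b = { "b", ACTIVE };
    passivate(&a);
    passivate(&b);
    if (quiescent(&a, &b))
        puts("safe to relink a-b: change leaves the system consistent");
    return 0;
}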

Proceedings ArticleDOI
24 Oct 1988
TL;DR: A prototype data-flow analysis program has been developed for a subset of the Ada language and an example analyzed by the prototype shows its possible use by program maintenance personnel.
Abstract: Algorithms are presented that limit the scope of recalculation of data-flow information for representative program changes. A prototype data-flow analysis program has been developed for a subset of the Ada language. An example analyzed by the prototype shows its possible use by program maintenance personnel.
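For readers unfamiliar with data-flow analysis, the underlying computation is an iteration to a fixed point over the program's flow graph. The C sketch below runs reaching definitions on a tiny invented graph; the paper's contribution is limiting how much of this recomputation a program change triggers, which the sketch does not show.

#include <stdio.h>

#define NB 4  /* basic blocks */

/* gen/kill per block, definitions as bits; succ[p][b]=1 is edge p->b. */
static unsigned gen[NB]  = { 0x1, 0x2, 0x4, 0x0 };
static unsigned kill[NB] = { 0x0, 0x1, 0x2, 0x0 };
static int succ[NB][NB]  = { {0,1,1,0}, {0,0,0,1}, {0,0,0,1}, {0} };

int main(void)
{
    unsigned in[NB] = {0}, out[NB] = {0};
    int changed = 1;
    while (changed) {                 /* iterate to a fixed point */
        changed = 0;
        for (int b = 0; b < NB; b++) {
            unsigned i = 0;
            for (int p = 0; p < NB; p++)
                if (succ[p][b]) i |= out[p];      /* meet over preds */
            unsigned o = gen[b] | (i & ~kill[b]); /* transfer fn     */
            if (o != out[b] || i != in[b])
                { in[b] = i; out[b] = o; changed = 1; }
        }
    }
    for (int b = 0; b < NB; b++)
        printf("block %d: in=%x out=%x\n", b, in[b], out[b]);
    return 0;
}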

Journal ArticleDOI
TL;DR: Guidance is given on meeting challenges in data collection, analysis, and sharing by means of an illustrative multiple-case study of application software maintenance that combines quantitative with qualitative data.

Proceedings ArticleDOI
24 Oct 1988
TL;DR: A revision is presented of E.B. Swanson's classification scheme for software maintenance (1976), which was developed by analyzing the changes that occur between different versions of COBOL programs that were produced in commercial environments.
Abstract: A revision is presented of E.B. Swanson's classification scheme for software maintenance (1976). The proposed classification system can be objectively determined from the changes that occur between versions of the software. The system was developed by analyzing the changes that occur between different versions of COBOL programs that were produced in commercial environments.
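The flavor of an objectively determinable classification can be sketched as a decision over observable version differences. The rules in the C below are invented simplifications built on Swanson's well-known corrective/adaptive/perfective categories, not the paper's actual decision procedure.

#include <stdio.h>

/* Observable properties of a diff between two versions (invented). */
struct delta { int fixes_fault; int env_changed; int new_function; };

static const char *classify(struct delta d)
{
    if (d.fixes_fault)  return "corrective";
    if (d.env_changed)  return "adaptive";
    if (d.new_function) return "perfective (enhancement)";
    return "perfective (non-functional)";
}

int main(void)
{
    struct delta d = { 0, 1, 0 };   /* e.g. recompiled for a new OS */
    printf("change class: %s\n", classify(d));
    return 0;
}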

Proceedings ArticleDOI
03 Jan 1988
TL;DR: The design, implementation, and preliminary performance evaluation of an experimental flexible processor are presented; such a processor can potentially provide the performance advantages of special-purpose processors as well as the cost advantages of general-purpose processors.
Abstract: A new approach to application-specific processor design is presented in this paper. Existing application-specific processors are based either on existing general-purpose processors or on custom-designed special-purpose processors. The availability of a new technology, the Xilinx Logic Cell Array, presents the opportunity for a new alternative. The Flexible Processor Cell is a prototype of an extremely reconfigurable application-specific processor. Flexible processors can potentially provide the performance advantages of special-purpose processors as well as the cost advantages of general-purpose processors. The flexible processor concept opens many potential areas for future research in processor architecture and implementation. This paper presents the design, implementation, and preliminary performance evaluation of an experimental flexible processor.

Proceedings ArticleDOI
01 Jan 1988
TL;DR: This work investigates the principal approaches for processing a query on complex objects (molecules) in parallel and chooses a multiprocessor system sharing an instruction-addressable common memory, which is used for buffer management, synchronization, and logging/recovery.
Abstract: Complex objects to support non-standard database applications require the use of substantial computing resources, because their powerful operations and their related integrity constraints must be performed and maintained in an interactive environment. Since the exploitation of parallelism within such operations seems promising, we investigate the principal approaches for processing a query on complex objects (molecules) in parallel. A number of arguments favor methods based on inter-molecule parallelism over intra-molecule parallelism. Retrieval of molecules may be optimized by multiple storage structures and access paths; hence, maintenance of such storage redundancy seems to be another good application area in which to explore the use of parallelism. Deferred update seems to be a bad idea, whereas concurrent update strategies incorporate salient application features. For performance reasons, we have chosen a multiprocessor system sharing an instruction-addressable common memory, which is used for buffer management, synchronization, and logging/recovery. Activation of concurrent requests is supported by a nested transaction concept which allows a safe and effective execution control within parallel actions of an operation.
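Inter-molecule parallelism, the approach the arguments favor, means running the same operation concurrently over independent complex objects. The C sketch below uses POSIX threads as a stand-in for the paper's multiprocessor; the molecule contents and the per-molecule operation are invented.

#include <pthread.h>
#include <stdio.h>

#define NMOL 4

static int weight[NMOL] = { 3, 1, 4, 1 };  /* invented molecule data */
static int result[NMOL];

/* One worker per molecule; molecules are processed independently. */
static void *process_molecule(void *arg)
{
    int i = *(int *)arg;
    result[i] = weight[i] * 10;   /* stand-in per-molecule operation */
    return NULL;
}

int main(void)
{
    pthread_t th[NMOL];
    int idx[NMOL];
    for (int i = 0; i < NMOL; i++) {
        idx[i] = i;
        pthread_create(&th[i], NULL, process_molecule, &idx[i]);
    }
    for (int i = 0; i < NMOL; i++)
        pthread_join(th[i], NULL);
    for (int i = 0; i < NMOL; i++)
        printf("molecule %d -> %d\n", i, result[i]);
    return 0;
}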

Journal ArticleDOI
TL;DR: A knowledge-based design methodology for computer communication systems is proposed, along with a framework for a knowledge-based design support system, called KDSS, which consists of many expert systems corresponding to the design phases given by the methodology.
Abstract: A knowledge-based design methodology for computer communication systems is proposed, along with a framework for a knowledge-based design support system. The design methodology is based on a user-requirements-oriented approach. From these requirements, formal specifications are defined as a user's-view virtual machine (UVM) and a designer's-view virtual machine (DVM). These virtual machines are constructed as knowledge models from the specifications. Following this methodology, a knowledge-based design support system for computer communication systems, called KDSS, has been constructed. This design support system consists of many expert systems corresponding to the design phases given by the methodology. A design example is given to illustrate the methodology.

Proceedings ArticleDOI
24 Oct 1988
TL;DR: The experiments indicate that a high degree of reusability could be achieved for certain tasks using the information contained in the closure of that task, and access to the decision structure and the ability to assess the impact of decisions constitute the most important issue to be addressed by the SMSE.
Abstract: The authors have designed a software maintenance support environment (SMSE) to assist in the maintenance of existing source code. They model software maintenance as a hybrid distributed problem-solving activity, in which the application-dependent knowledge is provided by the system, and common sense, programming expertise, and problem-solving knowledge are provided by the people on the project team. To define a set of requirements to be satisfied by an SMSE, a series of maintenance tasks was performed on an inventory control application consisting of about 68000 lines of COBOL code. The results of these experiments, including lessons learned and requirements for an SMSE, are presented. A key role of the SMSE is to help the software engineer find all information relevant to a particular maintenance task without examining a lot of extraneous material. Access to the decision structure and the ability to assess the impact of decisions constitute the most important issue to be addressed by the SMSE. The experiments indicate that a high degree of reusability could be achieved for certain tasks using the information contained in the closure of that task.

Journal ArticleDOI
TL;DR: The author addresses the designer's perspective and illustrates how these principles apply to typical design problems; examples illustrate requirements and design of communication, user interfaces, information storage, retrieval and update, information hiding, and data availability.
Abstract: The general principles for formulating software requirements and designs that meet response-time goals are reviewed. The principles are related to the system performance parameters that they improve, and thus their application may not be obvious to those whose speciality is system architecture and design. The author addresses the designer's perspective and illustrates how these principles apply to typical design problems. The examples illustrate requirements and design of communication, user interfaces, information storage, retrieval and update, information hiding, and data availability. Strategies for effective use of the principles are described.
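One quantitative idea behind such response-time principles is how sharply response time grows with utilization. The C below evaluates the standard single-server approximation R = S/(1 - U) with U = lambda*S; this M/M/1 formula is chosen here purely as a worked illustration and is not necessarily the paper's model.

#include <stdio.h>

int main(void)
{
    double S = 0.050;                       /* service time, seconds */
    for (double lam = 2.0; lam < 20.0; lam += 4.0) {
        double U = lam * S;                 /* utilization           */
        printf("lambda=%4.1f/s U=%.2f R=%.3fs\n", lam, U, S / (1.0 - U));
    }
    return 0;
}

At U = 0.9 the response time is ten times the service time, which is why small reductions in demand pay off disproportionately near saturation.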

Proceedings ArticleDOI
03 Oct 1988
TL;DR: An overview of the Cydra 5 computer system architecture is presented, showing how a single numeric processor and multiple interactive processors are used to provide sustained compute performance in numeric applications.
Abstract: An overview of the Cydra 5 computer system architecture is presented. A single numeric processor and multiple interactive processors are used to provide sustained compute performance in numeric applications. The numeric processor's unique directed-dataflow architecture supports the parallelization of a much broader range of algorithms than vector processor architectures do. Applications can be ported to the Cydra 5 and achieve very high performance with significantly less reprogramming than alternative architectures require. The numeric processor parallelizes programs with recurrences, conditionals within loops, unstructured memory references, and other difficult-to-vectorize program constructs.
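The phrase "difficult-to-vectorize constructs" is easy to ground: the C loop below carries a first-order recurrence and a conditional, which defeats classic vectorization but can still be software-pipelined by a directed-dataflow machine. The loop itself is an invented example; the point is that the source stays ordinary and only the scheduling differs.

#define N 100

double smooth(const double x[N], double a)
{
    double s = 0.0;
    for (int i = 0; i < N; i++) {
        s = a * s + (1.0 - a) * x[i];   /* recurrence on s           */
        if (s < 0.0)                    /* conditional inside a loop */
            s = 0.0;
    }
    return s;
}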

Proceedings ArticleDOI
01 Jan 1988
TL;DR: The approach here has been to implement a microcoded machine based on simple register-to-register programmable instructions, with added hardware to support typical image-processing calculations, so that a broad range of algorithms can be implemented in a general way.
Abstract: This paper discusses the development of a Parallel Image Processor (PIP), the data-processing component of an image computing system, which has been implemented on a single chip. The term image computer in the context of this paper implies three properties: (1) a specialization in the application of digital signal processing techniques to the processing of two-dimensional sampled data (pixels), e.g., filtering, resampling, and affine transforms; (2) a further specialization in the rendering or generation of pixels from an encoded database, i.e., computer graphics; (3) a means for visualizing the data and processing results interactively, i.e., the capability of processing and displaying the same memory at interactive speeds. As specialized as the preceding three properties seem, the data-processing element of an image computer must still accommodate a broad range of algorithms and implement them in a general way, particularly if a single-chip solution is sought (Table 1). The approach here has been to implement a microcoded machine based on simple register-to-register programmable instructions, with added hardware to support typical image-processing calculations. The PIP architecture is shown in Figure 1. The chip consists of a controller (lower left corner), an I/O processor (upper left), and a datapath. The datapath consists of a register file and associated parallel 64-bit data buses, 8 parallel ALUs, a funnel shifter and replicator, 8 parallel multipliers, and a sum-of-products unit comprising a 9-way adder, an accumulator, a scale/saturate/round circuit, and an expander register. The funnel shifter is used to align 8-pixel memory words, while the replicator supports replicated zooming. The 64-bit buses carry eight 8-bit pixels, such that the datapath can be regarded as an 8x1 single-instruction, multiple-data array.
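Functionally, one pass through the sum-of-products unit described above is the small C routine below: eight pixel-coefficient products plus the running accumulator give the 9-way add. On the chip all eight multiplies happen in one cycle; the serial loop is only a functional model.

int sum_of_products(const unsigned char pix[8], const signed char coef[8],
                    int acc)
{
    for (int i = 0; i < 8; i++)
        acc += pix[i] * coef[i];   /* 8 multiplies feeding a 9-way add */
    return acc;
}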

Proceedings ArticleDOI
01 Dec 1988
TL;DR: The general considerations involved when applying simulation as an analysis tool and a potential real-time control tool in the CIM environment are discussed; the model structure differs from that of traditional manufacturing applications of simulation.
Abstract: Simulation has long been recognized as a valuable tool for analyzing manufacturing systems. It is effective for assessing the impact of changing system parameters (e.g. reducing processing time) upon system performance measures, and it can also aid decisions concerning system configuration. At Penn State, simulation is playing an important role in the development of a Computer Integrated Manufacturing Laboratory. Currently, it is being used as an analysis tool for studying system design and computer communication issues. Future plans are to use simulation as a real-time scheduling and control tool. Because of this ultimate long-term goal, the model structure differs from that of traditional manufacturing applications of simulation: rather than having events associated with workpiece movement and processing drive the simulation, computer communication events drive the model. This paper first discusses the general considerations involved when applying simulation as an analysis tool and a potential real-time control tool in the CIM environment. The paper then discusses these analysis and real-time control issues in detail, using the Penn State CIM Lab application as an illustrative example.
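The shift described above, letting communication events rather than workpiece movement drive the model, still rests on the ordinary discrete-event loop: pull the next event in time order and advance the clock to it. The C below is a minimal illustration with invented events and times.

#include <stdio.h>

struct event { double time; const char *msg; };

int main(void)
{
    /* A pre-sorted event list stands in for a priority queue. */
    struct event q[] = {
        { 0.5, "cell controller polls station 1" },
        { 1.2, "station 1 reports part complete" },
        { 1.3, "scheduler dispatches next job"   },
    };
    double clock = 0.0;
    for (int i = 0; i < 3; i++) {
        clock = q[i].time;             /* jump straight to next event */
        printf("t=%.1f: %s\n", clock, q[i].msg);
    }
    return 0;
}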

Proceedings ArticleDOI
18 Apr 1988
TL;DR: A method for reasoning about knowledge in multilevel secure distributed systems is introduced, based on a behavioral semantics for operator nets that can be used to specify a variety of security properties such as nondisclosure, integrity, and authority systems.
Abstract: A method for reasoning about knowledge in multilevel secure distributed systems is introduced. This method, based on a behavioral semantics for operator nets, can be used to specify a variety of security properties such as nondisclosure, integrity, and authority systems. The major attributes of the method are the intuitive nature of the specifications and the expressibility of the model, which allows statements about temporal properties and deductive capabilities of processes.

Journal ArticleDOI
TL;DR: This paper introduces the ideas and techniques that define the spy, and uses Petri nets as the formal tool to describe concurrency for real-time control of an industrial manufacturing system.
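The Petri-net mechanics the TL;DR relies on reduce to a simple rule: a transition may fire when each of its input places holds enough tokens, and firing moves tokens from inputs to outputs. The C sketch below encodes that rule for an invented two-transition net; a real controller net would model machines, buffers, and resources.

#include <stdio.h>

#define P 3  /* places */
#define T 2  /* transitions */

static int mark[P] = { 1, 1, 0 };                   /* initial marking  */
static const int in [T][P] = { {1,1,0}, {0,0,1} };  /* tokens consumed  */
static const int out[T][P] = { {0,0,1}, {1,1,0} };  /* tokens produced  */

static int fire(int t)
{
    for (int p = 0; p < P; p++)
        if (mark[p] < in[t][p]) return 0;  /* transition not enabled */
    for (int p = 0; p < P; p++)
        mark[p] += out[t][p] - in[t][p];
    return 1;
}

int main(void)
{
    if (fire(0)) puts("t0 fired: resources claimed");
    if (fire(1)) puts("t1 fired: resources released");
    printf("marking: %d %d %d\n", mark[0], mark[1], mark[2]);
    return 0;
}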

Proceedings Article
01 Jan 1988
TL;DR: In this paper, the authors reviewed recent progress in network security analysis and optimization in modern power system energy management systems, including state estimation, observability analysis, bad data processing, topology error detection, parameter error estimation, external network modelling, contingency selection, optimal power flow, corrective rescheduling and constrained economic dispatch.
Abstract: Network security analysis and optimization forms the core of the advanced application software in modern power system energy management systems. Recent progress in network security analysis and optimization is reviewed in this paper. Topics include state estimation, observability analysis, bad data processing, topology error detection, parameter error estimation, external network modelling, contingency selection, optimal power flow, corrective rescheduling and constrained economic dispatch. Issues concerning modelling, assumptions and limitations, solution methods, computational efficiency, solution accuracy and applications are addressed. Future trends in research are also discussed.

Proceedings ArticleDOI
01 Feb 1988
TL;DR: The proposed extended Modula-2 based database programming system implements a federative client/server architecture and integrates standardized communication services supporting the communication requirements of a heterogeneous network environment.
Abstract: The authors report on extended database programming support which leads quite naturally from a centralized modular environment to distributed computing systems where modules can be spread over different nodes in a computer network. The proposed extended Modula-2-based database programming system implements a federative client/server architecture and integrates standardized communication services supporting the communication requirements of a heterogeneous network environment. The authors also report on the design and status of a prototype implementation that is currently being realized in a joint project of their respective institutions.
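The client/server split behind such a system can be pictured as stub-plus-dispatcher: a client-side stub marshals a database call into a message, and the server unmarshals and executes it. The C toy below keeps everything in one process; the message format and operations are invented, and the real system uses standardized communication services over a network.

#include <stdio.h>
#include <string.h>

struct msg { char op[8]; int key; int val; };   /* toy wire format */

static int table[10];                            /* the "database"  */

/* Server side: executes one request message. */
static struct msg serve(struct msg m)
{
    if (strcmp(m.op, "put") == 0) table[m.key] = m.val;
    if (strcmp(m.op, "get") == 0) m.val = table[m.key];
    return m;
}

/* Client stub: looks like a local call, travels as a message. */
static int db_get(int key)
{
    struct msg m = { "get", key, 0 };
    return serve(m).val;    /* would be a network round trip */
}

int main(void)
{
    struct msg put = { "put", 3, 42 };
    serve(put);
    printf("db_get(3) = %d\n", db_get(3));
    return 0;
}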

Proceedings ArticleDOI
06 Jun 1988
TL;DR: The author performs a structural analysis of large software systems via suitable clustering techniques and claims that a lot of useful information can be retrieved directly from the existing code and that maintenance tools should be based on such automatically extracted information.
Abstract: With the ever-increasing size and complexity of software systems, their maintenance becomes a more and more difficult issue, and classical managerial solutions cannot be applied to maintaining very large software systems. The maintenance task must be assisted by automated techniques. Most existing tools can assist maintenance tasks only by requiring a lot of human-supplied information at the development stage. In contrast, the author claims that a lot of useful information can be retrieved directly from the existing code (and, in the best cases, from the natural-language documentation) and that maintenance tools should be based on such automatically extracted information. The author performs a structural analysis of large software systems via suitable clustering techniques. This analysis retrieves information from the system that directs the maintenance task at hand. Two tools that embody this approach, in the domains of change management and reusability, are also described.
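The structural-clustering idea can be grounded with a toy: modules that reference many of the same global names probably belong together. In the C below, reference sets are bit masks and the grouping threshold is invented; the paper's clustering techniques are more elaborate.

#include <stdio.h>

#define M 4

/* Which of 8 global identifiers each module references (invented). */
static unsigned refs[M] = { 0xF0, 0xE0, 0x0F, 0x07 };

static int shared(unsigned a, unsigned b)
{
    unsigned x = a & b;
    int n = 0;
    while (x) { n += x & 1; x >>= 1; }   /* count common references */
    return n;
}

int main(void)
{
    for (int i = 0; i < M; i++)
        for (int j = i + 1; j < M; j++)
            if (shared(refs[i], refs[j]) >= 3)
                printf("cluster modules %d and %d together\n", i, j);
    return 0;
}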

Journal ArticleDOI
TL;DR: The authors summarize the capabilities of the current release of LAS (version 4.0) and discuss plans for future development, with particular emphasis on the issue of system portability and the importance of removing and/or isolating hardware and software dependencies.
Abstract: The Land Analysis System (LAS) is an interactive software system available in the public domain for the analysis, display, and management of multispectral and other digital image data. LAS provides over 240 applications functions and utilities, a flexible user interface, complete online and hard-copy documentation, extensive image-data file management, reformatting, conversion utilities, and high-level device-independent access to image display hardware. The authors summarize the capabilities of the current release of LAS (version 4.0) and discuss plans for future development. Particular emphasis is given to the issue of system portability and the importance of removing and/or isolating hardware and software dependencies.

Patent
04 Nov 1988
TL;DR: In this patent, the authors present a system and method for providing application program portability and consistency across a number of different hardware, database, transaction-processing, and operating-system environments, including a plurality of processes for performing one or more tasks required by the application software.
Abstract: Virtual interface system and method for enabling software applications to be environment independent. A system and method are disclosed for providing application program portability and consistency across a number of different hardware, database, transaction-processing, and operating-system environments. In the preferred embodiment, the system includes a plurality of processes for performing one or more tasks required by the application software in one or more distributed processors of a heterogeneous or "target" computer. In a run-time mode, program code of the application software is pre-processed, compiled, and linked with system interface modules to create code executable by an operating system of the target computer. The executable code, which includes a number of functional calls to the processes, is run by the operating system to enable the processes to perform the tasks required by the application software. Communications to and from the processes are routed by a blackboard switch logic through a partitioned storage area or "blackboard".
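The portability mechanism the patent claims can be miniaturized as follows: application code calls a fixed virtual interface, and a per-environment table of function pointers binds those calls to a target system. The two "environments" in the C sketch are invented stand-ins for real OS services; the patent's separate processes and blackboard routing are not modeled here.

#include <stdio.h>

struct vio { void (*write_msg)(const char *); };   /* virtual interface */

static void unix_write(const char *s) { printf("[unix] %s\n", s); }
static void mvs_write (const char *s) { printf("[mvs]  %s\n", s); }

static const struct vio unix_env = { unix_write };
static const struct vio mvs_env  = { mvs_write  };

/* Application code: written once against the virtual interface. */
static void app(const struct vio *sys)
{
    sys->write_msg("order posted");
}

int main(void)
{
    app(&unix_env);   /* same application code,        */
    app(&mvs_env);    /* different target environment  */
    return 0;
}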