
Showing papers in "Ibm Systems Journal in 1985"


Journal ArticleDOI
TL;DR: The Programming Process Architecture is a framework describing required activities for an operational process that can be used to develop system or application software, and requires explicit entry criteria, validation, and exit criteria for each task in the process.
Abstract: The Programming Process Architecture is a framework describing required activities for an operational process that can be used to develop system or application software. The architecture includes process management tasks, mechanisms for analysis and development of the process, and product quality reviews during the various stages of the development cycle. It requires explicit entry criteria, validation, and exit criteria for each task in the process, which combined form the "essence" of the architecture. The architecture describes requirements for a process needing no new invention, but rather using the best proven methodologies, techniques, and tools available today. This paper describes the Programming Process Architecture and its use, emphasizing the reasons for its development.
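
To make the entry-criteria/validation/exit-criteria idea concrete, the sketch below models a process task in C. It is a minimal illustration, not IBM's representation; every name and field is assumed for the example.

    /* Minimal sketch of a task under the Programming Process Architecture's
       "essence": explicit entry criteria, validation, and exit criteria.
       All names are illustrative, not taken from the paper. */
    #include <stdio.h>

    struct process_task {
        const char *name;
        int (*entry_criteria_met)(void);   /* may the task start?        */
        int (*validate)(void);             /* is the work product sound? */
        int (*exit_criteria_met)(void);    /* may the task be closed?    */
    };

    static int run_task(const struct process_task *t)
    {
        if (!t->entry_criteria_met()) {
            printf("%s: entry criteria not met\n", t->name);
            return 0;
        }
        /* ... the task's development or management activity runs here ... */
        if (!t->validate() || !t->exit_criteria_met()) {
            printf("%s: validation or exit criteria not satisfied\n", t->name);
            return 0;
        }
        printf("%s: complete\n", t->name);
        return 1;
    }

    static int yes(void) { return 1; }

    int main(void)
    {
        struct process_task design = { "high-level design", yes, yes, yes };
        run_task(&design);
        return 0;
    }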

152 citations


Journal ArticleDOI
TL;DR: This paper presents a programming process methodology for using causal analysis and feedback as a means for achieving quality improvements and ultimately defect prevention.
Abstract: Recent efforts to improve quality in software have concentrated on defect detection. This paper presents a programming process methodology for using causal analysis and feedback as a means for achieving quality improvements and ultimately defect prevention. The methodology emphasizes effective utilization of all error data to prevent the recurrence of defects.

62 citations


Journal ArticleDOI
G. F. Hoffnagle1, W. E. Beregi1
TL;DR: The architecture of a software engineering support facility to support long-term process experimentation, evolution, and automation is defined; the architectural concepts for such a facility are presented, and ways in which it can be used to foster software automation are examined.
Abstract: Demand for reliable software systems is stressing software production capability, and automation is seen as a practical approach to increasing productivity and quality. Discussed in this paper are an approach and an architecture for automating the software development process. The concepts are developed from the viewpoint of the needs of the software development process, rather than that of established tools or technology. We discuss why automation of software development must be accomplished by evolutionary means. We define the architecture of a software engineering support facility to support long-term process experimentation, evolution, and automation. Such a facility would provide flexibility, tool portability, tool and process integration, and process automation for a wide range of methodologies and tools. We present the architectural concepts for such a facility and examine ways in which it can be used to foster software automation.

46 citations


Journal ArticleDOI
C. P. Grossman1
TL;DR: It is shown that a cache as a high-speed intermediary between the processor and DASD is a major and effective step toward matching processor speed and DASD speed.
Abstract: This paper discusses three examples of a cache-DASD storage design. Precursors and developments leading up to the IBM 3880 Storage Control Subsystems are presented. The development of storage hierarchies is discussed, and the role of cache control units in the storage hierarchy is reviewed. Design and implementations are presented. Other topics discussed are cache management, performance of the subsystem, and experience using the subsystem. It is shown that a cache as a high-speed intermediary between the processor and DASD is a major and effective step toward matching processor speed and DASD speed.
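
As a rough illustration of the hit/miss behavior that lets a cache stand between the processor and DASD, the following C sketch stages records into a small direct-mapped cache. It is a schematic example under assumed names and sizes, not the 3880 subsystem's actual algorithm.

    /* Minimal sketch: satisfy a record read from fast cache storage on a hit,
       fall back to the much slower DASD on a miss. All names are illustrative. */
    #include <stdio.h>
    #include <string.h>

    #define CACHE_SLOTS 8
    #define RECORD_SIZE 512

    struct slot {
        int  valid;
        long record_id;                  /* which DASD record the slot holds */
        char data[RECORD_SIZE];
    };

    static struct slot cache[CACHE_SLOTS];

    /* Stand-in for a slow read from the direct access storage device. */
    static void read_from_dasd(long record_id, char *buf)
    {
        memset(buf, 0, RECORD_SIZE);
        printf("DASD read of record %ld (slow path)\n", record_id);
    }

    static void read_record(long record_id, char *buf)
    {
        struct slot *s = &cache[record_id % CACHE_SLOTS];

        if (s->valid && s->record_id == record_id) {
            printf("cache hit on record %ld (fast path)\n", record_id);
        } else {                          /* miss: stage the record into the cache */
            read_from_dasd(record_id, s->data);
            s->record_id = record_id;
            s->valid = 1;
        }
        memcpy(buf, s->data, RECORD_SIZE);
    }

    int main(void)
    {
        char buf[RECORD_SIZE];
        read_record(42, buf);             /* miss, staged from DASD */
        read_record(42, buf);             /* hit, served from the cache */
        return 0;
    }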

33 citations


Journal ArticleDOI
M. W. Mudie1, D. J. Schafer1
TL;DR: Key to the architecture is a supporting framework comprising the communications network, a data service function, an office services function, enabling software, and support organizations; the approach is designed to support organizations whose business environment is changing, and where flexibility, responsiveness to change, and cost effectiveness are vital.

Abstract: This paper defines a technology architecture for information processing in large corporations. It describes a matrix of processing environments consisting of three processing types: production, decision support, and office; three processing locations: centralized, departmental, and workstation; and a methodology for implementing applications in those environments. Key to the architecture is a supporting framework comprising the communications network, a data service function, an office services function, enabling software, and support organizations. This approach is designed to provide an integrated information system to support organizations whose business environment is changing, and where flexibility, responsiveness to change, and cost effectiveness are vital. The approach is representative of methods used by systems engineers in assisting customers to decide on a system configuration that best suits their needs.

21 citations


Journal ArticleDOI
K. P. Hein1
TL;DR: The overall architectural concepts of integrated data systems development, the place of ISMOD within it, and the specific facilities, techniques, and information provided by the system are discussed.
Abstract: The advent of integrated, shared-data systems has made it increasingly necessary to address the application development process from the architectural and manufacturing perspective rather than from a build-as-you-go job shop viewpoint. Although the Business Systems Planning (BSP) methodology provides an enterprise-wide strategic Information Systems plan, it is still at an abstraction level that leaves the traditional gap between “requirements” and implementations untouched. The Information System Model and Architecture Generator (ISMOD) tool complements and enhances BSP by mechanizing the planning process, thus providing a facility to narrow this gap by allowing orderly and consistent top-to-bottom architectural decomposition of the enterprise environment. It is an enterprise planning vehicle and not an implementation system, but it is the first critical component to support an integrated systems architecture effort. It automates and, to a large extent, formalizes a laborious requirements documentation process preceding code development, and it does this “top to bottom,” from a global, enterprise-wide, information requirements viewpoint. This paper discusses the overall architectural concepts of integrated data systems development, the place of ISMOD within it, and the specific facilities, techniques, and information provided by the system.

20 citations


Journal ArticleDOI
Watts S. Humphrey1
TL;DR: A special issue of the IBM Systems Journal on the IBM large-systems software development process is introduced, providing an overview of the subject and a summary of the key principles of theIBM software quality and productivity efforts in large-scale systems programming.
Abstract: This paper introduces a special issue of the IBM Systems Journal on the IBM large-systems software development process. The issue provides an overview of the subject and a summary of the key principles of the IBM software quality and productivity efforts in large-scale systems programming. The major topics addressed in this issue are the software development process, software development tools and methodologies, quality and productivity measurements, and programmer education.

17 citations


Journal ArticleDOI
TL;DR: PDM is described, a requirements planning process that supports the collection, analysis, documentation, and tracking of software requirements and has been applied in three development areas with positive results.
Abstract: Traditional requirements processes often do not address the many problems encountered in the development of software products. Conventional processes begin with the structural definition of the proposed system, under the assumption that the raw requirements are understood. How this understanding is developed is not formally addressed. The IBM software development process requires a methodology to develop the rationale of the requirement, both in terms of its underlying problem and its business justification, prior to the development of the functional specification. Conventional requirements processes address a single software application intended for use by a uniform set of end users. The resulting system is usually a one-time replacement of some existing system. Many IBM software products, however, address requirements received from a large, diverse set of customers who use the products in a wide array of computing environments. Product releases are typically developed as incremental enhancements to an existing base product. This paper describes the Planning and Design Methodology (PDM), a requirements planning process that supports the collection, analysis, documentation, and tracking of software requirements. The process includes requirements collection, definition of the underlying problems, development of an external functional description that addresses the problems, and development of system and product designs from the external functional descriptions. PDM has been applied in three development areas with positive results.

13 citations


Journal ArticleDOI
TL;DR: An overview of the present CICS architecture is presented, along with the evolution of the original design as a transaction management system that accommodates data base management, operating systems, and input and output devices, as well as hardware of increasing number and complexity.
Abstract: Presented is an overview of the present CICS architecture. Discussed is the evolution of that original design as a transaction management system that accommodates data base management, operating systems, and input and output devices as well as hardware of increasing numbers and complexity. User needs past and present are analyzed with a view toward understanding how CICS might evolve in the future.

13 citations


Journal ArticleDOI
M. B. Carpenter1, H. K. Hallman1
TL;DR: The role of the Software Engineering Institute, its background and offerings, and some results obtained are described.
Abstract: Improvements in quality and productivity in the development of programs can be obtained by instructing the programming development groups in the use of modern software engineering methodology. To provide this instruction for its employees, IBM has established a Software Engineering Institute. Currently, training in the methodology is being offered through an education program of the Institute known as the Software Engineering Workshop. This paper describes the role of the Institute, its background and offerings, and some results obtained.

11 citations


Journal ArticleDOI
M. J. Flaherty1
TL;DR: The underlying principles of a programmer productivity measuring system are discussed; the key measures are people and lines of code, and a data base design for retaining and retrieving these metrics under a wide variety of applications and other circumstances is presented.
Abstract: Discussed in this paper are the underlying principles of a programmer productivity measuring system. The key measures (or metrics) are people and lines of code. Definitions of these metrics are refined and qualified, according to the conditions under which they are used. Presented also is a data base design for retaining and retrieving these metrics under a wide variety of applications and other circumstances. Depending on definitions, applications, and other circumstances, productivity measurements may differ widely. On the other hand, after suitable productivity metrics have been defined, consistency of application of the same metrics yields comparable results from project to project.
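
As a simple illustration of how such metrics combine, the C sketch below computes lines of code per person-month for a project record. The structure and the counting rule are assumptions made for the example; as the paper stresses, results are comparable only when the same definitions are applied consistently from project to project.

    /* Minimal sketch, assuming a lines-of-code-per-person-month definition. */
    #include <stdio.h>

    struct project_record {
        const char *name;
        long   lines_of_code;     /* counted under one agreed-upon definition */
        double person_months;     /* effort charged to the project */
    };

    static double productivity(const struct project_record *p)
    {
        return (double)p->lines_of_code / p->person_months;
    }

    int main(void)
    {
        struct project_record p = { "example project", 24000, 60.0 };
        printf("%s: %.1f LOC per person-month\n", p.name, productivity(&p));
        return 0;
    }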

Journal ArticleDOI
J. Newton1
TL;DR: Comprehensive and formally managed testing strategies are discussed, and it is shown that they also support disaster backup/recovery plans.
Abstract: A philosophy of preventing problems from occurring in a data processing installation rather than reacting to problems is becoming increasingly necessary. The institution of comprehensive and formally managed testing strategies is an important step in this direction. Such strategies are discussed, and it is shown that they also support disaster backup/recovery plans.

Journal ArticleDOI
B. R. Buckelew1
TL;DR: A model for building a set of integrated architectural guidelines to ensure that a "system" is being built and the use of the System Planning Grid as a model for setting product standards and organization responsibilities is discussed.
Abstract: Information systems have evolved as a result of technological advances and the increasing demand for information. Over the past few years, systems that developed separately are being forced to merge. This paper describes a model for building a set of integrated architectural guidelines to ensure that a "system" is being built. The use of the System Planning Grid as a model for setting product standards and organization responsibilities will also be discussed.

Journal ArticleDOI
M. L. Tavera1, Manuel Alfonseca1, J. Rojas1
TL;DR: This paper discusses the design and building of an APL interpreter for the IBM Personal Computer; the provision of the APL character set presented problems, the solutions of which are also presented.
Abstract: This paper discusses the design and building of an APL interpreter for the IBM Personal Computer. Discussed is the writing of the interpreter itself, which required the use of an intermediate language designed by the authors. This machine-independent language also made possible the development of APL interpreters for two other systems--System/370 and Series/1. The particularizing of the interpreter required a compiler, which in the case of the Personal Computer produced Intel 8088 and 8087 assembly language code. The matching of the APL interpreter to the operating system (DOS) required an APL supervisor, which is also discussed in this paper. The provision of the APL character set presented problems, the solutions of which are also presented. Other topics discussed are the display, the keyboard, and the session manager.
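
The retargeting idea behind a machine-independent intermediate language can be suggested with the toy C sketch below: a tiny stack-oriented opcode stream is interpreted directly here, while a target-specific back end could instead translate the same stream to 8088/8087 or System/370 code. The opcodes and layout are invented for the illustration and are not the authors' intermediate language.

    /* Minimal sketch of interpreting a machine-independent intermediate code. */
    #include <stdio.h>

    enum op { PUSH, ADD, MUL, PRINT, HALT };

    struct instr { enum op op; double operand; };

    static void run(const struct instr *code)
    {
        double stack[64];
        int sp = 0;

        for (;; code++) {
            switch (code->op) {
            case PUSH:  stack[sp++] = code->operand;        break;
            case ADD:   sp--; stack[sp - 1] += stack[sp];   break;
            case MUL:   sp--; stack[sp - 1] *= stack[sp];   break;
            case PRINT: printf("%g\n", stack[--sp]);        break;
            case HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* Evaluates (2 + 3) * 4, the kind of expression a front end
           might reduce to intermediate code before interpretation. */
        struct instr program[] = {
            { PUSH, 2 }, { PUSH, 3 }, { ADD, 0 },
            { PUSH, 4 }, { MUL, 0 }, { PRINT, 0 }, { HALT, 0 }
        };
        run(program);
        return 0;
    }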

Journal ArticleDOI
R. R. Ryan1, H. Spiller1
TL;DR: The C language and its history is described and a specific implementation of C, the Microsoft C Compiler, which runs on the IBM Personal Computer is presented.
Abstract: In the last few years, the C programming language has become one of the most widely used languages for applications and systems software on microcomputer systems. This paper describes the C language and its history. It then presents a specific implementation of C, the Microsoft C Compiler, which runs on the IBM Personal Computer.

Journal ArticleDOI
TL;DR: The history of graphics as used with personal computers is traced, the difficulties that standardization efforts have met are explored, the VDI model is explained, and it is shown how this model operates in the IBM Personal Computer environment to make graphics a natural extension of the operating system.

Abstract: Although acknowledged to be an effective means of communicating information, graphics has not progressed more rapidly in the burgeoning use of personal computers due to the lack of standards for both writing and running graphics applications. A graphics standard--the Virtual Device Interface (VDI)--has been proposed for national use and is in the process of being adopted. An implementation of the VDI is currently available for the IBM Personal Computer. This paper briefly traces the history of graphics as used with personal computers, explores the difficulties that standardization efforts have met, explains the VDI model, and shows how this model operates in the IBM Personal Computer environment to make graphics a natural extension of the operating system.
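
The device-independence principle behind a virtual device interface can be sketched as a table of drawing entry points supplied per device, as in the C example below. The function and structure names are hypothetical and are not the actual VDI calls; the point is only that the application draws through one interface and each device provides its own implementation.

    #include <stdio.h>

    struct vdi_driver {
        void (*open_workstation)(void);
        void (*polyline)(int n, const int *xy);
        void (*close_workstation)(void);
    };

    /* One possible device binding: a display adapter. */
    static void display_open(void)  { printf("display: open workstation\n"); }
    static void display_line(int n, const int *xy)
    {
        printf("display: polyline with %d points\n", n);
        (void)xy;
    }
    static void display_close(void) { printf("display: close workstation\n"); }

    static const struct vdi_driver display = {
        display_open, display_line, display_close
    };

    int main(void)
    {
        static const int square[] = { 0,0, 10,0, 10,10, 0,10, 0,0 };

        /* The application never names the device; a printer or plotter
           driver could be substituted without changing this code. */
        const struct vdi_driver *dev = &display;
        dev->open_workstation();
        dev->polyline(5, square);
        dev->close_workstation();
        return 0;
    }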

Journal ArticleDOI
M. M. Ghiotti1
TL;DR: A rationale and the experience gained with a single terminal type, the IBM 3270-PC, interconnected with hosts via the Application Program Interface to achieve enhanced user efficiency are presented.

Abstract: Many businesses use a variety of terminal types connected to central host computers. Presented here are a rationale and the experience gained with a single terminal type, the IBM 3270-PC, interconnected with hosts via the Application Program Interface to achieve enhanced user efficiency.

Journal ArticleDOI
TL;DR: An optimizing FORTRAN compiler with power to handle large applications at execution speeds comparable to those of large computers has been implemented on the IBM Personal Computer.
Abstract: An optimizing FORTRAN compiler with power to handle large applications at execution speeds comparable to those of large computers has been implemented on the IBM Personal Computer. This implementation is described, with emphasis on the design decisions that were considered in the development of the compiler.
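
One classic transformation an optimizing compiler of this kind performs is constant folding, where an expression whose operands are known at compile time is reduced to a single value so that no run-time code is generated for it. The C sketch below folds a small expression tree; the representation is illustrative and is not drawn from the compiler described in the paper.

    #include <stdio.h>

    struct expr {
        char op;              /* '+', '*', or 0 for a constant leaf */
        int  value;           /* used when op == 0 */
        struct expr *left, *right;
    };

    /* Fold an expression tree bottom-up; returns the computed constant. */
    static int fold(struct expr *e)
    {
        if (e->op == 0)
            return e->value;

        int l = fold(e->left);
        int r = fold(e->right);
        e->value = (e->op == '+') ? l + r : l * r;
        e->op = 0;            /* the node is now a constant leaf */
        return e->value;
    }

    int main(void)
    {
        /* Corresponds to the source expression (2 + 3) * 4. */
        struct expr two   = { 0, 2, NULL, NULL };
        struct expr three = { 0, 3, NULL, NULL };
        struct expr four  = { 0, 4, NULL, NULL };
        struct expr sum   = { '+', 0, &two, &three };
        struct expr prod  = { '*', 0, &sum, &four };

        printf("folded constant: %d\n", fold(&prod));
        return 0;
    }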

Journal ArticleDOI
R. C. Brooks1
TL;DR: In business enterprises, it is important that high availability be maintained in the computer systems used by the enterprises, particularly in systems that have high transaction rates, and a way of maintaining high availability is discussed.
Abstract: In business enterprises, it is important that high availability be maintained in the computer systems used by the enterprises, particularly in systems that have high transaction rates. A way of maintaining high availability is discussed, including the implementation that should be undertaken and the design issues involved. Some additional steps for further improvements are also offered.

Journal ArticleDOI
K. A. Duke1, W. A. Wall1
TL;DR: The function and design of the Professional Graphics Controller are described; the controller also allows existing productivity software to be executed in an emulation mode.

Abstract: The IBM Professional Graphics Controller and Display were developed to meet the needs of engineers and scientists for an improved graphics capability in the Personal Computer environment. These units provide graphics systems with improved function, resolution, and color range, and at the same time they allow existing productivity software to be executed in an emulation mode. This paper describes the function and discusses the design of the Professional Graphics Controller.

Journal ArticleDOI
W. Boos1
TL;DR: The storage, retrieval, and dissemination of data pertaining to a large, complex product line is made possible by the Hands-on Network Environment (HONE), discussed in this paper.
Abstract: The storage, retrieval, and dissemination of data pertaining to a large, complex product line is made possible by the Hands-on Network Environment (HONE) discussed in this paper. HONE provides on-line interactive support to marketing, systems, and administrative personnel, and, most recently, to customers. The evolution of HONE is presented. Discussed in detail are new HONE distributed processing capabilities now enabled under an advanced network architecture. In that environment, the processing power and data bases of HONE and other host systems will be interconnected and support the speed and processing autonomy of IBM Personal Computers as workstations.

Journal ArticleDOI
P. A. Korn1, J. P. McAdaragh1, C. L. Tondo1
TL;DR: The XENIX™ Operating System for the IBM Personal Computer AT incorporates capabilities of a mainframe operating system--multiusage, multitasking, file management, program compilation, and networking.
Abstract: Discussed is the XENIX™ Operating System for the IBM Personal Computer AT. The operating system incorporates capabilities of a mainframe operating system--multiusage, multitasking, file management and security, program compilation, and networking. The XENIX shell structure is introduced. Pipes and pipelining are presented. The XENIX file structure is explained and illustrated with examples. Software development and text formatting are treated in detail. The ability to compile C program code developed under XENIX and run it on the IBM Personal Computer Disk Operating System is explained.
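
The pipe mechanism mentioned above can be illustrated with the standard UNIX-style calls that XENIX provides. In the minimal C sketch below, a parent process writes into one end of a pipe and a child reads from the other, the same plumbing the shell uses to connect the stages of a command pipeline.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        char buf[64];

        if (pipe(fd) == -1) {
            perror("pipe");
            return 1;
        }

        if (fork() == 0) {                    /* child: the reading end */
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child read: %s\n", buf);
            }
            close(fd[0]);
            _exit(0);
        }

        close(fd[0]);                         /* parent: the writing end */
        const char *msg = "data flowing through a pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
        return 0;
    }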

Journal ArticleDOI
S. Agassi1
TL;DR: The planning process for a distributed data processing system to meet high availability requirements was performed as a systems engineering activity in order to assess the feasibility of the presented approach, which was proposed to a customer.
Abstract: The high-availability requirements of computerized systems that are needed to meet the objectives of the organization are being acknowledged more and more by the data processing community. The paper presents the planning process for a distributed data processing system designed to meet high availability requirements. This process was performed as a systems engineering activity in order to assess the feasibility of the presented approach, which was proposed to a customer.

Journal ArticleDOI
T. G. Peck1
TL;DR: A perspective of the part systems engineering has played in the success of IBM in the information processing business during that 25-year period is provided.
Abstract: IBM systems engineering celebrates its 25th anniversary in 1985. This paper provides a perspective of the part systems engineering has played in the success of IBM in the information processing business during that 25-year period. The history of systems engineering is briefly reviewed, and the similarities and differences in worldwide systems engineer functions are examined. The relationships among marketing, systems engineering, and customers are discussed. Also discussed are career paths for systems engineers. Expectations and challenges for systems engineering in the future are explored.