
Showing papers in "IBM Systems Journal in 1994"


Journal ArticleDOI
Frank Leymann, W. Altenhuber
TL;DR: A system is described that supports the two fundamental aspects of business process management, namely the modeling of processes and their execution; the meta-model of the system treats business process models as weighted, colored, directed graphs of activities.
Abstract: The relevance of business processes as a major asset of an enterprise is more and more accepted: Business processes prescribe the way in which the resources of an enterprise are used, i.e., they describe how an enterprise will achieve its business goals. Organizations typically prescribe how business processes have to be performed, and they seek information technology that supports these processes. We describe a system that supports the two fundamental aspects of business process management, namely the modeling of processes and their execution. The meta-model of our system deals with models of business processes as weighted, colored, directed graphs of activities; execution is performed by navigation through the graphs according to a well-defined set of rules. The architecture consists of a distributed system with a client/server structure, and stores its data in an object-oriented database system.

327 citations
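
To make the graph-based meta-model above concrete, here is a minimal, hypothetical Python sketch (not the authors' actual system): activities are nodes, control connectors are directed edges carrying Boolean transition conditions, and execution navigates the graph by evaluating those conditions against shared process data.

```python
# Minimal sketch of a business process as a directed graph of activities,
# with edge conditions steering navigation. Hypothetical illustration;
# not the authors' actual system.

class Activity:
    def __init__(self, name, action):
        self.name = name
        self.action = action          # callable taking the shared context
        self.edges = []               # (condition, target) control connectors

    def connect(self, target, condition=lambda ctx: True):
        self.edges.append((condition, target))

def run_process(start, context):
    """Navigate the graph from `start`, following control connectors
    whose transition condition evaluates to true on the shared context."""
    todo = [start]
    while todo:
        activity = todo.pop(0)
        activity.action(context)
        for condition, target in activity.edges:
            if condition(context):
                todo.append(target)

# Example: a toy credit-request process.
receive = Activity("receive", lambda ctx: ctx.update(amount=5000))
check   = Activity("check",   lambda ctx: ctx.update(approved=ctx["amount"] < 10000))
accept  = Activity("accept",  lambda ctx: print("accepted"))
reject  = Activity("reject",  lambda ctx: print("rejected"))

receive.connect(check)
check.connect(accept, lambda ctx: ctx["approved"])
check.connect(reject, lambda ctx: not ctx["approved"])

run_process(receive, {})   # prints "accepted"
```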


Journal ArticleDOI
TL;DR: This essay presents a tutorial that discusses software quality in the context of total quality management (TQM) along with its key elements: customer focus, process improvement, the human side of quality, and data, measurement, and analysis.
Abstract: This essay presents a tutorial that discusses software quality in the context of total quality management (TQM). Beginning with a historical perspective of software engineering, the tutorial examines the definition of software quality and discusses TQM as a management philosophy along with its key elements: customer focus, process improvement, the human side of quality, and data, measurement, and analysis. It then focuses on the software-development specifics and the advancements made on many fronts that are related to each of the TQM elements. In conclusion, key directions for software quality improvements are summarized.

96 citations


Journal ArticleDOI
C. Billings, J. Clifton, B. Kolkhorst, E. Lee, W. Bret Wingert
TL;DR: This paper focuses on the experiences of the Space Shuttle Onboard Software project in the journey to process maturity and the factors that have made it successful.
Abstract: Development process maturity is strongly linked to the success or failure of software projects. As the word “maturity” implies, time and effort are necessary to gain it. The Space Shuttle Onboard Software project has been in existence for nearly 20 years. In 1989 the project was rated at the highest level of the Software Engineering Institute's Capability Maturity Model. The high-quality software produced by the project is directly linked to its maturity. This paper focuses on the experiences of the Space Shuttle Onboard Software project in the journey to process maturity and the factors that have made it successful.

78 citations


Journal ArticleDOI
TL;DR: The scope and results of an ongoing research project on program understanding undertaken by the IBM Toronto Software Solutions Laboratory Centre for Advanced Studies (CAS) are described, including an approach adopted to integrate the various tools under a single reverse engineering environment.
Abstract: Corporations face mounting maintenance and re-engineering costs for large legacy systems. Evolving over several years, these systems embody substantial corporate knowledge, including requirements, design decisions, and business rules. Such knowledge is difficult to recover after many years of operation, evolution, and personnel change. To address the problem of program understanding, software engineers are spending an ever-growing amount of effort on reverse engineering technologies. This paper describes the scope and results of an ongoing research project on program understanding undertaken by the IBM Toronto Software Solutions Laboratory Centre for Advanced Studies (CAS). The project involves a team from CAS and five research groups working cooperatively on complementary reverse engineering approaches. All the groups are using the source code of SQL/DS™ (a multimillion-line relational database system) as the reference legacy system. Also discussed is an approach adopted to integrate the various tools under a single reverse engineering environment.

69 citations


Journal ArticleDOI
TL;DR: The design, architecture, and features of Hy+, a generic visualization system that supports a novel visual query language called GraphLog, are described, along with a number of applications in software engineering and network management.
Abstract: The Hy+ system is a generic visualization tool that supports a novel visual query language called GraphLog. In Hy+, visualizations are based on a graphical formalism that allows comprehensible representations of databases, queries, and query answers to be interactively manipulated. This paper describes the design, architecture, and features of Hy+ with a number of applications in software engineering and network management.

61 citations
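
GraphLog itself is a visual language, but the queries it expresses amount to pattern matching over graphs, including paths via transitive closure. The following rough textual analogue in Python is illustrative only (the module names and the "calls" relation are invented); the real system draws such patterns rather than coding them.

```python
# Rough textual analogue of a GraphLog-style query: GraphLog patterns can
# ask for paths (transitive closure) over a database viewed as a graph.
# Hypothetical sketch; Hy+ expresses this visually, not in code.

def transitive_closure(edges):
    """All (a, b) pairs such that b is reachable from a."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in edges:
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Database-as-graph: "calls" edges between modules of a program.
calls = {("main", "parse"), ("parse", "lex"), ("main", "report"),
         ("report", "format")}

# Query: which modules does `main` depend on, directly or indirectly?
reachable = {b for a, b in transitive_closure(calls) if a == "main"}
print(sorted(reachable))   # ['format', 'lex', 'parse', 'report']
```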


Journal ArticleDOI
TL;DR: The AOEXPERT/MVS™ project, the largest IBM Cleanroom effort to date, successfully applied an introductory level of implementation; this paper presents both the implementation strategy and the project results.
Abstract: Cleanroom software engineering is a theory-based, team-oriented engineering process for developing very high quality software under statistical quality control. The Cleanroom process combines formal methods of object-based box structure specification and design, function-theoretic correctness verification, and statistical usage testing for reliability certification to produce software approaching zero defects. Management of the Cleanroom process is based on a life cycle of development and certification of a pipeline of user-function increments that accumulate into the final product. Teams in IBM and other organizations that use the process are achieving remarkable quality results with high productivity. A phased implementation of the Cleanroom process enables quality and productivity improvements with an increased control of change. An introductory implementation involves the application of Cleanroom principles without the full formality of the process; full implementation involves the comprehensive use of formal Cleanroom methods; and advanced implementation optimizes the process through additional formal methods, reuse, and continual improvement. The AOEXPERT/MVS™ project, the largest IBM Cleanroom effort to date, successfully applied an introductory level of implementation. This paper presents both the implementation strategy and the project results.

52 citations
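
One leg of the Cleanroom triad, statistical usage testing, draws test scenarios from a model of anticipated field usage so that reliability can be certified statistically. Below is a minimal sketch assuming a Markov-chain usage model; the states and transition probabilities are invented for illustration and are not taken from the AOEXPERT/MVS project.

```python
# Minimal sketch of statistical usage testing: sample test scenarios from
# a Markov-chain usage model so that test exposure mirrors expected field
# usage. States and transition probabilities are invented for illustration.
import random

usage_model = {
    "start":  [("login", 1.0)],
    "login":  [("query", 0.7), ("update", 0.2), ("logout", 0.1)],
    "query":  [("query", 0.4), ("update", 0.3), ("logout", 0.3)],
    "update": [("query", 0.5), ("logout", 0.5)],
}

def sample_scenario(model, start="start", end="logout"):
    """One random walk through the usage model = one test scenario."""
    state, scenario = start, []
    while state != end:
        r, cumulative = random.random(), 0.0
        for nxt, p in model[state]:
            cumulative += p
            if r < cumulative:
                break
        state = nxt        # falls through to the last transition on round-off
        scenario.append(state)
    return scenario

random.seed(7)
for _ in range(3):
    print(sample_scenario(usage_model))
```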


Journal ArticleDOI
TL;DR: The approach is shown to benefit different kinds of projects beyond what can be achieved by current practices, and the collection of examples discussed represents experience with using a model of correction.
Abstract: An approach that involves both automatic and human interpretation to correct the software production process during development is becoming important in IBM as a means to improve quality and productivity. A key step of the approach is the interpretation of defect data by the project team. This paper uses examples of such correction to evaluate and evolve the approach, and to inform and teach those who will use the approach in software development. The methodology is shown to benefit different kinds of projects beyond what can be achieved by current practices, and the collection of examples discussed represents the experiences of using a model of correction.

48 citations


Journal ArticleDOI
TL;DR: This paper addresses the issues and solutions relating to intraquery parallelism in a relational database management system (DBMS) and provides a broad framework for the study of the numerous issues that need to be addressed in supporting parallelism efficiently and flexibly.
Abstract: In order to provide real-time responses to complex queries involving large volumes of data, it has become necessary to exploit parallelism in query processing. This paper addresses the issues and solutions relating to intraquery parallelism in a relational database management system (DBMS). We provide a broad framework for the study of the numerous issues that need to be addressed in supporting parallelism efficiently and flexibly. The alternatives for a parallel architecture system are discussed, followed by the focus on how a query can be parallelized and how that affects load balancing of the different tasks created. The final part of the paper contains information about how the IBM DATABASE 2™ (DB2®) Version 3 product provides support for I/O parallelism to reduce response time for data-intensive queries.

34 citations
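
As a generic illustration of intraquery parallelism (hypothetical, and not DB2 Version 3's actual mechanism): one logical scan-filter-aggregate query is decomposed into per-partition tasks whose partial results are merged, which is also where load balancing across the created tasks becomes visible.

```python
# Generic sketch of intraquery parallelism: one logical query
# (scan + filter + aggregate) is split into per-partition tasks run in
# parallel, then merged. Hypothetical; not DB2's implementation.
from concurrent.futures import ProcessPoolExecutor

def scan_partition(partition):
    """Per-partition task: filter rows and compute a partial aggregate."""
    return sum(amount for region, amount in partition if region == "EMEA")

def parallel_query(partitions):
    with ProcessPoolExecutor() as pool:
        partial_sums = pool.map(scan_partition, partitions)
    return sum(partial_sums)          # merge step

if __name__ == "__main__":
    rows = [("EMEA", 10), ("AP", 5), ("EMEA", 7), ("AMER", 3)] * 1000
    partitions = [rows[i::4] for i in range(4)]   # 4-way data partitioning
    print(parallel_query(partitions))             # 17000
```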


Journal ArticleDOI
TL;DR: A reference architecture for distributed systems management is proposed that integrates system monitoring, information management, and system modeling techniques, and a detailed hospital application is presented to clarify the requirements for managing applications.
Abstract: Management of computing systems is needed to ensure efficient use of resources and provide reliable and timely service to users. Distributed systems are much more difficult to manage because of their size and complexity, and they require a new approach. A reference architecture for distributed systems management is proposed that integrates system monitoring, information management, and system modeling techniques. Three classes of system management—network services and devices, operating system services and resources, and user applications—are defined within this framework, and a detailed hospital application is presented to clarify the requirements for managing applications. It is argued that the performance management of distributed applications must be considered from all three perspectives. Several management prototypes under study within the COnsortium for Research on Distributed Systems (CORDS) are described to illustrate how such an architecture could be realized.

30 citations
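
The three management classes defined in the paper can be sketched as a uniform monitoring interface; the class names and metrics below are invented for illustration and do not come from the CORDS prototypes.

```python
# Illustrative sketch: a uniform monitoring interface spanning the three
# management classes named in the paper (network, operating system,
# application). Names and metrics are invented for illustration.
from abc import ABC, abstractmethod

class ManagedResource(ABC):
    @abstractmethod
    def poll(self) -> dict:
        """Return current metrics for this resource."""

class NetworkDevice(ManagedResource):
    def poll(self):
        return {"class": "network", "packets_dropped": 0, "utilization": 0.42}

class OsService(ManagedResource):
    def poll(self):
        return {"class": "os", "cpu_load": 0.8, "free_memory_mb": 112}

class HospitalApplication(ManagedResource):
    def poll(self):
        return {"class": "application", "open_patient_records": 57,
                "mean_response_ms": 180}

def monitor(resources):
    """Integrate metrics from all three classes, so that application
    performance can be judged from all three perspectives."""
    return [r.poll() for r in resources]

for report in monitor([NetworkDevice(), OsService(), HospitalApplication()]):
    print(report)
```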


Journal ArticleDOI
TL;DR: In this paper, existing objectives for the development and application of models of software processes are restated, and current research sponsored by the IBM Centre for Advanced Studies is discussed as it applies to furthering each of the objectives.
Abstract: The goal of developing quality software can be achieved by focusing on the improvement of both product quality and process quality. While the traditional focus has been on product quality, there is an increased awareness of the benefits of improving the quality of the processes used to develop and support those products. These processes are key elements in understanding and improving the practice of software engineering. In this paper, existing objectives for the development and application of models of software processes are restated, and current research sponsored by the IBM Centre for Advanced Studies (CAS) is discussed as it applies to furthering each of the objectives. A framework is also presented that relates the research work to the various sectors of a software process life cycle. The on-going research involves four universities, CAS, and collaboration with IBM Toronto Laboratory developers.

30 citations


Journal ArticleDOI
TL;DR: The results of four empirical studies indicate that BOR testing is practical and effective for both specification- and program-based test generation.
Abstract: In this paper, we report the results of four empirical studies for evaluating a predicate-based software testing strategy, called BOR (Boolean operator) testing. The BOR testing strategy focuses on the detection of Boolean operator faults in a predicate, including incorrect AND/OR operators and missing or extra NOT operators. Our empirical studies involved comparisons of BOR testing with several other predicate-based testing strategies, using Boolean expressions, a real-time control system, and a set of N-version programs. For program-based test generation, BOR testing was applied to predicates in a program. For specification-based test generation, BOR testing was applied to cause-effect graphs representing software specification. The results of our studies indicate that BOR testing is practical and effective for both specification- and program-based test generation.
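
A brute-force way to see what BOR testing aims at (the published strategy derives its test sets compositionally; this sketch merely checks the desired property on an invented predicate): a BOR-adequate test set distinguishes a predicate from every Boolean-operator mutant.

```python
# Brute-force illustration of the goal of BOR testing: find a small test
# set that distinguishes a predicate from every Boolean-operator mutant
# (AND <-> OR swapped, NOT inserted). The published BOR strategy derives
# such sets compositionally; this sketch just checks the property.

original = lambda a, b, c: a and (b or c)
mutants = [
    lambda a, b, c: a or (b or c),         # AND -> OR
    lambda a, b, c: a and (b and c),       # OR -> AND
    lambda a, b, c: (not a) and (b or c),  # NOT inserted
    lambda a, b, c: a and not (b or c),    # NOT inserted
]

def kills_all(tests, original, mutants):
    """True if every mutant disagrees with the original on some test."""
    return all(any(m(*t) != original(*t) for t in tests) for m in mutants)

# A candidate BOR test set for `a and (b or c)`; exhaustive would need 8.
tests = [(True, True, False), (True, False, False),
         (False, True, False), (True, False, True)]
print(kills_all(tests, original, mutants))   # True
```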

Journal ArticleDOI
TL;DR: The paper outlines the rationale for a peer-to-peer view of distributed systems, presents motivation for the research directions, describes an architecture, and reports on some preliminary experiences with a prototype system.
Abstract: Advances in communications technology, development of powerful desktop workstations, and increased user demands for sophisticated applications are rapidly changing computing from a traditional centralized model to a distributed one. The tools and services for supporting the design, development, deployment, and management of applications in such an environment must change as well. This paper is concerned with the architecture and framework of services required to support distributed applications through this evolution to new environments. In particular, the paper outlines our rationale for a peer-to-peer view of distributed systems, presents motivation for our research directions, describes an architecture, and reports on some preliminary experiences with a prototype system.

Journal ArticleDOI
TL;DR: Techniques to obtain software quality are examined from the experiences of three very different object-oriented projects carried out by IBM Information Solutions Limited in 1991 and 1992.
Abstract: Techniques to obtain software quality are examined from the experiences of three very different object-oriented projects carried out by IBM Information Solutions Limited in 1991 and 1992. Object-oriented programming systems are sold on the promise of improved productivity from object reuse and a high level of code modularity. Yet it is precisely these aspects that also lead to their greatest benefit, namely improved software quality. In this paper, lessons learned from the three projects are described and compared, indicating approaches to consider in using object-oriented technology.

Journal ArticleDOI
TL;DR: The RE-Analyzer is an automated reverse engineering system providing a high level of integration with a computer-aided software engineering (CASE) tool, where legacy code is transformed into abstractions within a structured analysis methodology.
Abstract: The RE-Analyzer is an automated reverse engineering system providing a high level of integration with a computer-aided software engineering (CASE) tool. Specifically, legacy code is transformed into abstractions within a structured analysis methodology. The abstractions are based on data flow diagrams, state transition diagrams, and entity-relationship data models. Since the resulting abstractions can be browsed and modified within a CASE tool environment, a broad range of software engineering activities is supported, including program understanding, reengineering, and redocumentation. In addition, diagram complexity is reduced through the application of control partitioning: an algorithmic technique for managing complexity by partitioning source code modules into smaller yet semantically coherent units. This approach also preserves the information content of the original source code. It is in contrast to other reverse engineering techniques that produce only structure charts and thus suffer from loss of information, unmanaged complexity, and a lack of correspondence to structured analysis abstractions. The RE-Analyzer has been implemented and currently supports the reverse engineering of software written in the C language. It has been integrated with a CASE tool based on the VIEWS method.
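
As a rough illustration of the intent of control partitioning, splitting a module into smaller but semantically coherent units, the hypothetical sketch below (not the RE-Analyzer's actual algorithm) groups a module's functions into connected components over shared data references.

```python
# Hypothetical illustration of the idea behind control partitioning:
# split a source module into smaller but coherent units by grouping
# functions that touch the same data. Not the RE-Analyzer's algorithm.

# Function -> set of data items it references (toy module).
references = {
    "open_log":   {"log_file"},
    "write_log":  {"log_file", "log_level"},
    "set_level":  {"log_level"},
    "parse_args": {"argv", "options"},
    "print_help": {"options"},
}

def partition(references):
    """Connected components over the 'shares data with' relation."""
    functions = list(references)
    parts = []
    while functions:
        unit, frontier = set(), {functions[0]}
        while frontier:
            f = frontier.pop()
            unit.add(f)
            frontier |= {g for g in functions
                         if g not in unit and references[f] & references[g]}
        parts.append(sorted(unit))
        functions = [f for f in functions if f not in unit]
    return parts

print(partition(references))
# [['open_log', 'set_level', 'write_log'], ['parse_args', 'print_help']]
```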

Journal ArticleDOI
TL;DR: A set of extensions to relational database technology, designed to meet the requirements of the new generation of applications, are described, which include a rich and extensible type subsystem that is tightly integrated into the Structured Query Language (SQL), a rules subsystem to enforce global database semantics, and a variety of performance enhancements.
Abstract: Relational database systems have been very successful in meeting the needs of today's commercial applications. However, emerging applications in disciplines such as engineering design are now generating new requirements for database functionality and performance. This paper describes a set of extensions to relational database technology, designed to meet the requirements of the new generation of applications. These extensions include a rich and extensible type subsystem that is tightly integrated into the Structured Query Language (SQL), a rules subsystem to enforce global database semantics, and a variety of performance enhancements. Many of the extensions described here have been prototyped at the IBM Database Technology Institute and in research projects at the IBM Almaden Research Center in order to demonstrate their feasibility and to validate their design. Furthermore, many of these extensions are now under consideration as part of the evolving American National Standards Institute/International Organization for Standardization (ANSI/ISO) standard for the SQL database language.
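
The flavor of the two extension categories can be suggested with SQLite, which supports registering host functions callable from SQL and triggers as a simple rules mechanism. This is only an analogy: the paper's extensions target DB2 and the evolving ANSI/ISO SQL standard, not SQLite.

```python
# Hedged illustration using SQLite of the two extension categories the
# paper describes: user-defined extensions callable from SQL, and rules
# (here a trigger) that enforce database semantics.
import sqlite3, math

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE part (id INTEGER PRIMARY KEY, x REAL, y REAL)")
con.execute("CREATE TABLE audit (part_id INTEGER, note TEXT)")

# Extensible-function analogue: register a host function and call it
# from SQL as if it were built in.
con.create_function("distance", 2, lambda x, y: math.hypot(x, y))

# Rules-subsystem analogue: a trigger enforcing a global semantic.
con.execute("""
    CREATE TRIGGER log_insert AFTER INSERT ON part
    BEGIN
        INSERT INTO audit VALUES (NEW.id, 'part created');
    END
""")

con.execute("INSERT INTO part VALUES (1, 3.0, 4.0)")
print(con.execute("SELECT distance(x, y) FROM part").fetchall())  # [(5.0,)]
print(con.execute("SELECT * FROM audit").fetchall())  # [(1, 'part created')]
```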

Journal ArticleDOI
S. H. Kan, S. D. Dull, D. N. Amundson, R. J. Lindner, R. J. Hedger
TL;DR: The quality action road map that describes the various quality actions that were deployed is presented, as are the other elements that enabled the implementation of the quality management system.
Abstract: This paper describes the software quality management system for the Application System/400® (AS/400®) computer system. Key elements of the quality management system such as customer satisfaction, product quality, continuous process improvement, and people are discussed. Based on empirical data, recent progress in several quality parameters of the AS/400 software system is examined. The quality action road map that describes the various quality actions that were deployed is presented, as are the other elements that enabled the implementation of the quality management system.

Journal ArticleDOI
TL;DR: The Business Object Management System is a distributed resource manager that generalizes and extends the concepts of shared corporate information to include not only data that are structured such that the data can be held in relational tables but also generalized, complex business information objects.
Abstract: The Business Object Management System (BOMS) is a distributed resource manager that generalizes and extends the concepts of shared corporate information to include not only data that are structured such that the data can be held in relational tables but also generalized, complex business information objects. BOMS allows enterprises to store, manage, and query the totality of their documents, business transaction records, images, etc., in a uniform and consistent way. With this system, businesses can make more effective use of information that has in the past been inaccessible to thorough and systematic queries and that could not be integrated effectively into existing or new business processes. BOMS is targeted toward very large collections of information objects (on the order of a billion objects, equivalent to terabytes of data) and allows enterprises to unlock information treasures that would otherwise remain hidden in collections of that size. BOMS is influenced by theoretical concepts, such as object-orientation and hypermedia, but relies on proven relational database and transaction processing concepts. BOMS has been implemented with DATABASE 2™ (DB2®) and Customer Information Control System/Enterprise Systems Architecture (CICS/ESA™) and has been in productive use since 1991.

Journal ArticleDOI
TL;DR: Four basic approaches to developing technologies are proposed that directly address the three essential attributes of software entities, from which a number of consequences arise in software development: conceptual content, representation, and multiple subdomains.
Abstract: Most improvements in software development technology have occurred by eliminating the accidental aspects of the technology. Further progress now depends on addressing the essence of software. Fred Brooks has characterized the essence of software as a complex construct of interlocking concepts. He concludes that no silver bullet will magically reduce the essential conceptual complexity of software. This paper expands on Brooks's definition to lay a foundation for forging a possible silver bullet. Discussed are the three essential attributes of software entities from which a number of consequences arise in software development: (1) conceptual content, (2) representation, and (3) multiple subdomains. Four basic approaches to develop technologies are proposed that directly address the essential attributes. Although some of these technologies require additional development or testing, they present the most promise for forging a silver bullet. Among them, design reabstraction addresses the most difficult attribute, multiple subdomains, and the most difficult consequence, enhancing existing code, making it the best prospect.

Journal ArticleDOI
TL;DR: An approach to formal derivation is presented that employs the new concept of generic algorithms, providing the developer with a form of reuse of program derivation techniques, correctness proofs, and formal specifications.
Abstract: We suggest a new approach to the derivation of programs from their specifications. The formal derivation and proof of programs as is practiced today is a very powerful tool for the development of high-quality software. However, its application by the software development community has been slowed by the amount of mathematical expertise needed to apply these formal methods to complex projects and by the lack of reuse within the framework of program derivation. To address these problems, we have developed an approach to formal derivation that employs the new concept of generic algorithms. A generic algorithm is one that has (1) a formal specification, (2) a proof that it satisfies this specification, and (3) generic identifiers representing types and operations. It may have embedded program specifications or pseudocode instructions describing the next steps in the stepwise refinement process. Using generic algorithms, most software developers need to know only how to pick and adapt them, rather than perform more technically challenging tasks such as finding loop invariants and deriving loop programs. The adaptation consists of replacing the generic identifiers by concrete types and operations. Since each generic algorithm can be used in the derivation of many different programs, this new methodology provides the developer with a form of reuse of program derivation techniques, correctness proofs, and formal specifications.
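
The paper's generic algorithms, formally specified routines over generic identifiers that developers adapt by substituting concrete types and operations, map naturally onto parametric code. A minimal sketch with an invented example (binary search; not taken from the paper):

```python
# Sketch of a "generic algorithm" in the paper's sense: a formally
# specified routine over generic identifiers (type T, ordering `less`)
# that is adapted by substituting concrete types and operations.
from typing import Callable, Sequence, TypeVar

T = TypeVar("T")

def generic_search(xs: Sequence[T], key: T,
                   less: Callable[[T, T], bool]) -> int:
    """Binary search.
    Precondition:  xs is sorted with respect to `less`.
    Postcondition: returns i with xs[i] == key, or -1 if absent.
    The correctness argument (via the loop invariant below) is made once
    for generic T and inherited by every instantiation."""
    lo, hi = 0, len(xs)
    while lo < hi:                 # invariant: key, if present, is in xs[lo:hi]
        mid = (lo + hi) // 2
        if less(xs[mid], key):
            lo = mid + 1
        elif less(key, xs[mid]):
            hi = mid
        else:
            return mid
    return -1

# Adaptation: replace generic identifiers with concrete types/operations.
print(generic_search([1, 3, 5, 9], 5, lambda a, b: a < b))            # 2
print(generic_search(["ant", "bee", "cow"], "bee", lambda a, b: a < b))  # 1
```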

Journal ArticleDOI
TL;DR: This paper shows that ODBMS frameworks provide a natural repository for supporting object-oriented systems, because they store and manage objects as their atomic units.
Abstract: With increasing frequency, object database management systems (ODBMSs) are being used as a persistent storage framework for applications. This paper shows that ODBMS frameworks provide a natural repository for supporting object-oriented systems, because they store and manage objects as their atomic units. In addition, these frameworks can offer a great deal of leverage to the developers of applications with the integration of two distinct paradigm shifts: the object-oriented development model, and the direct-reference storage model. Software developers who understand the implications of both paradigm shifts are more likely to use the technology effectively and realize most or all of the potential leverage. Highlighted is ObjectStore™ from Object Design, Inc., which is available as part of the IBM object database solution.
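
The direct-reference storage model the paper contrasts with relational access can be shown in miniature. This is a hypothetical sketch with plain in-memory objects standing in for persistent ones; ObjectStore's actual API is not shown.

```python
# Hypothetical sketch of the direct-reference storage model: in an ODBMS,
# an application navigates object references directly instead of joining
# on foreign keys. Plain objects stand in for persistent ones here.

class Department:
    def __init__(self, name):
        self.name = name

class Employee:
    def __init__(self, name, dept):
        self.name = name
        self.dept = dept              # direct reference to a Department

# Relational style: rows plus a foreign-key join.
dept_rows = {10: "Research"}
emp_rows = [("Alice", 10)]
print([(e, dept_rows[d]) for e, d in emp_rows])   # join by key lookup

# ODBMS style: the reference *is* the relationship; no join needed.
research = Department("Research")
alice = Employee("Alice", research)
print(alice.dept.name)                            # direct navigation
```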


Journal ArticleDOI
TL;DR: Some of the business requirements that guided the development of the corporate database are discussed, and the database design process, tool selection, and implementation experiences are described.
Abstract: A decision support system with over one billion rows of data has been developed at Lands' End using the IBM DATABASE 2™ (DB2®) relational database management system. This corporate database is a subset of an Information Warehouse framework and functions as both a decision support system server and an application enabler. The corporate database uses operational data gathered from order processing and customer mailing systems. Weekly processes reformat these real-time data for loading into the corporate database. This paper discusses some of the business requirements that guided the development of the corporate database, and also describes the database design process, tool selection, and implementation experiences.

Journal ArticleDOI
J. P. Singleton, M. M. Schwartz
TL;DR: Concepts of independence between software components, and how this independence can provide flexibility for change, are discussed; a key emphasis of this paper is the integration of software from multiple vendors to create effective solutions.
Abstract: IBM's Information Warehouse™ framework provides a basis for satisfying enterprise requirements for effective use of business data resources. It includes an architecture that defines the structure and interfaces for integrated solutions and includes products and services that can be used to create solutions. This paper uses the Information Warehouse architecture as a context to describe software components that can be used for direct access to formatted business data in a heterogeneous systems environment. Concepts of independence between software components and how this independence can provide flexibility for change are discussed. The integration of software from multiple vendors to create effective solutions is a key emphasis of this paper.

Journal ArticleDOI
TL;DR: This essay provides some background to the formation of the Centre for Advanced Studies, describes some of the challenges deemed important in defining the role of the centre, identifies a number of principles used to guide its formation and current operation, and reports on its progress.
Abstract: The Centre for Advanced Studies (CAS) is an applied research centre formed in 1990 within the IBM Toronto Software Solutions Laboratory. Its primary aim is to facilitate the transfer of research ideas into the various product groups of the laboratory. Although we are still learning how to make CAS operate more effectively, and it is too early to assess its long-term success, the model for CAS has proved to be workable. The primary partners, namely the IBM Toronto Software Solutions Laboratory, the IBM research community, universities in North America, and government agencies that support collaborative research, have found it a viable approach. As an overview, this essay provides some background to the formation of the centre, describes some of the challenges deemed important in defining the role of the centre, identifies a number of principles that are used to guide its formation and current operation, and reports on its progress. We conclude with a discussion of some lessons learned in the operation of the centre to date and identify future activities and directions for the centre.

Journal Article
TL;DR: The IBM RISC System/6000® processor is a superscalar processor; recent architectural extensions support hardware square-root computation and speed floating-point-to-integer conversion, and superscalar capabilities are shown to be an attractive alternative to aggressive clock rates.
Abstract: Since its announcement, the IBM RISC System/6000® processor has characterized the aggressive instruction-level parallelism approach to achieving performance. Recent enhancements to the architecture and implementation provide greater superscalar capability. This paper describes the architectural extensions that improve storage reference bandwidth, allow hardware square-root computation, and speed floating-point-to-integer conversion. The implementation, which exploits these extensions and doubles the number of functional units, is also described. A comparison of performance results on a variety of industry standard benchmarks demonstrates that superscalar capabilities are an attractive alternative to aggressive clock rates.

Journal ArticleDOI
TL;DR: A management system was established in IBM to improve the quality of its software products; it is based on empowering programming development teams, guided by a set of principles defined by the programming community, and driven by aggressive goals established to improve quality and enhance customer satisfaction.
Abstract: A management system was established in IBM to improve the quality of its software products. It represents a nontraditional approach to quality improvement. The approach is based on empowering programming development teams, guided by a set of principles that were defined by the programming community and driven by aggressive goals established to improve quality and enhance customer satisfaction. In turn the experiences of the newly empowered teams led to a set of good programming practices that were shared across the programming community in IBM. The result has not only been a dramatic improvement in the quality of IBM's program products, but also the fostering of a work environment based on creativity and excellence that engenders pride of ownership for work performed.

Journal ArticleDOI
TL;DR: This essay provides information about the IBM Systems Journal and offers guidelines for prospective authors to aid the writer in preparing clear, complete papers of high quality.
Abstract: Effective communication of technical work is the primary goal of the technical journal. This essay provides information about the IBM Systems Journal and offers guidelines for prospective authors. The Systems Journal and its audience are described, and the processing of papers is discussed, along with suggestions for content and structure. To further aid the writer in preparing clear, complete papers of high quality, we include a bibliography of technical writing references.