
Showing papers in "IBM Systems Journal" in 1997


Journal ArticleDOI
TL;DR: This paper introduces the concept of a software bookshelf as a means to capture, organize, and manage information about a legacy software system, and illustrates how a software bookshelf is populated with the information of a given software project and how the bookshelf can be used in a program-understanding scenario.
Abstract: Legacy software systems are typically complex, geriatric, and difficult to change, having evolved over decades and having passed through many developers. Nevertheless, these systems are mature, heavily used, and constitute massive corporate assets. Migrating such systems to modern platforms is a significant challenge due to the loss of information over time. As a result, we embarked on a research project to design and implement an environment to support software migration. In particular, we focused on migrating legacy PL/I source code to C++, with an initial phase of looking at redocumentation strategies. Recent technologies such as reverse engineering tools and World Wide Web standards now make it possible to build tools that greatly simplify the process of redocumenting a legacy software system. In this paper we introduce the concept of a software bookshelf as a means to capture, organize, and manage information about a legacy software system. We distinguish three roles directly involved in the construction, population, and use of such a bookshelf: the builder, the librarian, and the patron. From these perspectives, we describe requirements for the bookshelf, as well as a generic architecture and a prototype implementation. We also discuss various parsing and analysis tools that were developed and integrated to assist in the recovery of useful information about a legacy system. In addition, we illustrate how a software bookshelf is populated with the information of a given software project and how the bookshelf can be used in a program-understanding scenario. Reported results are based on a pilot project that developed a prototype bookshelf for a software system consisting of approximately 300K lines of code written in a PL/I dialect.

233 citations


Journal ArticleDOI
Frank Leymann1, Dieter Roller1
TL;DR: A method is proposed to develop workflow-based applications in a cohesive and consistent way and their principal advantages are derived and set in context to transaction, object, and CASE technology.
Abstract: A significant number of companies are reengineering their business to be more effective and productive. Consequently, existing applications must be modified, and new applications must be written. The new applications typically run in a distributed and heterogeneous environment, performing single tasks in parallel, and demanding special transaction functionality for the supporting environments. Workflow-based applications offer this type of capability. In this paper, their principal advantages are derived and set in context to transaction, object, and CASE (computer-assisted software engineering) technology. In particular, a method is proposed to develop these workflow-based applications in a cohesive and consistent way.

229 citations


Journal ArticleDOI
TL;DR: This paper outlines the technology for an electronic book composed of hundreds of electronically addressable display pages printed on real paper substrates; such pages may be typeset in situ, giving the book the capability to be any book.
Abstract: In this paper we describe our efforts at the Massachusetts Institute of Technology Media Laboratory toward realizing an electronic book comprised of hundreds of electronically addressable display pages printed on real paper substrates. Such pages may be typeset in situ, thus giving such a book the capability to be any book. We outline the technology we are developing to bring this about and describe a number of applications that such a device enables.

214 citations


Journal ArticleDOI
TL;DR: This paper presents algorithms developed to simplify performance management, dynamically adjust computing resources, and balance work across parallel systems that provide a single-system image to manage competing workloads running across multiple systems.
Abstract: Workload management, a function of the OS/390™ operating system base control program, allows installations to define business objectives for a clustered environment (Parallel Sysplex™ in OS/390). This business policy is expressed in terms that relate to business goals and importance, rather than the internal controls used by the operating system. OS/390 ensures that system resources are assigned to achieve the specified business objectives. This paper presents algorithms developed to simplify performance management, dynamically adjust computing resources, and balance work across parallel systems. We examine the types of data the algorithms require and the measurements that were devised to assess how well work is achieving customer-set goals. Two examples demonstrate how the algorithms adjust system resource allocations to enable a smooth adaptation to changing processing conditions. To the customer, these algorithms provide a single-system image to manage competing workloads running across multiple systems.
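The goal-driven adjustment described above can be illustrated with a toy sketch. This is not the actual OS/390 workload-management algorithm; the service-class names, goals, and share units below are all hypothetical, and real WLM reasons over far richer measurement data.

```python
# Illustrative sketch (not the actual OS/390 algorithms): a goal-oriented
# manager that shifts a fixed pool of resource shares toward the service
# class missing its response-time goal by the widest margin.

def rebalance(classes, step=1):
    """classes: dict name -> {'goal': target, 'actual': measured, 'share': units}.
    Moves `step` share units from the class most comfortably beating its
    goal to the class missing its goal by the largest ratio."""
    # Performance index: actual/goal; > 1.0 means the goal is being missed.
    pi = {name: c['actual'] / c['goal'] for name, c in classes.items()}
    receiver = max(pi, key=pi.get)   # worst performer relative to its goal
    donor = min(pi, key=pi.get)      # best performer relative to its goal
    if pi[receiver] > 1.0 and donor != receiver and classes[donor]['share'] > step:
        classes[donor]['share'] -= step
        classes[receiver]['share'] += step
    return classes

workloads = {
    'online': {'goal': 0.5, 'actual': 0.9, 'share': 10},  # missing its goal
    'batch':  {'goal': 5.0, 'actual': 2.0, 'share': 10},  # beating its goal
}
rebalance(workloads)
```

Repeating this step as new measurements arrive gives the smooth adaptation to changing conditions that the abstract describes.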

116 citations


Journal ArticleDOI
TL;DR: This paper describes a clustered multiprocessor system developed for the general-purpose, large-scale commercial marketplace based on an architecture designed to combine the benefits of full data sharing and parallel processing in a highly scalable clustered computing environment.
Abstract: This paper describes a clustered multiprocessor system developed for the general-purpose, large-scale commercial marketplace. The system (S/390® Parallel Sysplex™) is based on an architecture designed to combine the benefits of full data sharing and parallel processing in a highly scalable clustered computing environment. The Parallel Sysplex offers significant advantages in the areas of cost, performance range, and availability.

76 citations


Journal ArticleDOI
V. Srinivasan1, D. T. Chang1
TL;DR: The user and programming interfaces provided by various products and tools for object-oriented applications that create and manipulate persistent objects are discussed, including implementation requirements and limitations imposed by each of the three approaches to object persistence.
Abstract: Object-oriented models have rapidly become the model of choice for programming most new computer applications. Since most application programs need to deal with persistent data, adding persistence to objects is essential to making object-oriented applications useful in practice. There are three classes of solutions for implementing persistence in object-oriented applications: the gateway-based object persistence approach, which involves adding object-oriented programming access to persistent data stored using traditional non-object-oriented data stores, the object-relational database management system (DBMS) approach, which involves enhancing the extremely popular relational data model by adding object-oriented modeling features, and the object-oriented DBMS approach (also called the persistent programming language approach), which involves adding persistence support to objects in an object-oriented programming language. In this paper, we describe the major characteristics and requirements of object-oriented applications and how they may affect the choice of a system and method for making objects persistent in that application. We discuss the user and programming interfaces provided by various products and tools for object-oriented applications that create and manipulate persistent objects. In addition, we describe the pros and cons of choosing a particular mechanism for making objects persistent, including implementation requirements and limitations imposed by each of the three approaches to object persistence previously mentioned. Given that several object-oriented applications might need to share the same data, we describe how such applications can interoperate with each other. Finally, we describe the problems and solutions of how object-oriented applications can coexist with non-object-oriented (legacy) applications that access the same data.

59 citations


Journal ArticleDOI
TL;DR: The mixed effects of CASE tools use on AD performance suggest that a cautious approach is appropriate for predicting the impact of similar AD tools in the future, and highlight the importance of carefully managing the introduction and use of such tools if they are to be used successfully in the modern enterprise.
Abstract: In this paper we report on the results of a four-year study of how automated tools are used in application development (AD). Drawing on data collected from over 100 projects at 22 sites in 15 Fortune 500 companies, we focus on understanding the relationship between using such automated AD tools and various measures of AD performance—including user satisfaction, labor cost per function point, schedule slippage, and stakeholder-rated effectiveness. Using extensive data from numerous surveys, on-site observations, and field interviews, we found that the direct effects of automated tool use on AD performance were mixed, and that the use of such tools by themselves makes little difference in the results. Further analysis of key intervening factors finds that training, structured methods use, project size, design quality, and focusing on the combined use of AD tools adds a great deal of insight into what contributes to the successful use of automated tools in AD. Despite the many grand predictions of the trade press over the past decade, computer-assisted software engineering (CASE) tools failed to emerge as the promised “silver bullet.” The mixed effects of CASE tools use on AD performance that we found, coupled with the complex impact of other key factors such as training, methods, and group interaction, suggest that a cautious approach is appropriate for predicting the impact of similar AD tools (e.g., object-oriented, visual environments, etc.) in the future, and highlight the importance of carefully managing the introduction and use of such tools if they are to be used successfully in the modern enterprise.

57 citations


Journal ArticleDOI
TL;DR: This work examines the problems encountered in extending DATABASE 2™ (DB2®) for Multiple Virtual Storage/Enterprise Systems Architecture (MVS/ESA™), also called DB2 for OS/390™, an industrial-strength relational database management system originally designed for a single-system environment, to support the multisystem shared-data architecture.
Abstract: We examine the problems encountered in extending DATABASE 2™ (DB2®) for Multiple Virtual Storage/Enterprise Systems Architecture (MVS/ESA™), also called DB2 for OS/390™, an industrial-strength relational database management system originally designed for a single-system environment, to support the multisystem shared-data architecture. The multisystem data sharing function was delivered in DB2 Version 4. DB2 data sharing requires an S/390® Parallel Sysplex™ environment because DB2's use of the coupling facility technology plays a central role in delivering highly efficient and scalable data sharing functions. We call this the shared-data architecture because of its unique use of the coupling facility.

54 citations


Journal ArticleDOI
TL;DR: The results of this study indicate that several organizational factors do affect communication effort, but not always in a simple, straightforward way.
Abstract: The empirical study described in this paper addresses the issue of communication among members of a software development organization. In particular, we have studied interactions between participants in a review process. The question of interest is whether or not organizational relationships among the review participants have an effect on the amount of communication effort expended. The study uses both quantitative and qualitative methods for data collection and analysis. These methods include participant observation, structured interviews, graphical data presentation, and nonparametric statistics. The results of this study indicate that several organizational factors do affect communication effort, but not always in a simple, straightforward way. Not surprisingly, people take less time to communicate when they are familiar with one another and when they work in close physical proximity. However, certain mixtures of organizationally “close” and “distant” participants in an interaction result in more effort needed to communicate. Also, interactions tend to be more effort-intensive when they occur in a meeting and when more people are involved. These results provide a better understanding of how organizational structure helps or hinders communication in software development.

53 citations


Journal ArticleDOI
G. M. King1, Daniel M. Dias1, Philip S. Yu1
TL;DR: The scalability of the S/390 Parallel Sysplex is quantified and it is shown that the transaction rate supported is close to linear as nodes are added to the system.
Abstract: Supporting high transaction rates and high availability for on-line transaction processing and emerging applications requires systems consisting of multiple computing nodes. We outline various cluster architectures and describe the factors that motivate the S/390® Parallel Sysplex™ architecture and its resulting advantages. We quantify the scalability of the S/390 Parallel Sysplex and show that the transaction rate supported is close to linear as nodes are added to the system. The key facet of the S/390 Parallel Sysplex architecture is the coupling facility. The coupling facility provides for very efficient intertransaction concurrency control, buffer cache coherency control, and shared buffer management, among other functions, that lead to the excellent scalability achieved. It also provides for effective dynamic load balancing, high data buffer hit ratios, and load balancing after a failure.

46 citations


Journal ArticleDOI
TL;DR: The method for on-line reorganization copies data while arranging the data in the new copy in reorganized form, and maintains a table that maps between old and new record identifiers, to match log entries with data records in thenew copy.
Abstract: Any database management system may need some type of reorganization. However, reorganization typically requires taking a database off line, which can be unacceptable for a very large or highly available (24-hour) database. A solution is to reorganize on line (concurrently with users' reading and writing of data in the database). This paper describes a method for performing one type of reorganization on line. The type of reorganization distributes free space evenly, removes overflow, and clusters data. The method for on-line reorganization copies data while arranging the data in the new copy in reorganized form. The method then applies the database log to bring the new copy up to date (to reflect users' writing of the old copy). The method maintains a table that maps between old and new record identifiers, to match log entries with data records in the new copy.
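The three steps in the abstract (copy into reorganized form, map old to new record identifiers, replay the log against the new copy) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the record format, the "reorganized" ordering, and the log shape are invented stand-ins.

```python
# Hedged sketch of on-line reorganization: copy records into a new area in
# reorganized form, keep an old->new record-ID mapping table, then apply
# log entries (writes that raced with the copy) via that table.

def online_reorganize(old_records, log):
    """old_records: dict old_rid -> value.
    log: list of (old_rid, new_value) writes made while copying."""
    new_copy, rid_map = {}, {}
    # Steps 1+2: copy in reorganized (here: sorted-ID) order, building the map.
    for new_rid, old_rid in enumerate(sorted(old_records)):
        new_copy[new_rid] = old_records[old_rid]
        rid_map[old_rid] = new_rid
    # Step 3: replay the log to bring the new copy up to date.
    for old_rid, value in log:
        new_copy[rid_map[old_rid]] = value
    return new_copy, rid_map

old = {42: 'a', 7: 'b', 19: 'c'}
log = [(7, 'b2')]                     # a user write during the copy phase
new_copy, rid_map = online_reorganize(old, log)
```

The mapping table is the key device: it lets log entries, which reference old record identifiers, be matched with records that now live at new positions.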

Journal ArticleDOI
TL;DR: This work presents a novel approach to documentary storytelling that celebrates electronic narrative as a process in which the author(s), a networked presentation system, and the audience actively collaborate in the co-construction of meaning.
Abstract: We present a novel approach to documentary storytelling that celebrates electronic narrative as a process in which the author(s), a networked presentation system, and the audience actively collaborate in the co-construction of meaning. A spreading-activation network is used to select relevant story elements from a multimedia database and dynamically conjoin them into an appealing, coherent narrative presentation. The flow of positive or negative “energies” through associative keyword links determines which story materials are presented as especially relevant “next steps” and which ones recede into the background, out of sight. The associative nature of this navigation serves to enhance meaning while preserving narrative continuity. This approach is well-suited for the telling of stories that—because of their complexity, breadth, or bulk—are best communicated through variable-presentation systems. Connected to the narrative engine through rich feedback loops and intuitively understandable interfaces, the audience becomes an active partner in the shaping and presentation of story.
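A toy spreading-activation pass in the spirit of the narrative engine described above can make the mechanism concrete. The keyword graph, link weights, decay factor, and clip names below are all hypothetical, and the real system involves far richer selection and sequencing logic.

```python
# Toy spreading activation: positive or negative "energy" on keywords flows
# along weighted associative links to story elements; the highest-scoring
# elements surface as the most relevant "next steps".

def spread(links, activation, decay=0.5):
    """links: dict keyword -> list of (story_element, weight).
    activation: dict keyword -> energy (positive or negative)."""
    scores = {}
    for keyword, energy in activation.items():
        for element, weight in links.get(keyword, []):
            scores[element] = scores.get(element, 0.0) + decay * energy * weight
    # Most relevant first; low scorers recede into the background.
    return sorted(scores, key=scores.get, reverse=True)

links = {
    'harbor': [('clip_ships', 1.0), ('clip_market', 0.4)],
    'night':  [('clip_market', 0.8)],
}
# The audience boosts 'harbor' and suppresses 'night'.
ranking = spread(links, {'harbor': 1.0, 'night': -0.5})
```

Because selection follows associative keyword links rather than a fixed sequence, the presentation stays coherent while adapting to audience input.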

Journal ArticleDOI
TL;DR: This paper examines empirical data from several commercial products developed using object-oriented methods and model and simulate the impact of the software task-completion incentives and deadlines on the productivity that might be expected from a technology with high-performance potential.
Abstract: Unless the business model that governs software production adjusts to new technology, it is unlikely that an investment in the technology will result in real productivity benefits. Commercial development always takes place in the context of a business model, and in that context an understanding of how business constraints influence commercial software development is imperative. As software markets become more competitive and business pressures shorten software development cycles, improved software development productivity continues to be a major concern in the software industry. Many believe that new software technology, such as object-oriented development, provides a breakthrough solution to this problem. Unfortunately, there is little quantitative evidence for this belief. In this paper we explore the relationship between the business model and the productivity that a software development methodology can achieve in a commercial environment under that model. We first examine empirical data from several commercial products developed using object-oriented methods. The results indicate that object-oriented development may not perform any better than “procedural” development in environments that lack incentives for early completion of intermediate project tasks. We then model and simulate the impact of the software task-completion incentives and deadlines on the productivity that might be expected from a technology with high-performance potential. We show how and why some common business practices might lower project productivity and project completion probability. We also discuss to what extent poor software process control and (im)maturity of the technology compounds the problem.

Journal ArticleDOI
TL;DR: The use of “restart techniques” as an important strategy in providing increased availability in a parallel structure is discussed and a set of functions that have been developed for the S/390® Parallel Sysplex™ are covered.
Abstract: Parallel and clustered architectures are increasingly being used as a foundation for high-capacity servers. At the same time, the availability expectations are also rising rapidly, since the effects of down time become more apparent and have higher economic consequences for larger systems. The use of parallel structures generally implies more hardware and software components. The presence of more and larger components increases the chances that an individual component will fail, and that failure has the potential to hurt the overall availability of the system. This paper discusses the use of “restart techniques” as an important strategy in providing increased availability in a parallel structure. The paper covers a set of functions that have been developed for the S/390® Parallel Sysplex™.

Journal ArticleDOI
TL;DR: This paper presents a framework for management of distributed applications and systems based on a set of common management services that support management activities, which include monitoring, control, configuration, and data repository services.
Abstract: A distributed computing system consists of heterogeneous computing devices, communication networks, operating system services, and applications. As organisations move toward distributed computing environments, there will be a corresponding growth in distributed applications central to the enterprise. The design, development, and management of distributed applications presents many difficult challenges. As these systems grow to hundreds or even thousands of devices and a similar or greater magnitude of software components, it will become increasingly difficult to manage them without appropriate support tools and frameworks. Further, the design and deployment of additional applications and services will be, at best, ad hoc without modelling tools and timely data on which to base design and configuration decisions. This paper presents a framework for management of distributed applications and systems. The framework is based on a set of common management services that support management activities. The services include monitoring, control, configuration, and data repository services. A prototype system built on the framework is described that implements and integrates management applications providing visualisation, fault location, performance monitoring and modelling, and configuration management. The prototype also demonstrates how various management services can be implemented.

Journal ArticleDOI
TL;DR: The use of networks of workstations for parallel computing is becoming increasingly common and attractive for a large class of parallel applications that can tolerate the noise of the distributed system.
Abstract: The use of networks of workstations for parallel computing is becoming increasingly common. Networks of workstations are attractive for a large class of parallel applications that can tolerate the ...

Journal ArticleDOI
TL;DR: The system architecture, research results, and the prototyping effort are presented, and multimedia news has been selected as a target application for development and the results from the various projects have been integrated into a multimedia news prototype.
Abstract: In September 1993, the Canadian Institute for Telecommunications Research, in collaboration with the IBM Toronto Laboratory Centre for Advanced Studies, initiated a major project on broadband services. The goal of this major project is to provide the software technologies required for the development of distributed multimedia applications. Of particular interest are “presentational” applications where multimedia documents, stored in database servers, are retrieved by remote users over a broadband network. Emphasis is placed on efficiency and service flexibility. By efficiency, we mean the ability to support many users and many multimedia documents. By service flexibility, we mean that the application is able to support a wide range of quality-of-service requirements from the users, adapt to changing network conditions, and support multiple document types. The research program consists of six constituent projects: multimedia data management, continuous media file server, quality-of-service negotiation and adaptation, scalable video encoding, synchronization of multimedia data, and project integration. These projects are investigated by a multidisciplinary team from eight institutions across Canada. Multimedia news has been selected as a target application for development, and the results from the various projects have been integrated into a multimedia news prototype. In this paper, the system architecture, research results, and the prototyping effort are presented.

Journal ArticleDOI
TL;DR: A locking architecture and operating system support provided for the locking services in a clustered environment that is initially targeted toward database systems, but is general enough for use in many other environments.
Abstract: Clustered and parallel architectures provide the ability to construct systems of high capacity, scalable features, and high availability. In order to provide high throughput in a shared-disk architecture, fundamental advances in multisystem locking technologies are required. This paper describes a locking architecture and operating system support provided for the locking services in a clustered environment. Although initially targeted toward database systems, the functions are general enough for use in many other environments. The paper also describes the products that have deployed this technology.

Journal ArticleDOI
TL;DR: Development tool strategies are presented that encourage a more evolutionary approach, easing the transition between the traditional and object worlds, and masking the complexities of object technology by exploiting higher-level rapid application development techniques.
Abstract: Object technology is a well-known advance for developing software that is receiving a great deal of attention. Unfortunately, the educational investment required and the additional complexity introduced by most tools that support this technology have dampened its rate of adoption by many enterprise developers. To bridge this skills and technology gap, development tool strategies are presented that encourage a more evolutionary approach, easing the transition. Rather than requiring totally new skills and tools, these strategies take advantage of the strengths and familiarity of traditional facilities—they hide much of the raw technological complexities and yet exploit the strengths of object technology by supporting the creation of transitional applications. The strategies described fall into two categories: bridging between the traditional and object worlds, and masking the complexities of object technology by exploiting higher-level rapid application development techniques.

Journal ArticleDOI
P. Johnson1
TL;DR: CICSPlex SM for MVS/ESA provides simplified systems management of multiple CICS regions within a sysplex environment and is integrated with MVS workload manager, the MVS automatic restart manager, and automation products such as NetView and the NetView resource object data model (RODM).
Abstract: IBM S/390® Parallel Sysplex™, a multisystem parallel processing environment, provides benefits in terms of reliability, availability, and the total cost of computing. These benefits, however, bring about a systems management challenge because of the increased number of address spaces that need to be managed. This paper describes how CICSPlex® System Manager (SM) for MVS/ESA™ provides simplified systems management of multiple CICS® regions within a sysplex environment. CICSPlex SM provides workload-sensitive balancing of CICS transactions, a single-system image for CICS operations and monitoring, and general-purpose thresholds for resource conditions within CICS. It also describes how CICSPlex SM is integrated with the MVS workload manager, the MVS automatic restart manager, and automation products such as NetView® and the NetView resource object data model (RODM).

Journal ArticleDOI
D. Connolly1
TL;DR: The very features that made the Internet so attractive to businesses—its openness, global reach, and ease of use—were also its greatest impediments to exchanging money, intellectual property, and products securely and reliably over a channel.
Abstract: When businesses first began to adopt the Internet as a communication tool, using it as a direct sales channel was the idea that held the most allure for them. Unfortunately, the very features that made the Internet so attractive to businesses—its openness, global reach, and ease of use—were also its greatest impediments to exchanging money, intellectual property, and products securely and reliably over such a channel. Simply put, the Internet was not designed to address the unique needs of a marketplace.


Journal ArticleDOI
E. S. Flint1
TL;DR: This paper examines the use of object wrappers and introduces two other types of wrappers, the procedural wrapper and the combination wrapper, for practical use with COBOL legacy applications.
Abstract: Object wrappers have been presented as a way to allow legacy applications and object-oriented applications to work together. However, object wrappers do not always solve the interoperability problem for COBOL legacy applications. This paper examines the use of object wrappers and introduces two other types of wrappers, the procedural wrapper and the combination wrapper, for practical use with COBOL legacy applications. The main concerns of a developer of an object-oriented application that uses the services of or provides services to a legacy application are addressed. Examples of “real-world” COBOL legacy applications are cited and samples of all three types of wrapper code are provided.
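The object-wrapper idea the abstract examines can be sketched briefly. The legacy routine and flat record layout below are invented stand-ins for a COBOL program, shown in Python only for illustration; the paper's own samples are in COBOL-adjacent code.

```python
# Illustrative sketch of an object wrapper: an object interface whose
# methods delegate to a procedural legacy routine operating on a flat
# record. Names and layouts here are hypothetical stand-ins.

def legacy_compute_balance(account_record):
    """Stand-in for a procedural COBOL routine on a flat record."""
    return account_record['credits'] - account_record['debits']

class Account:
    """Object wrapper: new object-oriented callers use this interface,
    while the legacy procedure and its record format remain unchanged."""
    def __init__(self, credits, debits):
        self._record = {'credits': credits, 'debits': debits}

    def balance(self):
        return legacy_compute_balance(self._record)

acct = Account(credits=150, debits=40)
```

A procedural wrapper inverts the direction: a flat, procedure-style entry point is placed in front of object-oriented code so legacy callers can reach it; the combination wrapper supports both directions at once.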

Journal ArticleDOI
M. G. Kienzle1, R. R. Berbec1, G. P. Bozman1, C. K. Eilert1, M. Eshel1, R. Mansell1 
TL;DR: The OS/390™ LAN Server has been enhanced to support multimedia data delivery and benefits from the robustness, scalability, and flexibility of the S/390® system environment, which allows it to move into new multimedia applications.
Abstract: The rapidly increasing storage and transmission capacities of computers and the progress in compression algorithms make it possible to build multimedia applications that include audio and video. Such applications range from educational and training videos, delivered to desktops in schools and enterprises, to entertainment services at home. Applications developed for stand-alone personal computers can be deployed in distributed systems without change by using the client/server model and file servers that allow the sharing of applications among many users. The OS/390™ LAN Server has been enhanced to support multimedia data delivery. Resource management and admission control, wide disk striping to provide high data bandwidths, and multimedia-specific performance enhancements have been added. The resulting server benefits from the robustness, scalability, and flexibility of the S/390® system environment, which allows it to move into new multimedia applications. Multimedia support on a robust, widely installed platform with little or no additional hardware requirements gives customers the opportunity to enhance their existing applications with multimedia features and then expand their capacity as the demands of the applications increase. This multimedia server platform is in use with several interesting applications.

Journal ArticleDOI
TL;DR: The Centre for Advanced Studies was founded in 1990 to facilitate the transfer of research ideas into products and has successfully moved concepts into products, matched students with appropriate jobs, demonstrated IBM's commitment to leadership by integrating current research into products available to customers.
Abstract: The Centre for Advanced Studies (CAS) was founded in 1990 to facilitate the transfer of research ideas into products. This essay describes the original goals of CAS, how they have been applied, and how they have evolved. CAS has successfully moved concepts into products, matched students with appropriate jobs, demonstrated IBM's commitment to leadership by integrating current research into products available to customers, and supported academic research by doctoral students and faculty from more than 30 universities. More than 150 professors have received CAS funds. As we move into our eighth year, CAS is poised to expand beyond North America and to involve additional IBM sites.

Journal ArticleDOI
T. Banks1, K. E. Davies1, C. Moxey1
TL;DR: This paper describes how CICS/ESA takes advantage of and integrates new Multiple Virtual Storage features to provide an external view of the sysplex as a single entity with workload-sensitive routing algorithms for passing work requests between nodes of thesysplex.
Abstract: The IBM CICS/ESA® transaction processing subsystem has been enhanced in a way that enables transparent exploitation of Parallel Sysplex™ technology, even by existing large-scale applications. This paper describes how CICS/ESA takes advantage of and integrates new Multiple Virtual Storage (MVS) features to provide an external view of the sysplex as a single entity with workload-sensitive routing algorithms for passing work requests between nodes of the sysplex. Sysplex-wide access is provided to application scratch-pad data, to security data, and to database and file data with complete integrity.

Journal ArticleDOI
R. Prins1, A. Blokdijk1, N. E. van Oosterom1
TL;DR: A component-based development process with well-defined reuse points and rapid-application-development (RAD) characteristics is described, with which robust business objects can be developed and tested individually and concurrently in large teams, then dynamically assembled into business applications and work-flows as desired.
Abstract: The business information system of the future will take the form of a swarm of business objects that are event-driven, concurrently executing, and running in a heterogeneous distributed environment. The inherent complexity of the business-object development process requires a difficult-to-find combination of skills in its developers. This complexity needs to be reduced to enable the participation of typical developers and to yield more successful projects. Fortunately, there are many common aspects among business objects. This paper describes a development approach that exploits these commonalities, reducing complexity through systematically defined, separate layers. The approach was developed in a research effort performed by the Application Development Effectiveness practice of the IBM Consulting Group in the Netherlands. It was subsidized by the Dutch Ministry of Economic Affairs as an information technology innovation project. A "proof of concept" was obtained in a joint project with Rabobank in the Netherlands. The result is a component-based development process with well-defined reuse points and rapid-application-development (RAD) characteristics. With this approach, robust business objects can be developed and tested individually and concurrently in large teams, then dynamically assembled into business applications and work-flows as desired.

Journal ArticleDOI
TL;DR: The resource manager component of the OS/390 LAN Server (video server) that implements resource reservation is described, including the real-time data import function that places videos on the storage devices after making the necessary space and bandwidth reservations.
Abstract: In a multimedia server, resource reservation is critical for guaranteeing jitter-free delivery of video. In this paper, we describe the resource manager component of the OS/390 LAN Server (video server) that implements resource reservation. The resource manager functions can be divided into three categories: (1) the system management functions that allow arbitrary multimedia resources to be dynamically defined, undefined, and calibrated, and their capacities monitored remotely via the Simple Network Management Protocol, (2) the operational functions that allow video streams to reserve and release resources for supporting playback without explicit specification of the needed resources, and (3) the real-time data import function that places videos on the storage devices so as to balance the load among the devices after making the necessary space and bandwidth reservations. We finally discuss research issues to exploit economies of scale.
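The operational functions described above reserve and release resources per video stream, admitting a stream only if capacity remains. A minimal Python sketch of such admission control follows; the class and method names are hypothetical, not taken from the OS/390 LAN Server resource manager:

```python
# Hypothetical admission-control sketch: a stream is admitted only if
# its bandwidth fits within the remaining reserved capacity, which is
# the property that guarantees jitter-free playback for admitted streams.
class ResourceManager:
    def __init__(self, capacity_mbps: float):
        self.capacity = capacity_mbps   # total deliverable bandwidth
        self.reserved = 0.0             # bandwidth held by active streams

    def admit(self, stream_mbps: float) -> bool:
        """Reserve bandwidth for a new stream, or reject it."""
        if self.reserved + stream_mbps <= self.capacity:
            self.reserved += stream_mbps
            return True
        return False

    def release(self, stream_mbps: float) -> None:
        """Return a finished stream's bandwidth to the pool."""
        self.reserved = max(0.0, self.reserved - stream_mbps)
```

The key design point is that rejection happens at stream start-up rather than mid-playback: an overloaded server refuses new work instead of degrading streams it has already guaranteed.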

Journal ArticleDOI
Norbert Bieberstein1
TL;DR: This essay reflects on the historical path of software development toward an engineering discipline and introduces the papers collected for this theme on application development, which demonstrate this progress in selected areas.
Abstract: The title question is answered differently according to the nature of the person being asked. A talented person with a new solution to a particular problem in the existing technology may be a revolutionary, gathering devoted followers who spread the new idea. Such leaders and their followers then propagate paradigm shifts promoting the one answer, the “silver bullet” to solve all problems. When we look more closely, in most cases only a single aspect was solved; we were not given a whole new way to develop software. This confirms the position of the traditionalists, who continue to keep and protect what is well known. In the end, in application development as in any other discipline, evolution is driven by new inventions and kept on track by the conservatives among us. This essay reflects on the historical path of software development toward an engineering discipline. It also introduces the papers collected for this theme on application development, which demonstrate this progress in selected areas.

Journal ArticleDOI
M. Meier1, H. Pan1, G. Y. Fuh1
TL;DR: A set of extensions to a distributed debugger and DB2/CS to support the debugging of external programs is described and a prototype was implemented to show the feasibility of the proposed approach.
Abstract: The technology of running external programs on the server side of a relational database management system (RDBMS) has been developed in the past few years. Database 2™/Common Server (DB2™/CS) for UNIX™-based platforms supports external programs (i.e., user-defined functions and stored procedures) that are written by the application developer in a third-generation language such as C or C++. The main difficulty in debugging these external programs is that they are executed under the control of DB2/CS, which is itself a large software system for which no source code is provided. It is therefore impractical for a debugger to penetrate through the layers of software of DB2/CS to locate and debug the external programs. It is also very difficult for the debugger to determine when an external program will be invoked by the database engine and in which process it will be run. In addition, in an environment where the DB2/CS server is shared among a large number of users, it is necessary to ensure that the debugger does not violate the security of the DB2/CS system. In this paper, we describe a set of extensions to a distributed debugger and DB2/CS to support the debugging of external programs. A prototype was implemented to show the feasibility of the proposed approach.