
Showing papers in "Ibm Systems Journal in 2008"


Journal ArticleDOI
TL;DR: An emerging logic of value creation and exchange called service-dominant logic is suggested, which promotes the conceptualization of service as a process, rather than a unit of output, and a focus on dynamic resources.
Abstract: Advancing service science requires a service-centered conceptual foundation. Toward this goal, we suggest that an emerging logic of value creation and exchange called service-dominant logic is a more robust framework for service science than the traditional goods-dominant logic. The primary tenets of service-dominant logic are: (1) the conceptualization of service as a process, rather than a unit of output; (2) a focus on dynamic resources, such as knowledge and skills, rather than static resources, such as natural resources; and (3) an understanding of value as a collaborative process between providers and customers, rather than what producers create and subsequently deliver to customers. These tenets are explored and a foundational lexicon for service science is suggested.

565 citations


Journal ArticleDOI
Ali Arsanjani, S. Ghosh, Abdul Allam, T. Abdollah, S. Gariapathy, Kerrie L. Holley
TL;DR: A fractal model of software development is presented that can enable the SOMA method to evolve in an approach that goes beyond the iterative and incremental and instead leverages method components and patterns in a recursive, self-similar manner opportunistically at points of variability in the life cycle.
Abstract: Service-oriented modeling and architecture (SOMA) has been used to conduct projects of varying scope in multiple industries worldwide for the past five years. We report on the usage and structure of the method used to effectively analyze, design, implement, and deploy service-oriented architecture (SOA) projects as part of a fractal model of software development. We also assert that the construct of a service and service modeling, although introduced by SOA, is a software engineering best practice for which an SOA method aids both SOA usage and adoption. In this paper we present the latest updates to this method and share some of the lessons learned. The SOMA method incorporates the key aspects of overall SOA solution design and delivery and is integrated with existing software development methods through a set of placeholders for key activity areas, forming what we call solution templates. We also present a fractal model of software development that can enable the SOMA method to evolve in an approach that goes beyond the iterative and incremental and instead leverages method components and patterns in a recursive, self-similar manner opportunistically at points of variability in the life cycle.

458 citations


Journal ArticleDOI
TL;DR: How service value is created in a network context and how the structure and dynamics of the value network as well as customer expectations influence the complexity of the services ecosystem are explored.
Abstract: This paper explores how service value is created in a network context and how the structure and dynamics of the value network as well as customer expectations influence the complexity of the services ecosystem. The paper then discusses what transformative role information and communication technology (ICT) plays in coordinating and delivering value and managing this complexity. A conceptual model is developed for understanding and investigating the nature, delivery, and exchange of service value and assessing the complexity of a service value network. Three central arguments are presented. First, value in the services economy is driven and determined by the end consumer and delivered through a complex web of direct and indirect relationships between value network actors. Second, the complexity of service value networks not only depends on the number of actors but also on the conditional probabilities that these actors are involved in delivering the service to the consumer. Third, ICT plays a central role in reducing complexity for consumers by providing greater levels of value network integration, information visibility, and means to manage and anticipate change.
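One simple way to make the second argument concrete: if each actor has some probability of being involved in delivering a given service instance, the expected number of actors touched per delivery follows directly, and it grows with both the actor count and those involvement probabilities. The actors and probabilities below are hypothetical, invented only to illustrate the point.

```python
# Hypothetical probability that each actor participates in delivering
# a given service instance to the end consumer.
involvement = {
    "retailer": 1.0,        # always in the delivery path
    "logistics": 0.9,
    "payment_provider": 0.8,
    "support_center": 0.3,  # only involved when issues arise
}

def expected_actors(involvement):
    """Expected number of actors touched by one service delivery."""
    return sum(involvement.values())

print(round(expected_actors(involvement), 6))
```

Adding actors or raising their involvement probabilities increases this expectation, which is one reading of the paper's claim that complexity depends on both.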

418 citations


Journal ArticleDOI
TL;DR: Three interrelated frameworks are presented as a first attempt to define the fundamentals of service systems, which can be applied together to describe, analyze, and study how service systems are created, how they operate, and how they evolve through a combination of planned and unplanned change.
Abstract: Service systems produce all services of significance and scope, yet the concept of a service system is not well articulated in the service literature. This paper presents three interrelated frameworks as a first attempt to define the fundamentals of service systems. These frameworks identify basic building blocks and organize important attributes and change processes that apply across all service systems. Although relevant regardless of whether a service system uses information technology, the frameworks are also potentially useful in visualizing the realities of moving toward automated service architectures. This paper uses two examples, one largely manual and one highly automated, to illustrate the potential usefulness of the three frameworks, which can be applied together to describe, analyze, and study how service systems are created, how they operate, and how they evolve through a combination of planned and unplanned change.

355 citations


Journal ArticleDOI
TL;DR: This paper uses input/output and other data to depict how service industries vary in such areas as products, markets, work organization, and technological characteristics, most being very distinctive from primary industries.
Abstract: The diversity of service activities means that service innovations and innovation processes take various forms. In this paper, we use input/output and other data to depict how service industries vary in such areas as products, markets, work organization, and technological characteristics, most being very distinctive from primary industries (i.e., extractive industries such as agriculture, fisheries, forestry, mining, petroleum, quarrying, and the like) and secondary industries (i.e., manufacturing, construction, and utilities). Innovation survey data indicate that some service organizations behave very much like high-technology manufacturing. This is especially true of technology-based, knowledge-intensive business services (T-KIBS). Distinctive innovation patterns are displayed by KIBS based more on professional knowledge and by large network-based service firms, while many smaller service firms conform to a supplier-driven pattern. Only a small segment of service innovation conforms to the typical manufacturing-based model, in which innovation is largely organized and led by formal research and development (R&D) departments and production engineering. Project management and on-the-job innovation are common ways of organizing service innovation. Innovation policy and management have to be much more than R&D policy and R&D management: This is recognized by some national governments and in some business schools, but the full implications of a service-dominant logic are still rarely found.

322 citations


Journal ArticleDOI
TL;DR: The two-level hierarchical decomposition proposed here is suitable for the availability modeling of blade server systems such as IBM BladeCenter®, a commercial, high-availability multicomponent system comprising up to 14 separate blade servers and contained within a chassis that provides shared subsystems such as power and cooling.
Abstract: The successful development and marketing of commercial high-availability systems requires the ability to evaluate the availability of systems. Specifically, one should be able to demonstrate that projected customer requirements are met, to identify availability bottlenecks, to evaluate and compare different configurations, and to evaluate and compare different designs. For evaluation approaches based on analytic modeling, these systems are often sufficiently complex so that state-space methods are not effective due to the large number of states, whereas combinatorial methods are inadequate for capturing all significant dependencies. The two-level hierarchical decomposition proposed here is suitable for the availability modeling of blade server systems such as IBM BladeCenter®, a commercial, high-availability multicomponent system comprising up to 14 separate blade servers and contained within a chassis that provides shared subsystems such as power and cooling. This approach is based on an availability model that combines a high-level fault tree model with a number of lower-level Markov models. It is used to determine component level contributions to downtime as well as steady-state availability for both standalone and clustered blade servers. Sensitivity of the results to input parameters is examined, extensions to the models are described, and availability bottlenecks and possible solutions are identified.
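The two-level decomposition described above can be sketched in a few lines: at the lower level, each component's steady-state availability comes from a two-state (up/down) Markov model with failure rate λ and repair rate μ, giving A = μ/(λ+μ); at the upper level, a fault tree combines components in series (any failure brings the system down) or in parallel (redundancy). The rates and the structure below are hypothetical illustrations, not the paper's BladeCenter parameters.

```python
def markov_availability(failure_rate, repair_rate):
    """Steady-state availability of a two-state (up/down) Markov model."""
    return repair_rate / (failure_rate + repair_rate)

def series(*avails):
    """OR gate in the fault tree: the system fails if any component fails."""
    a = 1.0
    for x in avails:
        a *= x
    return a

def parallel(*avails):
    """AND gate: the system fails only if all redundant components fail."""
    u = 1.0
    for x in avails:
        u *= (1.0 - x)
    return 1.0 - u

# Hypothetical rates (per hour) for a chassis with redundant power supplies
blade   = markov_availability(1 / 8760, 1 / 4)    # one blade server
psu     = markov_availability(1 / 17520, 1 / 2)   # one power supply
cooling = markov_availability(1 / 26280, 1 / 2)

system = series(blade, parallel(psu, psu), cooling)
print(round(system, 6))
```

The paper's models are far richer (they capture dependencies the fault tree alone cannot), but this shows the shape of the hierarchy: Markov models feed availabilities upward into a combinatorial top level.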

101 citations


Journal ArticleDOI
TL;DR: This analysis identified gaps, overlaps, and opportunities that shaped the design of the curriculum and in particular a new survey course that serves as the cornerstone of service science education at UC Berkeley.
Abstract: This paper relates our experiences at the University of California, Berkeley (UC Berkeley), designing a service science discipline. We wanted to design a discipline of service science in a principled and theoretically motivated way. We began our work by asking, "What questions would a service science have to answer?" and from that we developed a new framework for understanding service science. This framework can be visualized as a matrix whose rows are stages in a service life cycle and whose columns are disciplines that can provide answers to the questions that span the life cycle. This matrix systematically organizes the issues and challenges of service science and enables us to compare our model of a service science discipline with other definitions and curricula. This analysis identified gaps, overlaps, and opportunities that shaped the design of our curriculum and in particular a new survey course that serves as the cornerstone of service science education at UC Berkeley.

97 citations


Journal ArticleDOI
TL;DR: A variant of RCU that allows preemption of read-side critical sections and thus is better suited for real-time applications is presented.
Abstract: Read-copy update (RCU) is a synchronization mechanism in the Linux™ kernel that provides significant improvements in multiprocessor scalability by eliminating the writer-delay problem of readers-writer locking. RCU implementations to date, however, have had the side effect of expanding non-preemptible regions of code, thereby degrading real-time response. We present here a variant of RCU that allows preemption of read-side critical sections and thus is better suited for real-time applications. We summarize priority-inversion issues with locking, present an overview of the RCU mechanism, discuss our counter-based adaptation of RCU for real-time use, describe an additional adaptation of RCU that permits general blocking in read-side critical sections, and present performance results. We also discuss an approach for replacing the readers-writer synchronization with RCU in existing implementations.
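The core read-copy-update idea can be illustrated outside the kernel. In this Python sketch (an illustration only, not the Linux implementation, which relies on per-CPU grace-period machinery), readers dereference the current version with no locking, while a writer copies the structure, modifies the copy, and atomically publishes it; Python's atomic reference rebinding and garbage collection stand in for pointer publication and deferred reclamation.

```python
import threading

class RcuCell:
    """Illustrative read-copy-update cell (not the Linux kernel mechanism).

    Readers call read() with no locking; writers serialize among
    themselves, copy the current value, modify the copy, and publish it.
    """
    def __init__(self, value):
        self._value = value              # published version; rebinding is atomic
        self._writer_lock = threading.Lock()

    def read(self):
        # Read-side critical section: just dereference the current version.
        return self._value

    def update(self, mutate):
        with self._writer_lock:          # writers never block readers
            copy = dict(self._value)     # read-copy ...
            mutate(copy)                 # ... update ...
            self._value = copy           # ... then atomically publish
            # Real RCU would wait for a grace period before reclaiming the
            # old version; Python's garbage collector handles that here.

cell = RcuCell({"x": 1})
snapshot = cell.read()                   # an in-progress reader
cell.update(lambda d: d.__setitem__("x", 2))
print(snapshot["x"], cell.read()["x"])   # prints: 1 2
```

Note how the pre-existing reader keeps a consistent view of the old version while new readers see the update, which is exactly the property that lets RCU eliminate reader-side delays.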

92 citations


Journal ArticleDOI
C. H. Tian, Bonnie K. Ray, Juhnyoung Lee, R. Cao, W. Ding
TL;DR: This paper presents a framework for the modeling and analysis of business model designs involving a network of interconnected business entities and introduces a role-based paradigm for characterizing ecosystem entities to allow for the evolution of the ecosystem and duplicated functionality for entities.
Abstract: This paper presents a framework for the modeling and analysis of business model designs involving a network of interconnected business entities. The framework includes an ecosystem-modeling component, a simulation component, and a service-analysis component, and integrates methods from value network modeling, game theory analysis, and multiagent systems. A role-based paradigm is introduced for characterizing ecosystem entities in order to easily allow for the evolution of the ecosystem and duplicated functionality for entities. We show how the framework can be used to provide insight into value distribution among the entities and evaluation of business model performance under different scenarios. The methods are illustrated using a case study of a retail business-to-business service ecosystem.

81 citations


Journal ArticleDOI
G. Sharon, Opher Etzion
TL;DR: A unified model serving as a metamodel to existing approaches to standardization in the areas of configuring and expressing the event-processing directives in event-driven systems is presented.
Abstract: This paper presents a conceptual model of an event-processing network for expressing the event-based interactions and event-processing specifications among components. The model is based on event-driven architecture, a pattern promoting the production, detection, consumption, and reaction to events. The motivation is the lack of standardization in the areas of configuring and expressing the event-processing directives in event-driven systems. Some existing approaches are through Structured Query Language, script languages, and rule languages, and are executed by standalone software, messaging systems, or datastream management systems. This paper provides a step toward standardization through a conceptual model, making it possible to express event-processing intentions independent of the implementation models and executions. It is a unified model serving as a metamodel to these existing approaches.
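A minimal publish/subscribe sketch can make the event-processing-network concepts concrete: a producer emits events, a processing agent detects a pattern and derives a new event, and a consumer reacts to it. The class and event names below are illustrative, not the paper's notation.

```python
class EventProcessingNetwork:
    """Toy event-processing network: event types act as channels
    connecting producers, processing agents, and consumers."""
    def __init__(self):
        self._subscribers = {}   # event type -> list of handlers

    def subscribe(self, event_type, handler):
        self._subscribers.setdefault(event_type, []).append(handler)

    def emit(self, event_type, payload):
        for handler in self._subscribers.get(event_type, []):
            handler(payload)

epn = EventProcessingNetwork()
alerts = []
# Processing agent: detects a pattern and derives a new event.
epn.subscribe("trade",
              lambda e: epn.emit("large_trade", e) if e["amount"] > 1000 else None)
# Consumer: reacts to the derived event.
epn.subscribe("large_trade", alerts.append)
epn.emit("trade", {"amount": 5000})   # triggers a derived large_trade event
epn.emit("trade", {"amount": 10})     # filtered out by the agent
print(len(alerts))                    # prints: 1
```

The paper's point is that such interactions should be expressible in one conceptual model regardless of whether the execution engine is an SQL-based stream processor, a rule engine, or a messaging system.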

74 citations


Journal ArticleDOI
TL;DR: A framework is described that can be used to scope and identify what is required for effective SOA governance, and how to use these four approaches to make shared services, reuse, and flexibility a reality.
Abstract: Most organizations understand the need to address service-oriented architecture (SOA) governance during SOA adoption. An abundance of information is available defining SOA governance: what it is and what it is not, why it is important, and why organizational change must be addressed. Increasingly, business and information technology (IT) stakeholders, both executive and technical, acknowledge that SOA governance is essential for realizing the benefits of SOA adoption: building more-flexible IT architectures, improving the fusion between business and IT models, and making business processes more flexible and reusable. However, what is not clear is how an organization gets started. What works and what does not work? More importantly, what is required in SOA governance for organizations to see sustained and realized benefits? This paper describes a framework, the SOA governance model, that can be used to scope and identify what is required for effective SOA governance. Based on client experiences, we describe four approaches to getting started with SOA governance, and we describe how to use these four approaches to make shared services (services used by two or more consumers), reuse, and flexibility a reality. We also discuss lessons learned in using these four approaches.

Journal ArticleDOI
TL;DR: Extensibility, traceability, variation-oriented design, and automatic generation of technical documentation and code artifacts are shown to be some of the properties of the SOMA-ME tool.
Abstract: The service-oriented modeling and architecture modeling environment (SOMA-ME) is first a framework for the model-driven design of service-oriented architecture (SOA) solutions using the service-oriented modeling and architecture (SOMA) method. In SOMA-ME, Unified Modeling Language (UML™) profiles extend the UML 2.0 metamodel to domain-specific concepts. SOMA-ME is also a tool that extends the IBM Rational® Software Architect product to provide a development environment and automation features for designing SOA solutions in a systematic and model-driven fashion. Extensibility, traceability, variation-oriented design, and automatic generation of technical documentation and code artifacts are shown to be some of the properties of the SOMA-ME tool.

Journal ArticleDOI
S. Loveland, Eli M. Dow, Frank R. LeFevre, Duane K. Beyer, P. F. Chan
TL;DR: Special attention is paid to applying HA configurations to virtualized environments, stretching virtual environments across physical machine boundaries, resource-sharing approaches, field experiences, and avoiding potential hazards.
Abstract: Leveraging redundant resources is a common means of addressing availability requirements, but it often implies redundant costs as well. At the same time, virtualization technologies promise cost reduction through resource consolidation. Virtualization and high-availability (HA) technologies can be combined to optimize availability while minimizing costs, but merging them properly introduces new challenges. This paper looks at how virtualization technologies and techniques can augment and amplify traditional HA approaches while avoiding potential pitfalls. Special attention is paid to applying HA configurations (such as active/active and active/passive) to virtualized environments, stretching virtual environments across physical machine boundaries, resource-sharing approaches, field experiences, and avoiding potential hazards.

Journal ArticleDOI
J. K. Strosnider, Prabir Nandi, Santhosh Kumaran, S. Ghosh, Ali Arsanjani
TL;DR: The business entity life cycle analysis (BELA) technique for MDBT-based SOA solution realization and its integration into service-oriented modeling and architecture (SOMA), the end-to-end method from IBM for SOA application and solution development is presented.
Abstract: The current approach to the design, maintenance, and governance of service-oriented architecture (SOA) solutions has focused primarily on flow-driven assembly and orchestration of reusable service components. The practical application of this approach in creating industry solutions has been limited, because flow-driven assembly and orchestration models are too rigid and static to accommodate complex, real-world business processes. Furthermore, the approach assumes a rich, easily configured library of reusable service components when in fact the development, maintenance, and governance of these libraries is difficult. An alternative approach pioneered by the IBM Research Division, model-driven business transformation (MDBT), uses a model-driven software synthesis technology to automatically generate production-quality business service components from high-level business process models. In this paper, we present the business entity life cycle analysis (BELA) technique for MDBT-based SOA solution realization and its integration into service-oriented modeling and architecture (SOMA), the end-to-end method from IBM for SOA application and solution development. BELA shifts the process-modeling paradigm from one that is centered on activities to one that is centered on entities. BELA teams process subject-matter experts with IT and data architects to identify and specify business entities and decompose business processes. Supporting synthesis tools then automatically generate the interacting business entity service components and their associated data stores and service interface definitions. We use a large-scale project as an example demonstrating the benefits of this innovation, which include an estimated 40 percent project cost reduction and an estimated 20 percent reduction in cycle time when compared with conventional SOA approaches.

Journal ArticleDOI
TL;DR: A descriptive structure for the analysis of this complexity which combines graph theory and network flows with economic tools is offered and can be used to analyze service systems in terms of the value they deliver, how they deliver it, and how value can be discovered and increased.
Abstract: The economic structure of service systems has steadily increased in complexity in recent years. This is due not only to specialization in direct material production and services offered, but also in the ownership and management of resources, the role of intangible assets such as process knowledge, and the context in which goods and services are consumed. This increase in complexity represents both a challenge and an opportunity in a service-oriented economy. In this paper, we offer a descriptive structure for the analysis of this complexity which combines graph theory and network flows with economic tools. Our analysis is based on publicly observable information and can be used to analyze service systems in terms of the value they deliver, how they deliver it, and how value can be discovered and increased. We show how this analysis can be applied (in the example of a car manufacturer and its service system for suppliers and dealerships) to improve customer satisfaction and provide options and analysis models for outsourcing decision makers.
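As a rough illustration of combining graph structure with economic figures, the sketch below represents a value network as value-weighted directed edges and computes each actor's net value position. The actors and amounts are invented for illustration and are not taken from the paper's car-manufacturer case.

```python
# Edges: (provider, recipient, value transferred); all figures hypothetical.
edges = [
    ("supplier", "manufacturer", 30.0),
    ("manufacturer", "dealer", 55.0),
    ("dealer", "customer", 80.0),
    ("service_partner", "dealer", 10.0),
]

def net_value(edges):
    """Net value position of each actor: value delivered minus value received."""
    balance = {}
    for provider, recipient, value in edges:
        balance[provider] = balance.get(provider, 0.0) + value
        balance[recipient] = balance.get(recipient, 0.0) - value
    return balance

print(net_value(edges))
```

Even this toy version exposes the kind of question the paper pursues: where value concentrates in the network, and which flows an outsourcing decision would reroute.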

Journal ArticleDOI
TL;DR: In this article, the authors discuss how newly emerging service systems require such a 3-way integrated analysis, and they deliberately select some non-standard services, as many business services such as supply chains have been studied extensively.
Abstract: Service industries comprise about 75% of the economy of developed nations. To design and operate service systems for today and tomorrow, we need to educate a new type of engineer who focuses not on manufacturing but on services. Such an engineer must be able to integrate three sciences (management, social, and engineering) into her analysis of service systems. Within the context of a new research center at MIT, the Center for Engineering Systems Fundamentals (CESF), we discuss how newly emerging service systems require such a 3-way integrated analysis. We deliberately select some non-standard services, as many business services, such as supply chains, have been studied extensively.

Journal ArticleDOI
Z. J. Li, H. F. Tan, H. H. Liu, J. Zhu, Naomi M. Mitsumori
TL;DR: A gray-box testing approach is proposed, that is, an approach that involves having access to internal workings, data structures, and algorithms when designing the test cases but that tests at the user level as a black box, by applying inputs and observing outputs.
Abstract: Challenges are emerging in testing service-oriented architecture (SOA) systems. Current testing is not sufficient to deal with the new requirements arising from several SOA features such as composition, loose coupling, and code without a graphical user interface. The most critical architecture information of an SOA solution is actually how services are composed and interact with each other. This paper proposes a gray-box testing approach, that is, an approach that involves having access to internal workings, data structures, and algorithms when designing the test cases but tests at the user level as a black box, that is, by applying inputs and observing outputs. This approach leverages business processes and the underlying SOA layered architecture to better test SOA solutions. A commonly used language to model business processes is BPEL (Business Process Execution Language), which is the focus of the approach described in this paper. Among the layered artifacts, the business process view represents the global behavior of the SOA system and thus is a good candidate as supplemental architectural information to the functional requirement or specification in test-case design and generation. This approach has three key enablers: test-path exploration, trace analysis, and regression test selection. BPELTester is an innovative tool that implements this method. It has been piloted in several projects and the initial pilot results are presented in this paper.
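Of the three enablers, test-path exploration is the easiest to illustrate. The sketch below enumerates all start-to-end paths through a toy BPEL-style process graph, each path being a candidate test-case skeleton; the process and node names are hypothetical, and this is not the BPELTester implementation.

```python
def explore_paths(graph, start, end, path=None):
    """Enumerate all start-to-end execution paths in a process graph."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:            # guard against revisiting (cycles)
            paths.extend(explore_paths(graph, nxt, end, path))
    return paths

# Toy BPEL-style process: receive, a decision branch, then reply.
process = {
    "receive": ["check_credit"],
    "check_credit": ["approve", "reject"],   # <if> branch
    "approve": ["reply"],
    "reject": ["reply"],
}
for p in explore_paths(process, "receive", "reply"):
    print(" -> ".join(p))
```

Each enumerated path exercises one combination of branch outcomes, which is the architectural information a pure black-box tester would not see.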

Journal ArticleDOI
TL;DR: This paper explores the underlying causes of the gap between the education received by business school graduates and the skills that they need to succeed in today's service-intense environment and suggests ways in which the emerging field of service science can facilitate the changes in business school curricula that will make them more relevant in meeting the needs of today's businesses and organizations.
Abstract: For a service delivery system to produce optimal solutions to service-related business problems, it must be based on an approach that involves many of the traditional functional areas in an organization. Unfortunately, most business school curricula mirror the older traditional organizational structure that dominated businesses throughout most of the twentieth century. This structure typically consisted of vertically organized functions (or silos), such as production, marketing, and finance, with each silo operating largely independently of the others. Similarly, business schools today are usually organized by functional departments, such as marketing, finance, accounting, and operations management, with little interaction among them. Within this traditional silo-structured environment, it is very difficult to properly develop a curriculum, or even a course, in service management. Consequently, a significant gap exists between the education received by business school graduates and the skills that they need to succeed in today's service-intense environment. This paper explores the underlying causes of this gap and suggests ways in which the emerging field of service science can facilitate the changes in business school curricula that will make them more relevant in meeting the needs of today's businesses and organizations.

Journal ArticleDOI
TL;DR: In this article, the authors provide an overview of the state-of-the-art architectures for continuous availability, briefly covering such traditional concepts as high-availability clustering on distributed platforms and on the mainframe.
Abstract: We first provide an overview of the state-of-the-art architectures for continuous availability, briefly covering such traditional concepts as high-availability (HA) clustering on distributed platforms and on the mainframe. We explain how HA can be achieved in environments based on Sun Microsystems J2EE™, which differ from classical clustering approaches, and we discuss how disaster recovery (DR) has become an extension of HA. The paper then presents aspects of service management, including the use and orchestration of process-based (ITIL®) systems management tasks within DR scenarios, where the key challenge is to ensure the right level of redundancy in the integration and service-oriented management of heterogeneous information technology landscapes.

Journal ArticleDOI
TL;DR: This paper provides directions for designing and executing discrete choice studies for services and discusses several examples for a number of industries including health care, financial services, retail, hospitality, and online services.
Abstract: This paper presents an overview of the science and art of discrete choice modeling for service sector applications. With the ongoing momentum of service science, management, and engineering, the discrete choice modeling approach provides a sophisticated tool kit for assessing the needs and preferences of service customers. We provide directions for designing and executing discrete choice studies for services and discuss several examples for a number of industries including health care, financial services, retail, hospitality, and online services. We conclude with a discussion of the many managerial implications of the discrete choice approach.
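The workhorse of discrete choice modeling is the multinomial logit model, in which the probability of choosing alternative i is exp(V_i) divided by the sum of exp(V_j) over all alternatives. The service bundles and utility values below are invented for illustration.

```python
import math

def choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities.values())                      # for numerical stability
    expv = {k: math.exp(v - m) for k, v in utilities.items()}
    total = sum(expv.values())
    return {k: v / total for k, v in expv.items()}

# Hypothetical systematic utilities for three hotel service bundles
utilities = {"basic": 0.0, "breakfast": 0.8, "breakfast_wifi": 1.1}
probs = choice_probabilities(utilities)
print({k: round(p, 3) for k, p in probs.items()})
```

In a real study, the utilities would be linear functions of service attributes with coefficients estimated from respondents' choices among experimentally designed alternatives, which is where the design guidance in the paper comes in.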

Journal ArticleDOI
R. H. High, G. Krishnan, M. Sanchez
TL;DR: This paper examines how coherency can be created and maintained in loosely coupled applications using service-oriented architecture, and examines various techniques and design approaches, such as service management, the use of service buses, the role of industry models and semantic ontologies, and governance, to achieve and maintain co herency of composite applications using SOA.
Abstract: The primary objective of service-oriented architecture (SOA) is to use information technology to address the key goals of business today: innovation, agility, and market value. Agility in SOA is achieved by use of the principles of encapsulation, modularity, and loose coupling, which facilitates a cleaner separation of concerns. While loose coupling enables customers to rapidly reuse services in new applications, strong coherency must be maintained to achieve the primary business objectives of the application. When applications are composed of loosely coupled services that are independent (owned by different parts of the organization, based on disparate technology assumptions, and evolving on independent schedules and with diverse priorities) the coherency of the composite application can be undermined. In this paper, we examine how coherency can be created and maintained in loosely coupled applications. We examine, in this context, various techniques and design approaches, such as service management, the use of service buses, the role of industry models and semantic ontologies, and governance, to achieve and maintain coherency of composite applications using SOA.

Journal ArticleDOI
TL;DR: An operations support system that is compliant with NGOSS and implements a service-oriented architecture that relies on an enhanced enterprise service bus (ESB) that makes it possible to carry out changes to business rules at runtime, thus avoiding costly shutdowns to the billing application.
Abstract: The complexity that telecommunications companies are faced with in their business processes and their information technology (IT) systems is especially apparent in their billing systems. These systems are required not only to handle large volumes of data and frequent changes in business rules, but also to ensure that the billing be done accurately and on time. This paper describes a solution that was developed to address this problem. It consists of an operations support system that is compliant with NGOSS (Next Generation Operations System and Software) and it implements a service-oriented architecture (SOA) that relies on an enhanced enterprise service bus (ESB). This enhanced ESB, referred to here as an adaptable service bus (ASB), makes it possible to carry out changes to business rules at runtime, thus avoiding costly shutdowns to the billing application. An implementation of this system has been operational in ChungHwa Telecom Company, Taiwan, since January 2008 and provides complete support to its billing application. As a result, the billing process cycle time has been reduced from 10–16 days to 3–4 days, which cleared the way for further growth of the business.
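The key property of the adaptable service bus, rule changes taking effect at runtime, can be sketched with a minimal lookup-at-invocation registry; this is an illustration of the idea only, not the ASB itself, and the rule names and tariffs are hypothetical.

```python
class RuleRegistry:
    """Minimal sketch of runtime-replaceable business rules.

    Rules are looked up at invocation time, so replacing one takes
    effect immediately, without shutting down the calling application.
    """
    def __init__(self):
        self._rules = {}

    def register(self, name, fn):
        self._rules[name] = fn        # hot-swap: overwrites any existing rule

    def apply(self, name, *args):
        return self._rules[name](*args)

bus = RuleRegistry()
bus.register("rate_call", lambda minutes: minutes * 0.10)
before = bus.apply("rate_call", 100)                        # old tariff
bus.register("rate_call", lambda minutes: minutes * 0.08)   # rule change at runtime
after = bus.apply("rate_call", 100)                         # new tariff, no shutdown
print(round(before, 2), round(after, 2))
```

The indirection is the point: because callers bind to a rule name rather than an implementation, the rule behind the name can change between invocations.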

Journal ArticleDOI
TL;DR: An approach to business processes and services which views work practices as recurrent patterns of communication called genres is described, which enables us to determine if business demands have changed, something that is difficult to achieve using conventional service engineering approaches.
Abstract: In this paper, we describe an approach to business processes and services which views work practices as recurrent patterns of communication called genres. Although defining work practices in this way is unorthodox, it provides two major advantages. First, the communication resources employed by the parties engaging in a service transaction can be clearly described, understood, and communicated. Business processes and services can be differentiated on the basis of the structural and functional arrangement of their constituent genres. This provides a view of a business process or service that is technology-independent. Second, using this approach means that work practices are defined contextually—an important consideration when trying to understand how business processes and services will influence organizations. Because genres are represented using directed graphs, prototypes can be developed to assist during the analysis of existing services and the design of new ones. Structural and functional change of genres can be used to reveal how a specific service is evolving within an organization. This enables us to determine if business demands have changed, something that is difficult to achieve using conventional service engineering approaches.

Journal ArticleDOI
TL;DR: This paper discusses high availability and disaster recovery solutions and their differences and presents the concepts and technical details of various solutions that combine them for highly critical environments.
Abstract: This paper discusses high availability and disaster recovery solutions and their differences and presents the concepts and technical details of various solutions that combine them for highly critical environments. It discusses the business and regulatory issues that are driving the requirements for these solutions and presents various data center topologies that customers are choosing when implementing 3-site solutions.

Journal ArticleDOI
TL;DR: In this paper, the authors examine managed service in the information and communication technology (ICT) sector, characterized by the polarization between an infrastructure service that is growing in scale and increasingly becoming a commodity and customized or even one-of-a-kind projects.
Abstract: In this paper we examine managed service in the information and communication technology (ICT) sector, characterized by the polarization between an infrastructure service that is growing in scale and increasingly becoming a commodity and customized or even one-of-a-kind projects. We refer to the approaches taken by three highly innovative advanced service companies, IBM, Ericsson, and Cable & Wireless, to package and deliver ICT service on a more industrialized basis. We identify the six-stage process that describes their journeys to date. We also describe some of the challenges they faced on that journey as well as those currently facing them as they move to a higher degree of industrialization. To address these challenges, we propose a model with three axes: offering development, service delivery, and go-to-market. The model demonstrates how the increasing industrialization of managed service requires an approach integrating all three of these dimensions. We also show that strong governance is required to address the impacts of technological evolution, marketplace dynamics, and corporate culture.

Journal ArticleDOI
Rajesh Radhakrishnan1, K. Mark1, B. Powell1
TL;DR: This paper examines HASM and discusses the process flow for designing and implementing HA technologies and the use of the Six Sigma method and analytical tools applied to key service management processes and services.
Abstract: High-availability service management (HASM) is defined as information technology (IT) service management that is designed to meet the business demand for availability of critical IT and IT-enabled business services. HASM requires the use of the Six Sigma method and analytical tools applied to key service management processes and services; event and incident monitoring and management design; high-end and high-quality infrastructure and application configuration; high-availability (HA) architecture and design; and special solutions that implement HA patterns and associated technologies. In this paper, we examine HASM and discuss the process flow for designing and implementing HA technologies.

Journal ArticleDOI
R. Cocchiara1, H. Davis1, D. Kinnaird1
TL;DR: A methodology for ensuring business resilience by assessing the risks to the mission-critical business systems and then designing a data center topology to mitigate these risks is proposed, based on the IBM Business Resilience Framework.
Abstract: In this paper we examine the ways in which data center topology choices affect mission-critical business system availability and we propose a methodology for ensuring business resilience by assessing the risks to the mission-critical business systems and then designing a data center topology to mitigate these risks. This methodology is based on the IBM Business Resilience Framework, a framework that accounts for a wide range of concerns, from data center facilities to the business strategy and vision.

Journal ArticleDOI
D. Hart1, J. Stultz1, T. Ts'o1
TL;DR: This paper describes how IBM developers helped to direct, implement, and test the real-time Linux kernel, bringing it from software patches to a finished product in nine months.
Abstract: The increasing market demand for systems characterized by low-latency, deterministic behavior and the emphasis on the use of commodity hardware and software have led to a new breed of real-time operating systems (OSs), known as enterprise real-time OSs. In response to the demand for accelerated access to such features in a Linux™ kernel, the IBM Linux and Java™ Technology Centers collaborated to provide the first commercially available enterprise real-time Linux kernel with real-time Java support. Extending the PREEMPT_RT patch from Ingo Molnar of Red Hat, Inc., the kernel contains additional features that were required to meet the demands of enterprise real-time OS customers. This paper describes how IBM developers helped to direct, implement, and test the real-time Linux kernel, bringing it from software patches to a finished product in nine months.
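On a PREEMPT_RT-enabled kernel such as the one the abstract describes, an application requests deterministic scheduling through the standard POSIX fixed-priority policies. A minimal sketch, assuming a Linux host (switching to SCHED_FIFO normally requires CAP_SYS_NICE, so the attempt is guarded):

```python
import os

# Query the priority range of the POSIX fixed-priority FIFO policy;
# this needs no special privileges.
lo = os.sched_get_priority_min(os.SCHED_FIFO)
hi = os.sched_get_priority_max(os.SCHED_FIFO)
print(f"SCHED_FIFO priority range: {lo}..{hi}")

# Attempt to move the current process (pid 0 = self) onto the
# real-time FIFO policy at the lowest real-time priority.
try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(lo))
    print("running under SCHED_FIFO at priority", lo)
except PermissionError:
    # Without CAP_SYS_NICE the kernel refuses the request and the
    # process stays under the default SCHED_OTHER policy.
    print("insufficient privileges; still under SCHED_OTHER")
```

The kernel work described in the paper is about making the latencies observed by such SCHED_FIFO tasks bounded and deterministic; the API for requesting real-time priority is the same on a stock kernel.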

Journal ArticleDOI
TL;DR: The IBM TotalStorage™ Productivity Center for Replication (TPC-R) is presented, a tool designed to help customers implement cost-effective data replication solutions for continuous availability and disaster recovery and a focus on the various trade-offs customers must consider when choosing between different storage replication technologies.
Abstract: Designing and implementing a business resilience (or disaster recovery) plan is a complex procedure for customers, and the impact of implementing an incorrect or incomplete plan can be significant. For some customers, being able to recover their data center functionality in a short period of time may be of the utmost importance; for others, recovering in a short period of time may be worthless if the data with which their database is restored is hours or days old. Also of importance is the impact to business-critical applications when copies of data are being made. This paper presents the IBM TotalStorage™ Productivity Center for Replication (TPC-R), a tool designed to help customers implement cost-effective data replication solutions for continuous availability and disaster recovery. We give an overview of TPC-R, describe recent enhancements to TPC-R that are available on all supported platforms (as well as those that are unique to the z/OS™ platform) and discuss the ways in which customers can exploit TPC-R to implement business resilience solutions, with a focus on the various trade-offs customers must consider when choosing between different storage replication technologies.

Journal ArticleDOI
TL;DR: The use of one of these techniques, reliability block diagrams, in evaluating the availability of information technology systems is demonstrated through a case study of an IT system supported by a three-tier Web-server configuration.
Abstract: We present a brief introduction to three reliability engineering techniques: failure mode, effects, and criticality analysis; reliability block diagrams; and fault tree analysis. We demonstrate the use of one of these techniques, reliability block diagrams, in evaluating the availability of information technology (IT) systems through a case study involving an IT system supported by a three-tier Web-server configuration.
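The reliability-block-diagram arithmetic applied in the case study can be sketched directly: blocks in series multiply their availabilities, while redundant (parallel) blocks multiply their unavailabilities. The three-tier availability figures below are invented for illustration and are not the paper's numbers:

```python
def series(*avail):
    """Availability of blocks in series: all must be up."""
    a = 1.0
    for x in avail:
        a *= x
    return a

def parallel(*avail):
    """Availability of redundant blocks: the system is down
    only when every block is down."""
    u = 1.0
    for x in avail:
        u *= (1.0 - x)
    return 1.0 - u

# Hypothetical three-tier Web-server configuration:
web = parallel(0.99, 0.99)      # two redundant Web servers
app = parallel(0.995, 0.995)    # two redundant application servers
db = 0.999                      # single database server

system = series(web, app, db)
print(f"system availability: {system:.6f}")
```

With these made-up figures the redundant tiers contribute almost nothing to downtime, and the single database server dominates the result, which is the kind of insight a reliability block diagram is meant to surface.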