
Showing papers on "Implementation" published in 2001


Journal ArticleDOI
TL;DR: It was found that management support and resources help to address organizational issues that arise during warehouse implementations; resources, user participation, and highly-skilled project team members increase the likelihood that warehousing projects will finish on-time, on-budget, with the right functionality; and diverse, unstandardized source systems and poor development technology will increase the technical issues that project teams must overcome.
Abstract: The IT implementation literature suggests that various implementation factors play critical roles in the success of an information system; however, there is little empirical research about the implementation of data warehousing projects. Data warehousing has unique characteristics that may impact the importance of factors that apply to it. In this study, a cross-sectional survey investigated a model of data warehousing success. Data warehousing managers and data suppliers from 111 organizations completed paired mail questionnaires on implementation factors and the success of the warehouse. The results from a Partial Least Squares analysis of the data identified significant relationships between the system quality and data quality factors and perceived net benefits. It was found that management support and resources help to address organizational issues that arise during warehouse implementations; resources, user participation, and highly-skilled project team members increase the likelihood that warehousing projects will finish on-time, on-budget, with the right functionality; and diverse, unstandardized source systems and poor development technology will increase the technical issues that project teams must overcome. The implementation's success with organizational and project issues, in turn, influences the system quality of the data warehouse; however, data quality is best explained by factors not included in the research model.

1,579 citations


Journal ArticleDOI
TL;DR: Through a comprehensive review of the literature, 11 factors were found to be critical to ERP implementation success, including ERP teamwork and composition, change management program and culture, top management support, business plan and vision, and appropriate business and IT legacy systems.
Abstract: Enterprise resource planning (ERP) systems have emerged as the core of successful information management and the enterprise backbone of organizations. The difficulties of ERP implementations have been widely cited in the literature but research on the critical factors for initial and ongoing ERP implementation success is rare and fragmented. Through a comprehensive review of the literature, 11 factors were found to be critical to ERP implementation success – ERP teamwork and composition; change management program and culture; top management support; business plan and vision; business process reengineering with minimum customization; project management; monitoring and evaluation of performance; effective communication; software development, testing and troubleshooting; project champion; appropriate business and IT legacy systems. The classification of these factors into the respective phases (chartering, project, shakedown, onward and upward) in Markus and Tanis’ ERP life cycle model is presented and the importance of each factor is discussed.

1,433 citations


Journal ArticleDOI
01 Dec 2001
TL;DR: This paper discusses three myths that often hamper implementation processes in patient care information systems (PCIS) and suggests a top down framework for the implementation is crucial to turn user-input into a coherent steering force, creating a solid basis for organizational transformation.
Abstract: Successfully implementing patient care information systems (PCIS) in health care organizations appears to be a difficult task. After critically examining the very notions of 'success' and 'failure', and after discussing the problematic nature of lists of 'critical success- or failure factors', this paper discusses three myths that often hamper implementation processes. Alternative insights are presented, and illustrated with concrete examples. First of all, the implementation of a PCIS is a process of mutual transformation; the organization and the technology transform each other during the implementation process. When this is foreseen, PCIS implementations can be intended strategically to help transform the organization. Second, such a process can only get off the ground when properly supported by both central management and future users. A top down framework for the implementation is crucial to turn user-input into a coherent steering force, creating a solid basis for organizational transformation. Finally, the management of IS implementation processes is a careful balancing act between initiating organizational change, and drawing upon IS as a change agent, without attempting to pre-specify and control this process. Accepting, and even drawing upon, this inevitable uncertainty might be the hardest lesson to learn.

762 citations


01 Feb 2001
TL;DR: This document presents the object-oriented information model for representing policy information developed jointly in the IETF Policy Framework WG and as extensions to the Common Information Model activity in the Distributed Management Task Force (DMTF).
Abstract: This document presents the object-oriented information model for representing policy information developed jointly in the IETF Policy Framework WG and as extensions to the Common Information Model (CIM) activity in the Distributed Management Task Force (DMTF). This model defines two hierarchies of object classes: structural classes representing policy information and control of policies, and association classes that indicate how instances of the structural classes are related to each other. Subsequent documents will define mappings of this information model to various concrete implementations, for example, to a directory that uses LDAPv3 as its access protocol.
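The two-hierarchy design described above (structural classes carrying policy content, association classes recording how structural instances relate) can be illustrated with a minimal sketch. The sketch below uses Python rather than an LDAP mapping, and reduces the model to a policy rule, a condition, and one association class; attribute details are simplified.

```python
# Minimal sketch of the two class hierarchies: structural classes hold
# policy information; association classes link structural instances.

class PolicyCondition:
    """Structural class: a condition a policy rule tests."""
    def __init__(self, expr):
        self.expr = expr  # e.g. "dstPort == 80"

class PolicyAction:
    """Structural class: an action a policy rule performs."""
    def __init__(self, op):
        self.op = op      # e.g. "set DSCP 46"

class PolicyRule:
    """Structural class: groups conditions and actions."""
    def __init__(self, name):
        self.name = name

class PolicyConditionInPolicyRule:
    """Association class: relates a condition to a rule, with grouping."""
    def __init__(self, rule, condition, group_number=1):
        self.rule = rule
        self.condition = condition
        self.group_number = group_number

rule = PolicyRule("web-traffic")
cond = PolicyCondition("dstPort == 80")
link = PolicyConditionInPolicyRule(rule, cond)
assert link.rule is rule and link.condition is cond
```

The point of the split is that the same condition instance can be reused by several rules; each reuse is a separate association instance rather than a copy of the condition.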

450 citations


Journal ArticleDOI
TL;DR: A knowledge based engineering system (KBES) to extend the current capabilities of automotive body-in-white (BIW) engineers to respond dynamically to changes within a rapid timeframe and to assess the effects of change with respect to the constraints imposed upon them by other product cycle factors.

238 citations


Journal ArticleDOI
TL;DR: A maturity model for ERP systems that identifies three stages is presented and it is shown that the organizations follow an S-shaped curve, and that most companies are in the middle stage.
Abstract: Enterprise Resource Planning (ERP) systems dominate the information technology landscape of many companies. Organizations are at different stages in the implementation process ranging from the initial analysis of implementation options, through completed standard implementations and to the sophisticated exploitation of ERP systems using advanced knowledge management, customer relationship management and supply chain management systems. The authors present a maturity model for ERP systems that identifies three stages and this is illustrated using case data selected from the study which is based on 24 organizations in the US and Europe. In Stage 1, organizations are managing legacy systems and starting the ERP project. In Stage 2, implementation is complete and the functionality of the ERP system is being exploited across the organization. In Stage 3, organizations have normalised the ERP system into the organization and are engaged in the process of obtaining strategic value from the system by using additional systems such as customer relationship management, knowledge management and supply chain planning. It is shown that the organizations follow an S-shaped curve, and that most companies are in the middle stage. An analysis of the implications for organizations at each stage of the maturity model is presented which will be of value to practising managers. The implications are categorised as impacts on cost, entropy, complexity, flexibility and strategic competitiveness.

184 citations


Patent
01 Jun 2001
TL;DR: In this paper, the authors present a method and apparatus for developing enterprise applications using design patterns, which can be used to develop and implement applications in a three-tier or multi-tier computer architecture.
Abstract: The present invention provides a method and apparatus for developing enterprise applications using design patterns. Over time, different types of enterprise applications have been developed and implemented by various software developers for different purposes. The present invention determines the purpose of the software that is needed by the developer and obtains a design pattern to solve the problem that is in accord with the best practices and patterns derived from these implementations. In turn, a developer can rely on the present invention as a tool to develop and implement applications in a three-tier or multi-tier computer architecture.
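The core mechanism the patent describes, mapping a developer's stated purpose to a catalog of proven patterns, can be sketched as a simple lookup. The catalog entries, function name, and purpose strings below are invented for illustration; the actual apparatus is not specified at this level of detail in the abstract.

```python
# Hypothetical sketch: a purpose-to-pattern catalog, with standard
# multi-tier enterprise patterns as example entries.

PATTERN_CATALOG = {
    "decouple presentation from business logic": "Model-View-Controller",
    "reduce remote calls between tiers": "Session Facade",
    "hide data-source details from business objects": "Data Access Object",
    "transfer bulk data across tiers": "Transfer Object",
}

def suggest_pattern(purpose):
    """Return the best-practice pattern recorded for a given purpose."""
    return PATTERN_CATALOG.get(purpose.lower(), "no pattern recorded")

print(suggest_pattern("Reduce remote calls between tiers"))  # Session Facade
```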

123 citations


Proceedings Article
07 Jun 2001
TL;DR: By the end of next year, it is expected that CCM providers will implement the complete specification, as well as support value-added enhancements to their implementations, just as operating system and ORB providers have done historically.
Abstract: The Common Object Request Broker Architecture (CORBA) object model is increasingly gaining acceptance as the industry standard, cross-platform, cross-language distributed object computing model. The recent addition of the CORBA Component Model (CCM) integrates a successful component programming model from EJB, while maintaining the interoperability and language-neutrality of CORBA. The CCM programming model is thus suitable for leveraging proven technologies and existing services to develop the next generation of highly scalable distributed applications. However, the CCM specification is large and complex. Therefore, ORB providers have only started implementing the specification recently. As with first-generation CORBA implementations several years ago, it is still hard to evaluate the quality and performance of CCM implementations. Moreover, the interoperability of components and containers from different providers is not well understood yet. By the end of next year, we expect that CCM providers will implement the complete specification, as well as support value-added enhancements to their implementations, just as operating system and ORB providers have done historically. In particular, containers provided by the CCM component model implementation provide quality-of-service (QoS) capabilities for CCM components, and can be extended to provide more services to components, relieving components from implementing these functionalities in an ad hoc way (Wang, 2000b). These container QoS extensions provide services that can monitor and control certain aspects of component behavior that cross-cut different programming layers or require close interaction among components, containers, and operating systems. As CORBA and the CCM evolve, we expect some of these enhancements will be incorporated into the CCM specification.
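The container idea described above, interposing on component invocations so cross-cutting concerns stay out of component code, can be sketched as a simple wrapper. The class names and the latency-monitoring "service" below are illustrative, not the CCM API.

```python
import time

# Sketch of container-mediated QoS: the container wraps a component and
# records per-call latency, so the component itself stays concern-free.

class EchoComponent:
    """A trivial 'component' with one operation."""
    def execute(self, payload):
        return payload.upper()

class MonitoringContainer:
    """Wraps a component; intercepts calls to collect QoS data."""
    def __init__(self, component):
        self._component = component
        self.latencies = []

    def execute(self, payload):
        start = time.perf_counter()
        result = self._component.execute(payload)
        self.latencies.append(time.perf_counter() - start)
        return result

container = MonitoringContainer(EchoComponent())
assert container.execute("ping") == "PING"
assert len(container.latencies) == 1  # one call, one latency sample
```

The same interposition point could host other cross-cutting services (throttling, logging, security checks) without touching the component.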

110 citations


Journal ArticleDOI
TL;DR: An electronic educational system model (EES model) is defined and described to assist the designers of different e-learning settings to plan and implement a specific learning situation, with the focus on the individual requirements and milieu of the learning group.
Abstract: E-learning efforts and experiments currently receive much attention across the globe. The availability of electronic and web-enabling technologies also dramatically influences the way we view the learning strategies of the future [Kramer, B. J. (2000). Forming a federal virtual university through course broker middleware. In Proceedings: LearnTec 2000, Heidelberg, Germany; Hiltz, S. R. (1995). Teaching in a virtual classroom. In Proceedings: International Conference on Computer Assisted Instruction (ICCAI'95), Taiwan, March 1995]. However, due to disappointing experiences in widespread implementation of computers in schools [Foshay, W. R. (1998). Education technology in schools and in business: a personal experience. Education Horizons, 66(4), 154–157], many are already predicting the failure of web technologies for learning [Rogers, A. (2000). The failure and the promise of technology in education. Global SchoolNet Foundation, 27 May 2000 (http://www.gsm.org/teacharticles/promise.html)]. It is indeed likely that e-learning, making use of technological advances such as the Internet, may also be dissatisfying and frustrating unless we design electronic educational models that can avoid potential complications. In this paper, we define and describe an electronic educational system model (EES model). The aim of this model is to assist the designers of different e-learning settings to plan and implement a specific learning situation, with the focus on the individual requirements and milieu of the learning group. The EES model is composed of four layers, each consisting of different objects (components) addressing issues specific to each layer. When constructing a learning situation, the planners, schedulers and facilitators come together with a clear view of their particular learning situation in mind. They then use the EES model to design their course layer by layer, including objects from each layer.
Each object consists of one or more methods/strategies to be implemented in order to achieve the learning objectives of the course. This approach promises to increase the chances of successful, high-quality implementations [Cloete, E. (2000). Quality issues in system engineering affecting virtual distance learning systems. To appear in Proceedings: COMPSAC'2000, Taiwan, October 2000] with as few frustrations and disappointments as possible.

88 citations


01 Jan 2001
TL;DR: This paper analyzes Service-Oriented technologies to identify the key characteristics and patterns of SOP, and demonstrates the value of SOP to developers and end users.
Abstract: A new programming paradigm is forming throughout the software industry. This paradigm is driven by the exploitation of networking technology and the need to be able to create more powerful capabilities more quickly. The diversity in languages, middleware, and platforms has prevented larger constructs from being formed, and the shortage of qualified software engineers only aggravates the problem. The Service-Oriented Programming (SOP) paradigm is being defined throughout the industry, including Sun's Jini™, Openwings™, Microsoft's .NET™, and HP's CoolTown™. Much like the early days of Object-Oriented Programming (OOP), certain characteristics of SOP are covered by some implementations, but no one approach covers all of them. Until the key features of OOP (encapsulation, inheritance, and polymorphism) and a design methodology (OOA/OOD) had been defined, consistency in OOP programming models was not achieved. This paper analyzes Service-Oriented technologies to identify the key characteristics and patterns of SOP, and demonstrates the value of SOP to developers and end users.
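The common thread of the SOP technologies listed above is discovery by interface rather than by concrete class: providers register implementations against a service type, and clients look services up at run time. The sketch below is a minimal, vendor-neutral illustration of that pattern; none of these names come from Jini, Openwings, .NET, or CoolTown.

```python
# Minimal service-oriented lookup: register by interface, discover by
# interface, never name the concrete implementation on the client side.

class Registry:
    def __init__(self):
        self._services = {}

    def register(self, interface, provider):
        self._services.setdefault(interface, []).append(provider)

    def lookup(self, interface):
        providers = self._services.get(interface, [])
        return providers[0] if providers else None

class Printer:  # the service "interface"
    def print_doc(self, text): ...

class LaserPrinter(Printer):  # one concrete provider
    def print_doc(self, text):
        return f"laser: {text}"

registry = Registry()
registry.register(Printer, LaserPrinter())
service = registry.lookup(Printer)      # client knows only Printer
assert service.print_doc("hello") == "laser: hello"
```

Swapping the provider changes nothing on the client side, which is the decoupling SOP aims for.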

87 citations


Journal ArticleDOI
TL;DR: This work describes the use of a reconfigurable processor core based on a RISC architecture as a starting point for application-specific processor design and shows how hardware emulation based on programmable logic can be integrated into the hardware/software codesign flow.
Abstract: Application-specific processors offer an attractive option in the design of embedded systems by providing high performance for a specific application domain. In this work, we describe the use of a reconfigurable processor core based on a RISC architecture as a starting point for application-specific processor design. By using a common base instruction set, development cost can be reduced and design space exploration is focused on the application-specific aspects of performance. An important aspect of deploying any new architecture is verification, which usually requires lengthy software simulation of a design model. We show how hardware emulation based on programmable logic can be integrated into the hardware/software codesign flow. While previously hardware emulation required massive investment in design effort and special purpose emulators, an emulation approach based on high-density field-programmable gate array (FPGA) devices now makes hardware emulation practical and cost effective for embedded processor designs. To reduce development cost and avoid duplication of design effort, FPGA prototypes and ASIC implementations are derived from a common source: We show how to perform targeted optimizations to fully exploit the capabilities of the target technology while maintaining a common source base.

Book ChapterDOI
27 Nov 2001
TL;DR: A methodology is developed resulting in a language SiteLang which allows specification of information services based on the concepts of the story and interaction spaces as well as media objects which can be automatically mapped to implementations.
Abstract: Internet information services are developed everywhere. Such services include content generation and functionality support which has to be modeled in a consistent way. Within our projects we developed a methodology resulting in a language SiteLang which allows specification of information services based on the concepts of the story and interaction spaces as well as media objects. The specification can be automatically mapped to implementations.

Journal ArticleDOI
TL;DR: A new way to teach computer organization and architecture concepts with extensive hands-on hardware design experience very early in computer science curricula, and exposes students to many of the essential issues incurred in the analysis, simulation, design and effective implementation of processors.
Abstract: This paper describes a new way to teach computer organization and architecture concepts with extensive hands-on hardware design experience very early in computer science curricula. While describing the approach, it addresses relevant questions about teaching computer organization, computer architecture and hardware design to students in computer science and related fields. The justification to concomitantly teach two often separately addressed subjects is twofold. First, to provide a better insight into the practical aspects of computer organization and architecture. Second, to allow addressing only highly abstract design levels yet achieving reasonably performing implementations, to make the integrated teaching approach feasible. The approach exposes students to many of the essential issues incurred in the analysis, simulation, design and effective implementation of processors. Although the former separation of such connected disciplines has certainly brought academic benefits in the past, some modern technologies allow capitalizing on their integration. The practical implementation of the teaching approach comprises lecture as well as laboratory courses, starting in the third semester of an undergraduate computer science curriculum. In four editions of the first two courses, most students have obtained successful processor implementations. In some cases, considerably complex applications, such as bubble sort and quick sort procedures, were programmed in assembly and/or machine code and run at the hardware description language simulation level in the designed processors.
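The kind of simple processor students design and then simulate can be conveyed with a toy fetch-decode-execute loop. The accumulator-based ISA below is invented for illustration and is much smaller than anything in the paper's courses.

```python
# Toy accumulator machine: each instruction is an (opcode, operand)
# pair; the loop fetches, decodes, and executes until HALT.

LOAD, ADD, STORE, HALT = range(4)  # invented opcodes

def run(program, memory):
    """Execute a program against a mutable memory list."""
    acc, pc = 0, 0
    while True:
        op, arg = program[pc]  # fetch
        pc += 1
        if op == LOAD:         # decode + execute
            acc = memory[arg]
        elif op == ADD:
            acc += memory[arg]
        elif op == STORE:
            memory[arg] = acc
        elif op == HALT:
            return memory

# memory[2] = memory[0] + memory[1]
mem = run([(LOAD, 0), (ADD, 1), (STORE, 2), (HALT, 0)], [3, 4, 0])
assert mem[2] == 7
```

A student-designed processor replaces this Python loop with a hardware description, but the analysis steps (ISA definition, program encoding, simulation) are the same.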

Proceedings ArticleDOI
10 Nov 2001
TL;DR: This paper describes an integrated grid environment that is open, extensible and platform independent and demonstrates the effectiveness of this architecture through high-level specification and solution of a set of linear equations by automatic and optimal resource and implementation selection.
Abstract: Effective exploitation of computational grids can only be achieved when applications are fully integrated with the grid middleware and the underlying computational resources. Fundamental to this exploitation is information: information about the structure and behavior of the application, the capability of the computational and networking resources, and the availability and access to these resources by an individual, a group or an organization. This paper describes an integrated grid environment that is open, extensible and platform independent. We match a high-level application specification, defined as a network of components, to an optimal combination of the currently available component implementations within our grid environment. We demonstrate the effectiveness of this architecture through high-level specification and solution of a set of linear equations by automatic and optimal resource and implementation selection.
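The matching step described above, choosing among available component implementations, can be sketched as a cost-minimizing selection per component. The component names, implementation names, and costs below are invented; the paper's actual selection uses richer resource information than a single scalar.

```python
# Sketch: pick the cheapest available implementation for each
# component named in a high-level application specification.

def select_implementations(spec, available):
    """spec: list of component names; available: name -> {impl: cost}."""
    choice = {}
    for component in spec:
        impls = available[component]
        choice[component] = min(impls, key=impls.get)  # lowest cost wins
    return choice

# Invented example, loosely echoing the linear-equations demonstration.
available = {
    "linear-solver": {"lapack-seq": 8.0, "scalapack-4node": 3.5},
    "matrix-assembly": {"local": 1.0, "remote-cluster": 2.0},
}
best = select_implementations(["linear-solver", "matrix-assembly"], available)
assert best == {"linear-solver": "scalapack-4node", "matrix-assembly": "local"}
```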

Journal ArticleDOI
TL;DR: A prototype tool that exploits recently developed techniques for automatic model construction from traces, based on traces captured for certain selected scenarios which are determined to be important for performance, is described.

Proceedings ArticleDOI
26 Nov 2001
TL;DR: A system where software architects sketch an outline of their proposed system architecture at a high level of abstraction, including indicating client requests, server services, and choosing particular kinds of middleware and database technologies is described.
Abstract: Most distributed system specifications have performance benchmark requirements. However, determining the likely performance of complex distributed system architectures during development is very challenging. We describe a system where software architects sketch an outline of their proposed system architecture at a high level of abstraction, including indicating client requests, server services, and choosing particular kinds of middleware and database technologies. A fully working implementation of this system is then automatically generated, allowing multiple clients and servers to be run. Performance tests are then automatically run for this generated code and results are displayed back in the original high-level architectural diagrams. Architects may change performance parameters and architecture characteristics, comparing multiple test run results to determine the most suitable abstractions to refine to detailed designs for actual system implementation. We demonstrate the utility of this approach and the accuracy of our generated performance test-beds for validating architectural choices during early system development.
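The generate-and-measure loop described above can be sketched in miniature: stand in for the generated client/server code with a synthetic handler, benchmark it, and compare two "technology choices." Everything below (the spec-free handler factory, the workload knob) is invented for illustration; the real system generates working middleware and database code.

```python
import statistics
import time

# Sketch: a generated "server" stand-in whose cost models a technology
# choice, plus a benchmark loop that feeds results back to the architect.

def make_server(work_per_request):
    """Build a handler whose per-request cost scales with the argument."""
    def handle(request):
        total = sum(range(work_per_request))  # synthetic middleware work
        return f"ok:{request}:{total}"
    return handle

def benchmark(handler, n_requests):
    """Run n_requests through the handler; return mean latency (seconds)."""
    latencies = []
    for i in range(n_requests):
        start = time.perf_counter()
        handler(i)
        latencies.append(time.perf_counter() - start)
    return statistics.mean(latencies)

fast = benchmark(make_server(10), 50)        # lightweight choice
slow = benchmark(make_server(100000), 50)    # heavier choice
assert slow > fast  # comparison guides which architecture to refine
```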

Journal Article
TL;DR: In this article, the authors identify the risks and controls used in ERP implementations, with the objective of understanding the ways in which organizations can minimize the business risks involved in implementing ERP systems.
Abstract: The implementation of ERP systems has been problematic for many organizations. Given the many reports of substantial failures, the implementation of packaged ERP software and associated changes in business processes has proved not to be an easy task. As many organizations have discovered, the implementation of ERP systems can be a monumental disaster unless the process is handled carefully. The aim of this study is to identify the risks and controls used in ERP implementations, with the objective to understand the ways in which organizations can minimize the business risks involved. By controlling and minimizing the major business risks in the first instance, the scene can be set for the successful implementation of an ERP system. The study was motivated by the significance, for both the research and practice communities, of understanding the risks and controls critical for the successful implementation of ERP systems. Following the development of a model of risks and controls, a field study of an ERP system implementation project in an organization was conducted to provide a limited test of the model. The results from the field study provided support for risks and controls identified in the literature. The results also identified several controls not mentioned in the reviewed literature. The study lays the foundation for further research into the risk/control framework so important for the success of the implementations of ERP systems.

Book
30 Aug 2001
TL;DR: This book is primarily for programmers interested in writing services for residential gateways in the Java programming language and should be useful to anyone who wants to learn about residential gateway technology and the efforts made by the OSGi consortium.
Abstract: From the Book: Technology is invented and advanced by, well, technical people. However, a truly successful technology is marked by its adoption by people in their daily lives. Few ponder radio frequency modulation when they turn on the TV, or the internal combustion engine when they drive around. The technology has disappeared behind the utility. The last decade saw two new technologies begin to blend into our lives: the computer and the Internet. We only need to launch a browser and the resources of the World Wide Web are at our fingertips, and we are hard pressed to tell the difference between a computer and a game console, a personal digital assistant (such as PalmPilot), or a cell phone. It is now entirely feasible to bring services to smart consumer devices at home and to small businesses through the Internet. Utility providers and network, computer, wireless, consumer electronics, and home appliance companies recognize the tremendous potential and have started to tap into this market. As a result, new horizons are open for application developers. The Open Services Gateway Initiative (OSGi) was formed to explore these exciting opportunities, and its membership includes such diverse companies as Bell South, Echelon, Electricite de France, IBM, Sun, Ericsson, Nokia, Sony, Maytag, and Whirlpool, to name just a few from a roster of more than 80 organizations. With these combined resources, OSGi stands a good chance to turn this vision into reality. The OSGi Service Gateway Specification 1.0 defines a Java™ technology-based software architecture for developing and deploying services, which is the topic of this book. What compels us to write this book, in addition to our enthusiasm for the emerging new applications, is the unique software model involved.
We stumbled through a lot of unfamiliar territory ourselves when we worked on the Java Embedded Server™ product, the predecessor to the OSGi Service Gateway Specification, only to find our fellow developers encountering and struggling with the same class of problems. It is our hope to be able to elucidate the model and capture the hard-won solutions in one place. This book is primarily for programmers interested in writing services for residential gateways in the Java programming language. It should also be useful to anyone who wants to learn about residential gateway technology and the efforts made by the OSGi consortium. This book may be of interest to those who are involved with component-based software construction in general. Interestingly, nothing in the underlying programming model limits the kinds of applications that can be written. It aims at residential gateway applications at the "small" end of the spectrum in terms of code size and resource consumption, but it is just as viable for developing applications for desktop and enterprise environments. Indeed, the task will be made easier and the end result will be more powerful when fewer constraints on computing resources are imposed. We assume the readers are well versed in the Java programming language and experienced in software development on the Java platform. However, no experience is needed in embedded systems at the hardware and operating system levels. Many trade-offs on the contents of the book had to be considered, and these were not easy decisions to make. We wrote this book with the following goals in mind: Practical. This book is about programming service gateways and is primarily for programmers; therefore, a lot of its content is devoted to coding. The book does not dwell on the high-level vision, and it gets down to earth promptly. As a result, the material is best understood by practicing the examples near a computer. Reading it on a beach chair will almost surely ruin your vacation. 
Software Only. We are primarily concerned with the software aspect of the residential gateway, and particularly with applications for the Java platform. We don't deal with hardware design and configuration or operating system and system software of the gateway in this book. "Horizontal." One of the biggest challenges in developing examples for the book is to stay "horizontal" and relevant at the same time. By "horizontal" we mean you do not need to acquire highly specialized hardware and software to learn how to program a gateway. All the examples in this book can be built and run on a familiar personal computer or a workstation. We want to focus our effort on the generic mechanisms that apply to all service gateways with the OSGi architecture, rather than diverge into specifics of certain systems that are interesting to some readers but alien to others. For instance, as part of the Java Embedded Server project we have developed code to control a vending machine, a smart coffee maker, an NEC touch-panel golf score keeper, an Ericsson e-box, and an X10 lamp module. From first-hand experience we know that what we present is entirely within the realm of feasibility. However, the aforementioned applications are simply too complicated or have details that are too specialized to be good tutorials. Realistic. We are not going to program a refrigerator, a washing machine, a microwave, a thermostat, or a toaster in this book. It is still not possible to go to Sears and buy a freezer that watches inventory and downloads e-coupons. Our smart espresso machine, for which we programmed a Web interface to monitor its water level and temperature and to control caffeine potency, uses proprietary commands and is not generally available. Many of the similar appliances we've seen are prototypes. This, however, is more an issue of business development than technological know-how.
With the application development paradigm presented here, you should be able to develop applications for these smart appliances when they do roll down the production line en masse. Focused Scope. The technologies applicable to residential gateway applications have mushroomed during the last few years. Each warrants a book of its own to treat the subject thoroughly. Therefore, we do not teach you Bluetooth, USB, or HomePNA here. We are confident that experts in these areas can readily plug implementations of these technologies into the OSGi framework after they have learned how it works and what benefits it brings. Organization of the Book You can read this book from cover to cover, or you can select the chapters that address your particular needs. For the impatient, it is possible to jump to Chapter 4 and try out the code in action, because clear step-by-step instructions are given. However, you are strongly encouraged to read Chapter 3, which puts things into context. Chapter 1 describes the backdrop from which the residential gateway market emerged and its propellants and challenges, then explains the history of the Java Embedded Server product and the OSGi consortium, and introduces our view of what OSGi is trying to achieve. Chapter 2 outlines steps to develop your first bundle and familiarizes you with the Java Embedded Server execution environment. Chapter 3 explains the OSGi architecture and basic concepts, including the interaction of various entities during interbundle class sharing, service registration and retrieval, and bundle life cycle operations. Chapter 4 teaches you how to develop services, how to write library bundles, and how to include native code in your bundles. Two advanced examples are given in this chapter. Chapter 5 analyzes the dynamic nature of cooperation with services, and proposes strategies to cope with the situation. Events are also discussed at the beginning of this chapter. Chapter 6 describes design patterns and pitfalls.
Chapter 7 explains how to use the OSGi standard services: the HTTP and Log services. Chapter 8 explains the OSGi Device Access (DA) and how to develop services that communicate with devices using the DA. Chapter 9 discusses permission-based security and administration. Chapter 10 summarizes the issues being worked on and our view of the future directions the OSGi consortium could take. Appendix A contains the complete source code of the examples in this book. Appendix B is a copy of the OSGi specification. A list of references is included at the end of the book.

Online Resources

A copy of the Java Embedded Server product can be downloaded from Sun Microsystems' Web site at http://www.sun.com/software/embeddedserver

For updated information about the book, visit the following URL: http://java.sun.com/docs/books/jes

Full details of the OSGi consortium can be found at http://www.osgi.org
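The service registration and retrieval pattern described above (Chapter 3 of the book) can be pictured with a small language-neutral sketch. The class and method names below are our own illustration of the idea, not the actual OSGi `org.osgi.framework` API:

```python
class ServiceRegistry:
    """Minimal sketch of an OSGi-style service registry.
    Names are illustrative, not the real OSGi framework API."""

    def __init__(self):
        # interface name -> list of (service object, property dict)
        self._services = {}

    def register(self, interface, service, properties=None):
        # A bundle publishes a service object under an interface name.
        self._services.setdefault(interface, []).append((service, properties or {}))

    def get(self, interface, match=None):
        # Another bundle retrieves a service, optionally filtering on properties.
        for service, props in self._services.get(interface, []):
            if match is None or match(props):
                return service
        return None

    def unregister(self, interface, service):
        # When a bundle stops, its services must be withdrawn from the registry.
        self._services[interface] = [
            (s, p) for s, p in self._services.get(interface, []) if s is not service
        ]
```

A consuming bundle would call `get("LogService")` rather than instantiating the provider directly, which is what lets bundles be installed, updated, and removed independently at run time.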

Journal ArticleDOI
TL;DR: It is demonstrated that successful reference laboratory reporting can be implemented if surveillance issues and components are planned, and three major areas for international organizations to consider for successful implementation are described.
Abstract: Electronic data reporting from public health laboratories to a central site provides a mechanism for public health officials to rapidly identify problems and take action to prevent further spread of disease. However, implementation of reference laboratory systems is much more complex than simply adopting new technology, especially in international settings. We describe three major areas to be considered by international organizations for successful implementation of electronic reporting systems from public health reference laboratories: benefits of electronic reporting, planning for system implementation (e.g., support, resources, data analysis, country sovereignty), and components of system initiation (e.g., authority, disease definition, feedback, site selection, assessing readiness, problem resolution). Our experience with implementation of electronic public health laboratory data management and reporting systems in the United States and working with international organizations to initiate similar efforts demonstrates that successful reference laboratory reporting can be implemented if surveillance issues and components are planned.

Journal Article
TL;DR: The lessons learned include developing a broad base of support, making decisions through consensus, addressing conflict when it occurs, keeping user expectations realistic, preparing for the change process, implementing the computer information system in stages, challenging existing work processes, and viewing the implementation as a process.
Abstract: This article describes lessons learned during an initial intensive care unit point-of-care clinical information system implementation and subsequent expansions to other units and hospitals in a multihospital healthcare delivery system. Although the implementation and expansions were primarily successful, lessons learned include developing a broad base of support, making decisions through consensus, addressing conflict when it occurs, keeping user expectations realistic, preparing for the change process, implementing the computer information system in stages, challenging existing work processes, viewing the implementation as a process, and choosing a project leader with outstanding communication and group process skills in addition to technical skills.

Journal ArticleDOI
TL;DR: A knowledge-based approach is presented and an expert design system is developed to support top-down design for assembled products, giving users the ability to assess and reduce total production cost at an early stage of the design process.

Journal ArticleDOI
TL;DR: An emerging large‐scale strategy involves an institutional partnership with a for‐profit application service provider (ASP) that specializes in total systems solutions for developing and delivering Web‐based distance learning programs.
Abstract: Strategies for implementing distance learning coursework have evolved and expanded with the growth and maturation of the World Wide Web. The requirements, advantages, and disadvantages of the most common strategies are compared and contrasted. Initially limited to individual efforts, software development has eased the burden of individual faculty and has opened up strategies for greater participation. Institutions attempting large‐scale implementations, however, may find infrastructure requirements overwhelming. An emerging large‐scale strategy involves an institutional partnership with a for‐profit application service provider (ASP). The ASP specializes in total systems solutions for developing and delivering Web‐based distance learning programs. Recent experiences at California State University, Fullerton, with an ASP are discussed.

DOI
01 Nov 2001
TL;DR: This technical note describes some useful lessons learned at a number of organizations that have implemented measurement programs using the Goal-Driven Software Measurement methodology to provide some practical advice and guidelines for planning and implementing measurement programs.
Abstract: Despite significant improvements in implementing measurement programs for software development in industry, data collected by Rubin Systems show that a large percentage of metrics programs fail. This technical note describes some useful lessons learned at a number of organizations that have implemented measurement programs using the Goal-Driven Software Measurement methodology. It includes a description of the methodology, a discussion of the challenges, obstacles, and their solutions, an initial set of indicators and measures, as well as some artifacts (such as templates and checklists) that we have found to enable successful implementations. The main motivation of this technical note is to provide some practical advice and guidelines for planning and implementing measurement programs.
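The goal-driven idea—deriving every collected measure from an explicit business goal via the questions it must answer—can be sketched as a simple traceability structure. The goals, questions, and metrics below are invented for illustration and are not taken from the technical note:

```python
# A minimal goal -> question -> metric mapping in the spirit of
# goal-driven measurement. Contents are hypothetical examples.
measurement_program = {
    "Improve delivery predictability": {
        "How accurate are our effort estimates?": [
            "estimated vs. actual effort per task",
        ],
        "How often do milestones slip?": [
            "planned vs. actual milestone dates",
        ],
    },
}

def metrics_for_goal(program, goal):
    """Collect every metric that traces back to the given goal,
    so no orphan metrics are gathered 'because we can'."""
    return [m for metrics in program.get(goal, {}).values() for m in metrics]
```

Keeping this trace explicit is one practical defense against the failure mode the note describes: programs that collect data nobody uses because it answers no stated question.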

01 Jan 2001
TL;DR: In this article, the most common mistakes and pitfalls associated with developing embedded real-time software are discussed, and the origin, causes, and hidden dangers of these mistakes are highlighted.
Abstract: The most common mistakes and pitfalls associated with developing embedded real-time software will be presented. The origin, causes, and hidden dangers of these mistakes will be highlighted. Methods ranging from better education to using new technology and recent research results will be discussed. The mistakes vary from problems with the highlevel project management methodologies, to poor decisions on low-level technical issues relating to the design and implementation. The most common mistakes have been identified from experience in reviewing the software designs and implementations of many embedded programmers, ranging from seasoned experts in industry to rookies just learning the material in college.

Book
01 Apr 2001
TL;DR: In this paper, a prescriptive model using five key guidelines is proposed for effective management of the system delivery process, which can save corporations millions in investment dollars, reduce negative impacts to customer service and enhance employee morale and systems acceptance.
Abstract: A pervasive theme today regarding the performance of new systems is "many systems are technical successes, but organizational failures." Systems that are well designed often fail to meet user expectations at implementation. This paper details and analyzes the implementation of a major operations support system at a large U.S. firm that fits this theme. Measurements of success from a quasi-experiment are used to accurately measure user performance and user expectations before and after system implementation. These measurements offer solid proof that the system achieved key user-defined objectives... and yet, the system is widely viewed as a failure. This paper highlights the "organizational chaos" that "technically successful" systems often cause in user organizations when the Systems Delivery process (how systems are delivered to users) is ineffectual. In effect, systems are dropped off at the users' doorsteps. A prescriptive model using five key guidelines is proposed for effective management of the Systems Delivery process. These five relatively small secrets can save corporations millions in investment dollars, reduce negative impacts to customer service, and enhance employee morale and systems acceptance.

Proceedings ArticleDOI
25 Jun 2001
TL;DR: This paper formalizes the notion of "execution machine" for synchronous code and introduces a generic architecture for centralized execution machines.
Abstract: Synchronous languages allow a high level, concurrent, and deterministic description of the behavior of reactive systems. Thus, they can be used advantageously for the programming of embedded control systems. The runtime requirements of synchronous code are light, but several critical properties must be fulfilled. In this paper, we address the problem of the software implementation of synchronous programs. After a brief introduction to reactive systems, this paper formalizes the notion of "execution machine" for synchronous code. Then, a generic architecture for centralized execution machines is introduced. Finally, several effective implementations are presented.

01 Jun 2001
TL;DR: A new method is presented to automatically derive a component that manages all of the angelic non-determinism for an arbitrary implementation/specification pair.
Abstract: Conformance checking of a component is a testing method to see if an implementation and its executable specification are behaviorally equivalent relative to any interactions performed on the implementation. Such checking is complicated by the presence of non-determinism in the specification: the specification may permit a set of possible behaviors. We present a new method to automatically derive a component that manages all of the angelic non-determinism for an arbitrary implementation/specification pair. The new component just plugs in; no instrumentation of any implementation is necessary. Conformance checking thus helps to keep high-level non-deterministic specifications of components and their low-level implementations in sync.
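One way to picture the underlying check: a non-deterministic specification maps each interaction to a *set* of permitted behaviors, and the implementation conforms if every observed behavior lies in that set. The toy checker below is our own illustration of that idea, not the paper's automatically derived component:

```python
def conforms(spec, implementation, interactions):
    """Check that for every interaction, the implementation's observed
    output is among the behaviors the non-deterministic spec permits.
    Returns (True, None) or (False, counterexample)."""
    for interaction in interactions:
        allowed = spec(interaction)           # set of permitted outputs
        observed = implementation(interaction)
        if observed not in allowed:
            return False, (interaction, observed, allowed)
    return True, None

# Hypothetical spec: halving may round either down or up.
spec = lambda x: {x // 2, (x + 1) // 2}
impl_round_down = lambda x: x // 2        # picks one permitted behavior
impl_off_by_one = lambda x: x // 2 + 1    # sometimes leaves the permitted set
```

The hard part the paper addresses, which this sketch glosses over, is stateful behavior: with angelic non-determinism the checker must track *all* spec states consistent with the observations so far, not a single one.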

Patent
26 Dec 2001
TL;DR: In this article, a meta model for representing facts, intelligence, and packaging facts and intelligence into readily usable knowledge components implemented in off-the-shelf object oriented programming languages and tools is presented.
Abstract: A method, apparatus, and program product for designing, implementing, distributing, and deploying computer programs that consist of packaged knowledge components written in modern object oriented programming languages. A meta model defines a model for representing facts, intelligence, and packaging facts and intelligence into readily usable knowledge components implemented in off-the-shelf object oriented programming languages and tools. In addition, the meta model defines a knowledge algebra to assemble and cascade knowledge components into larger and more powerful knowledge components and knowledge oriented software applications. A kernel is provided that links and executes knowledge components and knowledge oriented applications. The kernel dynamically links the logical definition of knowledge components and the knowledge application to real implementations.

Proceedings ArticleDOI
05 Nov 2001
TL;DR: The implementation details of the parallel computing infrastructure to support broad-band MFP, the Web-based interface, user authentication, job launch, the RDMS, and a cost/benefit analysis of the migration to a multiuser high performance computing (HPC) environment are described.
Abstract: Under the sponsorship of the Center of Excellence in Research Ocean Sciences (CEROS), the Defense Advanced Research Projects Agency (DARPA), and the US Navy, Science Applications International Corporation (SAIC) has developed several ocean acoustic data modeling and analysis utilities. Execution of these programs was limited, however, because the implementations (which include broad-band matched field processing) exceeded the computing capacity of desktop computers, and the interface was burdened by the management of large quantities of parameter and output files. Further, the migration of these algorithms to powerful processors resulted in additional constraints such as difficulty in resource access, user authentication, run-time security, and non-standardized interfaces. This project addressed these concerns through simultaneous development of a World Wide Web (WWW)-based interface, and the migration of the core algorithms to the Maui Supercomputer Center (MSC) and a SAIC Linux cluster. Operators can now use common gateway interface (CGI) scripts to input parameters, launch jobs, view graphical display results, and monitor job status on the massively parallel processors (MPP). Afterward, the run-time parameters and output files are managed via a Web-based relational database management system (RDBMS). This paper describes the implementation details of the parallel computing infrastructure to support broad-band MFP, the Web-based interface, user authentication, job launch, the RDBMS, and a cost/benefit analysis of the migration to a multiuser high performance computing (HPC) environment. Also discussed are the security measures selected to ensure resource and user protection. To serve as a feasibility baseline for the future adaptation of other applications and platforms, performance metrics are presented for the migration and processing on the MSC MPP system and the SAIC Linux cluster.

Journal ArticleDOI
TL;DR: An established theory of systems design, using formal constructs and set-theory notation, is used throughout this paper as the basis for the presentation of ideas.
Abstract: Systems theory is now a mature part of the discipline of general systems engineering science, with a substantial amount of research effort having been undertaken in the last 40 years—however, there is still very little evidence of the widespread practical use of systems-theoretic methods within the engineering industry. This is despite there being strong evidence that many of the current problems in the delivery of acceptable (or even usable) large, complex, systems solutions result from a failure to apply a rigorous systems-science approach. This paper therefore introduces some practical ideas for the effective use of an established mathematical systems theory in the specification and design of engineered system solutions. In particular, the following areas are explored: the capture of system requirements (and in particular ways of ensuring a proper and comprehensive specification of input/output requirements); the modeling of complicated system behaviors, including anomalous behaviors arising as a consequence of real system implementation; and the formal relationship between a comprehensive input/output requirement specification and the "complicated" behaviors of the candidate system design solutions. An established theory of systems design, using formal constructs and set-theory notation, is used throughout this paper as the basis for the presentation of ideas. © 2001 John Wiley & Sons, Inc. Syst Eng 4: 58–75, 2001
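In the set-theoretic style the paper refers to, both a requirement and a candidate design can be viewed as input/output relations, which makes the satisfaction question a containment check. The notation below is a hedged sketch of that general idea, not the paper's exact formalism:

```latex
% Let I be the admissible input trajectories and O the output trajectories.
% A requirement is a relation R \subseteq I \times O;
% a candidate design realizes a relation S \subseteq I \times O.
% The design satisfies the requirement when every behavior it can
% exhibit is permitted, and it responds to every input the
% requirement covers:
S \subseteq R \quad\text{and}\quad \operatorname{dom}(S) \supseteq \operatorname{dom}(R)
```

The second condition is what rules out the trivially "correct" design that simply does nothing, a point the containment condition alone does not capture.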