
Showing papers on "Implementation" published in 2002


Book
01 Oct 2002
TL;DR: The Editors put the MPEG-4 FA standard against the historical background of research on facial animation and model-based coding, and provide a brief history of the development of the standard itself.
Abstract: From the Publisher: This book concentrates on the animation of faces. The editors set the MPEG-4 FA standard against the historical background of research on facial animation and model-based coding, and provide a brief history of the development of the standard itself. Part 2 gives a comprehensive overview of the FA specification with the goal of helping the reader understand how the standard really works. Part 3 forms the bulk of the book and covers implementations of the standard on both the encoding and decoding sides. Several face animation techniques for MPEG-4 FA decoders are presented. While the standard itself specifies only the decoder, for applications it is interesting to look at the wide range of technologies for producing and encoding FA content. These include video analysis, speech analysis and synthesis, as well as keyframe animation. The last part of the book provides several examples of applications using the MPEG-4 FA standard. It will be useful for companies implementing products and services related to the new standard, researchers in several domains related to facial animation, and a wider technical audience interested in new technologies and applications.
· The main people involved in the standardization process are contributors to the book
· Provides several examples of applications using the MPEG-4 Facial Animation standard, including video and speech analysis
· Will become THE reference for MPEG-4 Facial Animation
· Aids the understanding of the reasoning behind the standard specification, how it works and what the potential applications are
· Gives an overview of the technologies directly related to the standard and its implementation
· Essential reading for the industry and research community, especially engineers, researchers and developers

423 citations


Proceedings ArticleDOI
Amza, Chanda, Cox, Elnikety, Gil, Rajamani, Zwaenepoel, Cecchet, Marguerite
01 Jan 2002
TL;DR: In this article, the authors describe three benchmarks for evaluating the performance of Web sites with dynamic content: an online bookstore, an auction site, and a bulletin board, the latter two modeled after ebay.com and slashdot.org.
Abstract: The absence of benchmarks for Web sites with dynamic content has been a major impediment to research in this area. We describe three benchmarks for evaluating the performance of Web sites with dynamic content. The benchmarks model three common types of dynamic content Web sites with widely varying application characteristics: an online bookstore, an auction site, and a bulletin board. For the online bookstore, we use the TPC-W specification. For the auction site and the bulletin board, we provide our own specification, modeled after ebay.com and slashdot.org, respectively. For each benchmark we describe the design of the database and the interactions provided by the Web server. We have implemented these three benchmarks with a variety of methods for building dynamic-content applications, including PHP, Java servlets and EJB (Enterprise Java Beans). In all cases, we use commonly used open-source software. We also provide a client emulator that allows a dynamic content Web server to be driven with various workloads. Our implementations are available freely from our Web site for other researchers to use. These benchmarks can be used for research in dynamic Web and application server design. In this paper, we provide one example of such possible use, namely discovering the bottlenecks for applications in a particular server configuration. Other possible uses include studies of clustering and caching for dynamic content, comparison of different application implementation methods, and studying the effect of different workload characteristics on the performance of servers. With these benchmarks we hope to provide a common reference point for studies in these areas.
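The client emulator the authors describe can be caricatured in a few lines. The sketch below is not their code; the page names, interaction weights, and session structure are invented for illustration, and real HTTP requests are replaced by a log append:

```python
import random
import threading
import time

# Hypothetical page mix for a bookstore-style workload; the real
# benchmarks define their own transition tables per interaction.
INTERACTIONS = {
    "home": 0.3,
    "search": 0.4,
    "product_detail": 0.2,
    "buy": 0.1,
}

def pick_interaction():
    """Sample the next interaction from the weighted mix."""
    r = random.random()
    cumulative = 0.0
    for name, weight in INTERACTIONS.items():
        cumulative += weight
        if r < cumulative:
            return name
    return "home"  # floating-point safety net

class EmulatedClient(threading.Thread):
    """One emulated browser session: pick a page, 'request' it, think, repeat."""
    def __init__(self, requests_per_session, think_time, log):
        super().__init__()
        self.requests_per_session = requests_per_session
        self.think_time = think_time
        self.log = log

    def run(self):
        for _ in range(self.requests_per_session):
            page = pick_interaction()
            self.log.append(page)  # a real emulator would issue an HTTP request here
            time.sleep(self.think_time)

def drive(num_clients=4, requests_per_session=5, think_time=0.0):
    """Run num_clients concurrent sessions and return the combined request log."""
    log = []
    clients = [EmulatedClient(requests_per_session, think_time, log)
               for _ in range(num_clients)]
    for c in clients:
        c.start()
    for c in clients:
        c.join()
    return log
```

A real emulator would replace the log append with an actual request and draw think times from the benchmark's specified distribution.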

279 citations


Journal ArticleDOI
TL;DR: In this article, an interpretive study was conducted to understand the reasons for the apparent lack of success of ERP implementations by analyzing issues raised by representatives of key stakeholder groups.
Abstract: Enterprise Resource Planning (ERP) systems may be defined as the implementation of standard software modules for core business processes, usually combined with customization for competitive differentiation. The aim is to provide breadth of integration and depth of functionality across multi-functional and often multi-national organizations. However, current research has shown that there has been a notable decrease in the satisfaction levels of ERP implementations over the period 1998–2000. The environment in which such software is selected, implemented and used may be viewed as a social activity system, which consists of a variety of stakeholders, e.g. users, developers, managers, suppliers and consultants. In such a context, an interpretive research approach is appropriate in order to understand the influences at work. This paper reports on an interpretive study that attempts to understand the reasons for this apparent lack of success by analyzing issues raised by representatives of key stakeholder groups. Conclusions are drawn on a wide range of organizational, management, cultural and political issues that provide guidance in managing such large-scale, complex business projects. These conclusions have led the authors to review the area of critical success factors (CSFs) for IS projects and to identify those peculiar to ERP projects. Copyright © 2002 John Wiley & Sons, Ltd.

200 citations


Journal ArticleDOI
TL;DR: The conclusions are that successful implementation programs leave more room to explore innovative technology but are more rigorous in rolling out proven technology.
Abstract: The central questions of this article are: How can the design of program management contribute to the success of complex software implementations? How do we deal with the complexity that large implementation projects encounter from changes over time and from strong integration needs? These questions become increasingly pertinent as software projects include the implementation of new internet-based IT architectures and business models. To answer these questions, we propose ways that have worked well in recent complex multi-project Enterprise Resource Planning (ERP) implementations ("programs"). We start by defining complexity in this context and making it measurable through three dimensions: variety, variability, and integration. We then investigate 15 cases and outline how our framework can be applied to deal with increasing ERP implementation complexity. One of our conclusions is that successful implementation programs leave more room to explore innovative technology but are more rigorous in rolling out proven technology.

157 citations


Proceedings ArticleDOI
03 Jun 2002
TL;DR: This paper demonstrates how the Alloy language is used for the specification of a conflict-free role-based system and provides a suitable basis for further analysis by the Alloy constraint analyser.
Abstract: Role-based access control is a powerful and policy-neutral concept for enforcing access control. Many extensions have been proposed, the most significant of which are the decentralised administration of role-based systems and the enforcement of constraints. However, the simultaneous integration of these extensions can cause conflicts in a later system implementation. We demonstrate how we use the Alloy language for the specification of a conflict-free role-based system. This specification provides us at the same time with a suitable basis for further analysis by the Alloy constraint analyser.
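Alloy expresses such constraints relationally and checks them with a constraint analyser; as a loose, language-agnostic illustration of the kind of conflict involved, the Python sketch below (all user and role names invented) detects violations of a static separation-of-duty constraint:

```python
from itertools import combinations

# Hypothetical user-to-role assignment; names are illustrative only.
USER_ROLES = {
    "alice": {"teller", "auditor"},
    "bob": {"teller"},
}

# Role pairs declared statically mutually exclusive
# (a separation-of-duty constraint).
MUTUALLY_EXCLUSIVE = {frozenset({"teller", "auditor"})}

def sod_violations(user_roles, exclusive_pairs):
    """Return (user, role_pair) tuples that violate separation of duty."""
    violations = []
    for user, roles in user_roles.items():
        for pair in combinations(sorted(roles), 2):
            if frozenset(pair) in exclusive_pairs:
                violations.append((user, pair))
    return violations
```

The point of the paper's Alloy model is that such checks can be done on the specification, before a system implementation ever assigns a conflicting role pair.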

107 citations


Proceedings ArticleDOI
15 Jul 2002
TL;DR: The methodology and tools analyze the documents and interactions in terms of four linguistic primitives and convert the diagrams into specifications and implementations of software agents, which cooperate in automating the resultant supply chain.
Abstract: This paper explores a linguistic approach to coordination modeling as a formal basis for supply-chain management (SCM) in manufacturing. The approach promotes the interchange of standard documents: enterprises need only describe their supply processes using OAG business object documents and UML interaction diagrams. Our methodology and tools analyze the documents and interactions in terms of four linguistic primitives and convert the diagrams into specifications and implementations of software agents. The agents then cooperate in automating the resultant supply chain. We evaluate our methodology in the context of several industrial scenarios. We conclude that supply-chain automation using software-agent technology is feasible.

107 citations


Journal ArticleDOI
TL;DR: The availability of specialty-specific order sets, the engagement of physician leadership, and a large-scale system implementation were key strategic factors that enabled physician-users to accept a physician order entry system despite significant changes in workflow.

96 citations


Journal ArticleDOI
TL;DR: The paper presents the specification in the form of the Exactly-Once Transaction (e-Transaction) abstraction: an abstraction that encompasses both safety and liveness properties in three-tier environments.
Abstract: A three-tier application is organized in three layers: human users interact with front-end clients (e.g., browsers); middle-tier application servers (e.g., Web servers) contain the business logic of the application and perform transactions against back-end databases. Although three-tier applications are becoming mainstream, they usually fail to provide sufficient reliability guarantees to the users. Usually, replication and transaction-processing techniques are applied to specific parts of the application, but their combination does not provide end-to-end reliability. The aim of this paper is to provide a precise specification of a desirable, yet realistic, end-to-end reliability contract in three-tier applications. The paper presents the specification in the form of the Exactly-Once Transaction (e-Transaction) abstraction: an abstraction that encompasses both safety and liveness properties in three-tier environments. It gives an example implementation of that abstraction and points out alternative implementations and tradeoffs.
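One ingredient commonly used to approximate exactly-once behavior is an idempotency table on the server side. The fragment below is a deliberately naive sketch with invented names and in-memory state, not the paper's implementation, which must also make such state durable and survive server failures:

```python
class ExactlyOnceServer:
    """Replays cached results for duplicate requests instead of re-executing."""
    def __init__(self):
        self.completed = {}  # request_id -> result (durable in a real system)
        self.balance = 100

    def debit(self, request_id, amount):
        if request_id in self.completed:      # duplicate: replay the result
            return self.completed[request_id]
        self.balance -= amount                # execute the transaction once
        result = self.balance
        self.completed[request_id] = result   # record before replying
        return result
```

A client that times out can safely resend the same request: the cached result is replayed and the account is not debited twice.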

74 citations


Book ChapterDOI
30 Oct 2002
TL;DR: The authors present how they apply generative programming techniques to develop jRate, an open-source ahead-of-time-compiled implementation of the Real-time Specification for Java (RTSJ).
Abstract: Over 90 percent of all microprocessors are now used for real-time and embedded applications. Since the behavior of these applications is often constrained by the physical world, it is important to devise higher-level programming languages and middleware that robustly and productively enforce real-time constraints, as well as meeting conventional functional requirements. This paper provides two contributions to the study of programming languages and middleware for real-time and embedded applications. We first present how we are applying generative programming techniques to develop jRate, which is an open-source ahead-of-time-compiled implementation of the Real-time Specification for Java (RTSJ). The goal of jRate is to provide developers the ability to generate RTSJ implementations that are customized for their needs. We then show performance results of jRate that illustrate how well it performs compared to the TimeSys RTSJ Reference Implementation (RI).

70 citations


Patent
03 Jun 2002
TL;DR: In this article, a policy is generated that includes an action to be applied to a resource, and a policy assignment is created in association with but separate from the policy, including a reference to the policy and criteria for a client to determine appropriateness of subsequent access to policy to apply the action to the resource.
Abstract: The following described implementations provide for efficient distribution of policy. Specifically, a policy is generated that includes an action to be applied to a resource. A policy assignment is created in association with but separate from the policy. The policy assignment includes a reference to the policy, as well as criteria for a client to determine appropriateness of subsequent access to the policy to apply the action to the resource.

61 citations


Book
01 Jan 2002
TL;DR: This book is an introduction into methodology and practice of analysis, design and implementation of distributed health information systems, with special attention to security and interoperability of such systems as well as to advanced electronic health record approaches.
Abstract: This book is an introduction to the methodology and practice of analysis, design and implementation of distributed health information systems. Special attention is dedicated to security and interoperability of such systems as well as to advanced electronic health record approaches. The book considers both available architectures and implementations and current and future innovations. Therefore, the component paradigm, UML, XML, and eHealth are discussed in a concise way. Many practical solutions specified and implemented first in the author's environment are presented in greater detail. The book addresses information scientists, administrators, health professionals, managers and other users of health information systems.

Patent
26 Apr 2002
TL;DR: In this article, a method for dynamically invoking a Web service based on a common port type is presented, where one or more ports bound to the remote implementations of the Web service can be identified and a set of port selection rules can be applied to the identified ports to select a particular one of the ports.
Abstract: A method for dynamically invoking a Web service. The method can include assembling a collection of references to remote implementations of the Web service based upon a common port type. One or more ports bound to the remote implementations of the Web service can be identified, and a set of port selection rules can be applied to the identified ports to select a particular one of the ports. Finally, the Web service can be invoked through the selected port. Notably, the identifying step can include parsing a Web service implementation document for each referenced remote implementation in the collection. The parsing can produce a list of ports through which the remote implementations can be invoked. Also, the method can further include compiling the set of port selection rules according to at least one of high-availability concerns, quality of service concerns and economic concerns.
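The port-selection step might look roughly like the following; the field names and the concrete rule ordering (filter on availability, then prefer low latency, then low cost) are assumptions for illustration, not the patent's claims:

```python
# Hypothetical candidate ports bound to remote implementations of one
# common port type; the attributes stand in for high-availability,
# quality-of-service, and economic concerns.
PORTS = [
    {"url": "http://a.example/ws", "available": True,  "latency_ms": 120, "cost": 5},
    {"url": "http://b.example/ws", "available": True,  "latency_ms": 40,  "cost": 8},
    {"url": "http://c.example/ws", "available": False, "latency_ms": 10,  "cost": 1},
]

def select_port(ports):
    """Drop unavailable ports, then prefer low latency, breaking ties on cost."""
    candidates = [p for p in ports if p["available"]]
    if not candidates:
        raise RuntimeError("no available port for this port type")
    return min(candidates, key=lambda p: (p["latency_ms"], p["cost"]))
```

The invocation step would then call the Web service through the selected port's binding.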

Proceedings ArticleDOI
18 Aug 2002
TL;DR: It is pointed out that ERP software can provide only one solution for a particular business process unless workarounds are available from experienced implementers, and that an ERP implementation cannot solve all problems that a global corporation may have in multiple countries in the scope of the project due to the absence of standard global processes across the organization.
Abstract: Enterprise resource planning (ERP) software packages, originally developed in one country addressing unique requirements of the businesses in that country, have matured significantly. In the new global economy this software is implemented in global corporations and used in multiple countries from one installation in a shared mode. Most of these corporations are implementing without any customization, or with minimal customization, to avoid high maintenance costs and for other reasons. Project managers of these implementations in a global environment face some unique uncertainties and challenges different from any custom software development and implementation projects that were part of the old economy. Some of the unique challenges are satisfying multiple countries' statutory requirements and reporting to the corporate headquarters from one installation of the software, conflicting interests between different business entities, lack of experienced implementers in all the countries, and efficient use of metanational advantages. We discuss different aspects of these challenges in detail and point out that ERP software can provide only one solution for a particular business process unless workarounds are available from experienced implementers. Different countries adopt different reporting and accounting practices, and therefore an ERP implementation cannot solve all problems that a global corporation may have in multiple countries in the scope of the project, due to the absence of standard global processes across the organization. We conclude that the most important critical success factor of these international implementations is how quickly these corporations can reengineer existing business processes to adopt best practices of using an ERP system.

Proceedings ArticleDOI
20 Apr 2002
TL;DR: The efforts of user groups, industry, government and academia to develop a standard for 'Alternate Interface Access' within the V2 technical committee of the National Committee for Information Technology Standards (NCITS) are introduced.
Abstract: A 'Universal Remote Console' (URC) is a personal device that can be used to control any electronic and information technology device (target device/service), such as thermostats, TVs, or copy machines. The URC renders the user interface (UI) of the target device in a way that accommodates the user's preferences and abilities. This paper introduces the efforts of user groups, industry, government and academia to develop a standard for 'Alternate Interface Access' within the V2 technical committee of the National Committee for Information Technology Standards (NCITS). Some preliminary design aspects of the standard under development are briefly discussed.

01 Apr 2002
TL;DR: In this article, the authors present a set of requirements for a benchmark suite for distributed publish/subscribe services, and outline its primary components based on their own experience in building and studying publish and subscribe infrastructures, and on existing evaluation frameworks.
Abstract: Building a distributed publish/subscribe infrastructure amounts to defining a service model (or interface) and providing an implementation for it. A typical distributed implementation is architected as a network of dispatcher components, each one implementing appropriate protocols and algorithms, that collectively realize the chosen service model. The service model should provide a value-added service for a wide variety of applications, while the implementation should gracefully scale up to handle an intense traffic of publications and subscriptions. We believe that the design of such service models and implementations must be guided by a systematic evaluation method, which in turn must be based on a carefully chosen benchmark suite. In this paper, we lay out a set of requirements for a benchmark suite for distributed publish/subscribe services, and we outline its primary components. The ideas proposed in this paper are based on our own experience in building and studying publish/subscribe infrastructures, and on existing evaluation frameworks.
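As a point of reference for what such a benchmark exercises, a toy centralized version of a topic-based service model fits in a few lines; a real infrastructure distributes this logic across a network of dispatchers, and the names here are illustrative only:

```python
from collections import defaultdict

class Dispatcher:
    """Minimal topic-based publish/subscribe dispatcher (single node)."""
    def __init__(self):
        self.subscriptions = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register interest: callback is invoked for every publication on topic."""
        self.subscriptions[topic].append(callback)

    def publish(self, topic, message):
        """Deliver message to every subscriber of topic; return delivery count."""
        delivered = 0
        for callback in self.subscriptions.get(topic, []):
            callback(message)
            delivered += 1
        return delivered
```

A benchmark along the paper's lines would drive many such subscribe/publish operations concurrently and measure throughput and delivery latency as subscription counts grow.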

Proceedings ArticleDOI
07 Aug 2002
TL;DR: This paper combines key concepts developed in these two efforts and proposes a policy-based configuration management architecture built upon the distributed management infrastructure.
Abstract: The IETF has developed several specifications for policy-based configuration management. In addition, an infrastructure for distributed management by delegation has been specified. This paper combines key concepts developed in these two efforts and proposes a policy-based configuration management architecture built upon the distributed management infrastructure. Two prototype implementations for managing Differentiated Services are discussed and evaluated.

Book
15 Mar 2002
TL;DR: Detailed technical information, scenario-based explanations, and a real-world case study provide the skills and insight you need to deploy PKI in e-commerce, e-mail, messaging, and any other system where data security is essential.
Abstract: From the Publisher: Public Key Infrastructure Implementation and DesignYour PKI Road Map With its power to ensure data security, confidentiality, and integrity, Public Key Infrastructure is an essential component of today’s business systems. Whether you’re a network administrator, a systems engineer, or a security professional, this hands-on reference guide delivers all the information you need to harness this fast-growing technology. The book covers all aspects of PKI, including architecture, cryptography, standards, certificates, design, and execution. Detailed technical information, scenario-based explanations, and a real-world case study provide the skills and insight you need to deploy PKI in e-commerce, e-mail, messaging, and any other system where data security is essential. Put PKI to Work Get an expert tour of the world of digital cryptography Find out about the various PKI architectures available today – and when to use each one Master functions for issuing, revoking, and managing certificates Learn how to install and configure Windows 2000 Certificate Server for SSL, IPSec, and S/MIME Build PKI solutions based on the latest PKI management protocols and standards Evaluate PKI-enabled services on the market and decide which one’s right for your project Plan your PKI deployment with full insight into both operational and legal considerations Author Biography: NIIT is a global IT solutions company that develops customized multimedia training products and trains more than 150,000 people in 37 countries every year. Suranjan Choudhury, MCSE, CACP, CADC, Sun, is a network security specialist for NIIT, a global training and software organization. He has developed security policies and overseen implementations of secure Web sites and messaging systems (using PKI, firewall, portal, and VPN technologies) for GE, Amro Bank, NALCO, the Indian Ministry of Defense, and other organizations. 
Kartik Bhatnagar has an MBA in systems and is currently employed as a development executive with NIIT. Wasim Haque has over 7 years of experience in information technology with expertise in analysis, design, and implementation of enterprise-wide networks using various security solutions for the enterprise.

Journal ArticleDOI
10 Dec 2002
TL;DR: This work proposes a software architecture based on a combination of object-oriented models and executable formal specifications that supports reconfigurability to facilitate heterogeneous implementations and vendor-neutral products.
Abstract: We propose a software architecture based on a combination of object-oriented models and executable formal specifications. In this architecture, the machine control software is viewed as an integration of a set of reusable software components, each modeled with a set of event-based external interfaces for functional definitions, a control logic driver for execution of behavioral specifications, and a set of service protocols for platform adaptation. The behavior of the entire software can be viewed as an integration of the behaviors of the components and their interactions. Separation of structural specification from behavioral specification enables the controller software structure to be reconfigured independently of application, and software behavior to be reconfigured independently of controller software structure. When the system needs reconfiguration due to changes in either application requirements or the execution platform, the software with our architecture can then be reconfigured by changing reusable components and their interactions in structure for functional capability, and by changing the Control Plan program for behavior. Both types of reconfiguration can be done at the executable code level after the software is implemented. The proposed architecture also supports reconfigurability to facilitate heterogeneous implementations and vendor-neutral products.

Journal ArticleDOI
TL;DR: This survey describes and classify 14 relevant proposals and environments that tackle Java's performance bottlenecks in order to make the language an effective option for high‐performance network‐based computing.
Abstract: There has been an increasing research interest in extending the use of Java towards high-performance demanding applications such as scalable Web servers, distributed multimedia applications, and large-scale scientific applications. However, extending Java to a multicomputer environment and improving the low performance of current Java implementations pose great challenges to both the systems developer and application designer. In this survey, we describe and classify 14 relevant proposals and environments that tackle Java's performance bottlenecks in order to make the language an effective option for high-performance network-based computing. We further survey significant performance issues while exposing the potential benefits and limitations of current solutions in such a way that a framework for future research efforts can be established. Most of the proposed solutions can be classified according to some combination of three basic parameters: the model adopted for inter-process communication, language extensions, and the implementation strategy. In addition, where appropriate to each individual proposal, we examine other relevant issues, such as interoperability, portability, and garbage collection. Copyright © 2002 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A systematic structure for the implementation and analysis of CAx systems is presented to eliminate--or at least reduce--these kinds of problems, and some techniques, such as the analytic hierarchy process (AHP), benchmarking and the simulation approach, are used together to make the implementation and analysis studies more effective, easy and applicable for the companies.
Abstract: The many successful implementations of computer-aided systems (CAx) have created major advantages for most companies in the competitive world market. In particular, some companies have implemented these systems in order to keep up their competitive power, as computer applications in various fields of production systems are more widely used than before. Unfortunately, these companies have met some problems in their implementation processes, such as a lack of well-educated personnel, insufficient management support, wrong implementation strategies and techniques, and so on. In order to overcome these problems, in this paper a systematic structure for the implementation and analysis of CAx systems is presented to eliminate--or at least reduce--these kinds of problems. In addition, some techniques, such as the analytic hierarchy process (AHP), benchmarking and the simulation approach, are used together to make the implementation and analysis studies more effective, easy and applicable for the companies. The object...
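As an illustration of the AHP technique the paper combines with benchmarking and simulation, the sketch below (criteria and judgment values are invented) derives priority weights from a pairwise-comparison matrix via the common geometric-mean approximation of the principal eigenvector:

```python
import math

# Hypothetical CAx selection criteria for the illustration.
CRITERIA = ["cost", "functionality", "vendor support"]

# COMPARISON[i][j]: how strongly criterion i is preferred over criterion j,
# on Saaty's 1-9 scale (reciprocals below the diagonal).
COMPARISON = [
    [1.0, 1.0 / 3.0, 2.0],
    [3.0, 1.0,       5.0],
    [0.5, 1.0 / 5.0, 1.0],
]

def ahp_weights(matrix):
    """Normalize the geometric mean of each row into a priority weight vector."""
    gmeans = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]
```

A full AHP study would also check the consistency ratio of the judgments and repeat the computation for each alternative under each criterion.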

Patent
29 Mar 2002
TL;DR: In this paper, a method, system and program storage device for measurement acquisition using predictive models which can improve scalability, accommodate devices that operate in a disconnected mode, and enable integration of data from sources with different time granularities is described.
Abstract: A method, system and program storage device for measurement acquisition using predictive models which: (a) can improve scalability; (b) can accommodate devices that operate in a disconnected mode; and (c) enable integration of data from sources with different time granularities. Various features can be embodied in software and an object-oriented implementation is described. Different implementations are described, such as standalone predictive models implemented only on a manager (for example systems management/load balancing) or managed system (for example router management); or a parallel implementation with predictive models running on both the manager and managed (agent) systems (for example financial trading or system health monitoring). In a parallel model implementation, the agent constructs a predictive model that is conveyed to the manager system. The models are used in parallel, possibly with no communication for an extended time. The manager uses its model to provide tentative values of measurement variables to management applications. The agent uses its model to check its accuracy. If the model is found to be insufficiently accurate, an updated model is transmitted to the manager. The invention allows other measurement acquisition protocols to operate concurrently on the same measurement variables.
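The parallel-model protocol can be sketched minimally as follows; the last-value predictor and all names are illustrative only, whereas the patent covers richer predictive models and management scenarios:

```python
class Predictor:
    """Trivial last-value predictor shared by manager and agent."""
    def __init__(self, value):
        self.value = value

    def predict(self):
        return self.value

class Agent:
    """Checks real measurements against the shared model; updates it on drift."""
    def __init__(self, tolerance):
        self.tolerance = tolerance
        self.model = Predictor(0.0)

    def observe(self, measurement):
        """Return an updated model if prediction error exceeds tolerance, else None."""
        if abs(measurement - self.model.predict()) > self.tolerance:
            self.model = Predictor(measurement)
            return self.model  # would be transmitted to the manager
        return None            # no communication needed
```

Between updates the manager answers management applications from its copy of the model, so manager and agent can run for extended periods with no measurement traffic.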

Proceedings ArticleDOI
25 Jun 2002
TL;DR: By augmenting an everyday artefact, namely the standard cardboard box, this paper has created a simple yet powerful interactive environment that, judging from the experience of the "users", has achieved its goal of stirring children's imagination.
Abstract: This paper documents the design process for an augmented children's play environment centred on that most ubiquitous and simple of objects, the cardboard box. The purpose of the exercise is to show how computer technology can be used in innovative ways to stimulate discovery, play and adventure among children. Our starting point was a dissatisfaction with current computer technology as it is presented to children, which, all too often in our view, focuses inappropriately on the computer per se as a fetishized object. Shifting the focus of attention from the Graphical User Interface (GUI) to familiar objects, and children's interactions around and through these augmented objects, results in the computer becoming a facilitator of exploration and learning. The paper documents the journey from initial design concept, through a number of prototype implementations, to the final implementation. Each design iteration was triggered by observation of use of the prototypes, and reflection on that use, and on new design possibilities. By augmenting an everyday artefact, namely the standard cardboard box, we have created a simple yet powerful interactive environment that, judging from the experience of our "users", has achieved its goal of stirring children's imagination.

01 Jun 2002
TL;DR: Preliminary issues of interest and conclusions are delineated that demonstrate that the TPM methodology is a powerful integrated diagnostic tool in support of the new paradigm advocating a multidisciplinary approach to program management.
Abstract: This research effort, sponsored by the Program Executive Office for Air ASW, Assault, and Special Mission Programs (PEO(A)), is known as the Navy PEO(A) Technical Performance Measurement (TPM) System. A retrospective analysis was conducted on the T45TS Cockpit-21 program, and real-time test implementations are being conducted on the Federal Aviation Administration's (FAA) Wide Area Augmentation System (WAAS) program and the Navy's H-1 helicopter upgrade program; the methodology is currently under consideration for other test implementations across the Department of Defense (DoD) and in private industry. Currently-reported earned value data contains invaluable planning and budget information with proven techniques for program management; however, shortcomings of the system are its emphasis on retrospection and lack of integration with technical achievement. The TPM approach, using the techniques of risk analysis and probability, offers a promising method to incorporate technical assessments resulting systematically from technical parameter measurements to derive more discrete management data sufficiently early to allow for cost avoidance. Results obtained from TPM pilot programs, particularly the Cockpit-21 program, support this premise. Several preliminary issues of interest and conclusions are delineated in this paper that demonstrate that the TPM methodology is a powerful integrated diagnostic tool in support of the new paradigm advocating a multidisciplinary approach to program management. It also promises to provide a powerful new tool in proactive risk management.

Proceedings ArticleDOI
01 Jul 2002
TL;DR: Non-obvious implementation strategies, state machines, predictive algorithms and internal representation techniques which are used to minimize memory consumption and maximize the throughput of these implementations of the JPEG2000 standard are described.
Abstract: This paper is concerned with software architectures for JPEG2000. The paper is informed by the author's own work on the JPEG2000 VM (Verification Model) and the Kakadu implementation of the standard. The paper describes non-obvious implementation strategies, state machines, predictive algorithms and internal representation techniques which are used to minimize memory consumption and maximize the throughput of these implementations.
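One generic memory-minimization strategy of the kind alluded to (this is a toy sketch, not Kakadu's actual internals) is stripe-oriented processing: the image is consumed in fixed-height stripes, so working memory stays proportional to a stripe rather than to the whole image:

```python
def stripes(image, stripe_height):
    """Yield consecutive horizontal stripes of an image (list of rows),
    so a consumer never holds the whole image in working memory at once."""
    for top in range(0, len(image), stripe_height):
        yield image[top : top + stripe_height]

def mean_per_stripe(image, stripe_height=2):
    # Toy "processing" stage: average sample value of each stripe.
    results = []
    for stripe in stripes(image, stripe_height):
        samples = [px for row in stripe for px in row]
        results.append(sum(samples) / len(samples))
    return results

image = [[0, 0], [2, 2], [4, 4], [6, 6]]  # 4x2 toy image
```

A real codec pipelines far more elaborate stages (wavelet lifting, code-block coding) through such incremental windows, which is where the state machines and internal representations the abstract mentions come in.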

Proceedings ArticleDOI
08 Apr 2002
TL;DR: This paper proposes a method which, by assigning priorities and offsets to tasks, guarantees that complex timing constraints can be met and supports sporadic tasks, shared resources, and varying execution times of tasks.
Abstract: Design and implementation of motion control applications includes the transition from control design to real-time system implementation. To make this transition smooth, the specification model for the real-time system should also allow for temporal requirements other than deadlines, e.g., deviation from nominal period time of an activity, end-to-end timing constraints, temporal correlation between different sampling tasks and constraints on temporal variations in output. Many real-time systems in industry today are based on pre-emptive, priority-based run-time systems, and hence, the temporal requirements should be fulfilled by correctly assigning attributes such as priorities and offsets to the tasks executing in such systems. Assigning priorities and offsets in order to fulfill complex temporal requirements originating from control design and computer system design is a hard task that should be supported by powerful methods and tools. In this paper we propose a method which, by assigning priorities and offsets to tasks, guarantees that complex timing constraints can be met. In addition to the complex timing constraints, the method supports sporadic tasks, shared resources, and varying execution times of tasks. We present the idea and implementation, which is based on a genetic algorithm, and illustrate the method by an example.
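The genetic-algorithm search over task attributes can be sketched minimally as follows. The task set, fitness function, and GA parameters are invented for illustration, and the sketch assigns only priorities; offsets, sporadic tasks, and shared resources from the paper's full model are omitted:

```python
import random

# Hypothetical task set (name, period, wcet), deadlines equal to periods.
TASKS = [("sensor", 10, 2), ("control", 20, 5), ("actuate", 40, 9)]

def response_times(priorities):
    # Standard fixed-priority response-time iteration (higher number =
    # higher priority); stops once a response exceeds the deadline.
    order = sorted(range(len(TASKS)), key=lambda i: -priorities[i])
    rts = [0.0] * len(TASKS)
    for rank, i in enumerate(order):
        hp = order[:rank]                         # higher-priority tasks
        r, prev = float(TASKS[i][2]), -1.0
        while r != prev and r <= TASKS[i][1]:
            prev = r
            r = TASKS[i][2] + sum(-(-prev // TASKS[j][1]) * TASKS[j][2]
                                  for j in hp)    # ceil(prev/Tj)*Cj
        rts[i] = r
    return rts

def fitness(priorities):
    # Total deadline overrun; zero means all timing constraints are met.
    return sum(max(0.0, rt - TASKS[i][1])
               for i, rt in enumerate(response_times(priorities)))

def evolve(generations=30, pop_size=12, seed=1):
    rng = random.Random(seed)
    pop = [rng.sample(range(len(TASKS)), len(TASKS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitism: keep best half
        children = []
        for parent in survivors:
            child = parent[:]
            a, b = rng.sample(range(len(TASKS)), 2)  # mutate: swap priorities
            child[a], child[b] = child[b], child[a]
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```

A zero-fitness individual is a priority assignment under which every task's worst-case response time meets its deadline, which is the kind of guarantee the paper's method establishes analytically.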

Journal ArticleDOI
TL;DR: This article discusses development process models in general, and proposes a model-based process - the "A" process, which consists of a sequence of models, each of which serves a specific purpose and hence contains only those pieces of information it requires for this purpose.
Abstract: Distributed embedded computer systems are the key enablers of X-by-wire systems and control system functions. While developers can validate the correct operation of the communication and operating systems and the silicon implementations - the basis of embedded computer systems - once and for all, they cannot validate the application-dependent software and data structures in these systems in the same manner. The developer must configure the communication system for the respective application, create middleware code to access the communication system, and, last but not least, implement the application software. Because this is necessary for every new application, we need a well-defined process and a complementary set of tools to minimize error and support a high-quality development life cycle. We propose a model-based process - the "A" process. It consists of a sequence of models, each of which serves a specific purpose and hence contains only those pieces of information it requires for this purpose. The models are linked to each other by process transitions that either add information to or extract information from their predecessors. The A process guides the developer from one model to the next and is supported by a set of tools. In this article, we discuss development process models in general, and our model-based process in particular.
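The idea of a chain of purpose-specific models linked by information-adding or information-extracting transitions can be sketched as follows. The model contents and transition names are invented for illustration, not taken from the actual A process:

```python
# Invented illustration of an A-process-style chain: each model carries
# only the information its purpose needs, and explicit transitions either
# extract information from, or add information to, the previous model.

def to_comm_config(control_model):
    # Transition 1 (extract): keep only the signal timing the bus needs.
    return {s["name"]: s["period_ms"] for s in control_model["signals"]}

def to_middleware_model(comm_config):
    # Transition 2 (add): generate accessor names on top of the timing data.
    return {name: {"period_ms": p, "accessor": "read_" + name}
            for name, p in comm_config.items()}

control_model = {"signals": [{"name": "speed", "period_ms": 10},
                             {"name": "torque", "period_ms": 20}]}
middleware_model = to_middleware_model(to_comm_config(control_model))
```

Because every transition is an explicit function, tools can execute and check each step, which is the error-minimization argument the abstract makes for a well-defined process.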

Yi Liu1
01 Jan 2002
TL;DR: In this paper, the authors describe methods for identifying appropriate software components for an application and for specifying the components' operations rigorously using the theory and methods of the design by contract approach for specification of the functionality.
Abstract: This paper describes methods for identifying appropriate software components for an application and for specifying the components’ operations rigorously. It uses the theory and methods of the design by contract approach for specification of the functionality. The actual implementations of a component’s operations are hidden from the clients and encapsulated within the component. A component communicates with another component only through one of the other component’s supported interfaces. Hence, a component can be easily replaced by another that implements the same operations. By using design by contract, we build reliable reusable components.
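A minimal design-by-contract sketch (the Stack component and its contracts are hypothetical, not from the paper) shows how pre- and postconditions specify an operation while its implementation stays encapsulated behind the interface:

```python
class Stack:
    """Hypothetical component: clients see only the supported interface;
    the backing list is an encapsulated implementation detail."""

    def __init__(self):
        self._items = []                      # hidden representation

    def push(self, item):
        old_size = len(self._items)
        self._items.append(item)
        # Postcondition (supplier guarantee): size grew, item is on top.
        assert len(self._items) == old_size + 1 and self._items[-1] is item

    def pop(self):
        # Precondition (client obligation): stack must be non-empty.
        assert self._items, "precondition violated: pop on empty Stack"
        old_size = len(self._items)
        top = self._items.pop()
        # Postcondition (supplier guarantee): size shrank by exactly one.
        assert len(self._items) == old_size - 1
        return top

    def size(self):
        return len(self._items)
```

Because clients rely only on the contract, any component honouring the same pre- and postconditions can replace this one without changes to client code, which is the replaceability argument the abstract makes.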

Journal ArticleDOI
01 Feb 2002-Infor
TL;DR: A framework is presented, based on empirical findings, to guide and structure the thinking of businesses approaching the planning and implementation of an EC capability, and encourages the kind of holistic, systematic and integrated thinking with respect to EC visioning and strategy development that the researchers found was both so necessary but also so often lacking in EC implementations.
Abstract: The research described in this paper reports on some of the outcomes of qualitative and exploratory studies into the experiences of small & medium enterprises (SMEs) with respect to electronic commerce (EC). A framework is subsequently presented, based on empirical findings, to guide and structure the thinking of businesses approaching the planning and implementation of an EC capability. The framework encourages the kind of holistic, systematic and integrated thinking with respect to EC visioning and strategy development that the researchers found was both so necessary and yet so often lacking in EC implementations amongst many of the SMEs participating in this study. A discussion of how the framework can be used to shape a "strategic conversation" about the implications and requirements of a move into EC, based on a preliminary action research study, is offered as a conclusion to the paper.

Proceedings ArticleDOI
29 May 2002
TL;DR: This paper presents a policy specification, implementation, and enforcement methodology based on formal models of interactive behavior and satisfiability of system properties, and shows that changing the operational parameters of policy implementation entities does not affect the behavioral guarantees specified by the properties.
Abstract: In this paper we define and provide a general construction for a class of policies we call dynamic policies. In most existing systems, policies are implemented and enforced by changing the operational parameters of shared system objects. These policies do not account for the behavior of the entire system, and enforcing these policies can have unexpected interactive or concurrent behavior. We present a policy specification, implementation, and enforcement methodology based on formal models of interactive behavior and satisfiability of system properties. We show that changing the operational parameters of our policy implementation entities does not affect the behavioral guarantees specified by the properties. We demonstrate the construction of dynamic access control policies based on safety property specifications and describe an implementation of these policies in the Seraphim active network architecture. We present examples of reactive security systems that demonstrate the power and dynamism of our policy implementations. We also describe other types of dynamic policies for information flow and availability based on safety, liveness, fairness, and other properties. We believe that dynamic policies are important building blocks of reactive security solutions for active networks.
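The core idea of verifying a safety property before committing any change to a policy's operational parameters can be sketched as follows (the names and the invariant are illustrative, not taken from the Seraphim architecture):

```python
# Illustrative only: a dynamic access-control policy whose parameters may
# change at run time, but whose safety property survives reconfiguration.

def safety(acl):
    # Safety property to preserve: the guest account never gains "write".
    return "write" not in acl.get("guest", set())

class DynamicPolicy:
    def __init__(self, acl):
        assert safety(acl), "initial configuration violates safety property"
        self._acl = {user: set(rights) for user, rights in acl.items()}

    def permits(self, user, right):
        return right in self._acl.get(user, set())

    def reconfigure(self, user, rights):
        # Apply the change tentatively, verify the property, then commit;
        # a rejected update leaves the enforced policy untouched.
        candidate = dict(self._acl)
        candidate[user] = set(rights)
        if not safety(candidate):
            raise ValueError("update rejected: violates safety property")
        self._acl = candidate

policy = DynamicPolicy({"admin": {"read", "write"}, "guest": {"read"}})
policy.reconfigure("admin", {"read"})                 # accepted
```

Because every reconfiguration is checked against the property before taking effect, changing the operational parameters cannot invalidate the behavioral guarantee, which mirrors the claim the abstract makes for its construction.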

Journal ArticleDOI
TL;DR: A vision of a scenario where the amalgamation of various aspects and views into a heterogeneous assembly of coordinated products is required is presented.