
Showing papers on "Implementation" published in 2005


Journal ArticleDOI
TL;DR: An integrated theoretical model is developed that posits that knowledge transfer is influenced by knowledge-related, motivational, and communication-related factors and suggests that all three groups of factors influence knowledge transfer.
Abstract: Enterprise resource planning (ERP) systems and other complex information systems represent critical organizational resources. For such systems, firms typically use consultants to aid in the implementation process. Client firms expect consultants to transfer their implementation knowledge to their employees so that they can contribute to successful implementations and learn to maintain the systems independent of the consultants. This study examines the antecedents of knowledge transfer in the context of such an interfirm complex information systems implementation environment. Drawing from the knowledge transfer, information systems, and communication literatures, an integrated theoretical model is developed that posits that knowledge transfer is influenced by knowledge-related, motivational, and communication-related factors. Data were collected from consultant-and-client matched-pair samples from 96 ERP implementation projects. Unlike most prior studies, a behavioral measure of knowledge transfer that incorporates the application of knowledge was used. The analysis suggests that all three groups of factors influence knowledge transfer, and provides support for 9 of the 13 hypotheses. The analysis also confirms two mediating relationships. These results (1) adapt prior research, primarily done in non-IS contexts, to the ERP implementation context, (2) enhance prior findings by confirming the significance of an antecedent that has previously shown mixed results, and (3) incorporate new IS-related constructs and measures in developing an integrated model that should be broadly applicable to the interfirm IS implementation context and other IS situations. Managerial and research implications are discussed.

1,217 citations


Journal ArticleDOI
TL;DR: In this paper, the authors classify collaboration initiatives using a conceptual water-tank analogy, and discuss their dynamic behavior and key characteristics, concluding that the effectiveness of supply chain collaboration relies upon two factors: the level to which it integrates internal and external operations, and the extent to which these efforts are aligned with the supply chain settings in terms of geographical dispersion, demand pattern, and product characteristics.

747 citations


Journal ArticleDOI
TL;DR: The results from a comparative case study of 4 firms that implemented an ERP system suggest that a cautious, evolutionary, bureaucratic implementation process backed by careful change management, network relationships, and cultural readiness has a positive impact on ERP implementation.

381 citations


Journal ArticleDOI
12 Jun 2005
TL;DR: StreamBit is developed as a sketching methodology for the important class of bit-streaming programs (e.g., coding and cryptography), which allows a programmer to write clean and portable reference code, and then obtain a high-quality implementation by simply sketching the outlines of the desired implementation.
Abstract: This paper introduces the concept of programming with sketches, an approach for the rapid development of high-performance applications. This approach allows a programmer to write clean and portable reference code, and then obtain a high-quality implementation by simply sketching the outlines of the desired implementation. Subsequently, a compiler automatically fills in the missing details while also ensuring that a completed sketch is faithful to the input reference code. In this paper, we develop StreamBit as a sketching methodology for the important class of bit-streaming programs (e.g., coding and cryptography). A sketch is a partial specification of the implementation, and as such, it affords several benefits to the programmer in terms of productivity and code robustness. First, a sketch is easier to write compared to a complete implementation. Second, sketching allows the programmer to focus on exploiting algorithmic properties rather than on orchestrating low-level details. Third, a sketch-aware compiler rejects "buggy" sketches, thus improving reliability while allowing the programmer to quickly evaluate sophisticated implementation ideas. We evaluated the productivity and performance benefits of our programming methodology in a user study, where a group of novice StreamBit programmers competed with a group of experienced C programmers on implementing a cipher. We learned that, given the same time budget, the ciphers developed in StreamBit ran 2.5x faster than ciphers coded in C. We also produced implementations of DES and Serpent that were competitive with hand-optimized implementations available in the public domain.
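
The sketching workflow described above can be illustrated outside StreamBit itself. The Python sketch below (an illustration with assumed names, not the paper's system) pairs clear, bit-at-a-time reference code for a permutation with a hand-written word-level implementation, and then plays the role of the sketch-aware compiler by checking that the optimized version stays faithful to the reference.

```python
# Illustrative sketch (not StreamBit itself): a bit-streaming kernel written
# twice -- once as clear reference code, once as an "implementation sketch" --
# with an automatic check that the sketch is faithful to the reference.
import random

WIDTH = 32
# Permutation table: output bit i comes from input bit PERM[i].
# Here the permutation is a left rotation by 3 (a typical cipher building block).
PERM = [(i - 3) % WIDTH for i in range(WIDTH)]

def reference(x: int) -> int:
    """Clear, portable reference: move one bit at a time."""
    y = 0
    for i in range(WIDTH):
        if (x >> PERM[i]) & 1:
            y |= 1 << i
    return y

def sketched_impl(x: int) -> int:
    """Hand-sketched word-level implementation of the same permutation."""
    return ((x << 3) | (x >> (WIDTH - 3))) & 0xFFFFFFFF

# The role played by the sketch compiler: reject a sketch that is not
# faithful to the reference code.
for _ in range(10_000):
    x = random.getrandbits(WIDTH)
    assert reference(x) == sketched_impl(x)
print("sketch is consistent with the reference on 10,000 random inputs")
```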

286 citations


01 Jan 2005
TL;DR: In this article, the authors provide a more extensive analysis of the empirical evidence available to assess the impact of the implementation of lean construction practices and provide recommendations for future implementation and research.
Abstract: Over the last 10 years an increasing number of companies have implemented lean construction practices in an attempt to improve performance in construction projects. Most companies, and also some researchers, have reported satisfactory results from their implementation. However, there is still a need to provide more extensive analysis of the empirical evidence available to assess the impact of the implementation of lean construction. The authors have researched the implementation of the Last Planner System and other Lean Construction techniques in over one hundred construction projects over the last five years. They have also developed strategies and support tools for implementation. This paper analyses some of the main impacts observed in the studied projects, and some of the lessons learned from implementations. The paper discusses difficulties and barriers for implementation, productivity improvements, variability reduction and effectiveness of implementation strategies. The paper also provides recommendations for future implementation and research.

188 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present the analysis from a study into the key lessons learned from e-procurement implementation across a range of UK public sector organisations, identifying five main themes addressed by the current literature: impact on cost efficiency; the impact on the form and nature of supplier transaction; e-procurement system implementation; broader IT infrastructure issues; and the behavioural and relational impact of e-procurement.
Abstract: This paper presents the analysis from a study into the key lessons learned from e-procurement implementation across a range of UK public sector organisations. The literature relating to e-procurement implementation and operation is reviewed, identifying five main themes addressed by the current literature: impact on cost efficiency; the impact on the form and nature of supplier transaction; e-procurement system implementation; broader IT infrastructure issues; and the behavioural and relational impact of e-procurement. The research carried out was intended to explore the perceptions and reflections of both 'early' and 'late' adopters of e-procurement. Seven key lessons are drawn from the study and presented here. We conclude by proposing areas for further research, including the need for research into failed e-procurement projects.

168 citations


Proceedings ArticleDOI
18 Jan 2005
TL;DR: This paper presents the concepts of Better Than Worst-Case design and highlights two exemplary designs, the DIVA checker and Razor logic, and shows how this approach to system implementation relaxes design constraints on core components, which reduces the effects of physical design challenges and creates opportunities to optimize performance and power characteristics.
Abstract: The progressive trend of fabrication technologies towards the nanometer regime has created a number of new physical design challenges for computer architects. Design complexity, uncertainty in environmental and fabrication conditions, and single-event upsets all conspire to compromise system correctness and reliability. Recently, researchers have begun to advocate a new design strategy called Better Than Worst-Case design that couples a complex core component with a simple reliable checker mechanism. By delegating the responsibility for correctness and reliability of the design to the checker, it becomes possible to build provably correct designs that effectively address the challenges of deep submicron design. In this paper, we present the concepts of Better Than Worst-Case design and highlight two exemplary designs: the DIVA checker and Razor logic. We show how this approach to system implementation relaxes design constraints on core components, which reduces the effects of physical design challenges and creates opportunities to optimize performance and power characteristics. We demonstrate the advantages of relaxed design constraints for the core components by applying typical-case optimization (TCO) techniques to an adder circuit. Finally, we discuss the challenges and opportunities posed to CAD tools in the context of Better Than Worst-Case design. In particular, we describe the additional support required for analyzing run-time characteristics of designs and the many opportunities which are created to incorporate typical-case optimizations into synthesis and verification.
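
As a rough software analogy of the core-plus-checker idea (not the DIVA or Razor hardware, and with a made-up error rate), the sketch below pairs an aggressively "optimized" adder that occasionally produces a wrong result with a simple, always-correct checker that validates and corrects every operation, so the overall system remains correct.

```python
# Minimal software analogy of Better Than Worst-Case design (hypothetical
# model, not the DIVA/Razor hardware): a fast core that may occasionally
# produce a wrong result is paired with a simple, always-correct checker
# that validates and, if necessary, corrects each result.
import random

def fast_core_add(a: int, b: int, error_rate: float = 0.01) -> int:
    """Aggressively 'optimized' core: correct in the typical case,
    but occasionally suffers a timing-error-like fault."""
    result = a + b
    if random.random() < error_rate:
        result ^= 1 << random.randrange(32)   # model a single-event upset / timing error
    return result

def checked_add(a: int, b: int) -> tuple[int, bool]:
    """Checker stage: recompute with a simple reliable adder and
    override the core whenever it disagrees."""
    speculative = fast_core_add(a, b)
    golden = a + b                      # simple checker: small, slow, provably correct
    return (golden, speculative != golden)

corrections = 0
for _ in range(100_000):
    a, b = random.getrandbits(32), random.getrandbits(32)
    value, corrected = checked_add(a, b)
    assert value == a + b               # overall system is always correct
    corrections += corrected
print(f"checker corrected {corrections} of 100,000 operations")
```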

135 citations


Journal ArticleDOI
TL;DR: A framework is synthesised for analysing and understanding how different approaches to managing ES implementations both address and influence the behaviours of key interest groups and hence the achievement of the benefits expected from the investment.
Abstract: Over the last 10 years many organisations have made significant investments in Enterprise-wide Systems (ES), particularly Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) software packages. Whilst in most cases technical implementation is relatively successful, many of the initiatives have failed to deliver the benefits expected. Research studies have identified a wide range of factors that can affect the success of ES implementations, and the general consensus is that organisational issues are more difficult to resolve than technical ones. This research set out to synthesise a framework, from prior research, for analysing and understanding these organisational issues and to apply and refine the framework by studying four ES initiatives in different organisational and industry contexts. The findings from the case studies suggest that the framework can help understand how different approaches to managing ES implementations both address and influence the behaviours of key interest groups and hence the achievement of the benefits expected from the investment.

123 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe three phases of research into the design and implementation of performance measurement systems involving 16 different businesses, concluding that senior management commitment is a key driver of success and identifying the main factors which influence and change this commitment over the life of a performance measurement implementation project.
Abstract: There is a growing literature concerned with the design and implementation of performance measurement systems but few studies of success and failure. This paper describes three phases of research into the design and implementation of performance measurement systems involving 16 different businesses. The conclusion from the research is that senior management commitment is a key driver of success, but the paper also describes the main factors which influence and change this commitment over the life of a performance measurement implementation project.

109 citations


01 Jan 2005
TL;DR: The ATLAS Computing Model establishes the environment and operational requirements that ATLAS data-handling systems must support, and, together with the operational experience gained to date in test beams and data challenges, provides the primary guidance for the development of the data management systems.
Abstract: The ATLAS Computing Model embraces the Grid paradigm and a high degree of decentralization and sharing of computing resources. The required level of computing resources means that off-site facilities will be vital to the operation of ATLAS in a way that was not the case for previous CERN-based experiments. The primary event processing occurs at CERN in a Tier-0 facility. The RAW data is archived at CERN and copied (along with the primary processed data) to the Tier-1 facilities around the world. These facilities archive the raw data, provide the reprocessing capacity, provide access to the various processed versions, and allow scheduled analysis of the processed data by physics analysis groups. Derived datasets produced by the physics groups are copied to the Tier-2 facilities for further analysis. The Tier-2 facilities also provide the simulation capacity for the experiment, with the simulated data housed at Tier-1s. In addition, Tier-2 centres will provide analysis facilities, and some will provide the capacity to produce calibrations based on processing raw data. A CERN Analysis Facility provides an additional analysis capacity, with an important role in the calibration and algorithmic development work. ATLAS has adopted an object-oriented approach to software, based primarily on the C++ programming language, but with some components implemented using FORTRAN and Java. A component-based model has been adopted, whereby applications are built up from collections of plug-compatible components based on a variety of configuration files. This capability is supported by a common framework that provides common data-processing support. This approach results in great flexibility in meeting the basic processing needs of the experiment, and also for responding to changing requirements throughout its lifetime. The heavy use of abstract interfaces allows for different implementations to be provided, supporting different persistency technologies, or optimized for the offline or high-level trigger environments. The Athena framework is an enhanced version of the Gaudi framework that was originally developed by the LHCb experiment, but is now a common ATLAS-LHCb project. Major design principles are the clear separation of data and algorithms, and of transient (in-memory) and persistent (in-file) data. All levels of processing of ATLAS data, from high-level trigger to event simulation, reconstruction and analysis, take place within the Athena framework; in this way it is easier for code developers and users to test and run algorithmic code, with the assurance that all geometry and conditions data will be the same for all types of applications (simulation, reconstruction, analysis, visualization). One of the principal challenges for ATLAS computing is to develop and operate a data storage and management infrastructure able to meet the demands of a yearly data volume of O(10 PB) utilized by data processing and analysis activities spread around the world. The ATLAS Computing Model establishes the environment and operational requirements that ATLAS data-handling systems must support, and, together with the operational experience gained to date in test beams and data challenges, provides the primary guidance for the development of the data management systems. 
The ATLAS Databases and Data Management Project (DB Project) leads and coordinates ATLAS activities in these areas, with a scope encompassing technical databases (detector production, installation and survey data), detector geometry, online/TDAQ databases, conditions databases (online and offline), event data, offline processing configuration and book-keeping, distributed data management, and distributed database and data management services. The project is responsible for ensuring the coherent development, integration, and operational capability of the distributed database and data management software and infrastructure for ATLAS across these areas. The ATLAS Computing Model foresees the distribution of raw and processed data to Tier-1 and Tier-2 centres, so as to be able to exploit fully the computing resources that are made available to the Collaboration. Additional computing resources will be available for data processing and analysis at Tier-3 centres and other computing facilities to which ATLAS may have access. A complex set of tools and distributed services, enabling the automatic distribution and processing of the large amounts of data, has been developed and deployed by ATLAS in cooperation with the LHC Computing Grid (LCG) Project and with the middleware providers of the three large Grid infrastructures we use: EGEE, OSG and NorduGrid. The tools are designed in a flexible way, in order to have the possibility to extend them to use other types of Grid middleware in the future. These tools, and the service infrastructure on which they depend, were initially developed in the context of centrally managed, distributed Monte Carlo production exercises. They will be re-used wherever possible to create systems and tools for individual users to access data and compute resources, providing a distributed analysis environment for general usage by the ATLAS Collaboration. The first version of the production system was deployed in summer 2004 and has been used since the second half of 2004. It was used for Data Challenge 2, for the production of simulated data for the 5th ATLAS Physics Workshop (Rome, June 2005) and for the reconstruction and analysis of the 2004 Combined Test-Beam data. The main computing operations that ATLAS will have to run comprise the preparation, distribution and validation of ATLAS software, and the computing and data management operations run centrally on Tier-0, Tier-1s and Tier-2s. The ATLAS Virtual Organization will allow production and analysis users to run jobs and access data at remote sites using the ATLAS-developed Grid tools. In the past few years the Computing Model has been tested and developed by running Data Challenges of increasing scope and magnitude, as was proposed by the LHC Computing Review in 2001. We have run two major Data Challenges since 2002 and performed other massive productions in order to provide simulated data to the physicists and to reconstruct and analyse real data coming from test-beam activities; this experience is now useful in setting up the operations model for the start of LHC data-taking in 2007. The Computing Model, together with the knowledge of the resources needed to store and process each ATLAS event, gives rise to estimates of required resources that can be used to design and set up the various facilities. 
It is not assumed that all Tier-1s or Tier-2s will be of the same size; however, in order to ensure a smooth operation of the Computing Model, all Tier-1s should have broadly similar proportions of disk, tape and CPU, and the same should apply for the Tier-2s. The organization of the ATLAS Software & Computing Project reflects all areas of activity within the project itself. Strong high-level links have been established with other parts of the ATLAS organization, such as the T-DAQ Project and Physics Coordination, through cross-representation in the respective steering boards. The Computing Management Board, and in particular the Planning Officer, acts to make sure that software and computing developments take place coherently across sub-systems and that the project as a whole meets its milestones. The International Computing Board assures the information flow between the ATLAS Software & Computing Project and the national resources and their Funding Agencies.
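
A minimal sketch of the plug-compatible component idea mentioned in the abstract above, assuming invented interface and class names (this is not Athena code): algorithms are written against an abstract interface and the concrete implementation is selected from a configuration, so different persistency back-ends can be swapped without touching the algorithm.

```python
# Rough sketch of the component-based model described above (illustrative
# only): abstract interfaces plus a configuration-driven registry of
# plug-compatible implementations.
from abc import ABC, abstractmethod

class IEventStore(ABC):
    """Abstract interface: algorithms never see a concrete persistency technology."""
    @abstractmethod
    def read(self, key: str): ...
    @abstractmethod
    def write(self, key: str, value): ...

class InMemoryStore(IEventStore):
    def __init__(self): self._data = {}
    def read(self, key): return self._data[key]
    def write(self, key, value): self._data[key] = value

class FileStore(IEventStore):
    def __init__(self, path="events.txt"): self.path = path
    def read(self, key): raise NotImplementedError("offline back-end stub")
    def write(self, key, value):
        with open(self.path, "a") as f:
            f.write(f"{key}={value}\n")

REGISTRY = {"memory": InMemoryStore, "file": FileStore}

def build_from_config(config: dict) -> IEventStore:
    """Plug-compatible assembly: the configuration selects the implementation."""
    return REGISTRY[config["event_store"]]()

store = build_from_config({"event_store": "memory"})
store.write("run1/event42", {"pt": 13.7})
print(store.read("run1/event42"))
```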

104 citations


01 Jan 2005
TL;DR: The wiki was rather aptly described by Ward Cunningham as 'the simplest online database that could possibly work'; it is nonetheless important that the application chosen has the right span of features for the user requirements.
Abstract: The wiki was rather aptly described by Ward Cunningham as 'the simplest online database that could possibly work'. Wiki implementations are available for almost every system, even the most unexpected, and the concept is mature enough to take seriously as one of a number of possible enabling technologies for computer-mediated communication and information storage and retrieval. When considering the addition of a wiki implementation into a given environment, it is nonetheless important to ensure that the application chosen has the right span of features for the user requirements; furthermore, that the expected users are comfortable with the software, its capabilities, and the intended community.

Journal ArticleDOI
TL;DR: This study focuses on the integration of human resource characteristics in business processes, which is a key issue for the ERP adoption and optimisation phases, and suggests better adapting business processes to human actors by explicitly taking into account concepts like the role, competence and knowledge of human resources.

Proceedings ArticleDOI
24 Jul 2005
TL;DR: This work analyzes five independent and quite different implementations of the Web services resource framework from the perspectives of architecture, functionality, standards compliance, performance, and interoperability.
Abstract: The Web services resource framework defines conventions for managing state in distributed systems based on Web services, and WS-Notification defines topic-based publish/subscribe mechanisms. We analyze five independent and quite different implementations of these specifications from the perspectives of architecture, functionality, standards compliance, performance, and interoperability. We identify both commonalities among the different systems (e.g., similar dispatching and SOAP processing mechanisms) and differences (e.g., security, programming models, and performance). Our results provide insights into effective implementation approaches. Our results may also provide application developers, system architects, and deployers with guidance in identifying the right implementation for their requirements and in determining how best to use that implementation and what to expect with regard to performance and interoperability.
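
To make the topic-based publish/subscribe model concrete, here is a minimal in-process broker sketched in Python (illustrative only; the systems analysed in the paper implement this pattern over SOAP and the WS-Notification specifications).

```python
# Toy topic-based publish/subscribe broker, sketched to illustrate the
# WS-Notification concept discussed above (illustrative only, not a
# Web-services implementation).
from collections import defaultdict
from typing import Callable

class TopicBroker:
    def __init__(self):
        self._subscribers: dict[str, list[Callable[[str, dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, consumer: Callable[[str, dict], None]) -> None:
        """Register a notification consumer for a topic."""
        self._subscribers[topic].append(consumer)

    def notify(self, topic: str, message: dict) -> None:
        """Deliver a notification to every consumer subscribed to the topic."""
        for consumer in self._subscribers[topic]:
            consumer(topic, message)

broker = TopicBroker()
broker.subscribe("resource/stateChange",
                 lambda topic, msg: print(f"[{topic}] new state: {msg}"))
broker.notify("resource/stateChange", {"resource": "job-17", "state": "DONE"})
```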

Journal ArticleDOI
TL;DR: An approach to predict the performance of component-based server-side applications during the design phase of software development is presented and the results allow the architect to make early decisions between alternative application architectures in terms of their performance and scalability.
Abstract: Server-side component technologies such as Enterprise JavaBeans (EJBs), .NET, and CORBA are commonly used in enterprise applications that have requirements for high performance and scalability. When designing such applications, architects must select a suitable component technology platform and application architecture to provide the required performance. This is challenging as no methods or tools exist to predict application performance without building a significant prototype version for subsequent benchmarking. In this paper, we present an approach to predict the performance of component-based server-side applications during the design phase of software development. The approach constructs a quantitative performance model for a proposed application. The model requires inputs from an application-independent performance profile of the underlying component technology platform, and a design description of the application. The results from the model allow the architect to make early decisions between alternative application architectures in terms of their performance and scalability. We demonstrate the method using an EJB application and validate predictions from the model by implementing two different application architectures and measuring their performance on two different implementations of the EJB platform.
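
A back-of-the-envelope version of the modelling idea, with hypothetical numbers and operation names (the paper's model is considerably richer): a platform profile of per-operation costs is combined with a design description of how many operations each transaction performs, yielding a predicted service demand and a throughput ceiling for each candidate architecture.

```python
# Hypothetical platform profile: measured mean cost per container operation (ms).
platform_profile_ms = {
    "remote_call": 0.80,
    "entity_load": 1.20,
    "entity_store": 1.50,
    "tx_commit": 0.60,
}

# Design description: operations issued by one 'place order' transaction
# under two candidate architectures (coarse-grained facade vs. fine-grained calls).
architectures = {
    "session_facade": {"remote_call": 1, "entity_load": 3, "entity_store": 1, "tx_commit": 1},
    "fine_grained":   {"remote_call": 5, "entity_load": 3, "entity_store": 1, "tx_commit": 1},
}

SERVER_THREADS = 8  # assumed container thread pool size

for name, ops in architectures.items():
    demand_ms = sum(platform_profile_ms[op] * count for op, count in ops.items())
    max_throughput = SERVER_THREADS / (demand_ms / 1000.0)  # transactions per second
    print(f"{name:15s} demand = {demand_ms:5.2f} ms/txn, predicted ceiling ~ {max_throughput:6.0f} txn/s")
```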

Journal ArticleDOI
TL;DR: In this article, the authors use a case study approach to demonstrate the lessons learned from a successful implementation of an ERP system, and point out some strategic, tactical and operational considerations inherent in ERP implementation that are prerequisites to effective organizational transformation required by a system implementation such as SAP R/3.

Proceedings ArticleDOI
07 Nov 2005
TL;DR: A pattern specification language, Spine, is presented that allows patterns to be defined in terms of constraints on their implementation in Java and highlights some repeated mini-patterns discovered in the formalisation of these design patterns.
Abstract: Design patterns are widely used by designers and developers for building complex systems in object-oriented programming languages such as Java. However, systems evolve over time, increasing the chance that the pattern in its original form will be broken. To verify that a design pattern has not been broken requires specifying the original intent of the design pattern. Whilst informal descriptions of design patterns exist, no formal specifications are available due to differences in implementations between programming languages. We present a pattern specification language, Spine, that allows patterns to be defined in terms of constraints on their implementation in Java. We also present some examples of patterns defined in Spine and show how they are processed using a proof engine called Hedgehog. The conclusion discusses the type of patterns that are amenable to defining in Spine, and highlights some repeated mini-patterns discovered in the formalisation of these design patterns.
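
The flavour of such constraint-based pattern specifications can be sketched as follows (illustrative only; Spine expresses constraints over real Java code and discharges them with the Hedgehog proof engine, whereas this toy checks a hand-built model of a class): a Singleton is specified as a small set of structural constraints and a candidate class is checked against them.

```python
# Illustrative only: a design pattern specified as structural constraints,
# checked against a simplified model of a Java class so that later edits
# which break the pattern would be caught.
from dataclasses import dataclass, field

@dataclass
class JavaClassModel:
    name: str
    constructors: list[str] = field(default_factory=list)        # visibilities
    static_fields: dict[str, str] = field(default_factory=dict)  # name -> type
    static_methods: dict[str, str] = field(default_factory=dict) # name -> return type

def singleton_constraints(cls: JavaClassModel) -> list[str]:
    """Return the list of violated constraints (empty means the pattern holds)."""
    violations = []
    if any(v != "private" for v in cls.constructors):
        violations.append("all constructors must be private")
    if cls.name not in cls.static_fields.values():
        violations.append("a static field of the class's own type is required")
    if cls.name not in cls.static_methods.values():
        violations.append("a static accessor returning the class's own type is required")
    return violations

registry = JavaClassModel(
    name="Registry",
    constructors=["private"],
    static_fields={"INSTANCE": "Registry"},
    static_methods={"getInstance": "Registry"},
)
print(singleton_constraints(registry) or "Singleton constraints satisfied")
```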

Proceedings ArticleDOI
11 Jul 2005
TL;DR: The tool supports verification of properties created from design specifications and implementation models to confirm expected results from the viewpoints of both the designer and implementer, easing the implementation, testing and deployment of Web service compositions.
Abstract: In this paper we describe tool support for a model-based approach to verifying compositions of Web service implementations. The tool supports verification of properties created from design specifications and implementation models to confirm expected results from the viewpoints of both the designer and implementer. Scenarios are modeled in UML, in the form of message sequence charts (MSCs), and then compiled into the finite state process (FSP) algebra to concisely model the required behavior. BPEL4WS implementations are mechanically translated to FSP to allow an equivalence trace verification process to be performed. By providing early design verification and validation, the implementation, testing and deployment of Web service compositions can be eased through the understanding of the behavior exhibited by the composition. The tool is implemented as a plug-in for the Eclipse development environment providing cooperating tools for specification, formal modeling and trace animation of the composition process.
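
A small, self-contained illustration of the trace-equivalence check at the heart of this approach (not the paper's MSC/FSP/BPEL4WS tool chain): the design and the translated implementation are treated as labelled transition systems and their observable traces are compared up to a bounded length.

```python
# Minimal trace-equivalence check between a design model and an
# implementation model, both given as labelled transition systems
# (illustrative stand-in for the FSP-based verification described above).
def traces(lts: dict, start: str, depth: int) -> set:
    """All action sequences of length <= depth reachable from 'start'."""
    result, frontier = {()}, [((), start)]
    for _ in range(depth):
        nxt = []
        for trace, state in frontier:
            for action, target in lts.get(state, []):
                t = trace + (action,)
                result.add(t)
                nxt.append((t, target))
        frontier = nxt
    return result

# Design specification: receive an order, then invoke shipping, then reply.
design = {"s0": [("receiveOrder", "s1")],
          "s1": [("invokeShipping", "s2")],
          "s2": [("reply", "s3")]}

# Translated implementation: replies before shipping has been invoked.
implementation = {"s0": [("receiveOrder", "s1")],
                  "s1": [("reply", "s2")],
                  "s2": [("invokeShipping", "s3")]}

equivalent = traces(design, "s0", 5) == traces(implementation, "s0", 5)
print("trace equivalent" if equivalent else "behaviour differs from the design")
```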

Book ChapterDOI
29 May 2005
TL;DR: This paper proposes the use of ontologies to simplify the tasks of policy specification and administration, discusses how to represent policy inheritance and composition based on credential ontologies, formalizes these representations and the corresponding constraints in Frame-Logic, and presents POLICYTAB, a prototype implementation of the proposed scheme as a Protege plug-in to support policy specification.
Abstract: The World Wide Web makes it easy to share information and resources, but offers few ways to limit the manner in which these resources are shared. The specification and automated enforcement of security-related policies offer promise as a way of providing controlled sharing, but few tools are available to assist in policy specification and management, especially in an open system such as the Web, where resource providers and users are often strangers to one another and exact and correct specification of policies will be crucial. In this paper, we propose the use of ontologies to simplify the tasks of policy specification and administration, discuss how to represent policy inheritance and composition based on credential ontologies, formalize these representations and the corresponding constraints in Frame-Logic, and present POLICYTAB, a prototype implementation of our proposed scheme as a Protege plug-in to support policy specification.
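
A hedged sketch of the policy-inheritance idea, with invented credential classes and resources (POLICYTAB itself works on ontologies in Protege and formalizes constraints in Frame-Logic): credentials form a class hierarchy, and a requirement stated for a general credential class is satisfied by any of its subclasses.

```python
# Illustration of ontology-based policy inheritance (not POLICYTAB itself):
# a credential class hierarchy and a policy lookup that walks up the hierarchy.
credential_ontology = {            # child -> parent
    "HospitalID": "EmployeeID",
    "EmployeeID": "Credential",
    "ResearcherID": "Credential",
    "Credential": None,
}

# Policies name the most general credential class they accept.
resource_policies = {
    "/records/public":  {"Credential"},
    "/records/patient": {"EmployeeID"},
}

def satisfies(presented: str, required: str) -> bool:
    """A credential satisfies a requirement if it is the required class
    or a subclass of it in the ontology (policy inheritance)."""
    cls = presented
    while cls is not None:
        if cls == required:
            return True
        cls = credential_ontology.get(cls)
    return False

def access_allowed(resource: str, presented: str) -> bool:
    return any(satisfies(presented, req) for req in resource_policies[resource])

print(access_allowed("/records/patient", "HospitalID"))    # True: HospitalID is an EmployeeID
print(access_allowed("/records/patient", "ResearcherID"))  # False
```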

01 Jan 2005
TL;DR: Some viewpoints and potential solutions to the problem of lossless, incremental data flow through the different applications used by the project participants are discussed.
Abstract: The development of the Industry Foundation Classes (IFC) started from the vision of an integrated building product model which would cover all necessary information for a building's whole lifecycle: requirements management, different design activities, and construction and maintenance processes. Although the IFC model specification covers a substantial part of the required information, its implementation in practical applications has revealed several serious problems. One of the main problems is that the internal structures of the different software products do not support the information needs of the whole process. Thus, the idea of lossless, incremental data flow through the different applications used by the project participants has not come true. It is obvious that file-based data exchange is not a feasible solution, and some other solution for an integrated project data model is necessary for the AEC industry. This paper discusses some viewpoints and potential solutions to the above problem.

Journal Article
TL;DR: Implementation of enterprise-wide systems (ES) continues apace as organizations adopt CRM and SCM packaged software, or replace their aging ERP systems with third-generation ERP packages.
Abstract: The 1990s saw the widespread adoption of enterprise resource planning (ERP) systems in many countries. Implementation of enterprise-wide systems (ES) continues apace as organizations adopt CRM and SCM packaged software, or replace their aging ERP systems with third-generation ERP packages. These implementations are often motivated by promises of integrated information and processes across the enterprise and “best practices.” However, many implementing organizations have not experienced the expected benefits, and there have been expensive ES implementation failures. One prescription for minimizing the risks of ES implementation is to implement

MonographDOI
01 Jan 2005
TL;DR: This book combines e-government implementation experiences from both developed and developing countries, and is useful to researchers and practitioners in the area as well as instructors teaching courses related to digital government and/or electronic commerce.
Abstract: Digital government is a new frontier of the development of electronic commerce. Electronic Government Strategies and Implementation is a timely collection of high-quality papers that addresses the issues involved in strategically implementing digital government, covering the various aspects of digital government strategic issues and implementations from the perspectives of both developed and developing countries. The book combines e-government implementation experiences from both developed and developing countries, and is useful to researchers and practitioners in the area as well as instructors teaching courses related to digital government and/or electronic commerce.

Journal ArticleDOI
TL;DR: The National Implementing Evidence-Based Practice Project is an ongoing effort to promote the implementation of effective practices for adults who have severe mental illnesses and is field-testing the approach in eight states.

Journal ArticleDOI
12 Jun 2005
TL;DR: A runtime technique for checking that a concurrently-accessed data structure implementation, such as a file system or the storage management module of a database, conforms to an executable specification that contains an atomic method per data structure operation.
Abstract: We present a runtime technique for checking that a concurrently-accessed data structure implementation, such as a file system or the storage management module of a database, conforms to an executable specification that contains an atomic method per data structure operation. The specification can be provided separately or a non-concurrent, "atomized" interpretation of the implementation can serve as the specification. The technique consists of two phases. In the first phase, the implementation is instrumented in order to record information into a log during execution. In the second, a separate verification thread uses the logged information to drive an instance of the specification and to check whether the logged execution conforms to it. We paid special attention to the general applicability and scalability of the techniques and to minimizing their concurrency and performance impact. The result is a lightweight verification method that provides a significant improvement over testing for concurrent programs. We formalize conformance to a specification using the notion of refinement: Each trace of the implementation must be equivalent to some trace of the specification. Among the novel features of our work are two variations on the definition of refinement appropriate for runtime checking: I/O and "view" refinement. These definitions were motivated by our experience with two industrial-scale concurrent data structure implementations: the Boxwood project, a B-link tree data structure built on a novel storage infrastructure [10], and the Scan file system [9]. I/O and view refinement checking were implemented as a verification tool named VYRD (VerifYing concurrent programs by Runtime Refinement-violation Detection). VYRD was applied to the verification of Boxwood, Java class libraries, and, previously, to the Scan file system. It was able to detect previously unnoticed subtle concurrency bugs in Boxwood and the Scan file system, and the known bugs in the Java class libraries and manually constructed examples. Experimental results indicate that our techniques have modest computational cost.
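
The two-phase log-and-replay structure can be sketched in a few lines (a simplified, sequential illustration, not VYRD, which handles genuinely concurrent logs and also supports view refinement): operations on the implementation are logged with their results, and a replay against the atomic specification flags any result the specification could not have produced.

```python
# Simplified illustration of runtime refinement checking: log the I/O of an
# implementation under test, then replay the log against an atomic
# specification and report any mismatching return value.
class SpecSet:
    """Atomic specification: an ordinary set."""
    def __init__(self): self._s = set()
    def add(self, x): before = x in self._s; self._s.add(x); return not before
    def contains(self, x): return x in self._s

class ImplSet:
    """'Optimized' implementation under test -- deliberately buggy."""
    def __init__(self): self._items = []
    def add(self, x):
        self._items.append(x)          # bug: never checks for duplicates
        return True
    def contains(self, x): return x in self._items

log = []
impl = ImplSet()
for op, arg in [("add", 7), ("add", 7), ("contains", 7)]:
    result = getattr(impl, op)(arg)
    log.append((op, arg, result))      # instrumentation phase: record I/O

spec = SpecSet()
for op, arg, observed in log:          # verification phase: replay on the spec
    expected = getattr(spec, op)(arg)
    if expected != observed:
        print(f"refinement violation: {op}({arg}) returned {observed}, spec says {expected}")
```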

Journal ArticleDOI
TL;DR: In this paper, the authors distinguish and develop a conceptual model of e-business and its predecessor concepts of e-commerce, supply chain management (SCM), and enterprise resource planning (ERP) and demonstrate how these systems relate and serve significantly different strategic objectives.
Abstract: :The rapid deployment of e-business systems has surprised even the most futuristic management thinkers. Unfortunately, little empirical research has documented the variations of e-business solutions as major software vendors release complex IT products into the marketplace. The literature holds simultaneous evidence of major successes and major failures as implementations evolve. The current economic conditions have slowed implementation efforts but most companies report ongoing efforts to further strengthen their investment in e-business as they anticipate a reinvigorated marketplace. In this research, we first distinguish and develop a conceptual model of e-business and its predecessor concepts of e-commerce, supply chain management (SCM), and enterprise resource planning (ERP) and demonstrate how these systems relate and serve significantly different strategic objectives. This research combines interviews, case studies and an industry survey to determine the significant variables leading to suc...

Journal ArticleDOI
01 Jan 2005
TL;DR: The utility of SoftArch/MTE, a software tool that allows software architects to sketch an outline of their proposed system architecture at a high level of abstraction, and the accuracy of the generated performance test-beds, for validating architectural choices during early system development are demonstrated.
Abstract: Most distributed system specifications have performance benchmark requirements, for example the number of particular kinds of transactions per second required to be supported by the system. However, determining the likely eventual performance of complex distributed system architectures during their development is very challenging. We describe SoftArch/MTE, a software tool that allows software architects to sketch an outline of their proposed system architecture at a high level of abstraction. These descriptions include client requests, servers, server objects and object services, database servers and tables, and particular choices of middleware and database technologies. A fully-working implementation of this system is then automatically generated from this high-level architectural description. This implementation is deployed on multiple client and server machines and performance tests are then automatically run for this generated code. Performance test results are recorded, sent back to the SoftArch/MTE environment and are then displayed to the architect using graphs or by annotating the original high-level architectural diagrams. Architects may change performance parameters and architecture characteristics, comparing multiple test run results to determine the most suitable abstractions to refine to detailed designs for actual system implementation. Further tests may be run on refined architecture descriptions at any stage during system development. We demonstrate the utility of our approach and prototype tool, and the accuracy of our generated performance test-beds, for validating architectural choices during early system development.
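
The generate-deploy-measure loop can be approximated in miniature (illustrative only, with invented operation names and timings; SoftArch/MTE generates real middleware code from the architect's diagrams): a high-level description of clients and server operations drives an executable test-bed whose measured throughput and latency are reported back.

```python
# Loose sketch of the generate-and-measure idea: a high-level architecture
# description is turned into a runnable, multi-threaded test-bed and its
# measured results are summarised for the architect.
import time, threading, statistics

architecture = {                      # hypothetical high-level description
    "clients": 4,
    "requests_per_client": 50,
    "operations": {"findCustomer": 0.002, "placeOrder": 0.005},  # service time (s)
}

latencies, lock = [], threading.Lock()

def server_call(op: str) -> None:
    time.sleep(architecture["operations"][op])   # stand-in for the generated server code

def client() -> None:
    for i in range(architecture["requests_per_client"]):
        op = "placeOrder" if i % 2 else "findCustomer"
        start = time.perf_counter()
        server_call(op)
        with lock:
            latencies.append(time.perf_counter() - start)

threads = [threading.Thread(target=client) for _ in range(architecture["clients"])]
start = time.perf_counter()
for t in threads: t.start()
for t in threads: t.join()
elapsed = time.perf_counter() - start

print(f"throughput: {len(latencies) / elapsed:.1f} req/s, "
      f"median latency: {1000 * statistics.median(latencies):.2f} ms")
```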

Journal Article
TL;DR: MEDLINE and reference lists of articles were searched for English-language articles published between 1990 and 2005, and key results were extracted and tabulated.
Abstract: MEDLINE and reference lists of articles were searched for English-language articles published between 1990 and 2005. Studies assessing the effect of management support, financial resource availability, implementation climate and/or implementation policies and practices on the effectiveness of EMR system implementation were selected for inclusion in the review. Relevant data on study objective, study design, study population and setting, measures of implementation effectiveness and organizational factors, and key results were extracted and tabulated.

Journal ArticleDOI
TL;DR: The approach is evolutionary, in the sense that the specification may evolve while the system is in use, in response to changes in requirements, and any changes to the specification are automatically reflected in the structure of the implementation and in the representation of any data currently stored.

Proceedings ArticleDOI
18 Jan 2005
TL;DR: This paper presents an approach to implementing a static scheduler, which controls all the task executions and communication transactions of a system according to a pre-determined schedule, and addresses the issue of centralized implementation versus distributed implementation.
Abstract: In the design of a heterogeneous multiprocessor system on chip, we face a new design problem: scheduler implementation. In this paper, we present an approach to implementing a static scheduler, which controls all the task executions and communication transactions of a system according to a pre-determined schedule. For the scheduler implementation, we consider both intra-processor and inter-processor synchronization. We also consider scheduler overhead, which is often neglected. In particular, we address the issue of centralized implementation versus distributed implementation. We investigate the pros and cons of the two different scheduler implementations. Through experiments with synthetic examples and a real world multimedia application, we show the effectiveness of our approach.
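
A toy, centralized version of the idea (hypothetical schedule and task names; the paper targets a heterogeneous multiprocessor SoC and also evaluates a distributed variant): a pre-determined schedule table assigns each time slot's task or communication transaction to every processing element, and the scheduler walks the table slot by slot, which encodes both intra- and inter-processor synchronization.

```python
# Tiny centralized static-scheduler sketch: a pre-determined schedule table
# mapping time slots to per-processing-element activities, walked in order.
schedule = [                          # slot -> {processing element: activity}
    {"PE0": "taskA",      "PE1": "idle"},
    {"PE0": "send(A->B)", "PE1": "recv(A->B)"},   # inter-processor communication
    {"PE0": "taskC",      "PE1": "taskB"},
    {"PE0": "idle",       "PE1": "taskD"},
]

def run_centralized(table: list[dict[str, str]]) -> None:
    """Walk the static schedule slot by slot; in this sequential simulation the
    end of each iteration stands in for the barrier that keeps all processing
    elements aligned before the next slot is released."""
    for slot, assignments in enumerate(table):
        for pe, activity in assignments.items():
            if activity != "idle":
                print(f"t={slot}: {pe} executes {activity}")
        print(f"t={slot}: -- all PEs synchronized, release next slot --")

run_centralized(schedule)
```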

Book ChapterDOI
01 Jan 2005
TL;DR: The initial study at The Open University has shown that even before full LD implementations are available the approach advocated by LD is allowing a fresh look at the structures and designs in use across the University and giving a practical way to implement reviews in a way that can support staff and potentially improve the student experience.
Abstract: LD is an exciting concept that enables us to describe educational designs and materials in a new way. The consequences of a full LD implementation could mean entirely new ways of working, with separation of design, content and presentation and with benefits for sharing and reuse. What the initial study at The Open University has shown is that even before such implementations are available, the approach advocated by LD is allowing a fresh look at the structures and designs in use across the University and giving a practical way to implement reviews in a way that can support staff and potentially improve the student experience. LD can produce good descriptions of activities and in doing so reveal aspects that are unclear. It may be possible to break down courses informally into tasks and roles without using the full IMS specification; however, the formal approach taken by LD means that technical validation of materials can automate some of the checking and management of the designs. Forward plans to adopt LD can build on the significant community activity now taking place, both within the Valkenburg Group, supported by the UNFOLD project, and outside any formal support system. We expect that progress will be made on integrated players, the design of tools that can support specialised design aspects, sharing of designs, and research into pedagogic validation.

01 May 2005
TL;DR: The required and suggested algorithms in the original Internet Key Exchange version 1 (IKEv1) specification do not reflect the current reality of the IPsec market requirements and are updated.
Abstract: The required and suggested algorithms in the original Internet Key Exchange version 1 (IKEv1) specification do not reflect the current reality of the IPsec market requirements. The original specification allows weak security and suggests algorithms that are thinly implemented. This document updates RFC 2409, the original specification, and is intended for all IKEv1 implementations deployed today. [STANDARDS-TRACK]