
Showing papers on "Implementation" published in 2017


Journal ArticleDOI
TL;DR: The ACTS model is a step toward bringing implementation and sustainment into the design and evaluation of TESs, public health into clinical research, research into clinics, and treatment into the lives of patients.
Abstract: Mental health problems are common and pose a tremendous societal burden in terms of cost, morbidity, quality of life, and mortality. The great majority of people experience barriers that prevent access to treatment, aggravated by a lack of mental health specialists. Digital mental health is potentially useful in meeting the treatment needs of large numbers of people. A growing number of efficacy trials have shown strong outcomes for digital mental health treatments. Yet despite their positive findings, there are very few examples of successful implementations and many failures. Although the research-to-practice gap is not unique to digital mental health, the inclusion of technology poses unique challenges. We outline some of the reasons for this gap and propose a collection of methods that can result in sustainable digital mental health interventions. These methods draw from human-computer interaction and implementation science and are integrated into an Accelerated Creation-to-Sustainment (ACTS) model. The ACTS model uses an iterative process that includes 2 basic functions (design and evaluate) across 3 general phases (Create, Trial, and Sustain). The ultimate goal in using the ACTS model is to produce a functioning technology-enabled service (TES) that is sustainable in a real-world treatment setting. We emphasize the importance of the service component because evidence from both research and practice has suggested that human touch is a critical ingredient in the most efficacious and used digital mental health treatments. The Create phase results in at least a minimally viable TES and an implementation blueprint. The Trial phase requires evaluation of both effectiveness and implementation while allowing optimization and continuous quality improvement of the TES and implementation plan. Finally, the Sustainment phase involves the withdrawal of research or donor support, while leaving a functioning, continuously improving TES in place. 
The ACTS model is a step toward bringing implementation and sustainment into the design and evaluation of TESs, public health into clinical research, research into clinics, and treatment into the lives of our patients.

223 citations


Journal ArticleDOI
TL;DR: A better understanding of ERP implementation is provided, which can be applied towards overcoming operational difficulties during the implementation process, and researchers and practitioners are given an insight into published research work and its findings.
Abstract: Purpose: Enterprise resource planning (ERP) implementation brings with it a set of challenges. In order to gain a better understanding of these challenges and how they can be mitigated during the implementation process, the purpose of this paper is to use the Esteves and Bohorquez (2007) classification based on the ERP lifecycle framework, together with content analysis, to review the literature on ERP implementation in a structured format with a focus on larger enterprises, and to provide a platform for practitioners to plan implementation with minimum possibility of failure. Design/methodology/approach: The Esteves and Bohorquez (2007) classification based on the ERP lifecycle framework is used to develop and present a comprehensive structured review of the literature on ERP system implementation in large enterprises (LEs), with a particular focus on pre-implementation, implementation, and post-implementation. Findings: Drawing from the literature, organisations can plan implementation based on the findings and strategies presented in the study. This can lead to a better understanding of implementation with minimal probability of failure. The authors find that top management support, good project management teams, and good communications are the top three most important critical success factors during implementation. The authors also identify critical gaps in current research: existing research focusses predominantly on the implementation phase, research on pre- and post-implementation is lacking, and no industry standard implementation methodology has been developed. Research implications: This review primarily focusses on the literature in the area of ERP implementation. ERP implementation planning involves access to effective implementation strategies. Despite the literature identifying a myriad of different ERP implementation models, no standard industry ERP implementation model has been developed. The findings for ERP implementation are repetitive, inconsistent, and lack empirical research, rendering these two of the most critical areas for future research and for collaboration between ERP practitioners, system developers, and researchers. Researchers, in turn, need to become more innovative in terms of their research techniques when examining ERP implementation. Practical implications: This paper provides researchers and practitioners with an insight into published research work and its findings. It provides a better understanding of ERP implementation, which can be applied towards overcoming operational difficulties during the implementation process. Originality/value: This study is innovative in its use of the Esteves and Bohorquez (2007) classification based on the ERP lifecycle framework, and content analysis, to present a comprehensive structured review of the ERP implementation literature with a specific focus on pre-implementation, implementation, and post-implementation in LEs between 1989 and 2014. The technique and time period used in this study differ from those of other studies on ERP implementation. The paper brings together theoretical and practical developments on ERP implementation under a single source, which should aid practitioners, researchers, and ERP developers with future research and decision making.

127 citations


Proceedings ArticleDOI
01 May 2017
TL;DR: This paper documents the approach taken by the C language specification subcommittee and presents the main concepts, constructs, and objects within the GraphBLAS API.
Abstract: The purpose of the GraphBLAS Forum is to standardize linear-algebraic building blocks for graph computations. An important part of this standardization effort is to translate the mathematical specification into an actual Application Programming Interface (API) that (i) is faithful to the mathematics and (ii) enables efficient implementations on modern hardware. This paper documents the approach taken by the C language specification subcommittee and presents the main concepts, constructs, and objects within the GraphBLAS API. Use of the API is illustrated by showing an implementation of the betweenness centrality algorithm.
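The paper illustrates the API with betweenness centrality written against the GraphBLAS C API; as a sketch of the underlying algorithm only (not of the GraphBLAS API itself), a minimal pure-Python version of Brandes' betweenness computation over an adjacency-list graph might look like this. All names here are illustrative assumptions; an undirected edge is represented as two directed edges, so each node's score counts both traversal directions.

```python
from collections import deque

def betweenness(adj):
    """Brandes' betweenness centrality for an unweighted graph.
    adj maps each node to its list of out-neighbours; an undirected
    edge is given as two directed edges, so scores count both directions."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}       # number of shortest s-v paths
        dist = {v: -1 for v in adj}
        preds = {v: [] for v in adj}      # predecessors on shortest paths
        sigma[s], dist[s] = 1, 0
        order, queue = [], deque([s])
        while queue:                      # BFS from s
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}     # dependency accumulation
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

In the GraphBLAS formulation, the BFS and accumulation phases become sparse matrix-vector products over suitable semirings, which is what lets implementations exploit modern hardware.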

78 citations


Patent
14 Sep 2017
TL;DR: A policy neural network is trained by iteratively updating policy parameters of the policy network based on a batch of collected experience data; prior to performance of each of a plurality of episodes performed by the robots, the current updated policy parameters are provided.
Abstract: Implementations utilize deep reinforcement learning to train a policy neural network that parameterizes a policy for determining a robotic action based on a current state. Some of those implementations collect experience data from multiple robots that operate simultaneously. Each robot generates instances of experience data during iterative performance of episodes that are each explorations of performing a task, and that are each guided based on the policy network and the current policy parameters for the policy network during the episode. The collected experience data is generated during the episodes and is used to train the policy network by iteratively updating policy parameters of the policy network based on a batch of collected experience data. Further, prior to performance of each of a plurality of episodes performed by the robots, the current updated policy parameters can be provided (or retrieved) for utilization in performance of the episode.
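The loop the abstract describes (robots pull the latest policy parameters, generate episodes, and a learner updates the parameters from the pooled batch) can be sketched schematically. The toy two-action task, the softmax policy, and every name below are illustrative assumptions, not the patent's actual implementation, which uses deep networks and physical robots:

```python
import math
import random

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def run_episode(prefs, rng):
    # One robot "episode": sample an action from the current policy, observe reward.
    probs = softmax(prefs)
    action = rng.choices([0, 1], weights=probs)[0]
    reward = 1.0 if action == 1 else 0.0   # hypothetical task: action 1 succeeds
    return action, reward

def train(n_rounds=200, n_robots=4, lr=0.1, seed=0):
    rng = random.Random(seed)
    prefs = [0.0, 0.0]                     # current policy parameters
    for _ in range(n_rounds):
        # Robots pull the latest parameters, then each contributes experience.
        batch = [run_episode(prefs, rng) for _ in range(n_robots)]
        # Learner: one REINFORCE-style update on the pooled batch,
        # evaluated at the parameters the batch was collected under.
        probs = softmax(prefs)
        for action, reward in batch:
            for k in range(len(prefs)):
                grad_log = (1.0 if k == action else 0.0) - probs[k]
                prefs[k] += lr * reward * grad_log
    return prefs
```

The key structural point mirrors the claim: experience generation and parameter updates alternate, with each episode guided by the most recently updated parameters.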

74 citations



Journal ArticleDOI
TL;DR: A new approach to NHIE is proposed that builds on the notion of consumer-mediated HIE, albeit without the focus on central health record banks, and provides special interfaces between mPHRs and provider systems.

62 citations


Book
27 Mar 2017
TL;DR: In this article, a broad introduction to the field of cloud service benchmarking is presented, which guides readers step-by-step through the process of designing, implementing and executing a Cloud service benchmark, as well as understanding and dealing with its results.
Abstract: Cloud service benchmarking can provide important, sometimes surprising insights into the quality of services and leads to a more quality-driven design and engineering of complex software architectures that use such services. Starting with a broad introduction to the field, this book guides readers step-by-step through the process of designing, implementing and executing a cloud service benchmark, as well as understanding and dealing with its results. It covers all aspects of cloud service benchmarking, i.e., both benchmarking the cloud and benchmarking in the cloud, at a basic level. The book is divided into five parts: Part I discusses what cloud benchmarking is, provides an overview of cloud services and their key properties, and describes the notion of a cloud system and cloud-service quality. It also addresses the benchmarking lifecycle and the motivations behind running benchmarks in particular phases of an application lifecycle. Part II then focuses on benchmark design by discussing key objectives (e.g., repeatability, fairness, or understandability) and defining metrics and measurement methods, and by giving advice on developing own measurement methods and metrics. Next, Part III explores benchmark execution and implementation challenges and objectives as well as aspects like runtime monitoring and result collection. Subsequently, Part IV addresses benchmark results, covering topics such as an abstract process for turning data into insights, data preprocessing, and basic data analysis methods. Lastly, Part V concludes the book with a summary, suggestions for further reading and pointers to benchmarking tools available on the Web. 
The book is intended for researchers and graduate students of computer science and related subjects looking for an introduction to benchmarking cloud services, but also for industry practitioners who are interested in evaluating the quality of cloud services or who want to assess key qualities of their own implementations through cloud-based experiments.
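As a schematic of the benchmarking lifecycle the book covers (execute a workload, collect raw measurements, turn them into metrics), here is a minimal harness. The `benchmark` function and the stand-in workload are hypothetical; a real cloud service benchmark would replace the callable with an actual service request and add runtime monitoring:

```python
import statistics
import time

def benchmark(call, n_requests=50):
    """Invoke `call` repeatedly and collect per-request latencies (seconds)."""
    latencies = []
    for _ in range(n_requests):
        start = time.perf_counter()
        call()
        latencies.append(time.perf_counter() - start)
    # Turn the raw measurements into summary metrics (cf. Part IV of the book).
    ordered = sorted(latencies)
    return {
        "mean": statistics.mean(latencies),
        "p95": ordered[int(0.95 * (len(ordered) - 1))],
        "throughput_rps": n_requests / sum(latencies),
    }

# Stand-in workload; a real benchmark would issue an actual service request here.
result = benchmark(lambda: sum(range(1000)), n_requests=20)
```

Design objectives such as repeatability and fairness then translate into choices like fixing the request mix, repeating runs, and reporting percentiles rather than only means.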

54 citations


Journal ArticleDOI
TL;DR: A conceptual framework for ERP system implementation is presented by combining the stage-gate approach with a pre-implementation roadmap to enhance overall ERP implementation outcomes, ensuring critical success factors and eliminating common causes of failure.
Abstract: Purpose The purpose of this paper is to propose an alternative integrated approach based on the stage-gate method to implement enterprise resource planning (ERP) systems which will enhance the effectiveness of ERP projects. Design/methodology/approach A literature review was conducted on ERP system implementation and its effectiveness. The need for improving implementation approaches and methodologies was examined. Based on the insights gained, a conceptual framework for ERP system implementation is presented by combining the stage-gate approach with the pre-implementation roadmap. Findings The proposed framework aims to enhance the overall ERP implementation outcomes, ensuring critical success factors and eliminating common causes of failures. A pre-implementation roadmap is identified as a key element for eliminating many causes of failure including lack of organisations’ readiness for ERP. The post-implementation stage can be used for further improvements to the system through internal research and development. Research limitations/implications The development of the framework is an attempt to contribute to improving ERP implementation. This research is expected to motivate researchers to work in this area, and it will be beneficial to practicing managers in the identification of opportunities for improvements in ERP systems. Case studies will be valuable to refine and validate the proposed model. Originality/value This paper explores research in a needy area and offers a framework to help researchers and practitioners in improving ERP implementation. This framework is expected to reduce the implementation project duration, strengthen critical success factors and minimise common problems of ERP implementation projects.

52 citations


Journal ArticleDOI
TL;DR: These findings are among the first to characterize organizational variability in response to system-driven implementation and suggest ways that implementation interventions might be tailored by organizational characteristics.
Abstract: Large mental health systems are increasingly using fiscal policies to encourage the implementation of multiple evidence-based practices (EBPs). Although many implementation strategies have been identified, little is known about the types and impacts of strategies that are used by organizations within implementation as usual. This study examined organizational-level responses to a fiscally-driven, rapid, and large scale EBP implementation in children’s mental health within the Los Angeles County Department of Mental Health. Qualitative methods using the principles of grounded theory were used to characterize the responses of 83 community-based agencies to the implementation effort using documentation from site visits conducted 2 years post reform. Findings indicated that agencies perceived the rapid system-driven implementation to have both positive and negative organizational impacts. Identified challenges were primarily related to system implementation requirements rather than to characteristics of specific EBPs. Agencies employed a variety of implementation strategies in response to the system-driven implementation, with agency size associated with implementation strategies used. Moderate- and large-sized agencies were more likely than small agencies to have employed systematic strategies at multiple levels (i.e., organization, therapist, client) to support implementation. These findings are among the first to characterize organizational variability in response to system-driven implementation and suggest ways that implementation interventions might be tailored by organizational characteristics.

47 citations


Proceedings ArticleDOI
19 Oct 2017
TL;DR: ChainerCV is a software library that supports numerous neural network models as well as the software components needed to conduct research in computer vision, including object detection and semantic segmentation.
Abstract: Despite significant progress of deep learning in the field of computer vision, there has not been a software library that covers these methods in a unifying manner. We introduce ChainerCV, a software library that is intended to fill this gap. ChainerCV supports numerous neural network models as well as software components needed to conduct research in computer vision. These implementations emphasize simplicity, flexibility and good software engineering practices. The library is designed to perform on par with the results reported in published papers and its tools can be used as a baseline for future research in computer vision. Our implementation includes sophisticated models like Faster R-CNN and SSD, and covers tasks such as object detection and semantic segmentation.

46 citations


Journal ArticleDOI
TL;DR: A large-scale numerical study uses data obtained from a major hotel to compare the performance of several heuristic approaches proposed in the literature and offers a cautionary tale on the choice of heuristic methods for practical network pricing problems.
Abstract: Dynamic pricing for network revenue management has received considerable attention in research and practice. Based on data obtained from a major hotel, we use a large-scale numerical study to compare the performance of several heuristic approaches proposed in the literature. The heuristic approaches we consider include deterministic linear programming with resolving and three variants of dynamic programming decomposition. Dynamic programming decomposition is considered one of the strongest heuristics and is the method chosen in some recent commercial implementations, and remains a topic of research in the recent academic literature. In addition to a plain-vanilla implementation of dynamic programming decomposition, we consider two variants proposed in recent literature. For the base scenario generated from the real data, we show that the method based on Zhang (2011) [An improved dynamic programming decomposition approach for network revenue management. Manufacturing Service Oper. Management 13(1):35–52.] ...
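At the core of the dynamic programming decomposition heuristics compared here is a single-resource dynamic program, whose marginal values ("bid prices") approximate the network value function. The sketch below is a textbook-style illustration under simplifying assumptions (one request per period, independent arrivals), not the paper's implementation or Zhang's (2011) refinement:

```python
def single_leg_value(T, C, fares, probs):
    """Value function V[t][x]: expected revenue-to-go with x units of
    capacity and T - t periods remaining.  In each period at most one
    request arrives: fare fares[i] with probability probs[i]."""
    V = [[0.0] * (C + 1) for _ in range(T + 1)]   # V[T][x] = 0 (no salvage value)
    for t in range(T - 1, -1, -1):
        for x in range(C + 1):
            v = V[t + 1][x]
            if x > 0:
                for fare, p in zip(fares, probs):
                    # Accept a request only if its fare beats the marginal
                    # value of capacity (the "bid price").
                    gain = fare + V[t + 1][x - 1] - V[t + 1][x]
                    v += p * max(gain, 0.0)
            V[t][x] = v
    return V

# The bid price at state (t, x) is the capacity differential V[t][x] - V[t][x - 1].
```

Network decomposition solves one such program per resource and prices a multi-resource product by summing the resources' bid prices; the variants compared in the paper differ in how that decomposition is constructed and resolved over time.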

Journal ArticleDOI
TL;DR: In this article, the authors investigate the current visual management practices in highways construction projects in England and identify a set of recommendations and some visual management ideas for future implementation efforts in the highways construction and maintenance sector.
Abstract: Purpose: The purpose of this paper is to investigate the current Visual Management practices in highways construction projects in England. Design/methodology/approach: Following a comprehensive literature review, the research topic was investigated by using five case studies and focus groups. Findings: The main findings are: (i) the current implementation of VM is limited, particularly in the construction field; (ii) there are some identified points (suggestions) that require attention to disseminate and advance the current practices further; and (iii) many conventional and BIM-based opportunities to extend the current Visual Management implementations exist for the sector. Originality/value: The highways construction and maintenance sector in England has been systematically deploying lean construction techniques in its operations for a while. One of those lean techniques is a close-range visual communication strategy called Visual Management. The literature on the Visual Management implementation in construction is scarce and generally limited to the building construction context. This paper documents the current industry practice in conventional and Building Information Modelling (BIM) based Visual Management and identifies a set of recommendations and some Visual Management ideas for future implementation efforts in the highways construction and maintenance sector.

Book
23 Jun 2017
TL;DR: This book is a report on the new state of the art created by BETTY, and the title “Behavioural Types: from Theory to Tools” summarises the trajectory of the community during the last four years.
Abstract: This book presents research produced by members of COST Action IC1201: Behavioural Types for Reliable Large-Scale Software Systems (BETTY), a European research network that was funded from October 2012 to October 2016. The technical theme of BETTY was the use of behavioural type systems in programming languages, to specify and verify properties of programs beyond the traditional use of type systems to describe data processing. A significant area within behavioural types is session types, which concerns the use of type-theoretic techniques to describe communication protocols so that static typechecking or dynamic monitoring can verify that protocols are implemented correctly. This is closely related to the topic of choreography, in which system design starts from a description of the overall communication flows. Another area is behavioural contracts, which describe the obligations of interacting agents in a way that enables blame to be attributed to the agent responsible for failed interaction. Type-theoretic techniques can also be used to analyse potential deadlocks due to cyclic dependencies between inter-process interactions. BETTY was organised into four Working Groups: (1) Foundations; (2) Security; (3) Programming Languages; (4) Tools and Applications. Working Groups 1–3 produced “state-of-the-art reports”, which were originally intended to take snapshots of the field at the time the network started, but grew into substantial survey articles including much research carried out during the network [1–3]. The situation for Working Group 4 was different. When the network started, the community had produced relatively few implementations of programming languages or tools. One of the aims of the network was to encourage more implementation work, and this was a great success. The community as a whole has developed a greater interest in putting theoretical ideas into practice.
The sixteen chapters in this book describe systems that were either completely developed, or substantially extended, during BETTY. The total of 41 co-authors represents a significant proportion of the active participants in the network (around 120 people who attended at least one meeting). The book is a report on the new state of the art created by BETTY in the area of Working Group 4, and the title “Behavioural Types: from Theory to Tools” summarises the trajectory of the community during the last four years. The book begins with two tutorials by Atzei et al. on contract-oriented design of distributed systems. Chapter 1 introduces the CO2 contract specification language and the Diogenes toolchain. Chapter 2 describes how timing constraints can be incorporated into the framework and checked with the CO2 middleware. Part of the CO2 middleware is a monitoring system, and the theme of monitoring continues in the next two chapters. In Chapter 3, Attard et al. present detectEr, a runtime monitoring tool for Erlang programs that allows correctness properties to be expressed in Hennessy-Milner logic. In Chapter 4, which is the first chapter about session types, Neykova and Yoshida describe a runtime verification framework for Python programs. Communication protocols are specified in the Scribble language, which is based on multiparty session types. The next three chapters deal with choreographic programming. In Chapter 5, Debois and Hildebrandt present a toolset for working with dynamic condition response (DCR) graphs, which are a graphical formalism for choreography. Chapter 6, by Lange et al., continues the graphical theme with ChorGram, a tool for synthesising global graphical choreographies from collections of communicating finite-state automata. Giallorenzo et al., in Chapter 7, consider runtime adaptation.
They describe AIOCJ, a choreographic programming language in which runtime adaptation is supported with a guarantee that it doesn’t introduce deadlocks or races. Deadlock analysis is important in other settings too, and there are two more chapters about it. In Chapter 8, Padovani describes the Hypha tool, which uses a type-based approach to check deadlock-freedom and lock-freedom of systems modelled in a form of pi-calculus. In Chapter 9, Garcia and Laneve present a tool for analysing deadlocks in Java programs; this tool, called JaDA, is based on a behavioural type system. The next three chapters report on projects that have added session types to functional programming languages in order to support typechecking of communication-based code. In Chapter 10, Orchard and Yoshida describe an implementation of session types in Haskell, and survey several approaches to typechecking the linearity conditions required for safe session implementation. In Chapter 11, Melgratti and Padovani describe an implementation of session types in OCaml. Their system uses runtime linearity checking. In Chapter 12, Lindley and Morris describe an extension of the web programming language Links with session types; their work contrasts with the previous two chapters in being less constrained by an existing language design. Continuing the theme of session types in programming languages, the next two chapters describe two approaches based on Java. Hu’s work, presented in Chapter 13, starts with the Scribble description of a multiparty session type and generates an API in the form of a collection of Java classes, each class containing the communication methods that are available in a particular state of the protocol. Dardha et al., in Chapter 14, also start with a Scribble specification. Their StMungo tool generates an API as a single class with an associated typestate specification to constrain sequences of method calls.
Code that uses the API can be checked for correctness with the Mungo typechecker. Finally, there are two chapters about programming with the MPI libraries. Chapter 15, by Ng and Yoshida, uses an extension of Scribble, called Pabble, to describe protocols that are parametric in the number of runtime roles. From a Pabble specification they generate C code that uses MPI for communication and is guaranteed correct by construction. Chapter 16, by Ng et al., describes the ParTypes framework for analysing existing C+MPI programs with respect to protocols defined in an extension of Scribble. We hope that the book will serve a useful purpose as a report on the activities of COST Action IC1201 and as a survey of programming languages and tools based on behavioural types.

Proceedings ArticleDOI
14 Oct 2017
TL;DR: This paper proposes three new execution models equipped with much improved controllability, including a hybrid model that is capable of getting the strengths of all, and leads to the development of a software programming framework named VersaPipe.
Abstract: Pipeline is an important programming pattern, while the GPU, designed mostly for data-level parallel execution, lacks an efficient mechanism to support pipeline programming and execution. This paper provides a systematic examination of various existing pipeline execution models on GPU and analyzes their strengths and weaknesses. To address their shortcomings, this paper then proposes three new execution models equipped with much improved controllability, including a hybrid model that is capable of combining the strengths of all. These insights ultimately lead to the development of a software programming framework named VersaPipe. With VersaPipe, users only need to write the operations for each pipeline stage. VersaPipe will then automatically assemble the stages into a hybrid execution model and configure it to achieve the best performance. Experiments on a set of pipeline benchmarks and a real-world face detection application show that VersaPipe produces up to 6.90× (2.88× on average) speedups over the original manual implementations. CCS Concepts: General and reference → Performance; Computing methodologies → Parallel computing methodologies; Computer systems organization → Heterogeneous (hybrid) systems.
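The programming model VersaPipe targets (users write per-stage operations, the framework assembles them into a pipeline) can be illustrated on the CPU with one worker thread per stage. This sketch is only a thread-and-queue analogue of the GPU execution models the paper studies, and all names in it are hypothetical:

```python
from queue import Queue
from threading import Thread

def make_pipeline(stages, items):
    """Chain stage functions with queues and one worker thread per stage,
    a CPU analogue of a stage-per-worker pipeline execution model."""
    SENTINEL = object()
    queues = [Queue() for _ in range(len(stages) + 1)]

    def worker(fn, q_in, q_out):
        while True:
            item = q_in.get()
            if item is SENTINEL:       # propagate shutdown downstream
                q_out.put(SENTINEL)
                return
            q_out.put(fn(item))        # the user-written stage operation

    threads = [Thread(target=worker, args=(fn, queues[i], queues[i + 1]))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for item in items:                 # feed the first stage
        queues[0].put(item)
    queues[0].put(SENTINEL)
    out = []
    while True:                        # drain the last stage
        item = queues[-1].get()
        if item is SENTINEL:
            break
        out.append(item)
    for t in threads:
        t.join()
    return out
```

The point of the paper is precisely that on a GPU, the choice of how stages map to hardware (one kernel per stage, persistent threads, a hybrid) has large performance consequences, which this framework-style interface hides from the user.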

Proceedings ArticleDOI
30 Oct 2017
TL;DR: This paper presents the experience of two research teams that independently added floating-point support to KLEE, a popular symbolic execution engine, and conducts a rigorous comparison between the two implementations.
Abstract: Symbolic execution is a well-known program analysis technique for testing software, which makes intensive use of constraint solvers. Recent support for floating-point constraint solving has made it feasible to support floating-point reasoning in symbolic execution tools. In this paper, we present the experience of two research teams that independently added floating-point support to KLEE, a popular symbolic execution engine. Since the two teams independently developed their extensions, this created the rare opportunity to conduct a rigorous comparison between the two implementations, essentially a modern case study on N-version programming. As part of our comparison, we report on the different design and implementation decisions taken by each team, and show their impact on a rigorously assembled and tested set of benchmarks, itself a contribution of the paper.

Book ChapterDOI
03 Apr 2017
TL;DR: This position paper proposes a list of requirements for a human and machine-readable contract authoring language, friendly to lawyers, serving as a common (and a specification) language, for programmers, and the parties to a contract.
Abstract: Distributed ledger technologies are rising in popularity, mainly for the host of financial applications they potentially enable, through smart contracts. Several implementations of distributed ledgers have been proposed, and different languages for the development of smart contracts have been suggested. A great deal of attention is given to the practice of development, i.e. programming, of smart contracts. In this position paper, we argue that more attention should be given to the “traditional developers” of contracts, namely the lawyers, and we propose a list of requirements for a human and machine-readable contract authoring language, friendly to lawyers, serving as a common (and a specification) language, for programmers, and the parties to a contract.

Journal ArticleDOI
TL;DR: This paper will discuss agent-based modeling and simulation options for STI policy with a particular emphasis on the contribution of the social sciences both in offering theoretical grounding and in providing empirical data.
Abstract: Policymaking implies planning, and planning requires prediction--or at least some knowledge about the future. This contribution starts from the challenges of complexity, uncertainty, and agency, which rule out prediction of social systems, especially where new knowledge (scientific discoveries, emergent technologies, and disruptive innovations) is involved as a radical game-changer. It is important to be aware of the fundamental critiques that approaches and fields such as Technology Assessment, the Forrester World Models, Economic Growth Theory, or the Linear Model of Innovation have received in the past decades. It is likewise important to appreciate the limitations and consequences these diagnoses pose for science, technology and innovation policy (STI policy). However, agent-based modeling and simulation now provide new options to address the challenges of planning and prediction in social systems. This paper will discuss these options for STI policy with a particular emphasis on the contribution of the social sciences both in offering theoretical grounding and in providing empirical data. Fields such as Science and Technology Studies, Innovation Economics, Sociology of Knowledge/Science/Technology etc. inform agent-based simulation models in a way that realistic representations of STI policy worlds can be brought to the computer. These computational STI worlds allow scenario analysis, experimentation, policy modeling and testing prior to any policy implementations in the real world. This contribution will illustrate this for the area of STI policy using examples from the SKIN model. Agent-based simulation can help us to shed light into the darkness of the future--not in predicting it, but in coping with the challenges of complexity, in understanding the dynamics of the system under investigation, and in finding potential access points for planning of its future offering "weak prediction".
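A minimal agent-based simulation in the spirit described here might look like the following. This is a schematic diffusion-of-innovation toy, not the SKIN model; every name and parameter is an illustrative assumption, with `adopt_prob` acting as a policy lever for scenario analysis:

```python
import random

def simulate(n_agents=50, steps=100, adopt_prob=0.02, influence=0.1, seed=1):
    """Toy diffusion-of-innovation ABM.  Each step, a non-adopter adopts
    spontaneously with probability `adopt_prob`, or through contact with
    a randomly chosen peer who has already adopted, with probability
    `influence`.  Returns the adoption count after each step."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    history = []
    for _ in range(steps):
        for i in range(n_agents):
            if adopted[i]:
                continue
            peer = rng.randrange(n_agents)   # random interaction partner
            if rng.random() < adopt_prob or (
                adopted[peer] and rng.random() < influence
            ):
                adopted[i] = True
        history.append(sum(adopted))
    return history
```

Rerunning with a larger `adopt_prob` (say, modelling a subsidy) yields a different diffusion curve, which is exactly the kind of scenario comparison the paper argues can be done in silico before any real-world policy implementation.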

Proceedings ArticleDOI
20 May 2017
TL;DR: This paper presents an efficient, fully-automated, type constraint-based refactoring approach that assists developers in taking advantage of enhanced interfaces for their legacy Java software and provides insight to language designers on how this new construct applies to existing software.
Abstract: Java 8 default methods, which allow interfaces to contain (instance) method implementations, are useful for the skeletal implementation software design pattern. However, it is not easy to transform existing software to exploit default methods as it requires analyzing complex type hierarchies, resolving multiple implementation inheritance issues, reconciling differences between class and interface methods, and analyzing tie-breakers (dispatch precedence) with overriding class methods to preserve type-correctness and confirm semantics preservation. In this paper, we present an efficient, fully-automated, type constraint-based refactoring approach that assists developers in taking advantage of enhanced interfaces for their legacy Java software. The approach features an extensive rule set that covers various corner-cases where default methods cannot be used. To demonstrate applicability, we implemented our approach as an Eclipse plug-in and applied it to 19 real-world Java projects, as well as submitted pull requests to popular GitHub repositories. The indication is that it is useful in migrating skeletal implementation methods to interfaces as default methods, sheds light onto the pattern's usage, and provides insight to language designers on how this new construct applies to existing software.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: The authors report on their work with cyber analysts to understand the analytic process and how one such model, the MITRE ATT&CK Matrix, is used to structure analytic thinking, and present their efforts to map the specific data analysts need into this threat model to inform visualization designs.
Abstract: Cyber network analysts follow complex processes in their investigations of potential threats to their network. Much research is dedicated to providing automated decision support in the effort to make their tasks more efficient, accurate, and timely. Support tools come in a variety of implementations, from machine learning algorithms that monitor streams of data to visual analytic environments for exploring rich and noisy data sets. Cyber analysts, however, need tools which help them merge the data they already have and help them establish appropriate baselines against which to compare anomalies. Furthermore, existing threat models that cyber analysts regularly use to structure their investigation are not often leveraged in support tools. We report on our work with cyber analysts to understand the analytic process and how one such model, the MITRE ATT&CK Matrix [42], is used to structure their analytic thinking. We present our efforts to map specific data needed by analysts into this threat model to inform our visualization designs. We leverage this expert knowledge elicitation to identify capability gaps that might be filled with visual analytic tools. We propose a prototype visual analytic-supported alert management workflow to aid cyber analysts working with threat models.
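The data-to-threat-model mapping described above can be sketched as a simple lookup that buckets raw alerts by ATT&CK tactic. The alert names, technique IDs, and mapping below are illustrative assumptions for the sketch, not the authors' mapping or an authoritative ATT&CK reference.

```python
from collections import defaultdict

# Illustrative alert-to-tactic mapping; entries are examples only,
# not a complete or authoritative ATT&CK mapping.
ALERT_TO_TACTIC = {
    "phishing_email_detected": ("Initial Access", "T1566"),
    "powershell_spawned": ("Execution", "T1059"),
    "new_scheduled_task": ("Persistence", "T1053"),
    "smb_lateral_login": ("Lateral Movement", "T1021"),
}

def bucket_alerts(alerts):
    """Group raw alerts by ATT&CK tactic to structure analyst triage."""
    buckets = defaultdict(list)
    for name in alerts:
        tactic, technique = ALERT_TO_TACTIC.get(name, ("Unmapped", None))
        buckets[tactic].append((name, technique))
    return dict(buckets)

alerts = ["powershell_spawned", "phishing_email_detected", "powershell_spawned"]
by_tactic = bucket_alerts(alerts)
```

A visual analytic tool built on such a mapping could then show coverage and gaps per tactic column of the matrix, which is the capability gap the paper identifies.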

Journal ArticleDOI
TL;DR: This paper presents a design study visualizing an aggregated collection from diverse cultural institutions in Germany, consisting of four stylistically and functionally coordinated visualizations, each one focusing on different facets of the collection.
Abstract: As cultural institutions are digitizing their artifacts and interlinking their collections, new opportunities emerge to engage with cultural heritage. However, it is the often comprehensive and complex nature of collections that can make it difficult to grasp their distribution and extent across a variety of dimensions. After a brief introduction to the research area of collection visualizations, this paper presents a design study visualizing an aggregated collection from diverse cultural institutions in Germany. We detail our iterative design process leading to prototypical implementations of four stylistically and functionally coordinated visualizations, each one focusing on different facets of the collection.

Journal ArticleDOI
TL;DR: A wide literature review of empirical and epistemological studies on new public management (NPM) and its evolution across continents and cultures, together with a critical analysis of lessons learned from implementations of its ideas and practices, is presented in this paper.
Abstract: Purpose The purpose of this paper is to present the views of the authors in regard to the provenance and future of public management (PM) and the advantages of using management science in administrative science. The authors point to the meaning of both sciences for government studies and to the use that both theoreticians and practitioners may gain from adequately balancing the disciplines for the public interest. Design/methodology/approach The paper is based on a wide literature review of empirical and epistemological studies on new public management (NPM) and its evolution across continents and cultures, and on a critical analysis of lessons learned from implementations of its ideas and practices. Findings The authors identify the managerial reform in public administration as one of the more influential reforms of modern nations, cutting across multiple policy areas, public agencies and cultures. The authors expect that public managers in the years to come will play a decisive role and be required to build collaborative capacity while governing creatively but without bending the rules. Ingenious civil servants are going to carry weight and devise new mechanisms for coping with prospective challenges; no doubt they will have to be savvier, more adept and open-minded than ever to be able to step up to the plate and assist governments to deal with impending crises. Originality/value The originality of this essay is reflected in the wide coverage of transitions in the managerial language of the discipline. Using manifold examples from different perspectives on NPM provides a unique and balanced look into what became the most influential reforms in public administration since the second half of the twentieth century, and is still alive and kicking.

Proceedings ArticleDOI
01 Dec 2017
TL;DR: Grounded theory is used to study the rapid parallel processes of testing, validation, and verification across different phases of ERP implementation using agile methods; the results show that agile methods reduce complexity and increase the quality of ERP implementation.
Abstract: Most organizations are now adopting Enterprise Resource Planning (ERP) solutions to standardize, formalize, and automate their processes. Organizations usually hire IT Consulting Firms (ITCFs) for ERP implementation due to its complexity, large scale, and required domain knowledge. ERP brings radical changes to the environment, daily processes, and interactions of the organization. To make these changes gradual and incremental, ITCFs are moving towards agile methodology. Quality assurance and quality control play a pivotal role in the success of an ERP implementation; with the use of agile methods, the implementation becomes more complex. In this paper, grounded theory is used to study the rapid parallel processes of testing, validation, and verification for different phases of ERP implementation using agile methods. In our study, 23 industry professionals participated from six different organizations. Each participant had experience of various ERP implementations in diverse geographical locations and domains, in different roles. This paper discusses and analyzes different agile methods that can reduce challenges in quality control and assurance during ERP implementation. The results show that agile methods (daily scrum meetings, pair programming, and frequent reviews) reduce complexity and increase the quality of ERP implementation.

Posted Content
TL;DR: ChainerCV is introduced: a software library that supports numerous neural network models as well as software components needed to conduct research in computer vision, covering tasks such as object detection and semantic segmentation.
Abstract: Despite significant progress of deep learning in the field of computer vision, there has not been a software library that covers these methods in a unifying manner. We introduce ChainerCV, a software library that is intended to fill this gap. ChainerCV supports numerous neural network models as well as software components needed to conduct research in computer vision. These implementations emphasize simplicity, flexibility and good software engineering practices. The library is designed to perform on par with the results reported in published papers and its tools can be used as a baseline for future research in computer vision. Our implementation includes sophisticated models like Faster R-CNN and SSD, and covers tasks such as object detection and semantic segmentation.

Proceedings ArticleDOI
TL;DR: A Label Management Service (LMS) is proposed which provides meaningful abstractions and their relationships by analyzing the target cloud infrastructure, and which helps cloud administrators model their policy requirements efficiently by decoupling the intents from the underlying specifics.

Journal ArticleDOI
TL;DR: This paper analyzes the impact of risk factors on ERP implementation using the structural equation model (SEM) approach; six hypotheses are developed to evaluate the interrelationship between risk factors and the success of ERP implementations.
Abstract: Purpose In order to reduce the high failure rate of enterprise resource planning (ERP) system projects in Indian retail, project managers need to analyze and understand the impact of risk factors on ERP implementation. The purpose of this paper is to identify the key risk factors solely or primarily for the Indian retail sector. Furthermore, this study also analyzes the impact of risk factors on ERP implementation using the structural equation model (SEM) approach. "User risk," "project management risk," "technological risk," "team risk," "organizational risk," and "project performance risk" are the examined factors. Design/methodology/approach A theoretical model is created that explains the risk factors which may impact the success of ERP implementation. Hypotheses were also developed to evaluate the interrelationship between risk factors and the success of ERP implementation. Empirical data are collected through a survey questionnaire from practitioners such as project sponsors, project managers, implementation consultants, and team members involved in ERP implementation in the retail sector to test the theoretical model. Findings Using the SEM, it is found that 40 percent of the variation in ERP implementation success can be explained with the help of the model suggested in the research study. The results of the study have empirically verified that the "user risk," "project management risk," "technological risk," "team risk," "organizational risk," and "project performance risk" factors positively impact ERP implementation success. All six hypotheses were supported by the results of the study. Research limitations/implications The findings from this paper can provide a greater understanding of ERP implementations. Researchers, practicing managers, and those seeking to implement ERP in retail organizations can also use the findings of this study as a vehicle for improving ERP implementation success in the retail sector.
Originality/value The study examines the impact of risk factors on ERP implementation. Very few studies have been performed to investigate and understand this issue. Therefore, the research can make a useful contribution.

Journal ArticleDOI
TL;DR: A novel tactic search engine called ArchEngine is presented, which helps developers find implementation examples of an architectural tactic for a given technical context, using information retrieval and program analysis techniques to retrieve applications that implement these design concepts.

Journal ArticleDOI
01 Jan 2017
TL;DR: A case study for the application of a pre-existing model (based on the Business Process Modelling method) for the technical, economic and financial evaluation of an RFId technology application in the area of industrial logistics for a bike manufacturer is presented.
Abstract: In recent years, the importance of RFId technology within the operations management environment has become more evident. In particular, RFId technology is recognised as an accelerator of the change towards a more efficient way to manage operations in an industrial context. The aim of this paper is to present a case study of the application of a pre-existing model (based on the Business Process Modelling (BPM) method) for the technical, economic and financial evaluation of an RFId technology application in the area of industrial logistics for a bike manufacturer. The paper addresses this issue by first analysing RFId utilization in the industrial context, then reviewing the existing literature on the use of BPM for evaluating the applicability of RFId to the industrial context, and lastly illustrating the case study and the results of applying BPM to the specific firm. The results demonstrate the improvements that can be achieved in terms of financial returns and of bikes processed in the warehouse per year.

Journal ArticleDOI
TL;DR: In this article, the authors explore the advantages of activity-based costing vs. traditional costing systems and present arguments for the potential benefits to the world's millions of small to medium-sized businesses (SMEs) of implementing ABC.
Abstract: Since the 1970s, activity-based costing (ABC) has enabled companies to identify the true costs of processes and products and to make sound decisions about the profitability and expense of the products they produce, as well as the effectiveness of their manufacturing and business processes. This paper explores the advantages of activity-based costing over traditional costing systems and presents arguments for the potential benefits to the world's millions of small to medium-sized businesses (SMEs) of implementing ABC. Issues related to the implementation of ABC are discussed. A framework for ABC implementations in SMEs is presented that shows the variables (characteristics of the SME and implementation challenges) that can impact the ABC implementation process and, ultimately, implementation outcomes.
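The core difference between ABC and traditional costing described above can be shown with a small worked example. All figures below are invented for illustration: two products share two overhead pools, and ABC traces each pool to the activity driver each product actually consumes, while the traditional system spreads all overhead over direct labor hours.

```python
# Toy figures for illustration only (not from the paper).
overhead = {"machine_setup": 20000.0, "quality_inspection": 10000.0}
# Activity drivers consumed by two products:
drivers = {
    "machine_setup":      {"A": 10, "B": 40},    # number of setups
    "quality_inspection": {"A": 100, "B": 100},  # inspections performed
}
direct_labor_hours = {"A": 800, "B": 200}

def traditional_cost(product):
    """Allocate all overhead on direct labor hours alone."""
    total_overhead = sum(overhead.values())
    total_hours = sum(direct_labor_hours.values())
    return total_overhead * direct_labor_hours[product] / total_hours

def abc_cost(product):
    """Allocate each overhead pool by the activity the product consumes."""
    cost = 0.0
    for activity, pool in overhead.items():
        total_driver = sum(drivers[activity].values())
        cost += pool * drivers[activity][product] / total_driver
    return cost
```

Here product B looks cheap under labor-hour allocation (6,000 of overhead) but consumes most of the setup activity, so ABC assigns it 21,000; this cross-subsidy between products is exactly the distortion ABC is meant to expose.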

Book ChapterDOI
28 Sep 2017
TL;DR: JACPoL is introduced, a descriptive, scalable and expressive policy language in JSON that provides a flexible and fine-grained ABAC (Attribute-based Access Control), and meanwhile it can be easily tailored to express a broad range of other access control models.
Abstract: Along with the rapid development of ICT technologies, new areas like Industry 4.0, IoT and 5G have emerged and brought out the need for protecting shared resources and services under time-critical and energy-constrained scenarios with real-time policy-based access control. The process of policy evaluation under these circumstances must be executed within an unobservable delay and strictly comply with security objectives. To achieve this, the policy language needs to be very expressive but lightweight and efficient. Many existing implementations use XML (Extensible Markup Language) to encode policies, which is verbose, inefficient to parse, and not readable by humans. On the contrary, JSON (JavaScript Object Notation) is a lightweight, text-based and language-independent data-interchange format that is simple for humans to read and write and easy for machines to parse and generate. Several attempts have emerged to convert existing XML policies and requests into JSON; however, there are very few policy specification proposals based on JSON with well-defined syntax and semantics. This paper investigates these challenges and identifies a set of key requirements for a policy language to optimize the policy evaluation performance. According to these performance requirements, we introduce JACPoL, a descriptive, scalable and expressive policy language in JSON. JACPoL by design provides flexible and fine-grained ABAC (Attribute-based Access Control), and meanwhile it can be easily tailored to express a broad range of other access control models. This paper systematically illustrates the design and implementation of JACPoL and evaluates it in comparison with other existing policy languages. The result shows that JACPoL can be as expressive as existing ones but simpler, more scalable and more efficient.
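To illustrate why a JSON encoding can be compact and fast to evaluate, here is a minimal sketch of a JSON-encoded ABAC policy and evaluator in Python. The field names below are hypothetical stand-ins; the actual JACPoL syntax and semantics are defined in the paper.

```python
import json

# Hypothetical JACPoL-style policy; field names are illustrative only.
policy_json = """
{
  "id": "p1",
  "effect": "permit",
  "condition": {"role": "nurse", "department": "cardiology"}
}
"""

def evaluate(policy, request):
    """Permit only if every attribute in the condition matches the request."""
    condition = policy["condition"]
    if all(request.get(k) == v for k, v in condition.items()):
        return policy["effect"]
    return "deny"

policy = json.loads(policy_json)
print(evaluate(policy, {"role": "nurse", "department": "cardiology"}))  # permit
print(evaluate(policy, {"role": "nurse", "department": "oncology"}))    # deny
```

A real engine would add policy sets, combining algorithms and obligations; the point here is only that a JSON policy parses and evaluates with a few lines of standard tooling, which is the lightweight-evaluation argument the paper makes.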

Journal ArticleDOI
TL;DR: In this article, the authors identify challenges experienced by SMEs when implementing ERP systems and suggest requirements for achieving successful implementations in SMEs in Southern Africa, using a thematic analysis methodology.
Abstract: Many international Enterprise Resource Planning (ERP) systems were developed based on the best practices of the organizations in which they originated. These organizations are usually large, and located in developed countries. However, small organizations in other parts of the world are also implementing ERP. Implementing a system based on practices that differ from one's own is bound to raise issues. The objective of the study is to identify challenges experienced by SMEs when implementing ERP systems, and to suggest requirements for achieving successful implementations in SMEs in Southern Africa. A thematic analysis methodology was used to explore the challenges identified in fourteen SMEs and to identify themes within the data. The study suggests that a successful ERP implementation requires sufficient and appropriate training, a reliable internet connection, involvement of end-users, change management, and sufficient demonstration of the prospective ERP system.