
Showing papers on "Workflow" published in 2007


Journal ArticleDOI
TL;DR: This paper describes the application of process mining in one of the provincial offices of the Dutch National Public Works Department, responsible for the construction and maintenance of the road and water infrastructure.

804 citations


Patent
27 Jul 2007
TL;DR: In this article, the authors describe a system and methods for performing policy-managed, peer-to-peer service orchestration in a manner that supports the formation of self-organizing service networks that enable rich media experiences.
Abstract: Systems and methods are described for performing policy-managed, peer-to-peer service orchestration in a manner that supports the formation of self-organizing service networks that enable rich media experiences. In one embodiment, services are distributed across peer-to-peer communicating nodes, and each node provides message routing and orchestration using a message pump and workflow collator. Distributed policy management of service interfaces helps to provide trust and security, supporting commercial exchange of value. Peer-to-peer messaging and workflow collation allow services to be dynamically created from a heterogeneous set of primitive services. The shared resources are services of many different types, using different service interface bindings beyond those typically supported in web service deployments built on UDDI, SOAP, and WSDL. In a preferred embodiment, a media services framework is provided that enables nodes to find one another, interact, exchange value, and cooperate across tiers of networks from WANs to PANs.

667 citations


Journal ArticleDOI
TL;DR: A recent National Science Foundation workshop brought together domain, computer, and social scientists to discuss requirements of future scientific applications and the challenges they present to current workflow technologies.
Abstract: Workflows have emerged as a paradigm for representing and managing complex distributed computations and are used to accelerate the pace of scientific progress. A recent National Science Foundation workshop brought together domain, computer, and social scientists to discuss requirements of future scientific applications and the challenges they present to current workflow technologies.

563 citations


Proceedings ArticleDOI
15 Oct 2007
TL;DR: It is shown how DECLARE can support loosely-structured processes without sacrificing important WFMS features like user support, model verification, analysis of past executions, changing models at run-time, etc.

Abstract: Traditional workflow management systems (WFMSs) are not flexible enough to support loosely-structured processes. Furthermore, flexibility in contemporary WFMSs usually comes at a certain cost, such as lack of support for users, lack of methods for model analysis, lack of methods for analysis of past executions, etc. DECLARE is a prototype of a WFMS that uses a constraint-based process modeling language for the development of declarative models describing loosely-structured processes. In this paper we show how DECLARE can support loosely-structured processes without sacrificing important WFMS features like user support, model verification, analysis of past executions, changing models at run-time, etc.

554 citations
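
To make the constraint-based idea concrete, below is a minimal sketch of two DECLARE-style constraint templates (response and precedence) checked against execution traces; everything not forbidden by a constraint is allowed. This is an illustration only, not DECLARE's actual implementation, and the activity names are hypothetical.

```python
# Declarative process checking: a trace is allowed unless it violates a constraint.

def response(a, b):
    """Every occurrence of a must eventually be followed by b."""
    def check(trace):
        return all(b in trace[i + 1:] for i, e in enumerate(trace) if e == a)
    return check

def precedence(a, b):
    """b may only occur after at least one a."""
    def check(trace):
        seen_a = False
        for e in trace:
            if e == a:
                seen_a = True
            elif e == b and not seen_a:
                return False
        return True
    return check

# Hypothetical model: registering must lead to archiving; payment needs registration.
constraints = [response("register", "archive"), precedence("register", "pay")]

def is_allowed(trace):
    return all(check(trace) for check in constraints)

print(is_allowed(["register", "pay", "archive"]))  # True
print(is_allowed(["pay", "register", "archive"]))  # False: pay precedes register
```

Any ordering of other activities in between is permitted, which is exactly the looseness the declarative style buys over an explicit control-flow graph.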


Book
01 Jan 2007
TL;DR: This is a timely book presenting an overview of the current state of the art within established projects, covering many different aspects of workflow from users to tool builders.

Abstract: This is a timely book presenting an overview of the current state of the art within established projects, covering many different aspects of workflow from users to tool builders. It provides an overview of active research from a number of different perspectives. It includes theoretical aspects of workflow and deals with workflow for e-Science as opposed to e-Commerce. The topics covered will be of interest to a wide range of practitioners.

432 citations


Book ChapterDOI
25 Nov 2007
TL;DR: This paper proposes a general framework for a constraint-based process modeling language and its implementation that supports both ad-hoc and dynamic change, where the transfer of instances can be done more easily than in traditional approaches.

Abstract: The degree of flexibility of workflow management systems heavily influences the way business processes are executed. Constraint-based models are considered to be more flexible than traditional models because of their semantics: everything that does not violate constraints is allowed. Although constraint-based models are flexible, changes to process definitions might be needed to comply with evolving business domains and exceptional situations. Flexibility can be increased by run-time support for dynamic changes (transferring instances to a new model) and ad-hoc changes (changing the process definition for one instance). In this paper we propose a general framework for a constraint-based process modeling language and its implementation. Our approach supports both ad-hoc and dynamic change, and the transfer of instances can be done more easily than in traditional approaches.

270 citations


Book ChapterDOI
01 Jan 2007
TL;DR: In this chapter, the Triana workflow environment is described and an overview of Taverna, a system designed to support scientists using Grid technology to conduct in silico experiments in biology is given.
Abstract: In this chapter, the Triana workflow environment is described. Triana focuses on supporting services within multiple environments, such as peer-to-peer (P2P) and the Grid, by integrating with various types of middleware toolkits. This approach differs from that of the previous chapter, which gave an overview of Taverna, a system designed to support scientists using Grid technology to conduct in silico experiments in biology. Taverna focuses its workflow support at the Web services level and addresses how such services should be presented to its users.

269 citations


Book ChapterDOI
09 Sep 2007
TL;DR: A concise survey of existing workflow technology from the business and scientific domains is presented, and a number of key suggestions towards the future development of scientific workflow systems are made.

Abstract: Workflow technologies are emerging as the dominant approach to coordinate groups of distributed services. However, with a space filled with competing specifications, standards and frameworks from multiple domains, choosing the right tool for the job is not always a straightforward task. Researchers are often unaware of the range of technology that already exists and focus on implementing yet another proprietary workflow system. As an antidote to this common problem, this paper presents a concise survey of existing workflow technology from the business and scientific domains and makes a number of key suggestions towards the future development of scientific workflow systems.

268 citations


Journal ArticleDOI
TL;DR: This paper presents the design and implementation of AO4BPEL, an aspect-oriented extension to BPEL that makes the composition specification more modular and the composition itself more flexible and adaptable.
Abstract: Process-oriented composition languages such as BPEL allow Web Services to be composed into more sophisticated services using a workflow process. However, such languages exhibit some limitations with respect to modularity and flexibility. They do not provide means for a well-modularized specification of crosscutting concerns such as logging, persistence, auditing, and security. They also do not support the dynamic adaptation of composition at runtime. In this paper, we advocate an aspect-oriented approach to Web Service composition and present the design and implementation of AO4BPEL, an aspect-oriented extension to BPEL. We illustrate through examples how AO4BPEL makes the composition specification more modular and the composition itself more flexible and adaptable.

266 citations
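
The core aspect-oriented move here is separating a crosscutting concern (e.g., logging) from the composition logic and weaving it in as advice. AO4BPEL expresses pointcuts and advice over BPEL activities; the Python decorator sketch below only illustrates the before/after advice concept, and the service name is hypothetical.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)

def logged(activity):
    """Advice woven around a matched activity (the 'join point')."""
    @functools.wraps(activity)
    def wrapper(*args, **kwargs):
        logging.info("before %s %s", activity.__name__, args)      # before-advice
        result = activity(*args, **kwargs)
        logging.info("after %s -> %s", activity.__name__, result)  # after-advice
        return result
    return wrapper

@logged
def invoke_payment_service(order_id, amount):  # hypothetical partner invocation
    return {"order": order_id, "status": "charged", "amount": amount}

invoke_payment_service("o-42", 99.0)
```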


Book ChapterDOI
01 Jan 2007
TL;DR: This paper considers a basic model for workflow applications modelled as Directed Acyclic Graphs (DAGs) and investigates heuristics for scheduling the nodes of the DAG (or tasks of a workflow) onto resources in a way that satisfies a budget constraint and is still optimized for overall time.

Abstract: Grids are emerging as a promising solution for resource- and computation-demanding applications. However, the heterogeneity of resources in Grid computing complicates resource management and the scheduling of applications. In addition, the commercialization of the Grid requires policies that can take into account user requirements, and budget considerations in particular. This paper considers a basic model for workflow applications modelled as Directed Acyclic Graphs (DAGs) and investigates heuristics for scheduling the nodes of the DAG (or tasks of a workflow) onto resources in a way that satisfies a budget constraint and is still optimized for overall time. Two different approaches are implemented, evaluated and presented using four different types of basic DAGs.

259 citations
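
As a rough illustration of budget-constrained DAG scheduling (not the specific heuristics the paper evaluates), the sketch below walks tasks in topological order and, among the resource options still affordable, greedily picks the fastest; all task, resource, runtime, and cost values are made up, and resource contention is ignored.

```python
# task -> list of (resource, runtime, cost) options; deps: task -> prerequisites
options = {
    "A": [("slow", 10, 1), ("fast", 4, 5)],
    "B": [("slow", 8, 1), ("fast", 3, 4)],
}
deps = {"A": [], "B": ["A"]}

def schedule(tasks, budget):
    finish, plan, total_cost = {}, {}, 0
    for task in tasks:  # assumed to be in topological order
        ready = max((finish[d] for d in deps[task]), default=0)
        affordable = [o for o in options[task] if total_cost + o[2] <= budget]
        if not affordable:
            raise ValueError("budget exhausted")
        resource, runtime, cost = min(affordable, key=lambda o: o[1])
        finish[task], plan[task] = ready + runtime, resource
        total_cost += cost
    return plan, max(finish.values()), total_cost

# Spend on the fast resource while the budget allows, fall back to the slow one.
print(schedule(["A", "B"], budget=6))  # ({'A': 'fast', 'B': 'slow'}, 12, 6)
```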


Book ChapterDOI
25 Apr 2007
TL;DR: This paper presents both a survey of the two dominant process modeling approaches and a critical, comparative analysis of them.

Abstract: There has been a huge influx of business process modeling languages as business process management (BPM) and process-aware information systems continue to expand into various business domains. The origins of process modeling languages are quite diverse, although two dominant approaches can be observed: one based on graphical models, and the other based on rule specifications. However, at this time, there is no report in the literature that specifically targets a comparative analysis of these two approaches on aspects such as the relative areas of application, power of expression, and limitations. In this paper we attempt to address this question, presenting both a survey of the two approaches and a critical, comparative analysis.

Journal ArticleDOI
TL;DR: This paper identifies a set of QoS metrics in the context of WS workflows, and proposes a unified probabilistic model for describing QoS values of a broader spectrum of atomic and composite Web services.
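
The TL;DR above gives no detail of the model, but the basic step in any such approach is aggregating per-service QoS into workflow-level QoS. Below is a minimal sketch for a purely sequential composition, assuming independent services (the paper's unified probabilistic model is more general), with invented numbers.

```python
from dataclasses import dataclass

@dataclass
class QoS:
    mean_time: float    # expected response time (s)
    var_time: float     # variance of response time
    reliability: float  # probability of successful completion
    cost: float         # invocation price

def sequence(a: QoS, b: QoS) -> QoS:
    """QoS of invoking a then b, assuming independence."""
    return QoS(
        mean_time=a.mean_time + b.mean_time,        # times add
        var_time=a.var_time + b.var_time,           # variances add if independent
        reliability=a.reliability * b.reliability,  # both must succeed
        cost=a.cost + b.cost,
    )

ws1 = QoS(0.8, 0.04, 0.99, 0.02)
ws2 = QoS(1.5, 0.25, 0.95, 0.05)
print(sequence(ws1, ws2))
# ~ QoS(mean_time=2.3, var_time=0.29, reliability=0.94, cost=0.07)
```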

Journal ArticleDOI
TL;DR: A proper understanding of CPOE as a collaborative effort and of the transformation of health care activities into integrated care programs requires an understanding of how orders are created and processed, how CPOE as part of an integrated system can support the workflow, and how risks affecting patient care can be identified and reduced.

Book ChapterDOI
22 Jul 2007
TL;DR: This case study is the Featured Article (FA) process, one of the best established procedures on Wikipedia, and it is demonstrated how this process blends elements of traditional workflow with peer production.
Abstract: We examine the procedural side of Wikipedia, the well-known internet encyclopedia. Despite the lack of structure in the underlying wiki technology, users abide by hundreds of rules and follow well-defined processes. Our case study is the Featured Article (FA) process, one of the best established procedures on the site. We analyze the FA process through the theoretical framework of commons governance, and demonstrate how this process blends elements of traditional workflow with peer production. We conclude that rather than encouraging anarchy, many aspects of wiki technology lend themselves to the collective creation of formalized process and policy.

Book ChapterDOI
05 Sep 2007
TL;DR: Swaddler analyzes the internal state of a web application and learns the relationships between the application's critical execution points and the application's internal state, and is thereby able to identify attacks that attempt to bring an application into an inconsistent, anomalous state.

Abstract: In recent years, web applications have become tremendously popular, and nowadays they are routinely used in security-critical environments, such as medical, financial, and military systems. As the use of web applications for critical services has increased, the number and sophistication of attacks against these applications have grown as well. Most approaches to the detection of web-based attacks analyze the interaction of a web application with its clients and back-end servers. Even though these approaches can effectively detect and block a number of attacks, there are attacks that cannot be detected only by looking at the external behavior of a web application. In this paper, we present Swaddler, a novel approach to the anomaly-based detection of attacks against web applications. Swaddler analyzes the internal state of a web application and learns the relationships between the application's critical execution points and the application's internal state. By doing this, Swaddler is able to identify attacks that attempt to bring an application into an inconsistent, anomalous state, such as violations of the intended workflow of a web application. We developed a prototype of our approach for the PHP language and we evaluated it with respect to several real-world applications.
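
A much simplified sketch of the Swaddler idea: during training, record profiles of the internal state observed at each critical execution point; during detection, flag states that deviate from the learned profile. The real system instruments the PHP interpreter and uses several statistical anomaly models; the execution point and variable names here are hypothetical.

```python
from collections import defaultdict

profiles = defaultdict(lambda: defaultdict(set))  # point -> variable -> seen values

def train(point, state):
    for var, value in state.items():
        profiles[point][var].add(value)

def is_anomalous(point, state):
    profile = profiles[point]
    # An unseen variable or unseen value at this execution point is an anomaly.
    return any(var not in profile or value not in profile[var]
               for var, value in state.items())

# Training: the normal checkout flow always reaches shipping with cart_paid == True.
train("shipping", {"role": "customer", "cart_paid": True})
train("shipping", {"role": "customer", "cart_paid": True})

# Attack: reaching shipping without paying violates the intended workflow.
print(is_anomalous("shipping", {"role": "customer", "cart_paid": False}))  # True
```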

Journal ArticleDOI
TL;DR: The architecture, which is comprehensive since it is derived from extended requirements taken from a lifecycle perspective, will provide a basis for research and development of process-oriented knowledge management systems.

Patent
07 Feb 2007
TL;DR: An information workflow for a medical diagnostic workstation in which patient data is captured, arranged, and displayed in predetermined formats for a user in the handling of patients; the workflow permits vitals capture and storage and the creation of a comprehensive patient record, and the workstation can operate in a stand-alone or network-connected mode.

Abstract: An information workflow for a medical diagnostic workstation in which patient data is captured, arranged and displayed in predetermined formats for a user in the handling of patients. The workflow permits vitals capture and storage and the creation of a comprehensive patient record; the workstation can operate in a stand-alone or network-connected mode.

Proceedings ArticleDOI
24 May 2007
TL;DR: Nine lessons learned from five representative projects are presented, along with their software engineering implications, to provide insight into the software development environments in this domain.
Abstract: The need for high performance computing applications for computational science and engineering projects is growing rapidly, yet there have been few detailed studies of the software engineering process used for these applications. The DARPA High Productivity Computing Systems Program has sponsored a series of case studies of representative computational science and engineering projects to identify the steps involved in developing such applications (i.e. the life cycle, the workflows, technical challenges, and organizational challenges). Secondary goals were to characterize tool usage and identify enhancements that would increase the programmers' productivity. Finally, these studies were designed to develop a set of lessons learned that can be transferred to the general computational science and engineering community to improve the software engineering process used for their applications. Nine lessons learned from five representative projects are presented, along with their software engineering implications, to provide insight into the software development environments in this domain.

Patent
15 May 2007
TL;DR: In this paper, the authors present a workflow management environment that provides different workflow management perspectives with different views on a variety of reusable workflow components or workflow resources, such as a swim lane view having multiple separate sections that each represents a different performer and a list view that represents the resources in transactional order.
Abstract: Methods and apparatuses enable providing a workflow management environment that provides different workflow management perspectives with different views on a variety of reusable workflow components or workflow resources. The different views may include a swim lane view having multiple separate sections that each represents a different performer, and a list view that represents the resources in transactional order. The workflow management environment defines reusable workflow components and associates the components with the different performers and with each other. The defining of the components and the relationships can define a portion of a workflow.

Proceedings ArticleDOI
19 Sep 2007
TL;DR: This paper proposes a workflow execution planning approach using multi-objective evolutionary algorithms (MOEAs) to generate a set of trade-off scheduling solutions according to users' QoS requirements, and shows that MOEAs are able to find a range of compromise solutions in a short computational time.

Abstract: Utility grids create an infrastructure for enabling users to consume services transparently over a global network. When optimizing workflow execution on utility grids, we need to consider multiple quality of service (QoS) parameters, including service prices and execution time. These optimization objectives may be in conflict. In this paper, we have proposed a workflow execution planning approach using multi-objective evolutionary algorithms (MOEAs). Our goal was to generate a set of trade-off scheduling solutions according to users' QoS requirements. The alternative trade-off solutions offer more flexibility to users when estimating their QoS requirements of workflow executions. Simulation results show that MOEAs are able to find a range of compromise solutions in a short computational time.
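
The "set of trade-off solutions" the paper targets is a Pareto front over conflicting objectives such as execution time and price. A real MOEA evolves a population toward that front; the sketch below shows only the dominance test and filter, with invented (time, cost) values.

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# (execution time in minutes, price in dollars) of candidate schedules
candidates = [(120, 5.0), (90, 9.0), (90, 7.5), (200, 4.0), (130, 6.0)]
print(pareto_front(candidates))  # [(120, 5.0), (90, 7.5), (200, 4.0)]
```

Each surviving point is a compromise the user can pick from, which is the extra flexibility the abstract refers to.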

Journal ArticleDOI
TL;DR: A generalizable cognitive model is developed to represent the intricate workflow applicable to other health care settings, and can be used to identify and characterize medical errors and for error prediction in practice.

Book ChapterDOI
01 Jan 2007
TL;DR: While a variety of graphical workflow composition tools are currently being proposed, none of them is based on standard modeling techniques such as Unified Modeling Language (UML).
Abstract: Most existing Grid application development environments provide the application developer with a nontransparent Grid. Commonly, application developers are explicitly involved in tedious tasks such as selecting software components deployed on specific sites, mapping applications onto the Grid, or selecting appropriate computers for their applications. Moreover, many programming interfaces are either implementation-technology-specific (e.g., based on Web services [24]) or force the application developer to program at a low-level middleware abstraction (e.g., start task, transfer data [22, 153]). While a variety of graphical workflow composition tools are currently being proposed, none of them is based on standard modeling techniques such as the Unified Modeling Language (UML).

Book ChapterDOI
01 Jan 2007
TL;DR: The Condor project began in 1988 and has evolved into a feature-rich batch system that targets high-throughput computing; that is, Condor focuses on providing reliable access to computing over long periods of time instead of highly tuned, high-performance computing for short periods of time or a small number of applications.
Abstract: The Condor project began in 1988 and has evolved into a feature-rich batch system that targets high-throughput computing; that is, Condor ([262], [414]) focuses on providing reliable access to computing over long periods of time instead of highly tuned, high-performance computing for short periods of time or a small number of applications.

Proceedings ArticleDOI
25 Jun 2007
TL;DR: It is argued that actively engaging with a scientist's needs, fears and reward incentives is crucial for success and a rich ecosystem of tools that support the scientists are needed.
Abstract: We present the Taverna workflow workbench and argue that scientific workflow environments need a rich ecosystem of tools that support the scientists' experimental lifecycle. Workflows are scientific objects in their own right, to be exchanged and reused. myExperiment is a new initiative to create a social networking environment for workflow workers. We present the motivation for myExperiment and sketch the proposed capabilities and challenges. We argue that actively engaging with a scientist's needs, fears and reward incentives is crucial for success.

Journal ArticleDOI
01 Mar 2007
TL;DR: This paper proposes an architecture for Web services enabled BPM in C-Commerce, provides technical insights into why Web services can enhance business process coordination, and presents an implementation of a dynamic e-procurement application based on the proposed architecture.
Abstract: Collaborative commerce (C-Commerce) is a set of technologies and business practices that allows companies to build stronger relationships with their trading partners through integrating complex and cross-enterprise processes governed by business logic and rules, as well as workflows. Business Process Management (BPM) is a key element of C-Commerce solutions for complex process coordination. It provides a mechanism to support e-businesses in modeling, deploying, and managing business processes that involve various applications with greater flexibility. Traditional BPM solutions often lack the capability to integrate external applications in that they have very limited support for interoperability. In recent years, Web services have emerged as a promising enabling technology for BPM in support of C-Commerce. Web services offer effective and standard-based means to improve interoperability among different software applications over Internet protocols. This paper aims to give an in-depth analysis of BPM and Web services in the context of C-Commerce. We propose an architecture for Web services enabled BPM in C-Commerce and provide technical insights into why Web services can enhance business process coordination. Finally, an implementation of a dynamic e-procurement application based on the proposed architecture is presented. With the advent of Web service standards and business process integration tools that support them, BPM systems enabled by Web services are empowering the development of more flexible and dynamic C-Commerce.

Proceedings ArticleDOI
10 Dec 2007
TL;DR: In this article, the authors proposed a dynamic critical path (DCP) based workflow scheduling algorithm that determines efficient mapping of tasks by calculating the critical path in the workflow task graph at every step.
Abstract: Effective scheduling is a key concern for the execution of performance-driven grid applications. In this paper, we propose a dynamic critical path (DCP) based workflow scheduling algorithm that determines an efficient mapping of tasks by calculating the critical path in the workflow task graph at every step. It assigns priority to a task in the critical path which is estimated to complete earlier. Using simulation, we have compared the performance of our proposed approach with other existing heuristic and meta-heuristic based scheduling strategies for workflows of different types and sizes. Our results demonstrate that the DCP-based approach can generate better schedules for most types of workflows, irrespective of their size, particularly when resource availability changes frequently.
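
The heart of any DCP-style scheduler is finding the critical path: the longest path through the task graph, whose tasks deserve priority. The sketch below computes it statically for a toy graph with made-up durations; the paper's algorithm recomputes it dynamically as task-to-resource mappings change.

```python
def critical_path(durations, deps):
    """durations: task -> time; deps: task -> prerequisite tasks."""
    order, visited = [], set()

    def visit(t):  # depth-first topological sort
        if t in visited:
            return
        visited.add(t)
        for d in deps[t]:
            visit(d)
        order.append(t)

    for t in durations:
        visit(t)

    longest, parent = {}, {}
    for t in order:  # longest finish time ending at each task
        prev = max(deps[t], key=lambda d: longest[d], default=None)
        longest[t] = durations[t] + (longest[prev] if prev else 0)
        parent[t] = prev

    end = max(longest, key=longest.get)
    path = []
    while end:  # walk back along the longest chain
        path.append(end)
        end = parent[end]
    return list(reversed(path)), max(longest.values())

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(critical_path(durations, deps))  # (['A', 'C', 'D'], 8)
```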

Proceedings Article
22 Jul 2007
TL;DR: A new approach to workflow creation that uses semantic representations to compactly describe complex scientific applications in a data-independent manner, then automatically generates workflows of computations for given data sets, and finally maps them to available computing resources.

Abstract: Scientific workflows are being developed for many domains as a useful paradigm to manage complex scientific computations. In our work, we are challenged with efficiently generating and validating workflows that contain large amounts (hundreds to thousands) of individual computations to be executed over distributed environments. This paper describes a new approach to workflow creation that uses semantic representations to compactly describe complex scientific applications in a data-independent manner, then automatically generates workflows of computations for given data sets, and finally maps them to available computing resources. The semantic representations are used to automatically generate descriptions for each of the thousands of new data products. We interleave the creation of the workflow with its execution, which allows intermediate execution data products to influence the generation of the following portions of the workflow. We have implemented this approach in Wings, a workflow creation system that combines semantic representations with planning techniques. We have used Wings to create workflows of thousands of computations, which are submitted to the Pegasus mapping system for execution over distributed computing environments. We show results on an earthquake simulation workflow that was automatically created with a total number of 24,135 jobs and that executed for a total of 1.9 CPU years.
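
The key idea of data-independent workflow creation is that a compact template expands into thousands of concrete jobs once bound to a data set. Wings does this with semantic representations and planning; the sketch below shows only the expansion step, with hypothetical step and file names.

```python
# A 2-step template described independently of any particular data set.
template = [("align", "{sample}.fastq", "{sample}.bam"),
            ("stats", "{sample}.bam", "{sample}.stats")]

def instantiate(template, samples):
    """Expand the template into concrete (step, input, output) jobs."""
    jobs = []
    for sample in samples:
        for step, inp, out in template:
            jobs.append((step, inp.format(sample=sample),
                         out.format(sample=sample)))
    return jobs

# Two samples expand the 2-step template into 4 concrete jobs.
for job in instantiate(template, ["s1", "s2"]):
    print(job)
```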

Journal ArticleDOI
01 Mar 2007
TL;DR: A framework for escalations is proposed that draws on established principles from the workflow management field and identifies and classifies a number of escalation mechanisms, such as changing the routing of work, changing the work distribution, or changing the requirements with respect to available data.
Abstract: Decision making in process-aware information systems involves build-time and run-time decisions. At build-time, idealized process models are designed based on the organization's objectives, infrastructure, context, constraints, etc. At run-time, this idealized view is often broken. In particular, process models generally assume that planned activities happen within a certain period. When such assumptions are not fulfilled, users must make decisions regarding alternative arrangements to achieve the goal of completing the process within its expected time frame or to minimize tardiness. We refer to the required decisions as escalations. This paper proposes a framework for escalations that draws on established principles from the workflow management field. The paper identifies and classifies a number of escalation mechanisms such as changing the routing of work, changing the work distribution or changing the requirements with respect to available data. A case study and a simulation experiment are used to illustrate and evaluate these mechanisms.
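
One of the escalation mechanisms the paper classifies, changing the work distribution, can be pictured as a simple run-time rule: when a work item misses its deadline, offer it to a wider group. The sketch below is illustrative only; field names, roles, and thresholds are invented.

```python
import time

def escalate_if_overdue(item, now, deadline, fallback_group):
    """If a work item misses its deadline, widen its distribution."""
    if now > deadline and not item.get("escalated"):
        item["offered_to"] = fallback_group  # change the work distribution
        item["escalated"] = True
    return item

work_item = {"task": "approve_claim", "offered_to": ["senior_adjuster"]}
deadline = time.time() - 3600  # one hour overdue

print(escalate_if_overdue(work_item, time.time(), deadline,
                          ["senior_adjuster", "all_adjusters"]))
```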

Proceedings ArticleDOI
14 May 2007
TL;DR: This paper examines the issue of optimizing disk usage and of scheduling large-scale scientific workflows onto distributed resources where the workflows are data-intensive, requiring large amounts of data storage, and where the resources have limited storage; it also presents an algorithm that can improve overall workflow performance.

Abstract: In this paper we examine the issue of optimizing disk usage and of scheduling large-scale scientific workflows onto distributed resources where the workflows are data-intensive, requiring large amounts of data storage, and where the resources have limited storage resources. Our approach is two-fold: we minimize the amount of space a workflow requires during execution by removing data files at runtime when they are no longer required, and we schedule the workflows in a way that assures that the amount of data required and generated by the workflow fits onto the individual resources. For a workflow used by gravitational-wave physicists, we were able to reduce the amount of storage required by the workflow by up to 57%. We also designed an algorithm that can not only find feasible solutions for workflow task assignment to resources in disk-space constrained environments, but can also improve the overall workflow performance.
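
The first half of the paper's approach, removing data files at runtime once no remaining task needs them, amounts to reference-counting files by their outstanding consumers. A minimal sketch with hypothetical file and task names:

```python
consumers = {  # data file -> tasks that still need it
    "raw.dat":      {"calibrate", "plot"},
    "template.dat": {"calibrate"},
}

def task_finished(task, storage):
    """Release files a completed task consumed; delete unneeded ones."""
    for f in list(consumers):
        consumers[f].discard(task)
        if not consumers[f]:   # no remaining consumers
            storage.remove(f)  # reclaim disk space at runtime
            del consumers[f]

storage = {"raw.dat", "template.dat"}
task_finished("calibrate", storage)
print(storage)  # {'raw.dat'}: template.dat deleted, raw.dat still needed by plot
```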

Patent
30 Oct 2007
TL;DR: In this paper, a method of performing production operations of an oilfield having at least one process facility and at least 1 wellsite operatively connected to the facility, each of which having a wellbore penetrating a subterranean formation for extracting fluid from an underground reservoir therein, is described.
Abstract: The invention relates to a method of performing production operations of an oilfield having at least one process facility and at least one wellsite operatively connected thereto, each at least one wellsite having a wellbore penetrating a subterranean formation for extracting fluid from an underground reservoir therein. The method steps include receiving a number of steps each from at least one of a number of collaborators, specifying an automated workflow including the number of steps and for generating a first well plan, obtaining first data associated with the production operations, applying the automated workflow to the first data to generate the first well plan, adjusting the production operations based on the first well plan, and modifying at least one of the number of steps based on input from at least one of the number of collaborators to generate an updated automated workflow.