
Showing papers on "Workflow published in 2006"


Journal ArticleDOI
TL;DR: Kepler, as mentioned in this paper, is a scientific workflow system currently under development across a number of scientific data management projects; it is a community-driven, open source project that welcomes related projects and new contributors.
Abstract: Many scientific disciplines are now data and information driven, and new scientific knowledge is often gained by scientists putting together data analysis and knowledge discovery “pipelines”. A related trend is that more and more scientific communities realize the benefits of sharing their data and computational services, and are thus contributing to a distributed data and computational community infrastructure (a.k.a. “the Grid”). However, this infrastructure is only a means to an end and ideally scientists should be bothered little with its existence. The goal is for scientists to focus on development and use of what we call scientific workflows. These are networks of analytical steps that may involve, e.g., database access and querying steps, data analysis and mining steps, and many other steps including computationally intensive jobs on high performance cluster computers. In this paper we describe characteristics of and requirements for scientific workflows as identified in a number of our application projects. We then elaborate on Kepler, a particular scientific workflow system, currently under development across a number of scientific data management projects. We describe some key features of Kepler and its underlying Ptolemy II system, planned extensions, and areas of future research. Kepler is a community-driven, open source project, and we always welcome related projects and new contributors to join.

1,926 citations


Journal ArticleDOI
TL;DR: The Taverna Workbench, as discussed by the authors, was developed by the myGrid project for the composition and execution of workflows for the life sciences community.
Abstract: Life sciences research is based on individuals, often with diverse skills, assembled into research groups. These groups use their specialist expertise to address scientific problems. The in silico experiments undertaken by these research groups can be represented as workflows involving the co-ordinated use of analysis programs and information repositories that may be globally distributed. With regards to Grid computing, the requirements relate to the sharing of analysis and information resources rather than sharing computational power. The myGrid project has developed the Taverna Workbench for the composition and execution of workflows for the life sciences community. This experience paper describes lessons learnt during the development of Taverna. A common theme is the importance of understanding how workflows fit into the scientists' experimental context. The lessons reflect an evolving understanding of life scientists' requirements on a workflow environment, which is relevant to other areas of data intensive and exploratory science.

729 citations


01 Jan 2006
TL;DR: This paper presents the first systematic review of the original twenty control-flow patterns and provides a formal description of each of them in the form of a Coloured Petri-Net (CPN) model, and identifies twenty-three new patterns relevant to the control-flow perspective.
Abstract: The Workflow Patterns Initiative was established with the aim of delineating the fundamental requirements that arise during business process modelling on a recurring basis and describing them in an imperative way. The first deliverable of this research project was a set of twenty patterns describing the control-flow perspective of workflow systems. Since their release, these patterns have been widely used by practitioners, vendors and academics alike in the selection, design and development of workflow systems [vdAtHKB03]. This paper presents the first systematic review of the original twenty control-flow patterns and provides a formal description of each of them in the form of a Coloured Petri-Net (CPN) model. It also identifies twenty-three new patterns relevant to the control-flow perspective. Detailed context conditions and evaluation criteria are presented for each pattern and their implementation is assessed in fourteen commercial offerings including workflow and case handling systems, business process modelling formalisms and business process execution languages.

669 citations


Book ChapterDOI
04 Sep 2006
TL;DR: This work proposes a fundamental paradigm shift for flexible process management, introducing the ConDec language for modelling and enacting dynamic business processes, which is based on temporal logic rather than an imperative process modelling language.
Abstract: Management of dynamic processes is an important issue in rapidly changing organizations. Workflow management systems are systems that use detailed process models to drive the business processes. Current business process modelling languages and models are of imperative nature – they strictly prescribe how to work. Systems that allow users to maneuver within the process model or even change the model while working are considered to be the most suitable for dynamic process management. However, in many companies it is not realistic to expect that end-users are able to change their processes. Moreover, the imperative nature of these languages forces designers to over-specify processes, which results in frequent changes. We propose a fundamental paradigm shift for flexible process management and propose a more declarative approach. Declarative models specify what should be done without specifying how it should be done. We propose the ConDec language for modelling and enacting dynamic business processes. ConDec is based on temporal logic rather than some imperative process modelling language.
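
To give a flavour of the declarative style, constraints of the kind ConDec supports can be written directly as temporal-logic formulas. The templates below (response, precedence, not co-existence) are standard ConDec constraint types, but the concrete activity names are hypothetical and not taken from the paper.

```latex
% Hypothetical activities: a = receive order, b = ship goods, c = cancel order.
% Response: every received order is eventually shipped or cancelled.
\Box\bigl(a \rightarrow \Diamond(b \lor c)\bigr)
% Precedence: goods are not shipped until an order has been received.
\neg b \;\mathcal{W}\; a
% Not co-existence: a single case is never both shipped and cancelled.
\neg\bigl(\Diamond b \land \Diamond c\bigr)
```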

566 citations


Book ChapterDOI
05 Sep 2006
TL;DR: This paper examines the suitability of the Business Process Modelling Notation for business process modelling, using the Workflow Patterns as an evaluation framework; it is a sequel to previous work in which languages including BPEL and UML Activity Diagrams were evaluated.
Abstract: In this paper we examine the suitability of the Business Process Modelling Notation (BPMN) for business process modelling, using the Workflow Patterns as an evaluation framework. The Workflow Patterns are a collection of patterns developed for assessing control-flow, data and resource capabilities in the area of Process Aware Information Systems (PAISs). In doing so, we provide a comprehensive evaluation of the capabilities of BPMN, and its strengths and weaknesses when utilised for business process modelling. The analysis provided for BPMN is part of a larger effort aiming at an unbiased and vendor-independent survey of the suitability and the expressive power of some mainstream process modelling languages. It is a sequel to previous work in which languages including BPEL and UML Activity Diagrams were evaluated.

408 citations


Journal ArticleDOI
TL;DR: A genetic algorithm approach is presented to address scheduling optimization problems in workflow applications, based on two QoS constraints: deadline and budget.
Abstract: Grid technologies have progressed towards a service-oriented paradigm that enables a new way of service provisioning based on utility computing models, which are capable of supporting diverse computing services. This enables scientific applications to take advantage of computing resources distributed worldwide to enhance capability and performance. Many scientific applications in areas such as bioinformatics and astronomy require workflow processing in which tasks are executed based on their control or data dependencies. Scheduling such interdependent tasks on utility Grid environments needs to consider users' QoS requirements. In this paper, we present a genetic algorithm approach to address scheduling optimization problems in workflow applications, based on two QoS constraints, deadline and budget.
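
As a rough illustration of the approach (not the paper's actual encoding), a genetic algorithm for this problem can represent each candidate schedule as a task-to-resource assignment and score it by cost plus penalties for violating the deadline and budget constraints. All task names, runtimes and prices below are made-up values, and tasks are treated as a simple sequence rather than a dependency graph.

```python
import random

# Hypothetical per-task execution time and price on each of two candidate resources.
TASKS = ["align", "filter", "analyze", "report"]
TIME = {"align": [10, 6], "filter": [4, 3], "analyze": [20, 12], "report": [2, 2]}
COST = {"align": [1.0, 2.5], "filter": [0.5, 1.0], "analyze": [2.0, 5.0], "report": [0.2, 0.4]}
DEADLINE, BUDGET = 30.0, 8.0   # the two QoS constraints: deadline and budget

def evaluate(chromosome):
    """A chromosome maps each task to a resource index; tasks run sequentially here."""
    makespan = sum(TIME[t][r] for t, r in zip(TASKS, chromosome))
    cost = sum(COST[t][r] for t, r in zip(TASKS, chromosome))
    penalty = max(0.0, makespan - DEADLINE) + max(0.0, cost - BUDGET)
    return cost + 10.0 * penalty          # lower is better

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(c):
    c = list(c)
    i = random.randrange(len(c))
    c[i] = random.randrange(len(TIME[TASKS[i]]))
    return tuple(c)

def genetic_schedule(generations=200, pop_size=20):
    pop = [tuple(random.randrange(2) for _ in TASKS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate)
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=evaluate)

print(genetic_schedule())   # typically converges to (0, 0, 1, 0), the cheapest assignment meeting both constraints
```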

392 citations


Book ChapterDOI
08 Sep 2006
TL;DR: DecSerFlow, as mentioned in this paper, is a declarative service flow language that can be used to specify, enact, and monitor service flows; it is extensible (i.e., constructs can be added without changing the engine or semantical basis) and can be used to enforce or to check the conformance of service flows.
Abstract: The need for process support in the context of web services has triggered the development of many languages, systems, and standards. Industry has been developing software solutions and proposing standards such as BPEL, while researchers have been advocating the use of formal methods such as Petri nets and π-calculus. The languages developed for service flows, i.e., process specification languages for web services, have adopted many concepts from classical workflow management systems. As a result, these languages are rather procedural and this does not fit well with the autonomous nature of services. Therefore, we propose DecSerFlow as a Declarative Service Flow Language. DecSerFlow can be used to specify, enact, and monitor service flows. The language is extendible (i.e., constructs can be added without changing the engine or semantical basis) and can be used to enforce or to check the conformance of service flows. Although the language has an appealing graphical representation, it is grounded in temporal logic.
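
Because the language is grounded in temporal logic, conformance of a recorded execution trace against a constraint can be checked mechanically. The following minimal sketch (an illustration of the idea, not the DecSerFlow engine) evaluates the common "response" constraint, i.e. every occurrence of activity a must eventually be followed by activity b, over a finite trace.

```python
def satisfies_response(trace, a, b):
    """True iff every occurrence of `a` is later followed by `b` (LTL: G(a -> F b))."""
    expecting_b = False
    for event in trace:
        if event == a:
            expecting_b = True
        elif event == b:
            expecting_b = False
    return not expecting_b

# Hypothetical service flow: every received request must eventually be confirmed.
print(satisfies_response(["receive", "log", "confirm"], "receive", "confirm"))      # True
print(satisfies_response(["receive", "log", "receive", "confirm", "receive"],
                         "receive", "confirm"))                                     # False
```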

386 citations


Book ChapterDOI
03 May 2006
TL;DR: This paper describes a complete framework for data and process provenance in the Kepler Scientific Workflow System and introduces how generic provenance capture can be facilitated in Kepler's actor-oriented workflow environment.
Abstract: In many data-driven applications, analysis needs to be performed on scientific information obtained from several sources and generated by computations on distributed resources. Systematic analysis of this scientific information unleashes a growing need for automated data-driven applications that also can keep track of the provenance of the data and processes with little user interaction and overhead. Such data analysis can be facilitated by the recent advancements in scientific workflow systems. A major benefit of using scientific workflow systems is the ability to make provenance collection a part of the workflow. Specifically, provenance should include not only the standard data lineage information but also information about the context in which the workflow was used, the execution that processed the data, and the evolution of the workflow design. In this paper we describe a complete framework for data and process provenance in the Kepler Scientific Workflow System. We outline the requirements and issues related to data and workflow provenance in a multi-disciplinary workflow system and introduce how generic provenance capture can be facilitated in Kepler's actor-oriented workflow environment. We also describe the usage of the stored provenance information for efficient rerun of scientific workflows.
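
The rerun idea can be made concrete with a small cache keyed on a step's identity and a digest of its inputs, so that steps whose inputs are unchanged are skipped when the workflow is executed again. The sketch below illustrates that general technique only; it is not Kepler's provenance store or API.

```python
import hashlib
import json

_provenance_cache = {}   # (step name, input digest) -> recorded output

def _digest(obj):
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def run_step(name, func, inputs):
    """Re-execute a workflow step only if its recorded inputs have changed."""
    key = (name, _digest(inputs))
    if key in _provenance_cache:
        print(f"skipping {name}: provenance shows identical inputs")
        return _provenance_cache[key]
    result = func(inputs)
    _provenance_cache[key] = result
    return result

cleaned = run_step("clean", lambda d: [x for x in d if x is not None], [1, None, 3])
mean = run_step("mean", lambda d: sum(d) / len(d), cleaned)
# Rerunning with the same raw data reuses both recorded results instead of recomputing.
cleaned = run_step("clean", lambda d: [x for x in d if x is not None], [1, None, 3])
```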

322 citations


Journal ArticleDOI
TL;DR: This paper discusses a real-world application scenario that uses three distinct types of workflow within the Triana problem-solving environment: serial scientific workflows for the data processing of gravitational wave signals; job submission workflows that execute Triana services on a testbed; and monitoring workflows that examine and modify the behaviour of the executing application.
Abstract: In this paper, we discuss a real-world application scenario that uses three distinct types of workflow within the Triana problem-solving environment: serial scientific workflow for the data processing of gravitational wave signals; job submission workflows that execute Triana services on a testbed; and monitoring workflows that examine and modify the behaviour of the executing application. We briefly describe the Triana distribution mechanisms and the underlying architectures that we can support. Our middleware independent abstraction layer, called the Grid Application Prototype (GAP), enables us to advertise, discover and communicate with Web and peer-to-peer (P2P) services. We show how gravitational wave search algorithms have been implemented to distribute both the search computation and data across the European GridLab testbed, using a combination of Web services, Globus interaction and P2P infrastructures.

294 citations


Proceedings ArticleDOI
18 Sep 2006
TL;DR: A heuristic is presented that performs extremely well while providing excellent (almost optimal) solutions to the general optimization problem of how to select Web services for each task so that the overall QoS and cost requirements of the composition are satisfied.
Abstract: This paper discusses the Quality of Service (QoS)-aware composition of Web Services. The work is based on the assumption that for each task in a workflow a set of alternative Web Services with similar functionality is available and that these Web Services have different QoS parameters and costs. This leads to the general optimization problem of how to select Web Services for each task so that the overall QoS and cost requirements of the composition are satisfied. Current proposals use exact algorithms or complex heuristics (e.g. genetic algorithms) to solve this problem. An actual implementation of a workflow engine (like our WSQoSX architecture), however, has to be able to solve these optimization problems in real-time and under heavy load. Therefore, we present a heuristic that performs extremely well while providing excellent (almost optimal) solutions. Using simulations, we show that in most cases our heuristic is able to calculate solutions that come as close as 99% to the optimal solution while taking less than 2% of the time of a standard exact algorithm. Further, we also investigate how much and under which circumstances the solution obtained by our heuristic can be further improved by other heuristics.
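
The WSQoSX heuristic itself is not reproduced here; the sketch below only illustrates the shape of the selection problem with a naive greedy rule that picks, for each task, the cheapest candidate service whose response time keeps the running total inside an end-to-end latency bound. Service names, prices and latencies are invented for the example.

```python
# Hypothetical candidates per workflow task: (service name, cost, response time in ms).
CANDIDATES = {
    "credit_check": [("svcA", 0.05, 120), ("svcB", 0.02, 400)],
    "risk_score":   [("svcC", 0.10, 200), ("svcD", 0.04, 900)],
    "notify":       [("svcE", 0.01, 150)],
}
MAX_TOTAL_LATENCY = 1200   # end-to-end QoS bound for the (sequential) composition

def greedy_selection(candidates, latency_budget):
    selection, total_cost, total_latency = {}, 0.0, 0
    for task, options in candidates.items():
        feasible = [o for o in options if total_latency + o[2] <= latency_budget]
        if not feasible:
            raise ValueError(f"no feasible service for task {task!r}")
        name, cost, latency = min(feasible, key=lambda o: o[1])   # cheapest feasible option
        selection[task] = name
        total_cost += cost
        total_latency += latency
    return selection, total_cost, total_latency

print(greedy_selection(CANDIDATES, MAX_TOTAL_LATENCY))
# selects svcB, svcC, svcE: total cost about 0.13, total latency 750 ms
```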

273 citations


Book ChapterDOI
03 May 2006
TL;DR: This paper gives an overview of VisTrails, a system that provides an infrastructure for systematically capturing detailed provenance and streamlining the data exploration process; VisTrails simplifies data exploration by allowing scientists to easily navigate through the space of workflows and parameter settings for an exploration task.
Abstract: We give an overview of VisTrails, a system that provides an infrastructure for systematically capturing detailed provenance and streamlining the data exploration process. A key feature that sets VisTrails apart from previous visualization and scientific workflow systems is a novel action-based mechanism that uniformly captures provenance for data products and workflows used to generate these products. This mechanism not only ensures reproducibility of results, but it also simplifies data exploration by allowing scientists to easily navigate through the space of workflows and parameter settings for an exploration task.

Journal Article
TL;DR: This paper presents a classification framework for workflow exception handling in the form of patterns that is independent of specific modelling approaches or technologies and provides an objective means of delineating the exception-handling capabilities of specific workflow systems.
Abstract: This paper presents a classification framework for workflow exception handling in the form of patterns. This framework is independent of specific modelling approaches or technologies and as such provides an objective means of delineating the exception-handling capabilities of specific workflow systems. It is subsequently used to assess the level of exceptions support provided by eight commercial workflow systems and business process modelling and execution languages. On the basis of these investigations, we propose a graphical, tool-independent language for defining exception handling strategies in workflows.

Book
01 Nov 2006
TL;DR: Clinical decision support knowledge is described as a universal commodity rather than a proprietary resource, and attempts to establish consortia and overcome barriers are discussed.
Abstract: Contents: Introduction (clinical decision support: what it is and why it is important; a brief history of the field; types of computer-aided decision making). Case studies (the Regenstrief experience; the Brigham experience; the LDS HELP experience; lessons learned and summary). Where are we now? (penetration; limitations; new motivations and interest). Where does the knowledge come from? (expert knowledge; data mining and predictive modeling; evidence-based medicine). Problems in developing decision support applications (representation of the knowledge; integration with host environments; managing the knowledge: authoring and update). Standards efforts (Arden Syntax; guidelines; GELLO; vocabularies and data models; process/workflow models). Issues (top-down vs. bottom-up approaches; legacy investments; differences among models and purposes; institutional KM challenges). Getting a handle on the problem (content management and collaborative authoring/editing; common representation; common interfaces/transaction services; tools for knowledge management). A timetable of opportunities (rules knowledge; general expressions/calculation/logic; knowledge element groups; order sets; reports and data entry forms). Prospects for dissemination and sharing (knowledge as a universal commodity, not a proprietary resource; attempts to establish consortia; what is needed to overcome barriers).

Journal IssueDOI
TL;DR: Characteristics of and requirements for scientific workflows as identified in a number of application projects are described, and some key features of Kepler and its underlying Ptolemy II system, planned extensions, and areas of future research are described.
Abstract: Many scientific disciplines are now data and information driven, and new scientific knowledge is often gained by scientists putting together data analysis and knowledge discovery ‘pipelines’. A related trend is that more and more scientific communities realize the benefits of sharing their data and computational services, and are thus contributing to a distributed data and computational community infrastructure (a.k.a. ‘the Grid’). However, this infrastructure is only a means to an end and ideally scientists should not be too concerned with its existence. The goal is for scientists to focus on development and use of what we call scientific workflows. These are networks of analytical steps that may involve, e.g., database access and querying steps, data analysis and mining steps, and many other steps including computationally intensive jobs on high-performance cluster computers. In this paper we describe characteristics of and requirements for scientific workflows as identified in a number of our application projects. We then elaborate on Kepler, a particular scientific workflow system, currently under development across a number of scientific data management projects. We describe some key features of Kepler and its underlying Ptolemy II system, planned extensions, and areas of future research. Kepler is a community-driven, open source project, and we always welcome related projects and new contributors to join. Copyright © 2005 John Wiley & Sons, Ltd.

Book ChapterDOI
05 Jun 2006
TL;DR: In this article, a classification framework for workflow exception handling in the form of patterns is presented, which is independent of specific modelling approaches or technologies and as such provides an objective means of delineating the exception-handling capabilities of specific workflow systems.
Abstract: This paper presents a classification framework for workflow exception handling in the form of patterns. This framework is independent of specific modelling approaches or technologies and as such provides an objective means of delineating the exception-handling capabilities of specific workflow systems. It is subsequently used to assess the level of exceptions support provided by eight commercial workflow systems and business process modelling and execution languages. On the basis of these investigations, we propose a graphical, tool-independent language for defining exception handling strategies in workflows.

Book ChapterDOI
28 Aug 2006
TL;DR: This case study demonstrates that complex biological workflows can be orchestrated easily using Web services as building blocks and Triana as the workflow engine.
Abstract: In life sciences, scientists are confronted with an exponential growth of biological data, especially in the genomics and proteomics area. The efficient management and use of these data, and its transformation into knowledge are basic requirements for biological research. Therefore, integration of diverse applications and data from geographically distributed computing resources will become a major issue. We will present the status of our efforts for the realization of an automated protein prediction pipeline as an example of a complex biological workflow scenario in a Grid environment based on Web services. This case study demonstrates the ability of an easy orchestration of complex biological workflows based on Web services as building blocks and Triana as workflow engine.

Journal ArticleDOI
TL;DR: This paper provides a data-flow perspective for detecting data-flow anomalies such as missing data, redundant data, and potential data conflicts and includes two basic components: data-flow specification and data-flow analysis; these components add more analytical rigor to business process management.
Abstract: Workflow technology has become a standard solution for managing increasingly complex business processes. Successful business process management depends on effective workflow modeling and analysis. One of the important aspects of workflow analysis is the data-flow perspective because, given a syntactically correct process sequence, errors can still occur during workflow execution due to incorrect data-flow specifications. However, there have been only scant treatments of the data-flow perspective in the literature and no formal methodologies are available for systematically discovering data-flow errors in a workflow model. As an indication of this research gap, existing commercial workflow management systems do not provide tools for data-flow analysis at design time. In this paper, we provide a data-flow perspective for detecting data-flow anomalies such as missing data, redundant data, and potential data conflicts. Our data-flow framework includes two basic components: data-flow specification and data-flow analysis; these components add more analytical rigor to business process management.
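
A minimal sketch of the kind of static check this perspective enables is shown below; the task model and data names are hypothetical and much simpler than the paper's formalism. Each task's declared inputs are compared against the data produced by earlier tasks to flag missing data, and data that is produced but never read is flagged as redundant.

```python
# Hypothetical sequential workflow: each task declares the data items it reads and writes.
WORKFLOW = [
    {"task": "receive_claim", "reads": set(),               "writes": {"claim"}},
    {"task": "assess_damage", "reads": {"claim", "photos"}, "writes": {"estimate"}},
    {"task": "approve",       "reads": {"estimate"},        "writes": {"decision", "memo"}},
    {"task": "notify_client", "reads": {"decision"},        "writes": set()},
]

def find_dataflow_anomalies(workflow):
    available, ever_read, missing = set(), set(), []
    for step in workflow:
        for item in step["reads"] - available:     # input that no earlier task produced
            missing.append((step["task"], item))
        ever_read |= step["reads"]
        available |= step["writes"]
    redundant = available - ever_read              # produced but never consumed
    return missing, redundant

missing, redundant = find_dataflow_anomalies(WORKFLOW)
print("missing data:", missing)       # [('assess_damage', 'photos')]
print("redundant data:", redundant)   # {'memo'}
```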

Patent
12 Jul 2006
TL;DR: In this patent, a method processor in a mobile device uses a workflow engine to load modules for execution according to a workflow that specifies the execution flow directions based on the outcomes of the modules.
Abstract: Methods and apparatuses to enable the development, deployment and update of composite applications on mobile devices. In one embodiment, a method processor in a mobile device uses a workflow engine to load modules for execution according to a workflow that specifies the execution flow directions based on the outcomes of the modules. Detached object managers can be used to manage locally data that are checked out from data sources. An object manager can maintain multiple versions of the data locally, receive changes, submit changes, and/or detect conflicts. Conflicts can be resolved over time. Using a check-out check-in model, different devices can work on the same data without having to synchronize with a server sequentially. Data and workflow can be packaged together for transmission over a sometimes-connected network (e.g., via email) such that a method processor does not have to wait for a response if the network connection is not available.

Journal ArticleDOI
01 Feb 2006
TL;DR: The approach allows for partial visibility of workflows and their resources, thus providing powerful ways for inter-organizational workflow configuration, and provides workflow participants with the freedom to change their workflows without changing their roles in the cooperation.
Abstract: This paper presents a novel approach to inter-organizational workflow cooperation. Our goal is to provide support for organizations which are involved in a shared but not pre-modeled cooperative workflow across organizational boundaries. Our approach allows for partial visibility of workflows and their resources, thus providing powerful ways for inter-organizational workflow configuration. Varying degrees of visibility of workflows enable organizations to retain required levels of privacy and security of internal workflows. Our presented view concept provides a high degree of flexibility for participating organizations, since internal structures of collaborative workflows may be adapted without changes in the inter-organizational workflows. Furthermore, we provide workflow participants with the freedom to change their workflows without changing their roles in the cooperation. This increases flexibility and is an important step to increase efficiency as well as reduction in costs for inter-organizational workflows. The presented approach is inspired by the Service-oriented Architecture (SOA). Accordingly, our approach consists of three steps: workflow advertisement, workflow interconnection, and workflow cooperation.

Journal ArticleDOI
TL;DR: Needs that are still being faced include increased attention to software usability testing and engineering to enhance the user-friendliness of metadata management software, new capital investments in ecological data archives, and increasing the metadata management benefit–cost ratio for the average scientist via incentives and enabling tools.

Journal ArticleDOI
TL;DR: It is demonstrated that SciTegic's methods for molecular fingerprints, molecular similarity, molecular clustering, maximal common subgraph search and Bayesian learning are well suited to a wide variety of tasks.
Abstract: Workflow technology is being increasingly applied in discovery information to organize and analyze data. SciTegic's Pipeline Pilot is a chemically intelligent implementation of a workflow technology known as data pipelining. It allows scientists to construct and execute workflows using components that encapsulate many cheminformatics based algorithms. In this paper we review SciTegic's methodology for molecular fingerprints, molecular similarity, molecular clustering, maximal common subgraph search and Bayesian learning. Case studies are described showing the application of these methods to the analysis of discovery data such as chemical series and high throughput screening results. The paper demonstrates that the methods are well suited to a wide variety of tasks such as building and applying predictive models of screening data, identifying molecules for lead optimization and the organization of molecules into families with structural commonality.
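
For context, molecular similarity over bit-string fingerprints is conventionally measured with the Tanimoto coefficient: the number of features two molecules share divided by the number of features set in either. The toy sketch below shows that calculation and is not Pipeline Pilot code.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient for fingerprints given as sets of 'on' bit positions."""
    shared = len(fp_a & fp_b)
    return shared / (len(fp_a) + len(fp_b) - shared)

# Purely illustrative fingerprints (sets of hashed substructure features).
molecule_a = {3, 17, 42, 97, 120}
molecule_b = {3, 17, 42, 205}
print(round(tanimoto(molecule_a, molecule_b), 3))   # 0.5
```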

Book ChapterDOI
03 May 2006
TL;DR: The concept of a provenance query is defined, and techniques that allow scoped provenance queries to be performed are described.
Abstract: The provenance of entities, whether electronic data or physical artefacts, is crucial information in practically all domains, including science, business and art. The increased use of software in automating activities provides the opportunity to add greatly to the amount we can know about an entity's history and the process by which it came to be as it is. However, it also presents difficulties: querying for the provenance of an entity could potentially return detailed information stretching back to the beginning of time, and most of it irrelevant to the querier. In this paper, we define the concept of provenance query and describe techniques that allow us to perform scoped provenance queries.
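
One simple way to scope such a query is to bound how far back the lineage graph is traversed. The sketch below walks a hypothetical "derived from" relation breadth-first up to a fixed depth; it illustrates the idea of scoping rather than the query model defined in the paper.

```python
from collections import deque

# Hypothetical provenance graph: each entity maps to the entities it was directly derived from.
DERIVED_FROM = {
    "figure.png":  ["stats.csv"],
    "stats.csv":   ["cleaned.csv", "analysis.py"],
    "cleaned.csv": ["raw_2005.dat", "raw_2006.dat"],
}

def scoped_provenance(entity, graph, max_depth):
    """Return ancestors of `entity` at most `max_depth` derivation steps away."""
    result, frontier = set(), deque([(entity, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for parent in graph.get(node, []):
            if parent not in result:
                result.add(parent)
                frontier.append((parent, depth + 1))
    return result

print(scoped_provenance("figure.png", DERIVED_FROM, max_depth=2))
# {'stats.csv', 'cleaned.csv', 'analysis.py'} -- the raw files fall outside the scope
```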

Proceedings ArticleDOI
18 Sep 2006
TL;DR: A framework, based on a loosely-coupled publish-subscribe architecture for propagating provenance activities, satisfies the needs of detailed provenance collection while a performance evaluation of a prototype finds a minimal performance overhead.
Abstract: The increasing ability for the earth sciences to sense the world around us is resulting in a growing need for data-driven applications that are under the control of data-centric workflows composed of grid- and web-services. The focus of our work is on provenance collection for these workflows, necessary to validate the workflow and to determine quality of generated data products. The challenge we address is to record uniform and usable provenance metadata that meets the domain needs while minimizing the modification burden on the service authors and the performance overhead on the workflow engine and the services. The framework, based on a loosely-coupled publish-subscribe architecture for propagating provenance activities, satisfies the needs of detailed provenance collection while a performance evaluation of a prototype finds a minimal performance overhead (in the range of 1% for an eight-service workflow using 271 data products).
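
To make the loosely coupled design concrete: an instrumented service only has to publish a small provenance record to a topic, and a separate collector subscribed to that topic persists the records asynchronously. The sketch below uses an in-process queue as a stand-in for a real messaging broker and is an illustration of the pattern, not the authors' framework.

```python
import json
import queue
import time

provenance_topic = queue.Queue()   # stand-in for a publish-subscribe broker

def publish_activity(service, activity, inputs, outputs):
    """The single call an instrumented service makes per activity."""
    record = {"service": service, "activity": activity,
              "inputs": inputs, "outputs": outputs, "timestamp": time.time()}
    provenance_topic.put(json.dumps(record))

def drain_collector():
    """The provenance collector, subscribed to the topic, stores records off the critical path."""
    while not provenance_topic.empty():
        print("stored:", provenance_topic.get())

publish_activity("resample-service", "regrid", ["radar_scan_042.nc"], ["regridded_042.nc"])
drain_collector()
```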

Proceedings ArticleDOI
17 Jul 2006
TL;DR: A layer-based approach to creating frameworks for repeatable, white-box BPEL unit testing is proposed, which is used for the development of a new testing framework that supports automated test execution and offers test management capabilities in a standardized and open way via well-defined interfaces.
Abstract: The Business Process Execution Language (BPEL) is emerging as the new standard in Web service composition. As more and more workflows are modelled using BPEL, unit-testing these compositions becomes increasingly important. However, little research has been done in this area and no frameworks comparable to the xUnit family are available. In this paper, we propose a layer-based approach to creating frameworks for repeatable, white-box BPEL unit testing, which we use for the development of a new testing framework. This framework uses a specialized BPEL-level testing language to describe interactions with a BPEL process to be carried out in a test case. The framework supports automated test execution and offers test management capabilities in a standardized and open way via well-defined interfaces -- even to third-party applications.

Proceedings ArticleDOI
20 Aug 2006
TL;DR: A prototype application deployed at the U.S. National Science Foundation for assisting program directors in identifying reviewers for proposals extracts information from the full text of proposals to learn about both the topics of the proposals and the expertise of reviewers.
Abstract: In this paper, we discuss a prototype application deployed at the U.S. National Science Foundation for assisting program directors in identifying reviewers for proposals. The application helps program directors sort proposals into panels and find reviewers for proposals. To accomplish these tasks, it extracts information from the full text of proposals both to learn about the topics of proposals and the expertise of reviewers. We discuss a variety of alternatives that were explored, the solution that was implemented, and the experience in using the solution within the workflow of NSF.
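
A rough sketch of the underlying text-matching idea (not the method actually deployed at NSF): represent a proposal and each reviewer's prior publications as term-frequency vectors and rank reviewers by cosine similarity. All names and texts below are invented.

```python
import math
import re
from collections import Counter

def term_vector(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

proposal = "Adaptive workflow scheduling for grid-based bioinformatics pipelines"
reviewers = {   # hypothetical reviewer profiles built from their publication abstracts
    "reviewer_1": "Grid scheduling heuristics and workflow management for scientific pipelines",
    "reviewer_2": "Wireless sensor network routing protocols and energy models",
}

p_vec = term_vector(proposal)
ranking = sorted(reviewers, key=lambda r: cosine(p_vec, term_vector(reviewers[r])), reverse=True)
print(ranking)   # ['reviewer_1', 'reviewer_2'] -- the grid-workflow expert ranks first
```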

Journal ArticleDOI
TL;DR: A distributed system architecture that utilizes dominant state-of-the-art standard technologies, such as workflows, ontologies, and web services, in order to address the need for interoperability in the industrial enterprise environment in an efficient way is presented.
Abstract: The need for interoperability is prominent in the industrial enterprise environment. Different applications and systems that cover the overall range of the industrial infrastructure from the field to the enterprise level need to interoperate. This quest is driven by the enterprise need for greater flexibility and for the wider possible integration of the enterprise systems. This paper presents a distributed system architecture that utilizes dominant state-of-the-art standard technologies, such as workflows, ontologies, and web services, in order to address the above quest in an efficient way.

Proceedings ArticleDOI
07 Jun 2006
TL;DR: This paper presents a model-checking based approach for automated analysis of delegation and revocation functionalities in the context of a real-world banking workflow requiring static and dynamic separation of duty properties.
Abstract: Demonstrating the safety of a system (i.e., avoiding the undesired propagation of access rights or indirect access through some other granted resource) is one of the goals of access control research, e.g. [1-4]. However, the flexibility required from enterprise resource planning (ERP) systems may require the implementation of seemingly contradictory requirements (e.g. tight access control but at the same time support for discretionary delegation of workflow tasks and rights). To aid in the analysis of safety problems in workflow-based ERP systems, this paper presents a model-checking based approach for automated analysis of delegation and revocation functionalities. This is done in the context of a real-world banking workflow requiring static and dynamic separation of duty properties. We derived information about the workflow from BPEL specifications and ERP business object repositories. This was captured in a SMV specification together with a definition of possible delegation and revocation scenarios. The required separation properties were translated into a set of LTL-based constraints. In particular, we analyse the interaction between delegation and revocation activities in the context of dynamic separation of duty policies.
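
For a flavour of what such LTL-based constraints look like, a separation-of-duty requirement over one workflow case can be stated as: no single user ever executes both the task that prepares a payment and the task that approves it. The formula below is illustrative and is not taken from the paper.

```latex
% For every user u in one workflow case (executes(u, t) holds when u performs task t):
\neg\bigl(\Diamond\,\mathit{executes}(u,\mathit{prepare}) \;\land\; \Diamond\,\mathit{executes}(u,\mathit{approve})\bigr)
```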

Patent
11 Sep 2006
TL;DR: In this patent, the authors propose a data elevation architecture for automatically and dynamically surfacing context-specific data to a user interface (UI) based on the specific workflow or content currently being worked on by a user.
Abstract: Data elevation architecture for automatically and dynamically surfacing to a user interface (UI) context-specific data based on specific workflow or content currently being worked on by a user. Data is broken down into data elements and stored at a data element level in a data catalog using metadata, attributes, and relationships. Data elements are automatically selected from a comprehensive collection of the data catalogs based on relevancy and correlation to the current user task. The data catalog stores and relates the data elements and metadata based on criteria specified by content matching based on business terms or specified in a business process in predefined relationships between forms or specified by the user as correlated. The UI displays the data automatically in forms dynamically selected, populated, and presented at the point of focus or user activity so that the user can interact or take action immediately.

Journal IssueDOI
TL;DR: The Taverna Workbench, as discussed by the authors, was developed by the myGrid project for the composition and execution of workflows for the life sciences community.
Abstract: Life sciences research is based on individuals, often with diverse skills, assembled into research groups. These groups use their specialist expertise to address scientific problems. The in silico experiments undertaken by these research groups can be represented as workflows involving the co-ordinated use of analysis programs and information repositories that may be globally distributed. With regards to Grid computing, the requirements relate to the sharing of analysis and information resources rather than sharing computational power. The myGrid project has developed the Taverna Workbench for the composition and execution of workflows for the life sciences community. This experience paper describes lessons learnt during the development of Taverna. A common theme is the importance of understanding how workflows fit into the scientists' experimental context. The lessons reflect an evolving understanding of life scientists' requirements on a workflow environment, which is relevant to other areas of data intensive and exploratory science. Copyright © 2005 John Wiley & Sons, Ltd.

Patent
22 Mar 2006
TL;DR: In this patent, a method processor in a mobile device may include a workflow engine and a cache manager which looks ahead of the current execution of a workflow to preload modules, and a logger to stamp the workflow-related data with real-time measurements, such as time, location, and vehicle bus information.
Abstract: Methods and apparatuses to enable the development, deployment and update of composite applications on mobile devices. In one embodiment, a method processor in a mobile device may include a workflow engine and a cache manager which looks ahead of the current execution of a workflow to preload modules. The method processor may present modal user interfaces in a non-modal way to eliminate flicker, and use a logger to stamp the workflow-related data with real-time measurements, such as time, location, and vehicle bus information. The logger may capture screen images and global data of the workflow during the execution. The log data stream may be collected and sent from the mobile device in real time, or in a batch mode, for monitoring, debugging, diagnosing or tuning the execution of a workflow, for providing hot update, help and guidance against deviation during the execution, and for other features.