Topic
Workflow
About: Workflow is a research topic. Over its lifetime, 31,996 publications have appeared on this topic, receiving 498,339 citations.
Papers published on a yearly basis
Papers
TL;DR: In this paper, the authors give an overview of the organizational aspects of workflow technology in the context of the workflow life cycle, provide a review of existing work, and develop guidelines for the design of a workflow-enabled organization, which can be used by both workflow vendors and users.
Abstract: Business process automation requires the specification of process structures as well as the definition of the resources involved in executing these processes. While the modeling of business processes and workflows is well researched, the link between organizational elements and process activities is less well understood, and current developments in the web services choreography area completely neglect the organizational aspect of workflow applications. The purpose of this paper is to give an overview of the organizational aspects of workflow technology in the context of the workflow life cycle, to provide a review of existing work, and to develop guidelines for the design of a workflow-enabled organization, which can be used by both workflow vendors and users.
214 citations
TL;DR: The multiple tools, methodologies, and practices that the ITK community has adopted, refined, and followed during the past decade, in order to become one of the research communities with the most modern reproducibility verification infrastructure are described.
Abstract: Reproducibility verification is essential to the practice of the scientific method. Researchers report their findings, which are strengthened as other independent groups in the scientific community share similar outcomes. In the many scientific fields where software has become a fundamental tool for capturing and analyzing data, this requirement of reproducibility implies that reliable and comprehensive software platforms and tools should be made available to the scientific community. These tools will empower researchers and the public to verify, through practice, the reproducibility of observations that are reported in the scientific literature. Medical image analysis is one of the fields in which the use of computational resources, both software and hardware, is an essential platform for performing experimental work. In this arena, the introduction of the Insight Toolkit (ITK) in 1999 has transformed the field and facilitated its progress by accelerating the rate at which algorithmic implementations are developed, tested, disseminated, and improved. By building on the efficiency and quality of open source methodologies, ITK has provided the medical image community with an effective platform on which to build a daily workflow that incorporates the true scientific practices of reproducibility verification. This article describes the multiple tools, methodologies, and practices that the ITK community has adopted, refined, and followed during the past decade in order to become one of the research communities with the most modern reproducibility verification infrastructure. For example, 207 contributors have created over 2,400 unit tests that provide over 84% code line test coverage. The Insight Journal, an open publication journal associated with the toolkit, has seen over 360,000 publication downloads. The median normalized closeness centrality, a measure of knowledge flow resulting from the distributed peer code review system, was high at 0.46.
214 citations
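The regression-testing discipline the ITK article describes can be sketched as a small self-checking unit test: recompute a result and compare it against a baseline recorded from a previously verified run. The toy filter, baseline values, and function names below are illustrative assumptions, not actual ITK code.

```python
import hashlib

def median_filter(pixels, width=3):
    # Toy stand-in for an image-analysis algorithm under test.
    half = width // 2
    out = []
    for i in range(len(pixels)):
        window = sorted(pixels[max(0, i - half): i + half + 1])
        out.append(window[len(window) // 2])
    return out

def digest(pixels):
    # Hash the output so the baseline reduces to a single comparable value.
    return hashlib.sha256(",".join(map(str, pixels)).encode()).hexdigest()

def test_median_filter_reproducible():
    result = median_filter([9, 1, 5, 3, 7])
    # Baseline recorded from an earlier verified run of this same code.
    assert result == [9, 5, 3, 5, 7]
    assert digest(result) == digest([9, 5, 3, 5, 7])

test_median_filter_reproducible()
print("regression test passed")
```

Any contributor can rerun such a test on their own machine, which is how per-line coverage figures like the 84% quoted above become a proxy for how much of the toolkit is continuously verified.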
TL;DR: The article identifies and examines extensions to the basic client/server model that provide explicit support for coordinating multiserver interactions.
Abstract: A major limitation in the basic client/server model is its focus on clients requesting individual services. Clients often need to invoke multiple services, coordinated to reflect how those services interrelate and contribute to the overall application. Important examples include task allocation and event notification in collaborative workgroup systems, and task sequencing and routing in workflow applications. Failure to address control requirements for such interactions has impeded development of uniform methods and tools for building many types of distributed systems with client/server architectures. The article identifies and examines extensions to the basic client/server model that provide explicit support for coordinating multiserver interactions.
212 citations
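The kind of extension the article examines can be sketched as a coordinator that sequences and routes requests across several servers, instead of the basic model where the client invokes each service independently. The service names and the fixed workflow below are illustrative assumptions, not taken from the paper.

```python
def inventory_service(order):
    # Pretend server 1: reserve stock for the order.
    return {**order, "reserved": True}

def billing_service(order):
    # Pretend server 2: only bill orders whose stock was reserved.
    if not order.get("reserved"):
        raise ValueError("cannot bill an unreserved order")
    return {**order, "billed": True}

def notify_service(order):
    # Pretend server 3: event notification once billing succeeds.
    return {**order, "notified": True}

def coordinator(order):
    """Task sequencing and routing: pass the order through the servers in
    a fixed workflow, coordinating the multiserver interaction on the
    client's behalf."""
    for service in (inventory_service, billing_service, notify_service):
        order = service(order)
    return order

result = coordinator({"id": 42})
print(result)  # reserved, billed, and notified flags all set
```

The point of the extension is that the interrelation between services (billing requires a prior reservation) lives in the coordinator, not in every client.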
TL;DR: The results of the service-oriented application applied to alpine runoff models are shown, including the use of geospatial services facilitating discovery, access, processing and visualization of geospatial data in a distributed manner.
Abstract: Environmental modelling often requires a long iterative process of sourcing, reformatting, analyzing, and introducing various types of data into the model. Much of the data to be analyzed are geospatial data-digital terrain models (DTM), river basin boundaries, snow cover from satellite imagery, etc.-and so the modelling workflow typically involves the use of multiple desktop GIS and remote sensing software packages, with limited compatibility among them. Recent advances in service-oriented architectures (SOA) are allowing users to migrate from dedicated desktop solutions to on-line, loosely coupled, and standards-based services which accept source data, process them, and pass results as basic parameters to other intermediate services and/or then to the main model, which also may be made available on-line. This contribution presents a service-oriented application that addresses the issues of data accessibility and service interoperability for environmental models. Key model capabilities are implemented as geospatial services, which are combined to form complex services, and may be reused in other similar contexts. This work was carried out under the auspices of the AWARE project funded by the European programme Global Monitoring for Environment and Security (GMES). We show results of the service-oriented application applied to alpine runoff models, including the use of geospatial services facilitating discovery, access, processing and visualization of geospatial data in a distributed manner.
212 citations
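The paper's pattern of combining atomic geospatial services into a reusable complex service can be sketched as a simple pipeline, where each service's output becomes the next one's input. The service names and toy data below are illustrative assumptions, not the AWARE project's actual services.

```python
def extract_basin(dtm, basin_id):
    # Atomic service: clip the terrain model to one river basin.
    return [cell for cell in dtm if cell["basin"] == basin_id]

def snow_cover_fraction(cells):
    # Atomic service: fraction of cells flagged as snow-covered,
    # e.g. derived from satellite imagery.
    return sum(c["snow"] for c in cells) / len(cells)

def compose(*services):
    """Build a complex service from atomic ones: each service's output is
    routed to the next, the loose coupling that lets the same services be
    reused in other modelling contexts."""
    def pipeline(data, **first_kwargs):
        data = services[0](data, **first_kwargs)
        for service in services[1:]:
            data = service(data)
        return data
    return pipeline

# A reusable complex service feeding, say, a runoff model.
runoff_input = compose(extract_basin, snow_cover_fraction)

dtm = [
    {"basin": 1, "snow": 1}, {"basin": 1, "snow": 0},
    {"basin": 2, "snow": 1}, {"basin": 1, "snow": 1},
]
print(runoff_input(dtm, basin_id=1))  # 2 of the 3 basin-1 cells are snow-covered
```

In a real SOA deployment each function would be a standards-based on-line service (e.g. an OGC web service) rather than a local call, but the composition idea is the same.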
03 Apr 2002
TL;DR: In this paper, a client computer in a communications network with a server computer assembles a record set that has a MIME declaration header with a multipart content type and a content sub-type indicative of a workflow media type.
Abstract: A first client computer in a communications network with a server computer assembles a record set that has a MIME declaration header with a multipart content type and a content sub-type indicative of a workflow media type. The first client computer also assembles a binary file containing an encoded workflow specification. The record set is then transmitted with the binary file over the communications network. A second client computer on the communications network receives both the record set and the binary file and decodes the workflow specification. The second client computer uses an application program to execute the decoded workflow specification so as to perform all or a portion of the workflow process specified therein. The workflow specification is optionally written in eXtensible Mark-up Language (XML).
212 citations
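A minimal sketch of assembling such a record set, using Python's standard email library for MIME handling. The `application/x-workflow+xml` sub-type and the XML payload below are assumptions for illustration, not taken from the patent.

```python
from email.message import EmailMessage

# A minimal workflow specification, optionally written in XML as the
# abstract notes. The element names are made up for illustration.
workflow_xml = b"""<?xml version="1.0"?>
<workflow name="approve-order">
  <step id="1" action="validate"/>
  <step id="2" action="approve"/>
</workflow>"""

msg = EmailMessage()
msg["Subject"] = "Workflow record set"
msg.set_content("Record set carrying an encoded workflow specification.")

# Attaching the encoded specification turns the message into a multipart
# record set: the top-level MIME header gains a multipart content type,
# and the attachment carries the workflow media sub-type.
msg.add_attachment(
    workflow_xml,
    maintype="application",
    subtype="x-workflow+xml",
    filename="workflow.xml",
)

print(msg.get_content_type())  # multipart content type after attachment
```

A receiving client would walk the MIME parts, find the part with the workflow media sub-type, decode it, and hand it to an application program for execution, mirroring the second client computer in the abstract.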