
Showing papers on "Workflow technology published in 2022"


Proceedings ArticleDOI
28 Feb 2022
TL;DR: A worker-side workflow schedule pattern for serverless workflow execution is presented; FaaSFlow is implemented to enable efficient workflow execution in the serverless context, and an adaptive storage library, FaaStore, is proposed that enables fast data transfer between functions on the same node without going through the database.
Abstract: Serverless computing (Function-as-a-Service) provides fine-grain resource sharing by running functions (or Lambdas) in containers. Data-dependent functions are required to be invoked following a pre-defined logic, which is known as serverless workflows. However, our investigation shows that the traditional master-worker based workflow execution architecture performs poorly in the serverless context. One significant overhead results from the master-side workflow schedule pattern, in which functions are triggered on the master node and assigned to worker nodes for execution. In addition, data movement between workers also reduces throughput. To this end, we present a worker-side workflow schedule pattern for serverless workflow execution. Following this design, we implement FaaSFlow to enable efficient workflow execution in the serverless context. We also propose an adaptive storage library, FaaStore, that enables fast data transfer between functions on the same node without going through the database. Experiment results show that FaaSFlow mitigates the workflow scheduling overhead by 74.6% on average and the data transmission overhead by up to 95%. When the network bandwidth fluctuates, FaaSFlow-FaaStore reduces the throughput degradation by 23.0% and is able to improve the utilization of network bandwidth by 1.5x-4x.
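
To make the scheduling patterns concrete, here is a minimal sketch of the worker-side pattern the abstract describes: each worker triggers its own downstream functions as dependencies resolve, instead of reporting back to a master node. All names (WorkerScheduler, run_local, invoke_remote) are hypothetical and only illustrate the idea; FaaSFlow's actual implementation differs.

```python
# Hypothetical sketch of worker-side scheduling (not FaaSFlow's real code).
from collections import defaultdict

class WorkerScheduler:
    """Each worker triggers functions itself once their inputs are ready,
    rather than waiting for a master node to dispatch every invocation."""

    def __init__(self, dag, local_functions):
        self.dag = dag                     # {function: [downstream functions]}
        self.local = set(local_functions)  # functions placed on this worker
        self.pending = defaultdict(int)    # unmet-dependency counters
        for _, dsts in dag.items():
            for d in dsts:
                self.pending[d] += 1

    def on_complete(self, func, output):
        # Simplification: real workflows gather outputs from *all* predecessors.
        for nxt in self.dag.get(func, []):
            self.pending[nxt] -= 1
            if self.pending[nxt] == 0:
                if nxt in self.local:
                    self.run_local(nxt, output)      # in-memory hand-off (the FaaStore idea)
                else:
                    self.invoke_remote(nxt, output)  # cross-node invocation

    def run_local(self, func, data): ...
    def invoke_remote(self, func, data): ...
```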

15 citations


Journal ArticleDOI
TL;DR: In this paper, a universal workflow was developed to identify workable AI technologies for energy saving; the workflow adopts a hierarchical structure that guides users in choosing learning, optimisation, and control tools to achieve energy savings.

11 citations


Journal ArticleDOI
TL;DR: WfCommons, as mentioned in this paper, is a collection of tools for analyzing workflow executions, for producing generators of synthetic workflows, and for simulating workflow executions; the authors demonstrate the realism of the generated synthetic workflows by comparing their simulated executions to real workflow executions.

11 citations


Journal ArticleDOI
TL;DR: Sapporo is an application that provides a unified layer of workflow execution over the differences among various workflow systems, and it can support the research community in utilizing valuable resources for data analysis.
Abstract: The increased demand for efficient computation in data analysis encourages researchers in biomedical science to use workflow systems. Workflow systems, or so-called workflow languages, are used for the description and execution of a set of data analysis steps. Workflow systems increase the productivity of researchers, specifically in fields that use high-throughput DNA sequencing applications, where scalable computation is required. As systems have improved the portability of data analysis workflows, research communities are able to share workflows to reduce the cost of building ordinary analysis procedures. However, having multiple workflow systems in a research field has resulted in the distribution of efforts across different workflow system communities. As each workflow system has its unique characteristics, it is not feasible to learn every single system in order to use publicly shared workflows. Thus, we developed Sapporo, an application to provide a unified layer of workflow execution upon the differences of various workflow systems. Sapporo has two components: an application programming interface (API) that receives the request of a workflow run and a browser-based client for the API. The API follows the Workflow Execution Service API standard proposed by the Global Alliance for Genomics and Health. The current implementation supports the execution of workflows in four languages: Common Workflow Language, Workflow Description Language, Snakemake, and Nextflow. With its extensible and scalable design, Sapporo can support the research community in utilizing valuable resources for data analysis.
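
As an illustration of the unified execution layer, below is a hedged sketch of submitting a run to a WES-compatible service such as Sapporo. The field names follow the GA4GH WES 1.0 specification; the base URL, workflow URL, and parameters are placeholder assumptions.

```python
# Hedged client sketch for a GA4GH WES endpoint (e.g., a Sapporo instance).
import json
import requests

WES_BASE = "http://localhost:1122/ga4gh/wes/v1"  # assumed local deployment

resp = requests.post(
    f"{WES_BASE}/runs",
    data={
        "workflow_type": "CWL",                  # also WDL, Snakemake, Nextflow
        "workflow_type_version": "v1.0",
        "workflow_url": "https://example.org/workflows/qc.cwl",  # placeholder
        "workflow_params": json.dumps(
            {"fastq": {"class": "File", "path": "sample.fastq"}}
        ),
    },
)
run_id = resp.json()["run_id"]

# Poll run state (QUEUED -> RUNNING -> COMPLETE in WES terminology).
state = requests.get(f"{WES_BASE}/runs/{run_id}/status").json()["state"]
print(run_id, state)
```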

7 citations


Journal ArticleDOI
TL;DR: It is indicated that the BIM-assisted workflow can significantly reduce operation time to enhance preliminary design efficiency and deserves to be strongly promoted in the Chinese Architecture, Engineering, and Construction (AEC) industry.
Abstract: China’s urban housing demand has directly influenced urbanization development. To stabilize the level of urbanization, it is urgent to optimize the whole life-cycle efficiency of construction, and the preliminary design, as the first step, is even more significant. Building Information Modeling (BIM) is widely used as an information technology in the construction industry to promote the implementation and management of projects. However, the traditional preliminary design approach still occupies the mainstream market without forming a systematic BIM workflow, which causes inefficiency. To address this issue, this research aims to construct a BIM-assisted workflow to enhance the preliminary design efficiency of architecture. This study creates traditional and BIM-assisted workflows for comparative analysis, captures duration data with a questionnaire, and validates the results by practical simulation. The findings show that the BIM-assisted workflow consumes less time than the traditional workflow. This research indicates that the BIM-assisted workflow can significantly reduce operation time to enhance preliminary design efficiency and deserves to be strongly promoted in the Chinese Architecture, Engineering, and Construction (AEC) industry.

4 citations


Journal ArticleDOI
TL;DR: In this paper, a Cognitive Engine Process Controller (CEPC) is proposed to realize a human-centric manufacturing system by augmenting the human in workflow selection and providing flexibility in using alternative methods in the workflow.

4 citations


Posted ContentDOI
TL;DR: The third edition of the "Workflows Community Summit" explored workflow challenges and opportunities from the perspective of computing centers and facilities.
Abstract: The importance of workflows is highlighted by the fact that they have underpinned some of the most significant discoveries of the past decades. Many of these workflows have significant computational, storage, and communication demands, and thus must execute on a range of large-scale computer systems, from local clusters to public clouds and upcoming exascale HPC platforms. Historically, infrastructures for workflow execution consisted of complex, integrated systems, developed in-house by workflow practitioners with strong dependencies on a range of legacy technologies. Due to the increasing need to support workflows, dedicated workflow systems were developed to provide abstractions for creating, executing, and adapting workflows conveniently and efficiently while ensuring portability. While these efforts are all worthwhile individually, there are now hundreds of independent workflow systems. The resulting workflow system technology landscape is fragmented, which may present significant barriers for future workflow users due to the many seemingly comparable, yet usually mutually incompatible, systems that exist. In order to tackle some of these challenges, the DOE-funded ExaWorks and NSF-funded WorkflowsRI projects organized a series of events in 2021 entitled the "Workflows Community Summit". The third edition of the summit explored workflow challenges and opportunities from the perspective of computing centers and facilities. It brought together a small group of facility representatives with the aim of understanding how workflows are currently being used at each facility, how facilities would like to interact with workflow developers and users, how workflows fit with facility roadmaps, and what opportunities there are for tighter integration between facilities and workflows. More information at: https://workflowsri.org/summits/facilities/

3 citations


Journal ArticleDOI
TL;DR: In this article, a two-step scheduler based on divide and conquer is proposed to minimize the execution cost of a deadline-constrained workflow in the cloud computing environment.
Abstract: A workflow is an effective way for modeling complex applications and serves as a means for scientists and researchers to better understand the details of applications. Cloud computing enables the running of workflow applications on many types of computational resources which become available on-demand. As one of the most important aspects of cloud computing, workflow scheduling needs to be performed efficiently to optimize resources. Due to the existence of various resource types at different prices, workflow scheduling has evolved into an even more challenging problem on cloud computing. The present paper proposes a workflow scheduling algorithm in the cloud to minimize the execution cost of the deadline-constrained workflow. The proposed method, EDQWS, extends the current authors’ previous study (DQWS) and is a two-step scheduler based on divide and conquer. In the first step, the workflow is divided into sub-workflows by defining, scheduling, and removing a critical path from the workflow, similar to DQWS. The process continues until only chain-structured sub-workflows, called linear graphs, remain. In the second step which is linear graph scheduling, a new merging algorithm is proposed that combines the resulting linear graphs so as to reduce the number of used instances and minimize the overall execution cost. In addition, the current work introduces a scoring function to select the most efficient instances for scheduling the linear graphs. Experiments show that EDQWS outperforms its competitors, both in terms of minimizing the monetary costs of executing scheduled workflows and meeting user-defined deadlines. Furthermore, in more than 50% of the examined workflow samples, EDQWS succeeds in reducing the number of resource instances compared to the previously introduced DQWS method.
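
To illustrate the divide step, here is a minimal sketch of extracting a critical (longest) path from a weighted task DAG, the operation EDQWS repeatedly applies until only linear chains remain. The function and data layout are assumptions for illustration, not the paper's code.

```python
# Illustrative critical-path extraction over a task DAG (assumed data layout:
# tasks listed in topological order, successor lists, per-task costs).
def critical_path(tasks, succ, cost):
    """Return the longest path through the DAG by summed execution cost."""
    dist = {t: cost[t] for t in tasks}   # best path length ending at t
    best_prev = {}
    for t in tasks:                      # relax edges in topological order
        for s in succ.get(t, []):
            if dist[t] + cost[s] > dist[s]:
                dist[s] = dist[t] + cost[s]
                best_prev[s] = t
    end = max(dist, key=dist.get)        # endpoint of the critical path
    path = [end]
    while path[-1] in best_prev:
        path.append(best_prev[path[-1]])
    return list(reversed(path))

# Example: diamond-shaped workflow; the heavier branch wins.
print(critical_path(
    tasks=["a", "b", "c", "d"],
    succ={"a": ["b", "c"], "b": ["d"], "c": ["d"]},
    cost={"a": 1, "b": 5, "c": 2, "d": 1},
))  # -> ['a', 'b', 'd']
```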

3 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose SecDATAVIEW, a distributed big data workflow management system that employs heterogeneous workers, such as Intel SGX and AMD SEV nodes, to protect both workflow execution and workflow data.
Abstract: Big data workflow management systems (BDWMSs) have recently emerged as popular data analytics platforms for conducting large-scale data analytics in the cloud. However, the protection of data confidentiality and the secure execution of workflow applications remain an important and challenging problem. Although a few data analytics systems, such as VC3 and Opaque, were developed to address security problems, they are limited to specific domains such as Map-Reduce-style and SQL query workflows. A generic secure framework for BDWMSs is still missing. In this article, we propose SecDATAVIEW, a distributed BDWMS that employs heterogeneous workers, such as Intel SGX and AMD SEV nodes, to protect both workflow execution and workflow data, addressing three major security challenges: (1) reducing the TCB size of the big data workflow management system in the untrusted cloud by leveraging the hardware-assisted TEE and software attestation; (2) supporting Java-written workflow tasks to overcome SGX’s lack of support for Java programs; and (3) reducing the adverse impact of SGX enclave memory paging overhead through a “hybrid” workflow task scheduling system that selectively deploys sensitive tasks to a mix of SGX and SEV worker nodes. Our experimental results show that SecDATAVIEW imposes moderate overhead on workflow execution time.
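
To make the "hybrid" scheduling idea tangible, here is a hedged placement sketch: sensitive tasks go to TEE-backed workers, with memory-heavy ones steered to SEV to avoid SGX enclave-paging overhead. The threshold, field names, and policy are illustrative assumptions, not SecDATAVIEW's implementation.

```python
# Illustrative hybrid TEE placement policy (assumed names and threshold).
SGX_EPC_BUDGET_MB = 90   # usable enclave memory before paging kicks in (assumed)

def place_task(task, workers):
    """task: dict with 'sensitive' (bool) and 'mem_mb' (int).
    workers: list of dicts with 'tee' in {None, 'sgx', 'sev'} and 'load'.
    Assumes at least one TEE-capable worker exists for sensitive tasks."""
    if task["sensitive"]:
        # Memory-heavy sensitive tasks go to SEV (VM-level TEE, no EPC limit).
        wanted = "sev" if task["mem_mb"] > SGX_EPC_BUDGET_MB else "sgx"
        candidates = ([w for w in workers if w["tee"] == wanted]
                      or [w for w in workers if w["tee"]])  # any TEE as fallback
    else:
        candidates = workers
    return min(candidates, key=lambda w: w["load"])  # least-loaded candidate
```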

2 citations


Journal ArticleDOI
TL;DR: A novel meta-heuristic algorithm named Investment-Based Optimization (IBO) was developed to identify an optimal mapping that produces minimal execution cost with fair workload distribution on resources, and it was found that IBO reduces execution costs by 33%, 16%, 16.36%, and 20% with a fair workload distribution.
Abstract: Workflow scheduling is an important way to manage the execution of a workflow. It introduces the concept of providing suitable resources to workflow tasks in order to finish workflow execution and meet the user's objectives. However, the problem becomes more complex when scheduling must balance two conflicting objectives, such as minimizing execution cost and balancing load across all computing resources. A workflow has many interdependent tasks, and a cloud datacenter has many computing resources on which to execute the workflow, so the number of possible task-to-resource mappings is combinatorially huge. Every mapping produces a different execution cost and a different workload on the computing resources. The main challenge for the researcher is to develop an intelligent scheduling algorithm to identify an optimal mapping that produces minimal execution cost with fair workload distribution on resources. We developed a novel meta-heuristic algorithm named Investment-Based Optimization (IBO) to identify an optimal mapping. The IBO algorithm was first tested on optimization benchmark functions and then simulated in CloudSim to assess its performance for scheduling workflows. Finally, IBO was tested on Montage, Epigenomics, Sipht, and a sample workflow, and it was found that IBO reduces execution costs by 33%, 16%, 16.36%, and 20%, respectively, with a fair workload distribution.
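
For concreteness, a toy fitness function for the two objectives named above (execution cost and load balance) might look like the following; the weighting scheme and all names are assumptions for illustration, not IBO's actual formulation.

```python
# Toy fitness for a task-to-VM mapping: weighted sum of monetary cost and
# load imbalance. Lower fitness is better.
import statistics

def fitness(mapping, runtime, price, alpha=0.5):
    """mapping: {task: vm}, runtime: {(task, vm): seconds},
    price: {vm: dollars per second}. Assumes a non-empty mapping."""
    cost, loads = 0.0, {}
    for task, vm in mapping.items():
        t = runtime[(task, vm)]
        cost += t * price[vm]
        loads[vm] = loads.get(vm, 0.0) + t
    imbalance = statistics.pstdev(loads.values())  # 0 when perfectly balanced
    return alpha * cost + (1 - alpha) * imbalance
```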

Posted ContentDOI
19 Sep 2022
TL;DR: In this paper, the authors outline ten simple rules for converting workflow manager pipelines into command line applications, and present working examples that also function as templates using the two most popular workflow managers, Snakemake and Nextflow.
Abstract: There is a growing trend for releasing bioinformatics workflows as command line applications. This is a good thing as workflow management systems add both functionality and reliability, while command line interfaces are convenient for end users. Developing command line software in this way is considerably faster. However, there are many potential pitfalls that developers of bioinformatics tools should avoid. We outline ten simple rules for converting workflow manager pipelines into command line applications, and present working examples that also function as templates using the two most popular workflow managers, Snakemake (github.com/beardymcjohnface/Snaketool) and Nextflow (github.com/beardymcjohnface/Nektool).
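
In the spirit of those templates, a stripped-down version of the pattern (a command line entry point driving a bundled Snakemake pipeline) might look like this; the paths, options, and config keys are placeholders, and the linked Snaketool/Nektool templates are the fully worked versions.

```python
# Minimal CLI wrapping a bundled Snakemake pipeline (illustrative sketch).
import argparse
import subprocess
from pathlib import Path

SNAKEFILE = Path(__file__).parent / "workflow" / "Snakefile"  # bundled pipeline

def main():
    parser = argparse.ArgumentParser(description="Run the bundled pipeline")
    parser.add_argument("--input", required=True, help="input file or directory")
    parser.add_argument("--output", default="results", help="output directory")
    parser.add_argument("--threads", type=int, default=4)
    args = parser.parse_args()

    # Hand user options to Snakemake via --config; Snakemake owns execution.
    subprocess.run(
        ["snakemake", "--snakefile", str(SNAKEFILE),
         "--cores", str(args.threads),
         "--config", f"input={args.input}", f"output={args.output}"],
        check=True,
    )

if __name__ == "__main__":
    main()
```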

Proceedings ArticleDOI
01 Nov 2022
TL;DR: In this article, the authors document the challenges of provenance management and reuse in e-science, focusing primarily on scientific workflow approaches by exploring different SWfMSs and provenance management systems.
Abstract: Scientific workflows are one of the well-established pillars of large-scale computational science, having emerged as a torchbearer for formalizing and structuring massive amounts of complex heterogeneous data and accelerating scientific progress. A workflow can analyze terabyte-scale datasets, contain numerous individual tasks, and coordinate between heterogeneous tasks with the help of scientific workflow management systems (SWfMSs). SWfMSs support the automation of repetitive tasks and capture complex analyses through workflows. However, the execution of workflows is costly and requires substantial resource usage. At different phases of the workflow life cycle, most SWfMSs store provenance information, allowing result reproducibility, sharing, and knowledge reuse in the scientific community. However, this provenance information can be many times larger than the workflow and input data, and managing provenance data is growing in complexity with large-scale applications. Handling exponentially increasing data volumes and utilizing the technical resources for storage and computing are thus demanded when exploiting data-intensive computing in various application fields. This paper documents the challenges of provenance management and reuse in e-science, focusing primarily on scientific workflow approaches, by exploring different SWfMSs and provenance management systems. We also investigate ways to overcome these challenges.

Journal ArticleDOI
TL;DR: An effective two-phase algorithm is proposed for provisioning cloud resources for workflow applications, using the workflow's structural features to minimize makespan and resource wastage.

Book ChapterDOI
TL;DR: In this paper, the authors propose an improved workflow model that considers disk times in communication costs, and devise a genetic algorithm that produces robust schedules for the scheduling problem; the model is able to predict the execution time of the workflow with more precision than existing ones in a Cloud Infrastructure-as-a-Service system.
Abstract: Distributed scientific applications are commonly executed as a workflow of data-interdependent tasks on a cluster of different machines. Over the last years, the infrastructure used for solving these problems has evolved from clusters of physical machines to virtual resources in a Cloud, provisioned on the basis of Quality of Service requirements and pay-per-use. In these settings, the total execution time of the workflow, i.e., the makespan, is one of the main objectives. The subsequent optimization problem of distributing the tasks on the available resources, called the workflow scheduling problem, is often solved by means of metaheuristics. In this paper we propose an improved workflow model that considers disk times in communication costs. To solve the scheduling problem, we devise a genetic algorithm that produces robust schedules. The experimental study showed that the proposed model is able to predict the execution time of the workflow with more precision than the existing ones in a Cloud Infrastructure-as-a-Service system.
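
A hedged sketch of the refined cost model's core idea: the time to move data between tasks on different machines includes the sender's disk write and the receiver's disk read, not just the network transfer. All rates below are invented placeholders, not the chapter's calibrated values.

```python
# Illustrative communication-cost model that charges disk I/O on both ends.
def comm_cost(data_mb, bandwidth_mbit_s=1000.0,
              disk_write_mb_s=200.0, disk_read_mb_s=250.0):
    """Seconds to move data_mb between tasks placed on different machines."""
    network = data_mb * 8.0 / bandwidth_mbit_s   # megabits over megabits/s
    disk = data_mb / disk_write_mb_s + data_mb / disk_read_mb_s
    return network + disk

# Classic models would report only the 8 s network term for 1 GB here;
# counting disk adds another 9 s.
print(comm_cost(1000.0))  # -> 17.0
```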

Journal ArticleDOI
TL;DR: In this article, a WMS simulator called DISSECT-CF-WMS was developed to evaluate the performance of workflow management systems (WMS) and optimise workflow management techniques.
Abstract: Scientific workflows are becoming increasingly important for complex scientific applications. Conducting real experiments for large-scale workflows is challenging because they are very expensive and time consuming. A simulation is an alternative approach to a real experiment that can help evaluate the performance of workflow management systems (WMS) and optimise workflow management techniques. Although there are several workflow simulators available today, they are often user-oriented and treat the cloud as a black box. Unfortunately, this behaviour prevents the evaluation of the infrastructure-level impact of the various decisions made by the WMSs. To address these issues, we have developed a WMS simulator (called DISSECT-CF-WMS) on DISSECT-CF that exposes the internal details of cloud infrastructures. DISSECT-CF-WMS enables better energy awareness by allowing the study of schedulers for physical machines. It also enables dynamic provisioning to meet the resource needs of the workflow application while considering the provisioning delay of a VM in the cloud. We evaluated our simulation extension by running several workflow applications on a given infrastructure. The experimental results show that we can investigate different schedulers for physical machines on different numbers of virtual machines to reduce energy consumption. The experiments also show that DISSECT-CF-WMS is up to 295× faster than WorkflowSim and still provides equivalent results. The experimental results of auto-scaling show that it can optimise makespan, energy consumption and VM utilisation in contrast to static VM provisioning.
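
As a toy illustration of why modelling VM provisioning delay matters, consider the following; the delay constant and the one-fresh-VM-per-task policy are invented simplifications, far coarser than what DISSECT-CF-WMS actually simulates.

```python
# Toy makespan estimate that charges each on-demand VM its boot delay.
PROVISION_DELAY = 100.0  # seconds to acquire/boot a VM (assumed value)

def makespan(tasks):
    """tasks: list of (release_time, duration), one fresh VM per task."""
    finishes = [release + PROVISION_DELAY + duration
                for release, duration in tasks]
    return max(finishes)

# With zero delay the makespan would be 300 s; the boot delay pushes it to 400 s.
print(makespan([(0, 60), (0, 300), (30, 120)]))  # -> 400.0
```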

Journal ArticleDOI
TL;DR: In this paper, a distributed architecture model for sensor data acquisition and processing is presented, where multi-sensor data fusion algorithms are used to extract a computational representation of activities in surgical procedures.

Journal ArticleDOI
TL;DR: Researchers are encouraged to investigate the design and implementation of WMOTs and use the tools to create best practices to enable workflow automation and improve workflow efficiency and care quality.
Abstract: BACKGROUND Automation of health care workflows has recently become a priority. This can be enabled and enhanced by a workflow monitoring tool (WMOT). OBJECTIVES We shared our experience in clinical workflow analysis via three case studies in health care and summarized principles to design and develop such a WMOT. METHODS The case studies were conducted in different clinical settings with distinct goals. Each study used at least two types of workflow data to create a more comprehensive picture of work processes and identify bottlenecks, as well as quantify them. The case studies were synthesized using a data science process model with focuses on data input, analysis methods, and findings. RESULTS Three case studies were presented and synthesized to generate a system structure of a WMOT. When developing a WMOT, one needs to consider the following four aspects: (1) goal orientation, (2) comprehensive and resilient data collection, (3) integrated and extensible analysis, and (4) domain experts. DISCUSSION We encourage researchers to investigate the design and implementation of WMOTs and use the tools to create best practices to enable workflow automation and improve workflow efficiency and care quality.

Journal ArticleDOI
01 Jan 2022
TL;DR: Canonical Workflow Building Blocks (CWBB), as mentioned in this paper, is a methodology for describing and wrapping computational tools so that they can be utilised in a reproducible manner from multiple workflow languages and execution platforms.
Abstract: We introduce the concept of Canonical Workflow Building Blocks (CWBB), a methodology of describing and wrapping computational tools, in order for them to be utilised in a reproducible manner from multiple workflow languages and execution platforms. The concept is implemented and demonstrated with the BioExcel Building Blocks library (BioBB), a collection of tool wrappers in the field of computational biomolecular simulation. Interoperability across different workflow languages is showcased through a protein Molecular Dynamics setup transversal workflow, built using this library and run with 5 different Workflow Manager Systems (WfMS). We argue such practice is a necessary requirement for FAIR Computational Workflows and an element of Canonical Workflow Frameworks for Research (CWFR) in order to improve widespread adoption and reuse of computational methods across workflow language barriers.
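
To show the wrapper pattern in miniature, the sketch below gives a tool a uniform launch(input, output) entry point plus a properties dict, which is what lets any workflow language drive it identically. The class and the stand-in tool are invented for illustration; BioBB's actual API differs in detail.

```python
# Miniature of the building-block pattern (invented names, not BioBB's API).
import subprocess

class BuildingBlock:
    """Uniform wrapper: explicit file inputs/outputs plus configuration."""
    def __init__(self, properties=None):
        self.properties = properties or {}
    def launch(self, input_path, output_path):
        raise NotImplementedError

class Compress(BuildingBlock):  # trivial stand-in for a scientific tool
    def launch(self, input_path, output_path):
        level = self.properties.get("level", 6)
        with open(output_path, "wb") as out:
            subprocess.run(["gzip", f"-{level}", "-c", input_path],
                           stdout=out, check=True)

# Any WfMS (CWL, Nextflow, Snakemake, ...) can call the same entry point:
# Compress({"level": 9}).launch("traj.xtc", "traj.xtc.gz")
```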

Journal ArticleDOI
TL;DR: Yevis as mentioned in this paper is a system to build a workflow registry that automatically validates and tests workflows to be published and runs on GitHub and Zenodo and allows workflow hosting without the need of dedicated computing resources.
Abstract: Background: Many open-source workflow systems have made bioinformatics data analysis procedures portable. Sharing these workflows provides researchers easy access to high-quality analysis methods without the requirement of computational expertise. However, published workflows are not always guaranteed to be reliably reusable. Therefore, a system is needed to lower the cost of sharing workflows in a reusable form. Results: We introduce Yevis, a system to build a workflow registry that automatically validates and tests workflows to be published. The validation and test are based on the requirements we defined for a workflow being reusable with confidence. Yevis runs on GitHub and Zenodo and allows workflow hosting without the need for dedicated computing resources. A Yevis registry accepts workflow registration via a GitHub pull request, followed by an automatic validation and test process for the submitted workflow. As a proof of concept, we built a registry using Yevis to host workflows from a community to demonstrate how a workflow can be shared while fulfilling the defined requirements. Conclusions: Yevis helps in the building of a workflow registry to share reusable workflows without requiring extensive human resources. By following Yevis’s workflow-sharing procedure, one can operate a registry while satisfying the reusable workflow criteria. This system is particularly useful to individuals or communities that want to share workflows but lack the specific technical expertise to build and maintain a workflow registry from scratch.
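
A hedged sketch of what the automatic validate-and-test gate could look like for a submitted registry record; the metadata field names and checks here are invented for illustration, since Yevis defines its own schema and test harness.

```python
# Illustrative validate-and-test gate for a submitted registry record.
import subprocess
import requests

def validate_submission(meta):
    """meta: dict parsed from a submitted registry record (e.g., via a PR).
    Field names ('name', 'workflow_url', 'test_command') are assumptions."""
    for field in ("name", "workflow_url", "test_command"):
        if field not in meta:
            raise ValueError(f"missing required field: {field}")
    # The workflow file must be fetchable at a stable URL.
    head = requests.head(meta["workflow_url"], allow_redirects=True)
    if head.status_code != 200:
        raise ValueError("workflow_url is not reachable")
    # Run the submitted smoke test; a non-zero exit blocks publication.
    subprocess.run(meta["test_command"], shell=True, check=True)
```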

Journal ArticleDOI
TL;DR: In this paper, the applicability of workflow management systems in conjunction with image recognition and machine learning methods has been explored in the supervision of the repair and diagnostic works in a real industrial environment.
Abstract: Supervision of repair and diagnostic works aimed at improving the safety of maintenance crews is one of the key objectives of the distributed INRED system. Working in a real industrial environment, the INRED system includes, among others, the so-called INRED-Workflow, which provides an infrastructure for process automation. Participants in the service processes managed by the INRED-Workflow are controlled at each stage of the performed service procedures, both by the system and by other process participants, such as quality managers and technologists. All data collected from the service processes are stored in the System Knowledge Repository (SKR) for further processing using advanced algorithms, and the so-called Smart Procedures merge services supplied by other INRED system modules. The applicability of workflow management systems in conjunction with image recognition and machine learning methods has not yet been thoroughly explored. The presented paper shows an innovative usage of such systems in the supervision of repair and diagnostic works.


Proceedings ArticleDOI
07 Sep 2022
TL;DR: This work implemented a new enhanced workflow that combines human-machine collaboration in two ways: humans can aid the machine in solving more difficult tasks with high information value, while the machine can facilitate human engagement by generating motivational messages that emphasize different aspects of human-machine collaboration.
Abstract: The unprecedented growth of online citizen science projects provides growing opportunities for the public to participate in scientific discoveries. Nevertheless, volunteers typically make only a few contributions before exiting the system. Thus a significant challenge to such systems is increasing the capacity and efficiency of volunteers without hindering their motivation and engagement. To address this challenge, we study the role of incorporating collaborative agents in the existing workflow of a citizen science project for the purpose of increasing the capacity and efficiency of these systems, while maintaining the motivation of participants in the system. Our new enhanced workflow combines human-machine collaboration in two ways: Humans can aid the machine in solving more difficult tasks with high information value, while the machine can facilitate human engagement by generating motivational messages that emphasize different aspects of human-machine collaboration. We implemented this workflow in a study comprising thousands of volunteers in Galaxy Zoo, one of the largest citizen science projects on the web. Volunteers could choose to use the enhanced workflow or the existing workflow in which users did not receive motivational messages, and tasks were allocated to volunteers sequentially without regard to information value. We found that the volunteers working in the enhanced workflow were more productive than those volunteers who worked in the existing workflow, without incurring a loss in the quality of their contributions. Additionally, in the enhanced workflow, the type of messages used had a profound effect on volunteer performance. Our work demonstrates the importance of varying human-machine collaboration models in citizen science.
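
One way to read "tasks with high information value" concretely: allocate volunteers the tasks whose current crowd labels are most ambiguous. The sketch below uses label-vote entropy as the value measure; the measure and all names are illustrative assumptions, not the study's actual allocation policy.

```python
# Toy information-value task allocation via label-vote entropy.
import math

def entropy(votes):
    total = sum(votes.values())
    return -sum((c / total) * math.log2(c / total)
                for c in votes.values() if c)

def next_task(tasks):
    """tasks: {task_id: {label: vote_count}} -> most ambiguous task id."""
    return max(tasks, key=lambda t: entropy(tasks[t]))

print(next_task({"galaxy_1": {"spiral": 9, "elliptical": 1},
                 "galaxy_2": {"spiral": 5, "elliptical": 5}}))  # -> galaxy_2
```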

Journal ArticleDOI
TL;DR: In this article, the considerations necessary for space planning and equipment needs for a digital orthodontic lab are discussed; however, the authors do not discuss the design of the lab itself.


Journal ArticleDOI
TL;DR: In this article, a meta-heuristic algorithm was proposed to schedule the scientific workflow and minimize the overall completion time by properly managing the acquisition and transmission delays of servers deployed at the edge of a network.
Abstract: The edge computing model offers an ultimate platform to support scientific and real-time workflow-based applications over the edge of the network. However, scientific workflow scheduling and execution still face challenges such as response time management and latency. These require dealing with the acquisition delay of servers deployed at the edge of a network in order to reduce the overall completion time of a workflow. Previous studies show that existing scheduling methods consider the static performance of the server and ignore the impact of resource acquisition delay when scheduling workflow tasks. We propose a meta-heuristic algorithm to schedule the scientific workflow and minimize the overall completion time by properly managing the acquisition and transmission delays. We carry out extensive experiments and evaluations based on commercial clouds and various scientific workflow templates. The proposed method performs approximately 7.7% better than the baseline algorithms, particularly in the success rate of meeting the overall deadline constraint.
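
As a small worked illustration of the effect, the estimate below folds server acquisition delay into a task's earliest finish time; static models that assume an always-ready server drop the delay term. All names and values are placeholder assumptions.

```python
# Illustrative earliest-finish estimate for a task on an edge server.
def estimated_finish(ready_time, exec_time, transmit_time,
                     server_free_at=0.0, acquisition_delay=0.0):
    start = max(ready_time + transmit_time,          # inputs arrive at the edge
                server_free_at + acquisition_delay)  # server actually usable
    return start + exec_time

# A 30 s acquisition delay pushes the finish time from 60 s to 80 s.
print(estimated_finish(0, 50, 10, acquisition_delay=0))   # -> 60
print(estimated_finish(0, 50, 10, acquisition_delay=30))  # -> 80
```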

Proceedings ArticleDOI
06 Oct 2022
TL;DR: In this article, a mobile cloud computing (MCC) workflow task dynamic scheduling model is designed to improve the adaptivity of MCC workflow task dynamic scheduling, where the preference information of the workflow task flow is assembled and clustered using multi-scale feature analysis methods and a dynamic feature evolution clustering analysis model is constructed.
Abstract: A mobile cloud computing (MCC) workflow task dynamic scheduling model is designed to improve the adaptivity of MCC workflow task dynamic scheduling. The group decision model parameter set for MCC workflow task dynamic scheduling is constructed; the preference information of the MCC workflow task flow is assembled and clustered using multi-scale feature analysis methods, and a dynamic feature evolution clustering analysis model of MCC workflow tasks is constructed. The dynamic information flow regression parameters of MCC workflow tasks are obtained by the partial least squares method, and fuzzy clustering of MCC workflow task dynamic scheduling is realized according to the weight vector assigned to each attribute. Finally, dynamic adaptive scheduling of MCC workflow tasks is realized according to the information clustering results. The experimental findings indicate that the technique has an excellent dynamic distribution and strong comprehensive assessment capability during MCC workflow task scheduling, aiding in the adaptive assignment of MCC workflow tasks.

Journal ArticleDOI
TL;DR: Wang et al., as discussed by the authors, analyzed the application status of workflow office systems in enterprise information management, taking Shenzhen Chinasoft International Company as an example to conduct empirical research and analysis on the process and effect of workflow system application in small and medium-sized enterprises.
Abstract: The rapid development of information technology has made the application of computer and network technology in office systems increasingly widespread, which not only promotes the realization and development of office automation but also reduces people's work burden. With the continuous improvement of enterprise information management requirements, a new workflow office system has appeared in the process of information management and plays an important role in enterprise information management. By discussing the meaning and development history of office automation, this paper analyzes the application status of workflow office systems in the process of enterprise information management, and takes Shenzhen Chinasoft International Company as an example to conduct empirical research and analysis on the process and effect of workflow system application in small and medium-sized enterprises.

Proceedings ArticleDOI
18 Sep 2022
TL;DR: In this paper, the authors developed a suggestive system for interactive workflow composition using frequent patterns in workflows, which is useful for novice as well as experienced scientists in composing workflows with state-of-the-art tools.
Abstract: Workflows or pipelines provide a means for executing complex data analyses seamlessly. Composing tools into a workflow is essential in bioinformatics experiments. There are scientific workflow systems, such as Taverna and Galaxy, that facilitate automatic workflow composition. However, designing workflows using workflow systems becomes more complex with the availability of vast numbers of complex, heterogeneous tools. Connecting such heterogeneous tools in a workflow is error-prone and time-consuming. The objective of the study is to develop a suggestive system for interactive workflow composition using frequent patterns in workflows. The approach consists of three main phases: pattern mining, component suggestion, and updating the workflow. Frequent patterns in workflows are identified using frequent subgraph mining techniques and N-gram modeling. The suggested components allow reusing best-practice workflows while reducing the time required to compose workflows. The frequent usage patterns identified can also be used to search for similar workflows in workflow repositories. An interactive workflow composition approach is useful for novice as well as experienced scientists in composing workflows with state-of-the-art tools. The approach enhances the reuse of best-practice workflows developed by other users. Such systems would succeed even more in the future with the availability of more and more workflows in the light of open science.
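
To ground the N-gram idea, the toy below learns which tool most often follows a given tool in past workflows and offers the top continuations; the tool names are illustrative, and real mining works on subgraphs, not only linear chains as the bigram simplification here assumes.

```python
# Toy N-gram (bigram) next-tool suggester for workflow composition.
from collections import Counter, defaultdict

def train_bigrams(workflows):
    """workflows: list of tool-name sequences mined from a repository."""
    model = defaultdict(Counter)
    for seq in workflows:
        for prev, nxt in zip(seq, seq[1:]):
            model[prev][nxt] += 1
    return model

def suggest(model, last_tool, k=3):
    """Return up to k tools that most often follow last_tool."""
    return [tool for tool, _ in model[last_tool].most_common(k)]

model = train_bigrams([
    ["fastqc", "trimmomatic", "bwa", "samtools_sort"],
    ["fastqc", "trimmomatic", "bwa", "gatk"],
    ["fastqc", "trimmomatic", "hisat2"],
])
print(suggest(model, "trimmomatic"))  # -> ['bwa', 'hisat2']
```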