
Showing papers on "Job shop scheduling published in 2022"


Journal ArticleDOI
TL;DR: In this article, an effective hybrid collaborative algorithm with a cooperative search scheme is designed to solve the problem, and a double-population cooperative search link based on a learning mechanism is presented.

78 citations



Journal ArticleDOI
TL;DR: The resource-constrained project scheduling problem is to schedule activities subject to precedence and resource constraints such that the makespan is minimized; as the authors discuss, it is a rather stylized model whose assumptions are too narrow to capture many real-world requirements.

68 citations


Journal ArticleDOI
TL;DR: In this paper, a hybrid adaptive differential evolution (HADE) algorithm is proposed to solve the job-shop scheduling problem with fuzzy processing and completion times, where new individuals are selected according to fitness values obtained from a population consisting of parents and children.
Abstract: The job-shop scheduling problem (JSP) is NP-hard and of great practical significance. Because of many uncontrollable factors, such as machine delays or human factors, it is difficult to use a single real number to express the processing and completion times of the jobs. JSP with fuzzy processing time and completion time (FJSP), which benefits from the developments of fuzzy sets, can model the scheduling more comprehensively. Fuzzy relative entropy leads to a method that can evaluate the quality of a feasible solution by comparing the actual value with the ideal value (the due date). Therefore, the multiobjective FJSP can be transformed into a single-objective optimization problem and solved by a hybrid adaptive differential evolution (HADE) algorithm. The maximum completion time, the total delay time, and the total energy consumption of jobs are considered. HADE adopts a mutation strategy based on DE-current-to-best. Its parameters (CR and F) are made adaptive and normally distributed. The new individuals are selected according to the fitness value (FRE) obtained from a population consisting of N parents and N children in HADE. The algorithm is analyzed from different viewpoints. As the experimental results demonstrate, the performance of the HADE algorithm is better than those of some other state-of-the-art algorithms (namely, ant colony optimization, artificial bee colony, and particle swarm optimization).

58 citations
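
The HADE paper above combines DE/current-to-best mutation, adaptive normally distributed F and CR, and survivor selection from N parents plus N children. Below is a minimal sketch of that mechanism on a random-key encoding of a job permutation; the encoding, the dummy fitness function, and all parameter values are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(keys):
    """Decode a real-valued random-key vector into a job permutation."""
    return np.argsort(keys)

def fitness(keys):
    """Placeholder objective (stands in for the paper's fuzzy-relative-entropy value)."""
    perm = decode(keys)
    return float(np.sum(perm * np.arange(len(perm))))  # dummy cost, lower is better

def hade_generation(pop):
    """One generation: DE/current-to-best mutation, adaptive F/CR, (N+N) selection."""
    n, dim = pop.shape
    fit = np.array([fitness(x) for x in pop])
    best = pop[np.argmin(fit)]
    children = np.empty_like(pop)
    for i in range(n):
        F = np.clip(rng.normal(0.5, 0.1), 0.0, 1.0)    # adaptive, normally distributed
        CR = np.clip(rng.normal(0.9, 0.1), 0.0, 1.0)
        r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        mutant = pop[i] + F * (best - pop[i]) + F * (pop[r1] - pop[r2])
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                # keep at least one mutant gene
        children[i] = np.where(cross, mutant, pop[i])
    merged = np.vstack([pop, children])                # N parents + N children
    merged_fit = np.array([fitness(x) for x in merged])
    return merged[np.argsort(merged_fit)[:n]]          # keep the N best

pop = rng.random((20, 10))                             # 20 individuals, 10 jobs
for _ in range(50):
    pop = hade_generation(pop)
print("best dummy cost:", fitness(pop[0]))
```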


Journal ArticleDOI
TL;DR: In this paper, the authors propose an enhanced firefly algorithm adapted for tackling workflow scheduling challenges in a cloud-edge environment, which overcomes observed deficiencies of the original firefly metaheuristic by incorporating genetic operators and a quasi-reflection-based learning procedure.
Abstract: Edge computing is a novel technology, closely related to the concept of the Internet of Things. This technology brings computing resources closer to the location where they are consumed by end users: to the edge of the cloud. In this way, response time is shortened and lower network bandwidth is utilized. Workflow scheduling must be addressed to accomplish these goals. In this paper, we propose an enhanced firefly algorithm adapted for tackling workflow scheduling challenges in a cloud-edge environment. Our proposed approach overcomes observed deficiencies of the original firefly metaheuristic by incorporating genetic operators and a quasi-reflection-based learning procedure. First, we validated the proposed improved algorithm on 10 modern standard benchmark instances and compared its performance with the original and other improved state-of-the-art metaheuristics. Second, we performed simulations for a workflow scheduling problem with two objectives: cost and makespan. We performed a comparative analysis with other state-of-the-art approaches that were tested under the same experimental conditions. The algorithm proposed in this paper exhibits significant enhancements over the original firefly algorithm and other outstanding metaheuristics in terms of convergence speed and quality of results. Based on the output of the conducted simulations, the proposed improved firefly algorithm obtains prominent results and improves workflow scheduling in the cloud-edge environment by reducing makespan and cost compared to other approaches.

45 citations
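
Two ingredients named in the abstract above are the firefly attraction move and a quasi-reflection-based learning step that samples a point between each solution and the centre of the search range, keeping the better of the two. The sketch below illustrates both on a generic continuous objective; the objective, bounds, and coefficients are placeholders and not the paper's workflow-scheduling model.

```python
import numpy as np

rng = np.random.default_rng(1)
lb, ub, dim, n = -5.0, 5.0, 8, 15

def f(x):   # placeholder cost; the paper optimizes workflow cost and makespan
    return float(np.sum(x ** 2))

def quasi_reflect(pop):
    """Quasi-reflection-based learning: sample between the domain centre and each solution."""
    centre = (lb + ub) / 2.0
    low, high = np.minimum(pop, centre), np.maximum(pop, centre)
    return rng.uniform(low, high)

pop = rng.uniform(lb, ub, (n, dim))
qr = quasi_reflect(pop)
both = np.vstack([pop, qr])
pop = both[np.argsort([f(x) for x in both])[:n]]   # keep the better half

beta0, gamma, alpha = 1.0, 1.0, 0.2
for _ in range(100):                               # basic firefly attraction moves
    cost = np.array([f(x) for x in pop])
    for i in range(n):
        for j in range(n):
            if cost[j] < cost[i]:                  # j is brighter, so i moves toward j
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                pop[i] = np.clip(pop[i] + beta * (pop[j] - pop[i])
                                 + alpha * (rng.random(dim) - 0.5), lb, ub)
                cost[i] = f(pop[i])
print("best placeholder cost:", min(f(x) for x in pop))
```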


Journal ArticleDOI
TL;DR: An alternative task scheduler approach for organizing IoT application tasks over the cloud computing environment (CCE), using a modified Manta ray foraging optimization (MRFO) and the salp swarm algorithm (SSA), is proposed to handle the problem of scheduling IoT tasks in cloud computing.
Abstract: The usage of cloud services is growing exponentially with the recent advancement of Internet of Things (IoT)-based applications. Advanced scheduling approaches are needed to successfully meet the application demands while harnessing cloud computing’s potential effectively to schedule the IoT services onto cloud resources optimally. This article proposes an alternative task scheduler approach for organizing IoT application tasks over the cloud computing environment (CCE). In particular, a novel hybrid swarm intelligence method, using a modified Manta ray foraging optimization (MRFO) and the salp swarm algorithm (SSA), is proposed to handle the problem of scheduling IoT tasks in cloud computing. This proposed method, called MRFOSSA, depends on using SSA to improve the local search ability of MRFO, which typically enhances the rate of convergence towards the global solution. To validate the developed MRFOSSA, a set of experiments is performed using different real-world and synthetic datasets of varying sizes. The performance of MRFOSSA is tested and compared with other metaheuristic techniques. Experiment results show the superiority of MRFOSSA over its competitors in terms of performance measures such as makespan time and cloud throughput.

43 citations
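
The makespan and throughput figures that MRFOSSA-style schedulers report follow directly from a task-to-VM assignment: makespan is the finish time of the busiest virtual machine. The helper below shows one plausible way to compute both for a candidate schedule; the task lengths, VM speeds, and assignment are made-up inputs.

```python
from typing import Sequence

def makespan_and_throughput(task_lengths: Sequence[float],
                            vm_speeds: Sequence[float],
                            assignment: Sequence[int]):
    """assignment[i] = index of the VM that runs task i (lengths in MI, speeds in MIPS)."""
    finish = [0.0] * len(vm_speeds)
    for length, vm in zip(task_lengths, assignment):
        finish[vm] += length / vm_speeds[vm]         # tasks on a VM run back to back
    makespan = max(finish)
    throughput = len(task_lengths) / makespan        # tasks completed per unit time
    return makespan, throughput

# made-up example: 6 tasks, 3 VMs
print(makespan_and_throughput([400, 300, 500, 200, 350, 250],
                              [100, 150, 120],
                              [0, 1, 2, 0, 1, 2]))
```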


Journal ArticleDOI
TL;DR: A hybrid multiobjective genetic algorithm (HMOGA) is incorporated into the proposed framework to solve the EJSP-SDST, aiming to minimize the makespan, total tardiness and total energy consumption simultaneously.
Abstract: Energy-efficient production scheduling research has received much attention because of the massive energy consumption of the manufacturing process. In this article, we study an energy-efficient job-shop scheduling problem with sequence-dependent setup time, aiming to minimize the makespan, total tardiness and total energy consumption simultaneously. To effectively evaluate and select solutions for a multiobjective optimization problem of this nature, a novel fitness evaluation mechanism (FEM) based on fuzzy relative entropy (FRE) is developed. FRE coefficients are calculated and used to evaluate the solutions. A multiobjective optimization framework is proposed based on the FEM and an adaptive local search strategy. A hybrid multiobjective genetic algorithm is then incorporated into the proposed framework to solve the problem at hand. Extensive experiments carried out confirm that our algorithm outperforms five other well-known multiobjective algorithms in solving the problem.

36 citations


Journal ArticleDOI
25 Jan 2022-Sensors
TL;DR: An adaptive Particle Swarm Optimisation (PSO)-based task scheduling approach that reduces task execution time and increases throughput and Average Resource Utilization Ratio (ARUR) is contributed, and an adaptive inertia weight strategy, namely Linearly Descending and Adaptive Inertia Weight (LDAIW), is introduced.
Abstract: Cloud computing has emerged as the most favorable computing platform for researchers and industry. Load-balanced task scheduling has emerged as an important and challenging research problem in Cloud computing. Swarm intelligence-based meta-heuristic algorithms are considered more suitable for Cloud scheduling and load balancing. The optimization procedure of swarm intelligence-based meta-heuristics consists of two major components: local and global search. These algorithms find the best position through local and global search, and a balance between the two plays an effective role in achieving an optimized mapping of tasks to resources. The inertia weight is an important control attribute for effectively adjusting the local and global search process. There are many inertia weight strategies; however, the existing approaches still require fine-tuning to achieve optimum scheduling, and the selection of a suitable inertia weight strategy is also an important factor. This paper contributes an adaptive Particle Swarm Optimisation (PSO)-based task scheduling approach that reduces task execution time and increases throughput and Average Resource Utilization Ratio (ARUR). Moreover, an adaptive inertia weight strategy, namely Linearly Descending and Adaptive Inertia Weight (LDAIW), is introduced. The proposed scheduling approach provides a better balance between local and global search, leading to optimized task scheduling. The performance of the proposed approach has been evaluated and compared against five renowned PSO-based inertia weight strategies concerning makespan and throughput. The experiments are then extended to compare the proposed approach against four other renowned meta-heuristic scheduling approaches. Analysis of the simulated experimentation reveals that the proposed approach attained up to 10%, 12% and 60% improvement for makespan, throughput and ARUR, respectively.

35 citations
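
The central control knob in the abstract above is an inertia weight that descends as the search progresses, shifting effort from global to local search. The sketch below shows a plain PSO loop with a linearly descending inertia weight; the adaptive part of LDAIW is not reproduced, and the decay schedule, bounds, and objective are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n, iters = 10, 30, 200
w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0

def cost(x):   # placeholder; a real scheduler would decode x into a task-to-VM mapping
    return float(np.sum(x ** 2))

pos = rng.uniform(-10, 10, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_cost = np.array([cost(x) for x in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for t in range(iters):
    w = w_max - (w_max - w_min) * t / (iters - 1)      # linearly descending inertia weight
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    c = np.array([cost(x) for x in pos])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best placeholder cost:", pbest_cost.min())
```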


Journal ArticleDOI
TL;DR: In this article, two semi-greedy-based algorithms are proposed to minimize the total energy consumption of fog nodes (FNs) while meeting the quality of service (QoS) requirements of IoT tasks.

34 citations


Journal ArticleDOI
TL;DR: In this article, a Pareto-based collaborative multi-objective optimization algorithm (CMOA) is proposed to solve the distributed permutation flow shop problem with limited buffers (DPFSP-LB).
Abstract: Energy-efficient scheduling of distributed production systems has become a common practice among large companies with the advancement of economic globalization and green manufacturing. Nevertheless, energy-efficient scheduling of the distributed permutation flow-shop problem with limited buffers (DPFSP-LB) does not receive adequate attention in the relevant literature. This paper is therefore the first attempt to study this DPFSP-LB with the objectives of minimizing makespan and total energy consumption (TEC). To solve this energy-efficient DPFSP-LB, a Pareto-based collaborative multi-objective optimization algorithm (CMOA) is proposed. In the proposed CMOA, first, a speed scaling strategy based on problem properties is designed to reduce TEC. Second, a collaborative initialization strategy is presented to generate a high-quality initial population. Third, three properties of DPFSP-LB are utilized to develop a collaborative search operator and a knowledge-based local search operator. Finally, we verify the effectiveness of each improvement component of CMOA and compare it against other well-known multi-objective optimization algorithms on instances. Experiment results demonstrate the effectiveness of CMOA in solving this energy-efficient DPFSP-LB. In particular, the CMOA obtains excellent results on all problems regarding the comprehensive metric, and is also competitive with its rivals regarding the convergence metric.

33 citations
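
Since CMOA is Pareto-based with two minimization objectives (makespan and TEC), solution comparison reduces to the standard dominance test. The snippet below shows that test and a non-dominated filter; it covers only this generic machinery, not any of the paper's problem-specific operators.

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Filter a list of (makespan, TEC) tuples down to its Pareto front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# made-up (makespan, TEC) values for five candidate schedules
candidates = [(120, 900), (110, 950), (130, 850), (110, 940), (125, 870)]
print(nondominated(candidates))   # keeps every schedule not beaten on both objectives at once
```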


Journal ArticleDOI
TL;DR: In this article, an interference-aware and prediction-based resource manager for DL systems is proposed, which proactively predicts GPU utilization of heterogeneous DL jobs extrapolated from the DL model's computation graph features, removing the need for online profiling and isolated reserved GPUs.
Abstract: To accelerate the training of Deep Learning (DL) models, clusters of machines equipped with hardware accelerators such as GPUs are leveraged to reduce execution time. State-of-the-art resource managers are needed to increase GPU utilization and maximize throughput. While co-locating DL jobs on the same GPU has been shown to be effective, this can incur interference causing slowdown. In this article we propose Horus: an interference-aware and prediction-based resource manager for DL systems. Horus proactively predicts GPU utilization of heterogeneous DL jobs extrapolated from the DL model’s computation graph features, removing the need for online profiling and isolated reserved GPUs. Through micro-benchmarks and job co-location combinations across heterogeneous GPU hardware, we identify GPU utilization as a general proxy metric to determine good placement decisions, in contrast to current approaches which reserve isolated GPUs to perform online profiling and directly measure GPU utilization for each unique submitted job. Our approach promotes high resource utilization and makespan reduction; via real-world experimentation and large-scale trace driven simulation, we demonstrate that Horus outperforms other DL resource managers by up to 61.5 percent for GPU resource utilization, 23.7–30.7 percent for makespan reduction and 68.3 percent in job wait time reduction.
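
Horus's core loop is: predict a job's GPU utilization from its computation-graph features, then allow co-location only if the predicted total stays under the device's capacity. The sketch below mimics that flow with a plain least-squares regressor; the feature set, the toy training data, and the 100% saturation cap are assumptions for illustration, not the paper's learned model.

```python
import numpy as np
from numpy.linalg import lstsq

# toy "computation graph" features: [num_ops, num_params_millions, batch_size]
train_X = np.array([[120, 5.0, 32], [300, 25.0, 64], [80, 1.2, 16],
                    [500, 60.0, 128], [200, 11.0, 32]], dtype=float)
train_y = np.array([35.0, 70.0, 20.0, 95.0, 50.0])   # measured GPU utilization (%)

# fit a linear utilization predictor (stand-in for the paper's learned model)
A = np.hstack([train_X, np.ones((len(train_X), 1))])
coef, *_ = lstsq(A, train_y, rcond=None)

def predict_util(features):
    return float(np.append(np.asarray(features, float), 1.0) @ coef)

def can_colocate(job_feats, gpu_current_util, cap=100.0):
    """Place a job on a GPU only if the predicted total utilization stays under the cap."""
    return gpu_current_util + predict_util(job_feats) <= cap

new_job = [150, 8.0, 32]
print("predicted util:", round(predict_util(new_job), 1))
print("co-locate on a GPU already at 60%?", can_colocate(new_job, 60.0))
```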

Journal ArticleDOI
TL;DR: In this paper, a distributed flowshop group scheduling problem is considered, and a cooperative co-evolutionary algorithm (CCEA) with a novel collaboration model and a reinitialization scheme is proposed.
Abstract: This article addresses a novel scheduling problem, a distributed flowshop group scheduling problem, which has important applications in modern manufacturing systems. The problem considers how to arrange a variety of jobs subject to group constraints at a number of identical manufacturing cells, each one with a flowshop structure, with the objective of minimizing makespan. We explore problem-specific knowledge and present a mixed-integer linear programming model, a counterintuitive paradox, and two suites of accelerations to save computational effort. Due to the complexity of the problem, we consider a decomposition strategy and propose a cooperative co-evolutionary algorithm (CCEA) with a novel collaboration model and a reinitialization scheme. A comprehensive and thorough computational and statistical campaign is carried out. The results show that the proposed collaboration model and reinitialization scheme are very effective. The proposed CCEA outperforms a number of metaheuristics adapted from closely related scheduling problems in the literature by a considerable margin.
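
In the cooperative co-evolutionary scheme described above, each factory's job sequence evolves in its own subpopulation, and a candidate is evaluated by combining it with representative sequences from the other factories; the overall objective is the largest per-factory makespan. The sketch below shows that collaboration step for a permutation flowshop in each factory; the instance data and the naive swap mutation are invented, and group constraints and the paper's accelerations are omitted.

```python
import random

random.seed(3)

def flowshop_makespan(proc, seq):
    """proc[j][m] = processing time of job j on machine m; seq = job order."""
    machines = len(proc[0])
    finish = [0.0] * machines
    for j in seq:
        for m in range(machines):
            start = max(finish[m], finish[m - 1] if m > 0 else 0.0)
            finish[m] = start + proc[j][m]
    return finish[-1]

def evaluate(assignments, representatives, factory, candidate):
    """Combine one factory's candidate with the other factories' representatives."""
    seqs = list(representatives)
    seqs[factory] = candidate
    return max(flowshop_makespan(assignments[f], seqs[f]) for f in range(len(seqs)))

# two factories, each with its own jobs (rows) on 3 machines (made-up data)
assignments = [
    [[3, 2, 4], [2, 5, 1], [4, 1, 3]],                 # factory 0: 3 jobs
    [[1, 4, 2], [3, 3, 3], [2, 2, 5], [4, 1, 2]],      # factory 1: 4 jobs
]
reps = [list(range(len(a))) for a in assignments]      # current best sequence per factory

for step in range(200):                                # naive co-evolution by swap mutation
    f = step % 2
    cand = reps[f][:]
    i, j = random.sample(range(len(cand)), 2)
    cand[i], cand[j] = cand[j], cand[i]
    if evaluate(assignments, reps, f, cand) <= evaluate(assignments, reps, f, reps[f]):
        reps[f] = cand

print("final makespan:", evaluate(assignments, reps, 0, reps[0]))
```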

Journal ArticleDOI
TL;DR: In this article, a hybrid iterated greedy and simulated annealing algorithm is proposed to solve the flexible job shop scheduling problem with crane transportation processes (CFJSP), where two objectives are simultaneously considered, namely, the minimization of the maximum completion time and of the energy consumption during machine processing and crane transportation.
Abstract: In this study, we propose an efficient optimization algorithm that is a hybrid of the iterated greedy and simulated annealing algorithms (hereinafter referred to as IGSA) to solve the flexible job shop scheduling problem with crane transportation processes (CFJSP). Two objectives are simultaneously considered, namely, the minimization of the maximum completion time and of the energy consumption during machine processing and crane transportation. Different from the methods in the literature, crane lift operations are investigated for the first time to account for the processing time and energy consumption involved in the crane lift process. The IGSA algorithm is then developed to solve the CFJSPs considered. In the proposed IGSA algorithm, first, each solution is represented by a 2-D vector, where one vector represents the scheduling sequence and the other shows the assignment of machines. Subsequently, an improved construction heuristic considering the problem features is proposed, which can decrease the number of replicated insertion positions for the destruction operations. Furthermore, to balance the exploration abilities and time complexity of the proposed algorithm, a problem-specific exploration heuristic is developed. Finally, a set of randomly generated instances based on realistic industrial processes is tested. Through comprehensive computational comparisons and statistical analyses, the proposed algorithm is shown to perform highly effectively in comparison with several efficient algorithms. Note to Practitioners—The flexible job shop scheduling problem (FJSP) can be extended and applied to many types of practical manufacturing processes. Many realistic production processes need to consider transportation procedures, especially with limited crane resources and the energy consumed during transportation operations. This study models a realistic production process as an FJSP with crane transportation, wherein two objectives, namely, the makespan and energy consumption, are to be simultaneously minimized. This study is the first to consider the height of the processing machines, and therefore the crane lift operations and lift energy consumption are investigated. A hybrid iterated greedy algorithm is proposed for solving the problem considered, and several problem-specific heuristics are embedded to balance the exploration and exploitation abilities of the proposed algorithm. In addition, the proposed algorithm can be generalized to solve other types of scheduling problems with crane transportation.
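
The IGSA skeleton pairs iterated greedy's destruction/reconstruction with a simulated-annealing acceptance rule. Below is a compact sketch of that loop using a plain permutation flowshop as a stand-in objective; the crane and energy model, machine assignment, and all parameter values would need the paper's full problem definition.

```python
import math
import random

random.seed(4)
PROC = [[random.randint(1, 9) for _ in range(4)] for _ in range(8)]  # 8 jobs x 4 machines

def makespan(seq):
    finish = [0] * len(PROC[0])
    for j in seq:
        for m in range(len(finish)):
            finish[m] = max(finish[m], finish[m - 1] if m else 0) + PROC[j][m]
    return finish[-1]

def best_insertion(seq, job):
    """Greedy reconstruction: insert the job at its best position."""
    options = [seq[:i] + [job] + seq[i:] for i in range(len(seq) + 1)]
    return min(options, key=makespan)

def igsa(iterations=300, d=2, temp=5.0):
    current = best = list(range(len(PROC)))
    for _ in range(iterations):
        partial = current[:]
        removed = [partial.pop(random.randrange(len(partial))) for _ in range(d)]
        for job in removed:                       # destruction then reconstruction
            partial = best_insertion(partial, job)
        delta = makespan(partial) - makespan(current)
        if delta <= 0 or random.random() < math.exp(-delta / temp):  # SA acceptance
            current = partial
        if makespan(current) < makespan(best):
            best = current[:]
    return best, makespan(best)

print(igsa())
```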

Journal ArticleDOI
TL;DR: In this article, a fault-mode-assisted gated recurrent unit (FGRU) life prediction method is used to guide the predictive maintenance initiation time of all machines. Two actual bearing degradation cases show that the FGRU method is more accurate than three common methods (Encoder-Decoder Recurrent Neural Network, Bidirectional Long Short-Term Memory, and GRU), and three benchmark cases show that the joint decision-making can effectively reduce the time cost of manufacturing enterprises.

Journal ArticleDOI
TL;DR: A survey of parallel batching problems is provided in this paper, where the authors give a taxonomy of the most common problems and a discussion of current trends in scheduling jobs on machines with parallel batch processing.

Journal ArticleDOI
TL;DR: Results indicate that the logic-based Benders decomposition method is able to return high-quality schedules for solving seru scheduling problems.

Journal ArticleDOI
TL;DR: In this paper, a hybrid deep Q network (HDQN) is developed to solve the dynamic flexible job shop scheduling problem with insufficient transportation resources (DFJSP-ITR) to minimize the makespan and total energy consumption.
Abstract: With the extensive application of automated guided vehicles in manufacturing systems, production scheduling considering limited transportation resources becomes a difficult problem. At the same time, real manufacturing systems are prone to various disturbance events, which increase the complexity and uncertainty of the shop floor. To this end, this paper addresses the dynamic flexible job shop scheduling problem with insufficient transportation resources (DFJSP-ITR) to minimize the makespan and total energy consumption. As a sequential decision-making problem, DFJSP-ITR can be modeled as a Markov decision process in which the agent determines the scheduling object and the allocation of resources at each decision point, so this paper adopts deep reinforcement learning to solve DFJSP-ITR. In this paper, the multiobjective optimization model of DFJSP-ITR is established. Then, in order to make the agent learn to choose the appropriate rule based on the production state at each decision point, a hybrid deep Q network (HDQN) is developed for this problem, which combines a deep Q network with three extensions. Moreover, the shop floor state model is established first, and then the decision point, generic state features, genetic-programming-based action space and reward function are designed. Based on these contents, the training method using HDQN and the strategy for handling new job insertions and machine breakdowns are proposed. Finally, comprehensive experiments are conducted, and the results show that HDQN has superiority and generality compared with current optimization-based approaches, and can effectively deal with disturbance events and unseen situations through learning.
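
At each decision point the agent described above maps shop-floor state features to a choice among dispatching and assignment rules through a Q-network. The sketch below shows only the decision side, a tiny randomly initialized Q-network and epsilon-greedy rule selection; the feature vector, rule set, and network shape are assumptions, and training with replay buffers, target networks, and the paper's three DQN extensions is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

RULES = ["SPT", "LPT", "EDD", "FIFO", "nearest_AGV", "least_loaded_machine"]
STATE_DIM, HIDDEN = 12, 32          # e.g. 12 generic shop-floor features (assumed)

# randomly initialized Q-network weights; a real HDQN would learn these
W1 = rng.normal(0, 0.1, (STATE_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, len(RULES)))
b2 = np.zeros(len(RULES))

def q_values(state):
    """Two-layer MLP: state features -> one Q-value per dispatching rule."""
    hidden = np.maximum(0.0, state @ W1 + b1)   # ReLU
    return hidden @ W2 + b2

def select_rule(state, epsilon=0.1):
    """Epsilon-greedy choice among rules at a decision point."""
    if rng.random() < epsilon:
        return int(rng.integers(len(RULES)))
    return int(np.argmax(q_values(state)))

state = rng.random(STATE_DIM)                   # made-up snapshot of the shop floor
print("chosen rule:", RULES[select_rule(state)])
```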

Journal ArticleDOI
TL;DR: In this article, a hybrid whale optimization algorithm-based MBA algorithm is proposed for solving multi-objective task scheduling problems in cloud computing environments, which decreases the makespan by maximizing resource utilization.

Journal ArticleDOI
TL;DR: In this article, the scheduling problem in a seru production system (SPS) is formulated as a mixed-integer programming problem and then reformulated into a set partitioning master problem and a set of independent subproblems by employing the logic-based Benders decomposition (LBBD) method.

Journal ArticleDOI
TL;DR: In this paper, a hybrid multi-objective optimization algorithm combining an estimation of distribution algorithm and a deep Q-network is proposed to solve a flexible job shop scheduling problem with a time-of-use electricity price constraint.
Abstract: In this study, a flexible job shop scheduling problem with a time-of-use electricity price constraint is considered. The problem includes machine processing speed, setup time, idle time, and the transportation time between machines. Both the maximum completion time and the total electricity price are optimized simultaneously. A hybrid multi-objective optimization algorithm combining an estimation of distribution algorithm and a deep Q-network is proposed to solve it. The processing sequence, machine assignment, and processing speed assignment are all described using a three-dimensional solution representation. Two knowledge-based initialization strategies are designed for better performance. In the estimation of distribution algorithm component, three probability matrices corresponding to the solution representation are provided. In the deep Q-network component, 34 state features are selected to describe the scheduling situation, nine knowledge-based actions are defined to refine the scheduling solution, and a reward based on the two objectives is designed. As knowledge for the initialization and optimization strategies, five properties of the considered problem are proposed. The proposed mixed integer linear programming model of the problem is validated by the exact solver CPLEX. The results of numerical testing on instances of a wide range of scales show that the proposed hybrid algorithm is efficient and effective at solving the integrated flexible job shop scheduling problem.
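
The estimation-of-distribution component keeps position-by-job probability matrices, samples sequences from them, and shifts probability mass toward elite solutions. A minimal version of that sample-and-update loop is sketched below; it covers only an operation-sequence matrix with a dummy cost, not the machine and speed matrices, the deep Q-network refinement, or the electricity-price objective.

```python
import numpy as np

rng = np.random.default_rng(6)
n_jobs, pop_size, elite, lr = 6, 40, 8, 0.2
P = np.full((n_jobs, n_jobs), 1.0 / n_jobs)   # P[pos, job] = prob. of job at position

def sample_sequence():
    seq, remaining = [], list(range(n_jobs))
    for pos in range(n_jobs):
        w = np.array([P[pos, j] for j in remaining])
        j = rng.choice(remaining, p=w / w.sum())   # sample jobs without replacement
        seq.append(int(j))
        remaining.remove(j)
    return seq

def cost(seq):   # dummy objective standing in for makespan / electricity price
    return sum(abs(j - pos) for pos, j in enumerate(seq))

for _ in range(30):
    pop = [sample_sequence() for _ in range(pop_size)]
    pop.sort(key=cost)
    freq = np.zeros_like(P)
    for seq in pop[:elite]:                        # estimate distribution of elite sequences
        for pos, j in enumerate(seq):
            freq[pos, j] += 1.0 / elite
    P = (1 - lr) * P + lr * freq                   # incremental probability-matrix update
    P /= P.sum(axis=1, keepdims=True)

print("best sampled sequence:", min((cost(s), s) for s in pop))
```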

Journal ArticleDOI
TL;DR: In this paper, an interval many-objective cloud task scheduling optimization (I-MCTSO) model is designed to simulate real cloud computing task scheduling, and an interval credibility strategy is employed to improve the convergence performance.

Journal ArticleDOI
TL;DR: Wang et al. established a distributed two-stage reentrant hybrid flow shop bi-level scheduling model, which takes makespan, total carbon emissions and total energy consumption costs as the optimization objectives.

Journal ArticleDOI
01 Oct 2022
TL;DR: In this article, a multi-agent reinforcement learning algorithm is proposed to solve job scheduling problems in a resource preemption environment, where each job is regarded as an intelligent agent that chooses an available robot according to its current partial observation.
Abstract: In smart manufacturing, robots gradually replace traditional machines as new processing units, which has significantly liberated laborers and reduced manufacturing expenditure. However, manufacturing resources are usually limited, so a preemption relationship exists among robots. Under this circumstance, job scheduling puts forward higher requirements on accuracy and generalization. To this end, this paper proposes a scheduling algorithm to solve job scheduling problems in a resource preemption environment with multi-agent reinforcement learning. The resource preemption environment is modeled as a decentralized partially observable Markov decision process, where each job is regarded as an intelligent agent that chooses an available robot according to its current partial observation. Based on this modeling, a multi-agent scheduling architecture is constructed to handle the high-dimensional action space issue caused by simultaneous multi-task scheduling. Besides, multi-agent reinforcement learning is employed to learn both the decision-making policy of each agent and the cooperation between job agents. This paper is novel in addressing the scheduling problem in a resource preemption environment and solving the job shop scheduling problem with multi-agent reinforcement learning. The experiments of the case study indicate that our proposed method outperforms the traditional rule-based methods and the distributed-agent reinforcement learning method in total makespan, training stability, and model generalization.
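
In the decentralized formulation above, each job agent sees only a partial observation (for instance, which robots are free and how loaded they are) and picks one available robot. The toy sketch below shows that decision interface; the observation contents and the greedy stand-in policy are placeholders for the learned multi-agent policies.

```python
import random
from dataclasses import dataclass

random.seed(7)

@dataclass
class Observation:
    """Partial view a single job agent receives at a decision step (assumed contents)."""
    free_robots: list        # indices of robots that are not preempted right now
    robot_queue_len: dict    # robot index -> number of jobs already waiting
    remaining_ops: int       # this job's remaining operations

def job_agent_policy(obs: Observation) -> int:
    """Stand-in policy: pick the free robot with the shortest queue.
    A trained multi-agent RL policy would replace this heuristic."""
    return min(obs.free_robots, key=lambda r: obs.robot_queue_len[r])

obs = Observation(free_robots=[0, 2, 3],
                  robot_queue_len={0: 4, 1: 1, 2: 2, 3: 5},
                  remaining_ops=3)
print("job agent chooses robot", job_agent_policy(obs))
```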

Journal ArticleDOI
TL;DR: In this paper , the authors presented a modeling and solution approach for a robust job shop scheduling problem under deterministic and stochastic machine unavailability caused by planned preventive maintenance (PM) and unplanned corrective maintenance (CM) following random breakdowns.

Journal ArticleDOI
01 Jan 2022-Energy
TL;DR: In this article, the authors developed a novel mathematical formulation for the energy-efficient flexible job-shop scheduling problem using the improved unit-specific event-based time representation, which can achieve up to 13.5% energy savings in less computational time.

Journal ArticleDOI
TL;DR: Enhanced versions of the HEFT algorithm under user-required financial constraints to minimize the makespan of a specified workflow submission on virtual machines are suggested and are suggested to perform better than the basic HEFT method in terms of lesser schedule length of the workflow problems running on various virtual machines.
Abstract: Cloud computing is one of the most commonly used infrastructures for carrying out activities using virtual machines known as processing units. One of the most fundamental issues with cloud computing is task scheduling. The optimal determination of scheduling criteria in cloud computing is a non-deterministic polynomial-time (NP)-complete optimization problem, and several procedures to manage this problem have been suggested by researchers in the past. Among these methods, the Heterogeneous Earliest Finish Time (HEFT) algorithm is recognized to produce optimal outcomes in a shorter time period for scheduling tasks in a heterogeneous environment. Literature shows that HEFT gives extraordinary results in terms of quality of schedule and execution time. However, in some cases, the average computation cost and selection of the first idle slot may not produce a good solution. Therefore, here we propose modified versions of the HEFT algorithm that can obtain improved results. In the rank generation phase, we implement different methodologies for calculating ranks, while in the processor selection phase, we modify the way of selecting idle slots for scheduling the tasks. This paper suggests enhanced versions of the HEFT algorithm under user-required financial constraints to minimize the makespan of a specified workflow submission on virtual machines. Our findings also suggest that enhanced versions of the HEFT algorithm perform better than the basic HEFT method in terms of lesser schedule length of the workflow problems running on various virtual machines.