
Showing papers on "Dynamic priority scheduling published in 2015"


Journal ArticleDOI
TL;DR: Through analyzing the cloud computing architecture, this survey first presents a taxonomy at two levels of scheduling cloud resources, then paints a landscape of the scheduling problem and solutions, and then presents a systematic, comprehensive survey of state-of-the-art approaches.
Abstract: A disruptive technology fundamentally transforming the way that computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience of resources, as services via the Internet. Because the cloud provides a finite pool of virtualized on-demand resources, optimally scheduling them has become an essential and rewarding topic, where a trend of using Evolutionary Computation (EC) algorithms is emerging rapidly. Through analyzing the cloud computing architecture, this survey first presents a taxonomy at two levels of scheduling cloud resources. It then paints a landscape of the scheduling problem and solutions. According to the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are investigated and invited, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration with the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.

416 citations


Journal ArticleDOI
TL;DR: This work proposes a general-purpose framework, Petuum, that systematically addresses data- and model-parallel challenges in large-scale ML, by observing that many ML programs are fundamentally optimization-centric and admit error-tolerant, iterative-convergent algorithmic solutions.
Abstract: What is a systematic way to efficiently apply a wide spectrum of advanced ML programs to industrial scale problems, using Big Models (up to 100s of billions of parameters) on Big Data (up to terabytes or petabytes)? Modern parallelization strategies employ fine-grained operations and scheduling beyond the classic bulk-synchronous processing paradigm popularized by MapReduce, or even specialized graph-based execution that relies on graph representations of ML programs. The variety of approaches tends to pull systems and algorithms design in different directions, and it remains difficult to find a universal platform applicable to a wide range of ML programs at scale. We propose a general-purpose framework, Petuum, that systematically addresses data- and model-parallel challenges in large-scale ML, by observing that many ML programs are fundamentally optimization-centric and admit error-tolerant, iterative-convergent algorithmic solutions. This presents unique opportunities for an integrative system design, such as bounded-error network synchronization and dynamic scheduling based on ML program structure. We demonstrate the efficacy of these system designs versus well-known implementations of modern ML algorithms, showing that Petuum allows ML programs to run in much less time and at considerably larger model sizes, even on modestly-sized compute clusters.
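The "bounded-error network synchronization" mentioned above amounts to letting workers read slightly stale parameters within a fixed bound. A minimal sketch of that bounded-staleness idea (not Petuum's actual API; the worker count and staleness limit are illustrative):

# Minimal sketch (not Petuum's actual API) of the bounded-staleness idea behind
# "bounded-error network synchronization": a worker may proceed only while it is
# at most `staleness` clock ticks ahead of the slowest worker.
class BoundedStalenessClock:
    def __init__(self, num_workers, staleness):
        self.clocks = [0] * num_workers      # per-worker iteration counters
        self.staleness = staleness           # maximum allowed clock gap

    def tick(self, worker_id):
        self.clocks[worker_id] += 1

    def can_proceed(self, worker_id):
        # Worker may start its next iteration only if its lead over the
        # slowest worker is within the staleness bound.
        return self.clocks[worker_id] - min(self.clocks) <= self.staleness

if __name__ == "__main__":
    clock = BoundedStalenessClock(num_workers=3, staleness=2)
    clock.tick(0); clock.tick(0); clock.tick(0)   # worker 0 races ahead
    print(clock.can_proceed(0))   # False: 3 ticks ahead of workers 1 and 2
    clock.tick(1); clock.tick(2)
    print(clock.can_proceed(0))   # True: the gap is now 2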

395 citations


Journal ArticleDOI
TL;DR: An extensive survey and comparative analysis of various scheduling algorithms for cloud and grid environments based on three popular metaheuristic techniques: Ant Colony Optimization, Genetic Algorithm and Particle Swarm Optimization and two novel techniques: League Championship Algorithm (LCA) and BAT algorithm.

334 citations


Journal ArticleDOI
TL;DR: Experimental results show that, on these four metrics, the proposed multi-objective optimization method is better than other similar methods, improving by up to 56.6% in the best-case scenario.
Abstract: For task-scheduling problems in cloud computing, a multi-objective optimization method is proposed here. First, considering the diversity of resources and tasks in cloud computing, we propose a resource cost model that defines the demand of tasks on resources in more detail. This model reflects the relationship between the user’s resource costs and the budget costs. A multi-objective optimization scheduling method has been proposed based on this resource cost model. This method considers the makespan and the user’s budget costs as constraints of the optimization problem, achieving multi-objective optimization of both performance and cost. An improved ant colony algorithm has been proposed to solve this problem. Two constraint functions were used to evaluate and provide feedback regarding the performance and budget cost. These two constraint functions made the algorithm adjust the quality of the solution in a timely manner based on feedback in order to achieve the optimal solution. Some simulation experiments were designed to evaluate this method’s performance using four metrics: 1) the makespan; 2) cost; 3) deadline violation rate; and 4) resource utilization. Experimental results show that, based on these four metrics, the multi-objective optimization method is better than other similar methods, improving by up to 56.6% in the best-case scenario.
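As a rough illustration of how the two constraint functions can feed back into an ant-colony-style search, here is a hedged sketch (not the paper's exact algorithm; all runtimes, prices, deadline and budget values are hypothetical):

import random

# Illustrative sketch: evaluate a candidate task-to-VM assignment against the
# two constraints described above, makespan (deadline) and budget, and turn
# violations into a feedback signal an ant colony style search can use when
# updating pheromone. All numbers are hypothetical.
def evaluate(assignment, exec_time, price, deadline, budget):
    vm_load = {}
    cost = 0.0
    for task, vm in enumerate(assignment):
        vm_load[vm] = vm_load.get(vm, 0.0) + exec_time[task][vm]
        cost += exec_time[task][vm] * price[vm]
    makespan = max(vm_load.values())
    # Constraint feedback in [0, 1]: 1 means the constraint is fully met.
    time_ok = min(1.0, deadline / makespan)
    cost_ok = min(1.0, budget / cost)
    return makespan, cost, time_ok * cost_ok   # combined quality signal

if __name__ == "__main__":
    exec_time = [[4, 6], [3, 5], [8, 2]]   # task x VM runtime (hypothetical)
    price = [1.0, 2.5]                     # per-time-unit price of each VM
    assignment = [random.randrange(2) for _ in range(3)]
    print(evaluate(assignment, exec_time, price, deadline=10, budget=25))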

265 citations


Journal ArticleDOI
TL;DR: This paper makes a comprehensive survey of workflow scheduling in cloud environment in a problem–solution manner and conducts taxonomy and comparative review on workflow scheduling algorithms.
Abstract: To program in distributed computing environments such as grids and clouds, workflow is adopted as an attractive paradigm for its powerful ability in expressing a wide range of applications, including scientific computing, multi-tier Web, and big data processing applications. With the development of cloud technology and the extensive deployment of cloud platforms, the problem of workflow scheduling in the cloud has become an important research topic. The challenges of the problem lie in: the NP-hard nature of task-resource mapping; diverse QoS requirements; on-demand resource provisioning; performance fluctuation and failure handling; hybrid resource scheduling; and data storage and transmission optimization. Consequently, a number of studies, focusing on different aspects, have emerged in the literature. In this paper, we first conduct a taxonomy and comparative review of workflow scheduling algorithms. Then, we make a comprehensive survey of workflow scheduling in the cloud environment in a problem-solution manner. Based on the analysis, we also highlight some research directions for future investigation.

206 citations


Proceedings ArticleDOI
24 Aug 2015
TL;DR: This work presents Rapier, a coflow-aware network optimization framework that seamlessly integrates routing and scheduling for better application performance, and demonstrates that Rapier significantly reduces the average coflow completion time.
Abstract: In the data flow models of today's data center applications such as MapReduce, Spark and Dryad, multiple flows can semantically comprise a coflow group. Only when all flows in a coflow complete is the result meaningful to an application. To optimize application performance, routing and scheduling must be jointly considered at the level of a coflow rather than individual flows. However, prior solutions have a significant limitation: they consider scheduling only, which is insufficient. To this end, we present Rapier, a coflow-aware network optimization framework that seamlessly integrates routing and scheduling for better application performance. Using a small-scale testbed implementation and large-scale simulations, we demonstrate that Rapier significantly reduces the average coflow completion time (CCT) by up to 79.30% compared to the state-of-the-art scheduling-only solution, and it is readily implementable with existing commodity switches.
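A toy calculation of coflow completion time (CCT) illustrates why per-flow decisions in isolation can waste capacity without helping the coflow; the flow sizes and path rates are hypothetical and this is not Rapier itself:

# Illustrative sketch only (not Rapier): the coflow completion time is set by
# the slowest flow in the group, which is why per-flow routing or scheduling
# decisions in isolation can be counterproductive.
def coflow_completion_time(flow_sizes, path_rates):
    # Each flow finishes at size / rate; the coflow finishes when the last one does.
    return max(size / rate for size, rate in zip(flow_sizes, path_rates))

if __name__ == "__main__":
    flow_sizes = [100.0, 40.0, 40.0]     # MB (hypothetical)
    routing_a = [10.0, 10.0, 10.0]       # MB/s if all flows share one fast path
    routing_b = [10.0, 4.0, 4.0]         # MB/s if the small flows take a slower path
    print(coflow_completion_time(flow_sizes, routing_a))   # 10.0 s
    print(coflow_completion_time(flow_sizes, routing_b))   # 10.0 s: CCT unchanged,
    # even though two flows moved to slower links, freeing fast-path capacity.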

175 citations


Journal ArticleDOI
TL;DR: It is proved that the expected makespan of scheduling stochastic tasks is greater than or equal to the makespan of scheduling deterministic tasks, where all processing times and communication times are replaced by their expected values.
Abstract: Generally, a parallel application consists of precedence constrained stochastic tasks, where task processing times and intertask communication times are random variables following certain probability distributions. Scheduling such precedence constrained stochastic tasks with communication times on a heterogeneous cluster system with processors of different computing capabilities to minimize a parallel application's expected completion time is an important but very difficult problem in parallel and distributed computing. In this paper, we present a model of scheduling stochastic parallel applications on heterogeneous cluster systems. We discuss stochastic scheduling attributes and methods to deal with various random variables in scheduling stochastic tasks. We prove that the expected makespan of scheduling stochastic tasks is greater than or equal to the makespan of scheduling deterministic tasks, where all processing times and communication times are replaced by their expected values. To solve the problem of scheduling precedence constrained stochastic tasks efficiently and effectively, we propose a stochastic dynamic level scheduling (SDLS) algorithm, which is based on stochastic bottom levels and stochastic dynamic levels. Our rigorous performance evaluation results clearly demonstrate that the proposed stochastic task scheduling algorithm significantly outperforms existing algorithms in terms of makespan, speedup, and makespan standard deviation.
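The proved inequality can be checked numerically: with random task times, E[max(T1, T2)] >= max(E[T1], E[T2]), so scheduling against expected values underestimates the true expected makespan. A small Monte Carlo illustration with arbitrarily chosen exponential task-time distributions:

import random

# Two independent parallel tasks with exponential processing times.
# The means below are arbitrary illustrative values.
random.seed(0)
mean1, mean2 = 4.0, 5.0
samples = [max(random.expovariate(1 / mean1), random.expovariate(1 / mean2))
           for _ in range(100_000)]
expected_makespan = sum(samples) / len(samples)
deterministic_makespan = max(mean1, mean2)
print(expected_makespan, ">=", deterministic_makespan)   # roughly 6.8 >= 5.0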

170 citations


Journal ArticleDOI
TL;DR: Three task scheduling algorithms, called MCC, MEMAX and CMMN for heterogeneous multi-cloud environment, which aim to minimize the makespan and maximize the average cloud utilization are presented.
Abstract: Cloud Computing has grown exponentially in the business and research community over the last few years. It is now an emerging field and has become more popular due to recent advances in virtualization technology. In Cloud Computing, various applications are submitted to datacenters to obtain services on a pay-per-use basis. However, due to limited resources, some workloads are transferred to other data centers to handle peak client demands. Therefore, scheduling workloads in a heterogeneous multi-cloud environment is a hot topic and very challenging due to the heterogeneity of the cloud resources, which have varying capacities and functionalities. In this paper, we present three task scheduling algorithms, called MCC, MEMAX and CMMN, for the heterogeneous multi-cloud environment, which aim to minimize the makespan and maximize the average cloud utilization. The proposed MCC algorithm is a single-phase scheduling algorithm, whereas the rest are two-phase scheduling algorithms. We perform rigorous experiments on the proposed algorithms using various benchmark as well as synthetic datasets. Their performances are evaluated in terms of makespan and average cloud utilization, and the experimental results are compared with those of existing single-phase and two-phase scheduling algorithms to demonstrate the efficacy of the proposed algorithms.
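A hedged sketch of a single-phase minimum-completion-time heuristic in the spirit of the algorithms described above (not necessarily the paper's exact MCC rule; the expected-time-to-compute matrix is hypothetical):

# Each task goes to the cloud where it would finish earliest, given the load
# already assigned; makespan and average utilization are then reported.
def schedule_min_completion(etc):
    num_tasks, num_clouds = len(etc), len(etc[0])
    ready = [0.0] * num_clouds               # time at which each cloud becomes free
    placement = []
    for task in range(num_tasks):
        # Pick the cloud with the earliest completion time for this task.
        best = min(range(num_clouds), key=lambda c: ready[c] + etc[task][c])
        ready[best] += etc[task][best]
        placement.append(best)
    makespan = max(ready)
    utilization = sum(ready) / (num_clouds * makespan)
    return placement, makespan, utilization

if __name__ == "__main__":
    etc = [[7, 3], [2, 9], [4, 4], [6, 2]]   # expected time to compute, task x cloud
    print(schedule_min_completion(etc))      # ([1, 0, 0, 1], 6.0, ~0.92)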

170 citations


Proceedings ArticleDOI
27 Aug 2015
TL;DR: Tarcil is presented, a distributed scheduler that targets both scheduling speed and quality, and uses an analytically derived sampling framework that adjusts the sample size based on load, and provides statistical guarantees on the quality of allocated resources.
Abstract: Scheduling diverse applications in large, shared clusters is particularly challenging. Recent research on cluster scheduling focuses either on scheduling speed, using sampling to quickly assign resources to tasks, or on scheduling quality, using centralized algorithms that search for the resources that improve both task performance and cluster utilization. We present Tarcil, a distributed scheduler that targets both scheduling speed and quality. Tarcil uses an analytically derived sampling framework that adjusts the sample size based on load, and provides statistical guarantees on the quality of allocated resources. It also implements admission control when sampling is unlikely to find suitable resources. This makes it appropriate for large, shared clusters hosting short- and long-running jobs. We evaluate Tarcil on clusters with hundreds of servers on EC2. For highly-loaded clusters running short jobs, Tarcil improves task execution time by 41% over a distributed, sampling-based scheduler. For more general scenarios, Tarcil achieves near-optimal performance for 4× and 2× more jobs than sampling-based and centralized schedulers respectively.
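The basic order-statistics intuition behind load-dependent sample sizes can be shown in a few lines (Tarcil's actual framework is analytically derived and richer than this):

import math

# If a fraction q of slots are "good enough", sampling d slots finds at least
# one good slot with probability 1 - (1 - q)**d; solve for d given a target.
def samples_needed(good_fraction, target_probability):
    return math.ceil(math.log(1 - target_probability) / math.log(1 - good_fraction))

if __name__ == "__main__":
    # At low load, 50% of slots are acceptable: very few samples suffice.
    print(samples_needed(0.5, 0.99))    # 7
    # At high load, only 10% are acceptable: the sample size must grow.
    print(samples_needed(0.1, 0.99))    # 44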

168 citations


Journal ArticleDOI
TL;DR: A resource-aware hybrid scheduling algorithm suitable for Heterogeneous Distributed Computing, especially for modern High-Performance Computing (HPC) systems in which applications are modeled with various requirements (both IO- and computation-intensive), with an emphasis on data from multimedia applications.

166 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the effects of production scheduling policies aimed towards improving productive and environmental performances in a job shop system using a green genetic algorithm, which achieved a semi-optimal makespan similar to that obtained by the best of other methods but with a significantly lower total energy consumption.
Abstract: The paper investigates the effects of production scheduling policies aimed towards improving productive and environmental performances in a job shop system. A green genetic algorithm allows the assessment of multi-objective problems related to sustainability. Two main considerations have emerged from the application of the algorithm. First, the algorithm is able to achieve a semi-optimal makespan similar to that obtained by the best of other methods but with a significantly lower total energy consumption. Second, the study demonstrated that wasted energy consumption can be reduced significantly by employing complex energy-efficient machine behaviour policies.

Journal ArticleDOI
TL;DR: The novelty of this multi-objective evolutionary algorithm (MOEA)-based proactive-reactive method is that it is able to handle multiple objectives including efficiency and stability simultaneously, adapt to the new environment quickly by incorporating heuristic dynamic optimization strategies, and deal with two scheduling policies of machine assignment and operation sequencing together.

Proceedings Article
08 Jul 2015
TL;DR: Mercury is proposed, a hybrid resource management framework that supports the full spectrum of scheduling, from centralized to distributed, and exposes a programmatic interface that allows applications to trade-off between scheduling overhead and execution guarantees.
Abstract: Datacenter-scale computing for analytics workloads is increasingly common. High operational costs force heterogeneous applications to share cluster resources for achieving economy of scale. Scheduling such large and diverse workloads is inherently hard, and existing approaches tackle this in two alternative ways: 1) centralized solutions offer strict, secure enforcement of scheduling invariants (e.g., fairness, capacity) for heterogeneous applications, 2) distributed solutions offer scalable, efficient scheduling for homogeneous applications. We argue that these solutions are complementary, and advocate a blended approach. Concretely, we propose Mercury, a hybrid resource management framework that supports the full spectrum of scheduling, from centralized to distributed. Mercury exposes a programmatic interface that allows applications to trade-off between scheduling overhead and execution guarantees. Our framework harnesses this flexibility by opportunistically utilizing resources to improve task throughput. Experimental results on production-derived workloads show gains of over 35% in task throughput. These benefits can be translated by appropriate application and framework policies into job throughput or job latency improvements. We have implemented and contributed Mercury as an extension of Apache Hadoop / YARN.

Journal ArticleDOI
TL;DR: A novel algorithm named PRS that combines proactive with reactive scheduling methods is proposed to schedule real-time tasks and three system scaling strategies according to dynamic workloads are developed to improve the resource utilization and reduce energy consumption.

Journal ArticleDOI
TL;DR: Results and comparisons show that TABC is effective in both the scheduling and rescheduling stages, and the uncertainty in the timing of returns in remanufacturing is modeled as a new-job-insertion constraint in the FJSP.
Abstract: A heuristic is proposed for initializing the ABC population. An ensemble local search method is proposed to improve the convergence of TABC. Three re-scheduling strategies are proposed and evaluated. TABC is tested using benchmark instances and real cases from re-manufacturing. TABC is compared against several state-of-the-art algorithms. This study addresses the scheduling problem in remanufacturing engineering. The purpose of this paper is to model the remanufacturing scheduling problem effectively and solve it. The problem is modeled as a flexible job-shop scheduling problem (FJSP) and is divided into two stages: scheduling and re-scheduling when a new job arrives. The uncertainty in the timing of returns in remanufacturing is modeled as a new-job-insertion constraint in the FJSP. A two-stage artificial bee colony (TABC) algorithm is proposed for scheduling and for re-scheduling when new job(s) are inserted. The objective is to minimize the makespan (maximum completion time). A new rule is proposed to initialize the bee colony population. An ensemble local search is proposed to improve algorithm performance. Three re-scheduling strategies are proposed and compared. Extensive computational experiments are carried out using fifteen well-known benchmark instances together with eight instances from remanufacturing. For scheduling performance, TABC is compared to five existing algorithms. For re-scheduling performance, TABC is compared to six simple heuristics and the proposed hybrid heuristics. The results and comparisons show that TABC is effective in both the scheduling stage and the rescheduling stage.

Journal ArticleDOI
TL;DR: 0-1 linear programming formulations exploiting the stated hierarchy are proposed and used to derive a formal proof that the joint OR planning and scheduling problem is NP-hard.

Book
07 Sep 2015
TL;DR: This work considers the problem of input control, subject to a specified product mix, and priority sequencing in a two-station multiclass queueing network with general service time distributions and a general routing structure, and obtains an effective scheduling rule.
Abstract: Motivated by a factory scheduling problem, we consider the problem of input control, subject to a specified product mix, and priority sequencing in a two-station multiclass queueing network with general service time distributions and a general routing structure. The objective is to minimize the long-run expected average number of customers in the system subject to a constraint on the long-run expected average output rate. Under balanced heavy loading conditions, this scheduling problem is approximated by a control problem involving Brownian motion. A reformulation of this Brownian control problem was solved exactly in 1990 by L. M. Wein. In the present paper, this solution is interpreted in terms of the queueing network model in order to obtain an effective scheduling rule. The resulting sequencing policy dynamically prioritizes customers according to reduced costs calculated from a linear program. The input rule is a workload regulating input policy, where a customer is injected into the system whenever the expected total amount of work in the system for the two stations falls within a prescribed region. An example is presented that illustrates the procedure and demonstrates its effectiveness.

Journal ArticleDOI
TL;DR: The results show that the proposed method can coordinate the scheduling of the three types of handling equipment and can realize the optimal trade-off between time-saving and energy-saving.
Abstract: An energy-saving objective is considered in container terminal operations. A MIP model for the integrated scheduling of cranes and trucks is developed. A simulation optimization method is proposed to solve the NP-hard problem. Experimental results show that the proposed method can realize port energy-saving. Container terminals mainly include three types of handling equipment, i.e., quay cranes (QCs), internal trucks (ITs) and yard cranes (YCs). Due to the high cost of handling equipment, container terminals can hardly purchase additional handling equipment. Therefore, the reasonable scheduling of this handling equipment, especially the coordinated scheduling of the three types of handling equipment, plays an important role in the service level and energy-saving of a container terminal. This paper addresses the problem of integrated QC scheduling, IT scheduling and YC scheduling. First, this problem is formulated as a mixed integer programming (MIP) model, where the objective is to minimize the total departure delay of all vessels and the total transportation energy consumption of all tasks. Furthermore, an integrated simulation-based optimization method is developed for solving the problem, where the simulation is designed for evaluation and the optimization algorithm is designed for searching the solution space. The optimization algorithm integrates a genetic algorithm (GA) and a particle swarm optimization (PSO) algorithm, where the GA is used for global search and the PSO is used for local search. Finally, numerical experiments are conducted to verify the effectiveness of the proposed method. The results show that the proposed method can coordinate the scheduling of the three types of handling equipment and can realize the optimal trade-off between time-saving and energy-saving.

Journal ArticleDOI
TL;DR: In this article, the authors integrate wind power with hydrothermal scheduling to establish a multi-objective economic emission hydro-thermal-wind scheduling (MO-HTWS) model that accounts for the cost of wind uncertainty.

Journal ArticleDOI
TL;DR: An efficient cloud workload management framework in which cloud workloads have been identified, analyzed and clustered through K-means on the basis of weights assigned and their QoS requirements is presented.
Abstract: Cloud computing harmonizes and delivers the ability to share resources across different geographical sites. Cloud resource scheduling is a tedious task because of the problem of finding the best resource-workload match. The dynamic nature of resources can be managed efficiently with the help of cloud workloads. Until cloud workloads are treated as a central capability, resources cannot be utilized effectively. In the literature, very few efficient resource scheduling policies for energy-, cost- and time-constrained cloud workloads are reported. This paper presents an efficient cloud workload management framework in which cloud workloads are identified, analyzed and clustered through K-means on the basis of assigned weights and their QoS requirements. Scheduling is then performed using different scheduling policies and their corresponding algorithms. The performance of the proposed algorithms has been evaluated against existing scheduling policies through the CloudSim toolkit. The experimental results show that the proposed framework gives better results in terms of energy consumption, execution cost and execution time of different cloud workloads as compared to existing algorithms.
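A hedged sketch of the K-means clustering step described above, with hypothetical weighted QoS features per workload:

import numpy as np
from sklearn.cluster import KMeans

# Each workload is represented by weighted QoS features (features and weights
# here are hypothetical) and grouped with K-means; a scheduling policy can then
# be applied per cluster.
# Columns: [cpu demand, deadline urgency, cost sensitivity], already weighted.
workloads = np.array([
    [0.9, 0.8, 0.1],   # compute-heavy, urgent
    [0.8, 0.9, 0.2],
    [0.2, 0.3, 0.9],   # cost-sensitive batch
    [0.1, 0.2, 0.8],
    [0.5, 0.5, 0.5],   # mixed
])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(workloads)
print(labels)   # cluster id per workload; each cluster gets its own policy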

Journal ArticleDOI
TL;DR: This method first models the STSP as a hybrid Petri net (PN) and then derives critically important schedulability conditions and adopts efficient heuristics to solve subproblems with continuous variables and discrete variables.
Abstract: To effectively operate a refinery and make it competitive, efficient short-term scheduling techniques that utilize commercial software tools for practical applications need to be developed. However, cumbersome details make it difficult to solve the short-term scheduling problem (STSP) of crude-oil operations, and mathematical programming models fail to meet the industrial needs. This article proposes an innovative control-theoretic and formal model-based method to tackle this long-standing issue. This method first models the STSP as a hybrid Petri net (PN) and then derives critically important schedulability conditions. The conditions are used to decompose a complex problem into several tractable subproblems. In each subproblem, there are either continuous variables or discrete variables. For subproblems with continuous variables, this work proposes a linear programming-based method to solve them; while, for subproblems with discrete variables, this work adopts efficient heuristics. Consequently, the STSP is efficiently resolved, and the application of the proposed method is well illustrated via industrial case studies.

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed algorithm can keep the load balance in a dynamic environment and outperform other approaches.
Abstract: Virtual machine (VM) scheduling with load balancing in cloud computing aims to assign VMs to suitable servers and balance the resource usage among all of the servers. In an infrastructure-as-a-service framework, there will be dynamic input requests, where the system is in charge of creating VMs without considering what types of tasks run on them. Therefore, scheduling that focuses only on fixed task sets or that requires detailed task information is not suitable for this system. This paper combines ant colony optimization and particle swarm optimization to solve the VM scheduling problem, with the result being known as ant colony optimization with particle swarm (ACOPS). ACOPS uses historical information to predict the workload of new input requests to adapt to dynamic environments without additional task information. ACOPS also rejects requests that cannot be satisfied before scheduling to reduce the computing time of the scheduling procedure. Experimental results indicate that the proposed algorithm can keep the load balance in a dynamic environment and outperform other approaches.
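Two ideas from the abstract, history-based workload prediction and early rejection of unsatisfiable requests, can be sketched as follows (this is not the full ACOPS algorithm; units and thresholds are hypothetical):

from collections import deque

history = deque(maxlen=50)          # recently observed VM loads (hypothetical units)

def predict_load(requested_size):
    # Fall back to the requested size until enough history accumulates.
    if not history:
        return requested_size
    return sum(history) / len(history)

def admit(requested_size, free_capacity_per_server):
    predicted = predict_load(requested_size)
    # Reject early if no single server could host the predicted load,
    # so the metaheuristic search never wastes time on it.
    return any(free >= predicted for free in free_capacity_per_server)

if __name__ == "__main__":
    history.extend([2.0, 3.0, 2.5])
    print(admit(requested_size=4.0, free_capacity_per_server=[1.0, 2.0]))   # False
    print(admit(requested_size=4.0, free_capacity_per_server=[1.0, 3.0]))   # True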

Journal ArticleDOI
TL;DR: A new approach to automatic programming via iterated local search (APRILS) is developed for dynamic job shop scheduling; analysis suggests that the good performance of APRILS comes from the balance between exploration and exploitation in its search mechanism.
Abstract: Dispatching rules have been commonly used in practice for making sequencing and scheduling decisions. Due to specific characteristics of each manufacturing system, there is no universal dispatching rule that can dominate in all situations. Therefore, it is important to design specialized dispatching rules to enhance the scheduling performance for each manufacturing environment. Evolutionary computation approaches such as tree-based genetic programming (TGP) and gene expression programming (GEP) have been proposed to facilitate the design task through automatic design of dispatching rules. However, these methods are still limited by their high computational cost and low exploitation ability. To overcome this problem, we develop a new approach to automatic programming via iterated local search (APRILS) for dynamic job shop scheduling. The key idea of APRILS is to perform multiple local searches started with programs modified from the best obtained programs so far. The experiments show that APRILS outperforms TGP and GEP in most simulation scenarios in terms of effectiveness and efficiency. The analysis also shows that programs generated by APRILS are more compact than those obtained by genetic programming. An investigation of the behavior of APRILS suggests that the good performance of APRILS comes from the balance between exploration and exploitation in its search mechanism.
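A generic iterated local search skeleton in the spirit of APRILS; the real system evolves dispatching-rule programs, whereas here the "rule" is simplified to a weight vector scoring jobs, and the quality function is a toy surrogate:

import random

random.seed(1)

def rule_quality(weights, jobs):
    # Toy surrogate for simulated scheduling performance: total flow time
    # when jobs are sequenced by the weighted score (lower is better).
    order = sorted(jobs, key=lambda j: sum(w * f for w, f in zip(weights, j)))
    t, total_flow = 0.0, 0.0
    for proc_time, _due in order:
        t += proc_time
        total_flow += t
    return total_flow

def local_search(weights, jobs, steps=50):
    best = weights[:]
    for _ in range(steps):
        cand = [w + random.gauss(0, 0.1) for w in best]   # small move
        if rule_quality(cand, jobs) < rule_quality(best, jobs):
            best = cand
    return best

def iterated_local_search(jobs, restarts=10):
    best = local_search([random.random(), random.random()], jobs)
    for _ in range(restarts):
        perturbed = [w + random.gauss(0, 0.5) for w in best]   # kick, then descend
        cand = local_search(perturbed, jobs)
        if rule_quality(cand, jobs) < rule_quality(best, jobs):
            best = cand
    return best

if __name__ == "__main__":
    jobs = [(random.uniform(1, 10), random.uniform(5, 30)) for _ in range(20)]
    best_rule = iterated_local_search(jobs)
    print(best_rule, rule_quality(best_rule, jobs))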

Journal ArticleDOI
TL;DR: The results show that the proposed agent-based local search genetic algorithm improves the efficiency.

Journal ArticleDOI
TL;DR: This paper focuses on task scheduling using a multi-objective nested Particle Swarm Optimization (TSPSO) to optimize energy and processing time, and finds that the proposed algorithm provides well-balanced results across the multiple objectives.

Journal ArticleDOI
Rongqing Zhang1, Xiang Cheng1, Liuqing Yang, Xia Shen1, Bingli Jiao1 
TL;DR: A novel centralized time-division multiple access (TDMA)-based scheduling protocol for practical vehicular networks based on a new weight-factor-based scheduler that can significantly improve the network throughput and can be easily incorporated into practical Vehicular networks.
Abstract: In this paper, we propose a novel centralized time-division multiple access (TDMA)-based scheduling protocol for practical vehicular networks based on a new weight-factor-based scheduler. A roadside unit (RSU), as a centralized controller, collects the channel state information and the individual information of the communication links within its communication coverage, and it calculates their respective scheduling weight factors, based on which scheduling decisions are made by the RSU. Our proposed scheduling weight factor mainly consists of three parts, i.e., the channel quality factor, the speed factor, and the access category factor. In addition, a resource-reusing mode among multiple vehicle-to-vehicle (V2V) links is permitted if the distances between every two central vehicles of these V2V links are larger than a predefined interference interval. Compared with the existing medium-access-control protocols in vehicular networks, the proposed centralized TDMA-based scheduling protocol can significantly improve the network throughput and can be easily incorporated into practical vehicular networks.
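A hedged sketch of the weight-factor idea (the paper's exact formula may differ; the category weights, speeds, and channel qualities below are hypothetical):

# Each link gets a scheduling weight combining channel quality, vehicle speed,
# and access category; the RSU then grants slots in descending weight order.
CATEGORY_WEIGHT = {"safety": 2.0, "service": 1.0}   # hypothetical priorities

def weight_factor(channel_quality, speed_kmh, access_category, max_speed=130.0):
    speed_factor = speed_kmh / max_speed      # faster vehicles leave coverage sooner
    return channel_quality * (1 + speed_factor) * CATEGORY_WEIGHT[access_category]

if __name__ == "__main__":
    links = {
        "link_a": weight_factor(0.9, 60, "service"),
        "link_b": weight_factor(0.6, 120, "safety"),
        "link_c": weight_factor(0.7, 30, "service"),
    }
    schedule = sorted(links, key=links.get, reverse=True)
    print(schedule)   # order in which the RSU grants slots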

Journal ArticleDOI
TL;DR: This paper designs a novel task scheduling scheme based on reinforcement learning and queuing theory to optimize task scheduling under resource constraints, and state aggregation techniques are employed to accelerate the learning process.
Abstract: Task scheduling is a necessary prerequisite for performance optimization and resource management in cloud computing systems. Focusing on an accurately scaled cloud computing environment and efficient task scheduling under resource constraints, we introduce a fine-grained cloud computing system model and an optimized task scheduling scheme in this paper. The system model is composed of clearly defined, separate submodels, including a task scheduling submodel, a task execution submodel and a task transmission submodel, so that they can be accurately analyzed in the order in which user requests are processed. Moreover, the submodels are scalable enough to capture the flexibility of the cloud computing paradigm. By analyzing the submodels, where the analysis is repeated to obtain sufficient accuracy, we design a novel task scheduling scheme based on reinforcement learning and queuing theory to optimize task scheduling under resource constraints, and state aggregation techniques are employed to accelerate the learning process. Our results, on the one hand, demonstrate the efficiency of the task scheduling scheme and, on the other hand, reveal the relationship between the arrival rate, the service rate, the number of VMs and the buffer size.
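A hedged sketch of Q-learning over an aggregated queue-length state, which is the mechanism the abstract names for accelerating learning (not the paper's exact scheme; dynamics and rewards are toy values):

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = [0, 1]                         # e.g. dispatch to VM pool 0 or 1
q_table = defaultdict(float)

def aggregate(queue_length):
    # Bucket the raw state so many queue lengths share one table entry,
    # which is what speeds up learning.
    return min(queue_length // 5, 4)     # buckets: 0-4, 5-9, ..., 20+

def choose_action(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                         - q_table[(state, action)])

if __name__ == "__main__":
    random.seed(0)
    queue = 12
    for _ in range(1000):
        state = aggregate(queue)
        action = choose_action(state)
        # Toy dynamics: action 1 drains the queue faster; reward penalizes waiting.
        queue = max(0, queue + random.randint(0, 3) - (2 if action else 1))
        update(state, action, -queue, aggregate(queue))
    print({k: round(v, 2) for k, v in q_table.items()})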

Journal ArticleDOI
TL;DR: This paper introduces the immovable dataset concept, which constrains the movement of certain datasets due to security and cost considerations, and proposes a new scheduling model in the context of Cloud systems that achieves an economical distribution of tasks among the available CSPs (Cloud Service Providers) in the market.

Journal ArticleDOI
TL;DR: In this article, a hybrid genetic algorithm is used for integrated scheduling, dispatching, and conflict-free routing of jobs and AGVs in an FMS environment.
Abstract: The paper presents an algorithm for integrated scheduling, dispatching, and conflict-free routing of jobs and AGVs in an FMS environment using a hybrid genetic algorithm. The algorithm generates an integrated schedule and detailed routing paths while optimizing makespan, AGV travel time, and penalty cost due to job tardiness and delay as a result of conflict avoidance. The multi-objective fitness function uses an adaptive weight approach to assign weights to each objective in every generation based on objective improvement performance. A fuzzy expert system is used to control the genetic operators using the overall population performance improvements of the last two generations. Computational experiments were conducted on the developed algorithm, coded in Matlab, to test its effectiveness. Integrated scheduling of jobs in the FMS, in synchrony with AGV dispatching, scheduling, and routing, proved to ensure the feasibility and effectiveness of all the solutions of the integrated constituent elements.
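A hedged sketch of the adaptive-weight fitness idea, following the common adaptive-weight approach for multi-objective GAs (the paper's exact weighting may differ; objective values are hypothetical):

# Each generation, every objective is rescaled by its current population range,
# so no single objective dominates the fitness.
def adaptive_weight_fitness(population_objectives):
    # population_objectives: list of (makespan, travel_time, penalty) tuples per
    # individual, all to be minimized. Values are hypothetical.
    num_obj = len(population_objectives[0])
    mins = [min(ind[k] for ind in population_objectives) for k in range(num_obj)]
    maxs = [max(ind[k] for ind in population_objectives) for k in range(num_obj)]
    fitness = []
    for ind in population_objectives:
        score = 0.0
        for k in range(num_obj):
            span = maxs[k] - mins[k]
            # Closer to this generation's best value on each objective -> higher score.
            score += (maxs[k] - ind[k]) / span if span > 0 else 1.0
        fitness.append(score)
    return fitness

if __name__ == "__main__":
    pop = [(120, 30, 5), (100, 45, 8), (110, 35, 2)]
    print(adaptive_weight_fitness(pop))   # third individual scores highest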

Journal ArticleDOI
TL;DR: This work considers a canonical model of dynamic advance scheduling with two patient classes: an urgent demand class which must be served on the day of arrival, and a regular demand class, which can be served at a future date.
Abstract: The dynamic assignment of patients to exam days in order to manage daily variations in demand and capacity is a long-standing open research area in appointment scheduling. In particular, the dynamic assignment of advance appointments has been considered to be especially challenging because of its high dimensionality. We consider a canonical model of dynamic advance scheduling with two patient classes: an urgent demand class, which must be served on the day of arrival, and a regular demand class, which can be served at a future date. Patients take the earliest appointments offered and do not differentiate among providers. We derive a surprising characterization of an optimal policy and an algorithm to compute the policy exactly and efficiently. These are, to our knowledge, the first analytical results for the dynamic advance assignment of patients to exam days. We introduce the property of successive refinability, which allows advance schedules to be easily computable and under which there is no cost to the system to making advance commitments to patients. We allow multiple types of capacity to be considered and both demand and capacity to be nonstationary and stochastic.