Author

Xiaoyun Xu

Bio: Xiaoyun Xu is an academic researcher from Arizona State University. The author has contributed to research in topics: Job scheduler & Scheduling (computing). The author has an h-index of 3, and has co-authored 5 publications receiving 80 citations.

Papers
Journal ArticleDOI
TL;DR: A consistent pattern is discovered in the plot of WTV versus mean waiting time over all possible sequences of a job set, which can be used to evaluate the sacrifice in mean waiting time incurred while pursuing WTV minimization.

47 citations

Journal ArticleDOI
TL;DR: It is proved that the expected WTV can be predicted given characteristics of the jobs, and these findings can be applied to achieve a desired level of WTV for service stability.
Abstract: When a batch of jobs is waiting for service from a machine or resource, it is sometimes desirable to minimise the Waiting Time Variance (WTV) so that all the jobs in the batch have about the same waiting time, a property referred to as service stability. Many factors, including the sum of the jobs' processing times, the probability distribution of job processing times and the scheduling method, may influence the variance of job waiting times. In this paper, we use multivariate exploratory techniques such as Principal Components Analysis (PCA) and Correspondence Analysis (CA) along with other statistical analysis techniques to investigate these factors. We prove that the expected WTV can be predicted given characteristics of the jobs. These findings can be applied to achieve a desired level of WTV for service stability.
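The quantities involved can be illustrated with a small sketch in plain Python (the processing times below are made up): it enumerates every sequence of a job set and computes the mean waiting time and WTV of each, i.e. the same (mean, WTV) pairs that the plot described in the first paper's TL;DR is built from.

```python
from itertools import permutations
from statistics import mean, pvariance

def waiting_times(seq):
    """Waiting time of each job = total processing time of the jobs before it."""
    waits, elapsed = [], 0
    for p in seq:
        waits.append(elapsed)
        elapsed += p
    return waits

jobs = (4, 1, 3, 2)  # made-up processing times
points = [(mean(w), pvariance(w), s)
          for s in set(permutations(jobs))
          for w in [waiting_times(s)]]

min_wtv = min(points, key=lambda t: t[1])   # sequence with the lowest WTV
min_mean = min(points, key=lambda t: t[0])  # SPT order minimizes mean wait
print("lowest WTV:", min_wtv)
print("lowest mean wait:", min_mean)
```

Note that the minimum-WTV sequence is generally not the SPT sequence, which is exactly the mean-versus-variance trade-off the paper studies.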

22 citations

Journal ArticleDOI
01 Sep 2007
TL;DR: It is proved that, given the same job set, any feasible schedule of (Pm//CTV) can be transformed into a feasible solution for (Pm//WTV) with the same job set by applying the polynomial algorithms proposed in this paper.
Abstract: We consider the problem of scheduling independent jobs on identical parallel machines so as to minimize the waiting time variance of the jobs (Pm//WTV). We show that the optimal value of (Pm//WTV) is identical to the optimal value of the problem of minimizing the completion time variance of jobs on identical parallel machines (Pm//CTV). We prove that, given the same job set, any feasible schedule of (Pm//CTV) can be transformed into a feasible solution for (Pm//WTV) with the same job set by applying the polynomial algorithms proposed in this paper. Several other important properties of (Pm//WTV) are also proved. Heuristic algorithms are proposed to solve (Pm//WTV) problems. We present the testing results of these heuristic algorithms, which are applied to problems with both small and large job sets.
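A minimal sketch of the parallel-machine setting, assuming a simple longest-processing-time-first assignment (an illustrative list-scheduling heuristic, not one of the paper's algorithms), with the WTV pooled over all jobs on all machines:

```python
from statistics import pvariance

def assign_lpt(jobs, m):
    """Longest-processing-time-first list scheduling: give each job to the
    currently least-loaded machine (illustrative heuristic only)."""
    machines, loads = [[] for _ in range(m)], [0] * m
    for p in sorted(jobs, reverse=True):
        i = loads.index(min(loads))
        machines[i].append(p)
        loads[i] += p
    return machines

def pooled_wtv(machines):
    """WTV over all jobs, each machine serving its jobs in the listed order."""
    waits = []
    for seq in machines:
        t = 0
        for p in seq:
            waits.append(t)
            t += p
    return pvariance(waits)

sched = assign_lpt([4, 1, 3, 2, 5, 6], m=2)  # made-up processing times
print(sched, round(pooled_wtv(sched), 2))
```

The heuristics in the paper would then re-sequence the jobs on each machine to shrink this pooled variance.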

14 citations

Journal Article
TL;DR: This paper presents QoS protocols, called Instantaneous RSVP (I-RSVP) and Stable Instantaneous Resource reSerVation Protocol (SI-RSVP), which are developed to provide end-to-end delay guarantees for instantaneous jobs; their performance is tested and compared with that of the best-effort service model.
Abstract: QoS (Quality of Service) guarantee is highly desirable for many service-oriented computer and network applications on the Internet. This paper focuses on the timeliness aspect of QoS, especially the end-to-end delay guarantee. The Resource reSerVation Protocol (RSVP) has been proposed, based on the Integrated Services (IntServ) model, to provide the QoS guarantee through bandwidth reservation. It is applicable to jobs with continuous data flows over a period of time, such as those of tele-conferencing, voice over IP, and video and audio streaming applications. Other applications, such as email, generate one-time, instantaneous jobs that cannot be characterized by the flow rate and peak rate required for bandwidth reservation; hence, RSVP is not applicable to instantaneous jobs. This paper presents QoS protocols, called Instantaneous RSVP (I-RSVP) and Stable Instantaneous RSVP (SI-RSVP), which are developed to provide the end-to-end delay guarantee for instantaneous jobs. The performance of I-RSVP and SI-RSVP is tested and compared with that of the best-effort service model using both small- and large-scale network simulations.
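The admission idea behind a per-job delay guarantee can be sketched as follows. This is a toy model with made-up router state, not the published I-RSVP/SI-RSVP specification: each hop books the job into its earliest free slot, so the completion time at the last hop is known, and hence can be guaranteed, at admission time.

```python
def reserve_path(busy_until, job_size, arrival):
    """Toy admission sketch: each router books the job into its earliest
    free slot at or after the moment the job can reach it, so the
    end-to-end completion time is known up front.
    `busy_until[i]` is router i's current reservation horizon (assumed)."""
    t = arrival
    for b in busy_until:
        start = max(t, b)    # wait for the router's reserved slots to clear
        t = start + job_size  # service completes; the job moves to the next hop
    return t

# Three routers reserved until times 2, 0 and 5 (made-up state).
print(reserve_path([2, 0, 5], job_size=1, arrival=0))  # → 6
```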

1 citation


Cited by
Journal ArticleDOI
TL;DR: Several typical covering models and their extensions, ordered from simple to complex, are introduced, including the Location Set Covering Problem (LSCP), Maximal Covering Location Problem (MCLP), Double Standard Model (DSM), Maximum Expected Covering Location Problem (MEXCLP), and Maximum Availability Location Problem (MALP) models.
Abstract: With emergencies being, unfortunately, part of our lives, it is crucial to efficiently plan and allocate emergency response facilities that deliver effective and timely relief to people most in need. Emergency Medical Services (EMS) allocation problems deal with locating EMS facilities among potential sites to provide efficient and effective services over a wide area with spatially distributed demands. It is often problematic due to the intrinsic complexity of these problems. This paper reviews covering models and optimization techniques for emergency response facility location and planning in the literature from the past few decades, while emphasizing recent developments. We introduce several typical covering models and their extensions ordered from simple to complex, including Location Set Covering Problem (LSCP), Maximal Covering Location Problem (MCLP), Double Standard Model (DSM), Maximum Expected Covering Location Problem (MEXCLP), and Maximum Availability Location Problem (MALP) models. In addition, recent developments on hypercube queuing models, dynamic allocation models, gradual covering models, and cooperative covering models are also presented in this paper. The corresponding optimization techniques to solve these models, including heuristic algorithms, simulation, and exact methods, are summarized.
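The flavor of these covering models can be illustrated with a greedy heuristic for the set-covering variant (the site names and coverage sets below are invented for the example; LSCP itself is usually solved exactly as an integer program):

```python
def greedy_set_cover(demand_points, candidate_sites):
    """Greedy heuristic for the Location Set Covering Problem: repeatedly
    open the site that covers the most still-uncovered demand points."""
    uncovered = set(demand_points)
    opened = []
    while uncovered:
        site, covers = max(candidate_sites.items(),
                           key=lambda kv: len(kv[1] & uncovered))
        if not covers & uncovered:
            break  # remaining demand cannot be covered by any site
        opened.append(site)
        uncovered -= covers
    return opened

# Made-up candidate sites and the demand points each would cover.
sites = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
print(greedy_set_cover([1, 2, 3, 4, 5, 6], sites))
```

MCLP, MEXCLP and MALP replace the "cover everything" objective with covering the most demand under a facility budget, expected coverage under server busyness, and coverage availability guarantees, respectively.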

356 citations

Proceedings Article
01 Jan 2000
TL;DR: This work proposes and evaluates several algorithms for supporting advanced reservation of resources in supercomputing scheduling systems and finds that the wait times of applications submitted to the queue increase when reservations are supported, with the size of the increase depending on how reservations are supported.
Abstract: Some computational grid applications have very large resource requirements and need simultaneous access to resources from more than one parallel computer. Current scheduling systems do not provide mechanisms to gain such simultaneous access without the help of human administrators of the computer systems. In this work, we propose and evaluate several algorithms for supporting advanced reservation of resources in supercomputing scheduling systems. These advanced reservations allow users to request resources from scheduling systems at specific times. We find that the wait times of applications submitted to the queue increase when reservations are supported, and that the increase depends on how reservations are supported. Further, we find that the best performance is achieved when we assume that applications can be terminated and restarted, backfilling is performed, and relatively accurate run-time predictions are used.
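The interaction between backfilling and advance reservations can be sketched with a conservative admission check. This is a simplified model, not the paper's exact algorithms; the `job` and `reservation` tuples are assumptions of this sketch:

```python
def can_backfill(now, free_nodes, job, reservation):
    """Conservative backfill check: a queued job may start now only if it
    fits in the free nodes AND either finishes before the advance
    reservation starts or avoids the reserved nodes entirely.
    `job` = (nodes, runtime); `reservation` = (start_time, nodes)."""
    job_nodes, runtime = job
    res_start, res_nodes = reservation
    fits_now = job_nodes <= free_nodes
    no_conflict = (now + runtime <= res_start
                   or job_nodes <= free_nodes - res_nodes)
    return fits_now and no_conflict

# 4 free nodes; a 3-node reservation begins at t=5 (all values made up).
print(can_backfill(now=0, free_nodes=4, job=(2, 3), reservation=(5, 3)))  # finishes by t=5
print(can_backfill(now=0, free_nodes=4, job=(2, 6), reservation=(5, 3)))  # would overrun it
```

The check's reliance on `runtime` is why the paper finds that accurate run-time predictions matter: an underestimate would let a backfilled job run into the reserved window.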

303 citations

Journal ArticleDOI
TL;DR: In this article, a novel simulated annealing (SA) algorithm is introduced with a new concept, called the migration mechanism, and a new operator, called the giant leap, to bolster the competitive performance of SA by striking a compromise between the lengths of neighborhood search structures.
Abstract: This article addresses the problem of scheduling hybrid flowshops with sequence-dependent setup times to minimize makespan and maximum tardiness. To solve such an NP-hard problem, we introduce a novel simulated annealing (SA) algorithm with a new concept, called the "migration mechanism", and a new operator, called the "giant leap", to bolster the competitive performance of SA by striking a compromise between the lengths of neighborhood search structures. We hybridize the SA (HSA) with a simple local search to further equip our algorithm with a strong tool for improving the quality of its final solution. We employ the Taguchi method as an optimization technique to extensively tune the different parameters and operators of our algorithm; Taguchi orthogonal array analysis is specifically used to pick the best parameters for the optimum design process with the least number of experiments. We established a benchmark to compare the performance of our SA with that of other algorithms. Two different objective functions, minimization of makespan and of maximum tardiness, are taken into consideration to evaluate the robustness and effectiveness of the proposed HSA. Furthermore, we explore the effects of an increase in the number of jobs on the performance of our algorithm to make sure it remains effective in terms of both solution quality and robustness. The excellence and strength of our HSA are concluded from all the results acquired in various circumstances.
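A generic simulated-annealing skeleton with one small-step move and one long-range move (in the spirit of, but not identical to, the paper's migration and giant-leap mechanisms) might look like the sketch below. The single-machine makespan objective with sequence-dependent setups and all data are stand-ins for the hybrid-flowshop problem:

```python
import math
import random

def makespan(seq, proc, setup):
    """Toy single-machine objective with sequence-dependent setup times."""
    t, prev = 0, None
    for j in seq:
        if prev is not None:
            t += setup[prev][j]
        t += proc[j]
        prev = j
    return t

def anneal(jobs, proc, setup, t0=10.0, cooling=0.95, steps=300, seed=1):
    """Generic SA: an adjacent swap as the small neighborhood move and a
    random remove-and-reinsert as the long-range ('giant leap'-style) move."""
    rng = random.Random(seed)
    cur, best, temp = list(jobs), list(jobs), t0
    for _ in range(steps):
        cand = cur[:]
        if rng.random() < 0.5:                 # small step: adjacent swap
            i = rng.randrange(len(cand) - 1)
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
        else:                                  # long leap: remove and reinsert
            j = cand.pop(rng.randrange(len(cand)))
            cand.insert(rng.randrange(len(cand) + 1), j)
        delta = makespan(cand, proc, setup) - makespan(cur, proc, setup)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            cur = cand                          # accept (always if improving)
            if makespan(cur, proc, setup) < makespan(best, proc, setup):
                best = cur[:]
        temp *= cooling
    return best

proc = {0: 3, 1: 2, 2: 4}                       # made-up processing times
setup = {0: {1: 5, 2: 1}, 1: {0: 1, 2: 5}, 2: {0: 5, 1: 1}}
best = anneal([0, 1, 2], proc, setup)
print(best, makespan(best, proc, setup))
```

Mixing the two move types is one way to trade off between short and long neighborhood search structures, which is the balance the paper's mechanisms aim at.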

107 citations

Journal ArticleDOI
TL;DR: This work addresses the two-stage assembly scheduling problem, where there are m machines at the first stage and an assembly machine at the second stage, presents a dominance relation, and proposes three heuristics, including the one known to be the best heuristic for the case of zero setup times.

75 citations

Journal ArticleDOI
TL;DR: This research investigates the influence of adhesive dispensing process parameters on the shear strength of 0805 chip capacitors and proposes an innovative parametric design for artificial neural network (ANN) modeling of the multi-quality function problem to determine the optimal process scenarios.
Abstract: Due to increasing environmental consciousness, the European Union has prohibited the use of lead substances in electronics soldering material. 58Bi/42Sn solder, with a melting temperature of only 138 °C, helps achieve a lower process temperature to resolve the board warpage issue. Curing the adhesive simultaneously with solder reflow helps simplify the assembly process and reduces the manufacturing cost. When a low soldering temperature profile is used, the impact on the adhesion performance of the cured adhesive becomes a major concern. This research investigates the influence of adhesive dispensing process parameters on the shear strength of 0805 chip capacitors. Experimental data is analyzed using the principal component analysis (PCA) technique and PCA integrated with the grey relational analysis algorithm. This study also proposes an innovative parametric design for artificial neural network (ANN) modeling for the multi-quality function problem to determine the optimal process scenarios. Results of the confirmation test indicate that the samples prepared using the process parameters identified by ANN are superior to the others. Thus, the optimal process parameters are adhesive dispense location beneath the component body, a placement time of 0 s, a curing temperature of 160 °C and a conveyor speed of 1 m/min. The implementation of the optimal process has improved chip capacitor fall-off from 2.5% to 0.88%.
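Grey relational analysis, one of the techniques named above, can be sketched in a few lines. The reference and alternative response sequences below are made-up normalized values, not the paper's measurements:

```python
def grey_relational_grades(reference, alternatives, rho=0.5):
    """Score each alternative sequence by its closeness to a reference
    (ideal) sequence; rho is the conventional distinguishing coefficient."""
    # absolute deviation of each alternative from the reference
    deltas = [[abs(r - x) for r, x in zip(reference, alt)]
              for alt in alternatives]
    dmin = min(min(d) for d in deltas)
    dmax = max(max(d) for d in deltas)
    grades = []
    for d in deltas:
        # grey relational coefficient per response, averaged into a grade
        coeffs = [(dmin + rho * dmax) / (di + rho * dmax) for di in d]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

ref = [1.0, 1.0, 1.0]                      # ideal normalized responses
alts = [[0.9, 0.8, 1.0], [0.6, 0.7, 0.5]]  # two candidate parameter settings
g = grey_relational_grades(ref, alts)
print(g)  # the first parameter setting scores closer to the ideal
```

Ranking parameter settings by such grades is how a multi-response problem is collapsed into a single comparable score before further modeling (e.g. with an ANN).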

59 citations