
Showing papers on "Scheduling (computing) published in 2002"


Journal ArticleDOI
TL;DR: Two novel scheduling algorithms for a bounded number of heterogeneous processors with an objective to simultaneously meet high performance and fast scheduling time are presented, called the Heterogeneous Earliest-Finish-Time (HEFT) algorithm and the Critical-Path-on-a-Processor (CPOP) algorithm.
Abstract: Efficient application scheduling is critical for achieving high performance in heterogeneous computing environments. The application scheduling problem has been shown to be NP-complete in general cases as well as in several restricted cases. Because of its key importance, this problem has been extensively studied and various algorithms have been proposed in the literature, which are mainly for systems with homogeneous processors. Although there are a few algorithms in the literature for heterogeneous processors, they usually require significantly higher scheduling costs, and they may not deliver good-quality schedules at lower costs. In this paper, we present two novel scheduling algorithms for a bounded number of heterogeneous processors that aim to simultaneously achieve high performance and fast scheduling time: the Heterogeneous Earliest-Finish-Time (HEFT) algorithm and the Critical-Path-on-a-Processor (CPOP) algorithm. The HEFT algorithm selects the task with the highest upward rank value at each step and assigns the selected task to the processor which minimizes its earliest finish time with an insertion-based approach. On the other hand, the CPOP algorithm uses the summation of upward and downward rank values for prioritizing tasks. Another difference is in the processor selection phase, which schedules the critical tasks onto the processor that minimizes the total execution time of the critical tasks. In order to provide a robust and unbiased comparison with the related work, a parametric graph generator was designed to generate weighted directed acyclic graphs with various characteristics.
The comparison study, based on both randomly generated graphs and the graphs of some real applications, shows that our scheduling algorithms significantly surpass previous approaches in terms of both quality and cost of schedules, which are mainly presented with schedule length ratio, speedup, frequency of best results, and average scheduling time metrics.
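The upward-rank prioritization at the heart of HEFT (the length of the longest path from a task to the exit task, using mean computation and communication costs) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the task names, costs, and DAG below are made up.

```python
def upward_rank(tasks, succ, w_mean, c_mean):
    """Compute HEFT-style upward ranks by memoized recursion over a DAG.

    tasks:  iterable of task ids
    succ:   dict task -> list of successor tasks
    w_mean: dict task -> mean computation cost across processors
    c_mean: dict (task, successor) -> mean communication cost
    """
    memo = {}

    def rank(t):
        if t not in memo:
            # rank_u(t) = w_mean(t) + max over successors of (c + rank_u(s));
            # exit tasks (no successors) get rank_u = w_mean via default=0.
            memo[t] = w_mean[t] + max(
                (c_mean[(t, s)] + rank(s) for s in succ.get(t, [])),
                default=0.0,
            )
        return memo[t]

    return {t: rank(t) for t in tasks}
```

Scheduling then proceeds by visiting tasks in decreasing upward rank and placing each on the processor giving the earliest insertion-based finish time.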

2,961 citations


Journal ArticleDOI
TL;DR: This work states that clusters, Grids, and peer‐to‐peer (P2P) networks have emerged as popular paradigms for next generation parallel and distributed computing and introduces a number of resource management and application scheduling challenges in the domain of security, resource and policy heterogeneity, fault tolerance, continuously changing resource conditions, and politics.
Abstract: Clusters, Grids, and peer-to-peer (P2P) networks have emerged as popular paradigms for next generation parallel and distributed computing. They enable aggregation of distributed resources for solving large-scale problems in science, engineering, and commerce. In Grid and P2P computing environments, the resources are usually geographically distributed in multiple administrative domains, managed and owned by different organizations with different policies, and interconnected by wide-area networks or the Internet. This introduces a number of resource management and application scheduling challenges in the domain of security, resource and policy heterogeneity, fault tolerance, continuously changing resource conditions, and politics. The resource management and scheduling systems for Grid computing need to manage resources and application execution depending on either resource consumers’ or owners’ requirements, and continuously adapt to changes in resource availability. The management of resources and scheduling of applications in such large-scale distributed systems is a complex undertaking. In order to prove the effectiveness of resource brokers and associated scheduling algorithms, their performance needs to be evaluated under different scenarios, such as a varying number of resources and users with different requirements. In a Grid environment, it is hard and even impossible to perform scheduler performance evaluation in a repeatable and controllable manner, as resources and users are distributed across multiple organizations with their own policies. To overcome this limitation, we have developed a Java-based discrete-event Grid simulation toolkit called GridSim. The toolkit supports modeling and simulation of heterogeneous Grid resources (both time- and space-shared), users, and application models. It provides primitives for creation of application tasks, mapping of tasks to resources, and their management.
To demonstrate suitability of the GridSim toolkit, we have simulated a Nimrod-G

1,604 citations


Journal ArticleDOI
TL;DR: A computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments is proposed and some of the economic models in resource trading and scheduling are demonstrated using the Nimrod/G resource broker.
Abstract: The accelerated development in peer-to-peer and Grid computing has positioned them as promising next-generation computing platforms. They enable the creation of virtual enterprises for sharing resources distributed across the world. However, resource management, application development, and usage models in these environments present a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The resource owners of each of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. This framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of services based on supply and demand and their value to the user. They include commodity market, posted price, tender, and auction models. In this paper, we discuss the use of these models for interaction between Grid components to decide resource service value, and the necessary infrastructure to realize each model. In addition to the usual services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. We briefly discuss existing technologies that provide some of these services and show their usage in developing the Nimrod-G grid resource broker.
Furthermore, we demonstrate the effectiveness of some of the economic models in resource trading and scheduling using the Nimrod/G resource broker, with deadline and cost constrained scheduling for two different optimization strategies, on the World-Wide Grid testbed that has resources distributed across five continents.

961 citations


Proceedings ArticleDOI
07 Nov 2002
TL;DR: A cross-layer design framework to the multiple access problem in contention-based wireless ad hoc networks is introduced, limiting multiuser interference to increase single-hop throughput and reducing power consumption to prolong battery life.
Abstract: In this paper we introduce power control as a solution to the multiple access problem in contention-based wireless ad-hoc networks. The motivation for this study is twofold: limiting multi-user interference to increase single-hop throughput, and reducing power consumption to increase battery life. We focus on next neighbor transmissions where nodes are required to send information packets to their respective receivers subject to a constraint on the signal-to-interference-and-noise ratio. The multiple access problem is solved via two alternating phases, namely scheduling and power control. The scheduling algorithm is essential to coordinate the transmissions of independent users in order to eliminate strong interference (e.g. self-interference) that cannot be overcome by power control. On the other hand, power control is executed in a distributed fashion to determine the admissible power vector, if one exists, that can be used by the scheduled users to satisfy their single-hop transmission requirements. This is done for two types of networks, namely TDMA and TDMA/CDMA wireless ad-hoc networks.

704 citations


Patent
05 Nov 2002
TL;DR: In this article, the authors provide techniques to achieve better utilization of the available resources and robust performance for the downlink and uplink in a multiple-access MIMO system.
Abstract: Techniques to achieve better utilization of the available resources and robust performance for the downlink and uplink in a multiple-access MIMO system. Techniques are provided to adaptively process data prior to transmission, based on channel state information, to more closely match the data transmission to the capacity of the channel. Various receiver processing techniques are provided to process a data transmission received via multiple antennas at a receiver unit. Adaptive reuse schemes and power back-off are also provided to operate the cells in the system in a manner to further increase the spectral efficiency of the system (e.g., reduce interference, improve coverage, and attain high throughput). Techniques are provided to efficiently schedule data transmission on the downlink and uplink. The scheduling schemes may be designed to optimize transmissions (e.g., maximize throughput) for single or multiple terminals in a manner to meet various constraints and requirements.

671 citations


Journal ArticleDOI
01 Feb 2002
TL;DR: Two new approaches to solve jointly the assignment and job-shop scheduling problems (with total or partial flexibility) are presented and an evolutionary approach controlled by the assignment model is generated.
Abstract: Traditionally, assignment and scheduling decisions are made separately at different levels of the production management framework. Combining such decisions introduces additional complexity and new problems. We present two new approaches to solve jointly the assignment and job-shop scheduling problems (with total or partial flexibility). The first one is the approach by localization (AL). It makes it possible to solve the problem of resource allocation and build an ideal assignment model (assignment schemata). The second one is an evolutionary approach controlled by the assignment model (generated by the first approach). In such an approach, we apply advanced genetic manipulations in order to enhance the solution quality. We also explain some of the practical and theoretical considerations in the construction of a more robust encoding that will enable us to solve the flexible job-shop problem by applying genetic algorithms (GAs). Two examples are presented to show the efficiency of the two suggested methodologies.

660 citations


Journal ArticleDOI
TL;DR: It is proved that fixed-size window control can achieve fair bandwidth sharing according to any of these criteria, provided scheduling at each link is performed in an appropriate manner.
Abstract: This paper concerns the design of distributed algorithms for sharing network bandwidth resources among contending flows. The classical fairness notion is the so-called max-min fairness. The alternative proportional fairness criterion has recently been introduced by F. Kelly (see Eur. Trans. Telecommun., vol.8, p.33-7, 1997); we introduce a third criterion, which is naturally interpreted in terms of the delays experienced by ongoing transfers. We prove that fixed-size window control can achieve fair bandwidth sharing according to any of these criteria, provided scheduling at each link is performed in an appropriate manner. We then consider a distributed random scheme where each traffic source varies its sending rate randomly, based on binary feedback information from the network. We show how to select the source behavior so as to achieve an equilibrium distribution concentrated around the considered fair rate allocations. This stochastic analysis is then used to assess the asymptotic behavior of deterministic rate adaption procedures.
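The max-min fairness criterion discussed above is commonly computed by progressive filling: raise all flows' rates together, freeze any flow once its demand is met, and split the remaining capacity among the rest. The single-link sketch below is generic background on the fairness notion, not the paper's window-control algorithm; the capacity and demand figures are invented.

```python
def max_min_fair(capacity, demands):
    """Progressive-filling sketch of max-min fair sharing of one link.

    capacity: total link bandwidth
    demands:  dict flow -> demanded bandwidth
    Returns a dict flow -> allocated bandwidth.
    """
    alloc = {}
    remaining = capacity
    active = dict(demands)
    while active:
        share = remaining / len(active)
        # Flows demanding no more than the equal share are fully satisfied.
        satisfied = {f: d for f, d in active.items() if d <= share}
        if not satisfied:
            # Everyone left is bottlenecked: give each the equal share.
            for f in active:
                alloc[f] = share
            return alloc
        for f, d in satisfied.items():
            alloc[f] = d
            remaining -= d
            del active[f]
    return alloc
```

No flow can gain bandwidth under this allocation without taking it from a flow with an equal or smaller rate, which is the defining property of max-min fairness.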

591 citations


Journal ArticleDOI
TL;DR: This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling, and proposes an architectural framework that supports resource trading and quality-of-service-based scheduling, enabling the regulation of supply and demand for resources.
Abstract: Computational Grids, emerging as an infrastructure for next generation computing, enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. The resources in the Grid are heterogeneous and geographically distributed, with varying availability, a variety of usage and cost policies for diverse users at different times, and priorities and goals that vary with time. The management of resources and application scheduling in such a large and distributed environment is a complex task. This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling. It proposes an architectural framework that supports resource trading and quality-of-service-based scheduling. It enables the regulation of supply and demand for resources, provides an incentive for resource owners to participate in the Grid, and motivates users to trade off between the deadline, budget, and the required level of quality of service. The thesis demonstrates the capability of economic-based systems for peer-to-peer distributed computing by developing scheduling strategies and algorithms driven by users' quality-of-service requirements. It demonstrates their effectiveness by performing scheduling experiments on the World-Wide Grid for solving parameter sweep applications.

579 citations


Journal ArticleDOI
TL;DR: This paper obtains an optimal offline schedule for a node operating under a deadline constraint, and shows that this lazy schedule is significantly more energy-efficient compared to a deterministic schedule that guarantees queue stability for the same range of arrival rates.
Abstract: The paper considers the problem of minimizing the energy used to transmit packets over a wireless link via lazy schedules that judiciously vary packet transmission times. The problem is motivated by the following observation. With many channel coding schemes, the energy required to transmit a packet can be significantly reduced by lowering transmission power and code rate, and therefore transmitting the packet over a longer period of time. However, information is often time-critical or delay-sensitive and transmission times cannot be made arbitrarily long. We therefore consider packet transmission schedules that minimize energy subject to a deadline or a delay constraint. Specifically, we obtain an optimal offline schedule for a node operating under a deadline constraint. An inspection of the form of this schedule naturally leads us to an online schedule which is shown, through simulations, to perform closely to the optimal offline schedule. Taking the deadline to infinity, we provide an exact probabilistic analysis of our offline scheduling algorithm. The results of this analysis enable us to devise a lazy online algorithm that varies transmission times according to backlog. We show that this lazy schedule is significantly more energy-efficient compared to a deterministic (fixed transmission time) schedule that guarantees queue stability for the same range of arrival rates.
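The intuition that slower transmission saves energy can be illustrated with a standard convex energy model in which the required power grows exponentially with the transmission rate. The sketch below assumes all packets are available at time 0 with a common deadline, in which case equal per-packet transmission slices are optimal by convexity; the energy function, packet sizes, and deadline are illustrative assumptions, not the paper's exact formulation.

```python
def tx_energy(bits, duration):
    """Shannon-style illustrative model: sending `bits` over `duration`
    seconds needs power (2**(bits/duration) - 1), hence energy
    duration * (2**(bits/duration) - 1).  Convex and decreasing in
    duration for fixed bits."""
    return duration * (2.0 ** (bits / duration) - 1.0)

def lazy_schedule_energy(num_packets, bits_per_packet, deadline):
    """Energy of the 'lazy' offline schedule for packets all available
    at time 0: by convexity of tx_energy, an equal slice of the
    deadline per packet minimizes total energy."""
    slot = deadline / num_packets
    return num_packets * tx_energy(bits_per_packet, slot)
```

Stretching each transmission to fill the deadline can cut energy severalfold compared with a fast fixed-duration schedule, which is the effect the paper's backlog-adaptive online algorithm exploits.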

563 citations


Proceedings ArticleDOI
24 Jul 2002
TL;DR: This work develops a family of algorithms and uses simulation studies to evaluate various combinations of these algorithms to suggest that while it is necessary to consider the impact of replication, it is not always necessary to couple data movement and computation scheduling.
Abstract: In high-energy physics, bioinformatics, and other disciplines, we encounter applications involving numerous, loosely coupled jobs that both access and generate large data sets. So-called Data Grids seek to harness geographically distributed resources for such large-scale data-intensive problems. Yet effective scheduling in such environments is challenging, due to a need to address a variety of metrics and constraints while dealing with multiple, potentially independent sources of jobs and a large number of storage, compute, and network resources. We describe a scheduling framework that addresses these problems. Within this framework, data movement operations may be either tightly bound to job scheduling decisions or, alternatively, performed by a decoupled, asynchronous process on the basis of observed data access patterns and load. We develop a family of algorithms and use simulation studies to evaluate various combinations. Our results suggest that while it is necessary to consider the impact of replication, it is not always necessary to couple data movement and computation scheduling. Instead, these two activities can be addressed separately, thus significantly simplifying the design and implementation.

504 citations


Journal ArticleDOI
TL;DR: This paper proposes a Pareto approach based on the hybridization of fuzzy logic (FL) and evolutionary algorithms (EAs) to solve the flexible job-shop scheduling problem (FJSP).

Journal ArticleDOI
01 Oct 2002
TL;DR: The authors' service-oriented grid computing system called Nimrod-G manages all operations associated with remote execution including resource discovery, trading, scheduling based on economic principles and a user-defined QoS requirement.
Abstract: Computational grids that couple geographically distributed resources such as PCs, workstations, clusters, and scientific instruments, have emerged as a next generation computing platform for solving large-scale problems in science, engineering, and commerce. However, application development, resource management, and scheduling in these environments continue to be a complex undertaking. In this article, we discuss our efforts in developing a resource management system for scheduling computations on resources distributed across the world with varying quality of service (QoS). Our service-oriented grid computing system called Nimrod-G manages all operations associated with remote execution including resource discovery, trading, scheduling based on economic principles and a user-defined QoS requirement. The Nimrod-G resource broker is implemented by leveraging existing technologies such as Globus, and provides new services that are essential for constructing industrial-strength grids. We present the results of experiments using the Nimrod-G resource broker for scheduling parametric computations on the World Wide Grid (WWG) resources that span five continents.

Journal ArticleDOI
TL;DR: It is shown that several key properties, used to design heuristic procedures, do not hold in the blocking and no-wait cases, while some of the most effective ideas used to develop branch and bound algorithms can be easily extended.

Journal ArticleDOI
TL;DR: A predictive-maintenance structure for a gradually deteriorating single-unit system (continuous time/continuous state) is presented, and its adaptability to different possible characteristics of the maintained single-unit system is shown.
Abstract: A predictive-maintenance structure for a gradually deteriorating single-unit system (continuous time/continuous state) is presented in this paper. The proposed decision model enables optimal inspection and replacement decisions in order to balance the cost incurred by failure and unavailability on an infinite horizon. Two maintenance decision variables are considered: the preventive replacement threshold and the inspection schedule based on the system state. In order to assess the performance of the proposed maintenance structure, a mathematical model for the maintained system cost is developed using regenerative and semi-regenerative processes theory. Numerical experiments show that the s-expected maintenance cost rate on an infinite horizon can be minimized by a joint optimization of the replacement threshold and the aperiodic inspection times. The proposed maintenance structure performs better than classical preventive maintenance policies, which can be treated as particular cases. Using the proposed maintenance structure, a well-adapted strategy can automatically be selected for the maintenance decision-maker depending on the characteristics of the wear process and on the different unit costs. Even limit cases can be reached: for example, in the case of expensive inspection and costly preventive replacement, the optimal policy becomes close to a systematic periodic replacement policy. Most of the classical maintenance strategies (periodic inspection/replacement policy, systematic periodic replacement, corrective policy) can be emulated by adopting some specific inspection scheduling rules and replacement thresholds. In a more general way, the proposed maintenance structure shows its adaptability to different possible characteristics of the maintained single-unit system.

Journal ArticleDOI
TL;DR: The proportional model is applied in the differentiation of queueing delays, and appropriate packet scheduling mechanisms are investigated, calling for scheduling mechanisms that can implement the PDD model, when it is feasible to do so.
Abstract: The proportional differentiation model provides the network operator with the 'tuning knobs' for adjusting the per-hop quality-of-service (QoS) ratios between classes, independent of the class loads. This paper applies the proportional model in the differentiation of queueing delays, and investigates appropriate packet scheduling mechanisms. Starting from the proportional delay differentiation (PDD) model, we derive the average queueing delay in each class, show the dynamics of the class delays under the PDD constraints, and state the conditions in which the PDD model is feasible. The feasibility of the model can be determined from the average delays that result with the strict priorities scheduler. We then focus on scheduling mechanisms that can implement the PDD model, when it is feasible to do so. The proportional average delay (PAD) scheduler meets the PDD constraints, when they are feasible, but it exhibits a pathological behavior in short timescales. The waiting time priority (WTP) scheduler, on the other hand, approximates the PDD model closely, even in the short timescales of a few packet departures, but only in heavy load conditions. PAD and WTP serve as motivation for the third scheduler, called hybrid proportional delay (HPD). HPD approximates the PDD model closely, when the model is feasible, independent of the class load distribution. Also, HPD provides predictable delay differentiation even in short timescales.
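The waiting time priority (WTP) discipline mentioned above serves, at each departure, the backlogged class whose head-of-line waiting time, normalized by its delay differentiation parameter, is largest. A minimal sketch with an assumed interface (one head-of-line arrival time per backlogged class; not the paper's code):

```python
def wtp_pick(now, head_arrivals, delta):
    """Waiting-time priority selection.

    now:           current time
    head_arrivals: dict class -> arrival time of its head-of-line packet
                   (only backlogged classes appear)
    delta:         dict class -> delay differentiation parameter;
                   PDD targets average delays proportional to delta.
    Returns the class to serve next.
    """
    best, best_score = None, -1.0
    for cls, arrival in head_arrivals.items():
        score = (now - arrival) / delta[cls]  # normalized waiting time
        if score > best_score:
            best, best_score = cls, score
    return best
```

A class with a smaller δ accumulates normalized waiting time faster, so it is served sooner, pushing the delay ratios toward the configured δ ratios under heavy load.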

Journal ArticleDOI
TL;DR: This article provides a survey of scheduling techniques for several types of wireless networks, including TDMA, CDMA, and multihop packet networks, and some of the challenges in designing such schedulers are first discussed.
Abstract: Scheduling algorithms are important components in the provision of guaranteed quality of service parameters such as delay, delay jitter, packet loss rate, or throughput. The design of scheduling algorithms for mobile communication networks is especially challenging given the highly variable link error rates and capacities, and the changing mobile station connectivity typically encountered in such networks. This article provides a survey of scheduling techniques for several types of wireless networks. Some of the challenges in designing such schedulers are first discussed. Desirable features and classifications of schedulers are then reviewed. This is followed by a discussion of several scheduling algorithms which have been proposed for TDMA, CDMA, and multihop packet networks.

Journal ArticleDOI
TL;DR: A scheduling architecture for real-time control tasks is proposed that uses feedback from execution-time measurements and feedforward from workload changes to adjust the sampling periods of the control tasks so that the combined performance of the controllers is optimized.
Abstract: A scheduling architecture for real-time control tasks is proposed. The scheduler uses feedback from execution-time measurements and feedforward from workload changes to adjust the sampling periods of the control tasks so that the combined performance of the controllers is optimized. The performance of each controller is described by a cost function. Based on the solution to the optimal resource allocation problem, explicit solutions are derived for linear and quadratic approximations of the cost functions. It is shown that a linear rescaling of the nominal sampling frequencies is optimal for both of these approximations. An extensive inverted pendulum example is presented, where the performance obtained with open-loop, feedback, combined feedback and feedforward scheduling, and earliest-deadline first scheduling are compared. The performance under earliest-deadline first scheduling is explained by studying the behavior of periodic tasks under overload conditions. It is shown that the average values of the sampling periods equal the nominal periods, rescaled by the processor utilization.
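The paper's headline result, that a linear rescaling of the nominal sampling frequencies is optimal under linear and quadratic cost approximations, amounts to multiplying every period by U/U_target whenever the measured utilization U exceeds the setpoint. A hedged sketch under that reading (the function name and interface are assumptions):

```python
def rescale_periods(nominal_periods, exec_times, u_target):
    """Feedback-scheduler sketch: measure CPU utilization
    U = sum(c_i / T_i) from execution-time estimates, and if it
    exceeds the setpoint, scale every period by U / u_target so all
    sampling frequencies shrink by the same factor u_target / U."""
    u = sum(c / t for c, t in zip(exec_times, nominal_periods))
    if u <= u_target:
        return list(nominal_periods)  # no overload: keep nominal rates
    scale = u / u_target
    return [t * scale for t in nominal_periods]
```

In the closed loop, `exec_times` would come from the execution-time measurements (feedback) and mode changes would trigger re-evaluation (feedforward).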

Proceedings ArticleDOI
23 Sep 2002
TL;DR: A distributed receiver-oriented multiple access (ROMA) channel access scheduling protocol for ad hoc networks with directional antennas, each of which can form multiple beams and commence several simultaneous communication sessions is proposed.
Abstract: Directional antennas can adaptively select radio signals of interest in specific directions, while filtering out unwanted interference from other directions. Although a couple of medium access protocols based on random access schemes have been proposed for networks with directional antennas, they suffer from a high probability of collisions because of their dependence on omnidirectional mode for the transmission or reception of control packets in order to establish directional links. We propose a distributed receiver-oriented multiple access (ROMA) channel access scheduling protocol for ad hoc networks with directional antennas, each of which can form multiple beams and commence several simultaneous communication sessions. Unlike random access schemes that use on-demand handshakes or signal scanning to resolve communication targets, ROMA determines a number of links for activation in every time slot using only two-hop topology information. It is shown that significant improvements in network throughput and delay can be achieved by exploiting the multi-beam forming capability of directional antennas in both transmission and reception. The performance of ROMA is studied by simulations, and compared with a well-known static scheduling scheme that is based on global topology information.

Journal ArticleDOI
TL;DR: This work explores the data reuse properties of full-search block-matching for motion estimation (ME) and associated architecture designs, as well as memory bandwidth requirements, and a seven-type classification system is developed that can accommodate most published ME architectures.
Abstract: This work explores the data reuse properties of full-search block-matching (FSBM) for motion estimation (ME) and associated architecture designs, as well as memory bandwidth requirements. Memory bandwidth in high-quality video is a major bottleneck to designing an implementable architecture because of large frame size and search range. First, the memory bandwidth in ME is analyzed and the problem is solved by exploring data reuse. Four levels are defined according to the degree of data reuse for previous frame access. With the highest level of data reuse, one-access for frame pixels is achieved. A scheduling strategy is also applied to data reuse of the ME architecture designs and a seven-type classification system is developed that can accommodate most published ME architectures. This classification can simplify the work of designers in designing more cost-effective ME architectures, while simultaneously minimizing memory bandwidth. Finally, a FSBM architecture suitable for high quality HDTV video with a minimum memory bandwidth feature is proposed. Our architecture is able to achieve 100% hardware efficiency while preserving minimum I/O pin count, low local memory size, and bandwidth.

Proceedings ArticleDOI
07 Nov 2002
TL;DR: A fair scheduling policy is proposed that assigns dynamic weights to the flows, with the weights depending on the congestion in the neighborhood, and schedules the flows which constitute a maximum weighted matching.
Abstract: We consider scheduling policies for maxmin fair allocation of bandwidth in wireless ad hoc networks. We formalize the maxmin fair objective under wireless scheduling constraints. We propose a fair scheduling policy which assigns dynamic weights to the flows, such that the weights depend on the congestion in the neighborhood, and schedules the flows which constitute a maximum weighted matching. It is possible to prove analytically that this policy attains both short term and long term fairness. We consider more generalized fairness notions, and suggest mechanisms to attain these objectives.

Journal ArticleDOI
TL;DR: The author considers a hidden Markov model where a single Markov chain is observed by a number of noisy sensors and designs algorithms for choosing dynamically at each time instant which sensor to select to provide the next measurement.
Abstract: The author considers a hidden Markov model (HMM) where a single Markov chain is observed by a number of noisy sensors. Due to computational or communication constraints, at each time instant, one can select only one of the noisy sensors. The sensor scheduling problem involves designing algorithms for choosing dynamically at each time instant which sensor to select to provide the next measurement. Each measurement has an associated measurement cost. The problem is to select an optimal measurement scheduling policy to minimize a cost function of estimation errors and measurement costs. The optimal measurement policy is solved via stochastic dynamic programming. Sensor management issues and suboptimal scheduling algorithms are also presented. A numerical example that deals with the aircraft identification problem is presented.

Proceedings ArticleDOI
10 Dec 2002
TL;DR: This work defines the basic concepts of network scheduling in NCSs, formulates the optimal scheduling problem under both rate-monotonic-schedulability constraints and NCS-stability constraints, and gives an example of how such optimization is carried out.
Abstract: Feedback control systems wherein the control loops are closed through a real-time network are called networked control systems (NCSs). The insertion of the communication network in the feedback control loop makes the analysis and design of an NCS complex. Driving our research effort into NCSs is the point of view that the design of both the communication protocols and the interacting controlled system should not be treated as separate. In the co-design approach we propose, network issues such as bandwidth, quantization, survivability, reliability and message delay will be considered simultaneously with controlled system issues such as stability, performance, fault tolerance and adaptability. Thus, we study network scheduling when a set of NCSs are connected to the network and arbitrating for network bandwidth. We first define the basic concepts of network scheduling in NCSs. Then, we apply the rate monotonic scheduling algorithm to schedule a set of NCSs. We also formulate the optimal scheduling problem under both rate-monotonic-schedulability constraints and NCS-stability constraints, and give an example of how such optimization is carried out. Next, the assumptions of ideal transmission are relaxed: we study the above network scheduling problem with network-induced delay, packet dropouts, and multiple-packet transmissions taken into account.
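The rate-monotonic-schedulability constraint used in this co-design can be checked with the classic Liu-Layland sufficient utilization bound, U <= n(2^(1/n) - 1). The sketch below covers only that schedulability side; the paper's NCS-stability constraints and network-induced delays are not modeled here.

```python
def rm_schedulable_ll(tasks):
    """Sufficient (not necessary) rate-monotonic schedulability test
    via the Liu-Layland utilization bound U <= n * (2**(1/n) - 1).

    tasks: list of (execution_time, period) pairs; in the NCS setting
    each 'task' is one control loop's message transmission.
    """
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    return u <= n * (2.0 ** (1.0 / n) - 1.0)
```

For n = 2 the bound is about 0.828; a task set failing this test may still be schedulable, which is why exact response-time analysis is often used instead.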

Journal ArticleDOI
TL;DR: This work presents a novel scheduling framework in which tasks are treated as springs with given elastic coefficients to better conform to the actual load conditions, and under this model, periodic tasks can intentionally change their execution rate to provide different quality of service.
Abstract: An increasing number of real-time applications related to multimedia and adaptive control systems require greater flexibility than classical real-time theory usually permits. We present a novel scheduling framework in which tasks are treated as springs with given elastic coefficients to better conform to the actual load conditions. Under this model, periodic tasks can intentionally change their execution rate to provide different quality of service and the other tasks can automatically adapt their periods to keep the system underloaded. The proposed model can also be used to handle overload conditions in a more flexible way and to provide a simple and efficient mechanism for controlling a system's performance as a function of the current load.
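
A minimal sketch of the elastic idea: when the task set is overloaded, each task's utilization is compressed in proportion to its elastic coefficient, like springs sharing a compression force. This simplified version ignores minimum-period bounds, which the full model handles iteratively:

```python
def compress(tasks, u_des):
    """One round of elastic period compression (simplified: no bound
    on how far a period may grow).  tasks: list of (C, T, E) --
    computation time, nominal period, elastic coefficient.  Returns
    new periods such that total utilization equals u_des, with each
    task's utilization reduced in proportion to its coefficient E."""
    u0 = sum(c / t for c, t, _ in tasks)
    if u0 <= u_des:
        return [t for _, t, _ in tasks]   # already within the desired load
    e_sum = sum(e for _, _, e in tasks)
    periods = []
    for c, t, e in tasks:
        u_new = c / t - (u0 - u_des) * e / e_sum
        periods.append(c / u_new)          # T = C / U
    return periods
```

Two identical tasks at utilization 0.25 each, compressed to a desired total of 0.4, both end up at utilization 0.2, i.e. their periods stretch from 4 to 5.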

Patent
Peter G. Capek, William Grey, Paul A. Moskowitz, Clifford A. Pickover, Dailun Shi
25 Apr 2002
TL;DR: In this article, the authors present a method for scheduling an event or meeting involving a plurality of persons, determined by optimizing one or more variables; in the preferred embodiment, requests for a meeting are pooled.
Abstract: The present invention is a method for scheduling an event or meeting involving a plurality of persons, where the schedule is determined by optimizing one or more variables. In the preferred embodiment, one or more requests for a meeting are pooled. A selected variable is optimized and an event is scheduled on the optimized variable. As additional meeting requests are pooled which conflict with the initial optimized event, the selected variable is again optimized and the event is dynamically rescheduled based on the optimized variable.
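
The patent leaves the optimized variable open; one concrete instance is maximizing attendance over pooled requests. The sketch below is purely illustrative (the function name, data shape, and choice of variable are assumptions, not the patent's specification):

```python
def best_slot(availability, slots):
    """Pick the slot maximizing attendance -- one example of
    'optimizing a selected variable' over pooled meeting requests.
    availability maps each person to the set of slots they can
    attend; re-running this after new requests arrive gives the
    dynamic rescheduling described in the abstract."""
    def attendance(slot):
        return sum(slot in free for free in availability.values())
    return max(slots, key=attendance)
```

For three people free at hours {9, 10}, {10}, and {10, 11}, slot 10 wins with full attendance.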

Journal ArticleDOI
TL;DR: The Cello disk scheduling framework is demonstrated to be suitable for next generation operating systems since it aligns the service provided with the application requirements, it protects application classes from one another, it is work-conserving and can adapt to changes in workload.
Abstract: In this paper, we present the Cello disk scheduling framework for meeting the diverse service requirements of applications. Cello employs a two-level disk scheduling architecture, consisting of a class-independent scheduler and a set of class-specific schedulers. The two levels of the framework allocate disk bandwidth at two time-scales: the class-independent scheduler governs the coarse-grain allocation of bandwidth to application classes, while the class-specific schedulers control the fine-grain interleaving of requests. The two levels of the architecture separate application-independent mechanisms from application-specific scheduling policies, and thereby facilitate the co-existence of multiple class-specific schedulers. We demonstrate that Cello is suitable for next generation operating systems since: (i) it aligns the service provided with the application requirements, (ii) it protects application classes from one another, (iii) it is work-conserving and can adapt to changes in workload, (iv) it minimizes the seek time and rotational latency overhead incurred during access, and (v) it is computationally efficient.
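
A toy illustration of the two-level split: a class-independent level hands out slots to classes by weighted round-robin, while each class orders its own requests (earliest-deadline for real-time, FIFO for best-effort). The class names, weights, and policies here are illustrative choices, not Cello's exact design:

```python
import heapq
from collections import deque

class TwoLevelScheduler:
    """Toy two-level disk scheduler in the spirit of Cello.  The
    class-independent level picks the next class by weighted credits;
    each class-specific queue orders its own pending requests."""

    def __init__(self, weights):
        self.weights = weights            # e.g. {"realtime": 2, "besteffort": 1}
        self.credits = dict(weights)
        self.rt = []                      # (deadline, request) min-heap
        self.be = deque()                 # FIFO queue

    def submit(self, cls, request, deadline=None):
        if cls == "realtime":
            heapq.heappush(self.rt, (deadline, request))
        else:
            self.be.append(request)

    def next_request(self):
        if all(c <= 0 for c in self.credits.values()):
            self.credits = dict(self.weights)   # refill a new round
        # try classes in order of remaining credit; an empty class
        # forfeits its slot, keeping the scheduler work-conserving
        for cls in sorted(self.credits, key=self.credits.get, reverse=True):
            if self.credits[cls] > 0:
                if cls == "realtime" and self.rt:
                    self.credits[cls] -= 1
                    return heapq.heappop(self.rt)[1]
                if cls == "besteffort" and self.be:
                    self.credits[cls] -= 1
                    return self.be.popleft()
        if self.rt:
            return heapq.heappop(self.rt)[1]
        if self.be:
            return self.be.popleft()
        return None
```

With weights 2:1, two real-time requests drain in deadline order before the best-effort request gets its slot.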

Patent
03 Apr 2002
TL;DR: In this article, a set-top box stores a plurality of advertisements that have been targeted to the subscriber, and the selected advertisements are scheduled for display, i.e., as soon as the next avail (advertisement opportunity) is identified, the next selected advertisement is inserted in the avail.
Abstract: A subscriber selected ad display and scheduling system which allows subscribers to request any one of a plurality of advertisements they wish to view from a library of advertisements stored at the set-top box. Generally, the set-top box stores a plurality of advertisements that have been targeted to the subscriber. Two different embodiments are provided. In one embodiment, the subscriber may view different available advertisements via an advertisement guide, and select one or more advertisements wherein upon selection, the contents of the selected advertisements are immediately displayed at the subscriber display. Alternatively, the subscriber may select one or more advertisements from a list of advertisements with the help of an advertisement guide. In this embodiment, the selected advertisements are scheduled for display, i.e., as soon as the next avail (advertisement opportunity) is identified, the next selected advertisement is inserted in the avail.

Journal ArticleDOI
TL;DR: Simulated annealing (SA), a meta-heuristic, is employed in this study to determine a scheduling policy so as to minimize total tardiness, and shows that the proposed SA method significantly outperforms a neighborhood search method in terms of total tardiness.
Abstract: This paper presents a scheduling problem for unrelated parallel machines with sequence-dependent setup times, using simulated annealing (SA). The problem accounts for allotting work parts of L jobs into M parallel unrelated machines, where a job refers to a lot composed of N items. Some jobs may have different items while every item within each job has an identical processing time with a common due date. Each machine has its own processing times according to the characteristics of the machine as well as job types. Setup times are machine independent but job sequence dependent. SA, a meta-heuristic, is employed in this study to determine a scheduling policy so as to minimize total tardiness. The suggested SA method utilizes six job or item rearranging techniques to generate neighborhood solutions. The experimental analysis shows that the proposed SA method significantly outperforms a neighborhood search method in terms of total tardiness.
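
The core loop can be sketched as follows. The paper uses six neighborhood moves; this simplified version uses a single "relocate a random job" move, and the tiny instance in the usage below is made up for illustration (setup times are sequence-dependent but machine-independent, as in the paper):

```python
import math
import random

def total_tardiness(schedule, proc, setup, due):
    """schedule: one job-index list per machine.  proc[m][j] is the
    processing time of job j on (unrelated) machine m; setup[a][b]
    the setup incurred when b follows a; due[j] the due date."""
    tard = 0.0
    for m, seq in enumerate(schedule):
        t, prev = 0.0, None
        for j in seq:
            if prev is not None:
                t += setup[prev][j]
            t += proc[m][j]
            tard += max(0.0, t - due[j])
            prev = j
    return tard

def anneal(proc, setup, due, t0=10.0, cooling=0.995, iters=5000, seed=0):
    """Simulated annealing with one relocate move: pop a random job
    from a random machine and reinsert it at a random position."""
    rng = random.Random(seed)
    m, n = len(proc), len(due)
    sched = [[] for _ in range(m)]
    for j in range(n):                      # arbitrary initial assignment
        sched[j % m].append(j)
    best = cur = total_tardiness(sched, proc, setup, due)
    temp = t0
    for _ in range(iters):
        a = rng.randrange(m)
        if not sched[a]:
            continue
        pos = rng.randrange(len(sched[a]))
        j = sched[a].pop(pos)
        b = rng.randrange(m)
        sched[b].insert(rng.randrange(len(sched[b]) + 1), j)
        new = total_tardiness(sched, proc, setup, due)
        # Metropolis acceptance: always take improvements, sometimes
        # accept worse solutions to escape local minima
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new
            best = min(best, cur)
        else:                               # reject: undo the move
            sched[b].remove(j)
            sched[a].insert(pos, j)
        temp *= cooling
    return best, sched
```

On a 2-machine, 3-job instance the search converges to the enumerable optimum; real instances with lots of identical items would add the paper's remaining neighborhood moves.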

Proceedings ArticleDOI
08 Oct 2002
TL;DR: An energy-aware scheduling policy for non-real-time operating systems that benefits from event counters is proposed and energy measurements of the target architecture under variable load show the advantage of the proposed approach.
Abstract: Scalability of the core frequency is a common feature of low-power processor architectures. Many heuristics for frequency scaling were proposed in the past to find the best trade-off between energy efficiency and computational performance. With complex applications exhibiting unpredictable behavior these heuristics cannot reliably adjust the operation point of the hardware because they do not know where the energy is spent and why the performance is lost. Embedded hardware monitors in the form of event counters have proven to offer valuable information in the field of performance analysis. We will demonstrate that counter values can also reveal the power-specific characteristics of a thread. In this paper we propose an energy-aware scheduling policy for non-real-time operating systems that benefits from event counters. By exploiting the information from these counters, the scheduler determines the appropriate clock frequency for each individual thread running in a time-sharing environment. A recurrent analysis of the thread-specific energy and performance profile allows an adjustment of the frequency to the behavioral changes of the application. While the clock frequency may vary in a wide range, the application performance should only suffer slightly (e.g. with 10% performance loss compared to the execution at the highest clock speed). Because of the similarity to a car cruise control, we called our scheduling policy Process Cruise Control. This adaptive clock scaling is accomplished by the operating system without any application support. Process Cruise Control has been implemented on the Intel XScale architecture, which offers a variety of frequencies and a set of configurable event counters. Energy measurements of the target architecture under variable load show the advantage of the proposed approach.
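
The intuition behind counter-driven frequency selection can be sketched with a simple memory-boundedness model: the memory-bound fraction of a thread's runtime does not scale with the core clock, so only the CPU-bound fraction stretches at lower frequencies. The frequency table and the single-ratio model below are illustrative assumptions, not the paper's calibrated XScale model:

```python
# Illustrative frequency table in MHz (not the paper's XScale values).
FREQS = [200, 400, 600, 733]

def pick_frequency(instructions, mem_accesses, max_slowdown=0.10):
    """Choose the lowest frequency whose predicted slowdown stays
    within max_slowdown, given two event-counter readings for a
    thread's last timeslice.  mem_accesses / instructions serves as
    a crude memory-boundedness estimate."""
    mem_fraction = min(1.0, mem_accesses / max(1, instructions))
    f_max = FREQS[-1]
    for f in FREQS:
        # predicted runtime relative to running at f_max: the memory
        # fraction is clock-independent, the rest scales with f_max/f
        rel_time = mem_fraction + (1 - mem_fraction) * f_max / f
        if rel_time <= 1.0 + max_slowdown:
            return f
    return f_max
```

Under this model a memory-bound thread (90% memory accesses) can drop to 400 MHz within a 10% slowdown budget, while a purely CPU-bound thread must stay at the top frequency.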

Journal ArticleDOI
TL;DR: In this paper, a general framework for using real-time information to improve scheduling decisions is developed, which allows us to trade off the quality of the revised schedule against the production disturbance which results from changing the planned schedule.

Journal ArticleDOI
TL;DR: The presented scheduling method can adjust the sampling period to be as small as possible, allocate the bandwidth of the network among the three types of data, and exchange the transmission orders of data for sensors and actuators.
Abstract: This paper presents a scheduling method for network-based control systems with three types of data (periodic data, sporadic data, and messages). As a basic parameter for the scheduling method, a maximum allowable delay bound is used, which guarantees stability of network-based control systems and is derived from characteristics of the given plant using the presented theorems. The presented scheduling method can adjust the sampling period to be as small as possible, allocate the bandwidth of the network among the three types of data, and exchange the transmission orders of data for sensors and actuators. In addition, the presented scheduling method guarantees real-time transmission of sporadic and periodic data, and minimum utilization for non-real-time messages. The proposed method is shown to be useful through examples.
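
A toy version of the bandwidth-allocation step: each loop's sampling period must not exceed its maximum allowable delay bound (MADB) for stability, and is then shrunk uniformly until the periodic traffic exactly fills the bandwidth left after reserving shares for sporadic data and messages. The reservation shares and the uniform-scaling rule are simplifying assumptions, not the paper's exact procedure:

```python
def schedule_periods(loops, sporadic_share=0.2, message_share=0.1):
    """loops: list of (tx_time, madb) -- per-sample transmission time
    and maximum allowable delay bound of each control loop.  Returns
    sampling periods no larger than the MADBs, scaled down so the
    periodic traffic uses exactly the remaining bandwidth budget,
    or None if the loops are infeasible even at their MADBs."""
    budget = 1.0 - sporadic_share - message_share
    u_at_madb = sum(tx / madb for tx, madb in loops)
    if u_at_madb > budget:
        return None
    scale = u_at_madb / budget        # < 1: there is room to sample faster
    return [madb * scale for _, madb in loops]
```

For two loops with transmission time 1 and MADBs of 10 and 20, the periodic utilization at the MADBs is 0.15 against a 0.7 budget, so both periods shrink by the same factor until the budget is met.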