Author

James W. Layland

Bio: James W. Layland is an academic researcher from the California Institute of Technology. The author has contributed to research on the topics of Dynamic priority scheduling and Scheduling (computing). The author has an h-index of 2 and has co-authored 3 publications receiving 12,156 citations.

Papers
Journal ArticleDOI
TL;DR: The problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service, and it is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization.
Abstract: The problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service. It is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization which may be as low as 70 percent for large task sets. It is also shown that full processor utilization can be achieved by dynamically assigning priorities on the basis of their current deadlines. A combination of these two scheduling techniques is also discussed.

7,067 citations
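
The paper's two headline results reduce to simple utilization tests. Below is a minimal Python sketch (task values are illustrative, not from the paper): the rate-monotonic least upper bound n(2^(1/n) - 1), which falls toward ln 2 ≈ 0.693 for large n (the "as low as 70 percent"), and the deadline-driven (EDF) condition that utilization up to 1 is achievable.

```python
def rm_utilization_bound(n: int) -> float:
    """Liu-Layland least upper bound for rate-monotonic scheduling of n
    periodic tasks: n * (2^(1/n) - 1); tends to ln 2 ~= 0.693 as n grows."""
    return n * (2 ** (1.0 / n) - 1)

def rm_schedulable(tasks) -> bool:
    """Sufficient (not necessary) test: schedulable under fixed
    rate-monotonic priorities if utilization is within the bound."""
    return sum(c / t for c, t in tasks) <= rm_utilization_bound(len(tasks))

def edf_schedulable(tasks) -> bool:
    """Deadline-driven scheduling achieves full utilization:
    schedulable iff total utilization is at most 1."""
    return sum(c / t for c, t in tasks) <= 1.0

# Three tasks as (computation time C, period T) pairs; utilization ~0.883.
tasks = [(1, 4), (2, 6), (3, 10)]
print(round(rm_utilization_bound(3), 4))  # 0.7798
print(rm_schedulable(tasks))              # False: 0.883 exceeds the bound
print(edf_schedulable(tasks))             # True: 0.883 <= 1
```

Note that the rate-monotonic test is only sufficient: a set that fails the bound may still be schedulable, which is exactly what the exact characterization cited further down this page addresses.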

Book
03 Jan 1989
TL;DR: In this paper, the problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service, and it is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization which may be as low as 70 percent for large task sets.

5,397 citations

Reference EntryDOI
15 Jan 2003
TL;DR: The Deep Space Network (DSN), managed by the California Institute of Technology's Jet Propulsion laboratory (JPL), has provided vital communications and navigation services for NASA deep space exploration missions for more than 40 years.
Abstract: The Deep Space Network (DSN), managed by the California Institute of Technology's Jet Propulsion Laboratory (JPL), has provided vital communications and navigation services for NASA deep space exploration missions for more than 40 years. The remarkable technical achievements of the planetary exploration program executed by NASA would not have been possible without the extensive and sophisticated communications and data handling systems that comprise the Deep Space Network. The principal facilities of the DSN are three major ground station complexes, one in the United States (Goldstone, California), one near Madrid, Spain, and one near Canberra, Australia. Each of the complexes has several tracking antennae; the largest has a diameter of 70 meters, and smaller ones have diameters of 11–34 meters. In addition to antennae, the DSN also has a complex of computers and signal processing capabilities that permit very thorough and sophisticated analysis of signals sent back from distant spacecraft. In addition to receiving data from very distant spacecraft, the facilities of the network are also used to control the spacecraft. The network and the associated spacecraft are designed so that if a failure of a spacecraft should occur, there are often means available to effect a recovery. The continuing improvements in the technology of the network have greatly increased the useful lives and capabilities of the spacecraft that use the network. No history of the DSN would be complete without full appreciation of the contribution made by advanced technology to the successful development of the Network. The wellspring of new and innovative ideas for increasing the existing capability of the Network, improving reliability, operability, and cost-effectiveness, and for enabling recovery from potential mission-threatening situations has resided, from the very beginning of the Network's history, in a strong program of advanced technology, research, and development.
Keywords: Deep Space Network; antennae; forward command; data link; return telemetry; antennae array; radio-metric techniques; Goldstone Solar System Radar; telecommunications performance

2 citations


Cited by
Journal ArticleDOI
TL;DR: Two protocols from the class of priority inheritance protocols are investigated, the basic priority inheritance protocol and the priority ceiling protocol, both of which solve the uncontrolled priority inversion problem.
Abstract: An investigation is conducted of two protocols belonging to the class of priority inheritance protocols, called the basic priority inheritance protocol and the priority ceiling protocol. Both protocols solve the uncontrolled priority inversion problem. The priority ceiling protocol solves this problem particularly well; it reduces the worst-case task-blocking time to at most the duration of execution of a single critical section of a lower-priority task. This protocol also prevents the formation of deadlocks. Sufficient conditions under which a set of periodic tasks using this protocol may be scheduled are derived.

2,443 citations
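
The sufficient condition derived in the paper extends the Liu and Layland bound with a blocking term: with tasks indexed in rate-monotonic order, task i is schedulable if C1/T1 + ... + Ci/Ti + Bi/Ti <= i(2^(1/i) - 1), where Bi is the worst-case blocking time, under the priority ceiling protocol at most one critical section of a lower-priority task. A minimal Python sketch with illustrative values:

```python
def pcp_schedulable(tasks) -> bool:
    """Sufficient test with blocking. tasks: (C, T, B) tuples sorted by
    increasing period, B being the worst-case blocking time; checks
    C1/T1 + ... + Ci/Ti + Bi/Ti <= i * (2^(1/i) - 1) for every i."""
    for i, (_, t_i, b_i) in enumerate(tasks, start=1):
        demand = sum(c / t for c, t, _ in tasks[:i]) + b_i / t_i
        if demand > i * (2 ** (1.0 / i) - 1):
            return False
    return True

# (C, T, B) triples; the lowest-priority task can never be blocked, so B = 0.
print(pcp_schedulable([(1, 5, 1), (2, 10, 1), (2, 20, 0)]))  # True
```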

Proceedings ArticleDOI
05 Dec 1989
TL;DR: An exact characterization of the ability of the rate monotonic scheduling algorithm to meet the deadlines of a periodic task set is presented, along with a stochastic analysis which gives the probability distribution of the breakdown utilization of randomly generated task sets.
Abstract: An exact characterization of the ability of the rate monotonic scheduling algorithm to meet the deadlines of a periodic task set is presented. In addition, a stochastic analysis which gives the probability distribution of the breakdown utilization of randomly generated task sets is presented. It is shown that as the task set size increases, the task computation times become of little importance, and the breakdown utilization converges to a constant determined by the task periods. For uniformly distributed tasks, a breakdown utilization of 88% is a reasonable characterization. A case is shown in which the average-case breakdown utilization reaches the worst-case lower bound of C.L. Liu and J.W. Layland (1973).

1,582 citations
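
The exact characterization can be phrased as a time-demand test: a task meets its deadline if and only if the cumulative demand of it and all higher-priority tasks fits within some scheduling point up to its period. A minimal Python sketch, with illustrative task values:

```python
import math

def exact_rm_schedulable(tasks) -> bool:
    """Exact (necessary and sufficient) test for rate-monotonic scheduling.
    tasks: (C, T) pairs sorted by increasing period. Task i meets its
    deadline iff the demand W(t) = sum_j ceil(t/Tj)*Cj over tasks 1..i
    fits within t at some scheduling point t <= Ti."""
    for i, (_, t_i) in enumerate(tasks):
        # Scheduling points: multiples of higher-priority periods, plus Ti.
        points = {t_i}
        for _, t_j in tasks[:i]:
            points.update(k * t_j for k in range(1, math.floor(t_i / t_j) + 1))
        if not any(
            sum(math.ceil(t / t_j) * c_j for c_j, t_j in tasks[:i + 1]) <= t
            for t in points
        ):
            return False
    return True

# A set that fails the sufficient Liu-Layland utilization bound (0.883 > 0.78)
# but passes the exact test:
print(exact_rm_schedulable([(1, 4), (2, 6), (3, 10)]))  # True
```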

Proceedings ArticleDOI
23 Oct 1995
TL;DR: This paper proposes a simple model of job scheduling aimed at capturing some key aspects of energy minimization, and gives an off-line algorithm that computes, for any set of jobs, a minimum-energy schedule.
Abstract: The energy usage of computer systems is becoming an important consideration, especially for battery-operated systems. Various methods for reducing energy consumption have been investigated, both at the circuit level and at the operating systems level. In this paper, we propose a simple model of job scheduling aimed at capturing some key aspects of energy minimization. In this model, each job is to be executed between its arrival time and deadline by a single processor with variable speed, under the assumption that energy usage per unit time, P, is a convex function of the processor speed s. We give an off-line algorithm that computes, for any set of jobs, a minimum-energy schedule. We then consider some on-line algorithms and their competitive performance for the power function P(s) = s^p, where p ≥ 2. It is shown that one natural heuristic, called the Average Rate heuristic, uses at most a constant times the minimum energy required. The analysis involves bounding the largest eigenvalue in matrices of a special type.

1,525 citations
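
The Average Rate heuristic mentioned in the abstract is easy to state: each job contributes its density, work divided by the length of its arrival-to-deadline window, to the processor speed for the whole window, and active jobs are dispatched by earliest deadline first. A minimal Python sketch of the resulting speed profile, with illustrative jobs:

```python
def avr_speed(jobs, t):
    """Processor speed at time t under the Average Rate heuristic.
    jobs: (arrival a, deadline b, work w) triples; each job active at t
    contributes its density w / (b - a)."""
    return sum(w / (b - a) for a, b, w in jobs if a <= t < b)

# Illustrative jobs: densities 5/10 = 0.5 on [0, 10) and 4/4 = 1.0 on [2, 6).
jobs = [(0, 10, 5), (2, 6, 4)]
for t in (1, 3, 7):
    print(t, avr_speed(jobs, t))  # speed 0.5, then 1.5, then 0.5
```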

Proceedings ArticleDOI
21 Oct 2001
TL;DR: This paper presents a class of novel algorithms that modify the OS's real-time scheduler and task management service to provide significant energy savings while maintaining real- time deadline guarantees, and shows that these RT-DVS algorithms closely approach the theoretical lower bound on energy consumption.
Abstract: In recent years, there has been a rapid and widespread adoption of non-traditional computing platforms, especially mobile and portable computing devices. As applications become increasingly sophisticated and processing power increases, the most serious limitation on these devices is the available battery life. Dynamic Voltage Scaling (DVS) has been a key technique in exploiting the hardware characteristics of processors to reduce energy dissipation by lowering the supply voltage and operating frequency. DVS algorithms are shown to be able to make dramatic energy savings while providing the necessary peak computation power in general-purpose systems. However, for a large class of applications in embedded real-time systems like cellular phones and camcorders, the variable operating frequency interferes with their deadline guarantee mechanisms, and DVS in this context, despite its growing importance, remains largely overlooked and under-developed. To provide real-time guarantees, DVS must consider the deadlines and periodicity of real-time tasks, requiring integration with the real-time scheduler. In this paper, we present a class of novel algorithms, called real-time DVS (RT-DVS), that modify the OS's real-time scheduler and task management service to provide significant energy savings while maintaining real-time deadline guarantees. We show through simulations and a working prototype implementation that these RT-DVS algorithms closely approach the theoretical lower bound on energy consumption, and can easily reduce energy consumption by 20% to 40% in an embedded real-time system.

1,265 citations
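
One way to read the RT-DVS idea is as frequency selection subject to a schedulability test: run at the lowest frequency at which the task set, with computation times inflated by the slowdown, still meets its deadlines. The Python sketch below illustrates this for the simple static EDF case with a discrete set of normalized frequency levels; the levels and task values are assumptions for illustration, not the paper's algorithms.

```python
def static_edf_frequency(tasks, levels):
    """Lowest normalized frequency at which an EDF task set remains
    schedulable. tasks: (C, T) pairs with C measured at full speed; at
    frequency f, computation times inflate to C / f, so the set stays
    schedulable iff sum(C / T) <= f. levels: available frequencies as
    fractions of f_max. Returns None if even full speed fails."""
    u = sum(c / t for c, t in tasks)
    for f in sorted(levels):
        if u <= f:
            return f
    return None

# Illustrative task set with utilization 0.525: the 0.75 level suffices.
print(static_edf_frequency([(1, 8), (2, 10), (1, 5)], [0.5, 0.75, 1.0]))
```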

Journal ArticleDOI
TL;DR: It is shown that the scheduling problem is NP-hard in all but one special case, and the complexity of the optimal fixed-priority scheduling algorithm is discussed.

1,230 citations