
Showing papers on "Paging published in 1974"


Journal ArticleDOI
TL;DR: In this article, the authors analyse the cost of page accessing and propose a method called multiple key hashing which attempts to minimize it; since this approach is not always preferable to inversion, a combined method is also described.
Abstract: The high cost of page accessing implies a need for more careful data organization in a paged memory than is typical of most inverted file and similar approaches to multi-key retrieval. This article analyses that cost and proposes a method called multiple key hashing which attempts to minimize it. Since this approach is not always preferable to inversion, a combined method is described. The exact specification of this combination for a file with given data and traffic characteristics is formulated as a mathematical program. The proposed heuristic solution to this program can often improve on a simple inversion technique by a factor of 2 or 3.

134 citations
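The abstract gives only the idea; the record layout, field widths, and hash function below are illustrative assumptions, not the paper's scheme. A minimal sketch of multiple key hashing: hash each key field to a few bits and concatenate them into a page address, so a partial-match query needs to scan only the pages consistent with its specified fields.

```python
def page_address(record, bits_per_field=(2, 2, 1)):
    """Concatenate a few hash bits from each key field into a page number
    (field widths here are arbitrary illustrative choices)."""
    addr = 0
    for value, bits in zip(record, bits_per_field):
        addr = (addr << bits) | (hash(value) & ((1 << bits) - 1))
    return addr

def candidate_pages(query, bits_per_field=(2, 2, 1)):
    """Pages that may hold records matching a partial-match query;
    None marks an unspecified field, which expands to all bit patterns."""
    pages = [0]
    for value, bits in zip(query, bits_per_field):
        if value is None:
            pages = [(p << bits) | i for p in pages for i in range(1 << bits)]
        else:
            h = hash(value) & ((1 << bits) - 1)
            pages = [(p << bits) | h for p in pages]
    return pages
```

The more fields a query specifies, the fewer pages it must touch, which is the cost the paper's mathematical program trades off against inversion.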


Journal ArticleDOI
TL;DR: It is shown that simple algorithms exist which yield fault probabilities close to optimal with only a modest increase in memory, and performance bounds are obtained which are independent of the page request probabilities.
Abstract: The topic of this paper is a probabilistic analysis of demand paging algorithms for storage hierarchies. Two aspects of algorithm performance are studied under the assumption that the sequence of page requests is statistically independent: the page fault probability for a fixed memory size and the variation of performance with memory. Performance bounds are obtained which are independent of the page request probabilities. It is shown that simple algorithms exist which yield fault probabilities close to optimal with only a modest increase in memory.

79 citations
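As a rough companion to the analysis (the request distribution and memory size below are invented, and this Monte Carlo estimate is not the paper's analytical bound), one can estimate the LRU fault probability under statistically independent page requests and compare it with the lower bound of simply keeping the most probable pages resident:

```python
import random

def lru_fault_rate(probs, mem_size, n=20_000, seed=1):
    """Estimate LRU page-fault probability when page requests are
    statistically independent draws from `probs` (the paper's model)."""
    rng = random.Random(seed)
    pages = range(len(probs))
    memory = []                      # front = most recently used
    faults = 0
    for _ in range(n):
        p = rng.choices(pages, weights=probs)[0]
        if p in memory:
            memory.remove(p)
        else:
            faults += 1
            if len(memory) == mem_size:
                memory.pop()         # evict least recently used
        memory.insert(0, p)
    return faults / n

probs = [0.4, 0.25, 0.15, 0.1, 0.05, 0.05]
# Lower bound for a 3-page memory: keep the 3 most probable pages
# resident and fault only on the rest.
static_bound = sum(sorted(probs)[:3])
```

LRU's estimated rate exceeds the static bound but stays within a modest factor of it, in the spirit of the paper's conclusion that simple algorithms come close to optimal.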


Patent
10 Apr 1974
TL;DR: In this paper, the cache store is selectively cleared of the information from a page whose data is no longer needed by addressing each level of an associative tag directory to the cache store.
Abstract: In a data processing system that uses segmentation and paging to access data information, such as a virtual memory machine, the cache store need not be entirely cleared each time an I/O operation is performed or each time the data in the cache has a possibility of being incorrect. With segmentation and paging, only a portion of the cache store need be cleared when a new page is obtained from the virtual memory. The entire cache store is cleared only when a new segment is indicated by the instruction. The cache store is selectively cleared of the information from the page whose data information is no longer needed by addressing each level of an associative tag directory to the cache store. The columns of each level are compared to the page address and, if a match is signaled, that column of the addressed level is cleared by clearing the flag indicating the full status of the column in the addressed level. Each level of the tag directory is addressed in turn.

51 citations


Journal ArticleDOI
TL;DR: A remarkably simple program reference model is suggested which, for rough approximations, will suffice; it is applicable to such diverse problems as choosing an optimum size for a paging memory, arranging for reproducible memory usage charges, and estimating the amount of core memory sharing.
Abstract: Predicting the performance of a proposed automatically managed multilevel memory system requires a model of the patterns by which programs refer to the information stored in the memory. Some recent experimental measurements on the Multics virtual memory suggest that, for rough approximations, a remarkably simple program reference model will suffice. The simple model combines the effect of the information reference pattern with the effect of the automatic management algorithm to produce a single, composite statement: the mean number of memory references between paging exceptions increases linearly with the size of the paging memory. The resulting model is easy to manipulate, and is applicable to such diverse problems as choosing an optimum size for a paging memory, arranging for reproducible memory usage charges, and estimating the amount of core memory sharing.

48 citations
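The linear model is easy to state in code. A sketch, with an assumed (not measured) program constant c:

```python
# Sketch of the linear model from the abstract: the mean number of memory
# references between paging exceptions grows linearly with the size m of
# the paging memory.  The constant c is an arbitrary assumed value, not a
# Multics measurement.
def mean_headway(m, c=10_000):
    """Mean memory references between page faults at memory size m."""
    return c * m

def fault_rate(m, c=10_000):
    """Faults per memory reference implied by the linear model."""
    return 1.0 / mean_headway(m, c)
```

Under this model, doubling the paging memory halves the fault rate, which is what makes problems such as choosing an optimum memory size easy to manipulate analytically.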


Journal ArticleDOI
L. A. Belady, F. P. Palermo
TL;DR: An algorithm is presented that extracts the sequences of minimum memory capacities (MMCs) from the sequence of page references generated by a program as it is executed in a demand paging environment.
Abstract: An algorithm is presented that extracts the sequence of minimum memory capacities (MMCs) from the sequence of page references generated by a program as it is executed in a demand paging environment...

41 citations


Proceedings ArticleDOI
06 May 1974
TL;DR: The dynamics of double-paging are investigated and it is shown that an increase in the size of the memory of the virtual machine without a corresponding increase in its real memory size can lead to a significant increase in the amount of paging, even for the LRU algorithm.
Abstract: Belady's paging anomaly has illustrated that certain page replacement algorithms can cause more page faults as the size of memory increases. Mattson has shown that there exists a class of algorithms called stack algorithms (such as LRU, least recently used) which cannot cause more page faults as memory size increases. In this paper, we investigate the dynamics of double-paging, i.e., running a paged operating system, e.g., IBM's OS/VS2, under a paged virtual machine monitor, e.g., VM/370. In particular, we show that an increase in the size of the memory of the virtual machine without a corresponding increase in its real memory size can lead to a significant increase in the amount of paging, even for the LRU algorithm.

27 citations
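Belady's anomaly and the stack property mentioned in the abstract are easy to reproduce; the reference string below is the standard textbook example, not one drawn from this paper:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Page faults for FIFO replacement with a given number of frames."""
    mem, order, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(order.popleft())  # evict oldest page
            mem.add(p)
            order.append(p)
    return faults

def lru_faults(refs, frames):
    """Page faults for LRU, a stack algorithm."""
    mem, faults = [], 0              # front = most recently used
    for p in refs:
        if p in mem:
            mem.remove(p)
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop()            # evict least recently used
        mem.insert(0, p)
    return faults

# Classic reference string exhibiting Belady's anomaly under FIFO.
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```

FIFO incurs 9 faults with 3 frames but 10 with 4, while LRU's fault count never rises with more frames; double-paging breaks this guarantee because the virtual machine monitor's replacement decisions are invisible to the guest's LRU.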


Journal ArticleDOI
TL;DR: This paper considers the case of paged memory systems — i.e., systems whose physical and logical address space is partitioned into equal-sized blocks of contiguous addresses — which are used by many computer systems.
Abstract: Dynamic memory management is an important advance in memory allocation, especially in virtual memory systems. In this paper we consider the case of paged memory systems — i.e., systems whose physical and logical address space is partitioned into equal-sized blocks of contiguous addresses. Paged memories have been used by many computer systems. However, the relationships among page fault frequency (the frequency of those instances at which an executing program requires a page of data or instructions not in main memory), efficiency, and space-time product with various replacement algorithms and page sizes are still not sufficiently understood and are of considerable interest.

26 citations


Proceedings ArticleDOI
James L. Elshoff
06 May 1974
TL;DR: The focus of this paper is to analyze programming techniques which will reduce the number of page faults in matrix operations and thereby improve program performance.
Abstract: Although virtual memory systems are supposed to free the programmer from space management problems, the systems do not always succeed. In fact, programmers find that by ignoring the fact that real core is limited, the cost of executing their programs sometimes makes them unusable, not to mention some of the detrimental effects the program has on the throughput of the overall system. This problem seems to be especially prevalent when large matrices of data are involved. The data are usually referenced in a cyclical pattern and when the entire matrix will not fit in core, the number of page faults encountered during execution is maximized. The focus of this paper is to analyze programming techniques which will reduce the number of page faults in matrix operations and thereby improve program performance.

21 citations
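A hedged illustration of the effect (the page size, matrix size, and single-frame fault proxy below are assumptions, not the paper's measurements): scanning a row-major matrix against its storage order touches a new page on almost every few references, while scanning with the storage order reuses each page fully.

```python
def page_transitions(n, order, page_size=1024):
    """Count page changes while scanning an n x n row-major matrix with a
    single page frame -- a crude proxy for demand-paging faults."""
    transitions, last = 0, None
    if order == "row":
        coords = ((i, j) for i in range(n) for j in range(n))
    else:                                     # column-by-column scan
        coords = ((i, j) for j in range(n) for i in range(n))
    for i, j in coords:
        page = (i * n + j) // page_size       # row-major element address
        if page != last:
            transitions += 1
            last = page
    return transitions
```

For a 256 x 256 matrix with 1024-element pages, the row scan crosses 64 page boundaries while the column scan crosses 16384 — the cyclical worst case the abstract describes when the matrix does not fit in core.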


Patent
James DeRosa
12 Dec 1974
TL;DR: In this article, a paging receiver is described having a page memory for automatically storing a page if the user does not reset the receiver within a prescribed time interval after an alert signal is started.
Abstract: A paging receiver is described having page memory for automatically storing a page if the user does not reset the receiver within a prescribed time interval after an alert signal is started. After a page is stored it can be recalled from page memory by operating the reset button.

19 citations


Journal ArticleDOI
TL;DR: This paper presents the decision rules for a system that is operational in customer production sites and spells out in detail how the system is actually working and how the algorithm actually functions, without trying to hide those decisions which the authors recognize require further evaluation.
Abstract: Substantial theoretical work has been done on paging algorithms and the related area of task scheduling. In this article, we describe the paging algorithm currently used by the UNIVAC Virtual Memory Operating System (VMOS) (running on Series 70 systems: 70/3, 70/7, 70/46, 70/61) plus those task scheduling decisions which affect the paging algorithm. Many articles on paging algorithms have approached the subject from a theoretical point of view or else they have evaded the guts of the issue by referring to pages which "are no longer members of the working set" without indicating how they made such a determination. This paper presents the decision rules for a system that is operational in customer production sites. It spells out in detail how the system is actually working and how the algorithm actually functions, without trying to hide those decisions which we recognize require further evaluation. Finally, some of our experiences with the algorithm and its development are also presented.

16 citations


Journal ArticleDOI
TL;DR: Replacement strategies which attempt to maximize useful CPU utilisation and hence throughput by choosing the page to be replaced on the basis of its probability of reference as well as on the rapidity with which the page can be removed from main memory are considered.
Abstract: Commonly used memory management strategies in a paged environment optimize separately the page-fault rate (by proper choice of a replacement algorithm) and the response time of the paging drum (by a shortest-access-time-first discipline with multiple sectors). In this paper we consider replacement strategies which attempt to maximize useful CPU utilisation and hence throughput by choosing the page to be replaced on the basis of its probability of reference as well as on the rapidity with which the page can be removed from main memory, assuming that a fixed-head disk or drum is used as a secondary memory. An analysis of the approach is given using a mathematical model. Analytical results for the gain in CPU utilisation are obtained under mono-programming and numerical examples are presented illustrating the effects of variation of program behaviour and of variants of the replacement strategy.

Journal ArticleDOI
01 Jan 1974
TL;DR: A controller is designed which regulates the system in order to optimize a given performance measure; the control scheme is shown to lead to an improvement in system performance in mean response time and resource utilization, and to adapt the degree of multiprogramming in real time to the characteristics of the users.
Abstract: It is known that the regulation of the degree of multiprogramming is perhaps one of the most important factors determining the overall performance of a virtual memory computer. In this paper we present an approach which differs somewhat from the approaches usually taken to regulate the degree of multiprogramming, which are mainly derived from the working-set principles. We design a controller which will regulate the system in order to optimize a given performance measure. The controller is applied to a system where the critical resource is primary memory, and we are only concerned with systems where ineffective regulation leads to the phenomenon known as thrashing due to extensive paging activity. In the first section, the dynamics of the system we wish to regulate are investigated using an analytical model. The system consists of a set of terminals and of a resource loop (RL: CPU, secondary memory device, file disk) shared by the users. Using classical assumptions about program behavior (e.g., lifetime function), the throughput of the RL is obtained as a function of the degree of multiprogramming n (the number of users sharing the resources at a given instant of time) and of the system parameters. This result provides greater insight into the “plant” we wish to control. The mathematical results are validated and extended with data from simulation experiments using a more detailed model (overheads and non-exponential assumptions). In the next section, a criterion called “dilatation”, based on the utilization of the different resources, is defined. From the analytical and simulation results of the first section, it can be shown that there exists a value n0 of the degree of multiprogramming which maximizes this criterion. The regulation of n to n0 is achieved by controlling the access of the users to the RL. The value of n0 is estimated in real time through continuous estimation of the first two moments of the criterion. Using these estimates, the decision whether to admit a new user to the RL is taken whenever a user leaves a terminal or departs from the RL. Extensive simulation experiments were conducted, in which the implementation of the different functions of the controller was thoroughly simulated. They have shown that the control scheme leads to an improvement in system performance in mean response time and resource utilization, and, overall, adapts the degree of multiprogramming in real time to the characteristics of the users (the adaptation is performed in about 4 sec. for a unit variation of the optimal degree of multiprogramming). A discussion of the practical application of the results ends the paper.
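A toy bottleneck bound, not the paper's model, can illustrate why an optimal degree of multiprogramming exists: the lifetime function e(x) = a·x² below is the classical Belady form, and all constants are arbitrary assumptions.

```python
def cpu_utilization(n, M=100, a=1e-4, fault_service=0.01):
    """Crude bottleneck bound on CPU utilization with n programs sharing
    M pages of primary memory.  Each program's mean CPU time between
    faults is e(x) = a*x**2 for its memory share x; every fault costs
    fault_service seconds on the paging device."""
    x = M / n                                # equal share of memory
    e = a * x * x                            # CPU time between faults
    return min(1.0,                          # at most one CPU
               n * e / (e + fault_service),  # n programs overlap I/O
               e / fault_service)            # paging device saturates
```

Utilization first rises with n as programs overlap their page waits, then collapses when shrinking memory shares make the paging device the bottleneck — the thrashing regime the controller is designed to avoid.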

01 May 1974
TL;DR: The design, conducting, and results of an experiment intended to measure the paging rate of a virtual memory computer system as a function of paging memory size are reported, which suggest models of demand paging behavior.
Abstract: This thesis reports the design, conduct, and results of an experiment intended to measure the paging rate of a virtual memory computer system as a function of paging memory size. This experiment, conducted on the Multics computer system at MIT, a large interactive computer utility serving an academic community, sought to predict paging rates for paging memory sizes larger than the existent memory at the time. A trace of all secondary memory references for two days was accumulated, and simulation techniques applicable to "stack" type page algorithms (of which the least-recently-used discipline used by Multics is one) were applied to it. A technique for interfacing such an experiment to an operative computer utility in such a way that adequate data can be gathered reliably and without degrading system performance is described. Issues of dynamic page deletion and creation are dealt with, apparently for the first reported time. The successful performance of this experiment asserts the viability of performing this type of measurement on this type of system. The results of the experiment are given, which suggest models of demand paging behavior.
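The "stack" simulation technique the thesis applies can be sketched as the usual one-pass LRU stack-distance computation, which yields fault counts for every memory size from a single pass over the trace (a generic sketch, not the thesis code):

```python
def lru_stack_distances(trace):
    """One pass over a page-reference trace, recording each reference's
    LRU stack distance (None for a first reference)."""
    stack, dists = [], []
    for p in trace:
        if p in stack:
            d = stack.index(p)
            stack.pop(d)
        else:
            d = None                 # cold miss: infinite distance
        dists.append(d)
        stack.insert(0, p)           # p becomes most recently used
    return dists

def faults(dists, m):
    """LRU fault count for a memory of m page frames."""
    return sum(1 for d in dists if d is None or d >= m)
```

Because LRU is a stack algorithm, `faults(dists, m)` is non-increasing in m, which is exactly what lets a trace on the existing memory predict paging rates for larger memories.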

Journal ArticleDOI
John W. Boyse
TL;DR: The execution characteristics of two types of commonly used programs in a large-scale, time-shared computer system are shown, including paging characteristics of tasks as a function of the number of pages those tasks have in core.
Abstract: The execution characteristics of two types of commonly used programs in a large-scale, time-shared computer system are shown. A software monitoring facility built into the supervisor was used for data collection during normal system operation. These data were analyzed, and results of this analysis are presented for a Fortran compiler and an interactive line file editor. Probability distribution functions and other data are given for such things as CPU intervals, I/O intervals, and the number of such intervals during execution. Empirical distributions are compared with simple theoretical distributions (exponential, hyperexponential, and geometric). Other data show paging characteristics of tasks as a function of the number of pages those tasks have in core.

Journal ArticleDOI
01 Jan 1974
TL;DR: It is tentatively concluded that the linear approximation relating mean time between page transfers and memory size is untenable for main memory, especially under working-set policies, and that the linearity of the drum data reflects the behavior of the Multics scheduler for background jobs, not the behavior of programs.
Abstract: The linear approximation relating mean time between page transfers and memory size, as reported by Saltzer for Multics, is examined. It is tentatively concluded that this approximation is untenable for main memory, especially under working-set policies; and that the linearity of the data for the drum reflects the behavior of the Multics scheduler for background jobs, not the behavior of programs.

Proceedings ArticleDOI
01 Jan 1974
TL;DR: It has been shown by Randell and Purdom & Stigler that internal fragmentation is a more serious problem than external fragmentation, and therefore attention is directed to it.
Abstract: Internal fragmentation is a phenomenon of a paging environment. Memory is divided into equal-size blocks called pages, and when a job requests memory space, it receives an integral number of pages. The difference between the amount of space actually required and the amount allocated is wasted space and is referred to as internal fragmentation. In comparison, a segmentation environment allocates exactly the amount of space required. However, this system is subject to another type of wasted memory called external fragmentation. This fragmentation occurs when allocation requests cannot be filled even though there is available space in memory (which is either too small to be useful or too scattered). It has been shown by Randell [6] and Purdom & Stigler [5] that internal fragmentation is a more serious problem than external fragmentation, and therefore we will direct our attention to it.
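The definition of internal fragmentation translates directly into a calculation (a generic sketch; the request sizes and page size below are invented for illustration):

```python
import math

def internal_fragmentation(request_sizes, page_size):
    """Space wasted when each request is rounded up to whole pages:
    allocated minus actually required."""
    allocated = sum(math.ceil(s / page_size) * page_size
                    for s in request_sizes)
    return allocated - sum(request_sizes)
```

A request of 100 words and one of 1025 words with 1024-word pages receive 1024 and 2048 words respectively, wasting 1947 words — on average each request wastes about half a page.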

Journal ArticleDOI
TL;DR: An experimental algorithm for optimizing program placement in virtual storage systems is described, which identifies the optimum program placements within the context of a paging environment.
Abstract: An experimental algorithm for optimizing program placement in virtual storage systems is described. Interprogram linkages are monitored and subsequently analyzed for frequency and proximity. The algorithm evaluates this information within the context of a paging environment. Program lists that define the optimum program placements are then generated. Performance gains are also discussed.

Journal ArticleDOI
TL;DR: It is shown that the stack algorithm concept can easily be extended to apply to two-level directly addressable memory hierarchies and an efficient procedure exists for collecting data on the performance of stack replacement algorithms.
Abstract: In this paper we consider the application of the stack algorithm concept to a two-level paged storage hierarchy in which both levels are directly addressable by the central processor. Since both levels are directly addressable, pages need not reside in the first level of memory for a reference to be completed. The effectiveness of a page replacement algorithm in such a storage hierarchy is measured by the frequency of references to the first level of memory and the number of page promotions. It is shown that the stack algorithm concept can easily be extended to apply to two-level directly addressable memory hierarchies. A class of page replacement algorithms called stack replacement algorithms is defined. An efficient procedure exists for collecting data on the performance of stack replacement algorithms.

Proceedings ArticleDOI
06 May 1974
TL;DR: In this paper, the authors note that the loading sequence of subroutines can have a considerable effect on paging activity, and that although page-fault-optimizing loaders, linkage editors, and compilers have been proposed, little effort has been made to provide the programmer with suitable tools for making his programs more local.
Abstract: As the use of virtual memory becomes more and more accepted, the problem of effective storage management becomes more and more important. To date most efforts to optimize the use of memory have been directed at devising memory management strategies at the operating system level that minimize the number of page faults. For example, Comeau has shown that the loading sequence of subroutines can have a considerable effect on paging activity. Hence page-fault-optimizing loaders, linkage editors and compilers have been proposed. Although the concepts of "locality" and "working set" have been known for some time (cf. Denning), little effort has been made to provide the programmer with suitable tools for making his programs "more local". This seems to stem from the fact that, short of notions of "modular coding", little is known about what sorts of programming habits actually result in local code. Consequently, most optimization techniques used to date have assumed that user programs were an unmodifiable input to the operating system.

Journal ArticleDOI
TL;DR: A learning algorithm is introduced to monitor references in such a way that it prevents seldom-accessed pages from being brought into primary memory; results are comparable with those obtained from static allocation and superior to those obtained with paging.
Abstract: The efficient utilization of a two-level directly executable memory system is investigated. After defining the time and space product resulting from static allocation of the most often referenced pages, from paging, and from an optimal algorithm when the amount of primary memory is constrained, we introduce a learning algorithm. Its basic feature is to monitor references in such a way that it prevents seldom-accessed pages from being brought into primary memory. The additional hardware requirements are not extensive. Simulations attest to the validity of the concept, and show that results are comparable with those obtained from the static allocation (the latter being impractical since it requires knowledge of the whole reference stream) and superior to those obtained with paging. In the case of application programs, contributions to the learning algorithm can be made at compile time. Algorithms and data structures needed in an optimizing phase of the compiler are described.

Book ChapterDOI
01 Jan 1974
TL;DR: The most common type of storage organization consists of two levels. Level 1 is a large electronic store such as core which is directly addressable from the central processing unit. Level 2 consists of one or more high-capacity peripheral electromechanical devices such as drum, disk, and tape as discussed by the authors.
Abstract: Publisher Summary This chapter describes the hardware devices for virtual memory and multilevel memory. There are many well-known hardware devices for information storage and retrieval, including registers, core, drum, fixed-head disk, movable head disk, magnetic tape, punched paper tape, and cards. There are hardware mechanisms especially adapted for virtual memory. In particular, multilevel memory and the paging drum have significantly improved the economic feasibility of virtual memory over conventional hardware design. The most common type of storage organization consists of two levels. Level 1 is a large electronic store such as core which is directly addressable from the central processing unit. Level 2 consists of one or more high-capacity peripheral electromechanical devices such as drum, disk, and tape. It is suggested that if information needed is not located in core, that information must be brought into core from a peripheral device before it can be accessed. This scheme was used in nearly all early paging and segmentation designs and is still found in a majority of small- and medium-sized machines.

Proceedings ArticleDOI
30 Sep 1974
TL;DR: The development of a storage management scheme for MEMBERS, an emulated stack machine incorporating a virtual store and paging, is described and some preliminary estimates of its performance are given.
Abstract: The development of a storage management scheme for MEMBERS, an emulated stack machine incorporating a virtual store and paging, is described. The approach used employs an analogue of the 'working set' concept of Denning. Information contained in the activation stack for each process is used to identify a 'Top Stack Set' for the process. The set of all the top stack sets defines the pages that are held in main storage. The architecture of the MEMBERS machine incorporates an activation stack in which pointers can be distinguished from other data. The basic programming language for the machine is designed to encourage a modular structure for programs, so the activation records in the stack for the recently entered procedures contain pointers to an approximation of the working set. A descriptor-based addressing structure enables address mapping to be performed without the use of a page table, so no further gains can be obtained through the use of associative page-register hardware. The use of microprogramming for the implementation of the storage management system is discussed and some preliminary estimates of its performance are given.

Proceedings ArticleDOI
04 Jun 1974
TL;DR: Denning has derived the formula E(L) = 2L/(2L + S − 1), which is shown to be a very accurate predictor of paging drum efficiency; results are also derived for the space-time cost of loading a program's working set using the strategies of prepaging and pure demand paging.
Abstract: The efficiency of a paging drum, operating as the back-up storage device in a virtual memory system, is defined to be the percentage of time that data is being transferred. Efficiency (E) is a function of the number of page requests pending (L), and the capacity of one track on the drum (S), expressed as a multiple of the page size. Denning has derived the formula E(L) = 2L/(2L + S − 1). This formula is shown to be a very accurate predictor of paging drum efficiency. Denning has also derived results for the space-time cost of loading a program's working set using the strategies of prepaging and pure demand paging. For the cost of prepaging, simulation analysis produces results which are at variance with those predicted by the analytically derived formula. The simulated cost of prepaging the working set is tabulated for several values of L, S, and w (the number of pages in the working set), and compared with the cost of loading the working set by pure demand paging.
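Denning's efficiency formula can be checked numerically; a one-line sketch:

```python
def drum_efficiency(L, S):
    """Denning's formula: fraction of time the paging drum is
    transferring data, with L page requests pending and a track
    capacity of S pages."""
    return 2 * L / (2 * L + S - 1)
```

With one page per track (S = 1) the drum is always transferring; for a fixed track capacity, efficiency rises toward 1 as the pending-request queue L grows, since rotational latency is amortized over more transfers.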

Journal ArticleDOI
TL;DR: The development of a storage management scheme for MEMBERS, an emulated stack machine incorporating a virtual store and paging, is described and the use of microprogramming for the implementation of the storage management system is discussed.