
Showing papers on "Memory management" published in 1976


Patent
24 Sep 1976
TL;DR: In this article, a virtual address translator comprises a content addressed memory and a word addressed memory, and a subsegment descriptor includes an absolute base address which is added to a deflection field to obtain an absolute memory address.
Abstract: A virtual address translator comprises a content addressed memory and a word addressed memory. A task name and subsegment number from a virtual address supplied by a processor are employed as a key word to search a content addressed memory and read out a subsegment descriptor if the key word is matched. The subsegment descriptor includes an absolute base address which is added to a deflection field to obtain an absolute memory address. The memory address is applied to a memory to permit transfer of a word between the processor and the memory. The processor may present any one of several task names depending upon whether the memory reference is made for an instruction or data for the processor, or for an instruction or data for an I/O connected to the processor. Bounds, residency and access privileges are checked using the subsegment descriptor. If a search of the content addressed memory reveals that the desired subsegment descriptor is not in the word addressed memory, the translator obtains the descriptor from memory and then generates the desired absolute memory address. The translator is provided with circuits generating values which indicate the efficiency of its operation. Controls are provided for selecting any one of several widths for the subsegment and deflection fields of virtual addresses received from the processor.

117 citations
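A minimal sketch, in Python, of the translation path this abstract describes: a small content-addressed table is searched with a (task name, subsegment) key, the matching descriptor supplies an absolute base address plus bounds and access information, and the deflection field is added to form the absolute memory address. The field widths, descriptor layout, and miss handling below are illustrative assumptions, not the patent's actual circuitry.

```python
# Illustrative model of the translation scheme: a dictionary stands in for the
# content addressed memory; each descriptor carries a base address, a bound,
# and an access flag (the layout is assumed, not taken from the patent).

class Fault(Exception):
    pass

# (task_name, subsegment) -> subsegment descriptor
descriptor_cam = {
    ("editor", 3): {"base": 0x4000, "limit": 0x0FFF, "writable": True},
    ("editor-io", 0): {"base": 0x9000, "limit": 0x01FF, "writable": False},
}

def translate(task_name, subsegment, deflection, write=False):
    """Translate a virtual (task, subsegment, deflection) reference."""
    desc = descriptor_cam.get((task_name, subsegment))
    if desc is None:
        # In the patent the translator would fetch the missing descriptor from
        # memory and retry; here we simply signal the miss.
        raise Fault(f"descriptor miss for {(task_name, subsegment)}")
    if deflection > desc["limit"]:
        raise Fault("bounds violation")
    if write and not desc["writable"]:
        raise Fault("access violation")
    return desc["base"] + deflection  # absolute memory address

print(hex(translate("editor", 3, 0x12)))      # 0x4012
print(hex(translate("editor-io", 0, 0x1FF)))  # 0x91ff
```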


Patent
22 Apr 1976
TL;DR: In this article, the authors present a multiprocessor microcomputer system having two or more substantially independent processors, each of which has its own bus-type interconnection structure, and a shared memory accessible by any of the processors without interfering with the proper operation of the other processors.
Abstract: A multiprocessor microcomputer system having two or more substantially independent processors, each of which has its own bus-type interconnection structure, and a shared memory accessible by any of the processors without interfering with the proper operation of the other processors. Access to the memory is controlled by connecting the memory to the processor requesting access when only one such request is present, and to the last processor to have received access when more than one request is received simultaneously; this allows autosynchronous operation, automatic selection of priority, and high speed of operation.

74 citations
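A small sketch of the arbitration rule described above, assuming a simple request/grant cycle: with a single requester the memory connects to that processor, and with simultaneous requests it stays with the processor that last had access. The cycle structure and processor identifiers are assumptions made only for illustration.

```python
def arbitrate(requests, last_granted):
    """Decide which processor gets the shared memory this cycle.

    requests: set of processor ids requesting access this cycle.
    last_granted: processor id that most recently held the memory.
    """
    if not requests:
        return None                      # memory idle this cycle
    if len(requests) == 1:
        return next(iter(requests))      # lone requester wins immediately
    # Simultaneous requests: favour the last processor to have had access,
    # as the abstract describes; otherwise fall back to a fixed order.
    if last_granted in requests:
        return last_granted
    return min(requests)

last = None
for cycle_requests in [{1}, {1, 2}, {2}, {1, 2}, set()]:
    winner = arbitrate(cycle_requests, last)
    if winner is not None:
        last = winner
    print(cycle_requests, "->", winner)
```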


Journal ArticleDOI
J. Rodriguez-Rosell
TL;DR: With very few exceptions the reference strings that have been measured characterize virtual memory utilization, reflecting the fact that the motivating force behind these research activities is the desire to understand the behavior of virtual memory paging systems.
Abstract: During the past several years a considerable amount of effort has gone into the measurement, analysis, and modeling of program behavior. Most of the work either assumes the existence of a reference string, which is then used in various ways, or attempts to produce a model for the process by which such reference strings are generated. With very few exceptions the reference strings that have been measured characterize virtual memory utilization, reflecting the fact that the motivating force behind these research activities is the desire to understand the behavior of virtual memory paging systems.

60 citations



Journal ArticleDOI
TL;DR: Program behavior studies may be useful in designing new programs and new virtual memory systems that are capable of levels of performance higher than those currently achievable.
Abstract: The practical objective of program behavior studies is to enhance program and system performance. On the one hand, the knowledge resulting from these studies may be useful in designing new programs and new virtual memory systems that are capable of levels of performance higher than those currently achievable. On the other hand, such knowledge may often be employed to increase the performance of existing programs and systems.

54 citations


Journal ArticleDOI
TL;DR: This paper is concerned with paging systems, that is, systems for which the blocks of contiguous locations are of equal size and the occurrence of a reference to a page that is currently not in main memory is called a page fault.
Abstract: Virtual memory is one of the major concepts that has evolved in computer architecture over the last decade. It has had a great impact on the design of new computer systems since it was first introduced by the designers of the Atlas computer in 1962. A virtual memory is usually divided into blocks of contiguous locations to allow an efficient mapping of the logical addresses into the physical address space. In this paper, we are concerned with paging systems, that is, systems for which the blocks of contiguous locations are of equal size. The memory system consists of two levels: main memory and auxiliary memory. The occurrence of a reference to a page that is currently not in main memory is called a page fault. A page fault results in the interruption of the program and the transfer of the referenced page from auxiliary to main memory.

36 citations
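A minimal demand-paging sketch that makes the page-fault definition above concrete: given a reference string and a fixed number of main-memory frames, a reference to a page not currently resident counts as a fault and triggers a transfer from auxiliary memory. LRU replacement and the example trace are assumptions; the abstract does not prescribe a replacement policy.

```python
from collections import OrderedDict

def count_page_faults(reference_string, num_frames):
    """Count faults for a demand-paged memory with LRU replacement."""
    frames = OrderedDict()   # resident pages, ordered by recency of use
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)       # hit: refresh recency
            continue
        faults += 1                        # page fault: page not in main memory
        if len(frames) >= num_frames:
            frames.popitem(last=False)     # evict the least recently used page
        frames[page] = None                # "transfer" the page from auxiliary memory
    return faults

refs = [1, 2, 3, 1, 4, 2, 5, 1, 2, 3]
print(count_page_faults(refs, num_frames=3))   # 8 faults with 3 frames
```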


Patent
19 Aug 1976
TL;DR: In this paper, packet communication is used in the architecture of a memory system having hierarchical structure, whose behavior is prescribed by a formal memory model appropriate to a computer system for data flow programs.
Abstract: Packet communication is used in the architecture of a memory system having hierarchical structure. The behavior of this memory system is prescribed by a formal memory model appropriate to a computer system for data flow programs.

35 citations


Patent
19 Aug 1976
TL;DR: In this article, packet communication is used in the architecture of a memory system capable of processing many independent memory transactions concurrently; its behavior is prescribed by a formal memory model appropriate to a computer system for data flow programs.
Abstract: Packet communication is used in the architecture of a memory system capable of processing many independent memory transactions concurrently. The behavior of this memory system is prescribed by a formal memory model appropriate to a computer system for data flow programs.

33 citations


Patent
22 Oct 1976
TL;DR: In this paper, a method is presented for randomly scrambling the physical address of a block of data, within a memory subject to data site deterioration, by utilizing an auxiliary correspondence memory to pair each logical input/output address with a physical memory address at a random time.
Abstract: A method for randomly scrambling the physical address of a block of data, within a memory subject to data site deterioration, by utilizing an auxiliary correspondence memory to pair each logical input/output address with a physical memory address at a random time. Apparatus for implementing the novel method is also disclosed.

33 citations
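A sketch of the indirection idea in this abstract: an auxiliary correspondence table maps each logical block address to a physical site, and the pairing is reshuffled at random times so that heavily used logical blocks do not dwell on a single physical site. The table size, reshuffle trigger, and data-copy step are illustrative assumptions.

```python
import random

class ScrambledMemory:
    """Logical blocks mapped to physical sites via a correspondence table."""

    def __init__(self, num_blocks, reshuffle_probability=0.1):
        self.table = list(range(num_blocks))    # logical address -> physical site
        self.storage = [None] * num_blocks      # physical sites
        self.p = reshuffle_probability

    def _maybe_reshuffle(self):
        # "At a random time", re-pair logical and physical addresses.
        if random.random() < self.p:
            old_table, old_storage = self.table[:], self.storage[:]
            random.shuffle(self.table)
            # Move data so every logical block still reads back correctly.
            for logical, physical in enumerate(self.table):
                self.storage[physical] = old_storage[old_table[logical]]

    def write(self, logical, data):
        self._maybe_reshuffle()
        self.storage[self.table[logical]] = data

    def read(self, logical):
        self._maybe_reshuffle()
        return self.storage[self.table[logical]]

mem = ScrambledMemory(8)
mem.write(3, "block three")
for _ in range(20):
    assert mem.read(3) == "block three"   # data survives random re-pairing
print("logical 3 currently lives at physical site", mem.table[3])
```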


Proceedings ArticleDOI
13 Oct 1976
TL;DR: This paper shows how to calculate analytically the effectiveness of set associative paging relative to full associative (unconstrained mapping) paging, and suggests that as electronically accessed third level memories become available, algorithms currently used only for cache paging will be applied to main memory, for the same reasons of efficiency, implementation ease and cost.
Abstract: Set associative page mapping algorithms have become widespread for the operation of cache memories for reasons of cost and efficiency. In this paper we show how to calculate analytically the effectiveness of set associative paging relative to full associative (unconstrained mapping) paging. For two miss ratio models, Saltzer's linear model and a mixed geometric model, we are able to obtain simple, closed form expressions for the relative LRU fault rates. Trace driven simulations are used to verify the accuracy of our results. We suggest that as electronically accessed third level memories, such as electron beam memories, magnetic bubbles or charge coupled devices become available, algorithms currently used only for cache paging will be applied to main memory, for the same reasons of efficiency, implementation ease and cost.

31 citations
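The comparison this abstract analyzes can also be reproduced empirically with a short trace-driven simulation: the same reference string is run through a fully associative LRU memory and through a set-associative one in which each page may only occupy the set selected by its low-order bits. The trace, set count, and capacity below are assumptions chosen only to exercise both mappings on one trace; they do not reproduce the paper's analytic miss-ratio models.

```python
import random
from collections import OrderedDict

def lru_faults(trace, capacity):
    """Fully associative LRU fault count."""
    mem, faults = OrderedDict(), 0
    for page in trace:
        if page in mem:
            mem.move_to_end(page)
        else:
            faults += 1
            if len(mem) >= capacity:
                mem.popitem(last=False)
            mem[page] = None
    return faults

def set_associative_faults(trace, num_sets, ways):
    """Set associative LRU: a page may only live in set (page mod num_sets)."""
    sets = [OrderedDict() for _ in range(num_sets)]
    faults = 0
    for page in trace:
        s = sets[page % num_sets]
        if page in s:
            s.move_to_end(page)
        else:
            faults += 1
            if len(s) >= ways:
                s.popitem(last=False)
            s[page] = None
    return faults

random.seed(1976)
trace = [random.randrange(64) for _ in range(5000)]
print("fully associative (16 frames):", lru_faults(trace, 16))
print("4 sets x 4 ways (16 frames):  ", set_associative_faults(trace, 4, 4))
```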


Patent
05 May 1976
TL;DR: In this article, a first-in-first-out (FIFO) memory for storing received data is combined with a random access memory (RAM) for feeding data to the printer, with the latter memory kept continuously replenished with data as the printer executes its printing action and printed characters are erased from the RAM.
Abstract: Print control apparatus which combines a first-in-first-out memory for storing received data with a random access memory for feeding data to a movable type printer, to eliminate the need for suspension of receiving input characters or the adding of fill characters or time delays during the period when printer action is suspended during execution of control actions, such as paper feed. The apparatus controls the transfer of data from the first-in-first-out memory to the random access memory by keeping the latter memory continuously replenished with data as the printer executes its printing action and printed characters are erased from the random access memory.
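A rough sketch of the data path this patent describes: characters arrive into a first-in/first-out queue without ever being refused, while a print loop keeps a small random-access buffer replenished from the queue and erases characters as they are printed. The queue and buffer sizes are arbitrary illustrative values.

```python
from collections import deque

class PrintController:
    """FIFO input stage feeding a random-access print buffer."""

    def __init__(self, buffer_size=16):
        self.fifo = deque()        # receive side: never blocks the sender
        self.buffer = {}           # random access memory: slot -> character
        self.buffer_size = buffer_size
        self.next_slot = 0

    def receive(self, char):
        self.fifo.append(char)     # accepted even while printing is suspended

    def replenish(self):
        # Keep the RAM topped up from the FIFO whenever there is room.
        while self.fifo and len(self.buffer) < self.buffer_size:
            self.buffer[self.next_slot] = self.fifo.popleft()
            self.next_slot += 1

    def print_one(self):
        # Print (and erase) the oldest buffered character, if any.
        if not self.buffer:
            return None
        return self.buffer.pop(min(self.buffer))

pc = PrintController(buffer_size=4)
for ch in "HELLO WORLD":
    pc.receive(ch)
out = []
while pc.fifo or pc.buffer:
    pc.replenish()
    out.append(pc.print_one())
print("".join(out))   # HELLO WORLD
```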

Journal ArticleDOI
TL;DR: It is seen that look-ahead paging demonstrates an inherent advantage sufficient to account for the differences observed between currently implemented demand paging algorithms and theoretically optimal algorithms.
Abstract: We express the future behavior of programs that may be described by two common program behavior models, the independent reference model and the LRU stack model, by a discrete time Markov chain. Using this Markov chain model, we are able to calculate the theoretical minimum number of page faults for a program representable by either of these models in either a fixed or variable size memory. The behavior of optimal look-ahead and optimal realizable demand paging algorithms are compared, and it is seen that look-ahead paging demonstrates an inherent advantage sufficient to account for the differences observed between currently implemented demand paging algorithms and theoretically optimal algorithms.
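One way to see the look-ahead advantage this abstract refers to is to compare, on the same reference string and memory size, a realizable demand-paging policy (LRU) with Belady's optimal look-ahead policy, which evicts the resident page whose next use lies farthest in the future. This sketch does not implement the paper's Markov-chain calculation; it is only an empirical illustration under an assumed reference string.

```python
def lru_faults(refs, frames):
    """Realizable demand paging with LRU replacement."""
    mem, faults = [], 0
    for r in refs:
        if r in mem:
            mem.remove(r)
        else:
            faults += 1
            if len(mem) >= frames:
                mem.pop(0)               # evict the least recently used page
        mem.append(r)
    return faults

def opt_faults(refs, frames):
    """Optimal look-ahead (Belady's MIN) paging."""
    mem, faults = set(), 0
    for i, r in enumerate(refs):
        if r in mem:
            continue
        faults += 1
        if len(mem) >= frames:
            def next_use(p):
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float("inf")  # never referenced again
            mem.remove(max(mem, key=next_use))
        mem.add(r)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print("LRU (realizable demand paging):", lru_faults(refs, 3))  # 10
print("OPT (look-ahead):              ", opt_faults(refs, 3))  # 7
```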

PatentDOI
TL;DR: A programmable voice characteristic memory system for programming any number of different specifications in an electronic digital organ that may be transferred to and recorded on the external non-volatile read-write memory for permanent storage and future use.
Abstract: A programmable voice characteristic memory system for programming any number of different specifications in an electronic digital organ. Digital information which defines voice characteristics in an electronic digital organ is stored in a read-write specification memory. Voice characteristic information may be selectively written into the specification memory from an external data inputting device such as a punched card reader or from an external non-volatile read-write memory such as a magnetic tape. Information stored in the specification memory may be transferred to and recorded on the external non-volatile read-write memory for permanent storage and future use. Voice characteristic information stored in the specification memory may also be accessed by the digital organ to generate musical tones in conventional fashion.

Patent
Thyselius Per-Olof
30 Jan 1976
TL;DR: In this article, a method is disclosed for addressing memory positions in a switch memory of a transit exchange for the transfer of synchronous data signals between incoming and outgoing TDM links comprising data channels of several data rates, each constituting a multiple of a basic rate derived from the number of time slots in a TDM frame.
Abstract: A method and an apparatus are disclosed for economically addressing memory positions in a switch memory of a transit exchange for the transfer of synchronous data signals between incoming and outgoing TDM links comprising data channels of several data rates, each constituting a multiple of a basic rate derived from the number of time slots in a TDM frame. The data signals are stored in a switch memory having a memory position for each of the data channels in the incoming links and are then transferred to a buffer memory having a memory position for each time slot of the data channels in the outgoing links before they are sent out on these links. The memory writing as well as the reading occurs at a repetition rate determined by the data rate of the respective data channel. The data signals are written into the switch memory by the aid of an address calculator including a structure memory for the storage of information indicating the allocation of time slots to the various data channels of each link, which information is common to all links of the same type, and a type memory for the storage of type designations, where the relevant type designation is addressed by means of the identity number of the link.

Proceedings ArticleDOI
07 Jun 1976
TL;DR: The intelligent memory is a computer memory formed of circulating serial storage loops and distributed processing logic that performs off-line sort processing, associative searching, updating and retrieval, and is capable of dynamically varying its loop size to accommodate varying data requirements.
Abstract: The intelligent memory is a computer memory formed of circulating serial storage loops and distributed processing logic. In addition to the basic information storage function, the memory performs off-line sort processing, associative searching, updating and retrieval. The memory is also capable of dynamically varying its loop size to accommodate varying data requirements. A number of memory configurations which trade performance for economy are possible. The options range from single record per loop and on-chip logic (aimed at CCD technology) to multiple records per loop and off-chip logic (aimed at magnetic bubble memories). The latter option is made possible by a new sort algorithm named "gyro sort" in which loop contents are caused to "precess" at appropriate intervals. As one component of a storage hierarchy, the intelligent memory offers potential performance gains ranging from one to three orders of magnitude over random access memories at comparable cost.

Patent
27 Jul 1976
TL;DR: In this article, an apparatus is provided which allows computer programs to execute directly out of a large, sector addressable secondary memory by utilizing a relatively small, word addressable buffer memory.
Abstract: An apparatus is provided which allows computer programs to execute directly out of a large, sector addressable secondary memory by utilizing a relatively small, word addressable buffer memory. The system includes circuitry adapted to selectively transfer data between the secondary memory and the buffer memory so that a memory word request by the computational unit will result in either transferring the word from the buffer memory to the computational unit if the word is present in the buffer memory or transferring the data sector in which the requested word resides into buffer memory from secondary memory. The circuitry selectively transfers data sectors between the secondary and the buffer memory to continually maintain the data sector containing the addressed word and a predetermined number of directly adjacent data sectors from secondary memory in a portion of the buffer memory. In this manner, the requested word is located in the buffer memory along with data which is physically located on either side of the requested word in secondary memory. A data sector may consist of one or more data words.
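A simplified sketch of the buffering rule described above: on a word request, if the word's sector is already buffered it is served from the buffer; otherwise the sector containing the word, plus a fixed number of neighbouring sectors on either side, is brought in from the secondary store. The sector size, neighbourhood width, and eviction rule below are assumptions for illustration only.

```python
class SectorBuffer:
    """Word-addressable buffer in front of a sector-addressable secondary memory."""

    def __init__(self, secondary, words_per_sector=8, adjacent=1, max_sectors=8):
        self.secondary = secondary        # list of words: the large secondary memory
        self.wps = words_per_sector
        self.adjacent = adjacent          # neighbouring sectors kept on each side
        self.max_sectors = max_sectors
        self.buffer = {}                  # sector number -> list of words
        self.transfers = 0

    def _load(self, sector):
        last_sector = (len(self.secondary) - 1) // self.wps
        lo = max(0, sector - self.adjacent)
        hi = min(last_sector, sector + self.adjacent)
        for s in range(lo, hi + 1):
            if s not in self.buffer:
                start = s * self.wps
                self.buffer[s] = self.secondary[start:start + self.wps]
                self.transfers += 1
        while len(self.buffer) > self.max_sectors:
            # Crude eviction: drop the buffered sector farthest from the request.
            victim = max(self.buffer, key=lambda s: abs(s - sector))
            del self.buffer[victim]

    def read_word(self, address):
        sector, offset = divmod(address, self.wps)
        if sector not in self.buffer:
            self._load(sector)            # sector fault: fetch it and its neighbours
        return self.buffer[sector][offset]

secondary = list(range(1000))             # word i simply holds the value i
buf = SectorBuffer(secondary)
# One sector fault (plus prefetched neighbours) serves all three reads.
print(buf.read_word(130), buf.read_word(131), buf.read_word(137))
print("sector transfers so far:", buf.transfers)
```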

Patent
03 Nov 1976
TL;DR: In this paper, the authors present an approach for managing data and data requests within an associative memory device, wherein the associative device is responsive to requests from one or more using devices, such as a digital computer, to store, locate, retrieve, and modify, by means of decision logic, specific data which is stored in peripheral memory.
Abstract: Apparatus for managing data and data requests within an associative memory device, wherein the associative memory device is responsive to requests from one or more using devices, such as a digital computer, to store, locate, retrieve, and modify, by means of decision logic, specific data which is stored in peripheral memory. The apparatus includes a number of elements which individually are capable of performing a number of general-purpose functions, which are in turn combined in specified sequences to form special-purpose functions. The special purpose functions and several of the general-purpose functions together provide the necessary data management capability for the associative memory device.

Journal ArticleDOI
Gilbert, Storma, Ballard, Hobrock, James, Wood
TL;DR: A memory control unit, operating in conjunction with a special purpose digital computer, achieves real-time storage into and retrieval from computer memory of individual video images undergoing on-line digitization, processing, and reconstitution.
Abstract: A memory control unit is described, which, operating in conjunction with a special purpose digital computer, achieves real-time storage into and retrieval from computer memory of individual video images undergoing on-line digitization, processing, and reconstitution. The memory control unit is capable of rapid sequential access on up to six 16 K-word core and two 131 K-word (28-bit word) solid-state memories, achieving data transfers to or from memory at up to 40 million 9-bit samples/s for 33 ms. The memory control unit employs a variety of data rates, and can, under program control, assemble one or more bytes into, or disassemble one or more bytes from, each memory word. The control unit can sign extend incoming data to any of four different byte lengths from any of four different byte lengths. The control unit possesses dual data busses, one dedicated to memory read operations and a second capable of either "reads from" or "writes into" memory. The eight memory modules are sequenced by a small microprogrammed control store loadable from an associated computer.
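The sign-extension capability mentioned in this abstract can be illustrated in a few lines: an incoming field of one width has its sign bit replicated to fill a wider field. The field widths below are examples only; the actual unit supports hardware byte lengths the abstract does not enumerate.

```python
def sign_extend(value, from_bits, to_bits):
    """Sign-extend a from_bits-wide two's-complement field to to_bits wide."""
    value &= (1 << from_bits) - 1           # keep only the incoming field
    if value & (1 << (from_bits - 1)):      # sign bit set: extend with ones
        value -= 1 << from_bits
    return value & ((1 << to_bits) - 1)     # re-encode in the wider field

# A 9-bit sample holding -3, widened to 16 bits, keeps its two's-complement value.
print(bin(sign_extend(0b111111101, from_bits=9, to_bits=16)))  # 0b1111111111111101
print(bin(sign_extend(0b000000101, from_bits=9, to_bits=16)))  # 0b101
```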

Journal ArticleDOI
TL;DR: "Virtual memory" is a computing term which has come into increasing use in recent years, but its use often causes controversy and misunderstanding, for it is used to mean different things by different people.
Abstract: "Virtual memory" is a computing term which has come into increasing use in recent years. Unfortunately, like other new expressions, its use often causes controversy and misunderstanding, for it is used to mean different things by different people. Not long ago when one major computer vendor announced the introduction of the new technique of 'virtual storage,' other manufacturers complained that they had been doing the same thing for years under a different name (see Figure 1).

Journal ArticleDOI
TL;DR: Among the performance characteristics of programs, the patterns of memory references they generate have the unique property of being totally irrelevant in a non-virtual memory context and perhaps the most important aspect of program behavior in a virtual memory system.
Abstract: Among the performance characteristics of programs, the patterns of memory references they generate have the unique property of being totally irrelevant in a non-virtual memory context and perhaps the most important aspect of program behavior in a virtual memory system. It is because of their importance in the latter case that the referencing behavior is so often referred to as the "behavior" par excellence.


Journal ArticleDOI
01 Apr 1976
TL;DR: A specialized parallel computer architecture that is proposed for high-speed searching of large text data bases and the benefits claimed are design simplicity, high speed for suitable applications, ease of software development, reliability and reasonable cost.
Abstract: This paper describes a specialized parallel computer architecture that is proposed for high-speed searching of large text data bases. The proposed machine architecture is a parallel array of independent processors connected to a common bus. Each processor consists of a microprocessor, a high-speed block-access memory for storage of a part of the data base, read-only memory for control software, and random-access memory for storage of programs and working storage. The processors are connected by a common high-speed bus that is used for communication of data, programs, commands and results. The benefits claimed for this architecture are design simplicity, high speed for suitable applications, ease of software development, reliability and reasonable cost. This machine architecture is under consideration as a means for providing high-speed text searching capabilities.
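A toy sketch of the partitioned-search idea in this abstract: the data base is divided across independent workers, each worker scans only its own partition, and results are collected centrally. Thread workers stand in here for the independent microprocessors on a common bus; the partitioning scheme and query are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

documents = [
    "virtual memory systems and paging",
    "content addressed memory hardware",
    "magnetic bubble memories for storage hierarchies",
    "text searching with parallel processors",
    "operating system scheduling queues",
    "charge coupled device memories",
]

def search_partition(partition, term):
    """Each 'processor' scans only its own slice of the data base."""
    return [doc for doc in partition if term in doc]

def parallel_search(docs, term, num_processors=3):
    partitions = [docs[i::num_processors] for i in range(num_processors)]
    with ThreadPoolExecutor(max_workers=num_processors) as pool:
        results = pool.map(search_partition, partitions, [term] * num_processors)
    return [doc for part in results for doc in part]

print(parallel_search(documents, "memor"))
```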


01 Dec 1976
TL;DR: The multi-process design of a paging system that may be used to implement a virtual memory on a large scale, demand paged computer utility is presented and shown to have significant advantages over conventional designs in terms of simplicity, modularity, system security, and system growth and adaptability.
Abstract: This thesis presents a design for a paging system that may be used to implement a virtual memory on a large scale, demand paged computer utility. A model for such a computer system with a multi-level, hierarchical memory system is presented. The functional requirements of a paging system for such a model are discussed, with emphasis on the parallelism inherent in the algorithms used to implement the memory management functions. A complete, multi-process design is presented for the model system. The design incorporates two system processes, each of which manages one level of the multi-level memory, being responsible for the paging system functions for that memory. These processes may execute in parallel with each other and with user processes. The multi-process design is shown to have significant advantages over conventional designs in terms of simplicity, modularity, system security, and system growth and adaptability. An actual test implementation on the Multics system was carried out to validate the proposed design.

Book ChapterDOI
09 Aug 1976
TL;DR: A probabilistic model of a computer system with multiprogramming and paging is considered, and an adaptive memory allocation policy is introduced which dynamically changes the number of working sets to reach the goal of always having enough memory available to load the parachor of each program.
Abstract: We consider a probabilistic model of a computer system with multiprogramming and paging. The applied work-load is derived from measurements in scientific computer applications and is characterized by a great variance of compute time. Throughput of a cyclic model is computed approximately, presuming program sizes with negative exponential distribution. After a review of previous results for a memory allocation policy with a prescribed number, n, of working sets at least to be loaded, an adaptive memory allocation policy is introduced which dynamically changes the number, n. Thereby, it is possible to reach the goal of always having enough memory available to load the parachor of each program. Simulation results establish our approximations as being very good. CPU scheduling is chosen to be throughput optimal. Our results are useful to demonstrate the benefits of allocation policies with an adaptively controlled degree of multiprogramming. Previous contributions to this problem have, to date, been made only by means of simulation [5].

Journal ArticleDOI
TL;DR: Applications of DCAM can greatly speed up queue searches and scheduling processes in operating systems, and the number of extra gates it requires increases only linearly with memory size.
Abstract: The discriminating content addressable memory (DCAM) is a modified content addressable memory (CAM). After the "similar to" search performed by the CAM, the modified memory searches in parallel for the highest value in a particular field across all words of the memory. Applications of DCAM can greatly speed up queue searches and scheduling processes in operating systems. Worst case DCAM query time for 1000 words is 2 μs. The number of extra gates required by a DCAM increases linearly with memory size.
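A software stand-in for the DCAM operation described above: after an equality ("similar to") search selects a subset of words, the maximum value of a chosen field is found across the selected words. In the hardware this maximum search happens in parallel across all words; the word layout and field names below are assumed for illustration.

```python
# Each memory word is modelled as a dict of fields (the layout is an assumption).
words = [
    {"queue": "ready",   "priority": 3, "task": 11},
    {"queue": "ready",   "priority": 9, "task": 14},
    {"queue": "blocked", "priority": 7, "task": 17},
    {"queue": "ready",   "priority": 5, "task": 21},
]

def dcam_query(memory, match_field, match_value, max_field):
    """Select words whose match_field equals match_value, then return the
    selected word with the highest value in max_field."""
    selected = [w for w in memory if w[match_field] == match_value]  # CAM search
    if not selected:
        return None
    return max(selected, key=lambda w: w[max_field])                 # DCAM max search

# Scheduling example: the highest-priority task on the ready queue.
print(dcam_query(words, "queue", "ready", "priority"))   # task 14, priority 9
```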

01 Jun 1976
TL;DR: The results of an investigation of the applicability of paging and segmentation to memory management in modified UNIX operating systems on the PDP-11/50 minicomputer system at the Naval Postgraduate School Signal Processing and Display Laboratory are reported.
Abstract: This thesis reports the results of an investigation of the applicability of paging and segmentation to memory management in modified UNIX operating systems on the PDP-11/50 minicomputer system at the Naval Postgraduate School Signal Processing and Display Laboratory. Two memory managers are specifically considered: a partitioned segmented memory manager that was designed and implemented; and a simpler, segmented memory manager that was designed based on the performance of the partitioned segmented memory manager. Recommendations are given for future work.



Journal ArticleDOI
TL;DR: A minicomputer based CAMAC automation system for simultaneous operation of independent analytical instruments and an approach to application software development in a real-time multi-user environment is discussed.
Abstract: A minicomputer based CAMAC automation system for simultaneous operation of independent analytical instruments is described. An approach to application software development in a real-time multi-user environment is discussed. The necessity of using queued I/O for the CAMAC branch driver is emphasized. Various techniques for efficient main memory management are also outlined.