
Showing papers on "Memory management published in 1978"


Journal ArticleDOI
TL;DR: It is shown that prefetching all memory references in very fast computers can increase the effective CPU speed by 10 to 25 percent.
Abstract: Memory transfers due to a cache miss are costly. Prefetching all memory references in very fast computers can increase the effective CPU speed by 10 to 25 percent.

315 citations


Patent
01 Feb 1978
TL;DR: In this paper, a security system is disclosed which utilizes plural remote terminals for controlling access at plural locations throughout a secured area or building; each of these remote terminals is capable of independent functioning and includes a memory for storing plural independent identification numbers which define the personnel who will be granted access.
Abstract: A security system is disclosed which utilizes plural remote terminals for controlling access at plural locations throughout a secured area or building. Each of these remote terminals is capable of independent functioning, and includes a memory for storing plural independent identification numbers which define the personnel who will be granted access. These numbers stored in the terminal memories may be different from terminal to terminal, or may be uniform throughout the system, and may be the same as a list stored at a central processing location. Thus, access may be limited to the same group of individuals regardless of whether it is provided by a central memory list or a remote memory list. The remote memories provide total memory flexibility, so that the deletion of identification numbers from the list does not reduce the memory size. The memory, in addition to identification numbers, stores data defining real time access limitations for each of the individuals who will be granted access, so that flexibility in time of day access control is provided on a programmable basis.
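
A minimal sketch of the access-control logic the abstract describes: each terminal holds its own list of identification numbers together with per-person time-of-day limits, and grants access only when both checks pass. The class and field names here are illustrative assumptions, not taken from the patent.

```python
from datetime import time

class RemoteTerminal:
    """Illustrative model of a terminal that stores ID numbers plus
    per-ID time-of-day access windows, as the patent abstract describes."""

    def __init__(self):
        # id_number -> (earliest allowed time, latest allowed time)
        self.access_table = {}

    def add_id(self, id_number, start=time(0, 0), end=time(23, 59)):
        self.access_table[id_number] = (start, end)

    def remove_id(self, id_number):
        # Deleting an ID frees its entry without affecting the terminal's capacity.
        self.access_table.pop(id_number, None)

    def grant_access(self, id_number, now):
        """Return True only if the ID is stored and 'now' falls in its window."""
        window = self.access_table.get(id_number)
        if window is None:
            return False
        start, end = window
        return start <= now <= end

terminal = RemoteTerminal()
terminal.add_id(4711, start=time(8, 0), end=time(18, 0))   # daytime access only
print(terminal.grant_access(4711, time(9, 30)))   # True
print(terminal.grant_access(4711, time(22, 0)))   # False: outside time window
print(terminal.grant_access(9999, time(9, 30)))   # False: unknown ID
```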

83 citations


Book ChapterDOI
01 Jan 1978

40 citations


Patent
Lee T. Thorsrud1
30 Nov 1978
TL;DR: In this article, the authors present an approach for scaling addresses received by a memory module in a modular requestor-memory system in which standard memory modules may be of a discretely variable size and utilized in a plurality of positions in an overall contiguous memory addressing scheme.
Abstract: Apparatus for scaling addresses received by a memory module in a modular requestor-memory system in which standard memory modules may be of a discretely variable size and utilized in a plurality of positions in an overall contiguous memory addressing scheme. In particular, this scaling apparatus enables a modular memory which is only partially populated, i.e., only able to respond to a subset of the set of all addresses available, to be located in any one of several positions representing different addressing ranges. This is accomplished without modification of the memory module itself. The memory module knows its discrete capacity, or size, by virtue of the population of the memory array storage locations (array cards) contained therein. The memory module then uses this information to scale, or strip off, the appropriate number of bits from the gross address to allow addressing of the restricted number of memory locations present in the memory module.
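
A rough sketch of the bit-stripping idea: a module that knows its own capacity keeps only the low-order address bits it can actually decode, so the same module can sit at different base positions in the overall address space. The function below is an interpretation of the abstract, not the patent's circuitry.

```python
def scale_address(gross_address, module_capacity_words):
    """Keep only the low-order bits a module of the given capacity can decode.

    module_capacity_words is assumed to be a power of two, so the scaled
    address is simply gross_address modulo the capacity (a bit mask).
    """
    assert module_capacity_words & (module_capacity_words - 1) == 0
    return gross_address & (module_capacity_words - 1)

# A 16K-word module placed in an upper range of a larger address space:
print(hex(scale_address(0xC123, 16 * 1024)))  # 0x123 inside the module
```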

31 citations


Journal ArticleDOI
TL;DR: Five types of anomalous behaviour which may occur in paged virtual memory operating systems are defined and reference string examples of various anomalies are presented.
Abstract: Five types of anomalous behaviour which may occur in paged virtual memory operating systems are defined. One type of anomaly, for example, concerns the fact that, with certain reference strings and paging algorithms, an increase in mean memory allocation may result in an increase in fault rate. Two paging algorithms, the page fault frequency and working set algorithms, are examined in terms of their anomaly potential, and reference string examples of various anomalies are presented. Two paging algorithm properties, the inclusion property and the generalized inclusion property, are discussed and the anomaly implications of these properties presented.
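
The anomalies studied here concern the page fault frequency and working set policies; as a simpler, well-known illustration of the same counter-intuitive flavour, the sketch below reproduces Belady's anomaly for FIFO replacement, where giving a program more page frames increases its fault count. This is an illustrative stand-in, not one of the paper's own reference-string examples.

```python
from collections import deque

def fifo_faults(reference_string, frames):
    """Count page faults under FIFO replacement with a fixed number of frames."""
    resident = deque()          # pages currently in memory, oldest first
    faults = 0
    for page in reference_string:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.popleft()      # evict the oldest resident page
            resident.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults: more memory, more faults
```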

25 citations


Patent
11 Apr 1978
TL;DR: In this paper, the authors present an approach for expanding the direct addressing range and memory size of digital computers by managing memory device power and controlling memory device/processing unit interfaces by input/output (I/O) instructions.
Abstract: Apparatus for expanding the direct addressing range and memory size of digital computer means by managing memory device power and controlling memory device/processing unit interfaces by input/output (I/O) instructions. Memory modules are provided which are selectively powered in response to the (I/O) instructions to provide the aforenoted expansion.

25 citations


Patent
18 Apr 1978
TL;DR: In this article, an apparatus and arrangement is disclosed for controlling the sharing of an electronic memory between a number of memory users, at least one of which requires transfers of blocks of data on a high priority basis.
Abstract: An apparatus and arrangement is disclosed for controlling the sharing of an electronic memory between a number of memory users, at least one of which requires transfers of blocks of data on a high priority basis. Access to the memory is controlled by means of a modified time division multiplexing scheme whereby a set of time slots is assigned for performing memory accesses requested by high priority memory users, but, during times in which no high priority users are using the memory, these time slots may be used by other memory users in the order of pre-assigned priorities. Independent output data paths are provided for the respective high and low priority data transfers.
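
A toy arbiter illustrating the modified time-division scheme the abstract describes: certain time slots are reserved for the high-priority user, but when that user is idle the slot falls through to the highest-priority pending low-priority requester. The slot layout and priority encoding below are assumptions made for illustration.

```python
def arbitrate(slot, high_priority_wants, low_priority_requests,
              reserved_slots=(0, 2)):
    """Pick the memory user for one time slot.

    slot: index of the current time slot within the repeating frame.
    high_priority_wants: True if the block-transfer user needs this slot.
    low_priority_requests: dict user_name -> pre-assigned priority (lower wins).
    reserved_slots: slots set aside for the high-priority user (assumed layout).
    """
    if slot in reserved_slots and high_priority_wants:
        return "high_priority"
    if low_priority_requests:
        return min(low_priority_requests, key=low_priority_requests.get)
    # A reserved slot with an idle high-priority user and no other requests
    # simply goes unused.
    return None

print(arbitrate(0, True,  {"cpu": 1, "dma": 2}))   # high_priority
print(arbitrate(0, False, {"cpu": 1, "dma": 2}))   # cpu reclaims the idle slot
print(arbitrate(1, True,  {"dma": 2}))             # dma: slot 1 is not reserved
```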

24 citations


Journal ArticleDOI
TL;DR: An optimization criterion is developed by which average access time, i.e., memory system delay, is minimized under a cost constraint for a hierarchy with given memory sizes and access probabilities.
Abstract: This paper presents an analytical study of speed-cost tradeoffs in memory hierarchy design. It develops an optimization criterion by which average access time, i. e., memory system delay, is minimized under a cost constraint for a hierarchy with given memory sizes and access probabilities. Using a power function assumption relating speed and cost of memory units, it is shown that an optimized hierarchy has the property of balanced cost and delay distributions, in that each memory unit makes the same percentage contribution to memory system cost as it makes to average system access delay. Using the same assumption, a lower bound on average access time is developed, showing that access time is roughly related to a cube-root averaging of access probabilities. These results provide useful tools for developing memory hierarchy design strategies and for evaluating data placement algorithms.
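
The balanced-distribution property stated in the abstract can be checked numerically. A small sketch, assuming a per-level cost of the form size x (access time)^(-alpha) as a stand-in for the paper's power-function cost-speed relation: minimizing average access time under a total-cost constraint yields access times for which each level's share of total cost equals its share of total delay. The sizes, probabilities, exponent and budget below are made-up inputs.

```python
# Balanced cost/delay property for a memory hierarchy, assuming per-level
# cost = size * t**(-alpha); all numbers are illustrative, not from the paper.

alpha = 0.5                      # assumed cost-speed exponent
sizes = [64, 4096, 262144]       # relative level sizes (cache, main, backing)
probs = [0.90, 0.09, 0.01]       # access probabilities per level
budget = 1000.0                  # total allowed cost

# The Lagrange conditions give t_i proportional to (sizes[i]/probs[i])**(1/(1+alpha));
# the common scale factor is fixed by the cost constraint.
shape = [(s / p) ** (1.0 / (1.0 + alpha)) for s, p in zip(sizes, probs)]
cost_of_shape = sum(s * t ** (-alpha) for s, t in zip(sizes, shape))
scale = (cost_of_shape / budget) ** (1.0 / alpha)
times = [t * scale for t in shape]

costs  = [s * t ** (-alpha) for s, t in zip(sizes, times)]
delays = [p * t for p, t in zip(probs, times)]

for i, (c, d) in enumerate(zip(costs, delays)):
    print(f"level {i}: cost share {c / sum(costs):.3f}, "
          f"delay share {d / sum(delays):.3f}")   # shares match per level
```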

16 citations


Patent
Frederick John Aichelmann1
26 Dec 1978
TL;DR: In this article, the authors describe a computer paging store memory utilizing line addressable random access memories (LARAM) including charge coupled device (CCD) shift registers, in which data is read out of the memory for utilization in a block storage memory without loss of refresh time due to the refresh of individual CCD shift registers.
Abstract: A computer paging store memory utilizing line addressable random access memories (LARAM) including charge coupled device (CCD) shift registers in which data is read out of the memory for utilization in a block storage memory without loss of refresh time due to the refresh of individual CCD shift registers. The memory is organized as a number of parallel-connected memory storage units, each of which includes a separate interface logic, a refresh control and a number of memory array units, each of which in turn is constructed of LARAM devices, each of which must be refreshed within a predetermined time interval. Data is normally read out from the LARAM devices one at a time in sequence. During the readout operation, a detection is continuously made which determines whether the next LARAM device in sequence must be refreshed during the subsequent readout time period. If a refresh operation is required, the selection sequence is reordered enabling the refreshing operation to occur while the data transfer comes from another element.
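
A sketch of the reordering idea in the abstract: readout normally walks the LARAM devices in sequence, but if the next device in line is due for refresh, the sequence is reordered so the transfer proceeds from another device while that one refreshes. The refresh bookkeeping and device naming are invented for illustration.

```python
def plan_readout(devices, needs_refresh):
    """Produce a readout order that lets devices refresh without stalling.

    devices: device ids in their normal readout sequence.
    needs_refresh: set of device ids due for refresh in the next period.
    A device that is due is deferred (and assumed to refresh meanwhile), so
    the block transfer never waits on a refresh cycle.
    """
    order, deferred = [], []
    for dev in devices:
        if dev in needs_refresh:
            deferred.append(dev)      # refresh now, read it later
        else:
            order.append(dev)
    return order + deferred

print(plan_readout([0, 1, 2, 3], needs_refresh={1}))  # [0, 2, 3, 1]
```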

14 citations


Proceedings ArticleDOI
23 Aug 1978
TL;DR: A machine-independent FORTRAN implementation of the GSPC proposed standard is presented, and several CORE system design issues are discussed from the implementor's viewpoint; a breakdown of the functional modules reinforces the portability aspects.
Abstract: A machine-independent FORTRAN implementation of the GSPC proposed standard is presented. DIGRAF (Device Independent GRAphics from FORTRAN) has been designed to closely parallel level-3 of the 'CORE' system. The present implementation allows portability to any computer with a FORTRAN compiler and a word length of at least 16 bits. Several CORE system design issues are discussed from the implementor's viewpoint. A breakdown of the functional modules reinforces the portability aspects. Special features of the user interface are presented. A storage structure for retained segments is presented with a review of the memory management alternatives. The device-dependent interface for two common classes of devices is discussed. Finally, the design and data structure techniques used to implement several CORE functions are presented.

12 citations


Patent
05 Dec 1978
TL;DR: In this article, the authors propose an apparatus for storing information to be used by a data processing system ch employs a mini computer, where the mini computer is incapable of addressing more than a specified maximum number of discrete storage locations.
Abstract: Apparatus for storing information to be used by a data processing system ch employs a mini computer, where the mini computer is incapable of addressing more than a specified maximum number of discrete storage locations. The apparatus includes a plurality of memory modules for providing a number of information storage locations which exceeds the specified maximum number of the mini computer, each memory module including a plurality of memory sections, each memory section for providing a number of information storage locations which does not exceed the specified maximum number. The apparatus further includes a component for generating memory select signals, each of the memory select signals corresponding to a different one of the memory sections. An additional component, which is responsive to the memory select signals, is provided for coupling the data and address buses of the mini computer to a given one of the memory sections when the memory select signal corresponding to the given memory section has been generated, whereby the mini computer is capable of selectively accessing any one of the memory sections.
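
A bank-switching sketch of the scheme the abstract outlines: the total store is split into sections no larger than the mini computer's addressing limit, a select signal picks one section, and the computer's addresses then fall inside that section. Names and sizes are illustrative assumptions.

```python
class BankedMemory:
    """Illustrative banked store: each section fits the CPU's address range."""

    def __init__(self, num_sections, section_size):
        self.section_size = section_size
        self.sections = [bytearray(section_size) for _ in range(num_sections)]
        self.selected = 0                      # currently coupled section

    def select(self, section):
        # Corresponds to generating the memory-select signal for one section.
        self.selected = section

    def read(self, address):
        assert address < self.section_size     # CPU can only form small addresses
        return self.sections[self.selected][address]

    def write(self, address, value):
        assert address < self.section_size
        self.sections[self.selected][address] = value

# A CPU limited to 32K addresses accessing 128K of storage via four sections:
mem = BankedMemory(num_sections=4, section_size=32 * 1024)
mem.select(3)
mem.write(0x0010, 0xAB)
print(hex(mem.read(0x0010)))   # 0xab, read back from section 3
```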

Proceedings ArticleDOI
01 Oct 1978
TL;DR: It is shown how circuits for the addition of several serial binary numbers can be obtained as a combination of parallel counters and memory cells.
Abstract: It is shown how circuits for the addition of several serial binary numbers can be obtained as a combination of parallel counters and memory cells. The various schemes belong to one of three different classes, characterized by the way in which carries, produced by parallel counters, are treated. A comparison is made between the various schemes, in terms of speed and complexity.

Patent
15 May 1978
TL;DR: In this paper, the contents of the memory are protected by transferring, at high speed, the contents of a specified region of the main memory to the auxiliary device when a power-off command is given by the operator or when the AC power supply is cut off.
Abstract: PURPOSE: To ensure protection of the contents of the memory by transferring, at high speed, the contents of the specified region in the main memory to the auxiliary device when the power supply breaking command is given by the operator or when the AC power supply is cut off. CONSTITUTION: Main memory 1, which stores the information, is provided along with auxiliary memory device 3, which performs information transfer with memory 1; arithmetic controller 2, which reads information out of memory 1 to perform operation and control; and power circuit 4, which supplies power to memory 1 and controller 2 respectively. A high-speed static memory connected to a separate power source, such as a battery, is used for device 3. Thus, when the AC power source supplied to circuit 4 is cut off, controller 2 is signalled by ON/OFF state signal 13 of circuit 4, and output 14 of circuit 4 is maintained for the delay during which the information of the prescribed region in memory 1 is transferred and shunted to device 3 via access line 10. COPYRIGHT: (C)1979,JPO&Japio
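
A procedural sketch of the shutdown sequence the abstract describes: when the power-off command or an AC failure is signalled, the specified region of main memory is copied to the battery-backed static store before power is finally dropped. Everything here is simulated; nothing maps to real hardware registers or the patent's circuitry.

```python
def on_power_event(main_memory, aux_memory, region_start, region_len):
    """Copy the protected region of main memory into the battery-backed
    auxiliary store before power is removed (simulation only)."""
    aux_memory[:region_len] = main_memory[region_start:region_start + region_len]
    return aux_memory

main_memory = bytearray(range(256))       # simulated main memory contents
aux_memory = bytearray(64)                # simulated static memory on battery
on_power_event(main_memory, aux_memory, region_start=16, region_len=64)
print(aux_memory[:4])                     # bytearray(b'\x10\x11\x12\x13')
```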

Proceedings ArticleDOI
23 Aug 1978
TL;DR: BGRAF2 is a real-time interactive 2D graphics language whose supporting system contends with an unusual combination of features: timing, events, parallelism, image manipulation, user interaction and procedural structures, a combination that creates many unpredictable, interrelated tasks competing for execution.
Abstract: BGRAF2 is a real-time interactive 2D graphics language. Its supporting system contends with an unusual combination of features: timing, events, parallelism, image manipulation, user interaction and procedural structures. This combination creates within the system many unpredictable interrelated tasks competing for execution. A BGRAF2 program is compiled into an object module consisting of a sequence of pure code blocks, tasks, and a set of data blocks. The real-time environment is a hierarchical structure, where the highest level is a Scheduler, and the next level is composed of the object module and five additional processors: Graphics Processor, Control Processor, Input-Output Processor, Real-Time Processor and Memory Manager. The Scheduler is an abstract monitor responsible for scheduling tasks in accordance with a multi-level priority from a multi-queue scheme.
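
A minimal sketch of a multi-level, multi-queue scheduler of the kind the abstract attributes to BGRAF2's Scheduler: one FIFO queue per priority level, with the dispatcher always serving the highest non-empty level. The level count and task representation are assumptions, not details from the paper.

```python
from collections import deque

class MultiLevelScheduler:
    """Toy multi-queue scheduler: level 0 is the highest priority."""

    def __init__(self, levels=3):
        self.queues = [deque() for _ in range(levels)]

    def submit(self, task, level):
        self.queues[level].append(task)

    def dispatch(self):
        """Return the next task from the highest-priority non-empty queue."""
        for queue in self.queues:
            if queue:
                return queue.popleft()
        return None

sched = MultiLevelScheduler()
sched.submit("redraw segment", level=1)
sched.submit("handle input event", level=0)
sched.submit("background load", level=2)
print(sched.dispatch())   # handle input event
print(sched.dispatch())   # redraw segment
print(sched.dispatch())   # background load
```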

Journal ArticleDOI
D.L. Boyd1
TL;DR: The appearance in the commercial marketplace of online auxiliary memory devices with very large capacities has resulted in several new and interesting problems including new applications of the devices, the need for new data structures organized about the physical characteristics of the MSF, problems in performance, enhancement of automatic device and data control, and the continuing need for more memory protection capability.
Abstract: The appearance in the commercial marketplace of online auxiliary memory devices with very large capacities has resulted in several new and interesting problems. These problems include new applications of the devices (generally called mass storage facilities or MSF's) for storage of on-line data, the need for new data structures organized about the physical characteristics of the MSF, problems in performance, enhancement of automatic device and data control, enhancement of automatic memory management systems, increased concern for data integrity, and the continuing need for more memory protection capability.

Patent
P. Bonyhard1
15 Nov 1978
TL;DR: In this paper, a magnetic bubble memory with a direct propagation path between a bubble generator and a detector is presented. A control circuit stores indications of the current state of the memory and the address of presently accessed data in the path in response to a power failure signal.
Abstract: A magnetic bubble memory herein includes a direct propagation path between a bubble generator and a detector. A control circuit is adapted to store indications of the current state of the memory and the address of presently accessed data in the path responsive to a power failure signal. Portions of the memory are organized in a familiar major, minor mode, data from two major loops being replicated into the direct path. The arrangement exhibits improved access times, improved data rates and is secure from power failure problems. Moreover, the memory organization permits the realization of large capacity chips without requiring block replication.

Proceedings ArticleDOI
01 Oct 1978
TL;DR: This paper examines the memory hierarchy both overall and with respect to its components in an attempt to identify research problems and project future directions for both research and development.
Abstract: The memory hierarchy is usually the largest identifiable part of a computer system and making effective use of it is critical to the operation and use of the system. We consider the levels of such a memory hierarchy and describe the state of the art and likely directions for both research and development. Algorithmic and logical features of the hierarchy not directly associated with specific components are also discussed. Among the problems we believe to be the most significant are the following: (a) evaluate the effectiveness of gap filler technology as a level of storage between main memory and disk, and if it proves to be effective, determine how/where it should be used, (b) develop algorithms for the use of mass storage in a large computer system and (c) determine how cache memories should be implemented in very large, fast multiprocessor systems.

Proceedings ArticleDOI
03 Apr 1978
TL;DR: The performance of various memory configurations for parallel-pipelined computers which execute multiple instruction streams on multiple data streams is investigated; design considerations are discussed and an example is given to illustrate possible design options.
Abstract: The performance of various memory configurations for parallel-pipelined computers which execute multiple instruction streams on multiple data streams is investigated. For a parallel-pipelined processor of order (s, p), which consists of p parallel processors, each of which is a pipelined processor with s degrees of multiprogramming, there can be up to s·p memory requests in each instruction cycle. The memory, which consists of N (= 2^n) identical memory modules, is organized such that there are l (= 2^i) lines and m (= 2^(n-i)) modules per line, where each module is characterized by an address cycle (address hold time) and memory cycle of a and c time units respectively. The performance, which is affected by the memory interference problem, is evaluated as a function of the memory configuration (l, m), the module characteristics (a, c) and the processor order (s, p). Design considerations are discussed and an example is given to illustrate possible design options.
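
A small helper that spells out the configuration arithmetic in the abstract: N = 2^n modules arranged as l = 2^i lines of m = 2^(n-i) modules each, serving up to s·p requests per instruction cycle. It only enumerates the possible (l, m) splits; the interference analysis itself is beyond this sketch.

```python
def configurations(n, s, p):
    """List the (lines, modules_per_line) splits of N = 2**n memory modules.

    s, p: pipeline depth and number of processors, so up to s*p requests
    can arrive per instruction cycle (per the abstract).
    """
    max_requests = s * p
    return [(2 ** i, 2 ** (n - i), max_requests) for i in range(n + 1)]

# A processor of order (s, p) = (4, 2) with N = 2**3 = 8 modules:
for lines, per_line, reqs in configurations(3, 4, 2):
    print(f"l={lines:2d} lines x m={per_line:2d} modules, "
          f"up to {reqs} requests/cycle")
```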

Journal ArticleDOI
TL;DR: A compile time storage allocation scheme is given, which determines the relative address within the memory segment of a process for the activation records of all procedures called by the process to facilitate the generation of an efficient run-time code.
Abstract: This paper discusses the problem of allocating storage for the activation records of procedure calls within a system of parallel processes. A compile time storage allocation scheme is given, which determines the relative address within the memory segment of a process for the activation records of all procedures called by the process. This facilitates the generation of an efficient run-time code. The allocation scheme applies to systems in which data and procedures can be shared among several processes. However, recursive procedure calls are not supported.
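
A compile-time layout sketch in the spirit of the abstract: with recursion excluded, the call graph is acyclic, so each procedure's activation record can be given a fixed offset inside the process segment, with records that can be live at the same time (caller and callee) kept disjoint. The offset rule below, offset(callee) = max over callers of offset(caller) + size(caller), is one simple interpretation, not the paper's exact scheme.

```python
def assign_offsets(sizes, calls):
    """Fixed activation-record offsets for a recursion-free call graph.

    sizes: procedure -> activation record size (in words).
    calls: procedure -> list of procedures it may call.
    A callee is placed just past the deepest caller that can be active
    when it runs, so simultaneously live records never overlap.
    """
    offsets = {}

    def offset(proc):
        if proc not in offsets:
            callers = [p for p, callees in calls.items() if proc in callees]
            offsets[proc] = max(
                (offset(c) + sizes[c] for c in callers), default=0)
        return offsets[proc]

    for proc in sizes:
        offset(proc)
    return offsets

sizes = {"main": 8, "parse": 12, "emit": 6, "error": 4}
calls = {"main": ["parse", "emit"], "parse": ["error"],
         "emit": ["error"], "error": []}
print(assign_offsets(sizes, calls))
# {'main': 0, 'parse': 8, 'emit': 8, 'error': 20}
```

Note that "parse" and "emit" share an offset: they are never active simultaneously, so their records may overlap, while "error" is placed past the deeper of its two possible callers.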

Patent
16 Dec 1978
TL;DR: In this paper, a built-in check on the message stored in the solid state memory in order to verify compliance with statutory regulations concerning operation of telephone answering machines is presented, where the memory is connected via a digital to analog converter and an integrator to a display unit.
Abstract: The circuit provides a built in check on the message stored in the solid state memory in order to verify compliance with statutory regulations concerning operation of telephone answering machines. The memory is connected via a digital to analog converter and an integrator to a display unit. This gives the extent to which the memory is filled in relation to elapsed time. It is thus of value when recording the message which is to be relayed to an incoming call. It saves having to check the recording with a stop watch. The display can also be used to indicate the extent to which a message has been replayed. Typically the display comprises an array of light emitting diodes which come on in a sequence depending on memory content and time.

Proceedings ArticleDOI
13 Nov 1978
TL;DR: The argument why it is unlikely that anyone will find a cheaper nonlookahead memory policy that delivers significantly better performance is outlined.
Abstract: A program's working set is the collection of pages (or segments) recently referenced. This concept has led to efficient methods for measuring a program's intrinsic memory demand; it has assisted in understanding program behavior; and it has been used as the basis of optimal multiprogrammed memory management. This paper outlines the argument why it is unlikely that anyone will find a cheaper nonlookahead memory policy that delivers significantly better performance. This paper is based on a longer paper that presents the arguments in greater detail [DENN78d].


Proceedings Article
01 Sep 1978
TL;DR: In this article, a new approach to the concept of content addressable memory (CAM) is presented, which is suitable for integration in large-scale LSI-based systems.
Abstract: The increasing capabilities of today's LSI technology have raised the interest in complex building blocks (RAM, ROM, microprocessor, etc.) for use in large electronic systems. We believe that a content addressable memory (CAM, refs 1-3) is such a standard function too. Much information processing is performed by means of tables. By using a CAM, table searching can be done efficiently in the memory, thus moving some software routines to the hardware level. Yet there is no generally accepted standard for a CAM. The existing CAM chips generally contain only an array of memory cells without any control circuitry. Moreover, reading and writing are done in the same way as in a RAM: the information must be accompanied by the address of the physical location to be used. In this paper we present a new approach to the concept of a CAM, suitable for integration.
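
A behavioural sketch of content-addressable lookup as contrasted with RAM-style addressing in the abstract: instead of supplying a physical location, the user presents a search key and every stored entry is compared against it (in hardware, in parallel; here, modelled sequentially). The tiny class is only a software model of the idea, not the paper's proposed CAM.

```python
class ContentAddressableMemory:
    """Software model of a CAM: lookup by content, not by address."""

    def __init__(self, capacity):
        self.words = [None] * capacity     # None marks an empty location

    def write(self, value):
        """Store a value in any free location (no address supplied by the user)."""
        slot = self.words.index(None)      # raises ValueError if the CAM is full
        self.words[slot] = value
        return slot

    def match(self, key):
        """Return the locations whose stored content equals the search key."""
        return [i for i, w in enumerate(self.words) if w == key]

cam = ContentAddressableMemory(capacity=8)
cam.write(("alice", 42))
cam.write(("bob", 7))
print(cam.match(("bob", 7)))      # [1]
print(cam.match(("carol", 99)))   # []
```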

Proceedings ArticleDOI
A.V. Pohm1
13 Nov 1978
TL;DR: Examples are given in which the diminished cost of memory impacts the design of hierarchies and dictates the use of high level languages for small system developments with limited production volume.
Abstract: The rapid decreases in the cost of memory and logic are increasing the variety of economically viable systems and changing the relative importance of a variety of design factors. Examples are given in which the diminished cost of memory impacts the design of hierarchies and dictates the use of high level languages for small system developments with limited production volume. From an educational point of view, computer engineers and computer scientists will be required to have an enlarged breadth of expertise in the design of many information processing systems.