
Showing papers on "Memory management published in 1984"


Proceedings ArticleDOI
01 Jun 1984
TL;DR: This paper considers the changes necessary to permit a relational database system to take advantage of large amounts of main memory; it evaluates AVL vs B+-tree access methods, hash-based query processing strategies vs sort-merge, and studies recovery issues when most or all of the database fits in main memory.
Abstract: With the availability of very large, relatively inexpensive main memories, it is becoming possible to keep large databases resident in main memory. In this paper we consider the changes necessary to permit a relational database system to take advantage of large amounts of main memory. We evaluate AVL vs B+-tree access methods for main memory databases, hash-based query processing strategies vs sort-merge, and study recovery issues when most or all of the database fits in main memory. As expected, B+-trees are the preferred storage mechanism unless more than 80--90% of the database fits in main memory. A somewhat surprising result is that hash-based query processing strategies are advantageous for large memory situations.

922 citations
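As a concrete illustration of the hash-based query processing strategy the paper finds advantageous for memory-resident data, here is a minimal in-memory hash join sketch (relation contents and column names are invented for the example):

```python
# Sketch of a main-memory hash join: build a hash table on the smaller
# relation, then probe it with the larger one. Illustrative only.

def hash_join(build_rows, probe_rows, build_key, probe_key):
    """Join two in-memory relations by hashing the build side."""
    table = {}
    for row in build_rows:                       # build phase: O(|build|)
        table.setdefault(row[build_key], []).append(row)
    result = []
    for row in probe_rows:                       # probe phase: O(|probe|)
        for match in table.get(row[probe_key], ()):
            result.append({**match, **row})
    return result

dept = [{"dept": 1, "title": "x"}]
emp = [{"dept": 1, "name": "a"}, {"dept": 2, "name": "b"}]
print(hash_join(dept, emp, "dept", "dept"))
# → [{'dept': 1, 'title': 'x', 'name': 'a'}]
```

Unlike sort-merge, neither input needs to be ordered, which is part of why hashing wins once both relations fit in memory.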


Journal ArticleDOI
Babb
TL;DR: The Large-Grained Data Flow (LGDF) model, as mentioned in this paper, is a compromise between the data flow and traditional approaches; traditional data flow architectures and languages take a much finer grained view of system execution.
Abstract: Research in data flow architectures and languages, a major effort for the past 15 years, has been motivated mainly by the desire for computational speeds that exceed those possible with current computer architectures. The computational speedup offered by the data flow approach is possible because all program instructions whose input values have been previously computed can be executed simultaneously. There is no notion of a program counter or of global memory. Machine instructions are linked together in a network so that the result of each instruction execution is fed automatically into appropriate inputs of other instructions. Since no side-effects can occur as a result of instruction execution, many instructions can be active simultaneously. Although data flow concepts are attractive for providing an alternative to traditional computer architecture and programming styles, to date few data flow machines have been built, and data flow programming languages are not widely accepted. This article describes a compromise between the data flow and traditional approaches. The approach is called the large-grain data flow, or LGDF, to distinguish it from traditional data flow architectures and languages, which take a much finer grained view of system execution. Data flow machine instructions are typically at the level of an arithmetic operator and two operands. The LGDF model usually deals with much larger data-activated "program chunks," corresponding to 5 to 50 (or even more) statements in a higher level programming language. Another difference in the model described here is that global memories can be shared by a specified set of programs, although access contention to shared memories is still managed in a data-flow-like manner. A fundamental concept of the LGDF model is that programs are viewed as comprising systems of data-activated processing units.
Using a coherent hierarchy of data flow diagrams, complex systems are specified as compositions of simpler systems. The lowest level programs can be written in almost any language (Fortran is used here). Programs specified in this way have been implemented efficiently on both sequential (single instruction, single data) and vector (single instruction, multiple data), as well as true parallel (multiple instruction, multiple data) architectures. The steps involved in modeling and implementing a Fortran program using large-grain data flow techniques are: * Draw Data Flow Diagrams. Create a hierarchical, consistent set of system data flow diagrams that express the logical data dependencies of the program fragments modeled. * Create Wirelist. Encode the data flow dependencies …

165 citations
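The data-activated "program chunk" execution model described above can be sketched as a tiny scheduler: a chunk fires once all of its named inputs have been produced. This is a hypothetical illustration; the article's wirelist encoding and shared-memory contention management are not reproduced here.

```python
# Minimal sketch of large-grain data-flow scheduling: each chunk is
# (input names, output names, function); a chunk runs when its inputs exist.

def run_lgdf(chunks, initial):
    """Execute data-activated chunks until none can fire."""
    data = dict(initial)
    pending = list(chunks)
    while pending:
        ready = [c for c in pending if all(i in data for i in c[0])]
        if not ready:
            break                                  # missing input: stop
        for inputs, outputs, fn in ready:          # ready chunks could run in parallel
            data.update(fn({i: data[i] for i in inputs}))
            pending.remove((inputs, outputs, fn))
    return data

chunks = [
    (("a", "b"), ("s",), lambda d: {"s": d["a"] + d["b"]}),
    (("s",), ("t",), lambda d: {"t": 2 * d["s"]}),
]
print(run_lgdf(chunks, {"a": 1, "b": 2}))
# → {'a': 1, 'b': 2, 's': 3, 't': 6}
```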


Journal ArticleDOI
TL;DR: In this article, the authors argue that such a machine, even with a relatively slow processor, can outperform all other super-computers on memory bound computations, and show how it can lead to reduced memory access times and higher reliability.
Abstract: This paper argues the case for a computer with massive amounts of primary storage, on the order of tens of billions of bytes. We argue that such a machine, even with a relatively slow processor, can outperform all other super-computers on memory bound computations. This machine would be simple to program. In addition, it could lead to new and highly efficient programs which traded the available space for running time. We present a novel architecture for such a machine, and show how it can lead to reduced memory access times and higher reliability.

89 citations


Patent
10 Feb 1984
TL;DR: In this article, a digital computer system adapted for executing a set of instructions including at least one encrypted instruction is presented, where the program stored in the main memory may be executed by the central processing unit.
Abstract: A digital computer system adapted for executing a set of instructions including at least one encrypted instruction. The system includes a main memory for storing the instructions, a cache memory for storing selected instructions with a relatively fast access time, a selectively operable decryption system for decrypting selected encrypted instructions from the main memory, and a central processing unit. The system is adapted so that the program stored in the main memory may be executed by the central processing unit. To this end, the encrypted instructions are decrypted only during execution, when those instructions are transferred from the main memory to the cache memory in response to requests by the central processing unit, so that plaintext versions of those encrypted instructions exist only in the cache memory.

69 citations
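The decrypt-on-fetch idea can be sketched as follows: ciphertext lives in main memory, and plaintext is created only when a cache miss pulls an instruction into the cache. The XOR "cipher" and the class structure are illustrative stand-ins, not the patent's actual design.

```python
# Toy model: main memory holds encrypted instruction words; the cache
# decrypts on a miss, so plaintext never exists outside the cache.

KEY = 0x5A

def decrypt(word):
    return word ^ KEY                    # toy cipher, for illustration only

class DecryptingCache:
    def __init__(self, main_memory):
        self.main = main_memory          # encrypted instructions
        self.cache = {}                  # plaintext, filled on demand

    def fetch(self, addr):
        if addr not in self.cache:       # miss: decrypt on the way in
            self.cache[addr] = decrypt(self.main[addr])
        return self.cache[addr]

mem = {0: 0x10 ^ KEY, 1: 0x20 ^ KEY}     # program stored encrypted
cpu_cache = DecryptingCache(mem)
print(cpu_cache.fetch(0), cpu_cache.fetch(1))   # → 16 32
```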


Journal ArticleDOI
TL;DR: A representation for terms is described that is comparable in efficiency to the best known, and yet supports arbitrary orders of tree search.
Abstract: A Prolog interpreter can be viewed as a process that searches a tree in order to produce the sets of terms at certain successful leaves. It does this by constructing the set of terms for each node in the tree. Moving from one node to another requires (re)construction of the set of terms at that node. The choice of representation of sets of terms influences the kind of tree search that can be supported.

59 citations


Patent
Andrew G. Heninger
27 Aug 1984
TL;DR: A 32-bit central processing unit (CPU) has a six-stage pipeline architecture with an instruction and data cache memory and a memory management unit, all provided on a single, integrated circuit (I.C.) chip as mentioned in this paper.
Abstract: A 32-bit central processing unit (CPU) having a six-stage pipeline architecture with an instruction and data cache memory and a memory management unit, all provided on a single, integrated circuit (I.C.) chip. The CPU also contains means for controlling the operation of a separate I.C. chip co-processor that is dedicated to performing specific functions at a very high rate of speed, commonly called an extended processing unit (EPU). The EPU is provided with interface circuits that generate control signals and communicate them to the controlling CPU.

44 citations


Patent
04 Jun 1984
TL;DR: In this article, a demand paging scheme for a shared memory processing system that uses paged virtual memory addressing and includes a plurality of address translation buffers (ATBs) is presented.
Abstract: Disclosed is a demand paging scheme for a shared memory processing system that uses paged virtual memory addressing and includes a plurality of address translation buffers (ATBs). Page frames of main memory that hold pages being considered for swapping from memory are sequestered and flags, one corresponding to each ATB in the system, are cleared. Each time an ATB is flushed, its associated flag is set. Setting of all the flags indicates that the address translation information of pages held by selected sequestered page frames does not appear in any ATB and that the selected pages may be swapped from main memory.

41 citations
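The flag scheme disclosed above amounts to: a sequestered page may be swapped only after every ATB has been flushed at least once since sequestering, so no ATB can still hold its translation. A minimal sketch (class and method names are invented):

```python
# One flag per address translation buffer (ATB): cleared when a page frame
# is sequestered, set when that ATB is flushed. All flags set ⇒ no ATB can
# still cache the sequestered page's translation, so swapping is safe.

class PageSequester:
    def __init__(self, num_atbs):
        self.flags = [False] * num_atbs

    def sequester(self):
        self.flags = [False] * len(self.flags)   # restart the protocol

    def atb_flushed(self, atb_index):            # hook called on each ATB flush
        self.flags[atb_index] = True

    def safe_to_swap(self):
        return all(self.flags)

seq = PageSequester(num_atbs=3)
seq.sequester()
seq.atb_flushed(0); seq.atb_flushed(2)
print(seq.safe_to_swap())   # False: ATB 1 has not flushed yet
seq.atb_flushed(1)
print(seq.safe_to_swap())   # True: translation cannot survive anywhere
```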


Patent
24 Dec 1984
TL;DR: In this paper, a prefetching mechanism for a memory hierarchy which includes at least two levels of storage, with L1 being a high-speed low-capacity memory, and L2 being a low-speed high capacity memory, is presented.
Abstract: A prefetching mechanism for a memory hierarchy which includes at least two levels of storage, with L1 being a high-speed low-capacity memory and L2 being a low-speed high-capacity memory, with the units of L2 and L1 being blocks and sub-blocks respectively, and with each block containing several sub-blocks in consecutive addresses. Each sub-block is provided an additional bit, called an r-bit, which indicates that the sub-block has been previously stored in L1 when the bit is 1, and has not been previously stored in L1 when the bit is 0. Initially, when a block is loaded into L2, each of the r-bits in the sub-blocks is set to 0. When a sub-block is transferred from L1 to L2, its r-bit is then set to 1 in the L2 block, to indicate its previous storage in L1. When the CPU references a given sub-block which is not present in L1 and has to be fetched from L2 to L1, the remaining sub-blocks in this block having r-bits set to 1 are prefetched to L1. This prefetching of the other sub-blocks having r-bits set to 1 results in a more efficient utilization of the L1 storage capacity and in a higher hit ratio.

40 citations
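A sketch of the r-bit policy: on an L1 miss, fetch the missing sub-block and also prefetch the sibling sub-blocks whose r-bits are 1, i.e. those that were previously resident in L1. The data structures below are illustrative, not the patent's hardware.

```python
# Two-level model: L2 keeps one r-bit per sub-block; eviction from L1 sets
# the r-bit; a miss prefetches siblings whose r-bits are set.

class TwoLevelMemory:
    def __init__(self, blocks):
        # l2: block -> {sub_block_index: r_bit}; r-bits are 0 on block load
        self.l2 = {b: {i: 0 for i in range(n)} for b, n in blocks.items()}
        self.l1 = set()                      # (block, sub) pairs resident in L1

    def evict_to_l2(self, block, sub):
        self.l1.discard((block, sub))
        self.l2[block][sub] = 1              # remember prior L1 residence

    def reference(self, block, sub):
        if (block, sub) in self.l1:
            return "hit"
        self.l1.add((block, sub))            # demand fetch
        for s, rbit in self.l2[block].items():
            if rbit and (block, s) not in self.l1:
                self.l1.add((block, s))      # prefetch previously-used siblings
        return "miss"

m = TwoLevelMemory({"B": 4})
m.reference("B", 0); m.reference("B", 1)
m.evict_to_l2("B", 0); m.evict_to_l2("B", 1)
print(m.reference("B", 0))   # miss, but sub-block 1 comes along too
print(m.reference("B", 1))   # hit, thanks to the r-bit prefetch
```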


Patent
11 Jul 1984
TL;DR: In this article, a plurality of rows of content addressable memory cells (32), a corresponding plurality of random access memory cells(35) and another plurality of control circuits (37) are coupled to both the content and random access cells.
Abstract: An apparatus that translates virtual memory addresses into physical memory addresses. In particular, this apparatus comprises a plurality of rows of content addressable memory cells (32), a corresponding plurality of random access memory cells (35) and another corresponding plurality of control circuits (37). The content addressable memory cells (32) store the virtual memory addresses and the random access memory cells (35) store the physical memory addresses. The control circuits (37) are coupled to both the content addressable and the random access memory cells (32, 35) and are disposed for controlling the operation of the apparatus.

35 citations


Patent
21 Jun 1984
TL;DR: In this paper, the authors present a method for first checking the memory contents when the power-off command has been activated, and then storing the results of the checking in a specific area of the RAM.
Abstract: In an electronic apparatus, for example, a programmable calculator, a portable or handheld computer, a memory module, or a data collector, incorporating a battery backup RAM, the present invention provides means for first checking the memory contents when the power-OFF command has been activated, and then causes the results of the checking to be stored in a specific area of the RAM. When the memory contents checking command has been activated, the system rechecks the memory contents to see if the memory contents have varied. The invention securely confirms whether the memory contents of either the effective programs or data have been correctly backed up when either replacing the battery or during storage of an electronic apparatus that uses the battery backup RAM. This is particularly effective for portable or handheld programmable computers.

32 citations
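The power-off check above can be sketched as: compute a checksum of the battery-backed RAM at power-off, store it in a reserved area of the same RAM, and recompute it later to confirm the contents survived. The 8-bit additive checksum and memory layout here are illustrative assumptions, not the patent's method.

```python
# Battery-backup RAM integrity check: checksum stored at power-off in a
# reserved byte, recomputed on demand to detect any change in contents.

RAM_SIZE, CHECK_AREA = 256, 255            # last byte reserved for the result

def checksum(ram):
    return sum(ram[:CHECK_AREA]) & 0xFF    # simple 8-bit sum (hypothetical)

def power_off(ram):
    ram[CHECK_AREA] = checksum(ram)        # store result in the specific area

def recheck(ram):
    return ram[CHECK_AREA] == checksum(ram)  # True ⇒ contents backed up intact

ram = [0] * RAM_SIZE
ram[0:3] = [7, 8, 9]
power_off(ram)
print(recheck(ram))    # True: memory unchanged since power-off
ram[1] ^= 0xFF         # simulate corruption during battery replacement
print(recheck(ram))    # False: contents have varied
```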


Patent
11 Dec 1984
TL;DR: In this article, an apparatus for extending the memory capacity of a computer system having discrete memory storage was described, including a scanner for accessing computer information and a memory device for storing computer information.
Abstract: An apparatus for extending the memory capacity of a computer system having discrete memory storage. The apparatus including a scanner for accessing computer information, and a memory device for storing computer information and for accessing and transferring computer information to and from the scanner and to and from the discrete memory storage of the computer system. The scanner utilizing an optical laser scanning system to encode computer information on a physical medium. The memory device utilizing virtual memory techniques to store and retrieve data for use at a required time.

Patent
22 Mar 1984
TL;DR: In this article, the authors propose a method for verifying the design of a digital electronic component in which the component is replaced by a simulation unit connected to the intended host system, and the simulation unit has a memory for holding responses to stimuli from the host system.
Abstract: A method for verifying the design of a digital electronic component in which the component is replaced by a simulation unit connected to the intended host system. The simulation unit has a memory for holding responses to stimuli from the host system. If the required response is not in the memory, it is calculated and placed in the memory, and the operation of the host system is then re-started from the beginning. In this way, the required set of responses is built up incrementally in the memory until, eventually, the operation of the host system can run to completion.

Journal ArticleDOI
Tanner
TL;DR: A series of designs for a 256K memory are presented which integrate error-correcting coding into the memory organization, starting from a simple single-error correcting product code and exploring trade-offs in coding efficiency, access delay, and complexity of communication and computation.
Abstract: A series of designs for a 256K memory are presented which integrate error-correcting coding into the memory organization. Starting from a simple single-error correcting product code, the successive designs explore trade-offs in coding efficiency, access delay, and complexity of communication and computation. In the most powerful design, all the 256K bits are organized so that they form a codeword in a double-error-correcting triple-error-detecting code derived from a projective plane. Because all of the bits are components of this single codeword, the coding efficiency is very high; the required parity check bits increase the storage by only 3 percent, approximately. Single error correction can take place at the time of a read with very little additional delay compared to that of a normal irredundant memory. Multiple error correction can be performed by the memory management system. A variety of failure modes, including failure of a whole column of one of the constituent 64 x 64 subarrays can be tolerated. Writing into the memory is somewhat slower than in a conventional memory, involving a read-write cycle.
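The "simple single-error correcting product code" the designs start from can be illustrated on a toy scale: data bits in a square array with one parity bit per row and per column; a single flipped bit is located by the unique failing row/column pair. This sketch is illustrative and far smaller than the paper's 256K organization.

```python
# Product-code single-error correction on an n x n bit array: recompute
# row and column parities and flip the bit at the intersection of the
# (unique) mismatching row and column.

def parities(bits):                        # bits: list of rows of 0/1
    rows = [sum(r) % 2 for r in bits]
    cols = [sum(c) % 2 for c in zip(*bits)]
    return rows, cols

def correct_single_error(bits, stored_rows, stored_cols):
    rows, cols = parities(bits)
    bad_r = [i for i, (a, b) in enumerate(zip(rows, stored_rows)) if a != b]
    bad_c = [j for j, (a, b) in enumerate(zip(cols, stored_cols)) if a != b]
    if len(bad_r) == 1 and len(bad_c) == 1:
        bits[bad_r[0]][bad_c[0]] ^= 1      # flip the intersecting bit back
    return bits

data = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
r, c = parities(data)                      # parity computed at write time
data[1][2] ^= 1                            # single-bit error on read
print(correct_single_error(data, r, c) == [[1, 0, 1], [0, 1, 1], [1, 1, 0]])
# → True
```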

Patent
02 Feb 1984
TL;DR: A dual memory system consists of two memory units each having volatile memory devices, backup power storage means and backup monitoring facility which memorizes a signal indicative of whether backup is successful or failing as discussed by the authors.
Abstract: A dual memory system consists of two memory units each having volatile memory devices, backup power storage means and backup monitoring facility which memorizes a signal indicative of whether backup is successful or failing. When the contents of one memory unit are copied to another memory unit, the receiving memory unit has the monitor signal in a state which is made coincident with the signal state of the sending memory unit. Consequently, the receiving memory unit will have the same state of the monitor signal as of the sending memory unit at the end of copying, and thus both memory units are in a successful backup state only when the sending memory unit is in a successful backup state.

Patent
25 Jan 1984
TL;DR: In this article, the authors restrict the extent of disappearance of information accumulated in an external memory unit even when an abnormal state occurs, by changing a spare memory unit that accumulates the same information as that stored in the presently used memory unit to a history memory unit at every specified period.
Abstract: PURPOSE: To restrict as far as possible the extent of disappearance of information accumulated in an external memory unit even when an abnormal state occurs, by changing a spare memory unit that accumulates the same information as that stored in the presently used memory unit to a history memory unit at every specified period. CONSTITUTION: The external memory unit 3-1 is used as the presently used memory unit. At a point of time t2, an external memory unit 3-2 is used as a spare memory unit, and external memory units 3-3 to 3-5 are used as history memory units. When the point of time t2 arrives, a processing device 1 changes the spare memory unit 3-2 to a history memory unit and, after copying telegrams etc. accumulated in the presently used memory unit 3-1 into the history memory unit 3-5, starts using it as a spare memory unit. On arriving at a point of time t3, the processing device 1 changes the spare memory unit 3-5 to a history memory unit and, after copying telegrams etc. accumulated by the presently used memory unit 3-1 into the history memory unit 3-4, starts using it as a spare memory unit.

Journal ArticleDOI
TL;DR: Experimental results are given about the performance of six sorting algorithms in a virtual memory based on the working set principle, and quicksort turns out to be the best algorithm, also in a working set virtual memory environment.
Abstract: Experimental results are given about the performance of six sorting algorithms in a virtual memory based on the working set principle. With one exception, the algorithms are general internal sorting algorithms and not especially tuned for virtual memory. Algorithms are compared in terms of their time requirements, space requirements, and space-time integrals. The relative performances of the algorithms vary from one measure to the other. Especially in terms of a space-time integral, quicksort turns out to be the best algorithm, also in a working set virtual memory environment.

Patent
12 Apr 1984
TL;DR: In this article, the data formats for transmission of the data to the non-volatile memory and for receiving it therefrom do not agree with the data format which the volatile storage must have.
Abstract: An electronic demand register for an electric meter includes a volatile storage for normal processing of data and a non-volatile storage into which data is serially written upon the occurrence of conditions which may threaten the integrity of such data and from which the data is again retrieved when the condition no longer exists. The data formats for transmission of the data to the non-volatile memory and for receiving it therefrom do not agree with the data formats which the volatile storage must have. A communications buffer assembles a data package for transmission to the non-volatile memory which has a format which can be suitably serially transmitted to the non-volatile memory and be properly interpreted there due to the manner in which the non-volatile memory recognizes and stores data. In addition, the communications buffer receives the serial data from the non-volatile memory and, by left shift and selection of only valid portions of the data, assembles data suitable for transmission to the volatile memory.

Journal ArticleDOI
TL;DR: Two schemes which allow for the secondary storage to preload input data and programs into the primary memories so that processor utilization can be increased and system response time decreased are presented.
Abstract: One class of reconfigurable parallel processing systems is based on the use of a large number of processing elements where each processing element consists of a processor and a primary memory. To efficiently employ the processing elements, it is desirable to overlap the operation of the secondary storage with computations being performed by the processors. Due to the dynamically reconfigurable architecture of such systems, the processors which will execute a new task may not be selected until they are ready to run the task. That is, a task must be preloaded prior to the final selection of the processors on which it will execute. Two schemes which allow for the secondary storage to preload input data and programs into the primary memories so that processor utilization can be increased and system response time decreased are presented. PASM is used as an example system for comparing the performance of the schemes by simulation studies. Results show that both methods are effective techniques. These schemes can be applied to reconfigurable parallel processing systems which use a centralized scheduling policy.

Patent
20 Dec 1984
TL;DR: In this paper, a data transmission control device for controlling the data transfer between two memory means on the basis of an instruction from a processor is disclosed in which the instruction from the processor is decoded, a transfer request is issued to each memory means a plurality of times, depending upon a transfer unit indicated by the decoded instruction and an access unit of each memory mean.
Abstract: A data transmission control device for controlling the data transfer between two memory means on the basis of an instruction from a processor is disclosed, in which the instruction from the processor is decoded, a transfer request is issued to each memory means a plurality of times, depending upon a transfer unit indicated by the decoded instruction and an access unit of each memory means, a data buffer is provided between the memory means to temporarily store data which is transferred from one of the memory means to the other memory means, and the issue of a transfer request to each memory means is allowed or stopped in accordance with the quantity of data stored in the data buffer.
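The buffer-gating behavior can be sketched simply: read requests to the source are allowed only while the intermediate buffer has room, and write requests to the destination only while it holds data. Buffer capacity and transfer granularity are invented for illustration.

```python
# Bounded-buffer transfer between two "memory means": the buffer fill
# level gates whether a read or a write request is issued next.

from collections import deque

def transfer(source, dest, buffer_capacity=2):
    buf = deque()
    src = deque(source)
    while src or buf:
        if src and len(buf) < buffer_capacity:   # room left: issue read request
            buf.append(src.popleft())
        elif buf:                                # buffer full or source drained:
            dest.append(buf.popleft())           # issue write request
    return dest

out = []
transfer([1, 2, 3, 4, 5], out)
print(out)   # → [1, 2, 3, 4, 5]: order preserved through the bounded buffer
```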

Patent
12 Jul 1984
TL;DR: In this paper, a maintenance exerciser makes requests of certain inoperative and malfunctioning storage memory bank portions of a large scale storage memory unit concurrently that normal system requestors do request of remaining, correctly functional, storage memory banks portions of such storage memory units.
Abstract: A maintenance exerciser makes requests of certain inoperative and malfunctioning storage memory bank portions of a large scale storage memory unit concurrently with the requests that normal system requestors make of the remaining, correctly functional, storage memory bank portions of that storage memory unit. All requests are collectively prioritized in a priority network which, save for the circuit of the present invention, will not advance to successive prioritizations until each currently prioritized request is positively acknowledged by the requested storage memory bank. When the maintenance exerciser makes abundant and repetitive requests to storage memory banks which are non-responding, such requests would time out save for the circuit of the present invention, and by suspending successive prioritizations they would significantly impede the concurrent access of normal system requestors to the remaining correctly functional and responsive storage memory banks. In the present invention, a time delay circuit accomplishes a forced clear of an unacknowledged request from the maintenance exerciser to an unresponsive storage memory bank, thereby precluding a time-out of the maintenance exerciser's request while allowing the priority network to continue successive prioritizations.

Journal ArticleDOI
TL;DR: In this paper, a crosstalk reduction technique utilizing the gradient descent procedure is developed first, which minimizes the memory processing error and enhances memory saving, and a self-correcting technique is developed which achieves error-free recognition of near neighbors for any training pattern even in the presence of crosstalk.
Abstract: A computer model for a distributed associative memory has been developed based on Walsh-Hadamard functions. In this memory device, the information storage is distributed over the entire memory medium and thereby lends itself to parallel comparison of the input with stored data. These inherent economic storage and parallel processing capabilities may be found effective especially in real-time processing of large amounts of information. However, overlaying different pieces of data in the same memory medium creates the problem of interference or crosstalk between stored data and may lead to recognition errors. In this paper, a crosstalk reduction technique utilizing the gradient descent procedure is developed first. This minimizes the memory processing error and enhances memory saving. Second, for an efficient implementation of the memory structure, these associative memories are configured in a hierarchical structure which not only expands storage capacity but also utilizes the speed of tree search. Finally, a self-correcting technique is developed which achieves error-free recognition of near neighbors for any training pattern even in the presence of crosstalk.

Patent
31 Jan 1984
TL;DR: In this paper, a special memory (G/H) is provided in which the individual address-based allocations of replacement memory sectors (R6-R8) to defective memory sectors(E1/E2, E21/E22) are stored.
Abstract: In a magnetic disk memory (M1-M12, N), memory segments (M1-M12) of any number of memory sectors (E1/E2, E21/E22, R6, R7, R8) are formed. A part of the memory medium with an adequate number of memory sectors (R6-R8) is reserved as a replacement for memory sectors which become defective, i.e. is not directly available to form memory segments. An alert signal is stored in the address field (E1, E21) of each memory sector (E1/E2, E21/E22) insofar as a memory sector is defective. A special memory (G/H) is provided in which the individual address-based allocations of replacement memory sectors (R6-R8) to defective memory sectors (E1/E2, E21/E22) are stored. By the alert signal or in the case of illegibility of the address field (E1, E21), the address of the replacement memory sector (R6, R8) concerned which is associated with the address of the defective memory sector (E1/E2, E21/E22) is searched for in the special memory (G/H) with the address of the defective memory sector (E1/E2, E21/E22).

Journal ArticleDOI
TL;DR: Various memory allocation algorithms that allow formation of a multiprocessor system that incorporates several content-addressable memories and is designated for fast data base applications are discussed.
Abstract: For associative processing and relational data bases characterized by sequential memory search, it is convenient to store a sequence of data files in a content-addressable memory since it can perform two concurrent data base operations at a time (search and update, search and delete, etc.) and the sequential nature of its operation is in conformity with the sequential nature of maintenance and update of data files. To take into account various communication delays introduced by the communication network in transferring updated words to the content-addressable memory, assume that a sequence of data words contained in the same data file is stored with a shifting distance from one another, d ≥ 1, where the integer d is selectable by a programmer, and a pair of adjacent data words from the same file may have a constant or variable d. (A particular case, d = 1, means consecutive word storage.) In this paper, we discuss various memory allocation algorithms that allow formation of a multiprocessor system that incorporates several content-addressable memories and is designated for fast data base applications. All memory allocation schemes introduced in this paper are described by a Diophantine equation whose solution, x, shows the distance between any two processors that are not in conflict when they access the same content-addressable memory. The paper presents a technique for finding a maximal set of noninterfering processors and conflict-free allocation techniques for various structures of data files.

Journal ArticleDOI
Lars Philipson
01 Jan 1984
TL;DR: Design principles for MIMD multiprocessor computers with virtual memory based on a common, global and uniform logical address space, supporting parallel, procedural languages such as Ada are discussed and suggested solutions given.
Abstract: Design principles for MIMD multiprocessor computers with virtual memory based on a common, global and uniform logical address space, supporting parallel, procedural languages such as Ada (Ada is a registered trademark of the US Government, AJPO), are discussed. The major design issues are identified and suggested solutions given, the most important of which are distributed, associative address translation, and local mechanisms supporting efficient resource allocation policies to reduce over-all communication costs. Arguments are given for using shared memory and bus-based global communication. Some preliminary studies of bus-based intercommunication schemes, parallel language implementation, capacity simulation and VLSI implementation are reviewed, as well as a number of existing experimental and commercial multiprocessors. Finally an experimental system for evaluation of different mechanisms and policies in systems of the suggested type is outlined.


Patent
John A. Celio
22 Jun 1984
TL;DR: In this article, a virtual address and access protection code stored at that address are fetched simultaneously from secondary memory and stored in corresponding locations in first and second content addressable memories (36, 68).
Abstract: A virtual address and access protection code stored at that address are fetched simultaneously from secondary memory and stored in corresponding locations in first and second content addressable memories (36, 68). When a program-generated virtual address is later applied to the first content addressable memory (36), a corresponding access code is simultaneously applied to the second content addressable memory (68). Match signals obtained simultaneously from corresponding locations in both content addressable memories (36, 68) are combined in logic means (84) to produce an access control signal to control access to data stored at a real memory address corresponding to the matched virtual address.
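The dual-lookup idea can be sketched as follows: the virtual address and its access-protection code are stored at corresponding locations and matched simultaneously, and access is granted only when both lookups agree. Plain dictionaries stand in for the two content addressable memories (36, 68); the slot layout and code values are invented.

```python
# Two parallel "CAMs" keyed by slot: one holds virtual addresses, the other
# the access codes loaded from secondary memory at the same time. Access is
# granted only if some slot matches on both sides simultaneously.

def load_entry(addr_cam, code_cam, slot, vaddr, code):
    addr_cam[slot] = vaddr                 # first CAM (36): virtual addresses
    code_cam[slot] = code                  # second CAM (68): protection codes

def access_allowed(addr_cam, code_cam, vaddr, code):
    # combine the simultaneous match signals from corresponding locations
    return any(addr_cam[s] == vaddr and code_cam[s] == code for s in addr_cam)

addr_cam, code_cam = {}, {}
load_entry(addr_cam, code_cam, slot=0, vaddr=0x1000, code="rw")
print(access_allowed(addr_cam, code_cam, 0x1000, "rw"))   # → True
print(access_allowed(addr_cam, code_cam, 0x1000, "ro"))   # → False: code mismatch
```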

Journal ArticleDOI
TL;DR: It is proved that the algorithms oriented towards the working set or sampled working set policy are optimum when applied to programs having no more than two blocks per page, and that, when this restriction is removed, they minimize both upper and lower bounds of the performance index they consider as the figure of merit to be reduced.

Journal ArticleDOI
TL;DR: Limited, though encouraging, results are presented which show that this new algorithm can be at least as effective as the Critical LRU algorithm, even when the memory management policy is LRU itself, and can also be at least as effective as Critical Working Set, even when the memory management policy is the working set policy.

Patent
10 Apr 1984
TL;DR: In this article, a memory management system is designed for use with a self-contained microprocessor to form a multi-user computer, which operates to establish user and kernel modes each having different operating permissions.
Abstract: A memory management system is structured for use with a self-contained microprocessor to form a multi-user computer. The system operates to establish user and kernel modes each having different operating permissions. When the system is operating in the user mode, certain of the fixed functions of the microprocessor, such as interrupt-off and halt, are blocked from enablement by any user. The system is designed having multiple memory maps, some accessible when in the user mode and all accessible from the kernel mode.

Journal ArticleDOI
Bill Bateson
TL;DR: The main features of the Xenix time-sharing system are discussed, paying particular attention to those aspects of the operating system which relate most closely to the hardware on which the system is running.