Patent
Efficient cache write technique through deferred tag modification
TL;DR: In this article, the authors propose a two-stage cache access pipeline that embellishes a simple "write-thru with write-allocate" cache write policy to achieve single-cycle cache write access even when the processor cycle time does not allow sufficient time for the cache control to check the cache tag for validity and to reflect those results to the processor within the same processor cycle.
Abstract: An efficient cache write technique useful in digital computer systems wherein it is desired to achieve single-cycle cache write access even when the processor cycle time does not allow sufficient time for the cache control to check the cache "tag" for validity and to reflect those results to the processor within the same processor cycle. The novel method and apparatus comprise a two-stage cache access pipeline which embellishes a simple "write-thru with write-allocate" cache write policy.
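The deferred-tag-check pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the patent's implementation; the class and method names (`DeferredWriteCache`, `retire_pending`, etc.) are hypothetical, and the cache is simplified to a direct-mapped array with one word per line.

```python
# Hypothetical sketch of a two-stage "write-thru with write-allocate"
# cache write pipeline with a deferred tag check. Names and structure
# are illustrative assumptions, not taken from the patent text.

class DeferredWriteCache:
    """Direct-mapped write-through/write-allocate cache whose tag check
    for a write is deferred to the cycle after the write is accepted."""

    def __init__(self, num_lines=4):
        self.tags = [None] * num_lines   # tag per line (None = invalid)
        self.data = [0] * num_lines      # one word per line, for brevity
        self.pending = None              # stage-1 latch: (addr, value)
        self.memory = {}                 # backing store (write-through)

    def _index_tag(self, addr):
        return addr % len(self.tags), addr // len(self.tags)

    def write(self, addr, value):
        """Stage 1: accept the write in a single cycle. The tag is not
        checked here, so the processor is never stalled on a write."""
        self.retire_pending()            # drain any older deferred write
        self.pending = (addr, value)

    def retire_pending(self):
        """Stage 2 (next cycle): check the tag and commit. On a miss the
        line is simply allocated (write-allocate); memory is always
        updated as well (write-through)."""
        if self.pending is None:
            return
        addr, value = self.pending
        idx, tag = self._index_tag(addr)
        self.tags[idx] = tag             # hit or allocate-on-miss
        self.data[idx] = value
        self.memory[addr] = value        # write-through to memory
        self.pending = None

    def read(self, addr):
        self.retire_pending()            # reads must see deferred writes
        idx, tag = self._index_tag(addr)
        if self.tags[idx] == tag:
            return self.data[idx]        # cache hit
        value = self.memory.get(addr, 0) # miss: fill from memory
        self.tags[idx], self.data[idx] = tag, value
        return value
```

The key point of the scheme is visible in `write`: the processor's write completes in one cycle regardless of hit or miss, and the tag check happens a cycle later, overlapped with the next access.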
Citations
Patent
Disk controller with volatile and non-volatile cache memories
TL;DR: In this article, a disk storage subsystem includes both volatile and non-volatile portions of memory; the non-volatile portion can also be mirrored in additional non-volatile memory blocks to reduce disk access time.
Patent
Memory controller with priority queues
TL;DR: A memory controller receives memory reads, memory writes, and cache writes; queued cache writes are checked to determine whether any corresponds to a pending read, and if so, the data from the matching cache write is used to respond to the read.
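A minimal sketch of the read-bypass check summarized above, assuming a simple FIFO of pending cache writes; the class and method names are hypothetical, not from the patent:

```python
# Illustrative sketch: an incoming read is checked against queued cache
# writes, and a matching write supplies the data directly instead of
# waiting for memory. Names are assumptions, not the patent's design.
from collections import deque

class MemoryControllerSketch:
    def __init__(self):
        self.pending_cache_writes = deque()  # (addr, value) not yet drained
        self.memory = {}

    def cache_write(self, addr, value):
        self.pending_cache_writes.append((addr, value))

    def drain_one(self):
        # Retire the oldest queued cache write into memory.
        if self.pending_cache_writes:
            addr, value = self.pending_cache_writes.popleft()
            self.memory[addr] = value

    def read(self, addr):
        # Scan newest-first so the most recent matching write wins.
        for a, v in reversed(self.pending_cache_writes):
            if a == addr:
                return v
        return self.memory.get(addr)
```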
Patent
Multiprocessor system with write generate method for updating cache
Arun K. Somani, Craig M. Wittenbrink, Chung-Ho Chen, Robert E. Johnson, Kenneth Cooper, Robert M. Haralick +5 more
TL;DR: In this paper, a write generate mode is implemented for updating cache by first allocating lines of shared memory as write-before-read areas; cache tags are then updated directly on cache misses without reading from memory.
Patent
Store processing method in a pipelined cache memory
Fujio Itomitsu, Yuuichi Saito +1 more
TL;DR: In this paper, during a write operation in a cache memory apparatus (and a microprocessor therewith), the lower-order bits of the contents stored in the first address register are transferred to the second address register through a transfer path.
Patent
Semiconductor memory device having an SRAM as a cache memory integrated on the same chip and operating method thereof
TL;DR: In this article, a cache DRAM (100) includes a DRAM memory array (11) accessed by a row address signal and a column address signal, an SRAM memory array (21) accessed by the column address signal, and an ECC circuit (30).
References
Patent
Cached multiprocessor system with pipeline timing
TL;DR: In this article, the authors describe a multiprocessor data processing system whose processors (30) share a common control unit (CCU 10) that includes a write-through cache memory (20) for accessing copies of memory data without undue delay in retrieving data from the main memory system.
Patent
Multiprocessor shared pipeline cache memory with split cycle and concurrent utilization
James W. Keeley, Thomas F. Joyce +1 more
TL;DR: In this paper, a cache memory unit is constructed to have a two-stage pipeline shareable by a plurality of sources which include two independently operated central processing units (CPUs).
Patent
Dual cache for independent prefetch and execution units
TL;DR: In this article, a pipelined digital computer processor system is provided comprising an instruction prefetch unit (IPU, 2) for prefetching instructions and an arithmetic logic processing unit (ALPU, 4) for executing instructions.
Patent
Fast access priority queue for managing multiple messages at a communications node or managing multiple programs in a multiprogrammed data processor
TL;DR: In this article, the priority of a new element is compared with the priority of the existing element in the holding register, and the new element is written onto the top of the stack.
Patent
Data processing machine with improved cache memory management
TL;DR: In this paper, the cache operating cycle is divided into two subcycles dedicated to mutually exclusive operations; the first subcycle is dedicated to receiving a central processor memory read request together with its address.