Patent

Adaptive read-ahead disk cache

Abstract
An adaptive read ahead cache is provided with a real cache and a virtual cache. The real cache has a data buffer, an address buffer, and a status buffer. The virtual cache contains only an address buffer and a status buffer. Upon receiving an address associated with the consumer's request, the cache stores the address in the virtual cache address buffer if the address is not found in the real cache address buffer and the virtual cache address buffer. Further, the cache fills the real cache data buffer with data responsive to the address from said memory if the address is found only in the virtual cache address buffer. The invention thus loads data into the cache only when sequential accesses are occurring and minimizes the overhead of unnecessarily filling the real cache when the host is accessing data in a random access mode.
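The fill policy described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the class and method names are invented, eviction is simplified to FIFO, and "sequential access" is reduced to an address being seen again after it was recorded in the virtual cache.

```python
class AdaptiveReadAheadCache:
    """Sketch of the two-tier cache from the abstract (names are assumptions).

    The real cache holds addresses *and* data; the virtual cache holds
    addresses only, so a random-access workload never pays for data fills.
    """

    def __init__(self, memory, capacity=8):
        self.memory = memory      # backing store: address -> data
        self.capacity = capacity  # real-cache size (illustrative)
        self.real = {}            # real cache: address -> data
        self.virtual = set()      # virtual cache: addresses only

    def read(self, address):
        # Hit in the real cache: serve the data buffer directly.
        if address in self.real:
            return self.real[address]
        if address in self.virtual:
            # Address was seen before: treat the stream as sequential and
            # fill the real cache data buffer from memory.
            self.virtual.discard(address)
            data = self.memory[address]
            self._fill_real(address, data)
            return data
        # Miss in both caches: record only the address in the virtual
        # cache; no data is loaded, keeping random access cheap.
        self.virtual.add(address)
        return self.memory[address]

    def _fill_real(self, address, data):
        if len(self.real) >= self.capacity:
            # Simplified FIFO eviction (the patent does not specify this).
            self.real.pop(next(iter(self.real)))
        self.real[address] = data
```

A first read of an address only records it in the virtual cache; a repeat read promotes it into the real cache, after which further reads are real-cache hits.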


Citations
Patent

Increasing the memory performance of flash memory devices by writing sectors simultaneously to multiple flash memory devices

TL;DR: In this patent, a memory storage system for storing information organized in sectors within a nonvolatile memory bank is disclosed; sectors are organized into blocks, with each sector identified by a host-provided logical block address (LBA).
Patent

Policy based storage configuration

Narayan Devireddy, +1 more
TL;DR: In this patent, a storage device configuration manager is implemented in software for a computer system that includes a processor, a memory coupled to the processor, and at least one storage device coupled to the processor.
Patent

Moving sectors within a block of information in a flash memory mass storage architecture

TL;DR: In this patent, a device is disclosed for storing mapping information that maps a logical block address, identifying a block being accessed by a host, to a physical block address identifying a free area of nonvolatile memory, the block being selectively erasable and having one or more sectors that may be individually moved.
Patent

Direct logical block addressing flash memory mass storage architecture

TL;DR: In this patent, a nonvolatile semiconductor mass storage system and architecture that can be substituted for a rotating hard disk is described, using several flags and a map to correlate the logical block address of a block with its physical address.
Patent

Non-volatile memory control

TL;DR: In this patent, a pipelining sequence is presented for transferring data to and from non-volatile memory arrays while limiting the number of arrays active at one time; the controller waits for at least one of the active arrays to complete before initiating the next transfer.
References
Patent

Method and apparatus for efficiently handling temporarily cacheable data

TL;DR: In this patent, a method and apparatus are presented for marking data that is temporarily cacheable, to facilitate efficient management of that data. A bit in the segment and/or page descriptor of the data, called the marked data bit (MDB), is generated by the compiler, is included with the memory address when the processor requests data from memory, and is stored in the cache directory at a location related to the particular line of data involved.
Patent

Asynchronous read-ahead disk caching using multiple disk I/O processes and dynamically variable prefetch length

TL;DR: In this patent, a file-based read-ahead method employs asynchronous I/O processes to fetch demand and read-ahead data blocks from a disk, with a prefetch length that varies according to the blocks' physical and logical sequentiality.
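A dynamically variable prefetch length can be sketched with a common heuristic: grow the read-ahead window while reads stay sequential, and shrink it on a non-sequential read. This is an assumption-laden illustration, not the patented method; the class name, growth factor, and cap are all invented, and the asynchronous I/O processes are omitted.

```python
class PrefetchWindow:
    """Sketch of a variable prefetch length (names and policy are assumptions)."""

    def __init__(self, max_len=64):
        self.length = 1           # current read-ahead length, in blocks
        self.max_len = max_len    # cap on the window (illustrative)
        self.next_expected = None # block that would continue the stream

    def on_read(self, block):
        if self.next_expected is not None and block == self.next_expected:
            # Sequential hit: double the read-ahead length, up to the cap.
            self.length = min(self.length * 2, self.max_len)
        else:
            # Non-sequential read: fall back to a single-block window.
            self.length = 1
        self.next_expected = block + 1
        # Blocks to prefetch after serving this demand read.
        return list(range(block + 1, block + 1 + self.length))
```

Each sequential read widens the window (1, 2, 4, ... blocks), so long scans amortize disk latency, while a random read immediately resets it.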
Patent

System for scheduling readahead operations if new request is within a proximity of N last read requests wherein N is dependent on independent activities

TL;DR: In this patent, the disk controller keeps track of the last n reads to the array; if a new read request arrives that is adjacent to any of those last reads, the controller performs a look-ahead read, because a sequential read may be in progress.
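The proximity test summarized above can be sketched as follows. This is a hedged illustration, not the patented controller: the class name, the window of n last reads, and the adjacency threshold are assumptions.

```python
from collections import deque

class ReadAheadScheduler:
    """Sketch of proximity-based read-ahead scheduling (names are assumptions)."""

    def __init__(self, n_last=4, proximity=1):
        self.last_reads = deque(maxlen=n_last)  # last n read addresses
        self.proximity = proximity              # "adjacent" threshold

    def should_read_ahead(self, address):
        # Schedule a look-ahead read if the new request lands next to any
        # of the last n requests, hinting that a sequential scan is in
        # progress even when other independent activity is interleaved.
        near = any(abs(address - prev) <= self.proximity
                   for prev in self.last_reads)
        self.last_reads.append(address)
        return near
```

Keeping a window of the last n reads (rather than only the most recent one) is what lets the heuristic survive interleaved requests from independent activities.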
Patent

Enhanced cache operation with remapping of pages for optimizing data relocation from addresses causing cache misses

TL;DR: In this patent, bus-activity sampling logic is added to the CPU and the operating system is enhanced to detect when cache thrashing is occurring and to remap data pages to new physical memory locations.
Patent

Cache hierarchy design for use in a memory management unit

TL;DR: In this patent, a cache hierarchy to be managed by a memory management unit (MMU) combines the advantages of logical and virtual address caches by providing a logical address cache backed up by a virtual address cache.