Journal ArticleDOI
Memory expansion technology (MXT): software support and performance
Bulent Abali, Hubertus Franke, Dan E. Poff, Robert Saccone, Charles O. Schulz, Lorraine M. Herger, T.B. Smith +6 more
TLDR
Results show that hardware compression of main memory carries a negligible penalty compared with an uncompressed main memory, that it increases performance significantly for memory-starved applications, and that the memory content of an application can usually be compressed by a factor of 2.
Abstract
A novel memory subsystem called Memory Expansion Technology (MXT) has been built for fast hardware compression of main-memory content. This allows a memory expansion to present a "real" memory larger than the physically available memory. This paper provides an overview of the memory-compression architecture, its OS support under Linux and Windows®, and an analysis of the performance impact of memory compression. Results show that the hardware compression of main memory has a negligible penalty compared to an uncompressed main memory, and for memory-starved applications it increases performance significantly. We also show that the memory content of an application can usually be compressed by a factor of 2.
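The factor-of-2 claim is easy to illustrate in software. The sketch below is a toy measurement using Python's `zlib` as a stand-in for MXT's hardware compressor, so the exact ratios differ from the paper's, but it shows why typical structured in-memory content compresses well:

```python
import zlib

def compression_ratio(data: bytes, level: int = 6) -> float:
    """Original size divided by compressed size (2.0 means 2:1)."""
    return len(data) / len(zlib.compress(data, level))

# Structured, repetitive content -- typical of much in-memory data --
# compresses far better than 2:1; random payloads barely compress at all.
structured = b"record:0000|flags:RW|next:NULL;" * 1024
print(f"{compression_ratio(structured):.1f}x")
```

Real application memory mixes highly compressible pages (zeros, pointers, text) with incompressible ones, which is how the overall average lands near 2:1.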
Citations
Proceedings ArticleDOI
Base-delta-immediate compression: practical data compression for on-chip caches
Gennady Pekhimenko, Vivek Seshadri, Onur Mutlu, Michael Kozuch, Phillip B. Gibbons, Todd C. Mowry +5 more
TL;DR: There is a need for a simple yet efficient compression technique that can effectively compress common in-cache data patterns while having minimal effect on cache access latency; base-delta-immediate compression is proposed to meet this need.
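The idea behind base-delta-immediate compression can be sketched in a few lines: many cache lines hold values (pointers, array indices, counters) that lie numerically close together, so a line can be stored as one base value plus narrow deltas. This is a simplified Python model assuming 32-byte lines of 4-byte words and a single delta width; the paper's actual design also handles immediate (zero-base) values and tries multiple base/delta size combinations:

```python
import struct

def bdi_compress(line: bytes, delta_bytes: int = 1):
    """Try to encode a cache line as one 4-byte base plus signed deltas
    of `delta_bytes` each. Returns (base, deltas) on success, or None
    if any delta does not fit in the chosen width."""
    words = struct.unpack(f"<{len(line) // 4}I", line)
    base = words[0]
    limit = 1 << (8 * delta_bytes - 1)      # signed range: [-limit, limit)
    deltas = []
    for w in words:
        d = w - base
        if not (-limit <= d < limit):
            return None                     # values too far apart; store raw
        deltas.append(d)
    return base, deltas

# Eight nearby pointers: 4 + 8 bytes of payload instead of 32.
line = struct.pack("<8I", *range(0x1000, 0x1000 + 32, 4))
print(bdi_compress(line))
```

A compressed line here costs 4 bytes of base plus one byte per word, versus 32 bytes raw, which mirrors the paper's observation that such patterns dominate real cache contents.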
Patent
Content independent data compression method and system
TL;DR: In this paper, a method for compressing data comprises the steps of: analyzing a data block of an input data stream to identify its data type, the stream comprising a plurality of disparate data types; performing content-dependent data compression on the block if the data type is identified; and performing content-independent data compression if it is not.
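A minimal software sketch of that dispatch logic might look as follows; the magic-byte sniffing and the per-type codec table are illustrative assumptions, not the patented method itself:

```python
import lzma
import zlib

def detect_type(block: bytes):
    """Toy type sniffing via magic bytes (illustrative only)."""
    if block.startswith(b"\x89PNG"):
        return "png"
    if block.startswith(b"%PDF-"):
        return "pdf"
    return None

# Hypothetical per-type choices; a real system would use codecs
# tuned to each identified format.
CONTENT_DEPENDENT = {
    "png": lambda b: b,        # already compressed: store as-is
    "pdf": lzma.compress,
}

def compress_block(block: bytes) -> bytes:
    kind = detect_type(block)
    if kind is not None:
        return CONTENT_DEPENDENT[kind](block)   # content-dependent path
    return zlib.compress(block)                 # content-independent fallback
```

The key point of the claim is the fallback: unidentified blocks still get compressed, just with a generic algorithm.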
Patent
Systems and Methods for Accelerated Loading of Operating Systems and Application Programs
TL;DR: In this article, the authors present a method for accelerated loading of operating systems and application programs upon system boot or application launch, which comprises: maintaining a list of boot data associated with an application program; preloading that boot data upon launching the application program; and servicing the computer system's requests for application data from the preloaded boot data.
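In outline, the method amounts to a learning preload cache. The sketch below is a toy model under the assumption that the "disk" is a block-number-to-bytes mapping; a real implementation would record actual device blocks observed during boot or launch:

```python
class PreloadCache:
    """Toy sketch: remember which blocks an application reads at launch,
    preload them on the next launch, and serve reads from the preloaded
    copies instead of the (slow) disk."""

    def __init__(self, disk):
        self.disk = disk          # stand-in for a block device
        self.boot_list = set()    # blocks observed during previous launches
        self.preloaded = {}

    def launch(self):
        # Preload everything seen at earlier launches.
        self.preloaded = {b: self.disk[b] for b in self.boot_list}

    def read(self, block_no):
        if block_no in self.preloaded:
            return self.preloaded[block_no]   # fast path: no disk access
        self.boot_list.add(block_no)          # learn for the next launch
        return self.disk[block_no]
```

The first launch is cold; subsequent launches are served from the preloaded set, which is the claimed acceleration.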
Patent
Data Storewidth Accelerator
TL;DR: In this article, a composite disk controller provides data storage and retrieval acceleration using multiple caches for data pipelining and increased throughput, and the disk controller with acceleration is embedded in the storage device.
Patent
Memory module with a circuit providing load isolation and memory domain translation
TL;DR: In this article, a memory module includes a plurality of memory devices and a circuit, which is configured to be electrically coupled to a memory controller of a computer system, selectively isolating one or more of the loads of the memory devices from the computer system.
References
Book
UNIX Internals: The New Frontiers
TL;DR: Covers process scheduling in Mach, the Digital UNIX real-time scheduler, and other scheduling considerations.
Proceedings Article
The case for compressed caching in virtual memory systems
TL;DR: This study shows that technology trends favor compressed virtual memory--it is attractive now, offering reduction of paging costs of several tens of percent, and it will be increasingly attractive as CPU speeds increase faster than disk speeds.
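The mechanism is straightforward to model: pages evicted from memory are compressed into a RAM pool, and a later page fault decompresses from that pool instead of reading the much slower disk. A toy sketch using `zlib` as the compressor (the study itself compared several compression algorithms):

```python
import zlib

class CompressedPageCache:
    """Toy compressed cache for virtual memory: evicted pages are
    zlib-compressed and kept in RAM; a hit decompresses instead of
    touching the disk."""

    def __init__(self):
        self._store = {}          # page number -> compressed bytes

    def evict(self, page_no: int, data: bytes) -> None:
        self._store[page_no] = zlib.compress(data)

    def fault(self, page_no: int):
        """Return the original page on a hit, or None (go to disk)."""
        blob = self._store.pop(page_no, None)
        return None if blob is None else zlib.decompress(blob)
```

The paper's argument is quantitative: decompression costs microseconds while a disk read costs milliseconds, and the gap widens as CPUs outpace disks.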
Proceedings ArticleDOI
Analysis of branch prediction via data compression
TL;DR: In this article, the authors apply techniques from data compression to establish a theoretical basis for branch prediction, and to illustrate alternatives for further improvement, and show that current two-level or correlation based predictors are, in fact, simplifications of an optimal predictor in data compression, Prediction by Partial Matching (PPM).
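The PPM connection can be made concrete with a small order-k context model over the global branch history: predict from the longest context seen so far and fall back to shorter contexts, which is exactly the structure that two-level predictors simplify. A toy Python sketch (the parameters and fallback policy are illustrative, not the paper's exact predictor):

```python
from collections import defaultdict

class PPMBranchPredictor:
    """Order-k context predictor over the global branch history, a
    software analogue of Prediction by Partial Matching (PPM)."""

    def __init__(self, max_order: int = 4):
        self.max_order = max_order
        # (order, context string) -> [not-taken count, taken count]
        self.counts = defaultdict(lambda: [0, 0])
        self.history = ""

    def predict(self) -> bool:
        # Longest matching context wins; fall back when it is unseen.
        for k in range(self.max_order, -1, -1):
            ctx = self.history[-k:] if k else ""
            nt, t = self.counts[(k, ctx)]
            if nt + t:
                return t >= nt
        return True  # default when nothing has been seen yet

    def update(self, taken: bool) -> None:
        for k in range(self.max_order + 1):
            ctx = self.history[-k:] if k else ""
            self.counts[(k, ctx)][taken] += 1
        self.history = (self.history + ("1" if taken else "0"))[-self.max_order:]
```

A two-level predictor corresponds to keeping only a single fixed order k; PPM's fallback across orders is the extra machinery the paper identifies as headroom for improvement.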
Proceedings ArticleDOI
Compiler-driven cached code compression schemes for embedded ILP processors
Sergei Y. Larin, Thomas M. Conte +1 more
TL;DR: Experiments found that when the misprediction penalty of the added Huffman decoder stage was taken into account, a tailored-ISA approach produced higher performance while providing greater ROM size savings.
Patent
Compression architecture for system memory application
William Paul Hovis, Kent Harold Haselhorst, Steven Wayne Kerchberger, Jeffrey Douglas Brown, David A. Luick +4 more
TL;DR: In this article, the authors present a memory architecture and method of partitioning a computer memory, which includes a cache section, a setup table, and a compressed storage, all of which are partitioned from the computer memory.
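The partitioning described can be modeled in a few lines: a setup table maps each real-address block to the sectors of compressed storage that hold it. The sketch below is a software analogue with an assumed 256-byte sector size and `zlib` standing in for the hardware compressor; the patent's uncompressed cache section is omitted for brevity:

```python
import zlib

SECTOR = 256  # assumed sector size in bytes

class CompressedMemory:
    """Toy model: a setup table maps block numbers to the sectors
    of compressed storage holding each block's compressed bytes."""

    def __init__(self):
        self.setup_table = {}   # block number -> list of sector indices
        self.sectors = []       # compressed storage, allocated on demand

    def write_block(self, block_no: int, data: bytes) -> None:
        blob = zlib.compress(data)
        chunks = [blob[i:i + SECTOR] for i in range(0, len(blob), SECTOR)]
        idxs = []
        for chunk in chunks:
            idxs.append(len(self.sectors))
            self.sectors.append(chunk)
        self.setup_table[block_no] = idxs

    def read_block(self, block_no: int) -> bytes:
        blob = b"".join(self.sectors[i] for i in self.setup_table[block_no])
        return zlib.decompress(blob)
```

Because a compressible block occupies fewer sectors than its uncompressed size, the same physical memory can back a larger "real" address space, which is the common thread with the MXT work above.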