Proceedings ArticleDOI

A case for small row buffers in non-volatile main memories

30 Sep 2012, Vol. 2012, pp. 484-485
TL;DR: It is found that on a multi-core system, reducing the row buffer size can greatly reduce main memory dynamic energy compared to a DRAM baseline with large row sizes, without greatly affecting endurance, and for some NVM technologies, leads to improved performance.
Abstract: DRAM-based main memories have read operations that destroy the read data, and as a result, must buffer large amounts of data on each array access to keep chip costs low. Unfortunately, system-level trends such as increased memory contention in multi-core architectures and data mapping schemes that improve memory parallelism lead to only a small amount of the buffered data being accessed. This makes buffering large amounts of data on every memory array access energy-inefficient; yet organizing DRAM chips to buffer small amounts of data is costly, as others have shown [11]. Emerging non-volatile memories (NVMs) such as PCM, STT-RAM, and RRAM, however, do not have destructive read operations, opening up opportunities for employing small row buffers without incurring additional area penalty and/or design complexity. In this work, we discuss and evaluate architectural changes to enable small row buffers at a low cost in NVMs. We find that on a multi-core system, reducing the row buffer size can greatly reduce main memory dynamic energy compared to a DRAM baseline with large row sizes, without greatly affecting endurance, and for some NVM technologies, leads to improved performance.

Summary (2 min read)

Introduction

  • Abstract—DRAM-based main memories have read operations that destroy the read data, and as a result, must buffer large amounts of data on each array access to keep chip costs low.
  • Emerging non-volatile memories (NVMs) such as PCM, STT-RAM, and RRAM, however, do not have destructive read operations, opening up opportunities for employing small row buffers without incurring additional area penalty and/or design complexity.
  • Over time, this charge leaks, causing the stored data to be lost.
  • As a result, the performance benefit of large row buffers may decrease in multi-core systems.

II. MOTIVATION

  • Emerging NVM technologies have several promising attributes compared to existing memory technologies such as SRAM (used in on-chip caches), DRAM, and Flash.
  • NVMs provide cost advantages compared to SRAM and DRAM, and latency advantages compared to Flash.
  • Typical DRAM chip micro-architectures (JEDEC-standard DDR-type SDRAM) are divided into banks that consist of rows and columns.
  • Comparing the 1- and 8-core row-interleaved data, the authors see that while row interleaving does enable more row buffer locality, its benefits diminish as memory system contention increases with more cores: row buffer hit rate is less than 50% for row interleaving even with large, 1KB rows.

III. A SMALL ROW BUFFER NVM ARCHITECTURE

  • Figure 1(b) shows the organization of their NVM architecture.
  • Compared to a traditional DRAM organization, the physical placement of the row buffer and the column multiplexer (part of the I/O gating circuitry in DRAM designs) are swapped in the data path (shown in gray).
  • This rearrangement makes better use of resources by sharing a smaller number of sense amplifiers (the devices which store bits in the row buffer) among multiple bitlines.
  • Note that this is not possible in DRAM (without reducing the row size) because a sense amplifier for each bit in the row is required in DRAM to restore the charge of the cell after it is read.
  • Unlike DRAM, however, their organization requires decoding both the row address and the column address during a RAS command, so that only a subset of the row containing the bits of interest will be selected, sensed, and stored in the row buffer.

IV. RESULTS

  • The authors modify their memory simulator timings according to those in Table I for PCM and STT-RAM.
  • The authors evaluate 31 multiprogrammed workloads composed of SPEC, TPC, and STREAM benchmarks.
  • Note that this reduction is achieved despite worse underlying technology parameters than DRAM (for more details, please refer to their accompanying tech report [5]).
  • For a given memory technology, reducing the row buffer size does not greatly affect system performance due to the already low row buffer locality present on their multi-core system.
  • NVM cells have a limited lifetime in terms of the number of times they can be written to before their ability to store data fails, a property known as durability.


A Case for Small Row Buffers in Non-Volatile Main Memories
Justin Meza
Jing Li
Onur Mutlu
Carnegie Mellon University
IBM T.J. Watson Research Center
{meza,onur}@cmu.edu jli@us.ibm.com
Abstract—DRAM-based main memories have read operations that
destroy the read data, and as a result, must buffer large amounts of data
on each array access to keep chip costs low. Unfortunately, system-level
trends such as increased memory contention in multi-core architectures
and data mapping schemes that improve memory parallelism lead to only
a small amount of the buffered data being accessed. This makes buffering
large amounts of data on every memory array access energy-inefficient;
yet organizing DRAM chips to buffer small amounts of data is costly, as
others have shown [11].
Emerging non-volatile memories (NVMs) such as PCM, STT-RAM,
and RRAM, however, do not have destructive read operations, opening
up opportunities for employing small row buffers without incurring
additional area penalty and/or design complexity. In this work, we
discuss and evaluate architectural changes to enable small row buffers
at a low cost in NVMs. We find that on a multi-core system, reducing
the row buffer size can greatly reduce main memory dynamic energy
compared to a DRAM baseline with large row sizes, without greatly
affecting endurance, and for some NVM technologies, leads to improved
performance.
I. INTRODUCTION
Modern main memory is composed of dynamic random-access
memory (DRAM). A DRAM cell stores data as charge on a capacitor.
Over time, this charge leaks, causing the stored data to be lost. To
prevent this, data stored in DRAM must be periodically read out and
rewritten, a process called refreshing. In addition, reading data stored
in a DRAM cell destroys its state, requiring data to be later restored,
leading to increased cell access time and energy. For this reason,
DRAM devices must buffer the data that they read. To keep costs
low, the buffering circuitry in DRAM devices is amortized among
large rows of cells, in peripheral storage called the row buffer, at
least one per bank [2]. Refreshing data and buffering large amounts of
data wastes energy in DRAM devices, causing main memory power
to constitute a significant fraction of the total system power.
Data fetched into the row buffer, however, can be accessed at much
lower latencies and less energy than accessing the DRAM memory
array. Therefore, large row buffer sizes can improve performance
and efficiency if many accesses can be served in the same row.
Unfortunately, there are several reasons why such row buffer locality
can be low in systems: (1) some applications inherently do not have
significant locality within rows (e.g., random access applications),
(2) as more cores are placed on chip, applications running on those
cores interfere with each other in the row buffers, leading to reduced
locality, especially if the memory scheduling policy is unaware of
applications’ interference in the row buffers [7], as also observed
by others [10, 11], and (3) interleaving techniques that improve
parallelism in the memory system (e.g., cache block interleaving)
tend to reduce row buffer locality because they stripe consecutive
cache blocks across different banks. As a result, the performance
benefit of large row buffers may decrease in multi-core systems.
New non-volatile memory (NVM) technologies, such as phase-
change memory (PCM), spin-transfer torque RAM (STT-RAM), and
resistive RAM (RRAM), on the other hand, provide non-destructive
reads and do not require refreshing and restoring their data after
sensing. This is because NVMs do not store their data as charge, and
thus their data persists after being read. This not only eliminates the
refresh problem of DRAM devices but also opens up opportunities for
employing smaller row buffers in NVMs without incurring additional
area penalty and/or design complexity.
II. MOTIVATION
Emerging NVM technologies have several promising attributes
compared to existing memory technologies such as SRAM (used in
on-chip caches), DRAM, and Flash. For example, NVMs provide cost
advantages compared to SRAM and DRAM, and latency advantages
compared to Flash. Importantly, these NVMs feature non-destructive
read operations, which DRAM does not have (i.e., data sensing does
not destroy the contents of cells).
Typical DRAM chip micro-architectures (JEDEC-standard DDR-
type SDRAM) are divided into banks that consist of rows (wordlines)
and columns (bitlines). Due to physical pin limitations, all the
information required to service a memory request must be supplied
over multiple commands. The Row Address Strobe (RAS) command
sends the row and bank address to select one of the banks and a row
within that bank. Then, an entire row (usually 1 to 2KB per chip)
is read out into latch-based sense amplifiers which comprise the row
buffer [2]. The Column Address Strobe (CAS) command then selects
a subset (i.e., column) of data from the row buffer (8B in a DDR3
×8 device [6]). Thus, a DRAM access first fetches many kilobytes
of data into the row buffer (RAS) and, in the worst case, uses only
a tiny portion of it (CAS). If multiple columns of the row buffer are
needed, multiple consecutive CAS commands can be issued, which
amortizes the cost of fetching the large row into the row buffer.
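To make the RAS/CAS sequence concrete, here is a minimal sketch of a bank's row buffer behavior (our illustration, not the authors' simulator; the 1KB/8B sizes follow the text above):

```python
# Minimal model of the RAS/CAS access sequence described above
# (illustrative sketch only, not the paper's simulator).
ROW_SIZE = 1024   # bytes fetched per RAS (1KB row per chip)
COL_SIZE = 8      # bytes returned per CAS (DDR3 x8 device [6])

class DramBank:
    def __init__(self):
        self.open_row = None              # row latched in the row buffer

    def access(self, row, col):
        """Return ('hit' or 'miss', bytes sensed from the array)."""
        if row == self.open_row:
            return "hit", 0               # CAS only: served from the buffer
        self.open_row = row               # RAS: sense the entire row
        return "miss", ROW_SIZE           # worst case, only COL_SIZE is used

bank = DramBank()
print(bank.access(3, 0))   # ('miss', 1024): fetch 1KB to return 8B
print(bank.access(3, 8))   # ('hit', 0): a second CAS amortizes the fetch
```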
To illustrate how much buffered data is actually used in real
applications, Figure 1a shows average row buffer locality (row hit
rate) when employing various row buffer sizes on several system con-
figurations using the FR-FCFS scheduling policy [8].¹ In particular,
we show 1- and 8-core systems employing two different schemes
for mapping data in main memory: (1) row interleaving, which
places consecutive memory addresses in the same row, and (2) block
interleaving, which stripes data in consecutive memory addresses
(usually cache blocks) across different banks. Row interleaving helps
exploit row buffer locality by enabling data with spatial locality
to reside in the same row buffer, while block interleaving aims
to improve memory parallelism by enabling concurrent access of
memory channels/banks for consecutive memory addresses.
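The two mappings differ only in which address bits select the bank; a sketch of both schemes (our own illustration, with arbitrary field widths):

```python
# Illustrative address decomposition for row vs. cache block interleaving
# (bit widths are arbitrary and chosen only to show the idea).
NUM_BANKS, ROW_BYTES, BLOCK_BYTES = 8, 1024, 64

def row_interleaved(addr):
    # Consecutive addresses fill a row before moving on, so spatially
    # local accesses find their data in the same open row.
    row_id = addr // ROW_BYTES
    return {"bank": row_id % NUM_BANKS, "row": row_id // NUM_BANKS,
            "offset": addr % ROW_BYTES}

def block_interleaved(addr):
    # Consecutive cache blocks stripe across banks: better parallelism,
    # less row buffer locality.
    block_id = addr // BLOCK_BYTES
    blocks_per_row = ROW_BYTES // BLOCK_BYTES
    local = block_id // NUM_BANKS          # block index within its bank
    return {"bank": block_id % NUM_BANKS, "row": local // blocks_per_row,
            "offset": (local % blocks_per_row) * BLOCK_BYTES
                      + addr % BLOCK_BYTES}

for a in range(0, 256, 64):                # four consecutive cache blocks
    print(row_interleaved(a)["bank"], block_interleaved(a)["bank"])
# row interleaved: banks 0,0,0,0 (one open row); block interleaved: 0,1,2,3
```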
Comparing the 1- and 8-core row-interleaved data, we see that
while row interleaving does enable more row buffer locality, its
benefits diminish as memory system contention increases with more
cores: row buffer hit rate is less than 50% for row interleaving even
with large, 1KB rows. Block interleaving reduces row buffer locality
over row interleaving, to less than 10% in the 8-core case. While
it is clear that row locality is lower on multi-core systems, what
is less obvious is how row buffer size affects system-level tradeoffs,
such as energy-efficiency, performance, and durability, in NVM main
memories. This work evaluates these tradeoffs.
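For reference, the FR-FCFS policy [8] used in these measurements prioritizes row buffer hits over older requests; a simplified sketch (ours, ignoring device timing constraints):

```python
# Simplified FR-FCFS request selection [8]: first-ready (row hits) first,
# then first-come-first-served among the remaining candidates.
def fr_fcfs(queue, open_rows):
    """queue: list of (arrival_time, bank, row); open_rows: bank -> open row."""
    hits = [r for r in queue if open_rows.get(r[1]) == r[2]]
    pool = hits if hits else queue
    return min(pool, key=lambda r: r[0])   # oldest within the chosen class

queue = [(0, 0, 5), (1, 1, 9), (2, 0, 7)]
print(fr_fcfs(queue, {0: 7}))  # (2, 0, 7): the row hit beats older misses
```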
[Figure 1: Row size affects row locality (a); our NVM architecture (b). Panel (a) plots row buffer locality against per-DRAM-chip row buffer sizes from 8B to 1KB for 1- and 8-core systems under row and cache block interleaving. Panel (b) shows a bank's memory array with its row decoder, row buffer, and column decoder/multiplexer.]

¹Application-aware memory request scheduling policies (e.g., [1, 3, 7]) provide better performance, but they can reduce row buffer locality.

[Figure 2: Multi-core results for energy (normalized to DRAM with 1KB rows), performance, and number of writes (normalized to 1KB rows), across per-DRAM-chip row buffer sizes from 8B to 1KB: (a) memory energy with block interleaving (STT-RAM, DRAM, PCM); (b) performance (weighted speedup) with block interleaving; (c) writes with and without a 32MB cache.]
III. A SMALL ROW BUFFER NVM ARCHITECTURE
Figure 1(b) shows the organization of our NVM architecture. Com-
pared to a traditional DRAM organization, the physical placement of
the row buffer and the column multiplexer (part of the I/O gating
circuitry in DRAM designs) are swapped in the data path (shown in
gray). This rearrangement makes better use of resources by sharing a
smaller number of sense amplifiers (the devices which store bits in the
row buffer) among multiple bitlines. Note that this is not possible in
DRAM (without reducing the row size) because a sense amplifier for
each bit in the row is required in DRAM to restore the charge of the
cell after it is read. Unlike DRAM, however, our organization requires
decoding both the row address and the column address during a RAS
command, so that only a subset of the row containing the bits of
interest will be selected, sensed, and stored in the row buffer. During
a CAS command, the data bits from the row buffer corresponding to
the desired column are further selected by the I/O gating circuitry
and sent to a prefetch buffer.²

²For more details, please refer to our accompanying tech report [5].
While related prior work [4] employed multiple, narrow rows
in a PCM main memory for reducing array reads and writes, it
focused on (1) a traditional DRAM data path design, (2) an iso-
area reorganization, requiring more area overhead than our technique
which employs smaller row buffers, and (3) assumed a standard
DRAM protocol for device access.
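The datapath change can be summarized in a short sketch (our abstraction with illustrative sizes, not the authors' design): because reads are non-destructive, the RAS can take column bits as well and sense only one slice of the row.

```python
# Sketch of the small-row-buffer NVM access path (our abstraction;
# sizes illustrative). Non-destructive reads mean only the addressed
# slice of a row needs sense amplifiers.
ROW_BYTES, ROW_BUFFER_BYTES = 1024, 64    # one sense amp per 16 bitlines

class NvmBank:
    def __init__(self):
        self.buffered = None              # (row, slice) held in the buffer

    def ras(self, row, col):
        # Row AND column bits are decoded at RAS time (unlike DRAM),
        # selecting which 64B slice of the row to sense.
        slice_id = col // ROW_BUFFER_BYTES
        if self.buffered == (row, slice_id):
            return 0                      # row buffer hit: no array access
        self.buffered = (row, slice_id)
        return ROW_BUFFER_BYTES           # bytes sensed from the array

bank = NvmBank()
assert bank.ras(7, 0) == 64    # senses 64B instead of the whole 1KB row
assert bank.ras(7, 32) == 0    # same slice: served from the row buffer
```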
IV. RESULTS
We developed a cycle-accurate DDR3 memory simulator which we
use as part of an in-house x86 multi-core simulator, whose front-end
is based on Pin. We modify our memory simulator timings according
to those in Table I for PCM and STT-RAM. We show results for an
8-core system with different memories and row buffer sizes, though
reducing row buffer size in DRAM incurs significant area overhead
and chip cost, as discussed in [10, 11], which we do not evaluate.
We evaluate 31 multiprogrammed workloads composed of SPEC,
TPC, and STREAM benchmarks. We will focus on a DRAM chip
micro-architecture with 1KB row buffers and block interleaving as
our baseline (our findings are similar for row interleaving [5]).
Technology   Energy (Read/Write)   Latency (Read/Write)
PCM          2×/100×               5×/10×
STT-RAM      0.5×/1×               1×/1×
TABLE I: NVM array parameters, relative to DRAM.
Energy (Figure 2a): In all cases, reducing the row buffer size can
significantly reduce memory energy consumption, though there are
diminishing marginal returns. The diminishing marginal returns are
because, as the row buffer size decreases, memory energy becomes
dominated by the energy required to transfer data between the row
buffer and I/O pads during read and write operations.
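A toy model makes the diminishing returns visible (our own arithmetic; the per-byte constants are invented purely to show the shape of the curve):

```python
# Toy per-miss dynamic energy: array energy grows with the row buffer
# size, while the I/O cost of moving the requested block is fixed.
# Constants are invented for illustration only.
E_ARRAY, E_IO, BLOCK = 1.0, 4.0, 64       # pJ/B array, pJ/B I/O, bytes moved

def energy_per_miss(row_buffer_bytes):
    return E_ARRAY * row_buffer_bytes + E_IO * BLOCK

for size in (1024, 512, 128, 64, 32):
    print(f"{size:5d}B -> {energy_per_miss(size):6.0f} pJ")
# Halving 1KB -> 512B saves 40% of miss energy; halving 64B -> 32B saves
# only 10%, because the fixed I/O term now dominates.
```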
A modest row buffer size of 64B per chip leads to 47%/67%
less main memory energy for PCM/STT-RAM, compared to an all-
DRAM main memory with large rows (1KB per chip). Note that this
reduction is achieved despite worse underlying technology parameters
than DRAM (cf. Table I) because the energy saved by reducing the
row buffer size more than makes up for the higher average memory
array access energy. Hence, an NVM main memory with smaller
row buffers can significantly reduce memory energy consumption
compared to a DRAM baseline with large row buffers.
Performance (Figure 2b): We evaluate the performance of our
system using the weighted speedup metric [9] (higher is better). For
a given memory technology, reducing the row buffer size does not
greatly affect system performance due to the already low row buffer
locality present on our multi-core system (cf. Figure 1a). Interestingly,
with similar technology-dependent timing parameters as DRAM, an
STT-RAM main memory can achieve better performance because our
new organization enables a more efficient access protocol (detailed
in [5]) which eliminates the precharge delay incurred on row buffer
misses, and relaxes the tRRD and tFAW timing parameters to enable
more banks to be accessed simultaneously.
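For reference, weighted speedup [9] is conventionally computed as follows (standard definition; not restated in the paper itself):

```latex
% Weighted speedup over N co-scheduled applications [9], where
% IPC_i^shared is measured in the multiprogrammed mix and
% IPC_i^alone with application i running by itself:
\mathrm{WS} = \sum_{i=1}^{N} \frac{\mathrm{IPC}_i^{\mathrm{shared}}}{\mathrm{IPC}_i^{\mathrm{alone}}}
```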
Durability (Figure 2c): NVM cells have a limited lifetime in terms
of the number of times they can be written to before their ability
to store data fails. We examine the effects of different row buffer
sizes on device durability with and without a small 32MB e-DRAM
cache to a PCM main memory. We find that with or without a cache,
decreasing the row buffer size has only a small effect on the number
of NVM writes performed due to the low row buffer locality present
in the system. In contrast, the addition of a reasonably-sized e-DRAM
cache has a large impact on the reduction of writes, decreasing the
number of writes by 39% to 47% across the various row buffer sizes.
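To see why the write count is the quantity that matters, a rough lifetime estimate (our own arithmetic; endurance, capacity, and write rate are assumed values, not from the paper):

```python
# Back-of-the-envelope PCM lifetime under ideal wear-leveling
# (all numbers assumed for illustration; not from the paper).
ENDURANCE    = 1e8            # writes per cell, a typical PCM figure
CAPACITY     = 16 * 2**30     # 16 GiB of PCM main memory
LINE_BYTES   = 64             # bytes per memory write
WRITES_PER_S = 1e7            # sustained write rate

total_line_writes = ENDURANCE * (CAPACITY // LINE_BYTES)
years = total_line_writes / WRITES_PER_S / (3600 * 24 * 365)
print(f"~{years:.0f} years")  # ~85 years here; cutting writes by 39-47%
                              # (the e-DRAM cache) extends this accordingly
```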
V. CONCLUSIONS
We showed that on a multi-core system, reducing the row buffer
size can greatly reduce main memory dynamic energy compared
to a DRAM baseline with large rows, without greatly affecting
performance and durability. Our future work includes exploring
architectural techniques which effectively leverage small row buffer
sizes for improved performance and energy-efficiency.
REFERENCES
[1] Y. Kim et al. ATLAS: A scalable and high-performance scheduling algorithm for multiple memory controllers. HPCA '10.
[2] Y. Kim et al. A case for exploiting subarray-level parallelism (SALP) in DRAM. ISCA '12.
[3] Y. Kim et al. Thread cluster memory scheduling: Exploiting differences in memory access behavior. MICRO '10.
[4] B. C. Lee, E. Ipek, O. Mutlu, and D. Burger. Architecting phase change memory as a scalable DRAM alternative. ISCA '09.
[5] J. Meza, J. Li, and O. Mutlu. A case for small row buffers in non-volatile main memories. http://safari.ece.cmu.edu/tr/tr-2012-002.pdf.
[6] Micron. 1Gb: ×4, ×8, ×16 DDR3 SDRAM data sheet. http://download.micron.com/pdf/datasheets/dram/ddr/1GbDDRx4x8x16.pdf.
[7] S. P. Muralidhara et al. Reducing memory interference in multicore systems via application-aware memory channel partitioning. MICRO '11.
[8] S. Rixner, W. J. Dally, U. J. Kapasi, P. Mattson, and J. D. Owens. Memory access scheduling. ISCA '00.
[9] A. Snavely et al. Symbiotic jobscheduling for a simultaneous multithreading processor. ASPLOS '00.
[10] K. Sudan, N. Chatterjee, D. Nellans, et al. Micro-pages: Increasing DRAM efficiency with locality-aware data placement. ASPLOS '10.
[11] A. N. Udipi, N. Muralimanohar, N. Chatterjee, et al. Rethinking DRAM design and organization for energy-constrained multi-cores. ISCA '10.
Citations
DOI
24 Jun 2013
TL;DR: The goal of this work is to explore the design of a Persistent Memory Manager that coordinates the management of memory and storage under a single hardware unit in a single address space and shows that such a system with a persistent memory can improve energy efficiency and performance.
Abstract: Most applications manipulate persistent data, yet traditional systems decouple data manipulation from persistence in a two-level storage model. Programming languages and system software manipulate data in one set of formats in volatile main memory (DRAM) using a load/store interface, while storage systems maintain persistence in another set of formats in non-volatile memories, such as Flash and hard disk drives in traditional systems, using a file system interface. Unfortunately, such an approach suffers from the system performance and energy overheads of locating data, moving data, and translating data between the different formats of these two levels of storage that are accessed via two vastly different interfaces. Yet today, new non-volatile memory (NVM) technologies show the promise of storage capacity and endurance similar to or better than Flash at latencies comparable to DRAM, making them prime candidates for providing applications a persistent single-level store with a single load/store interface to access all system data. Our key insight is that in future systems equipped with NVM, the energy consumed executing operating system and file system code to access persistent data in traditional systems becomes an increasingly large contributor to total energy. The goal of this work is to explore the design of a Persistent Memory Manager that coordinates the management of memory and storage under a single hardware unit in a single address space. Our initial simulation-based exploration shows that such a system with a persistent memory can improve energy efficiency and performance by eliminating the instructions and data movement traditionally used to perform I/O operations.

103 citations


Cites background from "A case for small row buffers in non..."

  • ...a DRAM cache, as others have shown in the context of heterogeneous main memories [21, 30, 31, 37, 38, 49]....

    [...]

Proceedings ArticleDOI
01 Sep 2017
TL;DR: UH-MEM as discussed by the authors is a page management mechanism for various hybrid memories that systematically estimates the utility of migrating a page between different memory types, and uses this information to guide data placement.
Abstract: While the memory footprints of cloud and HPC applications continue to increase, fundamental issues with DRAM scaling are likely to prevent traditional main memory systems, composed of monolithic DRAM, from greatly growing in capacity. Hybrid memory systems can mitigate the scaling limitations of monolithic DRAM by pairing together multiple memory technologies (e.g., different types of DRAM, or DRAM and non-volatile memory) at the same level of the memory hierarchy. The goal of a hybrid main memory is to combine the different advantages of the multiple memory types in a cost-effective manner while avoiding the disadvantages of each technology. Memory pages are placed in and migrated between the different memories within a hybrid memory system, based on the properties of each page. It is important to make intelligent page management (i.e., placement and migration) decisions, as they can significantly affect system performance. In this paper, we propose utility-based hybrid memory management (UH-MEM), a new page management mechanism for various hybrid memories, that systematically estimates the utility (i.e., the system performance benefit) of migrating a page between different memory types, and uses this information to guide data placement. UH-MEM operates in two steps. First, it estimates how much a single application would benefit from migrating one of its pages to a different type of memory, by comprehensively considering access frequency, row buffer locality, and memory-level parallelism. Second, it translates the estimated benefit of a single application to an estimate of the overall system performance benefit from such a migration. We evaluate the effectiveness of UH-MEM with various types of hybrid memories, and show that it significantly improves system performance on each of these hybrid memories. For a memory system with DRAM and non-volatile memory, UH-MEM improves performance by 14% on average (and up to 26%) compared to the best of three evaluated state-of-the-art mechanisms across a large number of data-intensive workloads.

78 citations


Cites background or methods from "A case for small row buffers in non..."

  • ...The detailed DRAM and NVM timing and energy parameters are based on prior studies [53, 54, 78, 79, 81]....

    [...]

  • ...Previous works on hybrid memory systems observe that the latency of a row buffer hit is similar across memory types, while the latency of a row buffer conflict/miss is generally much higher in denser memories [53,54,55,78,79,126]....

    [...]

Book ChapterDOI
TL;DR: RowClone, a mechanism that exploits DRAM technology to perform bulk copy and initialization operations completely inside main memory, and a complementary work that uses DRAM to perform bulk bitwise AND and OR operations inside main memory, significantly improve the performance and energy efficiency of the respective operations.
Abstract: In existing systems, the off-chip memory interface allows the memory controller to perform only read or write operations. Therefore, to perform any operation, the processor must first read the source data and then write the result back to memory after performing the operation. This approach consumes high latency, bandwidth, and energy for operations that work on a large amount of data. Several works have proposed techniques to process data near memory by adding a small amount of compute logic closer to the main memory chips. In this chapter, we describe two techniques proposed by recent works that take this approach of processing in memory further by exploiting the underlying operation of the main memory technology to perform more complex tasks. First, we describe RowClone, a mechanism that exploits DRAM technology to perform bulk copy and initialization operations completely inside main memory. We then describe a complementary work that uses DRAM to perform bulk bitwise AND and OR operations inside main memory. These two techniques significantly improve the performance and energy efficiency of the respective operations.

75 citations

Journal ArticleDOI
TL;DR: This dissertation provides a detailed analysis of DRAM latency by using both circuit-level simulation with a detailed DRAM model and FPGA-based profiling of real DRAM modules, and proposes a new technique, Architectural-Variation-Aware DRAM (AVA-DRAM), which reduces DRAM latency at low cost.
Abstract: In modern systems, DRAM-based main memory is significantly slower than the processor. Consequently, processors spend a long time waiting to access data from main memory, making the long main memory access latency one of the most critical bottlenecks to achieving high system performance. Unfortunately, the latency of DRAM has remained almost constant in the past decade. This is mainly because DRAM has been optimized for cost-per-bit, rather than access latency. As a result, DRAM latency is not reducing with technology scaling, and continues to be an important performance bottleneck in modern and future systems. This dissertation seeks to achieve low latency DRAM-based memory systems at low cost in three major directions. The key idea of these three major directions is to enable and exploit latency heterogeneity in DRAM architecture. First, based on the observation that long bitlines in DRAM are one of the dominant sources of DRAM latency, we propose a new DRAM architecture, Tiered-Latency DRAM (TL-DRAM), which divides the long bitline into two shorter segments using an isolation transistor, allowing one segment to be accessed with reduced latency. Second, we propose a fine-grained DRAM latency reduction mechanism, Adaptive-Latency DRAM, which optimizes DRAM latency for the common operating conditions of individual DRAM modules. We observe that DRAM manufacturers incorporate a very large timing margin as a provision against the worst-case operating conditions, which is accessing the slowest cell across all DRAM products with the worst latency at the highest temperature, even though such a slowest cell and such an operating condition are rare. Our mechanism dynamically optimizes DRAM latency to the current operating condition of the accessed DRAM module, thereby reliably improving system performance. Third, we observe that cells closer to the peripheral logic can be much faster than cells farther from the peripheral logic (a phenomenon we call architectural variation). Based on this observation, we propose a new technique, Architectural-Variation-Aware DRAM (AVA-DRAM), which reduces DRAM latency at low cost, by profiling and identifying only the inherently slower regions in DRAM to dynamically determine the lowest latency DRAM can operate at without causing failures. This dissertation provides a detailed analysis of DRAM latency by using both circuit-level simulation with a detailed DRAM model and FPGA-based profiling of real DRAM modules. Our latency analysis shows that our low latency DRAM mechanisms enable significant latency reductions, leading to large improvements in both system performance and energy efficiency across a variety of workloads in our evaluated systems, while ensuring reliable DRAM operation.

47 citations


Cites methods from "A case for small row buffers in non..."

  • ...These technologies include Phase Change Memory (PCM) [57, 134, 135, 136, 168, 216, 221, 225, 282, 292], Spin-Transfer Torque Magnetic Memory (STT-MRAM) [131, 149, 168], Resistive RAM [281] or memristors [44, 202], and Conductive Bridging Memory (CB-RAM) [133]....

    [...]

Book ChapterDOI
01 Jan 2015
TL;DR: The memory system is a fundamental performance and energy bottleneck in almost all computing systems and is experiencing difficult technology scaling challenges that make the maintenance and enhancement of its capacity, energy-efficiency, and reliability significantly more costly with conventional techniques.
Abstract: The memory system is a fundamental performance and energy bottleneck in almost all computing systems. Recent system design, application, and technology trends that require more capacity, bandwidth, efficiency, and predictability out of the memory system make it an even more important system bottleneck. At the same time, DRAM technology is experiencing difficult technology scaling challenges that make the maintenance and enhancement of its capacity, energy-efficiency, and reliability significantly more costly with conventional techniques.

46 citations

References
Proceedings ArticleDOI
20 Jun 2009
TL;DR: This work proposes, crafted from a fundamental understanding of PCM technology parameters, area-neutral architectural enhancements that address these limitations and make PCM competitive with DRAM.
Abstract: Memory scaling is in jeopardy as charge storage and sensing mechanisms become less reliable for prevalent memory technologies, such as DRAM. In contrast, phase change memory (PCM) storage relies on scalable current and thermal mechanisms. To exploit PCM's scalability as a DRAM alternative, PCM must be architected to address relatively long latencies, high energy writes, and finite endurance.We propose, crafted from a fundamental understanding of PCM technology parameters, area-neutral architectural enhancements that address these limitations and make PCM competitive with DRAM. A baseline PCM system is 1.6x slower and requires 2.2x more energy than a DRAM system. Buffer reorganizations reduce this delay and energy gap to 1.2x and 1.0x, using narrow rows to mitigate write energy and multiple rows to improve locality and write coalescing. Partial writes enhance memory endurance, providing 5.6 years of lifetime. Process scaling will further reduce PCM energy costs and improve endurance.

1,568 citations

Proceedings ArticleDOI
01 May 2000
TL;DR: This paper introduces memory access scheduling, a technique that improves the performance of a memory system by reordering memory references to exploit locality within the 3-D memory structure.
Abstract: The bandwidth and latency of a memory system are strongly dependent on the manner in which accesses interact with the “3-D” structure of banks, rows, and columns characteristic of contemporary DRAM chips. There is nearly an order of magnitude difference in bandwidth between successive references to different columns within a row and different rows within a bank. This paper introduces memory access scheduling, a technique that improves the performance of a memory system by reordering memory references to exploit locality within the 3-D memory structure. Conservative reordering, in which the first ready reference in a sequence is performed, improves bandwidth by 40% for traces from five media benchmarks. Aggressive reordering, in which operations are scheduled to optimize memory bandwidth, improves bandwidth by 93% for the same set of applications. Memory access scheduling is particularly important for media processors where it enables the processor to make the most efficient use of scarce memory bandwidth.

1,009 citations

Journal ArticleDOI
12 Nov 2000
TL;DR: It is demonstrated that performance on a hardware multithreaded processor is sensitive to the set of jobs that are coscheduled by the operating system jobscheduler, and that a small sample of the possible schedules is sufficient to identify a good schedule quickly.
Abstract: Simultaneous Multithreading machines fetch and execute instructions from multiple instruction streams to increase system utilization and speed up the execution of jobs. When there are more jobs in the system than there is hardware to support simultaneous execution, the operating system scheduler must choose the set of jobs to coschedule. This paper demonstrates that performance on a hardware multithreaded processor is sensitive to the set of jobs that are coscheduled by the operating system jobscheduler. Thus, the full benefits of SMT hardware can only be achieved if the scheduler is aware of thread interactions. Here, a mechanism is presented that allows the scheduler to significantly raise the performance of SMT architectures. This is done without any advance knowledge of a workload's characteristics, using sampling to identify jobs which run well together. We demonstrate an SMT jobscheduler called SOS. SOS combines an overhead-free sample phase which collects information about various possible schedules, and a symbiosis phase which uses that information to predict which schedule will provide the best performance. We show that a small sample of the possible schedules is sufficient to identify a good schedule quickly. On a system with random job arrivals and departures, response time is improved as much as 17% over a schedule which does not incorporate symbiosis.

619 citations

Proceedings ArticleDOI
01 Apr 2010
TL;DR: It is shown that the implementation of least-attained-service thread prioritization reduces the time the cores spend stalling and significantly improves system throughput, and ATLAS's performance benefit increases as the number of cores increases.
Abstract: Modern chip multiprocessor (CMP) systems employ multiple memory controllers to control access to main memory. The scheduling algorithm employed by these memory controllers has a significant effect on system throughput, so choosing an efficient scheduling algorithm is important. The scheduling algorithm also needs to be scalable — as the number of cores increases, the number of memory controllers shared by the cores should also increase to provide sufficient bandwidth to feed the cores. Unfortunately, previous memory scheduling algorithms are inefficient with respect to system throughput and/or are designed for a single memory controller and do not scale well to multiple memory controllers, requiring significant fine-grained coordination among controllers. This paper proposes ATLAS (Adaptive per-Thread Least-Attained-Service memory scheduling), a fundamentally new memory scheduling technique that improves system throughput without requiring significant coordination among memory controllers. The key idea is to periodically order threads based on the service they have attained from the memory controllers so far, and prioritize those threads that have attained the least service over others in each period. The idea of favoring threads with least-attained-service is borrowed from the queueing theory literature, where, in the context of a single-server queue it is known that least-attained-service optimally schedules jobs, assuming a Pareto (or any decreasing hazard rate) workload distribution. After verifying that our workloads have this characteristic, we show that our implementation of least-attained-service thread prioritization reduces the time the cores spend stalling and significantly improves system throughput. Furthermore, since the periods over which we accumulate the attained service are long, the controllers coordinate very infrequently to form the ordering of threads, thereby making ATLAS scalable to many controllers. We evaluate ATLAS on a wide variety of multiprogrammed SPEC 2006 workloads and systems with 4–32 cores and 1–16 memory controllers, and compare its performance to five previously proposed scheduling algorithms. Averaged over 32 workloads on a 24-core system with 4 controllers, ATLAS improves instruction throughput by 10.8%, and system throughput by 8.4%, compared to PAR-BS, the best previous CMP memory scheduling algorithm. ATLAS's performance benefit increases as the number of cores increases.

439 citations

Proceedings ArticleDOI
04 Dec 2010
TL;DR: This paper presents a new memory scheduling algorithm that addresses system throughput and fairness separately with the goal of achieving the best of both, and evaluates TCM on a wide variety of multiprogrammed workloads and compares its performance to four previously proposed scheduling algorithms, finding that TCM achieves both the best system throughputand fairness.
Abstract: In a modern chip-multiprocessor system, memory is a shared resource among multiple concurrently executing threads. The memory scheduling algorithm should resolve memory contention by arbitrating memory access in such a way that competing threads progress at a relatively fast and even pace, resulting in high system throughput and fairness. Previously proposed memory scheduling algorithms are predominantly optimized for only one of these objectives: no scheduling algorithm provides the best system throughput and best fairness at the same time. This paper presents a new memory scheduling algorithm that addresses system throughput and fairness separately with the goal of achieving the best of both. The main idea is to divide threads into two separate clusters and employ different memory request scheduling policies in each cluster. Our proposal, Thread Cluster Memory scheduling (TCM), dynamically groups threads with similar memory access behavior into either the latency-sensitive (memory-non-intensive) or the bandwidth-sensitive (memory-intensive) cluster. TCM introduces three major ideas for prioritization: 1) we prioritize the latency-sensitive cluster over the bandwidth-sensitive cluster to improve system throughput, 2) we introduce a ``niceness'' metric that captures a thread's propensity to interfere with other threads, 3) we use niceness to periodically shuffle the priority order of the threads in the bandwidth-sensitive cluster to provide fair access to each thread in a way that reduces inter-thread interference. On the one hand, prioritizing memory-non-intensive threads significantly improves system throughput without degrading fairness, because such ``light'' threads only use a small fraction of the total available memory bandwidth. On the other hand, shuffling the priority order of memory-intensive threads improves fairness because it ensures no thread is disproportionately slowed down or starved. We evaluate TCM on a wide variety of multiprogrammed workloads and compare its performance to four previously proposed scheduling algorithms, finding that TCM achieves both the best system throughput and fairness. Averaged over 96 workloads on a 24-core system with 4 memory channels, TCM improves system throughput and reduces maximum slowdown by 4.6%/38.6% compared to ATLAS (previous work providing the best system throughput) and 7.6%/4.6% compared to PAR-BS (previous work providing the best fairness).

375 citations

Frequently Asked Questions (2)
Q1. What have the authors contributed in "A case for small row buffers in non-volatile main memories"?

In this work, the authors discuss and evaluate architectural changes to enable small row buffers at a low cost in NVMs. The authors find that on a multi-core system, reducing the row buffer size can greatly reduce main memory dynamic energy compared to a DRAM baseline with large row sizes, without greatly affecting endurance, and for some NVM technologies, leads to improved performance. 

Their future work includes exploring architectural techniques which effectively leverage small row buffer sizes for improved performance and energy-efficiency.