12 Hits in 2.2 sec

ChargeCache: Reducing DRAM latency by exploiting row access locality

Hasan Hassan, Gennady Pekhimenko, Nandita Vijaykumar, Vivek Seshadri, Donghyuk Lee, Oguz Ergin, Onur Mutlu
2016 IEEE International Symposium on High Performance Computer Architecture (HPCA)
Executive Summary. Goal: reduce average DRAM access latency with no modification to the existing DRAM chips. Observations: 1) a highly-charged DRAM row can be accessed with low latency; 2) a row's  ...  Bound: Low-Latency DRAM works as ChargeCache with a 100% hit ratio, using tRCD = 7 and tRAS = 20 cycles on all DRAM accesses.  ... 
doi:10.1109/hpca.2016.7446096 dblp:conf/hpca/HassanPVSLEM16 fatcat:rw5jdwnmgnhkhecyu7cigvv2r4

Exploiting Row-Level Temporal Locality in DRAM to Reduce the Memory Access Latency [article]

Hasan Hassan, Gennady Pekhimenko, Nandita Vijaykumar, Vivek Seshadri, Donghyuk Lee, Oguz Ergin, Onur Mutlu
2018 arXiv   pre-print
If a later DRAM request hits in that table, the memory controller uses lower timing parameters, leading to reduced DRAM latency.  ...  In this work, we develop a low-cost mechanism, called ChargeCache, that enables faster access to recently-accessed rows in DRAM, with no modifications to DRAM chips.  ...  This work is supported in part by NSF grants 1212962, 1320531, and 1409723, the Intel Science and Technology Center for Cloud Computing, and the Semiconductor Research Corporation.  ... 
arXiv:1805.03969v1 fatcat:dtbttcvk35b67batezwu5f4jxm
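The ChargeCache idea summarized above — the memory controller tracks recently-accessed rows in a small table and, on a later hit, uses lowered timing parameters — can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the table capacity, the class and method names, and the default cycle counts are assumptions (the lowered tRCD = 7 and tRAS = 20 come from the executive summary above); the real mechanism also expires entries as charge leaks over time, which is omitted here for brevity.

```python
from collections import OrderedDict

# Assumed default activation timings (cycles); the lowered values are the
# tRCD = 7, tRAS = 20 cycles quoted in the paper's executive summary.
DEFAULT_TIMING = {"tRCD": 11, "tRAS": 28}
LOWERED_TIMING = {"tRCD": 7, "tRAS": 20}

class ChargeCacheSketch:
    """Hypothetical sketch of a ChargeCache-style table in the memory
    controller, tracking recently-closed (hence highly-charged) rows."""

    def __init__(self, capacity=128):
        self.capacity = capacity
        self.table = OrderedDict()  # (bank, row) keys, LRU-ordered

    def on_row_close(self, bank, row):
        # Remember a recently-closed row; evict the oldest entry on overflow.
        self.table.pop((bank, row), None)
        self.table[(bank, row)] = True
        if len(self.table) > self.capacity:
            self.table.popitem(last=False)

    def timing_for_access(self, bank, row):
        # Hit: the row was closed recently, so its cells are still highly
        # charged and it can be activated with lowered timing parameters.
        if (bank, row) in self.table:
            return LOWERED_TIMING
        return DEFAULT_TIMING

cc = ChargeCacheSketch()
cc.on_row_close(bank=0, row=42)
print(cc.timing_for_access(0, 42))  # recently closed: lowered timings
print(cc.timing_for_access(0, 7))   # miss: default timings
```

The design point the papers emphasize is that all of this lives in the memory controller, so DRAM chips need no modification.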

Reducing DRAM Access Latency by Exploiting DRAM Leakage Characteristics and Common Access Patterns [article]

Hasan Hassan
2016 arXiv   pre-print
In this thesis, we develop a low-cost mechanism, called ChargeCache, which enables faster access to recently-accessed rows in DRAM, with no modifications to DRAM chips.  ...  If a later DRAM request hits in that table, the memory controller uses lower timing parameters, leading to reduced DRAM latency.  ...  We refer to this as Row-Level Temporal Locality (RLTL) (see Section 3). • We propose an efficient mechanism, ChargeCache [29], which exploits RLTL to reduce the average DRAM access latency by requiring  ... 
arXiv:1609.07234v1 fatcat:5iuox7vjmndu3dciwbvlzpc5hu

Recent Advances in Overcoming Bottlenecks in Memory Systems and Managing Memory Resources in GPU Systems [article]

Onur Mutlu, Saugata Ghose, Rachata Ausavarungnirun
2018 arXiv   pre-print
The compound effect of contention, high memory latency and access overheads, as well as inefficient management of resources, greatly degrades performance, quality-of-service, and energy efficiency.  ...  This article features extended summaries and retrospectives of some of the recent research done by our research group, SAFARI, on (1) various critical problems in memory systems and (2) how memory system  ...  ChargeCache is a new mechanism that takes advantage of the high charge held within a recently-closed row to reduce the access latency to such a row when it is accessed again soon in the future.  ... 
arXiv:1805.06407v2 fatcat:vn5mju4fivgtjho7buv6pauaga

Exploiting the DRAM Microarchitecture to Increase Memory-Level Parallelism [article]

Yoongu Kim, Vivek Seshadri, Donghyuk Lee, Jamie Liu, Onur Mutlu
2018 arXiv   pre-print
Our three proposed mechanisms mitigate the negative impact of bank serialization by overlapping different components of the bank access latencies of multiple requests that go to different subarrays within  ...  The key observation exploited by our mechanisms is that a modern DRAM bank is implemented as a collection of subarrays that operate largely independently while sharing few global peripheral structures.  ...  This research was also partially supported by grants from NSF (CAREER Award CCF-0953246), GSRC, and Intel ARO Memory Hierarchy Program.  ... 
arXiv:1805.01966v1 fatcat:wv3fnjmd7vcc3hy2ktlyizug7e

Adaptive-Latency DRAM: Reducing DRAM Latency by Exploiting Timing Margins [article]

Donghyuk Lee, Yoongu Kim, Gennady Pekhimenko, Samira Khan, Vivek Seshadri, Kevin Chang, Onur Mutlu
2018 arXiv   pre-print
AL-DRAM is a mechanism that optimizes DRAM latency based on the DRAM module and the operating temperature, by exploiting the extra margin that is built into the DRAM timing parameters.  ...  The timing parameter margin ensures that the slow outlier chips operate reliably at the worst-case temperature, and hence leads to a high access latency.  ...  Donghyuk Lee was supported in part by the John and Claire Bertucci Graduate Fellowship.  ... 
arXiv:1805.03047v1 fatcat:5iebwkma75bfpomxmetkw4lgf4
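The AL-DRAM snippet above describes choosing DRAM timings per module and per operating temperature, rather than one worst-case set, to reclaim the built-in timing margin. A minimal sketch of that selection logic, assuming a per-module profile of tested-reliable timings keyed by temperature ceiling (the cycle counts and thresholds below are illustrative, not the paper's measured values):

```python
# Worst-case specification timings (cycles): assumed values for illustration.
WORST_CASE = {"tRCD": 11, "tRAS": 28}

def al_dram_timings(module_profile, temperature_c):
    """Pick timing parameters for this module at this temperature.

    module_profile maps a temperature ceiling (deg C) to timings that this
    specific module was profiled to operate reliably at, up to that
    temperature. Above every profiled ceiling, fall back to the
    worst-case specification timings.
    """
    for ceiling in sorted(module_profile):
        if temperature_c <= ceiling:
            return module_profile[ceiling]
    return WORST_CASE

# A hypothetical module that tolerates tighter timings when cool.
profile = {55: {"tRCD": 8, "tRAS": 22}, 85: {"tRCD": 10, "tRAS": 26}}
print(al_dram_timings(profile, 40))  # cool operation: reduced timings
print(al_dram_timings(profile, 90))  # hot operation: worst-case timings
```

This mirrors the key observation in the snippet: the standard timing margin exists to cover slow outlier chips at worst-case temperature, so typical modules at typical temperatures can run faster.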

Understanding and Improving the Latency of DRAM-Based Memory Systems [article]

Kevin K. Chang
2017 arXiv   pre-print
In stark contrast with capacity and bandwidth, DRAM latency has remained almost constant, reducing by only 1.3x in the same time frame.  ...  We also examine the critical relationship between supply voltage and latency in modern DRAM chips and develop new mechanisms that exploit this voltage-latency trade-off to improve energy efficiency.  ...  ChargeCache [108] enables faster access to recently-accessed rows in DRAM by tracking the addresses of recently-accessed rows in the memory controller. NUAT [315] enables  ... 
arXiv:1712.08304v1 fatcat:6y2nr2eowvb5fhr7km7azmkioe

Uncovering In-DRAM RowHammer Protection Mechanisms: A New Methodology, Custom RowHammer Patterns, and Implications [article]

Hasan Hassan, Yahya Can Tugrul, Jeremie S. Kim, Victor van der Veen, Kaveh Razavi, Onur Mutlu
2021 arXiv   pre-print
We show how U-TRR allows us to craft RowHammer access patterns that successfully circumvent the TRR mechanisms employed in 45 DRAM modules of the three major DRAM vendors.  ...  We find that the DRAM modules we analyze are vulnerable to RowHammer, having bit flips in up to 99.9% of all DRAM rows.  ...  Mutlu, "ChargeCache: Reducing DRAM Latency by Exploiting Row Access Locality,"  ... 
arXiv:2110.10603v1 fatcat:ab7zgdwb3vaqtbszjmyxuvngny

Enhancing Programmability, Portability, and Performance with Rich Cross-Layer Abstractions [article]

Nandita Vijaykumar
2019 arXiv   pre-print
While there is abundant research, and thus significant improvements, at different levels of the stack that address these very challenges, in this thesis, we observe that we are fundamentally limited by  ...  Due to the temporal locality in row accesses seen in most workloads, ChargeCache is able to significantly reduce DRAM access latency.  ...  ChargeCache tracks recently-accessed rows in each bank, and any subsequent access to a recently-accessed row is handled by the memory controller with reduced DRAM timing parameters.  ... 
arXiv:1911.05660v1 fatcat:w5f3g4isqbcphm2jjfzjtvrjnq

Practical Data Compression for Modern Memory Hierarchies [article]

Gennady Pekhimenko
2016 arXiv   pre-print
A key insight in our approach is that access time (including decompression latency) is critical in modern memory hierarchies.  ...  BDI exploits the existing low dynamic range of values present in many cache lines to compress them to smaller sizes using Base+Delta encoding.  ...  DRAM parallelism [120, 34], (iii) exploiting variation in DRAM latency (e.g., Adaptive-Latency DRAM [133], ChargeCache [77]), (iv) smarter refresh and scheduling mechanisms (e.g., [92, 147, 34,  ... 
arXiv:1609.02067v1 fatcat:i4z7m2ydtjgwvlwmglno26nb54

Hardware-Accelerated Platforms and Infrastructures for Network Functions: A Survey of Enabling Technologies and Research Studies

Prateek Shantharama, Akhilesh S. Thyagaturu, Martin Reisslein
2020 IEEE Access  
MEMORY 1) DRAM: Understanding the latency components of DRAM memory accesses facilitates the effective design of NF applications that exploit the locality of data within DRAM system memory with reduced latency.  ...  [267] have proposed a DRAM access strategy based on memory controller timing changes that achieves latency reductions of up to 9%. Conventionally, DRAM is accessed row by row.  ... 
doi:10.1109/access.2020.3008250 fatcat:kv4znpypqbatfk2m3lpzvzb2nu

Understanding and Improving the Latency of DRAM-Based Memory Systems

Kevin K. Chang
2018
latency for frequently-accessed data, and reduced preparation latency for subsequent accesses [...]  ...  In stark contrast with capacity and bandwidth, DRAM latency has remained almost constant, reducing by only 1.3x in the same time frame.  ...  ChargeCache [105] enables faster access to recently-accessed rows in DRAM by tracking the addresses of recently-accessed rows.  ... 
doi:10.1184/r1/6724127.v1 fatcat:gjvlf6dju5eg3kjz6nfjn3shie