
The SARC Architecture

Alex Ramirez, Felipe Cabarcas, Ben Juurlink, Mauricio Alvarez Mesa, Friman Sanchez, Arnaldo Azevedo, Cor Meenderinck, Catalin Ciobanu, Sebastian Isaza, Georgi Gaydadjiev
2010 IEEE Micro  
The SARC architecture is based on a heterogeneous set of processors managed at runtime in a master-worker mode.  ...  However, chip multiprocessors (CMPs) often struggle with programmability and scalability issues such as cache coherency and off-chip memory bandwidth and latency.  ...  To avoid the latency penalty involved in sequential TLB and scratchpad/cache accesses, workers first check a logically indexed and tagged write-through L0 cache.  ... 
doi:10.1109/mm.2010.79 fatcat:xle4zkaarnbdvlryyq7f674544

PFC: Transparent Optimization of Existing Prefetching Strategies for Multi-Level Storage Systems

Zhe Zhang, Kyuhyung Lee, Xiaosong Ma, Yuanyuan Zhou
2008 The 28th International Conference on Distributed Computing Systems (ICDCS)  
storage studies have focused mostly on cache replacement strategies.  ...  However, while prefetching has been shown as a crucial technique to exploit the sequentiality in accesses common for such systems and hide the increasing relative cost of disk I/O, existing multi-level  ...  At both levels, LRU is used as the cache replacement policy, except for SARC, which comes with its own cache management strategy.  ... 
doi:10.1109/icdcs.2008.89 dblp:conf/icdcs/ZhangLMZ08 fatcat:fihhlj3iojd4rdn6oyu35bibye
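
The snippet above notes that, SARC aside, LRU serves as the replacement policy at both cache levels. A minimal LRU block-cache sketch, purely illustrative and not taken from the paper (the capacity and toy trace are assumptions):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU block cache: a hit promotes the block, a miss evicts the oldest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> data, ordered oldest -> newest

    def access(self, block_id):
        """Return True on a hit; on a miss, insert the block, evicting the LRU one if full."""
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # promote to most recently used
            return True
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used
        self.blocks[block_id] = None
        return False

# Toy trace: only the re-reference to block 1 hits; block 2 has been evicted by then.
cache = LRUCache(capacity=3)
print([cache.access(b) for b in [1, 2, 3, 1, 4, 2]])  # [False, False, False, True, False, False]
```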

Tombolo: Performance enhancements for cloud storage gateways

Suli Yang, Kiran Srinivasan, Kishore Udayashankar, Swetha Krishnan, Jingxin Feng, Yupu Zhang, Andrea C. Arpaci-Dusseau, Remzi H. Arpaci-Dusseau
2016 32nd Symposium on Mass Storage Systems and Technologies (MSST)  
We also provide insights on how to build such cloud gateways, especially with respect to caching and prefetching techniques.  ...  Our analysis of real-world traces shows that certain primary data sets can reside in the cloud with their working sets cached locally, using a cloud gateway that acts as a caching bridge between local data  ...  Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and may not reflect the views of NSF or other institutions.  ... 
doi:10.1109/msst.2016.7897076 dblp:conf/mss/YangSUKFZAA16 fatcat:avvx4affuzfv3dwuajxnabqlp4

Prefetching with Adaptive Cache Culling for Striped Disk Arrays

Sung Hoon Baek, Kyu Ho Park
2008 USENIX Annual Technical Conference  
the hit rate of both prefetched data and cached data in a given cache management scheme.  ...  We implemented a kernel module in Linux version 2.6.18 as a RAID-5 driver with our scheme, which significantly outperforms the sequential prefetching of Linux from several times to an order of magnitude  ...  The differential feedback has features similar to the adaptive scheme based on marginal utility used in sequential prefetching in adaptive replacement cache (SARC) [11].  ... 
dblp:conf/usenix/BaekP08 fatcat:3xodcrqsdnaj7ofinvkckyyssy
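
The entry above likens its differential feedback to SARC's marginal-utility adaptation. A hedged sketch of that style of feedback, in which the boundary between a sequential/prefetch region and a random region moves toward whichever region earns more hits near its LRU end; the window, step size, and function names are illustrative assumptions, not values from either paper:

```python
def adjust_partition(seq_target, seq_bottom_hits, rand_bottom_hits,
                     total_size, step=1):
    """Grow the region whose LRU tail is earning more hits (higher marginal utility)."""
    if seq_bottom_hits > rand_bottom_hits:
        seq_target = min(total_size - 1, seq_target + step)   # sequential data pays off
    elif rand_bottom_hits > seq_bottom_hits:
        seq_target = max(1, seq_target - step)                # random data pays off
    return seq_target

# Usage over a few feedback intervals: (hits near SEQ tail, hits near RAND tail).
seq_target = 50
for seq_hits, rand_hits in [(8, 3), (9, 2), (1, 7)]:
    seq_target = adjust_partition(seq_target, seq_hits, rand_hits, total_size=100)
print("target SEQ size:", seq_target)   # 50 -> 51 -> 52 -> 51
```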

Memory resource allocation for file system prefetching

Zhe Zhang, Amit Kulkarni, Xiaosong Ma, Yuanyuan Zhou
2009 Proceedings of the Fourth ACM European Conference on Computer Systems (EuroSys '09)  
As an important technique to hide disk I/O latency, prefetching has been widely studied, and dynamic adaptive prefetching techniques have been deployed in diverse storage environments.  ...  More specifically, we applied (1) two SCM policies to dynamically configure the sequential prefetching parameters, and (2) an SCM solution to correct the access pattern information distortion in multi-level  ...  In addition, the work is supported by Xiaosong Ma's joint appointment between NCSU and ORNL.  ... 
doi:10.1145/1519065.1519075 dblp:conf/eurosys/ZhangKMZ09 fatcat:d2hh6tz5fzh3zlzu7w7itsjfnq
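
The entry above describes dynamically configuring sequential prefetching parameters from runtime feedback. A generic sketch of one such feedback loop, adjusting the prefetch degree from observed prefetch accuracy; the thresholds, bounds, and doubling/halving policy are assumptions, not the paper's SCM policies:

```python
def adapt_prefetch_degree(degree, prefetch_hits, prefetch_issued,
                          low=0.4, high=0.8, min_deg=1, max_deg=64):
    """One feedback step: widen prefetching when most prefetched blocks are used,
    shrink it when they are mostly wasted. Thresholds are illustrative."""
    if prefetch_issued == 0:
        return degree
    accuracy = prefetch_hits / prefetch_issued
    if accuracy > high:
        return min(max_deg, degree * 2)   # aggressive: double the read-ahead window
    if accuracy < low:
        return max(min_deg, degree // 2)  # conservative: halve it
    return degree

degree = 8
for hits, issued in [(7, 8), (15, 16), (3, 32)]:
    degree = adapt_prefetch_degree(degree, hits, issued)
print("prefetch degree:", degree)   # 8 -> 16 -> 32 -> 16
```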

RPP: Reference Pattern Based Kernel Prefetching Controller

Hyo J. Lee, In Hwan Doh, Eunsam Kim, Sam H. Noh
2009 IEICE Transactions on Information and Systems  
... looping, in addition to random and sequential patterns, and delaying the start of prefetching until patterns are confirmed to be sequential or looping. Keywords: kernel prefetching, reference pattern, read-ahead  ...  Conventional kernel prefetching schemes have focused on taking advantage of sequential access patterns that are easy to detect.  ...  SARC partitions the cache space for the random and sequential references [2] and adapts prefetching degree and partition size according  ... 
doi:10.1587/transinf.e92.d.2512 fatcat:vgm6kxov2jdzdgfbiptyy4qbii
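
The RPP entry above describes classifying references as sequential, looping, or random and delaying prefetching until a sequential or looping pattern is confirmed. A simplified sketch of such gating; the detection rules, window length, and function names are assumptions, not RPP's actual mechanism:

```python
def classify(refs, window=4):
    """Classify the last `window` block references of one file."""
    recent = refs[-window:]
    if len(recent) < window:
        return "unknown"
    steps = [b - a for a, b in zip(recent, recent[1:])]
    if all(s == 1 for s in steps):
        return "sequential"
    if len(set(recent)) < len(recent):     # revisiting earlier blocks
        return "looping"
    return "random"

def should_prefetch(refs):
    """Only prefetch once a sequential or looping pattern is confirmed."""
    return classify(refs) in ("sequential", "looping")

print(should_prefetch([10, 11, 12, 13]))   # True: confirmed sequential run
print(should_prefetch([3, 17, 5, 42]))     # False: random, prefetching is delayed
```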

Optimal multistream sequential prefetching in a shared cache

Binny S. Gill, Luis Angel D. Bathen
2007 ACM Transactions on Storage  
There are two problems that plague the state-of-the-art sequential prefetching algorithms: (i) cache pollution, which occurs when prefetched data replaces more useful prefetched or demand-paged data, and  ...  Prefetching is a widely used technique in modern data storage systems. We study the most widely used class of prefetching algorithms known as sequential prefetching.  ...  The Problem of Cache Pollution In the context of prefetching, cache pollution is said to occur when prefetched data replaces more useful data (demand-paged or prefetched) from the cache.  ... 
doi:10.1145/1288783.1288789 fatcat:i5lfjduzmrcebmi7uli6qqn2ey
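
The entry above defines cache pollution as prefetched data displacing more useful data. One hedged way to make that measurable is to count prefetched blocks that are evicted before ever being referenced; the bookkeeping below is illustrative, not the accounting used in the paper:

```python
class PrefetchAccounting:
    """Track whether prefetched blocks are used before eviction."""

    def __init__(self):
        self.unused = set()   # prefetched blocks not yet referenced
        self.wasted = 0       # prefetched, then evicted without a single hit
        self.useful = 0       # prefetched and later referenced

    def on_prefetch(self, block):
        self.unused.add(block)

    def on_reference(self, block):
        if block in self.unused:
            self.unused.discard(block)
            self.useful += 1

    def on_evict(self, block):
        if block in self.unused:
            self.unused.discard(block)
            self.wasted += 1

acct = PrefetchAccounting()
for b in (100, 101, 102):
    acct.on_prefetch(b)
acct.on_reference(100)   # 100 was a useful prefetch
acct.on_evict(102)       # 102 polluted the cache: evicted before any use
print(acct.useful, acct.wasted)   # 1 1
```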

Sequential Prefetch Cache Sizing for Maximal Hit Rate

Swapnil Bhatia, Elizabeth Varki, Arif Merchant
2010 IEEE International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems  
If the I/O workload has sequential locality, then data prefetched in response to sequential accesses in the workload will receive hits.  ...  Disk array caches perform sequential prefetching by loading data contiguous to I/O request data into the array cache.  ...  ACKNOWLEDGMENTS The first author was supported in part by a grant from the Office of Naval Research and by the UNH CEPS Teaching Award when this work was underway at UNH.  ... 
doi:10.1109/mascots.2010.18 dblp:conf/mascots/BhatiaVM10 fatcat:2nlidyartvh6hdjs7h2xcdlpzq

Dual queues cache replacement algorithm based on sequentiality detection

Nong Xiao, YingJie Zhao, Fang Liu, ZhiGuang Chen
2011 Science China Information Sciences  
Replacement policies play a critical role in the cache design due to the limited cache capacity.  ...  Dual queues cache replacement algorithm based on sequentiality detection.  ...  Both STEP and SARC are prefetching policies; Bargain Cache is a file-level policy; CRASD presented in this paper is a block-level policy. They can work together to attain high performance.  ... 
doi:10.1007/s11432-011-4213-z fatcat:w5poam4zkzgcblbcls4rcaks5q
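
The entry above describes a dual-queue, block-level replacement policy driven by sequentiality detection. A toy sketch of that idea, in which blocks from detected sequential runs are segregated and evicted first; the detection rule, queue discipline, and capacity are assumptions, not CRASD's actual algorithm:

```python
from collections import OrderedDict

class DualQueueCache:
    """Blocks from detected sequential runs go to a queue that is evicted first;
    all other blocks live in a plain LRU queue."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.seq = OrderedDict()    # blocks from detected sequential runs
        self.rand = OrderedDict()   # everything else, plain LRU
        self.last_block = None

    def access(self, block):
        hit = block in self.seq or block in self.rand
        for q in (self.seq, self.rand):
            q.pop(block, None)
        is_sequential = self.last_block is not None and block == self.last_block + 1
        (self.seq if is_sequential else self.rand)[block] = None
        self.last_block = block
        if len(self.seq) + len(self.rand) > self.capacity:
            victim_queue = self.seq if self.seq else self.rand
            victim_queue.popitem(last=False)   # evict sequential blocks first
        return hit

cache = DualQueueCache(capacity=4)
trace = [1, 2, 3, 4, 50, 1]                   # a sequential run, then a random block
print([cache.access(b) for b in trace])       # the random block 1 survives and hits last
```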

MC2: Multiple Clients on a Multilevel Cache

Gala Yadgar, Michael Factor, Kai Li, Assaf Schuster
2008 The 28th International Conference on Distributed Computing Systems (ICDCS)  
In today's networked storage environment, it is common to have a hierarchy of caches where the lower levels of the hierarchy are accessed by multiple clients.  ...  The local scheme uses readily available information about the client's future access profile to save the most valuable blocks, and to choose the best replacement policy for them.  ...  SARC [11] and AMP [10] combine caching and prefetching in a storage server. SARC adjusts the stack positions of prefetched blocks to avoid eviction.  ... 
doi:10.1109/icdcs.2008.29 dblp:conf/icdcs/YadgarFLS08 fatcat:lp3qhz5rs5hk7ku2knooaltajy

Prefetch throttling and data pinning for improving performance of shared caches

Ozcan Ozturk, Seung Woo Son, Mahmut Kandemir, Mustafa Karakoy
2008 SC - International Conference for High Performance Computing, Networking, Storage and Analysis  
In this paper, we (i) quantify the impact of compiler-directed I/O prefetching on shared caches at I/O nodes.  ...  Prefetch throttling prevents one or more clients from issuing further prefetches if such prefetches are predicted to be harmful, i.e., replace from the memory cache the useful data accessed by other clients  ...  This work is supported in part by NSF grants #0406340, #0444158, #0621402, #0724599, #0821527, and #0833126.  ... 
doi:10.1109/sc.2008.5213128 dblp:conf/sc/OzturkSKK08 fatcat:clyktl2e2zadxprz3ux4el55oy
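
The entry above explains prefetch throttling: a client stops issuing prefetches when they are predicted to displace data useful to other clients. A minimal sketch of such a per-client throttling decision; the harm metric and threshold are assumptions, not the paper's compiler-directed predictor:

```python
def should_throttle(harmful_evictions, prefetches_issued, harm_threshold=0.25):
    """Throttle a client whose prefetches mostly displace other clients' useful data."""
    if prefetches_issued == 0:
        return False
    return harmful_evictions / prefetches_issued > harm_threshold

# (harmful evictions caused, prefetches issued) per client in the last interval.
clients = {"A": (2, 40), "B": (30, 50)}
for name, (harm, issued) in clients.items():
    print(name, "throttled" if should_throttle(harm, issued) else "allowed")
# A stays allowed (5% harmful); B is throttled (60% of its prefetches hurt others).
```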

Matrix Stripe Cache-Based Contiguity Transform for Fragmented Writes in RAID-5

Sung Hoon Baek, Kyu Ho Park
2009 IEEE Transactions on Computers  
The results demonstrate that MSC-CT is extremely simple to implement, has low overhead, and is ideally suited for RAID controllers not only for random writes but also for sequential writes in various realistic  ...  Given that contiguous reads and writes between a cache and a disk outperform fragmented reads and writes, fragmented reads and writes are forcefully transformed into contiguous reads and writes via a proposed  ...  One of the latest and most outstanding investigations of prefetching involves sequential prefetching in adaptive replacement cache (SARC) [23].  ... 
doi:10.1109/tc.2007.1058 fatcat:oakirruzpjd6xpa3e5pdgpkzjy
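
The MSC-CT entries describe transforming fragmented writes into contiguous reads and writes. A hedged sketch of one contiguity transform over a single stripe, reading the clean blocks in the gaps so the dirty range can be written back as one contiguous I/O; block numbering and the fill-the-gaps policy are simplified assumptions, not the paper's exact transform:

```python
def contiguity_transform(dirty_blocks):
    """Return (blocks_to_read, contiguous_write_range) for one stripe's dirty blocks."""
    dirty = set(dirty_blocks)
    lo, hi = min(dirty), max(dirty)
    gaps = [b for b in range(lo, hi + 1) if b not in dirty]   # clean blocks in between
    return gaps, (lo, hi)

# A fragmented write touching blocks 8, 11, and 12 of a stripe:
reads, write_range = contiguity_transform([8, 11, 12])
print("read first:", reads)              # [9, 10] -- fill the gaps from disk
print("then write blocks", write_range)  # (8, 12) as one contiguous write
```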

Matrix-Stripe-Cache-Based Contiguity Transform for Fragmented Writes in RAID-5

Sung Hoon Baek, Kyu Ho Park
2007 IEEE Transactions on Computers  
The results demonstrate that MSC-CT is extremely simple to implement, has low overhead, and is ideally suited for RAID controllers not only for random writes but also for sequential writes in various realistic  ...  Given that contiguous reads and writes between a cache and a disk outperform fragmented reads and writes, fragmented reads and writes are forcefully transformed into contiguous reads and writes via a proposed  ...  One of the latest and most outstanding investigations of prefetching involves sequential prefetching in adaptive replacement cache (SARC) [23].  ... 
doi:10.1109/tc.2007.70758 fatcat:pfepw7r6fzgt3b45chbb6hykge

DeNovoND

Hyojin Sung, Rakesh Komuravelli, Sarita V. Adve
2013 Proceedings of the Eighteenth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '13)  
This is because the preceding load a will read a in its own cache in Registered state and so will not prefetch b which is registered at C1.  ...  A read that accesses a valid word with prefetch bit set is considered a cache hit.  ... 
doi:10.1145/2451116.2451119 dblp:conf/asplos/SungKA13 fatcat:vi7qs6oupjeqzjizngny2imauy

DeNovoND

Hyojin Sung, Rakesh Komuravelli, Sarita V. Adve
2013 SIGPLAN Notices  
This is because the preceding load a will read a in its own cache in Registered state and so will not prefetch b which is registered at C1.  ...  A read that accesses a valid word with prefetch bit set is considered a cache hit.  ... 
doi:10.1145/2499368.2451119 fatcat:uo7qvgefyvdjjatdmugen5yq7u
Showing results 1 — 15 out of 30 results