
Effect of program localities on memory management strategies

Takashi Masuda
1977 Proceedings of the sixth symposium on Operating systems principles - SOSP '77  
The working set strategy and the local LRU strategy are modeled in the simulation system. A simple phase transition model and the simple LRU stack model are used as a program paging behavior model.  ...  For this purpose, an elaborate simulation model of the multiprogrammed memory management has been developed for a time-sharing environment.  ...  The author is also indebted to Mr. I. Ohnishi and Dr. K. Noguchi of the Software Works of Hitachi for their cooperation in designing simulation environments.  ... 
doi:10.1145/800214.806554 dblp:conf/sosp/Masuda77 fatcat:wvmrdpf6vbbg5ms4i2zqgxxjfu

Chip Multiprocessor Design Space Exploration through Statistical Simulation

Davy Genbrugge, Lieven Eeckhout
2009 IEEE transactions on computers  
Our experimental evaluation using the SPEC CPU benchmarks demonstrates average prediction error of 7.3 percent across a range of CMP configurations while varying the number of cores and memory hierarchy  ...  This paper enhances state-of-the-art statistical simulation: 1) by modeling the memory address stream behavior in a more microarchitecture-independent way and 2) by modeling a program's time-varying execution  ...  The LRU stack depth profile can be used to estimate cache miss rates for caches that are smaller than the largest cache of interest.  ... 
doi:10.1109/tc.2009.77 fatcat:be3ssorn4ngb5ly5bugz2ye7n4
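
As an aside on the technique this snippet mentions, the LRU stack depth (stack distance) profile of a trace directly yields miss-rate estimates for any smaller fully associative LRU cache. The sketch below is an illustrative Python model, not the paper's statistical simulation framework; the toy trace and cache sizes are invented.

```python
from collections import Counter

def stack_distance_profile(trace):
    """LRU stack distance of every access: the number of distinct addresses
    referenced since the previous access to the same address (1 = immediate
    re-reference of the most recently used address; None = cold access)."""
    stack = []            # LRU stack, most recently used address at the end
    profile = Counter()
    for addr in trace:
        if addr in stack:
            depth = len(stack) - stack.index(addr)
            stack.remove(addr)
        else:
            depth = None  # compulsory (cold) miss
        profile[depth] += 1
        stack.append(addr)
    return profile

def lru_miss_rate(profile, cache_blocks):
    """Estimated miss rate of a fully associative LRU cache: every access
    whose stack distance exceeds the cache size misses."""
    total = sum(profile.values())
    misses = sum(n for d, n in profile.items() if d is None or d > cache_blocks)
    return misses / total

trace = [1, 2, 3, 1, 2, 4, 1, 2, 3, 4, 5, 1]          # toy block address trace
profile = stack_distance_profile(trace)
for size in (1, 2, 3, 4, 5):
    print(f"cache of {size} blocks -> miss rate {lru_miss_rate(profile, size):.2f}")
```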

Trace reduction for virtual memory simulations

Scott F. Kaplan, Yannis Smaragdakis, Paul R. Wilson
1999 Proceedings of the 1999 ACM SIGMETRICS international conference on Measurement and modeling of computer systems - SIGMETRICS '99  
OLR also satisfies an optimality property: for a given trace and memory size it produces the shortest possible trace that has the same LRU behavior as the original for a memory of at least this size.  ...  In particular, simulation on OLR-reduced traces is accurate for the LRU replacement algorithm, while simulation on SAD-reduced traces is accurate for the LRU and OPT algorithms.  ...  However, that error is small only if the depth of the stack (i.e., the size of the memory used for reduction) is much smaller than the simulated memory (typically 20% to 50% of its size).  ... 
doi:10.1145/301453.301479 dblp:conf/sigmetrics/KaplanSW99 fatcat:2so7otni75bfhodvu5nm236kku
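
The snippet contrasts the paper's exact reductions with the classic stack-deletion filter, whose error is small only when the reduction stack is much smaller than the simulated memory. Below is a minimal sketch of that stack-deletion idea (not the authors' SAD or OLR algorithms): references that hit within the top k entries of an LRU stack are dropped, since they hit in any LRU memory of at least k pages; the toy trace is invented.

```python
def stack_deletion(trace, k):
    """Reduce a reference trace by dropping every reference that hits within
    the top k positions of an LRU stack.  Such references hit in any LRU
    memory of at least k pages, so dropping them perturbs simulated LRU
    behavior only slightly; the error grows as the simulated memory size
    approaches k."""
    stack = []        # LRU stack, most recently used page at the end
    reduced = []
    for page in trace:
        hit_near_top = page in stack[-k:]
        if page in stack:
            stack.remove(page)
        stack.append(page)
        if not hit_near_top:
            reduced.append(page)
    return reduced

trace = [1, 1, 2, 1, 2, 3, 4, 1, 2, 5, 5, 3]
print(stack_deletion(trace, k=2))   # shorter trace, similar LRU fault behavior
```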

Trace reduction for virtual memory simulations

Scott F. Kaplan, Yannis Smaragdakis, Paul R. Wilson
1999 Performance Evaluation Review  
OLR also satisfies an optimality property: for a given trace and memory size it produces the shortest possible trace that has the same LRU behavior as the original for a memory of at least this size.  ...  In particular, simulation on OLR-reduced traces is accurate for the LRU replacement algorithm, while simulation on SAD-reduced traces is accurate for the LRU and OPT algorithms.  ...  However, that error is small only if the depth of the stack (i.e., the size of the memory used for reduction) is much smaller than the simulated memory (typically 20% to 50% of its size).  ... 
doi:10.1145/301464.301479 fatcat:yiwixx3xxzfahnscoblavehyzu

The MT Stack: Paging Algorithm and Performance in a Distributed Virtual Memory System

Marco T. Morazan, Douglas R. Troeger, Myles Nash
2018 CLEI Electronic Journal  
We then present empirical results obtained from observing the paging behavior of the MT stack.  ...  Our empirical results suggest that LRU is superior to FIFO as a page replacement policy for MT stack pages.  ...  We then proceed to describe the MT stack page replacement algorithm that simulates LRU and avoids the overhead traditionally associated with software implementations of LRU.  ... 
doi:10.19153/cleiej.5.1.2 fatcat:ecq2q5kmjzgz3nl6tqdvzbia4u
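
In the spirit of the snippet's LRU-versus-FIFO comparison, the following toy simulation counts page faults for both policies on an invented reference string with temporal locality; it is plain software simulation for illustration, not the MT stack mechanism described in the paper.

```python
from collections import OrderedDict

def page_faults(trace, frames, policy):
    """Count page faults for a reference string under FIFO or LRU replacement
    with a fixed number of page frames."""
    resident = OrderedDict()                 # insertion order == eviction order
    faults = 0
    for page in trace:
        if page in resident:
            if policy == "lru":
                resident.move_to_end(page)   # a hit refreshes recency under LRU only
            continue
        faults += 1
        if len(resident) >= frames:
            resident.popitem(last=False)     # evict oldest (FIFO) or least recent (LRU)
        resident[page] = True
    return faults

trace = [1, 2, 3, 1, 2, 4, 1, 2, 5, 1, 2, 3]   # reference string with temporal locality
for policy in ("fifo", "lru"):
    print(policy, page_faults(trace, frames=3, policy=policy))
```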

File grouping for scientific data management

Shyamala Doraimani, Adriana Iamnitchi
2008 Proceedings of the 17th international symposium on High performance distributed computing - HPDC '08  
This paper presents the benefits of using this file grouping for prestaging data and compares it with previously proposed file grouping techniques along a range of performance metrics.  ...  The analysis of data usage in a large set of real traces from a high-energy physics collaboration revealed the existence of an emergent grouping of files that we coined "filecules".  ...  A stack depth analysis [8, 2] on the entire set of DZero traces shows that all stack depths are smaller than 1 million (Figure 8), which is less than 10% of the total number of file accesses.  ... 
doi:10.1145/1383422.1383429 dblp:conf/hpdc/DoraimaniI08 fatcat:xvspqs5ijzdf3k3enwomsjnifu

Making LRU Friendly to Weak Locality Workloads: A Novel Replacement Algorithm to Improve Buffer Cache Performance

Song Jiang, Xiaodong Zhang
2005 IEEE transactions on computers  
Meanwhile, LIRS mostly retains the simple assumption adopted by LRU for predicting future block access behaviors.  ...  LIRS effectively addresses the limitations of LRU by using recency to evaluate Inter-Reference Recency (IRR) of accessed blocks for making a replacement decision.  ...  Jong Min Kim, Donghee Lee, and Jongmoo Choi at the Seoul National University for providing us with their traces and simulators. They are also grateful to Dr. Scott Kaplan at Amherst College and Dr.  ... 
doi:10.1109/tc.2005.130 fatcat:xoj6breqknfsxf6qovk7ar3fzm
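
To make the snippet's notion concrete, the sketch below computes the Inter-Reference Recency (IRR) of each access, i.e., the number of distinct other blocks referenced between two consecutive accesses to the same block. It illustrates only the metric on an invented trace, not the full LIRS replacement algorithm.

```python
def inter_reference_recencies(trace):
    """For each access, its IRR: the number of distinct other blocks referenced
    between this access and the previous access to the same block (None for a
    block's first access).  LIRS prefers to keep blocks with small IRRs."""
    last_seen = {}                  # block -> index of its previous access
    irrs = []
    for i, block in enumerate(trace):
        if block in last_seen:
            between = trace[last_seen[block] + 1 : i]
            irrs.append(len(set(between)))
        else:
            irrs.append(None)
        last_seen[block] = i
    return irrs

print(inter_reference_recencies(["a", "b", "c", "a", "a", "d", "b"]))
# [None, None, None, 2, 0, None, 3]
```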

Recency-based TLB preloading

Ashley Saulsbury, Fredrik Dahlgren, Per Stenström
2000 Proceedings of the 27th annual international symposium on Computer architecture - ISCA '00  
We present results for traditional next-page TLB miss preloading, an approach shown to cut some of the misses.  ...  This work presents one of the first attempts to hide TLB miss latency by using preloading techniques.  ...  TLB miss prediction using an LRU stack: An LRU stack algorithm [14] can be used to predict the miss rate of a fully-associative TLB with an LRU replacement policy.  ... 
doi:10.1145/339647.339666 fatcat:infqwgzj2bbargccxvrakumxxm
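
A rough sketch of the recency idea the snippet alludes to follows: keep a recency (LRU) list of recently touched pages and, on a TLB miss, also install the pages adjacent to the missing page in that recency order, guessing that pages referenced close together in the past will miss close together again. All names, sizes, and the looping trace are illustrative assumptions, not the paper's hardware design.

```python
from collections import OrderedDict

def tlb_misses(trace, tlb_entries, preload):
    """Toy model: a fully associative, LRU-replaced TLB plus a recency list of
    all pages touched so far.  With preload=True, a miss for page p also
    installs the pages adjacent to p in the recency list."""
    tlb = OrderedDict()              # resident translations, least recently used first
    recency = []                     # recency list of pages, most recent at the end
    misses = 0

    def install(page):
        if page in tlb:
            tlb.move_to_end(page)
            return
        if len(tlb) >= tlb_entries:
            tlb.popitem(last=False)  # evict the least recently used translation
        tlb[page] = True

    for page in trace:
        if page in tlb:
            tlb.move_to_end(page)
        else:
            misses += 1
            install(page)
            if preload and page in recency:
                i = recency.index(page)
                for neighbor in recency[max(0, i - 1): i + 2]:
                    install(neighbor)        # speculatively preload recency neighbors
        if page in recency:
            recency.remove(page)
        recency.append(page)
    return misses

trace = [1, 2, 3, 4, 5, 6] * 3               # loop larger than the TLB
print("no preload:", tlb_misses(trace, tlb_entries=4, preload=False))
print("preload:   ", tlb_misses(trace, tlb_entries=4, preload=True))
```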

Recency-based TLB preloading

Ashley Saulsbury, Fredrik Dahlgren, Per Stenström
2000 SIGARCH Computer Architecture News  
We present results for traditional next-page TLB miss preloading, an approach shown to cut some of the misses.  ...  This work presents one of the first attempts to hide TLB miss latency by using preloading techniques.  ...  TLB miss prediction using an LRU stack: An LRU stack algorithm [14] can be used to predict the miss rate of a fully-associative TLB with an LRU replacement policy.  ... 
doi:10.1145/342001.339666 fatcat:5u4jmimt65f5vihng527vk26mi

Principles of database buffer management

Wolfgang Effelsberg, Theo Haerder
1984 ACM Transactions on Database Systems  
; the notion of referencing versus addressing of database pages is introduced; and the concept of fixing pages in the buffer to prevent uncontrolled replacement is explained.  ...  For each of these tasks, implementation alternatives are discussed and illustrated by examples from a performance evaluation project of a CODASYL DBMS.  ...  We are grateful to Michael Brunner and Paul Hirsch for their support of the empirical study. The helpful comments of the referees are gratefully acknowledged.  ... 
doi:10.1145/1994.2022 fatcat:gcmdgt5c2rh6pmc5zbomrf7tam
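
As a small illustration of the fix/unfix concept the snippet mentions, the toy buffer manager below lets callers pin pages while they use them, and its LRU replacement considers only unpinned frames. The class, its API, and the example calls are assumptions made for illustration, not the CODASYL DBMS studied in the paper.

```python
from collections import OrderedDict

class BufferPool:
    """Toy database buffer manager: pages must be fixed (pinned) before use and
    unfixed afterwards; replacement is LRU but only over unfixed frames."""

    def __init__(self, capacity, read_page):
        self.capacity = capacity
        self.read_page = read_page            # callback that fetches a page from disk
        self.frames = OrderedDict()           # page_id -> [data, pin_count], LRU first

    def fix(self, page_id):
        if page_id in self.frames:
            frame = self.frames.pop(page_id)  # refresh the page's LRU position
            frame[1] += 1
            self.frames[page_id] = frame
            return frame[0]
        if len(self.frames) >= self.capacity:
            victim = next((p for p, f in self.frames.items() if f[1] == 0), None)
            if victim is None:
                raise RuntimeError("all buffer frames are fixed")
            del self.frames[victim]           # evict the least recently used unfixed page
        data = self.read_page(page_id)
        self.frames[page_id] = [data, 1]
        return data

    def unfix(self, page_id):
        self.frames[page_id][1] -= 1

pool = BufferPool(capacity=2, read_page=lambda pid: f"contents of {pid}")
pool.fix("A"); pool.fix("B")
pool.unfix("A")        # A becomes evictable, B stays pinned
pool.fix("C")          # evicts A; B cannot be replaced while fixed
```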

Analysis of caching algorithms for distributed file systems

Benjamin Reed, Darrell D. E. Long
1996 ACM SIGOPS Operating Systems Review  
When picking a cache replacement policy for file systems, LRU (Least Recently Used) has always been the obvious choice, because of the temporal locality found in programs and data.  ...  Results show that LRU is an effective NFS server cache replacement policy, while frequency-based policies tend to exhibit erratic behavior in the presence of temporal locality and sequentially accessed files.  ...  Acknowledgments This work has been supported by the Office of Naval Research under grant N00014-92-J-1807.  ... 
doi:10.1145/230908.230913 fatcat:7vqhmrrakzeltmfwbyqoncmqsi

Modeling cache performance beyond LRU

Nathan Beckmann, Daniel Sanchez
2016 2016 IEEE International Symposium on High Performance Computer Architecture (HPCA)  
Unfortunately, current cache models do not capture these high-performance policies as most use stack distances, which are inherently tied to LRU or its variants.  ...  It uses absolute reuse distances instead of stack distances, and models replacement policies as abstract ranking functions. These innovations let us model arbitrary age-based replacement policies.  ...  Acknowledgments We thank the anonymous reviewers as well as Harshad Kasture, Po-An Tsai, Mark Jeffrey, Suvinay Subramanian, and Joel Emer for their helpful feedback.  ... 
doi:10.1109/hpca.2016.7446067 dblp:conf/hpca/BeckmannS16 fatcat:pcbfqiafrba5fhrjqup7pituga
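
The distinction the snippet draws can be seen in a few lines: the stack distance of an access counts distinct addresses touched since the previous use of the same address, while the absolute reuse distance counts all intervening accesses. The helper below and its toy trace are illustrative, not the paper's cache model.

```python
def distances(trace):
    """For each access, report (stack_distance, absolute_reuse_distance):
    stack distance = 1 + number of DISTINCT other addresses referenced since
    the previous access to this address; absolute reuse distance = 1 + TOTAL
    number of intervening accesses.  Both are None for a first access."""
    last_index = {}
    out = []
    for i, addr in enumerate(trace):
        if addr in last_index:
            between = trace[last_index[addr] + 1 : i]
            out.append((len(set(between)) + 1, len(between) + 1))
        else:
            out.append((None, None))
        last_index[addr] = i
    return out

print(distances(["a", "b", "b", "b", "a"]))
# [(None, None), (None, None), (1, 1), (1, 1), (2, 4)]
```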

Performance evaluation of cache replacement policies for the SPEC CPU2000 benchmark suite

Hussein Al-Zoubi, Aleksandar Milenkovic, Milena Milenkovic
2004 Proceedings of the 42nd annual Southeast regional conference - ACM-SE 42  
The cumulative distribution of cache hits in the LRU stack indicates a very good potential for way prediction using LRU information, since the percentage of hits to the bottom of the LRU stack is relatively  ...  In order to better understand the behavior of different policies, we introduced new measures, such as cumulative distribution of cache hits in the LRU stack.  ... 
doi:10.1145/986537.986601 dblp:conf/ACMse/Al-ZoubiMM04 fatcat:yy3nxllshzao7hpvbcmlol6cua
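
To show how such a distribution might be gathered, the sketch below simulates a set-associative LRU cache and records, for every hit, the recency position of the way that hit; a distribution skewed toward position 0 means LRU-based way prediction would usually guess right. The cache geometry, block size, and addresses are invented for illustration.

```python
from collections import defaultdict

def lru_hit_positions(trace, num_sets, ways, block_bytes=64):
    """Simulate a set-associative cache with LRU replacement and count, for
    every hit, the recency position of the way that hit (0 = most recently
    used way of the set).  Returns (hit position histogram, miss count)."""
    sets = [[] for _ in range(num_sets)]       # per set: list of tags, MRU first
    hits = defaultdict(int)
    misses = 0
    for addr in trace:
        block = addr // block_bytes
        index, tag = block % num_sets, block // num_sets
        lines = sets[index]
        if tag in lines:
            hits[lines.index(tag)] += 1        # record the hit's recency position
            lines.remove(tag)
        else:
            misses += 1
            if len(lines) >= ways:
                lines.pop()                    # evict the set's LRU way
        lines.insert(0, tag)                   # promote to MRU
    return dict(hits), misses

trace = [0, 64, 0, 64, 128, 0, 4096, 64, 0]    # toy byte addresses
print(lru_hit_positions(trace, num_sets=2, ways=2))
```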

Optimal Web cache sizing: scalable methods for exact solutions

T. Kelly, D. Reeves
2001 Computer Communications  
We use an efficient single-pass simulation algorithm to compute aggregate miss cost as a function of cache size in O(M log M) time and O(M) memory, where M is the number of requests in the workload.  ...  The same basic algorithm also permits us to compute complete stack distance transformations in O(M log N) time and O(N) memory, where N is the number of unique items referenced.  ...  We are particularly grateful for the generous assistance of Merit Network, Inc. and the University of Michigan's IT Division: Jeff Ogden provided bandwidth pricing data for Table 2, and JoElla Coles  ... 
doi:10.1016/s0140-3664(00)00311-x fatcat:45aypnbfi5fl3l7ayytjym3g4e
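
The complexity quoted in the snippet matches the standard single-pass approach of marking each item's last access time in a Fenwick (binary-indexed) tree; a hedged sketch follows. For simplicity it indexes the tree by access time, which costs O(M) memory rather than the O(N) the paper reports, and it is not necessarily the authors' exact implementation.

```python
def stack_distances(trace):
    """Single-pass stack distance computation in O(M log M) operations using a
    Fenwick (binary-indexed) tree that marks the last access time of each
    distinct item; the distance of a re-reference is 1 + the number of marks
    between its previous and current access times."""
    M = len(trace)
    tree = [0] * (M + 1)

    def update(i, delta):                 # add delta at position i
        while i <= M:
            tree[i] += delta
            i += i & (-i)

    def prefix_sum(i):                    # sum of positions 1..i
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & (-i)
        return s

    last = {}                             # item -> time of its last access (1-based)
    dists = []
    for t, item in enumerate(trace, start=1):
        if item in last:
            prev = last[item]
            dists.append(prefix_sum(t - 1) - prefix_sum(prev) + 1)
            update(prev, -1)              # item is no longer last seen at prev
        else:
            dists.append(None)            # cold reference
        update(t, 1)
        last[item] = t
    return dists

print(stack_distances(["a", "b", "c", "b", "a", "a"]))
# [None, None, None, 2, 3, 1]
```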

I/O reference behavior of production database workloads and the TPC benchmarks---an analysis at the logical level

Windsor W. Hsu, Alan Jay Smith, Honesty C. Young
2001 ACM Transactions on Database Systems  
However, there are some noteworthy exceptions that affect well-known I/O optimization techniques such as caching (LRU is further from the optimal for TPC-C, while there is little sharing of pages between  ...  We discover that for the most part, the reference behavior of TPC-C and TPC-D falls within the range of behavior exhibited by the production workloads.  ...  Hellerstein for helpful comments on versions of this paper.  ... 
doi:10.1145/383734.383737 fatcat:75kyjaq37fhythj37zag7b4zkm
Showing results 1 — 15 out of 432 results