8,555 Hits in 4.1 sec

Static probabilistic timing analysis in presence of faults

Chao Chen, Luca Santinelli, Jerome Hugues, Giovanni Beltrame
2016 11th IEEE Symposium on Industrial Embedded Systems (SIES)  
Our approach is the first method that calculates the timing behavior of caches with random replacement policy in presence of both transient and permanent faults.  ...  Without loss of generality, we assume 1 cycle for a cache hit and 100 cycles for a cache miss (any timing behavior would work).  ... 
doi:10.1109/sies.2016.7509422 dblp:conf/sies/ChenSHB16 fatcat:b37jpdq4bvf27odutil2uferje

Footprints in the cache

Dominique Thiebaut, Harold S. Stone
1987 ACM Transactions on Computer Systems  
The reload transient depends on the cache size and on the sizes of the footprints in the cache of the competing programs, where a program footprint is defined to be the set of lines in the cache in active  ...  The cache-reload transient is the set of cache misses that occur when a process is reinitiated after being suspended temporarily.  ...  Natarajan of IBM T. J. Watson Research Center for his careful reading of the manuscript and his helpful comments.  ... 
doi:10.1145/29868.32979 fatcat:ss7vpdmp2vgfznjchim7ln5gau

A Primer on Memory Consistency and Cache Coherence

Daniel J. Sorin, Mark D. Hill, David A. Wood
2011 Synthesis Lectures on Computer Architecture  
., Memory Consistency, Memory Consistency Model, or Memory Model) Consistency models define correct shared memory behavior in terms of loads and stores (memory reads and writes), without reference to caches  ...  The memory model specifies the allowed behavior of multithreaded programs executing with shared memory.  ... 
doi:10.2200/s00346ed1v01y201104cac016 fatcat:4hqyxplumrg2plqo77dm6jse2i

Rage Against the Machine Clear: A Systematic Analysis of Machine Clears and Their Implications for Transient Execution Attacks

Hany Ragab, Enrico Barberis, Herbert Bos, Cristiano Giuffrida
2021 USENIX Security Symposium  
MC, Memory Ordering MC, and Memory Disambiguation MC.  ...  However, while the community has investigated several variants to trigger attacks during transient execution, much less attention has been devoted to the analysis of the root causes of transient execution  ...  This behavior leads to a temporary desynchronization between the code and data views of the CPU, transiently breaking the architectural memory model (where L1d/L1i coherence ensures consistent code/data  ... 
dblp:conf/uss/RagabBBG21 fatcat:ijitnsbwnnc3rh5pvioh3dzonq

Improving GPU Cache Hierarchy Performance with a Fetch and Replacement Cache [chapter]

Francisco Candel, Salvador Petit, Alejandro Valero, Julio Sahuquillo
2018 Lecture Notes in Computer Science  
The proposed approach leverages a small additional Fetch and Replacement Cache (FRC) that stores control and coherence information of incoming blocks until they are fetched from main memory.  ...  The memory requirements of GPGPU applications widely differ from the requirements of CPU counterparts.  ...  In this paper, we look into the reasons explaining this behavior, and we find that one of the main sources of performance losses of the memory subsystem is the management of L2 cache misses.  ... 
doi:10.1007/978-3-319-96983-1_17 fatcat:m4oklyttujgarmxvucqwtdd4ty

Die-Stacked DRAM: Memory, Cache, or MemCache? [article]

Mohammad Bakhshalipour, HamidReza Zare, Pejman Lotfi-Kamran, Hamid Sarbazi-Azad
2018 arXiv   pre-print
the overhead of tag-checking, and manage the rest of the DRAM as a cache, for capturing the dynamic behavior of applications.  ...  The cache portion of the die-stacked DRAM is managed by hardware, caching data allocated in the off-chip memory.  ...  as a hardware-managed cache to capture the transient data-dependent behavior of applications.  ... 
arXiv:1809.08828v1 fatcat:fato47wpevct3b7eklydg7xyna

Locality-information-based scheduling in shared-memory multiprocessors [chapter]

Frank Bellosa
1996 Lecture Notes in Computer Science  
Most shared-memory multiprocessors use multiple stages of caches to hide latency.  ...  While CPU utilization of processes still determines scheduling decisions of contemporary schedulers, we propose novel scheduling policies based on cache miss rates and information about synchronization  ...  like to thank Thomas Eirich (IBM Zurich Research Laboratory), Franz Hauck (Vrije Universiteit Amsterdam), Matthias Gente, Fridolin Hofmann, Christoph Koppe, Armin Rueth and Michael Schröder (University of  ... 
doi:10.1007/bfb0022298 fatcat:a2tqyztdcjapzmupo4sbvnq77e

Caching Strategies for Run-time Probabilistic Model Checking

Hiroyuki Nakagawa, Kento Ogawa, Tatsuhiro Tsuchiya
2016 ACM/IEEE International Conference on Model Driven Engineering Languages and Systems  
For software systems that need to adapt to their environment at run-time, run-time verification is useful to guarantee the correctness of their behaviors.  ...  In order to expand the applicability of the approach, we propose three strategies, caching, prediction, and reduction, for reducing computational time for re-generated expressions at run-time.  ...  Caching In many cases, self-adaptive systems change only a part of their behaviors.  ... 
dblp:conf/models/NakagawaOT16 fatcat:kczunxtd5jc4bhej5a4nrypg7a

CACHET: An Adaptive Cache Coherence Protocol for Distributed Shared-Memory Systems
Xiaowei Shen, Arvind, Larry Rudolph
1999 Proceedings of the 13th international conference on Supercomputing - ICS '99  
An adaptive cache coherence protocol changes its actions to address changing program behaviors. We present an adaptive protocol called Cachet for distributed shared-memory systems.  ...  Cachet is a seamless integration of several micro-protocols, each of which has been optimized for a particular memory access pattern.  ...  For example, a protocol may use any cache in the memory hierarchy as the rendezvous for the processors that access a shared memory location, provided that it maintains the same observable memory behavior  ... 
doi:10.1145/305138.305187 dblp:conf/ics/ShenAR99 fatcat:mhyjyqyoqffv5n2hthcsno62nq

Persistence and Synchronization: Friends or Foes? [article]

Pradeep Fernando, Irina Calciu, Jayneel Gandhi, Aasheesh Kolli, Ada Gavrilovska
2020 arXiv   pre-print
We consider different hardware characteristics, in terms of support for hardware transactional memory (HTM) and the boundaries of the persistence domain (transient or persistent caches).  ...  Through our empirical study, we show two major factors that impact the cost of supporting persistence in transactional systems: the persistence domain (transient or persistent caches) and application characteristics  ...  The same mechanisms have very different behavior and performance characteristics on systems with transient caches versus systems with persistent caches.  ... 
arXiv:2012.15731v1 fatcat:bzoxtvfzjzcb3ojmztn43lh4d4

Virtual Platform to Analyze the Security of a System on Chip at Microarchitectural Level

Quentin Forcioli, Jean-Luc Danger, Clementine Maurice, Lilian Bossuet, Florent Bruguier, Maria Mushtaq, David Novo, Loic France, Pascal Benoit, Sylvain Guilley, Thomas Perianin
2021 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW)  
One typical example is the exploitation of cache memory which keeps track of the program execution and paves the way to side-channel (SCA) analysis and transient execution attacks like Meltdown and Spectre  ...  The main objective is to create a virtual and open platform that simulates the behavior of microarchitectural features and their interactions with the peripherals, like accelerators and memories in emerging  ...  ACKNOWLEDGEMENTS The work presented in this paper was realized in the framework of the ARCHI-SEC project number ANR-19-CE39-0008-03 supported by the French "Agence Nationale de la Recherche".  ... 
doi:10.1109/eurospw54576.2021.00017 fatcat:ljhuwgh3ebb47ksi3bocapspmy

Joint Performance Improvement and Error Tolerance for Memory Design Based on Soft Indexing

Shuo Wang, Lei Wang
2006 IEEE International Conference on Computer Design (ICCD)  
Memory design is facing the dual challenges of performance improvement and error tolerance due to a combination of technology scaling and higher levels of integration.  ...  Simulation results show 94.9% average error-control coverage on the 23 benchmarks, with average of 23.2% reduction in memory miss rates as compared to the conventional techniques.  ...  ACKNOWLEDGMENT This research was supported in part by the University of Connecticut Faculty Research Grant 446751.  ... 
doi:10.1109/iccd.2006.4380789 dblp:conf/iccd/WangW06 fatcat:hafl7457v5gttf3yazdomgr2hm

SafeSpec: Banishing the Spectre of a Meltdown with Leakage-Free Speculation [article]

Khaled N. Khasawneh, Esmaeil Mohammadian Koruyeh, Chengyu Song, Dmitry Evtyushkin, Dmitry Ponomarev, Nael Abu-Ghazaleh
2018 arXiv   pre-print
Several attack variations are possible, allowing arbitrary exposure of the full kernel memory to an unprivileged attacker.  ...  The recent Meltdown and Spectre attacks have shown that this behavior can be exploited to expose privileged information to an unprivileged attacker.  ...  Some outlier behavior such as Pop2 and imagick where the percentage of i-cache misses drops significantly could be due to the larger size of the shadow structures expanding the effective size of the cache  ... 
arXiv:1806.05179v2 fatcat:jucwlhzcdjavvde4kixmnmghba

Foreshadow: Extracting the Keys to the Intel SGX Kingdom with Transient Out-of-Order Execution

Jo Van Bulck, Marina Minkin, Ofir Weisse, Daniel Genkin, Baris Kasikci, Frank Piessens, Mark Silberstein, Thomas F. Wenisch, Yuval Yarom, Raoul Strackx
2018 USENIX Security Symposium  
cache.  ...  The extracted remote attestation keys affect millions of devices.  ...  the intended program behavior.  ... 
dblp:conf/uss/BulckMWGKPSWYS18 fatcat:iv4sdkremzcb3lwp2iculitcwu

Increasing Memory Utilization with Transient Memory Scheduling

Qi Wang, Jiguo Song, Gabriel Parmer, Andrew Sweeney, Guru Venkataramani
2012 IEEE 33rd Real-Time Systems Symposium  
In addition to the traditional spatial multiplexing of memory, TMEM introduces the predictable temporal multiplexing of memory within caches in a system component, and memory scheduling to continually  ...  We find that TMEM is able to maintain the efficiency of caches, while also lowering both task tardiness and system memory requirements.  ...  We'd like to thank the anonymous reviewers and the shepherd of this paper. They have significantly improved the quality and presentation of this work.  ... 
doi:10.1109/rtss.2012.76 dblp:conf/rtss/WangSPSV12 fatcat:phhbuwl3q5dz7cmvin6gaod67e