3,788 Hits in 4.9 sec

Concurrent Non-deferred Reference Counting on the Microgrid: First Experiences [chapter]

Stephan Herhut, Carl Joslin, Sven-Bodo Scholz, Raphael Poss, Clemens Grelck
2011 Lecture Notes in Computer Science  
This novel approach decouples computational workload from reference-counting overhead.  ...  Scalability is essentially limited only by the combined sequential runtime of all reference counting operations, in accordance with Amdahl's law.  ...  Conclusion We have presented a novel approach for concurrent non-deferred reference counting on many-core architectures.  ... 
doi:10.1007/978-3-642-24276-2_12 fatcat:xcdw6nguobeixecwgmqmlszgty

Safe manual memory management

David Gay, Rob Ennals, Eric Brewer
2007 Proceedings of the 6th international symposium on Memory management - ISMM '07  
Porting programs for use with HeapSafe typically requires little effort (on average 0.6% of lines change), adds an average 11% time overhead (84% in the worst case), and increases space usage by an average  ...  We present HeapSafe, a tool that uses reference counting to dynamically verify the soundness of manual memory management of C programs.  ...  Deferred reference counting increases the pause at free time, but this overhead is small and predictable, as it is a function of the number of live variables on the call stack.  ... 
doi:10.1145/1296907.1296911 dblp:conf/iwmm/GayEB07 fatcat:ojdzbvl76fakxaym4rtd2dwwpe
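The HeapSafe snippet above describes using reference counts to dynamically verify that a manual `free` is sound. A minimal sketch of that idea (illustrative only; the names and representation here are not HeapSafe's actual API, which instruments C programs):

```python
class Obj:
    """A heap object with an attached reference count (a toy model)."""
    def __init__(self, payload):
        self.payload = payload
        self.rc = 0
        self.freed = False

def assign(obj):
    # Barrier on pointer creation: bump the count.
    obj.rc += 1
    return obj

def drop(obj):
    # Barrier on pointer destruction: decrement the count.
    obj.rc -= 1

def safe_free(obj):
    # HeapSafe-style check: a manual free is sound only if no
    # references remain besides the one being freed.
    if obj.rc != 1:
        raise RuntimeError(f"unsound free: {obj.rc} references still live")
    drop(obj)
    obj.freed = True

a = assign(Obj("data"))   # rc == 1
b = assign(a)             # aliasing: rc == 2
try:
    safe_free(a)          # rejected: b still points at the object
except RuntimeError:
    pass
drop(b)                   # rc back to 1
safe_free(a)              # now sound
assert a.freed
```

The check turns a dangling-pointer bug into an immediate, deterministic error at the `free` site, which is what makes the reported overheads (time and space for the counts) worth paying during testing.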

Garbage collecting the Internet: a survey of distributed garbage collection

Saleh E. Abdullahi, Graem A. Ringwood
1998 ACM Computing Surveys  
This taxonomy is used as a framework to explore distribution issues: locality of action, communication overhead and indeterministic communication latency.  ...  However, an excised reference can be restored before its copy count reaches zero. This can be compared with deferred reclamation (Sect. 2.2.2).  ...  At the same time, these solutions reduce the communication overhead. Distributed Reference Counting One of the earliest distributed reference-counting collectors was described by Nori [1979] .  ... 
doi:10.1145/292469.292471 fatcat:odrr35rx4jfyvihpguawcljqv4

Small, Fast, Concurrent Proof Checking for the lambda-Pi Calculus Modulo Rewriting [article]

Michael Färber
2021 arXiv   pre-print
This work presents a small proof checker with support for concurrent proof checking, achieving state-of-the-art performance in both concurrent and nonconcurrent settings.  ...  The proof checker is faster than the reference proof checker for this calculus, Dedukti, on all of five evaluated datasets obtained from proof assistants and interactive theorem provers.  ...  Furthermore, I would like to thank Gaspard Ferey, Guillaume Genestier and Gabriel Hondet for explaining to me the inner workings of Dedukti, and Emilie Grienenberger for providing me with the Dedukti export  ... 
arXiv:2102.08766v1 fatcat:ewgptnfbazaqpe4fkbf7chuevu

Compiler Techniques for Massively Scalable Implicit Task Parallelism

Timothy G. Armstrong, Justin M. Wozniak, Michael Wilde, Ian T. Foster
2014 SC14: International Conference for High Performance Computing, Networking, Storage and Analysis  
It executes using a data-driven task parallel execution model that is capable of orchestrating millions of concurrently executing asynchronous tasks on homogeneous or heterogeneous resources.  ...  Producing code that executes efficiently at this scale requires sophisticated compiler transformations: poorly optimized code inhibits scaling with excessive synchronization and communication.  ...  In cases of large parallel loops, reference counting overhead is amortized over the entire loop with batching. IV.  ... 
doi:10.1109/sc.2014.30 dblp:conf/sc/ArmstrongWWF14 fatcat:fn5th2mbljhavelopcxrrhljte

Hardware Concurrent Garbage Collection for Short-Lived Objects in Mobile Java Devices [chapter]

Chi Hang Yau, Yi Yu Tan, Anthony S. Fong, Wing Shing Yu
2005 Lecture Notes in Computer Science  
Reference counting object cache with hardware write barrier and object allocator is proposed to provide the hardware concurrent garbage collection for small size objects in jHISC.  ...  The reference counting collector reclaims the memory occupied by an object immediately after the object becomes garbage. The hardware allocator provides constant-time object allocation.  ...  Evaluation Nearly real-time garbage collection with reference counting has been achieved.  ... 
doi:10.1007/11596356_8 fatcat:hqcxl4x4aferlgloiv7afdprru

CREAM: A Concurrent-Refresh-Aware DRAM Memory architecture

Tao Zhang, Matt Poremba, Cong Xu, Guangyu Sun, Yuan Xie
2014 2014 IEEE 20th International Symposium on High Performance Computer Architecture (HPCA)  
As DRAM density keeps increasing, more rows must be protected in a single refresh operation, since the total refresh-command count remains constant.  ...  A quasi-ROR interface protocol is proposed so that CREAM is fully compatible with the JEDEC-DDR standard with negligible hardware overhead and no extra pin-out.  ...  The selected SPEC2006 CPU benchmarks with reference input size [20] and STREAM with all functions [21] are evaluated as a multi-programmed testbench.  ... 
doi:10.1109/hpca.2014.6835947 dblp:conf/hpca/ZhangPXSX14 fatcat:lpwxedokhne7pa6bhvxfocuyqu

A unified theory of garbage collection

David F. Bacon, Perry Cheng, V. T. Rajan
2004 SIGPLAN notices  
Using this framework, we show that all high-performance collectors (for example, deferred reference counting and generational collection) are in fact hybrids of tracing and reference counting.  ...  Intuitively, the difference is that tracing operates on live objects, or "matter", while reference counting operates on dead objects, or "anti-matter".  ...  It is simply deferred reference counting with the role of the edges exchanged; it therefore has a duality with deferred reference counting in a similar way that tracing and reference counting are themselves  ... 
doi:10.1145/1035292.1028982 fatcat:mwmii7lwvvbgrntgmf7y5o5y5y

A unified theory of garbage collection

David F. Bacon, Perry Cheng, V. T. Rajan
2004 Proceedings of the 19th annual ACM SIGPLAN Conference on Object-oriented programming, systems, languages, and applications - OOPSLA '04  
Using this framework, we show that all high-performance collectors (for example, deferred reference counting and generational collection) are in fact hybrids of tracing and reference counting.  ...  Intuitively, the difference is that tracing operates on live objects, or "matter", while reference counting operates on dead objects, or "anti-matter".  ...  It is simply deferred reference counting with the role of the edges exchanged; it therefore has a duality with deferred reference counting in a similar way that tracing and reference counting are themselves  ... 
doi:10.1145/1028976.1028982 dblp:conf/oopsla/BaconCR04 fatcat:f3nzhd674zcotk7mgoqnebksty
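The two records above describe deferred reference counting as a tracing/counting hybrid: heap-to-heap pointer writes are counted, stack references are not, and objects whose count reaches zero are only candidates for reclamation until a scan of the roots confirms they are dead. A minimal sketch under those assumptions (names like `zct` and `collect` are illustrative, not from the Bacon, Cheng, and Rajan paper):

```python
class Cell:
    def __init__(self):
        self.rc = 0          # counts heap references only
        self.refs = []       # outgoing heap pointers

zct = set()                  # zero-count table: reclamation candidates

def heap_write(src, dst):
    # Only heap-to-heap writes pay the counting cost.
    dst.rc += 1
    src.refs.append(dst)
    zct.discard(dst)

def heap_clear(src, dst):
    src.refs.remove(dst)
    dst.rc -= 1
    if dst.rc == 0:
        zct.add(dst)         # maybe garbage; decided at collection time

def collect(stack_roots):
    # Reclaim only ZCT entries not reachable from the stack; this root
    # scan is the "tracing" half of the hybrid the paper describes.
    reclaimed = [c for c in zct if c not in stack_roots]
    zct.difference_update(reclaimed)
    return reclaimed

a, b = Cell(), Cell()
heap_write(a, b)                 # b.rc == 1
heap_clear(a, b)                 # b.rc == 0: deferred into the ZCT
assert collect({b}) == []        # stack still holds b: not reclaimed
assert collect(set()) == [b]     # no stack reference left: reclaimed
```

Deferral trades immediacy for speed: the frequent stack mutations cost nothing, and the occasional root scan settles the fate of everything in the zero-count table at once.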

Lazy reference counting for the microgrid

Raphael Poss, Clemens Grelck, Stephan Herhut, Sven-Bodo Scholz
2012 2012 16th Workshop on Interaction between Compilers and Computer Architectures (INTERACT)  
This paper revisits non-deferred reference counting, a common technique to ensure that potentially shared large heap objects can be reused safely when they are both input and output to computations.  ...  Traditionally, thread-safe reference counting exploits implicit memory-based communication of counter data and requires means to achieve a globally consistent memory state, either using barriers or locks  ...  What would be a constant-time operation in an imperative environment becomes linear in the size of the array.  ... 
doi:10.1109/interact.2012.6339625 dblp:conf/IEEEinteract/PossGHS12 fatcat:wk6y7y5nr5cevhyhdifxbgltxu
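The snippet above points at why non-deferred counts matter for functional array languages: an update can be done destructively, in constant time, only when the count proves the array is unshared; otherwise the whole array must be copied first. A toy sketch of that uniqueness check (illustrative only, not the Microgrid implementation):

```python
class RCArray:
    """An array paired with a reference count (toy model)."""
    def __init__(self, data, rc=1):
        self.data, self.rc = data, rc

def functional_update(a, i, val):
    # Unshared (rc == 1): safe to update destructively in O(1).
    if a.rc == 1:
        a.data[i] = val
        return a
    # Shared: copy the whole array first, hence the linear cost the
    # snippet mentions for purely functional updates.
    a.rc -= 1
    fresh = RCArray(a.data.copy())
    fresh.data[i] = val
    return fresh

x = RCArray([1, 2, 3])
y = functional_update(x, 0, 9)   # unique: updated in place
assert y is x and x.data == [9, 2, 3]

x.rc += 1                        # simulate a second live reference
z = functional_update(x, 1, 8)   # shared: forces a copy
assert z is not x and z.data == [9, 8, 3]
assert x.data == [9, 2, 3]       # original left intact for the alias
```

The correctness of the `rc == 1` fast path is exactly why the count must be accurate at the moment of the update, which is what makes non-deferred (rather than deferred) counting attractive in this setting.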

A comprehensive strategy for contention management in software transactional memory

Michael F. Spear, Luke Dalessandro, Virendra J. Marathe, Michael L. Scott
2009 SIGPLAN notices  
Unfortunately, most past approaches to contention management were designed for obstruction-free STM frameworks, and impose significant constant-time overheads.  ...  In this paper we present a comprehensive strategy for contention management via fair resolution of conflicts in an STM with invisible reads.  ...  When no such transactions exist, the runtime should incur only a small constant overhead at commit time. Only when transactions with nonzero priority are in-flight should additional overheads apply.  ... 
doi:10.1145/1594835.1504199 fatcat:3biom3pkz5awpfsqk3uez7vwei

A comprehensive strategy for contention management in software transactional memory

Michael F. Spear, Luke Dalessandro, Virendra J. Marathe, Michael L. Scott
2009 Proceedings of the 14th ACM SIGPLAN symposium on Principles and practice of parallel programming - PPoPP '09  
Unfortunately, most past approaches to contention management were designed for obstruction-free STM frameworks, and impose significant constant-time overheads.  ...  In this paper we present a comprehensive strategy for contention management via fair resolution of conflicts in an STM with invisible reads.  ...  When no such transactions exist, the runtime should incur only a small constant overhead at commit time. Only when transactions with nonzero priority are in-flight should additional overheads apply.  ... 
doi:10.1145/1504176.1504199 dblp:conf/ppopp/SpearDMS09 fatcat:nkbrcyimabdndnwwlvlmauzmti

Derivation and Evaluation of Concurrent Collectors [chapter]

Martin T. Vechev, David F. Bacon, Perry Cheng, David Grove
2005 Lecture Notes in Computer Science  
We have implemented a concurrent collector framework and the resulting algorithms in IBM's J9 Java virtual machine product and compared their performance in terms of space, time, and incrementality.  ...  There are many algorithms for concurrent garbage collection, but they are complex to describe, verify, and implement.  ...  in object space overhead and barrier time overhead by reducing the size of the scanned reference count (SRC); (4) reducing object space overhead by reducing the precision of the per-object shade; (5)  ... 
doi:10.1007/11531142_25 fatcat:jzkz7gm4kng4jk5djgr52xtxpy

Performance of memory reclamation for lockless synchronization

Thomas E. Hart, Paul E. McKenney, Angela Demke Brown, Jonathan Walpole
2007 Journal of Parallel and Distributed Computing  
Achieving high performance for concurrent applications on modern multiprocessors remains challenging.  ...  Many programmers avoid locking to improve performance, while others replace locks with non-blocking synchronization to protect against deadlock, priority inversion, and convoying.  ...  For example, reference counting [8, 39] has high overhead in the base case and scales poorly with data-structure size.  ... 
doi:10.1016/j.jpdc.2007.04.010 fatcat:e7h4p4irkfci7pnxuxyruivcny

Synchronized-by-Default Concurrency for Shared-Memory Systems

Martin Bättig, Thomas R. Gross
2017 Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming - PPoPP '17  
Threads are mapped to atomic sections that a programmer must explicitly split to increase concurrency. A naive implementation of this approach incurs a large amount of overhead.  ...  with transactional I/O to provide good scaling properties.  ...  To the extent that the execution time of a concurrent program is not dominated by synchronization time, SBD offers an appealing alternative.  ... 
doi:10.1145/3018743.3018747 fatcat:hec6gd77vrh7lmcg74hcy5zfku
Showing results 1 — 15 out of 3,788 results