23 Hits in 0.96 sec

An Empirical Guide to the Behavior and Use of Scalable Persistent Memory [article]

Jian Yang, Juno Kim, Morteza Hoseinzadeh, Joseph Izraelevitz, Steven Swanson
2019 arXiv   pre-print
[50] have explored logging mechanisms for the devices, and Izraelevitz et al. [30] have explored general performance characteristics.  ... 
arXiv:1908.03583v1 fatcat:vdni6qh2wnhwfo7mfzmudv7faa

Brief announcement

Joseph Izraelevitz, Michael L. Scott
2014 Proceedings of the 26th ACM symposium on Parallelism in algorithms and architectures - SPAA '14  
In this paper, we introduce two new FIFO dual queues. Like all dual queues, they arrange for dequeue operations to block when the queue is empty, and to complete in the original order when data becomes available. Compared to alternatives in which dequeues on an empty queue return an error code and force the caller to retry, dual queues provide a valuable guarantee of fairness. Our algorithms, based on the LCRQ of Morrison and Afek, outperform existing dual queues-notably the one in java.util.concurrent-by a factor of four to six. For both of our algorithms, we present extensions that guarantee lock freedom, albeit at some cost in performance.
doi:10.1145/2612669.2612711 dblp:conf/spaa/IzraelevitzS14 fatcat:wjxxk2geajay3jz75mb5weomhe
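
The dual-queue behavior described in this abstract can be summarized in code. The sketch below illustrates only the interface semantics, assuming a simple lock and a per-caller spin flag; it is not the paper's nonblocking, LCRQ-based algorithm, and all names are illustrative.

```cpp
// Minimal sketch of dual-queue *semantics*: an empty dequeue enqueues a
// reservation ("antidata") and the caller spins on its own flag until a
// producer fills it.  A mutex stands in for the paper's nonblocking design.
#include <atomic>
#include <deque>
#include <mutex>

template <typename T>
class DualQueueSketch {
    struct Reservation { std::atomic<bool> filled{false}; T value; };
    std::mutex m;
    std::deque<T> data;                 // pending data
    std::deque<Reservation*> antidata;  // pending reservations, oldest first
public:
    void enqueue(const T& v) {
        std::lock_guard<std::mutex> lk(m);
        if (!antidata.empty()) {        // satisfy the oldest waiting dequeuer
            Reservation* r = antidata.front();
            antidata.pop_front();
            r->value = v;
            r->filled.store(true, std::memory_order_release);
        } else {
            data.push_back(v);
        }
    }
    T dequeue() {
        Reservation r;                  // lives on the caller's stack
        {
            std::lock_guard<std::mutex> lk(m);
            if (!data.empty()) { T v = data.front(); data.pop_front(); return v; }
            antidata.push_back(&r);     // queue a reservation instead of failing
        }
        while (!r.filled.load(std::memory_order_acquire)) { /* spin on local flag */ }
        return r.value;
    }
};
```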

Generality and Speed in Nonblocking Dual Containers

Joseph Izraelevitz, Michael L. Scott
2017 ACM Transactions on Parallel Computing  
Section 3 (earlier versions of which appeared as a brief announcement [Izraelevitz and Scott 2014a] and a technical report [Izraelevitz and Scott 2014c]) introduces a generic construction for building  ...  Section 4 (also a previous brief announcement [Izraelevitz and Scott 2014b] and a technical report [Izraelevitz and Scott 2014d]) presents FIFO dual queues that significantly improve on the performance  ...  To solve this problem, we used the hot potato microbenchmark [Izraelevitz and Scott 2014c].  ... 
doi:10.1145/3040220 fatcat:mlgs6rvma5e6rfx43ghntdrk5u

An Unbounded Nonblocking Double-Ended Queue

Matthew Graichen, Joseph Izraelevitz, Michael L. Scott
2016 2016 45th International Conference on Parallel Processing (ICPP)  
We introduce a new algorithm for an unbounded concurrent double-ended queue (deque). Like the bounded deque of Herlihy, Luchangco, and Moir on which it is based, the new algorithm is simple and obstruction free, has no pathological long-latency scenarios, avoids interference between operations at opposite ends, and requires no special hardware support beyond the usual compare-and-swap. To the best of our knowledge, no prior concurrent deque combines these properties with unbounded capacity, or provides consistently better performance across a wide range of concurrent workloads. Index Terms: parallel processing; parallel algorithms; nonblocking algorithms
doi:10.1109/icpp.2016.32 dblp:conf/icpp/GraichenIS16 fatcat:nypkxvkcmje6fj4f36opidkg4m

Brief announcement

Joseph Izraelevitz, Michael L. Scott
2014 Proceedings of the 2014 ACM symposium on Principles of distributed computing - PODC '14  
A dual container has the property that when it is empty, the remove method will insert an explicit reservation ("antidata") into the container, rather than returning an error flag. This convention gives the container explicit control over the order in which pending requests will be satisfied once data becomes available. The dual pattern also allows the method's caller to spin on a thread-local flag, avoiding memory contention. In this paper we introduce a new nonblocking construction that allows any nonblocking container for data to be paired with almost any nonblocking container for antidata. This construction provides a composite ordering discipline-e.g., it can satisfy pending pops from a stack in FIFO order, or satisfy pending dequeues in order of thread priority.
doi:10.1145/2611462.2611510 dblp:conf/podc/IzraelevitzS14 fatcat:p7ucn2gpt5euzg7pv5ybhelw2y
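
To make the "composite ordering discipline" concrete, here is a hedged sketch in which one container orders the stored data (a LIFO stack) while a second, independent container orders the waiting removals (a FIFO queue), so pending pops from a stack are satisfied in FIFO order. A mutex stands in for the paper's nonblocking construction; every identifier is illustrative.

```cpp
// Sketch of the dual-container composition: data discipline and antidata
// discipline come from two different containers behind one interface.
#include <atomic>
#include <mutex>
#include <queue>
#include <stack>

template <typename T>
class DualContainerSketch {
    struct Waiter { std::atomic<bool> filled{false}; T value; };
    std::mutex m;
    std::stack<T> data;           // data discipline: LIFO
    std::queue<Waiter*> waiters;  // antidata discipline: FIFO
public:
    void insert(const T& v) {
        std::lock_guard<std::mutex> lk(m);
        if (!waiters.empty()) {                 // oldest pending pop wins
            Waiter* w = waiters.front(); waiters.pop();
            w->value = v;
            w->filled.store(true, std::memory_order_release);
        } else {
            data.push(v);
        }
    }
    T remove() {
        Waiter w;
        {
            std::lock_guard<std::mutex> lk(m);
            if (!data.empty()) { T v = data.top(); data.pop(); return v; }
            waiters.push(&w);                   // reservation ("antidata")
        }
        while (!w.filled.load(std::memory_order_acquire)) { /* spin locally */ }
        return w.value;
    }
};
```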

Linearizability of Persistent Memory Objects Under a Full-System-Crash Failure Model [chapter]

Joseph Izraelevitz, Hammurabi Mendes, Michael L. Scott
2016 Lecture Notes in Computer Science  
This paper provides a theoretical and practical framework for crash-resilient data structures on a machine with persistent (nonvolatile) memory but transient registers and cache. In contrast to certain prior work, but in keeping with "real world" systems, we assume a full-system failure model, in which all transient state (of all processes) is lost on a crash. We introduce the notion of durable linearizability to govern the safety of concurrent objects under this failure model, and a corresponding relaxed, buffered variant which ensures that the persistent state in the event of a crash is consistent but not necessarily up to date. At the implementation level, we present a new "memory persistency model," explicit epoch persistency, that builds upon and generalizes prior work. Our model captures both hardware buffering and fully relaxed consistency, and subsumes both existing and proposed instruction set architectures. Using the persistency model, we present an automated transform to convert any linearizable, nonblocking concurrent object into one that is also durably linearizable. We also present a design pattern, analogous to linearization points, for the construction of other, more optimized objects. Finally, we discuss generic optimizations that may improve performance while preserving both safety and liveness.
doi:10.1007/978-3-662-53426-7_23 fatcat:lmpxppphz5gvfgm7h3z6caij4y
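
The flush-and-fence discipline implied by the abstract can be shown with a tiny example. The sketch below assumes x86 with the CLWB instruction (compile with -mclwb) and a counter allocated in persistent memory; it illustrates the general idea of persisting an update after its linearization point, not the paper's automated transform.

```cpp
// After an update linearizes, write the modified cache line back to NVM and
// fence before returning, so the post-crash persistent state reflects every
// completed operation.
#include <atomic>
#include <immintrin.h>

struct DurableCounter {
    std::atomic<unsigned long> value;   // assumed to live in persistent memory

    unsigned long increment() {
        unsigned long v = value.fetch_add(1) + 1;   // linearization point
        _mm_clwb((void*)&value);                    // write the line back to NVM
        _mm_sfence();                               // order it before returning
        return v;
    }
};
```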

FileMR: Rethinking RDMA Networking for Scalable Persistent Memory

Jian Yang, Joseph Izraelevitz, Steven Swanson
2020 Symposium on Networked Systems Design and Implementation  
The emergence of dense, byte-addressable nonvolatile main memories (NVMMs) allows application developers to combine storage and memory into a single layer. With NVMMs, servers can be equipped with terabytes of memory that survive power outages, and all of this persistent capacity can be managed through a specialized NVMM file system. NVMMs appear to mesh perfectly with another popular technology, remote direct memory access (RDMA). RDMA gives a client direct access to memory on a remote machine and mediates this access through a memory region abstraction that handles the necessary translations and permissions. NVMM and RDMA seem eminently compatible: by combining them, we should be able to build network-attached, byte-addressable, persistent storage. Unfortunately, however, the systems were not designed to work together. An NVMM-aware file system manages persistent memory as files, whereas RDMA uses a different abstraction, memory regions, to organize remotely accessible memory. As a result, in practice, building RDMA-accessible NVMMs requires expensive translation layers resulting from this duplication of effort that spans permissions, naming, and address translation. This work introduces two changes to the existing RDMA protocol: file memory region (FileMR) and range-based address translation. These optimizations create an abstraction that combines memory regions and files: a client can directly access a file backed by an NVMM file system through RDMA, addressing its contents via file offsets. By eliminating redundant translations, it minimizes the number of translations done at the NIC, reduces the load on the NIC's translation cache, increases the hit rate by 3.8×-340×, and improves application performance by 1.8×-2.0×.
dblp:conf/nsdi/YangIS20 fatcat:sdfjalmpkbexrgmepjkxduiwje
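
As an illustration of why range-based address translation shrinks the NIC's mapping state, the sketch below keeps one translation entry per contiguous file extent rather than one per page. It is not the FileMR implementation or its API; the types and names are assumptions made for this example.

```cpp
// One entry per extent: a file offset is resolved by finding the covering
// extent, so a large, mostly-contiguous file needs very few entries.
#include <cstdint>
#include <map>
#include <optional>

struct Extent { uint64_t file_off, length, phys_addr; };

class RangeTranslationSketch {
    std::map<uint64_t, Extent> extents;   // keyed by starting file offset
public:
    void add_extent(const Extent& e) { extents[e.file_off] = e; }

    // Translate a file offset to a physical NVMM address, if mapped.
    std::optional<uint64_t> translate(uint64_t file_off) const {
        auto it = extents.upper_bound(file_off);
        if (it == extents.begin()) return std::nullopt;
        --it;
        const Extent& e = it->second;
        if (file_off < e.file_off + e.length)
            return e.phys_addr + (file_off - e.file_off);
        return std::nullopt;
    }
};
```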

Dalí: A Periodically Persistent Hash Map

Faisal Nawab, Joseph Izraelevitz, Terence Kelly, Charles B. Morrey III, Dhruva R. Chakrabarti, Michael L. Scott, Marc Herbstritt
2017 International Symposium on Distributed Computing  
Technology trends suggest that byte-addressable nonvolatile memory (NVM) will supplant many uses of DRAM over the coming decade, raising the prospect of inexpensive recovery from power failures and similar faults. Ensuring the consistency of persistent state remains nontrivial, however, in the presence of volatile caches; cached values can "leak" back to persistent memory in arbitrary order. To ensure consistency, existing persistent memory algorithms use expensive, explicit write-back instructions to force each value back to memory before performing a dependent write, thereby incurring significant run-time overhead. To reduce this overhead, we present a new design paradigm that we call periodic persistence. In a periodically persistent data structure, updates are made "in place," but can safely leak back to memory in any order, because only those updates that are known to be valid will be heeded during recovery. To guarantee forward progress, we periodically force a write-back of all dirty data in the cache, ensuring that all "sufficiently old" updates have indeed become persistent, at which point they become semantically visible to the recovery process. As an example of periodic persistence, we present a transactional hash map, Dalí, together with an informal proof of safety (buffered durable linearizability). Experiments with a prototype implementation suggest that periodic persistence can offer substantially better performance than either file-based or incrementally persistent (per-access write-back) alternatives.
doi:10.4230/lipics.disc.2017.37 dblp:conf/wdag/NawabIKMCS17 fatcat:bgihfwsmqveevkuduvwanm6lyq
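
A minimal sketch of the periodic-persistence bookkeeping may help: updates are stamped with the current era, a periodic whole-cache write-back advances the durable frontier, and recovery trusts only entries from eras that have finished flushing. Dalí's actual hash-map layout and its handling of updates that straddle an era boundary are not reproduced; all names are illustrative.

```cpp
// Bookkeeping sketch of periodic persistence: no per-write flush, a periodic
// whole-cache flush, and recovery-time validity checks on era stamps.
#include <atomic>
#include <cstdint>

struct Entry {
    uint64_t era;          // era in which this in-place update was made
    uint64_t key, value;
};

struct PeriodicPersistence {
    std::atomic<uint64_t> current_era{1};
    uint64_t durable_era = 0;            // newest era known to be fully written back

    // No per-write flush: the update may leak back to NVM in any order.
    void update(Entry& e, uint64_t k, uint64_t v) {
        e.key = k;
        e.value = v;
        e.era = current_era.load();
    }

    // Invoked periodically (e.g., every few milliseconds).
    void periodic_flush() {
        uint64_t closing = current_era.fetch_add(1);   // open a new era
        flush_entire_cache();                          // force all dirty lines to NVM
        durable_era = closing;                         // era `closing` is now persistent
    }

    // Recovery heeds only updates from eras that finished flushing.
    bool valid_at_recovery(const Entry& e) const { return e.era <= durable_era; }

    // Assumption: platform-specific whole-cache write-back (e.g., a clwb sweep).
    static void flush_entire_cache() { /* platform-specific */ }
};
```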

Performance Improvement via Always-Abort HTM

Joseph Izraelevitz, Lingxiang Xiang, Michael L. Scott
2017 2017 26th International Conference on Parallel Architectures and Compilation Techniques (PACT)  
This work proposes and discusses the implications of adding a new feature to hardware transactional memory, allowing a program to specify that a transaction should always abort (even if it executes a commit instruction), and is thus guaranteed to be free of side effects. Perhaps counterintuitively, we believe such a primitive can be useful. Prior art has already noted that HTM transactions, even in failure, can accelerate the subsequent execution of their contents by warming up the branch predictor and caches. However, traditional HTM requires that the programmer properly coordinate between HTM and other synchronization primitives, otherwise data races can occur. With always-abort HTM (AAHTM), no such synchronization is necessary, because there is no risk of accidentally committing a transaction that has seen inconsistent state. We can therefore use AAHTM in scenarios where traditional HTM would be unsafe. In this paper, we present several designs that use AAHTM, discuss preliminary results, and identify other situations in which the new primitive might be useful.
doi:10.1109/pact.2017.16 dblp:conf/IEEEpact/IzraelevitzXS17 fatcat:n3f45w4iira4zglhwrh3gzn3ay
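
Always-abort HTM is a proposed hardware feature, but its warm-up effect can be approximated on existing Intel RTM hardware by explicitly aborting, since a transaction that never commits cannot expose inconsistent state. The sketch below (compile with -mrtm) shows this pattern; it is an assumption-laden illustration, not the paper's evaluation methodology.

```cpp
// Speculatively execute the critical section once inside a transaction that
// is forced to abort, then execute it for real under the lock.
#include <immintrin.h>
#include <mutex>

template <typename CriticalSection>
void warm_then_run(std::mutex& lock, CriticalSection cs) {
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        cs();            // dry run: warms caches and the branch predictor
        _xabort(0xff);   // always abort: every speculative side effect is discarded
    }
    // The speculative run can never commit, so it needs no coordination with
    // other threads; this is the guarantee AAHTM would provide in hardware.
    std::lock_guard<std::mutex> g(lock);
    cs();                // the real, visible execution
}
```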

Interval-based memory reclamation

Haosen Wen, Joseph Izraelevitz, Wentao Cai, H. Alan Beadle, Michael L. Scott
2018 Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming - PPoPP '18  
In this paper we present interval-based reclamation (IBR), a new approach to safe reclamation of disconnected memory blocks in nonblocking concurrent data structures. Safe reclamation is a difficult problem: a thread, before freeing a block, must ensure that no other threads are accessing that block; the required synchronization tends to be expensive. In contrast with epoch-based reclamation, in which threads reserve all blocks created after a certain time, or pointer-based reclamation (e.g., hazard pointers), in which threads reserve individual blocks, IBR allows a thread to reserve all blocks known to have existed in a bounded interval of time. By comparing a thread's reserved interval with the lifetime of a detached but not yet reclaimed block, the system can determine if the block is safe to free. Like hazard pointers, IBR avoids the possibility that a single stalled thread may reserve an unbounded number of blocks; unlike hazard pointers, it avoids a memory fence on most pointer-following operations. It also avoids the need to explicitly "unreserve" a no-longer-needed pointer. We describe three specific IBR schemes (one with several variants) that trade off performance, applicability, and space requirements. IBR requires no special hardware or OS support. In experiments with data structure microbenchmarks, it also compares favorably (in both time and space) to other state-of-the-art approaches, making it an attractive alternative for libraries of concurrent data structures.
doi:10.1145/3178487.3178488 dblp:conf/ppopp/WenICBS18 fatcat:v3qcujeaj5f7daotjtkfwyazdu
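
The core IBR bookkeeping, as described in the abstract, can be sketched briefly: blocks are stamped with birth and retire eras, threads publish a reserved interval of eras, and a detached block may be freed once its lifetime overlaps no reservation. The paper's concrete schemes (and their fence elisions) are not reproduced; this is a simplified illustration with assumed names.

```cpp
// Global era clock, per-thread reserved intervals, and interval-overlap test.
#include <atomic>
#include <cstdint>

constexpr int kMaxThreads = 64;
std::atomic<uint64_t> era_clock{1};   // advanced periodically by allocations/retirements

struct Reservation { std::atomic<uint64_t> lo{UINT64_MAX}, hi{0}; };
Reservation reservations[kMaxThreads];

struct Block { uint64_t birth_era = 0, retire_era = 0; /* payload ... */ };

// Stamp blocks with the current era on allocation and on retirement.
Block* alloc_block()          { Block* b = new Block; b->birth_era = era_clock.load(); return b; }
void   retire_stamp(Block* b) { b->retire_era = era_clock.load(); }

// Readers extend their reserved interval to cover every era they traverse.
void begin_op(int tid) { uint64_t e = era_clock.load(); reservations[tid].lo = e; reservations[tid].hi = e; }
void on_read(int tid)  { reservations[tid].hi = era_clock.load(); }
void end_op(int tid)   { reservations[tid].lo = UINT64_MAX; reservations[tid].hi = 0; }

// A retired block is reclaimable iff no reservation overlaps [birth, retire].
bool can_free(const Block* b) {
    for (int t = 0; t < kMaxThreads; ++t) {
        uint64_t lo = reservations[t].lo.load(), hi = reservations[t].hi.load();
        if (lo <= b->retire_era && b->birth_era <= hi) return false;   // overlap
    }
    return true;
}
```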

Orion: A Distributed File System for Non-Volatile Main Memory and RDMA-Capable Networks

Jian Yang, Joseph Izraelevitz, Steven Swanson
2019 USENIX Conference on File and Storage Technologies  
High-performance, byte-addressable non-volatile main memories (NVMMs) force system designers to rethink tradeoffs throughout the system stack, often leading to dramatic changes in system architecture. Conventional distributed file systems are a prime example. When faster NVMM replaces block-based storage, the dramatic improvement in storage performance makes networking and software overhead a critical bottleneck. In this paper, we present Orion, a distributed file system for NVMM-based storage. By taking a clean slate design and leveraging the characteristics of NVMM and high-speed, RDMA-based networking, Orion provides high-performance metadata and data access while maintaining the byte addressability of NVMM. Our evaluation shows Orion achieves performance comparable to local NVMM file systems and outperforms existing distributed file systems by a large margin.
dblp:conf/fast/YangIS19 fatcat:f6bpmpw475fyjfniqy2o3i6ium

Basic Performance Measurements of the Intel Optane DC Persistent Memory Module [article]

Joseph Izraelevitz, Jian Yang, Lu Zhang, Juno Kim, Xiao Liu, Amirsaman Memaripour, Yun Joon Soh, Zixuan Wang, Yi Xu, Subramanya R. Dulloor, Jishen Zhao, Steven Swanson
2019 arXiv   pre-print
Scalable nonvolatile memory DIMMs will finally be commercially available with the release of the Intel Optane DC Persistent Memory Module (or just "Optane DC PMM"). This new nonvolatile DIMM supports byte-granularity accesses with access times on the order of DRAM, while also providing data storage that survives power outages. This work comprises the first in-depth, scholarly, performance review of Intel's Optane DC PMM, exploring its capabilities as a main memory device, and as persistent, byte-addressable memory exposed to user-space applications. This report details the technology's performance under a number of modes and scenarios, and across a wide variety of macro-scale benchmarks. Optane DC PMMs can be used as large memory devices with a DRAM cache to hide their lower bandwidth and higher latency. When used in this Memory (or cached) mode, Optane DC memory has little impact on applications with small memory footprints. Applications with larger memory footprints may experience some slow-down relative to DRAM, but are now able to keep much more data in memory. When used under a file system, Optane DC PMMs can result in significant performance gains, especially when the file system is optimized to use the load/store interface of the Optane DC PMM and the application uses many small, persistent writes. For instance, using the NOVA-relaxed NVMM file system, we can improve the performance of Kyoto Cabinet by almost 2x. Optane DC PMMs can also enable user-space persistence where the application explicitly controls its writes into persistent Optane DC media. In our experiments, modified applications that used user-space Optane DC persistence generally outperformed their file system counterparts. For instance, the persistent version of RocksDB performed almost 2x faster than the equivalent program utilizing an NVMM-aware file system.
arXiv:1903.05714v3 fatcat:pk5ng6rplzddrf7tfbrueftcua
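
The "user-space persistence" path the report benchmarks can be sketched with standard Linux APIs: map a file from a DAX-capable NVMM file system and make writes durable with a cache-line write-back plus a fence, bypassing the file system on the data path. The mount point, sizes, and flags below are assumptions for illustration (a DAX mount and CLWB support are required; compile with -mclwb), not the report's benchmark code.

```cpp
// Map a file from an NVMM file system and persist a small write from user space.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>
#include <immintrin.h>

int main() {
    int fd = open("/mnt/pmem/log", O_CREAT | O_RDWR, 0644);  // hypothetical DAX-mounted path
    if (fd < 0) return 1;
    if (ftruncate(fd, 4096) != 0) return 1;

    // MAP_SYNC makes flushed stores durable without msync/fsync
    // (requires MAP_SHARED_VALIDATE and a DAX-capable file system).
    void* p = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                   MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p == MAP_FAILED) return 1;

    std::memcpy(p, "hello, persistent world", 24);  // small persistent write
    _mm_clwb(p);          // write the dirty line back to the persistent media
    _mm_sfence();         // order the write-back before continuing

    munmap(p, 4096);
    close(fd);
    return 0;
}
```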

Dalí: A Periodically Persistent Hash Map

Faisal Nawab, Joseph Izraelevitz, Terence Kelly, Charles Morrey, Dhruva Chakrabarti, Michael Scott
unpublished
Technology trends suggest that byte-addressable nonvolatile memory (NVM) will supplant many uses of DRAM over the coming decade, raising the prospect of inexpensive recovery from power failures and similar faults. Ensuring the consistency of persistent state remains nontrivial, however, in the presence of volatile caches; cached values can "leak" back to persistent memory in arbitrary order. To ensure consistency, existing persistent memory algorithms use expensive, explicit write-back instructions to force each value back to memory before performing a dependent write, thereby incurring significant run-time overhead. To reduce this overhead, we present a new design paradigm that we call periodic persistence. In a periodically persistent data structure, updates are made "in place," but can safely leak back to memory in any order, because only those updates that are known to be valid will be heeded during recovery. To guarantee forward progress, we periodically force a write-back of all dirty data in the cache, ensuring that all "sufficiently old" updates have indeed become persistent, at which point they become semantically visible to the recovery process. As an example of periodic persistence, we present a transactional hash map, Dalí, together with an informal proof of safety (buffered durable linearizability). Experiments with a prototype implementation suggest that periodic persistence can offer substantially better performance than either file-based or incrementally persistent (per-access write-back) alternatives.
fatcat:upeurvqazvfido6rmwju556piq

Scientific Exploration of Venus with Aerial Platforms

James Cutts, Shahid Aslam, Sushil Atreya, Kevin Baines, Patricia Beauchamp, Josette Bellan, Daniel C. Bowman, Kumar Bugga, Mark Bullock, Paul K. Byrne, Kar-Ming Cheung, Darby Dyar (+28 others)
2021 Bulletin of the AAS  
doi:10.3847/25c2cfeb.29dd4fbb fatcat:ozgc27hscvenzenphv7yo6mdbi

How Should We Think about Persistent Data Structures?

Michael L. Scott
2022 Proceedings of the 2022 ACM Symposium on Principles of Distributed Computing  
ACKNOWLEDGEMENTS Ideas described in this keynote reflect joint work with past and current students, including Joseph Izraelevitz, Haosen Wen, Wentao Cai, Mingzhe Du, and Chris Kjellqvist.  ... 
doi:10.1145/3519270.3538455 fatcat:cjmpkvuy4zdqjad24icrotoyae
Showing results 1 — 15 out of 23 results