Advanced concurrency control in Java

Pascal Felber, Michael K. Reiter
Concurrency and Computation: Practice and Experience, 2002 (doi:10.1002/cpe.635)
Developing concurrent applications is not a trivial task. As programs grow larger and become more complex, advanced concurrency control mechanisms are needed to ensure that application consistency is not compromised. Managing mutual exclusion on a per-object basis is not sufficient to guarantee isolation of sets of semantically-related actions. In this paper, we consider 'atomic blocks', a simple and lightweight concurrency control paradigm that enables arbitrary blocks of code to access multiple shared objects in isolation. We evaluate various strategies for implementing atomic blocks in Java, in such a way that concurrency control is transparent to the programmer, isolation is preserved, and concurrency is maximized. We discuss these concurrency control strategies and evaluate them in terms of complexity and performance.

data items. This is especially true of code that was not developed with concurrency in mind, but is executed a posteriori in a concurrent context.

Concurrency control mechanisms that implement mutual exclusion of multiple actions in concurrent applications face a tradeoff: on the one hand, control over shared resources must be acquired conservatively, to avoid situations where rollback would be necessary; on the other hand, control over these shared resources must be held for the shortest possible time, to increase concurrency. While this tension has been extensively studied in databases [1], surprisingly little work has been performed in the context of concurrent programming languages.

This paper discusses concurrency control mechanisms for implementing atomic sets of actions in Java, a general-purpose, object-oriented concurrent programming language. The goal is to provide simple yet efficient mechanisms to implement mutual exclusion on arbitrary sets of objects, in order to increase concurrency of multi-threaded applications without violating safety. We take advantage of the object-oriented nature of the language to guarantee isolation in a transparent way and to decouple the declaration of critical sections from the underlying mutual exclusion mechanisms. Code executing in an atomic block does not need to be aware of concurrency, and existing applications only require trivial modifications to take advantage of our mechanisms. Several concurrency control strategies are presented and evaluated in terms of complexity and performance. While the mechanisms discussed in this paper have been packaged as a class library for ease of implementation, they could easily be added to the language through a simple extension of Java's 'synchronized' statement.

This paper makes several contributions. First, we discuss the provision of advanced concurrency control mechanisms that preserve consistency and isolation of shared objects across multiple operations in multi-threaded environments. We identify several deadlock-free locking strategies that satisfy the requirements of our application model (four variants of two-phase locking protocols and a tree-based locking protocol), and we discuss the benefits and drawbacks of each strategy. Second, we specifically address the problem of transparent concurrency management in Java. The mechanisms introduced in this paper permit seamless addition of concurrency control to arbitrary blocks of Java code, without modifications to the actual code within critical sections. Because concurrency management is fully decoupled from the application logic, features such as the locking strategy can be changed as late as runtime, independently of the application's code. Finally, we evaluate the cost of transparent concurrency control in Java applications. We perform a comparative analysis of the locking strategies implemented in our framework under various workloads and measure the overhead of the techniques used to make concurrency management transparent to the application's code.
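To make the atomic-block idea concrete before going further, the sketch below shows one way such a construct could be packaged as a plain Java class library. It is only an illustration under simple assumptions, not the framework described in this paper: the `Shared` and `AtomicBlocks` classes and the `atomic` method are hypothetical names, and the strategy shown is a conservative, rollback-free form of two-phase locking that acquires the locks of all listed objects in a fixed global order before running the block.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical shared object: it carries its own lock so that an atomic
// block can claim it, together with other objects, for the block's duration.
final class Shared {
    final ReentrantLock lock = new ReentrantLock();
    private int value;

    int get()       { return value; }
    void set(int v) { value = v; }
}

final class AtomicBlocks {
    // Runs 'block' while holding the locks of every listed object.
    // Locks are acquired in a fixed global order (identity hash code,
    // ties ignored for brevity) so two concurrent blocks cannot deadlock,
    // and they are only released once the block has finished.
    static void atomic(Runnable block, Shared... objects) {
        Shared[] ordered = objects.clone();
        Arrays.sort(ordered, Comparator.comparingInt(System::identityHashCode));
        for (Shared s : ordered) {
            s.lock.lock();                        // growing phase
        }
        try {
            block.run();                          // code runs in isolation
        } finally {
            for (int i = ordered.length - 1; i >= 0; i--) {
                ordered[i].lock.unlock();         // shrinking phase
            }
        }
    }
}
```

With such a library, a caller wraps the critical code in a lambda, for example `AtomicBlocks.atomic(() -> { b.set(a.get()); }, a, b);` for two `Shared` variables `a` and `b`, so the code inside the block never manipulates locks itself.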
The rest of the paper is organized as follows. Section 2 introduces background concepts and presents the motivations of this work. Section 3 briefly discusses related work. Section 4 describes the various locking policies supported by our Java concurrency control framework. Section 5 discusses the implementation of atomic blocks in Java using the locking policies previously introduced. Section 6 presents experimental results from our Java implementation and compares the different policies in terms of concurrency and runtime performance. Finally, Section 7 concludes the paper.

BACKGROUND AND MOTIVATIONS

Consider the simple problem of transferring money from one bank account to another. This transfer operation must be atomic, in the sense that any other entity accessing these accounts concurrently will never observe an intermediate state in which the money has been withdrawn from one account but not yet deposited into the other.
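The Java sketch below illustrates this scenario under simple assumptions; the class names and the ordered double-locking trick are illustrative, not the mechanism introduced in this paper. Per-object synchronization keeps each account internally consistent, yet the two-step transfer is not atomic; only grouping both accounts under a single critical section, which is what an atomic block provides declaratively, prevents other threads from observing the intermediate state.

```java
// Hypothetical sketch of the transfer scenario: each Account is thread-safe
// for individual operations, but that alone does not make a two-step
// transfer atomic.
final class Account {
    private long balance;

    Account(long initial)                { balance = initial; }
    synchronized void withdraw(long amt) { balance -= amt; }
    synchronized void deposit(long amt)  { balance += amt; }
    synchronized long balance()          { return balance; }
}

final class Transfers {
    // NOT atomic: each call is individually synchronized, yet another
    // thread may read both balances between the two calls and observe
    // money that has temporarily "vanished".
    static void unsafeTransfer(Account from, Account to, long amt) {
        from.withdraw(amt);
        to.deposit(amt);
    }

    // Atomic with respect to code using the same discipline: both monitors
    // are held for the whole transfer, acquired in a fixed order (identity
    // hash code, ties ignored for brevity) to avoid deadlock between two
    // opposite transfers running concurrently.
    static void atomicTransfer(Account from, Account to, long amt) {
        Account first  = System.identityHashCode(from) <= System.identityHashCode(to) ? from : to;
        Account second = (first == from) ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.withdraw(amt);
                to.deposit(amt);
            }
        }
    }
}
```

The manual lock ordering in `atomicTransfer` is exactly the kind of error-prone boilerplate that an atomic block is meant to hide from the application code.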