Practical proofs of concurrent programs

Marc Shapiro
SIGPLAN Notices, 2006
Modern computer architectures are increasingly parallel: viz., clusters and multi-core PCs. More and more developers will be seduced into concurrent programming, unprepared for the difficulties of understanding, writing and debugging concurrent programs. Proposed higher-level abstractions (such as lightweight transactions [3]) may provide the illusion that concurrency is easy, but there is a fundamental theoretical issue: threads can interfere with one another in arbitrary ways, and the number of possible cases is combinatorial.

In practice, however, reasonable programs follow concurrency-control disciplines (e.g., locking) that avoid the bad interactions. We propose to formalise this concurrency control and to leverage it, in order to reason in a modular fashion and side-step the combinatorial explosion. To this effect, we use the "rely-guarantee" (R-G) approach [1, 4]. In addition to the standard pre- and postconditions of sequential Hoare logic, a program is equipped with two non-interference assertions: a rely condition limits the interference it may suffer from its environment; a guarantee condition specifies what interference it may inflict on its environment. If the rely condition of any particular thread is implied by all the other threads' guarantee conditions (and if certain technical conditions are met), then standard sequential reasoning can be used to prove the postcondition. We describe some extensions to the basic R-G approach to make it practical.
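To illustrate the R-G idea, here is a minimal sketch (not taken from the paper; the class and method names are invented for this example) of a shared counter that threads may only increment. Each thread's guarantee is "I never decrease the counter"; its rely, implied by the other threads' guarantees, is "the environment never decreases the counter". Under that rely, purely sequential reasoning establishes each thread's postcondition.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical rely-guarantee illustration (not from the paper).
// Guarantee of each thread: it only increments c, never decreases it.
// Rely of each thread: the environment only increments c.
public class RelyGuaranteeCounter {
    static final AtomicInteger c = new AtomicInteger(0);

    // {pre: c >= 0}
    // rely:      c never decreases
    // guarantee: this thread's only write is an increment
    // {post: c >= old(c) + 1}
    static void work() {
        int before = c.get();          // snapshot; by the rely, c can only grow
        c.incrementAndGet();           // our single write respects the guarantee
        assert c.get() >= before + 1;  // provable sequentially, given the rely (run with -ea)
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(RelyGuaranteeCounter::work);
        Thread t2 = new Thread(RelyGuaranteeCounter::work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("final counter = " + c.get());
    }
}
```

The assertion holds for every interleaving, even though the exact value each thread observes depends on the schedule; that is the point of replacing case analysis over interleavings with a rely condition.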
As an example, we study a family of implementations of linked lists using fine-grain synchronisation. This approach enables greater concurrency, but correctness is a greater challenge than with classical, coarse-grain synchronisation. Our examples are demonstrative of common design patterns such as lock coupling, optimistic, and lazy synchronisation. Although they are highly concurrent, …
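For concreteness, the lock-coupling (hand-over-hand) pattern mentioned above can be sketched as follows; this is an illustrative Java version on a sorted list with sentinel nodes, not the paper's own code.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of lock coupling on a sorted linked list.
// User keys are assumed to lie strictly between the sentinel keys.
public class LockCouplingList {
    private static final class Node {
        final int key;
        Node next;
        final ReentrantLock lock = new ReentrantLock();
        Node(int key, Node next) { this.key = key; this.next = next; }
    }

    private final Node head;

    public LockCouplingList() {
        // Sentinels: head holds the smallest key, tail the largest.
        head = new Node(Integer.MIN_VALUE, new Node(Integer.MAX_VALUE, null));
    }

    // Informal guarantee: a node is modified only while both its own lock
    // and its predecessor's lock are held, so the list stays sorted and acyclic.
    public boolean add(int key) {
        Node pred = head;
        pred.lock.lock();
        Node curr = pred.next;
        curr.lock.lock();
        try {
            // Hand-over-hand: never release pred before acquiring curr.
            while (curr.key < key) {
                pred.lock.unlock();
                pred = curr;
                curr = curr.next;
                curr.lock.lock();
            }
            if (curr.key == key) return false;   // already present
            pred.next = new Node(key, curr);     // splice in under both locks
            return true;
        } finally {
            curr.lock.unlock();
            pred.lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockCouplingList list = new LockCouplingList();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 100; i += 2) list.add(i); });
        Thread t2 = new Thread(() -> { for (int i = 1; i < 100; i += 2) list.add(i); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("50 already present: " + !list.add(50));
    }
}
```

The locking discipline stated in the comment, that a node is changed only while its own and its predecessor's locks are held, is exactly the kind of per-thread guarantee that the R-G proofs formalise.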
doi:10.1145/1160074.1159819