Analysis of recursively parallel programs
We propose a general formal model of isolated hierarchical parallel computations, and identify several fragments to match the concurrency constructs present in real-world programming languages such as Cilk and X10. By associating fundamental formal models (vector addition systems with recursive transitions) to each fragment, we provide a common platform for exposing the relative difficulties of algorithmic reasoning. For each case we measure the complexity of deciding state-reachability for finite-data recursive programs, and propose algorithms for the decidable cases. The complexities, which include PTIME, NP, EXPSPACE, and 2EXPTIME, contrast with the undecidability of state-reachability for recursive multi-threaded programs.

Introduction

Despite the ever-increasing importance of concurrent software (e.g., for designing reactive applications, or for parallelizing computation across multiple processor cores), concurrent programming and concurrent program analysis remain challenging endeavors. The most widely available facility for designing concurrent applications is multithreading, where concurrently executing sequential threads nondeterministically interleave their accesses to shared memory. Such nondeterminism leads to rarely occurring "Heisenbugs," which are notoriously difficult to reproduce and repair. To prevent such bugs, programmers face the difficult task of forbidding undesirable interleavings, e.g., by employing lock-based synchronization, without forbidding benign interleavings; otherwise the desired reactivity or parallelism is forfeited. The complexity of multi-threaded program analysis matches the perceived difficulty of multi-threaded programming: the state-reachability problem for multi-threaded programs is PSPACE-complete with a finite number of finite-state threads, and undecidable with recursive threads. Current analysis approaches either explore an underapproximate concurrent semantics by considering relatively few interleavings [9, 22], or explore a coarse overapproximate semantics via abstraction [13, 18]. Explicitly-parallel programming languages have been advocated to avoid the intricate interleavings implicit in program syntax, and several such industrial-strength languages have been developed [2, 5, 6, 17, 25, 31, 33].
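As a concrete illustration (not taken from the paper), the interleaving hazard and its lock-based repair can be sketched in Java: two threads repeatedly increment a shared counter, and the `synchronized` keyword forbids the interleavings in which one thread's read-modify-write is interrupted by the other's, losing an update.

```java
// Sketch of lock-based synchronization preventing a lost-update Heisenbug.
// Without `synchronized` on increment(), the two threads' read-modify-write
// sequences could interleave, and the final count could fall below 200000.
class Counter {
    private long value = 0;

    // The intrinsic lock serializes each read-modify-write as one atomic step.
    synchronized void increment() { value++; }

    synchronized long get() { return value; }

    public static void main(String[] args) throws InterruptedException {
        final Counter c = new Counter();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) c.increment();
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(c.get()); // always 200000 with the lock in place
    }
}
```

The delicate part, as the paragraph above notes, is that the lock must forbid exactly the harmful interleavings: locking too coarsely (e.g., holding the lock across both loops) would serialize the threads entirely and forfeit the parallelism.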
Such systems introduce various mechanisms for creating (e.g., fork, spawn, post) and consuming (e.g., join, sync) concurrent computations, and either encourage (through recommended programming practices) or ensure (through static analyses or runtime systems) that parallel computations execute in isolation, without interference from others, through data-partitioning, data-replication, functional programming, message passing, or version-based memory access models.

* Partially supported by the project ANR-09-SEGI-016 Veridyc.
† Supported by a post-doctoral fellowship from the Fondation Sciences Mathématiques de Paris.
Proofs of technical results are contained in the appendices.
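To make the fork/join discipline concrete, here is a minimal sketch in Java's fork/join framework, not a construct of the paper's formal model: each task forks an isolated subtask, computes the other branch itself, and consumes the forked result with `join`. The subtasks share no mutable state, so no interleavings are observable.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// The canonical Cilk-style example: parallel Fibonacci via fork/join.
// Each task creates (forks) and consumes (joins) child computations that
// run in isolation, communicating only through their return values.
class Fib extends RecursiveTask<Long> {
    private final int n;

    Fib(int n) { this.n = n; }

    @Override
    protected Long compute() {
        if (n < 2) return (long) n;
        Fib left = new Fib(n - 1);
        left.fork();                  // create a parallel child computation
        long right = new Fib(n - 2).compute(); // evaluate one branch locally
        return left.join() + right;   // consume the child's result
    }

    public static void main(String[] args) {
        long result = new ForkJoinPool().invoke(new Fib(20));
        System.out.println(result);   // fib(20) = 6765
    }
}
```

Because the children are isolated, the result is deterministic regardless of how the runtime schedules the forked tasks, which is precisely the property that distinguishes this style from unrestricted multithreading.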