Resource-aware programming and simulation of MPSoC architectures through extension of X10

Frank Hannig, Sascha Roloff, Gregor Snelting, Jürgen Teich, Andreas Zwinkau
Proceedings of the 14th International Workshop on Software and Compilers for Embedded Systems (SCOPES '11), 2011
The efficient use of future MPSoCs with 1000 or more processor cores requires new means of resource-aware programming to deal with increasing imperfections such as process variation, fault rates, aging effects, and power as well as thermal problems. In this paper, we apply a new approach called invasive computing that enables an application programmer to deliberately spread computations over processors at certain points of the program. Such decisions can be made depending on the degree of application parallelism and the state of the underlying resources, such as utilization, load, and temperature. The introduced programming constructs for resource-aware programming are embedded into the parallel computing language X10, developed by IBM, using a library-based approach.
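The invasive-computing literature commonly refers to the underlying phases as invade, infect, and retreat. The following is a minimal, hypothetical Java sketch of such a claim-based, library-level interface; it is not the paper's X10 API, and the class Claim, its methods, and the use of a thread pool as stand-in for processing elements are illustrative assumptions only. It merely mimics the idea of requesting processing elements, spreading work over them, and releasing them again.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Hypothetical sketch (names are illustrative, not the paper's X10 constructs):
// an application first "invades" a set of processing elements, then "infects"
// the claim with work, and finally "retreats", releasing the resources again.
public class ClaimSketch {

    /** A claim: the processing elements currently owned by the application. */
    static final class Claim implements AutoCloseable {
        private final ExecutorService pool;
        private final int size;

        private Claim(int size) {
            this.size = size;
            this.pool = Executors.newFixedThreadPool(size);
        }

        /** invade: request up to 'wanted' processing elements from the system. */
        static Claim invade(int wanted) {
            int granted = Math.min(wanted, Runtime.getRuntime().availableProcessors());
            return new Claim(granted);
        }

        /** infect: spread a parallel computation over the claimed resources. */
        List<Future<Long>> infect(List<Callable<Long>> work) {
            List<Future<Long>> results = new ArrayList<>();
            for (Callable<Long> w : work) {
                results.add(pool.submit(w));
            }
            return results;
        }

        /** retreat: release the claimed resources again. */
        @Override
        public void close() {
            pool.shutdown();
        }

        int size() { return size; }
    }

    public static void main(String[] args) throws Exception {
        try (Claim claim = Claim.invade(4)) {
            System.out.println("granted " + claim.size() + " processing elements");
            List<Callable<Long>> work = new ArrayList<>();
            for (int i = 0; i < claim.size(); i++) {
                final long chunk = i;
                work.add(() -> chunk * chunk);   // placeholder computation
            }
            long sum = 0;
            for (Future<Long> f : claim.infect(work)) {
                sum += f.get();
            }
            System.out.println("result: " + sum);
        } // retreat happens here when the claim is closed
    }
}
```

In the paper's setting, the amount of resources actually granted can also depend on resource state such as load or temperature; here the grant is simply capped by the number of available cores to keep the sketch self-contained.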
Moreover, we show how individual heterogeneous MPSoC architectures may be modeled for subsequent functional simulation by representing compute resources such as processors as lightweight threads that are executed in parallel with the application threads by the X10 run-time system (a minimal thread-based sketch of this idea is given below). In this way, the state changes of each hardware resource may be simulated, including temperature, aging, and other monitor functionality, providing a first high-level programming test-bed for invasive computing.

Invasive computing [14] has been proposed as a solution to this problem: with this term, we envision that applications running on a Multi-Processor System-on-Chip (MPSoC) architecture might map and distribute their workload themselves, based on their temporal computing demands, the temporal availability of resources, and other state information of the resources (e.g., temperature, faultiness, resource usage, permissions). However, in order to make this computing paradigm a reality and to evaluate its benefits properly, the way applications are developed, including algorithm design, language implementation, and compilation tools, needs to change to a large extent. The idea of allowing applications to spread their computations over resources and later free them again decentrally by themselves at run-time is promising. The expected benefits include an increase in speedup (with respect to statically mapped applications), fault tolerance, and a considerable increase in resource utilization, hence computational efficiency. These efficiency numbers, however, need to be analyzed carefully and traded against the overhead incurred compared with statically mapped applications.
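The following is a minimal Java sketch of the simulation idea described above, under the assumption of a single simulated processor and a deliberately crude thermal model; the paper itself realizes this with lightweight X10 activities rather than Java threads, and the class and field names here (SimulatedProcessor, executedOps, temperature) are illustrative only.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only: one simulated processor whose state (here a crude
// temperature estimate driven by its load counter) is advanced by a lightweight
// thread running concurrently with the application thread that uses it.
public class ResourceSimulationSketch {

    static final class SimulatedProcessor implements Runnable {
        final AtomicLong executedOps = new AtomicLong();     // load reported by the application
        final AtomicBoolean running = new AtomicBoolean(true);
        volatile double temperature = 40.0;                  // degrees Celsius, ambient start value

        @Override
        public void run() {
            long lastOps = 0;
            while (running.get()) {
                long ops = executedOps.get();
                long delta = ops - lastOps;                  // activity since the last time step
                lastOps = ops;
                // toy thermal model (an assumption for this sketch):
                // heat up with recent activity, cool towards ambient
                temperature += 0.0001 * delta - 0.1 * (temperature - 40.0);
                try {
                    Thread.sleep(10);                        // simulation time step
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        SimulatedProcessor cpu = new SimulatedProcessor();
        Thread monitor = new Thread(cpu, "cpu-model");
        monitor.start();

        // application work "executing" on the simulated processor
        for (int i = 0; i < 50; i++) {
            cpu.executedOps.addAndGet(10_000);               // report work done in this step
            Thread.sleep(5);
        }
        System.out.printf("estimated temperature after load: %.1f C%n", cpu.temperature);

        cpu.running.set(false);
        monitor.join();
    }
}
```

An application could consult such per-resource state (temperature, load, fault counters) when deciding how many resources to claim; in the paper this monitoring runs alongside the application within the same X10 run-time system rather than in a separate simulator process.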
doi:10.1145/1988932.1988941 dblp:conf/scopes/HannigRSTZ11