A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2015; you can also visit <a rel="external noopener" href="http://research.microsoft.com/pubs/150180/oopsla065-burckhardt.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
<i title="ACM Press">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/6y3m32grtnfktkp2oo6oqjbvta" style="color: black;">Proceedings of the 2011 ACM international conference on Object oriented programming systems languages and applications - OOPSLA '11</a>
</i>
Parallel or incremental versions of an algorithm can significantly outperform their counterparts, but are often difficult to develop. Programming models that provide appropriate abstractions to decompose data and tasks can simplify parallelization. We show in this work that the same abstractions can enable both parallel and incremental execution. We present a novel algorithm for parallel self-adjusting computation. This algorithm extends a deterministic parallel programming model (concurrent revisions) with support for recording and repeating computations. On record, we construct a dynamic dependence graph of the parallel computation. On repeat, we re-execute only those parts whose dependencies have changed. We implement and evaluate our idea by studying five example programs, including a realistic multi-pass CSS layout algorithm. We describe programming techniques that proved particularly useful for improving the performance of self-adjustment in practice. Our final results show significant speedups on all examples (up to 37x on an 8-core machine). These speedups are well beyond what can be achieved by parallelization alone, while requiring a comparable effort by the programmer.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1145/2048066.2048101">doi:10.1145/2048066.2048101</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/oopsla/BurckhardtLSYB11.html">dblp:conf/oopsla/BurckhardtLSYB11</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vutsiid3ebc7hgcgqe62n5rvr4">fatcat:vutsiid3ebc7hgcgqe62n5rvr4</a> </span>
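The record/repeat idea in the abstract can be illustrated with a minimal sketch: during "record", each computation logs which inputs it reads, forming a dependence graph; on "repeat", only computations whose inputs actually changed are re-executed. This is not the paper's implementation (which builds on concurrent revisions and tracks parallel tasks); the names `Cell` and `Engine` here are illustrative assumptions for a sequential toy version.

```python
class Cell:
    """A mutable input value that remembers which computations read it."""
    def __init__(self, value):
        self.value = value
        self.readers = []           # (name, fn) pairs recorded as dependents

class Engine:
    def __init__(self):
        self.results = {}           # cached outputs of recorded computations
        self._current = None        # computation currently being recorded

    def read(self, cell):
        # While recording, note the dependence of the current computation
        # on this cell (one edge of the dynamic dependence graph).
        if self._current is not None and self._current not in cell.readers:
            cell.readers.append(self._current)
        return cell.value

    def record(self, name, fn):
        # Run fn once, recording every cell it reads along the way.
        self._current = (name, fn)
        self.results[name] = fn()
        self._current = None

    def write(self, cell, value):
        # Repeat: if the input is unchanged, nothing re-runs; otherwise
        # only the computations that depend on this cell are re-executed.
        if cell.value == value:
            return
        cell.value = value
        for name, fn in cell.readers:
            self.results[name] = fn()

engine = Engine()
a, b = Cell(2), Cell(3)
engine.record("sum", lambda: engine.read(a) + engine.read(b))
engine.write(a, 10)                 # only "sum" re-runs; b is untouched
print(engine.results["sum"])        # 13
```

A real self-adjusting system must also handle transitive dependencies, removal of stale edges, and (in the paper's setting) deterministic parallel tasks, but the record-then-selectively-repeat structure is the same.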
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20150916014901/http://research.microsoft.com/pubs/150180/oopsla065-burckhardt.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/bf/84/bf84237c25769592fdef82bf5e74bebfa32525f4.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1145/2048066.2048101"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> acm.org </button> </a>