A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Optimization for Speculative Execution of Multiple Jobs in a MapReduce-like Cluster
[article] 2015, arXiv preprint
Nowadays, a computing cluster in a typical data center can easily consist of hundreds of thousands of commodity servers, making component/machine failures the norm rather than the exception. A parallel processing job can be delayed substantially if even one of its many tasks is assigned to a failing machine. To tackle this so-called straggler problem, most parallel processing frameworks such as MapReduce have adopted various strategies under which the system may speculatively launch
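The speculative-execution strategy described above can be illustrated with a minimal sketch, not taken from the paper: a scheduler runs one copy of a task, and if it has not finished within a deadline, launches a backup copy and keeps whichever copy completes first. The `run_with_speculation` helper and the simulated straggler task are hypothetical names introduced only for this example.

```python
import concurrent.futures as cf
import time

def run_with_speculation(task, deadline=0.05):
    """Run `task`; if it exceeds `deadline` seconds, speculatively
    launch a backup copy and return the first result to arrive."""
    with cf.ThreadPoolExecutor(max_workers=2) as pool:
        original = pool.submit(task)
        done, _ = cf.wait([original], timeout=deadline)
        if done:
            return original.result()  # original finished in time
        backup = pool.submit(task)    # speculative backup copy
        done, _ = cf.wait([original, backup],
                          return_when=cf.FIRST_COMPLETED)
        return done.pop().result()

# Simulated task: the first copy lands on a "failing" machine and
# runs slowly; the backup copy lands on a healthy machine.
calls = {"n": 0}

def task():
    calls["n"] += 1
    copy_id = calls["n"]
    time.sleep(0.5 if copy_id == 1 else 0.01)
    return copy_id
```

Here the backup (copy 2) finishes first, so the job is not held hostage by the straggler; the trade-off, which the paper optimizes, is the extra resources the redundant copy consumes.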
arXiv:1406.0609v3