Overcoming Hadoop Scaling Limitations through Distributed Task Execution
2015 IEEE International Conference on Cluster Computing
Data-driven programming models like MapReduce have gained popularity in large-scale data processing. Although great efforts in the Hadoop implementation and in framework decoupling (e.g., YARN, Mesos) have allowed Hadoop to scale to tens of thousands of commodity cluster processors, the centralized designs of the resource manager, the task scheduler, and the metadata management of the HDFS file system adversely affect Hadoop's scalability toward tomorrow's extreme-scale data centers. This paper aims to
doi:10.1109/cluster.2015.42
dblp:conf/cluster/WangLSYZLLSR15
fatcat:zb2wcgjrznh7vaq7ssisaqffgi