A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2021. The file type is application/pdf.
HPO-B: A Large-Scale Reproducible Benchmark for Black-Box HPO based on OpenML
[article] · 2021 · arXiv pre-print
Hyperparameter optimization (HPO) is a core problem for the machine learning community and remains largely unsolved due to the significant computational resources required to evaluate hyperparameter configurations. As a result, a series of recent related works have focused on the direction of transfer learning for quickly fine-tuning hyperparameters on a dataset. Unfortunately, the community does not have a common large-scale benchmark for comparing HPO algorithms. Instead, the de facto […]
arXiv:2106.06257v2
fatcat:ijfywum3mveopkxqmnfm2mfvfm
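The abstract frames HPO as a black-box problem: an optimizer proposes hyperparameter configurations and observes only the resulting validation score, with each evaluation being expensive. As a minimal illustration of that setting (not the paper's method, and using a toy quadratic objective in place of a real model evaluation), a random-search baseline can be sketched as:

```python
import random

def random_search(objective, space, n_trials=20, seed=0):
    """Evaluate n_trials random configurations and return the best one.

    space maps each hyperparameter name to a (low, high) range;
    objective is treated as an opaque black box.
    """
    rng = random.Random(seed)
    best_cfg, best_val = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        val = objective(cfg)  # in practice, an expensive train/validate run
        if val < best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

# Hypothetical search space; the lambda is a stand-in for validation loss.
space = {"lr": (1e-4, 1.0), "weight_decay": (0.0, 0.1)}
best_cfg, best_val = random_search(
    lambda c: (c["lr"] - 0.1) ** 2 + c["weight_decay"], space, n_trials=50
)
```

Benchmarks such as HPO-B make this loop cheap to study by replacing the expensive objective call with precomputed (tabular or surrogate) evaluations, so different HPO algorithms can be compared reproducibly.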