Regularization in directable environments with application to Tetris
2019
International Conference on Machine Learning
Learning from small data sets is difficult in the absence of specific domain knowledge. We present a regularized linear model called STEW, which benefits from a generic and prevalent form of prior knowledge: feature directions. STEW shrinks weights toward each other, converging to an equal-weights solution in the limit of infinite regularization. We provide theoretical results on the equal-weights solution that explain how STEW can productively trade off bias and variance. Across a wide range of […]
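As a rough illustration (not taken from the paper itself), the sketch below shows one way such a shrinkage-toward-equal-weights estimator can be implemented, assuming the penalty takes the pairwise squared-difference form λ Σ_{i<j} (w_i − w_j)², which equals wᵀLw for L the Laplacian of the complete graph over the features. The function name stew_fit, the toy data, and this exact penalty form are assumptions for illustration only.

```python
import numpy as np

def stew_fit(X, y, lam):
    """Penalized least squares with shrinkage toward equal weights.

    Sketch assuming the penalty lam * sum_{i<j} (w_i - w_j)^2,
    i.e. lam * w' L w with L = p*I - 11^T (complete-graph Laplacian).
    The null space of L is the equal-weights direction, so as lam
    grows the solution is pushed toward equal weights.
    """
    n, p = X.shape
    L = p * np.eye(p) - np.ones((p, p))
    # Ridge-like closed form for the penalized objective.
    return np.linalg.solve(X.T @ X + lam * L, X.T @ y)

# Hypothetical usage: features oriented so that larger values imply larger y.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 5))
true_w = np.array([0.8, 1.0, 1.2, 0.9, 1.1])
y = X @ true_w + 0.1 * rng.standard_normal(30)
print(stew_fit(X, y, lam=0.0))   # ordinary least squares
print(stew_fit(X, y, lam=1e6))   # close to an equal-weights solution
```

With lam=0 the estimator reduces to ordinary least squares; with very large lam the fitted weights become nearly identical, matching the limiting behavior described in the abstract.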
dblp:conf/icml/LichtenbergS19
fatcat:4svwlczxqrfvfluh2xc2cm6xaq