A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit <a rel="external noopener" href="https://arxiv.org/pdf/1705.04867v2.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
Nearest Neighbors for Matrix Estimation Interpreted as Blind Regression for Latent Variable Model
[article]
<span title="2019-06-27">2019</span>
<i>arXiv</i>
<span class="release-stage">pre-print</span>
We consider the setup of nonparametric blind regression for estimating the entries of a large m × n matrix from a small, random fraction of noisy measurements. We assume that each row u ∈ [m] and column i ∈ [n] of the matrix is associated with latent features x_row(u) and x_col(i) respectively, and that the (u, i)-th entry of the matrix, A(u, i), is equal to f(x_row(u), x_col(i)) for a latent function f. Given noisy observations of a small, random subset of the matrix entries, our goal is to estimate the unobserved entries of the matrix as well as to "de-noise" the observed entries. As the main result of this work, we introduce a nearest-neighbor-based estimation algorithm and establish its consistency when the underlying latent function f is Lipschitz, the underlying latent space is a Polish space of bounded diameter, and the random fraction of observed entries in the matrix is at least max(m^(−1+δ), n^(−1/2+δ)) for any δ > 0. As an important byproduct, our analysis sheds light on the performance of the classical collaborative filtering algorithm for matrix completion, which has been widely used in practice. Experiments with the MovieLens and Netflix datasets suggest that our algorithm provides a principled improvement over basic collaborative filtering and is competitive with matrix factorization methods. Our algorithm extends naturally to tensor completion by flattening the tensor into a matrix. When applied to image inpainting, where the image is an order-3 tensor, our approach is competitive with state-of-the-art tensor completion algorithms across benchmark images.
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1705.04867v2">arXiv:1705.04867v2</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vgi4rghohfec5kwmpj36gatute">fatcat:vgi4rghohfec5kwmpj36gatute</a>
</span>
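The nearest-neighbor idea described in the abstract can be illustrated with a minimal sketch: to estimate entry (u, i), average the column-i observations of rows whose observed entries overlap with row u's and lie within a distance threshold. This is an assumed, simplified illustration of similarity-based (user-user collaborative filtering style) estimation, not the paper's exact algorithm; the function name and the threshold parameter `eta` are hypothetical.

```python
import numpy as np

def nn_estimate(M, mask, u, i, eta=1.0):
    """Estimate entry (u, i) of a partially observed matrix.

    M    : (m, n) array of (noisy) values; only entries where mask is True are observed
    mask : (m, n) boolean array of observed positions
    eta  : distance threshold for accepting a row as a "neighbor"
    """
    m, n = M.shape
    neighbor_vals = []
    for v in range(m):
        if v == u or not mask[v, i]:
            continue  # need row v to have observed column i
        overlap = mask[u] & mask[v]
        overlap[i] = False  # exclude the target column from the distance
        if overlap.sum() == 0:
            continue  # no common observations to compare on
        # mean squared difference over commonly observed columns
        dist = np.mean((M[u, overlap] - M[v, overlap]) ** 2)
        if dist <= eta:
            neighbor_vals.append(M[v, i])
    return np.mean(neighbor_vals) if neighbor_vals else np.nan
```

For a Lipschitz f such as f(x, y) = x + y, rows with similar latent features produce small overlap distances, so shrinking `eta` restricts the average to rows whose latent features are close to row u's.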
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200829214432/https://arxiv.org/pdf/1705.04867v2.pdf" title="fulltext PDF download">Web Archive [PDF]</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1705.04867v2" title="arxiv.org access">arxiv.org</a>