Distributed-Memory Tensor Completion for Generalized Loss Functions in Python using New Sparse Tensor Kernels
[article]
2021
arXiv pre-print
Tensor computations are increasingly prevalent numerical techniques in data science, but pose unique challenges for high-performance implementation. We provide novel algorithms and systems infrastructure which enable efficient parallel implementation of algorithms for tensor completion with generalized loss functions. Specifically, we consider alternating minimization, coordinate minimization, and a quasi-Newton (generalized Gauss-Newton) method. By extending the Cyclops library, we implement […]
arXiv:1910.02371v3
fatcat:nrxjrkde7baulfn4zdj3vmlgcm
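As a rough illustration of the alternating minimization approach named in the abstract (not the paper's distributed, Cyclops-based implementation; all function and variable names below are hypothetical), the following single-node NumPy sketch alternately solves small regularized least-squares problems for the rows of each CP factor, restricted to the observed entries of a third-order tensor:

```python
# Minimal sketch of alternating minimization (ALS) for rank-R CP tensor
# completion with quadratic loss. Illustrative only; the paper's method
# runs on distributed sparse tensors via Cyclops, which this does not show.
import numpy as np

def als_completion(T, omega, R, n_iter=20, reg=1e-3, seed=0):
    """T: I x J x K array holding observed values (zeros elsewhere),
    omega: boolean mask of observed entries, R: target CP rank."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    U = rng.standard_normal((I, R))
    V = rng.standard_normal((J, R))
    W = rng.standard_normal((K, R))

    def update(A_out, B, C, axes):
        # Bring the mode being updated to the front, then solve a
        # regularized least-squares problem per row over observed entries.
        T_m = np.moveaxis(T, axes, (0, 1, 2))
        om_m = np.moveaxis(omega, axes, (0, 1, 2))
        for i in range(A_out.shape[0]):
            idx = np.argwhere(om_m[i])           # observed index pairs
            if idx.size == 0:
                continue
            KR = B[idx[:, 0]] * C[idx[:, 1]]     # Khatri-Rao rows, (#obs, R)
            rhs = T_m[i][om_m[i]]                # observed values of slice i
            G = KR.T @ KR + reg * np.eye(R)
            A_out[i] = np.linalg.solve(G, KR.T @ rhs)

    for _ in range(n_iter):
        update(U, V, W, (0, 1, 2))
        update(V, U, W, (1, 0, 2))
        update(W, U, V, (2, 0, 1))
    return U, V, W
```

The generalized-loss and quasi-Newton variants discussed in the paper replace these closed-form least-squares solves with iterative subproblem solvers, which this quadratic-loss sketch does not cover.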