Memory-efficient training of binarized neural networks on the edge
2022
Proceedings of the 59th ACM/IEEE Design Automation Conference
A visionary computing paradigm is to train resource-efficient neural networks on the edge using dedicated low-power accelerators instead of cloud infrastructure, eliminating communication overhead and privacy concerns. One promising resource-efficient approach for inference is binarized neural networks (BNNs), which binarize both parameters and activations. However, training BNNs remains resource-demanding: state-of-the-art BNN training methods, such as the binary optimizer (Bop), require storing …
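The abstract notes that BNNs binarize parameters and activations. A minimal sketch of the standard sign-based binarization commonly used in BNNs (an illustrative example, not the paper's Bop method; the function name `binarize` is our own) might look like:

```python
import numpy as np

def binarize(x):
    # Map real values to {-1, +1} via the sign function;
    # zeros are sent to +1 so the output stays strictly binary.
    return np.where(x >= 0, 1.0, -1.0)

# Latent real-valued weights, as typically maintained during BNN training.
w = np.array([0.3, -1.2, 0.0, 0.7])
wb = binarize(w)
print(wb)  # [ 1. -1.  1.  1.]
```

During inference only the binary values are needed, which is what makes BNNs attractive for low-power edge accelerators; the memory cost of the latent real-valued state kept during training is the issue the abstract raises.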
doi:10.1145/3489517.3530496
fatcat:v4sdy7fesvap5fe3f7ztzn2udu