A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
MF-Net: Compute-In-Memory SRAM for Multibit Precision Inference using Memory-immersed Data Conversion and Multiplication-free Operators
[article]
2021
arXiv
pre-print
We propose a co-design approach for compute-in-memory inference for deep neural networks (DNN). ...
We use multiplication-free function approximators based on the ℓ1 norm, along with a co-adapted processing array and compute flow. ...
Additionally, multibit precision DNN inference is challenging under compute-in-memory. ...
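For context on the ℓ1-norm based, multiplication-free approximation mentioned in the abstract: a commonly used multiplication-free operator in this literature is a ⊕ b = sign(a)·b + sign(b)·a = sign(a·b)(|a| + |b|), which replaces each multiply in a dot product with sign logic and additions. The short Python sketch below illustrates that substitution only; it is an assumption-based illustration, not a reproduction of the paper's compute-in-memory array or data-conversion flow, and the names mf_op and mf_dot are hypothetical.

```python
import numpy as np

def mf_op(a, b):
    # Multiplication-free operator from the l1-norm approximator literature
    # (assumed form, not necessarily the paper's exact hardware mapping):
    # a (+) b = sign(a)*b + sign(b)*a = sign(a*b) * (|a| + |b|)
    return np.sign(a) * b + np.sign(b) * a

def mf_dot(w, x):
    # Multiplication-free stand-in for the multiply-accumulate w @ x:
    # sum of elementwise operator terms, using only sign checks and adds.
    return np.sum(mf_op(w, x))

# Example: compare the operator-based correlation with a conventional dot product.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.5, 0.25, -0.75])
print("MF operator result:", mf_dot(w, x))
print("Standard dot      :", np.dot(w, x))
```

Note that, unlike the standard dot product (tied to the ℓ2 norm via x·x = |x|²), the operator relates to the ℓ1 norm since x ⊕ x = 2|x|, which is why such approximators are described as ℓ1-norm based.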
arXiv:2102.00035v1
fatcat:hveresaqmngbfi5tqopniuwn5q
Table of contents
2021
IEEE Transactions on Circuits and Systems I: Regular Papers
Digital Circuits and Systems and VLSI
MF-Net: Compute-In-Memory SRAM for Multibit Precision Inference Using Memory-Immersed Data Conversion and Multiplication-Free Operators
doi:10.1109/tcsi.2021.3072264
fatcat:uhdta35gybe6rddwiwqfntuu2e