A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
A Survey on the Optimization of Neural Network Accelerators for Micro-AI On-Device Inference
2021
IEEE Journal on Emerging and Selected Topics in Circuits and Systems
Deep neural networks (DNNs) are being prototyped for a variety of artificial intelligence (AI) tasks, including computer vision, data analytics, robotics, etc. The efficacy of DNNs coincides with the fact that they can provide state-of-the-art inference accuracy for these applications. However, this advantage comes at the cost of the high computational complexity of the DNNs in use. Hence, it is becoming increasingly important to scale these DNNs so that they can fit on resource-constrained hardware and
doi:10.1109/jetcas.2021.3129415
fatcat:nknpy4eernaeljz2hpqafe7sja