A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
Integrated ARM big.Little-Mali Pipeline for High-Throughput CNN Inference
[post]
2021
unpublished
<div>State-of-the-art Heterogeneous Multi-Processor System-on-Chips (HMPSoCs) can perform on-chip embedded inference on their CPU and GPU. Multi-component pipelining is the method of choice for providing high-throughput Convolutional Neural Network (CNN) inference on embedded platforms. In this work, we provide details of Pipe-All, the first CPU-GPU pipeline design for CNN inference. Pipe-All uses the ARM-CL library to integrate an ARM big.LITTLE CPU with an ARM Mali GPU. Pipe-All is the first three-stage CNN
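The multi-component pipelining the abstract describes can be pictured as a chain of concurrent stages connected by queues, with each stage mapped to a different processing component. The following is a minimal, generic sketch of such a three-stage pipeline; the stage names ("big", "little", "mali") and the per-stage work functions are illustrative assumptions standing in for the layer groups Pipe-All would map to the big CPU cluster, the LITTLE CPU cluster, and the Mali GPU, not the paper's actual implementation.

```python
# Illustrative sketch of a three-stage pipeline: stages run concurrently
# and pass items through FIFO queues, so their work overlaps in time.
# Stage mapping and work functions are hypothetical, not from Pipe-All.
import queue
import threading

def make_stage(name, work, inq, outq):
    """Build a thread that applies `work` to each item from inq into outq."""
    def run():
        while True:
            item = inq.get()
            if item is None:        # sentinel: propagate shutdown downstream
                outq.put(None)
                break
            outq.put(work(item))
    return threading.Thread(target=run, name=name)

# Queues linking the stages, plus an output queue for final results.
q_in, q_ab, q_bc, q_out = (queue.Queue() for _ in range(4))

# Hypothetical per-stage work standing in for groups of CNN layers.
stages = [
    make_stage("big",    lambda x: x + 1, q_in, q_ab),
    make_stage("little", lambda x: x * 2, q_ab, q_bc),
    make_stage("mali",   lambda x: x - 3, q_bc, q_out),
]
for t in stages:
    t.start()

# Feed four "frames" through the pipeline, then the shutdown sentinel.
for frame in range(4):
    q_in.put(frame)
q_in.put(None)
for t in stages:
    t.join()

# Drain results: each frame f becomes (f + 1) * 2 - 3.
out = []
while (v := q_out.get()) is not None:
    out.append(v)
print(out)  # [-1, 1, 3, 5]
```

Because each stage holds only one item at a time and the queues are FIFO, throughput approaches one frame per slowest-stage latency once the pipeline fills, which is the rationale for splitting a CNN across heterogeneous components.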
doi:10.36227/techrxiv.14994885
fatcat:pyguawiux5extirswtk6vsjj7m