The engine relies on a fine-grained workflow that enables neural-network design exploration and a per-layer sensitivity analysis for quantization. ... To achieve this, we introduce LPDNN, a framework for the optimized deployment of deep neural networks on heterogeneous embedded devices. ... LPDNN (Low Power Deep Neural Network) is an inference engine for deep learning developed within the Bonseyes project. ...
doi:10.1145/3203217.3203282 dblp:conf/cf/PradoDBP18 fatcat:jbolw7re3nadjbhszpn5jefuv4
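The per-layer sensitivity analysis mentioned above can be sketched as follows. This is a minimal illustration, not LPDNN's actual workflow or API: the toy ReLU MLP, the `quantize` scheme (uniform symmetric), and the error metric are all assumptions made for the example.

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return w
    return np.round(w / scale) * scale

def sensitivity(layers, x, bits):
    """Quantize one layer at a time and measure the output error against
    the full-precision network, so fragile layers can be identified."""
    def forward(ws):
        h = x
        for w in ws:
            h = np.maximum(w @ h, 0.0)  # toy ReLU MLP stands in for the real model
        return h

    ref = forward(layers)  # full-precision reference output
    errors = {}
    for i in range(len(layers)):
        qs = [quantize(w, bits) if j == i else w for j, w in enumerate(layers)]
        errors[i] = float(np.linalg.norm(forward(qs) - ref))
    return errors

rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 8)) for _ in range(3)]
x = rng.standard_normal(8)
print(sensitivity(layers, x, bits=4))
```

Layers whose quantized-vs-reference error is largest would then be kept at higher precision, which is the kind of decision such an analysis feeds into.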
We first propose Channeleon, which tackles the problem of compressing the activation maps in deep neural networks (DNNs) at inference time. ... This enables the activations to have low bit-width while incurring acceptable accuracy losses. ... Quenn: Quantization engine for low-power neural networks. ICCF, 2018. Moinuddin K Qureshi, David Thompson, and Yale N Patt. ...
doi:10.14288/1.0404515 fatcat:nxtj5xz4yffm7inyjyp7k6caem
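Low-bit-width activation handling of the kind described in this abstract can be sketched with plain uniform quantization; this is an illustrative assumption, not the actual Channeleon method, and the function names here are hypothetical.

```python
import numpy as np

def quantize_activations(a, bits):
    """Uniformly quantize a non-negative (post-ReLU) activation map to
    `bits` bits; return integer codes plus the scale needed to decode."""
    levels = 2 ** bits - 1
    peak = float(a.max())
    scale = peak / levels if peak > 0 else 1.0
    codes = np.round(a / scale).astype(np.uint8)  # uint8 holds bits <= 8
    return codes, scale

def dequantize(codes, scale):
    """Recover an approximate float activation map from the codes."""
    return codes.astype(np.float32) * scale

rng = np.random.default_rng(1)
act = np.maximum(rng.standard_normal((4, 16)), 0.0)  # toy ReLU activations
codes, scale = quantize_activations(act, bits=4)
recon = dequantize(codes, scale)
print(float(np.abs(recon - act).max()))  # error is bounded by scale / 2
```

Storing 4-bit codes instead of 32-bit floats is what yields the memory savings at inference time, at the cost of the bounded reconstruction error printed above.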