QUENN: QUantization Engine for low-power Neural Networks

Miguel de Prado, Maurizio Denna, Luca Benini, Nuria Pazos
2018 Proceedings of the 15th ACM International Conference on Computing Frontiers - CF '18  
The engine depends on a fine-grained workflow which enables a Neural Network Design Exploration and a sensitivity analysis of each layer for quantization.  ...  To achieve this, we introduce LPDNN, a framework for optimized deployment of Deep Neural Networks on heterogeneous embedded devices.  ...  LPDNN (Low Power Deep Neural Network) is an inference engine developed within the Bonseyes project for Deep Learning.  ... 
doi:10.1145/3203217.3203282 dblp:conf/cf/PradoDBP18 fatcat:jbolw7re3nadjbhszpn5jefuv4
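The snippet above mentions a per-layer sensitivity analysis for quantization. A minimal sketch of what such an analysis can look like, assuming uniform symmetric weight quantization (the helper names `quantize` and `sensitivity` are illustrative, not LPDNN's actual API):

```python
# Hypothetical sketch of a per-layer quantization sensitivity analysis:
# quantize each layer's weights to a given bit-width and measure the
# reconstruction error, so the most sensitive layers can be identified.
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit signed
    m = float(np.max(np.abs(w)))
    scale = m / qmax if m > 0 else 1.0  # guard against an all-zero tensor
    return np.round(w / scale).clip(-qmax, qmax) * scale

def sensitivity(layers, bits):
    """Per-layer mean squared quantization error at a given bit-width."""
    return {name: float(np.mean((w - quantize(w, bits)) ** 2))
            for name, w in layers.items()}
```

Running `sensitivity` at several bit-widths (e.g. 8, 4, 2) shows which layers degrade fastest and therefore need more precision.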

Efficient in-hardware compression of on-chip data

Amin Ghasemazar
2021
We first propose Channeleon, which tackles the problem of compressing the activation maps in deep neural networks (DNNs) at inference time.  ...  This enables the activations to have low bit-width while incurring acceptable accuracy losses.  ...  Quenn: Quantization engine for low-power neural networks. ICCF, 2018. [202] Moinuddin K Qureshi, David Thompson, and Yale N Patt.  ... 
doi:10.14288/1.0404515 fatcat:nxtj5xz4yffm7inyjyp7k6caem
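The snippet above describes storing activation maps at low bit-width at inference time. As a rough illustration of that idea (not the thesis's actual Channeleon design), a simple uniform affine scheme for post-ReLU activations:

```python
# Illustrative sketch: quantize a non-negative (post-ReLU) activation map
# to a low bit-width for on-chip storage, keeping the scale factor so the
# values can be dequantized later as q * scale.
import numpy as np

def quantize_activations(a, bits):
    """Quantize activation map `a` to `bits` unsigned bits; return (q, scale)."""
    qmax = 2 ** bits - 1                       # e.g. 15 for 4-bit storage
    m = float(a.max())
    scale = m / qmax if m > 0 else 1.0         # guard against an all-zero map
    q = np.round(a / scale).clip(0, qmax).astype(np.uint8)
    return q, scale
```

The rounding step bounds the per-element error by `scale / 2`, which is where the "acceptable accuracy losses" trade-off against bit-width comes from.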