
TinyML for Ubiquitous Edge AI [article]

Stanislava Soro
2021 arXiv   pre-print
inference applications on battery-operated, resource-constrained devices.  ...  TinyML addresses the challenges in designing power-efficient, compact deep neural network models, supporting software framework, and embedded hardware that will enable a wide range of customized, ubiquitous  ...  The increase in computation resources accelerated the research and development of deep neural networks, which continued to grow in complexity and resource requirements over time.  ... 
arXiv:2102.01255v1 fatcat:if5ny6kcirdkhnj56mswfaptlm

Neural networks on microcontrollers: saving memory at inference via operator reordering [article]

Edgar Liberis, Nicholas D. Lane
2020 arXiv   pre-print
In this work, we discuss the deployment and memory concerns of neural networks on MCUs and present a way of saving memory by changing the execution order of the network's operators, which is orthogonal  ...  However, they lack the computational resources to run neural networks as straightforwardly as mobile or server platforms, which necessitates changes to the network architecture and the inference software  ...  The authors would also like to thank Javier Fernández-Marqués for providing help with measuring energy usage.  ... 
arXiv:1910.05110v2 fatcat:ouhx7qgiwfavxc6kefencidtym
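The operator-reordering idea in the snippet above can be illustrated with a toy sketch (the graph, buffer sizes, and brute-force search here are invented for illustration and are not the paper's algorithm): peak SRAM usage depends on which topological order of the network's operators is executed, so picking the order that minimizes the peak sum of live buffers saves memory without changing the model at all.

```python
# Toy illustration of memory-saving operator reordering: enumerate
# valid topological orders of a small operator graph and pick the one
# with the lowest peak sum of live tensor sizes.
from itertools import permutations

# Hypothetical graph: op name -> (input tensors, output size in bytes).
ops = {
    "a": ([], 100),
    "b": (["a"], 10),
    "c": ([], 100),
    "d": (["c"], 10),
    "e": (["b", "d"], 10),
}

def valid(order):
    """An order is valid if every op runs after all of its inputs."""
    seen = set()
    for op in order:
        if any(src not in seen for src in ops[op][0]):
            return False
        seen.add(op)
    return True

def peak_memory(order):
    """Peak sum of simultaneously live buffers for a given schedule."""
    uses = {n: sum(n in ins for ins, _ in ops.values()) for n in ops}
    live, peak = {}, 0
    for op in order:
        ins, out_size = ops[op]
        live[op] = out_size               # inputs and output coexist
        peak = max(peak, sum(live.values()))
        for t in ins:
            uses[t] -= 1
            if uses[t] == 0:
                del live[t]               # free after the last consumer
    return peak

best = min((o for o in permutations(ops) if valid(o)), key=peak_memory)
# Finishing chain a->b before starting c->d peaks at 120 bytes, while
# interleaving the two chains ("a", "c", "b", "d", "e") peaks at 210.
```

Real deployments replace the brute-force `min` with heuristics or dynamic programming, since the number of topological orders grows exponentially with graph size.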

Machine Learning for Microcontroller-Class Hardware – A Review [article]

Swapnil Sayan Saha, Sandeep Singh Sandha, Mani Srivastava
2022 arXiv   pre-print
Conventional machine learning deployment has a high memory and compute footprint, hindering direct deployment on ultra resource-constrained microcontrollers.  ...  This paper highlights the unique requirements of enabling onboard machine learning for microcontroller-class devices.  ...  Similarly, SpArSe [86] attempted to run deep neural networks, rather than non-neural models, on microcontrollers to broaden the application spectrum of AI-IoT.  ... 
arXiv:2205.14550v3 fatcat:y272riitirhwfgfiotlwv5i7nu

Multi-Component Optimization and Efficient Deployment of Neural-Networks on Resource-Constrained IoT Hardware [article]

Bharath Sudharsan, Dineshkumar Sundaram, Pankesh Patel, John G. Breslin, Muhammad Intizar Ali, Schahram Dustdar, Albert Zomaya, Rajiv Ranjan
2022 arXiv   pre-print
that can comfortably fit and execute on resource-constrained hardware.  ...  On such resource-constrained devices, manufacturers still manage to provide attractive functionalities (to boost sales) by following the traditional approach of programming IoT devices/products to collect  ...  Neural Networks vs Resource-constrained MCUs.  ... 
arXiv:2204.10183v1 fatcat:7yelkcwgdvcg5n4t4tmwymsln4

MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers [article]

Colby Banbury, Chuteng Zhou, Igor Fedorov, Ramon Matas Navarro, Urmish Thakker, Dibakar Gope, Vijay Janapa Reddi, Matthew Mattina, Paul N. Whatmough
2021 arXiv   pre-print
Executing machine learning workloads locally on resource constrained microcontrollers (MCUs) promises to drastically expand the application space of IoT.  ...  However, so-called TinyML presents severe technical challenges, as deep neural network inference demands a large compute and memory budget.  ...  ACKNOWLEDGEMENTS This work was sponsored in part by the ADA (Applications Driving Architectures) Center.  ... 
arXiv:2010.11267v6 fatcat:cte3gwj2wnh3nlg3rnonvbpazu

MoTransFrame: Model Transfer Framework for CNNs on Low-Resource Edge Computing Node

Panyu Liu, Huilin Ren, Xiaojun Shi, Yangyang Li, Zhiping Cai, Fang Liu, Huacheng Zeng
2020 Computers Materials & Continua  
Instead of designing a model compression algorithm with a high compression ratio, MoTransFrame can transplant popular convolutional neural network models to resource-starved edge devices promptly and  ...  However, the lack of resources in edge terminal equipment makes it difficult to run deep learning algorithms that require more memory and computing power.  ...  For example, it can compress AlexNet by as much as 35 times while having minimal impact on accuracy.  ... 
doi:10.32604/cmc.2020.010522 fatcat:epa7rwcbiredbgcnipseiuiyum

EAST: Encoding-Aware Sparse Training for Deep Memory Compression of ConvNets [article]

Matteo Grimaldi, Valentino Peluso, Andrea Calimera
2019 arXiv   pre-print
The implementation of Deep Convolutional Neural Networks (ConvNets) on tiny end-nodes with limited non-volatile memory space calls for smart compression strategies capable of shrinking the footprint yet  ...  This work addresses the issue by introducing EAST, Encoding-Aware Sparse Training, a novel memory-constrained training procedure that leads quantized ConvNets towards deep memory compression.  ...  units (MCU) with limited memory and computational resources.  ... 
arXiv:1912.10087v1 fatcat:rybqdermwjfr5mkbtzoqb3adrm
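The deep-compression goal EAST pursues can be grounded with a generic magnitude-pruning step (a simplified stand-in, not EAST's encoding-aware training procedure; the example weights are made up): zeroing the smallest-magnitude weights yields a sparse tensor whose nonzeros, stored with a compact encoding, shrink the non-volatile-memory footprint on the end-node.

```python
def magnitude_prune(weights, sparsity):
    """Zero the `sparsity` fraction of smallest-magnitude weights."""
    k = int(len(weights) * sparsity)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:k]:       # the k smallest magnitudes become zeros
        pruned[i] = 0.0
    return pruned

w = [0.5, -0.1, 2.0, 0.05, -1.5, 0.3]
print(magnitude_prune(w, 0.5))  # [0.5, 0.0, 2.0, 0.0, -1.5, 0.0]
```

EAST's point, per the snippet, is that training should be aware of how the sparse tensor will be encoded, so that the zeros it creates actually translate into flash savings rather than scattered nonzeros that encode poorly.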

Optimality Assessment of Memory-Bounded ConvNets Deployed on Resource-Constrained RISC Cores

Matteo Grimaldi, Valentino Peluso, Andrea Calimera
2019 IEEE Access  
A cost-effective implementation of Convolutional Neural Nets on the mobile edge of the Internet-of-Things (IoT) requires smart optimizations to fit large models into memory-constrained cores.  ...  The objective of this work is to make an assessment of such memory-bounded implementations and to show that most of them are centred on specific parameter settings that prove difficult to implement  ...  INTRODUCTION AND MOTIVATIONS Most IoT applications run Deep Convolutional Neural Networks (ConvNets hereafter) in the cloud, public or private depending on the context.  ... 
doi:10.1109/access.2019.2948577 fatcat:6ckfv72yxjd2ppbct7zwe4ahwm

Robustifying the Deployment of tinyML Models for Autonomous Mini-Vehicles

Miguel de Prado, Manuele Rusci, Alessandro Capotondi, Romain Donze, Luca Benini, Nuria Pazos
2021 Sensors  
When running the family of tinyCNNs, our solution on GAP8 outperforms any other implementation on the STM32L4 and NXP k64f (traditional single-core MCUs), reducing the latency by over 13× and the  ...  Standard-sized autonomous vehicles have rapidly improved thanks to the breakthroughs of deep learning.  ...  In the context of resource-constrained MCUs, several software stacks have been introduced to address the severe limitations in terms of computational and memory resources.  ... 
doi:10.3390/s21041339 pmid:33668645 pmcid:PMC7918899 fatcat:krfqzoi5pnbnfnyn3gndzj7j3q

MCUNet: Tiny Deep Learning on IoT Devices [article]

Ji Lin, Wei-Ming Chen, Yujun Lin, John Cohn, Chuang Gan, Song Han
2020 arXiv   pre-print
Machine learning on tiny IoT devices based on microcontroller units (MCU) is appealing but challenging: the memory of microcontrollers is 2-3 orders of magnitude smaller even than that of mobile phones.  ...  TinyNAS adopts a two-stage neural architecture search approach that first optimizes the search space to fit the resource constraints, then specializes the network architecture in the optimized search space  ...  Acknowledgments We thank the MIT Satori cluster for providing the computation resource.  ... 
arXiv:2007.10319v2 fatcat:y55nauwvvvgrljqzdso76agmqm
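The two-stage search the MCUNet snippet describes can be caricatured in a few lines (the cost model, budgets, and candidate grid below are invented; TinyNAS's actual search space and estimator differ): stage one discards configurations whose estimated SRAM or flash use exceeds the MCU budget, and stage two searches only within the surviving subset.

```python
import itertools

SRAM_BUDGET = 320 * 1024     # bytes of on-chip RAM (assumed budget)
FLASH_BUDGET = 1024 * 1024   # bytes of flash (assumed budget)

def estimate(width_mult, resolution):
    """Toy cost model: activations scale with resolution^2 * width,
    weights with width^2 (invented constants, for illustration only)."""
    sram = int(64 * resolution ** 2 * width_mult)
    flash = int(400_000 * width_mult ** 2)
    return sram, flash

# Stage 1: keep only (width multiplier, input resolution) pairs that fit.
feasible = []
for w, r in itertools.product([0.35, 0.5, 0.75, 1.0], [96, 128, 160, 224]):
    sram, flash = estimate(w, r)
    if sram <= SRAM_BUDGET and flash <= FLASH_BUDGET:
        feasible.append((w, r))

# Stage 2 (not shown): run architecture search over `feasible` only.
print(feasible)  # [(0.35, 96), (0.5, 96)]
```

Shrinking the space first means the expensive search never evaluates architectures that could not be deployed anyway.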

Robust and Energy-efficient PPG-based Heart-Rate Monitoring [article]

Matteo Risso, Alessio Burrello, Daniele Jahier Pagliari, Simone Benatti, Enrico Macii, Luca Benini, Massimo Poncino
2022 arXiv   pre-print
A wrist-worn PPG sensor coupled with a lightweight algorithm can run on an MCU to enable non-invasive and comfortable monitoring, but ensuring robust PPG-based heart-rate monitoring in the presence of motion  ...  Moreover, their deployment on MCU-based edge nodes has not been investigated.  ...  Moreover, to further reduce the memory footprint for deploying our TCNs on resource-constrained MCUs, we also apply full-integer post-training quantization to the MorphNet outputs, converting them from  ... 
arXiv:2203.16339v1 fatcat:czsisy3wenbyvkety6aaa3b2jm
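The full-integer post-training quantization mentioned in the snippet can be sketched with the standard affine int8 scheme (a generic textbook formulation, not necessarily the authors' exact pipeline): each float is mapped to an 8-bit integer through a scale and zero-point derived from the observed value range.

```python
def quantize_params(xmin, xmax, qmin=-128, qmax=127):
    """Derive scale/zero-point so [xmin, xmax] maps onto the int8 range."""
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)  # range must contain 0
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = round(qmin - xmin / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Map a float to a clamped int8 code."""
    return max(qmin, min(qmax, round(x / scale) + zero_point))

def dequantize(q, scale, zero_point):
    """Recover the approximate float a code represents."""
    return (q - zero_point) * scale

scale, zp = quantize_params(-1.0, 1.0)
q = quantize(0.5, scale, zp)
# dequantize(q, scale, zp) recovers 0.5 to within one quantization step
```

Storing weights and activations as int8 cuts memory roughly 4x versus float32 and lets integer-only MCU kernels execute the network.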

Robustifying the Deployment of tinyML Models for Autonomous mini-vehicles

Miguel de Prado, Manuele Rusci, Romain Donze, Alessandro Capotondi, Serge Monnerat, Luca Benini, Nuria Pazos
2021 2021 IEEE International Symposium on Circuits and Systems (ISCAS)  
Robustifying the Deployment of tinyML Models for Autonomous Mini-Vehicles. Sensors  ...  Conflicts of Interest: The authors declare no conflict of interest.  ...  The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.  ... 
doi:10.1109/iscas51556.2021.9401154 fatcat:cqieoaxwm5cwxhgbellngethze

Pruning In Time (PIT): A Lightweight Network Architecture Optimizer for Temporal Convolutional Networks [article]

Matteo Risso, Alessio Burrello, Daniele Jahier Pagliari, Francesco Conti, Lorenzo Lamberti, Enrico Macii, Luca Benini, Massimo Poncino
2022 arXiv   pre-print
Temporal Convolutional Networks (TCNs) are promising Deep Learning models for time-series processing tasks.  ...  One key feature of TCNs is time-dilated convolution, whose optimization requires extensive experimentation.  ...  PIT is able to generate improved versions of existing state-of-the-art architectures, with a compression of up to 54% with negligible accuracy drop, enabling their efficient deployment on resource-constrained  ... 
arXiv:2203.14768v1 fatcat:rcvo2mn2lrch7hwdoky372sf3q
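Time-dilated convolution, the TCN building block the snippet refers to, has a compact definition (generic formulation, separate from PIT's optimizer itself): a dilation d skips d-1 samples between kernel taps, so the receptive field grows as (k-1)·d+1 without adding any weights.

```python
def dilated_conv1d(x, kernel, dilation):
    """Causal 1-D convolution with dilated taps (no padding)."""
    k = len(kernel)
    span = (k - 1) * dilation          # receptive field minus one
    out = []
    for t in range(span, len(x)):      # causal: uses only past samples
        out.append(sum(kernel[j] * x[t - (k - 1 - j) * dilation]
                       for j in range(k)))
    return out

print(dilated_conv1d([1, 2, 3, 4, 5, 6], [1, 1], 2))  # [4, 6, 8, 10]
```

Because accuracy depends jointly on kernel size and dilation at every layer, tuning them by hand requires the extensive experimentation that PIT automates.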

Dynamic ConvNets on Tiny Devices via Nested Sparsity [article]

Matteo Grimaldi, Luca Mocerino, Antonio Cipolletta, Andrea Calimera
2022 arXiv   pre-print
This work introduces a new training and compression pipeline to build Nested Sparse ConvNets, a class of dynamic Convolutional Neural Networks (ConvNets) suited for inference tasks deployed on resource-constrained  ...  A Nested Sparse ConvNet consists of a single ConvNet architecture containing N sparse sub-networks with nested weights subsets, like a Matryoshka doll, and can trade accuracy for latency at run time, using  ...  In many IoT applications, the end-nodes are lightweight devices powered by tiny Micro Controller Units (MCUs), characterized by small form factor, minimal storage and memory resources, i.e., few MBs of  ... 
arXiv:2203.03324v1 fatcat:vcdq3h3ljzf6jcfoan7zdllphu
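The Matryoshka-style nesting described above can be made concrete with a small sketch (magnitude-ranked nested keep-sets; the paper's actual training and compression pipeline is more involved): sub-network i keeps the top-k_i weights by magnitude, and because the keep-sets are nested, a single stored model serves several accuracy/latency operating points.

```python
def nested_masks(weights, keep_fracs):
    """Build nested index sets: each mask is a superset of the previous."""
    order = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))
    masks = []
    for frac in sorted(keep_fracs):            # ascending keep fraction
        k = int(len(weights) * frac)
        masks.append(set(order[:k]))           # top-k by magnitude
    return masks

def subnet(weights, mask):
    """Materialize one sub-network by zeroing weights outside the mask."""
    return [w if i in mask else 0.0 for i, w in enumerate(weights)]

w = [3.0, -1.0, 0.5, -2.0]
masks = nested_masks(w, [0.25, 0.5, 1.0])
# masks[0] <= masks[1] <= masks[2]; at run time the device picks the
# sparsity level that matches its current latency or energy budget.
```

Since the denser sub-networks reuse the sparser ones' weights, switching operating points costs no extra storage, which is exactly what makes the scheme attractive on MCUs with a few MBs of flash.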

A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

Daniele Palossi, Antonio Loquercio, Francesco Conti, Eric Flamand, Davide Scaramuzza, Luca Benini
2019 IEEE Internet of Things Journal  
Visual navigation based on AI approaches, such as deep neural networks (DNNs), is becoming pervasive for standard-size drones, but is considered out of reach for nano-drones with a size of a few cm^2.  ...  To achieve this goal we developed a complete methodology for parallel execution of complex DNNs directly on board resource-constrained milliwatt-scale nodes.  ...  ACKNOWLEDGMENTS The authors thank Hanna Müller for her contribution in designing the PULP-Shield, Noé Brun for his support in making the camera-holder, and Frank K.  ... 
doi:10.1109/jiot.2019.2917066 fatcat:ogqpf3qzg5hc5hph6cgdccivxu
Showing results 1 — 15 out of 100 results