
Improved Techniques for Quantizing Deep Networks with Adaptive Bit-Widths [article]

Ximeng Sun, Rameswar Panda, Chun-Fu Chen, Naigang Wang, Bowen Pan, Kailash Gopalakrishnan, Aude Oliva, Rogerio Feris, Kate Saenko
2021 arXiv   pre-print
First, we propose a collaborative strategy to choose a high-precision teacher for transferring knowledge to the low-precision student while jointly optimizing the model with all bit-widths.  ...  Second, to effectively transfer knowledge, we develop a dynamic block swapping method by randomly replacing the blocks in the lower-precision student network with the corresponding blocks in the higher-precision  ...  CoQuant achieves the best overall performance ∆B among all-at-once quantization methods.  ... 
arXiv:2103.01435v3 fatcat:qukv5qgpxjfxnixlor6blwfbi4
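
The excerpt above describes distilling from a higher-precision "teacher" copy of the network into lower-precision "students" while all bit-widths are optimized jointly. A minimal sketch of that general idea (not the paper's actual CoQuant or block-swapping algorithm), assuming uniform symmetric fake quantization with a straight-through estimator and an illustrative single-layer model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Uniform symmetric fake quantization with a straight-through estimator."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.detach().abs().max() / qmax
    q = torch.clamp(torch.round(w / scale), -qmax, qmax) * scale
    return w + (q - w).detach()           # forward uses q, backward uses identity

class QuantLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * 0.02)
    def forward(self, x, bits):
        return x @ fake_quantize(self.weight, bits).t()

model, bit_widths = QuantLinear(16, 10), [8, 4, 2]
x, y = torch.randn(32, 16), torch.randint(0, 10, (32,))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# One joint step: the highest bit-width acts as the teacher for the lower ones.
teacher_logits = model(x, bits=max(bit_widths))
loss = F.cross_entropy(teacher_logits, y)
for b in bit_widths[1:]:
    student_logits = model(x, bits=b)
    loss = loss + F.kl_div(F.log_softmax(student_logits, -1),
                           F.softmax(teacher_logits.detach(), -1),
                           reduction="batchmean")
opt.zero_grad()
loss.backward()
opt.step()
```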

Edge-Cloud Polarization and Collaboration: A Comprehensive Survey [article]

Jiangchao Yao, Shengyu Zhang, Yang Yao, Feng Wang, Jianxin Ma, Jianwei Zhang, Yunfei Chu, Luo Ji, Kunyang Jia, Tao Shen, Anpeng Wu, Fengda Zhang (+6 others)
2021 arXiv   pre-print
We also discuss the potential and practical experiences of some ongoing advanced edge AI topics including pretraining models, graph neural networks and reinforcement learning.  ...  Influenced by the great success of deep learning via cloud computing and the rapid development of edge chips, research in artificial intelligence (AI) has shifted to both of the computing paradigms, i.e.  ...  G-META [322] uses local subgraphs to transfer subgraph-specific information and learn transferable knowledge faster via meta gradients with only a handful of nodes or edges in the new task.  ... 
arXiv:2111.06061v2 fatcat:qhbyomrom5ghvikjlqkqb7eayq

Compressive Linear Network Coding For Efficient Data Collection In Wireless Sensor Networks

Francesca Bassi, Lana Iwaza, Michel Kieffer, Chao Liu
2012 Zenodo  
Data transmission: The nodes collaborate to relay the packets containing quantized measurements to the sink. This is achieved via linear network coding [14].  ...  The MAP estimation (4) performed via explicit enumeration of all elements of X (y) is only tractable when n, m, and q are very small, which does not hold in large sensor networks.  ... 
doi:10.5281/zenodo.52097 fatcat:sun4r6dp2rg2fl4fgp5lmpju7i
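
The excerpt above describes relaying quantized sensor measurements to the sink as linear combinations (linear network coding), with recovery at the sink. A toy sketch of the generic pipeline, assuming real-valued coding coefficients and least-squares recovery rather than the paper's MAP decoder:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                   # number of sensor nodes
step = 0.1                               # quantization step for the measurements

x = rng.normal(size=n)                   # raw sensor readings
x_q = step * np.round(x / step)          # uniformly quantized measurements

# Each packet reaching the sink is a random linear combination of the
# quantized measurements (the role played by linear network coding).
m = 20                                   # number of innovative packets received
A = rng.normal(size=(m, n))              # coding coefficients known to the sink
y = A @ x_q

# With enough innovative combinations the sink can invert the coding matrix;
# with fewer, a sparsity-aware (compressive-sensing) decoder would be used instead.
x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
print("max reconstruction error:", np.max(np.abs(x_hat - x_q)))
```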

On-device Learning Systems for Edge Intelligence: A Software and Hardware Synergy Perspective

Qihua Zhou Et Al.
2020 Zenodo  
This survey presents a software and hardware synergy of on-device learning techniques, covering the scope of model-level neural network design, algorithm-level training optimization and hardware-level  ...  After correctly determining the scale parameter, the weights and biases can be easily controlled by quantizing them just once at the end of model training.  ...  However, the QAT scheme trains a quantized model from scratch and achieves higher accuracy, and it is often combined with fine-tuning and transfer learning techniques.  ... 
doi:10.5281/zenodo.5105970 fatcat:v64lgelerfcefc4ztm6uz5o7tu
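
The excerpt above contrasts quantization-aware training with the simpler route of determining a scale parameter and quantizing the weights once at the end of training. A minimal sketch of that post-training step, assuming per-tensor symmetric INT8 quantization:

```python
import numpy as np

def quantize_weights_int8(w: np.ndarray):
    """Post-training, per-tensor symmetric INT8 quantization of a weight matrix."""
    scale = np.abs(w).max() / 127.0          # scale chosen once, after training
    w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return w_int8, scale

def dequantize(w_int8: np.ndarray, scale: float) -> np.ndarray:
    return w_int8.astype(np.float32) * scale

w = np.random.randn(256, 128).astype(np.float32)     # trained FP32 weights
w_int8, scale = quantize_weights_int8(w)
w_rec = dequantize(w_int8, scale)
print("int8 storage:", w_int8.nbytes, "bytes, fp32:", w.nbytes, "bytes")
print("mean abs error:", np.abs(w - w_rec).mean())
```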

Communication Efficiency in Federated Learning: Achievements and Challenges [article]

Osama Shahid, Seyedamin Pouriyeh, Reza M. Parizi, Quan Z. Sheng, Gautam Srivastava, Liang Zhao
2021 arXiv   pre-print
A challenge that exists in FL is the communication cost, as FL takes place in a distributed environment where devices connected over the network have to constantly share their updates; this can create  ...  Factors such as end-user network connections operate at substantially lower rates than the network connections available at a data center.  ...  Once all the device training is complete and the model parameters are obtained, each device uploads its local model back to the central server.  ... 
arXiv:2107.10996v1 fatcat:7cyelxjnbjhczm27gknbwcmlyy
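
The excerpt above describes the communication pattern the survey analyzes: each device trains locally and uploads its full local model to the central server every round. A toy FedAvg-style sketch of one such round (the local "training" is a stand-in and all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, num_devices = 10, 5
global_model = np.zeros(dim)

def local_train(model, steps=10, lr=0.1):
    """Stand-in for on-device training: a few gradient steps toward a device-specific target."""
    target = rng.normal(size=model.shape)          # pretend local optimum
    w = model.copy()
    for _ in range(steps):
        w -= lr * (w - target)                     # gradient of 0.5 * ||w - target||^2
    return w, int(rng.integers(50, 200))           # local model and local sample count

for round_idx in range(3):
    # Every device uploads its full local model each round: the cost the survey targets.
    uploads = [local_train(global_model) for _ in range(num_devices)]
    total = sum(n for _, n in uploads)
    global_model = sum(w * (n / total) for w, n in uploads)
    bytes_up = num_devices * global_model.nbytes
    print(f"round {round_idx}: {bytes_up} bytes uploaded, "
          f"model norm {np.linalg.norm(global_model):.3f}")
```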

A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration

Deepak Ghimire, Dayoung Kil, Seong-heum Kim
2022 Electronics  
In this review, to improve the efficiency of deep learning research, we focus on three aspects: quantized/binarized models, optimized architectures, and resource-constrained systems.  ...  The learning capability of convolutional neural networks (CNNs) originates from a combination of various feature extraction layers that fully utilize a large amount of data.  ...  In surveying efficient CNN architectures and hardware acceleration, we are again deeply grateful to all the researchers for their contributions to our science.  ... 
doi:10.3390/electronics11060945 fatcat:bxxgccwkujatzh4onkzh5lgspm

Can collaborative learning be private, robust and scalable? [article]

Dmitrii Usynin, Helena Klause, Daniel Rueckert, Georgios Kaissis
2022 arXiv   pre-print
Once the models are trained, we then quantize them using the validation dataset (public) to tune the quantization parameters.  ...  It is of note that other approaches can, arguably, be applicable when discussing robustness of collaboratively trained models, such as train-time quantization or quantization-aware training.  ...  alternative training via a soft-quantization network with noisy-natural samples only.  ... 
arXiv:2205.02652v1 fatcat:cahf5qta4rdjxfrowposizzm4m
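
The excerpt above mentions quantizing already-trained models and tuning the quantization parameters on a public validation set. A generic calibration sketch, assuming asymmetric 8-bit quantization with percentile clipping (the concrete choices are illustrative, not the paper's):

```python
import numpy as np

def calibrate_affine(calibration_activations, num_bits=8, clip_pct=99.9):
    """Derive scale and zero-point for asymmetric quantization from held-out activations."""
    flat = np.concatenate([a.ravel() for a in calibration_activations])
    lo, hi = np.percentile(flat, 100 - clip_pct), np.percentile(flat, clip_pct)
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (hi - lo) / (qmax - qmin)
    zero_point = int(np.clip(round(qmin - lo / scale), qmin, qmax))
    return scale, zero_point

def quantize(a, scale, zero_point, num_bits=8):
    q = np.round(a / scale) + zero_point
    return np.clip(q, 0, 2 ** num_bits - 1).astype(np.uint8)

rng = np.random.default_rng(0)
val_batches = [rng.normal(1.0, 2.0, size=(32, 64)) for _ in range(10)]   # "public" validation data
scale, zp = calibrate_affine(val_batches)
test_act = rng.normal(1.0, 2.0, size=(32, 64))
q = quantize(test_act, scale, zp)
deq = (q.astype(np.float32) - zp) * scale
print("scale=%.4f zero_point=%d, mean abs err=%.4f"
      % (scale, zp, np.abs(test_act - deq).mean()))
```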

The Possibility of Combining and Implementing Deep Neural Network Compression Methods

Bratislav Predić, Uroš Vukić, Muzafer Saračević, Darjan Karabašević, Dragiša Stanujkić
2022 Axioms  
INT8 quantization and knowledge distillation also led to a significant decrease in the model execution time.  ...  In the paper, the possibility of combining deep neural network (DNN) model compression methods to achieve better compression results was considered.  ...  In cases where the main goal is to reduce the model size on disk (e.g., for transfer over a network), all the compression methods (pruning, quantization, and weight clustering) can be used.  ... 
doi:10.3390/axioms11050229 fatcat:ug3lzisyevexnoan6kgtli3eoq
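
The excerpt above notes that pruning, quantization, and weight clustering can be combined when the goal is shrinking the model for storage or transfer. A toy sketch of chaining magnitude pruning, k-means-style weight clustering, and low-bit index storage on a single weight matrix (illustrative only, not the paper's exact pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 128)).astype(np.float32)

# 1) Magnitude pruning: zero out the smallest 70% of weights.
threshold = np.quantile(np.abs(w), 0.70)
w_pruned = np.where(np.abs(w) >= threshold, w, 0.0)

# 2) Weight clustering: replace surviving weights by 16 shared centroids (tiny k-means).
nonzero = w_pruned[w_pruned != 0]
centroids = np.quantile(nonzero, np.linspace(0, 1, 16))      # init on the value distribution
for _ in range(10):
    assign = np.abs(nonzero[:, None] - centroids[None, :]).argmin(axis=1)
    for k in range(16):
        if np.any(assign == k):
            centroids[k] = nonzero[assign == k].mean()

# 3) Store each surviving weight as a 4-bit cluster index (kept in uint8 here for simplicity),
#    plus the 16 fp32 centroids and a bit-packed sparsity mask.
indices = np.abs(w_pruned[..., None] - centroids).argmin(axis=-1).astype(np.uint8)
mask = (w_pruned != 0)
w_rec = np.where(mask, centroids[indices], 0.0)
print("dense fp32 bytes:", w.nbytes, "| approx compressed bytes:",
      mask.sum() // 2 + centroids.nbytes + np.packbits(mask).nbytes)
print("mean abs error on kept weights:", np.abs(w_rec[mask] - w[mask]).mean())
```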

Communication Learning via Backpropagation in Discrete Channels with Unknown Noise

Benjamin Freed, Guillaume Sartoretti, Jiaheng Hu, Howie Choset
2020 PROCEEDINGS OF THE THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND THE TWENTY-EIGHTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE  
We demonstrate the effectiveness of our approach in two example multi-robot tasks: a path-finding problem and a collaborative search problem.  ...  To the best of our knowledge, this work presents the first differentiable communication learning approach that can compute unbiased derivatives through channels with unknown noise.  ...  Both actor and critic networks for all tasks are composed of a convolutional stack followed by two fully-connected layers.  ... 
doi:10.1609/aaai.v34i05.6205 fatcat:m745yruc25hu3df72fpiknprra
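
The entry above claims unbiased derivatives through a discrete channel with unknown noise. As a point of reference only (this is a generic score-function/REINFORCE baseline, not necessarily the paper's estimator), the sketch below obtains an unbiased sender gradient through a sampled symbol and a noisy channel:

```python
import torch
import torch.nn.functional as F

# Toy setup: a sender maps an observation to a distribution over K discrete symbols,
# a noisy channel sometimes replaces the symbol, and a receiver is scored on
# reconstructing the observation class from the received symbol.
K, batch = 8, 64
sender = torch.nn.Linear(16, K)
receiver = torch.nn.Linear(K, 8)
opt = torch.optim.Adam(list(sender.parameters()) + list(receiver.parameters()), lr=1e-2)

obs = torch.randn(batch, 16)
labels = torch.randint(0, 8, (batch,))

logits = sender(obs)
dist = torch.distributions.Categorical(logits=logits)
symbol = dist.sample()                                   # discrete, non-differentiable step

# Unknown channel noise: with probability 0.1 the symbol is replaced by a random one.
noise = torch.rand(batch) < 0.1
received = torch.where(noise, torch.randint(0, K, (batch,)), symbol)

recon_logits = receiver(F.one_hot(received, K).float())
per_example_loss = F.cross_entropy(recon_logits, labels, reduction="none")

# Score-function (REINFORCE) term: an unbiased gradient for the sender despite
# the discrete sampling and the unknown channel.
sender_loss = (per_example_loss.detach() * dist.log_prob(symbol)).mean()
receiver_loss = per_example_loss.mean()
opt.zero_grad()
(sender_loss + receiver_loss).backward()
opt.step()
```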

Knowledge distillation in deep learning and its applications

Abdolmaged Alkhulaifi, Fahad Alsahli, Irfan Ahmad
2021 PeerJ Computer Science  
In this paper, we present an outlook of knowledge distillation techniques applied to deep learning models.  ...  One possible solution is knowledge distillation whereby a smaller model (student model) is trained by utilizing the information from a larger model (teacher model).  ...  Another ensemble knowledge distillation method was proposed by Guo et al. (2020) named Knowledge Distillation via Collaborative Learning (KDCL).  ... 
doi:10.7717/peerj-cs.474 pmid:33954248 pmcid:PMC8053015 fatcat:d77srjjrrrdhhpij6q7ltxl45m
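
The entry above surveys knowledge distillation, where a smaller student is trained on the outputs of a larger teacher. A minimal sketch of the standard soft-target distillation loss (temperature and weighting are illustrative defaults, not values from the survey):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-target distillation: KL to the teacher's softened outputs
    plus the usual cross-entropy on the ground-truth labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: a large teacher and a small student on random data.
teacher = torch.nn.Sequential(torch.nn.Linear(32, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10))
student = torch.nn.Sequential(torch.nn.Linear(32, 16), torch.nn.ReLU(), torch.nn.Linear(16, 10))
x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))
with torch.no_grad():
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
```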

Virtual Codec Supervised Re-Sampling Network for Image Compression [article]

Lijun Zhao, Huihui Bai, Anhong Wang, Yao Zhao
2018 arXiv   pre-print
At the encoder, the quantized vectors or coefficients are losslessly compressed by arithmetic coding.  ...  At the receiver, the decoded vectors are utilized to restore the input image via the image decoder network (IDN).  ...  Different from the VCN network, the IDN network works to restore the input image from the quantized re-sampled vectors Z so that the user can receive a high-quality image Ĩ at the decoder.  ... 
arXiv:1806.08514v2 fatcat:roos7bbw6rhexko6gnc4dvyfau
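
The excerpt above describes quantizing the re-sampled representation and compressing it losslessly with arithmetic coding before a decoder network restores the image. A small sketch that quantizes a stand-in representation and estimates the lossless-coding cost from its empirical entropy (which an arithmetic coder approaches); all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the re-sampled representation produced by an encoder network.
z = rng.normal(size=4096)

# Uniform scalar quantization of the representation.
step = 0.5
symbols = np.round(z / step).astype(np.int32)

# An arithmetic coder approaches the empirical entropy of the symbol stream,
# so the entropy is a good estimate of the lossless-coding cost.
_, counts = np.unique(symbols, return_counts=True)
p = counts / counts.sum()
bits_per_symbol = -(p * np.log2(p)).sum()
print(f"estimated rate: {bits_per_symbol:.2f} bits/symbol, "
      f"~{bits_per_symbol * symbols.size / 8:.0f} bytes for {symbols.size} symbols")

# The decoder would de-quantize and feed the result to a restoration network.
z_hat = symbols * step
print("quantization MSE:", np.mean((z - z_hat) ** 2))
```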

From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks [article]

Seyyedali Hosseinalipour and Christopher G. Brinton and Vaneet Aggarwal and Huaiyu Dai and Mung Chiang
2020 arXiv   pre-print
This migrates from star network topologies used for parameter transfers in federated learning to more distributed topologies at scale.  ...  It accounts for the topology structures of the local networks among the heterogeneous nodes at each network layer, orchestrating them for collaborative/cooperative learning through device-to-device (D2D  ...  Moreover, fog learning introduces collaborative/cooperative model training via D2D communications among the devices at different layers of the network hierarchy. III.  ... 
arXiv:2006.03594v3 fatcat:mpcav4qexvgwdmnvvr4qzuiblm
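
The excerpt above describes replacing the star topology of federated learning with hierarchical, D2D-assisted aggregation across network layers. A toy sketch of two-stage averaging, assuming devices first average within local clusters and only cluster heads use the long-range uplink:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
clusters = [4, 3, 5]                       # devices per local cluster (e.g. D2D groups)

# Each device holds a locally trained model (a random vector as a stand-in).
local_models = [[rng.normal(size=dim) for _ in range(n)] for n in clusters]

# Stage 1: intra-cluster aggregation over D2D links; only the cluster head
# needs a long-range uplink afterwards.
cluster_models = [np.mean(models, axis=0) for models in local_models]
cluster_sizes = np.array(clusters)

# Stage 2: the server combines the cluster heads, weighting by cluster size,
# which equals the flat (star-topology) average of all devices.
global_model = np.average(cluster_models, axis=0, weights=cluster_sizes)

flat_average = np.mean([m for models in local_models for m in models], axis=0)
print("uplink messages: star =", cluster_sizes.sum(), "| hierarchical =", len(clusters))
print("max deviation from flat average:", np.max(np.abs(global_model - flat_average)))
```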

LadaBERT: Lightweight Adaptation of BERT through Hybrid Model Compression [article]

Yihuan Mao, Yujing Wang, Chufan Wu, Chen Zhang, Yang Wang, Yaming Yang, Quanlu Zhang, Yunhai Tong, Jing Bai
2020 arXiv   pre-print
Existing solutions leverage the knowledge distillation framework to learn a smaller model that imitates the behaviors of BERT.  ...  However, the training procedure of knowledge distillation is expensive itself as it requires sufficient training data to imitate the teacher model.  ...  For illustration, we consider the matrix initialized by real pretrained BERT weights, and the pruning process is done at once.  ... 
arXiv:2004.04124v2 fatcat:sftc2oxxeff6bofyweouredjn4
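
The excerpt above mentions hybrid compression applied directly to matrices initialized from pretrained BERT weights, with pruning done at once rather than iteratively. A generic sketch combining low-rank factorization with one-shot magnitude pruning on a single weight matrix (the random matrix is only a size stand-in, and the pipeline is illustrative, not LadaBERT's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(768, 768)).astype(np.float32)   # size stand-in for a pretrained weight matrix

# Low-rank factorization: keep the top-r singular directions, W ~= U_r @ V_r.
r = 64
U, S, Vt = np.linalg.svd(W, full_matrices=False)
U_r = U[:, :r] * S[:r]                                # (768, r)
V_r = Vt[:r, :]                                       # (r, 768)

# One-shot magnitude pruning applied to the factors ("at once", not iteratively).
def prune(M, sparsity=0.5):
    thr = np.quantile(np.abs(M), sparsity)
    return np.where(np.abs(M) >= thr, M, 0.0)

U_p, V_p = prune(U_r), prune(V_r)
W_hat = U_p @ V_p

# Note: on a random matrix the reconstruction error is meaningless; real pretrained
# weight matrices have far more low-rank structure.
orig_params = W.size
kept_params = int((U_p != 0).sum() + (V_p != 0).sum())
print(f"parameters: {orig_params} -> {kept_params} "
      f"({kept_params / orig_params:.1%}), rel. error "
      f"{np.linalg.norm(W - W_hat) / np.linalg.norm(W):.3f}")
```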

Distributed multimedia systems

V.O.K. Li, Wanjiun Liao
1997 Proceedings of the IEEE  
Such a system enhances human communications by exploiting both visual and aural senses and provides the ultimate flexibility in work and entertainment, allowing one to collaborate with remote participants  ...  the OS at the host system and real-time delivery via the network.  ...  All these reading activities happen at the same time. Therefore, the large logical data block transferred is composed of smaller physical data blocks.  ... 
doi:10.1109/5.611116 fatcat:xdmlblqyljglxpr3yeyks65vda

OCTOPUS: Overcoming Performance andPrivatization Bottlenecks in Distributed Learning [article]

Shuo Wang, Surya Nepal, Kristen Moore, Marthie Grobler, Carsten Rudolph, Alsharif Abuadbba
2022 arXiv   pre-print
We introduce a new distributed/collaborative learning scheme to address communication overhead via latent compression, leveraging global data while providing privatization of local data without additional  ...  Federated learning enables distributed participants to collaboratively learn a commonly-shared model while holding data locally.  ...  For complex models (e.g., BERT), quantization-based approaches incur a significant decrease in accuracy. Furthermore, the downstream update part N_C × N_M × N_E will not be compressed at all.  ... 
arXiv:2105.00602v2 fatcat:2avblpysobdl3hmgriss7umypq
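
The excerpt above describes cutting communication overhead by exchanging compressed latent representations rather than raw data or full updates. A minimal sketch of the generic pattern, assuming a toy autoencoder whose low-dimensional code is what would cross the network:

```python
import torch
import torch.nn as nn

class LatentCompressor(nn.Module):
    """Toy autoencoder: only the low-dimensional latent code crosses the network."""
    def __init__(self, d_in=784, d_latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, d_latent))
        self.decoder = nn.Sequential(nn.Linear(d_latent, 256), nn.ReLU(), nn.Linear(256, d_in))
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = LatentCompressor()
x = torch.rand(64, 784)                       # a local batch held by one participant
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)       # trained locally so that z stays informative
loss.backward()

# Communication comparison: sending the latent codes instead of the raw batch.
raw_bytes = x.numel() * 4
latent_bytes = z.numel() * 4
print(f"raw batch: {raw_bytes} B, latent codes: {latent_bytes} B "
      f"({latent_bytes / raw_bytes:.1%} of the raw size)")
```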