8,837 Hits in 5.8 sec

Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better [article]

Sameer Bibikar, Haris Vikalo, Zhangyang Wang, Xiaohan Chen
2021 arXiv   pre-print
Federated learning (FL) enables the distribution of machine learning workloads from the cloud to resource-limited edge devices.  ...  In this paper, we develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST) by which complex neural networks can be deployed and trained with  ...  Wang's research was supported in part by the NSF Real-Time Machine Learning program (Award Number: 2053279), and the NSF AI Institute for Foundations of Machine Learning (IFML).  ... 
arXiv:2112.09824v1 fatcat:4ca6ybzihbe7bfgcp2aj4mlpjy
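
The abstract describes deploying and training sparse networks on resource-limited devices under a federated setup. As a minimal sketch of that idea (not the authors' FedDST algorithm; the function names and the random-mask initialization are assumptions), local updates can be restricted to a fixed binary mask and the masked models averaged by the server:

```python
import numpy as np

def make_mask(shape, sparsity, rng):
    """Random binary mask keeping a (1 - sparsity) fraction of weights."""
    mask = np.zeros(int(np.prod(shape)))
    keep = int(round((1 - sparsity) * mask.size))
    mask[rng.choice(mask.size, keep, replace=False)] = 1.0
    return mask.reshape(shape)

def local_train(weights, mask, lr=0.01, steps=10):
    """Placeholder local update: gradients are applied only where mask == 1."""
    for _ in range(steps):
        grad = np.random.randn(*weights.shape)  # stand-in for a real minibatch gradient
        weights = weights - lr * grad * mask    # update only the active (unmasked) weights
    return weights * mask

def fed_avg(client_weights):
    """Server-side averaging of the sparse client models."""
    return np.mean(client_weights, axis=0)

rng = np.random.default_rng(0)
global_w = np.zeros((128, 64))
mask = make_mask(global_w.shape, sparsity=0.8, rng=rng)
for _ in range(3):                                                    # communication rounds
    updates = [local_train(global_w.copy(), mask) for _ in range(4)]  # 4 clients
    global_w = fed_avg(updates)                                       # only masked entries are non-zero
```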

Dynamic Sparse Training for Deep Reinforcement Learning [article]

Ghada Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy, Peter Stone
2022 arXiv   pre-print
In this paper, we introduce for the first time a dynamic sparse training approach for deep reinforcement learning to accelerate the training process.  ...  The proposed approach trains a sparse neural network from scratch and dynamically adapts its topology to the changing data distribution during training.  ...  Code is available at: https://github.com/GhadaSokar/Dynamic-Sparse-Training-for-Deep-Reinforcement-Learning.  ... 
arXiv:2106.04217v3 fatcat:bk5kpcnnkne7tlxrglhxzzrndm
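
The snippet describes training a sparse network from scratch while dynamically adapting its topology during training. A minimal drop-and-grow step in that spirit (magnitude-based pruning plus random regrowth, as in generic dynamic sparse training; not the paper's exact update rule) might look like:

```python
import numpy as np

def drop_and_grow(weights, mask, drop_fraction=0.2, rng=None):
    """One topology-adaptation step: prune weakest active weights, regrow at random."""
    rng = rng or np.random.default_rng()
    active = np.flatnonzero(mask)
    n_drop = int(drop_fraction * active.size)
    # Drop: zero out the smallest-magnitude active connections.
    drop_idx = active[np.argsort(np.abs(weights.flat[active]))[:n_drop]]
    mask.flat[drop_idx] = 0
    weights.flat[drop_idx] = 0.0
    # Grow: activate the same number of currently inactive connections
    # (for simplicity this may re-select just-dropped positions).
    inactive = np.flatnonzero(mask == 0)
    grow_idx = rng.choice(inactive, n_drop, replace=False)
    mask.flat[grow_idx] = 1
    return weights, mask

rng = np.random.default_rng(0)
w = rng.standard_normal((32, 32))
m = (rng.random((32, 32)) < 0.1).astype(float)   # ~90% sparse layer
w = w * m
w, m = drop_and_grow(w, m, drop_fraction=0.2, rng=rng)
```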

System Optimization in Synchronous Federated Training: A Survey [article]

Zhifeng Jiang, Wei Wang
2021 arXiv   pre-print
The unprecedented demand for collaborative machine learning in a privacy-preserving manner gives rise to a novel machine learning paradigm called federated learning (FL).  ...  Given a sufficient level of privacy guarantees, the practicality of an FL system mainly depends on its time-to-accuracy performance during the training process.  ...  called federated learning (FL) [6].  ... 
arXiv:2109.03999v2 fatcat:oxmq44iuo5eexbjtq7xdj3quq4

Low Precision Decentralized Distributed Training over IID and non-IID Data [article]

Sai Aparna Aketi, Sangamesh Kodge, Kaushik Roy
2022 arXiv   pre-print
The proposed low precision decentralized training decreases computational complexity, memory usage, and communication cost by 4x and compute energy by a factor of 20x, while trading off less than a 1%  ...  However, the practical realization of such on-device training is limited by the communication and compute bottleneck.  ...  The nodes/devices are connected via a weighted, sparse, yet strongly connected graph topology.  ... 
arXiv:2111.09389v2 fatcat:jmh6i5yhybbpdhcdsxolnupjne
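
To make the combination of low-precision communication and a sparse graph topology concrete, here is an illustrative gossip-averaging step in which each node quantizes its model before mixing it with its neighbours; the quantizer and the mixing matrix are assumptions, not the paper's scheme:

```python
import numpy as np

def quantize(x, bits=8):
    """Uniform symmetric quantization of x to the given bit width."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return x
    return np.round(x / scale) * scale

def gossip_step(models, W, bits=8):
    """models: (n_nodes, dim); W: (n_nodes, n_nodes) sparse mixing weights."""
    q = np.stack([quantize(m, bits) for m in models])
    return W @ q  # each node mixes the quantized models of its neighbours

# Ring topology over 4 nodes: each node averages itself and its two neighbours.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
models = np.random.randn(4, 10)
models = gossip_step(models, W, bits=8)
```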

Federated Self-Training for Semi-Supervised Audio Recognition [article]

Vasileios Tsouvalas, Aaqib Saeed, Tanir Ozcelebi
2022 arXiv   pre-print
In this work, we study the problem of semi-supervised learning of audio models via self-training in conjunction with federated learning.  ...  Federated Learning is a distributed machine learning paradigm dealing with decentralized and personal datasets.  ...  Apart from self-training, alternative SSL approaches introduce a loss term, which is computed on unlabeled data, to encourage the model to generalize better to unseen data.  ... 
arXiv:2107.06877v2 fatcat:hf3dr6i3n5c5bmzjkm6f2y3kei
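
The last snippet mentions adding a loss term computed on unlabeled data. A common way to do this is pseudo-labeling, sketched below in PyTorch-style code; the confidence threshold and function name are assumptions, not the paper's method:

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, unlabeled_x, confidence_threshold=0.9):
    """Extra loss term on unlabeled data: confident predictions become pseudo-labels."""
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=-1)
        conf, pseudo_y = probs.max(dim=-1)
        keep = conf >= confidence_threshold           # keep only confident predictions
    if keep.sum() == 0:
        return torch.tensor(0.0)
    logits = model(unlabeled_x[keep])
    return F.cross_entropy(logits, pseudo_y[keep])    # added to the supervised loss
```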

Pre-training Methods in Information Retrieval [article]

Yixing Fan, Xiaohui Xie, Yinqiong Cai, Jia Chen, Xinyu Ma, Xiangsheng Li, Ruqing Zhang, Jiafeng Guo
2022 arXiv   pre-print
Owing to sophisticated pre-training objectives and huge model size, pre-trained models can learn universal language representations from massive textual data, which are beneficial to the ranking task of  ...  In recent years, the resurgence of deep learning has greatly advanced this field and led to a hot topic named NeuIR (i.e., neural information retrieval), especially the paradigm of pre-training methods  ... 
arXiv:2111.13853v3 fatcat:pilemnpphrgv5ksaktvctqdi4y

Ping-pong beam training for reciprocal channels with delay spread

Elisabeth de Carvalho, Jorgen Bach Andersen
2015 2015 49th Asilomar Conference on Signals, Systems and Computers  
The scale mixture representation allows us to formulate the corresponding Type II version of these algorithms, following the hierarchical Bayesian framework of Sparse Bayesian Learning (SBL), and enables  ...  The proposed detector requires no training signals and outperforms conventional covariance-matrix-based detectors, which require training.  ...  Sparse Bayesian Learning (SBL).  ... 
doi:10.1109/acssc.2015.7421451 dblp:conf/acssc/CarvalhoA15 fatcat:mqokuvnh3zg45licnfbgxyvxfu
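
For context on the Type II formulation mentioned in the snippet, the standard Sparse Bayesian Learning evidence-maximization objective (generic notation, not taken from this paper) is:

```latex
% Type II (evidence) maximization in SBL, for y = \Phi x + n with
% n \sim \mathcal{N}(0, \lambda I) and x_i \sim \mathcal{N}(0, \gamma_i):
\hat{\gamma} = \arg\max_{\gamma \ge 0} \; p(y \mid \gamma)
  = \arg\min_{\gamma \ge 0} \; \log\left|\Sigma_y\right| + y^{\top} \Sigma_y^{-1} y,
\qquad \Sigma_y = \lambda I + \Phi \, \mathrm{diag}(\gamma) \, \Phi^{\top}.
```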

MEASUREMENT SCIENCE AND TRAINING

C. Victor Bunderson
1988 ETS Research Report Series  
, implementation, and conduct of training.  ...  The paper is intended to be a discussion-focusing chapter in a forthcoming book sponsored by the American Society of Training and Development in which other chapters will be written by training practitioners  ...  However, measurement science has not yet made a powerful approach to measuring the growth of human competence as it develops over time as a result of learning.  ... 
doi:10.1002/j.2330-8516.1988.tb00319.x fatcat:kdn7yxtgzzfsppr3febtawo5vq

Pre-Trained Models: Past, Present and Future [article]

Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao, Ao Zhang, Liang Zhang, Wentao Han, Minlie Huang (+12 others)
2021 arXiv   pre-print
It is now the consensus of the AI community to adopt PTMs as the backbone for downstream tasks rather than learning models from scratch.  ...  In this paper, we take a deep look into the history of pre-training, especially its special relation with transfer learning and self-supervised learning, to reveal the crucial position of PTMs in the AI  ...  Federation (CCF).  ... 
arXiv:2106.07139v3 fatcat:kn6gk2bg4jecndvlhhvq32x724

Efficient DNN Training with Knowledge-Guided Layer Freezing [article]

Yiding Wang, Decang Sun, Kai Chen, Fan Lai, Mosharaf Chowdhury
2022 arXiv   pre-print
While most existing solutions try to overlap/schedule computation and communication for efficient training, this paper goes one step further by skipping computing and communication through DNN layer freezing  ...  ones, saving their corresponding backward computation and communication.  ...  yet far from convergence.  ... 
arXiv:2201.06227v1 fatcat:b7nvft75knhbdhfs3xr7para5u
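
The abstract's idea is to skip backward computation and communication for layers that have effectively converged. A minimal sketch of such freezing in PyTorch follows; the relative-change test is a stand-in heuristic, not the paper's knowledge-guided criterion:

```python
import torch

def freeze_converged_layers(model, prev_params, tol=1e-3):
    """Freeze parameters whose relative change since the last check is below tol."""
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        prev = prev_params.get(name)
        if prev is not None:
            change = (p.detach() - prev).norm() / (prev.norm() + 1e-12)
            if change < tol:
                p.requires_grad_(False)   # frozen: no more backward pass or gradient sync
        prev_params[name] = p.detach().clone()
```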

Schrödinger's FP: Dynamic Adaptation of Floating-Point Containers for Deep Learning Training [article]

Miloš Nikolić, Enrique Torres Sanchez, Jiahui Wang, Ali Hadi Zadeh, Mostafa Mahmoud, Ameer Abdelhadi, Andreas Moshovos
2022 arXiv   pre-print
We introduce methods to dynamically adjust the size and format of the floating-point containers used to store activations and weights during training.  ...  Quantum Mantissa is a machine-learning-first mantissa compression method that taps into training's gradient descent algorithm to also learn minimal mantissa bitlengths at per-layer granularity, and obtain  ...  This manuscript was previously submitted to the 2022 International Symposium on Computer Architecture.  ... 
arXiv:2204.13666v1 fatcat:puhvlv5fkjfafcchfvzbcnwfc4
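
The snippet describes shrinking floating-point containers by reducing mantissa bit-lengths per layer. As a rough illustration of what truncating mantissa bits does (the learned, gradient-driven bit-length selection described in the abstract is not reproduced here):

```python
import numpy as np

def truncate_mantissa(x, bits):
    """Keep only `bits` of the 23 mantissa bits of float32 values in x."""
    as_int = x.astype(np.float32).view(np.uint32)
    mask = np.uint32(0xFFFFFFFF) << np.uint32(23 - bits)   # clear the low mantissa bits
    return (as_int & mask).view(np.float32)

weights = np.random.randn(4).astype(np.float32)
print(weights, truncate_mantissa(weights, bits=8))   # ~8 mantissa bits retained
```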

Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders [article]

Zahra Atashgahi, Ghada Sokar, Tim van der Lee, Elena Mocanu, Decebal Constantin Mocanu, Raymond Veldhuis, Mykola Pechenizkiy
2021 arXiv   pre-print
This criterion, blended with sparsely connected denoising autoencoders trained with the sparse evolutionary training procedure, derives the importance of all input features simultaneously.  ...  Major complications arise from the recent increase in the amount of high-dimensional data, including high computational costs and memory requirements.  ...  Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization.  ... 
arXiv:2012.00560v2 fatcat:bnb7vtzrabcglexgfjjyis7eke
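
The criterion in the snippet derives feature importance from a sparsely connected autoencoder. An illustrative way to turn surviving input-layer connections into feature scores (a sketch, not the paper's exact criterion) is:

```python
import numpy as np

def feature_importance(input_weights, mask):
    """input_weights, mask: (n_features, n_hidden); score per input feature."""
    return np.abs(input_weights * mask).sum(axis=1)

def select_features(X, input_weights, mask, k):
    """Keep the k input features with the strongest surviving connections."""
    scores = feature_importance(input_weights, mask)
    top_k = np.argsort(scores)[::-1][:k]
    return X[:, top_k], top_k

# Toy usage with random weights standing in for a trained sparse autoencoder.
rng = np.random.default_rng(0)
W = rng.standard_normal((100, 32))
M = (rng.random((100, 32)) < 0.1).astype(float)   # ~90% sparse connectivity
X = rng.standard_normal((5, 100))
X_sel, idx = select_features(X, W, M, k=20)
```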

Optimal Complexity in Decentralized Training [article]

Yucheng Lu, Christopher De Sa
2022 arXiv   pre-print
Decentralization is a promising method of scaling up parallel machine learning systems.  ...  Our lower bound reveals a theoretical gap in known convergence rates of many existing decentralized training algorithms, such as D-PSGD.  ...  Feder Cooper, Jerry Chee, Zheng Li, Ran Xin, Jiaqi Zhang and anonymous reviewers from ICML 2021 for providing valuable feedback on earlier versions of this paper.  ... 
arXiv:2006.08085v4 fatcat:zsbovarrqbgubgnwe3jbqgzaxa
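
The lower bound is stated against algorithms such as D-PSGD. For readers unfamiliar with it, one D-PSGD-style iteration (a local stochastic gradient step followed by neighbour averaging through a mixing matrix) can be sketched as follows; the gradients here are random stand-ins:

```python
import numpy as np

def dpsgd_step(models, W, lr=0.1):
    """models: (n_nodes, dim); W: (n_nodes, n_nodes) doubly-stochastic mixing matrix."""
    grads = np.random.randn(*models.shape)   # stand-in for local minibatch gradients
    return W @ models - lr * grads           # mix with neighbours, then take a local step

# Ring topology: each node averages itself and its two neighbours.
W = np.zeros((4, 4))
for i in range(4):
    W[i, i] = 0.5
    W[i, (i - 1) % 4] = 0.25
    W[i, (i + 1) % 4] = 0.25
models = np.random.randn(4, 10)
for _ in range(5):
    models = dpsgd_step(models, W)
```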

Trends in Vocational Education and Training Research, Vol. II. Proceedings of the European Conference on Educational Research (ECER), Vocational Education and Training Network (VETNET)

VETNET
2019 Zenodo  
VETNET is a network of researchers interested in exploring societal, policy, governance, organisational, institutional and individual factors that shape and explain vocational education, learning and training  ...  The 2019 edition is the second volume on Trends in vocational education and training research.  ...  by the Federal Institute for Vocational Education and Training (BiBB) within the research project "innowas: Innovative Weiterbildung mit Autorensystemen - Stärkung der horizontalen Mobilität in der Produktion" (innovative continuing training with authoring systems, strengthening horizontal mobility in production)  ... 
doi:10.5281/zenodo.3457503 fatcat:h4kprpevnfh7znrs3zas37i65q

Journal of Vocational, Adult and Continuing Education and Training, Volume 3, Issue 1 2020

Joy Papier
2020 Journal of Vocational, Adult and Continuing Education and Training  
Acknowledgements: We would like to thank fellow course facilitator Dr Alan Ralphs, who acted as the critical reader of this article, the Diploma in Higher Education Teaching and Learning  ...  This was evident from what a student respondent pointed out: We observe, with little chance for us to work on the computer, as to learn how to fix the computer.  ...  First, the level of formality is significantly lower at a training company because the curriculum is less detailed, even if the aims are specified; thus, learning happens in a less intentional way.  ... 
doi:10.14426/jovacet.v3i1.136 fatcat:ny4wl6a6ifdslkdrvdufyhmnaq
Showing results 1 — 15 out of 8,837 results