
Quantized Adam with Error Feedback [article]

Congliang Chen, Li Shen, Haozhi Huang, Wei Liu
2021 arXiv   pre-print
that the distributed adaptive gradient method with weight quantization and error-feedback converges to a point related to the quantization level under both the single-worker and multi-worker modes.  ...  Theoretically, in the stochastic nonconvex setting, we show that the distributed adaptive gradient method with gradient quantization and error-feedback converges to a first-order stationary point, and  ...  [48]. 3.1.1 Gradient Quantization. Let Q(x) = x. The quantized generic Adam then reduces to generic Adam with gradient quantization and error-feedback.  ... 
arXiv:2004.14180v2 fatcat:gl3fe5ndtfhghaa72bok37pmx4
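The error-feedback pattern this abstract refers to can be illustrated with a minimal sketch: quantize the gradient plus the residual carried over from the previous step, apply the quantized update, and store the new residual. This is a generic illustration of the idea, not the paper's exact quantized Adam; the uniform quantizer, learning rate, and toy objective are stand-ins.

```python
import numpy as np

def quantize(v, levels=16):
    """Uniform quantizer onto `levels` evenly spaced values in [-max|v|, max|v|]."""
    scale = np.max(np.abs(v)) + 1e-12
    step = 2 * scale / (levels - 1)
    return np.round(v / step) * step

def ef_step(w, grad, error, lr=0.01):
    """One error-feedback step: quantize (gradient + carried error), apply
    the quantized update, and carry the new quantization residual forward."""
    corrected = grad + error        # re-inject what was lost last step
    q = quantize(corrected)         # the compressed message actually applied
    return w - lr * q, corrected - q

w, e = np.ones(4), np.zeros(4)
for _ in range(200):                # minimize ||w||^2 with quantized gradients
    w, e = ef_step(w, 2 * w, e)
```

The carried residual stays bounded over the iterations, which is the usual intuition for why error feedback restores convergence for biased compressors.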

Quantized Compressive Sampling of Stochastic Gradients for Efficient Communication in Distributed Deep Learning

Afshin Abdi, Faramarz Fekri
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
Next, we propose to improve the convergence rate of the distributed training algorithm via a weighted error feedback.  ...  Second, for those approaches for which the compressed SG values are biased, there is no guarantee of learning convergence, and thus error feedback is often required.  ...  feedback method), and hence the learning algorithm would not converge with error feedback.  ... 
doi:10.1609/aaai.v34i04.5706 fatcat:whyzqryf2rf6niiiz4ekw3a644
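A rough sketch of the two ingredients named in the snippet, compressive sampling of the stochastic gradient plus a weighted error feedback, under loudly stated assumptions: the Gaussian sketching matrix, the scaled-sign quantizer, the pseudo-inverse decoder, and the weight `beta` are all illustrative choices, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(0)

def compress(g, A):
    """Compressive sampling: project to a low-dimensional sketch, then quantize coarsely."""
    y = A @ g
    return np.sign(y) * np.mean(np.abs(y))      # scalar-scaled sign quantization

def decompress(y_q, A):
    """Least-squares estimate of the gradient from its quantized sketch."""
    return np.linalg.pinv(A) @ y_q

d, m = 64, 16                       # gradient dim, sketch dim (assumed)
A = rng.standard_normal((m, d)) / np.sqrt(m)
error, beta = np.zeros(d), 0.5      # beta: feedback weight (hypothetical value)

g = rng.standard_normal(d)          # a stochastic gradient
y_q = compress(g + beta * error, A)
g_hat = decompress(y_q, A)          # what the server applies
error = (g + beta * error) - g_hat  # weighted residual carried to the next iteration
```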

8-bit Optimizers via Block-wise Quantization [article]

Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer
2021 arXiv   pre-print
., the exponentially smoothed sum (SGD with momentum) or squared sum (Adam) of past gradient values.  ...  To maintain stability and performance, we combine block-wise quantization with two additional changes: (1) dynamic quantization, a form of non-linear quantization that is precise for both large and small magnitude values  ...  ACKNOWLEDGEMENTS We thank Ari Holtzman, Gabriel Ilharco, Ofir Press, Aditya Kusupati, and Mitchell Wortsman for their valuable feedback.  ... 
arXiv:2110.02861v1 fatcat:x4xwpdorf5gffbqnrdcfkohkuu
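Block-wise quantization, as described here, splits the optimizer state into small blocks and normalizes each block by its own absolute maximum, so a single outlier cannot wreck the dynamic range of the whole tensor. A minimal 8-bit absmax sketch; the paper additionally uses a non-linear "dynamic" code, and the production implementation lives in the bitsandbytes library.

```python
import numpy as np

def blockwise_quantize(x, block=4096):
    """Quantize a flat tensor to int8 with one absmax scale per block."""
    x = x.ravel()
    pad = (-len(x)) % block
    blocks = np.pad(x, (0, pad)).reshape(-1, block)
    scales = np.abs(blocks).max(axis=1, keepdims=True) + 1e-12
    q = np.round(blocks / scales * 127).astype(np.int8)
    return q, scales

def blockwise_dequantize(q, scales, n):
    """Recover an approximate float tensor of original length n."""
    return (q.astype(np.float32) / 127 * scales).ravel()[:n]

state = np.random.randn(10_000).astype(np.float32)   # e.g. Adam's second moment
q, s = blockwise_quantize(state)
recon = blockwise_dequantize(q, s, len(state))
```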

Lossy raw data compression in computed tomography with noise shaping to control image effects

Yao Xie, Adam S. Wang, Norbert J. Pelc, Jiang Hsieh, Ehsan Samei
2008 Medical Imaging 2008: Physics of Medical Imaging  
If we are able to compress the sinogram data with a tolerable amount of loss in the reconstructed image quality, we can store the raw data and then reconstruct desired images in the future.  ...  Figure 9: (a) quantizer error spectrum without error feedback; (b) quantizer error spectrum with feedback. Figure 10: Comparison of the frequency responses of different feedback filters.  ...  We found that error-feedback quantization (with a first-order filter) is the simplest and performs similarly to higher-order FIR feedback filters.  ... 
doi:10.1117/12.769954 fatcat:udivmecghjgpndc4ycryngblfq
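The "error feedback quantization (with first order filter)" the authors mention is classic noise shaping: feed the previous quantization error back into the next input sample, which pushes quantization noise toward high frequencies where it is less harmful to the reconstruction. A generic first-order sketch, not tied to their sinogram pipeline; the step size is an arbitrary example.

```python
import numpy as np

def noise_shaping_quantize(x, step=0.1):
    """First-order error-feedback quantization: q[n] = Q(x[n] + e[n-1]),
    which shapes the quantization noise spectrum by (1 - z^-1)."""
    out = np.empty_like(x)
    e = 0.0
    for n, sample in enumerate(x):
        u = sample + e                     # add back previous quantization error
        out[n] = np.round(u / step) * step
        e = u - out[n]                     # error fed forward to the next sample
    return out

row = np.sin(np.linspace(0, 8 * np.pi, 512))   # stand-in for one sinogram row
q = noise_shaping_quantize(row)
```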

Channel prediction and feedback in multiuser broadcast channels

Adam J. Tenenbaum, Raviraj S. Adve, Young-Soo Yuk
2009 2009 11th Canadian Workshop on Information Theory  
mean squared error (MSE).  ...  Limiting the total feedback rate is an important design goal for multiuser multiple-input, multiple-output systems, as the feedback overhead can potentially consume a large percentage of system resources  ...  quantization codebook as determined by the Lloyd-Max algorithm [15]), with B feedback bits per real channel coefficient, and the U = 4 AR model.  ... 
doi:10.1109/cwit.2009.5069523 fatcat:7abqgicz55btfc7vuhcgplbczu
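The Lloyd-Max algorithm referenced in the snippet designs an MSE-optimal scalar quantization codebook from training samples by alternating nearest-codeword assignment and centroid updates; it is k-means in one dimension. A compact sketch, with quantile initialization as an assumed (not prescribed) starting point:

```python
import numpy as np

def lloyd_max(samples, bits=3, iters=50):
    """MSE-optimal scalar quantizer codebook via Lloyd iterations."""
    levels = 2 ** bits
    code = np.quantile(samples, np.linspace(0.05, 0.95, levels))  # init on quantiles
    for _ in range(iters):
        # assign each sample to its nearest codeword
        idx = np.argmin(np.abs(samples[:, None] - code[None, :]), axis=1)
        for k in range(levels):
            if np.any(idx == k):
                code[k] = samples[idx == k].mean()   # centroid update
    return np.sort(code)

h = np.random.randn(10_000)          # training set of real channel coefficients
codebook = lloyd_max(h, bits=3)      # B = 3 feedback bits per real coefficient
```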

WheelCon: A wheel control-based gaming platform for studying human sensorimotor control [article]

Quanying Liu, Yorie Nakahira, Ahkeel Mohideen, Adam Dai, Sunghoon Choi, Angelina Pan, Dimitar M. Ho, John C. Doyle
2019 arXiv   pre-print
gaming steering wheel with a force feedback motor.  ...  The platform provides flexibility, as will be demonstrated in the demos provided, so that researchers may manipulate the disturbances, delay, and quantization (data rate) in the layered feedback loops,  ...  The L1-/L2-/L∞-norm of the error increases with increasing delay. Figure 5: Quantization in vision input (a) and action output (b).  ... 
arXiv:1811.00738v2 fatcat:wl7mahnxurfm5eggeibffuoacq

Learning Physical-Layer Communication with Quantized Feedback [article]

Jinxiang Song, Bile Peng, Christian Häger, Henk Wymeersch, Anant Sahai
2019 arXiv   pre-print
Simulation results show that feedback quantization does not appreciably affect the learning process and can lead to excellent performance, even with 1-bit quantization.  ...  In this paper, we study the impact of quantized feedback in data-driven learning of physical-layer communication.  ...  Results and Discussion 1) Perfect vs Quantized Feedback: We start by evaluating the impact of quantized feedback on the system performance, measured in terms of the symbol error rate (SER).  ... 
arXiv:1904.09252v2 fatcat:l4oc5qc6bvcptjncs6tlsibcmq
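The striking claim here is that even 1-bit quantization of the feedback barely affects learning. The simplest such quantizer reduces each per-sample feedback value to one bit relative to a batch statistic; this is only a guess at the flavor of scheme, and the paper's exact quantizer and training loop may differ.

```python
import numpy as np

def one_bit_feedback(losses):
    """1-bit feedback: each per-sample loss is reduced to a sign relative to
    the batch mean, so the receiver sends one bit per sample."""
    return (losses - losses.mean() > 0).astype(np.uint8)

def dequantize_feedback(bits):
    """Transmitter-side reconstruction: map bits back to +/-1 pseudo-losses."""
    return 2.0 * bits - 1.0

losses = np.random.rand(8)        # per-symbol losses measured at the receiver
bits = one_bit_feedback(losses)   # what actually crosses the feedback link
fb = dequantize_feedback(bits)    # used in place of the true losses
```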

Learning Physical-Layer Communication with Quantized Feedback

Jinxiang Song, Bile Peng, Christian Häger, Henk Wymeersch, Anant Sahai
2019 IEEE Transactions on Communications  
, even with 1-bit quantization.  ...  In this paper, we study the impact of quantized feedback on data-driven learning of physical-layer communication.  ...  Results and Discussion 1) Perfect vs Quantized Feedback: We start by evaluating the impact of quantized feedback on the system performance, measured in terms of the symbol error rate (SER).  ... 
doi:10.1109/tcomm.2019.2951563 fatcat:j35q6g2rxbhqhgtfhjewl4yph4

An Efficient Deep Learning Framework for Low Rate Massive MIMO CSI Reporting [article]

Zhenyu Liu, Lin Zhang, Zhi Ding
2019 arXiv   pre-print
CQNet significantly outperforms solutions using uniform CSI quantization and μ-law non-uniform quantization.  ...  CQNet can be directly integrated within other DL-based CSI feedback works for further enhancement.  ...  We select CsiNet with M = 256 and DualNet-MAG with M = 128 to demonstrate the quantization error of UQ and μQ.  ... 
arXiv:1912.10608v1 fatcat:hox3b3lgebd6vch2zqavonkdki
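The μ-law ("μQ") baseline the snippet compares against compresses the input through a logarithmic map before uniform quantization, spending more resolution on small-magnitude CSI entries. A self-contained sketch; μ = 255 and 4 bits are conventional example values, not the paper's settings.

```python
import numpy as np

def mu_law_quantize(x, mu=255.0, bits=4):
    """Compand with mu-law, quantize uniformly in [-1, 1], then expand."""
    x = np.clip(x, -1.0, 1.0)
    companded = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    levels = 2 ** bits - 1
    q = np.round((companded + 1) / 2 * levels) / levels * 2 - 1
    # inverse companding back to the linear domain
    return np.sign(q) * np.expm1(np.abs(q) * np.log1p(mu)) / mu

csi = np.random.randn(256) * 0.2      # stand-in for normalized CSI entries
csi_q = mu_law_quantize(csi)
```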

Deep Learning-based Limited Feedback Designs for MIMO Systems [article]

Jeonghyeon Jang, Hoon Lee, Sangwon Hwang, Haibao Ren, Inkyu Lee
2019 arXiv   pre-print
Compared to conventional limited feedback schemes, the proposed DL method shows a 1 dB symbol error rate (SER) gain with reduced computational complexity.  ...  We study deep learning (DL) based limited feedback methods for multi-antenna systems.  ...  The quantized CSI is sent back to the transmitter through finite-rate feedback channels with the aid of a channel codebook [3].  ... 
arXiv:1912.09043v1 fatcat:ymbqvincqbbqfhkjsnmdxk37de
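Codebook-based limited feedback, as in the snippet's reference [3], works by having transmitter and receiver share a codebook; the receiver feeds back only the index of the codeword best aligned with its measured channel. A minimal sketch with a random codebook; real systems use structured codebooks (e.g., the DFT-based ones standardized in LTE).

```python
import numpy as np

rng = np.random.default_rng(1)
B, nt = 4, 4                              # feedback bits and transmit antennas
codebook = rng.standard_normal((2 ** B, nt)) + 1j * rng.standard_normal((2 ** B, nt))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

h = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)   # measured channel
h_dir = h / np.linalg.norm(h)

# receiver: pick the codeword best aligned with the channel direction
idx = int(np.argmax(np.abs(codebook.conj() @ h_dir)))
# only `idx` (B bits) crosses the finite-rate feedback channel
h_hat = codebook[idx]                     # transmitter's quantized CSI
```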

Exploit Where Optimizer Explores via Residuals [article]

An Xu, Zhouyuan Huo, Heng Huang
2020 arXiv   pre-print
stage, and similar to or better than SGD(m) at the end of training with better generalization error.  ...  We provide theoretical analysis to show that RSGD achieves a smaller growth rate of the generalization error and the same (but empirically better) convergence rate compared with SGD.  ...  RSGD(m) shows the same (but empirically better) convergence rate and a slower growth rate of generalization error compared with SGD.  ... 
arXiv:2004.05298v2 fatcat:l4k3x5hzqbbm5odhfpwpppcnnm

A Distributed Training Algorithm of Generative Adversarial Networks with Quantized Gradients [article]

Xiaojun Chen and Shu Yang and Li Shen and Xuanrong Pang
2020 arXiv   pre-print
In this paper, we propose a distributed GANs training algorithm with quantized gradients, dubbed DQGAN, which is the first distributed training method with quantized gradients for GANs.  ...  The error-feedback operation we designed is used to compensate for the bias caused by the compression, and moreover, ensure the convergence of the new method.  ...  We used PyTorch [32]. We compared our method with the Centralized Parallel Optimistic Adam (CPOAdam), which is our method without quantization and error-feedback, and the Centralized Parallel Optimistic  ... 
arXiv:2010.13359v1 fatcat:qfjmj572mffw5phdavq6cfgjze
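A multi-worker version of the quantize-with-error-feedback pattern described here: each worker compresses its own error-corrected gradient and keeps its own residual, and the server averages the messages. Scaled-sign compression is used for concreteness; the paper's quantizer and GAN-specific (optimistic) update are not reproduced.

```python
import numpy as np

def sign_compress(g):
    """Scaled sign quantization: 1 bit per coordinate plus one shared scale."""
    return np.sign(g) * np.mean(np.abs(g))

def distributed_step(grads, errors, lr=0.01):
    """Each worker sends a compressed, error-corrected gradient;
    the server averages them into a single update."""
    msgs = []
    for i, g in enumerate(grads):
        corrected = g + errors[i]
        q = sign_compress(corrected)
        errors[i] = corrected - q          # per-worker residual
        msgs.append(q)
    return lr * np.mean(msgs, axis=0), errors

workers, d = 4, 8
errors = [np.zeros(d) for _ in range(workers)]
grads = [np.random.randn(d) for _ in range(workers)]
update, errors = distributed_step(grads, errors)
```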

AdaComp : Adaptive Residual Gradient Compression for Data-Parallel Distributed Training [article]

Chia-Yu Chen, Jungwook Choi, Daniel Brand, Ankur Agrawal, Wei Zhang, Kailash Gopalakrishnan
2017 arXiv   pre-print
momentum, Adam) and network parameters (number of learners, minibatch size, etc.).  ...  show excellent results on a wide spectrum of state-of-the-art Deep Learning models in multiple domains (vision, speech, language), datasets (MNIST, CIFAR10, ImageNet, BN50, Shakespeare), optimizers (SGD with  ...  Figure 3: This work (AdaComp) achieves similar effective compression rates (40× for CONV layers and 200× for FC/LSTM layers) with Adam, and had no impact on convergence or test error (Adam: baseline  ... 
arXiv:1712.02679v1 fatcat:xjfy7efqmrctfa6dnaqvp5qhya
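AdaComp compresses residual gradients adaptively: updates accumulate in a residual, and within each local bin only entries near the bin's largest magnitude are transmitted. The sketch below captures that flavor only; the exact selection rule differs, and `bin_size` and `slack` are made-up parameters.

```python
import numpy as np

def adacomp_like(residual, grad, bin_size=64, slack=2.0):
    """Simplified residual compression: accumulate the gradient into the
    residual, then per bin send only entries within a factor `slack` of the
    bin's largest magnitude; everything else stays in the residual."""
    r = residual + grad
    sent = np.zeros_like(r)
    for start in range(0, len(r), bin_size):
        chunk = r[start:start + bin_size]
        mask = np.abs(chunk) * slack >= np.abs(chunk).max()
        sent[start:start + bin_size][mask] = chunk[mask]
    return sent, r - sent                 # (sparse message, carried residual)

g = np.random.randn(512)
msg, res = adacomp_like(np.zeros_like(g), g)
```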

Neural Machine Translation with 4-Bit Precision and Beyond [article]

Alham Fikri Aji, Kenneth Heafield
2019 arXiv   pre-print
We also propose to use an error-feedback mechanism during retraining, to preserve the compressed model as a stale gradient.  ...  We design a quantization procedure to compress NMT models better for devices with limited hardware capability.  ...  Models are optimized with Adam (Kingma and Ba, 2014).  ... 
arXiv:1909.06091v2 fatcat:m2dsb2dhvzagfpyacxo2otkpve
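One plausible reading of "preserve the compressed model as a stale gradient": keep a full-precision copy, quantize it for the forward pass, and fold the quantization error back in before the next update, i.e. error feedback applied to weights rather than gradients. The uniform quantizer below is a stand-in for the paper's fitted log-based 4-bit quantizer.

```python
import numpy as np

def quantize_weights(w, bits=4):
    """Uniform stand-in for the paper's log-based weight quantizer."""
    scale = np.abs(w).max() + 1e-12
    levels = 2 ** (bits - 1) - 1
    return np.round(w / scale * levels) / levels * scale

def retrain_step(w_q, grad, error, lr=0.01):
    """One retraining step with weight-quantization error feedback:
    restore the carried error, update in full precision, re-quantize,
    and carry the new quantization error forward."""
    w_full = w_q + error - lr * grad
    w_q_new = quantize_weights(w_full)
    return w_q_new, w_full - w_q_new

w_q = quantize_weights(np.random.randn(64))
err = np.zeros_like(w_q)
for _ in range(10):
    g = w_q.copy()                    # stand-in for a real NMT gradient
    w_q, err = retrain_step(w_q, g, err)
```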

Performance analysis and optimal filter design for sigma-delta modulation via duality with DPCM

Or Ordentlich, Uri Erez
2015 2015 IEEE International Symposium on Information Theory (ISIT)  
The goal of this work is to characterize the optimal trade-off between the per-sample quantization rate and the resulting mean-squared-error distortion, under various restrictions on the feedback filter  ...  This is attained by using a feedback filter at the encoder, in conjunction with a low-pass filter at the decoder.  ...  For such quantizers, the quantization error is composed of two main factors [1]: granular errors that correspond to the quantization error in the case where the input signal falls within the quantizer's  ... 
doi:10.1109/isit.2015.7282469 dblp:conf/isit/OrdentlichE15 fatcat:syzdvk5ntveidpvgd7noyjpz4a
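The DPCM side of the duality studied here: predict each sample from the previously reconstructed one through a feedback (prediction) filter, quantize only the prediction residual, and reconstruct by adding the quantized residual back to the prediction. A first-order sketch; the filter coefficient 0.9 and step size are arbitrary example values.

```python
import numpy as np

def dpcm(x, a=0.9, step=0.05):
    """First-order DPCM: quantize the prediction residual x[n] - a*xr[n-1]."""
    recon = np.zeros_like(x)
    prev = 0.0
    for n, sample in enumerate(x):
        pred = a * prev                          # feedback (prediction) filter
        e_q = np.round((sample - pred) / step) * step
        recon[n] = pred + e_q                    # decoder's reconstruction
        prev = recon[n]
    return recon

x = np.sin(np.linspace(0, 4 * np.pi, 256))
xr = dpcm(x)
```

Because the encoder predicts from the *reconstructed* signal, encoder and decoder stay in sync, and only the residual's quantization noise survives in the output.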
Showing results 1 — 15 out of 1,919 results