
Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization [article]

Mher Safaryan, Filip Hanzely, Peter Richtárik
2021 arXiv   pre-print
In order to further alleviate the communication burden inherent in distributed optimization, we propose a novel communication sparsification strategy that can take full advantage of the smoothness matrices  ...  In this paper, we argue that when training supervised models, smoothness matrices – information-rich generalizations of the ubiquitous smoothness constants – can and should be exploited for further dramatic  ...  that there is a hitherto untapped richness of smoothness information that can be used to construct better distributed optimization algorithms and obtain better theory.  ... 
arXiv:2102.07245v1 fatcat:ddimi7i5lneapc4yw6q6dq6764
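
For context, the matrix smoothness this abstract refers to is usually defined by replacing the scalar constant in the standard smoothness inequality with a positive semidefinite matrix; a sketch of that standard definition (the paper's exact assumptions may differ):

```latex
% f has smoothness matrix \mathbf{L} \succeq 0 (generalizing the scalar
% constant L) if, for all x and h,
f(x + h) \;\le\; f(x) + \langle \nabla f(x), h \rangle + \tfrac{1}{2}\, h^{\top} \mathbf{L}\, h .
```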

Square Root SAM

Frank Dellaert
2005 Robotics: Science and Systems I  
Such techniques have several significant advantages over the EKF: they are faster yet exact, they can be used in either batch or incremental mode, are better equipped to deal with non-linear process and  ...  We investigate smoothing approaches as a viable alternative to extended Kalman filter-based solutions to the problem.  ...  However, these algorithms are typically implemented for dense matrices only, and it is imperative that we use a sparse storage scheme for optimal performance.  ... 
doi:10.15607/rss.2005.i.024 dblp:conf/rss/Dellaert05 fatcat:5zowkkponrgvdp2x7b72f5j4ge
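
The sparse-storage remark above is the crux of Square Root SAM: the smoothing formulation reduces to a sparse linear least-squares problem at each linearization point. Below is a minimal sketch of that solve on hypothetical toy data, assuming scipy; it is not Dellaert's implementation, which factorizes the Jacobian directly via sparse QR/Cholesky.

```python
# Sparse least-squares core of smoothing-based SAM (illustrative sketch).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def smoothing_step(A: sp.csr_matrix, b: np.ndarray) -> np.ndarray:
    """Solve the linearized problem min_x ||A x - b||^2.

    A is the sparse measurement Jacobian; exploiting its sparsity is
    what makes smoothing competitive with (and faster than) the EKF.
    """
    AtA = (A.T @ A).tocsc()   # information matrix, stays sparse
    Atb = A.T @ b
    return spla.spsolve(AtA, Atb)

# Toy example: 3 states, 4 scalar measurements.
A = sp.csr_matrix(np.array([[1., 0., 0.],
                            [-1., 1., 0.],
                            [0., -1., 1.],
                            [0., 0., 1.]]))
b = np.array([0.0, 1.0, 1.0, 2.1])
print(smoothing_step(A, b))
```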

Shifted Compression Framework: Generalizations and Improvements [article]

Egor Shulgin, Peter Richtárik
2022 arXiv   pre-print
Communication is one of the key bottlenecks in the distributed training of large-scale machine learning models, and lossy compression of exchanged information, such as stochastic gradients or models, is  ...  However, unless the model being trained is overparameterized, there is no a priori reason for the vectors we wish to compress to approach zero during the iterations of classical methods such as distributed  ...  Acknowledgements We would like to thank the anonymous reviewers, Laurent Condat and Konstantin Mishchenko for their helpful comments and suggestions to improve the manuscript.  ... 
arXiv:2206.10452v1 fatcat:rdptmg5itnfdnatoiq3tsfbtya
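
The "shift" in the title can be made concrete: instead of compressing the vector x directly, compress its difference to a reference point that the method drives toward x, so the quantity being compressed approaches zero even when x itself does not. A minimal sketch with illustrative names (not the paper's code):

```python
import numpy as np

def top_k(v: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def shifted_compress(x: np.ndarray, shift: np.ndarray, k: int) -> np.ndarray:
    """Compress x - shift rather than x, then add the shift back.

    As shift -> x over the iterations, the compression error vanishes.
    """
    return shift + top_k(x - shift, k)
```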

Graph Classification with 2D Convolutional Neural Networks [article]

Antoine Jean-Pierre Tixier, Giannis Nikolentzos, Polykarpos Meladianos, Michalis Vazirgiannis
2019 arXiv   pre-print
Acknowledgments We thank the anonymous reviewers for their helpful comments. The GPU used in this project was donated by NVidia as part of their GPU grant program.  ...  Step 2: Alignment and compression with PCA. As state-of-the-art node embedding techniques (such as node2vec) are neural, they are stochastic.  ...  More precisely, we used a simple but powerful nonparametric technique known as the smoothed bootstrap with variance correction [32], detailed in Alg. 1.  ... 
arXiv:1708.02218v4 fatcat:izqk4eoif5do5cyfut4adt7u3m
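
The smoothed bootstrap with variance correction mentioned in the excerpt resamples the data with replacement, adds kernel noise of bandwidth h, and rescales so the resampled variance matches the empirical one. A sketch of the standard (Silverman-style) univariate formula, which may differ in detail from the paper's Alg. 1:

```python
import numpy as np

def smoothed_bootstrap(x: np.ndarray, n: int, h: float,
                       rng: np.random.Generator = None) -> np.ndarray:
    """Draw n variance-corrected smoothed-bootstrap samples from data x."""
    rng = rng or np.random.default_rng()
    xbar, var = x.mean(), x.var()
    draws = rng.choice(x, size=n, replace=True)   # ordinary bootstrap resample
    eps = rng.standard_normal(n)                  # Gaussian kernel noise
    # Rescale so Var(samples) matches Var(x) despite the added noise.
    return xbar + (draws - xbar + h * eps) / np.sqrt(1.0 + h**2 / var)
```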

Frequency-modulated continuous-wave LiDAR compressive depth-mapping

Daniel J. Lum, Samuel H. Knarr, John C. Howell
2018 Optics Express  
We present an inexpensive architecture for converting a frequency-modulated continuous-wave LiDAR system into a compressive-sensing based depth-mapping camera.  ...  Moreover, by efficiently storing only 2m data points from m < n measurements of an n-pixel scene, we can easily extract depths by solving only two linear equations with efficient convex-optimization methods  ...  Agishev, "Intensity-modulated linear-frequency-modulated continuous-wave lidar for distributed media: fundamentals of technique," Appl. Opt. 49, 3369-3379 (2010). 34. B. C. Redman, B. Stann, W.  ... 
doi:10.1364/oe.26.015420 pmid:30114804 fatcat:ojivk7nl6za7pmlr66u2jj4544
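
The convex-optimization step referred to above is, generically, an $\ell_1$-regularized least-squares recovery. A self-contained ISTA sketch of that kind of solver (not the paper's reconstruction code):

```python
import numpy as np

def ista(A: np.ndarray, y: np.ndarray, lam: float = 0.1,
         n_iter: int = 200) -> np.ndarray:
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient descent."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)          # gradient of the smooth term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```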

Quantization and Compressive Sensing [article]

Petros T. Boufounos, Laurent Jacques, Felix Krahmer, Rayan Saab
2014 arXiv   pre-print
This book chapter explores the interaction of quantization and compressive sensing and examines practical quantization strategies for compressive acquisition systems.  ...  As we demonstrate, proper accounting for quantization and careful quantizer design have a significant impact on the performance of a compressive acquisition system.  ...  To that end, following [42] we will use a (standard) compressed sensing decoder $D : \mathbb{R}^m \to \mathbb{R}^N$ that has uniform robustness guarantees for matrices with an appropriate RIP constant.  ... 
arXiv:1405.1194v2 fatcat:dva6z6yqojgo7pd3y5zt7khuoe
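
Schematically, the pipeline the chapter studies is compressive measurement followed by quantization and a robust decoder; with illustrative notation:

```latex
% \Phi \in \mathbb{R}^{m \times N}: measurement matrix with a suitable RIP
% constant; Q: scalar or Sigma-Delta quantizer; D: robust decoder.
y = Q(\Phi x), \qquad \hat{x} = D(y), \qquad
\|\hat{x} - x\|_2 \;\lesssim\; \|Q(\Phi x) - \Phi x\|_2 .
```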

Quantization and Compressive Sensing [chapter]

Petros T. Boufounos, Laurent Jacques, Felix Krahmer, Rayan Saab
2015 Compressed Sensing and its Applications  
This chapter explores the interaction of quantization and compressive sensing and examines practical quantization strategies for compressive acquisition systems.  ...  This type of error decay is sub-optimal (albeit better than scalar quantization), and rather far from the optimal exponential error decay.  ... 
doi:10.1007/978-3-319-16042-9_7 fatcat:raeigqgka5cohkjotlr6mir45u

Optimal estimation of $\ell_1$-regularization prior from a regularized empirical Bayesian risk standpoint

Lior Horesh, Eldad Haber, Hui Huang
2012 Inverse Problems and Imaging  
Memory requirements as well as computation of the nonlinear, non-smooth subgradient equations are prohibitive for large-scale problems.  ...  We further demonstrate that the solutions of ill-posed inverse problems by incorporation of $\ell_1$-regularization using the learned prior matrix perform generally better than commonly used regularization techniques  ...  Thanks go to Mark Schmidt for his Matlab code minFunc, which has been modified to solve our unconstrained optimization problem.  ... 
doi:10.3934/ipi.2012.6.447 fatcat:4e5l67et4nf3lfb7t3pdk4gz4m
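
The reconstruction the abstract describes takes the generic learned-prior form below, where W is the prior matrix estimated from training data (notation illustrative):

```latex
\hat{x} \;=\; \arg\min_{x} \; \tfrac{1}{2}\,\|A x - b\|_2^2 \;+\; \lambda\, \|W x\|_1 .
```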

Preferred Tempo and Low-Audio-Frequency Bias Emerge From Simulated Sub-cortical Processing of Sounds With a Musical Beat

Nathaniel J. Zuk, Laurel H. Carney, Edmund C. Lalor
2018 Frontiers in Neuroscience  
However, the tempo identification algorithm that was optimized for simple stimuli often failed for recordings of music.  ...  First, irrespective of the stimulus being presented, the preferred tempo was around 100 beats per minute, which is within the range of tempi where tempo discrimination and tapping accuracy are optimal.  ...  However, an explanation for this optimal range of tempi is unclear.  ... 
doi:10.3389/fnins.2018.00349 pmid:29896080 pmcid:PMC5987030 fatcat:edswueki7bdstbrzgav5gx2gze

Better Methods and Theory for Federated Learning: Compression, Client Selection and Heterogeneity [article]

Samuel Horváth
2022 arXiv   pre-print
Unfortunately, optimization for FL faces several specific issues that centralized optimization usually does not need to handle.  ...  In Chapter 5, we argue that if compressed communication is required for distributed training due to communication overhead, it is better to use unbiased compressors.  ...  Chapter 5: A better alternative to error feedback for communication-efficient distributed learning. Introduction: We consider distributed optimization problems of the form $\min_{x \in \mathbb{R}^d} f(x) \overset{\text{def}}{=} \frac{1}{n} \sum_{i=1}^{n} f_i(x)$  ... 
arXiv:2207.00392v1 fatcat:4ulwzbpusjbntnijrwe4vlgspe
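
The unbiased compressors the thesis advocates satisfy E[C(x)] = x; random-k sparsification with rescaling is the canonical example, in contrast to biased compressors such as Top-K. A sketch (illustrative, not the thesis code):

```python
import numpy as np

def rand_k(x: np.ndarray, k: int,
           rng: np.random.Generator = None) -> np.ndarray:
    """Keep k uniformly random coordinates, rescaled by d/k so that
    E[rand_k(x)] = x, i.e. the compressor is unbiased."""
    rng = rng or np.random.default_rng()
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out
```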

FedNL: Making Newton-Type Methods Applicable to Federated Learning [article]

Mher Safaryan and Rustem Islamov and Xun Qian and Peter Richtárik
2022 arXiv   pre-print
Our communication-efficient Hessian learning technique provably learns the Hessian at the optimum.  ...  , ii) makes it applicable beyond generalized linear models, and iii) provably works with general contractive compression operators for compressing the local Hessians, such as Top-K or Rank-R, which are  ...  B.3.3 Top-K compression operator for matrices: Another example of contractive compression operators is the Top-K compressor for matrices.  ... 
arXiv:2106.02969v2 fatcat:fzg2w465qzdhzftjusfrsu3zhi
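
For reference, a contractive compression operator with parameter δ ∈ (0, 1], the class the abstract mentions, satisfies the following for every matrix H; Top-K on a d × d matrix is contractive with δ = K/d². This is the standard definition, which may be stated slightly differently in the paper:

```latex
\mathbb{E}\,\big\| \mathcal{C}(H) - H \big\|_F^2 \;\le\; (1 - \delta)\, \|H\|_F^2 .
```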

Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14) [article]

L. Jacques, C. De Vleeschouwer, Y. Boursier, P. Sudhakar, C. De Mol, A. Pizurica, S. Anthoine, P. Vandergheynst, P. Frossard, C. Bilen, S. Kitic, N. Bertin, R. Gribonval, N. Boumal (+51 others)
2014 arXiv   pre-print
For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th till Friday August 29th, 2014.  ...  There, the authors study $\ell_2$-stability for this class of decomposable norms with a general sufficiently smooth data fidelity.  ...  Relation to Previous Works: This result extends the work of [6] for $\ell_1$-regularization, [12] for analysis-$\ell_1$, [1] for non-overlapping group Lasso, [2] for the trace norm, and [13] for general polyhedral  ... 
arXiv:1410.0719v2 fatcat:4y3drgk3ujh5hopfn2p2runlzu

A Fast Semidirect Least Squares Algorithm for Hierarchically Block Separable Matrices

Kenneth L. Ho, Leslie Greengard
2014 SIAM Journal on Matrix Analysis and Applications  
For an M × N HBS matrix with M ≥ N having bounded off-diagonal block rank, the algorithm has optimal O(M + N) complexity.  ...  We present a fast algorithm for linear least squares problems governed by hierarchically block separable (HBS) matrices.  ...  We would like to thank the anonymous referees for their careful reading and insightful remarks, which have improved the paper tremendously.  ... 
doi:10.1137/120902677 fatcat:n5rnrdu2avh4xhqdx4zm46u7ka

State smoothing by sum-of-norms regularization

Henrik Ohlsson, Fredrik Gustafsson, Lennart Ljung, Stephen Boyd
2010 49th IEEE Conference on Decision and Control (CDC)  
The state smoothing problem for linear state space models is here formulated as a least-squares problem with sum-of-norms regularization, a generalization of the $\ell_1$-regularization.  ...  A nice property of the suggested formulation is that it has only one tuning parameter, the regularization constant, which is used to trade off fit against the number of jumps.  ...  CUSUM also suffers from varying SNR and would do better if it were retuned for each new SNR value.  ... 
doi:10.1109/cdc.2010.5717386 dblp:conf/cdc/OhlssonGLB10a fatcat:2lqv53tkgndexbdy3letg4ouii
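
Concretely, the formulation replaces the usual sum of squared process-noise terms with a sum of their (unsquared) norms, so entire increments are driven exactly to zero and the estimated trajectory exhibits few jumps; with illustrative notation for a model x_{t+1} = A x_t + w_t, y_t = C x_t + e_t:

```latex
\min_{x_1, \dots, x_N} \;\; \sum_{t=1}^{N} \|y_t - C x_t\|_2^2
\;+\; \lambda \sum_{t=1}^{N-1} \|x_{t+1} - A x_t\|_2 .
```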

A deep learning approach to structured signal recovery

Ali Mousavi, Ankit B. Patel, Richard G. Baraniuk
2015 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)  
In this paper, we develop a new framework for sensing and recovering structured signals.  ...  In contrast to compressive sensing (CS) systems that employ linear measurements, sparse representations, and computationally complex convex/greedy algorithms, we introduce a deep learning framework that  ...  We solve this optimization problem by using either convex optimization techniques (e.g. linear programming) or greedy algorithms.  ... 
doi:10.1109/allerton.2015.7447163 dblp:conf/allerton/MousaviPB15 fatcat:hv5dqtrzi5h53jphsepdkyw6fq