2,225 Hits in 4.1 sec

Parameter-Free Convex Learning through Coin Betting

Francesco Orabona, Dávid Pál
2016 International Conference on Machine Learning  
Applications to obtaining parameter-free convex optimization and machine learning algorithms are shown.  ...  We present a new parameter-free algorithm for online linear optimization over any Hilbert space. It is theoretically optimal, with regret guarantees as good as those achievable with the best possible learning rate.  ...  Algorithm From Coin Betting: Here, we present our new parameter-free algorithm for OLO over a Hilbert space H, stated as Algorithm 1.  ... 
dblp:conf/icml/OrabonaP16 fatcat:lrg2v4dqfjbmpec7f6m2bzfeeu
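
The reduction referenced in this entry is concrete enough to sketch. Below is a minimal Python sketch of a KT-style coin-betting update for one-dimensional online linear optimization, assuming |g_t| <= 1 and an initial endowment epsilon = 1; the variable names are ours, not the paper's.

    # Minimal sketch of coin-betting for 1-D online linear optimization,
    # in the spirit of Orabona & Pal (2016). Assumes |g_t| <= 1.
    def coin_betting_olo(gradients, epsilon=1.0):
        wealth = epsilon          # initial endowment
        grad_sum = 0.0            # running sum of "coin outcomes" -g_i
        iterates = []
        for t, g in enumerate(gradients, start=1):
            beta = grad_sum / t   # KT betting fraction (zero bet at t=1)
            w = beta * wealth     # bet a signed fraction of current wealth
            iterates.append(w)
            wealth -= g * w       # gain/loss of the bet: -g_t * w_t
            grad_sum += -g
        return iterates

No learning rate appears anywhere: the betting fraction adapts on its own, which is exactly the sense in which the algorithm is parameter-free.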

Coin Betting and Parameter-Free Online Learning [article]

Francesco Orabona, Dávid Pál
2016 arXiv   pre-print
We present a new intuitive framework to design parameter-free algorithms, both for online linear optimization over Hilbert spaces and for learning with expert advice, based on reductions to betting on outcomes  ...  In recent years, a number of parameter-free algorithms have been developed for online linear optimization over Hilbert spaces and for learning with expert advice.  ...  classic online learning algorithms and parameter-free ones is real and not just theoretical.  ... 
arXiv:1602.04128v4 fatcat:dk4p4t2xejgo5mhgfytwtilfue

Better Parameter-free Stochastic Optimization with ODE Updates for Coin-Betting [article]

Keyi Chen, John Langford, Francesco Orabona
2022 arXiv   pre-print
In this paper, we close the empirical gap with a new parameter-free algorithm based on continuous-time Coin-Betting on truncated models.  ...  We show empirically that this new parameter-free algorithm outperforms algorithms with the "best default" learning rates and almost matches the performance of finely tuned baselines without anything to  ...  "AF: Small: Collaborative Research: New Representations for Learning Algorithms and Secure Computation", and no. 2046096 "CAREER: Parameter-free Optimization Algorithms for Machine Learning".  ... 
arXiv:2006.07507v3 fatcat:7phhzpv76zgsjbepx4cbzdlaz4

Parameter-Free Online Convex Optimization with Sub-Exponential Noise [article]

Kwang-Sung Jun, Francesco Orabona
2019 arXiv   pre-print
So, we design a novel parameter-free OCO algorithm for Banach space, which we call BANCO, via a reduction to betting on noisy coins.  ...  in a parameter-free way.  ...  We would like to thank Adam Smith for his valuable feedback on differentially-private SGDs.  ... 
arXiv:1902.01500v3 fatcat:6czpdrzvdjfyrecd6iljmfyrlu

PDE-Based Optimal Strategy for Unconstrained Online Learning [article]

Zhiyu Zhang, Ashok Cutkosky, Ioannis Paschalidis
2022 arXiv   pre-print
Unconstrained Online Linear Optimization (OLO) is a practical problem setting to study the training of machine learning models.  ...  To our knowledge, the proposed algorithm is the first to achieve such optimalities.  ...  Interestingly, our prior work [ZCP22] used the coin-betting approach to achieve a similar goal as [DM19] , suggesting intriguing connections between differential equations and parameter-free online  ... 
arXiv:2201.07877v2 fatcat:uukx5sgh2fdl7hat3xlggp4yui

Parameter-free Gradient Temporal Difference Learning [article]

Andrew Jacobsen, Alan Chan
2021 arXiv   pre-print
In parallel, progress in online learning has provided parameter-free methods that achieve minimax optimal guarantees up to logarithmic terms, but their application in reinforcement learning has yet to  ...  In this work, we combine these two lines of attack, deriving parameter-free, gradient-based temporal difference algorithms.  ...  Coin betting and parameter-free online learning. Advances in Neural Information Processing Systems, 2016. Orabona, F. and Pál, D. Scale-free online learning.  ... 
arXiv:2105.04129v1 fatcat:2oyaubqm2vezddi7x7vlc5lpg4

Parameter-Free Locally Differentially Private Stochastic Subgradient Descent [article]

Kwang-Sung Jun, Francesco Orabona
2019 arXiv   pre-print
In this work, we propose BANCO (Betting Algorithm for Noisy COins), the first ϵ-LDP SGD algorithm that essentially matches the convergence rate of the tuned SGD without any learning rate parameter, reducing  ...  Further, tuning is detrimental to privacy since it significantly increases the number of gradient requests.  ...  We would like to thank Adam Smith for his valuable feedback on differentially-private SGDs.  ... 
arXiv:1911.09564v1 fatcat:zqeek7eblver7lrk2zswqmn5he

Implicit Parameter-free Online Learning with Truncated Linear Models [article]

Keyi Chen and Ashok Cutkosky and Francesco Orabona
2022 arXiv   pre-print
Parameter-free algorithms are online learning algorithms that do not require setting learning rates.  ...  Unfortunately, truncated linear models cannot be used with parameter-free algorithms because the updates become very expensive to compute.  ...  and Secure Computation" and no. 2046096 "CAREER: Parameter-free Optimization Algorithms for Machine Learning".  ... 
arXiv:2203.10327v1 fatcat:tgdm7kp2rzgslgjtsvxeze3ibi
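
For context, the truncated linear models discussed in this entry replace each convex loss by its linearization clipped at a known lower bound; a hedged rendering of the standard definition (assuming nonnegative losses, so the bound is zero):

    % Truncated linear model of a convex loss \ell_t at the iterate w_t,
    % assuming \ell_t >= 0 so the linearization can be clipped at 0:
    \tilde{\ell}_t(w) = \max\bigl(\ell_t(w_t) + \langle g_t,\, w - w_t\rangle,\; 0\bigr),
    \qquad g_t \in \partial \ell_t(w_t).

The truncation keeps the model a lower bound on the true loss while preventing the arbitrarily negative predictions of a pure linear model, which is what makes the resulting updates more stable.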

Improved Strongly Adaptive Online Learning using Coin Betting [article]

Kwang-Sung Jun, Francesco Orabona, Rebecca Willett, Stephen Wright
2017 arXiv   pre-print
This paper describes a new parameter-free online learning algorithm for changing environments.  ...  Empirical results show that our algorithm outperforms state-of-the-art methods in learning with expert advice and metric learning scenarios.  ...  The authors thank András György for providing constructive feedback and Kristjan Greenewald for providing the metric learning code.  ... 
arXiv:1610.04578v3 fatcat:klhphkaazrefbhhadtxklvcoby
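
Strongly adaptive methods in this line of work run a parameter-free learner on each interval of a geometric covering of time; a minimal Python sketch of the covering (our code, following the standard construction, with time starting at 1):

    # Geometric covering intervals: for each scale k >= 0, partition time
    # into blocks [i*2^k, (i+1)*2^k - 1]. At any time t, only O(log t)
    # blocks are active, one per scale.
    def active_intervals(t):
        intervals = []
        k = 0
        while 2 ** k <= t:
            i = t // (2 ** k)                   # block containing t at scale k
            start, end = i * 2 ** k, (i + 1) * 2 ** k - 1
            intervals.append((start, end))
            k += 1
        return intervals

    # e.g. active_intervals(11) -> [(11, 11), (10, 11), (8, 11), (8, 15)]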

Online Learning for Changing Environments using Coin Betting [article]

Kwang-Sung Jun, Francesco Orabona, Stephen Wright, Rebecca Willett
2017 arXiv   pre-print
This algorithm is derived by a reduction from optimal algorithms for the so-called coin betting problem.  ...  A key challenge in online learning is that classical algorithms can be slow to adapt to changing environments.  ...  The authors thank András György for providing constructive feedback and Kristjan Greenewald for providing the metric learning code.  ... 
arXiv:1711.02545v1 fatcat:x6gsm5xk3vh7hdjwfd3ikncnfy

Online Parameter-Free Learning of Multiple Low Variance Tasks [article]

Giulia Denevi, Dimitris Stamos, Massimiliano Pontil
2020 arXiv   pre-print
We propose a method to learn a common bias vector for a growing sequence of low-variance tasks. Unlike state-of-the-art approaches, our method does not require tuning any hyper-parameter.  ...  We then adapt the methods to the statistical setting: the aggressive variant becomes a multi-task learning method, the lazy one a meta-learning method.  ...  For this reason, both Alg. 1 and Alg. 2 can be considered parameter-free algorithms. We start by describing Alg. 1. One-Dimension Coin Betting Algorithm.  ... 
arXiv:2007.05732v1 fatcat:3vcoigxgb5g4hpomvu7k5vhyeu

Training Deep Networks without Learning Rates Through Coin Betting [article]

Francesco Orabona, Tatiana Tommasi
2017 arXiv   pre-print
Instead, we reduce the optimization process to a game of betting on a coin and propose a learning-rate-free optimal algorithm for this scenario.  ...  Contrary to previous methods, we do not adapt the learning rates, nor do we make use of the assumed curvature of the objective function.  ...  Acknowledgments The authors thank the Stony Brook Research Computing and Cyberinfrastructure, and the Institute for Advanced Computational Science at Stony Brook University for access to the high-performance  ... 
arXiv:1705.07795v3 fatcat:2tlcskmh3ve4jizoinoz43mz4q
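
A simplified one-coordinate sketch of a COCOB-style update may help; the constants and clipping details follow our reading of the paper and may differ from the released code.

    # Simplified one-coordinate sketch in the spirit of COCOB (Orabona &
    # Tommasi, 2017): each step is a bet of a fraction of accumulated
    # "wealth", so no learning rate is ever set.
    def cocob_1d(gradients, w1=0.0, alpha=100.0):
        L = 1e-8      # running estimate of the gradient scale max_i |g_i|
        G = 0.0       # sum of |g_i|
        reward = 0.0  # accumulated betting gains, clipped at 0
        theta = 0.0   # sum of -g_i
        w = w1
        for g in gradients:
            L = max(L, abs(g))
            G += abs(g)
            reward = max(reward - g * (w - w1), 0.0)
            theta += -g
            # bet a signed fraction of (scale + reward) around w1
            w = w1 + theta / (L * max(G + L, alpha * L)) * (L + reward)
            yield w

    # usage: list(cocob_1d([0.1, -0.2, 0.05]))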

Towards Painless Policy Optimization for Constrained MDPs [article]

Arushi Jain, Sharan Vaswani, Reza Babanezhad, Csaba Szepesvari, Doina Precup
2022 arXiv   pre-print
We instantiate this framework to use coin-betting algorithms and propose the Coin Betting Politex (CBP) algorithm.  ...  We consider the online setting with linear function approximation and assume global access to the corresponding features.  ...  Acknowledgements We would like to thank Tor Lattimore for feedback on the paper.  ... 
arXiv:2204.05176v1 fatcat:ted73dwiufbrng7titludzeg2q

Artificial Constraints and Lipschitz Hints for Unconstrained Online Learning [article]

Ashok Cutkosky
2019 arXiv   pre-print
Further, given a known bound ‖u‖ ≤ D, our same techniques allow us to design algorithms that adapt optimally to the unknown value of ‖u‖ without requiring knowledge of G.  ...  We provide algorithms that guarantee regret R_T(u) ≤ Õ(G‖u‖³ + G(‖u‖+1)√T) or R_T(u) ≤ Õ(G‖u‖³T^{1/3} + GT^{1/3} + G‖u‖√T) for online convex optimization with G-Lipschitz losses for any comparison point u without  ...  Classical gradient-descent algorithms require learning rates that are tuned to the values of ‖ẘ‖ and ‖g_t‖_⋆, while parameter-free algorithms automatically adapt to these unknown parameters, and so can largely  ... 
arXiv:1902.09013v1 fatcat:tbb55w7xmvfbzbpid6xtuylqie

A Modern Introduction to Online Learning [article]

Francesco Orabona
2022 arXiv   pre-print
Particular attention is given to the issue of tuning the parameters of the algorithms and learning in unbounded domains, through adaptive and parameter-free online learning algorithms.  ...  Here, online learning refers to the framework of regret minimization under worst-case assumptions.  ...  The idea of using coin-betting to do parameter-free OCO was introduced in Orabona and Pál [2016].  ... 
arXiv:1912.13213v5 fatcat:2jfr62y6ofg5boqzv3a6mybgo4
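
The common thread through these results is a wealth-regret duality; in one dimension it can be stated roughly as follows (our paraphrase of the standard statement, with |g_t| ≤ 1):

    % If a betting strategy guarantees, for some convex potential F,
    %   Wealth_T = \epsilon - \sum_{t=1}^{T} g_t w_t \ge F\Bigl(\sum_{t=1}^{T} -g_t\Bigr),
    % then for every comparator u,
    \mathrm{Regret}_T(u) = \sum_{t=1}^{T} g_t (w_t - u) \le \epsilon + F^{*}(u),
    % where F^{*} denotes the Fenchel conjugate of F.

Designing a parameter-free algorithm thus reduces to designing a betting potential F with a benign conjugate, which is the perspective the cited monograph develops.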
Showing results 1 — 15 out of 2,225 results