Multitask Online Mirror Descent [article]

Nicolò Cesa-Bianchi, Pierre Laforgue, Andrea Paudice, Massimiliano Pontil
2022, arXiv pre-print
We introduce and analyze MT-OMD, a multitask generalization of Online Mirror Descent (OMD) which operates by sharing updates between tasks. We prove that the regret of MT-OMD is of order √(1 + σ^2(N-1)) √T, where σ^2 is the task variance according to the geometry induced by the regularizer, N is the number of tasks, and T is the time horizon. Whenever tasks are similar, that is, σ^2 ≤ 1, our method improves upon the √(NT) bound obtained by running independent OMDs on each task. We further provide a matching lower bound, and show that our multitask extensions of Online Gradient Descent and Exponentiated Gradient, two major instances of OMD, enjoy closed-form updates, making them easy to use in practice. Finally, we present experiments which support our theoretical findings.
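To illustrate the idea of sharing updates between tasks, here is a minimal sketch of a multitask Online Gradient Descent round. This is not the paper's exact MT-OMD update; it is an assumed simplification in which each task takes an independent gradient step and the iterates are then shrunk toward their mean by a sharing weight `alpha` (with `alpha = 0` recovering N independent OGD runs). The function name `mt_ogd_round`, the quadratic toy losses, and all parameter values are hypothetical.

```python
import numpy as np

def mt_ogd_round(W, grads, eta, alpha):
    """One round of a simplified multitask OGD sketch (not the paper's exact rule).

    W: (N, d) array holding one iterate per task.
    grads: (N, d) array of per-task gradients at the current iterates.
    eta: step size; alpha in [0, 1] controls how much tasks share:
    alpha = 0 gives N fully independent OGD updates.
    """
    W = W - eta * grads                      # independent gradient step per task
    mean = W.mean(axis=0, keepdims=True)     # average iterate across tasks
    return (1.0 - alpha) * W + alpha * mean  # shrink each task toward the mean

# Toy usage: N = 3 similar tasks with quadratic losses f_i(w) = 0.5 * ||w - c_i||^2,
# whose gradient at W is simply W - centers.
centers = np.array([[1.0, 0.0], [1.1, 0.1], [0.9, -0.1]])  # nearby optima (small task variance)
W = np.zeros_like(centers)
for _ in range(200):
    grads = W - centers
    W = mt_ogd_round(W, grads, eta=0.1, alpha=0.3)
```

After many rounds the shared iterates settle between their own task optimum and the cross-task average, which is the intended effect when the task variance is small.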
arXiv:2106.02393v3