In online convex optimization, adaptive algorithms, which can exploit second-order information from the loss function's (sub)gradients, have shown improvements over standard gradient methods. This paper presents a framework, Follow the Bregman Divergence Leader, that unifies various existing adaptive algorithms and from which new insights are revealed. Under the proposed framework, two simple adaptive online algorithms with improved guarantees are derived. Further, a general equation derived from …

doi:10.1109/tnnls.2016.2527053 pmid:26929066
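For context, here is a minimal sketch of how a follow-the-leader-style update can be regularized by a Bregman divergence; this is the standard textbook form, not necessarily the paper's exact formulation. Below, \psi_t is a (possibly time-varying) strongly convex regularizer, g_s is the (sub)gradient observed at round s, K is the feasible set, and x_1 is the initial point; all of these symbols are assumptions introduced for illustration.

\[
B_{\psi}(x, y) = \psi(x) - \psi(y) - \langle \nabla \psi(y),\, x - y \rangle
\]
\[
x_{t+1} = \operatorname*{arg\,min}_{x \in K} \left\{ \sum_{s=1}^{t} \langle g_s, x \rangle + B_{\psi_t}(x, x_1) \right\}
\]

For example, taking \(\psi_t(x) = \tfrac{1}{2\eta}\, x^{\top} H_t x\) with \(H_t = \operatorname{diag}\bigl(\sum_{s=1}^{t} g_s \circ g_s\bigr)^{1/2}\) recovers a diagonal AdaGrad-style method, which illustrates one standard way adaptive algorithms exploit second-order (sub)gradient information.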