A supervised learning algorithm searches over a set of functions A → B parametrised by a space P to find the best approximation to some ideal function f : A → B. It does this by taking examples (a, f(a)) ∈ A × B and updating the parameter according to some rule. We define a category where these update rules may be composed, and show that gradient descent---with respect to a fixed step size and an error function satisfying a certain property---defines a monoidal functor from a category of parametrised functions to this category of update rules.

arXiv:1711.10455v3
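As a concrete illustration of the setup in the abstract (a sketch, not code from the paper): take P = A = B = ℝ, a hypothetical parametrised function `implement(p, a) = p * a`, the quadratic error, and a fixed step size `eps`. The update rule then sends a parameter p and an example (a, f(a)) to a new parameter via one gradient-descent step.

```python
def implement(p, a):
    # Hypothetical parametrised function I : P x A -> B; here simply p * a.
    return p * a

def error(b_pred, b_true):
    # Quadratic error, one common choice of error function.
    return 0.5 * (b_pred - b_true) ** 2

def update(p, a, b, eps=0.1):
    # Gradient-descent update rule with fixed step size eps:
    # p  ->  p - eps * dE/dp, with the derivative of the quadratic
    # error computed analytically: d/dp [0.5*(p*a - b)^2] = (p*a - b)*a.
    grad = (implement(p, a) - b) * a
    return p - eps * grad

# Learn the ideal function f(a) = 2*a from examples (a, f(a)).
p = 0.0
for _ in range(100):
    for a in [1.0, 2.0, 3.0]:
        p = update(p, a, 2.0 * a)
# p has converged to the ideal parameter 2.0
```

The names `implement`, `error`, and `update` are illustrative choices, not notation from the paper; the point is only that the update step is a function of (p, a, b), which is the kind of rule the paper organises into a category.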