Convex Learning with Invariances

Choon Hui Teo, Amir Globerson, Sam T. Roweis, Alexander J. Smola
2007 Neural Information Processing Systems  
Incorporating invariances into a learning algorithm is a common problem in machine learning. We provide a convex formulation which can deal with arbitrary loss functions and arbitrary invariances. In addition, it is a drop-in replacement for most optimization algorithms for kernels, including solvers of the SVMStruct family. The advantage of our setting is that it relies on column generation instead of modifying the underlying optimization problem directly.
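The column-generation idea the abstract alludes to can be illustrated with a toy sketch: for each training point, search a finite set of invariance transformations for the variant that most violates the margin, and take a subgradient step on that worst case. This is a hypothetical simplification for illustration only (linear model, hinge loss, subgradient updates), not the paper's actual SVMStruct-based solver; all names and parameters below are invented.

```python
import numpy as np

def invariant_hinge_sgd(X, y, transforms, lr=0.1, epochs=50):
    """Toy linear classifier trained so the worst transformed version of
    each input still satisfies the margin (illustrative sketch only)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, yi in zip(X, y):
            # Column-generation flavor: among the invariance set, pick the
            # transformation that currently violates the margin the most.
            variants = [t(x) for t in transforms]
            losses = [max(0.0, 1.0 - yi * (w @ v)) for v in variants]
            k = int(np.argmax(losses))
            if losses[k] > 0.0:
                # Subgradient step on the hinge loss of the worst variant.
                w += lr * yi * variants[k]
    return w

# Usage: labels depend only on the sign of the first coordinate, so the
# problem is invariant to positive rescaling of the inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.where(X[:, 0] > 0, 1, -1)
transforms = [lambda x: x, lambda x: 0.5 * x, lambda x: 2.0 * x]
w = invariant_hinge_sgd(X, y, transforms)
```

Because each transformation preserves the sign of the margin here, enforcing the margin on the worst-case variant simply tightens the constraint rather than changing the Bayes-optimal direction.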