A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2011; you can also visit the original URL.
The file type is application/pdf.
Sparse Approximation Through Boosting for Learning Large Scale Kernel Machines
2010
IEEE Transactions on Neural Networks
Recently, sparse approximation has become a preferred method for learning large-scale kernel machines. This technique attempts to represent the solution with only a subset of the original data points, also known as basis vectors, which are usually chosen one by one with a forward selection procedure based on some selection criterion. The computational complexity of several resulting algorithms scales as O(NM²) in time and O(NM) in memory, where N is the number of training points and M is the number of basis vectors.
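The forward selection procedure described in the abstract can be illustrated with a minimal greedy sketch: at each step, try each remaining training point as a candidate basis vector and keep the one that most reduces the squared training residual of a ridge-regularized kernel regressor. This is a generic illustration, not the paper's boosting-based algorithm; the function names, the RBF kernel choice, and the residual-reduction criterion are assumptions for the example.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def forward_select_basis(X, y, num_basis, gamma=1.0, ridge=1e-6):
    """Greedily pick basis vectors one by one, each time choosing the
    training point whose inclusion most reduces the squared residual
    of a least-squares fit on the selected kernel columns."""
    n = X.shape[0]
    selected, remaining = [], list(range(n))
    for _ in range(num_basis):
        best_idx, best_err = None, np.inf
        for j in remaining:
            cand = selected + [j]
            K = rbf_kernel(X, X[cand], gamma)  # n x |cand| design matrix
            # ridge-regularized least squares on the candidate basis
            w = np.linalg.solve(K.T @ K + ridge * np.eye(len(cand)), K.T @ y)
            err = np.sum((y - K @ w) ** 2)
            if err < best_err:
                best_idx, best_err = j, err
        selected.append(best_idx)
        remaining.remove(best_idx)
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 2))
    y = np.sin(X[:, 0])
    print(forward_select_basis(X, y, num_basis=3))
```

This naive variant re-solves the least-squares problem for every candidate, so it is far more expensive than the O(NM²)-time schemes the abstract refers to; practical implementations reuse incremental matrix factorizations so that each new basis vector costs only an update rather than a full refit.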
doi:10.1109/tnn.2010.2044244
pmid:20409992
fatcat:uasdffvpqfflnb5g33ief73a5q