A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2017; you can also visit the original URL.
The prospect of carrying out data mining on cheaply compressed versions of high-dimensional massive data sets holds tremendous potential and promise. However, our understanding of the performance guarantees available from such computationally inexpensive dimensionality reduction methods for data mining and machine learning tasks currently lags behind the requirements. In this paper we take a new look at randomly projected ordinary least squares regression, and give improved bounds on its

doi:10.1109/icdmw.2013.152
dblp:conf/icdm/Kaban13
fatcat:z3caiafpn5fstcobgubbw6vsve
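As a rough illustration of the setting the abstract describes (not the paper's own method or bounds), randomly projected ordinary least squares first compresses the feature space with a random matrix, then fits OLS in the lower-dimensional space. All variable names and dimensions below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-dimensional data: n samples, d ambient features,
# compressed down to k dimensions (sizes are illustrative only)
n, d, k = 200, 1000, 50
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Random projection matrix; scaling by 1/sqrt(k) keeps expected
# squared norms of projected rows comparable to the originals
R = rng.standard_normal((d, k)) / np.sqrt(k)
X_proj = X @ R

# Ordinary least squares in the compressed k-dimensional space
w_proj, *_ = np.linalg.lstsq(X_proj, y, rcond=None)

# Predictions are made in the same projected space
y_hat = X_proj @ w_proj
```

The appeal is computational: solving least squares in k dimensions instead of d is far cheaper when k is much smaller than d, and the kind of bounds the paper studies quantify how much regression accuracy such compression gives up.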