A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2019.
File type: application/pdf
A Deep Neural Network Architecture Using Dimensionality Reduction with Sparse Matrices
[chapter]
2016
Lecture Notes in Computer Science
Matsumoto, W.; Hagiwara, M.; Boufounos, P.T.; Fukushima, K.; Mariyama, T.; Xiongxin, Z.

Abstract: We present a new deep neural network architecture, motivated by sparse random matrix theory, that uses a low-complexity embedding through a sparse matrix instead of a conventional stacked autoencoder. We regard autoencoders as an information-preserving dimensionality reduction method, similar to random projections in compressed sensing. Thus, exploiting recent theory on sparse matrices for […]
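To make the core idea concrete, here is a minimal sketch of replacing a learned autoencoder front end with a fixed sparse random projection, in the spirit of the abstract. It assumes an Achlioptas-style sparse sign matrix; the paper's actual construction, sparsity level, and layer sizes are not reproduced here, and the function name and dimensions below are illustrative only.

```python
import numpy as np

def sparse_random_matrix(d_in, d_out, density=1/3, seed=0):
    """Sparse random projection matrix (Achlioptas-style assumption):
    each entry is 0 with probability 1 - density, otherwise
    +/- 1/sqrt(density * d_out), which preserves squared norms
    in expectation."""
    rng = np.random.default_rng(seed)
    mask = rng.random((d_in, d_out)) < density   # sparsity pattern
    signs = rng.choice([-1.0, 1.0], size=(d_in, d_out))
    scale = 1.0 / np.sqrt(density * d_out)
    return mask * signs * scale

# Hypothetical usage: a fixed sparse embedding as the network's
# dimensionality-reduction stage instead of a stacked autoencoder.
X = np.random.randn(100, 784)        # e.g. 100 flattened 28x28 inputs
R = sparse_random_matrix(784, 128)   # illustrative sizes, not the paper's
Z = X @ R                            # low-complexity embedding
print(Z.shape)                       # (100, 128)
```

Because R is fixed and mostly zeros, the embedding needs no training and costs far fewer multiplications than a dense learned layer, which is the low-complexity motivation the abstract describes.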
doi:10.1007/978-3-319-46681-1_48
fatcat:e3e7fkwkrvgadliavosmsazmka