File type: application/pdf
Scalable stacking and learning for building deep architectures
2012
2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Deep Neural Networks (DNNs) have shown remarkable success in pattern recognition tasks. However, parallelizing DNN training across computers has been difficult. We present the Deep Stacking Network (DSN), which overcomes the problem of parallelizing learning algorithms for deep architectures. The DSN provides a method of stacking simple processing modules in building deep architectures, with a convex learning problem in each module. Additional fine tuning further improves the DSN, while …
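To make the abstract's key idea concrete, below is a minimal sketch of a stacked module scheme in the style the DSN literature describes: each module maps its input through a sigmoid hidden layer and fits only the hidden-to-output weights by ridge regression, which is the convex (closed-form) learning problem per module that the abstract refers to. The function names, hyperparameters, random initialization of the lower-layer weights, and the exact stacking rule (raw input concatenated with the previous module's predictions) are illustrative assumptions, not the authors' precise recipe from the paper.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_dsn_module(X, T, n_hidden=100, ridge=1e-3, rng=None):
    # X: (n_samples, n_features) module input; T: (n_samples, n_targets) targets.
    rng = np.random.default_rng(rng)
    # Lower-layer weights W are randomly initialized and held fixed here
    # (an assumption for brevity; the paper also discusses tuning them).
    W = 0.1 * rng.standard_normal((X.shape[1], n_hidden))
    H = sigmoid(X @ W)  # hidden representation
    # Convex subproblem: regularized least squares for the upper-layer
    # weights U given H, solved in closed form.
    U = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ T)
    return W, U

def dsn_module_predict(X, W, U):
    return sigmoid(X @ W) @ U

def train_dsn(X, T, n_modules=3, **kwargs):
    # Stacking: each new module sees the raw input concatenated with the
    # previous module's predictions, so modules are trained one at a time.
    modules, feats = [], X
    for _ in range(n_modules):
        W, U = train_dsn_module(feats, T, **kwargs)
        modules.append((W, U))
        feats = np.hstack([X, dsn_module_predict(feats, W, U)])
    return modules

Because each module's only learned parameters (in this sketch) come from a single linear system, the per-module fit is embarrassingly easy to distribute across machines, which is the parallelization property the abstract highlights.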
doi:10.1109/icassp.2012.6288333
dblp:conf/icassp/DengYP12
fatcat:fpyjcflktjf4pb6e4tg22wqrv4