Techniques for Learning Binary Stochastic Feedforward Neural Networks [article]

Tapani Raiko, Mathias Berglund, Guillaume Alain, Laurent Dinh
2015 arXiv pre-print
Stochastic binary hidden units in a multi-layer perceptron (MLP) network offer at least three potential benefits over deterministic MLP networks. (1) They allow learning one-to-many mappings. (2) They can be used in structured prediction problems, where modeling the internal structure of the output is important. (3) Stochasticity has been shown to be an excellent regularizer, potentially improving generalization performance. However, training stochastic networks is considerably more difficult. We study training using M samples of hidden activations per input. We show that the case M=1 leads to fundamentally different behavior, where the network tries to avoid stochasticity. We propose two new estimators for the training gradient, as well as benchmark tests for comparing training algorithms. Our experiments confirm that training stochastic networks is difficult and show that the proposed two estimators perform favorably among all five known estimators.
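To make the setting concrete, the following is a minimal sketch (not the authors' implementation) of a forward pass through a stochastic binary hidden layer, drawing M Bernoulli samples of the hidden activations per input and averaging the resulting outputs. The layer sizes, sigmoid parameterization, and numpy-based code are illustrative assumptions; the gradient estimators the paper compares are not shown here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stochastic_binary_layer(x, W, b, M, rng):
    """Draw M binary samples of the hidden activations per input.

    x: (batch, d_in); W: (d_in, d_hid); b: (d_hid,)
    Returns an (M, batch, d_hid) array with entries in {0, 1}.
    """
    p = sigmoid(x @ W + b)  # Bernoulli means for each hidden unit
    return (rng.random((M,) + p.shape) < p).astype(np.float64)

# Toy two-layer network: stochastic binary hidden layer, then a
# deterministic output layer averaged over the M hidden samples.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 5))            # batch of 4 inputs, 5 features
W1 = rng.standard_normal((5, 8)) * 0.1     # illustrative weight scales
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 3)) * 0.1
b2 = np.zeros(3)

h = stochastic_binary_layer(x, W1, b1, M=20, rng=rng)  # (20, 4, 8)
y = sigmoid(h @ W2 + b2).mean(axis=0)                  # (4, 3) averaged output
```

With M=1 the network sees a single, high-variance sample per input, which is the regime the abstract notes leads to the network avoiding stochasticity; increasing M reduces the variance of the averaged output.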
arXiv:1406.2989v3