Training Data Expansion and Boosting of Convolutional Neural Networks for Reducing the MNIST Dataset Error Rate
Naukovì Vìstì Nacìonalʹnogo Tehnìčnogo Unìversitetu Ukraïni Kiïvsʹkij Polìtehnìčnij Institut
Background. Because the preceding approaches to improving the error rate on the MNIST image dataset lack a clear structure that would allow them to be reproduced and strengthened, a formalization of the performance improvement is considered.

Objective. The goal is to strictly formalize a strategy for reducing the MNIST dataset error rate.

Methods. An algorithm for achieving better performance by expanding the training data and boosting with ensembles is suggested. The algorithm uses the designed concept of training data expansion. Coordination of the concept and the algorithm defines a strategy of error rate reduction.

Results. In relative comparison, the performance of a single convolutional neural network on the MNIST dataset has been improved by almost 30 %. With boosting, the performance is a 0.21 % error rate, meaning that only 21 handwritten digits out of 10,000 are not recognized.

Conclusions. The training data expansion is crucial for reducing the MNIST dataset error rate; boosting is ineffective without it. Application of the stated approach has an impressive impact on reducing the MNIST dataset error rate, using only 5 or 6 convolutional neural networks against the 35 in the benchmark work.

Keywords: MNIST; convolutional neural network; error rate; training data expansion; boosting.
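The two ingredients of the method, training data expansion and combining an ensemble of networks, can be illustrated with a minimal sketch. This is not the paper's exact algorithm: here the expansion is assumed to be one-pixel translations (a common MNIST augmentation), and the ensemble is combined by majority voting over per-network class predictions; the function names are hypothetical.

```python
import numpy as np

def expand_training_set(images, labels):
    """Expand a batch of images with one-pixel translations.

    Hedged sketch of training-data expansion: each image is shifted
    up, down, left, and right by one pixel, turning N samples into 5N.
    `images` has shape (N, H, W); `labels` has shape (N,).
    """
    expanded = [images]
    for axis, shift in [(1, 1), (1, -1), (2, 1), (2, -1)]:
        expanded.append(np.roll(images, shift, axis=axis))
    out_images = np.concatenate(expanded, axis=0)
    out_labels = np.tile(labels, len(expanded))
    return out_images, out_labels

def ensemble_vote(per_network_predictions):
    """Majority vote over the class predictions of several networks.

    `per_network_predictions` is a list of integer arrays, one per
    network, each of shape (n_samples,). Returns the winning class
    per sample (ties broken toward the lower class index).
    """
    preds = np.stack(per_network_predictions)        # (n_nets, n_samples)
    n_classes = int(preds.max()) + 1
    # Count votes per class for each sample, then pick the mode.
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return votes.argmax(axis=0)
```

In this sketch the expanded set would be fed to each convolutional network during training, and `ensemble_vote` stands in for the boosting combiner applied to the 5 or 6 trained networks at test time.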