Selective Fine-Tuning on a Classifier Ensemble: Realizing Adaptive Neural Networks with a Diversified Multi-Exit Architecture

Hirose Kazutoshi, Shinya Takamaeda-Yamazaki, Jaehoon Yu, Masato Motomura
2020 IEEE Access  
Adaptive neural networks that trade off computing cost against inference performance can be a crucial solution for edge artificial intelligence (AI) computing, where resource and energy consumption are significantly constrained. Edge AI requires a fine-tuning technique that achieves target accuracy with less computation for models pre-trained on the cloud. However, a multi-exit network, which realizes adaptive inference costs, incurs significant training costs because it has many more classifiers that need to be fine-tuned. In this study, we propose a novel fine-tuning method for an ensemble of classifiers that efficiently retrains the multi-exit network. The proposed method exploits the individuality of the intermediate classifiers by training each on distinctly preprocessed data and assembling their outputs. The evaluation results show that, depending on the assumed edge environment, the proposed method achieved 0.2%-5.8% and 0.2%-4.6% higher accuracy with only 77%-93% and 73%-84% of the training computation of fine-tuning all classifiers, on the premodified CIFAR-100 and ImageNet, respectively.

INDEX TERMS: Ensemble, fine-tuning, neural networks.
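The mechanism the abstract describes, intermediate classifiers whose outputs are ensembled so that inference can stop early once the ensemble is confident, can be sketched roughly as follows. This is a minimal illustrative assumption, not the paper's implementation: the stage count, dimensions, random weights, and the confidence threshold are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-exit network: 3 backbone stages, each followed by an
# intermediate classifier ("exit"). All weights are random placeholders.
D, H, C = 16, 32, 10  # input dim, hidden dim, number of classes
stages = [rng.standard_normal((D, H)) * 0.1] + \
         [rng.standard_normal((H, H)) * 0.1 for _ in range(2)]
exits = [rng.standard_normal((H, C)) * 0.1 for _ in range(3)]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def adaptive_infer(x, threshold=0.5):
    """Run the stages in order; after each exit, ensemble (average) the
    class probabilities of all exits seen so far and stop as soon as the
    ensemble's top-class confidence clears `threshold`."""
    probs = []
    h = x
    for W, Wc in zip(stages, exits):
        h = np.maximum(h @ W, 0.0)         # stage forward pass (ReLU)
        probs.append(softmax(h @ Wc))      # intermediate classifier output
        ens = np.mean(probs, axis=0)       # ensemble of exits so far
        if ens.max() >= threshold:         # confident enough: exit early
            return int(ens.argmax()), len(probs)
    return int(ens.argmax()), len(probs)   # used every exit

pred, exits_used = adaptive_infer(rng.standard_normal(D), threshold=0.9)
```

Lowering the threshold spends fewer exits per input (cheaper, possibly less accurate); raising it uses more of the network, which is the cost/accuracy trade-off the paper targets for constrained edge devices.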
doi:10.1109/access.2020.3047799