A Practical Approach to Sizing Neural Networks

Gerald Friedland, Alfredo Metere, Mario Krell
2018, arXiv preprint
Memorization is worst-case generalization. Based on MacKay's information-theoretic model of supervised machine learning, this article discusses how to practically estimate the maximum size of a neural network given a training dataset. First, we present four easily applicable rules for analytically determining the capacity of neural network architectures. This allows the efficiency of different network architectures to be compared independently of any task. Second, we introduce and experimentally validate a heuristic method to estimate the capacity a neural network requires for a given dataset and labeling, which yields an estimate of the network size required for a given problem. We conclude the article with a discussion of the consequences of sizing the network incorrectly, which include both increased computational effort during training and reduced generalization capability.
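The abstract does not state the four capacity rules themselves, but the general idea of relating network size to a training set can be illustrated with a deliberately naive sketch: count the trainable parameters of a fully connected network and treat each parameter as storing at most one bit. The layer sizes and the one-bit-per-parameter assumption below are illustrative choices, not claims from the paper.

```python
# Illustrative sketch only: a crude upper bound on how much a dense
# network could memorize, counting one bit per trainable parameter.
# This is an assumption for illustration, not the authors' method.

def dense_parameter_count(layer_sizes):
    """Number of weights and biases in a fully connected network."""
    return sum((fan_in + 1) * fan_out  # +1 accounts for the bias term
               for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]))

def naive_capacity_bits(layer_sizes):
    """Crude capacity proxy: one bit per trainable parameter."""
    return dense_parameter_count(layer_sizes)

# Example: a 784-100-10 network (an MNIST-sized MLP)
print(dense_parameter_count([784, 100, 10]))  # 785*100 + 101*10 = 79510
```

Under this toy model, comparing the parameter count to the information content of the labels (e.g., number of training samples times bits per label) gives a first-cut feasibility check of the kind the paper develops rigorously.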
arXiv:1810.02328v1