Leveraging Prior Concept Learning Improves Generalization From Few Examples in Computational Models of Human Object Recognition

Joshua S. Rule, Maximilian Riesenhuber
2021 Frontiers in Computational Neuroscience  
Humans quickly and accurately learn new visual concepts from sparse data, sometimes from just a single example. The impressive performance of artificial neural networks that hierarchically pool afferents across scales and positions suggests that the hierarchical organization of the human visual system is critical to its accuracy. These approaches, however, require orders of magnitude more examples than human learners. We used a benchmark deep learning model to show that the hierarchy can also be leveraged to vastly improve the speed of learning. Specifically, we show how previously learned but broadly tuned conceptual representations can be used to learn visual concepts from as few as two positive examples; reusing visual representations from earlier in the visual hierarchy, as in prior approaches, requires significantly more examples to achieve comparable performance. These results suggest techniques for learning even more efficiently and provide a biologically plausible way to learn new visual concepts from few examples.
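The core idea, learning a new concept from very few examples by building on well-separated, previously learned representations rather than raw early-layer features, can be illustrated with a toy sketch. This is not the authors' model: the nearest-prototype classifier and the Gaussian feature simulation below are illustrative assumptions, with "concept-like" features simply modeled as better-separated clusters than "early-layer-like" features.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_prototype(train_x, train_y, test_x):
    """Few-shot classification: assign each test point to the class
    whose mean training feature (its 'prototype') is nearest."""
    classes = np.unique(train_y)
    protos = np.stack([train_x[train_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_x[:, None, :] - protos[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]

def make_features(sep, n_per_class, dim=16):
    """Two-class Gaussian features whose class means differ by `sep` per
    dimension. Larger `sep` stands in for more concept-like (broadly tuned,
    better-separated) representations; smaller `sep` for early-layer ones."""
    x0 = rng.normal(0.0, 1.0, (n_per_class, dim))
    x1 = rng.normal(sep, 1.0, (n_per_class, dim))
    x = np.vstack([x0, x1])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return x, y

# Two positive examples per class, echoing the paper's few-shot setting.
train_c, ytr_c = make_features(sep=3.0, n_per_class=2)    # concept-like
test_c, yte_c = make_features(sep=3.0, n_per_class=100)
train_e, ytr_e = make_features(sep=0.3, n_per_class=2)    # early-layer-like
test_e, yte_e = make_features(sep=0.3, n_per_class=100)

acc_concept = (nearest_prototype(train_c, ytr_c, test_c) == yte_c).mean()
acc_early = (nearest_prototype(train_e, ytr_e, test_e) == yte_e).mean()
```

Under these assumptions, two examples per class suffice when the feature space already separates the classes well (`acc_concept` is near ceiling), while the same two examples in a poorly separated space yield much lower accuracy (`acc_early`), mirroring the paper's qualitative result.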
doi:10.3389/fncom.2020.586671 pmid:33510629 pmcid:PMC7835122