Putting more neural in artificial neural networks
In computational neuroscience, we work at the intersection of neuroscience and artificial intelligence (AI). In this thesis, we work toward understanding how AI algorithms can serve as models of the brain, and how the brain can inform the design of AI algorithms. Throughout this work we use artificial neural networks (ANNs), AI algorithms that were directly inspired by the brain, to address two problems: working memory and computer vision.

We first consider working memory, which requires information about external stimuli to be stored and represented in the brain for tens of seconds even after the stimuli go away. Prior work in this field relies on learning rules or network organization that are not biologically plausible. To identify mechanisms through which biological networks can learn memory function, we derive biologically plausible plasticity rules that enable information storage. We then demonstrate these networks' robustness and their ability to continue learning.

We next consider computer vision, the field that aims to get machines to interpret visual information as people do. Specifically, we focus on object recognition: getting machines to identify the objects within an image. State-of-the-art object recognition algorithms still interpret images in ways that are not human-like, leading to unexpected and potentially catastrophic errors. We outline a new way to train object recognition algorithms using the brain as a teacher. We also propose a new metric for evaluating object recognition algorithms based on the human-likeness of their errors.

The form and content of this abstract are approved. I recommend its publication.