A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit <a rel="external noopener" href="https://arxiv.org/pdf/2108.03603v2.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
<span class="release-stage" >pre-print</span>
Visual understanding requires comprehending complex visual relations between objects within a scene. Here, we seek to characterize the computational demands of abstract visual reasoning. We do this by systematically assessing the ability of modern deep convolutional neural networks (CNNs) to learn to solve the "Synthetic Visual Reasoning Test" (SVRT) challenge, a collection of twenty-three visual reasoning problems. Our analysis reveals a novel taxonomy of visual reasoning tasks, which can be primarily explained by both the type of relations (same-different vs. spatial-relation judgments) and the number of relations used to compose the underlying rules. Prior cognitive neuroscience work suggests that attention plays a key role in humans' visual reasoning ability. To test this hypothesis, we extended the CNNs with spatial and feature-based attention mechanisms. In a second series of experiments, we evaluated the ability of these attention networks to learn to solve the SVRT challenge and found the resulting architectures to be much more efficient at solving the hardest of these visual reasoning tasks. Most importantly, the corresponding improvements on individual tasks partially explained our novel taxonomy. Overall, this work provides a granular computational account of visual reasoning and yields testable neuroscience predictions regarding the differential need for feature-based vs. spatial attention depending on the type of visual reasoning problem.<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.03603v2">arXiv:2108.03603v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/qnylgwdicvgwfm7a6umxrea2ta">fatcat:qnylgwdicvgwfm7a6umxrea2ta</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211211084557/https://arxiv.org/pdf/2108.03603v2.pdf" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.03603v2" title="arxiv.org access">arxiv.org</a>