
Learning explanations that are hard to vary [article]

Giambattista Parascandolo, Alexander Neitz, Antonio Orvieto, Luigi Gresele, Bernhard Schölkopf
2020 arXiv   pre-print
In this paper, we investigate the principle that 'good explanations are hard to vary' in the context of deep learning.  ...  To inspect this, we first formalize a notion of consistency for minima of the loss surface, which measures to what extent a minimum appears only when examples are pooled.  ...  ACKNOWLEDGMENTS We wish to thank Sebastian Gomez, Luca Biggio, Julius von Kügelgen, Paolo Penna, Ioannis Anagno, Ricards Marcinkevics, Sidak Pal Singh, Damien Teney for feedback on the manuscript, and  ... 
arXiv:2009.00329v3 fatcat:hexhmtq57zbmfnjjcqtxzt6evu
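
The consistency notion mentioned in this snippet is operationalized in the paper as a sign-agreement ("AND") mask over per-environment gradients: components whose direction is not shared across environments are zeroed out before the update. The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation; the function name, the `agreement_threshold` parameter, and its unanimity default are assumptions made for this example.

```python
import numpy as np

def masked_average_gradient(env_grads, agreement_threshold=1.0):
    """Average per-environment gradients, zeroing components whose sign
    is not shared by a sufficient fraction of environments.

    env_grads: shape (n_envs, n_params), one gradient vector per environment.
    agreement_threshold: required |mean of signs| per component
        (1.0 means all environments must agree on the direction).
    """
    env_grads = np.asarray(env_grads, dtype=float)
    mean_sign = np.sign(env_grads).mean(axis=0)      # in [-1, 1] per component
    mask = (np.abs(mean_sign) >= agreement_threshold).astype(float)
    return mask * env_grads.mean(axis=0)

# Two "environments" agree on the direction of the first component only,
# so only that component of the averaged gradient survives.
grads = [[0.8, -0.3],
         [0.5,  0.4]]
print(masked_average_gradient(grads))   # -> [0.65 0.  ]
```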

Applying Deutsch's concept of good explanations to artificial intelligence and neuroscience – an initial exploration [article]

Daniel C. Elton
2020 arXiv   pre-print
We explore what role hard-to-vary explanations play in intelligence by looking at the human brain and distinguish two learning systems in the brain.  ...  We argue that figuring out how to replicate this second system, which is capable of generating hard-to-vary explanations, is a key challenge which needs to be solved in order to realize artificial general  ...  they are hard to vary (HTV)  ...
arXiv:2012.09318v1 fatcat:xcp5uto65bbpfftfvdiv4ajog4

Easy-to-hard effects in perceptual learning depend upon the degree to which initial trials are "easy"

Matthew G. Wisniewski, Barbara A. Church, Eduardo Mercado, Milen L. Radell, Alexandria C. Zakrzewski
2019 Psychonomic Bulletin & Review  
Rather, they support incremental learning models that account for easy-to-hard effects.  ...  Results challenge assumptions that sequencing effects in learning are related to attentional spotlighting of task-relevant dimensions.  ...  Compliance with ethical standards Open practices statement A preexperiment plan, annotated postexperiment file, and the raw data are available at www.alclaboratory.com/opendata.  ...
doi:10.3758/s13423-019-01627-4 pmid:31243721 pmcid:PMC6868315 fatcat:r2vovmbdkzak7ntrgm5cwls4qq

Analysis of Task Difficulty Sequences in a Simulation-Based POE Environment [chapter]

Sadia Nawaz, Namrata Srivastava, Ji Hyun Yu, Ryan S. Baker, Gregor Kennedy, James Bailey
2020 Lecture Notes in Computer Science  
The findings suggest that if students perceive the task difficulties (TDs) as easy or hard, it may lead to poorer learning outcomes, while medium or moderate TDs may result in better learning outcomes.  ...  In terms of TD transitions, a hard task followed by another hard task may lead to poorer learning outcomes.  ...  A plausible explanation for this outcome is that students tend to engage more in tasks that are perceived as moderately difficult than in tasks that are perceived as too easy or too hard [6].  ...
doi:10.1007/978-3-030-52237-7_34 fatcat:qudsj7tyxzbo3ddldo4gjiwg6y

The easy-to-hard effect in human (Homo sapiens) and rat (Rattus norvegicus) auditory identification

Estella H. Liu, Eduardo Mercado, Barbara A. Church, Itzel Orduña
2008 Journal of Comparative Psychology  
The results are not predicted by an explanation that assumes interaction of generalized excitation and inhibition, but are consistent with a hierarchical account of perceptual learning in which the representational  ...  These findings indicate that transitioning from an easier to a more difficult task during training can facilitate, and in some cases may be essential for, auditory perceptual learning.  ...  We are also grateful to David Smith and three reviewers for helpful comments and suggestions on earlier versions of this article.  ... 
doi:10.1037/0735-7036.122.2.132 pmid:18489229 pmcid:PMC2664539 fatcat:tekffsas3bhzvhd5wp5lioi3uy

Page 290 of Exceptional Children Vol. 27, Issue 5 [page]

1961 Exceptional Children  
The explanations given by siblings to account for their brother’s or sister’s problems vary greatly, as will be noted in Table 3.  ...  The more descriptive narratives are quoted below: “She has an injury in her brain and it is hard for her to do some things.”  ...

Toward a more complete sociobiology

Sandra Scarr
1990 Contemporary Psychology  
Natural selection provided our ability “to learn” and “to decide,” because these abilities helped our ancestors deal with the varying environmental conditions that they encountered.  ...  Adaptations evolve to help organisms deal with varying conditions in their environment, not to isolate them from that environment.  ...
doi:10.1037/028561 fatcat:zeap4ouk2vb6fhbbvbqwmcuiyi

Page 410 of Contemporary Psychology Vol. 35, Issue 4 [page]

1990 Contemporary Psychology  
Natural selection provided our ability “to learn” and “to decide,” because these abilities helped our ancestors deal with the varying environmental conditions that they encountered.  ...  Adaptations evolve to help organisms deal with varying conditions in their environment, not to isolate them from that environment.  ...

Sociobiology: Environmentalist and discursive

Charles Crawford, Martin Smith, Dennis Krebs
1990 Contemporary Psychology  
Natural selection provided our ability “to learn” and “to decide,” because these abilities helped our ancestors deal with the varying environmental conditions that they encountered.  ...  Adaptations evolve to help organisms deal with varying conditions in their environment, not to isolate them from that environment.  ...
doi:10.1037/028562 fatcat:k3e7qoqywnhyblz7gabzisncta

BAGEL: A Benchmark for Assessing Graph Neural Network Explanations [article]

Mandeep Rathee, Thorben Funke, Avishek Anand, Megha Khosla
2022 arXiv   pre-print
We are interested in a specific type of machine learning model that deals with graph data, called graph neural networks.  ...  Evaluating interpretability approaches for graph neural networks (GNN) specifically is known to be challenging due to the lack of a commonly accepted benchmark.  ...  Though we tried our best to use datasets with varying graph properties and distributions, we believe this benchmark has a large scope to expand to multiple graph datasets with varying graph properties.  ...
arXiv:2206.13983v1 fatcat:332by4gaa5fd7n4su5lged5ize
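
One family of scores that explanation benchmarks of this kind typically report is a faithfulness (fidelity) check: how much the model's prediction changes when the nodes an explanation marks as important are removed. The sketch below is a generic illustration of that style of metric, not BAGEL's own metric suite; the `predict(graph, node_mask)` interface, the `top_k` cutoff, and the toy stand-in model are assumptions made for this example.

```python
import numpy as np

def fidelity(predict, graph, node_importance, target_class, top_k=1):
    """Fidelity-style score: how much the prediction for the target class
    drops when the nodes an explanation ranks as most important are removed.

    predict(graph, node_mask) -> class-probability vector, where node_mask
    marks which nodes are kept. Higher scores mean the explanation points
    at nodes the model actually relies on.
    """
    node_importance = np.asarray(node_importance, dtype=float)
    keep_all = np.ones(len(node_importance), dtype=bool)
    drop_important = keep_all.copy()
    drop_important[np.argsort(node_importance)[-top_k:]] = False

    p_full = predict(graph, keep_all)[target_class]
    p_reduced = predict(graph, drop_important)[target_class]
    return p_full - p_reduced

# Toy stand-in "GNN" that only looks at node 0: an explanation ranking
# node 0 highest therefore gets a high fidelity score.
def dummy_predict(graph, node_mask):
    return np.array([0.9, 0.1]) if node_mask[0] else np.array([0.5, 0.5])

importance = [0.9, 0.2, 0.1, 0.05]
print(fidelity(dummy_predict, graph=None, node_importance=importance,
               target_class=0, top_k=1))   # -> 0.4  (0.9 - 0.5)
```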

Evaluating the performance of the LIME and Grad-CAM explanation methods on a LEGO multi-label image classification task [article]

David Cian, Jan van Gemert, Attila Lengyel
2020 arXiv   pre-print
In this paper, we run two methods of explanation, namely LIME and Grad-CAM, on a convolutional neural network trained to label images with the LEGO bricks that are visible in them.  ...  However, we also posit that it is more useful to employ these two methods together, as the insights they yield are complementary.  ...  Finally, the author expresses his gratitude to the Delft University of Technology for the opportunity to pursue this research.  ... 
arXiv:2008.01584v1 fatcat:d7umxnmuxrdatnid4hqmvgkdla
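
Grad-CAM itself, as applied in studies like this one, reduces to a few lines: weight a convolutional layer's activations by the spatially averaged gradients of the target class score and keep the positive part. The PyTorch sketch below is a minimal, generic version of that recipe, not the paper's setup; the untrained ResNet-18, the choice of `layer4`, and the random placeholder image stand in for the LEGO classifier and data, which are not reproduced here.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, target_layer, image, class_idx):
    """Minimal Grad-CAM: weight the target layer's activations by the
    spatially averaged gradient of the chosen class score, then ReLU."""
    activations = []
    handle = target_layer.register_forward_hook(
        lambda mod, inp, out: activations.append(out))
    score = model(image)[0, class_idx]          # scalar score for the target class
    handle.remove()

    acts = activations[0]                                    # (1, C, H, W)
    grads = torch.autograd.grad(score, acts)[0]              # d(score)/d(activations)
    weights = grads.mean(dim=(2, 3), keepdim=True)           # GAP of gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))  # keep positive evidence
    cam = F.interpolate(cam, size=image.shape[-2:],
                        mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()              # (H, W) heatmap in [0, 1]

# Untrained network and a random input, just to show the mechanics; a trained
# LEGO classifier and a real image would take their place in practice.
model = models.resnet18(weights=None).eval()
image = torch.randn(1, 3, 224, 224)
heatmap = grad_cam(model, model.layer4, image, class_idx=0)
print(heatmap.shape)   # torch.Size([224, 224])
```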

Page 652 of Education Vol. 67, Issue 10 [page]

1947 Education  
Our pupils find it rather hard to learn how to study.  ...  In fact, it is very hard to teach our pupils NOT to copy homework. So prevalent is this bad habit in the high schools that many teachers do not ask for written homework.  ... 

On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy [article]

Vignesh Srinivasan, Nils Strodthoff, Jackie Ma, Alexander Binder, Klaus-Robert Müller, Wojciech Samek
2021 arXiv   pre-print
There is an increasing number of medical use-cases where classification algorithms based on deep neural networks reach performance levels that are competitive with human medical experts.  ...  To this end, we investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.  ...  Other common state-of-the-art methods in machine learning are supervised-learning methods, i.e. models that are trained with labeled data, as opposed to other methods that require only some or even no labeled  ...
arXiv:2106.13497v1 fatcat:y6hx6mwvgjc2jc6aln5y67adru

Honey bees selectively avoid difficult choices

C. J. Perry, A. B. Barron
2013 Proceedings of the National Academy of Sciences of the United States of America  
Because two alternative mechanisms have been proposed to explain the same behavioral data, Morgan's Canon (16) cautions that when presented with two alternative explanations, we are obliged to choose the  ...  To aid in this estimation, humans are able to monitor their degree of uncertainty and use that knowledge to improve their decisions (1, 2).  ...
doi:10.1073/pnas.1314571110 pmid:24191024 pmcid:PMC3839751 fatcat:x7ii4rkbdzawtios27yii2yula

Tell me why! Explanations support learning relational and causal structure [article]

Andrew K. Lampinen, Nicholas A. Roy, Ishita Dasgupta, Stephanie C. Y. Chan, Allison C. Tam, James L. McClelland, Chen Yan, Adam Santoro, Neil C. Rabinowitz, Jane X. Wang, Felix Hill
2022 arXiv   pre-print
Language can shape the way that agents generalize out-of-distribution from ambiguous, causally-confounded training, and explanations even allow agents to learn to perform experimental interventions  ...  We then show that explanations can help agents to infer not only relational but also causal structure.  ...  We also do not want to imply that explanations are necessary for learning.  ...
arXiv:2112.03753v3 fatcat:ppd5d7udqjbnlns5gdwlu7fley