41,656 Hits in 8.2 sec

Deep Learners Benefit More from Out-of-Distribution Examples

Yoshua Bengio, Frédéric Bastien, Arnaud Bergeron, Nicolas Boulanger-Lewandowski, Thomas M. Breuel, Youssouf Chherawala, Moustapha Cissé, Myriam Côté, Dumitru Erhan, Jeremy Eustache, Xavier Glorot, Xavier Muller (+5 others)
2011 Journal of Machine Learning Research  
The hypothesis evaluated here is that intermediate levels of representation, because they can be shared across tasks and examples from different but related distributions, can yield even more benefits.  ...  in order to obtain out-of-distribution examples.  ...  The deep learner (SDA) benefits more from out-of-distribution examples, compared to the shallow MLP.  ... 
dblp:journals/jmlr/BengioBBBBCCCEEGMLPRSS11 fatcat:3pzqzwsx5zeixhelk53tyaabue

Deep Self-Taught Learning for Handwritten Character Recognition [article]

Frédéric Bastien and Yoshua Bengio and Arnaud Bergeron and Nicolas Boulanger-Lewandowski and Thomas Breuel and Youssouf Chherawala and Moustapha Cisse and Myriam Côté and Dumitru Erhan and Jeremy Eustache and Xavier Glorot and Xavier Muller and Sylvain Pannetier Lebeuf and Razvan Pascanu and Salah Rifai and Francois Savard and Guillaume Sicard
2010 arXiv   pre-print
We show that deep learners benefit more from out-of-distribution examples than a corresponding shallow learner, at least in the area of handwritten character recognition.  ...  Self-taught learning (exploiting unlabeled examples or examples from other distributions) has already been applied to deep learners, but mostly to show the advantage of unlabeled examples.  ...  deep architectures benefit more from such out-of-distribution examples.  ... 
arXiv:1009.3589v1 fatcat:i7yz2qkigbcbhekj2f2345y3om

Improving learning in MOOCs with Cognitive Science

Joseph Jay Williams
2013 International Conference on Artificial Intelligence in Education  
They offer a chance to build a set of educational resources from the ground up, at a time when scientists know far more about learning and teaching than at the advent of the current education system.  ...  This paper presents practical implications of research from cognitive science, showing empirically supported and actionable strategies any designer or instructor can use to improve students' learning.  ...  There is substantial evidence that learners' understanding is improved by prompts to explain out loud the meaning of what they are learning or say out loud what they are thinking [9] -although studies  ... 
dblp:conf/aied/Williams13a fatcat:kb6yx6xpijdexpfryujkynkye4

Deep Learning: The Impact on Future eLearning

Anandhavalli Muniasamy, Areej Alasiry
2020 International Journal of Emerging Technologies in Learning (iJET)  
Deep learning using artificial intelligence continues to grow in popularity and is having an impact on many areas of eLearning.  ...  In addition, deep learning models for developing the contents of the eLearning platform, deep learning frameworks that enable deep learning systems in eLearning and its development, benefits & future  ...  It can make peer-to-peer interactions more productive. For example, match mentors to online learners who can benefit from their specific skills or past experiences.  ... 
doi:10.3991/ijet.v15i01.11435 fatcat:uj63cq5l7fchtbvipi7xyr4mfm

Deep Meta-Learning: Learning to Learn in the Concept Space [article]

Fengwei Zhou, Bin Wu, Zhenguo Li
2018 arXiv   pre-print
In this work, we argue that this is due to the lack of a good representation for meta-learning, and propose deep meta-learning to integrate the representation power of deep learning into meta-learning.  ...  For example, on 5-way-1-shot image recognition on CIFAR-100 and CUB-200, it improves Matching Nets from 50.53% and 56.53% to 58.18% and 63.47%, improves MAML from 49.28% and 50.45% to 56.65% and 64.63%  ...  This result shows that the meta-learner does benefit from the concept generator enhanced by the external data, but placing too much emphasis on the external data can harm the performance of meta-learner  ... 
arXiv:1802.03596v1 fatcat:agkjw3avzbfqrhzdjist7zcpoi

ZPD Teaching Strategies for Deep Reinforcement Learning from Demonstrations [article]

Daniel Seita, David Chan, Roshan Rao, Chen Tang, Mandi Zhao, John Canny
2019 arXiv   pre-print
Our results align with intuition from human learners: it is not always the best policy to draw demonstrations from the best performing demonstrator (in terms of reward).  ...  Prior work, such as the popular Deep Q-learning from Demonstrations (DQfD) algorithm has generally focused on single demonstrators.  ...  A prominent example of this for discrete control is Deep Q-learning from Demonstrations (DQfD) [11] , the most relevant prior work, which seeded a learner agent with a small batch of human demonstrator  ... 
arXiv:1910.12154v1 fatcat:7anddutaa5h2daie6yadypkzmi

Uniform Priors for Data-Efficient Transfer [article]

Samarth Sinha, Karsten Roth, Anirudh Goyal, Marzyeh Ghassemi, Hugo Larochelle, Animesh Garg
2020 arXiv   pre-print
Metric Learning, Zero-Shot Domain Adaptation, as well as Out-of-Distribution classification.  ...  Across all experiments, we show that uniformity regularization consistently offers benefits over baseline methods and is able to achieve state-of-the-art performance in Deep Metric Learning and Meta-Learning  ...  in Deep Metric Learning, Zero-Shot Domain Adaptation, Out-of-Distribution Detection and Meta-Learning.  ... 
arXiv:2006.16524v2 fatcat:w6e74imnbbhe7jb3ghuf7fjymm

Ensembles of Deep LSTM Learners for Activity Recognition using Wearables [article]

Yu Guan, Thomas Ploetz
2017 arXiv   pre-print
We demonstrate, both formally and empirically, that Ensembles of deep LSTM learners outperform the individual LSTM networks.  ...  We have developed modified training procedures for LSTM networks and combine sets of diverse LSTM learners into classifier collectives.  ...  comments that helped improve the manuscript of this paper.  ... 
arXiv:1703.09370v1 fatcat:6k24lhdgnfeljbq6l6bu24rumi

"Inside the Head and Out in the World". An Approach to Deep Teaching and Learning

Moises Esteban-Guitart, James Gee
2020 REMIE : Multidisciplinary Journal of Educational Research  
from the new, modern affinity spaces that we believe now make up the present-day geography of deep learning.  ...  We shall outline what, in our view, are the key elements of deep learning. We will also describe a theoretical approach called the "Deep Teaching and Learning Model" (DTLM).  ...  They do not represent deep teaching and deep learning of the type going on out of school.  ... 
doi:10.17583/remie.2020.4868 fatcat:43sbb3l4xrdujkazskxhrrpaq4

Applying Cognitive Science to Online Learning

Joseph Jay Williams
2013 Social Science Research Network  
Acknowledgements: This is an expansion and revision of a proceedings paper presented at the MOOCShop Workshop at the International Conference on Artificial Intelligence in Education in 2013.  ...  Just say it out loud. (@ 2:15) For example, a typical practice sequence might be 12 problems of type A, then 12 of type B, and 12 of type C.  ...  There is substantial evidence that learners' understanding is improved by prompts to explain out loud the meaning of what they are learning or say out loud what they are thinking [16].  ... 
doi:10.2139/ssrn.2535549 fatcat:7swzo5zgqzdollu3wmt6oexcxy

Distributed Deep Forest and its Application to Automatic Detection of Cash-out Fraud [article]

Ya-Lin Zhang, Jun Zhou, Wenhao Zheng, Ji Feng, Longfei Li, Ziqi Liu, Ming Li, Zhiqiang Zhang, Chaochao Chen, Xiaolong Li, Zhi-Hua Zhou, Yuan Qi
2020 arXiv   pre-print
We tested the deep forest model on an extra-large scale task, i.e., automatic detection of cash-out fraud, with more than 100 million training samples.  ...  In this work, based on our parameter server system, we developed the distributed version of deep forest.  ...  This research was partially supported by the National Key R&D Program of China (2018YFB1004300), the National Science Foundation of China (61751306), and the Collaborative Innovation Center of Novel Software  ... 
arXiv:1805.04234v3 fatcat:iwrny7pogvcezfz6ukvjcg35tu

Curriculum learning

Yoshua Bengio, Jérôme Louradour, Ronan Collobert, Jason Weston
2009 Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09  
Humans and animals learn much better when the examples are not randomly presented but organized in a meaningful order which illustrates gradually more concepts, and gradually more complex ones.  ...  In the context of recent research studying the difficulty of training in the presence of non-convex training criteria (for deep deterministic and stochastic neural networks), we explore curriculum learning  ...  benefit from a similar training strategy?  ... 
doi:10.1145/1553374.1553380 dblp:conf/icml/BengioLCW09 fatcat:z6gzl575off4lhaf4xualxkte4
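
The curriculum idea described in this entry — presenting examples in a meaningful order, gradually admitting harder ones — can be sketched as a simple training-order policy. This is an illustrative sketch, not the paper's implementation: the staged schedule, the `difficulty` scoring function, and all names here are assumptions for the example.

```python
import random

def curriculum_batches(examples, difficulty, n_stages=3, batch_size=4):
    """Yield batches drawn from a gradually expanding pool of examples,
    easiest first, until the full dataset is in play."""
    ordered = sorted(examples, key=difficulty)
    for stage in range(1, n_stages + 1):
        # Each stage admits a larger, harder slice of the data.
        pool = ordered[: len(ordered) * stage // n_stages]
        random.shuffle(pool)  # shuffle within the current difficulty pool
        for i in range(0, len(pool), batch_size):
            yield pool[i : i + batch_size]

# Toy usage: "difficulty" is just the magnitude of the number.
data = [5, -9, 1, 7, -2, 8, -3, 6]
batches = list(curriculum_batches(data, difficulty=abs))
```

Early batches contain only the easiest examples; later stages re-include them alongside harder ones, mirroring the gradual-complexity ordering the abstract describes.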

Evidence for Cognitive Science Principles that Impact Learning in Mathematics [chapter]

Julie L. Booth, Kelly M. McGinn, Christina Barbieri, Kreshnik N. Begolli, Briana Chang, Dana Miller-Cotto, Laura K. Young, Jodi L. Davenport
2017 Acquisition of Complex Arithmetic Skills and Higher-Order Mathematics Concepts  
The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education.  ...  ACKNOWLEDGMENTS Funding for the writing of this chapter was provided by the Institute of Education Sciences and U.S.  ...  WORKED EXAMPLE PRINCIPLE The worked example principle suggests that having learners study examples of worked-out solutions to problems is more effective for learning than having them solve all of the problems  ... 
doi:10.1016/b978-0-12-805086-6.00013-8 fatcat:7hgowz7ig5g2zf22jstrlc3ydm

A Brief Summary of Interactions Between Meta-Learning and Self-Supervised Learning [article]

Huimin Peng
2021 arXiv   pre-print
Self-supervised learning guided by meta-learner and general meta-learning algorithms under self-supervision are both examples of possible combinations.  ...  Meta-learning aims to adapt trained deep models to solve diverse tasks and to develop general AI algorithms.  ...  Acknowledgment I appreciate valuable comments from Basile Starynkevitch <basile@starynkevitch.net>.  ... 
arXiv:2103.00845v2 fatcat:soq6tfl56vgshebtnot57e4qwe

Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm [article]

Chelsea Finn, Sergey Levine
2018 arXiv   pre-print
recurrent models to the more recent approaches that embed gradient descent into the meta-learner.  ...  Learning to learn is a powerful paradigm for enabling models to learn from data more effectively and efficiently.  ...  ACKNOWLEDGMENTS We thank Sharad Vikram for detailed feedback on the proof, as well as Justin Fu, Ashvin Nair, and Kelvin Xu for feedback on an early draft of this paper.  ... 
arXiv:1710.11622v3 fatcat:nly3w6p4tjccle4lt6zruqpsae
Showing results 1 — 15 out of 41,656 results