4,350 Hits in 7.9 sec

Curriculum-Based Self-Training Makes Better Few-Shot Learners for Data-to-Text Generation [article]

Pei Ke, Haozhe Ji, Zhenyu Yang, Yi Huang, Junlan Feng, Xiaoyan Zhu, Minlie Huang
2022 arXiv   pre-print
Thus, we introduce self-training as a better few-shot learner than task-adaptive pre-training, which explicitly captures this relationship via pseudo-labeled data generated by the pre-trained model.  ...  To alleviate the side effects of low-quality pseudo-labeled data during self-training, we propose a novel method called Curriculum-Based Self-Training (CBST) to effectively leverage unlabeled data in a  ...  Acknowledgments This work was supported by the National Science Foundation for Distinguished Young Scholars (No. 62125604) and the NSFC projects (key project No. 61936010 and regular project  ... 
arXiv:2206.02712v1 fatcat:d3heo6jdirfp5l6tvb7kce257a
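
The snippet sketches the recipe this paper builds on: pseudo-label unlabeled inputs with the pre-trained model, then fine-tune on a mix of gold and pseudo-labeled pairs, admitting noisier pseudo-labels gradually. Below is a minimal Python sketch of such a curriculum-based self-training loop; the Seq2SeqModel interface, the confidence-based difficulty proxy, and the linear pacing schedule are illustrative assumptions, not the paper's exact method.

    import random

    class Seq2SeqModel:
        """Stand-in for a pre-trained data-to-text model (hypothetical interface)."""
        def generate(self, x):
            # Pseudo-label: produce an output text for an unlabeled data record.
            return f"text for {x}"
        def confidence(self, x, y):
            # Proxy for pseudo-label quality, e.g. mean token log-probability.
            return random.random()
        def fine_tune(self, pairs):
            pass  # gradient updates on (input, text) pairs would happen here

    def curriculum_self_training(model, labeled, unlabeled, rounds=3):
        """Self-training with a curriculum: high-confidence pseudo-pairs enter first."""
        for r in range(1, rounds + 1):
            pseudo = [(x, model.generate(x)) for x in unlabeled]
            pseudo.sort(key=lambda p: model.confidence(*p), reverse=True)
            keep = pseudo[: len(pseudo) * r // rounds]  # linear pacing schedule
            model.fine_tune(labeled + keep)
        return model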

Self-Paced Adversarial Training for Multimodal Few-shot Learning [article]

Frederik Pahde, Oleksiy Ostapenko, Patrick Jähnichen, Tassilo Klein, Moin Nabi
2018 arXiv   pre-print
Therefore, we design a few-shot learning task that is multimodal during training (i.e., image and text) and single-modal at test time (i.e., image).  ...  In this regard, we propose a self-paced, class-discriminative generative adversarial network incorporating multimodality in the context of few-shot learning.  ...  However, the challenge is to pick adequate samples out of the pool of generated samples that allow for building a better classifier within the few-shot scenario.  ... 
arXiv:1811.09192v1 fatcat:6msho45ygngchbwgbv4bb2zn64
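
The closing fragment poses the key design question: which GAN-generated samples should be added to the few-shot training set. One plausible self-paced selection rule, sketched below under stated assumptions (a classifier-confidence criterion with a decaying threshold, not necessarily the authors' exact procedure), admits confidently classified generated samples first and relaxes the bar as the classifier improves.

    def select_generated(prob, generated, target_class, start=0.9, step=0.05, rounds=3):
        """Self-paced pick: keep generated samples the current classifier places
        in the target class with high confidence, lowering the bar each round."""
        selected, threshold = [], start
        for _ in range(rounds):
            for s in generated:
                if s not in selected and prob(s, target_class) >= threshold:
                    selected.append(s)
            threshold -= step
        return selected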

Meta-Transfer Learning through Hard Tasks [article]

Qianru Sun, Yaoyao Liu, Zhaozheng Chen, Tat-Seng Chua, Bernt Schiele
2019 arXiv   pre-print
The key idea is to leverage a large number of similar few-shot tasks in order to learn how to adapt a base-learner to a new task for which only a few labeled samples are available.  ...  meta-training, and (2) freezing their convolutional layers as the feature extractor of base-learners.  ...  Basically, the nature of few-shot learning with very scarce training data makes it difficult to train powerful machine learning models for new concepts.  ... 
arXiv:1910.03648v1 fatcat:l2z7dowb5bclzgr2a3ofk3z2za
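
The two fragments describe the standard episodic few-shot setup: a large pool of training tasks, a frozen pre-trained feature extractor, and a lightweight base-learner adapted per task from a handful of labeled samples. The sketch below uses a nearest-centroid base-learner for concreteness; MTL itself learns scaling and shifting parameters on the frozen features and additionally re-samples "hard" tasks from classes the base-learner misclassifies, which this sketch omits.

    import numpy as np

    def episode_predict(embed, support_x, support_y, query_x):
        """One few-shot episode: embed with the frozen extractor, fit a
        nearest-centroid base-learner on the support set, label the queries."""
        classes = sorted(set(support_y))
        centroids = np.stack([
            np.mean([embed(x) for x, y in zip(support_x, support_y) if y == c], axis=0)
            for c in classes])
        preds = []
        for x in query_x:
            dists = np.linalg.norm(centroids - embed(x), axis=1)
            preds.append(classes[int(np.argmin(dists))])
        return preds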

Generative Conversational Networks [article]

Alexandros Papangelis and Karthik Gopalakrishnan and Aishwarya Padmakumar and Seokhwan Kim and Gokhan Tur and Dilek Hakkani-Tur
2021 arXiv   pre-print
...  training data (given some seed data) and then train themselves from that data to perform a given task.  ...  We also conduct an analysis of the novelty of the generated data and provide generated examples for intent detection, slot tagging, and non-goal-oriented conversations.  ...  We pre-train the generator with the available training data of each few-shot setting and use a curriculum batch schedule to mix seed and generated data.  ... 
arXiv:2106.08484v2 fatcat:nqltrsj3c5bmbfkflmsvfty2f4
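
The last fragment mentions pre-training the generator on seed data and then using a curriculum batch schedule to mix seed and generated examples. One plausible reading of such a schedule, sketched below with an assumed linear ramp (the paper's exact schedule may differ), shifts each batch from mostly seed data toward mostly generated data as training progresses.

    import random

    def mixed_batch(seed, generated, step, total_steps, batch_size=32):
        """Draw a batch whose generated-data share grows linearly over training.
        Assumes both pools hold at least batch_size examples."""
        n_gen = int(batch_size * min(1.0, step / total_steps))
        return random.sample(generated, n_gen) + random.sample(seed, batch_size - n_gen)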

Uncertainty-aware Self-training for Few-shot Text Classification

Subhabrata Mukherjee, Ahmed Hassan Awadallah
2020 Neural Information Processing Systems  
The standard self-training mechanism randomly samples instances from the unlabeled pool to generate pseudo-labels and augment labeled data.  ...  We study self-training as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck by making use of large-scale unlabeled data for the target task.  ...  as natural few-shot learners.  ... 
dblp:conf/nips/MukherjeeA20 fatcat:qbvfuk72xfcvlexqouzlw7f7bu
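
The contrast drawn here is between randomly sampling pseudo-label candidates and sampling them by model uncertainty. A standard uncertainty estimate in this line of work is Monte Carlo dropout: run several stochastic forward passes with dropout active and measure the spread of the predictions. The sketch below is a generic version of that idea, not the paper's exact acquisition rule, which also balances easy and hard examples.

    import numpy as np

    def mc_dropout_uncertainty(predict_stochastic, x, passes=10):
        """Predictive variance over stochastic forward passes (dropout left on)."""
        probs = np.stack([predict_stochastic(x) for _ in range(passes)])
        return float(probs.var(axis=0).mean())

    def sample_for_pseudo_labels(predict_stochastic, pool, k=100):
        """Prefer the least uncertain pool instances over random sampling."""
        return sorted(pool, key=lambda x: mc_dropout_uncertainty(predict_stochastic, x))[:k]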

Meta-Learning in Neural Networks: A Survey

Timothy M Hospedales, Antreas Antoniou, Paul Micaelli, Amos J. Storkey
2021 IEEE Transactions on Pattern Analysis and Machine Intelligence  
This paradigm provides an opportunity to tackle many of the conventional challenges of deep learning, including data and computation bottlenecks, as well as the fundamental issue of generalization.  ...  We survey promising applications and successes of meta-learning including few-shot learning, reinforcement learning and architecture search.  ...  Attention Modules have been used as comparators in metric-based meta-learners [132], to prevent catastrophic forgetting in few-shot continual learning [133], and to summarize the distribution of text  ... 
doi:10.1109/tpami.2021.3079209 pmid:33974543 fatcat:wkzeodki4fbcnjlcczn4mr6kry
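
The fragment on attention modules as comparators refers to metric-based meta-learners, which classify a query by comparing its embedding against the embeddings of a labeled support set. A minimal matching-network-style comparator is sketched below; the cosine similarity and softmax weighting are textbook choices, used here purely for illustration.

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    def attention_classify(query, support, labels, n_classes):
        """Metric-based comparator: softmax attention over support similarities;
        a class's score is the total attention mass on its support points."""
        sims = np.array([cosine(query, s) for s in support])
        attn = np.exp(sims - sims.max())
        attn /= attn.sum()
        scores = np.zeros(n_classes)
        for w, y in zip(attn, labels):
            scores[y] += w
        return int(np.argmax(scores))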

Meta-Learning in Neural Networks: A Survey [article]

Timothy Hospedales, Antreas Antoniou, Paul Micaelli, Amos Storkey
2020 arXiv   pre-print
This paradigm provides an opportunity to tackle many conventional challenges of deep learning, including data and computation bottlenecks, as well as generalization.  ...  We survey promising applications and successes of meta-learning such as few-shot learning and reinforcement learning.  ...  Attention Modules have been used as comparators in metric-based meta-learners [137], to prevent catastrophic forgetting in few-shot continual learning [138], and to summarize the distribution of text  ... 
arXiv:2004.05439v2 fatcat:3r23tsxxkfbgzamow5miglkrye

Small Sample Learning in Big Data Era [article]

Jun Shu, Zongben Xu, Deyu Meng
2018 arXiv   pre-print
The purpose is mainly to simulate human learning behaviors like recognition, generation, imagination, synthesis, and analysis.  ...  This category mainly focuses on learning with insufficient samples, and can also be called small data learning in some of the literature.  ...  The augmented data can be obtained by generating confident pseudo-labels with a self-ameliorable model, such as curriculum/self-paced learning (Bengio et al., 2009; Kumar et al., 2010; Jiang et al.  ... 
arXiv:1808.04572v3 fatcat:lqqzzrmgfnfb3izctvdzgopuny
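
The last fragment points at curriculum/self-paced learning as the mechanism for admitting confident pseudo-labels. In self-paced learning (Kumar et al., 2010), the model itself decides which samples count as easy via a loss threshold that is raised over training; the minimal binary-weight version is sketched below, with an assumed geometric schedule for the pace parameter.

    def self_paced_weights(losses, lam):
        """Binary self-paced weights: a sample joins training only if its
        current loss falls below the pace parameter lam."""
        return [1.0 if loss < lam else 0.0 for loss in losses]

    # Assumed schedule: start with a small lam (only easy samples train), then
    # grow it each epoch, e.g. lam *= 1.3, until every sample is included.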

Uncertainty-aware Self-training for Text Classification with Few Labels [article]

Subhabrata Mukherjee, Ahmed Hassan Awadallah
2020 arXiv   pre-print
In this work, we study self-training as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck by making use of large-scale unlabeled data for the target task.  ...  The standard self-training mechanism randomly samples instances from the unlabeled pool to pseudo-label and augment the labeled data.  ...  as natural few-shot learners.  ... 
arXiv:2006.15315v1 fatcat:pstouqr24jd4bm4do6xplipury

A survey on data-efficient algorithms in big data era

Amina Adadi
2021 Journal of Big Data  
This has triggered a serious debate in both the industrial and academic communities, calling for more data-efficient models that harness the power of artificial learners while achieving good results with less training data and, in particular, less human supervision.  ...  [297] applied NAS to few-shot learning to overcome data scarcity; they search only for the most promising architecture and optimize it to work on multiple few-shot learning tasks.  ... 
doi:10.1186/s40537-021-00419-9 fatcat:v4uahsvhlzdldlxqf24bshmja4

Fostering Learner Autonomy in Educational Settings

Sara Kashefian-Naeeini, Yousef Kouhpeyma
2020 International Journal of Multicultural and Multireligious Understanding  
The present research tried to contribute to this purpose by providing a compendium of information about ways to foster learner autonomy in educational settings.  ...  The role of the teacher in guiding learners toward autonomy stands out.  ...  Curriculum-Based Approach: The curriculum-based approach emphasizes the idea of learner control over the curriculum as a whole.  ... 
doi:10.18415/ijmmu.v7i7.1765 fatcat:txcir6b6yvfdjdhxzojbp5qfki

CRITICAL THINKING SKILLS IN LANGUAGE CLASS OF ARCHITECTURE – A CASE STUDY

Stars Jasmine
2020 Zenodo  
Moving away from factual knowledge, the students will have opportunities to make intellectual moves, reason well, and offer solutions to problems.  ...  The paper focuses on the need to inculcate 'thinking' in 21st-century learners.  ...  The tasks and assignments should enable them to i) draw conclusions from a set of facts (data), ii) make comparative judgments from data, iii) interpret data generated for records, files, and reports, and iv) analyze data  ... 
doi:10.5281/zenodo.3598118 fatcat:ugpp627ywfar5fcfroh77c6enm

The Flipped Classroom: Two Learning Modes that Foster Two Learning Outcomes

Eugenia M. W. Ng
2016 Issues in Informing Science and Information Technology  
Data collected from students' project pages show that they used an average of 3.22 editing features for the theme images of their projects.  ...  It was found that students rated all five questions relating to generic skills highly, with self-study skills rated the highest.  ...  Acknowledgements The author is very thankful to the students for participating and allowing her to cite their work and responses. Special thanks go to Pecco Yin for his good research support.  ... 
doi:10.28945/3462 fatcat:x3vtvlmsvfdybkoo2vrby5qxb4

Implementation outcomes of a multi-institutional web-based ethical, legal, and social implications genetics curriculum for primary care residents in three specialties

Malathi Srinivasan, Frank C Day, Erin Griffin, Daniel J Tancredi, Wylie Burke, Linda Pinsky, Roberta A Pagon, Jerome R Hoffman, Michael S Wilkes
2011 Genetics in Medicine  
Method: During 3 years, we implemented an interactive, web-based curriculum on ethical, legal, and social implications in medical genetics for primary care residents in three specialties at three institutions  ...  Residents reported that this curriculum covered ethical, legal, and social implications/genetics better than their usual curricula.  ...  Scale refinement was conducted by using baseline data to develop initial scales and follow-up data for validation.  ... 
doi:10.1097/gim.0b013e31820e279a pmid:21543989 fatcat:rkq4updduzbenobiy4qwfltxum

ALLSH: Active Learning Guided by Local Sensitivity and Hardness [article]

Shujian Zhang, Chengyue Gong, Xingchao Liu, Pengcheng He, Weizhu Chen, Mingyuan Zhou
2022 arXiv   pre-print
Furthermore, we observe consistent improvements over the baselines in the study of prompt selection for prompt-based few-shot learning.  ...  Active learning, which effectively collects informative unlabeled data for annotation, reduces the demand for labeled data.  ...  Few-shot learners are sensitive to the quality of labeled data (Sohn et al., 2020), and previous acquisition functions usually fail to boost performance by labeling randomly sampled data.  ... 
arXiv:2205.04980v1 fatcat:wtesmhbgbbf7fhvaxwolz6g72y
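
Per the abstract, ALLSH scores unlabeled points by local sensitivity: how much the model's prediction shifts between an input and a locally perturbed copy of it. The sketch below captures that scoring idea with a KL-divergence criterion; the perturbation function (e.g. a paraphraser or token-level noise) and the selection size are placeholders, and the paper's full acquisition also folds in example hardness.

    import numpy as np

    def kl(p, q, eps=1e-8):
        """KL divergence between two probability vectors, smoothed for stability."""
        p = np.asarray(p, dtype=float) + eps
        q = np.asarray(q, dtype=float) + eps
        return float(np.sum(p * np.log(p / q)))

    def select_by_sensitivity(predict, perturb, pool, k=100):
        """Score unlabeled inputs by prediction shift under a local perturbation;
        send the most sensitive ones for annotation."""
        return sorted(pool, key=lambda x: kl(predict(x), predict(perturb(x))),
                      reverse=True)[:k]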
Showing results 1 — 15 out of 4,350 results