A copy of this work (application/pdf) was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
Multi-task Learning for Low-resource Second Language Acquisition Modeling
[article]
2020
arXiv
pre-print
Second language acquisition (SLA) modeling aims to predict whether second-language learners can correctly answer questions based on what they have learned. ...
Fortunately, there are some latent common patterns among different language-learning tasks, which gives us an opportunity to solve the low-resource SLA modeling problem. ...
Particularly, second language acquisition (SLA) modeling is a kind of KT in the field of language learning. ...
arXiv:1908.09283v4
fatcat:zvuvxs6x2nf57l2wstf3rokzfu
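The snippet above appeals to "latent common patterns among different language-learning tasks", which multi-task models typically capture through hard parameter sharing: one shared encoder feeds a lightweight head per task, so low-resource tasks borrow statistics from high-resource ones. A minimal pure-Python sketch of that architecture follows; the class, task names, feature dimensions, and the absence of any training loop are all illustrative assumptions, not details taken from the paper listed.

```python
import math
import random

random.seed(0)

DIM_IN, DIM_SHARED = 4, 3  # toy feature and hidden sizes (illustrative)

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(matrix, vec):
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

class MultiTaskModel:
    """Hard parameter sharing: one shared encoder, one small head per task."""

    def __init__(self, tasks):
        # Parameters shared by every language-learning task; gradients from
        # all tasks would update these, which is how low-resource tasks benefit.
        self.shared = rand_matrix(DIM_SHARED, DIM_IN)
        # One task-specific output head per (source, target) language pair.
        self.heads = {t: rand_matrix(1, DIM_SHARED) for t in tasks}

    def predict(self, task, features):
        hidden = matvec(self.shared, features)        # shared representation
        score = matvec(self.heads[task], hidden)[0]   # task-specific scoring
        return 1.0 / (1.0 + math.exp(-score))         # P(learner answers correctly)

model = MultiTaskModel(["en_es", "es_en", "fr_en"])
p = model.predict("fr_en", [1.0, 0.0, 0.5, 0.2])
print(p)
```

The design choice being sketched is only the weight-sharing structure: a real SLA model would learn `shared` and `heads` jointly from exercise logs rather than use random weights.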
Comparison of Three Models Dealing with Working Memory and Its Dimensions in Second Language Acquisition
2017
International Journal of Applied Linguistics and English Literature
The current status of research on working memory (WM) and its components in second language acquisition (SLA) was examined in this review. ...
The phonological and executive components of WM were examined in more detail, as these determine the two basic aspects of language acquisition: language characteristics and acquisition methods (Wen, 2012 ...
INTRODUCTION The way people process and store linguistic information related to a language and its learning is partly responsible for their learning performance in second and foreign languages. ...
doi:10.7575/aiac.ijalel.v.7n.1p.38
fatcat:xpk7wwwk45gu7hwxmaciu3zafu
Acquisition of Inflectional Morphology in Artificial Neural Networks With Prior Knowledge
[article]
2019
arXiv
pre-print
an easier task for the model; (ii) knowledge of a prefixing (resp. suffixing) language makes acquisition of a suffixing (resp. prefixing) language's morphology more challenging; and (iii) surprisingly, a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology, independent of their relatedness. ...
Bowman and Kyle Gorman for helpful discussions and suggestions. ...
arXiv:1910.05456v1
fatcat:4o7h73i6ffbw7pasupv3mghi6e
Linguistic unit discovery from multi-modal inputs in unwritten languages: Summary of the "Speaking Rosetta" JSALT 2017 Workshop
[article]
2018
arXiv
pre-print
We study the replacement of orthographic transcriptions by images and/or translated text in a well-resourced language to help unsupervised discovery from raw speech. ...
We summarize the accomplishments of a multi-disciplinary workshop exploring the computational and scientific issues surrounding the discovery of linguistic units (subwords and words) in a language without ...
This makes it suitable for our low-resource scenario. ...
arXiv:1802.05092v1
fatcat:xzelus33z5hapcddpppqqnhyq4
Computational Morphology with Neural Network Approaches
[article]
2021
arXiv
pre-print
Neural network approaches have been applied to computational morphology with great success, improving the performance of most tasks by a large margin and providing new perspectives for modeling. ...
They investigated the cross-lingual transfer effect for this task, i.e. using training data from high-resource languages to help train models for low-resource languages, where high-resource and low-resource ...
Tagging tasks have seen improvements in models for low-resource languages when models are trained with related high-resource languages (e.g. ...
arXiv:2105.09404v1
fatcat:6w4n7yjaevh6fpnauntzlfe64u
Linguistic Unit Discovery from Multi-Modal Inputs in Unwritten Languages: Summary of the "Speaking Rosetta" JSALT 2017 Workshop
2018
2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
This makes it suitable for our low-resource scenario. ...
The cross-language definition of units approach [30] uses linguistic knowledge of the low-resource language and a semi-supervised training paradigm to build an ASR system for a low-resource language ...
doi:10.1109/icassp.2018.8461761
dblp:conf/icassp/ScharenborgBBHM18
fatcat:snvpnnq3ejftxdkord43uvlqom
Lessons learned in multilingual grounded language learning
[article]
2018
arXiv
pre-print
We show that multilingual training improves over bilingual training, and that low-resource languages benefit from training with higher-resource languages. ...
Here, we investigate in detail which conditions affect the performance of this type of grounded language learning model. ...
High-to-low resource transfer: In Section 6.2 we investigate whether low-resource languages benefit from jointly training on larger data sets from higher-resource languages. ...
arXiv:1809.07615v1
fatcat:vwglumwbkjhqtlcsrdvvv4vadq
Boosting the Transformer with the BERT Supervision in Low-Resource Machine Translation
2022
Applied Sciences
However, these models suffer from the data scarcity problem in low-resource machine translation tasks. ...
The evaluation results on six low-resource translation tasks suggest that the Transformer trained by our algorithm significantly outperforms the baselines which were trained end-to-end in previous works ...
Low-Resource Machine Translation Models The lack of parallel data is challenging for NMT model training. Qi et al. ...
doi:10.3390/app12147195
fatcat:2ltyivnbcjdqbnjfwekj4p5ywa
Recognition of Latin American Spanish Using Multi-Task Learning
2019
Interspeech 2019
Recently, TDNN based multi-task learning has received some attention in this area, with interesting results, typically using models trained with a variety of different accented corpora from a particular ..., during a TDNN based multi-task training. ...
Acknowledgements The authors would like to acknowledge our VoiceInteraction colleagues, in particular Tiago Luís and Alícia Martinez-Losa, for their contributions to this work. ...
doi:10.21437/interspeech.2019-2772
dblp:conf/interspeech/MendesANT19
fatcat:dlqcsme4jnaf3l2ripxg5gz5jq
Speaker Adversarial Training of DPGMM-Based Feature Extractor for Zero-Resource Languages
2019
Interspeech 2019
However, owing to the absence of phoneme labels, zero-resource languages cannot employ adversarial multi-task (AMT) learning for speaker normalization. ...
We propose a novel framework for extracting speaker-invariant features for zero-resource languages. ...
Based on this assumption, it may not be appropriate to use posteriorgrams for the supervision of phonemes in multi-task learning. ...
doi:10.21437/interspeech.2019-2052
dblp:conf/interspeech/HiguchiTKO19
fatcat:h7lorg26vzasnmtyypz3gfdh54
Low-Resource Adaptation of Neural NLP Models
[article]
2020
arXiv
pre-print
The objective of this thesis is to investigate methods for dealing with such low-resource scenarios in information extraction and natural language understanding. ...
To this end, we study distant supervision and sequential transfer learning in various low-resource settings. ...
In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. ...
arXiv:2011.04372v1
fatcat:626mbe5ba5bkdflv755o35u5pq
Working Memory, Processing Speed, and Executive Memory Contributions to Computer-Assisted Second Language Learning
2012
Contemporary Educational Technology
How individual differences in information processing affect second language (L2) learning has been unclear in prior research. ...
Further, the results hold certain implications for tailoring second language teaching online or in other technology-based instruction to learner profiles of abilities in working memory, processing speed ...
for different probes into second language learning. ...
doi:10.30935/cedtech/6077
fatcat:axpdkyrclrhodo6hbrz3k5ducq
Effective Approaches to Neural Query Language Identification
2022
Computational Linguistics
Moreover, to remedy the low resource challenge in this task, a novel machine translation based strategy is proposed to automatically generate synthetic query-style data for low-resource languages. ...
There exist two main challenges in Q-LID: 1) insufficient contextual information in queries for disambiguation; and 2) the lack of query-style training examples for low-resource languages. ...
Acknowledgments The authors thank the reviewers for their helpful comments in improving the quality of this work. This work is supported by National Key R&D Program of China (2018YFB1403202). ...
doi:10.1162/coli_a_00451
fatcat:tz3oxk6lujehtesnagnnzwpzqe
Study on the Relationship between College English Multimodal Teaching and Autonomic Learning
2017
DEStech Transactions on Social Science Education and Human Science
Experimental results show that the multi-modal teaching model conforms to the development trend of college English teaching, helps maintain students' interest in learning English, and improves their English proficiency ...
The university English multi-modal teaching model is an important trend in college English teaching reform. ...
Only in this way can multi-modal network-media teaching resources truly provide a maximally supplementary language-learning platform for teachers and students. ...
doi:10.12783/dtssehs/icsste2017/9283
fatcat:m4tcx34azjdhfkcwbva2wlvquq
A Theoretical Study on English Teaching in Chinese Ethnic Minority Regions
2013
English Language Teaching
Inappropriate teaching strategies, learning materials, and language policy hinder teaching and learning progress in those areas, which still causes the minority students' lack ...
In trilingual education, we believe that teaching should be learner-centered and teachers, students, tasks and context of situation are greatly emphasized. ...
Compared with the mother tongue, the context problem is another barrier to second language acquisition. ...
doi:10.5539/elt.v6n7p168
fatcat:ac5vwgb4vfcwno2osrqnzdeiwu
Showing results 1 — 15 out of 60,691 results