1,423 Hits in 4.8 sec

Phonological transfer effects in novice learners: A learner's brain detects grammar errors only if the language sounds familiar

Sabine Gosselke Berthelsen, Merle Horne, Yury Shtyrov, Mikael Roll
2021 Bilingualism: Language and Cognition  
Along with lexicosemantic content expressed by consonants, the words contained grammatical properties embedded in vowels and tones.  ...  Pictures that were mismatched with any of the words' phonological cues elicited an N400 in tonal learners.  ...  Special thanks to Frida Blomberg for help and support particularly during data acquisition, to Jonas Brännström for help with the setup of the auditory stimulus presentation, as well as to all participants  ... 
doi:10.1017/s1366728921000134 fatcat:i42yb5stzvggviad2z6ozepkqy

Dutch Humor Detection by Generating Negative Examples [article]

Thomas Winters, Pieter Delobelle
2020 arXiv   pre-print
In particular, we compare the humor detection capabilities of classic neural network approaches with the state-of-the-art Dutch language model RobBERT.  ...  In machine learning, humor detection is usually modeled as a binary classification task, trained to predict if the given text is a joke or another type of text.  ...  Pieter Delobelle was supported by the Research Foundation - Flanders under EOS No. 30992574 and received funding from the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI)  ... 
arXiv:2010.13652v1 fatcat:536w2pprt5cmbflcwlbhvh7io4

Improving BERT with Syntax-aware Local Attention [article]

Zhongli Li, Qingyu Zhou, Chao Li, Ke Xu, Yunbo Cao
2021 arXiv   pre-print
The proposed syntax-aware local attention can be integrated with pretrained language models, such as BERT, enabling the model to focus on syntactically relevant words.  ...  Pre-trained Transformer-based neural language models, such as BERT, have achieved remarkable results on a variety of NLP tasks.  ...  The MSRA NER and CGED datasets are selected for named entity recognition and grammatical error detection in Chinese.  ... 
arXiv:2012.15150v2 fatcat:fglcymg5izb67bhlypdlx4n7z4

Adversarial Inference for Multi-Sentence Video Description [article]

Jae Sung Park, Marcus Rohrbach, Trevor Darrell, Anna Rohrbach
2019 arXiv   pre-print
Our approach results in more accurate, diverse, and coherent multi-sentence video descriptions, as shown by automatic as well as human evaluation on the popular ActivityNet Captions dataset.  ...  The work of Trevor Darrell and Anna Rohrbach was in part supported by the DARPA XAI program, the Berkeley Artificial Intelligence Research (BAIR) Lab, and the Berkeley DeepDrive (BDD) Lab.  ...  object labels with bag of words weighted by detection confidences (denoted as BottomUp).  ... 
arXiv:1812.05634v2 fatcat:vcszgbvwrzckfkja4xhsrv5u6i


Peer Interaction and Corrective Feedback for Accuracy and Fluency Development: Monitoring, Practice, and Proceduralization

Masatoshi Sato, Roy Lyster
2012 Studies in Second Language Acquisition  
This process of detecting an error, rehearsing the error-free solution, and, thus, reducing the error rate contributes to proceduralization by storing correct linguistic representations in long-term memory  ...  Each learner in the group had a different error list because detecting errors would otherwise have been too easy.  ...  APPENDIX A A SAMPLE SCENARIO AND ERROR LIST (POLICE REPORT) Grammatical target: Past tense Scenario: You witnessed a bank robbery.  ... 
doi:10.1017/s0272263112000356 fatcat:qzsu334k2zaqvm4rltd2mszjbq

Effects of manipulating task complexity on self-repairs during L2 oral production

Roger Gilabert
2007 International Review of Applied Linguistics in Language Teaching  
It specifically focuses on self-repairs, which are taken as a measure of accuracy since they denote both attention to form and an attempt at being accurate.  ...  Results show an overall effect of Task Complexity on self-repair behavior across task types, with different behaviors among the three task types.  ...  With regard to the frequency of errors and self-repairs, a higher rate of errors was captured by one of the measures (ratio of errors to words), but not by the number of errors per AS-unit.  ... 
doi:10.1515/iral.2007.010 fatcat:6fatd6ji4rhfnj2ydmacre7cvq

Syntax-BERT: Improving Pre-trained Transformers with Syntax Trees [article]

Jiangang Bai, Yujing Wang, Yiren Chen, Yaming Yang, Jing Bai, Jing Yu, Yunhai Tong
2021 arXiv   pre-print
In this paper, we address this problem by proposing a novel framework named Syntax-BERT.  ...  However, how to incorporate syntax trees effectively and efficiently into pre-trained Transformers is still unsettled.  ...  This demonstrates the explainability of Syntax-Transformer, which correctly identifies the error term "anyone", following the rule that "anyone" is seldom matched with the punctuation ".".  ... 
arXiv:2103.04350v1 fatcat:wd7hiiqhmreuficb4bgu7bhd44

A Comprehensive Survey of Grammar Error Correction [article]

Yu Wang, Yuelin Wang, Jie Liu, Zhuo Liu
2020 arXiv   pre-print
Grammar error correction (GEC) is an important application of natural language processing techniques.  ...  Similarly, some performance-boosting techniques are adapted from machine translation and are successfully combined with GEC systems to enhance final performance.  ...  A neural sequence error detection model is trained to rerank the n-best hypotheses output by the MT-based model.  ... 
arXiv:2005.06600v1 fatcat:p4op2mwbefdqtfsewnhrvhcl6q

Does Educator Training or Experience Affect the Quality of Multiple-Choice Questions?

Emily M. Webb, Jonathan S. Phuong, David M. Naeger
2015 Academic Radiology  
Results: Questions written by faculty with MCQ writing training had significantly fewer errors: a mean of 0.4 errors per question compared to a mean of 1.5-1.7 errors per question for the other groups (P <  ...  Conclusions: Faculty with training in effective MCQ writing made fewer errors in MCQ construction.  ...  With 50 questions per group, we were powered to detect pairwise differences of approximately 0.4 errors per question between any two groups (beta = 0.8).  ... 
doi:10.1016/j.acra.2015.06.012 pmid:26277486 fatcat:v2f6yu72ybdghfj72zadu2k3be

A Mutually Auxiliary Multitask Model with Self-Distillation for Emotion-Cause Pair Extraction

Jiaxin Yu, Wenyuan Liu, Yongjun He, Chunyue Zhang
2021 IEEE Access  
• We design a self-distillation method for pairwise tasks and apply it to train our multitask model, which further improves the accuracy of emotion and cause extraction. • We evaluate our models by comparative  ...  [56] utilized self-distillation to accurately detect text in images and optimized the teacher-student training process. Clark et al.  ... 
doi:10.1109/access.2021.3057880 fatcat:2hzcavx63zbsffm6pwlrsy6sl4

Collecting fluency corrections for spoken learner English

Andrew Caines, Emma Flint, Paula Buttery
2017 Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications  
Our emphasis in data collection is on fluency corrections, a more complete correction than has traditionally been aimed for in grammatical error correction (GEC) research.  ...  We analyse crowdworker behaviour in HEC and conclude that the method is useful with certain amendments for future work.  ...  We thank the three reviewers for their very helpful comments and have attempted to improve the paper in line with their suggestions.  ... 
doi:10.18653/v1/w17-5010 dblp:conf/bea/CainesFB17 fatcat:yj2pzspwd5hghad2sfgjccncle

The effects of stimulus complexity on the preattentive processing of self-generated and nonself voices: An ERP study

Tatiana Conde, Óscar F. Gonçalves, Ana P. Pinheiro
2015 Cognitive, Affective, & Behavioral Neuroscience  
However, most of the existing studies have only focused on the sensory correlates of self-generated voice processing, whereas the effects of attentional demands and stimulus complexity on self-generated  ...  These findings suggest differences in the time course of automatic detection of a change in voice identity.  ...  This finding suggests similar engagement of involuntary attention by both SGV and NSV deviants when attention is directed away from the primary task (i.e., watching a silent movie) by the detection of  ... 
doi:10.3758/s13415-015-0376-1 pmid:26415897 fatcat:44v6d5tih5fftizvufrtiyhvqe

Agreement attraction in comprehension: Representations and processes

Matthew W. Wagers, Ellen F. Lau, Colin Phillips
2009 Journal of Memory and Language  
Second, we observe a 'grammatical asymmetry': attraction effects are limited to ungrammatical sentences, which would be unexpected if the representation of subject number were inherently prone to error  ...  Cognitive Psychology, 23, 45-93]), in which a verb erroneously agrees with an intervening noun.  ...  Figure 5 (Experiment 4, self-paced reading results): Panel A shows region-by-region means segregated by attractor number and grammaticality. Error bars indicate standard error of the mean.  ... 
doi:10.1016/j.jml.2009.04.002 fatcat:q75kkqtrk5cnjbtfrwb2ucqo3y

Syntactic processing in L2 depends on perceived reliability of the input: Evidence from P600 responses to correct input

Kristin Lemhöfer, Herbert Schriefers, Peter Indefrey
2020 Journal of Experimental Psychology. Learning, Memory and Cognition  
...  with the intuition of many German speakers that the correct phrase should be het boot.  ...  the input, either because of the nature of the task (grammaticality judgments) or because of the salient presence of incorrect sentences.  ...  Indeed, the processing of syntactic violations is usually regarded as a multiple-stage process, starting with detection of the error and followed by a later, more strategy-driven stage of reanalysis  ... 
doi:10.1037/xlm0000895 pmid:32658543 fatcat:fjb5rvqrwrcn3i4vjoytcy3yvq

Automatic recognition of symptom severity from psychiatric evaluation records

Travis R. Goodwin, Ramon Maldonado, Sanda M. Harabagiu
2017 Journal of Biomedical Informatics  
We evaluated three methods for inferring the latent severity score associated with each record: (i) pointwise ridge regression; (ii) pairwise comparison-based classification; and (iii) a hybrid approach  ...  This paper presents a novel method for automatically recognizing symptom severity by using natural language processing of psychiatric evaluation records to extract features that are processed by machine  ...  Acknowledgments Research reported in this publication was supported by the National Institute of Mental Health (NIMH), the National Library of Medicine (NLM), and the National Human Genome Research Institute  ... 
doi:10.1016/j.jbi.2017.05.020 pmid:28576748 pmcid:PMC5705296 fatcat:qepww5ibgbb5fj3i7kz6jscztu