Predictions for self-priming from incremental updating models unifying comprehension and production

Cassandra L. Jacobs
2015 Proceedings of the 6th Workshop on Cognitive Modeling and Computational Linguistics  
Syntactic priming from comprehension to production has been shown to be robust: we are more likely to repeat structures that we have previously heard. Many current models do not distinguish between comprehension and production. Here we contrast human language processing with two variants of a Bayesian belief updating model. In the first model, production-to-production priming (i.e., self-priming) is as strong as comprehension-to-production priming. In the second, both individuals who self-prime and those who do not are exposed to a syntactic construction via comprehension. Our results suggest that when production-to-production priming is as robust as comprehension-to-production priming, speakers who self-prime are simultaneously less likely to be primed by input from comprehension and show different response distributions than speakers who do not self-prime. The computational model accords with recent results demonstrating no self-priming, and provides evidence for an account of syntactic priming that distinguishes between production and comprehension input.
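The contrast between the two model variants can be illustrated with a minimal pseudo-count (Dirichlet-style) belief updating sketch. Everything here is an illustrative assumption, not the paper's implementation: the construction labels ("PO"/"DO"), the uniform prior counts, and the unit update increments are all hypothetical choices made for the example.

```python
import random

def sample_structure(counts):
    # Sample a construction proportional to current belief (pseudo-counts).
    total = sum(counts.values())
    r = random.random() * total
    for structure, c in counts.items():
        r -= c
        if r <= 0:
            return structure
    return structure

def simulate(self_priming, n_comprehension=10, n_production=10, seed=0):
    random.seed(seed)
    # Hypothetical uniform prior over two constructions (labels are assumptions).
    counts = {"PO": 5.0, "DO": 5.0}
    # Comprehension input: each heard "DO" updates the belief, as in both variants.
    for _ in range(n_comprehension):
        counts["DO"] += 1.0
    productions = []
    for _ in range(n_production):
        s = sample_structure(counts)
        productions.append(s)
        if self_priming:
            # Variant 1: production-to-production priming is as strong as
            # comprehension-to-production priming, so the speaker's own
            # output also updates the belief.
            counts[s] += 1.0
    return counts, productions
```

In this sketch, a self-priming speaker's own productions dilute the relative weight of the comprehension input, which mirrors the abstract's claim that such speakers are less influenced by what they hear and drift toward different response distributions.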
doi:10.3115/v1/w15-1101 dblp:conf/acl-cmcl/Jacobs15