
Eye movements as a window into real-time spoken language comprehension in natural contexts

Kathleen M. Eberhard, Michael J. Spivey-Knowlton, Julie C. Sedivy, Michael K. Tanenhaus
1995 Journal of Psycholinguistic Research  
Together, the first four experiments showed that listeners immediately integrated lexical, sublexical, and prosodic information in the spoken input with information from the visual context to reduce  ...  The fifth experiment demonstrated that a visual referential context affected the initial structuring of the linguistic input, eliminating even strong syntactic preferences that result in clear garden  ...  visual scene image coordinates. information provided by the visual context.  ... 
doi:10.1007/bf02143160 pmid:8531168 fatcat:3t6dzazeobdlboooqacbfg2aom

Situated sentence processing: The coordinated interplay account and a neurobehavioral model

Matthew W. Crocker, Pia Knoeferle, Marshall R. Mayberry
2010 Brain and Language  
Results from a new simulation suggest the model also correlates with event-related brain potentials elicited by the immediate use of visual context for linguistic disambiguation (Knoeferle, Habets, Crocker  ...  influence of non-linguistic visual context.  ...  The CIA here crucially asserts that visual information, particularly that identified by the utterance as relevant, becomes highly salient for the unfolding interpretation and disambiguation of situated  ... 
doi:10.1016/j.bandl.2009.03.004 pmid:19450874 fatcat:bynsdud7jvggnk77tscsbe7d4y

Do Speakers Avoid Ambiguities During Dialogue?

S. L. Haywood, M. J. Pickering, H. P. Branigan
2005 Psychological Science  
More interestingly, they were more likely to disambiguate their utterances when the visual context was potentially ambiguous than when it was not, reflecting sensitivity to ease of comprehension.  ...  What affects speakers' production of ambiguous utterances in dialogue? They might consider ease of production for themselves, or ease of comprehension for their addressees.  ...  Our experiment focused on utterances whose syntactic structure was ambiguous but whose interpretation was disambiguated by visual context (i.e., what the listener could see) and task constraints.  ... 
doi:10.1111/j.0956-7976.2005.01541.x pmid:15869694 fatcat:46cyaw32svbwrenw7krb76iluu

Visual Scenes Trigger Immediate Syntactic Reanalysis: Evidence from ERPs during Situated Spoken Comprehension

P. Knoeferle, B. Habets, M. W. Crocker, T. F. Munte
2007 Cerebral Cortex  
of attention in visual contexts.  ...  A central topic in sentence comprehension research is the kinds of information and mechanisms involved in resolving temporary ambiguity regarding the syntactic structure of a sentence.  ...  With respect to the role of scene information in disambiguation, it is further interesting to note that the distribution of the P600-like component was similar, whether disambiguation was triggered by depicted  ... 
doi:10.1093/cercor/bhm121 pmid:17644830 fatcat:s5cht2r42zglvdrg74olygnbiu

SIMMC 2.0: A Task-oriented Dialog Dataset for Immersive Multimodal Conversations [article]

Satwik Kottur, Seungwhan Moon, Alborz Geramifard, Babak Damavandi
2021 arXiv   pre-print
of the generated utterances to collect diverse referring expressions.  ...  Our baseline model, powered by a state-of-the-art language model, shows promising results and highlights new challenges and directions for the community to study.  ...  or disambiguation within the dialog utterances.  ... 
arXiv:2104.08667v2 fatcat:fmhhquhxpneu3dlcdak77tbkoi

Conflicting Constraints in Resource-Adaptive Language Comprehension [chapter]

Andrea Weber, Matthew W. Crocker, Pia Knoeferle
2010 Resource-Adaptive Cognitive Processes  
Previous work by Tanenhaus and colleagues [35] has shown, for example, the rapid influence of visual referential context on ambiguity resolution in on-line situated utterance processing.  ...  2.4 speak clearly for an interplay of both the visual context and the verb information.  ... 
doi:10.1007/978-3-540-89408-7_7 dblp:series/cogtech/WeberCK11 fatcat:unf7c6rkrzgd3en4s7fdsfukce

Recognizing Verbal Irony in Spontaneous Speech

Gregory A. Bryant, Jean E. Fox Tree
2002 Metaphor and Symbol  
Based on relevance theory, we predicted that speakers would provide acoustic disambiguation cues when speaking in situations that lack other sources of information, such as a visual channel.  ...  If conversationalists attempt to communicate as efficiently as possible given the communicative context, speech should contain semantic disambiguation cues as a function of the richness of contextual information  ...  of University Women Educational Foundation and by a University of California Regents Fellowship.  ... 
doi:10.1207/s15327868ms1702_2 fatcat:2qzsbhvxcjhb7o636ifxgxjbdy

Situationally independent prosodic phrasing

Shari R. Speer, Paul Warren, Amy J. Schafer
2011 Laboratory Phonology  
expected to influence the impact of syntactic ambiguity, including the availability of visual referents for the meanings of ambiguous utterances and the use of utterances as instructions versus confirmations  ...  Results from PP-attachment and verb transitivity ambiguities indicate clearly that the production of prosody-syntax correspondences is not conditional upon situational disambiguation of syntactic structure  ...  This work was supported by NIH training grant DC-00029 (Schafer), NIH research grant MH-51768 and NSF grant 0088175 (Speer), and by NZ/USA Cooperative Science Programme grant CSP95/01 and Marsden Fund  ... 
doi:10.1515/labphon.2011.002 fatcat:3s2k2jvok5gipntavuh4hquxfy

Page 676 of Cognitive Science Vol. 32, Issue 4 [page]

2008 Cognitive Science  
Consistent with previous work using scripted utterances, we observed typical lexical competitor effects for expressions uttered by the experimenter outside the context of the conversation.  ...  The increase in target looks and decrease in competitor looks following the point-of-disambiguation indicates that addressees were able to use the disambiguating information online during the conversation  ... 

The Coordinated Interplay of Scene, Utterance, and World Knowledge: Evidence From Eye Tracking

Pia Knoeferle, Matthew W. Crocker
2006 Cognitive Science  
Two studies investigated the interaction between utterance and scene processing by monitoring eye movements in agent-action-patient events, while participants listened to related utterances.  ...  Experiment 2 investigated the relative importance of linguistic/world knowledge and scene information.  ...  We are grateful to Martin Pickering for comments on an early draft of this article. This research was funded by a PhD scholarship to Pia Knoeferle and by SFB 378 "ALPHA" to Matthew W.  ... 
doi:10.1207/s15516709cog0000_65 pmid:21702823 fatcat:xpw4r7k4ojbaxpdoj765rgjija

Multimodal Interactions Using Pretrained Unimodal Models for SIMMC 2.0 [article]

Joosung Lee, Kijong Han
2021 arXiv   pre-print
The SIMMC 2.0 dataset is a multimodal dataset containing image and text information, which is more challenging than text-only conversation because it must be solved by understanding the  ...  SIMMC 2.0 includes 4 subtasks, and we introduce our multimodal approaches for subtasks #1 and #2 and the generation of subtask #4.  ...  Huang et al. (2021) and Jeong et al. (2021) achieve good performance in SIMMC 1.0 by using visual information as a text description of the object.  ... 
arXiv:2112.05328v3 fatcat:tunnaa3p25f3tkmlpv3ax2lcie

A speaker's gesture style can affect language comprehension: ERP evidence from gesture-speech integration

Christian Obermeier, Spencer D. Kelly, Thomas C. Gunter
2015 Social Cognitive and Affective Neuroscience  
Adding trials with such grooming movements makes the gesture information a much weaker cue compared with the gestures of the non-grooming speaker.  ...  Event-related potentials elicited by the speech signal revealed that adding grooming movements attenuated the impact of gesture for this particular speaker.  ...  utterances of these two speakers are randomly mixed on a trial-by-trial basis.  ... 
doi:10.1093/scan/nsv011 pmid:25688095 pmcid:PMC4560945 fatcat:5bcnrl5rj5gstbvckccdt434py

Interactive Visual Analysis of Transcribed Multi-Party Discourse

Mennatallah El-Assady, Annette Hautli-Janisz, Valentin Gold, Miriam Butt, Katharina Holzinger, Daniel Keim
2017 Proceedings of ACL 2017, System Demonstrations  
We present the first web-based Visual Analytics framework for the analysis of multi-party discourse data using verbatim text transcripts.  ...  Our framework supports a broad range of server-based processing steps, ranging from data mining and statistical analysis to deep linguistic parsing of English and German.  ...  Acknowledgments The research carried out in this paper was supported by the Bundesministerium für Bildung und Forschung (BMBF) under grant no. 01461246 (eHumanities VisArgue project).  ... 
doi:10.18653/v1/p17-4009 dblp:conf/acl/El-AssadyHGBHK17 fatcat:fn2bv5vr4ngadct4qmy7tdlto4

Peaches and eggplants or... something else? The role of context in emoji interpretations

Benjamin Weissman
2019 Proceedings of the Linguistic Society of America  
The context surrounding these messages was manipulated across experimental conditions, altering both the preceding discourse and the presence of a sentence-final wink emoji.  ...  When one of these emojis is used in a context that strongly biases towards the non-euphemistic interpretation, ratings for sexualness decrease and variability increases.  ...  An affective that follows an otherwise ambiguous utterance may provide additional intentional information that helps the message receiver to disambiguate.  ... 
doi:10.3765/plsa.v4i1.4533 fatcat:5jrfr5tyozcexhllumybztm3va

Probabilistic grounding of situated speech using plan recognition and reference resolution

Peter Gorniak, Deb Roy
2005 Proceedings of the 7th international conference on Multimodal interfaces - ICMI '05  
To resolve semantic ambiguities we propose a situation model that captures aspects of the physical context of an utterance as well as the speaker's intentions, in our case represented by recognized plans  ...  In a single, coherent Framework for Understanding Situated Speech (FUSS) we show how these two influences, acting on an ambiguous representation of the speech signal, complement each other to disambiguate  ...  The ranking should be informed by acoustic, lexical and grammatical knowledge.  ... 
doi:10.1145/1088463.1088489 dblp:conf/icmi/GorniakR05 fatcat:2ht22ymxhzacplmntkpj2h2w7m
Showing results 1–15 out of 6,103 results