Similarity-based identification of repairs in Japanese spoken language. In Proceedings of the 3rd International Conference on Spoken Language Processing, pages 915-918. ... In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 33-47. Nakatani, Christine H. and Julia Hirschberg. 1994. A corpus-based study of repair cues in spontaneous speech. ...
Lecture Notes in Computer Science
Building multilingual spoken language translation systems requires knowledge about both acoustic models and language models of each language to be translated. ... Our multilingual translation system JANUS-2 is able to translate English and German spoken input into either English, German, Spanish, Japanese or Korean output. ... Currently, English and German spoken input can be translated into either English, German, Spanish, Japanese or Korean output. Work is in progress to add Spanish and Korean as input languages. ...doi:10.1007/3-540-60925-3_42 fatcat:gvgbvudr5vfcfikdhnhoh42z5a
child aged 2:10-4:10; 9503761 hearing-impaired children/classmate communicative interactions; videotaped interactions; deaf/hard-of-hearing children aged 5; 9503959 Japanese as a foreign language, word ... use, parental training; video program/personal conferences; language sample analysis; mothers of delayed-language children; 9504154 infant language-related mechanisms development, emotional content import ...
It argues that further distinctions can be made in terms of whether the extension is prosodically integrated with the prior unit, its host, (Non-add-on) or not, and in terms of whether it repairs some ... In conclusion the study argues for a classification of 'increment' types which goes beyond the English-based Glue-on, attributes a central role to prosodic delivery and adopts a usage-based understanding ... TCU continuation in Japanese In this section we pursue a similar line of inquiry with respect to TCU continuation in Japanese, although our findings are based on an even smaller data set (Tanaka 1999 ...doi:10.1075/prag.17.4.02cou fatcat:i2xlzwql6zewlnkfkalgyjxbsm
We present JANUS-II, a large scale system effort aimed at interactive spoken language translation. ... JANUS-II now accepts spontaneous conversational speech in a limited domain in English, German or Spanish and produces output in German, English, Spanish, Japanese and Korean. ... Thanks are also due to the partners and affiliates in C-STAR, who have helped define speech translation today. ...doi:10.1109/2.511967 fatcat:eynsxz5oarhfrcy6lhb2vwav5a
-used as a repair initiator when, for example, one has not clearly heard what someone just said-is found in roughly the same form and function in spoken languages across the globe. ... In support of the first, we show that the similarities in form and function of this interjection across languages are much greater than expected by chance. ... Acknowledgments This work was carried out in the project ''Interactional Foundations of Language'' within the Language and Cognition Department at the Max Planck Institute for Psycholinguistics. ...doi:10.1371/journal.pone.0078273 pmid:24260108 pmcid:PMC3832628 fatcat:dwncelv7jjhrhnwk255h2xh4fi
, domain of similar size (or even to a distinct language pair, which can raise similar issues --this should be clear if one compares moving between pairs of very similar languages which may share a great ... translation of spoken language, often in the context of a meeting or someone addressing a group of people. ...doi:10.1007/978-0-387-73819-2_10 fatcat:ydqiai2pzjgo7hz4dtfq3xhfaa
Thirty-two subjects completed a battery of production and perception tasks: spoken target words that were likely to have epenthetic vowels, read sentences, epenthetic vowel perception, identification of ... The aim of this study was to investigate individual differences in vowel epenthesis among Korean L2 speakers of English, and its relationship to other measures of segmental and suprasegmental processing ... of language groups. ...doi:10.1121/1.3508938 fatcat:e262gtcscbbpdo7zw7fa27seze
The design and implementation of SKOPE demonstrates how connectionist/symbolic hybrid architectures can be constructed for spoken agglutinative language processing. ... Spoken language processing requires speech and natural language integration. Moreover, spoken Korean calls for unique processing methodology due to its linguistic characteristics. ... Often the spoken languages are ungrammatical, fragmentary, and contain non-fluencies and speech repairs, and must be processed incrementally under the time constraints (?). ...arXiv:cmp-lg/9504008v2 fatcat:qzmvax43gfdjpibn5xy27ryzoq
Based on this synthesis, the following key areas for future research are identified: learners' identification and use of cognates in English, their knowledge of loanwords in Japanese, their attitudes and ... Furthermore, an overview and synthesis of the research is given, illustrating how cognates are typically treated in the feeder disciplines and in studies focusing on language learning and/or teaching, ... and although they were initially better at identifying orally presented cognates, their skill in identifying cognates in written form grew over 3 years to equal that of spoken identification. ...doi:10.7820/vli.v09.1.allen.a fatcat:jvpri6gleve7rjkstiprgtn6ma
We assess the perception of illegal consonant clusters in native speakers of Japanese, Brazilian Portuguese, and European Portuguese, three languages that have similar phonological properties, but that ... Listeners of various languages tend to perceive an illusory vowel inside consonant clusters that are illegal in their native language. ... The A and B tokens of the ABX trial were spoken by two different female speakers, and the X token was spoken by a male speaker. The ISI was 150 ms. ...doi:10.1016/j.jml.2010.12.004 fatcat:l3jhkjt4wvdfhgok4egyvzqlle
Lecture Notes in Computer Science
In this paper, a large Hungarian spoken language database is introduced. ... This phonetically-based multi-purpose database contains various types of spontaneous and read speech from 333 monolingual speakers (about 50 minutes of speech sample per speaker). ... Development of a large spontaneous speech database of agglutinative Hungarian ...doi:10.1007/978-3-319-10816-2_51 fatcat:4ucbmrlewvfwtbkfcjyp54nxbe
This paper describes recent progress and the author's perspectives of speech recognition technology. ... Applications of speech recognition technology can be classified into two main areas, dictation and human-computer dialogue systems. • ... The system is based on the spoken language systems developed for the RailTel project  and the ESPRIT Mask project  . ...doi:10.3115/1034678.1034680 dblp:conf/acl/Furu99 fatcat:ovnd2rgnwrf5ddhnnshwtppxje
Alternatively, signed conversations may show a similar distribution of turn-timing as spoken languages, thus avoiding both gaps and overlaps. ... Cross-linguistic comparison has indicated that spoken languages vary only minimally in terms of turn-timing, and language acquisition research has shown pre-linguistic vocal turn-taking in the first half ... We also thank Ellen Nauta for modeling in Figure 1 , Sean Roberts for comments on the quantitative analysis of the data, and the members of the Language and Cognition Department, especially Elma Hilbrink ...doi:10.3389/fpsyg.2015.00268 pmid:25852593 pmcid:PMC4371657 fatcat:3od5cccjirew3oomj2grlbfevi