74 Hits in 5.8 sec

Neural Chinese Word Segmentation with Lexicon and Unlabeled Data via Posterior Regularization

Junxin Liu, Fangzhao Wu, Chuhan Wu, Yongfeng Huang, Xing Xie
2019 The World Wide Web Conference - WWW '19  
In this paper, we propose a neural approach for Chinese word segmentation which can exploit both lexicon and unlabeled data.  ...  Our approach is based on a variant of the posterior regularization algorithm, and the unlabeled data and lexicon are incorporated into model training as indirect supervision by regularizing the prediction  ...  labeled data; (8) LUPR, our proposed neural CWS approach with both lexicon and unlabeled data via posterior regularization.  ... 
doi:10.1145/3308558.3313437 dblp:conf/www/LiuWWHX19 fatcat:6fgh4vl3lfehppq3pkp5jdua6y
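The snippet above describes incorporating a lexicon and unlabeled data as indirect supervision by regularizing the model's predictions. Below is a minimal, self-contained sketch of that general posterior-regularization idea (not the paper's exact algorithm): on unlabeled text, penalize the KL divergence between a lexicon-constrained target distribution and the model's tag posteriors. The BMES tag set, the toy logits, and the lexicon mask are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def lexicon_constrained_target(model_probs, allowed_mask):
    """Project the model's tag posteriors onto a lexicon constraint set by
    zeroing out tags the lexicon rules out and renormalizing (a simple
    approximation of the PR projection step)."""
    q = model_probs * allowed_mask
    q_sum = q.sum(axis=-1, keepdims=True)
    # Fall back to the unconstrained posterior where the mask removes everything.
    q = np.where(q_sum > 0, q / np.maximum(q_sum, 1e-12), model_probs)
    return q

def pr_penalty(model_probs, allowed_mask):
    """KL(q || p) between the constrained target q and the model posterior p,
    averaged over positions; added to the training loss on unlabeled data."""
    q = lexicon_constrained_target(model_probs, allowed_mask)
    kl = (q * (np.log(q + 1e-12) - np.log(model_probs + 1e-12))).sum(axis=-1)
    return kl.mean()

# Toy example: 4 characters, BMES tagging (Begin, Middle, End, Single).
logits = np.random.randn(4, 4)
p = softmax(logits)
# Hypothetical lexicon match: characters 0-1 form a known two-character word,
# so position 0 should be B and position 1 should be E.
mask = np.ones((4, 4))
mask[0] = [1, 0, 0, 0]   # only B allowed
mask[1] = [0, 0, 1, 0]   # only E allowed
print("PR penalty:", pr_penalty(p, mask))
```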

Bayesian Modeling of Lexical Resources for Low-Resource Settings

Nicholas Andrews, Mark Dredze, Benjamin Van Durme, Jason Eisner
2017 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)  
However, discriminative training with lexical features requires annotated data to reliably estimate the lexical feature weights and may result in overfitting the lexical features at the expense of features  ...  Lexical resources such as dictionaries and gazetteers are often used as auxiliary data for tasks such as part-of-speech induction and named-entity recognition.  ...  Acknowledgments This work was supported by the JHU Human Language Technology Center of Excellence, DARPA LORELEI, and NSF grant IIS-1423276. Thanks to Jay Feldman for early discussions.  ... 
doi:10.18653/v1/p17-1095 dblp:conf/acl/AndrewsDDE17 fatcat:zpg6zlqkjfel5mxnr3t6nlgayq

Transfer learning for speech and language processing

Dong Wang, Thomas Fang Zheng
2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)  
between data distributions and data types, but also between model structures (e.g., shallow nets and deep nets) or even model types (e.g., Bayesian models and neural models).  ...  For example, in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language, with little or no re-training data.  ...  For example, in [62], [144], images and words are embedded in the same low-dimensional space via neural networks, by which image classification can be improved by the word embedding, even for classes  ... 
doi:10.1109/apsipa.2015.7415532 dblp:conf/apsipa/WangZ15 fatcat:oby5enn52batdhoewb4n3ufo4y
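The fragment above refers to embedding images and words in a shared low-dimensional space so that label embeddings help image classification, including for unseen classes. The following is a hypothetical, simplified sketch of that kind of joint-embedding classifier (in the spirit of DeViSE-style models, not necessarily the cited works' exact setup): image features are projected into the word-embedding space and a class is chosen by cosine similarity to its label embedding. The dimensions, the random projection matrix, and the random "pretrained" label vectors are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 512-d image features, 300-d word embeddings for class labels.
img_dim, emb_dim, n_classes = 512, 300, 10
word_vecs = rng.normal(size=(n_classes, emb_dim))          # stand-in label embeddings
word_vecs /= np.linalg.norm(word_vecs, axis=1, keepdims=True)

W = rng.normal(scale=0.01, size=(emb_dim, img_dim))        # projection (learned in practice)

def classify(image_feat, label_embeddings, projection):
    """Project an image feature into the word-embedding space and pick the
    class whose label embedding is most similar (cosine similarity)."""
    z = projection @ image_feat
    z /= np.linalg.norm(z) + 1e-12
    scores = label_embeddings @ z
    return int(np.argmax(scores)), scores

feat = rng.normal(size=img_dim)
pred, scores = classify(feat, word_vecs, W)
print("predicted class:", pred)
```

In practice the projection would be trained with a ranking or hinge loss against the correct label's embedding; because unseen classes also have word vectors, the same scoring rule extends to zero-shot labels, which is what allows improvement "even for classes" without image training data.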

Transfer Learning for Speech and Language Processing [article]

Dong Wang, Thomas Fang Zheng
2015 arXiv   pre-print
between data distributions and data types, but also between model structures (e.g., shallow nets and deep nets) or even model types (e.g., Bayesian models and neural models).  ...  For example, in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language, with little or no re-training data.  ...  For example, in [62], [146], images and words are embedded in the same low-dimensional space via neural networks, by which image classification can be improved by the word embedding, even for classes  ... 
arXiv:1511.06066v1 fatcat:vzl3rb5oqvauxk3cva6t5r7jzy

Lifelong learning for text retrieval and recognition in historical handwritten document collections [article]

Lambert Schomaker
2019 arXiv   pre-print
This chapter provides an overview of the problems that need to be dealt with when constructing a lifelong-learning retrieval, recognition and indexing engine for large historical document collections in  ...  principle is introduced, which describes the evolution from the sparsely-labeled stage that can only be addressed by traditional methods or nearest-neighbor methods on embedded vectors of pre-trained neural  ...  From an over-segmented data set, hit lists are generated of 'mined' word zones and presented to the user, for confirmation.  ... 
arXiv:1912.05156v1 fatcat:k4prbvki4nf6bkmov2mu726gye

Arabic Sentiment Analysis: A Survey

Adel Assiri, Ahmed Emam, Hmood Aldossari
2015 International Journal of Advanced Computer Science and Applications  
To solve a bounding issue with the data as it was reported, we modified an existing logarithmic smoothing technique and applied it to pre-process the performance scores before the analysis.  ...  online commentary and microblogging data in this important domain.  ...  They encode intuitive lexical and discourse knowledge as expressive constraints and integrate them into the learning of conditional random field models via posterior regularization.  ... 
doi:10.14569/ijacsa.2015.061211 fatcat:3scpsrgu5vcddcw3hnnfyo3f5m

Unsupervised Language Acquisition [article]

Carl de Marcken
1996 arXiv   pre-print
This work has application to data compression, language modeling, speech recognition, machine translation, information retrieval, and other tasks that rely on either structural or stochastic descriptions  ...  This thesis presents a computational theory of unsupervised language acquisition, precisely defining procedures for learning language from ordinary spoken or written utterances, with no explicit help from  ...  Berwick, who has supported and taught me for every one of the ten years I have been at MIT.  ... 
arXiv:cmp-lg/9611002v1 fatcat:tvlmfer5uneojcgvqaynxwtbaq

Self-Supervised Speech Representation Learning: A Review [article]

Abdelrahman Mohamed, Hung-yi Lee, Lasse Borgholt, Jakob D. Havtorn, Joakim Edin, Christian Igel, Katrin Kirchhoff, Shang-Wen Li, Karen Livescu, Lars Maaløe, Tara N. Sainath, Shinji Watanabe
2022 arXiv   pre-print
Other approaches rely on multi-modal data for pre-training, mixing text or visual data streams with speech.  ...  Although self-supervised speech representation is still a nascent research area, it is closely related to acoustic word embedding and learning with zero lexical resources, both of which have seen active  ...  Both SSL and PL leverage unlabeled speech-only data.  ... 
arXiv:2205.10643v1 fatcat:w3gm53o4unhkjfkvi4a3d7a3ay
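The last fragment notes that both self-supervised learning (SSL) and pseudo-labeling (PL) exploit unlabeled speech-only data. As an illustration of the pseudo-labeling side only, here is a toy sketch with synthetic features standing in for real acoustic data: train a seed classifier on a small labeled set, label the unlabeled pool, keep only confident predictions, and retrain. The Gaussian-blob features, the 0.95 confidence threshold, and the logistic-regression model are arbitrary choices for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-in for acoustic features: two Gaussian blobs in 20-d.
X_lab = np.vstack([rng.normal(-2, 1, (50, 20)), rng.normal(2, 1, (50, 20))])
y_lab = np.array([0] * 50 + [1] * 50)
X_unlab = np.vstack([rng.normal(-2, 1, (500, 20)), rng.normal(2, 1, (500, 20))])

# 1) Train a seed model on the small labeled set.
model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

# 2) Pseudo-label the unlabeled pool, keeping only confident predictions.
probs = model.predict_proba(X_unlab)
conf = probs.max(axis=1)
keep = conf > 0.95
pseudo_y = probs.argmax(axis=1)[keep]

# 3) Retrain on labeled + confidently pseudo-labeled data.
X_aug = np.vstack([X_lab, X_unlab[keep]])
y_aug = np.concatenate([y_lab, pseudo_y])
model = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
print(f"kept {keep.sum()} pseudo-labeled examples")
```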

Message from the general chair

Benjamin C. Lee
2015 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS)  
We propose a joint learning model which combines pairwise classification and mention clustering with Markov logic.  ...  Learning-based Multi-Sieve Co-reference Resolution with Knowledge, Lev Ratinov and Dan Roth (Saturday 11:00am-11:30am, 202 A, ICC): We explore the interplay of knowledge and structure in co-reference resolution  ...  for Chinese word segmentation and new word detection.  ... 
doi:10.1109/ispass.2015.7095776 dblp:conf/ispass/Lee15 fatcat:ehbed6nl6barfgs6pzwcvwxria

Recent Advances in End-to-End Automatic Speech Recognition [article]

Jinyu Li
2022 arXiv   pre-print
Recently, the speech community has seen a significant trend of moving from deep neural network based hybrid modeling to end-to-end (E2E) modeling for automatic speech recognition (ASR).  ...  Similar ideas have been used for data augmentation by replacing some word segments in an utterance with new word segments from another utterance to train a general E2E model [190, 191].  ...  unlabeled data, DataAugment [28] 2020 CTC Transformer 1.8/3.3 internal LM prior correction, EOS modeling [328] 2021 RNN-T BLSTM 2.2/5.6 w2v-BERT: SSL with unlabeled data, SpecAugment [329] 2021  ... 
arXiv:2111.01690v2 fatcat:6pktwep34jdvjklw4gkri4yn4y

Adaptive and Interactive Approaches to Document Analysis [chapter]

George Nagy, Sriharsha Veeramachaneni
2008 Studies in Computational Intelligence  
., deterministic or statistical constraints on the sequence of letters in syllables or words, and on the sequence of words in phrases or sentences.  ...  Human interaction is often more effective interspersed with algorithmic processes than only before or after the automated parts of the process.  ...  Another example is word completion on touch-screen devices (word completion is seldom used with regular keyboards because it tends  ... 
doi:10.1007/978-3-540-76280-5_9 fatcat:bgqprex26jhxhh4brlyp5g2o7e

Knowledge Efficient Deep Learning for Natural Language Processing [article]

Hai Wang
2020 arXiv   pre-print
Annotation is time-consuming and expensive to produce at scale.  ...  This thesis focuses on adapting such classical methods to modern deep learning models and algorithms.  ...  [53, 54] focused on supervised learning and the logical rules were introduced to augment labeled examples via posterior regularization [36].  ... 
arXiv:2008.12878v1 fatcat:vhcxrhydyfcsnh3iu5t3g5goky

Recent Progresses in Deep Learning based Acoustic Models (Updated) [article]

Dong Yu, Jinyu Li
2018 arXiv   pre-print
We first discuss acoustic models that can effectively exploit variable-length contextual information, such as recurrent neural networks (RNNs), convolutional neural networks (CNNs), and their various combinations  ...  with other models.  ...  Figure 1 gives an example of the posterior output of word CTC. In the figure, the units with the maximum posterior values are blanks and silences at most time steps.  ... 
arXiv:1804.09298v2 fatcat:yfxzxu6qanbndcnmt3loikqeym
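The last fragment mentions the posterior output of a word-level CTC model, where most frames' highest-scoring unit is blank or silence and words appear as sparse spikes. Below is a minimal sketch of greedy CTC decoding that illustrates how such posteriors are collapsed into a word sequence; the toy 8-frame posterior matrix and the three-symbol vocabulary are made up for the example.

```python
import numpy as np

def ctc_greedy_decode(posteriors, blank_id=0, id_to_token=None):
    """Greedy CTC decoding: take the argmax unit per frame, collapse
    consecutive repeats, then drop blanks. With word-level CTC, most frames'
    argmax is the blank symbol and words show up as isolated spikes."""
    best = posteriors.argmax(axis=-1)
    out, prev = [], None
    for t in best:
        if t != prev and t != blank_id:
            out.append(int(t))
        prev = t
    if id_to_token is not None:
        return [id_to_token[i] for i in out]
    return out

# Toy posteriors over {blank, "hello", "world"} for 8 frames.
post = np.array([
    [0.9, 0.05, 0.05],
    [0.2, 0.7,  0.1 ],   # spike for "hello"
    [0.2, 0.7,  0.1 ],
    [0.9, 0.05, 0.05],
    [0.9, 0.05, 0.05],
    [0.1, 0.1,  0.8 ],   # spike for "world"
    [0.9, 0.05, 0.05],
    [0.9, 0.05, 0.05],
])
print(ctc_greedy_decode(post, blank_id=0,
                        id_to_token={0: "<b>", 1: "hello", 2: "world"}))
```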

Crosslinguistic interplay between semantics and phonology in late bilinguals: neurophysiological evidence

NIKOLAY NOVITSKIY, ANDRIY MYACHYKOV, YURY SHTYROV
2018 Bilingualism: Language and Cognition  
Our masked priming paradigm used L1 (Russian) words as masked primes and L2 (English) words as targets.  ...  We conclude that the semantic and phonological interplay between L1 and L2 suggests an integrated bilingual lexicon.  ...  are better neurally at differentiating between L2 words semantically related and unrelated to L1 primes.  ... 
doi:10.1017/s1366728918000627 fatcat:h7wupuernncbbirfzuz3bfftgq

Spoken Content Retrieval—Beyond Cascading Speech Recognition with Text Retrieval

Lin-shan Lee, James Glass, Hung-yi Lee, Chun-an Chan
2015 IEEE/ACM Transactions on Audio Speech and Language Processing  
the Information not present in ASR outputs: to try to utilize the information in speech signals inevitably lost when transcribed into phonemes and words; 3) Directly Matching at the Acoustic Level without  ...  : with efficient presentation of the retrieved objects, an interactive retrieval process incorporating user actions may produce better retrieval results and user experiences.  ...  With this approach, the spoken content is first converted into word sequences or lattices via ASR.  ... 
doi:10.1109/taslp.2015.2438543 fatcat:hwrwmwtlkzfbfagox7bazu5r6a
Showing results 1-15 of 74.