885,901 Hits in 9.6 sec

A Generative Model of Words and Relationships from Multiple Sources [article]

Stephanie L. Hyland, Theofanis Karaletsos, Gunnar Rätsch
2015 arXiv   pre-print
We propose a generative model which integrates evidence from diverse data sources, enabling the sharing of semantic information.  ...  Neural language models are a powerful tool to embed words into semantic vector spaces. However, learning such models generally relies on the availability of abundant and diverse training examples.  ...  Acknowledgments This work was funded by the Memorial Hospital and the Sloan Kettering Institute (MSKCC; to G.R.).  ... 
arXiv:1510.00259v2 fatcat:o5nb6ynefrgbfh6t7lkurokupy

Using Multiple Encoders for Chinese Neural Question Generation from the Knowledge Base

Meixi Chen, Jiahao Zhao, Ming Liu
2019 IOP Conference Series: Materials Science and Engineering  
encoder-decoder network in Chinese question generation, where a triple from the knowledge base is encoded as the input and a question is decoded as the output.  ...  Question generation is an important task in the fields of natural language processing and intelligent tutoring systems.  ...  Acknowledgments This work is supported by the National Natural Science Foundation of China (61502397).  ... 
doi:10.1088/1757-899x/490/4/042013 fatcat:ettevzp4pfd3hlcql3esc5yxgy

Assessing the use of multiple sources in student essays

Peter Hastings, Simon Hughes, Joseph P. Magliano, Susan R. Goldman, Kimberly Lawless
2012 Behavior Research Methods  
The first was a simple pattern-matching approach called "multi-word" that allowed for flexible matching of words and phrases in the sentences.  ...  Finally, the third was a machine-learning technique, support vector machines, which learned a classification scheme from the corpus.  ...  Author note The authors gratefully acknowledge the contributions of Rebecca Penzik and Kristen Rutkowski in the conduct of the experiment and analyses of the assessment data reported herein.  ... 
doi:10.3758/s13428-012-0214-0 pmid:22653561 fatcat:mb6dovyikbccfpgsm66xzayks4

Improving Natural Language Inference Using External Knowledge in the Science Questions Domain [article]

Xiaoyan Wang, Pavan Kapanipathi, Ryan Musa, Mo Yu, Kartik Talamadupula, Ibrahim Abdelaziz, Maria Chang, Achille Fokoue, Bassem Makni, Nicholas Mattei, Michael Witbrock
2018 arXiv   pre-print
We present the results of applying our techniques on text, graph, and text-to-graph based models, and discuss implications for the use of external knowledge in solving the NLI problem.  ...  Our model achieves the new state-of-the-art performance on the NLI problem over the SciTail science questions dataset.  ...  (2015) introduced a word-by-word attention model that learns conditional encodings of premise and hypothesis for textual entailment.  ... 
arXiv:1809.05724v2 fatcat:7dolmjp3rvgxljilc6qanqsgvy

Lexical acquisition and clustering of word senses to conceptual lexicon construction

Charnyote Pluempitiwiriyawej, Nick Cercone, Xiangdong An
2009 Computers and Mathematics with Applications  
As part of this research, we define lexical models to represent words and lexicons.  ...  We describe a mechanism and an algorithm to support construction of a large complex conceptual lexicon from an existing alphabetical lexicon.  ...  Last but not least, we would like to thank the Faculty of Science and Engineering, York University, the Faculty of Science, Mahidol University, and the Department of Computer Science, Mahidol University for partial  ... 
doi:10.1016/j.camwa.2009.01.001 fatcat:p6rohhaxpjg7vjxspyfgm7uis4

Towards ontology-driven interoperability for simulation-based applications

Perakath Benjamin, Kumar Akella
2009 Proceedings of the 2009 Winter Simulation Conference (WSC)  
Akella's areas of expertise include simulation modeling, the design of experiments, numerical modeling, data mining, and text mining.  ...  Benjamin has a Ph.D. in Industrial Engineering from Texas A&M (1991).  ...  If any of the related concepts ("target concepts") from WordNet® exist in the pool of concepts extracted from the domain text, then a relationship is discovered between the source concept and the target  ... 
doi:10.1109/wsc.2009.5429286 dblp:conf/wsc/BenjaminA09 fatcat:hi4gy4anlfenxgrz73irdgzz7i

reStructured Pre-training [article]

Weizhe Yuan, Pengfei Liu
2022 arXiv   pre-print
In such a paradigm, the role of data will be re-emphasized, and model pre-training and fine-tuning on downstream tasks are viewed as a process of data storing and accessing.  ...  Experimentally, RST models not only surpass strong competitors (e.g., T0) on 52/55 popular datasets from a variety of NLP tasks, but also achieve superior performance in National College Entrance Examination  ...  Additionally, many thanks to Zhendang Wan, a famous English teacher from No. 1 High School of Huaibei, who has been patiently helping with the essay scoring work and giving constructive suggestions  ... 
arXiv:2206.11147v1 fatcat:f7h352jebnekjpgwadqtysiweq

MMLUP: Multi-source & Multi-task Learning for User Profiles in Social Network

Dongjie Zhu, Yuhua Wang, Chuiju You, Jinming Qiu, Ning Cao, Chenjing Gong, Guohua Yang, Helen Min Zhou
2019 Computers Materials & Continua  
Secondly, we design a shared layer to fuse multiple heterogeneous data sources as a general shared representation for multi-task learning.  ...  Firstly, we design their own feature extraction models for multiple heterogeneous data sources.  ...  We compare MMLUP with a single-task model, a single-source model and a model with self-attention removed. The results from Tab. 3 show that our model performs very well and exceeds all models.  ... 
doi:10.32604/cmc.2019.06041 fatcat:bxrf3ttjivcajdtk5fwi536jn4

Improving Natural Language Inference Using External Knowledge in the Science Questions Domain

Xiaoyan Wang, Pavan Kapanipathi, Ryan Musa, Mo Yu, Kartik Talamadupula, Ibrahim Abdelaziz, Maria Chang, Achille Fokoue, Bassem Makni, Nicholas Mattei, Michael Witbrock
2019 Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence and the Twenty-Eighth Innovative Applications of Artificial Intelligence Conference  
We present the results of applying our techniques on text, graph, and text-and-graph based models; and discuss the implications of using external knowledge to solve the NLI problem.  ...  Our model achieves close to state-of-the-art performance for NLI on the SciTail science questions dataset.  ...  Given a premise P (with K words) and a hypothesis H (with J words), the model computes the matching results between them as follows: • Context Encoding: A contextual representation of the premise and hypothesis  ... 
doi:10.1609/aaai.v33i01.33017208 fatcat:chb6qvwgrzav5f5x2olzavo5bi

Improving Bilingual Lexicon Construction from Chinese-English Comparable Corpora via Dependency Relationship Mapping

Hua Xu, Dandan Liu, Longhua Qian, Guodong Zhou
2011 2011 International Conference on Asian Language Processing  
Following this line of research, this paper proposes a dependency relationship mapping model and investigates its effect on bilingual lexicon construction.  ...  The experiments show that, by mapping context words, dependency relationship types and directions simultaneously when calculating the similarity between two words in the source and target languages respectively  ...  CONCLUSION In this paper, we propose a dependency relationship mapping model and apply this model to bilingual lexicon construction from a Chinese-English comparable corpus.  ... 
doi:10.1109/ialp.2011.22 dblp:conf/ialp/XuLQZ11 fatcat:tcdxb3vcwndi5eiqnu36zis3fq

Benchmarking Semantic Capabilities of Analogy Querying Algorithms [chapter]

Christoph Lofi, Athiq Ahamed, Pratima Kulkarni, Ravi Thakkar
2016 Lecture Notes in Computer Science  
However, it is still quite unclear how well these algorithms work from a semantic point of view. One of the problems is that there is no clear consensus on the intended semantics of analogy queries.  ...  Current developments in natural language processing and machine learning have resulted in some very promising algorithms relying on deep-learning neural word embeddings, which might contribute to finally realizing  ...  Especially, our dataset is not automatically generated from structured data sources, but instead relies on crowdsourcing and a large number of human judgements.  ... 
doi:10.1007/978-3-319-32025-0_29 fatcat:vqw2fba3izb2zejo7wv2gvx6eu

Hybrid System Combination Framework for Uyghur–Chinese Machine Translation

Yajuan Wang, Xiao Li, Yating Yang, Azmat Anwar, Rui Dong
2021 Information  
In the second layer, the outputs of multiple systems are combined to leverage the advantages of SMT and NMT models by using a multi-source-based system combination approach and the voting-based system combination  ...  Both the statistical machine translation (SMT) model and the neural machine translation (NMT) model are representative models in Uyghur–Chinese machine translation tasks, each with its own merits.  ...  [22] used the extracted semantic similarity as a new feature to generate multiple new systems from a single SMT system and to combine multiple systems.  ... 
doi:10.3390/info12030098 fatcat:lxwjzakrmnfkjnw2xvpoygr5km

An information integration framework for e-commerce

H. Benetti, D. Beneventano, S. Bergamaschi, F. Guerra, M. Vincini
2002 IEEE Intelligent Systems  
Starting from local source descriptions, the Global Schema Builder generates an integrated view of all data sources and expresses those views using XML.  ...  Momis uses  ...  One of the main challenges for e-commerce infrastructure designers is to retrieve data from different sources and create a unified view that overcomes contradictions and redundancies.  ...  Acknowledgments This article is an extended version of the article "SI-Designer: An Integration Framework for E-Commerce."  ... 
doi:10.1109/5254.988444 fatcat:jpqavoul55gphmcs2qky54y2li

Look-ahead Attention for Generation in Neural Machine Translation [article]

Long Zhou, Jiajun Zhang, Chengqing Zong
2017 arXiv   pre-print
However, we find that the generation of a target word depends not only on the source sentence, but also relies heavily on the previously generated target words, especially the distant words which are difficult  ...  The attention model has become a standard component in neural machine translation (NMT) and guides the translation process by selectively focusing on parts of the source sentence when predicting each target  ...  Acknowledgments The research work has been funded by the Natural Science Foundation of China under Grant No. 61673380, No. 61402478 and No. 61403379.  ... 
arXiv:1708.09217v1 fatcat:fjmhi3fsabewrpqqzljbgoflty

The Use of a Structural N-gram Language Model in Generation-Heavy Hybrid Machine Translation [chapter]

Nizar Habash
2004 Lecture Notes in Computer Science  
A structural N-gram model captures the relationship between words in a dependency representation without taking into account the overall structure at the phrase level.  ...  This paper describes the use of a statistical structural N-gram model in the natural language generation component of a Spanish-English generationheavy hybrid machine translation system.  ...  Acknowledgments This work has been supported, in part, by Army Research Lab Cooperative Agreement DAAD190320020, NSF CISE Research Infrastructure Award EIA0130422, and Office of Naval Research MURI Contract  ... 
doi:10.1007/978-3-540-27823-8_7 fatcat:hsrmscafsnatvfjyhymfrmtovq
Showing results 1–15 of 885,901.