
The Effect of Cross-Lingual Pooling on Evaluation

Kazuko Kuriyama, Masaharu Yoshioka, Noriko Kando
2001 NTCIR Conference on Evaluation of Information Access Technologies  
The purpose of this study is to examine whether there is an effect on the relative evaluation of IR systems using the relevance judgments made by the pooling method and additional interactive searches  ...  Almost the same rankings were produced by all the relevance judgments. Therefore our results verified the reliability of evaluation using test collections based on pooling.  ...  This research is a part of the research project "A Study on Ubiquitous Information System for Utilization of Highly Distributed Information Resources", supported by JSPS (Japan Society for the Promotion  ... 
dblp:conf/ntcir/KuriyamaYK01 fatcat:blrjw72iwfcnvodig4cklfbagy

LAReQA: Language-agnostic answer retrieval from a multilingual pool [article]

Uma Roy, Noah Constant, Rami Al-Rfou, Aditya Barua, Aaron Phillips, Yinfei Yang
2020 arXiv   pre-print
This finding underscores our claim that language-agnostic retrieval is a substantively new kind of cross-lingual evaluation.  ...  Interestingly, the embedding baseline that performs the best on LAReQA falls short of competing baselines on zero-shot variants of our task that only target "weak" alignment.  ...  We thank Sebastian Ruder and Melvin Johnson for helpful comments on an earlier draft of this paper. We also thank Rattima Nitisaroj for helping us evaluate the quality of our Thai sentence breaking.  ... 
arXiv:2004.05484v1 fatcat:75ilkbbezzhdvnqm4xzultmldu
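The retrieval setup LAReQA evaluates — ranking candidates drawn from a single multilingual pool by similarity to the query embedding, regardless of candidate language — can be sketched minimally. The IDs and toy 3-d vectors below are illustrative, not from the paper.

```python
# Minimal sketch of embedding-based retrieval from a multilingual pool:
# every candidate, whatever its language, is ranked by cosine similarity
# to the query embedding.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def retrieve(query_vec, pool):
    """pool: dict mapping candidate ID (any language) -> embedding vector."""
    return sorted(pool, key=lambda cid: cosine(query_vec, pool[cid]), reverse=True)

pool = {
    "en-1": [0.9, 0.1, 0.0],
    "de-7": [0.8, 0.2, 0.1],
    "th-3": [0.0, 1.0, 0.2],
}
print(retrieve([1.0, 0.0, 0.0], pool))  # ['en-1', 'de-7', 'th-3']
```

A language-agnostic model should place same-meaning candidates near the top of this ranking irrespective of which language they are written in.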

NACSIS test collection workshop (NTCIR-1) (poster abstract)

Noriko Kando, Kazuko Kuriyama, Toshihiko Nozue
1999 Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR '99  
ACKNOWLEDGMENTS We thank all the participants for their contributions, and the analysts who worked very hard with surprisingly excellent concentration.  ...  of variations of pooling methods and coverage, and variation in relevance assessments have on the evaluation of search effectiveness.  ...  Using the results of the pretest, we evaluated (1) coverage of the initial pooling done in NACSIS, (2) effectiveness of pooling, and (3) the reliability of a test collection through investigating the effect  ... 
doi:10.1145/312624.312730 dblp:conf/sigir/KandoKN99 fatcat:zydfuai6kjb5vhfemwufcr46ua
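The pooling method referenced in the NTCIR entries above can be sketched in a few lines: human assessors judge only the union of the top-k documents from each participating system's ranked run, rather than the whole collection. This is a generic sketch; the function name and pool depth are illustrative, not taken from the papers.

```python
# Sketch of the pooling method for building relevance judgments:
# the judged pool is the union of the top-`depth` documents retrieved
# by each participating system.

def build_pool(ranked_runs, depth=100):
    """Union of the top-`depth` doc IDs from each system's ranked run."""
    pool = set()
    for run in ranked_runs:  # each run is a ranked list of document IDs
        pool.update(run[:depth])
    return pool

runs = [
    ["d1", "d2", "d3", "d4"],
    ["d3", "d5", "d1", "d6"],
]
print(sorted(build_pool(runs, depth=2)))  # ['d1', 'd2', 'd3', 'd5']
```

The reliability questions these papers study follow directly from this construction: documents outside every system's top-k are never judged, so pool depth and system coverage can, in principle, bias the resulting evaluation.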

Retrieval Effectiveness of Cross Language Information Retrieval Search Engines [chapter]

Schubert Foo
2011 Lecture Notes in Computer Science  
This study evaluates the retrieval effectiveness of English-Chinese (EC) cross-language information retrieval (CLIR) on four common search engines along the dimensions of recall and precision.  ...  Findings showed that CLIR effectiveness is poor with average recall and precision values of 0.165 and 0.539 for monolingual EE/CC searches, and 0.078 and 0.282 for cross-lingual CE/EC searches.  ...  Acknowledgement The author gratefully acknowledges the contributions of Gao Xiuqing, Katherine Chia and Tang Jiao who carried out this study as part of their Critical Inquiry Project in their M.Sc.  ... 
doi:10.1007/978-3-642-24826-9_37 fatcat:cfheaohlg5hr5fxbajzi6y45hi

How Language-Neutral is Multilingual BERT? [article]

Jindřich Libovický and Rudolf Rosa and Alexander Fraser
2019 arXiv   pre-print
Previous work probed the cross-linguality of mBERT using zero-shot transfer learning on morphological and syntactic tasks. We instead focus on the semantic properties of mBERT.  ...  semantics to allow high-accuracy word-alignment and sentence retrieval but is not yet good enough for the more difficult task of MT quality estimation.  ...  In this paper, we directly assess the semantic cross-lingual properties of mBERT.  ... 
arXiv:1911.03310v1 fatcat:ti3oyahm45ggtjiiwdsiy47s7m

The Effectiveness of Cross-lingual Link Discovery

Ling-Xiang Tang, Kelly Y. Itakura, Shlomo Geva, Andrew Trotman, Yue Xu
2011 NTCIR Conference on Evaluation of Information Access Technologies  
This paper describes the evaluation used to benchmark the effectiveness of cross-lingual link discovery (CLLD).  ...  Cross-lingual link discovery is a way of automatically finding prospective links between documents in different languages, which is particularly helpful for knowledge discovery across different language domains  ...  , and the effectiveness of cross-lingual link discovery is discussed.  ... 
dblp:conf/ntcir/TangIGTX11 fatcat:acoaju4ppbenfnbdnum5saamrm

Evaluating the Cross-Lingual Effectiveness of Massively Multilingual Neural Machine Translation

Aditya Siddhant, Melvin Johnson, Henry Tsai, Naveen Ari, Jason Riesa, Ankur Bapna, Orhan Firat, Karthik Raman
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)  
In this paper, we evaluate the cross-lingual effectiveness of representations from the encoder of a massively multilingual NMT model on 5 downstream classification and sequence labeling tasks covering  ...  Its improved translation performance on low resource languages hints at potential cross-lingual transfer capability for downstream tasks.  ...  We detail all of the experiments in this section. XNLI: Cross-lingual NLI XNLI is a popularly used corpus for evaluating cross-lingual sentence classification.  ... 
doi:10.1609/aaai.v34i05.6414 fatcat:lrbenej25ne33m2iloiov5esla

Development of a spontaneous large vocabulary speech recognition system for Qatari Arabic

Mohamed Elmahdy
2013 Qatar Foundation Annual Research Forum Proceedings  
A major problem with dialectal Arabic speech recognition is the sparsity of speech resources.  ...  The proposed MSA-based transfer learning technique was performed by applying orthographic normalization, phone mapping, data pooling, acoustic model adaptation, and system combination.  ...  Interpolation weights were optimized on the dev. set. The cross-lingual interpolation resulted in a vocabulary size of 265.7K words.  ... 
doi:10.5339/qfarf.2013.ictp-053 fatcat:rhpntl44bngdfg3vwdegy4qbwq
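The cross-lingual interpolation mentioned in the snippet is, in its simplest linear form, a weighted mixture of two language models over a pooled vocabulary, with the mixture weight tuned on a dev set. A minimal sketch, assuming unigram models and toy probabilities (the words and weight below are illustrative, not from the abstract):

```python
# Sketch of linear language-model interpolation:
#   P(w) = lam * P_msa(w) + (1 - lam) * P_dialect(w)
# computed over the pooled vocabulary of both models.

def interpolate(p_msa, p_dialect, lam):
    """Linear mixture of two unigram models; lam is tuned on a dev set."""
    vocab = set(p_msa) | set(p_dialect)  # pooled (cross-lingual) vocabulary
    return {w: lam * p_msa.get(w, 0.0) + (1 - lam) * p_dialect.get(w, 0.0)
            for w in vocab}

p = interpolate({"kitab": 0.6, "qalam": 0.4},
                {"ktab": 0.7, "kitab": 0.3}, lam=0.5)
print(len(p))  # 3: the mixture covers both vocabularies
```

The pooled vocabulary is why the abstract reports a combined vocabulary size: every word seen by either model survives into the interpolated model.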

Cross-Lingual Non-ferrous Metals Related News Recognition Method Based on CNN with a Limited Bi-lingual Dictionary

Xudong Hong, Xiao Zheng, Jinyuan Xia, Linna Wei, Wei Xue
2019 Computers Materials & Continua  
Then, to improve recognition performance, we use a variant of the CNN to learn recognition features and construct the recognition model.  ...  Firstly, considering the lack of related language resources for non-ferrous metals, we use a limited bilingual dictionary and CCA to learn cross-lingual word vectors and to represent news in different languages  ...  The numbers of training and test examples are shown in Tab. 2. We use Precision, Recall, and F-measure to evaluate the effect of our method, which are calculated as follows.  ... 
doi:10.32604/cmc.2019.04059 fatcat:d3dyragvhvg37jyi53kjqp4uee
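The Precision/Recall/F-measure formulas the snippet alludes to (the excerpt cuts them off) are the standard set-based definitions. A minimal sketch with toy data; the exact variants used in the paper are not shown in the excerpt:

```python
# Standard set-based evaluation metrics:
#   precision = |retrieved ∩ relevant| / |retrieved|
#   recall    = |retrieved ∩ relevant| / |relevant|
#   F1        = harmonic mean of precision and recall

def precision_recall_f1(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)  # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = precision_recall_f1({"a", "b", "c", "d"}, {"a", "b", "e"})
print(round(p, 3), round(r, 3), round(f, 3))  # 0.5 0.667 0.571
```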

Evaluating the Cross-Lingual Effectiveness of Massively Multilingual Neural Machine Translation [article]

Aditya Siddhant, Melvin Johnson, Henry Tsai, Naveen Arivazhagan, Jason Riesa, Ankur Bapna, Orhan Firat, Karthik Raman
2019 arXiv   pre-print
In this paper, we evaluate the cross-lingual effectiveness of representations from the encoder of a massively multilingual NMT model on 5 downstream classification and sequence labeling tasks covering  ...  Its improved translation performance on low resource languages hints at potential cross-lingual transfer capability for downstream tasks.  ...  We detail all of the experiments in this section. XNLI: Cross-lingual NLI XNLI is a popularly used corpus for evaluating cross-lingual sentence classification.  ... 
arXiv:1909.00437v1 fatcat:cve74jijyvgmdbiebxbpdm26fm

Unsupervised Multilingual Sentence Embeddings for Parallel Corpus Mining [article]

Ivana Kvapilíková, Mikel Artetxe, Gorka Labaka, Eneko Agirre, Ondřej Bojar
2021 arXiv   pre-print
The quality of the representations is evaluated on two parallel corpus mining tasks with improvements of up to 22 F1 points over vanilla XLM.  ...  We first produce a synthetic parallel corpus using unsupervised machine translation, and use it to fine-tune a pretrained cross-lingual masked language model (XLM) to derive the multilingual sentence representations  ...  Acknowledgments This study was supported in part by grants SVV 260 575 and 1050119 of the Charles University Grant Agency and 19-26934X of the Czech Science Foundation, by a Facebook Fellowship, the  ... 
arXiv:2105.10419v1 fatcat:lnc3qfaprngihhse2ras5npe3y

Analyzing Zero-shot Cross-lingual Transfer in Supervised NLP Tasks [article]

Hyunjin Choi, Judong Kim, Seongho Joe, Seungjai Min, Youngjune Gwon
2021 arXiv   pre-print
Our results indicate that the presence of cross-lingual transfer is most pronounced in STS, sentiment analysis the next, and MRC the last.  ...  In zero-shot cross-lingual transfer, a supervised NLP task trained on a corpus in one language is directly applicable to another language without any additional training.  ...  Cross-lingual Mapping for Fine-grained Alignment of Sentence Embeddings. (Table I: evaluation on STS tasks.)  ... 
arXiv:2101.10649v1 fatcat:dk62xepylfhaligkv5j3jer3xq

How robust are multilingual information retrieval systems?

Thomas Mandl, Christa Womser-Hacker, Giorgio Di Nunzio, Nicola Ferro
2008 Proceedings of the 2008 ACM symposium on Applied computing - SAC '08  
This paper analyzes a large amount of evaluation experiments from the Cross Language Evaluation Forum (CLEF).  ...  Our analysis shows that a small decrease in performance of bi- and multi-lingual retrieval goes along with a tremendous difference between the geometric mean and the average of topics.  ...  The analysis showed that cross-lingual retrieval with its inherent difficulty compared to mono-lingual retrieval greatly increases the divergence between rankings based on MAP and GMAP.  ... 
doi:10.1145/1363686.1363949 dblp:conf/sac/MandlWNF08 fatcat:lpeokm4nfvbbbew2l3dqza5ng4
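The gap between the arithmetic mean (MAP) and the geometric mean (GMAP) that the CLEF analysis highlights arises because the geometric mean punishes near-zero topics severely. A small illustration with made-up per-topic average-precision scores (the epsilon smoothing is a common convention for handling zero scores, not a detail from this paper):

```python
# Arithmetic vs geometric mean over per-topic average precision:
# one near-failed topic barely moves the arithmetic mean but drags the
# geometric mean far down.
import math

def mean(scores):
    return sum(scores) / len(scores)

def gmean(scores, eps=1e-5):
    # eps keeps zero-score topics from collapsing the product to zero
    logs = [math.log(s + eps) for s in scores]
    return math.exp(sum(logs) / len(logs)) - eps

per_topic = [0.4, 0.5, 0.45, 0.001]  # one topic nearly fails
print(round(mean(per_topic), 3))     # 0.338
print(round(gmean(per_topic), 3))    # far below the arithmetic mean
```

This sensitivity to worst-case topics is exactly why GMAP is used as a robustness-oriented complement to MAP.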

Cross-Lingual Relation Extraction with Transformers [article]

Jian Ni and Taesun Moon and Parul Awasthy and Radu Florian
2020 arXiv   pre-print
More importantly, our models can be applied to perform zero-shot cross-lingual RE, achieving the state-of-the-art cross-lingual RE performance on two datasets (68-89% of the accuracy of the supervised  ...  Relation extraction (RE) is one of the most important tasks in information extraction, as it provides essential information for many NLP applications.  ...  Our models achieve the state-ofthe-art cross-lingual RE performance on two datasets (68-89% of the accuracy of the supervised target-language RE model).  ... 
arXiv:2010.08652v1 fatcat:xvbomv66gjavhgs3shbwsgnxv4

Neural Relation Extraction with Multi-lingual Attention

Yankai Lin, Zhiyuan Liu, Maosong Sun
2017 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)  
attention to consider the information consistency and complementarity among cross-lingual texts.  ...  Experimental results on real-world datasets show that our model can take advantage of multi-lingual texts and consistently achieve significant improvements on relation extraction as compared with baselines  ...  Acknowledgments This work is supported by the 973 Program  ... 
doi:10.18653/v1/p17-1004 dblp:conf/acl/LinLS17 fatcat:qnpzfmi3g5hbvd7cn7p3veprny