Machine Translation of Legal Information and Its Evaluation [chapter]

Atefeh Farzindar, Guy Lapalme
2009 Lecture Notes in Computer Science  
The authors attempted to shorten this time significantly using a unique statistical machine translation system which has attracted the attention of the federal courts in Canada for its accuracy and speed  ...  This paper presents the machine translation system known as TransLI (Translation of Legal Information), developed by the authors for the automatic translation of Canadian court judgments from English to French  ...  We sincerely thank our lawyers for leading the human evaluation: Pia Zambelli and Diane Doray. The authors also thank Fabrizio Gotti and Jimmy Collin for technical support of the experiments.  ... 
doi:10.1007/978-3-642-01818-3_9 fatcat:ruaksgdh6vcfdpfxl3kywg3i3u

Machine Translation and the Evaluation of Its Quality [chapter]

Mirjam Sepesy Maučec, Gregor Donaj
2019 Natural Language Processing - New Approaches and Recent Applications [Working Title]  
This chapter also describes the evaluation of machine translation quality. It covers manual and automatic evaluations.  ...  Traditional and recently proposed metrics for automatic machine translation evaluation are described.  ...  It is necessary to evaluate MT quality before use in practice. As MT emerges as an important mode of translation, its quality is becoming more and more important.  ... 
doi:10.5772/intechopen.89063 fatcat:smarh3acdrcxfdfwpqvq3qh4z4

Simplification of RNN and Its Performance Evaluation in Machine Translation

Tomohiro Fujita, Zhiwei Luo, Changqin Quan, Kohei Mori
2020 Transactions of the Institute of Systems Control and Information Engineers  
In machine translation experiments on a relatively small corpus, our proposed SGR achieves higher scores than LSTM and GRU.  ...  It is necessary to analyze in more detail the performance on larger datasets and the performance differences due to multi-layering, input weighting, and the number of gates.  ...  The machine translation experiment is performed for performance evaluation.  ... 
doi:10.5687/iscie.33.267 fatcat:4egyfncokrailojvlre5n4hgxu

APE at Scale and Its Implications on MT Evaluation Biases

Markus Freitag, Isaac Caswell, Scott Roy
2019 Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers)  
In this work, we train an Automatic Post-Editing (APE) model and use it to reveal biases in standard Machine Translation (MT) evaluation procedures.  ...  The goal of our APE model is to correct typical errors introduced by the translation process, and convert the "translationese" output into natural text.  ...  Y with the translation model, and b) post-edit the output of the translation by passing it through the APE model.  ... 
doi:10.18653/v1/w19-5204 dblp:conf/wmt/FreitagCR19 fatcat:tijv7fletzdm5jtsl4lyddms7e
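
The snippet above describes a two-step pipeline: a) translate the source with the MT model, and b) post-edit the output by passing it through the APE model. The following is a minimal sketch of that flow using Hugging Face pipelines; the MT checkpoint is a public Marian model, while the APE checkpoint name is a placeholder assumption, not the model trained in the paper.

```python
# Sketch of the translate-then-post-edit flow described above:
# a) translate the source with an MT model, b) pass the raw output through
# an automatic post-editing (APE) model. The APE checkpoint name is a
# placeholder assumption; it is not the paper's model.
from transformers import pipeline

translator = pipeline("translation_en_to_de",
                      model="Helsinki-NLP/opus-mt-en-de")
# Hypothetical seq2seq APE model fine-tuned to rewrite "translationese"
# German into more natural German.
post_editor = pipeline("text2text-generation", model="your-org/ape-de")

def translate_with_ape(source: str) -> str:
    # a) translate with the MT model
    draft = translator(source)[0]["translation_text"]
    # b) post-edit the draft by passing it through the APE model
    return post_editor(draft)[0]["generated_text"]

print(translate_with_ape("The court adjourned until Monday."))
```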

A Transliteration System Based on Bayesian Alignment and its Human Evaluation within a Machine Translation System

Andrew Finch, Keiji Yasuda
2012 Journal of NICT  
As evidence of this phenomenon, it is common practice in competitive machine translation evaluation campaigns for the systems to delete untranslated unknown words from their machine translation output,  ...  A human evaluation is usually preferable to an automatic evaluation, and in the case of this evaluation especially so, since the common machine translation evaluation methods are often biased towards  ... 
doi:10.24812/nictjournal.59.3.4_049 fatcat:dymt27opznbntkt67cisucdzv4

A Bayesian Model of Transliteration and Its Human Evaluation When Integrated into a Machine Translation System

Andrew FINCH, Keiji YASUDA, Hideo OKUMA, Eiichiro SUMITA, Satoshi NAKAMURA
2011 IEICE transactions on information and systems  
A human evaluation is usually preferable to an automatic evaluation, and in the case of this evaluation especially so, since the common machine translation evaluation methods are affected by the length  ...  We demonstrate the effectiveness of our Bayesian segmentation by using it to build a translation model for a phrase-based statistical machine translation (SMT) system trained to perform transliteration  ...  As evidence of this phenomenon, it is common practice in competitive machine translation evaluation campaigns for the systems to delete untranslated unknown words from their machine translation output  ... 
doi:10.1587/transinf.e94.d.1889 fatcat:32rg6oefpjev3ea6e2exccnt6u

Evaluating Arabic to English Machine Translation

Laith S., Taghreed M., Mohammed N.
2014 International Journal of Advanced Computer Science and Applications  
Generally, the manual (human) evaluation of machine translation (MT) systems is better than automatic evaluation, but it is not feasible to use.  ...  This study presents a comparison of the effectiveness of two free online machine translation systems (Google Translate and the Babylon machine translation system) in translating Arabic to English.  ...  Bilingual Evaluation Understudy (BLEU) is based on string matching; it is the most widely used method for automatically evaluating machine translation systems, and it is therefore used in this  ... 
doi:10.14569/ijacsa.2014.051112 fatcat:6d6lehxf3jeyjpupr4ttjrkrnu
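
The BLEU metric cited in this entry boils down to n-gram string matching against a reference. As a rough illustration (not the implementation used in the study above), here is a minimal single-reference sketch: clipped n-gram precisions combined with a brevity penalty. Production evaluations would use a corpus-level, smoothed tool such as sacreBLEU.

```python
# Toy, single-reference BLEU: clipped n-gram precisions plus a brevity
# penalty. Illustrates the string-matching idea only.
from collections import Counter
import math

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def toy_sentence_bleu(candidate, reference, max_n=4):
    """candidate and reference are lists of tokens; one reference only."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precision = overlap / total if overlap > 0 else 1e-9  # crude zero smoothing
        log_precisions.append(math.log(precision))
    # Brevity penalty punishes candidates shorter than the reference.
    c, r = max(len(candidate), 1), len(reference)
    bp = 1.0 if c >= r else math.exp(1 - r / c)
    return bp * math.exp(sum(log_precisions) / max_n)

hyp = "the quick brown fox jumps over the lazy dog".split()
ref = "the quick brown fox jumped over the lazy dog".split()
print(round(toy_sentence_bleu(hyp, ref), 3))  # ~0.6 for this near-match
```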

Using binary classification to evaluate the quality of machine translators

Ran Li, School of Computer and Information Technology, Xinyang Normal University, Xinyang, China, Yihao Yang, Kelin Shen, Mohammed Hijji, School of Computer and Information Technology, Xinyang Normal University, Xinyang, China, School of Foreign Languages, Xinyang Agriculture and Forestry University, Xinyang, China, Industrial Innovation and Robotic Center (IIRC), University of Tabuk, Tabuk 47711, Saudi Arabia
2022 Maǧallaẗ Al-Kuwayt li-l-ʿulūm  
However, machine translators often produce unnatural texts, and an evaluation of machine translators is thus needed to avoid the abuse of machine-translated texts.  ...  This paper presents the use of binary classification to evaluate the quality of machine translators without references.  ...  Inspired by the use of binary classifiers in machine translation detection, the evaluation of machine translators can be framed as a binary classification  ... 
doi:10.48129/kjs.splml.19547 fatcat:hcfp3ngflncobngt66tqdzeqzy
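
To illustrate the framing described in this entry, here is a minimal sketch of reference-free quality evaluation as binary classification of natural vs. machine-translated text. The toy dataset and the character n-gram TF-IDF plus logistic regression pipeline are illustrative assumptions, not the features or classifier used in the paper.

```python
# Reference-free MT evaluation framed as binary classification
# (machine-translated vs. human text). Toy data for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Label 1 = natural (human) text, 0 = machine-translated text.
train_texts = [
    "The committee approved the budget after a brief discussion.",
    "She walked home because the bus had already left.",
    "The committee has approved budget after discussion short.",
    "She walked to home because bus already was left.",
]
train_labels = [1, 1, 0, 0]

# Character n-grams are a cheap, common signal for "translationese".
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_labels)

# The probability of the "natural" class can serve as a quality score for a
# new system output, with no reference translation required.
candidate = "The report was submitted to manager yesterday late."
print(clf.predict_proba([candidate])[0][1])
```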

COSTA MT Evaluation Tool: An Open Toolkit for Human Machine Translation Evaluation

Konstantinos Chatzitheodorou
2013 Prague Bulletin of Mathematical Linguistics  
It is a Java program that can be used to manually evaluate the quality of machine translation output.  ...  A hotly debated topic in machine translation is human evaluation.  ...  The main window of the tool is divided into four parts: i) the source text, ii) the machine translation, iii) the reference translation, and iv) the translation  ... 
doi:10.2478/pralin-2013-0014 fatcat:47eolwxq7jdm3nrv7vohlbuupu

Empirical machine translation and its evaluation

Jesús Giménez
2009 European Association for Machine Translation Conferences/Workshops  
Translation and its Evaluation  ...  Simon Corston-Oliver, Michael Gamon, and Chris Brockett. A Machine Learning Approach to the Automatic Evaluation of Machine Translation.  ... 
dblp:conf/eamt/Gimenez09 fatcat:f76qtbusv5hlthods63m44qd64

Machine Translation Quality Assessment of Selected Works of Xiaoping Deng Supported by Digital Humanistic Method

Qing Wang, Xiao Ma
2021 International Journal of Applied Linguistics and Translation  
, computer technology and statistical methods, so as to evaluate the quality of machine translations generated by different translation software at the lexical, syntactic, semantic and pragmatic levels.  ...  to eliminate people's bias against machine translation, so as to give people a deeper understanding of the advantages and disadvantages of machine translation and improve machine translation software  ...  It evaluates machine translation through the concepts of similarity, error rate, accuracy and recall, and realizes the automation, algorithmization and accuracy of the quality evaluation of  ... 
doi:10.11648/j.ijalt.20210702.15 fatcat:5za7cgzgrrfmfiipcftr2ej74y

A Review of Machine Translation Systems in India and different Translation Evaluation Methodologies

Aditi Kalyani, Priti S. Sajja
2015 International Journal of Computer Applications  
This paper gives a review of the work done on various Indian machine translation systems and existing methods for evaluating the translated output of MT systems.  ...  Machine Translation (MT) is a field of Artificial Intelligence and Natural Language Processing which deals with translation from one language to another using a machine translation system.  ...  We also discussed evaluation strategies for evaluating the translated output of machines.  ... 
doi:10.5120/21840-4917 fatcat:orjsd4l2cvem7l46oaejakf6ja

Evaluating English to Arabic Machine Translation Using BLEU

Mohammed N., Taghreed M., Emad M., Izzat M.
2013 International Journal of Advanced Computer Science and Applications  
There are many automatic methods used to evaluate different machine translators; one of these is the Bilingual Evaluation Understudy (BLEU) method, which was adopted and implemented to achieve the main  ...  This study aims to compare the effectiveness of two popular machine translation systems (Google Translate and the Babylon machine translation system) used to translate English sentences into Arabic relative  ...  To explain that, we take a source sentence as an example and translate it using the Babylon and Google Translate machine translation systems, and two human translations called Reference  ... 
doi:10.14569/ijacsa.2013.040109 fatcat:jtjaxzo6tbdr7kljpsqpbqv2mu

Grammar Accuracy Evaluation (GAE): Quantifiable Quantitative Evaluation of Machine Translation Models [article]

Dojun Park, Youngjin Jang, Harksoo Kim
2022 arXiv   pre-print
As a result of analyzing the quality of machine translation with BLEU and GAE, it was confirmed that the BLEU score does not represent the absolute performance of machine translation models and that GAE compensates  ...  for the shortcomings of BLEU with flexible evaluation of alternative synonyms and changes in sentence structure.  ...  the quantitative evaluations for measuring the performance of machine translation models and has been the most widely used evaluation for machine translation models since it appeared in 2002.  ... 
arXiv:2105.14277v3 fatcat:24ixlbuo7zdflepks7l2k3vm6a

Automatic evaluation of the quality of machine translation of a scientific text: the results of a five-year-long experiment

Ilya Ulitkin, Irina Filippova, Natalia Ivanova, Alexey Poroykov, A. Zheltenkov, A. Mottaeva
2021 E3S Web of Conferences  
It is shown that modern systems of automatic evaluation of translation quality allow errors made by machine translation systems to be identified and systematized, which will enable the improvement of the  ...  These methods, i.e. methods based on string matching and n-gram models, make it possible to compare the quality of a machine translation to a reference translation.  ...  The authors wish to thank their colleague Stephen Garratt (England) for his helpful suggestions on manuscript editing and polishing the language of the paper.  ... 
doi:10.1051/e3sconf/202128408001 fatcat:55cwrcu4kbbvvknucmeiitkdgy
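
One of the simplest string-matching metrics of the kind mentioned in this entry is word-level edit distance, the core of WER (and, minus the shift operation, of TER). The sketch below is for illustration under that assumption, not the specific metrics used in the five-year experiment.

```python
# Minimal word-level edit distance between an MT output and a reference,
# the string-matching core behind WER-style error rates.
def word_edit_distance(hyp_tokens, ref_tokens):
    """Levenshtein distance over words (substitutions, insertions, deletions)."""
    m, n = len(hyp_tokens), len(ref_tokens)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp_tokens[i - 1] == ref_tokens[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]

hyp = "the results of experiment are shown in table".split()
ref = "the results of the experiment are shown in table one".split()
edits = word_edit_distance(hyp, ref)
print(edits / len(ref))  # WER-style error rate: edits per reference word
```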
Showing results 1 — 15 out of 512,063 results