199,266 Hits in 2.7 sec

On validation of ATL transformation rules by transformation models

Fabian Büttner, Jordi Cabot, Martin Gogolla
2011 Proceedings of the 8th International Workshop on Model-Driven Engineering, Verification and Validation - MoDeVVa  
In contrast, transformation models provide an integrated structural description of the source and target metamodels and the transformation between them.  ...  Model-to-model transformations constitute an important ingredient in model-driven engineering.  ...  target model from the source model.  ... 
doi:10.1145/2095654.2095666 fatcat:yfym7wrwjjawxgjsp5e2mhcfae

Lexically Constrained Neural Machine Translation with Levenshtein Transformer [article]

Raymond Hendy Susanto, Shamil Chollampatt, Liling Tan
2020 arXiv   pre-print
This paper proposes a simple and effective algorithm for incorporating lexical constraints in neural machine translation.  ...  Leveraging the flexibility and speed of a recently proposed Levenshtein Transformer model (Gu et al., 2019), our method injects terminology constraints at inference time without any impact on decoding  ...  Song et al. (2019) trained a Transformer (Vaswani et al., 2017) model by augmenting the data to include the constraint target phrases in the source sentence.  ... 
arXiv:2004.12681v1 fatcat:v6yuzxnuvbb27krshvzegyr2vq
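The constraint injection described in this abstract can be sketched structurally: the decoder's initial hypothesis is seeded with the terminology tokens, which are then protected from deletion during the Levenshtein Transformer's iterative insertion/deletion refinement. The function below is an illustrative sketch only, not the authors' implementation; in real use the returned tokens and deletion mask would be fed into a Levenshtein Transformer decoder.

```python
def seed_with_constraints(constraints):
    """Build an initial decoder hypothesis from terminology constraints.

    Flattens each (possibly multi-word) constraint phrase into tokens and
    marks every constraint token as non-deletable, so an insertion/deletion
    decoder may only add words around them. Names are hypothetical.
    """
    tokens, no_delete = [], []
    for phrase in constraints:
        for tok in phrase.split():
            tokens.append(tok)
            no_delete.append(True)
    return tokens, no_delete

# Seeding with two constraints, one of them multi-word:
tokens, mask = seed_with_constraints(["neural machine", "translation"])
# tokens == ["neural", "machine", "translation"], mask == [True, True, True]
```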

Translation of Restricted OCL Constraints into Graph Constraints for Generating Meta Model Instances by Graph Grammars

Jessica Winkelmann, Gabriele Taentzer, Karsten Ehrig, Jochen M. Küster
2008 Electronic Notes in Theoretical Computer Science  

To satisfy also the given OCL constraints, well-formedness checks have to be done in addition.  ...  We present a restricted form of OCL constraints that can be translated to graph constraints which can be checked during the instance generation process.  ...  to visual modeling languages".  ... 
doi:10.1016/j.entcs.2008.04.038 fatcat:hdauvydf7fcyvkmwz2kgidxkjy

Incorporating Terminology Constraints in Automatic Post-Editing [article]

David Wan, Chris Kedzie, Faisal Ladhak, Marine Carpuat, Kathleen McKeown
2020 arXiv   pre-print
However, we show that our models do not learn to copy constraints systematically and suggest a simple data augmentation technique that leads to improved performance and robustness.  ...  While there exist techniques for incorporating terminology constraints during inference for MT, current APE approaches cannot ensure that they will appear in the final translation.  ...  Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.  ... 
arXiv:2010.09608v1 fatcat:x2ucsyytv5dlxfb3m6ouyugwni

Constraint Translation Candidates: A Bridge between Neural Query Translation and Cross-lingual Information Retrieval [article]

Tianchi Bi and Liang Yao and Baosong Yang and Haibo Zhang and Weihua Luo and Boxing Chen
2020 arXiv   pre-print
The constraint translation candidates are employed at both of training and inference time, thus guiding the translation model to learn and generate well performing target queries.  ...  Besides, the translation model lacks a mechanism at the inference time to guarantee the generated words to match the search index.  ...  Since the target search index is built pursuant to the probability distribution of terms in documents, a natural way is to transfer the translation to those target candidates being likely to appear in  ... 
arXiv:2010.13658v1 fatcat:voprjm6qnnhgxme2s5bs2xkfje

Knowledge Graphs Effectiveness in Neural Machine Translation Improvement

Benyamin Ahmadnia, Bonnie J. Dorr, Parisa Kordjamshidi
2020 Computer Science  
forces entity relations in the source-language to be carried over to the corresponding entities in the target-language translation.  ...  The core idea is to use KG entity relations as embedding constraints to improve the mapping from source to target.  ...  Acknowledgements The authors would like to express their sincere gratitude to Dr. Kimberly Foster, Dr. Michael W. Mislove, Dr. Carola Wenk, and Dr.  ... 
doi:10.7494/csci.2020.21.3.3701 fatcat:iufwewlllze7faj4lx46i3l7ey

Code-Switching for Enhancing NMT with Pre-Specified Translation [article]

Kai Song, Yue Zhang, Heng Yu, Weihua Luo, Kun Wang, Min Zhang
2019 arXiv   pre-print
Our method does not change the NMT model or decoding algorithm, allowing the model to learn lexicon translations by copying source-side target words.  ...  We investigate a data augmentation method, making code-switched training data by replacing source phrases with their target translations.  ...  the source and target sides during training, so that a model can translate such words by learning to translate placeholder tags.  ... 
arXiv:1904.09107v4 fatcat:5ydvlz6nkrcwvph63x26hii2zy
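The data augmentation this abstract describes — replacing source phrases with their pre-specified target translations to produce code-switched training data — can be sketched in a few lines. This is a minimal illustration under assumed names (`phrase_table` mapping source phrases to target translations), not the paper's actual pipeline, which operates on tokenized, aligned corpora.

```python
def code_switch(source, phrase_table):
    """Replace source phrases with their target translations (code-switching).

    `phrase_table` maps source-language phrases to pre-specified target
    translations; the name and dict format are illustrative assumptions.
    """
    out = source
    for src_phrase, tgt_phrase in phrase_table.items():
        out = out.replace(src_phrase, tgt_phrase)
    return out

# A toy German->English augmentation:
augmented = code_switch("das Haus ist gross", {"das Haus": "the house"})
# augmented == "the house ist gross"
```

Training on such mixed-language source sentences teaches the model to copy the embedded target-side words into its output, which is how lexical constraints survive decoding without any change to the model or search algorithm.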

Code-Switching for Enhancing NMT with Pre-Specified Translation

Kai Song, Yue Zhang, Heng Yu, Weihua Luo, Kun Wang, Min Zhang
2019 Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)  
Our method does not change the NMT model or decoding algorithm, allowing the model to learn lexicon translations by copying source-side target words.  ...  We investigate a data augmentation method, making code-switched training data by replacing source phrases with their target translations.  ...  the source and target sides during training, so that a model can translate such words by learning to translate placeholder tags.  ... 
doi:10.18653/v1/n19-1044 dblp:conf/naacl/SongZYLWZ19 fatcat:rdutx2aixnc5njppe6lwsudytu

Joint Source-Target Self Attention with Locality Constraints [article]

José A. R. Fonollosa, Noe Casas, Marta R. Costa-jussà
2019 arXiv   pre-print
As input for training, both source and target sentences are fed to the network, which is trained as a language model.  ...  Our simplified architecture consists in the decoder part of a transformer model, based on self-attention, but with locality constraints applied on the attention receptive field.  ...  This work is also supported in part by the Spanish Ministerio de Economía y Competitividad, the European Regional Development Fund and the Agencia Estatal de Investigación, through the postdoctoral senior  ... 
arXiv:1905.06596v1 fatcat:bjr7q5woqnhobakbdtkjajbw5y

Neural Machine Translation with Explicit Phrase Alignment [article]

Jiacheng Zhang, Huanbo Luan, Maosong Sun, FeiFei Zhai, Jingfang Xu, Yang Liu
2019 arXiv   pre-print
The lack of alignment in NMT models leads to three problems: it is hard to (1) interpret the translation process, (2) impose lexical constraints, and (3) impose structural constraints.  ...  To alleviate these problems, we propose to introduce explicit phrase alignment into the translation process of arbitrary NMT models.  ...  We find that although DBA is able to include all specified target phrases in the translations, it tends either to translate the specified source phrases repeatedly (highlighted in bold) or to omit source  ... 
arXiv:1911.11520v3 fatcat:pwjyeuefazgi5mmfq3ew66bgde

Input Augmentation Improves Constrained Beam Search for Neural Machine Translation: NTT at WAT 2021 [article]

Katsuki Chousa, Makoto Morishita
2021 arXiv   pre-print
In this task, the systems are required to output translated sentences that contain all given word constraints. Our system combined input augmentation and constrained beam search algorithms.  ...  Through experiments, we found that this combination significantly improves translation accuracy and can save inference time while containing all the constraints in the output.  ...  t | y <t , X). (1) In the restricted translation task, lists of target words are provided to represent word restrictions, and systems are required to output translations that contain all of the target  ... 
arXiv:2106.05450v1 fatcat:ykt7xc27rvgx7gse2eiexhaksi
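The input augmentation half of this system — appending the word constraints to the source sentence so the model sees them at both training and inference time — can be sketched as below. The separator tag and concatenation format are assumptions for illustration; the actual NTT WAT 2021 submission's input format may differ.

```python
def augment_input(source, constraints, sep="<sep>"):
    """Append word constraints to a source sentence behind separator tags.

    `sep` is a special token the model learns to interpret during training;
    the tag string itself is a hypothetical choice, not from the paper.
    """
    parts = [source]
    for c in constraints:
        parts.extend([sep, c])
    return " ".join(parts)

# An English->French example with two required target words:
augmented = augment_input("the cat sat", ["chat", "assis"])
# augmented == "the cat sat <sep> chat <sep> assis"
```

At inference, the same augmentation is applied and combined with constrained beam search, which prunes hypotheses that cannot still cover every required word.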

Alignment-Enhanced Transformer for Constraining NMT with Pre-Specified Translations

Kai Song, Kun Wang, Heng Yu, Yue Zhang, Zhongqiang Huang, Weihua Luo, Xiangyu Duan, Min Zhang
2020 Proceedings of the AAAI Conference on Artificial Intelligence  
Existing works impose pre-specified translations as lexical constraints during decoding, which are based on word alignments derived from target-to-source attention weights.  ...  We address this problem by introducing a dedicated head in the multi-head Transformer architecture to capture external supervision signals.  ...  Thanks to Niyu Ge for the guidance in writing.  ... 
doi:10.1609/aaai.v34i05.6418 fatcat:c4rw4ewulraijonhdkbz362skm

TranSmart: A Practical Interactive Machine Translation System [article]

Guoping Huang, Lemao Liu, Xing Wang, Longyue Wang, Huayang Li, Zhaopeng Tu, Chengyan Huang, Shuming Shi
2021 arXiv   pre-print
In addition, TranSmart has the potential to avoid similar translation mistakes by using translated sentences in history as its memory.  ...  By word-level and sentence-level autocompletion, TranSmart allows users to interactively translate words in their own manners rather than the strict manner from left to right.  ...  Unlike the conventional IMT systems with the strict manner from left to right, TranSmart conducts interaction between a user and the machine in a flexible manner, and it particularly contains a translation  ... 
arXiv:2105.13072v1 fatcat:k366cmqsxzf6rn4jvo454kqlqq

Towards Model Round-Trip Engineering: An Abductive Approach [chapter]

Thomas Hettel, Michael Lawley, Kerry Raymond
2009 Lecture Notes in Computer Science  
Based on abductive logic programming, it allows us to compute a set of legitimate source changes that equate to a given change to the target model.  ...  Providing support for reversible transformations as a basis for round-trip engineering is a significant challenge in model transformation research.  ...  Translating changes must ensure that applying the transformation to the changed source yields exactly the range (relevant part) of new target model ( Fig. 1) .  ... 
doi:10.1007/978-3-642-02408-5_8 fatcat:lccgzi75wnbhnpjhextde2pssu

Two-Phase Hypergraph Based Reasoning with Dynamic Relations for Multi-Hop KBQA

Jiale Han, Bo Cheng, Xu Wang
2020 Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence  
Moreover, the model predicts relations hop-by-hop to generate an intermediate relation path.  ...  We conduct extensive experiments on two widely used multi-hop KBQA datasets to prove the effectiveness of our model.  ...  As shown in the column without constraint, all models perform similarly to the vanilla Transformer in a constraint-free setting.  ... 
doi:10.24963/ijcai.2020/496 dblp:conf/ijcai/ChenCWL20 fatcat:ulm5rrsjkfb4vhrzmruxsd45tu
Showing results 1 — 15 out of 199,266 results