
Testing models and model transformations using classifying terms

Frank Hilken, Martin Gogolla, Loli Burgueño, Antonio Vallecillo
2016 Journal of Software and Systems Modeling  
The proposal is based on the use of classifying terms for partitioning the input space and for simplifying the testing process.  ...  In this paper we propose an iterative process for the correct specification of model transformations, i.e., for developing correct transformation models.  ...  This work was partially funded by the German Research Foundation (DFG) under grant GO 454/19-1 and by the Spanish Research Projects TIN2011-23795 and TIN2014-52034-R.  ... 
doi:10.1007/s10270-016-0568-3 fatcat:ymun57qz2zabrnagl7qes3kyha

Generating effective test suites for model transformations using classifying terms

Loli Burgueño, Frank Hilken, Antonio Vallecillo, Martin Gogolla
2016 ACM/IEEE International Conference on Model Driven Engineering Languages and Systems  
This paper explores the use of classifying terms and stratified sampling for developing richer test cases for model transformations.  ...  Classifying terms are used to define the equivalence classes that characterize the relevant subgroups for the test cases.  ...  Classifying terms Classifying terms [2] constitute a technique for developing test cases for UML and OCL models, based on an approach that automatically constructs object models for class The process  ... 
dblp:conf/models/BurguenoHVG16 fatcat:x5ntcdcwjfbt3mk6g6jvu5lvji
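The idea summarized in the snippet above — classifying terms inducing equivalence classes over the input space, from which a stratified sample of test models is drawn — can be illustrated with a minimal sketch. The model representation and the predicates below are invented for illustration; in the papers, classifying terms are OCL expressions evaluated over object models:

```python
import random

# Hypothetical stand-ins for source models: each model is a dict of features.
models = [{"persons": p, "has_cycle": c} for p in range(6) for c in (True, False)]

# Classifying terms as boolean predicates over a model (invented here;
# the papers use OCL query expressions over UML object models).
classifying_terms = [
    lambda m: m["persons"] > 3,
    lambda m: m["has_cycle"],
]

def partition(models, terms):
    """Group models into equivalence classes keyed by their term valuations."""
    classes = {}
    for m in models:
        key = tuple(t(m) for t in terms)
        classes.setdefault(key, []).append(m)
    return classes

def stratified_sample(classes, per_class=1, seed=0):
    """Pick up to `per_class` representatives from every equivalence class."""
    rng = random.Random(seed)
    suite = []
    for members in classes.values():
        suite.extend(rng.sample(members, min(per_class, len(members))))
    return suite

classes = partition(models, classifying_terms)
suite = stratified_sample(classes)
print(len(classes), len(suite))  # → 4 4  (2 terms yield up to 2^2 classes)
```

With n classifying terms the partition has at most 2^n non-empty classes, so a small test suite with one representative per class still covers every relevant subgroup.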

Employing classifying terms for testing model transformations

Martin Gogolla, Antonio Vallecillo, Loli Burgueño, Frank Hilken
2015 ACM/IEEE 18th International Conference on Model Driven Engineering Languages and Systems (MODELS)  
The technique is applied for automatically constructing relevant source model test cases for model transformations between a source and target metamodel.  ...  By guiding the construction process through so-called classifying terms, the built test cases in form of object models are classified into equivalence classes.  ...  Building Tract Test Suites with Classifying Terms Tracts: Tracts were introduced in [3] as a specification and black-box testing mechanism for model transformations.  ... 
doi:10.1109/models.2015.7338262 dblp:conf/models/GogollaVBH15 fatcat:ydy72tdaezh5fmr7yec3innwna

Testing Code Generators: a Case Study on Applying USE, EFinder and Tracts in Practice

Zijun Chen, Wilbert Alberts, Ivan Kurtev
2021 International Conference on Software Technologies: Applications and Foundations  
A commonly found application of model transformations is in the implementation of code generators where typically a chain of model-to-model and model-to-text transformations is used.  ...  We focused on two aspects: automatic support for generating an efficient suite of input test models and alleviating the test oracle problem by using lightweight transformation specifications based on the  ...  A number of challenges were identified and two techniques proposed in the literature were applied: the classifying terms and tracts approaches.  ... 
dblp:conf/staf/ChenAK21 fatcat:tvm7eykt7fa7ji7fwps6n344ju

Application of Machine Learning in Transformer Health Index Prediction

Alhaytham Alqudsi, Ayman El-Hag
2019 Energies  
In the first step, three different data training and testing scenarios were used with several pattern recognition tools for classifying the transformer health condition based on the full set of input test  ...  It was found that reducing the number of tests did not influence the accuracy of the ML prediction models, which is considered a significant advantage in terms of transformer asset management (TAM)  ...  rejected and the term is added to the model.  ... 
doi:10.3390/en12142694 fatcat:edlt2hdmgzcajozz7jdmykyk4a

Variational Semi-supervised Aspect-term Sentiment Analysis via Transformer [article]

Xingyi Cheng, Weidi Xu, Taifeng Wang, Wei Chu
2019 arXiv   pre-print
This paper proposes a semi-supervised method for the ATSA problem by using the Variational Autoencoder based on Transformer (VAET), which models the latent distribution via variational inference.  ...  Our method is classifier agnostic, i.e., the classifier is an independent module and various advanced supervised models can be integrated.  ...  Classifier Various currently available models can be used as the classifier.  ... 
arXiv:1810.10437v3 fatcat:ulq24es3m5hofdky6zpsydpq6e

Emotion Detection for Spanish by Combining LASER Embeddings, Topic Information, and Offense Features

Fedor Vitiugin, Giorgio Barnabò
2021 Annual Conference of the Spanish Society for Natural Language Processing  
We propose a novel model for Emotion Detection that combines transformer embeddings with topic information and offense features.  ...  As for the leader-board, our classification model achieved a macro weighted averaged F1 score of 0.661427, and an overall accuracy of 0.675725, reaching the 9th and 10th place respectively.  ...  To prove the need for additional topic and offense feature vectors we also used only transformer embeddings as input to a LSTM model.  ... 
dblp:conf/sepln/VitiuginB21 fatcat:lfg7jcw5qzelhccsfkn7tkf2ie

Two Stage Classifier Chain Architecture for efficient pair-wise multi-label learning

Dejan Gjorgjevikj, Gjorgji Madjarov
2011 2011 IEEE International Workshop on Machine Learning for Signal Processing  
A common approach for solving multi-label learning problems using problem-transformation methods and dichotomizing classifiers is the pair-wise decomposition strategy.  ...  In terms of testing speed, TSCCA shows better performance compared to the pair-wise methods for multi-label learning.  ...  A common approach for problem transformation is to use class binarization methods, i.e. decomposition of the problem into several binary sub-problems that can then be solved using a binary base classifier  ... 
doi:10.1109/mlsp.2011.6064599 dblp:conf/mlsp/GjorgjevikjM11 fatcat:kpyblssa3raefgr7ahisb3f7ou
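The pair-wise decomposition the snippet describes — turning one multi-label problem into a binary sub-problem per label pair — can be sketched as follows. The toy dataset and label names are invented for illustration, and the decomposition follows the standard one-vs-one transformation rather than the paper's specific TSCCA architecture:

```python
from itertools import combinations

# Toy multi-label dataset: (features, set of labels); invented for illustration.
data = [
    ((1.0, 0.2), {"A", "B"}),
    ((0.3, 0.9), {"B"}),
    ((0.8, 0.8), {"A", "C"}),
    ((0.1, 0.4), {"C"}),
]
labels = ["A", "B", "C"]

def pairwise_decompose(data, labels):
    """One binary dataset per label pair (li, lj), keeping only examples
    that carry exactly one of the two labels (standard one-vs-one rule)."""
    subproblems = {}
    for li, lj in combinations(labels, 2):
        subset = []
        for x, ys in data:
            if (li in ys) != (lj in ys):  # exactly one of the pair applies
                subset.append((x, li if li in ys else lj))
        subproblems[(li, lj)] = subset
    return subproblems

subs = pairwise_decompose(data, labels)
for pair, subset in sorted(subs.items()):
    print(pair, len(subset))
```

Each sub-dataset can then be handed to any binary base classifier; at prediction time the pair-wise votes are aggregated, which is the step the two-stage architectures in these papers aim to speed up.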

Lexically Constrained Neural Machine Translation with Levenshtein Transformer [article]

Raymond Hendy Susanto, Shamil Chollampatt, Liling Tan
2020 arXiv   pre-print
Leveraging the flexibility and speed of a recently proposed Levenshtein Transformer model (Gu et al., 2019), our method injects terminology constraints at inference time without any impact on decoding  ...  Previous work either required re-training existing models with the lexical constraints or incorporating them during beam search decoding with significantly higher computational overheads.  ...  We use the WMT'17 En-De test sets released by Dinu et al. (2019) 8 that were created based on Wiktionary and IATE term entries exactly matching the source and target.  ... 
arXiv:2004.12681v1 fatcat:v6yuzxnuvbb27krshvzegyr2vq

Two stage architecture for multi-label learning

Gjorgji Madjarov, Dejan Gjorgjevikj, Sašo Džeroski
2012 Pattern Recognition  
A common approach to solving multi-label learning problems is to use problem transformation methods and dichotomizing classifiers as in the pair-wise decomposition strategy.  ...  In terms of testing speed, all three methods show better performance as compared to the pair-wise methods for multi-label learning.  ...  Acknowledgments The authors would like to thank Dragi Kocev, Department of Knowledge Technologies, Jožef Stefan Institute, Ljubljana for his valuable comments and suggestions.  ... 
doi:10.1016/j.patcog.2011.08.011 fatcat:6r5fmx34zzdathfjbj5gaosroi

Classification of NDE Waveforms with Autoregressive Models [chapter]

R. B. Melton
1983 Review of Progress in Quantitative Nondestructive Evaluation  
Using this approach a set of matched filters is constructed, one for each category of waveform, based on parameters from autoregressive models.  ...  The method offers advantages in terms of hardware implementation over conventional pattern recognition approaches. Feasibility is shown using computer generated data.  ...  For the test set, 100% of the fretting and 98% of the crack AE were correctly classified.  ... 
doi:10.1007/978-1-4613-3706-5_72 fatcat:b25axiiv3rb2tgxoyiheho2boy

Hierarchical multi-label news article classification with distributed semantic model based features

Ivana Clairine Irsan, Masayu Leylia Khodra
2019 IJAIN (International Journal of Advances in Intelligent Informatics)  
Multiplication of word term frequency and the average of word vectors were also used to build these classifiers.  ...  The multi-label classification model's performance is also influenced by the news' release date. A gap in time period between the training and testing data would also decrease the models' performance.  ...  Testing data used for models trained using Dataset1 and Dataset2 was identical; no news were added to the testing data.  ... 
doi:10.26555/ijain.v5i1.168 fatcat:emb5wzo6gjhcvmv6hzwcwonmu4

Unsupervised Domain Adaptation with Adversarial Residual Transform Networks [article]

Guanyu Cai, Yuqin Wang, Mengchu Zhou, Lianghua He
2018 arXiv   pre-print
In this model, residual connections are used to share features and adversarial loss is reconstructed, thus making the model more generalized and easier to train.  ...  Recent research shows that deep adversarial domain adaptation models can make remarkable improvements in performance, which include symmetric and asymmetric architectures.  ...  This objective function replaces G(x_i) in (1) with T(G(x_i)), indicating that our model uses features generated from transform network T as the input of label classifier C and domain classifier D  ... 
arXiv:1804.09578v1 fatcat:5je3rdt565bcdikao6rflg4ocm

Learning Invariances for Interpretability using Supervised VAE [article]

An-phi Nguyen, María Rodríguez Martínez
2020 arXiv   pre-print
Our experimental results show the capability of our proposed model both in terms of classification, and generation of invariantly transformed samples.  ...  By sampling solely the nuisance dimensions, we are able to generate samples that have undergone transformations that leave the classification unchanged, revealing the invariances of the model.  ...  Such a formulation will enable us to simultaneously learn a classifier and a generative model for invariances.  ... 
arXiv:2007.07591v1 fatcat:qukprtxtxzg2rhji32vcsmwl64

Classifying Drug Ratings Using User Reviews with Transformer-Based Language Models [article]

Akhil Shiju, Zhe He
2021 medRxiv   pre-print
This research demonstrated that transformer-based classification models can be used to classify drug reviews and identify reviews that are inconsistent with the ratings.  ...  Also, transformer-based neural network models including BERT, BioBERT, RoBERTa, XLNet, ELECTRA, and ALBERT were built for classification using the raw text as input.  ...  Discussion We built multiple classification models through transformer-based architecture. These models were then used to classify test data including condition-specific data.  ... 
doi:10.1101/2021.04.15.21255573 fatcat:q3iajftgvbdefgigonx4xmbncu
Showing results 1 — 15 out of 876,488 results