Fine-Tuning Model Transformation: Change Propagation in Context of Consistency, Completeness, and Human Guidance
[chapter]
2011
Lecture Notes in Computer Science
An important role of model transformation is in exchanging modeling information among diverse modeling languages. ...
Such an assistant should be able to combine information from diverse models, react incrementally to enable transformation as information becomes available, and accept human guidance from direct queries ...
Acknowledgments: We would like to gratefully acknowledge the Austrian Science Fund (FWF) through grants P21321-N15 and M1268-N23, and the EU Marie Curie Actions - Intra European Fellowship (IEF) through ...
doi:10.1007/978-3-642-21732-6_1
fatcat:kxjnwh7pbvfnpeq57kzdyxjd7a
Probing and Fine-tuning Reading Comprehension Models for Few-shot Event Extraction
[article]
2020
arXiv
pre-print
Moreover, our model can be fine-tuned with a small amount of data to boost its performance. ...
We study the problem of event extraction from text data, which requires both detecting target event types and their arguments. ...
Still, the work of [1] shows that a fine-tuned BERT model can consistently outperform simple word-vector-based models in inferring relations. ...
arXiv:2010.11325v1
fatcat:4v564b3y5rhwjkekppt5gsyxxm
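The entry above frames event extraction as reading comprehension that can be fine-tuned from little data. A minimal sketch of that framing with the Hugging Face question-answering pipeline; the checkpoint and the question templates are illustrative assumptions, not the paper's actual setup.

```python
# Sketch: casting event-argument extraction as extractive question answering.
# The checkpoint and the question templates below are illustrative assumptions,
# not the configuration used in the paper.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

passage = (
    "The company announced on Monday that it will acquire the startup "
    "for 2 billion dollars."
)

# Each event-argument role becomes a natural-language question.
questions = {
    "trigger": "What happened?",
    "buyer": "Who is acquiring?",
    "target": "What is being acquired?",
    "price": "How much was paid?",
}

for role, question in questions.items():
    answer = qa(question=question, context=passage)
    print(f"{role:>8}: {answer['answer']!r} (score={answer['score']:.2f})")
```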
Fine-tuned BERT Model for Large Scale and Cognitive Classification of MOOCs
2022
International Review of Research in Open and Distance Learning
First, we automated the pedagogical annotation of MOOCs on a large scale and based on the cognitive levels of Bloom's taxonomy. Second, we fine-tuned BERT via different architectures. ...
Our objective in this research work was the automatic and large-scale classification of MOOCs based on their learning objectives and Bloom's taxonomy. ...
... the different pedagogies in MOOCs. ...
doi:10.19173/irrodl.v23i2.6023
fatcat:vujlpvddg5gsrhwysdrb34ymcm
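The MOOC entry above describes fine-tuning BERT for classification over Bloom's cognitive levels. A minimal sketch with the Hugging Face Trainer; the toy data, label set, and hyper-parameters are placeholders rather than the architectures compared in the paper.

```python
# Sketch: fine-tuning BERT for multi-class classification (one class per
# Bloom's-taxonomy level). Toy data and hyper-parameters are placeholders.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

texts = ["List the main components of a CPU.",
         "Design a new caching strategy for this workload."]
labels = [0, 5]  # indices into LABELS

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

class ObjectiveDataset(Dataset):
    """Wraps tokenized course-objective sentences and their Bloom labels."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bloom-bert", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=ObjectiveDataset(texts, labels),
)
trainer.train()
```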
Pretrained Transformers for Text Ranking: BERT and Beyond
[article]
2021
arXiv
pre-print
We cover a wide range of modern techniques, grouped into two high-level categories: transformer models that perform reranking in multi-stage architectures and dense retrieval techniques that perform ranking ...
... the tradeoff between effectiveness (i.e., result quality) and efficiency (e.g., query latency, model and index size). ...
Acknowledgements This research was supported in part by the Canada First Research Excellence Fund and the Natural Sciences and Engineering Research Council (NSERC) of Canada. ...
arXiv:2010.06467v3
fatcat:obla6reejzemvlqhvgvj77fgoy
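The survey above organizes the field into cross-encoder reranking in multi-stage pipelines and dense (bi-encoder) retrieval. A minimal sketch of both stages using the sentence-transformers library; the checkpoint names are commonly used public models, assumed here for illustration.

```python
# Sketch of the two families covered by the survey:
#  (1) dense retrieval: encode query and documents separately, rank by similarity;
#  (2) reranking: score (query, document) pairs jointly with a cross-encoder.
# Checkpoint names are common public models, assumed here for illustration.
from sentence_transformers import CrossEncoder, SentenceTransformer, util

docs = [
    "BERT is a bidirectional transformer pretrained with masked language modeling.",
    "The inverted index maps terms to the documents that contain them.",
    "Dense retrieval compares learned query and document embeddings.",
]
query = "How does dense retrieval work?"

# Stage 1: dense retrieval with a bi-encoder.
bi_encoder = SentenceTransformer("msmarco-distilbert-base-v4")
doc_emb = bi_encoder.encode(docs, convert_to_tensor=True)
query_emb = bi_encoder.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, doc_emb, top_k=3)[0]

# Stage 2: rerank the candidates with a cross-encoder.
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pairs = [(query, docs[h["corpus_id"]]) for h in hits]
scores = cross_encoder.predict(pairs)

for h, s in sorted(zip(hits, scores), key=lambda x: -x[1]):
    print(f"{s:6.2f}  {docs[h['corpus_id']]}")
```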
A Survey of Controllable Text Generation using Transformer-based Pre-trained Language Models
[article]
2022
arXiv
pre-print
In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse ...
In this paper, we present a systematic critical review on the common tasks, main approaches and evaluation methods in this area. ...
They can then be fine-tuned in downstream tasks and have achieved excellent results. ...
arXiv:2201.05337v1
fatcat:lqr6ulndhrcjbiy7etejwtdghy
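The snippet above summarizes the PLM-based generation paradigm the survey reviews. A minimal sketch of its simplest controllable form, prompt-conditioned decoding with a small pre-trained LM; the checkpoint and prompt scheme are illustrative, and the survey covers far more principled control methods (fine-tuning, prefix tuning, guided decoding).

```python
# Sketch: prompt-conditioned generation with a decoder-only PLM as the simplest
# form of attribute control. Checkpoint and prompt are illustrative placeholders.
from transformers import pipeline, set_seed

set_seed(0)
generator = pipeline("text-generation", model="gpt2")

# Crude control: the desired attribute (positive sentiment) is encoded in a prefix.
prompt = "Write a positive review of the restaurant: The food was"
outputs = generator(prompt, max_new_tokens=30, do_sample=True,
                    num_return_sequences=2)
for out in outputs:
    print(out["generated_text"])
```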
SG-Net: Syntax Guided Transformer for Language Representation
[article]
2021
arXiv
pre-print
Understanding human language is one of the key themes of artificial intelligence. ...
For language representation, the capacity to effectively model the linguistic knowledge in detail-riddled, lengthy texts and to filter out noise is essential to improving its performance ...
Two stages of training are adopted in these models: first, pre-train a model using language model objectives on a large-scale text corpus, and then fine-tune the model (as a pre-trained encoder with ...
arXiv:2012.13915v2
fatcat:2zyyd4s6ibcuvjal3k7t2e4v44
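The SG-Net snippet describes the standard two-stage recipe: pre-train an encoder with language-model objectives, then fine-tune it under a task-specific head. A minimal sketch of the second stage, assuming a generic BERT checkpoint and a plain span-prediction head rather than SG-Net's syntax-guided attention.

```python
# Sketch of the fine-tuning stage: reuse pre-trained encoder weights (stage 1)
# and train a small task head on top (stage 2). The checkpoint and the plain
# span-prediction head are illustrative, not SG-Net's syntax-guided layers.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class EncoderWithSpanHead(nn.Module):
    def __init__(self, checkpoint="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(checkpoint)  # stage-1 weights
        self.span_head = nn.Linear(self.encoder.config.hidden_size, 2)

    def forward(self, **inputs):
        hidden = self.encoder(**inputs).last_hidden_state     # (batch, seq, dim)
        start_logits, end_logits = self.span_head(hidden).split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = EncoderWithSpanHead()
batch = tokenizer("Who wrote Hamlet?", "Hamlet was written by Shakespeare.",
                  return_tensors="pt")
start, end = model(**batch)    # fine-tuning would update both encoder and head
print(start.shape, end.shape)  # torch.Size([1, seq_len]) each
```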
Compositional Transformers for Scene Generation
[article]
2021
arXiv
pre-print
state-of-the-art performance in terms of visual quality, diversity and consistency. ...
We introduce the GANformer2 model, an iterative object-oriented transformer, explored for the task of generative modeling. ...
We wish to thank the anonymous reviewers for their thorough, insightful and constructive feedback, questions and comments. Dor wishes to thank Prof. Christopher D. ...
arXiv:2111.08960v1
fatcat:mevc72ear5d77igl5hln72s6hm
Neural Transfer Learning with Transformers for Social Science Text Analysis
[article]
2021
arXiv
pre-print
Across all evaluated tasks, textual styles, and training data set sizes, the conventional models are consistently outperformed by transfer learning with Transformer-based models, thereby demonstrating ...
Especially deep learning models that are based on the Transformer architecture (Vaswani et al., 2017) and are used in a transfer learning setting have contributed to this development. ...
Architecture: BERT consists of a stack of Transformer encoders and comes in two different model sizes (Devlin et al., 2019): BERT-Base consists of 12 stacked Transformer encoders. ...
arXiv:2102.02111v1
fatcat:5ulwuvuwlncdhc6uiwaghymmym
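The snippet above cites the two standard BERT sizes. A minimal sketch that reads those numbers directly from the published model configurations.

```python
# Sketch: the two standard BERT sizes mentioned above, read from the published
# model configurations (only the config files are downloaded, no weights).
from transformers import AutoConfig

for name in ("bert-base-uncased", "bert-large-uncased"):
    cfg = AutoConfig.from_pretrained(name)
    print(f"{name}: {cfg.num_hidden_layers} encoder layers, "
          f"hidden size {cfg.hidden_size}, {cfg.num_attention_heads} heads")
# Expected: bert-base-uncased  -> 12 layers, hidden 768,  12 heads
#           bert-large-uncased -> 24 layers, hidden 1024, 16 heads
```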
Pretrained Transformers for Text Ranking: BERT and Beyond
2021
Proceedings of the 14th ACM International Conference on Web Search and Data Mining
In the context of text ranking, these models produce high quality results across many domains, tasks, and settings. ...
The combination of transformers and self-supervised pretraining has, without exaggeration, revolutionized the fields of natural language processing (NLP), information retrieval (IR), and beyond. ...
However, there remain many open research questions, and thus in addition to laying out the foundations of pretrained transformers for text ranking, this survey also attempts to prognosticate where the ...
doi:10.1145/3437963.3441667
fatcat:6teqmlndtrgfvk5mneq5l7ecvq
Adversarial Robustness of Neural-Statistical Features in Detection of Generative Transformers
[article]
2022
arXiv
pre-print
In the process, we find that previously effective complex phrasal features for detection of computer-generated text hold little predictive power against contemporary generative models, and identify promising ...
The detection of computer-generated text is an area of rapidly increasing significance as nascent generative models allow for efficient creation of compelling human-like text, which may be abused for the ...
Using features from pre-trained models reduces variation from separate fine-tuning processes, and enables reproducibility. ...
arXiv:2203.07983v1
fatcat:2rawqul7lvclfhhgxvztobwbb4
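The last snippet above argues for features from pre-trained models rather than separately fine-tuned detectors. A minimal sketch of that setup: a frozen encoder supplies fixed features and only a lightweight classifier is trained; the checkpoint and toy labels are illustrative assumptions.

```python
# Sketch of the feature-based setup mentioned above: a frozen pre-trained
# encoder supplies fixed features and only a lightweight classifier is trained,
# so no per-task fine-tuning of the transformer is involved. The checkpoint and
# toy labels are illustrative assumptions.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base").eval()

texts = ["A human wrote this short sentence about the weather.",
         "The generated passage repeats itself with unusual fluency."]
labels = [0, 1]  # 0 = human-written, 1 = machine-generated (toy labels)

with torch.no_grad():
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state          # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)
    features = (hidden * mask).sum(1) / mask.sum(1)      # masked mean pooling

clf = LogisticRegression(max_iter=1000).fit(features.numpy(), labels)
print(clf.predict(features.numpy()))
```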
Towards user-centric concrete model transformation
2012
2012 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)
To generate a transformation with most current MDE approaches, users are required to specify (or provide) complex abstractions and meta-models and engage in quite low-level coding in usually textual transformation ...
Model transformations are an important part of Model Driven Engineering (MDE). ...
• Completeness: Is the model transformation fully developed and does it result in a complete target?
• Consistency: Does the model transformation include conflicting information? ...
doi:10.1109/vlhcc.2012.6344520
dblp:conf/vl/Avazpour12
fatcat:izqupk5eubhalhgrfahy7y3cbm
Big Bird: Transformers for Longer Sequences
[article]
2021
arXiv
pre-print
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. ...
We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. ...
We fine-tune the pretrained BigBird from App. F.1 using hyper-parameters described in Tab. 21. ...
arXiv:2007.14062v2
fatcat:wifw4iuuorbmbobuatvfn53bea
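The Big Bird entry describes replacing quadratic full attention with a sparse pattern so longer sequences fit. A minimal sketch that loads the published BigBird checkpoint with block-sparse attention; the hyper-parameters from the paper's Tab. 21 are not reproduced here, and the kwargs shown are assumptions based on the public model configuration.

```python
# Sketch: loading BigBird with block-sparse attention so much longer inputs fit
# than with a quadratic full-attention BERT. Checkpoint name and the 4096-token
# limit follow the public google/bigbird-roberta-base model.
from transformers import BigBirdModel, BigBirdTokenizer

tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdModel.from_pretrained(
    "google/bigbird-roberta-base",
    attention_type="block_sparse",   # the sparse pattern; "original_full" is O(n^2)
    block_size=64,
    num_random_blocks=3,
)

long_text = " ".join(["Transformers struggle with long documents."] * 400)
inputs = tokenizer(long_text, truncation=True, max_length=4096,
                   return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)   # (1, seq_len, hidden_size)
```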
Local-Global Context Aware Transformer for Language-Guided Video Segmentation
[article]
2022
arXiv
pre-print
In light of this, we present Locater (local-global context aware Transformer), which augments the Transformer architecture with a finite memory so as to query the entire video with the language expression ...
To thoroughly examine the visual grounding capability of LVS models, we contribute a new LVS dataset, A2D-S+, which is built upon A2D-S dataset but poses increased challenges in disambiguating among similar ...
All models are trained on A2D-S train without fine-tuning. As seen, our model surpasses other competitors across most metrics. ...
arXiv:2203.09773v1
fatcat:6u5mrlvg7rbithmv3xsdwfgqvi
Automated tabulation of clinical trial results: A joint entity and relation extraction approach with transformer-based language representations
[article]
2021
arXiv
pre-print
Two deep neural net models were developed as part of a joint extraction pipeline, using the principles of transfer learning and transformer-based language representations. ...
To train and test these models, a new gold-standard corpus was developed, comprising almost 600 result sentences from six disease areas. ...
Through fine-tuning BERT-based transformer models, pre-trained on billions of domain-specific tokens, our system embeds and encodes input sentences into context-rich language representations for these ...
arXiv:2112.05596v1
fatcat:f36iasitpngtte2lfteii7eeou
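The entry above describes a joint entity and relation extraction pipeline built on transformer representations. A minimal two-step sketch in that spirit: a token-classification model tags entities, and entity pairs would then be passed to a relation classifier; the general-domain checkpoint and example sentence are illustrative placeholders, not the paper's clinical models.

```python
# Sketch: step 1 tags entities with a pre-trained token-classification model;
# step 2 would score each entity pair with a relation classifier (not shown,
# only candidate pairs are enumerated). Checkpoint and sentence are placeholders.
from itertools import combinations
from transformers import pipeline

ner = pipeline("token-classification",
               model="dslim/bert-base-NER",          # general-domain NER checkpoint
               aggregation_strategy="simple")

sentence = ("Pfizer reported that participants at Boston Medical Center showed "
            "a larger reduction in blood pressure than the placebo group.")
entities = ner(sentence)
print([(e["word"], e["entity_group"]) for e in entities])

# Candidate relations: every pair of detected entities would be handed to a
# second, fine-tuned sequence-classification model for relation typing.
for a, b in combinations(entities, 2):
    print(f"candidate relation: ({a['word']!r}, {b['word']!r})")
```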
Bidirectional Transformation of MES Source Code and Ontologies
2020
Procedia Manufacturing
The transformation procedure of source code to resource, product and generic concepts of the manufacturing plant ontology is described. ...
Moreover, ontology enrichment, which does not change the concepts and relations but only refines the existing constraints, also needs to be handled as a result of fine-tuning the source code in the MES. ...
doi:10.1016/j.promfg.2020.02.070
fatcat:z42knzdu7zbhzabkpc3i5l43hi
Showing results 1 — 15 out of 8,893 results