12 Hits

OpenPrompt: An Open-source Framework for Prompt-learning [article]

Ning Ding, Shengding Hu, Weilin Zhao, Yulin Chen, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun
2021 arXiv   pre-print
However, no standard implementation framework for prompt-learning has been proposed yet, and most existing prompt-learning codebases, often unregulated, only provide limited implementations for specific scenarios  ...  In this paper, we present OpenPrompt, a unified, easy-to-use toolkit for conducting prompt-learning over PLMs.  ...  Lastly, there is no comprehensive open-source framework particularly designed for prompt-learning at present, which makes it difficult to try out new methods and make rigorous comparisons with previous  ... 
arXiv:2111.01998v1 fatcat:5ymws6gmlvbv3n7kkzks2tqrba
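
The entry above describes OpenPrompt's template-plus-verbalizer pipeline. The following is a minimal sketch of how that pipeline is typically wired together for binary sentiment classification; the model name, template text, label words, and example sentence are illustrative choices, not taken from the paper.

```python
# Minimal OpenPrompt pipeline: template wraps the input, verbalizer maps
# label words back to classes, PromptForClassification glues them to the PLM.
from openprompt.plms import load_plm
from openprompt.prompts import ManualTemplate, ManualVerbalizer
from openprompt import PromptForClassification, PromptDataLoader
from openprompt.data_utils import InputExample

# Load a PLM plus its tokenizer and the wrapper class OpenPrompt uses internally.
plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-cased")

# Template: wrap the input text and add a mask slot for the PLM to fill.
template = ManualTemplate(
    text='{"placeholder":"text_a"} It was {"mask"}.',
    tokenizer=tokenizer,
)

# Verbalizer: map vocabulary words at the mask position to class labels.
verbalizer = ManualVerbalizer(
    tokenizer,
    num_classes=2,
    label_words=[["terrible"], ["great"]],
)

model = PromptForClassification(plm=plm, template=template, verbalizer=verbalizer)

dataset = [InputExample(guid=0, text_a="The movie was a waste of time.", label=0)]
loader = PromptDataLoader(
    dataset=dataset,
    template=template,
    tokenizer=tokenizer,
    tokenizer_wrapper_class=WrapperClass,
)
for batch in loader:
    logits = model(batch)  # shape: (batch_size, num_classes)
```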

OpenPrompt: An Open-source Framework for Prompt-learning

Ning Ding, Shengding Hu, Weilin Zhao, Yulin Chen, Zhiyuan Liu, Haitao Zheng, Maosong Sun
2022 Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations   unpublished
In this paper, we present OpenPrompt, a unified, easy-to-use toolkit for conducting prompt-learning over PLMs.  ...  However, no standard implementation framework for prompt-learning has been proposed yet, and most existing prompt-learning codebases, often unregulated, only provide limited implementations for specific scenarios  ...  We present OpenPrompt, an open-source, easy-to-use, and extensible toolkit for prompt-learning.  ... 
doi:10.18653/v1/2022.acl-demo.10 fatcat:k2e7h7t6qzawvdhr4n5sz4zxei

Clinical Prompt Learning with Frozen Language Models [article]

Niall Taylor, Yi Zhang, Dan Joyce, Alejo Nevado-Holgado, Andrey Kormilitzin
2022 arXiv   pre-print
We argue that prompt learning therefore provides lower computational resource costs applicable to clinical settings, and can serve as an alternative to fine-tuning PLMs of ever-increasing size.  ...  Results are partially in line with the prompt learning literature, with prompt learning able to match or improve on traditional fine-tuning with substantially fewer trainable parameters and requiring less  ...  Acknowledgement: NT is supported by the EPSRC Center for Doctoral Training in Health Data Science (EP/S02428X/1).  ... 
arXiv:2205.05535v1 fatcat:w5mzncc6nvc53gkrt6duhojb5u
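
The entry argues for keeping the PLM frozen and training only a prompt. Below is a generic sketch of that pattern in plain PyTorch / Hugging Face, not the paper's specific method: only a short matrix of soft-prompt vectors receives gradients, while every PLM weight stays fixed. Model name, prompt length, and learning rate are placeholders.

```python
# "Frozen PLM + trainable soft prompt": the only trainable tensor is soft_prompt.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"           # placeholder choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
plm = AutoModelForMaskedLM.from_pretrained(model_name)

for p in plm.parameters():                  # freeze every PLM weight
    p.requires_grad = False

prompt_len, hidden = 20, plm.config.hidden_size
soft_prompt = torch.nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

def forward(input_ids):
    # Prepend the trainable soft-prompt vectors to the token embeddings.
    tok_emb = plm.get_input_embeddings()(input_ids)           # (B, L, H)
    batch = tok_emb.size(0)
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)   # (B, P, H)
    inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
    return plm(inputs_embeds=inputs_embeds).logits

optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)         # only prompt_len * H parameters train
```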

Prompt-Learning for Short Text Classification [article]

Yi Zhu, Xinke Zhou, Jipeng Qiang, Yun Li, Yunhao Yuan, Xindong Wu
2022 arXiv   pre-print
Recently, as an effective method for tuning Pre-trained Language Models for specific downstream tasks, prompt-learning has attracted a vast amount of attention and research.  ...  However, most prompt-learning methods expand label words manually or only consider the class name when incorporating knowledge into cloze-style prediction, which inevitably incurs omissions and bias in  ...  Animal Husbandry Discipline of Targeted Support (yzuxk202015), the Opening Foundation of Key Laboratory of Huizhou Architecture in Anhui Province under grant HPJZ-2020-02, Open Project Program of Joint  ... 
arXiv:2202.11345v2 fatcat:5ux5h5qwcza43gaideog6vocye

ORCA: Interpreting Prompted Language Models via Locating Supporting Data Evidence in the Ocean of Pretraining Data [article]

Xiaochuang Han, Yulia Tsvetkov
2022 arXiv   pre-print
However, it remains unclear from where the model learns the task-specific knowledge, especially in a zero-shot setup.  ...  Large pretrained language models have been performing increasingly well in a variety of downstream tasks via prompting.  ...  We use the OpenPrompt library (Ding et al., 2022) to prompt the BERT model with the templates and verbalizers inherited from Gao et al. (2021b).  ... 
arXiv:2205.12600v1 fatcat:ltkvi65la5fubb4kxhp7qthrfi
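
The entry mentions prompting BERT in a zero-shot setup through templates and verbalizers. Below is a self-contained illustration of what such zero-shot cloze scoring amounts to, written directly against Hugging Face Transformers rather than OpenPrompt; the template and label words are invented for illustration and are not those of Gao et al. (2021b).

```python
# Zero-shot cloze classification: score label words at the [MASK] position,
# no training step of any kind.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

text = "The plot was predictable and the acting flat. It was [MASK]."
label_words = {"negative": "terrible", "positive": "great"}

inputs = tokenizer(text, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]               # vocab-sized scores

scores = {lab: logits[tokenizer.convert_tokens_to_ids(w)].item()
          for lab, w in label_words.items()}
print(max(scores, key=scores.get))                             # predicted label
```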

Instance-wise Prompt Tuning for Pretrained Language Models [article]

Yuezihan Jiang, Hao Yang, Junyang Lin, Hanyu Zhao, An Yang, Chang Zhou, Hongxia Yang, Zhi Yang, Bin Cui
2022 arXiv   pre-print
Prompt Learning has recently gained great popularity in bridging the gap between pretraining tasks and various downstream tasks.  ...  We introduce Instance-wise Prompt Tuning (IPT), the first prompt learning paradigm that injects knowledge from the input data instances into the prompts, thereby providing PLMs with richer and more concrete  ...  IPT is a general framework compatible with existing Prompt Learning pipelines and allows various IPT strategy designs.  ... 
arXiv:2206.01958v1 fatcat:32mtporxorfhxkoa7avq7tjcqi
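
The snippet describes injecting knowledge from each input instance into the prompt. The concrete IPT strategies are defined in the paper and not reproduced in this listing; the sketch below is only a hypothetical illustration of the general idea, deriving per-instance prompt vectors from a small trainable generator over the instance's own token embeddings.

```python
# Instance-conditioned prompts: a small MLP maps a summary of the instance's
# token embeddings to prompt vectors, which are prepended to the input.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
plm = AutoModel.from_pretrained("bert-base-uncased")
for p in plm.parameters():
    p.requires_grad = False                                    # keep the PLM frozen

prompt_len, hidden = 8, plm.config.hidden_size
prompt_generator = torch.nn.Sequential(                        # trainable
    torch.nn.Linear(hidden, hidden),
    torch.nn.Tanh(),
    torch.nn.Linear(hidden, prompt_len * hidden),
)

def encode(input_ids):
    tok_emb = plm.get_input_embeddings()(input_ids)            # (B, L, H)
    instance_summary = tok_emb.mean(dim=1)                     # (B, H)
    prompt = prompt_generator(instance_summary)                # (B, P*H)
    prompt = prompt.view(-1, prompt_len, hidden)               # (B, P, H)
    inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
    return plm(inputs_embeds=inputs_embeds).last_hidden_state
```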

The NICHD protocol: a review of an internationally-used evidence-based tool for training child forensic interviewers

David La Rooy, Sonja P Brubacher, Anu Aromäki-Stratos, Mireille Cyr, Irit Hershkowitz, Julia Korkman, Trond Myklebust, Makiko Naka, Carlos E. Peixoto, Kim P Roberts, Heather Stewart, Michael E Lamb
2015 Journal of Criminological Research, Policy and Practice  
This article reviews an evidence-based tool for training child forensic interviewers called the NICHD Protocol, with a specific focus on how the Protocol is being adapted in various countries.  ...  The NICHD Protocol can be easily incorporated into existing training programs worldwide and is available for free.  ...  likely that an open prompt has been delivered.  ... 
doi:10.1108/jcrpp-01-2015-0001 fatcat:kfk3akwk6vao7eg5bdhylrtbw4

Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification [article]

Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Jingang Wang, Juanzi Li, Wei Wu, Maosong Sun
2022 arXiv   pre-print
Tuning pre-trained language models (PLMs) with task-specific prompts has been a promising approach for text classification.  ...  In this work, we focus on incorporating external knowledge into the verbalizer, forming a knowledgeable prompt-tuning (KPT), to improve and stabilize prompt-tuning.  ...  Ethical Considerations This work proposes knowledgeable prompt tuning which uses external knowledge bases to construct the verbalizer.  ... 
arXiv:2108.02035v2 fatcat:bmpvtrim65f2ngfegoifzhy3gq
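
The entry describes incorporating external knowledge into the verbalizer. A simplified sketch of that idea follows: each class maps to a set of label words (in KPT these would be retrieved and refined from a knowledge base; the toy lists below are invented for illustration), and the class score aggregates the masked-position logits over the whole set instead of a single hand-picked word.

```python
# Knowledge-expanded verbalizer, simplified: average mask-position logits over
# an expanded set of label words per class.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

# In KPT these sets would come from a knowledge base; here they are toy examples.
label_words = {
    "sports":   ["football", "basketball", "tennis", "athlete"],
    "politics": ["election", "senate", "minister", "parliament"],
}

text = "The striker scored twice in the final. Topic: [MASK]."
inputs = tokenizer(text, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

def class_score(words):
    ids = [tokenizer.convert_tokens_to_ids(w) for w in words]
    return logits[ids].mean().item()       # aggregate over the expanded word set

print(max(label_words, key=lambda c: class_score(label_words[c])))
```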

A Roadmap for Big Model [article]

Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, Yankai Lin, Xu Han (+88 others)
2022 arXiv   pre-print
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks has become a popular paradigm.  ...  In this paper, we cover not only the BM technologies themselves but also the prerequisites for BM training and applications with BMs, dividing the BM review into four parts: Resource, Models, Key Technologies  ...  OpenPrompt [313] provides a unified programming framework to flexibly conduct prompt-oriented fine-tuning.  ... 
arXiv:2203.14101v4 fatcat:rdikzudoezak5b36cf6hhne5u4

No More Fine-Tuning? An Experimental Evaluation of Prompt Tuning in Code Intelligence [article]

Chaozheng Wang, Yuanhang Yang, Cuiyun Gao, Yun Peng, Hongyu Zhang, Michael R. Lyu
2022 pre-print
In prompt tuning, the prompts inserted during tuning provide task-specific knowledge, which is especially beneficial for tasks with relatively scarce data.  ...  In addition, prompt tuning shows great potential in low-resource scenarios, e.g., improving the BLEU scores of fine-tuning by more than 26% on average for code summarization.  ...  The overall framework is PyTorch. Our implementation of prompting is based on OpenPrompt [7].  ... 
doi:10.1145/3540250.3549113 arXiv:2207.11680v1 fatcat:3i43367xzzcqxcnnszrigkkui4
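
The entry states that the authors' prompt tuning implementation builds on OpenPrompt with PyTorch. A rough sketch of what prompt tuning a code PLM for a classification-style task (e.g. defect detection) can look like with OpenPrompt is given below; the model checkpoint, template, label words, and the use of a soft template with a frozen PLM are illustrative assumptions, not the configuration reported in the paper.

```python
# Prompt tuning for a code classification task: soft prompt tokens are trained
# while the PLM stays frozen.
from openprompt.plms import load_plm
from openprompt.prompts import SoftTemplate, ManualVerbalizer
from openprompt import PromptForClassification

plm, tokenizer, model_config, WrapperClass = load_plm("roberta", "microsoft/codebert-base")

# Soft (continuous) prompt tokens are prepended and trained.
template = SoftTemplate(
    model=plm,
    tokenizer=tokenizer,
    num_tokens=20,
    text='{"placeholder":"text_a"} The code is {"mask"}.',
)
verbalizer = ManualVerbalizer(
    tokenizer,
    num_classes=2,
    label_words=[["correct"], ["defective"]],
)
model = PromptForClassification(
    plm=plm, template=template, verbalizer=verbalizer, freeze_plm=True
)
# Training then follows the usual PromptDataLoader + optimizer loop,
# updating only the soft-prompt (and verbalizer) parameters.
```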

Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification

Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Jingang Wang, Juanzi Li, Wei Wu, Maosong Sun
2022 Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)   unpublished
Tuning pre-trained language models (PLMs) with task-specific prompts has been a promising approach for text classification.  ...  Our source code is publicly available at https://github.com/ thunlp/KnowledgeablePromptTuning.  ...  Ethical Considerations This work proposes knowledgeable prompt tuning which uses external knowledge bases to construct the verbalizer.  ... 
doi:10.18653/v1/2022.acl-long.158 fatcat:7xoqgwvpnnhlrcyb5vsujr7blu

BMInf: An Efficient Toolkit for Big Model Inference and Tuning

Xu Han, Guoyang Zeng, Weilin Zhao, Zhiyuan Liu, Zhengyan Zhang, Jie Zhou, Jun Zhang, Jia Chao, Maosong Sun
2022 Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations   unpublished
To address the computation bottleneck encountered in deploying big models in real-world scenarios, we introduce an open-source toolkit for Big Model Inference and tuning (BMInf), which can support big  ...  distributed learning toolkits for PLMs.  ...  The original CPM-2 is implemented with the distributed toolkits DeepSpeed and Megatron, which are currently the most efficient open-source tools for running big models.  ... 
doi:10.18653/v1/2022.acl-demo.22 fatcat:afojxlkkbbgknktbuhvmdyc3lm