Few-Shot Self-Rationalization with Natural Language Prompts [article]

Ana Marasović, Iz Beltagy, Doug Downey, Matthew E. Peters
2022 arXiv   pre-print
Then, by using this prompt and scaling the model size, we demonstrate that making progress on few-shot self-rationalization is possible.  ...  We identify the right prompting approach by extensively exploring natural language prompts on FEB.  ...  Prompting for Self-Rationalization: We approach few-shot self-rationalization with prompt-based finetuning using natural language (NL) prompts.  ...
arXiv:2111.08284v2 fatcat:nkwgkxl2draizccdbyi6s6hxxi
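
As a concrete illustration of the entry above, here is a minimal Python sketch of a natural-language self-rationalization prompt of the kind the snippet describes: a few demonstrations that each end with an answer and a free-text explanation, followed by the query. The wording and field names are assumptions for illustration, not the paper's actual FEB templates.

```python
def self_rationalization_prompt(demos, question, choices):
    # Each demo supplies a question, its answer, and a short explanation;
    # the model is expected to continue the final line in the same format.
    lines = [
        f"question: {d['question']} answer: {d['answer']}, because {d['explanation']}"
        for d in demos
    ]
    lines.append(f"question: {question} ({' or '.join(choices)}) answer:")
    return "\n".join(lines)

demos = [{"question": "Can a camel survive two weeks without water?",
          "answer": "yes",
          "explanation": "camels store fat that can be metabolized into water"}]
print(self_rationalization_prompt(demos, "Can fish drown?", ["yes", "no"]))
```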

Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm [article]

Laria Reynolds, Kyle McDonell
2021 arXiv   pre-print
Using GPT-3 as a case study, we show that 0-shot prompts can significantly outperform few-shot prompts.  ...  In this work, we discuss methods of prompt programming, emphasizing the usefulness of considering prompts through the lens of natural language.  ...  Keywords: language models, transformers, GPT-3, few-shot learning, prompt programming  ...
arXiv:2102.07350v1 fatcat:cvnjukoydbhtvoehvyrkszdmjq
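
The contrast the snippet draws between 0-shot and few-shot prompts is easy to make concrete. Below is a sketch of the two styles for one task; the wording is illustrative, not taken from the paper.

```python
# 0-shot: the task is specified by a natural-language description alone.
zero_shot = (
    "Translate English to French.\n"
    "English: The book is on the table.\n"
    "French:"
)

# Few-shot: the same task is specified by prepended examples instead.
few_shot = (
    "English: Good morning.\nFrench: Bonjour.\n"
    "English: Thank you.\nFrench: Merci.\n"
    "English: The book is on the table.\nFrench:"
)
```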

STaR: Bootstrapping Reasoning With Reasoning [article]

Eric Zelikman, Yuhuai Wu, Jesse Mu, Noah D. Goodman
2022 arXiv   pre-print
This technique, the "Self-Taught Reasoner" (STaR), relies on a simple loop: generate rationales to answer many questions, prompted with a few rationale examples; if the generated answers are wrong, try again to generate a rationale given the correct answer.  ...  However, inducing language model rationale generation currently requires either constructing massive rationale datasets or sacrificing accuracy by using only few-shot inference.  ...  We thank Cem Anil for his very helpful insight that rationale finetuning performance can be improved if the training includes the few-shot rationales.  ...
arXiv:2203.14465v2 fatcat:uzweiz4dhrgxxmmxits6jf6jcm
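
The loop quoted above maps directly onto a few lines of pseudocode. The sketch below assumes two helpers that are not from the paper: `lm(prompt)` returns a generated rationale-plus-answer string, and `finetune(pairs)` retrains the base model on the collected (question, rationale) pairs.

```python
def star(lm, finetune, questions, answers, fewshot, rounds=3):
    for _ in range(rounds):
        collected = []
        for q, gold in zip(questions, answers):
            attempt = lm(fewshot + f"\nQ: {q}\nA:")           # rationale + answer
            if gold in attempt:                               # keep rationales that reach the right answer
                collected.append((q, attempt))
            else:                                             # rationalize: retry with the answer as a hint
                hinted = lm(fewshot + f"\nQ: {q} (hint: the answer is {gold})\nA:")
                if gold in hinted:
                    collected.append((q, hinted))
        lm = finetune(collected)                              # STaR restarts from the base model each round
    return lm
```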

FLUTE: Figurative Language Understanding and Textual Explanations [article]

Tuhin Chakrabarty, Arkadiy Saakyan, Debanjan Ghosh, Smaranda Muresan
2022 arXiv   pre-print
Meanwhile, even classical natural language inference (NLI) tasks have been plagued by spurious correlations and annotation artifacts.  ...  We show how utilizing GPT-3 in conjunction with human experts can aid in scaling up the creation of datasets even for such complex linguistic phenomena as figurative language.  ...  Before demonstrating the examples in our few-shot prompt, we provide the model a natural language instruction.  ... 
arXiv:2205.12404v1 fatcat:q3dc4q25yzbb3dtets5xvu7blm
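
The last sentence of the snippet, an instruction placed before the few-shot demonstrations, corresponds to a prompt assembled roughly as follows; the field names are assumptions for illustration.

```python
def instruction_first_prompt(instruction, demos, premise, hypothesis):
    # Instruction first, then demonstrations, then the query to complete.
    demo_text = "\n\n".join(
        f"Premise: {d['premise']}\nHypothesis: {d['hypothesis']}\nExplanation: {d['explanation']}"
        for d in demos
    )
    return (f"{instruction}\n\n{demo_text}\n\n"
            f"Premise: {premise}\nHypothesis: {hypothesis}\nExplanation:")
```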

Few-shot Learning with Multilingual Language Models [article]

Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer (+9 others)
2021 arXiv   pre-print
reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings) and natural language inference (+5.4% in each of 0-shot and 4-shot settings).  ...  Our largest model with 7.5 billion parameters sets new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size in multilingual commonsense  ...  A possible reason for this is the adversarial nature of PAWS-X, where the paraphrase and  ...  while the language models are self-supervised and language-agnostic.  ...
arXiv:2112.10668v1 fatcat:ehexgbyr5jfetimihdd66sxdtm

Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations [article]

Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi
2022 arXiv   pre-print
Despite their impressive capabilities, large pre-trained language models (LMs) struggle with consistent reasoning; recently, prompting LMs to generate explanations that self-guide the inference has emerged  ...  Maieutic Prompting achieves up to 20% better accuracy than state-of-the-art prompting methods, and as a fully unsupervised approach, performs competitively with supervised models.  ...  Introduction Following the remarkable success of few-shot prompting powered by large language models (e.g.  ... 
arXiv:2205.11822v1 fatcat:2l5fyuuukrhsfib7cjmzuubgru
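
The "recursive explanations" in the title can be sketched as a small tree-building routine: for each statement, prompt the LM for a reason it is true and a reason it is false, then recurse on those reasons. `lm` is an assumed completion function; the paper's later step of scoring the tree and solving for a logically consistent assignment is omitted here.

```python
def maieutic_tree(lm, statement, depth=2):
    # Build a tree of abductive explanations for both truth values of `statement`.
    if depth == 0:
        return {"statement": statement, "children": []}
    children = []
    for label in ("true", "false"):
        reason = lm(f"{statement} This statement is {label}, because")
        children.append(maieutic_tree(lm, reason, depth - 1))
    return {"statement": statement, "children": children}
```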

Red Teaming Language Models with Language Models [article]

Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, Geoffrey Irving
2022 arXiv   pre-print
We explore several methods, from zero-shot generation to reinforcement learning, for generating test cases with varying levels of diversity and difficulty.  ...  Language Models (LMs) often cannot be deployed because of their potential to harm users in hard-to-predict ways.  ...  Stochastic Few-Shot (SFS): We sample a zero-shot test case generated above to include in the prompt as a few-shot example.  ...
arXiv:2202.03286v1 fatcat:ogptxm22d5e37bzpyv7cizarp4
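
The Stochastic Few-Shot method quoted in the snippet composes directly with the zero-shot stage: earlier zero-shot generations are sampled back into the prompt as few-shot examples. A minimal sketch, with `lm` as an assumed completion function (prompt in, one generated test question out):

```python
import random

def red_team_cases(lm, n_zero=50, n_sfs=50):
    header = "List of questions to ask someone:\n"
    zero_shot = [lm(header + "1.") for _ in range(n_zero)]    # zero-shot test cases
    sfs = []
    for _ in range(n_sfs):
        shots = random.sample(zero_shot, k=3)                 # sampled as few-shot examples
        numbered = "".join(f"{i}. {q}\n" for i, q in enumerate(shots, 1))
        sfs.append(lm(header + numbered + "4."))
    return zero_shot, sfs
```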

Semantic Segmentation In-the-Wild Without Seeing Any Segmentation Examples [article]

Nir Zabari, Yedid Hoshen
2021 arXiv   pre-print
We utilize a vision-language embedding model (specifically CLIP) to create a rough segmentation map for each class, using model interpretability methods.  ...  [33] and object detection [4, 34, 35], few-shot and zero-shot  ...  of visual concepts.  ...  Weakly-supervised semantic segmentation  ...  Learning transferable visual models from natural language supervision. In ICML, 2021.  ...
arXiv:2112.03185v1 fatcat:k7tgvamso5frzkhqmxqrjs77am
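
A rough per-class map of the kind the snippet describes can be approximated with plain gradient saliency on CLIP's image-text similarity. This is only a stand-in sketch (the paper uses more refined interpretability methods), assuming torch and the openai/CLIP package are installed and a local image file exists.

```python
import torch
import clip                      # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

model, preprocess = clip.load("ViT-B/32", device="cpu")
image = preprocess(Image.open("scene.jpg")).unsqueeze(0)
image.requires_grad_(True)
texts = clip.tokenize([f"a photo of a {c}" for c in ("dog", "grass", "sky")])

sims = torch.cosine_similarity(model.encode_image(image), model.encode_text(texts))
masks = []
for score in sims:               # one rough map per class
    model.zero_grad()
    image.grad = None
    score.backward(retain_graph=True)
    saliency = image.grad.abs().sum(dim=1)[0]          # (H, W) pixel relevance
    masks.append(saliency > saliency.quantile(0.9))    # crude threshold -> rough mask
```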

MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning [article]

Ehud Karpas, Omri Abend, Yonatan Belinkov, Barak Lenz, Opher Lieber, Nir Ratner, Yoav Shoham, Hofit Bata, Yoav Levine, Kevin Leyton-Brown, Dor Muhlgay, Noam Rozen (+5 others)
2022 arXiv   pre-print
Huge language models (LMs) have ushered in a new era for AI, serving as a gateway to natural-language-based knowledge tasks.  ...  Conceptualizing the challenge as one that involves knowledge and reasoning in addition to linguistic processing, we define a flexible architecture with multiple neural models, complemented by discrete  ...  Experimental setup: We conducted our experiments with the 7B-parameter J1-large model [3] using prompt-tuning [21] with 10 prompt tokens.  ...
arXiv:2205.00445v1 fatcat:2barcysfpff3zi5ay5achbvd7e
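
The architecture described above is, at its core, a router in front of a set of discrete expert modules, with the LM as the fallback. A toy sketch; the module registry and the matching rule are assumptions for illustration, not the MRKL implementation.

```python
import re

def route(query, lm, experts):
    # Try each discrete expert; fall back to the language model.
    for matches, run in experts:
        if matches(query):
            return run(query)
    return lm(query)

# Example expert: a calculator for purely arithmetic queries.
is_arith = lambda q: re.fullmatch(r"[\d\s+\-*/().]+", q.strip()) is not None
experts = [(is_arith, lambda q: str(eval(q)))]   # eval is acceptable only in this toy demo
```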

Domain-Aware Continual Zero-Shot Learning [article]

Kai Yi, Mohamed Elhoseiny
2021 arXiv   pre-print
Our method also learns a class-wise learnable prompt to obtain better class-level text representation, which is used to represent side information to enable zero-shot prediction of future unseen classes  ...  We introduce Domain Aware Continual Zero-Shot Learning (DACZSL), the task of visually recognizing images of unseen categories in unseen domains sequentially.  ...  In Empirical Methods in Natural Language Processing (EMNLP), 2020.  ...  Human Language Technologies, pages 5017–5033, 2021.  ...
arXiv:2112.12989v1 fatcat:4nrgoylotvhuhckwwffc3gijqy
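
The "class-wise learnable prompt" in the snippet amounts to one trainable token sequence per class, prepended to the class-name embedding before text encoding. A minimal torch sketch under that reading; shapes and names are assumptions.

```python
import torch
import torch.nn as nn

class ClassWisePrompt(nn.Module):
    def __init__(self, num_classes, prompt_len, dim):
        super().__init__()
        # One learnable prompt per class, initialized near zero.
        self.prompts = nn.Parameter(torch.randn(num_classes, prompt_len, dim) * 0.02)

    def forward(self, name_emb):                 # name_emb: (num_classes, name_len, dim)
        return torch.cat([self.prompts, name_emb], dim=1)

module = ClassWisePrompt(num_classes=10, prompt_len=4, dim=512)
out = module(torch.randn(10, 3, 512))            # -> (10, 7, 512) per-class text tokens
```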

Memory of Berlin: An Accidental Autoethnography

Mark Readman, Bournemouth University
2021 Ekphrasis: Images, Cinema, Theory, Media  
Burgan made Memory of Berlin (1998) in his thirties, to tell the story of how he was "triggered" by the fall of the Wall in 1989 to search for his birth mother, and through this film fuses the personal with  ...  being' into circulation and dialogue" (Bochner 53), I argue that Memory of Berlin embodies this autoethnographic spirit, if not avant la lettre, then certainly without its maker's conscious engagement with  ...  Authorship implies a unitary self -there is an "I" in the film, a point of enunciation, but the film undermines this stability through its uncertainty about the nature of the self. The "who am I?"  ... 
doi:10.24193/ekphrasis.26.4 fatcat:zjr7fqagyba6bp2j6rydjp4bk4

AiSocrates: Towards Answering Ethical Quandary Questions [article]

Yejin Bang, Nayeon Lee, Tiezheng Yu, Leila Khalatbari, Yan Xu, Dan Su, Elham J. Barezi, Andrea Madotto, Hayden Kee, Pascale Fung
2022 arXiv   pre-print
AiSocrates searches for different ethical principles applicable to the ethical quandary and generates an answer conditioned on the chosen principles through prompt-based few-shot learning.  ...  These results have inspired efforts to understand the limits of LLMs so as to evaluate how far we are from achieving human-level general natural language understanding.  ...  Prompt-based few-shot learning teaches the model with only a few input-output pairs, given as a natural language prompt concatenated with the input of the test sample.  ...
arXiv:2205.05989v2 fatcat:g46ccv7gwveffawngoi2yvn3wi
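
Conditioning an answer on a chosen principle via prompt-based few-shot learning, as the snippet describes, reduces to prompt assembly; the field layout below is an assumption for illustration, not the system's exact format.

```python
def quandary_prompt(principle, demos, question):
    # Demonstrations pair a principle and a quandary with a principle-grounded answer.
    shots = "\n\n".join(
        f"Principle: {d['principle']}\nQuestion: {d['question']}\nAnswer: {d['answer']}"
        for d in demos
    )
    return f"{shots}\n\nPrinciple: {principle}\nQuestion: {question}\nAnswer:"
```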

ENGLISH CONTAINER METAPHORS OF EMOTIONS IN UKRAINIAN TRANSLATIONS

Liudmyla Kovalenko, Alla Martynyuk
2018 Novìtnâ Osvìta  
There are some structural differences in linguistic instantiations of the EMOTION as CONTAINER mapping in the original and translation stemming from the analytical nature of English and synthetic nature  ...  As for semantic differences, in Ukrainian translations EMOTIONS-BOUNDARIES mappings tend to be substituted with EMOTIONS-INTERIORS mappings.  ...  It prompts an inference that it is more natural for Ukrainians to imagine EMOTIONAL STATES as SUBSTANCES filling the insides of their bodies than as BOUNDARIES suppressing them from the outside.  ... 
doi:10.20535/2410-8286.142723 fatcat:f7kwczyrunelrg7dozzeir3iy4

Cultural Differences in Playing Repeated Ultimatum Game Online with Virtual Humans

Elnaz Nouri, David Traum
2014 2014 47th Hawaii International Conference on System Sciences  
We investigate the dynamics of human game playing with a conversational computational agent (Virtual Human).  ...  Our results are comparable to the reported results of similar games played among people in laboratory conditions and with high stakes.  ...  Characters can be developed to understand natural language textual input as well as fixed-choice menu options [26] .  ... 
doi:10.1109/hicss.2014.157 dblp:conf/hicss/NouriT14 fatcat:4yixoqcpd5egnab7tod6rvpjli

MultiVerS: Improving scientific claim verification with weak supervision and full-document context [article]

David Wadden, Kyle Lo, Lucy Lu Wang, Arman Cohan, Iz Beltagy, Hannaneh Hajishirzi
2022 arXiv   pre-print
Our approach outperforms two competitive baselines on three scientific claim verification datasets, with particularly strong performance in zero / few-shot domain adaptation experiments.  ...  Second, it enables the model to learn from instances annotated with a document-level fact-checking label, but lacking sentence-level rationales.  ...  Thanks to Arkadiy Saakyan and Tuhin Chakrabarty for help with COVIDFact, to Mourad Sarrouti for help with HealthVer, and to Xiangci Li and Ronak Pradeep for help with PARAGRAPHJOINT and VERT5ERINI, respectively  ... 
arXiv:2112.01640v2 fatcat:kakd6dv3lfg2beswcykxv2riy4
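
The second sentence of the snippet, learning from instances with a document-level label but no sentence-level rationales, is typically realized by masking the rationale term of a multitask loss. A hedged torch sketch; the names and the unit rationale weight are assumptions.

```python
import torch
import torch.nn.functional as F

def multitask_loss(label_logits, label_gold, rationale_logits, rationale_gold=None):
    # Document-level fact-checking label: always supervised.
    loss = F.cross_entropy(label_logits, label_gold)
    # Sentence-level rationales: supervised only when annotations exist.
    if rationale_gold is not None:
        loss = loss + F.binary_cross_entropy_with_logits(
            rationale_logits, rationale_gold.float()
        )
    return loss
```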
Showing results 1–15 of 17,117