
Privacy-Adaptive BERT for Natural Language Understanding [article]

Chen Qu, Weize Kong, Liu Yang, Mingyang Zhang, Michael Bendersky, Marc Najork
<span title="2021-04-15">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
When trying to apply the recent advance of Natural Language Understanding (NLU) technologies to real-world applications, privacy preservation imposes a crucial challenge, which, unfortunately, has not  ...  To address this issue, we study how to improve the effectiveness of NLU models under a Local Privacy setting, using BERT, a widely-used pretrained Language Model (LM), as an example.  ...  More importantly, we propose privacy-adaptive LM pretraining methods and demonstrate that a BERT pretrained with our Denoising MLM objective is more robust in handling privatized content compared with  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.07504v1">arXiv:2104.07504v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/sx5yszqwe5cqto2v3kvjjaqzbi">fatcat:sx5yszqwe5cqto2v3kvjjaqzbi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210829182201/https://arxiv.org/pdf/2104.07504v2.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/01/8b/018bf5da2ba1f1901e98f72c7eedbf6b91967192.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.07504v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Short Survey of Pre-trained Language Models for Conversational AI - A New Age in NLP [article]

Munazza Zaib and Quan Z. Sheng and Wei Emma Zhang
<span title="2021-04-22">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Building a dialogue system that can communicate naturally with humans is a challenging yet interesting problem of agent-based computing.  ...  In this short survey paper, we discuss the recent progress made in the field of pre-trained language models.  ...  Unlike GPT, BERT is based on the Transformer's encoder block, which is designed for language understanding.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.10810v1">arXiv:2104.10810v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/s5elm5tyjbettl3opdvzyo4cky">fatcat:s5elm5tyjbettl3opdvzyo4cky</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210804015727/https://arxiv.org/pdf/2104.10810v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/74/45/7445166e9090f5e88163c167cd270a2da4dfc341.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.10810v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Compliance Checking with NLI: Privacy Policies vs. Regulations [article]

Amin Rabinia, Zane Nygaard
<span title="2022-03-01">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this work, we use Natural Language Inference (NLI) techniques to compare privacy regulations against sections of privacy policies from a selection of large companies.  ...  We tried two versions of our model: one trained on the Stanford Natural Language Inference (SNLI) dataset and the other on the Multi-Genre Natural Language Inference (MNLI) dataset.  ...  [12] created an NLU (Natural Language Understanding) benchmark called GLUE (General Language Understanding Evaluation).  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2204.01845v1">arXiv:2204.01845v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/x3nnv4yhrngyzp3rm4ueoaofny">fatcat:x3nnv4yhrngyzp3rm4ueoaofny</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220630180242/https://arxiv.org/pdf/2204.01845v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/8e/42/8e4227ab6b3ec4d9475e8e92055800ee8b75348b.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2204.01845v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

BoostingBERT: Integrating Multi-Class Boosting into BERT for NLP Tasks [article]

Tongwen Huang, Qingyun She, Junlin Zhang
<span title="2020-09-13">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this work, we propose a novel BoostingBERT model to integrate multi-class boosting into BERT.  ...  We also use knowledge distillation within the "teacher-student" framework to reduce the computational overhead and model storage of BoostingBERT while keeping its performance for practical application.  ...  It exceeds the state of the art by a wide margin on multiple natural language understanding tasks.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.05959v1">arXiv:2009.05959v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/fpstgneryva4dnbtpobj4y4lju">fatcat:fpstgneryva4dnbtpobj4y4lju</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200929122504/https://arxiv.org/pdf/2009.05959v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.05959v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

TextHide: Tackling Data Privacy in Language Understanding Tasks [article]

Yangsibo Huang, Zhao Song, Danqi Chen, Kai Li, Sanjeev Arora
<span title="2020-10-12">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this paper, we propose TextHide, aiming to address this challenge for natural language understanding tasks.  ...  In addition, TextHide fits well with the popular framework of fine-tuning pre-trained language models (e.g., BERT) for any sentence or sentence-pair task.  ...  In Empirical Methods in Natural Language Processing (EMNLP), pages 2360-2369.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.06053v1">arXiv:2010.06053v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/jvc7ruxpvbex3dn4rvmck43egu">fatcat:jvc7ruxpvbex3dn4rvmck43egu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201022215338/https://arxiv.org/pdf/2010.06053v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/91/53/91531cb745eaa5b60d9abdbe19b8926267e64baf.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.06053v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Differentially Private Model Compression [article]

Fatemehsadat Mireshghallah, Arturs Backurs, Huseyin A Inan, Lukas Wutschitz, Janardhan Kulkarni
<span title="2022-06-03">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Natural Language Processing (NLP) tasks while simultaneously guaranteeing differential privacy.  ...  Recent papers have shown that large pre-trained language models (LLMs) such as BERT and GPT-2 can be fine-tuned on private data to achieve performance comparable to non-private models for many downstream  ...  natural language processing (NLP) applications.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2206.01838v1">arXiv:2206.01838v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/nzygt44ghfaxnl6cwkc3lf46u4">fatcat:nzygt44ghfaxnl6cwkc3lf46u4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220701194223/https://arxiv.org/pdf/2206.01838v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e2/50/e2502731ae5d3c07d736f2ec27abcf80d5238167.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2206.01838v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Differentially Private Fine-tuning of Language Models [article]

Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A. Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang
<span title="2021-10-13">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In comparison, absent privacy constraints, RoBERTa-Large achieves an accuracy of 90.2%. Our findings are similar for natural language generation tasks.  ...  We give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-scale pre-trained language models, which achieve the state-of-the-art privacy versus utility tradeoffs on  ...  Janardhan Kulkarni would like to thank Edward Hu for sharing many ideas on fine-tuning.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.06500v1">arXiv:2110.06500v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/p5kk4zuodbdftlx7jf3yhj45dm">fatcat:p5kk4zuodbdftlx7jf3yhj45dm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211016200250/https://arxiv.org/pdf/2110.06500v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/7b/08/7b086f6b70479793170f39a7c85b164c60909039.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.06500v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Beyond The Text: Analysis of Privacy Statements through Syntactic and Semantic Role Labeling [article]

Yan Shvartzshnaider, Ananth Balashankar, Vikas Patidar, Thomas Wies, Lakshminarayanan Subramanian
<span title="2020-10-01">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
accuracy for retrieving contextual privacy parameters from privacy statements.  ...  We describe 4 different types of conventional methods that can be partially adapted to address the parameter extraction task with varying degrees of success: Hidden Markov Models, BERT fine-tuned models  ...  This challenge has inspired many recent works in applying natural language processing and machine learning techniques to automatically process privacy policies and retrieve the relevant information (Harkous  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.00678v1">arXiv:2010.00678v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/fdllcggjsvciviapj2ozn2naru">fatcat:fdllcggjsvciviapj2ozn2naru</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201006002922/https://arxiv.org/pdf/2010.00678v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.00678v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Membership Inference Attack Susceptibility of Clinical Language Models [article]

Abhyuday Jagannatha, Bhanu Pratap Singh Rawat, Hong Yu
<span title="2021-04-16">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We design and employ membership inference attacks to estimate the empirical privacy leaks for model architectures like BERT and GPT2.  ...  Clinical language models (CLMs) trained on clinical data have been used to improve performance in biomedical natural language processing tasks.  ...  This is a general definition for estimating privacy budgets and is usually defined independently of the nature of the underlying data.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.08305v1">arXiv:2104.08305v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/a3n3hgwi6fd2jovuapcdha3gey">fatcat:a3n3hgwi6fd2jovuapcdha3gey</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210421045138/https://arxiv.org/pdf/2104.08305v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/64/78/647848e80edb4af537bc6cd7c81495efcfea08bc.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.08305v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

STANCY: Stance Classification Based on Consistency Cues

Kashyap Popat, Subhabrata Mukherjee, Andrew Yates, Gerhard Weikum
<span title="">2019</span> <i title="Association for Computational Linguistics"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/u3ideoxy4fghvbsstiknuweth4" style="color: black;">Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)</a> </i> &nbsp;
In this work, we present a neural network model for stance classification leveraging BERT representations and augmenting them with a novel consistency constraint.  ...  A better understanding of such claims requires analyzing them from different perspectives.  ...  ESIM: an enhanced sequential inference model for natural language inference proposed in Chen et al. (2017).  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.18653/v1/d19-1675">doi:10.18653/v1/d19-1675</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/emnlp/PopatMYW19.html">dblp:conf/emnlp/PopatMYW19</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/cvqrqsy3o5fytgbzyzvd662cei">fatcat:cvqrqsy3o5fytgbzyzvd662cei</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200510192309/https://pure.mpg.de/rest/items/item_3187981_1/component/file_3187982/content" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/62/ba/62ba4eba886dfdd72a170a6c0ae560a24c12d9d1.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.18653/v1/d19-1675"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Task-adaptive Pre-training and Self-training are Complementary for Natural Language Understanding [article]

Shiyang Li, Semih Yavuz, Wenhu Chen, Xifeng Yan
<span title="2021-09-14">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Task-adaptive pre-training (TAPT) and Self-training (ST) have emerged as the major semi-supervised approaches to improve natural language understanding (NLU) tasks with massive amounts of unlabeled data  ...  We hope that TFS could serve as an important semi-supervised baseline for future NLP studies.  ...  Acknowledgement The authors would like to thank the anonymous reviewers for their thoughtful comments.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2109.06466v1">arXiv:2109.06466v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/op47sc3fmzhqbkqswwhjmqqm4a">fatcat:op47sc3fmzhqbkqswwhjmqqm4a</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210917203505/https://arxiv.org/pdf/2109.06466v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e2/0a/e20af8d50b79cf5d94783780578c51af80dbb79d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2109.06466v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Pre-trained Language Models in Biomedical Domain: A Systematic Survey [article]

Benyou Wang, Qianqian Xie, Jiahuan Pei, Prayag Tiwari, Zhao Li, Jie Fu
<span title="2021-10-12">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Pre-trained language models (PLMs) have been the de facto paradigm for most natural language processing (NLP) tasks.  ...  ., biomedical text, electronic health records, protein, and DNA sequences for various biomedical tasks.  ...  Natural language inference (NLI, also known as text entailment) is a basic task for the natural language understanding of biomedical texts.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.05006v2">arXiv:2110.05006v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/aykwfhgi4jgmfovissgdvknny4">fatcat:aykwfhgi4jgmfovissgdvknny4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211014235106/https://arxiv.org/pdf/2110.05006v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/0c/ea/0cea4db3dea404ed55b3a179b496357dc5c8ff80.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.05006v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Dataset for Statutory Reasoning in Tax Law Entailment and Question Answering [article]

Nils Holzenberger, Andrew Blair-Stanek, Benjamin Van Durme
<span title="2020-08-12">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
To investigate the performance of natural language understanding approaches on statutory reasoning, we introduce a dataset, together with a legal-domain text corpus.  ...  Legislation can be viewed as a body of prescriptive rules expressed in natural language.  ...  This makes the IRC an excellent corpus to build systems that reason with rules specified in natural language, and have good language understanding capabilities.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.05257v3">arXiv:2005.05257v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/kkxuoyh6gjf6xfk66amexcmc3q">fatcat:kkxuoyh6gjf6xfk66amexcmc3q</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200816095617/https://arxiv.org/pdf/2005.05257v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.05257v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Federated pretraining and fine tuning of BERT using clinical notes from multiple silos [article]

Dianbo Liu, Tim Miller
<span title="2020-02-20">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Large scale contextual representation models, such as BERT, have significantly advanced natural language processing (NLP) in recent years.  ...  However, in certain areas like healthcare, accessing diverse large scale text data from multiple institutions is extremely challenging due to privacy and regulatory reasons.  ...  In recent years, natural language processing (NLP) has been revolutionized by large contextual representation models pre-trained with large amounts of data, such as ELMo (Peters et al., 2018  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2002.08562v1">arXiv:2002.08562v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/n6leku5xdvcenohyik6j4xchqq">fatcat:n6leku5xdvcenohyik6j4xchqq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200321133747/https://arxiv.org/pdf/2002.08562v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2002.08562v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Federated Learning for Personalized Humor Recognition [article]

Xu Guo, Han Yu, Boyang Li, Hao Wang, Pengwei Xing, Siwei Feng, Zaiqing Nie, Chunyan Miao
<span title="2022-04-06">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Computational understanding of humor is an important topic in creative language understanding and modeling. It can play a key role in complex human-AI interactions.  ...  It incorporates a diversity adaptation strategy into the FL paradigm to train a personalized humor recognition model.  ...  The existing line of research can be divided according to the milestones of deep learning for natural language processing (NLP).  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.01675v2">arXiv:2012.01675v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/erq4n7hf6vckzejbrtpdwcfcre">fatcat:erq4n7hf6vckzejbrtpdwcfcre</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220606213600/https://arxiv.org/pdf/2012.01675v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/0b/7c/0b7c2f8b0a29dd6eb4c4dff864f87c9bddb34a05.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.01675v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>
Showing results 1–15 of 1,951