2,548 Hits in 1.9 sec

Domain Agnostic Learning with Disentangled Representations [article]

Xingchao Peng, Zijun Huang, Ximeng Sun, Kate Saenko
2019-04-28 · arXiv · pre-print
In this paper, we propose the task of Domain-Agnostic Learning (DAL): How to transfer knowledge from a labeled source domain to unlabeled data from arbitrary target domains?  ...  To tackle this problem, we devise a novel Deep Adversarial Disentangled Autoencoder (DADA) capable of disentangling domain-specific features from class identity.  ...  Domain Disentanglement To tackle the domain agnostic learning task, disentangling class-irrelevant features is not enough, as it fails to align the source domain with the target.  ... 
arXiv:1904.12347v1 · fatcat:heiifx5dlzgxhinrxkcbc224jy
Fulltext [PDF]: https://web.archive.org/web/20200930092258/https://arxiv.org/pdf/1904.12347v1.pdf

Domain Private and Agnostic Feature for Modality Adaptive Face Recognition [article]

Yingguo Xu, Lei Zhang, Qingyan Duan
2020-08-10 · arXiv · pre-print
Therefore, how to learn and utilize the domain-private feature and domain-agnostic feature for modality adaptive face recognition is the focus of this work.  ...  Specifically, this paper proposes a Feature Aggregation Network (FAN), which includes disentangled representation module (DRM), feature fusion module (FFM) and adaptive penalty metric (APM) learning session  ...  In order to solve this problem, we propose a Disentangled Representation Module (DRM), which is designed with two subnetworks, i.e. domain-agnostic network and domain-private network in Siamese structure  ... 
arXiv:2008.03848v1 · fatcat:kitxy5bdc5fl5kana7zofgyhtm
Fulltext [PDF]: https://web.archive.org/web/20200813082003/https://arxiv.org/pdf/2008.03848v1.pdf

Domain Adaptation via Prompt Learning [article]

Chunjiang Ge and Rui Huang and Mixue Xie and Zihang Lai and Shiji Song and Shuang Li and Gao Huang
2022-02-14 · arXiv · pre-print
In this paper, we introduce a novel prompt learning paradigm for UDA, named Domain Adaptation via Prompt Learning (DAPL).  ...  Unsupervised domain adaption (UDA) aims to adapt models learned from a well-annotated source domain to a target domain, where only unlabeled samples are given.  ...  To learn disentangled semantic and domain representation, we introduce the prompt learning method [16, 29, 31] to UDA, by learning a representation in a continuous label space.  ... 
arXiv:2202.06687v1 · fatcat:h3q5gjoxergh3jnzmebczvvxdq
Fulltext [PDF]: https://web.archive.org/web/20220216102729/https://arxiv.org/pdf/2202.06687v1.pdf
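The DAPL snippet above classifies by learning prompt representations in a continuous label space, CLIP-style: an image embedding is scored against one text-prompt embedding per (domain, class) pair. A minimal sketch of that similarity head (NumPy; the function name and toy embeddings are illustrative assumptions, not the authors' code):

```python
import numpy as np

def prompt_logits(img_emb, prompt_embs):
    """Score one image embedding against a bank of per-(domain, class)
    prompt embeddings by cosine similarity, CLIP-style."""
    a = img_emb / np.linalg.norm(img_emb)
    B = prompt_embs / np.linalg.norm(prompt_embs, axis=-1, keepdims=True)
    return B @ a  # one logit per prompt

# toy example: the image embedding is aligned with the first prompt
img = np.array([1.0, 0.0])
prompts = np.array([[2.0, 0.0], [0.0, 3.0]])
print(prompt_logits(img, prompts))  # highest score for the matching prompt
```

In the actual method the prompt embeddings would come from a text encoder over learned domain-specific and class-specific tokens; here they are fixed vectors purely to show the scoring step.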

Domain-Adversarial and Conditional State Space Model for Imitation Learning [article]

Ryo Okumura, Masashi Okada, Tadahiro Taniguchi
2021-06-04 · arXiv · pre-print
For SRL, acquiring domain-agnostic states is essential for achieving efficient imitation learning.  ...  We conclude domain-agnostic states are essential for imitation learning that has large domain shifts and can be obtained using DAC-SSM.  ...  Domain-agnostic feature representation There are roughly two types of approaches to obtain domain-agnostic feature representation: domain-adversarial training and disentanglement.  ... 
arXiv:2001.11628v2 · fatcat:guvlnnn66jdsllsvqangahnmoa
Fulltext [PDF]: https://web.archive.org/web/20210608211507/https://arxiv.org/pdf/2001.11628v2.pdf
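The snippet above names domain-adversarial training as one of the two routes to domain-agnostic features. Its core trick, the gradient reversal layer (identity in the forward pass, negated gradient in the backward pass), can be sketched in a few lines; this is a generic illustration of the technique, not the paper's DAC-SSM implementation:

```python
import numpy as np

class GradReverse:
    """Gradient reversal layer: identity in the forward pass; in the
    backward pass the gradient is scaled by -lam, so the feature
    extractor is trained to *confuse* the domain discriminator while
    the discriminator itself is trained normally."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_out):
        return -self.lam * grad_out  # reversed, scaled gradient to the encoder

grl = GradReverse(lam=0.5)
x = np.array([1.0, -2.0])
print(grl.forward(x))                 # unchanged input
print(grl.backward(np.ones_like(x)))  # gradient scaled by -0.5
```

In a full framework this sits between the shared encoder and the domain classifier; `lam` is typically annealed upward during training.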

StyleMeUp: Towards Style-Agnostic Sketch-Based Image Retrieval [article]

Aneeshan Sain, Ayan Kumar Bhunia, Yongxin Yang, Tao Xiang, Yi-Zhe Song
2021-03-31 · arXiv · pre-print
With this meta-learning framework, our model can not only disentangle the cross-modal shared semantic content for SBIR, but can adapt the disentanglement to any unseen user style as well, making the SBIR model truly style-agnostic.  ...  Disentangled representation learning: Learning a disentangled representation would require modelling distinct informative factors in the variations of data [11].  ... 
arXiv:2103.15706v2 · fatcat:ukbeu2bpujb53j3pzd5zbdldai
Fulltext [PDF, not primary version]: https://web.archive.org/web/20210402190310/https://arxiv.org/pdf/2103.15706v1.pdf

Towards Audio Domain Adaptation for Acoustic Scene Classification using Disentanglement Learning [article]

Jakob Abeßer, Meinard Müller
2021-10-26 · arXiv · pre-print
In this paper, we propose a novel domain adaptation strategy based on disentanglement learning.  ...  The goal is to disentangle task-specific and domain-specific characteristics in the analyzed audio recordings.  ...  This research was partially supported by H2020 EU project AI4Media-A European Excellence Centre for Media, Society and Democracy-under Grant Agreement 95191 and by the Fraunhofer Innovation Program "SEC Learn  ... 
arXiv:2110.13586v1 · fatcat:6us272h6erbyzhje52elitwgla
Fulltext [PDF]: https://web.archive.org/web/20211102142652/https://arxiv.org/pdf/2110.13586v1.pdf

When Low Resource NLP Meets Unsupervised Language Model: Meta-pretraining Then Meta-learning for Few-shot Text Classification [article]

Shumin Deng, Ningyu Zhang, Zhanlin Sun, Jiaoyan Chen, Huajun Chen
2019-11-21 · arXiv · pre-print
This paper addresses such problems using meta-learning and unsupervised language models.  ...  It can thus be further suggested that pretraining could be a promising solution for few-shot learning of many other NLP tasks.  ...  However, existing meta-learning approaches for few-shot learning cannot explicitly disentangle task-agnostic and task-specific representations.  ... 
arXiv:1908.08788v2 · fatcat:mfullotkx5ahbaoqcxn2zk66em
Fulltext [PDF]: https://web.archive.org/web/20200822112432/https://arxiv.org/pdf/1908.08788v2.pdf

When Low Resource NLP Meets Unsupervised Language Model: Meta-Pretraining then Meta-Learning for Few-Shot Text Classification (Student Abstract)

Shumin Deng, Ningyu Zhang, Zhanlin Sun, Jiaoyan Chen, Huajun Chen
2020-04-03 · Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence and the Twenty-Eighth Innovative Applications of Artificial Intelligence Conference (AAAI)
This paper addresses such problems using meta-learning and unsupervised language models.  ...  It can thus be further suggested that pretraining could be a promising solution for few-shot learning of many other NLP tasks.  ...  However, existing meta-learning approaches for few-shot learning cannot explicitly disentangle task-agnostic and task-specific representations, and they are not able to take advantage of the knowledge  ... 
doi:10.1609/aaai.v34i10.7158 · fatcat:u75k5swow5gp5jhhs772orchfe
Fulltext [PDF]: https://web.archive.org/web/20201103220318/https://aaai.org/ojs/index.php/AAAI/article/download/7158/7012

Multi-domain Unsupervised Image-to-Image Translation with Appearance Adaptive Convolution [article]

Somi Jeong, Jiyoung Lee, Kwanghoon Sohn
2022-02-06 · arXiv · pre-print
This allows our method to learn the diverse mappings between multiple visual domains with only a single framework.  ...  We also exploit a contrast learning objective, which improves the disentanglement ability and effectively utilizes multi-domain image data in the training process by pairing the semantically similar images  ...  It is usually formulated as a latent space disentanglement task, which decomposes the latent representation into domain-agnostic content and domain-specific appearance.  ... 
arXiv:2202.02779v1 · fatcat:ns45db27sfculol7cy3d7v2dci
Fulltext [PDF]: https://web.archive.org/web/20220210140139/https://arxiv.org/pdf/2202.02779v1.pdf
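The last snippet of this entry describes the common formulation of image-to-image translation as latent-space disentanglement: a domain-agnostic content code plus a domain-specific appearance code. The recombination step at the heart of such methods reduces to a code swap; a toy sketch with flat latent vectors (the split point and function names are illustrative assumptions):

```python
import numpy as np

def split_code(z, n_content):
    """Split a latent vector into (content, appearance) parts."""
    return z[:n_content], z[n_content:]

def swap_appearance(z_a, z_b, n_content):
    """Keep each sample's content code but exchange appearance codes;
    a decoder would turn these recombined latents into translated images."""
    c_a, s_a = split_code(z_a, n_content)
    c_b, s_b = split_code(z_b, n_content)
    return np.concatenate([c_a, s_b]), np.concatenate([c_b, s_a])

z_a = np.array([1.0, 2.0, 3.0, 4.0])  # content [1, 2], appearance [3, 4]
z_b = np.array([5.0, 6.0, 7.0, 8.0])  # content [5, 6], appearance [7, 8]
print(swap_appearance(z_a, z_b, n_content=2))
```

Real systems obtain the two codes from separate encoder branches and enforce the split with reconstruction and contrastive objectives; the swap itself is this simple.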

Differentiable Disentanglement Filter: an Application Agnostic Core Concept Discovery Probe [article]

Guntis Barzdins, Eduards Sidorovics
2019-07-24 · arXiv · pre-print
concepts for the classification or other machine learning tasks.  ...  The DDF proof-of-concept implementation is shown to disentangle concepts within the neural 3D scene representation - a task vital for visual grounding of natural language narratives.  ...  Introduction The recent success with disentangling the semantically meaningful core concept dimensions within the representations learned by the popular deep neural networks (Dupont, 2018; Subramanian  ... 
arXiv:1907.07507v2 · fatcat:a275ltqghjg2bpkmm254cjstwm
Fulltext [PDF]: https://web.archive.org/web/20191222112325/https://arxiv.org/pdf/1907.07507v2.pdf

CoMoGAN: continuous model-guided image-to-image translation [article]

Fabio Pizzati, Pietro Cerri, Raoul de Charette
2021-04-08 · arXiv · pre-print
To that matter, we introduce a new Functional Instance Normalization layer and residual mechanism, which together disentangle image content from position on the target manifold.  ...  CoMoGAN can be used with any GAN backbone and allows new types of image translation, such as cyclic image translation like timelapse generation, or detached linear translation.  ...  Some exploit disentanglement for few-shot generalization capabilities [33, 52]. Domain feature disentanglement also enables unified representations across domains [66, 31].  ... 
arXiv:2103.06879v2 · fatcat:u5hthxha3vgshoiexnmbv67vxa
Fulltext [PDF, not primary version]: https://web.archive.org/web/20210313173727/https://arxiv.org/pdf/2103.06879v1.pdf

Representation Learning and sampling in networks

Tanay Kumar Saha
2018-05-13 · Figshare
In this presentation, I intend to motivate the topic of representation learning, point out my research, and relate it to some recent interesting works.  ...  compositional semantics (learning representations for phrases, sentences, paragraphs, or documents) in the text domain. Graphs can have cycles, so a tree-LSTM-style recursive structure is not an option. (i) Disentangled  ...  of Algorithm for Learning Representation. Good criteria for learning representations (learning X)?  ... 
doi:10.6084/m9.figshare.6263846 · fatcat:wvrnfqynwjd2vll54vkcnhftay
Fulltext [PDF]: https://web.archive.org/web/20200112152949/https://s3-eu-west-1.amazonaws.com/pfigshare-u-files/11446697/myResearch.pdf

An Efficient Integration of Disentangled Attended Expression and Identity Features for Facial Expression Transfer and Synthesis [article]

Kamran Ali, Charles E. Hughes
2020-05-01 · arXiv · pre-print
To leverage the expression and identity information encoded by the intermediate layers of both of our encoders, we combine these features with the features learned by the intermediate layers of our decoder  ...  Similarly, the disentangled expression-agnostic identity features are extracted from the input target image by inferring its combined intrinsic-shape and appearance image employing our self-supervised  ...  Specifically, an encoder G_es: X → E is used to encode the expression representation e ∈ E from x_s, and an encoder G_et: X → H is employed to encode the expression-agnostic identity representation  ... 
arXiv:2005.00499v1 · fatcat:xw2wapx76vbcbguedh34helqyu
Fulltext [PDF]: https://web.archive.org/web/20200509174339/https://arxiv.org/pdf/2005.00499v1.pdf

Content-Context Factorized Representations for Automated Speech Recognition [article]

David M. Chan, Shalini Ghosh
2022-05-19 · arXiv · pre-print
In this work, we introduce an unsupervised, encoder-agnostic method for factoring speech-encoder representations into explicit content-encoding representations and spurious context-encoding representations  ...  Most recent work on learning disentangled representations such as Sanchez et al.  ...  Factorized Learning with Cyclic Reconstruction Unfortunately, many methods [8, 9] for disentangled representations require access to additional labeled data to supervise the paired factors.  ... 
arXiv:2205.09872v1 · fatcat:ftwq5o445zbftheo75q634bkgy
Fulltext [PDF]: https://web.archive.org/web/20220524165623/https://arxiv.org/pdf/2205.09872v1.pdf

Fairness by Learning Orthogonal Disentangled Representations [article]

Mhd Hasan Sarhan, Nassir Navab, Abouzar Eslami, Shadi Albarqouni
2020-07-04 · arXiv · pre-print
This is mostly approached by purging the sensitive information from learned representations. In this paper, we propose a novel disentanglement approach to the invariant representation problem.  ...  We explicitly enforce the meaningful representation to be agnostic to sensitive information by entropy maximization.  ...  Acknowledgments S.A. is supported by the PRIME programme of the German Academic Exchange Service (DAAD) with funds from the German Federal Ministry of Education and Research (BMBF).  ... 
arXiv:2003.05707v3 · fatcat:o6ccgjy2enbvjapgbb5k2cu7ay
Fulltext [PDF]: https://web.archive.org/web/20200930080838/https://arxiv.org/pdf/2003.05707v3.pdf
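The abstract above enforces sensitivity-agnostic representations by entropy maximization: the posterior of a classifier predicting the sensitive attribute is pushed toward uniform. A minimal NumPy version of that loss term, under the usual softmax-entropy formulation (a sketch, not the authors' implementation):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_max_loss(sensitive_logits):
    """Negative mean entropy of the sensitive-attribute posterior.
    Minimizing this pushes predictions toward uniform (maximum entropy),
    i.e. the representation carries no sensitive information."""
    p = softmax(sensitive_logits)
    ent = -np.sum(p * np.log(p + 1e-12), axis=-1)  # per-sample entropy
    return -np.mean(ent)

# uniform logits already achieve the optimum: loss = -log(K)
print(entropy_max_loss(np.zeros((4, 3))))  # ≈ -log(3) ≈ -1.0986
```

Any confidently peaked logits give a strictly larger loss, so gradient descent on this term drives the encoder to hide the sensitive attribute from the auxiliary classifier.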
Showing results 1–15 of 2,548