Improving Disentangled Text Representation Learning with Information-Theoretic Guidance
[article]
2022
arXiv
pre-print
Learning disentangled representations of natural language is essential for many NLP tasks, e.g., conditional text generation, style transfer, personalized dialogue systems, etc. ...
Inspired by information theory, we propose a novel method that effectively manifests disentangled representations of text, without any supervision on semantics. ...
In this paper, we introduce a novel Information-theoretic Disentangled Embedding Learning method (IDEL) for text, based on guidance from information theory. ...
arXiv:2006.00693v3
fatcat:l22eiy6ri5ghdjb5muq7rzvbru
Improving Disentangled Text Representation Learning with Information-Theoretic Guidance
2020
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
unpublished
Learning disentangled representations of natural language is essential for many NLP tasks, e.g., conditional text generation, style transfer, personalized dialogue systems, etc. ...
Inspired by information theory, we propose a novel method that effectively manifests disentangled representations of text, without any supervision on semantics. ...
In this paper, we introduce a novel Information-theoretic Disentangled Embedding Learning method (IDEL) for text, based on guidance from information theory. ...
doi:10.18653/v1/2020.acl-main.673
fatcat:pblq7djuonfo7nefs3eq2hdfr4
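The IDEL abstract only names its objective, so the following minimal sketch illustrates the general information-theoretic recipe it belongs to: split an encoding into style and content vectors, keep the style label predictable from the style vector, and adversarially scrub style from the content vector. This is a hedged illustration, not the authors' implementation; the module names, dimensions, and the gradient-reversal trick standing in for the paper's mutual-information bounds are all assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass, negated gradient on the backward pass."""
        @staticmethod
        def forward(ctx, x):
            return x.view_as(x)
        @staticmethod
        def backward(ctx, grad):
            return -grad

    class DisentangledTextEncoder(nn.Module):
        """Hypothetical sketch: split a sentence vector into style and content parts."""
        def __init__(self, hid=768, style_dim=64, content_dim=64, n_styles=2):
            super().__init__()
            self.to_style = nn.Linear(hid, style_dim)
            self.to_content = nn.Linear(hid, content_dim)
            self.style_clf = nn.Linear(style_dim, n_styles)  # style stays predictable from s
            self.probe = nn.Linear(content_dim, n_styles)    # adversarial style probe on c

        def loss(self, h, y):
            s, c = self.to_style(h), self.to_content(h)
            keep = F.cross_entropy(self.style_clf(s), y)                  # retain style in s
            purge = F.cross_entropy(self.probe(GradReverse.apply(c)), y)  # scrub style from c
            return keep + purge

Here h would be any pretrained sentence embedding and y a style label; the reversed gradient trains the probe to detect style leakage while pushing the content head to remove it, a crude stand-in for the variational mutual-information objectives the paper derives.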
The Style-Content Duality of Attractiveness: Learning to Write Eye-Catching Headlines via Disentanglement
[article]
2020
arXiv
pre-print
The latent content information is then used to further polish the document representation and help capture the salient part. ...
Concretely, we first devise a disentanglement module to divide the style and content of an attractive prototype headline into latent spaces, with two auxiliary constraints to ensure the two spaces are ...
Compared to the computer vision field, NLP tasks mainly focus on invariant representation learning. Disentangled representation learning is widely adopted in nonparallel text style transfer. ...
arXiv:2012.07419v1
fatcat:6dptaqjrczh2fjha23ehg53vdi
A Novel Estimator of Mutual Information for Learning to Disentangle Textual Representations
[article]
2021
arXiv
pre-print
Learning disentangled representations of textual data is essential for many natural language tasks such as fair classification, style transfer and sentence generation, among others. ...
Additionally, we provide new insights illustrating various trade-offs in style transfer when attempting to learn disentangled representations and quality of the generated sentence. ...
Improving disentangled text representation learning with information-theoretic guidance. arXiv preprint arXiv:2006.00693. ...
arXiv:2105.02685v1
fatcat:yqda2gawn5gkpas7yesm3ddhwu
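The snippet above does not show the estimator itself. As a point of reference, the sketch below implements CLUB (Cheng et al., 2020), a widely used variational upper bound on mutual information in this line of work; it is explicitly not the estimator proposed in this paper, and the network shapes are assumptions.

    import torch
    import torch.nn as nn

    class CLUB(nn.Module):
        """Contrastive Log-ratio Upper Bound on I(x; z) via a learned Gaussian q(z|x)."""
        def __init__(self, x_dim, z_dim, hid=128):
            super().__init__()
            self.mu = nn.Sequential(nn.Linear(x_dim, hid), nn.ReLU(), nn.Linear(hid, z_dim))
            self.logvar = nn.Sequential(nn.Linear(x_dim, hid), nn.ReLU(), nn.Linear(hid, z_dim))

        def forward(self, x, z):
            mu, logvar = self.mu(x), self.logvar(x)
            # log q(z_i | x_i) on joint samples (terms constant in z cancel below)
            pos = -((z - mu) ** 2 / logvar.exp() / 2).sum(-1)
            # log q(z_j | x_i) averaged over j, approximating the product of marginals
            diff = (mu.unsqueeze(1) - z.unsqueeze(0)) ** 2
            neg = -(diff / logvar.exp().unsqueeze(1) / 2).sum(-1).mean(1)
            return (pos - neg).mean()  # estimated upper bound on I(x; z)

In practice q(z|x) is fitted by maximizing the log-likelihood of joint (x, z) pairs in alternating steps, and the resulting bound is minimized to drive disentanglement.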
Learning a Disentangled Embedding for Monocular 3D Shape Retrieval and Pose Estimation
[article]
2019
arXiv
pre-print
... an embedding space from 3D data that only includes the relevant information, namely the shape and pose. ...
Our approach explicitly disentangles a shape vector and a pose vector, which alleviates both pose bias for 3D shape retrieval and categorical bias for pose estimation. ...
Compared to existing methods, our approach achieves additional robustness afforded by the guidance from the "pure" information learned from 3D data, which is free from distracting factors in the images ...
arXiv:1812.09899v2
fatcat:emedbu4m5ja4dmbydyo4dn2mkm
Towards Better Understanding of Disentangled Representations via Mutual Information
[article]
2020
arXiv
pre-print
Most existing works on disentangled representation learning are solely built upon a marginal independence assumption: all factors in disentangled representations should be statistically independent. ...
We argue in this work that disentangled representations should be characterized by their relation with observable data. ...
... and thus improve disentanglement of the generated samples. ...
arXiv:1911.10922v3
fatcat:mxqkpt4mkrbahpe6zaf2omnfhq
Deep Learning for Text Style Transfer: A Survey
[article]
2021
arXiv
pre-print
Text style transfer is an important task in natural language generation, which aims to control certain attributes in the generated text, such as politeness, emotion, humor, and many others. ...
In this paper, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017. ...
Method            Strengths & Weaknesses
Disentanglement   + More profound in theoretical analysis, e.g., disentangled representation learning ...
arXiv:2011.00416v5
fatcat:wfw3jfh2mjfupbzrmnztsqy4ny
Desiderata for Representation Learning: A Causal Perspective
[article]
2022
arXiv
pre-print
This learning problem is often approached by describing various desiderata associated with learned representations; e.g., that they be non-spurious, efficient, or disentangled. ...
In this paper, we take a causal perspective on representation learning, formalizing non-spuriousness and efficiency (in supervised representation learning) and disentanglement (in unsupervised representation ...
The representation itself does not provide guidance on how to separate learned dimensions into more informative dimensions that describe animal fur and background lighting. ...
arXiv:2109.03795v2
fatcat:5u3tjmubwvhqnfz3rn7qqxy2hq
Variational Template Machine for Data-to-Text Generation
[article]
2020
arXiv
pre-print
Our contributions include: a) we carefully devise a specific model architecture and losses to explicitly disentangle text template and semantic content information in the latent spaces, and b) we utilize both small parallel data and large raw text without aligned tables to enrich the template learning. ...
Motivated by the ideas of back-translation and variational autoencoders, the VTM model proposed in this work can not only fully utilize the non-parallel text corpus, but also learn a disentangled representation ...
arXiv:2002.01127v2
fatcat:kvp6vkgx3be4lbzv33c346lusy
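To make the "disentangle template and content in latent space" idea above concrete, here is a minimal two-latent VAE sketch: a sampled template latent z regularized by a KL term, plus a deterministic content vector c, both feeding the decoder. It is a toy stand-in under assumed GRU encoder/decoder choices, not the released VTM.

    import torch
    import torch.nn as nn

    class TwoLatentVAE(nn.Module):
        """Toy sketch: template latent z (variational) + content latent c (deterministic)."""
        def __init__(self, vocab=10000, emb=128, hid=256, z_dim=32, c_dim=32):
            super().__init__()
            self.embed = nn.Embedding(vocab, emb)
            self.enc = nn.GRU(emb, hid, batch_first=True)
            self.z_mu, self.z_logvar = nn.Linear(hid, z_dim), nn.Linear(hid, z_dim)
            self.to_content = nn.Linear(hid, c_dim)
            self.init_h = nn.Linear(z_dim + c_dim, hid)
            self.dec = nn.GRU(emb, hid, batch_first=True)
            self.out = nn.Linear(hid, vocab)

        def forward(self, tokens):
            x = self.embed(tokens)
            _, h = self.enc(x)
            h = h.squeeze(0)
            mu, logvar = self.z_mu(h), self.z_logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
            c = self.to_content(h)
            h0 = torch.tanh(self.init_h(torch.cat([z, c], -1))).unsqueeze(0)
            logits, _ = self.dec(x, h0)                           # teacher-forced decoding
            kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1).mean()
            return self.out(logits), kl

Training would add token-level cross-entropy between the logits and the next tokens; swapping z between sentences while holding c fixed is the usual probe that the two spaces really separate template from content.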
Adversarial Canonical Correlation Analysis
[article]
2020
arXiv
pre-print
It has been used in various representation learning problems, such as dimensionality reduction, word embedding, and clustering. ...
This allows new priors for what constitutes a good representation, such as disentangling underlying factors of variation, to be more directly pursued. ...
The author would like to acknowledge his advisor, Raju Vatsavai, for his guidance and support. ...
arXiv:2005.10349v2
fatcat:lr2rovbgszfgpm6ids42i3gmca
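For readers unfamiliar with the CCA baseline this paper builds on, the short example below recovers shared latent factors from two synthetic views with scikit-learn's classical CCA; it illustrates only the underlying objective, not the adversarial variant proposed in the paper. The synthetic data generation is an assumption for demonstration.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    shared = rng.normal(size=(500, 4))                   # latent factors common to both views
    X = shared @ rng.normal(size=(4, 20)) + 0.1 * rng.normal(size=(500, 20))
    Y = shared @ rng.normal(size=(4, 30)) + 0.1 * rng.normal(size=(500, 30))

    cca = CCA(n_components=4)
    Xc, Yc = cca.fit_transform(X, Y)                     # maximally correlated projections
    # per-component canonical correlations, close to 1 for the true shared factors
    print([round(float(np.corrcoef(Xc[:, i], Yc[:, i])[0, 1]), 3) for i in range(4)])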
Towards information-rich, logical text generation with knowledge-enhanced neural models
[article]
2020
arXiv
pre-print
Text generation systems have made massive, promising progress driven by deep learning techniques and have been widely applied in daily life. ...
However, existing end-to-end neural models suffer from the problem of tending to generate uninformative and generic text because they cannot ground input context with background knowledge. ...
The graph-based contextual word representation learning module redefines the distance between words, using graph structural information to learn better contextual word representations. ...
arXiv:2003.00814v1
fatcat:5fllyakwqzf4vnmar3a6zjoewe
Deep Learning for Text Style Transfer: A Survey
2021
Computational Linguistics
Text style transfer is an important task in natural language generation, which aims to control certain attributes in the generated text, such as politeness, emotion, humor, and many others. ...
In this paper, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017. ...
Improving zero-shot voice style transfer via disentangled representation learning. ... Enhong Chen. 2018d. Style transfer as unsupervised machine translation. CoRR, ...
doi:10.1162/coli_a_00426
fatcat:v7vmb62ckfcu5k5mpu2pydnrxy
Small Data Challenges in Big Data Era: A Survey of Recent Progress on Unsupervised and Semi-Supervised Methods
[article]
2021
arXiv
pre-print
We will review the principles of learning transformation-equivariant, disentangled, self-supervised, and semi-supervised representations, all of which underpin the foundation of recent progress. ...
Representation learning with small labeled data has emerged in many problems, since the success of deep neural networks often relies on the availability of a huge amount of labeled data that is expensive ...
Autoencoding Variational Transformation. From an information-theoretic point of view, Qi et al. ...
arXiv:1903.11260v2
fatcat:hjya3ojzmfh7nnldhqkdx6o37a
On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning
[article]
2022
arXiv
pre-print
We show that feed-forward networks trained with behavioural cloning, compared to those trained with reinforcement learning, can be pruned to higher levels of sparsity without performance degradation. ...
The lottery ticket hypothesis questions the role of overparameterization in supervised deep learning. ...
Finally, this study is empirical in nature and will require further theoretical guidance and foundations. Future Work. ...
arXiv:2105.01648v4
fatcat:43ptujjc6jaubazfz546q63eia
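The pruning recipe behind the lottery ticket hypothesis is compact enough to sketch. The helper below performs one round of global magnitude pruning with rewind to initialization; it is a generic illustration of the standard recipe, not the paper's behavioural-cloning or reinforcement-learning setup, and the tiny network is a placeholder.

    import torch
    import torch.nn as nn

    def magnitude_mask(model, sparsity=0.8):
        """Mask the smallest-magnitude weights globally across all weight matrices."""
        flat = torch.cat([p.detach().abs().flatten()
                          for p in model.parameters() if p.dim() > 1])
        thresh = torch.quantile(flat, sparsity)
        return {n: (p.detach().abs() > thresh).float()
                for n, p in model.named_parameters() if p.dim() > 1}

    net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 4))
    init_state = {k: v.clone() for k, v in net.state_dict().items()}
    # ... train net here, then prune and rewind ...
    mask = magnitude_mask(net, sparsity=0.8)
    net.load_state_dict(init_state)                  # rewind to the original initialization
    with torch.no_grad():
        for n, p in net.named_parameters():
            if n in mask:
                p.mul_(mask[n])                      # the masked subnetwork is the "ticket"

The pruned subnetwork is then retrained from the rewound weights; the hypothesis holds if it matches the dense network's performance at high sparsity.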
Unsupervised Speech Decomposition via Triple Information Bottleneck
[article]
2021
arXiv
pre-print
Recently, state-of-the-art voice conversion systems have led to speech representations that can disentangle speaker-dependent and independent information. ...
Obtaining disentangled representations of these components is useful in many speech analysis and generation applications. ...
Acknowledgment We would like to give special thanks to Gaoyuan Zhang from MIT-IBM Watson AI Lab, who has helped us a lot with building our demo webpage. ...
arXiv:2004.11284v6
fatcat:mjdt6jyoqjeetjupjvvppq6ldi
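One of the bottlenecks in this speech-decomposition line of work is a random-resampling operation along time, which destroys exact rhythm so the model must recover it from a dedicated input. The function below is a hedged re-implementation from the paper's description, not the released code; the segment-length range and stretch factors are assumptions.

    import torch
    import torch.nn.functional as F

    def random_resample(x, min_seg=8, max_seg=32):
        """Randomly stretch/squeeze segments of a [T, D] feature sequence in time."""
        segs, t, T = [], 0, x.size(0)
        while t < T:
            n = min(int(torch.randint(min_seg, max_seg + 1, (1,))), T - t)
            factor = 0.5 + torch.rand(1).item()          # stretch factor in [0.5, 1.5)
            new_len = max(1, int(n * factor))
            seg = F.interpolate(x[t:t + n].t().unsqueeze(0), size=new_len,
                                mode='linear', align_corners=False)
            segs.append(seg.squeeze(0).t())
            t += n
        return torch.cat(segs, dim=0)                    # rhythm-perturbed sequence

Applied to spectrogram-like features, this removes duration information from one encoder branch, forcing the network to route rhythm through a separate pathway, the mechanism the abstract summarizes as an information bottleneck.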
Showing results 1 — 15 out of 5,346 results