18,335 Hits in 6.3 sec

Unsupervised Information Extraction: Regularizing Discriminative Approaches with Relation Distribution Losses

Étienne Simon, Vincent Guigue, Benjamin Piwowarski
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
Unsupervised relation extraction aims at extracting relations between entities in text. Previous unsupervised approaches are either generative or discriminative.  ...  To overcome this limitation, we introduce a skewness loss which encourages the classifier to predict a relation with confidence given a sentence, and a distribution distance loss enforcing that all relations  ...  This work was led with the support of the FUI-BInD Project.  ... 
doi:10.18653/v1/p19-1133 dblp:conf/acl/SimonGP19 fatcat:4fbptj3kzjajtcilitn66hcsli
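The two losses named in the snippet above can be made concrete. Below is a minimal numpy sketch of how such losses are commonly computed; the function names and exact form (e.g., the uniform prior) are assumptions for illustration, not the paper's verbatim formulation.

```python
import numpy as np

def skewness_loss(probs):
    # Mean entropy of each sentence's predicted relation distribution;
    # minimizing it encourages the classifier to predict one relation
    # with confidence for every sentence.
    return float(np.mean(-np.sum(probs * np.log(probs + 1e-12), axis=1)))

def distribution_distance_loss(probs):
    # KL divergence between the batch-averaged relation distribution and
    # a uniform prior; minimizing it discourages collapsing onto a few
    # relations, so all relations stay in use.
    mean = probs.mean(axis=0)
    uniform = np.full_like(mean, 1.0 / mean.size)
    return float(np.sum(mean * np.log((mean + 1e-12) / uniform)))
```

Both functions operate on a (batch, num_relations) array of softmax outputs.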

Informative Feature Disentanglement for Unsupervised Domain Adaptation

Wanxia Deng, Lingjun Zhao, Qing Liao, Deke Guo, Gangyao Kuang, Dewen Hu, Matti Pietikainen, Li Liu
2021 IEEE transactions on multimedia  
but related to the source distribution.  ...  Metric discrepancy-based methods: Minimizing the domain distribution discrepancy with a metric-based paradigm is another classical approach for UDA.  ... 
doi:10.1109/tmm.2021.3080516 fatcat:jipfcoeoijgvxok73ztytwym4a

Facilitating information extraction without annotated data using unsupervised and positive-unlabeled learning

Zfania Tom Korach, Sharmitha Yerneni, Jonathan Einbinder, Carl Kallenberg, Li Zhou
2021 AMIA Annual Symposium Proceedings  
Information extraction (IE), the distillation of specific information from unstructured data, is a core task in natural language processing.  ...  We combined unsupervised- with biased positive-unlabeled (PU) learning methods to: 1) facilitate positive example collection while maintaining the assumptions needed to 2) learn a binary classifier from  ...  Conclusion: The combination of unsupervised- and positive-unlabeled learning methods may reduce the manual effort required to train an information-extraction classifier for a rare entity by focusing the  ... 
pmid:33936440 pmcid:PMC8075513 fatcat:6lub2xookravxay67ineshdypy
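As an annotation on the PU-learning entry above: one common biased-PU recipe (the Elkan-Noto score correction, given here as an illustrative sketch, not necessarily the authors' exact procedure) trains a classifier to separate labeled positives from unlabeled examples, then rescales its scores into estimates of the true positive probability.

```python
import numpy as np

def correct_pu_scores(scores, labeled_positive_mask):
    # Elkan-Noto style correction (illustrative assumption, not the
    # paper's stated method): a classifier trained to separate
    # labeled-positive from unlabeled examples predicts s(x) = p(labeled | x);
    # dividing by c = E[s(x) | labeled positive] estimates p(positive | x).
    c = scores[labeled_positive_mask].mean()
    return np.clip(scores / c, 0.0, 1.0)
```

The constant `c` is the classifier's average score on known positives, so held-out labeled positives suffice to calibrate it.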

Neural Vector Spaces for Unsupervised Information Retrieval

Christophe Van Gysel, Maarten de Rijke, Evangelos Kanoulas
2018 ACM Transactions on Information Systems  
We also show that NVSM learns regularities related to Luhn significance. Finally, we give advice on how to deploy NVSM in situations where model selection (e.g., cross-validation) is infeasible.  ...  We find that an unsupervised ensemble of multiple models trained with different hyperparameter values performs better than a single cross-validated model.  ...  We then relate regularities learned by the model to traditional retrieval statistics (RQ4).  ... 
doi:10.1145/3196826 fatcat:46qldllsnfd4xfyrrvw7xrtjhq

Deep Unsupervised Image Anomaly Detection: An Information Theoretic Framework [article]

Fei Ye, Huangjie Zheng, Chaoqin Huang, Ya Zhang
2020 arXiv   pre-print
In this paper, we return to a direct objective function for anomaly detection with information theory, which maximizes the distance between normal and anomalous data in terms of the joint distribution  ...  Based on this objective function we introduce a novel information theoretic framework for unsupervised image anomaly detection.  ...  The main challenge is to find a more effective loss function to replace the typically adopted pixel-wise MSE loss, which has been shown ineffective at forcing the model to extract discriminative features [  ... 
arXiv:2012.04837v1 fatcat:nrzcxsmjtzhh3nzpocoxpu7x44

Information-based Disentangled Representation Learning for Unsupervised MR Harmonization [article]

Lianrui Zuo, Blake E. Dewey, Aaron Carass, Yihao Liu, Yufan He, Peter A. Calabresi, Jerry L. Prince
2021 arXiv   pre-print
Both qualitative and quantitative results show that the proposed method achieves superior performance compared with other unsupervised harmonization approaches.  ...  In this work, we propose an unsupervised MR harmonization framework, CALAMITI (Contrast Anatomy Learning and Analysis for MR Intensity Translation and Integration), based on information bottleneck theory  ...  Table 1 provides a summary comparison of the proposed method with other unsupervised IIT approaches.  ... 
arXiv:2103.13283v1 fatcat:25l2fcymyba2pb4t3euokyvwdu

Enhancing Unsupervised Generative Dependency Parser with Contextual Information

Wenjuan Han, Yong Jiang, Kewei Tu
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
Our extensive experimental results on seventeen datasets from various sources show that our approach achieves competitive accuracy compared with both generative and discriminative state-of-the-art unsupervised  ...  Our approach can be regarded as a new type of autoencoder model to unsupervised dependency parsing that combines the benefits of both generative and discriminative techniques.  ...  Conclusion We propose D-NDMV, a novel unsupervised parser with characteristics from both generative and discriminative approaches to unsupervised parsing.  ... 
doi:10.18653/v1/p19-1526 dblp:conf/acl/HanJT19 fatcat:fvuchp7odraufgqj3pxl3z426u

Improving Unsupervised Domain Adaptation with Variational Information Bottleneck [article]

Yuxuan Song, Lantao Yu, Zhangjie Cao, Zhiming Zhou, Jian Shen, Shuo Shao, Weinan Zhang, Yong Yu
2019 arXiv   pre-print
To leverage and adapt the label information from the source domain, most existing methods employ a feature-extracting function and match the marginal distributions of source and target domains in a shared  ...  In this paper, from the perspective of information theory, we show that representation matching is actually an insufficient constraint on the feature space for obtaining a model with good generalization  ...  In contrast, our method seeks to provide a new regularization technique for general unsupervised domain adaptation with deep neural networks.  ... 
arXiv:1911.09310v1 fatcat:tj4yxoebebab3jufajtm7w4wbe

BottleSum: Unsupervised and Self-supervised Sentence Summarization using the Information Bottleneck Principle [article]

Peter West, Ari Holtzman, Jan Buys, Yejin Choi
2019 arXiv   pre-print
Using only pretrained language models with no direct supervision, our approach can efficiently perform extractive sentence summarization over a large corpus.  ...  In this paper, we propose a novel approach to unsupervised sentence summarization by mapping the Information Bottleneck principle to a conditional language modelling objective: given a sentence, our approach  ...  Unsupervised Extractive Summarization We now use the Information Bottleneck principle to propose BottleSum Ex , an unsupervised extractive approach to sentence summarization.  ... 
arXiv:1909.07405v2 fatcat:ph3oriu4k5emlo5v6j2zoj6tse
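BottleSum's extractive step can be sketched as a greedy deletion search: repeatedly drop words from a sentence (compression) as long as the following sentence remains well predicted from what is left (relevance). The sketch below substitutes a toy word-overlap scorer for the paper's pretrained-LM log-probability, so `relevance` is a hypothetical stand-in, not the actual scoring function.

```python
def relevance(summary_words, next_sentence_words):
    # Toy stand-in for an LM score p(next sentence | summary):
    # here, simply the count of shared vocabulary.
    return len(set(summary_words) & set(next_sentence_words))

def bottlesum_ex(sentence, next_sentence):
    # Greedy deletion search: keep removing single words as long as the
    # shorter candidate does not hurt relevance to the next sentence.
    best, nxt = sentence.split(), next_sentence.split()
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            cand = best[:i] + best[i + 1:]
            if cand and relevance(cand, nxt) >= relevance(best, nxt):
                best = cand
                improved = True
                break
    return " ".join(best)
```

Because shorter candidates are accepted whenever relevance is preserved, the loop implements the compression/relevance trade-off of the Information Bottleneck in its simplest form.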

Unsupervised Multi-Target Domain Adaptation: An Information Theoretic Approach [article]

Behnam Gholami, Pritish Sahu, Ognjen Rudovic, Konstantinos Bousmalis, Vladimir Pavlovic
2018 arXiv   pre-print
In this work we propose an information theoretic approach for domain adaptation in the novel context of multiple target domains with unlabeled instances and one source domain with labeled instances.  ...  Disentanglement of shared and private information is accomplished using a unified information-theoretic approach, which also serves to establish a stronger link between the latent representations and the  ...  In the unsupervised representation learning literature, our work is also related to the VAE-based models [11] .  ... 
arXiv:1810.11547v1 fatcat:mius3gsmenbtzdq6zfmyjlo2wa

CycleQSM: Unsupervised QSM Deep Learning using Physics-Informed CycleGAN [article]

Gyutaek Oh, Hyokyoung Bae, Hyun-Seo Ahn, Sung-Hong Park, Jong Chul Ye
2020 arXiv   pre-print
To address this, here we propose a novel unsupervised QSM deep learning method using physics-informed cycleGAN, which is derived from an optimal transport perspective.  ...  In contrast to the conventional cycleGAN, our novel cycleGAN has only one generator and one discriminator thanks to the known dipole kernel.  ...  Note that the additional losses in (14) and (15) are average values with respect to the marginal distributions.  ... 
arXiv:2012.03842v1 fatcat:c2ke2ibxtbgure3443ehifgpua
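The "known dipole kernel" mentioned in the snippet is the fixed physics linking tissue susceptibility to the measured field perturbation in QSM; because this forward model requires no learning, the cycle can be closed with a single learned generator. A minimal numpy sketch of the standard frequency-domain kernel (normalization conventions vary between implementations):

```python
import numpy as np

def dipole_kernel(shape):
    # Frequency-domain dipole kernel D(k) = 1/3 - k_z^2 / |k|^2, the known
    # forward model from susceptibility to field; by convention the
    # undefined value at k = 0 is set to zero.
    kz, ky, kx = np.meshgrid(*(np.fft.fftfreq(n) for n in shape), indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        d = 1.0 / 3.0 - kz**2 / k2
    d[k2 == 0] = 0.0
    return d
```

Applying the forward model is then an elementwise multiplication in k-space, which is why only the inverse mapping needs a trained generator.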

Information-Theoretical Learning of Discriminative Clusters for Unsupervised Domain Adaptation [article]

Yuan Shi, Fei Sha (University of Southern California)
2012 arXiv   pre-print
Specifically, while the method identifies a feature space where data in the source and the target domains are similarly distributed, it also learns the feature space discriminatively, optimizing an information-theoretic  ...  Many existing approaches first learn domain-invariant features and then construct classifiers with them. We propose a novel approach that jointly learns both.  ...  Our work is also related to the recent study of regularized information maximization for discriminative clustering (Gomes et al., 2010).  ... 
arXiv:1206.6438v1 fatcat:plkxnokwkrdpnhwg3b6ecnq6pm
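The "regularized information maximization" criterion referenced in this entry has a compact estimate: the mutual information between inputs and cluster assignments, computed from classifier outputs as marginal entropy minus mean conditional entropy. A numpy sketch of that general textbook form (an assumption for illustration, not the paper's full objective):

```python
import numpy as np

def mutual_information_objective(probs):
    # Estimate I(x; y) = H(mean prediction) - mean(H(prediction)) from a
    # (batch, num_clusters) array of softmax outputs; maximizing it favors
    # confident per-example predictions with balanced cluster usage.
    def entropy(p, axis=None):
        return -np.sum(p * np.log(p + 1e-12), axis=axis)
    marginal = probs.mean(axis=0)
    return float(entropy(marginal) - entropy(probs, axis=1).mean())
```

Confident, balanced assignments score near log(num_clusters); a collapsed or uncertain classifier scores near zero.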

Unsupervised Domain Adaptation for Cardiac Segmentation: Towards Structure Mutual Information Maximization [article]

Changjie Lu, Shen Zheng, Gaurav Gupta
2022 arXiv   pre-print
This paper introduces UDA-VAE++, an unsupervised domain adaptation framework for cardiac segmentation with a compact loss function lower bound.  ...  Unsupervised domain adaptation approaches have recently succeeded in various medical image segmentation tasks.  ...  Related Work: Unsupervised Domain Adaptation (UDA) has been widely used for biomedical image segmentation tasks.  ... 
arXiv:2204.09334v2 fatcat:4445agb5bvbojcdzyhcidaews4

Unsupervised Meta-path Reduction on Heterogeneous Information Networks [article]

Xiaokai Wei, Zhiwei Liu, Lichao Sun, Philip S. Yu
2018 arXiv   pre-print
Heterogeneous Information Network (HIN) has attracted much attention due to its wide applicability in a variety of data mining tasks, especially for tasks with multi-typed objects.  ...  A potentially large number of meta-paths can be extracted from the heterogeneous networks, providing abundant semantic knowledge.  ...  Related Work: In this section, we review related work on heterogeneous information networks and unsupervised feature selection.  ... 
arXiv:1810.12503v2 fatcat:m5zjwmw4nrgchpu4fndbafmruq

Unsupervised Domain Adaptation for Dysarthric Speech Detection via Domain Adversarial Training and Mutual Information Minimization [article]

Disong Wang, Liqun Deng, Yu Ting Yeung, Xiao Chen, Xunying Liu, Helen Meng
2021 arXiv   pre-print
minimization (MIM), which aim to learn dysarthria-discriminative and domain-invariant biomarker embeddings.  ...  This paper makes a first attempt to formulate cross-domain DSD as an unsupervised domain adaptation (UDA) problem.  ...  To extract domain-related information, a domain encoder θ_dom and a domain classifier ψ_dom are utilized.  ... 
arXiv:2106.10127v1 fatcat:lvtvogaccfcfrkncnnkhsksifm
Showing results 1–15 of 18,335.