62,219 Hits in 10.0 sec

A Unified Perspective on Multi-Domain and Multi-Task Learning [article]

Yongxin Yang, Timothy M. Hospedales
2015 arXiv   pre-print
In this paper, we provide a new neural-network based perspective on multi-task learning (MTL) and multi-domain learning (MDL).  ...  Moreover, it leads to a new and practically relevant problem setting of zero-shot domain adaptation (ZSDA), which is analogous to ZSL but for novel domains: a model for an unseen domain can be generated  ...  This can be used to unify and improve on a variety of existing multi-task learning algorithms.  ...
arXiv:1412.7489v3 fatcat:uzugz3hta5ei3kxy4g5szbjhue
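
A minimal sketch of the zero-shot domain adaptation idea described in this entry, assuming (purely for illustration, not the authors' parameterisation) that model weights are a linear function of a domain-descriptor vector, so a model for an unseen domain can be synthesised from its descriptor alone:

```python
import torch
import torch.nn as nn

class DescriptorConditionedLinear(nn.Module):
    """Linear layer whose weights are generated from a domain descriptor.
    Hypothetical sketch: the weight matrix is a linear combination of learned
    bases, mixed according to the descriptor z of the current domain."""

    def __init__(self, in_dim, out_dim, descriptor_dim):
        super().__init__()
        # One weight "basis" per descriptor dimension.
        self.weight_bases = nn.Parameter(
            torch.randn(descriptor_dim, out_dim, in_dim) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x, z):
        # Mix the bases according to the descriptor: (out_dim, in_dim).
        weight = torch.einsum('d,doi->oi', z, self.weight_bases)
        return x @ weight.t() + self.bias

# Train with descriptors of seen domains, then evaluate with the descriptor
# of a novel domain without any of its training data (ZSDA in spirit).
layer = DescriptorConditionedLinear(in_dim=16, out_dim=4, descriptor_dim=3)
x = torch.randn(8, 16)
z_unseen = torch.tensor([0.5, 0.5, 0.0])  # descriptor of an unseen domain
print(layer(x, z_unseen).shape)           # torch.Size([8, 4])
```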

Heterogeneous Representation Learning: A Review [article]

Joey Tianyi Zhou, Xi Peng, Yew-Soon Ong
2020 arXiv   pre-print
transfer learning, learning using privileged information, and heterogeneous multi-task learning.  ...  After that, we conduct a comprehensive discussion on the HRL framework by reviewing some selected learning problems along with their mathematical perspectives, including multi-view learning, heterogeneous  ...  To the best of our knowledge, this could also be the first study to discuss the diverse learning settings and applications from a unified perspective of HRL based on mathematical formulation.  ...
arXiv:2004.13303v2 fatcat:7eiqfril5beqriycv7pzeh7sqa

Learning Robust Data Representation: A Knowledge Flow Perspective [article]

Zhengming Ding and Ming Shao and Handong Zhao and Sheng Li
2020 arXiv   pre-print
It is always demanding to learn robust visual representations for various learning problems; however, this learning and maintenance process usually suffers from noise, incompleteness or knowledge domain  ...  First of all, we deliver a unified formulation for robust knowledge discovery given a single dataset.  ...  This fusion strategy jointly explores representation learning and multi-view fusion in a unified framework.  ...
arXiv:1909.13123v2 fatcat:wll23rkrznejvhzsihc6rwcwve

Multi-mapping Image-to-Image Translation via Learning Disentanglement [article]

Xiaoming Yu, Yuanqi Chen, Thomas Li, Shan Liu, Ge Li
2019 arXiv   pre-print
Recent advances in image-to-image translation focus on learning the one-to-many mapping from two aspects: multi-modal translation and multi-domain translation.  ...  Then, we encourage the generator to learn multi-mappings by random cross-domain translation.  ...  Based on generative adversarial networks [11, 29], they propose a general-purpose framework (pix2pix) to handle I2I.  ...
arXiv:1909.07877v2 fatcat:rjoqvmfmvrewdjft2nvism7ugu

Robust Multi-view Representation: A Unified Perspective from Multi-view Learning to Domain Adaption

Zhengming Ding, Ming Shao, Yun Fu
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
First of all, we formulate a unified learning framework which is able to model most existing multi-view learning and domain adaptation approaches in this line.  ...  learning, and domain adaptation.  ...
doi:10.24963/ijcai.2018/767 dblp:conf/ijcai/DingSF18 fatcat:s2cwblwxnbgavaeirobiyyfk6e

Domain-Invariant Representation Learning from EEG with Private Encoders [article]

David Bethge, Philipp Hallgarten, Tobias Grosse-Puppendahl, Mohamed Kari, Ralf Mikut, Albrecht Schmidt, Ozan Özdenizci
2022 arXiv   pre-print
To that end, we propose a multi-source learning architecture where we extract domain-invariant representations from dataset-specific private encoders.  ...  This becomes a more challenging problem when privacy-preserving representation learning is of interest, such as in clinical settings.  ...  One approach to achieve this from a deep learning perspective is to extract and exploit domain-invariant representations from multi-channel EEG data.  ...
arXiv:2201.11613v2 fatcat:e7cupdqw6vfhxgs2h3yzw5mstu
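
The private-encoder architecture summarised in this entry can be illustrated with a small PyTorch sketch; the layer sizes, dataset names, and flattened-input assumption are mine, not the paper's implementation (which would additionally encourage domain invariance, e.g. with an alignment or adversarial objective):

```python
import torch
import torch.nn as nn

class PrivateEncoderModel(nn.Module):
    """Each source dataset gets its own private encoder into a shared latent
    space; one shared classifier consumes that (ideally domain-invariant)
    representation. Illustrative sketch only."""

    def __init__(self, input_dims, latent_dim=64, n_classes=2):
        super().__init__()
        self.private_encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                nn.Linear(128, latent_dim))
            for name, dim in input_dims.items()
        })
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x, dataset_name):
        z = self.private_encoders[dataset_name](x)
        return self.classifier(z), z

# Two hypothetical EEG datasets with different channel counts, flattened.
model = PrivateEncoderModel({'dataset_a': 32 * 200, 'dataset_b': 64 * 200})
x_a = torch.randn(4, 32 * 200)
logits, z = model(x_a, 'dataset_a')
print(logits.shape, z.shape)  # torch.Size([4, 2]) torch.Size([4, 64])
```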

A Survey on Dialogue Summarization: Recent Advances and New Frontiers [article]

Xiachong Feng, Xiaocheng Feng, Bing Qin
2022 arXiv   pre-print
Furthermore, we discuss some future directions, including faithfulness, multi-modal, multi-domain and multi-lingual dialogue summarization, and give our thoughts on each.  ...  However, there still remains a lack of a comprehensive survey for this task. To this end, we take the first step and present a thorough and wide-ranging review of this research field.  ...  We would also like to thank Shiyue Zhang for her feedback on email summarization and Libo Qin for his helpful discussion.  ...
arXiv:2107.03175v2 fatcat:qghkke4harac3otuvccbuw5pca

Unifying Multi-Domain Multi-Task Learning: Tensor and Neural Network Perspectives [article]

Yongxin Yang, Timothy M. Hospedales
2016 arXiv   pre-print
In this chapter, we propose a single framework that unifies multi-domain learning (MDL) and the related but better studied area of multi-task learning (MTL).  ...  As a second contribution, we present a higher-order generalisation of this framework, capable of simultaneous multi-task-multi-domain learning.  ...  Multi-Domain versus Multi-Task Learning: The difference between domains and tasks can be subtle, and some multi-domain learning problems can be addressed by methods proposed for multi-task learning and  ...
arXiv:1611.09345v1 fatcat:ao45l3bjazcmxmuqyrgeqjw3am
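
The tensor perspective on joint multi-domain multi-task learning can be sketched as a low-rank factorisation of the stack of per-(domain, task) weight matrices; the dimensions, rank, and CP-style factorisation below are illustrative assumptions rather than the chapter's exact construction:

```python
import torch

n_domains, n_tasks, in_dim, out_dim, rank = 3, 4, 16, 8, 5

U_domain = torch.randn(n_domains, rank)        # domain-specific factors
U_task   = torch.randn(n_tasks, rank)          # task-specific factors
U_weight = torch.randn(rank, in_dim, out_dim)  # shared weight bases

def weight_for(domain, task):
    # Combine shared bases with domain- and task-specific coefficients,
    # so every (domain, task) pair gets its own weight matrix while
    # parameters are shared across all pairs.
    coeff = U_domain[domain] * U_task[task]            # (rank,)
    return torch.einsum('r,rio->io', coeff, U_weight)  # (in_dim, out_dim)

x = torch.randn(2, in_dim)
y = x @ weight_for(domain=1, task=2)
print(y.shape)  # torch.Size([2, 8])
```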

Multi-task Domain Adaptation for Sequence Tagging

Nanyun Peng, Mark Dredze
2017 Proceedings of the 2nd Workshop on Representation Learning for NLP  
Traditional domain adaptation only considers adapting for one task. In this paper, we explore multi-task representation learning under the domain adaptation scenario.  ...  We propose a neural network framework that supports domain adaptation for multiple tasks simultaneously, and learns shared representations that better generalize for domain adaptation.  ...  To the best of our knowledge, the work that is closest to ours is Yang and Hospedales (2015) , which provided a unified perspective for multi-task learning and multi-domain learning (a more general case  ... 
doi:10.18653/v1/w17-2612 dblp:conf/rep4nlp/PengD17 fatcat:r2g43hr4pfad5irxwbyrqt4ibu
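
A rough sketch of the shared-representation idea for multi-task sequence tagging: a shared encoder with one tagging head per task, so representations learned for one task benefit the others across domains. The BiLSTM, sizes, and task names are assumptions for illustration, not the authors' code:

```python
import torch
import torch.nn as nn

class SharedRepTagger(nn.Module):
    """Shared embedding + BiLSTM encoder; separate linear tagging head per
    task. Illustrative only."""

    def __init__(self, vocab_size, n_labels_per_task, emb_dim=64, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True,
                               bidirectional=True)
        self.heads = nn.ModuleDict({
            task: nn.Linear(2 * hid_dim, n_labels)
            for task, n_labels in n_labels_per_task.items()
        })

    def forward(self, token_ids, task):
        h, _ = self.encoder(self.embed(token_ids))
        return self.heads[task](h)  # (batch, seq_len, n_labels)

tagger = SharedRepTagger(vocab_size=1000,
                         n_labels_per_task={'ner': 9, 'pos': 17})
tokens = torch.randint(0, 1000, (2, 12))
print(tagger(tokens, 'ner').shape)  # torch.Size([2, 12, 9])
```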

Deep Decentralized Multi-task Multi-Agent Reinforcement Learning under Partial Observability [article]

Shayegan Omidshafiei, Jason Pazis, Christopher Amato, Jonathan P. How, John Vian
2017 arXiv   pre-print
We introduce a decentralized single-task learning approach that is robust to concurrent interactions of teammates, and present an approach for distilling single-task policies into a unified policy that  ...  This paper formalizes and addresses the problem of multi-task multi-agent reinforcement learning under partial observability.  ...  Acknowledgements The authors thank the anonymous reviewers for their insightful feedback and suggestions.  ... 
arXiv:1703.06182v4 fatcat:pt76xj24snafziyymv4nnqkqsy

Multi-task Domain Adaptation for Sequence Tagging [article]

Nanyun Peng, Mark Dredze
2017 arXiv   pre-print
Traditional domain adaptation only considers adapting for one task. In this paper, we explore multi-task representation learning under the domain adaptation scenario.  ...  We propose a neural network framework that supports domain adaptation for multiple tasks simultaneously, and learns shared representations that better generalize for domain adaptation.  ...  To the best of our knowledge, the work that is closest to ours is Yang and Hospedales (2015) , which provided a unified perspective for multi-task learning and multi-domain learning (a more general case  ... 
arXiv:1608.02689v2 fatcat:rhp3xb64irhv3ghzu3jwkwa5cq

Text is Text, No Matter What: Unifying Text Recognition using Knowledge Distillation [article]

Ayan Kumar Bhunia, Aneeshan Sain, Pinaki Nath Chowdhury, Yi-Zhe Song
2021 arXiv   pre-print
Empirical evidence suggests that our proposed unified model performs on par with individual models, even surpassing them in certain cases.  ...  Ablative studies demonstrate that naive baselines such as a two-stage framework, and domain adaptation/generalisation alternatives, do not work as well, further verifying the appropriateness of our design  ...  Conclusion: We put forth a novel perspective towards text recognition: unifying multi-scenario text recognition models.  ...
arXiv:2107.12087v2 fatcat:z2ezlupwyrh5xjykeh6bdkjwg4
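
A minimal sketch of the distillation ingredient used to unify scenario-specific recognisers: the unified student is trained against the soft outputs of each scenario's teacher in addition to the usual hard-label loss. Temperature, weighting, and the 37-class alphabet are placeholders, not values from the paper:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL between temperature-scaled distributions.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction='batchmean') * (T * T)
    # Hard-target term: ordinary cross-entropy on ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(8, 37)  # e.g. 37 character classes
teacher_logits = torch.randn(8, 37)  # from the matching scenario's teacher
labels = torch.randint(0, 37, (8,))
print(distillation_loss(student_logits, teacher_logits, labels).item())
```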

HyperPELT: Unified Parameter-Efficient Language Model Tuning for Both Language and Vision-and-Language Tasks [article]

Zhengkun Zhang, Wenya Guo, Xiaojun Meng, Yasheng Wang, Yadao Wang, Xin Jiang, Qun Liu, Zhenglu Yang
2022 arXiv   pre-print
In this paper, we design a novel unified parameter-efficient transfer learning framework that works effectively on both pure language and V&L tasks.  ...  Our proposed framework adds fewer trainable parameters in multi-task learning while achieving superior performance and transfer ability compared to state-of-the-art methods.  ...  Multi-task Learning: Learning a unified model to perform well on multiple different tasks (i.e., multi-task learning) is a challenging problem in both the NLP and V&L domains.  ...
arXiv:2203.03878v1 fatcat:62pmtxbn35cttfxhs4pr6mb5gy
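
The hypernetwork flavour of parameter-efficient multi-task tuning mentioned in this entry can be sketched as follows: a small hypernetwork maps a learned task embedding to the weights of a bottleneck adapter, so only the hypernetwork and task embeddings are tuned. All sizes and names below are illustrative, not HyperPELT's actual modules:

```python
import torch
import torch.nn as nn

class TaskHyperAdapter(nn.Module):
    """Generate a residual bottleneck adapter's weights from a task
    embedding (illustrative sketch)."""

    def __init__(self, hidden_dim=768, bottleneck=32, task_emb_dim=16):
        super().__init__()
        self.hidden_dim, self.bottleneck = hidden_dim, bottleneck
        n_params = 2 * hidden_dim * bottleneck  # down- and up-projection
        self.hyper = nn.Linear(task_emb_dim, n_params)

    def forward(self, h, task_emb):
        w = self.hyper(task_emb)
        cut = self.hidden_dim * self.bottleneck
        w_down = w[:cut].view(self.hidden_dim, self.bottleneck)
        w_up = w[cut:].view(self.bottleneck, self.hidden_dim)
        # Residual adapter whose weights were produced by the hypernetwork.
        return h + torch.relu(h @ w_down) @ w_up

adapter = TaskHyperAdapter()
h = torch.randn(2, 10, 768)        # transformer hidden states
task_emb = torch.randn(16)         # learned embedding for one task
print(adapter(h, task_emb).shape)  # torch.Size([2, 10, 768])
```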

Multi-Domain Image Completion for Random Missing Input Data [article]

Liyue Shen, Wentao Zhu, Xiaosong Wang, Lei Xing, John M. Pauly, Baris Turkbey, Stephanie Anne Harmon, Thomas Hogue Sanford, Sherif Mehralivand, Peter Choyke, Bradford Wood, Daguang Xu
2020 arXiv   pre-print
We further illustrate that the learned representation in multi-domain image completion could be leveraged for high-level tasks, e.g., segmentation, by introducing a unified framework consisting of image  ...  and separate flesh encoding across multiple domains.  ...  Going further, based on the content code learned in our model, we could develop a joint model for multi-task learning of both generation and segmentation.  ...
arXiv:2007.05534v1 fatcat:buih5jhb5javlla4mmz7v32eqm

A Unified Multi-task Learning Framework for Multi-goal Conversational Recommender Systems [article]

Yang Deng, Wenxuan Zhang, Weiwen Xu, Wenqiang Lei, Tat-Seng Chua, Wai Lam
2022 arXiv   pre-print
Experimental results on two MG-CRS benchmarks (DuRecDial and TG-ReDial) show that UniMIND achieves state-of-the-art performance on all tasks with a unified model.  ...  Prompt-based learning strategies are investigated to endow the unified model with the capability of multi-task learning.  ...  "-w/o PL" denotes that we only train one unified multi-task learning model for all tasks without task-specific prompt-based learning.  ... 
arXiv:2204.06923v1 fatcat:r3ove25n2vcwzha4lx3zglvjwa
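
The prompt-based multi-task strategy referenced in this entry can be illustrated very simply: each sub-task gets its own prompt prepended to the dialogue context so that one unified model handles all tasks. The task names and prompt wording below are invented for illustration and are not taken from UniMIND:

```python
# Hypothetical task prompts for a unified multi-goal conversational model.
TASK_PROMPTS = {
    'goal_planning':       'Plan the next dialogue goal: ',
    'topic_prediction':    'Predict the next topic: ',
    'response_generation': 'Generate the response: ',
}

def build_model_input(task, dialogue_history):
    # Prepend the task-specific prompt to the (joined) dialogue history.
    return TASK_PROMPTS[task] + ' [SEP] '.join(dialogue_history)

print(build_model_input('topic_prediction', ['Hi!', 'Any movie suggestions?']))
```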
Showing results 1 — 15 out of 62,219 results