413 Hits in 5.3 sec

Unpaired Multi-modal Segmentation via Knowledge Distillation

Qi Dou, Quande Liu, Pheng Ann Heng, Ben Glocker
2020 IEEE Transactions on Medical Imaging  
We propose a novel learning scheme for unpaired cross-modality image segmentation, with a highly compact architecture achieving superior segmentation accuracy.  ...  To effectively train such a highly compact model, we introduce a novel loss term inspired by knowledge distillation, by explicitly constraining the KL-divergence of our derived prediction distributions  ...  Overview of proposed multi-modal learning scheme for unpaired image segmentation using knowledge distillation.  ... 
doi:10.1109/tmi.2019.2963882 pmid:32012001 fatcat:htw4dwhhsbbcbjnqdbwcvrkaxe

Unpaired Multi-modal Segmentation via Knowledge Distillation [article]

Qi Dou, Quande Liu, Pheng Ann Heng, Ben Glocker
2020 arXiv   pre-print
We propose a novel learning scheme for unpaired cross-modality image segmentation, with a highly compact architecture achieving superior segmentation accuracy.  ...  To effectively train such a highly compact model, we introduce a novel loss term inspired by knowledge distillation, by explicitly constraining the KL-divergence of our derived prediction distributions  ...  Fig. 1. Overview of proposed multi-modal learning scheme for unpaired image segmentation using knowledge distillation.  ... 
arXiv:2001.03111v1 fatcat:vll7ngtldfgipp7axhtrskdsce
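
The KL-divergence constraint mentioned in the two entries above is easy to illustrate in isolation. Below is a minimal, generic sketch of a KL-based distillation term between the prediction distributions of two branches; the function name, tensor shapes, temperature, and reduction are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (not the authors' exact loss): penalize the KL-divergence
    # between temperature-softened prediction distributions of two branches.
    import torch
    import torch.nn.functional as F

    def kl_distillation_loss(logits_a: torch.Tensor, logits_b: torch.Tensor,
                             temperature: float = 2.0) -> torch.Tensor:
        """KL(p_b || p_a) on softened class distributions; (N, C) logits assumed."""
        log_p_a = F.log_softmax(logits_a / temperature, dim=1)
        p_b = F.softmax(logits_b / temperature, dim=1)
        # F.kl_div expects log-probabilities as input and probabilities as target.
        return F.kl_div(log_p_a, p_b, reduction="batchmean") * temperature ** 2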

Non-Local Latent Relation Distillation for Self-Adaptive 3D Human Pose Estimation [article]

Jogendra Nath Kundu, Siddharth Seth, Anirudh Jamkhandi, Pradyumna YM, Varun Jampani, Anirban Chakraborty, R. Venkatesh Babu
2022 arXiv   pre-print
Next, we introduce relation distillation as a means to align the unpaired cross-modal samples, i.e., the unpaired target videos and unpaired 3D pose sequences.  ...  To this end, we cast 3D pose learning as a self-supervised adaptation problem that aims to transfer the task knowledge from a labeled source domain to a completely unpaired target.  ...  via relation distillation.  ... 
arXiv:2204.01971v2 fatcat:fgdpgc3t4jfc3gh47idlxkhnqy
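
As a rough companion to the snippet above, the sketch below shows relation-style distillation in its generic form: instead of matching samples one by one, two branches processing the same batch are asked to agree on the pairwise relation structure of their features. The paper's non-local, cross-modal formulation over unpaired videos and pose sequences is not reproduced here; all names are assumptions.

    # Generic relational-distillation sketch (not the paper's non-local,
    # cross-modal formulation): match pairwise relation structure, not samples.
    import torch
    import torch.nn.functional as F

    def relation_matrix(feats: torch.Tensor) -> torch.Tensor:
        """Cosine-similarity relations among the N samples of a batch; feats: (N, D)."""
        z = F.normalize(feats, dim=1)
        return z @ z.t()  # (N, N)

    def relation_distillation_loss(student_feats: torch.Tensor,
                                   teacher_feats: torch.Tensor) -> torch.Tensor:
        """Penalize differences between the student's and the frozen teacher's
        relation matrices computed on the same batch of inputs."""
        return F.mse_loss(relation_matrix(student_feats),
                          relation_matrix(teacher_feats).detach())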

Deep Class-Specific Affinity-Guided Convolutional Network for Multimodal Unpaired Image Segmentation [article]

Jingkun Chen, Wenqi Li, Hongwei Li, Jianguo Zhang
2021 arXiv   pre-print
Multi-modal medical image segmentation plays an essential role in clinical diagnosis. It remains challenging as the input modalities are often not well-aligned spatially.  ...  To learn effective representations, we design class-specific affinity matrices to encode the knowledge of hierarchical feature reasoning, together with the shared convolutional layers to ensure the cross-modality  ...  We designed the following six experimental settings (single training of separate modalities (Single), Unpaired Multi-modal Segmentation via Knowledge Distillation (UMMKD) [3], Joint training of two modalities  ... 
arXiv:2101.01513v1 fatcat:po63pr66lzcbdlfu3w4fozdpjq
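
The class-specific affinity idea in the entry above can be sketched as follows: pool a feature map into one descriptor per class using the predicted class-probability maps, then take pairwise affinities between those descriptors. This is a generic reading of the idea, not the paper's code; the shapes, the cosine affinity, and the epsilon are assumptions.

    # Hedged sketch: build a per-image class-by-class affinity matrix from a
    # feature map and softmax class-probability maps (generic reading only).
    import torch
    import torch.nn.functional as F

    def class_affinity(features: torch.Tensor, class_probs: torch.Tensor) -> torch.Tensor:
        """features: (N, D, H, W); class_probs: (N, C, H, W). Returns (N, C, C)."""
        n, d, h, w = features.shape
        c = class_probs.shape[1]
        f = features.reshape(n, d, h * w)                 # (N, D, HW)
        p = class_probs.reshape(n, c, h * w)              # (N, C, HW)
        p = p / (p.sum(dim=2, keepdim=True) + 1e-6)       # per-class pooling weights
        class_desc = torch.bmm(p, f.transpose(1, 2))      # (N, C, D) class descriptors
        class_desc = F.normalize(class_desc, dim=2)
        return torch.bmm(class_desc, class_desc.transpose(1, 2))  # cosine affinities

Affinity matrices computed this way for each modality could then be aligned with a simple L1/L2 penalty; whether and how the paper does that is not reflected in this sketch.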

Towards Cross-Modality Medical Image Segmentation with Online Mutual Knowledge Distillation

Kang Li, Lequan Yu, Shujun Wang, Pheng-Ann Heng
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)  
We then propose a novel Mutual Knowledge Distillation (MKD) scheme to thoroughly exploit the modality-shared knowledge to facilitate the target-modality segmentation.  ...  other state-of-the-art multi-modality learning methods.  ...  Experiments Dataset and Implementation Details We evaluate the proposed method on the Multi-modality Whole Heart Segmentation Challenge 2017 (MM-WHS 2017) dataset, which contains 20 unpaired MRI and  ... 
doi:10.1609/aaai.v34i01.5421 fatcat:4w3zi75kbzgznkz6acsuayvlte

Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [article]

Kang Li, Lequan Yu, Shujun Wang, Pheng-Ann Heng
2020 arXiv   pre-print
data. We then propose a novel Mutual Knowledge Distillation (MKD) scheme to thoroughly exploit the modality-shared knowledge to facilitate the target-modality segmentation.  ...  other state-of-the-art multi-modality learning methods.  ...  Experiments Dataset and Implementation Details We evaluate the proposed method on the Multi-modality Whole Heart Segmentation Challenge 2017 (MM-WHS 2017) dataset, which contains 20 unpaired MRI and 20  ... 
arXiv:2010.01532v1 fatcat:ipaxqvcjpne3jf44yyin7ywxxe
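
The mutual distillation scheme named in the two entries above can be pictured as a pair of symmetric soft-target terms, one per network. The sketch below is a generic deep-mutual-learning-style formulation and assumes that both networks see corresponding inputs (e.g., an image and its cross-modality translation); it is not the paper's exact scheme, and the names are assumptions.

    # Generic online mutual-distillation sketch (assumes corresponding inputs
    # for the two networks; not the paper's exact scheme).
    import torch
    import torch.nn.functional as F

    def mutual_distillation_losses(logits_a: torch.Tensor, logits_b: torch.Tensor):
        """Each network matches the other's softmax output; the 'teacher' side of
        each term is detached so the two networks teach each other in turn."""
        loss_a = F.kl_div(F.log_softmax(logits_a, dim=1),
                          F.softmax(logits_b, dim=1).detach(),
                          reduction="batchmean")
        loss_b = F.kl_div(F.log_softmax(logits_b, dim=1),
                          F.softmax(logits_a, dim=1).detach(),
                          reduction="batchmean")
        return loss_a, loss_b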

Dual-Teacher: Integrating Intra-domain and Inter-domain Teachers for Annotation-efficient Cardiac Segmentation [article]

Kang Li, Shujun Wang, Lequan Yu, Pheng-Ann Heng
2020 arXiv   pre-print
from inter-domain teacher via knowledge distillation.  ...  semi-supervised learning further exploring plentiful unlabeled data, domain adaptation including multi-modality learning and unsupervised domain adaptation resorting to the prior knowledge from additional  ...  [21] proposed a dual-stream approach to integrate the prior knowledge from unpaired multi-modality data for improved multi-organ segmentation, and suggested X-shape achieving the leading performance  ... 
arXiv:2007.06279v1 fatcat:rtungnerpvhs5bav2tkymvvdsq
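
The Dual-Teacher entry above combines supervision from two teachers. A hedged sketch of how such signals can be combined for one student is given below; the loss weights, the MSE/KL choices, and the split into labeled and unlabeled batches are assumptions rather than the paper's implementation.

    # Hedged sketch: supervised loss on labeled data plus soft targets from an
    # intra-domain teacher (e.g. an EMA copy of the student) and an inter-domain
    # teacher on unlabeled data. Weights and loss choices are assumptions.
    import torch
    import torch.nn.functional as F

    def dual_teacher_loss(student_logits_lab, labels,
                          student_logits_unlab, intra_teacher_logits,
                          inter_teacher_logits, w_intra=0.1, w_inter=0.1):
        sup = F.cross_entropy(student_logits_lab, labels)
        consist = F.mse_loss(F.softmax(student_logits_unlab, dim=1),
                             F.softmax(intra_teacher_logits, dim=1).detach())
        distill = F.kl_div(F.log_softmax(student_logits_unlab, dim=1),
                           F.softmax(inter_teacher_logits, dim=1).detach(),
                           reduction="batchmean")
        return sup + w_intra * consist + w_inter * distill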

Toward Unpaired Multi-modal Medical Image Segmentation via Learning Structured Semantic Consistency [article]

Jie Yang, Ruimao Zhang, Chaoqun Wang, Zhen Li, Xiang Wan, Lingyan Zhang
2022 arXiv   pre-print
In this paper, we propose a novel scheme to achieve better pixel-level segmentation for unpaired multi-modal medical images.  ...  To demonstrate the effectiveness of the proposed method, we conduct the experiments on two medical image segmentation scenarios: (1) cardiac structure segmentation, and (2) abdominal multi-organ segmentation  ...  Overview of our proposed unpaired multi-modal medical image segmentation framework via single Transformer architecture and the proposed EAMs.  ... 
arXiv:2206.10571v1 fatcat:sldtfe7gjvfcnmwfx645fket44

Towards a Unified Foundation Model: Jointly Pre-Training Transformers on Unpaired Images and Text [article]

Qing Li, Boqing Gong, Yin Cui, Dan Kondratyuk, Xianzhi Du, Ming-Hsuan Yang, Matthew Brown
2021 arXiv   pre-print
To efficiently pre-train the proposed model jointly on unpaired images and text, we propose two novel techniques: (i) We employ the separately-trained BERT and ViT models as teachers and apply knowledge  ...  The experiments show that the resultant unified foundation transformer works surprisingly well on both the vision-only and text-only tasks, and the proposed knowledge distillation and gradient masking  ...  [1] proposed VATT to process video, audio, and text via a single transformer encoder, which is trained by multi-modal contrastive learning and desires aligned data triplets across modalities.  ... 
arXiv:2112.07074v1 fatcat:lptiemsf5fbjfas2ep522fpmce
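
The entry above mentions knowledge distillation from separately trained BERT and ViT teachers together with a gradient-masking strategy. The sketch below shows one generic way to mask gradients per modality, zeroing the gradients of parameters not assigned to the current modality; this is an illustrative interpretation, not necessarily the paper's strategy, and the name-prefix convention is an assumption.

    # Illustrative gradient-masking sketch (a generic interpretation, not
    # necessarily the paper's strategy): after backpropagating one modality's
    # loss, zero the gradients of parameters reserved for the other modality.
    import torch

    def mask_gradients(model: torch.nn.Module, keep_prefix: str) -> None:
        """Zero grads of every parameter whose name does not start with keep_prefix."""
        for name, param in model.named_parameters():
            if param.grad is not None and not name.startswith(keep_prefix):
                param.grad.zero_()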

Mix and match networks: cross-modal alignment for zero-pair image-to-image translation [article]

Yaxing Wang, Luis Herranz, Joost van de Weijer
2020 arXiv   pre-print
We also propose zero-pair cross-modal image translation, a challenging setting where the objective is inferring semantic segmentation from depth (and vice-versa) without explicit segmentation-depth pairs  ...  This paper addresses the problem of inferring unseen cross-modal image-to-image translations between multiple modalities.  ...  The aim would then be to exploit the knowledge from the paired modalities to obtain an improved mapping for the unpaired modalities.  ... 
arXiv:1903.04294v2 fatcat:m4cecpfwbnhwrlx5rsngblxhke

Image-to-Image Translation: Methods and Applications [article]

Yingxue Pang, Jianxin Lin, Tao Qin, Zhibo Chen
2021 arXiv   pre-print
I2I has drawn increasing attention and made tremendous progress in recent years because of its wide range of applications in many computer vision and image processing problems, such as image synthesis, segmentation  ...  [fragment of a survey table comparing I2I methods such as DAI2I (2020, unpaired) and NICE-GAN (2020, unpaired) by training data, use of knowledge distillation, and key technique]  ...  [104] use the knowledge distillation scheme to define a teacher generator and student discriminator.  ... 
arXiv:2101.08629v2 fatcat:i6pywjwnvnhp3i7cmgza2slnle

3D-to-2D Distillation for Indoor Scene Parsing [article]

Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu
2021 arXiv   pre-print
First, we distill 3D knowledge from a pretrained 3D network to supervise a 2D network to learn simulated 3D features from 2D features during the training, so the 2D network can infer without requiring  ...  Third, we design a semantic-aware adversarial training model to extend our framework for training with unpaired 3D data.  ...  Learning with privileged information via adversarial discriminative modality distillation.  ... 
arXiv:2104.02243v2 fatcat:gxxz3xjtqvee3ophixmm4dfi5a
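
The 3D-to-2D distillation described above boils down, at its simplest, to a feature-mimicking term: during training the 2D network predicts "simulated 3D" features that are regressed toward the features of a frozen, pretrained 3D network, so no 3D input is needed at inference. The sketch below is that minimal term only; the shapes, the alignment between features, and the L2 choice are assumptions.

    # Minimal feature-mimicking sketch (shapes, alignment, and the L2 choice are
    # assumptions): regress the 2D branch's simulated 3D features toward features
    # from a frozen, pretrained 3D teacher; used at training time only.
    import torch
    import torch.nn.functional as F

    def feature_mimic_loss(simulated_3d_feat: torch.Tensor,
                           teacher_3d_feat: torch.Tensor) -> torch.Tensor:
        return F.mse_loss(simulated_3d_feat, teacher_3d_feat.detach())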

MT-UDA: Towards Unsupervised Cross-modality Medical Image Segmentation with Limited Source Labels [chapter]

Ziyuan Zhao, Kaixin Xu, Shumeng Li, Zeng Zeng, Cuntai Guan
2021 Lecture Notes in Computer Science  
Consequently, the student model can effectively integrate the underlying knowledge beneath available data resources to mitigate the impact of source label scarcity and yield improved cross-modality segmentation  ...  More specifically, the student model not only distills the intra-domain semantic knowledge by encouraging prediction consistency but also exploits the inter-domain anatomical information by enforcing structural  ...  We evaluated our method on the Multi-Modality Whole Heart Segmentation (MM-WHS) 2017 dataset, consisting of 20 unpaired MR and 20 CT volumes with ground truth masks.  ... 
doi:10.1007/978-3-030-87193-2_28 fatcat:cmqrhlvgpvba5kape4qawz5eti

Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks [article]

Lin Wang, Kuk-Jin Yoon
2021 arXiv   pre-print
To achieve faster speeds and to handle the problems caused by the lack of data, knowledge distillation (KD) has been proposed to transfer information learned from one model to another.  ...  KD is often characterized by the so-called 'Student-Teacher' (S-T) learning framework and has been broadly applied in model compression and knowledge transfer.  ...  [148] focuses on unpaired images of two modalities, and learns a semantic segmentation network (student) using the knowledge from the other modality (teacher).  ... 
arXiv:2004.05937v6 fatcat:yqzo7nylzbbn7pfhzpfc2qaxea

MRI-based Alzheimer's disease prediction via distilling the knowledge in multi-modal data [article]

Hao Guan
2021 arXiv   pre-print
In this work, we propose a multi-modal multi-instance distillation scheme, which aims to distill the knowledge learned from multi-modal data to an MRI-based network for MCI conversion prediction.  ...  To our best knowledge, this is the first study that attempts to improve an MRI-based prediction model by leveraging extra supervision distilled from multi-modal information.  ...  Dou et al. (2020) leverage cross-modal distillation to address the unpaired multi-modal segmentation task.  ... 
arXiv:2104.03618v1 fatcat:654s32pwpna37cvezvb7uomn2e
Showing results 1 — 15 out of 413 results