1,520 Hits in 5.0 sec

Cross-Modality Distillation: A case for Conditional Generative Adversarial Networks [article]

Siddharth Roheda, Benjamin S. Riggan, Hamid Krim, Liyi Dai
<span title="2018-07-20">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this paper, we propose to use a Conditional Generative Adversarial Network (CGAN) for distilling (i.e. transferring) knowledge from sensor data and enhancing low-resolution target detection.  ...  We therefore specifically tackle the problem of a missing modality in our attempt to propose an algorithm based on CGANs to generate representative information from the missing modalities when given some  ...  CONCLUSION In this paper we proposed a technique for cross-modal distillation that uses Conditional Generative Adversarial Networks (CGANs) in order to predict features from modalities that may be unusable  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1807.07682v1">arXiv:1807.07682v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/htuyopxno5er3p5r2lww7nqp4q">fatcat:htuyopxno5er3p5r2lww7nqp4q</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200824102531/https://arxiv.org/pdf/1807.07682v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f1/6e/f16e8e774d638780466598ad973a72843885c2c2.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1807.07682v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Unpaired cross-modality educed distillation (CMEDL) applied to CT lung tumor segmentation [article]

Jue Jiang, Andreas Rimner, Joseph O. Deasy, Harini Veeraraghavan
<span title="2021-07-16">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Therefore, we developed a new cross-modality educed distillation (CMEDL) approach, using unpaired CT and MRI scans, whereby a teacher MRI network guides a student CT network to extract features that signal  ...  Our contribution eliminates two requirements of distillation methods: (i) paired image sets by using an image to image (I2I) translation and (ii) pre-training of the teacher network with a large training  ...  METHODS A. Cross modality educed distillation (CMEDL) An overview of our approach is shown in Fig. 1 , where two subnetworks, first for cross-modality I2I translation (i.e.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2107.07985v1">arXiv:2107.07985v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ao6pbj7crfafzg2pc3wnq3j6pu">fatcat:ao6pbj7crfafzg2pc3wnq3j6pu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210721130620/https://arxiv.org/pdf/2107.07985v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c6/f9/c6f9e71c93ed2a2e4d043d99f42f650eca967e4f.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2107.07985v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Unsupervised domain adaptation for lip reading based on cross-modal knowledge distillation

Yuki Takashima, Ryoichi Takashima, Ryota Tsunoda, Ryo Aihara, Tetsuya Takiguchi, Yasuo Ariki, Nobuaki Motoyama
<span title="">2021</span> <i title="Springer Science and Business Media LLC"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/tzakietxejgppjzsrojed7bkke" style="color: black;">EURASIP Journal on Audio, Speech, and Music Processing</a> </i> &nbsp;
In this paper, we propose a cross-modal knowledge distillation (KD)-based domain adaptation method, where we use the intermediate layer output in the audio-based speech recognition model as a teacher for  ...  Because the audio signal contains more information for recognizing speech than lip images, the knowledge of the audio-based model can be used as a powerful teacher in cases where the unlabeled adaptation  ...  There are some prior works based on a generative adversarial network (GAN) [20] for lip reading. Wand et al.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1186/s13636-021-00232-5">doi:10.1186/s13636-021-00232-5</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/tl5zm6tewrgohmjjwnyanifhya">fatcat:tl5zm6tewrgohmjjwnyanifhya</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220205130745/https://asmp-eurasipjournals.springeropen.com/track/pdf/10.1186/s13636-021-00232-5.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/1c/b6/1cb68505605a147a9ecc7cff697f3a33c3e1f4a7.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1186/s13636-021-00232-5"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> springer.com </button> </a>

Adaptive Cross-Modal Few-Shot Learning [article]

Chen Xing, Negar Rostamzadeh, Boris N. Oreshkin, Pedro O. Pinheiro
<span title="2020-02-18">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Through a series of experiments, we show that by this adaptive combination of the two modalities, our model outperforms current uni-modality few-shot learning methods and modality-alignment methods by  ...  In this paper, we propose to leverage cross-modal information to enhance metric-based few-shot learning methods. Visual and semantic feature spaces have different structures by definition.  ...  A generative adversarial approach for zero-shot learning from noisy texts.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1902.07104v3">arXiv:1902.07104v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/mmjrixwfkzg7fknqaxlgjggdki">fatcat:mmjrixwfkzg7fknqaxlgjggdki</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200826214605/https://arxiv.org/pdf/1902.07104v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d8/d9/d8d948d2503d28b5c9a4db0a98447eb83eabdc77.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1902.07104v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Unsupervised domain adaptation for cross-modality liver segmentation via joint adversarial learning and self-learning [article]

Jin Hong, Simon Chun-Ho Yu, Weitian Chen
<span title="2022-02-24">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this work, we report a novel unsupervised domain adaptation framework for cross-modality liver segmentation via joint adversarial learning and self-learning.  ...  In proposed framework, a network is trained with the above two adversarial losses in an unsupervised manner, and then a mean completer of pseudo-label generation is employed to produce pseudo-labels to  ...  Yongcheng Yao for their valuable comments of this work.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2109.05664v3">arXiv:2109.05664v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/htdtrwkb6vci5orzardyor33me">fatcat:htdtrwkb6vci5orzardyor33me</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210924134901/https://arxiv.org/ftp/arxiv/papers/2109/2109.05664.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/da/27/da27139fc5e6707cef9f903161d6ae25e875d080.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2109.05664v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Novel Incremental Cross-Modal Hashing Approach [article]

Devraj Mandal, Soma Biswas
<span title="2020-02-03">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Cross-modal retrieval deals with retrieving relevant items from one modality, when provided with a search query from another modality.  ...  Extensive experiments across a variety of cross-modal datasets and comparisons with state-of-the-art cross-modal algorithms show the usefulness of our approach.  ...  We evaluate our deep neural network under four different conditions: (1) cross-entropy loss L_ce, (2) weighted cross-entropy loss L_wce, (3) L_ce + IBDS, and (4) L_wce + IBDS. We observe that each component  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2002.00677v1">arXiv:2002.00677v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/5o24pnenr5d5ni5eeydxws6uym">fatcat:5o24pnenr5d5ni5eeydxws6uym</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200321155242/https://arxiv.org/pdf/2002.00677v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2002.00677v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

SwAMP: Swapped Assignment of Multi-Modal Pairs for Cross-Modal Retrieval [article]

Minyoung Kim
<span title="2021-11-10">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We tackle the cross-modal retrieval problem, where the training is only supervised by the relevant multi-modal pairs in the data. Contrastive learning is the most popular approach for this task.  ...  With these swapped labels, we learn the data embedding for each modality using the supervised cross-entropy loss, hence leading to linear sampling complexity.  ...  That is, letting p_true(y|x^A) be the true conditional class distribution for modality A, we minimize E_{p_true(y|x^A)}[−log p(y|x^A)] with respect to P and the network parameters of φ^A(·) (similarly  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.05814v1">arXiv:2111.05814v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/zrnjxmeowrdtjispgmhkpsl4bm">fatcat:zrnjxmeowrdtjispgmhkpsl4bm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211123193921/https://arxiv.org/pdf/2111.05814v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/de/09/de094e6dfb253961b380bc529d82a56be047d962.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.05814v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Voice2Mesh: Cross-Modal 3D Face Model Generation from Voices [article]

Cho-Ying Wu, Ke Xu, Chin-Cheng Hsu, Ulrich Neumann
<span title="2021-04-21">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Previous works for cross-modal face synthesis study image generation from voices.  ...  distillation.  ...  [10] with generative adversarial networks (GANs) and that by Oh et al. [35] with an encoder-decoder network. These works generate face images from the voices of a speaker.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.10299v1">arXiv:2104.10299v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/d5y6yjioxvgihdlmxhh3jroxce">fatcat:d5y6yjioxvgihdlmxhh3jroxce</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210423012117/https://arxiv.org/pdf/2104.10299v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/31/8a/318a99caf7ce2cbb60f077515fefe34fb7a85586.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.10299v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Learning with privileged information via adversarial discriminative modality distillation [article]

Nuno C. Garcia, Pietro Morerio, Vittorio Murino
<span title="2018-10-19">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We propose a new approach to train a hallucination network that learns to distill depth information via adversarial learning, resulting in a clean approach without several losses to balance or hyperparameters  ...  This paper presents a new approach in this direction for RGB-D vision tasks, developed within the adversarial learning and privileged information frameworks.  ...  ACKNOWLEDGMENTS The authors would like to thank Riccardo Volpi for useful discussion on adversarial training and GANs.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1810.08437v1">arXiv:1810.08437v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/emrj23ga3ngprlp2zmtxvms3qy">fatcat:emrj23ga3ngprlp2zmtxvms3qy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191023044049/https://arxiv.org/pdf/1810.08437v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/63/44/634466a4b02a7cc3e8d6a5378049f9d40222365f.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1810.08437v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Knowledge Distillation: A Survey [article]

Jianping Gou, Baosheng Yu, Stephen John Maybank, Dacheng Tao
<span title="2021-03-02">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In recent years, deep neural networks have been successful in both industry and academia, especially for computer vision tasks.  ...  As a representative type of model compression and acceleration, knowledge distillation effectively learns a small student model from a large teacher model.  ...  Fig. 12 The generic framework for cross-modal distillation.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.05525v6">arXiv:2006.05525v6</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/aedzaeln5zf3jgjsgsn5kvjrri">fatcat:aedzaeln5zf3jgjsgsn5kvjrri</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210325125524/https://arxiv.org/pdf/2006.05525v6.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/6b/4f/6b4f2dceae8ef6f602b9ca6ef69fef5f31a7a041.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.05525v6" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

ACE-BERT: Adversarial Cross-modal Enhanced BERT for E-commerce Retrieval [article]

Boxuan Zhang, Chao Wei, Yan Jin, Weiru Zhang
<span title="2021-12-14">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We propose a novel Adversarial Cross-modal Enhanced BERT (ACE-BERT) for efficient E-commerce retrieval. In detail, ACE-BERT leverages the patch features and pixel features as image representation.  ...  These multiple modalities are significant for a retrieval system while providing attractive products for customers.  ...  In addition, generative adversarial networks (GANs) [11] have been proposed in cross-modal retrieval.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.07209v1">arXiv:2112.07209v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/fa3fvsvgojeopginqgdijxt3aq">fatcat:fa3fvsvgojeopginqgdijxt3aq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211220001818/https://arxiv.org/pdf/2112.07209v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/38/c0/38c0bb81e0d85a15cfb8263ac2b40b12054e4c5a.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.07209v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Unsupervised Cross-Modal Alignment for Multi-Person 3D Pose Estimation [article]

Jogendra Nath Kundu, Ambareesh Revanur, Govind Vitthal Waghmare, Rahul Mysore Venkatesh, R. Venkatesh Babu
<span title="2020-08-04">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We cast the learning as a cross-modal alignment problem and propose training objectives to realize a shared latent space between two diverse modalities.  ...  We present a deployment-friendly, fast bottom-up framework for multi-person 3D human pose estimation.  ...  Here, we discuss how these pathways support an effective cross-modal alignment. a) Cross-modal distillation pathway for I ∼ D_I.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.01388v1">arXiv:2008.01388v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/rlfpgoy6vjayhmwyhhvihdizlm">fatcat:rlfpgoy6vjayhmwyhhvihdizlm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200811115803/https://arxiv.org/pdf/2008.01388v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.01388v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Confidence Conditioned Knowledge Distillation [article]

Sourav Mishra, Suresh Sundaram
<span title="2021-07-06">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this paper, a novel confidence conditioned knowledge distillation (CCKD) scheme for transferring the knowledge from a teacher model to a student model is proposed.  ...  Distillation through CCKD methods improves the resilience of the student models against adversarial attacks compared to the conventional KD method.  ...  Recently, distillation has been applied in face recognition [26] , cross-modal hashing [27] and collaborative learning [28] .  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2107.06993v1">arXiv:2107.06993v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/v6ekubfmonevdg2muqbbj6faga">fatcat:v6ekubfmonevdg2muqbbj6faga</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210718231647/https://arxiv.org/pdf/2107.06993v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/03/dc/03dc32d7c5d578d451870c3da9538e97eaa39690.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2107.06993v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

CMMCSegNet: Cross-Modality Multicascade Indirect LGE Segmentation on Multimodal Cardiac MR

Yu Wang, Jianping Zhang, Lin Lu
<span title="2021-06-05">2021</span> <i title="Hindawi Limited"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/xytabl7eh5a5lle2ofnetovd7u" style="color: black;">Computational and Mathematical Methods in Medicine</a> </i> &nbsp;
In the segmentation stage, a novel multicascade pix2pix network is designed to segment the fake bSSFP sequence obtained from a cross-modality translation network.  ...  cross-modality translation network and automatic segmentation network, respectively.  ...  Inspired by the knowledge distillation between unpaired image-to-image translation networks [32] , we employ Cycle-GAN to achieve cross-modality image translation for CMR datasets.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1155/2021/9942149">doi:10.1155/2021/9942149</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/34194539">pmid:34194539</a> <a target="_blank" rel="external noopener" href="https://pubmed.ncbi.nlm.nih.gov/PMC8203380/">pmcid:PMC8203380</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/l4zx3f4zarhrpbdtq66ry5bjse">fatcat:l4zx3f4zarhrpbdtq66ry5bjse</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210616173915/https://downloads.hindawi.com/journals/cmmm/2021/9942149.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/35/45/354506fe64d933051188af8caa28f771ac5b1c20.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1155/2021/9942149"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> hindawi.com </button> </a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8203380" title="pubmed link"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> pubmed.gov </button> </a>

xMUDA: Cross-Modal Unsupervised Domain Adaptation for 3D Semantic Segmentation [article]

Maximilian Jaritz, Tuan-Hung Vu, Raoul de Charette, Émilie Wirbel, Patrick Pérez
<span title="2020-03-30">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this work, we explore how to learn from multi-modality and propose cross-modal UDA (xMUDA) where we assume the presence of 2D images and 3D point clouds for 3D semantic segmentation.  ...  Unsupervised Domain Adaptation (UDA) is crucial to tackle the lack of annotations in a new domain. There are many multi-modal datasets, but most UDA approaches are uni-modal.  ...  The SemanticKITTI dataset [1] provides 3D point cloud labels for the Odometry dataset of KITTI [6], which features a large-angle front camera and a 64-layer LiDAR.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.12676v2">arXiv:1911.12676v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/yaguklx3jvdcnmbbpt54k32ioe">fatcat:yaguklx3jvdcnmbbpt54k32ioe</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200402080136/https://arxiv.org/pdf/1911.12676v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.12676v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>
Showing results 1–15 of 1,520