Multi-modal Learning from Unpaired Images: Application to Multi-organ Segmentation in CT and MRI

Vanya V. Valindria, Nick Pawlowski, Martin Rajchl, Ioannis Lavdas, Eric O. Aboagye, Andrea G. Rockall, Daniel Rueckert, Ben Glocker
2018 IEEE Winter Conference on Applications of Computer Vision (WACV)
All of our MRI and CT data are unpaired, which means they are obtained from different subjects and not registered to each other.  ...  The same anatomical structures, however, may be visible in different modalities such as major organs on abdominal CT and MRI.  ...  The MRI data has been collected as part of the MALIBO project funded by the Efficacy and Mechanism Evaluation (EME) Programme, an MRC and NIHR partnership (EME project 13/122/01).  ... 
doi:10.1109/wacv.2018.00066 dblp:conf/wacv/ValindriaPRLARR18 fatcat:bi5jo4ci2jakvhhnplveuvofaq

Unpaired Multi-modal Segmentation via Knowledge Distillation

Qi Dou, Quande Liu, Pheng Ann Heng, Ben Glocker
2020 IEEE Transactions on Medical Imaging  
We propose a novel learning scheme for unpaired cross-modality image segmentation, with a highly compact architecture achieving superior segmentation accuracy.  ...  In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI, and only employ modality-specific internal normalization layers which compute respective statistics  ...  [16] learn image-to-image translation using unpaired CT and MRI cardiac images. Dou et al.  ... 
doi:10.1109/tmi.2019.2963882 pmid:32012001 fatcat:htw4dwhhsbbcbjnqdbwcvrkaxe
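The parameter-sharing scheme described in the excerpt, convolutional kernels shared across CT and MRI with modality-specific normalization layers, can be illustrated with a minimal PyTorch sketch; the block structure, channel sizes, and the CT/MRI index convention below are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SharedConvModalityNorm(nn.Module):
    """Convolution shared across modalities; one BatchNorm per modality."""
    def __init__(self, in_ch, out_ch, num_modalities=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        # Separate normalization layers so each modality keeps its own statistics.
        self.norms = nn.ModuleList([nn.BatchNorm2d(out_ch) for _ in range(num_modalities)])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, modality):
        # modality: 0 = CT, 1 = MRI (illustrative convention)
        return self.act(self.norms[modality](self.conv(x)))

block = SharedConvModalityNorm(in_ch=1, out_ch=16)
ct_feat = block(torch.randn(4, 1, 128, 128), modality=0)   # uses CT statistics
mri_feat = block(torch.randn(4, 1, 128, 128), modality=1)  # uses MRI statistics
```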

Generative Adversarial Networks for MR-CT Deformable Image Registration [article]

Christine Tanner, Firat Ozdemir, Romy Profanter, Valeriy Vishnevsky, Ender Konukoglu, Orcun Goksel
2018 arXiv   pre-print
Image synthesis was learned from 17 unpaired subjects per modality.  ...  Deformable Image Registration (DIR) of MR and CT images is one of the most challenging registration tasks, due to the inherent structural differences of the modalities and the missing dense ground truth  ...  Table 1: Mean volume of segmented structures in cm³ per modality and ratio meanCT/meanMR for unpaired training data.  ... 
arXiv:1807.07349v1 fatcat:ckletfptbvbm7hysmlf4lkrsqi
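A rough sketch of the general synthesize-then-register idea behind such work: once MR has been translated into a CT-like image, registration can use a simple mono-modal intensity metric. The `mr_to_ct` generator is a placeholder for a trained translation network, the rigid initialization stands in for the paper's deformable step, and all parameter values are illustrative.

```python
import SimpleITK as sitk

def register_after_synthesis(fixed_ct, moving_mr, mr_to_ct):
    """mr_to_ct: callable mapping an MR sitk.Image to a synthetic CT
    (placeholder for a trained GAN generator). Returns the MR-to-CT transform."""
    synthetic_ct = mr_to_ct(moving_mr)  # mono-modal problem after synthesis

    fixed = sitk.Cast(fixed_ct, sitk.sitkFloat32)
    moving = sitk.Cast(synthetic_ct, sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()  # intensity metric is valid once both images look like CT
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetInitialTransform(
        sitk.CenteredTransformInitializer(fixed, moving, sitk.Euler3DTransform()))
    reg.SetInterpolator(sitk.sitkLinear)
    return reg.Execute(fixed, moving)
```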

Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [article]

Cheng Chen, Qi Dou, Hao Chen, Jing Qin, Pheng Ann Heng
2020 arXiv   pre-print
We have extensively evaluated our method with cardiac substructure segmentation and abdominal multi-organ segmentation for bidirectional cross-modality adaptation between MRI and CT images.  ...  In this work, we present a novel unsupervised domain adaptation framework, named Synergistic Image and Feature Alignment (SIFA), to effectively adapt a segmentation network to an unlabeled target domain  ...  For both applications, the MRI and CT data are unpaired and collected from different patient cohorts.  ... 
arXiv:2002.02255v1 fatcat:hwqslvayxnh4tg37gmcso3o37u
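The feature-alignment half of such an adaptation framework is commonly implemented with a domain discriminator on the segmenter's predictions; the following generic PyTorch loss computation is an illustrative sketch of that idea, not the authors' exact SIFA formulation.

```python
import torch
import torch.nn.functional as F

def feature_alignment_losses(seg_net, domain_disc, source_img, target_img):
    """Generic adversarial alignment: the discriminator separates source from
    target predictions, and the segmenter is trained to fool it on target data."""
    src_pred = seg_net(source_img)   # (N, C, H, W) prediction maps
    tgt_pred = seg_net(target_img)

    # Discriminator loss: label source predictions 1, target predictions 0.
    src_out = domain_disc(src_pred.detach())
    tgt_out = domain_disc(tgt_pred.detach())
    d_loss = (F.binary_cross_entropy_with_logits(src_out, torch.ones_like(src_out))
              + F.binary_cross_entropy_with_logits(tgt_out, torch.zeros_like(tgt_out)))

    # Segmenter adversarial loss: make target predictions indistinguishable from source.
    adv_out = domain_disc(tgt_pred)
    g_loss = F.binary_cross_entropy_with_logits(adv_out, torch.ones_like(adv_out))
    return d_loss, g_loss
```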

A Survey of Cross-Modality Brain Image Synthesis [article]

Guoyang Xie, Jinbao Wang, Yawen Huang, Yefeng Zheng, Feng Zheng, Yaochu Jin
2022 arXiv   pre-print
In this paper, we approach the multi-modality brain image synthesis task from different perspectives, which include the level of supervision, the range of modality synthesis, and the synthesis-based  ...  The existence of completely aligned and paired multi-modal neuroimaging data has proved its effectiveness in the diagnosis of brain diseases.  ...  Zeng and Zheng [2019] synthesize MRI from ultrasound images using a new fusion scheme to utilize various modalities from unpaired data. Zeng and Zheng [2019] synthesize CT from MR by using self-supervised methods  ... 
arXiv:2202.06997v2 fatcat:kqxte2xrcrcpjfkkhwrcxdjqsu

Deep learning in medical image registration

Xiang Chen, Andres Diaz-Pinto, Nishant Ravikumar, Alejandro Frangi
2020 Progress in Biomedical Engineering  
Image registration is a fundamental task in multiple medical image analysis applications.  ...  contributions to the field; (b) analysis of the development and evolution of deep learning-based image registration methods, summarising the current trends and challenges in the domain; and (c) overview  ...  Acknowledgments The Royal Academy of Engineering supports the work of A F F through a Chair in Emerging Technologies (CiET1819\19) and the MedIAN Network (EP/N026993/1) funded by the Engineering and Physical  ... 
doi:10.1088/2516-1091/abd37c fatcat:74w7ra4f7nfrrpfk2ifvmijntq

SynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth

Yuankai Huo, Zhoubing Xu, Hyeonsoo Moon, Shunxing Bao, Albert Assad, Tamara K. Moyo, Michael R. Savona, Richard G. Abramson, Bennett A. Landman
2019 IEEE Transactions on Medical Imaging  
SynSeg-Net is trained by using (1) unpaired intensity images from the source and target modalities, and (2) manual labels only from the source modality.  ...  Manually traced training images are typically required when segmenting organs in a new imaging modality or from a distinct disease cohort.  ...  Therefore, the synthesis was performed to learn a lower-context imaging modality (abdominal CT) from a richer-context imaging modality (abdominal MRI).  ... 
doi:10.1109/tmi.2018.2876633 pmid:30334788 pmcid:PMC6504618 fatcat:eu4gklq33favznxo2rxw7cxqzy
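The core training idea, translating labeled source images into target-like images and supervising a segmenter with the original source labels, can be sketched as below; the module names, loss weighting, and omission of cycle-consistency and discriminator updates are simplifications rather than the published SynSeg-Net recipe.

```python
import torch
import torch.nn.functional as F

def synth_seg_step(gen_src2tgt, disc_tgt, seg_net, src_img, src_label, lam=1.0):
    """Illustrative generator/segmenter objective: adversarial synthesis plus
    segmentation of the synthesized image supervised by the source-modality label."""
    fake_tgt = gen_src2tgt(src_img)              # e.g. MRI -> CT-like image

    # Adversarial term: synthesized images should look like real target-modality images.
    d_out = disc_tgt(fake_tgt)
    adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))

    # Segmentation term: source labels supervise segmentation of the synthesized image.
    seg = F.cross_entropy(seg_net(fake_tgt), src_label)

    return adv + lam * seg  # cycle-consistency and discriminator updates omitted
```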

Magnetic resonance image (MRI) synthesis from brain computed tomography (CT) images based on deep learning methods for magnetic resonance (MR)-guided radiotherapy

Wen Li, Yafen Li, Wenjian Qin, Xiaokun Liang, Jianyang Xu, Jing Xiong, Yaoqin Xie
2020 Quantitative Imaging in Medicine and Surgery  
In this paper, we proposed a method to synthesize brain MRI images from corresponding planning CT (pCT) images.  ...  The synthetic MRI (sMRI) images can be used to align with the positioning MRI (pMRI) acquired by an MRI-guided accelerator, to account for the disadvantages of multi-modality image registration.  ...  Acknowledgments Funding: This work is supported in part by grants from  ... 
doi:10.21037/qims-19-885 pmid:32550132 pmcid:PMC7276358 fatcat:p3ultej2dfb5npu6vuxghapueu

PSIGAN: Joint probabilistic segmentation and image distribution matching for unpaired cross-modality adaptation based MRI segmentation [article]

Jue Jiang, Yu Chi Hu, Neelam Tyagi, Andreas Rimner, Nancy Lee, Joseph O. Deasy, Sean Berry, Harini Veeraraghavan
2020 arXiv   pre-print
We developed a new joint probabilistic segmentation and image distribution matching generative adversarial network (PSIGAN) for unsupervised domain adaptation (UDA) and multi-organ segmentation from magnetic  ...  Extensive experiments and comparisons against multiple state-of-the-art methods were done on four different MRI sequences totalling 257 scans for generating multi-organ and tumor segmentation.  ...  METHOD Goal: Learn MRI multi-organ segmentation models by using unpaired expert-segmented CT and unlabeled MRI images.  ... 
arXiv:2007.09465v1 fatcat:mzm7lgdjjzc63jaesk2jwmesqq
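The joint distribution matching idea can be loosely sketched as a discriminator that sees image and segmentation-probability pairs rather than images alone; channel concatenation and the tensor shapes below are illustrative choices, not necessarily the authors' exact construction.

```python
import torch

def joint_disc_input(image, seg_logits):
    """Concatenate an image with its softmax probability maps so a discriminator
    can match the joint (image, segmentation) distribution across modalities."""
    probs = torch.softmax(seg_logits, dim=1)   # (N, C, H, W) probability maps
    return torch.cat([image, probs], dim=1)    # (N, 1 + C, H, W)

img = torch.randn(2, 1, 256, 256)
logits = torch.randn(2, 5, 256, 256)           # e.g. 4 organs + background
print(joint_disc_input(img, logits).shape)     # torch.Size([2, 6, 256, 256])
```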

Towards Cross-Modality Medical Image Segmentation with Online Mutual Knowledge Distillation

Kang Li, Lequan Yu, Shujun Wang, Pheng-Ann Heng
2020 Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-20)
Experimental results on the public multi-class cardiac segmentation data, i.e., MM-WHS 2017, show that our method achieves large improvements on CT segmentation by utilizing additional MRI data and outperforms  ...  Considering that multi-modality data with the same anatomic structures are widely available in clinical routine, in this paper, we aim to exploit the prior knowledge (e.g., shape priors) learned from one modality  ...  Acknowledgements The work described in this paper was supported by the following grants from the Hong Kong Innovation and Technology Commission (Project No.  ... 
doi:10.1609/aaai.v34i01.5421 fatcat:4w3zi75kbzgznkz6acsuayvlte
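A minimal sketch of a generic online mutual distillation term, in which a CT network and an MRI network each learn to match the other's softened predictions; the temperature, the symmetric KL form, and the assumption that both prediction maps share a spatial grid are illustrative simplifications (the paper additionally handles the unpaired-data issue, which is omitted here).

```python
import torch.nn.functional as F

def mutual_kd_loss(logits_ct, logits_mri, T=2.0):
    """Symmetric distillation: each network matches the other's softened outputs."""
    p_ct = F.log_softmax(logits_ct / T, dim=1)
    p_mri = F.log_softmax(logits_mri / T, dim=1)
    # Each KL term treats the other network's prediction as a fixed (detached) target.
    kd_ct = F.kl_div(p_ct, p_mri.detach().exp(), reduction="batchmean") * T * T
    kd_mri = F.kl_div(p_mri, p_ct.detach().exp(), reduction="batchmean") * T * T
    return kd_ct, kd_mri  # added to each network's own supervised segmentation loss
```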

Cross-Domain Segmentation with Adversarial Loss and Covariate Shift for Biomedical Imaging [article]

Bora Baydar, Savas Ozkan, A. Emre Kavur, N. Sinem Gezer, M. Alper Selver, Gozde Bozdagi Akar
2020 arXiv   pre-print
For instance, CT and MRI have advantages over each other in terms of imaging quality, artifacts, and output characteristics that lead to differential diagnosis.  ...  Despite the widespread use of deep learning methods for semantic segmentation of images that are acquired from a single source, clinicians often use multi-domain data for a detailed analysis.  ...  In this regard, multi-domain segmentation aims to extract the organs of interest from different image series without explicitly knowing their sources (i.e.  ... 
arXiv:2006.04390v1 fatcat:b64ze2433rh7lpbkm6gmmp5xuq

Discriminative Cross-Modal Data Augmentation for Medical Imaging Applications [article]

Yue Yang, Pengtao Xie
2020 arXiv   pre-print
We propose a discriminative unpaired image-to-image translation model which translates images in the source modality into images in the target modality, where the translation task is conducted jointly with the  ...  While deep learning methods have shown great success in medical image analysis, they require large numbers of medical images to train.  ...  In the first application, the source modality is MRI and the target modality is CT.  ... 
arXiv:2010.03468v1 fatcat:vb4mrokvvfdlnbftbnog7cvvs4
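One way the augmentation use-case can be pictured: after the translation model is trained (jointly with the task, per the abstract), source images translated into the target modality inherit their source labels and are mixed into the target training set. The dataset wrapper and the `gen_src2tgt` generator below are hypothetical illustrations, not the authors' code.

```python
import torch
from torch.utils.data import Dataset

class CrossModalAugmentedDataset(Dataset):
    """Mix real target-modality samples with source samples translated by a
    (hypothetical) trained generator, reusing the original source labels."""
    def __init__(self, target_samples, source_samples, gen_src2tgt):
        self.samples = list(target_samples)          # (image, label) pairs
        with torch.no_grad():
            for img, label in source_samples:        # img: (C, H, W) tensor
                translated = gen_src2tgt(img.unsqueeze(0)).squeeze(0)
                self.samples.append((translated, label))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]
```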

A Review of Generative Adversarial Networks in Cancer Imaging: New Applications, New Solutions [article]

Richard Osuala, Kaisar Kushibar, Lidia Garrucho, Akis Linardos, Zuzanna Szafranowska, Stefan Klein, Ben Glocker, Oliver Diaz, Karim Lekadir
2021 arXiv   pre-print
In this review, we assess the potential of GANs to address a number of key challenges of cancer imaging, including data scarcity and imbalance, domain and dataset shifts, data access and privacy, data  ...  With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on GANs in the artificial intelligence community.  ...  Acknowledgments This project has received funding from the European Union's Horizon 2020 research and innovation pro-  ... 
arXiv:2107.09543v1 fatcat:jz76zqklpvh67gmwnsdqzgq5he

Synergizing medical imaging and radiotherapy with deep learning

Hongming Shan, Xun Jia, Pingkun Yan, Yunyao Li, Harald Paganetti, Ge Wang
2020 Machine Learning: Science and Technology  
This article reviews deep learning methods for medical imaging (focusing on image reconstruction, segmentation, registration, and radiomics) and radiotherapy (ranging from planning and verification to  ...  It is believed that deep learning in particular, and artificial intelligence and machine learning in general, will have a revolutionary potential to advance and synergize medical imaging and radiotherapy  ...  Acknowledgment This work was partially supported by NIH/NCI under award numbers R01CA233888, R01CA237267, R01CA227289, R37CA214639, and R01CA237269, and NIH/NIBIB under award number R01EB026646.  ... 
doi:10.1088/2632-2153/ab869f fatcat:aibfmfelcngkrk4ilwcs25c77a

Review of Medical Image Synthesis using GAN Techniques

M. Krithika alias Anbu Devi, K. Suganthi, J. Kannan R., P. Kommers, A. S, A. Quadir Md
2021 ITM Web of Conferences  
computed tomography (CT) images and 7T from 3T MRI, which can be used to obtain multimodal datasets from a single modality.  ...  Hence GAN is a deep learning method that has been developed for image-to-image translation, i.e. from low-resolution to high-resolution images, for example generating magnetic resonance (MRI) images from  ...  CT images are generated from MRI images with multiple organ labels.  ... 
doi:10.1051/itmconf/20213701005 fatcat:pd3vaaspendihh77aenbbfrqmy