Learning Cross-Modality Representations from Multi-Modal Images
IEEE Transactions on Medical Imaging, 2018
Machine learning algorithms can have difficulties adapting to data from different sources, for example from different imaging modalities. We present and analyze three techniques for unsupervised cross-modality feature learning, using a shared autoencoder-like convolutional network that learns a common representation from multi-modal data. We investigate a form of feature normalization, a learning objective that minimizes cross-modality differences, and modality dropout, in which the network is trained with random subsets of the input modalities.
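The modality-dropout idea in the abstract lends itself to a short illustration. Below is a minimal sketch of how such a scheme could look during training, assuming each modality arrives as a separate array; the function name `modality_dropout`, the `drop_prob` parameter, and the keep-at-least-one rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def modality_dropout(modalities, drop_prob=0.5, rng=None):
    """Randomly zero out a subset of input modalities for one training step.

    modalities: list of arrays, one per modality, each shaped (batch, ...).
    drop_prob:  probability of dropping each modality independently.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Decide independently for each modality whether it is kept.
    keep = rng.random(len(modalities)) >= drop_prob
    # Assumed safeguard: never drop everything, keep at least one modality.
    if not keep.any():
        keep[rng.integers(len(modalities))] = True
    # Dropped modalities are replaced by zero arrays of the same shape.
    return [m if k else np.zeros_like(m) for m, k in zip(modalities, keep)]

# Example: two hypothetical modalities (e.g. two MRI sequences), batch of 4.
t1 = np.random.rand(4, 1, 16, 16)
t2 = np.random.rand(4, 1, 16, 16)
t1_in, t2_in = modality_dropout([t1, t2], drop_prob=0.5)
```

Feeding such varying subsets to a shared network encourages it to produce a representation that does not depend on any single modality being present.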
doi:10.1109/tmi.2018.2868977
pmid:30188817
fatcat:jyyfmnctgjdnfher66ii7itelq