A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
MedAug: Contrastive learning leveraging patient metadata improves representations for chest X-ray interpretation
[article] · 2021 · arXiv pre-print
Self-supervised contrastive learning between pairs of multiple views of the same image has been shown to successfully leverage unlabeled data to produce meaningful visual representations for both natural and medical images. However, there has been limited work on determining how to select pairs for medical images, where availability of patient metadata can be leveraged to improve representations. In this work, we develop a method to select positive pairs coming from views of possibly different
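The abstract describes selecting contrastive positive pairs from views that may come from different images of the same patient, using patient metadata. A minimal sketch of that pairing idea is below; the record schema (`image_id`, `patient_id` tuples) and the function name are illustrative assumptions, not the paper's actual data format or implementation.

```python
import random


def select_positive_pairs(records, seed=0):
    """Sample one positive pair per patient, preferring two distinct
    images of the same patient when the metadata provides them.

    `records` is a list of (image_id, patient_id) tuples -- a
    hypothetical schema standing in for real patient metadata.
    """
    rng = random.Random(seed)

    # Group image identifiers by patient.
    by_patient = {}
    for image_id, patient_id in records:
        by_patient.setdefault(patient_id, []).append(image_id)

    pairs = []
    for images in by_patient.values():
        if len(images) >= 2:
            # Two distinct images of the same patient form the pair.
            pairs.append(tuple(rng.sample(images, 2)))
        else:
            # Fall back to two augmented views of the single image.
            pairs.append((images[0], images[0]))
    return pairs
```

In a full contrastive pipeline each pair would then be passed through separate augmentations and encoded, with a contrastive loss pulling the pair's representations together; only the pair-selection step is sketched here.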
arXiv:2102.10663v2
fatcat:fedyfy4hu5gxbfvy46x4no7fai