3,490 Hits in 8.0 sec

Weakly Supervised Contrastive Learning for Chest X-Ray Report Generation [article]

An Yan, Zexue He, Xing Lu, Jiang Du, Eric Chang, Amilcare Gentili, Julian McAuley, Chun-Nan Hsu
2021 arXiv   pre-print
A typical setting consists of training encoder-decoder models on image-report pairs with a cross-entropy loss, which struggles to generate informative sentences for clinical diagnoses since normal findings  ...  To tackle this challenge and encourage more clinically accurate text outputs, we propose a novel weakly supervised contrastive loss for medical report generation.  ...  Given a chest X-ray image I, its visual features X are extracted by pre-trained convolutional neural networks (e.g. ResNet (He et al., 2016)).  ...  A minimal feature-extraction sketch follows this entry.
arXiv:2109.12242v1 fatcat:vim6o7le3vc7fh5bakynaaa3re
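
The snippet above describes extracting visual features X from a chest X-ray I with a pre-trained CNN such as ResNet. Below is a minimal sketch of that step; the ResNet-50 backbone, ImageNet preprocessing and the file path are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: extract CNN features from a chest X-ray with a pre-trained ResNet.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()          # keep the 2048-d pooled feature vector
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("chest_xray.png").convert("RGB")   # hypothetical file path
with torch.no_grad():
    x = preprocess(image).unsqueeze(0)                # shape (1, 3, 224, 224)
    visual_features = backbone(x)                     # shape (1, 2048)
```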

Deep learning in generating radiology reports: A survey

Maram Mahmoud A. Monshi, Josiah Poon, Vera Chung
2020 Artificial Intelligence in Medicine  
Substantial progress has been made towards implementing automated radiology reporting models based on deep learning (DL). This is due to the introduction of large medical text/image datasets.  ...  (RNN) for natural language processing (NLP) and natural language generation (NLG).  ...  This system computed labels based on joint text/image contexts after initial CNN/RNN training using single object labels in a chest X-ray dataset from IU X-ray [21] .  ... 
doi:10.1016/j.artmed.2020.101878 pmid:32425358 pmcid:PMC7227610 fatcat:ccy2g2rh2zavdjjvvjlv7poxau

A Comparison of Pre-trained Vision-and-Language Models for Multimodal Representation Learning across Medical Images and Reports [article]

Yikuan Li, Hanyin Wang, Yuan Luo
2020 arXiv   pre-print
, clinical image-text retrieval, clinical report auto-generation.  ...  In this study, we adopt four pre-trained V+L models: LXMERT, VisualBERT, UNITER and PixelBERT to learn multimodal representations from MIMIC-CXR radiographs and associated reports.  ...  Chest X-ray datasets are widely used in V+L research. The joint image-text embedding can be learned from easily accessible chest X-ray images and free-text radiology reports.  ...  A minimal retrieval sketch in such a joint embedding space follows this entry.
arXiv:2009.01523v1 fatcat:ktrgckfombbdxmafis2dftdnr4
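
Given the joint image-text embedding mentioned above, clinical image-text retrieval reduces to nearest-neighbour search in the shared space. A minimal sketch, assuming the embeddings have already been produced by one of the pre-trained V+L models; shapes and data here are random placeholders.

```python
# Minimal sketch of image-to-report retrieval in a shared embedding space.
import torch
import torch.nn.functional as F

image_embs = torch.randn(4, 512)     # 4 query radiographs, 512-d embeddings
report_embs = torch.randn(100, 512)  # 100 candidate reports, same space

# Cosine similarity between every image and every report.
sim = F.normalize(image_embs, dim=-1) @ F.normalize(report_embs, dim=-1).T

# For each image, rank candidate reports by similarity (top 5 shown).
topk = sim.topk(k=5, dim=-1)
for i, idx in enumerate(topk.indices):
    print(f"image {i}: best report indices {idx.tolist()}")
```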

PadChest: A large chest x-ray image dataset with multi-label annotated reports [article]

Aurelia Bustos, Antonio Pertusa, Jose-Maria Salinas, Maria de la Iglesia-Vayá
2019 arXiv   pre-print
We present a labeled large-scale, high-resolution chest X-ray dataset for the automated exploration of medical images along with their associated reports.  ...  To the best of our knowledge, this is one of the largest public chest X-ray databases suitable for training supervised models concerning radiographs, and the first to contain radiographic reports in Spanish  ...  Marco Domenech from the Radiology Department of the Castellon General Hospital (Spain), for his constructive suggestions and contributions to the hierarchies.  ... 
arXiv:1901.07441v2 fatcat:uuhka6akyrhr7orlppbgymxjsy

Alternating Cross-attention Vision-Language Model for Efficient Learning with Medical Image and Report without Curation [article]

Sangjoon Park, Eun Sun Lee, Jeong Eun Lee, Jong Chul Ye
2022 arXiv   pre-print
In this study, we introduce MAX-VL, a model tailored for efficient vision-language pre-training in the medical domain.  ...  However, there has been limited success in the application of vision-language pre-training in the medical domain, as the current vision-language models and learning strategies for photographic images and  ...  For VQA, since the model is pretrained with the chest X-ray dataset, we separately assessed the adaptation capability of the model for all questions and the question regarding the chest radiographs.  ... 
arXiv:2208.05140v1 fatcat:73rmmyiiljai7enb4awyjl5eyq

Hybrid Retrieval-Generation Reinforced Agent for Medical Image Report Generation [article]

Christy Y. Li, Xiaodan Liang, Zhiting Hu, Eric P. Xing
2018 arXiv   pre-print
We propose a novel Hybrid Retrieval-Generation Reinforced Agent (HRGR-Agent) which reconciles traditional retrieval-based approaches populated with human prior knowledge, with modern learning-based approaches  ...  For each sentence, a high-level retrieval policy module chooses to either retrieve a template sentence from an off-the-shelf template database, or invoke a low-level generation module to generate a new  ...  For instance, in Figure 1, a retrieval-based system correctly detects effusion from a chest X-ray image, while a generative model that generates word-by-word given image features fails to detect effusion  ...  A minimal retrieve-or-generate sketch follows this entry.
arXiv:1805.08298v2 fatcat:tjmrphnumfcinorkrwbfnkfzeq
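
A minimal sketch of the retrieve-or-generate loop described above. The policy, template database and generation module are hypothetical stand-ins (a random policy and canned strings), not the paper's learned HRGR-Agent components.

```python
# Per-sentence decision: retrieve a template or call a generation module.
import random

TEMPLATE_DB = [
    "The lungs are clear.",
    "No pleural effusion or pneumothorax.",
    "Heart size is within normal limits.",
]

def retrieval_policy(image_features, step):
    """Return ('retrieve', template_index) or ('generate', None)."""
    # Stand-in for a learned policy; here we retrieve with probability 0.7.
    if random.random() < 0.7:
        return "retrieve", random.randrange(len(TEMPLATE_DB))
    return "generate", None

def generation_module(image_features, step):
    # Stand-in for a learned sentence decoder.
    return f"[generated sentence {step} conditioned on image features]"

def write_report(image_features, num_sentences=3):
    sentences = []
    for step in range(num_sentences):
        action, idx = retrieval_policy(image_features, step)
        if action == "retrieve":
            sentences.append(TEMPLATE_DB[idx])
        else:
            sentences.append(generation_module(image_features, step))
    return " ".join(sentences)

print(write_report(image_features=None))
```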

Contrastive Cross-Modal Pre-Training: A General Strategy for Small Sample Medical Imaging [article]

Gongbo Liang, Connor Greenwell, Yu Zhang, Xiaoqin Wang, Ramakanth Kavuluru, Nathan Jacobs
2021 arXiv   pre-print
We use an image-text matching task to train a feature extractor and then fine-tune it, in a transfer learning setting, for a supervised task using a small labeled dataset.  ...  We propose using these textual reports as a form of weak supervision to improve the image interpretation performance of a neural network without requiring additional manually labeled examples.  ...  The chest X-ray images are resized to 500 × 500.  ...  A minimal image-text matching sketch follows this entry.
arXiv:2010.03060v4 fatcat:oitmf7z7dzhwjlzpnhaxqroaui
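
A minimal sketch of an image-text matching objective of the kind mentioned above: a binary classifier scores whether an image embedding and a report embedding belong to the same study. The encoders are omitted and all dimensions and data are illustrative assumptions.

```python
# Image-text matching head: predicts "matched" vs. "mismatched" pairs.
import torch
import torch.nn as nn

class MatchingHead(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, hidden=512):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),   # single logit per pair
        )

    def forward(self, img_emb, txt_emb):
        return self.classifier(torch.cat([img_emb, txt_emb], dim=-1)).squeeze(-1)

head = MatchingHead()
loss_fn = nn.BCEWithLogitsLoss()

# Toy batch: first half are true image-report pairs, second half are shuffled.
img_emb = torch.randn(8, 2048)
txt_emb = torch.randn(8, 768)
labels = torch.tensor([1., 1., 1., 1., 0., 0., 0., 0.])

logits = head(img_emb, txt_emb)
loss = loss_fn(logits, labels)
loss.backward()
```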

A Survey on Deep Learning and Explainability for Automatic Report Generation from Medical Images [article]

Pablo Messina, Pablo Pino, Denis Parra, Alvaro Soto, Cecilia Besa, Sergio Uribe, Marcelo Andía, Cristian Tejos, Claudia Prieto, Daniel Capurro
2022 arXiv   pre-print
In this context, we survey works in the area of automatic report generation from medical images, with emphasis on methods using deep neural networks, with respect to: (1) Datasets, (2) Architecture Design  ...  Every year physicians face an increasing demand for image-based diagnosis from patients, a problem that can be addressed with recent artificial intelligence methods.  ...  The authors manually designed an abnormality graph and a disease graph, where each node represents an abnormality or disease, and the edges are built based on their co-occurrences in the training set.  ...  A minimal co-occurrence-graph sketch follows this entry.
arXiv:2010.10563v2 fatcat:usmbthlgorevliiyw7llox6zky
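
A minimal sketch of building graph edges from label co-occurrences in a training set, in the spirit of the abnormality/disease graphs mentioned above; the per-report label sets and the edge threshold are toy assumptions.

```python
# Count pairwise label co-occurrences and keep frequent pairs as graph edges.
from collections import Counter
from itertools import combinations

reports = [
    {"cardiomegaly", "effusion"},
    {"effusion", "atelectasis"},
    {"cardiomegaly", "effusion", "edema"},
    {"atelectasis"},
]

cooccurrence = Counter()
for labels in reports:
    for a, b in combinations(sorted(labels), 2):
        cooccurrence[(a, b)] += 1

# Keep an edge whenever two findings co-occur at least twice (threshold is arbitrary).
edges = [pair for pair, count in cooccurrence.items() if count >= 2]
print(edges)   # [('cardiomegaly', 'effusion')]
```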

Clinically Accurate Chest X-Ray Report Generation [article]

Guanxiong Liu, Tzu-Ming Harry Hsu, Matthew McDermott, Willie Boag, Wei-Hung Weng, Peter Szolovits, Marzyeh Ghassemi
2019 arXiv   pre-print
In this work, we present a domain-aware automatic chest X-ray radiology report generation system which first predicts what topics will be discussed in the report, then conditionally generates sentences  ...  We verify this system on two datasets, Open-I and MIMIC-CXR, and demonstrate that our model offers marked improvements on both language generation metrics and CheXpert-assessed accuracy over a variety  ...  Marzyeh Ghassemi is partially funded by a CIFAR AI Chair at the Vector Institute, and an NSERC Discovery Grant.  ... 
arXiv:1904.02633v2 fatcat:jcot36p3vbcmzakq5hqyhpxnzq

RATCHET: Medical Transformer for Chest X-ray Diagnosis and Reporting [article]

Benjamin Hou, Georgios Kaissis, Ronald Summers, Bernhard Kainz
2021 arXiv   pre-print
The model is evaluated for its natural language generation ability using common metrics from the NLP literature, as well as for its medical accuracy through a surrogate report classification task.  ...  RATCHET is a CNN-RNN-based medical transformer that is trained end-to-end.  ...  There are three key aspects to training the report generation model: (i) text pre-processing, (ii) tokenization and (iii) language model formulation/training.  ...  A minimal pre-processing and tokenization sketch follows this entry.
arXiv:2107.02104v2 fatcat:x4oh6ellc5cqhjcf3dry7jzkxm
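
A minimal sketch of the first two steps named above, text pre-processing and tokenization, using a toy cleaning rule and vocabulary; it is not RATCHET's actual pipeline.

```python
# Clean report text, build a word-level vocabulary, and map tokens to integer ids.
import re
from collections import defaultdict

def preprocess(report):
    """Lowercase the report and strip punctuation/extra whitespace (toy rules)."""
    text = re.sub(r"[^a-z0-9\s]", " ", report.lower())
    return re.sub(r"\s+", " ", text).strip()

def build_vocab(corpus):
    """Assign an integer id to every token seen in the corpus."""
    vocab = defaultdict(lambda: len(vocab))
    for special in ("<pad>", "<unk>"):    # reserve ids 0 and 1
        _ = vocab[special]
    for report in corpus:
        for token in preprocess(report).split():
            _ = vocab[token]
    return dict(vocab)

corpus = ["No acute cardiopulmonary abnormality.", "Small right pleural effusion."]
vocab = build_vocab(corpus)
ids = [vocab.get(tok, vocab["<unk>"]) for tok in preprocess("Right pleural effusion.").split()]
print(ids)   # [7, 8, 9]
```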

ChestX-Ray8: Hospital-Scale Chest X-Ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases

Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, Ronald M. Summers
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
Although the initial quantitative results are promising as reported, deep convolutional neural network based "reading chest X-rays" (i.e., recognizing and locating the common disease patterns trained with  ...  each image can have multiple labels), from the associated radiological reports using natural language processing.  ...  Therefore we mine the per-image (possibly multiple) common thoracic pathology labels from the image-attached chest X-ray radiological reports using natural language processing (NLP) techniques  ...  A simplified label-mining sketch follows this entry.
doi:10.1109/cvpr.2017.369 dblp:conf/cvpr/WangPLLBS17 fatcat:7fk6qbqutzd7flnh5jwioiyhou
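
A heavily simplified sketch of mining weak multi-labels from report text with keyword matching and a crude negation check. The actual ChestX-Ray8 labeling relies on dedicated NLP tooling for concept extraction and negation/uncertainty handling; the finding list and regular expression below are assumptions for illustration only.

```python
# Toy report-to-label mining: keyword match plus same-sentence negation check.
import re

FINDINGS = ["atelectasis", "cardiomegaly", "effusion", "infiltration",
            "mass", "nodule", "pneumonia", "pneumothorax"]
# Crude negation pattern: a negation cue and the finding within the same sentence.
NEGATION_TEMPLATE = r"\b(?:no|without|free of)\b[^.]*\b{term}\b"

def mine_labels(report_text):
    text = report_text.lower()
    labels = []
    for term in FINDINGS:
        mentioned = re.search(r"\b" + term + r"\b", text)
        negated = re.search(NEGATION_TEMPLATE.format(term=term), text)
        if mentioned and not negated:
            labels.append(term)
    return labels or ["no finding"]

print(mine_labels("There is a small right pleural effusion. No pneumothorax."))
# -> ['effusion']
```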

Deep metric learning for multi-labelled radiographs [article]

Mauro Annarumma, Giovanni Montana
2017 arXiv   pre-print
We report on a large-scale study involving over 745,000 chest radiographs whose labels were automatically extracted from free-text radiological reports through a natural language processing system.  ...  Using 4,500 validated exams, we demonstrate that the methodology performs satisfactorily on clustering and image retrieval tasks.  ...  Acknowledgments: The authors thank NVIDIA for providing access to a DGX-1 server, which sped up the training and evaluation of all the deep learning algorithms used in this work.  ... 
arXiv:1712.07682v1 fatcat:pzpry4ooabhylbsxva2koxj5kq

Making the Most of Text Semantics to Improve Biomedical Vision–Language Processing [article]

Benedikt Boecking, Naoto Usuyama, Shruthi Bannur, Daniel C. Castro, Anton Schwaighofer, Stephanie Hyland, Maria Wetscherek, Tristan Naumann, Aditya Nori, Javier Alvarez-Valle, Hoifung Poon, Ozan Oktay
2022 arXiv   pre-print
Biomedical text with its complex semantics poses additional challenges in vision-language modelling compared to the general domain, and previous work has used insufficiently adapted models that lack domain-specific  ...  A broad evaluation, including on this new dataset, shows that our contrastive learning approach, aided by textual-semantic modelling, outperforms prior methods in segmentation tasks, despite only using  ...  We introduce and release a new chest X-ray (CXR) domain-specific language model, CXR-BERT (Fig. 2).  ...  A minimal masked-language-model sketch follows this entry.
arXiv:2204.09817v4 fatcat:c72thidabfbgpgln7yjeoeu6ae
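
A minimal fill-mask sketch showing how a BERT-style language model is queried on radiology-flavoured text. The "bert-base-uncased" checkpoint is a generic placeholder, not the released CXR-BERT model; substituting the paper's own checkpoint name would adapt the example to the CXR domain.

```python
# Query a masked language model on a radiology-style sentence.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")  # placeholder checkpoint
for candidate in fill_mask("There is a small right pleural [MASK]."):
    print(f"{candidate['token_str']:>12}  {candidate['score']:.3f}")
```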

Explainable Deep Learning Methods in Medical Imaging Diagnosis: A Survey [article]

Cristiano Patrício, João C. Neves, Luís F. Teixeira
2022 arXiv   pre-print
In addition, we include a performance comparison among a set of report generation-based methods.  ...  In this context, we provide a thorough survey of XAI applied to medical imaging diagnosis, including visual, textual, example-based and concept-based explanation methods.  ...  Datasets (modality, year, images, notes): IU Chest X-Ray [27] (chest X-ray, 2016, 7,470, includes reports); Chest X-Ray14 [143] (chest X-ray, 2017, 112,120, multiple labels); CheXpert [47] (chest X-ray, 2019, 224,316, multiple labels); MIMIC-CXR [49  ... 
arXiv:2205.04766v2 fatcat:ngd7yb3z7fhkxkuttjm73u75wi

Automated detection of COVID-19 through convolutional neural network using chest x-ray images

Rubina Sarki, Khandakar Ahmed, Hua Wang, Yanchun Zhang, Kate Wang, Xiaodi Huang
2022 PLoS ONE  
Secondly, we develop and train a CNN from scratch. In both cases, we use a public X-ray dataset for training and validation purposes.  ...  We aim to develop a deep learning-based system for the persuasive classification and reliable detection of COVID-19 using chest radiography.  ...  x-ray images, and the second part will include the training of DL models using a dataset generated through a generative adversarial network.  ...  A minimal train-from-scratch CNN sketch follows this entry.
doi:10.1371/journal.pone.0262052 pmid:35061767 pmcid:PMC8782355 fatcat:3trjvfdesvatrcipyajvicrrbi
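
A minimal sketch of a small CNN trained from scratch on chest X-ray images, as in the second setting described above; the architecture, image size and random toy batch are illustrative assumptions, not the paper's configuration.

```python
# Tiny CNN for binary chest X-ray classification, trained from scratch.
import torch
import torch.nn as nn

class SmallCXRNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCXRNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random grayscale 224x224 images.
images = torch.randn(8, 1, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```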
Showing results 1 — 15 out of 3,490 results