Adopting Semantic Information of Grayscale Radiographs for Image Classification and Retrieval

Obioma Pelka, Felix Nensa, Christoph M. Friedrich
2018 Proceedings of the 11th International Joint Conference on Biomedical Engineering Systems and Technologies  
As the number of digital medical images taken daily rapidly increases, manual annotation becomes impractical, time-consuming, and error-prone. Hence, there is a need for systems that automatically classify and annotate medical images. The aim of the presented work is to utilize Transfer Learning to generate image keywords, which serve as text representations for medical image classification and retrieval tasks. Text preprocessing methods such as detection and removal of compound figure delimiters, stop-words, and special characters, as well as word stemming, are applied before training the keyword generation model. All images are visually represented using Convolutional Neural Networks (CNN), and the Long Short-Term Memory (LSTM) based Recurrent Neural Network (RNN) Show-and-Tell model is adopted for keyword generation. To improve model performance, a second training phase is initiated, in which parameters are fine-tuned using the pre-trained deep learning network Inception-ResNet-V2. For the image classification tasks, Random Forest models trained on Bag-of-Keypoints visual representations were adopted. With the proposed approach, classification prediction accuracy was higher for all classification schemes on two distinct radiology image datasets.
doi:10.5220/0006732301790187 dblp:conf/biostec/PelkaNF18