EuSoMII Virtual Annual Meeting 2020 Book of Abstracts

2021 Insights into Imaging  
Short Summary: A new artificial intelligence (AI)-based system, designed to support clinical interpretation of prostate MRI, shows promising performance compared to radiology studies and CAD/AI literature. Purpose/Objectives: Investigate whether AI could improve specificity of patient selection for biopsy and biopsy target identification, and assist segmentation, in the prostate cancer diagnostic pathway. Methods and materials: A multi-stage AI-based system was developed for MRI analysis, and
trained with the NCI-ISBI 2013 Challenge, PROMISE12 and PROSTATEx datasets, split into training, validation and held-out test sets. Clinically significant prostate cancer (csPCa) was defined as Gleason ≥3+4 disease. Accuracy metrics were computed on the validation and held-out test sets and compared with the literature on prostate MRI studies and CAD/AI models. Results: To support patient selection for biopsy as a rule-out test, sensitivity for identifying patients with csPCa was 93% (95% CI 82-100%), specificity 76% (64-87%), NPV 95% (88-100%), and AUC 0.92 (0.84-0.98), evaluated with biparametric MRI (bpMRI) data from the combined PROSTATEx validation and held-out test sets (prevalence 35%, 80 patients). Performance was higher on the held-out test set (40 patients). Similar AI/CAD publications report 93% sensitivity on held-out/blinded data at specificities of 6-42%. In major studies, radiologists' per-patient sensitivity was 88-93%, specificity 18-68%, and NPV 76-97%, with Likert/PI-RADS ≥3 defined as positive. Note that methodological and dataset differences and test set size limit comparisons. For identifying biopsy targets, the AI system detected csPCa lesions with per-lesion sensitivity 94% (85-100%), specificity 71% (61-89%), NPV 97% (93-100%), and AUC 0.89 (0.83-0.95), in the same combined PROSTATEx validation/held-out test set (128 lesions, 80 patients). Performance was higher on the held-out test set. For prostate gland segmentation to support analysis, PSA density evaluation, and fusion biopsy, the system achieved a 92% average Dice score against radiologist ground-truth segmentations on held-out test cases from the PROMISE12 dataset (10 patients). This is comparable to the state-of-the-art. The AI system performed similarly in the above tasks when evaluated using multiparametric MRI (mpMRI) data.
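The headline metrics above (sensitivity, specificity and NPV for the rule-out task, and the Dice score for segmentation) follow their standard definitions. As an illustrative sketch only, assuming binary per-patient labels and binary voxel masks, they can be computed as:

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity and NPV from binary per-patient labels
    (1 = csPCa present / predicted positive)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    return {
        "sensitivity": tp / (tp + fn),  # fraction of csPCa patients flagged
        "specificity": tn / (tn + fp),  # fraction of csPCa-free patients cleared
        "npv": tn / (tn + fn),          # reliability of a negative call
    }

def dice_score(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks
    (1.0 = perfect agreement with the ground truth)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.sum(a & b) / (np.sum(a) + np.sum(b))
```

The reported confidence intervals would additionally require an interval method (e.g. bootstrap resampling); that step is not shown here.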
Conclusion: The results suggest the AI system has promising specificity, sensitivity and NPV to help csPCa-free patients avoid biopsy, to support biopsy targeting, and to provide high-quality automated segmentations. Disclosure: AW Rix and E Sala co-founded Lucida Medical, the company developing this AI system. While regulatory approvals are in progress, the system is not currently available and is described here in retrospective research use.

Short Summary: Artificial intelligence (AI) is rapidly changing medical imaging. Multidisciplinary teams of radiologists, engineers, computer scientists and other related professionals are involved in the design and validation of clinical AI tools. But what about medical physicists? Are they the great forgotten ones? Would they like to participate in AI projects? A flash survey was conducted to answer these and other AI-related questions within the medical physics community. Purpose/Objectives: To assess current perceptions, practices and education needs pertaining to AI in the medical physics field. Methods and materials: A 25-question international survey (Google Forms), promoted by the working group on AI of the European Federation of Organisations for Medical Physics (EFOMP), was conducted between February and March 2020. Statistical analysis was performed using Welch's two-sample t-test (SciPy 1.5.0). Results: A total of 219 medical physicists (average age 42±10 years; 29% female) from 31 countries took part in the survey. Participants reported average knowledge of AI (2.3±1.0 on a 1-to-5 scale). About 80% of participants think that AI will improve their profession, and 96% expressed high interest in improving their AI skills. More than 80% expect AI to become part of their professional curriculum. Despite this, 64% have only limited participation, or do not yet participate, in AI projects. Significantly fewer female participants are leading AI projects.
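The survey's statistical comparison used Welch's two-sample t-test, which, unlike Student's t-test, does not assume equal variances in the two groups. A minimal sketch of the test statistic (the example responses are invented, not the survey's data):

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic: compares two group means without
    assuming a common variance (unlike Student's t-test)."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)  # sample variance
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / na + var_b / nb)

# Hypothetical 1-to-5 self-rated AI knowledge for two subgroups:
t = welch_t([3, 2, 4, 3, 2, 3], [2, 1, 2, 3, 1, 2])
```

In practice, the toolchain named in the abstract (SciPy 1.5.0) provides this directly as `scipy.stats.ttest_ind(a, b, equal_var=False)`, which also returns the p-value.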
Conclusion: Medical physicists perceive AI as a positive resource and are willing to improve their AI skills; AI courses specific to medical physicists should therefore be organised. Medical physicists should also be involved in the multidisciplinary teams of AI projects. Special actions should be considered to increase the involvement of female medical physicists. Disclosure: N/A

Short Summary: This research evaluates the efficiency and usability of the workflow of finding lesions in previous CT examinations using hyperlinks, through a simulation study, a user survey, and an analysis of radiology reports. Radiologists can use hyperlinks in previous reports to look up lesions more efficiently for response evaluation in oncologic imaging. Purpose/Objectives: Comparison with previous examinations is a key component of response evaluation in oncologic imaging. For reliable response evaluation, radiologists should compare the current study with the baseline study and, if applicable, the nadir study. Finding a particular study, series, and lesion to compare with is error-prone and time-consuming. The PACS allows the application of hyperlinks in radiology reports, but the use of this functionality by radiologists in daily practice has not been systematically assessed. Our purpose is to develop and evaluate a strategy for accurate and efficient oncologic CT evaluation using hyperlinks in radiology reports. Methods and materials: This study describes a specific part of the reporting workflow: looking up lesions in previous examinations for comparative purposes. We compared the conventional and hyperlink-enhanced approaches using a schematic representation and a simulation study. We assessed the usage of hyperlinks by a survey among radiologists and checked radiology reports of oncology CTs in two three-month periods (2016 and 2020).
Results: The simulation study demonstrates that radiologists can achieve up to a fivefold time saving when looking up previous lesions using the hyperlink workflow, compared with the conventional workflow of finding and comparing with previous studies. The response rate of the survey was 86% (12/14). The results indicated that 92% of radiologists are familiar with hyperlinks in radiology reports, 83% create hyperlinks in their reports, 83% use hyperlinks to identify previous examinations, and 92% use hyperlinks in multidisciplinary team meetings. On a 5-point scale, nine radiologists (75%) rated the usefulness of hyperlinks as 5 (very useful); the other three answered 1 (not useful), 3 (neutral) and 4 (useful). The percentages of oncology CT reports containing hyperlinks in the three-month periods in 2016 and 2020 were 80% (99) and 86% (85), respectively (p = 0.013). Conclusion: Usage of hyperlinks in PACS-integrated reporting contributes to accurate and efficient radiology reporting in oncologic imaging. Radiologists rate the hyperlink functionality highly. Disclosure: None

Short Summary: The purpose of our study is to investigate the contribution of artificial intelligence to medical decision-making by developing intelligent and functional medical knowledge extraction systems that may be used to accurately diagnose breast cancer, using clinical and imaging data. Purpose/Objectives: To identify the effectiveness of one or a combination of breast MRI (bMRI) descriptors in classifying breast lesions using inductive decision trees. Methods and materials: The dataset included data from 77 patients: clinical data (1 variable), imaging data (13 variables) derived from breast MRIs, and surrogate markers (6 variables) derived from biopsy examinations. Breast MR exams were acquired on a 1.5T and a 3T scanner. They were reviewed by two independent readers, who were blinded to the histology results and clinical data of each patient.
The appropriate input variables (for the machine learning algorithm used in this work) were selected based on the clinical significance of each variable and with the ultimate goal of optimal algorithm performance (evaluation criterion: classification accuracy). Classification models were developed using the method of inductive decision trees (IDT). Induction classification trees were converted to IF/THEN classification rules. The overall approach was evaluated using the leave-one-out method and by drawing on existing medical knowledge. Results: One classification rule states that 'if the lesion has mass morphology and the ADC map displays low signal, then it is a HER-2 positive tumor', with a confidence level of 83.3%. Another rule states, 'if the patient is over 50 years old, with a non-mass enhancing (NME) lesion and the presence of a feeding vessel, then the Ki-67 is over 26%'. Yet another: 'if there is peritumoral edema, the kinetic curve is type II and the lesion is unifocal, then the tumor is PR negative'. Lastly, 'if the lesion has NME morphology, low signal intensity on T2WI, low ADC and is unifocal, then it is ER negative'. The last three classification rules have a confidence level of 80%. Conclusions: Decision trees are a useful clinical tool that improves classification accuracy on selected datasets. Prospective studies on large datasets are needed to further validate the diagnostic performance of the proposed inductive decision trees. Disclosure: None.

Short Summary: In recent years, artificial intelligence (AI) has become increasingly relevant to digital healthcare. However, the programming-heavy materials available on the web are a barrier for the healthcare practitioners who would most likely benefit from them. While radiologists are anticipated to work more closely with AI in the near future, there is a perceived lack of practical learning material to make them "AI-ready".
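Leave-one-out evaluation, as used for the inductive decision trees above, trains on n−1 cases and tests on the single held-out case, repeating for all n. A minimal sketch with an invented majority-class stand-in classifier (the study itself used decision trees on its clinical and imaging variables):

```python
def leave_one_out_accuracy(X, y, fit, predict):
    """Leave-one-out cross-validation: for each of the n cases, train on
    the other n-1 and test on the single held-out case."""
    n = len(X)
    correct = 0
    for i in range(n):
        X_train = X[:i] + X[i + 1:]
        y_train = y[:i] + y[i + 1:]
        model = fit(X_train, y_train)
        correct += int(predict(model, X[i]) == y[i])
    return correct / n

# Toy stand-in classifier (predicts the majority class of the training
# labels), used here only to make the sketch runnable:
fit = lambda X_train, y_train: max(set(y_train), key=y_train.count)
predict = lambda model, x: model

acc = leave_one_out_accuracy([[0], [1], [2], [3]], [0, 0, 0, 1], fit, predict)
```

Leave-one-out makes maximal use of small datasets such as the 77-patient cohort here, at the cost of n training runs.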
We emphasize that we do not expect radiologists to become expert programmers, in the same way that radiologists learn about MR physics without being trained as physicists. However, adequate practical knowledge is essential for critically appraising a technology that is supposed to become part of a radiologist's daily clinical routine. To get a better overview of this knowledge-accessibility problem within the European Union, we designed a survey. The short survey (about 3 minutes) explores radiologists' experiences during self-study of AI. The first two questions ask what the most difficult part of learning about AI is and when the topic was last addressed; in this way, the experience can be placed in chronological order, showing what makes it difficult for radiologists to learn. A further question asks participants more precisely what exactly was difficult, with the following answer options: too coding-intensive, too gimmicky, information overload, or nothing specific to radiology. This should help us identify in which of the four categories the biggest problems lie. Finally, we return to what was most recently perceived as problematic with the applied learning strategy, so that we can better evaluate failures and work out a solution that avoids them. Purpose/Objectives: The goal of this survey is to understand how easy it is for enthusiastic entry-level radiologists to learn about AI. Methods and materials: A survey: https://forms.gle/6QzVbn9FdGPrUyucA Results: Eight radiologists have participated in the survey. We do not describe the detailed results here, to ensure a bias-free understanding of the European picture. Conclusions: Our plan is to develop an e-learning solution for radiologists and other medical professionals that focuses on self-study of AI.
This should provide enthusiastic entry-level professionals an opportunity to critically appreciate AI beyond their medical school curriculum. Disclosure: August 2020

Short Summary: The COVID-19 pandemic poses a large challenge for health systems, forcing a balance between resource management and safe decision-making. We developed and internally validated severity (AUC-ROC=0.94) and in-hospital mortality (AUC-ROC=0.97) prediction models that could be useful to triage symptomatic COVID-19 patients. Some of the strongest predictors include oxygen saturation, age, and the extent score of lung involvement on chest X-ray. These models should be further validated at different emergency departments. Purpose/Objectives: To develop prognosis prediction models for COVID-19 patients attending an emergency department, based on chest X-ray, demographic, clinical and laboratory parameters. Methods and materials: All symptomatic confirmed COVID-19 patients admitted to our hospital emergency department between February 24th and April 24th, 2020 were recruited. Chest X-ray features, clinical and laboratory variables, and chest X-ray abnormality indices extracted by a convolutional neural network (CNN) diagnostic tool were considered potential predictors at this first visit. The most serious individual outcome defined the severity level: home discharge or hospitalization ≤3 days; hospital stay >3 days; or intensive care requirement or death. Severity and in-hospital mortality multivariable prediction models were developed and internally validated. The Youden index was used for model selection. Results: A total of 440 patients were enrolled (median age 64 years; 55.9% male); 13.6% of patients were discharged, 64% were hospitalized, 6.6% required intensive care and 15.7% died.
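Model selection above relied on the Youden index, J = sensitivity + specificity − 1, which favors the operating point that best balances the two error rates. A minimal sketch with invented risk scores and outcomes rather than the study's data:

```python
def sens_spec(scores, labels, thr):
    """Sensitivity and specificity when score >= thr is called positive.
    labels: 1 = event (e.g. severe course), 0 = no event."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < thr and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < thr and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

def youden_j(sensitivity, specificity):
    """Youden's J statistic: J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1.0

# Invented risk scores and binary outcomes, for illustration only:
scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
# Pick the candidate cut-off that maximizes J:
best = max([0.25, 0.5, 0.7],
           key=lambda t: youden_j(*sens_spec(scores, labels, t)))
```

The same quantity can be maximized over a full ROC curve to choose a model's operating threshold, which appears to be the role it plays in this study.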
The severity prediction model included SatO2/FiO2, age, C-reactive protein (CRP), lymphocyte count, extent score of lung involvement on chest X-ray, lactate dehydrogenase (LDH), D-dimer level and platelet count, with AUC-ROC=0.94 and AUC-PRC (area under the precision-recall curve)=0.88. The mortality prediction model included age, SatO2/FiO2, CRP, LDH, chest X-ray extent score, lymphocyte count and D-dimer level, with AUC-ROC=0.97 and AUC-PRC=0.78. The addition of chest X-ray CNN-based indices improved the predictive metrics for mortality (AUC-ROC=0.97, AUC-PRC=0.83). Conclusions: The developed and internally validated severity and mortality prediction models could be used as triage tools for COVID-19 patients at the emergency department, pending validation at other sites. Disclosure: The authors declare no competing interests. The authors have not received any funding. Jose Sánchez-García is an employee of Quantitative Imaging Biomarkers in Medicine (QUIBIM SL), whose software has been used in one of the predictive models.

Short Summary: The Imaging COVID-19 AI initiative is a large collaborative effort to develop a deep learning solution for assisted diagnosis of COVID-19 on CT scans and for assessing disease severity by quantification of lung involvement. The project was initiated by the European Society of Medical Imaging Informatics (EuSoMII). Twenty-six hospitals and two industry partners across Europe have collaborated to collect data and develop a deep learning model. A large and varied dataset of more than 3000 chest CT scans, containing a mix of COVID-19 positive scans, normal scans and other types of pulmonary infection, was obtained by the project group. Radiologists have been actively involved in the selection and annotation of imaging data. The presentation explains the approach and current state of this multicenter European project. Disclosure: None.

Short Summary: Between June and August 2020, we conducted a 16-question online survey about preferences regarding virtual meetings.
Preliminary data showed that people appreciated the opportunity to attend radiological meetings online, with some changes to meeting design. Purpose/Objectives: We explored perceptions and preferences about the conversion of in-person meetings to virtual ones. Here we show some preliminary data. Methods and materials: We prepared a survey (in English) that was available online between June and August 2020. The survey comprised 16 questions about preferences regarding online conference attendance and was targeted at radiologists, residents and medical students. Results: We received a total of 508 answers from 71 countries. The largest numbers of answers were submitted from Italy and India. Most respondents had already attended a virtual meeting at the time of the survey (80%) and would like to attend further virtual meetings in the future (97%). The ideal duration of a virtual meeting was 2-3 days (42%) or half a day (32%). The preferred time format was a 2-4 hour meeting with 1-2 breaks (43%). Conclusions: Respondents appreciated the opportunity to attend the most important radiological meetings online; however, they would generally prefer a change in meeting design. A deeper analysis of subgroup data, for example by country or age, would be very useful.
doi:10.1186/s13244-021-00975-x pmid:33759029