8 Hits in 0.77 sec

3D landmark detection for augmented reality based otologic procedures [article]

Raabid Hussain, Kibrom Berihu Girum
2019 arXiv   pre-print
Raabid Hussain, Caroline Guigou, Kibrom Berihu Girum, Alain Lalande and Alexis Bozorg Grayeli ... round window niche, tip of the incus, umbo and short process of malleus, pyramid, cochlear apex and base.  ... 
arXiv:1909.01647v1 fatcat:rjhfbnviizaxvdi56jrb5czbwu

Real-Time Augmented Reality for Ear Surgery [chapter]

Raabid Hussain, Alain Lalande, Roberto Marroquin, Kibrom Berihu Girum, Caroline Guigou, Alexis Bozorg Grayeli
2018 Lecture Notes in Computer Science  
Transtympanic procedures aim at accessing the middle ear structures through a puncture in the tympanic membrane. They require visualization of middle ear structures behind the eardrum. Up to now, this is provided by an oto-endoscope. This work focused on implementing a real-time augmented reality based system for robotic-assisted transtympanic surgery. A preoperative computed tomography scan is combined with the surgical video of the tympanic membrane in order to visualize the ossicles and labyrinthine windows which are concealed behind the opaque tympanic membrane. The study was conducted on 5 artificial and 4 cadaveric temporal bones. Initially, a homography framework based on fiducials (6 stainless steel markers on the periphery of the tympanic membrane) was used to register a 3D reconstructed computed tomography image to the video images. Micro/endoscope movements were then tracked using Speeded-Up Robust Features. Simultaneously, a micro-surgical instrument (needle) in the frame was identified and tracked using a Kalman filter. Its 3D pose was also computed using a 3-collinear-point framework. An average initial registration accuracy of 0.21 mm was achieved with a slow propagation error during the 2-minute tracking. Similarly, a mean surgical instrument tip 3D pose estimation error of 0.33 mm was observed. This system is a crucial first step towards a keyhole surgical approach to the middle and inner ears.
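The fiducial-based registration step described in this abstract can be sketched as a direct linear transform (DLT) homography fit between the six marker positions in the CT rendering and in the video frame. This is a minimal NumPy illustration of the general technique, not the authors' implementation; the function names are our own.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the DLT algorithm.
    src, dst: (N, 2) arrays of matched fiducial coordinates, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on the 9 entries of H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null-space vector = homography up to scale
    return H / H[2, 2]             # normalise so H[2, 2] == 1

def warp_points(H, pts):
    """Apply homography H to (N, 2) points, with the homogeneous divide."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]
```

With six exactly matched fiducials the DLT system is over-determined and the SVD null-space solution recovers the homography; in practice a RANSAC-style robust fit would handle detection noise.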
doi:10.1007/978-3-030-00937-3_38 fatcat:uxz2ajtf35byzjfii4yu7dv3ie

Surgical Visual Domain Adaptation: Results from the MICCAI 2020 SurgVisDom Challenge [article]

Aneeq Zia, Kiran Bhattacharyya, Xi Liu, Ziheng Wang, Satoshi Kondo, Emanuele Colleoni, Beatrice van Amsterdam, Razeen Hussain, Raabid Hussain, Lena Maier-Hein, Danail Stoyanov, Stefanie Speidel (+1 others)
2021 arXiv   pre-print
Surgical data science is revolutionizing minimally invasive surgery by enabling context-aware applications. However, many challenges exist around surgical data (and health data, more generally) needed to develop context-aware models. This work - presented as part of the Endoscopic Vision (EndoVis) challenge at the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2020 conference - seeks to explore the potential for visual domain adaptation in surgery to overcome data privacy concerns. In particular, we propose to use video from virtual reality (VR) simulations of surgical exercises in robotic-assisted surgery to develop algorithms to recognize tasks in a clinical-like setting. We present the performance of the different approaches to solve visual domain adaptation developed by challenge participants. Our analysis shows that the presented models were unable to learn meaningful motion-based features from VR data alone, but did significantly better when a small amount of clinical-like data was also made available. Based on these results, we discuss promising methods and further work to address the problem of visual domain adaptation in surgical data science. We also release the challenge dataset publicly at https://www.synapse.org/surgvisdom2020.
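The finding that models improve when a small amount of clinical-like data accompanies the VR data suggests a training regime that mixes the two domains within each batch. Below is a minimal sketch of such a sampler, assuming list-like datasets; the names `vr_samples` and `clinical_samples` are illustrative, and the challenge entries' actual training pipelines were considerably more elaborate.

```python
import random

def mixed_batches(vr_samples, clinical_samples, batch_size=8,
                  clinical_per_batch=2, seed=0):
    """Yield training batches mixing abundant VR-simulation clips with a few
    scarce clinical-like clips, oversampling the clinical set with replacement."""
    rng = random.Random(seed)
    n_vr = batch_size - clinical_per_batch
    while True:
        batch = (rng.sample(vr_samples, n_vr)                # distinct VR clips
                 + rng.choices(clinical_samples, k=clinical_per_batch))  # may repeat
        rng.shuffle(batch)
        yield batch
```

Sampling the clinical pool with replacement keeps its per-batch presence constant even when it is tiny, which matches the low-data target-domain regime the challenge studied.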
arXiv:2102.13644v1 fatcat:7czni7xhzzfohfbvn7wchlkmuy

Automatic segmentation of inner ear on CT-scan using auto-context convolutional neural network

Raabid Hussain, Alain Lalande, Kibrom Berihu Girum, Caroline Guigou, Alexis Bozorg Grayeli
2021 Scientific Reports  
Temporal bone CT-scan is a prerequisite in most surgical procedures concerning the ear such as cochlear implants. The 3D vision of inner ear structures is crucial for diagnostic and surgical preplanning purposes. Since clinical CT-scans are acquired at relatively low resolutions, improved performance can be achieved by registering patient-specific CT images to a high-resolution inner ear model built from accurate 3D segmentations based on micro-CT of human temporal bone specimens. This paper presents a framework based on a convolutional neural network for human inner ear segmentation from micro-CT images which can be used to build such a model from an extensive database. The proposed approach employs an auto-context based cascaded 2D U-net architecture with 3D connected component refinement to segment the cochlear scalae, semicircular canals, and the vestibule. The system was developed on a data set composed of 17 micro-CT volumes from the public Hear-EU dataset. A Dice coefficient of 0.90 and a Hausdorff distance of 0.74 mm were obtained. The system yielded precise and fast automatic inner-ear segmentations.
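The reported Dice coefficient of 0.90 measures volumetric overlap between the predicted and reference masks. A minimal NumPy version of the metric (a standard definition, not tied to the paper's code):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, target).sum() / denom
```

A Dice of 1.0 means identical masks; because the denominator sums both mask sizes, the metric penalises over- and under-segmentation symmetrically, which is why it is the default overlap score in medical image segmentation.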
doi:10.1038/s41598-021-83955-x pmid:33623074 pmcid:PMC7902630 fatcat:wal4phxc5banvaqoe35nmzroju

Deep Generative Model-Driven Multimodal Prostate Segmentation in Radiotherapy [chapter]

Kibrom Berihu Girum, Gilles Créhange, Raabid Hussain, Paul Michael Walker, Alain Lalande
2019 Lecture Notes in Computer Science  
Deep learning has shown unprecedented success in a variety of applications, such as computer vision and medical image analysis. However, there is still potential to improve segmentation in multimodal images by embedding prior knowledge via learning-based shape modeling and registration to learn the modality-invariant anatomical structure of organs. For example, in radiotherapy, automatic prostate segmentation is essential in prostate cancer diagnosis, therapy, and post-therapy assessment from T2-weighted MR or CT images. In this paper, we present a fully automatic deep generative model-driven multimodal prostate segmentation method using convolutional neural network (DGMNet). The novelty of our method comes with its embedded generative neural network for learning-based shape modeling and its ability to adapt for different imaging modalities via learning-based registration. The proposed method includes a multi-task learning framework that combines a convolutional feature extraction and an embedded regression and classification based shape modeling. This enables the network to predict the deformable shape of an organ. We show that generative neural network-based shape modeling trained on a reliable contrast imaging modality (such as MRI) can be directly applied to low contrast imaging modality (such as CT) to achieve accurate prostate segmentation. The method was evaluated on MRI and CT datasets acquired from different clinical centers with large variations in contrast and scanning protocols. Experimental results reveal that our method can be used to automatically and accurately segment the prostate gland in different imaging modalities.
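The multi-task framework described above optimises segmentation, shape regression, and classification objectives jointly. The sketch below combines the three terms in the generic way such frameworks usually do; the loss choices and weights are illustrative assumptions, not the values used by DGMNet.

```python
import numpy as np

def multitask_loss(seg_pred, seg_true, shape_pred, shape_true,
                   cls_pred, cls_true, w_shape=0.5, w_cls=0.5, eps=1e-7):
    """Weighted sum of a Dice-style segmentation loss, an MSE shape-regression
    loss, and a binary cross-entropy classification loss. Inputs are NumPy
    arrays; seg_pred and cls_pred hold probabilities in [0, 1]."""
    inter = (seg_pred * seg_true).sum()
    dice_loss = 1.0 - 2.0 * inter / (seg_pred.sum() + seg_true.sum() + eps)
    shape_loss = np.mean((shape_pred - shape_true) ** 2)
    p = np.clip(cls_pred, eps, 1.0 - eps)   # guard the log against 0 and 1
    ce_loss = -np.mean(cls_true * np.log(p) + (1 - cls_true) * np.log(1 - p))
    return dice_loss + w_shape * shape_loss + w_cls * ce_loss
```

Summing per-task losses with scalar weights is the simplest multi-task formulation; the weights trade off how strongly the shared features serve each head.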
doi:10.1007/978-3-030-32486-5_15 fatcat:y5fj3e4bajcrxjv4lfthxr4deu

Video-based augmented reality combining CT-scan and instrument position data to microscope view in middle ear surgery

Raabid Hussain, Alain Lalande, Roberto Marroquin, Caroline Guigou, Alexis Bozorg Grayeli
2020 Scientific Reports  
The aim of the study was to develop and assess the performance of a video-based augmented reality system, combining preoperative computed tomography (CT) and real-time microscopic video, as the first crucial step to keyhole middle ear procedures through a tympanic membrane puncture. Six different artificial human temporal bones were included in this prospective study. Six stainless steel fiducial markers were glued on the periphery of the eardrum, and a high-resolution CT-scan of the temporal bone was obtained. Virtual endoscopy of the middle ear based on this CT-scan was conducted on Osirix software. The virtual endoscopy image was registered to the microscope-based video of the intact tympanic membrane based on fiducial markers, and a homography transformation was applied during microscope movements. These movements were tracked using the Speeded-Up Robust Features (SURF) method. Simultaneously, a micro-surgical instrument was identified and tracked using a Kalman filter. The 3D position of the instrument was extracted by solving a three-point perspective framework. For evaluation, the instrument was introduced through the tympanic membrane and ink droplets were injected on three middle ear structures. An average initial registration accuracy of 0.21 ± 0.10 mm (n = 3) was achieved with a slow propagation error during tracking (0.04 ± 0.07 mm). The estimated surgical instrument tip position error was 0.33 ± 0.22 mm. The target structures' localization accuracy was 0.52 ± 0.15 mm. The submillimetric accuracy of our system without a tracker is compatible with ear surgery.
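The Kalman-filter tracking of the instrument tip mentioned above can be illustrated with a constant-velocity model over 2D image coordinates. This is a textbook sketch, not the study's implementation; the process and measurement noise levels (`q`, `r`) are illustrative, not calibrated values.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-2, r=0.25):
    """Filter a sequence of noisy (x, y) tip detections with a constant-velocity
    Kalman filter. State: [x, y, vx, vy]; measurement: [x, y]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)      # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)      # observe position only
    Q, R = q * np.eye(4), r * np.eye(2)      # process / measurement noise
    x = np.zeros(4)
    x[:2] = measurements[0]                  # initialise at first detection
    P = np.eye(4)
    out = []
    for z in measurements:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                  # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)
```

The predict step carries the tip forward at its estimated velocity between frames, which is what lets the tracker bridge brief detection dropouts.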
doi:10.1038/s41598-020-63839-2 pmid:32317726 pmcid:PMC7174368 fatcat:mxhcnvmlx5bjlj2eazn6cv7pq4

Deep Learning methods for automatic evaluation of delayed enhancement-MRI. The results of the EMIDEC challenge [article]

Alain Lalande, Zhihao Chen, Thibaut Pommier, Thomas Decourselle, Abdul Qayyum, Michel Salomon, Dominique Ginhac, Youssef Skandarani, Arnaud Boucher, Khawla Brahim, Marleen de Bruijne, Robin Camarasa (+21 others)
2021 arXiv   pre-print
A key factor for assessing the state of the heart after myocardial infarction (MI) is to measure whether the myocardium segment is viable after reperfusion or revascularization therapy. Delayed enhancement-MRI or DE-MRI, which is performed several minutes after injection of the contrast agent, provides high contrast between viable and nonviable myocardium and is therefore a method of choice to evaluate the extent of MI. To automatically assess myocardial status, the results of the EMIDEC challenge that focused on this task are presented in this paper. The challenge's main objectives were twofold. First, to evaluate if deep learning methods can distinguish between normal and pathological cases. Second, to automatically calculate the extent of myocardial infarction. The publicly available database consists of 150 exams divided into 50 cases with normal MRI after injection of a contrast agent and 100 cases with myocardial infarction (and then with a hyperenhanced area on DE-MRI), whatever the reason for their admission to the cardiac emergency department. Along with MRI, clinical characteristics are also provided. The results obtained from several works show that the automatic classification of an exam is an achievable task (the best method providing an accuracy of 0.92), and the automatic segmentation of the myocardium is possible. However, the segmentation of the diseased area needs to be improved, mainly due to the small size of these areas and the lack of contrast with the surrounding structures.
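The infarct extent the challenge asks for is essentially the hyperenhanced fraction of the myocardium in the segmentation. A sketch of that computation under an assumed label convention (0 = background, 1 = healthy myocardium, 2 = hyperenhanced/infarcted myocardium); the real EMIDEC label set also includes no-reflow regions, omitted here for brevity.

```python
import numpy as np

def infarct_extent_percent(labelmap):
    """Extent of myocardial infarction as the percentage of myocardial voxels
    that are hyperenhanced on DE-MRI (assumed labels: 1 = healthy myocardium,
    2 = infarct)."""
    myocardium = np.isin(labelmap, (1, 2)).sum()   # total myocardial voxels
    if myocardium == 0:
        return 0.0
    return 100.0 * (labelmap == 2).sum() / myocardium
```

Expressing the extent relative to the whole myocardium, rather than as an absolute volume, makes the score comparable across hearts of different sizes.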
arXiv:2108.04016v2 fatcat:2kxgjecvrrc3norhcq26t722we

GANs for Medical Image Synthesis: An Empirical Study [article]

Youssef Skandarani, Pierre-Marc Jodoin, Alain Lalande
2021 arXiv   pre-print
We would like to thank Kibrom Berihu Girum and Raabid Hussain for useful discussions and remarks.  ... 
arXiv:2105.05318v2 fatcat:d5nxc42kf5eoflt6rtq7nd2l7i