Real-Time Segmentation of Non-rigid Surgical Tools Based on Deep Learning and Tracking [chapter]

Luis C. García-Peraza-Herrera, Wenqi Li, Caspar Gruijthuijsen, Alain Devreker, George Attilakos, Jan Deprest, Emmanuel Vander Poorten, Danail Stoyanov, Tom Vercauteren, Sébastien Ourselin
2017 Lecture Notes in Computer Science  
Real-time tool segmentation is an essential component in computer-assisted surgical systems.  ...  We propose a novel real-time automatic method based on Fully Convolutional Networks (FCN) and optical flow tracking.  ...  Ebner and S. Nousias for the ground truth of FetalFlexTool and E. Maneas for preparing the setup with an ex vivo placenta.  ...  (A rough sketch of this FCN-plus-optical-flow scheme follows this entry.)
doi:10.1007/978-3-319-54057-3_8 fatcat:gtya6pjshvhvtbveyvyf6htnv4
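
As a rough illustration of the kind of pipeline this entry describes (not the authors' implementation): a segmentation network is run on keyframes and the resulting tool mask is propagated to intermediate frames with dense optical flow. In the sketch below, segment_tools is a hypothetical placeholder for any FCN-style segmenter that maps an RGB frame to a binary mask; the flow step uses OpenCV's Farneback estimator.

```python
# Minimal sketch: keyframe FCN segmentation + optical-flow mask propagation.
# "segment_tools" is a hypothetical placeholder (any frame -> uint8 mask model).
import cv2
import numpy as np

def propagate_mask(prev_gray, curr_gray, prev_mask):
    """Warp the previous binary mask into the current frame via backward warping."""
    # Flow from the current frame back to the previous frame, so each current
    # pixel can look up its mask value at the matching previous-frame location.
    flow = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_mask, map_x, map_y, cv2.INTER_NEAREST)

def track_tools(frames, segment_tools, keyframe_interval=5):
    """Segment every k-th frame with the network; track the mask in between."""
    masks, prev_gray, mask = [], None, None
    for i, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if i % keyframe_interval == 0 or mask is None:
            mask = segment_tools(frame)                    # slow, accurate path
        else:
            mask = propagate_mask(prev_gray, gray, mask)   # fast tracking path
        masks.append(mask)
        prev_gray = gray
    return masks
```

The keyframe interval trades accuracy for speed: smaller values call the network more often, larger values lean more heavily on flow-based propagation.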

A Review on Deep Learning in Minimally Invasive Surgery

Irene Rivas-Blanco, Carlos J. Perez-Del-Pulgar, Isabel Garcia-Morales, Victor F. Munoz
2021 IEEE Access  
Real-time tracking of the surgical tools is addressed in [36].  ...  [49] present a weakly supervised framework for surgical tool tracking and segmentation based on a hybrid sensor system that integrates electromagnetic tracking with processing of visual data.  ...  In 2004, he was given a permanent position on the research support staff at the University of Malaga.  ...
doi:10.1109/access.2021.3068852 fatcat:gfpghqfptzdktlody5z263cdju

ToolNet: Holistically-nested real-time segmentation of robotic surgical tools

Luis C. Garcia-Peraza-Herrera, Wenqi Li, Lucas Fidon, Caspar Gruijthuijsen, Alain Devreker, George Attilakos, Jan Deprest, Emmanuel Vander Poorten, Danail Stoyanov, Tom Vercauteren, Sebastien Ourselin
2017 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)  
We propose two novel deep learning architectures for automatic segmentation of non-rigid surgical instruments.  ...  Real-time tool segmentation from endoscopic videos is an essential part of many computer-assisted robotic surgical systems and of critical importance in robotic surgical data science.  ...  The authors would like to thank NVIDIA Corporation for the donated GeForce GTX TITAN X GPU and all the members of the GIFT-Surg project for the always useful discussions.  ... 
doi:10.1109/iros.2017.8206462 dblp:conf/iros/Garcia-Peraza-Herrera17 fatcat:3xelwt5fubhpxgpjf6qahbto2q

Artificial Intelligence in Surgery [article]

Xiao-Yun Zhou, Yao Guo, Mali Shen, Guang-Zhong Yang
2019 arXiv   pre-print
In this article, the recent successful and influential applications of AI in surgery are reviewed, from pre-operative planning and intra-operative guidance to the integration of surgical robots.  ...  Artificial Intelligence (AI) is gradually changing the practice of surgery with the advanced technological development of imaging, navigation and robotic intervention.  ...  These methods have different advantages and disadvantages. In the context of deep learning-based surgical instrument tracking, the proposed methods were built on tracking-by-detection [84, 85].  ...
arXiv:2001.00627v1 fatcat:dywtv6v36rgf3fummidyluy3zi

Front Matter: Volume 10576

Robert J. Webster, Baowei Fei
2018 Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling  
...[10576-35]  ...  Machine learning-based colon deformation estimation method for colonoscope tracking [10576-36]  ...  CARDIAC AND LUNG IMAGING AND TRACKING  ...  10576 1A: A real-time system for prosthetic valve tracking  ...  [10576-43]  ...  10576 1H: A system for automatic monitoring of surgical instruments and dynamic non-rigid surface deformations in breast cancer surgery [10576-44]  ...  10576 1I: Intraoperative deformation  ...
doi:10.1117/12.2323924 fatcat:hva4ny4ftbe2voqifpbt5wewky

Front Matter: Volume 10135

2017 Medical Imaging 2017: Image-Guided Procedures, Robotic Interventions, and Modeling  
A unique citation identifier (CID) number is assigned to each article at the time of publication.  ...  Utilization of CIDs allows articles to be fully citable as soon as they are published online, and connects the same identifier to all online and print versions of the publication.  ...  10135 1F: Optimization of real-time rigid registration motion compensation for prostate biopsies using 2D/3D ultrasound [10135-50]  ...  Part Two, SESSION 11: ANATOMICAL MEASUREMENT AND RESPIRATORY TRACKING  ...
doi:10.1117/12.2277134 dblp:conf/miigp/X17 fatcat:inrmsopsvfd47nfo35fhucwcmq

HMD-EgoPose: Head-Mounted Display-Based Egocentric Marker-Less Tool and Hand Pose Estimation for Augmented Surgical Guidance [article]

Mitchell Doughty, Nilesh R. Ghugre
2022 arXiv   pre-print
HMD-EgoPose outperformed current state-of-the-art approaches on a benchmark dataset for surgical tool pose estimation, achieving an average tool 3D vertex error of 11.0 mm on real data and furthering the  ...  Our framework utilized an efficient convolutional neural network (CNN) backbone for multi-scale feature extraction and a set of subnetworks to jointly learn the 6DoF pose representation of the rigid surgical  ...  learning-based [13], and deep learning-based techniques [14].  ...
arXiv:2202.11891v2 fatcat:svuq6tbmbjad5aykflpygefnra

SenseCare: A Research Platform for Medical Image Informatics and Interactive 3D Visualization [article]

Qi Duan, Guotai Wang, Rui Wang, Chao Fu, Xinjun Li, Maoliang Gong, Xinglong Liu, Qing Xia, Xiaodi Huang, Zhiqiang Hu, Ning Huang, Shaoting Zhang
2020 arXiv   pre-print
In addition, SenseCare is clinic-oriented and supports a wide range of clinical applications such as diagnosis and surgical planning for lung cancer, pelvic tumor, coronary artery disease, etc.  ...  To facilitate clinical research with Artificial Intelligence (AI), SenseCare provides a range of AI toolkits for different tasks, including image segmentation, registration, lesion and landmark detection  ...  Besides real-time data synchronization, SenseCare also supports pulling data directly from PACS/RIS based on user-defined rules.  ... 
arXiv:2004.07031v1 fatcat:aczelk3365ftniy2vt7tcxwe7i

Adaptive Physics-Based Non-Rigid Registration for Immersive Image-Guided Neuronavigation Systems

Fotis Drakopoulos, Christos Tsolakis, Angelos Angelopoulos, Yixun Liu, Chengjun Yao, Kyriaki Rafailia Kavazidi, Nikolaos Foroglou, Andrey Fedorov, Sarah Frisken, Ron Kikinis, Alexandra Golby, Nikos Chrisochoides
2021 Frontiers in Digital Health  
physics-based non-rigid registration, and four times compared to B-Spline interpolation methods, which are part of ITK and 3D Slicer.  ...  An Adaptive Physics-Based Non-Rigid Registration method (A-PBNRR) registers preoperative and intraoperative MRI for each patient.  ...  On average, based on Table 7, A-PBNRR with deep learning is ∼8.45 times better than rigid registration, ∼6.71 times better than B-Spline registration, and ∼7.9 times better than PBNRR.  ...
doi:10.3389/fdgth.2020.613608 pmid:34713074 pmcid:PMC8521897 fatcat:5edxya6tovekncnq3wavu77y2q

i3PosNet: Instrument Pose Estimation from X-Ray in temporal bone surgery [article]

David Kügler, Jannik Sehring, Andrei Stefanov, Igor Stenin, Julia Kristin, Thomas Klenzner, Jörg Schipper, Anirban Mukhopadhyay
2020 arXiv   pre-print
Conclusion: The translation of deep learning-based methods to surgical applications is difficult because large representative datasets for training and testing are not available.  ...  It outperforms conventional image registration-based approaches, reducing average and maximum errors by at least two-thirds. i3PosNet trained on synthetic images generalizes to real X-rays without any further  ...  Previous non-deep-learning pipelines based on 2D/3D registration [12] and template matching [26] achieve submillimeter accuracy for simple geometries.  ...
arXiv:1802.09575v2 fatcat:gd6gzecy2rccbcsmvzjs6tpdci

The Future of Endoscopic Navigation: A Review of Advanced Endoscopic Vision Technology

Zuoming Fu, Ziyi Jin, Chongan Zhang, Zhongyu He, Zhenzhou Zha, Chunyong Hu, Tianyuan Gan, Qinglai Yan, Peng Wang, Xuesong Ye
2021 IEEE Access  
These techniques help surgeons or surgical robots locate instruments and lesions and expand the field of view of the endoscope.  ...  Endoscopic vision is a specific application of computer vision involving the use of endoscopes; it includes instrument tracking, endoscopic view expansion, and suspicious lesion tracking in the application  ...  Xiao Liang at Sir Run-Run Shaw Hospital, Hangzhou, China, for helping inspire the development of experiments and this article.  ...
doi:10.1109/access.2021.3065104 fatcat:nprqk4gjhnhbrjvfbrqouoee6y

m2caiSeg: Semantic Segmentation of Laparoscopic Images using Convolutional Neural Networks [article]

Salman Maqbool, Aqsa Riaz, Hasan Sajid, Osman Hasan
2020 arXiv   pre-print
To address the identification of human anatomy and the surgical settings, we propose a deep learning-based semantic segmentation algorithm to identify and label the tissues and organs in the endoscopic  ...  We propose a new dataset and a deep learning method for pixel-level identification of various organs and instruments in an endoscopic surgical scene.  ...  In Section 4, we present the results of our network on the dataset. Finally, in Section 5, we conclude the paper and present some potential directions for future work.  ...
arXiv:2008.10134v2 fatcat:dnqcythdpbghrk32x5yetntjfe

Comparative evaluation of instrument segmentation and tracking methods in minimally invasive surgery [article]

Sebastian Bodenstedt, Max Allan, Anthony Agustinos, Xiaofei Du, Luis Garcia-Peraza-Herrera, Hannes Kenngott, Thomas Kurmann, Beat Müller-Stich, Sebastien Ourselin, Daniil Pakhomov, Raphael Sznitman, Marvin Teichmann, Martin Thoma, Tom Vercauteren (+5 others)
2018 arXiv   pre-print
Based on the results of the validation study, we arrive at the conclusion that modern deep learning approaches outperform other methods in instrument segmentation tasks, but the results are still not perfect  ...  Since additional hardware such as tracking systems or the robot encoders is cumbersome and lacks accuracy, surgical vision is evolving as a promising technique to segment and track the instruments using only  ...  Here, acquiring more annotated data might be the key to improving the results of machine learning-based tracking methods, but acquiring large quantities of training data is challenging.  ...
arXiv:1805.02475v1 fatcat:hagoe34yfzdd7drq5f5fmhkdqa

Synthetic and Real Inputs for Tool Segmentation in Robotic Surgery [article]

Emanuele Colleoni, Philip Edwards, Danail Stoyanov
2020 arXiv   pre-print
We propose a new deep learning-based model for parallel processing of both laparoscopic and simulation images for robust segmentation of surgical tools.  ...  Semantic tool segmentation in surgical videos is important for surgical scene understanding and computer-assisted interventions as well as for the development of robotic automation.  ...  [19]: their method is based on a particle filter optimization that repeatedly updates the pose of the tool to match the silhouette projection of the surgical tool with a vision-based segmentation mask  ...  (A toy sketch of this silhouette-matching idea follows this entry.)
arXiv:2007.09107v2 fatcat:nqlmtod7grayrjpntfurfu6shy
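
A toy sketch of the silhouette-matching idea referenced in this abstract, not the cited method itself: candidate tool poses are sampled around the current estimate, each is rendered to a binary silhouette, scored by overlap (IoU) with the vision-based segmentation mask, and the estimate is updated from the best-scoring candidates. Here render_silhouette is a hypothetical placeholder for a routine that projects the tool model at a 6-DoF pose into a binary image the size of the mask.

```python
# Toy sampling-based pose refinement against a segmentation mask.
# "render_silhouette(pose)" is a hypothetical placeholder for projecting the
# tool model at pose = [x, y, z, rx, ry, rz] into a binary silhouette image.
import numpy as np

def iou(a, b):
    """Intersection-over-union between two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def refine_pose(init_pose, seg_mask, render_silhouette,
                n_particles=100, n_iters=10, noise=0.01, rng=None):
    rng = rng or np.random.default_rng(0)
    pose = np.asarray(init_pose, dtype=float)
    for _ in range(n_iters):
        # Sample candidate poses around the current estimate.
        particles = pose + rng.normal(scale=noise, size=(n_particles, pose.size))
        # Score each candidate by silhouette/mask overlap.
        scores = np.array([iou(render_silhouette(p), seg_mask) for p in particles])
        total = scores.sum()
        weights = scores / total if total > 0 else np.full(n_particles, 1.0 / n_particles)
        # Weighted mean as the new estimate; shrink noise to focus the search.
        pose = (weights[:, None] * particles).sum(axis=0)
        noise *= 0.9
    return pose
```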

Learned optical flow for intra-operative tracking of the retinal fundus

Claudio S. Ravasio, Theodoros Pissas, Edward Bloch, Blanca Flores, Sepehr Jalali, Danail Stoyanov, Jorge M. Cardoso, Lyndon Da Cruz, Christos Bergeles
2020 International Journal of Computer Assisted Radiology and Surgery  
The U-Net-based network trained on the synthetic dataset is shown to generalise well to the benchmark of real surgical videos.  ...  We evaluate optical flow estimation by tracking a grid and sparsely annotated ground truth points on a benchmark of challenging real intra-operative clips obtained from an extensive internally acquired  ...  The views expressed are those of the author(s) and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health and Social Care.  ... 
doi:10.1007/s11548-020-02160-9 pmid:32323210 pmcid:PMC7261285 fatcat:ti5e32cukrddxeeh3jqfdwqbry
Showing results 1 — 15 out of 3,117 results