
Pelvis Segmentation Using Multi-pass U-Net and Iterative Shape Estimation [chapter]

Chunliang Wang, Bryan Connolly, Pedro Filipe de Oliveira Lopes, Alejandro F. Frangi, Örjan Smedby
2019 Lecture Notes in Computer Science  
Preliminary results show that the proposed multi-pass U-net with iterative shape estimation outperforms both 2D and 3D conventional U-nets without the shape model.  ...  During the testing phase, the input image is fed through the same 3D U-net multiple times, first with blank shape context channels and then with iteratively re-estimated shape models.  ...  This study was supported by the Swedish Heart-Lung Foundation (grant no. 20160609) and the Swedish Childhood Cancer Foundation (grant no. MT2016-00166).  ... 
doi:10.1007/978-3-030-11166-3_5 fatcat:pbjhwy4wdzb4tkw6epwcff7tom
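The multi-pass test-time scheme described in the snippet above can be sketched as a simple loop. This is an illustrative reconstruction, not the authors' code: `unet` and `shape_model` are hypothetical stand-ins for the trained 3D U-net and the statistical shape estimator.

```python
import numpy as np

def multipass_inference(image, unet, shape_model, n_passes=3):
    """Sketch of multi-pass U-net inference with iterative shape estimation
    (hypothetical API): the same network is applied repeatedly, with a blank
    shape-context channel on the first pass and a re-estimated shape prior
    on every subsequent pass."""
    shape_context = np.zeros_like(image)  # blank context for the first pass
    seg = None
    for _ in range(n_passes):
        # feed image plus current shape-context channel through the same net
        seg = unet(np.stack([image, shape_context]))
        # re-estimate the shape model from the current segmentation
        shape_context = shape_model.fit(seg)
    return seg
```

The loop converges toward a segmentation consistent with the shape prior; the number of passes is a hyperparameter.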

CAN3D: Fast 3D Medical Image Segmentation via Compact Context Aggregation [article]

Wei Dai, Boyeong Woo, Siyu Liu, Matthew Marques, Craig B. Engstrom, Peter B. Greer, Stuart Crozier, Jason A. Dowling, Shekhar S. Chandra
2021 arXiv   pre-print
and inference.  ...  Further, to fit 3D datasets through these large models within limited computer memory, trade-off techniques such as patch-wise training are often used, which sacrifice fine-scale geometric information.  ...  Ambellan et al. [2019] even incorporate Statistical Shape Models (SSM) with both 2D and 3D U-Nets to pass statistical anatomical knowledge during highly pathological knee segmentation.  ... 
arXiv:2109.05443v2 fatcat:jmq3ddulz5f6vlgnuahf54vls4
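The patch-wise trade-off mentioned in the snippet above amounts to tiling a 3D volume into overlapping sub-blocks that fit in GPU memory. A minimal sketch (names and patch sizes are ours, not from the paper):

```python
import numpy as np

def extract_patches(volume, patch=(64, 64, 64), stride=(32, 32, 32)):
    """Tile a 3D volume into overlapping patches for memory-limited training.
    Each patch sees only local context, which is the fine-vs-global
    geometric trade-off the snippet refers to."""
    patches, coords = [], []
    for z in range(0, volume.shape[0] - patch[0] + 1, stride[0]):
        for y in range(0, volume.shape[1] - patch[1] + 1, stride[1]):
            for x in range(0, volume.shape[2] - patch[2] + 1, stride[2]):
                patches.append(volume[z:z + patch[0],
                                      y:y + patch[1],
                                      x:x + patch[2]])
                coords.append((z, y, x))
    return np.stack(patches), coords
```

At inference time the per-patch predictions are typically stitched back together using the recorded coordinates, averaging in the overlap regions.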

Automatic Localization and Segmentation of Vertebrae for Cobb Estimation and Curvature Deformity

Joddat Fatima, Amina Jameel, Muhammad Usman Akram, Adeel Muzaffar Syed, Malaika Mushtaq
2022 Intelligent Automation and Soft Computing  
In the second step, edge detection is done by Holistically-Nested Edge Detection (HED), and the Harris method is used for corner calculation.  ...  A comparative analysis is done for Cobb estimation, and the results show that the proposed framework reduces the mean error to within 2 degrees.  ...  This research article presents an automated system for analyzing spine-curvature deformity using Cobb estimation. Localization of vertebrae is performed with the YOLO architecture.  ... 
doi:10.32604/iasc.2022.025935 fatcat:bies5admyzdfxn25y4uufh7jte
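Once endplate corners are located, the Cobb angle follows from the slopes of the two most-tilted vertebral endplates. The sketch below uses the standard slope-based definition; the paper's exact corner-based formulation may differ.

```python
import math

def cobb_angle(slope_upper, slope_lower):
    """Cobb angle (degrees) between the most-tilted upper and lower
    vertebral endplates, given their slopes in image coordinates.
    Standard definition: the angle between the two endplate lines."""
    a_upper = math.degrees(math.atan(slope_upper))
    a_lower = math.degrees(math.atan(slope_lower))
    return abs(a_upper - a_lower)
```

For example, endplates tilted at +10° and -15° to the horizontal give a Cobb angle of 25°.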

Segmentation of bones in medical dual-energy computed tomography volumes using the 3D U-Net

José Carlos González Sánchez, Maria Magnusson, Michael Sandborg, Åsa Carlsson Tedgren, Alexandr Malusek
2020 Physica Medica  
A convolutional neural network based on the 3D U-Net architecture was implemented and evaluated using high tube voltage images, mixed images and dual-energy images from 30 patients.  ...  The method can easily be extended to the segmentation of multi-energy CT data.  ...  The following grants are acknowledged: SNIC 2018/7-22, LiO-724181, CAN 2017/1029 and VR-NT 2016-05033.  ... 
doi:10.1016/j.ejmp.2019.12.014 pmid:31918376 fatcat:omzes3hdmjfwncth5kefo2tfwm
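The "mixed images" used as one of the network inputs above are commonly formed as a weighted blend of the low- and high-tube-voltage acquisitions. This is a generic DECT sketch under that assumption; the weight and function names are illustrative, not taken from the paper.

```python
import numpy as np

def mixed_image(low_kv, high_kv, w=0.3):
    """Blend low- and high-kV DECT volumes into a single 'mixed' image.
    The weight w is a hypothetical example value; clinical scanners use
    vendor-specific blends."""
    return w * np.asarray(low_kv, dtype=float) + (1.0 - w) * np.asarray(high_kv, dtype=float)
```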

Medical Imaging Synthesis using Deep Learning and its Clinical Applications: A Review [article]

Tonghe Wang, Yang Lei, Yabo Fu, Walter J. Curran, Tian Liu, Xiaofeng Yang
2020 arXiv   pre-print
Specifically, we summarized the recent developments of deep learning-based methods in inter- and intra-modality image synthesis by listing and highlighting the proposed methods, study designs and reported  ...  This paper reviewed the deep learning-based studies for medical imaging synthesis and its clinical application.  ...  [2, 184] Unlike CT scanners using a fan-shaped X-ray beam with multi-slice detectors, CBCT uses a cone-shaped X-ray beam with a flat-panel detector.  ... 
arXiv:2004.10322v1 fatcat:bkhct7wzjnfrrd4kwa4rqw6rbe

HUMUS-Net: Hybrid unrolled multi-scale network architecture for accelerated MRI reconstruction [article]

Zalan Fabian, Mahdi Soltanolkotabi
2022 arXiv   pre-print
HUMUS-Net extracts high-resolution features via convolutional blocks and refines low-resolution features via a novel Transformer-based multi-scale feature extractor.  ...  multi-scale network.  ...  training using a standard U-Net network.  ... 
arXiv:2203.08213v1 fatcat:vhjiteq3xfdhjoal3xj2j37im4

Automatic Annotation of Hip Anatomy in Fluoroscopy for Robust and Efficient 2D/3D Registration [article]

Robert Grupp, Mathias Unberath, Cong Gao, Rachel Hegeman, Ryan Murphy, Clayton Alexander, Yoshito Otake, Benjamin McArthur, Mehran Armand, Russell Taylor
2020 arXiv   pre-print
By using these annotations as training data for neural networks, state-of-the-art performance in fluoroscopic segmentation and landmark localization was achieved.  ...  Neural networks are trained to simultaneously segment anatomy and identify landmarks in fluoroscopy.  ...  Several authors have coupled image segmentation with landmark estimation using multi-task networks and achieved favorable results.  ... 
arXiv:1911.07042v2 fatcat:mlfe7nksovaevggovn4p5n6ayy

Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB [article]

Dushyant Mehta, Oleksandr Sotnychenko, Franziska Mueller, Weipeng Xu, Srinath Sridhar, Gerard Pons-Moll, Christian Theobalt
2018 arXiv   pre-print
To further stimulate research in multi-person 3D pose estimation, we will make our new datasets and associated code publicly available for research purposes.  ...  We propose a new single-shot method for multi-person 3D pose estimation in general scenes from a monocular RGB camera.  ...  We use a batch size of 6 and train for 360k iterations with a cyclical learning rate ranging from 0.1 to 0.000001.  ... 
arXiv:1712.03453v3 fatcat:pnhwtrnqsbhelen47etpk4namm
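A cyclical learning rate spanning 0.1 down to 1e-6, as in the training detail above, is most naturally swept in log-space. The sketch below uses a triangular cycle; the paper does not specify the exact schedule shape, so treat this as one plausible realization.

```python
import math

def cyclical_lr(step, base_lr=1e-6, max_lr=0.1, cycle_len=10000):
    """Triangular cyclical learning rate interpolated in log10-space
    between base_lr and max_lr (illustrative; the cycle length and
    shape are assumptions, not the paper's exact settings)."""
    pos = (step % cycle_len) / cycle_len      # position in [0, 1) within cycle
    tri = 1.0 - abs(2.0 * pos - 1.0)          # 0 -> 1 -> 0 triangle wave
    log_lo, log_hi = math.log10(base_lr), math.log10(max_lr)
    return 10.0 ** (log_lo + tri * (log_hi - log_lo))
```

At the start of each cycle the rate is `base_lr`, peaking at `max_lr` halfway through.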

Learning Multi-Human Optical Flow [article]

Anurag Ranjan and David T. Hoffmann and Dimitrios Tzionas and Siyu Tang and Javier Romero and Michael J. Black
2019 arXiv   pre-print
We use a 3D model of the human body and motion capture data to synthesize realistic flow fields in both single- and multi-person images.  ...  However, the training data used by them does not cover the domain of human motion. Therefore, we develop a dataset of multi-human optical flow and train optical flow networks on this dataset.  ...  Acknowledgements We thank Yiyi Liao for helping us with optical flow evaluation. We thank Cristian Sminchisescu for the Human3.6M MoCap marker data.  ... 
arXiv:1910.11667v1 fatcat:weqfdsuygfatzkn64hskpdzw7m

Dynamic Multi-scale CNN Forest Learning for Automatic Cervical Cancer Segmentation [chapter]

Nesrine Bnouni, Islem Rekik, Mohamed Salah Rhim, Najoua Essoukri Ben Amara
2018 Lecture Notes in Computer Science  
More importantly, while the majority of innovative deep-learning works using convolutional neural networks (CNNs) focus on developing more sophisticated and robust architectures (e.g., ResNet, U-Net, GANs)  ...  However, to the best of our knowledge, these have not been used for cervical tumor segmentation.  ...  We could also build our forest using a variety of deep flavors of FCNs (U-Net, GANs). Our boosting strategy is not complex compared to typical boosting algorithms such as AdaBoost.  ... 
doi:10.1007/978-3-030-00919-9_3 fatcat:fihrknukmrdl5mprlpayua5bre

SiCloPe: Silhouette-Based Clothed People [article]

Ryota Natsume, Shunsuke Saito, Zeng Huang, Weikai Chen, Chongyang Ma, Hao Li, Shigeo Morishima
2019 arXiv   pre-print
Inspired by the visual hull algorithm, our implicit representation uses 2D silhouettes and 3D joints of a body pose to describe the immense shape complexity and variations of clothed people.  ...  We then infer the texture of the subject's back view using the frontal image and segmentation mask as input to a conditional generative adversarial network.  ...  Both our silhouette synthesis network and the front-to-back synthesis network follow the U-Net network architecture in [22, 55, 21, 49, 47] with an input channel size of 7 and 4, respectively.  ... 
arXiv:1901.00049v2 fatcat:2orw6o6l5vfzpivggdtxuo7aeu

SiCloPe: Silhouette-Based Clothed People

Ryota Natsume, Shunsuke Saito, Zeng Huang, Weikai Chen, Chongyang Ma, Hao Li, Shigeo Morishima
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Inspired by the visual hull algorithm, our implicit representation uses 2D silhouettes and 3D joints of a body pose to describe the immense shape complexity and variations of clothed people.  ...  We then infer the texture of the subject's back view using the frontal image and segmentation mask as input to a conditional generative adversarial network.  ...  sponsored by DARPA, the Andrew and Erna Viterbi Early Career Chair, the U.S.  ... 
doi:10.1109/cvpr.2019.00461 dblp:conf/cvpr/NatsumeSH0MLM19 fatcat:fmmii7wry5gcvbvmo3xdbevuya

CT-ORG, a new dataset for multiple organ segmentation in computed tomography

Blaine Rister, Darvin Yi, Kaushik Shivakumar, Tomomi Nobashi, Daniel L. Rubin
2020 Scientific Data  
We hope this dataset and code, available through TCIA, will be useful for training and evaluating organ segmentation models.  ...  For the lungs and bones, we expedited annotation using unsupervised morphological segmentation algorithms, which were accelerated by 3D Fourier transforms.  ...  Code availability All code is available on Github for our morphological segmentation, GPU data augmentation, and pre-trained model.  ... 
doi:10.1038/s41597-020-00715-8 pmid:33177518 fatcat:5koedxe2ozgffayoik55zrpepm
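The FFT-accelerated morphological segmentation mentioned above relies on a standard trick: binary dilation (and, symmetrically, erosion) is a convolution with the structuring element followed by a threshold, and the convolution can be done with FFTs. The sketch below is our own illustration of that general technique, not the released CT-ORG code.

```python
import numpy as np

def fft_dilate(mask, selem):
    """Binary morphological dilation via FFT convolution: convolve the
    binary mask with the structuring element, crop the 'same'-sized
    central region, and threshold away floating-point round-off.
    Works for any dimensionality (2D masks or 3D CT volumes)."""
    shape = [m + s - 1 for m, s in zip(mask.shape, selem.shape)]
    conv = np.fft.irfftn(
        np.fft.rfftn(mask.astype(float), shape) * np.fft.rfftn(selem.astype(float), shape),
        shape,
    )
    # crop back to the input's shape, centered on the structuring element
    slices = tuple(
        slice((s - 1) // 2, (s - 1) // 2 + m) for m, s in zip(mask.shape, selem.shape)
    )
    return conv[slices] > 0.5
```

For large 3D structuring elements this is far cheaper than direct sliding-window morphology, which is presumably the acceleration the dataset paper refers to.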

Deformable M-Reps for 3D Medical Image Segmentation

Stephen M Pizer, P Thomas Fletcher, Sarang Joshi, Andrew Thall, James Z Chen, Yonatan Fridman, Daniel S Fritsch, Graham Gash, John M Glotzer, Michael R Jiroutek, Conglin Lu, Keith E Muller (+3 others)
2003 International Journal of Computer Vision  
This paper focuses on the use of single figure models to segment objects of relatively simple structure.  ...  While these models and approaches also exist in 2D, we focus on 3D objects. The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper.  ...  Acknowledgments We appreciate advice from Valen Johnson and Stephen Marron on statistical matters and help from Guido Gerig, Joshua Stough, Martin Styner, and Delphine Bull.  ... 
pmid:23825898 pmcid:PMC3697155 fatcat:tvdm6t56tjerxjs6b7tzlz7uji

Advanced process planning for subtractive rapid prototyping

Joseph E. Petrzelka, Matthew C. Frank, Dave Bourell
2010 Rapid prototyping journal  
Subtractive Rapid Prototyping (SRP) borrows from additive rapid prototyping technologies by using 2½D layer-based toolpath processing; however, it is limited by tool accessibility.  ...  The developed algorithms aim to improve the efficiency and reliability of these multiple layer-based removal steps for rapid manufacturing.  ...  Acknowledgment Financial support was provided by grants from the National Institutes of Health (AR48939 and AR55533) and Deere and Company (Account 400-60-41).  ... 
doi:10.1108/13552541011034898 fatcat:44bp3if3lvesnalvbcbc2xv72e