Towards Contactless Patient Positioning
2020
IEEE Transactions on Medical Imaging
Our patient positioning routine comprises a novel robust dynamic fusion (RDF) algorithm for accurate 3D patient body modeling. ...
With its multi-modal inference capability, RDF can be trained once and used (without re-training) across different applications with various sensor choices, a key feature to enable system deployment ...
Much recent algorithmic work [7]-[9] in patient positioning has focused on estimating the 2D or 3D keypoint locations on the patient body. ...
doi:10.1109/tmi.2020.2991954
pmid:32365022
fatcat:2wckwhlbmbdcjj37ogk6pih56u
Visual tracking for multi-modality computer-assisted image guidance
2017
Medical Imaging 2017: Image-Guided Procedures, Robotic Interventions, and Modeling
imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). ...
We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. ...
Multi-modality stick-on markers can be applied to instruments or the patient, providing robust 6-DoF tracking. ...
doi:10.1117/12.2254362
dblp:conf/miigp/BasafaFHBS17
fatcat:oyu74ek6jjedlhr3zovmxdp5j4
Deep Motion Analysis for Epileptic Seizure Classification
2018
2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
Here we present a multi-modal analysis approach to quantitatively classify patients with mesial temporal lobe epilepsy (MTLE) and extra-temporal lobe epilepsy (ETLE), relying on the fusion of facial expressions ...
A multi-fold cross-validation of the fusion model exhibited an average test accuracy of 92.10%, while a leave-one-subject-out cross-validation scheme, which is the first in the literature, achieves an ...
Fig. 1. The proposed multi-modal system fuses spatio-temporal information from the face and body using deep learning. ...
doi:10.1109/embc.2018.8513031
pmid:30441151
fatcat:fg2b5xvwbvfijo5wtwdsbcmiki
SkinSpecs: A Mobile Solution that Addresses an Unmet Need for Tracking Chronic Skin Diseases in the Office and at Home
2019
SKIN The Journal of Cutaneous Medicine
patients' skin disease and renders 3D true-to-life models that were evaluated by Stanford Health Care dermatologists. Results: We utilized video input to accurately reconstruct interactive 3D models of ...
Dermatologists maintained the highest accuracy, confidence, and satisfaction with 3D reconstruction. Dermatologists preferred SkinSpecs for documentation over other capture modalities. ...
Lastly, 3D full-body multi-camera imaging systems are expensive and have limited accessibility. ...
doi:10.25251/skin.3.5.3
fatcat:d6wozdiihvabzkbwtkl3pst724
A 3D-2D Image Registration Algorithm for Kinematic Analysis of the Knee after Total Knee Arthroplasty (TKA)
2013
2013 International Conference on Digital Image Computing: Techniques and Applications (DICTA)
In this paper, we propose a noninvasive and robust 3D to 2D registration method, which can be used for 3D evaluations of the status of knee implants. ...
The experimental results show that the proposed method is not only robust but also fast. Keywords: model-to-image registration, medical image analysis, similarity measure, Edge Position Difference ...
This method is based on a new multi-modal similarity measure, which provides a fast and robust coarse-to-fine registration for 3D kinematics. ...
doi:10.1109/dicta.2013.6691472
dblp:conf/dicta/HossainMPSS13
fatcat:2phvk7zzl5fb7mddhiq23frmwy
Viewpoints on Medical Image Processing: From Science to Application
2013
Current Medical Imaging Reviews
Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image ...
Acknowledgements: We would like to thank Hans-Peter Meinzer, Co-Chair of the German BVM, for his helpful suggestions and for encouraging his research fellows to contribute, hence giving this paper a "multi-generation ...
In probabilistic statistical shape models, these correspondence uncertainties are respected explicitly to improve the robustness and accuracy of shape modeling and model-based segmentation. ...
doi:10.2174/1573405611309020002
pmid:24078804
pmcid:PMC3782694
fatcat:kmnerde6iffcln6r3jze5trrhm
Magnetic Resonance Angiography (MRA)
[chapter]
2012
Imaging and Technology in Urology
This study proposes a new method for matching vascular imaging modalities without the use of an external frame or external landmarks. ...
As the 3D structure is known in the MRA reference frame, this method enables us to match information from DSA and MRA. ...
The developments presented have been applied on both phantom and patient images. Pre-operative MRA scans have been used to generate a 3D model of the vascular tree. ...
doi:10.1007/978-1-4471-2422-1_16
fatcat:beqcs24mvrcxhiwpy42uoqrdoq
Magnetic Resonance Angiography (MRA)
[chapter]
1995
MRI Physics for Radiologists
This study proposes a new method for matching vascular imaging modalities without the use of an external frame or external landmarks. ...
As the 3D structure is known in the MRA reference frame, this method enables us to match information from DSA and MRA. ...
The developments presented have been applied on both phantom and patient images. Pre-operative MRA scans have been used to generate a 3D model of the vascular tree. ...
doi:10.1007/978-1-4612-0785-6_17
fatcat:abt2zxvu2bhnplypitzahtbbqa
Recurrent Multi-Fiber Network for 3D MRI Brain Tumor Segmentation
2021
Symmetry
from a 3D recurrent unit and 3D multi-fiber unit. ...
applied in our paper to solve the problem of brain tumor segmentation, including a 3D recurrent unit and 3D multi-fiber unit. ...
[12] proposed a new deep network called LSTM multi-modal UNet, which consists of a multi-modal UNet and a convolutional LSTM [13]. ...
doi:10.3390/sym13020320
fatcat:j3odnadij5bgvjbynu2nobrbve
Is a PET all you need? A multi-modal study for Alzheimer's disease using 3D CNNs
[article]
2022
arXiv
pre-print
Therefore, we propose a framework for the systematic evaluation of multi-modal DNNs and critically re-evaluate single- and multi-modal DNNs based on FDG-PET and sMRI for binary healthy vs. ...
We argue that future work on multi-modal fusion should systematically assess the contribution of individual modalities following our proposed evaluation framework. ...
In this work, we critically re-evaluate single- and multi-modal DL models based on FDG-PET and structural MRI for classifying healthy vs. AD subjects. ...
arXiv:2207.02094v1
fatcat:wyhwu33knvhonpnnddbyf6qupu
A New Coarse-to-Fine Framework for 3D Brain MR Image Registration
[chapter]
2005
Lecture Notes in Computer Science
Based on this new perspective, we develop a new image registration framework by combining the multi-resolution method with a novel multi-scale algorithm, which could achieve higher accuracy and robustness ...
on 3D brain MR images. ...
(II) Multiple scales of the brain image obtained by the TV-L1 model. (III) The contour image obtained by the 3D TV-L1 model (right) does not contain the artifacts or noise present in the original image (left). ...
doi:10.1007/11569541_13
fatcat:773wcrjhebfbdbbk3fokkyhi5e
Unpaired Multi-modal Segmentation via Knowledge Distillation
2020
IEEE Transactions on Medical Imaging
Experimental results on both tasks demonstrate that our novel multi-modal learning scheme consistently outperforms single-modal training and previous multi-modal approaches. ...
Multi-modal learning is typically performed with network architectures containing modality-specific layers and shared layers, utilizing co-registered images of different modalities. ...
Multi-modal data are acquired from the same patient and co-registered across the sequences. ...
doi:10.1109/tmi.2019.2963882
pmid:32012001
fatcat:htw4dwhhsbbcbjnqdbwcvrkaxe
Model Guided Multimodal Imaging and Visualization for Computer Assisted Interventions
2011
IAPR International Workshop on Machine Vision Applications
In [54], we apply manifold learning to the multi-modal registration problem. ...
However, in the absence of well-established scientific models of diagnosis and treatment, and in an environment where no concept and solution integrating patient-specific models and process modeling was ...
dblp:conf/mva/Navab11
fatcat:mwayyz5jlfhefmlx2s2d7if5yi
Multitask radiological modality invariant landmark localization using deep reinforcement learning
2020
International Conference on Medical Imaging with Deep Learning
Additionally, a 3D multi-agent model was trained to localize the knee, trochanter, heart, and kidney in whole-body mpMRIs. ...
A 2D single-agent model was trained to localize six different anatomical structures throughout the body, including knee, trochanter, heart, kidney, breast nipple, and prostate, across T1-weighted, T2-weighted ...
We also evaluated an analogous 3D multi-agent model on 3D whole-body mpMRIs. ...
dblp:conf/midl/ParekhBBJ20
fatcat:o3ukt4p5rfhxnghn4aicf2jpea
Automatic Determination of Anatomical Correspondences for Multimodal Field of View Correction
[chapter]
2014
Lecture Notes in Computer Science
The above combination deals with the challenges of multi-modal studies namely intensity differences, inhomogeneity, and gross patient movement. ...
In spite of a huge body of work in medical image registration, there seems to have been very little effort on Field of View (FOV) correction or anatomical overlap estimation, especially for multi-modal studies ...
We also present a multi-modal feature descriptor that is robust to acquisition differences across modalities. ...
doi:10.1007/978-3-319-11752-2_35
fatcat:jzysgphhdjemtobhfpsh2agzaa