6,186 Hits in 3.6 sec

MMLN: Leveraging Domain Knowledge for Multimodal Diagnosis [article]

Haodi Zhang, Chenyu Xu, Peirou Liang, Ke Duan, Hao Ren, Weibin Cheng, Kaishun Wu
2022 arXiv   pre-print
To address this problem, we propose a knowledge-driven and data-driven framework for lung disease diagnosis.  ...  Finally, a multimodal fusion consisting of text and image data is designed to infer the marginal probability of lung disease. We conduct experiments on a real-world dataset collected from a hospital.  ...  Figure 7: ROC of multimodal model learning on large and small datasets.  ... 
arXiv:2202.04266v1 fatcat:6sbdv2gypfexpexgst3l4gn2xm

A Graph-Based Integration of Multimodal Brain Imaging Data for the Detection of Early Mild Cognitive Impairment (E-MCI) [chapter]

Dokyoon Kim, Sungeun Kim, Shannon L. Risacher, Li Shen, Marylyn D. Ritchie, Michael W. Weiner, Andrew J. Saykin, Kwangsik Nho
2013 Lecture Notes in Computer Science  
Using a graph-based semi-supervised learning (SSL) method to integrate multimodal brain imaging data and select valid imaging-based predictors for optimizing prediction accuracy, we developed a model to  ...  By the time an individual has been diagnosed with AD, it may be too late for potential disease modifying therapy to strongly influence outcome.  ...  Acknowledgments Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904).  ... 
doi:10.1007/978-3-319-02126-3_16 pmid:25383392 pmcid:PMC4224282 fatcat:dl4aai6vujfptl3meu7xzss4ly

Multimodal Brain Connectomics-Based Prediction of Parkinson's Disease Using Graph Attention Networks

Apoorva Safai, Nirvi Vakharia, Shweta Prasad, Jitender Saini, Apurva Shah, Abhishek Lenka, Pramod Kumar Pal, Madhura Ingalhalikar
2022 Frontiers in Neuroscience  
Deep learning-based graph neural network models generate higher-level embeddings that could capture intricate structural and functional regional interactions related to PD. Objective: This study aimed at investigating the role of structure–function connections in predicting PD, by employing an end-to-end graph attention network (GAT) on multimodal brain connectomes along with an interpretability framework. Methods: The  ...  Neuroimaging studies have developed and employed machine learning frameworks for performing a brain connectome-based multimodal classification of the diseased population by fusing brain connectivity with  ... 
doi:10.3389/fnins.2021.741489 pmid:35280342 pmcid:PMC8904413 fatcat:qp4ds76bnzcv7or7cgmjpfao6i

A Prior Guided Adversarial Representation Learning and Hypergraph Perceptual Network for Predicting Abnormal Connections of Alzheimer's Disease [article]

Qiankun Zuo, Baiying Lei, Shuqiang Wang, Yong Liu, Bingchuan Wang, Yanyan Shen
2021 arXiv   pre-print
Moreover, the hypergraph perceptual network is developed to effectively fuse the learned representations while establishing high-order relations within and between multimodal images.  ...  The proposed model can evaluate characteristics of abnormal brain connections at different stages of Alzheimer's disease, which is helpful for cognitive disease study and early treatment.  ...  The extracted features by these methods neglect the high-order relations of multiple ROIs both within and between multimodal images, which is essential for cognitive disease analysis.  ... 
arXiv:2110.09302v1 fatcat:al4laaigjfbk3eotv6qnkbwxom

AMA-GCN: Adaptive Multi-layer Aggregation Graph Convolutional Network for Disease Prediction [article]

Hao Chen, Fuzhen Zhuang, Li Xiao, Ling Ma, Haiyan Liu, Ruifang Zhang, Huiqin Jiang, Qing He
2021 arXiv   pre-print
Experimental results on two databases show that our method can significantly improve the diagnostic accuracy for Autism spectrum disorder and breast cancer, indicating its universality in leveraging multimodal data for disease prediction.  ...  Related Work: In the past, disease classification based on deep learning was usually achieved using medical imaging.  ... 
arXiv:2106.08732v1 fatcat:lyxa73zfw5dbdoaadyiyaijagu

Research Based on Multimodal Deep Feature Fusion for the Auxiliary Diagnosis Model of Infectious Respiratory Diseases

Jingyuan Zhao, Liyan Yu, Zhuo Liu, Qingchen Zhang
2021 Scientific Programming  
The establishment of an auxiliary diagnosis model for infectious respiratory diseases can intelligentize and automate the diagnosis process, which has important significance  ...  It also studies the deep feature fusion algorithm of multimodal data, couples the private and shared features of different modal data of infectious respiratory diseases, and digs into the hidden information  ...  Deep Feature Fusion Learning Model Based on Multimodal Data of Infectious Respiratory Diseases: This paper studies the deep nonnegative correlation feature fusion algorithm of multimodal data. Through the co-learning  ... 
doi:10.1155/2021/5576978 fatcat:i7xa4rj7lzepzmclgbxetpyx7y

Cognitive Computing-Based CDSS in Medical Practice

Jun Chen, Chao Lu, Haifeng Huang, Dongwei Zhu, Qing Yang, Junwei Liu, Yan Huang, Aijun Deng, Xiaoxu Han
2021 Health Data Science  
The characteristics of managing multimodal data and computerizing medical knowledge distinguish cognitive computing-based CDSS from other categories.  ...  The last decade has witnessed the advances of cognitive computing technologies that learn at scale and reason with purpose in medicine studies.  ...  Once the ontology is constructed, relations can be automatically extracted from various kinds of documents [73, 74].  ... 
doi:10.34133/2021/9819851 fatcat:iiq3i22yszec7g2zi333u2adea

Literature mining for context-specific molecular relations using multimodal representations (COMMODAR)

Jaehyun Lee, Doheon Lee, Kwang Hyung Lee
2020 BMC Bioinformatics  
In this paper, we propose COMMODAR, a machine learning-based literature mining framework for context-specific molecular relations using multimodal representations.  ...  The main idea of COMMODAR is the feature augmentation by the cooperation of multimodal representations for relation extraction.  ...  About this supplement This article has been published as part of BMC Bioinformatics, Volume 21 Supplement 5, 2020: Proceedings of the 13th International Workshop on Data and Text Mining in Biomedical Informatics  ... 
doi:10.1186/s12859-020-3396-y pmid:33106154 fatcat:s3oha7endfcqhnxifzdckxxi5a

A Duo-generative Approach to Explainable Multimodal COVID-19 Misinformation Detection

Lanyu Shang, Ziyi Kou, Yang Zhang, Dong Wang
2022 Proceedings of the ACM Web Conference 2022  
We evaluate DGExplain on two real-world multimodal COVID-19 news datasets.  ...  This paper focuses on a critical problem of explainable multimodal COVID-19 misinformation detection where the goal is to accurately detect misleading information in multimodal COVID-19 news articles and  ...  Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.  ... 
doi:10.1145/3485447.3512257 fatcat:ulm73gaapbe57ja6scep32ah5m

Unsupervised Feature Learning for Endomicroscopy Image Retrieval [chapter]

Yun Gu, Khushi Vyas, Jie Yang, Guang-Zhong Yang
2017 Lecture Notes in Computer Science  
We build a multiscale multimodal graph based on both pCLE mosaics and histology images.  ...  In this paper, we propose Unsupervised Multimodal Graph Mining (UMGM) to learn the discriminative features for probe-based confocal laser endomicroscopy (pCLE) mosaics of breast tissue.  ...  Based on the graph, we tend to extract positive pairs and negative pairs for feature learning.  ... 
doi:10.1007/978-3-319-66179-7_8 fatcat:etwojnhfozgxtfrrz3r6a2sfba

Integration of Text and Graph-based Features for Detecting Mental Health Disorders from Voice [article]

Nasser Ghadiri, Rasoul Samani, Fahime Shahrokh
2022 arXiv   pre-print
In this paper, two methods are used to enrich voice analysis for depression detection: graph transformation of voice signals, and natural language processing of the transcript based on representational  ...  The current methods involve extracting features directly from audio signals.  ...  Related Work: In this section, the related work is reviewed in three categories, including text-based, voice-based, and multimodal methods for depression detection.  ... 
arXiv:2205.07006v1 fatcat:vlbsimfgondx3lktyywilwvcaa

The Development and Applications of Food Knowledge Graphs in the Food Science and Industry [article]

Weiqing Min, Chunlin Liu, Leyi Xu, Shuqiang Jiang
2021 arXiv   pre-print
To our knowledge, this is the first comprehensive review on food knowledge graphs in the food science and industry.  ...  We also discuss future directions in this field, such as food knowledge graphs for food supply chain systems and human health, which deserve further study.  ...  Multimodal Food Knowledge Graph: Most existing food knowledge graphs focus on organizing verbal knowledge extracted from text.  ... 
arXiv:2107.05869v2 fatcat:shmaifcz6bf6zj2tze2k2uovpi

Characterization Multimodal Connectivity of Brain Network by Hypergraph GAN for Alzheimer's Disease Analysis [article]

Junren Pan, Baiying Lei, Yanyan Shen, Yong Liu, Zhiguang Feng, Shuqiang Wang
2021 arXiv   pre-print
Using multimodal neuroimaging data to characterize brain networks is currently an advanced technique for Alzheimer's disease (AD) analysis.  ...  However, due to the heterogeneity and complexity between BOLD signals and fiber tractography, most existing multimodal data fusion algorithms cannot sufficiently take advantage of the complementary information  ...  deep learning models to obtain AD-related features from brain networks.  ... 
arXiv:2107.09953v1 fatcat:dx3twqgca5dppff4btg4wznhtq

Multi-modal Graph Learning for Disease Prediction [article]

Shuai Zheng, Zhenfeng Zhu, Zhizhe Liu, Zhenyu Guo, Yang Liu, Yuchen Yang, Yao Zhao
2022 arXiv   pre-print
To this end, we propose an end-to-end Multi-modal Graph Learning framework (MMGL) for disease prediction with multi-modality.  ...  For disease prediction tasks, most existing graph-based methods tend to define the graph manually based on a specified modality (e.g., demographic information), and then integrate other modalities to obtain  ...  To address the issues mentioned above, we concentrate in this paper on graph learning for disease prediction with multimodality, and the main contributions can be highlighted in the following aspects:  ... 
arXiv:2203.05880v1 fatcat:cr47jpdbk5edha2qdkbtxb3xjq

Deep Representation Learning For Multimodal Brain Networks [article]

Wen Zhang, Liang Zhan, Paul Thompson, Yalin Wang
2020 arXiv   pre-print
The recent success of deep learning techniques on graph-structured data suggests a new way to model the non-linear cross-modality relationship.  ...  To address these challenges, we propose a novel end-to-end deep graph representation learning (Deep Multimodal Brain Networks - DMBN) to fuse multimodal brain networks.  ...  We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.  ... 
arXiv:2007.09777v1 fatcat:m74yqcdcerfk5ezxozqzwfjebi
Showing results 1–15 of 6,186