
Deep Embedded Complementary and Interactive Information for Multi-View Classification

Jinglin Xu, Wenbin Li, Xinwang Liu, Dingwen Zhang, Ji Liu, Junwei Han
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence and the Thirty-Second Innovative Applications of Artificial Intelligence Conference
In this work, we propose a novel multi-view learning framework that seamlessly embeds various view-specific information and deep interactive information and introduces a novel multi-view fusion strategy  ...  information, deep interactive information between different views, and the strategy of fusing various views.  ...  CX201814), State Key Laboratory of Geo-Information Engineering (No. SKLGIE2017-Z-3-2), National NSF of China (Nos. 61432008, 61806092), and Jiangsu Natural Science Foundation (No. BK20180326).  ... 
doi:10.1609/aaai.v34i04.6122 fatcat:3drk4ngdjbdnldaw7n4ikb2xse
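
The fusion idea this entry describes can be illustrated with a minimal sketch: one encoder per view, with the view-specific representations fused (here by simple concatenation) before a shared classifier. The layer sizes and the concatenation fusion are illustrative assumptions, not the exact MvNNcor architecture.

    import torch
    import torch.nn as nn

    class MultiViewNet(nn.Module):
        def __init__(self, view_dims, hidden_dim=64, num_classes=5):
            super().__init__()
            # One view-specific encoder per input view.
            self.encoders = nn.ModuleList(
                [nn.Sequential(nn.Linear(d, hidden_dim), nn.ReLU()) for d in view_dims]
            )
            # Fuse by concatenation, then classify with a shared head.
            self.classifier = nn.Linear(hidden_dim * len(view_dims), num_classes)

        def forward(self, views):  # views: list of (batch, d_v) tensors
            fused = torch.cat([enc(v) for enc, v in zip(self.encoders, views)], dim=1)
            return self.classifier(fused)

    model = MultiViewNet(view_dims=[128, 64])
    logits = model([torch.randn(8, 128), torch.randn(8, 64)])  # shape (8, 5)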

Embedded Deep Bilinear Interactive Information and Selective Fusion for Multi-view Learning [article]

Jinglin Xu, Wenbin Li, Jiantao Shen, Xinwang Liu, Peicheng Zhou, Xiangsen Zhang, Xiwen Yao, Junwei Han
2020 arXiv   pre-print
In particular, we train different deep neural networks to learn various intra-view representations, and then dynamically learn multi-dimension bilinear interactive information from different bilinear similarities  ...  That is, we seamlessly embed various intra-view information, cross-view multi-dimension bilinear interactive information, and a new view ensemble mechanism into a unified framework to make a decision via  ...  It is worth mentioning that we have developed a preliminary work [34] named deep embedded complementary and interactive information for multi-view classification (denoted as MvNNcor).  ... 
arXiv:2007.06143v1 fatcat:hvq66vutmrfprdv6wp2tpsiwm4
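
The "multi-dimension bilinear interactive information" ingredient corresponds to a bank of learned bilinear forms z_k = x^T W_k y between two views' embeddings, which PyTorch exposes directly as nn.Bilinear. A minimal sketch, with all dimensions assumed for illustration:

    import torch
    import torch.nn as nn

    x = torch.randn(8, 32)   # embeddings from view A
    y = torch.randn(8, 48)   # embeddings from view B

    # One learned bilinear similarity per output dimension: z_k = x^T W_k y.
    bilinear = nn.Bilinear(in1_features=32, in2_features=48, out_features=16)
    interaction = bilinear(x, y)  # (8, 16) cross-view interactive features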

Structure Fusion Based on Graph Convolutional Networks for Node Classification in Citation Networks

Guangfeng Lin, Jing Wang, Kaiyang Liao, Fan Zhao, Wanjun Chen
2020 Electronics  
By solving this function, we can simultaneously obtain the fusion spectral embedding from the multi-view data and the fusion structure as the adjacency matrix to input graph convolutional networks for node  ...  for node classification in citation networks and usually ignore capturing the complete graph structure of nodes for enhancing classification performance.  ...  Acknowledgments: The authors would like to thank the anonymous reviewers for their insightful comments that helped improve the quality of this paper.  ... 
doi:10.3390/electronics9030432 fatcat:tuaxxlo3ibdlpo4bmcaruxg64m
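
A minimal sketch of the structure-fusion idea: combine the per-view adjacency matrices into one fused adjacency and feed it to a GCN propagation step H' = D^{-1/2}(A + I)D^{-1/2} X W. The uniform averaging weights below are an assumption; the paper learns the fusion jointly rather than fixing it.

    import numpy as np

    def gcn_layer(adj, feats, weight):
        a_hat = adj + np.eye(adj.shape[0])                   # add self-loops
        d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
        h = d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight  # normalized propagation
        return np.maximum(h, 0)                               # ReLU

    views = [np.random.rand(5, 5) for _ in range(3)]          # 3 view-specific adjacencies
    fused = sum(a / len(views) for a in views)                # simple averaged fusion
    out = gcn_layer(fused, np.random.rand(5, 8), np.random.rand(8, 4))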

MEGAN: A Generative Adversarial Network for Multi-View Network Embedding [article]

Yiwei Sun, Suhang Wang, Tsung-Yu Hsieh, Xianfeng Tang, Vasant Honavar
2019 arXiv   pre-print
There is an urgent need for methods to obtain low-dimensional, information-preserving and typically nonlinear embeddings of such multi-view networks.  ...  Specifically, we investigate a novel Generative Adversarial Network (GAN) framework for Multi-View Network Embedding, namely MEGAN, aimed at preserving the information from the individual network views  ...  The content is solely the responsibility of the authors and does not necessarily represent the official views of the sponsors.  ... 
arXiv:1909.01084v1 fatcat:hqvq4g5wxvgf7mdq23mc3y4vwa
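
The adversarial mechanic behind a GAN-based embedding method like this can be sketched generically: a generator produces embeddings from noise while a discriminator learns to separate them from real ones, each trained against the other. The architectures and data below are placeholders, not MEGAN's actual multi-view generators or losses.

    import torch
    import torch.nn as nn

    emb_dim = 32
    G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, emb_dim))
    D = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

    real = torch.randn(8, emb_dim)   # stand-in for embeddings of real nodes
    fake = G(torch.randn(8, 16))     # generated embeddings

    # Discriminator step: push real -> 1, fake -> 0.
    loss_d = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator.
    loss_g = bce(D(fake), torch.ones(8, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()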

Survey on Deep Multi-modal Data Analytics: Collaboration, Rivalry and Fusion [article]

Yang Wang
2020 arXiv   pre-print
With the development of web technology, multi-modal or multi-view data has surged as a major stream for big data, where each modal/view encodes individual property of data objects.  ...  Throughout this survey, we further indicate that the critical components for this field go to collaboration, adversarial competition and fusion over multi-modal spaces.  ...  Furthermore, it jointly considered multi-view complementary information and class distribution, which enforced all views to learn from each other and enhanced the classification performance.  ... 
arXiv:2006.08159v1 fatcat:g4467zmutndglmy35n3eyfwxku

Integrate multi-omics data with biological interaction networks using Multi-view Factorization AutoEncoder (MAE)

Tianle Ma, Aidong Zhang
2019 BMC Genomics  
Our method learns feature and patient embeddings simultaneously with deep representation learning.  ...  We developed a method called Multi-view Factorization AutoEncoder (MAE) with network constraints that can seamlessly integrate multi-omics data and domain knowledge such as molecular interaction networks  ...  Acknowledgements We thank The Cancer Genome Atlas (TCGA) network for making the high-quality multi-omics data freely available to the public.  ... 
doi:10.1186/s12864-019-6285-x pmid:31856727 pmcid:PMC6923820 fatcat:lmx32nlkwngvzgmv4tawn2yzei
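
The network-constraint idea can be sketched as an autoencoder whose loss adds a graph-Laplacian penalty tr(W^T L W), pulling the embeddings of interacting features together. The random interaction network and the 0.1 weight below are illustrative assumptions; in MAE the network comes from molecular interaction databases.

    import torch
    import torch.nn as nn

    n_feats, latent = 20, 4
    enc = nn.Linear(n_feats, latent)
    dec = nn.Linear(latent, n_feats)

    x = torch.randn(8, n_feats)                          # omics profiles
    adj = (torch.rand(n_feats, n_feats) > 0.8).float()   # stand-in interaction network
    adj = ((adj + adj.t()) > 0).float()                  # symmetrize
    lap = torch.diag(adj.sum(1)) - adj                   # graph Laplacian L = D - A

    recon_loss = nn.functional.mse_loss(dec(enc(x)), x)
    w = enc.weight.t()                                   # row v = embedding of feature v
    net_loss = torch.trace(w.t() @ lap @ w) / n_feats    # smoothness over the network
    loss = recon_loss + 0.1 * net_loss                   # joint objective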

A Survey of Multi-View Representation Learning [article]

Yingming Li, Ming Yang, Zhongfei Zhang
2017 arXiv   pre-print
This paper introduces two categories for multi-view representation learning: multi-view representation alignment and multi-view representation fusion.  ...  sparse coding, and multi-view latent space Markov networks, to neural network-based methods including multi-modal autoencoders, multi-view convolutional neural networks, and multi-modal recurrent neural  ...  This approach learns multi-modal embeddings for language and visual data and then exploits their complementary information to predict a variable-sized text given an image.  ... 
arXiv:1610.01206v4 fatcat:xsi7ufxnlbdk5lz6ykrsnexfvm

Recent Deep Learning Methodology Development for RNA–RNA Interaction Prediction

Yi Fang, Xiaoyong Pan, Hong-Bin Shen
2022 Symmetry  
overview of deep learning models in the prediction of microRNA (miRNA)–mRNA interactions and long non-coding RNA (lncRNA)–miRNA interactions.  ...  In recent years, with more and more experimentally verified RNA–RNA interactions being deposited into databases, statistical machine learning, especially recent deep-learning-based automatic algorithms  ...  For instance, MVMTMDA [96] infers microRNA-disease associations (MDA) from lncRNA-microRNA interactions by multi-view, multi-task learning.  ... 
doi:10.3390/sym14071302 fatcat:5lovtdwn3zb4dfbnjitwjnj3y4
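
The multi-view, multi-task pattern mentioned for MVMTMDA can be sketched generically as a shared encoder with one head per prediction task, trained on a joint loss. Everything below (dimensions, the pair-feature inputs, the two heads) is an illustrative assumption, not the actual MVMTMDA model.

    import torch
    import torch.nn as nn

    shared = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # shared pair encoder
    head_a = nn.Linear(32, 1)   # task A: lncRNA-miRNA interaction score
    head_b = nn.Linear(32, 1)   # task B: miRNA-disease association score

    pairs = torch.randn(8, 64)  # pair feature vectors (illustrative)
    h = shared(pairs)
    bce = nn.functional.binary_cross_entropy_with_logits
    loss = bce(head_a(h), torch.randint(0, 2, (8, 1)).float()) \
         + bce(head_b(h), torch.randint(0, 2, (8, 1)).float())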

A review of heterogeneous data mining for brain disorder identification

Bokai Cao, Xiangnan Kong, Philip S. Yu
2015 Brain Informatics  
disease mechanisms and for informing therapeutic interventions.  ...  It is expected that integrating complementary information in the tensor data and the brain network data, and incorporating other clinical parameters will be potentially transformative for investigating  ...  Acknowledgments This work is supported in part by NSF through grants III-1526499, CNS-1115234, and OISE-1129076, and Google Research Award.  ... 
doi:10.1007/s40708-015-0021-3 pmid:27747561 pmcid:PMC4883173 fatcat:rhvqh4vmeffnnoxts7esxwxlsq

Deep learning based feature-level integration of multi-omics data for breast cancer patients survival analysis

Li Tong, Jonathan Mitchel, Kevin Chatlin, May D. Wang
2020 BMC Medical Informatics and Decision Making  
Methods: Motivated by multi-view learning, we propose a novel strategy to integrate multi-omics data for breast cancer survival prediction by applying complementary and consensus principles.  ...  The proposed ConcatAE and CrossAE models can inspire future deep representation-based multi-omics integration techniques.  ...  Hang Wu for his kind suggestions on the experiment design and analysis.  ... 
doi:10.1186/s12911-020-01225-8 pmid:32933515 fatcat:fg25bjnhbvdnhda7t2csotlj5i
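
The complementary (ConcatAE) strategy can be sketched as one autoencoder per omics modality whose latent codes are concatenated for a downstream survival classifier. Dimensions and the linear classifier are illustrative assumptions.

    import torch
    import torch.nn as nn

    class AE(nn.Module):
        def __init__(self, dim, latent=16):
            super().__init__()
            self.enc = nn.Linear(dim, latent)
            self.dec = nn.Linear(latent, dim)
        def forward(self, x):
            z = torch.relu(self.enc(x))
            return z, self.dec(z)   # latent code and reconstruction

    ae_meth, ae_expr = AE(100), AE(200)   # e.g. methylation and expression views
    x_meth, x_expr = torch.randn(8, 100), torch.randn(8, 200)
    z_meth, rec_meth = ae_meth(x_meth)
    z_expr, rec_expr = ae_expr(x_expr)

    z = torch.cat([z_meth, z_expr], dim=1)   # complementary fusion of latents
    survival_logit = nn.Linear(32, 1)(z)     # downstream survival classifier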

Multi-view Factorization AutoEncoder with Network Constraints for Multi-omic Integrative Analysis [article]

Tianle Ma, Aidong Zhang
2018 arXiv   pre-print
Our framework employs deep representation learning to learn feature embeddings and patient embeddings simultaneously, enabling us to integrate feature interaction network and patient view similarity network  ...  Here we propose a framework termed Multi-view Factorization AutoEncoder with network constraints to integrate multi-omic data with domain knowledge (biological interactions networks).  ...  Each view has a different feature set (for example, gene features, miRNA features, protein features etc.) and can provide complementary information for other views.  ... 
arXiv:1809.01772v1 fatcat:2ca7u6dt2vet5kdfxri53hreui

Deep Multi-View Learning for Tire Recommendation [article]

Thomas Ranvier, Kilian Bourhis, Khalid Benabdeslem, Bruno Canitia
2022 arXiv   pre-print
Our goal is to use a multi-view learning approach to improve our recommender system and its capacity to manage multi-view data.  ...  The data representing the users, their interactions with the system or the products may come from different sources and be of a varied nature.  ...  By using several views, all of which provide useful information, a deep composite model is able to naturally make use of the complementary principle.  ... 
arXiv:2203.12451v1 fatcat:bulihckhxnbjxhos4jtx62l7ym

MuVAM: A Multi-View Attention-based Model for Medical Visual Question Answering [article]

Haiwei Pan, Shuning He, Kejia Zhang, Bo Qu, Chunling Chen, Kun Shi
2021 arXiv   pre-print
It consists of classification loss and image-question complementary (IQC) loss.  ...  Multi-view attention can correlate the question with image and word in order to better analyze the question and get an accurate answer.  ... 
arXiv:2107.03216v1 fatcat:4yqcxspwjrdphl6x22naqvh2aq
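
A sketch of multi-view attention in the sense described above: the question vector attends separately over image-region features and over word features, and the two attended summaries are combined. The shapes, the scaled dot-product scoring, and the additive combination are assumptions for illustration, not MuVAM's exact formulation.

    import torch
    import torch.nn.functional as F

    q = torch.randn(8, 64)          # question embedding
    img = torch.randn(8, 49, 64)    # 7x7 grid of image-region features
    words = torch.randn(8, 12, 64)  # word-level question features

    def attend(query, keys):
        # Scaled dot-product attention of one query over a set of keys.
        scores = torch.einsum('bd,bnd->bn', query, keys) / 64 ** 0.5
        return torch.einsum('bn,bnd->bd', F.softmax(scores, dim=1), keys)

    fused = attend(q, img) + attend(q, words)  # image-view + word-view summaries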

Man is What He Eats: A Research on Hinglish Sentiments of YouTube Cookery Channels Using Deep learning

2019 International Journal of Recent Technology and Engineering
Our study focuses on the sentiment analysis of Hinglish comments by multi-label text classification on cookery channels of YouTube using deep learning.  ...  embeddings and customized embeddings.  ...  multi-label text classification.  ... 
doi:10.35940/ijrte.b1153.0982s1119 fatcat:jpdo5d3b7rhzziboqq6ggszxfe
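
Multi-label classification, as used here, differs from the single-label case in that each label gets an independent sigmoid and the loss is per-label binary cross-entropy. A minimal sketch, with the averaged-embedding text encoder standing in for the paper's embedding variants:

    import torch
    import torch.nn as nn

    vocab, emb_dim, n_labels = 5000, 50, 3      # label count is illustrative
    embed = nn.EmbeddingBag(vocab, emb_dim)     # averages token embeddings per comment
    clf = nn.Linear(emb_dim, n_labels)

    tokens = torch.randint(0, vocab, (8, 20))   # 8 comments, 20 token ids each
    logits = clf(embed(tokens))
    labels = torch.randint(0, 2, (8, n_labels)).float()  # multi-hot targets
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)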

IF-ConvTransformer

Ye Zhang, Longguang Wang, Huiling Chen, Aosheng Tian, Shilin Zhou, Yulan Guo
2022 Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Then, the extracted features are fed into the applied ConvTransformer subnet for classification.  ...  Inspired by the complementary filter, our IMU fusion block performs multi-modal fusion of commonly used sensors according to their physical relationships.  ...  In [53], a hybrid model (DeepSense) composed of a multi-branch CNN and GRUs was proposed for classification.  ... 
doi:10.1145/3534584 fatcat:isxnkuwfnjbjrnjjkpsvtcydh4
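
The complementary filter that inspires the IMU fusion block is the classic sensor-fusion rule for orientation: trust the integrated gyroscope at high frequency and the gravity-derived accelerometer tilt at low frequency. A minimal sketch with illustrative alpha and sample period:

    import math

    def complementary_filter(angle, gyro_rate, acc_x, acc_z, dt=0.01, alpha=0.98):
        gyro_angle = angle + gyro_rate * dt   # integrate gyro (smooth, drifts slowly)
        acc_angle = math.atan2(acc_x, acc_z)  # gravity-based tilt (noisy, drift-free)
        # High-pass the gyro estimate, low-pass the accelerometer estimate.
        return alpha * gyro_angle + (1 - alpha) * acc_angle

    angle = 0.0
    for _ in range(100):                      # simulated static sensor readings
        angle = complementary_filter(angle, gyro_rate=0.001, acc_x=0.0, acc_z=9.81)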