Auto-Encoder based Co-Training Multi-View Representation Learning [article]

Run-kun Lu, Jian-wei Liu, Yuan-fang Wang, Hao-jie Xie, Xin Zuo
2022 arXiv   pre-print
Co-training Multi-View Learning (ACMVL), which utilizes both complementarity and consistency and finds a joint latent feature representation of multiple views.  ...  As is known, the auto-encoder is a deep learning method that can learn the latent features of raw data by reconstructing the input; based on this, we propose a novel algorithm called Auto-encoder based  ...  Conclusion In this paper, we propose a novel multi-view learning algorithm called Auto-Encoder based Co-Training Multi-View Representation Learning (ACMVL), which is aimed at subspace learning and model  ... 
arXiv:2201.02978v1 fatcat:3gdck36mrng5zn5a6hg7mk2ljq
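The reconstruction idea behind the snippet above can be sketched with a minimal single-layer auto-encoder. This is a generic illustration, not the authors' ACMVL implementation; the layer sizes, activation, and variable names are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 samples with 6 features.
X = rng.standard_normal((8, 6))

# Single-layer auto-encoder with a 3-dimensional latent space.
W_enc = rng.standard_normal((6, 3)) * 0.1
W_dec = rng.standard_normal((3, 6)) * 0.1

def encode(X):
    return np.tanh(X @ W_enc)        # latent feature representation Z

def decode(Z):
    return Z @ W_dec                 # reconstruction of the input

Z = encode(X)
X_hat = decode(Z)
loss = np.mean((X - X_hat) ** 2)     # reconstruction (MSE) objective
```

Training would minimize `loss` with gradient descent; the learned `Z` then serves as the latent feature of the raw data.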

Multi-view Deep Subspace Clustering Networks [article]

Pengfei Zhu, Binyuan Hui, Changqing Zhang, Dawei Du, Longyin Wen, Qinghua Hu
2019 arXiv   pre-print
A latent space is built upon deep convolutional auto-encoders and a self-representation matrix is learned in the latent space using a fully connected layer.  ...  Dnet learns view-specific self-representation matrices while Unet learns a common self-representation matrix for all views.  ...  The proposed method learns multi-view self-representation in an end-to-end manner by combining convolutional auto-encoder and self-representation together.  ... 
arXiv:1908.01978v1 fatcat:4uf2efjh5va5lmx4ywhwng7pni
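The self-representation step described above (a matrix learned in the latent space so that each latent code is expressed by the others) can be sketched as follows. This is a simplified stand-in: the paper learns the coefficient matrix with a fully connected layer, while here it is solved per row in closed form, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent codes for N samples (rows), e.g. from a convolutional auto-encoder.
N, d = 10, 4
Z = rng.standard_normal((N, d))

# Self-representation: z_i ≈ sum_j C[i, j] * z_j with C[i, i] = 0.
C = np.zeros((N, N))
for i in range(N):
    others = np.delete(np.arange(N), i)
    # Least-squares fit of z_i from the remaining latent codes.
    coef, *_ = np.linalg.lstsq(Z[others].T, Z[i], rcond=None)
    C[i, others] = coef

# Self-expression residual; subspace clustering then builds an
# affinity matrix from C (e.g. |C| + |C|^T) and applies spectral clustering.
self_expr_loss = np.mean((Z - C @ Z) ** 2)
```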

Modeling Heterogeneous Edges to Represent Networks with Graph Auto-Encoder [article]

Lu Wang, Yu Song, Hong Huang, Fanghua Ye, Xuanhua Shi, Hai Jin
2021 arXiv   pre-print
Given this, we propose a regularized graph auto-encoder (RGAE) model, committed to utilizing abundant information in multiple views to learn robust network representations.  ...  We convert the heterogeneous networks into multiple views by using each view to describe a specific type of relationship between nodes, so that we can leverage the collaboration of multiple views to learn  ...  Some traditional multi-view learning algorithms, such as co-training [13] , co-clustering [39] , and cross-domain fusion [6] analyze multi-view networks for specific tasks.  ... 
arXiv:2103.07042v1 fatcat:hbbqaymnl5eszgrbvzlazsqvd4

From multiple views to single view

Subendhu Rongali, A. P. Sarath Chandar, Balaraman Ravindran
2015 Proceedings of the Second ACM IKDD Conference on Data Sciences - CoDS '15  
In this paper, we propose one such subspace learning approach based on neural networks. Our aim is to explore the application of auto-encoders in a multi-view setting.  ...  PREDICTIVE AUTO-ENCODER In this section, we describe the Predictive Auto-Encoder model introduced in [11] , which is the basis for the proposed multi-view learning approach.  ... 
doi:10.1145/2732587.2732602 dblp:conf/cods/RongaliCR15 fatcat:laryexbzlnad3okc4aguhviy3m

Deep Multiple Auto-Encoder-Based Multi-view Clustering

Guowang Du, Lihua Zhou, Yudi Yang, Kevin Lü, Lizhen Wang
2021 Data Science and Engineering  
In this paper, we propose a deep multi-view clustering algorithm based on multiple auto-encoders, termed MVC-MAE, to cluster multi-view data.  ...  However, most existing MVC algorithms are shallow models, which learn structure information of multi-view data by mapping multi-view data to low-dimensional representation space directly, ignoring the  ...  In this paper, we propose a multi-view clustering algorithm based on multiple auto-encoders, named MVC-MAE (see Fig. 1 ).  ... 
doi:10.1007/s41019-021-00159-z fatcat:dd4ml5u7dzf65ihdi4ryx5r6di
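The "one auto-encoder per view" setup described in the snippet above can be sketched as below. This is a hedged illustration of the general pattern, not the MVC-MAE architecture: the encoders, dimensions, and the simple concatenation fusion are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two views of the same 8 samples, with different feature dimensions.
views = [rng.standard_normal((8, 5)), rng.standard_normal((8, 7))]

def make_encoder(in_dim, latent_dim=3, seed=0):
    # One randomly initialized encoder per view (decoder omitted).
    W = np.random.default_rng(seed).standard_normal((in_dim, latent_dim)) * 0.1
    return lambda X: np.tanh(X @ W)

# A separate auto-encoder per view; each is trained to reconstruct its
# own view, and the per-view latent codes are fused for clustering.
encoders = [make_encoder(v.shape[1], seed=i) for i, v in enumerate(views)]
latents = [enc(v) for enc, v in zip(encoders, views)]
joint = np.concatenate(latents, axis=1)   # joint representation to cluster
```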

Generative Partial Multi-View Clustering [article]

Qianqian Wang, Zhengming Ding, Zhiqiang Tao, Quanxue Gao, Yun Fu
2020 arXiv   pre-print
First, multi-view encoder networks are trained to learn common low-dimensional representations, followed by a clustering layer to capture the consistent cluster structure across multiple views.  ...  These two steps could be promoted mutually, where learning common representations facilitates data imputation and the generated data could further explore the view consistency.  ...  Additionally, we are able to achieve a more common representation that works more effectively in partial multi-view clustering. 5) Overall objective: By integrating auto-encoder loss, adversarial training  ... 
arXiv:2003.13088v1 fatcat:64zojvgtwjhs3pxxek2zsg7vge

Hierarchical Encoder with Auxiliary Supervision for Neural Table-to-Text Generation: Learning Better Representation for Tables

Tianyu Liu, Fuli Luo, Qiaolin Xia, Shuming Ma, Baobao Chang, Zhifang Sui
2019 Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)
Most neural table-to-text models are based on the encoder-decoder framework. However, it is hard for a vanilla encoder to learn the accurate semantic representation of a complex table.  ...  and multi-label classification, as the auxiliary supervisions for the table encoder.  ...  We view all the attribute names which appear in the specific table as the targets for the multi-label classification on the internal representation of the table encoder.  ... 
doi:10.1609/aaai.v33i01.33016786 fatcat:c7hhd5zt7bdlnmph6tlv3h2ywy

Deep Co-Attention Network for Multi-View Subspace Learning [article]

Lecheng Zheng, Yu Cheng, Hongxia Yang, Nan Cao, Jingrui He
2021 arXiv   pre-print
To address these issues, in this paper, we propose a deep co-attention network for multi-view subspace learning, which aims to extract both the common information and the complementary information in an  ...  This improves the quality of latent representation and accelerates the convergence speed.  ...  As for the subspace learning, the authors of [14] proposed a deep multi-view robust representation learning algorithm based on auto-encoders to learn a shared representation from multi-view observations  ... 
arXiv:2102.07751v1 fatcat:ufmiwpf7szbpzkrw6go7fv72ru

Heterogeneous Graph Neural Network with Multi-view Representation Learning [article]

Zezhi Shao, Yongjun Xu, Wei Wei, Fei Wang, Zhao Zhang, Feida Zhu
2021 arXiv   pre-print
The proposed model consists of node feature transformation, view-specific ego graph encoding and auto multi-view fusion to thoroughly learn complex structural and semantic information for generating comprehensive  ...  To address the problem, we propose a Heterogeneous Graph Neural Network with Multi-View Representation Learning (named MV-HetGNN) for heterogeneous graph embedding by introducing the idea of multi-view  ...  Auto Multi-view Fusion After the view-specific ego graph encoding module, we get diverse representations of target nodes from each view.  ... 
arXiv:2108.13650v1 fatcat:qgxlddjql5fdlcecjp4hsmyo4a

ConvMAE: Masked Convolution Meets Masked Autoencoders [article]

Peng Gao, Teli Ma, Hongsheng Li, Ziyi Lin, Jifeng Dai, Yu Qiao
2022 arXiv   pre-print
In this paper, our ConvMAE framework demonstrates that a multi-scale hybrid convolution-transformer can learn more discriminative representations via the masked auto-encoding scheme.  ...  We also propose to more directly supervise the multi-scale features of the encoder to boost multi-scale features.  ...  Self-supervised Representation Learning. Contrastive learning and masked auto-encoding are two popular branches of self-supervised representation learning.  ... 
arXiv:2205.03892v2 fatcat:uealrq7tizbpbpum7zdtfkzdmy
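The masked auto-encoding scheme mentioned above can be sketched in a few lines: mask most of the input patches, encode only the visible ones, and compute the reconstruction loss on the masked positions only. This is a generic sketch of the scheme, not ConvMAE itself; the patch count, mask ratio, and the zero-valued stand-in "prediction" are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# An "image" flattened into 16 patch embeddings of dimension 8.
patches = rng.standard_normal((16, 8))

# Mask 75% of the patches, a typical masked auto-encoding ratio.
masked_idx = rng.choice(16, size=12, replace=False)
visible_idx = np.setdiff1d(np.arange(16), masked_idx)

visible = patches[visible_idx]        # only these reach the encoder

# Stand-in decoder output (zeros); a real model reconstructs from the
# encoded visible patches plus learnable mask tokens.
pred = np.zeros_like(patches)

# Reconstruction loss is computed on the masked patches only.
loss = np.mean((pred[masked_idx] - patches[masked_idx]) ** 2)
```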

Multimodal Machine Learning: Integrating Language, Vision and Speech

Louis-Philippe Morency, Tadas Baltrušaitis
2017 Proceedings of ACL 2017, Tutorial Abstracts  
Multimodal machine learning is a vibrant multi-disciplinary research field which addresses some of the original goals of artificial intelligence by integrating and modeling multiple communicative modalities  ...  auto-encoders  ...  Multimodal fusion and co-learning: model-free approaches (early and late fusion, hybrid models); kernel-based fusion (multiple kernel learning); multimodal graphical models (factorial HMM, multi-view  ... 
doi:10.18653/v1/p17-5002 dblp:conf/acl/MorencyB17 fatcat:m24h75t6mvdyfeedrsjbvjjaom

Efficient Region Embedding with Multi-View Spatial Networks: A Perspective of Locality-Constrained Spatial Autocorrelations

Yanjie Fu, Pengyang Wang, Jiadi Du, Le Wu, Xiaolin Li
2019 Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)
Specifically, we first construct multi-view (i.e., distance and mobility connectivity) POI-POI networks to represent regions.  ...  We propose a new encoder-decoder based formulation that preserves the two properties while remaining efficient.  ...  Multi-view graphs are built to represent regions and then fed into Auto-Encoder to learn region embeddings.  ... 
doi:10.1609/aaai.v33i01.3301906 fatcat:5hqbc6j7pzep5euitakavziulm

SSL-Net: Point-cloud generation network with self-supervised learning

Ran Sun, Yongbin Gao, Zhijun Fang, Anjie Wang, Cengsi Zhong
2019 IEEE Access  
DEEP LEARNING ON SINGLE IMAGE GENERATION Prior works, including the 3D auto-encoder [28] and recurrent network [14] , learn a latent representation for volumetric data.  ...  Learning-based approaches require sufficient training data to learn semantic features; however, in the early stages of the field there was no large open 3D database that met this requirement.  ...  A. 3D POINT CLOUD RECONSTRUCTION FROM IMAGE The 3D point cloud reconstruction is based on a pre-trained auto-encoder, which consists of an encoder and a decoder; the encoder learns the latent features  ... 
doi:10.1109/access.2019.2923842 fatcat:s53kttnwk5dw3jzy2aqe6deaji

Hierarchical Point Cloud Encoding and Decoding with Lightweight Self-Attention based Model [article]

En Yen Puang, Hao Zhang, Hongyuan Zhu, Wei Jing
2022 arXiv   pre-print
In this paper we present SA-CNN, a hierarchical and lightweight self-attention based encoding and decoding architecture for representation learning of point cloud data.  ...  Following the conventional hierarchical pipeline, the encoding process extracts features in a local-to-global manner, while the decoding process generates features and the point cloud in a coarse-to-fine, multi-resolution  ...  In contrast, the latent representation of a point cloud trained with an auto-encoder can be used for the data retrieval task.  ... 
arXiv:2202.06407v1 fatcat:uo6wekbvobc4zlzln4s22avuam

AnomMAN: Detect Anomaly on Multi-view Attributed Networks [article]

Ling-Hao Chen, He Li, Wenhao Yang
2022 arXiv   pre-print
In this paper, we propose a Graph Convolution based framework, AnomMAN, to detect Anomaly on Multi-view Attributed Networks.  ...  Therefore, AnomMAN uses a graph auto-encoder module to overcome the shortcoming and turn it into a strength.  ...  In this work, we design a Graph Convolution based auto-encoder to detect anomalies.  ... 
arXiv:2201.02822v1 fatcat:34e7qd4hebezdh4tqe3z5yzap4
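The auto-encoder-based anomaly detection pattern underlying the entry above scores each node by how poorly it reconstructs: nodes that a model trained on normal structure cannot reconstruct well are flagged as anomalies. A minimal sketch of that scoring step (the reconstructions here are synthetic stand-ins, not the output of AnomMAN's graph auto-encoder):

```python
import numpy as np

rng = np.random.default_rng(4)

# Node attribute matrix and its auto-encoder reconstruction (synthetic:
# small noise for normal nodes, a large offset injected at node 2).
X = rng.standard_normal((6, 4))
X_hat = X + 0.05 * rng.standard_normal((6, 4))
X_hat[2] += 3.0                           # node 2 reconstructs poorly

# Per-node anomaly score = reconstruction error; rank nodes by it.
scores = np.linalg.norm(X - X_hat, axis=1)
most_anomalous = int(np.argmax(scores))
```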