Multi-View Non-negative Matrix Factorization Discriminant Learning via Cross Entropy Loss
[article]
2022
arXiv
pre-print
Zhong Zhang et al. explore the discriminative and non-discriminative information existing in common and view-specific parts among different views via joint non-negative matrix factorization. ...
In this paper, we improve on this algorithm by using the cross entropy loss function to better constrain the objective function. ...
In this paper, we propose multi-view non-negative matrix factorization discriminant learning via cross entropy loss. We distinguish discriminative and non-discriminative information existing in the consistent ...
arXiv:2201.04726v1
fatcat:iu2jy2br4zbu7cfsdkqro45esa
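As a concrete reference for the joint factorization this entry describes, below is a minimal sketch of joint NMF in which two views share one coefficient matrix H. The rank, iteration count, and multiplicative updates are illustrative assumptions; the paper's cross-entropy discriminant term is not reproduced here.

```python
# Minimal joint-NMF sketch (illustrative, not the paper's full algorithm):
# each view X_v is factored as W_v @ H with a shared, non-negative H.
import numpy as np

def joint_nmf(views, rank, n_iter=200, eps=1e-9):
    """views: list of non-negative (d_v, n) matrices over the same n samples."""
    rng = np.random.default_rng(0)
    n = views[0].shape[1]
    Ws = [rng.random((X.shape[0], rank)) for X in views]
    H = rng.random((rank, n))
    for _ in range(n_iter):
        for v, X in enumerate(views):
            W = Ws[v]
            Ws[v] = W * (X @ H.T) / (W @ H @ H.T + eps)  # multiplicative update keeps W_v >= 0
        num = sum(W.T @ X for W, X in zip(Ws, views))
        den = sum(W.T @ W @ H for W in Ws)
        H = H * num / (den + eps)                        # shared representation update
    return Ws, H

X1 = np.abs(np.random.rand(20, 100))  # view 1: 20 features
X2 = np.abs(np.random.rand(30, 100))  # view 2: 30 features
Ws, H = joint_nmf([X1, X2], rank=5)
```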
Iterative Graph Self-Distillation
[article]
2021
arXiv
pre-print
How to discriminatively vectorize graphs is a fundamental challenge that has attracted increasing attention in recent years. ...
Inspired by the recent success of unsupervised contrastive learning, we aim to learn graph-level representations in an unsupervised manner. ...
[17] shows the benefits of treating the diffusion matrix as an augmented view in multi-view contrastive graph representation learning. ...
arXiv:2010.12609v2
fatcat:h5csmfxatbg4jcliukikimsgnm
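The diffusion-as-augmented-view idea the snippet attributes to [17] can be made concrete with a Personalized PageRank diffusion matrix, as sketched below; the teleport probability alpha is an assumed illustrative value, not a setting from the paper.

```python
# Hedged sketch: Personalized PageRank diffusion of a graph, usable as a
# second "view" of the same graph in contrastive representation learning.
import numpy as np

def ppr_diffusion(adj, alpha=0.15):
    """adj: dense (n, n) adjacency matrix.
    Returns S = alpha * (I - (1 - alpha) * D^-1/2 (A + I) D^-1/2)^-1."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * a_norm)

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
view2 = ppr_diffusion(adj)  # contrast against the raw adjacency view
```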
Skeleton-Based Action Recognition with Synchronous Local and Non-local Spatio-temporal Learning and Frequency Attention
[article]
2019
arXiv
pre-print
Besides, a soft-margin focal loss (SMFL) is proposed to optimize the whole learning process, which automatically conducts data selection and encourages intrinsic margins in classifiers. ...
To better extract synchronous detailed and semantic information from multi-domains, we propose a residual frequency attention (rFA) block to focus on discriminative patterns in the frequency domain, and ...
The model is optimized with the cross entropy loss (CE), focal loss (FL), soft-margin cross entropy loss (SMCE), and soft-margin focal loss (SMFL), respectively. ...
arXiv:1811.04237v3
fatcat:7zceaqr7e5aj7m6veuxfom4y5e
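The soft-margin focal loss (SMFL) itself is this paper's contribution, and its exact form is not given in the snippet; as a hedged reference point, the sketch below shows the standard focal loss it builds on, with gamma as the focusing parameter.

```python
# Standard focal loss (the base of SMFL per the snippet; SMFL's soft-margin
# modification is not reproduced here). gamma down-weights easy examples.
import numpy as np

def focal_loss(logits, labels, gamma=2.0):
    """logits: (n, c) class scores; labels: (n,) integer class ids."""
    z = logits - logits.max(axis=1, keepdims=True)       # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    pt = p[np.arange(len(labels)), labels]               # probability of the true class
    return (-(1.0 - pt) ** gamma * np.log(pt + 1e-12)).mean()

logits = np.array([[2.0, 0.5], [0.1, 1.5]])
print(focal_loss(logits, np.array([0, 1])))
```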
Triplet is All You Need with Random Mappings for Unsupervised Visual Representation Learning
[article]
2021
arXiv
pre-print
Contrastive self-supervised learning (SSL) has achieved great success in unsupervised visual representation learning by maximizing the similarity between two augmented views of the same image (positive ...
In contrast, some recent non-contrastive SSL methods, such as BYOL and SimSiam, attempt to discard negative pairs by introducing asymmetry and show remarkable performance. ...
From another perspective, the whole training phase can also be viewed as a multi-task optimization process via triplet loss and cross-entropy loss, which improves the robustness of ROMA to some extent ...
arXiv:2107.10419v2
fatcat:yx4u2jotbbhlpdsikn7yeewg2m
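A minimal sketch of the multi-task reading in the last snippet: a triplet margin term combined with a cross-entropy term on the same batch. The weight lam, the margin, and the classifier logits are illustrative assumptions, not ROMA's exact design.

```python
# Multi-task objective sketch: triplet loss + cross-entropy (assumed weighting).
import numpy as np

def triplet_loss(anchor, pos, neg, margin=0.2):
    d_pos = np.sum((anchor - pos) ** 2, axis=1)   # anchor-positive distances
    d_neg = np.sum((anchor - neg) ** 2, axis=1)   # anchor-negative distances
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

def cross_entropy(logits, labels):
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()

def multitask_loss(anchor, pos, neg, logits, labels, lam=1.0):
    return triplet_loss(anchor, pos, neg) + lam * cross_entropy(logits, labels)
```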
Inducing Optimal Attribute Representations for Conditional GANs
[article]
2020
arXiv
pre-print
Moreover, prior arts give priority to conditioning on the generator side of GANs, not the discriminator side. We apply the conditions to the discriminator side as well via multi-task learning. ...
The GAN losses, i.e. the discriminator and attribute classification losses, are fed back to the Graph resulting in the synthetic images that are more natural and clearer in attributes. ...
In this work, we introduce conditioning of the discriminator within a multi-task learning framework while minimising the cross-entropy loss of the target attributes. ...
arXiv:2003.06472v2
fatcat:pb37ca5glzhr3mxanho62p2ws4
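Discriminator-side conditioning via multi-task learning, as the snippet describes it, can be sketched in an ACGAN-like form: an adversarial real/fake term plus a cross-entropy term over the target attributes. The loss weight lam and the two-head layout are assumptions, not the paper's exact architecture.

```python
# Sketch of a multi-task discriminator loss: adversarial BCE + attribute CE.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def disc_loss(real_logit, fake_logit, attr_logits, attr_labels, lam=1.0):
    # adversarial term: push real outputs toward 1, fake outputs toward 0
    adv = (-np.log(sigmoid(real_logit) + 1e-12)).mean() \
          + (-np.log(1.0 - sigmoid(fake_logit) + 1e-12)).mean()
    # multi-task term: attribute classification cross-entropy on real images
    z = attr_logits - attr_logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(attr_labels)), attr_labels].mean()
    return adv + lam * ce
```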
Deep Co-Attention Network for Multi-View Subspace Learning
[article]
2021
arXiv
pre-print
To address these issues, in this paper, we propose a deep co-attention network for multi-view subspace learning, which aims to extract both the common information and the complementary information in an ...
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation by incorporating the classifier into our model. ...
Here we aim to minimize the following loss function: $\mathcal{L} = \mathbb{E}_{(x_1, x_2) \sim P(x_1, x_2)} H(y, \hat{y})$ (9), where $H(y, \hat{y})$ is the cross-entropy loss. ...
arXiv:2102.07751v1
fatcat:ufmiwpf7szbpzkrw6go7fv72ru
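The empirical counterpart of Eq. (9) above simply averages the cross-entropy over a minibatch of sampled view pairs; the function and argument names below are illustrative.

```python
# Monte-Carlo estimate of Eq. (9): mean cross-entropy over sampled view pairs.
import numpy as np

def eq9_loss(y_hat_probs, y_labels):
    """y_hat_probs: (n, c) predicted class probabilities for n view pairs."""
    return -np.log(y_hat_probs[np.arange(len(y_labels)), y_labels] + 1e-12).mean()
```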
Semi-Supervised Deep Learning for Multiplex Networks
[article]
2021
arXiv
pre-print
In this work, we present a novel semi-supervised approach for structure-aware representation learning on multiplex networks. ...
Multiplex networks are complex graph structures in which a set of entities are connected to each other via multiple types of relations, each relation representing a distinct layer. ...
To achieve this, we adapt [22]'s Non-Negative Matrix Factorization formulation for learning label-correlated clusters to a Neural Network setup as follows. ...
arXiv:2110.02038v1
fatcat:koof45ms6fbrpaz6izecoswroi
Pluggable Weakly-Supervised Cross-View Learning for Accurate Vehicle Re-Identification
[article]
2021
arXiv
pre-print
Through hallucinating the cross-view samples as the hardest positive counterparts in feature domain, we can learn the consistent feature representation via minimizing the cross-view feature distance based ...
In this study, we present a pluggable Weakly-supervised Cross-View Learning (WCVL) module for vehicle ReID. ...
With extensive experiments on three vehicle ReID benchmark datasets, the proposed method clearly outperforms most existing non-cross-view learning and supervised cross-view learning baselines with a ...
arXiv:2103.05376v1
fatcat:5ui33m27szezrncuo4ga2jrfze
Discriminative coupled dictionary hashing for fast cross-media retrieval
2014
Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval - SIGIR '14
We introduce multi-view features on the relatively "weak" modalities into DCDH and extend it to multi-view DCDH (MV-DCDH) in order to enhance their representation capability. ...
We propose a discriminative coupled dictionary hashing (DCDH) method in this paper. In DCDH, the coupled dictionary for each modality is learned with side information (e.g., categories). ...
$Z_m \geq 0$ (12). The non-negativity constraint on $Z_m$ is needed for the subsequent hash-function learning step. ...
doi:10.1145/2600428.2609563
dblp:conf/sigir/YuWYTLZ14
fatcat:igpcpkocsrggvmldkcboj2ofly
Boost-RS: Boosted Embeddings for Recommender Systems and its Application to Enzyme-Substrate Interaction Prediction
[article]
2021
arXiv
pre-print
We show that each of our auxiliary tasks boosts learning of the embedding vectors, and that contrastive learning using Boost-RS outperforms attribute concatenation and multi-label learning. ...
Specifically, Boost-RS is trained and dynamically tuned on multiple relevant auxiliary learning tasks. Boost-RS utilizes contrastive learning tasks to exploit relational data. ...
The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. ...
arXiv:2109.14766v1
fatcat:gqeub2uhjzdh3p4nvqzrb2wfvi
Deep Supervised Information Bottleneck Hashing for Cross-modal Retrieval based Computer-aided Diagnosis
[article]
2022
arXiv
pre-print
... hashing-based cross-modal medical data retrieval) provides a new view to promote computer-aided diagnosis. ...
Benefiting from this, the superfluous information is reduced, which facilitates the discriminability of hash codes. ...
For example, Collective Matrix Factorization Hashing (Ding et al. 2016 ) learns unified hash codes by collective matrix factorization with a latent factor model to capture instance-level correlations. ...
arXiv:2205.08365v1
fatcat:ur3u6wk7abfy7npragdin5f3ku
Multi-view Contrastive Graph Clustering
[article]
2021
arXiv
pre-print
... we then learn a consensus graph regularized by graph contrastive loss. ...
Most existing multi-view clustering techniques either focus on the scenario of multiple graphs or multi-view attributes. ...
... normalized temperature-scaled cross entropy loss (NT-Xent). ...
arXiv:2110.11842v1
fatcat:w2cpv4os2ffrpk5atfzqkia5vi
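The NT-Xent loss the snippet names has a standard SimCLR-style form, sketched below under the assumption that row i of the two embedding matrices forms a positive pair; tau is the temperature.

```python
# NT-Xent (normalized temperature-scaled cross-entropy) sketch.
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (n, d) embeddings of two views; row i of z1 pairs with row i of z2."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity via unit norm
    sim = z @ z.T / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each positive
    log_p = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_p[np.arange(2 * n), targets].mean()
```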
Deep Adversarial Inconsistent Cognitive Sampling for Multi-view Progressive Subspace Clustering
[article]
2021
arXiv
pre-print
... learn a multi-view common progressive subspace and clustering network for more efficient clustering. ...
A multi-view binary classification (easy or difficult) loss and a feature similarity loss are proposed to jointly learn a binary classifier and a deep consistent feature embedding network, throughout an ...
clustering methods: Deep Canonical Correlation Analysis (DCCA) [9], Multi-view clustering via Deep Matrix Factorization (MvDMF) [16], Deep Adversarial Multi-view Clustering network (DAMC) [14], End-to-End ...
arXiv:2101.03783v3
fatcat:5tfhk3msufhglmtjkruxeztq4q
Generative Partial Multi-View Clustering
[article]
2020
arXiv
pre-print
First, multi-view encoder networks are trained to learn common low-dimensional representations, followed by a clustering layer to capture the consistent cluster structure across multiple views. ...
In this study, we design and build a generative partial multi-view clustering model, named as GP-MVC, to address the incomplete multi-view problem by explicitly generating the data of missing views. ...
Rather than the cross-entropy loss function, it adopts the least-squares loss function for the discriminator. Zhang et al. ...
arXiv:2003.13088v1
fatcat:64zojvgtwjhs3pxxek2zsg7vge
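The least-squares alternative to the cross-entropy GAN loss mentioned in the snippet has the standard LSGAN form, sketched below; the real/fake target values 1 and 0 are the common choice, assumed here.

```python
# LSGAN-style objectives: squared error toward real/fake targets instead of
# the cross-entropy (log-sigmoid) GAN loss.
import numpy as np

def lsgan_disc_loss(d_real, d_fake):
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def lsgan_gen_loss(d_fake):
    return 0.5 * ((d_fake - 1.0) ** 2).mean()
```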
A Comprehensive Survey on Community Detection with Deep Learning
[article]
2021
arXiv
pre-print
This survey devises and proposes a new taxonomy covering different state-of-the-art methods, including deep learning-based models upon deep neural networks, deep non-negative matrix factorization, and deep ...
Finally, we outline future directions by suggesting challenging topics in this fast-growing deep learning field. ...
... with aligned autoencoder; DMGC, Deep Multi-Graph Clustering [120]: deep multi-graph clustering via attentive cross-graph association; DMGI, Deep Graph Infomax for attributed multiplex network embedding [61 ...
arXiv:2105.12584v2
fatcat:matipshxnzcdloygrcrwx2sxr4
Showing results 1 — 15 out of 6,205 results