A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL. The file type is application/pdf.
Learning Latent Representations in Neural Networks for Clustering through Pseudo Supervision and Graph-based Activity Regularization
[article]
2018
arXiv
pre-print
Due to the unsupervised objective based on Graph-based Activity Regularization (GAR) terms, softmax duplicates of each parent-class are specialized as the hidden information captured through the help of ...
Generated pseudo observation-label pairs are subsequently used to train a neural network with Auto-clustering Output Layer (ACOL) that introduces multiple softmax nodes for each pseudo parent-class. ...
While being trained over this pseudo supervision, through ACOL and GAR, the neural network learns the latent representation distinguishing the real digit identities in an unsupervised manner. ...
arXiv:1802.03063v1
fatcat:zqgvih6qvrhhzovt7w3oec4jyy
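The ACOL mechanism this entry describes, multiple softmax duplicate nodes per pseudo parent-class whose pooled output satisfies the pseudo supervision while the individual duplicates specialize into latent clusters, can be sketched as follows. This is a minimal illustration, not the paper's implementation; `acol_forward`, `n_parents`, and `k` are hypothetical names.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def acol_forward(logits, n_parents, k):
    """Auto-clustering Output Layer (sketch): each pseudo parent-class owns
    k softmax duplicate nodes. Summing a parent's duplicates yields the
    supervised parent prediction; the argmax duplicate gives the latent
    (unsupervised) cluster assignment."""
    probs = softmax(logits)                   # (batch, n_parents * k)
    dup = probs.reshape(-1, n_parents, k)     # group duplicates by parent
    parent_probs = dup.sum(axis=2)            # pooled, trained against pseudo-labels
    cluster_id = probs.argmax(axis=1)         # which duplicate fired: the cluster
    return parent_probs, cluster_id

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 2 * 5))          # 2 pseudo parents, 5 duplicates each
parents, clusters = acol_forward(logits, n_parents=2, k=5)
```

During training, only `parent_probs` receives a loss from the pseudo parent-labels; the GAR terms of the paper would act on the duplicate activations to spread examples across duplicates.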
Pseudo-supervised Deep Subspace Clustering
[article]
2021
arXiv
pre-print
Pseudo-graphs and pseudo-labels, which allow benefiting from uncertain knowledge acquired during network training, are further employed to supervise similarity learning. ...
Auto-Encoder (AE)-based deep subspace clustering (DSC) methods have achieved impressive performance due to the powerful representation extracted using deep neural networks while prioritizing categorical ...
Specifically, we introduce pseudo-graph supervision and pseudo-label supervision to guide the network training by constructing pseudographs and pseudo-labels. Pseudo-graph Supervision. ...
arXiv:2104.03531v1
fatcat:jjyq52m42raoxlukjdqueawtwy
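The pseudo-graph supervision sketched in this entry, deriving pairwise supervision from the network's own uncertain similarity estimates, can be illustrated minimally: confident similar pairs become positive edges, confident dissimilar pairs negative edges, and uncertain pairs are masked out. The function name and thresholds below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def pseudo_graph(features, pos_thresh=0.9, neg_thresh=0.3):
    """Build a pseudo-graph from cosine similarities of learned features
    (sketch): pairs above pos_thresh are pseudo-labeled as same-cluster (1),
    pairs below neg_thresh as different-cluster (0), and ambiguous pairs
    are marked -1 and excluded from the supervision signal."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    graph = np.full(sim.shape, -1)   # -1 = no supervision for this pair
    graph[sim >= pos_thresh] = 1
    graph[sim <= neg_thresh] = 0
    return graph
```

A similarity-learning loss would then be applied only where `graph != -1`, so the network benefits from confident pseudo-knowledge without being misled by uncertain pairs.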
Hypergraph-Supervised Deep Subspace Clustering
2021
Mathematics
As a standard module in current AE-based DSC, the self-reconstruction cost plays an essential role in regularizing the feature learning. ...
learning and, thus, alleviating the adverse effect of the self-reconstruction cost. ...
H-DSC, base-1, base-2, and base-3 have the same network architecture and training policy, except that base-1 adopts the self-reconstruction (as in Equation (2)), base-2 employs the graph-regularized ...
doi:10.3390/math9243259
fatcat:vc4mzydy4zejpmq5fs2jvtfv5i
A survey on semi-supervised learning
2019
Machine Learning
In recent years, research in this area has followed the general trends observed in machine learning, with much attention directed at neural network-based models and generative learning. ...
Lastly, we show how the fundamental assumptions underlying most semi-supervised learning algorithms are closely connected to each other, and how they relate to the well-known semi-supervised clustering ...
doi:10.1007/s10994-019-05855-6
fatcat:lm2obxiqtrcujfbyzz3erna5p4
Unsupervised Deep Learning: Taxonomy and algorithms
2022
Informatica (Ljubljana, Tiskana izd.)
As a result, deep neural networks can be used to learn more accurate data representations for clustering. ...
Many recent studies have focused on employing deep neural networks to develop a clustering-friendly representation, which has resulted in a significant improvement in clustering performance. ...
doi:10.31449/inf.v46i2.3820
fatcat:35wje347s5ar3ixzkeqgxc3cbu
A Survey on Deep Semi-supervised Learning
[article]
2021
arXiv
pre-print
We first present a taxonomy for deep semi-supervised learning that categorizes existing methods, including deep generative methods, consistency regularization methods, graph-based methods, pseudo-labeling ...
This paper provides a comprehensive survey on both fundamentals and recent advances in deep semi-supervised learning methods from perspectives of model design and unsupervised loss functions. ...
Through latent skip connections, the ladder network is differentiated from a regular denoising autoencoder. ...
arXiv:2103.00550v2
fatcat:lymncf5wavgkhaenbvqlyvhuaa
A Comprehensive Survey on Deep Clustering: Taxonomy, Challenges, and Future Directions
[article]
2022
arXiv
pre-print
Classic clustering methods follow the assumption that data are represented as features in a vectorized form through various representation learning techniques. ...
Recently, the concept of Deep Clustering, i.e., jointly optimizing the representation learning and clustering, has been proposed and hence attracted growing attention in the community. ...
For example, deep neural networks for vectorized features [220], convolutional networks for images and graph neural networks for graphs [150], 3D convolutional networks and LSTM auto-encoder for videos ...
arXiv:2206.07579v1
fatcat:r2zxpt24i5e4rjvuq2lwf5td2y
A Comprehensive Survey on Community Detection with Deep Learning
[article]
2021
arXiv
pre-print
Despite the classical spectral clustering and statistical inference methods, we notice a significant development of deep learning techniques for community detection in recent years with their advantages ...
This survey devises and proposes a new taxonomy covering different state-of-the-art methods, including deep learning-based models upon deep neural networks, deep nonnegative matrix factorization and deep ...
DNGR: Deep Neural Networks for Graph Representation [112] (deep neural networks for learning graph representations); DNMF: Deep Nonnegative Matrix Factorization. Table XII: Abbreviations in this ...
arXiv:2105.12584v2
fatcat:matipshxnzcdloygrcrwx2sxr4
Collaborative Graph Convolutional Networks: Unsupervised Learning Meets Semi-Supervised Learning
2020
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
CGCN is composed of an attributed graph clustering network and a semi-supervised node classification network. ...
Graph convolutional networks (GCN) have achieved promising performance in attributed graph clustering and semi-supervised node classification because they are capable of modeling complex graphical structure ...
doi:10.1609/aaai.v34i04.5843
fatcat:6nsxdrutsngubko4ii5qpgk6vu
Locally Embedding Autoencoders: A Semi-Supervised Manifold Learning Approach of Document Representation
2016
PLoS ONE
Topic models and neural networks can discover meaningful low-dimensional latent representations of text corpora; as such, they have become a key technology of document representation. ...
Notation: the regularization hyper-parameter; n, the size of each training batch; J(θ; X^(i), S^(i)), the reconstruction error for a given input X^(i) ...
The parameter learning problem can be solved by training this regularized neural network with a mini-batch stochastic gradient descent (SGD). ...
doi:10.1371/journal.pone.0146672
pmid:26784692
pmcid:PMC4718658
fatcat:3ihhkndpw5hitapelxkkwglfx4
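The training procedure this entry mentions, mini-batch SGD on a regularized reconstruction objective, can be sketched with a linear autoencoder and weight decay. This is a simplified stand-in under stated assumptions, not the paper's locally embedding model; `train_regularized_ae` and its hyper-parameters are illustrative.

```python
import numpy as np

def train_regularized_ae(X, hidden=2, lam=1e-3, lr=0.01, epochs=50, batch=16, seed=0):
    """Mini-batch SGD for a tied-weight linear autoencoder with weight-decay
    regularization (sketch):
        J(W) = ||X_b - X_b W W^T||_F^2 / |X_b| + lam * ||W||_F^2
    Each epoch shuffles the data and steps down the gradient per mini-batch."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, hidden))
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch):
            xb = X[idx[start:start + batch]]
            r = xb @ W @ W.T - xb                                  # reconstruction residual
            grad = 2 * (xb.T @ r @ W + r.T @ xb @ W) / len(xb) + 2 * lam * W
            W -= lr * grad
    return W
```

The gradient has two data terms because `W` appears in both the encoder (`xb @ W`) and the tied decoder (`@ W.T`); the `lam` term is the regularizer's contribution.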
Semi-supervised Adversarial Active Learning on Attributed Graphs
[article]
2019
arXiv
pre-print
In this paper, we propose a SEmi-supervised Adversarial active Learning (SEAL) framework on attributed graphs, which fully leverages the representation power of deep neural networks and devises a novel ...
as already labelled, and a semi-supervised discriminator network that distinguishes the unlabelled from the existing labelled nodes in the latent space. ...
Traditional graph-based semi-supervised learning assumes that the connected nodes are likely to share the same label and as such enforces a graph-based regularization in the loss function. ...
arXiv:1908.08169v1
fatcat:i2olskc7vvcodeqzsgyfvpctue
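The graph-based regularization mentioned in this entry, enforcing that connected nodes share similar predictions, is classically the Laplacian smoothness penalty. A minimal sketch of that term (illustrative function name, dense matrices for clarity):

```python
import numpy as np

def laplacian_regularizer(F, W):
    """Graph-based regularization term (sketch): penalizes predictions that
    differ across connected nodes, using the identity
        sum_ij W_ij ||f_i - f_j||^2 = 2 * tr(F^T L F),  L = D - W,
    where F stacks node predictions row-wise and W is the adjacency matrix."""
    D = np.diag(W.sum(axis=1))   # degree matrix
    L = D - W                    # unnormalized graph Laplacian
    return 2.0 * np.trace(F.T @ L @ F)
```

Adding this term to a supervised loss recovers the traditional graph-based semi-supervised objective the abstract refers to: labeled nodes fit their labels while the penalty propagates smoothness over edges.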
A Survey on Deep Hashing Methods
2022
ACM Transactions on Knowledge Discovery from Data
Moreover, deep unsupervised hashing is categorized into similarity reconstruction-based methods, pseudo-label-based methods and prediction-free self-supervised learning-based methods based on their semantic ...
Hashing is one of the most widely used methods for its computational and storage efficiency. With the development of deep learning, deep hashing methods show more advantages than traditional methods. ...
doi:10.1145/3532624
fatcat:7lxtu2qzvvhrpnjngefli2mvca
Latent representation learning in biology and translational medicine
2021
Patterns
Latent variable modeling allows for such interpretation by learning non-measurable hidden variables from observations. ...
We anticipate that a wider dissemination of latent variable modeling in the life sciences will enable a more effective and productive interpretation of studies based on heterogeneous and high-dimensional ...
doi:10.1016/j.patter.2021.100198
pmid:33748792
pmcid:PMC7961186
fatcat:d6ttueb5rbhotbsztha3wyvjt4
Relation-Guided Representation Learning
[article]
2020
arXiv
pre-print
Deep auto-encoders (DAEs) have achieved great success in learning data representations via the powerful representability of neural networks. ...
In this work, we propose a new representation learning method that explicitly models and leverages sample relations, which in turn is used as supervision to guide the representation learning. ...
The deep comprehensive correlation mining (DCCM) [59] makes use of the local robustness assumption and utilizes above pseudo-graph and pseudo-label to learn better representation. ...
arXiv:2007.05742v1
fatcat:vpw3r7sriffr7bfdmz6dhe6zxm
Visual Interpretability for Deep Learning: a Survey
[article]
2018
arXiv
pre-print
This paper reviews recent studies in understanding neural-network representations and learning neural networks with interpretable/disentangled middle-layer representations. ...
CNN representations, learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability. ...
arXiv:1802.00614v2
fatcat:g55ax3lso5axtb6cn7munbaidi
Showing results 1 — 15 out of 1,620 results