13,788 Hits in 7.4 sec

Geometric Graph Representation Learning via Maximizing Rate Reduction [article]

Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Qingquan Song, Jundong Li, Xia Hu
2022 arXiv   pre-print
To this end, we propose Geometric Graph Representation Learning (G2R) to learn node representations in an unsupervised manner via maximizing rate reduction.  ...  G2R adopts a graph neural network as the encoder and maximizes the rate reduction with the adjacency matrix.  ...  To address the above challenges, we propose Geometric Graph Representation Learning (G2R) to learn node representations via maximizing coding rate reduction.  ...
arXiv:2202.06241v1 fatcat:hyd6lcbbwjb2tcwicuopsm4qr4

A Novel Graph Constructor for Semisupervised Discriminant Analysis: Combined Low-Rank and k-Nearest Neighbor Graph

Baokai Zu, Kewen Xia, Yongke Pan, Wenjia Niu
2017 Computational Intelligence and Neuroscience  
Since the low-rank representation can capture the global structure and the k-nearest neighbor algorithm can maximally preserve the local geometrical structure of the data, the LRKNN graph can significantly  ...  Different from these related works, the regularized graph construction is researched here, which is important in graph-based semisupervised learning methods.  ...  Since the low-rank representation can capture the global structure and the k-nearest neighbor algorithm can maximally preserve the local geometrical structure of the data, the LRKNN graph can  ...
doi:10.1155/2017/9290230 pmid:28316616 pmcid:PMC5338073 fatcat:3vqg2fiz4zfsxfxuokqjc7owgi
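The LRKNN snippet above combines a global low-rank representation with a local k-nearest-neighbor graph. As a rough stdlib-only illustration of the k-NN half (the function name and symmetrization choice are mine, not from the paper), a symmetrized k-NN adjacency graph can be built as:

```python
import math

def knn_graph(points, k):
    """Return an adjacency dict i -> set of neighbor indices for a
    k-nearest-neighbor graph over `points` (equal-length tuples),
    symmetrized so the graph is undirected."""
    n = len(points)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        # Sort all other points by Euclidean distance to point i.
        dists = sorted(
            (math.dist(points[i], points[j]), j)
            for j in range(n) if j != i
        )
        for _, j in dists[:k]:
            adj[i].add(j)
            adj[j].add(i)  # add the edge if either endpoint selects the other
    return adj

# Two well-separated clusters: edges stay within each cluster.
g = knn_graph([(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0)], 1)
```

Symmetrizing (an edge exists if either endpoint chooses the other) is one common convention; mutual-k-NN graphs, which keep an edge only when both endpoints agree, are a stricter alternative.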

Block-Diagonal Constrained Low-Rank and Sparse Graph for Discriminant Analysis of Image Data

Tan Guo, Xiaoheng Tan, Lei Zhang, Chaochen Xie, Lu Deng
2017 Sensors  
Meanwhile, low-rank and sparse graphs have been applied for semi-supervised learning [29, 30].  ...  Alternatively, Feng et al. [26] explicitly pursued a block-diagonal structure solution via a graph Laplacian constraint-based formulation.  ...  In GE, an intrinsic graph G = {X, W} describes certain desired statistical or geometrical properties of data, and a penalty graph G^P = {X, W^P} characterizes a statistical or geometric property which  ...
doi:10.3390/s17071475 pmid:28640206 pmcid:PMC5539604 fatcat:kldrc6zrvnb7zagdbhvavspiom

A New Graph Constructor for Semi-supervised Discriminant Analysis via Group Sparsity

Haoyuan Gao, Liansheng Zhuang, Nenghai Yu
2011 2011 Sixth International Conference on Image and Graphics  
the representation in each class to be quite similar.  ...  Inspired by the advances of compressive sensing, we propose a novel graph construction method via group sparsity, which means to constrain the reconstructed data to be sparse for each sample, and constrain  ...  [12] proposed a novel graph called ℓ1-graph via sparse representation by ℓ1 optimization.  ...
doi:10.1109/icig.2011.82 dblp:conf/icig/GaoZY11 fatcat:ju6j36znsnggzm7kv2iguyxjlu

A Dynamic Reduction Network for Point Clouds [article]

Lindsey Gray (Fermi National Accelerator Laboratory)
2020 arXiv   pre-print
It achieves this by dynamically learning the most important relationships between data via an intermediate clustering.  ...  Classifying whole images is a classic problem in machine learning, and graph neural networks are a powerful methodology to learn highly irregular geometries.  ...  This text describes a new pooling architecture using dynamic graph convolutions [4] and clustering algorithms to learn an optimized representation and corresponding graph for pooling.  ... 
arXiv:2003.08013v1 fatcat:t5c5kfdgxfauzplqixdqr32moa

Semi-supervised subspace learning with L2graph

Xi Peng, Miaolong Yuan, Zhiding Yu, Wei Yun Yau, Lei Zhang
2016 Neurocomputing  
Subspace learning aims to learn a projection matrix from a given training set so that a transformation of raw data to a low-dimensional representation can be obtained.  ...  In practice, the labels of some training samples are available, which can be used to improve the discrimination of low-dimensional representation.  ...  The key of subspace learning is identifying the geometric relations among different data points, i.e., the construction of the similarity graph.  ... 
doi:10.1016/j.neucom.2015.11.112 fatcat:ndxrci6a5bhn7jzjv4pjdcy7ma

Face Recognition Using A Kernelization Of Graph Embedding

Pang Ying Han, Hiew Fu San, Ooi Shih Yin
2012 Zenodo  
So, a kernelization of graph embedding is proposed as a dimensionality reduction technique in face recognition.  ...  Linearization of graph embedding has emerged as an effective dimensionality reduction technique in pattern recognition.  ...  The main purpose is to reveal the underlying intrinsic data structures in this new representation. In addition, KDE employs a neighbourhood preserving criterion to learn local features of the data.  ...
doi:10.5281/zenodo.1080863 fatcat:aobe6irgvze3xhbpj3mavpdk6u

Sparse Low-Rank Preserving Projection for Dimensionality Reduction

Zhonghua Liu, Jingjing Wang, Gang Liu, Jiexin Pu
2019 IEEE Access  
INDEX TERMS Low-rank representation, dimensionality reduction, feature representation, sparse representation.  ...  The representation-based learning methods, such as sparse representation-based classification and low-rank representation, have proven effective and robust for image clustering and classification.  ...  [38] presented low-rank sparse preserving projections (LSPP), in which the intrinsic geometric structure can be preserved, and the robust representation is simultaneously learned. Zhang et al.  ...
doi:10.1109/access.2019.2893915 fatcat:2ou6snjxdvghlmrfe73pi2pcmm

Discriminative Partition Sparsity Analysis

Li Liu, Ling Shao
2014 2014 22nd International Conference on Pattern Recognition  
In each cluster, a number of sparse sub-graphs are computed via the ℓ1-norm constraint to optimally represent the intrinsic data structure.  ...  All the sub-graphs from the clusters are then combined into a whole discriminative optimization framework for final reduction.  ...  LINEAR REDUCTION FRAMEWORK In this section, we provide a general framework for the existing subspace learning algorithms from the graph embedding point of view.  ...
doi:10.1109/icpr.2014.283 dblp:conf/icpr/LiuS14 fatcat:zlsrtvw5jbddlnl4q762ny6cby

Discriminative Hessian Eigenmaps for face recognition

Si Si, Dacheng Tao, Kwok-Ping Chan
2010 2010 IEEE International Conference on Acoustics, Speech and Signal Processing  
DHE encodes the geometric and discriminative information in a local patch by improved Hessian Eigenmaps and margin maximization, respectively.  ...  Most dimension reduction algorithms cannot model both the intra-class geometry and inter-class discrimination well simultaneously.  ...  However, the geometric and discriminative information in these dimension reduction algorithms are not well modeled; e.g., LDA does not consider the geometric information, and MFA ignores the discriminative  ...
doi:10.1109/icassp.2010.5495241 dblp:conf/icassp/SiTC10 fatcat:uxcnew6imjbq3hibtldusmeafm

Sparse Discriminant Preserving Projections for Face Recognition

Jianbo Zhang, Jinkuan Wang, Kun Zhang
2019 DEStech Transactions on Computer Science and Engineering  
Previous works have demonstrated that dimensionality reduction algorithms that combine sparse subspace learning (SSL) and discriminant information of sample data can improve the classification performance  ...  To address this problem, in this paper, we propose a new dimensionality reduction algorithm called sparse discriminant preserving projections (SDPP). Different from the existing methods, SDPP uses between-class  ...  DSPP minimizes the intrinsic adjacency matrix and maximizes the penalty matrix, and meanwhile adds the global within-class structure into the objective function for dimensionality reduction.  ...
doi:10.12783/dtcse/iciti2018/29151 fatcat:cadlp3yedzdm3ht7pr3vzzbb6e

Nonlinear Supervised Dimensionality Reduction via Smooth Regular Embeddings [article]

Cem Ornek, Elif Vural
2018 arXiv   pre-print
The recovery of the intrinsic geometric structures of data collections is an important problem in data analysis.  ...  Supervised extensions of several manifold learning approaches have been proposed in recent years.  ...  can potentially be coupled with progressing representation learning techniques that can capture the geometric structure of data invariant to its acquisition conditions.  ...
arXiv:1710.07120v2 fatcat:hwe46a53mnbivb5cgpzpubzmlq

Intelligent Credit Assessment System by Kernel Locality Preserving Projections and Manifold-Regularized SVM Models

Shian Chang Huang
2014 International Journal of Modeling and Optimization  
Index Terms: Credit rating, dimensionality reduction, kernel locality preserving projections, subspace analysis, semi-supervised SVM.  ...  KLPP is employed to gain a perfect approximation of the data manifold and simultaneously preserve local within-class geometric structures according to prior class-label information.  ...  Graph-based nonlinear subspace learning (KLPP) and manifold-based semi-supervised SVM are more effective in rating problems.  ...
doi:10.7763/ijmo.2014.v4.405 fatcat:xqclta3bindkxeilexw7uhscru

OrthoNet: Multilayer Network Data Clustering [article]

Mireille El Gheche and Giovanni Chierchia and Pascal Frossard
2020 arXiv   pre-print
The first step aggregates the different layers of network information into a graph representation given by the geometric mean of the network Laplacian matrices.  ...  a clustering-friendly representation of the feature space.  ...  Note that many formulations have been proposed for representation learning on graphs [32] .  ... 
arXiv:1811.00821v5 fatcat:vmggeipeynfsdfaj4wolvwwz3i

Maximum Neighborhood Margin Discriminant Projection for Classification

Jianping Gou, Yongzhao Zhan, Min Wan, Xiangjun Shen, Jinfu Chen, Lan Du
2014 The Scientific World Journal  
We develop a novel maximum neighborhood margin discriminant projection (MNMDP) technique for dimensionality reduction of high-dimensional data.  ...  By maximizing the margin between intraclass and interclass neighborhoods of all points, MNMDP can not only detect the true intrinsic manifold structure of the data but also strengthen the pattern discrimination  ...  Generally, PCA aims to preserve the global geometric structure of data by maximizing the trace of the feature covariance matrix, and produces a compact representation of the original space in a low-dimensional  ...
doi:10.1155/2014/186749 pmid:24701144 pmcid:PMC3951105 fatcat:rd3wxzjed5hutmerd3tras2vwa
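The last snippet describes PCA as maximizing the trace of the feature covariance matrix. Equivalently, the first PCA direction is the dominant eigenvector of the sample covariance. A minimal stdlib-only sketch (illustrative; `top_pca_direction` and the power-iteration choice are mine, not from the paper) recovers that direction:

```python
import math

def top_pca_direction(data, iters=100):
    """Return the dominant eigenvector of the sample covariance of `data`
    (a list of equal-length tuples), found by power iteration."""
    n, d = len(data), len(data[0])
    mean = [sum(x[j] for x in data) / n for j in range(d)]
    centered = [[x[j] - mean[j] for j in range(d)] for x in data]
    # Sample covariance matrix (divide by n; n-1 would also be conventional).
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d  # arbitrary start vector
    for _ in range(iters):
        # Repeatedly apply cov and renormalize: converges to the top eigenvector.
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

# Data spread mostly along the x-axis: the top direction is close to (±1, 0).
direction = top_pca_direction([(0.0, 0.0), (1.0, 0.1), (2.0, -0.1), (3.0, 0.0)])
```

In practice one would use an eigendecomposition routine rather than hand-rolled power iteration; the sketch only makes the "maximize projected variance" view concrete.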
Showing results 1 — 15 out of 13,788 results