1,101 Hits in 3.4 sec

Non-linear Dimensionality Reduction by Locally Linear Isomaps [chapter]

Ashutosh Saxena, Abhinav Gupta, Amitabha Mukerjee
2004 Lecture Notes in Computer Science  
We demonstrate that the proposed algorithm works better than the Isomap algorithm for normal, noisy and sparse data sets.  ...  We propose a new variant of the Isomap algorithm based on local linear properties of manifolds to increase its robustness to short-circuiting.  ...  (b) Comparison of Tenenbaum's Isomap with KLL-Isomap for varying levels of sparseness.  ...
doi:10.1007/978-3-540-30499-9_161 fatcat:mruznql6gvgwjb345ssug2sdee
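For context, the standard Isomap pipeline this paper builds on can be sketched with scikit-learn (a minimal illustration only; the KLL-Isomap variant proposed in the paper is not available in sklearn):

```python
# Minimal Isomap sketch: unroll a noisy 3-D "Swiss roll" into 2-D.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=500, noise=0.05, random_state=0)
# n_neighbors controls the neighborhood graph; too large a value causes
# the "short-circuiting" failures the paper addresses.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)  # (500, 2)
```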

Spectral Methods for Dimensionality Reduction [chapter]

Saul Lawrence K., Weinberger Kilian Q., Sha Fei, Ham Jihun, Lee Daniel D.
2006 Semi-Supervised Learning  
To analyze data that lies on a low dimensional submanifold, the matrices are constructed from sparse weighted graphs whose vertices represent input patterns and whose edges indicate neighborhood relations  ...  These methods are able to reveal low dimensional structure in high dimensional data from the top or bottom eigenvectors of specially constructed matrices.  ...  for sparse matrices.  ... 
doi:10.7551/mitpress/9780262033589.003.0016 fatcat:7jiquy4jjjekdbriruxgzfe6lu
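The construction described in the snippet — a sparse weighted neighborhood graph whose bottom Laplacian eigenvectors reveal low-dimensional structure — can be sketched as follows (an illustrative Laplacian-eigenmaps-style example, not the chapter's exact formulation):

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
# Sparse graph: vertices are input patterns, edges indicate neighborhood relations.
W = kneighbors_graph(X, n_neighbors=8, mode="connectivity", include_self=False)
W = 0.5 * (W + W.T)            # symmetrize the adjacency
L = laplacian(W, normed=True)  # sparse normalized graph Laplacian
# Bottom eigenvectors of the sparse Laplacian give the embedding coordinates.
vals, vecs = np.linalg.eigh(L.toarray())
Y = vecs[:, 1:3]               # skip the first (trivial) eigenvector
print(Y.shape)
```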

Exploiting manifold geometry in hyperspectral imagery

C.M. Bachmann, T.L. Ainsworth, R.A. Fusina
2005 IEEE Transactions on Geoscience and Remote Sensing  
Using land-cover classification of hyperspectral imagery in the Virginia Coast Reserve as a test case, we show that the new manifold representation provides better separation of spectrally similar classes  ...  ISOMAP guarantees a globally optimal solution, but is computationally practical only for small datasets because of computational and memory requirements.  ...  These zones were used as seed samples for classification of the linear mixture representation and an ISOMAP coordinate representation of the tile.  ... 
doi:10.1109/tgrs.2004.842292 fatcat:mqhoprk7dffm7daplzymyymrzm

Out-of-Sample Embedding for Manifold Learning Applied to Face Recognition

F. Dornaika, B. Raducanu
2013 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops  
In this paper, we demonstrate that sparse coding theory not only serves for automatic graph reconstruction as shown in recent works, but also represents an accurate alternative for out-of-sample embedding  ...  Manifold learning techniques are affected by two critical aspects: (i) the design of the adjacency graphs, and (ii) the embedding of new test data (the out-of-sample problem).  ...  We will bypass this limitation by using the coding provided by sparse representation. We apply the sparse coding/representation principle to compute the set of coefficients W_{(N+1)i} [14].  ...
doi:10.1109/cvprw.2013.127 dblp:conf/cvpr/DornaikaR13 fatcat:7embzgl2wbcntc3nkedihio3ua

Improved Manifold Coordinate Representations of Large-Scale Hyperspectral Scenes

C.M. Bachmann, T.L. Ainsworth, R.A. Fusina
2006 IEEE Transactions on Geoscience and Remote Sensing  
Full hyperspectral scenes of O(10^6) samples or greater are obtained via a reconstruction algorithm, which allows insertion of large numbers of samples into a representative "backbone" manifold obtained  ...  The CPU time of the enhanced ISOMAP approach scales as O(N log_2(N)), where N is the number of samples, while the memory requirement is bounded by O(N log(N)).  ...  of sparse graph neighborhoods used in the geodesic distance calculation.  ...
doi:10.1109/tgrs.2006.881801 fatcat:6lvar7qherfsbfy2duqitdn4ki

Iterative Manifold Embedding Layer Learned by Incomplete Data for Large-scale Image Retrieval [article]

Jian Xu, Chunheng Wang, Chengzuo Qi, Cunzhao Shi, Baihua Xiao
2018 arXiv   pre-print
According to the original descriptors and the IME representations of database images, we estimate the weights of the IME layer by ridge regression.  ...  We embed the original descriptors of database images, which lie on a manifold in a high-dimensional space, into manifold-based representations iteratively to generate the IME representations in off-line learning  ...  ACKNOWLEDGMENT This work was supported by the National Natural Science Foundation of China under Grant 61531019, Grant 61601462, and Grant 71621002.  ...
arXiv:1707.09862v2 fatcat:7crk6zmtpzautb5vbaeikspvme

Spectral Latent Variable Models for Perceptual Inference

Atul Kanaujia, Cristian Sminchisescu, Dimitris Metaxas
2007 2007 IEEE 11th International Conference on Computer Vision  
We propose non-linear generative models referred to as Sparse Spectral Latent Variable Models (SLVM), that combine the advantages of spectral embeddings with those of parametric latent variable models  ...  Empirically, we observe that SLVMs are effective for the automatic 3D reconstruction of low-dimensional human motion in movies.  ...  (c) Reconstruction from our sparse SLVM model with associations color-coded; (d) Active (sparse) basis set with 1.6% (16) of the datapoints shown in latent space, as automatically selected by SLVM.  ...
doi:10.1109/iccv.2007.4408845 dblp:conf/iccv/KanaujiaSM07 fatcat:alca33lpzfegrdzgaz52hln5bm

Manifold Clustering of Shapes

Dragomir Yankov, Eamonn Keogh
2006 IEEE International Conference on Data Mining. Proceedings  
We further propose a modification of the Isomap projection based on the concept of degree-bounded minimum spanning trees.  ...  Here we demonstrate that a nonlinear projection algorithm such as Isomap can attract together shapes of similar objects, suggesting the existence of isometry between the shape space and a low dimensional  ...  However, if different regions have different densities, or if there is a considerable amount of noise, Isomap fails to correctly reconstruct the exact structure of the embedding.  ...
doi:10.1109/icdm.2006.101 dblp:conf/icdm/YankovK06 fatcat:4qujxr4s7vfv5mtgcdud2noiq4
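The minimum-spanning-tree building block mentioned in the snippet is readily available in scipy; a minimal sketch (plain MST, not the degree-bounded variant the paper proposes):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 2))
D = pairwise_distances(X)        # dense pairwise Euclidean distances
mst = minimum_spanning_tree(D)   # sparse result with n-1 = 29 edges
print(mst.nnz)  # 29
```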

DeepNose: Using artificial neural networks to represent the space of odorants [article]

Ngoc Tran, Daniel Kepple, Sergey A. Shuvaev, Alexei A. Koulakov
2018 bioRxiv   pre-print
We trained artificial neural networks to represent the chemical space of odorants and used that representation to predict human olfactory percepts.  ...  First, we trained an autoencoder, called DeepNose, to deduce a low-dimensional representation of odorant molecules which were represented by their 3D spatial structure.  ...  representation of odorant.  ...
doi:10.1101/464735 fatcat:s3s7knjo7vdf5eoufaeas6szhu

Image spaces and video trajectories: using Isomap to explore video sequences

2003 Proceedings Ninth IEEE International Conference on Computer Vision  
The nonlinear dimensionality reduction technique of Isomap gives, for many interesting scenes, a very low-dimensional representation of the space of possible images.  ...  Here we explore a video representation that considers a video as two parts: a space of possible images and a trajectory through that space.  ...  video representation using the (relatively) new non-linear dimensionality reduction technique of Isomap [13].  ...
doi:10.1109/iccv.2003.1238658 dblp:conf/iccv/Pless03 fatcat:tauqr7ep2rfybcralzuxxsb2ue

Increasing the Capability of Neural Networks for Surface Reconstruction from Noisy Point Clouds [article]

Adam R White, Li Bai
2018 arXiv   pre-print
This paper builds upon the current methods to increase their capability and automation for 3D surface construction from noisy and potentially sparse point clouds.  ...  It presents an analysis of an artificial neural network surface regression and mapping method, describing caveats, improvements and justification for the different approach.  ...  The hallmark of Isomap is that points are reconstructed according to their pairwise geodesic distance.  ... 
arXiv:1811.12464v2 fatcat:7palekxehjgklb7o5t6rchx7ky
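The "pairwise geodesic distance" hallmark of Isomap mentioned above is computed as shortest paths over a neighborhood graph; a minimal sketch using scipy (illustrative only, not this paper's surface-reconstruction network):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))          # stand-in for a point cloud
# Graph holding Euclidean distances between each point's k nearest neighbors.
G = kneighbors_graph(X, n_neighbors=6, mode="distance")
# Geodesic (shortest-path) distances approximate distances along the manifold.
geo = shortest_path(G, method="D", directed=False)
print(geo.shape)  # (100, 100)
```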

Localization of wireless sensors using compressive sensing for manifold learning

Chen Feng, Shahrokh Valaee, Zhenhui Tan
2009 2009 IEEE 20th International Symposium on Personal, Indoor and Mobile Radio Communications  
We represent the pair-wise distance measurement as a sparse matrix.  ...  Intersensor communication costs are reduced significantly by applying the theory of compressive sensing, which indicates that sparse signals can be recovered from far fewer samples than those needed by  ...  Measurement stage: Let D_k ∈ R^n be a sparse representation of the full pair-wise distance matrix D.  ...
doi:10.1109/pimrc.2009.5449918 dblp:conf/pimrc/FengVT09 fatcat:s5ytvrnkqvc4djg62vb3ckn6t4
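The sparse pair-wise distance matrix described in the snippet can be sketched as follows (a hedged illustration of the storage idea only, with an assumed communication radius; this is not the paper's compressive-sensing recovery algorithm):

```python
import numpy as np
from scipy.sparse import coo_matrix

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 100.0, size=(50, 2))  # hypothetical sensor positions
full = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
radius = 30.0  # assumed sensing/communication range
# Keep only the measurable short-range distances, stored sparsely.
i, j = np.nonzero((full > 0) & (full < radius))
D = coo_matrix((full[i, j], (i, j)), shape=full.shape).tocsr()
print(D.nnz, "of", full.size, "entries stored")
```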

Nonlinear Dimensionality Reduction Using Circuit Models [chapter]

Fredrik Andersson, Jens Nilsson
2005 Lecture Notes in Computer Science  
The property of global approximation sets Isomap in contrast to many competing methods, which approximate only locally.  ...  A serious drawback of Isomap is that it is topologically unstable, i.e., that incorrectly chosen algorithm parameters or perturbations of data may abruptly alter the resulting configurations.  ...  The research of J.N. was partially supported by the Swedish Knowledge Foundation (KKstiftelsen), and AstraZeneca.  ...
doi:10.1007/11499145_96 fatcat:cnttf2ijhza3ngsggo22dprc6e

A duality view of spectral methods for dimensionality reduction

Lin Xiao, Jun Sun, Stephen Boyd
2006 Proceedings of the 23rd international conference on Machine learning - ICML '06  
In particular, it dispels the apparent dichotomy between these methods' use of either the top eigenvectors of a dense matrix or the bottom eigenvectors of a sparse matrix: the two eigenspaces are exactly aligned  ...  We present a unified duality view of several recently emerged spectral methods for nonlinear dimensionality reduction, including Isomap, locally linear embedding, Laplacian eigenmaps, and maximum variance  ...  Part of this work was done when Lin Xiao was on a supported visit at the Institute for Mathematical Sciences, National University of Singapore.  ...
doi:10.1145/1143844.1143975 dblp:conf/icml/XiaoSB06 fatcat:vs2xqraw25atroonnhjnukjna4

Software defect prediction based on non-linear manifold learning and hybrid deep learning techniques

Kun Zhu, Nana Zhang, Qing Zhang, Shi Ying, Xu Wang
2020 Computers Materials & Continua  
The experimental results verify the superiority of SL-Isomap and DLDD on four evaluation indicators.  ...  We compare the SL-Isomap with seven state-of-the-art feature extraction methods and compare the DLDD model with six baseline models across 20 open source software projects.  ...  to learn the reconstructed distribution and more robust feature representation by changing the reconstruction error term, and utilizes deep neural network to learn the abstract deep semantic features.  ...
doi:10.32604/cmc.2020.011415 fatcat:plovojm7kvgsjkwmrl46shcwi4