83,319 Hits in 4.5 sec

Learning principal directions: Integrated-squared-error minimization

Jong-Hoon Ahn, Jong-Hoon Oh, Seungjin Choi
2007 Neurocomputing  
In this paper, we introduce and investigate an alternative error measure, integrated-squared-error (ISE), the minimization of which determines the exact principal axes (without rotational ambiguity) of  ...  We show that exact principal directions emerge from the minimization of ISE.  ...  We have shown that exact principal directions of a set of observed data emerged through integrated-squared-error minimization and have presented simple but efficient EM algorithms.  ... 
doi:10.1016/j.neucom.2006.06.004 fatcat:ewulgufwcngo7p67kpfoxr7sgm
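
A minimal sketch of what "exact principal directions" means here, for reference only: the targets of ISE minimization are the eigenvectors of the sample covariance, computed directly below. The paper's coupled EM updates are not reproduced; the data is synthetic.

```python
import numpy as np

# Reference computation (not the paper's EM algorithm): the "exact principal
# directions" recovered by ISE minimization are the eigenvectors of the
# sample covariance matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])

Xc = X - X.mean(axis=0)                 # center the data
C = Xc.T @ Xc / (len(X) - 1)            # sample covariance
eigvals, eigvecs = np.linalg.eigh(C)    # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
principal_axes = eigvecs[:, order[:2]]  # top-2 principal directions, no rotation ambiguity
print(principal_axes)
```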

Principal manifold learning by sparse grids

Christian Feuersänger, Michael Griebel
2009 Computing  
Here, we consider principal manifolds as the minimum of a regularized, non-linear empirical quantization error functional.  ...  The arising non-linear problem is solved by a descent method which resembles the expectation maximization algorithm.  ...  ∫ ḟ₁²(t) + ⋯ + ḟₙ²(t) dt is an integral over the squared speed of the curve f.  ... 
doi:10.1007/s00607-009-0045-8 fatcat:xewaqii2t5cjzclxmczpl37jey
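
A minimal sketch of the (unregularized) empirical quantization error for a curve represented by discrete nodes; the data, node layout, and function name are hypothetical, and the smoothness penalty (the squared-speed integral above) is omitted.

```python
import numpy as np

# Empirical quantization error: mean squared distance from each sample to
# its nearest point on the (discretized) principal curve.
def quantization_error(X, nodes):
    d2 = ((X[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((300, 2))
s = np.linspace(0, 2 * np.pi, 20)
nodes = np.c_[np.cos(s), np.sin(s)]          # discretized candidate curve
print(quantization_error(X, nodes))
```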

Principal Components Analysis Competitive Learning

Ezequiel López-Rubio, Juan Miguel Ortiz-de-Lazcano-Lobato, José Muñoz-Pérez, José Antonio Gómez-Ruiz
2004 Neural Computation  
We present a new neural model that extends the classical competitive learning by performing a principal components analysis (PCA) at each neuron.  ...  Note that if Z = 0, we get the maximum possible mean squared error (Eq. 2.14).  ...  The PCA methods overcome this problem by obtaining the principal directions of the data, that is, the maximum variance directions (Kendall, 1975; Jolliffe, 1986).  ... 
doi:10.1162/0899766041941880 pmid:15476607 fatcat:rgdiiocxjrdkhbmn5ec2p47zza
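
A hedged sketch of the idea of competitive learning with a per-neuron PCA, using Oja's rule for a one-component local PCA; this illustrates the concept, not the paper's exact update equations, and all names and rates are hypothetical.

```python
import numpy as np

# Competitive learning where each neuron keeps a mean and one principal
# vector of the samples it wins, updated online via Oja's rule.
rng = np.random.default_rng(2)
X = rng.standard_normal((1000, 4))
K, eta = 3, 0.01
centers = rng.standard_normal((K, 4))
w = rng.standard_normal((K, 4))
w /= np.linalg.norm(w, axis=1, keepdims=True)        # one unit vector per neuron

for x in X:
    k = np.argmin(((x - centers) ** 2).sum(axis=1))  # winner by distance
    centers[k] += eta * (x - centers[k])             # move winner's mean
    d = x - centers[k]
    y = w[k] @ d
    w[k] += eta * y * (d - y * w[k])                 # Oja's rule: local first PC
```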

Statistical properties of kernel principal component analysis

Gilles Blanchard, Olivier Bousquet, Laurent Zwald
2006 Machine Learning  
The main goal of this paper is to prove inequalities on the reconstruction error for kernel principal component analysis.  ...  We also obtain a new relative bound on the error.  ...  ., which minimizes the error (measured through the averaged squared Hilbert norm) of approximating the data by their projections.  ... 
doi:10.1007/s10994-006-6895-9 fatcat:bi6jg755i5gxpl6ifdorb7434y
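
A short sketch of the quantity being bounded: the empirical reconstruction error of projecting onto the top-d kernel principal components equals the average of the discarded eigenvalues of the centered Gram matrix. The kernel, bandwidth, and data below are assumptions for illustration.

```python
import numpy as np

# Kernel PCA reconstruction error, measured as the averaged squared
# Hilbert-norm error of approximating feature vectors by their projections.
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 3))
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)                    # Gaussian kernel, sigma^2 = 1 (assumed)
n = len(K)
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H                           # center in feature space
lam = np.sort(np.linalg.eigvalsh(Kc))[::-1]
d = 5
recon_error = lam[d:].sum() / n          # mean of discarded eigenvalues
print(recon_error)
```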

Penalized Preimage Learning in Kernel Principal Component Analysis

Wei-Shi Zheng, JianHuang Lai, Pong C. Yuen
2010 IEEE Transactions on Neural Networks  
Experimental results show that the proposed preimage learning algorithm obtains a lower mean square error (MSE) and better visual quality of reconstructed images.  ...  Second, a penalized function is integrated as part of the optimization function to guide the preimage learning process.  ...  [5] first reported the concept of the preimage and proposed an iterative method to determine the preimage by minimizing a least-squares distance error. This work laid a foundation for preimage learning.  ... 
doi:10.1109/tnn.2009.2039647 pmid:20144918 fatcat:3stoilsszrfrpoppf7ggndwfqe
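
For context, the classical fixed-point preimage iteration for a Gaussian kernel (the iterative least-squares method the snippet attributes to [5]) is sketched below; the expansion coefficients `gamma`, the bandwidth, and the demo data are assumptions.

```python
import numpy as np

# Fixed-point pre-image iteration for a Gaussian kernel: find z whose
# feature image best matches a feature-space point given by coefficients gamma.
def preimage(X, gamma, sigma2=1.0, iters=50):
    z = X[np.argmax(gamma)].copy()               # initialize at a training point
    for _ in range(iters):
        w = gamma * np.exp(-((X - z) ** 2).sum(1) / (2 * sigma2))
        if abs(w.sum()) < 1e-12:
            break
        z = (w[:, None] * X).sum(0) / w.sum()    # fixed-point update
    return z

X = np.random.default_rng(0).standard_normal((50, 2))
print(preimage(X, np.full(50, 1 / 50)))          # toy usage
```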

A Novel Regularized Principal Graph Learning Framework on Explicit Graph Representation [article]

Qi Mao, Li Wang, Ivor W. Tsang, Yijun Sun
2016 arXiv   pre-print
As showcases, models that can learn a spanning tree or a weighted undirected ℓ_1 graph are proposed, and a new learning algorithm is developed that learns a set of principal points and a graph structure  ...  To address these issues, we develop a new regularized principal graph learning framework that captures the local information of the underlying graph structure based on reversed graph embedding.  ...  The empirical quantization error [36] is widely used as the fitting criterion to be minimized for the optimal cluster centroids, and it is also frequently employed in principal curve learning methods  ... 
arXiv:1512.02752v2 fatcat:r6nnt6stw5h7ximaris667vja4
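
A minimal sketch of the spanning-tree showcase: given a set of principal points, connect them by a tree of minimum total edge length. The framework learns the points and the structure jointly; here the points are fixed synthetic data for illustration.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

# Minimum spanning tree over a set of (given) principal points.
points = np.random.default_rng(4).standard_normal((30, 2))
D = squareform(pdist(points))           # pairwise Euclidean distances
tree = minimum_spanning_tree(D)         # sparse matrix of tree edges
edges = np.transpose(tree.nonzero())
print(edges[:5])                        # first few tree edges (i, j)
```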

Exploring dimension learning via a penalized probabilistic principal component analysis [article]

Wei Q. Deng, Radu V. Craiu
2022 arXiv   pre-print
of dimension in finite samples as a constrained optimization problem, where the estimated dimension is a maximizer of a penalized profile likelihood criterion within the framework of a probabilistic principal  ...  Establishing a low-dimensional representation of the data leads to efficient data learning strategies. In many cases, the reduced dimension needs to be explicitly stated and estimated from the data.  ...  In this case, the stopping rule based on a single threshold could be useful for recovering the original data in the sense of asymptotic mean squared error, but does not directly inform the minimal rank  ... 
arXiv:1803.07548v3 fatcat:y3p4odga6ndb5fgfxsm3hk5atq
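
A hedged sketch of dimension selection by a penalized profile likelihood under the PPCA model; the standard profile log-likelihood is used, but the penalty below is a generic BIC-style stand-in, not the paper's penalty, and the data is synthetic.

```python
import numpy as np

# PPCA profile log-likelihood for candidate dimension q, given the sorted
# sample covariance eigenvalues; noise variance is the mean of the tail.
def profile_loglik(eigvals, q, n):
    p = len(eigvals)
    sigma2 = eigvals[q:].mean()
    return -(n / 2) * (np.log(eigvals[:q]).sum()
                       + (p - q) * np.log(sigma2)
                       + p * (1 + np.log(2 * np.pi)))

rng = np.random.default_rng(5)
X = rng.standard_normal((500, 10)) @ np.diag([3, 2.5, 2, 1, 1, 1, 1, 1, 1, 1])
ev = np.sort(np.linalg.eigvalsh(np.cov(X.T)))[::-1]
n = len(X)
scores = [profile_loglik(ev, q, n) - 0.5 * q * np.log(n)   # placeholder penalty
          for q in range(1, 9)]
print(1 + int(np.argmax(scores)))                          # estimated dimension
```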

Modeling motor learning using heteroskedastic functional principal components analysis

Daniel Backenroth, Jeff Goldsmith, Michelle D. Harran, Juan C. Cortes, John W. Krakauer, Tomoko Kitago
2017 Journal of the American Statistical Association  
We extend the functional principal components analysis framework by modeling the variance of principal component scores as a function of covariates and subject-specific random effects.  ...  Our work is motivated by a novel dataset from an experiment assessing upper extremity motor control, and quantifies the reduction in motion variance associated with skill learning.  ...  The bottom row shows integrated squared errors (ISEs) for the FPCs across each possible J i .  ... 
doi:10.1080/01621459.2017.1379403 pmid:30416231 pmcid:PMC6223649 fatcat:h6kkd4v2bjav5n3ik7kyrqyzgu
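
A short sketch of the integrated squared error (ISE) between an estimated and a reference functional principal component, with the usual sign alignment; the eigenfunction and noise level are illustrative.

```python
import numpy as np

# ISE between an estimated FPC and the truth, integrated over [0, 1].
t = np.linspace(0, 1, 201)
phi_true = np.sqrt(2) * np.sin(np.pi * t)        # example eigenfunction
phi_hat = phi_true + 0.05 * np.random.default_rng(6).standard_normal(t.size)

if np.sum(phi_hat * phi_true) < 0:               # FPCs are sign-invariant
    phi_hat = -phi_hat
dt = t[1] - t[0]
ise = np.sum((phi_hat - phi_true) ** 2) * dt     # Riemann approximation
print(ise)
```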

Principal Graph and Structure Learning Based on Reversed Graph Embedding

Qi Mao, Li Wang, Ivor W. Tsang, Yijun Sun
2017 IEEE Transactions on Pattern Analysis and Machine Intelligence  
As showcases, models that can learn a spanning tree or a weighted undirected ℓ_1 graph are proposed, and a new learning algorithm is developed that learns a set of principal points and a graph structure  ...  To address these issues, we develop a novel principal graph and structure learning framework that captures the local information of the underlying graph structure based on reversed graph embedding.  ...  Instead of learning directed graphs by using the above two methods, an integrated model for learning an undirected graph by imposing a sparsity penalty on a symmetric similarity matrix and a positive semi-definite  ... 
doi:10.1109/tpami.2016.2635657 pmid:28114001 pmcid:PMC5899072 fatcat:ashwrjrr6nedpn422d2uxalbzy

Can we imitate the principal investor's behavior to learn option price? [article]

Xin Jin
2022 arXiv   pre-print
Eventually the optimal option price is learned by reinforcement learning to maximize the cumulative risk-adjusted return of a dynamically hedged portfolio over simulated price paths.  ...  This paper presents a framework of imitating the principal investor's behavior for optimal pricing and hedging options.  ...  In such a manner, the Maximum A Posteriori (MAP) estimate θ̂ = arg max_θ P(θ | D) is equivalent to minimizing the mean squared error.  ... 
arXiv:2105.11376v2 fatcat:ky6r72yiqnc35jyv7dhfaabbze
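
For reference, the equivalence the snippet invokes holds under a Gaussian likelihood and a flat prior; a one-line derivation with those assumptions stated inline:

```latex
% MAP reduces to least squares under Gaussian noise and a flat prior:
\hat{\theta} = \arg\max_\theta P(\theta \mid D)
             = \arg\max_\theta \bigl[\log P(D \mid \theta) + \log P(\theta)\bigr]
             = \arg\min_\theta \sum_i \bigl(y_i - f_\theta(x_i)\bigr)^2,
\quad \text{if } y_i = f_\theta(x_i) + \varepsilon_i,\;
\varepsilon_i \sim \mathcal{N}(0,\sigma^2),\; P(\theta)\ \text{flat}.
```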

Principal Component Flows

Edmond Cunningham, Adam D. Cobb, Susmit Jha
2022 International Conference on Machine Learning  
In our experiments we show that PCFs and iPCFs are able to learn the principal manifolds over a variety of datasets.  ...  We introduce a novel class of normalizing flows, called principal component flows (PCF), whose contours are its principal manifolds, and a variant for injective flows (iPCF) that is more efficient to train  ...  The principal manifold of a flow is found by integrating along the direction of a principal component: dx(t)/dt = w_K(x(t)) (Eq. 69). Lemma 3 tells us that the contours of a PCF are locally spanned by the  ... 
dblp:conf/icml/CunninghamCJ22 fatcat:ol3r2gic4jacpdxhzjyyd2zvba
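
A loose sketch of integrating dx/dt = w_K(x) with Euler steps; purely for illustration, w_K(x) is approximated here by the leading principal direction of the data points nearest x, whereas the paper derives it from the flow itself.

```python
import numpy as np

# Trace a curve by Euler-integrating along a locally estimated principal
# direction (a stand-in for the flow-derived field w_K).
def leading_direction(X, x, k=20):
    nbrs = X[np.argsort(((X - x) ** 2).sum(1))[:k]]
    C = np.cov((nbrs - nbrs.mean(0)).T)
    vals, vecs = np.linalg.eigh(C)
    return vecs[:, -1]                        # top eigenvector of local covariance

rng = np.random.default_rng(7)
X = rng.standard_normal((1000, 2)) @ np.array([[2.0, 0.0], [0.0, 0.3]])
x, h, prev = X[0].copy(), 0.05, None
path = [x.copy()]
for _ in range(100):
    v = leading_direction(X, x)
    if prev is not None and v @ prev < 0:     # keep a consistent orientation
        v = -v
    x, prev = x + h * v, v
    path.append(x.copy())
```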

Metastimuli for Human Learning via Machine Learning and Principal Component Analysis of Personal Information Graphs

Dane Webb, Rico Picone, Rico A. R. Picone, Daniel R Einstein, Frank Washko
2021 Zenodo  
Using principal component analysis, the dimensionality of the PIMS graph is reduced to m and the graph is represented in R^m, where m is also the actuator dimensionality.  ...  Studies on human learning have provided evidence that additional modalities of information transfer during the learning process improves human learning rates.  ...  The mean squared error (MSE) is one of several regression loss functions.  ... 
doi:10.5281/zenodo.4771324 fatcat:h5g2g73divevbaawfciqtiue2e
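
The MSE loss mentioned in the snippet, in its standard form; the demo values are arbitrary.

```python
import numpy as np

# Mean squared error: average of squared residuals.
def mse(y_true, y_pred):
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

print(mse([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))   # 0.02
```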

Fast Estimation of Information Theoretic Learning Descriptors using Explicit Inner Product Spaces [article]

Kan Li, Jose C. Principe
2020 arXiv   pre-print
Kernel methods form a theoretically-grounded, powerful and versatile framework to solve nonlinear problems in signal processing and machine learning.  ...  An RKHS for ITL defined on a space of probability density functions simplifies statistical inference for supervised or unsupervised learning.  ...  Gaussian Quadrature (GQ) Features with Subsampled Grids: A quadrature rule is a choice of points ω_i and weights a_i to minimize the maximum error ε.  ... 
arXiv:2001.00265v1 fatcat:3aqtnnh43jcgfllhyedqkf3mym
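
A small sketch of the quadrature idea: approximate the Gaussian kernel's spectral expectation with Gauss-Hermite nodes and weights instead of Monte Carlo samples. This uses the standard identity E_{w~N(0,1)}[cos(w·δ)] = exp(-δ²/2); the paper's subsampled-grid construction is not reproduced.

```python
import numpy as np

# Gauss-Hermite quadrature: nodes/weights for integrals of the form
# ∫ e^{-t^2} g(t) dt, used here to approximate a Gaussian kernel value.
nodes, weights = np.polynomial.hermite.hermgauss(16)
delta = 0.7
approx = (weights * np.cos(np.sqrt(2) * nodes * delta)).sum() / np.sqrt(np.pi)
print(approx, np.exp(-delta ** 2 / 2))   # the two should agree closely
```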

Quantitative comparison of principal component analysis and unsupervised deep learning using variational autoencoders for shape analysis of motile cells [article]

Caleb K. Chan, Amalia Hadjitheodorou, Tony Y.-C. Tsai, Julie A. Theriot
2020 bioRxiv   pre-print
We were able to decompose these complex shapes into low-dimensional encodings with both principal component analysis (PCA) and an unsupervised deep learning technique using variational autoencoders (VAE  ...  Contrary to the conventional viewpoint that the latent space is a "black box", we demonstrated that the information learned and encoded within the latent space is consistent with PCA and is reproducible  ...  for root mean squared 864 error (RMS) between predicted and actual cell areas.  ... 
doi:10.1101/2020.06.26.174474 fatcat:ovabyy3vj5eoteyrr7m3wkhdza
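
A minimal sketch of the PCA half of such a comparison: encode shape vectors into a low-dimensional space, reconstruct, and report RMSE. The `shapes` array is a synthetic stand-in; the study used segmented cell outlines.

```python
import numpy as np
from sklearn.decomposition import PCA

# PCA encode/decode of shape vectors, scored by reconstruction RMSE.
rng = np.random.default_rng(8)
shapes = rng.standard_normal((300, 100))     # e.g. 50 boundary points as (x, y)
pca = PCA(n_components=10).fit(shapes)
recon = pca.inverse_transform(pca.transform(shapes))
rmse = np.sqrt(np.mean((shapes - recon) ** 2))
print(rmse)
```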

Prediction of carbon dioxide emissions based on principal component analysis with regularized extreme learning machine: The case of China

Wei Sun, Jingyi Sun
2017 Environmental Engineering Research  
This paper proposes a novel hybrid model combining principal component analysis (PCA) with a regularized extreme learning machine (RELM) to predict CO2 emissions based on data from 1978 to 2014  ...  According to the modeling results, the proposed model outperforms a single RELM model, an extreme learning machine (ELM), a back propagation neural network (BPNN), GM(1,1) and a Logistic model in terms of errors  ...  (RE), mean absolute percentage error (MAPE), maximum absolute percentage error (MaxAPE), median absolute percentage error (MdAPE) and root mean square error (RMSE).  ... 
doi:10.4491/eer.2016.153 fatcat:zhvbck3bivhjzl7fyhtbljurya
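
A hedged sketch of a PCA-RELM pipeline: PCA compresses the input factors, then a regularized extreme learning machine (random hidden layer with ridge-regularized output weights) fits the target. The data, component count, layer width L, and ridge strength λ are all hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

# PCA preprocessing followed by a regularized ELM (random features + ridge).
rng = np.random.default_rng(9)
X, y = rng.standard_normal((37, 8)), rng.standard_normal(37)   # e.g. yearly factors

Z = PCA(n_components=3).fit_transform(X)            # principal components
L, lam = 50, 1e-2                                   # hidden nodes, ridge strength
W, b = rng.standard_normal((Z.shape[1], L)), rng.standard_normal(L)
H = np.tanh(Z @ W + b)                              # random hidden layer
beta = np.linalg.solve(H.T @ H + lam * np.eye(L), H.T @ y)   # RELM output weights
y_hat = H @ beta
print(np.sqrt(np.mean((y - y_hat) ** 2)))           # in-sample RMSE
```
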
Showing results 1 — 15 out of 83,319 results