66,687 Hits in 6.4 sec

Page 6648 of Mathematical Reviews Vol. , Issue 94k [page]

1994 Mathematical Reviews  
This extends our earlier work on preconditioners for Toeplitz least squares iterations for one-dimensional problems.  ...  We consider solving such block least squares problems by the preconditioned conjugate gradient algorithm using square nonsingular circulant-block and related preconditioners, constructed from the blocks  ...
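The preconditioned conjugate gradient approach named in this snippet can be sketched compactly. Below is a minimal numpy illustration, assuming a symmetric positive-definite Toeplitz system and Strang's circulant preconditioner applied via the FFT; the paper's block least squares setting is more general, and all function names here are illustrative.

```python
import numpy as np

def toeplitz_matvec(t_col, x):
    # Dense Toeplitz matvec for clarity; production codes use
    # FFT-based multiplication in O(n log n).
    n = len(x)
    T = np.array([[t_col[abs(i - j)] for j in range(n)] for i in range(n)])
    return T @ x

def strang_preconditioner(t_col):
    # Strang's circulant copies the central diagonals of T; a circulant
    # is diagonalized by the FFT, so applying its inverse is cheap.
    n = len(t_col)
    c = np.array([t_col[k] if k <= n // 2 else t_col[n - k] for k in range(n)])
    eig = np.fft.fft(c)  # eigenvalues of the circulant
    return lambda r: np.real(np.fft.ifft(np.fft.fft(r) / eig))

def pcg(t_col, b, tol=1e-10, maxiter=200):
    # Preconditioned conjugate gradients for T x = b.
    M_inv = strang_preconditioner(t_col)
    x = np.zeros_like(b)
    r = b - toeplitz_matvec(t_col, x)
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = toeplitz_matvec(t_col, p)
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Because the preconditioned matrix has clustered eigenvalues, CG typically converges in far fewer iterations than on the raw Toeplitz system.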

Cross-Modal Learning via Pairwise Constraints [article]

Ran He and Man Zhang and Liang Wang and Ye Ji and Qiyue Yin
2014 arXiv   pre-print
We first propose a compound regularization framework to deal with the pairwise constraint, which can be used as a general platform for developing cross-modal algorithms.  ...  This paper studies cross-modal learning via the pairwise constraint, and aims to find the common structure hidden in different modalities.  ...  This ℓ21-norm can be viewed as an extension of the ℓ1-norm in sparse multiview co-regularized least squares [10].  ...
arXiv:1411.7798v1 fatcat:pp77pnvwmvftnkrwql5gu4my34
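The ℓ21-norm mentioned in the snippet is simply the sum of row-wise Euclidean norms. A minimal numpy sketch (the example matrices are hypothetical, not from the paper):

```python
import numpy as np

def l21_norm(W):
    # Sum of the Euclidean norms of the rows of W. Penalizing this sum
    # drives entire rows to zero, selecting features jointly across
    # the columns (e.g. modalities or views).
    return np.sum(np.sqrt(np.sum(W ** 2, axis=1)))
```

For a single-column matrix it reduces to the ordinary ℓ1-norm, which is the sense in which the snippet calls it an extension.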

Nonlinear joint latent variable models and integrative tumor subtype discovery

Binghui Liu, Xiaotong Shen, Wei Pan
2016 Statistical analysis and data mining  
Most existing integrative analysis methods are based on joint latent variable models, which are generally divided into two classes: joint factor analysis and joint mixture modeling, with continuous and  ...  We propose a method, called integrative and regularized generative topographic mapping (irGTM), to perform simultaneous dimension reduction across multiple types of data while achieving feature selection  ...  [8], naive integration of PCA, partial least squares regression of ref. [23], co-inertia analysis of ref. [24], and canonical correlation analysis of [25].  ...
doi:10.1002/sam.11306 pmid:29333206 pmcid:PMC5761081 fatcat:rx244ipogne2fmwfhbqxuwzqqm

Unsupervised Machine Learning of Quenched Gauge Symmetries: A Proof-of-Concept Demonstration [article]

Daniel Lozano-Gómez, Darren Pereira, Michel J. P. Gingras
2020 arXiv   pre-print
We demonstrate the ability of an unsupervised machine learning protocol, the Principal Component Analysis method, to detect hidden quenched gauge symmetries introduced via the so-called Mattis gauge transformation  ...  As in [22, 24], we first perform PCA on the full dataset {{cos(φ_i)}, {sin(φ_i)}} generated from MC simulations.  ...  The clusters identified by PCA for the regular Ising model are shown in Fig. 8. For the regular XY model, PCA is applied to either the X dataset ({cos(φ_i)}) or the Y dataset ({sin(φ_i)}).  ...
arXiv:2003.00039v1 fatcat:nqewh7emwzfsbdjhcfeg3ak2qm
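The PCA-on-{cos, sin} featurization described in the snippet is easy to reproduce on synthetic data. The sketch below substitutes a toy generator for the paper's Monte Carlo configurations: each "spin configuration" is a common random angle plus small per-site noise, so the cos/sin features concentrate near a circle and two principal components dominate.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in for MC-generated XY configurations: each sample
# is a set of n_spins angles phi_i, featurized as cos/sin pairs.
n_samples, n_spins = 200, 16
phi = rng.uniform(0, 2 * np.pi, size=(n_samples, 1)) \
    + 0.05 * rng.standard_normal((n_samples, n_spins))
X = np.hstack([np.cos(phi), np.sin(phi)])  # shape (n_samples, 2 * n_spins)

# PCA via eigendecomposition of the sample covariance.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (n_samples - 1)
evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
explained = evals[::-1] / evals.sum()       # descending variance ratios
```

The two leading components capture the cos and sin directions of the common angle, mirroring how PCA exposes the relevant order-parameter structure in the paper's setting.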

Scalable and interpretable product recommendations via overlapping co-clustering [article]

Reinhard Heckel, Michail Vlachos, Thomas Parnell, Celestine Dünner
2017 arXiv   pre-print
We consider the problem of generating interpretable recommendations by identifying overlapping co-clusters of clients and products, based only on positive or implicit feedback.  ...  Our approach is applicable to very large datasets because it exhibits almost linear complexity in the input examples and the number of co-clusters.  ...  A common approach to solve the corresponding optimization problem is weighted alternating least squares (wALS) [29].  ...
arXiv:1604.02071v2 fatcat:d3nph7gc35gnda65dhqmkd2l7i
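Weighted alternating least squares, cited in the last snippet, alternates closed-form ridge solves for the user and item factors under entrywise confidence weights. A minimal dense numpy sketch (function and variable names are illustrative; real implementations exploit sparsity):

```python
import numpy as np

def wals(R, W, k=2, lam=0.1, iters=20, seed=0):
    # Weighted ALS for R ≈ U V^T: each row of U (resp. V) is the
    # solution of a small ridge system weighted by the confidences W.
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = 0.1 * rng.standard_normal((m, k))
    V = 0.1 * rng.standard_normal((n, k))
    for _ in range(iters):
        for i in range(m):
            Wi = np.diag(W[i])
            U[i] = np.linalg.solve(V.T @ Wi @ V + lam * np.eye(k),
                                   V.T @ Wi @ R[i])
        for j in range(n):
            Wj = np.diag(W[:, j])
            V[j] = np.linalg.solve(U.T @ Wj @ U + lam * np.eye(k),
                                   U.T @ Wj @ R[:, j])
    return U, V
```

Each alternating step minimizes the weighted regularized objective exactly in one factor, so the objective is non-increasing across sweeps.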

Is depression associated with health risk-related behaviour clusters in adults?

P. Verger, C. Lions, B. Ventelou
2009 European Journal of Public Health  
A cluster analysis of various HRBs (tobacco use, alcohol use, binge drinking, physical inactivity, certain eating habits) was used to study their co-occurrence.  ...  For Cluster 3, no association was found: OR 1.01 (95% CI 0.84-1.21). Conclusions: HRBs tend to co-occur in the general population, more frequently in cases of probable depression.  ...  A dichotomous indicator for alcohol use and binge drinking was constructed: consumption of alcohol at least 4 or 5 times a week (regular users) vs. less, consumption of at least six glasses on the same  ...
doi:10.1093/eurpub/ckp057 pmid:19403786 fatcat:qmewnfba5nde7jezxnu6pxybye

Multitask Diffusion Adaptation Over Asynchronous Networks

Roula Nassif, Cedric Richard, Andre Ferrari, Ali H. Sayed
2016 IEEE Transactions on Signal Processing  
Index Terms-Distributed optimization, asynchronous networks, diffusion adaptation, multitask learning, mean-square performance analysis.  ...  In this paper, we describe a model for the solution of multitask problems over asynchronous networks and carry out a detailed mean and mean-square error analysis.  ...  We introduced a general model for asynchronous behavior with random step-sizes, combination coefficients, and co-regularization factors.  ...
doi:10.1109/tsp.2016.2518991 fatcat:kmqmxpei55grlok2dmmkvtcgyu

High-Order Co-Clustering via Strictly Orthogonal and Symmetric L1-Norm Nonnegative Matrix Tri-Factorization

Kai Liu, Hua Wang
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
Different from traditional clustering methods that deal with one single type of data, High-Order Co-Clustering (HOCC) aims to cluster multiple types of data simultaneously by utilizing the inter- or/and  ...  Thus we derive the solution algorithm using the alternating direction method of multipliers.  ...  High-Order Co-Clustering via Graph Regularized Symmetric ℓ1-Norm NMTF Throughout this paper, we use A(ij) to denote the entry at the i-th row and j-th column of a matrix A.  ...
doi:10.24963/ijcai.2018/340 dblp:conf/ijcai/LiuW18 fatcat:rhjgljmwdbaono3kmkpn5alayq

A Selective Review of Multi-Level Omics Data Integration Using Variable Selection

Cen Wu, Fei Zhou, Jie Ren, Xiaoxi Li, Yu Jiang, Shuangge Ma
2019 High-Throughput  
High-throughput technologies have been used to generate a large amount of omics data.  ...  single level analysis.  ...  In addition, K-means clustering is a popular method for conducting clustering analysis and can be viewed as minimizing the within-cluster sum of squares (WCSS).  ...
doi:10.3390/ht8010004 pmid:30669303 pmcid:PMC6473252 fatcat:6p5b3c7jlzb2bd56fky7pl3bxe
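The WCSS view of K-means mentioned in the snippet corresponds directly to Lloyd's algorithm, whose two alternating steps each weakly decrease the within-cluster sum of squares. A minimal numpy sketch (illustrative, not from the paper):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Lloyd's algorithm: assign each point to its nearest centroid,
    # then recompute centroids as cluster means; neither step can
    # increase the within-cluster sum of squares (WCSS).
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    wcss = ((X - centers[labels]) ** 2).sum()
    return labels, centers, wcss
```

The returned `wcss` is exactly the objective the snippet refers to; comparing it across runs or values of `k` is a common model-selection heuristic.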

Robust PCA as Bilinear Decomposition With Outlier-Sparsity Regularization

Gonzalo Mateos, Georgios B. Giannakis
2012 IEEE Transactions on Signal Processing  
A least-trimmed squares estimator of a low-rank bilinear factor analysis model is shown closely related to that obtained from an ℓ_0-(pseudo)norm-regularized criterion encouraging sparsity in a matrix  ...  Outliers are identified by tuning a regularization parameter, which amounts to controlling sparsity of the outlier matrix along the whole robustification path of (group) least-absolute shrinkage and selection  ...  A natural least-trimmed squares (LTS) PCA estimator is first shown closely related to an estimator obtained from an ℓ_0-(pseudo)norm-regularized criterion, adopted to fit a low-rank bilinear factor analysis  ...
doi:10.1109/tsp.2012.2204986 fatcat:up4cvmexj5f6bclc64bvsrwtxa
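The low-rank-plus-sparse-outlier decomposition described in this entry can be illustrated with a simplified alternation: a hard rank-r fit of X - O followed by entrywise soft-thresholding of the residual, which sparsifies the outlier matrix O. This is a hedged sketch of the general idea, not the authors' exact (group-)lasso robustification-path algorithm.

```python
import numpy as np

def soft(X, t):
    # Entrywise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def robust_pca(X, rank, lam, iters=50):
    # Alternate: (1) best rank-r approximation of X - O via SVD;
    # (2) soft-threshold the residual X - L to update the outlier matrix O.
    O = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - O, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        O = soft(X - L, lam)
    return L, O
```

Larger `lam` yields a sparser O (fewer flagged outliers), which mirrors the paper's point that tuning the regularization parameter controls the outlier sparsity along the robustification path.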

Optimal reverse prediction

Linli Xu, Martha White, Dale Schuurmans
2009 Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09  
In particular, we show how supervised least squares, principal components analysis, k-means clustering and normalized graph-cut can all be expressed as instances of the same training principle.  ...  These algorithms can all be combined with standard regularizers and made non-linear via kernels.  ...  We then present the unification of supervised least squares with principal components analysis, k-means clustering and normalized graph-cut.  ... 
doi:10.1145/1553374.1553519 dblp:conf/icml/XuWS09 fatcat:o6x6dkn6mzas7l3ewdicggdolm

Predicting Personality from Book Preferences with User-Generated Content Labels [article]

Ng Annalyn, Maarten W. Bos, Leonid Sigal, Boyang Li
2017 arXiv   pre-print
Moreover, user-generated tag labels reveal unexpected insights, such as cultural differences, book reading behaviors, and other non-content factors affecting preferences.  ...  For each personality trait, we use the regularization coefficient yielding the lowest mean squared test error from a 10-fold cross-validation.  ...  Data collected via Facebook has been shown to be comparable to data collected via standalone websites [17].  ...
arXiv:1707.06643v1 fatcat:mbyo45uvjzcevftokn7vdyhh44
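Selecting the regularization coefficient by the lowest mean squared test error over 10-fold cross-validation, as the snippet describes, can be sketched as follows (closed-form ridge regression; all names are illustrative):

```python
import numpy as np

def ridge_fit(X, y, alpha):
    # Closed-form ridge solution: (X^T X + alpha I)^{-1} X^T y.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def cv_select_alpha(X, y, alphas, k=10, seed=0):
    # Return the alpha with the lowest mean squared test error
    # averaged over k cross-validation folds.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    mse = []
    for a in alphas:
        errs = []
        for f in folds:
            train = np.setdiff1d(idx, f)
            w = ridge_fit(X[train], y[train], a)
            errs.append(np.mean((X[f] @ w - y[f]) ** 2))
        mse.append(np.mean(errs))
    return alphas[int(np.argmin(mse))]
```

With an informative design and low noise, heavily over-regularized candidates are rejected because their held-out error is dominated by shrinkage bias.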

Learning Topics in Short Texts by Non-negative Matrix Factorization on Term Correlation Matrix [chapter]

Xiaohui Yan, Jiafeng Guo, Shenghua Liu, Xueqi Cheng, Yanfeng Wang
2013 Proceedings of the 2013 SIAM International Conference on Data Mining  
To obtain reliable topics from term correlation data, we first introduce a novel way to compute term correlation in short texts by representing each term with its co-occurred terms.  ...  In the topic inference stage, we solve a non-negative least squares problem in Eq. (3.2) or Eq. (3.3).  ...  However, exactly solving NNLS is much slower than solving unconstrained least squares problems, especially in high dimensional space [1].  ...
doi:10.1137/1.9781611972832.83 dblp:conf/sdm/ChengGLWY13 fatcat:p72w36mwkfaovoji75gtv4icpe
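The snippet's contrast between NNLS and unconstrained least squares can be illustrated with projected gradient descent, a simple stand-in for exact active-set NNLS solvers (the names here are illustrative); the unconstrained problem, by comparison, is a single `np.linalg.lstsq` call.

```python
import numpy as np

def nnls_pg(A, b, iters=2000):
    # Projected gradient descent for min ||Ax - b||^2 subject to x >= 0:
    # take a gradient step with step size 1/L, then project onto the
    # nonnegative orthant by clipping.
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)
        x = np.maximum(x - g / L, 0.0)
    return x
```

The iterative nature of this scheme (and of exact NNLS algorithms) is the source of the slowdown the snippet mentions relative to one-shot unconstrained solves.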

Sparse multitask regression for identifying common mechanism of response to therapeutic targets

K. Zhang, J. W. Gray, B. Parvin
2010 Bioinformatics  
Results: In this article, we propose a sparse, multitask regression model together with co-clustering analysis to explore the intrinsic grouping in associating the gene expression with phenotypic signatures  ...  Motivation: Molecular association of phenotypic responses is an important step in hypothesis generation and for initiating design of new experiments.  ...  least squares (RLS).  ... 
doi:10.1093/bioinformatics/btq181 pmid:20529943 pmcid:PMC2881366 fatcat:mip33rpj4ve57ge5ohcmhospgm

Scalable Kernel Learning via the Discriminant Information [article]

Mert Al, Zejiang Hou, Sun-Yuan Kung
2020 arXiv   pre-print
We utilize the Discriminant Information criterion, a measure of class separability with a strong connection to Discriminant Analysis.  ...  By generalizing this measure to cover a wider range of kernel maps and learning settings, we develop scalable methods to learn kernel features with high discriminant power.  ...  We first compare the training and generalization performances of our method to the Least Squares (LS) based kernel learning methodology as presented in [8].  ...
arXiv:1909.10432v2 fatcat:5g775fb7nnfztl4azytssansli