10,914 Hits in 6.4 sec

Information Theoretic Limits for Linear Prediction with Graph-Structured Sparsity [article]

Adarsh Barik, Jean Honorio, Mohit Tawarmalani
2017 arXiv   pre-print
We analyze the necessary number of samples for sparse vector recovery in a noisy linear prediction setup. This model includes problems such as linear regression and classification.  ...  We focus on structured graph models. In particular, we prove that the sufficient number of samples for the weighted graph model proposed by Hegde and others is also necessary.  ...  Moreover, our results not only pertain to linear regression but also apply to linear prediction problems in general.  ... 
arXiv:1701.07895v2 fatcat:e3bgclwfunbrhbwka6aem3x3uu

Group and Graph Joint Sparsity for Linked Data Classification

Longwen Gao, Shuigeng Zhou
Various sparse regularizers have been applied to machine learning problems, among which structured sparsity has been proposed for better adaptation to structured data.  ...  Both theoretical analysis and experimental results on four benchmark datasets show that the joint sparsity model outperforms the traditional group sparsity and graph sparsity models, as well as the latest  ...  Figure 1: A motivating example illustrating the limitations of three existing structured sparsity models and the differences between them and our group & graph joint sparsity model.  ... 
doi:10.1609/aaai.v30i1.10194 fatcat:co7afwpoc5di5pvpmtar4bargq
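The group-sparsity models compared in this entry build on the group-lasso penalty, whose proximal operator is block soft-thresholding: each predefined group of coefficients is kept or zeroed as a unit. A minimal, generic NumPy sketch of that operator (an illustration of the standard penalty, not the paper's joint model; the function name and λ are placeholders):

```python
import numpy as np

def group_soft_threshold(v, groups, lam):
    """Proximal operator of the non-overlapping group-lasso penalty
    lam * sum_g ||v_g||_2: each group is shrunk toward zero as a
    block, so whole groups are selected or dropped together."""
    out = np.zeros_like(v, dtype=float)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * v[g]
    return out

# Toy example: two groups of three coefficients each.
v = np.array([3.0, 4.0, 0.0, 0.1, -0.1, 0.05])
groups = [[0, 1, 2], [3, 4, 5]]
shrunk = group_soft_threshold(v, groups, lam=1.0)
# The weak second group is zeroed entirely; the strong first
# group is shrunk but kept.
```

Graph-sparsity models differ in that the allowed supports are constrained by graph connectivity rather than by a fixed partition into groups.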

Detecting Strong Ties Using Network Motifs

Rahmtin Rotabi, Krishna Kamath, Jon Kleinberg, Aneesh Sharma
2017 Proceedings of the 26th International Conference on World Wide Web Companion - WWW '17 Companion  
In this work, we demonstrate via experiments on Twitter data that using only such structural network features is sufficient for detecting strong ties with high precision.  ...  graph models.  ...  As in prior work on random graphs with planted community structure, we cannot directly observe community membership (a proxy for strong ties), but the structure of the graph conveys latent information  ... 
doi:10.1145/3041021.3055139 dblp:conf/www/RotabiKKS17a fatcat:tnebqr3tlrbvjcsirbssmmtnom

Learning mixed graphical models with separate sparsity parameters and stability-based model selection

Andrew J. Sedgewick, Ivy Shi, Rory M. Donovan, Panayiotis V. Benos
2016 BMC Bioinformatics  
This information can be used for feature selection, classification and other important tasks.  ...  Our results show that MGMs reliably uncover the underlying graph structure, and when used for classification, their performance is comparable to popular discriminative methods (lasso regression and support  ...  Acknowledgements We thank two anonymous reviewers for constructive comments that helped us improve this manuscript.  ... 
doi:10.1186/s12859-016-1039-0 pmid:27294886 pmcid:PMC4905606 fatcat:ucawbodzvve7xioaxxpuihdn3e

Adaptive Sampling For Fast Sparsity Pattern Recovery

M. Lamarca, David Matas, Francisco Ramirez-Javega
2011 Zenodo  
In the noiseless case, methods and theoretical limits do not differ from CS [4] , which is intrinsically also a sparsity support recovery problem.  ...  The average number of samples for perfect recovery versus for = 500 is shown and compared with the theoretical bound.  ... 
doi:10.5281/zenodo.42424 fatcat:vhadoqxecrfs7bsjxdtg5vvvwu

A systematic review of structured sparse learning

Lin-bo Qiao, Bo-feng Zhang, Jin-shu Su, Xi-cheng Lu
2017 Frontiers of Information Technology & Electronic Engineering  
With the assumption of sparsity, many computational problems can be handled efficiently in practice.  ...  In experiments, we present applications in unsupervised learning, for structured signal recovery and hierarchical image reconstruction, and in supervised learning in the context of a novel graph-guided  ...  Signal recovery with fused structured sparsity To show the capability of the fused structure for solving fused structured problems, we conduct the following tests.  ... 
doi:10.1631/fitee.1601489 fatcat:bbcxcyg6vjbknlysm73bzqc2lq

A linear programming approach for estimating the structure of a sparse linear genetic network from transcript profiling data

Sahely Bhadra, Chiranjib Bhattacharyya, Nagasuma R Chandra, I Saira Mian
2009 Algorithms for Molecular Biology  
Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity.  ...  Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes.  ...  Here, the task of learning the structure of a SLGN is equated with that of solving a collection of sparse linear regression problems, one for each gene in a network (node in the graph).  ... 
doi:10.1186/1748-7188-4-5 pmid:19239685 pmcid:PMC2654898 fatcat:jdhsnh3zyvdm3a2zyqvxmvmt6y
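The "one sparse regression per node" strategy mentioned in this snippet can be sketched generically: regress each gene's expression on all the others and read candidate edges off the nonzero coefficients. This is a hypothetical illustration using ISTA for the lasso, not the paper's linear-programming formulation; the function names and λ value are placeholders:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Solve min_b 0.5*||y - X b||^2 + lam*||b||_1 by ISTA
    (proximal gradient descent with soft-thresholding)."""
    b = np.zeros(X.shape[1])
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = b - X.T @ (X @ b - y) / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return b

def infer_network(X, lam=0.1):
    """One sparse regression per node: regress each column of the
    expression matrix X on all the others; nonzero coefficients
    mark candidate edges into that node."""
    n, p = X.shape
    W = np.zeros((p, p))
    for j in range(p):
        others = [k for k in range(p) if k != j]
        W[j, others] = lasso_ista(X[:, others], X[:, j], lam)
    return W
```

On synthetic data where one gene is a noisy linear function of another, the corresponding entry of `W` is large while entries for unrelated genes shrink to (near) zero.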

The heterogeneous feature selection with structural sparsity for multimedia annotation and hashing: a survey

Fei Wu, Yahong Han, Xiang Liu, Jian Shao, Yueting Zhuang, Zhongfei Zhang
2012 International Journal of Multimedia Information Retrieval  
The selection of a limited set of discriminative features for certain semantics is therefore crucial to make the understanding of multimedia more interpretable.  ...  Zhang was on leave from SUNY.  ...  The representation of the latent information hidden in the related features is hence crucial during multimedia understanding.  ...  Thus the complexity of the regression model with the structural grouping sparsity is roughly O(p × n).  ... 
doi:10.1007/s13735-012-0001-9 fatcat:4ihorofn6zbg3mzqnifihgcydm

Complexity Analysis and Efficient Measurement Selection Primitives for High-Rate Graph SLAM [article]

Kristoffer M. Frey, Ted J. Steiner, Jonathan P. How
2018 arXiv   pre-print
Sparsity has been widely recognized as crucial for efficient optimization in graph-based SLAM.  ...  Because the sparsity and structure of the SLAM graph reflect the set of incorporated measurements, many methods for sparsification have been proposed in hopes of reducing computation.  ...  Furthermore, we would like to thank DARPA for supporting this research and additionally Julius Rose from Draper and Professor Nicholas Roy from MIT for their leadership and general support.  ... 
arXiv:1709.06821v2 fatcat:7ntbq45fzvccvlio5f7rvar5zq

Modeling Ideological Agenda Setting and Framing in Polarized Online Groups with Graph Neural Networks and Structured Sparsity [article]

Valentin Hofmann, Janet B. Pierrehumbert, Hinrich Schütze
2021 arXiv   pre-print
The architecture we propose combines graph neural networks with structured sparsity and results in representations for concepts and subreddits that capture phenomena such as ideological radicalization  ...  We also create a new dataset of political discourse covering 12 years and more than 600 online groups with different ideologies.  ...  We are very grateful to Jeremy Frimer for making the MFD 2.0 z-scores available to us. We also thank Xiaowen Dong for extremely helpful feedback.  ... 
arXiv:2104.08829v2 fatcat:7yqdgca3nzffdjusbh2kd6shxq

Structured, sparse regression with application to HIV drug resistance [article]

Daniel Percival, Kathryn Roeder, Roni Rosenfeld, Larry Wasserman
2010 arXiv   pre-print
Our modification finds solutions to regression problems where the selected predictors appear in a structured pattern, with respect to a predefined distance measure over the candidate predictors.  ...  We also demonstrate our method in a simulation study and present some theoretical results and connections.  ...  Acknowledgements The authors would like to thank the reviewers for many helpful comments. Mr. Percival was supported partially by National Institutes of Health SBIR award 2R44GM074313-02.  ... 
arXiv:1002.3128v2 fatcat:566z2v7psfbmrkq56bgzhqt2hq

A Phylogeny-Regularized Sparse Regression Model for Predictive Modeling of Microbial Community Data

Jian Xiao, Li Chen, Yue Yu, Xianyang Zhang, Jun Chen
2018 Frontiers in Microbiology  
Unfortunately, predictive models of microbial community data taking into account both the sparsity and the tree structure remain under-developed.  ...  The phylogenetic tree is an informative prior for more efficient prediction since the microbial community changes are usually not randomly distributed on the tree but tend to occur in clades at varying  ...  ACKNOWLEDGMENTS This work was supported by Mayo Clinic Gerstner Family Career Development Awards, Mayo Clinic Center for Individualized  ... 
doi:10.3389/fmicb.2018.03112 pmid:30619188 pmcid:PMC6305753 fatcat:vt4ljparhvenbkjtjlx7xzbmwe

Variable selection and regression analysis for graph-structured covariates with an application to genomics

Caiyan Li, Hongzhe Li
2010 Annals of Applied Statistics  
We study a graph-constrained regularization procedure and its theoretical properties for regression analysis, taking into account the neighborhood information of variables measured on a graph.  ...  We demonstrate by simulations and a real data set that the proposed procedure can lead to better variable selection and prediction than existing methods that ignore the graph information associated with  ...  We thank the two reviewers for their comments that have greatly improved the presentation of this paper.  ... 
doi:10.1214/10-aoas332 pmid:22916087 pmcid:PMC3423227 fatcat:gp34ek5o75g5folc6nww7t5fra
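Graph-constrained regularization of this kind typically adds a quadratic graph-Laplacian penalty that pulls coefficients of adjacent variables toward each other. A minimal sketch of just that smoothing part (the paper's procedure additionally includes an ℓ1 term for variable selection, which this hypothetical closed-form version omits):

```python
import numpy as np

def graph_laplacian(A):
    """Combinatorial Laplacian L = D - A of an undirected graph
    given its symmetric adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

def laplacian_ridge(X, y, L, lam):
    """Least squares with the quadratic graph penalty lam * b' L b,
    which encourages coefficients of neighboring variables on the
    graph to be close.  Solves (X'X + lam*L) b = X'y."""
    return np.linalg.solve(X.T @ X + lam * L, X.T @ y)

# Chain graph over three predictors: 0 -- 1 -- 2.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = graph_laplacian(A)
```

Note that a constant coefficient vector lies in the null space of L, so perfectly "smooth" coefficients over a connected graph incur no penalty at all.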

Manifold-Regularized Selectable Factor Extraction for Semi-supervised Image Classification

Xin Shi, Chao Zhang, Fangyun Wei, Hongyang Zhang, Yiyuan She
2015 Procedings of the British Machine Vision Conference 2015  
To this end, we implement feature selection for RRR, which promotes row sparsity of B.  ...  Recently, a novel selectable factor extraction (SFE [3]) framework was proposed to simultaneously perform feature selection and extraction, and is theoretically and practically proved to be effective for  ...  Figure 1: Prediction accuracy vs. the number of selected features, using linear SVM (left) and nonlinear SVM (right).  ...  significant features from X_L, while the orthogonal matrix V determines the subspace  ... 
doi:10.5244/c.29.132 dblp:conf/bmvc/ShiZWZS15 fatcat:sjsthiaunrch5btr7xxej6bbui

Structured sparsity through convex optimization [article]

Francis Bach, Rodolphe Jenatton, Julien Mairal, Guillaume Obozinski
2012 arXiv   pre-print
We present applications to unsupervised learning, for structured sparse principal component analysis and hierarchical dictionary learning, and to supervised learning in the context of non-linear variable  ...  We show that the ℓ_1-norm can then be extended to structured norms built on either disjoint or overlapping groups of variables, leading to a flexible framework that can deal with various structures.  ...  This is where structured sparsity comes into play. In order to obtain polynomial-time algorithms and theoretically controlled predictive performance, we may add an extra constraint to the problem.  ... 
arXiv:1109.2397v2 fatcat:wyszj4ba3bhixfnv7mfxxmzqty
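For the hierarchical (tree-structured) case of such structured norms, the proximal operator decomposes into one block soft-thresholding per group, applied from the leaves toward the root. A small NumPy sketch of that composition, assuming nested groups and a uniform weight λ for every group (group-specific weights are also possible):

```python
import numpy as np

def tree_prox(v, groups, lam):
    """Proximal operator of a tree-structured sum of group l2 norms.
    For nested groups listed children-before-parents, a single pass
    of block soft-thresholding per group computes the exact prox."""
    out = v.astype(float).copy()
    for g in groups:  # must be ordered from the leaves to the root
        norm = np.linalg.norm(out[g])
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out[g] *= scale
    return out
```

Because every group contains all of its descendants, zeroing a child group guarantees its variables stay zero when the parent group is processed, which is what produces hierarchical sparsity patterns.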