
Simple Analysis of Sparse, Sign-Consistent JL [article]

Meena Jagadeesan
2019 arXiv   pre-print
We present a simple, combinatorics-free analysis of sparse, sign-consistent JL that yields the same dimension and sparsity upper bounds as the original analysis.  ...  However, their analysis of the upper bound on the dimension and sparsity requires a complicated combinatorial graph-based argument similar to Kane and Nelson's analysis of sparse JL.  ...  Outline for the rest of the paper In Section 2, we describe the construction and analysis of [2] for sparse, sign-consistent JL.  ... 
arXiv:1708.02966v2 fatcat:vvxn2ux2gray3ee743btvwsikq
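The construction analyzed above can be sketched as follows. This is an illustrative reading of a sparse, sign-consistent JL matrix (s nonzeros per column, all sharing one random sign, which models a neuron whose outgoing synapses are all excitatory or all inhibitory); the function name and parameters are not from the paper.

```python
import numpy as np

def sign_consistent_jl(d, m, s, rng=None):
    """Sketch of a sparse, sign-consistent JL matrix: each of the d columns
    has s nonzero entries of magnitude 1/sqrt(s) in uniformly chosen rows,
    and all nonzeros within a column share one random sign.
    Parameter names are illustrative, not the authors' notation."""
    rng = np.random.default_rng(rng)
    A = np.zeros((m, d))
    for j in range(d):
        rows = rng.choice(m, size=s, replace=False)  # s nonzero positions
        sigma = rng.choice([-1.0, 1.0])              # one sign per column
        A[rows, j] = sigma / np.sqrt(s)
    return A

# Each column then has unit norm, so squared lengths are preserved in expectation.
A = sign_consistent_jl(d=1000, m=200, s=8, rng=0)
```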

Simple Analysis of Sparse, Sign-Consistent JL

Meena Jagadeesan, Michael Wagner
2019 International Workshop on Approximation Algorithms for Combinatorial Optimization  
We present a simple, combinatorics-free analysis of sparse, sign-consistent JL that yields the same dimension and sparsity upper bounds as the original analysis.  ...  However, their analysis of the upper bound on the dimension and sparsity requires a complicated combinatorial graph-based argument similar to Kane and Nelson's analysis of sparse JL.  ...  Theorem 2. Failure of the Hanson-Wright approach for sparse, sign-consistent JL: The Hanson-Wright-based approach for sparse JL in [4] cannot be  ... 
doi:10.4230/lipics.approx-random.2019.61 dblp:conf/approx/Jagadeesan19 fatcat:viyredm4tjbvnihh434xdrlnxi

Sparse sign-consistent Johnson–Lindenstrauss matrices: Compression with neuroscience-based constraints: Fig. 1

Zeyuan Allen-Zhu, Rati Gelashvili, Silvio Micali, Nir Shavit
2014 Proceedings of the National Academy of Sciences of the United States of America  
We construct sparse JL matrices that are sign-consistent, and prove that our construction is essentially optimal.  ...  all non-positive), and the fact that any given neuron connects to a relatively small subset of other neurons implies that the JL matrix had better be sparse.  ...  For instance, one may use the Cauchy-Schwarz technique of ref. 18 to deduce ‡‡ of "Sparse, Efficient, and Sign-Consistent JL Matrices."  ... 
doi:10.1073/pnas.1419100111 pmid:25385619 pmcid:PMC4250157 fatcat:aoulhcj4vzdo3nf5sqmg6ya6xy

Simple Analysis of Johnson-Lindenstrauss Transform under Neuroscience Constraints [article]

Maciej Skorski
2020 arXiv   pre-print
The paper re-analyzes a version of the celebrated Johnson-Lindenstrauss Lemma, in which matrices are subjected to constraints that naturally emerge from neuroscience applications: a) sparsity and b) sign-consistency  ...  At the heart of our proof is a novel variant of Hanson-Wright Lemma (on concentration of quadratic forms). Of independent interest are also auxiliary facts on sub-gaussian random variables.  ...  This discussion leads to the following challenge: prove sparse, sign-consistent JL relying on standard estimates of the MGF.  ... 
arXiv:2008.08857v1 fatcat:hfszf5b4tfdrhkadjetvq5cf3m

Fast and efficient dimensionality reduction using Structurally Random Matrices

Thong T. Do, Lu Gan, Yi Chen, Nam Nguyen, Trac D. Tran
2009 2009 IEEE International Conference on Acoustics, Speech and Signal Processing  
that the projection dimension is on the order of O(ε⁻² log³ N), where N denotes the number of d-dimensional vectors.  ...  Motivated by the bridge between compressed sensing and the Johnson-Lindenstrauss lemma [2], this paper introduces a related application of SRMs: realizing a fast and highly efficient embedding  ...  (iii) Principal component analysis (PCA), where Φ consists of the eigenvectors corresponding to the M largest eigenvalues of the covariance matrix of the dataset. And (iv) SRM.  ... 
doi:10.1109/icassp.2009.4959960 dblp:conf/icassp/DoGCNT09 fatcat:aqkizzcefzfufi7qhrq6mfg2dy
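A structurally random embedding of the kind described can be sketched as sign randomization, a fast orthonormal transform, and uniform subsampling. The sketch below uses a Walsh-Hadamard transform as the fast transform, which may differ from the authors' exact choice; names are illustrative.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform with orthonormal scaling
    (input length must be a power of two)."""
    x = x.copy()
    n = x.shape[0]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b          # butterfly: sums
            x[i + h:i + 2 * h] = a - b  # butterfly: differences
        h *= 2
    return x / np.sqrt(n)

def srm_embed(x, m, rng=None):
    """Sketch of a Structurally Random Matrix embedding:
    D (random signs) -> F (fast orthonormal transform) -> R (subsample m
    coordinates), rescaled so squared norm is preserved in expectation."""
    rng = np.random.default_rng(rng)
    n = x.shape[0]
    signs = rng.choice([-1.0, 1.0], size=n)   # D: random sign flips
    y = fwht(signs * x)                       # F: fast transform
    idx = rng.choice(n, size=m, replace=False)  # R: uniform subsampling
    return np.sqrt(n / m) * y[idx]
```

Because every stage is a fast structured operation, the embedding costs O(n log n) time rather than the O(mn) of a dense random matrix.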

Group Sparsity and Graph Regularized Semi-Nonnegative Matrix Factorization with Discriminability for Data Representation

Peng Luo, Jinye Peng
2017 Entropy  
Semi-Nonnegative Matrix Factorization (Semi-NMF), as a variant of NMF, inherits the merit of parts-based representation of NMF and possesses the ability to process mixed-sign data, which has attracted  ...  In addition, ℓ_{2,1}-norm constraints are adopted for the basis matrix, which can encourage the basis matrix to be row-sparse.  ...  condition, i.e., Φ_{jl} V_{jl} = 0, we get the following equations: ((UX)^+ + (UU^T)^- V + αVL^- + 2βV)_{jl} V_{jl} = ((UX)^- + (UU^T)^+ V + αVL^+ + 2βVV^T V)_{jl} V_{jl} (19), where we separate the positive and  ... 
doi:10.3390/e19120627 fatcat:l5ipuakwuvh5fk2wwz6sasnknu
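For context, here is a sketch of the plain Semi-NMF multiplicative update (in the style of Ding et al.) that Equation (19) above extends with the graph term (α) and the regularization term (β). The matrix shapes, the eps smoothing, and the function name are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def seminmf_update(X, U, V, eps=1e-9):
    """One round of plain Semi-NMF updates for X ~ U @ V, with U
    unconstrained (mixed sign) and V >= 0. Uses the positive/negative
    part split A^+ = max(A, 0), A^- = max(-A, 0). Illustrative sketch."""
    # Exact least-squares update for the unconstrained basis U.
    U = X @ V.T @ np.linalg.pinv(V @ V.T)
    pos = lambda A: (np.abs(A) + A) / 2   # A^+
    neg = lambda A: (np.abs(A) - A) / 2   # A^-
    UX = U.T @ X
    UU = U.T @ U
    # Multiplicative update keeps V nonnegative; eps avoids division by zero.
    V = V * np.sqrt((pos(UX) + neg(UU) @ V + eps) /
                    (neg(UX) + pos(UU) @ V + eps))
    return U, V
```

Each round is non-increasing in the reconstruction error ||X − UV||, which is the property the KKT analysis in (19) generalizes.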

Robust and Discriminative Concept Factorization for Image Representation

Yuchen Guo, Guiguang Ding, Jile Zhou, Qiang Liu
2015 Proceedings of the 5th ACM on International Conference on Multimedia Retrieval - ICMR '15  
Specifically, RDCF explicitly considers the influence of noise by imposing a sparse error matrix, and exploits the discriminative information by approximate orthogonal constraints which can also lead to  ...  of naturally occurring data.  ...  One representative and effective work is Locality Consistent Concept Factorization (LCCF) [2] , which incorporates graph regularization to the objective function of conventional CF to learn locality consistent  ... 
doi:10.1145/2671188.2749317 dblp:conf/mir/GuoDZL15 fatcat:r5pyljlsz5h65n6xbni5xjj7ke

A pairwise interaction model for multivariate functional and longitudinal data

Jeng-Min Chiou, Hans-Georg Müller
2016 Biometrika  
Functional data vectors consisting of samples of multivariate data, where each component is a random function, are increasingly encountered but have not yet been comprehensively investigated.  ...  analysis (Pan & Yao, 2008; Lam et al., 2011) .  ...  and on theoretical considerations, including proofs of Theorems 1 and 2, as well as other simulation results.  ... 
doi:10.1093/biomet/asw007 pmid:27279664 fatcat:rhawwlzvorbh3g7zq5fkuedrlq

Sparser Johnson-Lindenstrauss Transforms [article]

Daniel M. Kane, Jelani Nelson
2014 arXiv   pre-print
We give two different and simple constructions for dimensionality reduction in ℓ_2 via linear mappings that are sparse: only an O(ε)-fraction of entries in each column of our embedding matrices are non-zero  ...  These are the first constructions to provide subconstant sparsity for all values of parameters, improving upon previous works of Achlioptas (JCSS 2003) and Dasgupta, Kumar, and Sarlós (STOC 2010).  ...  our proof of Lemma 11 to the types of arguments that are frequently used to analyze the eigenvalue spectrum of random matrices.  ... 
arXiv:1012.1577v6 fatcat:h2z2ndvgg5hovgf56nkojsnqj4
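A minimal sketch of a sparse JL transform in this spirit: each column carries s signed nonzeros, so only an s/m fraction of each column is nonzero. The actual Kane-Nelson constructions differ in how the nonzero rows are chosen (e.g., block or hash-based placement); this version is illustrative.

```python
import numpy as np

def sparse_jl(d, m, s, rng=None):
    """Sketch of a sparse JL transform: each column has s nonzero entries
    equal to +-1/sqrt(s), placed in s uniformly chosen rows with
    independent signs (unlike the sign-consistent variant).
    Parameter names are illustrative."""
    rng = np.random.default_rng(rng)
    A = np.zeros((m, d))
    for j in range(d):
        rows = rng.choice(m, size=s, replace=False)
        A[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return A

# Multiplying by A costs O(s) per nonzero input coordinate,
# which is the point of sparsity: embedding time scales with s, not m.
```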

On Variable Ordination of Modified Cholesky Decomposition for Sparse Covariance Matrix Estimation [article]

Xiaoning Kang, Xinwei Deng
2020 arXiv   pre-print
Estimation of large sparse covariance matrices is of great importance for statistical analysis, especially in the high-dimensional settings.  ...  The proposed method not only ensures the estimator to be positive definite, but also can capture the underlying sparse structure of the covariance matrix.  ...  ≥ (−Σ^+ + (1/M)∑_{k=1}^M Σ̂_k − Λ^+)_{jl} if Σ^{i+1}_{jl} ≥ Σ^+_{jl}, and ≤ (−Σ^+ + (1/M)∑_{k=1}^M Σ̂_k − Λ^+)_{jl} if Σ^{i+1}_{jl} < Σ^+_{jl}; that is, (Σ^+ − Σ^{i+1} + Λ^+ − Λ^{i+1})_{jl} ≥ 0 if Σ^{i+1}_{jl} ≥ Σ^+_{jl}, and ≤ 0 if Σ^{i+1}_{jl} < Σ^+_{jl}.  ... 
arXiv:1801.00380v3 fatcat:hbtulnx6fja35dgmrm3czqdf7u
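The (ordering-dependent) modified Cholesky decomposition at the heart of this approach can be sketched as follows. Computing it from a standard Cholesky factor, as done here, is an illustrative shortcut; the statistical version derives the same factors from a sequence of regressions.

```python
import numpy as np

def modified_cholesky(S):
    """Sketch of the modified Cholesky decomposition S = L D L^T, with L
    unit lower-triangular and D diagonal and positive. The factors depend
    on the ordering of the variables in S, which is the issue the paper
    addresses. Names are illustrative."""
    C = np.linalg.cholesky(S)   # S = C C^T, C lower-triangular
    d = np.diag(C)
    L = C / d                   # scale columns so diag(L) = 1
    D = np.diag(d ** 2)
    return L, D

# Regularizing L (sparsity) while keeping D positive yields a rebuilt
# estimator L D L^T that is guaranteed positive definite.
```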

High-dimensional model recovery from random sketched data by exploring intrinsic sparsity

Tianbao Yang, Lijun Zhang, Qihang Lin, Shenghuo Zhu, Rong Jin
2020 Machine Learning  
an introduced sparse regularizer on the dual solution; (ii) for high-dimensional sparse least-squares regression problems, we employ randomized reduction methods to reduce the scale of data and solve  ...  the data matrix or the data are linearly separable); (iii) the theory covers both smooth and non-smooth loss functions for classification; (iv) the analysis is applicable to a broad class of randomized  ...  A rich theoretical literature (Tibshirani 1996; Zhao and Yu 2006; Wainwright 2009 ) describes the consistency, in particular the sign consistency, of various sparse regression techniques.  ... 
doi:10.1007/s10994-019-05865-4 fatcat:o6avb5zabnh55fnxhse7na3qy4

Detectability Threshold of the Spectral Method for Graph Partitioning [chapter]

Tatsuro Kawamoto, Yoshiyuki Kabashima
2015 Proceedings of the International Conference on Social Modeling and Simulation, plus Econophysics Colloquium 2014  
In order to analyze the performance of the spectral method, we consider a regular graph of two loosely connected clusters, each of which consists of a random graph, i.e., a random graph with a planted  ...  Since we focus on the bisection of regular random graphs, whether the unnormalized Laplacian, the normalized Laplacian, or the modularity matrix is used does not make a difference.  ...  Acknowledgements This work was supported by JSPS KAKENHI Nos. 26011023 (TK), 25120013 (YK), and the JSPS Core-to-Core Program "Non-equilibrium dynamics of soft matter and information."  ... 
doi:10.1007/978-3-319-20591-5_12 fatcat:jcnjkeamfrcktm3utdpkg5e7me
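Spectral bisection as discussed can be sketched by splitting vertices on the sign of the Fiedler vector. The toy graph below (two cliques joined by one edge) is an illustrative stand-in for the planted-partition regular graph in the paper.

```python
import numpy as np

def spectral_bisection(adj):
    """Sketch of spectral bisection: split vertices by the sign of the
    Fiedler vector, i.e., the eigenvector of the second-smallest
    eigenvalue of the unnormalized Laplacian L = D - A. For regular
    graphs, using the normalized Laplacian or the modularity matrix
    gives the same split, as the snippet notes."""
    deg = adj.sum(axis=1)
    L = np.diag(deg) - adj
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = vecs[:, 1]             # second-smallest eigenvalue's vector
    return fiedler >= 0              # boolean cluster labels

# Two 4-cliques joined by an edge between vertices 3 and 4:
# the sign pattern of the Fiedler vector recovers the two cliques.
adj = np.kron(np.eye(2), np.ones((4, 4))) - np.eye(8)
adj[3, 4] = adj[4, 3] = 1.0
labels = spectral_bisection(adj)
```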

PIANO: A Fast Parallel Iterative Algorithm for Multinomial and Sparse Multinomial Logistic Regression [article]

R. Jyothi, P. Babu
2020 arXiv   pre-print
We also prove that PIANO converges to a stationary point of the Multinomial and the Sparse Multinomial Logistic Regression problems.  ...  In particular, we work out the extension of PIANO to solve the Sparse Multinomial Logistic Regression problem with ℓ_1 and ℓ_0 regularizations.  ...  of f_1(0) and sign of f_1(b) are opposite of each other.  ... 
arXiv:2002.09133v1 fatcat:gm76zl6dnneu7brroijxyniepm

Very sparse random projections

Ping Li, Trevor J. Hastie, Kenneth W. Church
2006 Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '06  
R typically consists of entries of standard normal N(0, 1). It is well known that random projections preserve pairwise distances (in the expectation).  ...  Achlioptas proposed sparse random projections by replacing the N(0, 1) entries in R with entries in {−1, 0, 1} with probabilities {1/6, 2/3, 1/6}, achieving a threefold speedup in processing time.  ...  It is "simple" in terms of theoretical analysis, but not in terms of random number generation.  ... 
doi:10.1145/1150402.1150436 dblp:conf/kdd/LiHC06 fatcat:xc6errlxwjgbrg242enan2cwpu
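The Achlioptas scheme quoted above, and its "very sparse" generalization, can be sketched in one sampler. The parameterization by s is illustrative: s = 3 recovers Achlioptas' probabilities {1/6, 2/3, 1/6}, while s on the order of sqrt(d) gives the very sparse variant studied in this paper.

```python
import numpy as np

def very_sparse_projection(d, m, s, rng=None):
    """Sketch of sparse random projections: entries of the d x m matrix R
    are sqrt(s) * {+1, 0, -1} with probabilities {1/(2s), 1 - 1/s, 1/(2s)}.
    The sqrt(s) scaling keeps E[||x @ R||^2 / m] = ||x||^2.
    Names are illustrative."""
    rng = np.random.default_rng(rng)
    R = rng.choice([1.0, 0.0, -1.0], size=(d, m),
                   p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    return np.sqrt(s) * R

# With s = 4, roughly 3/4 of the entries are zero, so most input
# coordinates are simply skipped when projecting.
R = very_sparse_projection(d=200, m=50, s=4, rng=0)
```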

Baryon-Meson Couplings in the qq Pair-Creation Quark Model. II: Mesonic Decay Widths of P-Wave Baryons

Y. Fujiwara
1993 Progress of theoretical physics  
lJ(k) are separately assumed to follow the exact SU6 rules with respect to the spin and flavor degrees of freedom. A detailed analysis of model predictions for Jl(k) and !  ...  These vertex functions consist of two independent flavor amplitudes Jl(k) and !lJ(k) evaluated at the decaying momentum k, which mainly contribute to S-wave and D-wave decays, respectively.  ...  Acknowledgements The author would like to thank members of Nuclear Theory Group, Department of Physics, Kyoto University, for useful discussion.  ... 
doi:10.1143/ptp/89.2.455 fatcat:u2qex7o3vrdaziqcc5nfk5hho4