105,120 Hits in 6.4 sec

Network Pruning for Low-Rank Binary Indexing [article]

Dongsoo Lee, Se Jung Kwon, Byeongwook Kim, Parichay Kapoor, Gu-Yeon Wei
2019 arXiv   pre-print
In this paper, we propose a new network pruning technique that generates a low-rank binary index matrix to compress index data, while decompression of the index data is performed by simple binary matrix multiplication ... We also propose a tile-based factorization technique that not only lowers memory requirements but also enhances the compression ratio ... Binary Matrix Factorization based on Non-Negative Matrix Factorization: Non-negative matrix factorization (NMF) factorizes a real-valued matrix H into two real-valued matrices H_1 and H_2 under the constraint ...
arXiv:1905.05686v1 fatcat:eamkvta55nfsphkrlcqq25qi4m
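The NMF-based route to a binary factorization described in the snippet can be sketched as follows: run standard multiplicative-update NMF on the binary matrix and then threshold both real-valued factors. This is an illustrative sketch only, not the paper's algorithm; the function name, the threshold `tau`, and the iteration count are assumptions.

```python
import numpy as np

def binary_factors_via_nmf(H, rank, tau=0.5, iters=300, seed=0):
    # Sketch: multiplicative-update NMF on binary H, then threshold the
    # real-valued factors at tau to obtain binary matrices with H ~ B1 @ B2.
    rng = np.random.default_rng(seed)
    n, m = H.shape
    W = rng.random((n, rank)) + 0.1
    V = rng.random((rank, m)) + 0.1
    eps = 1e-9
    for _ in range(iters):
        V *= (W.T @ H) / (W.T @ W @ V + eps)   # standard NMF update for V
        W *= (H @ V.T) / (W @ V @ V.T + eps)   # standard NMF update for W
    return (W > tau).astype(int), (V > tau).astype(int)
```

Because NMF factors are only determined up to a diagonal rescaling, a fixed threshold like 0.5 is a heuristic; the papers above address exactly this gap between real-valued and binary factors.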

Composite Binary Decomposition Networks [article]

You Qiaoben, Zheng Wang, Jianguo Li, Yinpeng Dong, Yu-Gang Jiang, Jun Zhu
2018 arXiv   pre-print
binary tensors into two low-rank binary tensors, so that the number of parameters and operations is greatly reduced compared to the original ones ... In this paper, we propose composite binary decomposition networks (CBDNet), which first compose the real-valued tensor of each layer with a limited number of binary tensors, and then decompose some conditioned ... Binary Matrix Decomposition: We first review a property of matrix rank: rank(B * C) ≤ min{rank(B), rank(C)} (8). Compared with binary matrix factorization methods (Zhang et al. 2007; Miettinen 2010) ...
arXiv:1811.06668v1 fatcat:vdjklxuwrvey7ctmmkj6bexp7i
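The rank property quoted as Eq. (8) is easy to verify numerically: a product of a tall binary factor and a wide binary factor can never have rank exceeding their inner dimension. A minimal check (illustrative, not from the paper):

```python
import numpy as np

# Eq. (8): rank(B @ C) <= min(rank(B), rank(C)).
# Quick numeric check with random binary factors of inner dimension 2.
rng = np.random.default_rng(1)
B = rng.integers(0, 2, (6, 2))   # 6x2 binary factor
C = rng.integers(0, 2, (2, 6))   # 2x6 binary factor
A = B @ C                        # product has rank at most 2
assert np.linalg.matrix_rank(A) <= min(np.linalg.matrix_rank(B),
                                       np.linalg.matrix_rank(C))
```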

Composite Binary Decomposition Networks

You Qiaoben, Zheng Wang, Jianguo Li, Yinpeng Dong, Yu-Gang Jiang, Jun Zhu
2019 Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)  
binary tensors into two low-rank binary tensors, so that the number of parameters and operations is greatly reduced compared to the original ones ... In this paper, we propose composite binary decomposition networks (CBDNet), which first compose the real-valued tensor of each layer with a limited number of binary tensors, and then decompose some conditioned ... Binary Matrix Decomposition: We first review a property of matrix rank: rank(B * C) ≤ min{rank(B), rank(C)} (8). Compared with binary matrix factorization methods (Zhang et al. 2007; Miettinen 2010) ...
doi:10.1609/aaai.v33i01.33014747 fatcat:cqqchaik2rciradk5stjkxkydu

On Finding Joint Subspace Boolean Matrix Factorizations [chapter]

Pauli Miettinen
2012 Proceedings of the 2012 SIAM International Conference on Data Mining  
Furthermore, the matrix factorization is based on Boolean arithmetic. This restricts the presented approach to binary matrices only ... If A is binary, ‖A‖²_F = |A| ... The Boolean rank of an n-by-m binary matrix A, rank_B(A), is the least integer k such that there exist an n-by-k binary matrix B and a k-by-m binary matrix C for which A = B • C ...
doi:10.1137/1.9781611972825.82 dblp:conf/sdm/Miettinen12 fatcat:z7vrp3tl4ng6zpcipoxiragqtu
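The Boolean rank defined in the snippet can be computed by brute force for tiny matrices, which makes the definition concrete: try every binary B and C of inner dimension k until the Boolean product reproduces A. This sketch is exponential and purely illustrative; the function names are assumptions.

```python
import numpy as np
from itertools import product

def bool_product(B, C):
    # Boolean matrix product: (B o C)[i, j] = OR_k (B[i, k] AND C[k, j]).
    return (B @ C > 0).astype(int)

def boolean_rank(A):
    # Brute-force Boolean rank: smallest k with A = B o C for some n-by-k
    # binary B and k-by-m binary C. Exponential; tiny matrices only.
    n, m = A.shape
    if not A.any():
        return 0
    for k in range(1, min(n, m) + 1):
        for b in product((0, 1), repeat=n * k):
            B = np.array(b).reshape(n, k)
            for c in product((0, 1), repeat=k * m):
                C = np.array(c).reshape(k, m)
                if np.array_equal(bool_product(B, C), A):
                    return k
    return min(n, m)  # the loop always succeeds before this line
```

For example, the 2-by-2 identity matrix has Boolean rank 2: a rank-1 Boolean factorization is a combinatorial rectangle, and the two diagonal ones would force the off-diagonal entries to be one as well.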

Supplementary Analysis from Personality and the collective: bold homing pigeons occupy higher leadership ranks in flocks

Takao Sasaki, Richard P. Mann, Katherine N. Warren, Tristian Herbert, Tara Wilson, Dora Biro
2018 Figshare  
on personality either in binary or ordinal analyses, though both are not far from significant in the binary analysis ... Summary for effect of age and weight on personality: no consistent effect of either ... This is true for both binary and ordinal boldness ... Effect of age, weight, boldness and release number on flight characteristics in solo training (CH site): we test for effects on flight speed, efficiency ...
doi:10.6084/m9.figshare.5867796.v1 fatcat:efkmqoxdobfgfepym5o4abde5m

Factorization of Binary Matrices: Rank Relations, Uniqueness and Model Selection of Boolean Decomposition [article]

Derek DeSantis, Erik Skau, Duc P. Truong, Boian Alexandrov
2021 arXiv   pre-print
Representing a matrix as a mixture of a small collection of latent vectors via low-rank decomposition is often seen as an advantageous method to interpret and analyze data ... We examine the relationships between the different ranks, and discuss when the factorization is unique ... When dealing with an N × M binary matrix X ∈ B^{M,N}, one can consider different decompositions X = WH, where the pattern matrix W and the weight matrix H either belong to different sets, such as the reals R, ...
arXiv:2012.10496v2 fatcat:3hn5qt6njvastowrnjencmpg54

Low Rank Approximation of Binary Matrices: Column Subset Selection and Generalizations

Chen Dan, Kristoffer Arnsfelt Hansen, He Jiang, Liwei Wang, Yuchen Zhou, Michael Wagner
2018 International Symposium on Mathematical Foundations of Computer Science  
Given a data matrix, low-rank approximation helps to find factors and patterns, and provides concise representations for the data. Research on low-rank approximation usually focuses on real matrices ... Unlike low-rank approximation of a real matrix, which can be efficiently solved by singular value decomposition, we show that approximation of a binary matrix is NP-hard, even for k = 1 ... Also note that (2A − J_{d,n}) is a {−1, 1}-matrix. Thus NP-hardness of Maximum Edge Weight Biclique with {−1, 1} edge weights implies NP-hardness of rank-1 Binary Matrix Approximation ...
doi:10.4230/lipics.mfcs.2018.41 dblp:conf/mfcs/DanHJ0Z18 fatcat:5dkc3yjjmbdnrpsaxgtblh5i6u
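The k = 1 case mentioned in the snippet is small enough to state as code: rank-1 binary approximation asks for binary vectors u, v minimizing ‖A − uvᵀ‖²_F, and the naive search space of 2^(n+m) candidates is what the NP-hardness result says cannot be avoided in general. A brute-force sketch for tiny matrices (illustrative only, not from the paper):

```python
import numpy as np
from itertools import product

def best_rank1_binary(A):
    # Exhaustive rank-1 binary approximation: binary u, v minimizing
    # ||A - u v^T||_F^2. The 2^(n+m) candidates reflect the hardness of
    # the general problem; this sketch is for tiny matrices only.
    n, m = A.shape
    best, best_err = None, np.inf
    for u in product((0, 1), repeat=n):
        for v in product((0, 1), repeat=m):
            R = np.outer(u, v)
            err = int(((A - R) ** 2).sum())
            if err < best_err:
                best, best_err = R, err
    return best, best_err
```

For a binary matrix, the squared Frobenius error equals the number of mismatched entries, so the optimum is the rank-1 binary matrix (a combinatorial rectangle) that disagrees with A in the fewest cells.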

Generalized Matrix Factorizations as a Unifying Framework for Pattern Set Mining: Complexity Beyond Blocks [chapter]

Pauli Miettinen
2015 Lecture Notes in Computer Science  
There are many ways to interpret the factorizations, but one particularly suited for data mining utilizes the fact that a matrix product can be interpreted as a sum of rank-1 matrices ... Seen this way, it becomes obvious that many problems in data mining can be expressed as matrix factorizations with correct definitions of what a rank-1 matrix and a sum of rank-1 matrices mean ... The results concentrate on binary matrices, but recent work has generalized the binary setting to ordered lattices [5], ternary values [23], and rank matrices [20] ...
doi:10.1007/978-3-319-23525-7_3 fatcat:mtwojzjzznhp7pufselyyatwh4

A Comparison of Monotonicity Analysis with Factor Analysis

P.M. Bentler
1970 Educational and Psychological Measurement  
In factoring binary matrices, Horst suggested removing the latent Guttman scale (simplex) from the matrix of covariances and factoring the residuals ... the diagonal matrix of eigenvalues, one can obtain a loading matrix L = VA, analogous to a factor loading matrix, which may be considered to yield indices of monotonicity of variables with the components ...
doi:10.1177/001316447003000203 fatcat:oc4tgwnabvf4dpxmwvzefzck2m

Binary component decomposition Part II: The asymmetric case [article]

Richard Kueng, Joel A. Tropp
2019 arXiv   pre-print
This work builds on a companion paper that addresses the related problem of decomposing a low-rank positive-semidefinite matrix into symmetric binary factors ... This paper studies the problem of decomposing a low-rank matrix into a factor with binary entries, either from {±1} or from {0,1}, and an unconstrained factor ... In this second paper, we consider the problem of factorizing a rectangular matrix into a binary factor and an unconstrained matrix of weights ...
arXiv:1907.13602v1 fatcat:sm647dz3hbal3dtlqejifq26qu

Decoupled Collaborative Ranking

Jun Hu, Ping Li
2017 Proceedings of the 26th International Conference on World Wide Web - WWW '17  
The results are used subsequently to generate a ranking score that puts higher weights on the output of those binary classification problems concerning high values of c, so as to improve the ranking performance ... As our method crucially builds on a decomposition into binary classification problems, we call our proposed method Decoupled Collaborative Ranking (DCR) ... It is shown in Figure 2 that the ranking performance achieved through binary classification on a single binary matrix is worse than the performance achieved through matrix factorization on the original ...
doi:10.1145/3038912.3052685 dblp:conf/www/HuL17 fatcat:dnviy5wtundw5emsgvzca3fez4

Training Binary Weight Networks via Semi-Binary Decomposition [chapter]

Qinghao Hu, Gang Li, Peisong Wang, Yifan Zhang, Jian Cheng
2018 Lecture Notes in Computer Science  
We also implement binary weight AlexNet on an FPGA platform, which shows that our proposed method can achieve ∼9× speed-ups while significantly reducing the consumption of on-chip memory and dedicated multipliers ... Since the matrix product of binary matrices has more numerical values than a binary matrix, the proposed semi-binary decomposition has more representation capacity ... [26] proposed the fixed-point factorized network, which decomposes the weights into two fixed-point matrices and one diagonal matrix ...
doi:10.1007/978-3-030-01261-8_39 fatcat:emdugucdnjh5vfo4l2blid427a

Binary-Decomposed DCNN for Accelerating Computation and Compressing Model Without Retraining

Ryuji Kamiya, Takayoshi Yamashita, Mitsuru Ambai, Ikuro Sato, Yuji Yamauchi, Hironobu Fujiyoshi
2017 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)  
With VGG-16, speed increased by a factor of 2.07, model sizes decreased by 81%, and error increased by only 2.16% ... To that end, this paper proposes Binary-decomposed DCNN, which resolves these issues without the need for retraining ... Exhaustive algorithm [19]: Decomposition by the exhaustive algorithm computes a binary basis matrix, M, and scaling vector, c, that minimize the cost function in Eqn. 1 on the weight vector, w ...
doi:10.1109/iccvw.2017.133 dblp:conf/iccvw/KamiyaYASYF17 fatcat:2suxaq2vqrbe5ffd3uai6rbqju
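The decomposition w ≈ Mc with a binary basis matrix M and scaling vector c can be illustrated with a simple greedy variant: fit one {−1, +1} basis column to the current residual at a time. This is a hedged sketch under assumed conventions (M ∈ {−1, +1}^{n×k}), not the exhaustive algorithm the paper evaluates.

```python
import numpy as np

def greedy_binary_basis(w, k):
    # Sketch (NOT the paper's exhaustive algorithm): greedily build a
    # binary basis matrix M in {-1,+1}^{n x k} and scaling vector c so
    # that w ~ M @ c, one basis column per iteration.
    n = w.shape[0]
    M = np.empty((n, k))
    c = np.empty(k)
    r = w.astype(float).copy()
    for j in range(k):
        M[:, j] = np.where(r >= 0, 1.0, -1.0)  # sign pattern of the residual
        c[j] = np.abs(r).mean()                # least-squares scale for it
        r -= c[j] * M[:, j]
    return M, c
```

Each step subtracts the best scaled sign pattern of the residual, so the reconstruction error is non-increasing in k; an exhaustive search over sign patterns, as in the paper, can only do better.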

Binary-decomposed DCNN for accelerating computation and compressing model without retraining [article]

Ryuji Kamiya, Takayoshi Yamashita, Mitsuru Ambai, Ikuro Sato, Yuji Yamauchi, Hironobu Fujiyoshi
2017 arXiv   pre-print
With VGG-16, speed increased by a factor of 2.07, model sizes decreased by 81%, and error increased by only 2.16% ... To that end, this paper proposes Binary-decomposed DCNN, which resolves these issues without the need for retraining ... Exhaustive algorithm [19]: Decomposition by the exhaustive algorithm computes a binary basis matrix, M, and scaling vector, c, that minimize the cost function in Eqn. 1 on the weight vector, w ...
arXiv:1709.04731v1 fatcat:b627kmvwpfgkda7iw7x4qp2ocy

User Graph Regularized Pairwise Matrix Factorization for Item Recommendation [chapter]

Liang Du, Xuan Li, Yi-Dong Shen
2011 Lecture Notes in Computer Science  
In this paper, we propose a novel method, called User Graph regularized Pairwise Matrix Factorization (UGPMF), to seamlessly integrate user information into pairwise matrix factorization procedure.  ...  Experiments on real-world recommendation data sets demonstrate that the proposed method significantly outperforms various competing alternative methods on top-k ranking performance of one-class item recommendation  ...  Matrix Factorization Based on BPR (BPR-MF) BPR-MF learns two low-rank matrices W and H.  ... 
doi:10.1007/978-3-642-25856-5_28 fatcat:csikjrn4r5br7fywwanqb3w4va
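BPR-MF, mentioned in the snippet as the base model that learns two low-rank matrices W and H, is trained by stochastic gradient ascent on pairwise preferences. A minimal sketch of one update step, assuming the standard BPR objective ln σ(x_ui − x_uj) (the function name and defaults are illustrative):

```python
import numpy as np

def bpr_step(W, H, u, i, j, lr=0.1, reg=0.0):
    # One SGD step of BPR matrix factorization: user u is assumed to
    # prefer item i over item j; ascend the gradient of ln sigma(x_ui - x_uj),
    # where x_ui = W[u] @ H[i].
    wu = W[u].copy()              # use the pre-update user factor throughout
    d = H[i] - H[j]
    g = 1.0 / (1.0 + np.exp(wu @ d))  # sigma(-(x_ui - x_uj))
    W[u] += lr * (g * d - reg * wu)
    H[i] += lr * (g * wu - reg * H[i])
    H[j] += lr * (-g * wu - reg * H[j])
```

Repeating this step on sampled (u, i, j) triples drives the preferred item's score above the other's, which is the pairwise procedure that UGPMF extends with user-graph regularization.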
Showing results 1 — 15 of 105,120