1,570 Hits in 5.3 sec

Deterministic and Las Vegas Algorithms for Sparse Nonnegative Convolution [article]

Karl Bringmann, Nick Fischer, Vasileios Nakos
2021 arXiv   pre-print
However, often the involved vectors are sparse and hence one could hope for output-sensitive algorithms to compute nonnegative convolutions.  ...  Our algorithm is a blend of algebraic and combinatorial ideas and techniques. Additionally, we provide two fast Las Vegas algorithms for computing sparse nonnegative convolutions.  ...  We proved that sparse nonnegative convolution is in Las Vegas time O(t log t log log t). Is the restriction to nonnegative vectors necessary?  ...
arXiv:2107.07625v1 fatcat:dwbpi55cnzh4tf4lvrlqx3ztsi
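
For intuition, here is a minimal Python sketch of the quadratic-in-sparsity baseline this line of work improves on: convolving two sparse nonnegative vectors stored as index-to-value dictionaries. This is not the paper's near-linear-time algorithm, only the naive starting point; all names are illustrative.

    # Naive sparse convolution of nonnegative vectors stored as
    # {index: value} dicts; O(s*t) time for s and t nonzeros, versus
    # O(n log n) for a dense FFT over the full length n.
    def sparse_convolve(x, y):
        out = {}
        for i, xi in x.items():
            for j, yj in y.items():
                out[i + j] = out.get(i + j, 0) + xi * yj
        return out

    # (1 + 2z^5) * (3z^2 + 4z^9) = 3z^2 + 6z^7 + 4z^9 + 8z^14
    print(sparse_convolve({0: 1, 5: 2}, {2: 3, 9: 4}))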

Towards Understanding Residual and Dilated Dense Neural Networks via Convolutional Sparse Coding [article]

Zhiyang Zhang, Shihua Zhang
2019 arXiv   pre-print
Inspired by these considerations, we propose two novel multi-layer models--residual convolutional sparse coding model (Res-CSC) and mixed-scale dense convolutional sparse coding model (MSD-CSC), which  ...  Convolutional neural networks (CNNs) and their variants have led to many state-of-the-art results in various fields. However, a clear theoretical understanding of them is still lacking.  ...  Soft Non-negative Thresholding Operator For a nonnegative sparse coding problem, one only needs to consider nonnegative sparse codes.  ...
arXiv:1912.02605v2 fatcat:5plssrwymne2jlaqwwoa2pqaiu
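
The soft non-negative thresholding operator mentioned in the snippet has a compact closed form: it is the proximal operator of an l1 penalty restricted to the nonnegative orthant, and it reduces to ReLU when the threshold is zero. A minimal sketch (illustrative, not the authors' code):

    import numpy as np

    # Soft nonnegative thresholding: shrink by lam, then clip at zero.
    # With lam = 0 this is exactly the ReLU activation.
    def soft_nonneg_threshold(x, lam):
        return np.maximum(x - lam, 0.0)

    print(soft_nonneg_threshold(np.array([-1.0, 0.2, 1.5]), 0.3))  # [0. 0. 1.2]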

Dataless Model Selection with the Deep Frame Potential [article]

Calvin Murdock, Simon Lucey
2020 arXiv   pre-print
Building upon theoretical connections between deep learning and sparse approximation, we propose the deep frame potential: a measure of coherence that is approximately related to representation stability  ...  We validate its use as a criterion for model selection and demonstrate correlation with generalization error on a variety of common residual and densely connected network architectures.  ...  Within each group, a dense block is the concatenation of smaller convolutions that take all previous outputs as inputs with filter numbers equal to a fixed growth rate.  ... 
arXiv:2003.13866v1 fatcat:ctvt43qmzrdohk6uti7qm4uscq
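
As a rough illustration of the coherence idea behind the deep frame potential, the sketch below computes the classical frame potential of a single weight matrix: the sum of squared inner products between its normalized columns. The paper's actual criterion uses a particular normalization and accounts for convolutional and skip-connection structure, so treat this as an assumption-laden toy.

    import numpy as np

    # Frame potential of a matrix with unit-norm columns: ||W^T W||_F^2.
    # Lower values mean less correlated (more mutually incoherent) filters.
    def frame_potential(W):
        Wn = W / np.linalg.norm(W, axis=0, keepdims=True)  # unit-norm columns
        G = Wn.T @ Wn                                      # Gram matrix
        return float(np.sum(G ** 2))

    rng = np.random.default_rng(0)
    print(frame_potential(rng.standard_normal((64, 256))))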

Dataless Model Selection With the Deep Frame Potential

Calvin Murdock, Simon Lucey
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Building upon theoretical connections between deep learning and sparse approximation, we propose the deep frame potential: a measure of coherence that is approximately related to representation stability  ...  We validate its use as a criterion for model selection and demonstrate correlation with generalization error on a variety of common residual and densely connected network architectures.  ...  Within each group, a dense block is the concatenation of smaller convolutions that take all previous outputs as inputs with filter numbers equal to a fixed growth rate.  ... 
doi:10.1109/cvpr42600.2020.01127 dblp:conf/cvpr/MurdockL20 fatcat:dnxvxnm6dvbvlp3xgjp75v3mda

A New Architecture for Optimization Modeling Frameworks [article]

Matt Wytock, Steven Diamond, Felix Heide, Stephen Boyd
2016 arXiv   pre-print
Our approach is particularly well adapted to first-order and indirect optimization algorithms.  ...  Our new architecture makes it easy for modeling frameworks to support high performance computational platforms like GPUs and distributed clusters, as well as to generate solvers specialized to individual  ...  Table II compares the TensorFlow version of SCS to the native implementation and demonstrates that in the dense matrix and convolution cases, solve times on GPU are lower with TensorFlow.  ...
arXiv:1609.03488v2 fatcat:lsqhowdnlbaotlodee7fdzmxfe

Nonnegative Matrix Factorization: A Comprehensive Review

Yu-Xiong Wang, Yu-Jin Zhang
2013 IEEE Transactions on Knowledge and Data Engineering  
Some related work not on NMF that NMF should learn from or has connections with is also covered. Moreover, some open issues that remain to be solved are discussed.  ...  Nonnegative Matrix Factorization (NMF), a relatively novel paradigm for dimensionality reduction, has been in the ascendant since its inception.  ...  ACKNOWLEDGMENTS The authors would like to thank Dr. Le Li and the reviewers for their helpful comments and suggestions.  ...
doi:10.1109/tkde.2012.51 fatcat:ocxepl7gdrhszawqhsj36qhpme
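
For readers new to NMF, the best-known algorithm the review covers is the Lee-Seung multiplicative update for min ||X - WH||_F^2 with W, H >= 0. A minimal sketch (one of many variants surveyed; the function name and defaults are illustrative):

    import numpy as np

    # Multiplicative updates keep W and H nonnegative as long as they are
    # initialized nonnegative; eps guards against division by zero.
    def nmf_multiplicative(X, rank, iters=200, eps=1e-10):
        rng = np.random.default_rng(0)
        m, n = X.shape
        W, H = rng.random((m, rank)), rng.random((rank, n))
        for _ in range(iters):
            H *= (W.T @ X) / (W.T @ W @ H + eps)
            W *= (X @ H.T) / (W @ H @ H.T + eps)
        return W, H

    X = np.abs(np.random.default_rng(1).standard_normal((30, 20)))
    W, H = nmf_multiplicative(X, rank=5)
    print(np.linalg.norm(X - W @ H))  # residual shrinks over the iterations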

Convex Optimization with Abstract Linear Operators

Steven Diamond, Stephen Boyd
2015 2015 IEEE International Conference on Computer Vision (ICCV)  
We introduce a convex optimization modeling framework that transforms a convex optimization problem expressed in a form natural and convenient for the user into an equivalent cone program in a way that  ...  linear functions in the transformation process not as matrices, but as graphs that encode composition of abstract linear operators, we arrive at a matrix-free cone program, i.e., one whose data matrix is  ...  We call the cone program matrix-free if A is represented implicitly as an FAO, rather than explicitly as a dense or sparse matrix.  ... 
doi:10.1109/iccv.2015.84 dblp:conf/iccv/DiamondB15 fatcat:yxb64zokrvgmtl3orzuvfoctaq
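
The matrix-free idea is easy to see in code: a linear operator is carried around as a (forward, adjoint) pair of functions rather than a stored matrix, which is all a first-order cone solver needs. A minimal sketch with 1-D convolution as the operator (illustrative; not the paper's FAO-graph machinery):

    import numpy as np

    # A x is convolution with kernel c; A^T y is correlation with c.
    class ConvOp:
        def __init__(self, c):
            self.c = np.asarray(c, dtype=float)
        def forward(self, x):                  # y = A x
            return np.convolve(self.c, x)
        def adjoint(self, y):                  # A^T y
            return np.correlate(y, self.c, mode='valid')

    A = ConvOp([1.0, -2.0, 1.0])
    x = np.arange(5.0)
    print(A.adjoint(A.forward(x)))             # A^T A x, never forming A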

Reframing Neural Networks: Deep Structure in Overcomplete Representations [article]

Calvin Murdock and George Cazenavette and Simon Lucey
2022 arXiv   pre-print
But despite their clear empirical advantages, it is still not well understood what makes them so effective.  ...  In comparison to classical shallow representation learning techniques, deep neural networks have achieved superior performance in nearly every application benchmark.  ...  A comparison between (a) projection onto a simplex, which corresponds to nonnegative and ℓ1 norm constraints, and (b) the rectified linear unit (ReLU) nonlinear activation function, which is equivalent  ...
arXiv:2103.05804v2 fatcat:jpdvnlfhkngcniodyzyipzbfgi
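
The simplex-projection versus ReLU comparison in the snippet can be made concrete: ReLU is projection onto the nonnegative orthant, while adding a sum constraint yields projection onto the probability simplex, computable in closed form after a sort (the standard algorithm of Duchi et al., 2008). A hedged sketch:

    import numpy as np

    def relu(v):                     # projection onto the nonnegative orthant
        return np.maximum(v, 0.0)

    def project_simplex(v, z=1.0):   # projection onto {w >= 0, sum(w) = z}
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - z)[0][-1]
        theta = (css[rho] - z) / (rho + 1.0)
        return np.maximum(v - theta, 0.0)

    v = np.array([0.5, -0.2, 1.1])
    print(relu(v), project_simplex(v))   # the latter sums to 1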

Matrix-Free Convex Optimization Modeling [chapter]

Steven Diamond, Stephen Boyd
2016 Optimization and Its Applications in Control and Data Sciences  
We introduce a convex optimization modeling framework that transforms a convex optimization problem expressed in a form natural and convenient for the user into an equivalent cone program in a way that  ...  representing linear functions in the transformation process not as matrices, but as graphs that encode composition of linear operators, we arrive at a matrix-free cone program, i.e., one whose data matrix is  ...  This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-114747 and by the DARPA X-DATA program.  ... 
doi:10.1007/978-3-319-42056-1_7 fatcat:esxdma3ymncjrf3uc7wkqpoygy

Matrix-Free Convex Optimization Modeling [article]

Steven Diamond, Stephen Boyd
2015 arXiv   pre-print
We introduce a convex optimization modeling framework that transforms a convex optimization problem expressed in a form natural and convenient for the user into an equivalent cone program in a way that  ...  representing linear functions in the transformation process not as matrices, but as graphs that encode composition of linear operators, we arrive at a matrix-free cone program, i.e., one whose data matrix is  ...  This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-114747 and by the DARPA X-DATA program.  ... 
arXiv:1506.00760v2 fatcat:g4tyihejpbearodtlbr7apjg6m

Joint Camera Spectral Sensitivity Selection and Hyperspectral Image Recovery [chapter]

Ying Fu, Tao Zhang, Yinqiang Zheng, Debing Zhang, Hua Huang
2018 Lecture Notes in Computer Science  
Later, we append a CSS selection layer onto the recovery network, and the optimal CSS can thus be automatically determined from the network weights under the nonnegative sparse constraint.  ...  In this paper, we present an efficient convolutional neural network (CNN) based method, which can jointly select the optimal CSS from a candidate dataset and learn a mapping to recover HSI from a single  ...  To enforce the nonnegative constraint, the weights in this convolution layer for CSS selection are set to be positive.  ... 
doi:10.1007/978-3-030-01219-9_48 fatcat:su54xqecdzayllf3v3iuvhvp7m
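
One simple way to realize the "weights ... set to be positive" constraint from the snippet is to reparameterize the selection layer's weights through a positive function such as softplus. A minimal PyTorch sketch; the class name, channel counts, and the softplus choice are assumptions, not the authors' implementation:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # A 1x1 convolution whose effective weights are always positive.
    class NonnegConv1x1(nn.Module):
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.raw = nn.Parameter(torch.randn(out_ch, in_ch, 1, 1))
        def forward(self, x):
            return F.conv2d(x, F.softplus(self.raw))

    layer = NonnegConv1x1(31, 3)                  # e.g., 31 spectral bands -> RGB
    print(layer(torch.randn(1, 31, 8, 8)).shape)  # torch.Size([1, 3, 8, 8])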

Learning image representations from the pixel level via hierarchical sparse coding

Kai Yu, Yuanqing Lin, John Lafferty
2011 CVPR 2011  
After pooling within local regions, the first layer codes are then passed to the second layer, which jointly encodes signals from the region.  ...  We present a method for learning image representations using a two-layer sparse coding scheme at the pixel level. The first layer encodes local patches of an image.  ...  One important feature of the above formulation is that the set-level dictionary Φ is required to be nonnegative.  ...
doi:10.1109/cvpr.2011.5995732 dblp:conf/cvpr/YuLL11 fatcat:m5m2aikkkveablambe4qwbixy4
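
A toy version of the two-layer scheme, with a single soft-thresholding step standing in for the paper's actual sparse coding solver; the dictionary shapes and pooling choice are illustrative assumptions:

    import numpy as np

    # One proximal-style step as a cheap stand-in for sparse coding.
    def encode(D, x, lam=0.1):
        z = D.T @ x
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    rng = np.random.default_rng(0)
    D1 = rng.standard_normal((64, 128))           # patch-level dictionary
    D2 = np.abs(rng.standard_normal((128, 256)))  # set-level dictionary, nonnegative as in the paper
    patches = rng.standard_normal((64, 9))        # nine patches from one region
    codes = np.stack([encode(D1, p) for p in patches.T])  # (9, 128) first-layer codes
    pooled = codes.max(axis=0)                    # pooling within the local region
    print(encode(D2, pooled).shape)               # second-layer code: (256,)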

Fast Computation on Semirings Isomorphic to (×, max) on R_+ [article]

Oliver Serang
2016 arXiv   pre-print
This manuscript generalizes recent advances on max-convolution: in this approach a small family of p-norm rings is used to efficiently approximate results on a nonnegative semiring.  ...  The general approach can be used to easily compute sub-cubic estimates of the all-pairs shortest paths in a graph with nonnegative edge weights and sub-quadratic estimates of the top k values in x_i+y_j  ...  In this case, H corresponds to an O(n log(n)) FFT standard convolution algorithm, which is chosen exclusively based on its numerical equivalence to G.  ...
arXiv:1511.05690v2 fatcat:lop6h2ejjrehxaal372cjxigae
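
The core trick is that max-convolution of nonnegative vectors can be approximated by an ordinary (FFT-able) convolution in a p-norm ring: convolve the p-th powers, then take the p-th root, and the result tends to the true maximum as p grows. A minimal sketch:

    import numpy as np

    # Approximate max-convolution: out[k] ~ max_i x[i] * y[k - i].
    def max_convolve_approx(x, y, p=64.0):
        return np.convolve(x ** p, y ** p) ** (1.0 / p)

    x = np.array([0.5, 1.0, 0.25])
    y = np.array([1.0, 0.5])
    print(max_convolve_approx(x, y))   # exact answer is [0.5, 1.0, 0.5, 0.125]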

Graph-based Neural Acceleration for Nonnegative Matrix Factorization [article]

Jens Sjölund, Maria Bånkestad
2022 arXiv   pre-print
e.g., sparse linear algebra, but has not yet been exploited to design graph neural networks for matrix computations.  ...  We describe a graph-based neural acceleration technique for nonnegative matrix factorization that builds upon a connection between matrices and bipartite graphs that is well-known in certain fields, e.g  ...  min_{W,H} (1/2)‖W⊤H − X‖²_F subject to W, H ≥ 0. (2) If either of the factors W or H is held fixed, what remains is a nonnegative least-squares problem, which is convex.  ...
arXiv:2202.00264v1 fatcat:l3q6i2sxlzcwrgifb4pwbksozi
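
The alternating structure implied by Eq. (2) above is easy to exercise directly: with one factor fixed, each column of the other solves an independent convex NNLS problem. A sketch using SciPy's nnls (illustrative, not the paper's learned accelerator):

    import numpy as np
    from scipy.optimize import nnls

    # With W fixed, column j of H solves min_h ||W h - X[:, j]||_2, h >= 0.
    def update_H(X, W):
        return np.stack([nnls(W, X[:, j])[0] for j in range(X.shape[1])], axis=1)

    rng = np.random.default_rng(0)
    X = np.abs(rng.standard_normal((20, 10)))
    W = np.abs(rng.standard_normal((20, 4)))
    print(update_H(X, W).shape)        # (4, 10), all entries >= 0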

Sparse Head-Related Impulse Response for Efficient Direct Convolution [article]

Yuancheng Luo, Dmitry N. Zotkin, Ramani Duraiswami
2015 arXiv   pre-print
The described factorization is shown to be applicable to the arbitrary source signal case and allows one to employ time-domain convolution at a computational cost lower than using convolution in the frequency  ...  Further, the reflection filter can be made sparse with minimal HRIR distortion.  ...  A free Matlab solver for L1-NNLS is available online at http://www.stanford.edu/~boyd/papers/l1_ls.html. Convolution in time domain is equivalent to windowing in frequency domain, and vice versa  ...
arXiv:1502.03162v1 fatcat:wszzbprb5bgv7cbjwyo65k5l2q
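
The payoff of a sparse HRIR in direct convolution is that the cost scales with the number of nonzero filter taps rather than the filter length. A minimal sketch of a sparse time-domain FIR (the representation and names are illustrative):

    import numpy as np

    # Convolve signal x with a filter given only by its nonzero taps.
    # Cost: O(len(x) * nnz) instead of O(len(x) * len(h)).
    def sparse_fir(x, tap_idx, tap_val):
        y = np.zeros(len(x) + int(tap_idx.max()))
        for i, v in zip(tap_idx, tap_val):
            y[i:i + len(x)] += v * x
        return y

    x = np.ones(8)
    tap_idx = np.array([0, 3, 7])          # positions of nonzero taps
    tap_val = np.array([1.0, 0.5, 0.25])   # tap values
    print(sparse_fir(x, tap_idx, tap_val))
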
Showing results 1 — 15 out of 1,570 results