Near-separable Non-negative Matrix Factorization with ℓ1 and Bregman Loss Functions [chapter]

Abhishek Kumar, Vikas Sindhwani
2015 Proceedings of the 2015 SIAM International Conference on Data Mining  
In this paper, we develop separable NMF algorithms with ℓ1 loss and Bregman divergences, by extending the conical hull procedures proposed in our earlier work (Kumar et al., 2013).  ...  We show that on foreground-background separation problems in computer vision, robust near-separable NMFs match the performance of Robust PCA, considered state of the art on these problems, with an order  ...  In fact, none of the existing near-separable NMF algorithms works with ℓ1 and Bregman loss functions.  ... 
doi:10.1137/1.9781611974010.39 dblp:conf/sdm/0001S15 fatcat:g22smkba6nelvfwmge3uhrdvja
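
The entry above builds on the geometric picture that, under near-separability, a few columns of the data matrix are "anchors" whose conical hull contains all the others. As a point of reference, here is a minimal NumPy sketch of the classical successive projection algorithm (SPA) for anchor selection; it uses the ℓ2 norm, whereas the paper's contribution is extending such conical hull procedures to ℓ1 and Bregman losses. The function name and the NNLS follow-up are illustrative, not taken from the paper.

```python
import numpy as np

def spa_anchors(X, r):
    """Pick r 'anchor' columns of X assuming near-separability, i.e.
    every column of X is (approximately) a non-negative combination of a
    few extreme columns. Classical SPA scores columns by l2 norm; the
    paper above generalizes this kind of selection to l1/Bregman losses."""
    R = X.astype(float).copy()
    anchors = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))  # most extreme column
        anchors.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)  # project residual away from chosen anchor
    return anchors

# Given anchors, W = X[:, anchors] and the weights H >= 0 follow from
# column-wise non-negative least squares (e.g. scipy.optimize.nnls),
# giving X ≈ W H.
```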

Decomposition into low-rank plus additive matrices for background/foreground separation: A review for a comparative evaluation with a large-scale dataset

Thierry Bouwmans, Andrews Sobral, Sajid Javed, Soon Ki Jung, El-Hadi Zahzah
2017 Computer Science Review  
However, similar robust implicit or explicit decompositions can be made in the following problem formulations: Robust Non-negative Matrix Factorization (RNMF), Robust Matrix Completion (RMC), Robust Subspace  ...  Then, we examine carefully each method in each robust subspace learning/tracking framework with their decomposition, their loss functions, their optimization problem and their solvers.  ...  Acknowledgment The authors would like to thank the following researchers: Zhouchen Lin (Visual Computing Group, Microsoft Research Asia) who has kindly provided the solver LADMAP [192] and the ℓ1-filtering  ... 
doi:10.1016/j.cosrev.2016.11.001 fatcat:vdh7ic4n6zfkjlccnyiq74z5wu
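
For context, the canonical "low-rank plus additive" instance surveyed here is Robust PCA via principal component pursuit: an observed frame matrix $M$ is split into a low-rank background $L$ and a sparse foreground $S$,

\[ \min_{L,S} \; \|L\|_* + \lambda \|S\|_1 \quad \text{s.t.} \quad M = L + S, \]

with $\lambda = 1/\sqrt{\max(m,n)}$ the usual default for an $m \times n$ matrix. The review compares many variations of exactly this template: other loss functions, implicit decompositions, and different solvers.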

An elementary introduction to information geometry [article]

Frank Nielsen
2020 arXiv   pre-print
In this survey, we describe the fundamental differential-geometric structures of information manifolds, state the fundamental theorem of information geometry, and illustrate some use cases of these information  ...  (ICA), Non-negative Matrix Factorization (NMF), • Mathematical programming: Barrier function of interior point methods, • Game theory: Score functions.  ...  Arwini and Christopher Terence John Dodson. Information Geometry: Near Randomness and Near Independence. Springer, 2008. [11] John Ashburner and Karl J. Friston.  ... 
arXiv:1808.08271v2 fatcat:q7cum6qk7ffbhpbmmyynvdeeya

Global Convergence of Model Function Based Bregman Proximal Minimization Algorithms [article]

Mahesh Chandra Mukkamala, Jalal Fadili, Peter Ochs
2020 arXiv   pre-print
However, many functions arising in practical applications such as low rank matrix factorization or deep neural network problems do not have a Lipschitz continuous gradient.  ...  However, the L-smad property cannot handle nonsmooth functions; for example, simple nonsmooth functions like |x^4 - 1| and also many practical composite problems are out of scope.  ...  Acknowledgments Mahesh Chandra Mukkamala and Peter Ochs thank the German Research Foundation for providing financial support through DFG Grant OC 150/1-1.  ... 
arXiv:2012.13161v1 fatcat:eosshvsbqje5taizcjvhh2dblu
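
A worked one-liner may help place the L-smad idea the abstract refers to. In the Bregman proximal setting, the Euclidean quadratic upper bound is replaced by a Bregman distance $D_h(x, y) = h(x) - h(y) - \langle \nabla h(y), x - y \rangle$ generated by a kernel $h$, giving the update

\[ x^{k+1} \in \arg\min_{x} \; \Big\{ \langle \nabla f(x^k), x - x^k \rangle + \tfrac{1}{\lambda} D_h(x, x^k) \Big\}. \]

For instance, $f(x) = x^4$ has no globally Lipschitz gradient, yet it is smooth adaptable relative to $h(x) = \tfrac{1}{4}x^4 + \tfrac{1}{2}x^2$; this is the standard example from the L-smad literature that the paper extends toward nonsmooth model functions.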

Tensor Sparse Coding for Positive Definite Matrices

Ravishankar Sivalingam, Daniel Boley, Vassilios Morellas, Nikolaos Papanikolopoulos
2014 IEEE Transactions on Pattern Analysis and Machine Intelligence  
In the matrix case, the data points are merely vectorized and treated as vectors thereafter (e.g., image patches).  ...  Synthetic and real-world computer vision experiments with region covariance descriptors demonstrate the need for and the applicability of the new sparse coding model.  ...  Army Research Laboratory and the U.S.  ... 
doi:10.1109/tpami.2013.143 pmid:24457513 fatcat:cmhywwu62neldck4trfkiorybm
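
Since the experiments hinge on region covariance descriptors, a small sketch of how such a positive definite data point is formed may help. The five-feature choice below (intensity, coordinates, gradients) is one common convention, not necessarily the paper's exact feature set.

```python
import numpy as np

def region_covariance(patch):
    """Region covariance descriptor (Tuzel et al.): an SPD matrix
    summarizing per-pixel features of an image region."""
    patch = patch.astype(float)
    H, W = patch.shape
    ys, xs = np.mgrid[0:H, 0:W]
    gy, gx = np.gradient(patch)                          # vertical, horizontal
    F = np.stack([patch.ravel(), xs.ravel(), ys.ravel(),
                  gx.ravel(), gy.ravel()], axis=0)       # 5 x (H*W) features
    C = np.cov(F)                                        # 5 x 5 covariance
    return C + 1e-6 * np.eye(5)                          # regularize to SPD
```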

Implicit regularization and momentum algorithms in nonlinear adaptive control and prediction [article]

Nicholas M. Boffi, Jean-Jacques E. Slotine
2020 arXiv   pre-print
We show that the Euler–Lagrange equations for the Bregman Lagrangian lead to natural gradient and mirror descent-like adaptation laws with momentum, and we recover their first-order analogues in the infinite  ...  We prove that when there are multiple dynamics consistent with the data, these non-Euclidean adaptation laws implicitly regularize the learned model.  ...  If f(x, a, t) has the form (11) and is non-decreasing, gradient flow on the loss function $\mathcal{L}(x, \hat{a}, a, t) = \frac{1}{2} f^2(x, \hat{a}, a, t)$ with a gain matrix $P$ leads to $\dot{\hat{a}} = -f(x, \hat{a}, a, t)\, f_m(x, \phi^T \hat{a}, t)\, P\, \alpha(x, t)$  ... 
arXiv:1912.13154v6 fatcat:7bs5d63sfbh7dbkxbqzcujhdde
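
The "mirror descent-like adaptation law" in the snippet can be summarized compactly: for a strictly convex potential $\psi$, the parameter estimate evolves in dual coordinates,

\[ \frac{d}{dt} \nabla \psi(\hat{a}) = -\nabla_{\hat{a}} \mathcal{L}, \qquad \text{equivalently} \qquad \dot{\hat{a}} = -\big(\nabla^2 \psi(\hat{a})\big)^{-1} \nabla_{\hat{a}} \mathcal{L}, \]

which reduces to plain gradient flow when $\psi(a) = \tfrac{1}{2}\|a\|_2^2$. This is the generic natural-gradient/mirror-descent form; the paper derives it (with momentum) from a Bregman Lagrangian.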

Distributed Scalable Collaborative Filtering Algorithm [chapter]

Ankur Narang, Abhinav Srivastava, Naga Praveen Kumar Katta
2011 Lecture Notes in Computer Science  
Our distributed algorithm (implemented using OpenMP with MPI) delivered training time of around 6s on the full Netflix dataset and prediction time of 2.5s on 1.4M ratings (1.78μs per rating prediction)  ...  Our training time is around 20× (more than one order of magnitude) better than the best known parallel training time, along with high accuracy (0.87 ± 0.02 RMSE).  ...  The matrix factorization approaches include Singular Value Decomposition (SVD [14] ) and Non-Negative Matrix Factorization (NNMF) based [17] filtering techniques.  ... 
doi:10.1007/978-3-642-23400-2_33 fatcat:o7bt43aitbarng65f2aw75m5aq
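
As a reading aid for the NNMF-based filtering mentioned at the end of the snippet, here is a toy single-machine sketch using Lee-Seung multiplicative updates. The paper's actual contribution is a distributed OpenMP/MPI formulation with far stronger scaling, and treating unobserved ratings as zeros (as below) is a simplification.

```python
import numpy as np

def nmf_mu(R, k=10, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for R ≈ W H under Frobenius loss.
    R is a (users x items) non-negative rating matrix."""
    m, n = R.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, k)), rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ R) / (W.T @ W @ H + eps)   # update item factors
        W *= (R @ H.T) / (W @ H @ H.T + eps)   # update user factors
    return W, H

# Predicted rating of user u for item i is (W @ H)[u, i].
```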

Nearly second-order asymptotic optimality of sequential change-point detection with one-sample updates [article]

Yang Cao, Liyan Xie, Yao Xie, Huan Xu
2017 arXiv   pre-print
When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization  ...  Numerical and real data examples validate our theory.  ...  d ; "ACM": T ACM (b) with Γ = R d ; "ASR-L1": T ASR (b) with Γ = {θ : θ 1 ≤ 5}; "ACM-L1": T ACM (b) with Γ = {θ : θ 1 ≤ 5}. p is the proportion of non-zero entries in θ.  ... 
arXiv:1705.06995v4 fatcat:7xicjnig4fe5nkjwxy25pn3r44
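
For orientation, the classical fixed-parameter baseline behind such detection procedures is the CUSUM recursion; the paper's procedures replace the known post-change parameter below with a non-anticipating online estimate updated one sample at a time. Parameter names are illustrative.

```python
import numpy as np

def cusum(xs, pre_mean=0.0, post_mean=1.0, sigma=1.0, b=10.0):
    """Classical CUSUM with known pre/post-change Gaussian means.
    Declares a change when the running statistic crosses threshold b."""
    s = 0.0
    for t, x in enumerate(xs, 1):
        # log-likelihood ratio of N(post_mean, sigma^2) vs N(pre_mean, sigma^2)
        llr = ((x - pre_mean) ** 2 - (x - post_mean) ** 2) / (2 * sigma ** 2)
        s = max(0.0, s + llr)   # restart from zero when evidence is negative
        if s >= b:
            return t            # stopping time: change declared at sample t
    return None
```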

A Robust Probabilistic Model for Motion Layer Separation in X-ray Fluoroscopy [chapter]

Peter Fischer, Thomas Pohl, Thomas Köhler, Andreas Maier, Joachim Hornegger
2015 Lecture Notes in Computer Science  
We show that a robust penalty function is required in the data term to deal with noise and shortcomings of the image formation model.  ...  Given the motion of each layer, it is state of the art to compute the layer separation by minimizing a least-squares objective function.  ...  The concepts and information presented in this paper are based on research and are not commercially available.  ... 
doi:10.1007/978-3-319-19992-4_22 fatcat:iu6ja2bowzhjzobbhuftkgr4re
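
One standard way to realize the robust data term the abstract argues for is to swap the squared residual for a robust penalty $\rho$ and solve by iteratively reweighted least squares:

\[ \min_{\{L_\ell\}} \sum_{p} \rho\Big( I(p) - \sum_{\ell} L_\ell(W_\ell\, p) \Big), \qquad w_p^{(k)} = \frac{\rho'\big(r_p^{(k)}\big)}{r_p^{(k)}}, \]

where each IRLS pass solves a weighted least-squares layer separation, with the weights $w_p$ down-weighting outlier pixels. The symbols ($I$ for the fluoroscopy frame, $L_\ell$ for layers, $W_\ell$ for layer motions, $r_p$ for residuals) are generic notation, not taken from the paper.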

Augmented $\ell_1$ and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm

Ming-Jun Lai, Wotao Yin
2013 SIAM Journal on Imaging Sciences  
This paper studies the long-existing idea of adding a nice smooth function to "smooth" a non-differentiable objective function in the context of sparse optimization, in particular, the minimization of $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$, where $x$ is a vector, as well as the minimization of $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$, where $X$ is a matrix and $\|X\|_*$ and $\|X\|_F$ are the nuclear and Frobenius norms of $X$, respectively.  ...  The former dynamically sets the step size h in (63) by the Barzilai-Borwein method with nonmonotone line search using techniques from [43].  ... 
doi:10.1137/120863290 fatcat:fynlqf3cvff6hm2chn5xbvx5gy
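
The augmented objective has a closed-form proximal map, which is part of what makes first-order methods attractive here: soft-thresholding followed by a uniform shrink. Below is a hedged NumPy sketch with a constant step size rather than the paper's Barzilai-Borwein rule; function names are illustrative.

```python
import numpy as np

def prox_aug_l1(v, t, alpha):
    """Prox of g(x) = ||x||_1 + ||x||_2^2 / (2*alpha): soft-threshold by t,
    then shrink by alpha/(alpha+t). (Elementary one-dimensional calculus.)"""
    return (alpha / (alpha + t)) * np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_aug_l1(A, b, alpha=10.0, iters=500):
    """Proximal gradient (ISTA) sketch for min 0.5*||Ax - b||^2 + g(x)."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2      # step = 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = prox_aug_l1(x - t * A.T @ (A @ x - b), t, alpha)
    return x
```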

Compressive sensing: From theory to applications, a survey

Saad Qaisar, Rana Muhammad Bilal, Wafa Iqbal, Muqaddas Naureen, Sungyoung Lee
2013 Journal of Communications and Networks  
This article gives a brief background on the origins of this idea, reviews the basic mathematical foundation of the theory and then goes on to highlight different areas of its application with a major emphasis on communications and network domain.  ...  In [109], the authors estimate the missing round trip time (RTT) measurements in computer networks using doubly non-negative (DN) matrix completion and compressed sensing.  ... 
doi:10.1109/jcn.2013.000083 fatcat:mlxbvwfxivbubdbtzkqxgx5c3y

Efficient Primal-Dual Algorithms for Large-Scale Multiclass Classification [article]

Dmitry Babichev, Dmitrii Ostrovskii (SIERRA, Inria, PSL), Francis Bach
2019 arXiv   pre-print
a favorable accuracy bound; (iii) devising non-uniform sampling schemes to approximate the matrix products.  ...  Our focus is on a special class of losses that includes, in particular, the multiclass hinge and logistic losses.  ...  Acknowledgments DB and FB acknowledge support from the European Research Council (grant SEQUOIA 724063).  ... 
arXiv:1902.03755v1 fatcat:cexbqh2gprbajnmqyxng2ogfky
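
For concreteness, the multiclass hinge loss in that special class is usually taken in the Crammer-Singer form; here is a minimal sketch on a single example (names are illustrative, not the paper's API).

```python
import numpy as np

def multiclass_hinge(scores, y):
    """Crammer-Singer multiclass hinge on one example: scores is the
    K-vector of class scores, y the index of the true label. The loss is
    max over wrong classes of (1 + margin violation), clipped at zero."""
    margins = 1.0 + scores - scores[y]
    margins[y] = 0.0                       # exclude the true class
    return max(0.0, margins.max())
```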

Krüppel-like transcription factors in the nervous system: Novel players in neurite outgrowth and axon regeneration

Darcie L. Moore, Akintomide Apara, Jeffrey L. Goldberg
2011 Molecular and Cellular Neuroscience  
With at least 15 of 17 KLF family members expressed in neurons, it will be important for us to determine how this complex family functions to regulate the intricate gene programs of axon growth and regeneration  ...  to compensate for another's function.  ...  ., and P30 EY014801 to Univ. of Miami), by the National Institute of Neurological Disorders and Stroke(NS061348, J.L.G.), and an unrestricted grant from Research to Prevent Blindness to the Univ. of Miami  ... 
doi:10.1016/j.mcn.2011.05.005 pmid:21635952 pmcid:PMC3143062 fatcat:cr6fvp5y75eclffs4twvkkxa6m

High Dimensional Optimization through the Lens of Machine Learning [article]

Felix Benning
2021 arXiv   pre-print
We build intuition on quadratic models to figure out which methods are suited for non-convex optimization, and develop convergence proofs on convex functions for this selection of methods.  ...  With this theoretical foundation for stochastic gradient descent and momentum methods, we try to explain why the methods used commonly in the machine learning field are so successful.  ...  It limits the rate of change of the gradient, and thus allows us to formulate and optimize over an upper bound of the loss function, resulting in learning rate $\frac{1}{L}$.  ... 
arXiv:2112.15392v1 fatcat:4v4s7z3jyrb6dlhwbgd3mcpwyi
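
The $\tfrac{1}{L}$ learning rate follows from the descent lemma: if $\nabla f$ is $L$-Lipschitz, then

\[ f(y) \le f(x) + \langle \nabla f(x), y - x \rangle + \tfrac{L}{2}\|y - x\|^2, \]

and minimizing the right-hand side over $y$ yields exactly the gradient step $y = x - \tfrac{1}{L}\nabla f(x)$.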

Convex Optimization without Projection Steps [article]

Martin Jaggi
2011 arXiv   pre-print
We obtain matching upper and lower bounds of Θ(1/ϵ) for the sparsity for ℓ1-problems.  ...  The method allows us to understand the sparsity of approximate solutions for any ℓ1-regularized convex optimization problem (and for optimization over the simplex), expressed as a function of the approximation  ...  Optimizing over Non-Negative Matrix Factorizations.  ... 
arXiv:1108.1170v6 fatcat:w7wiokyl2fdureamsv25oneywu
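
The projection-free method in question is Frank-Wolfe, and the Θ(1/ϵ) sparsity bound is visible directly in the iteration: each step mixes in a single vertex of the feasible set, so k steps yield a k-sparse iterate with O(1/k) suboptimality. A minimal sketch over the ℓ1 ball (function and parameter names are illustrative):

```python
import numpy as np

def frank_wolfe_l1(grad, x0, radius=1.0, iters=100):
    """Frank-Wolfe over the l1 ball ||x||_1 <= radius: no projections,
    just a linear minimization oracle whose answer is always a vertex."""
    x = x0.copy()
    for k in range(iters):
        g = grad(x)
        i = int(np.argmax(np.abs(g)))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(g[i])   # vertex minimizing <g, s>
        gamma = 2.0 / (k + 2.0)          # standard step-size schedule
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Example use for least squares: grad = lambda x: A.T @ (A @ x - b)
```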
Showing results 1 — 15 out of 124 results