123,586 Hits in 2.5 sec

Learning Reductions to Sparse Sets [chapter]

Harry Buhrman, Lance Fortnow, John M. Hitchcock, Bruno Loff
2013 Lecture Notes in Computer Science  
We furthermore show that if Sat disjunctive truth-table (or majority truth-table) reduces to a sparse set then Sat ≤_m^p LT_1, and hence a collapse of PH to P^NP also follows.  ...  We take up this question and use results from computational learning theory to show that if Sat ≤_m^p LT_1 then PH = P^NP.  ...  Disjunctive truth-table (dtt) reductions to sparse sets are powerful enough to simulate bounded truth-table reductions to sparse sets [2].  ... 
doi:10.1007/978-3-642-40313-2_23 fatcat:ocm23fh4lzfyppn3w2qp5q4nyq

A General Approach for Achieving Supervised Subspace Learning in Sparse Representation

Jianshun Sang, Dezhong Peng, Yongsheng Sang
2019 IEEE Access  
INDEX TERMS Subspace learning, sparse representation, supervised algorithm, manifold learning, dimensionality reduction.  ...  Over the past few decades, a large family of subspace learning algorithms based on dictionary learning have been designed to provide different solutions to learning subspace features.  ...  This is the key step to transform these unsupervised sparse subspace learning algorithms into supervised dimensionality reduction techniques.  ... 
doi:10.1109/access.2019.2898923 fatcat:teu4jcdqs5anbcscwlbhaykgdu

A Deep Neural Network Architecture Using Dimensionality Reduction with Sparse Matrices [chapter]

Wataru Matsumoto, Manabu Hagiwara, Petros T. Boufounos, Kunihiko Fukushima, Toshisada Mariyama, Zhao Xiongxin
2016 Lecture Notes in Computer Science  
A computationally-efficient sparse dimensionality reduction matrix.  ...  These were connected to 500 hidden units, so that the visible and hidden layer sizes were 784 and 500. For the unsupervised training, we set the momentum to 0.5 and the learning rate to ...  ... 
doi:10.1007/978-3-319-46681-1_48 fatcat:e3e7fkwkrvgadliavosmsazmka

Manifold-preserving graph reduction for sparse semi-supervised learning

Shiliang Sun, Zakria Hussain, John Shawe-Taylor
2014 Neurocomputing  
Moreover, we apply manifold-preserving sparse graphs to semi-supervised learning and propose sparse Laplacian support vector machines (SVMs).  ...  results on multiple data sets which indicate their feasibility for classification.  ...  As labeled examples are usually few in semi-supervised learning settings, we include all the labeled examples in the sparse graph.  ... 
doi:10.1016/j.neucom.2012.08.070 fatcat:xty2tsh5nzbfhpjra7vbnvosey

Gene Expression Data Classification Using Discriminatively Regularized Sparse Subspace Learning

Chunming Xu
2011 Zenodo  
To overcome this limitation, we develop a novel dimensionality reduction algorithm, namely discriminatively regularized sparse subspace learning (DR-SSL), in this paper.  ...  The proposed DR-SSL algorithm can not only make use of the sparse representation to model the data, but can also effectively employ the label information to guide the procedure of dimensionality reduction  ...  Sparse Subspace Learning: A sparse subspace learning algorithm attempts to find a projection matrix which maps high-dimensional data to a lower-dimensional space for classification problems.  ... 
doi:10.5281/zenodo.1333725 fatcat:vjwxcqbxcrffhegsf4ftairwju

Sparsely Preserving Based Semi-supervised Dimensionality Reduction

Pingrong Lin, Yaxin Sun
2017 DEStech Transactions on Engineering and Technology Research  
In this paper, a new sparse subspace learning algorithm called Sparsely Preserving Based Semi-Supervised Dimensionality Reduction (SPSSDR) is proposed by adding the sparsity information into linear discriminant  ...  Sparse subspace learning has drawn more and more attention recently.  ...  In this paper, we propose a discriminant Sparsely Preserving Based Semi-Supervised Dimensionality Reduction (SPSSDR) by combining sparse subspace learning and semi-supervised learning.  ... 
doi:10.12783/dtetr/iceta2016/6985 fatcat:lschzl3kvngl3apzf72m7r6byu

HYPERSPECTRAL IMAGE CLASSIFICATION BASED ON MANIFOLD DATA ANALYSIS AND SPARSE SUBSPACE PROJECTION

ZHENG Zhijun, PENG Yanbin
2021 International Journal of Engineering Technologies and Management Research  
The new method combines sparse coding and manifold learning to generate features with better classification ability.  ...  To preserve the geometric structure of the manifold, the objective function is regularized by the manifold learning method.  ...  All sample points are projected into a low-dimensional space by learning the dimensionality reduction matrix from the training set.  ... 
doi:10.29121/ijetmr.v8.i9.2021.1040 fatcat:ba6j54sypfbjrnebr4v6xj36rm

Automatic Eigentemplate Learning for Sparse Template Tracker [chapter]

Keiji Sakabe, Tomoyuki Taguchi, Takeshi Shakunaga
2009 Lecture Notes in Computer Science  
Automatic eigentemplate learning is discussed for a sparse template tracker.  ...  However, it has not been easy to prepare an eigentemplate automatically for arbitrary image sequences. This paper provides a feasible solution to this problem in the framework of sparse template tracking.  ...  Reduction of Learning Set: Once a sequence of S-templates is extracted stably, the next problem is how to select the learning set from the sequence.  ... 
doi:10.1007/978-3-540-92957-4_62 fatcat:d3eu3wzo5jcvbnckab2hsexvjy

Coherence regularized dictionary learning

Mansour Nejati, Shadrokh Samavi, S. M. Reza Soroushmehr, Kayvan Najarian
2016 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
set, and learning run time.  ...  Introduction: Dictionary learning. Objective: adapting a dictionary to data for their sparse representations.  ... 
doi:10.1109/icassp.2016.7472572 dblp:conf/icassp/NejatiSSN16 fatcat:dahydoei6vd75pgvezmp2k5ez4

Unsupervised Feature Learning by Deep Sparse Coding [chapter]

Yunlong He, Koray Kavukcuoglu, Yun Wang, Arthur Szlam, Yanjun Qi
2014 Proceedings of the 2014 SIAM International Conference on Data Mining  
In this paper, we propose a new unsupervised feature learning framework, namely Deep Sparse Coding (DeepSC), that extends sparse coding to a multi-layer architecture for visual object recognition tasks  ...  As a result, the new method is able to learn multiple layers of sparse representations of the image which capture features at a variety of abstraction levels and simultaneously preserve the spatial smoothness  ...  First of all, we define the pooling function as a map from a set of sparse codes on a sampling grid to a set of dense codes on a new sampling grid.  ... 
doi:10.1137/1.9781611973440.103 dblp:conf/sdm/HeKWSQ14 fatcat:5m5tbo5tkbc77du4vw3z3p2olu

Kernel Regression with Sparse Metric Learning [article]

Rongqing Huang, Shiliang Sun
2017 arXiv   pre-print
Our work is the first to combine kernel regression with sparse metric learning. To verify the effectiveness of the proposed method, it is evaluated on 19 data sets for regression.  ...  It learns a Mahalanobis distance metric by a gradient descent procedure, which can simultaneously conduct dimensionality reduction and lead to good prediction results.  ...  It has the capability of learning a good distance metric while simultaneously removing noise in the data, leading to dimensionality reduction. KR_SML targets the objective of sparse metric learning directly.  ... 
arXiv:1712.09001v1 fatcat:dwnwrmeupvfolgfvtrjrsmosm4

Contextual Bandits with Sparse Data in Web setting [article]

Björn H Eriksson
2021 arXiv   pre-print
Five categories of methods are described, making it easy to choose how to address sparse data using contextual bandits, with a method available for modification in the specific setting of concern.  ...  This paper is a scoping study to identify current methods for handling sparse data with contextual bandits in web settings. The area is highly current, and state-of-the-art methods are identified.  ...  Dimensionality reduction: Sparse data can be handled with dimensionality reduction. A common technique is matrix factorization [23].  ... 
arXiv:2105.02873v1 fatcat:ewa5c6ag35edppez72kzx2meuu

Online Learning and Resource-Bounded Dimension: Winnow Yields New Lower Bounds for Hard Sets [chapter]

John M. Hitchcock
2006 Lecture Notes in Computer Science  
If we have a reduction from the unknown set to a concept in a learnable concept class, we can view the reduction as generating a sequence of examples, apply the learning algorithm to these examples, and  ...  use the learning algorithm's predictions to design a good betting strategy.  ...  to sparse sets: P_btt(SPARSE) ⊆ P_d(SPARSE).  ... 
doi:10.1007/11672142_33 fatcat:v47l6siel5haxih6ehriiljgx4

Online Learning and Resource‐Bounded Dimension: Winnow Yields New Lower Bounds for Hard Sets

John M. Hitchcock
2007 SIAM journal on computing (Print)  
If we have a reduction from the unknown set to a concept in a learnable concept class, we can view the reduction as generating a sequence of examples, apply the learning algorithm to these examples, and  ...  use the learning algorithm's predictions to design a good betting strategy.  ...  to sparse sets: P_btt(SPARSE) ⊆ P_d(SPARSE).  ... 
doi:10.1137/050647517 fatcat:guy3kru2bjalbfnihusdwyxs6e

LGE-KSVD: Flexible Dictionary Learning for Optimized Sparse Representation Classification

Raymond Ptucha, Andreas Savakis
2013 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops  
The dimensionality reduction matrix, sparse representation dictionary, sparse coefficients, and sparsity-based linear classifier are jointly learned through LGE-KSVD.  ...  The atom optimization process is redefined to have variable support, using graph embedding techniques to produce a more flexible and elegant dictionary learning algorithm.  ...  We wish to combine the dimensionality reduction matrix U from (2) with a method to learn a dictionary and sparse coefficients a.  ... 
doi:10.1109/cvprw.2013.126 dblp:conf/cvpr/PtuchaS13 fatcat:azcypst265auzithphqscec6uy
Showing results 1 — 15 out of 123,586 results