
Restricted Boltzmann Machines for Gender Classification [chapter]

Jordi Mansanet, Alberto Albiol, Roberto Paredes, Mauricio Villegas, Antonio Albiol
2014 Lecture Notes in Computer Science  
Moreover, to further increase classification accuracy, we have run experiments in which an SVM is fed with the non-linear mapping obtained by the GRBM in a tandem configuration.  ...  The GRBM is presented together with practical learning tricks that improve its learning capability and speed up training.  ...  Note that the GRBM is an unsupervised technique that yields a non-linear mapping of the original representation space.  ...
doi:10.1007/978-3-319-11758-4_30 fatcat:fijsl3fxtzhqncf3oawf2gg3aa
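
As a rough illustration of the tandem setup described above (unsupervised RBM features feeding an SVM), the following sketch uses scikit-learn; note that scikit-learn only ships a Bernoulli RBM, which stands in here for the Gaussian RBM (GRBM), and all hyperparameters are assumptions rather than the authors' settings.

    # Hedged sketch: unsupervised RBM features feeding an SVM ("tandem" configuration).
    # BernoulliRBM is a stand-in for the paper's Gaussian RBM; hyperparameters are illustrative.
    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.svm import SVC

    tandem = Pipeline([
        ("scale", MinMaxScaler()),   # RBM expects inputs in [0, 1]
        ("rbm", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20)),
        ("svm", SVC(kernel="rbf", C=1.0)),
    ])
    # tandem.fit(X_train, y_train); tandem.score(X_test, y_test)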

Convolutional networks and applications in vision

Yann LeCun, Koray Kavukcuoglu, Clement Farabet
2010 Proceedings of 2010 IEEE International Symposium on Circuits and Systems  
We describe new unsupervised learning algorithms and new non-linear stages that allow ConvNets to be trained with very few labeled samples.  ...  Each stage in a ConvNet is composed of a filter bank, some non-linearities, and feature pooling layers. With multiple stages, a ConvNet can learn multi-level hierarchies of features.  ...  More interestingly, how could an artificial vision system learn appropriate internal representations automatically, the way animals and humans seem to learn by simply looking at the world?  ...
doi:10.1109/iscas.2010.5537907 dblp:conf/iscas/LeCunKF10 fatcat:dnus6yikzzbzxjpnbcblkq45o4
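
For readers unfamiliar with the stage structure mentioned in the snippet (filter bank, non-linearity, feature pooling), here is a minimal sketch of one such stage; PyTorch and the layer sizes are assumptions for illustration only, not the authors' implementation.

    # One ConvNet stage: filter bank -> point-wise non-linearity -> feature pooling.
    import torch
    import torch.nn as nn

    stage = nn.Sequential(
        nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5),   # filter bank
        nn.Tanh(),                                                  # point-wise non-linearity
        nn.MaxPool2d(kernel_size=2),                                # feature pooling
    )
    features = stage(torch.randn(1, 3, 32, 32))                     # -> shape (1, 16, 14, 14)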

Discriminative Autoencoder for Feature Extraction: Application to Character Recognition [article]

Anupriya Gogna, Angshul Majumdar
2019 arXiv   pre-print
Use of supervised discriminative learning ensures that the learned representation is robust to variations commonly encountered in image datasets.  ...  The efficiency of our feature extraction algorithm ensures high classification accuracy even with simple classification schemes like KNN (K-nearest neighbor).  ...  Our design constrains the derived feature vectors to be consistent with the corresponding class labels via a linear mapping.  ...
arXiv:1912.12131v1 fatcat:guftes4dbzafrb6urnmfynjfz4
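
A minimal sketch of the idea described in the snippet, a reconstruction objective plus a linear map from the learned code to class labels, is given below; the architecture sizes, activation, and loss weight alpha are assumptions, not the authors' design.

    # Discriminative autoencoder sketch: reconstruction loss + linear label-consistency term.
    import torch
    import torch.nn as nn

    class DiscriminativeAE(nn.Module):
        def __init__(self, dim_in=784, dim_code=64, n_classes=10):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(dim_in, dim_code), nn.Sigmoid())
            self.dec = nn.Linear(dim_code, dim_in)
            self.lin = nn.Linear(dim_code, n_classes)   # linear mapping from code to labels

        def forward(self, x):
            z = self.enc(x)
            return self.dec(z), self.lin(z)

    def loss_fn(model, x, y, alpha=0.1):
        x_hat, logits = model(x)
        return nn.functional.mse_loss(x_hat, x) + alpha * nn.functional.cross_entropy(logits, y)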

Deep learning for brain decoding

Orhan Firat, Ilke Oztekin, Fatos T. Yarman Vural
2014 2014 IEEE International Conference on Image Processing (ICIP)  
We employ sparse autoencoders for unsupervised feature learning, leveraging unlabeled fMRI data to learn efficient, non-linear representations as the building blocks of a deep learning architecture by  ...  In this study, we explore deep learning methods for fMRI classification tasks in order to reduce dimensions of feature space, along with improving classification performance for brain decoding.  ...  Classification performances of Non-Linear Feature Mapping (NLFM) with classical methods for fMRI ML tasks (MVPA) and a local-linear feature extraction method (MAD).  ... 
doi:10.1109/icip.2014.7025563 dblp:conf/icip/FiratOY14 fatcat:6rx6oy5lhzef5bey6pl57vco6a
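
Since the snippet mentions sparse autoencoders for unsupervised feature learning, a generic sparse-autoencoder objective is sketched below; the L1 penalty stands in for the classical KL sparsity term, and the voxel/feature dimensions and weight beta are assumptions.

    # Sparse autoencoder objective: reconstruction + sparsity penalty on the hidden code.
    import torch
    import torch.nn as nn

    enc = nn.Sequential(nn.Linear(1000, 100), nn.Sigmoid())   # e.g. 1000 voxels -> 100 features (assumed sizes)
    dec = nn.Linear(100, 1000)

    def sparse_ae_loss(x, beta=1e-3):
        z = enc(x)
        recon = nn.functional.mse_loss(dec(z), x)
        sparsity = z.abs().mean()                  # L1 stand-in for a KL sparsity term
        return recon + beta * sparsity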

Unsupervised feature learning for optical character recognition

Devendra K Sahu, C. V. Jawahar
2015 2015 13th International Conference on Document Analysis and Recognition (ICDAR)  
In this work, we investigate the possibility of learning an appropriate set of features for designing an OCR for a specific language.  ...  We learn the language-specific features from the data with no supervision. This enables the seamless adaptation of the architecture across languages.  ...  Each of f1, f2, f3 can be learned by an RBM, which gives f the characteristics of an efficient, compact, non-linear and hierarchical representation [2].  ...
doi:10.1109/icdar.2015.7333920 dblp:conf/icdar/SahuJ15 fatcat:353hqn3x6nhuhjgiyc6qq7bb4y
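
The snippet's composition f = f3(f2(f1(x))), with each fi learned by an RBM, can be sketched as greedy layer-wise stacking; scikit-learn's BernoulliRBM and the layer sizes are stand-ins, and the input is assumed to be scaled to [0, 1].

    # Greedy layer-wise stacking of RBMs: f(x) = f3(f2(f1(x))).
    from sklearn.neural_network import BernoulliRBM

    def stack_rbms(X, sizes=(512, 256, 128)):
        layers, H = [], X                          # X assumed scaled to [0, 1]
        for n in sizes:
            rbm = BernoulliRBM(n_components=n, n_iter=20).fit(H)
            H = rbm.transform(H)                   # hidden activations feed the next RBM
            layers.append(rbm)
        return layers, H                           # H is the learned representation f(X)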

Deep Perceptual Mapping for Thermal to Visible Face Recognition

M. Saquib Sarfraz, Rainer Stiefelhagen
2015 Procedings of the British Machine Vision Conference 2015  
For an input x ∈ R^d, each layer outputs a non-linear projection using the learned projection matrix W and the non-linear activation function g(·).  ...  Our model attempts to learn a non-linear mapping from the visible to the thermal spectrum while preserving identity information.  ...
doi:10.5244/c.29.9 dblp:conf/bmvc/SarfrazS15 fatcat:jvpbd3bd35drpccgywlpbso56u
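
The per-layer mapping quoted in the snippet, a non-linear projection g(Wx) of an input x ∈ R^d, amounts to composing affine maps with a non-linearity; the sketch below assumes tanh for g and adds a bias term for illustration.

    # Per-layer non-linear projection h = g(W x + b), composed into a deep mapping.
    import numpy as np

    def layer(x, W, b, g=np.tanh):
        return g(W @ x + b)

    def deep_mapping(x, params):
        # params: list of (W, b) pairs; the composition gives the learned
        # non-linear visible-to-thermal mapping.
        for W, b in params:
            x = layer(x, W, b)
        return x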

Non-linearity matters: a deep learning solution to the generalization of hidden brain patterns across population cohorts [article]

Mariam Zabihi, Seyed Mostafa Kia, Thomas Wolfers, Richard Dinga, Alberto Llera, Danilo Bzdok, Christian Beckmann, Andre Marquand
2021 bioRxiv   pre-print
Finally, our results show that, with careful implementation, nonlinear features can provide complementary information that is not accessible to purely linear methods.  ...  We showed that the model learned not only salient features such as age but also high-level behavioral characteristics, and that this representation was highly generic and generalizable to an independent  ...  We showed that our model learned not only salient features like age but also high-level behavioral features, and that this representation was highly generic and generalized to an independent dataset.  ...
doi:10.1101/2021.03.10.434856 fatcat:cv3ymi57czhgxgqk3faou3dfay

Learning invariant features through topographic filter maps

Koray Kavukcuoglu, Marc'Aurelio Ranzato, Rob Fergus, Yann LeCun
2009 2009 IEEE Conference on Computer Vision and Pattern Recognition  
We propose a method that automatically learns such feature extractors in an unsupervised fashion by simultaneously learning the filters and the pooling units that combine multiple filter outputs together  ...  The first stage is often composed of three main modules: (1) a bank of filters (often oriented edge detectors); (2) a non-linear transform, such as point-wise squashing functions, quantization, or normalization  ...  Instead, learned representations using IPSD seem to be quite robust to shifts, with an overall lower area under the curve.  ...
doi:10.1109/cvpr.2009.5206545 dblp:conf/cvpr/KavukcuogluRFL09 fatcat:ze5eoanhknc4nhrbuwgneti5eu
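
One way to read the snippet's joint filter-and-pooling learning is that each pooling unit combines squared filter responses over a local neighbourhood of a topographic map, roughly p_i = sqrt(sum_j z_j^2) over that neighbourhood; the sketch below illustrates only this pooling step, and the grid shape and neighbourhood radius are assumptions.

    # Topographic pooling sketch: pool squared filter outputs over local map neighbourhoods.
    import numpy as np

    def topographic_pool(z, grid_shape=(16, 16), radius=1):
        Z2 = (z ** 2).reshape(grid_shape)          # z: 256 filter responses laid out on a 16x16 map
        pooled = np.zeros(grid_shape)
        for i in range(grid_shape[0]):
            for j in range(grid_shape[1]):
                sl = Z2[max(i - radius, 0):i + radius + 1,
                        max(j - radius, 0):j + radius + 1]
                pooled[i, j] = np.sqrt(sl.sum())   # invariant pooled feature
        return pooled.ravel()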

Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition [article]

Koray Kavukcuoglu, Marc'Aurelio Ranzato, Yann LeCun
2010 arXiv   pre-print
The applicability of these methods to visual object recognition tasks has been limited because of the prohibitive cost of the optimization algorithms required to compute the sparse representation.  ...  In this work we propose a simple and efficient algorithm to learn basis functions.  ...  Note that a linear mapping cannot produce sparse representations over an overcomplete set because of the non-orthogonality of the filters; therefore, a non-linear mapping is required.  ...
arXiv:1010.3467v1 fatcat:jazauxq4eng6dloeb6mnvzrvxm
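
For context on why sparse-code inference is costly, the standard ISTA iteration is sketched below; this is the kind of iterative optimisation a fast feed-forward predictor would approximate, not the paper's own algorithm, and the step size and iteration count are illustrative.

    # ISTA for sparse coding: minimise 0.5*||x - D z||^2 + lam*||z||_1 over the code z.
    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(x, D, lam=0.1, n_iter=100):
        L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the smooth part
        z = np.zeros(D.shape[1])
        for _ in range(n_iter):
            z = soft_threshold(z + D.T @ (x - D @ z) / L, lam / L)
        return z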

A Novel Approach for Efficient SVM Classification with Histogram Intersection Kernel

Gaurav Sharma, Frederic Jurie
2013 Procedings of the British Machine Vision Conference 2013  
The kernel trick, commonly used in machine learning and computer vision, enables learning of non-linear decision functions without having to explicitly map the original data to a high-dimensional space  ...  In this paper, we propose a novel approach for learning a non-linear SVM corresponding to the histogram intersection kernel without using the kernel trick.  ...  With the kernel trick, a linear decision boundary in the feature space is learned, which corresponds to a non-linear decision boundary in the input space.  ...
doi:10.5244/c.27.10 dblp:conf/bmvc/SharmaJ13 fatcat:xzpddiqixvdy5h7ydt6wdxic3q
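
The histogram intersection kernel itself is K(x, y) = sum_i min(x_i, y_i); the sketch below plugs it into scikit-learn's SVC as a callable kernel, which still relies on the ordinary kernel trick and therefore does not reproduce the paper's trick-free learning scheme.

    # Histogram intersection kernel used with a standard kernel SVM.
    import numpy as np
    from sklearn.svm import SVC

    def hik(X, Y):
        # Gram matrix of pairwise histogram intersections
        return np.array([[np.minimum(x, y).sum() for y in Y] for x in X])

    clf = SVC(kernel=hik)
    # clf.fit(train_histograms, labels); clf.predict(test_histograms)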

Representation Learning with Deep Extreme Learning Machines for Efficient Image Set Classification [article]

Muhammad Uzair, Faisal Shafait, Bernard Ghanem, Ajmal Mian
2015 arXiv   pre-print
We learn the non-linear structure of image sets with Deep Extreme Learning Machines (DELM) that are very efficient and generalize well even on a limited number of training samples.  ...  In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data.  ...  The proposed representation is based on deep Extreme Learning Machines and automatically learns the non-linear structure of image sets.  ... 
arXiv:1503.02445v3 fatcat:2x3pddwt3jagldi2asogmgnvlq
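
As background for the snippet, a single Extreme Learning Machine layer draws random input weights and solves for the output weights in closed form; the sketch below shows only this basic ELM step, not the paper's deep, set-based extension, and the hidden size is an assumption.

    # Basic ELM: random hidden layer + closed-form least-squares output weights.
    import numpy as np

    def elm_fit(X, T, n_hidden=500, seed=0):
        # X: (n_samples, n_features), T: (n_samples, n_classes) one-hot targets
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)                     # random non-linear feature map
        beta = np.linalg.pinv(H) @ T               # output weights in closed form
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta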

An alternative text representation to TF-IDF and Bag-of-Words [article]

Zhixiang Xu, Minmin Chen, Kilian Q. Weinberger, Fei Sha
2013 arXiv   pre-print
With this approach, dCoT learns to reconstruct frequent words from co-occurring infrequent words and maps the high-dimensional sparse sBoW vectors into a low-dimensional dense representation.  ...  In this paper we propose Dense Cohort of Terms (dCoT), an unsupervised algorithm to learn improved sBoW document features. dCoT explicitly models absent words by removing and reconstructing random sub-sets  ...  We refer to our feature learning algorithm as dCoT (Dense Cohort of Terms).  ...  Recursive re-application: The linear mapping in eq.  ...
arXiv:1301.6770v1 fatcat:bekjfg2kg5aszkjpvgk2xnrf2i
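
To make the reconstruction idea concrete, the sketch below learns a linear map that rebuilds full sBoW vectors from copies with words randomly removed, via explicit corruption and ridge regression; this is a simplified stand-in for dCoT's closed-form marginalized solution, and the drop probability and regularizer are assumptions.

    # Simplified dCoT-style reconstruction: ridge regression from corrupted to original sBoW.
    import numpy as np

    def dcot_like(X, drop_prob=0.5, reg=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        mask = rng.random(X.shape) > drop_prob
        X_corrupt = X * mask                       # randomly remove words
        d = X.shape[1]
        W = np.linalg.solve(X_corrupt.T @ X_corrupt + reg * np.eye(d),
                            X_corrupt.T @ X)       # reconstruction map: corrupted -> original
        return X @ W                               # denser reconstructed document features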

Unsupervised Temporal Feature Learning Based on Sparse Coding Embedded BoAW for Acoustic Event Recognition

Liwen Zhang, Jiqing Han, Shiwen Deng
2018 Interspeech 2018  
Compared with frame-level feature learning methods, its temporal information is not preserved.  ...  In this paper, we proposed a novel unsupervised temporal feature learning method, which can effectively capture the temporal dynamics of an entire audio signal with arbitrary duration by building direct  ...  Non-Linear Feature Mapping: In order to incorporate non-linearities, we employ non-linear feature maps [23] on each smoothed BoAW feature.  ...
doi:10.21437/interspeech.2018-1243 dblp:conf/interspeech/ZhangHD18 fatcat:aydiltvs65ek3os5ptzeszdsma
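
One common explicit non-linear feature map for histogram-like features such as BoAW is the additive chi-squared approximation; whether this matches the map cited as [23] in the paper is an assumption, and the sketch below is purely illustrative.

    # Explicit non-linear feature map applied to histogram-style (BoAW) features.
    import numpy as np
    from sklearn.kernel_approximation import AdditiveChi2Sampler

    boaw = np.abs(np.random.rand(10, 64))          # toy smoothed BoAW histograms (non-negative)
    mapped = AdditiveChi2Sampler(sample_steps=2).fit_transform(boaw)
    # `mapped` can now be fed to a linear classifier in place of the raw histograms.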

From sBoW to dCoT marginalized encoders for text representation

Zhixiang (Eddie) Xu, Minmin Chen, Kilian Q. Weinberger, Fei Sha
2012 Proceedings of the 21st ACM international conference on Information and knowledge management - CIKM '12  
With this approach, dCoT learns to reconstruct frequent words from co-occurring infrequent words and maps the high-dimensional sparse sBoW vectors into a low-dimensional dense representation.  ...  In this paper we propose Dense Cohort of Terms (dCoT), an unsupervised algorithm to learn improved sBoW document features. dCoT explicitly models absent words by removing and reconstructing random sub-sets  ...  We refer to our feature learning algorithm as dCoT (Dense Cohort of Terms).  ...  Recursive re-application: The linear mapping in eq.  ...
doi:10.1145/2396761.2398536 dblp:conf/cikm/XuCWS12 fatcat:zbdqxp2ts5cnlhxsga3d25dex4