3,648 Hits in 7.6 sec

No more meta-parameter tuning in unsupervised sparse feature learning [article]

Adriana Romero, Petia Radeva, Carlo Gatta
2014 arXiv   pre-print
We propose a meta-parameter free, off-the-shelf, simple and fast unsupervised feature learning algorithm, which exploits a new way of optimizing for sparsity.  ...  Experiments on STL-10 show that the method presents state-of-the-art performance and provides discriminative features that generalize well.  ...  Meta-parameters to tune in state-of-the-art unsupervised feature learning methods.  ...
arXiv:1402.5766v1 fatcat:cmncgzcwzfgijl6x5xz2c2kkbq

Stacked unsupervised learning with a network architecture found by supervised meta-learning [article]

Kyle Luther, H. Sebastian Seung
2022 arXiv   pre-print
Stacked unsupervised learning (SUL) seems more biologically plausible than backpropagation, because learning is local to each layer.  ...  The hyperparameters of the network architecture are found by supervised meta-learning, which optimizes unsupervised clustering accuracy.  ...  Meta-learning of unsupervised learning rules Our work will rely on using a label-based clustering objective to evaluate and tune an unsupervised learning rule.  ... 
arXiv:2206.02716v1 fatcat:cvkk34c62bhofn3lsdnus3kuii

Meta-Parameter Free Unsupervised Sparse Feature Learning

Adriana Romero, Petia Radeva, Carlo Gatta
2015 IEEE Transactions on Pattern Analysis and Machine Intelligence  
We propose a meta-parameter free, off-the-shelf, simple and fast unsupervised feature learning algorithm, which exploits a new way of optimizing for sparsity.  ...  Experiments on CIFAR-10, STL-10 and UCMerced show that the method achieves the state-of-the-art performance, providing discriminative features that generalize well.  ...  This work has been supported in part by projects TIN2013-41751 and 2014-SGR-221.  ...
doi:10.1109/tpami.2014.2366129 pmid:26353006 fatcat:vdilc72ckbf5zixxp2hrqjwn6q

Unsupervised Deep Feature Extraction for Remote Sensing Image Classification

Adriana Romero, Carlo Gatta, Gustau Camps-Valls
2016 IEEE Transactions on Geoscience and Remote Sensing  
Therefore, we propose the use of greedy layer-wise unsupervised pre-training coupled with a highly efficient algorithm for unsupervised learning of sparse features.  ...  The algorithm is rooted in sparse representations and simultaneously enforces both population and lifetime sparsity of the extracted features.  ...  Diane Whited at the University of Montana for the VHR imagery used in some experiments of this paper.  ...
doi:10.1109/tgrs.2015.2478379 fatcat:sirgcd47m5fdralgdeea5zgc3u
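This entry distinguishes population sparsity (few features active per input) from lifetime sparsity (each feature active on few inputs). One common way to quantify either is Hoyer's sparseness measure applied row-wise or column-wise to an activation matrix. The sketch below is illustrative only: the activations are synthetic and the use of Hoyer's measure is an assumption, not necessarily the paper's actual criterion.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic non-negative activations: rows = samples, columns = learned features.
H = np.maximum(0.0, rng.normal(size=(1000, 64)) - 0.5)

def hoyer(v):
    # Hoyer's sparseness in [0, 1]: 0 for a uniform vector, 1 for a one-hot vector.
    n = v.size
    l1 = np.abs(v).sum()
    l2 = np.sqrt((v ** 2).sum())
    return (np.sqrt(n) - l1 / (l2 + 1e-12)) / (np.sqrt(n) - 1)

# Population sparsity: how concentrated each sample's activation pattern is.
pop = np.mean([hoyer(row) for row in H])
# Lifetime sparsity: how rarely each individual feature fires across samples.
life = np.mean([hoyer(col) for col in H.T])
print(round(pop, 2), round(life, 2))
```

Enforcing both notions at once, as the entry describes, amounts to penalizing low sparseness along both axes of this matrix during training.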

An Analysis of Single-Layer Networks in Unsupervised Feature Learning

Adam Coates, Andrew Y. Ng, Honglak Lee
2011 Journal of machine learning research  
More surprisingly, our best performance is based on K-means clustering, which is extremely fast, has no hyperparameters to tune beyond the model structure itself, and is very easy to implement.  ...  In this paper, however, we show that several simple factors, such as the number of hidden nodes in the model, may be more important to achieving high performance than the learning algorithm or the depth  ...  This is particularly notable since K-means requires no tuning whatsoever, unlike the sparse auto-encoder and sparse RBMs which require us to choose several hyper-parameters for best results.  ... 
dblp:journals/jmlr/CoatesNL11 fatcat:tkcb26p4v5elliw5kz7nl4p6uy
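The K-means pipeline this entry describes (learn centroids from image patches, then encode each patch against them) can be sketched in a few lines. This is an illustrative NumPy toy, not the authors' code: the data, centroid count, and plain Lloyd's loop are assumptions, though the "triangle" activation follows the form reported by Coates et al.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data standing in for whitened image patches (n_patches x patch_dim).
X = rng.normal(size=(500, 64))

def kmeans(X, k=16, iters=20):
    # Plain Lloyd's iterations; centroids initialized from random patches.
    C = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # squared distances
        assign = d.argmin(1)
        for j in range(k):
            members = X[assign == j]
            if len(members):
                C[j] = members.mean(0)
    return C

def triangle_encode(X, C):
    # "Triangle" activation: max(0, mean distance - distance to each centroid).
    d = np.sqrt(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1))
    return np.maximum(0.0, d.mean(1, keepdims=True) - d)

C = kmeans(X)
features = triangle_encode(X, C)
print(features.shape)  # (500, 16)
```

The only choices here are the model structure (number of centroids, patch size), which matches the entry's point that the method needs no further hyperparameter tuning.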

A Deep Bag-of-Features Model for Music Auto-Tagging [article]

Juhan Nam, Jorge Herrera, Kyogu Lee
2016 arXiv   pre-print
The first stage learns to project local spectral patterns of an audio track onto a high-dimensional sparse space in an unsupervised manner and summarizes the audio track as a bag-of-features.  ...  Feature learning and deep learning have drawn great attention in recent years as a way of transforming input data into more effective representations using learning algorithms.  ...  ACKNOWLEDGMENT This work was supported by Korea Advanced Institute of Science and Technology (Project No. G04140049).  ... 
arXiv:1508.04999v3 fatcat:dc2zlksbgnhbxmodrtgpvtd7te

Deep Learning Approach for Secondary Structure Protein Prediction based on First Level Features Extraction using a Latent CNN Structure

Adil Al-Azzawi
2017 International Journal of Advanced Computer Science and Applications  
on the unsupervised fashion, where the whole network can be fine-tuned in a supervised learning fashion.  ...  In this paper, our proposed approach, a latent deep learning approach, relies on detecting the first-level features using a Stacked Sparse Autoencoder.  ...  in the convolutional layer with the original protein data to learn more features than relying only on random or initialized filters for the convolutional layers in the main deep learning structure.  ...
doi:10.14569/ijacsa.2017.080402 fatcat:xvsujwctdfb6jaosorkheu375y

Unsupervised Feature Pre-training of the Scattering Wavelet Transform for Musical Genre Recognition

Mariusz Kleć, Danijel Koržinek
2014 Procedia Technology - Elsevier  
The pre-training phase is performed in an unsupervised manner. Next, the network is fine-tuned in a supervised way with respect to the genre classes. We used the GTZAN database for fine-tuning the network.  ...  This paper examines the use of Sparse Autoencoders (SAE) in the process of music genre recognition. We used the Scattering Wavelet Transform (SWT) as the initial signal representation.  ...  Unsupervised Feature Learning: Training an Artificial Neural Network (ANN) with multiple layers (i.e. more than 2 or 3 hidden layers) using backpropagation produces sub-optimal results in most practical  ...
doi:10.1016/j.protcy.2014.11.025 fatcat:342spcu2anbzdmnfclvd2llmxi
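The pre-train-then-fine-tune recipe in this entry can be illustrated with a minimal sparse autoencoder. This is a hedged sketch in plain NumPy, not the paper's model: the data is synthetic, and the tied-weight architecture, learning rate, and L1 sparsity penalty are illustrative choices. Supervised fine-tuning would then attach a classifier head and continue training with genre labels.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 32))          # toy stand-in for spectral frames

n_in, n_hid = X.shape[1], 16
W = rng.normal(scale=0.1, size=(n_in, n_hid))
b, c = np.zeros(n_hid), np.zeros(n_in)
lr, lam = 0.01, 1e-3                    # learning rate, sparsity weight

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(200):
    H = sigmoid(X @ W + b)              # encoder
    R = H @ W.T + c                     # tied-weight linear decoder
    err = R - X
    losses.append((err ** 2).mean() + lam * np.abs(H).mean())
    # Backprop; tied weights collect an encoder and a decoder gradient term.
    dR = 2 * err / err.size
    dH = dR @ W + lam * np.sign(H) / H.size
    dZ = dH * H * (1 - H)
    W -= lr * (X.T @ dZ + dR.T @ H)
    b -= lr * dZ.sum(0)
    c -= lr * dR.sum(0)

print(losses[0] > losses[-1])           # reconstruction + sparsity loss drops
```

The L1 term on the hidden code is what pushes activations toward the sparse regime these entries rely on; swapping it for a KL-divergence sparsity target is a common variant.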

Hierarchical Discriminative Deep Dictionary Learning

Ulises Rodriguez-Dominguez, Oscar Dalmau
2020 IEEE Access  
In addition, local sparse representation objectives are approximated during the forward pass, introducing local regularization.  ...  In deep dictionary learning multiple dictionaries are learned based on information at various levels of abstraction.  ...  are learned in an unsupervised way.  ... 
doi:10.1109/access.2020.3008841 fatcat:d2gugtehmbcuxagmfrmmbp2d2i

Unsupervised deep feature extraction of hyperspectral images

Adriana Romero, Carlo Gatta, Gustavo Camps-Valls
2014 2014 6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)  
This paper presents an effective unsupervised sparse feature learning algorithm to train deep convolutional networks on hyperspectral images.  ...  Index Terms-Convolutional networks, deep learning, sparse learning, feature extraction, hyperspectral image classification * The work of A.  ...  The algorithm learns discriminative features without requiring any meta-parameter tuning. Moreover, thanks to its computational efficiency, it can learn a large set of parameters.  ... 
doi:10.1109/whispers.2014.8077647 dblp:conf/whispers/RomeroGC14 fatcat:2wpbpmki55bevfm22mp3vajmfq

Reverse Image Search Using Deep Unsupervised Generative Learning and Deep Convolutional Neural Network

Aqsa Kiran, Shahzad Ahmad Qureshi, Asifullah Khan, Sajid Mahmood, Muhammad Idrees, Aqsa Saeed, Muhammad Assam, Mohamad Reda A. Refaai, Abdullah Mohamed
2022 Applied Sciences  
In the first phase, sparse auto-encoder (SAE), a deep generative model, is applied to RGB channels of each image for unsupervised representational learning.  ...  In the second phase, transfer learning is utilized by using VGG-16, a variant of deep convolutional neural network (CNN).  ...  Conflicts of Interest: The authors declare that they have no conflict of interest.  ... 
doi:10.3390/app12104943 fatcat:n2l6tfz5drc3bkjgodhcig3o6m

Review on Machine Learning for Analog Circuit Design

Nirali Hemant Patel, Charotar University of Science and Technology
2020 International Journal of Engineering Research and  
In this review paper, different methods of machine learning are discussed.  ...  The flow of the paper starts with a basic introduction to analog circuits and machine learning, then covers different techniques of machine learning, and afterwards the usage of machine learning in analog circuits  ...  Pros of linear regression: 1) Very fast 2) No parameter tuning 3) Easy to understand and highly interpretable. Model evaluation: the basic approaches are train-and-test on the same dataset and a train/test split  ...
doi:10.17577/ijertv9is050710 fatcat:t4y6274wbjfnfbqnc2guqrpyqa
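The snippet's points about linear regression (closed-form fit, no parameter tuning) and train/test-split evaluation can be shown concretely. The data, split ratio, and error metric below are illustrative assumptions, not from the review.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

# Train/test split: fit on the first 160 rows, evaluate on the held-out 40.
Xtr, Xte = X[:160], X[160:]
ytr, yte = y[:160], y[160:]

# Ordinary least squares has a closed-form solution -- nothing to tune.
A = np.c_[Xtr, np.ones(len(Xtr))]          # append a bias column
w, *_ = np.linalg.lstsq(A, ytr, rcond=None)

pred = np.c_[Xte, np.ones(len(Xte))] @ w
mse = float(((pred - yte) ** 2).mean())    # held-out error ~ noise variance
print(round(mse, 4))
```

Evaluating on the held-out rows rather than the training rows is what distinguishes the two "basic approaches" the snippet names.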

What is Hidden among Translation Rules

Libin Shen, Bowen Zhou
2013 Conference on Empirical Methods in Natural Language Processing  
Experimental results show about one point improvement on TER-BLEU over a strong baseline in Chinese-to-English translation.  ...  In this short paper, we propose a novel method to model rules as observed generation output of a compact hidden model, which leads to better generalization capability.  ...  The views, opinions, and/or findings contained in this article/presentation are those of the author/presenter and should not be interpreted as representing the official views or policies, either expressed  ...
dblp:conf/emnlp/ShenZ13 fatcat:mgl6fxg27zh3bimdhrdsq43hpm

Unsupervised Algorithms to Detect Zero-Day Attacks: Strategy and Application

Tommaso Zoppi, Andrea Ceccarelli, Andrea Bondavalli
2021 IEEE Access  
UNSUPERVISED META-LEARNING: Recent studies [41], [46] have started investigating applications of meta-learning for unsupervised anomaly detection.  ...  Does meta-learning help in reducing misclassifications of unsupervised anomaly detection algorithms? RQ5.  ...  His scientific activity has produced more than 220 papers in international journals and conferences.  ...
doi:10.1109/access.2021.3090957 fatcat:zg3vagumlffvbei4g4rj3h7knu

Inductive Graph Embeddings through Locality Encodings [article]

Nurudin Alvarez-Gonzalez, Andreas Kaltenbrunner, Vicenç Gómez
2020 arXiv   pre-print
Interestingly, the resulting embeddings generalize well across unseen or distant regions in the network, both in unsupervised settings, when combined with language model learning, as well as in supervised tasks, when used as additional features in a neural network.  ...  Table 1: Hyper-parameters used in unsupervised tasks.  ...
arXiv:2009.12585v1 fatcat:wbpevwstpnd33l3yssdqgicjlu