
Unsupervised manifold learning through reciprocal kNN graph and Connected Components for image retrieval tasks

Daniel Carlos Guimarães Pedronette, Filipe Marcel Fernandes Gonçalves, Ivan Rizzo Guilherme
2018 Pattern Recognition  
The underlying dataset manifold is modeled and analyzed in terms of a Reciprocal kNN Graph and its Connected Components.  ...  The method computes the new retrieval results in an unsupervised way, without the need for any user intervention.  ...  Acknowledgments The authors are grateful to FAPESP - São Paulo Research Foundation (grant #2013/08645-0) and CAPES - Coordination for Higher Education Staff Development.  ... 
doi:10.1016/j.patcog.2017.05.009 fatcat:wdojfu33brf5zbzm6kzboms4ru
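The entry above describes re-ranking retrieval results through a reciprocal kNN graph and its connected components. As a rough illustration of that structure (not the authors' full re-ranking method), the sketch below builds a mutual-kNN graph over feature vectors with scikit-learn and networkx and extracts its connected components; the feature array and k value are placeholders.

```python
# Hedged sketch: build a reciprocal (mutual) kNN graph from feature vectors and
# extract its connected components. Illustrative only; not the authors' exact
# re-ranking procedure.
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors

def reciprocal_knn_components(features, k=5):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)           # idx[i, 0] is point i itself
    knn = [set(row[1:]) for row in idx]        # k nearest neighbors per item
    g = nx.Graph()
    g.add_nodes_from(range(len(features)))
    for i, neigh in enumerate(knn):
        for j in neigh:
            if i in knn[j]:                    # edge only if i and j are mutual kNNs
                g.add_edge(i, j)
    return list(nx.connected_components(g))

# toy usage with random "image descriptors"
components = reciprocal_knn_components(np.random.rand(100, 64), k=5)
```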

Unsupervised Multiple-Instance Learning for Functional Profiling of Genomic Data [chapter]

Corneliu Henegar, Karine Clément, Jean-Daniel Zucker
2006 Lecture Notes in Computer Science  
Three algorithmic solutions are suggested by derivation from available conventional methods: agglomerative or partition clustering and MIL's citation-kNN approach.  ...  Multiple-instance learning (MIL) is a popular concept among the AI community to support supervised learning applications in situations where only incomplete knowledge is available.  ...  Acknowledgments This work was supported by the Institut National de la Santé et de la Recherche Médicale (INSERM) and the Assistance Publique -Hôpitaux de Paris.  ... 
doi:10.1007/11871842_21 fatcat:7cckavyv6zbaxj4xu3g5vkftuy
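The snippet above mentions MIL's citation-kNN approach. Below is a hedged, minimal sketch of that general idea, not the paper's genomics-specific adaptation: a bag is labeled by a majority vote over its R nearest training bags ("references") and the training bags that rank it among their own C nearest ("citers"), with the minimal Hausdorff distance between bags; all parameter values and the toy data are assumptions.

```python
# Hedged, minimal sketch of the citation-kNN idea: classify a bag by a majority
# vote over its R nearest training bags ("references") and the training bags that
# rank it among their own C nearest ("citers"), using the minimal Hausdorff
# distance between bags. Not the paper's genomics-specific variant.
import numpy as np
from scipy.spatial.distance import cdist

def bag_dist(a, b):
    return cdist(a, b).min()                               # minimal Hausdorff distance

def citation_knn_predict(query_bag, train_bags, train_labels, R=3, C=5):
    n = len(train_bags)
    d_query = np.array([bag_dist(query_bag, b) for b in train_bags])
    references = list(np.argsort(d_query)[:R])             # R nearest training bags
    citers = []
    for i in range(n):
        # bag i "cites" the query if the query is among bag i's C nearest bags
        d_i = [bag_dist(train_bags[i], train_bags[j]) for j in range(n) if j != i]
        if d_query[i] <= np.sort(np.append(d_i, d_query[i]))[C - 1]:
            citers.append(i)
    votes = [train_labels[i] for i in references + citers]
    return max(set(votes), key=votes.count)

rng = np.random.default_rng(0)
bags = [rng.random((rng.integers(3, 8), 4)) for _ in range(20)]   # toy bags
labels = [int(b.mean() > 0.5) for b in bags]
print(citation_knn_predict(rng.random((5, 4)), bags, labels))
```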

Person Re-identification by Saliency Learning [article]

Rui Zhao, Wanli Ouyang, Xiaogang Wang
2014 arXiv   pre-print
K-Nearest Neighbors and One-class SVM, to estimate a saliency score for each image patch, through which distinctive features stand out without using identity labels in the training procedure. (3) saliency  ...  The proposed saliency learning and matching framework consists of four steps: (1) To handle misalignment caused by drastic viewpoint change and pose variations, we apply adjacency constrained patch matching  ...  Unsupervised saliency learning: K-Nearest Neighbor (KNN) saliency. Byers et al. [7] found that KNN distances can be used for clutter removal.  ... 
arXiv:1412.1908v1 fatcat:7b4y45j6cffnxfn57qclorr7iy
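The KNN-saliency idea cited in the snippet (Byers et al. [7]) scores points by their kNN distances. A minimal sketch of that idea, assuming patch descriptors are available as feature vectors; this is not the paper's full saliency-learning pipeline.

```python
# Minimal sketch of the KNN-distance saliency idea: score each patch by its
# distance to the k-th nearest neighbor in a reference set, so patches lying far
# from dense regions receive high scores. Not the paper's full pipeline; the
# descriptor dimensions and k are placeholders.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_saliency(patch_features, reference_features, k=10):
    nn = NearestNeighbors(n_neighbors=k).fit(reference_features)
    dists, _ = nn.kneighbors(patch_features)
    return dists[:, -1]                                    # distance to the k-th neighbor

scores = knn_saliency(np.random.rand(50, 32), np.random.rand(500, 32), k=10)
```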

A Local Potential-based Clustering Algorithm for Unsupervised Hyperspectral Band Selection

Zhaokui Li, Lin Huang, Jinrong He, Cuiwei Liu, Xiangbin Shi
2019 IEEE Access  
The FDPC chooses the cluster center through the local density and the intra-cluster distance of each point, and ranks each point through a certain rule.  ...  First, the local potential of each band is calculated according to the similarity between bands, and a larger similarity has a greater effect on the local potential.  ...  The ranking rule is defined as γ_i = α_i + β_i (13), where α_i = p_i × V_i and β_i = δ_i × V_i; the ranking order of each band can then be evaluated through γ_i.  ... 
doi:10.1109/access.2019.2914161 fatcat:4oxmg5s2gbhmpavgmg3pj36rwq
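The snippet gives the ranking rule γ_i = α_i + β_i but not the precise definitions of p_i, δ_i, and V_i. The sketch below is therefore only a hedged, density-peak-style reading of it: a Gaussian local potential per band, a separation term to the nearest band of higher potential, and a ranking by γ_i; every quantity here is an assumption rather than the paper's formula.

```python
# Hedged, density-peak-style reading of the ranking rule gamma_i = alpha_i + beta_i.
# The snippet does not define p_i, delta_i, or V_i precisely, so the Gaussian
# local potential, the separation term, and the weight below are assumptions.
import numpy as np

def rank_bands(bands, sigma=1.0, k_select=10):
    # bands: (n_bands, n_pixels) matrix of flattened hyperspectral bands
    d = np.linalg.norm(bands[:, None, :] - bands[None, :, :], axis=2)
    V = np.exp(-(d / sigma) ** 2).sum(axis=1)              # local potential per band
    delta = np.array([d[i, V > V[i]].min() if (V > V[i]).any() else d[i].max()
                      for i in range(len(bands))])         # separation term
    p = V / V.sum()                                        # density-like weight
    gamma = p * V + delta * V                              # alpha_i + beta_i
    return np.argsort(gamma)[::-1][:k_select]              # indices of top-ranked bands

selected = rank_bands(np.random.rand(60, 200), sigma=0.5)
```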

KB-CB-N classification: Towards unsupervised approach for supervised learning

Zahraa Said Abdallah, Mohamed Medhat Gaber
2011 2011 IEEE Symposium on Computational Intelligence and Data Mining (CIDM)  
The basic principle is to apply unsupervised learning on the instances of each class in the dataset and then use the output as an input for the classification algorithm to find the K best neighbours of  ...  Different data clustering techniques use different similarity measures. Each measure has its own strength and weakness.  ...  measures (Euclidean distance, cosine similarity, and Pearson correlation) represents the first attempt.  ... 
doi:10.1109/cidm.2011.5949435 dblp:conf/cidm/AbdallahG11 fatcat:zpbflayfjveipad4ormslam36q
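A hedged sketch of the principle stated in the snippet: cluster each class's instances separately, then classify a query by its K best (nearest) cluster representatives under interchangeable similarity measures. This is illustrative only and not the authors' KB-CB-N algorithm; it assumes non-negative integer class labels.

```python
# Hedged sketch: cluster each class separately, then label a query by a vote over
# its K nearest cluster centroids under a chosen similarity measure. Not the
# authors' exact KB-CB-N algorithm; assumes non-negative integer class labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import euclidean_distances, cosine_distances

def fit_class_centroids(X, y, clusters_per_class=3, seed=0):
    centroids, labels = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=clusters_per_class, n_init=10, random_state=seed)
        km.fit(X[y == c])
        centroids.append(km.cluster_centers_)
        labels.extend([c] * clusters_per_class)
    return np.vstack(centroids), np.array(labels)

def predict(Xq, centroids, centroid_labels, k=3, metric=euclidean_distances):
    d = metric(Xq, centroids)                              # query-to-centroid distances
    knn = np.argsort(d, axis=1)[:, :k]                     # K best (nearest) centroids
    return np.array([np.bincount(centroid_labels[row]).argmax() for row in knn])

X, y = np.random.rand(300, 5), np.random.randint(0, 3, 300)
cents, cent_labels = fit_class_centroids(X, y)
preds = predict(np.random.rand(10, 5), cents, cent_labels, metric=cosine_distances)
```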

Unsupervised Algorithms to Detect Zero-Day Attacks: Strategy and Application

Tommaso Zoppi, Andrea Ceccarelli, Andrea Bondavalli
2021 IEEE Access  
., Scikit-Learn, PyTorch, TensorFlow and RELOAD) make it possible to instantiate ensembles of unsupervised algorithms through meta-learning, which improves the classification capabilities of unsupervised algorithms as  ...  set and test set.  ...  and experimental evaluation of dependable and secure systems, and systems-of-systems.  ... 
doi:10.1109/access.2021.3090957 fatcat:zg3vagumlffvbei4g4rj3h7knu
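As a rough illustration of the ensemble idea in the snippet, the sketch below averages normalized scores from several scikit-learn unsupervised detectors. The meta-learning layer discussed in the paper (and tools such as RELOAD) is not reproduced; the detector choices and parameters are assumptions.

```python
# Hedged sketch: average normalized anomaly scores from several scikit-learn
# unsupervised detectors. The meta-learning layer discussed in the paper (and
# tools such as RELOAD) is not reproduced; detector choices are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

def ensemble_anomaly_scores(X):
    scores = []
    for det in (IsolationForest(random_state=0),
                LocalOutlierFactor(n_neighbors=20),
                OneClassSVM(nu=0.1)):
        if isinstance(det, LocalOutlierFactor):
            det.fit(X)
            s = -det.negative_outlier_factor_              # higher = more anomalous
        else:
            s = -det.fit(X).score_samples(X)               # higher = more anomalous
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)    # min-max normalize
        scores.append(s)
    return np.mean(scores, axis=0)

combined = ensemble_anomaly_scores(np.random.rand(300, 10))
```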

Comprehensive Performance Evaluation Of Network Intrusion System Using Machine Learning Approach

Shahzad Haroon, Shaheed Zulfiqar Ali Bhutto Institute of Science and Technology, Karachi, Syed Sajjad Hussain
2019 Journal of Independent Studies and Research - Computing  
From the UNSW-NB15 dataset, we used features with medium and strong correlation. All the results from the different classifiers are compared.  ...  Prominent results are achieved by the ensemble bagged tree, which classifies normal traffic and individual attacks with an accuracy of 79%.  ...  In contrast, unsupervised machine learning algorithms are used to group data that are similar in type with respect to their properties.  ... 
doi:10.31645/jisrc-019-01 fatcat:53p34hh6rbhctd6ywjdpzf5c5e
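A hedged sketch of the described workflow: select features by their correlation with the label, then train a bagged decision-tree ensemble. The random frame, column names, and the 0.3 threshold are placeholders; loading and preprocessing UNSW-NB15 is not shown.

```python
# Hedged sketch: keep features with at least "medium" absolute correlation to the
# label, then train a bagged tree ensemble (BaggingClassifier's default base
# estimator is a decision tree). Data, columns, and the threshold are placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

df = pd.DataFrame(np.random.rand(500, 8), columns=[f"f{i}" for i in range(8)])
df["label"] = (df["f0"] + df["f3"] > 1).astype(int)        # placeholder target

corr = df.corr()["label"].drop("label").abs()
selected = corr[corr >= 0.3].index.tolist()                # medium/strong correlation

X_tr, X_te, y_tr, y_te = train_test_split(df[selected], df["label"], random_state=0)
clf = BaggingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```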

Band Selection for Hyperspectral Imagery Using Affinity Propagation

Sen Jia, Yuntao Qian, Zhen Ji
2008 2008 Digital Image Computing: Techniques and Applications  
and less correlation/similarity.  ...  a good set of exemplars and clusters emerges.  ...  In contrast, AP not only considers the discriminative capability of each individual band through s(k, k), but also the correlation/similarity among bands through s(i, k), so that the exemplars  ... 
doi:10.1109/dicta.2008.42 dblp:conf/dicta/JiaQJ08 fatcat:pftvz7sgxvddxjarf2z4i7fqra
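A hedged sketch of band selection with affinity propagation on a precomputed band-to-band similarity matrix, where the diagonal preference plays the role of s(k, k) and off-diagonal entries the role of s(i, k); the exemplar bands are the selected bands. The absolute-correlation similarity used here is an assumption, not necessarily the definition used in this paper or in its journal version listed next.

```python
# Hedged sketch: select exemplar bands with affinity propagation on a precomputed
# band-to-band similarity matrix. The diagonal preference plays the role of s(k, k)
# and off-diagonal entries the role of s(i, k); the absolute-correlation similarity
# used here is an assumption, not necessarily the paper's definition.
import numpy as np
from sklearn.cluster import AffinityPropagation

def select_bands(cube, preference=None):
    # cube: (n_bands, n_pixels) flattened hyperspectral image
    S = np.abs(np.corrcoef(cube))                          # s(i, k): band similarity
    ap = AffinityPropagation(affinity="precomputed", preference=preference,
                             random_state=0).fit(S)
    return ap.cluster_centers_indices_                     # exemplar (selected) bands

exemplars = select_bands(np.random.rand(50, 400))
```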

Band selection for hyperspectral imagery using affinity propagation

Y. Qian, F. Yao, S. Jia
2009 IET Computer Vision  
and less correlation/similarity.  ...  a good set of exemplars and clusters emerges.  ...  In contrast, AP not only considers the discriminative capability of each individual band through s(k, k), but also the correlation/similarity among bands through s(i, k), so that the exemplars  ... 
doi:10.1049/iet-cvi.2009.0034 fatcat:fk5sjghq4bcrhfvctoqebdcok4

Unsupervised Embedding Learning from Uncertainty Momentum Modeling [article]

Jiahuan Zhou, Yansong Tang, Bing Su, Ying Wu
2021 arXiv   pre-print
Moreover, the shortage of positive data and the disregard of global discrimination also pose critical issues for unsupervised learning but are always ignored by existing methods.  ...  Existing popular unsupervised embedding learning methods focus on enhancing the instance-level local discrimination of the given unlabeled images by exploring various negative data.  ...  embedding learning methods on CIFAR-10 and STL-10 based on the linear classifier and kNN classifier.  ... 
arXiv:2107.08892v1 fatcat:tkib2zoq7rci7ox7yjxiud6vpi
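The snippet's last fragment refers to evaluating embeddings with a kNN classifier. A minimal sketch of that common evaluation protocol (not the paper's uncertainty-momentum training itself), with random arrays standing in for learned features and CIFAR-10 labels:

```python
# Hedged sketch of the kNN evaluation protocol mentioned in the snippet: freeze
# the learned embeddings and score them with a weighted kNN classifier. Random
# arrays stand in for learned features and class labels; this is not the paper's
# uncertainty-momentum training procedure.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

embeddings = np.random.rand(1000, 128)             # placeholder learned features
labels = np.random.randint(0, 10, size=1000)       # placeholder class labels

knn = KNeighborsClassifier(n_neighbors=200, weights="distance", metric="cosine")
print("kNN accuracy:", cross_val_score(knn, embeddings, labels, cv=5).mean())
```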

Practical Cross-modal Manifold Alignment for Grounded Language [article]

Andre T. Nguyen, Luke E. Richards, Gaoussou Youssouf Kebe, Edward Raff, Kasra Darvish, Frank Ferraro, Cynthia Matuszek
2020 arXiv   pre-print
Our approach learns these embeddings by sampling triples of anchor, positive, and negative data points from RGB-depth images and their natural language descriptions.  ...  We propose a cross-modality manifold alignment procedure that leverages triplet loss to jointly learn consistent, multi-modal embeddings of language-based concepts of real-world items.  ...  Table 1 and Table 2 summarize the performance of the triplet method in this unsupervised setting. While there is a decrease in MRR and KNN accuracy, DC remains strong and even increases.  ... 
arXiv:2009.05147v1 fatcat:ztkbw2sgg5hc5ehaflye567ymu
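The core ingredient described above is a triplet objective over anchor, positive, and negative embeddings. A minimal sketch of such a loss on already-computed embeddings; the cross-modal encoders and the RGB-depth/language sampling are not shown, and the margin value is an assumption.

```python
# Hedged sketch of a triplet margin loss on already-computed embeddings: pull the
# anchor toward its positive and push it away from its negative by a margin. The
# cross-modal encoders and RGB-depth/language sampling are not shown.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    d_pos = np.linalg.norm(anchor - positive, axis=1)      # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative, axis=1)      # anchor-negative distance
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

a, p, n = (np.random.rand(32, 64) for _ in range(3))
print("loss:", triplet_loss(a, p, n))
```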

Unsupervised Word Mapping Using Structural Similarities in Monolingual Embeddings [article]

Hanan Aldarmaki, Mahesh Mohan, Mona Diab
2018 arXiv   pre-print
The proposed method exploits local and global structures in monolingual vector spaces to align them such that similar words are mapped to each other.  ...  We propose an unsupervised approach for learning a bilingual dictionary for a pair of languages given their independently-learned monolingual word embeddings.  ...  Using a set of 200 fundamental words, Calude and Pagel (2011) reported a high correlation between word frequency ranks across 17 languages drawn from six language families.  ... 
arXiv:1712.06961v2 fatcat:jrjwoiqhfbff5ptwbrhsqlhpp4
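As a heavily simplified, hedged toy version of the structural idea in the snippet (not the paper's actual local/global alignment), the sketch below describes each word by the sorted distribution of its cosine similarities within its own space and matches words across languages whose profiles are closest.

```python
# Heavily simplified, hedged toy of the structural idea: describe each word by the
# sorted distribution of its cosine similarities within its own monolingual space
# and match words across languages whose profiles are closest. Not the paper's
# local/global alignment; vocabularies and dimensions are placeholders.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from scipy.optimize import linear_sum_assignment

def structural_profiles(emb, top=50):
    sims = cosine_similarity(emb)
    np.fill_diagonal(sims, -np.inf)                        # ignore self-similarity
    return -np.sort(-sims, axis=1)[:, :top]                # top similarities, descending

src, tgt = np.random.rand(200, 50), np.random.rand(200, 50)
cost = np.linalg.norm(structural_profiles(src)[:, None, :] -
                      structural_profiles(tgt)[None, :, :], axis=2)
src_idx, tgt_idx = linear_sum_assignment(cost)             # one-to-one word mapping
```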

EXPLORING GENE EXPRESSION DATA WITH CLASS SCORES

PAUL PAVLIDIS, DARRIN P. LEWIS, WILLIAM STAFFORD NOBLE
2001 Biocomputing 2002  
We show that all three methods reveal significant classes in each of three different gene expression data sets.  ...  We address a commonly asked question about gene expression data sets: "What functional classes of genes are most interesting in the data?"  ...  In this way, a p-value can be obtained for any KNN cross-validated total error. Results: We measured correlation, experiment, and learnability scores for the yeast, cancer, and brain data sets.  ... 
doi:10.1142/9789812799623_0044 fatcat:nvorzth6yvd6dkprrh6ufl6eru
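The snippet's learnability score obtains a p-value for a KNN cross-validated error by comparison with label permutations. scikit-learn's permutation_test_score implements that general permutation scheme; the sketch below uses it with random stand-ins for an expression matrix and a functional class, so the data and parameters are placeholders.

```python
# Hedged sketch: an empirical p-value for a KNN cross-validated score, obtained by
# comparing the score on true labels against scores on permuted labels. Random
# data stand in for an expression matrix and a functional class membership.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import permutation_test_score

X = np.random.rand(120, 30)                                # placeholder expression data
y = np.random.randint(0, 2, size=120)                      # in-class vs. out-of-class

score, perm_scores, pvalue = permutation_test_score(
    KNeighborsClassifier(n_neighbors=3), X, y, cv=5, n_permutations=200)
print(f"CV accuracy={score:.3f}, p-value={pvalue:.3f}")
```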

Unsupervised Outlier Detection: A Meta-Learning Algorithm Based on Feature Selection

Vasilis Papastefanopoulos, Pantelis Linardatos, Sotiris Kotsiantis
2021 Electronics  
In this study, a new meta-learning algorithm for unsupervised outlier detection is introduced in order to mitigate this problem.  ...  To add to that, in an unsupervised setting, the absence of ground-truth labels makes finding a single best algorithm an impossible feat even for a single given dataset.  ...  It has been shown [40] that selecting subsets of features, according to some similarity or correlation criteria, can boost unsupervised learning algorithms analogously to how supervised learning algorithms  ... 
doi:10.3390/electronics10182236 fatcat:ucfeg2qxvrhsxhgsaizcnnnluu
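The cited observation [40] is that selecting feature subsets by similarity or correlation criteria can boost unsupervised detectors. A hedged sketch of one such scheme, dropping one feature from each highly correlated pair before running LOF; the 0.9 threshold and the detector choice are assumptions, and this is not the paper's meta-learning algorithm.

```python
# Hedged sketch: drop one feature from each highly correlated pair, then run an
# unsupervised detector (LOF) on the reduced data. The 0.9 threshold and the
# detector are assumptions; this is not the paper's meta-learning algorithm.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def drop_correlated(X, threshold=0.9):
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):      # keep j only if not redundant
            keep.append(j)
    return X[:, keep]

X = np.random.rand(400, 15)
scores = -LocalOutlierFactor(n_neighbors=20).fit(drop_correlated(X)).negative_outlier_factor_
```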

Ensembles for unsupervised outlier detection

Arthur Zimek, Ricardo J.G.B. Campello, Jörg Sander
2014 SIGKDD Explorations  
Ensembles for unsupervised outlier detection is an emerging topic that has been neglected for a surprisingly long time (although there are reasons why this is more difficult than supervised ensembles or  ...  Complementary to his points, here we focus on the core ingredients for building an outlier ensemble, discuss the first steps taken in the literature, and identify challenges for future research.  ...  For example, the results of LOF [9] and of the LOF variant LoOP [38] seem highly correlated, and the results of the kNN model [60] and the kNN weight model [4] are strongly correlated as well  ... 
doi:10.1145/2594473.2594476 fatcat:a7nwevaajnh3rp3hn6aafm2df4
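The last fragment notes that scores of related detectors (the kNN model and the kNN weight model, LOF and LoOP) can be strongly correlated, which limits ensemble diversity. A small sketch of how such a check might look, comparing LOF scores with k-th nearest-neighbor distances via Spearman rank correlation on placeholder data:

```python
# Small sketch: compare LOF scores with k-th nearest-neighbor distances on the
# same data and measure their rank correlation; strongly correlated detectors
# add little diversity to an outlier ensemble. Data and k are placeholders.
import numpy as np
from scipy.stats import spearmanr
from sklearn.neighbors import LocalOutlierFactor, NearestNeighbors

X = np.random.rand(500, 8)

lof = LocalOutlierFactor(n_neighbors=20).fit(X)
lof_scores = -lof.negative_outlier_factor_                 # higher = more outlying

dists, _ = NearestNeighbors(n_neighbors=21).fit(X).kneighbors(X)
knn_scores = dists[:, -1]                                  # "kNN model": k-th NN distance

rho, _ = spearmanr(lof_scores, knn_scores)
print("Spearman correlation between detectors:", rho)
```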
Showing results 1 — 15 out of 3,239 results