
Human Activity Recognition Using Semi-supervised Multi-modal DEC for Instagram Data [chapter]

Dongmin Kim, Sumin Han, Heesuk Son, Dongman Lee
2020 Lecture Notes in Computer Science  
In this paper, we propose a semi-supervised multi-modal deep embedding clustering method to recognize human activities on Instagram.  ...  By utilizing a large amount of unlabeled data, it learns a more generalized feature distribution for each HAR class and avoids overfitting to the limited labeled data.  ...  This semi-supervised method helps us learn optimal representations of image and text features and apply Multi-modal DEC to HAR.  ... 
doi:10.1007/978-3-030-47426-3_67 fatcat:wqnltmtwencbrdvamml5cm4fqe
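The deep-embedding-clustering step this entry alludes to can be sketched, assuming the standard DEC formulation (Student's t soft assignments sharpened toward a target distribution, per Xie et al., 2016); the function names are illustrative, not the authors' code:

```python
import numpy as np

def dec_soft_assignment(z, centroids, alpha=1.0):
    """Student's t soft assignment q_ij of embeddings z to cluster centroids (DEC)."""
    # squared distances between each embedding and each centroid: shape (n, k)
    d2 = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def dec_target_distribution(q):
    """Sharpened target p_ij = (q_ij^2 / f_j) normalized per sample, f_j = sum_i q_ij."""
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)
```

Training then minimizes KL(p || q) so that high-confidence assignments are reinforced.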

Graph based multi-modality learning

Hanghang Tong, Jingrui He, Mingjing Li, Changshui Zhang, Wei-Ying Ma
2005 Proceedings of the 13th annual ACM international conference on Multimedia - MULTIMEDIA '05  
For semi-supervised learning, two different fusion schemes, namely a linear form and a sequential form, are proposed.  ...  Each scheme is derived from an optimization point of view and further justified from two sides: similarity propagation and a Bayesian interpretation.  ...  Both semi-supervised and unsupervised learning are investigated; 2) for semi-supervised learning, two different schemes are proposed.  ... 
doi:10.1145/1101149.1101337 dblp:conf/mm/TongHLZM05 fatcat:ux2tzibo6nbfrhr3pca2hxme6m

Infinite Mixture Prototypes for Few-Shot Learning [article]

Kelsey R. Allen, Evan Shelhamer, Hanul Shin, Joshua B. Tenenbaum
2019 arXiv   pre-print
In clustering labeled and unlabeled data by the same clustering rule, infinite mixture prototypes achieves state-of-the-art semi-supervised accuracy.  ...  By inferring the number of clusters, infinite mixture prototypes interpolate between nearest neighbor and prototypical representations, which improves accuracy and robustness in the few-shot regime.  ...  Acknowledgements We gratefully acknowledge support from DARPA grant 6938423 and KA is supported by NSERC. We thank Trevor Darrell and Ghassen Jerfel for advice and helpful discussions.  ... 
arXiv:1902.04552v1 fatcat:mmillrfyqjduxb5vutjwcmiwrm
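For contrast with the infinite-mixture variant above, the plain prototypical representation that infinite mixture prototypes interpolate away from can be sketched as follows (a minimal NumPy baseline, not the paper's model; names are illustrative):

```python
import numpy as np

def prototypes(support_x, support_y):
    """Class prototypes: the mean embedding per class (prototypical networks)."""
    classes = np.unique(support_y)
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query_x, classes, protos):
    """Assign each query to the class of its nearest prototype (squared Euclidean)."""
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]
```

With one cluster per class this reduces to the prototypical case; letting the number of clusters grow per class recovers nearest-neighbor-like behavior, which is the interpolation the abstract describes.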

Exhaustive and Efficient Constraint Propagation: A Graph-Based Learning Approach and Its Applications

Zhiwu Lu, Yuxin Peng
2012 International Journal of Computer Vision  
...can be solved in quadratic time using label propagation based on k-nearest neighbor graphs.  ...  The resulting exhaustive set of propagated pairwise constraints is further used to adjust the similarity matrix for constrained spectral clustering.  ...  More importantly, these semi-supervised learning subproblems can be solved efficiently and in parallel using the label propagation technique based on k-nearest neighbor graphs.  ... 
doi:10.1007/s11263-012-0602-z fatcat:aly6m6seo5gx7ebqujzpj6xbca
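The label-propagation-on-k-NN-graphs subroutine this entry relies on can be sketched as a generic normalized-graph propagation in the style of Zhou et al.; this is an illustrative sketch, not the authors' constraint-propagation implementation:

```python
import numpy as np

def knn_graph(X, k=3):
    """Symmetric 0/1 adjacency from k nearest neighbors (brute-force distances)."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)          # no self-edges
    idx = np.argsort(d, axis=1)[:, :k]   # k nearest neighbors per node
    W = np.zeros_like(d)
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, idx.ravel()] = 1.0
    return np.maximum(W, W.T)            # symmetrize

def label_propagation(W, Y, alpha=0.9, iters=100):
    """Iterate F <- alpha * S @ F + (1 - alpha) * Y with S the normalized adjacency.
    Y is one-hot for labeled nodes and all-zero rows for unlabeled nodes."""
    deg = W.sum(axis=1)
    S = W / np.sqrt(deg[:, None] * deg[None, :] + 1e-12)
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F.argmax(axis=1)
```

On two well-separated clusters, a single labeled point per cluster suffices for the labels to spread to every node.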

Heterogeneous Image Features Integration via Multi-modal Semi-supervised Learning Model

Xiao Cai, Feiping Nie, Weidong Cai, Heng Huang
2013 2013 IEEE International Conference on Computer Vision  
Therefore, how to integrate heterogeneous visual features for semi-supervised learning is crucial for categorizing large-scale image data.  ...  On the one hand, although using more labeled training data may improve prediction performance, obtaining image labels is a time-consuming as well as biased process.  ...  Related Work As the most popularly used semi-supervised learning models, graph-based semi-supervised methods define a graph whose nodes encompass labeled as well as unlabeled data, and edges (may  ... 
doi:10.1109/iccv.2013.218 dblp:conf/iccv/CaiNCH13a fatcat:exig5ikgeral3mqvwbe4luybl4

Guess What's on my Screen? Clustering Smartphone Screenshots with Active Learning [article]

Agnese Chiatti, Dolzodmaa Davaasuren, Nilam Ram, Prasenjit Mitra, Byron Reeves, Thomas Robinson
2019 arXiv   pre-print
Thus, there is a need to examine the utility of unsupervised and semi-supervised methods for digital screenshot classification.  ...  This work introduces the implications of applying clustering to large screenshot sets when only a limited number of labels is available.  ...  Acknowledgments We thank all the Screenomics Lab members for the useful discussions and acknowledge the data and computational support provided for these experiments.  ... 
arXiv:1901.02701v2 fatcat:uepzsys4lzdexd3bak75v5fjmq

Survey on Deep Multi-modal Data Analytics: Collaboration, Rivalry and Fusion [article]

Yang Wang
2020 arXiv   pre-print
Throughout this survey, we further indicate that the critical components of this field are collaboration, adversarial competition and fusion over multi-modal spaces.  ...  With the development of web technology, multi-modal or multi-view data has surged as a major stream of big data, where each modality/view encodes an individual property of data objects.  ...  [64] proposed a cross-modal image generation model which can deal with semi-supervised problems. It contained one generator and two discriminators and leveraged large amounts of unpaired data.  ... 
arXiv:2006.08159v1 fatcat:g4467zmutndglmy35n3eyfwxku

One Million Scenes for Autonomous Driving: ONCE Dataset [article]

Jiageng Mao, Minzhe Niu, Chenhan Jiang, Hanxue Liang, Jingheng Chen, Xiaodan Liang, Yamin Li, Chaoqiang Ye, Wei Zhang, Zhenguo Li, Jie Yu, Hang Xu (+1 others)
2021 arXiv   pre-print
To facilitate future research on exploiting unlabeled data for 3D detection, we additionally provide a benchmark in which we reproduce and evaluate a variety of self-supervised and semi-supervised methods  ...  The ONCE dataset consists of 1 million LiDAR scenes and 7 million corresponding camera images.  ...  The initial learning rate is 0.003 for both the pretraining and semi-supervised learning processes. We use the Adam optimizer and a cosine annealing learning-rate schedule for all the methods.  ... 
arXiv:2106.11037v3 fatcat:fwgrb57yarhujmetzpewtdzzei
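The cosine annealing schedule quoted in this entry (initial rate 0.003) can be written as a closed-form learning-rate function; this is the generic formulation, not the benchmark's code:

```python
import math

def cosine_annealing_lr(step, total_steps, base_lr=0.003, min_lr=0.0):
    """Cosine-annealed learning rate: base_lr at step 0, decaying to min_lr at total_steps."""
    cos_factor = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    return min_lr + (base_lr - min_lr) * cos_factor
```

The rate starts at `base_lr`, passes through the midpoint `(base_lr + min_lr) / 2` halfway through training, and ends at `min_lr`.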

Iterative, Deep Synthetic Aperture Sonar Image Segmentation [article]

Yung-Chen Sun, Isaac D. Gerg, Vishal Monga
2022 arXiv   pre-print
Finally, we also develop a semi-supervised (SS) extension of IDUS called IDSS and demonstrate experimentally that it can further enhance performance while outperforming supervised alternatives that exploit  ...  superpixels. 3) Superpixels are clustered into class assignments (which we call pseudo-labels) using k-means. 4) Resulting pseudo-labels are used for loss backpropagation of the deep network prediction  ...  Tory Cobb of the Naval Surface Warfare Center for providing the data used in this work.  ... 
arXiv:2203.15082v1 fatcat:z64j3jdkabarpas5khh4xurz4q
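Steps 3) and 4) quoted above (clustering superpixel features with k-means to produce pseudo-labels) can be sketched with a plain Lloyd's-iteration k-means; the function name and `init` parameter are illustrative, not the authors' IDUS pipeline:

```python
import numpy as np

def kmeans_pseudo_labels(features, k, iters=50, init=None, seed=0):
    """Cluster feature vectors with Lloyd's k-means; cluster ids act as pseudo-labels."""
    rng = np.random.default_rng(seed)
    if init is not None:
        centers = init.astype(float).copy()
    else:
        centers = features[rng.choice(len(features), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each feature vector to its nearest center
        d = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # recompute centers as cluster means (skip empty clusters)
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels
```

The resulting cluster ids would then serve as the pseudo-labels backpropagated through the network's prediction loss.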

Pairwise Constraint Propagation: A Survey [article]

Zhenyong Fu, Zhiwu Lu
2015 arXiv   pre-print
At least two reasons account for this trend: first, compared to data labels, pairwise constraints are more general and easier to collect; second, since the available pairwise  ...  As one of the most important types of (weaker) supervised information in machine learning and pattern recognition, the pairwise constraint, which specifies whether a pair of data points occurs together, has  ...  These images can typically be represented using two separate modalities, based respectively on visual features and user-provided textual tags.  ... 
arXiv:1502.05752v1 fatcat:djagaxttkjawpjfzys2q476zom

Automatic acute ischemic stroke lesion segmentation using semi-supervised learning [article]

Bin Zhao, Shuxue Ding, Hong Wu, Guohua Liu, Chen Cao, Song Jin, Zhiyang Liu
2020 arXiv   pre-print
In this paper, we propose a semi-supervised method to automatically segment AIS lesions in diffusion-weighted images and apparent diffusion coefficient maps.  ...  By using a large number of weakly labeled subjects and a small number of fully labeled subjects, our proposed method is able to accurately detect and segment the AIS lesions.  ...  We initialize these networks by Xavier's method [40] and use the Adam method [41] with β1 = 0.9, β2 = 0.999 and an initial learning rate of 0.001 as our optimizer.  ... 
arXiv:1908.03735v3 fatcat:oxpt2lqgtbe4tfgifj3b3ulcwa
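The optimizer setup this entry quotes (Xavier initialization plus Adam with β1 = 0.9, β2 = 0.999) can be sketched in NumPy; this restates those standard methods generically and is not the authors' training code:

```python
import numpy as np

def xavier_init(fan_in, fan_out, rng):
    """Xavier/Glorot uniform initialization: U(-limit, limit), limit = sqrt(6/(fan_in+fan_out))."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def adam_step(w, g, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (t is the 1-based step count, needed for bias correction)."""
    m = beta1 * m + (1 - beta1) * g          # first-moment estimate
    v = beta2 * v + (1 - beta2) * g ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

Running `adam_step` repeatedly on the gradient of a simple quadratic drives the parameter toward its minimum.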

Self-Taught Semi-Supervised Anomaly Detection on Upper Limb X-rays [article]

Antoine Spahr, Behzad Bozorgtabar, Jean-Philippe Thiran
2021 arXiv   pre-print
Supervised deep networks take for granted a large number of annotations by radiologists, which are often prohibitively time-consuming to acquire.  ...  Through extensive experiments, we show that our method outperforms baselines across unsupervised and self-supervised anomaly detection settings on a real-world medical dataset, the MURA dataset.  ...  In total, we end up with 19,037 training images.  ...  Non-meaningful clusters refer to those clusters whose cardinality is not large enough or which include noisy samples.  ... 
arXiv:2102.09895v2 fatcat:nzf4mmcqyrcutafxr4jhtobalu

A Survey on Machine Learning Techniques for Auto Labeling of Video, Audio, and Text Data [article]

Shikun Zhang, Omid Jafari, Parth Nagarkar
2021 arXiv   pre-print
In this survey paper, we provide a review of previous techniques that focus on optimized data annotation and labeling for video, audio, and text data.  ...  Data labeling has always been one of the most important tasks in machine learning. However, labeling large amounts of data increases the monetary cost of machine learning.  ...  In [25], the authors learn an optimal graph (OGL) from multiple cues (i.e., partial tags and multiple features) and propose a semi-supervised annotation approach.  ... 
arXiv:2109.03784v1 fatcat:uu55zfmtajcvdjekxeaue76izy

Special issue on concept detection with big data

Shih-Fu Chang, Thomas S. Huang, Michael S. Lew, Bart Thomee
2015 International Journal of Multimedia Information Retrieval  
One of the grand challenges of machine intelligence and pattern recognition for the past decade has been bridging the semantic gap, that is, determining how to translate the low-level features from images  ...  Concept detection is an important approach toward bridging the semantic gap by allowing computers to understand imagery using the conceptual vocabulary of humans.  ...  In the paper, "Large image modality labeling initiative using semi-supervised and optimized clustering" by S. Vajda, D. You, S. Antani and G.  ... 
doi:10.1007/s13735-015-0083-2 fatcat:faywbr2khbhplo7z5wmtoto5ha

Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [article]

Junkai Huang, Chaowei Fang, Weikai Chen, Zhenhua Chai, Xiaolin Wei, Pengxu Wei, Liang Lin, Guanbin Li
2021 arXiv   pre-print
Extensive experiments show that our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.  ...  Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.  ...  For UDA, FixMatch and our method, SGD is used to optimize network weights. The learning rate is initially set to 0.03 and adjusted via the cosine decay strategy [37, 33] .  ... 
arXiv:2108.05617v1 fatcat:e3hkqoboq5ewnodvphe5unkl3a
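As background for the FixMatch baseline this entry compares against (not the paper's cross-modal matching method itself), confidence-thresholded pseudo-labeling on unlabeled data can be sketched as:

```python
import numpy as np

def confidence_pseudo_labels(probs, threshold=0.95):
    """Keep only high-confidence predictions as pseudo-labels (FixMatch-style).
    probs: (n, num_classes) predicted class probabilities on unlabeled samples.
    Returns hard labels plus a mask of which samples pass the threshold."""
    conf = probs.max(axis=1)
    mask = conf >= threshold
    return probs.argmax(axis=1), mask
```

Only the masked samples contribute to the unsupervised loss, which is what makes the open-set setting hard: confident-but-wrong OOD samples can pass the threshold.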