16,712 Hits in 6.9 sec

Learning from the Best - Visual Analysis of a Quasi-Optimal Data Labeling Strategy [article]

Jürgen Bernard, Marco Hutter, Markus Lehmann, Martin Müller, Matthias Zeppelzauer, Michael Sedlmair
2018 Eurographics Conference on Visualization  
In this work, we focus on the analysis of a (theoretical) quasi-optimal, ground-truth-based strategy for labeling instances, which we refer to as the upper limit of performance (ULoP).  ...  Results show that the strategy of ULoP is not constant (as in most state-of-the-art active learning strategies) but changes within the labeling process.  ...  To simulate a quasi-optimal labeling strategy, our ULoP strategy is modeled by executing a greedy search for instances based on ground truth data.  ... 
doi:10.2312/eurovisshort.20181085 dblp:conf/vissym/BernardHLMZS18 fatcat:qgyofduzdzdrjikkudicne77x4
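The greedy, ground-truth-driven search described in the ULoP entry above can be illustrated with a small sketch: at each step, candidate unlabeled instances are tentatively added to the labeled pool with their true labels, a classifier is retrained, and the instance yielding the largest test-accuracy gain is kept. The dataset, the classifier, and the accuracy-based gain are illustrative assumptions, not the authors' exact protocol.

```python
# Hedged sketch of a greedy, ground-truth-based labeling strategy
# (ULoP-style upper bound); classifier and data are illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

labeled = list(range(10))                              # small seed set
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

def accuracy(indices):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_pool[indices], y_pool[indices])
    return clf.score(X_test, y_test)

for _ in range(20):                                    # greedily label 20 more instances
    # Evaluate a random candidate subset to keep the demo cheap.
    candidates = np.random.RandomState(0).choice(unlabeled, size=30, replace=False)
    gains = [accuracy(labeled + [int(c)]) for c in candidates]
    best = int(candidates[int(np.argmax(gains))])
    labeled.append(best)
    unlabeled.remove(best)
    print(f"labeled={len(labeled)}  test accuracy={max(gains):.3f}")
```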

Quasi-supervised learning for biomedical data analysis

Bilge Karaçalı
2010 Pattern Recognition  
The fitness of the method in biomedical data analysis was further demonstrated on real multi-color flow cytometry and multi-channel electroencephalography data.  ...  We adopt a binary recognition scenario where a control dataset contains samples of one class only, while a mixed dataset contains an unlabeled collection of samples from both classes.  ...  In the literature, quasi-supervised learning refers to learning strategies that deal with predominantly unlabeled data, where some labels are available only through indirect user interaction [7, 8]  ... 
doi:10.1016/j.patcog.2010.04.024 fatcat:2jzd4cgbz5darol3kghsdtlcny
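The control-versus-mixed setup in the entry above can be mimicked with a simple nearest-neighbour sketch: for each sample in the mixed set, count how many of its nearest neighbours come from the control set versus the mixed set and treat that fraction as a rough posterior for the control class. This is only a minimal illustration of the scenario, not Karaçalı's actual quasi-supervised estimator.

```python
# Minimal nearest-neighbour illustration of the control-vs-mixed scenario;
# not the quasi-supervised estimator from the paper.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
control = rng.normal(loc=0.0, size=(500, 5))             # class 0 only
mixed = np.vstack([rng.normal(loc=0.0, size=(300, 5)),   # unlabeled class 0
                   rng.normal(loc=2.0, size=(300, 5))])  # unlabeled class 1

reference = np.vstack([control, mixed])
source = np.array([0] * len(control) + [1] * len(mixed))  # 0 = control, 1 = mixed

k = 25
nn = NearestNeighbors(n_neighbors=k + 1).fit(reference)
_, idx = nn.kneighbors(mixed)
idx = idx[:, 1:]                                  # drop the self-match

control_frac = (source[idx] == 0).mean(axis=1)    # rough posterior for "control-like"
pred_class1 = control_frac < 0.5                  # mixed-only neighbourhood -> class 1
print("flagged as class 1:", int(pred_class1.sum()), "of", len(mixed))
```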

Combining Low-Density Separators with CNNs

Yu-Xiong Wang, Martial Hebert
2016 Neural Information Processing Systems  
By encouraging these units to learn diverse sets of low-density separators across the unlabeled data, we capture a more generic, richer description of the visual world, which decouples these units from  ...  Using off-the-shelf CNNs becomes the best strategy, despite the specialization and reduced performance. In this work we investigate how to improve pre-trained CNNs for the learning from few examples.  ...  Army Research Laboratory (ARL) under the Collaborative Technology Alliance Program, Cooperative Agreement W911NF-10-2-0016.  ... 
dblp:conf/nips/WangH16 fatcat:q4dq4gbs2jgovdmy2bqc5el2aa

A Criterion for Optimizing Kernel Parameters in KBDA for Image Retrieval

L. Wang, K.L. Chan, P. Xue
2005 IEEE Transactions on Systems Man and Cybernetics Part B (Cybernetics)  
Retrieval experiments on two benchmark image databases demonstrate the effectiveness of the proposed criterion for KBDA to achieve the best possible performance at the cost of a small fractional computational  ...  A criterion is proposed to optimize the kernel parameters in Kernel-based Biased Discriminant Analysis (KBDA) for image retrieval.  ...  The "biased discriminant analysis" means that different strategies are applied to the two image classes.  ... 
doi:10.1109/tsmcb.2005.846660 pmid:15971923 fatcat:tg2sg6fdmvhdlhvmj5fghqp6wu
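The idea of tuning a kernel parameter by maximizing a class-separability criterion, rather than by repeated retrieval runs, can be sketched as below: for each candidate RBF width, reward configurations that keep the positive (relevant) class compact while pushing negatives away, echoing the asymmetric treatment in biased discriminant analysis. The specific score here is a simplified stand-in, not the criterion derived in the paper.

```python
# Hedged sketch: pick an RBF kernel width by maximizing a simple
# biased separability score (positives compact, negatives far).
# This simplified score is a stand-in for the paper's KBDA criterion.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(1)
X_pos = rng.normal(0.0, 1.0, size=(40, 16))       # relevant images (features)
X_neg = rng.normal(1.5, 1.0, size=(120, 16))      # irrelevant images

def biased_score(gamma):
    k_pp = rbf_kernel(X_pos, X_pos, gamma=gamma)  # similarity within positives
    k_pn = rbf_kernel(X_pos, X_neg, gamma=gamma)  # similarity positives vs negatives
    # High when positives are mutually similar and dissimilar to negatives.
    return k_pp.mean() / (k_pn.mean() + 1e-12)

gammas = np.logspace(-4, 1, 20)
best_gamma = max(gammas, key=biased_score)
print(f"selected gamma = {best_gamma:.4g}")
```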

You CAN Teach an Old Dog New Tricks! On Training Knowledge Graph Embeddings

Daniel Ruffinelli, Samuel Broscheit, Rainer Gemulla
2020 International Conference on Learning Representations  
Knowledge graph embedding (KGE) models learn algebraic representations of the entities and relations in a knowledge graph.  ...  These approaches differ along a number of dimensions, including different model architectures, different training strategies, and different approaches to hyperparameter optimization.  ...  Figure 3: Best filtered MRR (%) on validation data achieved during quasi-random search as a function of the number of training epochs.  ... 
dblp:conf/iclr/RuffinelliBG20 fatcat:hztfcce5uncjrjhwz4xxz3sygy
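The quasi-random hyperparameter search mentioned above (the source of the validation MRR curves) can be sketched with a Sobol sequence: low-discrepancy points in the unit hypercube are mapped to hyperparameter ranges and each configuration is trained and evaluated. The ranges and the placeholder evaluation function below are illustrative assumptions, not the actual search space used in the paper.

```python
# Hedged sketch of a quasi-random (Sobol) hyperparameter search;
# ranges and the evaluation stub are illustrative, not the paper's setup.
import numpy as np
from scipy.stats import qmc

sampler = qmc.Sobol(d=3, scramble=True, seed=0)
unit_points = sampler.random(n=16)                # 16 quasi-random configurations

# Map the unit hypercube to (log10 learning rate, embedding dim, dropout).
lows, highs = np.array([-4.0, 64, 0.0]), np.array([-1.0, 512, 0.5])
configs = qmc.scale(unit_points, lows, highs)

def evaluate(lr_log10, dim, dropout):
    """Placeholder for training a KGE model and returning its filtered MRR."""
    return float(np.random.default_rng(int(dim)).uniform(0.2, 0.45))

results = [(evaluate(c[0], int(round(c[1])), c[2]), c) for c in configs]
best_mrr, best_cfg = max(results, key=lambda t: t[0])
print(f"best filtered MRR (stub): {best_mrr:.3f} at config {best_cfg}")
```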

Adopting Robustness and Optimality in Fitting and Learning [article]

Zhiguang Wang, Tim Oates, James Lo
2015 arXiv   pre-print
We generalized a modified exponentialized estimator by pushing the robust-optimal (RO) index λ to -∞ for achieving robustness to outliers by optimizing a quasi-Minimin function.  ...  Optimality is guaranteed by expansion of the convexity region in the Hessian matrix to largely avoid local optima. Detailed quantitative analyses of both robustness and optimality are provided.  ...  As a general error estimator, we provide a quantitative analysis and validate its effectiveness on three function-fitting tasks and one visual recognition task.  ... 
arXiv:1510.03826v3 fatcat:22ztvhzuejb7riqf5yj2setiii

BIG DATA LEARNING THROUGH TEXT ANALYTICS LABELED COMPOUNDS OF THE IoT BIO ENVIRONMENT

2021 International Journal of Biology Pharmacy and Allied Sciences  
To continue with the analysis procedure. But that's all. The publication offers a summary of whatever doesn't disclose results.  ...  Thus far in the IoT environment of data visualization and has been done. Throughout this line of science, implementation of information accumulation. This article highlights the problems of IoT  ...  out a quasi method of machine learning that the majority of existing systems, even some of the algorithms from the generator, use to distinguish the problem.  ... 
doi:10.31032/ijbpas/2021/10.11.1117 fatcat:zbpqyz7fprcdnnul2zbcj44i5u

Improving One-Shot Learning through Fusing Side Information [article]

Yao-Hung Hubert Tsai, Ruslan Salakhutdinov
2018 arXiv   pre-print
Deep Neural Networks (DNNs) often struggle with one-shot learning where we have only one or a few labeled training examples per category.  ...  First, we propose to enforce the statistical dependency between data representations and multiple types of side information.  ...  Instead of learning the networks exclusively from data, we extend the training from data and side information jointly.  ... 
arXiv:1710.08347v2 fatcat:cm5chbpui5fc7kfxl5ffcjnncm
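The entry above proposes enforcing statistical dependency between learned representations and side information. One common way to measure such dependency is the Hilbert-Schmidt Independence Criterion (HSIC); the sketch below computes an empirical HSIC between a batch of representations and attribute vectors, which could be negated and added to a training loss. HSIC is offered here only as an illustrative choice of dependency measure, not as the paper's exact objective.

```python
# Hedged sketch: empirical HSIC between representations Z and side info A,
# usable as a dependency-encouraging loss term; illustrative, not the paper's loss.
import torch

def rbf_gram(x, sigma=1.0):
    d2 = torch.cdist(x, x).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def hsic(z, a, sigma=1.0):
    n = z.shape[0]
    k, l = rbf_gram(z, sigma), rbf_gram(a, sigma)
    h = torch.eye(n) - torch.ones(n, n) / n        # centering matrix
    return torch.trace(k @ h @ l @ h) / (n - 1) ** 2

z = torch.randn(32, 128, requires_grad=True)       # batch of representations
a = torch.randn(32, 10)                            # matching attribute vectors
dependency = hsic(z, a)
loss = -dependency                                 # maximize dependency during training
loss.backward()
print(f"HSIC estimate: {dependency.item():.4f}")
```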

Automated labelling of cancer textures in colorectal histopathology slides using quasi-supervised learning

Devrim Onder, Sulen Sarioglu, Bilge Karacali
2013 Micron  
The resulting labelling performances were compared to those of a conventional powerful supervised classifier using manually labelled ground-truth data.  ...  The results in this series, in comparison with the benchmark classifier, suggest that quasi-supervised image texture labelling may be a useful method in the analysis and classification of pathological slides  ...  In this study, the computational infrastructure of the Biomedical Information Processing Laboratory (BIPLAB) that was supported by a grant from the European Commission (PIRG03-GA-2008-230903) was used.  ... 
doi:10.1016/j.micron.2013.01.003 pmid:23415158 fatcat:s74gxyzvvrcgtddvfgnmx6umey

Information-theoretic semantic multimedia indexing

João Magalhães, Stefan Rüger
2007 Proceedings of the 6th ACM international conference on Image and video retrieval - CIVR '07  
The unique continuous feature-space of text and visual data is constructed by using a minimum description length criterion to find the optimal feature-space representation (optimal from an information  ...  To solve the problem of indexing collections with diverse text documents, image documents, or documents with both text and images, one needs to develop a model that supports heterogeneous types of documents  ...  The high computational cost of the learning process resides on the clustering of the visual feature space and on the quasi-Newton algorithm.  ... 
doi:10.1145/1282280.1282368 dblp:conf/civr/MagalhaesR07 fatcat:buqjdgf4dfdf5l76i7xyivjwou
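The minimum-description-length model selection mentioned above can be illustrated with a standard stand-in: choosing the number of Gaussian mixture components by a BIC-style (MDL-flavoured) score over a joint feature space. The synthetic data and the use of BIC are assumptions for illustration; the paper derives its own MDL criterion and fits the model with a quasi-Newton method.

```python
# Hedged sketch: BIC (an MDL-style criterion) to pick the number of mixture
# components for a joint text+visual feature space; data are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
features = np.vstack([rng.normal(m, 0.5, size=(200, 8)) for m in (0.0, 2.0, 4.0)])

scores = {}
for k in range(1, 9):
    gmm = GaussianMixture(n_components=k, covariance_type="diag", random_state=0)
    gmm.fit(features)
    scores[k] = gmm.bic(features)                 # lower BIC = shorter description

best_k = min(scores, key=scores.get)
print(f"selected number of components: {best_k}")
```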

Weakly Supervised Vessel Segmentation in X-ray Angiograms by Self-Paced Learning from Noisy Labels with Suggestive Annotation [article]

Jingyang Zhang, Guotai Wang, Hongzhi Xie, Shuyang Zhang, Ning Huang, Shaoting Zhang, Lixu Gu
2020 arXiv   pre-print
of coronary arteries derived directly from raw data.  ...  To alleviate the burden on the annotator, we propose a novel weakly supervised training framework that learns from noisy pseudo labels generated from automatic vessel enhancement, rather than accurate  ...  Then, robust principal component analysis (RPCA) [12] is used to further separate the difference image into a quasi-static background layer and a vessel layer based on the quasi-static motion constraint  ... 
arXiv:2005.13366v1 fatcat:lh32out7efholk5tn4po4blypy
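The robust principal component analysis step described above (separating a quasi-static background layer from a sparse vessel layer) follows the standard principal component pursuit decomposition, which can be sketched by alternating singular-value thresholding for the low-rank part with soft thresholding for the sparse part. The code below is a generic textbook sketch on toy data, not the paper's pipeline.

```python
# Hedged sketch of robust PCA via principal component pursuit (inexact ALM):
# D ≈ L (low-rank, quasi-static background) + S (sparse, moving vessels).
# Generic textbook iteration, not the paper's exact pipeline.
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca(D, n_iter=100):
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        # Low-rank update: singular-value thresholding.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * soft_threshold(sig, 1.0 / mu)) @ Vt
        # Sparse update: entrywise soft thresholding.
        S = soft_threshold(D - L + Y / mu, lam / mu)
        # Dual update.
        Y = Y + mu * (D - L - S)
    return L, S

# Toy "difference image" stack: each column is a frame; a few bright pixels move.
rng = np.random.default_rng(3)
D = np.outer(rng.random(64), np.ones(30)) + 0.01 * rng.normal(size=(64, 30))
D[rng.integers(0, 64, 30), np.arange(30)] += 1.0   # sparse "vessel" responses
L, S = rpca(D)
print("rank of background layer:", np.linalg.matrix_rank(L, tol=1e-3))
```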

Training data-efficient image transformers &amp; distillation through attention [article]

Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou
2021 arXiv   pre-print
More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention.  ...  We show the interest of this token-based distillation, especially when using a convnet as a teacher.  ...  Acknowledgements We thank Vinicius Reis and Mannat Singh for exploring a first implementation of image transformers and the insights they gathered at this occasion.  ... 
arXiv:2012.12877v2 fatcat:at6mrl4jozepflmmkkrycno56e
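The teacher-student strategy mentioned above attaches a distillation token to the student transformer, so the student produces one prediction from its class token and another from its distillation token, the latter supervised by the teacher. Below is a hedged sketch of the hard-label variant of that joint loss; the tensor shapes and the equal weighting of the two terms are illustrative.

```python
# Hedged sketch of hard-label distillation with a distillation token:
# the class-token head follows the ground truth, the distillation-token head
# follows the (hard) teacher prediction. Shapes and weights are illustrative.
import torch
import torch.nn.functional as F

batch, n_classes = 8, 1000
cls_logits = torch.randn(batch, n_classes, requires_grad=True)   # class-token head
dist_logits = torch.randn(batch, n_classes, requires_grad=True)  # distillation-token head
teacher_logits = torch.randn(batch, n_classes)                   # frozen convnet teacher
targets = torch.randint(0, n_classes, (batch,))

teacher_labels = teacher_logits.argmax(dim=1)                    # hard teacher labels
loss = 0.5 * F.cross_entropy(cls_logits, targets) \
     + 0.5 * F.cross_entropy(dist_logits, teacher_labels)
loss.backward()
print(f"joint distillation loss: {loss.item():.3f}")
```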

High-throughput adaptive sampling for whole-slide histopathology image analysis (HASHI) via convolutional neural networks: Application to invasive breast cancer detection

Angel Cruz-Roa, Hannah Gilmore, Ajay Basavanhally, Michael Feldman, Shridar Ganesan, Natalie Shih, John Tomaszewski, Anant Madabhushi, Fabio González, Yuanquan Wang
2018 PLoS ONE  
HASHI was trained and validated using three different data cohorts involving nearly 500 cases and then independently tested on 195 studies from The Cancer Genome Atlas.  ...  The results show that (1) the adaptive sampling method is an effective strategy to deal with WSI without compromising prediction accuracy by obtaining comparative results of a dense sampling (~6 million  ...  Acknowledgments Visualization: Angel Cruz-Roa.  ... 
doi:10.1371/journal.pone.0196828 pmid:29795581 pmcid:PMC5967747 fatcat:vubrn66qznhajneeohx4pj537u
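The adaptive sampling idea in the entry above can be sketched as follows: start from a sparse set of patch locations, interpolate the classifier's probability map over the slide, then concentrate the next round of samples where the interpolated map changes fastest. In the sketch a synthetic probability field stands in for the CNN patch classifier, and the gradient-based refinement heuristic is illustrative rather than the paper's exact sampling rule.

```python
# Hedged sketch of gradient-driven adaptive sampling over a "slide";
# a synthetic probability field replaces the CNN patch classifier.
import numpy as np
from scipy.interpolate import griddata

def tumor_probability(points):
    """Stand-in for CNN patch predictions: a blob of high probability."""
    return np.exp(-(((points - 0.6) ** 2).sum(axis=1)) / 0.02)

rng = np.random.default_rng(4)
samples = rng.random((200, 2))                        # initial sparse locations
values = tumor_probability(samples)

grid_y, grid_x = np.mgrid[0:1:128j, 0:1:128j]
for _ in range(3):                                    # adaptive refinement rounds
    prob_map = griddata(samples, values, (grid_x, grid_y),
                        method="linear", fill_value=0.0)
    gy, gx = np.gradient(prob_map)
    grad_mag = np.hypot(gx, gy).ravel()
    # Sample new locations where the interpolated map changes fastest.
    pick = np.argsort(grad_mag)[-200:]
    new_pts = np.column_stack([grid_x.ravel()[pick], grid_y.ravel()[pick]])
    new_pts = new_pts + rng.normal(scale=1e-3, size=new_pts.shape)  # avoid duplicates
    samples = np.vstack([samples, new_pts])
    values = np.concatenate([values, tumor_probability(new_pts)])

print("total patches sampled:", len(samples))
```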

Multilevel MIMO Detection with Deep Learning [article]

Vincent Corlay, Joseph J. Boutros, Philippe Ciblat, Loïc Brunel
2019 arXiv   pre-print
Then, after showing the DNN architecture for detection, we propose a twin-network neural structure. Batch size and training statistics for efficient learning are investigated.  ...  Near-Maximum-Likelihood performance with a relatively reasonable number of parameters is achieved.  ...  In light of the above discussion, we would want both to learn the necessary structure of the code to get quasi-MLD performance (i.e. the SNR should not be too high) but the "noise" in the label (i.e. messages  ... 
arXiv:1812.01571v2 fatcat:6kzmnm23q5b67eyjnfwxezruia

Multi-Objective Parameter Selection for Classifiers

Christoph Müssel, Ludwig Lausser, Markus Maucher, Hans A. Kestler
2012 Journal of Statistical Software  
The algorithm determines a set of Pareto-optimal parameter configurations and leaves the ultimate decision on the weighting of objectives to the researcher.  ...  Several strategies for sampling and optimizing parameters are supplied.  ...  Acknowledgments This work is supported by the Graduate School of Mathematical Analysis of Evolution, Information and Complexity at the University of Ulm (CM, HAK) and by the German Federal Ministry of  ... 
doi:10.18637/jss.v046.i05 fatcat:mouyb6hvwnbnhetn7r75mhlpju
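The core of the multi-objective selection above is a non-dominated (Pareto) filter over evaluated parameter configurations: keep every configuration that no other configuration beats on all objectives simultaneously. The sketch below applies such a filter to a small illustrative table with two minimized objectives (error rate and model complexity); the objectives and candidate values are assumptions, and the package itself additionally supplies several sampling and optimization strategies.

```python
# Hedged sketch of a Pareto (non-dominated) filter over parameter
# configurations with two minimized objectives; values are illustrative.
import numpy as np

# Each row: (parameter value, error rate, model complexity)
configs = np.array([
    [0.1, 0.20, 3.0],
    [0.5, 0.15, 5.0],
    [1.0, 0.12, 9.0],
    [2.0, 0.12, 12.0],
    [5.0, 0.25, 2.0],
])

objectives = configs[:, 1:]                       # columns to minimize

def is_dominated(i):
    others = np.delete(objectives, i, axis=0)
    # Dominated if some other config is <= on all objectives and < on at least one.
    return np.any(np.all(others <= objectives[i], axis=1) &
                  np.any(others < objectives[i], axis=1))

pareto = configs[[not is_dominated(i) for i in range(len(configs))]]
print("Pareto-optimal configurations:\n", pareto)
```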
Showing results 1 — 15 out of 16,712 results