15,632 Hits in 6.5 sec

Stochastic Negative Mining for Learning with Large Output Spaces [article]

Sashank J. Reddi, Satyen Kale, Felix Yu, Dan Holtmann-Rice, Jiecao Chen, Sanjiv Kumar
2018 arXiv   pre-print
We consider the problem of retrieving the most relevant labels for a given input when the size of the output space is very large.  ...  Finally, we conduct experiments which demonstrate that Stochastic Negative Mining yields benefits over commonly used negative sampling approaches.  ...  We provide experimental evidence that Stochastic Negative Mining does indeed help improve performance when learning with large output spaces compared to simpler sampling based strategies.  ... 
arXiv:1810.07076v1 fatcat:qnjqyy6s6ja2lpqr5vsygboaia
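The snippet above describes the core idea of Stochastic Negative Mining: instead of ranking the full (very large) output space, sample a subset of negative labels and keep only the hardest, i.e. highest-scoring, ones for the loss. A minimal sketch of that selection step follows; the function and parameter names are illustrative, not taken from the paper:

```python
import random

def stochastic_negative_mining(scores, positive, sample_size, k, rng=random):
    """Sample `sample_size` negative labels uniformly, then keep the k
    highest-scoring (hardest) negatives from that sampled subset."""
    negatives = [i for i in range(len(scores)) if i != positive]
    sampled = rng.sample(negatives, sample_size)
    # Hardest negatives = largest model scores among the sampled subset.
    sampled.sort(key=lambda i: scores[i], reverse=True)
    return sampled[:k]
```

In practice the returned label indices would feed a ranking or softmax loss over the reduced set rather than over the full output space.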

Improving Calibration in Deep Metric Learning With Cross-Example Softmax [article]

Andreas Veit, Kimberly Wilber
2020 arXiv   pre-print
We further introduce Cross-Example Negative Mining, in which each pair is compared to the hardest negative comparisons across the entire batch.  ...  Modern image retrieval systems increasingly rely on the use of deep neural networks to learn embedding spaces in which distance encodes the relevance between a given query and image.  ...  Stochastic Negative Mining.  ... 
arXiv:2011.08824v1 fatcat:ujmbmi76efa6tjvnkf7cmhdj6m
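The snippet above describes Cross-Example Negative Mining: each matched pair is compared against the hardest negative comparisons across the entire batch. The paper's actual formulation is a softmax over cross-example similarities; the sketch below instead uses a simple margin loss, purely to illustrate the hardest-in-batch selection, and all names are illustrative:

```python
def cross_example_margin_loss(sim, margin=0.2):
    """sim[i][j]: similarity of query i to item j; the diagonal holds the
    matched (positive) pair. Each positive is pushed above the hardest
    negative found anywhere in the batch row."""
    n = len(sim)
    loss = 0.0
    for i in range(n):
        hardest_neg = max(sim[i][j] for j in range(n) if j != i)
        loss += max(0.0, margin - sim[i][i] + hardest_neg)
    return loss / n
```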

Discriminative Learning of Deep Convolutional Feature Point Descriptors

Edgar Simo-Serra, Eduard Trulls, Luis Ferraz, Iasonas Kokkinos, Pascal Fua, Francesc Moreno-Noguer
2015 2015 IEEE International Conference on Computer Vision (ICCV)  
We deal with the large number of potential pairs with the combination of a stochastic sampling of the training set and an aggressive mining strategy biased towards patches that are hard to classify.  ...  In this paper we use Convolutional Neural Networks (CNNs) to learn discriminant patch representations and in particular train a Siamese network with pairs of (non-)corresponding patches.  ...  In this paper we introduce a novel training scheme, based on mining of both positive and negative correspondences, and obtain large performance gains in patch retrieval.  ... 
doi:10.1109/iccv.2015.22 dblp:conf/iccv/Simo-SerraTFKFM15 fatcat:s7ovndlftjf4rapj2cxt65llwe
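The aggressive mining strategy described above amounts to forwarding a large pool of sampled pairs and backpropagating only through the fraction that is hardest to classify. A hedged sketch of that filtering step, assuming per-pair losses have already been computed (names are illustrative):

```python
def mine_hard_pairs(pair_losses, keep_ratio=0.5):
    """Return indices of the highest-loss (hardest) pairs; only these
    would be used in the backward pass."""
    k = max(1, int(len(pair_losses) * keep_ratio))
    order = sorted(range(len(pair_losses)),
                   key=lambda i: pair_losses[i], reverse=True)
    return order[:k]
```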

Application of Genetic Algorithm Optimized Neural Network Connection Weights for Medical Diagnosis of PIMA Indians Diabetes

Asha Gowda Karegowda, A.S Manjunath, M.A Jayaram
2011 International Journal of Soft Computing  
Neural networks are one of many data mining analytical tools that can be utilized to make predictions for medical data.  ...  In addition, the hybrid GA-BPN with relevant inputs leads to further improved categorization accuracy compared to results produced by GA-BPN alone with some redundant inputs.  ...  GA, a stochastic general search method capable of effectively exploring large search spaces, is used with BPN to determine the optimized connection weights of the BPN.  ... 
doi:10.5121/ijsc.2011.2202 fatcat:gvgcdmd6yrek5kliraf6n2u7ha

Graph-based relational learning

Lawrence B. Holder, Diane J. Cook
2003 SIGKDD Explorations  
Learning from graphs, rather than logic, presents representational issues both in input data preparation and output pattern language.  ...  many graph-based data mining techniques.  ...  With the increased need for mining streaming data, the development of new methods for incremental learning from dynamic graphs is important.  ... 
doi:10.1145/959242.959254 fatcat:l2j3t2nb4jfgvej5zvzrsbzb3m

Federated Learning with Only Positive Labels [article]

Felix X. Yu, Ankit Singh Rawat, Aditya Krishna Menon, Sanjiv Kumar
2020 arXiv   pre-print
We further extend the proposed method to the settings with large output spaces.  ...  As a result, during each federated learning round, the users need to locally update the classifier without having access to the features and the model parameters for the negative classes.  ...  • FedAwS: Our method with stochastic negative mining (cf.  ... 
arXiv:2004.10342v1 fatcat:vjodhbg5xvgp7bhfmn4u3uvodu

Scalable Nonlinear AUC Maximization Methods [article]

Majdi Khalid, Indrakshi Ray, Hamidreza Chitsaz
2019 arXiv   pre-print
The area under the ROC curve (AUC) is a measure of interest in various machine learning and data mining applications.  ...  However, the high training complexity renders the kernelized AUC machines infeasible for large-scale data.  ...  While this learning algorithm is applicable for large datasets, it becomes expensive for training enormous datasets embedded in a large dimensional feature space.  ... 
arXiv:1710.00760v4 fatcat:dqpgvvabyjcczbk4373ovsctjq
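The AUC that the entry above seeks to maximize is the probability that a randomly drawn positive example is scored above a randomly drawn negative one. A minimal exact computation of that pairwise definition (ties counted as half a win; names are illustrative):

```python
def auc(scores, labels):
    """Exact AUC: fraction of (positive, negative) pairs the model
    ranks correctly, counting ties as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(P·N) pairwise form also makes clear why direct AUC maximization becomes expensive on large datasets, which is the scalability problem the paper targets.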

Learning Deep Structure-Preserving Image-Text Embeddings [article]

Liwei Wang, Yin Li, Svetlana Lazebnik
2016 arXiv   pre-print
This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities.  ...  The network is trained using a large margin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by metric learning literature  ...  We would like to thank Bryan Plummer for help with phrase localization evaluation.  ... 
arXiv:1511.06078v2 fatcat:zagdm4qg3resxfgnv4afq4xyvm
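The cross-view ranking constraints mentioned above require each matched image-text pair to score higher than mismatched pairs in both directions (image-to-text and text-to-image). A minimal sketch of such a bidirectional margin objective, with the diagonal of the similarity matrix holding matched pairs (all names are illustrative, and the paper's full objective additionally preserves within-view neighborhood structure):

```python
def cross_view_ranking_loss(sim, margin=0.1):
    """sim[i][j]: similarity of image i to sentence j; diagonal = matched
    pairs. Penalize mismatched pairs that come within `margin` of the
    matched score, in both retrieval directions."""
    n = len(sim)
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            loss += max(0.0, margin - sim[i][i] + sim[i][j])  # image -> text
            loss += max(0.0, margin - sim[i][i] + sim[j][i])  # text -> image
    return loss
```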

Ultra-Fast Data-Mining Hardware Architecture Based on Stochastic Computing

Antoni Morro, Vincent Canals, Antoni Oliver, Miquel L. Alomar, Josep L. Rossello, Frederique Lisacek
2015 PLoS ONE  
Minimal hardware implementations able to cope with the processing of large amounts of data in reasonable times are highly desired in our information-driven society.  ...  We design pulse-based stochastic-logic blocks to obtain an efficient pattern recognition system.  ...  On the other hand, for operations requiring correlated signals we employ the same LFSR output for all stochastic variables.  ... 
doi:10.1371/journal.pone.0124176 pmid:25955274 pmcid:PMC4425430 fatcat:gdgxyz22nvghnhdyb5emkl5sdq

Temporally Coherent Embeddings for Self-Supervised Video Representation Learning [article]

Joshua Knights, Ben Harwood, Daniel Ward, Anthony Vanderkop, Olivia Mackenzie-Ross, Peyman Moghadam
2020 arXiv   pre-print
Using TCE we learn robust representations from large quantities of unlabeled video data.  ...  This paper presents TCE: Temporally Coherent Embeddings for self-supervised video representation learning.  ...  Drawing both from annealing methods and semi-hard mining approaches in the supervised domain [26] , [27] we formulate a novel approach for negative selection during self-supervised learning on large  ... 
arXiv:2004.02753v5 fatcat:xu7hkj2yjvctllq76mr76t3wuu

Learning Deep Structure-Preserving Image-Text Embeddings

Liwei Wang, Yin Li, Svetlana Lazebnik
2016 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities.  ...  The network is trained using a large-margin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by metric learning literature  ...  We would like to thank Bryan Plummer for help with phrase localization evaluation.  ... 
doi:10.1109/cvpr.2016.541 dblp:conf/cvpr/WangLL16 fatcat:ximgbdttdrc7xfqe4jf4qfin6u

Metric Embedding Learning on Multi-Directional Projections

Gábor Kertész
2020 Algorithms  
, and the output of such a model maps them to a multidimensional encoding space.  ...  While latest developments in computer vision—mostly driven by deep learning—have shown that high performance models for classification or categorization can be engineered, the problem of discriminating  ...  Firstly, embedding learning in MDIPFL space is feasible as the learned representations are suitable for discrimination.  ... 
doi:10.3390/a13060133 fatcat:gwmbpdgorffwzmkoinj27qllku

Fracking Deep Convolutional Image Descriptors [article]

Edgar Simo-Serra and Eduard Trulls and Luis Ferraz and Iasonas Kokkinos and Francesc Moreno-Noguer
2015 arXiv   pre-print
We propose to explore this space with a stochastic sampling of the training set, in combination with an aggressive mining strategy over both the positive and negative samples which we denote as "fracking"  ...  In this paper we propose a novel framework for learning local image descriptors in a discriminative manner.  ...  However, when the pool of negative samples is very large random sampling will produce many negatives with a very small loss, which do not contribute to the global loss, and thus stifle the learning process  ... 
arXiv:1412.6537v2 fatcat:cwq34j4pyjcnxhfa6wviecsxxa

PREDICTION OF CARDIOVASCULAR DISEASES USING GENETIC ALGORITHM AND DEEP LEARNING TECHNIQUES

Kratika Sharma, T. Satya Kiranmai
2021 International Journal of Emerging Trends in Engineering and Development  
The dataset provided by University of California, Irvine (UCI) machine learning repository is used for training and testing.  ...  Recent research has delved into uniting these techniques to provide hybrid machine learning algorithms.  ...  disease (true negative rate).  ... 
doi:10.26808/rs.ed.i11v3.01 fatcat:s5bfqsj4kzfw5lf5iwcoxoll2i

Learning an Approximation to Inductive Logic Programming Clause Evaluation [chapter]

Frank DiMaio, Jude Shavlik
2004 Lecture Notes in Computer Science  
One challenge faced by many Inductive Logic Programming (ILP) systems is poor scalability to problems with large search spaces and many examples.  ...  These hypotheses and their corresponding evaluation scores serve as training data for learning an approximate hypothesis evaluator.  ...  One challenge many ILP systems face is scalability to large datasets with large hypothesis spaces.  ... 
doi:10.1007/978-3-540-30109-7_10 fatcat:wpmn4di6dfgf7mu6a7wseimgv4
Showing results 1 — 15 out of 15,632 results