28,474 Hits in 6.4 sec

Surrogate Functions for Maximizing Precision at the Top [article]

Purushottam Kar and Harikrishna Narasimhan and Prateek Jain
2015 arXiv   pre-print
The problem of maximizing precision at the top of a ranked list, often dubbed Precision@k (prec@k), finds relevance in myriad learning applications such as ranking, multi-label classification, and learning  ...  At the heart of our results is a family of truly upper bounding surrogates for prec@k.  ...  The ranking of items at the top is of utmost importance in these applications and several performance measures, such as Precision@k, Average Precision and NDCG have been designed to promote accuracy at  ... 
arXiv:1505.06813v1 fatcat:voqv6yjp65cffhxnv5db5w3dua
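
As a concrete reading of the Precision@k (prec@k) measure this abstract discusses, here is a minimal NumPy sketch (illustrative only, not code from the paper):

    import numpy as np

    def precision_at_k(scores, labels, k):
        """Fraction of the k highest-scoring items that are relevant (label == 1)."""
        order = np.argsort(scores)[::-1]          # rank items by descending score
        return np.asarray(labels)[order[:k]].mean()

    scores = np.array([0.9, 0.1, 0.8, 0.4, 0.7])
    labels = np.array([1, 0, 0, 1, 1])
    print(precision_at_k(scores, labels, k=3))    # 2 of the top 3 items are relevant -> 0.666...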

Algorithmic Foundation of Deep X-Risk Optimization [article]

Tianbao Yang
2022 arXiv   pre-print
It includes many widely used measures or objectives, e.g., AUROC, AUPRC, partial AUROC, NDCG, MAP, top-K NDCG, top-K MAP, listwise losses, p-norm push, top push, precision/recall at top K positions, precision  ...  Discussions about the presented results and future studies are given at the end. Efficient algorithms for optimizing a variety of X-risks are implemented in the LibAUC library at www.libauc.org.  ...  Other measures of similar kind include accuracy at the top such as precision at top-K positions (P@K), recall at top-K positions (R@K), and precision at a certain recall level (P@R).  ... 
arXiv:2206.00439v4 fatcat:57sqc2rrcrg7djd6w4xurboile
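
One of the measures listed here, precision at a certain recall level (P@R), can be illustrated with the following sketch; it is a generic illustration and is not taken from the paper or the LibAUC library:

    import numpy as np

    def precision_at_recall(scores, labels, target_recall):
        """Precision at the smallest score cutoff whose recall reaches target_recall."""
        order = np.argsort(scores)[::-1]
        labels = np.asarray(labels)[order]
        tp = np.cumsum(labels)                          # true positives at each cutoff
        recall = tp / labels.sum()
        k = np.searchsorted(recall, target_recall) + 1  # first cutoff meeting the target
        return tp[k - 1] / k

    scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
    labels = np.array([1, 0, 1, 0, 1])
    print(precision_at_recall(scores, labels, 0.6))     # recall 2/3 reached at cutoff 3 -> precision 2/3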

ZQBA: A Zero-Query, Boosted Ambush Adversarial Attack on Image Retrieval

Aarnav Sawant, Tyler Giallanza
2022 International Journal on Cybernetics & Informatics  
We hope our method serves as a baseline for the evaluation of robustness for future image retrieval research.  ...  Our approach is successfully able to disrupt the relevance of our target image retrieval models with a 65% decrease in Mean Average Precision (mAP) as compared to state-of-the-art UAP [18].  ...  Special thanks to Jie Li [19] and the other UAP [19] authors for generously providing their universal perturbations, which served as the benchmark for this research.  ... 
doi:10.5121/ijci.2022.110404 fatcat:s4xpnvchgbb3zpylps3hftslkq

Nonlinear classifiers for ranking problems based on kernelized SVM [article]

Václav Mácha, Lukáš Adam, Václav Šmídl
2020 arXiv   pre-print
As an example, we can mention ranking problems, accuracy at the top or search engines where only the top few queries matter.  ...  Many classification problems focus on maximizing the performance only on the samples with the highest relevance instead of all samples.  ...  Accuracy at the Top: The second problem, Accuracy at the Top, was introduced in [5].  ... 
arXiv:2002.11436v1 fatcat:h3jeobsqfzat7hdcz32czibn34

Surrogate ranking for very expensive similarity queries

Fei Xu, Ravi Jampani, Mingxi Wu, Chris Jermaine, Tamer Kahveci
2010 2010 IEEE 26th International Conference on Data Engineering (ICDE 2010)  
We develop a general-purpose, statistical framework for answering top-k queries in such databases, when the database administrator is able to supply an inexpensive surrogate ranking function that substitutes  ...  We develop a robust method that learns the relationship between the surrogate function and the similarity measure.  ...  Given an appropriate surrogate ranking function R, the algorithm that we propose to answer a top-k query is simple: 1) First, find the top k matches for q using the surrogate ranking function R, for  ... 
doi:10.1109/icde.2010.5447888 dblp:conf/icde/XuJWJK10 fatcat:eul6wx4fh5gf3ff3eqh7kzzzqq
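
The algorithm outline in this snippet is truncated, but the surrogate-ranking pattern it describes, ranking cheaply first and then verifying with the expensive similarity measure, can be sketched as below. The oversampling factor and the function names are illustrative assumptions, not the paper's statistical framework:

    import heapq

    def surrogate_top_k(query, items, cheap_rank, expensive_sim, k, oversample=10):
        """Two-phase top-k: pre-filter with a cheap surrogate, re-rank survivors exactly."""
        # Phase 1: keep the k * oversample best candidates under the surrogate score.
        candidates = heapq.nlargest(k * oversample, items,
                                    key=lambda x: cheap_rank(query, x))
        # Phase 2: run the expensive similarity only on the surviving candidates.
        return heapq.nlargest(k, candidates, key=lambda x: expensive_sim(query, x))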

Performance or Trust? Why Not Both. Deep AUC Maximization with Self-Supervised Learning for COVID-19 Chest X-ray Classifications [article]

Siyuan He, Pengcheng Xi, Ashkan Ebadi, Stephane Tremblay, Alexander Wong
2021 arXiv   pre-print
An ablation study is conducted on both performance and trust for the feature learning methods and loss functions.  ...  In this work, we integrate a new surrogate loss with self-supervised learning for computer-aided screening of COVID-19 patients using radiography images.  ...  We also acknowledge support from the Pandemic Response Challenge Program at the National Research Council of Canada.  ... 
arXiv:2112.08363v1 fatcat:l4rz6xnyxrf25jtjamierovkye
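
The abstract mentions integrating a new surrogate loss for deep AUC maximization but the snippet does not define it; a common generic choice, the pairwise squared-hinge AUC surrogate, looks like the sketch below (an assumption for illustration, not necessarily the loss used in this paper):

    import numpy as np

    def auc_squared_hinge(pos_scores, neg_scores, margin=1.0):
        """Penalize positive/negative score pairs whose gap falls short of the margin."""
        diff = pos_scores[:, None] - neg_scores[None, :]   # all positive-negative pairs
        return np.mean(np.maximum(0.0, margin - diff) ** 2)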

Efficient Top Rank Optimization with Gradient Boosting for Supervised Anomaly Detection [chapter]

Jordan Frery, Amaury Habrard, Marc Sebban, Olivier Caelen, Liyun He-Guelton
2017 Lecture Notes in Computer Science  
We tackle this task with a learning to rank strategy by optimizing a differentiable smoothed surrogate of the so-called Average Precision (AP).  ...  We show that using AP is much better to optimize the top rank alerts than the state of the art measures.  ...  In other words, one aims at maximizing the number of true positives in the top rank alerts (i.e. the so-called precision) rather than discriminating between abnormal and normal cases.  ... 
doi:10.1007/978-3-319-71249-9_2 fatcat:uousny3ypjgzhb4x47hb3gsgvu
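
For reference, the (unsmoothed) Average Precision that this chapter's surrogate approximates can be computed from a ranked list as follows; the smoothing used by the authors is not reproduced here:

    import numpy as np

    def average_precision(scores, labels):
        """Mean of precision@k taken at every rank k occupied by a positive item."""
        order = np.argsort(scores)[::-1]
        labels = np.asarray(labels)[order]
        ranks = np.arange(1, len(labels) + 1)
        precision_at_rank = np.cumsum(labels) / ranks
        return precision_at_rank[labels == 1].mean()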

DeepTopPush: Simple and Scalable Method for Accuracy at the Top [article]

Lukáš Adam, Václav Mácha, Václav Šmídl
2020 arXiv   pre-print
We consider classifiers in the form of an arbitrary (deep) network and propose a new method, DeepTopPush, for minimizing the top loss function.  ...  Accuracy at the top is a special class of binary classification problems where the performance is evaluated only on a small number of relevant (top) samples.  ...  Its left-hand side contains any classifier f and computes the scores z_i, while the right-hand side is the extension for maximizing accuracy at the top.  ... 
arXiv:2006.12293v1 fatcat:3zjsssah7bejfowkahbxo7yu7y
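
The "top loss" referred to here pushes every positive sample above the highest-scoring negative; a simplified hinge version of that idea is sketched below (an illustration of the general top-push principle, not the authors' exact DeepTopPush objective):

    import numpy as np

    def top_push_hinge(pos_scores, neg_scores):
        """Hinge penalty for positives that fail to outscore the single best negative."""
        top_negative = neg_scores.max()
        return np.maximum(0.0, 1.0 - (pos_scores - top_negative)).mean()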

Constrained Classification and Ranking via Quantiles [article]

Alan Mackey, Xiyang Luo, Elad Eban
2018 arXiv   pre-print
We explicitly model the threshold at which a classifier must operate to satisfy the constraint, yielding a surrogate loss function which avoids the complexity of constrained optimization.  ...  Binary classifiers which face class imbalance are often evaluated by the F_β score, area under the precision-recall curve, Precision at K, and more.  ...  Acknowledgements The authors would like to thank Ofer Meshi for helpful discussions and insightful comments on the manuscript.  ... 
arXiv:1803.00067v1 fatcat:e36mrilp7rhpxbslcm3sslz62u
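
The quantile view described here, explicitly modeling the threshold at which the classifier must operate, can be illustrated for Precision at K: the threshold is the K-th largest score, i.e. an upper quantile of the score distribution. The sketch below is an illustrative reading of the snippet, not the paper's surrogate loss:

    import numpy as np

    def precision_at_k_via_threshold(scores, labels, k):
        """Place the decision threshold at the k-th largest score, then measure precision."""
        threshold = np.sort(scores)[-k]              # k-th largest score (an upper quantile)
        predicted_positive = scores >= threshold     # may exceed k items if scores tie
        return np.asarray(labels)[predicted_positive].mean()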

A Tale of Two Models: Constructing Evasive Attacks on Edge Models [article]

Wei Hao, Aahil Awatramani, Jiayang Hu, Chengzhi Mao, Pin-Chun Chen, Eyal Cidon, Asaf Cidon, Junfeng Yang
2022 arXiv   pre-print
as the authoritative model version, used for validation, debugging and retraining.  ...  Full-precision deep learning models are typically too large or costly to deploy on edge devices.  ...  ACKNOWLEDGMENTS We thank the reviewers for their comments.  ... 
arXiv:2204.10933v1 fatcat:kxq5sv6tozam5ejsxc22twegvy

Stochastic Emergence of Repeating Cortical Motifs in Spontaneous Membrane Potential Fluctuations In Vivo

Alik Mokeichev, Michael Okun, Omri Barak, Yonatan Katz, Ohad Ben-Shahar, Ilan Lampl
2007 Neuron  
We found no evidence for the existence of deterministically generated cortical motifs.  ...  We searched for motifs in spontaneous activity, recorded from the rat barrel cortex and from the cat striate cortex of anesthetized animals, and found numerous repeating patterns of high similarity and  ...  We thank Gilad Jacobson for his insightful comments during the preparation of this manuscript. We thank the MOSIX group for providing  ... 
doi:10.1016/j.neuron.2007.01.017 pmid:17270737 fatcat:hlmcsotuzzcazd6n5sc56yvwdm

Optimization and Analysis of the pAp@k Metric for Recommender Systems

Gaurush Hiranandani, Warut Vijitbenjaronk, Sanmi Koyejo, Prateek Jain
2020 International Conference on Machine Learning  
The pAp@k metric, which combines the partial-AUC and the precision@k metrics, was recently proposed to evaluate such recommendation systems and has been used in real-world deployments.  ...  Conceptually, pAp@k measures the probability of correctly ranking a top-ranked positive instance over top-ranked negative instances.  ...  Acknowledgements We thank the anonymous reviewers for providing helpful and constructive feedback on the paper. We also thank Harikrishna Narasimhan for helpful discussions.  ... 
dblp:conf/icml/HiranandaniVKJ20 fatcat:k44v5mwzojce3drwh3uozzocca
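
Based only on the snippet's description of pAp@k as "the probability of correctly ranking a top-ranked positive instance over top-ranked negative instances", a rough empirical pairwise estimator might look like the sketch below; the paper's precise definition of pAp@k may differ:

    import numpy as np

    def pairwise_top_rank_prob(pos_scores, neg_scores, k):
        """Fraction of (positive, top-k negative) pairs in which the positive scores higher."""
        top_negs = np.sort(neg_scores)[::-1][:k]     # the k highest-scoring negatives
        return (pos_scores[:, None] > top_negs[None, :]).mean()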

Data-Efficient Design Exploration through Surrogate-Assisted Illumination [article]

Adam Gaier, Alexander Asteroth, Jean-Baptiste Mouret
2018 arXiv   pre-print
Design optimization techniques are often used at the beginning of the design process to explore the space of possible designs.  ...  In this article we introduce a new illumination algorithm, Surrogate-Assisted Illumination (SAIL), that leverages surrogate modeling techniques to create a map of the design space according to user-defined  ...  The authors would like to thank Alexander Hagg and the ResiBots team for all their feedback.  ... 
arXiv:1806.05865v1 fatcat:gkkygkdyvrhbrdohb7hdmj6qvy
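
SAIL's data efficiency comes from spending expensive evaluations only on candidates that a cheap surrogate model predicts to be promising; a generic pre-screening sketch of that idea (not the full SAIL / MAP-Elites illumination loop) is:

    import numpy as np

    def prescreen_and_evaluate(candidates, surrogate_predict, true_objective, budget):
        """Rank candidates with a learned surrogate; spend the evaluation budget on the best."""
        predicted = np.array([surrogate_predict(c) for c in candidates])
        chosen = np.argsort(predicted)[::-1][:budget]   # most promising under the surrogate
        return {int(i): true_objective(candidates[i]) for i in chosen}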

Data-efficient exploration, optimization, and modeling of diverse designs through surrogate-assisted illumination

Adam Gaier, Alexander Asteroth, Jean-Baptiste Mouret
2017 Proceedings of the Genetic and Evolutionary Computation Conference - GECCO '17  
The Surrogate-Assisted Illumination algorithm (SAIL), introduced here, integrates approximative models and intelligent sampling of the objective function to minimize the number of evaluations required  ...  This technique has the potential to be a powerful tool for design space exploration, but is limited by the need for numerous evaluations.  ...  The authors would like to thank Roby Velez, Alexander Hagg, and the ResiBots team for their feedback.  ... 
doi:10.1145/3071178.3071282 dblp:conf/gecco/GaierAM17 fatcat:elnfrncg7ramnfo7ozybvbhmtm

Implicit Rate-Constrained Optimization of Non-decomposable Objectives [article]

Abhishek Kumar, Harikrishna Narasimhan, Andrew Cotter
2021 arXiv   pre-print
Examples of such problems include optimizing the false negative rate at a fixed false positive rate, optimizing precision at a fixed recall, optimizing the area under the precision-recall or ROC curves  ...  The code for the proposed method is available at https://github.com/google-research/google-research/tree/master/implicit_constrained_optimization  ...  To maximize the model's precision at the threshold λ ∈ R at which it achieves a coverage of k, we can set: f(θ, λ) = −precision(s_θ^λ); g(θ, λ) = TP(s_θ^λ) + FP(s_θ^λ) − k. Example 4 (AUC-PR).  ... 
arXiv:2107.10960v3 fatcat:pmeqycvjbjhilkskk7xw4q6fje
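
Read together with the title, the snippet's example fits the generic implicit-rate-constrained template sketched below in LaTeX notation; the particular relaxation and smoothing used in the paper are not shown in the snippet and are not reproduced here:

    \min_{\theta} \; f\bigl(\theta, \lambda(\theta)\bigr)
    \quad \text{where } \lambda(\theta) \text{ is defined implicitly by } g\bigl(\theta, \lambda(\theta)\bigr) = 0,
    \qquad
    \frac{d\lambda}{d\theta} \;=\; -\left(\frac{\partial g}{\partial \lambda}\right)^{-1} \frac{\partial g}{\partial \theta}
    \quad \text{(implicit function theorem)}.

With f and g as in the snippet, λ(θ) is the threshold at which the model covers exactly k examples, and the gradient of the precision objective with respect to θ can be propagated through λ(θ) via the expression above.
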
Showing results 1 — 15 out of 28,474 results