A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2022.
The file type is application/pdf.
PRADA: Practical Black-Box Adversarial Attacks against Neural Ranking Models
[article] · 2022 · arXiv pre-print
Neural ranking models (NRMs) have shown remarkable success in recent years, especially with pre-trained language models. However, deep neural models are notorious for their vulnerability to adversarial examples. Adversarial attacks may become a new type of web spamming technique, given our increased reliance on neural information retrieval models. Therefore, it is important to study potential adversarial attacks to identify vulnerabilities of NRMs before they are deployed. In this paper, we …
arXiv:2204.01321v3
fatcat:6cjnc35w5nc2fc4knehckzcrxm