A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL. The file type is application/pdf.
Squared ℓ_2 Norm as Consistency Loss for Leveraging Augmented Data to Learn Robust and Invariant Representations
[article] arXiv pre-print, 2020
Data augmentation is one of the most popular techniques for improving the robustness of neural networks. In addition to directly training the model with original and augmented samples, a torrent of methods regularizing the distance between embeddings/representations of the original samples and their augmented counterparts has been introduced. In this paper, we explore these various regularization choices, seeking to provide a general understanding of how we should regularize the
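As a rough illustration of the consistency-regularization idea the abstract describes, the sketch below computes a squared ℓ2 penalty between embeddings of original and augmented samples. All names, the embedding shapes, and the weighting hyperparameter are illustrative assumptions, not code from the paper:

```python
import numpy as np

def squared_l2_consistency(z_orig: np.ndarray, z_aug: np.ndarray) -> float:
    """Squared l2 distance between embeddings of original samples (z_orig)
    and their augmented counterparts (z_aug), averaged over the batch.
    Both arrays are assumed to have shape (batch_size, embed_dim)."""
    diff = z_orig - z_aug
    return float(np.mean(np.sum(diff ** 2, axis=1)))

# A total training objective would then combine a task loss with this
# penalty, e.g. loss = task_loss + lam * squared_l2_consistency(z, z_aug),
# where lam is a hypothetical regularization weight.
z = np.array([[1.0, 0.0], [0.0, 2.0]])       # embeddings of original samples
z_aug = np.array([[0.0, 0.0], [0.0, 2.0]])   # embeddings of augmented samples
print(squared_l2_consistency(z, z_aug))      # mean of per-sample squared distances
```

Identical embeddings give a penalty of zero, so minimizing this term pushes the network toward representations that are invariant to the augmentation.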
arXiv:2011.13052v1
fatcat:wpn6qlr3a5fwnahngpjalirzye