Self-Contextualized Attention for Abusive Language Identification
2021
Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media
unpublished
The use of attention mechanisms in deep learning approaches has become popular in natural language processing due to their outstanding performance. These mechanisms make it possible to weigh the importance of the elements of a sequence according to their context. However, this importance has so far been modeled independently, either between the pairs of elements of a sequence (self-attention) or between a sequence and its application domain (contextual attention), leading to the loss of relevant information…
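Note: the abstract contrasts pairwise self-attention with domain-level contextual attention. As a point of reference only, below is a minimal NumPy sketch of standard scaled dot-product self-attention, which illustrates the pairwise weighting the abstract refers to; it is not the paper's combined self-contextualized mechanism, and all names (self_attention, Wq, Wk, Wv) are illustrative.

    import numpy as np

    def softmax(x, axis=-1):
        # Numerically stable softmax along the given axis.
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        # X: (seq_len, d_model) token representations.
        # Wq, Wk, Wv: (d_model, d_k) projection matrices.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        # Pairwise importance between every pair of sequence elements.
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        weights = softmax(scores, axis=-1)
        # Each output is a context-weighted mix of the value vectors.
        return weights @ V

    # Toy usage: 4 tokens with 8-dimensional embeddings.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)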
doi:10.18653/v1/2021.socialnlp-1.9
fatcat:fmnrmhnrgbffpcxnotjhjhlwpi