TX$^2$: Transformer eXplainability and eXploration

Nathan Martindale, Scott Stewart
2021 Journal of Open Source Software  
The Transformer eXplainability and eXploration (TX$^2$) software package (Martindale & Stewart, 2021) is a library designed for artificial intelligence researchers to better understand the performance of transformer models (Vaswani et al., 2017) used for sequence classification. The tool integrates with a trained transformer model and a dataset split into training and testing populations to produce an ipywidget (Project Jupyter Contributors, 2021) dashboard with a number of visualizations for understanding model performance, with an emphasis on explainability and interpretability. The TX$^2$ package is primarily intended to fit into a workflow centered around Jupyter Notebooks (Kluyver et al., 2016), and it currently assumes the use of PyTorch (Paszke et al., 2019) and the Hugging Face Transformers library (Wolf et al., 2020). The dashboard includes visualization and data exploration features to aid researchers: an interactive UMAP embedding graph (McInnes et al., 2018) for understanding classification clusters, a word salience map that updates in near real time as researchers alter textual entries, a set of tools for examining word frequency and importance within the clusters of the UMAP embedding graph, and a set of traditional confusion matrix analysis tools.
doi:10.21105/joss.03652
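As context for the UMAP embedding view described above, the following is a minimal sketch, independent of the TX$^2$ API itself, of the underlying idea: pooling transformer hidden states into per-document embeddings and projecting them with UMAP so that classification clusters can be inspected. The model name (bert-base-uncased), the tiny placeholder corpus, and the static matplotlib plot are illustrative assumptions; TX$^2$ presents the equivalent projection as an interactive ipywidget dashboard wired to the user's own trained model and train/test split.

```python
# Sketch only: not the TX2 API. Illustrates projecting transformer embeddings
# with UMAP to look at classification clusters, which TX2 does interactively.
import torch
import umap
import matplotlib.pyplot as plt
from transformers import AutoModel, AutoTokenizer

# Placeholder corpus and class labels (a real workflow would use the user's
# train/test split and a fine-tuned sequence classifier).
texts = [
    "the shipment arrived on schedule",
    "invoice totals were recalculated in error",
    "the detector calibration drifted overnight",
    "payment was processed twice by mistake",
    "sensor readings returned to nominal levels",
    "the refund request is still pending",
]
labels = [0, 1, 0, 1, 0, 1]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Mean-pool the final hidden states into one embedding vector per document,
# masking out padding tokens.
with torch.no_grad():
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state            # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)          # (batch, seq_len, 1)
    embeddings = (hidden * mask).sum(1) / mask.sum(1)     # (batch, dim)

# Reduce to 2-D with UMAP and plot, colored by class, to see cluster structure.
coords = umap.UMAP(n_neighbors=3, random_state=42).fit_transform(embeddings.numpy())
plt.scatter(coords[:, 0], coords[:, 1], c=labels)
plt.title("UMAP projection of transformer embeddings")
plt.show()
```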