Explainable Artificial Intelligence (XAI) towards Model Personality in NLP task

Dimas Adi, Nadhila Nurdin
2021 IPTEK Journal of Engineering  
Abstract⎯ In recent years, the development of deep learning in the field of Natural Language Processing, especially in sentiment analysis, has achieved significant progress and success. This is due to the availability of large amounts of text data and the ability of deep learning techniques to produce sophisticated predictions from diverse data features. However, sophisticated predictions that are not accompanied by sufficient information about what is happening inside the model would be a major setback. Therefore, the significant development of deep learning models must be accompanied by the development of XAI methods, which help explain what drives a model to its predictions. A simple Bidirectional LSTM model and a complex Bi-GRU-LSTM-CNN model for sentiment analysis were proposed in the present research. Both models were further analyzed and compared using three different XAI methods (LIME, SHAP, and Anchor), demonstrating that XAI is not limited to providing information about what happens inside a model but can also help us understand and distinguish models' personality and behaviour.

Keywords⎯ Deep learning, Explainable artificial intelligence, Natural language processing, Sentiment analysis
doi:10.12962/j23378557.v7i1.a8989
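A minimal sketch of how the kind of analysis described in the abstract might look in practice, assuming a small Keras Bidirectional LSTM sentiment classifier explained with LIME; the toy data, hyperparameters, vocabulary size, and class names below are illustrative assumptions, not the authors' actual setup:

```python
# Sketch only: a tiny Bidirectional LSTM sentiment model plus a LIME text
# explanation, assuming TensorFlow/Keras and the `lime` package are installed.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense
from lime.lime_text import LimeTextExplainer

# Toy training data (assumption; the paper uses a real sentiment corpus).
texts = ["i love this movie", "great acting and story",
         "terrible plot", "i hated every minute"]
labels = np.array([1, 1, 0, 0])  # 1 = positive, 0 = negative

tokenizer = Tokenizer(num_words=1000, oov_token="<oov>")
tokenizer.fit_on_texts(texts)
max_len = 20

def encode(batch):
    # Convert raw strings to padded integer sequences for the model.
    return pad_sequences(tokenizer.texts_to_sequences(batch), maxlen=max_len)

# Simple Bidirectional LSTM classifier (layer sizes are assumptions).
model = Sequential([
    Embedding(input_dim=1000, output_dim=32),
    Bidirectional(LSTM(32)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(encode(texts), labels, epochs=10, verbose=0)

def predict_proba(batch):
    # LIME expects an (n_samples, n_classes) probability array.
    pos = model.predict(encode(batch), verbose=0).reshape(-1)
    return np.column_stack([1 - pos, pos])

explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance("i love this great movie",
                                         predict_proba, num_features=5)
print(explanation.as_list())  # word-level contributions to the prediction
```

The same `predict_proba` wrapper pattern would let SHAP or Anchor probe the model as a black box, which is how such method-to-method comparisons of model behaviour are typically set up.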