Explaining automatic answers generated from knowledge base embedding models [thesis]

Andrey Ruschel
While many chatbot systems rely on templates and shallow semantic analysis, advanced question-answering systems are typically built with the help of large-scale knowledge bases such as DBpedia or Freebase. Information extraction is often based on embedding models that map semantically rich information into low-dimensional vectors, allowing computationally efficient calculations. When producing new facts about the world, embeddings often provide correct answers that are very hard to explain from a human perspective, as they are based on operations performed in the low-dimensional vector space and thus bear no meaning to human users. Although interpretability has become a central concern in machine learning, the literature so far has focused on non-relational classifiers (such as deep neural networks); embeddings, however, require a whole range of different approaches. In this work we improve an existing method designed to provide explanations for predictions made by embedding models.
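To make the abstract's central point concrete, the following is a minimal, purely illustrative sketch (not the thesis's actual method) of a TransE-style knowledge base embedding model: entities and relations are represented as low-dimensional vectors, and a candidate fact is scored by vector arithmetic. The toy entities, relations, and random embeddings are assumptions for illustration; the point is that the resulting ranking is produced by geometric operations that carry no direct meaning for a human user.

```python
# Illustrative sketch of a TransE-style embedding model (not from the thesis):
# a fact (head, relation, tail) is considered plausible when h + r ≈ t
# in a low-dimensional vector space.
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # illustrative embedding dimension

# Hypothetical toy vocabulary; real systems use DBpedia/Freebase-scale KBs.
entities = {"Paris": 0, "France": 1, "Berlin": 2, "Germany": 3}
relations = {"capital_of": 0}

E = rng.normal(size=(len(entities), dim))   # entity embeddings (untrained here)
R = rng.normal(size=(len(relations), dim))  # relation embeddings (untrained here)

def score(head: str, relation: str, tail: str) -> float:
    """Lower score = more plausible fact under the TransE assumption h + r ≈ t."""
    h, r, t = E[entities[head]], R[relations[relation]], E[entities[tail]]
    return float(np.linalg.norm(h + r - t))

# Answering "What is Paris the capital of?" by ranking candidate tail entities:
candidates = sorted(entities, key=lambda e: score("Paris", "capital_of", e))
print(candidates[0])  # top-ranked answer; the ranking itself is opaque to users
```

With trained embeddings, such a ranker can return correct answers, yet the only "justification" available is a distance in vector space, which is exactly the explainability gap the thesis addresses.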
doi:10.11606/d.3.2022.tde-07072022-084934