Neural graph reasoning for explainable decision-making

Yikun Xian
Researchers have long sought to develop intelligent systems that behave like humans by autonomously making accurate and reasonable decisions for real-world tasks. This has become achievable with the help of advanced artificial intelligence (AI), especially deep learning techniques known for their superior representational and predictive power. Such deep-learning-based decision-making systems have proven remarkably effective at delivering accurate predictions, but at the price of a lack of explainability due to the "black-box" nature of deep neural networks. Explainability, however, plays a pivotal role in practical human-facing applications such as user modeling, digital marketing, and e-commerce platforms. Explanations can be leveraged not only to help model developers understand and debug the decision-making process, but also to foster engagement and trust among the end users who consume the systems' outputs. In this thesis, we concentrate on one category of explainable decision-making system that relies on external heterogeneous graphs to generate accurate predictions accompanied by faithful and comprehensible explanations, an approach we refer to as neural graph reasoning for explainable decision-making. Unlike existing work on explainable machine learning, which mainly yields model-agnostic explanations for deep neural networks, we attempt to develop intrinsically interpretable models based on graphs that guarantee both accuracy and explainability. Meaningful and versatile graph structures (e.g., knowledge graphs) are shown to be effective in improving model performance and, more importantly, make it possible for an intelligent decision-making system to conduct explicit reasoning over graphs to generate predictions. The benefit is that the resulting graph paths can be directly regarded as explanations of the prediction results, because the traceable facts along the paths reflect the decision-making [...]
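As a toy illustration of the idea (not code from the thesis), path-style reasoning over a small heterogeneous graph can be sketched as follows: enumerate bounded-length paths from a user node, treat reachable unseen items as predictions, and return each traversed path as the explanation. All entity and relation names here are hypothetical.

```python
from collections import deque

# Toy heterogeneous graph: entity -> list of (relation, entity) edges.
# Entities and relations are illustrative placeholders, not thesis data.
GRAPH = {
    "user:alice": [("purchased", "item:camera")],
    "item:camera": [("produced_by", "brand:acme")],
    "brand:acme": [("produces", "item:lens"), ("produces", "item:camera")],
    "item:lens": [],
}

def find_paths(graph, start, max_hops=3):
    """Enumerate reasoning paths of up to max_hops edges via BFS.

    A path is stored as an alternating list: entity, relation, entity, ...
    """
    paths = []
    queue = deque([(start, [start])])
    while queue:
        node, path = queue.popleft()
        if (len(path) - 1) // 2 >= max_hops:  # hop count of this path
            continue
        for relation, nxt in graph.get(node, []):
            if nxt in path:  # avoid revisiting entities (cycles)
                continue
            new_path = path + [relation, nxt]
            paths.append(new_path)
            queue.append((nxt, new_path))
    return paths

def recommend(graph, user):
    """Predict unseen items; each prediction carries its path as explanation."""
    seen = {entity for _, entity in graph.get(user, [])}
    results = []
    for path in find_paths(graph, user):
        target = path[-1]
        if target.startswith("item:") and target not in seen:
            results.append((target, " -> ".join(path)))
    return results

for item, explanation in recommend(GRAPH, "user:alice"):
    print(item, "|", explanation)
```

In this sketch the recommendation for `user:alice` is `item:lens`, explained by the traceable path `user:alice -> purchased -> item:camera -> produced_by -> brand:acme -> produces -> item:lens`. The thesis's neural models instead learn to navigate such graphs, but the explanatory role of the path is the same.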
doi:10.7282/t3-bpj3-ng33