A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit <a rel="external noopener" href="https://www.atlantis-press.com/article/125914877.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
Content Determination for Natural Language Descriptions of Predictive Bayesian Networks
<span title="">2019</span>
<i title="Atlantis Press">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/q47jwxc36zhozejiwja7mtgcsu" style="color: black;">Proceedings of the 2019 Conference of the International Fuzzy Systems Association and the European Society for Fuzzy Logic and Technology (EUSFLAT 2019)</a>
</i>
The dramatic success of Artificial Intelligence and its applications has been accompanied by increasing complexity, which makes these systems harder for end users to comprehend and undermines their trust. Within this context, the emerging field of Explainable AI aims to make the decisions of intelligent systems more transparent and understandable for human users. In this paper, we propose a framework for explaining predictive inference in Bayesian Networks (BN) in natural language to non-specialized users. The model represents the information embedded in the BN by means of (fuzzy) quantified statements and reasons using a fuzzy syllogism. The framework describes how this can be used for the content determination stage in Natural Language Generation explanation systems for BNs. Through a number of realistic usage scenarios, we show how the generated explanations allow the user to trace the inference steps in the approximate reasoning process in predictive BNs.
<span class="external-identifiers">
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.2991/eusflat-19.2019.107">doi:10.2991/eusflat-19.2019.107</a>
<a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/eusflat/Pereira-FarinaB19.html">dblp:conf/eusflat/Pereira-FarinaB19</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/x7qiz2pxajguzanih33rg7ooim">fatcat:x7qiz2pxajguzanih33rg7ooim</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210715153756/https://www.atlantis-press.com/article/125914877.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/f3/6f/f36f3e41ce0dd6efb21642c9ad4d8d3c3198416e.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.2991/eusflat-19.2019.107">
<button class="ui left aligned compact blue labeled icon button serp-button">
<i class="external alternate icon"></i>
Publisher / doi.org
</button>
</a>