Artificial Intelligence in economic decision making: how to assure a trust?

Sylwester Bejger, Stephan Elster
2020 Ekonomia i Prawo.  
Motivation: The decisions made by modern 'black box' artificial intelligence models are not understandable, and therefore people do not trust them. This limits the potential use of Artificial Intelligence. Aim: This text surveys initiatives in different countries showing how AI, and black box AI in particular, can be made transparent and trustworthy, and which regulations have been implemented or are under discussion. We also show how a commonly used development process within Machine Learning can be enriched to fulfil the requirements of, for example, the Ethics guidelines for trustworthy AI of the High-Level Expert Group of the European Union. We support our discussion with a proposition of empirical tools providing interpretability. Results: The full potential of AI, or of products using AI, can only be realised if the decisions of AI models are transparent and trustworthy. Regulations followed over the whole life cycle of AI models, algorithms, or the products using them are therefore necessary, as is the understandability or explainability of the decisions these models and algorithms make. Initiatives have started at every stakeholder level: internationally (e.g. the European Union), at country level (the USA, China, etc.), and at company level. Post-hoc local interpretability methods could and should be implemented by economic decision makers to provide compliance with such regulations.
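The post-hoc local interpretability methods mentioned above can be illustrated with a minimal sketch. The following is not the paper's own tooling but a hypothetical LIME-style local surrogate: a black-box classifier is queried on perturbations around one instance, and a weighted linear model is fitted locally so its coefficients serve as feature attributions for that single decision. The dataset, kernel width, and model choices are illustrative assumptions.

```python
# Hypothetical sketch of post-hoc LOCAL interpretability (LIME-style surrogate).
# All names and parameters here are illustrative, not from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# A stand-in "black box" economic decision model (e.g. credit scoring).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]  # the single instance whose decision we want to explain

# 1. Perturb around x0 and query the black box for its predictions.
Z = x0 + rng.normal(scale=0.5, size=(1000, X.shape[1]))
p = black_box.predict_proba(Z)[:, 1]

# 2. Weight perturbed samples by proximity to x0 (RBF kernel).
w = np.exp(-np.linalg.norm(Z - x0, axis=1) ** 2)

# 3. Fit an interpretable weighted linear surrogate in the neighbourhood.
surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)

# The surrogate's coefficients are local feature attributions for x0's decision.
print(surrogate.coef_)
```

In practice, a decision maker would use an established library (e.g. LIME or SHAP) rather than this hand-rolled loop, but the mechanism is the same: explain one decision at a time with a simple model that is faithful only locally.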
doi:10.12775/eip.2020.028