
Verbal Explanations for Deep Reinforcement Learning Neural Networks with Attention on Extracted Features

Xinzhi Wang, Shengcheng Yuan, Hui Zhang, Michael Lewis, Katia Sycara
2019 · 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
In this paper, we focus on generating verbal (natural language) descriptions and explanations of deep reinforcement learning policies.  ...  In recent years, there has been increasing interest in transparency in Deep Neural Networks. Most of the work on transparency has been done for image classification.  ...  [12] combined Q-learning with a flexible deep neural network. Hasselt et al.  ...
doi:10.1109/ro-man46459.2019.8956301 dblp:conf/ro-man/WangYZLS19 fatcat:7jtws3f66nan3gjoqnfsxi2iye
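The snippet's "[12] combined Q-learning with a flexible deep neural network" refers to the DQN line of work. As a reminder of that mechanism, a bare Q-learning target computation is sketched below; the batch shape, action count, and discount factor are assumptions, not details from this paper.

```python
# Hedged sketch: the one-step Q-learning target used by DQN-style agents,
# y = r + gamma * max_a' Q(s', a'), zeroed when the episode is done.
import torch

def dqn_target(reward: torch.Tensor, next_q: torch.Tensor,
               done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    # next_q holds Q(s', a') for every action; take the greedy maximum
    return reward + gamma * next_q.max(dim=1).values * (1.0 - done)

# toy usage with a batch of one transition and six actions (assumed sizes)
y = dqn_target(torch.tensor([1.0]), torch.randn(1, 6), torch.tensor([0.0]))
```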

Explanation of Reinforcement Learning Model in Dynamic Multi-Agent System [article]

Xinzhi Wang, Huao Li, Hui Zhang, Michael Lewis, Katia Sycara
2020 arXiv   pre-print
Recently, there has been increasing interest in transparency and interpretability in Deep Reinforcement Learning (DRL) systems.  ...  This paper reports novel work on generating verbal explanations for DRL agent behaviors.  ...  In the learning model, a new convolutional neural network and attention mechanism are put forward to extract distinguishable features from structural images. • The performance of the model on Atari game  ...
arXiv:2008.01508v2 fatcat:4oibnmy6wjgi3i2neh2427n5du
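The learning model described in this entry's snippet, a convolutional network whose feature maps pass through an attention mechanism, can be illustrated compactly. The sketch below is a loose reading of that idea rather than the authors' architecture: the layer sizes, single-head spatial attention, and action count are all assumptions.

```python
# Sketch: CNN feature maps pooled through a learned spatial attention mask;
# the mask doubles as an explanation of where the policy is looking.
import torch
import torch.nn as nn

class AttentionPolicy(nn.Module):
    def __init__(self, n_actions: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.attn = nn.Conv2d(64, 1, kernel_size=1)  # one score per location
        self.head = nn.Linear(64, n_actions)

    def forward(self, frames):
        f = self.conv(frames)                        # (B, 64, H, W)
        a = self.attn(f).flatten(2).softmax(dim=-1)  # (B, 1, H*W)
        pooled = (f.flatten(2) * a).sum(dim=-1)      # attention-weighted pool
        return self.head(pooled), a                  # logits + attention mask

# e.g. a stack of four 84x84 Atari frames (assumed input format)
logits, mask = AttentionPolicy(n_actions=6)(torch.randn(1, 4, 84, 84))
```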

From Humans and Back: a Survey on Using Machine Learning to both Socially Perceive Humans and Explain to Them Robot Behaviours

Adina M. Panchea, François Ferland
2020 Current Robotics Reports  
To do so, machine learning (ML) is often employed.  ...  First, we present literature background on these three research areas and finish with a discussion on limitations and future research venues.  ...  Nao [8], 2016: survey of using vocal prosody to convey emotion in robot speech (hidden Markov models, deep belief networks, deep neural networks). [48]: socially adaptive path planning (inverse reinforcement  ...
doi:10.1007/s43154-020-00013-6 fatcat:l5dneve33faolgvlvf3ddqkv24

Deep Learning for Cognitive Neuroscience [article]

Katherine R. Storrs, Nikolaus Kriegeskorte
2019 arXiv   pre-print
Deep learning also provides the tools for testing cognitive theories.  ...  In the coming years, neural networks are likely to become less reliant on learning from massive labelled datasets, and more robust and generalisable in their task performance.  ...  Deep neural network models do not replace intuitive explanations, verbal theories, and concise mathematical descriptions.  ... 
arXiv:1903.01458v1 fatcat:64cray7ohncmjnwh3dfz65lrmi

Sentiment analysis using deep learning approaches: an overview

Olivier Habimana, Yuhua Li, Ruixuan Li, Xiwu Gu, Ge Yu
2019 Science China Information Sciences  
Keywords: sentiment analysis, opinion mining, deep learning, neural network, natural language processing (NLP), social network. Citation: Habimana O, Li Y H, Li R X, et al. Sentiment analysis using deep learning approaches: an overview.  ...  these features, you can refer to [2, 8].  ...  Suggestions for the application of deep learning to sentiment analysis: trending deep learning methods such as deep reinforcement learning and generative adversarial networks are at the inception stage in sentiment  ...
doi:10.1007/s11432-018-9941-6 fatcat:nbevrfiyybhszirol2af26c6ve
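For orientation, the kind of deep sentiment model this overview surveys typically reduces to an embedding layer, a sequence encoder, and a classification head. The sketch below is a generic baseline under assumed vocabulary and dimension sizes, not any particular model from the paper.

```python
# Minimal embedding + LSTM sentiment classifier (all sizes are assumptions)
import torch
import torch.nn as nn

class SentimentNet(nn.Module):
    def __init__(self, vocab: int = 10_000, dim: int = 128, classes: int = 2):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, classes)

    def forward(self, token_ids):              # (B, T) integer token ids
        h, _ = self.lstm(self.emb(token_ids))  # (B, T, dim)
        return self.out(h[:, -1])              # last hidden state -> logits

logits = SentimentNet()(torch.randint(0, 10_000, (4, 20)))
```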

Towards Complementary Explanations Using Deep Neural Networks [chapter]

Wilson Silva, Kelwin Fernandes, Maria J. Cardoso, Jaime S. Cardoso
2018 Lecture Notes in Computer Science  
Recently, deep neural networks have gained the attention of the scientific community due to their high accuracy in a vast range of classification problems.  ...  This paper proposes a deep model with monotonic constraints that generates complementary explanations for its decisions, both in terms of style and depth.  ...  Lastly, we have interpretability given by investigating the hidden layers of deep convolutional neural networks [10].  ...
doi:10.1007/978-3-030-02628-8_15 fatcat:nulgsh6fjzhshmmkjt2erlufuy
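The monotonic constraints this entry mentions can be enforced, in one common scheme, by keeping a layer's effective weights non-negative, so the output can never decrease as a monotone input feature grows. The sketch below shows that generic scheme only; the paper's actual construction may differ.

```python
# Toy monotone layer: softplus keeps every effective weight positive
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicLinear(nn.Module):
    def __init__(self, n_in: int, n_out: int):
        super().__init__()
        self.raw = nn.Parameter(torch.randn(n_out, n_in) * 0.1)
        self.bias = nn.Parameter(torch.zeros(n_out))

    def forward(self, x):
        # positive weights => output is non-decreasing in every input
        return x @ F.softplus(self.raw).t() + self.bias

y = MonotonicLinear(5, 1)(torch.randn(3, 5))  # (3, 1)
```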

A Review on Explainability in Multimodal Deep Neural Nets

Gargi Joshi, Rahee Walambe, Ketan Kotecha
2021 IEEE Access  
Deep-HOSeq: a deep network with higher-order common and unique sequence information is proposed for sentiment analysis; it models the inter- and intra-modality dynamics with no reliance on an attention mechanism [134].  ...  A neural network-based model architecture based on global workspace theory from cognitive science is proposed to cope with uncertainties in data fusion, with attention models across different modalities  ...
doi:10.1109/access.2021.3070212 fatcat:5wtxr4nf7rbshk5zx7lzbtcram
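The attention-across-modalities fusion mentioned in the snippet is commonly realized as cross-attention: tokens of one modality query another, and the attention weights expose which inputs drove the fused representation. A minimal sketch follows, with the dimensions and the use of PyTorch's MultiheadAttention being assumptions rather than details from the review.

```python
# Cross-modal attention fusion: text tokens attend over image patches
import torch
import torch.nn as nn

d = 256
fuse = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)

text = torch.randn(2, 12, d)   # (batch, text tokens, dim) -- assumed sizes
image = torch.randn(2, 49, d)  # (batch, image patches, dim)

# weights (averaged over heads) show which patches each token relied on
fused, weights = fuse(query=text, key=image, value=image)
print(fused.shape, weights.shape)  # (2, 12, 256) (2, 12, 49)
```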

Explaining Explanations: An Overview of Interpretability of Machine Learning [article]

Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, Lalana Kagal
2019 arXiv   pre-print
We discuss why current approaches to explanatory methods, especially for deep neural networks, are insufficient.  ...  Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.  ...  The authors also wish to express their appreciation to Jonathan Frankle for sharing his insightful feedback on earlier versions of the manuscript.  ...
arXiv:1806.00069v3 fatcat:zegbomvrrredxazh2t7z2og4ju

Explainable Goal-Driven Agents and Robots – A Comprehensive Review [article]

Fatai Sado, Chu Kiong Loo, Wei Shiung Liew, Matthias Kerzel, Stefan Wermter
2021 arXiv   pre-print
AI systems based on the connectionist deep learning neural network approach lack the capability to explain their decisions and actions to others, despite their great successes.  ...  The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability.  ...  Acknowledgment: This research was supported by the Georg Forster Research Fellowship for Experienced Researchers from the Alexander von Humboldt-Stiftung/Foundation and the Impact Oriented Interdisciplinary Research  ...
arXiv:2004.09705v7 fatcat:p5jxv5hfk5elphzre4cn6acgsa

A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI [article]

Erico Tjoa, Cuntai Guan
2020 arXiv   pre-print
...  of deep learning.  ...  Explanations for machine decisions and predictions are thus needed to justify their reliability.  ...  On the other hand, using a neural network could obscure the meaning of the input variables. (B) Feature extraction.  ...
arXiv:1907.07374v5 fatcat:ssup2eanlvertbcztuovdakykq

CNN Variants for Computer Vision: History, Architecture, Application, Challenges and Future Scope

Dulari Bhatt, Chirag Patel, Hardik Talsania, Jigar Patel, Rasmika Vaghela, Sharnil Pandya, Kirit Modi, Hemant Ghayvat
2021 Electronics  
Deep CNNs (convolutional neural networks) have benefited the computer vision community by producing excellent results in video processing, object recognition, picture classification and segmentation, natural  ...  Spatial exploitation, multi-path, depth, breadth, dimension, channel boosting, feature-map exploitation, and attention-based CNNs are the eight categories.  ...  Acknowledgments: The authors would like to thank the reviewers for their valuable suggestions, which helped in improving the quality of this paper.  ...
doi:10.3390/electronics10202470 fatcat:aqhrysjtbjagzl6byalgy2du5a
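Of the eight categories the survey names, "multi-path" is typified by residual (shortcut) connections that give gradients a second route through the network. A minimal residual block sketch, with channel counts assumed:

```python
# Minimal residual block: identity shortcut added to a small conv body
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # shortcut + transformed path

out = ResidualBlock()(torch.randn(1, 64, 32, 32))  # shape preserved
```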

Explainability in Deep Reinforcement Learning [article]

Alexandre Heuillet, Fabien Couthouis, Natalia Díaz-Rodríguez
2020 arXiv   pre-print
A large part of the explainable Artificial Intelligence (XAI) literature is emerging on feature relevance techniques to explain a deep neural network (DNN) output, or on explaining models that ingest image  ...  However, assessing how XAI techniques can help understand models beyond classification tasks, e.g. for reinforcement learning (RL), has not been extensively studied.  ...  We thank Sam Greydanus, Zoe Juozapaitis, Benjamin Beyret, Prashan Madumal, Pedro Sequiera, Jianhong Wang, Mathieu Seurin and Vinicius Zambaldi for allowing us to use their original images for illustration  ...
arXiv:2008.06693v4 fatcat:r62o6dabufc4ddfklhjx3lgjnq
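Among the feature-relevance techniques such surveys cover, the simplest is vanilla gradient saliency: the magnitude of the gradient of one output, e.g. the value of a chosen action, with respect to the input. A minimal sketch, assuming a model that maps an observation batch to a (1, n_actions) tensor:

```python
# Vanilla gradient saliency for one action of a DRL policy (placeholder model)
import torch

def saliency(model, obs: torch.Tensor, action: int) -> torch.Tensor:
    obs = obs.clone().requires_grad_(True)  # track gradients w.r.t. input
    q_values = model(obs)                   # expected shape (1, n_actions)
    q_values[0, action].backward()          # d Q(s, a) / d input
    return obs.grad.abs().squeeze(0)        # per-pixel relevance map

# usage (hypothetical names): sal = saliency(policy_net, frame[None], action=2)
```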

Knowledge as Invariance – History and Perspectives of Knowledge-augmented Machine Learning [article]

Alexander Sagel and Amit Sahu and Stefan Matthes and Holger Pfeifer and Tianming Qiu and Harald Rueß and Hao Shen and Julian Wörmann
2020 arXiv   pre-print
Major weaknesses of present-day deep learning models are, for instance, their lack of adaptability to changes of environment or their inability to perform kinds of tasks other than the one they were  ...  While supervised deep learning has conquered the field at a breathtaking pace and demonstrated the ability to solve inference problems with unprecedented accuracy, it still does not quite live up to its  ...  It is known that the features extracted by convolutional neural networks become more complex and expressive with an increasing number of layers [47].  ...
arXiv:2012.11406v1 fatcat:nnbnsrwfr5fbxg4qj3a4cfajk4
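The quoted observation, that CNN features grow more complex and expressive with depth [47], is easy to probe by reading out intermediate activations. The sketch below uses torchvision's resnet18 purely as a convenient stand-in; the layer choice and shapes are specific to that model, not to this paper.

```python
# Compare early vs. late feature maps of a standard CNN
import torch
from torchvision import models
from torchvision.models.feature_extraction import create_feature_extractor

net = models.resnet18(weights=None)  # random weights suffice to show shapes
extractor = create_feature_extractor(net, return_nodes=["layer1", "layer4"])

feats = extractor(torch.randn(1, 3, 224, 224))
print(feats["layer1"].shape)  # (1, 64, 56, 56): local edge/texture detail
print(feats["layer4"].shape)  # (1, 512, 7, 7): coarse, object-level features
```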

Machine Learning in Python: Main Developments and Technology Trends in Data Science, Machine Learning, and Artificial Intelligence

Sebastian Raschka, Joshua Patterson, Corey Nolet
2020 Information  
Deep neural networks, along with advancements in classical machine learning and scalable general-purpose graphics processing unit (GPU) computing, have become critical components of artificial intelligence  ...  We cover widely used libraries and concepts, collected together for holistic comparison, with the goal of educating the reader and driving the field of Python machine learning forward.  ...  Abbreviations: CNN, convolutional neural network; CPU, central processing unit; DAG, directed acyclic graph; DL, deep learning; DNN, deep neural network; ETL, extract translate load; GAN, generative adversarial networks  ...
doi:10.3390/info11040193 fatcat:hetp7ngcpbbcpkhdcyowuiiwxe

Mathematical decisions and non-causal elements of explainable AI [article]

Atoosa Kasirzadeh
2019 arXiv   pre-print
In particular, I offer a multi-faceted conceptual framework for the explanation and interpretation of algorithmic decisions, and I claim that this framework can lay the groundwork for a focused discussion  ...  In attempting to address this lacuna, this paper argues that a hierarchy of different types of explanations for why and how an algorithmic decision outcome is achieved can establish the relevant connection  ...  to a deep supervised neural network.  ...
arXiv:1910.13607v2 fatcat:gd7foys7mjdk5apweh4je7jshq