







186,976 Hits in 3.0 sec

Explainability in Deep Reinforcement Learning [article]

Alexandre Heuillet, Fabien Couthouis, Natalia Díaz-Rodríguez
<span title="2020-12-18">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL), a relatively new subfield of Explainable Artificial Intelligence intended for use in general public applications  ...  A large part of the explainable Artificial Intelligence (XAI) literature is emerging on feature relevance techniques that explain a deep neural network (DNN) output, or on explaining models that ingest image  ...  Those last two points are the main arguments in favor of the necessity of explainable reinforcement learning (XRL).  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.06693v4">arXiv:2008.06693v4</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/r62o6dabufc4ddfklhjx3lgjnq">fatcat:r62o6dabufc4ddfklhjx3lgjnq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200827115451/https://arxiv.org/pdf/2008.06693v2.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.06693v4" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Explainable AI: Deep Reinforcement Learning Agents for Residential Demand Side Cost Savings in Smart Grids [article]

Hareesh Kumar, Priyanka Mary Mammen, Krithi Ramamritham
<span title="2019-10-30">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Motivated by recent advancements in Deep Reinforcement Learning (RL), we have developed an RL agent to manage the operation of storage devices in a household, designed to maximize demand-side cost savings  ...  We explain the learning progression of the RL agent, and the strategies it follows based on the capacity of the storage device.  ...  Time of Day pricing was modeled as shown in Table 1 and Figure 4.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1910.08719v2">arXiv:1910.08719v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/a3mvdua7hbfv7idxse7lx4kqgu">fatcat:a3mvdua7hbfv7idxse7lx4kqgu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200912025540/https://arxiv.org/pdf/1910.08719v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/87/ed/87ed125df978850d6b3ec89c0ff3c3ca4a9b1453.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1910.08719v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Explaining Deep Reinforcement Learning Agents In The Atari Domain through a Surrogate Model [article]

Alexander Sieusahai, Matthew Guzdial
<span title="2021-10-07">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
One major barrier to applications of deep Reinforcement Learning (RL) both inside and outside of games is the lack of explainability.  ...  In this paper, we describe a lightweight and effective method to derive explanations for deep RL agents, which we evaluate in the Atari domain.  ...  Ethics Statement We introduce an Explainable AI approach, which has the potential to offer broad impacts in terms of how and where Reinforcement Learning can be applied.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.03184v1">arXiv:2110.03184v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6jndc2esu5dr5jdjeoa6xg4kvu">fatcat:6jndc2esu5dr5jdjeoa6xg4kvu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211011133633/https://arxiv.org/pdf/2110.03184v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/a7/1f/a71fc1608de416a2d2bb6e2a26362873667cefc8.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.03184v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Explainable Deep Reinforcement Learning Using Introspection in a Non-episodic Task [article]

Angel Ayala, Francisco Cruz, Bruno Fernandes, Richard Dazeley
<span title="2021-08-18">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Explainable reinforcement learning allows artificial agents to explain their behavior in a human-like manner, aiming at non-expert end-users.  ...  In this work, we adapt the introspection method to be used in a non-episodic task and try it in a continuous Atari game scenario solved with the Rainbow algorithm.  ...  ACKNOWLEDGMENT: This work has been financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, Fundação de Amparo a Ciência e Tecnologia do Estado  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.08911v1">arXiv:2108.08911v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/dcr5exteljconapddfmpowo4zy">fatcat:dcr5exteljconapddfmpowo4zy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210825222153/https://arxiv.org/pdf/2108.08911v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e9/c5/e9c5215fe42e403f57babfeac3f34dc89fe535c1.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.08911v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Explainable Artificial Intelligence (XAI) for Increasing User Trust in Deep Reinforcement Learning Driven Autonomous Systems [article]

Jeff Druce, Michael Harradon, James Tittle
<span title="2021-06-07">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We consider the problem of providing users of deep Reinforcement Learning (RL) based systems with a better understanding of when their output can be trusted.  ...  We offer an explainable artificial intelligence (XAI) framework that provides a three-fold explanation: a graphical depiction of the system's generalization and performance in the current game state, how  ...  Explainable artificial intelligence (XAI) methods offer means to peer inside the black box of deep RL based systems.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.03775v1">arXiv:2106.03775v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2dbhks4offewvlov7awgl3rywm">fatcat:2dbhks4offewvlov7awgl3rywm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210610004426/https://arxiv.org/pdf/2106.03775v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/29/0f/290fd5943318764d9184215caa44c0977aa33901.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.03775v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Towards Explainable Deep Reinforcement Learning for Traffic Signal Control

Lincoln Schreiber, Gabriel Ramos, Ana Bazzan
<span title="2021-07-24">2021</span> <i title="Journal of LatinX in AI Research"> LatinX in AI at International Conference on Machine Learning 2021 </i> &nbsp; <span class="release-stage">unpublished</span>
Deep reinforcement learning has shown potential for traffic signal control. However, the lack of explainability has limited its use in real-world conditions.  ...  In this work, we present a Deep Q-learning approach that uses the SHAP framework to explain its policy.  ...  Concluding Remarks: In this paper, we proposed a way to explain the policy of a reinforcement learning-based agent capable of optimizing traffic at an intersection.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.52591/lxai202107249">doi:10.52591/lxai202107249</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/drw7bzcf3za6fh7mhqwjkncjhm">fatcat:drw7bzcf3za6fh7mhqwjkncjhm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220323055228/https://research.latinxinai.org/papers/icml/2021/pdf/paper_26.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d6/5e/d65e76bde28f52fefdf28022201e1ae856349e23.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.52591/lxai202107249"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Explainable Artificial Intelligence (xAI) Approaches and Deep Meta-Learning Models [chapter]

Evren Dağlarli
<span title="2020-12-09">2020</span> <i title="IntechOpen"> Advances and Applications in Deep Learning </i> &nbsp;
These deep learning methods can yield highly effective results depending on the data set size, data set quality, the methods used in feature extraction, and the hyperparameter set used in deep learning models  ...  This is an important open point in artificial neural networks and deep learning models.  ...  Explainable meta-reinforcement learning (xMRL): In this section, we discuss the development of deep reinforcement learning models with an explainable artificial intelligence approach.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5772/intechopen.92172">doi:10.5772/intechopen.92172</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/sgmxtwloa5bbzb5sp7tpi75i3y">fatcat:sgmxtwloa5bbzb5sp7tpi75i3y</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201213232731/https://api.intechopen.com/chapter/pdf-download/72398.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b2/59/b259ce81ff2434f0054cbc32bbd88ca5763ede41.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5772/intechopen.92172"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a>

Explainable Deep Reinforcement Learning for Portfolio Management: An Empirical Approach [article]

Mao Guan, Xiao-Yang Liu
<span title="2021-12-18">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Deep reinforcement learning (DRL) has been widely studied in the portfolio management task.  ...  In this paper, we propose an empirical approach to explain the strategies of DRL agents for the portfolio management task.  ...  In this paper, we empirically explained the DRL agents' strategies for the portfolio management task.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.03995v2">arXiv:2111.03995v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/t4hcw2hxqzedfb5luyx5u6gble">fatcat:t4hcw2hxqzedfb5luyx5u6gble</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211230065820/https://arxiv.org/pdf/2111.03995v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/46/7f/467ff83987087e6abc1696720d03df9c75d6ea6e.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.03995v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Multi-Agent Path Planning Using Deep Reinforcement Learning [article]

Mert Çetinkaya
<span title="2021-10-04">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this paper, a deep reinforcement learning based multi-agent path planning approach is introduced.  ...  The produced problems are similar to a vehicle routing problem, and they are solved using multi-agent deep reinforcement learning.  ...  A deep reinforcement learning model is trained from scratch to solve the different problems produced in this way.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.01460v1">arXiv:2110.01460v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/pm5hgh5agfdexkn35pjyhafsem">fatcat:pm5hgh5agfdexkn35pjyhafsem</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211006065713/https://arxiv.org/ftp/arxiv/papers/2110/2110.01460.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/85/37/853729c77103443fcd39438e908be863c6570592.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.01460v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Object-sensitive Deep Reinforcement Learning [article]

Yuezhang Li, Katia Sycara, Rahul Iyer
<span title="2018-09-17">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We also propose a new approach called "object saliency maps" to visually explain the actions taken by deep reinforcement learning agents.  ...  In this paper, we propose a novel method that incorporates object recognition processing into deep reinforcement learning models.  ...  We also proposed object saliency maps for visually explaining the actions taken by deep reinforcement learning agents.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1809.06064v1">arXiv:1809.06064v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/nttn2vu2kzhfnpyhqnvibpedte">fatcat:nttn2vu2kzhfnpyhqnvibpedte</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200824204248/https://arxiv.org/pdf/1809.06064v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/ba/01/ba01a031c19f7553f9c55dc6a1f02e2942121aee.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1809.06064v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Logic-Based Sequential Decision-Making

Daoming Lyu, Fangkai Yang, Bo Liu, Daesub Yoon
<span title="2019-07-17">2019</span> <i title="Association for the Advancement of Artificial Intelligence (AAAI)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/wtjcymhabjantmdtuptkk62mlq" style="color: black;">PROCEEDINGS OF THE THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND THE TWENTY-EIGHTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE</a> </i> &nbsp;
In this paper, we introduce symbolic planning into DRL and propose a framework of Symbolic Deep Reinforcement Learning (SDRL) that can handle both high-dimensional sensory inputs and symbolic planning.  ...  Deep reinforcement learning (DRL) has gained great success by learning directly from high-dimensional sensory inputs, yet is notorious for its lack of interpretability.  ...  Conclusions: In this paper, we propose the SDRL framework by integrating symbolic planning with deep reinforcement learning for decision making.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1609/aaai.v33i01.33019995">doi:10.1609/aaai.v33i01.33019995</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/oqnjfilgzzfsfhtq2lh5omznhu">fatcat:oqnjfilgzzfsfhtq2lh5omznhu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200305174005/https://aaai.org/ojs/index.php/AAAI/article/download/5134/5007" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/bf/45/bf4592391b55cf54be8cc139725a24aa1138fad6.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1609/aaai.v33i01.33019995"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Knowledge-Based Sequential Decision-Making Under Uncertainty [article]

Daoming Lyu
<span title="2020-05-16">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Deep reinforcement learning (DRL) algorithms have achieved great success on sequential decision-making problems, yet are criticized for their lack of data-efficiency and explainability.  ...  To improve the data-efficiency and explainability of DRL, declarative knowledge is introduced in this work and a novel algorithm is proposed by integrating DRL with symbolic planning.  ...  With the help of deep learning, deep reinforcement learning (DRL) algorithms have made many achievements on sequential decision-making problems involving high-dimensional sensory inputs such as Atari  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1905.07030v2">arXiv:1905.07030v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/gxjzdkrnhrcrtadlsj63mzkxrm">fatcat:gxjzdkrnhrcrtadlsj63mzkxrm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200521001637/https://arxiv.org/pdf/1905.07030v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e0/20/e0204600f44ab08390426e45f25d2611864381ad.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1905.07030v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Crop Yield Prediction using Deep Reinforcement Learning Model for Sustainable Agrarian Applications

Dhivya Elavarasan, Durai Raj Vincent P M
<span title="">2020</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/q7qi7j4ckfac7ehf3mjbso4hne" style="color: black;">IEEE Access</a> </i> &nbsp;
The reinforcement learning agent incorporates a combination of parametric features with thresholds that assist in predicting crop yield.  ...  Combining the intelligence of reinforcement learning and deep learning, deep reinforcement learning builds a complete crop yield prediction framework that can map the raw data to the crop prediction values  ...  This section explains in detail reinforcement learning, Q-learning, and the deep Q-network algorithm.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2020.2992480">doi:10.1109/access.2020.2992480</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/tks4eqp7drhnlnnetzkgmrln3a">fatcat:tks4eqp7drhnlnnetzkgmrln3a</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201108123206/https://ieeexplore.ieee.org/ielx7/6287639/8948470/09086620.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/99/1a/991ac1efb61d7b2eacd6d7de5817b5a702382bd3.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2020.2992480"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> ieee.com </button> </a>

Visual Rationalizations in Deep Reinforcement Learning for Atari Games [article]

Laurens Weitkamp, Elise van der Pol, Zeynep Akata
<span title="2019-02-01">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Deep reinforcement learning models, like other deep learning models, tend to be opaque in their decision-making process.  ...  Due to the capability of deep learning to perform well in high-dimensional problems, deep reinforcement learning agents perform well in challenging tasks such as Atari 2600 games.  ...  Deep Reinforcement Learning: In general, there are two main methods in deep reinforcement learning.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1902.00566v1">arXiv:1902.00566v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/gjklhit3wngnxg6n77xr4vqaby">fatcat:gjklhit3wngnxg6n77xr4vqaby</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200831121915/https://arxiv.org/pdf/1902.00566v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/fe/ed/feed2cb80cf56920489b44434fb944b4fad717cb.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1902.00566v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

In Machines We Trust: Are Robo-Advisers More Trustworthy Than Human Financial Advisers?

Hui Xian Chia
<span title="2019-09-23">2019</span> <i title="Queensland University of Technology"> Law, Technology and Humans </i> &nbsp;
The rise of deep learning has been met with calls for 'explainability' of how deep learning agents make their decisions.  ...  This paper argues that greater explainability can be achieved by describing the 'personality' of deep learning robo-advisers, and further proposes a framework for describing the parameters of the deep  ...  Based on the example used to explain reinforcement learning in Dettmers, "Deep Learning in a Nutshell"; Sun, "Designing Non-greedy Reinforcement Learning Agents with Diminishing Reward  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5204/lthj.v1i0.1261">doi:10.5204/lthj.v1i0.1261</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/b227w4qv25hifjeqihggx4pyfq">fatcat:b227w4qv25hifjeqihggx4pyfq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200718150149/https://lthj.qut.edu.au/article/download/1261/819" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e6/d9/e6d931cac7a8d1c4b4385d34402303ace408dfbf.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5204/lthj.v1i0.1261"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a>
« Previous · Showing results 1–15 of 186,976 results