4,085 Hits in 5.3 sec

Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics

Yuxin Ma, Tiankai Xie, Jundong Li, Ross Maciejewski
<span title="2019-08-26">2019</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/hjrujdrg7zaghbdsp5pdzq7cmm" style="color: black;">IEEE Transactions on Visualization and Computer Graphics</a> </i> &nbsp;
In this paper, we present a visual analytics framework for explaining and exploring model vulnerabilities to adversarial attacks.  ...  While the visual analytics community has developed methods for opening the black box of machine learning models, little work has focused on helping the user understand their model vulnerabilities in the  ...  ACKNOWLEDGMENTS Fig. 3 . 3 A visual analytics framework for explaining model vulnerabilities to adversarial machine learning attacks.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tvcg.2019.2934631">doi:10.1109/tvcg.2019.2934631</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/31478859">pmid:31478859</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/u36fkkxspjdldbys67jnizc5nm">fatcat:u36fkkxspjdldbys67jnizc5nm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200929040615/https://arxiv.org/pdf/1907.07296v3.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d9/bb/d9bb0355ce85f9af21a7893fcf4400f2b65647ae.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tvcg.2019.2934631"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [article]

Gabriel D. Cantareira, Rodrigo F. Mello, Fernando V. Paulovich
<span title="2021-03-18">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This paper presents a visual framework to investigate neural network models subjected to adversarial examples, revealing how models' perception of the adversarial data differs from regular data instances  ...  Through different use cases, we show how observing these elements can quickly pinpoint exploited areas in a model, allowing further study of vulnerable features in input data and serving as a guide to  ...  Several efforts have been made by the machine learning and visual analytics research communities directed at improving model explainability, comprehending a field known as Explainable Artificial Intelligence  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2103.10229v1">arXiv:2103.10229v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/knyxgpyhybhyhjq2fofkt5yqwu">fatcat:knyxgpyhybhyhjq2fofkt5yqwu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210324132647/https://arxiv.org/pdf/2103.10229v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/33/a2/33a279507d1982dd2aa6ac9e696e6a3c64f9401a.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2103.10229v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Visual Analytics for Explainable Deep Learning [article]

Jaegul Choo, Shixia Liu
<span title="2018-04-07">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this paper, we review visual analytics, information visualization, and machine learning perspectives relevant to this aim, and discuss potential challenges and future research directions.  ...  In response, efforts are being made to make deep learning interpretable and controllable by humans.  ...  Improving the robustness of deep learning for secure artificial intelligence Deep learning models are generally vulnerable to adversarial perturbations, where adversarial examples are maliciously generated  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1804.02527v1">arXiv:1804.02527v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/efwpg3ot5nfgfnbhnlt6rfkm44">fatcat:efwpg3ot5nfgfnbhnlt6rfkm44</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200909020953/https://arxiv.org/ftp/arxiv/papers/1804/1804.02527.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/1f/10/1f103818bf5526ec75e700fb69af5f59cdded6f1.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1804.02527v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

The Challenges of Leveraging Threat Intelligence to Stop Data Breaches

Amani Ibrahim, Dhananjay Thiruvady, Jean-Guy Schneider, Mohamed Abdelrazek
<span title="2020-08-28">2020</span> <i title="Frontiers Media SA"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/t3lu7sf5drbsxiqgjr2db4qffm" style="color: black;">Frontiers in Computer Science</a> </i> &nbsp;
This is followed by an illustration of how the future of effective threat intelligence is closely linked to efficiently applying Artificial Intelligence and Machine Learning approaches, and we conclude  ...  This helps explain who the adversary is, how and why they are comprising the organization's digital assets, what consequences could happen following the attack, what assets actually could be compromised  ...  However, machine learning itself introduces a new set of vulnerabilities, when used in real-world, which makes it susceptible to adversarial activity.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3389/fcomp.2020.00036">doi:10.3389/fcomp.2020.00036</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/qb2flxny7raehdbgwhn3pb2sqq">fatcat:qb2flxny7raehdbgwhn3pb2sqq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200901034341/https://fjfsdata01prod.blob.core.windows.net/articles/files/562053/pubmed-zip/.versions/1/.package-entries/fcomp-02-00036/fcomp-02-00036.pdf?sv=2015-12-11&amp;sr=b&amp;sig=kF0MjMOckqIbIRaNSEE6yOP9DRQXhMu8DEj%2FFFeYfaM%3D&amp;se=2020-09-01T03%3A44%3A10Z&amp;sp=r&amp;rscd=attachment%3B%20filename%2A%3DUTF-8%27%27fcomp-02-00036.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d2/0c/d20c96a5047860dab5517af4189542bc06bf796c.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3389/fcomp.2020.00036"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> frontiersin.org </button> </a>

Analyzing the Noise Robustness of Deep Neural Networks [article]

Mengchen Liu, Shixia Liu, Hang Su, Kelei Cao, Jun Zhu
<span title="2018-10-09">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
To address this issue, we present a visual analytics approach to explain the primary cause of the wrong predictions introduced by adversarial examples.  ...  Deep neural networks (DNNs) are vulnerable to maliciously generated adversarial examples.  ...  RELATED WORK Visual Analytics for Explainable Deep Learning A number of visual analytics approaches [7, 27, 28, 33, 40, 41, 49, 57] have been developed to illustrate the working mechanism of DNNs.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1810.03913v1">arXiv:1810.03913v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/cjnggpp3zbho5aszqoz7vcnxlq">fatcat:cjnggpp3zbho5aszqoz7vcnxlq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191019002850/https://arxiv.org/pdf/1810.03913v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/af/09/af09e3bb3505809aebefbff165e2f4296022246b.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1810.03913v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

CyGraph [chapter]

S. Noel, E. Harley, K.H. Tam, M. Limiero, M. Share
<span title="">2016</span> <i title="Elsevier"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/bkh644pu7rhutnkgtjetyxat74" style="color: black;">Handbook of Statistics</a> </i> &nbsp;
correlates events to known vulnerability paths.  ...  To help manage visual complexity, CyGraph supports the separation of graph models into interdependent layers.  ...  We wish to thank Bill Chan of MITRE for providing the architecture diagram for CAVE.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/bs.host.2016.07.001">doi:10.1016/bs.host.2016.07.001</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/nub7hrs5tvcxxoe4tybjjodrhy">fatcat:nub7hrs5tvcxxoe4tybjjodrhy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180920163018/http://csis.gmu.edu:80/noel/pubs/2016_Cognitive_Computing_chapter.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/6e/30/6e30c183dc06d51a907f1f287c402210d5ed5d11.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/bs.host.2016.07.001"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> elsevier.com </button> </a>

Adversarial Examples and the Deeper Riddle of Induction: The Need for a Theory of Artifacts in Deep Learning [article]

Cameron Buckner
<span title="2020-03-20">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Relatedly, these systems also possess bewildering new vulnerabilities: most notably a susceptibility to "adversarial examples".  ...  Thus, machine learning researchers urgently need to develop a theory of artifacts for deep neural networks, and I conclude by sketching some initial directions for this area of research.  ...  unusual points in data space-discovered by further "adversarial" machine learning methods designed to fool deep learning systems-can cause them to produce behavior that the model evaluates with extreme  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.11917v1">arXiv:2003.11917v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/xijgm2kez5evdkivnhgodiwk6u">fatcat:xijgm2kez5evdkivnhgodiwk6u</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200328015421/https://arxiv.org/ftp/arxiv/papers/2003/2003.11917.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.11917v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Visual Analytics Framework for Adversarial Text Generation [article]

Brandon Laughlin, Christopher Collins, Karthik Sankaranarayanan, Khalil El-Khatib
<span title="2019-09-24">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The framework extends existing attack algorithms to work within an evolutionary attack process paired with a visual analytics loop.  ...  This paper presents a framework which enables a user to more easily make corrections to adversarial texts.  ...  In this section we start with a review of related works on adversarial machine learning, followed by research on how visual analytics has been used to help address these issues.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.11202v1">arXiv:1909.11202v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/t6tsk6ozv5dc7pumejjxut6mtm">fatcat:t6tsk6ozv5dc7pumejjxut6mtm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200824191058/https://arxiv.org/pdf/1909.11202v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/fb/c1/fbc11bab836269406ef361542307f77f2ac5da81.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.11202v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Algorithms in future insurance markets

Małgorzata Śmietanka, Adriano Koshiyama, Philip Treleaven
<span title="2021-02-05">2021</span> <i title="SvedbergOpen"> International Journal of Data Science and Big Data Analytics </i> &nbsp;
The current main disrupting forms of learning include deep learning, adversarial learning, federated learning, transfer and meta learning.  ...  These forms of learning have produced new models (e.g., long short-term memory, generative adversarial networks) and leverage important applications (e.g., Natural Language Processing, Adversarial Examples  ...  employed in the field of machine learning which attempts to 'fool' models through malicious input.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.51483/ijdsbda.1.1.2021.1-19">doi:10.51483/ijdsbda.1.1.2021.1-19</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/gty5qdugnbhm3mophojqyxmkja">fatcat:gty5qdugnbhm3mophojqyxmkja</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210311221833/https://www.svedbergopen.com/files/1614613438_(1)_IJDSBDA10112020MTN002_(p_1-19).pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/10/2b/102bfe0dbc2324e68a0d16ad55fbc6d5b1f79c98.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.51483/ijdsbda.1.1.2021.1-19"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

A Survey on Threat Situation Awareness Systems: Framework, Techniques, and Insights [article]

Hooman Alavizadeh, Julian Jang-Jaccard, Simon Yusuf Enoch, Harith Al-Sahaf, Ian Welch, Seyit A. Camtepe, Dong Seong Kim
<span title="2021-10-29">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Cyberspace is full of uncertainty in terms of advanced and sophisticated cyber threats which are equipped with novel approaches to learn the system and propagate themselves, such as AI-powered threats.  ...  and devising a plan to avoid further attacks.  ...  ACKNOWLEDGEMENT This work was supported by the Cyber Security Research Programme-"Artificial Intelligence for Automating Response to Threats" from the Ministry of Business, Innovation, and Employment (  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.15747v1">arXiv:2110.15747v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/zboddcg4a5gdxmq5hqmo5cpj34">fatcat:zboddcg4a5gdxmq5hqmo5cpj34</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211103171523/https://arxiv.org/pdf/2110.15747v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/0b/e3/0be3c527d4a55c6d27dc1b31d7fe511eaa9213ea.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.15747v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A survey of visual analytics techniques for machine learning

Jun Yuan, Changjian Chen, Weikai Yang, Mengchen Liu, Jiazhi Xia, Shixia Liu
<span title="2020-11-25">2020</span> <i title="Springer Science and Business Media LLC"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/jnfwhcgai5dalfpeugj6pkswji" style="color: black;">Computational Visual Media</a> </i> &nbsp;
AbstractVisual analytics for machine learning has recently evolved as one of the most exciting areas in the field of visualization.  ...  To better identify which research topics are promising and to learn how to apply relevant techniques in visual analytics, we systematically review 259 papers published in the last ten years together with  ...  Explaining vulnerabilities to adversarial machine learning through visual analytics. IEEE Transactions on Visualization and Computer Graphics Vol. 26, No.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s41095-020-0191-7">doi:10.1007/s41095-020-0191-7</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ibrmost24rgtnctztrmcdvz6dq">fatcat:ibrmost24rgtnctztrmcdvz6dq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210429095935/https://link.springer.com/content/pdf/10.1007/s41095-020-0191-7.pdf?error=cookies_not_supported&amp;code=f5a645ab-0b15-4207-b87d-b80121812160" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/91/4d/914d2be109ede0a582bec6f27da9c2c7e25100fe.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s41095-020-0191-7"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> springer.com </button> </a>

Advances in adversarial attacks and defenses in computer vision: A survey [article]

Naveed Akhtar, Ajmal Mian, Navid Kardan, Mubarak Shah
<span title="2021-09-02">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos.  ...  To ensure authenticity, we mainly consider peer-reviewed contributions published in the prestigious sources of computer vision and machine learning research.  ...  The literature has witnessed numerous hypotheses to explain the adversarial vulnerability of deep learning.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.00401v2">arXiv:2108.00401v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/23gw74oj6bblnpbpeacpg3hq5y">fatcat:23gw74oj6bblnpbpeacpg3hq5y</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210906192640/https://arxiv.org/pdf/2108.00401v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/1a/08/1a0829a7bef8ea3ecb33b55871b4498dd328ff68.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.00401v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Analyzing the Noise Robustness of Deep Neural Networks

Mengchen Liu, Shixia Liu, Hang Su, Kelei Cao, Jun Zhu
<span title="">2018</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/kuoffmnqkvak5ckkro4rt6dxpa" style="color: black;">2018 IEEE Conference on Visual Analytics Science and Technology (VAST)</a> </i> &nbsp;
To address this issue, we present a visual analysis method to explain why adversarial examples are misclassified.  ...  A quantitative evaluation and a case study were conducted to demonstrate the promise of our method to explain the misclassification of adversarial examples.  ...  Index Terms-Robustness, deep neural networks, adversarial examples, explainable machine learning. Liu is the corresponding author. • H. Su and J. Zhu are with Dept. of Comp.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/vast.2018.8802509">doi:10.1109/vast.2018.8802509</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/ieeevast/LiuLSCZ18.html">dblp:conf/ieeevast/LiuLSCZ18</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/stmivhftfvdnpfp3jmwdesvggi">fatcat:stmivhftfvdnpfp3jmwdesvggi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200505065503/http://orca.cf.ac.uk/129315/1/TVCG_Analyzing_the_Robustness_of_Deep_Neural_Networks.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/fd/54/fd54b1ce5dd757ca44c1ef456ca93329a1a7428a.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/vast.2018.8802509"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

VATLD: A Visual Analytics System to Assess, Understand and Improve Traffic Light Detection [article]

Liang Gou, Lincan Zou, Nanxiang Li, Michael Hofmann, Arvind Kumar Shekar, Axel Wendt, Liu Ren
<span title="2020-09-27">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this work, we propose a visual analytics system, VATLD, equipped with a disentangled representation learning and semantic adversarial learning, to assess, understand, and improve the accuracy and robustness  ...  The disentangled representation learning extracts data semantics to augment human cognition with human-friendly visual summarization, and the semantic adversarial learning efficiently exposes interpretable  ...  Surveys from both machine learning [38] and visual analytics [11, 28, 46] offer more insights into this.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.12975v1">arXiv:2009.12975v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/35gtds7y6rcjxm7tvnk2orxgaa">fatcat:35gtds7y6rcjxm7tvnk2orxgaa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200930023430/https://arxiv.org/pdf/2009.12975v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.12975v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

The Next Generation Cognitive Security Operations Center: Adaptive Analytic Lambda Architecture for Efficient Defense against Adversarial Attacks

Konstantinos Demertzis, Nikos Tziritas, Panayiotis Kikiras, Salvador Llopis Sanchez, Lazaros Iliadis
<span title="2019-01-10">2019</span> <i title="MDPI AG"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/tdvcddxjjzfavkwy267ww7wn5m" style="color: black;">Big Data and Cognitive Computing</a> </i> &nbsp;
It implements the Lambda machine learning architecture that can analyze a mixture of batch and streaming data, using two accurate novel computational intelligence algorithms.  ...  Specifically, it uses an Extreme Learning Machine neural network with Gaussian Radial Basis Function kernel (ELM/GRBFk) for the batch data analysis and a Self-Adjusting Memory k-Nearest Neighbors classifier  ...  In recent times, different types of adversaries based on their threat model leverage these vulnerabilities to compromise a machine learning system where adversaries have high incentives.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/bdcc3010006">doi:10.3390/bdcc3010006</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/qskf3u5xkfephh5tcis3ibo35i">fatcat:qskf3u5xkfephh5tcis3ibo35i</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190504024952/https://res.mdpi.com/BDCC/BDCC-03-00006/article_deploy/BDCC-03-00006-v2.pdf?filename=&amp;attachment=1" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/ff/1b/ff1bffd74bcc13f84dbbce3060eae0dff80565ed.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/bdcc3010006"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> mdpi.com </button> </a>
Showing results 1 - 15 out of 4,085 results