
Explaining Anomalies Detected by Autoencoders Using SHAP [article]

Liat Antwarg, Ronnie Mindlin Miller, Bracha Shapira, Lior Rokach
2020 arXiv   pre-print
In this research, we extend SHAP to explain anomalies detected by an autoencoder, an unsupervised model.  ...  Recently, a game theory-based framework known as SHapley Additive exPlanations (SHAP) has been shown to be effective in explaining various supervised learning models.  ...

| Model   | High reconstruction error features | Explanatory features | Set of features explaining the anomaly |
|---------|------------------------------------|----------------------|----------------------------------------|
| Model 1 | X5                                 | X1, X2               | X1, X2, X5                             |
| Model 2 | X2                                 | X5, X1               | X1, X2, X5                             |
| Model 3 | X1                                 | X5, X2               | ...                                    |
arXiv:1903.02407v2 fatcat:mbz2dfq32ffknbdvvmzpo7t2ea
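A minimal sketch of the idea this abstract describes, assuming a trained autoencoder `ae` with a `.predict()` method, a background sample `X_bg`, and an instance `x_anomaly` to explain (all hypothetical names); the paper's actual per-feature procedure is more involved:

```python
# Sketch only: `ae`, `X_bg`, and `x_anomaly` are assumed to exist.
import numpy as np
import shap

def anomaly_score(X):
    """Total reconstruction error per row, used as the anomaly score."""
    recon = ae.predict(X)
    return np.sum((X - recon) ** 2, axis=1)

# KernelExplainer treats the scorer as a black box and estimates each
# input feature's Shapley contribution to the reconstruction error.
explainer = shap.KernelExplainer(anomaly_score, X_bg)
shap_values = explainer.shap_values(x_anomaly)  # one value per feature
```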

ACE – An Anomaly Contribution Explainer for Cyber-Security Applications [article]

Xiao Zhang and Manish Marwah and I-ta Lee and Martin Arlitt and Dan Goldwasser
2020 arXiv   pre-print
In this paper, we introduce the Anomaly Contribution Explainer (ACE), a tool to explain security anomaly detection models in terms of the model features through a regression framework, and its variant, ACE-KL  ...  ACE and ACE-KL provide insights into diagnosing which attributes significantly contribute to an anomaly by building a specialized linear model to locally approximate the anomaly score that a black-box model  ...  In this paper, we focus on explaining the outputs of complex models in the cyber-security anomaly detection domain, where outputs are usually anomaly scores.  ... 
arXiv:1912.00314v2 fatcat:vgxzeutadjg5llier4y6ifmjau
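A hedged sketch of the core idea in this abstract, fitting a local linear surrogate to a black-box anomaly score and reading feature contributions from its coefficients; `score_fn`, `x0`, and the Gaussian kernel weighting are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def local_contributions(score_fn, x0, n_samples=500, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Perturb the instance and query the black-box anomaly scorer.
    X = x0 + sigma * rng.standard_normal((n_samples, x0.size))
    y = score_fn(X)
    # Weight samples by proximity to x0 (Gaussian kernel).
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * sigma ** 2))
    lin = LinearRegression().fit(X, y, sample_weight=w)
    # Contribution of feature i is approximated by coefficient_i * x0_i.
    return lin.coef_ * x0
```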

Utilizing XAI Technique to Improve Autoencoder based Model for Computer Network Anomaly Detection with Shapley Additive Explanation(SHAP)

Khushnaseeb Roshan, Aasim Zafar
2021 International Journal of Computer Networks & Communications  
Explainable Artificial Intelligence (XAI) is a promising area that can improve the trustworthiness of these models by giving explanations and interpreting their output.  ...  The objective of this paper is to show how XAI can be used to interpret the results of the DL model, the autoencoder in this case.  ...  Hence, Shapley values would be useful to detect and explain the anomalies, as they provide the true contribution of each feature to the model prediction [14].  ... 
doi:10.5121/ijcnc.2021.13607 fatcat:zzegrqlzmncb7jk6be7b3ocxwu

Explainable Anomaly Detection Framework for Maritime Main Engine Sensor Data

Donghyun Kim, Gian Antariksa, Melia Putri Handayani, Sangbong Lee, Jihwan Lee
2021 Sensors  
Although several unsupervised methods have existed in the maritime industry, their common limitation was the interpretation of the anomaly; they do not explain why the model classifies specific data instances  ...  This study combines explainable AI techniques with an anomaly detection algorithm to overcome the limitation above.  ...  After that, Section 3 explains the background theory behind the models used in this study. In Section 4, we explain our procedure and discuss the experiment results.  ... 
doi:10.3390/s21155200 fatcat:e25qogvmvzgwvoktf2mcnck3h4

Explainable AI: Using Shapley Value to Explain Complex Anomaly Detection ML-Based Systems [chapter]

Jinying Zou, Ovanes Petrosian
2020 Frontiers in Artificial Intelligence and Applications  
In our research, we focus on the application of Explainable AI to log anomaly detection systems of different kinds.  ...  In particular, we use the Shapley value approach from cooperative game theory to explain the outcome or solution of two anomaly-detection algorithms: decision tree and DeepLog.  ...  In the case of Explainable AI, the Shapley value can show the contribution each feature makes to the result of the anomaly detection system.  ... 
doi:10.3233/faia200777 fatcat:rxke326zhrb3dlbicme4osyslu
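For reference, the classical Shapley value invoked here assigns feature $i$ its average marginal contribution over all subsets $S$ of the remaining features; this is the standard game-theoretic formula, not anything specific to this paper:

```latex
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,(|N|-|S|-1)!}{|N|!}
  \bigl( v(S \cup \{i\}) - v(S) \bigr)
```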

X-MAN: Explaining multiple sources of anomalies in video [article]

Stanislaw Szymanowicz, James Charles, Roberto Cipolla
2021 arXiv   pre-print
Our objective is to detect anomalies in video while also automatically explaining the reason behind the detector's response.  ...  In a practical sense, explainability is crucial for this task, as the required response to an anomaly depends on its nature and severity.  ...  and (c) the model also has to explain its response.  ... 
arXiv:2106.08856v1 fatcat:mhafdijumzawlinferhi7zfmre

Shapley Values of Reconstruction Errors of PCA for Explaining Anomaly Detection [article]

Naoya Takeishi
2020 arXiv   pre-print
Because features are usually correlated when PCA-based anomaly detection is applied, care must be taken in computing a value function for the Shapley values.  ...  We also present numerical examples, which imply that the Shapley values are more advantageous for explaining detected anomalies than the raw reconstruction errors of each feature.  ...  In Shapley value regression [12], [18], v(S) is defined as the coefficient of determination of models using features in S, with which they measure contributions of features to the explained variance  ... 
arXiv:1909.03495v2 fatcat:vufu3e7pvra3tdascuvgofsuwu
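For context, a minimal sketch of the quantity being explained: per-feature PCA reconstruction errors for a test point. The paper's contribution, a Shapley-value treatment of these correlated errors, is not reproduced here; the data and dimensions below are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.standard_normal((500, 5))           # illustrative data
pca = PCA(n_components=2).fit(X_train)

x = rng.standard_normal(5)                        # test point
x_hat = pca.inverse_transform(pca.transform([x]))[0]
per_feature_error = (x - x_hat) ** 2              # raw errors per feature
anomaly_score = per_feature_error.sum()           # total reconstruction error
```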

Locally Interpretable One-Class Anomaly Detection for Credit Card Fraud Detection [article]

Tungyu Wu, Youting Wang
2021 arXiv   pre-print
In addition, prediction analyses by three explainers are presented, offering a clear perspective on how each feature of an instance of interest contributes to the final model output.  ...  The explanation module has three white-box explainers in charge of interpreting the AutoEncoder, the discriminator, and the whole detection model, respectively.  ...  METHODOLOGY The proposed framework comprises two modules: (1) the anomaly detection model and (2) the model explainers.  ... 
arXiv:2108.02501v2 fatcat:bcxjn4c4zrfaxk7hhzn2glv2xy

Unsupervised Anomaly Detection of Healthcare Providers Using Generative Adversarial Networks [chapter]

Krishnan Naidoo, Vukosi Marivate
2020 Lecture Notes in Computer Science  
This study evaluates previous anomaly detection machine learning models and proposes an unsupervised framework to identify anomalies using a Generative Adversarial Network (GAN) model.  ...  The GAN anomaly detection (GAN-AD) model was applied to two different healthcare provider data sets.  ...  The second modelling step uses the anomaly labels in the supervised classification models and SHAP (SHapley Additive exPlanations) to explain the features contributing to the anomaly.  ... 
doi:10.1007/978-3-030-44999-5_35 fatcat:qxuczgunonaazh7a6ei6saifze
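A hedged sketch of the second modelling step the abstract mentions: using the anomaly labels as targets for a supervised classifier and explaining it with SHAP. `X` and `anomaly_labels` are assumed inputs, and the choice of a random forest with TreeExplainer is illustrative:

```python
import shap
from sklearn.ensemble import RandomForestClassifier

# Supervised step: learn to reproduce the GAN-AD anomaly labels.
clf = RandomForestClassifier(n_estimators=100).fit(X, anomaly_labels)

# TreeExplainer yields per-feature Shapley contributions for each row,
# highlighting which attributes drive the anomaly label.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)
```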

Probing the Origin of the Large-angle CMB Anomalies [article]

Kaiki Taro Inoue
2007 arXiv   pre-print
We review various proposed ideas to explain the origin of the anomalies and discuss how we can constrain the proposed models using future observational data.  ...  It has been argued that the large-angle cosmic microwave background anisotropy has anomalies at the 3-sigma level.  ...  However, none of these explanations has succeeded in explaining the specific features of the anomalies, namely, the octopole planarity, the alignment between the quadrupole (l = 2) and the octopole (l  ... 
arXiv:0710.2404v1 fatcat:e3sdur5uzzdddazeymrgatcdj4

Investigating hidden Markov models capabilities in anomaly detection

Shrijit S. Joshi, Vir V. Phoha
2005 Proceedings of the 43rd annual southeast regional conference on - ACM-SE 43  
Hidden Markov Model (HMM) based applications are common in various areas, but the incorporation of HMMs for anomaly detection is still in its infancy.  ...  For training the HMM, 12.195% of the total features (5 out of 41) present in the KDD Cup 1999 data set are used.  ...  The training procedure in our case (for the anomaly detection system) is explained in Section 3.3.  ... 
doi:10.1145/1167350.1167387 dblp:conf/ACMse/JoshiP05 fatcat:b2t2pykqbjaclmauginkztrwpy
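A hedged sketch of HMM-based anomaly detection in the spirit of this abstract: train on normal traffic, then flag sequences whose likelihood under the model is low. It uses the hmmlearn library; `X_normal`, `X_test`, `threshold`, and the 5-of-41 KDD feature selection are assumed to be prepared upstream:

```python
from hmmlearn import hmm

# Fit a Gaussian HMM on observations drawn from normal behaviour.
model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=100)
model.fit(X_normal)                  # rows = observations, 5 selected features

score = model.score(X_test)          # total log-likelihood of the sequence
if score / len(X_test) < threshold:  # per-observation likelihood below cutoff
    print("anomalous sequence")
```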

Explainable Anomaly Detection for Industrial Control System Cybersecurity [article]

Do Thu Ha, Nguyen Xuan Hoang, Nguyen Viet Hoang, Nguyen Huu Du, Truong Thu Huong, Kim Phuc Tran
2022 arXiv   pre-print
In this study, we suggest using Explainable Artificial Intelligence to enhance the interpretability and reliability of the results of an LSTM-based Autoencoder-OCSVM learning model for anomaly detection in ICS.  ...  Anomaly detection, therefore, is essential for preventing network security intrusions and system attacks.  ...  Integration of Explainable Artificial Intelligence: Explainable Artificial Intelligence (XAI) refers to the algorithms that enable humans to comprehend AI models, leading to trust in the output of the  ... 
arXiv:2205.01930v1 fatcat:yah6azefvvcbhhskylf4oupiri
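A hedged sketch of the LSTM-Autoencoder + OCSVM pipeline named in the abstract: compress windows of sensor data with an LSTM autoencoder, then fit a one-class SVM on the latent codes. The layer sizes, window length, and variables `X_windows` and `X_new` are illustrative assumptions, not the paper's configuration:

```python
from tensorflow import keras
from sklearn.svm import OneClassSVM

T, F = 30, 10                                       # window length, features
inp = keras.Input(shape=(T, F))
z = keras.layers.LSTM(16)(inp)                      # encoder -> latent code
dec = keras.layers.RepeatVector(T)(z)
out = keras.layers.LSTM(F, return_sequences=True)(dec)
ae = keras.Model(inp, out)
ae.compile(optimizer="adam", loss="mse")
ae.fit(X_windows, X_windows, epochs=10, verbose=0)  # X_windows: normal data

encoder = keras.Model(inp, z)
ocsvm = OneClassSVM(nu=0.01).fit(encoder.predict(X_windows))
labels = ocsvm.predict(encoder.predict(X_new))      # -1 = anomaly
```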

Discrete neural representations for explainable anomaly detection [article]

Stanislaw Szymanowicz, James Charles, Roberto Cipolla
2021 arXiv   pre-print
The aim of this work is to detect and automatically generate high-level explanations of anomalous events in video.  ...  Here we show how to robustly detect anomalies without the use of object or action classifiers yet still recover the high level reason behind the event.  ...  Explainability module Explaining the decision behind the anomaly requires the system to specify which actions and objects are responsible for the error in prediction.  ... 
arXiv:2112.05585v1 fatcat:5knrdor6hjhxbnaj3uejkrqjki

Joint Detection and Recounting of Abnormal Events by Learning Deep Generic Knowledge [article]

Ryota Hinami, Tao Mei, Shin'ichi Satoh
2017 arXiv   pre-print
In this paper, we tackle this problem by integrating a generic CNN model and environment-dependent anomaly detectors.  ...  By appropriately plugging the model into anomaly detectors, we can detect and recount abnormal events while taking advantage of the discriminative power of CNNs.  ...  Each detected event was finally processed for recounting, as was explained in Sec. 3.2. Anomaly detectors for semantic features.  ... 
arXiv:1709.09121v1 fatcat:ecuzpw6yxvcktgqcx2v4mc4snm

Joint Detection and Recounting of Abnormal Events by Learning Deep Generic Knowledge

Ryota Hinami, Tao Mei, Shin'ichi Satoh
2017 2017 IEEE International Conference on Computer Vision (ICCV)  
In this paper, we tackle this problem by integrating a generic CNN model and environment-dependent anomaly detectors.  ...  By appropriately plugging the model into anomaly detectors, we can detect and recount abnormal events while taking advantage of the discriminative power of CNNs.  ...  Each detected event was finally processed for recounting, as was explained in Sec. 3.2. Anomaly detectors for semantic features.  ... 
doi:10.1109/iccv.2017.391 dblp:conf/iccv/HinamiMS17 fatcat:hgwrqpuxqnfwxeetg6afjmcqqy
Showing results 1 — 15 out of 246,560 results