3,306 Hits in 5.7 sec

A Model-Agnostic Causal Learning Framework for Recommendation using Search Data [article]

Zihua Si
2022 pre-print
In this paper, we propose a model-agnostic framework named IV4Rec that can effectively decompose the embedding vectors into causal and non-causal parts, hence enhancing recommendation results.  ...  IV4Rec is model-agnostic and can be applied to a number of existing RSs such as DIN and NRHUB.  ...  In this paper, we proposed a model-agnostic, IV-based causal learning framework, called IV4Rec, to improve recommendation using search data.  ... 
doi:10.1145/3485447.3511951 arXiv:2202.04514v1 fatcat:6lmcsmwtgbdkrkq2tpkqrwaoqu
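
The snippet names the instrumental-variable idea without showing it, but the decomposition can be sketched with a least-squares projection: the part of a recommendation embedding explained by search-query embeddings (the instruments) is kept as the causal component, and the residual as the non-causal one. The sketch below is a hypothetical illustration, not IV4Rec's actual code; iv_decompose and the mixing weight alpha stand in for the learned combination the paper describes.

    import numpy as np

    def iv_decompose(t, z):
        """Split embeddings t (n, d) into a part explained by instrument
        embeddings z (n, k) and a residual, via ordinary least squares."""
        W, *_ = np.linalg.lstsq(z, t, rcond=None)
        t_causal = z @ W            # component explained by the instruments
        return t_causal, t - t_causal

    rng = np.random.default_rng(0)
    t = rng.normal(size=(100, 16))   # item embeddings from the recommender
    z = rng.normal(size=(100, 8))    # embeddings of related search queries
    t_c, t_r = iv_decompose(t, z)
    alpha = 0.7                      # stand-in for IV4Rec's learned weights
    t_adjusted = alpha * t_c + (1 - alpha) * t_r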

Model-Agnostic Counterfactual Reasoning for Eliminating Popularity Bias in Recommender System [article]

Tianxin Wei, Fuli Feng, Jiawei Chen, Chufeng Shi, Ziwei Wu, Jinfeng Yi, Xiangnan He
2020 arXiv   pre-print
Remarkably, our solution amends the learning process of recommendation and is agnostic to a wide range of models.  ...  Fitting a recommender model to recover the user behavior data with pointwise or pairwise loss makes the model biased towards popular items.  ...  To this end, we devise a model-agnostic counterfactual reasoning (MACR) framework, which performs multi-task learning for recommender training and counterfactual  ... 
arXiv:2010.15363v1 fatcat:oqazsv33qres5kssf6lsb4fply
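
The multi-task design mentioned above reduces to a few lines: user-only and item-only branches modulate the user-item matching score during training, and at inference a counterfactual reference score is subtracted to remove the popularity shortcut. A minimal sketch assuming precomputed branch scores; the constant c plays the role of MACR's counterfactual reference.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def macr_score(y_match, y_user, y_item, c=0.0, counterfactual=False):
        """Matching score modulated by user- and item-only branches; at
        inference, subtract the score under a blocked user-item match."""
        mod = sigmoid(y_user) * sigmoid(y_item)
        score = y_match * mod
        if counterfactual:
            score -= c * mod   # remove the popularity-driven shortcut
        return score

    # Training-time vs. debiased inference-time scores for one pair.
    print(macr_score(0.8, 1.2, 2.5),
          macr_score(0.8, 1.2, 2.5, c=0.4, counterfactual=True))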

FIND: Explainable Framework for Meta-learning [article]

Xinyue Shao, Hongzhi Wang, Xiao Zhu, Feng Xiong
2022 arXiv   pre-print
Meta-learning is used to efficiently enable the automatic selection of machine learning models by combining data and prior knowledge.  ...  This paper proposes FIND, an interpretable meta-learning framework that can not only explain the recommendation results of meta-learning algorithm selection, but also provide a more complete and accurate  ...  To integrate causality for a more precise interpretation, the latent factors for each feature in the causal structure model need to be discovered using a latent-factor search.  ... 
arXiv:2205.10362v2 fatcat:2mo4kyd3onap5av4qyugz4f2ru

Causality Learning: A New Perspective for Interpretable Machine Learning [article]

Guandong Xu, Tri Dung Duong, Qian Li, Shaowu Liu, Xianzhi Wang
2021 arXiv   pre-print
Therefore, interpreting machine learning models is currently a mainstream topic in the research community.  ...  Recent years have witnessed the rapid growth of machine learning in a wide range of fields such as image recognition, text classification, credit scoring prediction, recommendation systems, etc.  ...  We now introduce the two most important formal frameworks used for causal inference, namely structural causal models and the potential outcome framework.  ... 
arXiv:2006.16789v2 fatcat:ole3dvpnjnfkflldd6to4nrrwq
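
The gap between observational association and an intervention in a structural causal model, which this survey contrasts, can be demonstrated with a three-variable simulation; the coefficients below are arbitrary toy values.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000

    # Structural causal model: Z -> X, Z -> Y, X -> Y (Z confounds X and Y).
    z = rng.normal(size=n)
    x = 2.0 * z + rng.normal(size=n)
    y = 3.0 * x + 5.0 * z + rng.normal(size=n)

    # Observational association overstates the causal effect of X on Y.
    obs_slope = np.cov(x, y)[0, 1] / np.var(x)

    # Intervention do(X = x0): cut the Z -> X edge and set X exogenously.
    x_do = rng.normal(size=n)                  # X no longer depends on Z
    y_do = 3.0 * x_do + 5.0 * z + rng.normal(size=n)
    causal_slope = np.cov(x_do, y_do)[0, 1] / np.var(x_do)

    print(obs_slope, causal_slope)  # ≈ 5.0 (biased) vs ≈ 3.0 (causal)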

MACFE: A Meta-learning and Causality Based Feature Engineering Framework [article]

Ivan Reyes-Amezcua and Daniel Flores-Araiza and Gilberto Ochoa-Ruiz and Andres Mendez-Vazquez and Eduardo Rodriguez-Tello
2022 arXiv   pre-print
In this paper, a novel method, called Meta-learning and Causality Based Feature Engineering (MACFE), is proposed; our method is based on the use of meta-learning, feature distribution encoding, and causality  ...  In MACFE, meta-learning is used to find the best transformations, then the search is accelerated by pre-selecting "original" features given their causal relevance.  ...  Next, we search for the most similar encoding in the Transformation Recommendation Matrix (TRM) in order to recommend a useful transformation from it.  ... 
arXiv:2207.04010v1 fatcat:5y3oi6k72fe47ftm5lwilwtehy
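
A toy rendering of the lookup step the snippet describes: encode a feature's distribution, find the most similar row of a transformation recommendation matrix, and return that row's transformation. The histogram encoder, matrix contents, and labels here are all invented for illustration; MACFE's actual encoding is richer.

    import numpy as np

    # Hypothetical TRM: each row is the distribution encoding of a
    # previously seen feature; trm_labels holds the transformation that
    # worked best for it in past (meta-)experiments.
    trm = np.array([[0.1, 0.9, 0.0],
                    [0.8, 0.1, 0.1],
                    [0.3, 0.3, 0.4]])
    trm_labels = ["log", "square", "standardize"]

    def encode(feature):
        """Toy distribution encoding: a normalized histogram over 3 bins."""
        hist, _ = np.histogram(feature, bins=3)
        return hist / hist.sum()

    def recommend_transformation(feature):
        """Recommend the transformation of the most similar encoded feature."""
        e = encode(feature)
        sims = trm @ e / (np.linalg.norm(trm, axis=1) * np.linalg.norm(e))
        return trm_labels[int(np.argmax(sims))]

    rng = np.random.default_rng(2)
    print(recommend_transformation(rng.lognormal(size=500)))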

Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications [article]

Yu-Liang Chou and Catarina Moreira and Peter Bruza and Chun Ouyang and Joaquim Jorge
2021 arXiv   pre-print
There has been a growing interest in model-agnostic methods that can make deep learning models more transparent and explainable to a user.  ...  This research suggests that current model-agnostic counterfactual algorithms for explainable AI are not grounded on a causal theoretical formalism and, consequently, cannot promote causability to a human  ...  relatively fast.  ...  SEDC is a model-agnostic counterfactual algorithm that can handle behavioral and textual data sources; it conducts a best-first search with local improvement.  ... 
arXiv:2103.04244v2 fatcat:uqs3y7v7hrhtxkh2ltl4wluyqe
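
The best-first search attributed to SEDC above can be sketched as: expand sets of removed (zeroed) features in order of set size until the model's prediction flips. This is a simplified, hypothetical reconstruction with a toy bag-of-words classifier, not the published algorithm.

    import heapq

    def sedc_counterfactual(x, predict, max_size=5):
        """Best-first search for a small set of features whose removal
        (zeroing) flips the predicted class, in the spirit of SEDC.
        `predict` maps a feature dict to a class label."""
        base = predict(x)
        frontier = [(0, [])]           # (set size, indices removed so far)
        while frontier:
            size, removed = heapq.heappop(frontier)
            x_cf = {k: (0 if k in removed else v) for k, v in x.items()}
            if predict(x_cf) != base:
                return removed         # evidence counterfactual found
            if size < max_size:
                for k in x:
                    if k not in removed:
                        heapq.heappush(frontier, (size + 1, removed + [k]))
        return None

    # Toy linear classifier over a bag-of-words document.
    doc = {"refund": 3, "free": 2, "meeting": 1}
    weights = {"refund": 1.0, "free": 1.5, "meeting": -0.5}
    predict = lambda d: int(sum(weights[k] * v for k, v in d.items()) > 0)
    print(sedc_counterfactual(doc, predict))  # ['free', 'refund']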

Counterfactual Explanations for Machine Learning: A Review [article]

Sahil Verma and John Dickerson and Keegan Hines
2020 arXiv   pre-print
Explaining, in a human-understandable way, the relationship between the input and output of machine learning models is essential to the development of trustworthy machine-learning-based systems.  ...  In this paper, we seek to review and categorize research on counterfactual explanations, a specific class of explanation that provides a link between what could have happened had input to a model been  ...  We thank Jason Wittenbach, Aditya Kusupati, Divyat Mahajan, Jessica Dai, Soumye Singhal, Harsh Vardhan, and Jesse Michel for helpful comments.  ... 
arXiv:2010.10596v1 fatcat:bhw56sorfbfrhm7y473wlizt24
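
Much of the work this review categorizes builds on an optimization of the form "flip the prediction while staying close to the input" (Wachter et al.'s formulation). A minimal sketch for a fixed logistic model with hand-derived gradients; the weights, learning rate, and lambda are illustrative.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def wachter_counterfactual(x, w, y_target, lam=0.1, lr=0.5, steps=2000):
        """Find x' minimizing (f(x') - y_target)^2 + lam * ||x' - x||^2
        for f(x) = sigmoid(w @ x), by gradient descent."""
        x_cf = x.copy()
        for _ in range(steps):
            p = sigmoid(w @ x_cf)
            grad = 2 * (p - y_target) * p * (1 - p) * w + 2 * lam * (x_cf - x)
            x_cf -= lr * grad
        return x_cf

    w = np.array([1.0, -2.0])
    x = np.array([-1.0, 1.0])           # predicted ≈ 0 (score -3)
    x_cf = wachter_counterfactual(x, w, y_target=1.0)
    print(x_cf, sigmoid(w @ x_cf))      # nearby point pushed toward class 1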

On the Importance of Attention in Meta-Learning for Few-Shot Text Classification [article]

Xiang Jiang, Mohammad Havaei, Gabriel Chartrand, Hassan Chouaib, Thomas Vincent, Andrew Jesson, Nicolas Chapados, Stan Matwin
2018 arXiv   pre-print
Based on the Model-Agnostic Meta-Learning framework (MAML), we introduce the Attentive Task-Agnostic Meta-Learning (ATAML) algorithm for text classification.  ...  We address this problem by integrating a meta-learning procedure that uses the knowledge learned across many tasks as an inductive bias towards better natural language understanding.  ...  The MAML model illustrated in Figure 1 overfits the training data and only searches for repetitive words, such as "tobacco" and "drug", that are merely spurious  ... 
arXiv:1806.00852v1 fatcat:g6jwlt5xxbfe7btti5ukkfqvbm
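
For reference, the MAML loop that ATAML builds on: adapt on a task's support set with a gradient step, then update the shared initialization from the query loss at the adapted parameters. The sketch below is first-order (it drops second-order terms) and uses a one-parameter linear regressor so the gradients can be written by hand.

    import numpy as np

    def maml_step(theta, tasks, inner_lr=0.01, outer_lr=0.001):
        """One first-order MAML meta-update for y = theta * x."""
        meta_grad = 0.0
        for (xs, ys), (xq, yq) in tasks:
            grad_s = np.mean(2 * (theta * xs - ys) * xs)   # support grad
            theta_i = theta - inner_lr * grad_s            # task adaptation
            meta_grad += np.mean(2 * (theta_i * xq - yq) * xq)  # query grad
        return theta - outer_lr * meta_grad / len(tasks)

    rng = np.random.default_rng(3)
    theta = 0.0
    for _ in range(1000):
        tasks = []
        for _ in range(4):                      # sample tasks y = a * x
            a = rng.uniform(0.5, 1.5)
            xs, xq = rng.normal(size=10), rng.normal(size=10)
            tasks.append(((xs, a * xs), (xq, a * xq)))
        theta = maml_step(theta, tasks)
    print(theta)  # drifts toward an initialization near the task mean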

Generating personalized counterfactual interventions for algorithmic recourse by eliciting user preferences [article]

Giovanni De Toni, Paolo Viappiani, Bruno Lepri, Andrea Passerini
2022 arXiv   pre-print
We integrate this preference elicitation strategy into a reinforcement learning agent coupled with Monte Carlo Tree Search for efficient exploration, so as to provide personalized interventions achieving  ...  For example, a user might prefer certain actions over others.  ...  substantially cheaper interventions than the user-independent counterpart with just a handful of queries.  ... 
arXiv:2205.13743v1 fatcat:367v24gkkzgktorglwkbpqdtgi
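
The elicitation component can be caricatured in a few lines: repeatedly ask the user to compare two candidate actions and shift the estimated costs toward their answers, so a planner can later favor cheap interventions. Everything here (action names, the multiplicative update, the simulated user) is a hypothetical stand-in; the paper's Bayesian elicitation and MCTS planner are omitted.

    import numpy as np

    actions = ["add_savings", "change_job", "reduce_debt", "open_account"]

    def elicit_costs(user_prefers, n_queries=5, lr=0.4, seed=4):
        """Toy pairwise elicitation: after each comparison, lower the
        estimated cost of the preferred action and raise the other."""
        rng = np.random.default_rng(seed)
        cost = {a: 1.0 for a in actions}
        for _ in range(n_queries):
            a, b = rng.choice(actions, size=2, replace=False)
            good, bad = (a, b) if user_prefers(a, b) else (b, a)
            cost[good] *= 1 - lr
            cost[bad] *= 1 + lr
        return cost

    # Simulated user who dislikes changing jobs most.
    true_cost = {"add_savings": 1, "change_job": 5,
                 "reduce_debt": 2, "open_account": 1}
    prefers = lambda a, b: true_cost[a] < true_cost[b]
    print(elicit_costs(prefers))  # estimated ordering tracks the user's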

A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence

Ilia Stepin, Jose M. Alonso, Alejandro Catala, Martin Pereira-Farina
2021 IEEE Access  
Then, we report the state-of-the-art computational frameworks for contrastive and counterfactual explanation generation.  ...  As a result, we highlight a variety of properties of the approaches under study and reveal a number of shortcomings thereof.  ...  Alonso is a Ramon y Cajal Researcher (RYC-2016-19802). Alejandro Catala is a Juan de la Cierva Researcher (IJC2018-037522-I).  ... 
doi:10.1109/access.2021.3051315 fatcat:3zupk4jfdncuvj5osdkk7rykdm

Explainable artificial intelligence enhances the ecological interpretability of black‐box species distribution models

Masahiro Ryo, Boyan Angelov, Stefano Mammola, Jamie M. Kass, Blas M. Benito, Florian Hartig
2020 Ecography  
Model accuracy has steadily increased as a result, but the interpretability of the fitted models, for example, the relative importance of predictor variables or their causal effects on focal species, has  ...  During the past two decades, the field has increasingly made use of machine learning approaches for constructing and validating SDMs.  ...  SM, JMK, BMB and FH provided substantial suggestions on species distribution modeling and other theoretical aspects.  ... 
doi:10.1111/ecog.05360 fatcat:l7v3ko3zafcdvpoktxd7tbiv2y
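
Permutation importance is one of the model-agnostic tools this kind of work applies to fitted SDMs: shuffle one predictor at a time and measure the drop in predictive accuracy. A minimal sketch with a toy presence/absence model; the column meanings and threshold rule are invented.

    import numpy as np

    def permutation_importance(model, X, y, n_repeats=10, seed=0):
        """Shuffle each predictor and record the mean accuracy drop."""
        rng = np.random.default_rng(seed)
        base = np.mean(model(X) == y)
        importances = []
        for j in range(X.shape[1]):
            drops = []
            for _ in range(n_repeats):
                Xp = X.copy()
                Xp[:, j] = rng.permutation(Xp[:, j])  # break the link to y
                drops.append(base - np.mean(model(Xp) == y))
            importances.append(np.mean(drops))
        return importances

    # Toy "SDM": species present when temperature (column 0) exceeds 0.
    rng = np.random.default_rng(5)
    X = rng.normal(size=(500, 3))              # temperature, rainfall, slope
    y = (X[:, 0] > 0).astype(int)
    model = lambda X: (X[:, 0] > 0).astype(int)
    print(permutation_importance(model, X, y))  # column 0 dominates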

Explainable Goal-Driven Agents and Robots – A Comprehensive Review [article]

Fatai Sado, Chu Kiong Loo, Wei Shiung Liew, Matthias Kerzel, Stefan Wermter
2022 arXiv   pre-print
Finally, the paper presents requirements for explainability and suggests a roadmap for the possible realization of effective goal-driven explainable agents and robots.  ...  The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability.  ...  Acknowledgment This research was supported by the Georg Forster Research Fellowship for Experienced Researchers from Alexander von Humboldt-Stiftung/Foundation and Impact Oriented Interdisciplinary Research  ... 
arXiv:2004.09705v8 fatcat:ytz7hlwwdjd4na2h23rge3waqe

Mitigating Bias in Algorithmic Systems – A Fish-Eye View [article]

Kalia Orphanou, Jahna Otterbacher, Styliani Kleanthous, Khuyagbaatar Batsuren, Fausto Giunchiglia, Veronika Bogina, Avital Shulner Tal, Alan Hartman, Tsvi Kuflik
2022 arXiv   pre-print
and the solutions being proposed to address them, from a broad, cross-domain perspective.  ...  This survey provides a "fish-eye view," examining approaches across four areas of research.  ...  [158] use a causal Bayesian network and a structure learning algorithm to identify the causal factors for discrimination.  ... 
arXiv:2103.16953v2 fatcat:b27zb3zusnfmzcspyl2njbivkq

Disentangling User Interest and Conformity for Recommendation with Causal Embedding [article]

Yu Zheng, Chen Gao, Xiang Li, Xiangnan He, Depeng Jin, Yong Li
2021 arXiv   pre-print
In this paper, we present DICE, a general framework that learns representations where interest and conformity are structurally disentangled, and various backbone recommendation models could be smoothly  ...  Recommendation models are usually trained on observational interaction data.  ...  The proposed concise causal model is sourced from how the data is generated; thus the proposed framework is independent of backbone recommendation models.  ... 
arXiv:2006.11011v2 fatcat:q6m5efzoz5fdlmpfx4fikx43em
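
The structural disentanglement described above can be pictured as two embedding tables per side, with the final score a sum of an interest term and a conformity term, so either cause can be inspected or switched off. A hypothetical sketch with untrained random embeddings; DICE additionally trains these with cause-specific data and losses.

    import numpy as np

    class DICEScorer:
        """DICE-style scoring: separate interest and conformity
        embeddings for users and items; score = sum of the two causes."""
        def __init__(self, n_users, n_items, dim=16, seed=6):
            rng = np.random.default_rng(seed)
            self.u_int = rng.normal(scale=0.1, size=(n_users, dim))
            self.u_con = rng.normal(scale=0.1, size=(n_users, dim))
            self.i_int = rng.normal(scale=0.1, size=(n_items, dim))
            self.i_con = rng.normal(scale=0.1, size=(n_items, dim))

        def score(self, u, i, use_conformity=True):
            s = self.u_int[u] @ self.i_int[i]          # interest cause
            if use_conformity:
                s += self.u_con[u] @ self.i_con[i]     # conformity cause
            return s

    m = DICEScorer(n_users=100, n_items=500)
    print(m.score(0, 42), m.score(0, 42, use_conformity=False))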

Learning to Augment for Casual User Recommendation [article]

Jianling Wang, Ya Le, Bo Chang, Yuyan Wang, Ed H. Chi, Minmin Chen
2022 arXiv   pre-print
To bridge the gap, we propose a model-agnostic framework L2Aug to improve recommendations for casual users through data augmentation, without sacrificing core user experience.  ...  As a result, consumption activities from core users often dominate the training data used for learning.  ...  for core and casual users from the data augmentation perspective.  ...  We propose a model-agnostic framework L2Aug to learn a data augmentation policy using REINFORCE and improve the recommendation system  ... 
arXiv:2204.00926v1 fatcat:savvmou7kzb6vktzwku2hu37tu
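
The REINFORCE part mentioned in the snippet reduces, in its simplest form, to a score-function gradient on the probability of augmenting a user's sequence. The sketch below collapses L2Aug's sequence-level policy to a single logit and simulates the recommender's reward; all names and constants are illustrative.

    import numpy as np

    def reinforce_augment(reward_fn, n_iters=500, lr=0.1, seed=7):
        """Sample the binary action "augment or not", observe a reward,
        and follow the gradient  ∇ log π(a) * (r - baseline)."""
        rng = np.random.default_rng(seed)
        logit, baseline = 0.0, 0.0
        for _ in range(n_iters):
            p = 1.0 / (1.0 + np.exp(-logit))       # π(augment)
            a = rng.random() < p                   # sample action
            r = reward_fn(a)
            grad_logp = (1 - p) if a else -p       # d/d logit of log π(a)
            logit += lr * grad_logp * (r - baseline)
            baseline += 0.05 * (r - baseline)      # running-average baseline
        return 1.0 / (1.0 + np.exp(-logit))

    # Simulated environment where augmenting helps casual users on average.
    rng = np.random.default_rng(8)
    reward = lambda a: (1.0 if a else 0.2) + rng.normal(scale=0.1)
    print(reinforce_augment(reward))  # probability of augmenting → high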
Showing results 1 — 15 out of 3,306 results