3,684 Hits in 5.0 sec

Explanation-Guided Fairness Testing through Genetic Algorithm [article]

Ming Fan, Wenying Wei, Wuxia Jin, Zijiang Yang, Ting Liu
2022 arXiv   pre-print
Experiments on multiple real-world benchmarks, including tabular and text datasets, show that ExpGA achieves higher efficiency and effectiveness than four state-of-the-art approaches.  ...  Benefiting from this combination of explanation results and GA, ExpGA is both efficient and effective at detecting discriminatory individuals.  ...  The evaluation experiments demonstrate that ExpGA can detect discriminatory samples much faster and with a higher success rate than four state-of-the-art methods, on both the text and tabular benchmarks.  ... 
arXiv:2205.08335v1 fatcat:kwcxbsoif5ct3cq4m4i77rwee4
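
Below is a minimal Python sketch of the kind of genetic search the ExpGA snippet describes: evolve candidate inputs toward the decision boundary and flag those whose prediction flips when only the protected attribute is changed. The model interface (a binary classifier with scikit-learn-style predict/predict_proba) and all helper names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of genetic-algorithm-based individual fairness testing (hypothetical helpers).
import numpy as np

def is_discriminatory(model, x, protected_idx, protected_values):
    """An input is discriminatory if changing only the protected attribute
    changes the model's prediction."""
    base = model.predict(x.reshape(1, -1))[0]
    for v in protected_values:
        variant = x.copy()
        variant[protected_idx] = v
        if model.predict(variant.reshape(1, -1))[0] != base:
            return True
    return False

def genetic_fairness_search(model, seeds, protected_idx, protected_values,
                            feature_bounds, generations=50, pop_size=100,
                            mutation_rate=0.2, seed=0):
    rng = np.random.default_rng(seed)
    population = [seeds[i % len(seeds)].copy() for i in range(pop_size)]
    found = []
    for _ in range(generations):
        # Fitness: prefer inputs near the decision boundary, where small
        # perturbations are most likely to flip the prediction.
        probs = model.predict_proba(np.vstack(population))
        fitness = 1.0 - np.abs(probs[:, 1] - 0.5)
        # Selection: keep the fitter half, then mutate non-protected features
        # within their bounds to refill the population.
        order = np.argsort(-fitness)
        survivors = [population[i] for i in order[: pop_size // 2]]
        children = []
        for parent in survivors:
            child = parent.copy()
            for j, (lo, hi) in enumerate(feature_bounds):
                if j != protected_idx and rng.random() < mutation_rate:
                    child[j] = rng.uniform(lo, hi)
            children.append(child)
        population = survivors + children
        found.extend(x.copy() for x in population
                     if is_discriminatory(model, x, protected_idx, protected_values))
    return found
```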

A Review on Explainability in Multimodal Deep Neural Nets

Gargi Joshi, Rahee Walambe, Ketan Kotecha
2021 IEEE Access  
Multimodal explanations based on counterfactuality provide recommendations that offer actionable insights and recourse [189].  ...  In this model, a clinical diagnostic decision is conveyed with visual pointing and textual explanation in a coordinated fashion with a "visual word constraint" model.  ... 
doi:10.1109/access.2021.3070212 fatcat:5wtxr4nf7rbshk5zx7lzbtcram

Explainable Recommendation: A Survey and New Perspectives [article]

Yongfeng Zhang, Xu Chen
2020 arXiv   pre-print
We then conduct a comprehensive survey of explainable recommendation from three perspectives: 1) We provide a chronological research timeline of explainable recommendation. 2) We provide a two-dimensional  ...  Explainable recommendation attempts to develop models that generate not only high-quality recommendations but also intuitive explanations.  ...  Finally, the authors model users' preferences based on their activities and present explanations in a personalized fashion (Figure 3.9(b)).  ... 
arXiv:1804.11192v10 fatcat:scsd3htz65brbiae35zd3nixe4

Attribute-aware Explainable Complementary Clothing Recommendation [article]

Yang Li, Tong Chen, Zi Huang
2021 arXiv   pre-print
This work aims to tackle the explainability challenge in fashion recommendation tasks by proposing a novel Attribute-aware Fashion Recommender (AFRec).  ...  When performing clothes matching, most existing approaches leverage the latent visual features extracted from fashion item images for compatibility modelling, which lacks explainability of generated matching  ...  Another explainable fashion recommendation model [22] learns to generate review comments by an attentive RNN-based decoder using the fused item-level embeddings.  ... 
arXiv:2107.01655v1 fatcat:lzw5eesxh5cwpkonw3wtzppxxu

A model for multimodal humanlike perception based on modular hierarchical symbolic information processing, knowledge integration, and learning

Rosemarie Velik
2007 2007 2nd Bio-Inspired Models of Network, Information and Computing Systems  
In this article, a model for humanlike perception is introduced based on hierarchical modular fusion of multi-sensory data, symbolic information processing, integration of knowledge and memory, and learning  ...  The model and the underlying concepts are explained by means of a concrete example taken from building automation.  ...  As the explanation shall be kept simple, it is assumed that there is always only one person present that can trigger sensors.  ... 
doi:10.1109/bimnics.2007.4610105 fatcat:ye73yb4owbffzbdncdjthqhsve

PAI-BPR: Personalized Outfit Recommendation Scheme with Attribute-wise Interpretability [article]

Dikshant Sagar, Jatin Garg, Prarthana Kansal, Sejal Bhalla, Rajiv Ratn Shah, Yi Yu
2020 arXiv   pre-print
Fashion is an important part of human experience. Events such as interviews, meetings, marriages, etc. are often based on clothing styles.  ...  Our paper devises an attribute-wise interpretable compatibility scheme with personal preference modelling which captures user-item interaction along with general item-item interaction.  ...  On the other hand, the personal preference modeling focuses on extracting the latent preference factor based on the multimodal data (textual description and image) of fashion items and hence captures the  ... 
arXiv:2008.01780v1 fatcat:4mn7pronyfcrdpivngbsyax5d4
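
As a rough illustration of the scoring idea in the snippet above (a personal preference term plus a general item-item compatibility term, trained pairwise), here is a minimal PyTorch sketch; the embedding sizes, module names, and BPR-style loss are assumptions for illustration, not PAI-BPR's actual architecture.

```python
# Outfit score = user-item preference + item-item compatibility, trained with
# a BPR-style pairwise ranking loss (all names and dimensions are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PersonalizedCompatibility(nn.Module):
    def __init__(self, n_users, n_items, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def score(self, user, top, bottom):
        u = self.user_emb(user)
        t = self.item_emb(top)
        b = self.item_emb(bottom)
        preference = (u * b).sum(-1)       # user-item interaction
        compatibility = (t * b).sum(-1)    # general item-item interaction
        return preference + compatibility

    def bpr_loss(self, user, top, pos_bottom, neg_bottom):
        # Rank the observed (positive) bottom above a sampled negative one.
        diff = self.score(user, top, pos_bottom) - self.score(user, top, neg_bottom)
        return -F.logsigmoid(diff).mean()
```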

Characterizing Hirability via Personality and Behavior [article]

Harshit Malik, Hersh Dhillon, Roland Goecke, Ramanathan Subramanian
2020 arXiv   pre-print
predicting personality and hirability. (3) Explanatory analyses reveal the impact of multimodal behavior on personality impressions; Conscientiousness impressions are impacted by the use of cuss words  ...  Modeling hirability as a discrete/continuous variable with the big-five personality traits as predictors, we utilize (a) apparent personality annotations, and (b) personality estimates obtained via audio  ...  Specifically, [10] explains hirability predictions based on personality annotations, while [8] shows typical faces reflective of apparent traits.  ... 
arXiv:2006.12041v1 fatcat:t2lm3b5gdnh4bpfz2pdal7t7bu
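
A minimal scikit-learn sketch of the modeling setup mentioned in the snippet (hirability as a discrete or continuous variable with the big-five traits as predictors); the trait column order, estimator choices, and function name are illustrative assumptions, not the authors' pipeline.

```python
# Regress or classify hirability on big-five personality scores.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

def fit_hirability(trait_scores: np.ndarray, hirability: np.ndarray, discrete: bool):
    """trait_scores: (n_samples, 5) big-five predictors;
    hirability: a continuous score or a binary hire/no-hire label."""
    model = LogisticRegression() if discrete else LinearRegression()
    model.fit(trait_scores, hirability)
    # The coefficients indicate how each trait impression relates to hirability.
    coef = model.coef_.ravel()
    return model, dict(zip(TRAITS, coef))
```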

A Survey on Accuracy-oriented Neural Recommendation: From Collaborative Filtering to Information-rich Recommendation [article]

Le Wu, Xiangnan He, Xiang Wang, Kun Zhang, Meng Wang
2021 arXiv   pre-print
Influenced by the great success of deep learning in computer vision and language understanding, research in recommendation has shifted to inventing new recommender models based on neural networks.  ...  Specifically, based on the data usage during recommendation modeling, we divide the work into collaborative filtering and information-rich recommendation: 1) collaborative filtering, which leverages the  ...  Image content based models are suitable for recommendation scenarios that rely heavily on visual influence (e.g., fashion recommendation) or new items with little user feedback.  ... 
arXiv:2104.13030v3 fatcat:7bzwaxcarrgbhe36teik2rhl6e
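
For readers new to the collaborative-filtering setting this survey starts from, here is a minimal numpy sketch of classic matrix factorization over user-item interactions; the SGD scheme and hyperparameters are illustrative, not taken from the survey.

```python
# Classic matrix factorization: predict a rating as the dot product of
# user and item latent factors, fit by stochastic gradient descent.
import numpy as np

def matrix_factorization(ratings, dim=16, lr=0.01, reg=0.05, epochs=20, seed=0):
    """ratings: list of (user_idx, item_idx, rating) triples."""
    rng = np.random.default_rng(seed)
    n_users = max(u for u, _, _ in ratings) + 1
    n_items = max(i for _, i, _ in ratings) + 1
    P = 0.1 * rng.standard_normal((n_users, dim))  # user factors
    Q = 0.1 * rng.standard_normal((n_items, dim))  # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            pu, qi = P[u].copy(), Q[i].copy()
            err = r - pu @ qi
            P[u] += lr * (err * qi - reg * pu)
            Q[i] += lr * (err * pu - reg * qi)
    return P, Q  # predicted rating for (u, i) is P[u] @ Q[i]
```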

Learning to Match on Graph for Fashion Compatibility Modeling

Xun Yang, Xiaoyu Du, Meng Wang
2020 Proceedings of the AAAI Conference on Artificial Intelligence  
Understanding the mix-and-match relationships between items has received increasing attention in the fashion industry.  ...  Finally, we predict pairwise compatibility based on a compatibility metric learning module. Extensive experiments show that DREP can significantly improve the performance of state-of-the-art methods.  ...  In prior work, a multi-modal attention neural network is designed to generate visual explanations for explainable fashion recommendation.  ... 
doi:10.1609/aaai.v34i01.5362 fatcat:rpzqrjyiarbejmidblcznqez4i
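
A minimal PyTorch sketch of a pairwise compatibility metric-learning module of the kind the snippet mentions: project two item feature vectors into a shared space and score compatibility as negative squared distance, trained with a margin loss. The layer sizes and loss choice are assumptions for illustration, not DREP's actual design.

```python
# Pairwise compatibility via metric learning over projected item features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompatibilityMetric(nn.Module):
    def __init__(self, feat_dim=512, embed_dim=64):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                  nn.Linear(256, embed_dim))

    def forward(self, feat_a, feat_b):
        za, zb = self.proj(feat_a), self.proj(feat_b)
        return -((za - zb) ** 2).sum(-1)   # higher score = more compatible

def hinge_loss(model, anchor, compatible, incompatible, margin=1.0):
    # Push compatible pairs to score at least `margin` above incompatible ones.
    pos = model(anchor, compatible)
    neg = model(anchor, incompatible)
    return F.relu(margin - (pos - neg)).mean()
```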

Revisiting Cross Modal Retrieval [article]

Shah Nawaz, Muhammad Kamran Janjua, Alessandro Calefati, Ignazio Gallo
2018 arXiv   pre-print
We evaluate our approach on two well-known multimodal datasets: MS-COCO and Flickr30K.  ...  Most multimodal architectures employ separate networks for each modality to capture the semantic relationship between them.  ...  Figure 2 visually explains the architecture of the network.  ... 
arXiv:1807.07364v1 fatcat:l4pp5cq6f5e77gnwd3tcbggnne
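
For context, here is a minimal PyTorch sketch of the common two-branch image-text retrieval setup the snippet alludes to (separate encoders projecting each modality into a shared space, trained with a contrastive objective); this is the baseline being contrasted, not the paper's proposed approach, and all sizes are illustrative.

```python
# Two-branch cross-modal retrieval: shared embedding space + contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchRetrieval(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=300, shared_dim=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, shared_dim)
        self.txt_proj = nn.Linear(txt_dim, shared_dim)

    def forward(self, img_feat, txt_feat):
        zi = F.normalize(self.img_proj(img_feat), dim=-1)
        zt = F.normalize(self.txt_proj(txt_feat), dim=-1)
        return zi @ zt.t()   # batch x batch cosine similarity matrix

def contrastive_loss(sim, temperature=0.07):
    # Matching image/caption pairs lie on the diagonal of the similarity matrix.
    targets = torch.arange(sim.size(0), device=sim.device)
    return (F.cross_entropy(sim / temperature, targets) +
            F.cross_entropy(sim.t() / temperature, targets)) / 2
```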

Multimodal Research in Vision and Language: A Review of Current and Emerging Trends [article]

Shagun Uppal, Sarthak Bhagat, Devamanyu Hazarika, Navonil Majumdar, Soujanya Poria, Roger Zimmermann, Amir Zadeh
2020 arXiv   pre-print
Moreover, we shed some light on multi-disciplinary patterns and insights that have emerged in the recent past, directing this field towards more modular and transparent intelligent systems.  ...  In this paper, we present a detailed overview of the latest trends in research pertaining to visual and language modalities.  ...  based on a visual input.  ... 
arXiv:2010.09522v2 fatcat:l4npstkoqndhzn6hznr7eeys4u

A Model for Multimodal Humanlike Perception based on Modular Hierarchical Symbolic Information Processing, Knowledge Integration, and Learning

Rosemarie Velik
2007 Proceedings of the 2nd International Conference on Bio-Inspired Models of Network Information and Computing Systems  
In this article, a model for humanlike perception is introduced based on hierarchical modular fusion of multi-sensory data, symbolic information processing, integration of knowledge and memory, and learning  ...  The model and the underlying concepts are explained by means of a concrete example taken from building automation.  ...  As the explanation shall be kept simple, it is assumed that there is always only one person present that can trigger sensors.  ... 
doi:10.4108/icst.bionetics2007.2421 dblp:conf/bionetics/Velik07 fatcat:o7rml6ac2ze2tcx7ts4gkalogu

Personalized Transformer for Explainable Recommendation [article]

Lei Li, Yongfeng Zhang, Li Chen
2021 arXiv   pre-print
To address this problem, we present a PErsonalized Transformer for Explainable Recommendation (PETER), on which we design a simple and effective learning objective that utilizes the IDs to predict the words in the target explanation, so as to endow the IDs with linguistic meanings and to achieve a personalized Transformer.  ... 
arXiv:2105.11601v2 fatcat:zpycrkehsjdsxh72dyos3i3ufy
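
A simplified PyTorch sketch of the learning objective the snippet describes: embed the user and item IDs as prefix tokens of a Transformer and train the model to predict the words of the target explanation, so the IDs acquire linguistic meaning. The architecture sizes and the single-loss setup are simplifying assumptions, not PETER's full model.

```python
# Tie user/item IDs to language by predicting explanation words from them.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IDToExplanation(nn.Module):
    def __init__(self, n_users, n_items, vocab_size, d_model=256,
                 n_layers=2, n_heads=4, max_len=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, d_model)
        self.item_emb = nn.Embedding(n_items, d_model)
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len + 2, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, user, item, words):
        # Sequence: [user ID, item ID, w_1, ..., w_T]
        seq = torch.cat([self.user_emb(user).unsqueeze(1),
                         self.item_emb(item).unsqueeze(1),
                         self.word_emb(words)], dim=1)
        positions = torch.arange(seq.size(1), device=seq.device)
        seq = seq + self.pos_emb(positions)
        # Causal mask so each position only attends to earlier positions.
        T = seq.size(1)
        mask = torch.triu(torch.full((T, T), float("-inf"), device=seq.device),
                          diagonal=1)
        hidden = self.encoder(seq, mask=mask)
        return self.lm_head(hidden)

def explanation_loss(logits, words):
    # Position 1 (the item ID) predicts the first word, position 2 the second, etc.
    pred = logits[:, 1:-1, :]   # drop the user-ID position and the final position
    return F.cross_entropy(pred.reshape(-1, pred.size(-1)), words.reshape(-1))
```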

Towards Natural Language Interfaces for Data Visualization: A Survey [article]

Leixian Shen, Enya Shen, Yuyu Luo, Xiaocong Yang, Xuming Hu, Xiongshuai Zhang, Zhiwei Tai, Jianmin Wang
2021 arXiv   pre-print
In order to classify each paper, we develop categorical dimensions based on a classic information visualization pipeline with the extension of a V-NLI layer.  ...  It enables users to focus on their tasks rather than worrying about operating the interface to visualization tools.  ...  [80] made the first step toward design guidelines for dealing with vague modifiers based on existing cognitive linguistics research and a crowdsourcing study.  ... 
arXiv:2109.03506v1 fatcat:7cz5ibrwyrhdlbkij3e74u6lem

Multimodal Co-learning: Challenges, Applications with Datasets, Recent Advances and Future Directions [article]

Anil Rahate, Rahee Walambe, Sheela Ramanna, Ketan Kotecha
2021 arXiv   pre-print
We present a comprehensive taxonomy of multimodal co-learning based on the challenges addressed by co-learning and associated implementations.  ...  The various techniques employed, including the latest ones, are reviewed along with some of the applications and datasets.  ... 
arXiv:2107.13782v2 fatcat:s4spofwxjndb7leqbcqnwbifq4
Showing results 1 — 15 out of 3,684 results