
A modularized framework for explaining hierarchical attention networks on text classifiers

Mahtab Sarvmaili, Amilcar Soares, Riccardo Guidotti, Anna Monreale, Fosca Giannotti, Dino Pedreschi, Stan Matwin
2021 Proceedings of the Canadian Conference on Artificial Intelligence  
In this paper, we propose FEHAN, a modularized Framework for Explaining HiErarchical Attention Network trained to classify text data.  ...  It then generates a set of similar sentences using a Markov chain text generator, and it replaces the salient sentences with the synthetic ones, resulting in a new set of semantically similar documents  ...  Acknowledgements The authors would like to thank NSERC (Natural Sciences and Engineering Research Council of Canada) for financial support.  ... 
doi:10.21428/594757db.23db72bf fatcat:w2vtraapmzc43m53x755572ehi
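The snippet above says FEHAN perturbs documents by generating similar sentences with a Markov chain text generator. As a rough illustration of that component only (not FEHAN's actual implementation, whose details are not given here), a minimal first-order Markov chain generator might look like:

```python
import random
from collections import defaultdict

def build_chain(sentences):
    """Build a first-order Markov chain (bigram transition table) from tokenized sentences."""
    chain = defaultdict(list)
    for sent in sentences:
        tokens = ["<s>"] + sent.split() + ["</s>"]
        for a, b in zip(tokens, tokens[1:]):
            chain[a].append(b)
    return chain

def generate(chain, max_len=20, seed=None):
    """Sample a synthetic sentence by walking the chain from the start token."""
    rng = random.Random(seed)
    token, out = "<s>", []
    while len(out) < max_len:
        token = rng.choice(chain[token])
        if token == "</s>":
            break
        out.append(token)
    return " ".join(out)

# Toy corpus; real perturbation would use sentences from the classified documents.
corpus = ["the model classifies text", "the model explains text classifiers"]
chain = build_chain(corpus)
synthetic = generate(chain, seed=0)
```

In the explanation pipeline described, such synthetic sentences would replace salient sentences to produce semantically similar perturbed documents.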

A Field Guide to Scientific XAI: Transparent and Interpretable Deep Learning for Bioinformatics Research [article]

Thomas P Quinn, Sunil Gupta, Svetha Venkatesh, Vuong Le
2021 arXiv   pre-print
Unfortunately, the opacity of deep neural networks limits their role in scientific discovery, creating a new demand for models that are transparently interpretable.  ...  It provides a taxonomy of transparent model design concepts, a practical workflow for putting design concepts into practice, and a general template for reporting design choices.  ...  We can also classify modularity based on the arrangement of the modules.  ... 
arXiv:2110.08253v1 fatcat:xghw4z53fvbivkzqp3aczlbpky

Explainable Rumor Detection using Inter and Intra-feature Attention Networks [article]

Mingxuan Chen, Ning Wang, K.P. Subbalakshmi
2020 arXiv   pre-print
We tackle the problem of automated detection of rumors in social media in this paper by designing a modular explainable architecture that uses both latent and handcrafted features and can be expanded to  ...  This approach allows the end user not only to determine whether a piece of information on social media is real or a rumor, but also to see explanations of why the algorithm arrived at its conclusion  ...  A post-level attention model (PLAN), a structure-aware self-attention model (StA-PLAN) and a hierarchical token and post-level attention model (StA-HiTPLAN) were proposed to explain both post-level and  ... 
arXiv:2007.11057v1 fatcat:mah35jrq6re7vdabskubn57c64

Hierarchical Self Attention Based Autoencoder for Open-Set Human Activity Recognition [article]

M Tanjid Hasan Tonmoy, Saif Mahmud, A K M Mahbubur Rahman, M Ashraful Amin, Amin Ahsan Ali
2021 arXiv   pre-print
Furthermore, attention maps generated by the hierarchical model demonstrate explainable selection of features in activity recognition.  ...  Hence, the proposed self-attention-based approach combines data hierarchically from different sensor placements across time to classify closed-set activities, and it obtains notable performance improvement  ...  Using self-attention in a hierarchical manner has been proposed for various tasks such as classifying text documents [6], generating recommendations [10], etc., in order to break up the task into relevant  ... 
arXiv:2103.04279v1 fatcat:mfm5ao4hxzaxvipfvgvizv4rse

Topic-oriented community detection of rating-based social networks

Ali Reihanian, Behrouz Minaei-Bidgoli, Hosein Alizadeh
2016 Journal of King Saud University: Computer and Information Sciences  
Finding meaningful communities in this kind of network is an interesting research area and has attracted the attention of many researchers.  ...  Most of the research in the field of community detection focuses mainly on the topological structure of the network without performing any content analysis.  ...  For example, a novel method has been proposed to cluster text-based social objects such as emails.  ... 
doi:10.1016/j.jksuci.2015.07.001 fatcat:bhhles57z5a75mbt5fuj3kxjhy

LRTA: A Transparent Neural-Symbolic Reasoning Framework with Modular Supervision for Visual Question Answering [article]

Weixin Liang, Feiyang Niu, Aishwarya Reganti, Govind Thattai, Gokhan Tur
2020 arXiv   pre-print
We propose LRTA [Look, Read, Think, Answer], a transparent neural-symbolic reasoning framework for visual question answering that solves the problem step-by-step like humans and provides human-readable  ...  Our experiments on the GQA dataset show that LRTA outperforms the state-of-the-art model by a large margin (43.1% vs. 28.0%) on the full answer generation task.  ...  Acknowledgement We would like to thank Robinson Piramuthu, Dilek Hakkani-Tur, Arindam Mandal, Yanbang Wang and the anonymous reviewers for their insightful feedback and discussions that have notably shaped  ... 
arXiv:2011.10731v1 fatcat:jk7gpqjhvjdpnosoyc6ccpjfde

Hierarchical Semantic Perceptron Grid Based on Neural Network

Huaihu CAO, Zhenwei YU, Yinyan WANG
2005 2005 First International Conference on Semantics, Knowledge and Grid  
A hierarchical semantic perceptron grid architecture based on a neural network is proposed in this paper. Semantic spotting is a key issue of this architecture; to solve this problem, we first formulate it and then propose a semantic neural network classifier framework.  ...  We decided to build a flat modular classifier that is implemented as a set of 103 individual expert networks.  ... 
doi:10.1109/skg.2005.79 dblp:conf/skg/CaoYW05a fatcat:m5dknjkwgjesrhvfcjdamufof4

Referring Expression Comprehension: A Survey of Methods and Datasets [article]

Yanyuan Qiao, Chaorui Deng, Qi Wu
2020 arXiv   pre-print
This task has attracted a lot of attention from both the computer vision and natural language processing communities, and several lines of work have been proposed, from CNN-RNN models and modular networks to complex  ...  We classify methods by their mechanism for encoding the visual and textual modalities. In particular, we examine the common approach of jointly embedding images and expressions into a common feature space.  ...  [28] propose the Modular Attention Network (MAttNet).  ... 
arXiv:2007.09554v2 fatcat:32wmggwnezggnermyh5iw3uq2y

Fine-tuned BERT Model for Large Scale and Cognitive Classification of MOOCs

Hanane Sebbaq, Nour-eddine El Faddouli
2022 International Review of Research in Open and Distance Learning  
In addition to applying a simple softmax classifier, we chose the prevalent neural networks long short-term memory (LSTM) and bi-directional long short-term memory (Bi-LSTM).  ...  The results of our experiments showed, on the one hand, that choosing a more complex classifier does not boost the performance of classification.  ...  They presented a text classification model that used a back-propagation learning approach to train a text classifier using an artificial neural network.  ... 
doi:10.19173/irrodl.v23i2.6023 fatcat:vujlpvddg5gsrhwysdrb34ymcm

A Novel Clustering Methodology Based on Modularity Optimisation for Detecting Authorship Affinities in Shakespearean Era Plays

Leila M. Naeni, Hugh Craig, Regina Berretta, Pablo Moscato, Tamar Schlick
2016 PLoS ONE  
Our methodology is based on a newly proposed memetic algorithm (iMA-Net) for discovering clusters of data elements by maximizing the modularity function in proximity graphs of literary works.  ...  In this study we propose a novel, unsupervised clustering methodology for analyzing large datasets.  ...  Acknowledgments We would like to thank Renato Vimieiro and Carlos Riveros for providing the R package for JSD matrix computation, and Luke Mathieson for his extensive proof-reading and constructive  ... 
doi:10.1371/journal.pone.0157988 pmid:27571416 pmcid:PMC5003342 fatcat:k4ob56bnl5h4hpndqbdqtlxkau
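The snippet above describes clustering by maximizing the modularity function over proximity graphs. The abstract does not state which variant iMA-Net optimizes; in its standard Newman-Girvan form, modularity is

```latex
Q = \frac{1}{2m} \sum_{i,j} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j)
```

where $A_{ij}$ is the adjacency matrix, $k_i$ the degree of node $i$, $m$ the total number of edges, and $\delta(c_i, c_j) = 1$ when nodes $i$ and $j$ are assigned to the same community.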

Betty: An Automatic Differentiation Library for Multilevel Optimization [article]

Sang Keun Choe, Willie Neiswanger, Pengtao Xie, Eric Xing
2022 arXiv   pre-print
To this end, we develop an automatic differentiation procedure based on a novel interpretation of multilevel optimization as a dataflow graph.  ...  Multilevel optimization has been widely adopted as a mathematical foundation for a myriad of machine learning problems, such as hyperparameter optimization, meta-learning, and reinforcement learning, to  ...  MLO has gained considerable attention as a unified mathematical framework for studying diverse problems including meta-learning [12, 34] , hyperparameter optimization [13, 14, 30] , neural architecture  ... 
arXiv:2207.02849v1 fatcat:rkdkasfml5fi5irzmrn25ydxxa

Survey on graph embeddings and their applications to machine learning problems on graphs

Ilya Makarov, Dmitrii Kiselev, Nikita Nikitinsky, Lovro Subelj
2021 PeerJ Computer Science  
Using the constructed feature spaces, many machine learning problems on graphs can be solved via standard frameworks suitable for vectorized feature representation.  ...  As a result, our survey covers a new rapidly growing field of network feature engineering, presents an in-depth analysis of models based on network types, and overviews a wide range of applications to  ...  DyHAN presents the model for dynamic heterogeneous graphs with hierarchical attention. Another way to use the attention mechanism in dynamic heterogeneous networks is the .  ... 
doi:10.7717/peerj-cs.357 pmid:33817007 pmcid:PMC7959646 fatcat:ntronyrbgfbedez5dks6h4hoq4

A Review on Explainability in Multimodal Deep Neural Nets

Gargi Joshi, Rahee Walambe, Ketan Kotecha
2021 IEEE Access  
This paper extensively reviews the present literature to present a comprehensive survey and commentary on the explainability in multimodal deep neural nets, especially for the vision and language tasks  ...  This has given rise to the quest for model interpretability and explainability, more so in the complex tasks involving multimodal AI methods.  ...  A "Multimodal Knowledge-aware Hierarchical Attention Network" in which a knowledge graph with multiple modalities and different features is built for the medical field.  ... 
doi:10.1109/access.2021.3070212 fatcat:5wtxr4nf7rbshk5zx7lzbtcram

Towards Opinion Summarization of Customer Reviews

Samuel Pecar
2018 Proceedings of ACL 2018, Student Research Workshop  
It is impossible for any human reader to process even the most relevant of these documents. The most promising tool to solve this task is text summarization.  ...  In this paper, we introduce our research plan to use neural networks on user-generated travel reviews to generate summaries that take into account shifting opinions over time.  ...  For text summarization, a framework for abstractive summarization based on the recent development of a treebank for AMR can be employed.  ... 
doi:10.18653/v1/p18-3001 dblp:conf/acl/Pecar18 fatcat:5d3p4w6gvvfixdvp2jpcq6l4ia

On Interpretability of Artificial Neural Networks: A Survey [article]

Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang
2021 arXiv   pre-print
Deep learning, as represented by deep artificial neural networks (DNNs), has achieved great success in many important areas that deal with text, images, videos, graphs, and so on.  ...  Due to the huge potential of deep learning, interpreting neural networks has recently attracted much research attention.  ...  The authors are grateful to Dr. Hongming Shan (Fudan University) for his suggestions and to the anonymous reviewers for their advice.  ... 
arXiv:2001.02522v4 fatcat:pxa66n2wfjcbxfwc3k5gm3r2xa