Context-aware Non-linear and Neural Attentive Knowledge-based Models for Grade Prediction
[article]
2020
arXiv
pre-print
In this paper, we propose context-aware non-linear and neural attentive models that can potentially better estimate a student's knowledge state from his/her prior course information, as well as model the ...
One of the successful approaches for accurately predicting a student's grades in future courses is Cumulative Knowledge-based Regression Models (CKRM). ...
Access to research and computing facilities was provided by the Digital Technology Center and the Minnesota Supercomputing Institute, http://www.msi.umn.edu. ...
arXiv:2003.05063v1
fatcat:leyzugpnwfdlnkw2thwnafzwa4
Context-aware Nonlinear and Neural Attentive Knowledge-based Models for Grade Prediction
2020
Zenodo
In this paper, we propose context-aware nonlinear and neural attentive models that can potentially better estimate a student's knowledge state from his/her prior course information, as well as model the ...
One of the successful approaches for accurately predicting a student's grades in future courses is Cumulative Knowledge-based Regression Models (CKRM). ...
Access to research and computing facilities was provided by the Digital Technology Center and the Minnesota Supercomputing Institute, http://www.msi.umn.edu. ...
doi:10.5281/zenodo.3911794
fatcat:cxcfo556uzeixndpnsxoledu6i
GraphSpeech: Syntax-Aware Graph Attention Network For Neural Speech Synthesis
[article]
2021
arXiv
pre-print
We propose a novel neural TTS model, denoted as GraphSpeech, that is formulated under the graph neural network framework. ...
GraphSpeech encodes explicitly the syntactic relation of input lexical tokens in a sentence, and incorporates such information to derive syntactically motivated character embeddings for TTS attention mechanism ...
We use Mel Linear, Post-Net and Stop Linear to predict the mel-spectrum and the stop token respectively. ...
arXiv:2010.12423v3
fatcat:5gz2wgjpmzeljjechq3pf47pzu
Context-Aware Convolutional Neural Network for Grading of Colorectal Cancer Histology Images
[article]
2019
arXiv
pre-print
We propose a novel way to incorporate larger context by a context-aware neural network based on images with dimensions of 1,792×1,792 pixels. ...
Code and dataset related information is available at this link: https://tia-lab.github.io/Context-Aware-CNN ...
The proposed context-aware model is evaluated for CRC grading and breast cancer classification. ...
arXiv:1907.09478v1
fatcat:ctw2rls3pnhw7hh74xfr7ngjou
Semantics of the Black-Box: Can knowledge graphs help make deep learning systems more interpretable and explainable?
[article]
2020
arXiv
pre-print
However, the Black-Box nature of DL models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for interpretability and explainability of ...
This aspect is missing in early data-focused approaches and necessitated knowledge-infused learning and other strategies to incorporate computational knowledge. ...
to hidden layers of neural models for explainable decision making [21], (c) Infusing contextual representations from relevant subgraph or paths of KG through either concatenation or pooling or non-linear ...
arXiv:2010.08660v4
fatcat:hcoahll2ivhdpcix7t6ezh425y
A Trio Neural Model for Dynamic Entity Relatedness Ranking
2018
Proceedings of the 22nd Conference on Computational Natural Language Learning
In this work, we propose a neural network-based approach for dynamic entity relatedness, leveraging the collective attention as supervision. ...
Our model is capable of learning rich and different entity representations in a joint framework. ...
We thank the reviewers for the suggestions on the content and structure of the paper. ...
doi:10.18653/v1/k18-1004
dblp:conf/conll/NguyenTN18
fatcat:liasz2zhpnbnjmqh5dculu4lfa
Automated Essay Scoring with Discourse-Aware Neural Models
2019
Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications
Neural networks offer an alternative to feature engineering, but they typically require more annotated data. ...
This paper explores network structures, contextualized embeddings and pre-training strategies aimed at capturing discourse characteristics of essays. ...
Simpler discourse-aware neural models are still useful, but they benefit from combination with a feature-based model. ...
doi:10.18653/v1/w19-4450
dblp:conf/bea/NadeemNLO19
fatcat:yc34nb5w75ar5niz4ntfjywnde
On the Comparison of Popular End-to-End Models for Large Scale Speech Recognition
2020
Interspeech 2020
Index Terms: end-to-end, RNN-transducer, attention-based encoder-decoder, transformer. Popular End-to-End Models: In this section, we give a brief introduction of current popular E2E models: RNN-T, RNN-AED ...
In this study, we conduct an empirical comparison of RNN-T, RNN-AED, and Transformer-AED models, in both non-streaming and streaming modes. ...
The non-streaming RNN-AED model uses exactly the same encoder and decoder structures as the non-streaming RNN-T model. Similar to [34], a location-aware attention mechanism is used. ...
doi:10.21437/interspeech.2020-2846
dblp:conf/interspeech/Li0G0Z020
fatcat:2xfo2lo4q5cgbgecg3lufby7oq
On the Comparison of Popular End-to-End Models for Large Scale Speech Recognition
[article]
2020
arXiv
pre-print
Currently, there are three promising E2E methods: recurrent neural network transducer (RNN-T), RNN attention-based encoder-decoder (AED), and Transformer-AED. ...
In this study, we conduct an empirical comparison of RNN-T, RNN-AED, and Transformer-AED models, in both non-streaming and streaming modes. ...
The non-streaming RNN-AED model uses exactly the same encoder and decoder structures as the non-streaming RNN-T model. Similar to [34], a location-aware attention mechanism is used. ...
arXiv:2005.14327v2
fatcat:44j4uohzn5h5dn5i34pk33o5ty
Sentence Level Human Translation Quality Estimation with Attention-based Neural Networks
[article]
2020
arXiv
pre-print
Empirical results on a large human annotated dataset show that the neural model outperforms feature-based methods significantly. The dataset and the tools are available. ...
Conventional methods for solving this task rely on manually engineered features and external knowledge. ...
Human Model Predictions: In the upper example of Table 4, the neural model with attention predicts the scores for 'IW' and 'TM' fairly accurately, which are about the fluency of the translation. ...
arXiv:2003.06381v1
fatcat:3q66lyvbvvadpign23kyt2clou
Personalized News Recommendation: Methods and Challenges
[article]
2022
arXiv
pre-print
Next, we introduce the public datasets and evaluation methods for personalized news recommendation. ...
We first review the techniques for tackling each core problem in a personalized news recommender system and the challenges they face. ...
It predicts the topic of news based on texts and concepts, and uses the predicted topic to enrich the knowledge graph and learn topic-enriched knowledge representations of news with graph neural networks ...
arXiv:2106.08934v3
fatcat:iagqsw73hrehxaxpvpydvtr26m
Improving Explainable Recommendations by Deep Review-Based Explanations
2021
IEEE Access
We benchmark our methods' performance by comparing with non-review-based recommender systems and advanced review-aware recommender systems. ...
In this paper, we develop two character-level deep neural network-based personalised review generation models, and improve recommendation accuracy by generating high-quality text which meets the input ...
Thus, we argue that generation models have learned the relevant patterns and contexts to improve the quality of their internal representations and outperform the traditional non-review-aware recommender ...
doi:10.1109/access.2021.3076146
fatcat:6eedwpdhjbfsvp52alb6zlsdoy
Learning an Executable Neural Semantic Parser
[article]
2018
arXiv
pre-print
The generation process is modeled by structured recurrent neural networks, which provide a rich encoding of the sentential context and generation history for making predictions. ...
are provided, and distant supervision where only unlabeled sentences and a knowledge base are available. ...
The process of generating logical forms is modeled by recurrent neural networks-a powerful tool for encoding the context of a sentence and the generation history for making predictions (Section 3.5). ...
arXiv:1711.05066v2
fatcat:afikadyvarevvjit7adoscsx7u
A Trio Neural Model for Dynamic Entity Relatedness Ranking
[article]
2018
arXiv
pre-print
In this work, we propose a neural network-based approach for dynamic entity relatedness, leveraging the collective attention as supervision. ...
Our model is capable of learning rich and different entity representations in a joint framework. ...
We thank the reviewers for the suggestions on the content and structure of the paper. ...
arXiv:1808.08316v3
fatcat:byi3hqroqbbgxmzd2pwha7sdve
Quality Inspection of Food and Agricultural Products using Artificial Intelligence
2021
Advances in Agricultural and Food Research Journal
A rising awareness for quality inspection of food and agricultural products has generated a growing effort to develop rapid and non-destructive techniques. ...
It was demonstrated that ANN provides the best results for modelling and is effective in real-time monitoring techniques. ...
Acknowledgments: The authors would like to acknowledge the valuable support from the Department of Biological and Agricultural Engineering, Faculty of Engineering, Universiti Putra Malaysia for providing ...
doi:10.36877/aafrj.a0000237
fatcat:zmmgbq43xvhidhzmubslr6izom
Showing results 1 — 15 out of 8,854 results