Linking artificial and human neural representations of language
2019
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Our results constrain the space of NLU models that could best account for human neural representations of language, but also suggest limits on the possibility of decoding fine-grained syntactic information ...
Through further task ablations and representational analyses, we find that tasks which produce syntax-light representations yield significant improvements in brain decoding performance. ...
Mitchell et al. (2008) first demonstrated that distributional word representations could be used to predict human brain activations when subjects were presented with individual words in isolation. ...
doi:10.18653/v1/d19-1050
dblp:conf/emnlp/GauthierL19
fatcat:gaaqsgzx3fdb5cgi353625x2py
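For context on the decoding setup these snippets describe, here is a minimal illustrative sketch (not code from either record above): fit a regularized linear map from voxel patterns to distributional word vectors and score it by rank-based retrieval. All array sizes and data below are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_voxels, n_dims = 200, 500, 300   # hypothetical sizes

# Synthetic stand-ins for distributional word vectors and fMRI responses.
word_vectors = rng.standard_normal((n_stimuli, n_dims))
true_map = rng.standard_normal((n_dims, n_voxels))
brain = word_vectors @ true_map + 0.5 * rng.standard_normal((n_stimuli, n_voxels))

X_tr, X_te, Y_tr, Y_te = train_test_split(brain, word_vectors, random_state=0)

# Regularized linear decoder: voxel patterns -> word-vector dimensions.
decoder = Ridge(alpha=1.0).fit(X_tr, Y_tr)
pred = decoder.predict(X_te)

# Rank accuracy: how highly each prediction ranks its true vector
# among all test items (chance is about 0.5).
sims = pred @ Y_te.T
ranks = (-sims).argsort(axis=1)
rank_of_truth = (ranks == np.arange(len(Y_te))[:, None]).argmax(axis=1)
print("mean rank accuracy:", 1 - rank_of_truth.mean() / (len(Y_te) - 1))
```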
Linking artificial and human neural representations of language
[article]
2019
arXiv
pre-print
Our results constrain the space of NLU models that could best account for human neural representations of language, but also suggest limits on the possibility of decoding fine-grained syntactic information ...
Through further task ablations and representational analyses, we find that tasks which produce syntax-light representations yield significant improvements in brain decoding performance. ...
Mitchell et al. (2008) first demonstrated that distributional word representations could be used to predict human brain activations when subjects were presented with individual words in isolation. ...
arXiv:1910.01244v1
fatcat:6kxuvlw65zachbnccv7ywme2au
Fine-Grained Attention Mechanism for Neural Machine Translation
[article]
2018
arXiv
pre-print
In experiments on En-De and En-Fi translation, the fine-grained attention method improves translation quality in terms of BLEU score. ...
Neural machine translation (NMT) is a new paradigm in machine translation, and the attention mechanism has become the dominant approach, setting state-of-the-art records in many language pairs. ...
With alignment analysis, the fine-grained attention method revealed that different dimensions of the context play different roles in neural machine translation. ...
arXiv:1803.11407v2
fatcat:7pksn55hrzc75mshb4r2m2ruiq
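An illustrative toy of the fine-grained attention idea in this record, assuming it means scoring every dimension of each source annotation separately (a 2D weight matrix) rather than one scalar per source position; the bilinear scorer W is an arbitrary stand-in, not the paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
src_len, d = 6, 8                                 # hypothetical sizes
annotations = rng.standard_normal((src_len, d))   # encoder states h_1..h_T
query = rng.standard_normal(d)                    # current decoder state
W = rng.standard_normal((d, d))                   # toy bilinear scorer

# Per-position, per-dimension energies e[t, k] instead of one scalar per t.
energies = annotations * (query @ W)              # (src_len, d)

# Softmax over source positions, independently for each dimension k.
weights = np.exp(energies - energies.max(axis=0, keepdims=True))
weights /= weights.sum(axis=0, keepdims=True)     # each column sums to 1

# Context vector: every dimension gets its own weighted average.
context = (weights * annotations).sum(axis=0)     # (d,)
print(context.round(3))
```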
What Makes Different People's Representations Alike: Neural Similarity Space Solves the Problem of Across-subject fMRI Decoding
2012
Journal of Cognitive Neuroscience
However, the goal of being able to decode across subjects is still challenging: It has remained unclear what population-level regularities of neural representation there might be. ...
The key to finding this solution was questioning the seemingly obvious idea that neural decoding should work directly on neural activation patterns. ...
Across-subject decoding of fine-grained neural representations has therefore remained a challenge. ...
doi:10.1162/jocn_a_00189
pmid:22220728
fatcat:43an7wpfxnfd3o647nlvhzu3ha
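A sketch of the similarity-space idea these snippets point at, on invented data: rather than decoding raw activation patterns, re-code each subject's patterns as correlations with a shared stimulus set, giving a space in which a classifier trained on one subject can be tested on another. Labels, sizes, and classifier choice are placeholders, not the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_stim, n_vox = 40, 120
labels = np.arange(n_stim) % 2                    # invented 2-class labels

def subject_patterns(seed):
    # Same stimuli, but idiosyncratic voxel codes per subject.
    r = np.random.default_rng(seed)
    shared = np.repeat(labels[:, None], 10, axis=1).astype(float)
    mixing = r.standard_normal((10, n_vox))       # subject-specific mixing
    return shared @ mixing + r.standard_normal((n_stim, n_vox))

def to_similarity_space(patterns):
    # Row-wise correlations: each stimulus becomes its vector of
    # similarities to all stimuli, a space shared across subjects.
    z = patterns - patterns.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)
    return (z @ z.T) / patterns.shape[1]

subj_a, subj_b = subject_patterns(1), subject_patterns(2)
clf = LogisticRegression(max_iter=1000).fit(to_similarity_space(subj_a), labels)
print("across-subject accuracy:", clf.score(to_similarity_space(subj_b), labels))
```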
Innovative Deep Neural Network Modeling for Fine-grained Chinese Entity Recognition
2020
Electronics
feature extraction and information representation of deep neural models. ...
In this paper, we propose an innovative neural network model named En2BiLSTM-CRF to improve performance on fine-grained Chinese entity recognition tasks. ...
These enhanced representations and weights play an essential role in the process of fine-grained entity recognition; (3) We conducted extensive experiments on the latest fine-grained public dataset and ...
doi:10.3390/electronics9061001
fatcat:lzbl2gx3mzg3pgiwi4epukref4
Exploring the Representations of Individual Entities in the Brain Combining EEG and Distributional Semantics
2022
Frontiers in Artificial Intelligence
In the second set of analyses, we learn to decode from evoked responses to distributional word vectors. ...
... the referents of proper names such as Jacinda Ardern) is fine-grained, episodic, and strongly social in nature, when compared with knowledge about generic entities (the referents of common nouns such ...
been shown to perform well with neural data (Jat et al., 2019). ...
doi:10.3389/frai.2022.796793
pmid:35280237
pmcid:PMC8905499
doaj:cc9b484391c74bde98bb833975f0f872
fatcat:zxavy6w5qrarriglf4y2qp2wi4
Fine-Grained and Semantic-Guided Visual Attention for Image Captioning
2018
2018 IEEE Winter Conference on Applications of Computer Vision (WACV)
In this way, a mechanism of fine-grained and semantic-guided visual attention is created, which can better link the relevant visual information with each semantic meaning inside the text through LSTM. ...
Based on the end-to-end CNN-LSTM framework, it tries to link the relevant visual information on the image with the semantic representation in the text (i.e. captioning) for the first time. ...
To the best of our knowledge, our FCN-LSTM model is the first work to propose a novel attention mechanism that combines the grid-wise visual representation with the grid-wise semantic label at a fine-grained ...
doi:10.1109/wacv.2018.00190
dblp:conf/wacv/ZhangWWC18
fatcat:m2ofhqjsgbb6ddsof2mkbdd5ki
Improving End-to-End Contextual Speech Recognition with Fine-Grained Contextual Knowledge Selection
[article]
2022
arXiv
pre-print
In this work, we focus on mitigating confusion problems with fine-grained contextual knowledge selection (FineCoS). ...
In FineCoS, we introduce fine-grained knowledge to reduce the uncertainty of token predictions. ...
and meanwhile fully use fine-grained knowledge. ...
arXiv:2201.12806v2
fatcat:upitfw6jsnd4lbgb4bvp6dvpd4
Hierarchical Multi-Grained Generative Model for Expressive Speech Synthesis
[article]
2021
arXiv
pre-print
This paper proposes a hierarchical generative model with a multi-grained latent variable to synthesize expressive speech. ...
In recent years, fine-grained latent variables have been introduced into text-to-speech synthesis, enabling fine control of the prosody and speaking styles of synthesized speech. ...
These representations have a hierarchical linguistic dependency and correlate with the content of the text. These fine-grained representations also have temporal coherency. ...
arXiv:2009.08474v2
fatcat:cvdnfbhvwvb2tlp5nmzjpgxd4y
Hierarchical Multi-Grained Generative Model for Expressive Speech Synthesis
2020
Interspeech 2020
This paper proposes a hierarchical generative model with a multi-grained latent variable to synthesize expressive speech. ...
In recent years, fine-grained latent variables have been introduced into text-to-speech synthesis, enabling fine control of the prosody and speaking styles of synthesized speech. ...
These representations have a hierarchical linguistic dependency and correlate with the content of the text. These fine-grained representations also have temporal coherency. ...
doi:10.21437/interspeech.2020-2477
dblp:conf/interspeech/HonoTSHONT20
fatcat:gwmaqc6fmrfg7krx2azwwr4qiq
Focus-Constrained Attention Mechanism for CVAE-based Response Generation
[article]
2020
arXiv
pre-print
To tackle this, our idea is to transform the coarse-grained discourse-level information into fine-grained word-level information. ...
Specifically, we first measure the semantic concentration of the corresponding target response on the post words by introducing a fine-grained focus signal. ...
This focus captures to what extent the response semantics is related to the post words, which will serve as fine-grained signals for the decoder. ...
arXiv:2009.12102v1
fatcat:ekdbsoju25g6zewgrldshhtyla
Layer-Wise Multi-View Decoding for Improved Natural Language Generation
[article]
2022
arXiv
pre-print
In this work, we propose layer-wise multi-view decoding, where for each decoder layer, together with the representations from the last encoder layer, which serve as a global view, those from other encoder ...
... natural language generation, the decoder relies on the attention mechanism to efficiently extract information from the encoder. ...
Following See et al. (2017), we truncate each source sentence to 400 words and each target sentence to 100 words. ROUGE-1, -2 and ...
arXiv:2005.08081v6
fatcat:ozcvveewvva2dgiyul6riu2fru
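A toy sketch of the multi-view idea in this record, under the assumption that each decoder layer fuses attention over the last encoder layer (the global view) with attention over a layer-matched earlier encoder layer; the dot-product attention and the fusion rule are placeholders, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, src_len, d = 4, 5, 8
enc_layers = rng.standard_normal((n_layers, src_len, d))  # all encoder outputs

def attend(query, memory):
    # Plain scaled dot-product attention over one memory.
    scores = memory @ query / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ memory

state = rng.standard_normal(d)                    # a decoder-layer state
for layer in range(n_layers):                     # one decoder layer per encoder view
    global_ctx = attend(state, enc_layers[-1])    # global view: top encoder layer
    aux_ctx = attend(state, enc_layers[layer])    # auxiliary layer-matched view
    state = state + 0.5 * (global_ctx + aux_ctx)  # placeholder fusion rule
print(state.round(3))
```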
A Survey on Deep Learning for Named Entity Recognition
[article]
2020
arXiv
pre-print
Then, we systematically categorize existing works based on a taxonomy along three axes: distributed representations for input, context encoder, and tag decoder. ...
Finally, we present readers with the challenges faced by NER systems and outline future directions in this area. ...
Fine-grained NER and Boundary Detection. ...
arXiv:1812.09449v3
fatcat:36tnstbyo5h4xizjpqn4cevgui
CopyCat2: A Single Model for Multi-Speaker TTS and Many-to-Many Fine-Grained Prosody Transfer
[article]
2022
arXiv
pre-print
In Stage I, the model learns speaker-independent word-level prosody representations from speech which it uses for many-to-many fine-grained prosody transfer. ...
We compare CC2 to two strong baselines, one in TTS with contextually appropriate prosody, and one in fine-grained prosody transfer. ...
We hypothesise that training on a multi-speaker dataset at the word level helped produce denser, fine-grained acoustic and duration prosody representations. ...
arXiv:2206.13443v1
fatcat:t47abhn3ifbsvkwlhbsmwrdipu
Lessons From Deep Neural Networks for Studying the Coding Principles of Biological Neural Networks
2021
Frontiers in Systems Neuroscience
... relevant/irrelevant features or overestimating the network feature representation/noise correlation. ...
Additionally, we present studies investigating neural coding principles in biological neural networks to which our points can be applied. ...
model) and the actual situation (fine-grained representation model). ...
doi:10.3389/fnsys.2020.615129
pmid:33519390
pmcid:PMC7843526
fatcat:4rvgny3irnhuxn6qha3t66e5h4