14,573 Hits in 5.3 sec

Co-embedding of Nodes and Edges with Graph Neural Networks [article]

Xiaodong Jiang, Ronghang Zhu, Pengsheng Ji, Sheng Li
<span title="2020-10-25">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
To address this problem, we present CensNet, Convolution with Edge-Node Switching graph neural network, for learning tasks in graph-structured data with both node and edge features.  ...  CensNet is a general graph embedding framework, which embeds both nodes and edges to a latent feature space.  ...  The main contributions of this work are summarized as follows. • Co-embedding of nodes and edges.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.13242v1">arXiv:2010.13242v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/a4hrwguabzbcblu4zo5gx7rrhq">fatcat:a4hrwguabzbcblu4zo5gx7rrhq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201030201319/https://arxiv.org/pdf/2010.13242v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c5/66/c5661e4d63c2d1145efa3d32246238f25256d982.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.13242v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Graph Convolutional Networks for Text Classification

Liang Yao, Chengsheng Mao, Yuan Luo
<span title="2019-07-17">2019</span> <i title="Association for the Advancement of Artificial Intelligence (AAAI)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/wtjcymhabjantmdtuptkk62mlq" style="color: black;">PROCEEDINGS OF THE THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND THE TWENTY-EIGHTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE</a> </i> &nbsp;
We build a single text graph for a corpus based on word co-occurrence and document word relations, then learn a Text Graph Convolutional Network (Text GCN) for the corpus.  ...  However, only a limited number of studies have explored the more flexible graph convolutional neural networks (convolution on non-grid, e.g., arbitrary graph) for the task.  ...  Method Graph Convolutional Networks (GCN) A GCN (Kipf and Welling 2017) is a multilayer neural network that operates directly on a graph and induces embedding vectors of nodes based on properties of their  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1609/aaai.v33i01.33017370">doi:10.1609/aaai.v33i01.33017370</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/obbkbmeqkvf73k4keg5a6w6hgq">fatcat:obbkbmeqkvf73k4keg5a6w6hgq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200306222733/https://wvvw.aaai.org/ojs/index.php/AAAI/article/download/4725/4603" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/48/f1/48f14e1cdb35d379cb4a034186c571251c9ae132.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1609/aaai.v33i01.33017370"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

TempNodeEmb:Temporal Node Embedding considering temporal edge influence matrix [article]

Khushnood Abbas, Alireza Abbasi, Dong Shi, Niu Ling, Mingsheng Shang, Chen Liong, Bolun Chen
<span title="2020-08-16">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This gap motivated us to propose in this research a new node embedding technique which exploits the evolving nature of the networks considering a simple three-layer graph neural network at each time step  ...  To overcome this problem, automated frameworks are proposed for learning low-dimensional vectors for nodes or edges, as state-of-the-art techniques in predicting temporal patterns in networks such as link  ...  This has led to the development of deep neural network based approaches to learn node/edge level features [3].  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.06940v1">arXiv:2008.06940v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/zee2vxnudrgklbquyhh6k6rui4">fatcat:zee2vxnudrgklbquyhh6k6rui4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200915113043/https://arxiv.org/pdf/2008.06940v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/69/50/6950052186823c84afaa50dddbd6682929172a6c.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.06940v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Graph Convolutional Networks for Text Classification [article]

Liang Yao, Chengsheng Mao, Yuan Luo
<span title="2018-11-13">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We build a single text graph for a corpus based on word co-occurrence and document word relations, then learn a Text Graph Convolutional Network (Text GCN) for the corpus.  ...  However, only a limited number of studies have explored the more flexible graph convolutional neural networks (convolution on non-grid, e.g., arbitrary graph) for the task.  ...  Method Graph Convolutional Networks (GCN) A GCN (Kipf and Welling 2017) is a multilayer neural network that operates directly on a graph and induces embedding vectors of nodes based on properties of their  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1809.05679v3">arXiv:1809.05679v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vowvwvcojzfdxakiipmnxzzwzm">fatcat:vowvwvcojzfdxakiipmnxzzwzm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191025213542/https://arxiv.org/pdf/1809.05679v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/82/e1/82e1e83afedcd2dbb03421061299849da97af562.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1809.05679v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Topology and Content Co-Alignment Graph Convolutional Learning [article]

Min Shi, Yufei Tang, Xingquan Zhu
<span title="2020-03-28">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In traditional Graph Neural Networks (GNN), graph convolutional learning is carried out through topology-driven recursive node content aggregation for network representation learning.  ...  Given a network, CoGL first reconstructs a content network from node features, then co-aligns the content network and the original network through a unified optimization goal with (1) minimized content loss  ...  RELATED WORK Given a network with edge connections and content (features) associated to each node, graph embedding learns a low-dimensional vector for each node to preserve node content and network topology  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.12806v1">arXiv:2003.12806v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/qg7mch3fgbbm3pqp7mj4fm3eru">fatcat:qg7mch3fgbbm3pqp7mj4fm3eru</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200408101133/https://arxiv.org/pdf/2003.12806v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.12806v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs [article]

Da Zheng, Chao Ma, Minjie Wang, Jinjing Zhou, Qidong Su, Xiang Song, Quan Gan, Zheng Zhang, George Karypis
<span title="2021-08-02">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In these domains, the graphs are typically large, containing hundreds of millions of nodes and several billions of edges.  ...  Graph neural networks (GNN) have shown great success in learning from graph-structured data. They are widely used in various applications, such as recommendation, fraud detection, and search.  ...  Graph Neural Networks GNNs emerge as a family of neural networks capable of learning a joint representation from both the graph structure and vertex/edge features.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.05337v3">arXiv:2010.05337v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6xvbvgvyczgsdhyfwolyhpmvwi">fatcat:6xvbvgvyczgsdhyfwolyhpmvwi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210805144940/https://arxiv.org/pdf/2010.05337v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/03/7d/037df1500b9b8d4a57455b7ad205f86cc94a0b13.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.05337v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Line Graph Neural Networks for Link Prediction [article]

Lei Cai and Jundong Li and Jie Wang and Shuiwang Ji
<span title="2020-10-20">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In particular, each node in a line graph corresponds to a unique edge in the original graph.  ...  With the advances of deep learning, current link prediction methods commonly compute features from subgraphs centered at two neighboring nodes and use the features to predict the label of the link between  ...  In this work, we employ graph convolution neural networks to learn the node embedding in the line graph, which can represent an edge in the original graph.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.10046v1">arXiv:2010.10046v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/4dkyqwhktrhqlokbnr6xyhjtxy">fatcat:4dkyqwhktrhqlokbnr6xyhjtxy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201023002942/https://arxiv.org/pdf/2010.10046v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/ad/72/ad720ec891fd143090de390a2e800eb8e9e6eb4d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.10046v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Node Co-occurrence based Graph Neural Networks for Knowledge Graph Link Prediction [article]

Dai Quoc Nguyen and Vinh Tong and Dinh Phung and Dat Quoc Nguyen
<span title="2021-12-26">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
NoGE then computes weights for edges among nodes based on the co-occurrence of entities and relations.  ...  We introduce a novel embedding model, named NoGE, which aims to integrate co-occurrence among entities and relations into graph neural networks to improve knowledge graph completion (i.e., link prediction  ...  Conclusion We have presented a novel model NoGE to integrate co-occurrence among entities and relations into graph neural networks for knowledge graph completion (i.e., link prediction).  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.07396v3">arXiv:2104.07396v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/mkiifuqcfzcpzek4ui557ilhci">fatcat:mkiifuqcfzcpzek4ui557ilhci</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210714224819/https://arxiv.org/pdf/2104.07396v2.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f4/24/f424515516d1e3f64a7c92452c1c0c5072491e8f.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.07396v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Tutorial on Network Embeddings [article]

Haochen Chen, Bryan Perozzi, Rami Al-Rfou, Steven Skiena
<span title="2018-08-08">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We further demonstrate the applications of network embeddings, and conclude the survey with future work in this area.  ...  Network embedding methods aim at learning low-dimensional latent representation of nodes in a network.  ...  Signed Graph Embeddings Recall that in a signed graph, an edge with weight of 1 denotes a positive link between nodes, whereas an edge with weight of -1 denotes a negative link.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1808.02590v1">arXiv:1808.02590v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ramuqdavczfabb4o7r42kice7q">fatcat:ramuqdavczfabb4o7r42kice7q</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191015145241/https://arxiv.org/pdf/1808.02590v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/17/06/1706a4ef5556ecdd680416d46e033e0476290361.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1808.02590v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Temporal network embedding using graph attention network

Anuraj Mohan, K V Pramod
<span title="2021-03-30">2021</span> <i title="Springer Science and Business Media LLC"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/nnhntxrl5vg2zjrahf7teas5oy" style="color: black;">Complex &amp; Intelligent Systems</a> </i> &nbsp;
Finally, we conduct link prediction experiments by designing a TempGAN autoencoder to evaluate the quality of the embedding generated, and the results are compared with other state-of-the-art methods.  ...  Furthermore, we design a TempGAN architecture which uses both adjacency and PPMI information to generate node embeddings from temporal network.  ...  Graph convolutional network (GCN) [13] GCN is a variant of graph neural network which applies a graph convolution at each node to perform the propagation and aggregation of node features from neighbouring  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s40747-021-00332-x">doi:10.1007/s40747-021-00332-x</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/jy6q2meccnbqvjhhxkdlmmhnmm">fatcat:jy6q2meccnbqvjhhxkdlmmhnmm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210717224431/https://link.springer.com/content/pdf/10.1007/s40747-021-00332-x.pdf?error=cookies_not_supported&amp;code=94dcb610-bc95-486d-8dd9-e134dd759db2" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/a0/65/a0653b8b8eab6eea5c5d6f609b77cd70dd589686.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s40747-021-00332-x"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> springer.com </button> </a>

Neural Stochastic Block Model Scalable Community-Based Graph Learning [article]

Zheng Chen, Xinli Yu, Yuan Ling, Xiaohua Hu
<span title="2020-05-16">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
For example, 1) the GAT+, an improved design of GAT (Graph Attention Network), the scaled-cosine similarity, and a unified implementation of the convolution/attention based and the random-walk based neural  ...  Compared with SBM, our framework is flexible, naturally allows soft labels and digestion of complex node attributes.  ...  Then we want to design a neural network $f$ such that $\pi_v = f(v \mid X, E)$ (2); that is, the neural network is a function that assigns a mixed membership $\pi_v$ to each node $v$ given the node features $X$ and the edges $E$.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.07855v1">arXiv:2005.07855v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6svanl223vgwfmu2on23c3mi3q">fatcat:6svanl223vgwfmu2on23c3mi3q</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200527201421/https://arxiv.org/ftp/arxiv/papers/2005/2005.07855.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.07855v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Graph Neural Networks: Methods, Applications, and Opportunities [article]

Lilapati Waikhom, Ripon Patgiri
<span title="2021-09-08">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This article provides a comprehensive survey of graph neural networks (GNNs) in each learning setting: supervised, unsupervised, semi-supervised, and self-supervised learning.  ...  A taxonomy of each graph-based learning setting is provided, with logical divisions of the methods falling in the given learning setting.  ...  in the neural network for nodes and edges, respectively.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.10733v2">arXiv:2108.10733v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/j3rfmkiwenebvmfyboasjmx4nu">fatcat:j3rfmkiwenebvmfyboasjmx4nu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210925123909/https://arxiv.org/pdf/2108.10733v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/8c/24/8c24ef76d0a15bae316d1b9e6ab526ea5af93530.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.10733v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Fast Sequence-Based Embedding with Diffusion Graphs [article]

Benedek Rozemberczki, Rik Sarkar
<span title="2020-01-21">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Vertex sequence-based embedding procedures use features extracted from linear sequences of nodes to create embeddings using a neural network.  ...  A graph embedding is a representation of graph vertices in a low-dimensional space, which approximately preserves properties such as distances between nodes.  ...  A schematic of the neural network architecture is in Figure 2(b). The neural network has d hidden neurons, each with |V| inputs and 2·w·|V| outputs.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2001.07463v1">arXiv:2001.07463v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ngjsvc3s3fezfndhuadcl5hzoa">fatcat:ngjsvc3s3fezfndhuadcl5hzoa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200828083843/https://arxiv.org/pdf/2001.07463v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e8/ff/e8ffa7fc85a5d266ed6cd11cf8fff3434783de1f.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2001.07463v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Graph-based Neural Multi-Document Summarization [article]

Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, Dragomir Radev
<span title="2017-08-23">2017</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We employ a Graph Convolutional Network (GCN) on the relation graphs, with sentence embeddings obtained from Recurrent Neural Networks as input node features.  ...  In our experiments on DUC 2004, we consider three types of sentence relation graphs and demonstrate the advantage of combining sentence relations in graphs with the representation power of deep neural  ...  Any opinions, findings, conclusions, or recommendations expressed herein are those of the authors and do not necessarily reflect the views of IBM.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1706.06681v3">arXiv:1706.06681v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/nq6s6fmdpzetnhtqbws7c6depm">fatcat:nq6s6fmdpzetnhtqbws7c6depm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191025025516/https://arxiv.org/pdf/1706.06681v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/57/bb/57bb5c8cfe1a330866fc23b7d951cb85eea5cf89.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1706.06681v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

GraphTTS: graph-to-sequence modelling in neural text-to-speech [article]

Aolan Sun, Jianzong Wang, Ning Cheng, Huayi Peng, Zhen Zeng, Jing Xiao
<span title="2020-03-04">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This paper leverages the graph-to-sequence method in neural text-to-speech (GraphTTS), which maps the graph embedding of the input sequence to spectrograms.  ...  The graphical inputs consist of node and edge representations constructed from input texts. The encoding of these graphical inputs incorporates syntax information by a GNN encoder module.  ...  Table 1, where E in the edge embedding denotes the number of edges of the graph, 2 denotes the two node types (source and target nodes), and 3 denotes the number of edge types.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.01924v1">arXiv:2003.01924v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/tp5hxdhp3fawfkpyn4244ssxce">fatcat:tp5hxdhp3fawfkpyn4244ssxce</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200321170128/https://arxiv.org/pdf/2003.01924v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.01924v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>
Showing results 1–15 out of 14,573 results