A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL. The file type is application/pdf.
Mutually-aware Sub-Graphs Differentiable Architecture Search
[article]
2021
arXiv
pre-print
In this paper, we propose a conceptually simple yet efficient method to bridge these two paradigms, referred to as Mutually-aware Sub-Graphs Differentiable Architecture Search (MSG-DAS). ...
The core of our framework is a differentiable Gumbel-TopK sampler that produces multiple mutually exclusive single-path sub-graphs. ...
In this work, we study both multi-path (e.g., DARTS) and single-path approaches (e.g., GDAS), and propose Mutually-aware Sub-Graphs Differentiable Architecture Search (MSG-DAS), which is able to train multiple ...
arXiv:2107.04324v3
fatcat:xi47dor6ljdp3iygqhvgcvucwq
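The Gumbel-TopK sampling idea named in this abstract can be sketched as follows; all function names and shapes here are illustrative assumptions, not the authors' code, and the real method carries gradients through a differentiable relaxation rather than plain numpy:

```python
import numpy as np

def gumbel_topk(logits, k, rng):
    """Sample k distinct indices via the Gumbel-top-k trick.

    Adding i.i.d. Gumbel(0, 1) noise to the logits and taking the
    top-k indices draws k items without replacement from the softmax
    distribution, so the k choices are mutually exclusive.
    """
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    return np.argsort(logits + gumbel)[::-1][:k]

def sample_subgraphs(edge_logits, k, seed=0):
    """For each DAG edge, pick k distinct candidate operations.

    Returns an array of shape (num_edges, k): column j gives the one
    operation chosen per edge for sub-graph j, so the k single-path
    sub-graphs never share an operation on the same edge.
    """
    rng = np.random.default_rng(seed)
    return np.stack([gumbel_topk(row, k, rng) for row in edge_logits])

# Toy search space: 4 edges, 5 candidate ops, 3 sub-graphs.
choices = sample_subgraphs(np.zeros((4, 5)), k=3)
assert choices.shape == (4, 3)
# Mutual exclusivity: the 3 ops chosen on any edge are all distinct.
assert all(len(set(int(x) for x in row)) == 3 for row in choices)
```

In the paper's setting the distinctness per edge is what makes the sampled single-path sub-graphs "mutually aware": each sees operations the others do not.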
Global Context Enhanced Social Recommendation with Hierarchical Graph Neural Networks
[article]
2021
arXiv
pre-print
In particular, we first design a relation-aware reconstructed graph neural network to inject the cross-type collaborative semantics into the recommendation framework. ...
In addition, we further augment SR-HGNN with a social relation encoder based on the mutual information learning paradigm between low-level user embeddings and high-level global representation, which endows ...
augment the user representation process under a global graph-structured mutual information maximization architecture. ...
arXiv:2110.04039v1
fatcat:txaqxvdtozg4vdqqwitdfncb7u
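The mutual-information objective between low-level user embeddings and a high-level global representation, as described in this abstract, is commonly implemented with a Deep-Graph-Infomax-style bilinear discriminator. A minimal numpy sketch, where every name and shape is an assumption rather than the paper's code:

```python
import numpy as np

def mi_discriminator_loss(node_emb, graph_emb, corrupt_emb, W):
    """Binary-cross-entropy MI lower bound (Deep Graph Infomax style).

    Scores each (local node embedding, global summary) pair with a
    bilinear form; true pairs are pushed toward 1, pairs built from
    corrupted (shuffled) node embeddings toward 0.
    """
    def score(h):
        return 1.0 / (1.0 + np.exp(-(h @ W @ graph_emb)))
    pos = score(node_emb)      # scores for true pairs, shape (n,)
    neg = score(corrupt_emb)   # scores for corrupted pairs, shape (n,)
    eps = 1e-12
    return -np.mean(np.log(pos + eps) + np.log(1.0 - neg + eps))

rng = np.random.default_rng(0)
nodes = rng.normal(size=(8, 4))        # low-level user embeddings
summary = nodes.mean(axis=0)           # high-level global readout
shuffled = nodes[rng.permutation(8)]   # corruption: shuffle rows
loss = mi_discriminator_loss(nodes, summary, shuffled, np.eye(4))
assert np.isfinite(loss) and loss > 0
```

Minimizing this loss maximizes a lower bound on the mutual information between local and global views, which is the "mutual information learning paradigm" the snippet refers to.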
DiffMG: Differentiable Meta Graph Search for Heterogeneous Graph Neural Networks
[article]
2021
arXiv
pre-print
We formalize the problem within the framework of neural architecture search (NAS) and then perform the search in a differentiable manner. ...
Specifically, we search for a meta graph, which can capture more complex semantic relations than a meta path, to determine how graph neural networks (GNNs) propagate messages along different types of edges ...
learn by attention; GEMS [11]: meta graph, genetic search; DiffMG: meta graph, differentiable search ... graphs between source node types and target node types for recommendation, but it is too slow to fully ...
arXiv:2010.03250v2
fatcat:iwoqqcoavrfm3aq6o3gldm66my
Subtractive Perceptrons for Learning Images: A Preliminary Report
[article]
2019
arXiv
pre-print
In this preliminary work, some ideas are proposed to define a subtractive Perceptron (s-Perceptron), a graph-based neural network that delivers a more compact topology to learn one specific task. ...
by progressively making the sub-graphs smaller and smaller until |V| |V|, 2) the sub-graphs G are selected randomly, and 3) the adjustments are graph-wise, meaning that we choose between the sub-graph ...
The most apparent reason is perhaps that training graphs whose neurons have mutual dependencies may be very difficult. ...
arXiv:1909.12933v1
fatcat:6em5arhilbay7fbuz5k4yp6lei
Creating Customized CGRAs for Scientific Applications
2021
Electronics
We offer analysis metrics from various scientific applications and tailor the results that are to be used by MC-Def, a novel Mixed-CGRA Definition Framework targeting a Mixed-CGRA architecture that leverages ...
A graph is only extracted if it exceeds a frequency and a resource utilization threshold, thus limiting the search space to sub-graphs that have high occurrence frequency and use the most hardware resources ...
On the other hand, by broadening the search space in lower frequency of appearance, we were able to discover more complex constructs and sub-graphs that utilize more resources. ...
doi:10.3390/electronics10040445
fatcat:uc33xobotvg45cpf6xsqi4t4hy
Contrastive Meta Learning with Behavior Multiplicity for Recommendation
[article]
2022
arXiv
pre-print
The behavior-aware graph neural architecture with multi-behavior self-supervision brings benefits to heterogeneous relational learning for recommendation. ...
We search the batch size for our meta contrastive network (meta batch) and the graph neural architecture (train batch) from the range of {128, 256, 512, 1024, 2048} and {256, 512, 1024, 2048, 4096}, respectively ...
arXiv:2202.08523v1
fatcat:mvx3u5hxkvh2hekxisptmupyp4
SSAP: Single-Shot Instance Segmentation With Affinity Pyramid
[article]
2019
arXiv
pre-print
The mutual benefits between the two complementary sub-tasks are also unexplored. ...
Generally, proposal-free methods generate instance-agnostic semantic segmentation labels and instance-aware features to group pixels into different object instances. ...
In fact, the mutual benefits between the two sub-tasks can be exploited, which will further improve the performance of instance segmentation. ...
arXiv:1909.01616v1
fatcat:nvvdv3cre5havgscli4mryalxy
Knowledge Infused Learning (K-IL): Towards Deep Incorporation of Knowledge in Deep Learning
[article]
2020
arXiv
pre-print
Learning the underlying patterns in data goes beyond instance-based generalization to external knowledge represented in structured graphs or networks. ...
) as well as in building explainable AI systems for which knowledge graphs will provide scaffolding for punctuating neural computing. ...
Acknowledgement We acknowledge partial support from the National Science Foundation (NSF) award CNS-1513721: "Context-Aware Harassment Detection on Social Media". ...
arXiv:1912.00512v2
fatcat:xd6ttudnmbgbdpcjspgponaio4
Semi-Supervised Deep Learning for Multiplex Networks
[article]
2021
arXiv
pre-print
Our approach relies on maximizing the mutual information between local node-wise patch representations and label correlated structure-aware global graph representations to model the nodes and cluster structures ...
Empirically, we demonstrate that the proposed architecture outperforms state-of-the-art methods in a range of tasks: classification, clustering, visualization, and similarity search on seven real-world ...
In similarity search, however, SSDCM stands out as the best-performing model among all. ...
arXiv:2110.02038v1
fatcat:koof45ms6fbrpaz6izecoswroi
TF-NAS: Rethinking Three Search Freedoms of Latency-Constrained Differentiable Neural Architecture Search
[article]
2020
arXiv
pre-print
With the flourish of differentiable neural architecture search (NAS), automatically searching latency-constrained architectures gives a new perspective to reduce human labor and expertise. ...
For the depth-level, we introduce a sink-connecting search space to ensure the mutual exclusion between skip and other candidate operations, as well as eliminate the architecture redundancy. ...
Following gradient based optimization in DARTS, GDAS [12] is proposed to sample one sub-graph from the whole directed acyclic graph (DAG) in one iteration, accelerating the search procedure. ...
arXiv:2008.05314v1
fatcat:kjb2353t3jhb7f62m57zdo4sjy
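The GDAS-style sampling mentioned in this abstract, where one sub-graph is drawn from the whole DAG per iteration, amounts to a hard Gumbel-softmax choice per edge. A sketch under the assumption of a single edge with a few candidate operations (the straight-through gradient of the real method is only noted in a comment):

```python
import numpy as np

def gdas_edge_sample(alpha, tau, rng):
    """Sample a one-hot operation choice for one DAG edge (GDAS-style).

    The forward pass uses the hard arg-max of Gumbel-perturbed logits,
    so only one candidate op per edge is evaluated; in the real method
    a straight-through softmax(tau) estimator carries the gradient.
    """
    gumbel = -np.log(-np.log(rng.uniform(size=alpha.shape)))
    soft = np.exp((alpha + gumbel) / tau)
    soft /= soft.sum()                # relaxed distribution over ops
    hard = np.zeros_like(soft)
    hard[np.argmax(soft)] = 1.0       # exactly one op survives per edge
    return hard, soft

rng = np.random.default_rng(0)
hard, soft = gdas_edge_sample(np.array([0.1, 1.5, -0.3]), tau=1.0, rng=rng)
assert hard.sum() == 1.0 and np.isclose(soft.sum(), 1.0)
```

Applying this per edge yields exactly one single-path sub-graph per iteration, which is what accelerates the search relative to evaluating the full DARTS mixture.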
Unsupervised Graph Representation by Periphery and Hierarchical Information Maximization
[article]
2020
arXiv
pre-print
For this purpose, we combine the idea of hierarchical graph neural networks and mutual information maximization into a single framework. ...
However, for the entire graph representation, most of the existing graph neural networks are trained on a graph classification loss in a supervised way. ...
We differentiate GraPHmax from the three key related works. DIFFPOOL [46] : DIFFPOOL proposes a GNN based supervised hierarchical graph classification architecture. ...
arXiv:2006.04696v1
fatcat:ihbpx22jfvhx5pzhajeznyytyq
Graph Meta Network for Multi-Behavior Recommendation
[article]
2021
arXiv
pre-print
To tackle the above challenges, we propose a Multi-Behavior recommendation framework with Graph Meta Network to incorporate the multi-behavior pattern modeling into a meta-learning paradigm. ...
There are three key sub-modules in the meta graph neural network, i.e., i) Behavior semantic encoding; ii) Behavior mutual dependency learning; iii) High-order multi-behavioral context aggregation. ...
[41] learns price-aware purchase intent of users with a GCN-based architecture. ...
arXiv:2110.03969v1
fatcat:w4mrfiyvabgsvhcu7jzototitu
Heterogeneous Graph Transformer
[article]
2020
arXiv
pre-print
In this paper, we present the Heterogeneous Graph Transformer (HGT) architecture for modeling Web-scale heterogeneous graphs. ...
To handle Web-scale graph data, we design the heterogeneous mini-batch graph sampling algorithm---HGSampling---for efficient and scalable training. ...
CONCLUSION In this paper, we propose the Heterogeneous Graph Transformer (HGT) architecture for modeling Web-scale heterogeneous and dynamic graphs. ...
arXiv:2003.01332v1
fatcat:cqxioa7gi5aqtmyzwemntwgt3e
Homogeneous and Heterogeneous Relational Graph for Visible-infrared Person Re-identification
[article]
2021
arXiv
pre-print
with a routing search strategy. ...
CMCC computes the mutual information between modalities and expels semantic redundancy. ...
Wu [30] exploited pose alignment connections and feature similarity connections to construct adaptive structure-aware neighborhood graphs. ...
arXiv:2109.08811v2
fatcat:uczpqlaxufgnpk6thqtz5ntygm
Link Scheduling using Graph Neural Networks
[article]
2022
arXiv
pre-print
Our centralized heuristic is based on tree search guided by a GCN and 1-step rollout. ...
Moreover, a novel reinforcement learning scheme is developed to train the GCN in a non-differentiable pipeline. ...
Fig. 1: Architecture of GCN-based centralized MWIS solvers. (a) Iterative framework of GCN-guided tree search. ...
arXiv:2109.05536v2
fatcat:oiuvxmqqtba6zbfpqbbyvr5bqu
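The maximum-weight independent set (MWIS) problem that this entry's GCN-guided tree search targets can be illustrated with a plain greedy baseline; here the raw link weights stand in for the learned per-node scores, which is an assumption for illustration, not the paper's method:

```python
import numpy as np

def greedy_mwis(weights, adj):
    """Greedy maximum-weight independent set over an adjacency matrix.

    Stand-in for the GCN-guided search: nodes are admitted in
    descending score order, and each admitted node excludes all of
    its neighbours in the conflict graph.
    """
    in_set = np.zeros(len(weights), dtype=bool)
    excluded = np.zeros(len(weights), dtype=bool)
    for v in np.argsort(weights)[::-1]:      # heaviest link first
        if not excluded[v]:
            in_set[v] = True
            excluded |= adj[v].astype(bool)  # neighbours now conflict
    return np.flatnonzero(in_set)

# 3-link path conflict graph: link 1 interferes with links 0 and 2.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
result = greedy_mwis(np.array([1.0, 3.0, 1.0]), adj)
assert list(result) == [1]   # the heavy middle link is scheduled alone
```

The paper's contribution is to replace the raw weights with GCN-produced scores and to wrap the greedy step in a tree search with 1-step rollout; the greedy core above is the part that stays non-differentiable, hence the reinforcement learning training scheme.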
Showing results 1 — 15 out of 9,597 results