SpeqNets: Sparsity-aware Permutation-equivariant Graph Networks [article]

Christopher Morris, Gaurav Rattan, Sandra Kiefer, Siamak Ravanbakhsh
2022, arXiv preprint
While (message-passing) graph neural networks have clear limitations in approximating permutation-equivariant functions over graphs or general relational data, more expressive, higher-order graph neural networks do not scale to large graphs. They either operate on k-order tensors or consider all k-node subgraphs, implying an exponential dependence on k in memory requirements, and do not adapt to the sparsity of the graph. By introducing new heuristics for the graph isomorphism problem, we devise a class of universal, permutation-equivariant graph networks, which, unlike previous architectures, offer fine-grained control between expressivity and scalability and adapt to the sparsity of the graph. These architectures lead to vastly reduced computation times compared to standard higher-order graph networks in the supervised node- and graph-level classification and regression regime, while significantly improving on standard graph neural network and graph kernel architectures in terms of predictive performance.
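To make the memory argument concrete, the following minimal Python sketch (not the authors' implementation; function names and the edge-list representation are illustrative) contrasts the number of states a dense 2-order network must track, namely all ordered node pairs, with the number needed by a sparsity-aware variant that only keeps 2-tuples whose nodes coincide or are adjacent. On a sparse graph the latter grows with the number of edges rather than quadratically with the number of nodes.

```python
# Minimal sketch: dense vs. sparsity-aware 2-tuple enumeration.
# This illustrates the scaling argument only; it is not the paper's code.
from itertools import product


def all_2_tuples(num_nodes):
    """Dense 2-order view: every ordered pair of nodes, O(n^2) states."""
    return [(u, v) for u, v in product(range(num_nodes), repeat=2)]


def connected_2_tuples(num_nodes, edges):
    """Sparsity-aware view: only pairs (u, v) with u == v or {u, v} an edge."""
    adj = {u: set() for u in range(num_nodes)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    tuples = [(u, u) for u in range(num_nodes)]          # diagonal tuples
    tuples += [(u, v) for u in range(num_nodes) for v in adj[u]]  # edge tuples
    return tuples


if __name__ == "__main__":
    # A sparse ring graph on 1000 nodes: 1000 undirected edges.
    n = 1000
    ring = [(i, (i + 1) % n) for i in range(n)]
    print(len(all_2_tuples(n)))              # 1,000,000 states (dense)
    print(len(connected_2_tuples(n, ring)))  # 3,000 states (sparsity-aware)
```

The same idea extends to k-tuples: restricting attention to tuples whose nodes lie close together in the graph keeps the state space proportional to the graph's sparsity instead of n^k.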
arXiv:2203.13913v3 fatcat:bazb6yobm5cfdmiicex7r43qba