A copy of this work from the public web has been preserved in the Wayback Machine; the capture dates from 2021. The original URL remains accessible.
File type: application/pdf
Compressing deep graph convolution network with multi-staged knowledge distillation
2021
PLoS ONE
Given a trained deep graph convolution network (GCN), how can we effectively compress it into a compact network without significant loss of accuracy? Compressing a trained deep GCN into a compact GCN is of great importance for deploying the model in environments with limited computing resources, such as mobile or embedded systems. However, previous works on compressing deep GCNs do not consider the multi-hop aggregation of deep GCNs, even though that aggregation is the main purpose of their multiple GCN layers.
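The abstract's central concern, preserving the multi-hop aggregation that a teacher's stacked layers provide while compressing the model, can be made concrete with a small sketch. The following is an illustrative PyTorch example, not the paper's MustaD implementation: it distills a deep GCN teacher, whose layers follow the standard propagation H^(l+1) = ReLU(Â H^(l) W^(l)), into a single-layer student that keeps a K-hop receptive field by multiplying features by Â K times before its one weight matrix. All class names, the plain soft-label distillation loss (Hinton-style), and the hyperparameters are assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the MustaD method from the paper.
# Assumptions: dense normalized adjacency a_hat, Hinton-style soft-label KD,
# and a student that applies K hops of propagation before one linear layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution: H' = A_hat @ H @ W."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, h):
        return self.lin(a_hat @ h)

class DeepGCN(nn.Module):
    """Teacher: L stacked layers, so each node aggregates L-hop information."""
    def __init__(self, in_dim, hid_dim, out_dim, num_layers):
        super().__init__()
        dims = [in_dim] + [hid_dim] * (num_layers - 1) + [out_dim]
        self.layers = nn.ModuleList(
            GCNLayer(d_in, d_out) for d_in, d_out in zip(dims, dims[1:])
        )

    def forward(self, a_hat, x):
        h = x
        for i, layer in enumerate(self.layers):
            h = layer(a_hat, h)
            if i < len(self.layers) - 1:  # no activation on the logits
                h = F.relu(h)
        return h

class CompactGCN(nn.Module):
    """Student: one weight matrix applied to A_hat^K @ X, keeping the
    teacher's K-hop receptive field with far fewer parameters."""
    def __init__(self, in_dim, out_dim, num_hops):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)
        self.num_hops = num_hops

    def forward(self, a_hat, x):
        h = x
        for _ in range(self.num_hops):  # A_hat^K @ X; precomputable offline
            h = a_hat @ h
        return self.lin(h)

def kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Soft-label distillation plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage on a random graph (all sizes are arbitrary).
N, F_IN, C, K = 6, 8, 3, 4
a_hat = torch.rand(N, N)
a_hat = (a_hat + a_hat.T) / 2      # symmetric stand-in for a normalized adjacency
x, y = torch.rand(N, F_IN), torch.randint(0, C, (N,))
teacher, student = DeepGCN(F_IN, 16, C, K), CompactGCN(F_IN, C, K)
with torch.no_grad():
    t_logits = teacher(a_hat, x)   # frozen teacher predictions
loss = kd_loss(student(a_hat, x), t_logits, y)
loss.backward()                    # gradients flow only into the student
```

Since Â^K X does not depend on the student's weights, it can be computed once offline; at inference the compressed model reduces to a single dense layer, which is the kind of saving that matters on the mobile or embedded hardware the abstract mentions.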
doi:10.1371/journal.pone.0256187
pmid:34388224
pmcid:PMC8363007
fatcat:cv75gwtcnfbc7luszqgvqpcpqm