A copy of this work (application/pdf) was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
Distilling Knowledge From Graph Convolutional Networks
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Figure 1: (a) Unlike existing knowledge distillation methods, which focus only on the final prediction or intermediate activations, our method explicitly distills knowledge about how the teacher model embeds the topological structure and transfers it to the student model. (b) We display the structure of the feature space, visualized by the distance between the red point and the others on a point cloud dataset, where each object is represented as a set of 3D points. Top row: structures obtained from the …
doi:10.1109/cvpr42600.2020.00710
dblp:conf/cvpr/YangQSTW20
fatcat:nzswe5fls5brhgj2n3lof7zjmy