2,545 Hits in 3.5 sec

EGAD: Evolving Graph Representation Learning with Self-Attention and Knowledge Distillation for Live Video Streaming Events [article]

Stefanos Antaris, Dimitrios Rafailidis, Sarunas Girdzijauskas
2020 arXiv   pre-print
We propose EGAD, a neural network architecture to capture the graph evolution by introducing a self-attention mechanism on the weights between consecutive graph convolutional networks.  ...  In this study, we present a dynamic graph representation learning model on weighted graphs to accurately predict the network capacity of connections between viewers in a live video streaming event.  ...  on evolving graphs of social networks.  ... 
arXiv:2011.05705v1 fatcat:kdx4qoupmjcjfi6owyrytxjs4e
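The EGAD entry above mentions self-attention applied to the weights of consecutive graph convolutional snapshots. The sketch below is only a guess at what such an attention step could look like in plain PyTorch: the function `attend_over_weights`, the random query/key projections, and all shapes are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch: scaled dot-product self-attention applied across the
# flattened weight matrices of k consecutive GCN snapshots. Shapes and
# projection sizes are illustrative assumptions, not the EGAD implementation.
import torch
import torch.nn.functional as F

def attend_over_weights(weight_snapshots, d_proj=64):
    # weight_snapshots: list of k tensors, each (in_dim, out_dim)
    flat = torch.stack([w.reshape(-1) for w in weight_snapshots])  # (k, in_dim*out_dim)
    d = flat.shape[1]
    # random projections stand in for learned query/key layers;
    # the flattened weights themselves serve as values
    Wq = torch.randn(d, d_proj) / d ** 0.5
    Wk = torch.randn(d, d_proj) / d ** 0.5
    q, k = flat @ Wq, flat @ Wk                      # (k, d_proj)
    scores = (q @ k.T) / d_proj ** 0.5               # (k, k) snapshot-to-snapshot scores
    attn = F.softmax(scores, dim=-1)
    blended = attn @ flat                            # (k, d) attended weights
    return blended[-1].reshape(weight_snapshots[-1].shape)  # init for the next snapshot

# toy usage: three snapshots of a 16x8 GCN layer
snapshots = [torch.randn(16, 8) for _ in range(3)]
print(attend_over_weights(snapshots).shape)  # torch.Size([16, 8])
```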

Deep geometric knowledge distillation with graphs [article]

Carlos Lassance, Myriam Bontonou, Ghouthi Boukli Hacene, Vincent Gripon, Jian Tang, Antonio Ortega
2019 arXiv   pre-print
A popular approach to reduce the size of a deep learning architecture consists in distilling knowledge from a bigger network (teacher) to a smaller one (student).  ...  In this work, we focus instead on relative knowledge distillation (RKD), which considers the geometry of the respective latent spaces, allowing for dimension-agnostic transfer of knowledge.  ...  Neural network distillation: Following [11], we distinguish approaches transferring knowledge input by input from approaches focusing on relative distances on a batch of inputs.  ... 
arXiv:1911.03080v1 fatcat:nkxhqehva5akbgpexcjy7wam4m
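A common way to realize the geometry-preserving idea mentioned in the entry above is to match pairwise distances between teacher and student embeddings over a batch. The sketch below illustrates that generic recipe; the function name, the smooth-L1 comparison, and the mean-distance normalization are assumptions rather than the paper's exact loss.

```python
# Sketch of a distance-based relational distillation loss: the student is
# trained to reproduce the teacher's pairwise-distance structure over a batch.
# Normalizing by the mean distance is an illustrative choice, not the paper's
# exact formulation.
import torch
import torch.nn.functional as F

def relational_distance_loss(teacher_emb, student_emb, eps=1e-8):
    # teacher_emb: (B, d_t), student_emb: (B, d_s) -- dimensions may differ;
    # only the (B, B) distance matrices are compared, hence dimension-agnostic.
    def normalized_dists(x):
        d = torch.cdist(x, x, p=2)      # (B, B) pairwise Euclidean distances
        return d / (d.mean() + eps)     # scale-invariant comparison
    return F.smooth_l1_loss(normalized_dists(student_emb),
                            normalized_dists(teacher_emb))

# toy usage
t = torch.randn(32, 512)   # teacher latent
s = torch.randn(32, 128)   # smaller student latent
print(relational_distance_loss(t, s).item())
```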

Deep Collaborative Learning for Randomly Wired Neural Networks

Ehab Essa, Xianghua Xie
2021 Electronics  
Knowledge distillation is an effective learning scheme for improving the performance of small neural networks by using the knowledge learned by teacher networks.  ...  In this paper, we created a chain of randomly wired neural networks based on a random graph algorithm and collaboratively trained the models using functional-preserving transfer learning, so that the small  ...  Furthermore, neural architecture search (NAS) [5] has evolved to automatically design the neural networks by optimally searching for the number of layers, the operation of each layer, and the wiring  ... 
doi:10.3390/electronics10141669 fatcat:5ln6nvbgebdwljlrq7vse243ou
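The entry above builds chains of randomly wired networks from a random graph algorithm. The snippet below only shows the graph side of that idea with networkx: a Watts–Strogatz graph is oriented into a DAG and each node's inputs are read off. The generator choice and all parameters are placeholders, not the paper's setup.

```python
# Rough illustration of deriving a feed-forward wiring from a random graph:
# sample a Watts-Strogatz graph, orient every edge from the lower-indexed to
# the higher-indexed node so the result is acyclic, then read off each node's
# inputs. Node operations (conv, activation, ...) are left abstract here.
import networkx as nx

def random_wiring(num_nodes=8, k=4, p=0.25, seed=0):
    g = nx.connected_watts_strogatz_graph(num_nodes, k, p, seed=seed)
    dag = nx.DiGraph()
    dag.add_nodes_from(g.nodes)
    dag.add_edges_from((min(u, v), max(u, v)) for u, v in g.edges)
    order = list(nx.topological_sort(dag))
    wiring = {node: sorted(dag.predecessors(node)) for node in order}
    return order, wiring

order, wiring = random_wiring()
for node in order:
    inputs = wiring[node] or ["network input"]
    print(f"node {node} aggregates outputs of {inputs}")
```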

Neural Architecture Evolution in Deep Reinforcement Learning for Continuous Control [article]

Jörg K.H. Franke, Gregor Köhler, Noor Awad, Frank Hutter
2020 arXiv   pre-print
Current Deep Reinforcement Learning algorithms still heavily rely on handcrafted neural network architectures.  ...  We propose a novel approach to automatically find strong topologies for continuous control tasks while only adding a minor overhead in terms of interactions in the environment.  ...  We propose a novel genetic operator based on network distillation [9] for stable architecture mutation.  ... 
arXiv:1910.12824v3 fatcat:c6z54jty2ff2bmbxjwctefraru

CMNN: Coupled Modular Neural Network

Md Intisar Chowdhury, Qiangfu Zhao, Kai Su, Yong Liu
2021 IEEE Access  
Knowledge Distillation (KD) is currently the most popular and simplest-to-implement approach for compressing neural networks without significant loss of performance.  ...  respective sub-graph networks by leveraging the knowledge of the complex super-graph through a co-distillation objective function.  ...  His research interests include evolutionary computation and neural networks.  ... 
doi:10.1109/access.2021.3093541 fatcat:iddsctbybzfghegvmgbsg4ov4y
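The co-distillation objective referenced above is, generically, a mutual teacher-student term between modules. Below is a minimal sketch of such a term using a symmetric temperature-scaled KL between two sets of logits; the weighting, temperature, and function name are assumptions, not the CMNN formulation.

```python
# Generic sketch of a co-distillation term: two modules softened with a
# temperature T regularize each other via a symmetric KL divergence, added to
# their ordinary cross-entropy losses. Weights and temperature are
# illustrative, not the CMNN objective.
import torch
import torch.nn.functional as F

def co_distillation_loss(logits_a, logits_b, targets, T=3.0, alpha=0.5):
    ce = F.cross_entropy(logits_a, targets) + F.cross_entropy(logits_b, targets)
    log_pa = F.log_softmax(logits_a / T, dim=-1)
    log_pb = F.log_softmax(logits_b / T, dim=-1)
    # both KL directions together form a symmetric mutual-learning term
    kl = (F.kl_div(log_pa, log_pb.exp(), reduction="batchmean") +
          F.kl_div(log_pb, log_pa.exp(), reduction="batchmean"))
    return ce + alpha * (T ** 2) * kl

# toy usage: two sub-networks classifying 10 classes
a, b = torch.randn(16, 10), torch.randn(16, 10)
y = torch.randint(0, 10, (16,))
print(co_distillation_loss(a, b, y).item())
```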

CCGL: Contrastive Cascade Graph Learning [article]

Xovee Xu, Fan Zhou, Kunpeng Zhang, Siyuan Liu
2021 arXiv   pre-print
In this work, we present Contrastive Cascade Graph Learning (CCGL), a novel framework for cascade graph representation learning in a contrastive, self-supervised, and task-agnostic way.  ...  However, its direct applicability for cascade modeling, especially graph-cascade-related tasks, remains underexplored.  ...  different distillation settings for teacher and student networks on the Weibo dataset.  ... 
arXiv:2107.12576v1 fatcat:kb3si37j65gntar64i63aq7hzi
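Contrastive self-supervised objectives of the kind CCGL builds on are typically instantiated as an InfoNCE loss over two augmented views of the same batch. The sketch below shows that standard loss only; the encoder, augmentations, and temperature are placeholders rather than the CCGL pipeline.

```python
# Sketch of a standard InfoNCE objective over two augmented views of the same
# batch of (cascade) graph embeddings: matching indices are positives, all
# other pairs are negatives.
import torch
import torch.nn.functional as F

def info_nce(view_a, view_b, temperature=0.5):
    # view_a, view_b: (B, d) embeddings of two augmentations of the same graphs
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.T / temperature        # (B, B) similarity matrix
    targets = torch.arange(a.shape[0])    # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# toy usage
za, zb = torch.randn(64, 128), torch.randn(64, 128)
print(info_nce(za, zb).item())
```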

SlimNets: An Exploration of Deep Model Compression and Acceleration [article]

Ini Oguntola, Subby Olubeko, Christopher Sweeney
2018 arXiv   pre-print
Deep neural networks have achieved increasingly accurate results on a wide variety of complex tasks.  ...  This work evaluates and compares three distinct methods for deep model compression and acceleration: weight pruning, low-rank factorization, and knowledge distillation.  ...  Acknowledgments: We would like to thank the 6.883 staff at MIT for their instrumental feedback on this project.  ... 
arXiv:1808.00496v1 fatcat:cmcfyslxiza4nehd3beqmulwli
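The knowledge-distillation baseline compared in this entry is usually implemented as a temperature-scaled KL term blended with cross-entropy. The sketch below is that generic recipe; alpha and T are tuning choices, not values from the paper.

```python
# Classic distillation recipe: soften teacher and student logits with a
# temperature T and blend a KL term with the usual hard-label cross-entropy.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T ** 2)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

# toy usage
s, t = torch.randn(8, 100), torch.randn(8, 100)
y = torch.randint(0, 100, (8,))
print(distillation_loss(s, t, y).item())
```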

K-Core based Temporal Graph Convolutional Network for Dynamic Graphs [article]

Jingxin Liu, Chang Xu, Chang Yin, Weiqiang Wu, You Song
2020 arXiv   pre-print
However, many existing methods focus on static graphs while ignoring evolving graph patterns.  ...  Inspired by the success of graph convolutional networks (GCNs) in static graph embedding, we propose a novel k-core-based temporal graph convolutional network, the CTGCN, to learn node representations for  ...  Graph Neural Networks (P-GNNs) [23].  ... 
arXiv:2003.09902v3 fatcat:6apuy6vp5zalhphkihas7rab5m
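The k-core structure the CTGCN entry builds on can be computed with standard graph tooling. The short example below only illustrates the decomposition itself with networkx; it says nothing about the temporal GCN layered on top.

```python
# Small illustration of k-core decomposition: networkx computes each node's
# core number, and nested k-core subgraphs can then be extracted per k.
import networkx as nx

g = nx.karate_club_graph()
core_number = nx.core_number(g)          # node -> largest k-core containing it
max_k = max(core_number.values())
for k in range(1, max_k + 1):
    subgraph = nx.k_core(g, k=k)         # nodes with core number >= k
    print(f"{k}-core: {subgraph.number_of_nodes()} nodes, "
          f"{subgraph.number_of_edges()} edges")
```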

Learning to Play against Any Mixture of Opponents [article]

Max Olan Smith, Thomas Anthony, Yongzhao Wang, Michael P. Wellman
2021 arXiv   pre-print
Intuitively, experience playing against one mixture of opponents in a given domain should be relevant for a different mixture in the same domain.  ...  We find that Q-Mixing is able to successfully transfer knowledge across any mixture of opponents.  ...  Policy distillation: In the policy distillation framework, a larger neural network referred to as the "teacher" is used as a training signal for a smaller neural network called the "student".  ... 
arXiv:2009.14180v2 fatcat:2n2oc3pytvg3ljuqqc7lvl7dna
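The policy-distillation step described in the snippet above (a teacher policy serving as a training signal for a smaller student) can be sketched as a KL-matching update over a batch of observations. The network sizes, optimizer, and function name below are assumptions for illustration.

```python
# Sketch of a teacher-to-student policy distillation step: the student's
# action distribution is pushed toward the teacher's on a batch of
# observations. Network classes and shapes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def policy_distillation_step(teacher, student, optimizer, observations, T=1.0):
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(observations) / T, dim=-1)
    student_log_probs = F.log_softmax(student(observations) / T, dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# toy usage: small MLP policies over 4-dim observations and 3 actions
teacher = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 3))
student = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
obs = torch.randn(32, 4)
print(policy_distillation_step(teacher, student, opt, obs))
```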

Low-Resolution Face Recognition in the Wild via Selective Knowledge Distillation

Shiming Ge, Shengwei Zhao, Chenyu Li, Jia Li
2019 IEEE Transactions on Image Processing  
Inspired by that, this paper proposes a learning approach to recognize low-resolution faces via selective knowledge distillation.  ...  on GPU.  ...  Selective knowledge distillation: To study selective knowledge distillation, we first explore the influence of different settings of the parameter λ on the parse graph optimization algorithm.  ... 
doi:10.1109/tip.2018.2883743 fatcat:32z2fr6vpzbn3esvl5uwqrgd6e

ATBRG: Adaptive Target-Behavior Relational Graph Network for Effective Recommendation [article]

Yufei Feng, Binbin Hu, Fuyu Lv, Qingwen Liu, Zhiqiang Zhang, Wenwu Ou
2020 arXiv   pre-print
Existing methods either explore independent meta-paths for user-item pairs over the KG, or employ a graph neural network (GNN) on the whole KG to produce representations for users and items separately.  ...  Recently, the knowledge graph (KG) has attracted much attention in RS due to its abundant connective information.  ...  Recently, graph neural networks have shown their potential in learning accurate node embeddings with high-order graph topology.  ... 
arXiv:2005.12002v1 fatcat:uvjxqnmtdfhchcjhjncifyijge

Graphonomy: Universal Human Parsing via Graph Transfer Learning

Ke Gong, Yiming Gao, Xiaodan Liang, Xiaohui Shen, Meng Wang, Liang Lin
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
By distilling universal semantic graph representation to each specific task, Graphonomy is able to predict all levels of parsing labels in one system without piling up the complexity.  ...  Various graph transfer dependencies (e.g., similarity, linguistic knowledge) between different datasets are analyzed and encoded to enhance graph transfer capability.  ...  Based on the high-level graph feature Z, we leverage semantic constraints from the human body structured knowledge to evolve global representations by graph reasoning.  ... 
doi:10.1109/cvpr.2019.00763 dblp:conf/cvpr/Gong0LS0L19 fatcat:rv5xgqag4jcghmzffi3s67fp2u
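The "graph reasoning" step mentioned in the Graphonomy abstract propagates node features through encoded relations. The toy module below shows one generic form, Z' = ReLU(A Z W), with a learnable, softmax-normalized relation matrix; it is a stand-in for illustration, not the Graphonomy graphs or transfer dependencies.

```python
# Minimal sketch of a graph-reasoning step over semantic (body-part) nodes:
# features Z are propagated through a relation matrix A and a learned
# projection W. The relation matrix here is a toy stand-in.
import torch
import torch.nn as nn

class GraphReasoning(nn.Module):
    def __init__(self, num_nodes, feat_dim):
        super().__init__()
        self.proj = nn.Linear(feat_dim, feat_dim, bias=False)
        # learnable relation matrix, row-normalized on the fly
        self.relation = nn.Parameter(torch.eye(num_nodes) +
                                     0.1 * torch.rand(num_nodes, num_nodes))

    def forward(self, z):
        adj = torch.softmax(self.relation, dim=-1)   # (N, N) normalized relations
        return torch.relu(adj @ self.proj(z))        # (N, d) evolved node features

# toy usage: 20 semantic part nodes with 256-d features
z = torch.randn(20, 256)
print(GraphReasoning(20, 256)(z).shape)  # torch.Size([20, 256])
```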

Graphonomy: Universal Human Parsing via Graph Transfer Learning [article]

Ke Gong, Yiming Gao, Xiaodan Liang, Xiaohui Shen, Meng Wang, Liang Lin
2019 arXiv   pre-print
By distilling universal semantic graph representation to each specific task, Graphonomy is able to predict all levels of parsing labels in one system without piling up the complexity.  ...  Various graph transfer dependencies (e.g., similarity, linguistic knowledge) between different datasets are analyzed and encoded to enhance graph transfer capability.  ...  Based on the high-level graph feature Z, we leverage semantic constraints from the human body structured knowledge to evolve global representations by graph reasoning.  ... 
arXiv:1904.04536v1 fatcat:di2yce3ytbhadml5lljt7yn66m

Slimmable Generative Adversarial Networks [article]

Liang Hou, Zehuan Yuan, Lei Huang, Huawei Shen, Xueqi Cheng, Changhu Wang
2021 arXiv   pre-print
To facilitate the consistency between generators of different widths, we present a stepwise inplace distillation technique that encourages narrow generators to learn from wide ones.  ...  Generative adversarial networks (GANs) have achieved remarkable progress in recent years, but the continuously growing scale of models makes them challenging to deploy widely in practical applications.  ...  Dynamic neural networks: Unlike model compression, dynamic neural networks can adaptively choose the computational graph to reduce computation during training and inference.  ... 
arXiv:2012.05660v3 fatcat:dnpkc5owprbizegvvrjbw3mrfm
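The stepwise inplace distillation described above has each narrower generator learn from the next wider configuration of the same model. The sketch below imitates that direction of supervision with a toy generator whose hidden width is masked; real slimmable layers share weights across widths rather than zeroing channels, so this is a simplification.

```python
# Rough sketch of the "narrow learns from wide" idea: run the same latent
# through generator configurations of decreasing width and pull each narrower
# output toward the next wider one (wider output detached).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGenerator(nn.Module):
    def __init__(self, z_dim=32, base_width=64, out_dim=784):
        super().__init__()
        self.fc1 = nn.Linear(z_dim, base_width)
        self.fc2 = nn.Linear(base_width, out_dim)

    def forward(self, z, width_mult=1.0):
        hidden = torch.relu(self.fc1(z))
        keep = max(1, int(hidden.shape[1] * width_mult))
        # zero out the channels beyond the active width (simplified slimming)
        hidden = torch.cat([hidden[:, :keep],
                            hidden.new_zeros(hidden.shape[0],
                                             hidden.shape[1] - keep)], dim=1)
        return torch.tanh(self.fc2(hidden))

gen = ToyGenerator()
z = torch.randn(16, 32)
widths = [1.0, 0.75, 0.5, 0.25]
outputs = [gen(z, w) for w in widths]
# stepwise: each narrower output is regressed onto the next wider one
distill = sum(F.mse_loss(outputs[i + 1], outputs[i].detach())
              for i in range(len(widths) - 1))
print(distill.item())
```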

EVOC: A Computer Model of the Evolution of Culture [article]

Liane Gabora
2014 arXiv   pre-print
It consists of neural-network-based agents that invent ideas for actions and imitate neighbors' actions.  ...  Acknowledgments: Thanks to Martin Denton and Jillian Dicker for work on EVOC.  ...  This project is funded by the Foundation for the Future and the Social Sciences and Humanities Research Council of Canada (SSHRC).  ... 
arXiv:1310.0522v2 fatcat:rzfhk5h3tbga5naaodiq2bkobi
Showing results 1 — 15 out of 2,545 results