Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks
arXiv pre-print, 2022
Using task-specific components within a neural network in continual learning (CL) is a compelling strategy to address the stability-plasticity dilemma in fixed-capacity models without access to past data. Current methods focus only on selecting a sub-network for a new task that reduces forgetting of past tasks. However, this selection could limit the forward transfer of relevant past knowledge that helps in future learning. Our study reveals that satisfying both objectives jointly is more […]
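The abstract's core idea is selecting task-specific sub-networks inside a fixed-capacity model. As a rough, generic illustration only (not the paper's actual algorithm, which is not reproduced here), the sketch below gates a linear layer's weights with a per-task binary mask chosen by a simple magnitude heuristic; all names (`MaskedLinear`, `select_mask`, `keep_ratio`) are hypothetical.

```python
# Illustrative sketch of per-task binary masks over a fixed-capacity layer.
# This is a generic sub-network scheme for continual learning, NOT the
# method proposed in the paper; the magnitude-based selection is an
# assumption made here for brevity.
import torch
import torch.nn as nn


class MaskedLinear(nn.Linear):
    """Linear layer whose weights are gated by a per-task binary mask."""

    def __init__(self, in_features, out_features):
        super().__init__(in_features, out_features)
        self.task_masks = {}  # task_id -> binary mask with the same shape as weight

    def select_mask(self, task_id, keep_ratio=0.5):
        # Heuristic: keep the largest-magnitude weights as this task's sub-network.
        n = self.weight.numel()
        k = int(keep_ratio * n)
        threshold = self.weight.abs().flatten().kthvalue(n - k + 1).values
        self.task_masks[task_id] = (self.weight.abs() >= threshold).float()

    def forward(self, x, task_id):
        # Only the selected sub-network participates in this task's forward pass.
        mask = self.task_masks[task_id]
        return nn.functional.linear(x, self.weight * mask, self.bias)


# Usage: pick a sub-network for a task, then run inputs through it. In a full
# CL setup one would also freeze weights owned by earlier tasks (stability)
# while letting new tasks reuse overlapping weights (forward transfer).
layer = MaskedLinear(8, 4)
layer.select_mask(task_id=0, keep_ratio=0.5)
out = layer(torch.randn(2, 8), task_id=0)
print(out.shape)  # torch.Size([2, 4])
```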
arXiv:2110.05329v3