Representation Ensembling for Synergistic Lifelong Learning with Quasilinear Complexity
[article] arXiv pre-print, 2024
In lifelong learning, data are used to improve performance not only on the current task, but also on previously encountered and as yet unencountered tasks. While typical transfer learning algorithms can improve performance on future tasks, their performance on prior tasks degrades upon learning new tasks (a phenomenon known as forgetting). Many recent approaches for continual or lifelong learning have attempted to maintain performance on old tasks when given new tasks. But striving to avoid forgetting sets the …
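The forgetting the abstract describes can be demonstrated with a toy sketch (everything here is illustrative and not taken from the paper): a single logistic-regression unit is trained on one task, then trained sequentially on a second task with a shifted decision boundary, and its accuracy on the first task is measured before and after.

```python
import math

def train(w, b, data, lr=0.5, epochs=500):
    """Batch gradient descent on the logistic loss for a 1-D model sigmoid(w*x + b)."""
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / len(data)
        b -= lr * gb / len(data)
    return w, b

def accuracy(w, b, data):
    """Fraction of points whose thresholded prediction matches the label."""
    return sum((w * x + b > 0) == (y == 1) for x, y in data) / len(data)

# Task A: decision boundary near x = 0.  Task B: boundary near x = 2.
task_a = [(i / 10, 1 if i > 0 else 0) for i in range(-10, 11)]
task_b = [(1 + i / 10, 1 if 1 + i / 10 > 2 else 0) for i in range(0, 21)]

w, b = train(0.0, 0.0, task_a)
acc_a_before = accuracy(w, b, task_a)   # high: the model fits task A

w, b = train(w, b, task_b)              # continue training on task B only
acc_a_after = accuracy(w, b, task_a)    # drops: the boundary moved toward task B
```

Because plain sequential training overwrites the parameters that solved task A, `acc_a_after` falls well below `acc_a_before` — the degradation on prior tasks that lifelong-learning methods try to prevent.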
arXiv:2004.12908v18