Multi-Task Personalized Learning with Sparse Network Lasso

Jiankun Wang, Lu Sun
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI 2022)
Multi-task learning learns multiple related tasks jointly in order to improve generalization performance. Existing methods typically build a single global model shared by all samples, which preserves sample homogeneity but ignores sample individuality (heterogeneity). Personalized learning has recently been proposed to learn sample-specific local models by exploiting sample heterogeneity; however, directly applying it in the multi-task setting poses three key challenges: 1) modeling sample homogeneity, 2) preventing over-parameterization, and 3) capturing task correlations. In this paper, we propose a novel multi-task personalized learning method to handle these challenges. For 1), each model is decomposed into a sum of global and local components, which preserve sample homogeneity and sample heterogeneity, respectively. For 2), regularized by sparse network Lasso, the joint models are embedded into a low-dimensional subspace and exhibit sparse group structure, leading to a significantly reduced number of effective parameters. For 3), the subspace is further separated into two parts, so as to capture both the commonality and the specificity of tasks. We develop an alternating algorithm to solve the proposed optimization problem, and extensive experiments on various synthetic and real-world datasets demonstrate its robustness and effectiveness.
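The global-plus-local decomposition and the network-Lasso penalty described above can be illustrated with a minimal sketch. This is not the paper's actual formulation (which works per task in a low-dimensional subspace); it is a simplified, hypothetical objective in which every sample i receives a model w_i = w_global + v_i, the network-Lasso term penalizes differences between local components of samples connected in a similarity graph, and a squared-norm term limits over-parameterization. All names and the regularization weights `lam1`, `lam2` are illustrative assumptions.

```python
import numpy as np

def mtpl_objective(X, y, w_global, V_local, graph, lam1=0.1, lam2=0.1):
    """Toy objective for the global + local decomposition (a sketch,
    not the paper's exact per-task formulation).

    X        : (n, d) sample features
    y        : (n,)   regression targets
    w_global : (d,)   global component shared by all samples
    V_local  : (n, d) sample-specific local components
    graph    : list of (i, j, weight) edges of a sample similarity graph
    """
    # Per-sample prediction with w_i = w_global + v_i (broadcast over rows).
    preds = np.einsum('ij,ij->i', X, w_global + V_local)
    loss = 0.5 * np.mean((preds - y) ** 2)

    # Network-Lasso term: weighted l2 distance between local components of
    # neighboring samples, encouraging local models to cluster.
    net_lasso = sum(w_ij * np.linalg.norm(V_local[i] - V_local[j])
                    for i, j, w_ij in graph)

    # Squared-norm term shrinking local components, limiting effective
    # parameter count.
    return loss + lam1 * net_lasso + lam2 * np.sum(V_local ** 2)

# Tiny usage example on random data with a 3-edge similarity graph.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))
y = rng.normal(size=4)
w_global = rng.normal(size=2)
V_local = rng.normal(size=(4, 2))
graph = [(0, 1, 1.0), (1, 2, 0.5), (2, 3, 1.0)]
obj = mtpl_objective(X, y, w_global, V_local, graph)
```

When `lam1` is large, the penalty drives `V_local[i] == V_local[j]` on strongly connected samples, so local models merge into groups, which is the sparse group structure the abstract refers to.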
doi:10.24963/ijcai.2022/485