A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
The file type is application/pdf.
Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals
[article] arXiv pre-print, 2018
We show that many machine learning goals, such as improved fairness metrics, can be expressed as constraints on the model's predictions, which we call rate constraints. We study the problem of training non-convex models subject to these rate constraints (or any non-convex and non-differentiable constraints). In the non-convex setting, the standard approach of Lagrange multipliers may fail. Furthermore, if the constraints are non-differentiable, then one cannot optimize the Lagrangian with …
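The setup the abstract describes — minimizing a training loss subject to a constraint on the model's prediction rate, via descent on the model parameters alternated with ascent on a Lagrange multiplier — can be sketched as follows. This is a minimal toy, not the paper's actual algorithm: the synthetic data, the 70% positive-prediction-rate target, and the use of a sigmoid as a differentiable surrogate for the non-differentiable 0/1 rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical synthetic data: 2 features, roughly balanced binary labels.
X = rng.normal(size=(400, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=400) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Example rate constraint: predict positive on at least 70% of examples.
# The true rate uses an indicator (non-differentiable), so the inner
# optimization replaces it with a sigmoid surrogate.
TARGET_RATE = 0.7

w, b, lam = np.zeros(2), 0.0, 0.0
for _ in range(2000):
    s = X @ w + b
    p = sigmoid(s)
    n = len(y)
    # Gradient (w.r.t. scores s) of the surrogate Lagrangian:
    #   log-loss(s, y) + lam * (TARGET_RATE - mean(sigmoid(s)))
    grad_s = (p - y) / n - lam * p * (1 - p) / n
    w -= 0.5 * (X.T @ grad_s)      # descent step on model parameters
    b -= 0.5 * grad_s.sum()
    # Ascent step on the multiplier, projected onto lam >= 0.
    lam = max(0.0, lam + 0.5 * (TARGET_RATE - p.mean()))

# Evaluate the *actual* (non-differentiable) rate constraint.
true_rate = float(np.mean(X @ w + b > 0))
print(f"positive prediction rate: {true_rate:.2f}")
```

Since the unconstrained model would predict positive on only about half the examples, the multiplier grows until the rate constraint is (approximately) satisfied; the gap between the surrogate rate and the true indicator rate is exactly the difficulty the paper addresses.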
arXiv:1809.04198v1
fatcat:mvzkt5dwvzfwlava2hesyqzb64