Attack and defence in cellular decision-making: lessons from machine learning

Thomas J. Rademaker, Emmanuel Bengio, Paul François
2019 arXiv preprint
Machine learning algorithms are sensitive to meaningless (or "adversarial") perturbations. This is reminiscent of cellular decision-making, where ligands (called "antagonists") prevent correct signalling, as in early immune recognition. We draw a formal analogy between neural networks used in machine learning and models of cellular decision-making (adaptive proofreading). We apply attacks from machine learning to simple decision-making models, and show explicitly the correspondence to antagonism by weakly bound ligands. Such antagonism is absent in more nonlinear models, which inspired us to implement a biomimetic defence in neural networks that filters out adversarial perturbations. We then apply a gradient-descent approach from machine learning to different cellular decision-making models, and we reveal the existence of two regimes characterized by the presence or absence of a critical point. The critical point causes the strongest antagonists to lie close to the threshold. This is validated in the loss landscapes of robust neural networks and cellular decision-making models, and observed experimentally for immune cells. For both regimes, we explain how the associated defence mechanisms shape the geometry of the loss landscape, and why different adversarial attacks are effective in different regimes. Our work connects evolved cellular decision-making to machine learning, and motivates the design of a general theory of adversarial perturbations, for both in vivo and in silico systems.
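The gradient-based attack mentioned above can be illustrated with a minimal, hypothetical sketch: a single logistic readout standing in for a decision-making unit, perturbed in the fast-gradient-sign style. The weights `w`, inputs `x` (read as ligand binding times), and step size `eps` are all illustrative assumptions, not taken from the paper's adaptive-proofreading models.

```python
import numpy as np

# Toy decision-making unit: a logistic readout over inputs x
# (e.g. ligand binding times). All numbers here are hypothetical.
rng = np.random.default_rng(0)
w = rng.normal(size=5)           # fixed readout weights
x = rng.uniform(1.0, 2.0, 5)     # the "input" to be perturbed

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def output(x):
    """Decision output in (0, 1); above-threshold means 'respond'."""
    return sigmoid(w @ x)

# Fast-gradient-sign-style perturbation: nudge each input component
# against the decision, loosely analogous to adding antagonist ligands.
eps = 0.1
s = output(x)
grad = s * (1.0 - s) * w         # d output / d x for the logistic readout
x_adv = x - eps * np.sign(grad)  # small step that pushes the output down

print(output(x), output(x_adv))  # the perturbed input lowers the response
```

Because the perturbation moves each component by the same small amount `eps` in the worst-case direction, the drop in output is guaranteed here; in the paper's nonlinear models, the interesting question is when such perturbations lose their effect.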
arXiv:1807.04270v2