Drug discovery with explainable artificial intelligence

José Jiménez-Luna, Francesca Grisoni, Gisbert Schneider
Nature Machine Intelligence, 2020
Various concepts of 'artificial intelligence' (AI) have been successfully adopted for computer-assisted drug discovery in the past few years1-3. This advance is mostly owed to the ability of deep learning algorithms, that is, artificial neural networks with multiple processing layers, to model complex nonlinear input-output relationships, and to perform pattern recognition and feature extraction from low-level data representations. Certain deep learning models have been shown to match or even exceed the performance of the familiar existing machine learning and quantitative structure-activity relationship (QSAR) methods for drug discovery4-6. Moreover, deep learning has boosted the potential and broadened the applicability of computer-assisted discovery, for example, in molecular design7,8, chemical synthesis planning9,10, protein structure prediction11 and macromolecular target identification12,13.

The ability to capture intricate nonlinear relationships between input data (for example, chemical structure representations) and the associated output (for example, assay readout) often comes at the price of limited comprehensibility of the resulting model. While there have been efforts to explain QSARs in terms of algorithmic insights and molecular descriptor analysis14-19, deep neural network models notoriously elude immediate accessibility by the human mind20. In medicinal chemistry in particular, the availability of 'rules of thumb' correlating biological effects with physicochemical properties underscores the willingness, in certain situations, to sacrifice accuracy in favour of models that better fit human intuition21-23. Thus, blurring the lines between the 'two QSARs'24 (that is, mechanistically interpretable versus highly accurate models) may be key to accelerated drug discovery with AI25. Automated analysis of medical and chemical knowledge to extract and represent features in a human-intelligible format dates back to the 1990s26,27, but has been receiving increasing attention due to the re-emergence of neural networks in chemistry and healthcare. Given the current pace of AI in drug discovery and related fields, there will be an increased demand for methods that help us understand and interpret the underlying models.
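The nonlinear input-output modelling described above can be illustrated with a minimal sketch: a one-hidden-layer neural network trained by gradient descent on a toy regression task. All data, dimensions and hyperparameters here are hypothetical illustrations, not taken from the paper; real QSAR models operate on molecular descriptors or learned representations and use far larger architectures.

```python
# Minimal sketch of a multilayer neural network fitting a nonlinear
# input-output relationship (toy stand-in for a QSAR regression task).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "descriptor -> activity" data with a nonlinear target.
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(2 * X) + 0.5 * X**2

# One hidden layer with a tanh nonlinearity.
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # Backward pass: gradients of the mean squared error
    g_pred = 2.0 * err / len(X)
    g_W2 = h.T @ g_pred;  g_b2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1.0 - h**2)   # tanh derivative
    g_W1 = X.T @ g_h;     g_b1 = g_h.sum(axis=0)
    # Gradient-descent update
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

The trained weights (W1, b1, W2, b2) capture the nonlinear mapping, but inspecting them directly yields no chemical insight, which is precisely the interpretability gap that XAI methods discussed in this article aim to close.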
In an effort to mitigate the lack of interpretability of certain machine learning models, and to augment human reasoning and decision-making28, attention has been drawn to explainable AI (XAI) approaches29,30. Providing informative explanations alongside the mathematical models aims to (1) render the underlying decision-making process
doi:10.1038/s42256-020-00236-4