InterpretML: A Unified Framework for Machine Learning Interpretability

Harsha Nori and Samuel Jenkins and Paul Koch and Rich Caruana
2019, arXiv pre-print
InterpretML is an open-source Python package which exposes machine learning interpretability algorithms to practitioners and researchers. InterpretML exposes two types of interpretability - glassbox models, which are machine learning models designed for interpretability (ex: linear models, rule lists, generalized additive models), and blackbox explainability techniques for explaining existing systems (ex: Partial Dependence, LIME). The package enables practitioners to easily compare interpretability algorithms by exposing multiple methods under a unified API, and by having a built-in, extensible visualization platform. InterpretML also includes the first implementation of the Explainable Boosting Machine, a powerful, interpretable, glassbox model that can be as accurate as many blackbox models. The MIT licensed source code can be downloaded from github.com/microsoft/interpret.
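A minimal usage sketch of the unified API described in the abstract, assuming the interpret package's glassbox estimator and its show() visualization entry point; the dataset and variable names below are illustrative, not taken from the paper:

    # Sketch: train an Explainable Boosting Machine and view its explanations
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split

    from interpret import show
    from interpret.glassbox import ExplainableBoostingClassifier

    # Illustrative dataset; any tabular classification data would do
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Glassbox model: the Explainable Boosting Machine
    ebm = ExplainableBoostingClassifier()
    ebm.fit(X_train, y_train)

    # Global explanation rendered in the built-in visualization platform
    show(ebm.explain_global())

    # Local explanations for a handful of individual test predictions
    show(ebm.explain_local(X_test[:5], y_test[:5]))

Blackbox explainers in the package (e.g., its LIME and Partial Dependence wrappers) follow the same explain/show pattern, which is what allows side-by-side comparison under one API.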
arXiv:1909.09223v1