Generating Interpretable Data-Based Explanations for Fairness Debugging using Gopher.

Published in SIGMOD (Demo), 2022

Jiongli Zhu, Romila Pradhan, Boris Glavic, Babak Salimi.

Machine learning (ML) models, while increasingly being used to make life-altering decisions, are known to reinforce systemic bias and discrimination. Consequently, practitioners and model developers need tools that facilitate debugging bias in ML models. We introduce Gopher, a system that generates compact, interpretable, and causal explanations for ML model bias. Gopher identifies the top-k coherent subsets of the training data that are root causes of model bias by quantifying the extent to which removing or updating a subset can resolve the bias. We describe the architecture of Gopher and walk the audience through real-world use cases that highlight how Gopher generates explanations enabling data scientists to understand how subsets of the training data contribute to the bias of an ML model. Gopher is available as open-source software; the code and demonstration video are available at https://gopher-sys.github.io/.
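
The sketch below illustrates the core idea described above in a hedged, simplified form: scoring a candidate training-data subset by how much removing it (and retraining) reduces a fairness metric. It is not Gopher's actual implementation (Gopher avoids naive retraining and also considers update interventions); the metric `statistical_parity`, the helper names, and the example predicates are illustrative assumptions.

```python
# Conceptual sketch (not Gopher's implementation): quantify how much removing a
# coherent subset of the training data, defined by a predicate over its
# attributes, reduces a bias metric of the resulting model.
import numpy as np
from sklearn.linear_model import LogisticRegression


def statistical_parity(model, X, sensitive):
    """Absolute difference in positive-prediction rates between the two groups."""
    pred = model.predict(X)
    sensitive = np.asarray(sensitive)
    return abs(pred[sensitive == 1].mean() - pred[sensitive == 0].mean())


def bias_reduction(train, test, predicate, features, label, sensitive):
    """Bias drop achieved by removing the rows matching `predicate` and retraining."""
    full = LogisticRegression(max_iter=1000).fit(train[features], train[label])
    base = statistical_parity(full, test[features], test[sensitive])

    reduced = train[~predicate(train)]  # drop the candidate subset
    retrained = LogisticRegression(max_iter=1000).fit(reduced[features], reduced[label])
    new = statistical_parity(retrained, test[features], test[sensitive])
    return base - new  # positive => removing the subset reduces bias


# Hypothetical usage: rank candidate subsets (simple attribute predicates) by
# bias reduction and keep the top-k as explanations.
# predicates = {"gender=F & hours<30": lambda d: (d.gender == "F") & (d.hours < 30), ...}
# top_k = sorted(predicates,
#                key=lambda p: bias_reduction(train, test, predicates[p],
#                                             features, "income", "gender"),
#                reverse=True)[:k]
```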


Paper | Code | Project Website