Publications
- Advances in Neural Information Processing Systems (NeurIPS)
Counterfactual Explanations Can Be Manipulated
By: Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju, and Sameer Singh
Abstract
Counterfactual explanations are useful for both generating recourse and auditing fairness between groups. We seek to understand whether adversaries can manipulate counterfactual explanations in an algorithmic recourse setting: if counterfactual explanations indicate that both men and women must earn $100 more on average to receive a loan, can we be sure lower cost recourse does not exist for the men? By construction, we show that adversaries can design models for which counterfactual explanations generate similar cost recourses between groups. However, the same methods provide much lower cost recourses for specific subgroups in the data when the original instances are slightly perturbed, effectively hiding recourse disparities in models. We demonstrate vulnerabilities in a variety of counterfactual explanation techniques. On loan and violent crime prediction data sets, we train models where counterfactual explanations find up to 20x lower cost recourse for specific subgroups in the data. These results raise crucial concerns regarding the dependability of current counterfactual explanation techniques in the presence of adversarial actors, which we hope will inspire further investigations into robust and reliable counterfactual explanations.
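The manipulation the abstract describes can be illustrated with a toy sketch (this is a hypothetical construction for intuition, not the paper's actual training procedure): a model that demands a high recourse cost on the data manifold, but exposes a much cheaper decision boundary once an input is slightly perturbed off it. A simple one-dimensional counterfactual search then reports very different recourse costs for the original versus the perturbed instance.

```python
import numpy as np

def adversarial_model(x):
    """Hypothetical 'adversarial' scoring function (illustrative only).
    On unperturbed inputs (x[1] == 0) approval requires income >= 5;
    if x[1] is nudged off the data manifold, a hidden low-cost region
    activates and the effective threshold drops to 1."""
    threshold = 5.0 if x[1] == 0.0 else 1.0
    return x[0] - threshold  # score >= 0 means loan approved

def counterfactual_cost(model, x, step=0.01, max_delta=10.0):
    """A counterfactual search reduced to a 1-D sweep: find the smallest
    increase in feature 0 (income) that flips the model's decision."""
    for delta in np.arange(0.0, max_delta, step):
        x_cf = x.copy()
        x_cf[0] += delta
        if model(x_cf) >= 0:
            return delta
    return None  # no counterfactual found within the search budget

x = np.array([0.5, 0.0])                  # original applicant
x_perturbed = x + np.array([0.0, 1e-3])   # tiny, imperceptible perturbation

cost_orig = counterfactual_cost(adversarial_model, x)
cost_pert = counterfactual_cost(adversarial_model, x_perturbed)
print(cost_orig, cost_pert)  # recourse appears roughly 9x cheaper after perturbing
```

The audit sees similar, high recourse costs on the clean data, while the perturbed instances reveal the much cheaper recourse region — the mechanism by which recourse disparities can be hidden from counterfactual-explanation-based fairness audits.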
Citation
Slack, Dylan, Sophie Hilgard, Himabindu Lakkaraju, and Sameer Singh. "Counterfactual Explanations Can Be Manipulated." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).