Publications
  • Article
  • Advances in Neural Information Processing Systems (NeurIPS)

Counterfactual Explanations Can Be Manipulated

By: Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju and Sameer Singh

Abstract

Counterfactual explanations are useful for both generating recourse and auditing fairness between groups. We seek to understand whether adversaries can manipulate counterfactual explanations in an algorithmic recourse setting: if counterfactual explanations indicate that both men and women must earn $100 more on average to receive a loan, can we be sure lower cost recourse does not exist for the men? By construction, we show that adversaries can design models for which counterfactual explanations generate similar cost recourses between groups. However, the same methods provide much lower cost recourses for specific subgroups in the data when the original instances are slightly perturbed, effectively hiding recourse disparities in models. We demonstrate these vulnerabilities in a variety of counterfactual explanation techniques. On loan and violent crime prediction data sets, we train models where counterfactual explanations find up to 20x lower cost recourse for specific subgroups in the data. These results raise crucial concerns regarding the dependability of current counterfactual explanation techniques in the presence of adversarial actors, which we hope will inspire further investigation into robust and reliable counterfactual explanations.
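The recourse setting the abstract describes can be sketched with a gradient-based counterfactual search in the style of Wachter et al.: starting from a rejected instance, take gradient steps that push the model's score past the decision threshold while penalizing distance from the original point; the distance traveled is the "cost" of recourse that the paper shows can differ sharply across subgroups. This is a minimal illustration, not the authors' implementation — the toy loan model, the single income feature, and all parameter values are assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code) of gradient-based
# counterfactual search: find x' close to x whose prediction crosses the
# decision threshold. The distance ||x' - x|| is the recourse cost.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, w, b, threshold=0.5, target=0.6, lam=0.01, lr=0.1, steps=2000):
    """Search for x' near x whose score reaches `threshold`.

    Minimizes (f(x') - target)^2 + lam * ||x' - x||^2 by gradient descent;
    `target` sits slightly past the threshold so the penalized objective
    actually carries x' across the decision boundary.
    """
    xp = x.astype(float).copy()
    for _ in range(steps):
        pred = sigmoid(w @ xp + b)
        if pred >= threshold:          # recourse found: stop as soon as we cross
            break
        # d/dx' of (pred - target)^2 + lam * ||x' - x||^2
        grad = 2 * (pred - target) * pred * (1 - pred) * w + 2 * lam * (xp - x)
        xp -= lr * grad
    return xp

# Toy "loan" model with one feature (income, in $100s); applicant at x = 1
# is rejected (score ~0.12). The counterfactual says how much more to earn.
w, b = np.array([1.0]), -3.0
x = np.array([1.0])
x_cf = counterfactual(x, w, b)
cost = np.abs(x_cf - x).sum()  # recourse cost = distance to the counterfactual
```

The paper's attack exploits exactly this kind of procedure: a model can be trained so that such searches return similar costs for two groups on the audited instances, while slightly perturbed inputs reveal far cheaper recourse for one subgroup.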

Keywords

Machine Learning Models; Counterfactual Explanations

Citation

Slack, Dylan, Sophie Hilgard, Himabindu Lakkaraju, and Sameer Singh. "Counterfactual Explanations Can Be Manipulated." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).

About The Author

Himabindu Lakkaraju

Technology and Operations Management

More from the Authors

    • May 2022
    • Faculty Research

    Altibbi: Revolutionizing Telehealth Using AI

    By: Himabindu Lakkaraju
    • 2022
    • Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)

    Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis.

    By: Martin Pawelczyk, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay and Himabindu Lakkaraju
    • 2022
    • Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)

    Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods.

    By: Chirag Agarwal, Marinka Zitnik and Himabindu Lakkaraju