Publications
  • Mar 2020
  • Conference Presentation

A New Analysis of Differential Privacy's Generalization Guarantees

By: Christopher Jung, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi and Moshe Shenfeld
  • Format: Print | Language: English

Abstract

We give a new proof of the "transfer theorem" underlying adaptive data analysis: that any mechanism for answering adaptively chosen statistical queries that is differentially private and sample-accurate is also accurate out-of-sample. Our new proof is elementary and gives structural insights that we expect will be useful elsewhere. We show: 1) that differential privacy ensures that the expectation of any query on the posterior distribution on datasets induced by the transcript of the interaction is close to its true value on the data distribution, and 2) sample accuracy on its own ensures that any query answer produced by the mechanism is close to its posterior expectation with high probability. This second claim follows from a thought experiment in which we imagine that the dataset is resampled from the posterior distribution after the mechanism has committed to its answers. The transfer theorem then follows by summing these two bounds, and in particular, avoids the "monitor argument" used to derive high probability bounds in prior work.
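In symbols, the two claims combine through a simple triangle inequality. The following is an illustrative sketch in our own notation, not the paper's: let a_t be the mechanism's answer to the t-th query q_t, let 𝒫 be the data distribution, and let the posterior be the distribution over datasets induced by the transcript of the interaction.

```latex
% Illustrative decomposition (notation ours, not the paper's):
%   a_t = mechanism's answer to the t-th query q_t
%   \mathcal{P} = underlying data distribution
%   S = dataset resampled from the posterior induced by the transcript
\[
\underbrace{\bigl|\,a_t - q_t(\mathcal{P})\,\bigr|}_{\text{out-of-sample error}}
\;\le\;
\underbrace{\bigl|\,a_t - \mathbb{E}_{S \sim \mathrm{posterior}}[q_t(S)]\,\bigr|}_{\text{claim 2: sample accuracy}}
\;+\;
\underbrace{\bigl|\,\mathbb{E}_{S \sim \mathrm{posterior}}[q_t(S)] - q_t(\mathcal{P})\,\bigr|}_{\text{claim 1: differential privacy}}
\]
```

Summing the two right-hand terms is exactly the "transfer" step: sample accuracy controls the first gap, differential privacy controls the second, and no monitor argument is needed.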
An upshot of our new proof technique is that the concrete bounds we obtain are substantially better than the best previously known bounds, even though the improvements are in the constants, rather than the asymptotics (which are known to be tight). As we show, our new bounds outperform the naive "sample-splitting" baseline at dramatically smaller dataset sizes compared to the previous state of the art, bringing techniques from this literature closer to practicality.
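To make the "sample-splitting" baseline concrete, here is a minimal, hypothetical sketch (ours, not code from the paper): it compares answering k statistical queries on disjoint data splits against answering them on the full sample with Gaussian noise added, a standard differentially private template. The function names and the noise scale sigma are illustrative assumptions, not the paper's calibration.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)

def split_baseline(data, queries):
    """Naive sample splitting: each query gets a fresh, disjoint chunk,
    so adaptivity cannot cause overfitting, but each answer averages
    over only n/k points (high variance once k is large)."""
    chunks = np.array_split(data, len(queries))
    return [q(chunk).mean() for q, chunk in zip(queries, chunks)]

def noisy_full_sample(data, queries, sigma):
    """Answer every query on the full sample, perturbed with Gaussian
    noise. Calibrating sigma to a privacy/accuracy target is where
    transfer-theorem bounds enter; here sigma is a fixed illustrative
    constant."""
    return [q(data).mean() + rng.normal(0.0, sigma) for q in queries]

# Toy experiment: k threshold queries q_t(x) = 1[x > t] on n standard
# normal samples. (Queries are fixed here; the paper's setting is the
# harder one where they are chosen adaptively.)
n, k = 10_000, 100
data = rng.normal(size=n)
thresholds = np.linspace(-2.0, 2.0, k)
queries = [lambda x, t=t: (x > t).astype(float) for t in thresholds]

# Exact population values: P(X > t) = erfc(t / sqrt(2)) / 2 for X ~ N(0, 1).
true_vals = np.array([erfc(t / sqrt(2.0)) / 2.0 for t in thresholds])

split_err = np.abs(np.array(split_baseline(data, queries)) - true_vals).max()
noisy_err = np.abs(np.array(noisy_full_sample(data, queries, 0.01)) - true_vals).max()
print(f"max error, sample splitting (n/k = {n // k} points/query): {split_err:.4f}")
print(f"max error, noisy full sample: {noisy_err:.4f}")
```

Sample splitting pays roughly a 1/sqrt(n/k) error per query by construction, which is why a mechanism that reuses the full sample can beat it once the bounds relating privacy to generalization are sharp enough; tightening those constants is what moves the crossover to smaller dataset sizes.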

Keywords

Machine Learning; Transfer Theorem; Mathematical Methods

Citation

Jung, Christopher, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Moshe Shenfeld. "A New Analysis of Differential Privacy's Generalization Guarantees." Paper presented at the 11th Innovations in Theoretical Computer Science Conference, Seattle, March 2020.

About The Author

Seth Neel

Technology and Operations Management

More from the Authors

  • Adaptive Machine Unlearning
    Advances in Neural Information Processing Systems (NeurIPS)
    By: Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi and Chris Waites
  • Descent-to-Delete: Gradient-Based Methods for Machine Unlearning
    Faculty Research, Mar 2021
    By: Seth Neel, Aaron Leon Roth and Saeed Sharifi-Malvajerdi
  • Fair Algorithms for Infinite and Contextual Bandits
    Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, 2021
    By: Matthew Joseph, Michael J Kearns, Jamie Morgenstern, Seth Neel and Aaron Leon Roth