Filter Results (78)

Show Results For:
- All HBS Web (319)
- Faculty Publications (78)
- 2023
- Working Paper
An Experimental Design for Anytime-Valid Causal Inference on Multi-Armed Bandits
By: Biyonka Liang and Iavor I. Bojinov
Typically, multi-armed bandit (MAB) experiments are analyzed at the end of the study and thus require the analyst to specify a fixed sample size in advance. However, in many online learning applications, it is advantageous to continuously produce inference on the...
Liang, Biyonka, and Iavor I. Bojinov. "An Experimental Design for Anytime-Valid Causal Inference on Multi-Armed Bandits." Harvard Business School Working Paper, No. 24-057, March 2024.
- 2023
- Article
M4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities, and Models
By: Himabindu Lakkaraju, Xuhong Li, Mengnan Du, Jiamin Chen, Yekun Chai and Haoyi Xiong
While Explainable Artificial Intelligence (XAI) techniques have been widely studied to explain predictions made by deep neural networks, the way to evaluate the faithfulness of explanation results remains challenging, due to the heterogeneity of explanations for...
Keywords: AI and Machine Learning
Lakkaraju, Himabindu, Xuhong Li, Mengnan Du, Jiamin Chen, Yekun Chai, and Haoyi Xiong. "M4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities, and Models." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- 2023
- Article
MoPe: Model Perturbation-based Privacy Attacks on Language Models
By: Marvin Li, Jason Wang, Jeffrey Wang and Seth Neel
Recent work has shown that Large Language Models (LLMs) can unintentionally leak sensitive information present in their training data. In this paper, we present Model Perturbations (MoPe), a new method to identify with high confidence if a given text is in the training...
Li, Marvin, Jason Wang, Jeffrey Wang, and Seth Neel. "MoPe: Model Perturbation-based Privacy Attacks on Language Models." Proceedings of the Conference on Empirical Methods in Natural Language Processing (2023): 13647–13660.
- 2023
- Article
Post Hoc Explanations of Language Models Can Improve Language Models
By: Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh and Himabindu Lakkaraju
Large Language Models (LLMs) have demonstrated remarkable capabilities in performing complex tasks. Moreover, recent research has shown that incorporating human-annotated rationales (e.g., Chain-of-Thought prompting) during in-context learning can significantly enhance...
Krishna, Satyapriya, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, and Himabindu Lakkaraju. "Post Hoc Explanations of Language Models Can Improve Language Models." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- 2023
- Other Article
The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications
By: Mirac Suzgun, Luke Melas-Kyriazi, Suproteem K. Sarkar, Scott Duke Kominers and Stuart Shieber
Innovation is a major driver of economic and social development, and information about many kinds of innovation is embedded in semi-structured data from patents and patent applications. Though the impact and novelty of innovations expressed in patent data are difficult...
Keywords: USPTO; Natural Language Processing; Classification; Summarization; Patent Novelty; Patent Trolls; Patent Enforceability; Patents; Innovation and Invention; Intellectual Property; AI and Machine Learning; Analytics and Data Science
Suzgun, Mirac, Luke Melas-Kyriazi, Suproteem K. Sarkar, Scott Duke Kominers, and Stuart Shieber. "The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications." Conference on Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track 36 (2023).
- 2023
- Article
Verifiable Feature Attributions: A Bridge between Post Hoc Explainability and Inherent Interpretability
By: Usha Bhalla, Suraj Srinivas and Himabindu Lakkaraju
With the increased deployment of machine learning models in various real-world applications, researchers and practitioners alike have emphasized the need for explanations of model behaviour. To this end, two broad strategies have been outlined in prior literature to...
Bhalla, Usha, Suraj Srinivas, and Himabindu Lakkaraju. "Verifiable Feature Attributions: A Bridge between Post Hoc Explainability and Inherent Interpretability." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- 2023
- Article
Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
By: Suraj Srinivas, Sebastian Bordt and Himabindu Lakkaraju
One of the remarkable properties of robust computer vision models is that their input-gradients are often aligned with human perception, referred to in the literature as perceptually-aligned gradients (PAGs). Despite only being trained for classification, PAGs cause...
Srinivas, Suraj, Sebastian Bordt, and Himabindu Lakkaraju. "Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- 2023
- Working Paper
An AI Method to Score Celebrity Visual Potential from Human Faces
By: Flora Feng, Shunyuan Zhang, Xiao Liu, Kannan Srinivasan and Cait Lamberton
Celebrities have extraordinary abilities to attract and influence others. Predicting celebrity visual potential is important in the domains of business, politics, media, and entertainment. Can we use human faces to predict celebrity visual potential? If so, which...
Feng, Flora, Shunyuan Zhang, Xiao Liu, Kannan Srinivasan, and Cait Lamberton. "An AI Method to Score Celebrity Visual Potential from Human Faces." SSRN Working Paper Series, No. 4071188, November 2023.
- October 2023
- Article
Improving Regulatory Effectiveness Through Better Targeting: Evidence from OSHA
By: Matthew S. Johnson, David I. Levine and Michael W. Toffel
We study how a regulator can best target inspections. Our case study is a U.S. Occupational Safety and Health Administration (OSHA) program that randomly allocated some inspections. On average, each inspection averted 2.4 serious injuries (9%) over the next five years....
Keywords: Safety Regulations; Regulations; Regulatory Enforcement; Machine Learning Models; Safety; Operations; Service Operations; Production; Forecasting and Prediction; Decisions; United States
Johnson, Matthew S., David I. Levine, and Michael W. Toffel. "Improving Regulatory Effectiveness Through Better Targeting: Evidence from OSHA." American Economic Journal: Applied Economics 15, no. 4 (October 2023): 30–67. (Profiled in the Regulatory Review.)
- 2023
- Working Paper
In-Context Unlearning: Language Models as Few Shot Unlearners
By: Martin Pawelczyk, Seth Neel and Himabindu Lakkaraju
Machine unlearning, the study of efficiently removing the impact of specific training points on the trained model, has garnered increased attention of late, driven by the need to comply with privacy regulations like the Right to be Forgotten. Although unlearning is...
Pawelczyk, Martin, Seth Neel, and Himabindu Lakkaraju. "In-Context Unlearning: Language Models as Few Shot Unlearners." Working Paper, October 2023.
- 2024
- Working Paper
Generative AI and Creative Problem Solving
By: Léonard Boussioux, Jacqueline N. Lane, Miaomiao Zhang, Vladimir Jacimovic and Karim R. Lakhani
The rapid advances in generative artificial intelligence (AI) open up attractive opportunities for creative problem-solving through human-guided AI partnerships. To explore this potential, we initiated a crowdsourcing challenge focused on sustainable, circular...
Boussioux, Léonard, Jacqueline N. Lane, Miaomiao Zhang, Vladimir Jacimovic, and Karim R. Lakhani. "Generative AI and Creative Problem Solving." Harvard Business School Working Paper, No. 24-005, July 2023. (Revised March 2024.)
- August 2023
- Article
Explaining Machine Learning Models with Interactive Natural Language Conversations Using TalkToModel
By: Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju and Sameer Singh
Practitioners increasingly use machine learning (ML) models, yet models have become more complex and harder to understand. To understand complex models, researchers have proposed techniques to explain model predictions. However, practitioners struggle to use...
Slack, Dylan, Satyapriya Krishna, Himabindu Lakkaraju, and Sameer Singh. "Explaining Machine Learning Models with Interactive Natural Language Conversations Using TalkToModel." Nature Machine Intelligence 5, no. 8 (August 2023): 873–883.
- 2023
- Article
Towards Bridging the Gaps between the Right to Explanation and the Right to Be Forgotten
By: Himabindu Lakkaraju, Satyapriya Krishna and Jiaqi Ma
The Right to Explanation and the Right to be Forgotten are two important principles outlined to regulate algorithmic decision making and data usage in real-world applications. While the right to explanation allows individuals to request an actionable explanation for an...
Keywords: Analytics and Data Science; AI and Machine Learning; Decision Making; Governing Rules, Regulations, and Reforms
Lakkaraju, Himabindu, Satyapriya Krishna, and Jiaqi Ma. "Towards Bridging the Gaps between the Right to Explanation and the Right to Be Forgotten." Proceedings of the 40th International Conference on Machine Learning (ICML) (2023): 17808–17826.
- 2023
- Working Paper
Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness
By: Neil Menghani, Edward McFowland III and Daniel B. Neill
In this paper, we develop a new criterion, "insufficiently justified disparate impact" (IJDI), for assessing whether recommendations (binarized predictions) made by an algorithmic decision support tool are fair. Our novel, utility-based IJDI criterion evaluates false...
Menghani, Neil, Edward McFowland III, and Daniel B. Neill. "Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness." Working Paper, June 2023.
- 2023
- Working Paper
Auditing Predictive Models for Intersectional Biases
By: Kate S. Boxer, Edward McFowland III and Daniel B. Neill
Predictive models that satisfy group fairness criteria in aggregate for members of a protected class, but do not guarantee subgroup fairness, could produce biased predictions for individuals at the intersection of two or more protected classes. To address this risk, we...
Boxer, Kate S., Edward McFowland III, and Daniel B. Neill. "Auditing Predictive Models for Intersectional Biases." Working Paper, June 2023.
- June 2020
- Article
Real-time Data from Mobile Platforms to Evaluate Sustainable Transportation Infrastructure
By: Omar Isaac Asensio, Kevin Alvarez, Arielle Dror, Emerson Wenzel, Catharina Hollauer and Sooji Ha
By displacing gasoline and diesel fuels, electric cars and fleets reduce emissions from the transportation sector, thus offering important public health benefits. However, public confidence in the reliability of charging infrastructure remains a fundamental barrier to...
Keywords: Environmental Sustainability; Transportation; Infrastructure; Behavior; AI and Machine Learning; Demand and Consumers
Asensio, Omar Isaac, Kevin Alvarez, Arielle Dror, Emerson Wenzel, Catharina Hollauer, and Sooji Ha. "Real-time Data from Mobile Platforms to Evaluate Sustainable Transportation Infrastructure." Nature Sustainability 3, no. 6 (June 2020): 463–471.
- 2023
- Article
Exploiting Discovered Regression Discontinuities to Debias Conditioned-on-observable Estimators
By: Benjamin Jakubowski, Sriram Somanchi, Edward McFowland III and Daniel B. Neill
Regression discontinuity (RD) designs are widely used to estimate causal effects in the absence of a randomized experiment. However, standard approaches to RD analysis face two significant limitations. First, they require a priori knowledge of discontinuities in...
Jakubowski, Benjamin, Sriram Somanchi, Edward McFowland III, and Daniel B. Neill. "Exploiting Discovered Regression Discontinuities to Debias Conditioned-on-observable Estimators." Journal of Machine Learning Research 24, no. 133 (2023): 1–57.
- 2023
- Article
Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse
By: Martin Pawelczyk, Teresa Datta, Johannes van den Heuvel, Gjergji Kasneci and Himabindu Lakkaraju
As machine learning models are increasingly being employed to make consequential decisions in real-world settings, it becomes critical to ensure that individuals who are adversely impacted (e.g., loan denied) by the predictions of these models are provided with a means...
Pawelczyk, Martin, Teresa Datta, Johannes van den Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. "Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse." Proceedings of the International Conference on Learning Representations (ICLR) (2023).
- 2023
- Working Paper
Feature Importance Disparities for Data Bias Investigations
By: Peter W. Chang, Leor Fishman and Seth Neel
It is widely held that one cause of downstream bias in classifiers is bias present in the training data. Rectifying such biases may involve context-dependent interventions such as training separate models on subgroups, removing features with bias in the collection...
Chang, Peter W., Leor Fishman, and Seth Neel. "Feature Importance Disparities for Data Bias Investigations." Working Paper, March 2023.
- March–April 2023
- Article
Pricing for Heterogeneous Products: Analytics for Ticket Reselling
By: Michael Alley, Max Biggs, Rim Hariss, Charles Herrmann, Michael Lingzhi Li and Georgia Perakis
Problem definition: We present a data-driven study of the secondary ticket market. In particular, we are primarily concerned with accurately estimating price sensitivity for listed tickets. In this setting, there are many issues including endogeneity, heterogeneity in...
Keywords: Price; Demand and Consumers; AI and Machine Learning; Investment Return; Entertainment and Recreation Industry
Alley, Michael, Max Biggs, Rim Hariss, Charles Herrmann, Michael Lingzhi Li, and Georgia Perakis. "Pricing for Heterogeneous Products: Analytics for Ticket Reselling." Manufacturing & Service Operations Management 25, no. 2 (March–April 2023): 409–426.