Research Summary
By: Iavor I. Bojinov
Over the last decade, technology companies like Amazon, Google, and Netflix have pioneered data-driven research and development processes centered on massive experimentation. However, as companies increase the breadth and scale of their experiments to millions of interconnected customers, existing statistical methods have become inadequate, causing inefficiencies and biased results. The bias is often substantial enough to change the magnitude and sign of the results, leading managers to make incorrect decisions — such as releasing inferior products and dropping promising initiatives. The inefficiencies are similarly costly, slowing the innovation process and inadvertently overexposing customers to harmful changes or needlessly delaying beneficial offerings.
My research develops novel statistical methodologies to address these challenges and enable managers to experiment more rigorously, safely, and efficiently in modern business contexts.
Rigor: Traditional statistical theory overlooks two fundamental factors that, when ignored, lead to biased results. First, incorporating time is essential: customers arrive sequentially, not in batches, and past changes can have a prolonged impact. Second, customers often interact directly (through communications) or indirectly by, for example, competing for a limited resource (like riders vying for drivers on Lyft), so changes to one person’s experience can interfere with the outcomes of others. My research develops methods for designing and analyzing experiments that incorporate time and accommodate interference, either by grouping units to limit the interference or by adjusting for it in the analysis, ensuring that the results are unbiased and robust.
Examples of academic papers:
- Han, K. W., Basse, G., and Bojinov, I. (2024). Population Interference in Panel Experiments. Journal of Econometrics, 238(1), 105565.
- Bojinov, I., and Shepherd, N. (2019). Time Series Experiments and Causal Estimands: Exact Randomization Tests and Trading. Journal of the American Statistical Association, 114(528), 1665-1682.
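As a toy illustration of one interference-limiting tactic mentioned above, grouping interacting units so they share a treatment arm, the sketch below randomizes at the cluster level. The function name, the friendship-cluster example, and all parameters are illustrative assumptions, not an implementation from the papers.

```python
import random

def cluster_randomize(clusters, p=0.5, seed=0):
    """Assign treatment at the cluster level: every unit in a cluster
    receives the same arm, so heavily interacting units are never split
    across treatment and control, limiting interference.

    clusters maps each unit to its cluster id; p is the probability
    that a cluster is treated."""
    rng = random.Random(seed)
    arm = {c: int(rng.random() < p) for c in sorted(set(clusters.values()))}
    return {unit: arm[c] for unit, c in clusters.items()}

# Hypothetical example: six users in three friendship clusters.
membership = {"u1": 0, "u2": 0, "u3": 1, "u4": 1, "u5": 2, "u6": 2}
assignment = cluster_randomize(membership)
```

In practice the clusters themselves must be chosen carefully (for example, by partitioning a social graph), since poorly chosen clusters leave residual interference across cluster boundaries.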
Safety: Managers use experimentation to de-risk the innovation process by limiting their customers’ exposure to negative changes; for example, the Google Play and Apple App stores launch all changes to applications using a sequence of experiments known as a phased release. My work provides frameworks for managers to balance the inherent trade-off between the risk of releasing a negative change and the desire to learn causal effects.
Examples of academic papers:
- Li, Y., Mao, J., and Bojinov, I. (2023). Balancing Risk and Reward: An Automated Phased Release Strategy. Advances in Neural Information Processing Systems, 36.
- Ham, D. W., Lindon, M., Tingley, M., and Bojinov, I. Design-Based Confidence Sequences: A General Approach to Risk Mitigation in Online Experimentation.
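The risk-versus-learning trade-off in a phased release can be sketched as a ramp with a stopping rule: expose a small fraction of users, re-estimate the effect, and widen exposure only if the estimate stays above a risk threshold. The ramp fractions, threshold, and estimator below are illustrative assumptions, not the algorithm from the papers.

```python
def phased_release(estimate_effect, ramp=(0.01, 0.05, 0.25, 1.0), threshold=-0.01):
    """Sketch of a phased release: expose successively larger fractions of
    users, re-estimate the treatment effect after each phase, and halt the
    rollout if the estimate falls below a risk threshold."""
    for fraction in ramp:
        effect = estimate_effect(fraction)
        if effect < threshold:
            # Negative change detected: stop before wider exposure.
            return {"released": False, "stopped_at": fraction, "effect": effect}
    return {"released": True, "stopped_at": ramp[-1], "effect": effect}

# Toy estimator standing in for a real experiment: a harmful change.
result = phased_release(lambda fraction: -0.05)
```

A real deployment would replace the toy estimator with an anytime-valid interval (as in the confidence-sequence work above) so that repeatedly peeking at the estimate does not inflate the false-alarm rate.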
Efficiency: As companies integrate experimentation into their innovation process, managers increasingly seek ways to improve the efficiency of their experiments by achieving the same precision with fewer participants. This is especially important for multi-sided platform companies, like DoorDash and Uber, that use complex experimental designs to overcome interference. My work provides novel experimental designs that draw on optimization techniques to reach a given precision with far fewer participants, drastically reducing the cost of experimentation.
Examples of academic papers:
- Bojinov, I., Simchi-Levi, D., and Zhao, J. (2023). Design and Analysis of Switchback Experiments. Management Science, 69(7), 3759-3777.
- Ni, T., Bojinov, I., and Zhao, J. Design of Panel Experiments with Spatial and Temporal Interference. Available at SSRN 4466598.
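The basic structure of a switchback experiment can be sketched in a few lines: time is cut into blocks, and the entire marketplace is randomized to one arm per block, so treated and control units never compete within a period. The block length and schedule below are illustrative assumptions; choosing them optimally is precisely what the designs in the papers above address.

```python
import random

def switchback_design(num_periods, block_length, seed=0):
    """Sketch of a switchback design: divide time into equal blocks and
    randomize the whole marketplace to treatment (1) or control (0) for
    each block. Because everyone shares one arm at any moment, this
    sidesteps within-period marketplace interference."""
    rng = random.Random(seed)
    assignment = []
    for start in range(0, num_periods, block_length):
        arm = rng.randint(0, 1)
        assignment.extend([arm] * min(block_length, num_periods - start))
    return assignment

# Hypothetical 24-hour experiment with 4-hour blocks.
schedule = switchback_design(num_periods=24, block_length=4)
```

The analysis then compares outcomes across treated and control blocks, typically discarding or modeling the periods just after each switch, where carryover from the previous arm lingers.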
Currently, I am particularly interested in studying the application of experimentation to the operationalization of artificial intelligence (AI), the process by which AI products are developed and integrated into real-world applications. Two idiosyncrasies make experimentation particularly challenging in this context. First, AI products often interact with other algorithms, products, or systems, causing unintended consequences. Second, AI products are built by iterating between experimentation and development, allowing managers to identify improvement opportunities that often lead to product changes during the experiment.
Examples of academic papers:
- Rajkumar, K., Saint-Jacques, G., Bojinov, I., Brynjolfsson, E., and Aral, S. (2022). A Causal Test of the Strength of Weak Ties. Science, 377(6612), 1304-1310.
- Yue, D., Hamilton, P., and Bojinov, I. Nailing Prediction: Experimental Evidence on the Value of Tools in Predictive Model Development.
Much of my work in this area has been summarized in the following practitioner-focused articles:
- Bojinov, I. (2023). Keep Your AI Projects on Track. Harvard Business Review, 101(6), 53-59.
- Bojinov, I., and Gupta, S. (2022). Online Experimentation: Benefits, Operational and Methodological Challenges, and Scaling Guide. Harvard Data Science Review, 4(3).
- Bojinov, I., Saint-Jacques, G., and Tingley, M. (2020). Avoid the Pitfalls of A/B Testing. Harvard Business Review, 98(2), 48-53.