Project Abstract/Summary
This research project will develop new theory and methods for assessing the sensitivity of causal inferences to violations of underlying assumptions. Applied causal inference work in the social, behavioral, and economic sciences often relies on a handful of methods commonly known as “quasi-experimental” designs. However, the validity of these methods requires strong assumptions about the data-generating process, many of which are difficult to defend in practice. What if these assumptions are false? In such cases, sensitivity analyses play an essential role by allowing researchers to quantify how strong violations of key assumptions need to be in order to substantially change a research conclusion. This project will develop a unified theoretical framework that allows researchers to easily quantify how violations of key assumptions affect causal effect estimates from such methods. These results will allow applied scientists and decision makers to draw robust and trustworthy conclusions from valid but imperfect scientific evidence. Graduate and undergraduate students will be trained and mentored. Dissemination activities for the new methods will include redesigned academic courses, workshops on sensitivity analysis for practitioners, and the development of open-source software.
This research project will develop a comprehensive suite of sensitivity analysis tools for popular identification strategies in causal inference, such as instrumental variables, difference-in-differences, and regression discontinuity designs. The investigator will bound the bias due to violations of the assumptions commonly invoked in these settings: violations of the exclusion and independence restrictions in instrumental variable estimation, of the parallel trends assumption in difference-in-differences designs, and of the continuity assumption at the cutoff in regression discontinuity designs. Analytical formulas will be derived for the bias due to such violations, along with easily interpretable sensitivity statistics for routine reporting that quickly communicate the robustness of a result. Other innovations will include allowing for multiple, simultaneous violations of assumptions; incorporating expert knowledge on the relative importance of variables to bound the bias; and enabling valid statistical inference with both classical methods and modern machine learning algorithms.
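To make the notion of an easily interpretable sensitivity statistic concrete, the minimal Python sketch below computes the robustness value of Cinelli and Hazlett (2020), a statistic from the investigator’s prior work on omitted variable bias in linear regression: the minimal strength of association (expressed as a partial R²) that unobserved confounders would need with both the treatment and the outcome to reduce the estimate by a fraction q. The function and its inputs are illustrative assumptions, not part of this award; the proposed research extends this style of analysis to instrumental variables, difference-in-differences, and regression discontinuity designs.

    import math

    def robustness_value(t_stat: float, dof: int, q: float = 1.0) -> float:
        """Robustness value RV_q of Cinelli and Hazlett (2020).

        Returns the partial R^2 that unobserved confounders must have with
        both the treatment and the outcome in order to reduce the point
        estimate by 100*q percent.
        """
        # q times the partial Cohen's f of the treatment with the outcome.
        f_q = q * abs(t_stat) / math.sqrt(dof)
        return 0.5 * (math.sqrt(f_q**4 + 4 * f_q**2) - f_q**2)

    # Hypothetical inputs: t-statistic of 4.0 on 780 residual degrees of freedom.
    print(round(robustness_value(t_stat=4.0, dof=780), 3))  # 0.133

With these hypothetical inputs, confounders would need to explain about 13% of the residual variance of both the treatment and the outcome to fully explain away the estimate (q = 1); any weaker confounding could not overturn the result.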
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
Principal Investigator
Carlos Cinelli – University of Washington, Seattle, WA
Funders
National Science Foundation
Funding Amount
$350,000.00
Project Start Date
08/15/2024
Project End Date
07/31/2027
Will the project remain active for the next two years?
The project has more than two years remaining
Source: National Science Foundation
Please be advised that recent changes in federal funding policy may have affected the project’s scope and status.
Updated: April 2025