Introduction to Evidence-Based Decision-Making: Examples in Economic Development


  • Why randomization is important for obtaining causal evidence and conducting impact evaluations.
  • Why causal claims implicitly make reference to a “counterfactual” outcome.
  • Practical advice for study design, as well as ideas for collaboration between organizations, researchers, and governments.
Taught by
  • Quentin Palfrey
40 mins


If you invest in things that do not work rather than those that do, real people’s lives may be affected in dramatic ways. But how do you know what works? Through real-world examples and contemporary debates (e.g., the expansion of health insurance, or crime and violence prevention), you will learn the basic principles of evidence-based decision-making, how they relate to “counterfactual” reasoning, and frameworks for applying rigorous, scientific methods to solve problems in ways that are both effective and cost-effective.


“Introduction to Evaluations” by J-PAL - an overarching summary of “impact evaluations”: their history, how to randomize effectively in experiments, why randomization is important, and common concerns in study design.

“Promoting Policies that Work: Six Steps for the Commission on Evidence-Based Policymaking” by Quentin Palfrey - a list of six concrete steps policymakers “can take to institutionalize the use of administrative data to support policy-relevant research and evidence-informed policymaking.”

“Incentives for Immunization” by J-PAL - a write-up of a study referenced in Quentin’s lecture, on how offering incentives in an evaluation, despite their cost, may actually decrease the marginal cost of participation in a social program.

Watch “Social experiments to fight poverty,” in which Esther Duflo, Professor of Economics at MIT, explains in further detail why social experiments may help policymakers with poverty alleviation, and why experimental designs provide compelling and unforeseen insights where other study approaches may fail.

Use the Methods Guides, a set of online experimental tools written by Evidence in Governance and Politics (EGAP). These guides provide both technical and non-technical discussions of challenges commonly faced in causal inference, why randomization is important to experimental design and causal attribution, and cases where non-experimental designs may also lead to causal insights. Example code, written in the R programming language, is accessible online, along with example vignettes.
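The core idea behind these guides can be sketched in a few lines of code. The simulation below is a hypothetical illustration written in Python (EGAP's own examples are in R; all variable names here are invented for the sketch). It gives each unit two "potential outcomes," reveals only one of them depending on a random treatment assignment, and shows that the observed difference in means recovers the average treatment effect, even though the counterfactual outcome for any single unit is never observed.

```python
import random

random.seed(0)

# Hypothetical potential outcomes for n units: y0 if untreated, y1 if treated.
# In reality only one of the two is ever observed per unit.
n = 10_000
y0 = [random.gauss(10, 2) for _ in range(n)]
y1 = [a + 1.5 + random.gauss(0, 1) for a in y0]  # true effect averages 1.5

# The "true" average treatment effect, knowable only in a simulation.
true_ate = sum(b - a for a, b in zip(y0, y1)) / n

# Randomly assign half the units to treatment.
is_treated = [False] * n
for i in random.sample(range(n), n // 2):
    is_treated[i] = True

# Each unit reveals only the outcome matching its assignment.
obs_treated = [y1[i] for i in range(n) if is_treated[i]]
obs_control = [y0[i] for i in range(n) if not is_treated[i]]

# Because randomization makes the two groups comparable on average,
# the difference in observed means estimates the average treatment effect.
estimate = (sum(obs_treated) / len(obs_treated)
            - sum(obs_control) / len(obs_control))

print(f"true ATE: {true_ate:.2f}, estimate: {estimate:.2f}")
```

The same comparison without randomization (e.g., letting units with high y0 self-select into treatment) would bias the difference in means, which is why the guides stress random assignment for causal attribution.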

Questions? Need help with a project?
Ask the public data science community at The Network of Innovators (NoI). NoI is a peer learning platform for finding practitioners with expertise and experience.