Week 1 - Interrupted Time Series

Due MON Mar 23rd

SUBMIT LAB


Resources:

Bernal, J. L., Cummins, S., & Gasparrini, A. (2017). Interrupted time series regression for the evaluation of public health interventions: a tutorial. International Journal of Epidemiology, 46(1), 348-355. [PDF]

Chapter on Interrupted Time Series [PDF]: From Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.
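
For orientation, here is a minimal sketch of the segmented regression model described in the Bernal et al. tutorial, fit in Python with statsmodels on simulated data. The variable names (outcome, time, post, time_since) and the simulated effect sizes are illustrative assumptions, not part of the readings.

```python
# Interrupted time series (segmented regression) sketch on simulated data.
# Hypothetical names and effect sizes for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
T = 100                      # monthly observations
cut = 60                     # month the policy takes effect (assumed)
df = pd.DataFrame({"time": np.arange(T)})
df["post"] = (df["time"] >= cut).astype(int)        # level-change dummy
df["time_since"] = np.maximum(0, df["time"] - cut)  # slope-change term

# simulate an outcome with a level drop and a flatter post-policy trend
df["outcome"] = (50 + 0.3 * df["time"] - 5 * df["post"]
                 - 0.2 * df["time_since"] + rng.normal(0, 2, T))

# coefficients: time = pre-trend, post = immediate level change,
# time_since = change in trend after the intervention
its = smf.ols("outcome ~ time + post + time_since", data=df).fit()
print(its.params)
```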




Week 2 - Difference-in-Difference Models

Due MON Mar 30th

SUBMIT LAB


Review:

Hypothesis testing with dummy variables: lecture notes

Varieties of the counterfactual: lecture notes

Reference:

Wing, C., Simon, K., & Bello-Gomez, R. A. (2018). Designing difference in difference studies: best practices for public health policy research. Annual Review of Public Health, 39. [pdf]
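
As a companion to the lecture notes on hypothesis testing with dummy variables, here is a minimal two-group, two-period difference-in-differences sketch in Python on simulated data. The data frame, variable names, and effect sizes are hypothetical; the point is that the coefficient on the treated-by-post interaction is the difference-in-differences estimate.

```python
# Classic 2x2 difference-in-differences sketch on simulated data.
# All names and effect sizes are made up for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = treatment group
    "post":    rng.integers(0, 2, n),   # 1 = after the policy
})
# a true treatment effect of 3 appears only for treated units after the policy
df["y"] = (10 + 2 * df["treated"] + 1 * df["post"]
           + 3 * df["treated"] * df["post"] + rng.normal(0, 1, n))

# the interaction term treated:post recovers the diff-in-diff estimate
did = smf.ols("y ~ treated + post + treated:post", data=df).fit()
print(did.params)
```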




Week 3 - Panel Data with Fixed Effects

Due MON Apr 6th

SUBMIT LAB



useful notes on interpreting output

random effects example

In this example, the group-level variable is correlated with the outcome but uncorrelated with the policy variable. Omitting it therefore does not bias the policy estimate, but including it increases efficiency.

Recall the taxonomy of control variables.

Random effects are Type A controls; fixed effects are Type B controls.
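
The complementary case is what motivates fixed effects: when an unobserved, time-invariant group trait is correlated with the policy variable as well as the outcome, omitting it does bias the policy estimate, and group fixed effects absorb it. Here is a small simulated panel in Python illustrating that contrast; all names and effect sizes are hypothetical.

```python
# Pooled OLS vs. group fixed effects when an unobserved group trait
# is correlated with both the policy and the outcome (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
groups, periods = 50, 10
g = np.repeat(np.arange(groups), periods)
trait = rng.normal(0, 1, groups)[g]                 # unobserved, time-invariant
# groups with a high trait are more likely to adopt the policy
policy = (rng.uniform(size=g.size) < 0.3 + 0.2 * (trait > 0)).astype(int)
y = 1.0 * policy + 2.0 * trait + rng.normal(0, 1, g.size)   # true effect = 1
df = pd.DataFrame({"y": y, "policy": policy, "group": g})

naive = smf.ols("y ~ policy", data=df).fit()            # biased upward
fe = smf.ols("y ~ policy + C(group)", data=df).fit()    # dummies absorb the trait
print(naive.params["policy"], fe.params["policy"])
```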









Week 4 - Instrumental Variables

Due MON Apr 13th

SUBMIT LAB






Week 5 - Regression Discontinuity Design

Due MON Apr 20th

SUBMIT LAB








Week 6 - Logistic Regression

Due MON Apr 27th

SUBMIT LAB




Week 7 - Propensity Score Matching

video overview

Due MON May 4th

SUBMIT LAB




Review of Causal Analysis with Observational Data



Evidence-Based Practices

What does it mean to live in an evidence-based world? How do we become more data-driven?

It turns out that using data to improve decision-making and organizational performance is not a trivial affair because of a little problem called omitted variable bias (correlation does not equal causation). As a result, we need to combine judicious analytical techniques with feasible research designs to understand the causal impact of social programs.
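
To see what omitted variable bias looks like in practice, here is a small simulation in Python with hypothetical variable names and effect sizes: an unobserved trait (labeled "motivation" here) drives both program participation and the outcome, so a naive regression of the outcome on participation overstates the program's effect.

```python
# Omitted variable bias in miniature: the confounder is unobserved,
# so the naive estimate of the program effect is too large.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
motivation = rng.normal(size=n)                     # unobserved confounder
program = (motivation + rng.normal(size=n) > 0).astype(float)
outcome = 2.0 * program + 3.0 * motivation + rng.normal(size=n)  # true effect = 2

naive = sm.OLS(outcome, sm.add_constant(program)).fit()
adjusted = sm.OLS(outcome, sm.add_constant(
    np.column_stack([program, motivation]))).fit()
# naive is biased upward; adjusted recovers roughly 2
print(naive.params[1], adjusted.params[1])
```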

Here is a great introduction to a case study that uses evaluation to understand the impact of a government program, getting past anecdotes to measure program impact directly.




Understanding Causal Impact Without Randomized Control Trials

In most cases we don’t have the resources for large-scale Randomized Control Trials. They typically cost millions of dollars, are sometimes unethical, and are often not feasible. For example, does free trade prevent war? How do you randomize free trade across countries?

Statisticians and econometricians have spent 75 years developing powerful regression tools that can be used with observational data and clever quasi-experimental research designs to tease out program impact when RCTs are not possible. The courses in the Foundations of Program Evaluation sequence build the tools to deploy these methods.

Let’s start with a simple example. Is caffeine good for you?



What evidence is used to support these assertions? [ link ]

Can you conduct a Randomized Control Trial to study the effects of caffeine on mental health over a long period of time? This would require us to demand that some individuals who enjoy coffee not consume it for long periods (several months if studying depression, several years if studying heart health, diabetes, or cancer), and to force people who don’t like coffee to drink it daily for years.

As you might expect, an RCT would be challenging. As a result, most of our evidence comes from long-term observational studies in which participants self-report daily coffee consumption and physical health is measured through regular physician check-ups and self-reported health measures. For example, one of the most important public health studies began in 1976 with a sample of 121,000 nurses and has followed the cohort for decades [see the Nurses’ Health Study]. Does evidence from this study represent correlation or causation?

How can we be sure we are measuring the causal impact of coffee on health?


Why is evidence-based management hard?

Just listen to this summary of current knowledge on the topic, then try to translate it succinctly into a rule of thumb physicians could use when deciding whether to recommend coffee to patients.




Estimating Program Impact




Program Impact

This course provides foundational skills in quantitative program evaluation:

Reichardt, C. S., & Bormann, C. A. (1994). Using regression models to estimate program effects. Handbook of practical program evaluation, 417-455. [pdf]

Gertler, P. J., Martinez, S., Premand, P., Rawlings, L. B., & Vermeersch, C. M. (2016). Impact evaluation in practice. The World Bank. [pdf]

The Broader Field of Evaluation

Program evaluation is a large field that deploys a diversity of methodologies beyond quantitative modeling and impact analysis. We focus heavily on quantitative skills in Foundations of Evaluation I, II, and III because working with data is hard, and it takes several courses to build that skill set. Qualitative and case-study approaches build from the same foundations in research design, so you can develop those skills more fully by drawing on your knowledge of formal modeling and inference.

For some useful context on evaluation as a field, this short (six-page) overview is helpful:

McNamara, C. (2008). Basic guide to program evaluation. Free Management Library. [pdf]

And to get a flavor for debates around approaches to measuring program impact in evaluation:

White, H. (2010). A contribution to current debates in impact evaluation. Evaluation, 16(2), 153-164. [pdf]



Varieties of the Counterfactual

Description

This week introduces the notion of counterfactual reasoning using quasi-experimental design.

Learning Objectives

Lecture Materials

Cook, T. D., Scriven, M., Coryn, C. L., & Evergreen, S. D. (2010). Contemporary thinking about causation in evaluation: A dialogue with Tom Cook and Michael Scriven. American Journal of Evaluation, 31(1), 105-117. [ LINK ]

Skim: Gertler, P. J., Martinez, S., Premand, P., Rawlings, L. B., & Vermeersch, C. M. (2016). Impact evaluation in practice. The World Bank.

Key Take-Aways

We rarely have the resources or opportunity to use Randomized Control Trials (RCTs) in policy and management settings. A growing body of quasi-experimental methodologies allows us to reproduce many of the features of RCTs and make strong causal claims when certain conditions are met.