Chapter 3 of 6

Supporting Parallel Trends

Difference-in-Differences

We can't prove — or even really test — parallel trends.

The parallel trends assumption is about a counterfactual: what would have happened to the treated group if treatment had not occurred. We can never observe this counterfactual.

No test of the data can confirm or disprove parallel trends. What we can do is look for suggestive evidence that makes the assumption more or less plausible.

If these tests fail, that makes parallel trends less plausible. And that's about it. But that's still worth knowing.

Two common approaches:

  1. Test of prior trends
  2. Placebo test

Test of prior trends: were the groups trending similarly before treatment?

If the treated and control groups were heading in the same direction before treatment, that's a good clue they would have continued similarly without treatment.

Parallel trends looks plausible

[Chart: outcome by Season for both groups, seasons 1–6, with the treatment date marked. The gap between the lines stays constant before treatment.]

Stable gap before treatment. Both groups trend together.

Parallel trends looks unlikely

[Chart: outcome by Season for the "big" and "small" groups, seasons 1–6, with the treatment date marked. The gap between the lines narrows before treatment.]

The gap was already shrinking before treatment — this trend would likely continue regardless.


You can also test this statistically by estimating a regression on pre-treatment data that allows the time trend to differ by group:

Y = \alpha_g + \beta_1 \cdot \text{Time} + \beta_2 \cdot \text{Time} \times \text{Group} + \varepsilon

A test of \beta_2 = 0 tells you whether the pre-treatment trends differ. But remember: this test often has low statistical power, so a "pass" doesn't guarantee parallel trends holds.
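A minimal sketch of this prior-trends test, using hypothetical data and statsmodels (both the data and the variable names are illustrative, not from the chapter's example):

```python
# Prior-trends test sketch: regress the pre-treatment outcome on
# Time, Group, and their interaction. The interaction coefficient
# estimates the *difference* in pre-treatment slopes between groups.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical pre-treatment panel: two groups observed over 4 seasons.
pre = pd.DataFrame({
    "season": np.tile(np.arange(1, 5), 2),
    "group":  np.repeat([0, 1], 4),  # 1 = the group later treated
})
# Both groups are given the same slope (0.1), so the interaction
# coefficient should come out near zero.
pre["y"] = 2.0 + 0.5 * pre["group"] + 0.1 * pre["season"] \
           + rng.normal(0, 0.02, 8)

model = smf.ols("y ~ season * group", data=pre).fit()
print(model.params["season:group"])   # estimated slope difference (~0)
print(model.pvalues["season:group"])  # test of beta_2 = 0
```

A small, statistically insignificant interaction coefficient is the "pass" described above; a large or significant one suggests the groups were already diverging.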


Placebo test: pretend treatment happened earlier. Do you still find an effect?

The idea is simple: take only the pre-treatment data, pick a fake treatment date, and estimate DiD. If you find an "effect" where there shouldn't be one, that's a warning of a potential parallel trends violation.

Placebo test steps:

  1. Use only data from before the actual treatment went into effect.
  2. Pick a fake treatment date within that pre-treatment window.
  3. Estimate DiD using the fake treatment. The "treated" group stays the same, but the "after" period is now based on your fake date.
  4. If you find a significant "effect" where there shouldn't be one, that's a warning for potential parallel trends violation.

Note: A placebo test requires multiple pre-treatment periods so you can split them into fake "before" and "after" windows. In our timekeeping rule example, we have two pre-treatment seasons (21/22 and 22/23) — just enough to run a basic placebo test by pretending the rule started in 22/23. With more pre-treatment data, placebo tests become an even more powerful diagnostic tool.
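The steps above can be sketched as a classic two-by-two DiD computed on pre-treatment data only (the numbers below are hypothetical, not the chapter's actual data):

```python
# Placebo test sketch: the real treatment starts at season 3,
# so we pretend it started at season 2 and run DiD on seasons 1-2.
import pandas as pd

pre = pd.DataFrame({
    "season":  [1, 1, 2, 2],
    "treated": [0, 1, 0, 1],          # group that is later actually treated
    "y":       [2.4, 2.9, 2.5, 3.0],  # hypothetical group means
})

fake_start = 2
pre["post"] = (pre["season"] >= fake_start).astype(int)

# DiD = (treated after - treated before) - (control after - control before)
means = pre.groupby(["treated", "post"])["y"].mean()
placebo_did = (means.loc[(1, 1)] - means.loc[(1, 0)]) \
            - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(placebo_did)  # near zero here: no placebo "effect"
```

In this made-up example both groups rise by the same 0.1 between the fake "before" and "after" periods, so the placebo DiD is essentially zero, which is what you hope to see.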

A nonzero DiD "effect" at a fake treatment date tells us that the non-treatment changes in the treated group don't cancel out the non-treatment changes in the control group. This doesn't necessarily mean parallel trends is violated, but it does demand an explanation of what might have happened at the fake treatment date, and an additional argument for why parallel trends would still hold at the real treatment date.


What if you conclude that parallel trends probably doesn't hold?

You don't have to give up entirely. There are a few options:

Partial identification

If you think the violation is small, you can reason: "if parallel trends were violated by an amount x, then my estimate is biased by x." Taking a range of plausible violation amounts gives you a range of plausible effect estimates, rather than a single point estimate.

Add control variables

Sometimes parallel trends doesn't hold unconditionally, but does hold after accounting for certain covariates. This is the conditional parallel trends approach.
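A sketch of the conditional approach, again with simulated data and statsmodels (the covariate `x` and all coefficients are hypothetical): the DiD regression simply adds the covariate whose differing evolution across groups would otherwise break unconditional parallel trends.

```python
# Conditional parallel trends sketch: a covariate x drifts upward in
# the treated group after treatment. Controlling for x in the DiD
# regression recovers the simulated treatment effect of 0.5.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "post":    rng.integers(0, 2, n),
})
# x evolves differently for the treated group after treatment
df["x"] = rng.normal(0, 1, n) + 0.5 * df["treated"] * df["post"]
df["y"] = (1.0 + 0.2 * df["treated"] + 0.3 * df["post"] + 0.4 * df["x"]
           + 0.5 * df["treated"] * df["post"]       # true effect = 0.5
           + rng.normal(0, 0.1, n))

adj = smf.ols("y ~ treated * post + x", data=df).fit()
print(adj.params["treated:post"])  # close to the true 0.5
```

Without `x` in the regression, the interaction term would absorb both the treatment effect and the covariate's drift; with it included, the DiD estimate is close to the simulated truth.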

Try a different design

If your prior trends test looks bad and adding covariates doesn't fix it, DiD may not be the right tool for your question. Consider other approaches like synthetic control.

We can never prove parallel trends — but we can build a credible case for it by showing similar pre-trends and passing placebo tests.
