Are We Measuring Health Equity All Wrong?
By Jon Scaccia

Picture this: a city rolls out a new school-based nutrition program. Kids are encouraged to eat more fruits and vegetables, teachers add fun lessons on healthy food choices, and the results look great—at least on paper. Test scores tick upward, and obesity rates decline.

But dig deeper, and a troubling pattern emerges. The biggest improvements happen in wealthier neighborhoods with stable schools and engaged parents. In lower-income areas, participation lags, and the benefits never materialize.

Did the program work? That depends on what you measure.

A new scoping review from the University of Bordeaux reveals a surprising blind spot in public health research: while most programs claim to promote equity, very few actually measure whether they reduce inequities.

When “Equity” Isn’t Measured, It’s Just a Slogan

The research team, led by Iñaki Blanco-Cazeaux and colleagues, analyzed nearly 500 peer-reviewed studies evaluating complex public health interventions. Only 19 studies met their criteria—meaning they both targeted health inequalities and assessed outcomes across socioeconomic groups.

Even among those 19, most stopped short of actually measuring equity. They compared results by income or education level but never used formal inequality indicators such as the Gini index, concentration curves, or Distributional Cost-Effectiveness Analysis (DCEA).

That’s like trying to fix the economy without looking at the income gap.

The authors put it bluntly: “Despite the stated goal of reducing health inequalities, evaluations often fail to measure this impact explicitly.”

The Paradox at the Heart of Public Health

This gap isn’t just academic—it can distort real-world decisions. Programs that raise overall health outcomes can still widen disparities, a phenomenon known as the inverse equity hypothesis.

When new interventions—especially those involving technology, education, or digital tools—are rolled out, they often reach the most advantaged first. Wealthier, better-connected groups have the time, access, and literacy to benefit quickly, while marginalized populations catch up years later, if ever.

Without tracking how outcomes are distributed, policymakers may declare victory too soon, unaware that progress has deepened the divide.

What the Review Found

The Bordeaux team reviewed interventions ranging from smoking prevention in Danish schools to urban renewal projects in Glasgow.
Here’s what they found:

  • 10 studies used randomized controlled trials (RCTs).
  • 6 used quasi-experimental designs.
  • Only 4 applied a difference-in-differences approach to track changes across groups.
  • None used dedicated inequality indicators.
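The difference-in-differences approach mentioned above is, at its core, simple arithmetic on four group means: the change in the treated group minus the change in the comparison group. A minimal sketch on invented numbers (not data from any study in the review):

```python
# Difference-in-differences on toy data: the estimated effect is the
# treated group's change minus the comparison group's change over the
# same period, which nets out trends affecting both groups.
treated_pre, treated_post = 62.0, 70.0   # hypothetical outcome means
control_pre, control_post = 60.0, 63.0

did = (treated_post - treated_pre) - (control_post - control_pre)
print(did)  # 5.0: an 8-point gain, of which 3 points would have happened anyway
```

Real applications estimate this with regression so they can add covariates and standard errors, but the logic is exactly the subtraction above.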

The majority relied on subgroup analyses—comparing results by education, income, or neighborhood deprivation. While helpful, these analyses show differences between groups, not whether the gap itself is shrinking.

Imagine two runners starting a race: one at the starting line, the other already halfway down the track. If both speed up equally, the gap between them stays the same—but that's invisible if you only measure average pace.
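The runners' problem takes only a few lines to demonstrate. In this sketch (the numbers are invented), both groups improve by the same five points: the average rises, yet the absolute gap does not move an inch.

```python
# Two groups improve by the same five points: the average rises,
# but the absolute gap between them is unchanged. Invented numbers.
low_ses  = {"before": 50.0, "after": 55.0}   # hypothetical outcome scores
high_ses = {"before": 70.0, "after": 75.0}

for t in ("before", "after"):
    avg = (low_ses[t] + high_ses[t]) / 2
    abs_gap = high_ses[t] - low_ses[t]
    print(f"{t}: average={avg}, gap={abs_gap}")
# before: average=60.0, gap=20.0
# after: average=65.0, gap=20.0  -- progress on average, none on equity
```

An evaluation that reports only the average would call this a success; one that reports the gap would call it a standstill.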

Why Economic Tools Matter

Public health has long borrowed ideas from social science—think “social determinants of health” or “proportionate universalism.” Yet when it comes to measuring fairness, the field still lags behind economics.

Economists routinely use metrics such as:

  • Gini index (inequality across a population)
  • Entropy and interquartile measures (dispersion of outcomes)
  • Distributional Cost-Effectiveness Analysis (DCEA) (weighing efficiency against equity trade-offs)

These tools allow evaluators to ask not just “Did it work?” but “Who did it work for?” and “Did it narrow or widen the gap?”
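None of these metrics is exotic to compute. As a concrete illustration, here is a minimal Gini index function using a standard closed-form expression over sorted values—a generic sketch, not tied to any study in the review:

```python
def gini(values):
    """Gini index of a distribution: 0 = perfect equality, -> 1 = one
    person holds everything. Uses the rank-weighted closed form."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # 0.0  (perfect equality)
print(gini([0, 0, 0, 40]))     # 0.75 (one group holds everything)
```

Applied to a health outcome—say, screening rates across neighborhoods—before and after an intervention, a falling Gini means the distribution is becoming more equal, regardless of what the average is doing.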

The Bordeaux review argues that it’s time for public health researchers to embrace these methods—to ensure equity isn’t treated as a moral add-on, but as a measurable outcome.

What This Means in Practice

For local health departments and NGOs:

  • Build equity metrics early. Collect socioeconomic data from the start, not as an afterthought.
  • Go beyond averages. Report absolute and relative differences across key subgroups.
  • Borrow from economics. Consider using the Gini index or DCEA to assess distributional impacts.
  • Collaborate across disciplines. Partner with economists, statisticians, and behavioral scientists.
  • Leverage existing tools. WHO’s Health Equity Assessment Toolkit (HEAT) offers ready-to-use indicators for tracking progress.

These changes don’t require vast new budgets—just intentional design and transparent reporting.

Visualizing the Gap

The review recommends incorporating visual storytelling—simple charts or infographics showing how outcomes differ by income, race, or geography. A “before-and-after” equity curve can be more persuasive to funders and community leaders than a dense regression table.

Imagine a dashboard where community programs can see at a glance whether their efforts are narrowing or widening local health gaps. That’s the future of evidence-based equity.

Barriers and Future Directions

So why hasn’t this become standard practice?

  • Data limitations: Many projects don’t collect consistent socioeconomic indicators.
  • Complex methods: Few evaluators are trained in economic modeling.
  • Policy inertia: Funders and agencies still reward efficiency and scale, not equity.

The authors suggest new directions, including machine-learning-based causal inference methods like causal forests that can detect hidden inequities without pre-specifying every subgroup. These techniques could uncover where interventions help—or harm—different populations.
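To make the idea concrete without reproducing a full causal forest, here is a much simpler stand-in on synthetic data: a "T-learner" that fits separate outcome models for treated and untreated people, then compares their predictions to estimate how the effect varies with a covariate. This is an illustrative sketch of heterogeneous-effect estimation, not the authors' method, and the data are fabricated so that the advantaged benefit more.

```python
# Heterogeneous treatment effects via a simple T-learner (an illustrative
# stand-in for causal forests): fit one model per treatment arm, then
# difference the predictions to see who benefits most. Synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
income = rng.uniform(0, 1, n)            # hypothetical SES covariate
treated = rng.integers(0, 2, n)
true_effect = 2.0 * income               # by construction, richer = bigger benefit
outcome = income + treated * true_effect + rng.normal(0, 0.1, n)

X = income.reshape(-1, 1)
m1 = GradientBoostingRegressor().fit(X[treated == 1], outcome[treated == 1])
m0 = GradientBoostingRegressor().fit(X[treated == 0], outcome[treated == 0])

cate = m1.predict(X) - m0.predict(X)     # estimated effect per person
low = cate[income < 0.5].mean()
high = cate[income >= 0.5].mean()
print(f"estimated effect, lower-income half: {low:.2f}")
print(f"estimated effect, higher-income half: {high:.2f}")
```

The method flags the inequity automatically: the estimated effect for the higher-income half comes out well above the lower-income half, without anyone pre-specifying income as the subgroup to check.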

From Talk to Metrics

If health equity is the north star of modern public health, then our compass—evaluation—needs recalibration. As this review shows, the field has made remarkable progress in identifying disparities but remains surprisingly unscientific about measuring whether we’re closing them.

Equity can’t be assumed; it must be quantified.

Conversation Starters

  • How might your agency track who benefits most from your programs?
  • What trade-offs between efficiency and equity are you willing to make?
  • Could collaboration with economists strengthen your evaluations?

Because until we measure fairness as carefully as we measure efficiency, we’ll keep celebrating half victories.
