
The Evidence Problem in Paying for Health

By Jeff Cribbs | June 23, 2016

We should practice more evidence-based medicine, ok?  And to the extent the mechanisms of payment can encourage the practice of evidence-based medicine, we should do that too, wouldn’t you agree? And let’s encourage consumers everywhere to make evidence-based lifestyle decisions. Sound good? Great blog. I enjoyed this one. Have a great week!

Wait… sorry, just a second. Can we just clarify real quick what we mean by “evidence”?

Usually, of course, we mean what comes out of research. Most especially, we mean research conducted using the “gold standard” of research methodologies (randomized controlled clinical trials), subjected to peer review, and published in a few dozen prestigious journals. There are some practical limitations to this evidence. It usually takes a long time and a lot of money to create the pristine conditions needed to isolate a variable (a medical or pharmaceutical intervention, for example) and determine what effect it has on health. That’s why we are always grubbing for money “for research”. That shortage of resources ultimately caps the number of experiments we can run. And even when we find a winner (an intervention that does better for health than the control), it takes a while to get the word out to the clinicians making the decisions. And once it is actually used in the field, where there are plenty of confounding variables that were deliberately kept out of the study, the outcomes are often different.

If the “gold standard” were only expensive, limited in scope, unrepresentative of the real world and painfully slow, that would be frustrating, but hey, gold don’t come cheap, right? It’s gold! So if those were its only problems, we could just set about the job of getting the resources, processes, and priorities in place to get the “gold standard” evidence we need to improve population health. Onward.

But what if the “gold standard” isn’t so gold after all? In practice, clinical trials sometimes conflict in their findings. It is alarmingly difficult to reproduce the exact findings we declare to meet the “gold standard” of evidence.  You don’t have to be a cynic to suspect that somehow all that grubbing for research money might introduce, shall we say,  a certain penchant for finding something exciting. And of course when a new intervention shows significant improvement over the standard of care, its inventor stands to make a lot of money. More reason for a researcher to find what they set out looking for. So maybe not outright fraud (though, now that you mention it, there are some creeps out there), but a subtle practice of choosing datasets that deliver statistical significance in favor of a “finding” that is likely to be published (a bias known as “p-hacking” in the biz).
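To see how easy it is to manufacture “significance” from noise, here is a toy simulation (my own illustration, not from any particular study): a researcher tests twenty candidate outcomes, none of which has any real effect, and still walks away with at least one p < 0.05 “finding” most of the time.

```python
import random

random.seed(0)

def significant_by_chance(n_outcomes=20, alpha=0.05, trials=10_000):
    """Fraction of simulated studies that report at least one 'significant'
    result when every tested outcome is pure noise. Under the null
    hypothesis, each p-value is uniformly distributed on [0, 1]."""
    hits = 0
    for _ in range(trials):
        if any(random.random() < alpha for _ in range(n_outcomes)):
            hits += 1
    return hits / trials

print(significant_by_chance())  # roughly 0.64, matching 1 - 0.95**20
```

In other words, with twenty null outcomes and a 5% threshold, chance alone delivers a publishable-looking result about two-thirds of the time. That is the mathematical engine behind p-hacking.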

[Figure 2. PPV (probability that a research finding is true) as a function of the pre-study odds, for various numbers of conducted studies, n. Panels correspond to power of 0.20, 0.50, and 0.80. Source: “Why Most Published Research Findings Are False,” John P. A. Ioannidis]

This is the domain of meta-research (that is, research about research; how’d you like that job?). If you want more detail, you can start with Why Most Published Research Findings Are False by meta-researcher / firebrand John P. A. Ioannidis, or a more digestible treatment of his work in the Atlantic in 2010 (Lies, Damned Lies, and Medical Science). In brief, his research suggests that a majority of the “evidence” we are referring to when we talk about “evidence-based medicine” is “misleading, exaggerated, or flat-out wrong”. [Gulp.]
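The core of Ioannidis’s argument fits in one formula. Ignoring his corrections for bias and multiple competing teams, the probability that a claimed finding is true (the PPV in the figure above) depends on the pre-study odds R that the hypothesis is true, the significance threshold α, and the study’s power 1 − β: PPV = (1 − β)R / (R − βR + α). A quick sketch (parameter values here are illustrative, not from the post):

```python
def ppv(R, alpha=0.05, beta=0.20):
    """Positive predictive value of a statistically significant finding,
    per Ioannidis (2005), without the bias term.

    R: pre-study odds that the tested relationship is true
    alpha: significance threshold (type I error rate)
    beta: type II error rate (power = 1 - beta)
    """
    return (1 - beta) * R / (R - beta * R + alpha)

# A long-shot hypothesis (1-in-10 pre-study odds) at typical 80% power:
print(round(ppv(0.10), 2))              # 0.62 -- a significant result is
                                        # still wrong ~38% of the time
# The same long shot at 20% power (underpowered study):
print(round(ppv(0.10, beta=0.80), 2))   # 0.29 -- now it is probably wrong
```

The takeaway is that “p < 0.05” guarantees very little on its own: when pre-study odds are low or power is weak, most “significant” findings are false, which is exactly what the figure above depicts.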

This is the evidence used (ideally, anyway) by doctors when choosing diagnostics and therapies for patients. It is also what we use in establishing medical policy (what a health insurance company pays for and what it doesn’t, and under what conditions of medical “necessity”). It is what we use to measure the quality of physician performance. And, to bring it full circle, it is the evidence we use to evaluate the quality of the health insurers paying the physicians choosing the diagnostic or therapy. So if there is a problem with this evidence, it is not of marginal concern.

The evidence problem gets a bit worse, actually. Think about the consumer’s perspective on all of this. Up to this point we have been talking about medical evidence that emerges from established research channels. For all of its troubles, it is highly governed and quite orderly. But most consumers never see this evidence directly when evaluating treatment options or, more commonly, thinking through their lifestyle choices (and I don’t need to remind you how important those lifestyle choices are, right?). Consumers are barraged with advertisements, popular press articles, and notifications of new “studies”, each citing its own set of facts and its own sources of evidence. Some of these messages have a distant origin in legitimate research, but much of it is of very dubious quality. For consumers not trained in interpreting research, it is undifferentiated chaos, and a majority of the “evidence” they see is being delivered by marketers pushing product, wellness “gurus” pushing themselves, or blowhard scolds with their own financial or moral agendas.

The first hypothesis in my Gartner Maverick project is this:

All current systems of financing healthcare are not equipped to match the pace of innovation in healthcare delivery, much less the broader demands of population health.

Last week, I made my first point in support of this hypothesis, which was that we had a problem with economic alignment. My point here is this: All current systems of financing healthcare rely on a spectrum of evidence that is insufficient to the challenges and opportunities of the next generation of population health. We cannot realize the full value of innovation in healthcare delivery without rethinking how we establish and deploy evidence in paying for healthcare.

But is there any alternative? In the scope of this Maverick research, I submit that there is — or soon could be, anyway. New digital approaches to real-world evidence (expertly covered by the Gartner Life Sciences team, including Stephen Davies’ blog here) are a great start. But we will get more into solutioning for the evidence problem in future posts.

[note: for those of you joining in the middle of this project, welcome! If you want to get oriented, here is a little GPS of the project for you]

Intro to the project: A Grassroots, Digital Solution to Financing Global Population Health

In this research I will explore the following three hypotheses:

  1. All current systems of financing healthcare are not equipped to match the pace of innovation in healthcare delivery, much less the broader demands of population health.
  2. Emerging capabilities in analytic modeling will, in the not-so-distant future, be better suited to making decisions about allocating scarce healthcare resources than all of the humans and institutions currently performing that role.
  3. An architecture is possible for a better system for financing health (working title: The Universal Health Intervention Hub #UHIH) using the emerging technologies of digital business, distributed ledger (“grassroots”), Internet of Things (IoT), and a common set of protocols between stakeholders.

The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.
