
Getting to “Whoa” for Smart Machines in Paying for Health

By Jeff Cribbs | July 07, 2016

Just a few weeks after I had finished uploading roughly ten thousand personal images to Google Photos, a good friend of mine died. Ginny (I’ll call her) was 95 and five years into an Alzheimer’s diagnosis — so her death was neither surprising nor, in a sense, unwelcome. But it was a moment that sent the many people who loved her in search of memories, meaning, and closure. For my part, I went looking for photos.

I had chosen to adopt Google Photos for the usual, objective reasons. I wanted automated, free backups of my family’s images across all of our cameras and mobile devices. I knew the structured metadata attached to digital images (date/time, location, device) would be available to Google’s search capabilities. Of course, I’m a technology analyst for Gartner, so I knew there were image recognition algorithms (Deep Neural Networks, or DNNs, if you care) running in the background that represented some significant technical accomplishments. For example, this DNN could identify individuals across images and therefore group together all images that contain a certain person. It could also identify objects and abstract concepts (a picture of a person running and one of water running both share the concept of “running”). As I was getting set up, it was cool to see those features working very well on my personal photos. It was also more than a little bit creepy, to tell you the truth. There were some funny DNN snafus, too. For example, while the DNN rather deftly distinguished between my daughter’s two baby dolls (“Pink Baby” and “The Sister”), which I can never tell apart, it incorrectly identified them as human beings (a mistake I haven’t yet made).

[Image: “Babies”]
The Google DNN correctly distinguished between my daughter’s two dolls, but incorrectly categorized them as People.
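
(For the technically curious: person grouping of this sort is typically built on face embeddings, where a DNN maps each detected face to a vector, and photos whose vectors sit close together are treated as the same individual. Below is a minimal sketch of that clustering step in Python. It is an illustration, not Google’s implementation: the group_by_person function, the similarity threshold, and the random stand-in vectors are all my own assumptions.)

```python
import numpy as np

# Illustrative sketch, not Google's pipeline: group photo indices by person,
# assuming each photo yields one unit-length face embedding produced by some
# upstream DNN (a hypothetical embed_face() step, omitted here).

def group_by_person(embeddings, threshold=0.7):
    """Greedy clustering: a photo joins the first group whose founding
    embedding has cosine similarity above the threshold."""
    groups = []  # each entry: (founding embedding, list of photo indices)
    for idx, vec in enumerate(embeddings):
        for anchor, members in groups:
            # Unit vectors, so the dot product is the cosine similarity.
            if float(np.dot(vec, anchor)) > threshold:
                members.append(idx)
                break
        else:
            groups.append((vec, [idx]))
    return [members for _, members in groups]

# Toy usage: random unit vectors stand in for real face embeddings.
rng = np.random.default_rng(0)
fake = rng.normal(size=(6, 128))
fake /= np.linalg.norm(fake, axis=1, keepdims=True)
print(group_by_person(list(fake)))
```

A production system would use a trained face-recognition network for the embeddings and far more robust clustering, but the shape of the idea is the same.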

But there is a difference between being objectively interested in a technology and having a technology actually improve your life at an important moment. When I heard the news of Ginny’s death, I was able to retrieve dozens of photos of her with a single click. I scrolled through years of memories of her, with no searching, no distractions, no frustration. I came across one image that I thought was a mistake: Ginny wasn’t in it. It showed my daughters hunting Easter eggs in her back yard on a beautiful spring day, ecstatically showing their loot to Ginny’s husband and me. But when I looked closer into the background, deep in the shadows of the kitchen window was the faint outline of Ginny’s face. And at that moment I realized this technology had given me the image I will retain in my memory as the representation of her final years: Ginny, peering out the window gamely, positively, expectantly, at the life she relished for 95 years, but nonetheless at a distance, trapped in a room of dimming light, obscured to us by a relentless darkness that eventually took her away altogether.

The wonders continued. The Google Assistant cobbled together a video of a butterfly release ceremony we arranged for my older daughter, the day after her little sister was born, set to music I actually quite liked. I needed to replace some wine glasses, and I immediately retrieved the photo I took in the store when I purchased the originals years earlier. I had to register one of my kids for school and forgot to bring the birth certificate – which I retrieved on my phone in seconds.

Somewhere along the line I realized my relationship to my images was different now. And one key function that I had performed (organizing, categorizing, and visually searching my images) had been replaced by a DNN that can do it much better. That was my Smart Machine “whoa” moment. It was a moment in which I was simultaneously grateful for a tangible improvement in my life and a bit unsettled by the shake-up.

What does this have to do with the subject of this blog? I am about to argue in this space that there is a critical function in healthcare, currently performed by humans, that may, over time, be better suited to computers. Namely, that advanced learning algorithms, like DNNs, should take a leading role in allocating healthcare resources and deciding what is paid for and what isn’t. It will be a tough pill to swallow for many reasons — some having to do with plausibility (“this could never work!”), some having to do with our notions of human autonomy and ethics (“we can’t let a computer do this!”). But for those of us looking for opportunities to change healthcare, we must recognize that those objections are a replay of the objections made in every era of technology innovation. We must allow space to look beyond reflexive objections, and see if there might be real opportunity to make life better.

In healthcare, we have done it before — and recently too! In the era of data warehousing, reporting, and BI, we got used to the idea that we should often avoid “going with our gut” (or “practicing our art”), and inform our healthcare decisions with actual evidence. In the era of big data and advanced analytics we have started to grow accustomed to the idea that we can train algorithms to make the same predictions we would make and take the same decisions we would take, if we were able to process the same vast expanse of data. In the era of Smart Machines, we will need to get used to having algorithms perceive things we would have missed and make decisions we never could have anticipated.
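
To make that middle era concrete, here is a toy sketch of the kind of predictive modeling involved: a simple classifier trained on historical decisions, then used to score a new case. Everything in it is synthetic and assumed for illustration (the features, the labels, the model choice); it stands in for the general technique, not any real payer or clinical system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Entirely synthetic stand-in for historical data: each row is a patient;
# imagine columns like age, number of chronic conditions, prior-year spend.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))

# Synthetic "decisions" loosely tied to the features, the way a human
# reviewer's past approvals and denials would be.
y = (X @ np.array([0.8, 1.2, 1.5]) + rng.normal(size=1000) > 1.0).astype(int)

# Train the model to reproduce the judgments implicit in the historical data.
model = LogisticRegression().fit(X, y)

# Score a new case: a probability that can inform an allocation decision.
new_patient = np.array([[0.5, 1.0, 2.0]])
print(model.predict_proba(new_patient)[0, 1])
```

The point is the mechanism: the model learns the judgments implicit in the data rather than being handed rules.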

Which is why I find it very grounding to have had a personal “whoa” with a DNN in another sphere – like my personal photos. At about the time Google Photos was publicly released, a meme circulated claiming that the new service was “raising the prospect of a populace lured into participating in a crowdsourced visual surveillance network that can identify and locate people.” “Many scary use cases for this,” it warned. I certainly couldn’t exclude the possibility of nefarious uses of my images (and still can’t), but I went ahead and let myself be “lured” anyhow. And at least so far, I am happy to report there are also many use cases that couldn’t be further from scary. In a small but significant way, life is better.

Healthcare, take note.

[Note: for those of you joining in the middle of this project, welcome! If you want to get oriented, here is a little GPS of the project for you.]

Intro to the project: A Grassroots, Digital Solution to Financing Global Population Health

In this research I will explore the following three hypotheses:

  1. All current systems of financing healthcare are not equipped to match the pace of innovation in healthcare delivery, much less the broader demands of population health.
  2. Emerging capabilities in analytic modeling will, in the not-so-distant future, be better suited to making decisions about allocating scarce healthcare resources than all of the humans and institutions currently performing that role.
    • Getting to “Whoa” for Smart Machines in Paying for Health [YOU ARE HERE]
  3. An architecture is possible for a better system for financing health (working title: The Universal Health Intervention Hub #UHIH) using the emerging technologies of digital business, distributed ledger (“grassroots”), Internet of Things (IoT), and a common set of protocols between stakeholders.

