One of the insights of the PCAST report was the utility of sending the question to the data versus centralizing the data to answer questions. It may take a long time to achieve this by defining a new universal data language and using digital content management to manage consumer privacy preferences, but what happens if we uncouple the basic “question to the data” notion from the more elaborate vision of the Report?
The utility to public health and some kinds of research could be substantial, and it may be possible to get started with an intense, voluntary effort similar to The Direct Project.
Public health covers a lot of territory. Some concepts for gathering data are enshrined in law and regulation and, therefore, are stable. “Reportable diseases” are typically enumerated and it is conceivable to build filters into lab systems to “send the data to the question (asker).”
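As a minimal sketch of the filter idea, the snippet below screens lab results against an enumerated list of reportable conditions and forwards only the matches. The condition codes, record fields, and function names are illustrative assumptions, not any real lab-system interface.

```python
# Hypothetical filter inside a lab system: forward only results for
# enumerated reportable conditions. Codes and fields are illustrative.

REPORTABLE_CONDITIONS = {"A90", "A01.0", "B05.9"}  # placeholder ICD-10 codes

def filter_reportable(results):
    """Yield only lab results whose condition code is on the reportable list."""
    for result in results:
        if result["condition_code"] in REPORTABLE_CONDITIONS:
            yield result

results = [
    {"patient_id": "p1", "condition_code": "A90"},   # reportable
    {"patient_id": "p2", "condition_code": "J10"},   # not reportable
]
to_report = list(filter_reportable(results))
```

Because the enumerated list is stable in law and regulation, a filter like this could be built once and left running, rather than reprogrammed per request.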
But what of the requirements for situational awareness during an epidemic? Would public health officials want to ask about presenting symptoms, ED diagnoses, lab or micro results, or pathological findings? As their understanding of the disease process grows, how often would they want to change their inquiries to add data or refine the selection criteria? Pretty darn frequently, I’d wager.
How might it work?
In general, the sources of clinical data (EHRs, stand-alone labs, ePrescribing networks, etc.) would “sign up” to receive queries from public health departments, researchers or other carefully identified requesters of information. It is critical that the identities of both ends of the transaction are established and mutually authenticated each time data is transmitted. It seems that the identity of the institutional asker of questions may require a higher assurance level.
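One familiar way to get mutual authentication of both ends is TLS with client certificates. The sketch below, using Python's standard `ssl` module, shows the shape of that arrangement; the certificate file paths are placeholders, so the setup is wrapped in functions rather than executed here.

```python
# A minimal sketch of mutually authenticated TLS ("mutual TLS") using
# Python's standard ssl module. File paths are placeholder assumptions.
import ssl

def make_source_context(cert_file: str, key_file: str, ca_file: str) -> ssl.SSLContext:
    """Context for the data source: demand and verify the asker's certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED        # unauthenticated askers are rejected
    ctx.load_cert_chain(cert_file, key_file)   # prove the source's own identity
    ctx.load_verify_locations(ca_file)         # CAs trusted to vouch for askers
    return ctx

def make_asker_context(cert_file: str, key_file: str, ca_file: str) -> ssl.SSLContext:
    """Context for the requester: present a certificate and verify the source."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_cert_chain(cert_file, key_file)
    ctx.load_verify_locations(ca_file)
    return ctx
```

The higher assurance level for institutional askers would show up in who is allowed to issue the asker-side certificates, not in the protocol itself.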
Once a relationship has been established, the sources of data might receive requests for data from time to time, such as “tell me about encounters you have had with high fevers and vomiting as a ratio of the total number of encounters you have had for the same period.” The request would also include (or imply) information on where to send the data and how to associate it with a specific request.
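A request like that could be carried as a small structured message. The sketch below shows one possible JSON shape; every field name (`request_id`, `reply_to`, the numerator/denominator criteria) is an assumption for illustration, not a proposed standard.

```python
# Illustrative wire format for a distributed query request.
# All field names are assumptions, not an existing specification.
import json
from datetime import date

request = {
    "request_id": "phd-2011-0042",                 # lets the source tag its answer
    "requester": "state-health-dept.example.org",  # authenticated identity
    "reply_to": "https://state-health-dept.example.org/answers",
    "period": {"start": str(date(2011, 1, 1)), "end": str(date(2011, 1, 31))},
    "numerator": {"symptoms": ["high fever", "vomiting"]},
    "denominator": "all_encounters",
    "result_type": "ratio",                        # aggregate only, no patient data
}

wire_message = json.dumps(request)   # what actually travels over the wire
decoded = json.loads(wire_message)   # what the data source parses
```

The `request_id` and `reply_to` fields do the work of associating the answer with the specific request and telling the source where to send it.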
Requesting aggregates rather than individual data is a two-edged sword. On the one hand, it avoids policy issues around sending individual health information. On the other hand, there is no way for the recipient to weed out duplicates from multiple sources, so statistical inaccuracy will rule out fine-grained measurements. Furthermore, because the statistics are generated at the source, fine-grained pattern discovery based on sophisticated analyses of millions of records does not seem possible. This is an area where the perfect should not become the enemy of the good. Any pragmatic scheme to get even crude data into the hands of public health officials is better than a more precise scheme that would take years to get going.
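To make the aggregate-at-the-source idea concrete, here is a sketch of how a data source might compute the ratio locally and return only counts. The encounter records and symptom matching are illustrative stand-ins for a real EHR query.

```python
# Sketch: fulfill the ratio request at the source, returning only counts.
# Encounter records and symptom matching are illustrative assumptions.
def answer_ratio_request(encounters, symptoms):
    """Return (matching, total); no individual record leaves the source."""
    matching = sum(
        1 for e in encounters if set(symptoms) <= set(e["symptoms"])
    )
    return matching, len(encounters)

encounters = [
    {"symptoms": ["high fever", "vomiting"]},
    {"symptoms": ["high fever"]},
    {"symptoms": ["cough"]},
    {"symptoms": ["vomiting", "high fever", "rash"]},
]
matching, total = answer_ratio_request(encounters, ["high fever", "vomiting"])
# 2 of the 4 encounters match both symptoms
```

Note that only `matching` and `total` would travel back to the requester, which is exactly why duplicates across sources cannot be reconciled downstream.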
In the un-PCAST world that we occupy today, it is unlikely that data-source organizations would respond automatically. An officer of the organization would likely approve each individual request. However, it would be ideal if that officer didn’t have to schedule the time of a programmer to determine whether the request is feasible and, if so, code it up. In other words, it would be ideal if fulfilling the request were automatic and there were a workflow for the officer of the source organization to learn about requests and decide whether to authorize the release.
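The compute-automatically-but-release-manually workflow can be sketched as a small hold queue: answers are computed as requests arrive, but nothing leaves until the officer signs off. The class and method names below are assumptions for illustration, not an existing product API.

```python
# Sketch of a human-in-the-loop release workflow: fulfillment is
# automatic, release requires an officer's authorization.
# Class and method names are illustrative assumptions.
class ReleaseQueue:
    def __init__(self):
        self.pending = {}    # request_id -> computed answer, awaiting sign-off
        self.released = {}   # request_id -> answer actually sent out

    def fulfill(self, request_id, answer):
        """Compute-and-hold: the answer is ready but not yet released."""
        self.pending[request_id] = answer

    def authorize(self, request_id):
        """Officer approves; only now does the answer go out."""
        self.released[request_id] = self.pending.pop(request_id)

queue = ReleaseQueue()
queue.fulfill("phd-2011-0042", {"matching": 2, "total": 4})
queue.authorize("phd-2011-0042")
```

The point of the split is that the scarce resource (the officer's judgment) is spent on the release decision, not on feasibility analysis or custom coding.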
“Same old stuff” or a new day?
We all know the reasons this can’t work. The policy issues are difficult, but the biggest obstacle is that EHRs and other clinical data systems have different internal data schemas, so it is hard to frame a request knowing that it is compatible with source systems, and hard to fulfill the request without custom programming.
As that legendary scientist and explorer, Jean-Luc Picard, once said, “things are only impossible until they’re not.” It’s time to decide whether we are approaching a cusp of possibility on this issue. We have new resources: more EHRs, a fundamental secure communications infrastructure, and standard coding systems in support of meaningful use requirements.
It is time to assemble a group of users and vendors willing to look at this issue intensely, follow the principle of building on available standards and, if pragmatic solutions exist, support them with an approach similar to the Direct Project’s: “rough consensus, open source code implementations, final specifications.”
Category: healthcare-providers interoperability vertical-industries
Tags: direct-project distributed-query ehr health-information-exchange healthcare-interoperability healthcare-providers open-source pcast-report public-health
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.