Wes Rishel

A member of the Gartner Blog Network

Wes Rishel
VP Distinguished Analyst
12 years at Gartner
45 years IT industry

Wes Rishel is a vice president and distinguished analyst in Gartner's healthcare provider research practice. He covers electronic medical records, interoperability, health information exchanges and the underlying technologies of healthcare IT, including application integration and standards.

We Need To Unboggle Health Information Queries

by Wes Rishel  |  February 12, 2013  |  6 Comments

Health Information Exchange (the verb) requires a massively complicated boggle of policy, trust and technology issues.

In 2010 ONC chose to carve “push transactions” out of the boggle. It asked the HIT Policy Committee Privacy & Security Tiger Team to provide specific policy recommendations around pushing, which it referred to as “directed exchange.” The Policy Committee approved and forwarded the recommendations to ONC in September 2010. Two and a half years later, 97 percent of the roughly 27.5 million transactions per month that flow through HIEs are pushed. Only about 3 percent are query-response transactions.1,2

We need more progress on queries. The same approach, carving a subset out of the boggle, can work again. Epic’s Care Everywhere constitutes an existence proof that there is a workable set of queries that healthcare delivery organizations will support with automated responses given the right framework of trust and operating agreements.

The open questions are whether such a capability can be expanded to multi-vendor approaches and whether it needs to be operated exclusively by EHR vendors. What we don’t need is to regulate an approach based on a single vendor. We don’t even need to exactly follow the Epic model.

I am happy to report that the Tiger Team is now working on policy statements that are comparable to the 2010 effort on directed exchange. The new work targets a specific set of circumstances where health record holders may safely respond to automated queries. The policy may also support electronic transmission of the request and response even when the workflow includes a manual review of the request. As with directed exchange, the policy should support a number of technological solutions.

This is a very important start. Many questions remain to be answered about the appropriate standards, the degree to which the industry should rely on standards vs. competitive approaches, the level of trust that must be supported and how ONC could accelerate such an effort through meaningful use Stage 3.


Notes

1. These percentages were computed from data reported by Micky Tripathi on 29 January 2013 during Tiger Team testimony on health information exchange.

2. All of the pushes described above are enabled under the “directed exchange” policy. Not all of them are transmitted using the Direct protocols. Likewise, not all data sent with the Direct protocols is routed through an HIE that reports traffic to ONC. Because of the decentralized nature of Direct, it is hard to assemble traffic estimates.

6 Comments »

Category: Healthcare Providers, Interoperability, Uncategorized, Vertical Industries

A Much-Needed Nudge for Stage 2 Interoperability

by Wes Rishel  |  January 6, 2013  |  3 Comments

The chatter on the HL7 Strucdoc list server this month claims that two EHRs that fully conform to the HL7 Consolidated CDA (C-CDA) may still not interoperate, so the organizations using them could fail to meet Stage 2 meaningful use requirements. This is, indeed, an issue, but the members of the list have conceived a solution and are eager to start. A number of participants representing major implementers are willing to pitch in. In fact, the only concern about the solution is finding a way to do it once and do it right, rather than have a disjointed set of attempts.

The issue: It is not always clear, at least to some developers, how to express certain clinical thoughts in the C-CDA. Some examples include “allergic to latex but no known allergies to medications,” “the patient states she has no allergies” vs. “we don’t know if the patient has allergies,” and “the patient believes he is allergic to penicillin” vs. “a physician has reported anaphylaxis as a reaction to penicillin.” Sometimes there is more than one way to express the same clinical thought in C-CDA-compliant XML. If the system that creates the document expresses it one way, but the system that is interpreting the document is looking for an alternate expression of the clinical thought, it may miss the data altogether.

The Solution: The jointly conceived solution is to publish, in a wiki, examples of the C-CDA XML used to represent hundreds of important clinical thoughts. The wiki has the double benefit of showing newbies how their XML should look and helping all implementers know the preferred choice when there are multiple ways to express a clinical thought.
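
To make the interoperability risk concrete, here is a minimal sketch of my own (not from the wiki effort) showing how two renderings of the same clinical thought can both look plausible while a naive receiver only recognizes one of them. The element structure and SNOMED codes are simplified, illustrative assumptions rather than normative C-CDA templates.

# Illustrative only: simplified, non-normative markup and codes.
from lxml import etree

# Rendering A: "no known drug allergies" expressed by negating a drug-allergy observation.
RENDERING_A = b'''
<observation xmlns="urn:hl7-org:v3"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             classCode="OBS" moodCode="EVN" negationInd="true">
  <value xsi:type="CD" code="416098002" displayName="Drug allergy"
         codeSystem="2.16.840.1.113883.6.96"/>
</observation>'''

# Rendering B: the same thought expressed as a pre-coordinated "no known drug allergies" concept.
RENDERING_B = b'''
<observation xmlns="urn:hl7-org:v3"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             classCode="OBS" moodCode="EVN">
  <value xsi:type="CD" code="409137002" displayName="No known drug allergies"
         codeSystem="2.16.840.1.113883.6.96"/>
</observation>'''

def receiver_sees_no_drug_allergies(fragment):
    """A receiving system that only looks for the negationInd pattern."""
    return etree.fromstring(fragment).get("negationInd") == "true"

print(receiver_sees_no_drug_allergies(RENDERING_A))  # True
print(receiver_sees_no_drug_allergies(RENDERING_B))  # False: same meaning, data missed

A wiki of preferred examples would tell both the sending and the receiving developer which of these renderings to expect.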

There is general agreement about the solution and there are a lot of enthusiastic ideas about how to go about it. This is good, because the industry needs these examples right now: developers are currently writing the code that will be the heart of their Stage 2 implementations.

I expect that at the HL7 Working Group meeting next week the Structured Documents group will be able to settle on a single approach, and that the HL7 Board and ONC will come forth with the nominal funding it takes to mount a wiki and facilitate the process of creating and deciding among examples of clinical thoughts. The volunteer spirit has never been higher and the volunteers are pursuing a critical goal. If they end up spinning their wheels for lack of support, the entire notion of semantic interoperability could get a black eye.

3 Comments »

Category: Interoperability, Vertical Industries

EHRs and Billing Fraud. Guest Blog from Don Simborg.

by Wes Rishel  |  October 2, 2012  |  1 Comment

My friend and mentor, Don Simborg, MD, has seen a lot and done a lot in healthcare IT. He practiced internal medicine while he was the CIO at UCSF, and his second IT startup was an EHR targeted at oncology. In his so-called retirement he has taken an interest in the relationship between EHRs and fraud (when he is not taking trapeze lessons). We have gone around on this issue over many a fish sandwich. We don’t agree on every specific, but we both believe that there is a problem and that IT should do as little as possible to aid those who cheat and as much as possible to assist those whose role is to catch them. In light of the recent attention that has fallen this way, I asked him to summarize his views. They appear below; in future blogs I hope to deal with reader comments and drill into some of the pragmatic issues that must be solved to make progress. Here’s Don:

EHRs have had a bad week. Studies published by both the Center for Public Integrity, a non-profit investigative news organization, and the New York Times indicate that billings go up concomitantly with a switch to EHRs, raising healthcare costs instead of reducing them. DHHS Secretary Sebelius and Attorney General Holder sent a letter to the major hospital provider organizations threatening to crack down on EHR fraud. A New York Times editorial decried the EHR abuse. A full hour on National Public Radio was devoted to this problem. The Center for Public Integrity article indicated that the increase in cost so far is over $11B. The use of the two most expensive E&M codes for emergency room visits increased from 25% to 45% between 2001 and 2008. The New York Times article cited an OIG report showing that in 2010, just 1,700 of the 440,000 physicians added over $100M through increased E&M coding.

 There is a lot that we don’t know about these numbers. Were EHRs the cause? Were the increases legitimate or fraudulent? How much of the baseline was fraud to begin with? Estimates of fraud vary widely from as little as 3% of all healthcare transactions to over 10%. The usual figure bandied about for Medicare is $80B/year in fraud. It may be over $250B/year overall. That’s well more than the Afghan war has cost us. We could cover most of the uninsured with less than that.

The healthcare IT and provider communities say that the increase in billings with EHRs should be expected and is good, because physicians are finally documenting what they do properly and because decision support in EHRs prompts physicians to remember to do appropriate follow-ups. At a minimum, EHRs eliminate the fear of an audit, which causes physicians to undercode in the paper world. Others say the increase is bad because it represents fraudulent up-coding made easy by single-click notes, cloning, E&M code prompts and other tools built into EHRs. We don’t have any data to distinguish these explanations at present, and it is unlikely that we will any time soon. So what to do about it?

I say we should follow the advice given to ONC seven years ago by a group of industry experts in an extensive report, which warned that fraud will increase in an electronic environment unless we are proactive in putting safeguards in place. The feeling was that we shouldn’t wait until EHRs are widely implemented (at the time EHR penetration was less than 10%), as it will be more difficult to alter legacy systems later. It’s now seven years later, and EHR penetration is already much greater and increasing rapidly. We don’t have time to wait for definitive proof, which may never come. We know fraud is occurring. We know some EHR tools, on the face of it, invite fraud. Eliminating or greatly constraining the use of these tools will have little or no negative impact on clinical usability. In fact, many physicians argue that encounter notes that contain pages of normal negatives in the PE and ROS produced by a single click are simply not trustworthy clinically. The same has been said of cloned notes.

There are three things that ONC can do now. First, investigate, for Phase III of meaningful use, which tools in EHRs should be grounds for decertification. These could include such tools as E&M code prompts that are solely for the purpose of up-coding, cloning and copy-forward, “make me an author,” amended-report bypasses, single-click encounter notes, disabling the audit log and others. Second, work with OIG and CMS to determine what metadata from EHRs would best help fraud-detection analytics software. Medicare is currently using such software in an attempt to detect potentially fraudulent claims prior to payment, rather than relying on the usual “pay and chase” method that is ineffective. After determining the minimal metadata set that would be helpful, then define, standardize, and require that metadata for EHR certification. For starters, define at what level of detail the user/date/time stamp is required. Other metadata that could be considered would be the date/time the claim was produced from an encounter and the method of entry of the data (dictation, default, typing, copy forward, menu selection, etc.). Third, fund a cost/benefit analysis of several possible alternatives for provider and patient authentication at the point of care. Include at least the possibility of a biometric mechanism independent of the EHR. Such a system would make it much more difficult to fabricate visits, which we know occurs. It would help ensure that the provider and patient at an encounter are really who they are alleged to be and were present at the time and place alleged.
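
As a thought experiment only (the field names are my illustration, not anything Don or ONC has specified), the kind of per-entry provenance metadata he describes might look something like this:

from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional

class EntryMethod(Enum):
    DICTATION = "dictation"
    TYPED = "typed"
    DEFAULT = "default value"
    COPY_FORWARD = "copy forward"
    MENU_SELECTION = "menu selection"

@dataclass
class EntryProvenance:
    user_id: str                          # who recorded the item
    recorded_at: datetime                 # when it entered the record
    method: EntryMethod                   # how it was entered
    claim_generated_at: Optional[datetime] = None  # when the claim was produced from the encounter

# Fraud-detection analytics could then, for example, flag encounters where most
# of the note arrived via DEFAULT or COPY_FORWARD within a few seconds.

The point is not any particular schema; it is that without standardized, required provenance metadata, analytics software has little to work with.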

In summary, fraud is a huge problem. EHRs can be tools to both increase and decrease fraud. ONC is the only agency that can require changes to EHRs. ONC needs to stop giving lip service to “cooperating with other agencies” regarding fraud and take the lead in seriously looking at how EHRs should be altered to reduce fraud.

 

1 Comment »

Category: Healthcare Providers

Mostashari Rant on “Walled Gardens”

by Wes Rishel  |  August 24, 2012  |  1 Comment

In the innovative style of this ONC, we had a new first today: a dramatic reading from a federal regulation.

One of the concessions to comments in the final rule for Stage 2 meaningful use was eviscerating a criterion requiring that 10% of transitions of care be accompanied by electronic transmission of structured data across vendor boundaries. The final rule still requires the 10% of transmissions but eliminates the requirement that they cross vendor boundaries. There is an added requirement that a single transition of care be sent across vendor boundaries or to a test EHR operated by CMS for the purpose of receiving and validating these transactions.

In commenting on this part of the final rules, Farzad gave an eloquent statement on the continued intent to find policy levers to avoid vendor-built “walled gardens” of interoperability. You can listen to a one-minute excerpt of his statement here. I highly recommend it. It includes the aforementioned dramatic reading.

In the end, are ONC and CMS trying to stop the tides? I don’t think so. I can’t imagine that inter-vendor transmission of clinical data will ever match what can be accomplished between two well-run healthcare organizations that have the same product implemented well. But the Stage 2 regulatory approach involves setting a minimum bar for the data elements that must be exchanged rather than implying that all the data in a document must be compatible on a structured basis. This enables the government to set a base-level bar and raise it over time. Inter-vendor exchange at the base level is a realizable target.

It is a concern that you can’t manage what you can’t measure. Meaningful use measurement might be convenient, but it’s not the only ruler in the toolbox.

Neither is it the only hammer.

1 Comment »

Category: Healthcare Providers, Uncategorized

Gawande: Big Medicine Should Be More Like the Cheesecake Factory

by Wes Rishel  |  August 10, 2012  |  1 Comment

Atul Gawande’s recent New Yorker online article, Big Med: Restaurant chains have managed to combine quality control, cost control, and innovation. Can health care?, has presumably completed all stages of hypertweetilation so we can proceed to discuss it with more than 140-character thought bytes.

Gawande’s choice of The Cheesecake Factory was a brilliant way to get right up in the face of Traditional Medicine and highlight the putative advantages of Big Medicine. Then again, maybe it missed its target. When I mentioned the article, one physician friend wrote back, “Interesting article, not sure people’s health can be managed in the same way as restaurants though; diseases are too unpredictable, cheesecake is not.”

The key words are “how” and “should be like.” Each of the items he discussed was an example of ideas we all accept as virtuous goals in medicine:

  • Finding means to reduce unnecessary variability in procedure
  • Finding means to reduce unnecessary variability in ingredients (implants, medicines, etc.)
  • Using the purchasing power that comes from reduced variability and organizational size to control costs and standardize the quality of ingredients
  • Standardizing and monitoring the process from front to back (pre-op through rehab)
  • While the Cheesecake Factory menu of 350 items seems trivial when compared to the total number of services offered in a medical center, it is huge compared with most franchise restaurants. Its ability to achieve standardization and tight quality control over that menu should give pause to those who dismiss the idea that variability can be controlled by saying “medicine is too complicated.”
  • Innovation within a strict framework. New menu items come along every six months, but the current batch has been in the pipeline for more than 18 months. When comparing that to the quoted 15 years for widespread use of beta blockers one can’t help but note that the CF has built innovation into its routine.
  • Large investment in training is the key to accelerated innovation. Brain surgery is often cited as using the “see one, do one, teach one” paradigm for training. CF takes it one step further by evaluating the trainees on the “teach one” phase. I doubt that neurosurgery residents get graded on how well they teach a procedure.

Gawande is a practicing surgeon at Partners who has long since proved his chops at actually bringing simple solutions to reducing errors in medicine. No one would accuse him of not understanding the complexities. There can be no doubt he chose the Cheesecake Factory as a deliberately in-your-face means of exploring (a) the benefits of big medicine, and (b) the opportunities to reduce variability and thereby increase quality. The fact that it manages to do so with a menu that is maybe 10 times more complex than its competitors’ provides a skosh of credibility that Gawande is on the right track.

 


1 Comment »

Category: Healthcare Providers, Vertical Industries

+4 For Micky Tripathi on HIE

by Wes Rishel  |  June 27, 2012  |  1 Comment

In The Dangers of Too Much Ambition in Health Information Exchange, Micky gets straight to the source of so much frustration in building health information exchanges (HIEs) and creates a very credible prediction of a repeat of the same hype cycle.

Read it and then come back to this; my comments will be more germane.

Micky gets several +1s:

  • +1 for calling out the new PPT-ware about a completely architected “eco-system” as wrong-headed at a time when so little is known about the demands of accountable care, the malleability of organizations and systems, and the relative urgency of various drivers. One CIO recently told me that after presentations from seven vendors on how they had little now but planned to create a whole new architecture, her staff had started to refer to these as “vendor ego-systems.”
  • +1 for placing the problem in the laps of high-end buyers as much as the vendors. They are drowning in unknowns and looking for any rope someone might throw their way.
  • +1 for recognizing that when the market is the wild West, the best strategy is incrementalism.
  • +1 for recognizing the frightening tangle created by the natural desire to pull data and the ability to duck the tangle by innovating with “push.”

Who wouldn’t want to be the first to be given the IT solution for accountable care? To paraphrase Conan Doyle, when you have eliminated the impossible, whatever remains, however improbable, must be the path forward. The not-impossible for now is to create an environment that facilitates the use of point solutions in projects that can be up and running before the requirements change.

1 Comment »

Category: Healthcare Providers, Interoperability, Vertical Industries

The Biggest Healthcare Interop Issue: Frozen Interface Syndrome

by Wes Rishel  |  April 13, 2012  |  7 Comments

Remember the term “bilateral asynchronous cutover.” By the end of this post you will understand and like it. You may even want to mention it in comments on the current CMS and ONC NPRMs for meaningful use.

I have spoken often of the biggest threat to achieving steadily improving interoperability: the tendency to stick with what’s working. It is expensive to change an interface, even to make a minor change. Attempts to force upgrades by regulation or treaty often fail.

The problem is not establishing the first network. That may be hard, but changing it later is nearly impossible. How do you get all participants to cut over on the same day? It can’t be done, and you can’t shut a network down and start over. I call this frozen interface syndrome.

Perhaps the most famous failure of a “second network” was the ISO Open Systems Interconnection (OSI) protocol suite, which was the planned TCP/IP killer of the early 1990s. The complete replacement for all levels of TCP/IP fixed many known problems, such as the paucity of IP addresses. Dozens of consultants with solid TCP/IP credentials spent years developing it. The DoD decreed that it would no longer buy systems based on TCP/IP, NATO agreed to it, GM built a whole plant-floor architecture based on it and — guess what? It never happened. Instead we adopted network address translation until IPv6 could be rolled out, along with many other less elegant fixes that could be introduced incrementally.

What “Sort of” Worked?
One of the few national conversions in healthcare that has stuck is the upgrade from X12 4010 to 5010, but there was collateral damage. More important, the 4010 went live in roughly 2001 with known problems, and that version remained “frozen” (without corrections) until 2011, when CMS forced the issue as foreplay to ICD-10. Even then, the upgrade probably couldn’t have happened if it weren’t for the predominant role of clearinghouses, which spared many end users from having to undergo software changes. We can’t possibly upgrade clinical standards at such a slow pace.

What Worked?
The Internet provides many examples of major improvements being implemented over time rather than all at once. A few include:

  • Plain-text email to multimedia email
  • Secure Sockets Layer (SSL) to Transport Layer Security (TLS) versions 1.0, 1.1, and 1.2.

These upgrades all have one thing in common: not all the senders and receivers had to change their software at the same time. Instead there was bilateral asynchronous cutover.

Let’s look at the change from plain-text to multimedia e-mail, with multiple fonts, bold-face, built-in pictures, video and audio and all that razzmatazz. One strategy would have been to change all e-mail clients to receive multimedia before any client starts to use it. But the Internet is not so orderly. Instead the IETF adopted an approach that required multimedia senders to send everything twice in each message, once in plain text and once in a multimedia format such as HTML. E-mail clients that could not display multimedia would display the plain text. This way any combination of new or old sender or receiver could communicate the basic content of the message. This is the “send everything twice” approach to asynchronous cutover. It worked because the features of the old format allowed an old-style receiver to know what to ignore. It also worked because the rules of the old format called for accepting unexpected data without calling it an error: the old-style receiver was expected to ignore what it didn’t understand. This was the compatible-upgrade approach, where the original protocol anticipated future upgrades even before their form was known.
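
For readers who want to see the mechanics, here is a minimal sketch (the addresses are hypothetical) using Python's standard email library, which still builds messages exactly this way: the same message carries a plain-text part for old receivers and an HTML alternative for newer ones.

from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Lab results ready"
msg["From"] = "sender@example.org"       # hypothetical addresses
msg["To"] = "receiver@example.org"

# Old-style receivers render this part and ignore the rest.
msg.set_content("Your lab results are ready. Log in to view them.")

# Multimedia-capable receivers prefer the richest alternative.
msg.add_alternative(
    "<html><body><p>Your lab results are <b>ready</b>.</p></body></html>",
    subtype="html",
)

print(msg.get_content_type())  # multipart/alternative: "send everything twice"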

SSL and TLS each have two phases: a setup phase and an operational phase. During the setup phase the two endpoints say which security versions they prefer and which other versions they support. Either side can say no if it won’t accept a low level of security, but generally the two sides find the highest level they both support and use that for the operational phase.
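
In code terms, the negotiation looks roughly like this sketch with Python's ssl module: each endpoint declares the range of versions it will accept, the handshake settles on the highest version both sides support, and either side can upgrade on its own schedule.

import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse anything weaker
ctx.maximum_version = ssl.TLSVersion.TLSv1_3   # offer the newest we support
ctx.load_default_certs()

# When a connection is made, the handshake picks the highest version both
# sides allow; a peer still on TLS 1.2 and one already on 1.3 interoperate.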

Relevance to NPRMs
Where is the healthcare industry with respect to frozen interfaces? If we really don’t have much information flowing across enterprises now, we are just specifying the first interface. If, however, (a) we have a lot of standard data flowing by 1 January 2014, (b) the CMS meaningful use measures for interoperability only include data sent in standard format, and (c) the 2014 Edition standards are different from the 2011 Edition, then the effect of the 2014 Edition standards is to force many EPs and EHs to convert working interfaces, with all the bilateral agreement that entails. (Remember that many EPs and EHs will be required to adopt 2014 Edition standards in order to meet the requirements for Stage 1 of meaningful use.)

The 2014 Edition NPRM allows preadoption of 2014 standards in 2013, which could help. However, without bilateral asynchronous cutover the industry is still left in a situation where nobody in a network can begin to send the new format until everyone in that network can receive it. How big is the network that has to roll out the change? With Direct and the NwHIN it is essentially the whole country.

Even where the 2014 Edition is only establishing the first interface we need to ensure that we have the foresight to enable bilateral asynchronous cutover for the 2016 Edition.

It takes a careful reading of the NPRMs to ferret out where we stand. I hope to provide that in my next post, and to have it available in time for you to consider commenting on the issue in your NPRM responses.

7 Comments »

Category: Healthcare Providers, Interoperability, Vertical Industries

A New Approach to Clinical Interop in Stage 2 Meaningful Use

by Wes Rishel  |  March 19, 2012  |  4 Comments

The certification regulations associated with Meaningful Use Stage 1 (§ 170.205) called for using the HITSP C32 as the specification for the Continuity of Care Document (CCD). The NPRM for Stage 2 points to a different source for specifications, the HL7 Consolidated Clinical Document Architecture (CCDA — officially known as HL7 document CDAR2_IG_IHE_CONSOL_R1_DSTU_2011DEC). HL7 members can download the document from www.hl7.org. This might be interpreted as little more than an updated CCD specification. However, the fine print in the regulation reveals a very different approach to certification and interoperability in Stage 2.

Both the CCDA and the new regulatory approach have a lot of pluses. In this blog I hope to give a “Sunday supplement” (highly simplified) overview of the CCDA. A future blog will build on this one to discuss the new regulatory approach.

For those who want an intensive discussion of the technological applicability of the CCDA, I highly recommend the classes offered by HL7 and a series of posts by Keith Boone in his Motorcycle Guy blog.

The Consolidated CDA is one of the finest examples I have seen of organizing a slew of separate specifications into a coherent and consistent set. It represents a prodigious amount of work in the HL7 Structured Documents Work Group, many others in HL7, the S&I Framework, IHE and the Health Story Project. I suspect that ONC was actively working in the background to push the various organizations to solve intellectual-property and cultural issues that made prior collaboration more difficult.

Previously, documents based on the HL7 Clinical Document Architecture were individual specifications developed by different consensus groups. Although the CDA did impose some consistency on all such documents, the individual groups often approached the representation of clinical data differently, both in terms of the sections of a document and in terms of how individual data items such as vital signs or lab orders were represented.

As illustrated in Figure 1, the CCDA represents revised specifications of nine major types of documents based on a consistent framework of document sections and representations of different types of structured clinical data. The templates in the box are just what you would expect templates to be: a skeleton for positioning a group of data items inside an XML document. There are big templates (such as the CCD or a discharge summary), “middle-sized” templates (such as the header or one that shows how to organize allergy information) and little templates that show how specific data elements are represented (such as an allergen or the severity of an allergic reaction).

Figure 1. Conceptual View of CCDA.

Figure 2 is a screen shot of a partial page of the specification that describes the required and optional sections of the CCD. What do you do about a required section if there are no allergies to report? That’s easy: the templates that support the section allow you to say why you are not sending any. Perhaps the patient wasn’t asked, perhaps they said they had no allergies, perhaps they declined to answer. The so-called “flavors of null” represent different reasons for not sending data.
Figure 2. Required and optional sections of the CCD within the CCDA.
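
To give a feel for the idea, here is a toy sketch (simplified markup, not a normative template) of three HL7 null flavors a sender could use to say why no allergy data is present:

from lxml import etree

FRAGMENTS = {
    "not asked": b'<value xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CD" nullFlavor="NASK"/>',
    "asked but unknown": b'<value xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CD" nullFlavor="ASKU"/>',
    "no information": b'<value xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CD" nullFlavor="NI"/>',
}

for reason, frag in FRAGMENTS.items():
    # The receiver can distinguish the reasons rather than just seeing "missing data."
    print(reason, "->", etree.fromstring(frag).get("nullFlavor"))
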
The specific document types in this version of the CCDA are:
  • Continuity of Care Document (CCD)/HITSP C32
  • Consultation Note
  • Diagnostic Imaging Report
  • Discharge Summary
  • History and Physical (H&P) Note
  • Operative Note
  • Procedure Note
  • Progress Note
  • Unstructured Document

However, many of these document types describe a variety of different notes based on the specific procedure being described. As these document types were incorporated into the CCDA, HL7 uncovered and dealt with potential inconsistencies in their definitions, informed by real-world experience with the older definitions. You should regard each document type as the “new and improved” version of prior specifications. A document conformant to the CCDA CCD template will have some differences when compared to one conformant to the HITSP C32 specification.

Each document template is specified in a manner responsive to its individual use cases, so different documents need different data in the header, different sections and specific entry-level templates. When a consensus group is compiling the templates that compose a document, its job is easier if it re-uses existing templates rather than creating new ones. To make re-use easier, the lines in Figure 1 represent “constraints.” So, for example, a Consultation Note “SHALL contain exactly one [1..1] componentOf” [template], which “SHALL contain exactly one [1..1] encompassingEncounter” [template].
The CCDA enables consistency without arbitrarily requiring it. This is a critical advantage for developers. When they develop code to relate the contents of specific templates to their database, they can reuse the entry-level templates in multiple sections and multiple documents. The section templates help the programmer establish the context of a simple data item in order to know how to represent the data in the developer’s database. The incremental work to program a second document type should be much less than for the first. I will come back to this advantage of re-use in the blog that discusses the NPRM.
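
A rough sketch of why that re-use pays off for a developer (element paths are simplified assumptions, not taken from the implementation guide): one small parser written against an entry-level observation pattern can serve every section and document type that re-uses the template.

from lxml import etree

NS = {"v3": "urn:hl7-org:v3"}

def coded_values(section):
    """Collect (code, displayName) pairs from observation entries in any section."""
    return [
        (v.get("code"), v.get("displayName"))
        for v in section.findall(".//v3:entry//v3:observation/v3:value", NS)
    ]

# The same helper can be pointed at an Allergies section in a CCD, a Problems
# section in a Discharge Summary, or a Results section in a Consultation Note.
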
In addition to enabling consistency, the CCDA has several other benefits. The first is that it includes all the specifications necessary to describe a document type in a single publication. Previously, working with the C32 required simultaneously using material from several different documents.

Another major benefit is that it brings the business names for sections and data elements much closer to the surface.

I am very enthusiastic about the CCDA. This is not to say that there may not be reasons to be critical of specific issues, or that we won’t find important issues as this draft goes into trial use. That is the cost of progress. Nonetheless, this is such an improvement over the C32 that it is worth celebrating.

4 Comments »

Category: Healthcare Providers, Interoperability, Uncategorized, Vertical Industries

“Direct” is Much More than Email

by Wes Rishel  |  March 6, 2012  |  1 Comment

As the trades rush to be “firstest” with interesting or alarming info on the meaningful use Stage 2 NPRMs, we are seeing the observation that the health information exchange envisioned in Stage 2 is primarily point-to-point, using secure e-mail technology, because most state HIEs and many regional ones aren’t operational or ready for more advanced types of exchange.

There are two important points to keep in mind.

  1. The secure email technology in Direct can be used for much more than email. The technology supports sending data directly into EHRs or out of EHRs, and it supports sending structured data. In fact, the NPRMs contemplate using Direct to send various consolidated CDA documents for transitions of care and patient engagement (see the sketch after this list).
  2. The glass is half full, not half empty.
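
As a minimal illustration of point 1 (the addresses and file name are hypothetical, and this omits the S/MIME signing and encryption that Direct actually requires), a consolidated CDA document can ride along as a structured attachment rather than as human-readable mail:

from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "clinic@direct.example.org"      # hypothetical Direct addresses
msg["To"] = "hospital@direct.example.org"
msg.set_content("Transition-of-care summary attached.")

with open("transition_of_care_ccda.xml", "rb") as f:   # hypothetical file
    msg.add_attachment(
        f.read(),
        maintype="application",
        subtype="xml",
        filename="transition_of_care_ccda.xml",
    )

# A receiving EHR's gateway can pull the structured document out of the
# attachment and import it, with no human reading "email" at all.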

Having Direct has enabled CMS and ONC to create higher thresholds for conformance with sending structured data. Otherwise they would have had to give performance waivers to EPs and hospitals that did not work in an area supported by a robust HIE or where the HIE couldn’t on-board all available users in the time frames required for Stage 2.

Would we all like for all EPs to have richer functionality through a query mechanism? Resoundingly yes.

If we had a query mechanism would that eliminate the need for push? No.

Direct gets something going with an absolute minimum of need for intermediary organizations to guarantee the trustworthiness of one participant to the other.

Should we mourn the reality that the more complex issues around query aren’t going to be solved evenly throughout the country, or should we celebrate the larger number of EPs and hospitals that will be interoperating sooner?

I choose the latter.

The Direct glass may only be half full, but you can die of thirst waiting for a half-empty glass to be filled.

1 Comment »

Category: Interoperability, Vertical Industries

OK, XML Schema Does Earn its Keep in HL7

by Wes Rishel  |  December 31, 2011  |  2 Comments

In Does XML Schema Earn its Keep? I argued that XML schema was of little use in validating HL7 messages and documents, and an obstacle to adopting other, more concise syntaxes. In his comments on my post and subsequent emails, Grahame Grieve made an excellent case for looking at the business of producing messages and driving tooling. I have to agree with him that for the current state of RIM-based standards XML schema is a necessary ingredient.

There was a time — and apparently it still is that time — when XML was the way to roll with the industry in terms of adaptability to existing tooling.

I was partly projecting onto XML itself my disaffection for the HL7 style of using XML.

However, as Grahame has pointed out in The XML consensus is breaking down, XML has not worked out to be the panacea we all hoped for, wedding the data, text and object-oriented worlds. In the early years it offered one promise and delivered: it was a syntax that IBM, Oracle and Microsoft would agree on. That was a great step forward.

Like most steps forward, once we have achieved it we begin to say “what have you done for me lately?” It appears to me that we are in a state across the IT industry where XML is becoming the “old technology” and people are waiting for a new trend that fixes some of the problems.

What will the new panacea be? I would like to believe that the work being done in JSON now, like the early work in XML circa the year 2000, will be at the heart of whatever develops.
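
As a toy comparison only (neither form is an HL7 artifact), the same observation rendered in an XML style reminiscent of RIM-based messages and in a bare JSON style shows where the appeal of a simpler syntax comes from:

import json
import xml.etree.ElementTree as ET

xml_form = """<observation classCode="OBS" moodCode="EVN"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <code code="8867-4" codeSystem="2.16.840.1.113883.6.1" displayName="Heart rate"/>
  <value xsi:type="PQ" value="72" unit="/min"/>
</observation>"""

json_form = {"code": "8867-4", "system": "loinc", "display": "Heart rate",
             "value": 72, "unit": "/min"}

ET.fromstring(xml_form)                           # still well-formed XML
print(len(xml_form), len(json.dumps(json_form)))  # rough size of each rendering

Of course syntax is the easy part; the semantics layered on top of it are where the real work lies.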

But as Grahame suggests, what is needed is more than a simpler syntax.

For now I guess we’ll plod along with XML and XML schema and keep an eye on the nascent use of new technologies.

2 Comments »

Category: Healthcare Providers, Interoperability, Vertical Industries