Mark Frisse started an interesting thread on Google+ today based on "Assessing the Effect of Standards in Digital Health Records on Innovation" by Steve Lohr. He chose not to make it public, so I can't cite it directly here.
I now face the typical Google-plusser dilemma: I want to add to the thread, but I also want my thoughts to be public. So I am posting this blog entry and will cite it in the thread.
There is not yet enough science to determine the one right way to do user interfaces. I doubt that there is enough science to objectively resolve trade-offs between, for example, being able to see at once everything that is on a printed cover sheet vs. using smaller or less costly devices. (It would be easy to imagine a standard based solely on usability principles proscribing any devices other than wall-mounted HD screens, with special glasses required to prevent accidental viewing by non-authorized people.) I *AM NOT* making light of usability principles; I am simply pointing out that they represent one set of factors in a trade-off in selecting an EHR.
If we do have some science now, it should go toward lifting the veil of obscurity that vendors place around the usability of their products.
I propose the notion of a "usability measure." Like a "quality measure," it would be specific and might have specific exclusions. In order to be a "measurable measure," it could not usually be generic to the entire EHR. While "all references to a patient should include gender, age and unit number" might well be a good principle, the measure for this principle might enumerate four or five situations where patient information is displayed, such as patient selection, results lists, individual result viewing, ordering, and documentation, and the NIST criteria would include scripts for determining yes-or-no answers in those situations.
Experience with clinical measures has shown that going from generally agreed-on principles to computable measures is a substantial investment of time.
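To make the idea concrete, here is a minimal sketch of what a computable usability measure might look like as a data structure. All names here (`UsabilityMeasure`, `Situation`, the example situations and score) are hypothetical illustrations, not part of any actual NIST criteria: each enumerated display situation carries a yes-no answer from its test script, and the measure aggregates them into a score.

```python
from dataclasses import dataclass, field

@dataclass
class Situation:
    """One display context in which the principle is checked (hypothetical)."""
    name: str
    passes: bool  # yes-or-no answer produced by running the test script

@dataclass
class UsabilityMeasure:
    """A specific, computable usability measure (hypothetical structure)."""
    principle: str
    situations: list          # enumerated Situation instances
    exclusions: list = field(default_factory=list)  # specific exclusions

    def score(self) -> float:
        """Fraction of enumerated situations that pass."""
        if not self.situations:
            return 0.0
        return sum(s.passes for s in self.situations) / len(self.situations)

# Example: the "patient references" principle checked in five situations
measure = UsabilityMeasure(
    principle="All references to a patient include gender, age, and unit number",
    situations=[
        Situation("patient selection", True),
        Situation("results list", True),
        Situation("individual result viewing", False),
        Situation("ordering", True),
        Situation("documentation", True),
    ],
)
print(measure.score())  # 4 of 5 situations pass -> 0.8
```

The point of the sketch is only that a measure becomes computable once the situations and their yes-no scripts are enumerated; agreeing on that enumeration is where the investment of time goes.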
A public dialogue with minority representation from the vendor community should review proposed measures and categorize them into these groups:
1) So obvious it would be unethical not to require this of all EHRs. The criteria for inclusion in this group should be very high.
2) Likely to be important factors; submission to the measure should be required for EHR certification, and the average across all submissions should be published. If a certified product is purchased by a health delivery organization (HDO) and the vendor failed to provide its own scores before the HDO committed to the product, that HDO should be ineligible for meaningful use incentives (bonuses now, relief from penalties in the future).
3) Candidate measures; submission to the measure should be required for EHR certification, but there is no requirement for EHR vendors to provide their scores. Anonymous scores would be published so that the healthcare industry could see the spread of how products performed on the measure.
4) Worthy of further study; the measures should be published as fodder for study, but no scripts are prepared and testing does not occur as part of certification.
OTHER REGULATORY ACTIONS
Given threats to patient safety based on usability, any EHR that is purchased under a contract that includes a "usability gag clause" should be ineligible to benefit under the incentive program. Such a clause would penalize the client for speaking publicly about the usability of the product. Vendors have justifiable concerns about a minority of irresponsible clients who will make up or exaggerate such issues to pressure the vendor on some unrelated point. They also have justifiable concerns about "Internet warriors" who take every public dialogue as a no-holds-barred battle for attention and dominance. Nonetheless, the greater good for healthcare and patients is achieved by favoring transparency. All of us are Internet users, and we are learning to glean the gems from the crud.