Gartner Blog Network

Usability: No Stupid Questions

by Wes Rishel  |  August 1, 2010  |  11 Comments

Dick Taylor, the CMIO at Providence Health and Services Oregon Region, has been speaking to this principle of usability for some time.

“No Stupid Questions: remember why your clinical users came to work today, and honor their needs.”

I had been filing this principle under “obvious but impractical” until I recently had a chance to talk it through with him. I came away convinced that it is a valuable approach to the most nuanced issues of UI design.

Like any good scientist-philosopher, Dick has developed a taxonomy. He describes:

  • First-order stupid questions: those so trivial that asking them is an insult to the user’s time.
  • Second-order stupid questions: those that most users cannot answer.

He offers this example of a first-order stupid question: you say you want to delete something and the system asks “Do you want to delete this thing? Yes/No.” First-order stupid questions arose from the need to protect users from their mistakes: “Do you really want to delete 5 pages of text?” Before computers had an “undo” function, users might have preferred such alerts; these days we expect software to offer “undo,” “exit without changing,” or other ways to worry about errors after they happen.
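The “undo instead of a confirmation prompt” pattern can be sketched as a soft delete: the system acts immediately but records enough state to reverse the action after the fact. A minimal sketch; the class and method names are illustrative, not from any particular product.

```python
class Document:
    """Minimal sketch of delete-with-undo instead of a confirmation prompt."""

    def __init__(self, text):
        self.text = text
        self._trash = []  # stack of deleted snippets, newest last

    def delete(self, start, end):
        # No "Are you sure?" dialog: just do it, but remember how to undo.
        self._trash.append((start, self.text[start:end]))
        self.text = self.text[:start] + self.text[end:]

    def undo_delete(self):
        # Restore the most recently deleted snippet in place.
        start, snippet = self._trash.pop()
        self.text = self.text[:start] + snippet + self.text[start:]

doc = Document("order: penicillin 500 mg")
doc.delete(7, 18)   # delete without asking first
doc.undo_delete()   # worry about the error after it happens
assert doc.text == "order: penicillin 500 mg"
```

The user is never interrupted before the action; the cost of a mistake is one extra keystroke afterwards.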

Stupid questions sometimes come disguised as statements. If, after the user signs an order, a dialog box says “Order submitted, press OK,” the system is either insulting the user or expressing great doubt about its own reliability. From the moment the user signed the order, her mind has been running ahead to the next order or patient. Let’s not distract her.

An example of a second-order stupid question might be “This is a secure web page, but some of the contents come from a non-secure server. Do you want to proceed?” Few users know what this means, and even those who do couldn’t make a decision without knowing which data came from the insecure server. All this question does is encourage users to develop the habit of clicking through security alerts blindly, and give some security person somewhere the false idea that users are safer.

Today every implementation of clinical decision support involves finding a balance between improving care and creating alert fatigue. Alert fatigue is really “stupid question fatigue.” We don’t ask pilots whether they want to climb steeply when they pull back on the yoke, even though lives may be put at risk; we optimize the pilot’s time and attention by assuming a certain level of professionalism and training. If the result is to approach stall speed, then the balance swings to providing an alert.

The “no stupid questions” principle doesn’t lead to a formulaic determination of the validity of questions. However, the principle should be applied routinely during any functional or UI design. Designers should ask themselves:

  1. Is this question necessary for a competent professional user who is working efficiently?
  2. If the question is about a potential user error, can we wait until the error occurs? (If not, we should rethink the overall system design.)
  3. If the question is going to be “blown through” by most users why ask it? If the goal is to detect rare security issues maybe the right approach would be to create an entry in an audit file.
  4. Does this question trade a fraction of a second of the user’s time for a few days’ savings in programming time? If the answer is “yes,” this is a bad trade even under deadline pressure. For example, if the question identifies a rare but important user error, can more complex programming eliminate the question most of the time for most users?
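Point 3 above — log the rare event rather than interrupt everyone — can be sketched as follows. The function and logger names are hypothetical, invented for illustration.

```python
import logging

# Hypothetical audit channel for a security team to review later.
audit_log = logging.getLogger("security_audit")

def load_mixed_content(page_url, insecure_parts):
    """Instead of asking the user a second-order stupid question
    ("some content comes from a non-secure server, proceed?"),
    record the event for the security team and let the user work."""
    if insecure_parts:
        audit_log.warning(
            "Mixed content on %s: %s", page_url, ", ".join(insecure_parts)
        )
    return True  # proceed without interrupting the user

load_mixed_content("https://example.org/page",
                   ["http://cdn.example.org/logo.png"])
```

The user is never trained to blow through a dialog, and the audit file gives the security team a real signal instead of a false sense of safety.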

Sometimes reviewing these issues at design time will eliminate stupid questions. Sometimes it will affirm that the questions are not stupid. Sometimes it will cause the designer to rethink the flow so the question becomes unnecessary. In all cases the user benefits.

Category: healthcare-providers  vertical-industries  

Tags: clinical-decision-support  ehr  emr  usability  

Thoughts on Usability: No Stupid Questions

  1. I like the taxonomy.

    At one point, earlier in my career, I had a similar rule: “Only ask actionable questions.”

    In a like vein, there are two types of trivial problems: those that can be solved by spending money, and those that cannot be solved. The remaining problems are interesting.

  2. John Moehrke says:

    I was going to say something smart… but your blog required me to prove that I was human… clearly a First Order Stupid Question.

  3. Wes Rishel says:

Oh, I dunno. There are days when, if that question were asked of me, it might seem like a second-order stupid question.

  4. Peter Basch says:

    Great post Wes – and sage advice from a very wise colleague. A question / comment pertaining to alerts and alert fatigue in the category of medication safety…
Per this rule, would we say that an alert reminding a clinician not to prescribe penicillin to a penicillin-allergic patient fits category #1? What % of medication interactions are of this type (that’s a real question – I don’t know the answer)? And assuming that this % is > 0 (which I am sure it is), the reason that clinicians prescribe the equivalent of penicillin to a penicillin-allergic patient is not that they don’t know better, but because they are rushed, fatigued, multi-tasking, etc. (our everyday life in the outpatient world, at least).

I would also say that 100% of the prompts we currently have in our EHR implementation are for things that clinicians know or could know – if they took the time to search the database at each patient visit (which of course is time that doesn’t exist).

I am a firm believer in another fairly simple principle: if clinicians consistently applied existing knowledge and evidence to every patient at every opportunity of care, we would collectively see enormous improvements in quality and safety (and perhaps even efficiency) of care. Unfortunately, to operationalize this principle – unless we return office visits / phone calls / e-visits to 20–60 minutes each – we have to rely on prompts and alerts, most of which remind the clinical end user of something that he/she already knows.

I agree these have to be actionable and elegant in their construction to be useful – and to some degree, we have to think about them differently. Per the example above re aviation, pilots and co-pilots do a mundane checklist prior to each commercial flight. Using a similar concept (artfully described by Atul Gawande) in medicine may be insulting to some – but I believe it is something our profession needs.

The best UI designs I’ve seen are adaptive – i.e., the system interacts more “maturely” as the user’s experience level grows. Ideally the transition is smooth, with the rate of progression controlled by the user and/or “learned” by the system.

    And just like with people interactions: no unpleasant surprises or misunderstandings please.
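One way to read “adaptive” here: a prompt that retires itself once the user has demonstrated competence. A sketch only; the threshold of three consecutive successes is an arbitrary assumption.

```python
class AdaptivePrompt:
    """Confirmation prompt that stops asking once the user has
    handled the action successfully a few times in a row."""

    RETIRE_AFTER = 3  # arbitrary: successes before the prompt goes away

    def __init__(self):
        self.successes = 0

    def should_ask(self):
        return self.successes < self.RETIRE_AFTER

    def record_success(self):
        self.successes += 1

    def record_error(self):
        self.successes = 0  # a mistake brings the prompt back

prompt = AdaptivePrompt()
for _ in range(3):
    prompt.record_success()
assert not prompt.should_ask()  # experienced user is no longer interrupted
prompt.record_error()
assert prompt.should_ask()      # but an error re-enables the safety net
```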

  6. Deborah Lafky says:

    It’s kind of interesting playing contrarian to Wes :), but what the heck:

It’s precisely those most confident in their knowledge who most need these stupid questions to be asked. Before the highly skilled pilot of an Airbus A380 can take off and put 600 lives at risk, he still has to go through a long checklist of “stupid” questions. As Gawande points out, no one can be trusted to remember everything they need to remember to carry out a complex task, no matter how much education and experience they may have. Believing oneself exempt from the simple/stupid questions is mere arrogance.

    It may be that system developers could come up with more felicitous ways to ask their questions. But I think it’s dangerous to imagine that the so-called stupid questions should be eliminated.

I definitely want an EHR that asks the doctor, “did you really mean that?” when he accidentally types in a heparin dosage that is off by a factor of 10, or before he is allowed to delete my whole medical record. If that causes the process to take an extra 1/10 of a second of the doctor’s time, it’s a small price to pay. No one’s time is so valuable that these important checks should be skipped.

  7. Bill Braithwaite says:

    Deborah, it is very dangerous playing contrarian to Wes!
    I think the “stupid” question related to your example would be, “Did you really mean that?” when the dose is within the expected range for that particular patient. I don’t think it is stupid to ask the question when an out-of-range condition exists, especially if the ‘question’ includes the logic for why the alert is being raised.
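Bill’s distinction — ask only when the value is out of range, and include the reason in the alert — might look like this. The function name and the heparin range are illustrative only, not clinical guidance.

```python
def dose_alert(drug, dose_units, expected_range):
    """Return an alert message only when the dose is outside the
    expected range for this patient, with the reasoning included.
    Returns None for an unremarkable dose, so the clinician is
    never interrupted by an in-range order."""
    low, high = expected_range
    if low <= dose_units <= high:
        return None  # in range: no question asked
    return (f"{drug}: ordered dose {dose_units} units is outside the "
            f"expected range {low}-{high} units. Confirm or correct.")

# Illustrative range only -- not clinical guidance.
HEPARIN_RANGE = (5000, 10000)
assert dose_alert("heparin", 7500, HEPARIN_RANGE) is None
assert "outside" in dose_alert("heparin", 75000, HEPARIN_RANGE)
```

The factor-of-10 typo in Deborah’s example triggers the alert; the routine order never does.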

  8. Erik Pupo says:

Wes, this is a very thought-provoking article. I am especially intrigued by questions defined as “those that are so trivial that they are an insult to user’s time to ask them.”

    My understanding with many of these questions and alerts is that they create the “paper” trail or evidential basis to judge how and why decisions were made. The user may perceive them as “stupid” or a waste of time but to someone else, the response of the user might be valuable information in attempting to ascertain their thought process.

    Also, who defines what “stupid” is when it comes to a question or alert? I am sure most people encounter signs, alerts, stupid questions, and warnings in their life every day that would save time and energy if they weren’t there. In many cases, they just blow through them already.

    Designers start with the view that they are designing a system for someone who has little to no understanding of how to use that system. Also, many of these warnings and alerts may be legally required or based on regulation so the designer must put them in regardless of their usability.

Good designers build a system that can be customized based on technology know-how and experience. I think in many cases designers allow users to simply turn alerts off – just like many machines do – not that it’s always a good thing to allow that.

  9. Joe Bormel says:

    Nice post and dialogue.

There is a recurring “design” theme in your post; check out this link for a thread on correctly modeling the actors (the end user and the computer as two actors). It includes insightful observations from Jim Walker and Scott Finley. In short, designers might want to consider viewing EMR CDS in terms of how a butler might choose to observe and interact. It includes several taxonomies of non-stupid answers. The common model today is closer to inviting a loud, anxious and often ignorant person into the room with a doctor and patient.

I liked your observation that some problems are best addressed with a redesign. It’s not clear to me that modal dialog boxes are ever a good idea, when the larger context is considered.

Interested readers on the topic of usability – or, more specifically, the process of getting to a usable system – will appreciate John Halamka’s post here:

  10. […] and the clinicians who have to deal with their less brilliant decisions – might find this taxonomy of stupid questions […]

  11. Val Koteles says:

From a developer’s point of view: when permitting users to delete complex items, some deletions can be very difficult to undo. It may be necessary to delete sizeable amounts of data originating from object dependencies. Notifications to other users may be applicable, and there can be logs present that track user actions. Many of these cannot be recalled, or are not permitted to be undone (such as activity log entries). This is certainly the case when government regulations on data management apply. Additionally, there can be legal implications for the user performing the delete, depending on the particular item discarded. Some users may feel uncomfortable that the application recorded their actions, and would prefer to prevent an accidental delete even when it could have been undone.

Another challenge when designing an efficient EMR is that the target audience is inconsistent. Many types of users must act in a variety of roles while running the same system, manipulating data that is shared. Because there is no well-defined typical user, a pragmatic system should implement profile-based preferences. The default system interaction can then be recalibrated for each person, or borrowed from the profile of a role-based user group, such as physicians, administrators, and patients, as applicable. It must then become the responsibility of each user (or regulator) to enforce the desired system behaviors, depending on what would be the safest and most efficient workflows for each individual.
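Val’s profile-based defaults can be sketched as a lookup that falls back from the individual to the role group. The roles, user names, and settings below are invented for illustration.

```python
# Role-group defaults, overridable per user (all values illustrative).
ROLE_DEFAULTS = {
    "physician":     {"confirm_delete": False, "show_order_ack": False},
    "administrator": {"confirm_delete": True,  "show_order_ack": True},
}
USER_OVERRIDES = {
    "dr_smith": {"confirm_delete": True},  # opted back in to the prompt
}

def setting(user, role, key):
    """Individual preference wins; otherwise fall back to the role group."""
    return USER_OVERRIDES.get(user, {}).get(key, ROLE_DEFAULTS[role][key])

assert setting("dr_smith", "physician", "confirm_delete") is True
assert setting("dr_jones", "physician", "confirm_delete") is False
```

The same order-signing screen can thus stay silent for most physicians while still prompting those (or those roles) who want the check.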
