

How Can You Know What You Don’t Know?

by Frank Buytendijk  |  November 30, 2017

Once in a while in a conversation, a question comes up that is really thought-provoking. In this case the question was “How can you know what you don’t know?”

It got me thinking. At first glance, it is a question on the level of “what is the meaning of life?” or “what is true, and what is real?”. Immanuel Kant argued that it is one of the three big questions in philosophy: “What can I know? What should I do? What may I hope?”. There is even a whole field in philosophy that focuses on this question: epistemology, the theory of knowledge. So it is easy to get lost in this question.

But the question is relevant. And important. Particularly when business and society are, as the cliché goes, in uncertain times. So we must have a practical answer. We had a discussion about this in our research community, and this is what came out.

If you don’t know what you don’t know, it doesn’t mean it is unknowable. Others may know.

  • If it is known by others, you can benchmark and compare. See what the hot topics and issues are in other industries, and how that translates to you. Investigate.
  • If it is not yet known by others either, then it requires experimentation.
  • If it is unknowable for all (most prominently: the future is unknowable), then it can only be assumed or imagined; this requires scenaric thinking.

Let’s go into all three of them, in a slightly more formal way:

  • Inductive analytics
  • Structured ontological inquiry
  • Continuous scanning

Inductive analytics

There are different thinking styles. Most business professionals excel in deductive thinking, and use the corresponding analytical style: what is the business strategy, and how do you translate that into relevant business questions? This is a very valuable set of skills in areas of deliberate design and control, but in more emergent situations it doesn’t help. It leads straight to the question “how do we know what we don’t know?”.

A more inductive thinking and analytical style helps: start with a pile of data and a set of observations, and then start experimenting. In the field of data and analytics there is actually a trend towards what is called “augmented analytics”, making use of ensemble techniques, machine learning, natural language query and more. You could imagine this as an email every morning from your analytics tool that tells you the 10 most interesting things it found for you. In the research note I referenced above there is an example from our “BI Bake Off”, using university data. One analytics tool found out that the common wisdom that attending a top university is the main predictor of high earning power simply wasn’t true.
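To make the “morning email” idea concrete, here is a minimal sketch of the pattern: instead of starting from a hand-picked business question, the tool scans a table for its strongest pairwise relationships and surfaces the top findings. The column names and synthetic data are illustrative assumptions, not the actual Bake Off data or any vendor’s implementation.

```python
# A minimal sketch of inductive, "augmented analytics"-style discovery:
# scan all column pairs for the strongest relationships and report them,
# rather than testing one hypothesis chosen in advance.
from itertools import combinations

import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "university_rank": rng.integers(1, 200, n),   # hypothetical columns
    "gpa": rng.normal(3.0, 0.4, n).clip(0, 4),
    "internships": rng.integers(0, 5, n),
})
# Assumed ground truth: earnings driven by internships and GPA,
# not by university rank -- echoing the Bake Off finding.
df["earnings"] = (
    20_000 * df["internships"] + 15_000 * df["gpa"]
    + rng.normal(0, 10_000, n)
)

findings = []
for a, b in combinations(df.columns, 2):
    r = df[a].corr(df[b])              # Pearson correlation for each pair
    findings.append((abs(r), a, b, r))

# The "morning email": the strongest relationships, strongest first.
for strength, a, b, r in sorted(findings, reverse=True)[:10]:
    print(f"{a} vs {b}: r = {r:+.2f}")
```

Run against the synthetic data above, the top findings involve internships and GPA, not university rank, which is exactly the kind of surprise an inductive pass can surface.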

Structured ontological inquiry

Think “diapositive”. Unknowns may form an area of white space, surrounded (hopefully) by more knowns. So by plotting out the knowns, you at least get a shape of the unknown. Here is how that process works.

First, identify the knowns. Use some kind of causal modelling to understand the dynamics of the situation. The model may not always (will not always!) work; reality provides different outcomes. These are then the unknowns to the model. Inductive analytics will help come up with new models, new sets of correlations that provide a better description of the behaviors of your environment. Use a variety of techniques to develop multiple of these models, and rank them based on the confidence that each model is indeed predictive. The models with high confidence now contain more knowns. Great, you learn. You know more. The models with low confidence point at more unknowns. Even better, you know there is more to know. Repeat this process until the effort is no longer justified by the benefits and risks.
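As a hedged sketch of that loop (assuming scikit-learn, with made-up data and candidate models), you could fit several models of the same outcome, use cross-validated predictive accuracy as the confidence score, and rank them: high scores mark new knowns, low scores point at unknowns worth probing next.

```python
# A minimal sketch of the inquiry loop: fit multiple candidate models,
# score each one's confidence by cross-validation, and rank them.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
# Assumed ground truth: a nonlinear effect that a linear model will miss.
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=300)

candidates = {
    "linear": LinearRegression(),
    "tree": DecisionTreeRegressor(max_depth=4, random_state=0),
    "forest": RandomForestRegressor(n_estimators=100, random_state=0),
}

# Confidence = mean cross-validated R^2; rank best first.
ranked = sorted(
    ((cross_val_score(m, X, y, cv=5).mean(), name)
     for name, m in candidates.items()),
    reverse=True,
)
for score, name in ranked:
    label = "more knowns" if score > 0.7 else "points at unknowns"
    print(f"{name}: confidence {score:.2f} -> {label}")
```

Here the low-confidence linear model is not a failure; it is the shape of the unknown, telling you where the next round of inductive modelling should dig.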

It is not about the answer, but about the realization

One of the most insightful frameworks I have learned about in years is called “Cynefin”. It shows there are 4 types of decision-making, based on the situation and the type of problem: simple problems, complicated problems, complex problems and chaotic ones. We can exhaust our understanding of our area of knowledge in simple and complicated types of problems. Simple problems we simply observe, derive some rules from, and then we know how it works. Think of a game of tic-tac-toe. Complicated problems, like a game of checkers, require deeper analytics. The goal usually is some kind of optimization. The computer helps us create exhaustive knowledge about the subject.

It becomes more difficult in complex environments. Think of video games. Here, as a player, we are part of the game. Every action affects our environment; there is no way to know everything. So what we do is “try stuff”. Sense and respond. And slowly figure out how to operate in our environment. And as we know that we don’t know, we keep scanning whether our set of assumptions about the environment still works. We use scenaric thinking to challenge those assumptions, and keep a keen eye out for change.
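One minimal way to picture “sense and respond” in code is trial-and-error learning, sketched below as an epsilon-greedy bandit. The environment’s payoffs are an illustrative assumption, hidden from the agent, which can only try actions, observe outcomes, and revise its working assumptions.

```python
# A minimal sketch of "sense and respond" as trial-and-error learning.
import numpy as np

rng = np.random.default_rng(1)
true_payoffs = [0.2, 0.5, 0.8]   # hidden from the agent (assumed environment)
estimates = [0.0, 0.0, 0.0]      # the agent's working assumptions
counts = [0, 0, 0]

for step in range(1000):
    if rng.random() < 0.1:                    # keep experimenting: try stuff
        action = int(rng.integers(len(estimates)))
    else:                                     # exploit what seems to work
        action = int(np.argmax(estimates))
    reward = float(rng.random() < true_payoffs[action])   # sense the outcome
    counts[action] += 1
    # respond: revise the assumption toward what reality just showed
    estimates[action] += (reward - estimates[action]) / counts[action]

print([round(e, 2) for e in estimates])  # drifts toward the true payoffs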

Chaotic problems require a different approach as well. Think of a product recall. There is no time to sense and respond; we need to act immediately. Knowledge here borders on ethics and introspection: knowing what is the right thing to do, even without knowing the specifics of the situation.

I’d like to end this blog by recognizing the input of the Gartner data and analytics research community, and specifically (and in random order): Neil Chandler, Jorgen Heizenberg, Rita Sallam, Debra Logan, Alan Duncan, Kurt Schlegel, Mark Beyer, Mike Rollings, Rick Greenwald.


Frank Buytendijk
Research VP & Fellow
10 years at Gartner
23 years IT Industry

Frank Buytendijk is a Gartner Research VP & Fellow, covering digital ethics and data & analytics.




