There have been a number of ongoing dialogues on Twitter, on Quora, and on various people’s blogs about the Magic Quadrant. Before I write another blog post explaining how we came to our current cloud-and-hosting MQ, I thought it would be helpful to talk about the process of doing a Magic Quadrant. We lay this process out formally in the initiation letters sent to vendors when we start MQ research, so I’m not giving away any secrets here, just exposing a process that’s probably not well known to people outside of analyst relations roles.
A Magic Quadrant starts its life with a market definition and inclusion criteria. It’s proposed to our group of chief analysts (each sector we cover, like Software, has a chief), who are in charge of determining which markets are MQ-worthy, whether the market is defined in a reasonable way, and so forth. In other words, analysts can’t arbitrarily decide to write an MQ; there’s oversight, a planning process, and an editorial calendar that lays out MQ publication schedules for the entire year.
The next thing that you do is to decide your evaluation criteria, and the weights for these criteria — in other words, how you are going to quantitatively score the MQ. These go out to the vendors near the beginning of the MQ process (usually about 3 months before the target publication date), and are also usually published well in advance in a research note describing the criteria in detail. (We didn’t do a separate criteria note for this past MQ for the simple reason that we were much too busy to do the writing necessary.) Gartner’s policy is to make analysts decide these things in advance for fairness — deciding your criteria and their weighting in advance makes it clear to vendors (hopefully) that you didn’t jigger things around to favor anyone.
In general, when you’re doing an MQ in a market, you are expected to already know the vendors well. The research process is useful for gathering metrics, letting the vendors tell you about small things that they might not have thought to brief you on previously, and getting the summary briefing of what the vendor thought were important business changes in the last year. Vendors get an hour to tell you what they think you need to know. We contact three to five reference customers provided by the vendor, but we also rely heavily upon what we’ve heard from our own clients. There should generally not be any surprises involved for either the analysts or the vendors, assuming that the vendors have done a decent job of analyst relations.
Client status and whatnot makes no difference whatsoever to the MQ. (Gartner gets 80% of its revenue from IT buyers who rely on us to be neutral evaluators. Nothing a vendor could pay us would ever be worth risking that revenue stream.) However, it generally helps vendors if they’ve been transparent with us over the previous year. That doesn’t require a client relationship, although I suspect most vendors are more comfortable being transparent if they have an NDA in place with us and can discuss these things in inquiry, rather than in the non-NDA context of a briefing (though we always keep things confidential if asked to). Ongoing contact tends to mean that we’re more likely to understand not just what a vendor has done, but why they’ve done it. Transparency also helps us understand a vendor’s apparent problems and bad decisions, and the ways they’re working to overcome them. It leads to an evaluation that takes into account not just what the vendor is visibly doing, but also the thought process behind it.
Once the vendors have gone through their song and dance, we enter our numeric scores for the defined criteria into a tool that then produces the Magic Quadrant graph. We cannot arbitrarily move vendors around; we can’t say, well, gosh, that vendor seems like they ought to be a Leader / Challenger / Visionary / Niche Player, let’s put them in that box, or that X vendor is superior to Y vendor and should come out higher. The only way to change where a vendor is placed is to change their scores on the underlying criteria. We do decide the boundaries of the MQ (the scale of the overall graph compared to the whitespace in the graph), and thus where the axes fall, but since a good MQ is basically a scatterplot, any movement of the axes alters the quadrant placement of not just one vendor but a bunch.
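The mechanics here amount to a weighted sum: each criterion carries a weight toward one of the two MQ axes (Completeness of Vision on the x-axis, Ability to Execute on the y-axis), and a vendor’s dot position falls out of the arithmetic rather than anyone’s whim. Gartner’s actual tool, criteria, and weights are proprietary; the criteria names, weights, and scores below are invented purely for illustration.

```python
# Hypothetical sketch of weighted criteria scoring; Gartner's real tool,
# criteria, and weights are proprietary and are NOT reproduced here.

# Each criterion maps to (axis, weight); weights per axis sum to 1.0.
CRITERIA = {
    "product_capabilities": ("completeness_of_vision", 0.6),
    "market_understanding": ("completeness_of_vision", 0.4),
    "sales_execution":      ("ability_to_execute",     0.5),
    "customer_experience":  ("ability_to_execute",     0.5),
}

def place_vendor(scores):
    """Collapse per-criterion scores (1-5) into (x, y) axis coordinates."""
    axes = {"completeness_of_vision": 0.0, "ability_to_execute": 0.0}
    for criterion, (axis, weight) in CRITERIA.items():
        axes[axis] += weight * scores[criterion]
    return axes["completeness_of_vision"], axes["ability_to_execute"]

# Invented example scores for a single vendor.
x, y = place_vendor({
    "product_capabilities": 4,
    "market_understanding": 3,
    "sales_execution": 5,
    "customer_experience": 4,
})
print(round(x, 2), round(y, 2))  # 3.6 4.5
```

The point of the sketch is the one-way dependency: the only lever an analyst has over the dot is the criterion scores themselves, which is exactly why placement can’t be hand-tuned vendor by vendor.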
Once the authoring analysts have finished scoring and written up all the accompanying text, the MQ goes into a peer review process. It’s formally presented in a research meeting, any analyst can comment, and we get challenged to defend the results. Content gets clarified, and in some cases text as well as ratings get altered as people point out things that we might not have considered.
Every vendor then gets a fact-check review; they get a copy of the MQ graphic, plus the text we’ve written about them. They’re entitled to a phone call. They beg and plead; the ones who are clients call their account executives and make promises or threats. Vendors are also entitled to escalate into our management chain, and to the Ombudsman. We never change anything unless the vendor can demonstrate that something is erroneous or unclear.
MQs also get management and methodologies review — ensuring that the process has been followed, basically, and that we haven’t done anything that we could get sued for. Then, and only then, does it go to editing and publication. Theoretically the process takes four months. It consumes an incredible amount of time and effort.
(Please note that per Gartner’s social media policies for analysts, I am posting this strictly as an individual; I am not speaking on behalf of the company.)
The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.