Andreas Bitterer

A member of the Gartner Blog Network

Andreas Bitterer
Research VP
9 years at Gartner
27 years IT industry

Andreas Bitterer is a research vice president at Gartner, where he specializes in business intelligence, data integration and data quality, with expertise in analytical applications, data warehousing and information management.

Setting the Record Straight

by Andy Bitterer  |  December 28, 2008  |  23 Comments

Caution: Long post, but lots to say…

It’s not often that a vendor reacts to the publication of a Gartner document with a blog post. Press releases, yes, and a lot of them, kinda like “Company XYZ announces that a leading analyst firm has placed it in the leaders quadrant of the ABC Magic Quadrant…” Rarely, though, do we see a posting directly referring to a magic quadrant, particularly from a vendor that wasn’t included. (Draw your own conclusions.)

Yves de Montcheuil, VP of marketing at Talend, did just that on his blog, where he commented on the latest data integration magic quadrant. I have never met Yves, only talked to him on the phone a few times, and likewise I have a lot of respect for him, just as he wrote he has for the Gartner analysts who authored the Magic Quadrant for Data Integration Tools. In his post, Yves makes a few statements that call for a response, as some indicate a distorted perspective on the authors’ view and others are just plain wrong.

No surprise, Gartner’s analyses are still very conservative

If this means that we are not jumping on every new idea and announcing it as the silver bullet, then yes, we are conservative. It does not mean, though, that we are turning a blind eye to new developments. Quite the contrary, in fact, as demonstrated by the large number of tiny and often virtually unknown vendors that are featured as “Cool Vendors”.

Their analysts use mostly their rearview mirror, to look at what happened behind them, whereas they should have a radar to see what’s happening around them and ahead of them.

This is rather meaningless to me, because it’s irrelevant what was “happening behind me” or is “happening ahead of me“. Of course, we are evaluating each vendor’s past, present, and potential future. What the author of any magic quadrant is looking at (and this is clearly communicated to every vendor that is potentially included) is not only things such as product capabilities, but also revenues, sales, marketing, support, alliances, and financial position. I’m sure even Yves would agree that those criteria are more backward-looking, as they describe a track record, which, by definition, is always looking at the past. A “future track record” does not exist, with radar or without. And “future revenues” are fairy tales.

… the Magic Quadrant reflects past adoption of certain technologies by large accounts in the US, who are customers of Gartner.

Wrong on all counts. The MQ is not an adoption map. It has nothing to do with the size of the company. The MQ does not have a US focus. And finally, surveyed companies do not have to be Gartner clients.

Updated every 18 to 24 months and reflecting the long cycles of traditional vendors, who used to take years before they could achieve a significant position on a market

Wrong again. This MQ is updated every year, and Yves should know that. And the MQ definitely does not reflect anybody’s sales or product cycles.

This quadrant includes a combination of dying technologies which have been acquired over and over again (ETI, Open Text’s Genio…), loading utilities (Syncsort, Pervasive, Sybase’s Solonde…) and real enterprise solutions (Informatica, IBM’s DataStage). One component is missing: open source – of course.

I may be repeating myself here, but every MQ has a pre-defined set of inclusion criteria, and every vendor that meets those criteria will be included. Period. Whether they have been acquired a gazillion times (not sure why that matters), sell “dying technology” (as Yves puts it; well, it’s his job as a marketer to knock the competition), or sell products made from clay, it does not matter, as long as they match the criteria. And open source vendors are not “missing, of course”; they are absent because not one of them meets those criteria (to date). We are even changing the inclusion criteria to make it possible at all for open source vendors to be evaluated. If we stuck with “minimum license revenue of 20M USD” (I picked an arbitrary figure!) as one of the inclusion criteria, it’s hard to see how any open source data integration tools vendor would make the MQ any time soon, as they typically don’t license the software. Because this wouldn’t be fair to open source vendors, we use a figure for “minimum total revenue” instead (only for open source vendors, which generate revenue mostly from service subscriptions), so they have at least a chance of being included in the MQ. If a vendor does not make those figures, they are still out. Nobody gets included out of goodwill.

Some would say that open source vendors cannot afford to pay Gartner (I personally don’t think it makes a difference). This may be true for some vendors. But in our case, Talend is a commercial vendor with strong resources and could afford a contract with Gartner.

Ah, the old misperception. Being included in any MQ (not just the one for data integration tools) in no way depends on being a Gartner client. I’m happy to see that you don’t think it makes a difference. Because it doesn’t.

But why? To hear that “open source is immature (probability 0.9) and will become mature in 5 to 20 years (probability 0.8)”?

That is funny. But Yves made that up, of course. What is interesting, though, is that Yves draws a connection between Talend’s exclusion from the MQ and general open source maturity. I think that’s a bit of a stretch, as there are many technology areas that are considered mature (operating systems, web servers, app servers, DBMSs, to name a few), but data integration is just not there yet, according to our data from user surveys. Yves himself has provided reference accounts of Talend users for our open source data integration survey.

No thanks. We know, and our clients know, that open source has changed a lot over the past years and has become a true alternative for the enterprise (probability 1.0). Maybe even Gartner will realize this one day (probability 0.2)!

We do not disagree at all that “open source has changed a lot”, but that’s not the point. We also recognize that open source tools (of many kinds) are used in small, medium and large enterprises. All of these things are definitely taken into consideration when we evaluate open source, as we update the MQ again next year. Equally, we will be looking at how the rest of the data integration market has developed, as open source does not exist in a vacuum. Criteria may change, too. Until then, I hope this clarifies the 2008 version of the data integration MQ, even for Yves. Probability… uh… never mind. We stopped using those probabilities quite some time ago, anyway.

Category: Data Integration ETL Gartner Magic Quadrant Open Source Technology Uncategorized Vendors

23 responses so far ↓

  • 1 Yves de Montcheuil   December 29, 2008 at 6:00 am

    Andreas,

    Thanks for the post, and for giving your point of view on my comments. I stand corrected on some facts (the frequency of the updates, for example, which has increased in the past few years) but still believe that your cycles and inclusion criteria are no longer suited to accelerating market cycles.

    Take the Data Integration quadrant, for example. Its latest update cycle was initiated in the summer of 2008. Inclusion criteria such as revenue or number of customers were based on 2007 numbers. The MQ was released in Q3 – based on numbers that were almost one year old.

    Take on the other hand a fast growing and well funded vendor like Talend. We launched our GPL product just two years ago, our commercial open source version not even 18 months ago, and have already achieved a market penetration that took some proprietary vendors 10+ years to achieve. Because your inclusion criteria were based on 2007 numbers, Talend did not make it.

    Am I frustrated not to see Talend included? Of course I am. Is it hurting our business? I don’t think so. When I ask CIOs or CTOs of companies who have chosen our technology if they are willing to talk to Gartner analysts about their choice, the answer I get is usually a variation of “I would be happy to, but why? Gartner does not believe in open source anyway.”

    I agree that this is a difficult exercise, and that you cannot include in the MQ every small vendor that promises to revolutionize the world. But you could – and should – give more weight to real-time track record, adoption and market trends, and to the potential of alternative models.

  • 2 Andy Bitterer   December 29, 2008 at 11:07 am

    Thanks for your comments, Yves. Good discussion. Here are a few more remarks to what you wrote:

    I … still believe that your cycles and inclusion criteria are no longer suited to accelerating market cycles.

    Honestly, I don’t know what “market cycles” are and how they are supposed to accelerate. With that said, I don’t really understand how you would want us to change the inclusion criteria. As far as I’m concerned, I think our inclusion criteria are dead on and I don’t think any intra-year updates to the MQ are really warranted, as this market does not change so dramatically that a yearly update wouldn’t suffice.

    As far as the revenue figures go, it is completely up to every vendor to provide whatever year-end or even quarter-end numbers they would like to provide. Most vendors choose year-end, which gives us a sense of revenue growth. If Talend had an outrageously successful 1Q/08 or 1H/08, where your revenues skyrocketed tenfold over YE/07, I’m sure you would have told us. And we would have been very interested in that fact. BTW, it’s still a level playing field, as we are looking at every vendor’s revenues in the exact same light. Talend is not treated any differently than IBM, Informatica, Oracle, or anybody else on the MQ.

    We launched our GPL product just two years ago, … and have already achieved a market penetration that took some proprietary vendors 10+ years to achieve

    You see, I don’t believe that for a second. How do you count “market penetration”? If you had any supporting facts for this claim, I would absolutely love to see those.

    Because your inclusion criteria were based on 2007 numbers, Talend did not make it.

    You seem to imply that revenues are the only inclusion criteria. They are not.

    When I ask CIOs or CTOs of companies who have chosen our technology if they are willing to talk to Gartner analysts about their choice, the answer I get is usually a variation of “I would be happy to, but why? Gartner does not believe in open source anyway.”

    OK, so here’s the dilemma. In your original post, you dismissed Gartner (quote: Talend … could afford a contract with Gartner. But why? To hear that “open source is immature (probability 0.9) and will become mature in 5 to 20 years (probability 0.8)”? No thanks.) as “they just don’t get open source” (I’m paraphrasing here). And now your CIO and CTO customers say the same, according to some hearsay. Would it surprise you that Gartner has published over 3000 (in words: three thousand!) documents that talk about “open source” in one way or another, dating back 10 years? I just did a quick search, and I obviously didn’t read them all. So you can now believe in your own perception or that of your CIO/CTO friends, or you can look at the facts. Your decision.

    BTW, I’m really not sure what those CIOs of yours mean by “not believing in open source”, as open source is not a religion. Although the many heated debates that have gone on for years might make me think otherwise.

    But you could – and should – give more weight to real-time track record, adoption and market trends, and to the potential of alternative models.

    Sorry, Yves, a “real-time track record”? Are you making this up? In case you’ve been in the snow lately, turn around once and look behind you. That’s where you recorded the tracks, and it happened in the past. I have no idea what a “real-time track record” is supposed to be. And to be clear, we are absolutely monitoring adoption, market trends (hey, that’s our job!), and alternative business or delivery models. We’ve always done it. Maybe you weren’t watching.

  • 3 Yves de Montcheuil   December 29, 2008 at 1:12 pm

    Andy,

    I am not arguing about whether the playing field is level or not, but about its suitability. And Gartner may have written 3000 documents that contain the term “open source” in the past 10 years, but that does not mean that this research recognizes the disruption brought by open source to a market.

    As far as “supporting facts for market penetration” and a “skyrocketing 2008” go, I would encourage you to check back in your archives for the data we provided to you last summer, under NDA.

    Most important: is open source a religion? It may be for some, but not for us. It is a deployment model for our company that encompasses both a product development approach and a business model, both allowing us to deploy much faster and much more globally than alternate models. But you can believe in things other than God (or maybe “believe” is not the right term?). When VCs invest initially in a company, they do so because they believe in the team, the concept, and the market potential. When they come back for a second, third, etc. round, they do so based on the results (the track in the snow) and the potential they see ahead of them.

    And BTW, regarding the snow – don’t wait too long before checking the track, because when the snow melts, you can’t see it anymore.

  • 4 Vendor complains in a very public blog post about Gartner’s Data Integration Magic Quadrant « SageCircle Blog   December 29, 2008 at 3:35 pm

    [...] latest Magic Quadrant for Data Integration, photo left) and Gartner’s Andy Bitterer (Setting the Record Straight, photo right). This is interesting because it is unusual for a vendor to engage Gartner in a public [...]

  • 5 Carter Lusher   December 29, 2008 at 5:18 pm

    This is an interesting post because it is unusual for a vendor to engage Gartner in a public forum about its research or methodology, and for a Gartner analyst to respond to criticism. Kudos to both you and Yves for engaging in this conversation.

    Let me throw in a couple of points in response to something you wrote. I also posted Vendor complains in a very public blog post about Gartner’s Data Integration Magic Quadrant to discuss this issue further.

    I agree that Yves’ point about MQs being “Updated every 18 to 24 months and reflecting the long cycles of traditional vendors” is pretty much completely off the mark… when it comes to being tied to vendor product cycles. Common sense tells us that vendors on an MQ do not coordinate their product releases, so there are vendors releasing products throughout the time between MQ updates. However, that does not mean that all MQs are updated regularly. No, what determines MQ updates is how disciplined the analysts are in managing their work. Some analysts are very good about annual or even more frequent updates (like you). Unfortunately, other analysts cannot manage their workloads, leading to a situation where MQs become out-of-date with stale information and analysis. So 18 to 24 months is not completely inaccurate. Lack of timely MQ updates has been brought up by the AR community with Gartner management on a number of occasions, most recently with Gartner SVP Michael Yoo on the December 9th/10th Quarterly AR call.

    There is a certain amount of truth in Yves’ comment that MQs reflect “…past adoption of certain technologies by large accounts…” A significant source of information for the average Gartner analyst comes informally via phone-based end-user client inquiry. This reliance on end-user clients as a source obviously skews the pool of data points. Gartner’s client base is not a statistically valid sample for all types of research and only clients that choose to set up an inquiry are counted. Thus analysts are getting an incomplete picture and they probably do not realize it.

    Finally, because I cannot resist being snarky… About your comment “We stopped using those probabilities quite some time ago, anyway.” Does last Tuesday the 23rd qualify as “quite some time ago”? That is the most recent research note that I can find using probabilities. Yes, the stated policy is that Gartner analysts have stopped using the “p=” or “probability =” in research notes, but not every analyst must have received the memo because I can still find probabilities being used in conjunction with Strategic Planning Assumptions.

    Cheers, Carter Lusher

    SageCircle, experts on the analyst ecosystem and AR best practices

    http://www.sagecircle.com
    http://www.sagecircle.wordpress.com
    http://www.twitter.com/carterlusher

  • 6 Finding vendors: Magic Quadrants and so on « ITasITis   January 6, 2009 at 6:51 am

    [...] public blog post about Gartner’s Data Integration Magic Quadrant Sage Circle, 29 Dec 2008 • Setting the Record Straight Andreas Bitterer, Gartner blog, 28 Dec 2008 • CIO best practices for thriving in a recession [...]

  • 7 Ludovic   January 7, 2009 at 7:12 am

    Hi Andy,

    It’s great to see you, as a Gartner analyst, engaging in a public debate on your blog: it does a lot for transparency and openness.

    Kudos to you!

  • 8 Mark Madsen   January 7, 2009 at 5:21 pm

    Interesting discussion. My observation from being a Gartner client in the past (both vendor and IT) is that some of Yves’s points are correct and some of the critiques are a little off the mark.

    Gartner as an organization is so large that there’s variety in analyst coverage of open source in specific markets, and huge variations in quality and thoughtfulness of the analysts. I’d side with Yves that until recently open source was largely dismissed as nothing in almost all coverage areas. I think that’s changed significantly in the past 18-24 months.

    As an example of variability, I think the data integration quadrant is a travesty because of the decision to make it “data integration”. While I understand the reason (the shift to multiple techniques/technologies, consolidation, the drive to a DI platform), it is unusable for most IT organizations deploying at a tactical level.

    How can someone take “IBM” or “Oracle” in the quadrant seriously? Which of the dozen or so products from a single vendor is implied for leadership? If it stated “Oracle Data Integration” or “IBM’s DataMirror” it would be more useful. Some vendors’ products are DOA while others are terrific. Since the evaluation is not of the products but of the vendors, it’s not possible to use the MQ to choose DI technology appropriate to the situation. When I’m with clients who are Gartner subscribers I have to guide them in their use of these results.

    The challenge for Gartner is having an MQ that has enough vendors in it to be profitable, that makes sense for vendors to participate in (avoiding the “but we aren’t an X” problem), and that still matches the IT use of the MQ. Ideally, I’d like to see a DI MQ that subdivides the technologies rather than the muddle there now, e.g. a division into ETL, replication, federation, and suite/platform offerings that integrate them (and measures how well they are integrated).

    This guidance goes to a separate issue: there are two types of analyst reports. One type is used for detailed product selection. The other is used for making the short list and for CYA, which is where I think the MQ fits. “We didn’t do wrong because this vendor is up and to the right” is a sad fact of life in a large number of IT shops.

    Open source is certainly a problem for analyst reports due to inclusion metrics. It’s also a problem because there is such variability from one OSS vendor or project to another. I haven’t looked, but I’d hazard a guess that the R project isn’t included in the BI or related MQs yet it’s hugely influential, enough so to make an NYT article.

    I think you may be right to exclude Talend or Pentaho from the DI MQ on the basis of market penetration and size last year (sorry Yves) but that may be incorrect given the use in operational rather than pure BI ETL scenarios, e.g. Gartner may be looking in the wrong places. I’m seeing such a huge lift in interest and use for areas where the mainstream ETL tools are not a match that I can’t help but think you may be at a point where coverage will have to be initiated.

    I believe we’ve hit a crossover point in the past year with regard to open source DI tools, but then I work in markets more familiar with open source so my judgement may be biased, in the same way that Gartner’s focus on the mainstream peak of the curve can distort the perspective of what’s coming or going – one of Yves’s complaints.

    As always, a pleasure to hear your thoughts. I’m also pleased to see Carter chime in with some other insights.

  • 9 Bob Zurek   January 7, 2009 at 9:45 pm

    This has certainly been one of the most insightful debates between an analyst and a vendor in a long time. I’ve been an analyst, someone involved in the creation of data integration products and also someone involved in open source which is why I enjoyed reading the thread. I guess I can relate on all fronts.

    Frankly, I am quite surprised that Talend wasn’t included in the Data Integration Magic Quadrant. I’m also surprised that a player like Expressor-Software is not included. In fact, Ab Initio, a company that has been in the market for a long time, that has some very large customers, and that is probably quite profitable, hasn’t ever been in the DI MQ that I’m aware of. Probably because Ab Initio won’t talk to analysts. I suppose Ab Initio doesn’t care at this point; I don’t think it has really impacted their business.

    I believe that analysts have a responsibility to paint a complete picture of the vendor landscape for their client community and for the industry. Customers should be aware of open source options or new emerging companies at all times. In fact, analysts should be the ones proactively reaching out to their customers to let them know about emerging players, whether through a phone conversation or a write-up on their site. I’d be very surprised if they weren’t doing this.

    I also think that marketing professionals should be proactive with the analysts. I recall the time when I was an analyst and I had a vendor (not a customer) call me frequently for just about every customer win they had. She was incredible. She provided a case study, the contact name of the customer (approved by the customer) and why the customer was so great. At the time, this was an emerging vendor in their respective space. In fact, I don’t think they were listed on the Magic Quadrant at the time because the marketing person mentioned it to me. When it came to my clients, I felt that they needed to be aware of the company and the progress they were making even though they were a fairly new company.

    Cool Vendors? Well, I never really liked the idea of labeling a vendor “cool”, as I think every company building a solution that serves the needs of its customers would probably think its product is cool.

    Why does the MQ really have to be so static? I think I’ve heard Gartner mention the words Real Time a few times. Maybe Gartner needs an MQ 2.0? If so, I suppose Talend and Expressor-Software would show up.

  • 10 The business intelligence funk | DBMS2 -- DataBase Management System Services   January 8, 2009 at 7:08 pm

    [...] analyst Andreas Bitterer’s rarely-updated blog has gotten some recent attention because of his kerfuffle with Yves de Montcheuil of Talend.   Reading same, I went on to notice another post by Andreas that captured my own feelings, to [...]

  • 11 Donald Farmer (MSFT)   January 9, 2009 at 1:37 am

    This is a great thread. I’ll add my thanks to Andy for being so open – having this discussion en plein air is very welcome.

    My opinions expressed here are purely personal – don’t read any Microsoft policy into them. My AR team will cut off my coffee supply if you do.

    Firstly, to the discussion of the Magic Quadrant “playing field” – whether it was level or suitable. Yves recognizes the field as level, and I agree. The inclusion criteria are tough when you’re a vendor, even a megavendor. When Microsoft released SQL Server Integration Services, our challenge (with the original ETL quadrant) was that we do not monetize our ETL features separately. Ironically, this difficulty (calling out market numbers for inclusion metrics) is very similar to the problems that an open source vendor would face. The monetization is not easily allocated, and I can hardly call up Andy and say “Look at these great numbers from IDC.” So I do sympathize with Yves … a little. It took work to make our case for inclusion; there was no free pass for Microsoft.

    The suitability of the quadrant is another matter. Quadrants can suffer from being either too general or too specific. For example, the Customer Data Mining quadrant excludes Microsoft and Oracle because our successful data mining products are not vertical enough in their implementation or marketing for inclusion. That quadrant is, to my mind, too specific. (A quick aside: talking about data mining, I have to say that the New York Times article about R that Mark mentioned was ridiculous. Enjoyable though R is to use for specialists, it’s hardly comparable to Excel for usability.)

    On the other hand, I think Mark makes a good point about the data integration quadrant being too general – it really is difficult to see how it supports actionable purchasing decisions, compared to the old ETL quadrant. I well remember the first call we had with Andy and Ted as we at MS struggled to balance the MQ questionnaire with answers covering BizTalk, Host Integration Server, SQL Server Integration Services, and Replication: these products cover a huge range of strategic and tactical needs. Oracle and IBM have similar problems. The DI MQ favours a data integration stack that is neither restricted to ETL, nor as wide-ranging as those of the megavendors. We used to joke that the integration quadrant was the Informatica quadrant. Not to suggest favouritism in evaluation, but rather that the quadrant definitions appeared to be weighted in such a way that a specialist, independent vendor would typically be smack in the middle of the leaders’ corner. Maybe that is not even surprising; market leaders define markets, and specialist independent vendors probably define markets more clearly than the muddy megavendors. John Radcliffe’s MDM Quadrant perhaps disproved that line of thinking last year.

    Having said all this, in my experience, Gartner have been good at listening to vendors and thinking really carefully about their quadrants and criteria. We may not have persuaded them to our point of view in all cases, but I have never doubted that they were open to persuasion.

    Mark suggests that the use of open source DI tools in operational (rather than BI) scenarios masks them from Gartner’s view of the market. Yet I’m not so sure that schlepping data around with low cost tools really counts as Data Integration in the sense that Gartner intend it. The predecessor to SQL Server Integration Services, good old DTS, was a classic cheap operational data movement tool, but I’m not at all sure I could defend it against Gartner’s new DI criteria (as opposed to the old ETL criteria), whereas the new Integration Services clearly does play in that space. Talend and Pentaho are pushing their feature sets into the data integration space with every release, but if their market adoption is primarily for operational data schlepping, rather than enterprise data integration, I would suggest they may be reasonably excluded.

    Let’s see how this develops. I, for one, would love to get Gartner’s view on how OSS data integration vendors are performing in the high-end scenarios where they are being successful. I’m not sure it would serve Gartner’s customer base well to see them in an MQ yet. Mark says that “Gartner may be looking in the wrong places,” which may be true if they are only looking for emerging trends; but if they are looking for what is relevant to their customers’ needs, they may be getting it right, for now.

    Again, thanks Andy for a super blog post and for being so engaged.

  • 12 Timo Elliott   January 9, 2009 at 5:06 am

    This is a very civil and high-minded debate (although I’m sure Andy is starting to regret opening a can of worms).

    But the real issue is that appearing on the quadrant gives a measure of “credibility” to a vendor (note to Gartner: choose whatever word you prefer, but don’t try to weasel out of it — if appearing in the quadrant doesn’t mean anything, what would be the point of anybody paying for Gartner’s services?)

    This in turn inevitably means that if you’re NOT in the quadrant, people naturally assume that Gartner has reviewed the company/products (because they are supposed to be omniscient) and decided that you’re not “credible.”

    And that inevitably means lots of board, CEO, and sales pressure on marketing to “fix the problem”… Yves’ cry of indignant pain was an unfortunately counter-productive response (I think — you could make the argument that the subsequent interest in the BI community has worked wonders for Talend awareness).

    It would have been more productive to approach the Gartner analysts (again), explain (again) why you felt you were a “credible” vendor that should be included, listen hard to their answers, and then post a discussion about that on the blog.

    Or another way to think about it — next time you feel like lashing out at unfair coverage, think to yourself “what would Obama do”? :-)

  • 13 Andy Bitterer   January 9, 2009 at 12:27 pm

    Wow. What a great discussion. Thanks to everyone who responded, here on the blog or in private, via email, Twitter, or smoke signals across the prairie. As I am currently rather busy preparing for our BI Summit in The Hague (January 20-22), I am obviously a little behind in responding to all those comments here. So I’ll try to do them justice in one fell swoop.

    First of all, thanks for the kudos. Much appreciated. I never thought of myself as tight-lipped and always welcomed open discussion. As long as it is truly open. There were times when I got flak for something I wrote from people that were themselves hiding in anonymity. Not so here. Thank you.

    So here it goes, trying to address the major points of each response.

    @Yves: If writing 3000 documents mentioning “open source” is not an indicator of recognizing open source, then I don’t know what is. Even if only 10% of those did a detailed evaluation of an open-source technology or topic, that would still amount to a lot of research available to our clients (and that’s what counts). We are doing primary research on open-source adoption trends as well as qualitative research in the form of MQs and other research notes. There is even an open-source software research community that meets every week. I’m afraid, Yves, your outside-in perspective does not reflect the facts, just perception.

    Regarding any “skyrocketing sales figures” as in the data that you had provided: it’s still all relative. Just for argument’s sake: if Vendor A, a large blue-chip software company, has 2000 customers and added 20 net new customers in 2008, that would amount to a 1% increase. Good, but not extraordinary. If Vendor B, a small VC-backed software firm, has 10 customers and has also signed 20 net new accounts in 2008, they would have tripled their install base, a 200% increase. Fantastic, but still a blip compared to Vendor A. So, when it comes to who “defines the market”, we are obviously going top-down, and at some point there is a cut. Vendor B, despite huge relative growth, may be excluded from an MQ because it is still largely irrelevant in the greater scheme of things. Plus, there is a finite number of vendors that the analysts producing the MQ can handle in the process.
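    The back-of-the-envelope comparison above can be sketched in a few lines of Python. The function name and the customer counts are illustrative only, taken from the hypothetical Vendor A/Vendor B example, not real vendor data:

```python
def growth_pct(installed_base: int, net_new: int) -> float:
    """Percentage growth of an install base from net-new customers."""
    return 100.0 * net_new / installed_base

# Hypothetical figures from the example above -- not real vendor data.
vendor_a = growth_pct(2000, 20)  # large blue-chip vendor
vendor_b = growth_pct(10, 20)    # small VC-backed vendor

print(f"Vendor A: {vendor_a:.0f}% growth")  # 1%: good, but not extraordinary
print(f"Vendor B: {vendor_b:.0f}% growth")  # 200%: tripled install base, yet
                                            # still a blip in absolute terms
```

    The point of the arithmetic: relative growth and absolute market relevance are different measures, and an MQ cut drawn top-down by market weight can exclude a vendor with spectacular percentage growth.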

    Right on regarding the melting tracks in the snow. I would have interpreted it a little differently, though, particularly when talking about open source. Over time, many tracks disappear because the open-source projects cease to exist or are acquired by someone else, and there is no point following each and every new open-source project until they really start laying tracks in the snow. For the record: we are recognizing Talend’s success.

    @Carter: Although your comments don’t address the MQ in question but rather the MQ process in general, I will still respond to them.

    It’s great that you provide feedback about the MQ process and other things at the quarterly AR call. The DI MQ, the one that triggered this whole thread, is updated every year during the summer, so your “18 to 24 months” is simply inaccurate.

    There is a certain amount of truth in Yves’ comment that MQs reflect “…past adoption of certain technologies by large accounts…”

    Again, nobody denies this, but you make it sound as if large accounts were the only clients we talk to. I typically don’t interact with companies à la “three guys and a website”, but a large portion of information comes from the midmarket, not only GE, HSBC, or Vodafone-type organizations. In every other inquiry with a client, someone asks “considering a company our size and industry, who else has done what we are planning?” And that’s when they want to know about “past adoption”, not “potential developments”.

    A significant source of information for the average Gartner analyst comes informally via phone-based end-user client inquiry.

    Informally? It’s not that clients scribble something on a napkin and slide it across the lunch table for us to consume as research. Not really sure what you mean. As far as I’m concerned, those client inquiries are as formal as they come, via phone or face-to-face.

    This reliance on end-user clients as a source obviously skews the pool of data points.

    You must clearly be joking. What other sources would you expect to see than end-user clients? There are only two types of sources: the vendors and the users. And we obviously talk to both. Would you have any other constituents that I could add to the pool of data points?

    Gartner’s client base is not a statistically valid sample for all types of research and only clients that choose to set up an inquiry are counted.

    Another bold claim, and rather unsubstantiated. Even if the client base wasn’t a statistically valid sample, so what? I would still argue that I have a very good understanding of what is happening in all kinds of organizations, relative to my coverage. I have also never heard of anybody dismissing my opinion because I didn’t have the statistical proof that my advice was representative of the whole world. Secondly, it’s absolutely untrue (and as a former Gartner analyst, you should know) that we get input only through client inquiry. I don’t live in a vacuum and I interact with people through a wide variety of channels, inquiry just being one.

    Thus analysts are getting an incomplete picture and they probably do not realize it.

    Oh, please. Thanks for pointing out that we did not interview 6 billion people for every piece of research just so we can claim a “complete picture.” :-) Trust me, we do realize (and so do our clients) that we don’t have the 360 degree view of everything. Any suggestions as to how to correct that?

    Finally, regarding those probabilities, I have no problem with you being snarky. Hey, that adds the spice to conversations. I can only talk for myself here and that’s why there is my name on this blog. You won’t find any mention of probabilities in my research notes. If you take issue with probabilities published by others, please raise it there.

    @Mark: Thanks for adding your perspective. Like in the posts above I’ll try to address those points that stick out.

    I’d side with Yves that until recently open source was largely dismissed as nothing in almost all coverage areas. I think that’s changed significantly in the past 18-24 months.

    It’s interesting that you claim to know our view on open-source for “almost all coverage areas”. Have you read all those documents and talked to all analysts that touch open source? I’m on the inside and yet I don’t have that insight just off the bat. Wow. I would agree that we are spending more cycles in the last two years on open-source software. Regarding open-source data integration two or three years ago, there just wasn’t anything to research or report that had a major influence on our clients (and again, that’s what counts).

    I think the data integration quadrant is a travesty because of the decision to make it “data integration”. While I understand the reason (shift to multiple techniques/technologies, consolidation, drive to a DI platform), it is unusable for most IT organizations deploying at a tactical level

    Sorry, Mark, a travesty? Have you talked to the vendors in question lately? Most of them have not considered themselves “ETL vendors” for some time now. Quite the contrary, they provide many different components, ETL being just one, that all sit under a data integration umbrella. As you said you understand the reason, why would that be “unusable”, and how would you know that? You see, the MQ is one data point for our clients when they need to make a decision, in this case about data integration tools. While I’m sure there are clients that just pick whoever is in the Northeastern corner of the MQ, the smart clients also talk to us about their specific requirements, on which we provide more specific advice, whether that’s for a tactical solution or a strategic enterprise-wide data integration standard. The MQ is not the tool to make that decision for you.

    Since the evaluation is not the products but the vendors, it’s not possible to use the MQ to choose DI technology appropriate to the situation. When I’m with clients who are Gartner subscribers I have to guide them in their use of these results

    Very good. As I just wrote above, the MQ is not the ultimate decision maker (or breaker). Every client is different and so is every DI situation. The point about those megavendors that own a wide variety of tools all relevant to the MQ is, of course, a challenge to address. How should one reasonably compare an IBM portfolio of data integration products with, for example, an open-source product that is basically an XML transformation engine with JDBC and flat-file connectivity? You compare the components like-for-like (and there may be many similarities) and then you check what breadth of technology is provided by the vendor (and that’s where you’ll find huge differences).

    The challenge for Gartner is having an MQ that has enough vendors in it to be profitable, that makes sense for vendors to participate in

    What you describe as a challenge is really a non-issue. There is no profitability of an MQ. And the number of vendors on any specific MQ is irrelevant. Sure, nobody will publish an MQ with a single dot (what market would that be?); equally, nobody will go through the exercise of having 100 vendors evaluated either. It typically balances out somewhere between 5 and 25. If the number gets too high, the market definition is probably too broad; if too few vendors make the inclusion criteria, the definition is too rigid. It’s also a matter of resource management and timing. Simple as that. If we accepted every vendor in an MQ, we’d be talking to them and their references until the cows came home. And then Carter would come along and start complaining again about the MQ not being published on time.

    Secondly, it’s not a vendor’s choice to “participate” in an MQ. In other words, they cannot opt in or opt out. If they make the inclusion criteria, based on what we know, they’re in. If a vendor chooses not to respond to the RFI and provide the information required for the ranking (I don’t recall that ever happening, though), they will still be evaluated using the information at hand. Equally, if a vendor wants us to drop them from the MQ, mostly because of a sub-optimal position (and that has happened before), we acknowledge that, but it’s not going to happen.

    Open source is certainly a problem for analyst reports due to inclusion metrics. … I’d hazard a guess that the R project isn’t included in the BI or related MQs yet it’s hugely influential, enough so to make an NYT article

    As I wrote before, inclusion criteria are actually adapted so that open-source vendors are not getting shut out because of a lack of license revenue. And you’re guessing right, R is not included in the BI MQ. Whether it’s “hugely influential” is open for debate. Maybe in academic circles, but it sure isn’t mainstream, from where I sit. There are other technologies that are growing like wildfire, think Twitter, and even though the NYT or the WSJ write an article about one, it doesn’t mean every CIO would stop in their tracks and jump with both feet into whatever new gimmick comes around the corner.

    I think you may be right to exclude Talend or Pentaho from the DI MQ on the basis of market penetration and size last year (sorry Yves) but that may be incorrect given the use in operational rather than pure BI ETL scenarios, e.g. Gartner may be looking in the wrong places.

    I think you are contradicting yourself here. First you’re saying that the DI MQ is too broad in terms of technologies, now you’re saying we would be looking only in “pure BI ETL scenarios”. The DI MQ is exactly not focusing on ETL, that’s why we expanded the market definition two or three years ago, much in line, in fact, with how the majority of vendors are addressing this market.

    @Bob: Thank you for your comments. Your comments about Talend or Expressor not being included in the DI MQ are somewhat incorrect. They are both mentioned in the document (which runs 33 pages, by the way, much more than just the graphic), along with a significant number of other vendors, such as Attunity, Datawatch, Embarcadero, GoldenGate, SchemaLogic, or XAware, to name a few. None of the above, however, made the inclusion criteria, and as such they are not evaluated with their own dot on the graphic. Ab Initio was dropped from the evaluation two years ago, but they are also still mentioned.

    I believe that analysts have a responsibility to paint a complete picture of the vendor landscape to their client community and to the industry.

    Similar to my response to Carter’s comment above, I guess I’m still struggling with what a “complete picture” means and how to get there. Completion in what sense, every vendor, every product, every user? To be honest, I don’t think that’s possible.
    Also, our responsibility to our clients is not just one-way, telling the end-user about the vendor landscape. Equally, our vendor clients want to get our perspective on the current end-user trends.

    … let them know about emerging players whether it is thru a phone conversation or a write-up on their site. I’d be very surprised if they weren’t doing this.

    That’s exactly what we do. While we can’t proactively call every client and tell them about this brand-new emerging technology that just popped up, through written research and during interactions like inquiries, 1-on-1 sessions or even on stage at a conference, we absolutely talk about new developments, trends, or the new cool gadgets.

    Why really does the MQ have to be so static?? I think I’ve heard Gartner mention the words Real Time a few times. Maybe Gartner needs a MQ 2.0? If so, I suppose Talend and Expressor-Software would show up.

    I think you’re mixing a lot of things here: MQ cycles, real-time (?), and inclusion criteria. First, why do you say the MQ is “static”? It’s not, as it is updated on pre-defined schedules, which are necessary because of the time it takes to produce one MQ. It’s not that we could churn out a new version of an MQ over a weekend, after throwing a few eggs at a wall and seeing where the vendors stick. (I do understand that many people not familiar with the process think that’s exactly how we come up with the positions for the dots.)
    Second, “real-time” what? Changing criteria in real-time, monitoring every sale a vendor does, or watching a vendor’s cash position, stock trades, or SEC filings? No. An MQ is a snapshot of a certain market, which typically doesn’t have any erratic movements anyway.
    Third, if you really want to put versions against MQs, then we’d probably be at MQ 32.5, with all the changes that have happened over time. Still, every MQ will continue to have inclusion criteria, and both Talend and Expressor need to fulfill those, “real-time” or otherwise.

    @Donald: Thanks for your valuable addition to this thread. Great to see another vendor position, no wait, “personal position” by an expert who happens to work at another vendor, and a fairly large one at that. That’s OK, no danger to your coffee supply?

    Ironically, this difficulty (calling out market numbers for inclusion metrics) is very similar to the problems that an open source vendor would face. The monetization is not easily allocated, and I can hardly call up Andy and say “Look at these great numbers from IDC.”

    I understand that the way products are sometimes bundled poses a challenge when breaking out market-relevant numbers. In an extreme case, I was once evaluating some product adoption where the vendor kept saying “we have 2100 customers”. Portal: 2100 customers. Application server: 2100. Every product component had the same number of customers. That was true magic! Of course, only a fraction of each product component was used, or maybe even installed. Well, I’m getting off track here…

    In order to get a sense for how a particular product is used in the market, we rely on that information provided by the vendor. Microsoft is clearly more challenged there than a “one-trick pony” company (no offense to anyone). Of course, Donald, you could always call me up and show me IDC numbers. No issue on this end. :-)

    On the other hand, I think Mark makes a good point about the data integration quadrant being too general – it really is difficult to see how it supports actionable purchasing decisions, compared to the old ETL quadrant.

    I can’t win, can I? The vendors keep adding stuff to their “data integration suites” (note that they stopped talking about their “ETL product”!) but want to be evaluated only as a provider of just ETL, or just replication, or just changed-data capture? While some tactical decisions, e.g. for solving an imminent replication problem, continue to be made by end-users, there is a large number of clients looking at data integration beyond ETL. Going forward, and certainly no surprise to anyone who has read this far, the markets of data integration and data quality continue to converge. Think about all those offerings from Business Objects, DataFlux, IBM, Informatica, Group1, etc. Instead of breaking out individual components from the DI MQ, I could much sooner picture a DIDQ MQ at some point in the future. Now, consider this “thinking out loud”, and my colleagues Ted and Mark certainly will throw in their perspective on this.

    Think about how customers buy ERP solutions today. Would you expect an MQ just for financials, one for sales and distribution, one for marketing, one for HR, … you get the picture? While a potential user may just deploy the finance and HR modules, they would still make the decision for the ERP provider and look for some insight from the analyst. The MQ helps steer the discussion, but the devil is as always in the details and that’s why clients book inquiries.

    We used to joke that the integration quadrant was the Informatica quadrant. Not to suggest favouritism in evaluation, but rather that it appeared that quadrant definitions were weighted in such a way that a specialist, independent, vendor would typically be smack in the middle of the leader’s corner.

    Oy. The Informatica quadrant? I’m sure some folks in Redwood City are pleased to hear that. :-) But glad you don’t think that we are giving out favors. No way, we’re a tough bunch.

    When I look at the current MQ, there are quite a few “specialist, independent” vendors on the chart, but just one is really “smack in the middle of the leader’s corner” and all the rest are scattered across the MQ. So, not sure I can agree with that assumption. You see, we define criteria and their individual weightings long before we start talking to any vendor. Criteria and weights also change year-over-year. That’s why a dot sometimes can make significant jumps from one year to the next. Not always a pleasure to explain, particularly when a vendor’s dot went South.

    … good old DTS, was a classic cheap operational data movement tool, but I’m not at all sure I could defend it against Gartner’s new DI criteria (as opposed to the old ETL criteria), whereas the new Integration Services clearly does play in that space.

    Aha! There you have it. Microsoft has been one of the companies that addressed data integration from a broader perspective than ETL. Case in point: the product’s name is “Integration Services”, not “ETL Services”. And Microsoft wasn’t the only one doing it, obviously, which means that the market for data integration has progressed, and some companies have boarded the train while some others have not. Some are running to catch the train (and I would include open-source vendors here), some others deliberately stay on the ETL platform. What a nice visual.

    I, for one, would love to get Gartner’s view on how OSS data integration vendors are performing in the high-end scenarios where they are being successful.

    You know, you could just set up an inquiry. :-)

    I think I need to wrap up here. Don’t want to get RSI from just one blog post. Thanks, again, for all your input. Happy to continue the discussion, here or offline.

  • 14 Mark Madsen   January 9, 2009 at 6:36 pm

    Great replies to all these comments Andy. I’ll work through the points you commented on for me:

    Open source at Gartner: Point taken, I was too broad in my characterization. I read the reports on information management-related topics which for me was maybe a dozen or so areas of coverage, and heard or read a lot of “pay no attention to open source, it’s a fad” advice. I think the right answer was “pay attention, but don’t jump in just yet unless you have good reason”.

    Second point on open source: OSS in the information management / BI space 18-24 months ago was still in the techno-geek ghetto and was just hitting the elbow in adoption, so it made sense not to cover. But pooh-poohing the topic is what I saw many analysts doing, e.g. I read a number of “kids in the basement coding” proclamations. 18-24 months ago was too early so your point is valid, but I have this sense that Gartner consistently downplays disruptors until the enthusiasts and early adopters are in, then suddenly Gartner is right there at the meaty part of the vendor curve saying “we called this all along.” With .8 probability no less :-)

    Regarding whether I’ve talked with vendors or customers: Yes, a lot of them. Donald’s comment about “the Informatica quadrant” was a pretty broad perception after the shift from the ETL MQ. This one requires an aside:

    ETL is perceived as BI only. ETL vendors can only get good growth by expanding outside that niche market and eyeball the huge opportunities in the larger application market. DQ, DP, etc. get them partway there. Your shift in MQ helps them say “use our tools for other things and better manage your data.” The assumption is that completely centralized DI works for operational (as opposed to analytic) DI. Yes for migrations, consolidations, etc. But for broad-based low latency application DI, I am not convinced for a lot of reasons. Licensing, scalability, low latency, high concurrency, architectural match, skill match, etc. are real problems.

    This makes a DI MQ hard to accept because the use cases between BI and OLTP have different characteristics and a one-size-fits-all product model may not be right. I was harsh in saying this was a travesty, but I still believe the ability to inform is fairly low because of the generality.

    Regarding customers / clients of the DI vendors and of Gartner, yes I talk to them all the time. Clients ask me to advise them on enterprise scale DI architecture and product selection. I have to sit in the trenches with the evaluators and users, and I get nailed if I do a face plant when it’s in production. I’ve seen some of the evaluation spreadsheets given to Gartner clients and I stand by the utility criticism. We can talk about those another time as this isn’t the right forum.

    You said “The MQ is not the tool to make that decision for you.” My criticism is that clients often use the MQ as a crutch for thinking. While you personally can’t be held to account for misuse of the MQ, there is an undercurrent in Gartner’s messaging that this is exactly what the MQ is for.

    You’re being disingenuous when you say there’s no profit in the MQ. Then why do it? This is the vehicle that creates a market perception others noted: “if you aren’t there you aren’t anywhere”. Of course no vendors opt out of participation even though the MQ process is an expensive undertaking and may not match their marketing position. If they don’t participate, they run the risk of bad press through mischaracterization, which is a pretty big pressure. It would be fairer to not include them if they didn’t answer, or to stop relying on them for answers and do completely independent product research.

    Many vendors see the MQ as the gateway to respectability in large corporate IT shops and I would argue that Gartner plays this up, despite the pragmatism of individual analysts like yourself. My view of the MQ is that it’s vendor research, not product research, but that’s not how clients perceive it.

    I have to apologize for steering the discussion into generic MQ-land. That topic and the DI MQ criticism are related but it puts you in a no-win argument defending Gartner policies vs. one quadrant.

    On your last point, my statement on breadth was poorly worded.

    The DI MQ is in a tough transition period where vendors are shifting from batch movement tools to suites and someday maybe to platforms. The challenge I see is that the MQ rates vendors and not products. Why not state which of a company like Oracle’s or IBM’s products are in scope for the DI MQ? That removes an element of criticism.

    I do DI evaluations by looking at the entire DI suite offerings and the subdomains like DQ, ETL, federation. If the need is a point solution, the point vendors in that domain can be included in the eval. If the need is for components outside their scope, then suites are more appropriate and point tools can be excluded, or included for clients with a “best of breed” model.

    My experience with some components of suites in the leader quadrant is that they are bad or poorly integrated in the suite. Point vendors can’t easily meet the entry requirements yet have better products for that component.

    All the single-technology vendors have their place, so why not provide an MQ that evaluates suites and suite integration, and then separately the core technologies? That would best serve client needs when emphasis is on a subset of the components. My experience is that this is almost always the case.

  • 15 Gartner engages in debates on their blog « The IIAR Blog   January 15, 2009 at 2:38 am

    [...] Gartner engages in debates on their blog Posted on Thursday 15th January 2009 by Ludovic Following some critical comments from a vendor on a Magic Quadrant, Gartner analyst Andreas Bitterer posted an answer on his own blog: Setting the Record Straight [...]

  • 16 Matt Casters on Data Integration » Gartner DI MQ   January 27, 2009 at 11:46 am

    [...] A few weeks ago, Yves de Montcheuil from Talend took a shot across the bow of Gartner for not including Talend in their Magic Quadrant (MQ) for data integration.  After that post, Andreas Bitter from Gartner (rightfully) felt personally under assault and felt the need to set the record straight. [...]

  • 17 Gartner forecasts for Business Intelligence at Talend blog   January 29, 2009 at 8:21 am

    [...] Magic Quadrants – you can refer to my earlier post here, and to Andy Bitterer’s Setting the Record Straight reaction. I was, however, amused by the way Gartner danced around the whole issue in their recently-released [...]

  • 18 Gartner BI Summit Part 2 « document∩database   January 29, 2009 at 12:39 pm

    [...] the DI side, other than the Talend/Bitterer argument, it’s not hotting up too quickly.  DI is mostly limited to straight ETL of fairly [...]

  • 19 James Governor’s Monkchips » Talend Update: Open Source Data Integration   January 30, 2009 at 3:01 pm

    [...] ready for open source data management tools, which spurred an interesting conversation over at Andreas Bitterer’s Gartner blog after Yves accused the analyst leviathan of being overly [...]

  • 20 10 reasons to launch Vanilla BI Platfom « FreeAnalysis, Open Source Olap Platform   February 13, 2009 at 9:33 am

    [...] to read answer to Talend post from Gartner analyst … Is this man really aware of what he wrote ? are we [...]

  • 21 The Flight of the Wannabees   February 18, 2009 at 6:53 am

    [...] what is this posting about? It seems, after the wide-spread publicity of the open discussion with Talend here on this blog (thanks again for the many responses), other open-source providers apparently want to jump on the [...]

  • 22 Lovely piece of bitching « Analystanalyst’s Weblog   February 18, 2009 at 12:20 pm

    [...] http://blogs.gartner.com/andreas_bitterer/2008/12/28/setting-the-record-straight/ [...]

  • 23 Gartner recognizes open source as enterprise data integration at Talend blog   November 30, 2009 at 3:53 pm

    [...] Magic Quadrant.  A lively discussion ensued, mostly with Gartner analyst Andreas Bitterer (Setting the Record Straight) and many other parties chimed in – analysts, vendors, [...]