Gartner Blog Network

Requests for Proposals

by Whit Andrews  |  October 22, 2008  |  4 Comments

I have often said that I think with a keyboard, which is a bad way to think sometimes, especially if you’re working with a spreadsheet instead of a document. So I’m going to scribble notes here for a moment.

I’m putting together a proposed RFP for enterprise search/information access technologies right now. It’s been something people have wanted for years, ever since we started the Gartner for IT Leaders program and began selling non-traditional research (It’s Not Just Linear Narrative Any More). I’ve resisted it — I admit it — because of a fear of the monoculture that standardized RFPs can cause, and also because it seemed too much like working for a living.

Now I’m doing it. My agenda manager, now outgoing, told me it’s the thing people need, so it’s time for me to do it. I did what any lazy man does — I grabbed together all the RFPs that we’ve published so far in my area, and looked at what they did. Then I went and got my clients’ draft RFPs (NDA still firmly tucked in around their corners) and looked at them. Of course, nothing looked alike. Exit the template model.

Deb Logan’s RFP for e-discovery is pretty open-ended. It has a bunch of tabs, and on each tab specific areas of specialty are called out. There’s no real scoring — it’s more explanatory. The RFP for ECM suites, now a bit dated and so in the archive (that’s like “the stacks” in a library of all those years ago — the library has it but doesn’t keep it on the shelves you can browse — we take it out of the search results unless you ask for “archived research”), is very scorish. There are scores for different capabilities, and vendor guidelines for each on how to score it on a linear Likert scale. I am impressed by their certainty, and I believe it deeply — the authors are a solid team that does probably 1,000 inquiries annually on ECM, Records Management, BCS and WCM issues.

We don’t have that kind of volume in search, and, frankly, search is a weirder thing. Company X needs search NOW because their intranet search is wretched. A giant enterprise with 40,000+ users told me the single most common query on their internal portal is “lunch menu,” keyed in upty-zillion languages in 100+ countries. Funny! Except that if that’s 10,000 hours being spent weekly talking about what’s in the cafeteria (I’m estimating with no basis in fact) then that’s a lot of worker time that’s going in a hole when it could be spent making widgets. On the other hand, Company Y needs search to fix customer service. Company Z needs it to do competitive intelligence. And so forth.

So how do I score these functions? I’ve spent years teasing out of enterprises what it is exactly they need, and I am quite fearful of monoculture here. I just don’t think I can introduce the scoring here — at least not in this generation of the toolkit.

Or should I? Let’s be realistic. People need scores. Somebody has to make a decision somehow and somewhere, and not offering the refinement is its own lack of value. If I were a client, and I had paid a pile of money for this research, I would want somebody to help me pull the trigger. Dar Brown told me in boot camp more than eight years ago when I hired in — a Gartner analyst is sometimes wrong, but never uncertain.

OK, this is what I’m going to try. (Not pretty to watch a grown man think with his keyboard, is it? What was the acronym from back in the day? PLOK? Ah, plok’t*. I’m pressing lots of keys to think, myself.) I’m going to have a light score. Vendors can indicate they do something, or that they don’t, or that they THINK THEY CAN DO SOMETHING. And then users can say to themselves how important that something is. And then in every area, vendors will get a certain score. This will effectively mirror our MQ process in some ways, with particular support for particular user needs.
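The “light score” idea above can be sketched in a few lines. To be clear, this is my own illustration, not the toolkit itself: the three answer levels, the 0/0.5/1 mapping, and the importance scale are all assumptions I’m making to show how vendor answers and user weights could combine into a per-area score.

```python
# A minimal sketch of the "light score" model: vendors answer yes / no /
# "think we can", enterprises weight each item, and each area gets a score.
# The answer mapping and weight scale (0-3) are invented for illustration.

VENDOR_ANSWERS = {"no": 0.0, "thinks_it_can": 0.5, "yes": 1.0}

def area_score(items):
    """Score one functional area.

    `items` is a list of (vendor_answer, user_importance) pairs, where
    user_importance is the weight the enterprise assigns (e.g. 0-3).
    Returns a normalized 0..1 score for the area.
    """
    total_weight = sum(importance for _, importance in items)
    if total_weight == 0:
        return 0.0  # the enterprise doesn't care about this area at all
    raw = sum(VENDOR_ANSWERS[answer] * importance
              for answer, importance in items)
    return raw / total_weight

# One hypothetical area with three items of varying importance.
search_ui = [("yes", 3), ("thinks_it_can", 2), ("no", 1)]
print(round(area_score(search_ui), 2))  # (3 + 1 + 0) / 6 → 0.67
```

The point of the normalization is that an area a given enterprise marks as unimportant simply drops out, which is one way to avoid the monoculture of a single fixed weighting.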

Whew. Now if only I can figure out what to do to make the spreadsheet work.



Tags: information-access  rfp  search  searchmq  toolkits  

Whit Andrews
VP Distinguished Analyst
14 years at Gartner
18 years IT industry

Whit Andrews is a vice president and distinguished analyst in Gartner Research. Andrews covers enterprise search and enterprise video content management.

Thoughts on Requests for Proposals

  1. I have mixed feelings. On one hand, some degree of standardization could help rein in the tendency for each vendor to bury itself in unique jargon. On the other hand, your fear of monoculture is well-founded; we’ve seen this problem in the academic world, where information retrieval research is stifled by the status quo of the evaluation process.

    What I’d love to see, both as a vendor and as a researcher, is an enumeration of information access scenarios. A vendor would have to explain if or how a typical implementation would address each scenario. While this would still be qualitative rather than quantitative, at least it would move the focus from product features to productivity.

    Not sure if that approach makes your life easier or harder. But I’m already on record as thinking at the keyboard.

  2. Whit Andrews says:

    The scenario model is perfect — obviously, the more optional elements that one can introduce into such a model, the better off one is. The more I think about this kind of thing, the more I think we get back to rules of triples. We have:

    the variable: what is asked about what can be done.

    the input: the degree to which a vendor says it can or cannot do the variable.

    the weight: the value the enterprise places on the vendor’s input for that variable, from which the score is calculated.

    The place to start, I think, is to give a list of variables, and allow vendors to input what they believe into those variables, and then allow enterprises to set a value on those variables via calculation. Only then can I do better at what you describe — expanding or contracting the variable list based on a scenario, setting different degree dimensions based on the scenario, and allowing (or encouraging) enterprises to vary the values based on the scenario.

    There’s something wooly in there that’s not right. I’m going to go back after I finish this model.
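One way to make the triple concrete, including the scenario idea from the first comment, is to attach a set of scenarios to each variable and compute scores only over the variables a scenario pulls in. This is a sketch under my own assumptions — the scenario names, the example variables, and the data structures are all invented for illustration, not taken from the toolkit.

```python
# Sketch of the variable / input / weight triple, scoped by scenario.
# Variables carry the scenarios they apply to; vendors supply inputs
# (0..1), and the enterprise supplies weights per variable.
from dataclasses import dataclass

@dataclass
class Variable:
    name: str
    scenarios: set  # which scenarios this variable applies to

# Hypothetical variable list; real lists would be far longer.
VARIABLES = [
    Variable("federated query", {"intranet", "customer_service"}),
    Variable("entity extraction", {"competitive_intelligence"}),
    Variable("relevance tuning",
             {"intranet", "customer_service", "competitive_intelligence"}),
]

def scenario_score(scenario, vendor_input, enterprise_weight):
    """Weighted score for one scenario.

    `vendor_input` maps variable name -> 0..1 (what the vendor claims);
    `enterprise_weight` maps variable name -> weight (what the buyer values).
    Variables outside the scenario are ignored entirely.
    """
    applicable = [v for v in VARIABLES if scenario in v.scenarios]
    total = sum(enterprise_weight.get(v.name, 0) for v in applicable)
    if total == 0:
        return 0.0
    raw = sum(vendor_input.get(v.name, 0) * enterprise_weight.get(v.name, 0)
              for v in applicable)
    return raw / total
```

Expanding or contracting the variable list per scenario then falls out of the `scenarios` sets, rather than requiring a separate template per use case.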

  3. Whit,

    As someone who went through the process in an equally unequal market (everyone wants the same), I would recommend a smidgen more than you have so far said… a combo of all things:

    1. List as many features / functions as you believe necessary to conduct an evaluation (remember: you are the expert, so if you say, for example, that scalability should not show up in the list, then it shouldn’t). Explain any odd items people expect to see but are missing, like natural language parsers.
    2. Divide the features / functions into categories, including an optional category. If you have function- or vertical-specific items, create categories for them so only people who want them need to score them.
    3. Create a way to assign a need and a weight (as you already said) for each entry. Create an algorithm for each line that works this way; some items may meet more resistance, or be more cumbersome, so make sure to account for those specifics.
    4. Keep in mind one thing: this is an RFP, not a tool selection. They provide the scores and the weights; you just provide the template with all the needs, mark the necessary and the optional, and make sure they roll up to an index, et voila! They fill it in, create a score per vendor / solution, and then compare them.

    Simple, right? It got lots of interest, since no one knew the features / functions to include, or how to run the RFP algorithms…

    Anyways, probably useless — yet. Just in case.

  4. Whit Andrews says:

    That’s exactly the kind of thing I’m trying to do, Esteban. I think that’ll work well. Essentially, what I ended up with was a handful of categories with widely varying item counts, and a summary page where all the categories roll up. I’m going to resist the natural desire for One Big Score at this point.
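The roll-up described above — a handful of categories on a summary page, deliberately with no One Big Score — can be sketched as a grouping step. The category names, answer values, and weights here are my own inventions; the only thing the sketch takes from the discussion is the shape: scores roll up per category, and no overall total is computed.

```python
# Sketch of a per-category roll-up with no single overall score.
# Each entry is (category, vendor_answer 0..1, enterprise_weight).

def rollup(entries):
    """Return a dict of normalized per-category scores (0..1).

    Deliberately does NOT sum across categories: the summary page
    shows one score per category, not One Big Score.
    """
    totals, weights = {}, {}
    for category, answer, weight in entries:
        totals[category] = totals.get(category, 0.0) + answer * weight
        weights[category] = weights.get(category, 0.0) + weight
    return {category: totals[category] / weights[category]
            for category in totals if weights[category]}

# Hypothetical RFP responses across two categories of differing size.
entries = [("core", 1.0, 2), ("core", 0.0, 2), ("optional", 0.5, 1)]
print(rollup(entries))  # → {'core': 0.5, 'optional': 0.5}
```

Leaving the final aggregation to the enterprise keeps the vendor-comparison step in the buyer’s hands, which matches the “this is an RFP, not a tool selection” point above.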

Comments are closed

Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.