Yesterday I was reading David Osimo’s blog (always a good read) and stumbled across an interesting comparison he is developing between traditional government IT initiatives and web 2.0 ones.
One particular line in his comparison table caught my eye: web 2.0 “bottom-up” initiatives require little or no investment in technology, while we all know the price tag of many traditional government projects.
I know this is the common wisdom around Web 2.0: develop pilots rather than full-blown applications, publish then filter, crowdsource portions of design and development.
However, as a seasoned engineer whose first contact with a computer was through punched cards, I have seen too many waves of technology promising to slash development and maintenance costs. Reusable software modules, rapid prototyping (including whatever sat at the junction between artificial intelligence and expert systems), 4GLs, object-oriented design and programming, service-oriented architectures: you name it, each promised significant cost savings and – in all fairness – some helped more than others. But we have also been through the trough of disillusionment many times, most recently with SOA, where we see architectures with hundreds of web services that are rarely reused, if at all.
Now comes Web 2.0. Let’s develop a wiki, create a cool page or group on Facebook, or crowdsource applications that mash up our feeds or ideas to build or revamp a web site. All these initiatives are attractive because they are (or at least look) cheap and do not require going through complex procurement procedures. Provided one uses available internal resources (officials or contractors who are already working on the premises on some T&M work), the additional costs tend to be negligible.
But what is the actual total cost of whichever Web 2.0 solution makes the cut from experimental phase to mission-critical application? The total cost of ownership should include the cost of developing the successful pilot, plus the cost of all the failed pilots that directly or indirectly contributed to the solution, plus the cost of running and evolving that solution over a given period of time.
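To make the point concrete, the cost model above can be sketched as a small calculation. This is purely illustrative: the function name and all the figures are hypothetical, not taken from any real project.

```python
# A rough total-cost-of-ownership (TCO) sketch for a Web 2.0 pilot that
# graduates to a mission-critical application. All figures are made up.

def web20_tco(successful_pilot_cost, failed_pilot_costs, annual_run_cost, years):
    """Sum the cost of the winning pilot, every failed pilot that fed into
    it, and the cost of running and evolving the solution over the period."""
    return successful_pilot_cost + sum(failed_pilot_costs) + annual_run_cost * years

# Example: one successful pilot, three failed ones, five years of operation.
total = web20_tco(
    successful_pilot_cost=50_000,
    failed_pilot_costs=[20_000, 15_000, 30_000],
    annual_run_cost=80_000,
    years=5,
)
print(total)  # 515000
```

Even with modest numbers, the running cost over a few years quickly dwarfs the “free” pilot.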
Doesn’t it look cool? You throw a problem at a large audience of vendors, experts and programmers, and they bounce back solutions, suggestions, designs, and even actual software code.
But there are hidden costs.
First of all, you have to promote the initiative, either through your existing channels or by creating new ones (such as a dedicated web site).
Then you need somebody to monitor and moderate the responses. You may want to check that the ideas are decent and original and do not infringe any patents (how thorough you are depends on how far you want to go in making sure that the content you receive and expose does not cause problems). Checking may range from doing some research, to running tests (if it is software code), to reviewing licenses and intellectual property rights.
If you allow the audience to rate ideas or software, then you need to make sure that every idea is given a chance, so you may have to edit some of the content to bring it up to a common presentation standard that puts submissions on roughly equal footing.
When the submissions close, you are left with many applications and their ratings. What do you do with them? You need to go deeper into each of them – possibly starting with the highest rated – and check whether they make sense in your architecture and really fit your needs.
So rather than spending a considerable amount of time drafting a detailed requirements document as the basis for a call for tenders, you now have to check the first five, ten or fifty submissions against your high-level requirements and figure out which one comes closest to what you are looking for. Presumably the solutions will be very different, and the competencies – let alone the process – for doing this may not be available.
At this point somebody will think: why should we do all this? Isn’t it enough to pick the solution that makes the most sense and forget about the others? Isn’t this the way initiatives like innocentive.com work? Sure, but life in government is different. You remain accountable for the whole process and want to make sure it is fair and transparent. So there is no way you can avoid going through every single submission and being ready to answer questions about why it was not selected (remember: you no longer have those nasty but also useful detailed selection criteria you had in the past).
Now let’s suppose you pick the best solution: it is still a half-baked idea, an incomplete design, a prototype application that needs to be turned into something you and your stakeholders can trust. Sure, you can launch another round of crowdsourcing to get to version 0.2, but you cannot expect a community of solution developers to suddenly materialize and volunteer to work at very low or no cost at all. And even if one did, how would you get assurance that the result meets all your functional and non-functional requirements? At some point in this seamlessly participative process, the line between the client role and the supplier role must be drawn, with all that means in terms of procurement, contracts, T&Cs, SLAs and so forth.
It is quite possible that the end result will be much better than what a more traditional process would deliver. But will it be cheaper?