Yesterday Capgemini published the latest edition of its benchmark of e-government services across the EU, continuing a tradition started by the European Commission several years ago.
I need to provide full disclosure here, for those who have not read my earlier positions on e-government benchmarking, be it by the EU, the UN, or various consulting firms: I have always questioned the value of these surveys and rankings, although I acknowledge the political role they play in giving an impulse to countries where investments have been lagging behind.
This year’s survey is incredibly rich in data and detail. It makes a valuable attempt at assessing e-government maturity at the local level, and it examines in considerable detail the important area of e-procurement (guess why? The European Commission is funding a major pilot on e-procurement – see previous post).
Having evolved from its early versions, which looked only at online service sophistication, the benchmark now covers multiple dimensions, such as transparency, multichannel delivery, user satisfaction, and ease of use, as well as – at the portal level – how well the one-stop-shop approach is implemented, its usability, and the user focus of its design.
I have no reason to believe that the quality of this survey is poor; on the contrary. It is road-tested, the people involved are skilled professionals in the area, and the data collection and analysis can only have improved over time.
Unfortunately the whole exercise is not convincing.
Imagine my surprise, as an Italian citizen, when I saw that Italy had jumped to nearly the top in full online service availability. Luckily enough, when it comes to user experience, Italy trails behind in the benchmark. Of course the press and the blogosphere in Italy celebrate the former and rather understate the latter.
Let me offer a personal example. My mother suffers from age-related cognitive impairment and is entitled to disability benefits. However, one can file for disability benefits only online or through a number of franchisees, such as unions or associations, that usually charge for their services. A high-touch service, where people with disabilities or their relatives could clearly benefit from human interaction, has been turned into an online service – which, given the demographics affected, usually forces people to rely on intermediaries.
Another example, from my professional experience, is Spain, which did very well in last year’s benchmark and focused almost relentlessly on closing the gap with the online leaders. In meetings with Spanish officials last year, it was clear that they could hardly justify and sustain some of the online services they had developed, especially given the country’s tough economic and financial situation.
This is the benchmarking effect: pushing jurisdictions in the wrong direction by ignoring their peculiarities and making those who lag behind – often for good reasons – run a largely pointless race.
The benchmark’s effort to capture new metrics is certainly welcome, but its suggestions for the way forward are disappointing. Here are a few highlights:
Build the findings of the recently launched Action Learning Group (ALG) on “Open and Transparent Government” into the 2011 monitoring framework: This ALG is reviewing monitoring practices within countries to seek a common basis. It can also explore recent developments in ‘cloud’ provisioning, and other aspects of the rapidly advancing technology landscape, to recommend pilot activities as well as more stable EU27+ monitoring indicators.
This looks like putting together two hyped themes – openness and cloud – to replace online services (which were the hype at the beginning). But what would actually be measured, and why, remains unclear.
Increase the link between these measurements and the current CIPs pilots: Considerable benefit will emerge through this in efficiency, commonality of approaches, communication with countries, and monitoring. This will inform the development of these more generic ‘key enablers’ and potential specific ‘common horizontal building blocks’ that can be used within and across Europe. This may result in recommendations in such areas as emerging new trans‐EU ‘platforms’, ‘gov as an API’, and new business models (e.g. involving other sectors). Another consideration is to tighten the link between this measurement approach and the DIGIT ISA programme: there is considerable synergy potential in doing so.
CIPs are large-scale pilots in areas like procurement and identity management (see previous post), while ISA deals mostly with European interoperability issues. According to this suggestion, the benchmark would become subservient to EU initiatives, potentially focusing even less on individual country peculiarities than it has so far.
Explore possibilities of launching studies to compare practices in EU with practice in other non‐EU regions to both influence global developments, and learn from leading practice: This can build on existing international collaboration (e.g. group of 5: US, Canada, UK, NZ, Australia), and consider developments in major economies such as China, India etc.
The report does not seem to recognize the need to better analyze country peculiarities; instead, it wants to move to an even more general level (and – by the way – isn’t there already a UN-sponsored e-government ranking?).
At the end, the report promises the following developments:
1. Stabilising the 2010 scope of measurement – and providing a new, broader set of benchmarks for countries (and regions) to compare and learn from;
2. Establishing Action Learning Groups (ALGs) – a process for indicator innovation, piloting, and (leading) practice sharing. This is in progress, addressing Open Government & Transparency, and Life‐Events;
3. Increasing reference to international leading practices – to ensure that Europe remains competitive on a worldwide stage.
In a nutshell: (1) provide more metrics, (2) dress this up as open government (which is indeed very fashionable – I can’t wait to see a ranking by number of published open data sets), and (3) put it all in a broader context.
I know there are plenty of great people and genuine efforts behind all this. But could we please pull the plug on EU benchmarking, and move on?
3 Comments
Another case of doing something because they can, or because it’s a nice idea. To see if there’s value, try adding to each statement (for instance “provide more metrics”) an “in order to” clause: in order to do what? achieve what? enable what? Each time there is no real benefit or saving, keep repeating the exercise with the answer. If nothing concrete is reached, then it’s a waste of time.
Too true! Let’s stop this waste of taxpayers’ money, especially when it’s in such demand!
Mick http://greatemancipator.com
Greetings Andrea,
Not only the UN, of course – there is also the Brookings Institution benchmark (formerly run by Brown University) and others.
Even though Ireland came out on top again (as in the first of the last series), I agree with you entirely – it is time to move on.
Frank