Andrea Di Maio

A member of the Gartner Blog Network

Andrea Di Maio
Managing VP
15 years at Gartner
28 years IT industry

Andrea Di Maio is a managing vice president for public sector in Gartner Research, covering government and education. His personal research focus is on digital government strategies, Web 2.0, open government, cloud computing, the business value of IT, smart cities, and the impact of technology on the future of government.

It Is Time for Industry, Government and Consumers To Get Serious About IT Ethics

by Andrea Di Maio  |  October 4, 2012  |  2 Comments

(WARNING: this is an unusually long post for my blog. However, I want to set the foundations for a dialogue and a new line of research, and I hope my readers will find it worthwhile.)

Over the last many years, information technology has been celebrated as a driver of business and social innovation, with an impact that is still not fully realized and that matches the scale of previous world-changing inventions such as the steam engine or electricity. Annihilating distance, compressing time, boosting productivity, creating new businesses and business models that were not even imaginable just one or two decades ago, supporting transparency and democracy, improving public safety, health care, education… and the list goes on.

Some have also looked at the downside of this, as I highlighted in previous posts. There are three main risk areas:

• Digital divide(s)

• Loss of human control and oversight

• Substitution of human labor leading to permanent unemployment

Let’s take a look at each one.

Digital Divide(s)

Many talk about one digital divide between those who have access to technology and those who do not, because of demographics, income, lack of infrastructure, or a combination thereof.

However, there are many other types of digital divide: between those who have relatively slow access to the Internet and those who enjoy fast broadband services; between those who are able to embrace and fully leverage social networking and those who are either suspicious of or vulnerable in sharing information with others; between those who are trained and encouraged by their employers to use technology more creatively and effectively, and those who are bound to use technology that simply automates a pre-existing process.

Two examples may help here.

My mother, who suffers from a rare form of progressive aphasia, has a full-time caretaker, a wonderful Romanian lady who attends to all her needs and practically represents my mother’s interface with the world. She uses a laptop and an ADSL connection to stay in touch with her many relatives in Romania, as well as to inform me about my mom’s health while I am traveling. A few weeks ago I paid a visit to my mom and found the caretaker in a very agitated state: she had been crying and had not slept at all the night before. When I inquired about what had happened, she said that her laptop had been remotely blocked by the police, as she was allegedly using it to exchange illegal material and porn, and she was about to be fined. I immediately realized this was a scam and reassured her by showing, with a Google search, how many people had been affected by the same virus. Luckily she did not pay anything, nor did she provide any personal information, and I was able to clean the laptop. Her disproportionate reaction made me think about the nature of a digital divide: she is perfectly able to use technology for certain purposes, but, not being fully proficient with the language, she could not detect the inconsistencies in the fake police page, nor did she have any experience with viruses.

The second example concerns the recent trend of introducing tablets in K-12 schools, usually provided to students (either free of charge or at a very low price) as well as to teachers. This is not dissimilar from prior initiatives with laptops and netbooks, but probably has greater potential thanks to the different form factor and the touch-based interface. I discussed this both in Italy (there is a plan in the high school where my wife is a teacher, as well as a larger plan targeting the southern part of the country) and in Denmark (where a colleague of mine has a 10-year-old son who is entering such a program). Interestingly, in both cases, while nobody doubts the kids’ ability to use tablets productively and creatively, there does not seem to be any plan for whether textbooks or specific educational apps will be provided, nor is there any evidence of how teachers will be trained. In this case, those teachers and, as a consequence, some of their students will be stuck on the wrong side of a digital divide.

Loss of human control and oversight

Science fiction movies have taught us the dangers of information technology. From HAL 9000 in 2001: A Space Odyssey to Skynet in The Terminator, from PreCrime in Minority Report to ARIIA in Eagle Eye, very powerful computers and computer networks try to take over human decision-making power, often by misinterpreting a human directive or interpreting it too literally. Of course reality is different from science fiction, but we have had a few glimpses of this with the technology-induced velocity of the global financial crisis, as well as with some of the cyberwar scenarios that get discussed from time to time.

With more people and devices connected to the Internet, and with growing machine-to-machine interaction, ensemble programming, cloud-based applications and big data crunching, oversight of IT applications becomes increasingly challenging.

Today, concerns about cloud computing revolve around ensuring that personal and other sensitive data stay in jurisdictions where they are duly protected and that vendors operate secure and auditable infrastructure. But as connectedness becomes the norm, the scale of cloud-based infrastructure will grow and zillions of enterprise and consumer-grade devices and applications will interoperate more and more autonomously: existing norms, policies and tools for accreditation and auditing will not suffice.

Job substitution and permanent unemployment

Every major technology innovation or revolution has displaced existing jobs and created new ones. Of course these shifts have been painful for the people who lost their jobs, but their children and grandchildren ended up enjoying a better standard of living. In some cases the consequences have been tragic, as with technologies used for weapon systems, but they have ultimately led to growth and prosperity, as when those same technologies were used to improve transportation, safety and energy production.

The information revolution may be different. Technology makes it possible to automate and replace thousands of processes and roles, while the new digital opportunities for products and services can hardly compensate for the velocity at which jobs are being displaced. Blue- and white-collar jobs, intermediaries, and a lot of “human middleware”, as my colleague Mark McDonald calls it, are just gone. The cashier at the supermarket, the bank teller, the ticketing staff at the station, the mailman, the administration staff in enterprises, workers on the shop floor, people working in bookshops and in the printing industry, staff working in music shops, recording companies and video rentals… the list is endless.

Even in IT, presumably one of the newest industries, roles are being replaced by programmable machines: infrastructure and operations staff replaced by cloud technology, helpdesk staff replaced by self-service portals and peer-support social networks, professional application developers replaced by citizen developers. And further innovations on our doorstep will replace traffic wardens, bus and truck drivers, police officers, and so forth.

New jobs are indeed being created, such as information aggregators, technology service providers, digital marketers and more, but they require deeply different skills from those being displaced, and the pace of change is simply unsustainable for those left behind.

It could be argued that technology will make society wealthier as a whole, and that there would be ways to implement a new form of welfare to dampen the effect of job substitution. On the other hand, many countries suffer from very large debt, and their ability to sustain current welfare models, let alone move toward even more generous ones, is debatable.

It is time to take responsibility

The IT industry and IT professionals in user organizations can no longer live just under the auspices of progress and growth, assuming that more technology equates to a better economy or a better society. Policy makers who pursue digital agendas as a way to boost economic growth need to examine more carefully the consequences of nurturing unsustainable technology growth.

Companies and governments have become more sensitive to environmental sustainability. It is common to see advertisements where an organization plants trees or contributes to a park to offset the carbon emissions caused by its processes. Why shouldn’t the same apply to the societal impact caused by the use of technology? For every job displaced, at least one new job should be created. For any innovation that leaves some stakeholders behind, measures should be taken to bridge that gap as a matter of priority or, better, as a precondition for that innovation to take place.

Most companies believe they already do this through their corporate social responsibility programs. But there needs to be a more direct connection between those programs and the IT innovations they deploy internally and externally.

Governments should look more closely at how to regulate and supervise the use of technology when it can have an adverse impact on citizens and consumers. As more and more “smart” technology is deployed in devices, buildings, cars and objects, influencing how services react to the context in which an individual lives and works, the risks deriving from malfunctions, data aggregation, and behaviors determined by predictive or prescriptive data analysis will become more numerous, more severe and, at the same time, more difficult to detect and cope with.

This is way too complex to be effectively regulated without stifling progress and competition. Containing these risks requires all stakeholders to take responsibility and act responsibly, in cooperation with each other.

A key step is to establish a better concept of IT ethics or, more generally, technology ethics. This means agreeing on a set of principles that should apply to the deployment of technology that can have a direct or indirect adverse impact on an individual, be that person a client, a citizen or a partner. These principles should apply both to downstream projects, where technology is being deployed to change processes and transform services, and to more upstream activities, where new technologies are being applied in pilot initiatives.

Projects should identify clear success metrics that relate not only to business improvements or customer satisfaction, but also to foreseeable negative impacts (such as job substitution, misuse of public or personal data, and context-driven user profiling), and state how they will compensate for these effects. Where are new jobs being created, and how will displaced employees acquire the required new skills? How will users be given ways to control the contextual information that concerns them, or even just notice that information is being collected so that they can opt out? What oversight mechanisms are put in place to prevent, identify or confine “ripple effects” caused by information velocity?

Consumers and citizens must become “smarter”

Unless end users understand that they have a duty of care when they accept or expose themselves to technology innovation, there is no way that industry and governments can effectively manage the ensuing risks by themselves. This implies that end users should exercise the right (and the duty) to be adequately informed and educated about both benefits and risks, and should join forces through established groups (such as consumer associations) as well as virtually (e.g. by leveraging social networking).

This does not mean being or becoming Luddites, averse to any sort of technology innovation. It means that a better and more transparent balance must be sought between the positive outlook associated with the use of smarter technology and a clear understanding of its implications in terms of personal privacy, freedom, socially acceptable behaviors, and so forth.

Waiting for industry to put generic warnings on product and service labels, as is done with cigarettes, or for governments to start information and awareness campaigns is not enough: understanding of and sensitivity to the risks must primarily be developed from the bottom up, with consumers and citizens posing tougher questions to administrators who plan a smart city or to providers proposing wonderful new context-aware services.

Category: public value of IT, smart government

2 responses so far

  • 1 Bill McCluggage   October 4, 2012 at 4:33 am

    Andrea,

    An interesting coincidence that you mention the movie ‘Minority Report’ in your blog on IT ethics because I was privileged to sit on a panel with Michael Disabato yesterday and he used the same movie to reinforce a concern that I highlighted on the need for a code of responsible ethics in terms of big data analytics.

    One of the questions to the panel was ‘what excites you about future trends in IT’ and I pointed towards a capability outlined by a futurologist and researcher earlier in the day, a ‘super knowledge cloud’, that resembled the predictive capability probably best compared to the capability outlined in Minority Report. The main point was the potential of analytics to ultimately identify intention as well as sentiment.

    Predictive analytics is in its infancy but has great potential to help us fight major health problems such as chronic disease, improve the quality of products and services, and enable significant savings for Government. However, it also has the potential to make a huge societal impact, not least on the whole idea of personal privacy in the digital age.

    In his recent report ‘Big Data Opportunities’ (http://ow.ly/ed4Bg), Chris Yiu of Policy Exchange, a think tank in London, recommended that the Government lead the way on the adoption of a Code of Responsible Analytics.

    I agree, because although the whole trend toward predictive analytics is exciting and holds the potential to radically improve a wide range of problem areas, it also concerns me to think about what this type of capability could do in the hands of rogue regimes, monopoly businesses and criminal/terrorist organisations.

    Your blog is very timely and will hopefully stimulate a wider debate on the whole area of IT ethics.

  • 2 Craig Burton   October 10, 2012 at 12:42 am

    I agree with the prudence proposed in this article. I have been a proponent of certain kinds of automation and after 15 years of this I am now on the opposite side of the fence promoting reductions and controls on the very same technology.

    Some things do not actually automate well, or at all. Intuitively or superficially it may seem that anything repetitive or boring should be made automatic, that anything that can be a web page should be a web page, and that anything that can be sped up should be. These assumptions, born of the tech boom, have to stop, as they feed into not just your three classes above but also security and stability.

    best,
    Craig.