Gartner Blog Network

Over-Engineering 2.0 – Incessant, Thoughtless Automation

by Brian Prentice  |  January 16, 2015

Here’s a prediction. By 2017, the single biggest way in which digital technology will annoy people is through thoughtless, intrusive and annoying algorithmic automation.

Let’s call it Over-engineering 2.0.

Over-engineering 1.0 has been all about feature bloat. It’s the result of an approach to system design that thinks the more things we can give people in technology, the happier we’ll all be. 

Over-engineering 2.0 will emerge as the result of an approach to system design that thinks the more things technology can take away from us, the happier we’ll all be.

Over-engineering 1.0 had a strong appeal to the “power user” – the person who delighted in engaging with technology for technology’s sake. Finding new things the technology could do was as important as the technology getting something meaningful done at all.

Over-engineering 2.0 will have a strong appeal to what I’ll call the “idle user” – this will be the individual who delights in technology doing “stuff” for him or her for no other reason than technology being able to do it.

The thing is that the power user and the idle user are outliers – they’re not indicative of what most people seek and expect from technology. Yet these two archetypes often exert tremendous influence on system designers whose own biases favor the perpetual expansion of functional or automation capabilities. It’s a form of confirmation bias.

Decades ago, software engineering practices were designed to add functionality as efficiently as possible for the simple fact that most software didn’t do much. Every new feature added was largely met with a positive reaction.

“Hey, my word processor v3.0 now has bullet points…thank goodness, it’s about time.”

But eventually we hit an inflection point, where the next feature really didn’t matter to most people.

“My word processor v14.0 can now adjust the text distance for an in-margin drop cap? Ah, I don’t think I’ll ever need that.”

This inflection point is where we move into the realm of over-engineering. But by this stage there’s too much momentum in the design process to stop the train…it’s geared to add functionality and that’s what you’re going to get, like it or not.

And it could end up being the same for algorithmic automation systems.

In the past, it’s been “Cool, Facebook automates a recommended friends list.” Now we’re moving to a place of “What in the heck is this ‘Year in Review’ thing that just popped up in my News Feed?” Or, perhaps, into a place called “algorithmic cruelty.”

As we continue to be propelled into a world of massive global-class computational power, vast expanses of mineable data, an increasingly inter-connected global population, and a mind-numbing array of digital access points, the temptation for system designers to piece these together into systems of algorithmic automation will be enormous. Without doubt, many of these systems will be wonderful. Without doubt, some of these systems will be scary bad. 

The challenges with algorithmic automation will have broad implications, whether it’s the newfound ability of the beautiful but talentless to achieve musical fame through auto-tune technology, or the impact that “automation addiction” has in the world of avionics systems.

So we must be asking ourselves today how we create an environment where systems of algorithmic automation are created in a considered rather than reflexive fashion. Because once the momentum starts building, system designers will be hard pressed to temper the desire to over-engineer. And that’s what will lead to the inevitable creation of thoughtless, intrusive and annoying systems of algorithmic automation.

It’s my view that one of the prime factors that has propelled the growth of user experience (UX) design has been its role in tackling technology made incomprehensible through incessant feature bloat. UX design will still be needed in the world of algorithmic automation. But I believe UX design needs to sit alongside a broader philosophy that we refer to as digital humanism. This is a specific intent amongst everyone involved in system design to put the interests and values of people at the center of everything they do.

With that in mind, I’ll offer the digital humanist’s laws of algorithmic automation:

  • Empowerment always trumps automation.
  • Never take something away from people that they enjoy doing for themselves.
  • Automation should never startle – surprises that delight people are welcome. Surprises that bewilder people are infuriating and, sometimes, dangerous.
  • Don’t automate something for everyone that only some people want automated – technology will never advance to the point where system designers are relieved from the responsibility of understanding the nuances in people’s goals and behaviors.


Brian Prentice
Research VP
9 years at Gartner
26 years IT industry

Brian Prentice is a research vice president and focuses on emerging technologies and trends with an emphasis on those that impact an organization's software and application strategy...

