Last month, at Gartner Symposium, we introduced the concept of digital humanism. This concept emerged from reflecting on the profound philosophical questions that an era of digital business will raise.
Digital business will require us to orchestrate a vast array of sensors, data, and devices into impactful new systems. The manner in which we conduct this orchestration will ultimately be value-driven. And we believe there are two value systems that we must choose from.
The first is digital machinism. In this value system, people and organisations are seen to benefit from technology through automation: that is, through the ongoing dilution or removal of people's direct involvement in activities and processes. The other value system is digital humanism. Digital humanists specifically seek opportunities, through technology, to redefine the way people can achieve their goals or to enable people to achieve things they didn't previously believe possible.
Examples are ultimately the best way to illustrate a point. So, let me turn your attention to a recent presentation by Prasad Setty, Vice President of People Analytics & Compensation at Google.
In the presentation, Setty talks about one of his department's first efforts at applying a data and analytics approach to HR decision making. Their target was the process used to determine the bi-annual promotions of Google's engineers. After very careful analysis, they developed an algorithm that, he claims, achieved a 90% accuracy rate for 30% of promotion cases.
This is a perfect example of digital machinism. The very purpose of this project was to automate the promotion process by dramatically reducing people's role in it. In exchange for the loss of individual control came a reportedly higher level of decision accuracy.
Now, if ever there was an audience you would think would warmly embrace an algorithmic approach to HR decisions, it would be the very Google engineers whose professional lives are absorbed in turning the world into a mass of algorithms.
The result? Google engineers hated it.
Sweet irony? If that were the end of the story, it would be. But to Setty's, and Google's, credit, they looked further and discovered that engineers:
“…didn’t want to use the model to make decisions for them. They wanted to use it to examine their own decision making process.”
This is precisely the perspective of digital humanism. Rather than removing people from a broader system, we seek to enable them to be a more effective part of it. As Setty pointed out, his “doh” moment was realising that his team should be letting people make people decisions.
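To make the distinction concrete, here is a minimal sketch of what a decision-support tool in that spirit might look like. This is purely illustrative and not Google's actual system: the hypothetical model does not decide promotions; it only surfaces cases where a confident model prediction disagrees with the human panel, so reviewers can examine their own decision-making.

```python
# Hypothetical decision-support sketch (illustrative only; not Google's
# algorithm). The model flags disagreements for human review rather than
# making the promotion decision itself.

def flag_for_review(cases, threshold=0.8):
    """Return IDs of cases where a confident model disagrees with the panel.

    cases: list of dicts with 'id', 'model_score' (0..1, the model's
    estimated probability the candidate should be promoted) and
    'panel_decision' (bool, what the human panel actually decided).
    threshold: how far from 0.5 the score must be to count as confident.
    """
    flagged = []
    for case in cases:
        model_says_promote = case["model_score"] >= 0.5
        # Confident means the score is at least `threshold` away from even odds.
        confident = abs(case["model_score"] - 0.5) >= (threshold - 0.5)
        if confident and model_says_promote != case["panel_decision"]:
            flagged.append(case["id"])
    return flagged

cases = [
    {"id": "eng-1", "model_score": 0.92, "panel_decision": False},  # confident disagreement
    {"id": "eng-2", "model_score": 0.88, "panel_decision": True},   # agreement
    {"id": "eng-3", "model_score": 0.55, "panel_decision": False},  # low confidence
]
print(flag_for_review(cases))  # → ['eng-1']
```

The design choice is the point: the output is a review queue for people, not a verdict, which keeps the human in the decision while still using the model's analytical power.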
That realisation, as I see it, can be extended. Let people make decisions. This doesn't mean that we cease automating. In fact, the digital humanist embraces automation as an important tool to enable people to achieve their goals. But note: automation is a tool. It is not the goal.