On December 1, 2020, we published five predictions in this discipline (Gartner login required). The first one is here. This time, I’d like to highlight another one. And no, I won’t spoil all five.
First check this out:
By year-end 2025, multiple Internet of Behaviours (IoB) systems will elevate the risk of unintended consequences, potentially affecting over half of the world’s population.
The pervasiveness of monitoring sensors and Internet of Things devices, combined with the wide availability of massive datasets, enables an unprecedented evaluation of individual “behaviours”, both online and offline. An IoB aims to capture, analyse, understand, and respond to these behaviours, with the goal of influencing that behaviour in return. To do so, an IoB combines multiple sources of intelligence, such as commercial customer data, publicly available citizen data, social media, facial recognition, and location tracking.
These systems could lead to positive outcomes, such as improved public health. For example, during COVID-19, an IoB aimed to systematically monitor and analyse hand hygiene behaviour, use face recognition-based analysis to determine mask usage, and use device- and video-based algorithmic confirmation to monitor social distancing behaviour. Through information feedback loops, such as reminders or inclusion and exclusion decisions, behaviour adjustments can then be driven.
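To make the capture–analyse–respond loop concrete, here is a minimal sketch of how one cycle of such a system might look. Everything here is illustrative: the `Observation` fields, the 1.5 m distancing threshold, and the feedback messages are hypothetical assumptions, not part of any real IoB deployment.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """Hypothetical sensor reading captured about one individual."""
    person_id: str
    wearing_mask: bool
    distance_m: float  # distance to the nearest other person, in metres

def analyse(obs: Observation) -> list[str]:
    """The 'analyse and respond' step: turn a raw observation into
    behavioural feedback (reminders). Thresholds are illustrative."""
    feedback = []
    if not obs.wearing_mask:
        feedback.append("Reminder: please wear a mask.")
    if obs.distance_m < 1.5:
        feedback.append("Reminder: please keep 1.5 m distance.")
    return feedback

# One turn of the feedback loop: each observation drives a nudge
# intended to influence the next observed behaviour.
for obs in [Observation("a1", True, 2.0), Observation("a2", False, 1.0)]:
    for msg in analyse(obs):
        print(f"{obs.person_id}: {msg}")
```

In a real system the observations would arrive continuously from sensors and the responses would feed back into the same monitoring pipeline, which is precisely where the risk of unintended consequences discussed below comes in.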
However, when left uncontrolled, there could also be negative outcomes, such as censorship or truth fabrication. There is therefore an ongoing debate around the position and reliability of algorithms, the ethics behind decision-making, individual rights and freedoms, and the protection of autonomy in IoB systems. Think of the Cambridge Analytica scandal, or of something as large as the Social Credit System.
Before we relentlessly start experimenting with IoBs (which usually grow over time and are often not deliberately architected from scratch), a framework must be established for privacy, security, ethics, and interconnectivity that all connected entities subscribe to, proactively reducing the risk of unintended consequences.
What do you think? I’d be happy to see your responses, except when they pertain to ‘behavior’ vs ‘behaviour’. I’m Dutch. Simply trained differently.
Oh… and the others?
– The data risk assessment is on the rise. Considerably so.
– Soon the majority of the world will be protected by modern privacy legislation, at least against commercial use. We’re looking at 75% within two years.
– Embedding privacy in the user experience (i.e. providing a full Privacy UX) will build trustworthiness and tangibly increase digital revenue opportunities.