AI will save the world (no it won’t).
AI will solve world hunger (not on its own it won’t).
And now, AI will cure us of Corona (eehhrm… NOPE). But it can help. And that’s exactly the balance we need.
TechRepublic and a few dozen other outlets recently quoted me regarding a note I published last year. The interest in Predictions (Gartner paywall) amazes me. Over the past four years of writing Predicts notes for Gartner, I have perhaps been known to be a bit conservative at times; I’d rather be more on the edge and wrong sometimes. But this particular prediction I stand by:
By year-end ’23, 40% of technology used for privacy compliance will rely on AI capabilities, up from less than 5% in 2019.
If AI and blockchain had not called dibs, ‘privacy’ would have been the buzzword of the decade. GDPR, PDPB, CCPA, WPA, LGPD… Almost any acronym you can think of these days is probably a recently proposed or enacted privacy-driven data protection law. And today there’s a debate going on specifically about AI and privacy.
‘AI invades privacy’ – but mostly so when deployed without appropriate controls on personal data. Often there’s no need for fully, directly identifiable data. Often there’s also no need to retain all the data used for learning (federated learning shows great promise here!). I maintain there’s a variety of use cases where AI can certainly be of use in support of a privacy program. Here are a few:
- Data discovery in context, by understanding attribution and relations, helps find what actually constitutes personal data.
- Chatbots can help answer privacy inquiries and rights requests.
- AI can treat data such that analysis remains possible on the information, without analyzing the individual.
- AI can even combat other AI, for instance by defeating recognition patterns.

Et cetera (see ‘5 Areas Where AI Will Turbocharge Privacy Readiness‘).
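As a toy illustration of the first point, here is a minimal sketch of rule-based personal-data discovery. The field names and patterns are hypothetical; real discovery tools use context-aware machine learning models that understand attribution and relations between fields, not bare regular expressions:

```python
import re

# Hypothetical detectors for illustration only; production data-discovery
# products infer personal data from context, not just surface patterns.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
}

def discover_personal_data(record: dict) -> dict:
    """Return a mapping of field name -> list of detected PII categories."""
    findings = {}
    for field, value in record.items():
        hits = [name for name, pattern in PII_PATTERNS.items()
                if pattern.search(str(value))]
        if hits:
            findings[field] = hits
    return findings

record = {"note": "contact alice@example.com", "ref": "ticket 42"}
print(discover_personal_data(record))  # {'note': ['email']}
```

The interesting (and hard) part is everything this sketch omits: a free-text field may only become personal data in combination with other fields, which is exactly where learning-based approaches earn their keep.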
The technology in itself is not what we must focus on. No matter how our relationship with technology shifts from use to interaction, it remains a means, not a goal.
The real value of AI sits in how we develop it: what purpose we give it, and how we then deploy it in the right context. In how we keep a finger on the pulse, enjoying desired outcomes while controlling undesired ones. It’s the technology that’s often distrusted in the public domain, mostly because those who create and deploy AI are insufficiently clear and transparent about its use, capabilities and purpose.
AI won’t cure the world of the coronavirus. But it can help enable effective ways to combat it, within the realm of what’s ethical. Without ethical boundaries, AI will likely wipe out the entire human race for the benefit of ecological balance. Overriding the masses’ privacy concerns just because global fear demands re-prioritization is not sustainable. Ben Franklin is supposed to have said: “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.”
Let’s just try to find a balance. A balance that requires no human sacrifice, only technological sacrifice. An acknowledgement that possibility does not equal necessity. Otherwise, we might as well all be Gabriel Shear (John Travolta) in the Hollywood production ‘Swordfish’. Instead, we need some Stanley Jobsons.