The AI threat is not coming from the singularity, with robots replacing us as the dominant beings on Earth; general AI is still science fiction. Nor is it the loss of employment opportunities. Every major new technology arrives with fears of losing the old, unaccompanied by visions of all the creation a new paradigm brings. It is easy to describe the potential loss of what we have today but exceedingly difficult, if not impossible, to see the future’s possibilities, so we fixate on the former. Yet it is new technology that has fueled global growth for the past few millennia. These aspects are not the real threat. The real threat is malicious AI: AI in the hands of those intending harm.
About a decade ago I published a book called “The Social Organization.” In retrospect I was overly optimistic, even somewhat Pollyanna-ish, about the power of social media and social collaboration to change business and society for the good. Fast forward to today: businesses that have built powerful communities to drive added value for customers and society are few and far between, and the social media behemoths are beset with fake-news manipulation and misuse of data. Honestly, I didn’t spend many brain cycles exploring the potential for bad actors to hijack the social media sphere because I believed “the crowd” would make things right. That was a mistake.
With eyes wide open, I now examine artificial intelligence. I recently watched a documentary series called “Future Human A.I.” It was well produced, but I found it to be the typical mix of hype about the future benefits of AI combined with fear of intelligent robots taking all of our jobs, or worse. I believe its portrayal of the benefits is quite Pollyanna-ish and its stated fears are misdirected. There was no mention of malicious AI; it just doesn’t get the same press as other AI challenges.
Malicious AI isn’t about explainable, transparent or responsible AI. Those concepts apply to organizations that are largely good but face the potential for unintended negative consequences. Bad actors have no interest in transparency or responsibility; in fact, their goals are secrecy and harm. This is what sets malicious AI apart from the other undesirable possibilities of AI. In the face of malice, ethical AI guidelines and tools are relatively meaningless. Even government regulation is severely limited, since the people with political or fiscal power may, indeed, be the bad actors.
Unlike severe job losses and the singularity, the threat of malicious AI is real today and will only escalate. Many are aware of new developments in generative adversarial network (GAN) technology. Simply stated, a GAN pits two neural networks against each other: a generator creates new content while a discriminator tries to tell it apart from real content, and each improves by trying to beat the other (creating new content vs. classifying existing content). GANs represent a potential explosion in “fake news.” Content created this way is commonly called a “deep fake,” and deep fakes are all over the internet. As the technology evolves it will become impossible to distinguish a real video from a fake one. The same goes for images, audio, signatures, articles and more. You name it.
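To make the adversarial idea concrete, here is a deliberately tiny sketch, not any production deep-fake system: a one-parameter generator learns to mimic samples from a normal distribution (mean 4) while a logistic discriminator tries to separate real from fake. The distribution, learning rate and step count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_samples(n):
    # "Real" data: draws from N(4, 1.25) -- an illustrative stand-in
    return rng.normal(4.0, 1.25, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = w_g * z + b_g maps noise to fake samples.
w_g, b_g = 1.0, 0.0
# Discriminator d(x) = sigmoid(w_d * x + b_d) scores "probability real."
w_d, b_d = 0.1, 0.0

lr, n = 0.01, 64
for step in range(2000):
    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    x_real = real_samples(n)
    z = rng.normal(0.0, 1.0, n)
    x_fake = w_g * z + b_g
    d_real = sigmoid(w_d * x_real + b_d)
    d_fake = sigmoid(w_d * x_fake + b_d)
    # Gradients of binary cross-entropy w.r.t. discriminator parameters.
    grad_w_d = np.mean((d_real - 1.0) * x_real) + np.mean(d_fake * x_fake)
    grad_b_d = np.mean(d_real - 1.0) + np.mean(d_fake)
    w_d -= lr * grad_w_d
    b_d -= lr * grad_b_d

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(0.0, 1.0, n)
    x_fake = w_g * z + b_g
    d_fake = sigmoid(w_d * x_fake + b_d)
    # Chain rule through the discriminator into the generator parameters.
    grad_w_g = np.mean((d_fake - 1.0) * w_d * z)
    grad_b_g = np.mean((d_fake - 1.0) * w_d)
    w_g -= lr * grad_w_g
    b_g -= lr * grad_b_g

# The generator's offset drifts toward the real mean as it learns to fool
# the discriminator.
print(f"learned generator offset b_g = {b_g:.2f}")
```

The point of the toy: neither side is told what “real” looks like in advance; the generator improves only by exploiting the discriminator’s mistakes, which is exactly the dynamic that makes GAN-produced fakes progressively harder to detect.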
As a boy I learned the aphorism, “Believe half of what you see and none of what you hear.” It needs adjusting to “Believe none of what you see, hear, or read.” That is truly scary. Allow me to get philosophical for a moment. Does this portend the death of fact and truth? For all practical purposes, yes. The only means we have to establish enduring “truth” is to repeat it or record it, and both are highly susceptible to manipulation. I believe the future will hinge not on truth or fact but on trust: who we choose to trust will determine our fate. You might say that this was always the case. I would agree, to an extent, but trust was often based on truth or fact, and those two are rapidly being displaced as the foundation for trust. Replaced by what? That is the billion-dollar question. OK, philosophical soapbox time is over.
And “deep fakes” are only a subset of the possibilities for malicious AI. Other big areas include data manipulation, massive denial of service, computer hacking and the hijacking of physical systems. We don’t have many answers to these challenges yet, and I don’t want to be a Chicken Little either. But these are the threats we most need to take seriously now. This is a whole new world of information security and risk management.
Malicious AI has been around as a concept for a few years now and there are some efforts in this area. For example:
- Exploring blockchain as a steward of authenticity against deep fakes
- Research on hardening deep-learning models
- DARPA’s Guaranteeing AI Robustness against Deception (GARD) program
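The blockchain idea above rests on a simpler primitive: a cryptographic fingerprint of the content, anchored somewhere tamper-evident. A minimal Python sketch of the fingerprint step (the byte strings are made-up stand-ins for real media files, not part of any actual provenance scheme):

```python
import hashlib

def content_fingerprint(data: bytes) -> str:
    """SHA-256 digest of raw media bytes; any single-bit edit changes it."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical media payloads for illustration only.
original = b"frame-bytes-of-original-video"
tampered = b"frame-bytes-of-original-videO"  # one character altered

fp = content_fingerprint(original)
print(fp == content_fingerprint(original))  # True: authentic copy verifies
print(fp == content_fingerprint(tampered))  # False: altered copy fails
```

Recording such a fingerprint on a blockchain (or any append-only, widely witnessed ledger) at publication time lets anyone later check whether a circulating copy matches the original, which is the kind of authenticity anchor these efforts explore.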
We certainly need a lot more. The “weaponization of AI” is a threat we can’t ignore.
As always, I appreciate your feedback. Humans only please 🙂
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.