Yesterday’s Congressional hearing on AI regulation was informative, and it revealed bipartisan agreement on the need to regulate AI.
But don’t hold your breath that anything will come of it. The U.S. government has done precious little to regulate AI. Barely any relevant legislation has come out of Congress. The Executive Branch has commissioned and funded studies but remains far behind Europe in establishing AI regulatory frameworks and infrastructure.
The U.S. government’s track record on regulating new technology speaks for itself. Failures to regulate social media and the crypto industry are abundantly apparent, and in those domains the solutions were relatively clear. Yet millions of Americans were permanently harmed.
In the case of AI, even experts such as Geoffrey Hinton and Max Tegmark have yet to find a workable solution to the existential threat to humanity. Once AI is smarter than humans, there is a strong likelihood it will find humans dispensable and in its way. Still, it’s possible that AI experts will eventually figure out a workable solution.
It makes sense that AI executives like OpenAI CEO Sam Altman, who understand the threats intimately, are asking regulators for help.
OpenAI is just one of several companies managing powerful generative AI foundation models. These for-profit companies are competing with one another and with companies and government agencies in China. AI safety solutions are effective only if all the players implement them. It won’t matter if U.S.-regulated systems are safe and Chinese ones are not.
The most viable solution is the establishment of an international regulatory body with enforcement power over AI vendors.
Screening criteria for which companies must be regulated and licensed to operate have yet to be developed, but the starting point should be vendors hosting AI foundation models such as GPT-4.
Here are three categories of AI-induced threats that a new international regulatory body should address quickly, along with suggested mitigating controls:
- Existential threats to humanity
- Alignment of AI models with human goals: Regulators should set a timeframe by which AI model vendors must ensure their GenAI models have been trained to incorporate pre-agreed goals and directives that align with human values. The aim is to make sure AI continues to serve humans rather than the other way around. This will be difficult to achieve but is imperative going forward.
- ‘Everyday AI’ Threats
- Model and Data Transparency — GenAI models are not explainable and are unpredictable; even their vendors don’t fully understand how they work internally. At a minimum, regulators should require attestations about the data used to train each model, e.g., whether it includes sensitive confidential data such as personally identifiable information (PII) or company business plans.
- Disinformation, misinformation, and malinformation — fake news and similar content that polarizes societies, undermines fair elections and democracies, and causes personal harm, social unrest, and other injuries. Regulators should set timeframes by which AI model vendors must use standards to authenticate the provenance of content, software, and other digital assets used in their systems. See standards from C2PA, Scitt.io, and the IETF for examples.
- Accuracy risks — GenAI systems frequently produce inaccurate or fabricated answers, known as hallucinations. Regulators should ensure AI vendors give users the capability to assess GenAI outputs for accuracy, appropriateness, and safety so that, for example, they can distinguish fabricated claims from real ones. This should include tagging the sources of information used to generate responses.
- Intellectual property and copyright risks — There are currently no verifiable data governance and protection assurances regarding confidential or protected enterprise information. Regulators should enforce:
- Controls whereby AI vendors identify all copyrighted materials that are used in model operations. (The EU AI Act is proposing such a rule).
- Privacy and confidentiality assurances that AI vendors give to any constituent asking for them, e.g., to contractually ensure private data is not retained in the model environment.
- Bias and Discrimination Risks – AI vendors must have controls in place to detect biased outputs against user-defined criteria and to mitigate them in a manner consistent with relevant regulatory and legal requirements.
- Consumer Protection Risks — Regulators should enforce users’ ability to opt out of having their data or content used to train hosted foundation models, e.g., PII or creations such as images, software, musical compositions, or recipes.
- Sustainability risks — GenAI uses significant amounts of electricity. Regulators should enforce rules that encourage AI vendors to reduce power consumption and leverage high-quality renewable energy to mitigate the impact on sustainability goals.
- Malicious actors’ abuse of AI
- Threat Intelligence Sharing and Enforcements – The new regulatory agency should work with other international agencies to establish a global law enforcement division that shares threat intelligence on malicious actors’ use of AI to inflict harm on targets. Once threats are detected, the division should have the authority to disarm the actor and dismantle its operations.
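To make the provenance-authentication control above more concrete, here is a minimal sketch of how a signed content manifest could be issued and verified. This is not the actual C2PA format (which uses X.509 certificate chains and CBOR-encoded claims); the HMAC key, function names, and manifest fields are all illustrative placeholders for the general technique of binding content to a verifiable statement about its origin.

```python
import hashlib
import hmac
import json

# Placeholder for a publisher's real signing key; C2PA would use
# asymmetric signatures tied to a certificate, not a shared secret.
SECRET = b"publisher-signing-key"

def sign_manifest(content: bytes, creator: str) -> dict:
    """Attach a provenance manifest (creator + content hash) to content."""
    manifest = {"creator": creator, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that content matches the manifest and the signature is valid."""
    if hashlib.sha256(content).hexdigest() != manifest["sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(
        {k: manifest[k] for k in ("creator", "sha256")}, sort_keys=True
    ).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

article = b"Election results certified by the state board."
m = sign_manifest(article, "example-news-org")
assert verify_manifest(article, m)          # authentic content passes
assert not verify_manifest(b"tampered", m)  # altered content fails
```

The point of controls like C2PA and SCITT is exactly this binding: a consumer or platform can mechanically check who produced a digital asset and whether it has been modified, rather than trusting the asset on its face.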
As noted above, there are practical steps a regulatory agency can take today to mitigate all but the most existential of AI threats. We just need to act, and act soon.
The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.