EXECUTIVE SUMMARY:

As the fastest-growing consumer application in history, with more than 100 million monthly active users, ChatGPT has gained the attention of lawmakers and policy analysts around the world.

In response to this wave of interest, the U.S. National Telecommunications and Information Administration (NTIA) is asking for public input on how best to create accountability measures for AI.

“In the same way that financial audits created trust in the accuracy of financial statements for businesses, accountability mechanisms for AI can help assure that an AI system is trustworthy,” said Alan Davidson, who heads the NTIA.

It’s all in the details…

The NTIA intends to establish guardrails that will enable government agencies to assess whether AI platforms perform as their makers claim, whether they are safe, and whether they produce discriminatory outcomes or “reflect unacceptable levels of bias,” among other concerns.

The White House previously released a voluntary AI “bill of rights” as a guide for the development of AI systems. But because the guidelines are voluntary, companies may or may not comply with them.

Categorizing AI systems

In Europe, regulators have proposed a legal framework that categorizes AI systems based on the level of risk that they introduce. The proposed categories are unacceptable risk, high risk, limited risk and minimal risk.

However, this proposal has received pushback from technology firms. Some have noted that because a chatbot can be put to many different uses, it cannot neatly be categorized as either limited risk or high risk.

A low-risk use might consist of asking ChatGPT to draft a social media post about a specific topic, while a high-risk use of the very same chatbot might consist of creating malware or even a zero-day exploit.
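To make that distinction concrete, here is a minimal, purely illustrative sketch. The tier names loosely mirror the EU proposal’s categories, but the use-case keywords and tier assignments are hypothetical, not drawn from the actual legislation; the point is simply that risk would have to be assessed per request rather than per system, since the same chatbot lands in different tiers depending on what it is asked to do.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers loosely modeled on the EU proposal's categories."""
    UNACCEPTABLE = 4
    HIGH = 3
    LIMITED = 2
    MINIMAL = 1

# Hypothetical mapping from use cases (not whole systems) to risk tiers.
USE_CASE_TIERS = {
    "draft a social media post": RiskTier.MINIMAL,
    "answer customer support questions": RiskTier.LIMITED,
    "screen job applicants": RiskTier.HIGH,
    "generate malware": RiskTier.HIGH,
}

def classify_request(use_case: str) -> RiskTier:
    """Return the risk tier for a single request, defaulting conservatively."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

# The same chatbot falls into different tiers depending on the request.
for use_case in ("draft a social media post", "generate malware"):
    print(f"{use_case!r} -> {classify_request(use_case).name}")
```

A per-request view like this is exactly what makes a single “limited risk” or “high risk” label hard to apply to a general-purpose chatbot.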

Good AI guardrails

In the U.S., “…we do not have the guardrails in place, the laws that we need, the public education or the expertise in government to manage the consequences of the rapid changes that are now taking place,” says Merve Hickok, chairwoman and research director at the Center for AI and Digital Policy.

Contrary to popular belief, good guardrails arguably should not curtail users’ capabilities. Instead, they should promote innovation while preventing harmful real-world outcomes. AI has the potential to fuel profound social improvements, but only if we allow for that possibility.

Some experts worry that the current conversation around AI could result in stagnation rather than innovation. For instance, the recently requested moratorium on AI development is arguably an extreme measure, as it takes an all-or-nothing stance.

In the eyes of some analysts, the success of AI depends on a flexible governance model that incorporates practical solutions – one that neither stifles innovation nor accelerates it at the expense of human connection and welfare.

For more insights on AI, ChatGPT and policy, please see CyberTalk.org’s past thought leadership coverage. Lastly, check out the CyberTalk.org newsletter. Sign up today to receive top-notch news articles, best practices and expert analyses each week.