EXECUTIVE SUMMARY:

Industry leaders have warned that some of the AI technology currently under development may one day pose an existential threat to humanity. The argument is that the technology could prove as destructive as nuclear weapons or pandemics.

“Mitigating the risk of extinction from AI should be a global priority…” said a statement released by the Center for AI Safety, a non-profit organization.

Last month, an open letter warning of the risks of artificial intelligence was signed by 350 executives, including those running leading AI companies.

The existential threat

Is the focus on extinction a distraction from the more immediate threats that AI presents? Recent developments in the democratization of AI have raised fears around misinformation and propaganda, and some worry that AI could eliminate millions of white-collar jobs.

These fears are shared by many within the industry and put its leaders in the unusual position of arguing that the technology they are eagerly (and rapidly) developing poses unknown risks and should be subject to regulation.

Responsible AI management

In a blog post, OpenAI CEO Sam Altman and other OpenAI executives proposed several ways to responsibly manage AI systems. They called for cooperation among leading AI makers, suggested expanded AI research, and proposed the creation of an international AI safety organization.

Mr. Altman has also expressed support for regulations that would require producers of wide-reaching, cutting-edge AI models to obtain a government-issued license.

As people turn to AI chatbots for entertainment, app development, productivity, and more, the sense of urgency around ensuring the safety of AI has increased.

Class action lawsuit

On Wednesday, OpenAI was hit with a class action lawsuit, filed in a California federal court, alleging that the company stole and misused large quantities of people’s data from the internet to train its AI technology.

The 160-page complaint alleges that the personal data used included “essentially every piece of data exchanged on the internet it could take” and that OpenAI seized the data without notice or due compensation. According to the complaint, the data scraping occurred at an “unprecedented scale.”

The lawsuit seeks injunctive relief in the form of a freeze on further commercial use of OpenAI’s technologies. It also seeks payment of “data dividends” as financial compensation for all people whose information was used to develop and train OpenAI’s tools. OpenAI has not yet offered public comment on the matter.

AI in business

A recent poll of 254 technology leaders found that 90 percent of respondents were exploring the use of AI platforms, such as ChatGPT and Bing Chat, in business settings, and that 80 percent planned to increase AI investments in the upcoming year.

Business leaders say that AI can increase efficiency, boost productivity, lower costs, create competitive advantages, and help them meet rapidly changing market expectations.

Use of AI is shifting from select areas of the enterprise to nearly every area within it. See how organizations are innovatively leveraging AI-based tools, or learn more about current artificial intelligence trends here.

Lastly, to receive more timely cyber security news, insights and cutting-edge analyses, please sign up for the cybertalk.org newsletter.