June 2 — In the U.K., government advisors are warning that some artificial general intelligence (AGI) systems may eventually be banned.
Marc Warner, CEO of Faculty AI, says that AGI needs strong transparency, audit requirements and better built-in safety technology, and that tough decisions will need to be made within the next six months to a year.
Warner’s comments follow the joint EU–U.S. initiative to develop a voluntary code of practice for artificial intelligence.
Writing the rules
Some experts believe that “narrow AI,” which consists of AI-based systems used for specific tasks, such as text translation or cancer screening, should be regulated in much the same way as most existing technologies.
However, AGI systems are arguably a fundamentally different technology, raising greater concerns and potentially requiring different regulations.
“If we create objects that are as smart or smarter than us, there is nobody in the world that can give a good scientific justification of why that should be safe,” said Warner.
Driven to distraction
Other experts are concerned that AGI, and the regulations proposed for it, are distracting leaders from problems with existing technologies, such as bias in AI recruitment and facial recognition tools.
Still others contend that a continued focus on regulation may make the U.K. less appealing to investors, stunting corporate innovation.
Too little, too late?
Prime Minister Rishi Sunak has stated that AI requires “guardrails” and that the U.K. could play a leadership role in their development.
On Wednesday, U.S. Secretary of State Antony Blinken and European Union Commissioner Margrethe Vestager said that voluntary rules were needed in short order.
While formal legislation is expected to take years to come into effect, global leaders will be invited to contribute to a draft voluntary code of conduct within weeks.