Game-changing AI video tool by OpenAI

February 16th – The artificial intelligence company OpenAI has previewed a new AI tool that can generate beautiful, high-quality 60-second videos from a simple text prompt. The new tool has been dubbed Sora.

“We’re teaching AI to understand the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction,” says OpenAI’s Sora website.

At present, the tool is only available to a small number of researchers and creatives, who are testing the model ahead of wider public release. The tool brings much wonder, but it also has the potential for malicious use.

Sora video products

Sora can generate complex scenes with multiple animated characters, specific types of motion and accurate details pertaining to a subject and its background. The model has a general understanding of how things move in the real world.

However, Sora may struggle to capture the physics or spatial details of more information-heavy scenes. As a result, something illogical may appear in the video (like someone running in the wrong direction on a treadmill).

Image courtesy of OpenAI.

AI-generated content

OpenAI is working to develop tools that can detect when a video has been generated by Sora, a step toward making the online world more trustworthy. Meanwhile, the U.S. government has promptly proposed its own rules in response to OpenAI’s Sora announcement.

The Federal Trade Commission (FTC) intends to make it illegal to produce AI-based video impressions of real people. In so doing, the FTC would extend protections that it’s implementing around government and business impersonation.

“The agency is taking this action in light of surging complaints around impersonation fraud, as well as public outcry about the harms caused to consumers and impersonated individuals,” stated the FTC in a press release.

The deepfake dilemma

Although the aforementioned actions aim to reduce the abuse and harm associated with the tool, they arguably appear weak in light of the tool’s potential impact.

Users of ChatGPT, for instance, have easily circumvented “safety” checks put in place by the company, and enforcement of the rules seems to exist only in the most extreme cases.

In light of upcoming elections, experts are concerned that the use of video generation tools will lead to mud-slinging, misinformation, disinformation and increased levels of social discord.

What’s next

OpenAI has acknowledged that despite the company’s own research and testing, it cannot predict the ways in which people will use its technology. People may leverage it for positive and uplifting purposes, or they may abuse it and engender harmful outcomes.

In relation to AI-based video generation “…AI feels like it’s about to release an alien virus into the wild that even its creators don’t understand in terms of capabilities or negative impacts…Humanity better buckle up – this is going to be a wild ride that once we get on…I’m not sure we can get off,” wrote one Washington Post reader.

Related resources

  • Now you can talk to ChatGPT and it will talk back – Learn more
  • Top questions that CISOs should be asking about AI (and answers) – Right here
  • Transform your cyber security with an intelligent GenAI assistant – See product details