Meta, challenging OpenAI, announces new AI model that can generate video with sound


https://www.cnbc.com/2024/10/04/meta-announces-new-ai-model-that-can-generate-video-with-sound.html

Facebook owner Meta announced on Friday it had built a new AI model called Movie Gen that can create realistic-seeming video and audio clips in response to user prompts, claiming it can rival tools from leading media generation startups like OpenAI and ElevenLabs. Samples of Movie Gen's creations provided by Meta showed videos of animals swimming and surfing, as well as videos that used people's real photos to depict them performing actions like painting on a canvas. Movie Gen can also generate background music and sound effects synced to the content of the videos, Meta said in a blog post, and the tool can be used to edit existing videos.

In one such video, Meta had the tool insert pom-poms into the hands of a man running by himself in the desert, while in another it changed a parking lot where a man was skateboarding from dry ground into ground covered by a splashing puddle. Videos created by Movie Gen can be up to 16 seconds long, while audio can be up to 45 seconds long, Meta said. It shared data from blind tests indicating that the model performs favorably compared with offerings from startups including Runway, OpenAI, ElevenLabs and Kling.

The announcement comes as Hollywood has been wrestling with how to harness generative AI video technology this year, after Microsoft-backed OpenAI in February first showed off how its product Sora could create feature film-like videos in response to text prompts. Some technologists in the entertainment industry are eager to use such tools to enhance and expedite filmmaking, while others worry about embracing systems that appear to have been trained on copyrighted works without permission.
