Snapchat’s Adding Text-to-Video Creation, Powered by AI

Among the various announcements at its 2024 Partner Summit today, including new AR glasses, an updated UI, and other features, Snapchat has also revealed that users will soon be able to create short video clips in the app, based on text prompts.


As demonstrated in the preview, Snap is close to launching a new feature that can generate short video clips from whatever text input you choose.


So, per that example, you could enter “rubber duck floating” and the system would generate it as a video clip, while a “Style” option also helps you refine and customize your video as you prefer.


Snap says that the system will also, eventually, be able to animate images, which will significantly expand the capability of its current AI offerings.

In fact, this goes further than the AI tools currently offered by Meta and TikTok. Both Meta and ByteDance have their own working text-to-video models, but they're not available in their respective apps as yet.


Though Snap’s isn’t either. Snap says that its AI video generator will be made available to a small subset of creators in beta from this week, but it still has some way to go before it’s ready for a broader launch.


So in some ways, Snap’s beating the others to the punch, but then again, either Meta or TikTok could greenlight their own versions and immediately match Snap in this respect.


Videos generated by the tool will come with a Snap AI watermark (you can see the Snapchat+ icon in the top right of the examples shown in the presentation), while Snap is also doing development work to ensure that some of the more questionable uses of generative AI aren't enabled in the tool.


Snapchat also announced various other AI tools to assist creators, including its GenAI suite for Lens Studio, which will facilitate text-to-AR object creation, simplifying the process.


Snapchat GenAI

It’s also adding animation tools based on the same logic, so you can bring Bitmoji to life within your AR experiences, with all of these options utilizing AI to streamline and improve Snap’s various creative processes.


Though AI video still seems a strange fit, and really, not overly conducive to what Snap's traditionally been about: sharing your personal, real-life experiences with friends.


Do you really want to be generating hyper-real AI videos to share in the app? Is that going to enhance or detract from the Snap experience?

I get why social platforms are going this route, as they try to ride the AI wave, maximize engagement, and justify their investment in AI tools. But I don't know that social apps, which are built upon a foundation of human, social experiences, really benefit from AI-generated content, which isn't real, never happened, and doesn't depict anybody's actual lived experience.


Maybe I’m missing the point, and there’s no doubt that the technological advancement of such tools is amazing. But I just don’t see it being a big deal to Snapchat users. A novelty, sure, but an enduring, engaging feature? Probably not.


Either way, Snap is once again looking to hitch its wagon to the AI hype train in order to keep up with the competition, and if it has the capacity to enable this, why not, I guess.


It's still some way off a full launch, but it looks to be coming sometime soon.