AI-powered video creator Runway has added the promised image-to-video feature to the Gen-3 model it released a few weeks ago, and early results suggest it lives up to the hype. The new feature addresses the biggest limitations of the Gen-2 model released early last year: it is much better at character consistency and hyperrealism, making Gen-3 a more powerful tool for creators looking to produce high-quality video content.
Runway’s Gen-3 model is still in alpha testing and is only available to subscribers, who pay $12 per month per editor for the most basic package. The model had already generated plenty of interest when it launched with only text-to-video capabilities. But a text-to-video engine, however good, has inherent limits, especially when it comes to making characters look the same across multiple prompts and appear grounded in the real world. Without that visual continuity, it’s difficult to build any kind of narrative; in previous iterations of Runway, users often struggled to keep characters and settings consistent across scenes when relying solely on text prompts.
Delivering reliable consistency across character and environment design is no easy feat, but a starting image gives the model a concrete reference point to maintain across shots. In Gen-3, Runway’s AI turns that image into a 10-second video, which can be further guided by text prompts or motion cues on the platform. You can see how it works in the video below.
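Runway exposes all of this through a point-and-click interface rather than code, but to make the workflow concrete, here is a minimal sketch of what an image-to-video request might look like if driven programmatically. The endpoint, field names, and model identifier below are illustrative assumptions, not a documented Runway API.

```python
import requests

# Hypothetical endpoint and field names, for illustration only --
# the article describes Runway's in-app workflow, not a public API.
API_URL = "https://api.example-runway.test/v1/image_to_video"

payload = {
    "model": "gen-3",                                    # assumed model identifier
    "image_url": "https://example.com/start_frame.jpg",  # the reference still that anchors the clip
    "prompt": "the woman turns toward the camera and smiles",  # optional text guidance
    "duration_seconds": 10,                              # Gen-3 clips run up to 10 seconds
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": "Bearer <your-token>"},
)
response.raise_for_status()
print(response.json())  # e.g., a task ID to poll until the finished clip is ready
```

The design point is the same as in the app: the still image pins down the characters and setting, so the text prompt only has to describe the motion.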
Runway’s image-to-video feature does more than keep people and backgrounds consistent when viewed from a distance. Gen-3 also incorporates Runway’s lip-sync feature, so that a speaking character’s mouth moves to match the words they are saying. A user can tell the AI model what they want a character to say, and the movement is animated to match. The combination of synchronized dialogue and realistic character movement will interest marketing and advertising teams looking for new and, ideally, cheaper ways to produce video.
Runway isn’t done adding improvements to the Gen-3 platform, either. Next up is bringing the same upgrades to the video-to-video option, which keeps the same motion but renders it in a different style: a human running down the street becomes an animated anthropomorphic fox running through a forest, for example. Runway also plans to bring its control features to Gen-3, including Motion Brush, advanced camera controls, and Director Mode.
AI video tools are still in the early stages of development, with most models excelling at short-form content but struggling with longer narratives. That puts Runway and its new features in a strong position from a market standpoint, but it’s not alone. Midjourney, Ideogram, Leonardo (now owned by Canva), and other companies are racing to build the ultimate AI video generator. Of course, they’re all keeping a wary eye on OpenAI and its Sora video generator. OpenAI has the advantage of name recognition, among other benefits; Toys “R” Us has already made a short film ad using Sora and premiered it at the Cannes Lions Festival. Still, the movie about AI video generators is only in its first act, and who gets to be the triumphant winner clapping in slow motion at the end is far from decided.