AI video company Runway now offers Gen-3 Alpha Turbo, a faster version of the recently released Gen-3 Alpha model, which is itself the successor to Gen-2. The new version is reportedly seven times faster and costs half as much as Gen-3 Alpha, which will likely attract plenty of interest from professional filmmakers and AI hobbyists alike.
As its name suggests, Gen-3 Alpha Turbo is all about speed. According to Runway, the time between submitting a prompt and seeing the finished video is reduced to near real time. The idea is to serve industries where that kind of speed is crucial, such as social media content and targeted advertising. The trade-off is quality: while Runway insists that videos from the Turbo model are essentially as good as those from the standard Gen-3 Alpha, the non-Turbo variant can generate higher-quality imagery overall.
Still, the Turbo model is fast enough that Runway CEO Cristobal Valenzuela boasted on X that “it now takes me longer to type a sentence than to generate a video.”
Creators who want to focus on planning and producing videos rather than waiting for them to render will likely appreciate Gen-3 Alpha Turbo's speed. The appeal doubles with the price cut: one second of Turbo video costs five credits, versus ten credits per second for a standard Gen-3 Alpha video. Runway credits come in packs starting at $10 for 1,000 credits, so that's the difference between 100 seconds of footage for $10 and 200 seconds for $10. Those interested can also try out the new model through a free trial.
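The credit math is simple enough to sketch in a few lines of Python. The constants mirror the per-second prices and pack size quoted in this article; the helper function is purely illustrative and not part of any Runway API.

```python
# Cost comparison between Gen-3 Alpha and the Turbo variant,
# using the credit prices reported in the article.

CREDITS_PER_PACK = 1000   # smallest pack: $10 for 1,000 credits
PACK_PRICE_USD = 10

CREDITS_PER_SECOND = {
    "gen3_alpha": 10,       # standard model: 10 credits per second of video
    "gen3_alpha_turbo": 5,  # Turbo: half the price per second
}

def seconds_per_pack(model: str) -> int:
    """Seconds of video one $10 credit pack buys for a given model."""
    return CREDITS_PER_PACK // CREDITS_PER_SECOND[model]

print(seconds_per_pack("gen3_alpha"))        # 100 seconds for $10
print(seconds_per_pack("gen3_alpha_turbo"))  # 200 seconds for $10
```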
The rise of artificial intelligence in cinema
Runway's aggressive price and performance improvements come at a time when the company faces stiff competition from other AI video generation models. The most notable is OpenAI and its Sora model, but it's far from alone. Stability AI, Pika, Luma Labs' Dream Machine, and others are racing to bring AI video models to the public. Even TikTok's parent company, ByteDance, has an AI video creator called Jimeng, though it's limited to China for now.
Runway’s focus on speed and accessibility with the Turbo model could help it stand out in a crowded field. Next, Runway plans to improve its models with better control mechanisms and possibly even real-time interactivity. The Gen-3 Alpha Turbo model incorporates much of what video creators experimenting with AI want, but it will need to deliver consistent results to truly outpace the competition in turning words and images into video.
Delivering reliable consistency across character and environment design is no easy feat, but using a starting image as a reference point across shots can help. In Gen-3, Runway's AI can create a 10-second video guided by additional text or motion cues on the platform. You can see how it works in the video below.
“Gen-3 Alpha Turbo Image to Video is now available and can generate images seven times faster at half the price of the original Gen-3 Alpha. All this while maintaining the same performance in many use cases. Turbo is available for all plans, including a free trial.”
Runway's image-to-video feature does more than keep people and backgrounds consistent from shot to shot. Gen-3 also incorporates Runway's lip-sync feature, so an on-screen speaker's mouth moves to match the words they are saying. A user can tell the AI model what they want their character to say, and the movement will be animated to match. The combination of synchronized dialogue and realistic character movement will interest marketing and advertising producers looking for new and, ideally, cheaper ways to produce video.
Next
Runway isn't done adding improvements to the Gen-3 platform either. The next step is to bring the same improvements to the video-to-video option. The idea is to keep the same motion, but with a different style. A human running down the street turns into an animated anthropomorphic fox running through a forest, for example. Runway will also bring its control features to Gen-3, such as Motion Brush, advanced camera controls, and Director Mode.
AI video tools are still in the early stages of development, with most models excelling at short-form content but struggling with longer narratives. That puts Runway and its new features in a strong position from a market standpoint, but it's not alone. Midjourney, Ideogram, Leonardo (now owned by Canva), and others are all racing to create the ultimate AI video generator. Of course, everyone is warily eyeing OpenAI and its Sora video generator. OpenAI has advantages in name recognition, among other benefits; Toys “R” Us has already made a short commercial using Sora and premiered it at the Cannes Lions Festival. Still, the movie about AI video generators is only in its first act, and the identity of the triumphant winner clapping in slow motion at the end is far from decided. As competition heats up, Runway's launch of Gen-3 Alpha Turbo is a strategic move to assert a leading position in the market.