AI video creator Runway has officially launched its new Gen-3 Alpha model after teasing its debut a few weeks ago. Gen-3 Alpha offers significant improvements in creating hyper-realistic videos from user input, a major step up from the Gen-2 model launched last year.
Runway’s Gen-3 Alpha is aimed at a variety of content creators, including marketing and advertising groups. The startup claims the model outperforms its competitors at handling complex transitions and at generating keyframes and human characters with expressive faces. The model was trained on a large dataset of videos and images annotated with descriptive captions, allowing it to generate highly realistic clips. At the time of writing, the company has not disclosed the sources of its video and image datasets.
The new model is available to all registered users on the RunwayML platform, but unlike Gen-1 and Gen-2, Gen-3 Alpha is not free. Users must upgrade to a paid plan, with prices starting at $12 per month per editor. This change suggests that Runway is ready to professionalize its products after having a chance to refine them, thanks to all the people playing with the free models.
Initially, Gen-3 Alpha will power Runway’s text-to-video mode, allowing users to create videos using natural language prompts. In the coming days, the model’s capabilities will expand to include image-to-video and video-to-video modes. Additionally, Gen-3 Alpha will integrate with Runway’s control features such as Motion Brush, Advanced Camera Controls, and Director Mode.
Runway said Gen-3 Alpha is just the first in a new line of models designed for large-scale, multimodal training. The ultimate goal is what the company calls “general-world models,” which will be able to represent and simulate a wide range of real-world situations and interactions.
AI Video Race
The immediate question is whether Runway’s advancements can match or surpass what OpenAI is doing with its flashy Sora model. While Sora promises minute-long videos, Runway’s Gen-3 Alpha currently supports clips of only up to 10 seconds. Despite this limitation, Runway is banking on Gen-3 Alpha’s speed and quality to differentiate it from Sora — at least until it can scale up the model as planned and produce longer videos.
The race is not limited to Sora. Stability AI, Pika, Luma Labs, and others are eager to grab the title of best AI video maker. As the competition heats up, the launch of Gen-3 Alpha is a strategic move by Runway to assert a leadership position in the market.