Runway launches new video generation base model Gen-3 Alpha

2024-06-18

Runway is one of the pioneering companies dedicated to developing realistic, high-quality generative AI video models.

However, since the release of its first-generation model (Gen-1) in February 2023 and its second-generation model (Gen-2) that June, the company has been overshadowed by other highly realistic AI video generators, especially OpenAI's yet-to-be-released Sora model and Luma AI's Dream Machine, which launched last week.

But now, the tables have turned. Runway has launched Gen-3 Alpha, making a strong comeback in the generative AI video field. In a blog post, the company describes it as the first in a series of models trained on new infrastructure built specifically for large-scale multimodal training, and as an important step toward building general world models: AI models that can "represent and simulate various situations and interactions encountered in the real world." Runway published several sample videos created with Gen-3 Alpha alongside the announcement.

Gen-3 Alpha lets users generate high-quality, detailed, and highly realistic 10-second video clips with precise emotional expression and camera movement.

A Runway spokesperson told VentureBeat by email: "The initial release will support 5-second and 10-second video generation, with significantly faster generation times. A 5-second clip takes just 45 seconds to generate, and a 10-second clip takes just 90 seconds."

The exact release date of the model has not yet been announced; Runway has so far only showcased demo videos on its website and social media. It is also unclear whether the model will be available on Runway's free tier or will require a paid subscription (starting at $15 per month, or $144 per year).

Anastasis Germanidis, co-founder and CTO of Runway, confirmed that Gen-3 Alpha will initially be available to paid Runway users within "a few days," with free users gaining access at an as-yet-undisclosed later date.

A Runway spokesperson confirmed this via email: "Gen-3 Alpha will go live in the next few days for paid Runway users, our Creative Partners Program, and enterprise users."

On LinkedIn, Runway user Gabe Michael wrote that he expects to gain access to the model later this week.

In a post on X, Germanidis wrote that Gen-3 Alpha "will soon be available in the Runway product and will power all the existing modes you are already familiar with (text-to-video, image-to-video, video-to-video), as well as new modes that are only now possible with a more capable base model."

Germanidis also noted that since Gen-2's release in 2023, Runway has found that "video diffusion models are far from saturating their performance gains. These models build truly powerful representations of the visual world while learning the task of video prediction."

Diffusion is a training process in which an AI model learns to reassemble pixelated "noise" into coherent still or moving images depicting concepts it learned from paired image/video and text annotations.
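To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of a diffusion-style training step: corrupt clean data with noise at a random strength, then train a small network to predict that noise so the process can later be run in reverse to generate new samples. This is a toy stand-in, not Runway's actual architecture or training code, and every name in it is illustrative.

```python
import torch
import torch.nn as nn

# Toy denoiser: a real video diffusion model would be a large spatiotemporal
# network; a small MLP over flattened pixels stands in here for brevity.
class ToyDenoiser(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 256),
            nn.ReLU(),
            nn.Linear(256, dim),
        )

    def forward(self, noisy: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Condition the prediction on the noise level t.
        return self.net(torch.cat([noisy, t], dim=-1))

dim = 64                                  # stand-in for flattened frame pixels
model = ToyDenoiser(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1_000):
    clean = torch.randn(32, dim)          # stand-in for real training frames
    t = torch.rand(32, 1)                 # random noise level per sample
    noise = torch.randn_like(clean)
    noisy = (1 - t) * clean + t * noise   # forward process: corrupt the data
    pred = model(noisy, t)                # model predicts the added noise
    loss = ((pred - noise) ** 2).mean()   # simple MSE objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Production video models replace the toy MLP with large networks that operate across space and time and condition on text embeddings, but the corrupt-then-denoise objective is the same basic recipe.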

In its blog post, Runway says Gen-3 Alpha was "trained jointly on videos and images" and is the result of a collaborative effort by a cross-disciplinary team of research scientists, engineers, and artists. The specific training dataset was not disclosed, in line with most other leading AI media generators, which likewise decline to reveal exactly what data their models were trained on and whether it was obtained through paid licensing agreements or web scraping.

Critics argue that AI model makers should compensate the original creators of their training data through licensing agreements, and some creators have filed copyright infringement lawsuits. Most AI model companies, however, maintain that they are legally entitled to train on publicly available data.

When asked about the training data used for Gen-3 Alpha, a Runway spokesperson said, "We have an in-house research team responsible for all training work, and we use curated internal datasets to train our models."

Interestingly, Runway also mentioned that it has "collaborated with leading entertainment and media organizations to create custom versions of Gen-3," which allow for more stylistic control, consistent characters, and other features tailored to specific artistic and narrative requirements.

Although the partner organizations were not named, the acclaimed, award-winning filmmakers behind "Everything Everywhere All at Once" and "The People's Joker" have previously revealed that they used Runway to create effects for parts of those films.

In the announcement of Gen-3 Alpha, Runway included a form inviting other organizations interested in obtaining custom versions of the new model to apply. However, the pricing for custom model training has not been disclosed.

Meanwhile, it is clear that Runway has not given up on its ambition to be a leading player in the rapidly evolving field of generative AI video creation.