Runway announces the launch of faster and more affordable AI video model Gen-3 Alpha Turbo

2024-08-02

It's been a while since we last saw a hot new AI model carrying the "Turbo" label, but Runway is making sure that streak doesn't last.

The New York-based startup, which recently drew attention with an image-to-video update to its Gen-3 Alpha model, announced today on the social network X (formerly Twitter) that it is launching a faster version of the model, Gen-3 Alpha Turbo, and that it will be "rolling it out...while significantly reducing the price" in the coming days.

In the post, Runway states that the Turbo model is "7 times faster than the original Gen-3 Alpha."

Runway writes in the post, "We trained a new version of Gen-3 Alpha called Turbo, which generates videos 7 times faster than the original Gen-3 Alpha while maintaining the same performance in many use cases. We will be rolling out Turbo's image-to-video functionality in the coming days and significantly reducing the price..."

Runway co-founder and CEO Cristóbal Valenzuela also posted on X that Gen-3 Alpha Turbo lets users generate new videos in real time or near real time, producing 10 seconds of video in 11 seconds.

"Generating 10 seconds of video in 11 seconds. Real-time interactivity unlocks many new things," Valenzuela posted.

We previously tested image-to-video generation in the original Gen-3 Alpha and found it already fast, often turning static images into videos in under a minute. Runway, however, clearly believes it can do better.

The move makes sense if the company wants to hold its lead in highly realistic, Hollywood-quality generative AI video, with rivals such as Pika Labs, Luma AI, Kling, and OpenAI's Sora (the last of which, despite debuting in February, remains available only to a select group of testers) hot on its heels.

Valenzuela also responded to a question on X, stating that Runway is working on updates for its mobile app to support image-to-video using Gen-3 Alpha.

"In development," wrote Cristóbal Valenzuela (@c_valenzuelab) on July 31, 2024.

More for less?

But why would Runway offer a newer, faster model with the same video generation quality as previous versions at a lower price?

"Yes," replied Cristóbal Valenzuela (@c_valenzuelab) on July 31, 2024.

Aside from Turbo potentially being a leaner, less computationally intensive model to run on Runway's servers (which would lower costs), the company may be betting that faster generation leads to more overall usage, and therefore more spending on subscription plans or on-demand "credit-based" generation.

Runway currently offers several monthly subscription plans, each of which comes with an allotment of credits that are spent on the platform to generate each still image or video.

Gen-3 Alpha, until now the newest model, costs 10 credits per second of generated video.

The older Gen-2 model is priced at 5 credits per second of video, and, interestingly, the oldest model, Gen-1, is the most expensive at 14 credits per second of AI video.

The company may therefore price Gen-3 Alpha Turbo at around 7 credits per second of video, or even as low as 5.
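
To put those rates in perspective, here is a minimal sketch of the per-clip credit math; the Turbo figure is this article's speculation, not published pricing:

```python
# Per-second credit rates quoted above; the Turbo rate is speculative (the article's 5-7 credit guess).
CREDITS_PER_SECOND = {
    "Gen-1": 14,
    "Gen-2": 5,
    "Gen-3 Alpha": 10,
    "Gen-3 Alpha Turbo (speculated)": 7,
}

def clip_cost(model: str, seconds: int) -> int:
    """Credits needed to generate `seconds` of video with `model`."""
    return CREDITS_PER_SECOND[model] * seconds

for model in CREDITS_PER_SECOND:
    print(f"10 s clip on {model}: {clip_cost(model, 10)} credits")
```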

Training data questions remain

Last week, 404 Media obtained a spreadsheet, reportedly from a former Runway employee, showing the company's plans to scrape videos from popular YouTube channels to train its AI models, including copyrighted content from major movies and TV shows that other YouTube users had uploaded or edited.

The company has faced some online criticism over the strategy, but Runway has not commented on 404 Media's report or the spreadsheet.

Nevertheless, the company, alongside other generative AI companies, already faces lawsuits from creators accusing it of copyright infringement over still images.

However, I maintain that data scraping has been broadly accepted ever since Google adopted a similar strategy to build its search index and sell ads against it.

Still, today a prominent critic of unauthorized generative AI scraping, former Stability AI executive and founder of the nonprofit Fairly Trained, Ed Newton-Rex, called out Runway on X, demanding that the company disclose its training dataset.

"Can you share what you trained it on? There have been reports that Runway may have used YouTube videos to train it, and I think clarifying this would put a lot of people's minds at ease," wrote Ed Newton-Rex (@ednewtonrex) on July 31, 2024.

Even when generative AI companies won't admit what they trained on, their users often assume that copyrighted works were used without permission.

"When you ask a generative AI company what they trained on, their supporters rarely deny training on copyrighted works..."

Most leading generative AI companies, even those behind open models such as Meta's Llama 3.1, have not fully disclosed the composition of their training datasets, so it is reasonable to assume they treat training data as a proprietary, competitive secret.

As these lawsuits progress through the courts, however, we will see whether generative AI model providers like Runway can be compelled to disclose their training data, and what remedies or damages follow if they are found to have infringed any copyrights.