Runway Claims Its Latest AI Video Model Can Generate Coherent Scenes and Characters

2025-04-02

In an official announcement, AI startup Runway said its latest video model can generate consistent scenes and characters across multiple shots. AI-generated videos often struggle to maintain coherent storytelling, but Runway claims in a post on X that the new model, Gen-4, gives users greater "continuity and control" when crafting narratives.

The new Gen-4 video synthesis model is currently rolling out to paying and enterprise customers. It can generate consistent characters and objects across multiple shots from a single reference image: users describe the desired composition, and the model produces matching outputs from various angles.

For instance, the startup released a demo video showing a woman whose appearance stays consistent across different lighting conditions, camera angles, and environments.

Less than a year ago, Runway launched Gen-3 Alpha, its third-generation video generator. That model extended the length of videos users could create, but it also sparked controversy after reports indicated it had been trained on thousands of videos scraped from YouTube and on pirated films.