Runway has rapidly expanded its AI video capabilities by adding advanced camera control features to its Gen-3 Alpha Turbo model, following closely on the company's release of the Act-One animation tool. The new functionality lets users precisely manage the direction and intensity of camera movement in AI-generated videos, giving them finer control over how a scene is presented.
With this update, the camera can move in six ways: horizontal movement, vertical movement, panning, tilting, zooming, and rotating. Each movement can be finely adjusted through intensity settings, producing effects that range from subtle shifts to dramatic sweeps. For instance, to slowly orbit around a subject, users can combine horizontal movement with slight panning; to showcase a broader landscape, they can pair upward tilting with smooth zooming.
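The combinations described above can be pictured as a small per-axis settings map. The sketch below is purely illustrative: the axis names, the `CameraMove` class, and the -10..10 intensity scale are assumptions for this example, not Runway's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical axis names; the six movement types mirror the article,
# but the exact identifiers are assumed for illustration.
AXES = {"horizontal", "vertical", "pan", "tilt", "zoom", "rotate"}

@dataclass
class CameraMove:
    """A set of simultaneous camera movements with per-axis intensity."""
    settings: dict = field(default_factory=dict)

    def set(self, axis: str, intensity: float) -> "CameraMove":
        # The [-10, 10] range is an assumed scale, not a documented one.
        if axis not in AXES:
            raise ValueError(f"unknown axis: {axis}")
        if not -10.0 <= intensity <= 10.0:
            raise ValueError("intensity must be within [-10, 10]")
        self.settings[axis] = intensity
        return self

# A slow orbit: gentle horizontal travel combined with slight panning.
orbit = CameraMove().set("horizontal", 2.0).set("pan", -1.5)
print(orbit.settings)  # {'horizontal': 2.0, 'pan': -1.5}
```

The point of the structure is that movements compose: each axis contributes independently, and intensity scales each contribution from subtle to sweeping.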
These camera control features work in conjunction with text prompts to guide the AI in generating coherent scene content during camera movements. For example, when planning a dramatic zoom-out shot, users can describe the broader scene they wish to display, ensuring the AI generates the desired content.
The new features are built on Runway's Gen-3 Alpha Turbo model, which trades some output quality for faster generation and lower cost, at 5 credits per second of generated video. This positions it as a more accessible option that lets users iterate quickly and experiment with new camera movements.
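To make the pricing concrete, here is a back-of-the-envelope cost helper based on the stated rate of 5 credits per second; the function name and the example durations are ours, not Runway's.

```python
# Stated rate from the article: 5 credits per second of generated video.
CREDITS_PER_SECOND = 5

def generation_cost(seconds: float) -> float:
    """Credits consumed for a clip of the given duration."""
    return CREDITS_PER_SECOND * seconds

# A single 10-second clip costs 50 credits, so ten quick
# iterations on a camera move would cost 500 credits.
print(generation_cost(10))       # 50
print(10 * generation_cost(10))  # 500
```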
This release comes just one week after Runway launched the Act-One feature, which allows the transfer of real performers' facial expressions to AI characters. This series of rapid releases indicates Runway's commitment to providing users with more refined control over AI-generated video content.
The advanced camera controls are currently available in Runway's web interface, through a dedicated camera control panel that appears when the Gen-3 Alpha Turbo model is selected.