Recently, RunwayML drew wide industry attention by unveiling its latest, more realistic video generation model to the public. Against this backdrop, Haiper, a London-based AI video startup founded by former Google DeepMind researchers Yishu Miao and Ziyu Wang, has launched an upgraded version of its visual foundation model, Haiper 1.5, marking another solid step forward in AI video creation.
Haiper 1.5 is now available on the company's website and mobile platform. The core of the upgrade is significantly enhanced video generation: users can now create clips up to 8 seconds long from text, image, and video prompts, double the duration of the initial version, giving creators considerably more room to work with.
In addition, Haiper has introduced an upsampling feature to further improve the quality of generated content, producing sharper and more detailed footage. The company has also revealed plans to enter image generation, signaling its ambition to expand across AI applications.
Since its official debut four months ago, Haiper has attracted more than 1.5 million users on the strength of its market positioning and product performance, even though its funding remains far smaller than that of many AI "giants." Facing strong competitors such as Runway, Haiper continues to consolidate and expand its market position through sustained technological innovation and product iteration.
So what sets the Haiper AI video platform apart? Since its debut in March this year, Haiper has tracked industry trends closely and built a one-stop video creation platform on its self-developed perception-based model. The platform is easy to operate: users can turn the scenes in their minds into vivid video simply by entering a short text description, while rich adjustment options let them freely customize characters, objects, backgrounds, and artistic styles.
However, as user demands have grown, Haiper has faced challenges. The initial model's limited generation length, for example, fell short of the varied needs of some creators. Haiper 1.5 addresses this pain point directly by extending the maximum video length to 8 seconds.
Haiper 1.5 also delivers a clear leap in video quality. Previously, high-definition generation was limited to short clips, and longer content was only available in standard definition. After this update, users get SD- or HD-level clarity regardless of video length, and the built-in upsampling tool lets them enhance content to 1080p, further improving the visual quality of their work.
More exciting still, Haiper has extended its reach into image generation. Users can now generate images from text prompts and then animate those images into video, producing more polished final works. This integration improves both efficiency and flexibility and opens up more diverse creative possibilities; a rough sketch of such a chained workflow appears below.
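To make the text-to-image-to-video chaining concrete, here is a minimal sketch of how such a pipeline could be wired up in code. Haiper has not published an API in this article, so the client class, method names, and asset fields below are purely hypothetical placeholders that illustrate the workflow described above, not the platform's actual interface.

```python
# Hypothetical sketch of a text -> image -> video workflow.
# The client below is an illustrative placeholder, not Haiper's real SDK.
from dataclasses import dataclass


@dataclass
class GeneratedImage:
    """Stand-in for an image asset produced by a text-to-image step."""
    asset_id: str
    prompt: str


@dataclass
class GeneratedVideo:
    """Stand-in for a video asset produced by an image-to-video step."""
    asset_id: str
    duration_seconds: int
    resolution: str


class HypotheticalVideoClient:
    """Purely illustrative client for chaining the two generation steps."""

    def text_to_image(self, prompt: str) -> GeneratedImage:
        # A real integration would call the platform's text-to-image
        # endpoint here and return the stored asset reference.
        return GeneratedImage(asset_id="img_placeholder", prompt=prompt)

    def image_to_video(self, image: GeneratedImage,
                       duration_seconds: int = 8) -> GeneratedVideo:
        # The article mentions 8-second generation and 1080p upsampling;
        # both values here are illustrative defaults, not confirmed limits.
        return GeneratedVideo(asset_id="vid_placeholder",
                              duration_seconds=duration_seconds,
                              resolution="1080p")


if __name__ == "__main__":
    client = HypotheticalVideoClient()
    image = client.text_to_image("a lighthouse at dusk, cinematic lighting")
    video = client.image_to_video(image, duration_seconds=8)
    print(f"Generated {video.duration_seconds}s video at {video.resolution}")
```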
Miao, Haiper's CEO, said: "At Haiper, we always put user needs first and are committed to turning their visions into reality. The newly launched upsampler and Text2Image tool are the result of close interaction with our users and the improvements that followed. Haiper will always be a community-oriented video generation AI platform, advancing together with its users."
Although Haiper's new model and feature updates show great potential, they have not yet been broadly tested by a wide user base. The image model, for instance, is still in a testing phase, and the eight-second video generation and upsampling features are currently limited to Pro subscribers. According to Haiper, it plans to roll these advanced features out more widely through several approaches, including a credit system, and expects to open the image model to the public for free at the end of this month.
On content quality, the platform's two-second clips are already quite stable, but longer videos still vary. In testing, four-second videos sometimes appeared slightly blurry due to weaker detail handling. As Haiper continues to ship updates and execute its roadmap, generation quality is expected to improve significantly.
Looking ahead, Haiper says it will deepen its foundation perception model's understanding of the real world, aiming toward an artificial general intelligence (AGI) that can accurately capture real-world emotion and physics. Such a system would cover every detail of the visual domain, including light, motion, texture, and the interactions between objects, to create more realistic and vivid visual content. Achieving this goal would not only advance the use of AI in content creation but could also bring disruptive change to fields such as robotics and traffic management.