As OpenAI continues to generate viral buzz with its upcoming AI video platform Sora, competitors are launching products of their own to raise the industry bar. Just a few days ago, Pika Labs added lip-syncing functionality to its product. Now a new AI video startup called "Haiper" has emerged from stealth mode with $13.8 million in seed funding from Octopus Ventures.
Founded by former Google DeepMind researchers Yishu Miao, who serves as CEO, and Ziyu Wang, the London-based company offers a platform that lets users generate high-quality videos from text prompts or animate existing images. The platform runs on its own in-house visual model and competes with existing AI video tools such as Runway and Pika Labs, although early tests suggest it still lags behind OpenAI's Sora.
Haiper plans to use the funding to expand its infrastructure and improve its product, with the ultimate goal of building an artificial general intelligence (AGI) that can internalize and reflect human understanding of the world.
What does Haiper's AI video platform offer?
Like Runway and Pika, Haiper currently offers a web platform where users can create AI videos by entering text prompts through an intuitive interface. The platform generates videos in SD and HD quality, with HD clips capped at two seconds and SD clips at up to four seconds. The SD tool also includes options to control the level of motion.
In testing, HD output was more consistent, possibly because of its shorter length, while the SD videos often turned blurry as subjects shifted in shape, size, and color, especially at higher motion levels. There is also no option to extend generated clips, as Runway offers, although the company says it plans to introduce that feature soon.
Beyond text-to-video, the platform offers tools that let users upload and animate existing images or redraw existing videos, changing their style, background color, elements, or themes through text prompts.
Haiper claims its platform and underlying visual model can serve a range of use cases, from personal ones, such as creating content for social media, to commercial ones, such as generating content for studios. However, the company has not shared details about its commercialization plans and continues to offer the technology for free.
Plans to build an artificial general intelligence that perceives the world
With this funding, Haiper plans to refine its infrastructure and product, with the ultimate goal of building an AGI with comprehensive perceptual capabilities. The round brings the company's total capital raised to $19.2 million.
In the coming months, Haiper plans to iterate on user feedback and release a series of large-scale models to improve the quality of its AI video output, potentially narrowing the gap with competing products on the market.
As the company scales, it will work to deepen its model's understanding of the world, ultimately building an artificial general intelligence capable of replicating the emotional and physical elements of reality, down to the smallest visual details such as lighting, motion, texture, and the interactions between objects, in order to produce realistic content.
"Our ultimate goal is to build a general artificial intelligence with comprehensive perception capabilities that has infinite potential in creativity. Our visual model will be a leap forward for AI in deepening its understanding of the physics of the world and replicating its essence in generating videos. Such progress lays the foundation for AI to understand, embrace, and enhance human narratives," said Miao in a statement.
With these next-generation perceptual capabilities, Haiper expects its technology to extend beyond content creation and make an impact in other fields, including robotics and transportation. That approach to video AI makes it a company worth watching in the crowded AI space.