Zhipu AI Unveils Next-Generation Foundational Model GLM-4, Rivaling GPT-4 Performance

2024-01-18

Zhipu AI held its first Zhipu Developer Day, showcasing its significant progress in large language models. The most notable announcement was the launch of its fourth-generation multimodal foundation model, GLM-4, which Zhipu claims is on par with OpenAI's GPT-4 in overall capability.

Specifically, GLM-4 supports a context length of 128k tokens (roughly 300 pages of text) and, according to Zhipu, maintains close to 100% accuracy even at the longest context lengths. Compared with the previous generation, GLM-4 also improves markedly in text-to-image generation and multimodal understanding, while offering faster inference, higher concurrency, and lower inference costs.

Zhipu CEO Zhang Peng stated that GLM-4 is roughly 60% more capable than its predecessor and performs at a level close to GPT-4 on natural language processing benchmarks.

Zhang Peng also emphasized GLM-4's strong results on evaluations conducted in both Chinese and English, covering alignment, understanding, reasoning, and role-playing. Its strength in Chinese could give the model an advantage in one of the most important global markets for artificial intelligence.

Beyond its core language skills, GLM-4 introduces an "All Tools" capability that lets it autonomously plan and execute complex instructions by calling the appropriate tools, such as a web browser, code interpreter, and image generator. This allows it to handle tasks like data analysis, chart plotting, and PowerPoint slide generation.
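For developers, this kind of tool use is surfaced through the model API. The snippet below is a minimal sketch of what invoking GLM-4 with a declared tool might look like, assuming the zhipuai Python SDK's OpenAI-style chat-completions interface; the `run_python` tool, its schema, and the API key placeholder are hypothetical illustrations rather than Zhipu's documented names.

```python
# Minimal sketch: calling GLM-4 with a declared tool, assuming the zhipuai
# Python SDK exposes an OpenAI-style chat.completions interface. The
# "run_python" tool is a hypothetical code-interpreter-style helper, shown
# only to illustrate the function-calling shape.
from zhipuai import ZhipuAI

client = ZhipuAI(api_key="YOUR_API_KEY")  # placeholder credential

response = client.chat.completions.create(
    model="glm-4",
    messages=[
        {
            "role": "user",
            "content": "Analyze this quarter's sales numbers and draft three slide bullets.",
        }
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "run_python",  # hypothetical tool name
                "description": "Execute Python code for data analysis and chart plotting.",
                "parameters": {
                    "type": "object",
                    "properties": {"code": {"type": "string"}},
                    "required": ["code"],
                },
            },
        }
    ],
)

message = response.choices[0].message
if getattr(message, "tool_calls", None):
    # The model decided a tool is needed; the application executes it
    # and feeds the result back in a follow-up message.
    print(message.tool_calls[0].function.name)
else:
    print(message.content)
```

In this pattern the model, not the caller, decides whether to invoke a declared tool; the application runs the requested tool and returns its output to the model, which then produces the final answer.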

Zhipu has also launched "GLMs," which let anyone create personalized AI agents using natural language prompts and share them through a new AI Agent Center. This essentially replicates the "GPTs" feature OpenAI added to ChatGPT in November 2023 and the recently launched GPT Store.

Looking ahead, Zhipu AI hopes to energize China's open-source AI community. The company announced several funds to promote the development and commercialization of large language models:

  • An open-source fund that provides GPUs, cash, and free API access to support the developer community
  • A $100 million "Z Ventures" venture fund targeting innovative LLM companies
  • Expanded academic funding and industry cooperation through organizations such as the China Computer Federation (CCF)

Since its founding in 2019, Zhipu AI has raised over 2.5 billion RMB (approximately $360 million) from investors including Alibaba, Tencent, and GGV Capital, and it now works with hundreds of companies to explore practical applications of its GLM models. The launch of GLM-4 underscores Zhipu's rapid progress in China's domestic AI sector.

Zhipu's product lineup closely mirrors OpenAI's. From the model naming (GLM vs. GPT) to customizable GLMs, Zhipu appears to track OpenAI's product roadmap step for step. For now, its strategy may be less about breaking new ground than about replicating OpenAI's work to capture China's vast AI market.

Ultimately, Zhipu will need to demonstrate novel capabilities that genuinely serve the needs of Chinese users if it is to shake the impression of being merely an OpenAI clone. Yet by essentially "cloning" OpenAI, Zhipu gives Chinese developers and users access to cutting-edge AI far sooner than if it tried to cultivate entirely new models on its own.