
China's Zhipu Debuts GLM-4.5, Outperforming Rivals With Leaner and Faster AI

Published: 2025-07-29 16:06:11


  AsianFin — Zhipu AI, one of China’s leading foundational model developers, launched its next-generation flagship model series GLM-4.5 on Sunday, as competition in the domestic large language model space intensifies.

  Built on a Mixture-of-Experts (MoE) architecture and optimized for AI agent scenarios, GLM-4.5 has set new benchmarks among open-source models, outperforming key rivals in reasoning, coding, and agent intelligence. In overall global evaluations, GLM-4.5 ranks third worldwide, first among Chinese models, and first among open-source models, ahead of Stepverse's Step-3, DeepSeek-R1-0528, and Moonshot's Kimi K2.

  The model series includes two variants: the full GLM-4.5, with 355 billion total parameters (32 billion active per token), and the lighter GLM-4.5-Air, with 106 billion total parameters (12 billion active). Both are fully open-sourced via Hugging Face and Alibaba's ModelScope, with APIs accessible through the Zhipu Open Platform. The complete feature set is available for free through Zhipu Qingyan and the z.ai official site.
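Since the APIs are exposed through the Zhipu Open Platform, calling the model presumably follows the familiar chat-completions shape. The sketch below only assembles a request payload; the endpoint URL, model identifier, and header names are illustrative assumptions, not taken from Zhipu's documentation:

```python
import json

# Hypothetical values -- consult the Zhipu Open Platform docs for the
# real endpoint, model id, and authentication scheme.
ENDPOINT = "https://example.invalid/v1/chat/completions"  # placeholder
MODEL_ID = "glm-4.5"  # assumed model identifier

def build_chat_request(prompt: str, api_key: str) -> tuple[dict, dict]:
    """Assemble headers and a JSON body for a chat-completions call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, body

headers, body = build_chat_request("Summarize GLM-4.5 in one line.", "sk-test")
print(json.dumps(body, ensure_ascii=False))
```

The payload would then be POSTed to the platform's endpoint with any HTTP client; keeping request assembly separate from transport makes the shape easy to verify before sending.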

  “The road to AGI has only just begun,” CEO Zhang Peng said. “Current models are far from reaching human-level capability.”

  Zhipu’s push into open-source comes as China’s LLM market undergoes rapid iteration. In the past month alone, the country has seen the release of MiniMax M2, Kimi K2, and Stepverse’s Step-3. Meanwhile, global heavyweight OpenAI is reportedly preparing to launch GPT-5—a closed-source, multimodal model—as early as late July.

  Zhipu’s GLM-4.5 is pre-trained on 15 trillion tokens of general data and refined with 8 trillion tokens of specialized domain data focused on code, reasoning, and agents. The model is further enhanced with reinforcement learning techniques for complex task execution. According to internal benchmarks, GLM-4.5 uses just 50% of the parameters of DeepSeek-R1 and one-third of those in Kimi K2, while delivering superior performance in key LLM evaluation tests.
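Those ratios can be sanity-checked against publicly reported totals: DeepSeek-R1 lists 671 billion total parameters and Kimi K2 roughly 1 trillion (figures from the respective models' own release notes, not from this article). A quick check in Python:

```python
# Total parameter counts, in billions. GLM-4.5's figure comes from
# this article; the other two are from each model's release notes.
glm_45 = 355
deepseek_r1 = 671
kimi_k2 = 1000  # ~1 trillion

ratio_vs_r1 = glm_45 / deepseek_r1  # roughly one half
ratio_vs_k2 = glm_45 / kimi_k2      # roughly one third

print(ratio_vs_r1, ratio_vs_k2)
```

The arithmetic lines up with the article's claim to within rounding: about 53% of DeepSeek-R1's total parameters and about 36% of Kimi K2's.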

  In real-world performance metrics—including 52 development tasks across software, game, and web development—GLM-4.5 delivered results comparable to Claude-4-Sonnet, while offering better tool invocation reliability and task completion rates.

  The model’s token pricing is highly competitive, with input costs as low as RMB 0.8 per million tokens and RMB 2 per million for output—approximately one-tenth the cost of Anthropic’s Claude. Zhipu also claims the high-speed version of GLM-4.5 can generate over 100 tokens per second, supporting low-latency, high-concurrency environments for enterprise-grade deployment.
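At those rates, per-request cost is simple arithmetic. A minimal sketch, using the prices quoted above; the request sizes are made-up illustrative numbers:

```python
# GLM-4.5 list prices quoted in the article, in RMB per million tokens.
PRICE_IN_RMB_PER_M = 0.8
PRICE_OUT_RMB_PER_M = 2.0

def request_cost_rmb(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single API call in RMB at the quoted rates."""
    return (input_tokens / 1_000_000 * PRICE_IN_RMB_PER_M
            + output_tokens / 1_000_000 * PRICE_OUT_RMB_PER_M)

# Example: a 2,000-token prompt with a 500-token completion.
cost = request_cost_rmb(2_000, 500)
print(f"RMB {cost:.4f} per request")  # RMB 0.0026 per request
```

At these prices a full million tokens in and out together costs RMB 2.8, which is what makes the one-tenth-of-Claude comparison plausible at scale.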

  Zhipu, founded in 2019, is one of China’s earliest developers of large-scale pre-trained models. Since releasing its first ChatGLM model in March 2023, the company has iterated four times and launched over 20 AI products. By year-end 2023, Zhipu reported more than 2,000 ecosystem partners, 1,000 enterprise applications, and over 25 million users on its Qingyan platform. Paid features have helped Zhipu cross an ARR of over 10 million yuan.

  On the funding side, Zhipu recently announced a RMB 1 billion strategic investment from Shanghai’s state-owned capital as it moves closer to a domestic IPO. Prior rounds included backing from Hangzhou Urban Investment, Shangcheng Capital, and Zhuhai Huafa, with a total raise exceeding RMB 10 billion. Zhipu’s investors now span top VCs such as Hillhouse, Qiming, and Legend Capital, alongside internet giants Alibaba, Meituan, Tencent, and Xiaomi.

  The launch of GLM-4.5 also kicks off what the company calls its “Year of Open Source”, with plans to roll out a full suite of foundational, inference, multimodal, and agent models.

  Zhipu’s ambitions underscore a broader trend in China’s AI strategy: doubling down on open-source at a time when U.S. models increasingly tilt toward closed platforms. Analysts say this divergence could reshape the global LLM landscape.

  “Open-sourcing domestic models injects fresh momentum into the AI ecosystem,” one industry insider told TMTPost. “It’s likely to trigger a new phase of global model realignment.”

  Zhipu’s release coincided with another headline from rival Alibaba, which on Sunday introduced Tongyi Wanxiang 2.2, a cinematic-grade video generation model with more than 60 tunable visual parameters. Last week, Alibaba also unveiled Qwen 3, Qwen3-Reasoning, and Qwen3-Coder, strengthening its position across base, reasoning, and code-generation models.

  Meanwhile, Stepverse’s Step-3, announced at the World Artificial Intelligence Conference, is the company’s first native multimodal model and boasts 321 billion parameters using MoE architecture—reflecting the industry-wide shift toward large, efficient multi-expert systems.

  As the pace of innovation accelerates, the open-source release of GLM-4.5 marks a pivotal moment not only for Zhipu, but for China’s LLM ambitions at large. With technical superiority, cost-efficiency, and ecosystem momentum, the company is positioning itself as a serious challenger, not just at home, but globally.

