Next-generation large language model (LLM) from China's Zhipu AI quickly becomes the world's leading open-weight model – Maeil Business News Korea

The next-generation large language model (LLM) announced by China’s Zhipu AI has quickly become the world’s leading open-weight model. By narrowing the gap with the highest-performing models from OpenAI and Anthropic, it has once again ignited the U.S.-China competition over AI models.
On the 12th, Zhipu AI announced its next-generation model “GLM-5,” which specializes in high-level coding and agentic work.
Zhipu AI, which focuses on developing foundation models, is a leading Chinese AI company aiming for artificial general intelligence (AGI); it listed on the Hong Kong Stock Exchange last month after growing into a “unicorn” (an unlisted startup valued at more than $1 billion).
The new GLM-5 is strong in code generation and in the agentic capabilities needed for software development, such as vibe coding.
Compared with the previous-generation GLM-4.5 (355 billion parameters), the model was scaled up to 744 billion parameters and trained on more pre-training data, improving performance. In addition, DeepSeek’s sparse-attention technique is applied to handle long contexts efficiently, maintaining performance while lowering cost.
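The sparse-attention idea mentioned above can be illustrated with a minimal sketch (this is an illustration of the general top-k sparse-attention concept, not Zhipu AI’s or DeepSeek’s actual implementation): instead of every query token attending to every key, each query keeps only its highest-scoring keys, which reduces the effective work per token on long contexts.

```python
import numpy as np

def sparse_attention(q, k, v, top_k=4):
    """Toy top-k sparse attention.

    Each query attends only to its top_k highest-scoring keys rather
    than the full key set. Shapes: q (n, d), k (m, d), v (m, d).
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])        # (n, m) attention logits
    # For each query row, find the indices of everything BELOW the
    # top_k and mask those logits to -inf so softmax zeroes them out.
    low_idx = np.argpartition(scores, -top_k, axis=-1)[:, :-top_k]
    np.put_along_axis(scores, low_idx, -np.inf, axis=-1)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                              # (n, d) attended values

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 16))    # 8 query tokens, dimension 16
k = rng.normal(size=(32, 16))   # 32 key tokens
v = rng.normal(size=(32, 16))
out = sparse_attention(q, k, v, top_k=4)
print(out.shape)  # (8, 16)
```

Real systems select sparsely at the block or page level with learned or indexed scoring so the full score matrix is never materialized; the dense-then-mask version above only conveys the selection principle.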
In the benchmark results released by Zhipu AI, GLM-5 scored on par with the latest models from OpenAI, Anthropic, and Google on demanding benchmarks such as Humanity’s Last Exam (HLE) and the SWE-bench Verified coding benchmark.
The capability Zhipu AI emphasizes most in GLM-5 is agentic performance. With the use of AI agents expected to grow significantly worldwide this year, the company appears to be highlighting the model’s ability to carry out tasks autonomously.
On Terminal Bench 2.0, which evaluates how well AI models perform tasks in a terminal (CLI) environment, GLM-5 scored 56.2 points, second among the compared models behind Anthropic’s Claude Opus 4.5 (59.3 points).
On CC-Bench-V2, which measures a model’s coding-agent ability in detail across front-end, back-end, and long-horizon tasks, GLM-5 improved significantly over its predecessor and overtook Anthropic on some metrics.
A defining characteristic of GLM-5 is that, unlike closed models, it is an open-weight model: the weights are publicly disclosed and the model can be downloaded and run by other companies.
Immediately after its release, GLM-5 ranked first among open-weight models in the performance index compiled by global AI research firm Artificial Analysis.
It also ranked third worldwide in the overall ranking including closed models, with 50 points, behind Anthropic’s Claude Opus 4.6 (53 points) and OpenAI’s GPT-5.2 (51 points).
Zhipu AI’s announcement came ahead of China’s biggest holiday, the Lunar New Year, and other leading Chinese AI companies are also expected to unveil their latest models around the holiday to showcase their technology.
Minimax, another AI company, unveiled its latest model “Minimax M2.5” on the 11th, and DeepSeek is also expected to announce its next-generation model “DeepSeek V4” around the Spring Festival.
※ This article was translated using AI technology for reader convenience.
Maeil Business (MK) provides these translations “as is” and makes no warranties of any kind, express or implied, regarding the accuracy, reliability, merchantability, or fitness for a particular purpose of the translation. Please note that the content may not be translated accurately due to the limitations of machine translation.
Copyright (c) 매경AX. Maeil Business News Korea & mk.co.kr, All rights reserved.
Unauthorized reproduction, redistribution, and use for AI training are prohibited.
