Add arXiv metadata and update citation with paper links

#40
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +16 -3
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 language:
-- en
-- zh
+- en
+- zh
 library_name: transformers
 license: mit
 pipeline_tag: text-generation
@@ -22,6 +22,11 @@ pipeline_tag: text-generation
 👉 One click to <a href="https://chat.z.ai">GLM-5</a>.
 </p>
 
+<p align="center">
+[<a href="https://huggingface.co/papers/2602.15763" target="_blank">Paper</a>]
+[<a href="https://github.com/zai-org/GLM-5" target="_blank">GitHub</a>]
+</p>
+
 ## Introduction
 
 We are launching GLM-5, targeting complex systems engineering and long-horizon agentic tasks. Scaling is still one of the most important ways to improve the intelligence efficiency of Artificial General Intelligence (AGI). Compared to GLM-4.5, GLM-5 scales from 355B parameters (32B active) to 744B parameters (40B active), and increases pre-training data from 23T to 28.5T tokens. GLM-5 also integrates DeepSeek Sparse Attention (DSA), largely reducing deployment cost while preserving long-context capacity.
@@ -149,4 +154,12 @@ vLLM, SGLang, KTransformers, and xLLM all support local deployment of GLM-5. A s
 
 ## Citation
 
-Our technical report is coming soon.
+```bibtex
+@article{glm5team2026glm5,
+title={GLM-5: from Vibe Coding to Agentic Engineering},
+author={GLM-5 Team and Aohan Zeng and Xin Lv and Zhenyu Hou and Zhengxiao Du and Qinkai Zheng and Bin Chen and Da Yin and Chendi Ge and Chengxing Xie and others},
+journal={arXiv preprint arXiv:2602.15763},
+year={2026},
+url={https://huggingface.co/papers/2602.15763}
+}
+```