GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models
AI-generated summary: GLM-4.5, a Mixture-of-Experts large language model with 355B total parameters (32B activated per token), achieves strong performance across agentic, reasoning, and coding tasks through multi-stage training and reinforcement learning.
We present GLM-4.5, an open-source Mixture-of-Experts (MoE) large language
model with 355B total parameters and 32B activated parameters, featuring a
hybrid reasoning method that supports both thinking and direct response modes.
Through multi-stage training on 23T tokens and comprehensive post-training with
expert model iteration and reinforcement learning, GLM-4.5 achieves strong
performance across agentic, reasoning, and coding (ARC) tasks, scoring 70.1% on
TAU-Bench, 91.0% on AIME 24, and 64.2% on SWE-bench Verified. With far fewer
parameters than several competitors, GLM-4.5 ranks 3rd overall among all
evaluated models and 2nd on agentic benchmarks. We release both GLM-4.5 (355B
parameters) and a compact version, GLM-4.5-Air (106B parameters), to advance
research in reasoning and agentic AI systems. Code, models, and more
information are available at https://github.com/zai-org/GLM-4.5.
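
The parameter counts in the abstract rest on the distinction between total and activated parameters in a Mixture-of-Experts model: each token is routed to only a few experts, so a forward pass touches roughly 32B of the 355B weights (about 9%). The sketch below is a minimal, hypothetical illustration of top-k expert routing in NumPy; the layer sizes, expert count, and top-k value are toy numbers chosen for readability, not GLM-4.5's actual configuration.

```python
# Toy sketch (not the GLM-4.5 implementation) of top-k Mixture-of-Experts routing,
# illustrating why only a fraction of a model's total parameters is activated per token.
# All sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 64        # hidden size (toy value)
D_FF = 256          # expert feed-forward size (toy value)
N_EXPERTS = 8       # total experts in the layer
TOP_K = 2           # experts activated per token

# Each expert is a small two-layer MLP; together they hold most of the layer's parameters.
experts = [
    (rng.standard_normal((D_MODEL, D_FF)) * 0.02,
     rng.standard_normal((D_FF, D_MODEL)) * 0.02)
    for _ in range(N_EXPERTS)
]
router = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02  # token -> expert scores


def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector x through its top-k experts and mix the outputs."""
    scores = x @ router                               # (N_EXPERTS,)
    top = np.argsort(scores)[-TOP_K:]                 # indices of the k highest-scoring experts
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                          # softmax over the selected experts only
    out = np.zeros_like(x)
    for w, idx in zip(weights, top):
        w_in, w_out = experts[idx]
        out += w * (np.maximum(x @ w_in, 0.0) @ w_out)  # ReLU MLP expert
    return out


token = rng.standard_normal(D_MODEL)
print(moe_forward(token).shape)  # (64,)

# Parameter accounting: total vs. activated per token.
per_expert = D_MODEL * D_FF + D_FF * D_MODEL
total_params = N_EXPERTS * per_expert + D_MODEL * N_EXPERTS
active_params = TOP_K * per_expert + D_MODEL * N_EXPERTS
print(f"total={total_params:,}  activated per token={active_params:,}")
```

Scaling the same accounting to the reported sizes gives the 355B-total / 32B-activated split quoted in the abstract.

The abstract also points to released checkpoints for GLM-4.5 and the compact GLM-4.5-Air. As a rough, hypothetical usage sketch with Hugging Face transformers (the model id, dtype handling, and any thinking-mode switch are assumptions; the linked GitHub repository is authoritative for loading instructions and hardware requirements):

```python
# Hypothetical loading sketch; not an official snippet. A recent transformers
# release (or trust_remote_code) and substantial GPU memory may be required.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zai-org/GLM-4.5-Air"  # assumed repo id; the 355B GLM-4.5 needs far more memory

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Write a Python function that merges two sorted lists."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```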