Paper page - LatentMem: Customizing Latent Memory for Multi-Agent Systems
\n","updatedAt":"2026-02-06T21:11:59.970Z","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.5891566276550293},"editors":["avahal"],"editorAvatarUrls":["/avatars/743a009681d5d554c27e04300db9f267.svg"],"reactions":[],"isReport":false}},{"id":"698697bc49d71b321868cc5d","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2026-02-07T01:39:08.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [StackPlanner: A Centralized Hierarchical Multi-Agent System with Task-Experience Memory Management](https://huggingface.co/papers/2601.05890) (2026)\n* [AMA: Adaptive Memory via Multi-Agent Collaboration](https://huggingface.co/papers/2601.20352) (2026)\n* [MemEvolve: Meta-Evolution of Agent Memory Systems](https://huggingface.co/papers/2512.18746) (2025)\n* [E-mem: Multi-agent based Episodic Context Reconstruction for LLM Agent Memory](https://huggingface.co/papers/2601.21714) (2026)\n* [MemBuilder: Reinforcing LLMs for Long-Term Memory Construction via Attributed Dense Rewards](https://huggingface.co/papers/2601.05488) (2026)\n* [Fine-Mem: Fine-Grained Feedback Alignment for Long-Horizon Memory Management](https://huggingface.co/papers/2601.08435) (2026)\n* [Implicit Graph, Explicit Retrieval: Towards Efficient and Interpretable Long-horizon Memory for Large Language Models](https://huggingface.co/papers/2601.03417) (2026)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
\n
The following papers were recommended by the Semantic Scholar API
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any Paper on Hugging Face, check out [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space.

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
\n","updatedAt":"2026-02-07T01:39:08.200Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6956989765167236},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2602.03036","authors":[{"_id":"698560fa4ad556f294b7eb9c","name":"Muxin Fu","hidden":false},{"_id":"698560fa4ad556f294b7eb9d","name":"Guibin Zhang","hidden":false},{"_id":"698560fa4ad556f294b7eb9e","name":"Xiangyuan Xue","hidden":false},{"_id":"698560fa4ad556f294b7eb9f","name":"Yafu Li","hidden":false},{"_id":"698560fa4ad556f294b7eba0","name":"Zefeng He","hidden":false},{"_id":"698560fa4ad556f294b7eba1","name":"Siyuan Huang","hidden":false},{"_id":"698560fa4ad556f294b7eba2","name":"Xiaoye Qu","hidden":false},{"_id":"698560fa4ad556f294b7eba3","name":"Yu Cheng","hidden":false},{"_id":"698560fa4ad556f294b7eba4","name":"Yang Yang","hidden":false}],"publishedAt":"2026-02-03T03:03:16.000Z","submittedOnDailyAt":"2026-02-06T01:18:12.484Z","title":"LatentMem: Customizing Latent Memory for Multi-Agent Systems","submittedOnDailyBy":{"_id":"64cb54da1af278541d663708","avatarUrl":"/avatars/c44507cc92bb2e83154bad31b90ce6dd.svg","isPro":false,"fullname":"Xiaoye Qu","user":"Xiaoye08","type":"user"},"summary":"Large language model (LLM)-powered multi-agent systems (MAS) demonstrate remarkable collective intelligence, wherein multi-agent memory serves as a pivotal mechanism for continual adaptation. However, existing multi-agent memory designs remain constrained by two fundamental bottlenecks: (i) memory homogenization arising from the absence of role-aware customization, and (ii) information overload induced by excessively fine-grained memory entries. To address these limitations, we propose LatentMem, a learnable multi-agent memory framework designed to customize agent-specific memories in a token-efficient manner. Specifically, LatentMem comprises an experience bank that stores raw interaction trajectories in a lightweight form, and a memory composer that synthesizes compact latent memories conditioned on retrieved experience and agent-specific contexts. Further, we introduce Latent Memory Policy Optimization (LMPO), which propagates task-level optimization signals through latent memories to the composer, encouraging it to produce compact and high-utility representations. 
Extensive experiments across diverse benchmarks and mainstream MAS frameworks show that LatentMem achieves a performance gain of up to 19.36% over vanilla settings and consistently outperforms existing memory architectures, without requiring any modifications to the underlying frameworks.","upvotes":14,"discussionId":"698560fb4ad556f294b7eba5","githubRepo":"https://github.com/KANABOON1/LatentMem","githubRepoAddedBy":"user","ai_summary":"LatentMem is a learnable multi-agent memory framework that customizes agent-specific memories through latent representations, improving performance in multi-agent systems without modifying underlying frameworks.","ai_keywords":["multi-agent systems","multi-agent memory","latent memory","experience bank","memory composer","latent memory policy optimization","task-level optimization","agent-specific contexts","token-efficient memory"],"githubStars":27},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"64cb54da1af278541d663708","avatarUrl":"/avatars/c44507cc92bb2e83154bad31b90ce6dd.svg","isPro":false,"fullname":"Xiaoye Qu","user":"Xiaoye08","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"6363a1fa123a5d5cd4a800e2","avatarUrl":"/avatars/a0961ca5463aae05de0b1574c0064fae.svg","isPro":false,"fullname":"guibin zhang","user":"greeky","type":"user"},{"_id":"67bde7dec73e0b462c34d379","avatarUrl":"/avatars/f7656adc28805490124b6ed73fe73858.svg","isPro":false,"fullname":"Kana Boon","user":"Kana-s","type":"user"},{"_id":"665fc6a24058eea6b22015dd","avatarUrl":"/avatars/0c22fe00ab8949bcef99be82c2dc2c62.svg","isPro":false,"fullname":"Kian Chen","user":"keionc","type":"user"},{"_id":"67247adb73d1eb17b6bfd27c","avatarUrl":"/avatars/57bdbb7362f9854c87dd0a71ae071652.svg","isPro":false,"fullname":"Zefeng He","user":"yhx12","type":"user"},{"_id":"6463554dd2044cd1d7c6e0bf","avatarUrl":"/avatars/d7653623117268c545a7063fec69664b.svg","isPro":false,"fullname":"Bingzheng Wei","user":"Bingzheng","type":"user"},{"_id":"6544b9b646dbdeca34ee5f52","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6544b9b646dbdeca34ee5f52/nRx6m1C4wfZ_xSWoBUNJf.png","isPro":false,"fullname":"Yuyang Hu","user":"namespace-ERI","type":"user"},{"_id":"6570450a78d7aca0c361a177","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6570450a78d7aca0c361a177/MX7jHhTQwLs-BvYIu5rqb.jpeg","isPro":false,"fullname":"Harold Chen","user":"Harold328","type":"user"},{"_id":"618c775031eb5ed2af905b2c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1636595521969-noauth.png","isPro":false,"fullname":"Xin Li","user":"lixin67","type":"user"},{"_id":"66935bdc5489e4f73c76bc7b","avatarUrl":"/avatars/129d1e86bbaf764b507501f4feb177db.svg","isPro":false,"fullname":"Abidoye Aanuoluwapo","user":"Aanuoluwapo65","type":"user"},{"_id":"64834b399b352597e41816ac","avatarUrl":"/avatars/63d9d123bffa90f43186a0bdc4455cbd.svg","isPro":false,"fullname":"Shaobai Jiang","user":"shaobaij","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary

LatentMem is a learnable multi-agent memory framework that customizes agent-specific memories through latent representations, improving performance in multi-agent systems without modifying underlying frameworks.

Abstract
Large language model (LLM)-powered multi-agent systems (MAS) demonstrate remarkable collective intelligence, wherein multi-agent memory serves as a pivotal mechanism for continual adaptation. However, existing multi-agent memory designs remain constrained by two fundamental bottlenecks: (i) memory homogenization arising from the absence of role-aware customization, and (ii) information overload induced by excessively fine-grained memory entries. To address these limitations, we propose LatentMem, a learnable multi-agent memory framework designed to customize agent-specific memories in a token-efficient manner. Specifically, LatentMem comprises an experience bank that stores raw interaction trajectories in a lightweight form, and a memory composer that synthesizes compact latent memories conditioned on retrieved experience and agent-specific contexts. Further, we introduce Latent Memory Policy Optimization (LMPO), which propagates task-level optimization signals through latent memories to the composer, encouraging it to produce compact and high-utility representations. Extensive experiments across diverse benchmarks and mainstream MAS frameworks show that LatentMem achieves a performance gain of up to 19.36% over vanilla settings and consistently outperforms existing memory architectures, without requiring any modifications to the underlying frameworks.
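To make the two components concrete, here is a minimal PyTorch sketch of an experience bank paired with a cross-attention memory composer. Everything specific below is our illustrative assumption rather than the paper's implementation: the class and parameter names (`ExperienceBank`, `MemoryComposer`, `n_latent`), the cosine-similarity retrieval rule, and the tensor shapes. The abstract specifies only the components' roles, not this design.

```python
# Hedged sketch: an experience bank + a memory composer that compresses
# retrieved experience and an agent-specific context into a few latent
# vectors. All names, shapes, and the retrieval rule are our assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExperienceBank:
    """Stores raw interaction trajectories as (pooled key, payload) pairs."""
    def __init__(self):
        self.keys = []      # one mean-pooled embedding per trajectory
        self.payloads = []  # token embeddings of the raw trajectory

    def add(self, traj_emb: torch.Tensor) -> None:
        # traj_emb: (seq_len, d_model) token embeddings of one trajectory
        self.keys.append(traj_emb.mean(dim=0))
        self.payloads.append(traj_emb)

    def retrieve(self, query: torch.Tensor, k: int = 2) -> torch.Tensor:
        # Rank stored trajectories by cosine similarity to the query vector.
        keys = torch.stack(self.keys)                       # (N, d_model)
        sims = F.cosine_similarity(keys, query.unsqueeze(0), dim=-1)
        top = sims.topk(min(k, len(self.keys))).indices
        return torch.cat([self.payloads[i] for i in top])   # (sum_len, d_model)

class MemoryComposer(nn.Module):
    """Cross-attends a fixed set of learned queries over retrieved experience
    plus the agent context, yielding a small latent memory per agent."""
    def __init__(self, d_model: int = 64, n_latent: int = 4, n_heads: int = 4):
        super().__init__()
        self.latent_queries = nn.Parameter(torch.randn(n_latent, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, experience: torch.Tensor, agent_ctx: torch.Tensor) -> torch.Tensor:
        # experience: (L_e, d), agent_ctx: (L_a, d) -> latent memory: (n_latent, d)
        source = torch.cat([experience, agent_ctx]).unsqueeze(0)  # (1, L, d)
        q = self.latent_queries.unsqueeze(0)                      # (1, n_latent, d)
        latent, _ = self.attn(q, source, source)
        return self.proj(latent).squeeze(0)

# Usage: each agent gets its own compact, role-conditioned memory.
d = 64
bank = ExperienceBank()
for _ in range(5):                        # fake stored trajectories
    bank.add(torch.randn(10, d))
composer = MemoryComposer(d_model=d)
agent_ctx = torch.randn(3, d)             # e.g. an embedded role/system prompt
retrieved = bank.retrieve(agent_ctx.mean(dim=0), k=2)
latent_memory = composer(retrieved, agent_ctx)
print(latent_memory.shape)                # torch.Size([4, 64]): 4 slots vs. 23 source tokens
```

Conditioning the learned queries' attention on the agent context is what would make the memory role-aware, and the fixed slot count is what caps token cost, matching the two bottlenecks the abstract targets.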
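LMPO must push a task-level reward through latent vectors that are consumed by a non-differentiable LLM rollout, so a score-function (REINFORCE-style) estimator is one plausible mechanism. The sketch below, including the Gaussian policy over latent memories, the mean-reward baseline, the `TinyComposer` stand-in, and the hypothetical `run_task` callback, is our guess at the shape of such an update, not the published algorithm.

```python
# Hedged sketch of a policy-gradient update that routes task-level reward
# through sampled latent memories back to the composer. The Gaussian policy,
# baseline, and run_task interface are our assumptions for illustration.
import torch

class TinyComposer(torch.nn.Module):
    """Stand-in composer: any module mapping (experience, context) -> latent."""
    def __init__(self, d: int = 64):
        super().__init__()
        self.proj = torch.nn.Linear(d, d)
    def forward(self, experience, agent_ctx):
        return self.proj(torch.cat([experience, agent_ctx]).mean(0, keepdim=True))

def lmpo_step(composer, optimizer, experience, agent_ctx,
              run_task, sigma: float = 0.1, n_samples: int = 4):
    """One REINFORCE-style update on the composer.

    run_task(latent) -> float must execute the (non-differentiable)
    multi-agent rollout with `latent` injected as memory and score it.
    """
    mu = composer(experience, agent_ctx)           # mean of the latent policy
    log_probs, rewards = [], []
    for _ in range(n_samples):
        latent = mu + sigma * torch.randn_like(mu)  # sample from N(mu, sigma^2 I)
        # Log-density of the sample under the current policy, up to a constant.
        log_probs.append(-((latent.detach() - mu) ** 2).sum() / (2 * sigma ** 2))
        rewards.append(run_task(latent.detach()))   # task-level scalar signal
    rewards_t = torch.tensor(rewards)
    advantages = rewards_t - rewards_t.mean()       # mean-reward baseline
    loss = -(advantages * torch.stack(log_probs)).mean()
    optimizer.zero_grad()
    loss.backward()                                 # gradient reaches the composer
    optimizer.step()
    return rewards_t.mean().item()

# Usage with a toy reward (a real run_task would execute the MAS rollout):
torch.manual_seed(0)
composer = TinyComposer()
opt = torch.optim.Adam(composer.parameters(), lr=1e-3)
experience, agent_ctx = torch.randn(8, 64), torch.randn(3, 64)
for _ in range(3):
    print(lmpo_step(composer, opt, experience, agent_ctx,
                    run_task=lambda z: -z.pow(2).mean().item()))
```

Under these assumptions the composer is rewarded only through task outcomes, which is consistent with the abstract's claim that LMPO encourages compact, high-utility latent representations without modifying the underlying MAS framework.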