\n","updatedAt":"2026-02-06T15:27:48.967Z","author":{"_id":"64f58f3468047192d6c7f335","avatarUrl":"/avatars/88be16ee80da7d2eaa0feae878375001.svg","fullname":"XaiverZ","name":"XaiverZ","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":1,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.8833367824554443},"editors":["XaiverZ"],"editorAvatarUrls":["/avatars/88be16ee80da7d2eaa0feae878375001.svg"],"reactions":[],"isReport":false},"replies":[{"id":"698753faed642f9f52a951a5","author":{"_id":"65d9fc2a0e6ad24551d87a1e","avatarUrl":"/avatars/3aedb9522cc3cd08349d654f523fd792.svg","fullname":"Grant Singleton","name":"grantsing","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":4,"isUserFollowing":false},"createdAt":"2026-02-07T15:02:18.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"arXiv explained breakdown of this paper 👉 https://arxivexplained.com/papers/memskill-learning-and-evolving-memory-skills-for-self-evolving-agents\n","html":"

arXiv explained breakdown of this paper 👉 https://arxivexplained.com/papers/memskill-learning-and-evolving-memory-skills-for-self-evolving-agents

\n","updatedAt":"2026-02-07T15:02:18.871Z","author":{"_id":"65d9fc2a0e6ad24551d87a1e","avatarUrl":"/avatars/3aedb9522cc3cd08349d654f523fd792.svg","fullname":"Grant Singleton","name":"grantsing","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":4,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7621778249740601},"editors":["grantsing"],"editorAvatarUrls":["/avatars/3aedb9522cc3cd08349d654f523fd792.svg"],"reactions":[],"isReport":false,"parentCommentId":"69860874e5f2f24a44103465"}}]},{"id":"698657e657ce16a729e8a9a2","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false},"createdAt":"2026-02-06T21:06:46.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"arXivLens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/memskill-learning-and-evolving-memory-skills-for-self-evolving-agents-2094-8dae2b13\n- Executive Summary\n- Detailed Breakdown\n- Practical Applications","html":"

arXivLens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/memskill-learning-and-evolving-memory-skills-for-self-evolving-agents-2094-8dae2b13

\n
    \n
  • Executive Summary
  • \n
  • Detailed Breakdown
  • \n
  • Practical Applications
  • \n
\n","updatedAt":"2026-02-06T21:06:46.388Z","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7418103814125061},"editors":["avahal"],"editorAvatarUrls":["/avatars/743a009681d5d554c27e04300db9f267.svg"],"reactions":[],"isReport":false}},{"id":"6986979449d71b321868c908","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2026-02-07T01:38:28.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [ProcMEM: Learning Reusable Procedural Memory from Experience via Non-Parametric PPO for LLM Agents](https://huggingface.co/papers/2602.01869) (2026)\n* [Live-Evo: Online Evolution of Agentic Memory from Continuous Feedback](https://huggingface.co/papers/2602.02369) (2026)\n* [MemEvolve: Meta-Evolution of Agent Memory Systems](https://huggingface.co/papers/2512.18746) (2025)\n* [Agentic Memory: Learning Unified Long-Term and Short-Term Memory Management for Large Language Model Agents](https://huggingface.co/papers/2601.01885) (2026)\n* [MemWeaver: Weaving Hybrid Memories for Traceable Long-Horizon Agentic Reasoning](https://huggingface.co/papers/2601.18204) (2026)\n* [AtomMem : Learnable Dynamic Agentic Memory with Atomic Memory Operation](https://huggingface.co/papers/2601.08323) (2026)\n* [AMA: Adaptive Memory via Multi-Agent Collaboration](https://huggingface.co/papers/2601.20352) (2026)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

\n

The following papers were recommended by the Semantic Scholar API

\n\n

Please give a thumbs up to this comment if you found it helpful!

\n

If you want recommendations for any Paper on Hugging Face checkout this Space

\n

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend

\n","updatedAt":"2026-02-07T01:38:28.185Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7066053152084351},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[{"reaction":"🔥","users":["0sm0s1s","taofeng","XaiverZ"],"count":3}],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2602.02474","authors":[{"_id":"6982023e47987be58cdb7d3a","name":"Haozhen Zhang","hidden":false},{"_id":"6982023e47987be58cdb7d3b","name":"Quanyu Long","hidden":false},{"_id":"6982023e47987be58cdb7d3c","name":"Jianzhu Bao","hidden":false},{"_id":"6982023e47987be58cdb7d3d","name":"Tao Feng","hidden":false},{"_id":"6982023e47987be58cdb7d3e","name":"Weizhi Zhang","hidden":false},{"_id":"6982023e47987be58cdb7d3f","name":"Haodong Yue","hidden":false},{"_id":"6982023e47987be58cdb7d40","name":"Wenya Wang","hidden":false}],"publishedAt":"2026-02-02T18:53:28.000Z","submittedOnDailyAt":"2026-02-06T12:57:48.958Z","title":"MemSkill: Learning and Evolving Memory Skills for Self-Evolving Agents","submittedOnDailyBy":{"_id":"64f58f3468047192d6c7f335","avatarUrl":"/avatars/88be16ee80da7d2eaa0feae878375001.svg","isPro":false,"fullname":"XaiverZ","user":"XaiverZ","type":"user"},"summary":"Most Large Language Model (LLM) agent memory systems rely on a small set of static, hand-designed operations for extracting memory. These fixed procedures hard-code human priors about what to store and how to revise memory, making them rigid under diverse interaction patterns and inefficient on long histories. To this end, we present MemSkill, which reframes these operations as learnable and evolvable memory skills, structured and reusable routines for extracting, consolidating, and pruning information from interaction traces. Inspired by the design philosophy of agent skills, MemSkill employs a controller that learns to select a small set of relevant skills, paired with an LLM-based executor that produces skill-guided memories. Beyond learning skill selection, MemSkill introduces a designer that periodically reviews hard cases where selected skills yield incorrect or incomplete memories, and evolves the skill set by proposing refinements and new skills. Together, MemSkill forms a closed-loop procedure that improves both the skill-selection policy and the skill set itself. Experiments on LoCoMo, LongMemEval, HotpotQA, and ALFWorld demonstrate that MemSkill improves task performance over strong baselines and generalizes well across settings. 
Further analyses shed light on how skills evolve, offering insights toward more adaptive, self-evolving memory management for LLM agents.","upvotes":55,"discussionId":"6982023e47987be58cdb7d41","projectPage":"https://viktoraxelsen.github.io/MemSkill/","githubRepo":"https://github.com/ViktorAxelsen/MemSkill","githubRepoAddedBy":"user","ai_summary":"MemSkill introduces a learnable and evolvable memory system for LLM agents that dynamically selects and refines memory operations through controller-executor-designer components.","ai_keywords":["Large Language Model","memory systems","learnable memory skills","evolvable memory","controller","executor","designer","skill selection","skill evolution","agent skills","interaction traces","memory extraction","memory consolidation","memory pruning"],"githubStars":205,"organization":{"_id":"6508b28cf36bb51c50faad98","name":"NanyangTechnologicalUniversity","fullname":"Nanyang Technological University","avatar":"https://cdn-uploads.huggingface.co/production/uploads/630ca0817dacb93b33506ce7/ZPD1fvei0bcIGeDXxeSkn.png"}},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"64f58f3468047192d6c7f335","avatarUrl":"/avatars/88be16ee80da7d2eaa0feae878375001.svg","isPro":false,"fullname":"XaiverZ","user":"XaiverZ","type":"user"},{"_id":"65c1d1bda2239cf479ecf573","avatarUrl":"/avatars/70483a134b3236d690fc4d9409f1ecad.svg","isPro":false,"fullname":"Long","user":"Quanyu001","type":"user"},{"_id":"68b6b867558d1e07f88c90c0","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/mdtk2bjdn4fIrHpvaI0lH.png","isPro":false,"fullname":"HaoDong Yue","user":"HaoDongHD","type":"user"},{"_id":"68eb753bd7b8e72eb5a53e10","avatarUrl":"/avatars/3ef63182b4d929f5b670aa248474409c.svg","isPro":false,"fullname":"ben_hit_save","user":"ben-hit-save","type":"user"},{"_id":"681cadf977c23f72668b020b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/681cadf977c23f72668b020b/Xlf7MADD-K4xhPvo8SpfM.png","isPro":false,"fullname":"Xiao Li","user":"undefined443","type":"user"},{"_id":"61e52be53d6dbb1da842316a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/61e52be53d6dbb1da842316a/gx0WGPcOCClXPymoKglc4.jpeg","isPro":false,"fullname":"Börje Karlsson","user":"tellarin","type":"user"},{"_id":"69860fbec9e8aa8e54c546b4","avatarUrl":"/avatars/c52d3f402f9d91e9cd678be7a29c00db.svg","isPro":false,"fullname":"viktor","user":"uidzhz","type":"user"},{"_id":"698612988e94310526ec055f","avatarUrl":"/avatars/ff89ccf633291f8dc4e6d8ca297e564f.svg","isPro":false,"fullname":"qihuliu","user":"qihuuu","type":"user"},{"_id":"6986131890534a7c3e1f3a78","avatarUrl":"/avatars/ae1fb4137c13e4b7d08e770e6319437a.svg","isPro":false,"fullname":"jiyuyang","user":"jiyuuuu","type":"user"},{"_id":"698613d12c2c5b4ec4c6cddb","avatarUrl":"/avatars/9fecc42dff8dbf11ee5040f3beadb243.svg","isPro":false,"fullname":"tuiqiwang","user":"tuiqiviktor","type":"user"},{"_id":"69861ccc207a57709baf537e","avatarUrl":"/avatars/cf7a6d6c693f36d7cbd1b9bb4ee8282a.svg","isPro":false,"fullname":"hughliu","user":"oiuqer","type":"user"},{"_id":"69861d3121e84ea63df44df0","avatarUrl":"/avatars/b747ceba166fe0c26e7da78b15463730.svg","isPro":false,"fullname":"hujuwang","user":"liuwwwang","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":3,"organization":{"_id":"6508b28cf36bb51c50faad98","name":"NanyangTechnologicalUniversity","fullname":"Nanyang Technological 
University","avatar":"https://cdn-uploads.huggingface.co/production/uploads/630ca0817dacb93b33506ce7/ZPD1fvei0bcIGeDXxeSkn.png"}}">
arxiv:2602.02474

MemSkill: Learning and Evolving Memory Skills for Self-Evolving Agents

Published on Feb 2 · Submitted by XaiverZ on Feb 6
#3 Paper of the day
Authors: Haozhen Zhang, Quanyu Long, Jianzhu Bao, Tao Feng, Weizhi Zhang, Haodong Yue, Wenya Wang
Project page: https://viktoraxelsen.github.io/MemSkill/
Code: https://github.com/ViktorAxelsen/MemSkill

Abstract

AI-generated summary

MemSkill introduces a learnable and evolvable memory system for LLM agents that dynamically selects and refines memory operations through controller-executor-designer components.

Most Large Language Model (LLM) agent memory systems rely on a small set of static, hand-designed operations for extracting memory. These fixed procedures hard-code human priors about what to store and how to revise memory, making them rigid under diverse interaction patterns and inefficient on long histories. To this end, we present MemSkill, which reframes these operations as learnable and evolvable memory skills, structured and reusable routines for extracting, consolidating, and pruning information from interaction traces. Inspired by the design philosophy of agent skills, MemSkill employs a controller that learns to select a small set of relevant skills, paired with an LLM-based executor that produces skill-guided memories. Beyond learning skill selection, MemSkill introduces a designer that periodically reviews hard cases where selected skills yield incorrect or incomplete memories, and evolves the skill set by proposing refinements and new skills. Together, MemSkill forms a closed-loop procedure that improves both the skill-selection policy and the skill set itself. Experiments on LoCoMo, LongMemEval, HotpotQA, and ALFWorld demonstrate that MemSkill improves task performance over strong baselines and generalizes well across settings. Further analyses shed light on how skills evolve, offering insights toward more adaptive, self-evolving memory management for LLM agents.
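
The abstract describes a controller that selects a small set of relevant skills and an LLM-based executor that turns them into skill-guided memories. As a rough illustration only, here is a minimal Python sketch of that selection-then-execution step; every name in it (Skill, select_skills, execute_skills, call_llm) is a hypothetical placeholder, not the interface of the released code.

```python
# Minimal sketch of the controller -> executor step described in the abstract.
# All names here are hypothetical placeholders; see the official repo for the
# real interfaces and the learned selection policy.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Skill:
    name: str
    instruction: str    # reusable routine: what to extract, consolidate, or prune
    score: float = 0.0  # stand-in for the controller's learned relevance estimate


def select_skills(skills: List[Skill], trace: str, k: int = 3) -> List[Skill]:
    """Controller: pick a small set of skills judged relevant to this trace.

    The paper learns this selection policy from the trace; ranking by a stored
    score is only a placeholder for that learned component.
    """
    return sorted(skills, key=lambda s: s.score, reverse=True)[:k]


def execute_skills(selected: List[Skill], trace: str,
                   call_llm: Callable[[str], str]) -> List[str]:
    """Executor: prompt an LLM once per selected skill to produce memories."""
    memories = []
    for skill in selected:
        prompt = (
            f"Skill: {skill.name}\n"
            f"Instruction: {skill.instruction}\n\n"
            f"Interaction trace:\n{trace}\n\n"
            "Return the memory entries this skill should produce."
        )
        memories.append(call_llm(prompt))
    return memories


# Usage, with any LLM client wrapped as call_llm(prompt) -> str:
#   selected = select_skills(skill_library, trace)
#   memories = execute_skills(selected, trace, call_llm)
```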

Community

Paper submitter

Most Large Language Model (LLM) agent memory systems rely on a small set of static, hand-designed operations for extracting memory. These fixed procedures hard-code human priors about what to store and how to revise memory, making them rigid under diverse interaction patterns and inefficient on long histories.
To this end, we present MemSkill, which reframes these operations as learnable and evolvable memory skills, structured and reusable routines for extracting, consolidating, and pruning information from interaction traces.
Inspired by the design philosophy of agent skills, MemSkill employs a controller that learns to select a small set of relevant skills, paired with an LLM-based executor that produces skill-guided memories.
Beyond learning skill selection, MemSkill introduces a designer that periodically reviews hard cases where selected skills yield incorrect or incomplete memories, and evolves the skill set by proposing refinements and new skills.
Together, MemSkill forms a closed-loop procedure that improves both the skill-selection policy and the skill set itself.
Experiments on LoCoMo, LongMemEval, HotpotQA, and ALFWorld demonstrate that MemSkill improves task performance over strong baselines and generalizes well across settings.
Further analyses shed light on how skills evolve, offering insights toward more adaptive, self-evolving memory management for LLM agents.
Code is available at https://github.com/ViktorAxelsen/MemSkill
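
The designer described above closes the loop: it periodically reviews hard cases where the selected skills produced incorrect or incomplete memories and proposes refinements or new skills. The snippet below is a hedged Python sketch of what such a review-and-evolve step could look like, reusing the hypothetical names from the sketch under the abstract; it is not the procedure implemented in the repository, and the "REVISE:/NEW:" protocol and hard-case format are illustrative assumptions.

```python
# Hypothetical sketch of the designer's periodic review-and-evolve step.
# Skill mirrors the placeholder dataclass from the earlier sketch.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Skill:
    name: str
    instruction: str
    score: float = 0.0


def evolve_skills(skills: List[Skill],
                  hard_cases: List[Dict[str, object]],
                  call_llm: Callable[[str], str]) -> List[Skill]:
    """Designer: inspect failures, then refine an existing skill or add a new one."""
    for case in hard_cases:
        failed_skill = case["skill"]          # the Skill that produced a bad memory
        prompt = (
            "A memory skill produced an incorrect or incomplete memory.\n"
            f"Skill instruction: {failed_skill.instruction}\n"
            f"Trace excerpt: {case['trace']}\n"
            f"Observed problem: {case['error']}\n\n"
            "Either rewrite the skill instruction or propose a new skill.\n"
            "Answer as 'REVISE: <new instruction>' or 'NEW: <name> | <instruction>'."
        )
        answer = call_llm(prompt).strip()
        if answer.startswith("REVISE:"):
            failed_skill.instruction = answer[len("REVISE:"):].strip()
        elif answer.startswith("NEW:"):
            name, _, instruction = answer[len("NEW:"):].partition("|")
            skills.append(Skill(name=name.strip(), instruction=instruction.strip()))
    return skills
```

Feeding the evolved skill set back into the controller's selection step is what makes the procedure closed-loop: both the selection policy and the skill library improve over time.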

arXiv explained breakdown of this paper 👉 https://arxivexplained.com/papers/memskill-learning-and-evolving-memory-skills-for-self-evolving-agents

arXivLens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/memskill-learning-and-evolving-memory-skills-for-self-evolving-agents-2094-8dae2b13

  • Executive Summary
  • Detailed Breakdown
  • Practical Applications

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

  • ProcMEM: Learning Reusable Procedural Memory from Experience via Non-Parametric PPO for LLM Agents (2026) https://huggingface.co/papers/2602.01869
  • Live-Evo: Online Evolution of Agentic Memory from Continuous Feedback (2026) https://huggingface.co/papers/2602.02369
  • MemEvolve: Meta-Evolution of Agent Memory Systems (2025) https://huggingface.co/papers/2512.18746
  • Agentic Memory: Learning Unified Long-Term and Short-Term Memory Management for Large Language Model Agents (2026) https://huggingface.co/papers/2601.01885
  • MemWeaver: Weaving Hybrid Memories for Traceable Long-Horizon Agentic Reasoning (2026) https://huggingface.co/papers/2601.18204
  • AtomMem: Learnable Dynamic Agentic Memory with Atomic Memory Operation (2026) https://huggingface.co/papers/2601.08323
  • AMA: Adaptive Memory via Multi-Agent Collaboration (2026) https://huggingface.co/papers/2601.20352

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face checkout this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2602.02474 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2602.02474 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2602.02474 in a Space README.md to link it from this page.

Collections including this paper 6