Paper page - Distilling Feedback into Memory-as-a-Tool

Code: https://github.com/vicgalle/feedback-memory-as-a-tool
Data: https://huggingface.co/datasets/vicgalle/rubric-feedback-bench

\n","updatedAt":"2026-01-12T08:07:03.453Z","author":{"_id":"5fad8602b8423e1d80b8a965","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/5fad8602b8423e1d80b8a965/tRqTwcZmrGka8c1vFq2wX.jpeg","fullname":"Victor Gallego","name":"vicgalle","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":141,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.5257311463356018},"editors":["vicgalle"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/5fad8602b8423e1d80b8a965/tRqTwcZmrGka8c1vFq2wX.jpeg"],"reactions":[],"isReport":false}},{"id":"6965a16e44f950f64bed840b","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2026-01-13T01:35:42.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [DeepCode: Open Agentic Coding](https://huggingface.co/papers/2512.07921) (2025)\n* [Prompt Repetition Improves Non-Reasoning LLMs](https://huggingface.co/papers/2512.14982) (2025)\n* [In-Context Distillation with Self-Consistency Cascades: A Simple, Training-Free Way to Reduce LLM Agent Costs](https://huggingface.co/papers/2512.02543) (2025)\n* [DeepSynth-Eval: Objectively Evaluating Information Consolidation in Deep Survey Writing](https://huggingface.co/papers/2601.03540) (2026)\n* [Recursive Language Models](https://huggingface.co/papers/2512.24601) (2025)\n* [From Failure to Mastery: Generating Hard Samples for Tool-use Agents](https://huggingface.co/papers/2601.01498) (2026)\n* [The Instruction Gap: LLMs get lost in Following Instruction](https://huggingface.co/papers/2601.03269) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

\n

The following papers were recommended by the Semantic Scholar API

\n\n

Please give a thumbs up to this comment if you found it helpful!

\n

If you want recommendations for any Paper on Hugging Face checkout this Space

\n

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend

\n","updatedAt":"2026-01-13T01:35:42.522Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.743635356426239},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}},{"id":"696b8f29357a40707525ad49","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false},"createdAt":"2026-01-17T13:31:21.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"arXivlens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/distilling-feedback-into-memory-as-a-tool-2387-dcc9aa3f\n\n- Executive Summary\n- Detailed Breakdown\n- Practical Applications","html":"

arXivlens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/distilling-feedback-into-memory-as-a-tool-2387-dcc9aa3f

\n
    \n
  • Executive Summary
  • \n
  • Detailed Breakdown
  • \n
  • Practical Applications
  • \n
\n","updatedAt":"2026-01-17T13:31:21.675Z","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6354650259017944},"editors":["avahal"],"editorAvatarUrls":["/avatars/743a009681d5d554c27e04300db9f267.svg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2601.05960","authors":[{"_id":"6964aa85138cc47cbd7653f1","user":{"_id":"5fad8602b8423e1d80b8a965","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/5fad8602b8423e1d80b8a965/tRqTwcZmrGka8c1vFq2wX.jpeg","isPro":false,"fullname":"Victor Gallego","user":"vicgalle","type":"user"},"name":"Víctor Gallego","status":"claimed_verified","statusLastChangedAt":"2026-01-12T10:33:24.342Z","hidden":false}],"publishedAt":"2026-01-09T17:26:52.000Z","submittedOnDailyAt":"2026-01-12T05:37:03.445Z","title":"Distilling Feedback into Memory-as-a-Tool","submittedOnDailyBy":{"_id":"5fad8602b8423e1d80b8a965","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/5fad8602b8423e1d80b8a965/tRqTwcZmrGka8c1vFq2wX.jpeg","isPro":false,"fullname":"Victor Gallego","user":"vicgalle","type":"user"},"summary":"We propose a framework that amortizes the cost of inference-time reasoning by converting transient critiques into retrievable guidelines, through a file-based memory system and agent-controlled tool calls. We evaluate this method on the Rubric Feedback Bench, a novel dataset for rubric-based learning. Experiments demonstrate that our augmented LLMs rapidly match the performance of test-time refinement pipelines while drastically reducing inference cost.","upvotes":3,"discussionId":"6964aa86138cc47cbd7653f2","githubRepo":"https://github.com/vicgalle/feedback-memory-as-a-tool","githubRepoAddedBy":"user","ai_summary":"A framework converts transient critiques into retrievable guidelines using a file-based memory system and agent-controlled tool calls, enabling LLMs to match test-time refinement performance with reduced inference costs.","ai_keywords":["LLMs","test-time refinement","inference-time reasoning","file-based memory system","agent-controlled tool calls","Rubric Feedback Bench","rubric-based learning"],"githubStars":3},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"5fad8602b8423e1d80b8a965","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/5fad8602b8423e1d80b8a965/tRqTwcZmrGka8c1vFq2wX.jpeg","isPro":false,"fullname":"Victor Gallego","user":"vicgalle","type":"user"},{"_id":"686db5d4af2b856fabbf13aa","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/6BjMv2LVNoqvbX8fQSTPI.png","isPro":false,"fullname":"V bbbb","user":"Bbbbbnnn","type":"user"},{"_id":"64834b399b352597e41816ac","avatarUrl":"/avatars/63d9d123bffa90f43186a0bdc4455cbd.svg","isPro":false,"fullname":"Shaobai Jiang","user":"shaobaij","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
Papers
arxiv:2601.05960

Distilling Feedback into Memory-as-a-Tool

Published on Jan 9 · Submitted by Victor Gallego on Jan 12

Abstract

A framework converts transient critiques into retrievable guidelines using a file-based memory system and agent-controlled tool calls, enabling LLMs to match test-time refinement performance with reduced inference costs.

AI-generated summary

We propose a framework that amortizes the cost of inference-time reasoning by converting transient critiques into retrievable guidelines, through a file-based memory system and agent-controlled tool calls. We evaluate this method on the Rubric Feedback Bench, a novel dataset for rubric-based learning. Experiments demonstrate that our augmented LLMs rapidly match the performance of test-time refinement pipelines while drastically reducing inference cost.
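The abstract describes distilling transient critiques into retrievable guidelines held in a file-based memory that the agent queries via tool calls. Below is a minimal, hypothetical Python sketch of that idea; the names (FeedbackMemory, add_guideline, retrieve) and the JSONL storage format are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of "memory-as-a-tool": critiques produced at inference
# time are distilled into short guidelines, persisted to a plain file, and
# exposed to the agent as a retrieval tool it can call on later prompts.
import json
from pathlib import Path


class FeedbackMemory:
    """File-backed store of guidelines distilled from past critiques."""

    def __init__(self, path: str = "guidelines.jsonl") -> None:
        self.path = Path(path)
        self.path.touch(exist_ok=True)

    def add_guideline(self, topic: str, guideline: str) -> None:
        # Persist a distilled guideline so later calls can reuse it
        # instead of re-running a costly refinement pipeline.
        with self.path.open("a", encoding="utf-8") as f:
            f.write(json.dumps({"topic": topic, "guideline": guideline}) + "\n")

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Naive keyword-overlap retrieval; in the paper the agent itself
        # decides when to invoke this tool during generation.
        entries = [
            json.loads(line)
            for line in self.path.read_text(encoding="utf-8").splitlines()
            if line.strip()
        ]
        scored = sorted(
            entries,
            key=lambda e: sum(w in e["topic"].lower() for w in query.lower().split()),
            reverse=True,
        )
        return [e["guideline"] for e in scored[:k]]


# Example: a critique from a rubric-based judge is distilled once,
# then retrieved on future, similar prompts.
memory = FeedbackMemory()
memory.add_guideline("summaries", "State the main claim in the first sentence.")
print(memory.retrieve("how to write summaries"))
```

The key design point in the abstract is amortization: the cost of generating a critique is paid once, and the resulting guideline is reused across future inferences through cheap tool calls rather than repeated test-time refinement.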

Community

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

  • DeepCode: Open Agentic Coding (2025) - https://huggingface.co/papers/2512.07921
  • Prompt Repetition Improves Non-Reasoning LLMs (2025) - https://huggingface.co/papers/2512.14982
  • In-Context Distillation with Self-Consistency Cascades: A Simple, Training-Free Way to Reduce LLM Agent Costs (2025) - https://huggingface.co/papers/2512.02543
  • DeepSynth-Eval: Objectively Evaluating Information Consolidation in Deep Survey Writing (2026) - https://huggingface.co/papers/2601.03540
  • Recursive Language Models (2025) - https://huggingface.co/papers/2512.24601
  • From Failure to Mastery: Generating Hard Samples for Tool-use Agents (2026) - https://huggingface.co/papers/2601.01498
  • The Instruction Gap: LLMs get lost in Following Instruction (2025) - https://huggingface.co/papers/2601.03269

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

arXivlens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/distilling-feedback-into-memory-as-a-tool-2387-dcc9aa3f

  • Executive Summary
  • Detailed Breakdown
  • Practical Applications


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2601.05960 in a model README.md to link it from this page.

Datasets citing this paper 1

vicgalle/rubric-feedback-bench

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2601.05960 in a Space README.md to link it from this page.

Collections including this paper 1