Paper page - Interpreting Emergent Planning in Model-Free Reinforcement Learning

\n","updatedAt":"2025-04-04T08:30:29.829Z","author":{"_id":"65d0c00b0954f06e472909f4","avatarUrl":"/avatars/7dd76d922b781ed9895c7f4e62fefd9c.svg","fullname":"tom bush","name":"tuphs","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.5867885947227478},"editors":["tuphs"],"editorAvatarUrls":["/avatars/7dd76d922b781ed9895c7f4e62fefd9c.svg"],"reactions":[],"isReport":false}},{"id":"67f088bce413fbfbd390fae2","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2025-04-05T01:34:52.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [A Survey of In-Context Reinforcement Learning](https://huggingface.co/papers/2502.07978) (2025)\n* [Learning Symbolic Task Decompositions for Multi-Agent Teams](https://huggingface.co/papers/2502.13376) (2025)\n* [Planning with affordances: Integrating learned affordance models and symbolic planning](https://huggingface.co/papers/2502.02768) (2025)\n* [Distilling Reinforcement Learning Algorithms for In-Context Model-Based Planning](https://huggingface.co/papers/2502.19009) (2025)\n* [Autotelic Reinforcement Learning: Exploring Intrinsic Motivations for Skill Acquisition in Open-Ended Environments](https://huggingface.co/papers/2502.04418) (2025)\n* [Synthesizing world models for bilevel planning](https://huggingface.co/papers/2503.20124) (2025)\n* [Towards Causal Model-Based Policy Optimization](https://huggingface.co/papers/2503.09719) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

\n

The following papers were recommended by the Semantic Scholar API

\n\n

Please give a thumbs up to this comment if you found it helpful!

\n

If you want recommendations for any Paper on Hugging Face checkout this Space

\n

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend

\n","updatedAt":"2025-04-05T01:34:52.276Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7443541884422302},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2504.01871","authors":[{"_id":"67eea9e5117231f8bb04402b","user":{"_id":"65d0c00b0954f06e472909f4","avatarUrl":"/avatars/7dd76d922b781ed9895c7f4e62fefd9c.svg","isPro":false,"fullname":"tom bush","user":"tuphs","type":"user"},"name":"Thomas Bush","status":"claimed_verified","statusLastChangedAt":"2025-04-03T19:20:34.085Z","hidden":false},{"_id":"67eea9e5117231f8bb04402c","user":{"_id":"67cef8b7d9f3ce4930069e10","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/67cef8b7d9f3ce4930069e10/1o2rYNrmLPozUDaUo_nVE.png","isPro":false,"fullname":"Sephen Chung","user":"stephenchungmh","type":"user"},"name":"Stephen Chung","status":"claimed_verified","statusLastChangedAt":"2025-10-16T10:39:11.686Z","hidden":false},{"_id":"67eea9e5117231f8bb04402d","name":"Usman Anwar","hidden":false},{"_id":"67eea9e5117231f8bb04402e","user":{"_id":"645ecd18f0f92653b9f33d4e","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/645ecd18f0f92653b9f33d4e/nHDMWtM9ZHrji0c4Y4XW1.jpeg","isPro":false,"fullname":"Adrià Garriga-Alonso","user":"agaralon","type":"user"},"name":"Adrià Garriga-Alonso","status":"extracted_pending","statusLastChangedAt":"2025-04-03T15:31:53.577Z","hidden":false},{"_id":"67eea9e5117231f8bb04402f","name":"David Krueger","hidden":false}],"publishedAt":"2025-04-02T16:24:23.000Z","submittedOnDailyAt":"2025-04-04T07:00:29.802Z","title":"Interpreting Emergent Planning in Model-Free Reinforcement Learning","submittedOnDailyBy":{"_id":"65d0c00b0954f06e472909f4","avatarUrl":"/avatars/7dd76d922b781ed9895c7f4e62fefd9c.svg","isPro":false,"fullname":"tom bush","user":"tuphs","type":"user"},"summary":"We present the first mechanistic evidence that model-free reinforcement\nlearning agents can learn to plan. This is achieved by applying a methodology\nbased on concept-based interpretability to a model-free agent in Sokoban -- a\ncommonly used benchmark for studying planning. Specifically, we demonstrate\nthat DRC, a generic model-free agent introduced by Guez et al. (2019), uses\nlearned concept representations to internally formulate plans that both predict\nthe long-term effects of actions on the environment and influence action\nselection. Our methodology involves: (1) probing for planning-relevant\nconcepts, (2) investigating plan formation within the agent's representations,\nand (3) verifying that discovered plans (in the agent's representations) have a\ncausal effect on the agent's behavior through interventions. We also show that\nthe emergence of these plans coincides with the emergence of a planning-like\nproperty: the ability to benefit from additional test-time compute. Finally, we\nperform a qualitative analysis of the planning algorithm learned by the agent\nand discover a strong resemblance to parallelized bidirectional search. 
Our\nfindings advance understanding of the internal mechanisms underlying planning\nbehavior in agents, which is important given the recent trend of emergent\nplanning and reasoning capabilities in LLMs through RL","upvotes":12,"discussionId":"67eea9e9117231f8bb044167","projectPage":"https://tuphs28.github.io/projects/interpplanning/","ai_summary":"Model-free reinforcement learning agents can learn to plan using concept-based interpretability, showing long-term action prediction and influence on behavior, resembling parallelized bidirectional search.","ai_keywords":["model-free reinforcement learning","planners","concept-based interpretability","Sokoban","DRC","concept representations","plan formation","causal effects","planning-like property","emergent planning","reasoning capabilities","LLMs","RL"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"65d0c00b0954f06e472909f4","avatarUrl":"/avatars/7dd76d922b781ed9895c7f4e62fefd9c.svg","isPro":false,"fullname":"tom bush","user":"tuphs","type":"user"},{"_id":"648eb1eb59c4e5c87dc116e0","avatarUrl":"/avatars/c636cea39c2c0937f01398c94ead5dad.svg","isPro":false,"fullname":"fdsqefsgergd","user":"T-representer","type":"user"},{"_id":"621ff334fa5492893dc03d82","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/621ff334fa5492893dc03d82/EAIr-l3O4OeM10f1boLux.jpeg","isPro":false,"fullname":"Xabier de Zuazo","user":"zuazo","type":"user"},{"_id":"65c20ee58aedd6edd2b89000","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/65c20ee58aedd6edd2b89000/LtS4YTbmxiCFqHSGHfdC8.png","isPro":false,"fullname":"Chmielewski","user":"Eryk-Chmielewski","type":"user"},{"_id":"67cef8b7d9f3ce4930069e10","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/67cef8b7d9f3ce4930069e10/1o2rYNrmLPozUDaUo_nVE.png","isPro":false,"fullname":"Sephen Chung","user":"stephenchungmh","type":"user"},{"_id":"674c11d135e938b05b7ccae1","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/674c11d135e938b05b7ccae1/TAgsgVTxwbXrEj-KyVjKn.png","isPro":false,"fullname":"Lambda Go","user":"lambda-technologies-limited","type":"user"},{"_id":"665b133508d536a8ac804f7d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/Uwi0OnANdTbRbHHQvGqvR.png","isPro":false,"fullname":"Paulson","user":"Pnaomi","type":"user"},{"_id":"6516eed5fff98a48b2a3552e","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6516eed5fff98a48b2a3552e/dDdlwnffAUFG3o-Zq0Oz9.png","isPro":false,"fullname":"BryanBradfo","user":"BryanBradfo","type":"user"},{"_id":"651c80a26ba9ab9b9582c273","avatarUrl":"/avatars/e963452eafd21f517d800f2e58e0f918.svg","isPro":false,"fullname":"siyeng feng","user":"siyengfeng","type":"user"},{"_id":"64d98ef7a4839890b25eb78b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64d98ef7a4839890b25eb78b/215-CSVLl81z6CAq0ECWU.jpeg","isPro":true,"fullname":"Fangyuan Yu","user":"Ksgk-fy","type":"user"},{"_id":"64bbe9b236eb058cd9d6a5b9","avatarUrl":"/avatars/c7c01a3fa8809e73800392679abff6d5.svg","isPro":false,"fullname":"Kai Zuberbühler","user":"kaizuberbuehler","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
arxiv:2504.01871

Interpreting Emergent Planning in Model-Free Reinforcement Learning

Published on Apr 2, 2025 · Submitted by tom bush on Apr 4, 2025

Authors: Thomas Bush, Stephen Chung, Usman Anwar, Adrià Garriga-Alonso, David Krueger

AI-generated summary

Concept-based interpretability reveals that model-free reinforcement learning agents can learn to plan: the agent's learned internal representations predict the long-term effects of actions, causally influence action selection, and implement an algorithm resembling parallelized bidirectional search.

Abstract

We present the first mechanistic evidence that model-free reinforcement learning agents can learn to plan. This is achieved by applying a methodology based on concept-based interpretability to a model-free agent in Sokoban -- a commonly used benchmark for studying planning. Specifically, we demonstrate that DRC, a generic model-free agent introduced by Guez et al. (2019), uses learned concept representations to internally formulate plans that both predict the long-term effects of actions on the environment and influence action selection. Our methodology involves: (1) probing for planning-relevant concepts, (2) investigating plan formation within the agent's representations, and (3) verifying that discovered plans (in the agent's representations) have a causal effect on the agent's behavior through interventions. We also show that the emergence of these plans coincides with the emergence of a planning-like property: the ability to benefit from additional test-time compute. Finally, we perform a qualitative analysis of the planning algorithm learned by the agent and discover a strong resemblance to parallelized bidirectional search. Our findings advance understanding of the internal mechanisms underlying planning behavior in agents, which is important given the recent trend of emergent planning and reasoning capabilities in LLMs through RL.
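The probe-and-intervene methodology lends itself to a compact illustration. The sketch below is hypothetical Python, not the paper's code: it trains a linear probe on stand-in agent activations (step 1) and then steers a toy policy head along the probe direction to mimic an activation intervention (step 3). All shapes, variable names, and the random data are assumptions made for illustration.

```python
# Hypothetical sketch of the probe-and-intervene methodology. Stand-in data
# and a toy policy head are used; nothing here is the paper's released code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, d_hidden, n_classes, n_actions = 5000, 128, 5, 4

# (1) Probe for a planning-relevant concept, e.g. the direction a box will be
# pushed off a Sokoban square (classes 0-3) or "no push" (class 4).
activations = rng.normal(size=(n_samples, d_hidden))   # stand-in hidden states
concept_labels = rng.integers(0, n_classes, size=n_samples)

split = n_samples // 2
probe = LogisticRegression(max_iter=1000)
probe.fit(activations[:split], concept_labels[:split])
# High held-out accuracy relative to a control baseline would be evidence the
# concept is linearly decodable; random data gives chance-level (~0.2) here.
print("probe accuracy:", probe.score(activations[split:], concept_labels[split:]))

# (3) Intervene: push a hidden state along the probe direction for one concept
# and check whether the action distribution shifts in the predicted way.
W_policy = rng.normal(size=(n_actions, d_hidden))      # toy action head

def policy(h):
    logits = W_policy @ h
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

direction = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
h = rng.normal(size=d_hidden)
h_edited = h + 5.0 * direction                         # intervention strength 5.0
print("before:", np.round(policy(h), 3))
print("after: ", np.round(policy(h_edited), 3))
```

In the paper itself the probes are trained on the DRC agent's recurrent activations over Sokoban boards and the interventions modify those activations mid-episode; this sketch only mirrors the structure of that argument.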

Community

Paper author · Paper submitter

https://x.com/_tom_bush/status/1907778475043266776
Blog post: https://tuphs28.github.io/projects/interpplanning/

Librarian Bot (Bot)
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* A Survey of In-Context Reinforcement Learning (2025): https://huggingface.co/papers/2502.07978
* Learning Symbolic Task Decompositions for Multi-Agent Teams (2025): https://huggingface.co/papers/2502.13376
* Planning with affordances: Integrating learned affordance models and symbolic planning (2025): https://huggingface.co/papers/2502.02768
* Distilling Reinforcement Learning Algorithms for In-Context Model-Based Planning (2025): https://huggingface.co/papers/2502.19009
* Autotelic Reinforcement Learning: Exploring Intrinsic Motivations for Skill Acquisition in Open-Ended Environments (2025): https://huggingface.co/papers/2502.04418
* Synthesizing world models for bilevel planning (2025): https://huggingface.co/papers/2503.20124
* Towards Causal Model-Based Policy Optimization (2025): https://huggingface.co/papers/2503.09719

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2504.01871 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2504.01871 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2504.01871 in a Space README.md to link it from this page.

Collections including this paper 4