Paper2Rebuttal: A Multi-Agent Framework for Transparent Author Response Assistance
Qianli Ma, Chang Guo, Zhiheng Tian, Siyu Wang, Jipeng Xiao, Yuanhao Yue, Zhipeng Zhang
Abstract
RebuttalAgent is a multi-agent framework that reframes rebuttal generation as an evidence-centric planning task, improving coverage, faithfulness, and strategic coherence in academic peer review.
Writing effective rebuttals is a high-stakes task that demands more than linguistic fluency: it requires precise alignment between reviewer intent and manuscript details. Current solutions typically treat this as a direct-to-text generation problem and suffer from hallucination, overlooked critiques, and a lack of verifiable grounding. To address these limitations, we introduce RebuttalAgent, the first multi-agent framework that reframes rebuttal generation as an evidence-centric planning task. Our system decomposes complex feedback into atomic concerns and dynamically constructs hybrid contexts by synthesizing compressed summaries with high-fidelity text, while integrating an autonomous, on-demand external search module to resolve concerns that require outside literature. By generating an inspectable response plan before drafting, RebuttalAgent ensures that every argument is explicitly anchored in internal or external evidence. We validate our approach on the proposed RebuttalBench and demonstrate that our pipeline outperforms strong baselines in coverage, faithfulness, and strategic coherence, offering a transparent and controllable assistant for the peer review process. Code will be released.
Community
RebuttalAgent is an AI-powered multi-agent system that helps researchers craft high-quality rebuttals for academic paper reviews. The system analyzes reviewer comments, searches relevant literature, generates rebuttal strategies, and produces formal rebuttal letters, all through an interactive human-in-the-loop workflow.
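The plan-then-draft workflow described above can be sketched in a few lines of Python. This is a minimal illustrative mock, not the authors' implementation: the decomposition, retrieval, and drafting steps are stand-in stubs (the real system uses LLM agents and an external search module), and all names here (`Concern`, `PlanItem`, `build_plan`, `draft_rebuttal`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Concern:
    """One atomic reviewer concern extracted from a raw review."""
    text: str
    needs_external_evidence: bool = False

@dataclass
class PlanItem:
    """One entry of the inspectable response plan: a concern plus its evidence anchor."""
    concern: Concern
    evidence: str

def decompose_review(review: str) -> list[Concern]:
    # Stub: split on sentence boundaries; the real system would use an LLM agent
    # to isolate atomic concerns from free-form reviewer feedback.
    return [Concern(s.strip()) for s in review.split(".") if s.strip()]

def gather_evidence(concern: Concern, manuscript: str) -> str:
    # Stub retrieval: ground in the manuscript by default, and fall back to an
    # (imagined) external literature search only when the concern demands it.
    if concern.needs_external_evidence:
        return f"[external] literature supporting: {concern.text!r}"
    return f"[internal] manuscript excerpt relevant to: {concern.text!r}"

def build_plan(review: str, manuscript: str) -> list[PlanItem]:
    # The plan is built (and can be inspected or edited by the author) before
    # any prose is drafted.
    return [PlanItem(c, gather_evidence(c, manuscript)) for c in decompose_review(review)]

def draft_rebuttal(plan: list[PlanItem]) -> str:
    # Drafting consumes only the plan, so every argument cites its evidence anchor.
    return "\n".join(
        f"Concern: {p.concern.text}\nResponse (grounded in {p.evidence})" for p in plan
    )
```

The key design point this sketch mirrors is the ordering: evidence-anchored planning happens first and produces an artifact a human can review, and text generation is a separate final step that consumes the plan.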
📄 Paper: https://arxiv.org/abs/2601.14171
🖥️ Project Page: https://mqleet.github.io/Paper2Rebuttal_ProjectPage/
💻 Code: https://github.com/AutoLab-SAI-SJTU/Paper2Rebuttal
🤗 Hugging Face Space: https://huggingface.co/spaces/Mqleet/RebuttalAgent
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- DeepSynth-Eval: Objectively Evaluating Information Consolidation in Deep Survey Writing (2026)
- DeepResearchEval: An Automated Framework for Deep Research Task Construction and Agentic Evaluation (2026)
- LimAgents: Multi-Agent LLMs for Generating Research Limitations (2025)
- Mind2Report: A Cognitive Deep Research Agent for Expert-Level Commercial Report Synthesis (2026)
- IDRBench: Interactive Deep Research Benchmark (2026)
- RhinoInsight: Improving Deep Research through Control Mechanisms for Model Behavior and Context (2025)
- Agent-as-a-Judge (2026)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`