arxiv:2412.14922

RobustFT: Robust Supervised Fine-tuning for Large Language Models under Noisy Response

Published on Dec 19, 2024 · Submitted by junyu on Dec 24, 2024
#1 Paper of the day
Authors: Junyu Luo, Xiao Luo, Kaize Ding, Jingyang Yuan, Zhiping Xiao, Ming Zhang
Abstract

AI-generated summary: A noise-robust supervised fine-tuning framework enhances large language models' performance in downstream tasks by detecting and relabeling noisy data.

Supervised fine-tuning (SFT) plays a crucial role in adapting large language models (LLMs) to specific domains or tasks. However, as demonstrated by empirical experiments, the collected data inevitably contains noise in practical applications, which poses significant challenges to model performance on downstream tasks. Therefore, there is an urgent need for a noise-robust SFT framework to enhance model capabilities in downstream tasks. To address this challenge, we introduce a robust SFT framework (RobustFT) that performs noise detection and relabeling on downstream task data. For noise identification, our approach employs a multi-expert collaborative system with inference-enhanced models to achieve superior noise detection. In the denoising phase, we utilize a context-enhanced strategy, which incorporates the most relevant and confident knowledge followed by careful assessment to generate reliable annotations. Additionally, we introduce an effective data selection mechanism based on response entropy, ensuring only high-quality samples are retained for fine-tuning. Extensive experiments conducted on multiple LLMs across five datasets demonstrate RobustFT's exceptional performance in noisy scenarios.
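
To make the response-entropy selection step concrete, here is a minimal sketch in Python. This is an illustration under assumptions, not the authors' implementation: the helper names, the use of mean negative log-probability as the entropy-style confidence score, and the threshold value are all hypothetical. The intuition is that a model assigning high probability to every token of a relabeled response is more likely to have produced a clean annotation.

```python
import math

def response_confidence_score(token_probs):
    """Mean negative log-probability of the generated tokens, an
    entropy-style confidence proxy (the paper's exact definition may
    differ). Lower values indicate a more confident response."""
    if not token_probs:
        return float("inf")
    return -sum(math.log(p) for p in token_probs if p > 0) / len(token_probs)

def select_confident(samples, threshold=0.5):
    """Retain only samples whose score falls below `threshold`."""
    return [s for s in samples
            if response_confidence_score(s["token_probs"]) < threshold]

# Toy example: one confident and one uncertain relabeled sample.
samples = [
    {"answer": "B", "token_probs": [0.97, 0.99, 0.95]},  # score ~0.03, kept
    {"answer": "C", "token_probs": [0.40, 0.35, 0.50]},  # score ~0.89, dropped
]
print(select_confident(samples))  # keeps only the confident sample
```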

Community

Paper author · Paper submitter

Hi there! Today we introduce RobustFT, a noise-robust supervised fine-tuning framework designed to enhance the performance of LLMs in the presence of noisy training data. Supervised fine-tuning (SFT) is essential for adapting LLMs to specific domains, but noisy training data can significantly degrade model performance. RobustFT addresses this challenge through:

  • Multi-expert collaborative noise detection (see the sketch after this list)
  • Context-enhanced relabeling strategy
  • Response entropy-based data selection

Our code is available at https://github.com/luo-junyu/RobustFT
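
As a rough illustration of the multi-expert detection idea, the sketch below flags a sample as noisy when a consensus of expert models agrees on an answer that differs from the dataset label. This is a hedged sketch, not the paper's method: `flag_noisy`, the voting rule, and the toy experts are all hypothetical stand-ins for the collaborative system described in the paper.

```python
from collections import Counter
from typing import Callable, List

def flag_noisy(question: str, dataset_label: str,
               experts: List[Callable[[str], str]],
               min_agreement: int = 2) -> bool:
    """Flag a sample as noisy when at least `min_agreement` experts
    converge on an answer that differs from the dataset label.
    `experts` are hypothetical callables (e.g., wrapped LLM queries)."""
    votes = Counter(expert(question) for expert in experts)
    answer, count = votes.most_common(1)[0]
    return count >= min_agreement and answer != dataset_label

# Toy experts returning fixed answers, for illustration only.
experts = [lambda q: "B", lambda q: "B", lambda q: "A"]
print(flag_noisy("Which option is correct?", "A", experts))
# True: the expert consensus "B" disagrees with the label "A".
```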

The framework:
[Figure: RobustFT framework overview]

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper: 0
Datasets citing this paper: 1
Spaces citing this paper: 0
Collections including this paper: 18