PositionID: LLMs can Control Lengths, Copy and Paste with Explicit Positional Awareness
Comment by librarian-bot (2024-10-15):

This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* [Ruler: A Model-Agnostic Method to Control Generated Length for Large Language Models](https://huggingface.co/papers/2409.18943) (2024)
* [Control Large Language Models via Divide and Conquer](https://huggingface.co/papers/2410.04628) (2024)
* [SAG: Style-Aligned Article Generation via Model Collaboration](https://huggingface.co/papers/2410.03137) (2024)
* [Integrating Planning into Single-Turn Long-Form Text Generation](https://huggingface.co/papers/2410.06203) (2024)
* [PersoBench: Benchmarking Personalized Response Generation in Large Language Models](https://huggingface.co/papers/2410.03198) (2024)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out [this Space](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers).

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
Authors: Zekun Wang, Feiyu Duan, Yibo Zhang, Wangchunshu Zhou, Ke Xu, Wenhao Huang, Jie Fu

Published: October 9, 2024 (arXiv 2410.07035) · Submitted to Daily Papers by Zekun Moore Wang (ZenMoore) · Upvotes: 17
AI-generated summary

PositionID Prompting, PositionID Fine-Tuning, and PositionID CP Prompting improve length control and copy-paste accuracy in LLMs without degrading response quality.

Abstract
Large Language Models (LLMs) demonstrate impressive capabilities across various domains, including role-playing, creative writing, mathematical reasoning, and coding. Despite these advancements, LLMs still encounter challenges with length control, frequently failing to adhere to specific length constraints due to their token-level operations and insufficient training on data with strict length limitations. We identify this issue as stemming from a lack of positional awareness and propose novel approaches, PositionID Prompting and PositionID Fine-Tuning, to address it. These methods enhance the model's ability to continuously monitor and manage text length during generation. Additionally, we introduce PositionID CP Prompting to enable LLMs to perform copy and paste operations accurately. Furthermore, we develop two benchmarks for evaluating length control and copy-paste abilities. Our experiments demonstrate that our methods significantly improve the model's adherence to length constraints and copy-paste accuracy without compromising response quality.
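The abstract's core idea, explicit positional awareness, can be illustrated with a minimal sketch. Everything below is a hypothetical illustration and not the authors' implementation: each word is tagged with a running position ID so that length can be tracked explicitly and spans can be copied by index, and the IDs are stripped from the final output. The function names and the `[n]` annotation format are assumptions for demonstration only.

```python
import re


def add_position_ids(text: str) -> str:
    """Append a 1-based position ID after each word, e.g. 'the [1] cat [2]'.

    With such annotations in the prompt or the generation, a model can read
    off the current length directly instead of counting tokens implicitly.
    """
    words = text.split()
    return " ".join(f"{w} [{i}]" for i, w in enumerate(words, start=1))


def strip_position_ids(text: str) -> str:
    """Remove the '[n]' annotations to recover the plain text."""
    return re.sub(r"\s*\[\d+\]", "", text)


def copy_span(annotated: str, start: int, end: int) -> str:
    """Copy the words whose position IDs fall in [start, end] inclusive,
    mimicking position-based copy-paste on annotated text."""
    pairs = re.findall(r"(\S+) \[(\d+)\]", annotated)
    return " ".join(w for w, i in pairs if start <= int(i) <= end)


demo = add_position_ids("the quick brown fox jumps")
# demo == "the [1] quick [2] brown [3] fox [4] jumps [5]"
```

A model fine-tuned or prompted to emit such annotations always "knows" how many words it has produced, which is the intuition behind the length-control and copy-paste gains the paper reports.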