Papers
arxiv:2403.12968

LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression

Published on Mar 19, 2024 · Submitted by AK on Mar 20, 2024
#2 Paper of the day
Authors: Zhuoshi Pan, Qianhui Wu, Huiqiang Jiang, Menglin Xia, Xufang Luo, Jue Zhang, Qingwei Lin, Victor Rühle, Yuqing Yang, Chin-Yew Lin, H. Vicky Zhao, Lili Qiu, Dongmei Zhang

Abstract

A task-agnostic prompt compression method uses data distillation and a Transformer encoder to achieve efficient compression with minimal information loss and improved generalization.

AI-generated summary

This paper focuses on task-agnostic prompt compression for better generalizability and efficiency. Considering the redundancy in natural language, existing approaches compress prompts by removing tokens or lexical units according to their information entropy obtained from a causal language model such as LLaMa-7B. The challenge is that information entropy may be a suboptimal compression metric: (i) it only leverages unidirectional context and may fail to capture all essential information needed for prompt compression; (ii) it is not aligned with the prompt compression objective. To address these issues, we propose a data distillation procedure to derive knowledge from an LLM to compress prompts without losing crucial information, and, in the meantime, introduce an extractive text compression dataset. We formulate prompt compression as a token classification problem to guarantee the faithfulness of the compressed prompt to the original one, and use a Transformer encoder as the base architecture to capture all essential information for prompt compression from the full bidirectional context. Our approach leads to lower latency by explicitly learning the compression objective with smaller models such as XLM-RoBERTa-large and mBERT. We evaluate our method on both in-domain and out-of-domain datasets, including MeetingBank, LongBench, ZeroScrolls, GSM8K, and BBH. Despite its small size, our model shows significant performance gains over strong baselines and demonstrates robust generalization ability across different LLMs. Additionally, our model is 3x-6x faster than existing prompt compression methods, while accelerating the end-to-end latency by 1.6x-2.9x with compression ratios of 2x-5x.
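To make the token-classification formulation concrete, here is a minimal sketch in which a bidirectional encoder scores every token with a "preserve" probability and the compressed prompt keeps the top-scoring tokens up to a target ratio. It uses the Hugging Face transformers API; the checkpoint name and the assumption that label index 1 means "preserve" are illustrative guesses based on the released models, not the paper's exact pipeline.

```python
# Illustrative sketch of prompt compression as token classification
# (not the authors' exact pipeline; checkpoint name and label index are assumptions).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "microsoft/llmlingua-2-xlm-roberta-large-meetingbank"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

def compress(prompt: str, keep_ratio: float = 0.5) -> str:
    """Keep the tokens with the highest 'preserve' probability, in their original order."""
    enc = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits            # shape: (1, seq_len, num_labels)
    probs = logits.softmax(dim=-1)[0, :, 1]     # assume label 1 == "preserve"
    ids = enc["input_ids"][0]
    special = set(tokenizer.all_special_ids)
    scored = [(probs[i].item(), i) for i in range(len(ids)) if ids[i].item() not in special]
    k = max(1, int(len(scored) * keep_ratio))
    kept = sorted(sorted(scored, reverse=True)[:k], key=lambda x: x[1])  # restore order
    return tokenizer.decode([ids[i].item() for _, i in kept])

print(compress("The quarterly meeting covered budget allocations for the new data center project."))
```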

Community

Paper author

Welcome to LLMLingua-2, a small yet powerful prompt compression method trained via data distillation from GPT-4 for token classification with a BERT-level encoder. It excels at task-agnostic compression, surpasses LLMLingua on out-of-domain data, and runs 3x-6x faster.

website: https://llmlingua.com/llmlingua2.html
code: https://github.com/microsoft/LLMLingua
demo: https://huggingface.co/spaces/microsoft/llmlingua-2
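
For readers who want to try it, the sketch below follows the usage pattern in the repository's README (pip install llmlingua). The class name PromptCompressor, the use_llmlingua2 flag, and the rate / force_tokens arguments are taken from that README as of this writing and may change between releases, so treat them as assumptions rather than a stable API.

```python
# Minimal usage sketch based on the LLMLingua README (pip install llmlingua).
# Model name and keyword arguments are assumptions taken from the README
# and may differ in newer releases.
from llmlingua import PromptCompressor

compressor = PromptCompressor(
    model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
    use_llmlingua2=True,  # enable the LLMLingua-2 token-classification compressor
)

prompt = "..."  # the long context you want to compress
result = compressor.compress_prompt(
    prompt,
    rate=0.33,                  # keep roughly one third of the tokens
    force_tokens=["\n", "?"],   # tokens that should never be dropped
)
print(result["compressed_prompt"])
```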

Nice paper! Are you sharing the compression dataset?

Β·

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* Learning to Compress Prompt in Natural Language Formats (2024): https://huggingface.co/papers/2402.18700
* Say More with Less: Understanding Prompt Learning Behaviors through Gist Compression (2024): https://huggingface.co/papers/2402.16058
* PromptKD: Distilling Student-Friendly Knowledge for Generative Language Models via Prompt Tuning (2024): https://huggingface.co/papers/2402.12842
* BGE Landmark Embedding: A Chunking-Free Embedding Method For Retrieval Augmented Long-Context Large Language Models (2024): https://huggingface.co/papers/2402.11573
* Identifying Factual Inconsistency in Summaries: Towards Effective Utilization of Large Language Model (2024): https://huggingface.co/papers/2402.12821

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

@iofu728, I'm really excited about the release of the dataset! Do you have an ETA?

Β·
Paper author

Hi @derek-thomas, thanks for your support. We have released the datasets at https://huggingface.co/datasets/microsoft/MeetingBank-LLMCompressed and https://huggingface.co/datasets/microsoft/MeetingBank-QA-Summary.

Models citing this paper 2

Datasets citing this paper 4

Spaces citing this paper 29

Collections including this paper 9