Paper page - TokenTrim: Inference-Time Token Pruning for Autoregressive Long Video Generation

Project page: https://arielshaulov.github.io/TokenTrim/
Open-source code 🥳: https://github.com/arielshaulov/TokenTrim

Papers
arxiv:2602.00268

TokenTrim: Inference-Time Token Pruning for Autoregressive Long Video Generation

Published on Jan 30 · Submitted by Ariel Shaulov on Feb 11
Authors: Ariel Shaulov, Eitan Shaar, Amit Edenzon, Lior Wolf
Abstract

Auto-regressive video generation suffers from temporal drift caused by error accumulation in latent conditioning tokens; identifying and removing unstable tokens during inference improves long-horizon consistency.

AI-generated summary

Auto-regressive video generation enables long video synthesis by iteratively conditioning each new batch of frames on previously generated content. However, recent work has shown that such pipelines suffer from severe temporal drift, where errors accumulate and amplify over long horizons. We hypothesize that this drift does not primarily stem from insufficient model capacity, but rather from inference-time error propagation. Specifically, we contend that drift arises from the uncontrolled reuse of corrupted latent conditioning tokens during auto-regressive inference. To correct this accumulation of errors, we propose a simple inference-time method that mitigates temporal drift by identifying and removing unstable latent tokens before they are reused for conditioning. For this purpose, we define unstable tokens as latent tokens whose representations deviate significantly from those of the previously generated batch, indicating potential corruption or semantic drift. By explicitly removing corrupted latent tokens from the auto-regressive context, rather than modifying entire spatial regions or model parameters, our method prevents unreliable latent information from influencing future generation steps. As a result, it significantly improves long-horizon temporal consistency without modifying the model architecture or training procedure, and without leaving the latent space.
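The core idea in the abstract — flag latent tokens whose representations deviate sharply from the previous batch, and drop them from the conditioning context — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the cosine-distance criterion and the `threshold` value are assumptions, and the paper may use a different deviation measure or selection rule.

```python
import torch


def trim_unstable_tokens(prev_latents: torch.Tensor,
                         curr_latents: torch.Tensor,
                         threshold: float = 0.35) -> torch.Tensor:
    """Drop latent tokens that drift too far from the previous batch
    before they are reused as autoregressive conditioning context.

    prev_latents, curr_latents: (num_tokens, dim) latents at
    corresponding positions in two consecutive batches.
    threshold: cosine-distance cutoff (hypothetical value).
    """
    # Cosine distance between corresponding tokens in the two batches.
    sim = torch.nn.functional.cosine_similarity(prev_latents, curr_latents, dim=-1)
    distance = 1.0 - sim              # (num_tokens,); 0 = identical direction
    stable = distance <= threshold    # boolean mask of tokens to keep
    # Only stable tokens enter the conditioning context for the next step.
    return curr_latents[stable]
```

In an autoregressive loop, this filter would run once per generated batch, so corrupted tokens are pruned before they can propagate errors into subsequent frames.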

Community


This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* PackCache: A Training-Free Acceleration Method for Unified Autoregressive Video Generation via Compact KV-Cache (https://huggingface.co/papers/2601.04359) (2026)
* Past- and Future-Informed KV Cache Policy with Salience Estimation in Autoregressive Video Diffusion (https://huggingface.co/papers/2601.21896) (2026)
* Fast Autoregressive Video Diffusion and World Models with Temporal Cache Compression and Sparse Attention (https://huggingface.co/papers/2602.01801) (2026)
* Knot Forcing: Taming Autoregressive Video Diffusion Models for Real-time Infinite Interactive Portrait Animation (https://huggingface.co/papers/2512.21734) (2025)
* Pathwise Test-Time Correction for Autoregressive Long Video Generation (https://huggingface.co/papers/2602.05871) (2026)
* StableWorld: Towards Stable and Consistent Long Interactive Video Generation (https://huggingface.co/papers/2601.15281) (2026)
* End-to-End Training for Autoregressive Video Diffusion via Self-Resampling (https://huggingface.co/papers/2512.15702) (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2602.00268 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2602.00268 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2602.00268 in a Space README.md to link it from this page.

Collections including this paper 0

No Collection including this paper

Add this paper to a collection to link it from this page.