Paper page - StreamingVLM: Real-Time Understanding for Infinite Video Streams

arxiv:2510.09608

StreamingVLM: Real-Time Understanding for Infinite Video Streams

Published on Oct 10, 2025 · Submitted by taesiri on Oct 13, 2025
Authors: Ruyi Xu, Guangxuan Xiao, Yukang Chen, Liuning He, Kelly Peng, Yao Lu, Song Han
Abstract

StreamingVLM is a real-time vision-language model that efficiently processes infinite video streams using a compact KV cache and supervised fine-tuning, achieving high performance on long videos and diverse benchmarks.

AI-generated summary

Vision-language models (VLMs) could power real-time assistants and autonomous agents, but they face a critical challenge: understanding near-infinite video streams without escalating latency and memory usage. Processing entire videos with full attention leads to quadratic computational costs and poor performance on long videos. Meanwhile, simple sliding window methods are also flawed, as they either break coherence or suffer from high latency due to redundant recomputation. In this paper, we introduce StreamingVLM, a model designed for real-time, stable understanding of infinite visual input. Our approach is a unified framework that aligns training with streaming inference. During inference, we maintain a compact KV cache by reusing states of attention sinks, a short window of recent vision tokens, and a long window of recent text tokens. This streaming ability is instilled via a simple supervised fine-tuning (SFT) strategy that applies full attention on short, overlapped video chunks, which effectively mimics the inference-time attention pattern without training on prohibitively long contexts. For evaluation, we build Inf-Streams-Eval, a new benchmark with videos averaging over two hours that requires dense, per-second alignment between frames and text. On Inf-Streams-Eval, StreamingVLM achieves a 66.18% win rate against GPT-4o mini and maintains stable, real-time performance at up to 8 FPS on a single NVIDIA H100. Notably, our SFT strategy also enhances general VQA abilities without any VQA-specific fine-tuning, improving performance on LongVideoBench by +4.30 and OVOBench Realtime by +5.96. Code is available at https://github.com/mit-han-lab/streaming-vlm.
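The cache policy described in the abstract (permanent attention sinks plus a short window of recent vision tokens and a longer window of recent text tokens) can be pictured with a small sketch. This is a minimal, hypothetical mock-up, not the paper's implementation: the class and parameter names (`StreamingKVCache`, `num_sinks`, `vision_window`, `text_window`) and the window sizes are assumptions for illustration only.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class CacheEntry:
    position: int   # token position in the stream
    modality: str   # "vision" or "text"
    kv: object      # placeholder for this token's key/value states

class StreamingKVCache:
    """Toy eviction policy: keep attention-sink tokens forever, plus a short
    window of recent vision tokens and a longer window of recent text tokens.
    Hypothetical names and sizes; the real StreamingVLM cache layout may differ."""

    def __init__(self, num_sinks=4, vision_window=512, text_window=2048):
        self.num_sinks = num_sinks
        self.sinks = []                            # first few tokens, never evicted
        self.vision = deque(maxlen=vision_window)  # recent vision tokens only
        self.text = deque(maxlen=text_window)      # text survives longer for coherence

    def append(self, entry: CacheEntry):
        if len(self.sinks) < self.num_sinks:
            self.sinks.append(entry)               # reuse attention-sink states
        elif entry.modality == "vision":
            self.vision.append(entry)              # oldest vision tokens drop out first
        else:
            self.text.append(entry)

    def visible_tokens(self):
        # Tokens the next decoding step attends to, in stream order.
        return sorted(self.sinks + list(self.vision) + list(self.text),
                      key=lambda e: e.position)

# Example: stream 10,000 interleaved tokens; the cache stays bounded.
cache = StreamingKVCache()
for pos in range(10_000):
    modality = "vision" if pos % 5 else "text"
    cache.append(CacheEntry(position=pos, modality=modality, kv=None))
print(len(cache.visible_tokens()))  # <= num_sinks + vision_window + text_window
```

The point of the sketch is that memory stays constant regardless of stream length, while text context is retained longer than vision context to preserve coherence.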

Community

Paper submitter

StreamingVLM enables real-time, stable understanding of effectively infinite video by keeping a compact KV cache and aligning training with streaming inference. It avoids quadratic attention cost and sliding-window pitfalls, runs at up to 8 FPS on a single H100, and achieves a 66.18% win rate against GPT-4o mini on a new long-video benchmark. It also boosts general VQA performance without task-specific fine-tuning.
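For intuition about the training side, here is a small sketch of how a long video could be cut into short, overlapped chunks so that full attention during SFT approximates the streaming attention pattern. The function name and the chunk/overlap lengths are illustrative assumptions, not the paper's actual settings.

```python
def make_overlapped_chunks(num_frames, chunk_len=240, overlap=60):
    """Slice a long frame stream into short, overlapping chunks for SFT.
    Full attention is applied within each chunk, so the model repeatedly sees
    a 'recent window plus carried-over context' pattern similar to streaming.
    chunk_len and overlap are illustrative values only."""
    step = chunk_len - overlap
    chunks = []
    start = 0
    while start < num_frames:
        end = min(start + chunk_len, num_frames)
        chunks.append((start, end))
        if end == num_frames:
            break
        start += step
    return chunks

# Example: a 2-hour video at 1 FPS (7200 frames) becomes 40 short chunks,
# each sharing 60 frames with its predecessor.
print(make_overlapped_chunks(7200)[:3])  # [(0, 240), (180, 420), (360, 600)]
print(len(make_overlapped_chunks(7200)))  # 40
```

Training on such chunks keeps sequence lengths short while still exposing the model to the overlap it will rely on at inference time.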

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

@librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2510.09608 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2510.09608 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2510.09608 in a Space README.md to link it from this page.

Collections including this paper 9