arxiv:2506.05336

VideoMolmo: Spatio-Temporal Grounding Meets Pointing

Published on Jun 5, 2025 · Submitted by Ahmed Heakl on Jun 18, 2025
Authors: Ghazi Shazan Ahmad, Ahmed Heakl, Hanan Gani, Abdelrahman Shaker, Zhiqiang Shen, Ranjay Krishna, Fahad Shahbaz Khan, Salman Khan

Abstract

Spatio-temporal localization is vital for precise interactions across diverse domains, from biological research to autonomous navigation and interactive interfaces. Current video-based approaches, while proficient in tracking, lack the sophisticated reasoning capabilities of large language models, limiting their contextual understanding and generalization. We introduce VideoMolmo, a large multimodal model tailored for fine-grained spatio-temporal pointing conditioned on textual descriptions. Building upon the Molmo architecture, VideoMolmo incorporates a temporal module utilizing an attention mechanism to condition each frame on preceding frames, ensuring temporal consistency. Additionally, our novel temporal mask fusion pipeline employs SAM2 for bidirectional point propagation, significantly enhancing coherence across video sequences. This two-step decomposition, i.e., first using the LLM to generate precise pointing coordinates, then relying on a sequential mask-fusion module to produce coherent segmentation, not only simplifies the task for the language model but also enhances interpretability. Due to the lack of suitable datasets, we curate a comprehensive dataset comprising 72k video-caption pairs annotated with 100k object points. To evaluate the generalization of VideoMolmo, we introduce VPoS-Bench, a challenging out-of-distribution benchmark spanning five real-world scenarios: Cell Tracking, Egocentric Vision, Autonomous Driving, Video-GUI Interaction, and Robotics. We also evaluate our model on Referring Video Object Segmentation (Refer-VOS) and Reasoning VOS tasks. In comparison to existing models, VideoMolmo substantially improves spatio-temporal pointing accuracy and reasoning capability. Our code and models are publicly available at https://github.com/mbzuai-oryx/VideoMolmo.

AI-generated summary

VideoMolmo, a multimodal model incorporating a temporal attention mechanism and SAM2 for mask fusion, enhances spatio-temporal pointing accuracy and reasoning capabilities in diverse real-world scenarios.
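
The two-step decomposition described in the abstract (point first with the language model, then segment) can be summarized as a small inference loop. The sketch below is only illustrative: `pointing_model` and `mask_model` are hypothetical wrappers, and their method names are placeholders rather than the released VideoMolmo or SAM2 API.

```python
# Minimal sketch of the two-step decomposition: step 1 predicts pointing
# coordinates with the LLM, step 2 turns the points into temporally consistent
# masks. The wrappers and method names here are hypothetical placeholders.

def ground_video(frames, expression, pointing_model, mask_model, history=4):
    points_per_frame = []
    for t, frame in enumerate(frames):
        # The pointer is conditioned on the referring expression and on the
        # preceding frames (an assumed history of 4 frames for illustration).
        context = frames[max(0, t - history):t]
        points = pointing_model.predict_points(frame, expression, context)
        points_per_frame.append(points)

    # Points are then converted into masks and fused across time (e.g., with
    # SAM2) so the segmentation stays coherent over the whole clip.
    masks = mask_model.points_to_masks(frames, points_per_frame)
    return points_per_frame, masks
```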

Community

Paper author Paper submitter

VideoMolmo is a large multimodal model tailored for fine-grained spatio-temporal pointing conditioned on textual descriptions. Building upon the Molmo architecture, VideoMolmo incorporates a temporal module utilizing an attention mechanism to condition each frame on preceding frames, ensuring temporal consistency. Additionally, our novel temporal mask fusion pipeline employs SAM2 for bidirectional point propagation, significantly enhancing coherence across video sequences. This two-step decomposition, i.e., first using the LLM to generate precise pointing coordinates, then relying on a sequential mask-fusion module to produce coherent segmentation, not only simplifies the task for the language model but also enhances interpretability. Due to the lack of suitable datasets, we curate a comprehensive dataset comprising 72k video-caption pairs annotated with 100k object points. To evaluate the generalization of VideoMolmo, we introduce VPoS-Bench, a challenging out-of-distribution benchmark spanning five real-world scenarios: Cell Tracking, Egocentric Vision, Autonomous Driving, Video-GUI Interaction, and Robotics. We also evaluate our model on Referring Video Object Segmentation (Refer-VOS) and Reasoning VOS tasks. In comparison to existing models, VideoMolmo substantially improves spatio-temporal pointing accuracy and reasoning capability.
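
To make the bidirectional point-propagation step more concrete, here is a rough Python sketch: the prompted keyframe is propagated both forward and backward through the clip, and the two passes are merged per frame. The `propagate_masks` callable stands in for a SAM2-style video predictor, and the union-based merge is only one possible fusion rule; the paper's exact strategy may differ.

```python
# Rough sketch of bidirectional temporal mask fusion. `propagate_masks` is a
# placeholder for a SAM2-style video predictor that returns a dictionary
# {frame_index: boolean mask}; its signature and the merge rule are assumptions.

def bidirectional_mask_fusion(frames, keyframe_idx, keyframe_points, propagate_masks):
    # Forward pass: propagate the keyframe prompt toward later frames.
    forward = propagate_masks(frames, keyframe_idx, keyframe_points, reverse=False)
    # Backward pass: propagate the same prompt toward earlier frames.
    backward = propagate_masks(frames, keyframe_idx, keyframe_points, reverse=True)

    fused = {}
    for t in range(len(frames)):
        f, b = forward.get(t), backward.get(t)
        if f is not None and b is not None:
            fused[t] = f | b  # merge the two passes (one possible rule)
        else:
            fused[t] = f if f is not None else b
    return fused
```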

Paper author Paper submitter

image.png
Given complex referring expressions in natural language, VideoMolmo demonstrates improved spatio-temporal reasoning in visual grounding. By decomposing the visual grounding task into sequential steps, pointing (denoted by a star) followed by mask generation (shown in red), VideoMolmo produces more accurate and coherent segmentation masks than prior approaches.

Paper author Paper submitter
edited Jun 18, 2025

image.png
VideoMolmo Architecture. The visual encoder extracts multi-crop features from the current frame and the past l frames. These temporal features provide contextual cues and are processed by the Temporal Module M via multi-head cross-attention, where the query comes from the current frame, and key and value from the mean of previous frames. The output is fused with the original features to enrich temporal cues while preserving the spatial details of the current frame. The combined visual-textual representations are then passed to the LLM to predict grounded points. These points are converted into masks using our Bidirectional Temporal Mask Fusion module, ensuring temporally consistent pixel-level grounding.
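
As a reading aid, the following is a minimal PyTorch sketch of the cross-attention pattern the caption describes: the query comes from the current frame's tokens, the key and value from the mean of the past frames' tokens, and the attended output is fused back into the current features. The hidden size, head count, and residual-plus-LayerNorm fusion are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TemporalModule(nn.Module):
    """Illustrative temporal cross-attention block (dimensions are assumed)."""

    def __init__(self, dim: int = 1024, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, curr: torch.Tensor, prev: torch.Tensor) -> torch.Tensor:
        # curr: (B, N, D) visual tokens of the current frame
        # prev: (B, L, N, D) visual tokens of the past L frames
        prev_mean = prev.mean(dim=1)  # (B, N, D): average over the L past frames
        attended, _ = self.attn(query=curr, key=prev_mean, value=prev_mean)
        # Fuse temporal cues while preserving the current frame's spatial details.
        return self.norm(curr + attended)

# Usage (shapes only):
# module = TemporalModule(dim=1024)
# out = module(torch.randn(2, 576, 1024), torch.randn(2, 4, 576, 1024))
```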

Paper author Paper submitter

image.png
VideoMolmo annotation pipeline: we construct point-level supervision from frame-level masks using a semi-automatic process. For each frame, k points are sampled on the mask and passed to SAM2 to generate candidate masks. The point whose candidate mask has the highest IoU with the ground truth is selected as the optimal annotation.
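
A small Python sketch of this selection step: sample k points inside the ground-truth mask, query a SAM2-style predictor once per point, and keep the point whose candidate mask scores the highest IoU against the ground truth. `predict_mask_from_point` is a hypothetical callable standing in for the SAM2 call.

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    # Intersection-over-union between two boolean masks.
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union > 0 else 0.0

def select_best_point(gt_mask: np.ndarray, predict_mask_from_point, k: int = 10, seed: int = 0):
    """Pick the sampled point whose candidate mask best matches the ground truth."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(gt_mask)  # all pixels inside the ground-truth mask
    idx = rng.choice(len(xs), size=min(k, len(xs)), replace=False)

    best_point, best_iou = None, -1.0
    for i in idx:
        point = (int(xs[i]), int(ys[i]))            # (x, y) point prompt
        candidate = predict_mask_from_point(point)  # candidate mask from SAM2 (placeholder call)
        score = iou(candidate, gt_mask)
        if score > best_iou:
            best_point, best_iou = point, score
    return best_point, best_iou
```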

Paper author Paper submitter

More examples:
https://cdn-uploads.huggingface.co/production/uploads/656864e12d73834278a8dea7/VoUcOufMSsfeQBfp8p0DE.mp4
https://cdn-uploads.huggingface.co/production/uploads/656864e12d73834278a8dea7/1se6ooua7zEHAtWWxteVO.mp4
https://cdn-uploads.huggingface.co/production/uploads/656864e12d73834278a8dea7/nQulJI_QLDHsylN3CIcqP.mp4
https://cdn-uploads.huggingface.co/production/uploads/656864e12d73834278a8dea7/ajUDS6J5YpRHrkzakJ2BE.mp4
https://cdn-uploads.huggingface.co/production/uploads/656864e12d73834278a8dea7/4nhABjJlDwCy51PViTGIV.mp4


Models citing this paper 1

Datasets citing this paper 1

Spaces citing this paper 0


Collections including this paper 3