Zeyue","user":"xzyhku","type":"user"},{"_id":"64bbfcf6afd1e46c55ec67d3","avatarUrl":"/avatars/ce685b7ccf4fc6d6c2ae6e539ffdea85.svg","isPro":false,"fullname":"Yang Yang","user":"YangYangGirl","type":"user"},{"_id":"64b76660f92b20f7a37c3df7","avatarUrl":"/avatars/40158717bb9370f1e5d0ed156a6fed1f.svg","isPro":false,"fullname":"HaohaiSun","user":"HaohaiSun","type":"user"},{"_id":"63b908d0e3c78740d8e950d0","avatarUrl":"/avatars/3e80075e92aebdfea712f70b00d5ec7d.svg","isPro":true,"fullname":"Yuxuan Zhang","user":"Reacherx","type":"user"},{"_id":"6478679d7b370854241b2ad8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6478679d7b370854241b2ad8/dBczWYYdfEt9tQcnVGhQk.jpeg","isPro":false,"fullname":"xiangan","user":"xiangan","type":"user"},{"_id":"659765e22235d4056ba80c0a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/659765e22235d4056ba80c0a/dATESmijLO3CpD1sCMezg.jpeg","isPro":true,"fullname":"Gao Sensen","user":"Sensen02","type":"user"},{"_id":"67f5e63688b2c5303ab5be7a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/u7B43H2QhY6Eby8wCEw_o.png","isPro":false,"fullname":"Chengxuan Qian","user":"Raymond-Qiancx","type":"user"},{"_id":"6683a05e74fb1736a4b7c934","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6683a05e74fb1736a4b7c934/eiz6qlqIUjAWGy5zfg8Cs.jpeg","isPro":false,"fullname":"QRQ","user":"RichardQRQ","type":"user"},{"_id":"6505a02f9310ce8c400edc63","avatarUrl":"/avatars/bbf781594fc8c812316711aa8e2797aa.svg","isPro":false,"fullname":"Fangfu Liu","user":"Liuff23","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">Cambrian-S: Towards Spatial Supersensing in Video
Abstract
Progress in multimodal intelligence requires a shift to supersensing, including semantic perception, event cognition, spatial cognition, and predictive modeling, demonstrated through VSI-SUPER benchmarks and a self-supervised predictive sensing approach.
We argue that progress in true multimodal intelligence calls for a shift from reactive, task-driven systems and brute-force long context towards a broader paradigm of supersensing. We frame spatial supersensing as four stages beyond linguistic-only understanding: semantic perception (naming what is seen), streaming event cognition (maintaining memory across continuous experiences), implicit 3D spatial cognition (inferring the world behind pixels), and predictive world modeling (creating internal models that filter and organize information). Current benchmarks largely test only the early stages, offering narrow coverage of spatial cognition and rarely challenging models in ways that require true world modeling. To drive progress in spatial supersensing, we present VSI-SUPER, a two-part benchmark: VSR (long-horizon visual spatial recall) and VSC (continual visual spatial counting). These tasks require arbitrarily long video inputs yet are resistant to brute-force context expansion. We then test data scaling limits by curating VSI-590K and training Cambrian-S, achieving +30% absolute improvement on VSI-Bench without sacrificing general capabilities. Yet performance on VSI-SUPER remains limited, indicating that scale alone is insufficient for spatial supersensing. We propose predictive sensing as a path forward, presenting a proof-of-concept in which a self-supervised next-latent-frame predictor leverages surprise (prediction error) to drive memory and event segmentation. On VSI-SUPER, this approach substantially outperforms leading proprietary baselines, showing that spatial supersensing requires models that not only see but also anticipate, select, and organize experience.
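The abstract describes predictive sensing: a next-latent-frame predictor whose surprise (prediction error) gates what to remember and where events begin. The sketch below is a minimal illustration of that idea, not the paper's implementation; the EMA predictor, the fixed threshold, the 8-dimensional synthetic latents, and the function name `surprise_driven_memory` are assumptions made purely for the example.

```python
# Toy sketch of surprise-driven memory and event segmentation
# (illustrative only -- not the Cambrian-S implementation).
import numpy as np

def surprise_driven_memory(frame_latents, threshold=1.0, momentum=0.8):
    """Segment a stream of frame latents into events and keep a compact memory.

    A running estimate of the next latent (an exponential moving average)
    stands in for a learned next-latent-frame predictor. Frames whose
    prediction error ("surprise") exceeds `threshold` open a new event and
    are written to memory; predictable frames extend the current event.
    """
    prediction = None
    memory, events, current_event = [], [], []
    for t, z in enumerate(frame_latents):
        surprise = np.inf if prediction is None else float(np.linalg.norm(z - prediction))
        if surprise > threshold:
            # Unexpected content: close the current event and remember this frame.
            if current_event:
                events.append(current_event)
            current_event = [t]
            memory.append(z)
            prediction = z  # reset the toy predictor at the event boundary
        else:
            # Predictable frame: fold it into the current event, refine the prediction.
            current_event.append(t)
            prediction = momentum * prediction + (1.0 - momentum) * z
    if current_event:
        events.append(current_event)
    return events, memory

# Synthetic stream: 20 frames of one "scene", then an abrupt change of scene.
rng = np.random.default_rng(0)
scene_a = rng.normal(0.0, 0.05, size=(20, 8))
scene_b = rng.normal(3.0, 0.05, size=(20, 8))
events, memory = surprise_driven_memory(np.vstack([scene_a, scene_b]), threshold=1.0)
print(len(events), "events;", len(memory), "latents retained")  # 2 events; 2 latents
```

In the paper's framing a learned self-supervised predictor and video latents would take the place of the EMA and the synthetic vectors here; the point of the sketch is only that high-surprise frames become event boundaries and memory entries, while predictable frames are compressed away.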
Community
Amazing!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SpatialLadder: Progressive Training for Spatial Reasoning in Vision-Language Models (2025)
- VideoChat-R1.5: Visual Test-Time Scaling to Reinforce Multimodal Reasoning by Iterative Perception (2025)
- Multimodal Spatial Reasoning in the Large Model Era: A Survey and Benchmarks (2025)
- Open-o3 Video: Grounded Video Reasoning with Explicit Spatio-Temporal Evidence (2025)
- TrackVLA++: Unleashing Reasoning and Memory Capabilities in VLA Models for Embodied Visual Tracking (2025)
- TRAVL: A Recipe for Making Video-Language Models Better Judges of Physics Implausibility (2025)
- Can World Models Benefit VLMs for World Dynamics? (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot recommend
Real-world challenges lead to really performant spatial understanding!
arXiv explained breakdown of this paper 👉 https://arxivexplained.com/papers/cambrian-s-towards-spatial-supersensing-in-video
Models citing this paper 4
Datasets citing this paper 4
Spaces citing this paper 0