arXiv:2511.04670

Cambrian-S: Towards Spatial Supersensing in Video

Published on Nov 6, 2025 · Submitted by taesiri on Nov 7, 2025
Authors: Shusheng Yang, Jihan Yang, Pinzhi Huang, Ellis Brown, Zihao Yang, Yue Yu, Shengbang Tong, Zihan Zheng, Yifan Xu, Muhan Wang, Daohan Lu, Rob Fergus, Yann LeCun, Li Fei-Fei, Saining Xie

Project page: https://cambrian-mllm.github.io/ · Code: https://github.com/cambrian-mllm/cambrian-s

AI-generated summary

Progress in multimodal intelligence requires a shift to supersensing, including semantic perception, event cognition, spatial cognition, and predictive modeling, demonstrated through VSI-SUPER benchmarks and a self-supervised predictive sensing approach.

Abstract

We argue that progress in true multimodal intelligence calls for a shift from reactive, task-driven systems and brute-force long context towards a broader paradigm of supersensing. We frame spatial supersensing as four stages beyond linguistic-only understanding: semantic perception (naming what is seen), streaming event cognition (maintaining memory across continuous experiences), implicit 3D spatial cognition (inferring the world behind pixels), and predictive world modeling (creating internal models that filter and organize information). Current benchmarks largely test only the early stages, offering narrow coverage of spatial cognition and rarely challenging models in ways that require true world modeling. To drive progress in spatial supersensing, we present VSI-SUPER, a two-part benchmark: VSR (long-horizon visual spatial recall) and VSC (continual visual spatial counting). These tasks require arbitrarily long video inputs yet are resistant to brute-force context expansion. We then test data scaling limits by curating VSI-590K and training Cambrian-S, achieving +30% absolute improvement on VSI-Bench without sacrificing general capabilities. Yet performance on VSI-SUPER remains limited, indicating that scale alone is insufficient for spatial supersensing. We propose predictive sensing as a path forward, presenting a proof-of-concept in which a self-supervised next-latent-frame predictor leverages surprise (prediction error) to drive memory and event segmentation. On VSI-SUPER, this approach substantially outperforms leading proprietary baselines, showing that spatial supersensing requires models that not only see but also anticipate, select, and organize experience.
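To make the predictive-sensing idea concrete, here is a minimal, illustrative sketch of how prediction error ("surprise") from a next-latent-frame predictor could gate memory writes and event boundaries. The predictor, threshold, and synthetic data below are stand-in assumptions for illustration only, not the paper's architecture or code.

```python
# Hedged sketch (not the paper's implementation): shows how a next-latent-frame
# predictor's error ("surprise") could gate memory consolidation and event
# segmentation. The linear predictor, threshold, and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 64
SURPRISE_THRESHOLD = 2.0   # assumed: boundary when error exceeds 2x a running baseline

class LatentPredictor:
    """Toy next-latent-frame predictor (stands in for a self-supervised model)."""
    def __init__(self, dim):
        self.W = np.eye(dim)          # identity init: predict "no change"

    def predict(self, z_prev):
        return self.W @ z_prev

    def update(self, z_prev, z_true, lr=1e-3):
        # one SGD step on squared prediction error
        err = self.predict(z_prev) - z_true
        self.W -= lr * np.outer(err, z_prev)

def segment_stream(latents):
    """Return event boundary indices and a compressed memory (one latent per event)."""
    predictor = LatentPredictor(LATENT_DIM)
    boundaries, memory = [], []
    baseline = 1.0                     # running estimate of typical surprise
    for t in range(1, len(latents)):
        z_prev, z_t = latents[t - 1], latents[t]
        surprise = float(np.linalg.norm(predictor.predict(z_prev) - z_t))
        if surprise > SURPRISE_THRESHOLD * baseline:
            boundaries.append(t)       # high surprise -> new event, consolidate memory
            memory.append(z_prev)
        baseline = 0.99 * baseline + 0.01 * surprise
        predictor.update(z_prev, z_t)
    return boundaries, memory

# Synthetic stream: three "events" with a latent shift between them.
stream = np.concatenate([
    rng.normal(loc=m, scale=0.1, size=(100, LATENT_DIM)) for m in (0.0, 3.0, -2.0)
])
bounds, mem = segment_stream(stream)
print("detected boundaries:", bounds, "memory slots:", len(mem))
```

The point of the sketch is the design choice: memory grows with the number of detected events rather than with the number of frames, which is what lets a system of this kind handle arbitrarily long streams instead of relying on brute-force context expansion.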

Community

Paper submitter

We argue that progress in true multimodal intelligence calls for a shift from reactive, task-driven systems and brute-force long context towards a broader paradigm of supersensing. We frame spatial supersensing as four stages beyond linguistic-only understanding: semantic perception (naming what is seen), streaming event cognition (maintaining memory across continuous experiences), implicit 3D spatial cognition (inferring the world behind pixels), and predictive world modeling (creating internal models that filter and organize information). Current benchmarks largely test only the early stages, offering narrow coverage of spatial cognition and rarely challenging models in ways that require true world modeling. To drive progress in spatial supersensing, we present VSI-SUPER, a two-part benchmark: VSR (long-horizon visual spatial recall) and VSC (continual visual spatial counting). These tasks require arbitrarily long video inputs yet are resistant to brute-force context expansion. We then test data scaling limits by curating VSI-590K and training Cambrian-S, achieving +30% absolute improvement on VSI-Bench without sacrificing general capabilities. Yet performance on VSI-SUPER remains limited, indicating that scale alone is insufficient for spatial supersensing. We propose predictive sensing as a path forward, presenting a proof-of-concept in which a self-supervised next-latent-frame predictor leverages surprise (prediction error) to drive memory and event segmentation. On VSI-SUPER, this approach substantially outperforms leading proprietary baselines, showing that spatial supersensing requires models that not only see but also anticipate, select, and organize experience.
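As a heavily simplified illustration of the continual counting setting that VSC targets, the sketch below assumes object-instance detections arrive as a stream and keeps cumulative per-category counts with memory that grows with the number of distinct instances, not with video length. All names and the data format are hypothetical; the actual benchmark protocol is defined in the paper.

```python
# Hedged sketch (assumptions, not the benchmark's code): a VSC-style streaming
# setup in which observations arrive over an arbitrarily long video and the
# system must maintain cumulative per-category counts without re-reading an
# ever-growing context window.
from collections import Counter
from typing import Iterable, Tuple

def stream_counts(detections: Iterable[Tuple[int, str, str]]) -> Counter:
    """detections yields (frame_idx, instance_id, category); memory is O(#instances seen)."""
    seen_instances = set()
    counts = Counter()
    for _, instance_id, category in detections:
        if instance_id not in seen_instances:   # count each instance once across the stream
            seen_instances.add(instance_id)
            counts[category] += 1
    return counts

# Toy stream spanning two "rooms"; the same chair re-appears and must not be double-counted.
toy_stream = [
    (0, "chair_1", "chair"), (1, "chair_1", "chair"), (2, "table_1", "table"),
    (900, "chair_2", "chair"), (901, "plant_1", "plant"),
]
print(stream_counts(toy_stream))   # Counter({'chair': 2, 'table': 1, 'plant': 1})
```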

Amazing!

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

Real world challenge leads to really performant spatial understanding!

arXiv explained breakdown of this paper 👉 https://arxivexplained.com/papers/cambrian-s-towards-spatial-supersensing-in-video


Models citing this paper 4

Datasets citing this paper 4

Spaces citing this paper 0


Collections including this paper 4