
\n","updatedAt":"2025-03-12T12:54:57.980Z","author":{"_id":"643815c4961bb61e463c5896","avatarUrl":"/avatars/3b44592472f16c56105bff8c314d9939.svg","fullname":"Jianxiong Gao","name":"Jianxiong","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":4,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.8141717314720154},"editors":["Jianxiong"],"editorAvatarUrls":["/avatars/3b44592472f16c56105bff8c314d9939.svg"],"reactions":[],"isReport":false}},{"id":"67d2362ec5c8af10496b6433","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2025-03-13T01:34:38.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. 
\n\nThe following papers were recommended by the Semantic Scholar API \n\n* [A Survey of fMRI to Image Reconstruction](https://huggingface.co/papers/2502.16861) (2025)\n* [A Survey on Bridging EEG Signals and Generative AI: From Image and Text to Beyond](https://huggingface.co/papers/2502.12048) (2025)\n* [Large Cognition Model: Towards Pretrained EEG Foundation Model](https://huggingface.co/papers/2502.17464) (2025)\n* [MagicInfinite: Generating Infinite Talking Videos with Your Words and Voice](https://huggingface.co/papers/2503.05978) (2025)\n* [MindSimulator: Exploring Brain Concept Localization via Synthetic FMRI](https://huggingface.co/papers/2503.02351) (2025)\n* [BP-GPT: Auditory Neural Decoding Using fMRI-prompted LLM](https://huggingface.co/papers/2502.15172) (2025)\n* [BrainGuard: Privacy-Preserving Multisubject Image Reconstructions from Brain Activities](https://huggingface.co/papers/2501.14309) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

\n

The following papers were recommended by the Semantic Scholar API

\n\n

Please give a thumbs up to this comment if you found it helpful!

\n

If you want recommendations for any Paper on Hugging Face checkout this Space

\n

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend

\n","updatedAt":"2025-03-13T01:34:38.809Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7237528562545776},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2503.06940","authors":[{"_id":"67d1382885d9baaf658a4b55","user":{"_id":"643815c4961bb61e463c5896","avatarUrl":"/avatars/3b44592472f16c56105bff8c314d9939.svg","isPro":false,"fullname":"Jianxiong Gao","user":"Jianxiong","type":"user"},"name":"Jianxiong Gao","status":"claimed_verified","statusLastChangedAt":"2025-03-12T14:25:40.647Z","hidden":false},{"_id":"67d1382885d9baaf658a4b56","name":"Yichang Liu","hidden":false},{"_id":"67d1382885d9baaf658a4b57","name":"Baofeng Yang","hidden":false},{"_id":"67d1382885d9baaf658a4b58","name":"Jianfeng Feng","hidden":false},{"_id":"67d1382885d9baaf658a4b59","name":"Yanwei Fu","hidden":false}],"publishedAt":"2025-03-10T05:39:43.000Z","submittedOnDailyAt":"2025-03-12T11:24:57.956Z","title":"CineBrain: A Large-Scale Multi-Modal Brain Dataset During Naturalistic\n Audiovisual Narrative Processing","submittedOnDailyBy":{"_id":"643815c4961bb61e463c5896","avatarUrl":"/avatars/3b44592472f16c56105bff8c314d9939.svg","isPro":false,"fullname":"Jianxiong Gao","user":"Jianxiong","type":"user"},"summary":"In this paper, we introduce CineBrain, the first large-scale dataset\nfeaturing simultaneous EEG and fMRI recordings during dynamic audiovisual\nstimulation. 
Recognizing the complementary strengths of EEG's high temporal\nresolution and fMRI's deep-brain spatial coverage, CineBrain provides\napproximately six hours of narrative-driven content from the popular television\nseries The Big Bang Theory for each of six participants. Building upon this\nunique dataset, we propose CineSync, an innovative multimodal decoding\nframework integrates a Multi-Modal Fusion Encoder with a diffusion-based Neural\nLatent Decoder. Our approach effectively fuses EEG and fMRI signals,\nsignificantly improving the reconstruction quality of complex audiovisual\nstimuli. To facilitate rigorous evaluation, we introduce Cine-Benchmark, a\ncomprehensive evaluation protocol that assesses reconstructions across semantic\nand perceptual dimensions. Experimental results demonstrate that CineSync\nachieves state-of-the-art video reconstruction performance and highlight our\ninitial success in combining fMRI and EEG for reconstructing both video and\naudio stimuli. Project Page: https://jianxgao.github.io/CineBrain.","upvotes":11,"discussionId":"67d1382c85d9baaf658a4c96","projectPage":"https://jianxgao.github.io/CineBrain/","githubRepo":"https://github.com/JianxGao/CineBrain","githubRepoAddedBy":"auto","ai_summary":"CineSync, a multimodal decoding framework using reinforcement learning with a diffusion-based neural latent decoder, achieves state-of-the-art performance in reconstructing complex audiovisual stimuli from EEG and fMRI data.","ai_keywords":["EEG","fMRI","Multi-Modal Fusion Encoder","diffusion-based Neural Latent Decoder","CineSync","CineBrain","Cine-Benchmark"],"githubStars":5},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"643815c4961bb61e463c5896","avatarUrl":"/avatars/3b44592472f16c56105bff8c314d9939.svg","isPro":false,"fullname":"Jianxiong 
Gao","user":"Jianxiong","type":"user"},{"_id":"64de20c5808492ba6e65d124","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64de20c5808492ba6e65d124/58IX_TI5vJw73qS1knw56.jpeg","isPro":false,"fullname":"Zhang Mengchen","user":"Dubhe-zmc","type":"user"},{"_id":"65f42e56c792ce2e4e85ce7a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/65f42e56c792ce2e4e85ce7a/krR5Vi47zU3nIncvzCMEq.png","isPro":false,"fullname":"jingyanghuo","user":"jingyanghuo","type":"user"},{"_id":"64cb444287bfb0fc21099909","avatarUrl":"/avatars/b0f1e14e01727b6bef47d5b8d36e9224.svg","isPro":false,"fullname":"Chong Li","user":"chongjg","type":"user"},{"_id":"67a3f78b1db4ae99793a5881","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/iUPVGWR44Pp0QZuXPRxoT.png","isPro":false,"fullname":"Robert Gu","user":"Tastooger","type":"user"},{"_id":"67573d76ee0958ae763bbe26","avatarUrl":"/avatars/a0ec538f38e9ff713ed6847d228a4bea.svg","isPro":false,"fullname":"jiyaoliu","user":"jiyaoliufd","type":"user"},{"_id":"668547e921e9f68a7a2a6d18","avatarUrl":"/avatars/5251e5c52993c7e5fb23cb4afba03f50.svg","isPro":false,"fullname":"Yuqin Dai","user":"dayll","type":"user"},{"_id":"6415d088107962562e99517c","avatarUrl":"/avatars/c2fa60334080fc238016b49b1a436c00.svg","isPro":false,"fullname":"Qi Chen-SII","user":"qc316","type":"user"},{"_id":"67d18a7c312ed7eef068feb9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/z6RVAfS8IZ7SZWUcNvR-v.png","isPro":false,"fullname":"QuWanying","user":"RainbowQTT","type":"user"},{"_id":"665b133508d536a8ac804f7d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/Uwi0OnANdTbRbHHQvGqvR.png","isPro":false,"fullname":"Paulson","user":"Pnaomi","type":"user"},{"_id":"67d195a495bc9562a5c4b623","avatarUrl":"/avatars/bb831cc986aa520ced30e325f39e9e47.svg","isPro":false,"fullname":"Yichang","user":"LY1224","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
arxiv:2503.06940

CineBrain: A Large-Scale Multi-Modal Brain Dataset During Naturalistic Audiovisual Narrative Processing

Published on Mar 10, 2025 · Submitted by Jianxiong Gao on Mar 12, 2025
Authors: Jianxiong Gao, Yichang Liu, Baofeng Yang, Jianfeng Feng, Yanwei Fu

Abstract

CineSync, a multimodal decoding framework that integrates a Multi-Modal Fusion Encoder with a diffusion-based Neural Latent Decoder, achieves state-of-the-art performance in reconstructing complex audiovisual stimuli from EEG and fMRI data.

AI-generated summary

In this paper, we introduce CineBrain, the first large-scale dataset featuring simultaneous EEG and fMRI recordings during dynamic audiovisual stimulation. Recognizing the complementary strengths of EEG's high temporal resolution and fMRI's deep-brain spatial coverage, CineBrain provides approximately six hours of narrative-driven content from the popular television series The Big Bang Theory for each of six participants. Building upon this unique dataset, we propose CineSync, an innovative multimodal decoding framework that integrates a Multi-Modal Fusion Encoder with a diffusion-based Neural Latent Decoder. Our approach effectively fuses EEG and fMRI signals, significantly improving the reconstruction quality of complex audiovisual stimuli. To facilitate rigorous evaluation, we introduce Cine-Benchmark, a comprehensive evaluation protocol that assesses reconstructions across semantic and perceptual dimensions. Experimental results demonstrate that CineSync achieves state-of-the-art video reconstruction performance and highlight our initial success in combining fMRI and EEG for reconstructing both video and audio stimuli. Project Page: https://jianxgao.github.io/CineBrain.
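To make the fusion idea in the abstract concrete, here is a minimal, purely illustrative sketch of projecting two modalities with very different native shapes (temporally dense EEG, spatially dense fMRI) into a shared latent space and concatenating them. All dimensions, layer choices, and the fusion rule are assumptions for exposition, not the authors' CineSync implementation.

```python
# Toy fusion sketch in the spirit of a Multi-Modal Fusion Encoder.
# Shapes and the projection scheme are hypothetical, chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

def encode(signal, proj):
    """Project a flattened recording into a shared latent space."""
    return np.tanh(signal @ proj)

# Hypothetical dimensions: EEG is temporally dense, fMRI spatially dense.
eeg = rng.standard_normal((1, 64 * 250))   # 64 channels x 250 time samples, flattened
fmri = rng.standard_normal((1, 8000))      # 8000 voxels, flattened

latent_dim = 128
W_eeg = rng.standard_normal((64 * 250, latent_dim)) / np.sqrt(64 * 250)
W_fmri = rng.standard_normal((8000, latent_dim)) / np.sqrt(8000)

# Fuse by concatenating the per-modality latents; a downstream decoder
# (e.g. a diffusion model conditioned on this vector) would consume `fused`.
fused = np.concatenate([encode(eeg, W_eeg), encode(fmri, W_fmri)], axis=1)
print(fused.shape)  # (1, 256)
```

In a real system the learned projections and decoder would be trained end to end; the point here is only that heterogeneous recordings can be mapped into one conditioning vector.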

Community

Paper author · Paper submitter

In this paper, we introduce CineBrain, the first large-scale dataset featuring simultaneous EEG and fMRI recordings during dynamic audiovisual stimulation. Recognizing the complementary strengths of EEG's high temporal resolution and fMRI's deep-brain spatial coverage, CineBrain provides approximately six hours of narrative-driven content from the popular television series The Big Bang Theory for each of six participants. Building upon this unique dataset, we propose CineSync, an innovative multimodal decoding framework that integrates a Multi-Modal Fusion Encoder with a diffusion-based Neural Latent Decoder. Our approach effectively fuses EEG and fMRI signals, significantly improving the reconstruction quality of complex audiovisual stimuli. To facilitate rigorous evaluation, we introduce Cine-Benchmark, a comprehensive evaluation protocol that assesses reconstructions across semantic and perceptual dimensions. Experimental results demonstrate that CineSync achieves state-of-the-art video reconstruction performance and highlight our initial success in combining fMRI and EEG for reconstructing both video and audio stimuli. Project Page: https://jianxgao.github.io/CineBrain.

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* A Survey of fMRI to Image Reconstruction (arXiv:2502.16861, 2025)
* A Survey on Bridging EEG Signals and Generative AI: From Image and Text to Beyond (arXiv:2502.12048, 2025)
* Large Cognition Model: Towards Pretrained EEG Foundation Model (arXiv:2502.17464, 2025)
* MagicInfinite: Generating Infinite Talking Videos with Your Words and Voice (arXiv:2503.05978, 2025)
* MindSimulator: Exploring Brain Concept Localization via Synthetic FMRI (arXiv:2503.02351, 2025)
* BP-GPT: Auditory Neural Decoding Using fMRI-prompted LLM (arXiv:2502.15172, 2025)
* BrainGuard: Privacy-Preserving Multisubject Image Reconstructions from Brain Activities (arXiv:2501.14309, 2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 2

Datasets citing this paper 1

Spaces citing this paper 0

No Spaces currently link this paper

Cite arxiv.org/abs/2503.06940 in a Space README.md to link it from this page.
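
For citing the paper itself, a plausible BibTeX entry can be assembled from the metadata on this page (the entry key and field formatting below are my own, not an official export):

```bibtex
@misc{gao2025cinebrain,
  title         = {CineBrain: A Large-Scale Multi-Modal Brain Dataset During Naturalistic Audiovisual Narrative Processing},
  author        = {Gao, Jianxiong and Liu, Yichang and Yang, Baofeng and Feng, Jianfeng and Fu, Yanwei},
  year          = {2025},
  eprint        = {2503.06940},
  archivePrefix = {arXiv},
}
```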

Collections including this paper 2