Paper page - MOVA: Towards Scalable and Synchronized Video-Audio Generation

Project page: https://mosi.cn/models/mova
Model: https://huggingface.co/collections/OpenMOSS-Team/mova
Code: https://github.com/OpenMOSS/MOVA
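A minimal download sketch, assuming standard huggingface_hub tooling. The repo id below is a placeholder derived from the collection name; check the OpenMOSS-Team collection on the Hub for the exact model repository before running, and clone https://github.com/OpenMOSS/MOVA for the inference code.

# Hypothetical download sketch; the exact repo_id inside the
# OpenMOSS-Team/mova collection may differ from this placeholder.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="OpenMOSS-Team/MOVA",  # placeholder repo id, verify on the Hub
    repo_type="model",
)
print("Weights downloaded to:", local_dir)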

\n","updatedAt":"2026-02-10T05:48:59.268Z","author":{"_id":"62c14609ac1b639c2d87192c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1656833489364-noauth.png","fullname":"SII-liangtianyi","name":"tianyilt","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":5,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"zh","probability":0.4221690893173218},"editors":["tianyilt"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1656833489364-noauth.png"],"reactions":[{"reaction":"🔥","users":["Cqy2019","ngc7293","natalie5","gaoyang07","tianyilt"],"count":5}],"isReport":false}},{"id":"698bdf1fb9b8fbd45e1ec8e1","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2026-02-11T01:45:03.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Apollo: Unified Multi-Task Audio-Video Joint Generation](https://huggingface.co/papers/2601.04151) (2026)\n* [JoVA: Unified Multimodal Learning for Joint Video-Audio Generation](https://huggingface.co/papers/2512.13677) (2025)\n* [LTX-2: Efficient Joint Audio-Visual Foundation Model](https://huggingface.co/papers/2601.03233) (2026)\n* [TalkVerse: Democratizing Minute-Long Audio-Driven Video Generation](https://huggingface.co/papers/2512.14938) (2025)\n* [Omni2Sound: Towards Unified Video-Text-to-Audio Generation](https://huggingface.co/papers/2601.02731) (2026)\n* [MM-Sonate: Multimodal Controllable Audio-Video Generation with Zero-Shot Voice Cloning](https://huggingface.co/papers/2601.01568) (2026)\n* [JUST-DUB-IT: Video Dubbing via Joint Audio-Visual Diffusion](https://huggingface.co/papers/2601.22143) (2026)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

\n

The following papers were recommended by the Semantic Scholar API

\n\n

Please give a thumbs up to this comment if you found it helpful!

\n

If you want recommendations for any Paper on Hugging Face checkout this Space

\n

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend

\n","updatedAt":"2026-02-11T01:45:03.329Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6789954304695129},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}},{"id":"69978f19c633a08f9cf69f82","author":{"_id":"69978dd4101eccb918a0d2eb","avatarUrl":"/avatars/c878ce25a2e6342dba232921b4c4cb04.svg","fullname":"يحيئ القاضي أبو عتريس","name":"yahyajudgeabuatris","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"isUserFollowing":false},"createdAt":"2026-02-19T22:30:49.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"تفاصيل المشهد (السيناريو):\n1. البداية (0:00 - 0:03):\nالصورة: الكاميرا مثبتة (أو محمولة باليد بأسلوب الـ Vlog)، تبدأ بلقطة مقربة على \"القمريات\" (النوافذ الزجاجية الملونة) وهي تعكس ضوء الشمس على الجدار الحجري، ثم تنزل الكاميرا لتظهر الفتاة جالسة على الأرض وتعدل جلستها.\nالنص على الشاشة: \"أجواء رمضان في اليمن ما تشبه أي مكان ❤️\"\nالصوت: صوت خلفي خافت لأناشيد ترحيبية برمضان (مثل \"يا مرحبًا يا رمضان\") أو صوت أذان بعيد.","html":"

تفاصيل المشهد (السيناريو):

\n
    \n
  1. البداية (0:00 - 0:03):
    الصورة: الكاميرا مثبتة (أو محمولة باليد بأسلوب الـ Vlog)، تبدأ بلقطة مقربة على \"القمريات\" (النوافذ الزجاجية الملونة) وهي تعكس ضوء الشمس على الجدار الحجري، ثم تنزل الكاميرا لتظهر الفتاة جالسة على الأرض وتعدل جلستها.
    النص على الشاشة: \"أجواء رمضان في اليمن ما تشبه أي مكان ❤️\"
    الصوت: صوت خلفي خافت لأناشيد ترحيبية برمضان (مثل \"يا مرحبًا يا رمضان\") أو صوت أذان بعيد.
  2. \n
\n","updatedAt":"2026-02-19T22:30:49.300Z","author":{"_id":"69978dd4101eccb918a0d2eb","avatarUrl":"/avatars/c878ce25a2e6342dba232921b4c4cb04.svg","fullname":"يحيئ القاضي أبو عتريس","name":"yahyajudgeabuatris","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"ar","probability":0.9965164661407471},"editors":["yahyajudgeabuatris"],"editorAvatarUrls":["/avatars/c878ce25a2e6342dba232921b4c4cb04.svg"],"reactions":[],"isReport":false}},{"id":"69978f4a7b1a959012e535d3","author":{"_id":"69978dd4101eccb918a0d2eb","avatarUrl":"/avatars/c878ce25a2e6342dba232921b4c4cb04.svg","fullname":"يحيئ القاضي أبو عتريس","name":"yahyajudgeabuatris","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"isUserFollowing":false},"createdAt":"2026-02-19T22:31:38.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"تفاصيل المشهد (السينا\n![20260219215934_8ec12_1f44d49f-cd54-49b5-8ec1-0e4040709656](https://cdn-uploads.huggingface.co/production/uploads/69978dd4101eccb918a0d2eb/AJeAabz995r5Rl5m1bX6H.jpeg)\nريو):\n1. البداية (0:00 - 0:03):\nالصورة: الكاميرا مثبتة (أو محمولة باليد بأسلوب الـ Vlog)، تبدأ بلقطة مقربة على \"القمريات\" (النوافذ الزجاجية الملونة) وهي تعكس ضوء الشمس على الجدار الحجري، ثم تنزل الكاميرا لتظهر الفتاة جالسة على الأرض وتعدل جلستها.\nالنص على الشاشة: \"أجواء رمضان في اليمن ما تشبه أي مكان ❤️\"\nالصوت: صوت خلفي خافت لأناشيد ترحيبية برمضان (مثل \"يا مرحبًا يا رمضان\") أو صوت أذان بعيد.","html":"

تفاصيل المشهد (السينا
\"20260219215934_8ec12_1f44d49f-cd54-49b5-8ec1-0e4040709656\"
ريو):

\n
    \n
  1. البداية (0:00 - 0:03):
    الصورة: الكاميرا مثبتة (أو محمولة باليد بأسلوب الـ Vlog)، تبدأ بلقطة مقربة على \"القمريات\" (النوافذ الزجاجية الملونة) وهي تعكس ضوء الشمس على الجدار الحجري، ثم تنزل الكاميرا لتظهر الفتاة جالسة على الأرض وتعدل جلستها.
    النص على الشاشة: \"أجواء رمضان في اليمن ما تشبه أي مكان ❤️\"
    الصوت: صوت خلفي خافت لأناشيد ترحيبية برمضان (مثل \"يا مرحبًا يا رمضان\") أو صوت أذان بعيد.
  2. \n
\n","updatedAt":"2026-02-19T22:31:38.103Z","author":{"_id":"69978dd4101eccb918a0d2eb","avatarUrl":"/avatars/c878ce25a2e6342dba232921b4c4cb04.svg","fullname":"يحيئ القاضي أبو عتريس","name":"yahyajudgeabuatris","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"ar","probability":0.9765099287033081},"editors":["yahyajudgeabuatris"],"editorAvatarUrls":["/avatars/c878ce25a2e6342dba232921b4c4cb04.svg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2602.08794","authors":[{"_id":"698ac65d1b2dc6b37d61b1c2","name":"SII-OpenMOSS Team","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1c4","user":{"_id":"630501ee34c824b17250dea3","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/630501ee34c824b17250dea3/1muf-A-SvXYzr9yjXi1Ev.jpeg","isPro":false,"fullname":"Donghua Yu","user":"yhzx233","type":"user"},"name":"Donghua Yu","status":"claimed_verified","statusLastChangedAt":"2026-02-11T11:18:12.256Z","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1c5","name":"Mingshu Chen","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1c6","name":"Qi Chen","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1c7","name":"Qi Luo","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1c8","name":"Qianyi Wu","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1c9","user":{"_id":"63ec4715c81b6a52391c46b8","avatarUrl":"/avatars/496819b5075a1a834a2b9edeb068c80e.svg","isPro":false,"fullname":"QinyuanCheng","user":"Cqy2019","type":"user"},"name":"Qinyuan Cheng","status":"claimed_verified","statusLastChangedAt":"2026-02-10T09:05:07.400Z","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1ca","name":"Ruixiao Li","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1cb","user":{"_id":"62c14609ac1b639c2d87192c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1656833489364-noauth.png","isPro":false,"fullname":"SII-liangtianyi","user":"tianyilt","type":"user"},"name":"Tianyi Liang","status":"claimed_verified","statusLastChangedAt":"2026-02-10T09:05:10.522Z","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1cc","name":"Wenbo Zhang","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1cd","name":"Wenming Tu","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1ce","name":"Xiangyu Peng","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1cf","name":"Yang Gao","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1d0","name":"Yanru Huo","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1d1","user":{"_id":"69158ffc0153b85a677dcc46","avatarUrl":"/avatars/c9c5f60522f2a8f370d790ea9938b090.svg","isPro":false,"fullname":"Ying Zhu","user":"Auraithm","type":"user"},"name":"Ying Zhu","status":"claimed_verified","statusLastChangedAt":"2026-02-10T09:27:41.440Z","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1d2","user":{"_id":"6809a215d1b1e0758d74142d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/HQaINIuC0nd4Xa5_3_ma9.png","isPro":false,"fullname":"Luo Yinze","user":"0-693","type":"user"},"name":"Yinze Luo","status":"claimed_verified","statusLastChangedAt":"2026-02-12T13:57:54.543Z","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1d3","name":"Yiyang Zhang","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1d4","name":"Yuerong Song","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1d5","user":{"_id":"6443f7bf1bc692d87b25e234","avatarUrl":"/avatars/fa9e62d96d0691a9a48e3db499a61557.svg","isPro":false,"fullname":"Xu Zhe","user":"Phospheneser","type":"user"},"name":"Zhe 
Xu","status":"claimed_verified","statusLastChangedAt":"2026-02-11T11:18:14.300Z","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1d6","name":"Zhiyu Zhang","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1d7","name":"Chenchen Yang","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1d8","name":"Cheng Chang","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1d9","user":{"_id":"6576b137a90ae2daae171245","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6576b137a90ae2daae171245/WQK4UyNDX1XxIK83GpOZB.jpeg","isPro":false,"fullname":"zhouchushu(SII)","user":"zhouchushu","type":"user"},"name":"Chushu Zhou","status":"claimed_verified","statusLastChangedAt":"2026-02-11T11:18:18.938Z","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1da","name":"Hanfu Chen","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1db","name":"Hongnan Ma","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1dc","name":"Jiaxi Li","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1dd","name":"Jingqi Tong","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1de","name":"Junxi Liu","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1df","name":"Ke Chen","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1e0","name":"Shimin Li","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1e1","name":"Songlin Wang","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1e2","name":"Wei Jiang","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1e3","user":{"_id":"629ef8544313a7c1dd671130","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/629ef8544313a7c1dd671130/i5xfHIgELcuO1Ew19ebTw.png","isPro":false,"fullname":"Zhaoye Fei","user":"ngc7293","type":"user"},"name":"Zhaoye Fei","status":"claimed_verified","statusLastChangedAt":"2026-02-13T09:37:46.776Z","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1e4","name":"Zhiyuan Ning","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1e5","name":"Chunguo Li","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1e6","name":"Chenhui Li","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1e7","name":"Ziwei He","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1e8","name":"Zengfeng Huang","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1e9","name":"Xie Chen","hidden":false},{"_id":"698ac65d1b2dc6b37d61b1ea","user":{"_id":"61457b8deff2c9fdb4de4988","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1632381702899-61457b8deff2c9fdb4de4988.jpeg","isPro":false,"fullname":"Xipeng Qiu","user":"xpqiu","type":"user"},"name":"Xipeng Qiu","status":"claimed_verified","statusLastChangedAt":"2026-02-11T11:18:10.009Z","hidden":false}],"publishedAt":"2026-02-09T15:31:54.000Z","submittedOnDailyAt":"2026-02-10T03:18:59.260Z","title":"MOVA: Towards Scalable and Synchronized Video-Audio Generation","submittedOnDailyBy":{"_id":"62c14609ac1b639c2d87192c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1656833489364-noauth.png","isPro":false,"fullname":"SII-liangtianyi","user":"tianyilt","type":"user"},"summary":"Audio is indispensable for real-world video, yet generation models have largely overlooked audio components. Current approaches to producing audio-visual content often rely on cascaded pipelines, which increase cost, accumulate errors, and degrade overall quality. While systems such as Veo 3 and Sora 2 emphasize the value of simultaneous generation, joint multimodal modeling introduces unique challenges in architecture, data, and training. Moreover, the closed-source nature of existing systems limits progress in the field. 
In this work, we introduce MOVA (MOSS Video and Audio), an open-source model capable of generating high-quality, synchronized audio-visual content, including realistic lip-synced speech, environment-aware sound effects, and content-aligned music. MOVA employs a Mixture-of-Experts (MoE) architecture, with a total of 32B parameters, of which 18B are active during inference. It supports IT2VA (Image-Text to Video-Audio) generation task. By releasing the model weights and code, we aim to advance research and foster a vibrant community of creators. The released codebase features comprehensive support for efficient inference, LoRA fine-tuning, and prompt enhancement.","upvotes":151,"discussionId":"698ac65e1b2dc6b37d61b1eb","projectPage":"https://mosi.cn/models/mova","githubRepo":"https://github.com/OpenMOSS/MOVA","githubRepoAddedBy":"user","ai_summary":"MOVA is an open-source model that generates synchronized audio-visual content using a Mixture-of-Experts architecture with 32 billion parameters, supporting image-text to video-audio generation tasks.","ai_keywords":["Mixture-of-Experts","MoE","audio-visual content","lip-synced speech","sound effects","content-aligned music","IT2VA","efficient inference","LoRA fine-tuning","prompt enhancement"],"githubStars":676,"organization":{"_id":"613b0dee83ec35d460684607","name":"OpenMOSS-Team","fullname":"OpenMOSS","avatar":"https://cdn-uploads.huggingface.co/production/uploads/61457b8deff2c9fdb4de4988/N5b9663zQ4uq5_OTNlnmw.png"}},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"62c14609ac1b639c2d87192c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1656833489364-noauth.png","isPro":false,"fullname":"SII-liangtianyi","user":"tianyilt","type":"user"},{"_id":"6809a215d1b1e0758d74142d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/HQaINIuC0nd4Xa5_3_ma9.png","isPro":false,"fullname":"Luo Yinze","user":"0-693","type":"user"},{"_id":"629ef8544313a7c1dd671130","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/629ef8544313a7c1dd671130/i5xfHIgELcuO1Ew19ebTw.png","isPro":false,"fullname":"Zhaoye Fei","user":"ngc7293","type":"user"},{"_id":"6687f9a71309e08b1f84bdc6","avatarUrl":"/avatars/f947ec9fe620ae4cffa83b371acdd571.svg","isPro":false,"fullname":"MeiYi","user":"natalie5","type":"user"},{"_id":"67a5ae48166721c8f99f8dac","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/QogTIecNITyOGrsqkJzWK.png","isPro":false,"fullname":"Yimin Wang","user":"99sweetcookie","type":"user"},{"_id":"67a1c770bb894e8b19246698","avatarUrl":"/avatars/38c6370f3845acc4fab334bf8088ec3e.svg","isPro":false,"fullname":"Tan Yue","user":"TTangenty","type":"user"},{"_id":"6346b4e7fa79ac99a3ad12ee","avatarUrl":"/avatars/cf72b76a33c5779b049faf7bf6ec5070.svg","isPro":false,"fullname":"Yang Gao","user":"gaoyang07","type":"user"},{"_id":"64805e6dde559d48dbb00627","avatarUrl":"/avatars/29ca34546411dcc28bbc934e3c26a2ba.svg","isPro":false,"fullname":"Zengfeng","user":"ZengfengHuang","type":"user"},{"_id":"698ac8c238667dcfb72d0af2","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/698ac8c238667dcfb72d0af2/J2jFdzeiXfz3mDErsnNPA.jpeg","isPro":false,"fullname":"ZHIYU ZHANG","user":"zhiyuzhang-0212","type":"user"},{"_id":"64f033ef82c6eea604c4da8b","avatarUrl":"/avatars/51b93fea7fd68b4274ee03701245dcca.svg","isPro":false,"fullname":"Xiaoran Liu 
(SII)","user":"SII-xrliu","type":"user"},{"_id":"680f7d6b8b2e2c7db910962c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/680f7d6b8b2e2c7db910962c/bN8dpywHEJuN_rdnS6CGX.png","isPro":false,"fullname":"huazzeng","user":"huazzeng","type":"user"},{"_id":"637169557a5e5d8efdc3e58e","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1668515232215-637169557a5e5d8efdc3e58e.jpeg","isPro":false,"fullname":"Haowei Zhang","user":"freesky","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0,"organization":{"_id":"613b0dee83ec35d460684607","name":"OpenMOSS-Team","fullname":"OpenMOSS","avatar":"https://cdn-uploads.huggingface.co/production/uploads/61457b8deff2c9fdb4de4988/N5b9663zQ4uq5_OTNlnmw.png"}}">
arxiv:2602.08794

MOVA: Towards Scalable and Synchronized Video-Audio Generation

Published on Feb 9 · Submitted by SII-liangtianyi on Feb 10
Authors: SII-OpenMOSS Team, Donghua Yu, Mingshu Chen, Qi Chen, Qi Luo, Qianyi Wu, Qinyuan Cheng, Ruixiao Li, Tianyi Liang, Wenbo Zhang, Wenming Tu, Xiangyu Peng, Yang Gao, Yanru Huo, Ying Zhu, Yinze Luo, Yiyang Zhang, Yuerong Song, Zhe Xu, Zhiyu Zhang, Chenchen Yang, Cheng Chang, Chushu Zhou, Hanfu Chen, Hongnan Ma, Jiaxi Li, Jingqi Tong, Junxi Liu, Ke Chen, Shimin Li, Songlin Wang, Wei Jiang, Zhaoye Fei, Zhiyuan Ning, Chunguo Li, Chenhui Li, Ziwei He, Zengfeng Huang, Xie Chen, Xipeng Qiu

Abstract

AI-generated summary: MOVA is an open-source model that generates synchronized audio-visual content using a Mixture-of-Experts architecture with 32 billion parameters, supporting image-text to video-audio generation tasks.

Audio is indispensable for real-world video, yet generation models have largely overlooked audio components. Current approaches to producing audio-visual content often rely on cascaded pipelines, which increase cost, accumulate errors, and degrade overall quality. While systems such as Veo 3 and Sora 2 emphasize the value of simultaneous generation, joint multimodal modeling introduces unique challenges in architecture, data, and training. Moreover, the closed-source nature of existing systems limits progress in the field. In this work, we introduce MOVA (MOSS Video and Audio), an open-source model capable of generating high-quality, synchronized audio-visual content, including realistic lip-synced speech, environment-aware sound effects, and content-aligned music. MOVA employs a Mixture-of-Experts (MoE) architecture with a total of 32B parameters, of which 18B are active during inference. It supports the IT2VA (Image-Text to Video-Audio) generation task. By releasing the model weights and code, we aim to advance research and foster a vibrant community of creators. The released codebase features comprehensive support for efficient inference, LoRA fine-tuning, and prompt enhancement.
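The gap between the 32B total and 18B active parameters comes from the Mixture-of-Experts design: a router sends each token to only a few experts, so most expert weights are untouched on any given forward pass. The toy PyTorch layer below illustrates that top-k routing pattern in general terms; it is not MOVA's architecture, and the layer sizes, expert count, and top_k value are invented for clarity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Toy top-k Mixture-of-Experts layer (illustration only, not MOVA's code)."""
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        logits = self.router(x)                          # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)   # route each token to k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                 # tokens sent to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

layer = ToyMoELayer()
y = layer(torch.randn(10, 64))
# Only top_k / n_experts of the expert parameters touch each token,
# which is why the active parameter count is smaller than the total.
print(y.shape)  # torch.Size([10, 64])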

Community

Paper author Paper submitter

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* Apollo: Unified Multi-Task Audio-Video Joint Generation (2026): https://huggingface.co/papers/2601.04151
* JoVA: Unified Multimodal Learning for Joint Video-Audio Generation (2025): https://huggingface.co/papers/2512.13677
* LTX-2: Efficient Joint Audio-Visual Foundation Model (2026): https://huggingface.co/papers/2601.03233
* TalkVerse: Democratizing Minute-Long Audio-Driven Video Generation (2025): https://huggingface.co/papers/2512.14938
* Omni2Sound: Towards Unified Video-Text-to-Audio Generation (2026): https://huggingface.co/papers/2601.02731
* MM-Sonate: Multimodal Controllable Audio-Video Generation with Zero-Shot Voice Cloning (2026): https://huggingface.co/papers/2601.01568
* JUST-DUB-IT: Video Dubbing via Joint Audio-Visual Diffusion (2026): https://huggingface.co/papers/2601.22143

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

Scene details (scenario):

  1. Opening (0:00 - 0:03):
    Visuals: The camera is fixed (or handheld, vlog-style), starting with a close-up of the "qamariya" stained-glass windows reflecting sunlight onto the stone wall, then tilting down to show the girl sitting on the floor and adjusting her posture.
    On-screen text: "The Ramadan atmosphere in Yemen is like nowhere else ❤️"
    Audio: a faint background of Ramadan welcoming nasheeds (such as "Ya Marhaban Ya Ramadan") or a distant call to prayer.
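Structured scene breakdowns like the one above (shot, visuals, on-screen text, audio) map naturally onto the image-text to video-audio setting. The snippet below shows one hedged way to flatten such a breakdown into a single text prompt; the field names are illustrative rather than a MOVA-defined schema, and the actual generation call is left to the scripts in the GitHub repository.

# Sketch: flattening a structured scene description into one IT2VA text prompt.
# Field names are illustrative, not a schema defined by MOVA.
scene = {
    "shot": "0:00-0:03, static or handheld vlog-style camera",
    "visuals": ("close-up of qamariya stained-glass windows reflecting sunlight "
                "onto a stone wall, then tilt down to a girl sitting on the floor"),
    "on_screen_text": "The Ramadan atmosphere in Yemen is like nowhere else",
    "audio": "faint Ramadan nasheed in the background, or a distant call to prayer",
}

prompt = " ".join(f"{key.replace('_', ' ')}: {value}." for key, value in scene.items())
print(prompt)  # pass this string, plus a reference image, to the IT2VA pipeline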



Models citing this paper 2

Datasets citing this paper 0


Spaces citing this paper 0


Collections including this paper 2