Paper page - VBench-2.0: Advancing Video Generation Benchmark Suite for Intrinsic Faithfulness

\n","updatedAt":"2025-03-29T01:34:24.898Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6763342618942261},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2503.21755","authors":[{"_id":"67e60823284844fd3014f62b","user":{"_id":"67e60ae6ac37824273d74389","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/YvPKZ_0gyJnvNwM1zK3JS.png","isPro":false,"fullname":"Dian Zheng","user":"zhengli1013","type":"user"},"name":"Dian Zheng","status":"claimed_verified","statusLastChangedAt":"2025-12-01T16:25:17.342Z","hidden":false},{"_id":"67e60823284844fd3014f62c","user":{"_id":"60efe7fa0d920bc7805cada5","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60efe7fa0d920bc7805cada5/2LBrJBjSCOP5ilZIpWLHl.png","isPro":false,"fullname":"Ziqi Huang","user":"Ziqi","type":"user"},"name":"Ziqi Huang","status":"admin_assigned","statusLastChangedAt":"2025-03-28T08:48:07.968Z","hidden":false},{"_id":"67e60823284844fd3014f62d","user":{"_id":"6690dfd73bbfdee5f43ffc4d","avatarUrl":"/avatars/88ff9b61663299d7751037696a75f1d7.svg","isPro":false,"fullname":"Hongbo Liu","user":"HongboLiu","type":"user"},"name":"Hongbo Liu","status":"admin_assigned","statusLastChangedAt":"2025-03-28T08:48:14.875Z","hidden":false},{"_id":"67e60823284844fd3014f62e","user":{"_id":"647993d9f966f086918da59e","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/647993d9f966f086918da59e/NDxz3PEpo3srZQNhwT7Qf.jpeg","isPro":false,"fullname":"kzou","user":"jackyhate","type":"user"},"name":"Kai Zou","status":"claimed_verified","statusLastChangedAt":"2025-03-28T08:37:25.760Z","hidden":true},{"_id":"67e60823284844fd3014f62f","user":{"_id":"65b9d9961fe588f824fde191","avatarUrl":"/avatars/a9245958cc998a4b4b870bf2490fdaee.svg","isPro":false,"fullname":"Yinan He","user":"yinanhe","type":"user"},"name":"Yinan He","status":"admin_assigned","statusLastChangedAt":"2025-03-28T08:48:21.118Z","hidden":false},{"_id":"67e60823284844fd3014f630","name":"Fan Zhang","hidden":false},{"_id":"67e60823284844fd3014f631","name":"Yuanhan Zhang","hidden":false},{"_id":"67e60823284844fd3014f632","user":{"_id":"670749a9d827da9f37508209","avatarUrl":"/avatars/f14fc05ad405f3967b9af0bcc73d4207.svg","isPro":false,"fullname":"he jingwen","user":"mimihe","type":"user"},"name":"Jingwen He","status":"admin_assigned","statusLastChangedAt":"2025-03-28T08:48:30.978Z","hidden":false},{"_id":"67e60823284844fd3014f633","name":"Wei-Shi Zheng","hidden":false},{"_id":"67e60823284844fd3014f634","name":"Yu Qiao","hidden":false},{"_id":"67e60823284844fd3014f635","user":{"_id":"62ab1ac1d48b4d8b048a3473","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1656826685333-62ab1ac1d48b4d8b048a3473.png","isPro":false,"fullname":"Ziwei Liu","user":"liuziwei7","type":"user"},"name":"Ziwei Liu","status":"admin_assigned","statusLastChangedAt":"2025-03-28T08:48:49.191Z","hidden":false}],"publishedAt":"2025-03-27T17:57:01.000Z","submittedOnDailyAt":"2025-03-28T00:53:44.602Z","title":"VBench-2.0: Advancing Video 
Generation Benchmark Suite for Intrinsic\n Faithfulness","submittedOnDailyBy":{"_id":"60efe7fa0d920bc7805cada5","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60efe7fa0d920bc7805cada5/2LBrJBjSCOP5ilZIpWLHl.png","isPro":false,"fullname":"Ziqi Huang","user":"Ziqi","type":"user"},"summary":"Video generation has advanced significantly, evolving from producing\nunrealistic outputs to generating videos that appear visually convincing and\ntemporally coherent. To evaluate these video generative models, benchmarks such\nas VBench have been developed to assess their faithfulness, measuring factors\nlike per-frame aesthetics, temporal consistency, and basic prompt adherence.\nHowever, these aspects mainly represent superficial faithfulness, which focus\non whether the video appears visually convincing rather than whether it adheres\nto real-world principles. While recent models perform increasingly well on\nthese metrics, they still struggle to generate videos that are not just\nvisually plausible but fundamentally realistic. To achieve real \"world models\"\nthrough video generation, the next frontier lies in intrinsic faithfulness to\nensure that generated videos adhere to physical laws, commonsense reasoning,\nanatomical correctness, and compositional integrity. Achieving this level of\nrealism is essential for applications such as AI-assisted filmmaking and\nsimulated world modeling. To bridge this gap, we introduce VBench-2.0, a\nnext-generation benchmark designed to automatically evaluate video generative\nmodels for their intrinsic faithfulness. VBench-2.0 assesses five key\ndimensions: Human Fidelity, Controllability, Creativity, Physics, and\nCommonsense, each further broken down into fine-grained capabilities. Tailored\nfor individual dimensions, our evaluation framework integrates generalists such\nas state-of-the-art VLMs and LLMs, and specialists, including anomaly detection\nmethods proposed for video generation. We conduct extensive annotations to\nensure alignment with human judgment. 
By pushing beyond superficial\nfaithfulness toward intrinsic faithfulness, VBench-2.0 aims to set a new\nstandard for the next generation of video generative models in pursuit of\nintrinsic faithfulness.","upvotes":33,"discussionId":"67e60824284844fd3014f68e","projectPage":"https://vchitect.github.io/VBench-2.0-project/","ai_summary":"VBench-2.0 introduces a benchmark for video generation that evaluates intrinsic faithfulness by assessing human fidelity, controllability, creativity, physics, and commonsense.","ai_keywords":["VLMs","LLMs","anomaly detection","VBench-2.0"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"60efe7fa0d920bc7805cada5","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60efe7fa0d920bc7805cada5/2LBrJBjSCOP5ilZIpWLHl.png","isPro":false,"fullname":"Ziqi Huang","user":"Ziqi","type":"user"},{"_id":"636e19078ba65db4a093a3f4","avatarUrl":"/avatars/287b063b44a022d8576256e80e489c31.svg","isPro":false,"fullname":"alexiosss","user":"Alexislhb","type":"user"},{"_id":"62a993d80472c0b7f94027df","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62a993d80472c0b7f94027df/j5vp-IwLA2YBexylUHiQU.png","isPro":false,"fullname":"Zhang Yuanhan","user":"ZhangYuanhan","type":"user"},{"_id":"66a49f9b2b460286b05646b8","avatarUrl":"/avatars/5123cf7c3d00b5bc47a86011afd6abe8.svg","isPro":false,"fullname":"jwhe","user":"jwhejwhe","type":"user"},{"_id":"67e60ae6ac37824273d74389","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/YvPKZ_0gyJnvNwM1zK3JS.png","isPro":false,"fullname":"Dian Zheng","user":"zhengli1013","type":"user"},{"_id":"644fe6a9e1d7a97f3b66e906","avatarUrl":"/avatars/ad1a45f0b1c8a4d03ba87f2a3ce5a8f8.svg","isPro":false,"fullname":"Yuanming-Li","user":"Lymann","type":"user"},{"_id":"674fca91d9458feb8f3cf7f5","avatarUrl":"/avatars/6d36a6a9d015ae4623b1879a16c9c220.svg","isPro":false,"fullname":"Kun-Yu Lin","user":"LasNack","type":"user"},{"_id":"62aafa49f29ff279b51f0182","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62aafa49f29ff279b51f0182/rQx8QFQGOY2qIhqJ8zSRj.jpeg","isPro":false,"fullname":"yinanhe","user":"ynhe","type":"user"},{"_id":"6481764e8af4675862efb22e","avatarUrl":"/avatars/fc2e076bc861693f598a528a068a696e.svg","isPro":false,"fullname":"weichenfan","user":"weepiess2383","type":"user"},{"_id":"636daf995aaed143cd6c7447","avatarUrl":"/avatars/efee0647aeba593cd51550cf09e5a4df.svg","isPro":false,"fullname":"ZenT","user":"ZenT","type":"user"},{"_id":"6410213f928400b416424f6e","avatarUrl":"/avatars/4ce6a2a33d73119dc840217d7d053343.svg","isPro":false,"fullname":"Xudong Xu","user":"Sheldoooon","type":"user"},{"_id":"61f24cbb88b9b5abbe184a85","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/61f24cbb88b9b5abbe184a85/OvcJRU51yI8pdO77NBHLb.jpeg","isPro":false,"fullname":"zhangfan","user":"Fan-s","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
arxiv:2503.21755
Project page: https://vchitect.github.io/VBench-2.0-project/

VBench-2.0: Advancing Video Generation Benchmark Suite for Intrinsic Faithfulness

Published on Mar 27, 2025 · Submitted by Ziqi Huang on Mar 28, 2025
Authors: Dian Zheng, Ziqi Huang, Hongbo Liu, Kai Zou, Yinan He, Fan Zhang, Yuanhan Zhang, Jingwen He, Wei-Shi Zheng, Yu Qiao, Ziwei Liu

Abstract

VBench-2.0 introduces a benchmark for video generation that evaluates intrinsic faithfulness by assessing human fidelity, controllability, creativity, physics, and commonsense.

AI-generated summary

Video generation has advanced significantly, evolving from producing unrealistic outputs to generating videos that appear visually convincing and temporally coherent. To evaluate these video generative models, benchmarks such as VBench have been developed to assess their faithfulness, measuring factors like per-frame aesthetics, temporal consistency, and basic prompt adherence. However, these aspects mainly represent superficial faithfulness, which focuses on whether the video appears visually convincing rather than whether it adheres to real-world principles. While recent models perform increasingly well on these metrics, they still struggle to generate videos that are not just visually plausible but fundamentally realistic. To achieve real "world models" through video generation, the next frontier lies in intrinsic faithfulness: ensuring that generated videos adhere to physical laws, commonsense reasoning, anatomical correctness, and compositional integrity. Achieving this level of realism is essential for applications such as AI-assisted filmmaking and simulated world modeling. To bridge this gap, we introduce VBench-2.0, a next-generation benchmark designed to automatically evaluate video generative models for their intrinsic faithfulness. VBench-2.0 assesses five key dimensions: Human Fidelity, Controllability, Creativity, Physics, and Commonsense, each further broken down into fine-grained capabilities. Tailored for individual dimensions, our evaluation framework integrates generalists, such as state-of-the-art VLMs and LLMs, and specialists, including anomaly detection methods proposed for video generation. We conduct extensive annotations to ensure alignment with human judgment. By pushing beyond superficial faithfulness toward intrinsic faithfulness, VBench-2.0 aims to set a new standard for the next generation of video generative models.
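As a rough illustration of how such a multi-dimensional benchmark might be consumed, the sketch below aggregates per-video scores for the five dimensions into a single number. This is a hypothetical example, not the benchmark's actual evaluation framework: the dictionary layout, the [0, 1] score range, and the unweighted averaging are all assumptions.

```python
# Hypothetical sketch only -- NOT the official VBench-2.0 evaluation code.
# It illustrates collecting per-video scores for the five dimensions named
# in the paper and averaging them into one overall number.
from statistics import mean

# The five top-level dimensions from the paper.
DIMENSIONS = [
    "Human Fidelity",
    "Controllability",
    "Creativity",
    "Physics",
    "Commonsense",
]


def aggregate_dimension(scores: list[float]) -> float:
    """Average one dimension's per-video scores (assumed normalized to [0, 1])."""
    return mean(scores)


def overall_score(per_dimension: dict[str, list[float]]) -> float:
    """Unweighted mean over the five dimensions (the weighting is an assumption)."""
    return mean(aggregate_dimension(per_dimension[d]) for d in DIMENSIONS)


if __name__ == "__main__":
    # Illustrative numbers only; real scores would come from the benchmark's
    # generalist (VLM/LLM) and specialist (e.g., anomaly detection) evaluators.
    example = {
        "Human Fidelity": [0.81, 0.77, 0.85],
        "Controllability": [0.64, 0.70],
        "Creativity": [0.58, 0.61],
        "Physics": [0.49, 0.55],
        "Commonsense": [0.66, 0.72],
    }
    for d in DIMENSIONS:
        print(f"{d}: {aggregate_dimension(example[d]):.3f}")
    print(f"Overall: {overall_score(example):.3f}")
```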

Community


This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model links this paper yet.

Cite arxiv.org/abs/2503.21755 in a model README.md to link it from this page.
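As a minimal illustration (the exact wording and section name are up to the model author), mentioning the paper's arXiv URL anywhere in the README is enough to create the link:

```markdown
## Evaluation

Intrinsic faithfulness was evaluated with VBench-2.0
(https://arxiv.org/abs/2503.21755).
```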

Datasets citing this paper 4

Spaces citing this paper 5

Collections including this paper 7