Paper page - CPPO: Contrastive Perception for Vision Language Policy Optimization
\n","updatedAt":"2026-01-06T22:51:24.895Z","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6888807415962219},"editors":["avahal"],"editorAvatarUrls":["/avatars/743a009681d5d554c27e04300db9f267.svg"],"reactions":[],"isReport":false}},{"id":"695db8a3c46a00ee2f1238ad","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2026-01-07T01:36:35.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Boosting RL-Based Visual Reasoning with Selective Adversarial Entropy Intervention](https://huggingface.co/papers/2512.10414) (2025)\n* [Stable and Efficient Single-Rollout RL for Multimodal Reasoning](https://huggingface.co/papers/2512.18215) (2025)\n* [Efficient Reinforcement Learning with Semantic and Token Entropy for LLM Reasoning](https://huggingface.co/papers/2512.04359) (2025)\n* [Reassessing the Role of Supervised Fine-Tuning: An Empirical Study in VLM Reasoning](https://huggingface.co/papers/2512.12690) (2025)\n* [CodeV: Code with Images for Faithful Visual Reasoning via Tool-Aware Policy Optimization](https://huggingface.co/papers/2511.19661) (2025)\n* [VisPlay: Self-Evolving Vision-Language Models from Images](https://huggingface.co/papers/2511.15661) (2025)\n* [Reasoning Within the Mind: Dynamic Multimodal Interleaving in Latent Space](https://huggingface.co/papers/2512.12623) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
\n
The following papers were recommended by the Semantic Scholar API
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out [this Space](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers).
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
\n","updatedAt":"2026-01-07T01:36:35.434Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7433886528015137},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}},{"id":"696be61e289e9ebd7150bb0f","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false},"createdAt":"2026-01-17T19:42:22.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"arXivlens breakdown of this paper ๐ https://arxivlens.com/PaperView/Details/cppo-contrastive-perception-for-vision-language-policy-optimization-461-3e68a555\n\n- Executive Summary\n- Detailed Breakdown\n- Practical Applications","html":"
\n","updatedAt":"2026-01-17T19:42:22.552Z","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6762735843658447},"editors":["avahal"],"editorAvatarUrls":["/avatars/743a009681d5d554c27e04300db9f267.svg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2601.00501","authors":[{"_id":"695d52e7c03d6d81e4399dce","user":{"_id":"6495d9b6e6692d3676406834","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/W7ktMxTNo75rqUtTWJfCw.png","isPro":false,"fullname":"Ahmad Rezaei","user":"AhNr","type":"user"},"name":"Ahmad Rezaei","status":"claimed_verified","statusLastChangedAt":"2026-01-07T09:26:22.766Z","hidden":false},{"_id":"695d52e7c03d6d81e4399dcf","user":{"_id":"64ed16472f0bd58125027ff1","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/H59YRVHXb4EUTMK5rX47k.png","isPro":false,"fullname":"Mohsen Gholami","user":"mgholami","type":"user"},"name":"Mohsen Gholami","status":"claimed_verified","statusLastChangedAt":"2026-01-08T08:33:19.178Z","hidden":false},{"_id":"695d52e7c03d6d81e4399dd0","user":{"_id":"67f8267b33ef0ce3cdc24ce8","avatarUrl":"/avatars/f23e5e3ec86b450f7f29548253f217b6.svg","isPro":false,"fullname":"Saeed Ranjbar Alvar","user":"saeedranjbar12","type":"user"},"name":"Saeed Ranjbar Alvar","status":"claimed_verified","statusLastChangedAt":"2026-01-07T09:26:24.914Z","hidden":false},{"_id":"695d52e7c03d6d81e4399dd1","name":"Kevin Cannons","hidden":false},{"_id":"695d52e7c03d6d81e4399dd2","name":"Mohammad Asiful Hossain","hidden":false},{"_id":"695d52e7c03d6d81e4399dd3","name":"Zhou Weimin","hidden":false},{"_id":"695d52e7c03d6d81e4399dd4","name":"Shunbo Zhou","hidden":false},{"_id":"695d52e7c03d6d81e4399dd5","name":"Yong Zhang","hidden":false},{"_id":"695d52e7c03d6d81e4399dd6","user":{"_id":"6545976cb8ac1a89ffa8d6cb","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/fIBp6wUQ8Ln3gIDwvSv8-.png","isPro":false,"fullname":"Mohammad Akbari","user":"moak7","type":"user"},"name":"Mohammad Akbari","status":"claimed_verified","statusLastChangedAt":"2026-01-07T09:26:18.199Z","hidden":false}],"publishedAt":"2026-01-01T22:48:26.000Z","submittedOnDailyAt":"2026-01-06T15:57:49.685Z","title":"CPPO: Contrastive Perception for Vision Language Policy Optimization","submittedOnDailyBy":{"_id":"6495d9b6e6692d3676406834","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/W7ktMxTNo75rqUtTWJfCw.png","isPro":false,"fullname":"Ahmad Rezaei","user":"AhNr","type":"user"},"summary":"We introduce CPPO, a Contrastive Perception Policy Optimization method for finetuning vision-language models (VLMs). While reinforcement learning (RL) has advanced reasoning in language models, extending it to multimodal reasoning requires improving both the perception and reasoning aspects. Prior works tackle this challenge mainly with explicit perception rewards, but disentangling perception tokens from reasoning tokens is difficult, requiring extra LLMs, ground-truth data, forced separation of perception from reasoning by policy model, or applying rewards indiscriminately to all output tokens. CPPO addresses this problem by detecting perception tokens via entropy shifts in the model outputs under perturbed input images. 
CPPO then extends the RL objective function with a Contrastive Perception Loss (CPL) that enforces consistency under information-preserving perturbations and sensitivity under information-removing ones. Experiments show that CPPO surpasses previous perception-rewarding methods, while avoiding extra models, making training more efficient and scalable.","upvotes":7,"discussionId":"695d52e7c03d6d81e4399dd7","githubRepo":"https://github.com/vbdi/cppo","githubRepoAddedBy":"auto","ai_summary":"CPPO improves vision-language model fine-tuning by detecting perception tokens through entropy shifts and using contrastive perception loss to enhance multimodal reasoning efficiency.","ai_keywords":["Contrastive Perception Policy Optimization","vision-language models","reinforcement learning","perception tokens","reasoning tokens","entropy shifts","Contrastive Perception Loss","information-preserving perturbations","information-removing perturbations"],"githubStars":4,"organization":{"_id":"68ae0f09570f0a1176411a35","name":"vbdai","fullname":"Huawei's Vancouver VBDAI Lab","avatar":"https://cdn-uploads.huggingface.co/production/uploads/6545976cb8ac1a89ffa8d6cb/RqPqXMUOkpd6O4rVJACOF.png"}},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"6495d9b6e6692d3676406834","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/W7ktMxTNo75rqUtTWJfCw.png","isPro":false,"fullname":"Ahmad Rezaei","user":"AhNr","type":"user"},{"_id":"67f8267b33ef0ce3cdc24ce8","avatarUrl":"/avatars/f23e5e3ec86b450f7f29548253f217b6.svg","isPro":false,"fullname":"Saeed Ranjbar Alvar","user":"saeedranjbar12","type":"user"},{"_id":"64ed16472f0bd58125027ff1","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/H59YRVHXb4EUTMK5rX47k.png","isPro":false,"fullname":"Mohsen Gholami","user":"mgholami","type":"user"},{"_id":"6545976cb8ac1a89ffa8d6cb","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/fIBp6wUQ8Ln3gIDwvSv8-.png","isPro":false,"fullname":"Mohammad Akbari","user":"moak7","type":"user"},{"_id":"660b9e8157b3737069a2bbcb","avatarUrl":"/avatars/7acd91384ab720ead3277896ded062de.svg","isPro":false,"fullname":"Kevin","user":"kcannons","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"686db5d4af2b856fabbf13aa","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/6BjMv2LVNoqvbX8fQSTPI.png","isPro":false,"fullname":"V bbbb","user":"Bbbbbnnn","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0,"organization":{"_id":"68ae0f09570f0a1176411a35","name":"vbdai","fullname":"Huawei's Vancouver VBDAI Lab","avatar":"https://cdn-uploads.huggingface.co/production/uploads/6545976cb8ac1a89ffa8d6cb/RqPqXMUOkpd6O4rVJACOF.png"}}">
AI-generated summary

CPPO improves vision-language model fine-tuning by detecting perception tokens through entropy shifts and using a contrastive perception loss to enhance multimodal reasoning efficiency.
Abstract

We introduce CPPO, a Contrastive Perception Policy Optimization method for fine-tuning vision-language models (VLMs). While reinforcement learning (RL) has advanced reasoning in language models, extending it to multimodal reasoning requires improving both the perception and reasoning aspects. Prior works tackle this challenge mainly with explicit perception rewards, but disentangling perception tokens from reasoning tokens is difficult, requiring extra LLMs, ground-truth data, forced separation of perception from reasoning by the policy model, or applying rewards indiscriminately to all output tokens. CPPO addresses this problem by detecting perception tokens via entropy shifts in the model outputs under perturbed input images. CPPO then extends the RL objective function with a Contrastive Perception Loss (CPL) that enforces consistency under information-preserving perturbations and sensitivity under information-removing ones. Experiments show that CPPO surpasses previous perception-rewarding methods while avoiding extra models, making training more efficient and scalable.
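As a rough illustration of the detection idea, the sketch below flags tokens whose predictive entropy shifts most when the input image is perturbed. It is a minimal sketch under assumed PyTorch-style logits of shape [T, V] for the T response tokens; the function names, the absolute entropy difference, and the top-fraction threshold are illustrative assumptions, not the paper's actual detection rule.

```python
# Minimal sketch (not the paper's implementation): flag "perception" tokens as
# those whose output entropy shifts most when the input image is perturbed.
import torch
import torch.nn.functional as F


def token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Per-token entropy of the next-token distribution; logits has shape [T, V]."""
    logp = F.log_softmax(logits, dim=-1)
    return -(logp.exp() * logp).sum(dim=-1)  # shape [T]


def perception_token_mask(logits_clean: torch.Tensor,
                          logits_perturbed: torch.Tensor,
                          top_frac: float = 0.2) -> torch.Tensor:
    """Boolean mask over response tokens whose entropy changes the most between
    the clean image and a perturbed version of it (top_frac is an assumed knob)."""
    shift = (token_entropy(logits_perturbed) - token_entropy(logits_clean)).abs()
    k = max(1, int(top_frac * shift.numel()))
    threshold = shift.topk(k).values.min()
    return shift >= threshold  # shape [T], True for assumed "perception" tokens
```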
CPPO (Contrastive Perception Policy Optimization) is a method for fine-tuning vision-language models (VLMs) with reinforcement learning. Instead of relying on explicit perception rewards or auxiliary models, it identifies perception tokens via entropy changes under perturbed input images and augments the policy objective with a contrastive perception loss, improving multimodal reasoning performance and training efficiency.
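To make the contrastive objective concrete, here is a hedged sketch of what such a loss term could look like, reusing the perception-token mask from the previous sketch; the KL-based formulation, the hinge margin, and the way it would be weighted into the RL objective are assumptions for illustration, not the paper's exact CPL.

```python
# Hedged sketch of a contrastive perception term (not the paper's exact CPL):
# keep perception-token distributions consistent under an information-preserving
# perturbation, and push them apart (up to a margin) under an information-removing one.
import torch
import torch.nn.functional as F


def contrastive_perception_loss(logits_clean: torch.Tensor,     # [T, V], original image
                                logits_preserve: torch.Tensor,  # [T, V], info-preserving perturbation
                                logits_remove: torch.Tensor,    # [T, V], info-removing perturbation
                                mask: torch.Tensor,             # [T] bool, perception tokens
                                margin: float = 1.0) -> torch.Tensor:
    logp_clean = F.log_softmax(logits_clean[mask], dim=-1)
    logp_pres = F.log_softmax(logits_preserve[mask], dim=-1)
    logp_rem = F.log_softmax(logits_remove[mask], dim=-1)

    # Consistency: penalize divergence from the clean distribution when the
    # perturbation preserves the visual information.
    kl_pres = F.kl_div(logp_pres, logp_clean, log_target=True, reduction="batchmean")
    # Sensitivity: require at least `margin` divergence when the perturbation
    # removes the visual information.
    kl_rem = F.kl_div(logp_rem, logp_clean, log_target=True, reduction="batchmean")
    return kl_pres + F.relu(margin - kl_rem)


# Illustrative combination with an RL policy loss (the weight is an assumption):
# total_loss = policy_loss + lambda_cpl * contrastive_perception_loss(...)
```

In CPPO's framing, a term of this kind is added to the RL objective; the actual perturbations, token-selection rule, and weighting are defined in the paper.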