Paper page - Visual Riddles: a Commonsense and World Knowledge Challenge for Large Vision and Language Models
https://visual-riddles.github.io/\n","updatedAt":"2024-07-30T02:49:55.053Z","author":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","fullname":"AK","name":"akhaliq","type":"user","isPro":false,"isHf":true,"isHfAdmin":false,"isMod":false,"followerCount":9179,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.34701573848724365},"editors":["akhaliq"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg"],"reactions":[],"isReport":false}},{"id":"66a9926a539f1ccd194c6bef","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2024-07-31T01:24:58.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [NTSEBENCH: Cognitive Reasoning Benchmark for Vision Language Models](https://huggingface.co/papers/2407.10380) (2024)\n* [HaloQuest: A Visual Hallucination Dataset for Advancing Multimodal Reasoning](https://huggingface.co/papers/2407.15680) (2024)\n* [Evaluating Visual and Cultural Interpretation: The K-Viscuit Benchmark with Human-VLM Collaboration](https://huggingface.co/papers/2406.16469) (2024)\n* [CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark](https://huggingface.co/papers/2406.05967) (2024)\n* [Losing Visual Needles in Image Haystacks: Vision Language Models are Easily Distracted in Short and Long Contexts](https://huggingface.co/papers/2406.16851) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* NTSEBENCH: Cognitive Reasoning Benchmark for Vision Language Models (2024): https://huggingface.co/papers/2407.10380
* HaloQuest: A Visual Hallucination Dataset for Advancing Multimodal Reasoning (2024): https://huggingface.co/papers/2407.15680
* Evaluating Visual and Cultural Interpretation: The K-Viscuit Benchmark with Human-VLM Collaboration (2024): https://huggingface.co/papers/2406.16469
* CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark (2024): https://huggingface.co/papers/2406.05967
* Losing Visual Needles in Image Haystacks: Vision Language Models are Easily Distracted in Short and Long Contexts (2024): https://huggingface.co/papers/2406.16851

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend
\n","updatedAt":"2024-07-31T01:24:58.200Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.710142970085144},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2407.19474","authors":[{"_id":"66a854cddb77470d3b3ccbed","user":{"_id":"64680ec8efbd7ae309749b8a","avatarUrl":"/avatars/d38ff11ce1678c186e6452f0259992fc.svg","isPro":false,"fullname":"Yonatan Bitton","user":"Yonatan-Bitton","type":"user"},"name":"Nitzan Bitton-Guetta","status":"extracted_confirmed","statusLastChangedAt":"2024-07-30T04:01:03.702Z","hidden":false},{"_id":"66a854cddb77470d3b3ccbee","user":{"_id":"631da07f6d6a5870f3d2c375","avatarUrl":"/avatars/242e344dca08057bdf1eef09f69b41b2.svg","isPro":false,"fullname":"Aviv Slobodkin","user":"lovodkin93","type":"user"},"name":"Aviv Slobodkin","status":"admin_assigned","statusLastChangedAt":"2024-07-30T09:00:09.922Z","hidden":false},{"_id":"66a854cddb77470d3b3ccbef","user":{"_id":"640b2a6d91d9f65c58a71880","avatarUrl":"/avatars/e5b9bdddb14fc4d3f031cee2eaacd698.svg","isPro":false,"fullname":"Aviya Maimon","user":"Aviya","type":"user"},"name":"Aviya Maimon","status":"extracted_pending","statusLastChangedAt":"2024-07-30T02:49:53.023Z","hidden":false},{"_id":"66a854cddb77470d3b3ccbf0","user":{"_id":"62cd15106f6f759f2666d03c","avatarUrl":"/avatars/695a6872f55119f08c8eef31e215a498.svg","isPro":false,"fullname":"Eliya Habba","user":"eliyahabba","type":"user"},"name":"Eliya Habba","status":"admin_assigned","statusLastChangedAt":"2024-07-30T09:00:15.336Z","hidden":false},{"_id":"66a854cddb77470d3b3ccbf1","user":{"_id":"62a7581cf049be35252a2e7c","avatarUrl":"/avatars/91de4eb48f51bfd6e028c08ccfa98f8c.svg","isPro":false,"fullname":"Royi Rassin","user":"Royir","type":"user"},"name":"Royi Rassin","status":"admin_assigned","statusLastChangedAt":"2024-07-30T09:00:22.822Z","hidden":false},{"_id":"66a854cddb77470d3b3ccbf2","user":{"_id":"632e0771ae0a7b1fc95630bf","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1663961181981-632e0771ae0a7b1fc95630bf.jpeg","isPro":false,"fullname":"Yonatan","user":"yonatanbitton","type":"user"},"name":"Yonatan Bitton","status":"admin_assigned","statusLastChangedAt":"2024-07-30T09:00:28.881Z","hidden":false},{"_id":"66a854cddb77470d3b3ccbf3","name":"Idan Szpektor","hidden":false},{"_id":"66a854cddb77470d3b3ccbf4","user":{"_id":"62cd956bc589e4a9e23ea347","avatarUrl":"/avatars/dbdccd9258c9d18173d312a8bf14dd6e.svg","isPro":false,"fullname":"Amir Globerson","user":"amirgloberson","type":"user"},"name":"Amir Globerson","status":"admin_assigned","statusLastChangedAt":"2024-07-30T09:00:41.852Z","hidden":false},{"_id":"66a854cddb77470d3b3ccbf5","name":"Yuval Elovici","hidden":false}],"publishedAt":"2024-07-28T11:56:03.000Z","submittedOnDailyAt":"2024-07-30T01:19:55.045Z","title":"Visual Riddles: a Commonsense and World Knowledge Challenge for Large\n Vision and Language 
Models","submittedOnDailyBy":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","isPro":false,"fullname":"AK","user":"akhaliq","type":"user"},"summary":"Imagine observing someone scratching their arm; to understand why, additional\ncontext would be necessary. However, spotting a mosquito nearby would\nimmediately offer a likely explanation for the person's discomfort, thereby\nalleviating the need for further information. This example illustrates how\nsubtle visual cues can challenge our cognitive skills and demonstrates the\ncomplexity of interpreting visual scenarios. To study these skills, we present\nVisual Riddles, a benchmark aimed to test vision and language models on visual\nriddles requiring commonsense and world knowledge. The benchmark comprises 400\nvisual riddles, each featuring a unique image created by a variety of\ntext-to-image models, question, ground-truth answer, textual hint, and\nattribution. Human evaluation reveals that existing models lag significantly\nbehind human performance, which is at 82\\% accuracy, with Gemini-Pro-1.5\nleading with 40\\% accuracy. Our benchmark comes with automatic evaluation tasks\nto make assessment scalable. These findings underscore the potential of Visual\nRiddles as a valuable resource for enhancing vision and language models'\ncapabilities in interpreting complex visual scenarios.","upvotes":23,"discussionId":"66a854d1db77470d3b3cccbe","ai_summary":"Visual Riddles is a benchmark that tests vision and language models on visual riddles requiring commonsense and world knowledge, highlighting the gap between human and model performance.","ai_keywords":["vision and language models","visual riddles","commonsense","world knowledge","human evaluation","text-to-image models"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"64680ec8efbd7ae309749b8a","avatarUrl":"/avatars/d38ff11ce1678c186e6452f0259992fc.svg","isPro":false,"fullname":"Yonatan Bitton","user":"Yonatan-Bitton","type":"user"},{"_id":"6039478ab3ecf716b1a5fd4d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6039478ab3ecf716b1a5fd4d/_Thy4E7taiSYBLKxEKJbT.jpeg","isPro":true,"fullname":"taesiri","user":"taesiri","type":"user"},{"_id":"631da07f6d6a5870f3d2c375","avatarUrl":"/avatars/242e344dca08057bdf1eef09f69b41b2.svg","isPro":false,"fullname":"Aviv Slobodkin","user":"lovodkin93","type":"user"},{"_id":"61868ce808aae0b5499a2a95","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/61868ce808aae0b5499a2a95/F6BA0anbsoY_Z7M1JrwOe.jpeg","isPro":true,"fullname":"Sylvain Filoni","user":"fffiloni","type":"user"},{"_id":"609eb1fc1172dedeac2200db","avatarUrl":"/avatars/a7384a8d3389610b38388c100a28c86d.svg","isPro":false,"fullname":"H","user":"Eran","type":"user"},{"_id":"62a7581cf049be35252a2e7c","avatarUrl":"/avatars/91de4eb48f51bfd6e028c08ccfa98f8c.svg","isPro":false,"fullname":"Royi Rassin","user":"Royir","type":"user"},{"_id":"62f4ac43567dbf9a39f75474","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1661497922734-62f4ac43567dbf9a39f75474.jpeg","isPro":false,"fullname":"Daniel 
Huynh","user":"dhuynh95","type":"user"},{"_id":"63477bb66f8773f2a28daa15","avatarUrl":"/avatars/9a369763a73278cddcf2abcae594865d.svg","isPro":false,"fullname":"Dhruv Diddi","user":"ddiddi","type":"user"},{"_id":"66897ed00501525cc0029a1e","avatarUrl":"/avatars/277194ea820539d55e2035e554cf4cf3.svg","isPro":false,"fullname":"Lina Salazar","user":"12leana","type":"user"},{"_id":"66897f607ea384a9f81bdd4f","avatarUrl":"/avatars/47963b7a66a6ed3079a8a7d6ea0620d0.svg","isPro":false,"fullname":"Li Zhang","user":"zhaling","type":"user"},{"_id":"66897f980501525cc002bb66","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/66897f980501525cc002bb66/eEwsFitSMsA4PJc3Ribbm.png","isPro":false,"fullname":"Chrisopher Ponce","user":"PonceChrisCanada","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary
Visual Riddles is a benchmark that tests vision and language models on visual riddles requiring commonsense and world knowledge, highlighting the gap between human and model performance.
Imagine observing someone scratching their arm; to understand why, additional context would be necessary. However, spotting a mosquito nearby would immediately offer a likely explanation for the person's discomfort, thereby obviating the need for further information. This example illustrates how subtle visual cues can challenge our cognitive skills and demonstrates the complexity of interpreting visual scenarios. To study these skills, we present Visual Riddles, a benchmark aimed at testing vision and language models on visual riddles that require commonsense and world knowledge. The benchmark comprises 400 visual riddles, each featuring a unique image created by a variety of text-to-image models, a question, a ground-truth answer, a textual hint, and attribution. Human evaluation reveals that existing models lag significantly behind human performance, which stands at 82% accuracy, with Gemini-Pro-1.5 leading among models at 40% accuracy. Our benchmark comes with automatic evaluation tasks to make assessment scalable. These findings underscore the potential of Visual Riddles as a valuable resource for enhancing vision and language models' capabilities in interpreting complex visual scenarios.
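
The abstract describes each riddle as a bundle of an image, a question, a ground-truth answer, a textual hint, and attribution, alongside automatic evaluation tasks for scalable assessment. The sketch below is a minimal, hypothetical illustration of that structure in Python; the field names, example values, and the exact-match scorer are assumptions made for illustration and do not reflect the benchmark's official schema or its actual evaluation protocol.

```python
from dataclasses import dataclass


@dataclass
class VisualRiddle:
    """One benchmark item, mirroring the fields listed in the abstract.
    Field names are illustrative, not the dataset's official schema."""
    image_path: str    # unique image generated by a text-to-image model
    question: str      # the riddle question about the image
    answer: str        # ground-truth answer
    hint: str          # textual hint pointing at the key visual cue
    attribution: str   # source supporting the answer (e.g., a URL)


def exact_match_accuracy(predictions: list[str], riddles: list[VisualRiddle]) -> float:
    """Naive exact-match scoring. The paper's automatic evaluation handles
    open-ended answers, so treat this only as a placeholder metric."""
    correct = sum(
        pred.strip().lower() == riddle.answer.strip().lower()
        for pred, riddle in zip(predictions, riddles)
    )
    return correct / len(riddles)


# Hypothetical example echoing the mosquito scenario from the abstract.
riddle = VisualRiddle(
    image_path="riddles/scratching_arm.png",
    question="Why is the person scratching their arm?",
    answer="A mosquito nearby suggests they were bitten.",
    hint="Look closely at what is flying near the arm.",
    attribution="https://example.org/mosquito-bites",
)
print(exact_match_accuracy(["A mosquito nearby suggests they were bitten."], [riddle]))  # 1.0
```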