arxiv:2406.17294

Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models

Published on Jun 25, 2024 · Submitted by Zhiqiang Hu on Jun 27, 2024
Authors: Wenhao Shi, Zhiqiang Hu, Yi Bin, Junhua Liu, Yang Yang, See-Kiong Ng, Lidong Bing, Roy Ka-Wei Lee

Abstract

AI-generated summary

The introduction of the MathV360K dataset and the Math-LLaVA model enhances multimodal mathematical reasoning by increasing dataset diversity and improving generalizability.

Large language models (LLMs) have demonstrated impressive reasoning capabilities, particularly in textual mathematical problem-solving. However, existing open-source image instruction fine-tuning datasets, containing limited question-answer pairs per image, do not fully exploit visual information to enhance the multimodal mathematical reasoning capabilities of Multimodal LLMs (MLLMs). To bridge this gap, we address the lack of high-quality, diverse multimodal mathematical datasets by collecting 40K high-quality images with question-answer pairs from 24 existing datasets and synthesizing 320K new pairs, creating the MathV360K dataset, which enhances both the breadth and depth of multimodal mathematical questions. We introduce Math-LLaVA, a LLaVA-1.5-based model fine-tuned with MathV360K. This novel approach significantly improves the multimodal mathematical reasoning capabilities of LLaVA-1.5, achieving a 19-point increase and comparable performance to GPT-4V on MathVista's minitest split. Furthermore, Math-LLaVA demonstrates enhanced generalizability, showing substantial improvements on the MMMU benchmark. Our research highlights the importance of dataset diversity and synthesis in advancing MLLMs' mathematical reasoning abilities. The code and data are available at: https://github.com/HZQ950419/Math-LLaVA.

Community

Paper author · Paper submitter · edited Jun 27, 2024

We introduce Math-LLaVA, a LLaVA-1.5-based model fine-tuned on MathV360K. This approach significantly improves the multimodal mathematical reasoning capabilities of LLaVA-1.5, yielding a 19-point increase on MathVista's minitest split and performance comparable to GPT-4V. Math-LLaVA also achieves 15.69% accuracy on the MathVision benchmark, outperforming Qwen-VL-Max (15.59%).

GitHub: https://github.com/HZQ950419/Math-LLaVA
Model: https://huggingface.co/Zhiqiang007/Math-LLaVA
Data: https://huggingface.co/Zhiqiang007/MathV360K
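
For quick local access to the released artifacts, the checkpoint and data can be pulled from the Hub with huggingface_hub. This is a minimal sketch, assuming only that the two repos above are standard Hub model/dataset repositories; the local paths are illustrative, and training/inference still follows the instructions in the GitHub repository.

# Minimal sketch (see assumptions above): download the Math-LLaVA checkpoint
# and the MathV360K data from the Hugging Face Hub.
from huggingface_hub import snapshot_download

# LLaVA-1.5-based checkpoint fine-tuned on MathV360K
model_dir = snapshot_download(
    repo_id="Zhiqiang007/Math-LLaVA",
    local_dir="checkpoints/Math-LLaVA",   # illustrative local path
)

# MathV360K instruction-tuning data (40K source QA pairs + 320K synthesized pairs)
data_dir = snapshot_download(
    repo_id="Zhiqiang007/MathV360K",
    repo_type="dataset",
    local_dir="data/MathV360K",           # illustrative local path
)

print("model:", model_dir)
print("data:", data_dir)

Loading the checkpoint for inference is expected to follow the LLaVA-1.5 workflow described in the GitHub repository rather than a single off-the-shelf call.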


Models citing this paper 1

Datasets citing this paper 1

Spaces citing this paper 0

Collections including this paper 7