
arxiv:2510.01141

Apriel-1.5-15b-Thinker

Published on Oct 1, 2025
Submitted by Aman Tiwari on Oct 6, 2025
#1 Paper of the day
Authors: Shruthan Radhakrishna, Aman Tiwari, Aanjaneya Shukla, Masoud Hashemi, Rishabh Maheshwary, Shiva Krishna Reddy Malay, Jash Mehta, Pulkit Pattnaik, Saloni Mittal, Khalil Slimi, Kelechi Ogueji, Akintunde Oladipo, Soham Parikh, Oluwanifemi Bamgbose, Toby Liang, Ahmed Masry, Khyati Mahajan, Sai Rajeswar Mudumba, Vikas Yadav, Sathwik Tejaswi Madhusudhan, Torsten Scholak, Sagar Davasam, Srinivas Sunkara, Nicholas Chapados
Abstract

We present Apriel-1.5-15B-Thinker, a 15-billion parameter open-weights multimodal reasoning model that achieves frontier-level performance through training design rather than sheer scale. Starting from Pixtral-12B, we apply a progressive three-stage methodology: (1) depth upscaling to expand reasoning capacity without pretraining from scratch, (2) staged continual pre-training that first develops foundational text and vision understanding, then enhances visual reasoning through targeted synthetic data generation addressing spatial structure, compositional understanding, and fine-grained perception, and (3) high-quality text-only supervised fine-tuning on curated instruction-response pairs with explicit reasoning traces spanning mathematics, coding, science, and tool use. Notably, our model achieves competitive results without reinforcement learning or preference optimization, isolating the contribution of our data-centric continual pre-training approach. On the Artificial Analysis Intelligence Index, Apriel-1.5-15B-Thinker attains a score of 52, matching DeepSeek-R1-0528 despite requiring significantly fewer computational resources. Across ten image benchmarks, its performance is on average within five points of Gemini-2.5-Flash and Claude Sonnet-3.7, a key achievement for a model operating within single-GPU deployment constraints. Our results demonstrate that thoughtful mid-training design can close substantial capability gaps without massive scale, making frontier-level multimodal reasoning accessible to organizations with limited infrastructure. We release the model checkpoint, all training recipes, and evaluation protocols under the MIT license to advance open-source research.

AI-generated summary

A 15-billion parameter multimodal reasoning model achieves competitive performance through a progressive training methodology without reinforcement learning, demonstrating efficient use of computational resources.
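Depth upscaling, stage (1) above, is the least familiar of the three stages, so here is a minimal PyTorch sketch of the general idea: grow a pretrained decoder by duplicating existing blocks, so that continual pre-training starts from trained weights instead of random ones. The attribute paths (`model.model.layers`, `config.num_hidden_layers`) and the choice to repeat a contiguous middle span are illustrative assumptions, not the paper's exact recipe.

```python
import copy

import torch.nn as nn


def depth_upscale(model: nn.Module, target_layers: int) -> nn.Module:
    """Grow a decoder-only transformer by duplicating a middle span of blocks."""
    layers = model.model.layers            # nn.ModuleList of decoder blocks (assumed path)
    n = len(layers)
    extra = target_layers - n
    assert 0 <= extra <= n, "this sketch at most doubles the depth in one pass"
    start = (n - extra) // 2               # span to duplicate: layers[start : start + extra]
    duplicated = [copy.deepcopy(layers[i]) for i in range(start, start + extra)]
    model.model.layers = nn.ModuleList(
        list(layers[: start + extra]) + duplicated + list(layers[start + extra:])
    )
    model.config.num_hidden_layers = target_layers
    # Caveat: code that caches a per-block index (e.g. an attention layer_idx
    # used for KV caching) must have those indices refreshed after the copy.
    return model
```

Applied to a Pixtral-12B-scale decoder, duplicating a span of blocks in this fashion is roughly how one could reach about 15B parameters before stage (2) continues training the enlarged stack.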

Community

Paper author, Paper submitter

Introducing ServiceNow's 15B-parameter model that matches DeepSeek-R1-0528, Mistral-medium-1.2 and Gemini Flash 2.5 on the Artificial Analysis Index (AAI 52), delivering comparable results at a fraction of the size (at least 8-10 times smaller).

Frontier-level reasoning on a single GPU (a hypothetical quick-start is sketched after this list)
No RL phase: the step-change comes from mid-training
Reasons over images: image + text mid-training enables the model to reason over images without additional training
Great at reasoning: AIME2025: 88, GPQA: 71, LCB: 73
Follows instructions reliably: IFBench: 62
Tau2 Bench (Telecom): 68 → ready for real-world workflows
Open-weights model to further research and reproducibility (MIT license)


will the data be released?

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

Thanks for your work! I have a quick question: how do you organize the data formats for tasks like Image Reconstruction and Visual Matching in CPT Stage 2? I think this synthetic augmentation approach is particularly interesting.

Thank you!

arXiv explained breakdown of this paper 👉 https://arxivexplained.com/papers/apriel-15-15b-thinker


Models citing this paper 11


Datasets citing this paper 0

No datasets link to this paper.

Cite arxiv.org/abs/2510.01141 in a dataset README.md to link it from this page.

Spaces citing this paper 20

Collections including this paper 10