arxiv:2502.17092

Shakti-VLMs: Scalable Vision-Language Models for Enterprise AI

Published on Feb 24, 2025 · Submitted by Syed Abdul Gaffar Shakhadri on Feb 26, 2025

Abstract

Shakti VLM models achieve competitive performance in multimodal learning with fewer tokens and data through architectural innovations like QK-Normalization and a three-stage training strategy.

AI-generated summary

We introduce Shakti VLM, a family of vision-language models with 1B and 4B parameters designed to address data efficiency challenges in multimodal learning. While recent VLMs achieve strong performance through extensive training data, Shakti models leverage architectural innovations to attain competitive results with fewer tokens. Key advancements include QK-Normalization for attention stability, hybrid normalization techniques, and enhanced positional encoding. A three-stage training strategy further optimizes learning efficiency. Evaluations show that Shakti-VLM-1B and Shakti-VLM-4B excel in document understanding, visual reasoning, OCR extraction, and general multimodal reasoning. Our results highlight that high performance can be achieved through model design and training strategy rather than sheer data volume, making Shakti an efficient solution for enterprise-scale multimodal tasks.
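The abstract credits QK-Normalization with stabilizing attention. As a hedged illustration of that technique only (the paper's exact formulation is not reproduced on this page), the PyTorch sketch below applies a per-head LayerNorm to queries and keys before the attention dot product; the module name, dimensions, and choice of LayerNorm are illustrative assumptions.

```python
# Hedged sketch of QK-Normalization inside a multi-head attention module.
# Assumption: per-head LayerNorm on queries/keys; Shakti-VLM's exact
# normalization choice and placement may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QKNormAttention(nn.Module):
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)
        # QK-Norm: normalize queries and keys per head before the dot product,
        # which bounds the attention logits and stabilizes the softmax.
        self.q_norm = nn.LayerNorm(self.head_dim)
        self.k_norm = nn.LayerNorm(self.head_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape each projection to (batch, heads, tokens, head_dim).
        q = q.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        q, k = self.q_norm(q), self.k_norm(k)  # the QK-Normalization step
        out = F.scaled_dot_product_attention(q, k, v)
        return self.proj(out.transpose(1, 2).reshape(b, t, d))

# Usage: a batch of 2 sequences of 16 tokens with 512-dim embeddings.
x = torch.randn(2, 16, 512)
y = QKNormAttention()(x)  # -> shape (2, 16, 512)
```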

Community

Paper author · Paper submitter

We introduce Shakti VLM, a family of vision-language models with 1B and 4B parameters designed to address data efficiency challenges in multimodal learning. While recent VLMs achieve strong performance through extensive training data, Shakti models leverage architectural innovations to attain competitive results with fewer tokens. Key advancements include QK-Normalization for attention stability, hybrid normalization techniques, and enhanced positional encoding. A three-stage training strategy further optimizes learning efficiency. Evaluations show that Shakti-VLM-1B and Shakti-VLM-4B excel in document understanding, visual reasoning, OCR extraction, and general multimodal reasoning. Our results highlight that high performance can be achieved through model design and training strategy rather than sheer data volume, making Shakti an efficient solution for enterprise-scale multimodal tasks.
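The comment also mentions hybrid normalization techniques. One common pattern the term could refer to is combining a pre-norm and a post-norm around each sublayer ("sandwich" style); the sketch below shows that pattern purely as an assumption about the general idea, since the exact scheme used in Shakti-VLM is not described on this page.

```python
# Hedged sketch of one hybrid-normalization pattern: a pre-norm before the
# sublayer plus a post-norm on its output ("sandwich" style). This is an
# assumption about what "hybrid normalization" could look like, not the
# scheme documented for Shakti-VLM.
import torch
import torch.nn as nn

class HybridNormMLPBlock(nn.Module):
    def __init__(self, dim: int = 512, hidden: int = 2048):
        super().__init__()
        self.pre_norm = nn.LayerNorm(dim)    # stabilizes the sublayer input
        self.post_norm = nn.LayerNorm(dim)   # keeps the residual branch bounded
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection around a pre-normed, post-normed MLP sublayer.
        return x + self.post_norm(self.mlp(self.pre_norm(x)))

# Usage:
x = torch.randn(2, 16, 512)
y = HybridNormMLPBlock()(x)  # -> shape (2, 16, 512)
```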

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2502.17092 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2502.17092 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2502.17092 in a Space README.md to link it from this page.

Collections including this paper 2