\n","updatedAt":"2026-02-19T01:40:36.016Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6490722894668579},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2602.15547","authors":[{"_id":"69958815ed493589ceb5be21","user":{"_id":"64d22f33032a420d1863b6ea","avatarUrl":"/avatars/ed3eaf4bab70dd6ab9a2b67b5928e4fb.svg","isPro":false,"fullname":"Mohammad Kalim Akram","user":"makram93","type":"user"},"name":"Mohammad Kalim Akram","status":"admin_assigned","statusLastChangedAt":"2026-02-18T13:06:40.906Z","hidden":false},{"_id":"69958815ed493589ceb5be22","user":{"_id":"64c23f6d569648a60737eddb","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64c23f6d569648a60737eddb/iZq7bp-yYaGl5VBVoN5Dg.jpeg","isPro":false,"fullname":"Saba Sturua","user":"jupyterjazz","type":"user"},"name":"Saba Sturua","status":"admin_assigned","statusLastChangedAt":"2026-02-18T13:06:49.861Z","hidden":false},{"_id":"69958815ed493589ceb5be23","user":{"_id":"6911a37ace661438b73ff25d","avatarUrl":"/avatars/21ae8db1f909229d22d2c93e4f1cb0e0.svg","isPro":false,"fullname":"Nastia Havriushenko","user":"ahavrius","type":"user"},"name":"Nastia Havriushenko","status":"admin_assigned","statusLastChangedAt":"2026-02-18T13:06:58.693Z","hidden":false},{"_id":"69958815ed493589ceb5be24","user":{"_id":"645b5a5b438d6cfbe1ad12a1","avatarUrl":"/avatars/f53e63f8e52115a95814b7be1f07a391.svg","isPro":false,"fullname":"Quentin Herreros","user":"qherreros","type":"user"},"name":"Quentin Herreros","status":"admin_assigned","statusLastChangedAt":"2026-02-18T13:07:05.539Z","hidden":false},{"_id":"69958815ed493589ceb5be25","user":{"_id":"6476ff2699a5ce743ccea3fc","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6476ff2699a5ce743ccea3fc/zmFmF8tXXDaAGcl8RYiRr.jpeg","isPro":false,"fullname":"Michael Günther","user":"michael-guenther","type":"user"},"name":"Michael Günther","status":"claimed_verified","statusLastChangedAt":"2026-02-19T09:52:24.830Z","hidden":false},{"_id":"69958815ed493589ceb5be26","user":{"_id":"60638400b1703ddba0d458a7","avatarUrl":"/avatars/50228a18e7f211275a09e3cbd6e2931e.svg","isPro":false,"fullname":"Maximilian Werk","user":"mwerk","type":"user"},"name":"Maximilian Werk","status":"admin_assigned","statusLastChangedAt":"2026-02-18T13:07:16.360Z","hidden":false},{"_id":"69958815ed493589ceb5be27","user":{"_id":"603763514de52ff951d89793","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/603763514de52ff951d89793/n-QouGYg7oE5QeDaAb3Ns.png","isPro":false,"fullname":"Han Xiao","user":"hanxiao","type":"user"},"name":"Han Xiao","status":"claimed_verified","statusLastChangedAt":"2026-02-18T12:34:37.784Z","hidden":false}],"publishedAt":"2026-02-17T12:50:50.000Z","submittedOnDailyAt":"2026-02-18T11:26:39.504Z","title":"jina-embeddings-v5-text: Task-Targeted Embedding 
Distillation","submittedOnDailyBy":{"_id":"603763514de52ff951d89793","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/603763514de52ff951d89793/n-QouGYg7oE5QeDaAb3Ns.png","isPro":false,"fullname":"Han Xiao","user":"hanxiao","type":"user"},"summary":"Text embedding models are widely used for semantic similarity tasks, including information retrieval, clustering, and classification. General-purpose models are typically trained with single- or multi-stage processes using contrastive loss functions. We introduce a novel training regimen that combines model distillation techniques with task-specific contrastive loss to produce compact, high-performance embedding models. Our findings suggest that this approach is more effective for training small models than purely contrastive or distillation-based training paradigms alone. Benchmark scores for the resulting models, jina-embeddings-v5-text-small and jina-embeddings-v5-text-nano, exceed or match the state-of-the-art for models of similar size. jina-embeddings-v5-text models additionally support long texts (up to 32k tokens) in many languages, and generate embeddings that remain robust under truncation and binary quantization. Model weights are publicly available, hopefully inspiring further advances in embedding model development.","upvotes":19,"discussionId":"69958816ed493589ceb5be28","ai_summary":"Compact text embedding models are developed through a combined training approach using distillation and contrastive loss, achieving state-of-the-art performance while supporting long-context sequences and efficient quantization.","ai_keywords":["text embedding models","contrastive loss","model distillation","semantic similarity","information retrieval","clustering","classification","embedding models","long texts","binary quantization"],"organization":{"_id":"63563e0c2d14fcd7d83743cf","name":"jinaai","fullname":"Jina AI","avatar":"https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/wD54VbAHHyHop3uYlJKl4.png"}},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"603763514de52ff951d89793","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/603763514de52ff951d89793/n-QouGYg7oE5QeDaAb3Ns.png","isPro":false,"fullname":"Han Xiao","user":"hanxiao","type":"user"},{"_id":"63a369d98c0c89dcae3b8329","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63a369d98c0c89dcae3b8329/AiH2zjy1cnt9OADAAZMLD.jpeg","isPro":false,"fullname":"Adina Yakefu","user":"AdinaY","type":"user"},{"_id":"6317233cc92fd6fee317e030","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6317233cc92fd6fee317e030/cJHSvvimr1kqgQfHOjO5n.png","isPro":false,"fullname":"Tom Aarsen","user":"tomaarsen","type":"user"},{"_id":"5e6a3d4ea9afd5125d9ec064","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1584020801691-noauth.jpeg","isPro":true,"fullname":"Stefan Schweter","user":"stefan-it","type":"user"},{"_id":"65025370b6595dc45c397340","avatarUrl":"/avatars/9469599b176034548042922c0afa7051.svg","isPro":false,"fullname":"J C","user":"dark-pen","type":"user"},{"_id":"65f5dc345f9b537bfb125988","avatarUrl":"/avatars/7fa9de162694d34a214ccd8ecb02fa0a.svg","isPro":false,"fullname":"Sergey Zubrilin","user":"hiauiarau","type":"user"},{"_id":"673e025a1b559505fc8d9ac8","avatarUrl":"/avatars/5e4d3d63358bc82e763ff9dfce22d1a1.svg","isPro":false,"fullname":"Kyu 
Song","user":"kyunocap","type":"user"},{"_id":"643be8879f5d314db2d9ed23","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/643be8879f5d314db2d9ed23/VrW2UtJ7ppOnGIYjTWd7b.png","isPro":false,"fullname":"Chen Dongping","user":"shuaishuaicdp","type":"user"},{"_id":"63c1699e40a26dd2db32400d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63c1699e40a26dd2db32400d/3N0-Zp8igv8-52mXAdiiq.jpeg","isPro":false,"fullname":"Chroma","user":"Chroma111","type":"user"},{"_id":"662f733dc3a82e9f11192c4f","avatarUrl":"/avatars/29729889de22e437760c4814eee781f5.svg","isPro":false,"fullname":"Zhensong Zhang","user":"JasonCU","type":"user"},{"_id":"609bbe2f4932693ca2009d6a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1620819560688-609bbe2f4932693ca2009d6a.jpeg","isPro":false,"fullname":"Antoine Chaffin","user":"NohTow","type":"user"},{"_id":"5f17f0a0925b9863e28ad517","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/5f17f0a0925b9863e28ad517/fXIY5i9RLsIa1v3CCuVtt.jpeg","isPro":true,"fullname":"Victor Mustar","user":"victor","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0,"organization":{"_id":"63563e0c2d14fcd7d83743cf","name":"jinaai","fullname":"Jina AI","avatar":"https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/wD54VbAHHyHop3uYlJKl4.png"}}">
jina-embeddings-v5-text: Task-Targeted Embedding Distillation
Abstract
Text embedding models are widely used for semantic similarity tasks, including information retrieval, clustering, and classification. General-purpose models are typically trained with single- or multi-stage processes using contrastive loss functions. We introduce a novel training regimen that combines model distillation techniques with task-specific contrastive loss to produce compact, high-performance embedding models. Our findings suggest that this approach is more effective for training small models than purely contrastive or purely distillation-based training paradigms. Benchmark scores for the resulting models, jina-embeddings-v5-text-small and jina-embeddings-v5-text-nano, match or exceed the state of the art for models of similar size. The jina-embeddings-v5-text models additionally support long texts (up to 32k tokens) in many languages, and generate embeddings that remain robust under truncation and binary quantization. Model weights are publicly available; we hope they will inspire further advances in embedding model development.
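The abstract's central claim is a training objective that mixes model distillation with a task-specific contrastive loss. As a rough illustration only (the paper's exact formulation, loss weighting, and teacher setup are not given here), a minimal PyTorch sketch of such a combined objective might look like the following; `combined_loss`, `alpha`, and the cosine-based distillation term are illustrative assumptions, not the authors' method:

```python
import torch
import torch.nn.functional as F

def combined_loss(student_emb, teacher_emb, pos_emb, temperature=0.05, alpha=0.5):
    """Hypothetical combined objective: distillation + InfoNCE contrastive loss.

    student_emb: (B, D) student embeddings for the queries
    teacher_emb: (B, D) frozen teacher embeddings for the same queries
    pos_emb:     (B, D) student embeddings for the matching positive documents
    """
    # Distillation term: pull student embeddings toward the teacher's.
    # (One common choice; the paper's actual formulation may differ.)
    distill = 1.0 - F.cosine_similarity(student_emb, teacher_emb, dim=-1).mean()

    # Task-specific contrastive term (InfoNCE with in-batch negatives):
    # each query should score its own positive above the other positives.
    q = F.normalize(student_emb, dim=-1)
    p = F.normalize(pos_emb, dim=-1)
    logits = q @ p.T / temperature          # (B, B) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)
    contrastive = F.cross_entropy(logits, labels)

    return alpha * distill + (1.0 - alpha) * contrastive
```

The abstract also says the embeddings remain robust under truncation and binary quantization. A short sketch of how a user might exploit that, assuming the released checkpoints load through sentence-transformers and that the Hugging Face model ID follows the `jinaai/` naming used for earlier releases (both assumptions; check the actual model card):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical checkpoint name; verify against the published weights.
model = SentenceTransformer("jinaai/jina-embeddings-v5-text-small",
                            trust_remote_code=True)
emb = model.encode(["How do embedding models handle long documents?"])

# Truncation: keep the leading dimensions and re-normalize
# (Matryoshka-style shortening; assumes D >= 256 and that the model
# was trained to tolerate truncated embeddings, as the abstract claims).
truncated = emb[:, :256]
truncated /= np.linalg.norm(truncated, axis=1, keepdims=True)

# Binary quantization: one bit per dimension via the sign of each value,
# packed into bytes for a ~32x smaller index footprint.
binary = np.packbits((emb > 0).astype(np.uint8), axis=1)
```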