
sui-1: Grounded and Verifiable Long-Form Summarization
Abstract
Large language models frequently generate plausible but unfaithful summaries that users cannot verify against source text, a critical limitation in compliance-sensitive domains such as government and legal analysis. We present sui-1, a 24B-parameter model that produces abstractive summaries with inline citations, enabling users to trace each claim to its source sentence. Our synthetic data pipeline combines chain-of-thought prompting with multi-stage verification, generating over 22,000 high-quality training examples across five languages from diverse sources, including parliamentary documents, web text, and Wikipedia. Evaluation shows that sui-1 significantly outperforms all tested open-weight baselines, including models with 3× more parameters. These results demonstrate that task-specific training substantially outperforms scale alone for citation-grounded summarization. Model weights and an interactive demo are publicly available.
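To make the citation-grounding idea concrete, here is a minimal sketch of how a downstream consumer might verify such output. It assumes a hypothetical format in which source sentences are numbered and each summary claim carries bracketed 1-based indices like `[3]`; the abstract does not specify sui-1's actual marker syntax, so the format and the `verify_citations` helper are illustrative, not the paper's implementation.

```python
import re

def verify_citations(source_sentences: list[str], summary: str) -> list[str]:
    """Check that every claim in a summary carries at least one citation
    and that each cited index points to a real source sentence.
    Assumes a hypothetical format with bracketed 1-based indices,
    e.g. "The bill passed in May [3][7]."
    """
    problems = []
    # Naive sentence split on whitespace after ., !, or ?
    for claim in re.split(r"(?<=[.!?])\s+", summary.strip()):
        cited = [int(i) for i in re.findall(r"\[(\d+)\]", claim)]
        if not cited:
            problems.append(f"uncited claim: {claim!r}")
            continue
        for idx in cited:
            if not 1 <= idx <= len(source_sentences):
                problems.append(f"dangling citation [{idx}] in: {claim!r}")
    return problems

# Toy usage: one valid citation, one dangling one.
source = ["The committee met on 3 May.", "The bill passed unanimously."]
summary = "The bill passed without opposition [2]. A vote is scheduled [9]."
for issue in verify_citations(source, summary):
    print(issue)
```

A check along these lines only confirms that citations resolve to existing source sentences; judging whether a cited sentence actually supports the claim requires the kind of multi-stage verification the paper describes for its synthetic data pipeline.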