LEMAS: A 150K-Hour Large-scale Extensible Multilingual Audio Suite with Generative Speech Models
https://arxiv.org/abs/2601.04233
Comment from Librarian Bot (automated):

This is an automated message from the Librarian Bot. I found the following papers similar to this paper, recommended by the Semantic Scholar API:

* IndexTTS 2.5 Technical Report (https://huggingface.co/papers/2601.03888) (2026)
* MM-Sonate: Multimodal Controllable Audio-Video Generation with Zero-Shot Voice Cloning (https://huggingface.co/papers/2601.01568) (2026)
* M3-TTS: Multi-modal DiT Alignment & Mel-latent for Zero-shot High-fidelity Speech Synthesis (https://huggingface.co/papers/2512.04720) (2025)
* HQ-MPSD: A Multilingual Artifact-Controlled Benchmark for Partial Deepfake Speech Detection (https://huggingface.co/papers/2512.13012) (2025)
* JoyVoice: Long-Context Conditioning for Anthropomorphic Multi-Speaker Conversational Synthesis (https://huggingface.co/papers/2512.19090) (2025)
* DMP-TTS: Disentangled multi-modal Prompting for Controllable Text-to-Speech with Chained Guidance (https://huggingface.co/papers/2512.09504) (2025)
* SynTTS-Commands: A Public Dataset for On-Device KWS via TTS-Synthesized Multilingual Speech (https://huggingface.co/papers/2511.07821) (2025)
AI-generated summary

The LEMAS-Dataset enables high-quality multilingual speech synthesis and editing through specialized models leveraging flow-matching and autoregressive architectures with novel training techniques.
We present the LEMAS-Dataset, which, to our knowledge, is currently the largest open-source multilingual speech corpus with word-level timestamps. Covering over 150,000 hours across 10 major languages, LEMAS-Dataset is constructed via an efficient data processing pipeline that ensures high-quality data and annotations. To validate the effectiveness of LEMAS-Dataset across diverse generative paradigms, we train two benchmark models with distinct architectures and task specializations on this dataset. LEMAS-TTS, built upon a non-autoregressive flow-matching framework, leverages the dataset's massive scale and linguistic diversity to achieve robust zero-shot multilingual synthesis. Our proposed accent-adversarial training and CTC loss mitigate cross-lingual accent issues, enhancing synthesis stability. Complementarily, LEMAS-Edit employs an autoregressive decoder-only architecture that formulates speech editing as a masked token infilling task. By exploiting precise word-level alignments to construct training masks and adopting adaptive decoding strategies, it achieves seamless speech editing with smooth boundaries and natural transitions. Experimental results demonstrate that models trained on LEMAS-Dataset deliver high-quality synthesis and editing performance, confirming the dataset's quality. We envision that this richly timestamp-annotated, fine-grained multilingual corpus will drive future advances in prompt-based speech generation systems.
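As a concrete illustration of the accent-adversarial idea mentioned in the abstract, the sketch below shows one common way such an objective is wired up: a gradient-reversal layer feeding an accent classifier, combined with an auxiliary CTC loss over phoneme targets. This is a minimal PyTorch sketch under those assumptions; all module names, dimensions, and loss weightings are illustrative, not the paper's exact implementation.

```python
# Minimal sketch: accent-adversarial training via gradient reversal,
# plus an auxiliary CTC loss. Names/shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class AccentAdversarialHead(nn.Module):
    """Predicts accent from encoder features through gradient reversal,
    pushing the encoder toward accent-invariant representations."""
    def __init__(self, feat_dim, n_accents):
        super().__init__()
        self.clf = nn.Linear(feat_dim, n_accents)
    def forward(self, feats, lam):
        pooled = grad_reverse(feats, lam).mean(dim=1)  # (B, T, D) -> (B, D)
        return self.clf(pooled)

# Toy usage: frame features from a TTS encoder (batch=2, T=50, D=256).
feats = torch.randn(2, 50, 256, requires_grad=True)
head = AccentAdversarialHead(256, n_accents=10)
adv_loss = F.cross_entropy(head(feats, lam=0.5), torch.tensor([3, 7]))

# Auxiliary CTC loss tying frames to phoneme targets for stability.
log_probs = torch.randn(50, 2, 40).log_softmax(-1)   # (T, B, vocab)
targets = torch.randint(1, 40, (2, 12))              # phoneme ids (0 = blank)
ctc_loss = F.ctc_loss(log_probs, targets,
                      input_lengths=torch.full((2,), 50),
                      target_lengths=torch.full((2,), 12))
total = adv_loss + ctc_loss  # added to the main flow-matching objective
```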
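Likewise, the masked token infilling formulation behind LEMAS-Edit can be sketched as follows: word-level timestamps pick out a span of speech tokens to mask, and the decoder learns to regenerate that span. The token rate, sentinel value, and helper names here are hypothetical, chosen only to make the mechanism concrete.

```python
# Minimal sketch: building a masked-infilling training example from
# word-level timestamps. Token rate and names are assumptions.
MASK = -1            # sentinel id for masked positions
TOKENS_PER_SEC = 50  # assumed speech-token frame rate

def mask_span(tokens, words, edit_word_idx):
    """Mask the speech tokens covered by one word, using its timestamps.

    tokens: list of speech-token ids for the utterance
    words:  list of (word, start_sec, end_sec) alignments
    Returns the corrupted sequence and the target ids to infill.
    """
    _, start, end = words[edit_word_idx]
    lo = int(start * TOKENS_PER_SEC)
    hi = int(end * TOKENS_PER_SEC)
    corrupted = tokens[:lo] + [MASK] * (hi - lo) + tokens[hi:]
    return corrupted, tokens[lo:hi]

# Toy example: 3 words over 1.5 s of audio, 75 tokens total.
tokens = list(range(75))
words = [("hello", 0.0, 0.5), ("brave", 0.5, 1.0), ("world", 1.0, 1.5)]
corrupted, target = mask_span(tokens, words, edit_word_idx=1)
assert corrupted[25:50] == [MASK] * 25 and target == list(range(25, 50))
```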
LEMAS is a large-scale, extensible multilingual audio suite providing a multilingual speech corpus (LEMAS-Dataset) with word-level timestamps, covering over 150,000 hours across 10 major languages. Built with a rigorous alignment and confidence-based filtering pipeline, LEMAS supports diverse generative paradigms, including zero-shot multilingual synthesis (LEMAS-TTS) and seamless speech editing (LEMAS-Edit).

Project page: https://lemas-project.github.io/LEMAS-Project
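To make the confidence-based filtering stage more concrete, here is a minimal sketch of what such a filter might look like, assuming the aligner emits per-word confidence scores; the record schema, field names, and threshold are assumptions for illustration, not the pipeline's actual interface.

```python
# Hedged sketch: confidence-based filtering over aligned utterances.
def filter_utterances(records, min_conf=0.85):
    """Keep utterances whose aligner reports high word-level confidence.

    records: iterable of dicts like
      {"audio": path, "words": [{"w": str, "t0": float, "t1": float, "conf": float}]}
    An utterance survives only if every word clears the threshold.
    """
    kept = []
    for rec in records:
        if rec["words"] and min(w["conf"] for w in rec["words"]) >= min_conf:
            kept.append(rec)
    return kept

sample = [
    {"audio": "a.wav", "words": [{"w": "hi", "t0": 0.0, "t1": 0.3, "conf": 0.97}]},
    {"audio": "b.wav", "words": [{"w": "uh", "t0": 0.0, "t1": 0.2, "conf": 0.41}]},
]
print([r["audio"] for r in filter_utterances(sample)])  # ['a.wav']
```

Gating on the minimum per-word confidence (rather than the mean) is the stricter choice: a single badly aligned word is enough to discard the utterance, which favors annotation quality over corpus size.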