Covo-Audio Technical Report

Authors: Wenfu Wang, Chenxing Li, Liqiang Zhang, Yiyang Zhao, Yuxiang Zou, Hanzhao Li, Mingyu Cui, Hao Zhang, Kun Wei, Le Xu, Zikang Huang, Jiajun Xu, Jiliang Hu, Xiang He, Zeyu Xie, Jiawen Kang, Youjun Chen, Meng Yu, Dong Yu, Rilin Chen, Linlin Di, Shulin Feng, Na Hu, Yang Liu, Bang Wang, Shan Yang (Tencent)

Published: 2026-02-10 · Paper ID: 2602.09823
AI-generated summary

Covo-Audio is a 7B-parameter end-to-end large audio language model that processes continuous audio inputs and generates audio outputs, achieving state-of-the-art performance across speech-text modeling, spoken dialogue, and full-duplex voice interaction tasks through large-scale pretraining and post-training techniques.
Abstract

In this work, we present Covo-Audio, a 7B-parameter end-to-end LALM that directly processes continuous audio inputs and generates audio outputs within a single unified architecture. Through large-scale curated pretraining and targeted post-training, Covo-Audio achieves state-of-the-art or competitive performance among models of comparable scale across a broad spectrum of tasks, including speech-text modeling, spoken dialogue, speech understanding, audio understanding, and full-duplex voice interaction. Extensive evaluations demonstrate that the pretrained foundation model exhibits strong speech-text comprehension and semantic reasoning capabilities on multiple benchmarks, outperforming representative open-source models of comparable scale. Furthermore, Covo-Audio-Chat, the dialogue-oriented variant, demonstrates strong spoken conversational abilities, including understanding, contextual reasoning, instruction following, and generating contextually appropriate and empathetic responses, validating its applicability to real-world conversational assistant scenarios. Covo-Audio-Chat-FD, the evolved full-duplex model, achieves substantially superior performance on both spoken dialogue capabilities and full-duplex interaction behaviors, demonstrating its robustness in practical use. To mitigate the high cost of deploying end-to-end LALMs for natural conversational systems, we propose an intelligence-speaker decoupling strategy that separates dialogue intelligence from voice rendering, enabling flexible voice customization with minimal text-to-speech (TTS) data while preserving dialogue performance. Overall, our results highlight the strong potential of 7B-scale models to integrate sophisticated audio intelligence with high-level semantic reasoning, and suggest a scalable path toward more capable and versatile LALMs.
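The intelligence-speaker decoupling strategy mentioned above can be pictured as a two-stage pipeline: a dialogue core that decides *what* to say in a voice-agnostic form, and a separate, swappable renderer that decides *how* it sounds. The sketch below is purely illustrative — every class and method name (`DialogueCore`, `VoiceRenderer`, `SemanticResponse`, etc.) is a hypothetical placeholder, not an API from the report — but it shows why swapping the voice requires only retraining the small rendering stage on minimal TTS data, while the dialogue model stays frozen.

```python
from dataclasses import dataclass, field


@dataclass
class SemanticResponse:
    """Voice-agnostic output of the dialogue stage (illustrative only)."""
    text: str                               # dialogue content decided by the core
    semantic_tokens: list = field(default_factory=list)  # placeholder acoustic plan


class DialogueCore:
    """Stands in for the 7B end-to-end model's understanding/reasoning stage."""

    def respond(self, user_audio_features: list) -> SemanticResponse:
        # A real model would run audio understanding + generation here;
        # this stub just returns a fixed reply.
        return SemanticResponse(text="hello", semantic_tokens=[1, 2, 3])


class VoiceRenderer:
    """Stands in for the voice-rendering stage. Under the decoupling idea,
    each speaker's renderer is adapted with a small amount of TTS data,
    independently of the dialogue core."""

    def __init__(self, speaker_id: str):
        self.speaker_id = speaker_id

    def render(self, response: SemanticResponse) -> bytes:
        # A real renderer would vocode semantic tokens into a waveform;
        # here we just tag the text with the speaker identity.
        return f"{self.speaker_id}:{response.text}".encode()


def converse(core: DialogueCore, renderer: VoiceRenderer, audio: list) -> bytes:
    # Swapping `renderer` changes the voice without touching `core`,
    # which is the point of separating intelligence from voice rendering.
    return renderer.render(core.respond(audio))
```

Note the asymmetry this buys: dialogue quality lives entirely in `DialogueCore`, so voice customization never risks degrading conversational performance.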