S2S-Arena, Evaluating Speech2Speech Protocols on Instruction Following with Paralinguistic Information
Abstract
The rapid development of large language models (LLMs) has brought significant
attention to speech models, particularly recent progress in speech2speech
protocols that support both speech input and output. However, existing benchmarks, which adopt automatic text-based evaluators to assess the instruction-following ability of these models, lack consideration of paralinguistic information in both speech understanding and generation. To address these issues, we introduce
S2S-Arena, a novel arena-style S2S benchmark that evaluates
instruction-following capabilities with paralinguistic information in both
speech-in and speech-out across real-world tasks. We design 154 samples that fuse TTS-synthesized speech and live recordings across four domains covering 21 tasks, and we manually evaluate popular existing speech models in an arena-style manner.
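Arena-style evaluation collects human pairwise preferences between two models' responses to the same spoken instruction. The abstract does not specify how these votes are aggregated into a ranking; a common choice in text-based chatbot arenas is an Elo-style rating update. The sketch below illustrates that idea only; the K-factor, the initial rating of 1000, and the model names are assumptions, not details from the paper.

```python
# Illustrative Elo-style ranking from pairwise arena votes.
# NOT the authors' exact procedure: K=32 and the initial
# rating of 1000 are assumptions for this sketch.
from collections import defaultdict


def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))


def update_ratings(votes, k: float = 32.0) -> dict:
    """votes: iterable of (model_a, model_b, outcome), where
    outcome is 1.0 if A wins, 0.0 if B wins, 0.5 for a tie."""
    ratings = defaultdict(lambda: 1000.0)
    for a, b, outcome in votes:
        e_a = expected_score(ratings[a], ratings[b])
        ratings[a] += k * (outcome - e_a)
        ratings[b] += k * ((1.0 - outcome) - (1.0 - e_a))
    return dict(ratings)


# Hypothetical votes from judges comparing two spoken responses
# to the same instruction (model names are placeholders).
votes = [
    ("gpt-4o", "cascade-asr-llm-tts", 1.0),
    ("cascade-asr-llm-tts", "joint-s2s", 1.0),
    ("gpt-4o", "joint-s2s", 0.5),
]
print(update_ratings(votes))
```

Note that sequential Elo updates are sensitive to vote order; arena leaderboards therefore often fit a Bradley-Terry model over all votes at once instead, which yields an order-independent ranking.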
The experimental results show that: (1) in addition to the superior performance of GPT-4o, cascaded ASR+LLM+TTS pipelines outperform jointly trained models with text-speech alignment in speech2speech protocols; (2) when paralinguistic information is considered, the knowledgeability of a speech model depends mainly on its LLM backbone, while its multilingual support is limited by its speech module; and (3) strong speech models can already understand paralinguistic information in speech input, but generating appropriate audio with paralinguistic information remains a challenge.
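Finding (1) contrasts two architectures: a cascade that chains separate ASR, LLM, and TTS components, and a single jointly trained speech2speech model. A minimal sketch of the cascaded design follows; the interfaces and names are hypothetical placeholders, not the specific systems evaluated in the paper.

```python
# Minimal sketch of a cascaded speech2speech pipeline
# (ASR -> LLM -> TTS). All interfaces are hypothetical
# placeholders, not the models benchmarked in the paper.
from typing import Protocol


class ASR(Protocol):
    def transcribe(self, audio: bytes) -> str: ...


class LLM(Protocol):
    def generate(self, prompt: str) -> str: ...


class TTS(Protocol):
    def synthesize(self, text: str) -> bytes: ...


def speech_to_speech(audio_in: bytes, asr: ASR, llm: LLM, tts: TTS) -> bytes:
    """Chain the three stages of a cascaded system.

    Any paralinguistic cues in audio_in (emotion, emphasis,
    speaker identity) are discarded at the ASR stage, since only
    the transcript is passed on -- exactly the limitation this
    benchmark probes.
    """
    text_in = asr.transcribe(audio_in)   # speech -> text
    text_out = llm.generate(text_in)     # text -> text
    return tts.synthesize(text_out)      # text -> speech
```

The trade-off is visible in the types alone: once audio is reduced to a transcript, tone and speaker cues cannot reach the LLM or the synthesizer, whereas a jointly trained model can in principle preserve them end-to-end, at the cost of the text-speech alignment difficulties the paper's results highlight.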