Thanos: Enhancing Conversational Agents with Skill-of-Mind-Infused Large Language Model
\n","updatedAt":"2024-11-08T22:31:42.791Z","author":{"_id":"65afe3fd7c11edbf6e1a1277","avatarUrl":"/avatars/a35c3e29d712d1fb062b5eb8887d46a6.svg","fullname":"Robin Williams","name":"bfuzzy1","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":10,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6930411458015442},"editors":["bfuzzy1"],"editorAvatarUrls":["/avatars/a35c3e29d712d1fb062b5eb8887d46a6.svg"],"reactions":[{"reaction":"❤️","users":["passing2961"],"count":1}],"isReport":false}},{"id":"67300da9b4215fd388125b7a","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2024-11-10T01:34:33.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Multi-Agent Large Language Models for Conversational Task-Solving](https://huggingface.co/papers/2410.22932) (2024)\n* [PersoBench: Benchmarking Personalized Response Generation in Large Language Models](https://huggingface.co/papers/2410.03198) (2024)\n* [Simulating User Agents for Embodied Conversational-AI](https://huggingface.co/papers/2410.23535) (2024)\n* [MCPDial: A Minecraft Persona-driven Dialogue Dataset](https://huggingface.co/papers/2410.21627) (2024)\n* [Beyond Ontology in Dialogue State Tracking for Goal-Oriented Chatbot](https://huggingface.co/papers/2410.22767) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* Multi-Agent Large Language Models for Conversational Task-Solving (https://huggingface.co/papers/2410.22932) (2024)
* PersoBench: Benchmarking Personalized Response Generation in Large Language Models (https://huggingface.co/papers/2410.03198) (2024)
* Simulating User Agents for Embodied Conversational-AI (https://huggingface.co/papers/2410.23535) (2024)
* MCPDial: A Minecraft Persona-driven Dialogue Dataset (https://huggingface.co/papers/2410.21627) (2024)
* Beyond Ontology in Dialogue State Tracking for Goal-Oriented Chatbot (https://huggingface.co/papers/2410.22767) (2024)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
\n","updatedAt":"2024-11-10T01:34:33.467Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7457572817802429},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[{"reaction":"👍","users":["passing2961"],"count":1}],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2411.04496","authors":[{"_id":"672d830975cc01b042225b23","user":{"_id":"6434b6619bd5a84b5dcfa4de","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6434b6619bd5a84b5dcfa4de/h8Q6kPNjFNc03wmdboHzq.jpeg","isPro":true,"fullname":"Young-Jun Lee","user":"passing2961","type":"user"},"name":"Young-Jun Lee","status":"claimed_verified","statusLastChangedAt":"2024-11-15T09:31:55.077Z","hidden":false},{"_id":"672d830975cc01b042225b24","name":"Dokyong Lee","hidden":false},{"_id":"672d830975cc01b042225b25","name":"Junyoung Youn","hidden":false},{"_id":"672d830975cc01b042225b26","name":"Kyeongjin Oh","hidden":false},{"_id":"672d830975cc01b042225b27","name":"Ho-Jin Choi","hidden":false}],"publishedAt":"2024-11-07T07:46:06.000Z","submittedOnDailyAt":"2024-11-08T02:31:51.207Z","title":"Thanos: Enhancing Conversational Agents with Skill-of-Mind-Infused Large\n Language Model","submittedOnDailyBy":{"_id":"6434b6619bd5a84b5dcfa4de","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6434b6619bd5a84b5dcfa4de/h8Q6kPNjFNc03wmdboHzq.jpeg","isPro":true,"fullname":"Young-Jun Lee","user":"passing2961","type":"user"},"summary":"To increase social bonding with interlocutors, humans naturally acquire the\nability to respond appropriately in a given situation by considering which\nconversational skill is most suitable for the response - a process we call\nskill-of-mind. For large language model (LLM)-based conversational agents,\nplanning appropriate conversational skills, as humans do, is challenging due to\nthe complexity of social dialogue, especially in interactive scenarios. To\naddress this, we propose a skill-of-mind-annotated conversation dataset, named\nMultifaceted Skill-of-Mind, which includes multi-turn and multifaceted\nconversational skills across various interactive scenarios (e.g., long-term,\ncounseling, task-oriented), grounded in diverse social contexts (e.g.,\ndemographics, persona, rules of thumb). This dataset consists of roughly 100K\nconversations. Using this dataset, we introduce a new family of\nskill-of-mind-infused LLMs, named Thanos, with model sizes of 1B, 3B, and 8B\nparameters. With extensive experiments, these models successfully demonstrate\nthe skill-of-mind process and exhibit strong generalizability in inferring\nmultifaceted skills across a variety of domains. 
Moreover, we show that Thanos\nsignificantly enhances the quality of responses generated by LLM-based\nconversational agents and promotes prosocial behavior in human evaluations.","upvotes":22,"discussionId":"672d830a75cc01b042225b5f","githubRepo":"https://github.com/passing2961/thanos","githubRepoAddedBy":"auto","ai_summary":"A new dataset and family of large language models, Thanos, improve conversational skills and quality of responses in social dialogue by infusing multifaceted conversational skills.","ai_keywords":["skill-of-mind","conversational agents","skill-of-mind-annotated","Multifaceted Skill-of-Mind","multifaceted conversational skills","parameter-efficient","large language model","generalizability","prosocial behavior"],"githubStars":8},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"6434b6619bd5a84b5dcfa4de","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6434b6619bd5a84b5dcfa4de/h8Q6kPNjFNc03wmdboHzq.jpeg","isPro":true,"fullname":"Young-Jun Lee","user":"passing2961","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"65f40e43653c231cbaf7d1e4","avatarUrl":"/avatars/a42ac5454cbe175f04c3420fce90cad2.svg","isPro":false,"fullname":"Jue Zhang","user":"JueZhang","type":"user"},{"_id":"65a607fcc653a2c10cfd2c8e","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/65a607fcc653a2c10cfd2c8e/mOmR04rFyO_btblYukRAk.jpeg","isPro":false,"fullname":"Jonghwan Hyeon","user":"jonghwanhyeon","type":"user"},{"_id":"653032b83ecbe51d6a6f0498","avatarUrl":"/avatars/3d435a420bf7024560ed5c5aa2e204d6.svg","isPro":false,"fullname":"Yechan Hwang","user":"YYXYmint","type":"user"},{"_id":"6039478ab3ecf716b1a5fd4d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6039478ab3ecf716b1a5fd4d/_Thy4E7taiSYBLKxEKJbT.jpeg","isPro":true,"fullname":"taesiri","user":"taesiri","type":"user"},{"_id":"64e567c9ddbefb63095a9662","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/F2BwrOU0XpzVI5nd-TL54.png","isPro":false,"fullname":"Bullard ","user":"Charletta1","type":"user"},{"_id":"641b754d1911d3be6745cce9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/641b754d1911d3be6745cce9/Ydjcjd4VuNUGj5Cd4QHdB.png","isPro":false,"fullname":"atayloraerospace","user":"Taylor658","type":"user"},{"_id":"672e7b26742ea2c59e207c29","avatarUrl":"/avatars/2dcd4ab77fc28dc3c8bf720bd456cd24.svg","isPro":false,"fullname":"Li","user":"Jim1990ai","type":"user"},{"_id":"65afe3fd7c11edbf6e1a1277","avatarUrl":"/avatars/a35c3e29d712d1fb062b5eb8887d46a6.svg","isPro":false,"fullname":"Robin Williams","user":"bfuzzy1","type":"user"},{"_id":"660f2b525c0044a079f4b977","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/660f2b525c0044a079f4b977/E7iPcykcFl61duegChFpL.jpeg","isPro":false,"fullname":"Yicheng Qian","user":"Davidqian123","type":"user"},{"_id":"672e8c523914978c078dd0ac","avatarUrl":"/avatars/f210b6d8ed5ca64d135797120eb1bf18.svg","isPro":false,"fullname":"Sophia Carter","user":"SophiaCarter","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary

A new dataset and family of large language models, Thanos, improve conversational skills and quality of responses in social dialogue by infusing multifaceted conversational skills.
Abstract

To increase social bonding with interlocutors, humans naturally acquire the ability to respond appropriately in a given situation by considering which conversational skill is most suitable for the response - a process we call skill-of-mind. For large language model (LLM)-based conversational agents, planning appropriate conversational skills, as humans do, is challenging due to the complexity of social dialogue, especially in interactive scenarios. To address this, we propose a skill-of-mind-annotated conversation dataset, named Multifaceted Skill-of-Mind, which includes multi-turn and multifaceted conversational skills across various interactive scenarios (e.g., long-term, counseling, task-oriented), grounded in diverse social contexts (e.g., demographics, persona, rules of thumb). This dataset consists of roughly 100K conversations. Using this dataset, we introduce a new family of skill-of-mind-infused LLMs, named Thanos, with model sizes of 1B, 3B, and 8B parameters. With extensive experiments, these models successfully demonstrate the skill-of-mind process and exhibit strong generalizability in inferring multifaceted skills across a variety of domains. Moreover, we show that Thanos significantly enhances the quality of responses generated by LLM-based conversational agents and promotes prosocial behavior in human evaluations.
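As a rough illustration of how a skill-of-mind model could sit in front of an ordinary response generator, the sketch below first asks a Thanos-style checkpoint to name a suitable conversational skill for the dialogue context, then conditions a separate response LLM on that skill. This is a minimal sketch, not the authors' code: the checkpoint IDs, prompt wording, and two-step prompting scheme are assumptions for illustration; see the official repository (https://github.com/passing2961/thanos) for the released 1B/3B/8B checkpoints and their exact input format.

```python
# Hypothetical skill-of-mind -> response pipeline (illustrative only).
# Checkpoint names and prompts below are assumptions, not the paper's setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

SKILL_MODEL_ID = "passing2961/Thanos-8B"              # assumed repo id
AGENT_MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"   # any response LLM


def load(model_id):
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tok, model


def generate(tok, model, prompt, max_new_tokens=128):
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and return only the newly generated text.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:],
                      skip_special_tokens=True).strip()


dialogue = "Speaker A: I completely bombed my job interview today.\nSpeaker B:"

# Step 1: infer which conversational skill fits the next turn (e.g. empathy,
# self-disclosure, question asking) together with a short rationale.
skill_tok, skill_model = load(SKILL_MODEL_ID)
skill = generate(
    skill_tok, skill_model,
    "Given the dialogue below, name the most suitable conversational skill "
    f"for the next response and briefly explain why.\n\n{dialogue}",
)

# Step 2: condition the response model on the inferred skill-of-mind.
agent_tok, agent_model = load(AGENT_MODEL_ID)
response = generate(
    agent_tok, agent_model,
    f"{dialogue}\n\n(Use this conversational skill: {skill})",
)
print(skill)
print(response)
```

The design point this sketch tries to capture, per the abstract, is that skill planning is decoupled from response generation: the skill-of-mind model only annotates which skill the next turn should use, and any downstream conversational agent can consume that annotation.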