
Papers
arxiv:2312.00575

Instruction-tuning Aligns LLMs to the Human Brain

Khai Loong Aw, Syrielle Montariol, Badr AlKhamissi, Martin Schrimpf, Antoine Bosselut

Published on Dec 1, 2023 · Submitted by AK on Dec 3, 2023

Abstract

Instruction-tuning is a widely adopted method of finetuning that enables large language models (LLMs) to generate output that more closely resembles human responses to natural language queries, in many cases leading to human-level performance on diverse testbeds. However, it remains unclear whether instruction-tuning truly makes LLMs more similar to how humans process language. We investigate the effect of instruction-tuning on LLM-human similarity in two ways: (1) brain alignment, the similarity of LLM internal representations to neural activity in the human language system, and (2) behavioral alignment, the similarity of LLM and human behavior on a reading task. We assess 25 vanilla and instruction-tuned LLMs across three datasets involving humans reading naturalistic stories and sentences. We discover that instruction-tuning generally enhances brain alignment by an average of 6%, but does not have a similar effect on behavioral alignment. To identify the factors underlying LLM-brain alignment, we compute correlations between the brain alignment of LLMs and various model properties, such as model size, various problem-solving abilities, and performance on tasks requiring world knowledge spanning various domains. Notably, we find a strong positive correlation between brain alignment and model size (r = 0.95), as well as performance on tasks requiring world knowledge (r = 0.81). Our results demonstrate that instruction-tuning LLMs improves both world knowledge representations and brain alignment, suggesting that mechanisms that encode world knowledge in LLMs also improve representational alignment to the human brain.

AI-generated summary

Instruction-tuning improves brain alignment in large language models but not behavioral alignment, with correlations found between brain alignment and model size and world knowledge performance.
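The correlation analysis the abstract describes (e.g. r = 0.95 between brain alignment and model size) reduces to computing a Pearson correlation coefficient across models. A minimal sketch is shown below; it is not from the paper's codebase, and the data values are invented for illustration only.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical illustration: one point per LLM, pairing a model-size
# proxy with its brain-alignment score (values invented, not from the paper).
log_model_size = [0.5, 1.0, 1.5, 2.0, 2.5]    # e.g. log10(parameters, billions)
brain_alignment = [0.42, 0.48, 0.55, 0.61, 0.66]

r = pearson_r(log_model_size, brain_alignment)
print(f"r = {r:.2f}")
```

In practice the paper reports correlations across 25 models; the same one-line computation applies once each model's alignment score and property value are collected.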

Community


Can you summarize the information?

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

VERY HELPFUL :-)


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2312.00575 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2312.00575 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2312.00575 in a Space README.md to link it from this page.

Collections including this paper 7