
arxiv:2506.05334

Search Arena: Analyzing Search-Augmented LLMs

Published on Jun 5, 2025
Submitted by Patrick (Tsung-Han) Wu on Jun 6, 2025
Authors: Mihran Miroyan, Tsung-Han Wu, Logan King, Tianle Li, Jiayi Pan, Xinyan Hu, Wei-Lin Chiang, Anastasios N. Angelopoulos, Trevor Darrell, Narges Norouzi, Joseph E. Gonzalez

Abstract

Search-augmented language models combine web search with Large Language Models (LLMs) to improve response groundedness and freshness. However, analyzing these systems remains challenging: existing datasets are limited in scale and narrow in scope, often constrained to static, single-turn, fact-checking questions. In this work, we introduce Search Arena, a crowd-sourced, large-scale, human-preference dataset of over 24,000 paired multi-turn user interactions with search-augmented LLMs. The dataset spans diverse intents and languages, and contains full system traces with around 12,000 human preference votes. Our analysis reveals that user preferences are influenced by the number of citations, even when the cited content does not directly support the attributed claims, uncovering a gap between perceived and actual credibility. Furthermore, user preferences vary across cited sources, revealing that community-driven platforms are generally preferred and that static encyclopedic sources are not always appropriate or reliable. To assess performance across different settings, we conduct cross-arena analyses by testing search-augmented LLMs in a general-purpose chat environment and conventional LLMs in search-intensive settings. We find that web search does not degrade, and may even improve, performance in non-search settings; however, quality in search settings degrades significantly when relying solely on the model's parametric knowledge. We open-source the dataset to support future research in this direction. Our dataset and code are available at: https://github.com/lmarena/search-arena.

AI-generated summary

Search Arena is a large-scale human-preference dataset that analyzes user interactions with search-augmented language models, revealing insights into citation influence and source credibility.
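Since the abstract points to a released dataset and code, below is a minimal sketch of how the data might be loaded for analysis with the Hugging Face `datasets` library. The repository id, split name, and field names are assumptions for illustration only; the linked GitHub repository documents the actual release format and download instructions.

```python
# Minimal sketch: loading the Search Arena preference data for analysis.
# NOTE: the repository id and split below are placeholders (assumptions),
# not confirmed by this page. See https://github.com/lmarena/search-arena
# for the actual dataset location and schema.
from datasets import load_dataset

ds = load_dataset("lmarena-ai/search-arena-24k", split="train")  # hypothetical repo id

print(len(ds), "paired multi-turn conversations")  # ~24,000 per the abstract

# Inspect one record; field names (messages, citations, preference vote, ...)
# will depend on the actual release schema.
example = ds[0]
print(sorted(example.keys()))
```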


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2506.05334 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2506.05334 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2506.05334 in a Space README.md to link it from this page.

Collections including this paper 3