From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and BenchBuilder Pipeline

Authors: Tianle Li, Wei-Lin Chiang, Evan Frick, Lisa Dunlap, Tianhao Wu, Banghua Zhu, Joseph E. Gonzalez, Ion Stoica

arXiv: 2406.11939
Code: https://github.com/lm-sys/arena-hard-auto
AI-generated summary
BenchBuilder enhances language model evaluation by creating a living benchmark that automates the selection and evaluation of high-quality prompts using LLM annotators and judges, ensuring continuous alignment with human preferences.
Abstract
The rapid evolution of language models has necessitated the development of more challenging benchmarks. Current static benchmarks often struggle to consistently distinguish between the capabilities of different models and fail to align with real-world user preferences. Live crowd-sourced platforms like the Chatbot Arena, on the other hand, collect a wide range of natural prompts and user feedback; however, these prompts vary in sophistication, and the feedback cannot be applied offline to new models. To ensure that benchmarks keep pace with LLM development, we address how to evaluate benchmarks on two criteria: their ability to confidently separate models and their alignment with human preference. Under these principles, we developed BenchBuilder, a living benchmark that filters high-quality prompts from live data sources to enable offline evaluation on fresh, challenging prompts. BenchBuilder identifies seven indicators of a high-quality prompt, such as the requirement for domain knowledge, and uses an LLM annotator to select a high-quality subset of prompts from various topic clusters. The evaluation process then employs an LLM judge, yielding a fully automated, high-quality, and constantly updating benchmark. We apply BenchBuilder to prompts from the Chatbot Arena to create Arena-Hard-Auto v0.1: 500 challenging user prompts covering a wide range of tasks. Arena-Hard-Auto v0.1 offers 3x tighter confidence intervals than MT-Bench and achieves a state-of-the-art 89.1% agreement with human preference rankings, all at a cost of only $25 and without human labelers. The BenchBuilder pipeline enhances evaluation benchmarks and provides a valuable tool for developers, enabling them to extract high-quality benchmarks from extensive data with minimal effort.
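
To make the prompt-filtering step concrete, here is a minimal Python sketch of a BenchBuilder-style annotator: an LLM is asked which quality indicators a prompt exhibits, and only prompts that satisfy enough of them are kept. The abstract names only "domain knowledge" explicitly, so the other criterion names, the JSON answer format, the score threshold, and the `query_llm` stub are illustrative assumptions, not the paper's exact prompts or settings.

```python
"""Minimal sketch of a BenchBuilder-style prompt-quality filter (illustrative)."""
import json
from typing import Callable, List

# Seven quality indicators; only "domain knowledge" is named in the abstract,
# the rest are illustrative stand-ins.
QUALITY_CRITERIA = [
    "specificity",
    "domain knowledge",
    "complexity",
    "problem solving",
    "creativity",
    "technical accuracy",
    "real-world application",
]

ANNOTATION_TEMPLATE = (
    "Decide which of these criteria the user prompt below satisfies. "
    "Answer with a JSON object mapping each criterion to true or false.\n"
    "Criteria: {criteria}\n\nUser prompt:\n{prompt}"
)


def score_prompt(prompt: str, query_llm: Callable[[str], str]) -> int:
    """Ask the LLM annotator which criteria the prompt satisfies; return the count."""
    request = ANNOTATION_TEMPLATE.format(criteria=QUALITY_CRITERIA, prompt=prompt)
    labels = json.loads(query_llm(request))  # e.g. {"specificity": true, ...}
    return sum(bool(labels.get(c, False)) for c in QUALITY_CRITERIA)


def filter_prompts(prompts: List[str], query_llm: Callable[[str], str],
                   min_score: int = 6) -> List[str]:
    """Keep prompts whose quality score clears the (illustrative) threshold."""
    return [p for p in prompts if score_prompt(p, query_llm) >= min_score]


if __name__ == "__main__":
    # Stub annotator so the sketch runs without an API key; replace with a real LLM call.
    def fake_llm(_: str) -> str:
        return json.dumps({c: True for c in QUALITY_CRITERIA})

    print(filter_prompts(["Explain RSA key generation step by step."], fake_llm))
```

In the full pipeline the candidate prompts are first grouped into topic clusters, and a filter of this kind is applied within each cluster so that the selected subset stays both challenging and topically diverse.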
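For the evaluation side, the abstract describes an LLM judge and reports confidence intervals on model scores. The sketch below shows one plausible aggregation step under that description: per-prompt judge verdicts against a fixed baseline are turned into a win rate with a percentile-bootstrap confidence interval. Collecting the verdicts (an LLM judge comparing two answers per prompt) is stubbed out, and the resampling parameters are illustrative assumptions rather than the paper's exact procedure.

```python
"""Sketch: win rate with a bootstrapped confidence interval over judge verdicts."""
import random
from statistics import mean
from typing import List, Tuple


def bootstrap_win_rate(verdicts: List[float], n_resamples: int = 1000,
                       alpha: float = 0.05, seed: int = 0) -> Tuple[float, Tuple[float, float]]:
    """Return (win rate, (lower, upper)) using a percentile bootstrap.

    Each verdict is 1.0 (candidate wins), 0.5 (tie), or 0.0 (baseline wins).
    """
    rng = random.Random(seed)
    estimates = sorted(
        mean(rng.choices(verdicts, k=len(verdicts))) for _ in range(n_resamples)
    )
    lower = estimates[int(alpha / 2 * n_resamples)]
    upper = estimates[int((1 - alpha / 2) * n_resamples) - 1]
    return mean(verdicts), (lower, upper)


if __name__ == "__main__":
    # 500 fabricated verdicts standing in for LLM-judge decisions on 500 prompts.
    rng = random.Random(1)
    verdicts = [rng.choice([1.0, 1.0, 0.5, 0.0]) for _ in range(500)]
    rate, (lo, hi) = bootstrap_win_rate(verdicts)
    print(f"win rate {rate:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A prompt set that separates models well produces less noisy verdicts, so the resulting interval around each model's score is narrower; this is the sense in which Arena-Hard-Auto's intervals are compared against MT-Bench's.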