Paper page - SkillsBench: Benchmarking How Well Agent Skills Work Across Diverse Tasks

Code: https://github.com/benchflow-ai/skillsbench
Website: https://skillsbench.ai/

\n","updatedAt":"2026-02-18T00:33:00.620Z","author":{"_id":"663fe2d26304d377fc253322","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/wuey_nNXSW4GthPYLfFS4.jpeg","fullname":"Xiangyi Li","name":"xdotli","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":1,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.515940248966217},"editors":["xdotli"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/wuey_nNXSW4GthPYLfFS4.jpeg"],"reactions":[{"reaction":"๐Ÿ”ฅ","users":["taesiri","Hudx111"],"count":2},{"reaction":"๐Ÿค—","users":["taesiri","Hudx111"],"count":2}],"isReport":false}},{"id":"6995bfeff8241c4259341794","author":{"_id":"663fe2d26304d377fc253322","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/wuey_nNXSW4GthPYLfFS4.jpeg","fullname":"Xiangyi Li","name":"xdotli","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":1,"isUserFollowing":false},"createdAt":"2026-02-18T13:34:39.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"We made the first benchmark that measures the efficacy of agent skills. We collected 86 tasks from 105 domain experts across 11 domains, every task is verifiable, human created and has verified Skills. SOTA model without skills score ~30% without skills. \n\nWe found a few interesting things: \n1. Skills substitute for model scale โ€” Haiku 4.5 with Skills (27.7%) beats Opus 4.5 without (22.0%). The right procedural knowledge can be worth more than a bigger model. \n2. Skills' improvement has nothing to do with LLMs' internal knowledge. We have an ablation where no Skills provided for the agent, but the agent is prompted to generate relevant procedural knowledge before solving the task. This isolates the impact of LLMs' latent domain knowledge. \n\nThe result is: \n* Curated Skills: +16.2pp average improvement across all 7 agent configs \n* Self-generated Skills: -1.3pp: models can't write their own procedural knowledge pre-trajectory feedbacks. This is used to isolate the impact of LLMs' latent domain knowledge.","html":"

We made the first benchmark that measures the efficacy of agent skills. We collected 86 tasks from 105 domain experts across 11 domains, every task is verifiable, human created and has verified Skills. SOTA model without skills score ~30% without skills.

\n

We found a few interesting things:

\n
    \n
  1. Skills substitute for model scale โ€” Haiku 4.5 with Skills (27.7%) beats Opus 4.5 without (22.0%). The right procedural knowledge can be worth more than a bigger model.
  2. \n
  3. Skills' improvement has nothing to do with LLMs' internal knowledge. We have an ablation where no Skills provided for the agent, but the agent is prompted to generate relevant procedural knowledge before solving the task. This isolates the impact of LLMs' latent domain knowledge.
  4. \n
\n

The result is:

\n
    \n
  • Curated Skills: +16.2pp average improvement across all 7 agent configs
  • \n
  • Self-generated Skills: -1.3pp: models can't write their own procedural knowledge pre-trajectory feedbacks. This is used to isolate the impact of LLMs' latent domain knowledge.
  • \n
\n","updatedAt":"2026-02-18T13:34:39.395Z","author":{"_id":"663fe2d26304d377fc253322","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/wuey_nNXSW4GthPYLfFS4.jpeg","fullname":"Xiangyi Li","name":"xdotli","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":1,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.9253376722335815},"editors":["xdotli"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/wuey_nNXSW4GthPYLfFS4.jpeg"],"reactions":[{"reaction":"๐Ÿ”ฅ","users":["taesiri","Hudx111"],"count":2}],"isReport":false}},{"id":"6995daf1aa3c4d5606351faa","author":{"_id":"65243980050781c16f234f1f","avatarUrl":"/avatars/743a009681d5d554c27e04300db9f267.svg","fullname":"Avi","name":"avahal","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":3,"isUserFollowing":false},"createdAt":"2026-02-18T15:29:53.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"arXivLens breakdown of this paper ๐Ÿ‘‰ https://arxivlens.com/PaperView/Details/skillsbench-benchmarking-how-well-agent-skills-work-across-diverse-tasks-1475-a8815427\n- Executive Summary\n- Detailed Breakdown\n- Practical Applications","html":"


arxiv:2602.12670

SkillsBench: Benchmarking How Well Agent Skills Work Across Diverse Tasks

Published on Feb 13
· Submitted by Xiangyi Li on Feb 18
Authors: Xiangyi Li, Wenbo Chen, Yimin Liu, Shenghan Zheng, Xiaokun Chen, Yifeng He, Yubo Li, Bingran You, Haotian Shen, Jiankai Sun, Shuyi Wang, Qunhong Zeng, Di Wang, Xuandong Zhao, Yuanli Wang, Roey Ben Chaim, Zonglin Di, Yipeng Gao, Junwei He, Yizhuo He, Liqiang Jing, Luyang Kong, Xin Lan, Jiachen Li, Songlin Li, Yijiang Li, Yueqian Lin, Xinyi Liu, Xuanqing Liu, Haoran Lyu, Ze Ma, Bowei Wang, Runhui Wang, Tianyu Wang, Wengao Ye, Yue Zhang, Hanwen Xing, Yiqi Xue, Steven Dillmann, Han-chung Lee

Abstract

Agent Skills are structured packages of procedural knowledge that augment LLM agents at inference time. Despite rapid adoption, there is no standard way to measure whether they actually help. We present SkillsBench, a benchmark of 86 tasks across 11 domains paired with curated Skills and deterministic verifiers. Each task is evaluated under three conditions: no Skills, curated Skills, and self-generated Skills. We test 7 agent-model configurations over 7,308 trajectories. Curated Skills raise average pass rate by 16.2 percentage points (pp), but effects vary widely by domain (+4.5pp for Software Engineering to +51.9pp for Healthcare) and 16 of 84 tasks show negative deltas. Self-generated Skills provide no benefit on average, showing that models cannot reliably author the procedural knowledge they benefit from consuming. Focused Skills with 2-3 modules outperform comprehensive documentation, and smaller models with Skills can match larger models without them.

AI-generated summary

SkillsBench evaluates agent skills across 86 tasks and finds that curated skills improve performance significantly but inconsistently, while self-generated skills offer no benefit, indicating that models struggle to create useful procedural knowledge despite benefiting from curated versions.
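
To make the evaluation protocol concrete, here is a minimal sketch of the three-condition loop in Python. Everything in it (the `Task` shape, `run_agent`, the stand-in rollout) is a hypothetical illustration, not the actual SkillsBench harness; the real code lives at https://github.com/benchflow-ai/skillsbench.

```python
# Minimal sketch of the three-condition evaluation. All names here
# (Task, run_agent, self_author_skill) are hypothetical illustrations,
# not the real SkillsBench API.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Task:
    name: str
    instructions: str
    curated_skill: Optional[str]   # expert-written procedural knowledge
    verify: Callable[[str], bool]  # deterministic verifier: output -> pass/fail

def run_agent(task: Task, skill: Optional[str]) -> str:
    """Stand-in for an agent rollout. A real harness would launch an LLM
    agent in a sandbox, with `skill` injected into its context window."""
    return "42" if skill else "not sure"

def self_author_skill(task: Task) -> str:
    """Stand-in for the self-generated condition's first stage, where the
    model writes its own Skill before attempting the task."""
    return f"Notes the model wrote for: {task.instructions}"

def evaluate(tasks: list[Task]) -> dict[str, float]:
    conditions: dict[str, Callable[[Task], Optional[str]]] = {
        "no_skills": lambda t: None,
        "curated_skills": lambda t: t.curated_skill,
        "self_generated_skills": self_author_skill,
    }
    return {
        name: sum(t.verify(run_agent(t, pick(t))) for t in tasks) / len(tasks)
        for name, pick in conditions.items()
    }

demo = Task("toy-arithmetic", "Compute 6 * 7.",
            curated_skill="Multiply the factors and print only the product.",
            verify=lambda out: out.strip() == "42")
print(evaluate([demo]))  # pass rate per condition on the toy task
```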

Community

Paper author Paper submitter

We built the first benchmark that measures the efficacy of agent Skills. We collected 86 tasks from 105 domain experts across 11 domains; every task is verifiable, human-created, and paired with verified Skills. SOTA models score only ~30% without Skills.

We found a few interesting things:

  1. Skills substitute for model scale: Haiku 4.5 with Skills (27.7%) beats Opus 4.5 without (22.0%). The right procedural knowledge can be worth more than a bigger model.
  2. The improvement from Skills is not explained by LLMs' internal knowledge. In an ablation, no Skills are provided, but the agent is prompted to generate relevant procedural knowledge before solving the task; this isolates the impact of LLMs' latent domain knowledge (see the sketch after the results below).

The results:

  • Curated Skills: +16.2pp average improvement across all 7 agent configs.
  • Self-generated Skills: -1.3pp. Models cannot author useful procedural knowledge before receiving trajectory feedback, which confirms the curated-Skill gains are not merely elicited latent knowledge.
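
For concreteness, the self-generated condition can be pictured as the two-stage flow sketched below. The prompt wording and the `llm` callable are assumptions for illustration only, not the benchmark's actual scaffolding:

```python
# Sketch of the self-generated-Skills ablation: the model first authors
# its own procedural knowledge (with no trajectory feedback), then solves
# the task with that text in context. Illustrative assumptions throughout.
from typing import Callable

def self_generated_run(llm: Callable[[str], str], task: str) -> str:
    # Stage 1: the model authors its own Skill before attempting the task.
    skill = llm(
        "Write the procedural knowledge (a 'Skill') that an expert would "
        f"hand to an agent attempting this task:\n\n{task}"
    )
    # Stage 2: the same model attempts the task with that Skill in context.
    return llm(f"Skill:\n{skill}\n\nTask:\n{task}")

# Example with a trivial stand-in "model" that just echoes a prefix:
print(self_generated_run(lambda p: p[:60], "File the Q3 expense report."))
```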

arXivLens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/skillsbench-benchmarking-how-well-agent-skills-work-across-diverse-tasks-1475-a8815427

  • Executive Summary
  • Detailed Breakdown
  • Practical Applications

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

  • OctoBench: Benchmarking Scaffold-Aware Instruction Following in Repository-Grounded Agentic Coding (https://huggingface.co/papers/2601.10343) (2026)
  • LongCLI-Bench: A Preliminary Benchmark and Study for Long-horizon Agentic Programming in Command-Line Interfaces (https://huggingface.co/papers/2602.14337) (2026)
  • Agent Skills for Large Language Models: Architecture, Acquisition, Security, and the Path Forward (https://huggingface.co/papers/2602.12430) (2026)
  • TermiGen: High-Fidelity Environment and Robust Trajectory Synthesis for Terminal Agents (https://huggingface.co/papers/2602.07274) (2026)
  • The Hierarchy of Agentic Capabilities: Evaluating Frontier Models on Realistic RL Environments (https://huggingface.co/papers/2601.09032) (2026)
  • CLI-Gym: Scalable CLI Task Generation via Agentic Environment Inversion (https://huggingface.co/papers/2602.10999) (2026)
  • AgencyBench: Benchmarking the Frontiers of Autonomous Agents in 1M-Token Real-World Contexts (https://huggingface.co/papers/2601.11044) (2026)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2602.12670 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2602.12670 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2602.12670 in a Space README.md to link it from this page.

Collections including this paper 2