arxiv:2407.18521

Patched MOA: optimizing inference for diverse software development tasks

Published on Jul 26, 2024

Abstract

This paper introduces Patched MOA (Mixture of Agents), an inference optimization technique that significantly enhances the performance of large language models (LLMs) across diverse software development tasks. We evaluate three inference optimization algorithms, Best of N, Mixture of Agents, and Monte Carlo Tree Search, and demonstrate that Patched MOA can boost the performance of smaller models to surpass that of larger, more expensive models. Notably, our approach improves the gpt-4o-mini model's performance on the Arena-Hard-Auto benchmark by 15.52%, outperforming gpt-4-turbo at a fraction of the cost. We also apply Patched MOA to various software development workflows, showing consistent improvements in task completion rates. Our method is model-agnostic, transparent to end-users, and can be easily integrated into existing LLM pipelines. This work contributes to the growing field of LLM optimization, offering a cost-effective solution for enhancing model performance without the need for fine-tuning or larger models.

AI-generated summary

Patched MOA, an inference optimization technique, improves the performance of large language models on software development tasks more cost-effectively than larger models.

Community

Paper author

The implementation for MOA is available here - https://github.com/codelion/optillm/blob/main/moa.py
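For readers who want the gist before opening the file: the core Mixture of Agents loop samples several candidate answers from a base model and then asks a model to synthesize them into a single improved response. The sketch below illustrates that general pattern only; the model name, prompts, and helper structure are assumptions for illustration and are not taken from moa.py.

```python
# Minimal Mixture of Agents (MOA) sketch -- illustrative only, not the actual moa.py.
# Assumes the OpenAI Python client (pip install openai) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumed base model, matching the one benchmarked in the paper

def mixture_of_agents(user_query: str, n_candidates: int = 3) -> str:
    # Step 1: sample several independent candidate answers from the same model.
    candidates = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": user_query}],
        n=n_candidates,
        temperature=1.0,
    )
    answers = [choice.message.content for choice in candidates.choices]

    # Step 2: ask the model to aggregate the candidates into one improved answer.
    numbered = "\n\n".join(f"Candidate {i + 1}:\n{a}" for i, a in enumerate(answers))
    synthesis = client.chat.completions.create(
        model=MODEL,
        messages=[
            {
                "role": "user",
                "content": (
                    "You are given several candidate answers to the same question. "
                    "Combine their strengths and fix their mistakes in a single, "
                    f"final answer.\n\nQuestion:\n{user_query}\n\n{numbered}"
                ),
            }
        ],
        temperature=0.2,
    )
    return synthesis.choices[0].message.content

if __name__ == "__main__":
    print(mixture_of_agents("Write a Python function that reverses a linked list."))
```

Because both steps can call the same underlying model through a standard chat-completions endpoint, this style of aggregation stays model-agnostic and can sit transparently in front of an existing LLM pipeline, which is the property the paper emphasizes.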


Models citing this paper 0

No model linking this paper


Datasets citing this paper 1

Spaces citing this paper 3

Collections including this paper 0

No Collection including this paper
