arxiv:2307.09702

Efficient Guided Generation for Large Language Models

Brandon T. Willard, Rémi Louf

Published on Jul 19, 2023

Abstract

AI-generated summary

An efficient method guides language model text generation using regular expressions and context-free grammars with minimal overhead.

In this article we describe an efficient approach to guiding language model text generation with regular expressions and context-free grammars. Our approach adds little to no overhead to the token sequence generation process, and makes guided generation feasible in practice. An implementation is provided in the open source Python library Outlines.
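
The abstract's key claim is that constraint enforcement can be pushed almost entirely into a one-time precomputation. As an illustration only, the toy sketch below (hand-written DFA, made-up six-token vocabulary, not the paper's or Outlines' actual implementation) builds an index from finite-state-machine states to allowed token ids, so each decoding step reduces to a dictionary lookup and a logit mask:

```python
# A minimal, self-contained sketch of the indexing idea the abstract describes:
# compile the constraint into a finite-state machine, precompute which
# vocabulary tokens are allowed from each FSM state, and then enforce the
# constraint during decoding with a constant-time lookup per step.
# NOTE: this is an illustrative toy, not the paper's or Outlines' actual code.
# The DFA is hand-written for the pattern [0-9]+ ("one or more digits") and
# the vocabulary is made up; a real system derives the FSM from a regex or
# context-free grammar and indexes a tokenizer's full vocabulary.

# DFA for [0-9]+ : state 0 = start, state 1 = "saw at least one digit" (accepting).
TRANSITIONS = {
    (0, "digit"): 1,
    (1, "digit"): 1,
}

def classify(ch: str) -> str:
    return "digit" if ch.isdigit() else "other"

def advance(state: int, token: str):
    """Run the DFA over a whole token string; return the new state, or None if rejected."""
    for ch in token:
        state = TRANSITIONS.get((state, classify(ch)))
        if state is None:
            return None
    return state

# Toy subword vocabulary; a real tokenizer has tens of thousands of entries.
VOCAB = {0: "1", 1: "23", 2: "7", 3: " the", 4: "a", 5: "42"}

# Offline step: index the vocabulary once per FSM state. This precomputation
# is what keeps the per-token cost of guided generation near zero.
ALLOWED = {
    state: {tid for tid, tok in VOCAB.items() if advance(state, tok) is not None}
    for state in (0, 1)
}

print(ALLOWED)
# {0: {0, 1, 2, 5}, 1: {0, 1, 2, 5}}  -> the non-digit tokens " the" and "a" are masked out

# Online step (not shown): at each decoding step, look up ALLOWED[current_state],
# set the logits of every other token to -inf, sample, and advance the FSM state.
```

Because the index is built once per constraint, each generation step pays only for a lookup and a mask rather than re-checking the entire vocabulary against the pattern, which is what makes the abstract's claim of little to no added overhead plausible.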

Community

julien-c: https://huggingface.co/blog/outlines-core 🔥

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2307.09702 in a model README.md to link it from this page.
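
For illustration, a model README that cites the paper might contain a line like the following (the surrounding wording is made up; the page only asks that the arxiv.org/abs/2307.09702 link appear in the README):

```markdown
This model uses constrained decoding as described in
[Efficient Guided Generation for Large Language Models](https://arxiv.org/abs/2307.09702).
```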

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2307.09702 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2307.09702 in a Space README.md to link it from this page.

Collections including this paper 0

No Collection including this paper

Add this paper to a collection to link it from this page.