FullFront: Benchmarking MLLMs Across the Full Front-End Engineering Workflow

Authors: Haoyu Sun, Huichen Will Wang, Jiawei Gu, Linjie Li, Yu Cheng

Published on May 23, 2025
Abstract
FullFront is a benchmark evaluating Multimodal Large Language Models across conceptualization, comprehension, and implementation phases in front-end engineering.
Front-end engineering involves a complex workflow where engineers
conceptualize designs, translate them into code, and iteratively refine the
implementation. While recent benchmarks primarily focus on converting visual
designs to code, we present FullFront, a benchmark designed to evaluate
Multimodal Large Language Models (MLLMs) across the full front-end
development pipeline. FullFront assesses three fundamental tasks that map
directly to the front-end engineering pipeline: Webpage Design
(conceptualization phase), Webpage Perception QA (comprehension of visual
organization and elements), and Webpage Code Generation (implementation phase).
Unlike existing benchmarks that use either scraped websites with bloated code
or oversimplified LLM-generated HTML, FullFront employs a novel, two-stage
process to transform real-world webpages into clean, standardized HTML while
maintaining diverse visual designs and avoiding copyright issues. Extensive
testing of state-of-the-art MLLMs reveals significant limitations in page
perception, code generation (particularly for image handling and layout), and
interaction implementation. Our results quantitatively demonstrate performance
disparities across models and tasks, and highlight a substantial gap between
current MLLM capabilities and human expert performance in front-end
engineering. The FullFront benchmark and code are available at
https://github.com/Mikivishy/FullFront.