
Project page: https://aim-uofa.github.io/AutoStory/

\n","updatedAt":"2023-11-21T23:27:18.066Z","author":{"_id":"62f847d692950415b63c6011","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1660437733795-noauth.png","fullname":"Yassine Ennaour","name":"Lyte","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":34,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.47769105434417725},"editors":["Lyte"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1660437733795-noauth.png"],"reactions":[],"isReport":false}},{"id":"655f447fd3934dc4021919c2","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2023-11-23T12:24:31.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts](https://huggingface.co/papers/2310.10640) (2023)\n* [VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning](https://huggingface.co/papers/2309.15091) (2023)\n* [TOSS: High-quality Text-guided Novel View Synthesis from a Single Image](https://huggingface.co/papers/2310.10644) (2023)\n* [The Chosen One: Consistent Characters in Text-to-Image Diffusion Models](https://huggingface.co/papers/2311.10093) (2023)\n* [Ctrl-Room: Controllable Text-to-3D Room Meshes Generation with Layout Constraints](https://huggingface.co/papers/2310.03602) (2023)\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space","html":"

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

\n

The following papers were recommended by the Semantic Scholar API

\n\n

Please give a thumbs up to this comment if you found it helpful!

\n

If you want recommendations for any Paper on Hugging Face checkout this Space

\n","updatedAt":"2023-11-23T12:24:31.802Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7137590050697327},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}},{"id":"6799e1c5e44c952735fec277","author":{"_id":"6616bafdd55279a2edcca1dd","avatarUrl":"/avatars/fa09e41f5c072319b0f48c326cd35ce3.svg","fullname":"Barış KÜMET","name":"koesan","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"isUserFollowing":false},"createdAt":"2025-01-29T08:07:33.000Z","type":"comment","data":{"edited":true,"hidden":true,"hiddenBy":"","latest":{"raw":"This comment has been hidden","html":"This comment has been hidden","updatedAt":"2025-05-03T23:58:48.631Z","author":{"_id":"6616bafdd55279a2edcca1dd","avatarUrl":"/avatars/fa09e41f5c072319b0f48c326cd35ce3.svg","fullname":"Barış KÜMET","name":"koesan","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"isUserFollowing":false}},"numEdits":0,"editors":[],"editorAvatarUrls":[],"reactions":[]}}],"primaryEmailConfirmed":false,"paper":{"id":"2311.11243","authors":[{"_id":"655c245a0c27b3fb70822843","user":{"_id":"63f089456309c84d5f47f951","avatarUrl":"/avatars/04b926a7f2ad091ee00fef0c59903492.svg","isPro":false,"fullname":"Wen Wang","user":"wwen1997","type":"user"},"name":"Wen Wang","status":"claimed_verified","statusLastChangedAt":"2023-11-21T13:50:46.620Z","hidden":false},{"_id":"655c245a0c27b3fb70822844","user":{"_id":"646efd223dd912a539e0bd46","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/EOFAv5xvOgJOzuDgh4nSb.png","isPro":false,"fullname":"Canyu Zhao","user":"Canyu","type":"user"},"name":"Canyu Zhao","status":"admin_assigned","statusLastChangedAt":"2023-11-21T13:35:56.105Z","hidden":false},{"_id":"655c245a0c27b3fb70822845","name":"Hao Chen","hidden":false},{"_id":"655c245a0c27b3fb70822846","user":{"_id":"62d812e143df7719860d05d1","avatarUrl":"/avatars/412f7ec5c9f54990f4b562652d3e2c59.svg","isPro":false,"fullname":"zhekai chen","user":"Azily","type":"user"},"name":"Zhekai Chen","status":"admin_assigned","statusLastChangedAt":"2023-11-21T13:37:41.546Z","hidden":false},{"_id":"655c245a0c27b3fb70822847","user":{"_id":"64252045a4f3051f54dd1d53","avatarUrl":"/avatars/0e423a3291091be3b4736a14da3ce495.svg","isPro":false,"fullname":"kecheng zheng","user":"zkcys001","type":"user"},"name":"Kecheng Zheng","status":"extracted_confirmed","statusLastChangedAt":"2024-03-26T10:22:19.846Z","hidden":false},{"_id":"655c245a0c27b3fb70822848","name":"Chunhua Shen","hidden":false}],"publishedAt":"2023-11-19T06:07:37.000Z","submittedOnDailyAt":"2023-11-21T01:00:39.126Z","title":"AutoStory: Generating Diverse Storytelling Images with Minimal Human\n Effort","submittedOnDailyBy":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","isPro":false,"fullname":"AK","user":"akhaliq","type":"user"},"summary":"Story visualization aims to generate a series of images that match the story\ndescribed in texts, and it requires the generated images to satisfy 
high\nquality, alignment with the text description, and consistency in character\nidentities. Given the complexity of story visualization, existing methods\ndrastically simplify the problem by considering only a few specific characters\nand scenarios, or requiring the users to provide per-image control conditions\nsuch as sketches. However, these simplifications render these methods\nincompetent for real applications. To this end, we propose an automated story\nvisualization system that can effectively generate diverse, high-quality, and\nconsistent sets of story images, with minimal human interactions. Specifically,\nwe utilize the comprehension and planning capabilities of large language models\nfor layout planning, and then leverage large-scale text-to-image models to\ngenerate sophisticated story images based on the layout. We empirically find\nthat sparse control conditions, such as bounding boxes, are suitable for layout\nplanning, while dense control conditions, e.g., sketches and keypoints, are\nsuitable for generating high-quality image content. To obtain the best of both\nworlds, we devise a dense condition generation module to transform simple\nbounding box layouts into sketch or keypoint control conditions for final image\ngeneration, which not only improves the image quality but also allows easy and\nintuitive user interactions. In addition, we propose a simple yet effective\nmethod to generate multi-view consistent character images, eliminating the\nreliance on human labor to collect or draw character images.","upvotes":16,"discussionId":"655c245f0c27b3fb70822901","githubRepo":"https://github.com/aim-uofa/AutoStory","githubRepoAddedBy":"auto","ai_summary":"The system uses large language models for layout planning and large-scale text-to-image models for generating high-quality, consistent story images with minimal human interaction, transforming bounding boxes into dense control conditions to improve image quality.","ai_keywords":["large language models","text-to-image models","layout planning","bounding boxes","sketch","keypoints","multi-view consistent character images"],"githubStars":149},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"635f16eda81c7f7424a58996","avatarUrl":"/avatars/e25928188c3c9b7ac3d1abd69bcc39d5.svg","isPro":false,"fullname":"I Am Imagen","user":"imagen","type":"user"},{"_id":"646efd223dd912a539e0bd46","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/EOFAv5xvOgJOzuDgh4nSb.png","isPro":false,"fullname":"Canyu Zhao","user":"Canyu","type":"user"},{"_id":"639309aa0851cf996a001f6c","avatarUrl":"/avatars/9fd0c3e4b1af3f82ba83c61c2c971e4d.svg","isPro":false,"fullname":"Peter Larsen","user":"Boosh","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"6032802e1f993496bc14d9e3","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png","isPro":false,"fullname":"Omar Sanseviero","user":"osanseviero","type":"user"},{"_id":"61868ce808aae0b5499a2a95","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/61868ce808aae0b5499a2a95/F6BA0anbsoY_Z7M1JrwOe.jpeg","isPro":true,"fullname":"Sylvain 
Filoni","user":"fffiloni","type":"user"},{"_id":"63f089456309c84d5f47f951","avatarUrl":"/avatars/04b926a7f2ad091ee00fef0c59903492.svg","isPro":false,"fullname":"Wen Wang","user":"wwen1997","type":"user"},{"_id":"630648eccb5492c9859e5728","avatarUrl":"/avatars/79121ae875b0489b7f5d1ab961834e7a.svg","isPro":false,"fullname":"william cody stanford","user":"williamcstanford","type":"user"},{"_id":"654c4b117824e2bb58e897cc","avatarUrl":"/avatars/53631223e3e0ba682e6e2e517afcd065.svg","isPro":false,"fullname":"aaa","user":"goshiaoki","type":"user"},{"_id":"648eb1eb59c4e5c87dc116e0","avatarUrl":"/avatars/c636cea39c2c0937f01398c94ead5dad.svg","isPro":false,"fullname":"fdsqefsgergd","user":"T-representer","type":"user"},{"_id":"6358edff3b3638bdac83f7ac","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1666772404424-noauth.jpeg","isPro":false,"fullname":"Pratyay Banerjee","user":"Neilblaze","type":"user"},{"_id":"63b2dc32677046a142857b25","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1679346645755-63b2dc32677046a142857b25.jpeg","isPro":false,"fullname":"Faisal Hossain","user":"faisalbsl21","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
arxiv:2311.11243

AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort

Published on Nov 19, 2023 · Submitted by AK on Nov 21, 2023

Authors: Wen Wang, Canyu Zhao, Hao Chen, Zhekai Chen, Kecheng Zheng, Chunhua Shen

Abstract

AI-generated summary

The system uses large language models for layout planning and large-scale text-to-image models for generating high-quality, consistent story images with minimal human interaction, transforming bounding boxes into dense control conditions to improve image quality.

Story visualization aims to generate a series of images that match the story described in text, and it requires the generated images to be of high quality, aligned with the text description, and consistent in character identities. Given the complexity of story visualization, existing methods drastically simplify the problem by considering only a few specific characters and scenarios, or by requiring users to provide per-image control conditions such as sketches. However, these simplifications render these methods unsuitable for real applications. To this end, we propose an automated story visualization system that can effectively generate diverse, high-quality, and consistent sets of story images with minimal human interaction. Specifically, we utilize the comprehension and planning capabilities of large language models for layout planning, and then leverage large-scale text-to-image models to generate sophisticated story images based on the layout. We empirically find that sparse control conditions, such as bounding boxes, are suitable for layout planning, while dense control conditions, e.g., sketches and keypoints, are suitable for generating high-quality image content. To obtain the best of both worlds, we devise a dense condition generation module that transforms simple bounding-box layouts into sketch or keypoint control conditions for final image generation, which not only improves image quality but also allows easy and intuitive user interaction. In addition, we propose a simple yet effective method to generate multi-view consistent character images, eliminating the reliance on human labor to collect or draw character images.
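To make the described pipeline concrete, below is a minimal, illustrative Python sketch; it is not the authors' implementation, and every function body here is a hypothetical placeholder. In a real system, `plan_layout` would call a large language model and `generate_panel` would invoke a dense-condition-guided text-to-image diffusion model.

```python
# A minimal, illustrative sketch of the sparse-to-dense pipeline described
# above; it is NOT the authors' implementation. All function bodies are
# hypothetical placeholders for an LLM planner and a conditioned
# text-to-image diffusion model.
from dataclasses import dataclass


@dataclass
class BoundingBox:
    character: str
    x: float  # normalized [0, 1] coordinates
    y: float
    w: float
    h: float


def plan_layout(panel_text: str, characters: list[str]) -> list[BoundingBox]:
    """Sparse layout planning (stand-in for an LLM call): one box per character."""
    n = max(len(characters), 1)
    return [
        BoundingBox(c, x=i / n, y=0.25, w=1.0 / n, h=0.5)
        for i, c in enumerate(characters)
    ]


def densify(boxes: list[BoundingBox]) -> dict[str, list[tuple[float, float]]]:
    """Dense condition generation: derive per-character points from each box.

    The box corners stand in for the sketch/keypoint conditions that the
    paper's module derives from the planned layout.
    """
    return {
        b.character: [
            (b.x, b.y),
            (b.x + b.w, b.y),
            (b.x, b.y + b.h),
            (b.x + b.w, b.y + b.h),
        ]
        for b in boxes
    }


def generate_panel(panel_text: str, dense: dict) -> str:
    """Stand-in for a dense-condition-guided text-to-image model."""
    return f"<image for {panel_text!r}, conditioned on {len(dense)} subjects>"


story = [
    "A cat and a dog meet in a sunny park.",
    "The cat and the dog share an umbrella in the rain.",
]
characters = ["cat", "dog"]

for panel_text in story:
    boxes = plan_layout(panel_text, characters)  # sparse: easy to plan and edit
    dense = densify(boxes)                       # dense: guides image quality
    print(generate_panel(panel_text, dense))
```

The split mirrors the paper's empirical finding: bounding boxes are easy for the planner to produce and for a user to edit, while the densified conditions carry enough structure to guide high-quality image synthesis.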

Community

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

- LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts (2023): https://huggingface.co/papers/2310.10640
- VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning (2023): https://huggingface.co/papers/2309.15091
- TOSS: High-quality Text-guided Novel View Synthesis from a Single Image (2023): https://huggingface.co/papers/2310.10644
- The Chosen One: Consistent Characters in Text-to-Image Diffusion Models (2023): https://huggingface.co/papers/2311.10093
- Ctrl-Room: Controllable Text-to-3D Room Meshes Generation with Layout Constraints (2023): https://huggingface.co/papers/2310.03602

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

Models citing this paper: 0

No model linking this paper

Cite arxiv.org/abs/2311.11243 in a model README.md to link it from this page.
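For illustration, a hypothetical model card that would be picked up could look like the following; the repository name and description are placeholders, and the arXiv URL is the only part that matters for linking:

```markdown
<!-- Hypothetical model card: repository name and wording are placeholders.
     Any README that mentions arxiv.org/abs/2311.11243 gets linked here. -->
# my-storybook-model

A story visualization model based on AutoStory
(https://arxiv.org/abs/2311.11243).
```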

Datasets citing this paper: 0

No dataset linking this paper

Cite arxiv.org/abs/2311.11243 in a dataset README.md to link it from this page.

Spaces citing this paper: 0

No Space linking this paper

Cite arxiv.org/abs/2311.11243 in a Space README.md to link it from this page.

Collections including this paper: 11