Think in Strokes, Not Pixels: Process-Driven Image Generation via Interleaved Reasoning
Abstract
Process-driven image generation decomposes synthesis into iterative steps involving textual planning, visual drafting, textual reflection, and visual refinement, with step-wise supervision ensuring consistency and interpretability.
Humans paint images incrementally: they plan a global layout, sketch a coarse draft, inspect it, and refine details, and most importantly, each step is grounded in the evolving visual state. However, can unified multimodal models trained on text-image interleaved datasets also imagine this chain of intermediate states? In this paper, we introduce process-driven image generation, a multi-step paradigm that decomposes synthesis into an interleaved reasoning trajectory of thoughts and actions. Rather than generating images in a single step, our approach unfolds across multiple iterations, each consisting of four stages: textual planning, visual drafting, textual reflection, and visual refinement. The textual reasoning explicitly conditions how the visual state should evolve, while the generated visual intermediate in turn constrains and grounds the next round of textual reasoning. A core challenge of process-driven generation stems from the ambiguity of intermediate states: how can models evaluate each partially complete image? We address this through dense, step-wise supervision that maintains two complementary constraints: for the visual intermediate states, we enforce spatial and semantic consistency; for the textual intermediate states, we preserve prior visual knowledge while enabling the model to identify and correct prompt-violating elements. This makes the generation process explicit, interpretable, and directly supervisable. To validate the proposed method, we conduct experiments on various text-to-image generation benchmarks.
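A minimal sketch of the four-stage loop described in the abstract is shown below. The model interface (`plan`, `draft`, `reflect`, `refine`) is a hypothetical illustration, assumed here for clarity; the abstract specifies only the four stages per iteration and that text and image states condition each other, not an API.

```python
# Hypothetical sketch of process-driven image generation: each iteration
# interleaves textual planning, visual drafting, textual reflection, and
# visual refinement, with each step grounded in the evolving visual state.

from dataclasses import dataclass, field

@dataclass
class Trajectory:
    prompt: str
    thoughts: list = field(default_factory=list)   # textual plans and reflections
    images: list = field(default_factory=list)     # visual drafts and refinements

def generate(model, prompt: str, num_iterations: int = 3) -> Trajectory:
    traj = Trajectory(prompt=prompt)
    image = None
    for _ in range(num_iterations):
        # 1) Textual planning: reason about how the visual state should evolve,
        #    conditioned on the prompt and the current (possibly empty) image.
        plan = model.plan(prompt, image)
        traj.thoughts.append(plan)

        # 2) Visual drafting: produce an intermediate image grounded in the plan.
        image = model.draft(prompt, plan, image)
        traj.images.append(image)

        # 3) Textual reflection: inspect the draft and flag prompt-violating elements.
        critique = model.reflect(prompt, image)
        traj.thoughts.append(critique)

        # 4) Visual refinement: correct the flagged elements while preserving
        #    spatial and semantic consistency with the previous visual state.
        image = model.refine(prompt, critique, image)
        traj.images.append(image)
    return traj
```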
Community
This is truly a breakthrough if it checks out. Hopefully it will be released!
the self-sampled critique traces are the clever bit here, training the model to spot and fix its own prompt-violating elements. that internal loop could keep plan and sketch aligned across iterations without needing external evaluators. i do wonder how much this relies on the quality of the scene-graph subsampling, since bias there could steer the critique toward familiar relations. the arxivlens breakdown helped me parse the four-stage loop and grounding mechanics, if you want a quick read here: https://arxivlens.com/PaperView/Details/think-in-strokes-not-pixels-process-driven-image-generation-via-interleaved-reasoning-4378-e43416f6
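To make the idea in the comment above concrete, here is a hypothetical sketch of how self-sampled critique traces could be built by subsampling relations from the prompt's scene graph and comparing them with relations detected in the draft. The function name, data structures, and subsampling scheme are assumptions for illustration, not taken from the paper.

```python
# Hypothetical construction of a critique trace: mismatches between the
# prompt's scene graph and the draft's detected scene graph become textual
# critique targets that the model can be trained to produce itself.

def build_critique_trace(prompt_graph, detected_graph, max_relations=5):
    """prompt_graph / detected_graph: sets of (subject, relation, object) triples."""
    missing = list(prompt_graph - detected_graph)[:max_relations]  # subsampled
    extra = list(detected_graph - prompt_graph)[:max_relations]
    lines = []
    for s, r, o in missing:
        lines.append(f"Missing: '{s} {r} {o}' is required by the prompt but absent in the draft.")
    for s, r, o in extra:
        lines.append(f"Violation: '{s} {r} {o}' appears in the draft but contradicts the prompt.")
    return "\n".join(lines) if lines else "The draft satisfies all sampled prompt relations."

# Example usage:
# prompt_graph   = {("cat", "on", "sofa"), ("lamp", "left of", "sofa")}
# detected_graph = {("cat", "under", "sofa"), ("lamp", "left of", "sofa")}
# print(build_critique_trace(prompt_graph, detected_graph))
```

Note that the quality of such traces would indeed hinge on the scene-graph subsampling, as the comment points out: relations that are never sampled can never be critiqued.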
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- CoCo: Code as CoT for Text-to-Image Preview and Rare Concept Generation (2026)
- InterCoG: Towards Spatially Precise Image Editing with Interleaved Chain-of-Grounding Reasoning (2026)
- StruVis: Enhancing Reasoning-based Text-to-Image Generation via Thinking with Structured Vision (2026)
- coDrawAgents: A Multi-Agent Dialogue Framework for Compositional Image Generation (2026)
- Self-Corrected Image Generation with Explainable Latent Rewards (2026)
- Spatial Chain-of-Thought: Bridging Understanding and Generation Models for Spatial Reasoning Generation (2026)
- MIRROR: Multimodal Iterative Reasoning via Reflection on Visual Regions (2026)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any Paper on Hugging Face, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend