Paper page - Ponimator: Unfolding Interactive Pose for Versatile Human-human Interaction Animation
arxiv:2510.14976

Ponimator: Unfolding Interactive Pose for Versatile Human-human Interaction Animation

Published on Oct 16, 2025 · Submitted by Shaowei Liu on Oct 17, 2025
Authors: Shaowei Liu, Chuan Guo, Bing Zhou, Jian Wang

Abstract

AI-generated summary

Ponimator uses conditional diffusion models to generate and synthesize interactive poses from motion capture data, enabling versatile interaction animation tasks.

Close-proximity human-human interactive poses convey rich contextual information about interaction dynamics. Given such poses, humans can intuitively infer the context and anticipate possible past and future dynamics, drawing on strong priors of human behavior. Inspired by this observation, we propose Ponimator, a simple framework anchored on proximal interactive poses for versatile interaction animation. Our training data consists of close-contact two-person poses and their surrounding temporal context from motion-capture interaction datasets. Leveraging interactive pose priors, Ponimator employs two conditional diffusion models: (1) a pose animator that uses the temporal prior to generate dynamic motion sequences from interactive poses, and (2) a pose generator that applies the spatial prior to synthesize interactive poses from a single pose, text, or both when interactive poses are unavailable. Collectively, Ponimator supports diverse tasks, including image-based interaction animation, reaction animation, and text-to-interaction synthesis, facilitating the transfer of interaction knowledge from high-quality mocap data to open-world scenarios. Empirical experiments across diverse datasets and applications demonstrate the universality of the pose prior and the effectiveness and robustness of our framework.
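To make the two-stage design concrete, here is a minimal, hypothetical sketch of conditional diffusion sampling with a pose generator (spatial prior) followed by a pose animator (temporal prior). Every name, tensor shape, the noise schedule, and the zero-noise placeholder denoisers are assumptions made for illustration; this is not the released Ponimator code, which should be consulted in the linked repository.

```python
# Hypothetical sketch of the two-stage conditional diffusion pipeline described
# in the abstract. All names, shapes, and the placeholder denoisers are
# illustrative assumptions, not Ponimator's actual implementation.
import numpy as np

rng = np.random.default_rng(0)

STEPS = 1000
betas = np.linspace(1e-4, 0.02, STEPS)   # standard linear DDPM schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def ddpm_sample(shape, predict_noise, condition):
    """Generic conditional DDPM reverse process (textbook formulation)."""
    x = rng.standard_normal(shape)
    for t in reversed(range(STEPS)):
        eps_hat = predict_noise(x, t, condition)
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

# Placeholder denoisers; in the paper these are learned networks.
def pose_generator_eps(x, t, cond):   # spatial prior: single pose and/or text -> interactive pose
    return np.zeros_like(x)

def pose_animator_eps(x, t, cond):    # temporal prior: interactive pose -> motion sequence
    return np.zeros_like(x)

J, T = 24, 60                                 # assumed joints per person and frames
single_pose = rng.standard_normal((J, 3))     # e.g. a pose estimated from an image
text = "two people hug"

# Stage 1 (only when no interactive pose is available): synthesize a two-person pose.
interactive_pose = ddpm_sample((2, J, 3), pose_generator_eps, (single_pose, text))

# Stage 2: unfold the interactive pose into a motion sequence around it.
motion = ddpm_sample((T, 2, J, 3), pose_animator_eps, interactive_pose)
print(motion.shape)  # (60, 2, 24, 3)
```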

Community

We propose Ponimator (ICCV 2025), a generative framework that turns interactive poses into realistic human–human motion, supporting image, text, and pose-based interaction animation.

GitHub: https://github.com/stevenlsw/ponimator
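As a rough illustration of how the three tasks mentioned above could map onto the two models, here is a hypothetical routing sketch; the function names, shapes, and stand-in implementations are invented for this example and are not the repository's actual entry points.

```python
# Hypothetical task routing for the three applications named in the paper
# (image-based animation, reaction animation, text-to-interaction). The
# functions below are invented stand-ins, not the repo's actual API.
import numpy as np

J, T = 24, 60  # assumed joints per person and frames

def pose_generator(pose_a=None, text=None):
    """Stand-in for the spatial-prior model: returns a two-person interactive pose."""
    return np.zeros((2, J, 3))

def pose_animator(interactive_pose, frames=T):
    """Stand-in for the temporal-prior model: returns a two-person motion sequence."""
    return np.zeros((frames, 2, J, 3))

def animate(task, pose_a=None, pose_b=None, text=None):
    """Route each task onto the pose generator and/or pose animator."""
    if task == "image":          # both poses estimated from a photo, animate directly
        interactive = np.stack([pose_a, pose_b])
    elif task == "reaction":     # one observed pose; synthesize the partner first
        interactive = pose_generator(pose_a=pose_a, text=text)
    elif task == "text":         # no poses; synthesize both people from text
        interactive = pose_generator(text=text)
    else:
        raise ValueError(f"unknown task: {task}")
    return pose_animator(interactive)

motion = animate("text", text="two people high-five")
print(motion.shape)  # (60, 2, 24, 3)
```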

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

- Text2Interact: High-Fidelity and Diverse Text-to-Two-Person Interaction Generation (2025): https://huggingface.co/papers/2510.06504
- MoReact: Generating Reactive Motion from Textual Descriptions (2025): https://huggingface.co/papers/2509.23911
- InterPose: Learning to Generate Human-Object Interactions from Large-Scale Web Videos (2025): https://huggingface.co/papers/2509.00767
- InfinityHuman: Towards Long-Term Audio-Driven Human (2025): https://huggingface.co/papers/2508.20210
- MoSA: Motion-Coherent Human Video Generation via Structure-Appearance Decoupling (2025): https://huggingface.co/papers/2508.17404
- VividAnimator: An End-to-End Audio and Pose-driven Half-Body Human Animation Framework (2025): https://huggingface.co/papers/2510.10269
- PersonaAnimator: Personalized Motion Transfer from Unconstrained Videos (2025): https://huggingface.co/papers/2508.19895

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper: 1

Datasets citing this paper: 0

Spaces citing this paper: 0

Collections including this paper: 0