
arxiv:2601.04745

KnowMe-Bench: Benchmarking Person Understanding for Lifelong Digital Companions

Published on Jan 8, 2026 · Submitted by Yu_xm on Jan 14, 2026
Authors: Tingyu Wu, Zhisheng Chen, Ziyan Weng, Shuhe Wang, Chenglong Li, Shuo Zhang, Sen Hu, Silin Wu, Qizhen Lan, Huacan Wang, Ronghao Chen

Abstract

Existing long-horizon memory benchmarks mostly use multi-turn dialogues or synthetic user histories, which makes retrieval performance an imperfect proxy for person understanding. We present KnowMe-Bench, a publicly releasable benchmark built from long-form autobiographical narratives, where actions, context, and inner thoughts provide dense evidence for inferring stable motivations and decision principles. KnowMe-Bench reconstructs each narrative into a flashback-aware, time-anchored stream and evaluates models with evidence-linked questions spanning factual recall, subjective state attribution, and principle-level reasoning. Across diverse narrative sources, retrieval-augmented systems mainly improve factual accuracy, while errors persist on temporally grounded explanations and higher-level inferences, highlighting the need for memory mechanisms beyond retrieval. Our data is available at https://github.com/QuantaAlpha/KnowMeBench.

AI-generated summary

Long-horizon memory benchmarks based on autobiographical narratives evaluate models' ability to infer stable motivations and decision principles through evidence-linked questions spanning factual recall, subjective state attribution, and principle-level reasoning.
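To make the evaluation setup concrete, here is a minimal sketch of how one might score a model's answers to evidence-linked questions grouped by the three question categories named in the abstract. The file name `knowme_bench.jsonl`, the field names (`question`, `answer`, `category`), and the exact-match metric are illustrative assumptions, not the benchmark's actual schema or official scoring; see the GitHub repository for the released data format.

```python
import json
from collections import defaultdict

# Hypothetical item schema: one JSON object per line, each holding an
# evidence-linked question, a reference answer, and a category such as
# "factual_recall", "subjective_state", or "principle_reasoning".
def load_items(path="knowme_bench.jsonl"):
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def normalize(text):
    # Lowercase and collapse whitespace for a simple exact-match comparison.
    return " ".join(text.lower().split())

def evaluate(items, predict):
    """Compute per-category accuracy with a naive exact-match criterion."""
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        pred = predict(item["question"])
        cat = item["category"]
        total[cat] += 1
        if normalize(pred) == normalize(item["answer"]):
            correct[cat] += 1
    return {cat: correct[cat] / total[cat] for cat in total}

if __name__ == "__main__":
    items = load_items()
    # Stand-in predictor that just echoes the question; replace with a
    # call to the model (and retrieval pipeline) under evaluation.
    scores = evaluate(items, predict=lambda q: q)
    for cat, acc in sorted(scores.items()):
        print(f"{cat}: {acc:.3f}")
```

In practice the `predict` callable would wrap the retrieval-augmented system being tested, and the scoring would likely use a more forgiving matcher or an LLM judge for the subjective and principle-level categories.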

Community

Paper submitter

know me

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper (0)

No model linking this paper

Cite arxiv.org/abs/2601.04745 in a model README.md to link it from this page.

Datasets citing this paper (1)

Spaces citing this paper (0)

No Space linking this paper

Cite arxiv.org/abs/2601.04745 in a Space README.md to link it from this page.

Collections including this paper (1)