Paper page - Qilin: A Multimodal Information Retrieval Dataset with APP-level User Sessions
\n","updatedAt":"2025-03-04T06:56:03.646Z","author":{"_id":"60c0ed29d8bc072769d78f48","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60c0ed29d8bc072769d78f48/V6q6Tn4kzB46NIbTYw9pQ.jpeg","fullname":"Qian Dong","name":"qian","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":2,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7388666868209839},"editors":["qian"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/60c0ed29d8bc072769d78f48/V6q6Tn4kzB46NIbTYw9pQ.jpeg"],"reactions":[],"isReport":false}},{"id":"67c7a9f179b553252bac290c","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false},"createdAt":"2025-03-05T01:33:37.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [MIM: Multi-modal Content Interest Modeling Paradigm for User Behavior Modeling](https://huggingface.co/papers/2502.00321) (2025)\n* [HCMRM: A High-Consistency Multimodal Relevance Model for Search Ads](https://huggingface.co/papers/2502.05822) (2025)\n* [MomentSeeker: A Comprehensive Benchmark and A Strong Baseline For Moment Retrieval Within Long Videos](https://huggingface.co/papers/2502.12558) (2025)\n* [Any Information Is Just Worth One Single Screenshot: Unifying Search With Visualized Information Retrieval](https://huggingface.co/papers/2502.11431) (2025)\n* [VideoRAG: Retrieval-Augmented Generation with Extreme Long-Context Videos](https://huggingface.co/papers/2502.01549) (2025)\n* [A Survey on Multimodal Recommender Systems: Recent Advances and Future Directions](https://huggingface.co/papers/2502.15711) (2025)\n* [A Large-scale Dataset with Behavior, Attributes, and Content of Mobile Short-video Platform](https://huggingface.co/papers/2502.05922) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:

* [MIM: Multi-modal Content Interest Modeling Paradigm for User Behavior Modeling](https://huggingface.co/papers/2502.00321) (2025)
* [HCMRM: A High-Consistency Multimodal Relevance Model for Search Ads](https://huggingface.co/papers/2502.05822) (2025)
* [MomentSeeker: A Comprehensive Benchmark and A Strong Baseline For Moment Retrieval Within Long Videos](https://huggingface.co/papers/2502.12558) (2025)
* [Any Information Is Just Worth One Single Screenshot: Unifying Search With Visualized Information Retrieval](https://huggingface.co/papers/2502.11431) (2025)
* [VideoRAG: Retrieval-Augmented Generation with Extreme Long-Context Videos](https://huggingface.co/papers/2502.01549) (2025)
* [A Survey on Multimodal Recommender Systems: Recent Advances and Future Directions](https://huggingface.co/papers/2502.15711) (2025)
* [A Large-scale Dataset with Behavior, Attributes, and Content of Mobile Short-video Platform](https://huggingface.co/papers/2502.05922) (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any Paper on Hugging Face, check out [this Space](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers).
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
\n","updatedAt":"2025-03-05T01:33:37.180Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":318,"isUserFollowing":false}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.6742612719535828},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2503.00501","authors":[{"_id":"67c6a343ad6b7c2fa29d5e7e","user":{"_id":"67c03221aed8409476d39da8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/67c03221aed8409476d39da8/eQIhOPRLNoiphsR145mfB.png","isPro":false,"fullname":"Jia Chen","user":"Regulus309","type":"user"},"name":"Jia Chen","status":"claimed_verified","statusLastChangedAt":"2025-03-04T16:08:10.744Z","hidden":false},{"_id":"67c6a343ad6b7c2fa29d5e7f","user":{"_id":"60c0ed29d8bc072769d78f48","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60c0ed29d8bc072769d78f48/V6q6Tn4kzB46NIbTYw9pQ.jpeg","isPro":false,"fullname":"Qian Dong","user":"qian","type":"user"},"name":"Qian Dong","status":"claimed_verified","statusLastChangedAt":"2025-03-04T08:34:51.762Z","hidden":false},{"_id":"67c6a343ad6b7c2fa29d5e80","user":{"_id":"67b5d91558369f6b38c5b596","avatarUrl":"/avatars/18b08d5d9b05786cad34bc000c7606aa.svg","isPro":false,"fullname":"Haitao Li","user":"haitaoli","type":"user"},"name":"Haitao Li","status":"admin_assigned","statusLastChangedAt":"2025-03-04T10:20:57.898Z","hidden":false},{"_id":"67c6a343ad6b7c2fa29d5e81","name":"Xiaohui He","hidden":false},{"_id":"67c6a343ad6b7c2fa29d5e82","name":"Yan Gao","hidden":false},{"_id":"67c6a343ad6b7c2fa29d5e83","user":{"_id":"6981d255ed901b3f22393c49","avatarUrl":"/avatars/1961b0d576a188068603f90711a182f1.svg","isPro":false,"fullname":"ShelsonCao","user":"ShelsonCao","type":"user"},"name":"Shaosheng Cao","status":"extracted_pending","statusLastChangedAt":"2026-02-03T10:48:12.896Z","hidden":false},{"_id":"67c6a343ad6b7c2fa29d5e84","name":"Yi Wu","hidden":false},{"_id":"67c6a343ad6b7c2fa29d5e85","name":"Ping Yang","hidden":false},{"_id":"67c6a343ad6b7c2fa29d5e86","name":"Chen Xu","hidden":false},{"_id":"67c6a343ad6b7c2fa29d5e87","name":"Yao Hu","hidden":false},{"_id":"67c6a343ad6b7c2fa29d5e88","user":{"_id":"6657e7045f6e35c7d541bdd8","avatarUrl":"/avatars/368e5cef6c93543b2b92fbca79a4e4b9.svg","isPro":false,"fullname":"Qingyao Ai","user":"aiqy","type":"user"},"name":"Qingyao Ai","status":"admin_assigned","statusLastChangedAt":"2025-03-04T10:21:22.100Z","hidden":false},{"_id":"67c6a343ad6b7c2fa29d5e89","name":"Yiqun Liu","hidden":false}],"publishedAt":"2025-03-01T14:15:00.000Z","submittedOnDailyAt":"2025-03-04T04:26:03.632Z","title":"Qilin: A Multimodal Information Retrieval Dataset with APP-level User\n Sessions","submittedOnDailyBy":{"_id":"60c0ed29d8bc072769d78f48","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60c0ed29d8bc072769d78f48/V6q6Tn4kzB46NIbTYw9pQ.jpeg","isPro":false,"fullname":"Qian Dong","user":"qian","type":"user"},"summary":"User-generated content (UGC) communities, especially those featuring\nmultimodal content, improve user experiences by integrating visual and textual\ninformation into results (or items). 
The challenge of improving user\nexperiences in complex systems with search and recommendation (S\\&R) services\nhas drawn significant attention from both academia and industry these years.\nHowever, the lack of high-quality datasets has limited the research progress on\nmultimodal S\\&R. To address the growing need for developing better S\\&R\nservices, we present a novel multimodal information retrieval dataset in this\npaper, namely Qilin. The dataset is collected from Xiaohongshu, a popular\nsocial platform with over 300 million monthly active users and an average\nsearch penetration rate of over 70\\%. In contrast to existing datasets,\nQilin offers a comprehensive collection of user sessions with\nheterogeneous results like image-text notes, video notes, commercial notes, and\ndirect answers, facilitating the development of advanced multimodal neural\nretrieval models across diverse task settings. To better model user\nsatisfaction and support the analysis of heterogeneous user behaviors, we also\ncollect extensive APP-level contextual signals and genuine user feedback.\nNotably, Qilin contains user-favored answers and their referred results for\nsearch requests triggering the Deep Query Answering (DQA) module. This allows\nnot only the training \\& evaluation of a Retrieval-augmented Generation (RAG)\npipeline, but also the exploration of how such a module would affect users'\nsearch behavior. Through comprehensive analysis and experiments, we provide\ninteresting findings and insights for further improving S\\&R systems. We hope\nthat Qilin will significantly contribute to the advancement of\nmultimodal content platforms with S\\&R services in the future.","upvotes":12,"discussionId":"67c6a346ad6b7c2fa29d5f88","projectPage":"https://huggingface.co/datasets/THUIR/Qilin","githubRepo":"https://github.com/RED-Search/Qilin","githubRepoAddedBy":"user","ai_summary":"A new multimodal dataset named Qilin, sourced from Xiaohongshu, supports the development of advanced retrieval models and examines the impact of deep query answering on user behavior.","ai_keywords":["multimodal information retrieval","Deep Query Answering","Retrieval-augmented Generation","multimodal neural retrieval models"],"githubStars":63},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"60c0ed29d8bc072769d78f48","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60c0ed29d8bc072769d78f48/V6q6Tn4kzB46NIbTYw9pQ.jpeg","isPro":false,"fullname":"Qian Dong","user":"qian","type":"user"},{"_id":"67c03221aed8409476d39da8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/67c03221aed8409476d39da8/eQIhOPRLNoiphsR145mfB.png","isPro":false,"fullname":"Jia Chen","user":"Regulus309","type":"user"},{"_id":"6400250ccafc9d549863d6e2","avatarUrl":"/avatars/063122daa0fc390b188e1058faca0388.svg","isPro":false,"fullname":"SHY","user":"YangsHao","type":"user"},{"_id":"67c6a788a87c8e90e3b09e7b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/yejUQwDOuti5ns1DhBKj1.png","isPro":false,"fullname":"TroyX","user":"TroyXZW","type":"user"},{"_id":"648eb1eb59c4e5c87dc116e0","avatarUrl":"/avatars/c636cea39c2c0937f01398c94ead5dad.svg","isPro":false,"fullname":"fdsqefsgergd","user":"T-representer","type":"user"},{"_id":"63a369d98c0c89dcae3b8329","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/63a369d98c0c89dcae3b8329/AiH2zjy1cnt9OADAAZMLD.jpeg","isPro":false,"fullname":"Adina 
Yakefu","user":"AdinaY","type":"user"},{"_id":"6168218a4ed0b975c18f82a8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6168218a4ed0b975c18f82a8/vD4Q6KVcz5Td39QWTG-s7.png","isPro":true,"fullname":"NIONGOLO Chrys Fé-Marty","user":"Svngoku","type":"user"},{"_id":"679f9aaab6fd93f91c3b85e4","avatarUrl":"/avatars/085256231f3aba91fd310f41a634d184.svg","isPro":false,"fullname":"Wang","user":"EpsilonElegy","type":"user"},{"_id":"665b133508d536a8ac804f7d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/Uwi0OnANdTbRbHHQvGqvR.png","isPro":false,"fullname":"Paulson","user":"Pnaomi","type":"user"},{"_id":"64d4615cf8082bf19b916492","avatarUrl":"/avatars/8e1b59565ec5e4b31090cf1b911781b9.svg","isPro":false,"fullname":"wongyukim","user":"wongyukim","type":"user"},{"_id":"6270324ebecab9e2dcf245de","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6270324ebecab9e2dcf245de/cMbtWSasyNlYc9hvsEEzt.jpeg","isPro":false,"fullname":"Kye Gomez","user":"kye","type":"user"},{"_id":"663ccbff3a74a20189d4aa2e","avatarUrl":"/avatars/83a54455e0157480f65c498cd9057cf2.svg","isPro":false,"fullname":"Nguyen Van Thanh","user":"NguyenVanThanhHust","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary

A new multimodal dataset named Qilin, sourced from Xiaohongshu, supports the development of advanced retrieval models and examines the impact of deep query answering on user behavior.
Abstract

User-generated content (UGC) communities, especially those featuring multimodal content, improve user experiences by integrating visual and textual information into results (or items). The challenge of improving user experiences in complex systems with search and recommendation (S&R) services has drawn significant attention from both academia and industry in recent years. However, the lack of high-quality datasets has limited research progress on multimodal S&R. To address the growing need for better S&R services, we present Qilin, a novel multimodal information retrieval dataset. The dataset is collected from Xiaohongshu, a popular social platform with over 300 million monthly active users and an average search penetration rate of over 70%. In contrast to existing datasets, Qilin offers a comprehensive collection of user sessions with heterogeneous results such as image-text notes, video notes, commercial notes, and direct answers, facilitating the development of advanced multimodal neural retrieval models across diverse task settings. To better model user satisfaction and support the analysis of heterogeneous user behaviors, we also collect extensive APP-level contextual signals and genuine user feedback. Notably, Qilin contains user-favored answers and their referred results for search requests that trigger the Deep Query Answering (DQA) module. This allows not only the training and evaluation of a Retrieval-Augmented Generation (RAG) pipeline, but also the exploration of how such a module affects users' search behavior. Through comprehensive analysis and experiments, we provide interesting findings and insights for further improving S&R systems. We hope that Qilin will significantly contribute to the advancement of multimodal content platforms with S&R services in the future.
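To make the session-level structure described above concrete, here is a minimal sketch of how such records might be loaded and inspected with the Hugging Face `datasets` library. The subset name and the field names used below (`search`, `query`, `note_type`, `click_labels`, `dqa_answer`) are illustrative assumptions, not the documented Qilin schema; the actual configuration names are listed on the project page.

```python
# Minimal sketch, assuming the dataset is hosted at THUIR/Qilin on the Hub.
# The subset and field names are hypothetical placeholders, not the official
# Qilin schema; print the keys to discover the fields actually shipped.
from datasets import load_dataset

ds = load_dataset("THUIR/Qilin", "search", split="train")  # "search" is assumed
example = ds[0]
print(sorted(example.keys()))  # inspect the real field names

# One way to summarize a single search request with heterogeneous results
# (image-text notes, video notes, commercial notes, or a direct DQA answer):
summary = {
    "query": example.get("query"),              # hypothetical field name
    "result_types": example.get("note_type"),   # hypothetical field name
    "clicks": example.get("click_labels"),      # hypothetical field name
    "dqa_answer": example.get("dqa_answer"),    # hypothetical field name
}
print(summary)
```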
Qilin is a large-scale multimodal dataset designed to advance research on search, recommendation, and Retrieval-Augmented Generation (RAG) systems. This repository contains the official implementation accompanying the dataset paper, along with baseline models and evaluation tools.
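Since the repository ships baseline models and evaluation tools, a natural first step is scoring a retrieval baseline with standard ranking metrics. The snippet below is a generic, self-contained reference for MRR@k and NDCG@k over ranked result lists with binary or graded relevance labels; it is not the repository's own evaluation code, and the toy inputs are assumptions for illustration only.

```python
import math
from typing import List

def mrr_at_k(ranked_relevance: List[List[int]], k: int = 10) -> float:
    """Mean reciprocal rank of the first relevant (e.g., clicked) result."""
    total = 0.0
    for rels in ranked_relevance:
        for rank, rel in enumerate(rels[:k], start=1):
            if rel > 0:
                total += 1.0 / rank
                break
    return total / len(ranked_relevance)

def ndcg_at_k(ranked_relevance: List[List[int]], k: int = 10) -> float:
    """NDCG@k with binary or graded relevance labels."""
    scores = []
    for rels in ranked_relevance:
        dcg = sum(rel / math.log2(rank + 1) for rank, rel in enumerate(rels[:k], start=1))
        ideal = sorted(rels, reverse=True)[:k]
        idcg = sum(rel / math.log2(rank + 1) for rank, rel in enumerate(ideal, start=1))
        scores.append(dcg / idcg if idcg > 0 else 0.0)
    return sum(scores) / len(scores)

# Toy example: two search requests, 1 = clicked/relevant, 0 = not clicked.
runs = [[0, 1, 0, 0], [1, 0, 0, 1]]
print({"MRR@10": mrr_at_k(runs), "NDCG@10": ndcg_at_k(runs)})
```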