
Hugging Face Research

The science team at Hugging Face is dedicated to advancing machine learning research in ways that maximize value for the whole community.

🛠️ Tooling & Infrastructure

The foundation of ML research is tooling and infrastructure, and we develop a range of tools such as datatrove, nanotron, TRL, LeRobot, and lighteval.

📑 Datasets

High-quality datasets are the powerhouse of LLMs and require special care and skill to build. We focus on building datasets such as no-robots, FineWeb, The Stack, and FineVideo.

🤖 Open Models

The datasets and training recipes behind most state-of-the-art models are not released. We build cutting-edge models and release the full training pipeline as well, fostering innovation and reproducibility. Examples include Zephyr, StarCoder2, and SmolLM3.

🌸 Collaborations

Research and collaboration go hand in hand. That's why we like to organize and participate in large open collaborations such as BigScience and BigCode, as well as many smaller partnerships such as Leaderboards on the Hub.

⚙️ Infrastructure

The research team is organized into small teams of typically fewer than four people, and the science cluster consists of 96 nodes with 8 H100 GPUs each, as well as an auto-scaling CPU cluster for dataset processing. With this setup, even a small research team can build and ship impactful artifacts.
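As a back-of-the-envelope check on that scale (a minimal sketch; the 80 GB-per-GPU figure assumes the H100 SXM variant and is not stated above):

```python
# Cluster scale stated above: 96 nodes, each with 8 H100 GPUs.
NODES = 96
GPUS_PER_NODE = 8
HBM_PER_GPU_GB = 80  # assumption: 80 GB H100 SXM cards

total_gpus = NODES * GPUS_PER_NODE          # total accelerators in the cluster
total_hbm_tb = total_gpus * HBM_PER_GPU_GB / 1000  # aggregate GPU memory in TB

print(total_gpus)    # 768
print(total_hbm_tb)  # 61.44
```

That is 768 GPUs and roughly 61 TB of aggregate GPU memory, which is why even a sub-four-person team can train competitive models end to end.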

📖 Educational material

Besides writing tech reports for our research projects, we also write educational content to help newcomers and practitioners get started in the field. Examples include the alignment handbook, the evaluation guidebook, the pretraining tutorial, and the FineWeb blog.

Release Timeline

This is the release timeline so far; click on an element to follow its link:

[Interactive release timeline, NOV to JAN 2024: 🔥 Warming up]

🤗 Join us!

We are actively hiring for both full-time roles and internships. Check out hf.co/jobs
