TabTune: A Unified Library for Inference and Fine-Tuning Tabular Foundation Models
https://github.com/Lexsi-Labs/TabTune
AI-generated summary

TabTune is a unified library that standardizes the workflow for tabular foundation models, supporting various adaptation strategies and evaluation metrics.

Abstract
Tabular foundation models represent a growing paradigm in structured data
learning, extending the benefits of large-scale pretraining to tabular domains.
However, their adoption remains limited due to heterogeneous preprocessing
pipelines, fragmented APIs, inconsistent fine-tuning procedures, and the
absence of standardized evaluation for deployment-oriented metrics such as
calibration and fairness. We present TabTune, a unified library that
standardizes the complete workflow for tabular foundation models through a
single interface. TabTune provides consistent access to seven state-of-the-art
models supporting multiple adaptation strategies, including zero-shot
inference, meta-learning, supervised fine-tuning (SFT), and parameter-efficient
fine-tuning (PEFT). The framework automates model-aware preprocessing, manages
architectural heterogeneity internally, and integrates evaluation modules for
performance, calibration, and fairness. Designed for extensibility and
reproducibility, TabTune enables consistent benchmarking of adaptation
strategies for tabular foundation models. The library is open source and
available at https://github.com/Lexsi-Labs/TabTune.
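Among the adaptation strategies the abstract lists, parameter-efficient fine-tuning (PEFT) is the least self-explanatory. The sketch below illustrates the general low-rank-adapter idea behind PEFT in plain PyTorch: freeze the pretrained weights and train only two small low-rank matrices. This is a conceptual example, not TabTune's implementation; `LoRALinear`, `rank`, and `alpha` are illustrative names.

```python
# Minimal sketch of PEFT via low-rank adapters (LoRA-style).
# Illustrative only -- not TabTune's code.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # Low-rank update delta_W = B @ A: far fewer trainable parameters
        # than the full weight matrix.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling
```

Wrapping a model's linear projections this way trains only the small A and B matrices while the pretrained backbone stays frozen, which is what makes the approach parameter-efficient.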
TabTune is a powerful and flexible Python library designed to simplify the training and fine-tuning of modern foundation models on tabular data. It provides a high-level, scikit-learn-compatible API that abstracts away the complexities of data preprocessing, model-specific training loops, and benchmarking, letting you focus on delivering results.
Whether you are a practitioner aiming for production-grade pipelines or a researcher exploring advanced architectures, TabTune streamlines your workflow for tabular deep learning.
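To give a feel for what "scikit-learn-compatible" means in practice, here is a hypothetical skeleton of such an estimator: it follows the fit/predict contract, so it drops into standard sklearn tooling. The class name `TabularFMClassifier`, the `strategy` parameter, and the trivial fit logic are assumptions for illustration; consult the repository for the actual API.

```python
# Hypothetical sketch of a scikit-learn-compatible tabular estimator.
# Names and logic are illustrative assumptions, not TabTune's API --
# see https://github.com/Lexsi-Labs/TabTune for the real interface.
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin

class TabularFMClassifier(BaseEstimator, ClassifierMixin):
    """Placeholder wrapper around a tabular foundation model."""

    def __init__(self, strategy: str = "zero-shot"):
        self.strategy = strategy  # e.g. "zero-shot", "sft", "peft"

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # A real implementation would run model-aware preprocessing and the
        # chosen adaptation strategy here; this stub memorizes the majority class.
        values, counts = np.unique(y, return_counts=True)
        self.majority_ = values[np.argmax(counts)]
        return self

    def predict(self, X):
        return np.full(len(X), self.majority_)
```

Because the estimator honors the fit/predict contract, it composes directly with sklearn utilities such as `cross_val_score` and `Pipeline`, which is the practical payoff of a scikit-learn-compatible API.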