arxiv:2310.09263

Table-GPT: Table-tuned GPT for Diverse Table Tasks

Published on Oct 13, 2023
· Submitted by AK on Oct 16, 2023
Authors: Peng Li, Yeye He, Dror Yashar, Weiwei Cui, Song Ge, Haidong Zhang, Danielle Rifinski Fainman, Dongmei Zhang, Surajit Chaudhuri

Abstract

A new table-tuning paradigm improves language models' understanding and generalization in table-related tasks by fine-tuning them with synthesized table data, resulting in enhanced performance.

AI-generated summary

Language models, such as GPT-3.5 and ChatGPT, demonstrate remarkable abilities to follow diverse human instructions and perform a wide range of tasks. However, when probing language models using a range of basic table-understanding tasks, we observe that today's language models are still sub-optimal in many table-related tasks, likely because they are pre-trained predominantly on one-dimensional natural-language texts, whereas relational tables are two-dimensional objects. In this work, we propose a new "table-tuning" paradigm, where we continue to train/fine-tune language models like GPT-3.5 and ChatGPT, using diverse table-tasks synthesized from real tables as training data, with the goal of enhancing language models' ability to understand tables and perform table tasks. We show that our resulting Table-GPT models demonstrate (1) better table-understanding capabilities, by consistently outperforming the vanilla GPT-3.5 and ChatGPT, on a wide-range of table tasks, including holdout unseen tasks, and (2) strong generalizability, in its ability to respond to diverse human instructions to perform new table-tasks, in a manner similar to GPT-3.5 and ChatGPT.
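To make the "table-tuning" recipe in the abstract concrete, here is a minimal, hypothetical sketch of what a single synthesized training instance could look like: an instruction, a table serialized as Markdown, and the expected completion. The field names and the example table are illustrative and not taken from the paper.

```python
# Hypothetical table-tuning training instance: an (instruction, table,
# completion) triple with the table serialized as Markdown.
example = {
    "instruction": "Fill in the missing cell marked [MASK] in the table below.",
    "table": (
        "| country | capital | population (M) |\n"
        "|---------|---------|----------------|\n"
        "| France  | Paris   | 68             |\n"
        "| Japan   | [MASK]  | 125            |"
    ),
    "completion": "Tokyo",
}

prompt = example["instruction"] + "\n\n" + example["table"]
print(prompt)                 # what the model is shown during fine-tuning
print(example["completion"])  # the target it learns to produce
```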

Community

Can you open-source your training datasets for the different table tasks?

Write the top 5 points.

Start discussing this paper.

Wtf is going on in these comments lol? Anyway, here's my summary...

Tables are everywhere - reports, databases, webpages. They neatly organize data for humans to parse. But despite strong language skills, AI still struggles with table comprehension.

Even models like GPT-3 fail at basic tasks like finding where a missing value should go. This is because they're trained mostly on free-flowing text, not 2D tabular data. Unlike unstructured text, data in tables derives meaning from its structure and position!

So researchers at Microsoft tried "table-tuning" - extending training with synthesized table task cases. Tasks like "impute missing value X" or "identify outliers in this table". They did this using a corpus of real-world tables.
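As a rough illustration of that synthesis idea (a sketch only, not the paper's actual pipeline), one could mask a random cell of a real table and turn it into a prompt/completion pair, for example with pandas:

```python
import random
import pandas as pd

def make_imputation_example(df: pd.DataFrame) -> dict:
    """Mask one random cell of a real table to create a synthetic
    'impute the missing value' training pair. Illustrative sketch only."""
    row = random.randrange(len(df))
    col = random.randrange(len(df.columns))
    answer = str(df.iat[row, col])

    masked = df.astype(str)          # work on a string copy of the table
    masked.iat[row, col] = "[MASK]"

    prompt = (
        "Impute the missing value marked [MASK] in the table below.\n\n"
        + masked.to_markdown(index=False)   # needs the 'tabulate' package
    )
    return {"prompt": prompt, "completion": answer}

df = pd.DataFrame({"city": ["Paris", "Tokyo", "Rome"],
                   "country": ["France", "Japan", "Italy"]})
print(make_imputation_example(df)["prompt"])
```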

They also augmented the data more by paraphrasing, reordering rows/columns, chaining model responses, and more. This protects against overfitting.
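Here is a hedged sketch of what the row/column-permutation and paraphrasing augmentations could look like (illustrative only; the paper's implementation details may differ):

```python
import random
import pandas as pd

def permute_table(df: pd.DataFrame, seed=None) -> pd.DataFrame:
    """Row/column-permutation augmentation: a table's meaning should not
    depend on row or column order, so shuffled copies discourage the model
    from latching onto absolute positions."""
    rng = random.Random(seed)
    cols = df.columns.tolist()
    rng.shuffle(cols)
    shuffled = df[cols].sample(frac=1.0, random_state=seed)  # shuffle columns, then rows
    return shuffled.reset_index(drop=True)

# Instruction-level paraphrasing can be as simple as sampling from a pool
# of equivalent phrasings (hypothetical examples):
PARAPHRASES = [
    "Fill in the missing cell marked [MASK].",
    "What value belongs in the [MASK] position of this table?",
    "Complete the table by replacing [MASK] with the correct value.",
]
instruction = random.choice(PARAPHRASES)
```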

The resulting Table-GPT models showed big improvements:

  • 25%+ better at unseen table tasks like missing value ID
  • Beat GPT-3 on 98% of test cases over 9 different table tasks
  • Stayed strong even after targeted downstream tuning

Table-tuning seems a promising step toward AI that can handle tables. That would unlock automated analysis over the troves of valuable tabular data out there.

TLDR: Training models on a large and diverse dataset of synthesized table tasks significantly boosts their table skills.

Full Summary is here.

> Can you open-source your training datasets for the different table tasks?

Not sure about that, but here's a table dataset with more than 800 billion tokens!

https://huggingface.co/datasets/approximatelabs/tablib-v1-full

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

• Testing the Limits of Unified Sequence to Sequence LLM Pretraining on Diverse Table Data Tasks (2023): https://huggingface.co/papers/2310.00789
• LLM-augmented Preference Learning from Natural Language (2023): https://huggingface.co/papers/2310.08523
• Efficient Finetuning Large Language Models For Vietnamese Chatbot (2023): https://huggingface.co/papers/2309.04646
• A Systematic Evaluation of Large Language Models on Out-of-Distribution Logical Reasoning Tasks (2023): https://huggingface.co/papers/2310.09430
• Training Generative Question-Answering on Synthetic Data Obtained from an Instruct-tuned Model (2023): https://huggingface.co/papers/2310.08072

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

I am still not convinced by GPT's ability to handle arithmetical calculations in any context, be it tables or time-series data. So far, the outputs I have seen are not even close.

Key conclusion from my review of v1 of this paper (published on 13th Oct 2023):

This paper offers a good introduction to simple table-tuning tasks; however, task T-3 (TQA) should be significantly improved before Table-GPT can be used commercially.

Key points:
• A generic overview of the results indicates that table-tuning works very well; however, in my opinion the tasks should be divided by complexity to better understand the value of table-tuning. Please see the diagram below.

(diagram: results broken down by task complexity)

• For most of the easy tasks (all tasks except T-3), table-tuning offers large zero-shot improvements over the vanilla models (209% improvement for GPT-3.5 and 119% for ChatGPT).
• For most of the easy tasks (all tasks except T-3), table-tuning offers good few-shot results compared to the vanilla models (22% improvement for GPT-3.5 and 12% for ChatGPT).

(chart: results for all tasks except T-3)

• T-3 (TQA) is the most complex task (and the one with the biggest business demand), and for this task table-tuning offers only very small improvements (1-2% for ChatGPT and 5-8% for GPT-3.5), which is probably not worth the fine-tuning effort.

(chart: results for T-3)

Open questions:
• Do you have plans to fine-tune GPT-4?
• Can you share recommendations on improving T-3 (TQA)? Maybe by including TQA tasks in training?
• Can you also include T-12 (NS) in the tests?
• Can you specify the number of tokens used (both for training and test execution) for each task?

Other remarks:
• Markdown format increases the performance of table-tuning by 3% compared to CSV and by 5% compared to JSON (Table 5); see the serialization sketch after this list.
• For most of the tasks, few-shot offers a strong improvement over zero-shot for vanilla GPT-3.5 and ChatGPT (even without table-tuning).
• Typos found in the paper:
- p. 4: “toke-by-token” should be “token-by-token”
- p. 6: “few select table-tasks” should be “few selected table-tasks”
- p. 7: “describes the row-augmentation task” should be “describes the column-augmentation task”
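Since the Markdown/CSV/JSON point above is about how the same table is serialized, here is a small sketch of the three formats for a toy table (assuming pandas; `DataFrame.to_markdown` additionally requires the `tabulate` package):

```python
import pandas as pd

df = pd.DataFrame({"col_a": [1, 2], "col_b": ["x", "y"]})

md_text   = df.to_markdown(index=False)   # Markdown (best-performing in the comparison above)
csv_text  = df.to_csv(index=False)        # CSV
json_text = df.to_json(orient="records")  # JSON records

for label, text in [("Markdown", md_text), ("CSV", csv_text), ("JSON", json_text)]:
    print(f"--- {label} ---\n{text}\n")
```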

Revolutionizing AI: Table-GPT Enhances Language Models for Complex Table Tasks!

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2310.09263 in a model README.md to link it from this page.

Datasets citing this paper 1

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2310.09263 in a Space README.md to link it from this page.

Collections including this paper 20