GLiNER multi-task: Generalist Lightweight Model for Various Information Extraction Tasks
AI-generated summary

A small encoder model, GLiNER, achieves state-of-the-art performance on zero-shot NER and strong results across various information extraction tasks, combining size efficiency with strong generalization.
Information extraction tasks require models that are accurate, efficient, and generalisable. Classical supervised deep-learning approaches can achieve the required performance, but they need large datasets and are limited in their ability to adapt to different tasks. Large language models (LLMs), on the other hand, generalize well: they can adapt to many different tasks based on user requests. However, LLMs are computationally expensive and often fail to produce structured outputs. In this article, we introduce a new kind of GLiNER model that can be used for various information extraction tasks while remaining a small encoder model. Our model achieves SoTA performance on zero-shot NER benchmarks and leading performance on question-answering, summarization, and relation extraction tasks. We also present experimental results on self-learning approaches for named entity recognition using GLiNER models.
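To make the zero-shot workflow concrete, here is a minimal sketch using the open-source `gliner` Python package. The checkpoint name and label set are assumptions chosen for illustration; any GLiNER multi-task checkpoint exposing the same interface should behave similarly.

```python
# pip install gliner
from gliner import GLiNER

# Checkpoint name is an assumption for this example.
model = GLiNER.from_pretrained("knowledgator/gliner-multitask-large-v0.5")

text = (
    "Ada Lovelace worked with Charles Babbage on the Analytical Engine "
    "in London."
)

# Zero-shot: the label set is free-form and supplied at inference time.
labels = ["person", "organization", "location", "invention"]

entities = model.predict_entities(text, labels, threshold=0.5)
for entity in entities:
    print(entity["text"], "=>", entity["label"])
```

Because labels are just strings scored against candidate spans, adapting to a new domain only requires changing the label list, not retraining.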
Our research explores the following topics:
🚀 Zero-shot NER & Information Extraction: We demonstrate that, given diverse and ample training data and the right architecture, encoders can achieve impressive results across various extraction tasks, such as NER, relation extraction, and summarization (see the first sketch after this list).
🛠️ Synthetic Data Generation: Leveraging open labelling by LLMs such as Llama, we generated high-quality training data. Our student model even outperformed its teacher, highlighting the potential of this approach.
🤖 Self-Learning: Our model improved consistently without labelled data, achieving up to a 12% increase in F1 score on initially challenging topics. This ability to learn and improve autonomously is a promising direction for future research (a sketch of the loop follows this list)!
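On the multi-task point above: with GLiNER multi-task checkpoints, tasks other than NER are commonly recast as span extraction by prepending a task prompt to the input and extracting spans under a generic label. The prompt wording and the `answer` label below are assumptions based on common usage of such checkpoints, not a canonical format from the paper.

```python
from gliner import GLiNER

model = GLiNER.from_pretrained("knowledgator/gliner-multitask-large-v0.5")

# Extractive question answering recast as span extraction: the question is
# prepended to the passage, and the model extracts "answer" spans.
# Prompt format and label are illustrative assumptions.
question = "Who founded Microsoft?"
passage = "Microsoft was founded by Bill Gates and Paul Allen in 1975."

answers = model.predict_entities(f"{question} {passage}", ["answer"], threshold=0.3)
for answer in answers:
    print(answer["text"], round(answer["score"], 3))
```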
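For the self-learning loop, the high-level recipe is: predict on unlabelled text, keep only high-confidence spans as pseudo-labels, fine-tune on them, and repeat. The sketch below covers the pseudo-labelling half; `fine_tune` is a hypothetical stand-in for whatever training pipeline you use (the `gliner` repository ships its own training scripts), and the pseudo-label schema is illustrative.

```python
from gliner import GLiNER

def collect_pseudo_labels(model, unlabeled_texts, labels, min_score=0.9):
    """Keep only predictions the model is highly confident about."""
    pseudo_labeled = []
    for text in unlabeled_texts:
        spans = model.predict_entities(text, labels, threshold=min_score)
        if spans:
            # Schema is illustrative; adapt it to your trainer's format.
            pseudo_labeled.append({"text": text, "spans": spans})
    return pseudo_labeled

model = GLiNER.from_pretrained("knowledgator/gliner-multitask-large-v0.5")
labels = ["person", "organization", "location"]
unlabeled_texts: list[str] = []  # fill with your unlabelled corpus

for _ in range(3):  # a few self-training rounds
    data = collect_pseudo_labels(model, unlabeled_texts, labels)
    # fine_tune is hypothetical: plug in the gliner training scripts
    # or your own trainer here, then loop.
    # model = fine_tune(model, data)
```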