
arxiv:2504.15585

A Comprehensive Survey in LLM(-Agent) Full Stack Safety: Data, Training and Deployment

Published on Apr 22, 2025
Submitted by Ningyu Zhang on Apr 24, 2025
Authors: Kun Wang, Guibin Zhang, Zhenhong Zhou, Jiahao Wu, Miao Yu, Shiqian Zhao, Chenlong Yin, Jinhu Fu, Yibo Yan, Hanjun Luo, Liang Lin, Zhihao Xu, Haolang Lu, Xinye Cao, Xinyun Zhou, Weifei Jin, Fanci Meng, Junyuan Mao, et al.
Abstract

This paper introduces the concept of full-stack safety to address the entire lifecycle of Large Language Models (LLMs) from data preparation to commercialization, providing comprehensive insights and promising research directions.

AI-generated summary

The remarkable success of Large Language Models (LLMs) has illuminated a promising pathway toward achieving Artificial General Intelligence for both academic and industrial communities, owing to their unprecedented performance across various applications. As LLMs continue to gain prominence in both research and commercial domains, their security and safety implications have become a growing concern, not only for researchers and corporations but also for every nation. Currently, existing surveys on LLM safety primarily focus on specific stages of the LLM lifecycle, e.g., the deployment phase or the fine-tuning phase, lacking a comprehensive understanding of the entire "lifechain" of LLMs. To address this gap, this paper introduces, for the first time, the concept of "full-stack" safety to systematically consider safety issues throughout the entire process of LLM training, deployment, and eventual commercialization. Compared to off-the-shelf LLM safety surveys, our work demonstrates several distinctive advantages: (I) Comprehensive Perspective. We define the complete LLM lifecycle as encompassing data preparation, pre-training, post-training, deployment, and final commercialization. To our knowledge, this represents the first safety survey to encompass the entire lifecycle of LLMs. (II) Extensive Literature Support. Our research is grounded in an exhaustive review of over 800 papers, ensuring comprehensive coverage and systematic organization of security issues within a more holistic understanding. (III) Unique Insights. Through systematic literature analysis, we have developed reliable roadmaps and perspectives for each chapter. Our work identifies promising research directions, including safety in data generation, alignment techniques, model editing, and LLM-based agent systems. These insights provide valuable guidance for researchers pursuing future work in this field.

Community

Paper author and submitter

This paper introduces, for the first time, the concept of "full-stack" safety to systematically consider safety issues throughout the entire process of LLM training, deployment, and eventual commercialization.

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2504.15585 in a model README.md to link it from this page.
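For illustration, linking works by mentioning the arXiv identifier anywhere in a repository's README.md; the Hub then associates the repository with this paper page. A minimal, hypothetical model-card snippet (the model name and description are placeholders, not a real repository):

```markdown
---
library_name: transformers
---

# example-safety-tuned-model

This model was fine-tuned with the safety considerations discussed in
"A Comprehensive Survey in LLM(-Agent) Full Stack Safety"
(https://arxiv.org/abs/2504.15585).
```

Once such a README.md is pushed to a model repository, the model should appear under "Models citing this paper" on this page.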

Datasets citing this paper 1

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2504.15585 in a Space README.md to link it from this page.

Collections including this paper 3