Paper page - QuantaAlpha: An Evolutionary Framework for LLM-Driven Alpha Mining
arxiv:2602.07085

QuantaAlpha: An Evolutionary Framework for LLM-Driven Alpha Mining

Published on Feb 6 · Submitted by Zhi Yang on Feb 10 · #3 Paper of the day
Authors:
Jun Han, Shuo Zhang, Wei Li, Zhi Yang, Yifan Dong, Tu Hu, Jialuo Yuan, Xiaomin Yu, Yumo Zhu, Fangqi Lou, Xin Guo, Zhaowei Liu, Tianyi Jiang, Ruichuan An, Jingping Liu, Biao Wu, Rongze Chen, Kunyi Wang, Yifan Wang, Sen Hu, Xinbing Kong, Liwen Zhang, Ronghao Chen, Huacan Wang

Abstract

Financial markets are noisy and non-stationary, making alpha mining highly sensitive to noise in backtesting results and sudden market regime shifts. While recent agentic frameworks improve alpha mining automation, they often lack controllable multi-round search and reliable reuse of validated experience. To address these challenges, we propose QuantaAlpha, an evolutionary alpha mining framework that treats each end-to-end mining run as a trajectory and improves factors through trajectory-level mutation and crossover operations. QuantaAlpha localizes suboptimal steps in each trajectory for targeted revision and recombines complementary high-reward segments to reuse effective patterns, enabling structured exploration and refinement across mining iterations. During factor generation, QuantaAlpha enforces semantic consistency across the hypothesis, factor expression, and executable code, while constraining the complexity and redundancy of the generated factor to mitigate crowding. Extensive experiments on the China Securities Index 300 (CSI 300) demonstrate consistent gains over strong baseline models and prior agentic systems. When utilizing GPT-5.2, QuantaAlpha achieves an Information Coefficient (IC) of 0.1501, with an Annualized Rate of Return (ARR) of 27.75% and a Maximum Drawdown (MDD) of 7.98%. Moreover, factors mined on CSI 300 transfer effectively to the China Securities Index 500 (CSI 500) and the Standard & Poor's 500 Index (S&P 500), delivering 160% and 137% cumulative excess return over four years, respectively, which indicates strong robustness of QuantaAlpha under market distribution shifts.
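
For readers unfamiliar with the reported metrics, the sketch below illustrates the conventional definition of the Information Coefficient (IC): the mean cross-sectional rank correlation between a factor's values and next-period returns. This is a minimal, illustrative implementation of the standard metric, not the paper's own evaluation code; the function and variable names are placeholders.

```python
# Illustrative sketch of the standard IC computation (not the paper's pipeline).
import pandas as pd
from scipy.stats import spearmanr

def daily_ic(factor: pd.DataFrame, forward_returns: pd.DataFrame) -> pd.Series:
    """Cross-sectional Spearman IC per date.

    factor, forward_returns: DataFrames indexed by date, columns = tickers;
    forward_returns holds the return realized over the next holding period.
    """
    ics = {}
    for date in factor.index.intersection(forward_returns.index):
        f, r = factor.loc[date], forward_returns.loc[date]
        f, r = f.align(r, join="inner")          # keep tickers present in both
        mask = f.notna() & r.notna()
        if mask.sum() < 2:                        # need at least two stocks
            continue
        ic, _ = spearmanr(f[mask], r[mask])
        ics[date] = ic
    return pd.Series(ics, name="ic")

# Reported IC would correspond to something like: daily_ic(factor, fwd_ret).mean()
```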

Community

Paper author · Paper submitter

QuantaAlpha tackles noisy, non-stationary markets by evolving alpha-mining trajectories via mutation and crossover, enabling controllable multi-round search and reliable reuse of successful patterns. It enforces hypothesis–factor–code semantic consistency and limits complexity to reduce crowding. On CSI 300 it improves over strong baselines (GPT-5.2: IC 0.1501, ARR 27.75%, MDD 7.98%) and transfers well to CSI 500 and the S&P 500 under distribution shifts.
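
The trajectory-level mutation and crossover described above can be pictured with the following minimal sketch, assuming a trajectory is a sequence of scored steps. All names here (Step, Trajectory, revise_step) are hypothetical placeholders; in the actual framework, step revision is delegated to an LLM and candidates are scored via backtesting.

```python
# Hypothetical sketch of trajectory-level evolution; names are placeholders.
import random
from dataclasses import dataclass, field

@dataclass
class Step:
    content: str      # e.g. hypothesis, factor expression, or code for this step
    reward: float     # backtest-derived score for the step

@dataclass
class Trajectory:
    steps: list[Step] = field(default_factory=list)

    @property
    def fitness(self) -> float:
        return sum(s.reward for s in self.steps) / max(len(self.steps), 1)

def mutate(traj: Trajectory, revise_step) -> Trajectory:
    """Localize the weakest step and revise only that step (targeted revision)."""
    worst = min(range(len(traj.steps)), key=lambda i: traj.steps[i].reward)
    new_steps = list(traj.steps)
    new_steps[worst] = revise_step(traj, worst)   # e.g. an LLM rewrite of that step
    return Trajectory(new_steps)

def crossover(a: Trajectory, b: Trajectory) -> Trajectory:
    """Recombine the higher-reward head and tail segments from two parents."""
    cut = random.randint(1, min(len(a.steps), len(b.steps)) - 1)
    head = max((a.steps[:cut], b.steps[:cut]), key=lambda s: Trajectory(list(s)).fitness)
    tail = max((a.steps[cut:], b.steps[cut:]), key=lambda s: Trajectory(list(s)).fitness)
    return Trajectory(list(head) + list(tail))
```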

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2602.07085 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2602.07085 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2602.07085 in a Space README.md to link it from this page.

Collections including this paper 7