HAE-RAE
non-profit
dataset, paper), KMMLU (general knowledge, dataset, paper), HRM8K (math, dataset, paper), and KMMLU-Redux/Pro (general knowledge, dataset, paper).
Evaluation
We developed the haerae-evaluation-toolkit, a unified LLM evaluation framework designed to provide consistent and reproducible benchmarking for Korean and multilingual models.
Reasoning Language Models
In cooperation with KISTI-KONI, we released the KO-REAson series, sub-10B reasoning language models trained for Korean.
News
2026.01.08: HAERAE-VISION, a vision successor to HAE-RAE Bench, is here!
2025.08.31: We released six KO-REAson-0831 models.
2025.07.11: We collaborated with LG AI Research to build KMMLU-Pro, a major update to the KMMLU franchise.
2025.01.05: We released HRM8K, the first public Korean math benchmark.
Papers