K-EXAONE Technical Report
Eunbi Choi, Kibong Choi, Seokhee Hong, Junwon Hwang, Hyojin Jeon, Hyunjik Jo, Joonkee Kim, Seonghwan Kim, Soyeon Kim, Sunkyoung Kim, Yireun Kim, Yongil Kim, Haeju Lee, Jinsik Lee, Kyungmin Lee, Sangha Park, Heuiyeen Yeen, Hwan Chang, Stanley Jungkyu Choi, Yejin Choi, Jiwon Ham, Kijeong Jeon
2026-01-06
Summary
This report introduces K-EXAONE, a new large-scale language model created by LG AI Research that can understand and generate text in multiple languages.
What's the problem?
Building AI that truly understands and works with language is incredibly difficult, especially when it must handle many languages and long pieces of text. Existing large language models often struggle with complex reasoning, with acting as helpful assistants, and with performing consistently across different languages, and they are limited in how much information they can process at once.
What's the solution?
The researchers built K-EXAONE, a Mixture-of-Experts model with a massive 236 billion parameters in total, cleverly designed so that only 23 billion are active at a time, which makes it more efficient. It is trained on six languages (Korean, English, Spanish, German, Japanese, and Vietnamese) and can handle very long texts, up to 256,000 tokens. The researchers then tested K-EXAONE on a variety of tasks covering reasoning, acting as an agent, general knowledge, Korean language tasks, and multilingual understanding. The results show it performs on par with other open-weight models of similar size.
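The efficiency trick above (a huge total parameter count, but only a fraction active per token) is the core idea of Mixture-of-Experts routing, and it can be sketched in a few lines. The expert functions, router scores, and top_k value below are illustrative toys, not K-EXAONE's actual architecture or implementation:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, router_scores, top_k=2):
    """Route input x to the top_k highest-scoring experts and combine
    their outputs, weighted by renormalized router scores. Only top_k
    of len(experts) experts run -- this sparse activation is what lets
    a large total parameter count stay cheap at inference time."""
    ranked = sorted(range(len(experts)),
                    key=lambda i: router_scores[i], reverse=True)
    chosen = ranked[:top_k]
    weights = softmax([router_scores[i] for i in chosen])
    return sum(w * experts[i](x) for w, i in zip(weights, chosen))

# Toy experts: each scalar function stands in for a full FFN block.
experts = [lambda x: 2 * x, lambda x: x + 1, lambda x: -x, lambda x: x * x]
scores = [0.1, 2.0, 0.3, 1.5]  # hypothetical router logits for one token
y = moe_forward(3.0, experts, scores, top_k=2)
```

Here only experts 1 and 3 (the two highest scores) are evaluated; the other two are skipped entirely, which is why a 236B-parameter model can run with roughly 23B parameters active per token.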
Why it matters?
K-EXAONE represents a significant step forward in AI development: a powerful model suited to many different applications in both industry and research. Designed to advance AI with the goal of making life better, it provides a strong foundation for building even more capable AI systems in the future.
Abstract
This technical report presents K-EXAONE, a large-scale multilingual language model developed by LG AI Research. K-EXAONE is built on a Mixture-of-Experts architecture with 236B total parameters, activating 23B parameters during inference. It supports a 256K-token context window and covers six languages: Korean, English, Spanish, German, Japanese, and Vietnamese. We evaluate K-EXAONE on a comprehensive benchmark suite spanning reasoning, agentic, general, Korean, and multilingual abilities. Across these evaluations, K-EXAONE demonstrates performance comparable to open-weight models of similar size. K-EXAONE, designed to advance AI for a better life, is positioned as a powerful proprietary AI foundation model for a wide range of industrial and research applications.