South Korea’s AI Firms Challenge ChatGPT with Local LLMs

Asia Daily

South Korea accelerates its sovereign AI plan

South Korea is moving fast to build its own large language models that can stand beside world leaders like OpenAI and Google. Five industrial consortia led by LG AI Research, Naver Cloud, SK Telecom, Upstage, and NC AI have been tapped to develop national foundation models tuned for the Korean language and local needs. The government’s goal is ambitious: reach at least 95 percent of the performance of the most advanced global systems released within the prior six months, while keeping data, compute, and core expertise inside the country. The push is part of a broader sovereign AI vision that spans chips, cloud, and applications, and that aims to give South Korea more control over its digital future.

Companies are already fielding contenders. LG’s Exaone 4.0, Naver’s HyperClova X, SK Telecom’s A.X series, Upstage’s Solar Pro 2, and NC AI’s Varco models each target different strengths, from enterprise productivity and Korean search to call center automation and multimodal understanding. Several teams say their systems exceed frontier rivals on Korean-language benchmarks. The plan is not just about one model, or a single sector. It is a full ecosystem project that blends open source releases, shared datasets, domestic compute, and stepwise evaluations to winnow the field over time.

Why South Korea wants control of its AI stack

Sovereign AI in the Korean context means more than having a homegrown chatbot. Policymakers outline a full stack approach that connects semiconductors, data centers, cloud services, and advanced models into an integrated national capability. Leaders in Seoul cast this as independence, not isolation. The aim is to avoid long term reliance on foreign providers for core digital infrastructure and to secure AI that aligns with local laws, values, and security needs. The push has been framed as a potential Third Way between the United States and China, with South Korea offering a partner model to countries that want high grade systems without geopolitical strings.

Two events helped crystallize the plan. The rapid rise of ChatGPT convinced Korea’s tech sector that it could not sit out the foundation model race. Then the emergence of China’s DeepSeek, with its claims of far cheaper training at scale, signaled a possible reset in the economics of model development. The government responded with a sweeping agenda. It set up flagship programs for an independent foundation model and a national AI computing center, backed by a public private investment slate on the order of 100 trillion won. It also elevated AI policy inside the presidential office and moved to recruit research talent from abroad. The message is clear: South Korea wants AI it can build, run, and export on its own terms.

Who got picked and what comes next

The Ministry of Science and ICT selected five consortia from a field of 15 to develop Independent AI Foundation Models. The winners, led by Naver Cloud, Upstage, SK Telecom, NC AI, and LG AI Research, each carry the labels K-AI model and K-AI company. The target is a model or model family that can reach at least 95 percent of leading performance, with a steady cadence of evaluations. One team will be cut in December; the other four will continue. Reviews will take place every six months until two teams remain by 2027. That structure injects urgency and creates a national scoreboard for progress.
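The review mechanics described above can be sketched in a few lines. This is an illustrative model of the rule, not official evaluation code: the team names, scores, and frontier baseline below are invented placeholders, and the real program will use its own composite metrics.

```python
# Hypothetical sketch of the program's relative-performance rule:
# a team passes a review if its composite score reaches at least
# 95 percent of the best global score from the prior six months.
# All team names and scores below are illustrative, not real data.

FRONTIER_SCORE = 80.0  # best recent global composite score (illustrative)
THRESHOLD = 0.95       # the program's stated 95 percent target

team_scores = {
    "Team A": 77.5,
    "Team B": 76.8,
    "Team C": 74.0,
    "Team D": 78.2,
    "Team E": 71.5,
}

def relative_score(score: float, frontier: float = FRONTIER_SCORE) -> float:
    """Score expressed as a fraction of the frontier baseline."""
    return score / frontier

def passes_review(score: float) -> bool:
    """True if the team meets the 95 percent bar at a review."""
    return relative_score(score) >= THRESHOLD

# Rank teams and cut the lowest performer, mirroring the staged reviews
# that winnow five teams down to two by 2027.
ranked = sorted(team_scores.items(), key=lambda kv: kv[1], reverse=True)
survivors = ranked[:-1]  # the bottom-ranked team is cut at each review
```

Under these made-up numbers, a team scoring 77.5 against a frontier of 80.0 clears the bar (about 97 percent), while one at 74.0 falls short; repeating the cut at each review reproduces the five-to-four-to-two schedule.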

Support is material. Each team receives access to processed datasets worth 10 billion won and a pool of high quality broadcast and video learning data valued at 20 billion won. Another 2.8 billion won per team is set aside to build domain specific datasets. SK Telecom and Naver Cloud will lease GPU infrastructure that backs training and fine tuning for Upstage, NC AI, and LG AI Research. Upstage also gets matching support to bring in international researchers. The ministry has encouraged open source releases so that domestic developers and small firms can adopt and commercialize the technology. Several teams say a first round of models will ship within the year.

An ICT ministry official said the selection reflected both technical strength and a commitment to community adoption, describing the shared approach to open source and broader ecosystem growth.

“All five teams have demonstrated exceptional capabilities in AI model development, sharing a clear commitment to the vision of sovereign AI and presenting robust open source strategies that will allow other businesses to adopt and commercialize their technologies.”

Korea’s ICT minister framed the stakes as national and long term. In remarks reported by local media, he tied the project to a broader ecosystem that the government intends to nurture.

“Our bold initiative marks the beginning of Korea’s journey toward building AI for all. The government will stand firmly behind AI companies and institutions as they scale new heights and shape a robust sovereign AI ecosystem.”

Inside the models: what each team brings

The five teams are not building clones. Each model line targets specific strengths and real world jobs, and several already run in production. Here is how the field lines up, based on recent releases and public benchmarks.

LG Exaone 4.0: enterprise first with strong reasoning

LG AI Research’s Exaone 4.0 is designed for work, not only chat. It focuses on automating enterprise workflows, analyzing documents, and extracting structured insights from unstructured text. On global indexes tracked by independent aggregators, Exaone has ranked near the frontier, with reports that it sits just outside the global top tier by composite intelligence. Engineers cite strong performance on advanced reasoning tests such as MMLU-Pro and AIME 2025, which probe multi step logic and math. LG’s strategy includes a range of sizes to fit different budgets and a path toward multimodal models that can handle text, code, and visuals.

Naver HyperClova X: built for Korean language and search

Naver trained HyperClova X on one of the largest Korean language corpora assembled, then embedded it inside Naver’s search and services. The model excels at Korean specific tasks, with internal and external tests that show an edge over GPT-4 on language understanding tied to Korean culture and context. Naver has released specialized variants and lighter models for cost efficient use. In June 2025 it introduced HyperClova X Think, a version tuned for more careful reasoning in search and conversation. The company is also working toward an omni foundation model that unifies text, audio, image, and perhaps video into a single architecture.

SK Telecom A.X: tuned for local culture and ready for deployment

SK Telecom is leaning on its telecom footprint and AI investments to ship models that run in production services. The A.X line, trained from scratch with a strong Korean emphasis, has beaten GPT-4o on local benchmark suites according to company tests. A.X already powers AI secretaries for customer service, call summarization, and other telecom tasks. SK Telecom has shared a lighter model, A.X 3.1 Lite, with roughly 7 billion parameters trained on 1.65 trillion tokens, and reported about 96 percent of larger models’ performance on the KMMLU reasoning benchmark and 102 percent on the CLIcK cultural understanding index. The company says models will be released with open source licenses so developers can build on them across industries.

Upstage Solar Pro 2: efficiency as a weapon

Upstage is the startup in the group, and it has found leverage in efficient training and careful data curation. Solar Pro 2, at roughly 31 billion parameters, has matched or exceeded larger rivals on several benchmarks. It is listed on the Frontier Language Model Intelligence leaderboard and has achieved top rankings on intelligence versus cost to run, a measure that matters for businesses trying to scale usage. Upstage targets knowledge heavy fields such as law, finance, and healthcare, where accuracy and controllability matter, and where smaller models with strong reasoning can beat oversized systems on price performance.

NC AI Varco: smaller scale, strong multimodal skills

NC AI, a subsidiary of the game publisher NCSoft, surprised many when it won a spot in the sovereign AI project. Its Varco models, including Varco Vision 2.0, show strong results on image and text processing even at smaller scales. That focus fits the company’s roots in graphics rich interactive content. The team is building out a consortium with research groups and smaller firms, and it has leaned on years of in house AI work to keep pace in a field dominated by giants.

Kakao Kanana: still a player outside the project

Kakao did not make the sovereign AI roster, but it continues to release its Kanana models and to invest in multimodal capabilities. The company says it plans a partnership with OpenAI to bring an AI agent across Kakao’s platforms. Kakao also maintains a mix of smaller and more capable models, some shared under open source terms, aimed at mobile and media heavy use cases.

Do benchmarks tell the whole story

Benchmarks have become the scoreboard of modern AI. Suites such as MMLU, HumanEval, LiveCodeBench, AIME, and MATH-500 test knowledge, coding, and reasoning. Korea’s models are moving up. LG’s Exaone and Upstage’s Solar Pro 2 appear on global leaderboards, with Exaone ranked near the global top ten by composite intelligence in recent updates, and Solar Pro 2 singled out for efficiency. SK Telecom reports higher scores than GPT-4o on Korean language and culture tests such as KMMLU and CLIcK. Naver’s team cites results where HyperClova X outperforms GPT-4 on Korean tasks.
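Most of the suites named above score models the same basic way: multiple-choice items with one gold answer letter, graded by plain accuracy. The sketch below shows that scoring shape with invented placeholder questions, not items from any real benchmark.

```python
# A minimal sketch of how multiple-choice suites like MMLU are scored:
# each item has a gold answer letter, the model emits one letter, and
# the benchmark score is plain accuracy. The items below are invented
# placeholders, not real benchmark questions.

from dataclasses import dataclass

@dataclass
class Item:
    question: str
    choices: list[str]   # e.g. ["A. ...", "B. ...", "C. ...", "D. ..."]
    gold: str            # the correct answer letter

def accuracy(items: list[Item], predictions: list[str]) -> float:
    """Fraction of items where the predicted letter matches the gold letter."""
    correct = sum(
        pred.strip().upper() == item.gold
        for item, pred in zip(items, predictions)
    )
    return correct / len(items)

items = [
    Item("2 + 2 = ?", ["A. 3", "B. 4", "C. 5", "D. 6"], "B"),
    Item("Capital of Korea?",
         ["A. Busan", "B. Daegu", "C. Seoul", "D. Incheon"], "C"),
]
preds = ["B", "A"]              # a hypothetical model's answers
score = accuracy(items, preds)  # 1 of 2 correct -> 0.5
```

Real harnesses add prompt templating, answer extraction from free-form generations, and per-subject averaging, but the final number reported on a leaderboard reduces to this kind of accuracy computation.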

Benchmarks matter for engineers. They guide training choices and reveal strengths and weaknesses. They also have limits. Scores can cluster close together, tasks can saturate, and some tests reward pattern recall more than deep reasoning. The global race is also moving. OpenAI’s GPT-5 has posted gains on math, coding, and vision in recent months, but user feedback on newer systems can be mixed and practical utility often depends on the application. Korean teams know that enterprise buyers care as much about data control, cost predictability, and reliability as they do about leaderboard positions. That is why many local models are tuned for specific domains and are shipping inside live services.

Hardware and the full stack

South Korea’s chip industry gives it an edge. SK Hynix is a key supplier of high bandwidth memory that powers Nvidia’s leading GPUs. Samsung remains a major memory and foundry player. That industrial base supports the sovereign AI plan, even as teams still rely on Nvidia hardware for training. SK Telecom will use its Titan supercomputer, built with Nvidia GPUs, and an AI data center it is developing with Amazon to train and serve models. The approach aims to keep critical operations inside Korean facilities while tapping global partners where it makes sense.

At the same time, a domestic push around neural processing units is taking shape. Rebellions’ ATOM chip has been deployed in production for AI inference, replacing older Nvidia GPU servers inside SK Telecom’s X Caliber pet X-ray diagnostic service used by more than 1,000 veterinary hospitals. The system detects dozens of findings within about 15 seconds and reports accuracy in the high nineties. FuriosaAI’s RNGD chip is used with LG’s Exaone 3.5 and has earned commercialization approval after extensive evaluation, with claims of better performance per watt than general purpose GPUs for repetitive tasks. Industry voices expect a dual structure to emerge, with GPUs handling flexible training and many workloads, and NPUs serving targeted, high volume inference where power and cost matter most.

Where these models will show up

The first wave of deployments is already visible. Telecoms are using LLMs to transcribe and summarize calls, block spam, and assist human agents. Search engines and online portals are upgrading Korean language results and chat answers. Game studios are exploring multimodal models for character dialogue, visuals, and moderation. Financial firms and insurers are testing assistants that retrieve policy details and generate compliant documents. Government agencies see value in secure, domestic models for public services that require data localization.

Healthcare is an early focus. Korea’s regulator for food and drugs issued guidance in early 2025 for approving text generating medical AI, opening a pathway for clinical applications that use LLMs. Seoul National University Hospital reports a medical LLM trained on tens of millions of de-identified clinical records that exceeded the human average on the Korean Medical Licensing Examination. Lighter models such as A.X 3.1 Lite are designed to run on devices and at the telecom edge, which cuts latency and helps protect personal data. The sovereign AI program’s emphasis on open source also lowers barriers for smaller hospitals and startups to build specialized assistants using Korean medical vocabularies.

Open source or walled garden

Seoul’s program leans toward open source releases. Officials want homegrown models that other businesses can adopt and shape for their needs. That approach can seed a broad developer base and speed diffusion across industries. It also helps mitigate lock in to a single vendor. The government’s dataset support and GPU leasing come with expectations that teams will share code and weights where feasible. SK Telecom’s consortium plans to release a model this year that developers can use with a permissive license. Naver and LG are building families of models that mix proprietary and open releases to balance commercial opportunities with ecosystem growth.

The strategy is not free of debate. Some startups worry that a state backed push risks favoring large conglomerates and could crowd out smaller players. Others argue that scarce resources would be better spent on vertical applications, rather than another national foundation model that must chase frontier labs burning through massive budgets. Policymakers counter that open source terms, shared datasets, and staged evaluations will keep the field competitive and that a credible national model unlocks export potential. The national AI computing center has faced challenges attracting private partners, a reminder that infrastructure needs to fit market demand.

Risks, competition, and the global picture

The competitive bar keeps rising. OpenAI, Anthropic, Google, and xAI are pushing out faster, more capable models and investing heavily in safety and alignment research. Chinese firms, including Alibaba and DeepSeek, are releasing strong open source systems. Korean models tuned for local language and regulations can win at home, yet they must still attract developers and prove value in global markets. Compute budgets and talent acquisition remain hard constraints. The government has tried to ease those pressure points with funding, GPU access, and international recruiting support.

Execution risks are real. Building a top tier model is different from sustaining a platform that enterprises trust. Teams need robust tooling, monitoring, and fine tuning pipelines. They must keep up with safety standards and guard against prompt injection, data leakage, and hallucinations in sensitive workflows. On the other hand, Korea’s full stack position gives it options. Memory makers feed GPU suppliers, telecoms run edge computing, and a tight feedback loop between chip design, cloud operations, and software can help models evolve quickly. If the open source approach draws a strong developer community, South Korea could offer an attractive alternative for countries and companies seeking choices beyond the United States and China.

Key Points

  • Five consortia led by LG AI Research, Naver Cloud, SK Telecom, Upstage, and NC AI were selected to build sovereign AI foundation models.
  • The government’s goal is to reach at least 95 percent of the performance of top global models released within the prior six months.
  • Support includes 10 billion won in shared datasets per team, a 20 billion won pool of broadcast and video data, and 2.8 billion won for domain specific datasets.
  • SK Telecom and Naver Cloud will lease GPU infrastructure to support model training and fine tuning for select teams.
  • Evaluations will reduce five teams to two by 2027, with reviews every six months.
  • LG’s Exaone 4.0 and Upstage’s Solar Pro 2 appear on global leaderboards, with strong reasoning and efficiency results.
  • Naver’s HyperClova X focuses on Korean tasks and is integrated into search, with a new Think variant for more careful reasoning.
  • SK Telecom’s A.X models are tuned for local language and already power customer service tools, with lighter open source versions for on device use.
  • NC AI’s Varco models emphasize multimodal skills, including image and text understanding at smaller scales.
  • South Korea is leveraging its chip industry while exploring domestic NPUs from Rebellions and FuriosaAI for efficient inference.
  • Open source is central to the plan, with the goal of seeding a domestic developer ecosystem and reducing vendor lock in.
  • Debate continues over market dynamics and the role of the state, yet the program positions Korea as a potential partner for countries seeking AI options outside the United States and China.