An AI twist in a familiar Kimsuky playbook
North Korea’s Kimsuky group used artificial intelligence to dress up a phishing campaign with convincing images of South Korean military employee identification cards, according to a detailed analysis by a South Korean cybersecurity firm. The lures asked recipients to review sample ID designs for civilian staff, a request that would appear routine inside a defense bureaucracy. The attached images looked like legitimate badges, yet investigators say the graphics were produced by a generative AI model and packaged with malware that could take control of a victim’s computer.
The emails impersonated a defense-related institution tasked with issuing IDs. The attackers tried to reassure recipients by attaching PNG images that resembled draft ID card designs. Hidden in the same delivery chain were files that activated a backdoor once a target interacted with the content. The images were likely created by prompting a commercial AI system to produce mock-ups for testing purposes rather than real credentials, a framing that can slip past automated guardrails.
The sender address also mimicked a trusted military domain. Investigators observed lookalike domains such as .mli.kr, a near match for South Korea’s official defense suffix .mil.kr, designed to fool recipients scanning for familiar cues. Small changes of this kind are a common social engineering tactic, and they are easy to miss when someone is skimming a busy inbox.
The campaign, first detected in mid-July, came on the heels of earlier activity in which the same group used a ClickFix-style lure and delivered the same malware family. The knowledge gained in those runs appears to have been reused, now paired with AI-generated graphics to raise the believability of the story. Targets included a military-related organization as well as researchers, journalists, and activists who focus on North Korea.
How the phishing scheme worked
The hook was simple. Recipients received a message asking for a quick review of sample ID card designs. A ZIP archive or a download link delivered an image and one or more scripts. Once a target opened the content, the chain executed code that connected the computer to attacker infrastructure, pulled down additional components, and established remote control. The image helped sell the story of a harmless design review.
Bypassing AI safeguards with prompt injection
Most mainstream AI tools refuse to create government-issued IDs, since reproducing such documents is illegal and facilitates fraud. In this case, the attackers appear to have framed their prompts as requests for design samples for an internal mock-up. That framing can lead a model to interpret the request as benign. It is a form of prompt injection, or jailbreaking, in which a user steers a model around its rules by crafting language that reframes the intent.
Analysts who examined one of the fake images reported that a deepfake detector flagged it with a 98 percent probability of being AI-generated. In at least one sample, image metadata referenced GPT-4o and ChatGPT, suggesting an OpenAI image-generation pipeline had been used during testing or production. The more the image looked like a routine badge draft, the more likely a hurried staffer would treat the request as legitimate.
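The metadata clue suggests a cheap triage step defenders can automate before reaching for a dedicated deepfake detector. Below is a minimal sketch, assuming Pillow is installed, that scans a PNG's text chunks and EXIF fields for strings associated with AI generators; the marker list and file name are illustrative assumptions, and the absence of a hit proves nothing, since metadata is trivial to strip.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Strings whose presence in metadata suggests an AI image pipeline.
# Illustrative, not exhaustive.
AI_MARKERS = ("gpt-4o", "chatgpt", "dall-e", "openai", "midjourney", "stable diffusion")

def find_ai_markers(path: str) -> list[str]:
    """Return metadata fields whose values mention a known AI generator."""
    with Image.open(path) as img:
        fields = dict(img.info)          # PNG tEXt/iTXt chunks land here
        exif = img.getexif()             # EXIF data, when present
        for tag_id, value in exif.items():
            fields[TAGS.get(tag_id, str(tag_id))] = value
    return [
        f"{key}: {value}"
        for key, value in fields.items()
        if any(marker in str(value).lower() for marker in AI_MARKERS)
    ]

# Hypothetical sample name; a hit warrants deeper review of the message.
print(find_ai_markers("draft_id_badge.png"))
```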
Malware delivery and intrusion chain
The delivery package varied. In one observed case, a ZIP file contained a shortcut file disguised as a normal document. When launched, it ran a PowerShell command that reached out to a remote server, fetched a backdoor, and downloaded the decoy PNG of the ID card. In another, a batch file named LhUdPC3G.bat executed alongside the image and initiated the malicious activity, establishing persistence and enabling data theft and remote control.
These were spear-phishing emails, and the operators paid attention to detail. Some archives were personalized with a partially masked real name of the recipient to build trust and nudge the target into opening the attachment. The fake ID imagery, the familiar type of request, and the near match of the sender domain combined to push recipients past their skepticism.
Who Kimsuky targets, and why
Kimsuky has been tracked for years by government agencies and private researchers as a North Korean intelligence-gathering unit. The group is assessed to collect policy, military, and technology information that aligns with state priorities. Its victim set often includes government agencies, defense-related institutions, think tanks, academic researchers, journalists, and human rights organizations focused on North Korea.
In June, researchers documented ClickFix-themed emails that nudged users to press a button to resolve a supposed issue. The July campaign borrowed the same malware and many of the same techniques, then added AI-generated design samples to strengthen the lure. This was not a mass blast; it was a careful, targeted operation seeking access to specific networks and inboxes.
Attribution rested on a mix of technical and contextual evidence, including shared malware code, infrastructure reuse, language patterns, and choice of targets. These indicators align with activity previously linked to Kimsuky. The method is consistent with a long-running mission to infiltrate policy circles, collect data, and maintain footholds inside institutions that watch North Korea closely.
Impersonating a unit that issues ID cards exploits a well-understood business process. Identity documents invite collaborative review, often involve back-and-forth on fonts, photos, and layout, and carry an implied aura of authority. That makes the request to check a sample feel ordinary.
A widening pattern of AI misuse by North Korean operators
The phishing operation fits a broader pattern. In August, a major AI developer reported that it had disrupted North Korean operatives who used its model to create false identities, apply for remote jobs at United States tech firms, and even complete technical work after they were hired. The same report described how accounts tied to this activity were banned and how the company updated its detection systems for malicious use.
Security briefings in South Korea and the United States have warned that North Korean actors use AI tools to write fluent emails, craft resumes and cover letters, draft code, and create personas with professional profile photos. These capabilities let a small operator act with the polish of a much larger team. The barrier to entry has fallen: a person with modest technical skills can now rely on AI for guidance, draft malware modules faster, translate content into native-quality Korean or English, and generate visual assets that survive a quick glance.
Analysts of digital conflict on the peninsula have noted a surge in attacks on South Korean institutions. Public sources in 2023 cited daily attempts in the millions against government systems, many traced to North Korean activity. Separate United Nations reporting and industry research have tied North Korean units to large cryptocurrency thefts used to fund strategic programs. The spear-phishing incidents, AI-assisted job fraud, and financial theft are parts of the same strategy to gather intelligence and generate revenue despite sanctions.
While China-based operators and other state-linked actors are also experimenting with generative AI, the focus here is the rapid adoption by North Korean units for both espionage and illicit finance. The line between a phishing kit and an AI-assisted content studio has blurred. What used to demand teams of designers, translators, and developers can now be assembled with prompts and a handful of open source tools.
Why AI makes social engineering more convincing
Generative AI excels at producing polished content on demand. In a phishing context, that means images, text, and code are tailored to a target audience with minimal effort. A convincing badge mock-up or a branded letterhead lowers a reader’s guard. A model that can write in formal bureaucratic Korean, sprinkle in the right acronyms, and keep a respectful tone generates messages that look and feel authentic.
Language models also function as tireless writing assistants. They correct grammar, match the style of an institution, and keep tone consistent across multiple emails. Attackers can ask for variants of the same message for different roles: a model can produce one draft that looks like a routine notice for an HR manager, and another that reads like a technical memo for an IT lead.
On the technical side, code assistants help less experienced operators stitch together scripts and payloads. Models explain errors, suggest fixes, and produce examples that run. Used improperly, this adds speed and reduces the learning curve for malware development. Pair that with text generation and translation, and a single operator can execute multilingual campaigns that would have required a team in past years.
Identity creation is the other pillar. Models can help invent work histories, produce formatted resumes, and draft cover letters that match job postings. Image tools create profile photos that pass a quick scan. Taken together, these capabilities make business email compromise and long-term infiltration easier to pull off, at least until human and technical defenses catch up.
Defense, detection, and policy responses
South Korea has updated its National Cybersecurity Strategy with an emphasis on international cooperation, adoption of advanced technology, and stronger defenses. The plan highlights AI-driven detection for real-time monitoring, deeper information sharing with partners, and improvements in resilience across critical infrastructure. Seoul has also pursued closer coordination with the United States and Japan to align policy and to run joint exercises focused on incident response.
Organizations can blunt attacks built around lookalike domains and deepfake imagery with a mix of process, training, and technology. Process starts with verification: requests that involve credentials or identity documents should follow a documented workflow, including contact with a known official through a secondary channel. Staff should learn to check domains letter by letter and to hover over links before clicking, and that letter-by-letter comparison is also easy to automate, as the sketch below shows.
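A minimal sketch of that automation, assuming a short allowlist of trusted suffixes; the similarity threshold and sample addresses are illustrative assumptions, and a production filter would also need to handle internationalized domains and broader domain reputation.

```python
import difflib

# Suffixes the organization actually trusts; illustrative list.
TRUSTED_SUFFIXES = (".mil.kr", ".go.kr")

def lookalike_suffix(sender: str, cutoff: float = 0.75) -> str | None:
    """Flag a sender whose domain suffix nearly, but not exactly,
    matches a trusted suffix (e.g. .mli.kr versus .mil.kr)."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if any(domain.endswith(trusted) for trusted in TRUSTED_SUFFIXES):
        return None  # exact match on a trusted suffix
    suffix = "." + ".".join(domain.split(".")[-2:])
    for trusted in TRUSTED_SUFFIXES:
        ratio = difflib.SequenceMatcher(None, suffix, trusted).ratio()
        if ratio >= cutoff:
            return f"{suffix} resembles trusted {trusted} ({ratio:.0%} similar)"
    return None

# Hypothetical addresses mirroring the reported .mli.kr lookalike.
print(lookalike_suffix("issuer@idcard.mli.kr"))  # flagged as a near match
print(lookalike_suffix("issuer@idcard.mil.kr"))  # None: trusted suffix
```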
Technical controls help stop execution. Email gateways should quarantine messages with nested archives and block risky attachment types such as shortcut files, ISO images, and scripts; a simple filter of that kind is sketched after this paragraph. Turn off or restrict PowerShell where possible and rely on application allowlists to prevent unapproved executables. Endpoint detection and response can spot malicious PowerShell calls and unusual outbound connections. Network monitoring can alert on new connections to unrecognized domains shortly after an email interaction.
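As a sketch of the gateway rule, the following flags ZIP members with risky extensions, along with nested archives that would need recursive inspection; the extension policy is an assumption to be tuned per environment, and the attachment name is hypothetical.

```python
import zipfile
from pathlib import PurePosixPath

# Extensions commonly abused in delivery chains like the one described above.
RISKY_EXTENSIONS = {".lnk", ".iso", ".bat", ".cmd", ".ps1", ".vbs", ".js", ".scr"}
ARCHIVE_EXTENSIONS = {".zip", ".rar", ".7z"}

def risky_zip_members(path: str) -> list[str]:
    """List ZIP members that match the risky-attachment policy."""
    flagged = []
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            ext = PurePosixPath(name).suffix.lower()
            if ext in RISKY_EXTENSIONS:
                flagged.append(f"{name}: blocked extension")
            elif ext in ARCHIVE_EXTENSIONS:
                flagged.append(f"{name}: nested archive, inspect recursively")
    return flagged

# Hypothetical attachment; quarantine the message if anything is flagged.
for finding in risky_zip_members("id_design_review.zip"):
    print(finding)
```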
Hiring and vendor vetting are a growing part of cyber defense. Remote hiring workflows should include strong identity checks with live video, trusted third-party verification, and independent reference validation. Systems that handle sensitive data should enforce least-privilege access and continuous verification. Routine audits can catch anomalies such as unusual login locations and activity outside of expected hours.
Legal and ethical guardrails for AI
The incident highlights a tension in AI adoption. Businesses benefit when employees can use AI to draft documents, compose code, and process images, yet the same accessibility can be misused by adversaries. AI developers have introduced safety systems that refuse harmful content and monitor for misuse, but attackers try to bypass those measures with carefully worded prompts. The defense community has responded by sharing indicators of abuse, removing offending accounts, and refining model behavior to resist manipulation.
Policy and procurement choices matter. Agencies and contractors should require suppliers of AI systems to provide audit logs, abuse reporting channels, and model-hardening updates. Clear corporate rules for AI use can prevent accidental creation of risky content. Security teams should test models against common jailbreak prompts, a check that can be scripted as a regression suite (sketched below), and offer safe alternatives and pre-approved workflows for business needs like document design or code review.
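A minimal sketch of such a regression suite, assuming a `generate(prompt)` wrapper around whatever model the organization deploys; the wrapper, refusal markers, and placeholder prompts are all hypothetical, and refusal-phrase matching is a crude signal that a real harness would supplement with human review.

```python
# Placeholder for the deployed model's client; always refuses here so the
# sketch runs standalone. Replace with a real API call.
def generate(prompt: str) -> str:
    return "I can't help with creating official identification documents."

# Phrases that typically indicate a refusal; illustrative, not exhaustive.
REFUSAL_MARKERS = ("i can't", "i cannot", "unable to help", "not able to assist")

def run_jailbreak_suite(prompts: list[str]) -> list[str]:
    """Return the prompts the model answered instead of refusing."""
    return [
        prompt
        for prompt in prompts
        if not any(marker in generate(prompt).lower() for marker in REFUSAL_MARKERS)
    ]

# Prompts would come from a curated internal corpus of known jailbreak
# framings, such as "design sample" requests for official documents.
suite = ["<known jailbreak framing 1>", "<known jailbreak framing 2>"]
for failed in run_jailbreak_suite(suite):
    print("needs review:", failed)
```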
What to know
- Kimsuky used AI-generated images of South Korean military ID cards as lures in a July spear-phishing campaign.
- The emails asked recipients to review sample ID designs and came from a domain that mimicked the official .mil.kr suffix with .mli.kr.
- Investigators found metadata referencing GPT-4o and ChatGPT in at least one image, and a detector rated a fake at 98 percent probability of being AI-generated.
- The delivery chain included ZIP archives and scripts that executed PowerShell commands, installed a backdoor, and downloaded a decoy PNG.
- Targets included a military-related organization, researchers, journalists, and human rights activists focused on North Korea.
- The campaign reused malware seen in June ClickFix-themed activity attributed to the same group.
- North Korean operators have also used AI tools to build fake identities and secure remote jobs, according to a major AI provider’s recent report.
- Defenses include strict verification of identity-related requests, email and endpoint controls that block risky file types and scripts, and strong remote hiring checks.