A new threat called LameHug has emerged, and it's not just any malware. This malware family leverages a large language model (LLM) to generate commands tailored to each compromised Windows system.
The discovery was made by CERT-UA, Ukraine’s national computer emergency response team. According to their investigation, the attack has been attributed to APT28, a well-known threat actor linked to the Russian government that also operates under aliases such as Fancy Bear, Sednit, Sofacy, and STRONTIUM.
How does the LameHug malware work?
What’s both interesting and alarming is how LameHug operates. It’s developed in Python and uses the Hugging Face API to interact with a specific LLM: Qwen 2.5-Coder-32B-Instruct. This model, created by Alibaba Cloud, is open-source and designed for advanced tasks such as code generation, logical reasoning, and executing commands from natural language input.
In other words, the malware can send a description of the infected system’s environment to the LLM and receive executable commands in return. This allows it to adapt in real time to each machine, making the attack far more precise and harder to detect.
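To make that mechanism concrete, here is a minimal sketch (in Python, like LameHug itself) of what such a request can look like through the official huggingface_hub client. The model name is the one CERT-UA reported; the prompt, token, and surrounding logic are illustrative assumptions rather than the malware's actual code, and the sketch prints the model's reply instead of executing it:

```python
# Illustrative sketch only, not LameHug's code: querying Qwen2.5-Coder
# through the public Hugging Face Inference API for Windows commands.
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",  # model reported by CERT-UA
    token="hf_xxx",  # placeholder API token
)

response = client.chat_completion(
    messages=[{
        "role": "user",
        # Hypothetical prompt paraphrasing the reported behavior:
        "content": "Give one Windows cmd.exe line that collects basic "
                   "hardware and OS information.",
    }],
    max_tokens=512,
)

# The malware reportedly executes replies like this; here we only print.
print(response.choices[0].message.content)
```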
CERT-UA uncovered this threat after receiving alerts on July 10, when several entities reported malicious emails sent from compromised accounts. The messages posed as official communications from Ukrainian ministries and attempted to distribute the malware to key government agencies.
LameHug is a clear example of how attackers are using emerging technologies like generative AI to take cybercrime to a new level.
Malicious Email Attempting to Deploy LameHug (Source: CERT-UA)
The attackers behind LameHug use malicious emails carrying ZIP attachments that contain the malware loader. According to CERT-UA’s report, at least three variants have been identified, with filenames such as:
- Attachment.pif
- AI_generator_uncensored_Canvas_PRO_v0.9.exe
- image.py
Although the investigation is still ongoing, the Ukrainian agency attributes this campaign with medium confidence to APT28, the Russian state-backed cyber-espionage group mentioned above.
Once on the system, LameHug doesn’t follow a fixed, hardcoded script. Instead, it relies on a large language model to generate custom commands in real time (a defanged sketch of this loop follows the list below). These commands are used for tasks such as:
- Scanning the infected system
- Gathering detailed information and saving it to a file (info.txt)
- Searching for documents in key folders like Documents, Desktop, and Downloads
- Exfiltrating the stolen data via SFTP connections or HTTP POST requests
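Putting those pieces together, the loop can be pictured with the short, deliberately defanged Python sketch below. The prompts only paraphrase the tasks CERT-UA describes (they are not the attackers' verbatim prompts), ask_llm is a hypothetical stand-in for the Hugging Face call shown earlier, and execution and exfiltration are replaced with logging:

```python
# Hedged sketch of the reported task loop; all strings are paraphrases.
TASK_PROMPTS = [
    # Hypothetical paraphrases of the reported recon/harvesting tasks:
    "One Windows cmd.exe line that gathers hardware, OS, network and "
    "domain information and appends it to info.txt.",
    "One Windows cmd.exe line that recursively copies documents from "
    "Documents, Desktop and Downloads into a staging folder.",
]

def ask_llm(prompt: str) -> str:
    # Stand-in for the Hugging Face chat_completion call sketched earlier;
    # in the real malware, Qwen's generated command would be returned here.
    return f"rem generated command for: {prompt}"

for prompt in TASK_PROMPTS:
    command = ask_llm(prompt)
    # LameHug reportedly hands strings like this to cmd.exe and later
    # exfiltrates the results over SFTP or HTTP POST; we only print.
    print(f"[would execute] {command}")
```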
This ability to dynamically adapt to each victim’s environment (thanks to AI) makes LameHug a far more flexible and dangerous threat than traditional types of malware.
Messages Sent to the LLM for Command Generation
LameHug and the Future of AI-Driven Malware
LameHug marks a turning point in the world of malware. It is the first publicly documented case of malicious software that directly integrates a large language model (LLM) as an active part of its operations.
From a technical standpoint, this could be the beginning of a new era in cyberattacks, where threat actors no longer need to deploy multiple malicious payloads to adapt to different environments. Thanks to the use of AI, they can adjust their tactics in real time, making each attack more agile, personalized, and harder to detect.
Another key aspect is that LameHug uses Hugging Face’s infrastructure as its communication channel, hiding its malicious traffic inside a legitimate, widely used service. This makes quick detection much harder for security systems, since command generation doesn’t rely on a traditional attacker-controlled C2 server.
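Seen from the network, a sketch of that channel could be as simple as the following; the URL is Hugging Face's documented public inference endpoint, while the token and payload are placeholders:

```python
# Why the traffic blends in: command generation is just an HTTPS POST to a
# public Hugging Face endpoint that countless legitimate ML tools also use.
import requests

resp = requests.post(
    "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct",
    headers={"Authorization": "Bearer hf_xxx"},  # placeholder token
    json={"inputs": "placeholder prompt"},
    timeout=30,
)
print(resp.status_code)  # a defender sees only TLS traffic to *.huggingface.co
```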
Moreover, by generating commands on the fly with AI, the malware avoids the obvious artifacts that antivirus signatures typically key on, such as hardcoded command strings or recognizable patterns. Instead, each command is newly generated, tailored, and designed not to raise suspicion, whether under static analysis tools or automated detection engines.
In short, LameHug is not just a new threat; it could be a glimpse into what cyberattacks of the future may look like, with artificial intelligence playing a central role in evasion, control, and effectiveness.