What was once a tool for boosting productivity is now becoming a cyber weapon. A new report from cybersecurity firm Volexity reveals that a China-linked hacking group, identified as UTA0388, is using ChatGPT and other large language models (LLMs) to develop malware and craft highly sophisticated phishing campaigns targeting victims across North America, Asia, and Europe.
Researchers say the group has been active since at least June 2025, representing a new generation of threats that combine AI-driven automation with large-scale social engineering tactics.
Volexity discovered that UTA0388 uses ChatGPT not only to write malicious code, but also to generate convincing phishing emails that mimic the communication style of researchers, executives, or academics from credible—but entirely fake—institutions.
These messages lure victims into clicking links or opening compressed files containing malware. What’s particularly concerning is that the messages are automatically generated in multiple languages—including English, Chinese, Japanese, French, and German.
The group employs a tactic known as “trust-building phishing,” where they first start a harmless conversation with the target before sending the malicious attachment or link. This approach makes the interaction seem more legitimate and increases the likelihood of success.
The phishing emails carry ZIP or RAR attachments that pair a legitimate executable with a malicious DLL, enabling a classic technique known as DLL search order hijacking: when the victim runs the executable, Windows loads the attacker's DLL from the archive's directory instead of the genuine system library.
Once the executable runs, it loads a backdoor named GOVERSHELL, giving the attackers remote control over the compromised system.
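The packaging pattern described above, an executable shipped alongside a DLL in the same archive directory, is itself a useful detection signal. The following is a minimal, hypothetical triage sketch (the function name and heuristic are illustrative, not part of Volexity's tooling) that flags ZIP archives exhibiting it:

```python
import io
import zipfile
from pathlib import PurePosixPath

def flags_dll_sideload(zip_bytes: bytes) -> bool:
    """Heuristic: flag archives that bundle an .exe and a .dll in the
    same directory, the packaging pattern used for DLL search order
    hijacking. A coarse triage signal, not proof of malice."""
    dirs_with_exe = set()
    dirs_with_dll = set()
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            p = PurePosixPath(name)
            if p.suffix.lower() == ".exe":
                dirs_with_exe.add(p.parent)
            elif p.suffix.lower() == ".dll":
                dirs_with_dll.add(p.parent)
    # Suspicious only when both file types share a directory.
    return bool(dirs_with_exe & dirs_with_dll)
```

Legitimate software archives can also match this pattern, so in practice a check like this would feed a review queue rather than block mail outright.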
This malware has evolved rapidly: researchers identified five distinct variants, rewritten in different programming languages (from C++ to Golang) and equipped with stronger encryption. GOVERSHELL also creates scheduled tasks so that it survives system reboots, a persistence technique that, in TecnetOne's experience, is a hallmark of advanced persistent threat (APT) groups and state-sponsored actors.
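Defenders can hunt for this kind of persistence by auditing scheduled tasks for commands that launch from user-writable directories. Below is an illustrative sketch (the function and directory list are assumptions, not from the report); gathering the task list itself, for example via `schtasks /query /v /fo csv` on Windows, is left out:

```python
import re

# User-writable directories commonly abused for persistence payloads.
SUSPICIOUS_DIRS = re.compile(
    r"\\(AppData|Temp|Downloads|ProgramData)\\", re.IGNORECASE
)

def suspicious_tasks(tasks):
    """tasks: iterable of (task_name, command_line) pairs, e.g. parsed
    from a scheduled-task export. Returns names of tasks whose command
    runs from a user-writable directory."""
    return [name for name, cmd in tasks if SUSPICIOUS_DIRS.search(cmd)]
```

A hit is only a lead: some legitimate updaters also run from these paths, so flagged tasks still need manual review.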
Investigators believe LLMs like ChatGPT played a key role in both developing GOVERSHELL and composing the phishing emails.
Evidence includes recurring indicators of AI generation: contextual inconsistencies, incoherent phrasing, and fabricated details such as nonexistent institutions or fake phone numbers.
These flaws point to automated, unreviewed content generation, a growing trend as attackers refine their use of AI.
In some cases, researchers found bizarre or irrelevant items inside the compressed files—such as pornographic images or recordings of Buddhist chants. These “Easter eggs” had no apparent malicious purpose but further suggest automated content compilation.
Experts see this as a turning point: cybercriminals no longer rely solely on human creativity but on the mass-production capabilities of AI. Even if the output isn’t always coherent, the sheer volume and speed of these operations can overwhelm traditional defenses.
Volexity attributes UTA0388 to Chinese state interests with high confidence.
Additionally, the constant code rewrites and communication protocol changes suggest AI assistance: the modifications don’t follow human iterative logic, but rather reflect outputs from different prompts.
Though no confirmed breaches have been made public, the potential impact is vast. With ChatGPT’s help, attackers can scale operations and craft personalized campaigns faster and cheaper, reducing dependence on skilled developers or writers.
This marks a new challenge for corporate cybersecurity: AI now enables the automation of tasks once requiring advanced expertise, from writing malware to composing perfectly worded phishing emails.
In short, AI amplifies not just productivity but also the capabilities of attackers.
At TecnetOne, we’ve observed how AI’s rise has reshaped the cybersecurity landscape. Defending against campaigns like UTA0388’s requires combining technology, user training, and continuous monitoring.
OpenAI has suspended accounts linked to Chinese and North Korean hackers who attempted to use ChatGPT for malware creation.
However, the Volexity report shows the issue goes beyond account bans: attackers will always find ways around controls, using alternative models or platforms.
This underscores the need for global collaboration between governments, tech companies, and cybersecurity experts to prevent AI from becoming a weapon for cybercrime.
The UTA0388 case makes one thing clear: AI is now both a tool for innovation and a catalyst for cybercrime.
For organizations, this means defense strategies must evolve as fast as attack tools. At TecnetOne, we believe success lies in anticipation and adaptation—combining technology, training, and continuous monitoring.
AI can be a powerful ally—but in the wrong hands, it’s a system that can learn, adapt… and attack.