What was once a tool for boosting productivity is now becoming a cyber weapon. A new report from cybersecurity firm Volexity reveals that a China-linked hacking group, identified as UTA0388, is using ChatGPT and other large language models (LLMs) to develop malware and craft highly sophisticated phishing campaigns targeting victims across North America, Asia, and Europe.
Researchers say the group has been active since at least June 2025, representing a new generation of threats that combine AI-driven automation with large-scale social engineering tactics.
How Hackers Are Using ChatGPT
Volexity discovered that UTA0388 uses ChatGPT not only to write malicious code, but also to generate convincing phishing emails that mimic the communication style of researchers, executives, or academics from credible—but entirely fake—institutions.
These messages lure victims into clicking links or opening compressed files containing malware. What’s particularly concerning is that the messages are automatically generated in multiple languages—including English, Chinese, Japanese, French, and German.
The group employs a tactic known as “trust-building phishing”: attackers open with a harmless exchange to establish rapport, and only then send the malicious attachment or link. This approach makes the interaction seem more legitimate and increases the likelihood of success.
The GOVERSHELL Malware: A New Kind of Threat
The phishing emails contain ZIP or RAR attachments pairing a legitimate executable with a malicious DLL, a classic technique called DLL search-order hijacking: when the trusted program runs, Windows finds the attacker's DLL in the same folder before the genuine system copy and loads it instead.
Once the victim opens the file, it activates a backdoor named GOVERSHELL, giving the attacker remote control over the compromised system.
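Because the lure depends on shipping an executable and a DLL side by side in one archive, that layout itself is a useful triage signal. The sketch below is a minimal, illustrative scanner for that pattern; the file names (`viewer.exe`, `version.dll`) are invented for the example and are not taken from the Volexity report.

```python
import io
import zipfile

def find_sideloading_pairs(zip_bytes: bytes) -> list[tuple[str, str]]:
    """Flag .exe files that ship alongside a .dll in the same archive folder,
    the layout typical of DLL search-order hijacking lures."""
    pairs = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        names = [n for n in zf.namelist() if not n.endswith("/")]
        by_dir: dict[str, list[str]] = {}
        for name in names:
            folder, _, fname = name.rpartition("/")
            by_dir.setdefault(folder, []).append(fname)
        for folder, files in by_dir.items():
            exes = [f for f in files if f.lower().endswith(".exe")]
            dlls = [f for f in files if f.lower().endswith(".dll")]
            for exe in exes:
                for dll in dlls:
                    pairs.append((f"{folder}/{exe}".lstrip("/"),
                                  f"{folder}/{dll}".lstrip("/")))
    return pairs

# Build a toy archive mimicking the reported lure layout (names are invented).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("report/viewer.exe", b"MZ...")
    zf.writestr("report/version.dll", b"MZ...")
    zf.writestr("report/notes.txt", b"hello")

print(find_sideloading_pairs(buf.getvalue()))
# → [('report/viewer.exe', 'report/version.dll')]
```

A mail gateway or sandbox could run a check like this on inbound attachments before any file is ever opened by a user.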
This malware has evolved rapidly: researchers identified five distinct variants, rewritten in different programming languages (from C++ to Go) and equipped with stronger encryption. GOVERSHELL also creates scheduled tasks to maintain persistence across system reboots—a hallmark of advanced persistent threat (APT) groups.
At TecnetOne, we treat this kind of durable persistence as a defining trait of state-sponsored threat actors.
Learn more: OpenAI Removes ChatGPT Chat Sharing Feature Over Privacy Concerns
The Role of Artificial Intelligence in Cyberattacks
Investigators believe LLMs like ChatGPT played a key role in both developing GOVERSHELL and composing the phishing emails.
Evidence includes recurring indicators of AI generation—contextual inconsistencies, incoherent phrasing, or fabricated details such as nonexistent institutions or fake phone numbers.
For example:
- Emails sent from imaginary entities like the “Copenhagen Governance Institute.”
- Messages mixing subjects in Mandarin with body text in German.
- Electronic signatures combining three unrelated names.
- Mass emails to nonsensical addresses like firstname.lastname@domain.
These flaws point to automated, unreviewed content generation, a growing trend as attackers refine their use of AI.
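Some of these flaws are mechanical enough to screen for automatically. The snippet below is a rough sketch of two such checks, flagging template placeholders in recipient addresses and a subject/body language mismatch; the specific heuristics and thresholds are illustrative assumptions, not Volexity's detection logic.

```python
import re
import unicodedata

def script_of(text: str) -> set[str]:
    """Very rough script detection based on Unicode character names."""
    scripts = set()
    for ch in text:
        if ch.isalpha():
            name = unicodedata.name(ch, "")
            if name.startswith("CJK"):
                scripts.add("cjk")
            elif "LATIN" in name:
                scripts.add("latin")
    return scripts

# Matches unfilled mail-merge placeholders such as firstname.lastname@...
PLACEHOLDER = re.compile(r"\b(firstname|lastname|yourname)\b[.\w]*@", re.I)

def red_flags(subject: str, body: str, to_addr: str) -> list[str]:
    flags = []
    if PLACEHOLDER.search(to_addr):
        flags.append("template placeholder in recipient address")
    if script_of(subject) == {"cjk"} and "latin" in script_of(body):
        flags.append("subject and body use different scripts")
    return flags

# A Mandarin subject over a German body, sent to an unfilled template address.
print(red_flags("安全报告", "Sehr geehrte Damen und Herren", "firstname.lastname@domain"))
# flags both the placeholder address and the script mismatch
```

Checks like these will never replace human review, but they are cheap to run on every inbound message.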
Odd Details That Expose Automation
In some cases, researchers found bizarre or irrelevant items inside the compressed files—such as pornographic images or recordings of Buddhist chants. These “Easter eggs” had no apparent malicious purpose but further suggest automated content compilation.
Experts see this as a turning point: cybercriminals no longer rely solely on human creativity but on the mass-production capabilities of AI. Even if the output isn’t always coherent, the sheer volume and speed of these operations can overwhelm traditional defenses.
Evidence of Chinese Origin
Volexity attributes UTA0388 to Chinese state interests with high confidence, citing:
- Victims involved in sensitive Asia-Pacific geopolitical issues.
- Simplified Chinese characters embedded in GOVERSHELL’s internal code paths.
- Server infrastructure overlapping with known Chinese APT operations.
Additionally, the constant code rewrites and changes to the communication protocol suggest AI assistance: the modifications don’t follow a human developer’s iterative logic but instead look like the output of different prompts.
A Growing Global Risk
Though no confirmed breaches have been made public, the potential impact is vast. With ChatGPT’s help, attackers can scale operations and craft personalized campaigns faster and cheaper, reducing dependence on skilled developers or writers.
This marks a new challenge for corporate cybersecurity: AI now enables the automation of tasks once requiring advanced expertise, from writing malware to composing perfectly worded phishing emails.
In short, AI enhances not just productivity—but also the power of attackers.
Read more: 10 Things You Should Never Do with ChatGPT
How to Protect Your Organization
At TecnetOne, we’ve observed how AI’s rise has reshaped the cybersecurity landscape. To defend against campaigns like UTA0388’s, we recommend:
- Use AI-based email filters.
Modern security tools can detect LLM-generated patterns and suspicious email structures.
- Strengthen authentication and access segmentation.
Implement MFA and role-based access controls to limit damage if an account is compromised.
- Keep systems updated and monitored.
Apply updates promptly and monitor for unusual scheduled tasks, the persistence mechanism GOVERSHELL relies on.
- Train employees.
Human error remains the weakest link. Teach teams to spot suspicious messages and verify senders.
- Adopt Extended Detection and Response (XDR) solutions.
XDR platforms can detect abnormal behaviors like DLL execution or unauthorized outbound connections.
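The last recommendation can be made concrete with a simple baseline: compare observed outbound connections against an allowlist of expected destinations. The sketch below is a toy illustration; the log entries, process name (`viewer.exe`), and destination IP are invented, and a real deployment would consume telemetry from an EDR/XDR agent or firewall export instead of a hard-coded list.

```python
from collections import Counter

# Toy connection log: (timestamp, process, destination). Values are invented.
LOG = [
    ("2025-06-01T10:00:00", "outlook.exe", "mail.example.com:443"),
    ("2025-06-01T10:05:00", "viewer.exe", "203.0.113.7:8443"),
    ("2025-06-01T10:10:00", "viewer.exe", "203.0.113.7:8443"),
]

ALLOWLIST = {"mail.example.com:443"}

def unusual_destinations(log, allowlist):
    """Count connections per (process, destination) pair outside the allowlist,
    surfacing repeated beaconing to unknown endpoints."""
    hits = Counter()
    for _, proc, dest in log:
        if dest not in allowlist:
            hits[(proc, dest)] += 1
    return dict(hits)

print(unusual_destinations(LOG, ALLOWLIST))
# → {('viewer.exe', '203.0.113.7:8443'): 2}
```

Repeated connections from an unfamiliar process to a single unknown endpoint are exactly the beaconing pattern a backdoor like GOVERSHELL produces.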
OpenAI’s Response
OpenAI has suspended accounts linked to Chinese and North Korean hackers who attempted to use ChatGPT for malware creation.
However, the Volexity report shows the issue goes beyond account bans: attackers will always find ways around controls, using alternative models or platforms.
This underscores the need for global collaboration between governments, tech companies, and cybersecurity experts to prevent AI from becoming a weapon for cybercrime.
Conclusion
The UTA0388 case marks a turning point: AI is now both a tool for innovation and a catalyst for cybercrime.
For organizations, this means defense strategies must evolve as fast as attack tools. At TecnetOne, we believe success lies in anticipation and adaptation—combining technology, training, and continuous monitoring.
AI can be a powerful ally—but in the wrong hands, it’s a system that can learn, adapt… and attack.