
ZDNET's key takeaways
- OpenAI's new report shows how cybercriminals are using AI.
- This includes the attempted use of ChatGPT for surveillance.
- OpenAI has disrupted over 40 networks involved in abuse to date.
OpenAI has published research revealing how state-sponsored and cybercriminal groups are abusing artificial intelligence (AI) to spread malware and conduct mass surveillance.
Also: Everything OpenAI announced at DevDay 2025: Agent Kit, Apps SDK, ChatGPT, and more
(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
AI has benefits in the cybersecurity space; it can automate tedious and time-consuming tasks, freeing up human specialists to focus on complex projects and research, for example. However, as with any technology -- whether it is an AI system designed to triage cybercrime alerts or a penetration testing tool -- there is a capacity for malicious use.
Also: 43% of workers say they've shared sensitive info with AI - including financial and customer data
With this in mind, since February 2024, OpenAI has issued public threat reports and closely monitored the use of AI tools by threat actors. Since last year, OpenAI has disrupted over 40 malicious networks that have violated its usage policies, and an analysis of these networks is now complete, giving us a glimpse into the current trends of AI-related cybercrime.
Published on Monday, OpenAI's report, "Disrupting malicious uses of AI: an update" (PDF), details four major trends, each of which exposes how AI is being used to rapidly adapt the existing Tactics, Techniques, and Procedures (TTPs) of threat actors.
Major trends
The first trend is the increasing use of AI in existing workflows. Many of the accounts banned by the developer were repeatedly building AI into cybercriminal operations. For example, the OpenAI team found evidence of this abuse when an organized crime network, believed to be located in Cambodia, tried to use ChatGPT to "make their workflows more efficient and error-free."
A number of accounts were also banned for attempting to generate Remote Access Trojans (RATs), credential stealers, and obfuscation tools, as well as crypters and payload-crafting code.
The second significant area of concern is threat groups that use multiple AI tools and models for distinct malicious or abusive purposes. These include a likely Russian entity that used various AI tools to generate video prompts and fraudulent content designed to be spread across social media, news-style short videos, and propaganda.
In another case, a number of Chinese-language accounts were banned for trying to use ChatGPT to craft phishing content and for debugging. It is believed that this group could be the threat actors tracked as UTA0388, known for targeting Taiwan's semiconductor industry, think tanks, and US academia.
OpenAI also described how cybercriminals are using AI for adaptation and obfuscation. A number of networks, thought to originate from Cambodia, Myanmar, and Nigeria, are aware that AI-generated content and code are detectable, and so have asked AI models to remove markers such as em-dashes from their output.
"For months, em-dashes have been the focus of online discussion as a possible indicator of AI use: this case suggests that the threat actors were aware of that discussion," the report notes.
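To see why such a marker is so easy to evade, consider a minimal sketch of the kind of stylometric check defenders might run. This is illustrative only: the per-1,000-character rate, the threshold, and the function names are assumptions for demonstration, not a description of any real detector, and the report itself implies a single marker like this is trivially stripped.

```python
# Illustrative only: counts em-dashes (U+2014) per 1,000 characters
# as one weak stylometric signal of possibly AI-generated text.
# The threshold is a hypothetical value chosen for demonstration.

def em_dash_rate(text: str) -> float:
    """Return em-dashes per 1,000 characters of text."""
    if not text:
        return 0.0
    return text.count("\u2014") / len(text) * 1000


def flag_suspicious(text: str, threshold: float = 5.0) -> bool:
    """Flag text whose em-dash rate exceeds the (hypothetical) threshold."""
    return em_dash_rate(text) > threshold
```

A real classifier would combine many signals; relying on any one surface marker fails the moment an actor asks the model to remove it, which is exactly the behavior the report describes.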
Also: Navigating AI-powered cyber threats in 2025: 4 expert security tips for businesses
Concerningly, but perhaps not surprisingly, AI is also finding its way into the hands of state-sponsored groups. Recently, OpenAI disrupted networks thought to be linked to numerous People's Republic of China (PRC) government entities, with accounts asking ChatGPT to generate proposals for large systems designed to monitor social media networks.
In addition, some accounts requested help to write a proposal for a tool that would analyze transport bookings and compare them with police records, thereby monitoring the movements of the Uyghur minority group, while others tried to use ChatGPT to identify funding streams related to an X account that criticized the Chinese government.
The limits of AI in crime
While AI is being weaponized, it should be noted that there is little to no evidence of existing AI models being used to create what OpenAI describes as "novel" attacks; in other words, AI models are refusing malicious requests that would give threat actors enhanced offensive capabilities using new strategies unknown to cybersecurity experts.
"We continue to see threat actors bolt AI onto old playbooks to move faster, not gain novel offensive capability from our models," OpenAI said. "As the threatscape evolves, we expect to see further adversarial adaptations and innovations, but we will also continue to build tools and models that can be used to benefit the defenders -- not just within AI labs, but across society as a whole."