OpenAI Bans Accounts Using ChatGPT for Mass Surveillance and Malicious Activities
OpenAI has reported banning multiple ChatGPT accounts that attempted to use the AI chatbot to create tools for large-scale social media monitoring. The company said one user was instructing ChatGPT to develop promotional materials and project plans for an AI-driven social media "listening" tool intended for a government client. The tool was described as a probe capable of scanning platforms such as X, Facebook, Instagram, Reddit, TikTok, and YouTube for extremist content, as well as material related to ethnicity, religion, and politics.
Another account, suspected of having government ties, was reportedly using ChatGPT to draft a proposal for a "High-Risk Uyghur-Related Inflow Warning Model." The plan aimed to analyze transport bookings alongside police records to provide early warnings about movements by the Uyghur community.
“Some elements of this usage appeared aimed at supporting large-scale monitoring of online or offline activity, highlighting the need for continued vigilance against potential authoritarian misuse of AI,” OpenAI said. The company also noted that its models are not officially available in China, suggesting that these users may have accessed ChatGPT through a VPN.
OpenAI reported additional bans targeting Russian hackers who were using the AI to develop and refine malware, including remote access trojans and credential-stealing software. The company observed that some persistent threat actors had adjusted their behavior to strip obvious indicators of AI usage, such as em dashes, from their outputs.
Despite these threats, OpenAI emphasized that ChatGPT is being used more frequently to detect scams than to generate them. “Our current estimate is that ChatGPT is used to identify scams up to three times more often than it is used to create scams,” the company said.
Since starting public threat reporting in February 2024, OpenAI has disrupted and reported over 40 networks that violated its usage policies. The company noted that threat actors are mainly incorporating AI into existing workflows rather than creating entirely new AI-driven operations.
“We found no evidence that our models enabled threat actors to develop new tactics or provided novel offensive capabilities. Our models consistently rejected malicious requests,” OpenAI added.