OpenAI’s “Disrupting malicious uses of AI” report highlights efforts to identify and stop harmful uses of its AI models.
The report shows how threat actors combine AI with traditional tools to run scams, cyberattacks, social engineering campaigns, and covert influence operations. OpenAI pairs its own AI tools with human investigation to detect misuse, ban abusive accounts, and share findings with industry partners to strengthen collective defenses.
The company aims to keep AI beneficial and safe by monitoring activity, enforcing its usage policies, and deepening its understanding of how malicious actors operate. Case studies in the report illustrate real abuses and show how enforcement disrupted those threats, helping protect users and inform broader safety measures.


