February 27, 2026

Disrupting malicious uses of AI

OpenAI’s “Disrupting malicious uses of AI” report explains how the company detects and disrupts harmful uses of its models, including scams, influence operations, and cyber threats, and shares its findings to strengthen defenses and protect users.


The report shows how threat actors combine AI with traditional tools to run scams, cyberattacks, social engineering, and covert influence operations. OpenAI uses its tools alongside human investigation to detect misuse, ban abusive accounts, and share findings with partners to strengthen defenses.

The company aims to make AI beneficial and safe by monitoring activity, enforcing policies, and improving understanding of how malicious actors operate. Case studies illustrate real abuses and how controls disrupt those threats, helping protect users and support broader safety measures.

