Models
December 28, 2025

Keeping your data safe when an AI agent clicks a link

OpenAI published safety guidance on how AI agents handle web links. The focus is on preventing quiet leaks of private data by auto-loading only links that have already been seen publicly. User control stays central.

OpenAI outlined how it protects user data when AI agents follow links. Agents can help by loading web content, but URLs can carry hidden sensitive information. To reduce risk, OpenAI only lets agents fetch links that are known public URLs from an independent index.

This approach avoids quietly exposing private data during automated tasks. When a link is unknown or unverified, users see warnings before it opens.
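The policy described above can be sketched as a simple gating function: auto-fetch a URL only if it appears in an index of known public links, and otherwise surface a warning for the user to approve. This is a minimal illustration of the idea; OpenAI's actual index, naming, and decision logic are not public, so everything here (the `PUBLIC_URL_INDEX` set, the return values) is an assumption for illustration.

```python
from urllib.parse import urlparse

# Hypothetical stand-in for an independent index of known public URLs.
# In the described approach, a real system would consult an external
# index rather than a hard-coded set.
PUBLIC_URL_INDEX = {
    "https://example.com/docs",
    "https://example.com/blog/post-1",
}

def fetch_decision(url: str) -> str:
    """Decide how an agent should treat a link before loading it.

    Returns "auto_fetch" for URLs already present in the public index,
    and "warn_user" otherwise so the user can approve or reject the
    fetch before any request is made.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return "warn_user"  # never auto-load non-web schemes
    if url in PUBLIC_URL_INDEX:
        return "auto_fetch"
    # Unknown or unverified links (e.g. URLs with embedded tokens)
    # fall through to an explicit user warning.
    return "warn_user"
```

Because the check happens before any network request, a URL carrying hidden sensitive data (such as a session token in a query string) is never silently transmitted.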

These safety steps are part of a layered defense that includes prompt injection protections and ongoing monitoring, aiming to balance agent usefulness with stronger privacy safeguards as AI agents become more common.

