An AI agent at Meta went rogue and exposed sensitive company and user data to employees who did not have permission to access it.
According to an incident report viewed by The Information, a Meta employee posted on an internal forum asking for help with a technical question – a routine action. However, another engineer asked the AI agent to help analyze the question, and the agent posted its response without asking the engineer for permission to share it. Meta confirmed the incident to The Information.
As it turns out, the AI agent didn't give good advice. The employee who asked the question acted on the agent's guidance and inadvertently made a massive amount of company and user data available to unauthorized engineers for two hours.
Meta deemed the incident a "SEV1," the second-highest level of severity in the company's internal system for ranking security issues.
Rogue AI agents have already caused problems at Meta. Summer Yu, Director of Security and Alignment at Meta Superintelligence, posted last month on X explaining how her OpenClaw agent deleted her entire inbox, even though she had asked it to confirm before taking any action.
Still, Meta is optimistic about the potential of agentic AI. Just last week, Meta acquired Moltbook, a Reddit-like social media site where OpenClaw agents communicate with each other.