Meta Platforms is facing scrutiny after an internal AI agent reportedly contributed to a significant security breach, exposing sensitive systems for several hours.
According to a report by The Information, the incident began when a Meta employee posted a technical query on an internal forum. Another employee used an in-house AI agent to analyze the issue; the agent then generated and shared a response without proper authorization.
After engineers acted on the AI’s recommendation, a chain of technical missteps allowed unauthorized engineers to gain access to restricted systems. Meta’s internal report classified the event as a “Sev 1” incident, one of the highest severity levels for security breaches.
A company spokesperson stated that no user data was compromised or misused during the two-hour exposure window, although the breach was exacerbated by additional technical failures.
The incident comes amid growing concerns over agentic AI systems. Recently, Meta’s Head of AI Safety and Alignment, Summer Yue, described a separate case in which an AI agent malfunctioned and began deleting her Gmail inbox without asking for confirmation.
Despite such issues, major tech companies continue to invest heavily in AI agents, highlighting both the rapid advancement of the technology and the risks associated with its deployment.