
Microsoft has been forced to admit a significant error: its flagship AI work assistant, Copilot, inadvertently accessed and summarized confidential emails for some users. The incident has raised fresh questions about the safety of integrating generative AI into sensitive workplace environments.
The tech giant has aggressively marketed Microsoft 365 Copilot Chat as a secure, enterprise-grade solution that lets businesses harness the power of AI while maintaining strict control over their data. However, a recent glitch caused the tool to surface information from users’ drafts and sent email folders, including messages explicitly marked as confidential, content that Copilot was never supposed to process in the first place.
Microsoft was quick to downplay the security ramifications, stating that the issue “did not provide anyone access to information they weren’t already authorised to see.” In other words, the data was not leaked to external attackers, but protected content was being pulled into AI summaries inside customers’ own organizations, despite labels meant to keep it off-limits. The company has since rolled out a global configuration update to fix the underlying “code issue.”
The problem, first reported by tech news outlet BleepingComputer, came to light through a service alert. It revealed that Copilot’s work tab was summarizing emails that carried active sensitivity labels and data loss prevention (DLP) policies, security measures specifically designed to block unauthorized sharing. Microsoft acknowledged that this behavior “did not meet our intended Copilot experience,” which is supposed to exclude protected content from the AI’s reach. Internal notices seen on support dashboards, including one for NHS workers in England, confirmed that the root cause was a processing error.
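To make the failure mode concrete, here is a minimal Python sketch of the kind of pre-processing filter those protections imply. The Message model, field names, and functions are hypothetical illustrations for readers, not Microsoft's actual implementation: the idea is simply that anything carrying a sensitivity label or an active DLP policy should be excluded before content reaches the assistant.

```python
from dataclasses import dataclass, field

# Hypothetical mailbox item; real sensitivity labels and DLP policies
# are richer objects, but a label string and a policy list suffice here.
@dataclass
class Message:
    subject: str
    body: str
    sensitivity_label: str | None = None       # e.g. "Confidential"
    dlp_policies: list[str] = field(default_factory=list)

def is_protected(msg: Message) -> bool:
    """A message is off-limits if it has a sensitivity label or any DLP policy."""
    return msg.sensitivity_label is not None or bool(msg.dlp_policies)

def build_ai_context(mailbox: list[Message]) -> list[Message]:
    """Only unprotected messages may be passed to the assistant for summarizing."""
    return [m for m in mailbox if not is_protected(m)]

mailbox = [
    Message("Weekly sync", "Agenda attached."),
    Message("Q3 budget", "Strictly internal.", sensitivity_label="Confidential"),
]
context = build_ai_context(mailbox)
assert all(not is_protected(m) for m in context)
print([m.subject for m in context])  # ['Weekly sync']
```

Based on Microsoft's description, the bug behaved as if a filter like this were skipped: labeled drafts and sent items flowed into Copilot's summaries even though their protections should have kept them out.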
For critics, this incident is a predictable consequence of the breakneck speed at which tech companies are integrating AI into their products. While enterprise tools like Copilot are built with stricter controls than public-facing chatbots, the complexity of these systems makes errors inevitable. The fear is that in the race to add features, data protection may sometimes take a backseat.
As one expert noted, when these tools are deployed at scale, data leakage, whether accidental or not, will eventually happen. For now, Microsoft has patched the hole, but the incident serves as a stark reminder that even the most secure AI assistants are not infallible.