Discord recently confirmed that approximately 70,000 users may have had their government ID photos exposed through a breach of their third-party customer service provider, Zendesk. While hackers initially claimed to possess over 2 million photos totaling 1.5TB of age verification data in what appears to be an extortion attempt, Discord has firmly disputed these inflated numbers as "incorrect and part of an attempt to extort a payment from Discord."
Nevertheless, the breach also potentially exposed names, usernames, emails, partial credit card numbers, and IP addresses, a reminder of how quickly sensitive data can spiral out of an organization's control.
This wasn't even a breach of Discord itself, but of a vendor they trusted with some of their most sensitive user data: government-issued photo IDs used for age verification appeals. Whether it's 70,000 or 2 million IDs, the fundamental security failure remains the same.
This incident couldn't come at a more critical time. Across the United States and internationally, well-intentioned legislation is rapidly expanding to require age verification for various online services. States like Utah, Arkansas, and Texas have already enacted laws requiring age checks for social media platforms, while the UK's Online Safety Act introduces similar requirements.
These laws fundamentally change the data protection problem. Compliance requires collecting far more than simple text fields like usernames or email addresses: companies must now handle high-resolution photos of government IDs, driver's licenses, passports, and other official documents that contain a treasure trove of personal information.
But here's the uncomfortable truth: most organizations, and apparently their third-party vendors, are woefully unprepared to protect that kind of information at that scale.
Think about the workflow. A user uploads their ID photo, it gets transmitted to a company's servers, potentially shared with a third-party verification service, stored for compliance purposes, and accessed by customer service representatives for appeals. At each step, that file—containing a photo, full name, address, date of birth, and government ID number—represents a massive liability.
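To see why, consider a simplified model of that workflow. The field and system names below are hypothetical, not Discord's or Zendesk's actual schema; the point is that every hop in the chain receives and retains the same plaintext record, so a breach at any one of them exposes everything.

```python
# Hypothetical sketch of an unprotected age-verification workflow: every hop
# receives and stores the same plaintext record, so a breach at any one of
# them exposes everything. Field and system names are illustrative only.
id_upload = {
    "photo": b"<raw JPEG of a driver's license>",
    "full_name": "Jane Doe",
    "date_of_birth": "2001-04-12",
    "address": "123 Main St",
    "id_number": "D1234567",
}

handoff_chain = [
    "platform_api_server",        # receives the upload
    "verification_vendor",        # third-party age/ID check
    "compliance_archive",         # retained for regulatory purposes
    "customer_support_platform",  # accessed again for appeals
]

# Each system that ever held the record is a complete copy of the liability.
exposed_copies = {system: id_upload for system in handoff_chain}
print(f"{len(exposed_copies)} independent systems hold the full plaintext ID")
```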
The Discord breach illustrates what happens when organizations treat these files like any other customer service ticket attachment. Zendesk, a platform designed for general customer support, suddenly became the repository for thousands of government IDs. Was it equipped with file-level encryption? Did it have granular access controls for sensitive attachments? Could Discord maintain visibility and control over how Zendesk handled these files?
Even if we accept Discord's lower number of 70,000 exposed IDs rather than the hackers' claimed millions, the answer to all these questions appears to be no.
This breach raises a question that every organization implementing age verification must answer: Should users' most sensitive identity documents ever truly leave their control?
In a data-centric security model, the answer is no. The data owner (in this case, the user submitting their ID) should maintain control over their information regardless of where it travels: the protection stays bound to the file itself, not to whichever system happens to be holding it.
Imagine if Discord users could have submitted encrypted ID photos that only authorized verification personnel could decrypt, with access automatically expiring after the age verification process. Even if Zendesk's systems were completely compromised, the attackers would have found only encrypted files they couldn't access.
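To make that concrete, here is a minimal sketch in Python of the pattern, sometimes described as envelope or policy-bound encryption. The KeyAccessService class, the policy fields, and the workflow are illustrative assumptions for this post, not Discord's, Zendesk's, or any vendor's actual API; the idea is that the ciphertext can live anywhere, while the key is released only while the policy allows it.

```python
# A minimal sketch of data-centric protection for an uploaded ID photo,
# assuming a hypothetical key-access service that enforces policy
# (authorized reviewers, expiry) separately from wherever the ciphertext
# is stored. Names and policy fields are illustrative, not any product's API.
import os
import time
from dataclasses import dataclass
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


@dataclass
class Policy:
    authorized_reviewers: set[str]
    expires_at: float  # Unix timestamp after which decryption is refused


class KeyAccessService:
    """Holds data keys and releases them only while the policy allows it."""

    def __init__(self):
        self._keys: dict[str, tuple[bytes, Policy]] = {}

    def register(self, object_id: str, key: bytes, policy: Policy) -> None:
        self._keys[object_id] = (key, policy)

    def request_key(self, object_id: str, requester: str) -> bytes:
        key, policy = self._keys[object_id]
        if requester not in policy.authorized_reviewers:
            raise PermissionError("requester not authorized for this file")
        if time.time() > policy.expires_at:
            raise PermissionError("access window has expired")
        return key

    def revoke(self, object_id: str) -> None:
        self._keys.pop(object_id, None)  # e.g. once verification is complete


def encrypt_upload(photo_bytes: bytes, object_id: str,
                   kas: KeyAccessService, policy: Policy) -> bytes:
    """Encrypt on the client before the file ever reaches a support platform."""
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, photo_bytes, object_id.encode())
    kas.register(object_id, key, policy)
    return nonce + ciphertext  # only this opaque blob is stored downstream


def decrypt_for_review(blob: bytes, object_id: str,
                       kas: KeyAccessService, requester: str) -> bytes:
    key = kas.request_key(object_id, requester)  # policy check happens here
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, object_id.encode())
```

In this model, compromising the system that stores the blob yields only ciphertext; decryption still requires the policy check to pass, and calling revoke() or letting the expiry lapse closes the window once the appeal is resolved.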
What’s most concerning is how this breach exemplifies the third-party risk multiplication effect. It’s a tale as old as time, one we’ve seen with Fortinet, Snowflake, Microsoft (time and time again), and more.
Discord didn't just trust Zendesk with customer service tickets; they trusted them with government IDs. And Zendesk, in turn, likely relies on its own constellation of vendors for storage, backup, and infrastructure.
Each additional party in this chain represents another potential point of failure, another set of employees with access, another security program that might not be up to the task of protecting government IDs. Without data-centric protections that persist regardless of where files travel, organizations are essentially hoping that every single vendor in their supply chain maintains perfect security.
Hope, as this breach demonstrates, is not a strategy. And when breaches do occur, the lack of proper controls creates opportunities for bad actors to exaggerate their claims for extortion purposes—leaving companies and users uncertain about the true scope of the damage.
As age verification requirements expand, every affected organization needs to ask itself hard questions: Are we treating ID photos with appropriate security rigor? Do we have visibility into how our vendors handle these files? Can we revoke access after verification? If a vendor is breached tomorrow, would the data be usable?
The good news is that organizations don't need to reinvent the wheel. Data-centric security solutions that provide persistent protection, granular controls, and complete audit trails for sensitive files already exist.
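Sticking with the hypothetical key-access pattern sketched above, an audit trail falls out of the same design: every request for a data key, whether granted or denied, is an event the data owner can inspect. Again, the names below are illustrative, not any specific product's interface.

```python
# Illustrative sketch: logging every key request so the data owner can audit
# exactly who tried to open a protected ID file, when, and with what result.
# Extends the hypothetical key-access idea from the earlier sketch.
import time
from dataclasses import dataclass, field


@dataclass
class AccessEvent:
    object_id: str
    requester: str
    granted: bool
    timestamp: float = field(default_factory=time.time)


class AuditLog:
    def __init__(self):
        self.events: list[AccessEvent] = []

    def record(self, object_id: str, requester: str, granted: bool) -> None:
        self.events.append(AccessEvent(object_id, requester, granted))

    def history(self, object_id: str) -> list[AccessEvent]:
        """Everything needed to answer 'who touched my ID, and when?'"""
        return [e for e in self.events if e.object_id == object_id]


log = AuditLog()
log.record("ticket-48213-id.jpg", "trust_and_safety_reviewer", granted=True)
log.record("ticket-48213-id.jpg", "unknown_external_actor", granted=False)
for event in log.history("ticket-48213-id.jpg"):
    print(event)
```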
At Virtru, we've been helping organizations solve exactly these challenges for over a decade, ensuring that sensitive files remain protected and under the data owner's control, whether they're being shared with verification services, customer support platforms, or any other third party.
The Discord breach won't be the last age verification incident we see. But it should serve as a wake-up call that traditional approaches to data protection aren't sufficient when every user interaction might involve sharing a government ID. In an era where proving your age online requires exposing your most sensitive documents, organizations need data protection that's as sophisticated as the data they're handling.
The alternative—as 70,000 Discord users just learned—is becoming the next cautionary tale and potential extortion target.