The Curious Case of Facebook’s Not-a-Data-Breach in Ireland & the Un-enforceability of GDPR

The European Union has a Lemony Snicket kind of case on its hands. One that it created of its own doing. One driven by the EU's tendency toward techno-fear. But one that also has a valid cause for concern. They are just not going about it in a manner that presents a solvable problem. And have not since GDPR's inception.

As reported by TechCrunch this morning, both Meta and the Irish Data Protection Commission (DPC) are being sued by Data Rights Ireland (DRI). DRI is a consumer privacy advocacy group that engages corporations and regulators on behalf of private citizens. Last fall, the DPC found that an incident in which Facebook leaked the Personally Identifiable Information (PII) of the 100 million European Union users of Facebook did not require notifying those subscribers. It reached this conclusion because nothing was actively broken by Facebook in the protection scheme for that information; i.e., no one made an active mistake that compromised the security of the information and led to it being accessed. That is because no real security for that information existed in the first place.

In Systems and Software Engineering, the formal regime that characterizes best practices taught in industry and defines the science of those domains in academia, this falls under phase identification of a defect's root cause. A “bug” is not always created in code. It is often a defect of the system requirements, or of a design that fails to properly translate the business or operational requirement.

Or the architectural intent, for that matter. It is a case where, if tested, the implementation should pass verification but fail validation. The bug cannot simply be corrected in code. The requirements and design may need to be updated, the tests may need to be revised, the architectural model may need reworking. But a defect still exists.

And this is the problem that GDPR and its policing regulatory agencies do not address. If the company never implemented security properly in the first place, then nothing got broken, therefore no remediation needs to happen, and no one needs to be informed. Maybe it’s a defect, but it’s not a GDPR thing. Not my problem, man.

In other regimes, we have specifications. Yardsticks, rubrics, and goal-posts that establish measures of effectiveness and success thresholds against those measures. They are developed by NIST. The IEEE. The SEI. But no one thought to ensure that one of these was established for GDPR.

The age-old mistake was made of creating a compliance framework that did not have the teeth of being enforceable. And so the regime is generally useless. And consumers go on at the mercy of corporations who cannot be policed.

It would make sense for there to be an international consortium on privacy. There are some scattered rules of thumb, but none with the international weight of law, and none that point to a tangible specification that measures compliance effectiveness for GDPR. And so a defect must manifest operationally to be considered valid. Root causes that pre-date operations are not taken into account. It’s a path that allows for a lot of skating by, and lets corporations hand-wave compliance without really doing the work. And, as TechCrunch indicates, it allows things like Cambridge Analytica to be scape-goated so that the corporate perpetrator that allowed it to happen can go unscathed. It is the difference between knowing tech and building a solid mouse-trap, versus fearing tech out of ignorance, leaving the cheese out, and hoping the mouse will just go away once it’s full.