🔓 Who’s protecting whom, really? Part 1 of the trilogy “The Max One Disclosure”

New EU laws have shifted the rules. Since 2024, every operator of critical IT systems is subject to legal obligations – including provable structural control, auditability, and risk prevention. These duties are real. The liabilities are personal. And most companies still haven’t understood what that means.

This text explains why everyone is liable – even if no one can prove compliance. And why silence is no longer neutral.


⚖️ The legal baseline

EU laws like NIS2, CRA, DORA, and the AI Act all require structural – not just procedural – oversight. Responsibility lies with executive management and cannot be delegated.

Any system that lacks provable structural auditability violates these legal obligations – no matter how polished the paper processes may look.

And yet, no mainstream system meets these requirements.

💣 And still, companies are fully liable:

  • Organisations face civil and criminal liability for incidents they couldn’t prevent.
  • Executives can be held personally liable – even with complex cloud and AI architectures.
  • And: Everyone has known it – for years in principle, and now undeniably, since these laws took effect.

And still: no one acted.

  • Microsoft, Amazon, Google & Co. remain silent – because transparency would destroy their model.
  • Deloitte, PwC, EY and KPMG remain silent – because they are part of the audit problem.
  • EU Commission, national regulators and authorities remain silent – because they won’t admit legal misdesign.
  • VCs and tech investors remain silent – because their valuations are based on technical illusions.
  • Gartner, Forrester & Co. remain silent – because a real comparison would vaporise their ratings.


🧑‍💼 Executives: You're the last line of liability

You are fully liable for systems no one can structurally verify.

If you stand before court tomorrow, no one will ask what your cloud provider promised – only what you verifiably controlled.

Ask your auditors why this gap never appeared in a single assurance report. Ask your ESG consultants whether structural auditability was ever part of their rating logic. Ask yourself: Who protected you – and who exposed you to this liability?

📩 Ask your IT leadership for a written statement by tomorrow morning: Which of your systems are structurally auditable – and which are not?

⚠️ But beware: Such a statement does not release you from liability. You remain legally responsible – even if the information was wrong or incomplete.

And that liability does not simply disappear with delegation: there is no “technical trust” defence. You hold the final responsibility. No one else.


🛡️ CISOs & IT/GRC: Knowledge does not equal immunity

The structural unverifiability of mainstream cloud and AI systems is well known. What’s missing is internal clarity about the consequences.

No standard market system currently satisfies Article 21 of NIS2 – in terms of structured response, technical traceability, and verifiable control.

Ask your vendors for technical proof of structural auditability – no PDFs, no promises. Machine-verifiable evidence only.

If that’s not possible, the system must not be used in regulated environments.
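To make “machine-verifiable evidence only” concrete: one common building block is a tamper-evident, hash-chained audit log, whose integrity anyone can recompute without trusting the operator. The sketch below is illustrative only – the record format and field names are hypothetical, not taken from any vendor's actual API.

```python
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    """Hash of the previous link plus the canonicalised record."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list, record: dict) -> None:
    """Append a record, binding it to the current head of the chain."""
    prev = log[-1]["hash"] if log else "0" * 64
    log.append({"record": record, "hash": chain_hash(prev, record)})

def verify(log: list) -> bool:
    """Recompute the whole chain; altering any record breaks every later hash."""
    prev = "0" * 64
    for entry in log:
        if entry["hash"] != chain_hash(prev, entry["record"]):
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"event": "config_change", "actor": "admin"})
append(log, {"event": "access_grant", "actor": "ops"})
assert verify(log)                        # untouched log checks out

log[0]["record"]["actor"] = "intruder"    # retroactive tampering...
assert not verify(log)                    # ...is detected mechanically
```

The point is the property, not the implementation: a PDF attestation can be edited after the fact; a recomputable hash chain cannot be silently rewritten.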

Can you rule out with 100% certainty that a critical incident won't hit your infrastructure tomorrow – and force you to explain why you knowingly ran an unverifiable system?

💰 Investors: Invisible risks undermine valuation

You carry capital market risk for companies whose compliance structures cannot be audited. No auditor can guarantee that deployed systems meet legal requirements.

This affects:

  • Insurability – becomes illusory.
  • Valuation certainty – disappears.
  • Risk discounts – become inevitable.

Concrete examples: At VW, Siemens or Deutsche Telekom, critical infrastructures and AI-based models are integral to company value.

Without structural auditability, expect goodwill impairments, participation markdowns, and loss of insurance coverage – with direct impact on stock price, credit rating, and investment appetite.
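A toy expected-loss calculation shows how such risks feed directly into valuation. All probabilities and amounts below are invented for illustration – they are not estimates for any named company.

```python
# Toy numbers only - every probability and amount is an illustrative assumption.
enterprise_value = 10_000_000_000  # EUR 10 bn

scenarios = [
    # (annual probability, loss if the scenario occurs)
    (0.05, 2_000_000_000),   # major incident plus regulatory fine
    (0.10,   500_000_000),   # goodwill impairment after a failed audit
    (0.20,   100_000_000),   # lost insurance cover, higher funding cost
]

# Expected annual loss = sum of probability-weighted losses
expected_loss = sum(p * loss for p, loss in scenarios)
discounted_value = enterprise_value - expected_loss
haircut_pct = 100 * expected_loss / enterprise_value

print(f"Expected annual loss: EUR {expected_loss:,.0f}")   # ~ EUR 170m
print(f"Implied haircut: {haircut_pct:.1f}% of enterprise value")
```

Even with these deliberately modest assumptions, an unquantifiable compliance gap prices in as a measurable haircut – and that is before any multiplier for repeat exposure or litigation.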

In securities law, withholding such risk information may be considered a breach of duty once knowledge is established.


📉 Auditors, Analysts & ESG Agencies: Accountability will shift

Continuing to certify, recommend or rate systems that lack structural auditability is a documented failure – publicly traceable and legally actionable.

Which audit firm has ever tested whether AWS, Azure or Google Cloud structurally comply with NIS2, the CRA, DORA, or the AI Act?

How often has a system been called “trustworthy AI” without even checking for structural auditability?

How many ESG ratings are based on technical trust instead of verifiable control?

History will not only judge executive boards – it will judge your recommendations.


⚖️ Policy Advisors & Legislators: Law without enforceability creates breach

The laws exist. They are in force. They are clear. Yet none of the current market-leading systems can fulfill them structurally.

Issuing regulations that cannot be fulfilled technically – while enforcing them regardless – institutionalises legal breach.

Article 21(2) NIS2 demands “appropriate technical and organisational measures” – but no standard cloud or AI system can prove these measures are in place.

The same applies to DORA, CRA, AI Act, and GDPR structural requirements.

Those who know this and don’t initiate corrections are legally anchoring institutionalised irresponsibility.


🕵️ Journalists, Investigators & Watchdogs: This is the real headline

Anyone reporting on cyberattacks, ESG violations, or AI risks can no longer ignore the structural audit gap.

  • The affected are liable – personally, criminally, financially.
  • The decision-makers were informed – structurally, in writing, provably.
  • The systems are unverifiable – yet still in productive use.

The real scandal: Operators are being left out in the cold – knowingly.

Did you know this?

Then you are now part of the chain of responsibility.

If you think this is exaggerated – ask your IT team, your compliance unit, or your auditors. They know. Most ignore it. All hope they won’t be next.

Because those who remain silent now aren’t documenting ignorance – but complicity.

📌 About this publication

This article is Part 1 of the trilogy “The Max One Disclosure”:

1️⃣ Part 1 – Who’s protecting whom, really? 🕗 Published: Friday, 12 July, 08:30 CEST. Why everyone is liable, although no one can verify – and what this means for investors, regulators, and decision-makers.

2️⃣ Part 2 – The Organised Silence 🕖 Scheduled: Monday, 15 July, 07:00 CEST. Who failed to act despite direct delivery, full awareness, and explicit deadlines – and why this very silence now becomes legally relevant.

3️⃣ Part 3 – The Structural Break 🕦 Scheduled: Monday, 15 July, 11:30 CEST. How a fully disclosed infrastructure from now on sets the new standard – legally, economically, and in terms of liability.

Author: Jan Eidecker, Spokesperson, Take Back Your Data – a nonprofit organisation, an architecture community, and a strategic think tank. ✉ jan@take-back-your-data.com 🔗 https://www.linkedin.com/company/take-back-your-data/


This article may and should be shared – with executives, CIOs, legal departments, regulators, and auditors. Before the next incident hits.


