
Time to Get SET for A New Wave of Regulation

How Sovereignty, Enforcement and Trust Are Shaping Our Relationship with Technology

Guest article by Bill Mew, Global Cyber Ambassador for the International Association for Risk and Crisis Communication

We have never been as reliant on technology, or indeed as interconnected, as we are today. Almost all modern organisations have become digitised to some extent, and many, such as financial enterprises, are now almost entirely digitised and have become technology companies in all but name. Thus the extent to which technology is regulated and secured impacts us all.

Historically, technology regulation focused on tangible aspects: the standards for devices, their production and their interoperability. It was thought that more intangible aspects would be harder to regulate, but the EU's introduction of GDPR to address data protection and privacy has led to a wave of similar regulations worldwide, and far more regulation on everything from resilience and digital harms to artificial intelligence is on the way.

If, however, this wave of regulation is to be effective and achieve its aims, then we need to consider how it is being applied and address three major issues: Sovereignty, Enforcement and Trust.


Sovereignty

International organisations are struggling to cope with the patchwork nature of the new regulations. Operating in numerous markets requires compliance with the local regulations in each one. Even in markets like the EU where a single regulation applies, it is often interpreted differently in each country, as we have seen with GDPR rulings from different national Data Protection Authorities (DPAs). And in the US, the lack of any federal data protection regulation has led to privacy regulations that vary from state to state.

Divergence, and a lack of harmony, between the many different regulations represents a significant challenge for international businesses (as well as for firms operating in many different US states). This is complicated by regulations that either restrict or prohibit cross border data sharing (typically for personal information) or are extraterritorial in nature and potentially therefore lead to jurisdictional conflict (such as the conflict between the US CLOUD Act and GDPR).

Example: Supplementary measures and data sovereignty following demise of Privacy Shield
In Europe privacy is seen as a fundamental human right, while in the US national security is seen as more of a priority. Privacy Shield, the data sharing agreement between the EU and US, was overturned in the European courts when the US was judged to provide inadequate data protections. In essence, mass surveillance by the NSA using secret warrants that lacked judicial oversight or any means of redress was seen as contrary to GDPR. In addition, the extraterritorial nature of the CLOUD Act has made this a global problem for US tech firms, as even the use of their data centres located in the EU counts as an international data transfer in this respect. Only by adopting supplementary measures (like encryption) can organisations make use of US-based cloud or SaaS firms in a compliant manner.
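The idea behind such supplementary measures is that data is encrypted before it ever reaches the US-based provider, with the keys held solely by the EU organisation. A minimal sketch of this principle, using Python's widely used `cryptography` library (the record contents and field names here are purely illustrative):

```python
from cryptography.fernet import Fernet

# The key is generated and retained only by the EU organisation;
# the US cloud provider never receives it.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"name=Alice;email=alice@example.eu"

# This ciphertext is what actually gets stored with the US-based provider.
token = cipher.encrypt(record)
assert token != record  # provider holds only unintelligible ciphertext

# Even under a disclosure order, the provider can hand over nothing usable;
# only the EU key-holder can recover the plaintext.
assert cipher.decrypt(token) == record
```

In practice the hard part is key management: the supplementary measure only works if the keys genuinely never leave the organisation's own jurisdiction and control.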

As further regulations are introduced, especially in regard to AI, the compliance challenge will only increase:

  • The EU: General Data Protection Regulation (GDPR) is being followed by newer regulations like the Digital Operational Resilience Act (DORA) and proposals for an AI Ethics Framework.
  • The UK: in addition to proposals to ‘localise’ GDPR, which would lead to divergence from the EU, the UK’s Department for Science, Innovation and Technology (DSIT) recently published its long-awaited AI white paper setting out five principles which regulators must consider to build trust and provide clarity for innovation. Meanwhile the UK’s Online Safety Bill has been derided for seeking to compel backdoor access to end-to-end encryption systems.
  • The US: there is a myriad of privacy laws based on jurisdiction and sector, many of which also contain principles relating to AI. While there is no federal privacy law, the White House has announced a blueprint for an AI Bill of Rights, the Federal Trade Commission (FTC) has issued a series of reports on AI and related consumer and privacy issues, and the Securities and Exchange Commission (SEC) has issued new requirements for cybersecurity disclosures.
  • Elsewhere: regulators across Asia have been rolling out local privacy regulations and are looking at potential controls on AI, but these are likely to be less prescriptive than what is planned in the EU.

SOLUTION: organisations, which have long been told to build data privacy and security in from the ground up, will need to think about data sovereignty in the same way. Increasingly, competitive advantage will be gained from adopting a strategic data architecture that provides both the flexibility to adapt to new regulations and the controls to apply different sovereignty and compliance measures seamlessly in each market.
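One way to make such an architecture concrete is a per-market policy table that the rest of the stack consults before storing or moving data. The sketch below is purely illustrative: the market codes, region names and control flags are hypothetical and not tied to any real provider or regulation text.

```python
# Hypothetical per-market sovereignty controls (illustrative values only).
SOVEREIGNTY_POLICIES = {
    "EU":    {"storage_region": "eu-central", "client_side_encryption": True,  "cross_border_transfer": False},
    "UK":    {"storage_region": "uk-south",   "client_side_encryption": True,  "cross_border_transfer": False},
    "US-CA": {"storage_region": "us-west",    "client_side_encryption": False, "cross_border_transfer": True},
}

# Fail closed: an unknown market gets the strictest controls, not the loosest.
STRICT_DEFAULT = {"storage_region": "eu-central", "client_side_encryption": True, "cross_border_transfer": False}

def storage_policy(market: str) -> dict:
    """Return the sovereignty controls to apply for a given market."""
    return SOVEREIGNTY_POLICIES.get(market, STRICT_DEFAULT)
```

The key design choice is the fail-closed default: when a new market is entered before its policy has been defined, data is handled under the most restrictive regime rather than the most permissive one.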


Enforcement

Regulation without enforcement is pointless. Responsible organisations that would have acted well anyway are burdened with extra compliance costs, while irresponsible organisations gain a competitive advantage by ignoring costly regulation without any fear of sanction.

When enacting regulation, governments often give inadequate consideration or resources to enforcement. As an individual, if you are the victim of data abuse, who do you turn to? The police typically lack the resources to tackle existing crime, let alone the skills needed to deal with more complex digital crimes. And regulators typically take action on as few as 3% of the notifications that they receive.

You could try to take action through the courts, but this is prohibitively expensive unless you are part of a group action, and in some jurisdictions, such as the UK, this isn’t even possible.

Some notable commentators are also warning that we are entering a ‘Technopolar’ era in which governments have lost control over significant entities that are now effectively acting above the law, such as:

  • BigTech firms that use regulatory arbitrage, lobbying power and influence to avoid sanction;
  • AI systems that are too complex to be understood, managed or even regulated; and
  • Cybercriminals that use safe havens to operate beyond the reach of international law enforcement.

Example: Schrems, Facebook and the Irish Data Protection Commission (DPC)
When Max Schrems won his previously mentioned case against Facebook, which overturned Privacy Shield, the Irish Data Protection Commission should have enforced the ruling the very next day. However, Facebook, like many other Big Tech firms, has its European headquarters in Ireland because the country provides a favourable corporate taxation and regulation regime.

While the DPC was responsible for GDPR enforcement, it had taken no action at all against Facebook’s privacy abuses. It took a handful of well-motivated privacy campaigners, who had to crowd-fund their own expenses, to take on Facebook with its bottomless pockets and army of lawyers and lobbyists. And even after they won the case, rather than taking action against Facebook, the DPC colluded with the tech giant on a scheme in which signing up for a social media account would have been treated as a contract in which users signed away their privacy rights. Rather than upholding GDPR and people’s rights (the DPC’s role), this would have undermined them completely.

The campaigners were forced to bring a vote in the European Parliament, passed by 541 votes to 1, to sanction the Irish DPC for not taking action. This eventually forced the DPC’s hand, and after further lobbying by other DPAs to ensure that any action taken would be significant, a record fine of $1bn was announced.

Ironically, however, the fine went not to the campaigners that had worked so hard to bring the case, but to the regulator that had frustrated their efforts all the way.

While governments have been reluctant to provide either the resources or the mandate for regulatory authorities to be draconian in their enforcement of data protection laws, this may change. Current concerns that enforcement may be seen as anti-business will be set aside if governments start to see their revenue stream from taxation coming under threat.

Governments have already struggled during the cloud era to get technology giants to pay what is regarded as a fair share of tax – Amazon’s main UK division paid no corporation tax at all in 2022 or 2023. And a “Google tax” introduced by the UK’s coalition government to crack down on multinationals shifting profits overseas has been a total failure. Officials predicted that the tax would raise up to £400m a year, but recent figures show that revenues have slumped to zero, and it is not expected to raise any money in the years ahead either.

While there are concerns that AI will disenfranchise large sections of the workforce and lead to significant job losses, particularly in white-collar roles, it is also argued that many new roles will be created. The problem for governments is that far more jobs are likely to be lost than created, and that in many instances jobs that currently pay wages and taxes will be replaced by systems that pay neither. This could lead to a significant erosion of the tax base at a time when heavily indebted governments are already under great financial pressure.

SOLUTION: The threatened impact on their tax base will force governments to rethink the way that they tax value creation and encourage them to seek to shift the tax burden further from labour (workers that vote) to capital (the large tech firms with their AI systems that don’t vote). Governments will also need to reconsider how they tackle the ‘Technopolar’ threat from digital powers that are reshaping the global order.


Trust

Brand trust, the respect and loyalty that customers have for a brand and how strongly they believe it can deliver on its promises, has always been paramount. In the digital age, however, brand trust is also influenced by the extent to which customers believe they can trust you with their personal information or credit card details.

Having a trustworthy brand in the digital era can mean the difference between having a thriving e-commerce presence or none at all. We have even seen big tech brands scoring points off one another in this respect, with the battle between Apple and Facebook being particularly one-sided in terms of brand trust.

Trustworthiness and responsibility will both also have a significant impact on the regulatory front. When facing regulatory scrutiny, it will not be a question of whether something went wrong or not (if you’re facing regulatory scrutiny it will be as a result of something having already gone wrong), but instead a question of what reasonable and responsible steps you took in advance to try to prevent it.

Whether facing a national DPA for a GDPR breach or making a cybersecurity disclosure to the SEC, you will need to be able to demonstrate that your CIO and CISO did everything necessary from prevention and detection to incident response. In addition you’ll need to show that the rest of the organisation was adequately prepared, with staff at all levels given cyber hygiene training to respect data privacy and prevent phishing attacks, and with senior management participating in cyber fire drills.

When facing the music you will need to be able to provide a legally defensible narrative not only for the regulator, but also for the press and indeed in any litigation that you may face. You will also need to show that compliance was more than a tick box exercise and that it was taken seriously.

For example, when it comes to cyber fire drills (now seen as essential), GDPR mandates that you regularly test and assess your cybersecurity systems and processes. Regulators take a dim view of firms that fail to do so at all. They look somewhat more favourably on companies that simply conduct table-top exercises or PowerPoint training. And their most lenient sanctions will be reserved for companies that conducted the most rigorous immersive training exercises to really put the crisis preparedness of their teams to the test.

Best practice: Human factor counts from beginning to end of a cyber incident
Cyber incidents typically start with your people: according to IBM’s 2020 “Cost of a Data Breach” report, human error is the root cause of 23% of data breaches. Whether it’s sending a sensitive document to the wrong person, failing to use a strong or original password, or simply losing a thumb drive or laptop, the common denominator in many cyberattacks is people. Creating a strong PrivSec culture where the value of data is recognised means equipping staff at all levels with both the training and the tools required to become active data guardians.

Cyber incidents also end with your people: while cybersecurity is often seen as a technology issue, once you have a breach it is a problem for the whole organisation. You cannot learn crisis preparedness from a manual, any more than you can learn to drive from one. You need fully immersive training to give you the road-sense to know how to act in a cyber crisis. Your best chance of dealing with an incident is to have rehearsed it in as realistic a manner as possible. This will equip your team to deal with the real thing - if or when they need to!

Having cyber insurance is unlikely to curry much favour with the regulators though, as it is only ever seen as supplementary to cybersecurity and incident response, and never as a substitute for either of them.

SOLUTION: Engendering the right PrivSec culture is rapidly becoming a considerable source of competitive advantage, whether in the confidence, respect and loyalty that you can earn with customers as a digitally trusted brand, or in the legally defensible narrative that you will be able to tell if or when things go wrong to show that you acted responsibly. It will allow you to charge a premium with customers while also reducing the risk of incidents occurring and minimising your exposure to regulatory fines and litigation if they do.

Make sure that you’re all SET for the challenges ahead!

Bill Mew, Global Cyber Ambassador for the International Association for Risk and Crisis Communication