Security in contracts: how not to get fined $80 million

Another day, another breach, another fine. This time it’s Capital One, a US bank, whose move to the cloud was so poorly managed that someone was able to extract the data of 100 million customers and post it to a public GitHub page. Capital One has been fined $80 million by the US Office of the Comptroller of the Currency (OCC), part of the US Treasury, but that fine is just the start. Once you add in the cost of contacting customers, buying them identity protection services, class actions and other lawsuits, the total cash outflow will probably exceed $300 million. And that’s before assessing the damage to Capital One’s reputation.

So where did it all go wrong? According to the OCC, the bank “failed to establish appropriate risk management for the cloud operating environment, including appropriate design and implementation of certain network security controls, adequate data loss prevention controls, and effective dispositioning of alerts.”


A lot of suppliers don’t get it

Given that the bank had outsourced this part of its operations to a cloud provider, the OCC’s finding probably means that its contract with the cloud service provider (most likely a SaaS player) did not address information security properly.


Which is all very topical, because I’ve just been involved in negotiation with an EU regulated entity (my client) that was looking to purchase a SaaS service from a supplier based in the US. What did the US supplier’s standard contract say about information security? Nothing. Nada. Zilch. Zero. Unbelievable, but true. (Actually not quite true, because their data protection schedule did reproduce the GDPR’s information security clause. But that doesn’t work – see below.)


What do you do when someone provides you with a contract like that? Run a mile would be a good place to start. Rightly or wrongly, we didn’t run. We asked to see their information security policy because, if it was any good, we could use it as a schedule to the contract. That didn’t work either: they said their information security policy was confidential and could not be disclosed. This is a bizarre answer (though not the first time I’ve heard it), because a top-level information security policy is so general, organisational and aspirational that there is nothing confidential or sensitive in it. It will contain statements like this one:


Company will manage access rights to information assets and their supporting systems on a ‘need-to-know’ basis, including for remote access. Users should be granted minimum access rights that are strictly required to execute their duties (principle of ‘least privilege’).


Or this one:


Company shall establish and implement an information security testing framework that validates the robustness and effectiveness of its information security measures, and shall ensure that this framework considers threats and vulnerabilities identified through threat monitoring and the ICT and security risk assessment process.


Or this one:


Company shall establish and implement an incident and problem management process to monitor and log operational and security ICT incidents and to enable it to continue or resume, in a timely manner, critical business functions and processes when disruptions occur.


Not exactly what a hacker would call the mother lode. So what did we do? We produced our own infosec schedule and asked for it to be included in the contract. Basically, we recreated what should have been (and possibly was) in their information security policy, using statements of the type set out above (i.e. still very high-level). We also organised it into logical groupings. For example:


  • organisation and governance
  • logical security
  • physical security
  • ICT operations security
  • security monitoring
  • information security reviews, assessment and testing
  • information security training and awareness


(If you want to get hold of some of this stuff ready-made, try the EBA Guidelines on ICT and security risk management: https://eba.europa.eu/-/eba-consults-on-guidelines-on-ict-and-security-risk-management.)


The real breach is not the data loss

Why should you worry about this kind of stuff, you might ask? Well, if you are regulated by the FCA or by the ICO or your local equivalents (and just about every company now processes personal data, and so is regulated by the ICO or its local equivalent), then, if you outsource these kinds of activities, you have no option: you are legally required to put the appropriate measures, controls and so on in place. As the OCC said in relation to Capital One, you have to put in place appropriate design and implementation of certain network security controls, adequate data loss prevention controls, and effective dispositioning of alerts. If you are not outsourcing, you have to have them internally. If you are outsourcing, you have to have them externally, i.e. in the contract with your supplier.


The real breach of the regulations is not the data breach but the failure to put the appropriate measures in place right from the start. The data breach is what gets the regulator’s attention, but you were already in breach before then.


GDPR and information security

Basically, this is just a version of the accountability principle from the GDPR. It’s not enough to be compliant; you must also be able to demonstrate, when asked, that you are compliant. Which brings us neatly to the GDPR’s requirement on information security (Article 32, in case you were wondering). Here is what the GDPR mandates in terms of infosec.


Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate:


(a) the pseudonymisation and encryption of personal data;

(b) the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;

(c) the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;

(d) a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.


The GDPR trap!

There are a number of suppliers around that think that, if they include this in their contract with the buyer, and nothing else, everything GDPR-infosec is sorted. Well, that’s half true: it’s all good for the processor (from the contract perspective) but, for the controller, nothing could be further from the truth. In fact, the controller is in breach of GDPR from the moment it signs the contract.


How could this possibly be, I hear these controllers cry, when we have reproduced verbatim the words of the GDPR? Well: that is exactly the problem – it’s verbatim. Imagine that you are the legislator putting together the set of rules that will become the GDPR, and in particular setting out what is required in terms of information security. As you wrestle with that problem, you become aware of a number of constraints. You can’t be too prescriptive in terms of technology, because the GDPR is intended to last 30 years, and what is presently technologically best of breed will look feeble in 2050. You can’t be too prescriptive in terms of level of protection, because personal data varies hugely in terms of sensitivity: my medical records are orders of magnitude more sensitive than the number of coffees I drank last week. So, what do you do? You set out a high-level obligation which is principled enough to set a high bar, but flexible enough over time to allow technology to evolve, and which recognises that high (as in high bar) is a relative concept.


But if you are two companies about to sign a contract, you don’t have those problems. You don’t have a time issue (most contracts last three to five years), and you know exactly the type of data you are dealing with and how sensitive it is. You don’t have the excuse of needing to keep it principled: on the contrary, you are required to be prescriptive and much more granular. Except, that is, the processor. All the processor has to do, vis-à-vis the GDPR, is ensure that its infosec meets the obligation set out above (whatever that means): it doesn’t really matter what’s in the contract, because it has a (sort of) de facto obligation. Either the infosec the processor provides is good enough to comply with the GDPR, or it isn’t.


The buyer has to be seen to be secure

The poor controller, on the other hand, has the obligation of accountability. He has to be able to demonstrate that he can bring about good infosec: in other words, he has to be able to demonstrate that he has contracted for good infosec, which means he has to have obligations he can enforce. Now, if all the contract does is repeat Article 32 of the GDPR, he’s in trouble because Article 32 is, in practical terms, unenforceable between companies. It’s just too vague for a court to enforce (courts aren’t good at determining vague things like state of the art: that’s why regulators get to do it). To give a simple example, imagine you have contracted with a builder to build you a house. The contract defines the house to be built as follows:


Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of house-building as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons who are going to live in that house, the buyer and the builder shall implement appropriate technical and organisational measures to make sure the house doesn’t fall over, including inter alia as appropriate:


(a) using good bricks;

(b) making sure the rain doesn’t come in;

(c) the ability to fix the house if it falls over;

(d) snagging.


If you have ever seen the specification for a house, you will know that it looks nothing like the above, and there’s a good reason for that: the above is simply too vague to build from.


Moral: prevention is better than cure, and cheaper by a factor of 10,000. If you’ve got some questions about how best to reflect these issues in your contract, or you just want to brainstorm a few of them, then feel free to get in touch. You can find me here.