A rundown of the key challenges with choosing and using cyber insurance, as called out over the last few months.
It looks entirely possible you will have ‘adequate’ security dictated by your insurers, so it is your job to understand the risk-based yardstick they’re using to define that. Quoting this recent article on the subject, aimed at the banking industry:
“When there are security breaches, where companies have failed to comply with [industry best] practices – otherwise known as negligence – then you have lawsuits,”
That’s written by a lawyer, and he is almost audibly rubbing his hands with glee. ‘Best practice’… ‘negligence’… based on what benchmark? Time to ask yourself: how would you defend your security strategy if an insurer says your company’s controls are just not good enough?
Cyber Security Insurance or CSI… a handy coincidence, as Jan Winter pointed out, with enthusiastic supporting commentary from Stuart (@stegopax) and Claus Houmann. The result is a tongue-in-cheek, tweet-sized analogy with a hard centre.
Infospectives (@S_Clarke22) April 01, 2015
There’s much discomfort about this. Twitter critics have been vociferous, but I’ve also had correspondence with a couple of CISOs unsure how to tackle beauty contests with eager brokers. The bottom line: they may be offering good or bad deals at the moment; neither you nor they know.
Without historical data actuaries cannot drive using the rear view mirror annmariecommunicatesinsurance.com/2015/01/02/cyb…
Infospectives (@S_Clarke22) April 01, 2015
It’s all about forecasting the likelihood and scale of a breach. Doing that needs sufficient good-quality historical data to begin to pin down root causes and trends. That historical incident info (the accurate stuff), and a workable model to monetise identified risks, just don’t exist as yet.
I’m not a cyber insurance expert, but I know security risk, and that’s what this is all about. I am, however, inviting anyone reading to call out inaccuracies, other good sources of information and alternative perspectives. How to get in touch.
- Lack of quality historical data about notifiable incidents – Mandatory notification of breaches and incidents has been patchy, both in terms of requirements and in terms of the quality of data provided upon notification. Here’s TechCrunch on the latest US proposals for a blanket requirement to report breaches. Freedom of Information requests, where a public interest can be proven, do reveal some useful details (notably following HIPAA, ICO, SEC, or other investigations), but there is no general standard for investigation and reporting.
- Lack of quality historical data about other publicised incidents – As evidenced by the usual attribution-go-round, when another big breach hits the news, root cause investigation is often not a straightforward task. The upshot is piecemeal or incomplete data, even when the media circus dies down. Hardly the stuff of statisticians’ dreams.
- Lack of quality historical data about internal incidents – Internal risk event notification and incident management is also hugely variable from firm to firm. It ranges from nothing being formally logged at one end of the scale, through the typical historical situation of only outages being recorded, to firms at the other end who robustly log the full range of security incidents and near misses.
Even at that better-practice end, there will be the same wide variance in incident identification, investigation, root cause analysis, risk estimation, aggregation to find trends and monetisation of impact.
- Identification – What proportion of incidents have historically gone under the radar? A selection of those may get spotted in future if useful threat/vulnerability monitoring and reporting becomes the norm. The heightened focus on cybersecurity will support the business case for that, but it doesn’t stop the historical dataset being (at best) patchy and at worst utterly useless to enable any predictive modelling.
For an alternative perspective, think about the recent rash of public figures brought up on historical child abuse charges. There was a cultural tendency not to report such things and shocking issues with appropriate handling of reported incidents. That and other bases for bias can seriously skew statistics. The situation with cyber incidents will be no different.
- Investigation – There is vast variance in the quality of investigations into incidents, particularly when trying to identify all parts of a kill chain involved in quiet exploits executed over time (APTs, by some definitions). A significant linked challenge is uncovering the social engineering and human error contributions to a breach. As capability improves and matures there will be a rise in the quality of aggregate data (the newest NIST standard is heavy on incident response guidance, as is content on most security-focused sites) but that, as I call out under identification, doesn’t help us with prediction. Models need to settle, and if the type, quantity and quality of data indicating what makes a breach more likely (or more impactful) keeps changing, it constantly stretches the time before worthwhile conclusions can be drawn.
- Impact assessment – So the above is about the number and frequency of incidents. This is about the severity and cost. Hands up who has a way to accurately assess, monetise and aggregate the impact of internal security incidents? Yes? Great. How about the impact on company reputation? It’s not just the intangibles that are hard to put a figure on, it’s the cumulative impact of many small incidents (e.g. lost laptops, or one-off low cost fraud). Things I look at in more detail in my Tripwire State of Security article ‘Cybersecurity Risk – The Unvarnished Truth’.
- Actuarial modelling – This is where my knowledge base is too thin. So rather than me inexpertly pontificating about it, why not have a look at the diagram below from KPMG (which illustrates factors that can influence the accuracy and success of risk modelling) and read a couple of the articles below.
Society of Actuaries Releases New Mortality Rate Tables to Improve Accuracy of Private Pension Plan Estimates

The latter is purely to demonstrate the frequent changes needed to ensure risk models remain accurate. Even the best-documented datasets evolve over time. Now, bearing in mind what you do (or don’t) accept in this article, how close can insurers be to:
- Understanding global and industry views of cyber risk accurately enough to provide an adequate level of cover and set generally fair premiums
- Understanding the risk of a breach and how much security control is enough to minimise that risk, for your specific company.
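To make that modelling gap concrete, here is a minimal, purely illustrative sketch of the classic annualised loss expectancy (ALE) calculation that sits underneath most attempts to monetise security risk. Every incident class and figure below is invented for the example; the point is how sensitive the total is to frequency estimates insurers currently have no reliable data for.

```python
# Toy annualised loss expectancy (ALE) sketch -- illustrative only.
# All figures are invented; real actuarial models need credible
# frequency and severity distributions, which is exactly the
# historical data insurers currently lack.

def annualised_loss_expectancy(annual_rate, single_loss):
    """ALE = annual rate of occurrence x single loss expectancy."""
    return annual_rate * single_loss

# Hypothetical incident classes for one firm
incidents = [
    {"name": "lost laptop",        "annual_rate": 12.0, "single_loss": 2_000},
    {"name": "phishing-led fraud", "annual_rate": 4.0,  "single_loss": 15_000},
    {"name": "major data breach",  "annual_rate": 0.05, "single_loss": 2_500_000},
]

total_ale = sum(
    annualised_loss_expectancy(i["annual_rate"], i["single_loss"])
    for i in incidents
)
print(f"Total ALE: £{total_ale:,.0f}")

# With no trustworthy breach history, the 0.05 rate above is a guess.
# Nudging it to 0.1 (one breach a decade instead of two) adds
# £125,000 a year to expected loss -- uncertainty an insurer will
# pad into everyone's premium.
```

Note how the cumulative cost of many small incidents (the laptops) and the tail risk of one large breach both feed the same number, yet the breach frequency term dominates and is the hardest to estimate.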
It’s entirely feasible that more secure companies will be slapped with one-size-fits-all premiums, padded to protect insurers against uncertainty created by their lack of accurate risk data. So all of that leaves companies with a range of interrelated challenges:
- Top-down and sales pressure – Insurers see a still-open market (it’s actually over a decade old). With fresh, breach-intensified demand they are energetically chasing market share. Insurance is something the C-suite understands. In combination, that will put pressure on security leaders to make decisions sooner rather than later.
- Government evangelism – The government has thrown its weight behind this market. On 23rd March it published its report into the role of insurance in mitigating UK cyber security risks (a good summary of key points). It’s the kind of missive that will find fans in the boardroom. There’s been a long relationship between the government and cyber insurers. Suppliers to the state are required to become Cyber Essentials certified. The Cyber Risk & Insurance Forum (CRIF) were key consultants on the creation of CE, and optional certification for the charity sector comes with an insurance policy attached – baby steps to making cover mandatory?
- Initial sweeteners – Insurers know their limitations and will be less risk averse at first to get competitive advantage. This will make deals attractive.
- Creeping coverage loss – Insurers will add more uncertainty-quashing conditions and exclusions after getting your business if the trend towards breaches doesn’t slow (unlikely), or their risk models remain too inaccurate to comfort them about covering their losses (extremely likely).
- Potential government backup – That may not matter if inaccuracies in premium estimation and levels of cover are in your favour and/or the government (as predicted at the end of this article), moves to provide a financial backstop for overexposed insurers.
- Do you have a risk leg to stand on? – Businesses with immature or moderately mature risk cultures will have no way to benchmark the appropriateness of premiums and policy coverage against their specific risk exposure.
- Not a security panacea – Overestimating coverage or underestimating risk exposure could lead to inappropriate reliance on insurance as a strategy for managing risks. Transferring risk with insurance is an entirely valid risk management option, but cover has to be right for you, and it is only intended to deal with risks that are not economic, or (based on a defensible risk argument) not strategically sensible, to mitigate.
- Come claim time, will all be well? – As implied by Gary Smith in the tweet below, claims may bring nasty surprises if company security turns out not to be as robust as it appeared when premiums were set:
Gary Smith (@fl1bbl3) April 02, 2015
Security standards dictated by insurers & copyright risks?
Setting aside the challenges individual businesses will face, insurers are doing all they can to up the quality of available data and reduce their risks. The two main ways they are doing this are:
- Demanding evidence of some defined standard of security control from potential clients
- Gathering incident data from client companies as part of setting ‘fair’ premiums.
The section title outlines my concerns. Could we end up with the insurance tail wagging the business dog, with security strategy and implementation driven by firms tasked with picking up the pieces if security controls fail? Further to that, insurers may end up being the first businesses with enough aggregate risk data to work out what a strategic security priority looks like. Data they will not be sharing freely, or forgoing profit to reflect in reduced premiums for firms who can prove they’re doing a good job.
Women (and men) paid the price when it became illegal to use gender to calculate premiums. If even proven risk levels fail to secure proportionate rates, how will you avoid having to jump through insurer-defined hoops to get the right coverage at the right price? Will you be forced to gold-plate some controls and ignore others you know are locally important?
What, in your opinion, is wrong with that picture? I realise that does next to nothing to damp down discomfort, but as things currently stand, calling out the challenges is the best I (we?) can do. Except for heeding the sage counsel offered by our business ancestors: caveat emptor… and remember, it’s a buyers’ market.