Off the back of another blog post…
…where I quote a Trustwave survey saying 80% of IT pros feel pressured to deliver insecure solutions, a new discussion emerged. One I’ve reproduced below. It suggests it’s “standard” for vendors to ignore vulnerabilities in order to keep solution maintenance costs down, and THEN fix those bugs using revenue from extortionately priced support contracts.
Or not fix them at all, if the cost/benefit argument doesn’t stack up (think of the scene in Fight Club where Brad Pitt’s character explains there’ll be no car recall if compensating for the deaths and injuries caused by a fault is cheaper than the recall itself).
The thread below suggests there’s more to this. Essentially “Quis Custodiet Ipsos Custodes?” or “Who guards the guards?”. When risks are being transferred to the customer, where’s the ultimate sanction for poor solution design and for lack of due diligence in treating identified risks?
In an SC Magazine article entitled ‘Who is responsible for software safety? Nobody is no longer an option’, Bob Brennan, CEO of Veracode, had this to say:
“To highlight how big an issue this is, nine out of 10 third-party applications get an F when they are independently audited for security threats.”
This isn’t imagined. For a window into one executive’s perspective on mitigating risks, here’s a ‘hypothetical’ example from Jason Spaltro (Sony’s then executive director of information security) in a 2007 article:
A company relies on legacy systems to store and manage credit card transactions for its customers. The cost to harden the legacy database against a possible intrusion could come to $10 million, he says. The cost to notify customers in case of a breach might be $1 million. With those figures, says Spaltro, “it’s a valid business decision to accept the risk” of a security breach. “I will not invest $10 million to avoid a possible $1 million loss.”
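That trade-off is, at heart, a crude expected-loss comparison. Here’s a minimal sketch of the arithmetic; the dollar figures come from the quote above, but the breach probability is my own illustrative assumption, not something from the article:

```python
# Back-of-envelope risk arithmetic behind the "accept the risk" decision.
# Dollar figures are from the quoted example; the probability is assumed.

hardening_cost = 10_000_000   # cost to harden the legacy database
breach_cost = 1_000_000       # estimated cost to notify customers after a breach
breach_probability = 0.5      # ASSUMED annual likelihood of a breach

expected_loss = breach_probability * breach_cost  # annualised expected loss

# In this narrow framing, accepting the risk "wins" whenever the expected
# loss is below the mitigation cost -- here it would win even at 100%.
accept_risk = expected_loss < hardening_cost
print(f"Expected loss: ${expected_loss:,.0f} -> accept risk? {accept_risk}")
```

Note what that framing leaves out: reputational damage, fines, and the fraud losses borne by customers — precisely the externalised risk the thread below is complaining about.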
To anyone associated with an IT/InfoSec solutions vendor – are the practices described here standard? Comments very welcome, anonymously or otherwise. The fallout is hard to predict…or is it?…
- I see this all the time with my developer friends. They say the trend is even worse when it comes to mobile applications, because rapid monetization is what keeps them in business. Security isn’t an afterthought. It’s a non-thought. They simply don’t care.
- The issue becomes so big because risks are rarely, if ever, handed over to the owners of the post-implementation service – and I don’t mean IT bodies, I mean the ultimate executive owners who are the ones impacted by a failure of that service. The oversight happens as a point-in-time exercise, in silos, quite cut off from anyone apart from the PM who has his eye on the budget prize. That is a sweeping statement and not fair to the diligent souls out there, but bad behaviours with no fairly quick consequences will carry on forever….
The rest essentially replicates the blog post linked to at the top.
Consulting Manager US Security Firm
- Security folks HAVE to be involved in development projects from the start. And I don’t mean the start of the coding – but from taking an idea and evolving it into a product. Too many times, the security team will be called in late the afternoon before implementation to do a review and give their blessing.
Having developers who understand what is meant by security, utilize secure coding practices throughout the SDLC and interact with the security folks as peers is definitely a step forward. They also need to continue their education and keep informed on current security issues.
Unfortunately this doesn’t happen overnight, and it generally will not get executive commitment and sponsorship or, more importantly, the necessary resources.
Selena Flood CISA, CPA, CFE, CAPM Programme Integrity Analyst & US government contractor
- @…., I totally agree. It is a little-known fact among security and IT professionals that approximately 60-80% of software production costs derive from software maintenance expenses – NOT software development. Developers recoup these backend costs by passing them on to the consumer. The software titans already know that their pre-market testing costs will be significantly reduced if they release their ‘buggy’ product and simply wait for consumers to report the bugs to them. The developers then decide which fixes will yield the highest cost-benefit to them, not the consumer. Software maintenance fees pay for the product’s development (more bells and whistles), technical support and – believe it or not – the FIXES for the original defective software. That’s why we can buy super-cheap software that is accompanied by ridiculously high tech support costs.
- @…. Reminds me of the scene in Fight Club on the plane.
Car recall costs > Compensation for deaths caused by mechanical car failure = No recall. We’re not dealing with people’s lives here, but we could be dealing with their financial welfare and reputation. My perspective in ‘Mismanaged Risk – 80% IT Pros Pressured To Deliver Despite Security Concerns’ was aimed at in-house security assurance for company change programmes. I talk about how vital it is to land a clear view of potential impact with the ultimate risk owner. It sounds like software company execs are getting that impact view, but as they’re transferring the production risk to their consumers, they don’t really care. I’m sure all clients would be delighted to realise they’re compensating these firms for failure. How prevalent is this?
- @Sarah, unfortunately, it’s the industry standard. It is my belief that you will see more enterprises purchasing applications according to the EALs (Evaluation Assurance Levels) prescribed by the Common Criteria (CC), which is standardised as ISO/IEC 15408. It’s time for the InfoSec community to brush up on the CC (relatively new) and older (TCSEC and ITSEC) evaluation standards to begin making better choices about our ‘trusted’ applications and systems. Ultimately the burden will be shared by InfoSec and IT, not the developers, thanks to the American free-market system.
This practice seems to have been normalised behind a wall of collusion and silence within vendor management teams. Where’s the regulator in this space? The only sanction I can see is if there’s a personal-information breach linkable to a known unfixed bug. The only hope of control is a boycott by consumers, either directly or by avoiding vendors not proven to adhere to standards like those referenced by Selena. This is also bigger than InfoSec, isn’t it? Functional bugs will get the same treatment, but are less risky to ignore. Code verification isn’t part of any due diligence programme I’m aware of. At some point you have to trust: look into secure development training and practice, then rely on diligent adherence to vendors’ own policies. No assurance or due diligence regime can mitigate wilful bypassing of good controls. And this seems to be that on a grand scale.
Selena Flood (her last post gives an impactful perspective and good-practice advice)
- @Sarah, feel free to quote me in your blog. To put the issue in perspective, Carnegie Mellon University estimates there are 5 to 15 bugs in every 1,000 lines of code; for instance, Windows 2008 has 40 to 60 million lines of code (quoted from Shon Harris).

One of the best tools for developers to use is OWASP’s Code Review Metrics. Below are the benefits of its use…

“The objective of code review is to detect development errors which may cause vulnerabilities, and hence give rise to an exploit. Code review can also be used to measure the progress of a development team in their practice of secure application development. It can pinpoint areas where the development practice is weak, areas where secure development practice is strong, and give a security practitioner the ability to address the root cause of the weaknesses within a developed solution. It may give rise to investigation into software development policies and guidelines and the interpretation of them by the users; communication is the key.” https://www.owasp.org/index.php/Code_Review_Metrics

Also, software architect and software developer are two of the fastest-growing occupations in the US, and software titans are becoming millionaires at an alarming rate. So expect ‘buggy’ software to be a major part of InfoSec’s headaches for a long, long time :-( http://money.cnn.com/pf/best-jobs/
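Those figures make for sobering back-of-envelope arithmetic. A minimal sketch, using only the defect-density range and codebase sizes quoted above (the helper function is my own illustration, not from any cited source):

```python
# Rough defect-count estimate from the Carnegie Mellon figures quoted
# above: 5 to 15 bugs per 1,000 lines of code (KLOC).

def estimated_bugs(lines_of_code, bugs_per_kloc_low=5, bugs_per_kloc_high=15):
    """Return the (low, high) estimated defect count for a codebase."""
    kloc = lines_of_code / 1000
    return int(kloc * bugs_per_kloc_low), int(kloc * bugs_per_kloc_high)

# Windows 2008 is quoted at 40 to 60 million lines of code.
low, high = estimated_bugs(40_000_000)
print(f"At 40M LOC: {low:,} to {high:,} estimated bugs")

low, high = estimated_bugs(60_000_000)
print(f"At 60M LOC: {low:,} to {high:,} estimated bugs")
```

Even at the optimistic end, that is hundreds of thousands of latent defects in a single operating system, which puts the triage-by-cost-benefit behaviour described in this thread into context.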
Some scrutiny would appear warranted. The sheer scale of the challenge is undeniable, hence the rockstar salaries for some developers. Poor coding training and practice are issues that must be addressed, but the impression here is that significant functional and security bugs, with known potential impact, are being found and ignored. The risk is then handed over to the consumer and, to add insult to injury, they pay for the fixes indirectly through extremely costly support contracts.
Is this just a fact of IT life, or is it something unacceptable that tacit acceptance and secrecy has normalised?
What next? You tell me..
The battle between a quick buck and good secure software development (allowing for the fact NO-ONE can find & fix ALL bugs) is still fierce and the god of money has big guns.