Are Corporates complicit in User Security Breaches?

Posted on November 27, 2015


Organizations have for some time used conditions of employment as an end-user control in their GRC (Governance, Risk and Compliance) process to address cyber and data security. For many, this is a ‘get out of jail free’ card for disowning the actions of users who may click on the wrong email, inadvertently send documents to the wrong recipient, visit the wrong website or expose data through the loss of a device. These types of user controls have been the cause of more than a few disciplinary proceedings for unwitting employees, as well as outright dismissals. But how reasonable is it to place responsibility on end users in this way?

A company that suffered a burglary through a faulty lock, one it had been warned about by the vendor and whose flaw was public knowledge, could not reasonably blame the personnel who used that lock in good faith in the course of their duties. Yet that is often exactly what happens across corporate digital attack surfaces. Faulty locks abound: email clients and web browsers represent the two biggest open doors, and both are likely classed as critical business systems despite their role as the principal attack vectors for hackers and their proven track record as ‘broken locks’. But the problem extends beyond such obvious tools and goes to the heart of how attacks through email and browsers get a foothold on the underlying operating systems. Organisations are:

  1. NOT deploying vendor-supplied patches to their software, despite these being publicly distributed and often FREE. A plane is quite rightly grounded, and products like cars are similarly recalled, if any safety or security issue is identified.
  2. Persisting in using software and operating systems that are out of vendor support. To continue the flight analogy, would you fly in a plane that had been rated not fit to fly, even if it could still take off?

It would be one thing if companies exercising either of the above choices ensured those systems were prevented from connecting to a network, especially one reachable from the public internet. The stark reality is that they are connected, and even when clearly and publicly at risk, organisations do not suspend such use but allow their users to drive compromised systems. As for the argument that software systems are not analogous to planes or cars: think again. Software is the interface onto critical infrastructure as well as the gatekeeper to personal private data. Furthermore, software is in itself inherently flawed!
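The patch gap described above lends itself to automated auditing. A minimal sketch, assuming a hypothetical software inventory and an equally hypothetical vendor security baseline (neither reflects a real product feed), that flags installs running below the latest security patch level:

```python
# Sketch: flag software running below the vendor's latest security patch level.
# The inventory and baseline data here are illustrative assumptions, not a real feed.

def parse_version(v):
    """Turn a dotted version string like '5.2.4' into (5, 2, 4) for ordered comparison."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical installed-software inventory: name -> installed version
inventory = {
    "mail-client": "5.2.0",
    "web-browser": "44.0.1",
}

# Hypothetical vendor security baseline: name -> minimum patched version
security_baseline = {
    "mail-client": "5.2.4",
    "web-browser": "44.0.1",
}

def unpatched(inventory, baseline):
    """Return the packages whose installed version is below the security baseline."""
    return [
        name
        for name, installed in inventory.items()
        if name in baseline
        and parse_version(installed) < parse_version(baseline[name])
    ]

print(unpatched(inventory, security_baseline))  # -> ['mail-client']
```

In practice the inventory would come from an asset-management tool and the baseline from vendor advisories; the point is that the check itself is trivial to run continuously.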

Vendors have a logical duty of care to their brand, their shareholders and, most of all, their end users to supply security patches and security-critical upgrades FREE for currently supported versions of their products. This should be an industry self-regulated best practice, but sadly it is not. Organizations are often blackmailed into expensive support and maintenance contracts to fund the software vendor’s own failures in their code. I am not talking about new features or functionality, just security flaws; after all, every 1,000 lines of software code contains numerous bugs, as noted in Steve McConnell’s ‘Code Complete: A Practical Handbook of Software Construction, Second Edition’. Cloud Computing has started to erode this practice, as Software as a Service (SaaS) places the maintenance burden firmly on the vendor.
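To give the defect-density point some scale, here is a back-of-the-envelope sketch. The densities and codebase size are illustrative assumptions on my part, not figures from any vendor or from the book:

```python
# Back-of-the-envelope: latent defects implied by an assumed defect density.
# Both the densities and the codebase size below are illustrative assumptions.

def latent_defects(lines_of_code, defects_per_kloc):
    """Estimate residual defects at a given defects-per-1,000-lines rate."""
    return int(lines_of_code / 1000 * defects_per_kloc)

# A mainstream operating system easily runs to tens of millions of lines.
loc = 40_000_000
for density in (0.5, 1, 5):  # assumed residual defects per 1,000 lines
    print(f"{density} defects/KLOC -> ~{latent_defects(loc, density):,} latent defects")
```

Even at a fraction of a defect per thousand lines, a large codebase carries tens of thousands of latent flaws, some proportion of which will be exploitable.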

Adoption of SaaS-based Cloud Computing solutions is a viable way to mitigate this exposure. That is, IF you do your vendor checks and don’t end up jumping from the frying pan into the fire. Not all SaaS solutions are as robust and secure as they claim to be, and Cloud Computing is no silver bullet, but correctly vetted it can inject a welcome step change in an organisation’s IT security.