Digital Dementia born of Artificial Intelligence

Posted on January 10, 2017


Once more unto the breach, hoisted on our very human petard!

I will not tire of saying it until the IT community takes responsibility and accountability for the mess we are manifesting in our cavalier rush to market with compromised code. The facts are stark. Every 1,000 lines of code contains between 15 and 50 bugs that could constitute vulnerabilities (quoting from the book ‘Code Complete: A Practical Handbook of Software Construction, Second Edition’). A little-known statistic that crystallises this reality, and will arguably be one of the most impactful in the future of our digital lives, is that the world will need to secure 111 billion lines of new software code in 2017 (according to the 2017 Application Security Report from Cybersecurity Ventures, scheduled for release next month).

Let’s put that in context. 111 billion lines of code, at even the low end of the bug rate per 1,000 lines, yields over 1 billion bugs. If we take the optimistic view that only 10% are true vulnerabilities, that still leaves well over 100 million exploitable vulnerabilities through which data can be compromised. So, with 90% of reported security incidents resulting from exploits against defects in the design or code of software (as researched by the Software Engineering Institute (SEI)), you would have thought getting its house in order would be the IT industry’s number one priority.
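The back-of-envelope arithmetic above can be sketched in a few lines (the rates are the figures quoted in this post; the variable names and the exact 10% cut are illustrative assumptions):

```python
# Back-of-envelope estimate, assuming the low end (15 bugs per 1,000
# lines) of the range quoted from Code Complete, and the optimistic
# assumption that only 10% of bugs are true vulnerabilities.
lines_of_new_code = 111_000_000_000   # projected new code in 2017
bugs_per_kloc_low = 15                # low end of the 15-50 range
true_vuln_rate = 0.10                 # optimistic 10% assumption

bugs = lines_of_new_code // 1000 * bugs_per_kloc_low
vulns = int(bugs * true_vuln_rate)

print(f"Estimated bugs: {bugs:,}")                 # 1,665,000,000
print(f"Exploitable vulnerabilities: {vulns:,}")   # 166,500,000
```

Even at the bottom of the quoted range, the round figure of 100 million exploitable vulnerabilities used above turns out to be conservative.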

Unfortunately, the outlook is bleak. As layer upon layer of infrastructure and applications ensures a future of unintended consequences, it is worth reflecting on the IoT mayhem of the Mirai DDoS attacks, which took the IoT security threat to a whole new level. Now consider the new arena of Artificial Intelligence (AI), which promises a whole new spectrum of mayhem. As we struggle with today’s ‘confinement problem’ in the form of malware and advanced persistent threats that gorge themselves on the Cyber Space of buggy code, tomorrow’s confinement problem promises to make today’s Cyber Security challenges look like a veritable playground in comparison. Too busy cleaning up our current mess, we are neglecting to consider how we will contain AI systems from overreaching their intended scope of influence.

When I refer to AI I do not mean the hyped-up, nursery-aged equivalents hawked around the market by vendors today, which have their own fallout potential, as I noted in an October blog, ‘Bots & Robotic Software, the next threat surface’. I am referring to the future systems into which today’s nursery exercises in AI hope to evolve. For now, with our current level of coding discipline, the frightening realisation is that we could be coding up AI with the equivalent of Digital Dementia! Far from intelligent AI, we could have the digital equivalent of mad dictators.

To understand what this implies requires no more than a basic grasp of what the not-too-distant future of AI could mean for humanity.

AI represents an exponential acceleration of evolution that will compound in growth as the intelligent systems themselves start to define subsequent generations. We can just about get our heads around how the Internet impacts and is transforming our lives, oblivious though we are to much of its networked and hyper-connected effects. These effects touch all connected things, accelerating the potential for unforeseen outcomes as well as the detrimental flip side of the same cause: from the very physical impact on our world in the form of the Arab Spring, which has challenged large parts of North Africa and the Middle East, to the digital phenomenon of Facebook’s distortion of social discourse. The fickle nature of all this echoes in the historic rise and fall of companies from billions to buttons, loyalty shifting like flocking birds hopping from one power line to the next:

  • Myspace – was once the largest social networking site in the world, and in June 2006 surpassed Google as the most visited website in the United States. As of October 2016, Myspace was ranked 2,154 by total web traffic.
  • InfoSpace – in March 2000 its stock reached a price of $1,305 per share, but by April 2001 the price had crashed to $22 a share.
  • – spent $188 million in just six months in an attempt to create a global online fashion store that went bankrupt in May 2000.
  • GeoCities – purchased by Yahoo! for $3.57 billion in January 1999; Yahoo! closed GeoCities on October 26, 2009.
  • AltaVista – became one of the most used early search engines, but lost out to Google and was eventually absorbed by Yahoo!.

We are so unprepared for the world of AI that we are likely to see significant fallout completely off the current radar of consideration. Until we can write code effectively, we will be building our AI with the equivalent of degenerative diseases at the digital DNA level. That is, even if we can get right the ethical questions of how to program intelligent systems.

Reliable AI will only materialise when code becomes clean and Cyber Security pervasive. Otherwise we face not a lapdog of compliance but an unruly and unpredictable game of high risk with entities with whom we share no values. Imagine Digital Dementia. Humans cannot even program themselves reliably, so how can we expect such confidence in AI? The greatest risk to human health continues to be human behaviour; just look at the obesity epidemic, among other addictive substance abuses that burden society.

There is one dimension that has not been considered in this article, and that is ‘Software Robots’, i.e. Digital Labour. This is what is going to have the greatest impact, not the physical ‘R2-D2 Star Wars’ class of units. Digital Labour is doing to repetitive, task-based jobs what the word processor did to typing pools. As these get smarter with AI advances, they will encroach into more complex roles and decision-making processes.

The challenge we face is not the impact on jobs but the risk these systems introduce into our digitally transformed lives. AI represents an exponential acceleration of evolution that will compound as the intelligent systems themselves start to define subsequent generations beyond any human capacity. What hope have we of containing AI Digital Labour from running riot beyond the constraints within which it is meant to operate, when the current generation of IT and Cyber Security professionals still cannot contain the basic malware and malfeasants breaking into our systems today?

Just as we have seen the failures of the rush to get IoT systems to market fall out across the Internet, we need to learn some lessons before rushing blindly forward: to pause and reflect that, as humans, we need to start putting controls in place so we do not inadvertently let the AI out of the bag before we know how to put it back in!