FEATURE

Increasingly, organisations will want to protect IoT-type devices where, again, it may not be practical or possible to install a software agent. Think of a custom medical device, a network-attached machine tool, facilities technologies such as heating, ventilation and air conditioning (HVAC) controllers, or even an Internet-aware television. Vulnerable? Potentially, yes. Innovative? Potentially, yes. Desirable? Well, it would be nice to find out, rather than hear a flat ‘No, we can’t allow that on the network’ from the security team. “The more we help IT organisations to focus on their business and developing innovation for their businesses instead of trying to fight cybercrime or cyberthreats, the better,” says Ziften’s Roark. Indeed, the default answer to
the question ‘May I connect this thing to the network?’ should become ‘yes’ – because organisations can now deploy technology that will manage that risk. Wedge’s Weiner explains: “If you can get really confident that you’re going to block the vast majority of threats, then suddenly you’re free to start using the Internet of Things more pervasively and enable BYOD policies where people are bringing technology from home. Even within the enterprise, when an IT group wants to try something – and even if new, even artificial intelligence-based, endpoint protection might not be available for that device – we can still feel comfortable bringing it in if we have a reliable level of network security.” Forget ‘Lock it down! Button it up tight!’. Let’s open up and allow innovation, productivity and competitiveness to flourish in a safe, controlled environment.
About the author

Alan Zeichick is president and principal analyst at Camden Associates, a bespoke analyst firm in Phoenix, Arizona that focuses on enterprise IT. His background includes consulting to the IT industry, as well as creating and hosting many conferences and trade shows since the 1980s. As an editor and publisher, Zeichick served as editor-in-chief of LAN Magazine and was the founding editor of Network Magazine, which later merged with CMP’s Network Computing. He also founded BZ Media’s SD Times, a publication for software development managers. Today, Zeichick is a regular contributor to Network World and other online publications.
Artificial intelligence – the next frontier in IT security?

Rohit Talwar and April Koury, Fast Future
Network Security, April 2017

Security has always been an arms race between attacker and defender. He starts a war with a stick, you get a spear; he counters with a musket, you upgrade to a cannon; he develops a tank, you split the atom. While the consequences of organisational cyber-security breaches may not be as earth-shatteringly dramatic today, the arms race of centuries past continues into the digital sphere. The next challenge for companies with an eye on the future is to recognise that artificial intelligence (AI) is already entering the scene – with tools such as PatternEx focused on spotting cyber-attacks and Feedzai on fraud detection across the e-commerce value chain. The technology is developing so rapidly that it is too early to say whether its impact will be revolutionary or just the next evolution.

Artificial intelligence

Some AI evangelists argue that this new technological force could render all others seemingly irrelevant, given the scale of change, risk and opportunity it could bring about in IT security. This new dark art, offering seemingly magical technological wizardry, does indeed have the potential to change our world and – depending on who you choose to believe – either make life a little better, lead to total societal transformation or end humanity itself. Artificial intelligence has the potential to disrupt all industry sectors. It is a field of computer science focused on creating intelligent software tools that replicate critical human mental faculties: the range of applications includes speech recognition, language translation, visual perception, learning, reasoning, inference, strategising, planning, decision-making and intuition. As a result of a new generation of disruptive technologies and AI, we are entering a fourth industrial revolution.

The three previous revolutions gave us steam-based mechanisation, electrification and mass production, then electronics, information technology and automation. This new fourth era, with its smart machines, is fuelled by exponential improvement and the convergence of multiple scientific and technological fields such as big data, AI, the Internet of Things (IoT), super-computing hardware, hyperconnectivity, cloud computing, digital currencies, blockchain distributed ledger systems and mobile computing. The medium to long-term outcomes of these converging exponential technologies for individuals, society, business, government and IT security are far from clear.

The pace of AI development is accelerating and is astounding even those in the sector. In March 2016, Google DeepMind’s AlphaGo system beat the world Go champion, demonstrating the speed of development taking place in machine learning – a core AI technology. The board game Go has more possible positions than there are atoms in the observable universe – you cannot teach the system every permutation explicitly. Instead, AlphaGo was equipped with machine learning algorithms that enabled it to infer effective strategies from observing thousands of games. This same technology can now be used in IT security, in applications ranging from external threat detection and prevention to spotting the precursors of potentially illegal behaviour among employees.
Current state of security

In 2015 in the US, the Identity Theft Resource Centre noted that almost 180 million personal records were exposed in data breaches, and a PwC survey report highlighted that 79% of responding US organisations had experienced at least one security incident.1,2 Industry research indicates that while hackers exploit vulnerabilities within minutes of their becoming known, companies take roughly 146 days to fix critical vulnerabilities. With the average cost of a data breach estimated at $4m, there is growing concern over how companies can keep up with the constant onslaught of ever stealthier, faster and more malicious attacks, today and in the future.

As it stands, many firms focus more on reacting to security breaches than on preventing them, and the current approach to network security is often aimed more at standards compliance than at detecting new and evolving threats. The result is an unwinnable game of whack-a-mole that could overwhelm companies in the future unless they are willing to adopt and adapt the mindset, technology and techniques used by the hackers. And there is very little doubt that hackers are – or soon will be – developing AI tools to increase the frequency, scale, breadth and sophistication of their attacks.

Organisations in this digital age create enormous amounts of data, both internally through their own processes and externally via customers, suppliers and partners. No one human is capable of analysing all that data to monitor for potential security breaches – our systems have simply become too widespread, data-laden and unwieldy. However, when combined with big data management tools, AI is becoming ever more effective at crunching vast amounts of data and picking out patterns and anomalies. In fact, with most AI systems, the more information they are fed, the smarter they become.

Figure: Factors that led to security incidents. Source: PwC 2015 Information Security Breaches Survey.
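As a concrete illustration of the pattern-and-anomaly spotting described above, here is a minimal sketch – not taken from any of the products the article mentions – that flags outliers in a stream of event counts using a median-based test. The data and threshold are purely illustrative.

```python
from statistics import median

def anomalies(values, threshold=3.5):
    """Return indices whose modified z-score exceeds the threshold.

    Uses the median and median absolute deviation (MAD), which stay
    stable even when a large outlier is present in the data.
    """
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# Hourly login counts for one account; hour 7 contains a suspicious burst.
counts = [12, 14, 11, 13, 12, 15, 13, 240, 12, 14]
print(anomalies(counts))  # -> [7]
```

A median-based score is used here rather than a plain mean-and-standard-deviation z-score because a single large outlier inflates the standard deviation enough to mask itself – exactly the kind of detail a learning system has to get right at scale.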
Figure: Number of breaches in 2015, by sector. Source: Identity Theft Resource Centre.
Future potential

One of the biggest potential security benefits of AI lies in detecting internal threats. Imagine an AI system that, day in and day out, watches the comings and goings of all employees within corporate headquarters via biometrics and login information. It knows, for example, that the CFO normally logs out of the cloud each day by 12 noon and heads to the company gym, where she spends an average of 45 minutes. One day it spots an anomaly – the CFO has logged into the cloud at 12:20pm. The AI is intelligent enough to compare her location with this unexpected login: according to its data, the CFO’s face was last scanned entering the gym and has not been seen leaving it, yet the cloud login originated from her office. The AI recognises the anomaly, correlates the discrepancy between the login and the CFO’s location, shuts down cloud access to the CFO’s account and begins defensive measures against a potential cyber-attack. The system also alerts the CFO and escalates this high-priority problem to the human cyber-security team within seconds. Important company data and financial records are safe thanks to AI security. Imagine how its capabilities will grow as this same AI system continues to learn from and predict the behaviour of hundreds or thousands of employees across the organisation – helping it monitor for and predict similar security breaches.
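The correlation step in this scenario boils down to a simple rule: a login whose origin contradicts the employee’s last known physical location is treated as hostile. The sketch below is hypothetical – the event fields and lock-out action stand in for whatever telemetry and controls a real deployment would use, and a production system would learn such rules rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class Account:
    owner: str
    last_seen: str      # where the owner's face/badge was last scanned
    locked: bool = False

def check_login(account, login_origin):
    """Lock the account when a login originates somewhere the owner
    cannot plausibly be, then hand the incident to human responders."""
    if login_origin != account.last_seen:
        account.locked = True
        return (f"locked: {account.owner} last seen in {account.last_seen}, "
                f"but login came from {login_origin}")
    return "ok"

cfo = Account(owner="CFO", last_seen="gym")
print(check_login(cfo, "office"))  # mismatch -> account locked
```

The interesting part of the article’s vision is not this rule itself but that the system would derive it – and thousands like it – from observed behaviour, rather than having an analyst write each one.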
Beyond employee behaviour, this AI security application is also watching the company’s internal systems and learning how they interact. It discovers that when customer information is added to the company’s database, the information is automatically picked up by the accounting software and an invoice is generated within an average of 7.5 seconds. Any deviation of more than 0.25 seconds from this norm triggers the AI to investigate every link in the process and tease out the cause. In this case, based on what it discovers (an inconsequential lag in the system), the AI security properly prioritises the incident as a non-threatening low risk, but it will continue to monitor for similar lags and alert system maintenance to the issue, just in case.
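The timing check in this example amounts to comparing each observation against a learned baseline with a fixed tolerance. A minimal sketch, reusing the 7.5-second average and 0.25-second tolerance from the scenario – both of which a real system would learn from process history rather than hard-code:

```python
def classify_latency(history, observed, tolerance=0.25):
    """Flag an invoice-generation time that strays from the baseline.

    A deviation is only grounds for investigation, not proof of attack:
    the system still has to trace the process and decide the risk level.
    """
    baseline = sum(history) / len(history)   # learned mean, ~7.5s here
    if abs(observed - baseline) <= tolerance:
        return "normal"
    return "investigate"

history = [7.4, 7.5, 7.6, 7.5, 7.5]          # past invoice timings, seconds
print(classify_latency(history, 7.6))        # -> normal
print(classify_latency(history, 8.2))        # -> investigate
```

Note that “investigate” is deliberately not “attack”: as in the article’s scenario, the deviation only tells the system where to look, and the outcome may well be a benign lag that is merely logged and watched.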
Now let’s take this scenario a step further – imagine that not only has this AI system learned the behaviour of hundreds of employees and of the internal company networks, but it is also capable of continually learning from external cyber-attacks. The more cyber-attacks are thrown at the AI, the more data it can parse and – like a thinking, rational soldier who has manned the battlements through numerous campaigns – the better educated and prepared it becomes for future attacks. It will recognise totally new, hostile code based on past experience and previous exposure to related patterns of attack behaviour. It will build defences as it works to unravel the new hostile code; and as the hostile code attempts to adapt to those defences, the AI security system will continually develop new methods to counter and destroy the invader.
This is the potential AI security system of the near future – fully integrated inside and out, non-invasive to daily business, always on alert and ready to defend. It will be the ultimate digital sentry – hopefully learning and adapting as quickly as the attackers.
Organisations’ approach

Just as fighting with sticks eventually escalated to nuclear weapons, so too will the AI battle between organisations and hackers escalate. Continual one-upmanship will become the norm in AI security, perhaps to the point where even developers will be unable to decipher the exact workings of their constantly learning and evolving security algorithms. As complex and expensive as this all sounds, will companies in the future, especially smaller organisations, be able to survive without AI?
As the stakes become higher and failures loom larger, ever-evolving AI threats may encourage far more collaboration across multiple companies. Smaller organisations could band together under one AI security system, dispersing the cost and maintenance across multiple payers, while larger players with the financial and technological muscle to own their own AI security may actually exchange critical information on cyber-attacks – or rather, their AIs could exchange information on cyber-attacks and learn from each other.
Alternatively, companies could become so overwhelmed that they simply opt for simpler, technologically cheaper ‘brute force’ non-AI solutions to counter increasingly complex AI hacks. The simple, or dumb, solution may entail more checks and passwords across accounts and devices, or perhaps security-enhanced devices that are changed every two weeks. While adding five layers of complex passwords to every login or continuously rotating through smartphones could protect company security, the increased overhead, employee frustration and time wasted on cumbersome security measures would be far from ideal and could harm the firm’s reputation – which might make it more susceptible to attack. While an AI system will quietly monitor security and enable employees to focus on their work, the simple non-AI solution will place an unnecessary security burden on employees – they will be responsible for keeping up with those five complex passwords and changing devices on a biweekly basis. Whereas the AI system is maintained by a few cyber-security experts, the entire simple security system is in the hands of every employee, vastly multiplying the chances of a security breach. In the future, this simple non-AI solution might become a defensive strategy of mere survival rather than the adaptive, offensive posture of a leading, thriving business.
The role of humans

Of course, at this point there’s a natural question to ask: if AI is quicker, smarter and continually adapting to do its job better, why even bother with human cyber-security? Today, AI security must still learn from humans and although it may one day reach the point where it no longer requires expert involvement, that day remains at least a few years down the road. Furthermore, depending on how valuable we deem human oversight and intuition in security to be, that day may never come to pass.

AI security systems currently need humans to write their starter algorithms and provide the necessary data, training and feedback to guide their learning. Humans are an essential part of deploying AI and, as AI security evolves beyond this nascent stage, the role of humans in it will evolve as well. As organisations increasingly digitise processes, amassing mind-boggling amounts of sensitive data, new importance will be placed on the role of the human architects and minders of AI security systems. Never has so much data been so easily accessible to attack, and even small attacks gathering seemingly innocuous data could add up to catastrophic security breaches. Developers of AI security will become akin to nuclear weapons inspectors in importance – highly trustworthy individuals who have undergone extensive background checks and intensive training, vetting and accreditation. They will not only build AI security, but also provide oversight and intuitive guidance in the training process and be an integral line of cyber-security defence.

AI security will go far beyond human capabilities, freeing organisations and cyber-security experts from the impossible task of constant vigilance and allowing them to prevent future attacks without interrupting daily workflow. Tomorrow’s AI security system will learn, self-improve and run discreetly behind the scenes – intelligently monitoring, prioritising and destroying threats: ever-evolving into the next finely honed weapon in the cyber-security armoury.
About the authors

Rohit Talwar is a global futurist, keynote speaker and the CEO of Fast Future Publishing. He works with clients around the world to help them understand, anticipate and respond to the forces of change reshaping business and the global economy. He has a particular interest in artificial intelligence and is the editor and contributing author of a recently published book, The Future of Business, editor of Technology vs Humanity and co-editor of a forthcoming book, The Future of AI in Business.

April Koury is a futurist, foresight researcher and graduate in foresight from the University of Houston. Her research has covered a wide range of topics, from human enhancement and artificial intelligence to the futures of food, water and distribution. She works at Fast Future Publishing as an editor, book designer and project and operations manager. She is co-editor of the forthcoming book The Future of AI in Business.

Fast Future (www.fastfuturepublishing.com) publishes books from future thinkers around the world to explore how developments such as AI and robotics could transform existing industries, create new trillion-dollar sectors and reinvent society, government and business over the next decade.
References

1. ‘Identity Theft Resource Centre Breach Report hits near record high in 2015’. Identity Theft Resource Centre, 25 Jan 2016. Accessed Apr 2017. www.idtheftcentre.org/ITRCSurveys-Studies/2015databreaches.html.
2. ‘2015 Information security breaches survey’. PwC. Accessed Apr 2017. www.pwc.co.uk/services/audit-assurance/insights/2015-informationsecurity-breaches-survey.html.