RISK CONTROL 5"VBS.Timofonica",
Symantec Security Response, 7 June 2000. http://www.sarc.com/avcenter/venc/data/v bs.timofonica.html. 6"Palm.Liberty.A", Symantec Security Response, 28 August 2000. http://securityresponse.symantec.com/avcenter/venc/da ta/palm.liberty.a.html. 7"Palm.Phage.Dropper", Symantec Security Response, 22 September 2000. http://securityresponse.symantec.com/avce nter/venc/data/palm.phage.dropper.html. 8"DIY hack for Orange SPV smartphone revealed", The inquirer, 17 January 2003. 9"SymbOS.Cabir", Symantec Security Response, 14 June 2004. http://www.symantec.com/avcenter/venc/d ata/symbos.cabir.html. 10See 'Security aspects', 3GPP Specifications 33 series, 3rd Generation Partnership Project (3GPP). http://www.3gpp.org/ftp/Specs/html-
info/33-series.htm 11"SymbOS.Commwarrior.A", Symantec Security Response, 7 March 2005. http://securityresponse.symantec.com/avcen ter/venc/data/symbos.commwarrior.a.html. 12"Backdoor.Brador.A", Symantec Security Response, 5 August 2004. http://www.symantec.com/avcenter/venc/d ata/backdoor.brador.a.html. 13"SymbOS.Dampig.A", Symantec Security Response, 8 March 2005. http://securityresponse.symantec.com/avce nter/venc/data/symbos.dampig.a.html. 14"WinCE.Duts.A", Symantec Security Response, 17 July 2004. http://www.symantec.com/avcenter/venc/d ata/wince.duts.a.html. 15"SymbOS.Fontal.A", Symantec Security Response, 6 April 2005. http://securityresponse.symantec.com/ avcenter/venc/data/symbos.fontal.a.html.
Risk control: a technical view

Piers Wilson, Senior Consultant, Insight Consulting

The relationship between the technical aspects of IT security and 'risk assessment' has always been somewhat variable and, in many cases, non-existent. The reasons for this are diverse, and fault can be attributed to both sides of any discussion. In the author's experience, there are three main reasons why the two sides remain at loggerheads: risk assessments often state the obvious; there is a lack of business awareness within the technical disciplines; and existing controls are accounted for, or discounted, inconsistently.
'Stating the obvious' in risk assessments

Examples include recommending passwords to protect user accounts (quite valid), the use of firewalls when connecting systems to the Internet (quite valid) or the use of tokens to secure remote access (again, quite valid). The problem is that, from a technical standpoint, this type of recommendation isn't often very helpful and would probably have been made anyway; going through a specific exercise to derive such recommendations can therefore seem like wasted effort.

A lack of business awareness within the technical disciplines

For example, if a finance system goes down, loses its network link or loses data, what is the impact? From the perspective of the IT staff, they get angry phone calls, are asked to work overtime, get called out at 2am, and so on. But these are all manageable (and personal) impacts. For the Finance department, however, it might mean that they cannot pay suppliers or raise invoices. The organization, in turn, could incur fines or interest payments, or be unable to meet contractual obligations,
16"SymbOS.Lasco.A",
Symantec Security Response, 10 January 2005. http://securityresponse.symantec.com/avcenter/venc/da ta/symbos.lasco.a.html. 17"SymbOS.Locknut", Symantec Security Response, 2 February 2005. http://securityresponse.symantec.com/avcenter/venc/da ta/symbos.locknut.html. 18"SymbOS.Skulls", Symantec Security Response, 19 November 2004. http://securityresponse.symantec.com/avce nter/venc/data/symbos.skulls.html. 19Ferrie, P. and Perriot, F. 2005. "Paradise Lost", Virus Bulletin, April 2005, pp4-6. 20Furnell, S. and Ward, J. 2004. "Malware Comes of Age: The arrival of the true computer parasite", Network Security, October 2004, pp11-15. 21"Survey Reveals Stolen PDAs Provide Open Door to Corporate Networks", Press
etc. These impacts are less manageable and far more serious; but they are not as immediately obvious to the IT professional who is busy replacing the hard disks…
Accounting for (and discounting) existing controls

If you ask a technician what the risk (vulnerability/threat/likelihood) of a virus is, taking into consideration the gateway, desktop and server protection he already has in place, he might reply that it is 'low'. In a risk assessment, however, this answer might mean that anti-virus measures would not be specifically recommended (when clearly they are necessary, and are in place in this example). Also, and more worryingly, it could mean that other controls which can reduce virus impacts, such as good data access permissions, staff awareness or systems logging, may also be omitted (the threat is 'low', after all) and subsequently overlooked. Risk assessment often needs to ignore the controls already in place at the outset. This is necessary even if it means you end up with recommended controls you already have and can immediately tick off (but then, see the first problem above).
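To make the point concrete, here is a minimal sketch (in Python, with entirely invented scales and figures) of scoring the same virus threat twice: once 'inherently', ignoring existing controls, and once 'residually', crediting the gateway, desktop and permission controls already in place. The 1-5 scales, weightings and control values are illustrative assumptions, not part of any particular methodology.

```python
# Minimal risk-scoring sketch: inherent vs residual risk for one threat.
# Scales (1-5) and the control reduction values are purely illustrative.

def risk_score(likelihood: int, impact: int) -> int:
    """Simple qualitative risk score: likelihood x impact, each rated 1-5."""
    return likelihood * impact

# Hypothetical virus threat against a file server.
inherent = risk_score(likelihood=4, impact=4)      # no controls assumed

# Existing controls and the reductions we credit them with (illustrative).
controls = {
    "gateway anti-virus": {"likelihood": -2, "impact": 0},
    "desktop anti-virus": {"likelihood": -1, "impact": 0},
    "access permissions": {"likelihood": 0,  "impact": -2},
}

residual_likelihood = max(1, 4 + sum(c["likelihood"] for c in controls.values()))
residual_impact = max(1, 4 + sum(c["impact"] for c in controls.values()))
residual = risk_score(residual_likelihood, residual_impact)

print(f"Inherent risk: {inherent}, residual risk: {residual}")
# Assessing only the residual figure ('low') hides the fact that the
# anti-virus and permission controls are exactly what keeps it low.
```

The point is simply that the existing controls appear explicitly in the residual calculation, so they can be confirmed and kept, rather than silently assumed and then overlooked.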
Building a stronger relationship between risk and technology

The end result of this has been that, in the past, Risk Assessment has tended to be foisted on a technical resource group that is unwilling or unable to get the most out of the process. This situation is changing, though - and needs to - as the corporate world at the highest levels takes more notice of Risk Assessment, Risk Management and related disciplines. Recent initiatives and regulations, including Turnbull, Sarbanes-Oxley and Basel II, are driving this, particularly for the financial sector and for international or US-based organizations. It is worth noting, however, that the increasing emphasis on Corporate Governance, of which Risk Management is a part, means that the implications should be considered by all organizations. The result is that in environments with highly skilled (and in many cases highly business-aware) technical IT and IT security teams, the need for risk assessment is increasing rather than decreasing. A high degree of reliance on the expertise of an individual or an isolated team, with no process of governance around it, may make it hard to demonstrate that appropriate decisions have been taken (even if in reality they have).

IT Security governance

Governance is a word that is increasingly used to describe how organizations manage themselves. These days, it includes having an auditable, transparent and justifiable decision-making process in place, aiming to prevent the kind of financial and corporate self-destruction we have seen in recent years. The same principles, when applied to Information Security Management, can pay real dividends. Key to all of this is the adoption of a 'risk-based approach', although the level of formality adopted is down to the particular organization. There are, however, some answers which we should avoid giving when asked "Why do we need a security control?"

Answers to avoid:
· "Because everyone else has one."
· "Because there was an article on them in a magazine."
· "Because Tom thought we ought to" (whoever Tom is).
· "Better safe than sorry."

A better answer:
· "Because it will defend X systems, which hold X data and support X business process, from a specific type of threat."
Let's consider now some of the practical implications of all this.
Baseline controls/standards

These are about defining a starting point, or an agreed minimum set of rules, that should apply to all systems of a particular type. Good examples are build/configuration documents for the operating system platforms used, the standards of authentication that must be employed by different types of user (local, remote access, administrator, etc.) or a minimum degree of anti-virus protection. In other words, they form an interpretation of the organization's policies and information management objectives, but at a level that actually means something to the underlying technology. In reality, this is where the formalised (or semi-formalised) approach of risk assessment and the use of 'conventional wisdom' meet. There are some things that, irrespective of the level of risk, are ubiquitously good ideas. As an example, the standard recommendation of disabling unnecessary network services and changing default passwords typically applies to the most lowly workstation as well as to the most mission-critical server.

Define baseline controls and standard builds for platforms, systems and applications. These baselines may be the common ground of all risk treatment processes, or you may develop specific baseline sets for platforms in different roles, based on the level of risk (e.g. Web servers).
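As an illustration of how a baseline can be made mechanically checkable, the sketch below is a hypothetical Python example: the role names, service names and data structure are all assumptions for illustration. It compares the services found running on a host against the set permitted by that role's standard build.

```python
# Hypothetical baseline check: compare the services running on a host
# against the set allowed by the standard build for its role.

BASELINES = {
    # role: services permitted by the standard build (illustrative names)
    "workstation": {"dhcp-client", "dns-client", "av-agent"},
    "web-server":  {"httpd", "sshd", "av-agent", "syslog"},
}

def check_host(role: str, running_services: set) -> list:
    """Return findings for any services found outside the role's baseline."""
    allowed = BASELINES[role]
    return [f"unexpected service: {svc}" for svc in sorted(running_services - allowed)]

# Example: a Web server found to be running a default-installed FTP service.
findings = check_host("web-server", {"httpd", "sshd", "av-agent", "syslog", "ftpd"})
for finding in findings:
    print(finding)          # -> unexpected service: ftpd
```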
Logging and monitoring

"Have you ever suffered from a security breach?" is a question often asked in discussions around IT security. The answer is often "No." I like to follow this up by asking how the organization came to that answer. Equally often, I am assured that they haven't noticed any; but of course the missing piece of this jigsaw is some mechanism to allow security events, incidents or breaches to be detected at all. Adequate collection and handling of log file information, and some process for monitoring it (in which I include IDS solutions that have visibility of systems logs, although they often do a lot more besides), is critically important for three reasons. The first two are fairly obvious:

1. In the absence of logs, and some process to monitor them, it is impossible even to detect a security breach except by its impact. Many security breaches (e.g. theft or copying of data) do not have an immediately visible or noticeable impact; this includes those that are easy to keep within normal system tolerances (e.g. some forms of low-level system misuse).

2. Even having detected an incident (through whatever means), without logs it is difficult, if not impossible, to diagnose and trace back through the actions of an attacker - the actions that help us find out what happened and when.

The third reason is slightly less obvious. It relates to the main theme of this article, the relationship between risk management and the technical environment:

3. If there is no record or knowledge of systems/security performance and the likelihood/frequency of security breaches, it is impossible to assess accurately:
· How likely a certain type of threat is.
· How often it typically occurs.
· What types of systems tend to be affected.
· What the common source of the threat is.
As such, the process of risk assessment with which we started out is inherently weakened. You have no really accurate information about the likelihood or extent of a vulnerability other than that it 'theoretically exists' and 'might one day happen'. You can rely on external statistical analysis (the FBI and NHTCU regularly publish surveys, for example) but, to an extent, you are building a house on foundations of sand. Having said all that, of course, and to borrow a phrase from the financial sector, "past performance is no indication of future performance". If you have never suffered from a particular type of threat or technical breach, that does not mean it won't happen for the first time tomorrow, or the next day; but with logging in place you at least have an auditable data set from which to make decisions about risk.
Ensure adequate logging is in place. This can be used not only to detect, diagnose and investigate breaches when they do occur, but also forms an essential statistical input to the risk assessment process.
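As a sketch of how logged events can feed the likelihood side of a risk assessment, the fragment below (hypothetical Python; the record format, categories, systems and dates are invented) counts security events by category over an observed period and converts them into a rough annual frequency.

```python
# Hypothetical example: derive rough likelihood figures from a year of
# security event records, as an input to risk assessment.

from collections import Counter
from datetime import date

# Each record: (date, category, affected system) - format is illustrative.
events = [
    (date(2004, 6, 3),  "malware",             "desktop-114"),
    (date(2004, 9, 17), "unauthorised access", "web-srv-02"),
    (date(2004, 11, 2), "malware",             "desktop-031"),
    (date(2005, 1, 20), "malware",             "file-srv-01"),
]

period_days = (date(2005, 5, 1) - date(2004, 5, 1)).days
counts = Counter(category for _, category, _ in events)

for category, count in counts.most_common():
    per_year = count * 365 / period_days
    print(f"{category}: {count} events observed (~{per_year:.1f} per year)")
```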
Cross-platform identity management, authentication and access control

In any large, heterogeneous IT environment, user management gets complicated very quickly. If you look beyond IT to telephone systems, building access control and personnel/payroll systems, there are even more dependencies. The problem arises because it is often hard to set up or modify an account or entry for a user across the many different
teams, systems and environments involved. For instance, if a user moves from Finance to Marketing, they have numerous elements of IT and physical access which must all be changed. This may involve having a phone line or number moved, or a different application installed. The scope for errors can be high.

This is also one of the occasions when IT security solutions can actually save money: the ability to enforce role-based granting of access, and then to handle user rights assignment in terms of the roles held by the user, is incredibly powerful. It can deliver real cost savings and simplify the interface between what the business is asking for and what the user management functions deliver.

The linkage between user management in the various platforms and environments may take a number of forms. The use of centralised directories (or meta-directories) is becoming more and more common, as is the approach of providing users with smart cards. The latter can be used for network logon, RAS/VPN access and physical building access control; it can lift the concept of 'single sign-on' out of IT and make it part of the physical environment too. If certificates are in use, these can contain information about the applications which staff can access, and their capabilities. This means that the user effectively carries around a trusted copy of their entry in the systems access matrix.

At the very least, having a single approach to user management can mean there is a single point of contact and administration for the business. This offers a much more integrated approach to user management, and one affording longer-term cost savings as the many separate provisioning functions are consolidated into one team.

How does this help us in our pursuit of sound risk management? It enables the creation of audit trails that can be relied upon (i.e. which user did what operation, at what time and where) across multiple systems. It also
allows us to anticipate the requirements of risk treatment for user authentication and access control (the 'principle of least privilege'). These are key aspects of sound governance and, as such, link Technology, Risk Management and the Corporate Governance agenda closely together.

Consider how to bring the various user management, permissioning and access control functions together. Investigate how technology (such as smart cards and directory platforms) can be deployed to simplify (or centralise) management, reduce costs and achieve higher levels of control, security and assurance.
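A minimal sketch of the role-based idea is shown below (hypothetical Python; the role names, permission strings and user identifier are invented for illustration). Access rights hang off roles rather than individuals, so a move from Finance to Marketing becomes a single role change rather than many separate permission edits.

```python
# Minimal role-based access control sketch (all names are illustrative).

ROLE_PERMISSIONS = {
    "finance-user":   {"finance-app:read", "finance-app:post-invoice"},
    "marketing-user": {"crm:read", "crm:update-campaign"},
    "all-staff":      {"email:use", "intranet:read"},
}

user_roles = {"jsmith": {"finance-user", "all-staff"}}

def permissions_for(user: str) -> set:
    """Union of the permissions granted by every role the user holds."""
    return set().union(*(ROLE_PERMISSIONS[r] for r in user_roles.get(user, set())))

def can(user: str, permission: str) -> bool:
    return permission in permissions_for(user)

# A move from Finance to Marketing is a single role change...
user_roles["jsmith"] = {"marketing-user", "all-staff"}
# ...and all the old Finance rights disappear with it.
assert not can("jsmith", "finance-app:post-invoice")
assert can("jsmith", "crm:read")
```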
Testing and validation

The process by which major systems, certainly external Web-based ones, are moved or promoted from development to production often includes a security assessment or penetration test. The level of detail this goes into varies but, when considering an organization as a whole, it is not really enough to conduct this at system level. The way systems (both internal and external) are managed plays a huge role in determining the levels of security achieved going forward, and this may not be evident within the short 'snapshot' of a single point-in-time test. Security must be assessed organization-wide: from the network up to the applications, from the workstations to the servers, from the external firewall interface to the back-end database server. If controls have been placed at all these levels (whether as part of a risk treatment/management process or otherwise) then there has to be a process to ensure that the controls are actually present, that they are effective and that they are being managed appropriately. This process is important - not just
for peace of mind - but as an input to the wider audit process. In addition, if risk assessment is used to justify security investment but does not prevent attacks (because of ineffective or poorly implemented controls), then the whole idea of having a business case for security is weakened - possibly forever. After all, why spend more on security if the security controls you do have deliver so little benefit?
Ensure the security controls and security management practices of your organization are regularly reviewed. This should include coverage of all aspects of your infrastructure (possibly by taking samples to reduce the effort involved), but should aim not only to find misconfigurations but also areas where security protection is 'thin', i.e. where single failures can lead to large exposures. It is also important to understand how systems and security controls are managed, including patching, change control, backups, user administration, etc.
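A review of this kind can be partly mechanised. The sketch below (hypothetical Python; the layers, control names and sample findings are invented) records, for a sampled system, whether the controls expected at each layer were confirmed as present and managed, and flags the gaps, which is where 'thin' protection shows up.

```python
# Hypothetical sketch: record expected controls per layer for a sampled
# system and flag where protection is missing or unmanaged.

EXPECTED_CONTROLS = {
    "network":     ["firewall rulebase reviewed", "IDS monitoring"],
    "server":      ["patching current", "anti-virus", "logs forwarded"],
    "application": ["authentication enforced", "input validation tested"],
}

# Findings for one sampled system (True = control confirmed present/managed).
sample = {
    "web-srv-02": {
        "network":     {"firewall rulebase reviewed": True, "IDS monitoring": True},
        "server":      {"patching current": False, "anti-virus": True, "logs forwarded": False},
        "application": {"authentication enforced": True, "input validation tested": False},
    }
}

for system, layers in sample.items():
    for layer, controls in layers.items():
        missing = [c for c in EXPECTED_CONTROLS[layer] if not controls.get(c, False)]
        if missing:
            print(f"{system} / {layer}: missing or unmanaged -> {', '.join(missing)}")
```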
Risk assurance

In essence, the aim of this article has been to show that looking at either Technology or Risk Management in isolation is flawed, and does not sit well with current approaches to Corporate Governance. The two disciplines are intrinsically interlinked, and the field of security tends to be the meeting point. Going forward, this means that a greater understanding will be needed on each side of the technical divide: the ability of technology to deliver risk controls, as well as the importance of having sound management practices behind them, must be recognised and invested in by businesses. From a technologist's perspective, it means that a greater understanding of risk management and the associated principles (such as the underlying business impacts and the value of data) is required. No longer
is it sufficient to scatter controls and configuration settings around purely on the basis of judgement, past practice or 'a hunch'. That is not to say experience and skill are not valuable, just that we need to start talking the same language. For example, if you are connecting a network (a Web server farm, say) to the Internet, then you should certainly deploy a firewall. No one would argue with this, and it doesn't really take a separate risk assessment to tell you. However, if you take a step back and consider the threat of 'unauthorized access', it is clear that you need application controls too, and that these also play an important part in maintaining security…
Case study

One security review undertaken by Insight Consulting included verification of the security of an organization's Windows platforms. Within this, we checked the patch levels and found that they were reasonably up to date. On closer examination, however, the "$NtUninstall…" directories all had creation dates within the two days prior to the audit, with none in the eight months or so before that. The finding was not that patches weren't in place, but that there was clear evidence of inadequate patch management processes, and a sign that things had rapidly been 'tidied up' purely for the audit.
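The check described in the case study can be reproduced with a few lines of script. The sketch below is a hypothetical Python illustration: it lists the hotfix uninstall directories left behind on a Windows system and prints their creation dates. The path and naming pattern vary between Windows versions and are assumptions here.

```python
# Hypothetical sketch: list hotfix uninstall directories ("$NtUninstall...")
# and their creation dates, to spot suspicious clusters of recent patching.

import glob
import os
from datetime import datetime

# Path and naming pattern are assumptions; they differ between Windows versions.
pattern = r"C:\WINDOWS\$NtUninstall*"

for directory in sorted(glob.glob(pattern)):
    created = datetime.fromtimestamp(os.path.getctime(directory))
    print(f"{created:%Y-%m-%d}  {os.path.basename(directory)}")

# If almost every creation date falls in the days before an audit, the patches
# may be present but the ongoing patch management process is questionable.
```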
The trick, of course, is that if you have a budget for firewalls but nothing for application security, you can rightly escalate the partial risk coverage that will result. By combining the arguments for different controls on a risk-based foundation, it becomes much harder (for management) to justify spending on one without spending anything on the other. You either accept the risk or you do not - you cannot really justify accepting it and doing nothing at the application layer, while deploying controls at the network level and placing sole reliance on them. This approach also gives us some 'auditability' in the design process: listing out what can actually go wrong in a structured (or semi-structured) way, and identifying the countermeasures our designs include. Where we can get an idea of business impact and costs, this can feed very easily into the resulting business case - potentially giving a stronger chance of winning some business
backing (there is nothing like throwing a few figures around to gain senior management's attention).
Parting shots...

A risk assessment might include threats like 'unauthorized access by an outsider', whereas a technologist might be worried that the public should not be able to access a database table directly. Ultimately, though, the aim of preventing a given threat is common - we have the same objectives. As such, we need to combine the knowledge and skills of the technical staff involved in the delivery of IT security with the corporate agenda and the requirement for sound governance and risk management. Hopefully I have been able to present some ideas as to what this means, how it can be achieved and, most importantly, what the benefits can be of getting it right.
About the author

Piers Wilson is a Senior Consultant with Insight Consulting - the security, compliance and continuity unit of Siemens Communications - and head of the company's Technical Risk Assurance team.