Reflecting on 20 SEC conferences

computers & security 25 (2006) 247–256

available at www.sciencedirect.com

journal homepage: www.elsevier.com/locate/cose

Reflecting on 20 SEC conferences Reinhardt A. Botha*, Tshepo G. Gaadingwe Centre for Information Security Studies, Nelson Mandela Metropolitan University, Port Elizabeth, South Africa

Article info

Article history: Received 3 April 2006; Revised 21 April 2006; Accepted 21 April 2006

Keywords: Computer security; Information security; Security history; SEC conference series; TC-11

Abstract

The ever-increasing use of information technology in business and everyday life led to a raised awareness of security issues. The last three decades spawned a large amount of research literature on security. The belief that the future can only be realized if the past is well understood has motivated an investigation into the history of information security. This report therefore focuses on analyzing the work reported in the past 20 SEC conferences, the flagship conference series of the IFIP Technical Committee 11 (TC-11). The study indicates that the focus of papers increased over time and that the output of papers became more technical. Certain topics, such as auditing and business continuity, have largely disappeared. The study confirmed an expected increase in the prominence of papers dealing with network related research. Surprisingly, information security management showed no upward trend; instead a slight decrease could be seen. In contrast, crypto-like topics showed strong growth. Finally the paper reflects on the significance of the observations made, specifically with respect to future research considerations by the TC-11 community. © 2006 Elsevier Ltd. All rights reserved.

1. Introduction

The 20th International Information Security Conference (SEC 2005) was hosted in Makuhari-Messe, Chiba, Japan from 30 May to 1 June 2005. Given the relative youth of the computing disciplines, this represented a milestone for this flagship conference series of IFIP Technical Committee 11 (TC-11). During 2004 the current chairman of TC-11, Mr Leon Strous, expressed the opinion that TC-11 should reflect on the past, to assist TC-11 in gaining a clearer view of the road ahead. The milestone of 20 conferences provided an excellent

opportunity to start this reflection. The 20 SEC conference proceedings, dating from 1983 to 2005,1 would thus be analysed. The authors set off on a project to analyze the papers published during the 20 years of SEC conferences to determine whether there were any significant trends in the research conducted. The first problem to overcome was that of sourcing the 20 conference proceedings. Eventually the proceedings were sourced from various private collections.2 The next problem was to code the papers in such a way that it would allow for sensible analysis. The next section, therefore, explains the design of the research process. Thereafter the paper

* Corresponding author. Centre for Information Security Studies, P.O. Box 77000, NMMU, Port Elizabeth, 6031, South Africa. E-mail address: [email protected] (R.A. Botha). URL: http://www.nmmu.ac.za/rbotha (R.A. Botha).
1 Unfortunately the conference did not take place in 1987, 1989 and 1999.
2 The authors wish to, in particular, thank Professors Louise Yngstrom, Rossouw von Solms, Basie von Solms and Mr Leon Strous for allowing us access to their personal collections of proceedings.
0167-4048/$ – see front matter © 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.cose.2006.04.002


presents selected results and, finally, reflects on the observed trends.

2.

The research process

Trends can only be visible if viewed in terms of a classification scheme. However arbitrary this statement might sound, it represented our single biggest challenge in the course of this research. Classification schemes aimed at Computer Science and Information Systems do exist. The ACM Computing Classification Scheme (ACM, 1998), for example, is aimed at the general field of computing. The ISRL categories and keywords (Barki et al., 1988, 1993) are aimed at Information Systems literature in general, while lists such as the IEEE Keyword list (IEEE, 1998) do this from an engineering perspective. Although these lists are comprehensive, their levels of granularity make them inapplicable to a specific sub-discipline such as security. Unfortunately, no classification scheme exists for security per se. The situation is even worse, as we have to agree with Donner (2003) that far too much security terminology today is vaguely defined. Previous studies (Glass et al., 2004; Ramesh et al., 2004; Vessey et al., 2005) tried to establish research trends in information systems, computing and software engineering. They used existing classification schemes and sampled papers from well reputed sources. Although our scope and focus were different from those studies, we learned from their experiences, especially as far as the process of classification is concerned. Taking this background into consideration, we abandoned any idea of using a formally established taxonomy for classification purposes. Instead, the route of developing a folk taxonomy had to be followed.

2.1. Developing a folk taxonomy

Since the taxonomy is based on a specific group's "language" it is vernacular in nature. This is referred to as a folk taxonomy (Wikipedia, 2006). No claim is therefore made as to the objectivity and universality of the taxonomy. We started the formulation of our folk taxonomy by studying 5 random years of SEC proceedings. For each article we attempted to identify a "topic" indicator. However, several papers contextualized more than one topic or described fairly general security issues. This difficulty in identifying a single topic for each paper led us to change our approach to using two "topic" categories: a high-level topic and a zoomed-in topic. This allowed more flexibility in describing the topic. The list of topics developed could be used interchangeably as high-level and zoomed-in topics. A paper could, for example, be categorized as being about "network security" (high-level), specifically zooming in on "intrusion detection". The paper "A methodology to detect Temporal Regularities in User Behavior for Anomaly Detection" (Seleznyov, 2001) serves as an example. In contrast, the paper "An Adaptive Intrusion Detection System Using Neural Networks" (Bonifácio et al., 1998) was classified as dealing at the high-level with "Intrusion Detection" and zooming in on "AI/Expert System" techniques.

To allow for a sensible level of granularity when doing the analysis, we eliminated as many synonyms as possible. Careful thought was given to the underlying principles forming the foundation of each concept. Wherever possible a term was eliminated if its underlying concept was already described. Another problem was homonyms: terms used to represent different concepts at completely different levels. An example is the term "policy". In general "policy" implies rules that are being used; however, from an analysis perspective this would again be too coarse-grained, as it would include both high-level organizational policies and low-level access control policies. In some respects our two-topic approach caters for this. Considering the (Governance, Policy) and (Access Control, Policy) tuples would indeed reflect the differences. However, to clarify the topics we catered for "management policies" and "technical policies".

The two-topic approach, however, was not a panacea. Instead it introduced new problems. On several occasions it was extremely difficult to distinguish between the high-level and zoomed-in topics. This was especially evident in papers where two seemingly unrelated topics are brought together. Also, for extremely focused papers it proved difficult to decide on a high-level topic whereas the zoomed-in topic was easily established. Similarly, it was difficult to categorize papers where a multitude of topics is considered (usually at a fairly high level). For coding purposes we, therefore, adopted two special topics: at the high-level the "general" topic was introduced, while "multiple" was used at the zoomed-in level. Papers would thus be coded as follows:

- (general, topic) if the paper considered the topic in a general context; for example, the paper entitled "Cryptographic requirements for secure data communications" (Carroll and Martin, 1986) was coded as (General, Cryptography).
- (topic, multiple) if the paper considered the topic from multiple subtopics; for example, the paper entitled "Information Security Issues in Mobile Computing" (Hardjono and Seberry, 1995) was classified as (Mobile, Multiple).
- (general, multiple) if the paper dealt with general security issues, often from a philosophical perspective. This was often used where papers try making sense of the security field. This paper, for example, would have been classified as such. Another example is the paper "Restating the foundation of Information Security" (Parker, 1992).

These "special" topics (general and multiple) required additional consideration while doing the analysis and reflection. Although we were developing a folk taxonomy, and therefore did not use an existing taxonomy as is, in the choice of topics we considered existing taxonomies, keywords indicated by authors and topic lists given in conference calls. The resultant folk taxonomy is given in Table 1. We believe that this folk taxonomy presents a useful starting point even for the more general security research community. However, at this point in time, the only certainty that we can express is that this taxonomy covers the topics addressed by the community of authors who presented papers at the first 20 SEC conferences.
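As a sketch, the two-topic coding described above can be represented as simple (high-level, zoomed-in) tuples. The Python fragment below is our own illustration, not the authors' tooling; the dictionary and function names are hypothetical, while the example classifications are taken from the text.

```python
# Illustrative sketch (not the authors' tooling) of the two-topic coding
# scheme: each paper maps to a (high_level, zoomed_in) tuple drawn from the
# folk taxonomy, with "General" and "Multiple" reserved as special topics.
codings = {
    "Cryptographic requirements for secure data communications":
        ("General", "Cryptography"),
    "Information Security Issues in Mobile Computing":
        ("Mobile", "Multiple"),
    "Restating the foundation of Information Security":
        ("General", "Multiple"),
    "An Adaptive Intrusion Detection System Using Neural Networks":
        ("Intrusion Detection", "AI/Expert Systems"),
}

def is_sense_making(coding):
    """(General, Multiple) marks general, often philosophical, papers."""
    return coding == ("General", "Multiple")

sense_making = [title for title, c in codings.items() if is_sense_making(c)]
print(len(sense_making))  # 1: only the Parker (1992) example above
```

The tuple representation makes the special cases easy to detect later, when the "general" and "multiple" codings must be treated differently during analysis.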


Table 1 – Security folk taxonomy for IFIP SEC conference series

Academic/curriculum; Access control; AI/Expert systems; Ambient systems; Architecture; Assurance/Trust/Accreditation; Auction systems; Auditing; Authentication; Availability; Awareness; Biometrics; Building secure applications; Business continuity; Certification authorities; Confidentiality; Copyright protection/Steganography/Watermarking; Corporate culture; Corporate governance; Cryptography; Database/Data store security; Decoy; Digital signature; Distributed computing; E-commerce; Education; Electronic payment and transaction schemes; Ethics/Human factors; Exploits of vulnerabilities; General; Government involvement; Groupware applications; Hardware security; Information security management; Integrity; Intellectual property; Internet access; Intrusion detection; Key management; Legacy systems; Legal; Malice/Crime; Management policy; M-commerce; MLS (Multilevel security); Mobile technology; Multiple; Network/Communication security; Non-repudiation; Physical security; Privacy/Anonymity; Profiling; Protocol; Risk analysis/Risk management; Single sign-on/Identity; Smart cards; Software security; Standards, guidelines; Technical policy; Tool; Training; Trusted third party; Verification; Visualization; Voting systems; Web service

2.2. Coding the papers

The process of coding refers to the actual classification of the papers. In addition to coding the topic dimension in terms of the high-level and zoomed-in applicability to the folk taxonomy, we also considered the output-type domain. In the output-type domain we made a distinction between formal, technical and informal outputs, similar to Bjorck and Yngstrom (2001). Outputs in the formal domain encompassed rules and procedures aimed at a human receiver, whereas classifying outputs in the informal domain indicated that the paper ventured into the field of human behavior or presented ideas. Outputs classified in the technical domain were aimed at implementation and execution at the computing artifact level. The proceedings reported on 867 papers. However, 65 papers were not included in our analysis. Some of the 65 papers were excessively short and did not allow for confident classification. Others were written in styles not representative of academic papers, such as papers that resembled speeches or


slide outlines, and were thus excluded as well. In total 802 papers were coded. The papers were coded using manual code sheets and then captured in a database. Occasionally the coding exercise prompted us to revisit the taxonomy and add a topic that was not discussed in the initial building of the taxonomy. Where this was based on homonyms, the papers classified as the initially used term were identified from the database and their classification revisited. The papers were considered from the perspective of different analytical units. First the title and abstract were considered, sometimes in conjunction with keywords identified. If this was not enough, the introduction and conclusion were considered. Failing all else, the complete paper was studied.

2.3. Verifying the results

Both authors acted as coders. Where either coder was unsure of specific papers, these papers were discussed by both coders and consensus was reached. On occasion, the opinion of others was sought, especially when an expert in a specific area was available. Each coder also randomly selected papers coded by the other party, coded them separately and then compared the resultant coding. Differences were discussed between the coders and consensus reached. Sometimes this caused the coders to revisit all papers that they coded in a specific way to confirm that the same arguments were used consistently. We found, like Malterud (2001), that the value gained from the disagreement between coders was more than that gained from agreement. Although the resultant coding could still have some degree of contention in it, we believe that the process followed was rigorous and that any contention could be attributed to the subjective nature of the activities in the process and not the process itself. Malterud (2001) points out that researchers' knowledge, experiences and preconceptions have an effect on their method of judgement and the communication of the conclusion. However, we are convinced that this subjectivity on the part of the researcher has been addressed sufficiently in the design of the process.

2.4. Analysing the results

Once all the papers were coded, the question "Anything interesting?" begged to be answered. We set off to analyze the information in two ways. Firstly, the high-level and zoomed-in topics were cross-tabulated to identify any clusters in the topics. This led to the conclusion that we should be careful of the role played by the "general" and "multiple" categories. We addressed this problem by "flattening" our two-dimensional topic category for the purpose of analysis. The "general" and "multiple" categories therefore had to be considered specifically. These special topics were handled as follows:

- where classifications were (general, topic) or (topic, multiple), the paper was tallied twice for the topic;
- (general, multiple) classifications resulted in a double count for a special "general" category.
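The flattening rules above amount to a small tally function. The Python sketch below is our own illustration of those rules, assuming lowercase (high-level, zoomed-in) pairs; the function and variable names are hypothetical, not the authors' actual tooling.

```python
from collections import Counter

def flatten(codings):
    """Tally topics so that every paper contributes exactly two counts,
    per the flattening rules described in the text:
    (general, topic) and (topic, multiple) double-count the topic;
    (general, multiple) double-counts the special 'general' category."""
    counts = Counter()
    for high, zoom in codings:
        if (high, zoom) == ("general", "multiple"):
            counts["general"] += 2
        elif high == "general":
            counts[zoom] += 2
        elif zoom == "multiple":
            counts[high] += 2
        else:
            counts[high] += 1   # ordinary coding: one count per topic
            counts[zoom] += 1
    return counts

counts = flatten([
    ("general", "cryptography"),                  # -> cryptography x2
    ("mobile", "multiple"),                       # -> mobile x2
    ("general", "multiple"),                      # -> general x2
    ("network security", "intrusion detection"),  # -> one count each
])
assert sum(counts.values()) == 2 * 4  # every paper counted exactly twice
```

The invariant at the end captures the stated goal of the rules: each of the 802 coded papers contributes exactly two counts, so no topic is discriminated against for co-occurring with a general or multiple classification.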


The above measures ensured that each paper was counted twice. They also prevented discrimination against topics that were used in conjunction with a general or multiple classification. In the analysis, possibly related topics were grouped to investigate possible trends. Each year a different number of papers was accepted at the conference. As it was our aim to identify trends over the 20 conference years, we expressed the topics addressed as a percentage of the papers published in each year, rather than as a count. The next section presents some interesting observations, indicating some trends (or the lack thereof). The choice of trends presented is based on observing certain clusters when cross-tabulating the topics, as well as methodically investigating certain perceptions of the researchers.
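The per-year normalization can be sketched as follows. The counts used here are made up for illustration only (they are not the study's actual data), and the function name is our own.

```python
# Express each topic's tally as a percentage of the papers published that
# year, so that years with different acceptance counts remain comparable.
# The numbers below are hypothetical, not taken from the study's data.
def topic_percentages(tallies_by_year, papers_per_year):
    return {
        year: {topic: 100.0 * n / papers_per_year[year]
               for topic, n in topics.items()}
        for year, topics in tallies_by_year.items()
    }

pct = topic_percentages(
    {1990: {"auditing": 3, "business continuity": 2},
     2005: {"network security": 9}},
    {1990: 40, 2005: 50},  # hypothetical papers accepted per year
)
print(pct[2005]["network security"])  # 18.0
```

Normalizing this way is what allows the trend figures in the next section to compare, say, a 40-paper year directly against a 50-paper year.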

3. Selected results

Trends were investigated by plotting the percentage of papers per year per topic. The following sections show a selection of results.

3.1. General focus at the conference

Since two special topics, general and multiple, were introduced, questions as to the focus of papers at the SEC conferences were raised. Fig. 1 shows the percentage of papers per year that were classified as general in the high-level category. It is quite clear that there was a downward trend in the use of the general topic through the years. This trend may well indicate that the conferences became more focused over time. It is interesting to note that for 3 of the 20 conferences (1988, 1990 and 1993) in excess of 50% of papers could be considered general "sense-making" papers. However, it also appears that the conference made a significant turnabout with respect to sense-making arguments since 2000.

The use of the "multiple" topic for the zoomed-in category indicated that the paper dealt with an issue from multiple perspectives. Fig. 2 shows the percentage of papers that had been classified as "multiple" for the zoomed-in topic. Although much less evident than for the "general" classification, a downward trend (especially in later years) is evident. This could indicate somewhat more focus at the micro-level. In general, then, it seemed as if papers became more focused. This led us to consider whether the nature of the outputs has changed.

3.2. The nature of the outputs

As mentioned, the output has been classified as technical, formal and informal. Fig. 3 shows the percentage of papers per year according to the different output types. Note that ‘‘informal’’ outputs are infrequent and represent a small percentage of papers. Even a cursory glance at Fig. 3 reveals that after 1998 there was a significant move toward technical outputs. The ratio between formal and technical outputs appeared to be reasonably balanced until 1998. Fig. 4 confirms this by showing the distribution between technical, formal and informal output types cumulatively over different time periods. The rest of the paper investigates this further by analyzing some topics in more detail.

Fig. 1 – Use of "general" categorization at the high-level.

Fig. 2 – Use of "multiple" categorization at the zoomed-in level.

3.3. Business continuity

The topic of business continuity deals with unexpected events such as fires and earthquakes. Papers classified as such thus relate to managing the impact of system security failures as well as managing recovery procedures. Only 9 papers were categorized as business continuity papers. Out of a total of 802 papers this is only marginally more than 1% of the SEC output. Seven of these papers carried the topic in the zoomed-in category and 2 in the high-level category. Eighty-nine percent of them were of a formal output type and 11% were technically oriented. Fig. 5 shows the representation of business continuity papers over the 20 years. From the graph, we can see that the topic was most prominent in 1990, with about 6% of the proceedings consisting of papers in this category.

For the majority of years business continuity did not get any representation. It was interesting to note that these papers did not address business continuity within a broader risk management or information security management perspective. Instead, they provided general discussions of the topic. This prompted us to investigate trends with respect to papers written from a risk management and information security management perspective.

Fig. 3 – Distribution of technical, formal and informal output types (percentage of papers per year, 1983–2005).

Fig. 4 – Cumulative distributions of technical, formal and informal output types: (a) 1983–1997: Technical 50%, Formal 47%, Informal 3%; (b) 1998–2005: Technical 70%, Formal 26%, Informal 4%; (c) 1983–2005: Technical 59%, Formal 38%, Informal 3%.

These concepts are combined in the next section as management related topics.

3.4. Management related

In addition to papers dealing with Information Security Management and Risk Management issues, several topics had a "management" connotation. For the purpose of analyzing management related topics, we also included papers classified as management policy, corporate culture and corporate governance. A total of 145 papers were management related. Fig. 6 seems to indicate a very slight decrease in the prominence of management related topics, which had the most prominence in 1985 and the least in 2005. When considering the output types of management related papers, we found 80% of the papers to be formal, 5% informal and 15% technical. Another target for analysis comprised topics suspected to be more technical in their approach. The next section summarizes the trends in terms of network related topics.

3.5. Network related

Papers categorized as network related include those which refer to the protection against, and detection of, threats or attacks on the network, that is, the assurance of secure communication. However, we explicitly exclude crypto-like topics. Fig. 7 therefore depicts a grouping of 3 topic categories: network security, intrusion detection and internet security. There was a total of 109 network related papers, a 14% representation in the SEC series. Twelve percent of these papers had a formal output type, while 88% were technical outputs. Fig. 7 shows the representation of network related papers over the 20 conference years. Network related papers were most prominent in 2005, with just over 18% of the proceedings dedicated to them, while in the first 2 years (1983, 1984) none of the papers were classified as such. Considering our explicit exclusion of crypto-like topics, a separate look at these is warranted.

3.6. Crypto-like topics

Papers characterized as crypto-like are those which provide procedures, protocols, cryptographic algorithms and instructions relating to the encoding and decoding of messages using cryptographic technologies. There was a total of 111 crypto-like papers, a 14% representation in the SEC series. Five percent of these papers had a formal output type and 95% a technical output type. Fig. 8 shows the representation of crypto-like papers over the 20 years.


Fig. 5 – Business continuity.



Fig. 6 – Management related.

Crypto-like papers were most prominent in 2004, with just over 18% of the proceedings dedicated to them. Another interesting observation can be made when considering auditing.

3.7. Auditing

Papers categorized as auditing refer directly to either the profession of auditing or the function/act of auditing. This includes the examination of system records and activities in order to test the adequacy and/or effectiveness of security systems and procedures, and to ensure compliance with established policies. There was a total of 28 auditing papers, a 3% representation in the SEC series. Sixty-one percent of these papers had a formal output type, while 39% were technical. Fig. 9 shows that auditing papers were most prominent in 1990, with just under 8% of the proceedings dedicated to them. Interestingly enough, auditing papers became sparser in later years (after 1993). Having pointed out some of the trends evident in the data, we can now proceed to reflect on the observed trends.

Fig. 7 – Network related.

Fig. 8 – Crypto-like topics.

4. Reflections

In the previous section several trends were identified. This section reflects on those trends by interpreting them in the light of current events and perceptions. As this may require us to draw certain inferences, it should be noted that "Inference never yields absolute certainties" (Krippendorff, 1980, p. 99). We hope to stimulate insightful discussion and healthy debate, at least within the IFIP TC-11 community, but possibly also wider.

Variation in topics at SEC conferences could be influenced by several factors other than general trends in the topics. At an individual conference level this can be attributed to, for example, the programme committee, the reviewers and special themes supported advertently or inadvertently by the conference organizers. Within the context of TC-11, the establishment of special working groups, which may host their own conferences, may contribute to the trends. Table 2 reflects the current working groups of TC-11. In addition, the reward

system for turning out papers, as well as the general interest and opinion of the authors on what security as a whole lacks, may be contributing factors. We found it interesting, yet not surprising, that during the initial years of the SEC conference papers were more general in what they discussed than during the later years. One has to consider that when the conferences started back in 1983, security was a relatively new field of study. Several of the earlier papers at the conference therefore merely tried making sense of the field. As new experiences, knowledge and concepts were uncovered, the papers became more focused. Considering that a discipline is expected to mature over the years, the trend toward more focused papers can be perceived as positive.


Fig. 9 – Auditing.


Table 2 – TC-11 working groups

WG 11.1: Information Security Management (est. 1985, revised 1992)
WG 11.2: Small Systems Security (est. 1985, revised 1992, 1995)
WG 11.3: Data and Application Security (est. 1987, revised 2001)
WG 11.4: Network Security (est. 1985, revised 1992)
WG 11.5: Systems Integrity and Control (est. 1987, revised 1989, 1991)
WG 9.6/11.7: Information Technology Mis-Use and the Law (est. 1990, revised 1992, 2000)
WG 11.8: Information Security Education (est. 1991)
WG 11.9: Digital Forensics (est. 2004)

There was a high inclination toward technical papers. This trend is highly visible in the latter years (1998–2005). Informal and formal outputs accounted for 30% of the SEC series while technical outputs accounted for 70%. Bjorck and Yngstrom (2001) reported similar findings when analyzing 125 papers (including workshop papers) from IFIP SEC2000. They acknowledged the fact that technically oriented security research and solutions are a vital base for the secure operation of information and communication technologies. However, they added that, judging by today's continued challenges to keep computers secure, the solution to better security may not be found through technical measures alone. This was supported by the 2004 Ernst & Young Annual Global Information Security Survey, which highlighted the importance of the human aspect. It reported that the "lack of user awareness" was the top obstacle to effective information security (Ernst and Young, 2004). Given statements such as these, it was surprising not to observe an increase in management related topics and the "formal" output type. In fact, the opposite seems true. Some of the tendency toward technical outputs probably arose from the fact that the security research community historically evolved from the mathematical and natural sciences (Gerber and von Solms, 2005). The presence of "informal" outputs, however, does show that the TC-11 community recognizes the interdisciplinary nature of security to a degree. The role of the human in security is furthermore recognized by the "formal" outputs, which address procedural issues.

There has not been a time in the history of computing when information security has been more important to the success and stability of the business enterprise than now. Ironically, at no time in history have organizations been in more danger of computing failure. The result is that "most organizations face a business continuity event at some point" (Smith, 2006, p. 7). The authors therefore found the lack of business continuity and related papers highly disturbing. However, management related issues did consistently receive attention, although their prominence has been slightly decreasing. This is contrary to the authors' perception before this study. Although the authors did not collect data that directly support the following observation, their perception after the study is that management related topics moved on the


operational–tactical–strategic continuum. In the earlier days papers appeared to be much more focused on operational matters, whereas lately more strategic issues, such as corporate governance, received more attention. From a pragmatic perspective, security mechanisms must be checked to ensure they have the necessary effect. There is, therefore, a need for assurance as to the accuracy of the records and transactions they present. Audit trails, whether computer based or manually produced, usually form a noteworthy part of fraud detection and prevention within systems (Mercuri, 2003). We therefore found it disturbing that papers dealing with auditing issues have largely disappeared from the scene. Relatively few papers consistently dealt with the issue of auditing in the early years (1983–1993); in the later years (1994–2005), papers dealing with auditing appeared even more sporadically. With reports of an increase in the "insider threat" (Quigley, 2002; Neumann, 1999; Iyer and Ngo, 2005), one would expect work dealing with the mischief of insiders to receive more attention than it does.

The effects of network connectivity on computing cannot be denied. The strong presence of topics related to network security is therefore hardly surprising. We found that, overall, network related research occupied almost 20% of the total papers published over the 20 years of SEC. The steady growth also reflects the increasing role that developments such as the Internet had on the computing world.

While the importance of crypto-like topics in the security field cannot be denied, the trend toward an increasing number of crypto-like papers at the SEC conference is counter-intuitive. The well known cryptographer Bruce Schneier, in the preface to his book "Secrets and Lies", cautions against an unwarranted focus on cryptography and the mathematical utopia in which it exists (Schneier, 2000, pp. xi–xiii). While this in no way detracts from the critical role cryptography has to play in security, it should not be seen as the "silver bullet" of information security. The human side of security remains trivialized, although many researchers (for example, Siponen, 2000; Barber, 2001; Barrett, 2003) have argued that this dimension has a huge impact on the overall security of a system infrastructure.

5. Conclusion

The aim of this paper was to report and reflect on trends over the first 20 years of the IFIP TC-11 conference series. We were less than astonished by the move toward more focused papers in the SEC conference, which we believe is in line with the maturing of the field. However, we found that while the conference reported on the human and formal sides of information security, a clear inclination toward the technical aspects of security was apparent. We also uncovered some key topics which, in our opinion, received little or no mention in the series, such as business continuity and auditing. Such observations are alarming. It is our belief that security consists of multiple aspects which need to coexist and work together in order to achieve and maintain the highest level of security. Hence, there is a need to properly balance the different topic types within information security, especially with regard to a general security publication.

However, as authors, we need to acknowledge the important role which the IFIP community has played in contributing to security research through the years, and specifically through the flagship conference series. In this report, we outlined issues which we deem important, hoping that they will encourage healthy debate and discussion among scholars, not only about the content and direction of the SEC conference series, but about security research as a whole.

Acknowledgement

The financial assistance of the National Research Foundation (NRF) towards this research is hereby acknowledged. Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the National Research Foundation.

References

ACM. ACM computing classification system 1998 version [online]. Available from: ; August 1998 [03/08/05].

Barber R. Social engineering: a people problem? Network Security July 2001;2001(7):9–11.

Barki H, Rivard S, Talbot J. An information systems keyword classification scheme. MIS Quarterly 1988;12(2):299–322.

Barki H, Rivard S, Talbot J. A keyword classification scheme for IS research literature: an update. MIS Quarterly 1993;17(2):209–26.

Barrett N. Penetration testing and social engineering: hacking the weakest link. Information Security Technical Report April 2003;8(4):56–64.

Bjorck F, Yngstrom L. IFIP world computer congress/SEC 2000 revisited. In: Proceedings of the IFIP TC11 WG 11.8 second world conference on information security education July 2001. p. 209–23.

Bonifácio Jr J, Cansian A, Moreira E, de Carvalho A. An adaptive intrusion detection system using neural networks. In: Posch R, Papp G, editors. Global IT security (SEC'98), IFIP world computer congress 1998, vol. 116; 1998. p. 418–27.

Carroll JM, Martin S. Cryptographic requirements for secure data communications. In: Grissonanche A, editor. Information security: the challenge 1986. p. 90–9.

Donner M. Towards a security ontology. IEEE Security and Privacy May/June 2003;1(3):6–7.

Ernst & Young. Global information security survey 2004. Available from: ; 2004 [29/03/05], Company Report.

Gerber M, von Solms R. Management of risk in the information age. Computers & Security February 2005;24(1):16–30.

Glass RL, Ramesh V, Vessey I. An analysis of research in computing disciplines. Communications of the ACM 2004;47(6):89–94.

Hardjono T, Seberry J. Information security issues in mobile computing. In: IFIP/Sec '95: proceedings of the IFIP TC11, eleventh international conference on information security. Chapman and Hall; 1995. p. 143–51.

IEEE. IEEE approved indexing keyword list. Available from: ; August 1998 [03/08/05].

Iyer A, Ngo HQ. Towards a theory of insider threat assessment. In: DSN '05: proceedings of the 2005 international conference on dependable systems and networks (DSN'05). Washington, DC, USA: IEEE Computer Society; 2005. p. 108–17.

Krippendorff K. Content analysis: an introduction to its methodology. Beverly Hills, CA: Sage Publications; 1980.

Malterud K. Qualitative research: standards, challenges, and guidelines. Lancet August 2001;358(9280):483–8.

Mercuri RT. On auditing audit trails. Communications of the ACM 2003;46(1):17–20.

Neumann PG. Inside risks: risks of insiders. Communications of the ACM 1999;42(12):160.

Parker DB. Restating the foundation of information security. In: IFIP/Sec '92: proceedings of the IFIP TC11, eighth international conference on information security. North-Holland; 1992. p. 139–51.

Quigley A. Inside job. netWorker 2002;6(1):20–4.

Ramesh V, Glass RL, Vessey I. Research in computer science: an empirical study. Journal of Systems and Software 2004;70(1–2):165–76.

Schneier B. Secrets & lies – digital security in a networked world. New York: Wiley Computing Publishing; 2000.

Seleznyov A. A methodology to detect temporal regularities in user behavior for anomaly detection. In: Dupuy M, Paradinas P, editors. Trusted information: the new decade challenge, IFIP TC11 sixteenth annual working conference on information security (IFIP/Sec'01), June 11–13, 2001, Paris, France. IFIP conference proceedings, vol. 193. Kluwer; 2001. p. 339–52.

Siponen MT. Critical analysis of different approaches to minimizing user-related faults in information systems security: implications for research and practice. Information Management & Computer Security Dec 2000;8(5):197–209.

Smith D. Business continuity and crisis management. Available from: ; March 2006 [29/04/06].

Vessey I, Ramesh V, Glass RL. A unified classification system for research in the computing disciplines. Information and Software Technology March 2005;47(4):245–55.

Wikipedia. Folk taxonomy. Available from: ; March 2006 [29/04/06].

Reinhardt A. Botha is a professor in the School of ICT in the Faculty of Engineering, the Built Environment and Information Technology at the Nelson Mandela Metropolitan University in Port Elizabeth, South Africa. His research activities are conducted within the Centre for Information Security Studies, which forms part of the Institute for ICT Advancement within the same Faculty. Reinhardt holds a PhD in Computer Science from the former Rand Afrikaans University, South Africa. His current research interests explore security in the context of mobile technology, workflow and business process management.

Tshepo G. Gaadingwe is an M Tech (Information Technology) student in the School of ICT in the Faculty of Engineering, the Built Environment and Information Technology at the Nelson Mandela Metropolitan University in Port Elizabeth, South Africa. Tshepo, a Botswana national, completed his B Tech (Information Technology) degree at the former Port Elizabeth Technikon during 2004. His current research involves an analysis of the history of information and computer security.