digital investigation 5 (2009) 138–145
Source attribution for network address translated forensic captures

M.I. Cohen*
Australian Federal Police, High Tech Crime Operations, 203 Wharf St., Spring Hill, 4001, Brisbane
Article history: Received 31 October 2008; received in revised form 11 December 2008; accepted 18 December 2008.

Keywords: Network forensics; Traffic analysis; Protocol analysis; Digital forensics; Source attribution; Network address translation; NAT

Abstract

Network Address Translation (NAT) is a technology allowing a number of machines to share a single IP address. This presents a problem for network forensics since it is difficult to attribute observed traffic to specific hosts. We present a model and algorithm for disentangling observed traffic into discrete sources. Our model relies on correlation of a number of artifacts left over by the NAT gateway which allows identification of sources. The model works well for a small number of sources, as commonly found behind a home or small office NAT gateway.

Crown Copyright © 2008 Published by Elsevier Ltd. All rights reserved.
The problem of packet traceback is a commonly considered problem in intrusion detection and network forensics (Carrier and Shields, 2004). Traceback is used to detect the real source of traffic as it is being routed through the Internet. Commonly, traceback is used to identify the sources of distributed denial of service attacks, where the source IP address is spoofed (Shanmugasundaram et al., 2004). A similar problem is determining the real source behind a NAT gateway, the topic explored in this paper.

The wide proliferation of Internet connected systems and networks has spurred the adoption of Network Address Translation (NAT) gateways. These gateways allow deployment of many systems behind the NAT gateway, which all appear to contact the Internet using the gateway's public IP address. Often, systems behind the NAT gateway are assigned private, or non-routable, IP addresses (Rekhter et al., 1996).

NAT was initially considered a temporary stop-gap measure to increase the number of hosts connected to the Internet while consuming few preciously scarce IPv4 addresses. However, recent drafts of IPv6 have sought to specify NAT as an integral part of the protocol (Bagnulo et al., 2008). NAT is therefore set to remain in wide deployment for the foreseeable future.

Network forensics is an important sub-discipline of digital forensics. The network forensic analyst aims to retrieve forensically significant information from observed network traffic (Casey, 2004). Often, sources of interest are deployed behind NAT gateways, together with a number of sources which are not of interest. This makes attribution a significant problem, since all Internet traffic from the gateway device appears to originate from the gateway's external, public, IP address. It is difficult to disentangle the source of interest
* Tel.: +61 7 32221361. E-mail address: [email protected]
1742-2876/$ – see front matter Crown Copyright © 2008 Published by Elsevier Ltd. All rights reserved. doi:10.1016/j.diin.2008.12.002
from other traffic generated by other hosts (Shanmugasundaram and Memon, 2006). This situation is illustrated in Fig. 1. One solution is to move the intercept point behind the gateway, so traffic can be seen destined to specific hosts (Nikkel, 2006). Such a solution may not be practical in a forensic context, if the network is inaccessible.

Network communication is typically described in terms of the OSI model (Telecommunication Standardization Sector of ITU, 1994), which breaks protocols into distinct layers. Each layer can introduce unique artifacts which can be used to attribute the traffic to a specific source. The following discussion examines common artifacts within various layers of the OSI model which may be used for attribution. The attributes described are not an exhaustive list, and we suggest other attributes which may be examined in further research.

The literature contains a number of techniques for source attribution based on fingerprinting. For example, in the case of wireless networks it is possible to fingerprint devices based on their radio signatures (Desmond et al., 2008) (the Physical OSI layer). Clock skew and jitter can also be used to fingerprint devices (Kohno et al., 2005) (the Transport OSI layer). Even traffic analysis techniques can be used (Liberatore and Levine, 2006; McHugh et al., 2008). This paper examines a novel technique to attribute traffic to specific sources behind the gateway. We develop a model and illustrate how the model can be applied.
1. Attribution model

It is instructive to consider how a NAT device works (Srisuresh and Egevang, 2001). When a network packet arrives at the gateway from one of the hosts behind it (we will term it a Source), the gateway consults its connection state table to determine whether the packet is part of an existing connection or a new one. New connections are assigned a new source port number, and the packet is rewritten such that its source IP is the external interface's IP address and its source port is the assigned port number. When a reply is received back on this source port, the connection state table is consulted again, and the originating host's IP address and source port number are substituted as the packet's destination IP address and port numbers. This translation is done for both TCP and UDP packets. We will loosely refer to each translation as a Stream.

Fig. 1 – Typical NAT architecture. A number of hosts are masked behind the NAT gateway. Their Internet traffic appears to originate from the gateway address.

We can summarize the connection state table as a sequence of stream entries:

    (S_Addr, S_Port, \underbrace{GW_Addr, GW_Port, D_Addr, D_Port}_{\text{Observables}})    (1)

where S represents the originating source, and D represents the destination. GW_Addr is fixed, while GW_Port is the assigned gateway source port – usually a sequentially incrementing number within a certain range. We are only able to observe the addresses and ports on the gateway side (after translation); the real source address is unobservable.

Definition. A Stream (denoted by s) is a set of packets with a source (GW_Addr, GW_Port) and a destination (D_Addr, D_Port). The reverse stream is the set of packets with source (D_Addr, D_Port) and destination (GW_Addr, GW_Port).

The problem then becomes to deduce the connection state table and, in particular, to assign a unique source to each stream given the observable properties and the streams themselves.

Definition. A Source (denoted by S) is a set of streams which are attributed to the same host. It forms a set of packets which is the union of each stream attributed to that source:

    S = {s_1, s_2, s_3, ...} = s_1 ∪ s_2 ∪ s_3 ...    (2)
Streams are attributed to exactly one source at a time. The attribution model, therefore, is a finite set of sources, each of which contains a finite number of streams, such that every stream has been attributed to exactly one of the sources. A configuration of the system is a specific assignment of streams to sources; for example, one configuration might assign some streams to source 1 and the remainder to source 2. The correct configuration is taken to be the one which is most likely given specific stream properties.

We essentially ask: does it make sense to assign these streams to these sources? We answer this by examining the streams for internal consistency. If it is plausible for a configuration to be correct we retain it as a possible configuration; otherwise we try a different one.

The problem of finding the best configuration is thus reduced to a discrete multivariate optimization problem. Optimization problems are typically solved using an energy or cost function which must be minimized by adjusting the system configuration (Diwekar, 2003). The energy function reflects how far the current system is from an optimal state. The system is then re-configured in such a way as to minimize this energy function, thereby bringing the system to a more optimized state. The optimization algorithm aims to reduce this energy function to either a global minimum (an absolutely optimal configuration) or a local minimum (a configuration which is "good enough" but converges in reasonable time).
We define the energy of the system as the sum of energy factors for each source:

    E_total = \sum_{i=0}^{n} E(S_i)    (3)
The correct configuration is the one which minimizes the total energy of the system. Practically, once the traffic is broken down into a number of different sources, the source of interest can usually be isolated through specific traffic attributes such as email addresses, login names or chat handles. Once some streams within a source are identified as being of interest, the other streams within that source are also considered of interest.

Note that we do not use the IP address of the source (we have no way of finding it). The actual gateway address (GW_Addr) is only used to establish the direction of connections (i.e. outbound or inbound). Because we do not rely on IP addresses, it is possible for addresses to change dynamically (e.g. via DHCP) without affecting our algorithm.
1.1. Hypothesis testing
Consider a capture with M distinct streams. We begin by testing each stream for the best system configuration. The stream under test (s) is assigned to the most likely source (S_i) by generating a sequence of hypotheses and calculating their energy terms:

    E(H_i) = E(S_i ∪ s) + \sum_{j=0, j≠i}^{m} E(S_j)
    ΔE(H_i) = E(S_i ∪ s) − E(S_i)

    E(H_{m+1}) = E(s) + \sum_{j=0}^{m} E(S_j)
    ΔE(H_{m+1}) = E(s)    (4)
where m is the total number of sources and 0 ≤ i ≤ m. H_i is the hypothesis that stream s belongs to source S_i. The energy associated with this hypothesis is simply the energy term for source S_i coupled with stream s, in addition to the energy terms of all the other sources by themselves. The change of energy ΔE(H) for each hypothesis is the amount by which the system's energy changed by introducing the stream into that configuration.

Finally, the null hypothesis H_{m+1} is tested to see whether the stream warrants the creation of a new source. Adopting this hypothesis means that we do not consider the stream a good fit within any of the existing sources, and the creation of a new source is warranted.

The hypothesis with the lowest overall energy term is chosen as the most likely configuration. Once all streams have been assigned to a configuration, the calculation can be repeated in a second pass to test the model's stability and improve convergence.
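The first-pass assignment can be sketched as follows. This is an illustrative reduction, not PyFlag's implementation: each stream is represented here by a single IPID value, and `gap_energy` is a toy, non-wrapping stand-in for the real energy functions developed in Section 2. The tie-breaking rule (ties go to an existing source) is a design choice of this sketch.

```python
def gap_energy(ipids):
    """Toy per-source energy: the number of packets missed between
    consecutive IPIDs (a simplified, non-wrapping version of Eq. (6))."""
    xs = sorted(ipids)
    return sum(b - a - 1 for a, b in zip(xs, xs[1:]))

def assign_streams(streams, energy=gap_energy):
    """One pass of Section 1.1: test each stream against every existing
    source (hypotheses H_0..H_m) and against founding a new source (the
    null hypothesis H_{m+1}); adopt whichever has the lowest Delta-E."""
    sources = []
    for s in streams:
        # Delta-E(H_i) = E(S_i + s) - E(S_i) for each existing source ...
        deltas = [energy(S | {s}) - energy(S) for S in sources]
        # ... and Delta-E(H_{m+1}) = E(s) for the null hypothesis.
        null_delta = energy({s})
        if deltas and min(deltas) <= null_delta:
            sources[deltas.index(min(deltas))].add(s)
        else:
            sources.append({s})
    return sources

# Two interleaved hosts with sequential IPIDs separate cleanly:
print(assign_streams([10, 500, 11, 501, 12]))
```

A second pass over the resulting configuration, as the text suggests, would re-test each stream against the now-populated sources to improve convergence.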
1.2. Source model
It is often useful to build a source model of each source. The source model collects all inferences we have made about this
source and can be used to make predictions and deductions about the source (for example its operating system (Smith and Grundl, 2002)). Statistical techniques have previously been used to detect deviations from normal activity by creating a probability model for each source (Goonatilake et al., 2007).

The User Agent HTTP header reveals information about the client program which generated the request, and often about the specific software version. By building a probability model of the occurrence of each User Agent string within the source, it is possible to estimate the probability that a request came from source S:

    p(UserAgent, S) = (Total User Agent Requests) / (Total Requests)    (5)
Another useful attribute to include in a model is the frequency of requests to certain URLs. For example, setting a particular web site as a browser’s home page will result in a request to that site each time the browser is started. This kind of information can be fairly unique particularly when the homepage is heavily customized. The source might include other properties. In particular we discuss the IPID properties in Section 2.1, and a clock model in Section 2.2.
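Eq. (5) amounts to a frequency model over observed User Agent strings. A minimal sketch (the class name and the example strings are illustrative, not taken from PyFlag):

```python
from collections import Counter

class SourceModel:
    """Per-source model of Section 1.2, reduced to the User Agent
    attribute: Eq. (5) estimates p(UserAgent, S) as the fraction of the
    source's requests carrying that User Agent string."""
    def __init__(self):
        self.user_agents = Counter()

    def observe(self, user_agent):
        """Record one request attributed to this source."""
        self.user_agents[user_agent] += 1

    def p(self, user_agent):
        """Eq. (5): relative frequency of this User Agent within the source."""
        total = sum(self.user_agents.values())
        return self.user_agents[user_agent] / total if total else 0.0

model = SourceModel()
for ua in ["Mozilla/4.0 (Windows NT 5.1)"] * 3 + ["Wget/1.10"]:
    model.observe(ua)
assert model.p("Wget/1.10") == 0.25
```

The same counting structure extends naturally to other attributes mentioned in this section, such as frequently requested home-page URLs.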
2. Energy function design
The previous section discussed a general optimization framework for deducing an optimal configuration. The energy function must reduce the total energy when a stream belongs to a particular source, and increase it when a stream is grouped with an incorrect source. The energy function is a mathematical estimate of how well a stream "fits" with the other streams in each source. Ultimately we seek internal consistency within each source. We assume that the traffic from each source is natural, and does not attempt to deliberately subvert our detection. The following sections examine a number of internal consistency measures which may be used in the construction of an energy function.
2.1. IP IDs
Considering the OSI model, protocols below the network layer are typically invisible to our capture device, since the sources of interest are not on the same physical broadcast domain (as shown in Fig. 1). We therefore begin our quest for attributable artifacts at the OSI Network layer, commonly represented by the IP protocol.

The IP header contains an identification field termed the IPID field. This field is used to ensure each packet is unique in the event it needs to be fragmented during routing. The exact format of the IPID is not specified and its implementation is operating system specific. Some operating systems simply assign integers which increment by one for each packet sent. The field is commonly 15 or 16 bits wide, and starts at 0 at boot time. Some operating systems write the IPID in little endian format, while others write it in big endian format (Bellovin, 2002; Spring et al., 2002). Since IPID behavior is an attribute of the source, we need to account for this behavior using the source model described in Section 1.2. Modern operating systems such as Linux generate secure IPIDs by randomising them for each stream, in order to defeat the following analysis (this can be seen in the function secure_ip_id() in the Linux source tree (Linux source tree)). The following analysis is therefore most useful for Windows based sources, which use a simple sequential generator. The source model needs to detect the possibility that the source is a Linux system, and discount IPID analysis for such sources.

Most NAT implementations do not update the IPID field at all; hence we usually find that the captured packets have the same IPID as was set by the host which generated the packet, even though source IP addresses and ports may have been rewritten by the NAT implementation (Bellovin, 2002). For each packet the source sends, the IPID increments by one; however, we may not see all the packets the source sends, since not all packets were routed through our capture point. A useful forensic analysis therefore is to plot the IPIDs of all outbound packets against the packet number (Bellovin, 2002). An example plot is shown in Fig. 2. In the figure we see 3 separate sources. Source 1 only sends some packets to the Internet; most other packets are not visible to us. Source 2 wraps its IPID field at 2^16, while Source 3 wraps at 2^15.

A useful measure of the internal consistency of a source is how many packets we must have missed in order to observe the sequence of IPIDs for that source. For example, assuming a source which generates IPIDs in big endian format and increments IPIDs by one for each packet (e.g. Windows XP):

    E(S) = \sum_j mod(p_{j+1} − p_j − 1, m)    (6)
where m is the wrap-around modulus of the IPID field (2^15 or 2^16, as obtained from the source model), and p_j is the IPID of the jth packet in the source sequence obtained by ordering the packets in time order. Clearly, if we are able to observe all packets from this source, the energy for the source will be zero, since p_{j+1} = p_j + 1.
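Eq. (6) can be sketched directly (a minimal illustration, assuming a sequential big endian generator; the function name is ours):

```python
def ipid_energy(ipids, m=1 << 16):
    """Energy of a source per Eq. (6): the number of packets we must have
    missed to observe this IPID sequence, assuming a sequential
    (increment-by-one) generator whose field wraps at m.

    `ipids` is the time-ordered sequence p_0, p_1, ... for the source."""
    return sum((ipids[j + 1] - ipids[j] - 1) % m
               for j in range(len(ipids) - 1))

# A fully observed source costs nothing; each missed packet costs one unit.
assert ipid_energy([10, 11, 12, 13]) == 0
assert ipid_energy([10, 11, 15]) == 3          # packets 12-14 unobserved
assert ipid_energy([65534, 65535, 0, 1]) == 0  # wrap at 2**16 handled by mod
```

Note that the modular difference is what makes case B of Fig. 3 expensive: a stream whose IPIDs fall below the source's current IPID forces a near-full wrap, contributing an energy term greater than 2^15.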
The energy function assumes that the hypothesis is true, and that the reason for the observed pattern of IPIDs is that the source has transmitted packets which were not observed (for example locally, or to another machine on the same subnet). This is a distinct possibility, and we can calculate the number of unobserved packets required for the hypothesis to be true. We adopt the hypothesis which requires the least number of unobserved packets as the most plausible, and reject all other hypotheses.

It is instructive to examine how Eq. (6) discriminates between the different hypotheses; this is illustrated in Fig. 3. Case A illustrates a hypothesis which neglects to include the stream under test in the source. The energy factor for this configuration is equal to the number of packets which we must have missed to make the hypothesis true – that is, the total number of packets in the test stream. Case B illustrates a hypothesis which incorrectly attributes stream s to source S. In this case there are 2 energy components. Since the stream under test has an IPID smaller than the previous source IPID, the IPID field must have wrapped for this hypothesis to be true. The e_1 component will therefore have a very large value (e_1 > 2^15). Case C illustrates a correct attribution hypothesis. In this case the energy is 0, as all the packets are accounted for. Based on the IPID analysis, this hypothesis will be chosen as the most likely one.

Since we optimise the total energy (Eq. (3)), an incorrect hypothesis is penalised both for mis-attributing the stream to the wrong source, and for not attributing the stream to the correct source.
2.2. Timers and clocks
Time is a unique attribute of a source, both in its absolute value and in any clock drift we may encounter. Timestamps may appear in a number of layers of the OSI model. For example, the TCP timestamp option (OSI Transport layer) has been shown to be a reliable source identification technique (Kohno et al., 2005). Although the TCP timestamp option is off by default on Windows systems, it is on by default on Linux systems.

Another useful source of timestamps is HTTP. The HTTP protocol itself does not require a client to transmit its time (the server, however, must send its clock in the Date header). Nevertheless, many web applications do send a timestamp from the client's clock, and this can be used to estimate the client's clock drift. This is an example of artifacts introduced by the OSI Application layer. Table 1 shows a small sample of useful timestamps which may be found in the traffic.

The clock properties are made part of the source model, and an energy function contribution can be taken as the difference between any observed timestamp and the model's prediction. Although clock tests can only be performed on some of the connections (e.g. specific HTTP connections), we have a high degree of confidence in attribution based on clock sources.
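One way to build such a clock model is a least-squares fit of observed client timestamps against capture time; the fitted offset and skew then predict what timestamp a candidate stream should carry. This is a sketch under our own assumptions (the paper does not specify the fitting method), with synthetic timestamp pairs standing in for values extracted from parameters like those in Table 1:

```python
def fit_clock(capture_times, client_times):
    """Least-squares fit of client_time ~= offset + (1 + skew) * capture_time.
    Returns (offset, skew); the residual |observed - predicted| for a new
    timestamp can serve as an energy contribution."""
    n = len(capture_times)
    mx = sum(capture_times) / n
    my = sum(client_times) / n
    sxx = sum((x - mx) ** 2 for x in capture_times)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(capture_times, client_times))
    slope = sxy / sxx                 # estimated clock rate (1 + skew)
    return my - slope * mx, slope - 1.0

# A clock running 100 ppm fast and 5 s ahead of the capture device:
offset, skew = fit_clock([0, 10, 20, 30], [5.0, 15.001, 25.002, 35.003])
assert abs(offset - 5.0) < 1e-6 and abs(skew - 1e-4) < 1e-6
```

Two hosts behind the same gateway will generally exhibit distinct (offset, skew) pairs, which is what Fig. 7 visualises for the TCP timestamp option.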
Fig. 2 – IP IDs plotted vs. packet number for packets outbound from a NAT gateway.

2.3. HTTP referrers
Another OSI Application layer protocol is the HTTP protocol – one of the most common protocols on the Internet forming the basis for the world wide web. HTTP is often used to transmit HTML (Hyper Text Markup Language) documents. These documents rely heavily on cross linking to other documents,
Fig. 3 – Possible IPID fitting conditions and their effects on the energy function. A – Stream omission, B – Incorrect stream attribution, C – Correct stream attribution.
as well as embedding images, script and other multimedia content. HTTP typically uses the Referrer header to indicate that the current request was referred from another page. Fig. 4 illustrates a typical HTTP request tree, formed by tracking each object's Referrer header.

If a stream is an HTTP stream containing a Referrer header, it is likely that the request for the originating page also came from the same source. Searching for the source which in the recent past requested the referred URL allows us to attribute the present stream to that source. We can use this information to reduce the energy term for the relevant source.
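This heuristic can be expressed as a negative energy contribution for the matching hypothesis. The field names, time window and weight below are illustrative assumptions, not values from the paper:

```python
def referrer_bonus(stream, source_requests, window=300):
    """If the stream's Referrer URL was requested by this candidate source
    in the recent past, reward the hypothesis with a negative energy
    contribution (Section 2.3). `source_requests` maps URL -> time of the
    source's last request for it; `stream` is a dict with illustrative
    'referrer' and 'time' keys."""
    ref = stream.get("referrer")
    t = stream.get("time", 0.0)
    if ref in source_requests and 0 <= t - source_requests[ref] <= window:
        return -10.0   # arbitrary weight favouring this attribution
    return 0.0

history = {"http://www.example.com/": 100.0}
s = {"url": "http://www.example.com/page1.html",
     "referrer": "http://www.example.com/", "time": 102.5}
assert referrer_bonus(s, history) == -10.0
```

The time window matters: a Referrer pointing at a page fetched long ago (or never observed) gives no evidence, so the contribution falls back to zero rather than penalising the hypothesis.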
2.4. HTTP cookies
HTTP is inherently a stateless protocol. However, many web applications rely on the user maintaining state throughout their use of the application. This state is maintained by the use of HTTP cookies. A cookie is a piece of information which the server requests the client to present in future interactions with the site. Cookies are used to track users and systems (Kristol, 2001).

Cookies may contain any information, but most commonly sites use a Session Cookie to maintain session state. The session cookie is a random nonce which the client receives at the beginning of the session and then presents in each subsequent request for the duration of the session. Since the session cookie is only valid for the duration of a session, it is only present on a single host and may be used to infer attribution. Some web development frameworks set session IDs using cookies of a fixed and predictable name (for example, PHP uses the variable PHPSESSID).

Attribution based on session cookies is considered very strong; that is, we have a high degree of confidence that two streams carrying the same session cookie have come from the same source. This is because the session cookie is random and designed to be difficult to guess. Even if the user logged into the same site from two different machines, the session cookies would differ. We may use this information to merge sources which were otherwise detected as distinct by other methods.

Table 1 – Common HTTP parameters which can be used to estimate the client's clock drift. These parameters specify the clock to a resolution of milliseconds.

    Domain                      Parameter
    googlesyndication.com       dt
    ..imrworldwide.com          rnd
    statse.webtrendslive.com    dcsdat
    www.google.com.au           t
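The merging step can be sketched with a union-find structure: whenever two streams share a session cookie value, the sources they were assigned to are unified. This is our own illustrative formulation; the data layout is not PyFlag's:

```python
def merge_by_session_cookie(stream_sources, stream_cookies):
    """Merge sources that share a session cookie value (Section 2.4).
    stream_sources: stream id -> source id (from earlier attribution);
    stream_cookies: stream id -> session cookie value (e.g. a PHPSESSID
    nonce). Returns a relabelled stream -> source map in which
    cookie-linked sources have been unified."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    seen = {}  # cookie value -> a source already carrying that cookie
    for stream, cookie in stream_cookies.items():
        src = stream_sources[stream]
        if cookie in seen:
            parent[find(src)] = find(seen[cookie])  # union the two sources
        else:
            seen[cookie] = src
    return {s: find(src) for s, src in stream_sources.items()}

sources = {"s1": 1, "s2": 2, "s3": 3}
cookies = {"s1": "PHPSESSID=abc", "s2": "PHPSESSID=abc"}
merged = merge_by_session_cookie(sources, cookies)
assert merged["s1"] == merged["s2"] and merged["s3"] != merged["s1"]
```

Because cookie evidence is considered very strong, it overrides weaker methods: here streams s1 and s2, attributed to different sources by (say) IPID analysis, collapse into one source, while s3 is untouched.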
3. Results
In order to test our fitting model we generated a sample capture on a small network. The network contained a Windows XP SP2 system and a Linux Ubuntu 8.04 system (Kernel version 2.6.22). The gateway was a Debian ‘‘Lenny’’ system (Kernel 2.6.24) with a PPP connection to a 3G modem. The gateway was configured as a NAT gateway using the standard Linux NAT implementation. No inbound connections were permitted. The gateway was running a dyndns update client. We captured all traffic on the ppp0 interface (after address translation took place), without truncating any frames, using the tcpdump program. The capture lasted for 15 min while the two systems were used to browse the Internet.
Fig. 4 – An example of a typical HTTP request tree. The initial request fetches an image and a CSS file, and links to another page. The new page fetches a new image and may link to yet another page. Each item fetched includes a Referrer header indicating the URL of the page it was fetched from.
A plot of the IPID field for all outbound packets from the network is shown in Fig. 5. In contrast with Fig. 2, the Linux host uses secure IPID generation, making attribution by IPIDs particularly difficult. Each stream sent by the Linux host uses a random initial IPID, followed by incrementing IPIDs for each packet in the stream. Many packets sent by the Linux host (e.g. UDP packets) have an IPID of zero. It is therefore very difficult to attribute packets sent by Linux hosts based on IPIDs. In contrast, the IPIDs of packets sent by the Windows XP host simply increment for each packet it sends.

A PyFlag (Cohen and Collett; Cohen, 2008) module was written to classify the data into possible sources according to the algorithms described in Section 2.1. For each stream, the program calculated the total energy difference for each hypothesis, as well as testing the new source hypothesis. When processing a stream originating from the Linux system, the algorithm tended to place the stream into a new source because the stream did not "fit" well with the other streams in the source. However, for packets from the
Windows system, the algorithm showed a decrease of energy (negative ΔE) when placing the stream into the correct configuration. Thus the algorithm shows strong convergence. The final result was that the Linux system produced 133 separate sources, each containing one stream, while all streams from the Windows system were correctly matched into the same source, resulting in the IPID plot shown in Fig. 6. The source shows a continuous increase of IPIDs with few missed packets.

Since the above matching relied solely on the IPIDs, we ran a second check by comparing the User Agent strings from each source. We found total agreement between attributed streams and each source's User Agent strings.

The IPID attribution technique above is not suitable for Linux based hosts, which use secure IPID generation algorithms. We applied the TCP timestamp option algorithm described in Section 2.2 to the same scenario and obtained the relation illustrated in Fig. 7. Windows XP SP2 does not send the TCP timestamp option by default, and therefore does not appear in the graph. The graph illustrates two separate sources: the Linux workstation is shown as the lower line, while some requests are made by the gateway itself (for dyndns updates). Of course, this algorithm is only useful for TCP streams.
4. Discussion
The previous sections discussed a number of artifacts which may be used for source attribution. These artifacts are certainly not exhaustive and many more protocols leave identifiable artifacts within the data which might assist in source attribution. For example proprietary protocols such as online game communication or VOIP communication can have strongly attributable artifacts. Some protocols may even divulge the source’s internal IP address (for example SIP passes internal IP addresses for SDP negotiated end points when the gateway does not support SIP NAT fixups).
Fig. 5 – IP ID plot for all outbound packets captured from a small network.
Fig. 6 – All packets belonging to the Windows XP system as produced by our fitting program.
Protocols which deal with usernames or nicks may be used for attribution, and even the contents of the communication itself can be very valuable. For example, identifying a suspect's voice on a VOIP call, or seeing their picture on a video conference stream, makes for very strong source attribution regardless of the protocol. Once some streams are strongly attributed, these can be used to merge logically distinct sources as detected by the algorithm above.

We describe the null hypothesis as that configuration which adds a particular stream to a new source (the term H_{m+1} in Eq. (4)). Adopting this hypothesis essentially declares that we are unsure which source the stream should be attributed to, and we presume it is a new source instead. It is possible to make a type I error by attributing the stream to a source when it should not be attributed to that source (rejecting the null hypothesis when it should be adopted). A type II error is made when we place a stream in its own new source instead of attributing it to its rightful source (adopting the null hypothesis when it should be rejected in favor of one of the alternate hypotheses).

The probability of making these errors depends on the reliability of the attribution method. For example, as seen in
our experiment, the IPID attribution technique is not effective for Linux based hosts. Our algorithm caused each stream to be attributed to a distinct source (adopting the null hypothesis) even though in reality all streams originated from the same source. We therefore made frequent type II errors when applying the technique to a Linux host, despite being very accurate for the Windows host.

The legal importance placed on specific streams can influence our tolerance of errors. For example, consider the case of relevant MSN chats (which strongly confirm the suspect's identity) interspersed with HTTP traffic which we postulate originated from the same host. In that case we may need a high level of certainty of attribution. We can control for type I errors by adjusting the relative energy terms for the null hypothesis and the alternate hypotheses. Plainly, when we are particularly concerned with attribution accuracy, we conservatively prefer to say that streams are not related unless there is overwhelming evidence to the contrary.

Clearly, the energy terms from different attribution methods can be tuned to produce acceptable probabilities of making type I or type II errors. Further research is required to assess these probabilities by examining how well the different techniques perform in practice. As we have seen, some artifacts are most effective for certain types of sources. For example, the IPID attribution technique is useful when the source is a Windows XP system, but is useless when the source is a Linux host (with randomized IPIDs). Similarly, the TCP timestamp technique is useful when the source is a Linux system, but less so for Windows based sources (since the TCP timestamp option is enabled by default on Linux and not on Windows). Different techniques carry different levels of confidence. For example, timing analysis may be less reliable for sources with NTP synchronized clocks, since the clock skew is very small in that case.
Cookie based attribution, on the other hand, is very strong, since session cookies are designed to be unique for each browser. The energy function may be tuned to weigh each attribution technique according to the reliability of the method in the specific situation.

The techniques described above may also be used for other applications. The most obvious application is the detection of streams introduced into a PCAP file. This type of modification can occur when PCAP files have been manipulated in such a way that incriminating data was added, for example via the mergecap program (Wireshark, 2008). The above analysis can be used as a sanity check to ensure that the traffic appears to originate from the same sources: if two distinct traffic captures were merged, this will be evident as distinct sources. In effect, the above analysis performs many internal consistency checks on the data at several layers of the OSI model, and can therefore be used to uncover anomalies in the data.
Fig. 7 – Source timestamp as obtained from the TCP header options field plotted against real time of packet collection.

5. Conclusions
Network forensics relies on the acquisition of network traffic as evidence. Often, we have little control over the placement of our capture device within the network. When acquiring traffic emanating from a NAT gateway, the problem of
attributing traffic to specific sources is non-trivial. We present a number of algorithms which may be used to automate source attribution. The algorithms were applied to a small network capture with several hosts and were demonstrated to perform well, with low error rates.

This work examined only a single NAT implementation – the one provided by the Linux iptables firewall. This implementation is commonly deployed, but it is certainly not the only one available. Different NAT implementations may behave differently, and further work is required to classify how each implementation affects the different artifacts. The attribution artifacts presented in this work are by no means exhaustive, and many more protocols can be used to this end. Future work is required to expand the repertoire of suitable protocols and artifacts to make attribution more accurate.
References
Bagnulo M, Baker F, van Beijnum I. IPv4/IPv6 coexistence and transition: requirements for solutions [online]. Available from: http://tools.ietf.org/id/draft-ietf-v6ops-nat64-pbstatement-req-00.txt; May 2008.
Bellovin SM. A technique for counting NATed hosts. In: IMW '02: proceedings of the 2nd ACM SIGCOMM workshop on Internet measurement. New York, NY, USA: ACM; 2002. p. 267–72. Available from: http://www.cs.columbia.edu/~smb/papers/fnat.pdf.
Carrier B, Shields C. The session token protocol for forensics and traceback. ACM Trans Inf Syst Secur 2004;7(3):333–62.
Casey E. Network traffic as a source of evidence: tool strengths, weaknesses, and future needs. Digit Invest 2004;1:28–43.
Cohen MI. PyFlag – an advanced network forensic framework. In: The proceedings of the eighth annual DFRWS conference. Digit Invest 2008;5(Suppl. 1):S112–20.
Cohen MI, Collett D. PyFlag – Python forensic and log analysis GUI [online]. Available from: http://www.pyflag.net.
Desmond LCC, Yuan CC, Pheng TC, Lee RS. Identifying unique devices through wireless fingerprinting. In: WiSec '08: proceedings of the first ACM conference on wireless network security. New York, NY, USA: ACM; 2008. p. 46–55.
Diwekar UM. Introduction to applied optimization. Springer; 2003.
Goonatilake R, Herath A, Herath S, Herath S, Herath J. Intrusion detection using the chi-square goodness-of-fit test for information assurance, network, forensics and software security. J Comput Small Coll 2007;23(1):255–63.
Kohno T, Broido A, Claffy K. Remote physical device fingerprinting. IEEE Trans Depend Secure Comput 2005;2(2):93–108.
Kristol DM. HTTP cookies: standards, privacy, and politics. ACM Trans Internet Technol 2001;1(2):151–98.
Liberatore M, Levine BN. Inferring the source of encrypted HTTP connections. In: CCS '06: proceedings of the 13th ACM conference on computer and communications security. New York, NY, USA: ACM; 2006. p. 255–63.
Linux source tree. secure_ip_id() function [online]. Available from: http://lxr.linux.no/linux+v2.6.27/drivers/char/random.c#L1501.
McHugh J, McLeod R, Nagaonkar V. Passive network forensics: behavioural classification of network hosts based on connection patterns. SIGOPS Oper Syst Rev 2008;42(3):99–111.
Nikkel BJ. Improving evidence acquisition from live network sources. Digit Invest 2006;3:89–96.
Rekhter Y, Moskowitz B, Karrenberg D, de Groot GJ, Lear E. RFC 1918: address allocation for private internets. Internet Engineering Task Force, Tech. Rep.; Feb 1996.
Shanmugasundaram K, Memon N. Network monitoring for security and forensics. In: Information systems security, ser. Lecture notes in computer science. Berlin/Heidelberg: Springer; 2006. p. 56–70.
Shanmugasundaram K, Bronnimann H, Memon N. Payload attribution via hierarchical Bloom filters. In: Proceedings of the 11th ACM conference on computer and communications security. New York: ACM; 2004. p. 31–41.
Srisuresh P, Egevang K. RFC 3022: traditional IP network address translator (traditional NAT). Internet Engineering Task Force, Tech. Rep.; Jan 2001.
Smith C, Grundl P. Know your enemy: passive fingerprinting [online]. Available from: http://project.honeynet.org/papers/finger/; March 2002.
Spring N, Mahajan R, Wetherall D. Measuring ISP topologies with Rocketfuel. SIGCOMM Comput Commun Rev 2002;32(4):133–45.
Telecommunication Standardization Sector of ITU. Open systems interconnection – model and notation. ITU-T X.200 [online]. Available from: http://www.itu.int/rec/T-REC-X.200-199407-I/en; 1994.
Wireshark [online]. Available from: http://www.wireshark.org/; Feb 2008.