CHAPTER 3

Data Models
3.1  FROM ANALOG TO DIGITAL

Morphology and the other physical properties that we want to map using SPM are not discrete. They change continuously over the surface, and if we want to image them we first need to discretize them. The way the discretization is performed influences what can and cannot be present in the measured data, so it is directly related to the final uncertainty of the measurement. In this chapter we discuss in more detail the process of data acquisition in SPM, including simple models for the feedback loop, sampling regimes, and typical data storage approaches. Besides this, we also discuss two important issues related to data acquisition: drift and noise. Both can be understood as uncertainty sources and could be discussed in Chapter 5, or, together with other data correction and processing questions, in Chapter 4. However, both drift and noise are so intrinsically connected with the data acquisition approach that we prefer to discuss them here. Moreover, different measurement strategies can be used to minimize the influence of both these uncertainty sources, so it is important to take them into account already in the data acquisition phase.

3.2  DATA ACQUISITION BASICS

3.2.1  Data Sampling

While scanning over the surface, the tip performs a continuous motion, often even with a constant velocity in the lateral direction. The result of an SPM measurement is a set of points, usually with an equal distance between each two points. The sampling process itself can be understood as a filter that can distort the measured values. There is no unique way to sample data in SPM. Figure 3.1 shows two extremes of what can be used as the value of a single pixel of a signal measured in time: (B) an average over the sampling interval, or (C) the instantaneous value at the point of acquisition. We can see that the resulting profile can differ between the two cases. The (C) approach is closer to what we understand as data interpolation in mathematics. On the other side, the main benefit of the (B) approach is that it prevents us from missing sharp features in the signal that would otherwise be lost.

Quantitative Data Processing in Scanning Probe Microscopy. http://dx.doi.org/10.1016/B978-1-45-573058-2.00003-6 © 2013 Elsevier, Inc. All rights reserved.

Figure 3.1 SPM sampling approaches: (A) “real” data, (B) data averaged in intersections of real data and vertical gray lines, always using half of the sample spacing on each side as shown in inset, (C) data taken only as values at intersections.
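The difference between the two extremes in Figure 3.1 can be sketched in a few lines of code. This is an illustrative toy model (the signal and the sampling step are invented, and no real instrument works exactly this way): a narrow spike that falls between sampling points vanishes under instantaneous sampling (C) but survives, attenuated, under interval averaging (B).

```python
# Toy comparison of the two SPM sampling extremes from Figure 3.1.

def sample_instant(signal, step):
    """Case (C): take the instantaneous value at each sampling point."""
    return [signal[i] for i in range(0, len(signal), step)]

def sample_averaged(signal, step):
    """Case (B): average over +/- step/2 around each sampling point."""
    out = []
    for i in range(0, len(signal), step):
        lo = max(0, i - step // 2)
        hi = min(len(signal), i + step // 2)
        window = signal[lo:hi]
        out.append(sum(window) / len(window))
    return out

# Flat signal with a single sharp spike at an off-grid position.
signal = [0.0] * 1000
signal[505] = 10.0          # spike between sampling points 500 and 510

step = 10
instant = sample_instant(signal, step)    # spike missed entirely
averaged = sample_averaged(signal, step)  # spike survives, attenuated

print(max(instant), max(averaged))  # 0.0 1.0
```

The averaged result does not reproduce the spike height, but it at least records that a feature exists there, which is exactly the benefit of approach (B) described above.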

From the point of view of electronics and software, both approaches can be realized in an SPM easily. The feedback loop reads data so fast that, compared to the image sampling, the values are processed almost continuously. In practice, an approach intermediate between these two extremes is used, averaging to some extent over an interval close to the exact position. The user, however, cannot change the signal handling in most commercial instruments and, in fact, in most cases cannot even know how the sampling is realized. A more important issue is that the probing process itself can have a different averaging behavior. The measurement result can be understood as a convolution of the measured sample properties with a point spread function describing the averaging properties of the probing process. Sharp atomic force microscope tips act nearly as a delta function when measuring relatively smooth surfaces, as in case (C) of the previous sampling discussion. On the other side, a capacitance or optical probe signal (see Chapters 8 and 12) is known to be influenced by a larger surface area, so the signal is already averaged by physical processes, as it would be in case (B). The properties of the probing mechanism and its localization will be discussed in each respective chapter. Sampling itself distorts the information that can be obtained during the measurement. The maximum spatial frequency that can be measured is given by half of the sampling rate. As real surfaces are in principle not frequency limited, this reduces the amount of information that we can obtain. On the other hand, the minimum spatial frequency is limited by the scan size. If we want to obtain most of the relevant information, we need to choose the scan size and resolution so that as many of the spatial frequencies present on our sample as possible lie within the measurable frequency band.
This is sometimes not easy as the surface is formed by many different features, with a large span of spatial frequencies. As an example, in Figure 3.2 we show the results of a roughness evaluation on a randomly rough surface, depending on the scan size

[Plot: roughness σ [nm] versus scan range [µm], logarithmic axis from 0.01 to 100]
Figure 3.2 Roughness scaling with different image size, but same number of pixels. Results were simulated using a single high resolution and large scale image.

(always with the same number of pixels). While for small scan sizes we do not have a large enough area to capture the surface statistical properties, for large scan sizes (but the same number of pixels) we scan too coarsely to represent the roughness well. Note that for surfaces with a geometry other than random roughness the dependence could be even more complicated. All the data used for the roughness evaluation plotted in Figure 3.2 were obtained with the same resolution, changing only the scan size. On the resulting dependence of the root mean square deviation of heights (a roughness parameter usually denoted σ, Rq, or Sq) we can see that both the low and the high limit provide significantly wrong results. As a final issue, we discuss the influence of the measurement direction. Data are usually collected profile by profile, which introduces a virtual anisotropy into the measurement results. If we assume that a periodic noise is added to the data, it will be imaged on every profile. However, if we collect many profiles and form an image, the noise characteristics perpendicular to the profiles will change. In the fast axis data we could still perform some correction for the noise, e.g. using Fourier filtering; in the slow scan axis, however, we have uncorrelated noise that can hardly be treated, as shown in Figure 3.3. Noise is discussed in more detail in Section 3.6.
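For reference, the roughness parameter plotted in Figure 3.2 is straightforward to compute. The sketch below (with made-up heights, purely for illustration) evaluates the root mean square deviation of heights for a short profile:

```python
# RMS roughness (sigma, Rq for profiles, Sq for areas): the root mean
# square deviation of the heights from their mean.

import math

def rms_roughness(heights):
    """RMS deviation of heights from their mean value."""
    n = len(heights)
    mean = sum(heights) / n
    return math.sqrt(sum((z - mean) ** 2 for z in heights) / n)

profile = [0.0, 2.0, 0.0, -2.0, 0.0, 2.0, 0.0, -2.0]
print(rms_roughness(profile))  # ~1.414 (sqrt of 2)
```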

3.2.2  Feedback Loop Effects

Even if the principles of interaction detection differ between SPM techniques (force detection by a cantilever and laser beam being the most frequent one), there is always a feedback loop doing in principle the same thing: trying to preserve the desired set-point value of the interaction quantity. As this is an active component of the SPM (and we might say the most active one), with many parameters that can be varied, it will certainly have an important influence on the accuracy of the measured data.

Figure 3.3 Noise dependence on fast/slow axis: (a) noisy AFM image, (b) graph of power spectral densities obtained from fast (horizontal) axis and slow (vertical) axis.

The most common implementation of a feedback loop in SPM is via a proportional-integral-derivative (PID) controller. Even if this can be realized in analog form, nowadays it is typically a digital process running on some real-time device, such as an FPGA chip or a microcontroller. It can be modeled by an equation consisting of three terms:

Z(t) = P (s(t) − s_set) + I ∫₀ᵗ (s(t − τ) − s_set) dτ + D d/dt (s(t) − s_set),   (3.1)
where Z is the response (e.g. the z axis piezo voltage), s(t) is the signal from the measured interaction, s_set is the desired interaction signal value (setpoint), and the constants P, I, D have the following meaning:

• P is the proportional term describing the immediate response to the error between the interaction quantity and its setpoint. Note that if only this proportional term is used, the system reaches the desired position exponentially at best. The effect of this term on the feedback loop output is shown in Figure 3.4a.

• I is the integral term, which has two purposes. First, it slightly smooths the response to noise due to its averaging nature. Second, it helps the system reach the set-point value when the present interaction signal is only slightly shifted from the desired value; in this case the proportional term alone would react extremely slowly. The influence of this term on the measurement process is shown in Figure 3.4b.

• D is the derivative term describing the response to the change in the error. As digital feedback loops are at present very fast compared to the dynamics of the other SPM parts, this term is often omitted.

Note that the interactions in SPM are typically highly nonlinear. This can affect the PID controller response. In some cases, such as in scanning tunneling microscopy,


Figure 3.4 Typical feedback loop effects in PI controller: (a) proportional term (integral term is zero), (b) integral term (proportional term is 0.1 for all the curves).

a logarithm of the interaction (in this case the tunneling current) is used instead of the interaction itself to overcome this problem (the tunneling current depends exponentially on the tip-sample distance). However, for example in atomic force microscopy the interaction is so complex that this approach cannot be used. It can often be helpful to simulate the effect of different imaging conditions on our structure, e.g. by simulating the imaging process on an ideal surface. There are several educational tools for doing this, often supplied by instrument manufacturers; here we have used the simple PID simulation module available in the Gwyddion open source software package. Even if we tune our feedback loop optimally, there is some residual information in the error signal channel, namely information about the high spatial frequencies of the morphological features [1]. It is therefore important to be able to detect whether this amount of information is really negligible (as we typically expect). First of all we need to convert the error signal to probe displacement. This can be done using an approach known from force-distance curve acquisition (see Chapter 6): measure the error signal difference Δe while moving the z actuator upwards or downwards by a known distance Δz. This gives us the conversion factor Δe/Δz that can be used to process our error signal images. Note that if we use an optical lever technique for the probe-sample force measurement (as in AFM and most of the other methods), the conversion factor depends not only on the laser beam path in the microscope head but also on the cantilever geometry; we therefore need to measure it for each kind of cantilever separately.
The propagation of the beam in the optical lever detection system and all its components can influence the performance strongly [2], so it is probable that height information recalculated from the cantilever deflection will be much less accurate than what we get from the z axis sensors. After converting the error data into displacement we can (a) judge whether this signal is large enough that we need to treat it further and (b) add this signal to the height channel. Note that even after

39

40

CHAPTER 3  Data Models

conversion the uncertainty of the error signal will always be much higher than the uncertainty of the height (we rely on cantilever bending, we do not use any independent sensors, etc.), so we should try to keep the error signal as low as possible anyway. As the tuning of the feedback loop is sometimes difficult and introduces a certain amount of human influence into the measurements, manufacturers tend to develop methods for doing this automatically (e.g. the ScanAsyst method by Bruker). From a quantitative measurement point of view this might be contradictory. It is certainly good to suppress the human factor. On the other side, if the user does not know what the adaptive feedback tuning does during the measurement, they can hardly estimate its influence on the results. The benefit of this approach therefore depends mainly on how the adaptive feedback is realized and how it is documented.
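The behavior of Eq. (3.1) can be explored numerically, in the spirit of the Gwyddion PID simulation module mentioned above. The following is our own minimal discrete-time sketch, not that module: all quantities are dimensionless, the gains are invented, and the "error" is simply the difference between a step-like surface height and the current loop output z. It shows the role of the integral term in removing the residual error that a purely proportional loop would leave.

```python
# Minimal discrete-time sketch of the PID response of Eq. (3.1).
# Dimensionless toy model; gains P, I, D are arbitrary illustrative values.

def pid_response(surface, P, I, D, dt=1.0):
    """Track a surface profile: the error is surface height minus the
    current output z (a stand-in for the interaction error)."""
    z, integral, prev_err = 0.0, 0.0, 0.0
    trace = []
    for s in surface:
        err = s - z
        integral += err * dt              # integral term accumulator
        deriv = (err - prev_err) / dt     # derivative term
        z = P * err + I * integral + D * deriv
        prev_err = err
        trace.append(z)
    return trace

# Step of height 1 at sample 5; D is omitted, as is common in practice.
surface = [0.0] * 5 + [1.0] * 195
trace = pid_response(surface, P=0.1, I=0.02, D=0.0)
print(trace[-1])   # settles close to 1.0
```

Setting I=0 in this sketch leaves a permanent offset between z and the surface, which is the slow, incomplete convergence of the purely proportional loop described in the bullet list above.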

3.3  IMAGE SAMPLING

3.3.1  Regular Sampling

For most SPM techniques, the data are sampled regularly, forming a matrix of points with the same distance between points in the x direction and the same in the y direction. Very often the horizontal and vertical distances are equal, so the data form square pixels that can be very easily visualized (see Figure 3.5), providing a 1:1 correspondence between what is measured and what is displayed. The principal channel is nearly always the topography (the height field), which is collected in nearly every SPM mode. The other information (electrical, magnetic, thermal, etc.) is collected at the same positions as the height, so we usually obtain several channels during the measurement. Forward and reverse measurements are saved as separate channels, as they are not necessarily the same due to sample friction and some topography effects (see Chapter 7 for a discussion of this).

Figure 3.5 Difference between SPM regular (left) and non-regular (right) sampling approach. Crosses mark points that would be measured, background image is the “real” sample topography.


3.3.2  Irregular Sampling

Sometimes the regular matrix measurement is not effective. Imagine that we are measuring a single object on a flat substrate, e.g. a hair, a carbon nanotube, or a microelectronics line. We might want to evaluate only its width or height, and we will probably not need to know the surface roughness precisely. Similarly, if we need to determine the period of a diffraction grating we will measure as many grooves as possible, but we do not need to know the morphology of each groove (except the first and last one). The rest of the data would be left aside even if it took several hours to acquire and even if it occupied hundreds of megabytes of disk space. This is a huge waste of effort. One possibility to overcome this is to measure the data in a non-equidistant regime (not creating a square matrix of points during the measurement, as shown in Figure 3.5). If we are able to measure the overall sample geometry with low resolution, and the local features that will be used for surface property evaluation with high resolution, we can both speed up the measurement and reduce the need for data storage. There are numerous ways to perform measurements of a rectangular surface area. Logically, the most straightforward and most commonly used approach is to measure in a rectangular grid, mapping pixel to pixel and directly creating the image. Even if this is the most convenient scanning regime, and the data can be processed very easily after the measurement, it leads to a large loss of data already during the scanning process. As the microscope needs to maintain the feedback loop during the motion of the tip, the amount of data collected in a single profile is usually much larger than the number of pixels of the resulting image (e.g. by an order of magnitude or more). The data are then resampled to the requested number of pixels, losing the high resolution information already acquired in between them.
A natural improvement of the "standard" square SPM image would therefore be a square set of data with rectangular (non-square) pixels, where the fast axis spacing would be much smaller than that of the slow axis. Implementing such a scanning regime could already dramatically improve the amount of information collected in a single scan (in the same time), even if the regime is still quite a regular one. This kind of data could still be processed by many of the SPM data analysis software packages, which is not the case for the following approaches. On the other side, adaptive sampling could be realized as a completely random distribution of points in the xy plane. Based on some surface property, such as the local roughness, we could measure data forming a completely irregular set of points that would be triangulated afterward. However, as the microscope in principle measures high resolution data over some continuous path, this could be ineffective. It is therefore desirable to measure a set of profiles (not necessarily forming a Cartesian grid). As an example of a simple realization of an adaptive scanning regime, we refer here to a scanning regime based on acquiring rows and columns of different lengths, not organized in a regularly spaced matrix, as described in our previous work [3]. The prerequisite of the presented iterative process is to decide what the final maximum

41

42

CHAPTER 3  Data Models

resolution should be and what precision we want to reach. The key points of the refinement algorithm are as follows:

1. Measure a net formed by rows and columns with coarse resolution and interpolate the data to the final resolution.
2. Measure and add an interleaved net of rows and columns to form a resolution twice as fine, and interpolate the data to the final resolution.
3. Identify subsets between rows and columns where the interpolated data from the last two iterations differ by more than a chosen precision criterion.
4. Where the precision criterion failed, measure another net of rows and columns with twice as fine a resolution on those rectangles.

Note that in order to save measurement time this process needs to be optimized so that the movement between different refinement areas is minimized. An optimum path for the SPM probe is therefore planned, merging all the necessary movements (including movements between different areas) with the criterion of the shortest total distance. This is in principle a traveling salesman problem, and here it is solved using the nearest neighbor algorithm. The whole refinement process is illustrated in Figure 3.6. If the surface exhibits regular areas (e.g. a microchip surface), the areas measured in each refinement iteration become smaller and smaller, up to the point where all data are measured with the requested precision. Results of its performance on different surfaces are shown in Figure 3.7. We can see that the algorithm is rather efficient on regular surfaces (formed by large flat areas), but much worse on irregular ones. The worst case would

Figure 3.6 Simple adaptive sampling approach: graph shows evolution of error for points A and B, inset shows their position in the image and net of the rows and columns that are measured. See Ref. [3] for more details.


Figure 3.7 Simple adaptive sampling approach performance on different surfaces—black color showing areas designed for measurement in the next step [3].

be a completely randomly rough surface, where we would need to measure every point (as in the square matrix measurement approach), and we would even measure it twice (as we are here measuring both rows and columns). But this behavior is to be expected: we cannot omit anything if every pixel in the image contains unique information.
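The path-planning step mentioned above can be illustrated with the nearest neighbor heuristic for the traveling salesman problem. This is a toy sketch only: the coordinates are invented, and a real planner would merge many more movements than four area centers.

```python
# Greedy nearest-neighbor ordering of probe movements: always move to
# the closest unvisited target. A simple heuristic for the traveling
# salesman problem mentioned in the text.

import math

def nearest_neighbor_order(points, start=0):
    """Return visiting order starting from index `start`."""
    unvisited = set(range(len(points)))
    order = [start]
    unvisited.remove(start)
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(points[i], last))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

# Centers of four hypothetical refinement areas (in pixels), shuffled.
areas = [(0, 0), (90, 90), (10, 0), (80, 85)]
print(nearest_neighbor_order(areas))  # [0, 2, 3, 1]
```

The heuristic does not guarantee the globally shortest path, but it is fast and usually good enough for minimizing the probe travel between refinement areas.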

3.4  DATA STORAGE

There is unfortunately no commonly supported format for the exchange of data between different scanning probe microscopes. A normative document, ISO 28600:2011, exists that describes a proposal for a standard SPM data exchange format. It must be said that at present (i.e. 2012) the format is not widespread and it is not certain whether manufacturers will use it in the future. However, we can use it to illustrate the basic data format properties common to all other formats as well (including proprietary ones). A piece of the format definition is given here for illustration of the following paragraphs. Note that in practice only the parameter values are written (not their names

43

44

CHAPTER 3  Data Models

and the = symbols) and a missing parameter is given by an empty line in the file. After the header, 256 × 256 data values would follow.

format identifier           = ISO/TC 201 SPM data transfer format
label line                  = test file
institution identifier      = CMI
instrument model identifier = Dimension Icon
operator identifier         = Klapetek
experiment identifier       = test measurement
…
x resolution                = 256
y resolution                = 256
x axis unit                 = nm
y axis unit                 = nm
field of view x             = 100
field of view y             = 100
…
end of header identifier    = end of header

12.234 13.342 13.565 …
In the absolute majority of cases the saved data start with a header containing all the necessary meta-data, i.e. format identification, pixel resolution, physical range of the data, and optional meta-data, e.g. measurement parameters, operator, date and time of measurement, etc. The block of xyz or z data follows, either in ASCII or binary format (in the latter case the file mixes ASCII and binary data). In most formats, the data are stored as a set of binary 16-bit or 32-bit values, which can also be a limiting factor. The bit depth may be connected with the properties of the ADC converter in the microscope electronics; however, this does not mean that it necessarily represents the microscope resolution. Signals in the feedback loops of closed loop scanners are usually processed digitally anyway, oversampling can be used to increase the bit depth, etc. Note that for ASCII data files the bit depth depends on the number of printed characters: six printed characters with an exponent correspond at most to a single-precision float (i.e. 32-bit) representation, while for double precision about 16 characters would need to be printed. Note that sometimes the coordinate system of the AFM is right handed and sometimes left handed. This is probably due to different choices of the z axis direction by different manufacturers and can be confusing when interpreting data. ISO 28600:2011 suggests the orientation shown in Figure 3.8.
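To make the header layout above concrete, the sketch below reads a few fields of an ISO 28600-like header. It is a simplified illustration based only on the properties described in this section (values stored line by line in a known fixed order, a blank line marking a missing parameter); it is not a complete or validated reader for the standard.

```python
# Simplified reader for an ISO 28600-style header: values appear line
# by line in a fixed, known order; an empty line means the parameter
# is missing. Field list shortened for illustration.

FIELDS = ["format identifier", "label line", "institution identifier",
          "instrument model identifier", "operator identifier"]

def parse_header(lines):
    header = {}
    for name, raw in zip(FIELDS, lines):
        value = raw.strip()
        header[name] = value if value else None   # blank line = missing
    return header

lines = ["ISO/TC 201 SPM data transfer format",
         "test file", "", "Dimension Icon", "Klapetek"]
h = parse_header(lines)
print(h["institution identifier"])  # None (empty line in the file)
```

Because the field names are not stored in the file, such a reader is only as good as its knowledge of the field order, which is one reason proprietary headers that do store key-value pairs are easier to parse robustly.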


Figure 3.8 Coordinates orientation as suggested in ISO 28600:2011.

Note that there are many sources of cross talk between channels, such as the influence of the topography on nearly all other measured physical quantities. These are discussed in each chapter separately. However, in some instruments we can also observe cross talk between channels that is purely of electronic origin. If the measured signal is extremely small (e.g. in scanning near field optical microscopy luminescence measurements) and the wiring or electronics is not ideally conditioned, a very small signal cross talk from other channels may be observed that could be wrongly interpreted.

3.5  MECHANICAL AND THERMAL DRIFTS

When we recall the image of a typical SPM design (Figure 2.1), we can expect that the loop from the sample, through the head holder and coarse approach mechanism, through the head up to the probe can be relatively large, e.g. several tens of centimeters, as shown in Figure 3.9. Even if we take into account only distances in the z axis, we may have an approximately 10 cm high mechanism that is expected to measure data with sub-nm resolution in the z axis. This is not far from using a 100 meter high crane for a movement with micrometer resolution, e.g. for living cell imaging. It can be expected that any mechanical or thermal instability will significantly deteriorate the imaging process, and instruments need to be highly optimized to prevent this. There are generally two sources of drift in SPM instrumentation:

1. Mechanical drifts due to the compliance of the material forming the path between probe and sample.
2. Thermal drifts caused by the thermal expansion of the materials used.


Figure 3.9 A schematic drawing of an SPM instrument showing the metrology loop through which all the drift, noise, and mechanical compliance effects directly affect SPM measurements.

In Table 3.1 we summarize the thermal and mechanical properties of materials typically used in the construction of SPMs. For thermal properties the typically used quantity is the linear thermal expansion coefficient α. Mechanical properties are harder to estimate as they can be connected to both elastic and plastic deformation of materials (depending on the application); here we show the elastic modulus E, which is connected with elastic deformation and will directly affect e.g. the vibration eigenfrequencies of SPM components.

Table 3.1 Mechanical and thermal properties of typical SPM construction materials. Note that most of them can vary depending on material purity or composition.

Material          α [×10⁻⁶ K⁻¹]   E [GPa]
Stainless steel   10–20           200
Aluminum          22              70
Glass             5–10            80
Brass             19              120
Granite           8               60
Invar             <2              140
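The values in Table 3.1 translate directly into drift budgets via ΔL = α·L·ΔT. A quick back-of-envelope sketch (the loop length and temperature change are invented for illustration, and mid-range α values are assumed):

```python
# Thermal drift estimate dL = alpha * L * dT for a hypothetical 10 cm
# metrology loop and a 0.1 K temperature change, using mid-range
# expansion coefficients from Table 3.1.

alpha = {"stainless steel": 15e-6,   # [1/K]
         "aluminum": 22e-6,
         "invar": 2e-6}
L = 0.10    # metrology loop length [m]
dT = 0.1    # temperature change [K]

for name, a in alpha.items():
    dL_nm = a * L * dT * 1e9    # expansion in nanometers
    print(f"{name}: {dL_nm:.0f} nm")
# stainless steel: 150 nm, aluminum: 220 nm, invar: 20 nm
```

Even a tenth of a kelvin thus produces drifts of tens to hundreds of nanometers over a 10 cm loop, which is why low-expansion materials such as Invar are attractive despite their other drawbacks.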


For mechanical drifts we do not have many correction possibilities if we are not the instrument manufacturer. We can only try to minimize the additional drift sources that we would introduce ourselves. The fact that we use nanoscale forces for imaging often leads to underestimating the importance of firm sample mounting. SPM imaging will probably work perfectly even if we leave the sample unfixed or fix it using double-sided tape. However, the long term mechanical stability can be much worse (especially for the tape), and all the manufacturer's effort in making an instrument with the lowest compliance and best thermal stability is wasted. For sample mounting, in most cases a magnetic or vacuum chuck-based holder is used. Thermal drifts can be controlled much better. The biggest source of thermal drift is the instrument itself. Especially after start-up the drift rate can be relatively high, as observed in Ref. [4]: temperature changes of up to 5 degrees and drift rates of up to 100 nm/min were observed during the first hour after instrument start-up for different commercial SPMs. After that period a thermal equilibrium is more or less established and the only remaining sources of drift come from the outside: from the laboratory environment and from the sample itself. It is therefore important, especially for larger samples, to let the sample accommodate to the laboratory temperature before inserting it into the instrument. There is often some drift in the measured data even if we take care of all the effects mentioned above. It is important to have tools to detect and correct it, or eventually include it in the measurement uncertainty. The simplest approach for drift detection is to take a well-known sample, such as a calibration grating, image it, and observe the changes from the expected shape. In Figure 3.10a an example of a one-dimensional holographic grating measured with a clearly observable drift is presented.
By manually evaluating the bending of the grating structures we could easily evaluate the drift rate in the x axis and we would get results similar to those in Figure 3.10c. In practice, however, we usually do not measure samples that are so regular, and still we need some estimate of the drift rate. To obtain this from a single image of an unknown structure is almost impossible. If we can assume at least something about the structure (e.g. that it is a regular structure or has isotropic roughness),

Figure 3.10 Drift effects and their suppression by post-processing: (a) raw data, (b) data after drift correction, (c) x offset data used for the correction, related to the drift rate. Note that the mean orientation of the sample, e.g. determined from a scan after thermal stabilization, would need to be subtracted from the drift graph.

we can try to correlate the data line by line during scanning, detecting some systematic shift between lines. This is how the drift in Figure 3.10c was estimated. For more complex samples we need to repeat the imaging. Already from two successive images of the same region (within a single tip approach) we can estimate the drift relatively precisely. First of all, we need to know the time scale of the measurement. In most cases, the instrument measures all the profiles in one direction (let us call it forward) and leaves aside the data in the second one (backward). We can save and use both directions, e.g. for friction estimation, but we leave this to Chapter 7; here we will use only the forward data. In order to analyze the drift it is important to be sure that the speed in the forward and backward directions is the same and that there are no breaks or other delays between the profiles. It is also important to know the delay that could eventually occur between successive images and the direction in which the successive images are taken. There are several algorithms proposed in the literature for drift estimation; here we select two of them that do not need a special scanning approach and are therefore the most suitable for users of commercial instruments. Probably the simplest approach is described in Ref. [5], where only two characteristic points need to be detected in successively measured images. These points could be, for example, two larger particles on a flat surface, two different corners of a calibration grating, etc. Their position within the image, mutual distance, and orientation can be used to determine the drift rate in xyz, the magnification drift, and the rotational component of the drift. Following [5], let us assume that we have two positions of the characteristic points: point A at position (x1, y1) and B at (x2, y2). Measuring the absolute position of both points as a function of time gives us the overall drift rate.
By evaluating relative changes in the separation of A and B, e.g. as (x2 − x1) and (y2 − y1), we can obtain the changes in the microscope magnification that would be connected with the drift. Finally, by observing the rotation as (y2 − y1)/(x2 − x1) we can estimate the rotational part of the drift. A more complex approach uses the correlation between successive images, as in Ref. [6]. Here we use two successive scans measured upward and downward (so that there is one common point for both scans in one corner of the image). An important issue is the influence of drift on the measurement process for non-equidistantly measured data. If we use the adaptive scanning approach briefly introduced here (see Ref. [3]), the data in the image are not measured successively, so a straightforward drift determination from the AFM image is not possible, as the drift influences the interleaved values significantly (see Figure 3.11c). However, the drift can still be evaluated. In each refinement level we measure the same area as in the previous iteration, and we even obtain data at exactly the same points (at row/column crossings) as before; therefore we can determine the drift rate already during the refinement process for the areas being refined. We can do this using the following approach [3]:

• create an interpolated image from one refinement level (using the standard procedure from the previous section),


Figure 3.11 Effect of drift on regular and non-regular sampling: (a) original data, (b) regular sampling data measured with drift, (c) non-regular sampling data measured with drift, (d) non-regular data from part (C) after drift correction.

• create an interpolated image from the next level, skipping the data measured in the previous level,
• use cross-correlation to determine the shift between the two data sets; these are the x and y drift values,
• shift one data set according to the cross-correlation result and subtract the two data sets; this gives us the z drift value.

As an illustration of this process we simulated a measurement with a constant drift vector of (3, 3, 0.3) nm/s in Figure 3.11c. We used a part of the microchip surface seen in Figure 3.11a, taking already measured data (without observable drift) and adding the drift during the simulated measurement (all movements were performed with the same velocity). The drift was evaluated from levels 1 and 2 of the refinement process where, for this sample, we still measure over nearly the whole area (the local refinement criterion still holds everywhere). Using the above-mentioned process we evaluated the drift vector as (3.3 ± 0.5, 2.7 ± 0.3, 0.29 ± 0.15) nm/s, which is a good estimate of the drift rate. In Figure 3.11d a simulated measurement with

49

50

CHAPTER 3  Data Models

a correction based on this estimated drift rate is shown (the first two iterations were used for the estimate of the drift), obviously leading to a significant correction of the image. Of course, if the drift rate is not constant, the above-mentioned approach will not be an optimal one, however the user could repeat it after several refinement iterations to correct the estimate of the drift rate.
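The core of this procedure, determining the lateral shift by cross-correlation and the z drift by subtraction, can be sketched in a few lines. The following is an illustrative NumPy sketch, not the authors' implementation; the function name `estimate_drift`, the phase-correlation variant used and the sign conventions are our choices and would need to be matched to how the scanner axes are defined.

```python
import numpy as np

def estimate_drift(img_a, img_b, dt):
    """Estimate a constant drift vector (vx, vy, vz) between two
    images of the same area measured dt seconds apart.
    Lateral drift: location of the phase-correlation peak.
    Vertical drift: mean height difference after aligning."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    # normalized cross-power spectrum -> sharp correlation peak
    cross = fb * np.conj(fa)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = img_a.shape
    # map the cyclic peak position into a signed shift
    if dy >= ny // 2:
        dy -= ny
    if dx >= nx // 2:
        dx -= nx
    # align the second image and take the mean height offset as z drift
    shifted = np.roll(img_b, (-dy, -dx), axis=(0, 1))
    dz = float(np.mean(shifted - img_a))
    return dx / dt, dy / dt, dz / dt
```

For real data the correlation peak would be located with sub-pixel precision, e.g. by fitting its neighborhood, to reach drift-rate uncertainties comparable to those quoted above.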

3.6  NOISE

Virtually every measurement is affected by noise, which can come from many different sources. In SPM we can observe several main sources of noise.

Mechanical noise arises from the combination of external vibration sources and the limited rigidity of the head. External vibrations affecting SPM measurements come from mechanical vibrations of the environment (the amplitude of the floor vibrations can be in the order of hundreds of micrometers) and from acoustic noise. The SPM head is usually relatively large and has only a limited stiffness (with some parts, such as the cantilever in AFM, being very compliant), so it can be understood as a kind of microphone. To prevent this, a multi-stage noise minimization strategy is usually applied. The microscope itself sits on an anti-vibration platform with a low resonance frequency that removes most of the high-frequency components. The microscope construction is made as stiff as possible in order to obtain the highest possible resonance frequency, so that the system is not affected by the low-frequency noise transmitted by the anti-vibration platform. Finally, acoustic shielding is applied to screen out noise from the surrounding environment.

Electrical noise is caused by imperfections of the power sources, high-voltage amplifiers and data acquisition circuitry. A special part of this is the noise of the feedback loop, which can be partly tuned by the user by adjusting the feedback parameters (as shown in previous sections). As a special kind of electrical noise we can also consider ADC sampling noise, which is usually comparable to the value corresponding to a single ADC bit.

Unfortunately, noise is not distributed isotropically in the measured data, which limits the methods for its further treatment. If we consider the standard sampling approach consisting of a matrix of equidistant points, we can observe that noise in the fast scan axis is correlated, as the values are obtained in successive steps.
This is not the case for the slow scan axis, where neighboring points are obtained with a time step of several seconds. If we want to process the data, e.g. by some filtering or denoising operation, it is therefore usually effective to run it only in the fast scan axis direction. Some suggestions for denoising in the data processing phase will be discussed in the next chapter.

If we want to determine the effect of noise on our data, it is practical to interchange the fast and slow scan axis directions during the experiment. If we measure the same image twice, first with a fast x-axis and then with a fast y-axis (which can be controlled easily from nearly any SPM acquisition software), and process the data independently, we can at least see the effect of noise on our results.
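Because of this anisotropy, a denoising filter is best applied row by row. A minimal sketch follows, assuming the fast axis runs along image rows; the function name `denoise_fast_axis` and the moving-average kernel are our illustrative choices.

```python
import numpy as np

def denoise_fast_axis(image, k=5):
    """Smooth an SPM image only along the fast scan axis (rows),
    where neighboring samples are close in time; the slow axis,
    where noise is uncorrelated, is left untouched."""
    kernel = np.ones(k) / k
    # convolve each row independently; mode="same" keeps the size
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, image)
```

A median filter applied along the same axis is often a better choice when the noise contains spikes, since it preserves step edges.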


An even more complex procedure utilizing this approach was suggested in Ref. [7]. The process has several steps, as follows:
1. Obtain a regular image.
2. Obtain an image rotated by 90 degrees.
3. Rotate the second image back and mutually align both images (the necessity of this step depends on how the data acquisition software treats the measurement).
4. Calculate a Discrete Fourier Transform (DFT) of both images, taking care of discontinuities at the image borders, e.g. by mirroring the images in each direction.
5. Process the Fourier coefficients of both images together in order to get an image with minimum noise (the phase is taken from one of the images while the modulus is taken as the minimum of both).
6. Calculate the inverse DFT to get the final denoised image.

An example of the performance of such an algorithm on simulated data is shown in Figure 3.12. This approach is valid only under the assumption that there are no other effects that would cause the two images to differ, such as drift, lateral forces during measurement, etc. However, it can be helpful for estimating the effect of noise on the results and for removing the anisotropy created in the data by the scanning process.

Figure 3.12 Noise cancellation algorithm performance: (a) original data (simulated surface formed by particle-like objects), (b) and (c) simulated horizontally and vertically measured images, (d) result of the procedure suggested in Ref. [7].

Note that for the adaptive scanning algorithm described in the previous section the fast and slow scan axes are not defined. The image is already obtained in a manner where both rows and columns are measured as if along the fast scan axis, so that the noise is more isotropic than for the regular matrix approach.
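The mirroring, modulus-minimum and inverse-transform steps of this procedure can be sketched as follows. This is an illustrative simplification, not the implementation of Ref. [7]; the function name and the choice of taking the phase from the first image are ours, and the inputs are assumed to be already rotated back and aligned.

```python
import numpy as np

def cross_measurement_denoise(img_x, img_y):
    """Merge two scans of the same area (fast x-axis and fast
    y-axis) in the Fourier domain: phase from the first image,
    modulus as the minimum of both, since noise along the fast
    axis inflates the modulus of the affected coefficients."""
    def mirror(a):
        # mirror in both directions to suppress border discontinuities
        a = np.concatenate([a, a[::-1, :]], axis=0)
        return np.concatenate([a, a[:, ::-1]], axis=1)

    ny, nx = img_x.shape
    fx = np.fft.fft2(mirror(img_x))
    fy = np.fft.fft2(mirror(img_y))
    merged = np.minimum(np.abs(fx), np.abs(fy)) * np.exp(1j * np.angle(fx))
    # inverse transform and crop away the mirrored extension
    return np.fft.ifft2(merged).real[:ny, :nx]
```

With two identical, noise-free inputs the procedure returns the original image unchanged, which is a convenient sanity check.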

3.7  TRY IT YOURSELF

In order to experiment with the different errors and their sources related to sampling and the measurement process, we recommend using a few Gwyddion modules (similar functionality can be found in other software as well).

To see the effects of the feedback loop on the measured data, the PID loop can be simulated on data that are either measured or simulated (see the next chapter). The whole cantilever-tip-surface system is rather complex, but even a simpler model omitting some of its parts can be useful for a better understanding of the feedback loop behavior in different situations. In Gwyddion, you can use the PID simulator module for this.

The influence of drift on the data can be detected using the "compensate drift" module, which uses a line-by-line correlation strategy to detect drift in the image. With some amount of manual data evaluation, the other approaches suggested in this chapter, e.g. the two-object method, can be used as well. Moreover, image cross-correlation can be used to detect small differences between successive images, which is also useful when evaluating different distortions in scanner performance, such as drift.

Sometimes it can be practical to add some noise to the data and check its effect, e.g. on the performance of different data processing or evaluation algorithms. The Gwyddion "noise", "line noise" and "spectral synthesis" modules can be used to add different kinds of noise with various properties (correlation, distribution, direction, etc.).

Finally, the denoising procedure suggested in Ref. [7] and discussed in the previous section is implemented in Gwyddion as well and can be tested on both simulated and real data.
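To give a flavor of what such a PID simulation does, here is a deliberately crude model. It is our own toy sketch, not Gwyddion's module: the error is simply the surface height minus the tip height, the actuator responds instantly, and cantilever dynamics are ignored.

```python
def simulate_pid(surface, kp, ki, kd, dt=1e-4):
    """Toy PID feedback loop tracking a 1-D surface profile.
    Returns the tip trajectory and the error signal."""
    z = 0.0            # current tip (actuator) height
    integral = 0.0     # accumulated error for the I term
    prev_err = 0.0     # previous error for the D term
    tip, err_sig = [], []
    for h in surface:
        err = h - z                    # feedback error signal
        integral += err * dt
        deriv = (err - prev_err) / dt
        z += kp * err + ki * integral + kd * deriv   # actuator move
        prev_err = err
        tip.append(z)
        err_sig.append(err)
    return tip, err_sig
```

Running it on a step-like profile with different kp, ki, kd values shows the typical trade-off between tracking speed and overshoot that the feedback parameters control, and the returned error signal mimics the error channel discussed in the tips below.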

3.8  TIPS AND TRICKS

To maximize the amount of information contained in our data, the following steps can be recommended:
• Do not use double-sided tape. If there is no other option, use the thinnest one you can buy. A magnetic holder or a vacuum chuck is certainly a better choice (for a vacuum chuck, check the sample flatness on the bottom side).
• If the profiles along the fast scan axis will be used for evaluation, use non-rectangular pixels: you can increase the fast scan axis resolution significantly without any influence on the duration of the scan.
• At least for the first image, repeat the measurement twice to see the basic drift behavior.


• If you have a new instrument, try to repeatedly estimate the mechanical and thermal drifts under different environmental conditions (after start-up, after sample exchange, etc.).
• Try to reduce the feedback loop noise. If the data are too heavily distorted by noise with clearly anisotropic behavior, try to measure the image with the fast and slow scan axes interchanged.
• Save also the feedback loop error channel (often called the error signal, T-B signal, etc.).

REFERENCES

[1] J.F. González Martínez, I. Nieto-Carvajal, J. Abad, J. Colchero, Nanoscale measurement of the power spectral density of surface roughness: how to solve a difficult experimental challenge, Nanoscale Res. Lett. 7 (2012) 174.
[2] L.Y. Beaulieu, M. Godin, O. Laroche, V. Tabard-Cossa, P. Grütter, A complete analysis of the laser beam deflection systems used in cantilever-based systems, Ultramicroscopy 107 (2007) 422–430.
[3] P. Klapetek, M. Valtr, P. Buršík, Non-equidistant scanning approach for millimetre-sized SPM measurements, Nanoscale Res. Lett. 7 (2012) 213.
[4] F. Marinello, M. Balcon, P. Schiavuta, S. Carmignato, E. Savio, Thermal drift study on different commercial scanning probe microscopes during the initial warming-up phase, Meas. Sci. Technol. 22 (2011) 094016.
[5] Ch. Clifford, M.P. Seah, Simplified drift characterization in scanning probe microscopes using a simple two-point method, Meas. Sci. Technol. 20 (2009) 095103.
[6] R.V. Lapshin, Automatic drift elimination in probe microscope images based on techniques of counter-scanning and topography feature recognition, Meas. Sci. Technol. 18 (2007) 907.
[7] E. Anguiano, M. Aguilar, Cross measurement procedure (CMP) for noise free imaging by scanning microscopes, Ultramicroscopy 76 (1999) 39–47.
