CHAPTER 13

Content-aware image restoration for electron microscopy
Tim-Oliver Buchholz(a), Alexander Krull(a,b), Reza Shahidi(c), Gaia Pigino(d), Gáspár Jékely(c), Florian Jug(a,*)

(a) Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG), Center for Systems Biology Dresden (CSBD), Dresden, Germany
(b) Max Planck Institute for the Physics of Complex Systems (MPI-PKS), Center for Systems Biology Dresden (CSBD), Dresden, Germany
(c) Living Systems Institute, University of Exeter, Exeter, United Kingdom
(d) Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG), Dresden, Germany
*Corresponding author: e-mail address: [email protected]

Chapter outline
1 Introduction
2 Supervised CARE
3 CARE without ground-truth
  3.1 Noise2Noise training
  3.2 Noise2Void, Noise2Self, and deep image denoising
4 Experiments and results
  4.1 CARE for SEM
  4.2 CARE for 2D cryo-TEM projections
  4.3 CARE for 3D cryo-tomograms
5 CSBDeep, an open CARE software package
6 Discussion
References

Abstract

Multiple approaches to using deep neural networks for image restoration have recently been proposed. Training such networks requires well-registered pairs of high- and low-quality images. While this is easily achievable for many imaging modalities, e.g., fluorescence light microscopy, for others it is not. Here we summarize a number of recent developments in the fast-paced field of Content-Aware Image Restoration (CARE) in particular, and in the associated area of neural network training more generally. We then give specific examples of how electron microscopy data can benefit from these new technologies.

Methods in Cell Biology, Volume 152, ISSN 0091-679X, https://doi.org/10.1016/bs.mcb.2019.05.001 © 2019 Elsevier Inc. All rights reserved.

1 Introduction

In recent years, tremendous technological advances have been made in light microscopy (LM) and electron microscopy (EM). Using fluorescent light microscopes, we routinely image beyond the resolution limit, acquire large volumes at high temporal resolution, and capture many hours of video material, enabling us to image processes in living cells and tissues that have previously been unobservable. Electron microscopes can go far beyond the resolution limit of light microscopy, and modern EM approaches enable us to see cellular building-blocks in their native cell and tissue context. Despite all this progress, major bottlenecks remain. For example, acquiring large image volumes using Scanning Electron Microscopy (SEM) is very time consuming, contrast in cryo-EM is typically very low, and the analysis of acquired images is typically cumbersome and prone to errors.

In this chapter we first review how neural networks, i.e., content-aware image restoration (CARE) networks, can help to go beyond what fluorescence LM is traditionally capable of. CARE methods are known to help the analysis of acquired microscopy data, and the applicability of CARE methods to EM data is therefore desirable. Traditionally, CARE training is enabled by the acquisition of high-quality, low-noise images. While this is typically possible in LM, for many EM modalities it is not. Hence, additional ideas are required to make CARE methods applicable to these microscopy regimes. Recently, some ideas have been proposed that enable CARE even in the absence of high-quality, low-noise images (Batson & Royer, 2019; Krull, Buchholz, & Jug, 2018; Laine, Lehtinen, & Aila, 2019; Lehtinen et al., 2018). In this chapter we will show how CARE networks can be trained on SEM data, on 2D cryo-TEM projections, and on 3D cryo-tomography data. For each example, we will show restored images and compare them to results of other denoising methods.
In order to enable others to profit from CARE network training on their own data, we describe how we have used CSBDeep, an open-source CARE software package, to create the presented results.

2 Supervised CARE

Image restoration is the problem of reconstructing an image from a corrupted version of itself. Typical distortions include the effects of camera and photon/electron noise, the blur of the optical point-spread function, resolution loss due to sampling, and aberrations induced by the sample itself. Although we have a very detailed understanding of the nature and physics behind the contributing distortions, it is hard to compute the inverse, i.e., to generate a reconstructed image by only observing the corrupted images acquired by a microscope. The inverse of an observed image x is usually not uniquely defined, meaning that multiple undistorted images could have given rise to the corrupted version we observed. Such non-invertible problems are typically approached by minimizing a cost function that combines a data term D and a regularizer term R. The regularizer helps to pick one restored image y* from


the set of uncorrupted images that might have given rise to x. Formally, y* is computed as

    y* = argmin_y [ D(x, y) + α · R(y) ]

The data term D encourages the restored image to be in line with the observation and our understanding of the physical nature of the distortions. The regularization term R favors solutions based on prior assumptions about uncorrupted images. Depending on the severity of the distortion, the regularization has to be tuned by weighting it with a parameter α. Virtually all image restoration methods can be understood in these terms. In the context of this book chapter, it is important to note that the regularization term R traditionally encodes rather simple, hand-crafted assumptions, often as simple as favoring smooth images by penalizing strong differences between neighboring pixels. Content-aware image restoration, in contrast, extracts a suitable prior from a given body of sample data, which can then be applied to more data of the same kind. CARE networks (Weigert et al., 2018), typically (residual) U-Nets, are in essence high-dimensional functions E that compute a restored image y* = E(x) for a given distorted input image x. CARE networks are trained on a set of image pairs (x, y), with y being the undistorted ground-truth of x. During training, the network is adjusted such that a loss function L(E(x), y), measuring the difference between the network prediction E(x) and the desired result y, is minimized. While in LM it is possible to obtain very good quality ground-truth images y, in the context of many EM modalities, in particular for cryo-EM with its very restrictive total electron budget, this is not the case. Hence, ideas beyond the ones in (Weigert et al., 2018) are strictly required in order to make CARE approaches applicable to EM data.
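To make these terms concrete, here is a small numerical sketch (illustrative, not from the chapter): a squared data term D(x, y) = Σ(y − x)², a smoothness regularizer R(y) penalizing squared differences of neighboring pixels, and plain gradient descent on their weighted sum. On a piecewise-constant toy signal, even this hand-crafted prior reduces the error.

```python
import numpy as np

# Minimize D(x, y) + alpha * R(y) by gradient descent, where
# D(x, y) = sum((y - x)^2) and R(y) = sum((y[i+1] - y[i])^2).
def restore(x, alpha=1.0, lr=0.05, steps=500):
    y = x.copy()
    for _ in range(steps):
        grad_data = 2.0 * (y - x)       # gradient of the data term
        d = np.diff(y)                  # neighbor differences y[i+1] - y[i]
        grad_reg = np.zeros_like(y)
        grad_reg[:-1] -= 2.0 * d        # each difference contributes to
        grad_reg[1:] += 2.0 * d         # ...both of its two pixels
        y -= lr * (grad_data + alpha * grad_reg)
    return y

rng = np.random.default_rng(0)
signal = np.repeat([0.0, 1.0, 0.0], 50)             # piecewise-constant toy signal
noisy = signal + rng.normal(0.0, 0.3, signal.size)  # corrupted observation x
restored = restore(noisy)
```

The weight α plays exactly the role described above: larger values favor smoother solutions at the cost of blurring real structure, which is why such hand-crafted priors fall short of content-aware ones.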

3 CARE without ground-truth

Unsupervised methods do not require high-quality, low-noise data (ground-truth). Such approaches try to utilize internal statistics of the presented data to perform image restoration. Below we will look at several (semi-)supervised and unsupervised network training methods that do not require ground-truth data.

3.1 Noise2Noise training

As mentioned before, traditional CARE networks need to be trained on pairs of images (x, y), where x is a noisy image (composed of the true signal s with superimposed noise n) and y is a good approximation of the ground-truth signal. As described above, a CARE network E is trained such that a loss function L(E(x), y) is minimized. Noise2Noise (Lehtinen et al., 2018), in contrast, is a training regime for neural networks that does not require clean ground-truth. Instead, it is sufficient to know two noisy images x1 = s + n1 and x2 = s + n2, such that n1 and n2 are independent and have zero mean. These requirements are fulfilled for Gaussian as well as Poisson noise, which are known to be two main sources of noise in light and electron microscopy.


Noise2Noise training is simple to implement. Instead of minimizing the loss with respect to available ground-truth y, one simply minimizes L(E(x1), x2). During training, due to the independence of n1 and n2, the network is asked to achieve the impossible, i.e., to map noisy pixel values of x1 to noisy pixel values of x2. Depending on the kind of noise, a suitable loss function has to be used, and with enough data at hand, the network will learn to predict high-quality image restorations (Lehtinen et al., 2018). Hence, as long as pairs of images with independent noise can be acquired, Noise2Noise enables CARE network training even in the absence of ground-truth. In (Buchholz, Jordan, Pigino, & Jug, 2018), the authors show how Noise2Noise training is applicable to cryo-TEM data. Their best results use dose-fractionated image acquisitions, where the detector does not record a single image at once but instead breaks the acquisition down into a movie of short-exposure images. These individual images, in our case 10, are then cross-correlated (aligned) before being combined into one single, higher-quality, long-exposure frame. The motivation for doing so is to reduce the negative effects of motion blur in long-exposure images (Zheng et al., 2017). The availability of short-exposure movie frames that contain the same signal but independent noise is the ideal precondition for Noise2Noise training. To yield two noisy images containing the same signal, Buchholz et al. suggest averaging all even and all odd short-exposure frames (Buchholz et al., 2018). Example restorations can be found in Sections 4.2 and 4.3.
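The even/odd split can be sketched in a few lines of NumPy. The arrays below are synthetic stand-ins for ten aligned short-exposure frames; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.random((64, 64))                         # hypothetical true signal s
# Ten aligned short-exposure frames, each s plus independent noise:
frames = signal + rng.normal(0.0, 0.5, (10, 64, 64))

x1 = frames[0::2].mean(axis=0)   # average of the five even frames: s + n1
x2 = frames[1::2].mean(axis=0)   # average of the five odd frames:  s + n2
# x1 and x2 share the same signal but carry independent noise, so a network E
# can be trained by minimizing, e.g., the mean squared error between E(x1) and x2.
```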

3.2 Noise2Void, Noise2Self, and deep image denoising

While Noise2Noise enables the training of CARE networks without ground-truth, recent work by Krull et al. (2018) shows how CARE training can be enabled on single image acquisitions. This training regime derives both the input and the target data from single noisy images. Motivated by the absence of a dedicated target image, this training approach is called Noise2Void. All nodes in a Convolutional Neural Network (CNN) operate on a subset of input pixels, jointly called their receptive field. Hence, output nodes in the last layer of a CARE network predict target pixel values after receiving information not from a single input pixel but from an entire image patch surrounding that pixel. The fundamental idea of Noise2Void is to take out the single pixel in the center of a receptive field, thereby creating a "blind-spot." We can then use this removed pixel value as the target for learning a network that predicts the value hidden in the blind-spot. Note that this target is not the ground-truth pixel value, since it is itself taken from the noisy input image. Hence, the same arguments that justify successful Noise2Noise training are also needed here. Two conditions must be fulfilled for Noise2Void training to succeed. The signal must be statistically interdependent across pixels, meaning that knowing the surroundings of a pixel allows an observer to predict that pixel's value. Additionally, the noise in the image needs to be pixel-wise independent (given the signal). A similar approach, called Noise2Self, was independently developed by Batson and Royer (2019). Their work provides a generalized theoretical background


on self-supervised denoising, which can be used to calibrate any parameterized denoising algorithm, from the single hyperparameter of a median filter to the millions of weights of a deep neural network. More recently, Laine et al. (2019) showed how a network architecture can be built that excludes the central pixel from its own receptive field, removing the necessity of leaving out a blind-spot during training. Additionally, they show how to further improve denoising results if only Gaussian noise is present.
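The blind-spot masking at the heart of Noise2Void can be sketched as follows. This is an illustrative reimplementation, not the actual Noise2Void code; the function name and data are made up, and replacement values are drawn uniformly from a square neighborhood around each blind-spot:

```python
import numpy as np

def make_blind_spots(patch, n_spots, radius, rng):
    """Replace n_spots random pixels by a random neighbor; return the masked
    input, the blind-spot coordinates, and the original (noisy) target values."""
    inp = patch.copy()
    h, w = patch.shape
    ys = rng.integers(radius, h - radius, n_spots)
    xs = rng.integers(radius, w - radius, n_spots)
    targets = patch[ys, xs].copy()            # noisy values serve as targets
    for y, x in zip(ys, xs):
        dy, dx = rng.integers(-radius, radius + 1, 2)   # offset may be (0, 0)
        inp[y, x] = patch[y + dy, x + dx]     # hide the value in the blind-spot
    return inp, (ys, xs), targets

rng = np.random.default_rng(2)
patch = rng.random((64, 64))                  # stands in for a noisy image patch
inp, (ys, xs), targets = make_blind_spots(patch, n_spots=64, radius=5, rng=rng)
# Training would evaluate the loss only at the blind-spot positions, e.g.,
# sum((network(inp)[ys, xs] - targets) ** 2), summed over many such patches.
```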

4 Experiments and results

In the following sections we describe how the various CARE training techniques introduced above can be applied to EM data.

4.1 CARE for SEM

In large-scale serial SEM imaging, scanning speed is one of the main limiting factors when acquiring large image volumes. Imaging an 8000 × 8000 pixel image at a rate of 1 MHz takes over a minute, while the same image can be acquired in a TEM with a four-camera array in less than a second. Despite this difference in imaging speed, SEM is often the method of choice: both focused ion beam (FIB) and serial block face (SBF) SEM are compatible with automatic sectioning and imaging of a block surface. Alternatively, several dozens of sections can be collected on slides as ribbons or automatically on a tape support (ATUM-tome; see chapter "Serial-section electron microscopy using automated tape-collecting ultramicrotome (ATUM)" by Baena et al. in this volume). The scanning speeds used in recent SEM connectomics projects are in the range of 0.5–4 MHz. One way to improve imaging speed is to capture more electrons with improved detectors. Another, highly technical and expensive, approach is the use of a multi-beam SEM with 91 parallel electron beams (Crosby, Eberle, & Zeidler, 2016). Here we present a way to speed up image acquisition with CARE. To this end, we imaged ultrathin sections (30 nm) of an EPON-embedded larva of the marine annelid worm Platynereis dumerilii using a Zeiss Gemini 500 SEM. Platynereis is an ideal specimen for whole-body connectomics. We collected sections as ribbons on conductive ITO glass (Pluk, Stokes, Lich, Wieringa, & Fransen, 2009). For post-staining, we used a solution of uranyl acetate and lead citrate. To train a CARE network, we acquired high-quality (ground-truth) images at 0.2 MHz with four-times averaging, and fast-scanned, low-quality images at a 5 MHz scanning speed. Not surprisingly, at such high speeds image quality deteriorates notably, rendering the resulting images unfit for downstream image analysis (e.g., connectome tracing).
With a good CARE model at hand, these low-quality images might be restored to a degree that re-enables downstream processing. In general, for CARE to work, one requires pixel-perfectly registered pairs of input and ground-truth images. While two SEM acquisitions of the same sample

are roughly aligned, a discrepancy of multiple pixels is common. Luckily, image registration is a well-understood problem, and a number of powerful methods are available (Klein, Staring, Murphy, Viergever, & Pluim, 2010; Schindelin et al., 2012; Thevenaz, 1998). We aligned the acquired high- and low-quality image pairs using the free Fiji plugin StackReg (Thevenaz, 1998). To train CARE, we extracted 32,768 randomly positioned image patches of size 128 × 128 from a total of eight images (jointly counting 471 megapixels). No additional patch augmentation was used. From the extracted patches we used 10% as validation data and trained a default CARE denoising network with depth 2, 5 × 5 kernels, and a linear activation function in the last layer. We used a batch size of 16, an initial learning rate of 0.0004, and the mean absolute error (MAE) as loss function. The network performing best on the validation set was used for testing. After training, we restored one fast-scanned, low-quality image that had been excluded from the training set. Additionally, we used two classical denoising algorithms as baselines: non-local means (Buades, Coll, & Morel, 2005; van der Walt et al., 2014) and BM3D (Dabov, Foi, Katkovnik, & Egiazarian, 2009). All results and the corresponding ground-truth (a slowly scanned and averaged image of the same sample) are shown in Fig. 1. In addition, we summarize computed PSNR and SSIM values for all baselines and our CARE results in Table 1.
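The patch extraction and validation split described above can be sketched as follows; the arrays are synthetic stand-ins for a registered low-/high-quality SEM image pair, and the code is illustrative rather than the actual CSBDeep data pipeline:

```python
import numpy as np

def extract_patches(low, high, n_patches, size, rng):
    """Cut randomly positioned, spatially corresponding patches from a
    registered pair of low- and high-quality images."""
    h, w = low.shape
    ys = rng.integers(0, h - size, n_patches)
    xs = rng.integers(0, w - size, n_patches)
    X = np.stack([low[y:y + size, x:x + size] for y, x in zip(ys, xs)])
    Y = np.stack([high[y:y + size, x:x + size] for y, x in zip(ys, xs)])
    return X, Y

rng = np.random.default_rng(3)
high = rng.random((512, 512))                    # stands in for ground-truth
low = high + rng.normal(0.0, 0.4, high.shape)    # stands in for a fast scan
X, Y = extract_patches(low, high, n_patches=256, size=128, rng=rng)

n_val = len(X) // 10                             # hold out 10% for validation
X_val, Y_val = X[:n_val], Y[:n_val]
X_train, Y_train = X[n_val:], Y[n_val:]
```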

4.2 CARE for 2D cryo-TEM projections

Cryo-transmission electron microscopy (cryo-TEM) allows imaging biological samples in their unaltered, hydrated state (Knapek & Dubochet, 1980). But this opportunity comes at a price: the contrast in acquired projections is typically very low, and contrast-enhancing heavy metal staining is de facto impossible. One way to improve contrast is to image with the sample out of focus (a defocus of 5 μm), which introduces phase contrast but reduces overall resolution. Another limiting factor in cryo-TEM is the available electron budget, i.e., the total number of electrons a sample can be exposed to before it experiences noticeable electron damage. Hence, cryo-TEM data is usually subject to quite low SNRs and low contrast. How raw image data acquired using a Gatan K2 direct electron detector can be used to train denoising networks was shown by Buchholz et al. (2018). Since ground-truth is unobtainable in cryo-TEM, they used Noise2Noise training. The detector acquires 10 short-exposure frames that are then aligned and added. Aligning these frames, splitting them into even/odd halves, and averaging the two resulting sets of five frames each creates two pixel-perfectly registered images with independent noise. For the data shown in Fig. 2, we used the standard U-Net of depth two from CSBDeep (see Section 5), 3 × 3 kernels, and a linear activation function at the last layer. We used the mean squared error as loss function. We extracted 1000 randomly selected patch pairs of size 128 × 128 and used 10% for validation. We used a batch size of 16 and an initial learning rate of 0.0004. The network performing best on the validation set


FIG. 1 Results of SEM-CARE. The upper row of images shows (A) the noisy input image (scanned at 5 MHz), and two baseline denoising methods, namely (B) Non-Local Means and (C) BM3D. The second row of images shows (D) SEM-CARE results, and (E) the ground-truth, i.e., an average of four scans at 0.2 MHz. The remaining two rows show the insets of (A–E) in respective order, additionally indicated by color and line-style.

was then used to reconstruct the two (even/odd) noisy images. The final restoration was created by pixel-averaging these two images. However, what can be done if a direct detector is not available and only single noisy projections can be acquired? While Noise2Noise training is not applicable,

Table 1 Quantitative measurements comparing the restoration results of the 5 MHz SEM acquisition restored with Non-Local Means (NLM), BM3D, and CARE to the 0.2 MHz, four-times-averaged ground-truth SEM acquisition.

Method                             | PSNR  | SSIM
Input (5 MHz)                      |  6.62 | 0.09
NLM (Buades, Coll, & Morel, 2005)  |  9.25 | 0.16
BM3D (Dabov et al., 2009)          |  9.41 | 0.37
SEM-CARE (ours)                    | 16.56 | 0.47
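For reference, PSNR values like those in Table 1 follow the standard definition sketched below on synthetic data (SSIM requires a windowed computation, e.g., skimage.metrics.structural_similarity, and is omitted here):

```python
import numpy as np

def psnr(gt, pred, data_range=1.0):
    """Peak signal-to-noise ratio in dB of pred against ground-truth gt."""
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(4)
gt = rng.random((128, 128))
noisy = np.clip(gt + rng.normal(0.0, 0.2, gt.shape), 0.0, 1.0)
denoised = np.clip(gt + rng.normal(0.0, 0.05, gt.shape), 0.0, 1.0)
# A better restoration has lower MSE against the ground-truth and hence a
# higher PSNR, which is how the methods in Table 1 are ranked.
```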

FIG. 2 Cryo-CARE result on a 2D cryo-TEM projection. Raw data are shown in (A), BM3D denoising results in (B). (C) and (D) show restorations of the same CARE network after being trained via Noise2Noise and Noise2Void, respectively.


self-supervised denoising methods, such as (Batson & Royer, 2019; Krull et al., 2018; Laine et al., 2019), offer an alternative. Using the same network as before, we switched from Noise2Noise training to the unsupervised alternative Noise2Void. In our experiment we used a batch size of 128 and a total of 64 blind-spots, with replacement pixels chosen uniformly from an 11 × 11 neighborhood (including the central blind-spot). Similar results are discussed in detail in (Krull et al., 2018). Note that Noise2Void is openly available on GitHub. Although only showcased in 2D in this book chapter, the available code can also be used to train CARE networks for 3D image data. In Fig. 2 we show a raw cryo-TEM projection, the denoising results of the powerful baseline method BM3D (Dabov et al., 2009), and the results of our CARE networks trained with Noise2Noise and Noise2Void. Note that Noise2Void restorations contain more residual noise than our Noise2Noise results, and additional ideas will be required to further improve the results achieved by Noise2Void-trained networks.

4.3 CARE for 3D cryo-tomograms

As before, cryo-electron tomograms can also be denoised with Noise2Noise-trained CARE networks. Short-exposure frames of dose-fractionated tilt-angle acquisitions are aligned and then split into even and odd halves. Each subset is then summed, resulting in two independently noisy projections per tilt angle. We use IMOD (Mastronarde & Held, 2017) to reconstruct two tomograms, giving us the opportunity to train a 3D Noise2Noise CARE network. More specifically, we used the standard U-Net of depth two from CSBDeep (see Section 5), 3 × 3 kernels, and a linear activation function at the last layer. We used the mean squared error as loss function. We extracted 1200 randomly selected sub-volume pairs of size 64 × 64 × 64 and used 10% for validation. We used a batch size of 16 and an initial learning rate of 0.0004. The network performing best on the validation set was then used to reconstruct the two (even/odd) noisy tomograms. The final restoration was created by pixel-averaging these two volumes. As shown in Fig. 3, content-aware denoising of cryo-electron tomograms can lead to greatly improved contrast. The tomograms we used contain cilia of Chlamydomonas reinhardtii, for which structures like the individual protofilaments of microtubules or the periodic structure of the glycocalyx are easily visible after restoration with CARE. Sizes and scales of all shown structures are in agreement with the literature on ciliary anatomy (Ichikawa et al., 2017). To further demonstrate the utility of cryo-CARE, Buchholz et al. (2018) applied a segmentation pipeline for outer dynein arms (ODAs) to raw and denoised tomograms. Since the same segmentation ground-truth was used, a direct comparison is possible. The segmentation model trained on cryo-CARE-denoised tomograms shows an overall increase in segmentation quality of about 10% (see Fig. 4 and Buchholz et al., 2018).

FIG. 3 Cryo-CARE result on a 3D cryo-TEM tomogram. (A) shows a slice through the raw tomogram, (B) the same slice through the denoised tomogram. The insets show the glycocalyx (dotted border) and protofilaments (dash-dotted border) in the raw tomogram and the corresponding regions in the denoised tomogram.

5 CSBDeep, an open CARE software package

CSBDeep (https://csbdeep.bioimagecomputing.com) is an open-source image restoration framework published with the initial CARE paper (Weigert et al., 2018). It contains everything needed to train and run a number of CARE use-cases on your own data. Additionally, it contains a number of well-documented examples and tutorials, allowing novice users to get familiar with training and using CARE networks. Most methods we present in this chapter make heavy use of the existing CSBDeep codebase, allowing readers to quickly replicate the solutions we propose. The Noise2Void code of (Buchholz et al., 2018; Krull et al., 2018) is available on GitHub (https://github.com/juglab/n2v). While network training is, so far, only available from within Python, we offer two non-Python ways to apply trained CARE networks to raw microscopy data, namely the CSBDeep plugins for Fiji (Schindelin et al., 2012) and workflow solutions for KNIME (Berthold et al., 2009). With these efforts we hope to maximize the number of research projects that can immediately benefit from the image restoration models we present.

FIG. 4 Automated downstream analysis on raw data (A) and cryo-CARE restored data (B). Ground-truth voxels are shown in violet, true-positives in turquoise, and false-positives in orange. Precision-recall plots for increasing minimum segment size are also shown (Buchholz et al., 2018). Pentagons indicate the results as they are shown in sub-figures (A) and (B).


6 Discussion

Here we have shown how state-of-the-art image restoration methods can be applied to electron microscopy data.

When using SEM, faster acquisition times are clearly desirable whenever very large image volumes need to be recorded. Our results using SEM-CARE indicate that 40–50-fold speed-ups can be achieved without substantial loss in quality. The low-quality acquisitions used in our experiments were acquired at a scanning speed 200 times faster than that of the ground-truth images. A more systematic study will be required to understand the limitations of SEM-CARE.

While for SEM reconstructions we could use straightforward supervised network training, TEM data required semi-supervised (Noise2Noise) or unsupervised (Noise2Void) training regimes. Still, it has to be mentioned that Noise2Void can be applied to data that was not acquired with CARE in mind, including old data that was so far not useful for downstream processing.

Today, we are only starting to explore the possibilities that CARE approaches will bring to microscopy-heavy research in the life sciences. Thanks to existing open software packages, such as CSBDeep, research projects can use CARE already today. It will be exciting to see in what creative new ways CARE will help researchers to image faster and/or at lower signal-to-noise ratios, and how this will facilitate improved automated quantitative analyses.

References

Batson, J., & Royer, L. (2019). Noise2Self: Blind denoising by self-supervision. arXiv. Retrieved from http://arxiv.org/abs/1901.11365.
Berthold, M. R., Cebron, N., Dill, F., Gabriel, T. R., Kötter, T., Meinl, T., et al. (2009). KNIME: The Konstanz Information Miner. ACM SIGKDD Explorations Newsletter, 11(1), 26–31.
Buades, A., Coll, B., & Morel, J.-M. (2005). A non-local algorithm for image denoising. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). https://doi.org/10.1109/cvpr.2005.38.
Buchholz, T.-O., Jordan, M., Pigino, G., & Jug, F. (2018). Cryo-CARE: Content-aware image restoration for cryo-transmission electron microscopy data. arXiv. Retrieved from http://arxiv.org/abs/1810.05420.
Crosby, K., Eberle, A. L., & Zeidler, D. (2016). Multi-beam SEM technology for high throughput imaging. MRS Advances, 1, 1915–1920. https://doi.org/10.1557/adv.2016.363.
Dabov, K., Foi, A., Katkovnik, V., & Egiazarian, K. (2009). BM3D image denoising with shape-adaptive principal component analysis. In SPARS'09, Signal Processing with Adaptive Sparse Structured Representations. Retrieved from https://hal.inria.fr/inria-00369582/.
Ichikawa, M., Liu, D., Kastritis, P. L., Basu, K., Hsu, T. C., Yang, S., et al. (2017). Subnanometre-resolution structure of the doublet microtubule reveals new classes of microtubule-associated proteins. Nature Communications, 8, 15035. https://doi.org/10.1038/ncomms15035.
Klein, S., Staring, M., Murphy, K., Viergever, M. A., & Pluim, J. P. W. (2010). elastix: A toolbox for intensity-based medical image registration. IEEE Transactions on Medical Imaging, 29(1), 196–205.
Knapek, E., & Dubochet, J. (1980). Beam damage to organic material is considerably reduced in cryo-electron microscopy. Journal of Molecular Biology, 141(2), 147–161.
Krull, A., Buchholz, T.-O., & Jug, F. (2018). Noise2Void: Learning denoising from single noisy images. arXiv. Retrieved from http://arxiv.org/abs/1811.10980.
Laine, S., Lehtinen, J., & Aila, T. (2019). Self-supervised deep image denoising. arXiv. Retrieved from http://arxiv.org/abs/1901.10277.
Lehtinen, J., Munkberg, J., Hasselgren, J., Laine, S., Karras, T., Aittala, M., et al. (2018). Noise2Noise: Learning image restoration without clean data. arXiv. Retrieved from http://arxiv.org/abs/1803.04189.
Mastronarde, D. N., & Held, S. R. (2017). Automated tilt series alignment and tomographic reconstruction in IMOD. Journal of Structural Biology, 197(2), 102–113.
Pluk, H., Stokes, D. J., Lich, B., Wieringa, B., & Fransen, J. (2009). Advantages of indium-tin oxide-coated glass slides in correlative scanning electron microscopy applications of uncoated cultured cells. Journal of Microscopy, 233(3), 353–363.
Schindelin, J., Arganda-Carreras, I., Frise, E., Kaynig, V., Longair, M., Pietzsch, T., et al. (2012). Fiji: An open-source platform for biological-image analysis. Nature Methods, 9(7), 676–682.
Thevenaz, P. (1998). StackReg: An ImageJ plugin for the recursive alignment of a stack of images. Biomedical Imaging Group, Swiss Federal Institute of Technology Lausanne.
van der Walt, S., Schönberger, J. L., Nunez-Iglesias, J., Boulogne, F., Warner, J. D., Yu, T., et al. (2014). scikit-image: Image processing in Python. PeerJ, 2, e453. https://doi.org/10.7717/peerj.453.
Weigert, M., Schmidt, U., Boothe, T., Müller, A., Dibrov, A., Jain, A., et al. (2018). Content-aware image restoration: Pushing the limits of fluorescence microscopy. Nature Methods, 15, 1090–1097.
Zheng, S. Q., Palovcak, E., Armache, J.-P., Verba, K. A., Cheng, Y., & Agard, D. A. (2017). MotionCor2: Anisotropic correction of beam-induced motion for improved cryo-electron microscopy. Nature Methods, 14(4), 331–332.
