Advances in Space Research xxx (2017) xxx–xxx
The fast co-adding algorithm of QCT

Yiding Ping a,b,*, Chen Zhang a,b,c

a Purple Mountain Observatory, Chinese Academy of Sciences, 2 West Beijing Road, Nanjing 210008, China
b Key Laboratory of Space Object and Debris Observation, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210008, China
c School of Astronomy and Space Science, University of Science and Technology of China, Hefei 230026, China
Received 21 February 2017; received in revised form 16 April 2017; accepted 14 May 2017
Abstract

This paper presents a fast co-adding algorithm designed to stack the images from the different channels of QCT in real time. The algorithm calculates the transformation coefficients for every single exposure to eliminate the effects of possible shifts of the lenses. The reprojection and co-adding applied here use a linear method similar to Drizzle, with a reasonable simplification to accelerate the computation. The whole calculation finishes in about 100 ms on a 3.4 GHz CPU with 4 cores, which fully meets the needs of space debris observation, while the limiting magnitude is improved by about 0.8. The co-adding results of our algorithm are very close to SWarp's, and even slightly better in terms of SNR.

© 2017 Published by Elsevier Ltd on behalf of COSPAR.
Keywords: Image-processing; Instrumentation; Space debris
1. Introduction

The Quad-Channel Telescope (QCT), located at the Yaoan site of Purple Mountain Observatory (PMO), is designed to conduct simultaneous four-color fast photometry of space debris, especially bright rocket bodies. QCT comprises four 33 cm-aperture lenses sharing a common mount and pointing direction (Fig. 1). Each channel is equipped with an Andor iXon DU888 CCD camera, which has a 1024 × 1024-pixel frame-transfer EMCCD chip. Frame-transfer CCDs are generally not suitable for astronomical imaging (Howell, 2006). However, the high readout rate of the DU888, up to 9 frames per second and even higher in subframe mode, together with the fact that it needs no shutter, makes it well suited to time-series photometry (Nather and Mukadam, 2004) of space debris, whose tumbling rates can be very fast (Meeus, 1974; Lin and Zhao, 2015).

* Corresponding author at: Purple Mountain Observatory, Chinese Academy of Sciences, 2 West Beijing Road, Nanjing 210008, China.
E-mail addresses: [email protected] (Y. Ping), [email protected] (C. Zhang).

Orbital measurement of space debris also takes a large share of QCT's observing time. To maximize the telescope's detecting ability, we usually remove the filters and co-add the images from the four channels. In this mode QCT works as a distributed-aperture telescope, whose images are stacked incoherently into one single image in software so as to obtain a larger equivalent aperture (Kaiser et al., 2002). Existing tools such as SWarp (Bertin et al., 2002) and Drizzle (Fruchter and Hook, 2002) can stack images, taken at the same or different epochs, into a single image, but they are not suitable in the case of QCT because of their relatively low speed. The main scientific targets of QCT are space debris, whose apparent motion across the sky, up to one degree per second, is much faster than that of normal astronomical objects. The tracking speed of the telescope must therefore be adjusted in real time to keep the tracking stable. We use feedback from the image, i.e. the target
http://dx.doi.org/10.1016/j.asr.2017.05.018
0273-1177/© 2017 Published by Elsevier Ltd on behalf of COSPAR.
Please cite this article in press as: Ping, Y., Zhang, C. The fast co-adding algorithm of QCT. Adv. Space Res. (2017), http://dx.doi.org/10.1016/j.asr.2017.05.018
Fig. 1. Quad-Channel Telescope at the Yaoan site (longitude = 101.18 degrees East, latitude = 25.53 degrees North) of Purple Mountain Observatory. Its smooth, fast tracking system, along with its wide field of view of 2.17 × 2.17 degrees, makes it suitable for observing space debris with high angular velocities.
positional offset from the image center, to determine the amount of speed adjustment, so that the position of the target on the image stays fixed during the observation. All of this ensures that the targets are imaged well rather than trailing into streaks, which are good for neither position determination nor photometry. The co-adding algorithm must therefore be fast enough to keep up with the frame rate of QCT, normally 4 frames per second, so that the tracking system receives the captured image in time to determine how much to adjust.

In this paper we present a method, developed to meet the real-time requirement of QCT observations, for co-adding the images taken by the four different channels. The algorithm, basically a straightforward linear combination method, can handle the slight exposure-to-exposure differences of the transformations, i.e. shifts and rotations, between the four channels' images. Most importantly, with a CPU cost of less than 100 ms per co-adding calculation on a 3.4 GHz CPU with 4 cores, the algorithm is fast enough to support QCT's observation of space debris.

2. The image registration of QCT

The apparent velocity of space debris is so high that, if the images of the 4 channels are not synchronized, the celestial coordinates of a target detected by the different channels of QCT can differ enormously. For instance, for a typical piece of space debris in low Earth orbit (LEO), moving across the sky at 0.5 degree per second, an exposure-time gap of only about 10 ms between channels leads to a pointing difference of about 18 arcseconds, i.e. roughly 2 to 3 pixels at QCT's image scale of about 7.66 arcseconds per pixel. As a result, the same piece of debris would produce 4 separate detections if we co-added the images by
aligning the field stars. Thus, to ensure the simultaneity of the 4 channels' exposures, we set all the cameras to external-trigger mode and trigger them with pulses generated by a signal generator.

Fig. 2 presents the 4 images of one single exposure of QCT. There are obvious shifts between the images, and closer scrutiny shows not only shifts, which come from the misalignment of the optical tubes and detectors, but also differences in rotation, detector plate scale, distortion, etc. Even those differences much smaller than the center shifts will significantly misalign the input images and so affect the co-adding results. Unlike the chip-to-chip variations in bias and quantum efficiency, which are corrected by the standard tasks of bias subtraction and flat-fielding, all these differences are geometric and can be described by established transformations to designated coordinates. The key problem to be solved, in order to co-add the images with acceptable precision, is therefore how to describe the geometric differences between the images from the different lenses.

In our initial design the differences were assumed constant, so that they could be described by one set of constant coefficients of the transformation function. It turned out, however, that these differences, especially the center shifts, vary slightly with the pointing. To locate the source of these shifts, we took a series of images pointing in different directions covering half of the sky, and performed astrometry on these images to determine the exact pointing values. Comparing these values shows that the center differences vary with the pointing (Fig. 3). Such variation is mostly caused by the different flexures of the telescopes' tubes, which is hard to overcome.
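The synchronization requirement discussed above is easy to check numerically. The short sketch below (Python; all numerical values are taken from the text of this section) converts an exposure-time gap between channels into a pixel offset on QCT's detectors:

```python
# Back-of-envelope check of the channel synchronization requirement:
# a trigger gap between channels for a LEO object moving at 0.5 deg/s,
# on QCT's 7.66 arcsec/pixel image scale.
RATE_DEG_PER_S = 0.5        # apparent motion of a typical LEO object
GAP_S = 0.010               # 10 ms exposure-time gap between channels
SCALE_ARCSEC_PER_PX = 7.66  # QCT image scale

offset_arcsec = RATE_DEG_PER_S * 3600.0 * GAP_S   # 18 arcsec
offset_px = offset_arcsec / SCALE_ARCSEC_PER_PX   # roughly 2-3 pixels

print(f"offset = {offset_arcsec:.0f} arcsec = {offset_px:.2f} px")
```

An offset of this size would split one piece of debris into multiple detections after stacking, which is why the cameras are driven from a common external trigger.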
Consequently, we cannot represent the differences by a transformation function with one set of constant coefficients that stays essentially unchanged with time and pointing. If we could, we would calculate the coefficients in advance, co-add the images with them, and re-calculate the coefficients only once in a while to maintain accuracy. The unexpected pointing-dependent shifts make this impossible: we have to re-calculate the transformation coefficients after every exposure to eliminate the effects of the shifts. This solution has two drawbacks: the computing time increases significantly, and no co-adding can be done when there are no observable stars in the field of view.

We use I_1, I_2, I_3 and I_4 to denote the images from channels 1 to 4 respectively, and R_1, R_2, R_3 to denote the transformations from I_1, I_2, I_3 to I_4. To save computing time, we choose I_4 as the reference image and stack the other three channels' images onto it, instead of adding all 4 images onto a common distortion-free frame in the way SWarp does. Consequently, we only have to calculate the transformation and do the projection
Fig. 2. A typical exposure of QCT. Obvious shifts between the images are easily seen; in addition, their rotations differ slightly from one another, as do the distortions.
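The inter-channel shifts visible in Fig. 2 can be estimated from matched star centroids. The sketch below (Python with NumPy; the function and variable names are ours) recovers only the dominant shift term, whereas the paper fits a full third-order transformation for every exposure; it is illustrative, not the operational code:

```python
import numpy as np

def estimate_shift(stars_ref, stars_in, tol=5.0):
    """Estimate the dominant (dx, dy) offset between two channels from
    star centroids.  stars_ref and stars_in are (N, 2) arrays of (x, y)
    positions measured on the reference and input images respectively."""
    stars_ref = np.asarray(stars_ref, dtype=float)
    stars_in = np.asarray(stars_in, dtype=float)
    # first guess: difference of the mean positions of the two star clouds
    guess = stars_ref.mean(axis=0) - stars_in.mean(axis=0)
    # pair each input star with its nearest reference star after the guess
    d = stars_ref[None, :, :] - (stars_in[:, None, :] + guess)
    nearest = np.argmin((d ** 2).sum(axis=2), axis=1)
    resid = stars_ref[nearest] - (stars_in + guess)
    # keep plausible pairs only and refine with a robust median
    ok = (np.abs(resid) < tol).all(axis=1)
    return guess + np.median(resid[ok], axis=0)
```

With star lists from two channels of the same exposure, this returns shifts of the order of those seen in Fig. 3 (tens of pixels); rotation, scale and distortion terms require the full polynomial fit of Section 2.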
Fig. 3. Center shifts in pixels as functions of hour angle and declination. A clear correspondence between the hour-angle axis and the x coordinate, and between declination and the y coordinate, can be seen in the plots.
three times, which saves some time, since the efficiency of the algorithm is our main concern.

We represent the transformation model as the dependence of the location (X, Y) on the reference image on the position on the target images, using a polynomial of order 3 in x and in y, including all cross terms:

X_n = \sum_{i=0}^{3} \sum_{j=0}^{i} a_{ij} x_n^j y_n^{i-j},
Y_n = \sum_{i=0}^{3} \sum_{j=0}^{i} b_{ij} x_n^j y_n^{i-j}    (1)
where n denotes the channel number, from 1 to 3. The calculation of the transformation function therefore involves the following steps:

1. Correct the pointing by applying the pointing model.
2. Read the exposure time from the clock and predict the star positions on the images from the telescope pointing, the image scale and the image size.
3. Measure the positions of stars near the predicted positions.
4. Find the common stars appearing on all 4 images.
5. Fit the transformation function with the positions on the 4 images.

3. Algorithm of reprojection and co-adding

The raw image data of the 4 channels are bias- and flat-corrected immediately after readout. Then the reconstruction, i.e. mapping I_1, I_2 and I_3 onto I_4, is carried out. Our co-adding algorithm is a linear and fairly straightforward method. It is similar to Drizzle, which was developed for combining the dithered images of the Hubble Deep Field by mapping pixels of the original input images into pixels of a subsampled output image, taking into account the shifts and rotations between images and the optical distortion of the camera. A linear method needs no knowledge of the point spread function (PSF), and it introduces none of the extra noise caused by the convolutions of nonlinear methods.

The basic concept of our algorithm is shown in Fig. 4. The method is similar to Drizzle with pixfrac = 1.0: an input pixel is mapped into the output image with its shift, rotation and distortion all taken into account. One direct solution is re-sampling, i.e. dividing every input pixel into a subsampled grid whose cell size is the iteration step; the positions of these sub-pixels on the output image are calculated and their flux is added to the destination pixels. We first applied such a method, which replicates each input pixel onto a finer sub-sampled grid, shifts it into place and adds it to the output image, much like the shift-and-add method (Christou, 1991). The quality of this method depends on the size of the re-sampling step: the smaller the step, the higher the quality. On the other hand, this means the method demands considerable computing power.
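The fit in steps 4 and 5 above can be written compactly as ordinary least squares. A minimal NumPy sketch (our own naming; the paper does not specify its solver) builds the ten third-order monomials of Eq. (1) and solves for the a_ij and b_ij coefficients from matched star positions:

```python
import numpy as np

def design_matrix(x, y):
    """Monomials x**j * y**(i-j) for i = 0..3, j = 0..i: the ten terms
    of the third-order transformation in Eq. (1)."""
    cols = [x ** j * y ** (i - j) for i in range(4) for j in range(i + 1)]
    return np.stack(cols, axis=1)

def fit_transform(xy_in, xy_ref):
    """Least-squares fit of the a_ij, b_ij coefficients from matched
    star positions on an input image and on the reference image I4."""
    A = design_matrix(xy_in[:, 0], xy_in[:, 1])
    a, *_ = np.linalg.lstsq(A, xy_ref[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, xy_ref[:, 1], rcond=None)
    return a, b

def apply_transform(a, b, xy):
    """Map input-image positions into reference-image coordinates."""
    A = design_matrix(xy[:, 0], xy[:, 1])
    return np.stack([A @ a, A @ b], axis=1)
```

Ten coefficients per axis need at least ten well-distributed common stars; with fewer stars in the field, a lower polynomial order would have to be used.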
Tests showed that we could not obtain acceptable co-adding quality, i.e. no obvious
Fig. 5. The simplification of the reprojection. For any input pixel, the position of its center on the output image, (x_c, y_c), is calculated with the transformation functions, while its own rotation and distortion are neglected. The positions of its four corners in the output image, along with the areas it overlaps with the four pixels of I_4, can then be obtained by simple arithmetic. The fluxes redistributed to the four output pixels are proportional to these areas.
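The redistribution sketched in Fig. 5 amounts to bilinear, cloud-in-cell-style splatting. The following sketch (Python with NumPy; our own naming, and only a sketch, since the operational implementation is multi-threaded) places each input pixel by its transformed center only and shares its flux among the four overlapped output pixels in proportion to the overlap areas, while also accumulating the per-pixel coverage needed later for the weight map:

```python
import numpy as np

def cic_coadd(img_in, xc, yc, out, wmap):
    """Redistribute each input pixel's flux into the output image using
    the simplified scheme of Fig. 5: only the transformed center (xc, yc)
    of each pixel is used; the pixel's own rotation and distortion are
    ignored.  xc and yc have the same shape as img_in and give each pixel
    center in output coordinates, where output pixel i spans [i-0.5, i+0.5).
    Flux and overlap area are accumulated into out and wmap in place."""
    ny, nx = out.shape
    i0 = np.floor(xc).astype(int)
    j0 = np.floor(yc).astype(int)
    fx = xc - i0  # fractional offsets give the overlap widths
    fy = yc - j0
    for di, dj, w in [(0, 0, (1 - fx) * (1 - fy)),
                      (1, 0, fx * (1 - fy)),
                      (0, 1, (1 - fx) * fy),
                      (1, 1, fx * fy)]:
        ii, jj = i0 + di, j0 + dj
        ok = (ii >= 0) & (ii < nx) & (jj >= 0) & (jj < ny)
        # Eq. (2): flux given to each output pixel is I times the overlap area
        np.add.at(out, (jj[ok], ii[ok]), (img_in * w)[ok])
        np.add.at(wmap, (jj[ok], ii[ok]), w[ok])
```

The four weights sum to one for every input pixel, so flux is conserved wherever the pixel lands fully inside the output frame; `np.add.at` is used because several input pixels may hit the same output pixel.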
mismatching pattern due to low resolution visible to the naked eye in the result, unless we re-sampled every input pixel with at least a 10 × 10 sub-pixel grid; a grid of around 100 × 100 sub-pixels is necessary for a good output. However, the CPU time consumed can then exceed ten minutes even on the fastest CPUs available. Multi-thread programming helps, but clearly not enough for the requirement of real-time space debris tracking.

To resolve this problem we had to simplify the co-adding algorithm. We apply a concept from the Cloud in Cell (CIC) algorithm (Hockney and Eastwood, 1988) whereby we approximate the transformation of each pixel (Fig. 5). For every single pixel we apply the coordinate transformation only to its center, ignoring the rotation and distortion of the pixel itself, so that an input pixel is placed on the output image
Fig. 4. The concept of QCT's co-adding algorithm. The left side of the figure shows how other solutions like Drizzle map input pixels to the output image, with the distortion and rotation of the pixel itself exaggerated; in these algorithms the distortion and rotation of the pixels are taken into account in the transformation. In QCT we simplify the algorithm by ignoring them, as shown on the right, which accelerates the computation tremendously.
Please cite this article in press as: Ping, Y., Zhang, C. The fast co-adding algorithm of QCT. Adv. Space Res. (2017), http://dx.doi.org/10.1016/j. asr.2017.05.018
Y. Ping, C. Zhang / Advances in Space Research xxx (2017) xxx–xxx
5
Fig. 6. A typical weight map of co-adding. Four distinct levels in different colors are visible, representing the overlap of 1, 2, 3 or 4 input images. The values of the orange part are very close to 1, so almost no correction is needed for these pixels. The values of the purple part at the lower-left corner are very close to 4, which scales those values to the same level as the other parts. The yellow and green parts represent overlaps of 2 and 3 input images respectively. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 7. Demonstration of the result of QCT's co-adding algorithm. The left panel is part of one original I_4 image; the right panel is its counterpart in the co-added image. Many stars that are entirely absent on the left can now be seen clearly on the right.
with only a shift, without any rotation or distortion of its own. We therefore have to evaluate the transformation function only once per input pixel, to obtain its position in the output image. In Fig. 5, (x_c, y_c) are the coordinates of the input pixel's center in the output. Since its rotation and distortion are ignored, its relation to the output image is as in the right panel of Fig. 4, and the coordinates of its four corners in the output image follow by shifting (x_c, y_c) by half a pixel (Fig. 5). As Fig. 5 shows, the input pixel is divided into 4 rectangles overlapping 4 different pixels of the output image. Denoting the positions of the four corners by (x_1, y_1), (x_2, y_2), (x_3, y_3) and (x_4, y_4), the side lengths of the 4 rectangles are \Delta x_n = |x_n - x_c| and \Delta y_n = |y_n - y_c| for n = 1, 2, 3, 4, and their areas are s_n = \Delta x_n \Delta y_n. The flux of the input pixel is then easily distributed over the 4 output pixels in proportion to the overlaid areas:

I_n = I s_n    (2)

where I is the flux of the input pixel. Applying this simplified reprojection algorithm accelerates the co-adding calculation tremendously, while the difference of the result is negligible compared with what is obtained by the re-sampling
method. The difference in the resulting pixel values is normally smaller than one thousandth.

In addition, we improve the computing efficiency by multi-threading. The most computationally intensive parts of our algorithm, the reprojections and flux reallocations, are independent for each input image and therefore well suited to parallelization. Our working machine is an industrial computer with an Intel i7 3770 CPU, which has 4 cores. Our implementation, which uses single-precision floating-point arithmetic for the co-adding, creates three threads occupying three cores, each responsible for the reprojection and flux reallocation of one input image. The remaining core stays available for the software's other tasks, such as axis control, camera manipulation and mission scheduling, so the whole system keeps running smoothly despite the intensive computation. Tests show that the overall processing time for one single exposure under real working conditions, including image reduction, measurement of stellar positions, determination of the transformation parameters, reprojections and flux reallocations, can be as short as 100 ms. The efficiency of the algorithm therefore fully satisfies the real-time needs of our observation system.

Owing to the shifts and rotations of the 4 channels, a small part of the output image near the edges is covered by fewer than 4 input images, and the pixel values in these areas are smaller than those stacked from 4 input images. These pixels can still be useful. To correct their values, we record the sums of the pixel overlap areas during co-adding and use them to build a weight for every pixel:

W(x, y) = 4 / \sum_{i=1}^{4} s_i(x, y)    (3)

and correct every pixel as I_w(x, y) = W(x, y) I(x, y). Fig. 6 shows a typical weight map, from which the large shifts of the images, up to dozens of pixels, can also be seen.

4. Results and analysis

Fig. 7 compares a pair of example images before and after co-adding. The original image's exposure time is 100 ms, a typical exposure time used by QCT for space debris observation. In the right panel many more stars can be seen clearly than in the original: before co-adding they were so faint that they were hidden by noise, and the co-adding brings them out through the higher signal-to-noise ratios (SNR).

To test the algorithm quantitatively, we scanned the 100 ms exposures and the co-added images with SExtractor (Bertin and Arnouts, 1996) using the same scanning parameters. The numbers of extracted stars from 10 sample exposures are listed in Table 1; the co-added images yield roughly twice as many stars as the originals. The differences in counts between the original channels are caused by the shifts of the telescopes' fields of view.

Table 1
The numbers of stars extracted from the original and co-added images. The counts for the co-added images roughly double those of I_4 because there is no shift between them. As discussed above, there are shifts of the fields of view between channels, which is also why the counts differ between channels.

No.     I_1    I_2    I_3    I_4    Co-added
1       111    103    123    103    187
2       196    153    179    172    290
3       194    152    192    164    309
4       116    100    119    107    210
5       108     78    103     91    192
6        83     72     79     71    157
7        90     73     87     78    160
8       119     95    105    104    191
9       238    189    228    202    394
10      404    374    428    382    732
Total  1659   1389   1643   1474   2822

Fig. 8. Magnitude distributions of the stars extracted from I_4 (darker bars) and from the co-added images (lighter bars); all bars are normalized to the latter's maximum count. Both distributions are smoothed to show their shapes more clearly (dashed lines), and the positions of the distribution maxima are labeled in the figure.

Fig. 8 shows the magnitude distributions of these stars. The darker bars, from I_4, peak at about magnitude 11.9, while the lighter bars, from the co-added images, peak at about 12.7: co-adding gains roughly 0.8 magnitudes of detection ability. A comparison of full-width-half-maximum (FWHM) and position was carried out for the 1474 detections common to I_4 and the co-added images. The average FWHM of the detections is 3.292 pixels in the original images and 3.299 pixels after co-adding, which indicates that
the co-adding algorithm has no obvious impact on the PSF. The standard deviations of the positions in the co-added image with respect to those in I_4 are 0.22 and 0.23 pixels in the x and y coordinates respectively. Since the position determination of space debris itself usually carries much larger uncertainties, this precision is good enough for us.

We also compared the results of our algorithm with SWarp's; Fig. 9 shows one example. We performed astrometry on the input images with SCAMP to obtain the WCS keywords required by SWarp, which was then run to produce co-added images. Afterwards we scanned the output images of both algorithms with SExtractor using the same parameters. The numbers of extracted sources are quite similar, and each algorithm has a few detections missed by the other. We also checked the SNRs of the detections (Fig. 9). The SNRs from the two algorithms are very close, while in most cases our algorithm yields slightly higher SNRs than SWarp. Several things could explain this result, such as the method of position determination, the selection of the field stars used for astrometry, the different ways of performing the stacking, and the difference of the destination images. Finding the answer needs a more thorough investigation and closer scrutiny, which will be our near-future work.

Fig. 9. Comparison of the detections' SNRs between SWarp and our algorithm. The top panel is part of one co-added image obtained by SWarp; the bottom panel is the result of our algorithm. All detectable sources are labeled with their SNRs.

5. Summary

QCT, a telescope designed for 4-color simultaneous photometry of space debris, is capable of positional observation as well. In this operating mode we co-add the images of all 4 channels to maximize the detection ability. To meet the real-time requirement we developed a fast co-adding algorithm which handles the shifts and rotations of the images and, with the help of a reasonable simplification, finishes the co-adding calculation in about 100 ms on a 3.4 GHz CPU with 4 cores. Checking the result images, we find that the standard deviations of the positions in the co-added image with respect to those in I_4 are about 0.2 pixels in both coordinates. The results of our fast algorithm are very similar to, and in terms of detection SNR even slightly better than, those of SWarp; since SWarp is not efficient enough to be embedded in a real-time observation system, our algorithm is clearly the better choice. Whether there are systematic errors, and why our algorithm has a slightly better average SNR than SWarp, will be studied in our next work.
Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant Nos. 11373071, 11673070 and 11603082). We are also very grateful for the help of Chengmin Lei and Zhanwei Xu, whose hard work on the construction of QCT made this paper possible. We sincerely thank the two anonymous reviewers, whose invaluable comments and suggestions substantially helped us improve and clarify the manuscript.

References

Bertin, E., Arnouts, S., 1996. SExtractor: Software for source extraction. Astron. Astrophys. Suppl. Ser. 117, 393–404.
Bertin, E., Mellier, Y., Radovich, M., et al., 2002. The TERAPIX pipeline. ASP Conf. Ser. 281, 228.
Christou, J.C., 1991. Image quality, tip-tilt correction, and shift-and-add infrared imaging. PASP 103, 1040–1048. Fruchter, A.S., Hook, R.N., 2002. Drizzle: a method for the linear reconstruction of undersampled images. PASP 114, 144–152. Hockney, R.W., Eastwood, J.W., 1988. Computer Simulation using Particles. CRC Press. Howell, S.B., 2006. Handbook of CCD Astronomy, second ed. Cambridge University Press, New York, p. 18.
Kaiser, N., Aussel, H., Burke, B.E., et al., 2002. Pan-STARRS: a large synoptic survey telescope array. SPIE 4836, 154–164. Lin, H.Y., Zhao, C.Y., 2015. Evolution of the rotational motion of space debris acted upon by eddy current torque. Ap&SS 357, 167. Meeus, J., 1974. Satellites artificiels: observations de periodes photometriques 1971-1973. Ciel et Terre 90, 201. Nather, R.E., Mukadam, A.S., 2004. A CCD time-series photometer. ApJ 605, 846–853.