COMPRESSIVE SAMPLING-BASED STRATEGY FOR ENHANCING ADCS RESOLUTION
Aldo Baccigalupi, Mauro D'Arco, Annalisa Liccardo, Rosario Schiano Lo Moriello
Università degli Studi di Napoli Federico II, Dipartimento di Ingegneria Elettrica e delle Tecnologie dell’Informazione, Via Claudio 21, I-80125, Napoli, Italia {baccigal, darco, aliccard, rschiano}@unina.it
Abstract - The paper deals with the problem of simultaneously enhancing both the horizontal and the vertical resolution of analog-to-digital converters, with specific regard to low-cost conversion systems. To this aim, the authors propose the combined exploitation of a suitable Compressive Sampling (CS) approach and a proper digital signal processing stage. In particular, starting from a reduced number of digitized samples, the proposed CS-based sampling approach makes it possible to recover an oversampled version of the input signal, whose spectral content is properly shaped to reject most of the in-band noise. The successive processing stage, implementing a low-pass filter, is in charge of drastically attenuating the out-of-band noise components. Tests carried out on an actual microcontroller (namely, the PIC32MX360F512L by Microchip) evidence the promising performance of the proposed sampling strategy. Results obtained on both single-tone and multisine signals highlight a gain of up to 3.5 bits in vertical resolution, while the equivalent sample rate is increased by a factor of 50 with respect to the rate actually adopted to randomly sample the input signal of interest.
Keywords: Sampling frequency increase, ADC resolution enhancement, Compressive sampling, Penalty function, Optimized multisine.
I. INTRODUCTION
The performance offered by many applications based on low-cost devices is often limited by the following characteristics of the available data acquisition system: (i) maximum sampling rate, (ii) effective number of bits, and (iii) acquisition memory depth. Traditional approaches to overcome the problem of the limited sample rate are based on equivalent-time sampling, either synchronous or random, or alternatively on band-pass sampling [1]. However, these acquisition modes impose strict constraints in order to properly reconstruct the input signal and/or avoid aliasing [2]. In fact, since typically less than one sample per period is digitized, a long observation interval is needed for signal reconstruction. As a consequence, stringent requirements on the time base are imposed in order to prevent harmful signal degradation due to jitter [3,4].
Moreover, especially for random equivalent-time sampling, there is no guarantee that the whole signal will be sampled in the observation interval. Finally, the non-ideal behaviour of both the time base and the ADC results in an unwanted deterioration of the signal-to-noise ratio. With regard to the vertical resolution, typical solutions, referred to as HiRes [5] or enhanced resolution [6], are mainly based on preliminary signal oversampling followed by filtering (either low-pass or band-pass) and decimation stages. Unfortunately, the oversampling approach requires acquisition units characterized by a data memory depth largely oversized with respect to the number of samples produced by the decimation filter. According to the adopted oversampling factor, OVF, improvements up to 0.5·log₂(OVF) bits can in theory be granted, to the detriment of the final effective sample rate [7]. Taking advantage of a novel sampling strategy, referred to as compressive sampling (CS) [8], some papers have recently been presented in the literature to face the considered problems. Most of them are based on the so-called random modulation approach [9], which proves to be effective in enhancing the performance of the ADC in terms of sampling rate. However, the modulation stage requires the implementation of a proper hardware section, thus preventing its use in systems already available on the market. Starting from their previous experience on data acquisition devices, the authors propose a new sampling strategy based on the CS paradigm and a suitable digital signal processing stage to overcome the abovementioned limitations, thus making it possible to simultaneously increase the maximum sample rate and the vertical resolution of traditional ADCs. To this aim, the method does not require any hardware modification, but only the availability of a high-frequency time base (such as the fundamental clock of microcontrollers), thus making it well suited to most of the ADCs already available. From an operating point of view, the method exploits a hardware section, constituted by the traditional ADC, combined with a proper software procedure in charge of (i) defining a suitable random sequence of sampling instants, (ii) reconstructing the signal of interest at a high rate according to the CS theory, and (iii) suitably filtering the reconstructed signal with a low-pass filter in order to enhance the vertical resolution.
II. THE PROPOSED METHOD

The fundamental steps of the acquisition strategy proposed in the paper are described in detail in the following; for the sake of clarity, an application example is used for this purpose. In particular, the proposed example considers an input signal that is first digitized by means of a low-rate, low-resolution ADC; according to the CS approach, a reduced number of random samples is acquired. The input signal is then reconstructed at a high rate through CS, taking advantage of the signal sparsity to attain a first slight improvement of the vertical resolution. Superior performance is finally achieved thanks to the application of a proper filter tuned on the Nyquist bandwidth of the considered ADC.

A. Recovery of k-sparse signals through Compressive Sampling

The proposed acquisition method takes advantage of the theory of Compressive Sampling, which assures that a signal s ∈ ℝⁿ can be reconstructed by means of a number of measurements y ∈ ℝᵐ, with m ≪ n, related by:

$y = \Phi s = \Phi \Psi x = \Theta x$    (1)
where Φ ∈ ℝᵐˣⁿ models the acquisition process, Ψ is the transformation matrix representing the selected representation basis, such that the product Ψx gives the signal in the time domain, and Θ = ΦΨ is the so-called sensing matrix. In order to assure that the columns of Θ are linearly independent, Φ and Ψ have to be uncorrelated; therefore, the sampling matrix Φ is usually realized as a random sequence of Dirac pulses, which proves to be uncorrelated with most of the traditional orthonormal bases. Even though the model in (1) is employed in many applications related to CS theory, the involved equality constraint states that the measurement vector y is characterized by negligible uncertainty [10]. In most measurement applications the uncertainty associated with the acquired samples in y has, indeed, to be taken into account. To this aim, the model (1) is modified, relaxing the equality constraint and turning it into the following inequality:
$\|\Theta x - y\|_\infty < \varepsilon$    (2)
where the ∞-norm is conventionally defined as the maximum absolute value of the components of a vector, ‖v‖∞ = maxᵢ|vᵢ|, and ε is a tolerance value, usually set according to a-priori information about the noise and non-linearity affecting the acquired samples [11]. Obviously, a wide variety of vectors satisfies inequality (2); therefore, an optimization criterion has to be added to the problem in order to force the solver algorithm to prefer one of the several admissible solutions. The optimization criterion commonly proposed is based on the ℓ0-norm, defined as the number of vector components different from zero, and permits selecting the sparsest solution among the admissible ones. The optimization problem, in this case, would however be unstable and computationally NP-hard [12]. Recent literature proves that ℓ1-norm minimization leads to solutions very close to those granted by ℓ0-norm minimization. Moreover, the minimization of the ℓ1-norm, which is defined in terms of the convex function ‖x‖₁ = Σᵢ|xᵢ|, can be carried out by means of well-assessed algorithms characterized by a much lower computational burden. For instance, the vector x̂ can be obtained by solving the basis pursuit model:

$\hat{x} = \arg\min_{x \in \mathbb{R}^n} \|x\|_1 \quad \text{subject to} \quad \|\Theta x - y\|_\infty < \varepsilon$    (3)
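As an illustration only, the following minimal Python sketch sets up problem (3) with a generic convex solver; it assumes the numpy, scipy and cvxpy packages, uses a real-valued discrete cosine basis in place of the complex Fourier basis adopted later in the paper, and all names and sizes are merely illustrative.

```python
import numpy as np
import cvxpy as cp
from scipy.fft import idct

n, m = 512, 64                                  # record length and number of random samples
rng = np.random.default_rng(0)

# Sparse representation x and corresponding time-domain signal s = Psi @ x
Psi = idct(np.eye(n), axis=0, norm='ortho')     # columns of the (real) inverse-DCT basis
x_true = np.zeros(n)
x_true[[3, 17, 40]] = [1.0, 0.6, -0.4]
s = Psi @ x_true

# Random acquisition: Phi selects m rows (random sampling instants), Theta = Phi @ Psi
idx = np.sort(rng.choice(n, size=m, replace=False))
Phi = np.eye(n)[idx]
Theta = Phi @ Psi
y = Phi @ s + 1e-3 * rng.standard_normal(m)     # measurements affected by a small uncertainty

# Basis pursuit with the inequality constraint of eq. (3)
eps = 5e-3
x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm1(x)),
                     [cp.norm(Theta @ x - y, 'inf') <= eps])
problem.solve()
s_rec = Psi @ x.value                           # reconstructed time-domain signal
```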
Equation (3) with the accompanying constraint can be cast into a linear programming (LP) or second-order cone programming (SOCP) problem and can efficiently be solved by means of typical methods deeply discussed in the literature, such as the gradient descent or the interior point method [13].

B. Increase of equivalent sampling rate

The signal observed in the interval Tm is reconstructed and represented over the same interval in terms of n estimated samples, obtained by processing the set of the m (m ≪ n) acquired samples, whose sampling instants are defined as integer multiples of the considered time-base period. This is the reason that makes the approach underlying CS particularly attractive when ADCs integrated in low-cost embedded systems are taken into account. Such converters are usually realized through the traditional successive approximation architecture, and are characterized by a nominal number of bits ranging from 8 to 12 and a sample rate barely greater than 5 MHz. On the other hand, the associated microcontrollers can offer operating frequencies ranging from 16 up to a few hundred MHz, which are mostly unused. The on-board cache memory capable of operating at the same frequencies is, in fact, too limited, since it is not intended for large data storage. Nevertheless, the higher speed of the microcontroller, in conjunction with the clock conditioning circuitry typically available on the same hardware, can be exploited to finely control the time instants at which the signal is sampled. Hereinafter, CS theory is first exploited to improve the equivalent sample rate of low-cost ADCs included in traditional microcontrollers.

Fig. 1. Acquired and reconstructed samples in CS-based approach.

As shown in Fig. 1, the key idea to improve the sample rate is the exploitation of a high resolution time base to accurately determine the random instants corresponding to the start of conversion (SOC) of the input signal samples. To assure reliable operation, the random sampling matrix is defined in such a way that the time difference between two successive sampling instants is always greater than the time 1/fs (fs being the maximum available sample rate) needed by the ADC to acquire and digitize a single sample. So, for the m samples indicated by circle markers in Fig. 1, digitized at the non-uniformly spaced instants ts1, ts2, ..., tsm, the following condition:
$t_{s_{i+1}} - t_{s_i} \geq \frac{1}{f_s}, \qquad i = 1, 2, \ldots, m$    (4)
is satisfied. As can be appreciated in Fig. 1, the combined exploitation of the high resolution time base and the CS approach makes it possible to achieve a number of samples much higher than that available if the original sample rate of the ADC were adopted. In particular, each considered sampling instant can be expressed as an integer multiple of the fundamental period (referred to as Tck = 1/fck, fck being the clock frequency of the high resolution time base):

$t_{s_i} = k_i \, T_{ck}, \qquad i = 1, \ldots, m, \quad k_i \in \mathbb{N}$    (5)
If this is the case, eq. (4) can be rewritten as:

$k_{i+1} - k_i \geq \frac{f_{ck}}{f_s}, \qquad i = 1, 2, \ldots, m$    (6)
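As an illustration of how such a constrained random sequence of start-of-conversion counts might be drawn in practice, a minimal Python sketch is given below; the function name, the record length n and the use of numpy are assumptions and do not reflect the actual firmware implementation.

```python
import numpy as np

def random_soc_counts(m, n, min_gap, rng=None):
    """Draw m increasing clock counts in [0, n) with k[i+1] - k[i] >= min_gap (eq. 6).

    m       : number of samples to acquire
    n       : total number of clock periods in the observation interval (assumed value)
    min_gap : minimum spacing between start-of-conversion events, about fck / fs
    """
    rng = rng or np.random.default_rng()
    # Spread the slack beyond the mandatory spacing randomly over the m events
    slack = n - (m - 1) * min_gap - 1
    if slack < 0:
        raise ValueError("observation interval too short for m samples")
    extra = np.sort(rng.choice(slack + 1, size=m, replace=True))
    return extra + min_gap * np.arange(m)

# Example with the values used later in the experiments: fck = 20 MHz, fs = 400 kS/s,
# i.e. a minimum spacing of 50 counts of the 50 ns time base
k = random_soc_counts(m=200, n=20000, min_gap=50)
t_soc = k * 50e-9          # sampling instants in seconds, as in eq. (5)
```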
The values of kᵢ are adopted to generate the sampling matrix Φ [15]. With regard to the transformation matrix, the traditional Fourier basis is chosen to minimize the coherence and, consequently, the number of samples needed to correctly reconstruct the signal of interest. This way, the vector x in (3) turns out to be the complex spectrum of the input signal and, as a consequence, the matrix Ψ implements the inverse Fourier transform; the estimate of the desired spectrum is gained by solving the convex optimization problem (3). It is worth noting that the chosen transformation matrix proves to be the best one in terms of coherence with the considered sampling matrix, thus granting the possibility of reconstructing the input signal with a very reduced number of acquired samples. The signal of interest, whose reconstruction is represented in Fig. 1 through the dotted line, is finally achieved by applying the inverse Fourier transform to the obtained spectrum. As can be appreciated, the signal of interest is represented in the observation interval by means of n samples uniformly spaced at a sample interval equal to Tck.

C. Enhancement of vertical resolution

It can be shown that, by taking advantage of the virtual sample rate increase, an enhancement of the vertical resolution of low-cost Data Acquisition Systems (DAS) is obtained. In fact, although the maximum DAS sample rate is fs, the reconstructed signal plotted in Fig. 1 can be considered as acquired with an oversampling factor equal to fck/fs. Moreover, the gained oversampling inherently modifies the initial quantization noise (either nominal or actual) affecting the signal: some noise contributions are spread over the wider frequency range corresponding to the increased sample rate. The noise contributions external to the band of interest can be cut off according to the well-known approaches that apply a proper filter to benefit from the higher sample rate. As an example, let us suppose to uniformly acquire M samples of a sinusoidal signal of frequency f by means of an ADC characterized by its effective number of bits (ENOB) and its maximum sample rate fs, finely controlled by a fundamental clock signal with frequency fck. For the sake of clarity, Fig. 2 shows the single-sided amplitude spectrum of the digitized signal, scaled in dB, where the following values have been assumed: M = 200, f = 10 kHz, ENOB = 10, fs = 200 kS/s, and fck = 10 MHz.

Fig. 2 Single-sided spectrum of the signal acquired by the ADC.

The quantization produces the noise floor, whose mean level depends on the effective number of bits ENOB of the ADC and on the number of acquired samples M. The noise floor mean level, NFML, is expressed in dB below a reference level, typically assumed equal to the ADC half range, according to:

$\mathrm{NFML} = 6.02 \cdot \mathrm{ENOB} + 1.76 + 10 \log_{10}\!\left(\frac{M}{2}\right)$    (7)

decibels below the zero reference per fs/M hertz of bandwidth.
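As a rough numerical check of the expression above (indicative only, and rounded to the nearest decibel), the example values give:

$\mathrm{NFML}\big|_{M=200,\ \mathrm{ENOB}=10} \approx 6.02 \cdot 10 + 1.76 + 10\log_{10}(100) \approx 82\ \mathrm{dB}$

below the reference; replacing M with the larger number of reconstructed samples discussed in the following simply adds the corresponding processing gain, 10·log₁₀(M′/M) dB.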
On the contrary, thanks to the adopted CS-based sampling strategy, even though only M random samples are acquired, a time resolution as fine as 1/fck can be achieved in the same measurement interval. The signal is therefore reconstructed and represented by M′ = M·fck/fs samples, and can be regarded as a signal that has been sampled at fck. The single-sided amplitude spectrum of the reconstructed signal, obtained according to the numerical values set in this example, is shown in Fig. 3; two main appealing features can be noticed. First of all, as expected, the reconstructed spectrum consists of M′ bins and covers a frequency interval up to 5 MHz; moreover, according to equation (7), in which M can be substituted by M′, the mean level of the noise floor is much lower. In addition, the noise affecting the reconstructed signal is not uniform: since the reconstruction algorithm is capable of recovering only a limited number of spectral components, most of the frequency bins associated with random noise are not reconstructed, thus providing an inherent improvement in terms of noise rejection.

Fig. 3 Single-sided spectrum of the signal reconstructed through CS-based algorithm.

The noise rejection capability is further strengthened by applying a low-pass windowed-sinc filter whose bandwidth is 100 kHz, i.e. half the initial sample rate; the spectrum of the filtered signal is shown in Fig. 4, where the portion of the frequency axis between 0 and 200 kHz has been highlighted.

Fig. 4 Spectrum of the reconstructed and filtered signal.

Even though most of the frequency bins related to the noise floor have been cut off thanks to the adopted filter, some noise spectral components may still fall within the bandwidth of interest. More specifically, the reconstruction algorithm, according to the minimization problem (3), provides an approximation to the sparsest solution satisfying the constraint ‖Θx − y‖∞ < ε.
Hence, the algorithm returns a solution vector likely consisting of the largest frequency components, i.e. those associated with the DC and fundamental components in the considered example, along with other spectral components reconstructed in order to make the difference between the recovered signal and the quantized samples lower than ε. The position of these frequency components is not predictable, since it depends on the particular set of acquired samples, i.e. on the adopted sampling instants, which are randomly chosen by definition.
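For reference, a minimal sketch of the kind of low-pass windowed-sinc filtering mentioned above is reported below; it relies on scipy.signal, the tap count is arbitrary, and zero-phase filtering is used only to keep the example simple, since the paper does not specify the filter implementation.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def lowpass_windowed_sinc(x, cutoff_hz, rate_hz, numtaps=401):
    """Windowed-sinc (Hamming) low-pass FIR filter applied to the reconstructed record."""
    taps = firwin(numtaps, cutoff_hz, fs=rate_hz)   # windowed-sinc impulse response
    return filtfilt(taps, [1.0], x)                 # zero-phase application of the FIR

# Example values from the paper's illustration: record reconstructed at fck = 10 MHz,
# filter bandwidth 100 kHz (half the initial 200 kS/s sample rate)
# s_filt = lowpass_windowed_sinc(s_rec, cutoff_hz=100e3, rate_hz=10e6)
```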
D. Noise shaping through penalty function

The enhancement of the vertical resolution can be optimized if the oversampling is associated with a proper spectral shaping of the undesired noise components [16]. To this end, a suitable penalty function can be exploited to properly reject in-band noise spectral components. A penalty function p(·) assigns a cost, or penalty, to each component of the desired solution x; the total penalty is the sum of the penalties of the elements of x, i.e. p(x₁) + p(x₂) + ⋯ + p(xₙ); different choices of p lead to different total penalties. The adopted penalty function depends on both the frequency and the magnitude of the frequency components, according to the following definition (f_N being the Nyquist frequency of the considered ADC):

$p(f, z) = \begin{cases} 10 & \text{if } \dfrac{M}{M'}\,\mathrm{LSB} \leq |z| \leq 1\,\mathrm{LSB} \ \text{and} \ f \leq f_N \\ 1 & \text{otherwise} \end{cases}$    (8)
It is worth noting that, in order to assure fast and reliable convergence of the solver algorithm, a lower bound for the values of the z variable has to be specified in the definition of the penalty function. To justify the choice of the lower bound, it is useful to consider again Fig. 3, in which three different regions can be identified in the reconstructed spectrum with reference to the amplitude of the components:
• a bottom region, consisting of the effective floor of the reconstructed spectrum (amplitudes lower than -200 dB);
• an intermediate region, accounting for random samples of the initial quantization noise floor (about 90 dB);
• a top region, including the spectral components of interest.
Amplitude limits in (8) have thus been set in such a way that, within the Nyquist bandwidth, the solution prefers frequency components belonging either to the top or to the bottom region, to the detriment of those included in the intermediate region. In particular, the lower bound of the intermediate region has been set lower than the theoretical resolution granted by oversampling, $\mathrm{LSB}' = \mathrm{FSR} / \left(2^{\mathrm{ENOB}} \sqrt{M'/M}\right)$, FSR being the ADC full-scale range.
The optimization problem turns into:

$\min_{x \in \mathbb{R}^n} \|p(f, x)\|_1 \quad \text{subject to} \quad \|\Theta x - y\|_\infty < \varepsilon$    (9)
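Since the weights in (8) depend on the magnitude of the solution itself, one plausible way to handle problem (9) numerically, not documented in the paper, is an iterative reweighting scheme, sketched below in Python for a real-valued representation; the function names, the number of iterations and the use of cvxpy are assumptions.

```python
import numpy as np
import cvxpy as cp

def penalty_weights(x_prev, freqs, lsb, m, m_prime, f_nyq):
    """Weights modelled on eq. (8): penalize in-band components whose magnitude falls in
    the intermediate region between the oversampled resolution and 1 LSB."""
    lower = (m / m_prime) * lsb
    in_band = freqs <= f_nyq
    intermediate = (np.abs(x_prev) >= lower) & (np.abs(x_prev) <= lsb)
    return np.where(in_band & intermediate, 10.0, 1.0)

def penalized_recovery(Theta, y, freqs, eps, lsb, m, m_prime, f_nyq, iters=3):
    """Illustrative reweighted-l1 approximation of problem (9)."""
    n = Theta.shape[1]
    weights = np.ones(n)
    x = None
    for _ in range(iters):
        x = cp.Variable(n)
        cp.Problem(cp.Minimize(cp.norm1(cp.multiply(weights, x))),
                   [cp.norm(Theta @ x - y, 'inf') <= eps]).solve()
        weights = penalty_weights(x.value, freqs, lsb, m, m_prime, f_nyq)
    return x.value
```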
Fig. 5 shows a typical solution of problem (9), where, for the sake of clarity, only frequencies from zero up to 1 MHz have been reported. As can be noticed, thanks to the proposed approach all the components due to noise are located beyond the Nyquist frequency. As a consequence, the whole noise is suitably cut off by the low-pass filter, whose bandwidth is highlighted in the figure by a red dashed line.

Fig. 5 Spectrum of the signal recovered through the use of the penalty function.
III. EXPERIMENTAL TESTS

The performance of the proposed method has been assessed on a 32-bit microcontroller, namely the PIC32MX360F512L by Microchip, whose main characteristics are: (i) embedded unipolar ADC with nominal 10-bit vertical resolution, (ii) 500 kS/s maximum sample rate, (iii) 80 MHz maximum clock frequency and (iv) 32 kB data memory. Preliminary tests have been carried out to characterize the available converter; to this aim, the evaluation of the effective number of bits (ENOB) has been performed according to the procedures recommended by the IEEE 1241 standard. In order to cover the ADC full dynamic range, an arbitrary waveform generator (namely the AFG 3252 by Tektronix), characterized by a nominal 14-bit vertical resolution, has been configured to generate a sinusoidal waveform with amplitude equal to 3.3 Vpp and DC offset equal to 1.65 V. The signal frequency has been varied within the range from 10 kHz to 100 kHz, with a step equal to 10 kHz. The ADC has been tested in different dynamic operating conditions by setting the adopted sample rate equal to 200, 300, 400 and 500 kS/s, respectively. The obtained values of ENOB, reported in Tab. I, proved that the best acquisition conditions, intended as the maximum sample rate that did not cause a significant ENOB reduction, were associated with an ADC sample rate equal to 400 kS/s. Since the duration of sample digitization did not depend on the selected rate, the reasons for the ENOB deterioration experienced at higher sample rates are to be found in the shorter time dedicated to input signal sampling.

Tab. I. Effective number of bits versus ADC sampling rate and frequency of input signal.

Signal frequency | 200 kS/s | 300 kS/s | 400 kS/s | 500 kS/s
10 kHz           |   8.3    |   8.2    |   8.3    |   8.1
20 kHz           |   8.4    |   8.4    |   8.4    |   8.2
30 kHz           |   8.5    |   8.6    |   8.6    |   8.3
40 kHz           |   8.7    |   8.7    |   8.5    |   8.2
50 kHz           |   8.7    |   8.8    |   8.7    |   8.3
60 kHz           |   8.7    |   8.7    |   8.8    |   8.3
70 kHz           |   8.6    |   8.7    |   8.6    |   8.2
80 kHz           |   8.6    |   8.8    |   8.7    |   8.0
90 kHz           |   8.7    |   8.7    |   8.7    |   7.9
100 kHz          |   8.8    |   8.7    |   8.7    |   8.0
The firmware of the microcontroller has subsequently been modified to implement the CS-based acquisition strategy. As stated in Section II, the start-of-conversion events have been generated through a high resolution time base, whose clock frequency has been set to 20 MHz; to this aim, a proper timer module has been adopted. With regard to the ADC, it has been set to assure a sampling and conversion time lasting at least 2.5 µs (corresponding to a nominal sample rate of 400 kS/s); the low-pass filter bandwidth has accordingly been set to 200 kHz. The microcontroller has finally been programmed to autonomously generate the pseudo-random sequence of 200 sampling instants, expressed in terms of counts with respect to the 50 ns time base. Since the constraint in (6) has to be satisfied, the microcontroller rejects any difference between successive counts lower than 50, which would correspond to a time interval between successive samples shorter than 2.5 µs. For each digitized sample, both the sampling instant and the ADC code are saved in the microcontroller data memory and successively transferred to a PC for signal reconstruction and digital processing. A number of tests have been conducted for different values of the input signal frequency, ranging from 10 kHz up to 100 kHz, with a 10 kHz step. As an example, Fig. 6 shows the single-sided amplitude spectrum obtained when the proposed sampling strategy is applied to a 10 kHz sinusoidal signal reconstructed from 200 acquired samples.
Fig. 6 Single-sided spectrum of the recovered and filtered signal.

The performance factors defining the ADC dynamic characteristics, i.e. the signal to noise and distortion ratio, SINAD, and the spurious free dynamic range, SFDR, have been evaluated on the reconstructed spectrum, as recommended by the IEEE 1241 standard [17]. Their evolution versus the input signal frequency is shown in Fig. 7; the obtained values proved to be as good as those granted by ADCs characterized by a vertical resolution ranging from 24 up to 25 bits. As can be appreciated, the performance of the proposed acquisition strategy is remarkable; SINAD and SFDR values never lower than 146.8 dB and 148 dB, respectively, have been obtained. The obtained results were very encouraging and confirmed the capability of the proposed sampling strategy of reconstructing the input signal without significant artifacts in the frequency domain.
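The correspondence between the measured SINAD values and an equivalent number of bits follows from the usual relation, reported here only as a check of the 24-25 bit figure quoted above:

$\mathrm{ENOB} = \frac{\mathrm{SINAD} - 1.76\ \mathrm{dB}}{6.02} \quad\Rightarrow\quad \frac{146.8 - 1.76}{6.02} \approx 24.1\ \mathrm{bits}$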
Fig. 7 SINAD and SFDR of the proposed method versus signal frequency.
However, in the authors' opinion, frequency domain analysis turns out to be insufficient to assess the performance of the proposed sampling strategy, as well as of all those sampling approaches based on the combination of a traditional ADC with a proper digital signal processing stage [18]; an artificial overestimate of the resolution performance could easily be experienced. The adopted reconstruction algorithm can, in fact, introduce undesired differences, in terms of amplitude gain, offset or phase displacement, between the reconstructed and the original signal. Such differences have to be taken into account when assessing the performance of the sampling strategy, and the mere analysis of the amplitude spectrum proves unfit for the purpose. Therefore, the best estimate of the input signal has been achieved through the sine fit of the waveform digitized by the ADC (i.e. evaluated on the randomly acquired samples), according to what is recommended by the IEEE standard for conventional ADCs. The point-to-point difference between either the acquired or the recovered signal and the reference signal has been adopted as the noise waveform from which the performance factors are evaluated. As an example, Fig. 8 shows the noise estimated from 200 random samples digitized by the employed ADC in the presence of an input signal whose frequency was equal to 30 kHz.

Fig. 8 Differences between digitized samples and their sine fitting model.

As could be expected, the noise values varied within a range of 5 LSB, consistent with the roughly 8.5 effective bits verified during the preliminary tests. The difference between the signal reconstructed through the proposed acquisition strategy and the reference signal is, instead, plotted in Fig. 9. Even though the noise amplitude has been greatly reduced, its evolution versus time, shown in Fig. 9, clearly exhibits DC and sinusoidal components at the 30 kHz frequency, likely due to offset, gain and phase artifacts introduced by the CS-based
reconstruction algorithm, which apparently limit the overall time-domain performance of the proposed method.

Fig. 9 Differences between reconstructed waveform and the sine fit of the ADC output.

To mitigate their harmful effect on the overall performance, offset, gain and phase errors have preliminarily been evaluated over the whole 10 kHz - 100 kHz input frequency range; their constant terms have then been evaluated and exploited for the compensation of the reconstructed signal. To better appreciate the obtained advantages, the noise waveform associated with the compensated signal is shown in Fig. 10. According to the procedures presented in IEEE Std 1241, this can be considered as the noise corresponding to an ADC with an effective number of bits equal to about 12, i.e. 3.5 bits more than those originally granted by the ADC.
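A possible way to carry out such a gain, phase and offset compensation is sketched below; it is only an illustration based on a least-squares fit against the sine-fit reference, since the paper does not detail its compensation algorithm, and all names are hypothetical.

```python
import numpy as np
from scipy.signal import hilbert

def compensate(x_rec, x_ref):
    """Remove constant gain, phase and offset differences between the reconstructed
    record x_rec and the reference x_ref. For a narrow-band record, a phase shift can be
    expressed through the quadrature (Hilbert) component, so a linear least-squares fit
    over [x_rec, quadrature, 1] captures gain, phase and offset at once."""
    q = np.imag(hilbert(x_rec - x_rec.mean()))               # quadrature component
    A = np.column_stack([x_rec, q, np.ones_like(x_rec)])     # gain, phase and offset terms
    coeffs, *_ = np.linalg.lstsq(A, x_ref, rcond=None)
    return A @ coeffs                                         # compensated record
```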
Fig. 10 Differences between compensated waveform and the sine fit of the ADC output
The compensation has been performed on all the reconstructed signals in the frequency range of interest and the reconstruction error, E%, has been evaluated as a performance factor, according to [8]:

$E_{\%} = \frac{\|s - \hat{s}\|_2}{\|s\|_2} \cdot 100$    (10)

where s is the reference signal, estimated through the sine fit of the samples provided by the ADC and interpolated over the observation interval with the high resolution time base, and ŝ is the reconstructed, filtered and compensated signal.

Fig. 11 Reconstruction error associated with the compensated waveform versus frequency.

The evolution of the reconstruction error versus the input signal frequency is given in Fig. 11; values lower than 8·10⁻³ % highlight the satisfying performance of the proposed strategy within the bandwidth of interest.
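A compact sketch of how the reference sine fit and the error figure of eq. (10) can be computed is given below; the 4-parameter fit shown here is a generic least-squares version, not necessarily the exact algorithm of the IEEE standard, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def sine_fit(t, v, f0):
    """4-parameter sine fit (amplitude, frequency, phase, offset) of the acquired samples,
    used to build the reference signal."""
    def residual(p):
        a, f, phi, off = p
        return a * np.cos(2 * np.pi * f * t + phi) + off - v
    p0 = [0.5 * (v.max() - v.min()), f0, 0.0, v.mean()]
    return least_squares(residual, p0).x          # fitted [amplitude, frequency, phase, offset]

def reconstruction_error(s_ref, s_hat):
    """Percentage reconstruction error of eq. (10)."""
    return 100.0 * np.linalg.norm(s_ref - s_hat) / np.linalg.norm(s_ref)
```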
IV. EXPERIMENTAL TESTS INVOLVING MULTITONE SIGNALS

Further tests have been performed on multitone signals to assess the capability of the proposed acquisition strategy of properly reconstructing the whole spectral content of the signal of interest; to this aim, a new input signal, the so-called optimized multisine [19], has been adopted as test signal. The main advantage provided by the multisine consists in the opportunity of stimulating the system under test with a signal simultaneously involving several spectral components, whose amplitude and phase can easily be tailored to the ADC dynamic range. To this purpose, the optimized multisine can be expressed as a sum of cosine waveforms according to:

$s(t) = \sum_{h=1}^{N} A_h \cos(2\pi f_h t + \varphi_h)$    (11)
where N is the number of harmonic components, and Ah, fh, and φh are their amplitude, frequency and phase, respectively. The amplitudes of the spectral components have been set to the same value, equal to the ADC midrange, in order to obtain a flat amplitude spectrum in the frequency region of interest. Several combinations of number of harmonics (ranging from 3 to 10) and frequencies (varying within the range from 1 up to 100 kHz) have been taken into account in the conducted tests. The phase of each component has been selected according to the criterion of crest factor (CF) minimization, in order to assure signals characterized by a suitable SNR over the whole observation interval. More specifically, the Schroeder multisine [20] has been adopted; CF minimization has been achieved by setting the phase values according to the following expression:

$\varphi_h = -\frac{h(h-1)}{N}\,\pi$    (12)
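A minimal Python sketch of how such a test signal can be generated according to (11) and (12) is given below; the frequency grid, amplitude and record length are only illustrative.

```python
import numpy as np

def schroeder_multisine(freqs, amp, rate_hz, n_samples):
    """Multisine of eq. (11) with Schroeder phases of eq. (12) for crest-factor reduction."""
    t = np.arange(n_samples) / rate_hz
    N = len(freqs)
    h = np.arange(1, N + 1)
    phases = -np.pi * h * (h - 1) / N                       # eq. (12)
    return sum(amp * np.cos(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))

# Example: five equal-amplitude tones between 10 kHz and 50 kHz
s = schroeder_multisine(freqs=[10e3, 20e3, 30e3, 40e3, 50e3], amp=0.33,
                        rate_hz=20e6, n_samples=20000)
crest_factor = np.max(np.abs(s)) / np.sqrt(np.mean(s ** 2))
```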
As an example, Fig. 12 shows the waveform digitized by the ADC, obtained by acquiring 1000 samples at 400 kS/s, along with its best fit, obtained through multisine fit interpolation and adopted as reference signal; Fig. 13 shows the related noise, evaluated as the difference between the considered signals. The noise ranged within 10 LSB and exhibited an RMS value equal to about 3 LSB, downgrading the resolution of the ADC to 7 bits.

Fig. 12 Time evolution of digitized waveform and its multisine fit.

Fig. 13 Differences between digitized waveform and its multisine fit.

On the contrary, Fig. 14 shows the time evolution of the signal recovered by the proposed acquisition strategy, where only 200 samples have been acquired, together with the reference one; Fig. 15 shows the related noise. The resolution enhancement is clearly observable; the noise ranged within 2 LSB and its RMS value is lower than half an LSB, which is comparable with the noise of a 9.2-bit converter, thus granting a resolution improvement of about 2 bits.

Fig. 14 Time evolution of reconstructed waveform and the multisine fit of the ADC output.
Fig. 15 Differences between reconstructed waveform and the multisine fit of the ADC output.
V. CONCLUSIONS

A novel acquisition strategy able to enhance the vertical resolution of low-cost ADCs, whose performance in terms of resolution, maximum sampling rate and acquisition memory length is usually limited, has been presented. In particular, the proposed strategy takes advantage of (i) a CS-based acquisition in order to increase the horizontal resolution of the acquired signal; (ii) a proper penalty function able to perform noise shaping; (iii) a filtering technique aimed at drastically reducing the out-of-band noise. Experimental tests have been carried out to assess the performance of the method, expressed by means of the indicators recommended by the current Standard for the characterization of ADCs. In the authors' opinion, however, the use of these indicators could lead to artificial overestimates of the resolution performance, especially when the analog-to-digital conversion is obtained through the integration of a conventional ADC and a digital post-processing section applied to the ADC raw data. The characterization method recommended by the Standard suggests the use of a sinusoidal waveform as test signal and the evaluation of the ADC performance only by measuring the output noise and harmonic contents. As a consequence, any smoothing operation, such as the digital filtering proposed in the paper, but also exploited by other solutions, e.g. Sigma-Delta converters, inherently enhances the values of the considered performance factors, thus overestimating the ADC characteristics. Several tests in actual operating conditions have been conducted to prevent the considered artifacts. In particular, a first set of tests has been carried out on sinusoidal signals with the aim of comparing the signal reconstructed through the proposed method with the best estimate of the input signal, obtained by means of a traditional 4-parameter sine fitting algorithm applied to the randomly acquired samples. As could be expected, the actual values of the considered performance factors proved to be lower than those measured in the frequency domain. Moreover, the tests have evidenced the occurrence of some systematic effects (i.e. offset, gain and phase displacement) introduced by the adopted digital signal processing section. Such systematic effects have been compensated and the performance factor evolution versus the input signal frequency has then been evaluated over the whole bandwidth of interest; values of ENOB as high as 12 bits have been experienced, thus improving the inherent performance of the ADC by about 3.5 bits. The performance of the proposed method has then been assessed also in the presence of non-sinusoidal signals; to this aim, the results obtained on a multisine signal whose crest factor was optimized through the Schroeder formula have been presented and discussed. Also for this kind of test, comparisons between signals reconstructed by means of the proposed method and those achieved through a traditional multisine fit have highlighted a vertical resolution enhancement equal to about 2 bits. With regard to the computational burden, the execution time strongly depends on the specific tool adopted for the implementation of the CS-based algorithm; thanks to the adoption of greedy algorithms, the proposed acquisition strategy typically provides the reconstructed input signal in less than 1 s.
REFERENCES

[1] M. D'Arco, M. Genovese, E. Napoli, M. Vadursi, Design and implementation of a preprocessing circuit for bandpass signals acquisition, IEEE Transactions on Instrumentation and Measurement, Vol.63, No.2, pp. 287-294, 2014.
[2] L. Angrisani, M. Vadursi, On the optimal sampling of bandpass measurement signals through data acquisition systems, Measurement Science and Technology, Vol.19, No.4, pp. 1-9, April 2008.
[3] L. De Vito, L. Michaeli, S. Rapuano, An improved ADC-error-correction scheme based on a Bayesian approach, IEEE Transactions on Instrumentation and Measurement, Vol.57, No.1, pp. 128-133, 2008.
[4] P.E. Pace, P.A. Ramamoorthy, D. Styer, A preprocessing architecture for resolution enhancement in high-speed analog-to-digital converters, IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, Vol.41, No.6, pp. 373-379, 1994.
[5] Teledyne LeCroy Application Note, Differences Between ERES and HiRes, 2011, available at: http://teledynelecroy.com/doc/differences-between-eres-and-hires.
[6] L. Angrisani, M. D'Arco, G. Ianniello, M. Vadursi, An efficient pre-processing scheme to enhance resolution of band-pass signals acquisition, IEEE Transactions on Instrumentation and Measurement, Vol.61, No.11, pp. 2932-2940, 2012.
[7] A. Baccigalupi, D.L. Carnì, D. Grimaldi, A. Liccardo, Characterization of arbitrary waveform generator by low resolution and oversampling signal acquisition, Measurement, Vol.45, No.10, pp. 2498-2510, 2012.
[8] E.J. Candès, M.B. Wakin, An Introduction To Compressive Sampling, IEEE Signal Processing Magazine, pp. 21-30, 2008.
[9] D. Bao, P. Daponte, L. De Vito, S. Rapuano, Defining frequency domain performance of Analog-to-Information converters, Proceedings of the 19th IMEKO TC 4 Symposium and 17th IWADC Workshop, Advances in Instrumentation and Sensors Interoperability, July 18-19, Barcelona, Spain, pp. 748-753, 2013.
[10] E. Candès, J. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Comm. Pure Appl. Math., Vol.59, No.8, pp. 1207-1223, Aug. 2006.
[11] R. Tibshirani, Regression Shrinkage and Selection Via the Lasso, Journal of the Royal Statistical Society, Series B, Vol.58, No.1, pp. 267-288, 1994.
[12] D. Donoho, Compressed sensing, IEEE Transactions on Information Theory, Vol.52, No.4, pp. 1289-1306, 2006.
[13] S. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004 (available at http://www.stanford.edu/~boyd/cvxbook/).
[14] L. Angrisani, F. Bonavolontà, A. Liccardo, R. Schiano Lo Moriello, L. Ferrigno, M. Laracca, G. Miele, Multi-channel simultaneous data acquisition through a compressive sampling-based approach, accepted for publication in Measurement, 2014.
[15] F. Bonavolontà, M. D'Arco, G. Ianniello, A. Liccardo, R. Schiano Lo Moriello, L. Ferrigno, M. Laracca, G. Miele, On the suitability of compressive sampling for the measurement of electrical power quality, Proceedings of the IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Minneapolis, MN, May 6-9, pp. 126-131, 2013.
[16] M. Agarwal, R. Gupta, Penalty function approach in heuristic algorithms for constrained redundancy reliability optimization, IEEE Transactions on Reliability, Vol.54, No.3, pp. 549-558, 2005.
[17] IEEE Std 1241-2010, Standard for Terminology and Test Methods for Analog-to-Digital Converters, 2011.
[18] P. Arpaia, F. Cennamo, P. Daponte, H. Schumny, Modeling and characterization of sigma-delta analog-to-digital converters, Proceedings of the IEEE Instrumentation and Measurement Technology Conference (IMTC), May 18-21, St. Paul, MN, pp. 96-100, 1998.
[19] T.P. Dobrowiecki, J. Schoukens, P. Guillaume, Optimized Excitation Signals for MIMO Frequency Response Function Measurements, IEEE Transactions on Instrumentation and Measurement, Vol.55, No.6, pp. 2072-2079, 2006.
[20] D.L. Carnì, D. Grimaldi, Voice quality measurement in networks by optimized multi-sine signals, Measurement, Vol.41, No.3, pp. 266-273, 2008.
Highlights
• We propose a CS-based acquisition for increasing ADC equivalent sampling frequency.
• By means of oversampling and filtering, the SNR of the reconstructed signal is increased.
• A penalty function is exploited for shaping the ADC noise.
• A resolution increment of about 3 bits is obtained.