Highlights

• We propose a novel waveform recognition method based on adversarial unsupervised domain adaptation, which incorporates adversarial learning to improve cross scenario recognition performance. To the best of our knowledge, this paper is the first to address unsupervised cross scenario waveform recognition.

• We design two specific frameworks (GR-AUDA and FA-AUDA) for unsupervised cross scenario waveform recognition, and add three advanced deep network-based methods (CLDNN, Vgg, ResNet) on the same datasets for comparison. The experimental results show the effectiveness of our proposed frameworks for unsupervised protocol recognition.

• We sampled an abundant waveform dataset under various scenarios, including typical indoor, outdoor, and corridor cases, using software defined radio. We evaluate our method on both public datasets and our own datasets. The results indicate that our methods can greatly improve the recognition performance on target waveforms without any labels.
Adversarial Unsupervised Domain Adaptation for Cross Scenario Waveform Recognition

Qing Wang a, Panfei Du a, Xiaofeng Liu a,∗, Jingyu Yang a, Guohua Wang b

a School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, P. R. China
b Nanyang Technological University, Singapore 639798
∗ Corresponding author.
Email addresses: [email protected] (Qing Wang), [email protected] (Panfei Du), [email protected] (Xiaofeng Liu), [email protected] (Jingyu Yang), [email protected] (Guohua Wang)

Abstract

Deep learning based waveform recognition has recently become a topical area of research. Due to complex and heterogeneous scenarios, it is impractical to train a single universal model for all waveform recognition tasks. Unsupervised domain adaptation (UDA) is able to take advantage of knowledge from an available source dataset and apply it to an unlabeled target dataset. We therefore propose a novel waveform recognition method based on adversarial UDA, which incorporates adversarial learning to improve cross scenario recognition performance. Specifically, a two-channel convolutional neural network is designed for the waveform recognition task, and the system gradually converges to an optimized stable state via two novel methods, namely adversarial unsupervised domain adaptation with gradient reversal (GR-AUDA) and feature augmentation adversarial unsupervised domain adaptation (FA-AUDA). Furthermore, we sampled a more
abundant waveform dataset under various scenarios, including typical indoor, outdoor, and corridor cases, using software defined radio. We evaluate our method on both public datasets and our own datasets. The results demonstrate that our proposed adversarial UDA framework significantly improves the recognition performance on target domain waveforms.

Keywords: Transfer Learning, Domain Adaptation, Waveform Recognition, GAN

1. Introduction

In recent years, the development of deep learning (DL) has been pushing performance boundaries for a variety of machine learning tasks in fields such as computer vision, natural language processing and speech recognition. DL architectures have also been applied to the communications field, for tasks such as waveform classification [1] and channel encoding and decoding [2]. Waveform recognition was accomplished for years through traditional approaches making use of hand-crafted features such as cumulants, cyclostationary features, and distribution distances [3, 4, 5]. More recent work has introduced deep learning to improve waveform recognition. Timothy J. O'Shea demonstrated that deep convolutional neural networks are a viable and strong candidate approach for the modulation recognition task [2]. The authors of [1] applied convolutional long short-term deep neural networks (CLDNN) to this task and achieved higher classification accuracy. In [6], a two-channel convolutional neural network was proposed to further improve the performance of both modulation and protocol recognition. However, due to a phenomenon known as dataset bias or domain shift,
recognition models trained on one large dataset do not generalize well to unfamiliar datasets and tasks [7]. Dataset bias or domain shift may derive from different sampling frequencies, varying wireless propagation conditions, and so on. Given such complex and variable scenarios, it is impractical to train a universal model for all waveform recognition tasks. To this end, T. J. O'Shea et al. [8] suggest a possible mitigation to improve generalization across varying wireless propagation conditions: including domain-matched attention mechanisms, such as the radio transformer network, in the network architecture; however, they do not present a specific implementation. Another typical solution is to further fine-tune such networks on task-specific datasets [6, 8]. However, it is often difficult and expensive to obtain enough labeled data to fine-tune the large number of parameters in deep multilayer networks. Alternatively, domain adaptation, a subclass of transductive transfer learning, attempts to mitigate the harmful effects of domain shift. Assume we have two domains, a source domain and a target domain, denoting respectively a training dataset with sufficient labeled data and a testing dataset with little or no labeled data. Transfer learning aims to build learning machines that generalize across domains following different probability distributions; its main technical problem is how to reduce the shift in data distributions across domains. Transductive transfer learning has the following characteristics [9]: 1) the source and target tasks are the same, but their domains can differ; 2) labels are easy to obtain in the source domain but quite difficult to obtain in the
target domain; 3) the marginal probability distributions of the input data in the source and target domains are different. Given the complex and heterogeneous wireless propagation conditions, domain adaptation is a commendable solution for cross scenario waveform recognition. Recent work in domain adaptation has mostly focused on two directions: mapping source domain data to the target domain [10], and mapping both domains to a shared space before classification [11, 12, 13]. Both require a mapping function. We therefore adopt a deep neural network, which can model complex non-linear mappings, as the mapping function for domain adaptation. In the original formulation by Goodfellow, a generative adversarial network (GAN) is trained through a min-max game between a generator, which maps noise vectors into the image space, and a discriminator, trained to discriminate generated images from real ones [14]. A downside of the standard unconditional GAN is that there is no control over the modes of the data being generated. The GAN can be extended to a conditional model (CGAN) [15] by conditioning both the generator and the discriminator on some extra information y (e.g., labels). In this way, it is possible to control the data generation process. Adversarial adaptation methods have become an increasingly popular incarnation; they seek to minimize the domain discrepancy through an adversarial objective with respect to a domain discriminator, learning a representation that is discriminative of source labels while the domains cannot be distinguished. The gradient reversal algorithm [12] treats domain invariance as a binary classification problem, but directly
maximizes the loss of the domain classifier by reversing its gradients. Adversarial discriminative domain adaptation (ADDA) [11] applies two encoder networks, a source and a target encoder, to map both domains to a similar feature space through adversarial learning with a GAN loss. In contrast, domain-invariance feature augmentation (DIFA) [13] applies a single shared encoder network for the shared space and performs feature augmentation using CGANs to learn the class distribution in the feature space, and can therefore generate an arbitrary number of labeled feature vectors. It should be noted that unsupervised cross scenario waveform recognition has not been considered in the literature. In this paper, we therefore propose a novel waveform recognition method based on adversarial unsupervised domain adaptation (AUDA) to improve cross scenario recognition performance. To do this, we first design a robust deep neural network, the encoder, as the mapping function; it transforms both the source and target domains to a shared feature space. We then present an adversarial unsupervised learning framework that, via adversarial learning, makes the features of both domains indistinguishable in the high-dimensional space, so a classifier trained on the source domain also works well on the target domain. Finally, we conduct numerical experiments to validate the proposed framework on an abundant waveform dataset covering different scenarios.

2. Unsupervised domain adaptation architecture for waveform recognition

In this section, we first present our proposed generalized AUDA architecture for waveform recognition, including the supervised training and adversarial training procedures,
with a two-channel convolutional neural network as the encoder. Then, two effective AUDA implementations, a one-stage gradient reversal method and a three-stage feature augmentation method, are proposed on top of this architecture.

2.1. Proposed adversarial unsupervised domain adaptation architecture for waveform recognition

Inspired by adversarial learning, we propose a generalized AUDA architecture for waveform recognition, as shown in Fig. 1. A shared encoder network for source and target domain waveforms transforms both domains, implicitly through adversarial training, to a shared high-dimensional feature space. The domain classifier tries to distinguish source waveforms from target waveforms. The label classifier is trained in the source domain (transformed feature space), where labeled data is available, and then applied in both domains. Note that modulations such as QPSK and protocols such as WLAN are the classes, whereas different channel conditions, such as different SNR levels or indoor/outdoor propagation channels, are the domains.

Shared encoder. For effective feature learning and transformation, a two-channel convolutional neural network (T-CNN) is introduced as our shared encoder network [6], which has proved very suitable for waveform recognition, as shown in Fig. 1. T-CNN comprises one deep channel and one shallow channel with different convolution layers. The inputs are the IQ (in-phase and quadrature) components of the signal, with sample length L. A two-dimensional convolution filter is applied in the first layer of both channels to extract the correlation between the I and Q components.
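As a concrete illustration, the following is a minimal Keras sketch of such a two-channel encoder. The filter counts (256 and 128 in the deep channel, 128 in the shallow channel) follow the configuration given in Section 2.4; the kernel sizes, the 2 × L × 1 input layout and the 256-unit merge layer are our assumptions for illustration, not the authors' exact settings.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_tcnn_encoder(sample_len=128, feat_dim=256):
    """Two-channel CNN encoder E: raw IQ samples -> shared feature vector."""
    iq = layers.Input(shape=(2, sample_len, 1))  # I/Q rows, L sample columns
    # Deep channel: two convolution layers (256 then 128 filters)
    deep = layers.Conv2D(256, (2, 3), padding="same", activation="relu")(iq)
    deep = layers.Conv2D(128, (1, 3), padding="same", activation="relu")(deep)
    # Shallow channel: a single convolution layer (128 filters)
    shallow = layers.Conv2D(128, (2, 3), padding="same", activation="relu")(iq)
    # One FC layer merges the flattened feature maps of both channels
    merged = layers.Concatenate()([layers.Flatten()(deep),
                                   layers.Flatten()(shallow)])
    feat = layers.Dense(feat_dim, activation="relu")(merged)
    return tf.keras.Model(iq, feat, name="tcnn_encoder")
```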
Supervised training. We pre-train the encoder E and the label classifier C1 on labeled source waveforms in a supervised learning setting. The classification loss (cls Loss) is defined as

$\min_{E, C_1} \mathcal{L}_{cls}(X_s, Y_s) = \mathbb{E}_{(x_i, y_i) \sim (X_s, Y_s)}\, H\big(C_1(E(x_i)), y_i\big), \qquad (1)$

where Xs and Ys are the source domain waveforms and labels, respectively; E is the shared encoder network (feature extractor); C1 is the classifier that predicts labels; and H is the cross-entropy loss function with softmax activation. The label classifier loss is always minimized.
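A minimal TensorFlow sketch of this pre-training step, assuming C1 ends in a softmax layer (as in Section 2.4) and an Adam-style optimizer; the helper name is ours:

```python
cce = tf.keras.losses.CategoricalCrossentropy()  # H(., .) over softmax outputs

@tf.function
def supervised_step(encoder, c1, optimizer, x_src, y_src):
    """One minimization step of Eq. (1) over E and C1 on labeled source data."""
    with tf.GradientTape() as tape:
        probs = c1(encoder(x_src, training=True), training=True)
        loss = cce(y_src, probs)
    variables = encoder.trainable_variables + c1.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
```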
Adversarial training. The domain classifier is trained to discriminate the features of two domains, domain0 and domain1. During training, we simultaneously seek encoder parameters that confuse the domain classifier C2 and domain classifier parameters that separate the two domains. Once C2 can no longer distinguish the two domains' features, both domains have been mapped to a shared feature space. We define the adversarial loss in AUDA as

$\min_E \max_{C_2} \mathcal{L}_{adv} = \mathbb{E}_{x_i \sim \mathrm{domain0}} \|C_2(E(x_i)) - 1\|^2 + \mathbb{E}_{x_i \sim \mathrm{domain1}} \|C_2(E(x_i))\|^2, \qquad (2)$

where $\|\cdot\|^2$ denotes the mean square loss. We rely on least squares GANs [16], since the smooth, gradient-unsaturated mean square loss achieves better stability and faster convergence during training. The domain label is defined as $d_i \in \{0, 1\}$: $d_i = 0$ denotes domain0 and $d_i = 1$ denotes domain1. The domain classifier C2 adjusts its parameters to distinguish domain0 from domain1, whereas the encoder E adjusts the features to confuse the classifier.
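In code, one standard least-squares reading of Eq. (2) alternates two updates: C2 learns to score domain0 features as 1 and domain1 features as 0, while E is updated so that domain1 features also score as 1. The sketch below assumes C2 outputs a scalar domain score; all function names are ours:

```python
def mse_to(scores, target):
    # ||scores - target||^2: the least-squares GAN loss [16]
    return tf.reduce_mean(tf.square(scores - target))

def domain_classifier_loss(encoder, c2, x_dom0, x_dom1):
    """C2 update: learn to separate the features of the two domains."""
    return (mse_to(c2(encoder(x_dom0)), 1.0) +
            mse_to(c2(encoder(x_dom1)), 0.0))

def encoder_confusion_loss(encoder, c2, x_dom1):
    """E update: make domain1 features indistinguishable from domain0."""
    return mse_to(c2(encoder(x_dom1)), 1.0)
```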
Figure 1: Proposed generalized architecture for adversarial unsupervised domain adaptation for waveform recognition
Figure 2: One-stage gradient reversal adversarial unsupervised domain adaptation for waveform recognition (GR-AUDA), gradient reversal (RevGrad layer) ensures that the feature distributions over the two domains are made similar
2.2. Adversarial unsupervised domain adaptation with gradient reversal (GR-AUDA)

The GR-AUDA approach (Fig. 2) combines domain adaptation (adversarial training) and supervised training within a single training process, so that the final classification decisions are based on features that are both discriminative and invariant to the change of domains. The input waveform is first mapped to a D-dimensional feature vector by the encoder E. The feature vector is then used to infer the class through the label classifier C1. Finally, the same feature vector is mapped to a domain label by the domain classifier C2. Adversarial training is achieved via a gradient reversal layer, which multiplies the gradient by a negative constant during backpropagation-based training. Gradient reversal ensures that the feature distributions over the two domains are made similar (as indistinguishable as possible for the domain classifier), resulting in domain-invariant features.

The gradient reversal layer (GRL) has no trainable parameters; it is simply defined to behave differently in forward and backward propagation. In the forward pass, the GRL acts as an identity transformation. In the backward pass, it takes the gradient from the subsequent layer, changes its sign (multiplying by −λ, where λ is a trade-off hyper-parameter), and passes it to the preceding layer. In this way, we can minimize the whole loss function in a single training stage using a standard optimization algorithm.
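A minimal TensorFlow sketch of the GRL and of the resulting single-stage loss follows; the two-unit softmax domain head matches Section 2.4, while the function names are ours. Because the GRL injects the −λ factor into the encoder's domain gradients, minimizing the returned loss with one optimizer implements the minimax of Eq. (3) below.

```python
def gradient_reversal(lam=1.0):
    """Identity in the forward pass; scales gradients by -lam in the backward pass."""
    @tf.custom_gradient
    def reverse(x):
        def grad(dy):
            return -lam * dy
        return tf.identity(x), grad
    return reverse

def gr_auda_loss(encoder, c1, c2, x_src, y_src, x_tgt, lam):
    """Combined GR-AUDA loss: label loss on source + domain loss through the GRL."""
    f_src, f_tgt = encoder(x_src), encoder(x_tgt)
    cls_loss = tf.reduce_mean(
        tf.keras.losses.categorical_crossentropy(y_src, c1(f_src)))
    rev = gradient_reversal(lam)
    dom_probs = tf.concat([c2(rev(f_src)), c2(rev(f_tgt))], axis=0)
    dom_labels = tf.concat([tf.zeros([tf.shape(x_src)[0]], tf.int32),  # domain0
                            tf.ones([tf.shape(x_tgt)[0]], tf.int32)],  # domain1
                           axis=0)
    dom_loss = tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(dom_labels, dom_probs))
    return cls_loss + dom_loss
```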
The whole loss function is the combination of the classification loss (Eq. 1) and the adversarial loss (Eq. 2), defined as

$\min_{C_1, C_2} L(\theta_f, \theta_y, \theta_d) = \sum_{i=1, d_i=0}^{N} L_y\big(C_1(f; \theta_y); y_i\big) - \lambda \sum_{i=1}^{N} L_d\big(C_2(f; \theta_d); d_i\big), \qquad (3)$
where Ly is the cross-entropy loss for class label prediction and Ld is the adversarial loss for domain classification. In this setting, domain0 is the source domain (di = 0) and domain1 is the target domain (di = 1). θf, θy, and θd are the parameters of the encoder, label classifier and domain classifier, respectively, and f is the feature extracted by the encoder. We use the idea of transfer learning to improve model performance on the target domain by minimizing the class loss Ly while simultaneously maximizing the adversarial (domain) loss Ld.

2.3. Feature augmentation adversarial unsupervised domain adaptation (FA-AUDA)

We then propose the FA-AUDA approach, which contains a feature generator that performs data augmentation in the feature space through a conditional GAN (CGAN), as shown in Fig. 3. The CGAN generator is able to learn the class distribution in the feature space. Our goal is to train a domain-invariant feature extractor E, whose training procedure is made more robust by data augmentation in the space of source features. First, we pre-train the encoder Es and the classifier with labeled source waveform data (Stage 1). The classification loss (cls Loss) is defined as

$\min_{E_s, C} \mathcal{L}_{cls}(X_s, Y_s) = \mathbb{E}_{(x_i, y_i) \sim (X_s, Y_s)} \Big[ -\sum_i p_i \log C(E_s(x_i)) \Big], \qquad (4)$
where Es is the feature extractor, C is the classifier that predicts labels, and $p_i$ is the one-hot label distribution. The model and parameters of Es and C are saved for the following steps.

Next, after supervised training (Stage 1), we train the feature generator S and the first discriminator D1 following the CGAN strategy, in which the feature generator takes both noise and conditional labels as input (Stage 2). The generated features thus come to resemble the source features. At this stage, the encoder Es is fixed, initialized with the parameters from Stage 1. The CGAN loss is defined as

$\min_S \max_{D_1} \mathcal{L}_{adv} = \mathbb{E}_{(z, y_i) \sim (p_z(z), Y_s)} \|D_1(S(z \oplus y_i) \oplus y_i) - 1\|^2 + \mathbb{E}_{(x_i, y_i) \sim (X_s, Y_s)} \|D_1(E_s(x_i) \oplus y_i)\|^2, \qquad (5)$

where $p_z(z)$ is the distribution from which noise samples are drawn, $\oplus$ denotes a concatenation operation, and $\|\cdot\|^2$ denotes the mean square loss based on the idea of LSGAN [16]. In this setting, domain0 refers to the generated feature domain $(p_z(z), Y_s)$ and domain1 is the source domain $(X_s, Y_s)$. The intuition is to generate features that resemble the source domain features; the least squares loss makes the training process more stable and faster.
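Under the same least-squares convention as before (reusing mse_to from the Section 2.1 sketch, with $\oplus$ implemented as tensor concatenation), Stage 2 might look as follows; the names are ours, and Es is frozen per the text:

```python
def stage2_losses(S, D1, Es, x_src, y_onehot, z):
    """CGAN losses of Eq. (5): D1 separates real source features from generated
    ones, while S learns to mimic the class-conditional source features."""
    f_fake = S(tf.concat([z, y_onehot], axis=-1))               # S(z (+) y)
    score_fake = D1(tf.concat([f_fake, y_onehot], axis=-1))
    score_real = D1(tf.concat([Es(x_src), y_onehot], axis=-1))  # Es is frozen
    d1_loss = mse_to(score_real, 1.0) + mse_to(score_fake, 0.0)
    s_loss = mse_to(score_fake, 1.0)  # generator tries to fool D1
    return d1_loss, s_loss
```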
Finally, the feature extractor E, initialized with the parameters optimized in Stage 1, is trained by playing a minimax game (Stage 3). Es and E have the same architecture. The feature extractor E is a domain-invariant encoder, since it is trained on waveforms from both the source and target domains. Furthermore, it maps both source and target samples into a shared feature space in which the features are indistinguishable from those generated through S, which is fixed at this stage. The Stage 3 loss is defined as

$\min_E \max_{D_2} \mathcal{L} = \mathbb{E}_{x_i \sim X_s \cup X_t} \|D_2(E(x_i)) - 1\|^2 + \mathbb{E}_{(z, y_i) \sim (p_z(z), Y_s)} \|D_2(S(z \oplus y_i))\|^2, \qquad (6)$
where D2 is the domain classifier, which tries to distinguish the generated features produced from domain1 by S from the real waveform features extracted from domain0 by E. In this setting, domain0 refers to the real waveform feature domain (both source and target domains, $X_s \cup X_t$), and domain1 refers to the generated features $(p_z(z), Y_s)$. The feature extractor tries to fool the domain classifier so that it cannot discriminate generated features from real waveform features. In Stage 2, the generated features are expected to be similar to the augmented source domain features; in Stage 3, they are expected to be similar to both source and target features. Thus, source and target domain waveforms can be encoded into a similar augmented feature space with stronger generalization ability. For inference, the feature extractor E is combined with the classifier C saved in Stage 1 to predict labels for the target data distribution.

Figure 3: Three-stage feature augmentation adversarial unsupervised domain adaptation for waveform recognition (FA-AUDA)

2.4. Network Configuration

GR-AUDA. The label classifier C1 consists of one fully connected layer with 512 hidden units followed by a ReLU activation function, and one output layer with 11 units (10 for the protocol dataset) using a softmax activation function. For the domain classifier C2, we adopt three fully connected layers (x → 1024 (units) → 512 → 2). To suppress noisy signals from the domain classifier at the early stages of training, the trade-off parameter λ is gradually increased from 0 to 1 according to the following schedule [12]:
$\lambda_p = \frac{2}{1 + \exp(-\gamma \cdot p)} - 1, \qquad (7)$

where γ is set to 10 and p is the training step.
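These heads and the schedule admit a direct sketch; the ReLU activations in C2's hidden layers are our assumption, and p is taken here as normalized training progress in [0, 1]:

```python
import numpy as np

def lambda_schedule(p, gamma=10.0):
    """Eq. (7): ramps lambda from 0 to 1, suppressing noisy domain gradients early."""
    return 2.0 / (1.0 + np.exp(-gamma * p)) - 1.0

def build_label_classifier(num_classes=11):  # C1: 512 (ReLU) -> softmax output
    return tf.keras.Sequential([
        layers.Dense(512, activation="relu"),
        layers.Dense(num_classes, activation="softmax")])

def build_domain_classifier():               # C2: x -> 1024 -> 512 -> 2
    return tf.keras.Sequential([
        layers.Dense(1024, activation="relu"),
        layers.Dense(512, activation="relu"),
        layers.Dense(2, activation="softmax")])
```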
FA-AUDA. The feature generator S contains three fully connected (FC) layers (x → 1024 → 512 → 256). The first two FC layers are followed by a Batch Normalization [17] layer and a Dropout [18] layer, respectively, while the last FC layer is followed by a tanh activation function. The output feature vector f, of size 256, is fed into D1 and D2. D1 is built with one fully connected layer using a sigmoid unit as the output layer. D2 consists of two fully connected layers (x → 512 → 2) with sigmoid units.
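A Keras sketch of these three components; the noise dimension and the hidden activations of S and D2 are not specified in the text and are our assumptions:

```python
def build_feature_generator(noise_dim=100, num_classes=11):
    """S: FC 1024 -> 512 -> 256, with BatchNorm after the first FC layer,
    Dropout after the second, and tanh on the 256-d output feature vector."""
    return tf.keras.Sequential([
        layers.Dense(1024, activation="relu",
                     input_shape=(noise_dim + num_classes,)),  # z (+) one-hot y
        layers.BatchNormalization(),
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(256, activation="tanh")], name="S")

def build_d1(feat_dim=256, num_classes=11):
    """D1: one FC layer with a sigmoid unit, fed feature (+) one-hot label."""
    return tf.keras.Sequential([
        layers.Dense(1, activation="sigmoid",
                     input_shape=(feat_dim + num_classes,))], name="D1")

def build_d2(feat_dim=256):
    """D2: two FC layers (512 -> 2) with sigmoid output units."""
    return tf.keras.Sequential([
        layers.Dense(512, activation="relu", input_shape=(feat_dim,)),
        layers.Dense(2, activation="sigmoid")], name="D2")
```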
Both methods use the T-CNN architecture as the encoder network. T-CNN contains one deep channel and one shallow channel: the deep channel comprises two convolution layers with 256 and 128 filters respectively, while the shallow channel has one convolution layer with 128 filters. One fully connected layer merges the flattened feature maps of both channels for multi-scale information. 70% of the examples are used as the training set, with the rest as the testing set. The dropout rate is set to 0.5. Model training is based on TensorFlow [19]. The Adam solver [20] is applied over the training set with a batch size of 64.

3. Experimental Evaluation

In this section, we first introduce the waveform datasets (Table 1). The proposed method is then evaluated on unsupervised adaptation tasks within the modulation and protocol datasets, respectively. We construct one domain adaptation pair for modulation recognition, MR10a ↔ MR04c, and three domain adaptation pairs for protocol recognition: PRin ↔ PRcor, PRin ↔ PRout, PRout ↔ PRcor. All experiments are performed in the unsupervised setting, where labels in the target domain are withheld.

3.1. Datasets

To evaluate the proposed approach, we use two benchmark modulation datasets under different channel conditions from [21] and sample three protocol datasets in three real scenarios: indoor office, corridor and outdoor (wide area). All datasets are detailed in Table 1.

The modulation dataset contains 11 classes: 8PSK, BPSK, QPSK, QAM16, QAM64, AM-SSB, AM-DSB, CPFSK, GFSK, WBFM and PAM4.
Table 1: Description of Datasets

Signal type          Dataset      Classes
Modulation Dataset   MR10a        11
Modulation Dataset   MR04c        11
Protocol Dataset     PRindoor     10
Protocol Dataset     PRcorridor   10
Protocol Dataset     PRoutdoor    10
Each example waveform is of size 2 × 128. The modulation signals are synthetically generated in GNU Radio, introducing modulation, pulse shaping, carried data, and other well-characterized transmit parameters identical to a real-world signal. GNU Radio models for time-varying multi-path fading of the channel impulse response, random-walk drifting of the carrier frequency oscillator and additive white Gaussian noise are employed, and the signals are passed through harsh channel models which introduce unknown scale, translation and dilation. There are two such datasets, named MR10a and MR04c. For MR10a, each class contains 10,000 samples over 10 signal-to-noise ratio (SNR) levels, constituting a 22,000-sample dataset. For the MR04c dataset, each class contains a different number of samples. The main difference from MR10a is that MR04c is generated with more moderate LO drift and lighter fading, i.e., a less impaired channel condition.
¹ Open source code and datasets to replicate the experiments are available on the web under the BSD-2 license: https://github.com/dupanfei1/AUDA
Table 2: Experimental results (recognition accuracy) for unsupervised adaptation on modulation datasets

Methods          MR10a → MR04c   MR04c → MR10a
Source only      0.182 ± 0.03    0.289 ± 0.02
CLDNN            0.138 ± 0.01    0.290 ± 0.01
Vgg              0.165 ± 0.01    0.434 ± 0.01
ResNet           0.106 ± 0.01    0.357 ± 0.01
GR-AUDA          0.492 ± 0.01    0.501 ± 0.02
FA-AUDA          0.702 ± 0.02    0.468 ± 0.01
Trained Target   0.922           0.768
The protocol dataset¹ is generated and recorded by a vector signal generator based on NI PXI-5611 hardware using LabVIEW. Signals are transmitted by directional antennas, then received and downconverted at the receiver side. Each signal, containing I and Q components, is recorded and sliced with a 400-sample rectangular window, so the sampled signal size is 2 × 400. The protocol dataset consists of 10 classes: FM, GSM, LTE, WCDMA, Bluetooth, WLAN-ac, WLAN-a/g/j/p, WLAN-g, WLAN-b/g and WLAN-n. Three datasets are generated under three channel conditions, indoor office, corridor and wide area, named PRindoor (PRin), PRcorridor (PRcor) and PRoutdoor (PRout), respectively. Each class contains 2,000 samples, giving 20,000 samples per dataset.

3.2. Modulation Recognition

Table 2 reports the results of the two unsupervised domain adaptation methods for cross-domain modulation recognition, where A → B denotes
transfer learning from domain A to domain B. The first row gives the accuracy on target data achieved by non-adapted classifiers trained on source data (Source only), and the last row the accuracy achieved by classifiers trained on labeled target data (Trained Target). The network configuration for the "Source only" and "Trained Target" methods is the same as the label classifier following the T-CNN encoder in the AUDA architecture. Accuracy is boosted from 0.182 to 0.702 with the FA-AUDA method for the MR10a → MR04c task. For the MR04c → MR10a task, the accuracy on unlabeled target waveforms improves from 0.289 to 0.501 with the GR-AUDA method. Since, to the best of our knowledge, unsupervised cross scenario waveform recognition has not been considered in the literature, we compare our methods with three deep neural network methods (CLDNN [1], Vgg [8], ResNet [8]) as a compromise. CLDNN combines a CNN and an LSTM for modulation recognition; Vgg and ResNet are recent advanced deep networks with outstanding recognition performance. The experimental results indicate that the unsupervised domain adaptation approaches can greatly improve the recognition performance on target waveforms without any labels.

Fig. 4 shows the confusion matrices before (source only) and after domain adaptation for the transfer task MR10a → MR04c. A confusion matrix is a table layout that allows visualization of the performance of a classification algorithm: the horizontal axis represents the predicted label and the vertical axis the true label. Before adaptation, the model trained on MR10a performs poorly on all categories of MR04c except AM-SSB and GFSK, as shown in Fig. 4(a).
Figure 4: Recognition performance of the FA-AUDA method (a) before (source only) and (b) after domain adaptation for the transfer task MR10a → MR04c

After domain adaptation, however, a clear diagonal pattern can be seen in Fig. 4(b), which demonstrates the effectiveness of our proposed method.
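For reference, such a matrix can be computed from any trained encoder/classifier pair with a few lines (the array and model names here are hypothetical):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# x_tgt_test, y_tgt_test: held-out target-domain waveforms and integer labels
features = encoder.predict(x_tgt_test)
y_pred = np.argmax(classifier.predict(features), axis=1)
cm = confusion_matrix(y_tgt_test, y_pred)     # rows: true, cols: predicted
cm_norm = cm / cm.sum(axis=1, keepdims=True)  # diagonal = per-class recall
```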
Table 3: Experimental results for unsupervised adaptation among protocol datasets

Methods          PRin→PRout    PRin→PRcor    PRout→PRcor   PRout→PRin    PRcor→PRin    PRcor→PRout
Source only      0.505 ± 0.02  0.430 ± 0.02  0.532 ± 0.03  0.551 ± 0.02  0.606 ± 0.01  0.564 ± 0.03
CLDNN            0.218 ± 0.01  0.469 ± 0.02  0.543 ± 0.02  0.539 ± 0.01  0.511 ± 0.01  0.339 ± 0.02
Vgg              0.500 ± 0.02  0.588 ± 0.02  0.457 ± 0.01  0.494 ± 0.02  0.576 ± 0.01  0.402 ± 0.02
ResNet           0.522 ± 0.01  0.582 ± 0.01  0.522 ± 0.02  0.502 ± 0.01  0.580 ± 0.02  0.415 ± 0.02
GR-AUDA          0.730 ± 0.02  0.801 ± 0.02  0.746 ± 0.02  0.732 ± 0.02  0.710 ± 0.02  0.741 ± 0.01
FA-AUDA          0.742 ± 0.02  0.681 ± 0.02  0.646 ± 0.01  0.674 ± 0.02  0.682 ± 0.02  0.630 ± 0.02
Trained Target   0.945         0.940         0.940         0.925         0.925         0.945
3.3. Protocol Recognition

Table 3 presents the results of the two UDA methods for protocol recognition. The GR-AUDA approach achieves better performance on most transfer tasks, indicating that GR-AUDA is well suited to cross-domain protocol recognition. We again compare our methods with the three deep neural network methods (CLDNN [1], Vgg [8], ResNet [8]) as a compromise. The results show the effectiveness of our proposed methods for unsupervised protocol recognition.

Fig. 5 shows the confusion matrices before (source only) and after domain adaptation for the transfer task PRin → PRcor. Before adaptation, the model trained on PRin performs poorly on all categories of PRcor except FM, GSM and WLAN-n, as shown in Fig. 5(a). After domain adaptation, the results present a clear diagonal pattern in Fig. 5(b) (note that the five WLAN signals are hard to adapt due to their similarity), demonstrating the effectiveness of our proposed method.
Figure 5: Recognition performance of the GR-AUDA method (a) before (source only) and (b) after domain adaptation for the transfer task PRin → PRcor
3.4. Discussion

The advantage of the FA-AUDA approach is feature augmentation, which expands the feature space of the source domain data and thus improves the generalization ability of the recognition model. The advantage of GR-AUDA is its
one-stage feature learning, which learns features that are both discriminative and invariant to the change of domains. Signals in the MR10a dataset pass through harsh channel models which introduce unknown scale, multi-path fading and dilation noise; thus MR10a is more complete and contains more waveform information than MR04c. For the MR10a → MR04c task, the FA-AUDA approach successfully expands the feature space of the modulation waveforms, achieving much better transfer performance than GR-AUDA. For the MR04c → MR10a task, GR-AUDA achieves slightly better performance than FA-AUDA thanks to its one-stage feature learning. For the protocol recognition task, the three protocol datasets sampled in the real world suffer less impairment than the simulated dataset and contain more domain-specific features; the transfer performance of GR-AUDA is therefore better than that of FA-AUDA overall. In conclusion, FA-AUDA is more powerful when the source domain has complete and varied data, while GR-AUDA is more suitable when the source and target domains have more domain-specific features.

4. Conclusion

Inspired by adversarial domain adaptation, we design a novel AUDA waveform recognition framework to improve cross scenario recognition performance. The proposed approach transforms the source and target domains to a shared feature space with a robust feature extractor network, and the domain-invariant features are learned by adversarial learning
in the shared space. Our experimental results demonstrate that the recognition performance on target domain waveforms can be greatly improved via the two proposed methods, GR-AUDA and FA-AUDA. Finally, we provide an abundant waveform dataset covering different propagation conditions. Future research will focus on semi-supervised domain adaptation, which requires only a small amount of labeled data in the target domain, to further improve recognition performance.

Acknowledgement

This work was supported by the National Natural Science Foundation of China under Grant 61871282.

Declaration of interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Author statement

Qing Wang: Conceptualization, Writing - Review & Editing, Supervision, Project administration. Panfei Du: Methodology, Software, Writing - Original Draft. Xiaofeng Liu: Validation, Writing - Review & Editing. Jingyu Yang: Writing - Review & Editing. Guohua Wang: Writing - Review & Editing.
References

[1] N. E. West, T. O'Shea, Deep architectures for modulation recognition, in: 2017 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), 2017, pp. 1–6. doi:10.1109/DySPAN.2017.7920754.
[2] T. O'Shea, J. Hoydis, An introduction to deep learning for the physical layer, IEEE Transactions on Cognitive Communications and Networking 3 (4) (2017) 563–575. doi:10.1109/TCCN.2017.2758370.
[3] A. Swami, B. M. Sadler, Hierarchical digital modulation classification using cumulants, IEEE Transactions on Communications 48 (3) (2000) 416–429. doi:10.1109/26.837045.
[4] K. Kim, I. A. Akbar, K. K. Bae, J. S. Um, C. M. Spooner, J. H. Reed, Cyclostationary approaches to signal detection and classification in cognitive radio, in: 2007 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, 2007, pp. 212–215. doi:10.1109/DYSPAN.2007.35.
[5] P. Urriza, E. Rebeiz, P. Pawelczak, D. Cabric, Computationally efficient modulation level classification based on probability distribution distance functions, IEEE Communications Letters 15 (5) (2011) 476–478. doi:10.1109/LCOMM.2011.032811.110316.
[6] Q. Wang, P. Du, J. Yang, G. Wang, J. Lei, C. Hou, Transferred deep learning based waveform recognition for cognitive passive radar, Signal Processing 155 (2019). doi:10.1016/j.sigpro.2018.09.038.
[7] A. Torralba, A. A. Efros, Unbiased look at dataset bias, in: CVPR 2011, IEEE, 2011, pp. 1521–1528.
[8] T. J. O'Shea, T. Roy, T. C. Clancy, Over-the-air deep learning based radio signal classification, IEEE Journal of Selected Topics in Signal Processing 12 (1) (2018) 168–179. doi:10.1109/JSTSP.2018.2797022.
[9] S. J. Pan, Q. Yang, A survey on transfer learning, IEEE Transactions on Knowledge and Data Engineering 22 (10) (2010) 1345–1359. doi:10.1109/TKDE.2009.191.
[10] T. Tommasi, M. Lanzi, P. Russo, B. Caputo, Learning the roots of visual domain shift, in: European Conference on Computer Vision (ECCV), 2016, pp. 475–482.
[11] E. Tzeng, J. Hoffman, K. Saenko, T. Darrell, Adversarial discriminative domain adaptation, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2962–2971. doi:10.1109/CVPR.2017.316.
[12] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, V. Lempitsky, Domain-adversarial training of neural networks, Journal of Machine Learning Research 17 (1) (2016) 2096–2030.
[13] R. Volpi, P. Morerio, S. Savarese, V. Murino, Adversarial feature augmentation for unsupervised domain adaptation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 5495–5504.
[14] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, in: International Conference on Neural Information Processing Systems (NIPS), 2014, pp. 2672–2680.
[15] M. Mirza, S. Osindero, Conditional generative adversarial nets, arXiv preprint arXiv:1411.1784 (2014). URL http://arxiv.org/abs/1411.1784
[16] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, Multi-class generative adversarial networks with the L2 loss function, arXiv preprint arXiv:1611.04076 (2016).
[17] S. Ioffe, C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, arXiv preprint arXiv:1502.03167 (2015). URL http://arxiv.org/abs/1502.03167
[18] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: A simple way to prevent neural networks from overfitting, Journal of Machine Learning Research 15 (2014) 1929–1958. URL http://jmlr.org/papers/v15/srivastava14a.html
[19] TensorFlow, https://www.tensorflow.org/.
[20] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014).
[21] T. J. O'Shea, DeepSig, https://www.deepsig.io/.