Neurocomputing 397 (2020) 457–463
BeAware: Convolutional Neural Network (CNN) based user behavior understanding through WiFi channel state information

Leyuan Jia a,1, Yu Gu a,1,∗, Ken Cheng a,1, Huan Yan a,1, Fuji Ren b,1

a Hefei University of Technology, China
b University of Tokushima, Japan
Article info

Article history: Received 15 May 2019; Revised 22 August 2019; Accepted 7 September 2019; Available online 8 April 2020

Keywords: User behavior analysis; WiFi channel state information (CSI); Fresnel zone
Abstract

In the modern informatics society, human beings are becoming more and more attached to the computer. Therefore, understanding user behavior is critical to various application fields like sedentary analysis, human-computer interaction, and affective computing. Current sensor-based and vision-based user behavior understanding approaches are either contact-based or obtrusive to users, jeopardizing their availability and practicality. To this end, we present BeAware, a contactless Radio Frequency (RF) based user behavior understanding system leveraging WiFi Channel State Information (CSI). The key idea is to visualize the channel data affected by human movements as time-series heat-map images, which are processed by a Convolutional Neural Network (CNN) to understand the corresponding user behaviors. We prototype BeAware on commodity low-cost WiFi devices and evaluate its performance in real-world environments. Experimental results have verified its effectiveness in recognizing user behaviors.

© 2020 Elsevier B.V. All rights reserved.
1. Introduction

In the informatics society, the computer has devoured much of our time. When working, people spend an average of 66% of their work time at their desktops. In their spare time, people also tend to sit in front of their personal computers surfing and gaming. Though this coherent state facilitates the accelerating rhythm of urban life and work efficiency, it has gradually deprived people of exercise time and led to side effects like sedentary behavior (SB). SB has been proved to pose great threats to people's wellness and potentially raises the possibility of chronic diseases like high blood pressure, diabetes, or even cancer [1]. Therefore, it becomes crucial to understand user behavior, i.e., knowing whether the user is working, gaming or surfing and how long s/he has been doing it. Moreover, such understanding constitutes a promising enabler for many other fields like human-computer interaction (HCI) and affective computing (AC).

Current research on this very topic can be roughly divided into two categories, i.e., sensor-based and vision-based. The former leverages various sensors attached to the human body to monitor user behavior [5–7], while the latter relies on mature Computer Vision (CV) technology that analyzes camera footage to read user behavior [2–4]. Both types of approaches are quite
∗ Corresponding author. E-mail address: [email protected] (Y. Gu).
1 Senior Member, IEEE.
https://doi.org/10.1016/j.neucom.2019.09.111 0925-2312/© 2020 Elsevier B.V. All rights reserved.
effective, but certain shortcomings like line-of-sight, illumination, and coverage constraints jeopardize their usage in practice. To this end, it remains a great challenge to develop a ubiquitous behavior analysis system.

In this article, we introduce the WiFi signal, which is insensible to users, as an alternative to vision and sensors for perceiving user behavior. The key reason behind this is that the human body reflects or absorbs WiFi signals, and thus changes the WiFi channel state information (CSI) [8–10]. The inherent research problem is how to exploit WiFi CSI, which contains rich behavior information, to retrieve micro-gestures like keystrokes and mouse movements for understanding the corresponding user behavior. We propose two different approaches to deal with this challenge. One is to use a traditional classifier like the Support Vector Machine (SVM), which requires us to select features from the time or frequency domain. This approach is simple in training and fast in recognition. However, its performance largely depends on the selection of features, making it not adaptive to environmental changes. To this end, we propose a neurocomputing-based approach, which needs no explicit human intervention and adapts to different scenarios. More specifically, we map the channel data affected by human movements into time-series heat-map images and leverage a Convolutional Neural Network (CNN) to understand the corresponding user behaviors. The mapping scheme aims to preserve the effect of physical posture changes on the channel response, much like taking images, while the CNN is used due to its well-acknowledged ability in dealing with images. We prototype BeAware with low-cost
commodity WiFi devices and verify the two proposed approaches in real environments. Extensive experiments demonstrate that BeAware is very effective in capturing and understanding user behaviors.

The remainder of this paper is organized as follows: in the next section, we provide an overview of related works. We introduce the system design in Section 3 and evaluate the experimental results in Section 4. Finally, we conclude our work and discuss some open issues in Section 5.

2. Related works

2.1. Wireless motion-sensing

WiFi-based motion sensing has many advantages over traditional motion-sensing technologies (e.g., vision-based, infrared-based, and dedicated sensor-based) in terms of non-line-of-sight operation, passive sensing (no need to carry sensors), low cost, easy deployment, no restrictions on lighting conditions, and strong scalability. A large number of motion-sensing studies and applications based on WiFi signals have emerged, which can be divided into two categories: RSSI (Received Signal Strength Indicator)-based and CSI-based.

RSSI-based: Human motions affect the signal propagation path and lead to variations in signal strength, which lays the foundation of motion recognition. Early WiFi-based motion sensing mainly used RSSI. Sigg et al. [14] use a software-defined radio to transmit RF signals and determine human motion based on changes in RSSI. Abdelnasser et al. leverage RSSI to identify 7 different gestures [11] and to detect respiration [15]. We also built a similar RSSI-based system, PAWS, to handle whole-body activities [8]. However, due to RSSI's coarse resolution, this method cannot capture complex and subtle motions.

CSI-based: CSI is the subcarrier information from the physical layer (PHY), and it provides more details due to the multipath effect. Thus recent research mainly uses CSI instead of RSSI for motion sensing.
WiFall [13] uses CSI to build a ubiquitous fall detection system. Zeng et al. [12] leverage CSI to recognize five different customer behavior states. Ali et al. proposed a gesture-recognition system called WiKey to recognize 37 keys on the keyboard [17]. Fu et al. utilize CSI to realize a device-free air-writing recognition system called Wri-Fi [18]. We also built a system named MoSense, which extracts CSI from Intel 5300 NICs to pinpoint motions in real time [9]. WiFi-based motion sensing is showing unprecedented potential for a variety of applications, enabling not only machine-to-machine interaction but also natural interaction between humans and machines.

2.2. Behavior recognition

From the perspective of recognition devices, previous studies on human behavior recognition can be mainly divided into two categories: vision-based [2–4] and sensor-based [5–7]. Meanwhile, radio frequency (RF) based behavior recognition has also risen recently as a ubiquitous solution.

Vision-based behavior recognition: Computer vision (CV) technologies have long been recognized as an effective solution for behavior recognition. Bodor et al. developed an automated, smart video system to track pedestrians and detect suspicious motions or behaviors [19]. Jalal et al. [20] presented a novel depth-video-based method for recognizing the daily-life activities of elderly people living alone indoors, which uses robust multi-features and embedded Hidden Markov Models (HMMs). Hsu et al. proposed a computer-vision-based abnormal human behavior detection system for monitoring psychiatric patients [21].
Sensor-based behavior recognition: In the past decades, the advancement of various types of sensors has stimulated the development of human behavior understanding and recognition approaches. Zhu et al. collected data from wearable motion sensors and the associated location context to detect different behavioral anomalies in human daily life [22]. Chen et al. presented a framework for smartphone-sensor based (mainly accelerometer and gyroscope) human activity recognition [23]. Attal et al. [24] used three inertial sensor units, worn by healthy subjects at key points of the upper/lower body (chest, right thigh, and left ankle), to recognize the main daily-living human activities.

WiFi-based behavior recognition: Most of the above-mentioned WiFi-based studies segment CSI signal sequences and map them to the corresponding actions through template matching. For instance, Tan et al. built a system named WiFinger [25], which first performs data-processing operations such as signal filtering, denoising, and segmentation, then conducts behavior recognition by matching established CSI templates. Our previous research [26] also learned the CSI signal patterns caused by specific actions and then analyzed them via SVM. More recently, a new WiFi-based method utilizes a multi-layer convolutional neural network (CNN) to learn human activities from the CSI of multiple access points (APs) [16]. Such deep learning-based recognition achieves higher accuracy than traditional solutions but requires substantial training data and training time. Our system, BeAware, pushes the research one step further via data visualization. The key idea is to map the channel data affected by human movements into time-series heat-map images to preserve the physical posture changes.
BeAware then utilizes a mature CNN, which is well acknowledged as an effective approach for image processing, to extract high-level features and recognize the corresponding behaviors.

3. BeAware: System design

In this section, we present the system design of BeAware.

3.1. System overview

As shown in Fig. 1, BeAware consists of four modules, i.e., a data-receiving module, a pre-processing module, a mapping module, and a classification module. Since our system is training-based, there exist two data streams: training and testing. The first module controls the transceivers to record the user behaviors in the channel data. The raw data is then handled in the pre-processing module for denoising through a Butterworth filter [17]. The mapping module then visualizes the denoised channel data as time-series heat-map images to preserve the physical posture changes. Lastly, the classification module is responsible for training classifiers and for behavior analysis. All training data sets are stored in an SQLite database mounted on a Linux server.

3.2. Theoretical foundation: channel response

In general, recent Wi-Fi standards use either RSS (Received Signal Strength), representing the received power level, or CSI (Channel State Information), indicating signal attenuation, as the channel response. RSS characterizes the total received power of all paths,
RSS = 10 log₂(‖H‖²),    (1)
Fig. 1. System architecture of BeAware.
Fig. 2. System environment of BeAware.
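As a hedged illustration of the four-module architecture in Fig. 1, the pipeline might be organized as below. All function names are assumptions for exposition; the "denoising" and "classifier" here are trivial stand-ins for the Butterworth filter and CNN/SVM described later, not the authors' implementation.

```python
# Illustrative sketch of BeAware's four modules (Section 3.1).
# All names and the toy classifier are assumptions, not the real system.
import numpy as np

def receive(n_packets=100, n_subcarriers=30, seed=0):
    """Data-receiving module: stand-in for CSI capture from the NIC."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=(n_packets, n_subcarriers))  # amplitude matrix

def preprocess(csi):
    """Pre-processing module: placeholder denoising (moving average)."""
    kernel = np.ones(5) / 5.0
    return np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="same"), 0, csi)

def to_heatmap(csi):
    """Mapping module: normalize amplitudes to 0-255 gray values."""
    lo, hi = csi.min(), csi.max()
    return np.uint8(255 * (csi - lo) / (hi - lo))

def classify(heatmap):
    """Classification module: trivial stand-in for the CNN/SVM classifiers."""
    labels = ["gaming", "working", "surfing"]
    return labels[int(heatmap.mean()) % 3]

behavior = classify(to_heatmap(preprocess(receive())))
print(behavior)
```

Each stage maps one-to-one onto a module in Fig. 1, which keeps the training and testing data streams symmetric.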
where H = Σ_{k=1}^{N} H_k · e^{jθ_k}, with H_k and θ_k being the amplitude and phase of the kth multipath component, respectively. Eq. (1) indicates that RSS is coarse-grained, so it is inherently incapable of capturing the multipath effect. Therefore, CSI on the PHY layer, which can better capture multipath channel features, has recently emerged as a more effective alternative. Specifically, in an Orthogonal Frequency Division Multiplexing (OFDM) system, the CSI takes the form of the complex-valued channel frequency response (CFR) H(f, t), which characterizes the channel with an amplitude and phase for each subcarrier frequency f measured at time t.
H(f, t) = Σ_{k=1}^{N} h_k(f, t) · e^{−jθ_k(f, t)},    (2)
where h_k(f, t) is the amplitude that characterizes the attenuation on the kth path, and e^{−jθ_k(f, t)} is the phase shift caused by its propagation delay.

3.3. Experiment setup

In order to detect sedentary behavior and recognize whether the experimenters are gaming, working or surfing, the following experimental settings are adopted according to the channel response characteristics of wireless signals:

[Prototype]. As shown in the right of Fig. 2, our prototype consists of two mini PCs, each equipped with an Intel WiFi Link 5300 network interface controller (NIC) running at 5 GHz. One mini PC is equipped with an external antenna as a transmitter, while the
other is connected to three antennas as a receiver. The antennas are fixed on tripods. The sampling rate is 100 Hz.

[Participant]. Two young men participated in the experiment.

[Environment]. The experiments were carried out in an almost 8.85 × 8.85 m² office room, shown in the left of Fig. 2. It contains some office furniture, including chairs, sofas, and computer desks. Other students were present in the room during the experiments.

[Behaviors]. Three basic behaviors are monitored, i.e., gaming, working, and surfing.

[Settings]. We ask each participant to perform each behavior in his own way for 60 s, which ensures near-real data, and we repeated the experiment many times. In this approach, we collected data in the experimental environment rather than in real working scenes: we controlled the duration of the experiment and created a single-office environment without interference from other people. The data is therefore called near-real data, but it is still representative. Besides, every participant also performed each fixed motion under controlled conditions.
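The multipath channel model of Eqs. (1) and (2) can be sketched numerically. The path amplitudes and phases below are made-up values for illustration only; the point is that RSS collapses all paths into one power number, which is why it is coarse-grained.

```python
# Numerical sketch of the multipath channel model (Eqs. (1)-(2)).
# The three-path profile (h_k, theta_k) is an invented example.
import numpy as np

h = np.array([1.0, 0.5, 0.2])              # per-path attenuation h_k
theta = np.array([0.0, np.pi / 4, np.pi])  # per-path phase shift theta_k

# Complex channel response: H = sum_k h_k * exp(-j * theta_k)
H = np.sum(h * np.exp(-1j * theta))

# RSS (Eq. (1)) keeps only the total power, so two very different
# multipath profiles can yield the same RSS value.
rss = 10 * np.log2(np.abs(H) ** 2)
print(round(rss, 2))
```

CSI, by contrast, keeps one complex value per subcarrier, so per-path changes caused by body motion remain visible across the 30 subcarriers.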
Table 1. Relative use frequency of keyboard and mouse.

            Keyboard          Mouse
Static      Low frequency     Low frequency
Gaming      High frequency    High frequency
Working     High frequency    Medium frequency
Surfing     Low frequency     Medium frequency
3.4. Pre-processing module

The raw channel information may contain abnormal samples caused by environmental noise and hardware glitches, so it is necessary to denoise the raw data. In the pre-processing module, we choose the Butterworth filter. As we sample CSI values at a rate of Fs = 100 samples/s, we set the cutoff frequency of the Butterworth filter to fc = 15 Hz, i.e., ωc = 2π·fc ≈ 94.2 rad/s. We choose the Butterworth filter because the frequency of variations caused by human motions lies at the low end of the spectrum, while the frequency of the noise lies at the high end. A Butterworth low-pass filter is a natural choice for removing such noise: it does not significantly distort the phase information in the signal and has a maximally flat amplitude response in the passband, and thus does not lead to excessive distortion of the signal.

3.5. Mapping module

In this module, we process the CSI data for the subsequent CNN classification. After pre-processing, we obtain the denoised CSI data. The CSI signal extracted from our devices has 30 subcarriers, each of which has an energy value at each time point; each time point of each subcarrier can be regarded as one pixel of a heat-map image, with its energy value converted to the gray value of the pixel. We therefore first segment the pre-processed CSI data and convert it into heat maps, which are used as the CNN's training and testing sets.

3.6. Classification module

Behavior recognition. In our work, we focus on three behaviors, i.e., gaming, working and surfing, while static is considered the default state. All of these behaviors consist of two basic features, i.e., typing and mouse moving, but the ratio of motions contained in each type should be different, as shown in Table 1. In the end, we use two different methods, CNN and SVM, to classify these behaviors. For CNN, we first segmented the pre-processed data and generated the heat maps. We then selected 90% of the heat maps for training, of which 10% served as the validation set, and the remaining 10% of the heat maps were used for testing. For SVM, we directly trained and tested the segmented CSI data in the same proportions. The flow chart of data processing is shown in Fig. 3.
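The pre-processing step can be sketched with SciPy's Butterworth design routines, assuming the paper's Fs = 100 Hz and 15 Hz cutoff. The filter order (4) and the zero-phase `filtfilt` application are assumptions; the paper does not state them.

```python
# Hedged sketch of the Butterworth low-pass denoising step (Section 3.4).
# Order 4 and zero-phase filtering are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0  # CSI sampling rate (samples/s)
fc = 15.0   # cutoff frequency (Hz)

# SciPy takes the cutoff normalized by the Nyquist frequency fs/2
b, a = butter(N=4, Wn=fc / (fs / 2), btype="low")

# Toy CSI amplitude stream: slow motion component plus high-frequency noise
t = np.arange(0, 1, 1 / fs)
motion = np.sin(2 * np.pi * 2 * t)        # 2 Hz "body motion"
noise = 0.3 * np.sin(2 * np.pi * 40 * t)  # 40 Hz noise
denoised = filtfilt(b, a, motion + noise)  # zero-phase filtering

# The 40 Hz component should be strongly attenuated
residual = np.max(np.abs(denoised - motion))
print(residual < 0.1)
```

`filtfilt` runs the filter forward and backward, which cancels the phase shift, matching the text's requirement that phase information not be distorted.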
4. Performance evaluation

4.1. Hardware setup

We implement the proposed system on existing hardware. The sending and receiving devices are two mini PCs, each equipped with an Intel WiFi Link 5300 NIC, an Intel Celeron N2830 processor, and 2 GB RAM, running Ubuntu 12.04. Our experiment settings are shown in the right of Fig. 2. The transmitting device sends 100 packets/s, and we use one transmitting antenna and three receiving antennas, but only the data collected by the second receiving antenna is used.

4.2. Experimental methodology

We collect experimental data in the lab environment shown in Fig. 2. We conducted a number of data collections for the three sedentary behaviors (gaming, working and surfing) in two different scenarios: controlled experimental data, where the action mode is fixed, and data close to the real scene, where the action mode is free. We collect the three kinds of motion data in both scenarios. Each data acquisition lasts about 60 s; keeping the sessions short avoids the data-quality problems that arise when sessions run too long, since repeating these actions over a longer period may leave the experimenters unable to meet some requirements of the actions due to fatigue. After the CSI data is collected, it is cut into original samples by a 1-s sliding window. In total, we obtained 120 samples for each action in each scenario.
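The 1-s sliding-window segmentation above can be sketched as follows. At 100 packets/s with 30 subcarriers, each window is a 100 × 30 CSI slice; the non-overlapping stride is an assumption, as the paper does not state the window overlap.

```python
# Sketch of the 1-s sliding-window segmentation (Section 4.2).
# A non-overlapping stride is assumed; the paper does not give one.
import numpy as np

def segment(csi, window=100, stride=100):
    """Cut a (time, subcarrier) CSI matrix into fixed-length windows."""
    return [csi[i:i + window] for i in range(0, len(csi) - window + 1, stride)]

csi = np.zeros((6000, 30))  # 60 s of CSI at 100 samples/s, 30 subcarriers
samples = segment(csi)
print(len(samples), samples[0].shape)
```

One 60-s acquisition thus yields 60 windows under this assumption; the paper's 120 samples per action per scenario would then correspond to multiple acquisitions.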
Fig. 3. Flow chart of the data processing.
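The mapping step in the pipeline of Fig. 3 (segmented CSI → 64 × 64 gray-scale heat map, Section 3.5) might look like the sketch below. The nearest-neighbour resize via index sampling is an assumption made to avoid an image library; any image resize would serve.

```python
# Hedged sketch of the mapping module: one segmented CSI window
# (packets x 30 subcarriers) becomes a 64 x 64 gray-scale heat map.
import numpy as np

def csi_to_heatmap(csi_window, size=64):
    """Map CSI amplitudes to gray values, then resize to size x size."""
    lo, hi = csi_window.min(), csi_window.max()
    gray = 255 * (csi_window - lo) / (hi - lo + 1e-12)  # 0..255 gray values
    # nearest-neighbour resize via index sampling (assumed, not the paper's)
    rows = np.linspace(0, csi_window.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, csi_window.shape[1] - 1, size).astype(int)
    return np.uint8(gray[np.ix_(rows, cols)])

window = np.random.default_rng(1).normal(size=(100, 30))  # one 1-s CSI slice
img = csi_to_heatmap(window)
print(img.shape, img.dtype)
```

The resulting 64 × 64 images are what the CNN consumes as training and testing samples.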
Fig. 4. CSI waveforms of the three behaviors on antenna #2.
Fig. 5. CSI feature extraction for three different actions.
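The four statistics visualized in Fig. 5 (minimum, maximum, mean, standard deviation) feed the SVM branch. A hedged sketch, with scikit-learn's `SVC` and two invented toy "behaviors" that differ only in amplitude spread:

```python
# Sketch of the SVM branch: each CSI window is reduced to four statistics.
# The SVC settings and toy data are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

def features(csi_window):
    """Four-feature vector per window: min, max, mean, std."""
    return np.array([csi_window.min(), csi_window.max(),
                     csi_window.mean(), csi_window.std()])

rng = np.random.default_rng(0)
# Toy windows for two hypothetical behaviors with different motion intensity
X = np.array([features(rng.normal(scale=s, size=(100, 30)))
              for s in [1.0] * 20 + [3.0] * 20])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```

This illustrates why the SVM route is fast but feature-dependent: the classifier only sees whatever statistics are hand-picked from each window.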
4.3. Case demonstration

Fig. 4 shows the CSI signals for three different actions in a real scene. From the figure, we can roughly see that the CSI signals of different action modes differ, but the differences are difficult to characterize directly with specific features. We therefore try to distinguish these action data through machine learning: one method is the traditional support vector machine (SVM), and the other is a convolutional neural network (CNN). Our original idea was to convert the CSI signal into a heat map and then identify the behavior through the image. CNN performs well on image recognition and is the most widely used neural network classifier, so we naturally turned to CNN. Table 2 shows the CNN's structure parameters. We also compare the advantages and disadvantages of the two machine learning methods. For the SVM, we extract features of the CSI signal as training data. For each action's CSI signal, we extracted four features
Table 2. Structure of CNN.

Layer                 Output shape
conv2d_1 (Conv2D)     (None, 16, 64, 64)
conv2d_2 (Conv2D)     (None, 64, 32, 32)
conv2d_3 (Conv2D)     (None, 256, 16, 16)
dense_1 (Dense)       (None, 1024)
dense_2 (Dense)       (None, 3)
of minimum, maximum, mean, and standard deviation. Fig. 5 shows the specifics of these CSI features.

CNN has certain advantages in classifying different pictures. The CSI signal can be converted into a heat map: each pixel on the heat map corresponds to a different packet on a different subcarrier of the CSI signal, and the amplitude of the CSI signal corresponds to the color of the heat map. We can also see the differences between action modes from the heat map. So the
heat map is another manifestation of the CSI signal at the data level. We convert the segmented CSI signals into heat maps, which can then be used as CNN training samples. The heat maps directly converted from CSI are large, so we resize each heat map to a 64 × 64 image. Fig. 6 shows the heat maps after conversion of the CSI signals, and the heat maps of the segmented 1-s CSI data are shown in Fig. 7.

Fig. 6. Heat maps after conversion of CSI signals.

Fig. 7. (a) Heat map of 1-s gaming CSI data; (b) heat map of 1-s working CSI data; (c) heat map of 1-s surfing CSI data.

4.4. Evaluation result

Table 3. Experimental results of the two methods in the two scenarios.

             SVM                           CNN
DataSet      Training-Set   Testing-Set    Training-Set   Testing-Set
Control      100%           100%           100%           94.40%
Near-real    71.94%         77.78%         100%           77.78%

We used the two machine learning methods to train the samples in the two scenarios. For the processed sample data, we select 90% of the samples for training and the remaining samples for validation. Table 3 shows the accuracy of the two methods in the different scenarios. In the controlled scenario, high accuracy can be achieved with both methods. In the near-real scenario, the accuracy of both has decreased. Overall, the accuracies of the two methods are relatively close. The amount of data used in the experiment is relatively small, but because neural networks have certain advantages in dealing with large amounts of complex data, CNN should perform better as the experimental data increase. We also tried to use the Capsule Network [27] instead of CNN to process the heat-map images; because our relatively small sample size is unsuitable for such a complex neural network, we obtained only 44.4% accuracy.

5. Conclusion and future work

In this paper, we proposed BeAware, a device-free and real-time WiFi-based system to analyze common human behaviors (surfing, working and gaming) around computers. The key idea is to exploit WiFi CSI, which contains rich behavior information, to retrieve micro-gestures like keystrokes and mouse movements for understanding the corresponding user behavior. Compared with our previous work [28,29], this is the first time that we use a CNN to process CSI signals for behavior recognition. BeAware has been prototyped on low-cost and ubiquitous WiFi infrastructure and evaluated in extensive real-world experiments, where its performance has been verified. The results demonstrate the effectiveness of using a CNN to process CSI signal sequences to identify different user actions, and this method provides new research ideas for WiFi-based behavior sensing. In the future, we could use this method for further behavior recognition research, such as recognizing users' fitness behaviors in the gym or detecting sleep quality during sleep.

Declaration of Competing Interest
I hereby confirm that there is no conflict of interest.

References

[1] K.M. Diaz, V.J. Howard, B. Hutto, N. Colabianchi, J.E. Vena, M.M. Safford, S.N. Blair, S.P. Hooker, Patterns of sedentary behavior and mortality in U.S. middle-aged and older adults: a national cohort study, Ann. Intern. Med. 167 (7) (2017).
[2] Z. Li, Z. Feng, J.D. Tygar, Keyboard acoustic emanations revisited, in: ACM Conference on Computer and Communications Security, CCS 2005, Alexandria, VA, USA, November 2005, pp. 373–382.
[3] S. Gupta, D. Morris, S. Patel, D. Tan, Soundwave: using the doppler effect to sense gestures, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2016, pp. 1911–1914.
[4] J. Shotton, A. Kipman, M. Finocchio, A. Blake, R. Moore, Real-time human pose recognition in parts from single depth images, Commun. ACM 56 (1) (2013) 116–124.
[5] G. Cohn, D. Morris, S. Patel, D. Tan, Humantenna: using the body as an antenna for real-time whole-body interaction, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2012, pp. 1901–1910.
[6] R. Jenke, A. Peer, M. Buss, Feature extraction and selection for emotion recognition from EEG, IEEE Trans. Affect. Comput. 5 (3) (2017) 327–339.
[7] Y. Liu, S. Antwiboampong, J.J. Belbruno, M.A. Crane, S.E. Tanski, Detection of secondhand cigarette smoke via nicotine using conductive polymer films, Nicotine Tobacco Res. 15 (9) (2013) 1511–1518.
[8] Y. Gu, F. Ren, J. Li, Paws: passive human activity recognition based on WiFi ambient signals, IEEE Internet Things J. 3 (5) (2016) 796–805.
[9] Y. Gu, J. Zhan, Y. Ji, J. Li, F. Ren, S. Gao, Mosense: a RF-based motion detection system via off-the-shelf WiFi devices, IEEE Internet Things J. PP (99) (2017) 1–1.
[10] X. Zheng, J. Wang, L. Shangguan, Z. Zhou, Y. Liu, Smokey: ubiquitous smoking detection with commercial WiFi infrastructures, in: Proc. of IEEE INFOCOM 2016, 2015, pp. 17–18. Hong Kong.
[11] H. Abdelnasser, K.A. Harras, M. Youssef, Wigest demo: a ubiquitous WiFi-based gesture recognition system, in: Proc. of IEEE INFOCOM 2015, 2015, pp. 17–18. Hong Kong.
[12] Y. Zeng, P.H. Pathak, P. Mohapatra, C. Xu, A. Pande, A. Das, S. Miyamoto, E. Seto, E. Henricson, J. Han, et al., Analyzing shopper's behavior through WiFi signals, in: Proc. of the 2nd Workshop on Physical Analytics, 2015, pp. 13–18. Florence, Italy.
[13] C.
Han, K. Wu, Y. Wang, L.M. Ni, Wifall: Device-free fall detection by wireless networks, in: Proc. of IEEE INFOCOM 2014, 2014, pp. 271–279. Toronto, Canada [14] S. Sigg, S. Shi, F. Buesching, Y. Ji, L. Wolf, Leveraging RF-channel fluctuation for activity recognition: Active and passive systems, continuous and RSSI-based signal features, in: Proc. of International Conference on Advances in Mobile Computing & Multimedia, 2013, p. 43. Vienna, Austria [15] H. Abdelnasser, K.A. Harras, M. Youssef, Ubibreathe: a ubiquitous non-invasive WiFi-based breathing estimator, in: Proc. of ACM MobiHoc, 2015, pp. 277–286. [16] H. Li, K. Ota, M. Dong, M. Guo, Learning human activities through Wi-Fi channel state information with multiple access points, IEEE Commun. Mag. 56 (5) (2018) 124–129. [17] K. Ali, A.X. Liu, W. Wang, M. Shahzad, Keystroke recognition using WiFi signals, in: Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, ACM, 2015, pp. 90–102. [18] Z. Fu, J. Xu, Z. Zhu, A.X. Liu, X. Sun, Writing in the air with WiFi signals for virtual reality devices, IEEE Trans. Mob. Comput. 18 (2) (2018) 473– 484. [19] R. Bodor, B. Jackson, N. Papanikolopoulos, Vision-based human tracking and activity recognition, in: Proc. of the 11th Mediterranean Conf. on Control and Automation, 1, 2003. [20] A. Jalal, S. Kamal, D. Kim, A depth video-based human detection and activity recognition using multi-features and embedded hidden Markov models for health care monitoring systems, Int. J. Interact. Multimed.Artif. Intell. 4 (4) (2017). [21] S.-C. Hsu, C.-H. Chuang, C.-L. Huang, R. Teng, M.-J. Lin, A video-based abnormal human behavior detection for psychiatric patient monitoring, in: 2018 International Workshop on Advanced Image Technology (IWAIT), IEEE, 2018, pp. 1–4. [22] C. Zhu, W. Sheng, M. Liu, Wearable sensor-based behavioral anomaly detection in smart assisted living systems, IEEE Trans. Autom. Sci. Eng. 12 (4) (2015) 1225–1234. [23] Y. Chen, C. 
Shen, Performance analysis of smartphone-sensor behavior for human activity recognition, IEEE Access 5 (2017) 3095–3110. [24] F. Attal, S. Mohammed, M. Dedabrishvili, F. Chamroukhi, L. Oukhellou, Y. Amirat, Physical human activity recognition using wearable sensors, Sensors 15 (12) (2015) 31314–31338. [25] S. Tan, J. Yang, Wifinger: leveraging commodity WiFi for fine-grained finger gesture recognition, in: Proceedings of the 17th ACM International Symposium on Mobile Ad Hoc Networking and Computing, ACM, 2016, pp. 201–210. [26] Y. Gu, Y. Zhang, M. Huang, F. Ren, Your WiFi knows you fall: a channel data-driven device-free fall sensing system, in: 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS), IEEE, 2018, pp. 943–947. [27] E. Xi, S. Bing, Y. Jin, Capsule network performance on complex data, arXiv: 1712.03480(2017). [28] Y. Gu, Y. Wang, T. Liu, Y. Ji, Z. Liu, P. Li, X. Wang, X. An, F. Ren, Emosense: computational intelligence driven emotion sensing via wireless channel data, IEEE Trans. Emerg. Top.Comput. Intell. (2019).
[29] Y. Gu, C. Zhang, Y. Wang, Z. Liu, Y. Ji, J. Li, A contactless and fine-grained sleep monitoring system leveraging WiFi channel response, in: ICC 2019 - 2019 IEEE International Conference on Communications (ICC), IEEE, 2019, pp. 1–5.

Leyuan Jia was born in Anhui, China, in 1995. He received a B.M. in medicine from Anhui Medical University. He is currently pursuing the M.Sc. degree at Hefei University of Technology. His research interests include intelligent information processing, wireless sensing, and machine learning.
Yu Gu received his B.E. degree from the Special Classes for the Gifted Young (SCGY), University of Science and Technology of China (USTC), in 2004, and his D.E. degree from the same university in 2010. From February to August 2006, he was an intern at Microsoft Research Asia, Beijing, China. From December 2007 to December 2008, he was a visiting scholar at the University of Tsukuba, Japan. From November 2010 to October 2012, he worked at the National Institute of Informatics, Japan, as a JSPS Research Fellow. He is now a Professor in the School of Computer and Information, Hefei University of Technology, China. His research interests include pervasive computing and affective computing. He is a senior member of IEEE. He received the Excellent Paper Award at IEEE ScalCom 2009.
Ken Cheng was born in Anhui, China, in 1995. He received his B.Eng. from Hunan University of Technology, majoring in IoT. He is currently pursuing the M.Sc. degree at Hefei University of Technology. His research interests include intelligent information processing, wireless sensing, and machine learning.
Huan Yan was born in Guizhou, China, in 1995. He received his B.Eng. from Hefei University of Technology, where he is currently pursuing the M.Sc. degree. His research interests include intelligent information processing, wireless sensing, and machine learning.
Fuji Ren received his B.E. and M.E. degrees from Beijing University of Posts and Telecommunications, Beijing, China, in 1982 and 1985, respectively. He received his Ph.D. degree in 1991 from Hokkaido University, Japan. He is a professor in the Faculty of Engineering of the University of Tokushima, Japan. His research interests include information science, artificial intelligence, language understanding and communication, and affective computing. He is a member of IEICE, CAAI, IEEJ, IPSJ, JSAI, and AAMT, and a senior member of IEEE. He is a fellow of the Japan Federation of Engineering Societies and the president of the International Advanced Information Institute.