Available online at www.sciencedirect.com

ScienceDirect

Procedia Computer Science 147 (2019) 276–282
www.elsevier.com/locate/procedia

2018 International Conference on Identification, Information and Knowledge in the Internet of Things, IIKI 2018
A HOG-SVM Based Fall Detection IoT System for Elderly Persons Using Depth Sensor

Xiangbo Kong a,∗, Zelin Meng a, Naoto Nojiri b, Yuji Iwahori c, Lin Meng d, Hiroyuki Tomiyama d

a Graduate School of Science and Engineering, Ritsumeikan University, 1-1-1 Noji-higashi, Kusatsu, Shiga, 525-8577, Japan
b Graduate School of Information Science and Engineering, Ritsumeikan University, 1-1-1 Noji-higashi, Kusatsu, Shiga, 525-8577, Japan
c Department of Computer Science, Chubu University, 1200 Matsumoto, Kasugai, Aichi 487-8501, Japan
d College of Science and Engineering, Ritsumeikan University, 1-1-1 Noji-higashi, Kusatsu, Shiga, 525-8577, Japan
Abstract

The population of elderly persons continues to grow at a high rate, and fall accidents among elderly persons have become a major public health problem. Highly developed IoT technology and machine learning enable the use of multimedia devices in a wide variety of applications for protecting elderly persons. In this paper, a HOG-SVM based fall detection IoT system for elderly persons is proposed. To ensure privacy and to be robust against changes in light intensity, a depth sensor is employed instead of an RGB camera to obtain binary images of elderly persons. The persons are detected and tracked by the Microsoft Kinect SDK, and unwanted noise is reduced by a noise reduction algorithm. After obtaining the denoised binary images, the features of the persons are extracted by the histogram of oriented gradients, and image classification is performed by a linear support vector machine to judge the fall status. If a fall is detected, the IoT system sends an alert to the hospital or family members. This study builds a data set which includes 3500 images, and the experimental results show that the proposed method outperforms related works in terms of accuracy.

© 2019 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the scientific committee of the 2018 International Conference on Identification, Information and Knowledge in the Internet of Things.

Keywords: fall accident; elderly persons; IoT system; privacy protection; HOG; SVM
1. Introduction

The proportion of elderly persons (over 65 years old) is increasing all over the world. For example, in Europe, the proportion of elderly persons was 22.5% in 2005 and it is expected to reach 30% in 2050. In the United States, the proportion of elderly persons was 13% in 2010 and it is expected to reach 20.2% in 2050. In China, the proportion of elderly persons was 8.87% in 2010 and it is expected to reach 30% in 2050 [1].
∗ Corresponding author. Tel.: +81-77-561-5013; fax: +81-77-561-4928.
E-mail address: [email protected]

1877-0509 © 2019 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the scientific committee of the 2018 International Conference on Identification, Information and Knowledge in the Internet of Things.
10.1016/j.procs.2019.01.264
In Japan, the proportion of elderly persons was 27.7% in 2017 and it is expected to reach 35.3% in 2040 [2]. Many elderly persons live alone at home, and according to the World Health Organization, 28-35% of elderly persons have a fall accident each year [3]. Many studies have indicated that fall accidents have become a very serious problem, especially for those who live alone [4][5]. If elderly persons have a fall accident, it may lead to immediate physical injuries such as torn ligaments, fractures and cuts. These may leave the elderly person lying on the floor without any help, and in some cases the situation is life-threatening [4]. Therefore, a system which can detect a fall automatically and notify the family members or the hospital is very important for elderly persons living alone. As fall events lead to immediate injuries, a good fall detection system should detect a fall with sufficient accuracy, in real time.

In the past years, significant research and development (R&D) activities have focused on developing reliable intelligent home appliances that record human activity and detect falls. Various data related to the activities of an elderly person are recorded and sent to a family member so that the family member can determine whether the elderly person is safe or not. For example, Zojirushi Corporation has developed an electric kettle with wireless communication capabilities. Information relating to the operation of the kettle is sent to the family members to enable them to monitor whether the elderly person has used the kettle that day. If the kettle is not operated on a given day, the elderly person may have had a fall accident. The Tokyo Gas Company also provides services that send daily information to family members about the operation of gas equipment, enabling them to monitor the activities of those who use the equipment [6]. However, such devices cannot detect dangerous situations in real time.

To address this issue, real-time fall detection methods have been proposed, and they can be classified into three main categories: wearable device based fall detection systems, microphone based fall detection systems, and vision based fall detection systems. Wearable devices are considered to be a promising approach to high-precision and real-time fall detection [7]-[10]. The approach of [7] uses the accelerometer of a smart phone to detect falls. In this algorithm, the change of acceleration and six typical human actions are analyzed, and if a fall is detected, the GPS module is used to speed up the arrival of help. The approach of [8] employs accelerometer signals and time-domain features to detect fall events, achieving high accuracy. The approach of [9] presents a fall detection algorithm using a wrist-worn wearable device, and the experimental results show that it is more power-efficient than conventional algorithms. The approach of [10] employs a smart watch to detect falls; the fall can be accurately captured because a genetic algorithm for SVM is used. A common drawback of these algorithms is that they are only effective when the elderly person carries or wears the device. Elderly persons cannot be protected if they do not wear the device or if the battery is not recharged.

The approach of [11] employs a circular microphone array to detect falls and achieves 100% sensitivity at a specificity of 97%. However, sound-based fall detection is not always reliable.
If an elderly person has a heart attack and falls slowly, the microphone cannot obtain an appropriate sound signal to detect the fall. Furthermore, impinging on privacy is also a problem for microphone based fall detection systems.

Vision based fall detection is considered a promising approach to high-quality fall detection. Traditional vision based fall detection algorithms based on RGB images do not work in a dark room, although many falls are caused by weak light. Furthermore, impinging on privacy is also a problem, the same as for microphone based fall detection systems. A depth sensor can capture a depth image of the room, and the depth image can be used to detect the fall. Thanks to its infra-red LED, a depth sensor works well in weak light conditions. Compared with RGB images, depth images only contain the distance information from each pixel to the camera, so a certain amount of privacy can be protected. In this paper, a depth sensor is employed to detect the fall.

Our aim is to develop a vision based fall detection system that detects a fall in real time with high accuracy and does not impinge on the privacy of the users. The features of the human in the depth images are extracted by the histogram of oriented gradients (HOG) and classified by a support vector machine (SVM), and the fall is detected by the trained model (a minimal sketch of this pipeline is given after the contribution list below). Testing shows that our system can detect a dangerous situation within 1 minute with sufficient accuracy. The contributions of our study are as follows:
• This study builds a data set of fall/non-fall events with 3500 images, which is useful not only for this study but also for future works.
• This study builds an IoT system to detect falls without impinging on the privacy of the users.
• This paper gives a brief review of the HOG+SVM algorithm, and a directional depth image based HOG+SVM method is proposed to detect falls with sufficient accuracy.
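As a concrete illustration of the pipeline described above, the following sketch shows how HOG features of the binary person images can be fed to a linear SVM and how a 60-second confirmation timer can gate the alarm. This is only a minimal sketch using scikit-image and scikit-learn; the window size, HOG parameters, function names and the alarm callback are illustrative assumptions, not the authors' implementation.

```python
import time
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def extract_features(binary_person_image):
    # Resize every detected-person image to a fixed 128x64 window so the
    # HOG descriptor always has the same length (window size is assumed).
    window = resize(binary_person_image, (128, 64), anti_aliasing=False)
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train(fall_images, stand_images):
    # Positive images = fall case, negative images = stand case.
    X = [extract_features(im) for im in fall_images + stand_images]
    y = [1] * len(fall_images) + [0] * len(stand_images)
    return LinearSVC().fit(np.array(X), np.array(y))

def monitor(model, frame_source, alarm, hold_seconds=60):
    """Send the alarm only if the 'fall' state persists for 60 seconds."""
    fall_since = None
    for frame in frame_source:            # binary person images from the depth sensor
        is_fall = model.predict([extract_features(frame)])[0] == 1
        if not is_fall:
            fall_since = None             # person recovered, reset the timer
        elif fall_since is None:
            fall_since = time.time()      # fall state just started
        elif time.time() - fall_since >= hold_seconds:
            alarm()                       # notify family members or the hospital
            fall_since = None
```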
Fig. 1. Person direction based fall detection
Fig. 2. Overview of the proposed fall detection system
The rest of this paper is organized as follows. Section 2 gives a brief introduction of related works. Section 3 presents a detailed introduction to our fall detection IoT system. Section 4 compares the experimental results of our system with other vision based fall detection systems. Section 5 concludes this paper.
2. Related Works

The approach of [12] provides a fall detection system which can detect falls automatically. The system proposed in that work uses an image difference method to detect moving objects and marks these objects with rectangular and elliptical bounding boxes. The 2D direction of the person is calculated from the features of the bounding boxes. That work indicates that when a person falls down, the direction of the person moves toward 0 or 180 degrees, so 45 degrees and 135 degrees are set as thresholds. As shown in Figure 1(b), if the 2D direction of the person is more than 135 degrees or less than 45 degrees, a fall is detected. However, image difference based fall detection is not reliable: when the luminance changes, the difference method cannot detect the moving object well, and an RGB difference based fall detection system does not work at all in a dark room. Furthermore, this system cannot determine whether the moving object is a person or not.

To address this issue, we proposed a tangent line based fall detection system using Microsoft Kinect [13]. Thanks to the Kinect SDK, the person is detected well even when the light changes significantly or in a dark room. That study obtains the outline of the detected person and calculates the direction of each pixel in the outline images. Finally, the proposed method detects the fall by analyzing the distribution of tangent line angles.

However, direction or angle based fall detection is limited, since these methods can only detect the fall when the falling person's direction is approximately perpendicular to the camera, as shown in Figure 1(b). When the falling person's direction is approximately parallel to the camera, the fall is not detected, as shown in Figure 1(d). In order to detect falls with these algorithms, at least 3 cameras are necessary.
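For illustration only, the angle test used by such direction-based detectors can be sketched as follows: the orientation of the silhouette's principal axis is computed from image moments and compared against the 45/135 degree thresholds. This is a hedged reconstruction of the idea behind [12][13], not their actual code, and the OpenCV-based implementation is an assumption.

```python
import cv2
import numpy as np

def body_direction_degrees(person_mask):
    """Major-axis direction of the silhouette in [0, 180), from image moments."""
    m = cv2.moments(person_mask, binaryImage=True)
    if m["m00"] == 0:
        return 90.0  # empty mask: treat as upright
    # Orientation of the principal axis relative to the horizontal image axis.
    theta = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
    return np.degrees(theta) % 180.0

def is_fall_by_direction(person_mask):
    # Threshold idea from [12]: a body axis outside the 45-135 degree band
    # (i.e. close to horizontal) is treated as a fall.
    angle = body_direction_degrees(person_mask)
    return angle < 45.0 or angle > 135.0
```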
3. Fall Detection System

3.1. System Overview

The overview of the proposed fall detection IoT system is shown in Figure 2. The system contains three parts: Local A, Local B and an online server [14][15].
Fig. 3. Flow chart of the proposed algorithm
• Local A contains a local board which is connected to the depth sensor. When a person is detected by the depth sensor, the binary images of the detected person are stored here. HOG features of the binary images are extracted and sent to Local B. In order to avoid impinging on the users' privacy, Local A is not connected to the Internet.
• Local B employs SVM to detect the fall based on the features sent from Local A. Local B only uses the features to detect the fall, and images are not sent to Local B. In other words, even if Local B is attacked from the Internet, the risk of impinging on the users' privacy is low.
• When the online server receives the fall alarm, family members or the hospital are informed.

The procedure of the proposed system consists of three processes, as shown in Figure 3.

Training: Negative images (stand case) and positive images (fall case) are included in the training data set. The features of these images are extracted by HOG, and the hyperplane function is calculated by the support vector machine in order to correctly differentiate these two types of images. After SVM training, a model is established to detect falls.

Feature detection: When the system starts up, the depth sensor begins to capture images of the room. If a person is detected, the binary image of this person is output by the SDK of the depth sensor. Then the HOG features of this image are calculated and sent to the model established in the training process.

Fall detection: When a fall state is output by the model, a timer is started, and if the fall state lasts for more than 60 seconds, an alarm is sent to the family members or the hospital.

3.2. Image Classification

The proposed system employs HOG to extract the features of the images which contain the detected person. In the traditional HOG algorithm, gamma correction is used to pre-process the images, since the light condition affects the extraction result. However, because a depth sensor is employed, the output image is not affected by the light condition, so for fast processing this study does not use gamma correction. Furthermore, the traditional HOG algorithm is sensitive to noise in the images, and noise affects the extraction result on a large scale. Therefore, this study uses a noise reduction algorithm instead of gamma correction as the pre-processing step. The noise reduction algorithm is given in formula (1), where the sum runs over the 5×5 neighborhood centered on pixel (i, j) and β is a parameter of the sigmoid:

P(i, j) = \frac{255}{1 + e^{\beta - \frac{1}{255}\sum_{x=i-2}^{i+2}\sum_{y=j-2}^{j+2} P(x, y)}}    (1)
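A minimal sketch of the noise reduction step in formula (1) is given below, assuming NumPy/SciPy. The value of β is not specified in the text, so the default used here is only a placeholder.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def denoise_binary(img, beta=12.0):
    """Sigmoid-style noise reduction over a 5x5 neighborhood (formula (1)).

    img: binary person mask with values in {0, 255}.
    beta: steepness/offset parameter of the sigmoid (value assumed).
    """
    img = img.astype(np.float64)
    # Sum of the 5x5 neighborhood around each pixel, scaled by 1/255,
    # so the exponent roughly counts the number of white neighbors.
    local_sum = uniform_filter(img, size=5, mode="constant") * 25 / 255.0
    # Logistic response: isolated white pixels (small sum) are pushed
    # toward 0, dense white regions toward 255.
    return (255.0 / (1.0 + np.exp(beta - local_sum))).astype(np.uint8)
```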
After denoising, the HOG features are extracted. The HOG algorithm calculates the gradient vector of each pixel by formulas (2) and (3), then calculates the magnitude and direction of these gradient vectors by formulas (4) and (5), where V(x, y) denotes the value of pixel (x, y). A histogram is calculated from these gradient vectors in small blocks (cells),
Fig. 4. Data set of training images and test images
and the histograms of the cells within larger blocks form the HOG feature [16][17]. The images are classified by using these features and SVM.

P_x(i, j) = V(i+1, j) − V(i−1, j)    (2)
P_y(i, j) = V(i, j+1) − V(i, j−1)    (3)
P(i, j) = \sqrt{P_x(i, j)^2 + P_y(i, j)^2}    (4)
Angle(i, j) = \tan^{-1}\left( P_y(i, j) / P_x(i, j) \right)    (5)
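The gradient computation in formulas (2)-(5) can be sketched with NumPy as follows; the border handling (gradients left at zero on the outermost pixels) is an assumption, since the text does not specify it.

```python
import numpy as np

def gradient_magnitude_angle(V):
    """Per-pixel gradient magnitude and direction (formulas (2)-(5))."""
    V = V.astype(np.float64)
    Px = np.zeros_like(V)
    Py = np.zeros_like(V)
    Px[1:-1, :] = V[2:, :] - V[:-2, :]     # formula (2): V(i+1,j) - V(i-1,j)
    Py[:, 1:-1] = V[:, 2:] - V[:, :-2]     # formula (3): V(i,j+1) - V(i,j-1)
    mag = np.hypot(Px, Py)                 # formula (4)
    ang = np.degrees(np.arctan2(Py, Px))   # formula (5); arctan2 avoids division by zero
    return mag, ang
```

Per-cell orientation histograms of these gradients then give the HOG descriptor used for classification.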
4. Experimental Results

4.1. Establishment of the Data Set

This study builds a data set of fall/non-fall events with 3500 images, as shown in Figure 4. The data set contains positive images and negative images. Positive images in data sets A, B, C and D show a person falling perpendicular to the camera, falling with a large inclined angle to the camera, falling with a small inclined angle to the camera, and falling parallel to the camera, respectively.

4.2. Evaluation Metrics

This paper analyzes the experimental results with the method suggested by [5]. When the person falls down and the detection result is also "fall", this state is defined as state A. When the person stands and the detection result is also "stand", this state is defined as state B. When the person falls down but the detection result is "stand", this state is defined as state C. When the person stands but the detection result is "fall", this state is defined as state D. True positive (TP) is the number of occurrences of state A, true negative (TN) the number of state B, false negative (FN) the number of state C, and false positive (FP) the number of state D. Following [5], sensitivity (Se), specificity (Sp), accuracy (Ac) and error rate (Er) are given by the following formulas:

Se = TP / (TP + FN)    (6)
Sp = TN / (TN + FP)    (7)
Ac = (TP + TN) / (TP + TN + FP + FN)    (8)
Er = (FP + FN) / (TP + TN + FP + FN)    (9)
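For completeness, the four metrics in formulas (6)-(9) can be computed directly from the confusion counts; the helper below is a sketch, with the "Proposed" row of Table 5 used as a usage example.

```python
def evaluate(tp, tn, fp, fn):
    """Sensitivity, specificity, accuracy and error rate (formulas (6)-(9))."""
    total = tp + tn + fp + fn
    se = tp / (tp + fn)       # ability to detect fall states
    sp = tn / (tn + fp)       # ability to detect safe states
    ac = (tp + tn) / total    # overall detection ability
    er = (fp + fn) / total    # error rate (= 1 - ac)
    return se, sp, ac, er

# Example with the "Proposed" row of Table 5 (tp=781, tn=200, fp=0, fn=19):
# evaluate(781, 200, 0, 19) -> (0.976, 1.0, 0.981, 0.019)
```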
Therefore, Se shows the ability of the system to detect fall states; Sp shows the ability of the system to detect safe states; Ac shows the overall detection ability of the system; and Er shows the error rate of the system [13].

4.3. Comparison with Similar Fall Detection Systems

Tables 1-5 show the experimental results for sensitivity, specificity and accuracy with only one depth sensor/camera.

Table 1. Comparison of the proposed system with existing fall detection systems using data set A.

Method          TP    TN    FP   FN   Se(%)   Sp(%)   Ac(%)
Reference 12    194   195   5    6    97      97.5    97.3
Reference 13    200   198   2    0    100     99      99.5
Proposed        200   200   0    0    100     100     100
Table 2. Comparison of the proposed system with existing fall detection systems using data set B.

Method          TP    TN    FP   FN   Se(%)   Sp(%)   Ac(%)
Reference 12    199   195   5    1    99.5    97.5    98.5
Reference 13    193   198   2    7    96.5    99      97.8
Proposed        200   200   0    0    100     100     100
Table 3. Comparison of the proposed system with existing fall detection systems using data set C.

Method          TP    TN    FP   FN   Se(%)   Sp(%)   Ac(%)
Reference 12    2     195   5    198  1       97.5    49.3
Reference 13    1     198   2    199  0.5     99      49.8
Proposed        132   200   0    68   66      100     88
Table 4. Comparison of the proposed system with existing fall detection systems using data set D.

Method          TP    TN    FP   FN   Se(%)   Sp(%)   Ac(%)
Reference 12    1     195   5    199  0.5     97.5    49
Reference 13    0     198   2    200  0       99      49.5
Proposed        182   200   0    18   91      100     95.5
It is obvious that, when the person falls perpendicular to the camera or falls with a large inclined angle to the camera (data set A and data set B), all of the methods perform well. However, since the approaches of [12] and [13] only use angles or directions to detect the fall, detection errors still occur sometimes; for example, in Figure 1(c), the non-fall state is not detected correctly. When the person falls with a small inclined angle to the camera or falls parallel to the camera (data set C and data set D), the approaches of [12] and [13] are unable to detect the fall, since the direction of the person is almost the same as when standing, as shown in Figure 1(d). In data sets C and D, the proposed method gives a much better result, as the features of standing and falling are different.
Table 5. Comparison of the proposed system with existing fall detection systems using the mixed data set.

Method          TP    TN    FP   FN   Se(%)   Sp(%)   Ac(%)
Reference 12    396   195   5    404  49.5    97.5    59.1
Reference 13    394   198   2    406  49.3    99      59.2
Proposed        781   200   0    19   97.6    100     98.1
However, the accuracy of the proposed fall detection system is still not satisfactory. To address this issue, this work establishes a mixed training data set, which contains all kinds of fall directions, to train the SVM model. The experimental results are shown in Table 5. Quite evidently, the sensitivity of detection is greatly improved.

5. Conclusions

A real-time fall detection system with high accuracy is proposed in this paper. The proposed system contains three processes: data training, feature detection and fall detection. In this study, two boards are employed to detect the fall, and the board that stores the images is not connected to the Internet, hence the risk of impinging on the users' privacy is low. Furthermore, this study builds a data set which contains 3500 images to train a HOG+SVM model, improving the accuracy of the fall detection system. Experimental data show that the accuracy of this fall detection system improves to 98.1%, outperforming similar vision-based fall detection systems.

References

[1] Z. Pang, L. Zheng, J. Tian, S. Kao-Walter, E. Dubrova, and Q. Chen. (2015) "Design of a terminal solution for integration of in-home health care devices and services towards the Internet-of-Things." Enterprise Information Systems 9 (1) 86-116.
[2] Ministry of Internal Affairs and Communications, Japan, 2018.
[3] [WHO] http://www.who.int/en/news-room/fact-sheets/detail/falls [Accessed: August 23rd, 2018]
[4] E. Cippitelli, F. Fioranelli, E. Gambi, and S. Spinsante. (2017) "Radar and RGB-Depth Sensors for Fall Detection: A Review." IEEE Sensors Journal 17, 3585-3604.
[5] N. Noury, A. Fleury, P. Rumeau, A. K. Bourke, G. O. Laighin, V. Rialle, and J. E. Lundy. (2007) "Fall detection - principles and methods." Annual International Conference of the IEEE Engineering in Medicine and Biology Society.
[6] L. Meng, X. Kong, and D. Taniguchi. (2017) "Dangerous Situation Detection for Elderly Persons in Restrooms Using Center of Gravity and Ellipse Detection." JRM 29, 1057-1064.
[7] Y. W. Bai, S. C. Wu, and C. L. Tsai. (2012) "Design and implementation of a fall monitor system by using a 3-axis accelerometer in a smart phone." IEEE Transactions on Consumer Electronics 58 (4) 1269-1275.
[8] Y. Liu, S. J. Redmond, N. Wang, F. Blumenkron, M. R. Narayanan, and N. H. Lovell. (2011) "Spectral analysis of accelerometry signals from a directed-routine for falls-risk estimation." IEEE Transactions on Biomedical Engineering 58 (8) 2308-2315.
[9] J. Yuan, K. K. Tan, T. H. Lee, and G. C. H. Koh. (2015) "Power-efficient interrupt-driven algorithms for fall detection and classification of activities of daily living." IEEE Sensors Journal 15 (3) 1377-1387.
[10] H. C. Kao, J. C. Hung, and C. P. Huang. (2017) "GA-SVM applied to the fall detection system." International Conference on Applied System Innovation (ICASI).
[11] L. Yun, K. C. Ho, and M. Popescu. (2012) "A microphone array system for automatic fall detection." IEEE Transactions on Biomedical Engineering 59 (4) 1291-1301.
[12] M. Chamle, K. G. Gunale, and K. K. Warhade. (2016) "Automated unusual event detection in video surveillance." International Conference on Inventive Computation Technologies (ICICT).
[13] X. B. Kong, L. Meng, and H. Tomiyama. (2017) "Fall Detection for Elderly Persons Using a Depth Camera." International Conference on Advanced Mechatronic Systems (ICAMechS).
[14] [Nvidia Japan] http://www.nvidia.co.jp/page/home.html [Accessed: August 23rd, 2018]
[15] X. Kong, Z. Meng, L. Meng, and H. Tomiyama. (2018) "A Privacy Protected Fall Detection IoT System for Elderly Persons Using Depth Camera." International Conference on Advanced Mechatronic Systems (ICAMechS).
[16] N. Dalal and B. Triggs. (2005) "Histograms of Oriented Gradients for Human Detection." IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR).
[17] N. Dalal. (2006) "Finding People in Images and Videos." GRAVIR - IMAG - Graphisme, Vision et Robotique, Inria Grenoble - Rhône-Alpes, CNRS - Centre National de la Recherche Scientifique: FR71.