Journal Pre-proof

Driver Fatigue Detection based on Deeply-Learned Facial Expression Representation
Zhongmin Liu, Yuxi Peng, Wenjin Hu

PII: S1047-3203(19)30344-X
DOI: https://doi.org/10.1016/j.jvcir.2019.102723
Reference: YJVCI 102723

To appear in: J. Vis. Commun. Image R.

Received Date: 10 October 2019
Revised Date: 21 November 2019
Accepted Date: 22 November 2019
Please cite this article as: Z. Liu, Y. Peng, W. Hu, Driver Fatigue Detection based on Deeply-Learned Facial Expression Representation, J. Vis. Commun. Image R. (2019), doi: https://doi.org/10.1016/j.jvcir.2019.102723
This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
© 2019 Published by Elsevier Inc.
Driver Fatigue Detection based on Deeply-Learned Facial Expression Representation

Zhongmin Liu1,2,3*, Yuxi Peng1,2,3, and Wenjin Hu4

1 College of Electrical and Information Engineering, Lanzhou University of Technology, Lanzhou 730050, China
2 Key Laboratory of Gansu Advanced Control for Industrial Processes, Lanzhou University of Technology, Lanzhou 730050, China
3 National Electrical and Control Engineering Experimental Teaching Center, Lanzhou University of Technology, Lanzhou 730050, China
4 College of Mathematics and Computer Science, Northwest Minzu University, Lanzhou 730000, China
Corresponding author: Zhongmin Liu (e-mail: [email protected]).
ABSTRACT Driver fatigue detection is a significant application in smart cars. To improve the accuracy and timeliness of driver fatigue detection, a fatigue detection algorithm based on deeply-learned facial expression analysis is proposed. Specifically, a facial key point detection model is first trained with multi-block local binary pattern (MB-LBP) features and an Adaboost classifier. Subsequently, the states of the eyes and mouth are obtained by using the trained model to detect 24 facial key points. Afterwards, two parameters that describe the driver's fatigue state are calculated: the proportion of eye-closure time per unit time (PERCLOS) and the yawning frequency. Finally, a fuzzy inference system is used to deduce the driver's fatigue state (normal, slight fatigue, severe fatigue). Experimental results show that the proposed algorithm can detect the driver's fatigue degree quickly and accurately.

INDEX TERMS fatigue detection, MB-LBP, PERCLOS, fuzzy reasoning
I. INTRODUCTION
Fatigue driving is one of the main causes of traffic accidents. Research shows that the probability of a traffic accident under fatigue driving is five times higher than under normal driving. Traffic accidents caused by fatigue driving account for about 20% of all accidents each year, and for more than 40% of serious traffic accidents. It is therefore of great practical significance to monitor the driver's fatigue status and issue warnings in time[1]. Studies at home and abroad have focused on three main approaches to fatigue driving detection:
1) Detection methods based on physiological signals, including EEG, ECG or EMG signals. Whether the driver is fatigued is determined by a contact device that monitors the driver's physiological signals in real time. For example, Wang Fei[2] collected EEG signals through sensors, extracted features with the wavelet packet transform and the common spatial pattern algorithm, and classified the EEG signals with a support vector machine to analyze the driver's fatigue state qualitatively, demonstrating the feasibility of detecting driving fatigue from EEG features. Wang Lin[3] used sensors to collect physiological signals from the driver's biceps during driving, separated the EMG and ECG signals quickly with independent component analysis, and extracted three characteristic parameters by empirical mode decomposition: EMG signal complexity, ECG signal complexity and ECG signal sample entropy. These three parameters can distinguish the driver's normal and fatigued states. Although these methods are feasible, direct or indirect contact with the measuring equipment can affect the driver's handling of the car.
2) Detection methods based on vehicle behavior. Since a driver's control ability decreases under fatigue, fatigue can be inferred from changes in the steering system, accelerator and wheel course. Li Wei[4] used steering wheel rotation information and the road offset value as the driver's fatigue characteristics, fed this information into a neural network model, and trained the model with the BP algorithm to monitor the driver's fatigue state. Although this kind of method does not need to touch the driver's body[18-21], its robustness is greatly affected by differences in driving habits, road conditions and vehicle models.
3) Detection methods based on machine vision. The driver's facial expression is captured, the open states of the driver's eyes and mouth are detected, and the blink frequency and yawning frequency are calculated to infer the driver's fatigue state. This method is not only simple to implement but also does not affect the driver's handling of the vehicle. However, the efficiency of the detection system is easily affected by the complex in-vehicle environment and limited in-vehicle hardware, so improving the accuracy and speed of detection algorithms has become a hot and difficult research topic in recent years.
In this paper, a fatigue detection algorithm based on facial expression analysis is proposed. A model of the eye and mouth positions is trained with MB-LBP features and an Adaboost classifier, which accurately locates the eyes and mouth in the collected images of the driver's face so that their state parameters can be extracted. PERCLOS (percentage of eyelid closure over the pupil over time) and the yawn frequency are then calculated to describe the driver's fatigue. Finally, a fuzzy inference system is designed to infer the driver's fatigue state (normal, slight fatigue, severe fatigue). Compared with Haar-like and LBP features, MB-LBP features are fewer in number and contain more structural information. Therefore, the fatigue detection algorithm proposed in this paper achieves better accuracy and timeliness than traditional fatigue detection algorithms based on Haar-like and LBP features.
II. FACE KEY POINT LOCATION BASED ON MB-LBP FEATURES

A. TRADITIONAL HAAR-LIKE FEATURES
At present, most fatigue detection methods are based on facial key point detection with Haar-like features[5]. The traditional Haar-like[6] rectangular feature measures the difference of pixel intensities between rectangular regions. As shown in Fig. 1, black and white rectangles are combined into a feature template; the sum of pixel intensities in the black rectangles is subtracted from that in the white rectangles, and the resulting difference is taken as the feature value of the template. Haar-like features are extracted by sliding the template over the image. Viola and Jones[7] used three such simple Haar-like features to locate facial key points. However, these rectangular features are extremely numerous, so the amount of computation is huge, and the features are sensitive only to simple structures such as edges and line segments, describing salient pixel patterns in specific directions (horizontal, vertical, diagonal) only. The speed and accuracy of detection are therefore limited.
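The rectangular sums behind Haar-like features are usually computed with an integral image (summed-area table), which makes each rectangle sum a four-lookup operation. The sketch below illustrates this for a two-rectangle edge feature; the function names and the toy 4x4 image are illustrative, not from the paper.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img.astype(np.int64).cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def rect_sum(ii, x, y, w, h):
    """Pixel sum of the w x h rectangle with top-left corner (x, y),
    recovered from four lookups in the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect_horizontal(ii, x, y, w, h):
    """Two-rectangle edge feature: difference between the left and
    right halves of the template."""
    left = rect_sum(ii, x, y, w // 2, h)
    right = rect_sum(ii, x + w // 2, y, w // 2, h)
    return left - right

img = np.arange(16).reshape(4, 4)  # toy 4x4 "image"
ii = integral_image(img)
print(haar_two_rect_horizontal(ii, 0, 0, 4, 4))  # -> -16
```

Sliding the template over all positions and scales is what produces the very large feature count mentioned above.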
FIGURE 1. Haar-like feature calculation diagram

B. TRADITIONAL LBP FEATURES

The facial key point detection algorithm using traditional LBP features in the literature[8] is also used for fatigue detection. The LBP feature is an operator proposed by Ojala[9] that describes image texture. It takes the pixel intensity of the center point as a threshold and compares it with the intensities of the surrounding points in turn: a neighbor not less than the center is set to 1, otherwise to 0, producing a string of binary digits that represents the local texture. As shown in Figure 2, for a 3*3 window whose values are pixel gray levels, the neighbors (6, 4, 7, 1, 3, 5, 4, 9) of the left window are compared counterclockwise with the center value 5; values not less than 5 are set to 1 and values less than 5 to 0, giving the right window and hence the LBP code 10100101. However, because different starting pixels can be chosen when extracting the feature, an excessive number of LBP features is produced.

FIGURE 2. LBP feature calculation diagram

C. MB-LBP FEATURES

Because traditional Haar-like and LBP features are highly redundant and can only describe a small amount of image structure, this paper adopts a facial key point localization algorithm based on the MB-LBP feature to predict the driver's fatigue degree. MB-LBP features are developed from Haar-like and LBP features: the rectangular-region difference used in Haar-like features is replaced by an LBP-style encoding of the rectangular regions. Whereas the traditional LBP feature compares each pixel of a 3*3 neighborhood with the central pixel, the MB-LBP feature compares the average intensity of a central rectangular region with the average intensities of the adjacent rectangular regions, yielding a binary string that represents the region. There are 256 possible MB-LBP codes. The calculation is shown in formula (1):

MB-LBP = Σ_{i=1..8} s(g_i − g_c)·2^(i−1)    (1)

where g_c is the average pixel intensity of the central rectangular region, g_i (i = 1, ..., 8) is the average pixel intensity of the i-th surrounding rectangular region, and s is defined in formula (2):

s(x) = 1, if x ≥ 0; s(x) = 0, if x < 0    (2)

The MB-LBP calculation process is shown in Figure 3, and the resulting binary string is used as the MB-LBP feature value. Compared with the traditional LBP feature, MB-LBP can detect edges, lines, points and flat regions of the image at various scales. Compared with traditional Haar-like features, there are far fewer MB-LBP features: for a 20×20 sub-window, there are 2049 MB-LBP features but 45891 Haar-like features, nearly 20 times as many[10], so training a classifier on MB-LBP features takes much less time than on Haar-like features.

FIGURE 3. MB-LBP computing process (original rectangles with pixel gray values; block-averaged gray values; resulting binary pattern)
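The LBP and MB-LBP encodings above can be sketched in a few lines. The first function reproduces the worked 3*3 example from Section II.B; the second applies the same thresholding of formula (1) to block means, as in MB-LBP. The bit ordering (least significant bit first) and the block-mean values are illustrative assumptions.

```python
def lbp_code(neighbors, center):
    """Threshold the 8 neighbors against the center (>= -> 1) and
    concatenate the bits into the LBP code string."""
    return ''.join('1' if v >= center else '0' for v in neighbors)

# Worked example from the text: neighbors read counterclockwise around 5.
print(lbp_code([6, 4, 7, 1, 3, 5, 4, 9], 5))  # -> '10100101'

def mb_lbp_code(block_means, center_mean):
    """MB-LBP (formula (1)): same thresholding, but applied to the average
    intensities of the 8 rectangles around a central rectangle.
    Returns one of 256 integer codes (2^(i-1) weights, i = 1..8)."""
    bits = [1 if g >= center_mean else 0 for g in block_means]
    return sum(b << i for i, b in enumerate(bits))

# Illustrative block means (hypothetical values, not from the paper).
means = [12.0, 9.0, 11.0, 6.0, 7.0, 8.0, 8.0, 20.0]
print(mb_lbp_code(means, 8.0))  # -> 231
```

Because only block averages are compared, the same 8-bit code describes structure at whatever scale the rectangles are chosen, which is the source of MB-LBP's multi-scale behavior.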
D. CONSTRUCTING CLASSIFIER
Since the MB-LBP feature is a non-metric value[11], a threshold function cannot be used as the learning method. Therefore, the Adaboost algorithm, which has good generalization ability, is selected to train the classifier. Adaboost is an iterative machine learning algorithm based on the Boosting idea[12]. Its core idea is to train different weak classifiers on different features of the same training set, and then combine these weak classifiers into a strong classifier. For a given training set (x1, y1), ..., (xn, yn), where xi is a sample point and yi its class label, the final strong classifier can be expressed as formula (3):

F(x) = Σ_{m=1..M} f_m(x)    (3)

where M is the number of iterations and f_m(x) is a weak classifier. The training process is as follows:
(1) Initialize the training sample weights as w_i = 1/n (i = 1, 2, ..., n), and set F(x) = 0.
(2) Perform M iterations; in each iteration, choose the weak classifier f_m that minimizes the weighted mean square error J_wse of formula (4):

J_wse = Σ_{i=1..N} w_i (y_i − f_m(x_i))^2    (4)

(3) Update the strong classifier: F(x) = F(x) + f_m(x).
(4) Update the weights and normalize them, as shown in formula (5):

w_i ← w_i · e^(−y_i f_m(x_i))    (5)

(5) Output the strong classifier:

F(x) = sign( Σ_{m=1..M} f_m(x) )

For each MB-LBP feature, a multi-branch tree with 256 branches (one per MB-LBP code) is used as the weak classifier, defined as formula (6):

f_m(x) = a_0 if x^k = 0; a_j if x^k = j; a_255 if x^k = 255    (6)

where x^k is the k-th feature of the feature vector x and a_j is a regression parameter to be learned. To compare the facial key point detection model combining MB-LBP features with the Adaboost classifier against traditional facial key point detection models, the models were trained on the Caltech10k Web Faces dataset and tested on the FDDB dataset. The results are shown in Figure 4.

FIGURE 4. The ROC curve of three classifiers

Comparing the receiver operating characteristic (ROC) curves of the three classifiers shows that the facial key point detection model based on MB-LBP features and the Adaboost classifier has higher detection accuracy than the traditional facial key point detection models. Moreover, according to the time recorded during training, this classifier also takes less time to train than the other two methods.

III. DRIVER FATIGUE DETERMINATION SYSTEM

Most fatigue detection methods determine whether the driver is fatigued only from the degree of eye opening[13-15]. A fatigued driver usually also yawns, so the opening frequency of the mouth can likewise reflect fatigue. Adding mouth state detection to the judgment of the eye state therefore increases the accuracy of the fatigue detection system.

A. EYES AND MOUTH STATE DETECTION

Using the trained Adaboost classifier to perform facial key point detection on the driver, the result is shown in Figure 5.

FIGURE 5. Detection effect of face key points
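The boosting loop of Section II.D (formulas (3)-(6)) can be sketched as follows. This is a toy illustration, not the paper's implementation: it uses a single MB-LBP code per sample, fits the 256-entry lookup table of formula (6) by weighted least squares, and follows the additive update and exponential reweighting of formulas (3)-(5). All names and the synthetic data are assumptions.

```python
import numpy as np

def fit_lut_weak(codes, y, w, n_bins=256):
    """Multi-branch weak classifier (formula (6)): one regression value
    a_j per possible MB-LBP code j, minimizing the weighted squared error."""
    a = np.zeros(n_bins)
    for j in range(n_bins):
        mask = codes == j
        if w[mask].sum() > 0:
            a[j] = np.average(y[mask], weights=w[mask])
    return a

def adaboost_lut(codes, y, rounds=10):
    """Boosting loop: F(x) = sign(sum_m f_m(x)), with the exponential
    weight update w_i <- w_i * exp(-y_i * f_m(x_i)) of formula (5)."""
    n = len(y)
    w = np.full(n, 1.0 / n)  # step (1): uniform initial weights
    tables = []
    for _ in range(rounds):
        a = fit_lut_weak(codes, y, w)   # step (2): best weak classifier
        fm = a[codes]
        w *= np.exp(-y * fm)            # step (4): reweight samples
        w /= w.sum()
        tables.append(a)                # step (3): F(x) += f_m(x)
    return tables

def predict(tables, codes):
    """Step (5): sign of the summed weak-classifier responses."""
    return np.sign(sum(a[codes] for a in tables))

# Synthetic data: codes below 128 are labelled -1, the rest +1.
rng = np.random.default_rng(0)
codes = rng.integers(0, 256, 200)
y = np.where(codes < 128, -1.0, 1.0)
model = adaboost_lut(codes, y)
print((predict(model, codes) == y).mean())  # -> 1.0 on this separable toy set
```

A real detector would, in each round, search over all candidate MB-LBP features k to pick the table with the smallest J_wse, and would cascade several such strong classifiers.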
After many experiments, it was found that the key points of the eye corners close to the nose are relatively stable and give better fatigue state judgments. Therefore, the opening angle at the inner corner of the eye is used as the basis for judging the eye's open state. From the detected coordinates of the key points of the eyes and mouth, the opening angles of the eyes and mouth can be calculated, as shown in Figures 6(a) and 6(b).

FIGURE 6. Calculation of eye and mouth angle: (a) Eyes; (b) Mouth

According to vector formulas (7)-(9):

A = (a_x − b_x, a_y − b_y)    (7)
B = (c_x − b_x, c_y − b_y)    (8)
cos α_e = (A · B) / (|A| |B|)    (9)

where α_e is the opening angle of the eye; the opening angle of the mouth (α_m) can be calculated in the same way. To obtain the closed-eye threshold T_e and the yawn threshold T_m, four testers were selected in the laboratory to simulate the yawning and frequent blinking that occur during fatigued driving, and video of each tester's facial expression was collected while repeatedly adjusting the installation angle of the camera. From each video, 900 consecutive frames were randomly intercepted as experimental data. The eye opening angle results of the four testers are shown in Figures 7(a)-(d), and the mouth opening angle results in Figures 8(a)-(d). Based on these results, the maximum and minimum eye and mouth opening angles within the first minute of each video are taken; 80% of the maximum eye opening angle defines the closed-eye threshold, calculated as in formula (10), and 60% of the maximum mouth opening angle defines the yawn threshold, calculated as in formula (11):

T_e = 0.2 (α_emax − α_emin)    (10)
T_m = 0.4 (α_mmax − α_mmin)    (11)

where α_emax and α_mmax are the maximum opening angles of the eyes and mouth in the first minute of the video, and α_emin and α_mmin are the corresponding minimum angles.

FIGURE 7. Tester eye angle calculation results (cosine of the eye opening angle against video frame number): (a) Experimenter 1; (b) Experimenter 2; (c) Experimenter 3; (d) Experimenter 4
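The angle computation of formulas (7)-(9) and the per-driver thresholds of formulas (10)-(11) can be sketched directly. The three landmark points below (upper lid, inner corner, lower lid) are hypothetical coordinates for illustration.

```python
import numpy as np

def opening_cosine(a, b, c):
    """Cosine of the angle at corner point b (formulas (7)-(9)):
    A = a - b, B = c - b, cos(alpha) = A.B / (|A||B|)."""
    A = np.asarray(a, float) - np.asarray(b, float)
    B = np.asarray(c, float) - np.asarray(b, float)
    return float(A @ B / (np.linalg.norm(A) * np.linalg.norm(B)))

# Hypothetical landmarks: upper eyelid, inner eye corner, lower eyelid.
print(opening_cosine((2.0, 1.0), (0.0, 0.0), (2.0, -1.0)))  # ~ 0.6

def threshold(angles_first_minute, factor):
    """Per-driver threshold from formulas (10)/(11):
    T = factor * (max - min) over the first minute
    (factor 0.2 for the eyes, 0.4 for the mouth)."""
    amax, amin = max(angles_first_minute), min(angles_first_minute)
    return factor * (amax - amin)
```

Calibrating the thresholds per driver from the first minute of video is what makes the scheme tolerant of different eye shapes and camera angles.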
B. FATIGUE STATE JUDGMENT
PERCLOS is a physical measure of driver alertness. Studies have shown that PERCLOS is the parameter most relevant to fatigue[16]. The PERCLOS calculations for the eyes and mouth are shown in formulas (12) and (13):

P = (time of closed eyes per unit time / unit time) × 100% = (n_p / N) × 100%    (12)
K = (yawning time per unit time / unit time) × 100% = (n_k / N) × 100%    (13)

where N is the total number of frames in the time window T, and n_p and n_k are the numbers of frames within T in which the eyes are closed and the mouth is open, respectively.

FIGURE 8. Tester mouth angle calculation results (cosine of the mouth opening angle against video frame number): (a) Experimenter 1; (b) Experimenter 2; (c) Experimenter 3; (d) Experimenter 4
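Formulas (12)-(13) reduce to frame counting once each frame has been labelled by the thresholds of Section III.A. A minimal sketch, with hypothetical per-frame boolean labels:

```python
def perclos(eye_closed, mouth_open):
    """Formulas (12)-(13): percentage of the N frames in the window with
    eyes closed (P) and with the mouth open (K)."""
    n = len(eye_closed)
    p = 100.0 * sum(eye_closed) / n
    k = 100.0 * sum(mouth_open) / n
    return p, k

# Toy window of 10 frames: eyes closed in 2 frames, mouth open in 3.
print(perclos([0, 0, 1, 0, 0, 1, 0, 0, 0, 0],
              [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]))  # -> (20.0, 30.0)
```

In practice the window would slide over the video so that P and K are updated continuously.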
C. FUZZY INFERENCE SYSTEM
Since there is no clear standard for determining fatigue states, we follow the common experience that fatigue develops gradually, from light to heavy. Using the PERCLOS values calculated in the previous section, the eyes and mouth are each divided into three states: closed, half open, and fully open, and the fatigue state is divided into three levels: normal (T1), slight fatigue (T2), and severe fatigue (T3). This division allows the driver to be alerted as soon as fatigue begins to appear, avoiding personal danger and property loss in time. Moreover, fatigue is a fuzzy concept that is difficult to describe with a mathematical model, so this paper designs a fuzzy reasoning system for fatigue judgment. The advantage of a fuzzy system is that it can turn human experience into system rules and make effective decisions for problems for which mathematical models are difficult to establish[17]. Since there is no public dataset dedicated to driver fatigue detection, the experimental data in this paper consist of 400 videos recorded in the laboratory by testers simulating the driving environment in the different states (normal, slight fatigue, severe fatigue). Each video lasts 10 minutes, with a resolution of 640×480 and a frame rate of 15 fps. To test the robustness of the algorithm, the dataset contains drivers of different skin colors and ages under different lighting conditions. Among them, 200 videos are used for PERCLOS threshold analysis of the eyes and mouth in the three states, and the remaining 200 videos are used to test the accuracy of the fatigue determination fuzzy system. By measuring the blinking and yawning frequencies of all testers from the normal state through slight fatigue to severe fatigue under different scenes and illumination conditions, the PERCLOS values corresponding to the different fatigue states were analyzed. The three state thresholds for the eye PERCLOS (P) and the mouth PERCLOS (K) are shown in Table 1.

TABLE 1. PERCLOS THRESHOLDS FOR EYES AND MOUTH

State | Low-frequency | Intermediate-frequency | High-frequency
Eye   | P < 13%       | 13% ≤ P ≤ 21%          | P > 21%
Mouth | K < 20%       | 20% ≤ K ≤ 30%          | K > 30%
According to the three state thresholds of the eyes and mouth in the table above (Low-frequency, Intermediate-frequency, High-frequency), and combining them with practical fatigue experience, a fuzzy inference system for the three fatigue levels, normal (T1), slight fatigue (T2) and severe fatigue (T3), can be designed. The fuzzy rules of the system are shown in Table 2.

TABLE 2. RULES OF FUZZY REASONING SYSTEM FOR FATIGUE DETECTION

Situation | Eye          | Mouth        | Reasoning result
1         | Low          | Low          | Normal (T1)
2         | Intermediate | Low          | Slight fatigue (T2)
3         | High         | Low          | Severe fatigue (T3)
4         | Low          | Intermediate | Normal (T1)
5         | Intermediate | Intermediate | Severe fatigue (T3)
6         | High         | Intermediate | Severe fatigue (T3)
7         | Low          | High         | Slight fatigue (T2)
8         | Intermediate | High         | Severe fatigue (T3)
9         | High         | High         | Severe fatigue (T3)
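A crisp version of the inference step can be sketched as a band classification (Table 1) followed by a rule lookup (Table 2). This is a simplification for illustration: a full fuzzy system would use membership functions and defuzzification rather than hard thresholds, and the function names are assumptions.

```python
def frequency_band(value, lo, hi):
    """Map a PERCLOS value to Low / Intermediate / High using the
    Table 1 thresholds (eyes: 13%/21%, mouth: 20%/30%)."""
    if value < lo:
        return 'Low'
    return 'Intermediate' if value <= hi else 'High'

# Rule base of Table 2: (eye band, mouth band) -> fatigue level.
RULES = {
    ('Low', 'Low'): 'T1', ('Intermediate', 'Low'): 'T2',
    ('High', 'Low'): 'T3', ('Low', 'Intermediate'): 'T1',
    ('Intermediate', 'Intermediate'): 'T3', ('High', 'Intermediate'): 'T3',
    ('Low', 'High'): 'T2', ('Intermediate', 'High'): 'T3',
    ('High', 'High'): 'T3',
}

def fatigue_level(p, k):
    """Classify eye PERCLOS p and mouth PERCLOS k into bands,
    then look the pair up in the rule base."""
    return RULES[(frequency_band(p, 13.0, 21.0),
                  frequency_band(k, 20.0, 30.0))]

print(fatigue_level(10.0, 15.0))  # -> 'T1' (normal, rule 1)
print(fatigue_level(17.0, 25.0))  # -> 'T3' (severe fatigue, rule 5)
```

Encoding the rules as a table keeps the expert knowledge of Table 2 separate from the inference code, which is the main practical appeal of the fuzzy-rule approach.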
IV. EXPERIMENTAL RESULTS AND ANALYSIS
The 200 test videos collected in Section III were used for a two-part test. First, the open states of the eyes and mouth were detected and compared with two traditional fatigue detection algorithms: one based on the aspect ratio of the eye and mouth regions, and one based on vertical integral projection, which computes the distance between the upper and lower eyelids and lips to determine blinking and yawning. The results are shown in Table 3: the detection success rate was 93.4% for the eye state and 85.9% for the mouth state, both higher than the traditional methods.

TABLE 3. TEST RESULTS OF EYE AND MOUTH OPENING STATE DETECTION

Algorithm | In this paper | Based on aspect ratio | Based on vertical integral projection
Blink     | 93.4%         | 82.7%                 | 80.6%
Yawning   | 85.9%         | 62.4%                 | 72.1%

Then the fuzzy inference system was used to detect the fatigue state of the testers in the 200 videos, which contain 57 normal videos, 94 slightly fatigued videos and 49 severely fatigued videos, and the results were compared with traditional fatigue detection algorithms based on the single features of blink frequency or yawn frequency. As shown in Table 4, the detection success rate is 96.5% in the normal state, 94.7% under slight fatigue and 100% under severe fatigue, higher in every case than the traditional fatigue detection algorithms. In the tests, the algorithm ran at 53 frames/s, which meets the requirements of real-time detection.

TABLE 4. DRIVER STATUS TEST RESULTS

Algorithm | In this paper | Based on blink frequency | Based on yawning frequency
Normal    | 96.5%         | 95.8%                    | 92.2%
Mild      | 94.7%         | 94.2%                    | 89.1%
Severe    | 100%          | 100%                     | 98.65%
V. CONCLUSION
This paper proposes a fatigue detection algorithm based on facial expression analysis, which uses the MB-LBP feature and the Adaboost classifier to detect the driver's facial key points. The results of the tests on the FDDB dataset show higher accuracy and less training time than the traditional face-based detection model based on Haar-like features and LBP features. The traditional fatigue detection algorithm only detects the driver's eye state. While the algorithm proposed in this paper detects the driver's eyes and mouth state detection, and the fusion of multiple information improves the detection accuracy of the system. Finally, the fuzzy inference system is used to classify the driver's fatigue state into normal, slight fatigue and severe fatigue. Since there is no specific data set for fatigue testing, experiments were performed on the fatigue test data set that made in the lab. The results prove that the accuracy of the driver's eyes and mouth open state and the driver's fatigue degree is higher than the traditional fatigue detection algorithm. It is robust to illumination changes and jitter blur, and the detection speed of the algorithm is faster, which can meet the requirements of real-time detection. However, when the driver wears glasses or the face rotates at a large angle, the calculation accuracy of the algorithm will decrease. Hope that in the future work, people can study the better human eye state detection in the case of wearing glasses for the driver, and improve the robustness of the fatigue detection system. REFERENCES [1]
driving detection method based on eye movement characteristics[J]. Journal of Harbin Engineering University, 2015 (3): 394-398. [2]
maneuvering features[J]. Journal of Scientific Instrument, 2014, 35(2): 398-404. [3]
Wang Lin, Zhang Chen, Yin Xiaowei, et al. A Non-contact Driving Fatigue Detection Technology Based on Driver's Physiological Signal[J]. Automotive Engineering, 2018 40(3): 333-341.
TEST RESULTS OF MOUTH OPENING STATE IN EYSE In this paper
Wang Fei, Wang Shaonan, Wang Xihui, et al. Driving fatigue detection based on EEG recognition combined with
TABLE 3
Algorithm
Niu Qingning, Zhou Zhiqiang, Jin Lisheng, et al. Fatigue
[4]
Li Wei, He Qichang, Fan Xiumin. Driver Fatigue State
Based on
Based on Vertical
Detection Based on Vehicle Maneuvering Signals[J].
aspect ratio
Integral
Journal of Shanghai Jiaotong University, 2010, 44(2): 292296.
Blink
93.4%
82.7%
80.6%
Yawning
85.9%
62.4%
72.1%
TABLE 4
[5]
Zheng C, Ban X, Yu W. Fatigue driving detection based on Haar feature and extreme learning machine [J]. Journal of
DRIVER STATUS TEST RESULTS
China Universities of Posts & Telecommunications, 2016,
Algorithm
In this paper
Based on blink frequency
Based on yawning frequency
Normal
96.5%
95.8%
92.2%
23 (4): 91-100.
[6]
[7]
Wang Qingwei, Ying Zao. A Face Detection Algorithm
[18] ZHANG Xu, LI Ya-li, CHEN Chen, et al. Implementation
Based on Haar-Like T Feature[J]. Pattern Recognition &
and Optimization of Embedded Driver Status Detection
Artificial Intelligence, 2015, 28(1): 35-41.
Algorithm[J]. Acta Automatica Sinica, 2012, 38(12):2014-
Viola P, Jones M. Rapid Object Detection using a Boosted
2022. [19] Trutschel U, Sirois B, Sommer D, et al. PERCLOS: An
Cascade of Simple Features [C]. [8]
Mingliang Xu, Hua Wang*, Shili Chu, Yong Gan, Xiaoheng
Alertness Measure of the Past [C]. Driving Assessment
Jiang, Yafei Li, Bing Zhou. Traffic Simulation and Visual
2011: 6th International Driving Symposium on Human
Verification in Smog. ACM Transactions on Intelligent
Factors in Driver Assessment, Training, and Vehicle Design. 2011: 172-179.
Systems and Technology, 10(1): 3:1-3:17, 2019. [9]
Zhou Yunpeng, Zhu Qing, Wang Yaonan, et al. Driver
[20] Mingliang Xu, Mingyuan Li, Weiwei Xu, Zhigang Deng,
fatigue detection method based on facial multi-feature
Yin Yang, Kun Zhou. Interactive Mechanism Modeling
fusion[J].
from Multi-view Images. ACM Transactions on Graphics,
Journal
of
Electronic
Measurement
and
35(6): Article 236, 2016.
Instrument, 2014(10). [10] Ojala T, Pietikainen M, Maenpaa T. Multiresolution gray-
[21] Bai Zhonghao, Jiao Yinghao, Bai Fanghua. Driving fatigue
scale and rotation invariant texture classification with local
detection based on active shape model and fuzzy
binary patterns [J]. IEEE Transactions on Pattern Analysis
inference[J]. Chinese Journal of Scientific Instrument, 2015,
& Machine Intelligence, 2002, 24 (7): 971-987.
36(4): 768-775.
[11] Zhang L, Chu R, Xiang S, et al. Face Detection Based on Multi-Block LBP Representation [J]. Lecture Notes in Computer Science, 2007, 4642:11-18. [12] Cai Z, Gu Z, Yu Z L, et al. A real-time visual object tracking system based on Kalman filter and MB-LBP feature matching [J]. Multimedia Tools & Applications, 2016, 75 (4): 2393-2409. [13] Junwei Han, Dingwen Zhang, Xintao Hu, Lei Guo, Jinchang Ren, Feng Wu Background prior-based salient object detection
via
Transactions
deep on
reconstruction
Circuits
and
residual.
Systems
for
IEEE Video
Technology, 25(8): 1309-1321, 2015.
Zhongmin
Liu
received
his
Ph.D.
from
Lanzhou University of Technology in 2002 and 2009. Now he is an associate professor in Lanzhou University of Technology of China. His main research areas is
[14] CAO Ying, Miao Qiguang, Liu Jiachen, et al. Research progress and prospect of AdaBoost algorithm[J]. Acta
machine
vision,
pattern
recognition
and
image
processing.
Automatica Sinica, 2013, 39(6): 745-758. [15] Mohammad F, Mahadas K, Hung G K. Drowsy driver mobile application: Development of a novel scleral-area detection method. [J]. Computers in Biology & Medicine, 2017, 89. [16] Zhang Z, Zhang J. A new real-time eye tracking based on nonlinear unscented Kalman filter for monitoring driver fatigue [J]. Control Theory and Technology, 2010, 8 (2): 181-188. Yuxi Peng received his bachelor's degree in
[17] Mingliang Xu, Chunxu Li, Pei Lv, Lin Nie, Rui Hou, Bing Zhou. An Efficient Method of Crowd Aggregation Computation in Public Areas. IEEE Transactions on Circuits and
Systems
for
Video
Technology,
10.1109/TCSVT.2017.2731866, 2017.
DOI:
North China University of Technology in 2016. He is a
graduate
student
of
Lanzhou
University
of
Technology now. The main research direction is image processing, detection.
pattern
recognition
and
fatigue
Wenjin Hu, University associate
of
Society.
Technology
professor
Nationalities. Main
Ph.D. at
Member research
graduated and
is
Northwest of
the
areas:
pattern recognition, data mining.
from
Lanzhou
currently University
Chinese image
an for
Computer
restoration,
The authors declare that there is no conflict of interest.
1) Compared with Haar-like and LBP features, MB-LBP features are fewer in number and contain more structural information. 2) The fatigue detection algorithm proposed in this paper achieves better accuracy and timeliness than traditional fatigue detection algorithms based on Haar-like and LBP features.