Fuzzy Sets and Systems 51 (1992) 29-40 North-Holland
A self-tuning fuzzy controller

Mikio Maeda and Shuta Murakami
Department of Computer Engineering, Faculty of Engineering, Kyushu Institute of Technology, Tobata, Kitakyushu 804, Japan

Received August 1991
Revised October 1991
Abstract: The aim of a fuzzy controller is to compensate for the dynamic characteristics of the controlled system; this is also the purpose of the fuzzy logic controller, FLC. The best design of the FLC is required to achieve this purpose, but such a controller has not yet been completely designed, because the fuzzy controller is built from control rules that are only ambiguously described by conventional control strategies and expert knowledge, and because its structure theory is not yet well established. That is, constructing the best rules requires a highly skilled technique and trial-and-error tuning. As a way to design an FLC easily, a method which improves a given standard FLC is considered. In this paper, we propose a self-tuning algorithm for the FLC. It has two functions: adjusting the scaling factors, which are the parameters of the FLC, and improving the control rules of the FLC by evaluating the control response in real time and the control results after operation. We add these functions to the FLC and design the self-tuning fuzzy controller, STFC. This makes the FLC a good controller.
Keywords: Fuzzy controller; self-tuning; auto-tuning; scaling factor; linguistic control rule; heuristic rule modifier.
1. Introduction
(Correspondence to: Dr. M. Maeda, Department of Computer Engineering, Faculty of Engineering, Kyushu Institute of Technology, Tobata, Kitakyushu 804, Japan.)

Recently, fuzzy control [1] has become of general interest. As applications of fuzzy control, cement kiln control [2], heat exchanger process control [3], automatic train control [4], vehicle speed control [5], and autonomous mobile robot control [6] have been implemented. One of the features of fuzzy control is that the if-then rules are described on the basis of the conventional control strategy and the expert knowledge. But it is difficult to represent the expert knowledge perfectly by linguistic control rules. Moreover, the fuzzy control system has many parameters, and its control quality depends on the tuning of these parameters. In many cases, the parameters of a fuzzy control system are tuned by trial and error. In order to overcome these difficulties, we need a function that tunes the system parameters (which include the rule parameters) automatically by evaluating the response (result) of fuzzy control [7, 8]. This function was first introduced to fuzzy control by Baaklini [9] and Procyk [10]. Their fuzzy controller is called the self-organizing controller, SOC. After that, the SOC was improved by Yamazaki [11], and recently a self-learning adjustment based on neural net concepts has been proposed [12]. In the initial investigations by Baaklini and Procyk, there are some problems, such as a cyclic phenomenon appearing in the control response, a large settling time, and an unstable control force. Therefore, to remedy these points, Yamazaki proposed an improved algorithm which evaluates the control responses in a fuzzy manner and tunes the consequent parts (i.e. fuzzy labels) of the fuzzy control rules. However, adjustment rules for the scaling factors, which are the parameters of the controller, are not given there. From the standpoint of control performance, this improved SOC may not control the system sufficiently when the characteristics of the system change dynamically. The recent FLC employing neural net concepts leaves us skeptical, because the substance and the explanation of its neural net mechanism are not clear. Therefore, we propose a new, clear algorithm for a self-tuning fuzzy controller [13]. It is equivalent to a method which adds an adjustment function for the scaling factors to Yamazaki's idea, except for several points. The major differences are the input variables of the controller and the use of approximate reasoning. Yamazaki uses the control error and the first difference of
0165-0114/92/$05.00 © 1992 Elsevier Science Publishers B.V. All rights reserved
Fig. 1. Self-tuning fuzzy controller.

the control error as input variables, while we use the control error, the first difference of the control error, and the second difference of the control error. Furthermore, he employs the fuzzy reasoning proposed by Mamdani, while we employ simplified fuzzy reasoning [8]. A further difference is the function for adjustment of the scaling factors, which is introduced by the authors. It repeatedly adjusts the scaling factors

S1 (= 1/a1),  S2 (= 1/a2),  S3 (= 1/a3),  S4 (= d)

(see Figure 1) of the FLC from the evaluation of the control response or the control result. In this paper, we first show the learning method for the self-tuning, which makes the fuzzy controller a better controller, and secondly we design a self-tuning fuzzy controller, STFC, as shown in Figure 1. The self-tuning rules are constructed from three heuristic rule sets: the repeated-learning rules (i.e. the scale adjustment rules), the real-time learning rules (i.e. the modification rules which improve the control rules), and the evaluation rules for the control response and control result. Finally, the usefulness of this method is demonstrated by control simulations.

2. Self-tuning fuzzy controller [13]

The fuzzy controller is composed of the following linguistic control rules, which are summarized in Table 1:

LCR 1: If ek is P1, Δek is P2, Δ²ek is P3 then Δuk is PB,
LCR 2: If ek is P1, Δek is P2, Δ²ek is Z3 then Δuk is PB,
  ...
LCR 27: If ek is N1, Δek is N2, Δ²ek is N3 then Δuk is NB,

where

ek = r - yk,  Δek = ek - ek-1,  Δ²ek = Δek - Δek-1,  Δuk = uk - uk-1,

with r: the reference, yk: the control value, ek: the control error, Δek: the difference of ek, Δ²ek: the difference of Δek, Δuk: the difference of the manipulated variable, k: the sampling instant, Ni: negative, Pi: positive, Zi: zero (i = 1, 2, 3), PB: positive big, NB: negative big.

Table 1. Linguistic control rules

Δu for 'e is P'            Δ²e
    Δe        N     Z     P
    P         PM    PB    PB
    Z         PS    PM    PB
    N         ZE    PS    PM

Δu for 'e is Z'            Δ²e
    Δe        N     Z     P
    P         ZE    PS    PM
    Z         NS    ZE    PS
    N         NM    NS    ZE

Δu for 'e is N'            Δ²e
    Δe        N     Z     P
    P         NM    NS    ZE
    Z         NB    NM    NS
    N         NB    NB    NM

P: Positive, N: Negative, Z: Zero, PB: Positive Big, PM: Positive Medium, PS: Positive Small, ZE: Zero, NS: Negative Small, NM: Negative Medium, NB: Negative Big.
The membership functions of the above fuzzy sets are shown in Figure 2. Those of the antecedent clauses and the consequent clauses are of triangle type and bar type (singleton), respectively. The support sets of these fuzzy sets are normalized to the interval [-1, 1]. The control space is symmetric and monotonic in the variables e, Δe and Δ²e, as shown in Table 1. Now, if the non-fuzzy values ek, Δek, and Δ²ek are input to the fuzzy controller, the non-fuzzy value Δuk is obtained by the following equation [6, 8]:
Δuk = ( Σ_{i=1..27} wi · bi ) / ( Σ_{i=1..27} wi ),    (1)

Fig. 3. Gain function for input e.
where wi = μAi1(ek) ∧ μAi2(Δek) ∧ μAi3(Δ²ek),
with Aij: positive, negative or zero (i = 1, 2, ..., 27; j = 1, 2, 3), and bi: the typical real value of the label (PB, PM, ..., or NB) of the consequent part of rule i. In (1), the consequents of the control rules are interpreted as "Δuk is bi" (i = 1, 2, ..., 27). At time k, the value of the manipulated variable uk, which is the input of the controlled object, is obtained by
uk = uk-1 + Δuk.    (2)
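As a concrete illustration, equations (1)-(2) can be sketched in a few lines of Python. The rule table encodes Table 1, and the singleton values match the initial consequent values listed later in Table 5b (before learning); the helper names (`tri`, `delta_u`) and the unit-width triangles are assumptions of this sketch, not part of the paper.

```python
# Sketch of the simplified fuzzy reasoning of equations (1)-(2).
# Inputs are assumed already normalized to [-1, 1] by the scaling factors.

def tri(x, center, width=1.0):
    """Triangular membership centred at `center` on the normalized axis."""
    return max(0.0, 1.0 - abs(x - center) / width)

# Singleton consequent values b_i for the labels of Table 1
# (matching the initial values of Table 5b, before learning).
B = {'NB': -1.0, 'NM': -2/3, 'NS': -1/3, 'ZE': 0.0,
     'PS': 1/3, 'PM': 2/3, 'PB': 1.0}

# Table 1 as label[e][Δe][Δ²e], with antecedent labels N/Z/P.
TABLE1 = {
    'P': {'P': {'N': 'PM', 'Z': 'PB', 'P': 'PB'},
          'Z': {'N': 'PS', 'Z': 'PM', 'P': 'PB'},
          'N': {'N': 'ZE', 'Z': 'PS', 'P': 'PM'}},
    'Z': {'P': {'N': 'ZE', 'Z': 'PS', 'P': 'PM'},
          'Z': {'N': 'NS', 'Z': 'ZE', 'P': 'PS'},
          'N': {'N': 'NM', 'Z': 'NS', 'P': 'ZE'}},
    'N': {'P': {'N': 'NM', 'Z': 'NS', 'P': 'ZE'},
          'Z': {'N': 'NB', 'Z': 'NM', 'P': 'NS'},
          'N': {'N': 'NB', 'Z': 'NB', 'P': 'NM'}},
}
CENTER = {'N': -1.0, 'Z': 0.0, 'P': 1.0}

def delta_u(e, de, d2e):
    """Equation (1): weighted mean of singletons, w_i = min of memberships."""
    num = den = 0.0
    for le in 'NZP':
        for lde in 'NZP':
            for ld2e in 'NZP':
                w = min(tri(e, CENTER[le]), tri(de, CENTER[lde]),
                        tri(d2e, CENTER[ld2e]))
                num += w * B[TABLE1[le][lde][ld2e]]
                den += w
    return num / den if den else 0.0
```

Applying `u = u_prev + delta_u(e, de, d2e)` then reproduces the incremental control law of equation (2).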
Fig. 2. Membership functions (control rules). (a) Antecedent, (b) consequent.

Now, the aim of the scaling factors (SF) is to convert the input values and the output value of the controller into their internal values. Choosing the scaling factors effectively settles the apparent gain of the controller. The relation between the scaling factors and the gain characteristic of the controller, for the control error e only, is shown in Figure 3. In the figure, when the scaling factor d is held constant, the apparent controller gain increases as the scaling factor a1 decreases. Even when the SF a1 is fixed, the same effect is obtained by changing the SF d. However, the results of gain adjustment by these two ways are not exactly the same, because the characteristics in the figure are nonlinear functions. In this way, changing the scaling factors gives elasticity to the characteristics of the controller without changing their overall tendency. This tendency of the characteristics (the so-called gain function) is changed by modifying the control rules, that is, the consequent parts of the control rules. From the above, it follows that both the adjustment of the scaling factors and the modification of the control rules are important.
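The role of the scaling factors can be sketched as follows; the clipping to the normalized support [-1, 1] and the function names are assumptions of this sketch.

```python
# Sketch: scaling factors as input normalization (S_i = 1/a_i) and
# output gain (S_4 = d). `flc` stands for any map from the normalized
# cube [-1, 1]^3 to [-1, 1], e.g. equation (1); names are illustrative.

def clip(x, lo=-1.0, hi=1.0):
    """Clamp a normalized signal to the support set [-1, 1]."""
    return max(lo, min(hi, x))

def scaled_delta_u(e, de, d2e, a1, a2, a3, d, flc):
    """Apply S_1..S_3 to the inputs, evaluate the FLC, apply S_4 = d."""
    return d * flc(clip(e / a1), clip(de / a2), clip(d2e / a3))
```

Decreasing a1 with d fixed steepens the apparent gain for e, while changing d rescales the whole output; as the text notes, the two are not equivalent because the underlying gain function is nonlinear.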
Adjustment of scaling factors

When the control ends, the learning function of the STFC adjusts the scaling factors by evaluating the control results. The objects of evaluation are 'overshoot', 'reaching time' and 'amplitude', as shown in Figure 4. The evaluation values, which express how good these are, are given by fuzzy evaluation functions defined
Table 2. Scale adjustment rules (antecedents: the performance errors eOV, eRT, eAM; consequents: the adjustments Δa1, Δa2, Δa3, Δd with singleton labels such as NB, NS, PS, PB).

Fig. 4. Performance index of control response.
in advance. These evaluation values are given at the end of the control interval and are used in the following final fuzzy performance, FP:

FP := min{μOV(eOV), μRT(eRT), μAM(eAM)},    (3)

where

eOV = OV - OV*,  eRT = RT - RT*,  eAM = AM - AM*,

with OV, OV*: the real value and the target value of the overshoot, RT, RT*: the real value and the target value of the reaching time, AM, AM*: the real value and the target value of the amplitude, and where μ.(.) stands for the grade of goodness; these functions are of triangle or trapezoid type. The adjustment finishes when the following non-fuzzy rule fires: if FP is greater than θ or Σt |et| reaches a lower limit value, then the adjustment ends, for θ ∈ [0, 1]. Here, Σt |et| stands for the sum of control errors over the considered interval, and the lower limit value means a value near the convergent value of Σt |et|. Now, the heuristic rules for the adjustment of the scaling factors are shown in Table 2. Each membership function of the condition part is of linear type and those of the consequent part are of singleton type, as shown in Figure 5. These rules change the intervals [-ai, ai] (i = 1, 2, 3) and [-d, d], and each Δai and Δd implies an increase or decrease in the quantity ai or d. By applying the control results to the above tuning rules and performing simplified fuzzy reasoning, Δai (i = 1, 2, 3) and Δd are calculated. Then the parameters of the fuzzy controller are adjusted by the following equations:

ai_new = ai_old + (1 - FP) Δai    (i = 1, 2, 3),    (4a)
d_new = d_old + (1 - FP) Δd.    (4b)

In this way the scaling factors ai (i = 1, 2, 3) and d are decided. When a good response is obtained, the adjustment of the scaling factors is finished.
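A minimal sketch of this end-of-run adjustment, equations (3)-(4): the triangular goodness grades and the increments Δai, Δd (which in the paper come from the Table 2 rules via simplified fuzzy reasoning) are placeholders of this sketch.

```python
# Sketch of the scale adjustment of equations (3)-(4).

def goodness(err, tol):
    """Triangular grade of goodness: 1 at err = 0, 0 beyond +/-tol (assumed shape)."""
    return max(0.0, 1.0 - abs(err) / tol)

def adjust_scales(a, d, ov, rt, am, targets, tols, deltas, delta_d):
    """a: [a1, a2, a3]; deltas, delta_d: increments from the Table 2 rules
    (here taken as given). Returns updated parameters and FP."""
    e_ov, e_rt, e_am = ov - targets['OV'], rt - targets['RT'], am - targets['AM']
    # Equation (3): final fuzzy performance
    fp = min(goodness(e_ov, tols['OV']), goodness(e_rt, tols['RT']),
             goodness(e_am, tols['AM']))
    # Equations (4a)-(4b): the better the response (FP -> 1), the smaller the step
    a_new = [ai + (1.0 - fp) * dai for ai, dai in zip(a, deltas)]
    d_new = d + (1.0 - fp) * delta_d
    return a_new, d_new, fp
```

A perfect run (FP = 1) leaves all scaling factors unchanged, which is exactly the stopping behaviour described above.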
Fig. 5. Membership functions (scale adjustment rules). Left: antecedent, right: consequent.

Improvement of control rules

First, we decide the target response as shown in Figure 6. The target response is determined by a target value r, a target reaching time RT*, and a lag time L. This reference is used to estimate, at each sampling time, to what degree the control response agrees with the target response.

Fig. 6. Target response.

Fig. 7. Relation patterns between target and real response: (1) ek* is N, Δek* is N; (2) ek* is N, Δek* is ZE; (3) ek* is N, Δek* is P; (4) ek* is ZE, Δek* is N; (5) ek* is ZE, Δek* is ZE; (6) ek* is ZE, Δek* is P; (7) ek* is P, Δek* is N; (8) ek* is P, Δek* is ZE; (9) ek* is P, Δek* is P.

A real-time learning method for the improvement of the control rules modifies those rules (i.e. the control rules used in the past) which are most probably related to the present control state. When the control response is obtained at each sampling point, those rules are adjusted so as to make the control response agree with the target response. Now, the consequent clauses of the control rules have real numbers bi (i = 1, 2, ..., 27). These bi are adjusted at each sampling point. For example, suppose that the control response relative to the target response is the pattern shown in Figure 7(1) (the lag time is omitted). Then we can see that "the error of the response was negative before m sampling
times", "it is negative now", and "it will be bigger in the future". Therefore, the manipulated quantity m samplings ago was too large, and we have to decrease the values bi of the consequent clauses of the control rules used in the inference m samplings ago. The above sentence is changed into the following rule:

If ek* is N, Δek* is N then Δb is NB,    (5)

where

ek* = yk* - yk,  Δek* = ek* - e(k-m)*,

with Δb: the adjusting value of the parameter bi, ek*: the error of the response, yk*: the target response, yk: the control response, Δek*: the difference of ek*.

In the same way, the other eight modification rules are obtained. These rules are shown in Table 3, where the consequent values of the rules are symmetric. The membership functions are of the same type as those of the scale adjustment rules (see Figure 5). The new bi (i = 1, 2, ..., 27) are calculated by

bi_new = bi_old + (1 - FP) Δb · wi^(k-m),  for i = 1, 2, ..., 13,    (6a)
bi_new = bi_old - (1 - FP) Δb · wi^(k-m),  for i = 15, ..., 27,    (6b)

with wi^(k-m): the adaptation degree of rule i at sampling time (k - m), bi_old: the value of the consequent clause of rule i used at sampling time (k - m), and bi_new: the improved value of bi_old.

In order to perform this rule modification, the controller must memorize the rules whose adaptation degrees wi are not zero, together with those wi. This memorization is done in the data stock unit. Using the two algorithms, the scaling factor adjustment and the rule modifier, the FLC is tuned in the following steps.

1. Initial setting of the parameters of the FLC.
2. Scale adjustment.
3. Rule modification.
4. Evaluation of the control result: if FP is greater than θ, or Σt |et| is small (fuzzy label) and the deviation of Σt |et| is zero (fuzzy label), then go to step 5, else go to step 3.
5. Learning end.

When the response is evaluated at the end of control, FP is calculated by equation (3) and the evaluation of Σt |et| is performed; when FP attains the upper limit level of goodness (i.e. FP no longer changes), the tuning of the FLC ends. Note that m and the sampling time are decided on the basis of values such as the lag time and the time constant of the target system. Furthermore, the consequent values of all control rules and the SF values are rearranged on the basis of the monotone characteristics of the FLC (see Table 1 and Figure 3) and bi ∈ [-1, 1].
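The real-time rule modification, equations (5)-(6), can be sketched as follows. The nine patterns of Table 3 are encoded directly; the numeric values assigned to the Δb labels (evenly spaced singletons), the sign-classification helper, and the clipping of bi to [-1, 1] are assumptions of this sketch.

```python
# Sketch of the rule modification of equations (5)-(6).
# Rules are indexed 0..26; the centre rule 14 is b[13].

# Table 3: (label of e*, label of Δe*) -> Δb singleton (assumed values).
TABLE3 = {('N', 'N'): -1.0, ('N', 'Z'): -2/3, ('N', 'P'): -1/3,   # NB NM NS
          ('Z', 'N'): -1/3, ('Z', 'Z'):  0.0, ('Z', 'P'):  1/3,   # NS ZE PS
          ('P', 'N'):  1/3, ('P', 'Z'):  2/3, ('P', 'P'):  1.0}   # PS PM PB

def label(x, eps=1e-6):
    """Crisp N/Z/P classification (a simplification of the fuzzy labels)."""
    return 'Z' if abs(x) < eps else ('P' if x > 0 else 'N')

def modify_rules(b, w_past, e_star, de_star, fp):
    """b: consequent singletons b_1..b_27; w_past: firing strengths of the
    rules at time k-m, recalled from the data stock unit."""
    db = TABLE3[(label(e_star), label(de_star))]   # rule (5) and its 8 siblings
    b_new = list(b)
    for i, w in enumerate(w_past):
        if w == 0.0 or i == 13:                 # untouched centre rule 14
            continue
        sign = +1.0 if i < 13 else -1.0         # equations (6a)/(6b)
        b_new[i] = max(-1.0, min(1.0, b[i] + sign * (1.0 - fp) * db * w))
    return b_new
```

Only rules that actually fired at time k-m (nonzero wi) are touched, which is why the data stock unit must store them.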
Table 3. Modification rules for control rules

      Antecedent        Consequent
      ek*    Δek*       Δb
(1)   N      N          NB
(2)   N      Z          NM
(3)   N      P          NS
(4)   Z      N          NS
(5)   Z      Z          ZE
(6)   Z      P          PS
(7)   P      N          PS
(8)   P      Z          PM
(9)   P      P          PB

3. Simulations
The controlled system is a second-order delay system:

ẍ(t) = -α ẋ(t) - β x(t) + γ u(t - L),    (7)

with x: the system output, u: the system input, α, β, γ: the system parameters, and L: the lag time. As simulation models of the above system,
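For reference, the plant of equation (7) can be integrated with a simple explicit-Euler sketch; the integration scheme, the step size, and the zero initial conditions are choices of this sketch, not of the paper.

```python
# Euler integration of the second-order delay plant of equation (7):
#   x'' = -alpha*x' - beta*x + gamma*u(t - L)

def simulate(alpha, beta, gamma, lag, u_of_t, t_end, dt=0.01):
    """Simulate from rest (x = x' = 0); u is taken as 0 for t < L."""
    x = v = 0.0
    out = []
    n = int(round(t_end / dt))
    for k in range(n):
        t = k * dt
        u = u_of_t(t - lag) if t >= lag else 0.0
        acc = -alpha * v - beta * x + gamma * u   # equation (7)
        v += acc * dt
        x += v * dt
        out.append(x)
    return out

# Step response of model 1 (Table 4: alpha=0.638, beta=0.034, gamma=1.228, L=0)
y = simulate(0.638, 0.034, 1.228, 0.0, lambda t: 1.0, t_end=20.0)
```

The unit-step response approaches the DC gain γ/β as t grows, which for model 1 is about 36; the trajectories of Figures 8a-8c correspond to such runs under the three parameter sets of Table 4.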
three types based on the speed control system of an automobile are selected. These system models are an asymptotically damped system (model 1), an over-damped system (model 2), and a damped oscillation system (model 3), respectively. Model 1 is the controlled object (i.e. from throttle actuator to speed sensor) of a vehicle speed control system [5, 14]. Model 2 and model 3 are variations of model 1 under potential disorder conditions. Their step responses are shown in Figures 8a-8c. The parameters of these systems are given in Table 4. In the simulations, each sampling time is selected in consideration of the actual controller performance and to keep the simulations simple.

Table 4. System parameters

Model    α        β        γ        L
1        0.638    0.034    1.228    0
2        0.8994   0.0064   0.2310   1
3        0.40     0.54     19.54    0

Figure 9a shows the control result for model 1. The solid line in the figure indicates the result of the normal fuzzy control system; the marked curves indicate the result when the adjustment of the scaling factors ended and the result when the improvement of the control rules was done and all tuning was finished (see the legend of Figure 9a). The target reaching time is set at 12 seconds. By adjusting the scaling factors, the overshoot increased, but the reaching time approached the target. In this case, the learning adjustment of the scaling factors finished on the first try. After that, by 7 learning modifications of the control rules, the reaching time became about 12 seconds and the overshoot decreased. The scaling factors before and after adjustment are shown in Table 5a, and the rule tables before and after improvement are shown in Table 5b. A control result for model 2 is shown in Figure 9b. The target reaching time is set at 30 seconds. After 3 scale adjustments followed by 10 rule modifications, nearly the target response was obtained. The scaling factors
Fig. 8a. Step response of the asymptotically damped system (sampling time 0.3 s).
Fig. 8b. Step response of the over-damped system (sampling time 1 s).
Fig. 8c. Step response of the damped oscillation system (sampling time 0.3 s).
Fig. 9a. Control results by STFC for model 1 (sampling time 0.3 s): normal, after scale adjustment, after rule modification.
Fig. 9b. Control results by STFC for model 2 (sampling time 1 s): normal, after scale adjustment, after rule modification.
Fig. 9c. Control results by STFC for model 3 (sampling time 0.3 s): normal, after scale adjustment, after rule modification.
Table 5a. Scaling factors before and after learning

       Before learning    After learning
a1     60.0               58.81
a2     0.65               0.708
a3     0.20               0.218
d      1.50               2.296
Table 5b. Control rules before and after learning

                  Before learning          After learning
                  Δ²e                      Δ²e
          Δe      N       Z       P        N       Z       P

Δu for 'e is P'
          P       0.667   1.000   1.000    0.471   0.706   1.000
          Z       0.333   0.667   1.000    0.271   0.474   0.706
          N       0.000   0.333   0.667    0.079   0.262   0.451

Δu for 'e is Z'
          P       0.000   0.333   0.667    0.064   0.326   0.463
          Z      -0.333   0.000   0.333   -0.229   0.000   0.229
          N      -0.667  -0.333   0.000   -0.463  -0.326  -0.064

Δu for 'e is N'
          P      -0.667  -0.333   0.000   -0.451  -0.262  -0.079
          Z      -1.000  -0.667  -0.333   -0.706  -0.474  -0.271
          N      -1.000  -1.000  -0.667   -1.000  -0.706  -0.471
Table 6a. Scaling factors before and after learning
       Before learning    After learning
a1     60.0               57.86
a2     0.68               0.830
a3     0.17               0.208
d      0.80               1.371
Table 6b. Control rules before and after learning

                  Before learning          After learning
                  Δ²e                      Δ²e
          Δe      N       Z       P        N       Z       P

Δu for 'e is P'
          P       0.667   1.000   1.000    0.445   0.667   1.000
          Z       0.333   0.667   1.000    0.259   0.445   0.667
          N       0.000   0.333   0.667    0.038   0.163   0.401

Δu for 'e is Z'
          P       0.000   0.333   0.667    0.109   0.343   0.442
          Z      -0.333   0.000   0.333   -0.223   0.000   0.223
          N      -0.667  -0.333   0.000   -0.442  -0.343  -0.109

Δu for 'e is N'
          P      -0.667  -0.333   0.000   -0.401  -0.163  -0.038
          Z      -1.000  -0.667  -0.333   -0.667  -0.445  -0.259
          N      -1.000  -1.000  -0.667   -1.000  -0.667  -0.445
and the control rules are shown in Tables 6a and 6b. Figure 9c shows the control result for model 3. The target reaching time is set at 4 seconds. After 3 scale adjustments and 8 rule modifications, nearly the target response was obtained. The scaling factors and the control rules are shown in Tables 7a and 7b. Note that the numbers of scale adjustments and rule modifications equal the number of repetitions of the simulation; these numbers become larger or smaller depending on the selection of the value θ. In this paper, θ is set at 0.6. Each vertical axis in the figures has a nominal scale. The initial values of the scaling parameters are selected by heuristic rules, for example, a1: about [reference], a2: a1 · [sampling interval]/[time constant]. The simulation for learning may be carried out continuously, at evaluation intervals, for a long time.
Table 7a. Scaling factors before and after learning

       Before learning    After learning
a1     60.0               62.1
a2     3.50               3.10
a3     1.20               1.06
d      1.00               0.86

4. Conclusions
In this paper, a learning method for the self-tuning of a fuzzy controller, that is, an adjustment method for the scaling factors and a modification method for the control rules, was proposed. A self-tuning fuzzy controller (STFC) applying this method was designed. From the results of control simulations using the STFC, it was found that the STFC is effective and useful. By using this STFC, one will be able to achieve high
Table 7b. Control rules before and after learning

                  Before learning          After learning
                  Δ²e                      Δ²e
          Δe      N       Z       P        N       Z       P

Δu for 'e is P'
          P       0.667   1.000   1.000    0.569   0.853   1.000
          Z       0.333   0.667   1.000    0.283   0.569   0.853
          N       0.000   0.333   0.667    0.005   0.297   0.574

Δu for 'e is Z'
          P       0.000   0.333   0.667   -0.142   0.270   0.574
          Z      -0.333   0.000   0.333   -0.286   0.000   0.286
          N      -0.667  -0.333   0.000   -0.574  -0.270   0.142

Δu for 'e is N'
          P      -0.667  -0.333   0.000   -0.574  -0.297  -0.005
          Z      -1.000  -0.667  -0.333   -0.853  -0.569  -0.283
          N      -1.000  -1.000  -0.667   -1.000  -0.853  -0.569
quality control on the basis of only the initial setting of the parameters of the controller. One of the future considerations is how well the STFC applies to systems with a large lag time. We will have to append some rules to the STFC to enable that.
References

[1] E.H. Mamdani, Application of fuzzy algorithms for control of simple dynamic plants, Proc. IEE 121(12) (1974) 1585-1588.
[2] L.P. Holmblad and J.J. Ostergaard, Control of a cement kiln by fuzzy logic, in: M.M. Gupta and E. Sanchez, Eds., Fuzzy Information and Decision Processes (North-Holland, Amsterdam, 1982) 389-399.
[3] J.J. Ostergaard, Fuzzy logic control of a heat exchanger process, in: M.M. Gupta et al., Eds., Fuzzy Automata and Decision Processes (North-Holland, Amsterdam, 1977) 285-320.
[4] S. Yasunobu and S. Miyamoto, Automatic train operation system by predictive fuzzy control, in: M. Sugeno, Ed., Industrial Applications of Fuzzy Control (North-Holland, Amsterdam, 1985) 1-18.
[5] H. Takahashi, Y. Eto, S. Takase, S. Murakami and M. Maeda, Application of a self-tuning fuzzy logic system to automatic speed control devices, Preprints of SICE'87, Hiroshima (1987) 1241-1244.
[6] M. Maeda, Y. Maeda and S. Murakami, Fuzzy drive control of an autonomous mobile robot, Fuzzy Sets and Systems 39 (1991) 195-204.
[7] T. Yamazaki and M. Sugeno, Self-organizing fuzzy controller, Trans. Soc. Instr. Control Engrs. 20(8) (1984) 720-726 (in Japanese).
[8] M. Maeda and S. Murakami, Self-tuning fuzzy logic controller, Trans. Soc. Instr. Control Engrs. 24(2) (1988) 191-197 (in Japanese).
[9] N. Baaklini, Automata learning control using fuzzy logic, Ph.D. Thesis, London University (1976).
[10] T.J. Procyk, A self-organizing controller for dynamic processes, Ph.D. Thesis, London University (1977).
[11] T. Yamazaki, An improved algorithm for a self-organizing controller, and its experimental analysis, Ph.D. Thesis, London University (1982).
[12] C.C. Lee, A self-learning rule-based controller employing approximate reasoning and neural net concepts, Internat. J. Intelligent Systems 6(1) (1991) 71-92.
[13] M. Maeda, T. Sato and S. Murakami, A design of the self-tuning fuzzy controller, Proc. Internat. Conf. on Fuzzy Logic and Neural Networks, Vol. 1 (Iizuka, Japan, July 1990) 393-396.
[14] S. Murakami and M. Maeda, Automobile speed control system using a fuzzy logic controller, in: M. Sugeno, Ed., Industrial Applications of Fuzzy Control (North-Holland, Amsterdam, 1985) 105-123.