
Symmetric Quaternionic Hopfield Neural Networks

Masaki Kobayashi

Mathematical Science Center, University of Yamanashi, Takeda 4-3-11, Kofu, Yamanashi 400-8511, Japan

Abstract

Quaternion algebra is an extension of the complex number system and has the property of non-commutativity. Quaternionic Hopfield neural networks (QHNNs) are extensions of complex-valued Hopfield neural networks (CHNNs) using quaternions, and several models have been proposed. Both the CHNNs and QHNNs have low noise tolerance due to rotational invariance. Recently, a novel CHNN model, the symmetric CHNN (SCHNN), was proposed to improve the noise tolerance of CHNNs. In the present work, this scheme is extended to the QHNNs. The proposed model is referred to as a symmetric QHNN (SQHNN). We show by computer simulations that the SQHNNs improve the noise tolerance.

Keywords: Hopfield neural network, quaternion, high-dimensional neural networks, noise tolerance

1. Introduction

High-dimensional neural networks have been applied to various fields [1, 2, 3, 4, 5]. A complex-valued neural network is one of the most successful models of high-dimensional neural networks. Complex-valued neural networks have been extended to several models using Clifford algebra. Quaternion algebra is a Clifford algebra and is a non-commutative number system. Nitta proposed the four-dimensional version of quaternionic feed-forward neural networks and showed by computer simulations that their learning speed is improved [6, 7, 8]. Kobayashi and Nakajima proposed twisted quaternionic neural networks and improved the learning speed [9]. Arena et al. [5] proposed the three-dimensional version of quaternionic feed-forward neural networks and applied them to control in robotics.
Other applications of quaternionic feed-forward neural networks, such as image compression and counting pedestrians, have also been studied [10, 11, 12, 13].

A Hopfield neural network (HNN) is a neural network model with symmetric mutual connections, and it always converges to a fixed point. An HNN is extended to a complex-valued HNN (CHNN) using complex numbers. In the CHNNs, the stability condition is that the mutual connection weights are conjugate. The CHNNs can process multilevel information, such as grayscale images [14, 15, 16, 17, 18]. Several extensions of CHNNs have also been proposed [19, 20, 21, 22, 23]. Quaternionic HNNs (QHNNs) are extensions of CHNNs. The QHNNs are HNN models utilizing quaternions, and several versions, such as the split, multistate, and hybrid QHNNs, have been proposed. The split QHNNs are the simplest models [24]. The multistate QHNNs are useful for the storage of color image data [25, 26, 27, 28]. The hybrid QHNNs improved the noise robustness by using the non-commutativity of quaternions [29]. In the QHNNs, the stability condition is that the mutual connection weights are conjugate.

Both the CHNNs and QHNNs have weak noise tolerance due to rotational invariance [20, 30, 31]. To improve the noise tolerance, a novel CHNN model, the symmetric CHNN (SCHNN), was proposed [32]. It improved the noise tolerance by resolving the rotational invariance. It has symmetric mutual connection weights and modifies the weighted sum inputs. Moreover, it always converges to a fixed point. In the present work, we extend this scheme to the QHNNs and propose symmetric QHNNs (SQHNNs). To do so, we have to take the non-commutativity of quaternions into consideration; the order of multiplication of the neuron state and the connection weight differs between the two directions of a mutual connection. We performed computer simulations and evaluated the noise tolerance of the QHNNs and SQHNNs.

2. Symmetric Complex-valued Hopfield Neural Networks

2.1. Complex-valued Hopfield Neural Networks

The CHNNs are briefly described before the SCHNNs [14]. In the CHNNs and SCHNNs, the neuron states and connection weights are represented by complex numbers. For simplicity, we employ the split activation function. For a complex number $x = x^0 + x^1 i$, where $i$ is the imaginary unit, the activation function $f(x)$ is defined as

$$f(x) = \frac{x^0}{|x^0|} + \frac{x^1}{|x^1|} i. \tag{1}$$

Therefore, the set of neuron states is $\{\pm 1 \pm i\}$, and a complex-valued neuron is a 4-state neuron. We denote the connection weight from neuron $b$ to neuron $a$ by $w_{ab}$. Let $z_a$ be the state of neuron $a$. Then, the weighted sum input $I_a$ to neuron $a$ is

$$I_a = \sum_{b \neq a} w_{ab} z_b. \tag{2}$$

The CHNNs require the stability condition $w_{ab} = \overline{w_{ba}}$, where $\overline{w}$ denotes the complex conjugate of $w$. Due to the stability condition, a CHNN converges to a fixed point.

We describe a primitive learning rule, the Hebbian learning rule. Let $N$ and $P$ be the numbers of neurons and training patterns, respectively. We denote the $p$th training pattern by $\mathbf{z}^p = (z_1^p, z_2^p, \cdots, z_N^p)$. The Hebbian learning rule for CHNNs gives the connection weight $w_{ab}$ by

$$w_{ab} = \sum_{p=1}^{P} z_a^p \overline{z_b^p}. \tag{3}$$

The obtained connection weights satisfy the stability condition. Given the $q$th training pattern to the CHNN, the weighted sum input $I_a^q$ to neuron $a$ is

$$I_a^q = \sum_{b \neq a} w_{ab} z_b^q \tag{4}$$
$$= 2(N-1) z_a^q + \sum_{b \neq a} \sum_{p \neq q} z_a^p \overline{z_b^p} z_b^q. \tag{5}$$

The first term helps recall the $q$th training pattern. The second summation interferes with the correct recall and is referred to as the crosstalk term.

In the CHNNs, the patterns $-\mathbf{z}^p$ and $\pm i \mathbf{z}^p$ are also fixed points. In fact, given the pattern $\alpha \mathbf{z}^q$ ($\alpha = -1, \pm i$) to the CHNN, the weighted sum input $\hat{I}_a^q$ to neuron $a$ is

$$\hat{I}_a^q = \sum_{b \neq a} w_{ab} (\alpha z_b^q) \tag{6}$$
$$= \alpha I_a^q. \tag{7}$$

Therefore, $\alpha \mathbf{z}^q$ is also a fixed point.
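To make the Hebbian recall and the rotational invariance of (6)-(7) concrete, the following is a minimal NumPy sketch of a CHNN with the split activation function (1) and the Hebbian rule (3). It is not the author's code; the function names (csign, hebbian_chnn, update_chnn) are illustrative assumptions, and a single synchronous evaluation is used only to test fixed points, whereas the HNN described here updates neurons asynchronously.

```python
# Minimal CHNN sketch (assumed names; not the paper's implementation).
import numpy as np

def csign(x):
    """Split activation (1): sign of the real part plus sign of the imaginary part times i."""
    return np.sign(x.real) + 1j * np.sign(x.imag)

def hebbian_chnn(Z):
    """Hebbian rule (3): w_ab = sum_p z_a^p * conj(z_b^p), with no self-connections."""
    W = Z.T @ Z.conj()
    np.fill_diagonal(W, 0)
    return W

def update_chnn(W, z):
    """Weighted sum (2) followed by the split activation."""
    return csign(W @ z)

rng = np.random.default_rng(0)
N, P = 500, 15
Z = csign(rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N)))  # 4-state patterns
W = hebbian_chnn(Z)

z = Z[0]
print(np.array_equal(update_chnn(W, z), z))            # stored pattern: fixed (with high probability)
print(np.array_equal(update_chnn(W, 1j * z), 1j * z))  # rotated pattern i*z: also fixed
```

For these sizes both checks should print True with overwhelming probability, illustrating that a rotated pattern such as $i\mathbf{z}^q$ is recalled just as stably as $\mathbf{z}^q$ itself.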

2.2. Symmetric Complex-valued Hopfield Neural Networks

We modify a CHNN into an SCHNN [32]. An SCHNN employs the same activation function as a CHNN. The weighted sum input is modified to

$$I_a = \sum_{b \neq a} w_{ab} \overline{z_b}. \tag{8}$$

The stability condition changes to $w_{ab} = w_{ba}$. The Hebbian learning rule for SCHNNs is given by

$$w_{ab} = \sum_{p=1}^{P} z_a^p z_b^p. \tag{9}$$

The obtained connection weights satisfy the stability condition. Given the $q$th training pattern to the SCHNN, the weighted sum input $I_a^q$ to neuron $a$ is

$$I_a^q = \sum_{b \neq a} w_{ab} \overline{z_b^q} \tag{10}$$
$$= 2(N-1) z_a^q + \sum_{b \neq a} \sum_{p \neq q} z_a^p z_b^p \overline{z_b^q}. \tag{11}$$

The first term is maintained, and only the crosstalk term changes. In the SCHNNs, the pattern $-\mathbf{z}^p$ is a fixed point, but $\pm i \mathbf{z}^p$ are not. Thus, the SCHNNs have higher noise tolerance than the CHNNs. In fact, given $\alpha \mathbf{z}^q$ ($\alpha = \pm i$) to the SCHNN, we calculate the weighted sum input $\hat{I}_a^q$ to neuron $a$:

$$\hat{I}_a^q = \sum_{b \neq a} w_{ab} \overline{(\alpha z_b^q)} \tag{12}$$
$$= \sum_{b \neq a} w_{ab} \left( -\alpha \overline{z_b^q} \right) \tag{13}$$
$$= -\alpha I_a^q. \tag{14}$$

Therefore, $\pm i \mathbf{z}^q$ is not stable.
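The change from (2)-(3) to (8)-(9) is small in code. The sketch below (again with assumed names and a single synchronous evaluation; not the author's implementation) builds the symmetric weights (9) and applies the conjugated weighted sum (8); the reversed pattern remains a fixed point while $i\mathbf{z}$ does not.

```python
# Minimal SCHNN sketch (assumed names; not the paper's implementation).
import numpy as np

def csign(x):
    """Split activation (1)."""
    return np.sign(x.real) + 1j * np.sign(x.imag)

rng = np.random.default_rng(1)
N, P = 500, 15
Z = csign(rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N)))

# Hebbian rule (9): w_ab = sum_p z_a^p * z_b^p, hence w_ab = w_ba (symmetric weights).
W = Z.T @ Z
np.fill_diagonal(W, 0)

def update_schnn(W, z):
    """Weighted sum (8): I_a = sum_{b != a} w_ab * conj(z_b), then the split activation."""
    return csign(W @ z.conj())

z = Z[0]
print(np.array_equal(update_schnn(W, z), z))            # training pattern: fixed
print(np.array_equal(update_schnn(W, -z), -z))          # reversed pattern: still fixed
print(np.array_equal(update_schnn(W, 1j * z), 1j * z))  # i*z: no longer fixed (False)
```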

3. Quaternions

Quaternion algebra is an extension of the complex number field. A quaternion is composed of one real part and three imaginary parts.

The three imaginary units are represented by $i$, $j$, and $k$, and satisfy the following relations:

$$i^2 = j^2 = k^2 = -1, \tag{15}$$
$$ij = -ji = k, \tag{16}$$
$$jk = -kj = i, \tag{17}$$
$$ki = -ik = j. \tag{18}$$

From these rules, the multiplication is non-commutative. A quaternion is represented, using real numbers $q^0$, $q^1$, $q^2$, and $q^3$, as

$$q = q^0 + q^1 i + q^2 j + q^3 k. \tag{19}$$

For a quaternion $q = q^0 + q^1 i + q^2 j + q^3 k$, the conjugate $\overline{q}$ of $q$ is defined as

$$\overline{q} = q^0 - q^1 i - q^2 j - q^3 k. \tag{20}$$

The real part $q^0$ of $q$ is denoted by $\mathrm{Re}(q)$. Obviously, $\mathrm{Re}(q) = \mathrm{Re}(\overline{q})$ holds. For two quaternions $q_1 = q_1^0 + q_1^1 i + q_1^2 j + q_1^3 k$ and $q_2 = q_2^0 + q_2^1 i + q_2^2 j + q_2^3 k$, the addition and the multiplication are defined as follows:

$$q_1 + q_2 = \left( q_1^0 + q_2^0 \right) + \left( q_1^1 + q_2^1 \right) i + \left( q_1^2 + q_2^2 \right) j + \left( q_1^3 + q_2^3 \right) k, \tag{21}$$

$$q_1 q_2 = \left( q_1^0 q_2^0 - q_1^1 q_2^1 - q_1^2 q_2^2 - q_1^3 q_2^3 \right) + \left( q_1^0 q_2^1 + q_1^1 q_2^0 + q_1^2 q_2^3 - q_1^3 q_2^2 \right) i + \left( q_1^0 q_2^2 - q_1^1 q_2^3 + q_1^2 q_2^0 + q_1^3 q_2^1 \right) j + \left( q_1^0 q_2^3 + q_1^1 q_2^2 - q_1^2 q_2^1 + q_1^3 q_2^0 \right) k. \tag{22}$$

The following equalities also hold:

$$\overline{q_1 q_2} = \overline{q_2}\,\overline{q_1}, \tag{23}$$
$$q \overline{q} = \overline{q} q = \left( q^0 \right)^2 + \left( q^1 \right)^2 + \left( q^2 \right)^2 + \left( q^3 \right)^2. \tag{24}$$

Moreover, quaternions satisfy the distributive and associative laws.

4. Symmetric Quaternionic Hopfield Neural Networks

4.1. Quaternionic Hopfield Neural Networks

We briefly describe the QHNNs prior to the SQHNNs. In the QHNNs, the neuron states and connection weights are represented by quaternions.

First, we define the activation function. For a quaternion $q = q^0 + q^1 i + q^2 j + q^3 k$, the activation function is defined as

$$f(q) = \frac{q^0}{|q^0|} + \frac{q^1}{|q^1|} i + \frac{q^2}{|q^2|} j + \frac{q^3}{|q^3|} k. \tag{25}$$

Therefore, the set of neuron states is $\{ \pm 1 \pm i \pm j \pm k \}$, and a quaternionic neuron is a 16-state neuron. This activation function is referred to as the split activation function. Other activation functions have also been proposed [26, 30].

Let $z_a$ and $w_{ab}$ be the state of neuron $a$ and the connection weight from neuron $b$ to neuron $a$, respectively. The weighted sum input is defined in a similar manner to (2). The Hebbian learning rule is also given in the same manner as (3) and satisfies the stability condition $w_{ab} = \overline{w_{ba}}$. In fact, the following equality holds:

$$\overline{w_{ab}} = \sum_{p=1}^{P} z_b^p \overline{z_a^p} \tag{26}$$
$$= w_{ba}. \tag{27}$$

Given the $q$th training pattern to the QHNN, the weighted sum input $I_a^q$ to neuron $a$ is

$$I_a^q = \sum_{b \neq a} w_{ab} z_b^q \tag{28}$$
$$= 4(N-1) z_a^q + \sum_{b \neq a} \sum_{p \neq q} z_a^p \overline{z_b^p} z_b^q. \tag{29}$$

In the QHNNs, the patterns $\mathbf{z}^p \alpha$ ($\alpha = -1, \pm i, \pm j, \pm k$) are also fixed points [30]. In fact, given $\mathbf{z}^q \alpha$ to the QHNN, the weighted sum input $\hat{I}_a^q$ to neuron $a$ is as follows:

$$\hat{I}_a^q = \sum_{b \neq a} w_{ab} \left( z_b^q \alpha \right) \tag{30}$$
$$= I_a^q \alpha. \tag{31}$$

Therefore, $\mathbf{z}^q \alpha$ is a fixed point. Remark that $\alpha \mathbf{z}^q$ ($\alpha = \pm i, \pm j, \pm k$) are not fixed points.

We can modify the weighted sum input to neuron $a$ and the Hebbian learning rule, utilizing the non-commutativity of quaternions, as follows:

$$I_a = \sum_{b \neq a} z_b w_{ab}, \tag{32}$$
$$w_{ab} = \sum_{p=1}^{P} \overline{z_b^p} z_a^p. \tag{33}$$

A QHNN with the above weighted sum input is another QHNN model. Then, $\alpha \mathbf{z}^p$ is a fixed point, though $\mathbf{z}^p \alpha$ is not.
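As an illustration of the QHNN described above, the following sketch (assumed names; a simplified fixed-point check rather than the paper's simulation code) stores random 16-state patterns with the Hebbian rule and checks the right rotational invariance (30)-(31): a stored pattern multiplied by $i$ on the right remains a fixed point.

```python
# Minimal QHNN sketch (assumed names). Quaternions are (..., 4) arrays (q0, q1, q2, q3).
import numpy as np

def qmul(p, q):
    """Vectorized Hamilton product (22) on (..., 4) arrays."""
    p0, p1, p2, p3 = np.moveaxis(p, -1, 0)
    q0, q1, q2, q3 = np.moveaxis(q, -1, 0)
    return np.stack([p0*q0 - p1*q1 - p2*q2 - p3*q3,
                     p0*q1 + p1*q0 + p2*q3 - p3*q2,
                     p0*q2 - p1*q3 + p2*q0 + p3*q1,
                     p0*q3 + p1*q2 - p2*q1 + p3*q0], axis=-1)

def qconj(q):
    """Conjugate (20) applied componentwise."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def qsign(q):
    """Split activation (25): sign of each of the four components."""
    return np.sign(q)

rng = np.random.default_rng(0)
N, P = 300, 10
Z = np.sign(rng.standard_normal((P, N, 4)))          # 16-state quaternionic patterns

# Hebbian rule: w_ab = sum_p z_a^p * conj(z_b^p); W has shape (N, N, 4).
W = qmul(Z[:, :, None, :], qconj(Z[:, None, :, :])).sum(axis=0)
W[np.arange(N), np.arange(N)] = 0.0                  # no self-connections

def update_qhnn(W, z):
    """Weighted sum I_a = sum_b w_ab * z_b, then the split activation."""
    return qsign(qmul(W, z[None, :, :]).sum(axis=1))

z = Z[0]
alpha = np.array([0.0, 1.0, 0.0, 0.0])               # the unit quaternion i
print(np.array_equal(update_qhnn(W, z), z))                            # stored pattern: fixed
print(np.array_equal(update_qhnn(W, qmul(z, alpha)), qmul(z, alpha)))  # z*i: also fixed
```

For these sizes both checks should print True with overwhelming probability, matching the rotational invariance in (30)-(31).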

4.2. Symmetric Quaternionic Hopfield Neural Networks

We extend the scheme of the SCHNNs to the QHNNs. We have to take the non-commutativity into consideration. We modify the weighted sum input, the connection weights, and the Hebbian learning rule. The connection weights will be required to be symmetric. The scheme of the SCHNNs originated in the Hebbian learning rule. First, we modify the Hebbian learning rule. If we simply imitated the SCHNNs, the following rule would be obtained:

$$w_{ab} = \sum_{p} z_a^p z_b^p. \tag{34}$$

From the non-commutativity of quaternions, the connection weights (34) do not satisfy $w_{ab} = w_{ba}$. We define the Hebbian learning rule as

$$w_{ab} = \begin{cases} \displaystyle \sum_{p} z_a^p z_b^p & (a > b), \\[1ex] \displaystyle \sum_{p} z_b^p z_a^p & (a < b). \end{cases} \tag{35}$$

The connection weights (35) satisfy $w_{ab} = w_{ba}$. Based on the Hebbian learning rule, we require that the connection weights satisfy $w_{ab} = w_{ba}$. Next, we modify the weighted sum input $I_a$ to neuron $a$ to

$$I_a = \sum_{a > b} w_{ab} \overline{z_b} + \sum_{a < b} \overline{z_b} w_{ab}. \tag{36}$$
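A direct, loop-based sketch of the rules (35) and (36) is given below (assumed names; small sizes and plain Python loops are used for clarity, and this is not the paper's simulation code). It checks that the learned weights are symmetric and that a training pattern is recalled, anticipating the derivation that follows.

```python
# Minimal SQHNN sketch (assumed names; not the paper's implementation).
import numpy as np

def qmul(p, q):
    """Hamilton product (22) on length-4 arrays (q0, q1, q2, q3)."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([p0*q0 - p1*q1 - p2*q2 - p3*q3,
                     p0*q1 + p1*q0 + p2*q3 - p3*q2,
                     p0*q2 - p1*q3 + p2*q0 + p3*q1,
                     p0*q3 + p1*q2 - p2*q1 + p3*q0])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

rng = np.random.default_rng(0)
N, P = 100, 3
Z = np.sign(rng.standard_normal((P, N, 4)))       # training patterns of 16-state neurons

# Hebbian rule (35): the order of the two factors depends on the index order.
W = np.zeros((N, N, 4))
for a in range(N):
    for b in range(N):
        if a > b:
            W[a, b] = sum(qmul(Z[p, a], Z[p, b]) for p in range(P))
        elif a < b:
            W[a, b] = sum(qmul(Z[p, b], Z[p, a]) for p in range(P))

print(np.allclose(W, W.transpose(1, 0, 2)))       # True: w_ab = w_ba

def update_sqhnn(W, z):
    """Weighted sum (36): w_ab*conj(z_b) for a > b, conj(z_b)*w_ab for a < b, then split activation."""
    I = np.zeros((len(z), 4))
    for a in range(len(z)):
        for b in range(len(z)):
            if a > b:
                I[a] += qmul(W[a, b], qconj(z[b]))
            elif a < b:
                I[a] += qmul(qconj(z[b]), W[a, b])
    return np.sign(I)

z = Z[0]
print(np.array_equal(update_sqhnn(W, z), z))      # a training pattern should be a fixed point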

Thus, we achieved an extension of the scheme of the SCHNNs to the QHNNs, and the new QHNN model is referred to as a symmetric QHNN (SQHNN). We confirm that an SQHNN works well. Given the $q$th training pattern to the SQHNN, the weighted sum input to neuron $a$ is

$$I_a^q = \sum_{a > b} w_{ab} \overline{z_b^q} + \sum_{a < b} \overline{z_b^q} w_{ab} \tag{37}$$
$$= 4(N-1) z_a^q + \sum_{p \neq q} \left( \sum_{a > b} z_a^p z_b^p \overline{z_b^q} + \sum_{a < b} \overline{z_b^q} z_b^p z_a^p \right). \tag{38}$$

We obtained a similar result to (5). Finally, we investigate the stability of the SQHNNs. Suppose $a > b$. Then

$$w_{ab} \overline{z_b} = \left( w_{ab}^0 z_b^0 + w_{ab}^1 z_b^1 + w_{ab}^2 z_b^2 + w_{ab}^3 z_b^3 \right) + \left( w_{ab}^1 z_b^0 - w_{ab}^0 z_b^1 + w_{ab}^3 z_b^2 - w_{ab}^2 z_b^3 \right) i + \left( w_{ab}^2 z_b^0 - w_{ab}^3 z_b^1 - w_{ab}^0 z_b^2 + w_{ab}^1 z_b^3 \right) j + \left( w_{ab}^3 z_b^0 + w_{ab}^2 z_b^1 - w_{ab}^1 z_b^2 - w_{ab}^0 z_b^3 \right) k, \tag{39}$$

$$\overline{z_a} w_{ab} = \left( w_{ab}^0 z_a^0 + w_{ab}^1 z_a^1 + w_{ab}^2 z_a^2 + w_{ab}^3 z_a^3 \right) + \left( w_{ab}^1 z_a^0 - w_{ab}^0 z_a^1 - w_{ab}^3 z_a^2 + w_{ab}^2 z_a^3 \right) i + \left( w_{ab}^2 z_a^0 + w_{ab}^3 z_a^1 - w_{ab}^0 z_a^2 - w_{ab}^1 z_a^3 \right) j + \left( w_{ab}^3 z_a^0 - w_{ab}^2 z_a^1 + w_{ab}^1 z_a^2 - w_{ab}^0 z_a^3 \right) k. \tag{40}$$

We regard the four components of a quaternionic neuron as four independent real-valued neurons. Figure 1 illustrates the connection weights when a quaternionic neuron is regarded as four real-valued neurons. By (39) and (40), the connection weights between the components are symmetric. Since an SQHNN can thus be regarded as a real-valued HNN with symmetric connections, it converges to a fixed point.
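The symmetry claimed from (39) and (40) can be checked numerically. In the following sketch (assumed helper names), the 4x4 real matrix mapping the components of $z_b$ to those of $w_{ab}\overline{z_b}$ is compared with the matrix mapping the components of $z_a$ to those of $\overline{z_a} w_{ab}$; the two are transposes of each other, so the equivalent 4N-dimensional real-valued HNN has a symmetric weight matrix.

```python
# Numeric check of the symmetry implied by (39)-(40) (assumed helper names).
import numpy as np

def qmul(p, q):
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([p0*q0 - p1*q1 - p2*q2 - p3*q3,
                     p0*q1 + p1*q0 + p2*q3 - p3*q2,
                     p0*q2 - p1*q3 + p2*q0 + p3*q1,
                     p0*q3 + p1*q2 - p2*q1 + p3*q0])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

rng = np.random.default_rng(0)
w = rng.standard_normal(4)                     # an arbitrary weight w_ab (a > b)
E = np.eye(4)                                  # the basis quaternions 1, i, j, k

# Column m of A: the image of the basis quaternion e_m under z_b -> w_ab * conj(z_b), as in (39).
A = np.stack([qmul(w, qconj(E[m])) for m in range(4)], axis=1)
# Column m of B: the image of e_m under z_a -> conj(z_a) * w_ab, as in (40).
B = np.stack([qmul(qconj(E[m]), w) for m in range(4)], axis=1)

print(np.allclose(A, B.T))                     # True: the real-valued connections are symmetric
```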

5. Computer Simulations

We performed computer simulations for the storage capacity and the noise tolerance. In our computer simulations, the number of neurons was N = 500. First, we describe our computer simulations for the storage capacities. The procedure of a trial was as follows; a sketch of this procedure is given after the list.

1. P training patterns were randomly generated.
2. All the generated training patterns were learned by the Hebbian learning rule.
3. If all the training patterns were fixed points, the trial was regarded as successful; otherwise, it was regarded as failed.
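The sketch below is generic over the model: the `learn` and `update` callables are placeholders (assumed names, not the author's code) for whichever Hebbian rule and update step, QHNN or SQHNN, is being tested, for example those in the earlier sketches.

```python
# Storage-capacity trial sketch (assumed names; `learn` and `update` are model-specific callables).
import numpy as np

def storage_trial(P, N, learn, update, rng):
    """One trial: generate P random patterns, learn them, and test whether all are fixed points."""
    Z = np.sign(rng.standard_normal((P, N, 4)))   # random 16-state quaternionic patterns
    W = learn(Z)                                  # Hebbian learning rule
    return all(np.array_equal(update(W, z), z) for z in Z)

def success_rate(P, N, learn, update, trials=100, seed=0):
    """Fraction of successful trials, as plotted in Figure 2."""
    rng = np.random.default_rng(seed)
    return np.mean([storage_trial(P, N, learn, update, rng) for _ in range(trials)])

# Example sweep over P = 5, 10, ..., 40 with N = 500 (placeholders my_hebbian / my_update):
# for P in range(5, 45, 5):
#     print(P, success_rate(P, 500, learn=my_hebbian, update=my_update))
```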

Figure 1: Network architecture of an SQHNN.

P varied from 5 to 40 in steps of 5. For each P, 100 trials were carried out. Figure 2 shows the results of the computer simulations. The horizontal and vertical axes indicate the number P of training patterns and the success rate, respectively. The QHNNs and SQHNNs had almost the same storage capacities. These results coincide with the results for the CHNNs and SCHNNs.

Next, we describe our computer simulations for the noise tolerance. They were carried out for P = 15, 25, and 35. For each P, 100 training pattern sets were randomly generated, and each training pattern set was learned by the Hebbian learning rule. Noise was added to a training pattern by replacing each neuron state with a randomly selected state at a rate r, referred to as the noise rate.

Figure 2: Storage capacities.

The procedure of a trial was as follows (a sketch is given after the list):

1. A training pattern was randomly selected from the training pattern set and given to the network with noise.
2. The neurons were updated in order, from neuron 1 to neuron N, repeatedly until the network converged.
3. If the final state was identical to the original training pattern, the trial was regarded as successful; otherwise, it was regarded as failed.
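The following sketch follows this procedure (assumed names; `update_neuron(W, z, a)` is a placeholder for the model-specific update of a single neuron, and convergence is detected when a full sweep changes nothing).

```python
# Noise-tolerance trial sketch (assumed names; not the author's code).
import numpy as np

def add_noise(z, r, rng):
    """Replace each neuron state with a random 16-state quaternion at rate r."""
    noisy = z.copy()
    mask = rng.random(len(z)) < r
    noisy[mask] = np.sign(rng.standard_normal((int(mask.sum()), 4)))
    return noisy

def noise_trial(W, Z, r, update_neuron, rng, max_sweeps=100):
    """One trial: noisy version of a stored pattern, asynchronous sweeps until convergence."""
    target = Z[rng.integers(len(Z))]          # 1. pick a training pattern and add noise
    z = add_noise(target, r, rng)
    for _ in range(max_sweeps):               # 2. update neurons 1..N in order until convergence
        previous = z.copy()
        for a in range(len(z)):
            z[a] = update_neuron(W, z, a)
        if np.array_equal(z, previous):
            break
    return np.array_equal(z, target)          # 3. success if the original pattern is restored
```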

The noise rate r varied from 0 to 1 in steps of 0.05. For each P and r, 100 trials were carried out for each training pattern set. Therefore, the total number of trials was 10,000 for each P and r. Figure 3 shows the results of our computer simulations. The horizontal and vertical axes indicate the noise rate and the success rate, respectively. The success rates did not begin to fall until the noise rates reached 0.8 and 0.9. Similar results were observed for the SCHNNs and the hybrid QHNNs, which resolved the rotational invariance [29, 32]. In general, the storage capacity of the Hebbian learning rule is very small.

Figure 3: Noise tolerance (P = 15, 25, and 35).

Our computer simulations were performed under small training pattern sets. For small training pattern sets, the numbers of pseudomemories, such as mixture patterns, would be small. This could be one reason why the success rates did not begin to fall until the noise rates reached 0.8 and 0.9.

6. Discussion

The QHNNs and SQHNNs had almost the same storage capacities. This result coincides with the results for the CHNNs and SCHNNs. As for the noise tolerance, we performed computer simulations in the cases of P = 15, 25, and 35. In the case of P = 15, all the training patterns were successfully stored. In the case of P = 25, very few training patterns failed to be stored, and in the case of P = 35, a few training patterns failed to be stored. In all the cases, the SQHNNs outperformed the QHNNs. Table 1 shows the success rates at r = 1, i.e., for random initial states. In the case of the SQHNNs with P = 15, the success rate is approximately 1/(2P). Since there are 2P fixed points, namely the training and reversed patterns, this implies that most of the trials converged to the training or reversed patterns. As the number of training patterns increased, the trials more often converged to patterns other than the training and reversed patterns. In the case of the QHNNs, most of the trials did not converge to the training patterns.

 P    QHNN     SQHNN    1/(2P)
 15   0.0003   0.0297   0.0333
 25   0.0000   0.0109   0.0200
 35   0.0000   0.0030   0.0143

Table 1: Success rates for random initial states.

7. Conclusion

An SCHNN is a novel modification of a CHNN and greatly improves the noise tolerance. We extended the scheme of the SCHNNs to the QHNNs and proposed the SQHNNs. We performed computer simulations to compare the storage capacities and noise tolerance of the QHNNs and SQHNNs. They had almost the same storage capacities. These results coincide with the results for the CHNNs and the SCHNNs. As for the noise tolerance, the SQHNNs outperformed the QHNNs, as we expected.

At present, there are two difficulties for applications. One is the Hebbian learning rule. The Hebbian learning rule is too primitive, and its storage capacity is very small. We plan to study other learning algorithms, such as gradient descent learning [17, 33]. The other is the activation function. We employed the split activation function, whose states are limited to 16. The QHNNs with the multistate activation function have been applied to the storage of color images. We plan to study the SQHNNs with the multistate activation function and apply them to the storage of color images.

References

[1] A.Hirose, Complex-valued neural networks: theories and applications, Series on Innovative Intelligence 5, World Scientific (2003).

[2] A.Hirose, Complex-valued neural networks, second edition, Series on Studies in Computational Intelligence, Springer (2012).

[3] A.Hirose, Complex-valued neural networks: advances and applications, The IEEE Press Series on Computational Intelligence, Wiley-IEEE Press (2013).

[4] T.Nitta, Complex-valued neural networks: utilizing high-dimensional parameters, Information Science Publishing (2009).

[5] P.Arena, L.Fortuna, G.Muscato, M.G.Xibilia, Neural networks in multidimensional domains: fundamentals and new trends in modelling and control, Lecture Notes in Control and Information Sciences 234, Springer (1998).

[6] T.Nitta, A quaternary version of the back-propagation algorithm, Proceedings of IEEE International Conference on Neural Networks 5 (1995) 2753-2756.

[7] T.Nitta, An extension of the back-propagation algorithm to quaternions, Proceedings of International Conference on Neural Information Processing 1 (1996) 247-250.

[8] M.Kobayashi, Uniqueness theorem for quaternionic neural networks, Signal Processing http://dx.doi.org/10.1016/j.sigpro.2016.07.021 (2016).

[9] M.Kobayashi, A.Nakajima, Twisted quaternary neural networks, IEEJ Transactions on Electrical and Electronic Engineering 7(4) (2012) 397-401.

[10] N.Muramoto, T.Minemoto, T.Isokawa, H.Nishimura, N.Kamiura, N.Matsui, A scheme for counting pedestrians by quaternionic multilayer perceptron, Proceedings of the International Symposium on Advanced Intelligent Systems (2013) F5C-3.

[11] N.Matsui, T.Isokawa, H.Kusamichi, F.Peper, H.Nishimura, Quaternion neural network with geometrical operators, Journal of Intelligent & Fuzzy Systems 15(3-4) (2004) 149-164.

[12] H.Kusamichi, T.Isokawa, N.Matsui, Y.Ogawa, K.Maeda, A new scheme for color night vision by quaternion neural network, Proceedings of the 2nd International Conference on Autonomous Robots and Agents (2004) 101-106.

[13] T.Isokawa, T.Kusakabe, N.Matsui, F.Peper, Quaternion neural network and its application, Lecture Notes in Artificial Intelligence 2774 (2003) 318-324.

[14] S.Jankowski, A.Lozowski, J.M.Zurada, Complex-valued multistate neural associative memory, IEEE Transactions on Neural Networks 7(6) (1996) 1491-1496.

[15] G.Tanaka, K.Aihara, Complex-valued multistate associative memory with nonlinear multilevel functions for gray-level image reconstruction, IEEE Transactions on Neural Networks 20(9) (2009) 1463-1473.

[16] M.Kobayashi, Attractors accompanied with a training pattern of multivalued Hopfield neural networks, IEEJ Transactions on Electrical and Electronic Engineering 10(2) (2015) 195-200.

[17] M.Kobayashi, Gradient descent learning rule for complex-valued associative memories with large constant terms, IEEJ Transactions on Electrical and Electronic Engineering 11(3) (2016) 357-363.

[18] M.K.Muezzinoglu, C.Guzelis, J.M.Zurada, A new design method for the complex-valued multistate Hopfield associative memory, IEEE Transactions on Neural Networks 14(4) (2003) 891-899.

[19] D.L.Lee, W.J.Wang, A multivalued bidirectional associative memory operating on a complex domain, Neural Networks 11(9) (1998) 1623-1635.

[20] Y.Suzuki, M.Kobayashi, Complex-valued bipartite auto-associative memory, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E97-A(8) (2014) 1680-1687.

[21] M.Kobayashi, H.Yamazaki, Complex-valued multidirectional associative memory, Electrical Engineering in Japan 159(1) (2007) 39-45.

[22] M.Kobayashi, Hyperbolic Hopfield neural networks, IEEE Transactions on Neural Networks and Learning Systems 24(2) (2013) 335-341.

[23] M.Kobayashi, Global hyperbolic Hopfield neural networks, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E99-A(12) (2016) 2511-2516.

[24] T.Isokawa, H.Nishimura, N.Kamiura, N.Matsui, Associative memory in quaternionic Hopfield neural network, International Journal of Neural Systems 18(2) (2008) 135-145.

[25] T.Isokawa, H.Nishimura, A.Saitoh, N.Kamiura, N.Matsui, On the scheme of quaternionic multistate Hopfield neural network, Proceedings of Joint 4th International Conference on Soft Computing and Intelligent Systems and 9th International Symposium on Advanced Intelligent Systems (2008) 809-813.

[26] T.Minemoto, T.Isokawa, H.Nishimura, N.Matsui, Extended projection rule for quaternionic multistate Hopfield neural network, Proceedings of International Symposium on Artificial Life and Robotics (2015) 418-423.

[27] T.Minemoto, T.Isokawa, H.Nishimura, N.Matsui, Quaternionic multistate Hopfield neural network with extended projection rule, Artificial Life and Robotics 21(1) (2016) 106-111.

[28] T.Minemoto, T.Isokawa, H.Nishimura, N.Matsui, Feed forward neural network with random quaternionic neurons, Signal Processing http://dx.doi.org/10.1016/j.sigpro.2016.11.008 (2016).

[29] M.Kobayashi, Hybrid quaternionic Hopfield neural network, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E98-A(7) (2015) 1512-1518.

[30] M.Kobayashi, Rotational invariance of quaternionic Hopfield neural networks, IEEJ Transactions on Electrical and Electronic Engineering 11(4) (2016) 516-520.

[31] M.Kobayashi, Fixed points of split quaternionic Hopfield neural networks, Signal Processing http://dx.doi.org/10.1016/j.sigpro.2016.11.020 (2016).

[32] M.Kobayashi, Symmetric complex-valued Hopfield neural networks, IEEE Transactions on Neural Networks and Learning Systems, to be published.

[33] M.Kobayashi, H.Yamada, M.Kitahara, Noise robust gradient descent learning for complex-valued associative memory, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E94-A(8) (2011) 1756-1759.

Masaki Kobayashi is a professor at the University of Yamanashi. He received the B.S., M.S., and D.S. degrees in mathematics from Nagoya University in 1989, 1991, and 1996, respectively. He became a research associate and an associate professor at the University of Yamanashi in 1993 and 2006, respectively. He has been a professor since 2014.
