Design and investigation of a chaotic neural network architecture for cryptographic applications


A. Ruhan Bevi, Sriharini Tumu, N. Varsha Prasad
Department of ECE, SRM Institute of Science and Technology, Kattankulathur 603203, India


Article history: Received 30 August 2017 Revised 13 September 2018 Accepted 13 September 2018

Keywords: Chaotic neural network Memristor Nonlinear equations Cubic map 2D Logistic map Encryption

Abstract: Artificial neural networks are an integral part of emerging technologies, and ongoing research has shown that they can be applied to a variety of applications. This paper proposes a new cryptographic algorithm using chaotic neural networks, whose function is enhanced by construction with polynomials that exhibit chaos, namely, nonlinear Hermite and Chebyshev polynomials. These polynomials incorporate a memristor conductance, which is used as an activation function in the chaotic neural networks. Further, a function of the weights obtained from the chaotic neural networks is used to generate the initial values that are used in the cryptographic process. The encryption algorithm employed here is inspired by the Lai–Massey block cipher with cubic and two-dimensional logistic maps, and the evaluation of these chaotic equations is performed using correlation values. The correlation values between the cipher and plain text are also examined to determine the undecipherability of the message to be sent on a public channel.

1. Introduction

There is a growing need for the secure transmission of information to ensure confidentiality between the sender and recipients. Cryptography deals with the study of developing and analyzing processes that improve the level of security and protect information from reaching the hands of adversaries. The construction of a chaotic neural network (CNN) with the help of a memristor conductance equation to enhance the performance of the network has been inspired by Wang et al. [1].

A memristor is a nonlinear, passive, two-terminal device that exhibits nonvolatile properties, as its electrical resistance depends on the previous current values. The physical model of the memristor was introduced by a team at HP laboratories, 37 years after the theoretical proposal was made by Chua in 1971 [2,3]. This model consists of two layers with platinum contacts and a thin film of TiO2 placed between them. One of the layers acts as a semiconductor due to the doping of oxygen vacancies, and the other exhibits insulating properties. The width of the doped region is determined by the electrical charge passing through the memristor. An external excitation causes a drift in the oxygen vacancies, which shifts the boundary between the regions in the same direction. Memristors have been used in a variety of applications since the physical model came into existence, and some of their potential applications include simulations that can recall patterns in a manner analogous to the human brain.

The use of memristor conductance and chaotic polynomials for synaptic weight updation contributes to faster convergence and constitutes an integral part of the system. The use of memristors for data encryption has interested scientists around the world. Du et al. [4] demonstrated the use of the nonlinear resistance change in BiFeO3 memristors for the generation of higher harmonics to develop a novel encoding system. Further, Thomas [5] explained the use of memristors in neural networks due to their unique 'memory' property, which can be used to imitate the synapse in biological neural networks. The nonvolatile nature of memristors can also be exploited to develop physical unclonable functions (PUFs), which provide higher security by preventing the extraction of secret keys [6]. Shi et al. [7] proposed the use of memristors in an artificial neural network (ANN) architecture for the synaptic weight updation process, using Hermite polynomials as the activation functions.

In this work, a neural network is constructed using the chaotic properties of nonlinear equations, namely the Hermite and Chebyshev polynomials, together with the theoretical equations of the fourth circuit element, the memristor, which is used to catalyze the process of convergence. The proposed encryption and decryption process is a quasi-Feistel cipher. The algorithm makes use of chaotic maps that produce a sequence with which the plaintext is encrypted. The use of a 1D map (cubic) as well as a 2D map (two-dimensional logistic) is explained through this process. The faster convergence rate reduces the overall time taken to complete the process. The weights, which are obtained from the neural network once it converges to the output with zero error, can neither be computed manually nor predicted without knowing the architecture of the CNN and the initial value, thereby making the process secure.

This paper is organized as follows: In Section 2, the overall architecture of the proposed CNN, with the key generation scheme targeted for cryptographic applications, is explained. Sections 3, 4, and 5 explain the developmental stages involved in the encryption process using the CNN. The results and discussion on the performance of the proposed system are addressed in Section 6, with the conclusions detailed in Section 7.

2. Architecture of the proposed CNN

The process flow of the CNN architecture for cryptographic applications is shown in Fig. 1. The work is organized in three stages that together constitute the entire process, from the processing of the key to the generation of the ciphertext. The input consists of the plaintext and the key value that has to be transmitted across the secure communication channel. The input key A is of any value within the boundary (0,1), and the plaintext P is the sequence of characters that has to be encrypted. The architecture of the proposed network targeted for encryption comprises three stages, as shown in Fig. 1.

• Stage I: Learning of CNN
• Stage II: Chaotic series generation
• Stage III: Cryptographic process

Stage I of the CNN is a learning stage wherein the memristive conductance equation aids in the synaptic weight updation. It is made up of two functional units, namely, the memristive conductance equation for synaptic weight updation and the polynomials as the complement simulator of the error function integral.
The derivation of the memristor conductance and the usage of polynomials capable of exhibiting polynomial chaos form the key function of stage I. The generated weights are recorded and, in stage II, used as chaotic initial values in the series generator containing the chaotic equations. The generated series are used as components in the encryption algorithm, which is part of stage III.

3. Stage I: Learning of CNN

Stage I is the learning stage of the proposed CNN. The input key A is of any value within the boundary (0,1), and the plaintext P is the sequence of characters for the input. The integral of the error in each epoch is an integral of the chaotic polynomial function of the error, which in turn is a function of the input variable x. This integral of the error is fed back into the weight updation (Δw_i) equation, by which the weights are updated. Fig. 2 depicts the network made up of three layers, namely, an input layer x having a single neuron, a hidden layer (h_1, h_2, ..., h_10) having 10 neurons, and an output layer y having one neuron. The 10 weights between the hidden and output layers are represented as w_i. The hidden layer has 10 neurons because the 10 keys generated from this architecture, one per weight, are associated with the 10 iterations of the encryption process.

The equation corresponding to the change in weight Δw is derived in the following section, wherein the calculated change in weight is added and supplied to the input for the next epoch. The weights between the input and hidden layers are 1, whereas the weights between the hidden and output layers are calculated from the chaotic polynomials under consideration. Once the 10 weights from the input to the hidden layer and the 10 weights from the hidden to the output layer are established, the network converges to the output and is ready to be used in the encryption process.

Memristor conductance. The memristor conductance equation for synaptic weight updation is derived from the relationship between the voltage v(t) and current i(t), represented by the following equation.

v(t) = \left[ R_{on}\,\frac{w(t)}{D} + R_{off}\left(1 - \frac{w(t)}{D}\right) \right] i(t)    (1)


Fig. 1. Complete architecture of the chaotic cryptosystem.

Fig. 2. Neuron structure.


Fig. 3. Graph of current vs voltage across a memristor.

where D is the thickness of the TiO2 layer that is sandwiched between the two outer layers, w(t) represents the thickness of the doped area at time t, and R_on and R_off are the low and high resistances of the device. The memristance M(t) is obtained as the ratio of v(t) to i(t) as a function of time. The relation between the current and voltage of a memristor is shown in Fig. 3. The graph shows that the memristor is suitable for use in chaotic applications due to its unique nonlinear dependence of voltage on current. Analyzing Eq. (1), it is observed that the memristance M(t) = R_on when w = D, and M(t) = R_off when w = 0. With x = w/D ∈ [0,1], the memristance can be written as:

M(t) = -\Delta R\,x(t) + R_{off}    (2)

When t = 0,

M(0) = -\Delta R\,x_0 + R_{off}    (3)

The total change in resistance, ΔR, is given by

\Delta R = R_{off} - R_{on}    (4)

The resistance of the doped area, the current passing through the memristor, the average mobility, and the thickness of the TiO2 layer determine the rate at which the boundary between the doped and undoped regions moves, which is given by

\frac{dx}{dt} = k\,i(t)\,f(x)    (5)

where k = \mu_v R_{on}/D^2, and \mu_v refers to the average mobility, which is approximately 10^{-14} m^2 s^{-1} V^{-1}. The Joglekar window function is applied to model the dopant drift, producing chaos when small voltages are capable of producing large electric fields. This drift is given by

f(x) = 1 - \left(\frac{2w}{D} - 1\right)^{2p}    (6)

The dependence of the variables on the parameter p is given in Figs. 4 and 5, wherein the graphs of memristance against charge, as well as of charge against flux, are plotted for various values of p. The nonlinear behavior of the memristor is most evident when p = 1; hence, the above equation simplifies to

f(x) = 4x - 4x^2    (7)

This function ensures that the x-coordinate stops moving as it approaches either boundary. With the help of the Joglekar window function and Eq. (6), the expressions for the memristance M(t) and the memristive conductance G(t) are given by Eqs. (8) and (9) as

M(t) = R_{on} + \Delta R\,\frac{1}{A e^{4kq(t)} + 1}    (8)


Fig. 4. Graph of memristance vs. charge.

Fig. 5. Graph of charge vs. flux.

where A = \frac{R_{off} - R_{on}}{R_0 - R_{on}}, and

G(t) = \frac{1}{M(t)}    (9)

Upon differentiating Eq. (9) with respect to t, the rate of change of conductance is obtained as

\frac{dG}{dt} = \frac{4kA e^{4kq(t)}\,\Delta R\,i(t)}{(R_{on} A e^{4kq(t)} + R_{off})^2}    (10)

The values of the parameters considered are R_on = 100 Ω, R_off = 20 kΩ, R_0 = 5 kΩ, and D = 10 nm.
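To make Eqs. (8)–(10) concrete, the following is a minimal Python sketch that evaluates the memristance, conductance, and its rate of change using the parameter values quoted above; the charge values swept at the end and the 1 mA current are illustrative assumptions, chosen only so that the exponent 4kq(t) stays of order one.

```python
import numpy as np

# Device parameters as quoted in the text
R_on, R_off, R_0 = 100.0, 20e3, 5e3      # ohms
D = 10e-9                                 # TiO2 layer thickness, m
mu_v = 1e-14                              # average mobility, m^2 s^-1 V^-1
k = mu_v * R_on / D**2                    # k = mu_v * R_on / D^2
dR = R_off - R_on                         # Eq. (4): total change in resistance
A = (R_off - R_on) / (R_0 - R_on)         # constant appearing in Eq. (8)

def memristance(q):
    """Eq. (8): M(t) = R_on + dR / (A * exp(4*k*q(t)) + 1)."""
    return R_on + dR / (A * np.exp(4 * k * q) + 1)

def conductance(q):
    """Eq. (9): G(t) = 1 / M(t)."""
    return 1.0 / memristance(q)

def dG_dt(q, i):
    """Eq. (10): rate of change of conductance for an instantaneous current i(t)."""
    e = A * np.exp(4 * k * q)
    return 4 * k * e * dR * i / (R_on * e + R_off) ** 2

# Illustrative charge sweep (assumed): k = 1e4 here, so q ~ 1e-4 C keeps 4kq ~ O(1)
q = np.linspace(0.0, 1e-4, 5)
print(memristance(q))          # decreases toward R_on as charge flows
print(conductance(q))
print(dG_dt(q, i=1e-3))        # for an assumed 1 mA current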


The change of weight Δw_i in the neural network is replaced by the change in memristive conductance ΔG. The nonvolatile characteristic of the memristor helps stabilize the neural network and makes the weight updation convenient by accelerating convergence. This unique property justifies the use of the memristor in the CNN.

3.1. Proposed training algorithm for learning of CNN

1. The input given to the CNN is denoted by x.
2. The values of the neurons in the hidden layer are computed as the product of the input x and the associated weights between the input and hidden layers.
3. The output layer y is the product of the hidden layer values and the corresponding weights w_i between the hidden and output neurons. The equations for the output neuron y, taking t = 1, are as follows.
(a) In the case of the Hermite polynomial, the output y is given by

y = \sum_{i=1}^{n-1} x\,w_i\,h_i(t)    (11)

(b) In the case of the Chebyshev polynomials, the output y is given by

y = \sum_{i=1}^{n-1} x\,w_i\,C_{ij}(t)    (12)

where j = 0 for Chebyshev T and j = 1 for Chebyshev U.

4. The calculated error is given by

E = f(x) - y_i    (13)

5. The synaptic weight updation equation as derived from the memristor conductance equation is given by

\Delta w_i = \frac{4kA e^{4kq}\,\Delta R\,\rho\,J_i(t)}{(R_{on} A e^{4kq} + R_{off})^2}    (14)

where J_i(t) is the chaotic polynomial.

(a) In the case of the Hermite polynomial, the integral of the error q is given by

q = f(x) - \sum_{i=1}^{n-1} \int h_i(t)\,dt    (15)

(b) In the case of the Chebyshev polynomials of the first and second kind, the integral of the error q is given by

q = f(x) - \sum_{i=1}^{n-1} \int C_{ij}(t)\,dt    (16)

where j = 0 for Chebyshev T and j = 1 for Chebyshev U.

6. Finally, after each epoch, the weights are updated with the following equation (a compact sketch of the full training loop is given after Eq. (17))

(w_i)_{new} = (w_i)_{old} + \Delta w_i    (17)
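Putting steps 1–6 together, below is a minimal sketch of one training run with Chebyshev U activations, the target being the window function f(x) = 4x − 4x² of Eq. (7). The initial weights, the learning rate ρ, the integration range for Eq. (16), and the rescaled value of k (chosen so e^{4kq} stays numerically tame) are all illustrative assumptions the paper does not pin down; folding the error E into the update is likewise an assumption made so the loop converges in the spirit of Eqs. (13)–(17).

```python
import numpy as np

def cheb_U(n, t):
    """First n Chebyshev U polynomials at t via U_{k+1} = 2t*U_k - U_{k-1}."""
    vals = [1.0, 2.0 * t]
    for _ in range(2, n):
        vals.append(2.0 * t * vals[-1] - vals[-2])
    return np.array(vals[:n])

def window(x):
    """Target window function of Eq. (7): f(x) = 4x - 4x^2."""
    return 4.0 * x - 4.0 * x * x

R_on, R_off, R_0 = 100.0, 20e3, 5e3
dR = R_off - R_on
A = (R_off - R_on) / (R_0 - R_on)
k = 1e-2          # rescaled k (assumption, for numerical stability)
rho = 1e3         # learning rate (assumed)

n_hidden = 10
x = 0.2                         # input key in (0, 1)
t = 1.0                         # polynomial argument, t = 1 as in step 3
w = np.full(n_hidden, 0.1)      # initial hidden-to-output weights (assumed)
J = cheb_U(n_hidden, t)         # J_i(t) of Eq. (14)

# Eq. (16): q = f(x) - sum_i integral of C_i(t) dt, over an assumed range [0, t]
ts = np.linspace(0.0, t, 2001)
q = window(x) - sum(cheb_U(n_hidden, s).sum() for s in ts[:-1]) * (ts[1] - ts[0])

for epoch in range(200):
    y = np.sum(x * w * J)                       # Eq. (12): forward pass
    E = window(x) - y                           # Eq. (13): error
    e = A * np.exp(4 * k * q)
    dw = rho * E * 4 * k * e * dR * J / (R_on * e + R_off) ** 2   # Eq. (14)
    w = w + dw                                  # Eq. (17)

print(round(window(x), 4), round(float(np.sum(x * w * J)), 4))  # target vs. output
```

With these assumed constants the error contracts geometrically per epoch, so the printed output matches the 0.64 target for x = 0.2, mirroring the Chebyshev U column of Table 1.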

The error is calculated for the respective polynomials for a fixed number of epochs. The resulting updated weight values computed using the respective polynomials are given to the encryption process.

3.2. Chaotic polynomials as activation functions

The chaotic polynomials are used as special activation functions because of their orthogonal and recursive properties. Wiener chaos expansion, or polynomial chaos, often determines the uncertainty of the output in a dynamical system. The expansions of the Hermite and Chebyshev polynomials used in this work exhibit chaos; hence, they are used as activation functions for the CNN. This section deals with the polynomials that can be used for improving the performance of the CNN.

3.2.1. Hermite polynomial

The Hermite polynomial [8] is defined as

H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n} e^{-x^2} = \left(2x - \frac{d}{dx}\right)^n \cdot 1    (18)

where n is the order of the chosen polynomial. The polynomial for any order n is generalized by the recurrence in Eq. (19) below

H_n(x) = 2x\,H_{n-1}(x) - 2(n-1)\,H_{n-2}(x)    (19)

It can be seen that the values obtained from Eq. (19) for n = 0–5 can be random in nature.
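A minimal sketch of Eq. (19), evaluating the polynomial values for n = 0–5 referred to above (the evaluation point x = 0.3 is illustrative):

```python
def hermite(n, x):
    """Hermite polynomials H_0..H_n at x via the recurrence of Eq. (19):
    H_k(x) = 2x*H_{k-1}(x) - 2(k-1)*H_{k-2}(x), with H_0 = 1 and H_1 = 2x."""
    vals = [1.0, 2.0 * x]
    for k in range(2, n + 1):
        vals.append(2.0 * x * vals[-1] - 2.0 * (k - 1) * vals[-2])
    return vals[:n + 1]

print(hermite(5, 0.3))   # irregular, sign-alternating values for n = 0..5
```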


3.2.2. Chebyshev polynomial

The Chebyshev polynomials, which follow from De Moivre's formula [9], are a source of chaotic dynamics. They are special cases of the Gegenbauer polynomials, exhibiting properties that are applicable to cryptographic techniques. For any value x, they are given by the relation

T_n(x) = \cos n\theta    (20)

where x = cos θ and n is the order of the polynomial. When the range of x is [−1,1], the range of θ is [0, π]. The two ranges are traversed in opposite directions, i.e., x = −1 corresponds to θ = π and x = 1 corresponds to θ = 0. Hence, the Chebyshev polynomial of the first kind, for any n, is given by the recurrence relation

T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x)    (21)

The generating function for Eq. (21) is

\frac{1 - tx}{1 - 2tx + t^2} = \sum_{n=0}^{\infty} T_n(x)\,t^n    (22)

The Chebyshev polynomials of the second kind are denoted by

C_n(x) = \frac{\sin((n+1)\theta)}{\sin\theta}    (23)

The range of the polynomials of the second kind is similar to that of the polynomials of the first kind. The generating function and its expansion are given in Eq. (24), and for any value n, the corresponding Chebyshev polynomial is given by Eq. (25).

\frac{1}{1 - 2tx + t^2} = \sum_{n=0}^{\infty} C_n(x)\,t^n    (24)

The important property of these polynomials is the semigroup property and commutation under composition, which is represented as

T_r(T_s(x)) = T_s(T_r(x)) = T_{sr}(x)    (25)
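The semigroup property of Eq. (25) can be verified numerically from the recurrence of Eq. (21) alone; a small sketch (the values of x, r, and s are arbitrary):

```python
def cheb_T(n, x):
    """Chebyshev polynomial of the first kind T_n(x), via Eq. (21)."""
    t_prev, t_cur = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(2, n + 1):
        t_prev, t_cur = t_cur, 2.0 * x * t_cur - t_prev
    return t_cur

x, r, s = 0.37, 3, 5
print(cheb_T(r, cheb_T(s, x)))   # T_r(T_s(x))
print(cheb_T(s, cheb_T(r, x)))   # T_s(T_r(x))
print(cheb_T(r * s, x))          # T_{sr}(x): all three values agree
```

This commutation under composition is what makes Chebyshev polynomials attractive for key agreement and other cryptographic constructions.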

3.2.3. Relationship between Chebyshev polynomials of the first and second kind

The analysis of the Chebyshev polynomials used in the integral of the error function is extended to the relation between the polynomials of the first and second kind. The two families correspond to a complementary pair of Lucas sequences [10], V_n(P, Q) and U_n(P, Q), with the parameters P = 2x and Q = 1, as represented by Eqs. (26)–(29).

U_n(P, Q) = U_{n-1}(x)    (26)

V_n(P, Q) = 2\,T_n(x)    (27)

U_n(x) = 2 \sum_{j \le n,\ j\ \mathrm{odd}} T_j(x) \quad (n\ \mathrm{odd})    (28)

U_n(x) = 2 \sum_{j \le n,\ j\ \mathrm{even}} T_j(x) - 1 \quad (n\ \mathrm{even})    (29)
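Under the reconstruction of Eqs. (28) and (29) given above (with the factor of 2 and the finite upper limit n that the standard identities carry; the extracted original left both ambiguous), the relations can be checked numerically:

```python
def cheb_T_list(n, x):
    """T_0..T_n at x via the recurrence of Eq. (21)."""
    vals = [1.0, x]
    for _ in range(2, n + 1):
        vals.append(2.0 * x * vals[-1] - vals[-2])
    return vals[:n + 1]

def cheb_U_val(n, x):
    """U_n(x) via the second-kind recurrence (same form as Eq. (21))."""
    vals = [1.0, 2.0 * x]
    for _ in range(2, n + 1):
        vals.append(2.0 * x * vals[-1] - vals[-2])
    return vals[n]

x = 0.42                                   # arbitrary test point
n = 6                                      # even n: Eq. (29)
T = cheb_T_list(n, x)
print(cheb_U_val(n, x), 2 * sum(T[j] for j in range(0, n + 1, 2)) - 1)
n = 7                                      # odd n: Eq. (28)
T = cheb_T_list(n, x)
print(cheb_U_val(n, x), 2 * sum(T[j] for j in range(1, n + 1, 2)))
```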

4. Stage II: Chaotic series generation

The weights computed by the CNN, which is trained until it converges to the output with zero error, cannot be estimated manually without a tedious process. The proposed work uses the memristive CNN with polynomial chaos to resolve this problem. The number of cycles the network needs to learn is reduced because the network is complemented by a nonvolatile memristor. Further, the generated keys (weights), used in the complex mapping procedures, render the encryption random and increase the confidentiality of communication. The encryption algorithm employed is inspired by the cipher design proposed by Lai et al. [11]. Chaotic maps [12] are used to calculate the subsequent values from the initial value of the weight, as obtained in the previous section. The number of values thus calculated depends on the number of characters present in the plaintext sequence. The properties of the chaotic equations considered are discussed as follows:


Fig. 6. Bifurcation diagram of cubic map.

4.1. Cubic map

A cubic map is a 1D discrete chaotic map capable of generating random values from one single value. The cubic map has a control parameter a. As represented in Fig. 6, at a = 1 a pitchfork bifurcation occurs in the system, and two separate chaotic bands later emerge due to a crisis of the chaotic attractors [13]. Hence, the value of a is taken as 2.58, which lies within the chaotic band.

4.2. 2D Logistic map

The 2D Logistic chaotic map is studied with respect to its mathematical properties. Its dynamics in two dimensions show complicated chaotic behavior, such as basin structures and attractors. The general form of the 2D Logistic map for any values of u and v is given by Eqs. (30) and (31).

u_{n+1} = a(3v_n + 1)\,u_n(1 - u_n)    (30)

v_{n+1} = a(3u_n + 1)\,v_n(1 - v_n)    (31)
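A minimal sketch of the two stage-II series generators follows. The 2D logistic map implements Eqs. (30) and (31) at the chaotic value a = 1.19 quoted in the text; for the cubic map the paper cites the map but does not print its equation, so the common one-parameter form x_{k+1} = a·x_k·(1 − x_k²), with a = 2.58 as stated above, is assumed here.

```python
def cubic_series(x0, n, a=2.58):
    """Assumed cubic-map form x_{k+1} = a * x_k * (1 - x_k^2);
    a = 2.58 lies in the chaotic band mentioned in the text."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(a * xs[-1] * (1.0 - xs[-1] ** 2))
    return xs

def logistic2d_series(u0, v0, n, a=1.19):
    """Eqs. (30)-(31): the coupled 2D logistic map at the chaotic value a = 1.19."""
    us, vs = [u0], [v0]
    for _ in range(n - 1):
        u, v = us[-1], vs[-1]
        us.append(a * (3.0 * v + 1.0) * u * (1.0 - u))
        vs.append(a * (3.0 * u + 1.0) * v * (1.0 - v))
    return us, vs

print(cubic_series(0.64, 5))             # aperiodic values from one seed
print(logistic2d_series(0.25, 0.35, 5))  # two coupled aperiodic sequences
```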

Eqs. (30) and (31) exhibit chaotic behavior [14] at a = 1.19.

5. Stage III: Cryptographic process

The encryption algorithm has been inspired by the Lai–Massey algorithm. To complete the encryption process, mapping (F) and rotation (R) operations are performed several times. The definitions of these functions and the steps to be followed are discussed in this section and represented in Fig. 7.

5.1. Mapping function

1. Consider the input sequence A = A_n A_{n-1} A_{n-2} ... A_3 A_2 A_1 and the chaotic initial value a_0. From a_0, the subsequent values can be calculated with the help of the chaotic equations discussed earlier.
2. Suppose that the chaotic series obtained is a_1, a_2, a_3, a_4, ..., a_n. Since these values are chaotic, they have no specific increasing or decreasing order.


Fig. 7. Algorithm for encrypting a plaintext.

3. The obtained chaotic values are sorted and their respective array indices recorded. For example, if the fifth element is the least (considering ascending order), its new index is 1; therefore, the fifth element of A becomes the first element of B. In this manner, all the input sequence characters are re-ordered according to the indices to obtain a new sequence B = B_n B_{n-1} B_{n-2} ... B_3 B_2 B_1.
4. This function is represented as F(A) = B.

5.2. Rotation function

1. Consider the input sequence A = A_n A_{n-1} A_{n-2} ... A_3 A_2 A_1. The term "rotation" refers to the circular rotation of the entire series of characters by a number of positions determined by the round number.
2. For example, round number 2 means that the number of characters to be circularly shifted is 10 × 2 = 20. In general, the number of characters shifted circularly for round number m is 10 × m.

5.3. Encryption algorithm

1. The plaintext sequence is P = P_n P_{n-1} P_{n-2} ... P_3 P_2 P_1. It is divided into left and right halves, L0 and R0. The sequence is then concatenated to form H.
2. The concatenated sequence undergoes the mapping operation with a chaotic initial value that serves as w_1. Then the sequence is divided into two, and each half undergoes a rotation based on the round number.
3. The left and right halves are concatenated, and the sequence goes to the next round, following steps 1 and 2 for the corresponding round number.
4. After 10 rounds, the left and right halves are sent along with the initial key to the receiver; a sketch of one pass of these operations is given below.
5. The decryption procedure is the reverse of the encryption procedure and, hence, is not detailed here.
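Below is a minimal, self-contained sketch of the mapping and rotation operations and a 10-round pass in the spirit of Section 5.3. The per-round handling of the chaotic series and the use of the cubic map in its assumed form from Section 4.1 are illustrative choices; the paper describes the round schedule only at the level given above.

```python
import numpy as np

def chaotic_series(x0, n, a=2.58):
    """Cubic-map series generator (assumed form, as in the stage-II sketch)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(a * xs[-1] * (1.0 - xs[-1] ** 2))
    return xs

def mapping_F(seq, chaotic_vals):
    """Section 5.1: sort the chaotic values and permute the characters of
    seq by the resulting sort indices (element holding the least value first)."""
    order = np.argsort(chaotic_vals)
    return ''.join(seq[i] for i in order)

def rotate_R(seq, m):
    """Section 5.2: circular shift of the sequence by 10*m characters."""
    shift = (10 * m) % len(seq)
    return seq[shift:] + seq[:shift]

def encrypt(plaintext, w1, rounds=10):
    """Section 5.3 (sketch): map the concatenated sequence, split it, rotate
    each half by the round number, and re-concatenate, for 10 rounds."""
    h = plaintext
    series = chaotic_series(w1, len(h))    # one chaotic value per character
    for m in range(1, rounds + 1):
        h = mapping_F(h, series)
        mid = len(h) // 2
        h = rotate_R(h[:mid], m) + rotate_R(h[mid:], m)
    return h

print(encrypt("a sample plaintext message to be secured", 0.64))
```

Because both operations are permutations, decryption is simply the inverse permutation and inverse rotations applied in reverse round order, as the paper notes.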

Table 1. Simulation results of the CNN.

| Initial value | No. of epochs | Hermite error | Hermite output | Chebyshev T error | Chebyshev T output | Chebyshev U error | Chebyshev U output | Backpropagation error | Backpropagation output |
|---|---|---|---|---|---|---|---|---|---|
| 0.2 | 75 | 0.5514 | 0.0866 | 0.2761 | 0.3639 | 8.824E−06 | 0.6400 | 0.1271 | 0.5129 |
| 0.2 | 100 | 0.4448 | 0.1952 | 0.1908 | 0.4492 | 2.83E−08 | 0.6400 | 0.08567 | 0.5543 |
| 0.2 | 200 | 0.3566 | 0.2834 | 0.0127 | 0.6273 | 0 | 0.6400 | 0.0267 | 0.6133 |
| 0.3 | 75 | 0.7239 | 0.1161 | 0.1246 | 0.6954 | 9.93E−10 | 0.8400 | 0.0212 | 0.8188 |
| 0.3 | 100 | 0.4008 | 0.4392 | 0.0344 | 0.8156 | 3.65E−16 | 0.8400 | 0.0167 | 0.8233 |
| 0.3 | 200 | 0.3345 | 0.5055 | 1.24E−04 | 0.8399 | 0 | 0.8400 | 0.0018 | 0.8382 |
| 0.4 | 75 | 0.8788 | 0.0722 | 0.1034 | 0.8566 | 7.84E−13 | 0.9600 | 0.0039 | 0.9561 |
| 0.4 | 100 | 0.6780 | 0.2820 | 0.0109 | 0.9491 | 2.90E−16 | 0.9600 | 0.0072 | 0.9528 |
| 0.4 | 200 | 0.5743 | 0.4127 | 1.27E−05 | 0.9600 | 0 | 0.9600 | 0.0061 | 0.9539 |
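As a quick sanity check, the converged outputs in Table 1 coincide with the window-function targets of Eq. (7) for each initial value; a two-line verification:

```python
def window(x):
    """Window-function target of Eq. (7)."""
    return 4 * x - 4 * x * x

for x0 in (0.2, 0.3, 0.4):
    print(x0, window(x0))   # 0.64, 0.84, 0.96: the outputs Chebyshev U reaches in Table 1
```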

Table 2. Time taken to compute the chaotic polynomials.

| No. of epochs | Hermite | Chebyshev T | Chebyshev U | Back propagation |
|---|---|---|---|---|
| 100 | 11.103 s | 11.4577 s | 10.926 s | 10.939 s |

6. Results and discussion

The proposed process of encryption and decryption, involving the memristive CNN, was performed on the Matlab 2016a platform on a computer with 8 GB of RAM. The results are divided into two stages. The three-layered neural network, with an input layer, an output layer, and a hidden layer of 10 neurons, was trained using the Hermite and Chebyshev Type I and II chaotic polynomials for various numbers of cycles and inputs. The output of the polynomials, along with their respective errors, is calculated using Eqs. (21)–(23). The performance of the proposed work is compared with the output of a back propagation network using a similar architecture and a sigmoid activation. A comparison of the performance is tabulated in Table 1.

For input values ranging from 0.2 to 0.4, it is observed that the Chebyshev U polynomial converges fastest, with the least error. The number of cycles required for Chebyshev U to converge to the output of the window function is more than 4 times smaller than that of the other polynomials for all the tested input values. After Chebyshev U, Chebyshev T converges at around 500 cycles for input values greater than 0.2. The neural network using the Hermite polynomial for weight updation converges slowly, at around 700 cycles. However, after 100 cycles, the rate of reduction of the error is higher for the Hermite polynomial than for the other polynomials. The back propagation network, with no chaotic elements and a sigmoid function, is observed to converge more slowly than the proposed memristive CNN and, hence, is a slow learner. The use of orthogonal functions such as the Hermite and Chebyshev polynomials as activation functions has helped in overcoming the drawbacks of traditional back propagation networks: the difficulty of determining initial weights, the problem of local minima, and a slow convergence rate when there are many processing elements. The improvement associated with the CNN is consistent with [15], where it was shown that the performance of orthogonal neural networks is better than, or in some cases similar to, that of traditional back propagation networks. The Chebyshev polynomials were used because of their unique properties of completeness and recursion, which enabled better performance than their counterparts in terms of speedy convergence.

Further, the time taken to compute each of the chaotic polynomials on the mentioned platform was evaluated and tabulated in Table 2. It is observed that, for computing 100 epochs, each epoch consisting of 5 cycles, the time taken to compute the Chebyshev U polynomials is the least. The time taken to compute the Hermite polynomial is 0.177 s more than that for the Chebyshev U polynomial, but 0.354 s less than that for the Chebyshev T polynomial. The back propagation network tends to take about 0.013 s more than the Chebyshev U polynomial. Though the computing time of the BPN is close to that of Chebyshev U, the slow learning of the network makes it less preferable. In view of its faster convergence and least computation time on the platform used, Chebyshev U can be used for the process of weight updation in the overall network architecture. Moreover, the CNN described in this paper converges in fewer cycles, approximately 42.8% faster than the network described in [7]. The weights obtained in the last epoch of the neural network are computed and used as a key for the subsequent process. The encryption and decryption processes described in Section 5 were implemented using the cubic and 2D logistic maps.
The output sequence (ciphertext) can be assessed using the correlation function, which is given by:

\rho_{C,D} = \mathrm{corr}(C, D) = \frac{\mathrm{cov}(C, D)}{\sigma_C \sigma_D} = \frac{E[(C - \mu_C)(D - \mu_D)]}{\sigma_C \sigma_D}    (32)

where
C = indices of the plaintext
D = indices of the ciphertext
\mu_C = mean of sequence C
\mu_D = mean of sequence D
\sigma_C = standard deviation of sequence C
\sigma_D = standard deviation of sequence D
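Eq. (32) is the standard Pearson correlation coefficient; a minimal sketch of the comparison follows, where character codes and a random permutation stand in for the paper's plaintext/ciphertext index sequences (an assumption for illustration only):

```python
import numpy as np

def correlation(C, D):
    """Eq. (32): rho = E[(C - mu_C)(D - mu_D)] / (sigma_C * sigma_D)."""
    C, D = np.asarray(C, dtype=float), np.asarray(D, dtype=float)
    return np.mean((C - C.mean()) * (D - D.mean())) / (C.std() * D.std())

plain = "Artificial neural networks are an integral part of emerging technologies"
p = np.frombuffer(plain.encode(), dtype=np.uint8)   # index sequence of the plaintext
rng = np.random.default_rng(1)
c = rng.permutation(p)          # stand-in ciphertext ordering, for illustration
print(round(float(correlation(p, c)), 5))   # small magnitude, as in Table 3
```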


Table 3. Correlation values.

| Mapping function | Input values | Correlation |
|---|---|---|
| Cubic | 0.05 | 0.08901 |
| Cubic | 0.25 | 0.09180 |
| Cubic | 0.45 | 0.10819 |
| 2D Logistic | (0.05, 0.15) | 0.00342 |
| 2D Logistic | (0.25, 0.35) | 0.00411 |
| 2D Logistic | (0.35, 0.45) | 0.01019 |

The ciphertexts produced from the chaotic equations should have the least possible correlation with the plaintext P. The correlation is used as an indicator of the dependence of the ciphertext on the plaintext; a lower dependence makes it more difficult for a third party to intrude and decipher the message. The correlation coefficients obtained for the cubic and 2D logistic maps using Eq. (32) are given in Table 3. It is observed that the correlation values are small, reaching the range of 0.01–0.001 for text and audio signals. Further, the encryption carried out using the 2D logistic map results in a lower correlation than that using the cubic map, for both sample messages. This indicates that the two-dimensional logistic map can produce an output with greater confusion than the one-dimensional cubic map.

The performance of the proposed CNN is evaluated for both plaintext and audio signals. The process is performed with a key of value 0.2, which is sent across the channel to the CNN of the receiver. The key is processed using the Chebyshev U polynomial and converted into the chaotic initial value 0.64000. This value is used in the (1D) cubic map to obtain the chaotic series employed in encryption and decryption (16 iterations). The encryption using the 2D logistic map is also performed for the text and audio signals, keeping the value of y as 0.3 (set for the intended receiver). The correlation between the input and output sequences for the cubic and 2D Logistic maps is found to be 0.0065 and 0.0009, respectively. The sample plaintext and its ciphertexts under the 1D and 2D encryption are listed below:

Plaintext. Artificial neural networks are an integral part of emerging technologies and ongoing research has shown that they can be applied to a variety of applications. This paper proposes a new cryptographic algorithm using CNNs consisting of nonlinear equations such as Hermite or Chebyshev polynomials, with a memristor conductance that is used as an activation function in the CNN.

Ciphertext – cubic map. tdeftuoi whvc i uua ppicsln roia kutaht iir.aoeoebsdrhi aiennpaahalsmrit tnMcwohkrroe n Cnt rsoo iuee espnllteo mise-ots norc qo ncohaontsalai nNrn tnae nll tsowee eneeNe gregi.smra,icytgoh ai ttotopmtohn hv wnHopllcoacsacnfoeac gprunhaeegaaro lp gtocag uaihd y srcevanaaltnaNcs fnrAsaos csf r c ger r iiaud poki aanrTiw.no e tCaacmust, epntiNpeifnnlfrrs oygio hiasth ct ra eb erorriiiithyey t tges sphin

Ciphertext – 2D Logistic map. n pturat fc gn mCp ym fccelsirnttn NguaaoqpereMiwhraeaon s e oaleotlesohwti easc trotnenna ktnhseemagrnokfisns i r h o tco ie epi ecpcoi l lctNaa ifty sitgse p eehnhitse ostl nsttcoonNi-psaneadnlsu igrionvepyapori.e ag iarnarain rdhtsisnhAp ooeraoirieg tc ertfvapdua scherw aosaa nucrpnlyontei. rrtoceh ,HN hghao obaauocran rbg varmniwtii raor o oshy TaisitioafmlttsuhCrli ghann ctweukcado,oelicl.neu

Audio signal. A snippet of an audio signal lasting 5 s is used as an example. The captured signal, comprising 8000 discrete points, is stored in a high-precision matrix, after which it is encrypted using the above-mentioned process. The encrypted signal is shown on the right side of Fig. 8. The values from the encrypted signal are again stored in a matrix, and decryption is performed using the reverse mapping function.
The decrypted matrix is compared with the high-precision input matrix and is found to be the same. This process can be used for encrypting audio recordings to prevent piracy.

7. Conclusion

The CNN constructed with the memristor conductance equation and the Chebyshev U polynomials shows an improvement in performance over traditional back propagation networks. As the error obtained after 250 epochs is 0, the network can be used effectively to calculate the weights even at the receiver's end. This can be extended to a "k"-receiver configuration. In the case of the 2D map, the second initial value can be given as an index to the intended receiver. Synchronization is not required in this cryptosystem. The ciphertext and the key can be sent on a public channel. The initial value is used to generate a group of cryptographic keys, and these weight values cannot be calculated without the CNN, which makes the process of communication more secure, thus ensuring a high degree of confidentiality.


Fig. 8. Graphs depicting audio signals and their encrypted version.

References

[1] Wang L, Duan M, Duan S. Memristive Chebyshev neural network and its applications in function approximation. Math Probl Eng 2013;2013. Article ID 429402, 7 pages. doi:10.1155/2013/429402.
[2] Chua L. Memristor-the missing circuit element. IEEE Trans Circuit Theory 1971;18(5):507–19.
[3] Strukov DB, Snider GS, Stewart DR, Williams RS. The missing memristor found. Nature 2008;453(7191):80–3.
[4] Du N, Manjunath N, Shuai Y, Bürger D, Skorupa I, Schüffny R, et al. Novel implementation of memristive systems for data encryption and obfuscation. J Appl Phys 2014;115(12):124501.
[5] Thomas A. Memristor-based neural networks. J Phys D Appl Phys 2013;46(9):093001.
[6] Gao Y, Ranasinghe DC, Al-Sarawi SF, Kavehei O, Abbott D. Memristive crypto primitive for building highly secure physical unclonable functions. Sci Rep 2015;5.
[7] Shi X, Duan S, Wang L, Huang T, Li C. A novel memristive electronic synapse-based Hermite chaotic neural network with application in cryptography. Neurocomputing 2015;166:487–95.
[8] Grad H. Note on n-dimensional Hermite polynomials. Commun Pure Appl Math 1949;2(4):325–30.
[9] Diaconis P, Zabell S. Closed form summation for classical distributions: variations on a theme of de Moivre. Stat Sci 1991:284–302.
[10] Chebychev PL. Théorie des mécanismes connus sous le nom de parallélogrammes. Mémoires des Savants étrangers présentés à l'Académie de Saint-Pétersbourg 1854:539–86. This is the first presentation of the Chebychev polynomials (untranslated version).
[11] Lai X, Massey J, Murphy S. Markov ciphers and differential cryptanalysis. In: Advances in cryptology - EUROCRYPT '91. Springer; 1991. p. 17–38.
[12] Pareek N, Patidar V, Sud K. Cryptography using multiple one-dimensional chaotic maps. Commun Nonlinear Sci Numer Simul 2005;10(7):715–23.
[13] Grebogi C, Ott E, Yorke JA. Chaotic attractors in crisis. Phys Rev Lett 1982;48(22):1507.
[14] Hua Z, Zhou Y. Image encryption using 2D Logistic-adjusted-Sine map. Inf Sci 2016;339:237–53.
[15] Sher CF, Tseng C-S, Chen C-S. Properties and performance of orthogonal neural network in function approximation. Int J Intell Syst 2001;16(12):1377–92.

A. Ruhan Bevi is an Associate Professor in the Department of Electronics and Communication Engineering, SRM Institute of Science and Technology. Her research interests include security architecture design, adaptive security using FPGA, low-power embedded design, neural networks for security, and deep learning. She has authored many publications in peer-reviewed journals and conferences in her area of interest.

Sriharini Tumu completed her undergraduate degree in Electronics and Communication Engineering at the SRM Institute of Science and Technology in 2017. Her research interests include machine learning, deep learning, and applications of neural networks.

N. Varsha Prasad completed her undergraduate degree in Electronics and Communication Engineering at the SRM Institute of Science and Technology in 2017. Her research interests include cryptography, networking, and artificial intelligence.