Journal of Information Security and Applications 47 (2019) 410–420
A reformed grasshopper optimization with genetic principle for securing medical data

M.M. Annie Alphonsa a,∗, N. MohanaSundaram b

a Karpagam Academy of Higher Education, Coimbatore, 641 021, Tamil Nadu, India
b CSE Department, Karpagam Academy of Higher Education, Coimbatore, 641 021, Tamil Nadu, India
Keywords: Health care data; Cloud computing; Privacy preservation; Genetic algorithm; Grasshopper optimization
Abstract

Cloud computing is an emerging computing technology that uses the internet and central remote servers to maintain data and applications. Using cloud computing, users can access database resources through the internet from anywhere, without worrying about the management of the actual resources. This concept can be applied in almost all fields, including the healthcare sector. In such settings, mining data from the cloud and investigating it effectively is demanding because the privacy of the data must be preserved. Consequently, the privacy and security of the information system are at major risk from attacks such as the Known Plaintext Attack (KPA) and the Chosen Plaintext Attack (CPA). To manage these issues, this paper describes a privacy preservation approach with data sanitization and data restoration processes for securing medical data. Several researchers have proposed enhancements to the restoration process, since it tends to suffer from low accuracy. As a solution, this paper uses a hybrid algorithm known as Grasshopper Optimization with Genetic Algorithm (GOAGA) for the data sanitization and restoration processes. Further, the performance of this hybrid approach is compared with other conventional approaches in terms of sanitization and restoration effectiveness, convergence, statistical and key sensitivity analysis, and thus the superiority of the proposed approach is validated. © 2019 Published by Elsevier Ltd.
1. Introduction

Cloud computing technologies afford flexible and effortless access to shared network resources [1]; such services are easy to set up and can be maintained at low cost. Conventionally, cloud computing is regarded as a subscription-based service that facilitates ubiquitous access to pools of shared resources. In academia, a number of scientists and researchers have focused on cloud computing, and it has therefore attracted a growing number of studies in recent years, both on cloud computing technology and on Information Technology (IT) in general. Through cloud computing, users can employ a complete set of resources, platforms, storage and so on over the internet, and thus benefit from the services rendered by the cloud providers. The National Institute of Standards and Technology (NIST) regards cloud computing as a prominent model for deploying various computing resources and other new practical implementations in the world of IT [2,3].
∗ Corresponding author. E-mail address: [email protected] (M.M. Annie Alphonsa).
https://doi.org/10.1016/j.jisa.2019.05.007 2214-2126/© 2019 Published by Elsevier Ltd.
However, many prospective consumers are cautious about adopting cloud computing, owing to its safety and confidentiality shortcomings [4–6]. In this regard, privacy [7,8] protection is perceived as the most significant complication, as a number of large companies as well as healthcare [9–13] centres have moved their health services to the internet. Hence, the collections of records held by these cloud applications are highly sensitive [14,15]. For both medical and pharmaceutical researchers, certain diseases can be characterized and studied by accessing the health-related records of patients [16]. In recent times, cloud computing has expanded its services in the health sector by admitting several types of healthcare records [17] to the cloud, which enables highly scalable services at low cost [18]. Even though the cloud delivers various advantageous healthcare services, a number of interrelated confidentiality issues are also examined extensively by both governments and individuals [18]. The confidentiality threats increase when individual healthcare information is outsourced to the cloud, since this kind of information is sensitive in nature [18,19]. In addition, the cloud may follow an intrusive model, which often fulfils the requirements of several protocols but may attempt to extract the private PHI of the patients from the interactions
with the patients as well as physicians. Hence, the secure and confidential preservation [20–25,39–41] of data is considered one of the most demanding problems, and it requires effective solutions [2]. Providing security services alone is not a sufficient answer to the weaknesses of privacy preservation schemes, since the interaction between cloud users and providers can reveal more than the removed contents of a record. For instance, if a specific service is provided to a particular user, the provider should not learn supplementary information about that user, such as their activities or social connections. Thus, protecting the customer's privacy requires not only that the data be encrypted but also that privacy be maintained within the cloud. Such protection covers several aspects, including data processing privacy, access pattern privacy and identity privacy [26]. Several privacy preservation approaches, such as data partitioning, privacy-preserving access control and cloud privacy preservation schemes, have been developed to attain privacy in the cloud. Nevertheless, obtaining optimal results for data privacy remains a challenging issue, and an effective privacy preservation approach is required for medical data records.

Cloud computing plays a major role in the healthcare sector, but it leaves authorised users in a vulnerable situation if proper precautions are not taken. Some of the most common cloud computing security risks are Distributed Denial-of-Service attacks, ransomware attacks, phishing and social engineering attacks, hijacking of accounts, malware injection and data loss. To overcome these issues, this paper proposes an efficient privacy preservation approach based on a hybrid algorithm, GOAGA, which implements both the sanitization and the restoration process for sensitive healthcare data records in the cloud. The advanced model is compared with other traditional models such as the Genetic Algorithm (GA), Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), Firefly (FF), Glowworm Swarm Optimization (GSO), Genetically Modified Glowworm Swarm (GMGW), Glow Worm Swarm Employed Bee (GWOSEB), and Grasshopper Optimization Algorithm (GOA).

The rest of the paper is structured as follows: Section 2 reviews the literature. Section 3 specifies the privacy preserved medical data transmission system. Section 4 illustrates the data hiding and restoration. Section 5 analyses the acquired results, and Section 6 concludes the paper.

2. Literature review

2.1. Related works

In 2014, Rongxing Lu et al. [27] proposed a secure and privacy-preserving opportunistic computing framework known as SPOC for emergencies in the healthcare field. Here, resources such as the computing power and energy of smartphones were pooled to process the computation of personal health information (PHI) during emergencies. In particular, to control privacy and reliability, the approach relies on attribute-based access control and a new privacy-preserving scalar product computation (PPSPC) technique to assist in the processing of data.
From the analysis, this framework achieves user-centric privacy access control, and its effectiveness was evaluated in terms of data transmission. In 2015, Ximeng Liu et al. [9] proposed a privacy preservation model known as a 'patient-centric clinical decision support system', which assists the clinician in diagnosing patients in a privacy-preserving manner. In this model,
the particulars of the patients were stored in the cloud, and the model can train a (naive Bayesian) classifier without leaking the private data of the patients. The trained classifier can then predict the diseases of new patients and enable them to retrieve the top-k disease names based on its output. The authors also introduced a new 'additive homomorphic proxy aggregation' scheme to secure the details of the patients. The detailed analysis confirmed that the privacy of patient details can be preserved with this model, and its efficiency was validated with good accuracy in a privacy preservation setting. In 2014, Qinghua Shen et al. [28] proposed a healthcare monitoring system for privacy preservation with minimum service delay using geo-distributed clouds. In this model, the cloud allocates servers to users under load conditions, enabling a resource allocation technique that reduces the service delay of the user. An additional traffic-shaping algorithm was proposed to convert user health data traffic into non-health data traffic, and thus various attacks on the model were mitigated. The numerical results showed the efficiency of the approach in terms of service delay for privacy preservation; furthermore, using both a distributed control law and short queuing techniques, the resources were allocated and the service delay was minimized. In 2014, Kuan Zhang et al. [17] presented a 'priority based health data aggregation (PHDA)' framework with privacy preservation to enhance the aggregation of various health data types. They first investigated several social spots that help forward the health data and thus allowed users to determine the optimal relay based on social attachment. In accordance with the data priorities, suitable approaches can be selected for forwarding the health information of the user to the cloud server. The exploration of this model revealed that PHDA attains both identity and data privacy preservation. Moreover, it was observed that the model can resist various attacks, and the evaluations showed that PHDA achieves a good delivery ratio with comparable communication cost. In 2017, Yang Lu et al. [29] proposed a semantic privacy preservation approach for the linkage of electronic health records. The approach maintains the formulation of access control policies for users, and thus privacy attacks are inhibited. By extending the capabilities of the eXtensible Access Control Mark-up Language (XACML), various data risk disclosure methods are supported based on user access control. The next phase of the approach concerns the usage of attributes in different types of datasets; based on geospatial concepts, several real-world applications were constructed from different data sources. From the analysis, measurements of the relatedness between various attributes were used to preserve the privacy of the data records. In 2012, Adeela Waqar et al.
[26] highlighted the possibility that metadata stored in a cloud database can be manipulated to compromise the privacy of the user. Based on the representation of the database, several configurations were initiated. The database proposal was reconstructed based on cryptographic and interactive privacy preservation processes together with the effective use of responsive parameterization of parent-class membership. Simultaneously, through the effective reconstruction of the metadata, unaffected access to all the database records was preserved for the cloud provider, and thus the restoration process could be attained. Further, the suitability of
several anticipated models was established by devising their consistent steps. In 2013, Xuyun Zhang et al. [14] proposed an efficient quasi-identifier index-based model for fortifying privacy preservation. Here, high data utility on incremental as well as distributed data sets in the cloud can be acquired. 'Quasi-identifiers' denote the groups of information to be removed and can be used for several processes, and an algorithm was devised for realizing the proposed model. Further, locality-based hash functions were used for arranging analogous quasi-identifiers. Thus the efficacy of the privacy preservation model improved moderately over traditional approaches. In 2017, Farzana Rahman et al. [30] proposed a privacy preservation approach for Radio Frequency Identification (RFID) based healthcare systems. Two main types of privacy preservation techniques are used in an RFID-based healthcare system: a privacy-preserving authentication protocol that senses the RFID tags for different recognition and monitoring applications, and a privacy-preserving access control approach that uses the RFID tag to hinder illicit access to private information. An approach known as the PriSens-HSAC model was used to address these two privacy issues. The PriSens component of this model provides enhanced privacy compared to several other RFID authentication protocols, and the HSAC component regulates illicit access to patients' private data. From the analysis, various privacy levels were explored for different types of service requirements and for various types of attacks.

2.2. Review

The characteristics and challenges of the models examined in the literature are summarized here. The privacy-preserving scalar product computation (PPSC) approach [27] can preserve the privacy of the data associated with data processing, but several security issues remain due to internal attackers. The naive Bayesian classifier [9] has the ability to withstand collision attacks and attains effective privacy preservation, but due to its complications the communication and computational overheads are high. A geo-distributed cloud [28] is used to allocate the servers to the users and thus reduces the service delay to preserve the privacy of the data; however, only more complicated cases were considered, and the problem is still challenging. The PHDA approach [17] can preserve identity as well as data privacy, but its performance remains a difficult issue, and the model should be refined to attain better efficiency. The semantic privacy preservation approach [29] maintains the formulation of the access control policy for the users, and thus privacy can be preserved; however, a larger number of attributes in the dataset cannot be considered for the preservation of privacy. The quasi-identifier index [14] affords high portability, but the scheduling of the anonymized data set is still challenging. The PriSens-HSAC approach [30] regulates the illicit access to patients' private data with respect to attacks and service requirements, but the assignment of new sets of identifiers in the data is still an issue. Hence, to solve all the privacy issues in the cloud, it is essential to have an efficient privacy-preserving model.
2.2.1. Problem statement

The literature has been reviewed from different perspectives, as illustrated in Table 1. The sanitization processes reported in the literature [27,9] either delete or modify the sensitive data, and so reconstruction may not be possible for authentic users.
A few works [28,17,29,14] that contribute to the restoration principle do not investigate the sensitivity of key changes, which is an important factor in secure key generation. The present key generation processes are of high dimension and are therefore computationally intensive.

2.3. Comparative analysis

Table 2 presents a detailed comparative analysis of the proposed work against conventional models.

3. Privacy preserved medical data transmission system

3.1. System architecture

The structural design of the proposed medical data hiding and restoration process is demonstrated in Fig. 1. For efficient access and to support mobility for both healthcare professionals and patients, healthcare data are stored in the cloud. In spite of the popularity of the healthcare cloud, several security issues arise; for instance, data theft attacks are considered one of the most serious security breaches of healthcare data in the cloud. The main intention of this approach is to protect the sensitive medical data that are to be preserved. The process is designed in two stages: data hiding and data restoration. Data hiding is performed by creating an optimal key; the healthcare provider holds this key and uses it to preserve the medical data. The preserved data is termed the sanitized data, and it is forwarded to the recipient or patient. On the other hand, after receiving the sanitized data, the recipient can recover the original data only if the identical key is supplied. Hence, by deploying the optimal key, the recipient retrieves the obscured sensitive original data, which constitutes the restoration process. Finally, in case of any emergency, the recovered original data can be used for clinical examination, and the information from the records can be remitted to the relevant patients, their guardians and so on.
Fig. 1. Architecture diagram of the proposed medical data preservation and restoration process.
Table 1
Literature survey.

Author [Citation] | Adopted methodology | Features | Challenges
Rongxing Lu et al. (2014) [27] | PPSC approach | Preserves the privacy of the data | Security issues; less reliability
Ximeng Liu et al. (2015) [9] | Naive Bayesian classifier | Less sensitive to variation of the system parameters; withstands collision attacks; better privacy preservation | High computational overhead; complications occur
Qinghua Shen et al. (2014) [28] | Geo-distributed cloud | Reduces the delay in the service | Must allocate servers to users; high cost
Kuan Zhang et al. (2014) [17] | PHDA | Preserves the privacy of the data; preserves identity and data privacy | Poor performance
Yang Lu et al. (2017) [29] | Semantic privacy preservation approach | Maintains the access control policy; preserves privacy for attributes | Does not provide privacy for all attributes
Adeela Waqar et al. (2012) [26] | Metadata manipulation in the cloud database | Provides better data restoration; offers unaffected access to all the records | Information loss rate; hiding failure occurs
Xuyun Zhang et al. (2013) [14] | Quasi-identifier index-based model | High portability | Scheduling of the anonymized data set is difficult
Farzana Rahman et al. (2017) [30] | PriSens-HSAC approach | Controls the illegal access to patients' private data | New sets of identifiers in the data cause privacy issues
Table 2
Comparative analysis of the proposed versus existing methods.

Author [Citation] | Comparative description
Rongxing Lu et al. (2014) [27] and Ximeng Liu et al. (2015) [9] | Developed security protocols that require complex cryptographic schemes and incur communication overhead. Our work transforms the data so that the selected data are encoded and cannot be suspected by third parties.
Kuan Zhang et al. (2014) [17] and Adeela Waqar et al. (2012) [26] | Privacy preservation is of least interest; privacy is preserved by storing the data in secure clouds. This means is highly expensive because it creates a high demand for a private cloud structure.
Kuan Zhang et al. (2014) [17] and Yang Lu et al. (2017) [29] | Priority based methods, which require hard switching logic and are not adaptive.
Fig. 2. Solution encoding.
3.2. Key based data hiding: an optimization problem

The foremost objective of this paper is to obtain the optimal key, which is used by both the data hiding and the restoration process. This is performed by hybridizing the prominent GA with the GOA algorithm. The 'key' is considered as the input solution to the algorithm (it also contains the sensitive data to be preserved), as shown in Fig. 2, where N_K denotes the number of key elements. The length of the chromosome (key) is K_1/40 × K_2, where K_1 denotes the number of records and K_2 the number of fields. The bounding limits of the solution are 1 (minimum) and 2^m − 1 (maximum), where m denotes the number of bits, for instance m = 40. The objective function is given in Eqs. (1) and (2), where R_j denotes the original data, R^p_j the data to be preserved, R^s_j the sanitized data, and M the total number of sensitive data items to be preserved.
$U = \min(P_j)$  (1)

$P_j = \dfrac{\sum_{j=1}^{M} R^s_j}{\sum_{j=1}^{M} R_j} - \dfrac{\sum_{j=1}^{M} R_j - \sum_{j=1}^{M} R^p_j}{\sum_{j=1}^{M} R_j}$  (2)

3.3. Key generation scheme

In key generation, an optimization algorithm is employed in order to find the best solution (best key). Here, GA is effectively combined with GOA to obtain better results.

3.3.1. Grasshopper optimization algorithm

GOA [31] is a heuristic search and optimization method stimulated by the swarming behavior of grasshoppers, and it can be used to solve optimization problems based on the behavior of grasshopper swarms in nature. There are two main characteristics of the swarm: movement and food-source seeking. Generally, the search process is divided into two stages, exploration and exploitation. In the exploration stage, the search agents are encouraged to move abruptly, while in the exploitation stage they move locally. These two functions, together with target seeking, are performed naturally by grasshoppers, and this behavior motivates the nature-inspired GOA algorithm [31]. The mathematical model of the swarming behavior of the grasshoppers is given in Eq. (3), where $P_i$ defines the position of the i-th grasshopper, $I_i$ denotes the social interaction, $F_i$ denotes the gravity force on the i-th grasshopper, and $W_i$ denotes the wind advection.

$P_i = I_i + F_i + W_i$  (3)

To produce random behavior, Eq. (3) can be rewritten as Eq. (4), where $n_1$, $n_2$ and $n_3$ are random numbers in the interval [0, 1].

$P_i = n_1 I_i + n_2 F_i + n_3 W_i$  (4)

The social interaction can be defined as in Eq. (5), where $a_{ij} = |y_j - y_i|$ is the distance between the ith and the jth grasshopper, and $\hat{a}_{ij} = (y_j - y_i)/a_{ij}$ is a unit vector from the ith
grasshopper to the jth grasshopper.

$I_i = \sum_{\substack{j=1 \\ j \neq i}}^{M} r(a_{ij})\,\hat{a}_{ij}$  (5)

The strength of the social forces, r, can be defined as in Eq. (6), where g denotes the intensity of attraction and t describes the attractive length scale.

$r(q) = g\,e^{-q/t} - e^{-q}$  (6)

The distance between two grasshoppers is mapped into the range [1, 4]. The F component in Eq. (3) can be calculated as shown in Eq. (7), where g is the gravitational constant and $\hat{e}_u$ is a unit vector towards the centre of the earth.

$F_i = -g\,\hat{e}_u$  (7)
Algorithm 1 GOA algorithm.

Input: Random key
Output: Optimal key
Initialize the random population P_i, the values of cmx, cmn and the maximum number of iterations L.
Estimate the fitness P_j of each search agent.
T is considered as the best search agent.
While (t < L)
    Update c using Eq. (11).
    For (each search agent)
        Normalize the distance between the grasshoppers into the interval [1, 4].
        Update the position of the current search agent using Eq. (10).
        Bring the current search agent back if it goes outside the boundaries.
    End for
    Update the value of T.
    t = t + 1
End while
Return T (optimal key)
The W component in Eq. (3) can be calculated as shown in Eq. (8), where v is a constant drift and $\hat{e}_w$ is a unit vector in the direction of the wind.

$W_i = v\,\hat{e}_w$  (8)

Substituting the values of I, F and W in Eq. (3), the equation can be expanded as in Eq. (9), where M denotes the number of grasshoppers.

$P_i = \sum_{\substack{j=1 \\ j \neq i}}^{M} r\left(\left|y_j - y_i\right|\right)\dfrac{y_j - y_i}{a_{ij}} - g\,\hat{e}_u + v\,\hat{e}_w$  (9)
However, this model cannot be used directly to solve optimization problems, because the grasshoppers quickly reach the comfort zone and the swarm does not converge to a single point. Eq. (9) is therefore modified as in Eq. (10), where $ub_a$ and $lb_a$ are the upper and lower bounds in the a-th dimension, $\hat{T}_a$ is the value of the a-th dimension of the target, and c is the decreasing coefficient that shrinks the comfort zone, repulsion zone and attraction zone. Eq. (10) gives the next position of a grasshopper based on its current position, the position of the target and the positions of all the other grasshoppers.

$P_i^{a} = c\left( \sum_{\substack{j=1 \\ j \neq i}}^{M} c\,\dfrac{ub_a - lb_a}{2}\, r\left(\left|y_j^{a} - y_i^{a}\right|\right)\dfrac{y_j - y_i}{a_{ij}} \right) + \hat{T}_a$  (10)

In this algorithm, to balance the exploration and exploitation stages, the parameter c should be reduced with the number of iterations, as described in Eq. (11), where $c_{mx}$ is the maximum value, $c_{mn}$ is the minimum value, t indicates the current iteration and L is the maximum number of iterations.

$c = c_{mx} - t\,\dfrac{c_{mx} - c_{mn}}{L}$  (11)
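To make Eqs. (6), (10) and (11) concrete, the following Python sketch implements one GOA position-update sweep. It is an illustration only, not the authors' implementation (the experiments in Section 5 were run in MATLAB): the function names are ours, the values g = 0.5 and t = 1.5 are the defaults suggested in the original GOA paper, the choice of c_max and c_min is an assumption, and the exact rule used to map inter-agent distances into [1, 4] is not specified in the text, so a simple modular mapping is assumed.

```python
import numpy as np

def social_force(q, g=0.5, t=1.5):
    # Eq. (6): attraction/repulsion strength; g and t are assumed defaults
    # from the original GOA paper, not values stated in this article.
    return g * np.exp(-q / t) - np.exp(-q)

def shrink_coefficient(it, max_it, c_max=1.0, c_min=0.00004):
    # Eq. (11): linearly decreasing comfort-zone coefficient (c_max, c_min assumed).
    return c_max - it * (c_max - c_min) / max_it

def goa_update(Y, target, c, lb, ub, eps=1e-12):
    """One sweep of Eq. (10) over all agents.

    Y      : (N, D) array of grasshopper (key) positions
    target : (D,) best key found so far (T in Algorithm 1)
    lb, ub : scalar or (D,) bounds, e.g. 1 and 2**m - 1 from Section 3.2
    """
    N, D = Y.shape
    Y_new = np.empty_like(Y)
    for i in range(N):
        s_i = np.zeros(D)
        for j in range(N):
            if i == j:
                continue
            dist = np.linalg.norm(Y[j] - Y[i])
            unit = (Y[j] - Y[i]) / (dist + eps)   # (y_j - y_i) / a_ij
            mapped = 1.0 + dist % 3.0             # map the distance into [1, 4] (assumed rule)
            s_i += c * (ub - lb) / 2.0 * social_force(mapped) * unit
        # Eq. (10): scaled social term plus the target, clipped back into the bounds
        Y_new[i] = np.clip(c * s_i + target, lb, ub)
    return Y_new
```

In a full run, this update would be repeated for L iterations, refreshing c with shrink_coefficient(t, L) and replacing the target whenever an agent attains a lower fitness, as Algorithm 1 outlines.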
In this algorithm, the main target is the global optimum, and thus at each optimization step the target for the grasshoppers must be identified. In GOA, the solution with the best objective value is considered as the target, and the most accurate target found is finally taken as the global optimum in the search space. The pseudo code of the GOA algorithm is given in Algorithm 1, and the initialization of the random key generation is illustrated in Fig. 3.

3.3.2. Genetic algorithm

GA is a heuristic search and optimization method stimulated by the natural evolution process [32]. GA deploys an abstract description of evolutionary processes to improve the solutions of specific problems. It incorporates six major steps: (1) crossover, (2) mutation, (3) genotype-phenotype mapping, (4) fitness evaluation, (5) selection and (6) termination.
Fig. 3. Initialization of random key generation.
3.3.3. Crossover

This operation combines the genetic material of two or more solutions. In most cases the group includes two parents; in certain exceptional variants it may involve only one parent. The operator thus produces offspring in which the genetic material of the parents is combined. A well-known operator for the bit-string representation is p-point crossover: the two parent solutions are cut at p positions and the segments are collected together in a sequential manner.

3.3.4. Mutation

This is the subsequent move, in which the mutation operator alters a solution by perturbing it. The mutation principle works on the basis of small random changes. The strength of this perturbation or disorder is represented as
the mutation rate. Three leading requirements apply to the mutation operation. The first is reachability: every point in the solution space must be reachable from any random point in the solution space. The second is unbiasedness: the operation should not drive the search towards a specific direction. The last, and main, requirement is scalability: every mutation operator should allocate a degree of freedom whose strength is adaptable.

3.3.5. Genotype-phenotype mapping

After applying the crossover and mutation operators, a new offspring population is evaluated. The population is simply the collection of sanitization keys. Here, the mapping of the chromosome (genotype) depends solely on the representation, which is referred to as the phenotype. This kind of mapping avoids introducing bias.

3.3.6. Fitness

Here, the phenotype of a solution is evaluated based on the fitness model. In this algorithm, the quality of a solution is evaluated by the fitness function, which is an element of the modelling process of the entire optimization model. The performance of the algorithm in problem solving is evaluated with respect to the number of fitness evaluations until the best solution is found.

3.3.7. Selection

In this process, the parents of the new generations are selected; this type of selection is termed survival selection. The operation describes two effects: the survival of individuals and the death of individuals. This notion directly follows Darwin's principle of survival, and the surviving individuals can be chosen for the crossover stage, which is called mating selection. The mating selection determines which parents take part in the crossover process.

3.3.8. Termination

This condition determines the termination of the main evolutionary loop. GA typically runs for a predefined number of generations; both the time and the cost of the fitness model thus bound the length of this optimization procedure. The pseudo code of the GA algorithm is given in Algorithm 2.

Algorithm 2 GA algorithm.

Input: Random key
Output: Optimal key
Initialize the arbitrary population D.
Estimate the fitness f(s) of each chromosome s in the population.
Repeat
    Choose the 'best' individuals to be utilized by the genetic operators.
    Evolve new individuals with the help of the crossover and mutation operators.
    Estimate the fitness of the new individuals.
    Substitute the 'poorest' individuals by the 'best' individuals.
Until the 'best' solution is reached

3.3.9. GOAGA algorithm

Generally in GOA, the comfort zone parameter is considered the most important parameter for solving optimization problems. However, in case of any discontinuities in the comfort zone range, only a single-objective problem with uncertain variables can be solved. In order to solve isolated and multi-objective problems, an advanced variant of this algorithm can be used. On the other hand, GA utilizes probabilistic selection rules to solve optimization problems even from a distinct point. It is therefore intended to integrate the advantages of GA into GOA, which results in tuning the main controlling parameters of GOA to attain an optimal key that satisfies the objective function given in Eq. (2). Here, the two best solutions are selected based on the minimum value of the objective function and subjected to the crossover operation of GA; the fitness is then estimated after the crossover operation. Finally, the best solutions obtained from GA and GOA are combined to attain the optimal key. The pseudo code of the proposed GOAGA for key extraction is given in Algorithm 3.
Algorithm 3 GOAGA algorithm for key extraction.

Input: Random key
Output: Optimal key
Initialize the random population P_i, the values of cmx, cmn and the maximum number of iterations L.
Estimate the fitness of each search agent.
T is considered as the best search agent.
While (t < L)
    Update c using Eq. (11).
    For (each search agent)
        Normalize the distance between the grasshoppers into the interval [1, 4].
        Update the position of the current search agent using Eq. (10).
        Perform the fitness estimation using Eq. (2).
        Find the best two solutions based on the minimum fitness.
        Perform the crossover using GA.
        Perform the fitness estimation using Eq. (2) after the crossover operation.
        Combine the best solutions obtained from GA and GOA.
        Find the best solution from the combined results.
        Bring the current search agent back if it goes outside the boundaries.
    End for
    Update the value of T.
    t = t + 1
End while
Return T (optimal key)
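The GA step that Algorithm 3 inserts into each GOA iteration can be sketched as follows. This is a minimal illustration, not the authors' code: the function names are ours, a single-point crossover is assumed (the paper describes the more general p-point operator), and because Algorithm 3 does not state how the GA offspring are merged back into the swarm, a simple replace-the-worst policy is assumed here. Any fitness callable (e.g. an implementation of Eq. (2)) can be plugged in.

```python
import numpy as np

def crossover(a, b, rng):
    # Single-point crossover of the two best keys (Section 3.3.3 describes
    # the general p-point form; one cut point is assumed here for brevity).
    cut = int(rng.integers(1, a.size))
    return (np.concatenate([a[:cut], b[cut:]]),
            np.concatenate([b[:cut], a[cut:]]))

def ga_refine(population, fitness, rng):
    """GA step of Algorithm 3: cross over the two fittest agents (minimum
    fitness) and merge the offspring with the GOA population by replacing
    the worst agents whenever an offspring improves on them (assumed policy)."""
    scores = np.array([fitness(p) for p in population])
    order = np.argsort(scores)
    child1, child2 = crossover(population[order[0]], population[order[1]], rng)
    for child in (child1, child2):
        worst = int(np.argmax(scores))
        if fitness(child) < scores[worst]:
            population[worst] = child
            scores[worst] = fitness(child)
    return population

# Toy usage: 10 agents, 20-element keys bounded by [1, 2**40 - 1] as in Section 3.2.
rng = np.random.default_rng(0)
pop = rng.integers(1, 2**40 - 1, size=(10, 20)).astype(float)
toy_fitness = lambda key: float(np.abs(key - key.mean()).mean())  # placeholder, not Eq. (2)
pop = ga_refine(pop, toy_fitness, rng)
```

A complete GOAGA run alternates a GOA sweep (Eq. (10)) with this GA refinement in every iteration and returns the lowest-fitness key as the optimal key.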
4. Data hiding and restoration

4.1. Data hiding

The complete operation of data hiding is demonstrated in Fig. 4. This process safeguards the data: the sensitive medical data is concealed by means of the optimal key, whose generation was described in Section 3.3. The optimal key is transformed into a binary value, and this binary key is multiplied with the medical data to hide the sensitive data; the resulting data is termed the sanitized data. The creation of the binary data proceeds as follows. Let the size of the data be K1 × K2 (for example, 200 × 4) and the size of the optimal key be 20 × 1; this key is multiplied with the original data to generate the sanitized data. During the conversion of the key into binary values, the length should be identical to that of the original data. The elements of the (20 × 1) key are therefore segmented into five subsets of 4 elements each, and each element of the key is converted into a 40-bit binary value, so that each subset of the key yields a 40 × 4 block. The segmented data set thus contains five 40 × 4 blocks, which constitute the acquired binary data. These five 40 × 4 blocks are finally concatenated to obtain the aggregated binary data of size 200 × 4. After the generation of the binary data, the sanitized data is attained by multiplying the binary data with the original data.

Fig. 4. Demonstration of data hiding process.

4.2. Data restoration

The process of the data restoration technique is illustrated in Fig. 5. The converse of the generated optimal key carries two pieces of information: the index and the sensitive data. Primarily, a vector of length comparable to the sanitized data is generated for the sensitive data, and it is multiplied with the index of the optimal key. The multiplied data is then added to the sanitized data to acquire the restored (original) data. If the key generated by GOAGA is precise, the original data is recovered effectively; otherwise, if the key is not optimal, the restoration of the original data cannot be done efficiently. Therefore, in this paper, the correlation coefficient between the original and the recovered data is evaluated to demonstrate the effectiveness of the proposed GOAGA approach.

Fig. 5. Illustration of data restoration process.
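The binary-key expansion and the sanitization/restoration round trip described above can be sketched in Python as follows (the reported experiments were run in MATLAB, so this is illustrative only). The 200 × 4 data size, the 20-element key and the 40-bit encoding follow the text; the function names are ours, and how the hidden sensitive values travel alongside the key is not fully specified, so they are passed explicitly here as an assumption.

```python
import numpy as np

def key_to_mask(key, n_records=200, n_fields=4, m=40):
    """Expand the optimal key into an (n_records x n_fields) binary mask (Section 4.1):
    each key element becomes an m-bit string, groups of n_fields elements form an
    (m x n_fields) block, and the blocks are stacked to match the data size."""
    bits = np.array([list(np.binary_repr(int(k), width=m)) for k in key], dtype=int)
    blocks = [bits[i:i + n_fields].T for i in range(0, len(key), n_fields)]
    return np.vstack(blocks)[:n_records, :n_fields]

def sanitize(data, key):
    # Data hiding: element-wise product of the original data with the binary mask.
    return data * key_to_mask(key, *data.shape)

def restore(sanitized, key, hidden_values):
    """Data restoration (Section 4.2): the zero positions of the mask index the hidden
    (sensitive) entries; adding them back recovers the original data. The hidden values
    are passed explicitly because the paper does not fully specify how they are carried
    with the key."""
    mask = key_to_mask(key, *sanitized.shape)
    restored = sanitized.astype(float).copy()
    restored[mask == 0] = hidden_values
    return restored

# Round-trip example with toy 200 x 4 records and a 20-element key.
rng = np.random.default_rng(1)
data = rng.integers(1, 100, size=(200, 4)).astype(float)
key = rng.integers(1, 2**40 - 1, size=20)
san = sanitize(data, key)
rec = restore(san, key, data[key_to_mask(key, *data.shape) == 0])
print(np.corrcoef(rec.ravel(), data.ravel())[0, 1])  # correlation coefficient used in Section 4.2
```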
5. Results and discussions

To show the superiority of the proposed model, simulation results are presented in the following subsections. In short, Section 5.1 describes the experimental procedure; Section 5.2 demonstrates the optimal key generation of the proposed model over conventional models for Test case 1, Test case 2 and Test case 3; Section 5.3 describes the convergence analysis of the proposed GOAGA model over conventional models with respect to the minimization of the cost function; Section 5.4 shows the sanitization and restoration effectiveness of the proposed model over conventional models; Section 5.5 demonstrates the sensitivity and security of the proposed model in comparison with conventional models by altering the key's percentage level; and Section 5.6 depicts the algorithmic analysis of the proposed privacy preservation model.

5.1. Experimental procedure
The experiments on the data sanitization and restoration model for preserving medical data were performed in MATLAB 2015a, and the simulation results were recorded. The testing procedure was carried out using heart disease data, where each data set is of size [200 × 4], with 200 records and 4 fields. Synthetic data were produced from the original data by deviating it by 10%, 20% and 30%, respectively, yielding three test cases: Test case 1, Test case 2 and Test case 3. For the 10% deviation, the random data were generated by adding or subtracting a value in the range (−10 to +10); for the 20% deviation, a value in the range (−20 to +20); and for the 30% deviation, a value in the range (−30 to +30). Thus each test case contains 10 synthetic data sets, and the performance of the proposed GOAGA algorithm is compared with existing algorithms such as GA [32], ABC [33], PSO [34], FF [35], GSO [36], GWOSEB [37] and GOA [31] to validate its effectiveness.

5.2. Key generation efficacy

Meta-heuristic algorithms have an inherently stochastic behavior and hence do not produce identical results on every run. It is therefore essential to execute each algorithm five times and to report measures such as the best, worst, mean, median and standard deviation. Tables 3–5 show the statistical analysis of the proposed approach over conventional approaches for Test case 1, Test case 2 and Test case 3, respectively. The proposed algorithm outperforms the other conventional models for all test cases. For example, in the mean case of Test case 1, the proposed approach is 26.3%, 16.8%, 11.88%, 4.46%, 14.2%, 2018.3%, 13.72% and 8.13% better than GWOSEB, GOA, GMGW, GSO, FF, PSO, ABC and GA, respectively. Similarly, in Test case 2, the mean of the developed approach is 14.67%, 12.89%, 23.98%, 9.89%, 4.67%, 6.90%, 9.87% and 15.89% better than GWOSEB, GOA, GMGW, GSO, FF, PSO, ABC and GA, respectively. In Test case 3, the mean of the developed approach is 9.89%, 5.45%, 12.89%, 6.90%, 13.89%, 17.33%, 12.39% and 25.92% better than GWOSEB, GOA, GMGW, GSO, FF, PSO, ABC and GA, respectively. Not only in the mean case, but also for the best, worst, median and standard deviation cases, the developed approach shows better results.

5.3. Convergence analysis

Generally, in the sanitization process, the obtained optimal key with the lowest cost function can be considered as the best key. Here, the minimization of the cost function is observed with respect to the iterations. The convergence analysis of the proposed GOAGA approach over conventional methods for all three test cases is demonstrated in Fig. 6. The convergence behavior is obtained by varying the number of iterations (20, 40, 60, 80 and 100) over the corresponding cost functions. Fig. 6 exhibits that the proposed approach achieves a steadily converging cost function, and the minimum cost function is achieved at the 100th iteration. Similar results are observed for both Test case 2 and Test case 3. The analysis proves that the proposed approach can efficiently minimize the cost function, and this denotes the
Table 3
Statistical analysis of proposed and conventional approaches for Test case 1.

Method | Best | Worst | Mean | Median | Standard deviation
GA [32] | 1.4789 | 1.5771 | 1.5286 | 1.537 | 0.037041
ABC [33] | 1.4391 | 1.4796 | 1.4548 | 1.4479 | 0.018018
PSO [34] | 1.4603 | 1.5156 | 1.4874 | 1.4887 | 0.019836
FF [35] | 1.2933 | 1.4028 | 1.3594 | 1.3895 | 0.049445
GSO [36] | 0.57302 | 0.75397 | 0.65918 | 0.64714 | 0.086649
GMGW [38] | 0.38518 | 0.71468 | 0.58657 | 0.65247 | 0.13902
GWOSEB [37] | 0.017422 | 0.55463 | 0.39493 | 0.46268 | 0.21536
GOA [31] | 0.38791 | 0.78303 | 0.60798 | 0.62234 | 0.17815
GOAGA | 0.19349 | 0.54808 | 0.37411 | 0.37155 | 0.12679
Table 4
Statistical analysis of proposed and conventional approaches for Test case 2.

Method | Best | Worst | Mean | Median | Standard deviation
GA [32] | 1.5865 | 1.6835 | 1.6282 | 1.6191 | 0.043255
ABC [33] | 1.4961 | 1.5463 | 1.5186 | 1.5138 | 0.020843
PSO [34] | 1.5214 | 1.6402 | 1.5822 | 1.579 | 0.042695
FF [35] | 1.3556 | 1.4995 | 1.4032 | 1.3944 | 0.057022
GSO [36] | 0.68432 | 0.79041 | 0.72757 | 0.70893 | 0.046167
GMGW [38] | 0.59408 | 1.1734 | 0.81809 | 0.70604 | 0.24536
GWOSEB [37] | 0.53151 | 0.75988 | 0.65854 | 0.67016 | 0.10017
GOA [31] | 0.27849 | 0.82551 | 0.60825 | 0.58624 | 0.21396
GOAGA | 0.25765 | 0.67198 | 0.43853 | 0.37651 | 0.18471
Table 5
Statistical analysis of proposed and conventional approaches for Test case 3.

Method | Best | Worst | Mean | Median | Standard deviation
GA [32] | 1.5511 | 1.7611 | 1.6586 | 1.6738 | 0.078338
ABC [33] | 1.5824 | 1.6575 | 1.62 | 1.6178 | 0.026732
PSO [34] | 1.6249 | 1.6963 | 1.6662 | 1.6854 | 0.036384
FF [35] | 1.4392 | 1.5907 | 1.53 | 1.5322 | 0.057793
GSO [36] | 0.67604 | 0.80796 | 0.74866 | 0.74306 | 0.050471
GMGW [38] | 0.39685 | 0.94615 | 0.75692 | 0.83076 | 0.23192
GWOSEB [37] | 0.39188 | 0.74913 | 0.56969 | 0.59056 | 0.14108
GOA [31] | 0.44819 | 1.0518 | 0.7212 | 0.62998 | 0.23833
GOAGA | 0.11008 | 0.69934 | 0.5054 | 0.57395 | 0.22808
Fig. 6. Convergence analysis of proposed and conventional approaches for (a) Test case 1 (b) Test case 2 (c) Test case 3.
effectiveness of the developed model over the other conventional approaches.
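The synthetic test cases of Section 5.1 and the five-run statistics reported in Tables 3–5 can be reproduced along the following lines. This is an illustrative Python sketch (the original experiments were run in MATLAB); the function names are ours, and the uniform draw of the offsets is an assumption, since the text only states that values within the given range are added or subtracted.

```python
import numpy as np

def make_test_case(base, deviation, n_sets=10, seed=0):
    # Section 5.1: each synthetic set perturbs the 200 x 4 base records by a random
    # offset drawn from (-deviation, +deviation); deviation = 10, 20 or 30.
    rng = np.random.default_rng(seed)
    return [base + rng.uniform(-deviation, deviation, size=base.shape) for _ in range(n_sets)]

def run_statistics(costs):
    # Section 5.2: statistics over five independent runs of a stochastic optimiser.
    costs = np.asarray(costs, dtype=float)
    return {"best": costs.min(), "worst": costs.max(), "mean": costs.mean(),
            "median": float(np.median(costs)), "std": costs.std()}

# Example with a toy base matrix standing in for the heart-disease records.
base = np.random.default_rng(2).integers(1, 100, size=(200, 4)).astype(float)
test_case_1 = make_test_case(base, deviation=10)                      # 10% deviation
print(run_statistics(np.random.default_rng(3).uniform(0.3, 0.8, 5)))  # placeholder costs only
```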
5.4. Hiding and restoration performance

The sanitization effectiveness for the ten synthetic data sets of Test case 1, Test case 2 and Test case 3 is summarized in Fig. 7. The proposed GOAGA approach attains a more effective minimization of the objective function than the other conventional approaches. For example, in Test case 1, experiment 1 of the proposed approach is 22.7%, 12.87%, 13.67%, 7.34%, 3.98%, 8.65%, 9.12% and 17.98% better than GWOSEB, GOA, GMGW, GSO, FF, PSO, ABC and GA, respectively. For experiment 2, the developed approach is 6.73%, 7.08%, 9.61%, 9.10%, 11.35%, 9.44%, 13.65% and 16.63% better than GWOSEB, GOA, GMGW, GSO, FF, PSO, ABC and GA, respectively. For experiment 3, the proposed approach is 5.91%, 6.76%, 5.98%, 7.98%, 5.90%, 1.87% and 4.87% better than the compared approaches. The same examination is made for Test cases 2 and 3 as well. Hence, it is evident that the developed approach is more efficient than the other approaches, since it can effectively minimize the objective function.
The restoration experiment is performed for all three test cases, with 10 experiments each, as demonstrated in Fig. 8. For example, in Test case 1, experiment 1 of the proposed approach is 5.93%, 4.08%, 5.61%, 9.10%, 7.35%, 7.814%, 13.65% and 18.33% better than GWOSEB, GOA, GMGW, GSO, FF, PSO, ABC and GA, respectively. Similarly, for all experiments and test cases the proposed approach shows better performance than the conventional methods. Thus, it is clear that the proposed approach enhances the effectiveness of the sanitization process over the other conventional approaches.

5.5. Sensitivity and security analysis

The sensitivity of the obtained optimal key is examined by altering the key's percentage level to 10%, 30%, 40%, 50% and 70%, respectively, as demonstrated in Tables 6–8. The correlation between the data encrypted with the original key and with the altered key must be low. Based on this property, the proposed GOAGA approach achieves high performance for the 10% variation with reduced correlation. A similar examination is made by changing the key by 30%, 40%, 50% and 70%, respectively. The proposed GOAGA
Fig. 7. Sanitization effectiveness of proposed and conventional approaches for (a) Test case 1 (b) Test case 2 (c) Test case 3.
Fig. 8. Restoration effectiveness of proposed and conventional approaches for (a) Test case 1 (b) Test case 2 (c) Test case 3.
Table 6
Key sensitivity of Test case 1.

Method | Per_10 | Per_30 | Per_40 | Per_70
GA [32] | 0.8551 | 0.79886 | 0.72971 | 0.65852
ABC [33] | 0.86145 | 0.76726 | 0.76114 | 0.69905
PSO [34] | 0.86455 | 0.80667 | 0.8069 | 0.72956
FF [35] | 0.84274 | 0.78773 | 0.75792 | 0.7276
GSO [36] | 0.86268 | 0.79567 | 0.77027 | 0.72093
GMGW [38] | 0.86078 | 0.78742 | 0.74563 | 0.72385
GWOSEB [37] | 0.84309 | 0.76502 | 0.70145 | 0.65371
GOA [31] | 0.84943 | 0.70399 | 0.7479 | 0.70828
GOAGA | 0.87756 | 0.67507 | 0.70597 | 0.64679

Table 8
Key sensitivity of Test case 3.

Method | Per_10 | Per_30 | Per_40 | Per_70
GA [32] | 0.89671 | 0.87194 | 0.86671 | 0.78733
ABC [33] | 0.90022 | 0.85501 | 0.8336 | 0.74877
PSO [34] | 0.88671 | 0.86855 | 0.83347 | 0.71529
FF [35] | 0.90838 | 0.87268 | 0.82457 | 0.77026
GSO [36] | 0.89165 | 0.84942 | 0.85576 | 0.77516
GMGW [38] | 0.89621 | 0.85769 | 0.84756 | 0.76192
GWOSEB [37] | 0.85467 | 0.86026 | 0.82777 | 0.69925
GOA [31] | 0.88138 | 0.75343 | 0.80239 | 0.63627
GOAGA | 0.84353 | 0.7126 | 0.79366 | 0.5953
Table 7
Key sensitivity of Test case 2.

Method | Per_10 | Per_30 | Per_40 | Per_70
GA [32] | 0.91876 | 0.81605 | 0.7993 | 0.68155
ABC [33] | 0.93397 | 0.83971 | 0.81588 | 0.69708
PSO [34] | 0.91899 | 0.81657 | 0.79986 | 0.68244
FF [35] | 0.91747 | 0.80322 | 0.80075 | 0.69561
GSO [36] | 0.92637 | 0.84045 | 0.81679 | 0.65265
GMGW [38] | 0.92625 | 0.82948 | 0.81 | 0.65162
GWOSEB [37] | 0.91501 | 0.79475 | 0.76879 | 0.62385
GOA [31] | 0.92097 | 0.74168 | 0.67424 | 0.61914
GOAGA | 0.91857 | 0.68948 | 0.64171 | 0.59337

Table 9
CPA analysis of test case 1, 2 and 3.

Method | Test case 1 | Test case 2 | Test case 3
GA [32] | 0.99787 | 0.98947 | 0.98547
ABC [33] | 0.94602 | 0.97448 | 0.99871
PSO [34] | 0.96587 | 0.95594 | 0.98848
FF [35] | 0.96103 | 0.99383 | 0.9301
GSO [36] | 0.97994 | 0.99655 | 0.96424
GMGW [38] | 0.99778 | 0.97716 | 0.98363
GWOSEB [37] | 0.9659 | 0.95913 | 0.97457
GOA [31] | 0.98481 | 0.99265 | 0.83514
GOAGA | 0.9854 | 0.95418 | 0.78089
approach for the 30% variation is 9.2%, 28.7%, 32.7%, 30.6%, 13.9%, 7.8%, 14.7% and 29.9% better than GWOSEB, GOA, GMGW, GSO, FF, PSO, ABC and GA, respectively. For the 40% key variation, the proposed approach is 19.7%, 27.7%, 12.7%, 15.6%, 23.7%, 27.8%, 14.07% and 16.9% better than GWOSEB, GOA, GMGW, GSO, FF, PSO, ABC and GA, respectively. Thus it is concluded that the proposed approach achieves the minimum correlation when compared to the other approaches, which proves its efficiency concerning the privacy preservation of data. Attacks such as the Known Plaintext Attack (KPA) and the Chosen Plaintext Attack (CPA) are also analysed, as shown in Tables 9 and 10. The examination of KPA is done by correlating one original data set with all the original data sets and one sanitized data set with all the sanitized data sets. In the same way, the CPA analysis is performed by correlating each sanitized data set with its corresponding restored data set.
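The correlation-based checks of this subsection can be expressed compactly. The sketch below is illustrative only: the function names are ours, NumPy's corrcoef gives the Pearson correlation, the key-perturbation rule is an assumption (the paper only states the percentage of the key that is altered), and a placeholder sanitizer is used in the example so that the snippet runs on its own — in practice the binary-mask hiding of Section 4.1 would be plugged in.

```python
import numpy as np

def correlation(a, b):
    # Pearson correlation between two flattened data matrices.
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def key_sensitivity(original, key, fraction, sanitize_fn, rng):
    """Sanitize with the original key and with a key altered in the given fraction
    of positions; a low correlation between the two outputs means high key sensitivity."""
    altered = key.copy()
    idx = rng.choice(key.size, max(1, int(fraction * key.size)), replace=False)
    altered[idx] = rng.integers(1, 2**40 - 1, size=idx.size)  # key bounds from Section 3.2
    return correlation(sanitize_fn(original, key), sanitize_fn(original, altered))

def cpa_scores(sanitized_sets, restored_sets):
    # CPA analysis: correlate each sanitized data set with its corresponding restored set.
    return [correlation(s, r) for s, r in zip(sanitized_sets, restored_sets)]

# Toy usage with a placeholder sanitizer (masking by the parity of the key elements).
rng = np.random.default_rng(4)
data = rng.integers(1, 100, size=(200, 4)).astype(float)
key = rng.integers(1, 2**40 - 1, size=20)
toy_sanitize = lambda d, k: d * np.resize(k % 2, d.shape)
print(key_sensitivity(data, key, 0.30, toy_sanitize, rng))
```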
Fig. 9. Algorithmic analysis for Proposed privacy preservation model (a) Test case 1 (b) Test case 2 (c) Test case 3.
Table 10
KPA analysis of test case 1, 2 and 3.

Method | Test case 1 | Test case 2 | Test case 3
GA [32] | 0.96978 | 0.97632 | 0.9958
ABC [33] | 0.98457 | 0.99342 | 0.94992
PSO [34] | 0.97321 | 0.99444 | 0.98475
FF [35] | 0.98988 | 0.98921 | 0.99692
GSO [36] | 0.99005 | 0.98846 | 0.96153
GMGW [38] | 0.9672 | 0.99199 | 0.9749
GWOSEB [37] | 0.99621 | 0.86727 | 0.98772
GOA [31] | 0.96856 | 0.95438 | 0.87966
GOAGA | 0.95063 | 0.931 | 0.75619
From Table 10, it is perceived that the proposed GOAGA model for Test case 3 is 19.2%, 20.7%, 22.7%, 25.6%, 23.9%, 17.8%, 24.7% and 26.9% better than GWOSEB, GOA, GMGW, GSO, FF, PSO, ABC and GA, respectively. The same analysis is made for the remaining test cases and shows the dominance of the proposed model with a reduced susceptibility to attacks.

5.6. Algorithmic analysis

The correspondence between the original data and the recovered data is examined, and it is higher for the proposed GOAGA model, as summarized in Fig. 9. The minimum objective function is obtained with respect to the current and the target positions of the grasshoppers. Here, the minimum and maximum shrinking coefficients are used to shrink the comfort, attraction and repulsion zones at each iteration. This analysis is performed for all three test cases. In Test case 1, the maximum and minimum coefficients are adjusted to attain a minimum objective function of 0.6; similarly, in Test cases 2 and 3, the minimum objective function is obtained at 0.9 and 0.3, respectively.

6. Conclusion

This paper has developed a GOAGA algorithm for both the data sanitization and the restoration process, which provides better privacy preservation of sensitive data. The developed GOAGA algorithm is the hybridization of GOA with GA. The GOAGA approach is organized around the creation of an optimal key that is used for both the sanitization and the restoration process. The proposed GOAGA was compared with other conventional approaches, namely GWOSEB, GOA, GMGW, GSO, FF, PSO, ABC and GA, in terms of assorted analyses such as sanitization and restoration effectiveness,
convergence analysis, key sensitivity analysis and statistical analysis. From the results, it was obvious the dominace of proposed approach to efficient privacy preservation of data, that has also examined that the proposed GOAGA approach for 30% key variation was 9.2%, 28.7%, 32.7%, 30.6%, 13.9%, 7.8%, 14.7%, and 29.9% better than GWOSEB, GOA, GMGW, GSO, FF, PSO, ABC, and GA, respectively. Thus it was concluded that the proposed approach had attained a better performance for the preservation of sensitive healthcare data. The hybrid GOAGA scheme is defined in a way to secure the medical data. However, the rapid changing real world requires updated security by considering the existing challenges. In future, a novel model of cloud security system can be designed by introducing new advanced optimization algorithms with the consideration of more relevant security constraints. Conflict of interest The authors declare that they have no conflict of interest. References [1] Sahi A, Lai D, Li Y. Security and privacy preserving approaches in the eHealth clouds with disaster recovery plan. Comput Biol Med 2016;78:1–8. [2] Zhou J, Cao Z, Dong X, Lin X. PPDM: a Privacy-Preserving protocol for cloud-assisted e-healthcare systems. IEEE J Select Top Signal Process 2015;9(7):1332–44. [3] Gatzoulis L, Iakovidis I. Wearable and portable eHealth systems. IEEE Eng Med Biol Mag 2007;26(5):51–6. [4] Takabi H, Joshi JBD, Ahn GJ. Security and privacy challenges in cloud computing environments. IEEE Secur Priv 2010;8(6):24–31. [5] Zissis D. Author links open the author workspace.DimitriosLekkas, addressing cloud computing security issues. Future Gener Comput Syst 2012;28(3):583–92. [6] Grobauer B, Walloschek T, Stocker E. Understanding cloud computing vulnerabilities. IEEE Secur Priv 2011;9(2):50–7. [7] Nallakumar R, Sengottaiyan N, Arif MM. Cloud computing and methods for privacy preservation: a survey. Int J Advan Res Comput Eng Technol (IJARCET) 2014;3(11). [8] Zhang K, Liang X, Shen X, Lu R. Exploiting multimedia services in mobile social networks from security and privacy perspectives. IEEE Commun Mag 2014;52(3):58–65. [9] Liu X, Lu R, Ma J, Chen L, Qin B. Privacy-preserving patient-centric clinical decision support system on naïve Bayesian classification. IEEE J Biomed Health Inform 2016;20(2):655–68. [10] Lee SH, Song JH, Kim IK. CDA generation and integration for health information exchange based on cloud computing system. IEEE Trans Serv Comput 2016;9(2):241–9. [11] Monkaresi H, Calvo RA, Yan H. A machine learning approach to improve contactless heart rate monitoring using a webcam. IEEE J Biomed Health Inform 2014;18(4):1153–60. [12] Barua M, Liang X, Lu R, Shen X. ESPAC:enabling security and patient– centric access control for ehealth in cloud computing. Int J Secur Netw 2011;6(2–3):67–76. [13] Viswanathan H, Chen B, Pompili D. Research challenges in computation, communication, and context awareness for ubiquitous healthcare. IEEE Commun Mag 2012;50(5):92–9.
[14] Zhang X, Liu C, Nepal S, Chen I. An efficient quasi-identifier index based approach for privacy preservation over incremental data sets on cloud. J Comput System Sci 2013;79(5):542–55. [15] Zhou Y, Zhou G, Wang Y, Zhao G. A glowworm Swarm optimization algorithm based tribes. Appl Math Inform Sci 2013;7(2):537–41. [16] Azadeh A, Fam IM, Khoshnoud M, Nikafrouz M. Design and implementation of a fuzzy expert system for performance assessment of an integrated health, safety, environment (HSE) and ergonomics system: the case of a gas refinery. Inform Sci 2008;178(22):4280–300. [17] Zhang K, Liang X, Baura M, Lu R, (Sherman)Shen X. PHDA: a priority based health data aggregation with privacy preservation for cloud assisted WBANs. Inform Sci 2014;284:130–41. [18] Wang W, Chen L, Zhang Q. Outsourcing high-dimensional healthcare data to cloud with personalized privacy preservation. Comput Netw 2015;88:136–48. [19] kovidis I. Towards personal health record: current situation, obstacles and trends in implementation of electronic healthcare record in Europe. Int J Med Inf 1998;52(1–3):105–15. [20] Lu R, Liang X, Li X, Lin X, Shen X. EPPA: an efficient and privacy-preserving aggregation scheme for secure smart grid communications. IEEE Trans Parallel Distrib Syst 2012;23(9):1621–31. [21] Li H, Xiong L, Ohno-Machado L, Jiang X. Privacy preserving RBF kernel support vector machine. BioMed Res Int 2014;2014:1–10. [22] Lu R, Lin X, Shen X. SPRING: a social-based privacy-preserving packet forwarding protocol for vehicular delay tolerant networks. In: 2010 Proceedings IEEE INFOCOM); 2010. p. 1–9. [23] Shi E, Chan T, Rieffel E, Chow R, Song D. Privacy-preserving aggregation of time-series data. Proc. NDSS 2011. [24] Shi J, Zhang R, Liu Y, Zhang Y. PriSense: privacy-preserving data aggregation in people-centric urban sensing systems. In: 2010 Proceedings IEEE INFOCOM; 2010. p. 1–9. [25] Dhasarathan C, Thirumal V, Dhavachelvan, Ponnurangam. A secure data privacy preservation for on-demand cloud service. J King Saud Univer Eng Sci 2017;29(2):144–50. [26] Kaaniche N, Laurent M. Data security and privacy preservation in cloud storage environments based on cryptographic mechanisms. Comput Commun 2017;111:120–41. [27] Lu R, Lin X, Shen X. SPOC: a Secure and privacy-preserving opportunistic computing framework for mobile-healthcare emergency. In: IEEE Transactions on Parallel and Distributed Systems, 24; 2013. p. 614–24.
[28] Shen Q, Liang X. Exploiting geo-distributed clouds for a E-health monitoring system with minimum service delay and privacy preservation. IEEE J Biomed Health Inform 2014;18(2):430–9. [29] Lu Y, Sinnott RO. Semantic privacy-preserving framework for electronic health record linkage. Telemat Inform 2017;35(4):737–52 In press. [30] Rahman F, Bhuiyan ZA, Ahamed SI. A privacy preserving framework for RFID based healthcare systems. Future Gener Comput Syst 2017;72:339–52. [31] Saremi, Mirjalili S, Lewis A. Grasshopper optimisation algorithm: theory and application. Adv Eng Software 2017;105:30–47. [32] Call JM. Genetic algorithms for modelling and optimisation. J Comput Appl Math 2005;184(1):205–22. [33] Karaboga D, Basturk B. On the performance of artificial bee colony (ABC) algorithm. Appl Soft Comput 2008;8(1):687–97. [34] Tanweer MR, Suresh S, Sundararajan N. Self regulating particle swarm optimization algorithm. Inform Sci 2015;294:182–202. [35] Fister I, Fister I, Yang X, Brest J. A comprehensive review of firefly algorithms. Swarm Evolut Comput 2013;13:34–46. [36] Wu B, Qian C, Ni W, Fan S. The improvement of glowworm swarm optimization for continuous optimization problems. Expert Syst Appl 2012;39(7):6335–42. [37] Annie Alphonsa MM, Amudhavalli P. Privacy Preservation for Health Care Sector in Cloud Environment by Advanced Hybridization Mechanism, in Communication. [38] Annie Alphonsa MM, Amudhavalli P. Genetically modified glowworm swarm optimization based privacy preservation in cloud computing for healthcare sector. Evol Intell 2018;11(1-2):101–16. [39] De Giorgio A, Dante A, Cavioni V, Padovan AM, Rigonat D, Iseppi F, Graceffa G, Gulotta F. The IARA model as an integrative approach to promote autonomy in COPD patients through improvement of self-efficacy beliefs and illness perception: a mixed-method pilot study". Front Psycol 2017;8:1–9 eCollection. doi:10.3389/fpsyg.2017.01682. [40] De Giorgio A. The roles of motor activity and environmental enrichment in intellectual disability. Somatosens Mot Res 2017;34(1):34–43. [41] Sable AH, Talbar SN. An adaptive entropy based scale invariant face recognition face altered by plastic surgery. Pattern Recognit Image Anal 2018;28(4):813–29.