Future Generation Computer Systems 101 (2019) 534–541
Biometric data on the edge for secure, smart and user tailored access to cloud services
Silvio Barra a, Aniello Castiglione b, Fabio Narducci b,∗, Maria De Marsico c, Michele Nappi d

a University of Cagliari, Department of Mathematics and Computer Science, Cagliari, Italy
b University of Naples "Parthenope", Department of Science and Technology, Naples, Italy
c "Sapienza" University of Rome, Rome, Italy
d University of Salerno, Department of Computer Science, Salerno, Italy

∗ Corresponding author. E-mail address: [email protected] (F. Narducci).
Highlights

• A cloud architecture exploiting biometric data is proposed.
• Data and privacy protection over cloud/edge computing can benefit from biometric recognition.
• Fusing biometric traits with sensor data from mobile devices can customise the cloud service offer.
• Experimental results witness the feasibility of the approach.
Article history: Received 3 February 2019; Received in revised form 18 April 2019; Accepted 12 June 2019; Available online 21 June 2019.

Keywords: Edge Computing; Mobile Computing; IoT; Biometric Recognition; Context Awareness
Abstract

We live in an era in which each of us is immersed in a fully connected environment. The smart devices worn every day project users into the so-called IoT world, whose main aim is to provide tools and strategies that solve problems of everyday life and improve well-being and quality of life. Such a goal can be achieved in several ways, but consumer-specific services should be provided according to users' daily habits as well as their temporary needs. This is made possible by current mobile devices and their built-in sensors, which can infer the context where the owner operates and the activities performed, so as to build up a user profile. In this work, an architecture for the supply of cloud services is proposed, based on continuous subject authentication and context (status/activity) recognition. The subject and the context are authenticated/recognised by means of the cyclic analysis of signals captured by the sensors of the subject's smartphone, e.g., accelerometer and gyroscope. In order to test the accuracy of the proposed context recognition, the H-MOG dataset has been used, which provides joint acquisitions of status data (sitting or walking) related to the activity performed (reading, writing, or navigating a map).
1. Introduction

The increasingly flexible and effective interaction among some of the most recent advances in computing, namely Mobile Computing, the Internet of Things (IoT), cloud/fog/edge computing strategies, and massive Machine Learning (with or without deep networks), offers novel possibilities to build smart city services. Furthermore, biometric recognition is joining this group of enabling factors. It can effectively support smart operations entailing an adaptive application behaviour that can depend on the involved user. The attractive characteristic of biometric recognition is that it can be triggered without an explicit action by the subject, according to the principle of implicit interaction [1].
Without the need for passwords or tokens, the transparent identification of the user allows a flexible adaptation of services and resources, which is the basis for the kind of context proactivity characterising ambient intelligence (smart ambients). A mixture of mobile and distributed computing allows splitting the required processing in a way that optimises the use of computational resources. This is possible, on one side, by fully exploiting the increasing power of mobile devices (the edge) and, on the other side, by delegating the most critical operations to the cloud. IoT technologies act in this context as both sensors and actuators, by sensing the environment, transmitting information, and triggering specific actions. Of course, this complex scenario requires devising, and ultimately learning, suitable behavioural rules by which the smart ambient reacts to specific users and situations. Advanced machine learning techniques should allow a continuous adaptation to changing conditions (Fig. 1).
Fig. 1. The smart city star: biometric recognition joins mobile and distributed computing, IoT and machine learning to provide smart tailored services bound to the user identity.
In the scenario described above, a strong relation is being enforced by recent research between biometric authentication and mobile computing. The former increasingly exploits wearable sensors, including those embedded in smart mobile devices, without the need for special equipment. Nowadays, these sensors range from cameras of increasing resolution and MEMS-based sensors (Micro Electro-Mechanical Systems), ubiquitously embedded in everyday mobile communication devices, to fingerprint sensors, which are becoming quite common too. This makes the investigation of new and cheap solutions increasingly attractive. For instance, it is possible to control the access to a smartphone by recording and processing the dynamic signals produced by the simple gesture of lifting the phone [2]. Such information can also be used to build an overall context awareness of the user condition. Context awareness is a major component of Ambient Intelligence, which is, in turn, a core element of smart cities. Ambient Intelligence environments combine ubiquity, awareness, intelligence and natural interaction, with the crucial support of Internet of Things (IoT) technologies. Regarding the latter, it is worth reporting the definition given in [3]: "Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP)". In particular, intelligence is the ability of the system to recognise/analyse the detected user/context, to adapt its behaviour to people and situations, and to learn over time, possibly through Machine Learning methods and strategies, in order to provide users with personalised services. In this scenario, Cloud Computing plays a fundamental role in providing massive/special storage and processing services [4]. Possible, often unexpected
applications range from individual as well as group recommendation systems [5] to energy-efficient traffic control in wireless sensor networks [6] and intelligent public bicycle services assisted by data analytics [7]. The smart city star in Fig. 1 allows for further elements and supports different application scenarios (see Section 2.1). In this work, a special interest is devoted to the use of continuous multibiometric authentication, with the twofold aim of identifying a specific user and of adapting the provided services to a specific user profile, taking into account the present state (e.g., walking or sitting) and the present activity on a smart device (e.g., answering a call or writing a message) (see for example [8]). In all the addressed scenarios it is possible to sketch the role of both edge and cloud computing according to different strategies. Of course, the sensing of the relevant (biometric) signals is carried out on the edge, i.e., the user device. In order to limit the network traffic, the pre-processing and the extraction of a biokey (the user identifier, but also the user's state and activity) can be carried out on the smart device itself, so that only the identification needs to be performed at a higher level (fog/cloud). However, the capture and processing of suitable biometrics (e.g., face and appearance-based gait features) could also be carried out in fog clusters of servers devoted to user identification at a distance via remote acquisition. The distribution of tasks among the different levels of the cloud hierarchy, starting from the edge, can be adapted to the kind of application and of processing, according to the processing speed/response time requirements: the lower the level, the smaller the amount of processed data and, therefore, the faster the processing and the response time. According to the most popular hierarchy model [9], massive business analytics and intelligence (big data processing and data warehousing) mostly reside at the highest level. Of course, all transmission and storage of user data must assure both privacy (no unwanted disclosure of sensitive data due to unfair actions of the data handler) and security (data suitably protected against breaches both in transit and in data centres).
In the remainder of the paper, the architecture of the system is described (Section 2), together with four sample scenarios involving context and subject recognition. In Section 3, the subject, status and activity recognition methodology is described, together with experimental results confirming the feasibility of the proposed approach. Section 4 briefly discusses sensors in smart environments, and Section 5 concludes the paper.

2. System architecture

The overall schematic workflow of the proposed solution is depicted in Fig. 2. The system can be divided into four main parts. The first is represented by the user and the devices he/she wears. These can come from a broad collection of smart devices embedding acquisition sensors such as gyroscopes and accelerometers, or even small cameras for the acquisition of biometric traits. The second crucial component of the system is the raw data, consisting of biometric traits acquired during the user's daily activities as well as signal sequences measuring the movements of the user in the environment; these are, in turn, used as input for Machine Learning methods. Those methods (third component) are trained to address two main issues: (i) classifying the user status (i.e., sitting, standing, walking, running) [2] and (ii) inferring the action of the user on the mobile device. Combined with the biometric key of the user, this information (status/action/biokey) is transferred to the cloud services, which represent the fourth main component of the system architecture and are responsible for the activation of the proper services. The latter are chosen among those most appropriate for the recognised current condition of the user, as well as for his/her attitudes learned during usage. For example, when the phone rings, the user can be expected to perform the gesture of bringing the device to his/her ear. Such a gesture can activate the device sensors, e.g., the cameras and the accelerometer/gyroscope, so that the user's ear can be acquired to recognise the identity of who is answering the call. The gesture dynamics of answering a call has also been demonstrated to be a soft biometric trait usable for biometric recognition [10]. Putting these two pieces of information together, a biometric key of the user (abbreviated as biokey in Fig. 2) can be derived and used during the interaction with the cloud to trigger personalised services. This approach aims at empowering the typical desiderata of any cloud service offer: the service is tailored both to the generic preferences of the logging account and to the specific needs of the user who is currently using the device, according to his/her present state. In turn, this leads to an improvement of security and privacy, enabling biometrics to act as a kind of authentication service, recently defined as BaaS (Biometrics-as-a-Service) by Barra et al. [11], Castiglione et al. [12] and Talreja et al. [13]. Of course, the architecture in Fig. 2 implicitly entails a ubiquitous element instantiated by processes and services assuring the security and privacy of personal/sensitive data, both when stored and during transmission over the network [14]. To achieve this goal, both general and biometrics-specific strategies can be used. In particular, regarding BaaS, recent research and development efforts have been devoted to designing robust solutions that allow the outsourcing of services, e.g., biometric routines, while preserving privacy.
For instance, in biometric applications, personal sensitive data related to the identity of a person are directly exposed. General solutions to the security/privacy problem are based on encryption during data transmission and, further, on multiparty computation [15]. In the latter, neither the servers nor the clients know each other's data, and some form of encryption (homomorphic encryption) is used; each pair of parties is connected by a secure channel, and the communication is assumed to be synchronous. Such solutions are applicable to a cloud deployment and have also been specifically used for remote biometric identification [16,17]; the sketch below illustrates the underlying idea.
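As a toy illustration of this idea, not taken from the cited works, the following sketch uses the third-party phe Paillier library (an assumption: installed via `pip install phe`) to compute a linear matching score over an encrypted feature vector, so that the server never sees the template in the clear:

```python
from phe import paillier  # third-party Paillier library (assumed available)

# Client side: encrypt the biometric feature vector before sending it.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)
template = [0.12, -0.53, 0.88, 0.07]              # toy feature vector
encrypted = [public_key.encrypt(x) for x in template]

# Server side: Paillier supports ciphertext + ciphertext and
# ciphertext * plaintext, so a linear score can be computed against
# the server's plaintext weights without any decryption.
weights = [0.25, -0.40, 0.95, 0.10]               # toy reference model
encrypted_score = sum(w * e for w, e in zip(weights, encrypted))

# Client side: only the private-key owner can read the final score.
score = private_key.decrypt(encrypted_score)
print(f"matching score: {score:.4f}")
```

Richer operations, e.g., the squared Euclidean distance between two encrypted vectors, require interactive protocols, which is precisely where secure multiparty computation comes into play.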
Most proposals rely on server-centric models. As an alternative, [18] proposes a user-centric biometric authentication scheme (PassBio) that allows users to encrypt their own templates with a lightweight encryption scheme; in this way, the server never sees the actual data directly. It should also be considered that the mobile consumer devices possibly involved in the smart process may offer lower computational power than desktop equipment. To this aim, [19] addresses the problem of an efficient privacy-aware authentication (PAA) scheme for mobile Cloud Computing (MCC). The proposed authentication scheme uses an identity-based signature scheme to ensure resilience to service-provider-impersonation attacks. Last but not least, to face the possibility of biometric template theft, biometric cryptosystems and cancellable biometrics represent a biometric-specific solution [20].

2.1. Use cases and scenarios

In this section, a set of possible application scenarios of the system is reported. In particular, four different scenarios are described, according to the scope of the user involved in the actions. The first scenario is focused on a common life situation, in which the user is in a rush not to be late for a work meeting. This describes a borderline event in which the emergency is related to the user's needs, i.e., not being late on the first day of a new job. The second scenario shows the capability of the system to support people's daily habits in quiet situations; in such cases the system supports the user while shopping and acts as a recommender system. The third scenario depicts a real emergency case, where the system would need a previous event-specific training phase. Finally, the last scenario discusses a typical condition of elderly people, often living alone and therefore needing special care in case of a fall or when calling for specialised assistance services.

SCENARIO 1 Paul got a job in London, so he had to leave his birthplace Plymouth and rent a house in London. On his first working day, he leaves home and starts walking along the streets. The ECG recorder on his smartwatch carries out the authentication of Paul, based on his previous acquisitions and habit recordings. From now on, at fixed intervals, the ECG recorder keeps authenticating Paul: both the user status and the user activity modules are activated. Paul does not know exactly where his workplace is located and keeps walking through the streets of the city centre, following the rough indications gathered along his way. Suddenly, Paul realises that he is going to be late on his first day, so he quickens his pace. The smartwatch records the rush (a cue for an occurring hurry) and, as soon as Paul launches the Map application on his smartphone (bound 1-to-1 to the smartwatch), the car-pool cloud service gives him the location of the nearest car-sharing parking. Thanks to this, Paul manages to reach his workplace 2 min ahead of the set time.

SCENARIO 2 Eric and his wife Gina go shopping in a famous shopping district. Eric and Gina are recognised thanks to their gait signals, captured by the accelerometer/gyroscope sensors on their respective smartphones. There is no way of mixing up the signals, since each is captured and transmitted by a different sensor over a different channel.
The GPS tracks their path and, as they approach their preferred shops, a proper app connected to the cloud provides information on the ongoing offers available in those preferred shop(s).
Fig. 2. The schematic representation of the system workflow. The user, wearing the smart devices, activates a collection of cloud services by his/her actions, which are detected and recognised through machine learning methods. Various biometric data are also acquired while performing the recognised actions; these data are used for a user recognition that activates user-tailored services based on inferred user-specific attitudes and preferences.
SCENARIO 3 The fire-fighting unit is approaching the site of a disaster. Each member wears a smartwatch able to use the walking signal to confirm the identity of the person wearing it (and walking), so that a central control system deploys an application interface suited to the member's role (e.g., command or regular member): the same mobile device can record the vital signs (e.g., heartbeat), the walking speed, and also the accelerometer and gyroscope dynamics useful to recognise some conventional gestures (e.g., raising an arm and shaking it). The GPS tracks the individual paths, and an event management system continuously monitors the situation to provide guidelines and strategies. As the unit members spread over the relevant area and approach the most critical points, the interface of the emergency application on their mobile devices can change according to the context: if decisions and operations must be taken in a hurry (close to dangerous points), the interface is simplified to allow few clear operations; otherwise, it allows data browsing, analysis and enquiry. If a member detects a criticality, a raise-and-shake of the arm detected by the wearable sensors is recognised and an alert is transmitted to the closest companions, who receive the alarm and can reach the place following the itinerary instructions triggered on their mobile interfaces. If a member is in danger, wounded, or has fallen, this can be detected by the wearable sensors, the condition recognised, and a message likewise transmitted to the closest companions. If a member loses the smartwatch and another person picks it up, the system can recognise the intrusion and suspend any kind of communication towards that device.

SCENARIO 4 John is an elderly gentleman who lives alone. His relatives live in the same city, but have many family and job commitments. Notwithstanding this, they want to take care of John and have registered him to a cloud service that helps continuously checking for anomalous conditions. The service entails the use of a smartphone and of a smartwatch with suitable apps. For instance, an app installed on John's smartwatch uses the embedded accelerometer and gyroscope sensors to detect a possible sudden fall. If the owner does not stand up by himself in a couple
of seconds, the app raises an alarm, which is transmitted to a cloud server together with the GPS data. In this way, the persons registered to the service as "caretakers" for John are immediately alerted and know exactly the location of the fall. Further wearable sensors (possibly the smartwatch itself) record John's heart rate and possibly send out an alarm to his relatives if an anomaly arises that lasts beyond an acceptable time. As another example, using a simple interface on the smartphone, fed by his position, John can also find out where the closest assistance service is located, in case he needs non-critical help, and can be guided to reach it on foot or by public transportation. He is alerted by a sound alarm when the desired bus/taxi is approaching (of course, a cloud application tracks the transportation network).

3. Methodology

In the architecture depicted in Fig. 2, the first step entails the recognition of the subject. Afterwards, a further processing step is in charge of activating the cloud services, subject to the proper recognition of the user status and of the user activity. The latter are carried out on the device (edge), and only the result is transferred. The above mentioned authentication/recognition phases can be detailed as follows:
• Subject continuous recognition: the subject is continuously authenticated, according to the movements registered by his/her own personal device; the following recognition phases are activated only upon a positive result of this step.
• User status recognition: once the user is properly authenticated, his/her status is recognised (sitting, walking, running, standing, going down/up the stairs).
• User activity recognition: the recognition of the activity that the user is accomplishing on his/her smart device is combined with the user status, in order to activate the proper cloud service.

The continuous authentication of the subject is necessary both to guarantee the correct binding between the device and its owner and to load the status/activity profile of the correct subject; this profile is then exploited by the user status and activity recognition modules, which can be considered complementary to each other, since a specific service is launched/provided by their proper coupling. A minimal sketch of how these phases could be orchestrated on the edge device is given below.
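The following sketch shows one way the edge-side loop could chain the three phases and forward the resulting biokey/status/activity triple to the cloud; all module names (sensors, authenticator, status_model, activity_model, cloud) are hypothetical interfaces, not part of any released implementation:

```python
import time
from dataclasses import dataclass

@dataclass
class EdgeReport:
    """Payload periodically sent to the cloud (biokey plus context)."""
    biokey: str
    status: str      # e.g., "sitting", "walking"
    activity: str    # e.g., "reading", "writing", "map_navigation"

def edge_loop(sensors, authenticator, status_model, activity_model, cloud,
              period_s: float = 5.0) -> None:
    """Cyclic edge-side pipeline: authenticate first, then recognise
    status and activity, and only then contact the cloud."""
    while True:
        window = sensors.read_window(period_s)  # accelerometer/gyroscope slice
        biokey = authenticator.verify(window)   # None if owner not recognised
        if biokey is None:
            cloud.suspend_services()            # fail closed on auth failure
        else:
            cloud.send(EdgeReport(
                biokey=biokey,
                status=status_model.predict(window),
                activity=activity_model.predict(window),
            ))                                  # cloud picks the tailored service
        time.sleep(period_s)
```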
3.1. Dataset

In order to achieve both user authentication and status/activity recognition, the H-MOG dataset has been used [21]. The dataset has been collected from 100 volunteers in two different statuses (sitting and walking) and over three activities on a smart device (reading, writing and map navigation). Given its acquisition modalities, this dataset perfectly fits our needs, since it supports the three authentication purposes considered in our system; therefore, there is no need to build fake chimera subjects [22]. For each of the 100 subjects, 24 sessions have been considered; the data from each session were recorded as a set of CSV files, containing data related to:
• the activity performed (Activity.csv);
• accelerometer, gyroscope and magnetometer data related to the activity (Accelerometer.csv, Gyroscope.csv and Magnetometer.csv);
• a set of specific smartphone activity events, such as key presses (KeyPressEvent.csv), scrolls (ScrollEvent.csv), strokes (StrokeEvent.csv), touches (TouchEvent.csv and OneFingerTouchEvent.csv), and pinches (PinchEvent.csv).

Further details on the dataset are reported in [23]; a hedged example of loading these files is sketched below.
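As an illustration only (the exact H-MOG directory layout and column names may differ from what is assumed here), the per-session files can be loaded with pandas along these lines:

```python
from pathlib import Path
import pandas as pd

# Hypothetical layout: one folder per session containing the CSV files
# listed above.
SENSOR_FILES = ["Accelerometer.csv", "Gyroscope.csv", "Magnetometer.csv"]

def load_session(session_dir: str) -> dict[str, pd.DataFrame]:
    """Load the activity labels and the three inertial sensor streams
    of a single H-MOG session into DataFrames."""
    root = Path(session_dir)
    data = {"Activity": pd.read_csv(root / "Activity.csv")}
    for name in SENSOR_FILES:
        data[name.removesuffix(".csv")] = pd.read_csv(root / name)
    return data

session = load_session("H-MOG/100669/100669_session_1")  # hypothetical path
print({name: df.shape for name, df in session.items()})
```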
3.2. Time series aggregation method

The information described above is provided as time series; the problem arising with such a format is that different samples from the same subject generally have different sizes, since it is practically impossible to obtain exactly the same number of values in different acquisitions. Therefore, as often happens when dealing with such vectors, an aggregation approach is adopted. Each time series is replaced by a number of statistical (aggregate) descriptors, which are meant to provide as much information as possible on the original values and their distribution, so as to obtain the best possible accuracy when comparing vectors of values from different acquisitions. Let S1 = [s11, s12, …, s1M], S2 = [s21, s22, …, s2N], …, Sk = [sk1, sk2, …, skW] be k different time series with lengths M, N, …, W, respectively. The series are respectively replaced by:

• the mean value of the time series (mean(S1), mean(S2), …, mean(Sk));
• the median value (median(S1), median(S2), …, median(Sk));
• the skewness value (skewness(S1), skewness(S2), …, skewness(Sk));
• the maximum value (max(S1), max(S2), …, max(Sk));
• the minimum value (min(S1), min(S2), …, min(Sk));
• the standard deviation (std(S1), std(S2), …, std(Sk)).

It should be considered that some sensors, e.g., the accelerometer and the gyroscope, produce two or three series each, one per axis or dimension. Fig. 3 shows an example of the aggregation method applied to two different time series, each representing the values produced on the x-axis by two Stroke events from two different subjects. Each series is replaced by a 1×6 statistical feature vector. This yields a fixed-dimension feature vector, suitable for further analysis and classification; a minimal sketch of the aggregation is given below.
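The six descriptors map directly onto standard NumPy/SciPy calls; a minimal sketch of the aggregation (the function name is ours):

```python
import numpy as np
from scipy.stats import skew

def aggregate(series: np.ndarray) -> np.ndarray:
    """Replace a variable-length time series with the 1x6 descriptor
    vector of Section 3.2: mean, median, skewness, max, min, std."""
    return np.array([
        np.mean(series),
        np.median(series),
        skew(series),
        np.max(series),
        np.min(series),
        np.std(series),
    ])

# Two acquisitions of different lengths collapse to comparable vectors.
s1 = np.random.randn(312)   # e.g., x-axis of a Stroke event, subject A
s2 = np.random.randn(287)   # same feature, subject B
print(aggregate(s1).shape, aggregate(s2).shape)  # -> (6,) (6,)
```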
3.3. Subject continuous authentication

The approach exploited for authenticating the user recalls the method proposed in [24], which achieved continuous authentication on smartphones while texting or navigating a map. This section does not yet deal with the recognition of the activity the user is performing on his/her smart device (this is the topic of Section 3.4), but with the preliminary authentication of the user. This exploits the movement information of the worn smart device (smartphone, wrist band, wrist ECG recorder, etc.) in order to build a biokey to be used for the continuous verification of the user. For each user and each activity, the time series are aggregated as described above [25] (see also https://www.kaggle.com/talmanr/a-simple-features-dnn-using-tensorflow). For each subject, a binary classifier is built, which is in charge of deciding whether the current user is the actual owner of the device or not. This procedure follows the approach presented in [24], in which Linear Discriminant Analysis with Random Subspace support has been exploited [26]. The experimental results obtained with the 100 subjects of the H-MOG dataset are reported in terms of the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve and of the Equal Error Rate (EER). The AUC achieves a value of 0.764 (76.4%), with an EER value of 0.26. Fig. 4 shows the plots of the ROC, of FAR (False Acceptance Rate) vs. GAR (Genuine Acceptance Rate), and the Genuine/Impostor score distributions. A hedged reconstruction of this verification step is sketched below.
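A plausible reconstruction of this verification step with scikit-learn (version ≥ 1.2 assumed; the paper publishes no code, so the ensemble size and subspace ratio below are our assumptions) combines LDA base learners with random feature subspaces via bagging:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy stand-in for the aggregated features (e.g., 3 sensors x 3 axes x
# 6 descriptors = 54 columns): label 1 = owner, label 0 = impostor.
X = rng.normal(size=(400, 54))
y = np.r_[np.ones(200), np.zeros(200)].astype(int)
X[y == 1] += 0.4                        # give the owner class some structure

# Random Subspace method: each LDA sees a random 50% of the features.
verifier = BaggingClassifier(
    estimator=LinearDiscriminantAnalysis(),
    n_estimators=30,                    # assumption, not from the paper
    max_features=0.5,                   # subspace ratio (assumption)
    bootstrap=False,                    # random subspaces, not bootstraps
    random_state=0,
)
verifier.fit(X, y)
scores = verifier.predict_proba(X)[:, 1]
print(f"training AUC: {roc_auc_score(y, scores):.3f}")
```

The EER reported above is then read off the ROC at the operating point where the false acceptance and false rejection rates coincide.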
3.4. User status and activity recognition

Once the user is authenticated, the system is aware that the owner of the device is the one who is actually using it. The following step is in charge of recognising both the status and the activity involved, in order to launch the proper cloud service. To this aim, the status/activity pair for the specific user is considered. The six status/activity pairs that the system presently recognises are:

• Sitting + Reading;
• Walking + Reading;
• Sitting + Writing;
• Walking + Writing;
• Sitting + Map Navigation;
• Walking + Map Navigation.
For each activity, the information regarding the gestures performed on the smartphone is used; more in detail, we have:
• OneFingerTouchEvent: touch pressure and size of contact area on the screen;
• ScrollEvent: distance of the scroll along the x and y axes;
• StrokeEvent: velocity of the stroke along the x and y axes.

In conjunction with these, the x, y and z axes of the accelerometer, gyroscope and magnetometer, related to the movements of the smart device in space, are also taken into account. For each status/activity pair, the considered time series are preprocessed using the same approach as in the previous step. The trained classifier is a KNN in the coarse configuration [27]. The difference between the fine and the coarse configuration of KNN is that in the fine KNN the current sample is assigned to the cluster of its single nearest neighbour, whereas in the coarse one 100 neighbours are taken into account for the cluster choice; the distance between the current sample and the neighbours is computed by means of the Euclidean distance. Three balanced folds are used for cross-validation. For each fold, 80% of the samples of a user are used for training and 20% for testing, with no overlap; a minimal sketch of this classification step is given after the figure captions below.
Fig. 3. The described aggregation method applied to two time series representing the same feature (x-axis of a Stroke event) from two different subjects. Besides showing that the signals significantly differ from each other, it is also interesting to see that the aggregate values obtained by computing the statistical measures preserve this difference.
Fig. 4. On the left, the ROC curve for the experiments on the 100 subjects of the H-MOG dataset. On the right, the related FAR/FRR plot and the Genuine/Impostor distributions.
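The coarse-KNN stage can be reproduced along the following lines with scikit-learn; the fold construction is simplified here, since the paper specifies only the 80/20 split ratio and the use of three balanced folds:

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Toy stand-in for the aggregated feature vectors, labelled with the
# six status/activity classes of Section 3.4.
X = rng.normal(size=(900, 54))
y = rng.integers(0, 6, size=900)

splitter = StratifiedShuffleSplit(n_splits=3, test_size=0.2, random_state=1)
accuracies = []
for train_idx, test_idx in splitter.split(X, y):
    # "Coarse" configuration: 100 Euclidean nearest neighbours,
    # as opposed to k = 1 in the "fine" configuration.
    knn = KNeighborsClassifier(n_neighbors=100, metric="euclidean")
    knn.fit(X[train_idx], y[train_idx])
    accuracies.append(knn.score(X[test_idx], y[test_idx]))

print(f"mean accuracy over 3 folds: {np.mean(accuracies):.3f}")
```

On random toy data the accuracy will hover around chance; the 93.4% reported next refers to the real H-MOG features.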
The average testing accuracy over the three folds is 93.4%. Fig. 5 shows the confusion matrix over the six considered classes. For each class, the matrix shows the number of times the class has been properly predicted. The Positive Predictive Value (PPV) rate, the False Discovery Rate (FDR), the True Positive Rate (TPR) and the False Negative Rate (FNR) are also shown. An average True Positive Rate of 90.3% is obtained over the true classes, whereas an average of 94.8% of the predictions are correct. In terms of runtime performance, the computing time is a negligible factor: although training the Machine Learning models may take some time, the resulting operating version acts in near real time.

4. Smart environments and sensors

Nowadays, IoT systems and applications can rely on several sensors, which aim at gathering everyday-life information by means of acquisitions that are most of the time transparent to the users, while still respecting their privacy. A typical example is the use of smartwatches and smartphones, which, once given the appropriate permissions, can support the user in several ways.
Popular examples are apps aiming at improving the physical activity of the user, or at helping to schedule the appointments of the day in a more efficient manner. The ubiquitous nature of these sensors allows the development of many special-purpose applications, ranging from transparent continuous authentication to home automation and automotive. In this scenario, the services provided in smart cities can also take advantage of these sensors. Video surveillance camera systems are probably the most widespread smart city application; born as a means to guarantee the protection and safety of citizens and pedestrians [28,29], they have gradually been complemented with further sensors for detecting the flow of people through the streets [30], such as counter and proximity sensors. The former are often mounted on public transportation means and stations as well, while the latter are usually exploited at crossroads for detecting the presence of pedestrians. Proximity sensors have also become very useful in home automation, for detecting whether a person is entering a room; this kind of sensor is used, for example, when the home automation system has to adapt the illumination of a room according to the user's requirements. In smart home systems, humidity and temperature sensors are also often placed in specific rooms, in order to adjust humidity and heat levels.
Fig. 5. The confusion matrix obtained from the application of the KNN classifier in the coarse configuration over the H-MOG dataset.
5. Conclusions

The use of data captured from sensors embedded in smart personal devices can be counted among the possible evolution trends of cloud services. Proposals along this line are gaining increasing research as well as commercial attention. The main aim is to offer solutions and improvements for our daily life and habits. This paper has described a user-oriented supply architecture for cloud services. It establishes a two-way association between the activity on the smart device and the real-world activity. The built-in smart device sensors gather information that supports the recognition of the user and of the related status and mobile activity, which triggers a cloud service useful in the real-world scenario at hand. At the top of the architecture, a continuous authentication module performs the biometric recognition of the user, in order to load the proper user profile and the related activity features, preferences and behavioural patterns that can be useful to select the best service to offer from time to time. Experimental results witness the need for further research, but also the feasibility of the approach.

Declaration of competing interest

The authors declare that they have no conflicts of interest with respect to the authorship or the publication of this article.

References

[1] A. Schmidt, Implicit human computer interaction through context, Personal Technol. 4 (2–3) (2000) 191–199.
[2] A.F. Abate, M. Nappi, S. Barra, M. De Marsico, What are you doing while answering your smartphone?, in: 2018 24th International Conference on Pattern Recognition (ICPR), IEEE, 2018, pp. 3120–3125.
[3] J. Gubbi, R. Buyya, S. Marusic, M. Palaniswami, Internet of Things (IoT): A vision, architectural elements, and future directions, Future Gener. Comput. Syst. 29 (7) (2013) 1645–1660.
[4] R. Buyya, C.S. Yeo, S. Venugopal, J. Broberg, I. Brandic, Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility, Future Gener. Comput. Syst. 25 (6) (2009) 599–616.
[5] X. Wang, Y. Liu, J. Lu, F. Xiong, G. Zhang, TruGRC: Trust-aware group recommendation with virtual coordinators, Future Gener. Comput. Syst. 94 (2019) 224–236.
[6] J. Lu, L. Feng, J. Yang, M.M. Hassan, A. Alelaiwi, I. Humar, Artificial agent: The fusion of artificial intelligence and a mobile agent for energy-efficient traffic control in wireless sensor networks, Future Gener. Comput. Syst. (2018).
[7] Y. Zhang, H. Wen, F. Qiu, Z. Wang, H. Abbas, iBike: Intelligent public bicycle services assisted by data analytics, Future Gener. Comput. Syst. (2018).
[8] X. Li, J. Niu, S. Kumari, F. Wu, K.-K.R. Choo, A robust biometrics based three-factor authentication scheme for global mobility networks in smart city, Future Gener. Comput. Syst. 83 (2018) 607–618.
[9] A.N. Toosi, R.N. Calheiros, R. Buyya, Interconnected cloud computing environments: Challenges, taxonomy, and survey, ACM Comput. Surv. 47 (1) (2014) 7.
[10] S. Barra, G. Fenu, M. De Marsico, A. Castiglione, M. Nappi, Have you permission to answer this phone?, in: 2018 International Workshop on Biometrics and Forensics (IWBF), IEEE, 2018, pp. 1–7.
[11] S. Barra, K.-K.R. Choo, M. Nappi, A. Castiglione, F. Narducci, R. Ranjan, Biometrics-as-a-service: Cloud-based technology, systems, and applications, IEEE Cloud Comput. 5 (4) (2018) 33–37.
[12] A. Castiglione, K.-K.R. Choo, M. Nappi, F. Narducci, Biometrics in the cloud: Challenges and research opportunities, IEEE Cloud Comput. 4 (4) (2017) 12–17.
[13] V. Talreja, T. Ferrett, M.C. Valenti, A. Ross, Biometrics-as-a-service: A framework to promote innovative biometric recognition in the cloud, in: 2018 IEEE International Conference on Consumer Electronics (ICCE), IEEE, 2018, pp. 1–6.
[14] S. Barra, A. Castiglione, M. De Marsico, M. Nappi, K.-K.R. Choo, Cloud-based biometrics (biometrics as a service) for smart cities, nations, and beyond, IEEE Cloud Comput. 5 (5) (2018) 92–100.
[15] R. Cramer, I. Damgård, J.B. Nielsen, Multiparty computation from threshold homomorphic encryption, in: International Conference on the Theory and Applications of Cryptographic Techniques, Springer, 2001, pp. 280–300.
[16] J. Bringer, H. Chabanne, A. Patey, Privacy-preserving biometric identification using secure multiparty computation: An overview and recent trends, IEEE Signal Process. Mag. 30 (2) (2013) 42–52.
[17] M. Barni, G. Droandi, R. Lazzeretti, Privacy protection in biometric-based recognition systems: A marriage between cryptography and signal processing, IEEE Signal Process. Mag. 32 (5) (2015) 66–76.
[18] K. Zhou, J. Ren, PassBio: Privacy-preserving user-centric biometric authentication, IEEE Trans. Inform. Forens. Secur. 13 (12) (2018) 3050–3063.
[19] D. He, N. Kumar, M.K. Khan, L. Wang, J. Shen, Efficient privacy-aware authentication scheme for mobile cloud computing services, IEEE Syst. J. 12 (2) (2018) 1621–1631.
[20] C. Rathgeb, A. Uhl, A survey on biometric cryptosystems and cancelable biometrics, EURASIP J. Inform. Secur. 2011 (1) (2011) 3.
[21] Q. Yang, G. Peng, D.T. Nguyen, X. Qi, G. Zhou, Z. Sitová, P. Gasti, K.S. Balagani, A multimodal data set for evaluating continuous authentication performance in smartphones, in: Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems, ACM, 2014, pp. 358–359.
[22] S. Barra, A. Casanova, M. Fraschini, M. Nappi, Fusion of physiological measures for multimodal biometric systems, Multimedia Tools Appl. 76 (4) (2017) 4835–4847.
[23] Z. Sitová, J. Šeděnka, Q. Yang, G. Peng, G. Zhou, P. Gasti, K.S. Balagani, HMOG: New behavioral biometric features for continuous authentication of smartphone users, IEEE Trans. Inform. Forens. Secur. 11 (5) (2016) 877–892.
[24] S. Barra, M. Marras, G. Fenu, Continuous authentication on smartphone by means of periocular and virtual keystroke, in: International Conference on Network and System Security, Springer, 2018, pp. 212–220.
[25] M. Malekzadeh, R.G. Clegg, A. Cavallaro, H. Haddadi, Protecting sensory data against sensitive inferences, in: Proceedings of the 1st Workshop on Privacy By Design in Distributed Systems, ACM, 2018, p. 2.
[26] X. Zhang, Y. Jia, A linear discriminant analysis framework based on random subspace for face recognition, Pattern Recognit. 40 (9) (2007) 2585–2591.
[27] Y. Xu, Q. Zhu, Z. Fan, M. Qiu, Y. Chen, H. Liu, Coarse to fine k nearest neighbor classifier, Pattern Recognit. Lett. 34 (9) (2013) 980–986.
[28] J. Neves, F. Narducci, S. Barra, H. Proença, Biometric recognition in surveillance scenarios: a survey, Artif. Intell. Rev. 46 (4) (2016) 515–541.
[29] X. Wang, M. Wang, W. Li, Scene-specific pedestrian detection for static video surveillance, IEEE Trans. Pattern Anal. Mach. Intell. 36 (2) (2014) 361–374.
[30] E. Mathews, A. Poigne, An echo state network based pedestrian counting system using wireless sensor networks, in: 2008 International Workshop on Intelligent Solutions in Embedded Systems, IEEE, 2008, pp. 1–14.
Silvio Barra was born in 1985 in Battipaglia (Salerno, Italy). In 2009 and 2012 he received the B.Sc. degree (cum laude) and the M.Sc. degree (cum laude) in Computer Science from the University of Salerno. In 2017 he received the Ph.D. from the University of Cagliari. Currently he is an Assistant Professor at the University of Cagliari. He is a member of CVPL (formerly GIRPR). His main research interests include pattern recognition, biometrics, and video analysis and analytics. Contact him at [email protected].
Aniello Castiglione received the Ph.D. degree in Computer Science from the University of Salerno, Italy. Currently, he is an Assistant Professor at the University of Naples "Parthenope", Italy. He has published around 200 papers in international journals and conferences. He has served in the organisation (mainly as Programme Chair and TPC member) of around 230 international conferences. He has served as a reviewer for around 100 international journals and was the Managing Editor of two ISI-ranked international journals. He has acted as a Guest Editor of around 20 Special Issues and has served on around 10 Editorial Boards of international journals.
In 2014, one of his papers (published in the IEEE TDSC) was selected as "Featured Article" in the "IEEE Cybersecurity Initiative", while in 2018 another paper (published in IEEE Cloud Computing) was selected as "Featured Article" in the "IEEE Cloud Computing Initiative". His current research interests include Information Forensics, Digital Forensics, Security and Privacy on Cloud, Communication Networks, Applied Cryptography and Sustainable Computing. Contact him at [email protected].

Fabio Narducci was born in Caserta, Italy, in 1985. He received the Ph.D. degree in Computer Science at the Virtual Reality Lab (VRLab) of the University of Salerno in 2015. He is currently an Assistant Professor at the University of Naples "Parthenope" (Italy) and a research collaborator at the BIPLab of the University of Salerno (Italy). He is a CVPL (formerly GIRPR) member. His research interests include biometrics, gesture recognition, augmented reality and virtual environments, mobile and wearable computing, human computer interaction, and haptics. Contact him at [email protected].

Maria De Marsico is an associate professor of computer science at Sapienza University of Rome. Author of more than 170 papers in peer-reviewed international journals, international conferences and book chapters, she has been co-editor of international books. Her research interests include pattern recognition and image processing, focusing on biometrics, and human-computer interaction. She is a member of IEEE, ACM, the European Association for Biometrics (EAB), and the Institute for Systems and Technologies of Information, Control and Communication (INSTICC), and is the vice-president of the Italian chapter of the IEEE Biometrics Council. She is the editor-in-chief of the IEEE Biometrics Newsletter and is on the editorial board of the IEEE Biometrics Compendium. She is an area editor for Pattern Recognition Letters. Contact her at [email protected].
Michele Nappi received the laurea degree (cum laude) in Computer Science from the University of Salerno, Italy, in 1991, the M.Sc. degree in Information and Communication Technology from I.I.A.S.S. "E.R. Caianiello" in 1997, and the Ph.D. degree in Applied Mathematics and Computer Science from the University of Padova, Italy, in 1997. He is currently a full professor of Computer Science at the University of Salerno. Author of more than 160 papers in peer-reviewed international journals, international conferences and book chapters, he is co-editor of several international books. His research interests include pattern recognition, image processing, image compression and indexing, multimedia databases and biometrics, human computer interaction, and VR/AR. Dr. Nappi serves as associate editor and managing guest editor for several international journals, and is a TPC member of international conferences. He is team leader of the Biometric and Image Processing Lab (BIPLAB) and has received several international awards for his scientific and research activities. He is an IEEE Senior Member and a GIRPR/IAPR Member, and has been the President of the Italian Chapter of the IEEE Biometrics Council. In 2014 he was one of the founders of the spin-off BS3 (Biometric System for Security and Safety). Contact him at [email protected].